WorldWideScience

Sample records for accurate light-time correction

  1. Accurate adiabatic correction in the hydrogen molecule

    Pachucki, Krzysztof, E-mail: krp@fuw.edu.pl [Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw (Poland); Komasa, Jacek, E-mail: komasa@man.poznan.pl [Faculty of Chemistry, Adam Mickiewicz University, Umultowska 89b, 61-614 Poznań (Poland)

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present-day theoretical predictions for the rovibrational levels.
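
    For reference, the adiabatic (diagonal Born-Oppenheimer) correction evaluated in calculations of this kind has the generic expectation-value form below; this is the textbook expression, not necessarily the authors' working formula:

    $$\Delta E_{\mathrm{ad}}(R) \;=\; \sum_{A=1}^{2} \frac{1}{2M_A}\,\big\langle \nabla_{\!A}\,\psi_{\mathrm{el}} \,\big|\, \nabla_{\!A}\,\psi_{\mathrm{el}} \big\rangle,$$

    where M_A are the nuclear masses and ψ_el is the electronic wave function at fixed internuclear distance R.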

  2. Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions

    Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara

    2012-01-01

    This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…

  3. Significance of accurate diffraction corrections for the second harmonic wave in determining the acoustic nonlinearity parameter

    Jeong, Hyunjo; Zhang, Shuzeng; Barnard, Dan; Li, Xiongbing

    2015-09-01

    The accurate measurement of the acoustic nonlinearity parameter β for fluids or solids generally requires corrections for diffraction effects due to the finite-size geometry of the transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and therefore were not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations which were developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with errors of less than 5% when the exact second harmonic diffraction corrections are used; attenuation corrections are negligible when the attenuation coefficients depend linearly on frequency, α₂ ≃ 2α₁.
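
    For orientation, in the quasilinear regime the nonlinearity parameter is extracted from the fundamental and second-harmonic amplitudes A₁ and A₂; schematically (our generic statement — the precise MGB-based forms of the correction factors are what the paper derives):

    $$\beta \;=\; \frac{8\,A_2}{k^2 z\,A_1^2}\;\frac{D_1^2(z)}{D_2(z)}\;\frac{M_1^2(z)}{M_2(z)},$$

    where k is the fundamental wavenumber, z the propagation distance, and D_i and M_i the diffraction and attenuation corrections of the two harmonics; the placement of the factors assumes measured amplitudes are divided by their respective corrections.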

  4. Karect: accurate correction of substitution, insertion and deletion errors for next-generation sequencing data

    Allam, Amin

    2015-07-14

    Motivation: Next-generation sequencing generates large amounts of data affected by errors in the form of substitutions, insertions or deletions of bases. Error correction based on high-coverage information typically improves de novo assembly. Most existing tools can correct substitution errors only; some support insertions and deletions, but accuracy in many cases is low. Results: We present Karect, a novel error correction technique based on multiple alignment. Our approach supports substitution, insertion and deletion errors. It can handle non-uniform coverage as well as moderately covered areas of the sequenced genome. Experiments with data from Illumina, 454 FLX and Ion Torrent sequencing machines demonstrate that Karect is more accurate than previous methods, both in terms of correcting individual-base errors (up to 10% increase in accuracy gain) and post de novo assembly quality (up to 10% increase in NGA50). We also introduce an improved framework for evaluating the quality of error correction.

  5. Correction for solute/solvent interaction extends accurate freezing point depression theory to high concentration range.

    Fullerton, G D; Keener, C R; Cameron, I L

    1994-12-01

    The authors describe empirical corrections to ideally dilute expressions for the freezing point depression of aqueous solutions to arrive at new expressions accurate up to three molal concentration. The method assumes that non-ideality is due primarily to solute/solvent interactions, such that the correct free water mass is M_wc = M_w − I·M_s, where M_w is the mass of water in solution, M_s is the mass of solute, and I is an empirical solute/solvent interaction coefficient. The interaction coefficient is easily derived from the constant in the linear regression fit to the experimental plot of M_w/M_s as a function of 1/ΔT (inverse freezing point depression). The I-value, when substituted into the new thermodynamic expressions derived from the assumption of equivalent activity of water in solution and ice, provides accurate predictions of the freezing point depression (±0.05 °C) up to 2.5 molal concentration for all the test molecules evaluated: glucose, sucrose, glycerol and ethylene glycol. The concentration limit is the approximate monolayer water coverage limit for the solutes, which suggests that direct solute/solute interactions are negligible below this limit. This is contrary to the view of many authors due to the common practice of including hydration forces (a soft potential added to the hard-core atomic potential) in the interaction potential between solute particles. When this is recognized the two viewpoints are in fundamental agreement. PMID:7699200
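
    A minimal sketch of the regression step described above, with hypothetical data; per the abstract, I is read off as the constant (intercept) of the M_w/M_s versus 1/ΔT fit:

    ```python
    import numpy as np

    # Hypothetical calibration data for one solute (illustrative values only).
    delta_T = np.array([0.35, 0.70, 1.40, 2.10, 2.80])   # freezing point depression (deg C)
    mw_over_ms = np.array([30.4, 15.7, 8.4, 5.9, 4.7])   # water/solute mass ratio

    # Linear fit  Mw/Ms = slope * (1/deltaT) + I : I is the regression constant.
    slope, I = np.polyfit(1.0 / delta_T, mw_over_ms, 1)

    def free_water_mass(m_water, m_solute):
        """Corrected free water mass M_wc = M_w - I * M_s."""
        return m_water - I * m_solute

    print(f"I = {I:.2f}; M_wc for 100 g water + 20 g solute = {free_water_mass(100, 20):.1f} g")
    ```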

  7. Should scatter be corrected in both transmission and emission data for accurate quantitation in cardiac SPET?

    Fakhri, G.E. [Harvard Medical School, Boston, MA (United States). Dept. of Radiology; U494 INSERM, CHU Pitie-Salpetriere, Paris (France); Buvat, I.; Todd-Pokropek, A.; Benali, H. [U494 INSERM, CHU Pitie-Salpetriere, Paris (France); Almeida, P. [Servico de Medicina Nuclear, Hospital Garcia de Orta, Almada (Portugal); Bendriem, B. [CTI, Inc., Knoxville, TN (United States)

    2000-09-01

    Ideally, reliable quantitation in single-photon emission tomography (SPET) requires both emission and transmission data to be scatter free. Although scatter in emission data has been extensively studied, it is not well known how scatter in transmission data affects relative and absolute quantitation in reconstructed images. We studied SPET quantitative accuracy for different amounts of scatter in emission and transmission data using a Utah phantom and a cardiac Data Spectrum phantom including different attenuating media. Acquisitions over 180° were considered and three projection sets were derived: 20% window images and Jaszczak and triple-energy-window scatter-corrected projections. Transmission data were acquired using gadolinium-153 line sources in a 90-110 keV window using a narrow or wide scanning window. The transmission scans were performed either simultaneously with the emission acquisition or 24 h later. Transmission maps were reconstructed using filtered backprojection and μ values were linearly scaled from 100 to 140 keV. Attenuation-corrected images were reconstructed using a conjugate gradient minimal residual algorithm. The μ value underestimation varied between 4% with a narrow transmission window in soft tissue and 22% with a wide window in a material simulating bone. Scatter in the emission and transmission data had little effect on the uniformity of activity distribution in the left ventricle wall and in a uniformly hot compartment of the Utah phantom. Correcting the transmission data for scatter had no impact on contrast between a hot and a cold region or on signal-to-noise ratio (SNR) in regions with uniform activity distribution, while correcting the emission data for scatter improved contrast and reduced SNR. For absolute quantitation, the most accurate results (bias <4% in both phantoms) were obtained when reducing scatter in both emission and transmission data. In conclusion, trying to obtain the same amount of scatter in emission and transmission

  8. Accurate tracking of tumor volume change during radiotherapy by CT-CBCT registration with intensity correction

    Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon

    2016-03-01

    In this paper, we propose a CT-CBCT registration method to accurately predict the tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of the registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performances on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and every other fraction CBCTs to which the propagated GTV contours by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean ± std in cc) between the average of the manual segmentations and automatic segmentations are 3.70 ± 2.30 (B-spline), 1.25 ± 1.78 (demons), 0.93 ± 1.14 (optical flow), and 4.39 ± 3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
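
    The intensity-correction step can be illustrated as local histogram matching between CBCT and planning-CT patches; a minimal sketch using scikit-image, in which the patch size, non-overlapping tiling, and function names are our assumptions (the paper's method is iterative and coupled to the DIR):

    ```python
    import numpy as np
    from skimage.exposure import match_histograms

    def local_intensity_correction(cbct, ct, patch=64):
        """Map CBCT intensities to planning-CT statistics patch by patch."""
        corrected = np.zeros_like(cbct, dtype=float)
        for i in range(0, cbct.shape[0], patch):
            for j in range(0, cbct.shape[1], patch):
                tile = (slice(i, i + patch), slice(j, j + patch))
                corrected[tile] = match_histograms(cbct[tile], ct[tile])
        return corrected
    ```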

  9. Enabling accurate first-principle calculations of electronic properties with a corrected k dot p scheme

    Berland, Kristian

    2016-01-01

    A computationally inexpensive k·p-based interpolation scheme is developed that can extend the eigenvalues and momentum matrix elements of a sparsely sampled k-point grid into a densely sampled one. Dense sampling, often required to accurately describe transport and optical properties of bulk materials, can be computationally demanding, for instance, in combination with hybrid functionals within density functional theory (DFT) or with perturbative expansions beyond DFT such as the GW method. The scheme is based on solving the k·p Hamiltonian and extrapolating from multiple reference k points. It includes a correction term that reduces the number of empty bands needed and ameliorates band discontinuities. We show that the scheme can be used to generate accurate band structures, densities of states, and dielectric functions. Several examples are given, using traditional and hybrid functionals, with Si, TiNiSn, and Cu as test cases. We illustrate that d-electron and semi-core states, which are partic...
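
    The expansion underlying any such scheme is the standard k·p Hamiltonian about a reference point k₀ (the paper's correction term and multi-point extrapolation are not reproduced here):

    $$H_{mn}(\mathbf{k}) \;=\; \left[\varepsilon_m(\mathbf{k}_0) + \frac{\hbar^2\,|\mathbf{k}-\mathbf{k}_0|^2}{2m_0}\right]\delta_{mn} \;+\; \frac{\hbar}{m_0}\,(\mathbf{k}-\mathbf{k}_0)\cdot\mathbf{p}_{mn},$$

    where ε_m(k₀) are the reference eigenvalues and p_mn the momentum matrix elements taken from the sparse grid.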

  10. A Highly Accurate Classification of TM Data through Correction of Atmospheric Effects

    Bill Smith

    2009-07-01

    Atmospheric correction impacts on the accuracy of satellite image-based land cover classification are a growing concern among scientists. In this study, the principal objective was to enhance classification accuracy by minimizing contamination effects from aerosol scattering in Landsat TM images due to the variation in solar zenith angle corresponding to cloud-free earth targets. We have derived a mathematical model for aerosols to compute and subtract the aerosol scattering noise per pixel of different vegetation classes from TM images of Nicolet in north-eastern Wisconsin. An algorithm in C++ has been developed with iterations to simulate, model, and correct for the solar zenith angle influences on scattering. Results from a supervised classification with corrected TM images showed increased class accuracy for land cover types over uncorrected images. The overall accuracy of the supervised classification was improved substantially (between 13% and 18%). The z-score shows a significant difference between the corrected data and the raw data (between 4.0 and 12.0). Therefore, the atmospheric correction was essential for enhancing the image classification.

  11. Accurate mass error correction in liquid chromatography time-of-flight mass spectrometry based metabolomics

    Mihaleva, V.V.; Vorst, O.F.J.; Maliepaard, C.A.; Verhoeven, H.A.; Vos, de C.H.; Hall, R.D.; Ham, van R.C.H.J.

    2008-01-01

    Compound identification and annotation in (untargeted) metabolomics experiments based on accurate mass require the highest possible accuracy of the mass determination. Experimental LC/TOF-MS platforms equipped with a time-to-digital converter (TDC) give the best mass estimate for those mass signals

  12. Is SPECT or CT Based Attenuation Correction More Quantitatively Accurate for Dedicated Breast SPECT Acquired with Non-Traditional Trajectories?

    Perez, Kristy L.; Mann, Steve D.; Pachon, Jan H.; Madhav, Priti; Tornai, Martin P.

    2010-01-01

    Attenuation correction is necessary for SPECT quantification. There are a variety of methods to create attenuation maps. For dedicated breast SPECT imaging, it is unclear whether a SPECT- or CT-based attenuation map would provide the most accurate quantification, and whether or not segmenting the different tissue types will have an effect on the quantification. For these experiments, 99mTc diluted in methanol and water was filled into geometric and anthropomorphic breast phantoms and was image...

  13. Size-extensivity-corrected multireference configuration interaction schemes to accurately predict bond dissociation energies of oxygenated hydrocarbons.

    Oyeyemi, Victor B; Krisiloff, David B; Keith, John A; Libisch, Florian; Pavone, Michele; Carter, Emily A

    2014-01-28

    Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs. PMID:25669533
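
    A representative example of an a posteriori size-extensivity correction of the kind benchmarked here is the Davidson correction to MRCI energies (whether this exact variant is the one used is not stated in the abstract):

    $$E_{\mathrm{MRCI+Q}} \;\approx\; E_{\mathrm{MRCI}} \;+\; \big(1 - c_0^2\big)\,\big(E_{\mathrm{MRCI}} - E_{\mathrm{ref}}\big),$$

    where c₀ is the weight of the reference wave function in the correlated expansion.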

  14. Accurate Treatment of Large Supramolecular Complexes by Double-Hybrid Density Functionals Coupled with Nonlocal van der Waals Corrections.

    Calbo, Joaquín; Ortí, Enrique; Sancho-García, Juan C; Aragó, Juan

    2015-03-10

    In this work, we present a thorough assessment of the performance of some representative double-hybrid density functionals (revPBE0-DH-NL and B2PLYP-NL) as well as their parent hybrid and GGA counterparts, in combination with the most modern version of the nonlocal (NL) van der Waals correction to describe very large weakly interacting molecular systems dominated by noncovalent interactions. Prior to the assessment, an accurate and homogeneous set of reference interaction energies was computed for the supramolecular complexes constituting the L7 and S12L data sets by using the novel, precise, and efficient DLPNO-CCSD(T) method at the complete basis set limit (CBS). The correction of the basis set superposition error and the inclusion of the deformation energies (for the S12L set) have been crucial for obtaining precise DLPNO-CCSD(T)/CBS interaction energies. Among the density functionals evaluated, the double-hybrid revPBE0-DH-NL and B2PLYP-NL with the three-body dispersion correction provide remarkably accurate association energies very close to chemical accuracy. Overall, the NL van der Waals approach combined with proper density functionals can be seen as an accurate and affordable computational tool for the modeling of large weakly bonded supramolecular systems. PMID:26579747

  15. AN ACCURATE NEW METHOD OF CALCULATING ABSOLUTE MAGNITUDES AND K-CORRECTIONS APPLIED TO THE SLOAN FILTER SET

    We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the New York Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range
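
    Schematically, the method reduces to a quadratic color polynomial per filter and redshift bin (our notation, not the paper's):

    $$M \;=\; m \;-\; \mathrm{DM}(z) \;-\; K(z),\qquad K(z) \;\approx\; a(z) \;+\; b(z)\,C \;+\; c(z)\,C^2,$$

    where m is the observed magnitude, DM(z) the distance modulus, C the chosen observed color, and a, b, c the tabulated coefficients.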

  16. A Highly Accurate Classification of TM Data through Correction of Atmospheric Effects

    Bill Smith; Frank Scarpace; Widad Elmahboub

    2009-01-01

    Atmospheric correction impacts on the accuracy of satellite image-based land cover classification are a growing concern among scientists. In this study, the principal objective was to enhance classification accuracy by minimizing contamination effects from aerosol scattering in Landsat TM images due to the variation in solar zenith angle corresponding to cloud-free earth targets. We have derived a mathematical model for aerosols to compute and subtract the aerosol scattering noise per pixel o...

  17. A fast experimental beam hardening correction method for accurate bone mineral measurements in 3D μCT imaging system.

    Koubar, Khodor; Bekaert, Virgile; Brasse, David; Laquerriere, Patrice

    2015-06-01

    Bone mineral density plays an important role in the determination of bone strength and fracture risks. Consequently, it is very important to obtain accurate bone mineral density measurements. The microcomputerized tomography system provides 3D information about the architectural properties of bone. Quantitative analysis accuracy is decreased by the presence of artefacts in the reconstructed images, mainly due to beam hardening artefacts (such as cupping artefacts). In this paper, we introduce a new beam hardening correction method based on a postreconstruction technique performed with the use of off-line water and bone linearization curves experimentally calculated, aiming to take into account the nonhomogeneity in the scanned animal. In order to evaluate the mass correction rate, a calibration line was established to convert the reconstructed linear attenuation coefficients into bone masses. The presented correction method was then applied to a multimaterial cylindrical phantom and to mouse skeleton images. Mass correction rates of up to 18% between uncorrected and corrected images were obtained, and a marked improvement in the calculated mass of a mouse femur was observed. Results were also compared to those obtained when using the simple water linearization technique, which does not take into account the nonhomogeneity in the object. PMID:25818096
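
    A minimal sketch of a linearization-based beam-hardening correction under stated assumptions (hypothetical calibration numbers; the paper's method additionally combines separate water and bone curves to handle nonhomogeneity):

    ```python
    import numpy as np

    # Calibration: known thicknesses t (cm) of a reference material versus the
    # measured polychromatic projections p = -ln(I/I0) (illustrative values).
    t = np.array([0.0, 0.5, 1.0, 2.0, 3.0])
    p = np.array([0.00, 0.41, 0.78, 1.42, 1.95])

    mu_ref = 0.85  # assumed reference linear attenuation coefficient (cm^-1)
    coeffs = np.polyfit(p, mu_ref * t, 3)  # polynomial linearization curve

    def linearize(sinogram):
        """Map measured projections onto equivalent monochromatic ones
        before reconstruction, which removes cupping artefacts."""
        return np.polyval(coeffs, sinogram)
    ```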

  18. Accurate plutonium waste measurements using the 252Cf add-a- source technique for matrix corrections

    We have developed a new measurement technique to improve the accuracy and sensitivity of the nondestructive assay (NDA) of plutonium scrap and waste. The 200-l drum assay system uses the classical NDA method of counting passive-neutron coincidences from plutonium but has added the new features of ''add-a-source'' to improve the accuracy for matrix corrections and statistical techniques to improve the low-level detectability limits. The add-a-source technique introduces a small source of ²⁵²Cf (10⁻⁸ g) near the external surface of the sample drum. The drum perturbs the rate at which coincident neutrons from the ²⁵²Cf are counted. The perturbation provides the data to correct for the matrix and plutonium inside the drum. The errors introduced from matrix materials in 200-l drums have been reduced by an order of magnitude using the add-a-source technique. In addition, the add-a-source method can detect unexpected neutron-shielding material inside the drum that might hide the presence of special nuclear materials. The detectability limit of the new waste-drum assay system for plutonium is better than that of prior systems for actual waste materials. For the in-plant installation at a mixed-oxide fabrication facility, the detectability limit is 0.73 mg of ²⁴⁰Pu (or 2.3 mg of high-burnup plutonium) for a 15-min measurement. For a drum containing 100 kg of waste, this translates to about 7 nCi/g. This excellent sensitivity was achieved using a special low-background detector design, good overhead shielding, and statistical techniques in the software to selectively reduce the cosmic-ray neutron background.
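
    The principle of the matrix correction can be stated compactly: the perturbation of the ²⁵²Cf coincidence rate by the loaded drum is used to rescale the plutonium coincidence rate (our simplified statement; in practice an empirically calibrated function of the perturbation is used):

    $$R_{\mathrm{Pu}}^{\mathrm{corr}} \;=\; R_{\mathrm{Pu}}^{\mathrm{meas}} \times f\!\left(\frac{R_{\mathrm{Cf}}(\varnothing)}{R_{\mathrm{Cf}}(\mathrm{drum})}\right),$$

    where R_Cf(∅) and R_Cf(drum) are the ²⁵²Cf coincidence rates without and with the drum in place.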

  19. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise-bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome. The birefringence obtained by PS-OCT was therefore still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurements was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of the SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and by in vivo measurements of anterior and
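
    The estimator's core idea can be sketched as a table lookup over a Monte-Carlo pre-computed distribution; all names and the grid layout below are illustrative assumptions, not the authors' code:

    ```python
    import numpy as np

    def map_retardation(b_meas, snr_meas, pdf, b_true_ax, snr_ax, b_meas_ax):
        """Pick the true retardation that maximizes the pre-computed
        P(b_meas | b_true, SNR); pdf has shape (true, snr, measured)."""
        j = np.argmin(np.abs(snr_ax - snr_meas))   # nearest tabulated SNR bin
        k = np.argmin(np.abs(b_meas_ax - b_meas))  # nearest measured-value bin
        return b_true_ax[np.argmax(pdf[:, j, k])]  # MAP over the true-value axis
    ```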

  20. Accurate Evaluation of Ion Conductivity of the Gramicidin A Channel Using a Polarizable Force Field without Any Corrections.

    Peng, Xiangda; Zhang, Yuebin; Chu, Huiying; Li, Yan; Zhang, Dinglin; Cao, Liaoran; Li, Guohui

    2016-06-14

    Classical molecular dynamics (MD) simulation of membrane proteins faces significant challenges in accurately reproducing and predicting experimental observables such as ion conductance and permeability, due to its inability to precisely describe the electronic interactions in heterogeneous systems. In this work, the free energy profiles of K⁺ and Na⁺ permeating through the gramicidin A channel are characterized by using the AMOEBA polarizable force field with a total sampling time of 1 μs. Our results indicate that by explicitly introducing the multipole terms and polarization into the electrostatic potentials, the permeation free energy barrier of K⁺ through the gA channel is considerably reduced compared to the overestimated results obtained from the fixed-charge model. Moreover, the estimated maximum conductances, without any corrections, for both K⁺ and Na⁺ passing through the gA channel are much closer to the experimental results than those from any classical MD simulations, demonstrating the power of AMOEBA in investigating membrane proteins. PMID:27171823

  1. Accurate non-Born-Oppenheimer calculations of the complete pure vibrational spectrum of D2 with including relativistic corrections.

    Bubin, Sergiy; Stanke, Monika; Adamowicz, Ludwik

    2011-08-21

    In this work we report very accurate variational calculations of the complete pure vibrational spectrum of the D₂ molecule performed within the framework where the Born-Oppenheimer (BO) approximation is not assumed. After the elimination of the center-of-mass motion, D₂ becomes a three-particle problem in this framework. As the considered states correspond to the zero total angular momentum, their wave functions are expanded in terms of all-particle, one-center, spherically symmetric explicitly correlated Gaussian functions multiplied by even non-negative powers of the internuclear distance. The nonrelativistic energies of the states obtained in the non-BO calculations are corrected for the relativistic effects of the order of α² (where α = 1/c is the fine structure constant) calculated as expectation values of the operators representing these effects. PMID:21861559
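
    The perturbative treatment mentioned has the generic form (our schematic notation; Ĥ_rel collects the Breit-Pauli operators representing the α² effects):

    $$E \;\approx\; E_{\mathrm{nr}} \;+\; \alpha^2\,\big\langle \Psi_{\mathrm{nr}} \,\big|\, \hat{H}_{\mathrm{rel}} \,\big|\, \Psi_{\mathrm{nr}} \big\rangle,$$

    where Ψ_nr is the nonrelativistic non-BO wave function.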

  2. Correction.

    2015-11-01

    In the article by Heuslein et al, which published online ahead of print on September 3, 2015 (DOI: 10.1161/ATVBAHA.115.305775), a correction was needed. Brett R. Blackman was added as the penultimate author of the article. The article has been corrected for publication in the November 2015 issue. PMID:26490278

  3. Dixon sequence with superimposed model-based bone compartment provides highly accurate PET/MR attenuation correction of the brain

    Koesters, Thomas; Friedman, Kent P.; Fenchel, Matthias; Zhan, Yiqiang; Hermosillo, Gerardo; Babb, James; Jelescu, Ileana O.; Faul, David; Boada, Fernando E.; Shepherd, Timothy M.

    2016-01-01

    Simultaneous PET/MR of the brain is a promising new technology for characterizing patients with suspected cognitive impairment or epilepsy. Unlike CT though, MR signal intensities do not provide a direct correlate to PET photon attenuation correction (AC) and inaccurate radiotracer standard uptake value (SUV) estimation could limit future PET/MR clinical applications. We tested a novel AC method that supplements standard Dixon-based tissue segmentation with a superimposed model-based bone com...

  4. Correction.

    2016-02-01

    In the article by Guessous et al (Guessous I, Pruijm M, Ponte B, Ackermann D, Ehret G, Ansermot N, Vuistiner P, Staessen J, Gu Y, Paccaud F, Mohaupt M, Vogt B, Pechère-Bertschi A, Martin PY, Burnier M, Eap CB, Bochud M. Associations of ambulatory blood pressure with urinary caffeine and caffeine metabolite excretions. Hypertension. 2015;65:691–696. doi: 10.1161/HYPERTENSIONAHA.114.04512), which published online ahead of print December 8, 2014, and appeared in the March 2015 issue of the journal, a correction was needed. One of the author surnames was misspelled. Antoinette Pechère-Berstchi has been corrected to read Antoinette Pechère-Bertschi. The authors apologize for this error. PMID:26763012

  5. The importance of accurate repair of the orbicularis oris muscle in the correction of unilateral cleft lip.

    Park, C G; Ha, B

    1995-09-01

    Most of the attempts and efforts in cleft lip repair have been directed toward the skin incision. The importance of the orbicularis oris muscle repair has been emphasized in recent years. The well-designed skin incision with simple repair of the orbicularis oris muscle has produced a considerable improvement in the appearance of the upper lip; however, the repaired upper lip seems to change its shape abnormally in motion and has a tendency to be distorted with age if the orbicularis oris muscle is not repaired precisely and accurately. Following the dissection of the normal upper lip and unilateral cleft lip in cadavers, we could find two different components in the orbicularis oris muscle, a superficial and a deep component. One is a retractor and the other is a constrictor of the lip. They have antagonistic actions to each other during lip movement. We also can identify these two different components of the muscle in the cleft lip patient during operation. We thought inaccurate and mixed connection between these two different functional components could make the repaired lip distorted and unbalanced, which would get worse during growth. By identification and separate repair of the two different muscular components of the orbicularis oris muscle (i.e., repair of the superficial and deep components on the lateral side with the corresponding components on the medial side), better results in the dynamic and three-dimensional configuration of the upper lip can be achieved, and unfavorable distortion can be avoided as the patients grow.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:7652051

  6. Radiochromic film dosimetry with flatbed scanners: A fast and accurate method for dose calibration and uniformity correction with single film exposure

    Film dosimetry is an attractive tool for dose distribution verification in intensity modulated radiotherapy (IMRT). A critical aspect of radiochromic film dosimetry is the scanner used for the readout of the film: the output needs to be calibrated in dose response and corrected for pixel-value- and spatially-dependent nonuniformity caused by light scattering; these procedures can take a long time. A method for fast and accurate calibration and uniformity correction for radiochromic film dosimetry is presented: a single film exposure is used for both calibration and correction. Gafchromic EBT films were read with two flatbed charge-coupled device scanners (Epson V750 and 1680Pro). The accuracy of the method was investigated with specific dose patterns and an IMRT beam. Comparisons with a two-dimensional array of ionization chambers using an 18×18 cm² open field and an inverse pyramid dose pattern show an increase in the percentage of points which pass the gamma analysis (tolerance parameters of 3% and 3 mm), from 55% and 64% for the 1680Pro and V750 scanners, respectively, to 94% for both scanners for the 18×18 cm² open field, and from 76% and 75% to 91% for the inverse pyramid pattern. Application to an IMRT beam also shows better gamma index results, from 88% and 86% for the two scanners, respectively, to 94% for both. The number of points and dose range considered for correction and calibration appear to be appropriate for use in IMRT verification. The method proved to be fast and to properly correct the nonuniformity, and has been adopted for routine clinical IMRT dose verification.

  7. Correction

    2002-01-01

    The photo on the second page of the Bulletin n°48/2002, from 25 November 2002, illustrating the article «Spanish Visit to CERN» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.   The Spanish delegation, accompanied by Spanish scientists at CERN, also visited the LHC superconducting magnet test hall (photo). From left to right: Felix Rodriguez Mateos of CERN LHC Division, Josep Piqué i Camps, Spanish Minister of Science and Technology, César Dopazo, Director-General of CIEMAT (Spanish Research Centre for Energy, Environment and Technology), Juan Antonio Rubio, ETT Division Leader at CERN, Manuel Aguilar-Benitez, Spanish Delegate to Council, Manuel Delfino, IT Division Leader at CERN, and Gonzalo León, Secretary-General of Scientific Policy to the Minister.

  8. Practical self-absorption correction method for various environmental samples in a 1000 cm³ Marinelli container to perform accurate radioactivity determination with HPGe detectors

    The self-absorption of large volume samples is an important issue in gamma-ray spectrometry using high purity germanium (HPGe) detectors. After the Fukushima Daiichi Nuclear Power Plant accident, a large number of radioactivity measurements of various environmental samples have been performed using 1000 cm³ containers. This study uses Monte Carlo simulations and a semiempirical function to address the self-absorption correction factor for the samples in the 1000 cm³ Marinelli container that has been widely marketed after the accident. The presented factor was validated by experiments using test sources and was shown to be accurate for a wide range of linear attenuation coefficients μ (0.05-1.0 cm⁻¹). This suggests that the proposed correction factor is applicable to almost all environmental samples. In addition, an interlaboratory comparison where participants were asked to determine the radioactivity of a certified reference material demonstrated that the proposed correction factor can be used with HPGe detectors of different crystal sizes. (author)
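
    For orientation, the familiar slab-geometry factor below illustrates how self-absorption grows with the linear attenuation coefficient μ; the paper's semiempirical function for the Marinelli geometry is more involved and is fitted to Monte Carlo results:

    $$C_{\mathrm{self}} \;=\; \frac{\mu t}{1 - e^{-\mu t}},$$

    where t is the sample thickness traversed toward the detector.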

  9. Extension of the B3LYP - Dispersion-Correcting Potential Approach to the Accurate Treatment of both Inter- and Intramolecular Interactions

    DiLabio, Gino A; Torres, Edmanuel

    2013-01-01

    We recently showed that dispersion-correcting potentials (DCPs), atom-centered Gaussian-type functions developed for use with B3LYP (J. Phys. Chem. Lett. 2012, 3, 1738-1744), greatly improved the ability of the underlying functional to predict non-covalent interactions. However, the application of B3LYP-DCP to the β-scission of the cumyloxyl radical led to a calculated barrier height that was over-estimated by ca. 8 kcal/mol. We show in the present work that the source of this error arises from the previously developed carbon-atom DCPs, which erroneously alter the electron density in the C-C covalent-bonding region. In this work, we present a new C-DCP with a form that was expected to influence the electron density farther from the nucleus. Tests of the new C-DCP, with previously published H-, N- and O-DCPs, with B3LYP-DCP/6-31+G(2d,2p) on the S66, S22B, HSG-A, and HC12 databases of non-covalently interacting dimers showed that it is one of the most accurate methods available for treating intermolecular i...

  10. Light time calculations in high precision deep space navigation

    Bertone, Stefano; Lainey, Valéry

    2013-01-01

    During the last decade, the precision of spacecraft tracking has constantly improved. With the recent discovery of a few astrometric anomalies, such as the Pioneer and Earth flyby anomalies, it becomes important to deeply analyze the operative modeling currently adopted in Deep Space Navigation (DSN). Our study shows that some traditional approximations can lead to neglecting tiny terms that could have consequences for the orbit determination of a probe in specific configurations, such as during an Earth flyby. Here we suggest a way to improve the light time calculation used for probe tracking.
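
    The classic fixed-point iteration at the core of any light time calculation is easy to state; a minimal Newtonian sketch (the relativistic terms whose neglect the paper scrutinizes are omitted, and the ephemeris callable is an assumption):

    ```python
    from math import dist  # Euclidean distance (Python >= 3.8)

    C_KM_S = 299_792.458  # speed of light in km/s

    def light_time(r_obs, probe_pos, t_recv, tol=1e-12, max_iter=10):
        """Solve tau = |r_obs - r_probe(t_recv - tau)| / c by fixed-point
        iteration; r_obs is the observer position (km) at reception time
        t_recv (s), and probe_pos(t) is any ephemeris callable returning km."""
        tau = 0.0
        for _ in range(max_iter):
            tau_new = dist(r_obs, probe_pos(t_recv - tau)) / C_KM_S
            if abs(tau_new - tau) < tol:
                break
            tau = tau_new
        return tau
    ```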

  11. Toward accurate thermochemistry of the ²⁴MgH, ²⁵MgH, and ²⁶MgH molecules at elevated temperatures: Corrections due to unbound states

    Szidarovszky, Tamás [MTA-ELTE Research Group on Complex Chemical Systems, P.O. Box 32, H-1518 Budapest 112 (Hungary); Császár, Attila G., E-mail: csaszar@chem.elte.hu [MTA-ELTE Research Group on Complex Chemical Systems, P.O. Box 32, H-1518 Budapest 112 (Hungary); Laboratory on Molecular Structure and Dynamics, Institute of Chemistry, Eötvös University, Pázmány Péter sétány 1/A, H-1117 Budapest (Hungary)

    2015-01-07

    The total partition functions Q(T) and their first two moments Q′(T) and Q″(T), together with the isobaric heat capacities C_p(T), are computed a priori for three major MgH isotopologues on the temperature range of T = 100–3000 K using the recent highly accurate potential energy curve, spin-rotation, and non-adiabatic correction functions of Henderson et al. [J. Phys. Chem. A 117, 13373 (2013)]. Nuclear motion computations are carried out on the ground electronic state to determine the (ro)vibrational energy levels and the scattering phase shifts. The effect of resonance states is found to be significant above about 1000 K and it increases with temperature. Even very short-lived states, due to their relatively large number, have significant contributions to Q(T) at elevated temperatures. The contribution of scattering states is around one fourth of that of resonance states but opposite in sign. Uncertainty estimates are given for the possible error sources, suggesting that all computed thermochemical properties have an accuracy better than 0.005% up to 1200 K. Between 1200 and 2500 K, the uncertainties can rise to around 0.1%, while between 2500 K and 3000 K, a further increase to 0.5% might be observed for Q″(T) and C_p(T), principally due to the neglect of excited electronic states. The accurate thermochemical data determined are presented in the supplementary material for the three isotopologues ²⁴MgH, ²⁵MgH, and ²⁶MgH at 1 K increments. These data, which differ significantly from older standard data, should prove useful for astronomical models incorporating thermodynamic properties of these species.

  12. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    Rocklin, Gabriel J. [Department of Pharmaceutical Chemistry, University of California San Francisco, 1700 4th St., San Francisco, California 94143-2550, USA and Biophysics Graduate Program, University of California San Francisco, 1700 4th St., San Francisco, California 94143-2550 (United States); Mobley, David L. [Departments of Pharmaceutical Sciences and Chemistry, University of California Irvine, 147 Bison Modular, Building 515, Irvine, California 92697-0001, USA and Department of Chemistry, University of New Orleans, 2000 Lakeshore Drive, New Orleans, Louisiana 70148 (United States); Dill, Ken A. [Laufer Center for Physical and Quantitative Biology, 5252 Stony Brook University, Stony Brook, New York 11794-0001 (United States); Hünenberger, Philippe H., E-mail: phil@igc.phys.chem.ethz.ch [Laboratory of Physical Chemistry, Swiss Federal Institute of Technology, ETH, 8093 Zürich (Switzerland)

    2013-11-14

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol⁻¹) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non

  13. Impact of aerosols on the OMI tropospheric NO2 retrievals over industrialized regions: how accurate is the aerosol correction of cloud-free scenes via a simple cloud model?

    Chimot, J.; Vlemmix, T.; Veefkind, J. P.; de Haan, J. F.; Levelt, P. F.

    2016-02-01

    The Ozone Monitoring Instrument (OMI) has provided daily global measurements of tropospheric NO2 for more than a decade. Numerous studies have drawn attention to the complexities related to measurements of tropospheric NO2 in the presence of aerosols. Fine particles affect the OMI spectral measurements and the length of the average light path followed by the photons. However, they are not explicitly taken into account in the current operational OMI tropospheric NO2 retrieval chain (DOMINO - Derivation of OMI tropospheric NO2) product. Instead, the operational OMI O2 - O2 cloud retrieval algorithm is applied both to cloudy and to cloud-free scenes (i.e. clear sky) dominated by the presence of aerosols. This paper describes in detail the complex interplay between the spectral effects of aerosols in the satellite observation and the associated response of the OMI O2 - O2 cloud retrieval algorithm. Then, it evaluates the impact on the accuracy of the tropospheric NO2 retrievals through the computed Air Mass Factor (AMF) with a focus on cloud-free scenes. For that purpose, collocated OMI NO2 and MODIS (Moderate Resolution Imaging Spectroradiometer) Aqua aerosol products are analysed over the strongly industrialized East China area. In addition, aerosol effects on the tropospheric NO2 AMF and the retrieval of OMI cloud parameters are simulated. Both the observation-based and the simulation-based approach demonstrate that the retrieved cloud fraction increases with increasing Aerosol Optical Thickness (AOT), but the magnitude of this increase depends on the aerosol properties and surface albedo. This increase is induced by the additional scattering effects of aerosols which enhance the scene brightness. The decreasing effective cloud pressure with increasing AOT primarily represents the shielding effects of the O2 - O2 column located below the aerosol layers. The study cases show that the aerosol correction based on the implemented OMI cloud model results in biases

  14. New analysis of the light time effect in TU Ursae Majoris

    Liška, J.; Skarka, M.; Mikulášek, Z.; Zejda, M.; Chrastina, M.

    2016-04-01

    Context. Recent statistical studies prove that the percentage of RR Lyrae pulsators that are located in binaries or multiple stellar systems is considerably lower than might be expected. This can be better understood from an in-depth analysis of individual candidates. We investigate in detail the light time effect of the most probable binary candidate TU UMa. This is complicated because the pulsation period shows secular variation. Aims: We model the possible light time effect of TU UMa using a new code applied on previously available and newly determined maxima timings to confirm binarity and refine the parameters of the orbit of the RRab component in the binary system. The binary hypothesis is also tested using radial velocity measurements. Methods: We used a new approach to determine brightness maxima timings based on template fitting. This can also be used on sparse or scattered data. This approach was successfully applied on measurements from different sources. To determine the orbital parameters of the double star TU UMa, we developed a new code to analyse the light time effect that also includes secular variation in the pulsation period. Its usability was successfully tested on CL Aur, an eclipsing binary with mass transfer in a triple system that shows similar changes in the O-C diagram. Since orbital motion would cause systematic shifts in mean radial velocities (dominated by pulsations), we computed and compared our model with centre-of-mass velocities. They were determined using high-quality templates of radial velocity curves of RRab stars. Results: Maxima timings adopted from the GEOS database (168) together with those newly determined from sky surveys and new measurements (85) were used to construct an O-C diagram spanning almost five proposed orbital cycles. This data set is three times larger than the data sets used by previous authors. Modelling of the O-C dependence resulted in a 23.3-yr orbital period, which translates into a minimum mass of the second component of
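
    In standard notation, an O-C model of this kind combines a quadratic ephemeris (secular period change) with Irwin's light-time term (our rendering of the usual formula, not necessarily the authors' exact parametrization):

    $$(O-C)(E) \;=\; \Delta T_0 + \Delta P\,E + \tfrac{1}{2}\,P\dot{P}\,E^2 \;+\; \frac{a_1 \sin i}{c}\left[\frac{1-e^2}{1+e\cos\nu}\,\sin(\nu+\omega) + e\sin\omega\right],$$

    where E is the epoch number, a₁ sin i the projected semi-major axis of the pulsator's orbit, e the eccentricity, ω the argument of periastron, and ν the true anomaly.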

  15. Nighttime lights time series of tsunami damage, recovery, and economic metrics in Sumatra, Indonesia.

    Gillespie, Thomas W; Frankenberg, Elizabeth; Chum, Kai Fung; Thomas, Duncan

    2014-01-01

    On 26 December 2004, a magnitude 9.2 earthquake off the west coast of northern Sumatra, Indonesia, resulted in 160,000 Indonesians killed. We examine Defense Meteorological Satellite Program-Operational Linescan System (DMSP-OLS) nighttime light imagery brightness values for 307 communities in the Study of the Tsunami Aftermath and Recovery (STAR), a household survey in Sumatra from 2004 to 2008. We examined nighttime light time series relating annual brightness to the extent of damage and to economic metrics collected from STAR households and aggregated to the community level. There were significant changes in brightness values from 2004 to 2008, with a significant drop in brightness values in 2005 due to the tsunami and a return to pre-tsunami nighttime light values in 2006 for all damage zones. There were significant relationships between nighttime imagery brightness and per capita expenditures, and spending on energy and on food. Results suggest that Defense Meteorological Satellite Program nighttime light imagery can be used to capture the impacts of and recovery from the tsunami and other natural disasters, and to estimate time series economic metrics at the community level in developing countries. PMID:25419471

  16. New Analysis of the Light Time Effect in TU Ursae Majoris

    Liska, Jiri; Mikulasek, Zdenek; Zejda, Miloslav; Chrastina, Marek

    2015-01-01

    This paper attempts to model the possible Light Time Effect of TU UMa using a new code applied on formerly available and newly determined maxima timings, in order to confirm binarity and refine the parameters of the orbit of the RRab component in the binary system. The binary hypothesis is further tested using radial velocity measurements. A new approach for the determination of maxima timings based on template fitting, which is also usable on sparse or scattered data, is described. This approach was successfully applied on measurements from different sources. To determine the orbital parameters of the double star TU UMa, we developed a new code for the analysis of the LiTE that also involves secular variation in the pulsation period. Its usability was successfully tested on CL Aur, an eclipsing binary with mass transfer in a triple system showing similar changes in the O-C diagram. Since orbital motion would cause systematic shifts in mean radial velocities (dominated by pulsations), we computed and compared our model with center-of-mass veloci...

  17. ACE: accurate correction of errors using K-mer tries

    Sheikhizadeh Anari, S.; Ridder, de D.

    2015-01-01

    The quality of high-throughput next-generation sequencing data significantly influences the performance and memory consumption of assembly and mapping algorithms. The most ubiquitous platform, Illumina, mainly suffers from substitution errors. We have developed a tool, ACE, based on K-mer tries to c

  18. Accurate ab initio spin densities

    Boguslawski, Katharina; Legeza, Örs; Reiher, Markus

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as the basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CA...

  19. Accurate Finite Difference Algorithms

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10⁶) periods of propagation with eight grid points per wavelength.
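
    As a generic illustration of the high-order explicit differencing such schemes build on (not Goodrich's actual algorithm), a fourth-order central difference on a periodic grid:

    ```python
    import numpy as np

    def ddx4(u, dx):
        """Fourth-order central difference du/dx on a periodic 1-D grid:
        (-u[i+2] + 8*u[i+1] - 8*u[i-1] + u[i-2]) / (12*dx)."""
        return (-np.roll(u, -2) + 8 * np.roll(u, -1)
                - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12 * dx)
    ```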

  20. Accurate backgrounds to Higgs production at the LHC

    Kauer, N

    2007-01-01

    Corrections of 10-30% for backgrounds to the H → WW → ℓ⁺ℓ⁻ + missing-p_T search in vector boson and gluon fusion at the LHC are reviewed to make the case for precise and accurate theoretical background predictions.

  1. Corrective actions

    The variety of corrective actions which have been attempted at many radioactive waste disposal sites points to less than ideal performance by present-day standards at many closed and presently-operating sites. In humid regions, most of the problems have encompassed some kind of water intrusion into the buried waste. In arid regions, the problems have centered on trench subsidence and intrusion by plant roots and animals. It is overwhelmingly apparent that any protective barrier for the buried waste, whether for water or biological intrusion, will depend on stable support from the underlying burial trenches. Trench subsidence must be halted, prevented, or circumscribed in some manner to assure this necessary long-term support. Final corrective actions will differ considerably from site to site, depending on unique geological, pedological, and meteorological environments. In the meantime, many of the shorter-term corrective actions described in this chapter can be implemented as immediate needs dictate

  2. PHOTOMETRIC PROPERTIES OF SELECTED ALGOL-TYPE BINARIES. III. AL GEMINORUM AND BM MONOCEROTIS WITH POSSIBLE LIGHT-TIME ORBITS

    Yang, Y.-G.; Dai, H.-F. [School of Physics and Electronic Information, Huaibei Normal University, 235000 Huaibei, Anhui Province (China); Li, H.-L., E-mail: yygcn@163.com [National Astronomical Observatories, Chinese Academy of Sciences, 100012 Beijing (China)

    2012-01-15

    We present the CCD photometry of two Algol-type binaries, AL Gem and BM Mon, observed from 2008 November to 2011 January. With the updated Wilson-Devinney program, photometric solutions were deduced from their EA-type light curves. The mass ratios and fill-out factors of the primaries are found to be q_ph = 0.090(±0.005) and f₁ = 47.3%(±0.3%) for AL Gem, and q_ph = 0.275(±0.007) and f₁ = 55.4%(±0.5%) for BM Mon, respectively. By analyzing the O-C curves, we discovered that the periods of AL Gem and BM Mon change in a quasi-sinusoidal mode, which may possibly result from the light-time effect via the presence of a third body. Periods, amplitudes, and eccentricities of the light-time orbits are 78.83(±1.17) yr, 0.0204(±0.0007) d, and 0.28(±0.02) for AL Gem and 97.78(±2.67) yr, 0.0175(±0.0006) d, and 0.29(±0.02) for BM Mon, respectively. Assumed to be in a coplanar orbit with the binary, the masses of the third bodies would be 0.29 M☉ for AL Gem and 0.26 M☉ for BM Mon. This kind of additional companion can extract angular momentum from the close binary orbit, and such processes may play an important role in multiple star evolution.
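
    The quoted third-body masses follow from the standard mass function of a light-time orbit, with a₁₂ sin i obtained from the O-C amplitude (a textbook relation, not specific to this paper):

    $$f(m_3) \;=\; \frac{(m_3 \sin i)^3}{(m_1+m_2+m_3)^2} \;=\; \frac{4\pi^2}{G}\,\frac{(a_{12}\sin i)^3}{P_3^2},$$

    where P₃ is the light-time orbital period and m₁, m₂ are the masses of the eclipsing pair.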

  3. Prisons and Correctional Facilities, LAGIC is consulting with local parish GIS departments to create spatially accurate point and polygons data sets including the locations and building footprints of schools, churches, government buildings, law enforcement and emergency response offices, pha, Published in 2011, 1:12000 (1in=1000ft) scale, Louisiana Geographic Information Center.

    NSGIC GIS Inventory (aka Ramona) — This Prisons and Correctional Facilities dataset, published at 1:12000 (1in=1000ft) scale, was produced all or in part from Orthoimagery information as of 2011. It...

  4. Towards accurate emergency response behavior

    Nuclear reactor operator emergency response behavior has persisted as a training problem through lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge based behavior, are described in detail

  5. Deconvolution with correct sampling

    Magain, P; Sohy, S

    1997-01-01

    A new method for improving the resolution of astronomical images is presented. It is based on the principle that sampled data cannot be fully deconvolved without violating the sampling theorem. Thus, the sampled image should not be deconvolved by the total point spread function, but by a narrower function chosen so that the resolution of the deconvolved image is compatible with the adopted sampling. Our deconvolution method gives results which are markedly superior to those of other existing techniques: in particular, it does not produce ringing around point sources superimposed on a smooth background. Moreover, it allows accurate astrometry and photometry of crowded fields. These improvements are a consequence of both the correct treatment of sampling and the recognition that the most probable astronomical image is not a flat one. The method is also well adapted to the optimal combination of different images of the same object, as can be obtained, e.g., via adaptive optics techniques.
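
    A minimal frequency-domain sketch of the core idea: deconvolve by only part of the PSF so the result (the object convolved with a narrower target PSF) still respects the sampling. The Gaussian PSFs and the regularization constant are illustrative assumptions, not the authors' algorithm:

    ```python
    import numpy as np

    def gaussian_psf(shape, sigma):
        y, x = np.indices(shape)
        cy, cx = (shape[0] - 1) / 2, (shape[1] - 1) / 2
        g = np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * sigma**2))
        return g / g.sum()

    def partial_deconvolve(image, sigma_total, sigma_target, eps=1e-3):
        """Deconvolve from PSF_total down to a narrower PSF_target."""
        T = np.fft.fft2(np.fft.ifftshift(gaussian_psf(image.shape, sigma_total)))
        R = np.fft.fft2(np.fft.ifftshift(gaussian_psf(image.shape, sigma_target)))
        # Multiplying by R/T deconvolves only the kernel s defined by
        # PSF_total = PSF_target * s, with Wiener-like damping via eps.
        H = R * np.conj(T) / (np.abs(T)**2 + eps)
        return np.real(np.fft.ifft2(np.fft.fft2(image) * H))
    ```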

  6. Accurate determination of antenna directivity

    Dich, Mikael

    1997-01-01

    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power...
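
    A hedged sketch of the underlying computation (not the paper's formula): estimate the total radiated power by quadrature of sampled far-field power density on a regular angular grid, then form the directivity:

    ```python
    import numpy as np

    def directivity(U, theta, phi):
        """U: power density samples [W/sr] on a (len(theta), len(phi)) grid."""
        # integrate U(theta, phi) sin(theta) dtheta dphi with the trapezoid rule
        integrand = U * np.sin(theta)[:, None]
        p_rad = np.trapz(np.trapz(integrand, phi, axis=1), theta, axis=0)
        return 4 * np.pi * U.max() / p_rad

    # sanity check with an isotropic radiator: D should be ~1
    theta = np.linspace(0, np.pi, 181)
    phi = np.linspace(0, 2 * np.pi, 361)
    print(directivity(np.ones((181, 361)), theta, phi))  # ~1.0
    ```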

  7. Accurate shear measurement with faint sources

    Zhang, Jun; Foucaud, Sebastien [Center for Astronomy and Astrophysics, Department of Physics and Astronomy, Shanghai Jiao Tong University, 955 Jianchuan road, Shanghai, 200240 (China); Luo, Wentao, E-mail: betajzhang@sjtu.edu.cn, E-mail: walt@shao.ac.cn, E-mail: foucaud@sjtu.edu.cn [Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical Observatory, Nandan Road 80, Shanghai, 200030 (China)

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.

  8. Universality of Quantum Gravity Corrections

    Das, Saurya

    2008-01-01

    We show that the existence of a minimum measurable length and the related Generalized Uncertainty Principle (GUP), predicted by theories of Quantum Gravity, influence all quantum Hamiltonians. Thus, they predict quantum gravity corrections to various quantum phenomena. We compute such corrections to the Lamb Shift, the Landau levels and the tunnelling current in a Scanning Tunnelling Microscope (STM). We show that these corrections can be interpreted in two ways: (a) either that they are exceedingly small, beyond the reach of current experiments, or (b) that they predict upper bounds on the quantum gravity parameter in the GUP, compatible with experiments at the electroweak scale. Thus, more accurate measurements in the future would either be able to test these predictions, or further tighten the above bounds and predict an intermediate length scale, between the electroweak and the Planck scale.

  9. Probabilistic error correction for RNA sequencing

    Le, Hai-Son; Schulz, Marcel H.; McCauley, Brenna M.; Hinman, Veronica F.; Bar-Joseph, Ziv

    2013-01-01

    Sequencing of RNAs (RNA-Seq) has revolutionized the field of transcriptomics, but the reads obtained often contain errors. Read error correction can have a large impact on our ability to accurately assemble transcripts. This is especially true for de novo transcriptome analysis, where a reference genome is not available. Current read error correction methods, developed for DNA sequence data, cannot handle the overlapping effects of non-uniform abundance, polymorphisms and alternative splicing...

  10. Accurate Modeling of Advanced Reflectarrays

    Zhou, Min

    of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical NearField Antenna Test Facility, it was concluded that the three latter factors are particularly important...... to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared...... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility...

  11. Accurate thickness measurement of graphene

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  12. Thermodynamics of Error Correction

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  13. Motion-corrected Fourier ptychography

    Bian, Liheng; Guo, Kaikai; Suo, Jinli; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-01-01

    Fourier ptychography (FP) is a recently proposed computational imaging technique for high space-bandwidth product imaging. In real setups such as endoscope and transmission electron microscope, the common sample motion largely degrades the FP reconstruction and limits its practicability. In this paper, we propose a novel FP reconstruction method to efficiently correct for unknown sample motion. Specifically, we adaptively update the sample's Fourier spectrum from low spatial-frequency regions towards high spatial-frequency ones, with an additional motion recovery and phase-offset compensation procedure for each sub-spectrum. Benefiting from the phase retrieval redundancy theory, the required large overlap between adjacent sub-spectra offers an accurate guide for successful motion recovery. Experimental results on both simulated data and real captured data show that the proposed method can correct for unknown sample motion with its standard deviation being up to 10% of the field-of-view scale. We have released...

  14. MR image intensity inhomogeneity correction

    MR technology is one of the best and most reliable ways of studying the brain. Its main drawback is the so-called intensity inhomogeneity, or bias field, which impairs visual inspection and the medical proceedings for diagnosis and strongly affects quantitative image analysis. Noise is yet another artifact in medical images. To restore the original signal accurately and effectively, filtering, bias correction, and quantitative evaluation of the correction are considered. In this report, two denoising algorithms are used: (i) basis rotation fields of experts (BRFoE) and (ii) anisotropic diffusion, the latter considering Gaussian noise, the Perona-Malik and Tukey's biweight functions, and the standard deviation of the noise of the input image.
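
    A minimal Perona-Malik anisotropic diffusion sketch (one of the two denoising approaches named above); kappa and the iteration count are illustrative choices, not values from the report:

    ```python
    import numpy as np

    def perona_malik(img, n_iter=20, kappa=30.0, dt=0.2):
        u = img.astype(float).copy()
        for _ in range(n_iter):
            # finite-difference gradients toward the four neighbours
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            # Perona-Malik edge-stopping function g = exp(-(|grad|/kappa)^2)
            g = lambda d: np.exp(-(d / kappa) ** 2)
            u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u
    ```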

  15. A More Accurate Fourier Transform

    Courtney, Elya

    2015-01-01

    Fourier transform methods are used to analyze functions and data sets to provide frequencies, amplitudes, and phases of underlying oscillatory components. Fast Fourier transform (FFT) methods offer speed advantages over evaluation of explicit integrals (EI) that define Fourier transforms. This paper compares frequency, amplitude, and phase accuracy of the two methods for well resolved peaks over a wide array of data sets, including cosine series with and without random noise and a variety of physical data sets, including atmospheric CO₂ concentrations, tides, temperatures, sound waveforms, and atomic spectra. The FFT uses MIT's FFTW3 library. The EI method uses the rectangle method to compute the areas under the curve via complex math. Results support the hypothesis that EI methods are more accurate than FFT methods. Errors range from 5 to 10 times higher when determining peak frequency by FFT, 1.4 to 60 times higher for peak amplitude, and 6 to 10 times higher for phase under a peak. The ability t...
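
    A sketch of the "explicit integral" idea: evaluate the Fourier coefficient at an arbitrary trial frequency by rectangle-rule quadrature, rather than only at the FFT bin frequencies. The test signal and frequency grid are illustrative:

    ```python
    import numpy as np

    def ei_coefficient(t, y, f):
        """Rectangle-rule estimate of (2/T) * integral y(t) exp(-2pi i f t) dt."""
        dt = t[1] - t[0]                     # uniform sampling assumed
        T = dt * len(t)
        return 2.0 / T * np.sum(y * np.exp(-2j * np.pi * f * t)) * dt

    # amplitude/phase of a peak can then be refined on a fine frequency grid
    t = np.arange(0, 10, 0.01)
    y = 1.5 * np.cos(2 * np.pi * 2.34 * t + 0.7)
    freqs = np.linspace(2.2, 2.5, 301)
    amps = np.abs([ei_coefficient(t, y, f) for f in freqs])
    print(freqs[np.argmax(amps)])            # close to 2.34 Hz
    ```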

  16. Relativistic formulation of coordinate light time, Doppler and astrometric observables up to the second post-Minkowskian order

    Hees, A; Poncin-Lafitte, C Le

    2014-01-01

    Given the extreme accuracy of modern space science, a precise relativistic modeling of observations is required. In particular, it is important to describe properly light propagation through the Solar System. For two decades, several modeling efforts based on the solution of the null geodesic equations have been proposed, but they are mainly valid only at the first post-Newtonian order. However, with the increasing precision of ongoing space missions such as Gaia, GAME, BepiColombo, JUNO or JUICE, we know that some corrections up to the second order have to be taken into account for future experiments. We present a procedure to compute the relativistic coordinate time delay, Doppler and astrometric observables avoiding the integration of the null geodesic equation. This is possible using the Time Transfer Function formalism, a powerful tool providing key quantities such as the time of flight of a light signal between two point-events and the tangent vector to its null geodesic. Indeed we show how to ...
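
    For orientation, the dominant first-order relativistic light-time correction is the standard Shapiro delay; the textbook formula below is shown as context, not as the second-order post-Minkowskian machinery of the paper:

    ```python
    import numpy as np

    G = 6.674e-11        # m^3 kg^-1 s^-2
    C = 2.998e8          # m/s
    M_SUN = 1.989e30     # kg

    def shapiro_delay(r1, r2, R, mass=M_SUN):
        """Extra coordinate light time (s) for a signal passing mass `mass`.

        r1, r2 : emitter/receiver distances from the body (m)
        R      : emitter-receiver separation (m)
        """
        return 2 * G * mass / C**3 * np.log((r1 + r2 + R) / (r1 + r2 - R))

    # Earth-Mars signal near superior conjunction:
    au = 1.496e11
    print(shapiro_delay(1.0 * au, 1.52 * au, 2.51 * au))  # ~6e-5 s one-way
    ```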

  17. The Utility of Maze Accurate Response Rate in Assessing Reading Comprehension in Upper Elementary and Middle School Students

    McCane-Bowling, Sara J.; Strait, Andrea D.; Guess, Pamela E.; Wiedo, Jennifer R.; Muncie, Eric

    2014-01-01

    This study examined the predictive utility of five formative reading measures: words correct per minute, number of comprehension questions correct, reading comprehension rate, number of maze correct responses, and maze accurate response rate (MARR). Broad Reading cluster scores obtained via the Woodcock-Johnson III (WJ III) Tests of Achievement…

  18. Accurate, meshless methods for magnetohydrodynamics

    Hopkins, Philip F.; Raives, Matthias J.

    2016-01-01

    Recently, we explored new meshless finite-volume Lagrangian methods for hydrodynamics: the `meshless finite mass' (MFM) and `meshless finite volume' (MFV) methods; these capture advantages of both smoothed particle hydrodynamics (SPH) and adaptive mesh refinement (AMR) schemes. We extend these to include ideal magnetohydrodynamics (MHD). The MHD equations are second-order consistent and conservative. We augment these with a divergence-cleaning scheme, which maintains ∇·B ≈ 0. We implement these in the code GIZMO, together with state-of-the-art SPH MHD. We consider a large test suite, and show that on all problems the new methods are competitive with AMR using constrained transport (CT) to ensure ∇·B = 0. They correctly capture the growth/structure of the magnetorotational instability, MHD turbulence, and launching of magnetic jets, in some cases converging more rapidly than state-of-the-art AMR. Compared to SPH, the MFM/MFV methods exhibit convergence at fixed neighbour number, sharp shock-capturing, and dramatically reduced noise, divergence errors, and diffusion. Still, `modern' SPH can handle most test problems, at the cost of larger kernels and `by hand' adjustment of artificial diffusion. Compared to non-moving meshes, the new methods exhibit enhanced `grid noise' but reduced advection errors and diffusion, easily include self-gravity, and feature velocity-independent errors and superior angular momentum conservation. They converge more slowly on some problems (smooth, slow-moving flows), but more rapidly on others (involving advection/rotation). In all cases, we show divergence control beyond the Powell 8-wave approach is necessary, or all methods can converge to unphysical answers even at high resolution.

  19. NWS Corrections to Observations

    National Oceanic and Atmospheric Administration, Department of Commerce — Form B-14 is the National Weather Service form entitled 'Notice of Corrections to Weather Records.' The forms are used to make corrections to observations on forms...

  20. Corrective Jaw Surgery

    Corrective jaw, or orthognathic, surgery is performed by ... your treatment. Correction of Common Dentofacial Deformities: The information provided here is not intended as a substitute ...

  1. Error Correction in Classroom

    Dr. Grace Zhang

    2000-01-01

    Error correction is an important issue in foreign language acquisition. This paper investigates how students feel about the way in which error correction should take place in a Chinese-as-a-foreign-language classroom, based on empirical data of a large scale. The study shows that there is a general consensus that error correction is necessary. In terms of correction strategy, the students preferred a combination of direct and indirect corrections, or a direct-only correction. The former choice indicates that students would be happy to take either, so long as the correction gets done. Most students didn't mind peer correction provided it is conducted in a constructive way. More than half of the students would feel uncomfortable if the same error they make in class is corrected consecutively more than three times. Taking these findings into consideration, we may want to encourage peer correction, use a combination of correction strategies (direct only if suitable) and do it in a non-threatening and sensitive way. It is hoped that this study will contribute to the effectiveness of error correction in Chinese language classrooms and may also have wider implications for other languages.

  2. 38 CFR 4.46 - Accurate measurement.

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 (2010-07-01) Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  3. Design and test of a highly accurate ultrasonic-velocity measurement device for NaCl aqueous solution concentration with temperature correction

    孟瑞锋; 马小康; 王州博; 董龙梅; 杨涛; 刘东红

    2015-01-01

    abnormal sample points and checking out the regression coefficient of the model by t-test. The developed model had high prediction accuracy and stability with the maximum prediction error of 0.25 g/100 g, the determination coefficient of calibration (Rcal2) of 0.9992, the determination coefficient of validation (Rval2) of 0.9988, the root mean square error of calibration (RMSEC) of 0.0894 g/100 g, the root mean square error of prediction (RMSEP) of 0.1015 g/100 g and the ratio performance deviation (RPD) of 28.57, which indicated that the model could be used for practical detection accurately and steadily, and was helpful for on-line measuring.
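
    A generic sketch of how the reported calibration statistics (RMSEC/RMSEP, R², RPD) are typically computed; this is not code from the study:

    ```python
    import numpy as np

    def regression_metrics(y_true, y_pred):
        resid = y_true - y_pred
        rmse = np.sqrt(np.mean(resid**2))              # RMSEC / RMSEP
        ss_res = np.sum(resid**2)
        ss_tot = np.sum((y_true - y_true.mean())**2)
        r2 = 1.0 - ss_res / ss_tot                     # determination coeff.
        rpd = np.std(y_true, ddof=1) / rmse            # ratio performance deviation
        return rmse, r2, rpd
    ```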

  4. Source distribution dependent scatter correction for PVI

    Source distribution dependent scatter correction methods which incorporate different amounts of information about the source position and material distribution have been developed and tested. The techniques use image to projection integral transformation incorporating varying degrees of information on the distribution of scattering material, or convolution subtraction methods, with some information about the scattering material included in one of the convolution methods. To test the techniques, the authors apply them to data generated by Monte Carlo simulations which use geometric shapes or a voxelized density map to model the scattering material. Source position and material distribution have been found to have some effect on scatter correction. An image to projection method which incorporates a density map produces accurate scatter correction but is computationally expensive. Simpler methods, both image to projection and convolution, can also provide effective scatter correction
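
    A generic sketch of the convolution-subtraction family described above, iterated so the scatter estimate is built from the primary image rather than the scatter-contaminated projection. The Gaussian kernel width and scatter fraction k are illustrative parameters:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def convolution_subtraction(projection, k=0.3, sigma=15.0, n_iter=3):
        """Estimate scatter as a blurred, scaled copy of the primary image."""
        primary = projection.copy()
        for _ in range(n_iter):
            scatter = k * gaussian_filter(primary, sigma)
            primary = np.clip(projection - scatter, 0.0, None)
        return primary
    ```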

  5. The FLUKA code: An accurate simulation tool for particle therapy

    Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...

  6. Accurate characterization of OPVs: Device masking and different solar simulators

    Gevorgyan, Suren; Carlé, Jon Eggert; Søndergaard, Roar R.;

    2013-01-01

    One of the prime objects of organic solar cell research has been to improve the power conversion efficiency. Unfortunately, the accurate determination of this property is not straight forward and has led to the recommendation that record devices be tested and certified at a few accredited...... laboratories following rigorous ASTM and IEC standards. This work tries to address some of the issues confronting the standard laboratory in this regard. Solar simulator lamps are investigated for their light field homogeneity and direct versus diffuse components, as well as the correct device area...

  7. Diophantine Correct Open Induction

    Raffer, Sidney

    2010-01-01

    We give an induction-free axiom system for diophantine correct open induction. We relate the problem of whether a finitely generated ring of Puiseux polynomials is diophantine correct to a problem about the value-distribution of a tuple of semialgebraic functions with integer arguments. We use this result, and a theorem of Bergelson and Leibman on generalized polynomials, to identify a class of diophantine correct subrings of the field of descending Puiseux series with real coefficients.

  8. Attenuation correction for small animal PET tomographs

    Chow, Patrick L [David Geffen School of Medicine at UCLA, Crump Institute for Molecular Imaging, University of California, 700 Westwood Plaza, Los Angeles, CA 90095 (United States); Rannou, Fernando R [Departamento de Ingenieria Informatica, Universidad de Santiago de Chile (USACH), Av. Ecuador 3659, Santiago (Chile); Chatziioannou, Arion F [David Geffen School of Medicine at UCLA, Crump Institute for Molecular Imaging, University of California, 700 Westwood Plaza, Los Angeles, CA 90095 (United States)

    2005-04-21

    Attenuation correction is one of the important corrections required for quantitative positron emission tomography (PET). This work compares the quantitative accuracy of attenuation correction using a simple global scale factor with traditional transmission-based methods acquired either with a small animal PET or a small animal x-ray computed tomography (CT) scanner. Two phantoms (one mouse-sized and one rat-sized) and two animal subjects (one mouse and one rat) were scanned in CTI Concorde Microsystems' microPET Focus for emission and transmission data and in ImTek's MicroCAT II for transmission data. PET emission image values were calibrated against a scintillation well counter. Results indicate that the scale factor method of attenuation correction places the average measured activity concentration about the expected value, without correcting for the cupping artefact from attenuation. Noise analysis in the phantom studies with the PET-based method shows that noise in the transmission data increases the noise in the corrected emission data. The CT-based method was accurate and delivered low-noise images suitable for both PET data correction and PET tracer localization.
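
    The global-scale method in its simplest form: every line of response through a roughly uniform subject gets one factor exp(μL), with L an assumed average chord length. A minimal sketch with illustrative values:

    ```python
    import numpy as np

    MU_WATER_511KEV = 0.096  # 1/cm, narrow-beam at 511 keV

    def scale_factor_correction(sinogram, mean_chord_cm):
        acf = np.exp(MU_WATER_511KEV * mean_chord_cm)
        return sinogram * acf

    # e.g. a mouse-sized phantom with an assumed ~2.5 cm average chord:
    # corrected = scale_factor_correction(measured, 2.5)
    ```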

  9. Gouy shift correction for highly accurate refractive index retrieval in time-domain terahertz spectroscopy

    Kužel, Petr; Němec, Hynek; Kadlec, Filip; Kadlec, Christelle

    2010-01-01

    Vol. 18, No. 15 (2010), pp. 15338-15348. ISSN 1094-4087. R&D Projects: GA ČR GC202/09/J045. Institutional research plan: CEZ:AV0Z10100520. Keywords: terahertz spectroscopy; Gouy phase shift; Gaussian beams; refractive index. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 3.749, year: 2010

  10. Spelling Correction in Context

    Pinot, Guillaume; Enguehard, Chantal

    2005-01-01

    Spelling checkers, frequently used nowadays, do not correct real-word errors: the erroneous replacement of dessert by desert, for example, goes undetected. We propose in this article an algorithm based on examining the context of words to correct this kind of spelling error. The algorithm is trained on a raw corpus.
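
    A toy sketch of the context-sensitive idea: for words belonging to a confusion set, prefer the variant whose bigrams are more frequent in a training corpus. The confusion set and corpus here are illustrative, not from the paper:

    ```python
    from collections import Counter

    CONFUSION = {"desert": {"desert", "dessert"}, "dessert": {"desert", "dessert"}}

    def train_bigrams(corpus_tokens):
        return Counter(zip(corpus_tokens, corpus_tokens[1:]))

    def correct(tokens, bigrams):
        out = list(tokens)
        for i, w in enumerate(tokens):
            for cand in CONFUSION.get(w, ()):
                def score(x):
                    left = bigrams[(tokens[i - 1], x)] if i > 0 else 0
                    right = bigrams[(x, tokens[i + 1])] if i + 1 < len(tokens) else 0
                    return left + right
                if score(cand) > score(out[i]):
                    out[i] = cand
        return out

    bg = train_bigrams("we ate a delicious dessert after dinner".split())
    print(correct("we ate a delicious desert".split(), bg))
    ```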

  11. Derivative corrections from noncommutativity

    We show that an infinite subset of the higher-derivative α' corrections to the DBI and Chern-Simons actions of ordinary commutative open-string theory can be determined using noncommutativity. Our predictions are compared to some lowest order α' corrections that have been computed explicitly by Wyllard (hep-th/0008125), and shown to agree. (author)

  12. Hybrid scatter correction for CT imaging

    The purpose of this study was to develop and evaluate the hybrid scatter correction algorithm (HSC) for CT imaging. Therefore, two established ways to perform scatter correction, i.e. physical scatter correction based on Monte Carlo simulations and a convolution-based scatter correction algorithm, were combined in order to perform an object-dependent, fast and accurate scatter correction. Based on a reconstructed CT volume, patient-specific scatter intensity is estimated by a coarse Monte Carlo simulation that uses a reduced amount of simulated photons in order to reduce the simulation time. To further speed up the Monte Carlo scatter estimation, scatter intensities are simulated only for a fraction of all projections. In a second step, the high noise estimate of the scatter intensity is used to calibrate the open parameters in a convolution-based algorithm which is then used to correct measured intensities for scatter. Furthermore, the scatter-corrected intensities are used in order to reconstruct a scatter-corrected CT volume data set. To evaluate the scatter reduction potential of HSC, we conducted simulations in a clinical CT geometry and measurements with a flat detector CT system. In the simulation study, HSC-corrected images were compared to scatter-free reference images. For the measurements, no scatter-free reference image was available. Therefore, we used an image corrected with a low-noise Monte Carlo simulation as a reference. The results show that the HSC can significantly reduce scatter artifacts. Compared to the reference images, the error due to scatter artifacts decreased from 100% for uncorrected images to a value below 20% for HSC-corrected images for both the clinical (simulated data) and the flat detector CT geometry (measurement). Compared to a low-noise Monte Carlo simulation, with the HSC the number of photon histories can be reduced by about a factor of 100 per projection without losing correction accuracy. Furthermore, it was sufficient to
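
    A sketch of the hybrid idea: fit the open amplitude of a cheap convolution-based scatter model to the noisy, coarse Monte Carlo estimate, then apply the calibrated model everywhere. The Gaussian model and its width are illustrative stand-ins for the paper's convolution kernel:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.optimize import minimize_scalar

    def calibrate_and_correct(intensity, mc_scatter_coarse, sigma=20.0):
        blurred = gaussian_filter(intensity, sigma)
        # least-squares fit of scatter ~ k * blur(intensity) to the MC estimate
        fit = minimize_scalar(
            lambda k: np.sum((k * blurred - mc_scatter_coarse) ** 2),
            bounds=(0.0, 1.0), method="bounded")
        k = fit.x
        return intensity - k * blurred   # scatter-corrected intensities
    ```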

  13. Accurate ab initio vibrational energies of methyl chloride

    Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH₃³⁵Cl and CH₃³⁷Cl. The respective PESs, CBS-35 HL and CBS-37 HL, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY₃Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35 HL and CBS-37 HL PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm⁻¹, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH₃Cl without empirical refinement of the respective PESs

  14. Accurate ab initio vibrational energies of methyl chloride

    Owens, Alec, E-mail: owens@mpi-muelheim.mpg.de [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Department of Physics and Astronomy, University College London, Gower Street, WC1E 6BT London (United Kingdom); Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan [Department of Physics and Astronomy, University College London, Gower Street, WC1E 6BT London (United Kingdom); Thiel, Walter [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany)

    2015-06-28

    Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH₃³⁵Cl and CH₃³⁷Cl. The respective PESs, CBS-35 HL and CBS-37 HL, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY₃Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35 HL and CBS-37 HL PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm⁻¹, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH₃Cl without empirical refinement of the respective PESs.

  15. Accurate transition rates for intercombination lines of singly ionized nitrogen

    The transition energies and rates for the 2s²2p² ³P₁,₂ - 2s2p³ ⁵S°₂ and 2s²2p3s - 2s²2p3p intercombination transitions have been calculated using term-dependent nonorthogonal orbitals in the multiconfiguration Hartree-Fock approach. Several sets of spectroscopic and correlation nonorthogonal functions have been chosen to describe adequately the term dependence of wave functions and various correlation corrections. Special attention has been focused on the accurate representation of strong interactions between the 2s2p³ ¹,³P°₁ and 2s²2p3s ¹,³P°₁ levels. The relativistic corrections are included through the one-body mass correction, Darwin, and spin-orbit operators and the two-body spin-other-orbit and spin-spin operators in the Breit-Pauli Hamiltonian. The importance of core-valence correlation effects has been examined. The accuracy of the present transition rates is evaluated by the agreement between the length and velocity formulations combined with the agreement between the calculated and measured transition energies. The present results for transition probabilities, branching fractions, and lifetimes have been compared with previous calculations and experiments.

  16. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures are still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry

  17. Can clinicians accurately assess esophageal dilation without fluoroscopy?

    Bailey, A D; Goldner, F

    1990-01-01

    This study questioned whether clinicians could determine the success of esophageal dilation accurately without the aid of fluoroscopy. Twenty patients were enrolled with the diagnosis of distal esophageal stenosis, including benign peptic stricture (17), Schatzki's ring (2), and squamous cell carcinoma of the esophagus (1). Dilation attempts using only Maloney dilators were monitored fluoroscopically by the principal investigator, the physician and patient being unaware of the findings. Physicians then predicted whether or not their dilations were successful, and they examined various features to determine their usefulness in predicting successful dilation. They were able to predict successful dilation accurately in 97% of the cases studied; however, their predictions of unsuccessful dilation were correct only 60% of the time. Features helpful in predicting passage included easy passage of the dilator (98%) and the patient feeling the dilator in the stomach (95%). Excessive resistance suggesting unsuccessful passage was an unreliable feature and was often due to the dilator curling in the stomach. When Maloney dilators are used to dilate simple distal strictures, if the physician predicts successful passage, he is reliably accurate without the use of fluoroscopy; however, if unsuccessful passage is suspected, fluoroscopy must be used for confirmation. PMID:2210278

  18. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.

    2015-12-01

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures are still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  19. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Marcondes, Michel L., E-mail: michel@if.usp.br [Physics Institute, University of Sao Paulo, Sao Paulo, 05508-090 (Brazil); Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Shukla, Gaurav, E-mail: shukla@physics.umn.edu [School of Physics and Astronomy, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States); Silveira, Pedro da [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Wentzcovitch, Renata M., E-mail: wentz002@umn.edu [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States)

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures are still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  20. Mobile image based color correction using deblurring

    Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.

    2015-03-01

    Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed by utilizing food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e., a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique that combines image de-blurring and color correction. The contribution consists of introducing an automatic camera-shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space.
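
    A sketch of polynomial color correction: fit a linear-plus-cross-terms map from measured checkerboard patch RGBs to their known reference values, then apply it to the whole image. The degree-2 feature set is one common choice, not necessarily the paper's exact model:

    ```python
    import numpy as np

    def poly_features(rgb):
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        return np.stack([np.ones_like(r), r, g, b, r*g, r*b, g*b,
                         r**2, g**2, b**2], axis=-1)

    def fit_color_correction(measured_patches, reference_patches):
        X = poly_features(measured_patches)                        # (N, 10)
        M, *_ = np.linalg.lstsq(X, reference_patches, rcond=None)  # (10, 3)
        return M

    def apply_color_correction(image, M):
        return np.clip(poly_features(image) @ M, 0.0, 1.0)
    ```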

  1. moco: Fast Motion Correction for Calcium Imaging.

    Dubbs, Alexander; Guevara, James; Yuste, Rafael

    2016-01-01

    Motion correction is the first step in a pipeline of algorithms to analyze calcium imaging videos and extract biologically relevant information, for example the network structure of the neurons therein. Fast motion correction is especially critical for closed-loop activity-triggered stimulation experiments, where accurate detection and targeting of specific cells is necessary. We introduce a novel motion-correction algorithm which uses a Fourier-transform approach, and a combination of judicious downsampling and the accelerated computation of many L2 norms using dynamic programming and two-dimensional, FFT-accelerated convolutions, to enhance its efficiency. Its accuracy is comparable to that of established community-used algorithms, and it is more stable to large translational motions. It is programmed in Java and is compatible with ImageJ. PMID:26909035
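
    The core operation can be illustrated with generic FFT cross-correlation for rigid translation estimation; this is a sketch of the standard technique, not the moco source code:

    ```python
    import numpy as np

    def estimate_shift(frame, template):
        F = np.fft.fft2(frame)
        T = np.fft.fft2(template)
        xcorr = np.real(np.fft.ifft2(F * np.conj(T)))
        dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
        # wrap shifts larger than half the image back to negative offsets
        if dy > frame.shape[0] // 2: dy -= frame.shape[0]
        if dx > frame.shape[1] // 2: dx -= frame.shape[1]
        return dy, dx

    def register(frame, template):
        dy, dx = estimate_shift(frame, template)
        return np.roll(frame, (-dy, -dx), axis=(0, 1))
    ```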

  2. Corrected Age for Preemies


  3. Attitudinally correct nomenclature

    Cook, A C; Anderson, R. H.

    2002-01-01

    For half a century, inappropriate terms have been used to describe the various parts of the heart in a clinical context. Does the cardiological community have the fortitude to correct these mistakes?

  4. Nested Quantum Annealing Correction

    Vinci, Walter; Albash, Tameem; Lidar, Daniel A.

    2015-01-01

    We present a general error-correcting scheme for quantum annealing that allows for the encoding of a logical qubit into an arbitrarily large number of physical qubits. Given any Ising model optimization problem, the encoding replaces each logical qubit by a complete graph of degree C, representing the distance of the error-correcting code. A subsequent minor-embedding step then implements the encoding on the underlying hardware graph of the quantum annealer. We demonstrate experimentally th...

  5. Laboratory Building for Accurate Determination of Plutonium

    2008-01-01

    The accurate determination of plutonium is one of the most important assay techniques for nuclear fuel; it is also the key to chemical measurement transfer and the basis of the nuclear material balance. An

  6. Accurate Calculation of the Differential Cross Section of Bhabha Scattering with Photon Chain Loops Contribution in QED

    JIANG Min; FANG Zhen-Yun; SANG Wen-Long; GAO Fei

    2006-01-01

    In the minimum electromagnetic coupling model of the interaction between photon and electron (positron), we accurately calculate the photon-chain renormalized propagator and obtain an accurate result for the differential cross section of Bhabha scattering with a photon-chain renormalized propagator in quantum electrodynamics. The related radiative corrections are briefly reviewed and discussed.

  7. Cyclic period changes and the light-time effect in eclipsing binaries: A low-mass companion around the system VV Ursae Majoris

    Tanrıver, Mehmet

    2015-04-01

    In this article, a period analysis of the late-type eclipsing binary VV UMa is presented, based on the periodic variation of the eclipse timings of the binary. We determined the orbital properties and mass of a third orbiting body in the system by analyzing the light-travel time effect. The O-C diagram constructed for all available minima times of VV UMa exhibits a cyclic character superimposed on a linear variation. This variation includes three maxima and two minima within approximately 28,240 orbital periods of the system, and can be explained by the light-travel time effect (LITE) of an unseen third body in a triple system, which causes variations of the eclipse arrival times. New parameters of the light-travel time effect due to the third body were computed, with a period of 23.22 ± 0.17 years. The cyclic-variation analysis yields a semi-amplitude of 0.0139 day for the light-travel time effect and an orbital eccentricity of 0.35 for the third body. The mass of the third body that orbits the eclipsing pair is 0.787 ± 0.02 M⊙, and the semi-major axis of its orbit is 10.75 AU.
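
    A sketch of the LITE O-C model commonly used in such analyses (Irwin's formulation): the eclipse timing residual as a function of the third body's orbital phase. The period, eccentricity, and semi-amplitude below are taken from the record; the epoch T0 and argument of periastron ω are illustrative assumptions:

    ```python
    import numpy as np

    def solve_kepler(M, e, tol=1e-10):
        E = M.copy()
        for _ in range(50):                  # Newton iteration
            dE = (E - e * np.sin(E) - M) / (1 - e * np.cos(E))
            E -= dE
            if np.max(np.abs(dE)) < tol:
                break
        return E

    def lite_oc(t, P3_yr, T0, e, omega_deg, A_days):
        """O-C (days) at times t (years) for a third body with period P3."""
        w = np.radians(omega_deg)
        M = 2 * np.pi * ((t - T0) / P3_yr % 1.0)
        E = solve_kepler(M, e)
        nu = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                            np.sqrt(1 - e) * np.cos(E / 2))
        K = A_days / np.sqrt(1 - e**2 * np.cos(w)**2)
        return K * ((1 - e**2) / (1 + e * np.cos(nu)) * np.sin(nu + w)
                    + e * np.sin(w))

    # e.g. VV UMa: P3 = 23.22 yr, e = 0.35, semi-amplitude 0.0139 d
    t = np.linspace(2000, 2046, 500)
    oc = lite_oc(t, 23.22, 2000.0, 0.35, 90.0, 0.0139)
    ```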

  8. Invariant Image Watermarking Using Accurate Zernike Moments

    Ismail A. Ismail

    2010-01-01

    Problem statement: Digital image watermarking is the most popular method for image authentication, copyright protection and content description. Zernike moments are the most widely used moments in image processing and pattern recognition. The magnitudes of Zernike moments are rotation invariant, so they can be used directly as a watermark signal or be further modified to carry embedded data. Zernike moments computed in Cartesian coordinates are not accurate due to geometrical and numerical errors. Approach: In this study, we employed a robust image-watermarking algorithm using accurate Zernike moments. These moments are computed in polar coordinates, where both approximation and geometric errors are removed. Accurate Zernike moments are used in image watermarking and proved to be robust against different kinds of geometric attacks. The performance of the proposed algorithm is evaluated using standard images. Results: Experimental results show that accurate Zernike moments achieve a higher degree of robustness than the approximated ones against rotation, scaling, flipping, shearing and affine transformation. Conclusion: By computing accurate Zernike moments, the embedded watermark bits can be extracted at a low error rate.
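
    The textbook Zernike radial polynomial and moment, evaluated on a polar sampling grid; the polar evaluation is the ingredient that avoids the geometric error of a Cartesian approximation. This is a generic sketch, not the authors' full watermarking pipeline:

    ```python
    import numpy as np
    from math import factorial

    def zernike_radial(n, m, rho):
        m = abs(m)
        R = np.zeros_like(rho)
        for k in range((n - m) // 2 + 1):
            c = ((-1) ** k * factorial(n - k)
                 / (factorial(k) * factorial((n + m) // 2 - k)
                    * factorial((n - m) // 2 - k)))
            R += c * rho ** (n - 2 * k)
        return R

    def zernike_moment(f, rho, theta, drho, dtheta, n, m):
        """A_nm on the unit disk; |A_nm| is rotation invariant."""
        V = zernike_radial(n, m, rho) * np.exp(1j * m * theta)
        return (n + 1) / np.pi * np.sum(f * np.conj(V) * rho) * drho * dtheta
    ```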

  9. Model Correction Factor Method

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, considering a computationally time consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit state based on an idealized mechanical model, to be adapted to the original limit state by the model correction factor. Reliable approximations are obtained by iterative use of gradient information on the original limit state function, analogously to previous response surface approaches. However, the strength of the model correction factor method is that in its simpler form, not using gradient information on the original limit state function or only using this information once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations of the reliability. Methods......

  10. Bryant J. correction formula

    For the practical application of the method proposed by J. Bryant, the authors carried out a series of small corrections, related to the background, the dead time of the detectors and channels, the resolution time of the coincidences, the accidental coincidences, the decay scheme, the gamma efficiency of the beta detector, and the beta efficiency of the gamma detector. The calculation of the correction formula is presented in the development of the present report, with 25 combinations presented of the probability of the first existing state of one disintegration and the second state of the following disintegration. (Author)

  11. Second-order accurate finite volume method for well-driven flows

    Dotlić, M.; Vidović, D.; Pokorni, B.; Pušić, M.; Dimkić, M.

    2016-02-01

    We consider a finite volume method for a well-driven fluid flow in a porous medium. Due to the singularity of the well, modeling in the near-well region with standard numerical schemes results in a completely wrong total well flux and an inaccurate hydraulic head. Local grid refinement can help, but it comes at computational cost. In this article we propose two methods to address the well singularity. In the first method the flux through well faces is corrected using a logarithmic function, in a way related to the Peaceman model. Coupling this correction with a non-linear second-order accurate two-point scheme gives a greatly improved total well flux, but the resulting scheme is still inconsistent. In the second method fluxes in the near-well region are corrected by representing the hydraulic head as a sum of a logarithmic and a linear function. This scheme is second-order accurate.
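
    A sketch of the classic Peaceman-type logarithmic correction that the first method builds on: relate the well-cell pressure to the well flux through an equivalent radius. The parameter values and isotropic 2D setting are illustrative:

    ```python
    import numpy as np

    def peaceman_well_flux(k, h, mu, p_cell, p_well, dx, dy, r_well):
        """Volumetric well flux from the well-cell pressure (isotropic cell)."""
        r_eq = 0.14 * np.hypot(dx, dy)       # Peaceman equivalent radius
        WI = 2 * np.pi * k * h / (mu * np.log(r_eq / r_well))
        return WI * (p_cell - p_well)
    ```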

  12. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.

    Fuchs, Franz G; Hjelmervik, Jon M

    2016-02-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results. PMID:26731454

  13. An Improved Wavelet Correction for Zero Shifted Accelerometer Data

    Timothy S. Edwards

    2003-01-01

    Accelerometer data from shock measurements often contains a spurious DC drifting phenomenon known as zero shifting. This erroneous signal can be caused by a variety of sources. The most conservative approach when dealing with such data is to discard it and collect a different set with steps taken to prevent the zero shifting. This approach is rarely practical, however. The test article may have been destroyed or it may be impossible or prohibitively costly to recreate the test. A method has been proposed by which wavelets may be used to correct the acceleration data. By comparing the corrected accelerometer data to an independent measurement of the acceleration from a laser vibrometer, this paper shows that the corrected data, in the cases presented, accurately represents the shock. A method is presented by which the analyst may accurately choose the wavelet correction parameters. The comparisons are made in the time and frequency domains, as well as with the shock response spectrum.
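
    A sketch of a wavelet zero-shift correction: decompose, suppress the lowest-frequency (approximation) band that carries the spurious DC drift, and reconstruct. As the paper discusses, the wavelet choice and level are analyst-chosen parameters; 'db4' and level 8 below are placeholders (PyWavelets is assumed available):

    ```python
    import numpy as np
    import pywt

    def remove_zero_shift(accel, wavelet="db4", level=8):
        coeffs = pywt.wavedec(accel, wavelet, level=level)
        coeffs[0] = np.zeros_like(coeffs[0])   # kill the drifting baseline
        corrected = pywt.waverec(coeffs, wavelet)
        return corrected[: len(accel)]          # waverec may pad by one sample
    ```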

  14. The Digital Correction Unit: A data correction/compaction chip

    The Digital Correction Unit (DCU) is a semi-custom CMOS integrated circuit which corrects and compacts data for the SLD experiment. It performs a piece-wise linear correction to data, and implements two separate compaction algorithms. This paper describes the basic functionality of the DCU and its correction and compaction algorithms
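
    A piece-wise linear correction in miniature: raw ADC codes are mapped through a calibration table with linear interpolation between breakpoints. The breakpoint table below is illustrative, not the DCU's:

    ```python
    import numpy as np

    BREAK_X = np.array([0, 256, 1024, 4095], dtype=float)   # raw codes
    BREAK_Y = np.array([0, 280, 1010, 4095], dtype=float)   # corrected codes

    def correct_samples(raw):
        return np.interp(raw, BREAK_X, BREAK_Y)

    print(correct_samples(np.array([100, 512, 2000])))
    ```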

  15. Ballistic deficit correction

    The EUROGAM data-acquisition has to handle a large number of events/s. Typical in-beam experiments using heavy-ion fusion reactions assume the production of about 50 000 compound nuclei per second deexciting via particle and γ-ray emissions. The very powerful γ-ray detection of EUROGAM is expected to produce high-fold event rates as large as 104 events/s. Such high count rates introduce, in a common dead time mode, large dead times for the whole system associated with the processing of the pulse, its digitization and its readout (from the preamplifier pulse up to the readout of the information). In order to minimize the dead time the shaping time constant τ, usually about 3 μs for large volume Ge detectors has to be reduced. Smaller shaping times, however, will adversely affect the energy resolution due to ballistic deficit. One possible solution is to operate the linear amplifier, with a somewhat smaller shaping time constant (in the present case we choose τ = 1.5 μs), in combination with a ballistic deficit compensator. The ballistic deficit can be corrected in different ways using a Gated Integrator, a hardware correction or even a software correction. In this paper we present a comparative study of the software and hardware corrections as well as gated integration

  16. Text Induced Spelling Correction

    Reynaert, M.W.C.

    2004-01-01

    We present TISC, a language-independent and context-sensitive spelling checking and correction system designed to facilitate the automatic removal of non-word spelling errors in large corpora. Its lexicon is derived from a very large corpus of raw text, without supervision, and contains word unigram

  17. Writing: Revisions and Corrections

    Kohl, Herb

    1978-01-01

    A fifth grader wanted to know what he had to do to get all his ideas the way he wanted them in his story writing "and" have the spelling, punctuation and quotation marks correctly styled. His teacher encouraged him to think about writing as a process and provided the student with three steps as guidelines for effective writing. (Author/RK)

  18. Philips Pro-Trace: accurate quantification near the limits of detection

    Pro-Trace is a new module for Philips' SuperQ analytical software, developed specifically for the analysis of trace elements in a wide variety of matrices. It enables the full potential of the sub-ppm quantification achievable by Philips Magix/PW240x spectrometers to be realized. Accurate trace element analysis requires very accurate determination of net count rates (i.e. after all the corrections for background, spectral overlap and matrix have been made) together with careful selection of instrumental parameters, which comes through experience. Pro-Trace has been developed with both in mind. On the application side Pro-Trace offers: superior background correction; background correction for fixed channels; iterated spectral overlap correction; correction for low-level spectral impurity; correction of inter-element matrix effects using mass absorption coefficients; jump-edge matrix correction; LLD and error calculation for every element in every sample. From the user standpoint, Pro-Trace operates entirely within SuperQ, which is familiar to many. Much of the experience required in setting up a trace element application has been incorporated into a Smart Element Selector and an application setup wizard. A set of high-purity setup standards and blanks has also been developed for the Pro-Trace package. This set contains all the samples required for background correction, line overlap correction, MAC calibration and concentration calibration for 40 elements. This presentation will be illustrated by examples of calibrations and data obtained using Pro-Trace. Copyright (2002) Australian X-ray Analytical Association Inc

  19. Geometric correction of APEX hyperspectral data

    Vreys Kristin

    2016-03-01

    Full Text Available Hyperspectral imagery originating from airborne sensors is nowadays widely used for the detailed characterization of land surface. The correct mapping of the pixel positions to ground locations largely contributes to the success of the applications. Accurate geometric correction, also referred to as “orthorectification”, is thus an important prerequisite which must be performed prior to using airborne imagery for evaluations like change detection, or mapping or overlaying the imagery with existing data sets or maps. A so-called “ortho-image” provides an accurate representation of the earth’s surface, having been adjusted for lens distortions, camera tilt and topographic relief. In this paper, we describe the different steps in the geometric correction process of APEX hyperspectral data, as applied in the Central Data Processing Center (CDPC) at the Flemish Institute for Technological Research (VITO), Mol, Belgium. APEX ortho-images are generated through direct georeferencing of the raw images, thereby making use of sensor interior and exterior orientation data, boresight calibration data and elevation data. They can be referenced to any user-specified output projection system and can be resampled to any output pixel size.

  20. Accurate atomic data for industrial plasma applications

    Griesmann, U.; Bridges, J.M.; Roberts, J.R.; Wiese, W.L.; Fuhr, J.R. [National Inst. of Standards and Technology, Gaithersburg, MD (United States)

    1997-12-31

    Reliable branching fraction, transition probability and transition wavelength data for radiative dipole transitions of atoms and ions in plasma are important in many industrial applications. Optical plasma diagnostics and modeling of the radiation transport in electrical discharge plasmas (e.g. in electrical lighting) depend on accurate basic atomic data. NIST has an ongoing experimental research program to provide accurate atomic data for radiative transitions. The new NIST UV-vis-IR high resolution Fourier transform spectrometer has become an excellent tool for accurate and efficient measurements of numerous transition wavelengths and branching fractions in a wide wavelength range. Recently, the authors have also begun to employ photon counting techniques for very accurate measurements of branching fractions of weaker spectral lines with the intent to improve the overall accuracy for experimental branching fractions to better than 5%. They have now completed their studies of transition probabilities of Ne I and Ne II. The results agree well with recent calculations and for the first time provide reliable transition probabilities for many weak intercombination lines.

  1. More accurate picture of human body organs

    Computerized tomography and nuclear magnetic resonance tomography (NMRT) are revolutionary contributions to radiodiagnosis because they make it possible to obtain a more accurate image of human body organs. The principles of both methods are described. Attention is mainly devoted to NMRT, which has been in clinical use for only three years. It does not burden the organism with ionizing radiation. (Ha)

  2. Isomerism of Cyanomethanimine: Accurate Structural, Energetic, and Spectroscopic Characterization.

    Puzzarini, Cristina

    2015-11-25

    The structures, relative stabilities, and rotational and vibrational parameters of the Z-C-, E-C-, and N-cyanomethanimine isomers have been evaluated using state-of-the-art quantum-chemical approaches. Equilibrium geometries have been calculated by means of a composite scheme based on coupled-cluster calculations that accounts for the extrapolation to the complete basis set limit and core-correlation effects. The latter approach is proved to provide molecular structures with an accuracy of 0.001-0.002 Å and 0.05-0.1° for bond lengths and angles, respectively. Systematically extrapolated ab initio energies, accounting for electron correlation through coupled-cluster theory, including up to single, double, triple, and quadruple excitations, and corrected for core-electron correlation and anharmonic zero-point vibrational energy, have been used to accurately determine relative energies and the Z-E isomerization barrier with an accuracy of about 1 kJ/mol. Vibrational and rotational spectroscopic parameters have been investigated by means of hybrid schemes that allow us to obtain rotational constants accurate to about a few megahertz and vibrational frequencies with a mean absolute error of ∼1%. Where available, for all properties considered, a very good agreement with experimental data has been observed. PMID:26529434

  3. Accurate phylogenetic classification of DNA fragments based onsequence composition

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome datasets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.

  4. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth–dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  5. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy.

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both (4)He and (12)C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  6. CTI Correction Code

    Massey, Richard; Stoughton, Chris; Leauthaud, Alexie; Rhodes, Jason; Koekemoer, Anton; Ellis, Richard; Shaghoulian, Edgar

    2013-07-01

    Charge Transfer Inefficiency (CTI) due to radiation damage above the Earth's atmosphere creates spurious trailing in images from Charge-Coupled Device (CCD) imaging detectors. Radiation damage also creates unrelated warm pixels, which can be used to measure CTI. This code provides pixel-based correction for CTI and has proven effective in Hubble Space Telescope Advanced Camera for Surveys raw images, successfully reducing the CTI trails by a factor of ~30 everywhere in the CCD and at all flux levels. The core is written in Java for speed, and a front-end user interface is provided in IDL. The code operates on raw data by returning individual electrons to pixels from which they were unintentionally dragged during readout. Correction takes about 25 minutes per ACS exposure, but is trivially parallelisable to multiple processors.
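
    The correction strategy described, returning electrons to the pixels from which they were dragged, amounts to inverting a forward readout model. A minimal Python sketch of that inversion follows; the exponential trail kernel and all of its parameters are illustrative assumptions, not the actual HST/ACS trap model used by this code.

        import numpy as np

        def add_cti_trail(column, frac=0.02, scale=3.0, length=9):
            """Toy forward model (assumed, illustrative): each pixel loses a
            small fraction of its charge into an exponential trail along the
            readout direction."""
            kernel = frac * np.exp(-np.arange(1, length + 1) / scale)
            out = column * (1.0 - kernel.sum())
            for k, f in enumerate(kernel, start=1):
                out[k:] += f * column[:-k]
            return out

        def correct_cti(observed, n_iter=5):
            """Iteratively solve observed = add_cti_trail(true) for `true`,
            i.e. put the trailed electrons back where they came from."""
            estimate = observed.copy()
            for _ in range(n_iter):
                estimate = estimate + (observed - add_cti_trail(estimate))
            return estimate

        truth = np.zeros(64)
        truth[20] = 1000.0                     # a single bright pixel
        trailed = add_cti_trail(truth)         # what the damaged CCD reads out
        recovered = correct_cti(trailed)
        print(np.abs(recovered - truth).max()) # small residual after correction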

  7. Aberration Corrected Emittance Exchange

    Nanni, Emilio A

    2015-01-01

    Full exploitation of emittance exchange (EEX) requires aberration-free performance of a complex imaging system including active radio-frequency (RF) elements which can add temporal distortions. We investigate the performance of an EEX line where the exchange occurs between two dimensions with normalized emittances which differ by orders of magnitude. The transverse emittance is exchanged into the longitudinal dimension using a double dog-leg emittance exchange setup with a 5-cell RF deflector cavity. Aberration correction is performed on the four most dominant aberrations. These include temporal aberrations that are corrected with higher order magnetic optical elements located where longitudinal and transverse emittance are coupled. We demonstrate aberration-free performance of emittances differing by 4 orders of magnitude, i.e. an initial transverse emittance of $\epsilon_x = 1$ pm-rad is exchanged with a longitudinal emittance of $\epsilon_z = 10$ nm-rad.

  8. Quantum Error Correcting Subsystems and Self-Correcting Quantum Memories

    Bacon, D

    2005-01-01

    The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. In this paper we use this fact to define subsystems with quantum error correcting capabilities. In standard quantum error correcting codes, one requires the ability to apply a procedure which exactly reverses on the error correcting subspace any correctable error. In contrast, for quantum error correcting subsystems, the correction procedure need not undo the error which has occurred, but instead one must perform correction only modulo the subsystem structure. Here we present two examples of quantum error correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form of a simple mean field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system will be a natural high-temperature quantum memory.

  9. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  10. Correction and updating.

    1994-03-01

    In the heading of David Cassidy's review of The Private Lives of Albert Einstein (18 February, p. 997) the price of the book as sold by its British publisher, Faber and Faber, was given incorrectly; the correct price is £15.99. The book is also to be published in the United States by St. Martin's Press, New York, in April, at a price of $23.95. PMID:17817438

  11. Accurate guitar tuning by cochlear implant musicians.

    Thomas Lu

    Full Text Available Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  12. Accurate guitar tuning by cochlear implant musicians.

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  13. How Accurate is inv(A)*b?

    Druinsky, Alex

    2012-01-01

    Several widely-used textbooks lead the reader to believe that solving a linear system of equations Ax = b by multiplying the vector b by a computed inverse inv(A) is inaccurate. Virtually all other textbooks on numerical analysis and numerical linear algebra advise against using computed inverses without stating whether this is accurate or not. In fact, under reasonable assumptions on how the inverse is computed, x = inv(A)*b is as accurate as the solution computed by the best backward-stable solvers. This fact is not new, but obviously obscure. We review the literature on the accuracy of this computation and present a self-contained numerical analysis of it.
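
    The claim is easy to probe numerically. A minimal Python/NumPy sketch comparing forward errors of a backward-stable solve and the inverse route on a synthetic ill-conditioned system with a known solution:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        # Synthetic test matrix with a controlled condition number (~1e8).
        U, _ = np.linalg.qr(rng.standard_normal((n, n)))
        V, _ = np.linalg.qr(rng.standard_normal((n, n)))
        A = U @ np.diag(np.logspace(0, 8, n)) @ V.T
        x_true = rng.standard_normal(n)
        b = A @ x_true

        x_solve = np.linalg.solve(A, b)   # backward-stable LU solve
        x_inv = np.linalg.inv(A) @ b      # the route textbooks warn against

        for name, x in (("solve", x_solve), ("inv(A)*b", x_inv)):
            err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
            print(f"{name:9s} relative forward error: {err:.2e}")

    On such tests both routes typically give forward errors of the same order (roughly cond(A) times machine epsilon), which is the behaviour the paper's analysis explains.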

  14. Talus avulsion fractures: are they accurately diagnosed?

    Robinson, Karen P; Davies, Mark B

    2015-10-01

    Dorsal talus avulsion fractures occurring along the supination line of the foot can cause pain and discomfort. Examination of the foot and ankle using the Ottawa ankle rules does not include examination of the talus; an injury here is easily missed, causing concern to the patient. This is a retrospective study carried out in a major trauma centre to look at the assessment and diagnosis of all patients with dorsal talus and navicular avulsion fractures over a one-year period. Nineteen patients with an isolated dorsal talus avulsion fracture and five patients with an isolated dorsal navicular fracture were included. The correct diagnosis was made in 12 of the 19 patients with isolated dorsal talus avulsion fractures; 7 patients were given an incorrect diagnosis after misreading of the radiograph. Four patients with a dorsal navicular avulsion fracture were given the correct diagnosis. If not correctly diagnosed on presentation, patients can be overly concerned that a 'fracture was missed', which can lead to confusion and anxiety. Therefore these injuries need to be recognised early, promptly diagnosed, treated symptomatically and reassurance given. We recommend the routine palpation of the talus in addition to the examination set out in the Ottawa Ankle Rules and the close inspection of plain radiographs to adequately diagnose an injury in this area. PMID:26190632

  15. Accurate, reproducible measurement of blood pressure.

    Campbell, N. R.; Chockalingam, A; Fodor, J. G.; McKay, D. W.

    1990-01-01

    The diagnosis of mild hypertension and the treatment of hypertension require accurate measurement of blood pressure. Blood pressure readings are altered by various factors that influence the patient, the techniques used and the accuracy of the sphygmomanometer. The variability of readings can be reduced if informed patients prepare in advance by emptying their bladder and bowel, by avoiding over-the-counter vasoactive drugs the day of measurement and by avoiding exposure to cold, caffeine con...

  16. Accurate Finite Difference Methods for Option Pricing

    Persson, Jonas

    2006-01-01

    Stock options are priced numerically using space- and time-adaptive finite difference methods. European options on one and several underlying assets are considered. These are priced with adaptive numerical algorithms including a second order method and a more accurate method. For American options we use the adaptive technique to price options on one stock with and without stochastic volatility. In all these methods emphasis is put on the control of errors to fulfill predefined tolerance level...
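
    As a point of reference for the class of methods studied in this thesis (though far simpler than its adaptive, high-order schemes), here is a minimal explicit finite-difference sketch in Python for a European call under Black-Scholes dynamics; the grid sizes and the 0.9 safety factor are arbitrary illustrative choices.

        import numpy as np

        def european_call_fd(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                             M=200, S_max=300.0):
            """Explicit finite-difference price of a European call
            (uniform grid; the time step respects the explicit stability bound)."""
            dS = S_max / M
            i = np.arange(M + 1)
            dt = 0.9 / (sigma**2 * M**2 + r)   # stability-limited step
            N = int(np.ceil(T / dt))
            dt = T / N
            a = 0.5 * dt * (sigma**2 * i**2 - r * i)
            b = 1.0 - dt * (sigma**2 * i**2 + r)
            c = 0.5 * dt * (sigma**2 * i**2 + r * i)
            V = np.maximum(i * dS - K, 0.0)    # payoff at maturity
            for n in range(N):                 # march backward in time
                V[1:M] = (a[1:M] * V[0:M-1] + b[1:M] * V[1:M]
                          + c[1:M] * V[2:M+1])
                tau = (n + 1) * dt             # time to maturity already stepped
                V[0] = 0.0                     # call is worthless at S = 0
                V[M] = S_max - K * np.exp(-r * tau)
            return float(np.interp(S0, i * dS, V))

        print(european_call_fd())  # Black-Scholes reference value is about 10.45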

  17. Towards accurate modeling of moving contact lines

    Holmgren, Hanna

    2015-01-01

    The present thesis treats the numerical simulation of immiscible incompressible two-phase flows with moving contact lines. The conventional Navier–Stokes equations combined with a no-slip boundary condition leads to a non-integrable stress singularity at the contact line. The singularity in the model can be avoided by allowing the contact line to slip. Implementing slip conditions in an accurate way is not straight-forward and different regularization techniques exist where ad-hoc procedures ...

  18. Accurate variational forms for multiskyrmion configurations

    Jackson, A.D.; Weiss, C.; Wirzba, A.; Lande, A.

    1989-04-17

    Simple variational forms are suggested for the fields of a single skyrmion on a hypersphere, $S_3(L)$, and of a face-centered cubic array of skyrmions in flat space, $R_3$. The resulting energies are accurate at the level of 0.2%. These approximate field configurations provide a useful alternative to brute-force solutions of the corresponding Euler equations.

  19. Efficient Accurate Context-Sensitive Anomaly Detection

    2007-01-01

    For program behavior-based anomaly detection, the only way to ensure accurate monitoring is to construct an efficient and precise program behavior model. A new program behavior-based anomaly detection model, called the combined pushdown automaton (CPDA) model, was proposed, based on static binary executable analysis. The CPDA model incorporates the optimized call stack walk and code instrumentation technique to gain complete context information. Thereby the proposed method can detect more attacks, while retaining good performance.

  20. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  1. Accurate phase-shift velocimetry in rock

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide valuable information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.
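
    For background, phase-shift velocimetry rests on the standard narrow-gradient-pulse PFG relation (a textbook expression, not quoted from this paper): two gradient pulses of amplitude g and duration δ, separated by Δ, leave a spin moving with velocity v carrying the phase

        \phi = \gamma\, g\, \delta\, \Delta\, v
        \qquad\Longrightarrow\qquad
        v = \frac{\phi}{\gamma\, g\, \delta\, \Delta}

    so any asymmetry in the intra-voxel displacement distribution biases the measured mean phase, and hence the velocity map, exactly as the abstract describes.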

  2. A precise technique for manufacturing correction coil

    An automated method of manufacturing correction coils has been developed which provides a precise embodiment of the coil design. Numerically controlled machines have been developed to accurately position coil windings on the beam tube. Two types of machines have been built. One machine bonds the wire to a substrate which is wrapped around the beam tube after it is completed while the second machine bonds the wire directly to the beam tube. Both machines use the Multiwire® technique of bonding the wire to the substrate utilizing an ultrasonic stylus. These machines are being used to manufacture coils for both the SSC and RHIC.

  3. Threshold Corrections to the Bottom Quark Mass Revisited

    Anandakrishnan, Archana; Raby, Stuart

    2014-01-01

    Threshold corrections to the bottom quark mass are often estimated under the approximation that $\tan\beta$ enhanced contributions are the most dominant. In this work we revisit this common approximation made to the estimation of the supersymmetric threshold corrections to the bottom quark mass. We calculate the full one-loop supersymmetric corrections to the bottom quark mass and survey a large part of the phenomenological MSSM parameter space to study the validity of considering only the $\tan\beta$ enhanced corrections. Our analysis demonstrates that this approximation severely breaks down in parts of the parameter space. The size of the threshold corrections has significant consequences for the estimation of fits to the bottom quark mass, couplings to Higgses, and flavor observables, and therefore the approximate expressions must be replaced with the full contributions for accurate estimations.
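
    For reference, the $\tan\beta$-enhanced pieces that the approximation keeps are usually quoted in the Hall-Rattazzi-Sarid form (a standard literature expression reproduced from memory, not taken from this paper):

        \frac{\Delta m_b}{m_b} \simeq \tan\beta \left[
          \frac{2\alpha_s}{3\pi}\, \mu M_{\tilde g}\,
            I\!\left(m_{\tilde b_1}^2, m_{\tilde b_2}^2, M_{\tilde g}^2\right)
          + \frac{y_t^2}{16\pi^2}\, \mu A_t\,
            I\!\left(m_{\tilde t_1}^2, m_{\tilde t_2}^2, \mu^2\right) \right],
        \qquad
        I(a,b,c) = \frac{ab\ln(a/b) + bc\ln(b/c) + ca\ln(c/a)}{(a-b)(b-c)(c-a)},

    the point of the paper being that the non-enhanced one-loop pieces dropped from this expression can matter.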

  4. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥40 points and ≥445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  5. Assessing the correctional orientation of corrections officers in South Korea.

    Moon, Byongook; Maxwell, Sheila Royo

    2004-12-01

    The correctional goal in South Korea has recently changed from the straightforward punishment of inmates to rehabilitation. Currently, emphases are being placed on education, counseling, and other treatment programs. These changes have consequently begun to also change the corrections officers' roles from a purely custodial role to a human service role, in which officers are expected to manage rehabilitation and treatment programs. Despite these changes, few studies have examined the attitudes of corrections officers toward rehabilitation programming. This is an important dimension to examine in rehabilitation programming, as corrections officers play a major role in the delivery of institutional programs. This study examines the attitudes of South Korean corrections officers toward rehabilitation programs. Approximately 430 corrections officers were sampled. Results show that correctional attitudes are largely influenced by not only officers' own motivations for joining corrections but also by institutional factors such as job stress. Policy implications are discussed. PMID:15538029

  6. 78 FR 16611 - Freedom of Information Act; Correction

    2013-03-18

    ...The Federal Trade Commission published a final rule on February 28, 2013 revising its Rules of Practice governing access to agency records. In one of its amendatory instructions, the final rule mentioned a paragraph that was not being affected. This document makes a technical correction to the amendatory instruction so that it accurately reflects the amendments carried...

  7. A Technique for Calculating Quantum Corrections to Solitons

    Barnes, Chris; Turok, Neil

    1997-01-01

    We present a numerical scheme for calculating the first quantum corrections to the properties of static solitons. The technique is applicable to solitons of arbitrary shape, and may be used in 3+1 dimensions for multiskyrmions or other complicated solitons. We report on a test computation in 1+1 dimensions, where we accurately reproduce the analytical result with minimal numerical effort.

  8. Educational Programs in Adult Correctional Institutions: A Survey.

    Dell'Apa, Frank

    A national survey of adult correctional institutions was conducted by questionnaire in 1973 to obtain an accurate picture of the current status of academic educational programs, particularly at the elementary and secondary levels, available to inmates. Questions were designed to obtain information regarding the degree of participation of inmates…

  9. Brain Image Motion Correction

    Jensen, Rasmus Ramsbøl; Benjaminsen, Claus; Larsen, Rasmus;

    2015-01-01

    The application of motion tracking is wide, including: industrial production lines, motion interaction in gaming, computer-aided surgery and motion correction in medical brain imaging. Several devices for motion tracking exist using a variety of different methodologies. In order to use such devices … offset and tracking noise in medical brain imaging. The data are generated from a phantom mounted on a rotary stage and have been collected using a Siemens High Resolution Research Tomograph for positron emission tomography. During acquisition the phantom was tracked with our latest tracking prototype

  10. Corrective action program

    Prior to the implementation of the Corrective Action Program in Asco NPP, the station was already using a number of systems for troubleshooting problems and identifying areas for improvement in areas such as maintenance, operating experience and quality assurance. These systems coexisted with little interaction among one another. The publication of UNESA Guide CEN-13 led Asco NPP to implement the Program, which was then included in the SISC (Inspection Base Plan for Integrated Supervision System of NPPs), which is the Spanish version of the ROP. (Author).

  11. A simple and accurate measurement method of current density of an electron accelerator for irradiation

    For simple and accurate measurement of the current distribution in a broad beam from electron accelerators, a method for detecting the charge absorbed in a graphite target exposed to the air has been examined. The present report aims to solve several fundamental problems. The effective incidence area of the absorber is strictly defined by the design of the geometrical arrangement of the absorber assembly. Electron backscattering from the absorber is corrected with backscattering coefficients in consideration of oblique incidence on the absorber. The influence of ionic charge produced in air is ascribed to the contact potential between the absorber and the guard, and correction methods are proposed. (orig.)

  12. Accurate diagnosis is essential for amebiasis

    2004-01-01

    Amebiasis is one of the three most common causes of death from parasitic disease, and Entamoeba histolytica is the most widely distributed parasite in the world. In particular, Entamoeba histolytica infection in the developing countries is a significant health problem in amebiasis-endemic areas, with a significant impact on infant mortality[1]. In recent years a worldwide increase in the number of patients with amebiasis has refocused attention on this important infection. On the other hand, improvements in the quality of parasitological methods and widespread use of accurate techniques have improved our knowledge about the disease.

  13. Investigations on Accurate Analysis of Microstrip Reflectarrays

    Zhou, Min; Sørensen, S. B.; Kim, Oleksiy S.;

    2011-01-01

    An investigation on accurate analysis of microstrip reflectarrays is presented. Sources of error in reflectarray analysis are examined and solutions to these issues are proposed. The focus is on two sources of error, namely the determination of the equivalent currents to calculate the radiation pattern, and the inaccurate mutual coupling between array elements due to the lack of periodicity. To serve as reference, two offset reflectarray antennas have been designed, manufactured and measured at the DTU-ESA Spherical Near-Field Antenna Test Facility. Comparisons of simulated and measured data are...

  14. Niche Genetic Algorithm with Accurate Optimization Performance

    LIU Jian-hua; YAN De-kun

    2005-01-01

    Based on a crowding mechanism, a novel niche genetic algorithm was proposed which can record the evolutionary direction dynamically during evolution. After evolution, the solutions' precision can be greatly improved by means of local searching along the recorded direction. Simulation shows that this algorithm can not only keep population diversity but also find accurate solutions. Although this method takes more time than the standard GA, it is well worth applying in cases that demand high solution precision.
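
    A minimal Python sketch of the crowding idea the algorithm builds on (plain deterministic crowding, illustrative only; it does not implement the paper's direction-recording or local search):

        import numpy as np

        def f(x):                        # multimodal test function, peaks at 0.1, 0.3, ...
            return np.sin(5 * np.pi * x) ** 2

        rng = np.random.default_rng(1)
        pop = rng.random(40)             # population of scalars in [0, 1]

        for gen in range(2000):
            i, j = rng.choice(len(pop), size=2, replace=False)
            child = np.clip((pop[i] + pop[j]) / 2 + rng.normal(0, 0.02), 0, 1)
            # Crowding: the child competes only with the *closest* parent,
            # so distinct peaks (niches) survive instead of collapsing to one.
            nearest = i if abs(child - pop[i]) < abs(child - pop[j]) else j
            if f(child) > f(pop[nearest]):
                pop[nearest] = child

        print(np.sort(np.round(pop, 2))) # clusters near several maxima, not just one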

  15. How accurately can we calculate thermal systems?

    The objective was to determine how accurately simple reactor lattice integral parameters can be determined, considering user input, differences in the methods, source data and the data processing procedures and assumptions. Three simple square lattice test cases with different fuel-to-moderator ratios were defined. The effect of the thermal scattering models was shown to be important and much larger than the spread in the results. Nevertheless, differences of up to 0.4% in the K-eff calculated by continuous energy Monte Carlo codes were observed even when the same source data were used. (author)

  16. OCT Motion Correction

    Kraus, Martin F.; Hornegger, Joachim

    From the introduction of time domain OCT [1] up to recent swept source systems, motion continues to be an issue in OCT imaging. In contrast to normal photography, an OCT image does not represent a single point in time. Instead, conventional OCT devices sequentially acquire one-dimensional data over a period of several seconds, capturing one beam of light at a time and recording both the intensity and delay of reflections along its path through an object. In combination with unavoidable object motion which occurs in many imaging contexts, the problem of motion artifacts lies in the very nature of OCT imaging. Motion artifacts degrade image quality and make quantitative measurements less reliable. Therefore, it is desirable to come up with techniques to measure and/or correct object motion during OCT acquisition. In this chapter, we describe the effect of motion on OCT data sets and give an overview on the state of the art in the field of retinal OCT motion correction.

  17. Contact Lenses for Vision Correction

    Boyd, Kierstan

  18. Attenuation correction for myocardial scintigraphy: state-of-the-art

    Myocardial perfusion imaging has been proven to be an accurate, noninvasive method for diagnosis of coronary artery disease with a high prognostic value. However, image artifacts, which decrease sensitivity and in particular specificity, degrade the clinical impact of this method. Soft tissue attenuation is regarded as one of the most important factors of impaired image quality. Different approaches to correct for tissue attenuation have been implemented by the camera manufacturers. The principle is to derive an attenuation map from the transmission data and to correct the emission data for nonuniform photon attenuation with this map. Several published reports have demonstrated an improved specificity with no substantial change in sensitivity by this method. To perform attenuation correction accurately, quality control measurements and adequate training of technologists and physicians are mandatory. (orig.)
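
    The stated principle, deriving a μ-map from the transmission scan and compensating the emission data with it, reduces in a toy one-dimensional setting to multiplying by the inverse of the accumulated attenuation. An illustrative Python sketch follows; clinical systems instead fold the μ-map into iterative reconstruction, and all numbers here are placeholders.

        import numpy as np

        mu = np.full(32, 0.15)        # toy attenuation map [1/cm] from transmission scan
        dx = 1.0                      # voxel size [cm]
        emission = np.zeros(32)
        emission[16] = 100.0          # point source mid-tissue

        # Attenuation accumulated from each voxel to a detector at index 0:
        path_mu = np.cumsum(mu * dx)              # integral of mu along the path
        detected = emission * np.exp(-path_mu)    # what the camera sees per voxel
        corrected = detected * np.exp(+path_mu)   # compensate using the mu-map

        print(detected[16], corrected[16])        # ~7.8 vs. the recovered 100.0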

  19. Accurate determination of characteristic relative permeability curves

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.

  20. Accurate radiative transfer calculations for layered media.

    Selden, Adrian C

    2016-07-01

    Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics. PMID:27409700

  1. Accurate basis set truncation for wavefunction embedding

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012); doi:10.1021/ct300544e] to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  2. Accurate pose estimation for forensic identification

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  3. Mixed Burst Error Correcting Codes

    Sethi, Amita

    2015-01-01

    In this paper, we construct codes which are an improvement on the previously known block wise burst error correcting codes in terms of their error correcting capabilities. Along with different bursts in different sub-blocks, the given codes also correct overlapping bursts of a given length in two consecutive sub-blocks of a code word. Such codes are called mixed burst correcting (mbc) codes.

  4. Accurate molecular classification of cancer using simple rules

    Gotoh Osamu

    2009-10-01

    Full Text Available Abstract Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensionality gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often hampers the interpretability of the models. For a better understanding of the classification results, it is desirable to develop simpler rule-based models with as few marker genes as possible. Methods We screened a small number of informative single genes and gene pairs on the basis of their depended degrees proposed in rough sets. Applying the decision rules induced by the selected genes or gene pairs, we constructed cancer classifiers. We tested the efficacy of the classifiers by leave-one-out cross-validation (LOOCV) of training sets and classification of independent test sets. Results We applied our methods to five cancerous gene expression datasets: leukemia (acute lymphoblastic leukemia [ALL] vs. acute myeloid leukemia [AML]), lung cancer, prostate cancer, breast cancer, and leukemia (ALL vs. mixed-lineage leukemia [MLL] vs. AML). Accurate classification outcomes were obtained by utilizing just one or two genes. Some genes that correlated closely with the pathogenesis of relevant cancers were identified. In terms of both classification performance and algorithm simplicity, our approach outperformed or at least matched existing methods. Conclusion In cancerous gene expression datasets, a small number of genes, even one or two if selected correctly, is capable of achieving an ideal cancer classification effect. This finding also means that very simple rules may perform well for cancerous class prediction.
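
    A minimal Python sketch of the single-gene threshold-rule idea with leave-one-out cross-validation, on synthetic expression data; the paper's rough-set "depended degree" criterion is not reproduced here, a plain misclassification count stands in for it.

        import numpy as np

        rng = np.random.default_rng(0)
        n, genes = 40, 100
        X = rng.normal(0, 1, (n, genes))
        y = np.repeat([0, 1], n // 2)
        X[y == 1, 7] += 2.0              # gene 7 is informative by construction

        def rule_error(expr, labels):
            """Best single-threshold rule error for one gene (midpoint thresholds)."""
            order = np.argsort(expr)
            e, l = expr[order], labels[order]
            thresholds = (e[:-1] + e[1:]) / 2
            errs = [min(np.mean((e > t) != l), np.mean((e <= t) != l))
                    for t in thresholds]
            return min(errs)

        best = min(range(genes), key=lambda g: rule_error(X[:, g], y))

        # Leave-one-out CV of the one-gene rule built on the selected gene.
        correct = 0
        for i in range(n):
            mask = np.arange(n) != i
            tr_e, tr_y = X[mask, best], y[mask]
            t = (tr_e[tr_y == 0].mean() + tr_e[tr_y == 1].mean()) / 2
            sign = 1 if tr_e[tr_y == 1].mean() > t else -1
            pred = int(sign * (X[i, best] - t) > 0)
            correct += pred == y[i]
        print(best, correct / n)         # usually gene 7, with high LOOCV accuracy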

  5. Accurate, fully-automated NMR spectral profiling for metabolomics.

    Siamak Ravanbakhsh

    Full Text Available Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile"-i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluids Nuclear Magnetic Resonance (NMR spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid, BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures including real biological samples (serum and CSF, defined mixtures and realistic computer generated spectra; involving > 50 compounds, show that BAYESIL can autonomously find the concentration of NMR-detectable metabolites accurately (~ 90% correct identification and ~ 10% quantification error, in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic publicly-accessible system that provides quantitative NMR spectral profiling effectively-with an accuracy on these biofluids that meets or exceeds the performance of trained experts. We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of

  6. Accuracy of 3D Virtual Planning of Corrective Osteotomies of the Distal Radius

    Stockmans, Filip; Dezillie, Marleen; Vanhaecke, Jeroen

    2013-01-01

    Corrective osteotomies of the distal radius for symptomatic malunion are time-tested procedures that rely on accurate corrections. Patients with combined intra- and extra-articular malunions present a challenging deformity. Virtual planning and patient-specific instruments (PSIs) to transfer the planning into the operating room have been used both to simplify the surgery and to make it more accurate. This report focuses on the clinically achieved accuracy in four patients treated between 2008...

  7. EDITORIAL: Politically correct physics?

    Pople, Stephen (Deputy Editor)

    1997-03-01

    If you were a caring, thinking, liberally minded person in the 1960s, you marched against the bomb, against the Vietnam war, and for civil rights. By the 1980s, your voice was raised about the destruction of the rainforests and the threat to our whole planetary environment. At the same time, you opposed discrimination against any group because of race, sex or sexual orientation. You reasoned that people who spoke or acted in a discriminatory manner should be discriminated against. In other words, you became politically correct. Despite its oft-quoted excesses, the political correctness movement sprang from well-founded concerns about injustices in our society. So, on balance, I am all for it. Or, at least, I was until it started to invade science. Biologists were the first to feel the impact. No longer could they refer to 'higher' and 'lower' orders, or 'primitive' forms of life. To the list of undesirable 'isms' - sexism, racism, ageism - had been added a new one: speciesism. Chemists remained immune to the PC invasion, but what else could you expect from a group of people so steeped in tradition that their principal unit, the mole, requires the use of the thoroughly unreconstructed gram? Now it is the turn of the physicists. This time, the offenders are not those who talk disparagingly about other people or animals, but those who refer to 'forms of energy' and 'heat'. Political correctness has evolved into physical correctness. I was always rather fond of the various forms of energy: potential, kinetic, chemical, electrical, sound and so on. My students might merge heat and internal energy into a single, fuzzy concept loosely associated with moving molecules. They might be a little confused at a whole new crop of energies - hydroelectric, solar, wind, geothermal and tidal - but they could tell me what devices turned chemical energy into electrical energy, even if they couldn't quite appreciate that turning tidal energy into geothermal energy wasn't part of the

  8. Temperature Corrected Bootstrap Algorithm

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
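
    The mixing step described above is compact enough to write out. A minimal Python sketch follows; the emissivity values and temperatures are placeholders, not the algorithm's calibrated 6 GHz coefficients.

        # Minimal sketch of the emissivity mixing step (placeholder values,
        # not the algorithm's calibrated 6 GHz coefficients).
        def effective_emissivity(ice_conc, e_ice=0.92, e_water=0.55):
            """Linear mixing of ice and open-water emissivities in a footprint."""
            return ice_conc * e_ice + (1.0 - ice_conc) * e_water

        def brightness_to_emissivity(tb_k, surface_temp_k):
            """Convert brightness temperature to emissivity, e = Tb / Ts
            (a simple approximation used here for illustration)."""
            return tb_k / surface_temp_k

        print(effective_emissivity(0.8))                  # footprint with 80% ice
        print(brightness_to_emissivity(235.0, 260.0))     # example Tb and Ts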

  9. Anomaly Corrected Heterotic Horizons

    Fontanella, A; Papadopoulos, G

    2016-01-01

    We consider supersymmetric near-horizon geometries in heterotic supergravity up to two loop order in sigma model perturbation theory. We identify the conditions for the horizons to admit enhancement of supersymmetry. We show that solutions which undergo supersymmetry enhancement exhibit an sl(2,R) symmetry, and we describe the geometry of their horizon sections. We also prove a modified Lichnerowicz type theorem, incorporating $\alpha'$ corrections, which relates Killing spinors to zero modes of near-horizon Dirac operators. Furthermore, we demonstrate that there are no AdS2 solutions in heterotic supergravity up to second order in $\alpha'$ for which the fields are smooth and the internal space is smooth and compact without boundary. We investigate a class of nearly supersymmetric horizons, for which the gravitino Killing spinor equation is satisfied on the spatial cross sections but not the dilatino one, and present a description of their geometry.

  10. XRF matrix corrections

    Full text: In order to obtain meaningful analytical information from an X-Ray Fluorescence spectrometer, it is necessary to correlate measured intensity values with sample concentrations. The ability to do this to a desired level of precision depends on taking care of a number of variables which influence measured intensity values. These variables include: the sample, which needs to be homogeneous, flat and critically thick to the analyte lines used for measurement; the spectrometer, which needs to perform any mechanical movements in a highly reproducible manner; the time taken to measure an analyte line, and the software, which needs to take care of detector dead-time, the contribution of background to the measured signal, the effects of line overlaps and matrix (absorption and enhancement) effects. This presentation will address commonly used correction procedures for matrix effects and their relative success in achieving their objective. Copyright (2002) Australian X-ray Analytical Association Inc

  11. Accurate FRET Measurements within Single Diffusing Biomolecules Using Alternating-Laser Excitation

    Lee, Nam Ki; Kapanidis, Achillefs N.; Wang, You; Michalet, Xavier; Mukhopadhyay, Jayanta; Ebright, Richard H.; Weiss, Shimon

    2005-01-01

    Fluorescence resonance energy transfer (FRET) between a donor (D) and an acceptor (A) at the single-molecule level currently provides qualitative information about distance, and quantitative information about kinetics of distance changes. Here, we used the sorting ability of confocal microscopy equipped with alternating-laser excitation (ALEX) to measure accurate FRET efficiencies and distances from single molecules, using corrections that account for cross-talk terms that contaminate the FRE...

  12. Accurate valence band width of diamond

    An accurate width is determined for the valence band of diamond by imaging photoelectron momentum distributions for a variety of initial- and final-state energies. The experimental result of 23.0±0.2 eV agrees well with first-principles quasiparticle calculations (23.0 and 22.88 eV) and significantly exceeds the local-density-functional width, 21.5±0.2 eV. This difference quantifies effects of creating an excited hole state (with associated many-body effects) in a band measurement vs studying ground-state properties treated by local-density-functional calculations. copyright 1997 The American Physical Society

  13. Toward Accurate and Quantitative Comparative Metagenomics.

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  14. Accurate Telescope Mount Positioning with MEMS Accelerometers

    Mészáros, László; Pál, András; Csépány, Gergely

    2014-01-01

    This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate and stateless positioning of telescope mounts. This provides a method completely independent from other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the sub-arcminute range, which is well below the field-of-view of conventional imaging telescope systems. Here we present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented as part of a telescope control system.

  15. Accurate estimation of indoor travel times

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan;

    2014-01-01

    We present the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood, both for routes traveled as well as for sub-routes thereof. InTraTime allows one to specify temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include a minimal-effort setup and self-improving operations due to unsupervised learning, as it is able to adapt implicitly to factors influencing indoor travel times such as elevators, rotating doors or changes in building layout. We evaluate and compare the proposed InTraTime method to indoor adaptions…

  16. Accurate sky background modelling for ESO facilities

    Full text: Ground-based measurements such as high resolution spectroscopy are heavily influenced by several physical processes. Amongst others, line absorption/emission, airglow from OH molecules, and scattering of photons within the earth's atmosphere make observations, in particular from facilities like the future European Extremely Large Telescope, a challenge. Additionally, emission from unresolved extrasolar objects, the zodiacal light, the moon and even thermal emission from the telescope and the instrument contribute significantly to the broad band background over a wide wavelength range. In our talk we review these influences and give an overview of how they can be accurately modeled to increase the overall precision of spectroscopic and imaging measurements. (author)

  17. Accurate Weather Forecasting for Radio Astronomy

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Services (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/~rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc, and, most importantly, temperature, pressure, humidity as a function of height. I use Liebe's MPM model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 Nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
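
    The radiative-transfer step can be illustrated with a toy plane-parallel layered model along the zenith; the absorption coefficients, temperatures and layer thicknesses below are invented for illustration only.

      import numpy as np

      def sky_brightness(alpha, temp, dz):
          """Zenith opacity (Np) and atmospheric brightness (K) from per-layer
          absorption alpha [Np/km], temperature [K] and thickness dz [km];
          layers ordered from the ground upward."""
          tau_layer = alpha * dz
          tau_below = np.concatenate(([0.0], np.cumsum(tau_layer)[:-1]))
          t_sky = np.sum(temp * (1.0 - np.exp(-tau_layer)) * np.exp(-tau_below))
          return tau_layer.sum(), t_sky

      alpha = np.array([0.020, 0.010, 0.005])  # hypothetical absorption per layer
      temp = np.array([280.0, 260.0, 230.0])   # hypothetical layer temperatures
      dz = np.array([1.0, 2.0, 4.0])           # layer thicknesses
      tau, t_sky = sky_brightness(alpha, temp, dz)
      print(f"zenith opacity = {tau:.3f} Np, sky brightness = {t_sky:.1f} K")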

  18. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially when the information is delayed. This is because travelers prefer the route reported to be in the best condition, while delayed information reflects past rather than current traffic conditions. Travelers then make wrong routing decisions, causing a decrease in capacity, an increase in oscillations, and a deviation of the system from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than the BR, the routes have equal probability of being chosen. Bounded rationality helps improve efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
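
    A minimal sketch of the boundedly rational routing rule described above, with hypothetical travel-time feedback values:

      import random

      def choose_route(cost_a, cost_b, br_threshold):
          """Boundedly rational route choice: if the reported difference
          between the two routes is below the threshold BR, pick either with
          probability one half; otherwise pick the route reported as better."""
          if abs(cost_a - cost_b) < br_threshold:
              return random.choice(["A", "B"])
          return "A" if cost_a < cost_b else "B"

      # Delayed feedback reports nearly equal travel times: random choice damps oscillation.
      print(choose_route(11.8, 12.1, br_threshold=0.5))
      # A clear difference still routes travelers to the better road.
      print(choose_route(9.0, 12.1, br_threshold=0.5))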

  19. Airborne experiment results for spaceborne atmospheric synchronous correction system

    Cui, Wenyu; Yi, Weining; Du, Lili; Liu, Xiao

    2015-10-01

    The image quality of optical remote sensing satellites is affected by the atmosphere, so the imagery needs to be corrected. Due to the spatial and temporal variability of atmospheric conditions, correction using synchronously measured atmospheric parameters can effectively improve remote sensing image quality. For this reason, a small, lightweight spaceborne instrument, the atmospheric synchronous correction device (airborne prototype), has been developed by AIOFM of CAS (Anhui Institute of Optics and Fine Mechanics of the Chinese Academy of Sciences). With this instrument, whose detection mode is time-synchronized and spatially covering, atmospheric parameters consistent in time and space with the images to be corrected can be obtained, and the correction is then achieved with a radiative transfer model. To verify the technical process and treatment effect of the spaceborne atmospheric correction system, the first airborne experiment was designed and completed. The experiment was implemented by the "satellite-airborne-ground" synchronous measuring method. A high resolution (0.4 m) camera and the atmospheric correction device were mounted on the aircraft, which photographed the ground simultaneously with the satellite observing overhead. Aerosol optical depth (AOD) and columnar water vapor (CWV) in the imaged area were also acquired and used for the atmospheric correction of the satellite and aerial images. Experimental results show that using the AOD and CWV retrieved by the device to correct aviation and satellite images can improve image definition and contrast by more than 30% and more than double the MTF, which means atmospheric correction of satellite images using data from the spaceborne atmospheric synchronous correction device is accurate and effective.

  20. An accurate δf method for neoclassical transport calculation

    Wang, W.X.; Nakajima, N.; Murakami, S.; Okamoto, M. [National Inst. for Fusion Science, Toki, Gifu (Japan)

    1999-03-01

    A δf method, solving drift kinetic equation, for neoclassical transport calculation is presented in detail. It is demonstrated that valid results essentially rely on the correct evaluation of marker density g in weight calculation. A general and accurate weighting scheme is developed without using some assumed g in weight equation for advancing particle weights, unlike the previous schemes. This scheme employs an additional weight function to directly solve g from its kinetic equation using the idea of δf method. Therefore the severe constraint that the real marker distribution must be consistent with the initially assumed g during a simulation is relaxed. An improved like-particle collision scheme is presented. By performing compensation for momentum, energy and particle losses arising from numerical errors, the conservations of all the three quantities are greatly improved during collisions. Ion neoclassical transport due to self-collisions is examined under finite banana case as well as zero banana limit. A solution with zero particle and zero energy flux (in case of no temperature gradient) over whole poloidal section is obtained. With the improvement in both like-particle collision scheme and weighting scheme, the δf simulation shows a significantly upgraded performance for neoclassical transport study. (author)

  1. A Distributed Weighted Voting Approach for Accurate Eye Center Estimation

    Gagandeep Singh

    2013-05-01

    Full Text Available This paper proposes a novel approach for accurate estimation of the eye center in face images. A distributed voting based approach, in which every pixel votes for potential eye center candidates, is adopted. The votes are distributed over a subset of pixels lying in the direction opposite to the gradient direction, and the vote weights are assigned according to a novel mechanism. First, the image is normalized to eliminate illumination variations and its edge map is generated using the Canny edge detector. Distributed voting is applied on the edge image to generate different eye center candidates. Morphological closing and local maxima search are used to reduce the number of candidates. A classifier based on spatial and intensity information is used to choose the correct candidates for the locations of the eye centers. The proposed approach was tested on the BioID face database and resulted in a better iris detection rate than the state-of-the-art. The proposed approach is robust against illumination variation, small pose variations, presence of eye glasses and partial occlusion of the eyes. Defence Science Journal, 2013, 63(3), pp. 292-297, DOI: http://dx.doi.org/10.14429/dsj.63.2763
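
    A simplified sketch of the voting step, assuming a linearly decaying vote weight as a stand-in for the paper's novel weighting mechanism; gradients come from numpy rather than a full Canny pipeline, and a synthetic dark disk mimics a pupil.

      import numpy as np

      def vote_eye_center(edges, gx, gy, ray_len=15):
          """Each edge pixel votes along the ray opposite to its gradient
          direction; weights decay linearly with distance (assumed scheme)."""
          h, w = edges.shape
          acc = np.zeros((h, w))
          for y, x in zip(*np.nonzero(edges)):
              norm = np.hypot(gx[y, x], gy[y, x])
              if norm == 0.0:
                  continue
              dx, dy = -gx[y, x] / norm, -gy[y, x] / norm
              for r in range(1, ray_len + 1):
                  vx, vy = int(round(x + r * dx)), int(round(y + r * dy))
                  if 0 <= vx < w and 0 <= vy < h:
                      acc[vy, vx] += 1.0 - r / (ray_len + 1.0)
          return acc  # candidate eye centers are local maxima of acc

      # Synthetic test: a dark disk (pupil-like) on a bright background.
      yy, xx = np.mgrid[0:41, 0:41]
      img = np.where(np.hypot(xx - 20, yy - 20) < 10, 0.0, 1.0)
      gy, gx = np.gradient(img)
      edges = np.hypot(gx, gy) > 0.1
      acc = vote_eye_center(edges, gx, gy)
      print(np.unravel_index(acc.argmax(), acc.shape))  # close to (20, 20)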

  2. Accurate measurement of RF exposure from emerging wireless communication systems

    Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are submitted to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation but with the same root-mean-square (RMS) power. The two probes do not provide accurate enough results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), nor for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with the emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known. In this case the measurement errors are shown to be systematic, and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.

  3. An accurate δf method for neoclassical transport calculation

    A δf method, solving drift kinetic equation, for neoclassical transport calculation is presented in detail. It is demonstrated that valid results essentially rely on the correct evaluation of marker density g in weight calculation. A general and accurate weighting scheme is developed without using some assumed g in weight equation for advancing particle weights, unlike the previous schemes. This scheme employs an additional weight function to directly solve g from its kinetic equation using the idea of δf method. Therefore the severe constraint that the real marker distribution must be consistent with the initially assumed g during a simulation is relaxed. An improved like-particle collision scheme is presented. By performing compensation for momentum, energy and particle losses arising from numerical errors, the conservations of all the three quantities are greatly improved during collisions. Ion neoclassical transport due to self-collisions is examined under finite banana case as well as zero banana limit. A solution with zero particle and zero energy flux (in case of no temperature gradient) over whole poloidal section is obtained. With the improvement in both like-particle collision scheme and weighting scheme, the δf simulation shows a significantly upgraded performance for neoclassical transport study. (author)

  4. Study of accurate volume measurement system for plutonium nitrate solution

    Hosoma, T. [Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan). Tokai Works

    1998-12-01

    It is important for effective safeguarding of nuclear materials to establish a technique for accurate volume measurement of plutonium nitrate solution in an accountancy tank. The volume of the solution can be estimated from two differential pressures between three dip-tubes, through which air is purged by a compressor. One of the differential pressures corresponds to the density of the solution, and the other corresponds to the surface level of the solution in the tank. The measurement of the differential pressure is subject to many sources of error, such as the precision of the pressure transducer, fluctuation of the back-pressure, generation of bubbles at the front of the dip-tubes, non-uniformity of temperature and density of the solution, pressure drop in the dip-tube, and so on. The various excess pressures in the volume measurement are discussed and corrected by a reasonable method. A high-precision differential pressure measurement system has been developed with a quartz-oscillation-type transducer, which converts a differential pressure to a digital signal. The developed system is used for inspection by the government and the IAEA. (M. Suetake)
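
    The two differential pressures map to density and level through simple hydrostatics; a minimal sketch with hypothetical readings (the tank calibration that converts level to volume, and all the excess-pressure corrections, are omitted).

      G = 9.80665  # standard gravity, m/s^2

      def solution_density(dp_density_pa, tube_separation_m):
          """Density from the differential pressure between two submerged dip-tubes."""
          return dp_density_pa / (G * tube_separation_m)

      def solution_level(dp_level_pa, density):
          """Liquid level above the lower dip-tube from the level differential pressure."""
          return dp_level_pa / (G * density)

      # Hypothetical readings: 3.5 kPa across a 0.25 m tube separation, 14.0 kPa level signal
      rho = solution_density(3500.0, 0.25)   # ~1428 kg/m^3
      level = solution_level(14000.0, rho)   # ~1.0 m
      print(f"density = {rho:.0f} kg/m^3, level = {level:.2f} m")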

  5. Accurate measurement of RF exposure from emerging wireless communication systems

    Letertre, Thierry; Monebhurrun, Vikass; Toffano, Zeno

    2013-04-01

    Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are submitted to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation but with the same root-mean-square (RMS) power. The two probes do not provide accurate enough results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), nor for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with the emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known. In this case the measurement errors are shown to be systematic, and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.

  6. Asymptotic expansion based equation of state for hard-disk fluids offering accurate virial coefficients

    Tian, Jianxiang; Mulero, A

    2016-01-01

    Despite the fact that more than 30 analytical expressions for the equation of state of hard-disk fluids have been proposed in the literature, none of them is capable of reproducing the currently accepted numeric or estimated values for the first eighteen virial coefficients. Using the asymptotic expansion method, extended to the first ten virial coefficients for hard-disk fluids, fifty-seven new expressions for the equation of state have been studied. Of these, a new equation of state is selected which reproduces accurately all the first eighteen virial coefficients. Comparisons for the compressibility factor with computer simulations show that this new equation is as accurate as other similar expressions with the same number of parameters. Finally, the location of the poles of the 57 new equations shows that there are some particular configurations which could give both the accurate virial coefficients and the correct closest packing fraction in the future when higher virial coefficients than the t...
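
    The object being fitted is the truncated virial series for the compressibility factor; a sketch using the first few reduced hard-disk virial coefficients (b2 = 2, b3 ≈ 3.128, b4 ≈ 4.258, b5 ≈ 5.337, quoted approximately from the literature, not from this paper):

      def compressibility(eta, b):
          """Z(eta) = 1 + sum_n b_n * eta^(n-1), truncated at the supplied
          coefficients; b holds b2, b3, ... and eta is the packing fraction."""
          return 1.0 + sum(bn * eta**(n + 1) for n, bn in enumerate(b))

      b = [2.0, 3.128, 4.258, 5.337]   # approximate reduced virial coefficients b2..b5
      for eta in (0.1, 0.3, 0.5):
          print(f"eta = {eta:.1f}  Z = {compressibility(eta, b):.3f}")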

  7. Addition of noise by scatter correction methods in PVI

    Effective scatter correction techniques are required to account for errors due to high scatter fraction seen in positron volume imaging (PVI). To be effective, the correction techniques must be accurate and practical, but they also must not add excessively to the statistical noise in the image. The authors have investigated the noise added by three correction methods: a convolution/subtraction method; a method that interpolates the scatter from the events outside the object; and a dual energy window method with and without smoothing of the scatter estimate. The methods were applied to data generated by Monte Carlo simulation to determine their effect on the variance of the corrected projections. The convolution and interpolation methods did not add significantly to the variance. The dual energy window subtraction method without smoothing increased the variance by a factor of more than twelve, but this factor was improved to 1.2 by smoothing the scatter estimate

  8. IRI topside correction

    The topside segment of the International Reference Ionosphere (IRI) electron density model (and also of the Bent model) is based on the limited amount of topside data available at the time (∼40,000 Alouette 1 profiles). Being established from such a small database it is therefore not surprising that these models have well-known shortcomings, for example, at high solar activities. Meanwhile a large data base of close to 200,000 topside profiles from Alouette 1, 2, and ISIS 1, 2 has become available online. A program of automated scaling and inversion of a large volume of digitized ionograms adds continuously to this data pool. We have used the currently available ISIS/Alouette topside profiles to evaluate the IRI topside model and to investigate ways of improving the model. The IRI model performs generally well at middle latitudes and shows discrepancies at low and high latitudes and these discrepancies are largest during high solar activity. In the upper topside IRI consistently overestimates the measurements. Based on averages of the data-model ratios we have established correction factors for the IRI model. These factors vary with altitude, modified dip latitude, and local time. (author)

  9. Real-time lens distortion correction: speed, accuracy and efficiency

    Bax, Michael R.; Shahidi, Ramin

    2014-11-01

    Optical lens systems suffer from nonlinear geometrical distortion. Optical imaging applications such as image-enhanced endoscopy and image-based bronchoscope tracking require correction of this distortion for accurate localization, tracking, registration, and measurement of image features. Real-time capability is desirable for interactive systems and live video. The use of a texture-mapping graphics accelerator, which is standard hardware on current motherboard chipsets and add-in video graphics cards, to perform distortion correction is proposed. Mesh generation for image tessellation, an error analysis, and performance results are presented. It is shown that distortion correction using commodity graphics hardware is substantially faster than using the main processor and can be performed at video frame rates (faster than 30 frames per second), and that the polar-based method of mesh generation proposed here is more accurate than a conventional grid-based approach. Using graphics hardware to perform distortion correction is not only fast and accurate but also efficient as it frees the main processor for other tasks, which is an important issue in some real-time applications.
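
    The core of such a corrector is an inverse mapping from undistorted output pixels back into the distorted source image; a CPU sketch with nearest-neighbour sampling and an assumed one-parameter radial model (the paper's GPU texture mapping and polar mesh generation are not reproduced here).

      import numpy as np

      def undistort(img, k1):
          """Correct first-order radial distortion r_d = r_u*(1 + k1*r_u^2)
          by inverse mapping with nearest-neighbour sampling."""
          h, w = img.shape
          cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
          yy, xx = np.mgrid[0:h, 0:w]
          xu, yu = (xx - cx) / cx, (yy - cy) / cy       # normalised output coords
          r2 = xu**2 + yu**2
          xd, yd = xu * (1 + k1 * r2), yu * (1 + k1 * r2)  # forward-distort to sample
          xs = np.clip(np.round(xd * cx + cx).astype(int), 0, w - 1)
          ys = np.clip(np.round(yd * cy + cy).astype(int), 0, h - 1)
          return img[ys, xs]

      corrected = undistort(np.random.rand(120, 160), k1=-0.15)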

  10. Accurate, Meshless Methods for Magneto-Hydrodynamics

    Hopkins, Philip F

    2016-01-01

    Recently, we developed a pair of meshless finite-volume Lagrangian methods for hydrodynamics: the 'meshless finite mass' (MFM) and 'meshless finite volume' (MFV) methods. These capture advantages of both smoothed-particle hydrodynamics (SPH) and adaptive mesh-refinement (AMR) schemes. Here, we extend these to include ideal magneto-hydrodynamics (MHD). The MHD equations are second-order consistent and conservative. We augment these with a divergence-cleaning scheme, which maintains ∇·B ≈ 0 to high accuracy. We implement these in the code GIZMO, together with a state-of-the-art implementation of SPH MHD. In every one of a large suite of test problems, the new methods are competitive with moving-mesh and AMR schemes using constrained transport (CT) to ensure ∇·B = 0. They are able to correctly capture the growth and structure of the magneto-rotational instability (MRI), MHD turbulence, and the launching of magnetic jets, in some cases converging more rapidly than AMR codes. Compared to SPH, the MFM/MFV methods e...

  11. Accurate fission data for nuclear safety

    Solders, A; Jokinen, A; Kolhinen, V S; Lantz, M; Mattera, A; Penttila, H; Pomp, S; Rakopoulos, V; Rinta-Antila, S

    2013-01-01

    The Accurate fission data for nuclear safety (AlFONS) project aims at high precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high current light ion cyclotron at the University of Jyvaskyla. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies 1 - 30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons...

  12. Fast and Provably Accurate Bilateral Filtering.

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
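
    For reference, the direct O(S)-per-pixel filter that the paper accelerates looks as follows; this is a naive baseline sketch, not a reconstruction of the proposed O(1) approximation.

      import numpy as np

      def bilateral(img, sigma_s, sigma_r, radius):
          """Direct bilateral filter with Gaussian spatial and range kernels;
          cost per pixel grows with the (2*radius+1)^2 support S."""
          h, w = img.shape
          out = np.zeros_like(img)
          ax = np.arange(-radius, radius + 1)
          xx, yy = np.meshgrid(ax, ax)
          spatial = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_s**2))
          pad = np.pad(img, radius, mode="edge")
          for y in range(h):
              for x in range(w):
                  patch = pad[y:y + 2*radius + 1, x:x + 2*radius + 1]
                  rng = np.exp(-(patch - img[y, x])**2 / (2.0 * sigma_r**2))
                  wgt = spatial * rng
                  out[y, x] = (wgt * patch).sum() / wgt.sum()
          return out

      out = bilateral(np.random.rand(32, 32), sigma_s=3.0, sigma_r=0.1, radius=6)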

  13. A self-interaction-free local hybrid functional: Accurate binding energies vis-à-vis accurate ionization potentials from Kohn-Sham eigenvalues

    Schmidt, Tobias; Makmal, Adi; Kronik, Leeor; Kümmel, Stephan

    2014-01-01

    We present and test a new approximation for the exchange-correlation (xc) energy of Kohn-Sham density functional theory. It combines exact exchange with a compatible non-local correlation functional. The functional is by construction free of one-electron self-interaction, respects constraints derived from uniform coordinate scaling, and has the correct asymptotic behavior of the xc energy density. It contains one parameter that is not determined ab initio. We investigate whether it is possible to construct a functional that yields accurate binding energies and affords other advantages, specifically Kohn-Sham eigenvalues that reliably reflect ionization potentials. Tests for a set of atoms and small molecules show that within our local-hybrid form accurate binding energies can be achieved by proper optimization of the free parameter in our functional, along with an improvement in dissociation energy curves and in Kohn-Sham eigenvalues. However, the correspondence of the latter to experimental ionization potent...

  14. Food systems in correctional settings

    Smoyer, Amy; Kjær Minke, Linda

    Food is a central component of life in correctional institutions and plays a critical role in the physical and mental health of incarcerated people and the construction of prisoners' identities and relationships. An understanding of the role of food in correctional settings and the effective management of food systems may improve outcomes for incarcerated people and help correctional administrators to maximize their health and safety. This report summarizes existing research on food systems in correctional settings and provides examples of food programmes in prison and remand facilities, including a case study of food-related innovation in the Danish correctional system. It offers specific conclusions for policy-makers, administrators of correctional institutions and prison-food-service professionals, and makes proposals for future research.

  15. Comparison of Topographic Correction Methods

    Rudolf Richter

    2009-07-01

    Full Text Available A comparison of topographic correction methods is conducted for Landsat-5 TM, Landsat-7 ETM+, and SPOT-5 imagery from different geographic areas and seasons. Three successful and known methods are compared: the semi-empirical C correction, the Gamma correction depending on the incidence and exitance angles, and a modified Minnaert approach. In the majority of cases the modified Minnaert approach performed best, but no method is superior in all cases.
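
    Two of the compared corrections reduce to one-line formulas; the sketch below implements the semi-empirical C correction and the classical Minnaert photometric law (the modified Minnaert variant used in the paper may differ), with illustrative angle and parameter values.

      import numpy as np

      def c_correction(refl, cos_i, cos_sz, c):
          """Teillet C correction: rho_h = rho * (cos(sz) + c) / (cos(i) + c),
          with i the local illumination angle and sz the solar zenith angle."""
          return refl * (cos_sz + c) / (cos_i + c)

      def minnaert(refl, cos_i, cos_e, k):
          """Classical Minnaert law L = L_n * cos(i)^k * cos(e)^(k-1), inverted
          to recover the normalised reflectance L_n."""
          return refl / (cos_i**k * cos_e**(k - 1))

      cos_i, cos_sz, cos_e = np.cos(np.radians([60.0, 35.0, 5.0]))
      print(c_correction(0.18, cos_i, cos_sz, c=0.4))  # illustrative values
      print(minnaert(0.18, cos_i, cos_e, k=0.7))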

  16. Health care in correctional facilities.

    Thorburn, K M

    1995-01-01

    More than 1.3 million adults are in correctional facilities, including jails and federal and state prisons, in the United States. Health care of the inmates is an integral component of correctional management. Health services in correctional facilities underwent dramatic improvements during the 1970s. Public policy trends beginning in the early 1980s substantially affected the demographics and health status of jail and prison populations and threatened earlier gains in the health care of inma...

  17. Corrective Feedback and Teacher Development

    Ellis, Rod

    2009-01-01

    This article examines a number of controversies relating to how corrective feedback (CF) has been viewed in SLA and language pedagogy. These controversies address (1) whether CF contributes to L2 acquisition, (2) which errors should be corrected, (3) who should do the correcting (the teacher or the learner him/herself), (4) which type of CF is the most effective, and (5) what is the best timing for CF (immediate or delayed). In discussing these controversies, both the pedagogic and SLA litera...

  18. Cool Cluster Correctly Correlated

    Sergey Aleksandrovich Varganov

    2005-12-17

    Atomic clusters are unique objects, which occupy an intermediate position between atoms and condensed matter systems. For a long time it was thought that physical and chemical properties of atomic clusters monotonically change with increasing size of the cluster from a single atom to a condensed matter system. However, recently it has become clear that many properties of atomic clusters can change drastically with the size of the clusters. Because physical and chemical properties of clusters can be adjusted simply by changing the cluster's size, different applications of atomic clusters were proposed. One example is the catalytic activity of clusters of specific sizes in different chemical reactions. Another example is a potential application of atomic clusters in microelectronics, where their band gaps can be adjusted by simply changing cluster sizes. In recent years significant advances in experimental techniques have allowed one to synthesize and study atomic clusters of specified sizes. However, the interpretation of the results is often difficult. Theoretical methods are frequently used to help in the interpretation of complex experimental data. Most of the theoretical approaches have been based on empirical or semiempirical methods. These methods allow one to study large and small clusters using the same approximations. However, since empirical and semiempirical methods rely on simple models with many parameters, it is often difficult to estimate the quantitative and even qualitative accuracy of the results. On the other hand, because of significant advances in quantum chemical methods and computer capabilities, it is now possible to do high quality ab-initio calculations not only on systems of few atoms but on clusters of practical interest as well. In addition to accurate results for specific clusters, such methods can be used for benchmarking of different empirical and semiempirical approaches. The atomic clusters studied in this work contain from a few atoms

  19. QCD corrections to triboson production

    Lazopoulos, Achilleas; Melnikov, Kirill; Petriello, Frank

    2007-07-01

    We present a computation of the next-to-leading order QCD corrections to the production of three Z bosons at the Large Hadron Collider. We calculate these corrections using a completely numerical method that combines sector decomposition to extract infrared singularities with contour deformation of the Feynman parameter integrals to avoid internal loop thresholds. The NLO QCD corrections to pp→ZZZ are approximately 50% and are badly underestimated by the leading order scale dependence. However, the kinematic dependence of the corrections is minimal in phase space regions accessible at leading order.

  20. Entropic Corrections to Coulomb's Law

    Hendi, S. H.; Sheykhi, A.

    2012-04-01

    Two well-known quantum corrections to the area law have been introduced in the literature, namely logarithmic and power-law corrections. Logarithmic corrections arise in loop quantum gravity from thermal equilibrium fluctuations and quantum fluctuations, while power-law corrections appear in dealing with the entanglement of quantum fields inside and outside the horizon. Inspired by Verlinde's argument on the entropic force, and assuming the quantum-corrected relation for the entropy, we propose in this note an entropic origin for Coulomb's law. We also investigate the Uehling potential as a radiative correction to the Coulomb potential at 1-loop order and show that for some values of the distance the entropic correction to Coulomb's law is compatible with the vacuum-polarization correction in QED. We thus derive the modified Coulomb's law as well as the entropy-corrected Poisson equation governing the evolution of the scalar potential ϕ. Our study further supports the unification of gravity and electromagnetic interactions based on the holographic principle.

  1. Accurate paleointensities - the multi-method approach

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed - the pseudo-Thellier protocol - which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  2. Towards Accurate Application Characterization for Exascale (APEX)

    Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines and many large capability resources including ASCI Red and RedStorm were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.

  3. Accurate hydrocarbon estimates attained with radioactive isotope

    To make accurate economic evaluations of new discoveries, an oil company needs to know how much gas and oil a reservoir contains. The porous rocks of these reservoirs are not completely filled with gas or oil, but contain a mixture of gas, oil and water. It is extremely important to know what volume percentage of this water--called connate water--is contained in the reservoir rock. The percentage of connate water can be calculated from electrical resistivity measurements made downhole. The accuracy of this method can be improved if a pure sample of connate water can be analyzed or if the chemistry of the water can be determined by conventional logging methods. Because of the similarity of the mud filtrate--the water in a water-based drilling fluid--and the connate water, this is not always possible. If the oil company cannot distinguish between connate water and mud filtrate, its oil-in-place calculations could be incorrect by ten percent or more. It is clear that unless an oil company can be sure that a sample of connate water is pure, or at the very least knows exactly how much mud filtrate it contains, its assessment of the reservoir's water content--and consequently its oil or gas content--will be distorted. The oil companies have opted for the Repeat Formation Tester (RFT) method. Label the drilling fluid with small doses of tritium--a radioactive isotope of hydrogen--and it will be easy to detect and quantify in the sample

  4. Towards a more accurate concept of fuels

    Full text: The introduction of LEU in Atucha and the approval of CARA show an advancement in Argentine power station fuels, which stimulates and points out a direction to follow. In the first case, the use of enriched U fuel relaxes an important restriction related to neutronic economy; that means that it is possible to design less penalized fuels using more Zry. The second case allows a decrease in the lineal power of the rods, enabling a better performance of the fuel in normal and also in accident conditions. In this work we wish to emphasize this last point, trying to find a design in which the surface power of the rod is diminished. Hence, in accident conditions involving loss of coolant, the cladding tube will not reach temperatures that would produce oxidation, with the corresponding H2 formation, nor enough plasticity to form blisters that would obstruct reflooding, nor hydriding that would produce fragility and rupture of the cladding tube, with the corresponding dispersion of radioactive material. This work is oriented to finding rod designs with quasi-rectangular geometry to lower the surface power of the rods, in order to obtain a lower central temperature of the rod. Thus, critical temperatures will not be reached in case of loss of coolant. This design is becoming a reality after PPFAE's efforts toward fabricating cladding tubes with different circumferential profiles, rectangular in particular. This geometry, with an appropriate pellet design, can minimize the pellet-cladding interaction and, through accurate width selection, non-rectified pellets could be used. This means an important economy in pellet production, as well as an advance in the fabrication of fuels in glove boxes and hot cells in the future. The sequence to determine critical geometrical parameters is described and some rod dispositions are explored

  5. Accurate orbit propagation with planetary close encounters

    Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca

    2015-08-01

    We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging both from the point of view of the dynamical stability of the formulation and the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size will also be changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep, formulation and initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee the numerical stability during the propagation; 3) a new definition of the region of influence in the phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and of the propagator represented by a variable step-size and order multistep method combined with Cowell's formulation (i.e. direct integration of position and velocity in either the physical or a fictitious time).

  6. Fast, accurate standardless XRF analysis with IQ+

    Full text: Due to both chemical and physical effects, the most accurate XRF data are derived from calibrations set up using in-type standards, necessitating some prior knowledge of the samples being analysed. Whilst this is often the case for routine samples, particularly in production control, for completely unknown samples the identification and availability of in-type standards can be problematic. Under these circumstances standardless analysis can offer a viable solution. Successful analysis of completely unknown samples requires a complete chemical overview of the specimen together with the flexibility of a fundamental parameters (FP) algorithm to handle wide-ranging compositions. Although FP algorithms are improving all the time, most still require set-up samples to define the spectrometer response to a particular element. Whilst such materials may be referred to as standards, the emphasis in this kind of analysis is that only a single calibration point is required per element and that the standard chosen does not have to be in-type. The high sensitivities of modern XRF spectrometers, together with recent developments in detector counting electronics that possess a large dynamic range and high-speed data processing capacity, bring significant advances to fast, standardless analysis. Illustrated with a tantalite-columbite heavy-mineral concentrate grading use-case, this paper will present the philosophy behind the semi-quantitative IQ+ software and the required hardware. This combination can give a rapid scan-based overview and quantification of the sample in less than two minutes, together with the ability to define channels for specific elements of interest where higher accuracy and lower levels of quantification are required. The accuracy, precision and limitations of standardless analysis will be assessed using certified reference materials of widely differing chemical and physical composition. Copyright (2002) Australian X-ray Analytical Association Inc

  7. PET measurements of cerebral metabolism corrected for CSF contributions

    Thirty-three subjects have been studied with PET and anatomic imaging (proton-NMR and/or CT) in order to determine the effect of cerebral atrophy on calculations of metabolic rates. Subgroups of neurologic disease investigated include stroke, brain tumor, epilepsy, psychosis, and dementia. Anatomic images were digitized through a Vidicon camera and analyzed volumetrically. Relative areas for ventricles, sulci, and brain tissue were calculated. Preliminary analysis suggests that ventricular volumes as determined by NMR and CT are similar, while sulcal volumes are larger on NMR scans. Metabolic rates (18F-FDG) were calculated before and after correction for CSF spaces, with initial focus upon dementia and normal aging. Correction for atrophy led to a greater increase (%) in global metabolic rates in demented individuals (18.2 ± 5.3) compared to elderly controls (8.3 ± 3.0, p < .05). A trend towards significantly lower glucose metabolism in demented subjects before CSF correction was not seen following correction for atrophy. These data suggest that volumetric analysis of NMR images may more accurately reflect the degree of cerebral atrophy, since NMR does not suffer from beam hardening artifact due to bone-parenchyma juxtapositions. Furthermore, appropriate correction for CSF spaces should be employed if current resolution PET scanners are to accurately measure residual brain tissue metabolism in various pathological states
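
    The correction itself amounts to rescaling the global rate to the tissue fraction, assuming the CSF spaces contribute no metabolism; the volume fractions and rate below are hypothetical, and the record does not state the exact formula used.

      def atrophy_corrected_cmr(global_cmr, csf_fraction):
          """Rescale a global metabolic rate to brain tissue only,
          treating CSF (ventricles + sulci) as metabolically inert."""
          return global_cmr / (1.0 - csf_fraction)

      # Hypothetical: measured CMRglc 4.2 mg/100 g/min, CSF fraction 18%
      print(atrophy_corrected_cmr(4.2, 0.18))  # ~5.12 after correction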

  8. Evaluation of inhomogeneity correction algorithm in 3DCRT for the purpose of gated treatments

    It has been established that tumors in the chest and abdomen, such as lung tumors, do move during the course of treatment, and that it is more accurate to treat them with gated imaging and treatment. However, the increased dose per fraction delivered in this kind of treatment, together with the tighter margins, makes it imperative to verify that inhomogeneity corrections are applied accurately in the treatment planning system. The purpose of this work is to check the inhomogeneity corrections applied in the treatment planning system in terms of phantom measurements and to relate them to other methods of correction such as ETPR

  9. Error analysis and correction for laser speckle photography

    Song, Y.Z.; Kulenovic, R.; Groll, M. [Univ. Stuttgart (Germany). Inst. of Nuclear Technology and Energy Systems

    1995-12-31

    This paper deals with the error analysis of experimental data from a laser speckle photography (LSP) application which measures the temperature field of natural convection around a heated cylindrical tube. A method for error correction is proposed and presented in detail. Experimental and theoretical investigations have shown that errors in the measurements are induced by four causes. These error sources are discussed and suggestions to avoid the errors are given. Based on the error analysis and the introduced correction methods, the temperature distribution, and hence the temperature gradient in the thermal boundary layer, can be obtained more accurately.

  10. Jet Energy Corrections at CMS

    Santocchia, Attilio

    2009-01-01

    Many physics measurements in CMS will rely on the precise reconstruction of jets. Correction of the raw jet energy measured by the CMS detector will be a fundamental step for most of the analyses where hadron activity is investigated. Jet correction plans in CMS have been widely studied for different conditions: at start-up, simulation tuned on test-beam data will be used. Then data-driven methods will become available and finally, simulation tuned on collision data will give us the ultimate procedure for calculating jet corrections. The jet transverse energy is corrected first for pile-up and noise offset; correction for the response of the calorimeter as a function of jet pseudorapidity relative to the barrel comes afterwards, and correction for the absolute response as a function of transverse momentum in the barrel is the final standard sub-correction applied. Other effects like flavour and parton corrections will optionally be applied to the jet $E_T$ depending on the measurement's requirements. In this paper w...
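
    The factorised chain reads naturally as code; the calibration curves below are toy stand-ins for the real CMS constants, and the function names are hypothetical.

      def correct_jet_pt(raw_pt, eta, offset, rel_eta, abs_pt):
          """Factorised jet energy correction in the order described above:
          offset (pile-up/noise) -> relative-eta response -> absolute-pT response."""
          pt = raw_pt - offset      # subtract pile-up and noise offset
          pt *= rel_eta(eta)        # flatten response vs pseudorapidity
          pt *= abs_pt(pt)          # absolute response in the barrel
          return pt

      # Toy calibration curves standing in for the real correction constants
      rel_eta = lambda eta: 1.0 + 0.02 * eta**2
      abs_pt = lambda pt: 1.1 - 0.05 * min(pt / 100.0, 1.0)
      print(correct_jet_pt(85.0, eta=1.3, offset=4.0, rel_eta=rel_eta, abs_pt=abs_pt))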

  11. Correct and efficient accelerator programming

    Cohen, Albert; Donaldson, Alistair F.; Huisman, Marieke; Katoen, Joost-Pieter

    2013-01-01

    This report documents the program and the outcomes of Dagstuhl Seminar 13142 “Correct and Efficient Accelerator Programming”. The aim of this Dagstuhl seminar was to bring together researchers from various sub-disciplines of computer science to brainstorm and discuss the theoretical foundations, design and implementation of techniques and tools for correct and efficient accelerator programming.

  12. Fine-Tuning Corrective Feedback.

    Han, ZhaoHong

    2001-01-01

    Explores the notion of "fine-tuning" in connection with the corrective feedback process. Describes a longitudinal case study, conducted in the context of Norwegian as a second language, that shows how fine-tuning and the lack thereof in the provision of written corrective feedback differentially affects a second language learner's restructuring of…

  13. Shell corrections in stopping powers

    Bichsel, H.

    2002-05-01

    One of the theories of the electronic stopping power S for fast light ions was derived by Bethe. The algorithm currently used for the calculation of S includes terms known as the mean excitation energy I, the shell correction, the Barkas correction, and the Bloch correction. These terms are described here. For the calculation of the shell corrections an atomic model is used, which is more realistic than the hydrogenic approximation used so far. A comparison is made with similar calculations in which the local plasma approximation is utilized. Close agreement with the experimental data for protons with energies from 0.3 to 10 MeV traversing Al and Si is found without the need for adjustable parameters for the shell corrections.
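
    In the standard textbook decomposition the record alludes to (quoted here from general knowledge, not from the paper itself), the stopping number $L$ collects exactly the listed terms:

      $$ S \;=\; \frac{4\pi e^{4} z^{2} N Z}{m_e v^{2}}\, L,
      \qquad
      L \;=\; \ln\!\left(\frac{2 m_e v^{2}}{I}\right) \;-\; \frac{C}{Z} \;+\; z\,L_{1} \;+\; z^{2} L_{2}, $$

    where $I$ is the mean excitation energy, $C/Z$ the shell correction, $z L_1$ the Barkas correction and $z^2 L_2$ the Bloch correction; this is the nonrelativistic form appropriate for the fast light ions considered.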

  14. Relativistic corrections to stopping powers

    Relativistic corrections to the nonrelativistic Bethe-Bloch formula for the stopping power of matter for charged particles are traditionally computed by considering close collisions separately from distant collisions. The close collision contribution is further divided into the Mott correction appropriate for very small impact parameters, and the Bloch correction, computed for larger values. This division of the region of close collisions leads to a very cumbersome result if one generalizes the original Bloch procedure to relativistic energies. The authors avoid the resulting poorly specified scattering angle θ0 that divides the Mott and Bloch correction regimes by using the procedure suggested by Lindhard and applied by Golovchenko, Cox and Goland to determine the Bloch correction for relativistic velocities. 25 references, 2 figures

  15. Accurate calculation of (31)P NMR chemical shifts in polyoxometalates.

    Pascual-Borràs, Magda; López, Xavier; Poblet, Josep M

    2015-04-14

    We search for the best density functional theory strategy for the determination of (31)P nuclear magnetic resonance (NMR) chemical shifts, δ((31)P), in polyoxometalates. Among the variables governing the quality of the quantum modelling, we tackle herein the influence of the functional and the basis set. The spin-orbit and solvent effects were routinely included. To do so we analysed the family of structures α-[P2W18-xMxO62](n-) with M = Mo(VI), V(V) or Nb(V); [P2W17O62(M'R)](n-) with M' = Sn(IV), Ge(IV) and Ru(II); and [PW12-xMxO40](n-) with M = Pd(IV), Nb(V) and Ti(IV). The main results suggest that, to date, the best procedure for the accurate calculation of δ((31)P) in polyoxometalates is the combination of TZP/PBE//TZ2P/OPBE (for the NMR//optimization steps). The hybrid functionals (PBE0, B3LYP) tested herein for the NMR step, besides being more CPU-consuming, do not outperform pure GGA functionals. Although previous studies on (183)W NMR suggested that the use of very large basis sets like QZ4P was needed for geometry optimization, the present results indicate that TZ2P suffices if the functional is optimal. Moreover, scaling corrections were applied to the results, providing low mean absolute errors below 1 ppm for δ((31)P), which is a step forward in confirming or predicting chemical shifts in polyoxometalates. Finally, via a simplified molecular model, we establish how the small variations in δ((31)P) arise from energy changes in the occupied and virtual orbitals of the PO4 group. PMID:25738630

  16. Scattering Correction For Image Reconstruction In Flash Radiography

    Cao, Liangzhi; Wang, Mengqi; Wu, Hongchun; Liu, Zhouyu; Cheng, Yuxiong; Zhang, Hongbo [Xi' an Jiaotong Univ., Xi' an (China)

    2013-08-15

    Scattered photons cause blurring and distortions in flash radiography, reducing the accuracy of image reconstruction significantly. The effect of the scattered photons is taken into account and an iterative subtraction of the scattered photons is proposed to correct the scattering effect for image restoration. In order to subtract the scattering contribution, the flux of scattered photons is estimated as the sum of two components. The single scattered component is calculated accurately together with the uncollided flux along the characteristic ray, while the multiple scattered component is evaluated using correction coefficients pre-obtained from Monte Carlo simulations. The arbitrary geometry pretreatment and ray tracing are carried out based on the customization of AutoCAD. With the above model, an Iterative Procedure for image restORation code, IPOR, is developed. Numerical results demonstrate that the IPOR code is much more accurate than the direct reconstruction solution without scattering correction and that it has a very high computational efficiency.
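
    A schematic of the iterative deduction loop, with a toy scatter model (30% of a Gaussian-blurred direct flux) standing in for the single-scatter calculation and the Monte Carlo correction coefficients used by IPOR:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def iterative_scatter_correction(measured, scatter_model, n_iter=5):
          """Repeatedly subtract the scatter implied by the current estimate
          of the direct (uncollided) image from the measurement."""
          direct = measured.copy()
          for _ in range(n_iter):
              direct = np.clip(measured - scatter_model(direct), 0.0, None)
          return direct

      # Toy scatter model: 30% of a strongly blurred version of the direct flux.
      toy_scatter = lambda direct: 0.3 * gaussian_filter(direct, sigma=8.0)

      measured = np.random.rand(64, 64) + 0.3
      direct = iterative_scatter_correction(measured, toy_scatter)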

  17. Scattering Correction For Image Reconstruction In Flash Radiography

    Scattered photons cause blurring and distortions in flash radiography, reducing the accuracy of image reconstruction significantly. The effect of the scattered photons is taken into account and an iterative subtraction of the scattered photons is proposed to correct the scattering effect for image restoration. In order to subtract the scattering contribution, the flux of scattered photons is estimated as the sum of two components. The single scattered component is calculated accurately together with the uncollided flux along the characteristic ray, while the multiple scattered component is evaluated using correction coefficients pre-obtained from Monte Carlo simulations. The arbitrary geometry pretreatment and ray tracing are carried out based on the customization of AutoCAD. With the above model, an Iterative Procedure for image restORation code, IPOR, is developed. Numerical results demonstrate that the IPOR code is much more accurate than the direct reconstruction solution without scattering correction and that it has a very high computational efficiency

  18. Thermal Correction to the Molar Polarizability of a Boltzmann Gas

    Jentschura, U D; Mohr, P J

    2013-01-01

    Metrology in atomic physics has been crucial for a number of advanced determinations of fundamental constants. In addition to very precise frequency measurements, the molar polarizability of an atomic gas has recently also been measured very accurately. Part of the motivation for the measurements is due to ongoing efforts to redefine the International System of Units (SI), for which an accurate value of the Boltzmann constant is needed. Here, we calculate the dominant shift of the molar polarizability in an atomic gas due to thermal effects. It is given by the relativistic correction to the dipole interaction, which emerges when the probing electric field is Lorentz transformed into the rest frame of the atoms that undergo thermal motion. While this effect is small when compared to currently available experimental accuracy, the relativistic correction to the dipole interaction is much larger than the thermal shift of the polarizability induced by blackbody radiation.

  19. Thermal correction to the molar polarizability of a Boltzmann gas

    Jentschura, U. D.; Puchalski, M.; Mohr, P. J.

    2011-12-01

    Metrology in atomic physics has been crucial for a number of advanced determinations of fundamental constants. In addition to very precise frequency measurements, the molar polarizability of an atomic gas has recently also been measured very accurately. Part of the motivation for the measurements is due to ongoing efforts to redefine the International System of Units (SI), for which an accurate value of the Boltzmann constant is needed. Here we calculate the dominant shift of the molar polarizability in an atomic gas due to thermal effects. It is given by the relativistic correction to the dipole interaction, which emerges when the probing electric field is Lorentz transformed into the rest frame of the atoms that undergo thermal motion. While this effect is small when compared to currently available experimental accuracy, the relativistic correction to the dipole interaction is much larger than the thermal shift of the polarizability induced by blackbody radiation.
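
    The size of this effect can be anticipated by dimensional analysis: the relativistic correction to the dipole coupling enters at order v²/c², and thermal averaging over a Boltzmann distribution gives ⟨v²⟩ = 3k_BT/m. A schematic estimate (the scaling only; the paper computes the actual coefficient) is

        \frac{\delta\alpha(T)}{\alpha} \sim \frac{\langle v^2 \rangle}{c^2}
            = \frac{3 k_B T}{m c^2},

    which for helium at T = 300 K is of order 2 × 10⁻¹¹, indeed far below current experimental accuracy but, as the abstract notes, still well above the blackbody-radiation shift.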

  20. Evaluation of QNI corrections in porous media applications

    Radebe, M. J.; de Beer, F. C.; Nshimirimana, R.

    2011-09-01

    Qualitative measurements using digital neutron imaging have been explored more thoroughly than accurate quantitative measurements. The reason for this bias is that quantitative measurements require correction for background and material scatter, and for neutron spectral effects. The Quantitative Neutron Imaging (QNI) software package has resulted from efforts at the Paul Scherrer Institute, Helmholtz Zentrum Berlin (HZB) and Necsa to correct for these effects, while the sample-detector distance (SDD) principle has previously been demonstrated as a measure to eliminate the material scatter effect. This work evaluates the capabilities of the QNI software package to produce accurate quantitative results on specific characteristics of porous media, and its role in the nondestructive quantification of materials with and without calibration. The work further complements QNI's abilities by the use of different SDDs. Studies of the effective %porosity of mortar and the attenuation coefficient of water using QNI and the SDD principle are reported.
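
    For the porosity study, the underlying arithmetic is simple once scatter has been corrected: transmission gives an effective attenuation coefficient, and the deficit relative to the solid material's coefficient measures the pore fraction. A minimal Python sketch (assuming Beer-Lambert attenuation and non-attenuating pores, which is a simplification):

        import numpy as np

        def percent_porosity(I, I0, thickness_cm, mu_solid):
            # effective attenuation coefficient from transmission, 1/cm
            mu_eff = -np.log(I / I0) / thickness_cm
            # pores attenuate ~nothing, so the deficit is the pore fraction
            return 100.0 * (1.0 - mu_eff / mu_solid)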

  1. Surface Consistent Finite Frequency Phase Corrections

    Kimman, W. P.

    2016-04-01

    Static time-delay corrections are frequency independent and ignore velocity variations away from the assumed vertical ray-path through the subsurface. There is therefore a clear potential for improvement if the finite frequency nature of wave propagation can be properly accounted for. Such a method is presented here based on the Born approximation, the assumption of surface consistency, and the misfit of instantaneous phase. The concept of instantaneous phase lends itself very well to sweep-like signals, hence these are the focus of this study. Analytical sensitivity kernels are derived that accurately predict frequency dependent phase shifts due to P-wave anomalies in the near surface. They are quick to compute and robust near the source and receivers. An additional correction is presented that re-introduces the non-linear relation between model perturbation and phase delay, which becomes relevant for stronger velocity anomalies. The phase shift as a function of frequency is a slowly varying signal, so its computation does not require fine sampling even for broadband sweeps. The kernels reveal interesting features of the sensitivity of seismic arrivals to the near surface: small anomalies can have a relatively large impact resulting from the medium field term that is dominant near the source and receivers. Furthermore, even simple velocity anomalies can produce a distinct frequency dependent phase behaviour. Unlike statics, the predicted phase corrections are smooth in space. Verification with spectral element simulations shows an excellent match for the predicted phase shifts over the entire seismic frequency band. Applying the phase shift to the reference sweep corrects for wavelet distortion, making the technique akin to surface consistent deconvolution, even though no division in the spectral domain is involved. As long as multiple scattering is mild, surface consistent finite frequency phase corrections outperform traditional statics for moderately large

  2. High order QED corrections in Z physics

    In this thesis a number of calculations of higher order QED corrections are presented, all applying to the standard LEP/SLC processes e+e- → f-bar f, where f stands for any fermion. In cases where f ≠ e-, νe, the above process is only possible via annihilation of the incoming electron-positron pair. At LEP/SLC this mainly occurs via the production and subsequent decay of a Z boson, i.e. the cross section is heavily dominated by the Z resonance. These processes and the corrections to them, treated in a semi-analytical way, are discussed (ch. 2). In the case f = e- (Bhabha scattering) the process can also occur via the exchange of a virtual photon in the t-channel. Since the latter contribution is dominant at small scattering angles, one has to exclude these angles if one is interested in Z physics. Having excluded that region one has to recalculate all QED corrections (ch. 3). The techniques introduced there enable the calculation of the difference between forward and backward scattering, the forward-backward asymmetry, for the cases f ≠ e-, νe (ch. 4). At small scattering angles, where Bhabha scattering is dominated by photon exchange in the t-channel, this process is used in experiments to determine the luminosity of the e+e- accelerator. Hence an accurate theoretical description of this process at small angles is of vital interest to the overall normalization of all measurements at LEP/SLC. Ch. 5 gives such a description in a semi-analytical way. The last two chapters discuss Monte Carlo techniques that are used for the cases f ≠ e-, νe. Ch. 6 describes the simulation of two-photon bremsstrahlung, which is a second order QED correction effect. The results are compared with the results of the semi-analytical treatment in ch. 2. Finally, ch. 7 reviews several techniques that have been used to simulate higher order QED corrections for the cases f ≠ e-, νe. (author). 132 refs.; 10 figs.; 16 tabs

  3. Surface consistent finite frequency phase corrections

    Kimman, W. P.

    2016-07-01

    Static time-delay corrections are frequency independent and ignore velocity variations away from the assumed vertical ray path through the subsurface. There is therefore a clear potential for improvement if the finite frequency nature of wave propagation can be properly accounted for. Such a method is presented here based on the Born approximation, the assumption of surface consistency and the misfit of instantaneous phase. The concept of instantaneous phase lends itself very well to sweep-like signals, hence these are the focus of this study. Analytical sensitivity kernels are derived that accurately predict frequency-dependent phase shifts due to P-wave anomalies in the near surface. They are quick to compute and robust near the source and receivers. An additional correction is presented that re-introduces the nonlinear relation between model perturbation and phase delay, which becomes relevant for stronger velocity anomalies. The phase shift as a function of frequency is a slowly varying signal, so its computation does not require fine sampling even for broad-band sweeps. The kernels reveal interesting features of the sensitivity of seismic arrivals to the near surface: small anomalies can have a relatively large impact resulting from the medium field term that is dominant near the source and receivers. Furthermore, even simple velocity anomalies can produce a distinct frequency-dependent phase behaviour. Unlike statics, the predicted phase corrections are smooth in space. Verification with spectral element simulations shows an excellent match for the predicted phase shifts over the entire seismic frequency band. Applying the phase shift to the reference sweep corrects for wavelet distortion, making the technique akin to surface consistent deconvolution, even though no division in the spectral domain is involved. As long as multiple scattering is mild, surface consistent finite frequency phase corrections outperform traditional statics for moderately large
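
    The instantaneous-phase misfit at the heart of this record is straightforward to compute from the analytic signal. A minimal Python sketch (the function name is illustrative; in the method itself the analytical kernels, not this direct differencing, supply the model sensitivity):

        import numpy as np
        from scipy.signal import hilbert

        def instantaneous_phase_misfit(observed, reference):
            # analytic signals; the angle gives the instantaneous phase
            phi_obs = np.unwrap(np.angle(hilbert(observed)))
            phi_ref = np.unwrap(np.angle(hilbert(reference)))
            return phi_obs - phi_ref   # radians, per time sample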

  4. An accurate and practical method for inference of weak gravitational lensing from galaxy images

    Bernstein, Gary M.; Armstrong, Robert; Krawiec, Christina; March, Marisa C.

    2016-07-01

    We demonstrate highly accurate recovery of weak gravitational lensing shear using an implementation of the Bayesian Fourier Domain (BFD) method proposed by Bernstein & Armstrong, extended to correct for selection biases. The BFD formalism is rigorously correct for Nyquist-sampled, background-limited, uncrowded images of background galaxies. BFD does not assign shapes to galaxies, instead compressing the pixel data D into a vector of moments M, such that we have an analytic expression for the probability P(M|g) of obtaining the observations with gravitational lensing distortion g along the line of sight. We implement an algorithm for conducting BFD's integrations over the population of unlensed source galaxies which measures ≈10 galaxies s⁻¹ core⁻¹ with good scaling properties. Initial tests of this code on ≈10⁹ simulated lensed galaxy images recover the simulated shear to a fractional accuracy of m = (2.1 ± 0.4) × 10⁻³, substantially more accurate than has been demonstrated previously for any generally applicable method. Deep sky exposures generate a sufficiently accurate approximation to the noiseless, unlensed galaxy population distribution assumed as input to BFD. Potential extensions of the method include simultaneous measurement of magnification and shear; multiple-exposure, multiband observations; and joint inference of photometric redshifts and lensing tomography.
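
    For contrast with BFD's moment-likelihood approach, the conventional alternative it avoids is assigning each galaxy a shape from its quadrupole moments. A minimal Python sketch of that conventional estimator (shown only to make the distinction concrete; this is not the BFD algorithm):

        import numpy as np

        def quadrupole_ellipticity(img):
            # unweighted second moments about the flux centroid
            y, x = np.indices(img.shape)
            flux = img.sum()
            xc, yc = (x * img).sum() / flux, (y * img).sum() / flux
            qxx = ((x - xc) ** 2 * img).sum() / flux
            qyy = ((y - yc) ** 2 * img).sum() / flux
            qxy = ((x - xc) * (y - yc) * img).sum() / flux
            # ellipticity components, used as a (noisy) shear proxy
            return (qxx - qyy) / (qxx + qyy), 2.0 * qxy / (qxx + qyy)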

  5. Comparative evaluation of scatter correction techniques in 3D positron emission tomography

    Zaidi, H

    2000-01-01

    Much research and development has been concentrated on the scatter compensation required for quantitative 3D PET. Increasingly sophisticated scatter correction procedures are under investigation, particularly those based on accurate scatter models, and iterative reconstruction-based scatter compensation approaches. The main difference among the correction methods is the way in which the scatter component in the selected energy window is estimated. Monte Carlo methods give further insight and might in themselves offer a possible correction procedure. Methods: Five scatter correction methods are compared in this paper, where applicable: the dual-energy window (DEW) technique, the convolution-subtraction (CVS) method, two variants of the Monte Carlo-based scatter correction technique (MCBSC1 and MCBSC2), and our newly developed statistical reconstruction-based scatter correction (SRBSC) method. These scatter correction techniques are evaluated using Monte Carlo simulation studies, experimental phantom measurements...
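
    Of the methods listed, convolution-subtraction is the easiest to sketch: the scatter component is modelled as a scaled, blurred copy of the primary image and removed iteratively. A minimal Python version (the kernel width and scatter fraction are illustrative placeholders, not values from the paper):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def convolution_subtraction(photopeak, scatter_fraction=0.3,
                                    kernel_sigma=8.0, n_iter=5):
            primary = photopeak.copy()
            for _ in range(n_iter):
                # scatter ~ k * (current primary blurred by a broad kernel)
                scatter = scatter_fraction * gaussian_filter(primary, kernel_sigma)
                primary = np.clip(photopeak - scatter, 0.0, None)
            return primary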

  6. Spectroscopically Accurate Line Lists for Application in Sulphur Chemistry

    Underwood, D. S.; Azzam, A. A. A.; Yurchenko, S. N.; Tennyson, J.

    2013-09-01

    for inclusion in standard atmospheric and planetary spectroscopic databases. The methods involved in computing the ab initio potential energy and dipole moment surfaces involved minor corrections to the equilibrium S-O distance, which produced a good agreement with experimentally determined rotational energies. However, the purely ab initio method was not able to reproduce an equally spectroscopically accurate representation of vibrational motion. We therefore present an empirical refinement to this original ab initio potential surface, based on the experimental data available. This will not only be used to reproduce the room-temperature spectrum to a greater degree of accuracy, but is essential in the production of a larger, accurate line list necessary for the simulation of higher temperature spectra: we aim for coverage suitable for T ≤ 800 K. Our preliminary studies on SO3 have also shown it to exhibit an interesting "forbidden" rotational spectrum and "clustering" of rotational states; to our knowledge this phenomenon has not been observed in other examples of trigonal planar molecules and is also an investigative avenue we wish to pursue. Finally, the IR absorption bands for SO2 and SO3 exhibit a strong overlap, and the inclusion of SO2 as a complement to our studies is something that we will be interested in doing in the near future.

  7. Holographic thermalization with Weyl corrections

    Dey, Anshuman; Mahapatra, Subhash; Sarkar, Tapobrata

    2016-01-01

    We consider holographic thermalization in the presence of a Weyl correction in five dimensional AdS space. We first obtain the Weyl corrected black brane solution perturbatively, up to first order in the coupling. The corresponding AdS-Vaidya like solution is then constructed. This is then used to numerically analyze the time dependence of the two point correlation functions and the expectation values of rectangular Wilson loops in the boundary field theory, and we discuss how the Weyl correction can modify the thermalization time scales in the dual field theory. In this context, the subtle interplay between the Weyl coupling constant and the chemical potential is studied in detail.

  8. Segmented attenuation correction using artificial neural networks in positron tomography

    The measured attenuation correction technique is widely used in cardiac positron tomographic studies. However, the success of this technique is limited by the insufficient counting statistics achievable in practical transmission scan times and by the scattered radiation in the transmission measurement, which leads to an underestimation of the attenuation coefficients. In this work, a segmented attenuation correction technique has been developed that uses artificial neural networks. The technique has been validated in phantoms and verified in human studies. The results indicate that attenuation coefficients measured in the segmented transmission image are accurate and reproducible. Activity concentrations measured in the reconstructed emission image can also be recovered accurately using this new technique. The accuracy of the technique is subject independent and insensitive to scatter contamination in the transmission data. This technique has the potential of reducing the transmission scan time, and satisfactory results are obtained if the transmission data contain about 400 000 true counts per plane. It can predict accurately the value of any attenuation coefficient in the range from air to water in a transmission image with or without scatter correction. (author)
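
    The essence of segmentation-based attenuation correction, replacing each noisy measured coefficient with the population value of its tissue class, can be sketched without the neural network (which in the record performs this classification far more robustly). A toy Python version with illustrative 511 keV coefficients:

        import numpy as np

        def segment_attenuation(mu_measured):
            # representative linear attenuation coefficients, 1/cm at
            # 511 keV (air, lung, soft tissue); values are approximate
            classes = np.array([0.000, 0.035, 0.096])
            idx = np.argmin(np.abs(mu_measured[..., None] - classes), axis=-1)
            return classes[idx]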

  9. Software for Correcting the Dynamic Error of Force Transducers

    Naoki Miyashita

    2014-07-01

    Software which corrects the dynamic error of force transducers in impact force measurements using their own output signal has been developed. The software corrects the output waveform of the transducers using the output waveform itself, estimates its uncertainty and displays the results. In the experiment, the dynamic error of three transducers of the same model is evaluated using the Levitation Mass Method (LMM), in which the impact forces applied to the transducers are accurately determined as the inertial force of the moving part of the aerostatic linear bearing. The parameters for correcting the dynamic error are determined from the results of one set of impact measurements of one transducer. Then, the validity of the obtained parameters is evaluated using the results of the other sets of measurements of all three transducers. The uncertainties in the uncorrected force and those in the corrected force are also estimated. If manufacturers determine the correction parameters for each model using the proposed method, and provide the software with the parameters corresponding to each model, then users can obtain the waveform corrected against dynamic error and its uncertainty. The present status and the future prospects of the developed software are discussed in this paper.
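
    A common way to correct a transducer's dynamic error from its own output is to invert a second-order (mass-spring-damper) sensor model; whether this matches the record's exact parametrisation is an assumption, so treat the following Python fragment as a sketch:

        import numpy as np

        def correct_dynamic_error(f_meas, dt, f0_hz, zeta):
            # invert y''/w0**2 + 2*zeta*y'/w0 + y = F for the input F,
            # given natural frequency f0_hz and damping ratio zeta
            # (identified, e.g., from LMM impact measurements)
            w0 = 2.0 * np.pi * f0_hz
            dfdt = np.gradient(f_meas, dt)
            d2fdt2 = np.gradient(dfdt, dt)
            return f_meas + (2.0 * zeta / w0) * dfdt + d2fdt2 / w0 ** 2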

  10. How well does multiple OCR error correction generalize?

    Lund, William B.; Ringger, Eric K.; Walker, Daniel D.

    2013-12-01

    As the digitization of historical documents, such as newspapers, becomes more common, the need of the archive patron for accurate digital text from those documents increases. Building on our earlier work, the contributions of this paper are: 1. demonstrating the applicability of novel methods for correcting optical character recognition (OCR) on disparate data sets, including a new synthetic training set, 2. enhancing the correction algorithm with novel features, and 3. assessing the data requirements of the correction learning method. First, we correct errors using conditional random fields (CRF) trained on synthetic training data sets in order to demonstrate the applicability of the methodology to unrelated test sets. Second, we show the strength of lexical features from the training sets on two unrelated test sets, yielding a relative reduction in word error rate (WER) on the test sets of 6.52%. New features capture the recurrence of hypothesis tokens and yield an additional relative reduction in WER of 2.30%. Further, we show that only 2.0% of the full training corpus of over 500,000 feature cases is needed to achieve correction results comparable to those using the entire training corpus, effectively reducing both the complexity of the training process and the learned correction model.

  11. Surface corrections to the moment of inertia and shell structure in finite Fermi systems

    Gorpinchenko, D. V.; Magner, A. G.; Bartel, J.; Blocki, J. P.

    2016-02-01

    The moment of inertia for nuclear collective rotations is derived within a semiclassical approach based on the Inglis cranking and Strutinsky shell-correction methods, improved by surface corrections within the nonperturbative periodic-orbit theory. For adiabatic (statistical-equilibrium) rotations it is approximated by the generalized rigid-body moment of inertia accounting for the shell corrections of the particle density. An improved phase-space trace formula allows one to express the shell components of the moment of inertia more accurately in terms of the free-energy shell correction. Evaluating their ratio within the extended Thomas-Fermi effective-surface approximation, one finds good agreement with the quantum calculations.

  12. Surface corrections to the shell-structure of the moment of inertia

    Gorpinchenko, D V; Bartel, J; Blocki, J P

    2015-01-01

    The moment of inertia for nuclear collective rotations is derived within a semiclassical approach based on the Inglis cranking and the Strutinsky shell-correction methods, improved by surface corrections within the non-perturbative periodic-orbit theory. For adiabatic (statistical-equilibrium) rotations it is approximated by the generalized rigid-body moment of inertia accounting for the shell corrections of the particle density. An improved phase-space trace formula allows one to express the shell components of the moment of inertia more accurately in terms of the free-energy shell correction, with their ratio evaluated within the extended Thomas-Fermi effective-surface approximation.

  13. Correcting the Chromatic Aberration in Barrel Distortion of Endoscopic Images

    Y. M. Harry Ng

    2003-04-01

    Modern endoscopes offer physicians a wide-angle field of view (FOV) for minimally invasive therapies. However, the high level of barrel distortion may prevent accurate perception of the image. Fortunately, this kind of distortion may be corrected by digital image processing. In this paper we investigate the chromatic aberrations in the barrel distortion of endoscopic images. In the past, chromatic aberration in endoscopes has been corrected by achromatic lenses or active lens control. In contrast, we take a computational approach by modifying the concept of image warping and the existing barrel distortion correction algorithm to tackle the chromatic aberration problem. In addition, an error function for the determination of the level of centroid coincidence is proposed. Simulation and experimental results confirm the effectiveness of our method.
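
    The computational idea, undistorting each colour channel with its own radial model so the channel centroids coincide, can be sketched with a single-parameter barrel model. A Python fragment (one k1 per channel; nearest-neighbour resampling and the one-term model are simplifications, not the paper's exact algorithm):

        import numpy as np

        def undistort_channel(channel, k1, cx, cy):
            h, w = channel.shape
            y, x = np.indices((h, w), dtype=float)
            xn, yn = x - cx, y - cy
            r2 = xn ** 2 + yn ** 2
            # evaluate the forward distortion at each corrected pixel to
            # find where in the distorted image its value came from
            xs = np.clip(np.round(cx + xn * (1 + k1 * r2)), 0, w - 1).astype(int)
            ys = np.clip(np.round(cy + yn * (1 + k1 * r2)), 0, h - 1).astype(int)
            return channel[ys, xs]

    Correcting R, G and B with slightly different k1 values is what aligns the colour planes; the proposed error function then scores how well the centroids coincide.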

  14. Reflection error correction of gas turbine blade temperature

    Kipngetich, Ketui Daniel; Feng, Chi; Gao, Shan

    2016-03-01

    Accurate measurement of gas turbine blades' temperature is one of the greatest challenges encountered in gas turbine temperature measurements. Within an enclosed gas turbine environment with surfaces of varying temperature and low emissivities, a new challenge is introduced into the use of radiation thermometers due to the problem of reflection error. A method for correcting this error has been proposed and demonstrated in this work through computer simulation and experiment. The method assumes that the emissivities of all surfaces exchanging thermal radiation are known. Simulations were carried out considering targets with low and high emissivities of 0.3 and 0.8, respectively, while experimental measurements were carried out on blades with an emissivity of 0.76. Simulated results showed the possibility of achieving an error of less than 1%, while the experimental results reduced the error to 1.1%. It was thus concluded that the method is appropriate for correcting the reflection error commonly encountered in temperature measurement of gas turbine blades.
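
    The basic correction for a grey, diffuse target can be written down directly: the thermometer sees the target's emission plus reflected radiance from the surroundings, so one subtracts the reflected part and inverts Planck's law. A Python sketch (a single effective wavelength and a single surrounding temperature are simplifying assumptions; the record's method handles the full radiative exchange):

        import numpy as np

        C1 = 1.191e-16   # 2*h*c**2, W*m**2/sr
        C2 = 1.4388e-2   # h*c/k_B, m*K

        def planck(lam, T):
            # spectral radiance at wavelength lam (m) and temperature T (K)
            return C1 / lam ** 5 / (np.exp(C2 / (lam * T)) - 1.0)

        def corrected_temperature(L_meas, lam, eps, T_surround):
            # L_meas = eps*L_bb(T) + (1 - eps)*L_bb(T_surround); solve for T
            L_target = (L_meas - (1.0 - eps) * planck(lam, T_surround)) / eps
            return C2 / (lam * np.log(C1 / (lam ** 5 * L_target) + 1.0))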

  15. Analytic method for geometrical parameter correction of planar HPGe detector

    A numerical integration formula was introduced to calculate the response of a planar HPGe detector to photons emitted from a point source. The formula was then used to correct the geometrical parameters of the planar HPGe detector. 241Am and 137Cs point sources were placed at distances of 1-20 cm from the entrance window to obtain the corresponding detection efficiencies. The detector parameters were determined by weighted least-squares fitting of the formula to the experimental efficiencies. This correction method is accurate and time-saving. MCNP simulations using the corrected parameters show that the relative deviations between simulated and experimental efficiencies are less than 1% for 59.5 and 661.6 keV photons at distances of 1-20 cm. (authors)
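
    The fitting step amounts to adjusting geometrical parameters in an efficiency model until it reproduces the measured efficiency-versus-distance curve. A self-contained Python sketch with a deliberately simple solid-angle model (the model form, parameter names and synthetic data are illustrative assumptions, not the record's formula):

        import numpy as np
        from scipy.optimize import curve_fit

        def efficiency_model(d, r_eff, d0):
            # fractional solid angle of a disc of radius r_eff seen from a
            # point source at distance d + d0 (d0 absorbs the unknown
            # window-to-crystal gap, the parameter being corrected)
            return 0.5 * (1.0 - (d + d0) / np.sqrt((d + d0) ** 2 + r_eff ** 2))

        rng = np.random.default_rng(0)
        d = np.linspace(1.0, 20.0, 10)                    # distances, cm
        eff = efficiency_model(d, 2.5, 0.4)               # assumed truth
        eff *= 1.0 + 0.01 * rng.standard_normal(d.size)   # 1% noise
        params, cov = curve_fit(efficiency_model, d, eff, p0=[2.0, 0.0])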

  16. Neural network scatter correction technique for digital radiography

    This paper presents a scatter correction technique based on artificial neural networks. The technique utilizes the acquisition of a conventional digital radiographic image, coupled with the acquisition of a multiple pencil beam (micro-aperture) digital image. Image subtraction results in a sparsely sampled estimate of the scatter component in the image. The neural network is trained to develop a causal relationship between image data on the low-pass filtered open field image and the sparsely sampled scatter image, and then the trained network is used to correct the entire image (pixel by pixel) in a manner which is operationally similar to but potentially more powerful than convolution. The technique is described and is illustrated using clinical primary component images combined with scatter component images that are realistically simulated using the results from previously reported Monte Carlo investigations. The results indicate that an accurate scatter correction can be realized using this technique

  17. Water-table correction factors applied to gasoline contamination

    The application of correction factors to measured ground-water elevations is an important step in the process of characterizing sites contaminated by petroleum products such as gasoline. The water-table configuration exerts a significant control on the migration of free product (e.g., gasoline) and dissolved hydrocarbon constituents. An accurate representation of this configuration cannot be made on the basis of measurements obtained from monitoring wells containing free product, unless correction factors are applied. By applying correction factors, the effect of the overlying product on the apparent water-table configuration is removed, and the water table can be analyzed at its ambient (undisturbed) level. A case history is presented where corrected water-table elevations and elevations measured at wells unaffected by free product are combined as control points. The use of the combined data facilitates a more accurate assessment of the shape of the water table, which leads to better conclusions regarding the source(s) of contamination, the extent of free-product accumulation, and optimal areas for focusing remediation efforts.
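
    The standard correction is a one-line calculation: the floating product depresses the water level in the well, and the ambient level is recovered by adding back the product thickness times the product's specific gravity. A Python sketch (0.73 is a commonly quoted specific gravity for gasoline; a site-specific value should be used in practice):

        def corrected_water_elevation(gauged_elevation, product_thickness,
                                      specific_gravity=0.73):
            # ambient water-table elevation = gauged (depressed) water
            # elevation + product thickness * product specific gravity
            return gauged_elevation + product_thickness * specific_gravity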

  18. Self-correcting quantum computers

    Is the notion of a quantum computer (QC) resilient to thermal noise unphysical? We address this question from a constructive perspective and show that local quantum Hamiltonian models provide self-correcting QCs. To this end, we first give a sufficient condition on the connectedness of excitations for a stabilizer code model to be a self-correcting quantum memory. We then study the two main examples of topological stabilizer codes in arbitrary dimensions and establish their self-correcting capabilities. Also, we address the transversality properties of topological color codes, showing that six-dimensional color codes provide a self-correcting model that allows the transversal and local implementation of a universal set of operations in seven spatial dimensions. Finally, we give a procedure for initializing such quantum memories at finite temperature. (paper)

  19. Multipole correction in large synchrotrons

    A new method of correcting dynamic nonlinearities due to the multipole content of a synchrotron such as the Superconducting Super Collider is discussed. The method uses lumped multipole elements placed at the center (C) of the accelerator half-cells as well as elements near the focusing (F) and defocusing (D) quads. In a first approximation, the corrector strengths follow Simpson's Rule. Correction of second-order sextupole nonlinearities may also be obtained with the F, C, and D octupoles. Correction of nonlinearities by about three orders of magnitude is obtained, and simple solutions to a fundamental problem in synchrotrons are demonstrated. Applications to the CERN Large Hadron Collider and lower energy machines, as well as extensions for quadrupole correction, are also discussed

  20. Self-Correcting Quantum Computers

    Bombin, H; Horodecki, M; Martín-Delgado, M A

    2009-01-01

    Is the notion of a quantum computer resilient to thermal noise unphysical? We address this question from a constructive perspective and show that local quantum Hamiltonian models provide self-correcting quantum computers. To this end, we first give a sufficient condition on the connectedness of excitations for a stabilizer code model to be a self-correcting quantum memory. We then study the two main examples of topological stabilizer codes in arbitrary dimensions and establish their self-correcting capabilities. Also, we address the transversality properties of topological color codes, showing that 6D color codes provide a self-correcting model that allows the transversal and local implementation of a universal set of operations in seven spatial dimensions. Finally, we give a procedure to initialize such quantum memories at finite temperature.

  1. Quantum corrections for Boltzmann equation

    Levy, Peter M.

    2008-01-01

    We present the lowest order quantum correction to the semiclassical Boltzmann distribution function, and the equation satisfied by this correction is given. Our equation for the quantum correction is obtained from the conventional quantum Boltzmann equation by explicitly expressing the Planck constant in the gradient approximation, and the quantum Wigner distribution function is likewise expanded in powers of the Planck constant. The negative quantum correlation in the Wigner distribution function, which is just the quantum correction term, is naturally singled out, thus obviating the need for Husimi's coarse-grain averaging that is usually done to remove the negative quantum part of the Wigner distribution function. We also discuss the classical limit of quantum thermodynamic entropy in the above framework.
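
    For reference, the lowest-order quantum correction to classical phase-space transport is conventionally written via the Wigner-Moyal expansion; one common textbook form (not necessarily the precise equation derived in this record) is

        \left( \partial_t + \frac{p}{m}\,\partial_x - V'(x)\,\partial_p \right) f(x,p,t)
            = \frac{\hbar^2}{24}\, V'''(x)\, \partial_p^3 f(x,p,t) + O(\hbar^4),

    where the left-hand side is the classical (collisionless) Boltzmann operator and the right-hand side is the leading ħ² correction, the term that can drive the Wigner function negative.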

  2. Spelling Correction in Agglutinative Languages

    Oflazer, K

    1994-01-01

    This paper presents an approach to spelling correction in agglutinative languages that is based on two-level morphology and a dynamic programming based search algorithm. Spelling correction in agglutinative languages is significantly different from that in languages like English. The concept of a word in such languages is much wider than the entries found in a dictionary, owing to productive word formation by derivational and inflectional affixation. After an overview of certain issues and relevant mathematical preliminaries, we formally present the problem and our solution. We then present results from our experiments with spelling correction in Turkish, a Ural-Altaic agglutinative language. Our results indicate that we can find the intended correct word in 95% of the cases and offer it as the first candidate in 74% of the cases, when the edit distance is 1.
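
    The dynamic-programming core of such a system is the classic edit-distance recurrence used to score candidate words against the misspelling (the two-level morphological analyser, not shown, is what generates valid candidate words in an agglutinative language). A minimal Python sketch:

        def edit_distance(s, t):
            m, n = len(s), len(t)
            d = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(m + 1):
                d[i][0] = i
            for j in range(n + 1):
                d[0][j] = j
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    cost = 0 if s[i - 1] == t[j - 1] else 1
                    d[i][j] = min(d[i - 1][j] + 1,          # deletion
                                  d[i][j - 1] + 1,          # insertion
                                  d[i - 1][j - 1] + cost)   # substitution
            return d[m][n]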

  3. The neural correlates of correctly rejecting lures during memory retrieval: the role of item relatedness.

    Bowman, Caitlin R; Dennis, Nancy A

    2015-06-01

    Successful memory retrieval is predicated not only on recognizing old information, but also on correctly rejecting new information (lures) in order to avoid false memories. Correctly rejecting lures is more difficult when they are perceptually or semantically related to information presented at study as compared to when lures are distinct from previously studied information. This behavioral difference suggests that the cognitive and neural basis of correct rejections differs with respect to the relatedness between lures and studied items. The present study sought to identify neural activity that aids in suppressing false memories by examining the network of brain regions underlying correct rejection of related and unrelated lures. Results showed neural overlap in the right hippocampus and anterior parahippocampal gyrus associated with both related and unrelated correct rejections, indicating that some neural regions support correctly rejecting lures regardless of their semantic/perceptual characteristics. Direct comparisons between related and unrelated correct rejections showed that unrelated correct rejections were associated with greater activity in bilateral middle and inferior temporal cortices, regions that have been associated with categorical processing and semantic labels. Related correct rejections showed greater activation in visual and lateral prefrontal cortices, which have been associated with perceptual processing and retrieval monitoring. Thus, while related and unrelated correct rejections show some common neural correlates, related correct rejections are driven by greater perceptual processing whereas unrelated correct rejections show greater reliance on salient categorical cues to support quick and accurate memory decisions. PMID:25862563

  4. Colour correction for panoramic imaging

    Tian, Gui Yun; Gledhill, Duke; Taylor, D.

    2002-01-01

    This paper reports the problem of colour distortion in panoramic imaging. Particularly when image mosaicing is used for panoramic imaging, the images are captured under different lighting conditions and viewpoints. The paper analyses several linear approaches for their colour transform and mapping. A new colour-histogram-based colour correction approach is provided, which is robust to image capturing conditions such as viewpoint and scaling. The procedure for the colour correction is intr...
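
    Histogram-based colour correction typically reduces to histogram matching per channel: build a monotone mapping that gives one image the value distribution of the other, which is largely insensitive to viewpoint and scale. A minimal Python sketch (a generic histogram-matching routine, assumed here as a stand-in for the paper's method):

        import numpy as np

        def match_histogram(source, reference):
            s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
            r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
            s_cdf = np.cumsum(s_counts) / source.size
            r_cdf = np.cumsum(r_counts) / reference.size
            # monotone lookup: source value -> reference value at same CDF
            mapped = np.interp(s_cdf, r_cdf, r_vals)
            idx = np.searchsorted(s_vals, source.ravel())
            return mapped[idx].reshape(source.shape)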

  5. Radiative corrections to Bose condensation

    Gonzalez, A. (Academia de Ciencias de Cuba, La Habana. Inst. de Matematica, Cibernetica y Computacion)

    1985-04-01

    The Bose condensation of the scalar field in a theory behaving in the Coleman-Weinberg mode is considered. The effective potential of the model is computed within the semiclassical approximation in a dimensional regularization scheme. Radiative corrections are shown to introduce certain μ-dependent ultraviolet divergences in the effective potential coming from the Many-Particle theory. The weight of radiative corrections in the dynamics of the system is strongly modified by the charge density.

  6. Quantum error correction for beginners

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)
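
    Since the record stresses learning through detailed examples, the smallest such example is the three-qubit repetition code, here reduced to a classical toy in Python (real QEC measures stabilizers without reading out the data qubits, so this only illustrates the encode/noise/majority-vote idea):

        import random

        def three_bit_repetition_trial(p=0.1):
            logical = random.randint(0, 1)
            code = [logical] * 3                               # encode
            noisy = [b ^ (random.random() < p) for b in code]  # bit flips
            decoded = int(sum(noisy) >= 2)                     # majority vote
            return decoded == logical

    Running many trials shows the residual logical error rate scaling as roughly 3p² for small p, the basic reason redundancy helps.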

  7. ACCURATE KAP METER CALIBRATION AS A PREREQUISITE FOR OPTIMISATION IN PROJECTION RADIOGRAPHY.

    Malusek, A; Sandborg, M; Carlsson, G Alm

    2016-06-01

    Modern X-ray units register the air kerma-area product, PKA, with a built-in KAP meter. Some KAP meters show an energy-dependent bias comparable with the maximum uncertainty articulated by the IEC (25%), adversely affecting dose-optimisation processes. To correct for the bias, a reference KAP meter calibrated at a standards laboratory and the two calibration methods described here can be used. One method transfers the calibration from the beam quality used at the standards laboratory, Q0, to any beam quality, Q, in the clinic. Alternatively, beam quality corrections are measured with an energy-independent dosemeter via a reference beam quality in the clinic, Q1, to the beam quality Q. Biases of up to 35% in built-in KAP meter readings were noted. Energy-dependent calibration factors are needed for unbiased PKA measurements. PMID:26743261

  8. Accurate early positions for Swift GRBS: enhancing X-ray positions with UVOT astrometry

    Goad, M R; Beardmore, A P; Evans, P A; Rosen, S R; Osborne, J P; Starling, R L C; Marshall, F E; Yershov, V; Burrows, D N; Gehrels, N; Roming, P; Moretti, A; Capalbi, M; Hill, J E; Kennea, J; Koch, S; Berk, D Vanden

    2007-01-01

    Here we describe an autonomous way of producing more accurate prompt XRT positions for Swift-detected GRBs and their afterglows, based on UVOT astrometry and a detailed mapping between the XRT and UVOT detectors. The latter significantly reduces the dominant systematic error -- the star-tracker solution to the World Coordinate System. This technique, which is limited to times when there is significant overlap between UVOT and XRT PC-mode data, provides a factor of 2 improvement in the localisation of XRT refined positions on timescales of less than a few hours. Furthermore, the accuracy achieved is superior to astrometrically corrected XRT PC mode images at early times (for up to 24 hours), for the majority of bursts, and is comparable to the accuracy achieved by astrometrically corrected X-ray positions based on deep XRT PC-mode imaging at later times (abridged).

  9. Generation increases at Cofrentes Nuclear Power Plant based on accurate feedwater flow measurement

    This paper discusses the application of Caldon LEFM ultrasonic flow and temperature measurement systems at Cofrentes Nuclear Power Plant. Based on plant instrumentation, Cofrentes engineering personnel estimated an 8 to 10 MW electric shortfall in generation due to venturi nozzle fouling. An external LEFM ultrasonic flow measurement system installed in October 2000 showed a shortfall of about 9 MW electric, consistent with expectations. The plant has increased generation by using the more accurate ultrasonic system to correct for the venturi nozzle bias. Following the recovery of generation lost to venturi fouling, Cofrentes plans to upgrade the flow meter to Caldon's LEFM CheckPlus system. This system is sufficiently accurate to warrant re-licensing for a power up-rate of up to 1.7% based on improved thermal power measurement. (author)

  10. Accurate gap levels and their role in the reliability of other calculated defect properties

    Deak, Peter; Aradi, Balint; Frauenheim, Thomas [Bremen Center for Computational Materials Science, Universitaet Bremen, POB 330440, 28334 Bremen (Germany); Gali, Adam [Department Atomic Physics, Budapest University of Technology and Economics, 1521 Budapest (Hungary)

    2011-04-15

    The functionality of semiconductors and insulators depends mainly on defects which modify the electronic, optical, and magnetic spectra through their gap levels. Accurate calculation of the latter is not only important for the experimental identification of the defect, but also influences the accuracy of other calculated defect properties, and is the most difficult challenge for defect theory. The electron self-interaction error in the standard implementations of ab initio density functional theory causes a severe underestimation of the band gap, leading to a corresponding uncertainty in the defect level positions in it. This is a widely known problem which is usually dealt with by a posteriori corrections. A wide range of correction schemes is used, ranging from ad hoc scaling or shifting, through procedures of limited validity (like the scissor operator or various alignment schemes), to more rigorous quasiparticle corrections based on many-body perturbation theory. We demonstrate in this paper that the consequences of the gap error must be taken into account in the total energy, and that simply correcting the band energy with the gap level shifts is of limited applicability. Therefore, the self-consistent determination of the total energy, free of the gap error, is preferred. We show that semi-empirical screened hybrid functionals can successfully be used for this purpose. (Copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  11. An Accurate Calculation of the Big-Bang Prediction for the Abundance of Primordial Helium

    Lopez, Robert E.; Turner, Michael S.

    1999-01-01

    Within the standard model of particle physics and cosmology we have calculated the big-bang prediction for the primordial abundance of helium to a theoretical uncertainty of 0.1% (δY_P = ±0.0002). At this accuracy the uncertainty in the abundance is dominated by the experimental uncertainty in the neutron mean lifetime, τ_n = 885.3 ± 2.0 sec. The following physical effects were included in the calculation: the zero- and finite-temperature radiative, Coulomb and finite-nucleon-mass corrections to the weak rates; the order-α quantum-electrodynamic correction to the plasma density, electron mass, and neutrino temperature; and incomplete neutrino decoupling. New results for the finite-temperature radiative correction and the QED plasma correction were used. In addition, we wrote a new and independent nucleosynthesis code to control numerical errors to less than 0.1%. Our predictions for the ⁴He abundance are summarized with an accurate fitting formula. Summarizing our work...

  12. Construction of modified Godunov type schemes accurate at any Mach number for the compressible Euler system

    Dellacherie, Stéphane; Jung, Jonathan; Omnes, Pascal; Raviart, Pierre-Arnaud

    2013-01-01

    Through a linear analysis, we show how to modify Godunov type schemes applied to the compressible Euler system to make them accurate at any Mach number. This allows us to propose all-Mach Godunov type schemes. A linear stability result is proposed, and a formal asymptotic analysis justifies the construction in the barotropic case when the Godunov type scheme is a Roe scheme. We also underline that we may have to introduce a cut-off in the all-Mach correction to avoid the creation of non-entropic ...
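
    A representative form of such a low-Mach fix (shown as an assumption about this family of schemes, not as the paper's exact formula) rescales the velocity-jump part of the Roe/Godunov interface pressure:

        p^*_{i+1/2} = \frac{p_i + p_{i+1}}{2}
            - \theta_{i+1/2}\,\frac{\rho c}{2}\,(u_{i+1} - u_i),
        \qquad \theta_{i+1/2} = \min(M_{i+1/2},\, 1),

    so that θ → 1 recovers the standard scheme while θ ~ M removes the spurious pressure-velocity coupling error at low Mach number; the cut-off mentioned above would act on such a correction factor.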

  13. Generator maintenance electrical testing. The importance of trending and accurate interpretation. A case study

    In today's rapidly changing Power Generation Industry it is more critical than ever to acquire and maintain accurate records of previous and current electrical test data. Evaluation and trending of this data is essential to ensuring the reliable operation of the machine in the ever-changing world of extended maintenance outages and maintenance budget reductions. This paper presents a case study of a unique problem that had initiated as early as 1990 and was not properly diagnosed and corrected until 2004, by which time it had propagated to a condition of imminent failure. (author)

  14. Accurate on-line mass flow measurements in supercritical fluid chromatography.

    Tarafder, Abhijit; Vajda, Péter; Guiochon, Georges

    2013-12-13

    This work demonstrates the possible advantages and the challenges of accurate on-line measurements of the CO2 mass flow rate during supercritical fluid chromatography (SFC) operations. Only the mass flow rate is constant along the column in SFC. The volume flow rate is not. The critical importance of accurate measurements of mass flow rates for the achievement of reproducible data and the serious difficulties encountered in supercritical fluid chromatography for its assessment were discussed earlier based on the physical properties of carbon dioxide. In this report, we experimentally demonstrate the problems encountered when performing mass flow rate measurements and the gain that can possibly be achieved by acquiring reproducible data using a Coriolis flow meter. The results obtained show how the use of a highly accurate mass flow meter permits, besides the determination of accurate values of the mass flow rate, a systematic, constant diagnosis of the correct operation of the instrument and the monitoring of the condition of the carbon dioxide pump. PMID:24210558

  15. 42 CFR 460.194 - Corrective action.

    2010-10-01

    42 CFR § 460.194, Corrective action (Public Health; Federal/State Monitoring): (a) A PACE organization must take action to correct... corrective actions. (c) Failure to correct deficiencies may result in sanctions or termination, as...

  16. Technical evaluation of TomoTherapy automatic roll correction.

    Laub, Steve; Snyder, Michael; Burmeister, Jay

    2015-01-01

    The TomoTherapy Hi·Art System allows the application of rotational corrections as a part of the pretreatment image guidance process. This study outlines a custom method to perform an end-to-end evaluation of the TomoTherapy Hi·Art roll correction feature. A roll-sensitive plan was designed and delivered to a cylindrical solid water phantom to test the accuracy of roll corrections, as well as the ability of the automatic registration feature to detect induced roll. Cylindrical target structures containing coaxial inner avoidance structures were placed adjacent to the plane bisecting the phantom and 7 cm laterally off central axis. The phantom was positioned at isocenter with the target-plane parallel to the couch surface. Varying degrees of phantom roll were induced and dose to the targets and inner avoidance structures was measured using Kodak EDR2 films placed in the target-plane. Normalized point doses were compared with baseline (no roll) data to determine the sensitivity of the test and the effectiveness of the roll correction feature. Gamma analysis comparing baseline, roll-corrected, and uncorrected films was performed using film analysis software. MVCT images were acquired prior to plan delivery. Measured roll was compared with induced roll to evaluate the automatic registration feature's ability to detect rotational misalignment. Rotations beyond 0.3° result in statistically significant deviation from baseline point measurements. Gamma pass rates begin to drop below 90% at approximately 0.5° induced rotation at 3%/3 mm and between 0.2° and 0.3° for 2%/2 mm. With roll correction applied, point dose measurements for all rotations are indistinguishable from baseline, and gamma pass rates exceed 96% when using 3% and 3 mm as evaluation criteria. Measured roll via the automatic registration algorithm agrees with induced rotation to within the test sensitivity for nearly all imaging settings. The TomoTherapy automatic registration system accurately detects

  17. Accurate Jones Matrix of the Practical Faraday Rotator

    王林斗; 祝昇翔; 李玉峰; 邢文烈; 魏景芝

    2003-01-01

    The Jones matrix of practical Faraday rotators is often used in the engineering calculation of non-reciprocal optical fields. Nevertheless, only the approximate Jones matrix of practical Faraday rotators has been presented until now. Based on the theory of polarized light, this paper presents the accurate Jones matrix of practical Faraday rotators. In addition, an experiment has been carried out to verify the validity of the accurate Jones matrix. This matrix accurately describes the optical characteristics of practical Faraday rotators, including rotation, loss and depolarization of the polarized light. The accurate Jones matrix can be used to obtain accurate results when a practical Faraday rotator transforms polarized light, which paves the way for the accurate analysis and calculation of practical Faraday rotators in relevant engineering applications.

  18. Biomimetic Approach for Accurate, Real-Time Aerodynamic Coefficients Project

    National Aeronautics and Space Administration — Aerodynamic and structural reliability and efficiency depend critically on the ability to accurately assess the aerodynamic loads and moments for each lifting...

  19. An adaptive optics approach for laser beam correction in turbulence utilizing a modified plenoptic camera

    Ko, Jonathan; Wu, Chensheng; Davis, Christopher C.

    2015-09-01

    Adaptive optics has been widely used in the field of astronomy to correct for atmospheric turbulence while viewing images of celestial bodies. The slightly distorted incoming wavefronts are typically sensed with a Shack-Hartmann sensor and then corrected with a deformable mirror. Although this approach has proven to be effective for astronomical purposes, a new approach must be developed when correcting for the deep turbulence experienced in ground-to-ground optical systems. We propose the use of a modified plenoptic camera as a wavefront sensor capable of accurately representing an incoming wavefront that has been significantly distorted by strong turbulence conditions (large Cn²). After the large distortions have been corrected, a secondary mode utilizing more traditional adaptive optics algorithms can take over to fine-tune the wavefront correction. This two-stage algorithm can find use in free space optical communication systems, in directed energy applications, as well as for image correction purposes.

  20. Quantitative SPECT reconstruction using CT-derived corrections

    Willowson, Kathy; Bailey, Dale L.; Baldock, Clive

    2008-06-01

    A method for achieving quantitative single-photon emission computed tomography (SPECT) based upon corrections derived from x-ray computed tomography (CT) data is presented. A CT-derived attenuation map is used to perform transmission-dependent scatter correction (TDSC) in conjunction with non-uniform attenuation correction. The original CT data are also utilized to correct for partial volume effects in small volumes of interest. The accuracy of the quantitative technique has been evaluated with phantom experiments and clinical lung ventilation/perfusion SPECT/CT studies. A comparison of calculated values with the known total activities and concentrations in a mixed-material cylindrical phantom, and in liver and cardiac inserts within an anthropomorphic torso phantom, produced accurate results. The total activity in corrected ventilation-subtracted perfusion images was compared to the calibrated injected dose of [99mTc]-MAA (macro-aggregated albumin). The average difference over 12 studies between the known and calculated activities was found to be -1%, with a range of ±7%.
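
    A key ingredient of CT-derived attenuation correction is converting CT numbers to linear attenuation coefficients at the emission energy. A common recipe is a bilinear mapping, sketched here in Python for 140 keV (99mTc); the breakpoints and slopes are representative values, not necessarily those used in this work:

        import numpy as np

        def hu_to_mu_140kev(hu):
            mu_water = 0.15   # approx. linear attenuation of water, 1/cm
            hu = np.asarray(hu, dtype=float)
            soft = mu_water * (1.0 + hu / 1000.0)        # HU <= 0: air-water
            bone = mu_water * (1.0 + 0.5 * hu / 1000.0)  # HU > 0: water-bone
            return np.where(hu <= 0.0, np.clip(soft, 0.0, None), bone)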

  1. Quantitative SPECT reconstruction using CT-derived corrections

    Willowson, Kathy; Bailey, Dale L; Baldock, Clive [Institute of Medical Physics, School of Physics, University of Sydney, Camperdown, NSW 2006 (Australia)], E-mail: K.Willowson@physics.usyd.edu.au

    2008-06-21

    A method for achieving quantitative single-photon emission computed tomography (SPECT) based upon corrections derived from x-ray computed tomography (CT) data is presented. A CT-derived attenuation map is used to perform transmission-dependent scatter correction (TDSC) in conjunction with non-uniform attenuation correction. The original CT data are also utilized to correct for partial volume effects in small volumes of interest. The accuracy of the quantitative technique has been evaluated with phantom experiments and clinical lung ventilation/perfusion SPECT/CT studies. A comparison of calculated values with the known total activities and concentrations in a mixed-material cylindrical phantom, and in liver and cardiac inserts within an anthropomorphic torso phantom, produced accurate results. The total activity in corrected ventilation-subtracted perfusion images was compared to the calibrated injected dose of [99mTc]-MAA (macro-aggregated albumin). The average difference over 12 studies between the known and calculated activities was found to be -1%, with a range of ±7%.

  2. Atmospheric Error Correction of the Laser Beam Ranging

    J. Saydi

    2014-01-01

    Atmospheric models based on surface measurements of pressure, temperature, and relative humidity have been used to increase laser ranging accuracy by ray tracing. Atmospheric refraction can cause significant errors in laser ranging systems. In the present research, the atmospheric effects on the laser beam were investigated by using the principles of laser ranging. Atmospheric corrections were calculated for the 0.532, 1.3, and 10.6 micron wavelengths under the weather conditions of Tehran, Isfahan, and Bushehr in Iran from March 2012 to March 2013. The atmospheric correction was computed from the meteorological data on the basis of monthly means; the meteorological data were obtained from meteorological stations in Tehran, Isfahan, and Bushehr. The atmospheric correction was calculated for laser beam propagation over 11, 100, and 200 kilometers at rising angles of 30°, 60°, and 90°. The results of the study showed that, for the same months and beam emission angles, the atmospheric correction was most accurate for the 10.6 micron wavelength. The laser ranging error decreased with increasing laser emission angle. The atmospheric corrections from the Marini-Murray and Mendes-Pavlis models were also compared for the 0.532 micron wavelength.

  3. Error-Correcting Data Structures

    de Wolf, Ronald

    2008-01-01

    We study data structures in the presence of adversarial noise. We want to encode a given object in a succinct data structure that enables us to efficiently answer specific queries about the object, even if the data structure has been corrupted by a constant fraction of errors. This model is the common generalization of (static) data structures and locally decodable error-correcting codes. The main issue is the tradeoff between the space used by the data structure and the time (number of probes) needed to answer a query about the encoded object. We prove a number of upper and lower bounds on various natural error-correcting data structure problems. In particular, we show that the optimal length of error-correcting data structures for the Membership problem (where we want to store subsets of size s from a universe of size n) is closely related to the optimal length of locally decodable codes for s-bit strings.

  4. Quantum error correction beyond qubits

    Aoki, Takao; Takahashi, Go; Kajiya, Tadashi; Yoshikawa, Jun-Ichi; Braunstein, Samuel L.; van Loock, Peter; Furusawa, Akira

    2009-08-01

    Quantum computation and communication rely on the ability to manipulate quantum states robustly and with high fidelity. To protect fragile quantum-superposition states from corruption through so-called decoherence noise, some form of error correction is needed. Therefore, the discovery of quantum error correction (QEC) was a key step to turn the field of quantum information from an academic curiosity into a developing technology. Here, we present an experimental implementation of a QEC code for quantum information encoded in continuous variables, based on entanglement among nine optical beams. This nine-wave-packet adaptation of Shor's original nine-qubit scheme enables, at least in principle, full quantum error correction against an arbitrary single-beam error.

  5. Fermilab Booster Correction Elements upgrade

    The Fermilab Booster Correction Element Power Supply System is being upgraded to provide significant improvements in performance and versatility. At the same time these improvements will complement raising the Booster injection energy from 200 MeV to 400 MeV and will allow an increased range of adjustment of tune, chromaticity, closed orbit and harmonic corrections. All correction elements will be capable of ramping to give dynamic orbit, tune and chromaticity control throughout the acceleration cycle. The power supplies are commercial switch-mode current sources capable of operating in all four current-voltage quadrants. External secondary feedback loops on the amplifiers have extended the small-signal bandwidth to 3 kHz and allow current ramps in excess of 1000 A/sec. Implementation and present status of the upgrade project are described in this paper. (author) 4 refs., 2 figs., 1 tab

  6. Delegation in Correctional Nursing Practice.

    Tompkins, Frances

    2016-07-01

    Correctional nurses face daily challenges as a result of their work environment. Common challenges include availability of resources for appropriate care delivery, negotiating with custody staff for access to patients, adherence to scope of practice standards, and working with a varied staffing mix. Professional correctional nurses must consider the educational backgrounds and competency of other nurses and assistive personnel in planning for care delivery. Budgetary constraints and varied staff preparation can be a challenge for the professional nurse. Adequate care planning requires understanding the educational level and competency of licensed and unlicensed staff. Delegation is the process of assessing patient needs and transferring responsibility for care to appropriately educated and competent staff. Correctional nurses can benefit from increased knowledge about delegation. PMID:27302707

  7. Electroweak corrections for LHC processes

    Chiesa, Mauro [Istituto Nazionale di Fisica Nucleare, Pavia (Italy); Greiner, Nicolas [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Gruppe Theorie; Tramontano, Francesco [Napoli Univ. (Italy). Dept. of Physics; Istituto Nazionale di Fisica Nucleare, Naples (Italy)

    2015-07-15

    For Run 2 of the LHC, next-to-leading-order (NLO) electroweak corrections will play an important role. Even though they are typically moderate at the level of total cross sections, they can lead to substantial deviations in the shapes of distributions. In particular for new-physics searches, but also for a precise determination of Standard Model observables, their inclusion in the theoretical predictions is mandatory for a reliable estimate of the Standard Model contribution. In this article we review the status of and recent developments in electroweak calculations and their automation for LHC processes. We discuss general issues and properties of NLO electroweak corrections and present some examples, including the full calculation of the NLO corrections to the production of a W boson in association with two jets, computed using GoSam interfaced to MadDipole.
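    The interplay the abstract describes (small shift in the total rate, large shape distortion in the tails) can be made concrete with a bin-wise K-factor, sigma_NLO = sigma_LO * (1 + delta_EW). The numbers in the Python sketch below are invented for illustration; real differential corrections come from tool chains such as GoSam interfaced to MadDipole.

        import numpy as np

        pt_edges = np.array([0, 100, 200, 400, 800])        # GeV
        sigma_lo = np.array([120.0, 35.0, 6.0, 0.4])        # pb per bin
        delta_ew = np.array([-0.01, -0.04, -0.09, -0.18])   # grows in the tail

        sigma_nlo = sigma_lo * (1.0 + delta_ew)
        print("total: %.1f pb -> %.1f pb (%+.1f%% shift)"
              % (sigma_lo.sum(), sigma_nlo.sum(),
                 100 * (sigma_nlo.sum() / sigma_lo.sum() - 1)))
        for lo, hi, k in zip(pt_edges[:-1], pt_edges[1:], 1 + delta_ew):
            print("pT %4d-%4d GeV: K_EW = %.2f" % (lo, hi, k))

    With these made-up numbers the total rate shifts by only about 2%, while the highest-pT bin is depleted by 18%: exactly the pattern that makes the shape-level corrections mandatory.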

  8. Local Correction of Boolean Functions

    Alon, Noga

    2011-01-01

    A Boolean function f over n variables is said to be q-locally correctable if, given black-box access to a function g which is "close" to an isomorphism f_sigma of f, we can compute f_sigma(x) for any x in Z_2^n with good probability using q queries to g. We observe that any k-junta, that is, any function which depends on only k of its input variables, is O(2^k)-locally correctable. Moreover, we show that there are examples where this is essentially best possible, and locally correcting some k-juntas requires a number of queries exponential in k. These examples, however, are far from typical, and indeed we prove that for almost every k-junta, O(k log k) queries suffice.
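    The abstract concerns juntas, but the local-correction paradigm itself is easy to exhibit on a simpler class: for an F2-linear function f, the identity f(x) = g(r) XOR g(x XOR r) holds whenever both random probes hit uncorrupted points, so a majority vote over random shifts r recovers f(x) with high probability. The Python sketch below uses illustrative parameters and random (not adversarial) corruption.

        import random

        n, noise = 16, 0.10
        rng = random.Random(1)
        a = rng.getrandbits(n)

        def f(x):                   # the true linear function <a, x> mod 2
            return bin(a & x).count("1") & 1

        table = {x: f(x) for x in range(2 ** n)}
        for x in rng.sample(range(2 ** n), int(noise * 2 ** n)):
            table[x] ^= 1           # corrupt g on 10% of all inputs

        def g(x):                   # black-box access to the corrupted table
            return table[x]

        def locally_correct(x, trials=41):
            # Each trial is wrong with probability < 2*noise, so the
            # majority over 41 independent shifts is almost surely right.
            votes = sum(g(r) ^ g(x ^ r)
                        for r in (rng.getrandbits(n) for _ in range(trials)))
            return int(2 * votes > trials)

        assert all(locally_correct(x) == f(x)
                   for x in rng.sample(range(2 ** n), 50))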

  9. Correction

    2007-01-01

    From left to right: Luis, Carmen, Mario, Christian and José listening to speeches by theorists Alvaro De Rújula and Luis Alvarez-Gaumé (right) at their farewell gathering on 15 May. We unfortunately cut out a part of the "Word of thanks" from the team retiring from Restaurant No. 1. The complete message is published below: Dear friends, You are the true "nucleus" of CERN. Every member of this extraordinary human mosaic will always remain in our affections and in our thoughts. We have all been very touched by your spontaneous generosity. Arrivederci, Mario. Au revoir, Christian. Hasta siempre, Carmen, José and Luis. PS: Lots of love to the theory team and to the hidden organisers. So long!