WorldWideScience

Sample records for accurate light-time correction

  1. Correcting incompatible DN values and geometric errors in nighttime lights time series images

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Naizhuo [Texas Tech Univ., Lubbock, TX (United States)]; Zhou, Yuyu [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)]; Samson, Eric L. [Mayan Esteem Project, Farmington, CT (United States)]

    2014-09-19

    The Defense Meteorological Satellite Program’s Operational Linescan System (DMSP-OLS) nighttime lights imagery has proven to be a powerful remote sensing tool to monitor urbanization and assess socioeconomic activities at large scales. However, incompatible digital number (DN) values and geometric errors severely limit the application of nighttime light image data to multi-year quantitative research. In this study we extend and improve previous studies on inter-calibrating nighttime lights image data to obtain more compatible and reliable nighttime lights time series (NLT) image data for China and the United States (US) through four steps: inter-calibration, geometric correction, steady increase adjustment, and population data correction. We then use gross domestic product (GDP) data to test the processed NLT image data indirectly and find that sum light (the summed DN value of pixels in a nighttime light image) maintains an apparent increasing trend when GDP growth rates are relatively large but neither increases nor decreases when GDP growth rates are relatively small. As nighttime light is a sensitive indicator of economic activity, the temporally consistent trends between sum light and GDP growth rate imply that the brightness of nighttime lights on the ground is correctly represented by the processed NLT image data. Finally, by analyzing the corrected NLT image data from 1992 to 2008, we find that China experienced apparent nighttime lights development in 1992-1997 and 2001-2008, while the US suffered nighttime lights decay over large areas after 2001.
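
    As an illustration of the inter-calibration step described above, the sketch below fits the commonly used second-order regression of one satellite-year's DN values against a reference satellite-year over an assumed radiometrically stable area and then applies the fitted coefficients to a full annual composite. The array names and the choice of calibration area are hypothetical, not taken from the paper.

      import numpy as np

      def fit_intercalibration(dn_target, dn_reference):
          """Fit DN_ref ~ a*DN^2 + b*DN + c over pixels of a stable calibration area."""
          a, b, c = np.polyfit(dn_target.ravel(), dn_reference.ravel(), deg=2)
          return a, b, c

      def apply_intercalibration(dn_image, coeffs):
          a, b, c = coeffs
          adjusted = a * dn_image.astype(float) ** 2 + b * dn_image + c
          # DMSP-OLS composites use 6-bit DN values, so clip to the valid 0-63 range.
          return np.clip(adjusted, 0, 63)

      # Hypothetical usage: fit on an invariant region, then adjust each annual
      # composite before summing DN values ("sum light") for a country.
      # coeffs = fit_intercalibration(dn_year_invariant, dn_reference_invariant)
      # dn_year_adjusted = apply_intercalibration(dn_year, coeffs)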

  2. Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions

    Science.gov (United States)

    Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara

    2012-01-01

    This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…

  3. Karect: accurate correction of substitution, insertion and deletion errors for next-generation sequencing data

    KAUST Repository

    Allam, Amin

    2015-07-14

    Motivation: Next-generation sequencing generates large amounts of data affected by errors in the form of substitutions, insertions or deletions of bases. Error correction based on high-coverage information typically improves de novo assembly. Most existing tools can correct substitution errors only; some support insertions and deletions, but accuracy in many cases is low. Results: We present Karect, a novel error correction technique based on multiple alignment. Our approach supports substitution, insertion and deletion errors. It can handle non-uniform coverage as well as moderately covered areas of the sequenced genome. Experiments with data from Illumina, 454 FLX and Ion Torrent sequencing machines demonstrate that Karect is more accurate than previous methods, both in terms of correcting individual-base errors (up to 10% increase in accuracy gain) and post de novo assembly quality (up to 10% increase in NGA50). We also introduce an improved framework for evaluating the quality of error correction.

  4. Significance of accurate diffraction corrections for the second harmonic wave in determining the acoustic nonlinearity parameter

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Hyunjo, E-mail: hjjeong@wku.ac.kr [Division of Mechanical and Automotive Engineering, Wonkwang University, Iksan, Jeonbuk 570-749 (Korea, Republic of); Zhang, Shuzeng; Li, Xiongbing [School of Traffic and Transportation Engineering, Central South University, Changsha, Hunan 410075 (China); Barnard, Dan [Center for Nondestructive Evaluation, Iowa State University, Ames, IA 50010 (United States)

    2015-09-15

    The accurate measurement of the acoustic nonlinearity parameter β for fluids or solids generally requires making corrections for diffraction effects due to the finite-size geometry of the transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and therefore not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations which were developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with less than 5% error when the exact second harmonic diffraction corrections are used; attenuation correction effects are negligible when the attenuation coefficients follow the linear frequency dependence α₂ ≃ 2α₁.
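
    For orientation, the quasilinear plane-wave relation underlying such measurements, with diffraction corrections D₁ and D₂ applied to the measured fundamental and second-harmonic amplitudes A₁ and A₂, can be written in the generic form below (a textbook-style expression, not the paper's specific MGB-based formulas); attenuation corrections enter as analogous multiplicative factors:

      \beta \;=\; \frac{8 A_2}{k^2 x A_1^2}\,\frac{D_1^2}{D_2},
      \qquad k = \omega / c_0, \quad x = \text{propagation distance}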

  5. An Accurate Volume Measurement of Solid Lesions by correcting Partial Volume Effects on CT images

    Directory of Open Access Journals (Sweden)

    Bitty S. Varghese

    2016-06-01

    Medical image processing has many applications, such as the oncological diagnosis of tumors and chemotherapy monitoring. Computed tomography (CT) images are used to capture solid lesions in organs such as the lung or liver. For chemotherapy-based therapeutics, estimating the size of a tumor is the main task in determining whether the treatment is on the right path: after chemotherapy the tumor either grows or shrinks. Because tumors grow irregularly, diameter is not a reliable measure of size; volume is the appropriate quantity. However, partial volume artifacts, which arise from the limited resolution of the imaging device, reduce the accuracy of the measurement. Partial volume correction (PVC), which extracts the necessary information from the segmented output, resolves this problem. This paper presents a different perspective on accurate volumetric measurement by correcting the partial volume effect at the borders of the segmentation result.

  6. Should scatter be corrected in both transmission and emission data for accurate quantitation in cardiac SPET?

    International Nuclear Information System (INIS)

    Ideally, reliable quantitation in single-photon emission tomography (SPET) requires both emission and transmission data to be scatter free. Although scatter in emission data has been extensively studied, it is not well known how scatter in transmission data affects relative and absolute quantitation in reconstructed images. We studied SPET quantitative accuracy for different amounts of scatter in emission and transmission data using a Utah phantom and a cardiac Data Spectrum phantom including different attenuating media. Acquisitions over 180° were considered and three projection sets were derived: 20% images and Jaszczak and triple-energy-window scatter-corrected projections. Transmission data were acquired using gadolinium-153 line sources in a 90-110 keV window using a narrow or wide scanning window. The transmission scans were performed either simultaneously with the emission acquisition or 24 h later. Transmission maps were reconstructed using filtered backprojection and μ values were linearly scaled from 100 to 140 keV. Attenuation-corrected images were reconstructed using a conjugate gradient minimal residual algorithm. The μ value underestimation varied between 4% with a narrow transmission window in soft tissue and 22% with a wide window in a material simulating bone. Scatter in the emission and transmission data had little effect on the uniformity of activity distribution in the left ventricle wall and in a uniformly hot compartment of the Utah phantom. Correcting the transmission data for scatter had no impact on contrast between a hot and a cold region or on signal-to-noise ratio (SNR) in regions with uniform activity distribution, while correcting the emission data for scatter improved contrast and reduced SNR. For absolute quantitation, the most accurate results (bias <4% in both phantoms) were obtained when reducing scatter in both emission and transmission data. In conclusion, trying to obtain the same amount of scatter in emission and transmission
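
    A minimal sketch of the triple-energy-window (TEW) scatter estimate referred to above is given below; the window widths and count arrays are placeholders, not the acquisition settings used in the study.

      import numpy as np

      def tew_scatter_correct(c_main, c_lower, c_upper,
                              w_main=20.0, w_lower=3.0, w_upper=3.0):
          """Subtract a TEW scatter estimate from photopeak-window counts.

          c_main, c_lower, c_upper: counts per pixel in the photopeak window and
          in the two narrow sub-windows flanking it; widths are in keV.
          """
          scatter = (c_lower / w_lower + c_upper / w_upper) * w_main / 2.0
          return np.clip(c_main - scatter, 0.0, None)  # avoid negative counts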

  7. Should scatter be corrected in both transmission and emission data for accurate quantitation in cardiac SPET?

    Energy Technology Data Exchange (ETDEWEB)

    Fakhri, G.E. [Harvard Medical School, Boston, MA (United States). Dept. of Radiology; U494 INSERM, CHU Pitie-Salpetriere, Paris (France); Buvat, I.; Todd-Pokropek, A.; Benali, H. [U494 INSERM, CHU Pitie-Salpetriere, Paris (France); Almeida, P. [Servico de Medicina Nuclear, Hospital Garcia de Orta, Almada (Portugal); Bendriem, B. [CTI, Inc., Knoxville, TN (United States)

    2000-09-01

    Ideally, reliable quantitation in single-photon emission tomography (SPET) requires both emission and transmission data to be scatter free. Although scatter in emission data has been extensively studied, it is not well known how scatter in transmission data affects relative and absolute quantitation in reconstructed images. We studied SPET quantitative accuracy for different amounts of scatter in emission and transmission data using a Utah phantom and a cardiac Data Spectrum phantom including different attenuating media. Acquisitions over 180° were considered and three projection sets were derived: 20% images and Jaszczak and triple-energy-window scatter-corrected projections. Transmission data were acquired using gadolinium-153 line sources in a 90-110 keV window using a narrow or wide scanning window. The transmission scans were performed either simultaneously with the emission acquisition or 24 h later. Transmission maps were reconstructed using filtered backprojection and μ values were linearly scaled from 100 to 140 keV. Attenuation-corrected images were reconstructed using a conjugate gradient minimal residual algorithm. The μ value underestimation varied between 4% with a narrow transmission window in soft tissue and 22% with a wide window in a material simulating bone. Scatter in the emission and transmission data had little effect on the uniformity of activity distribution in the left ventricle wall and in a uniformly hot compartment of the Utah phantom. Correcting the transmission data for scatter had no impact on contrast between a hot and a cold region or on signal-to-noise ratio (SNR) in regions with uniform activity distribution, while correcting the emission data for scatter improved contrast and reduced SNR. For absolute quantitation, the most accurate results (bias <4% in both phantoms) were obtained when reducing scatter in both emission and transmission data. In conclusion, trying to obtain the same amount of scatter in emission and

  8. Accurate tracking of tumor volume change during radiotherapy by CT-CBCT registration with intensity correction

    Science.gov (United States)

    Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon

    2016-03-01

    In this paper, we propose a CT-CBCT registration method to accurately predict the tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of the registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performances on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and on every other fraction CBCT, to which the GTV contours propagated by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean±std in cc) between the average of the manual segmentations and the automatic segmentations are 3.70±2.30 (B-spline), 1.25±1.78 (demons), 0.93±1.14 (optical flow), and 4.39±3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
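
    The sketch below illustrates patch-wise histogram matching of the kind the intensity correction relies on; it is a generic implementation assuming non-overlapping 2-D patches, not the authors' exact iterative scheme, and the patch size is arbitrary.

      import numpy as np

      def match_histogram(source, reference):
          """Remap 'source' so its intensity distribution matches 'reference'."""
          s_values, s_idx, s_counts = np.unique(source.ravel(),
                                                return_inverse=True,
                                                return_counts=True)
          r_values, r_counts = np.unique(reference.ravel(), return_counts=True)
          s_cdf = np.cumsum(s_counts) / source.size
          r_cdf = np.cumsum(r_counts) / reference.size
          matched = np.interp(s_cdf, r_cdf, r_values)
          return matched[s_idx].reshape(source.shape)

      def local_histogram_match(cbct_slice, ct_slice, patch=64):
          """Apply histogram matching independently on non-overlapping patches."""
          out = cbct_slice.astype(float).copy()
          for i in range(0, cbct_slice.shape[0], patch):
              for j in range(0, cbct_slice.shape[1], patch):
                  sl = (slice(i, i + patch), slice(j, j + patch))
                  out[sl] = match_histogram(cbct_slice[sl], ct_slice[sl])
          return out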

  9. Enabling accurate first-principles calculations of electronic properties with a corrected k·p scheme

    CERN Document Server

    Berland, Kristian

    2016-01-01

    A computationally inexpensive k·p-based interpolation scheme is developed that can extend the eigenvalues and momentum matrix elements of a sparsely sampled k-point grid into a densely sampled one. Dense sampling, often required to accurately describe transport and optical properties of bulk materials, can be computationally demanding to obtain, for instance, in combination with hybrid functionals within density functional theory (DFT) or with perturbative expansions beyond DFT such as the GW method. The scheme is based on solving the k·p equations and extrapolating from multiple reference k points. It includes a correction term that reduces the number of empty bands needed and ameliorates band discontinuities. We show that the scheme can be used to generate accurate band structures, density of states, and dielectric functions. Several examples are given, using traditional and hybrid functionals, with Si, TiNiSn, and Cu as test cases. We illustrate that d-electron and semi-core states, which are partic...

  10. Accurate mask model implementation in optical proximity correction model for 14-nm nodes and beyond

    Science.gov (United States)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Farys, Vincent; Huguennet, Frederic; Armeanu, Ana-Maria; Bork, Ingo; Chomat, Michael; Buck, Peter; Schanen, Isabelle

    2016-04-01

    In a previous work, we demonstrated that the current optical proximity correction model assuming the mask pattern to be analogous to the designed data is no longer valid. An extreme case of line-end shortening shows a difference of up to 10 nm (at mask level). For that reason, an accurate mask model has been calibrated for a 14-nm logic gate level. A model with a total RMS of 1.38 nm at mask level was obtained. Two-dimensional structures, such as line-end shortening and corner rounding, were well predicted using scanning electron microscopy pictures overlaid with simulated contours. The first part of this paper is dedicated to the implementation of our improved model in the current flow. The improved model consists of a mask model capturing mask process and writing effects, and a standard optical and resist model addressing the litho exposure and development effects at wafer level. The second part will focus on results from the comparison of the two models, the new and the regular.

  11. Accurate and Simple Time Synchronization and Frequency Offset Correction in OFDM System

    Institute of Scientific and Technical Information of China (English)

    LIU Xiao-ming; JIANG Wei-yu; LIU Yuan-an

    2004-01-01

    We present a new synchronization scheme for Orthogonal Frequency-Division Multiplexing (OFDM) systems. In this scheme, time synchronization and carrier frequency offset correction can be performed with one identical training symbol. The time synchronization algorithm is robust and simple to operate, and its performance is independent of the carrier frequency offset. We derive the theoretical error variance of our time synchronization algorithm in an AWGN channel. We also derive the performance lower bound of our frequency offset correction algorithm. The frequency offset correction algorithm is highly accurate and its performance degrades very little in multipath fading environments.
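
    As a generic illustration of how a single training symbol with two identical halves supports both tasks, the sketch below computes a Schmidl-Cox-style timing metric and a fractional frequency-offset estimate; this is a textbook scheme rather than the paper's exact algorithm, and N (FFT size) and r (received baseband samples) are assumptions.

      import numpy as np

      def timing_and_cfo(r, N):
          """Return the timing estimate and fractional CFO (in subcarrier spacings)."""
          L = N // 2
          best_d, best_metric, best_P = 0, -1.0, 0.0 + 0.0j
          for d in range(len(r) - N):
              first, second = r[d:d + L], r[d + L:d + N]
              P = np.sum(np.conj(first) * second)   # correlation of the two halves
              R = np.sum(np.abs(second) ** 2)       # received energy
              metric = np.abs(P) ** 2 / (R ** 2 + 1e-12)
              if metric > best_metric:
                  best_d, best_metric, best_P = d, metric, P
          cfo = np.angle(best_P) / np.pi            # phase of P maps to the offset
          return best_d, cfo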

  12. Band-Filling Correction Method for Accurate Adsorption Energy Calculations: A Cu/ZnO Case Study.

    Science.gov (United States)

    Hellström, Matti; Spångberg, Daniel; Hermansson, Kersti; Broqvist, Peter

    2013-11-12

    We present a simple method, the "band-filling correction", to calculate accurate adsorption energies (Eads) in the low coverage limit from finite-size supercell slab calculations using DFT. We show that it is necessary to use such a correction if charge transfer takes place between the adsorbate and the substrate, resulting in the substrate bands either filling up or becoming depleted. With this correction scheme, we calculate Eads of an isolated Cu atom adsorbed on the ZnO(101̅0) surface. Without the correction, the calculated Eads is highly coverage-dependent, even for surface supercells that would typically be considered very large (in the range from 1 nm × 1 nm to 2.5 nm × 2.5 nm). The correction scheme works very well for semilocal functionals, where the corrected Eads is converged within 0.01 eV for all coverages. The correction scheme also works well for hybrid functionals if a large supercell is used and the exact exchange interaction is screened. PMID:26583386

  13. A Highly Accurate Classification of TM Data through Correction of Atmospheric Effects

    Directory of Open Access Journals (Sweden)

    Bill Smith

    2009-07-01

    Atmospheric correction impacts on the accuracy of satellite image-based land cover classification are a growing concern among scientists. In this study, the principal objective was to enhance classification accuracy by minimizing contamination effects from aerosol scattering in Landsat TM images due to the variation in solar zenith angle corresponding to cloud-free earth targets. We have derived a mathematical model for aerosols to compute and subtract the aerosol scattering noise per pixel of different vegetation classes from TM images of Nicolet in north-eastern Wisconsin. An algorithm in C++ has been developed with iterations to simulate, model, and correct for the solar zenith angle influences on scattering. Results from a supervised classification with corrected TM images showed increased class accuracy for land cover types over uncorrected images. The overall accuracy of the supervised classification was improved substantially (between 13% and 18%). The z-score shows a significant difference between the corrected data and the raw data (between 4.0 and 12.0). Therefore, the atmospheric correction was essential for enhancing the image classification.

  14. Accurate mass error correction in liquid chromatography time-of-flight mass spectrometry based metabolomics

    NARCIS (Netherlands)

    Mihaleva, V.V.; Vorst, O.F.J.; Maliepaard, C.A.; Verhoeven, H.A.; Vos, de C.H.; Hall, R.D.; Ham, van R.C.H.J.

    2008-01-01

    Compound identification and annotation in (untargeted) metabolomics experiments based on accurate mass require the highest possible accuracy of the mass determination. Experimental LC/TOF-MS platforms equipped with a time-to-digital converter (TDC) give the best mass estimate for those mass signals

  15. Accurate Treatment of Large Supramolecular Complexes by Double-Hybrid Density Functionals Coupled with Nonlocal van der Waals Corrections.

    Science.gov (United States)

    Calbo, Joaquín; Ortí, Enrique; Sancho-García, Juan C; Aragó, Juan

    2015-03-10

    In this work, we present a thorough assessment of the performance of some representative double-hybrid density functionals (revPBE0-DH-NL and B2PLYP-NL) as well as their parent hybrid and GGA counterparts, in combination with the most modern version of the nonlocal (NL) van der Waals correction to describe very large weakly interacting molecular systems dominated by noncovalent interactions. Prior to the assessment, an accurate and homogeneous set of reference interaction energies was computed for the supramolecular complexes constituting the L7 and S12L data sets by using the novel, precise, and efficient DLPNO-CCSD(T) method at the complete basis set limit (CBS). The correction of the basis set superposition error and the inclusion of the deformation energies (for the S12L set) have been crucial for obtaining precise DLPNO-CCSD(T)/CBS interaction energies. Among the density functionals evaluated, the double-hybrid revPBE0-DH-NL and B2PLYP-NL with the three-body dispersion correction provide remarkably accurate association energies very close to the chemical accuracy. Overall, the NL van der Waals approach combined with proper density functionals can be seen as an accurate and affordable computational tool for the modeling of large weakly bonded supramolecular systems. PMID:26579747

  16. AN ACCURATE NEW METHOD OF CALCULATING ABSOLUTE MAGNITUDES AND K-CORRECTIONS APPLIED TO THE SLOAN FILTER SET

    Energy Technology Data Exchange (ETDEWEB)

    Beare, Richard; Brown, Michael J. I.; Pimbblet, Kevin, E-mail: richard@beares.net [Monash Centre for Astrophysics, Monash University, Clayton, Victoria 3800 (Australia)

    2014-12-20

    We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the New York University Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.
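
    A hedged sketch of how such a parameterization is evaluated is given below: the K-correction is taken to be a polynomial in redshift and a single observed colour, and the absolute magnitude follows from the distance modulus. The coefficient table A and the polynomial orders are purely illustrative, not the published parameter tables.

      import numpy as np

      def k_correction(z, colour, A):
          """K(z, colour) = sum_{i,j} A[i, j] * z**i * colour**j."""
          zi = np.array([z ** i for i in range(A.shape[0])])
          cj = np.array([colour ** j for j in range(A.shape[1])])
          return zi @ A @ cj

      def absolute_magnitude(m_obs, z, colour, A, dist_modulus):
          """M = m - DM(z) - K(z, colour); dist_modulus(z) supplies DM(z)."""
          return m_obs - dist_modulus(z) - k_correction(z, colour, A)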

  17. A Highly Accurate Classification of TM Data through Correction of Atmospheric Effects

    OpenAIRE

    Bill Smith; Frank Scarpace; Widad Elmahboub

    2009-01-01

    Atmospheric correction impacts on the accuracy of satellite image-based land cover classification are a growing concern among scientists. In this study, the principal objective was to enhance classification accuracy by minimizing contamination effects from aerosol scattering in Landsat TM images due to the variation in solar zenith angle corresponding to cloud-free earth targets. We have derived a mathematical model for aerosols to compute and subtract the aerosol scattering noise per pixel o...

  18. Accurate correction of magnetic field instabilities for high-resolution isochronous mass measurements in storage rings

    CERN Document Server

    Shuai, P; Zhang, Y H; Litvinov, Yu A; Wang, M; Tu, X L; Blaum, K; Zhou, X H; Yuan, Y J; Audi, G; Yan, X L; Chen, X C; Xu, X; Zhang, W; Sun, B H; Yamaguchi, T; Chen, R J; Fu, C Y; Ge, Z; Huang, W J; Liu, D W; Xing, Y M; Zeng, Q

    2014-01-01

    Isochronous mass spectrometry (IMS) in storage rings is a successful technique for accurate mass measurements of short-lived nuclides with a relative precision of about 10⁻⁵-10⁻⁷. Instabilities of the magnetic fields in storage rings are one of the major contributions limiting the achievable mass resolving power, which is directly related to the precision of the obtained mass values. A new data analysis method is proposed allowing one to minimise the effect of such instabilities. The masses of the ⁴¹Ti, ⁴³V, ⁴⁷Mn, ⁴⁹Fe, ⁵³Ni and ⁵⁵Cu nuclides, previously measured at the CSRe, were re-determined with this method. An improvement of the mass precision by a factor of ∼1.7 has been achieved for ⁴¹Ti and ⁴³V. The method can be applied to any isochronous mass experiment irrespective of the accelerator facility. Furthermore, the method can be used as an on-line tool for checking the isochronous conditions of the storage ring.

  19. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    Science.gov (United States)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependence of phase retardation and birefringence on the SNR was not overcome. Hence the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a-posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  20. Accurate Evaluation of Ion Conductivity of the Gramicidin A Channel Using a Polarizable Force Field without Any Corrections.

    Science.gov (United States)

    Peng, Xiangda; Zhang, Yuebin; Chu, Huiying; Li, Yan; Zhang, Dinglin; Cao, Liaoran; Li, Guohui

    2016-06-14

    Classical molecular dynamics (MD) simulation of membrane proteins faces significant challenges in accurately reproducing and predicting experimental observables such as ion conductance and permeability, owing to its inability to precisely describe the electronic interactions in heterogeneous systems. In this work, the free energy profiles of K⁺ and Na⁺ permeating through the gramicidin A channel are characterized by using the AMOEBA polarizable force field with a total sampling time of 1 μs. Our results indicated that by explicitly introducing the multipole terms and polarization into the electrostatic potentials, the permeation free energy barrier of K⁺ through the gA channel is considerably reduced compared to the overestimated results obtained from the fixed-charge model. Moreover, the estimated maximum conductances, without any corrections, for both K⁺ and Na⁺ passing through the gA channel are much closer to the experimental results than those from any classical MD simulation, demonstrating the power of AMOEBA in investigating membrane proteins. PMID:27171823

  1. Accurate non-Born-Oppenheimer calculations of the complete pure vibrational spectrum of D2, including relativistic corrections.

    Science.gov (United States)

    Bubin, Sergiy; Stanke, Monika; Adamowicz, Ludwik

    2011-08-21

    In this work we report very accurate variational calculations of the complete pure vibrational spectrum of the D₂ molecule performed within a framework where the Born-Oppenheimer (BO) approximation is not assumed. After the elimination of the center-of-mass motion, D₂ becomes a three-particle problem in this framework. As the considered states correspond to zero total angular momentum, their wave functions are expanded in terms of all-particle, one-center, spherically symmetric explicitly correlated Gaussian functions multiplied by even non-negative powers of the internuclear distance. The nonrelativistic energies of the states obtained in the non-BO calculations are corrected for the relativistic effects of the order of α² (where α = 1/c is the fine structure constant) calculated as expectation values of the operators representing these effects. PMID:21861559

  2. Correction.

    Science.gov (United States)

    2015-11-01

    In the article by Heuslein et al, which published online ahead of print on September 3, 2015 (DOI: 10.1161/ATVBAHA.115.305775), a correction was needed. Brett R. Blackman was added as the penultimate author of the article. The article has been corrected for publication in the November 2015 issue. PMID:26490278

  3. Dixon sequence with superimposed model-based bone compartment provides highly accurate PET/MR attenuation correction of the brain

    OpenAIRE

    Koesters, Thomas; Friedman, Kent P.; Fenchel, Matthias; Zhan, Yiqiang; Hermosillo, Gerardo; Babb, James; Jelescu, Ileana O.; Faul, David; Boada, Fernando E.; Shepherd, Timothy M.

    2016-01-01

    Simultaneous PET/MR of the brain is a promising new technology for characterizing patients with suspected cognitive impairment or epilepsy. Unlike CT though, MR signal intensities do not provide a direct correlate to PET photon attenuation correction (AC) and inaccurate radiotracer standard uptake value (SUV) estimation could limit future PET/MR clinical applications. We tested a novel AC method that supplements standard Dixon-based tissue segmentation with a superimposed model-based bone com...

  4. Harmonic allocation of authorship credit: source-level correction of bibliometric bias assures accurate publication and citation analysis.

    Directory of Open Access Journals (Sweden)

    Nils T Hagen

    Authorship credit for multi-authored scientific publications is routinely allocated either by issuing full publication credit repeatedly to all coauthors, or by dividing one credit equally among all coauthors. The ensuing inflationary and equalizing biases distort derived bibliometric measures of merit by systematically benefiting secondary authors at the expense of primary authors. Here I show how harmonic counting, which allocates credit according to authorship rank and the number of coauthors, provides simultaneous source-level correction for both biases as well as accommodating further decoding of byline information. I also demonstrate large and erratic effects of counting bias on the original h-index, and show how the harmonic version of the h-index provides unbiased bibliometric ranking of scientific merit while retaining the original's essential simplicity, transparency and intended fairness. Harmonic decoding of byline information resolves the conundrum of authorship credit allocation by providing a simple recipe for source-level correction of inflationary and equalizing bias. Harmonic counting could also offer unrivalled accuracy in automated assessments of scientific productivity, impact and achievement.
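
    The harmonic counting rule itself is compact: the i-th of N coauthors receives a share 1/i normalised by the N-th harmonic number, so each paper distributes exactly one credit. A minimal sketch:

      def harmonic_credits(n_authors):
          """Credit share of each author position under harmonic counting."""
          harmonic_number = sum(1.0 / k for k in range(1, n_authors + 1))
          return [(1.0 / i) / harmonic_number for i in range(1, n_authors + 1)]

      # Example: four coauthors receive shares [0.48, 0.24, 0.16, 0.12].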

  5. Correction.

    Science.gov (United States)

    2016-02-01

    In the article by Guessous et al (Guessous I, Pruijm M, Ponte B, Ackermann D, Ehret G, Ansermot N, Vuistiner P, Staessen J, Gu Y, Paccaud F, Mohaupt M, Vogt B, Pechère-Bertschi A, Martin PY, Burnier M, Eap CB, Bochud M. Associations of ambulatory blood pressure with urinary caffeine and caffeine metabolite excretions. Hypertension. 2015;65:691–696. doi: 10.1161/HYPERTENSIONAHA.114.04512), which published online ahead of print December 8, 2014, and appeared in the March 2015 issue of the journal, a correction was needed.One of the author surnames was misspelled. Antoinette Pechère-Berstchi has been corrected to read Antoinette Pechère-Bertschi.The authors apologize for this error. PMID:26763012

  6. Correction

    CERN Multimedia

    2002-01-01

    The photo on the second page of the Bulletin n°48/2002, from 25 November 2002, illustrating the article «Spanish Visit to CERN» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.   The Spanish delegation, accompanied by Spanish scientists at CERN, also visited the LHC superconducting magnet test hall (photo). From left to right: Felix Rodriguez Mateos of CERN LHC Division, Josep Piqué i Camps, Spanish Minister of Science and Technology, César Dopazo, Director-General of CIEMAT (Spanish Research Centre for Energy, Environment and Technology), Juan Antonio Rubio, ETT Division Leader at CERN, Manuel Aguilar-Benitez, Spanish Delegate to Council, Manuel Delfino, IT Division Leader at CERN, and Gonzalo León, Secretary-General of Scientific Policy to the Minister.

  7. Radiochromic film dosimetry with flatbed scanners: A fast and accurate method for dose calibration and uniformity correction with single film exposure

    International Nuclear Information System (INIS)

    Film dosimetry is an attractive tool for dose distribution verification in intensity modulated radiotherapy (IMRT). A critical aspect of radiochromic film dosimetry is the scanner used for the readout of the film: the output needs to be calibrated in dose response and corrected for pixel value and spatially dependent nonuniformity caused by light scattering; these procedures can take a long time. A method for fast and accurate calibration and uniformity correction for radiochromic film dosimetry is presented: a single film exposure is used to do both calibration and correction. Gafchromic EBT films were read with two flatbed charge-coupled device scanners (Epson V750 and 1680Pro). The accuracy of the method is investigated with specific dose patterns and an IMRT beam. The comparisons with a two-dimensional array of ionization chambers using an 18×18 cm² open field and an inverse pyramid dose pattern show an increase in the percentage of points which pass the gamma analysis (tolerance parameters of 3% and 3 mm), passing from 55% and 64% for the 1680Pro and V750 scanners, respectively, to 94% for both scanners for the 18×18 open field, and from 76% and 75% to 91% for the inverse pyramid pattern. Application to an IMRT beam also shows better gamma index results, passing from 88% and 86% for the two scanners, respectively, to 94% for both. The number of points and dose range considered for correction and calibration appears to be appropriate for use in IMRT verification. The method proved to be fast and to properly correct the nonuniformity, and it has been adopted for routine clinical IMRT dose verification.

  8. Practical self-absorption correction method for various environmental samples in a 1000 cm³ Marinelli container to perform accurate radioactivity determination with HPGe detectors

    International Nuclear Information System (INIS)

    The self-absorption of large volume samples is an important issue in gamma-ray spectrometry using high purity germanium (HPGe) detectors. After the Fukushima Daiichi Nuclear Power Plant accident, a large number of radioactivity measurements of various environmental samples have been performed using 1000 cm³ containers. This study uses Monte Carlo simulations and a semiempirical function to address the self-absorption correction factor for the samples in the 1000 cm³ Marinelli container that has been widely marketed after the accident. The presented factor was validated by experiments using test sources and was shown to be accurate for a wide range of linear attenuation coefficients μ (0.05-1.0 cm⁻¹). This suggests that the proposed correction factor is applicable to almost all environmental samples. In addition, an interlaboratory comparison where participants were asked to determine the radioactivity of a certified reference material demonstrated that the proposed correction factor can be used with HPGe detectors of different crystal sizes. (author)

  9. Extension of the B3LYP - Dispersion-Correcting Potential Approach to the Accurate Treatment of both Inter- and Intramolecular Interactions

    CERN Document Server

    DiLabio, Gino A; Torres, Edmanuel

    2013-01-01

    We recently showed that dispersion-correcting potentials (DCPs), atom-centered Gaussian-type functions developed for use with B3LYP (J. Phys. Chem. Lett. 2012, 3, 1738-1744), greatly improved the ability of the underlying functional to predict non-covalent interactions. However, the application of B3LYP-DCP for the β-scission of the cumyloxyl radical led to a calculated barrier height that was overestimated by ca. 8 kcal/mol. We show in the present work that the source of this error arises from the previously developed carbon atom DCPs, which erroneously alter the electron density in the C-C covalent-bonding region. In this work, we present a new C-DCP with a form that was expected to influence the electron density farther from the nucleus. Tests of the new C-DCP, with previously published H-, N- and O-DCPs, with B3LYP-DCP/6-31+G(2d,2p) on the S66, S22B, HSG-A, and HC12 databases of non-covalently interacting dimers showed that it is one of the most accurate methods available for treating intermolecular i...

  10. Accurate real-time ionospheric corrections as the key to extend the centimeter-error-level GNSS navigation at continental scale (WARTK)

    Science.gov (United States)

    Hernandez-Pajares, M.; Juan, J.; Sanz, J.; Aragon-Angel, A.

    2007-05-01

    The main focus of this presentation is to show the recent improvements in real-time GNSS ionospheric determination that extend the service area of the so-called "Wide Area Real Time Kinematic" technique (WARTK), which allows centimeter-error-level navigation up to hundreds of kilometers from the nearest GNSS reference site. Real-time GNSS navigation with centimeter-level error has been feasible since the nineties thanks to the so-called "Real-Time Kinematic" technique (RTK), which exactly solves the integer values of the double-differenced carrier phase ambiguities. This was possible thanks to dual-frequency carrier phase data acquired simultaneously with data from a close (less than 10-20 km) reference GNSS site, under the assumption of common atmospheric effects on the satellite signal. This technique has been improved by different authors through the use of a network of reference sites. However, differential ionospheric refraction has remained the main factor limiting the distance from the reference site over which the technique is applicable. In this context the authors have been developing the Wide Area RTK technique (WARTK) in different works and projects since 1998, overcoming the aforementioned limitations. In this way RTK becomes applicable with the existing sparse (Wide Area) networks of reference GPS stations, separated by hundreds of kilometers. Such networks are presently deployed in the context of other projects, such as SBAS support, over Europe and North America (EGNOS and WAAS, respectively), among other regions. In particular, WARTK is based on computing very accurate differential ionospheric corrections from a Wide Area network of permanent GNSS receivers and providing them in real time to the users. The key points addressed by the technique are accurate real-time ionospheric modeling -combined with the corresponding geodetic model- by means of: a) A tomographic voxel model of the ionosphere

  11. The accurate calculation of the band gap of liquid water by means of GW corrections applied to plane-wave density functional theory molecular dynamics simulations

    NARCIS (Netherlands)

    Fang, Changming; Li, Wun Fan; Koster, Rik S.; Klimeš, Jiří; Van Blaaderen, Alfons; Van Huis, Marijn A.

    2015-01-01

    Knowledge about the intrinsic electronic properties of water is imperative for understanding the behaviour of aqueous solutions that are used throughout biology, chemistry, physics, and industry. The calculation of the electronic band gap of liquids is challenging, because the most accurate ab initi

  12. Toward accurate thermochemistry of the ²⁴MgH, ²⁵MgH, and ²⁶MgH molecules at elevated temperatures: Corrections due to unbound states

    Energy Technology Data Exchange (ETDEWEB)

    Szidarovszky, Tamás [MTA-ELTE Research Group on Complex Chemical Systems, P.O. Box 32, H-1518 Budapest 112 (Hungary); Császár, Attila G., E-mail: csaszar@chem.elte.hu [MTA-ELTE Research Group on Complex Chemical Systems, P.O. Box 32, H-1518 Budapest 112 (Hungary); Laboratory on Molecular Structure and Dynamics, Institute of Chemistry, Eötvös University, Pázmány Péter sétány 1/A, H-1117 Budapest (Hungary)

    2015-01-07

    The total partition functions Q(T) and their first two moments Q′(T) and Q″(T), together with the isobaric heat capacities C_p(T), are computed a priori for three major MgH isotopologues over the temperature range T = 100–3000 K using the recent highly accurate potential energy curve, spin-rotation, and non-adiabatic correction functions of Henderson et al. [J. Phys. Chem. A 117, 13373 (2013)]. Nuclear motion computations are carried out on the ground electronic state to determine the (ro)vibrational energy levels and the scattering phase shifts. The effect of resonance states is found to be significant above about 1000 K and it increases with temperature. Even very short-lived states, due to their relatively large number, have significant contributions to Q(T) at elevated temperatures. The contribution of scattering states is around one fourth of that of resonance states but opposite in sign. Uncertainty estimates are given for the possible error sources, suggesting that all computed thermochemical properties have an accuracy better than 0.005% up to 1200 K. Between 1200 and 2500 K, the uncertainties can rise to around 0.1%, while between 2500 K and 3000 K, a further increase to 0.5% might be observed for Q″(T) and C_p(T), principally due to the neglect of excited electronic states. The accurate thermochemical data determined are presented in the supplementary material for the three isotopologues ²⁴MgH, ²⁵MgH, and ²⁶MgH at 1 K increments. These data, which differ significantly from older standard data, should prove useful for astronomical models incorporating thermodynamic properties of these species.

  13. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    Energy Technology Data Exchange (ETDEWEB)

    Rocklin, Gabriel J. [Department of Pharmaceutical Chemistry, University of California San Francisco, 1700 4th St., San Francisco, California 94143-2550, USA and Biophysics Graduate Program, University of California San Francisco, 1700 4th St., San Francisco, California 94143-2550 (United States); Mobley, David L. [Departments of Pharmaceutical Sciences and Chemistry, University of California Irvine, 147 Bison Modular, Building 515, Irvine, California 92697-0001, USA and Department of Chemistry, University of New Orleans, 2000 Lakeshore Drive, New Orleans, Louisiana 70148 (United States); Dill, Ken A. [Laufer Center for Physical and Quantitative Biology, 5252 Stony Brook University, Stony Brook, New York 11794-0001 (United States); Hünenberger, Philippe H., E-mail: phil@igc.phys.chem.ethz.ch [Laboratory of Physical Chemistry, Swiss Federal Institute of Technology, ETH, 8093 Zürich (Switzerland)

    2013-11-14

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol⁻¹) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non

  14. Impact of aerosols on the OMI tropospheric NO2 retrievals over industrialized regions: how accurate is the aerosol correction of cloud-free scenes via a simple cloud model?

    Science.gov (United States)

    Chimot, J.; Vlemmix, T.; Veefkind, J. P.; de Haan, J. F.; Levelt, P. F.

    2016-02-01

    The Ozone Monitoring Instrument (OMI) has provided daily global measurements of tropospheric NO2 for more than a decade. Numerous studies have drawn attention to the complexities related to measurements of tropospheric NO2 in the presence of aerosols. Fine particles affect the OMI spectral measurements and the length of the average light path followed by the photons. However, they are not explicitly taken into account in the current operational OMI tropospheric NO2 retrieval chain (DOMINO - Derivation of OMI tropospheric NO2) product. Instead, the operational OMI O2 - O2 cloud retrieval algorithm is applied both to cloudy and to cloud-free scenes (i.e. clear sky) dominated by the presence of aerosols. This paper describes in detail the complex interplay between the spectral effects of aerosols in the satellite observation and the associated response of the OMI O2 - O2 cloud retrieval algorithm. Then, it evaluates the impact on the accuracy of the tropospheric NO2 retrievals through the computed Air Mass Factor (AMF) with a focus on cloud-free scenes. For that purpose, collocated OMI NO2 and MODIS (Moderate Resolution Imaging Spectroradiometer) Aqua aerosol products are analysed over the strongly industrialized East China area. In addition, aerosol effects on the tropospheric NO2 AMF and the retrieval of OMI cloud parameters are simulated. Both the observation-based and the simulation-based approach demonstrate that the retrieved cloud fraction increases with increasing Aerosol Optical Thickness (AOT), but the magnitude of this increase depends on the aerosol properties and surface albedo. This increase is induced by the additional scattering effects of aerosols which enhance the scene brightness. The decreasing effective cloud pressure with increasing AOT primarily represents the shielding effects of the O2 - O2 column located below the aerosol layers. The study cases show that the aerosol correction based on the implemented OMI cloud model results in biases

  15. ACE: accurate correction of errors using K-mer tries

    NARCIS (Netherlands)

    Sheikhizadeh Anari, S.; Ridder, de D.

    2015-01-01

    The quality of high-throughput next-generation sequencing data significantly influences the performance and memory consumption of assembly and mapping algorithms. The most ubiquitous platform, Illumina, mainly suffers from substitution errors. We have developed a tool, ACE, based on K-mer tries to c

  16. Accurate ab initio spin densities

    CERN Document Server

    Boguslawski, Katharina; Legeza, Örs; Reiher, Markus

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CA...

  17. Nighttime lights time series of tsunami damage, recovery, and economic metrics in Sumatra, Indonesia.

    Science.gov (United States)

    Gillespie, Thomas W; Frankenberg, Elizabeth; Chum, Kai Fung; Thomas, Duncan

    2014-01-01

    On 26 December 2004, a magnitude 9.2 earthquake off the west coast of northern Sumatra, Indonesia resulted in the deaths of 160,000 Indonesians. We examine the Defense Meteorological Satellite Program-Operational Linescan System (DMSP-OLS) nighttime light imagery brightness values for 307 communities in the Study of the Tsunami Aftermath and Recovery (STAR), a household survey in Sumatra from 2004 to 2008. We examined relationships in the nighttime light time series between annual brightness and the extent of damage, and economic metrics collected from STAR households and aggregated to the community level. There were significant changes in brightness values from 2004 to 2008, with a significant drop in 2005 due to the tsunami and a return to pre-tsunami nighttime light values in 2006 for all damage zones. There were significant relationships between the nighttime imagery brightness and per capita expenditures, and spending on energy and on food. Results suggest that Defense Meteorological Satellite Program nighttime light imagery can be used to capture the impacts of and recovery from the tsunami and other natural disasters and to estimate time series economic metrics at the community level in developing countries.

  18. The Near-contact Binary RZ Draconis with Two Possible Light-time Orbits

    Science.gov (United States)

    Yang, Y.-G.; Li, H.-L.; Dai, H.-F.; Zhang, L.-Y.

    2010-12-01

    We present new multicolor photometry for RZ Draconis, observed in 2009 at the Xinglong Station of the National Astronomical Observatories of China. By using the updated version of the Wilson-Devinney Code, the photometric-spectroscopic elements were deduced from new photometric observations and published radial velocity data. The mass ratio and orbital inclination are q = 0.375(±0.002) and i = 84.60°(±0.13°), respectively. The fill-out factor of the primary is f = 98.3%, implying that RZ Dra is an Algol-like near-contact binary. Based on 683 light minimum times from 1907 to 2009, the orbital period change was investigated in detail. From the O-C curve, it is discovered that two quasi-sinusoidal variations may exist (i.e., P3 = 75.62(±2.20) yr and P4 = 27.59(±0.10) yr), which likely result from light-time effects via the presence of two additional bodies. In a coplanar orbit with the binary system, the third and fourth bodies may be low-mass dwarfs (i.e., M3 = 0.175 M⊙ and M4 = 0.074 M⊙). If this is true, RZ Dra may be a quadruple star. The additional bodies could extract angular momentum from the binary system, which may cause the orbit to shrink. With the orbit shrinking, the primary may fill its Roche lobe and RZ Dra will evolve into a contact configuration.
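
    The light-time effect invoked here is commonly modelled with the classical Irwin-type term added to the linear ephemeris, where a₁₂ sin i₃ is the projected semi-major axis of the binary's orbit around the common centre of mass, e₃, ω₃ and ν₃ are the eccentricity, argument of periastron and true anomaly of the third-body orbit, and c is the speed of light:

      (O - C)_3 \;=\; \frac{a_{12}\sin i_3}{c}
      \left[\frac{1 - e_3^{2}}{1 + e_3\cos\nu_3}\,\sin(\nu_3 + \omega_3) + e_3\sin\omega_3\right]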

  19. Speaking Fluently And Accurately

    Institute of Scientific and Technical Information of China (English)

    Joseph DeVeto

    2004-01-01

    Even after many years of study, students make frequent mistakes in English. In addition, many students still need a long time to think of what they want to say. For some reason, in spite of all the studying, students are still not quite fluent. When I teach, I use one technique that helps students not only speak more accurately, but also more fluently. That technique is dictations.

  20. Accurate Finite Difference Algorithms

    Science.gov (United States)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10⁶) periods of propagation with eight grid points per wavelength.
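
    As one concrete example of the kind of explicit stencil discussed (the paper's own schemes, up to eleventh order, are not reproduced here), the sketch below applies the standard fourth-order central difference for the first derivative on a uniform grid; the grid and the boundary handling are simplified assumptions.

      import numpy as np

      def d1_fourth_order(f, h):
          """Fourth-order central first derivative of samples f on a uniform grid."""
          df = np.empty_like(f, dtype=float)
          df[2:-2] = (-f[4:] + 8 * f[3:-1] - 8 * f[1:-3] + f[:-4]) / (12 * h)
          # Simple one-sided differences at the boundary points (first order only).
          df[:2] = (f[1:3] - f[0:2]) / h
          df[-2:] = (f[-2:] - f[-3:-1]) / h
          return df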

  1. Accurate backgrounds to Higgs production at the LHC

    CERN Document Server

    Kauer, N

    2007-01-01

    Corrections of 10-30% for backgrounds to the H → WW → ℓ⁺ℓ⁻ + missing transverse momentum search in vector boson and gluon fusion at the LHC are reviewed to make the case for precise and accurate theoretical background predictions.

  2. Highly Accurate Measurement of the Electron Orbital Magnetic Moment

    CERN Document Server

    Awobode, A M

    2015-01-01

    We propose to accurately determine the orbital magnetic moment of the electron by measuring, in a magneto-optical or ion trap, the ratio of the Landé g-factors in two atomic states. From the measurement of (gJ1/gJ2), the quantity A, which depends on the corrections to the electron g-factors, can be extracted if the states are LS coupled. Given that highly accurate values of the correction to the spin g-factor are currently available, accurate values of the correction to the orbital g-factor may also be determined. At present, (-1.8 ± 0.4) × 10⁻⁴ has been determined as a correction to the electron orbital g-factor, using earlier measurements of the ratio gJ1/gJ2 made on the indium ²P₁/₂ and ²P₃/₂ states.
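
    For reference, the LS-coupled Landé factor underlying the proposed ratio measurement is the standard expression below, with g_L and g_S the orbital and spin g-factors whose small corrections are the quantities of interest:

      g_J \;=\; g_L\,\frac{J(J+1) - S(S+1) + L(L+1)}{2J(J+1)}
            \;+\; g_S\,\frac{J(J+1) + S(S+1) - L(L+1)}{2J(J+1)}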

  3. Corrective Jaw Surgery

    Science.gov (United States)

    Orthognathic surgery is performed to correct the misalignment of ...

  4. PHOTOMETRIC PROPERTIES OF SELECTED ALGOL-TYPE BINARIES. III. AL GEMINORUM AND BM MONOCEROTIS WITH POSSIBLE LIGHT-TIME ORBITS

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Y.-G.; Dai, H.-F. [School of Physics and Electronic Information, Huaibei Normal University, 235000 Huaibei, Anhui Province (China); Li, H.-L., E-mail: yygcn@163.com [National Astronomical Observatories, Chinese Academy of Sciences, 100012 Beijing (China)

    2012-01-15

    We present the CCD photometry of two Algol-type binaries, AL Gem and BM Mon, observed from 2008 November to 2011 January. With the updated Wilson-Devinney program, photometric solutions were deduced from their EA-type light curves. The mass ratios and fill-out factors of the primaries are found to be q_ph = 0.090(±0.005) and f_1 = 47.3%(±0.3%) for AL Gem, and q_ph = 0.275(±0.007) and f_1 = 55.4%(±0.5%) for BM Mon, respectively. By analyzing the O-C curves, we discovered that the periods of AL Gem and BM Mon change in a quasi-sinusoidal mode, which may possibly result from the light-time effect via the presence of a third body. Periods, amplitudes, and eccentricities of the light-time orbits are 78.83(±1.17) yr, 0.0204(±0.0007) d, and 0.28(±0.02) for AL Gem and 97.78(±2.67) yr, 0.0175(±0.0006) d, and 0.29(±0.02) for BM Mon, respectively. Assuming coplanar orbits with the binaries, the masses of the third bodies would be 0.29 M⊙ for AL Gem and 0.26 M⊙ for BM Mon. This kind of additional companion can extract angular momentum from the close binary orbit, and such processes may play an important role in multiple-star evolution.

  5. Photometric Properties of Selected Algol-type Binaries. III. AL Geminorum and BM Monocerotis with Possible Light-time Orbits

    Science.gov (United States)

    Yang, Y.-G.; Li, H.-L.; Dai, H.-F.

    2012-01-01

    We present the CCD photometry of two Algol-type binaries, AL Gem and BM Mon, observed from 2008 November to 2011 January. With the updated Wilson-Devinney program, photometric solutions were deduced from their EA-type light curves. The mass ratios and fill-out factors of the primaries are found to be q ph = 0.090(± 0.005) and f 1 = 47.3%(± 0.3%) for AL Gem, and q ph = 0.275(± 0.007) and f 1 = 55.4%(± 0.5%) for BM Mon, respectively. By analyzing the O-C curves, we discovered that the periods of AL Gem and BM Mon change in a quasi-sinusoidal mode, which may possibly result from the light-time effect via the presence of a third body. Periods, amplitudes, and eccentricities of light-time orbits are 78.83(± 1.17) yr, 0.0204(± 0.0007) d, and 0.28(± 0.02) for AL Gem and 97.78(± 2.67) yr, 0.0175(± 0.0006) d, and 0.29(± 0.02) for BM Mon, respectively. Assumed to be in a coplanar orbit with the binary, the masses of the third bodies would be 0.29 M ⊙ for AL Gem and 0.26 M ⊙ for BM Mon. This kind of additional companion can extract angular momentum from the close binary orbit, and such processes may play an important role in multiple star evolution.

  6. Accurate measurement of unsteady state fluid temperature

    Science.gov (United States)

    Jaremkiewicz, Magdalena

    2016-07-01

    In this paper, two accurate methods for determining the transient fluid temperature were presented. Measurements were conducted for boiling water since its temperature is known. At the beginning the thermometers are at the ambient temperature and next they are immediately immersed into saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter equal to 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by the thermometer was corrected considering the thermometers as the first or second order inertia devices. The new design of a thermometer was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheath thermocouple located in its center. The temperature of the fluid was determined based on measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of the air flowing through the wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results compared with measurements using industrial thermometers in conjunction with simple temperature correction using the inertial thermometer model of the first or second order. By comparing the results, it was demonstrated that the new thermometer allows obtaining the fluid temperature much faster and with higher accuracy in comparison to the industrial thermometer. Accurate measurements of the fast changing fluid temperature are possible due to the low inertia thermometer and fast space marching method applied for solving the inverse heat conduction problem.

  7. Prisons and Correctional Facilities, LAGIC is consulting with local parish GIS departments to create spatially accurate point and polygons data sets including the locations and building footprints of schools, churches, government buildings, law enforcement and emergency response offices, pha, Published in 2011, 1:12000 (1in=1000ft) scale, Louisiana Geographic Information Center.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Prisons and Correctional Facilities dataset, published at 1:12000 (1in=1000ft) scale, was produced all or in part from Orthoimagery information as of 2011. It...

  8. Deconvolution with correct sampling

    CERN Document Server

    Magain, P; Sohy, S

    1997-01-01

    A new method for improving the resolution of astronomical images is presented. It is based on the principle that sampled data cannot be fully deconvolved without violating the sampling theorem. Thus, the sampled image should not be deconvolved by the total Point Spread Function, but by a narrower function chosen so that the resolution of the deconvolved image is compatible with the adopted sampling. Our deconvolution method gives results which are markedly superior to those of other existing techniques: in particular, it does not produce ringing around point sources superimposed on a smooth background. Moreover, it allows accurate astrometry and photometry of crowded fields to be performed. These improvements are a consequence of both the correct treatment of sampling and the recognition that the most probable astronomical image is not a flat one. The method is also well adapted to the optimal combination of different images of the same object, as can be obtained, e.g., via adaptive optics techniques.

  9. Towards accurate emergency response behavior

    International Nuclear Information System (INIS)

    Nuclear reactor operator emergency response behavior has persisted as a training problem through lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge based behavior, are described in detail

  10. Accurate Modeling of Advanced Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min

    Analysis and optimization methods for the design of advanced printed reflectarrays have been investigated, and the study is focused on developing an accurate and efficient simulation tool. For the analysis, a good compromise between accuracy and efficiency can be obtained using the spectral domain...

  11. 77 FR 72199 - Technical Corrections; Correction

    Science.gov (United States)

    2012-12-05

    ...) is correcting a final rule that was published in the Federal Register on July 6, 2012 (77 FR 39899), and effective on August 6, 2012. That final rule amended the NRC regulations to make technical... COMMISSION 10 CFR Part 171 RIN 3150-AJ16 Technical Corrections; Correction AGENCY: Nuclear...

  12. Photometric Investigation and Possible Light-Time Effect in the Orbital Period of a Marginal Contact System, CW Cassiopeiae

    Science.gov (United States)

    Jiang, Tian-Yu; Li, Li-Fang; Han, Zhan-Wen; Jiang, Deng-Kai

    2010-04-01

    The first complete charge-coupled device (CCD) light curves in B and V passbands of a neglected contact binary system, CW Cassiopeiae (CW Cas), are presented. They were analyzed simultaneously by using the Wilson and Devinney (WD) code (1971, ApJ, 166, 605). The photometric solution indicates that CW Cas is a W-type W UMa system with a mass ratio of m2/m1 = 2.234, and that it is in a marginal contact state with a contact degree of ~6.5% and a relatively large temperature difference of ~327 K between its two components. Based on the minimum times collected from the literature, together with the new ones obtained in this study, the orbital period changes of CW Cas were investigated in detail. It was found that a periodic variation is superposed on a secular period decrease in its orbital period. The long-term period decrease with a rate of dP/dt = -3.44 × 10⁻⁸ d yr⁻¹ can be interpreted either by mass transfer from the more-massive component to the less-massive one with a rate of dm2/dt = -3.6 × 10⁻⁸ M⊙ yr⁻¹, or by mass and angular-momentum losses through magnetic braking due to a magnetic stellar wind. A low-amplitude cyclic variation with a period of T = 63.7 yr might be caused by the light-time effect due to the presence of a third body.
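    For context, a secular period change of this kind is conventionally translated into a mass-transfer rate through the standard relation for conservative mass transfer between the components (a textbook identity, quoted here in LaTeX and not necessarily the exact expression used by the authors):

        \frac{\dot P}{P} = 3\,\dot m_2\left(\frac{1}{m_1} - \frac{1}{m_2}\right), \qquad \dot m_1 = -\dot m_2,

    where star 2 is the mass-losing component. When the donor is the more massive star (m_2 > m_1, as in CW Cas), \dot m_2 < 0 makes \dot P negative, i.e. the orbit and period shrink, which is the sign of period change reported above.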

  13. Accurate sampling using Langevin dynamics

    CERN Document Server

    Bussi, Giovanni

    2008-01-01

    We show how to derive a simple integrator for the Langevin equation and illustrate how it is possible to check the accuracy of the obtained distribution on the fly, using the concept of effective energy introduced in a recent paper [J. Chem. Phys. 126, 014101 (2007)]. Our integrator leads to correct sampling also in the difficult high-friction limit. We also show how these ideas can be applied in practical simulations, using a Lennard-Jones crystal as a paradigmatic case.

  14. Profitable capitation requires accurate costing.

    Science.gov (United States)

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages. PMID:8788799

  15. Investigation of CT number correction of kilo-voltage cone-beam CT images for accurate dose calculation in radiotherapy

    Institute of Scientific and Technical Information of China (English)

    王雪桃; 柏森; 李光俊; 蒋晓芹; 苏晨; 李衍龙; 朱智慧

    2015-01-01

    Objective: To study a CT-number correction method for kilo-voltage cone-beam CT (KV-CBCT) images and improve the accuracy of dose calculations based on them. Methods: Using the fan-beam planning CT as prior information, the CBCT images were rigidly registered to the planning CT. An estimate of the CBCT scatter background was obtained by subtracting the planning CT images from the CBCT images; this background was low-pass filtered and then subtracted from the raw CBCT images to yield the corrected CBCT images. KV-CBCT images of a Catphan600 phantom and of four patients with pelvic tumors, acquired with a linac-integrated CBCT system, were corrected in this way. Differences in CT numbers between CBCT and planning CT before and after correction were assessed with paired t-tests, and the image quality and dose-calculation accuracy of the corrected CBCT images were evaluated. Results: The proposed method reduced CBCT artifacts significantly. Before correction, the mean CT numbers of air, fat, muscle and femoral head differed from the planning CT by 232, 89, 29 and 66 HU, respectively; after correction the differences shrank to within 5 HU (P = 0.39, 0.66, 0.59, 1.00). Dose calculations on the corrected CBCT images agreed with the planning CT to within 2%. Conclusion: After correction, the CT numbers of CBCT images are close to those of the planning CT, so the corrected images can be used for accurate dose calculation.
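    The correction pipeline described above is simple enough to sketch. Below is a minimal, illustrative Python version assuming the CBCT and planning-CT volumes are already rigidly registered and resampled onto the same grid; the Gaussian kernel width stands in for the unspecified low-pass filter and is an assumption, not a value from the paper:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def correct_cbct(cbct, plan_ct, sigma_mm=30.0, voxel_mm=1.0):
            """Scatter-style CT-number correction of a CBCT volume.

            Assumes `cbct` and `plan_ct` are already rigidly registered and
            resampled onto the same grid (HU values, numpy arrays of equal
            shape). The low-pass width `sigma_mm` is a placeholder choice.
            """
            # 1. The difference image approximates the low-frequency
            #    scatter/artifact background of the CBCT.
            background = cbct - plan_ct
            # 2. Keep only the slowly varying part of that background.
            background_lp = gaussian_filter(background, sigma=sigma_mm / voxel_mm)
            # 3. Remove it from the raw CBCT to obtain corrected CT numbers.
            return cbct - background_lp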

  16. Accurate determination of antenna directivity

    DEFF Research Database (Denmark)

    Dich, Mikael

    1997-01-01

    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...

  17. Parameters for accurate genome alignment

    Directory of Open Access Journals (Sweden)

    Hamada Michiaki

    2010-02-01

    Full Text Available Abstract Background Genome sequence alignments form the basis of much research. Genome alignment depends on various mundane but critical choices, such as how to mask repeats and which score parameters to use. Surprisingly, there has been no large-scale assessment of these choices using real genomic data. Moreover, rigorous procedures to control the rate of spurious alignment have not been employed. Results We have assessed 495 combinations of score parameters for alignment of animal, plant, and fungal genomes. As our gold-standard of accuracy, we used genome alignments implied by multiple alignments of proteins and of structural RNAs. We found the HOXD scoring schemes underlying alignments in the UCSC genome database to be far from optimal, and suggest better parameters. Higher values of the X-drop parameter are not always better. E-values accurately indicate the rate of spurious alignment, but only if tandem repeats are masked in a non-standard way. Finally, we show that γ-centroid (probabilistic) alignment can find highly reliable subsets of aligned bases. Conclusions These results enable more accurate genome alignment, with reliability measures for local alignments and for individual aligned bases. This study was made possible by our new software, LAST, which can align vertebrate genomes in a few hours http://last.cbrc.jp/.

  18. Thermodynamics of Error Correction

    Science.gov (United States)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  19. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry

    OpenAIRE

    Fuchs, Franz G.; Hjelmervik, Jon M.

    2014-01-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire...

  20. The Accurate Particle Tracer Code

    CERN Document Server

    Wang, Yulei; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusion energy research, computational mathematics, software engineering, and high-performance computation. The APT code consists of seven main modules, including the I/O module, the initialization module, the particle pusher module, the parallelization module, the field configuration module, the external force-field module, and the extendible module. The I/O module, supported by Lua and Hdf5 projects, provides a user-friendly interface for both numerical simulation and data analysis. A series of new geometric numerical methods...

  1. Accurate thickness measurement of graphene

    Science.gov (United States)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  2. Accurate thickness measurement of graphene.

    Science.gov (United States)

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-03-29

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  3. Motion-corrected Fourier ptychography

    CERN Document Server

    Bian, Liheng; Guo, Kaikai; Suo, Jinli; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-01-01

    Fourier ptychography (FP) is a recently proposed computational imaging technique for high space-bandwidth product imaging. In real setups such as endoscope and transmission electron microscope, the common sample motion largely degrades the FP reconstruction and limits its practicability. In this paper, we propose a novel FP reconstruction method to efficiently correct for unknown sample motion. Specifically, we adaptively update the sample's Fourier spectrum from low spatial-frequency regions towards high spatial-frequency ones, with an additional motion recovery and phase-offset compensation procedure for each sub-spectrum. Benefiting from the phase retrieval redundancy theory, the required large overlap between adjacent sub-spectra offers an accurate guide for successful motion recovery. Experimental results on both simulated data and real captured data show that the proposed method can correct for unknown sample motion with its standard deviation being up to 10% of the field-of-view scale. We have released...

  4. Accurate Fiber Length Measurement Using Time-of-Flight Technique

    Science.gov (United States)

    Terra, Osama; Hussein, Hatem

    2016-06-01

    Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDR). In this paper accurate length measurement of different fiber lengths using the time-of-flight technique is performed. A setup is proposed to measure accurately lengths from 1 to 40 km at 1,550 and 1,310 nm using high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to meter by definition), by locking the time interval counter to the Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory of United Kingdom (NPL). Finally, a method is proposed to relatively correct the fiber refractive index to allow accurate fiber length measurement.
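    As a rough back-of-the-envelope illustration of the technique, the time-of-flight relation amounts to dividing the measured transit time by the group delay per unit length. A minimal Python sketch (the group-index value and the helper name are illustrative assumptions, not numbers from the paper):

        # Minimal sketch of the time-of-flight relation:
        #   L = c * dt / n_g   (one-way transit; divide by 2 for a round-trip echo).
        # The group index below is a typical value for standard single-mode
        # fiber near 1550 nm, assumed here for illustration only.
        C_VACUUM = 299_792_458.0  # m/s

        def fiber_length(transit_time_s: float, group_index: float = 1.4682,
                         round_trip: bool = False) -> float:
            length = C_VACUUM * transit_time_s / group_index
            return length / 2.0 if round_trip else length

        # Example: a one-way transit time of ~97.9 microseconds corresponds
        # to roughly 20 km of fiber.
        print(f"{fiber_length(97.9e-6):.1f} m")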

  5. A More Accurate Fourier Transform

    CERN Document Server

    Courtney, Elya

    2015-01-01

    Fourier transform methods are used to analyze functions and data sets to provide frequencies, amplitudes, and phases of underlying oscillatory components. Fast Fourier transform (FFT) methods offer speed advantages over evaluation of explicit integrals (EI) that define Fourier transforms. This paper compares frequency, amplitude, and phase accuracy of the two methods for well resolved peaks over a wide array of data sets including cosine series with and without random noise and a variety of physical data sets, including atmospheric $\\mathrm{CO_2}$ concentrations, tides, temperatures, sound waveforms, and atomic spectra. The FFT uses MIT's FFTW3 library. The EI method uses the rectangle method to compute the areas under the curve via complex math. Results support the hypothesis that EI methods are more accurate than FFT methods. Errors range from 5 to 10 times higher when determining peak frequency by FFT, 1.4 to 60 times higher for peak amplitude, and 6 to 10 times higher for phase under a peak. The ability t...
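    To make the comparison concrete, the sketch below contrasts the two estimators on a toy cosine whose frequency falls between FFT bins: the FFT peak is limited to the bin spacing, whereas an explicit rectangle-rule Fourier integral evaluated on a fine frequency grid can localize the peak more precisely. This is an illustration of the general idea only, not the paper's code (which uses the FFTW3 library):

        import numpy as np

        # Toy signal: a cosine whose frequency falls between FFT bins.
        fs, n = 100.0, 1000                 # sample rate (Hz), number of samples
        t = np.arange(n) / fs
        f_true = 3.137
        x = np.cos(2 * np.pi * f_true * t)

        # FFT estimate: peak bin of the discrete spectrum (resolution fs/n = 0.1 Hz).
        spec = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(n, d=1 / fs)
        f_fft = freqs[np.argmax(np.abs(spec))]

        # "Explicit integral" estimate: rectangle-rule Fourier integral on a
        # fine frequency grid (an illustrative stand-in for the EI method).
        f_grid = np.linspace(3.0, 3.3, 3001)
        amp = np.abs(np.exp(-2j * np.pi * np.outer(f_grid, t)) @ x) / fs
        f_ei = f_grid[np.argmax(amp)]

        print(f"true {f_true}, FFT {f_fft:.4f}, explicit integral {f_ei:.4f}")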

  6. Relativistic formulation of coordinate light time, Doppler and astrometric observables up to the second post-Minkowskian order

    CERN Document Server

    Hees, A; Poncin-Lafitte, C Le

    2014-01-01

    Given the extreme accuracy of modern space science, a precise relativistic modeling of observations is required. In particular, it is important to describe properly light propagation through the Solar System. For two decades, several modeling efforts based on the solution of the null geodesic equations have been proposed but they are mainly valid only for the first order Post-Newtonian approximation. However, with the increasing precision of ongoing space missions as Gaia, GAME, BepiColombo, JUNO or JUICE, we know that some corrections up to the second order have to be taken into account for future experiments. We present a procedure to compute the relativistic coordinate time delay, Doppler and astrometric observables avoiding the integration of the null geodesic equation. This is possible using the Time Transfer Function formalism, a powerful tool providing key quantities such as the time of flight of a light signal between two point-events and the tangent vector to its null-geodesic. Indeed we show how to ...

  7. NWS Corrections to Observations

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Form B-14 is the National Weather Service form entitled 'Notice of Corrections to Weather Records.' The forms are used to make corrections to observations on forms...

  8. Error Correction in Classroom

    Institute of Scientific and Technical Information of China (English)

    Dr. Grace Zhang

    2000-01-01

    Error correction is an important issue in foreign language acquisition. This paper investigates how students feel about the way in which error correction should take place in a Chinese-as-a-foreign-language classroom, based on large-scale empirical data. The study shows that there is a general consensus that error correction is necessary. In terms of correction strategy, the students preferred a combination of direct and indirect correction, or direct correction only. The former choice indicates that students would be happy to take either, so long as the correction gets done. Most students did not mind peer correction provided it is conducted in a constructive way. More than half of the students would feel uncomfortable if the same error they make in class is corrected consecutively more than three times. Taking these findings into consideration, we may want to encourage peer correction, use a combination of correction strategies (direct only if suitable), and do it in a non-threatening and sensitive way. It is hoped that this study will contribute to the effectiveness of error correction in a Chinese language classroom, and it may also have wider implications for other languages.

  9. Second-order accurate finite volume method for well-driven flows

    CERN Document Server

    Dotlić, Milan; Pokorni, Boris; Pušić, Milenko; Dimkić, Milan

    2013-01-01

    We consider a finite volume method for a well-driven fluid flow in a porous medium. Due to the singularity of the well, modeling in the near-well region with standard numerical schemes results in a completely wrong total well flux and an inaccurate hydraulic head. Local grid refinement can help, but it comes at computational cost. In this article we propose two methods to address well singularity. In the first method the flux through well faces is corrected using a logarithmic function, in a way related to the Peaceman correction. Coupling this correction with a second-order accurate two-point scheme gives a greatly improved total well flux, but the resulting scheme is still not even first order accurate on coarse grids. In the second method fluxes in the near-well region are corrected by representing the hydraulic head as a sum of a logarithmic and a linear function. This scheme is second-order accurate.
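    For readers unfamiliar with logarithmic well corrections, the classical Peaceman well model to which such flux corrections are related reads, in LaTeX (a standard reservoir/groundwater form, not the authors' exact scheme):

        Q_w = \frac{2\pi T\,\bigl(h_B - h_w\bigr)}{\ln\!\left(r_e / r_w\right)}, \qquad r_e \approx 0.2\,\Delta x,

    where Q_w is the well flux, T the transmissivity, h_B the numerically computed head in the well block, h_w the prescribed head in the well, r_w the well radius, and r_e Peaceman's equivalent radius of a square grid block of size Δx. Correcting the fluxes through well faces with a logarithmic profile, as in the first method above, plays the same role: it restores the singular 1/r behavior of the head near the well that a linear two-point flux cannot represent.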

  10. How flatbed scanners upset accurate film dosimetry.

    Science.gov (United States)

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S

    2016-01-21

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a change in scanner readout along the lateral scan axis. Although anisotropic light scattering has been presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE of two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) with Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate the effect of light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range of about 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels at the extreme lateral position. Light polarization due to the film and the scanner's optical mirror system is the main contributor, and differs in magnitude for the red, green and blue channels. We conclude that any Gafchromic EBT-type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE, and therefore determination of the LSE per color channel and per dose delivered to the film.
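    One practical way to act on this conclusion, independent of the authors' own protocol, is to characterize the lateral response per color channel with a uniformly exposed film strip and divide it out of subsequent scans. A minimal sketch (function name and polynomial order are illustrative assumptions):

        import numpy as np

        def lateral_correction(profile: np.ndarray, center_index: int, deg: int = 2) -> np.ndarray:
            """Multiplicative correction factors along the lateral scan axis.

            `profile` is the scanner reading of a uniformly exposed film strip
            as a function of lateral pixel position (one color channel, one
            dose level). A smooth polynomial fit is normalized to the central
            response, so dividing later scans by these factors flattens the
            lateral scan effect. Generic illustration, not the paper's protocol.
            """
            x = np.arange(profile.size)
            fit = np.polyval(np.polyfit(x, profile, deg), x)
            return fit / fit[center_index]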

  11. Design and test of an ultrasonic-velocity device with temperature correction for accurately measuring the concentration of NaCl aqueous solutions

    Institute of Scientific and Technical Information of China (English)

    孟瑞锋; 马小康; 王州博; 董龙梅; 杨涛; 刘东红

    2015-01-01

    abnormal sample points and checking out the regression coefficient of the model by t-test. The developed model had high prediction accuracy and stability with the maximum prediction error of 0.25 g/100 g, the determination coefficient of calibration (Rcal2) of 0.9992, the determination coefficient of validation (Rval2) of 0.9988, the root mean square error of calibration (RMSEC) of 0.0894 g/100 g, the root mean square error of prediction (RMSEP) of 0.1015 g/100 g and the ratio performance deviation (RPD) of 28.57, which indicated that the model could be used for practical detection accurately and steadily, and was helpful for on-line measuring.

  12. 38 CFR 4.46 - Accurate measurement.

    Science.gov (United States)

    2010-07-01

    38 CFR Pensions, Bonuses, and Veterans' Relief — Rating Disabilities, Disability Ratings, The Musculoskeletal System, § 4.46 Accurate measurement: Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  13. BASIC: A Simple and Accurate Modular DNA Assembly Method.

    Science.gov (United States)

    Storch, Marko; Casini, Arturo; Mackrow, Ben; Ellis, Tom; Baldwin, Geoff S

    2017-01-01

    Biopart Assembly Standard for Idempotent Cloning (BASIC) is a simple, accurate, and robust DNA assembly method. The method is based on linker-mediated DNA assembly and provides highly accurate DNA assembly with 99 % correct assemblies for four parts and 90 % correct assemblies for seven parts [1]. The BASIC standard defines a single entry vector for all parts flanked by the same prefix and suffix sequences and its idempotent nature means that the assembled construct is returned in the same format. Once a part has been adapted into the BASIC format it can be placed at any position within a BASIC assembly without the need for reformatting. This allows laboratories to grow comprehensive and universal part libraries and to share them efficiently. The modularity within the BASIC framework is further extended by the possibility of encoding ribosomal binding sites (RBS) and peptide linker sequences directly on the linkers used for assembly. This makes BASIC a highly versatile library construction method for combinatorial part assembly including the construction of promoter, RBS, gene variant, and protein-tag libraries. In comparison with other DNA assembly standards and methods, BASIC offers a simple robust protocol; it relies on a single entry vector, provides for easy hierarchical assembly, and is highly accurate for up to seven parts per assembly round [2].

  14. [Orthognathic surgery: corrective bone operations].

    Science.gov (United States)

    Reuther, J

    2000-05-01

    The article reviews the history of orthognathic surgery from the middle of the last century up to the present. Initially, mandibular osteotomies were only performed in cases of severe malformations. But during the last century a precise and standardized procedure for correction of the mandible was established. Multiple modifications allowed control of small fragments, functionally stable osteosynthesis, and finally a precise positioning of the condyle. In 1955 Obwegeser and Trauner introduced the sagittal split osteotomy by an intraoral approach. It was the final breakthrough for orthognathic surgery as a standard treatment for corrections of the mandible. Surgery of the maxilla dates back to the nineteenth century. B. von Langenbeck from Berlin is said to have performed the first Le Fort I osteotomy in 1859. After minor changes, Wassmund corrected a posttraumatic malocclusion by a Le Fort I osteotomy in 1927. But it was Axhausen who risked the total mobilization of the maxilla in 1934. By additional modifications and further refinements, Obwegeser paved the way for this approach to become a standard procedure in maxillofacial surgery. Tessier mobilized the whole midface by a Le Fort III osteotomy and showed new perspectives in the correction of severe malformations of the facial bones, creating the basis of modern craniofacial surgery. While the last 150 years were distinguished by the creation and standardization of surgical methods, the present focus lies on precise treatment planning and the consideration of functional aspects of the whole stomatognathic system. To date, 3D visualization by CT scans, stereolithographic models, and computer-aided treatment planning and simulation allow surgery of complex cases and accurate predictions of soft tissue changes.

  15. The FLUKA code: An accurate simulation tool for particle therapy

    CERN Document Server

    Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...

  16. Accurate characterization of OPVs: Device masking and different solar simulators

    DEFF Research Database (Denmark)

    Gevorgyan, Suren; Carlé, Jon Eggert; Søndergaard, Roar R.;

    2013-01-01

    One of the prime objectives of organic solar cell research has been to improve the power conversion efficiency. Unfortunately, the accurate determination of this property is not straightforward, which has led to the recommendation that record devices be tested and certified at a few accredited laboratories following rigorous ASTM and IEC standards. This work tries to address some of the issues confronting the standard laboratory in this regard. Solar simulator lamps are investigated for their light field homogeneity and direct versus diffuse components, as well as the correct device area...

  17. An Accurate Technique for Calculation of Radiation From Printed Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min; Sorensen, Stig B.; Jorgensen, Erik;

    2011-01-01

    The accuracy of various techniques for calculating the radiation from printed reflectarrays is examined, and an improved technique based on the equivalent currents approach is proposed. The equivalent currents are found from a continuous plane wave spectrum calculated by use of the spectral dyadic Green's function. This ensures a correct relation between the equivalent electric and magnetic currents and thus allows an accurate calculation of the radiation over the entire far-field sphere. A comparison to DTU-ESA Facility measurements of a reference offset reflectarray designed and manufactured...

  18. Accurate measurement of ultrasonic velocity by eliminating the diffraction effect

    Institute of Scientific and Technical Information of China (English)

    WEI Tingcun

    2003-01-01

    An accurate method for measuring ultrasonic velocity by the pulse interference method, eliminating the diffraction effect, has been investigated experimentally in the VHF range. Two silicate glasses were taken as the specimens; their frequency dependences of longitudinal velocities were measured in the frequency range 50-350 MHz, and the phase advances of ultrasonic signals caused by the diffraction effect were calculated using A. O. Williams' theoretical expression. For the frequency dependences of the longitudinal velocities, the measurement results were in good agreement with the simulated ones in which the phase advances were included. It has been shown that the velocity error due to the diffraction effect can be corrected very well by this method.

  19. Attenuation correction for small animal PET tomographs

    Energy Technology Data Exchange (ETDEWEB)

    Chow, Patrick L [David Geffen School of Medicine at UCLA, Crump Institute for Molecular Imaging, University of California, 700 Westwood Plaza, Los Angeles, CA 90095 (United States); Rannou, Fernando R [Departamento de Ingenieria Informatica, Universidad de Santiago de Chile (USACH), Av. Ecuador 3659, Santiago (Chile); Chatziioannou, Arion F [David Geffen School of Medicine at UCLA, Crump Institute for Molecular Imaging, University of California, 700 Westwood Plaza, Los Angeles, CA 90095 (United States)

    2005-04-21

    Attenuation correction is one of the important corrections required for quantitative positron emission tomography (PET). This work will compare the quantitative accuracy of attenuation correction using a simple global scale factor with traditional transmission-based methods acquired either with a small animal PET or a small animal x-ray computed tomography (CT) scanner. Two phantoms (one mouse-sized and one rat-sized) and two animal subjects (one mouse and one rat) were scanned in CTI Concorde Microsystems' microPET® Focus™ for emission and transmission data and in ImTek's MicroCAT™ II for transmission data. PET emission image values were calibrated against a scintillation well counter. Results indicate that the scale factor method of attenuation correction places the average measured activity concentration about the expected value, without correcting for the cupping artefact from attenuation. Noise analysis in the phantom studies with the PET-based method shows that noise in the transmission data increases the noise in the corrected emission data. The CT-based method was accurate and delivered low-noise images suitable for both PET data correction and PET tracer localization.
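    For orientation, transmission- or CT-based correction applies a per-line-of-response attenuation correction factor ACF = exp(∫ μ dl), whereas the global-scale-factor method replaces all of these with a single number. A minimal sketch of that relation (the water attenuation coefficient and path length are generic illustrative values, not figures from this study):

        import numpy as np

        def acf_from_mu(mu_along_lor: np.ndarray, step_cm: float) -> float:
            """Attenuation correction factor for one line of response (LOR),
            computed as exp of the line integral of mu over the object."""
            return float(np.exp(np.sum(mu_along_lor) * step_cm))

        # Water-equivalent mouse-sized object: mu(511 keV) ~ 0.096 1/cm over ~3 cm.
        mu, path_cm = 0.096, 3.0
        print(acf_from_mu(np.full(30, mu), path_cm / 30))   # ~1.33
        # A "global scale factor" correction multiplies every LOR by one such
        # value, ignoring object- and LOR-specific variation (hence the
        # residual cupping artefact noted above).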

  20. Spelling Correction in Context

    OpenAIRE

    Pinot, Guillaume; Enguehard, Chantal

    2005-01-01

    Spelling checkers, which are frequently used nowadays, cannot correct real-word errors. Thus, the erroneous replacement of dessert by desert is not detected. In this article, we propose an algorithm based on examining the context of words to correct this kind of spelling error. The algorithm is trained on a raw corpus.

  1. Surface EMG measurements during fMRI at 3T : Accurate EMG recordings after artifact correction

    NARCIS (Netherlands)

    van Duinen, Hiske; Zijdewind, Inge; Hoogduin, H; Maurits, N

    2005-01-01

    In this experiment, we have measured surface EMG of the first dorsal interosseus during predefined submaximal isometric contractions (5, 15, 30, 50, and 70% of maximal force) of the index finger simultaneously with fMRI measurements. Since we have used sparse sampling fMRI (3-s scanning; 2-s non-sca

  2. Probabilistic quantum error correction

    CERN Document Server

    Fern, J; Fern, Jesse; Terilla, John

    2002-01-01

    There are well known necessary and sufficient conditions for a quantum code to correct a set of errors. We study weaker conditions under which a quantum code may correct errors with probabilities that may be less than one. We work with stabilizer codes and as an application study how the nine qubit code, the seven qubit code, and the five qubit code perform when there are errors on more than one qubit. As a second application, we discuss the concept of syndrome quality and use it to suggest a way that quantum error correction can be practically improved.

  3. Contrast image correction method

    Science.gov (United States)

    Schettini, Raimondo; Gasparini, Francesca; Corchs, Silvia; Marini, Fabrizio; Capra, Alessandro; Castorina, Alfio

    2010-04-01

    A method for contrast enhancement is proposed. The algorithm is based on a local and image-dependent exponential correction. The technique aims to correct images that simultaneously present overexposed and underexposed regions. To prevent halo artifacts, the bilateral filter is used as the mask of the exponential correction. Depending on the characteristics of the image (piloted by histogram analysis), an automated parameter-tuning step is introduced, followed by stretching, clipping, and saturation preserving treatments. Comparisons with other contrast enhancement techniques are presented. The Mean Opinion Score (MOS) experiment on grayscale images gives the greatest preference score for our algorithm.
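    A minimal sketch of a local, image-dependent exponential correction of this flavor is shown below, using OpenCV's bilateral filter as the halo-resistant mask; the exponent mapping and parameter values are illustrative assumptions rather than the authors' exact formulation:

        import cv2
        import numpy as np

        def local_exponential_correction(gray_u8: np.ndarray) -> np.ndarray:
            """Local, image-dependent exponential (gamma-like) correction.

            The per-pixel exponent is driven by a bilateral-filtered copy of
            the image, which acts as an edge-preserving mask to limit halos.
            Parameter values and the exponent mapping are illustrative only.
            """
            img = gray_u8.astype(np.float32) / 255.0
            mask = cv2.bilateralFilter(gray_u8, d=9, sigmaColor=50, sigmaSpace=9)
            mask = mask.astype(np.float32) / 255.0
            # Dark regions (small mask) get exponents < 1 (brightening),
            # bright regions get exponents > 1 (darkening).
            exponent = np.power(2.0, 2.0 * mask - 1.0)
            out = np.power(img, exponent)
            return np.clip(out * 255.0, 0, 255).astype(np.uint8)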

  4. Corrected Age for Preemies

    Science.gov (United States)


  5. Respiration correction by clustering in ultrasound images

    Science.gov (United States)

    Wu, Kaizhi; Chen, Xi; Ding, Mingyue; Sang, Nong

    2016-03-01

    Respiratory motion is a challenging factor for image acquisition, image-guided procedures, and perfusion quantification using contrast-enhanced ultrasound in the abdominal and thoracic region. In order to reduce the influence of respiratory motion, respiratory correction methods were investigated. In this paper we propose a novel, cluster-based respiratory correction method. In the proposed method, we first assign image frames to their corresponding respiratory phases using spectral clustering. We then achieve image correction automatically by finding a cluster whose points are close to each other. Unlike traditional gating methods, we do not need to estimate the breathing cycle accurately, because images from the same respiratory phase are similar and therefore lie close together in the high-dimensional space. The proposed method is tested on a simulated image sequence and a real ultrasound image sequence. The experimental results show the effectiveness of our proposed method both quantitatively and qualitatively.
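    A minimal sketch of the clustering step, using scikit-learn's spectral clustering on a frame-similarity affinity matrix, is shown below; the similarity measure and the number of phases are illustrative assumptions, not the paper's exact choices:

        import numpy as np
        from sklearn.cluster import SpectralClustering

        def cluster_frames_by_phase(frames: np.ndarray, n_phases: int = 4) -> np.ndarray:
            """Group ultrasound frames by respiratory phase via spectral clustering.

            `frames` has shape (n_frames, height, width). Frames acquired at
            the same phase look alike, so an affinity matrix built from
            pairwise correlations clusters them without estimating the
            breathing cycle explicitly. Parameter choices are illustrative.
            """
            flat = frames.reshape(len(frames), -1).astype(np.float64)
            flat -= flat.mean(axis=1, keepdims=True)
            norms = np.linalg.norm(flat, axis=1, keepdims=True)
            affinity = np.clip(flat @ flat.T / (norms @ norms.T), 0.0, 1.0)
            labels = SpectralClustering(n_clusters=n_phases,
                                        affinity="precomputed",
                                        random_state=0).fit_predict(affinity)
            return labels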

  6. moco: Fast Motion Correction for Calcium Imaging

    Directory of Open Access Journals (Sweden)

    Alexander eDubbs

    2016-02-01

    Full Text Available Motion correction is the first step in a pipeline of algorithms to analyze calcium imaging videos and extract biologically relevant information, for example the network structure of the neurons therein. Fast motion correction is especially critical for closed-loop activity-triggered stimulation experiments, where accurate detection and targeting of specific cells is necessary. We introduce a novel motion-correction algorithm that uses a Fourier-transform approach, and a combination of judicious downsampling and the accelerated computation of many $L_2$ norms using dynamic programming and two-dimensional, FFT-accelerated convolutions, to enhance its efficiency. Its accuracy is comparable to that of established community-used algorithms, and it is more stable to large translational motions. It is programmed in Java and is compatible with ImageJ.
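    For readers unfamiliar with Fourier-transform registration, the sketch below shows plain phase correlation for estimating a frame's translation; it illustrates the general idea only and is not the moco algorithm itself (moco is written in Java for ImageJ):

        import numpy as np

        def estimate_shift(ref: np.ndarray, frame: np.ndarray) -> tuple:
            """Phase correlation: estimate the integer (dy, dx) by which
            `frame` is translated relative to `ref`."""
            f_ref = np.fft.fft2(ref)
            f_frame = np.fft.fft2(frame)
            cross = np.conj(f_ref) * f_frame
            cross /= np.abs(cross) + 1e-12          # keep phase only
            corr = np.fft.ifft2(cross).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            # Wrap shifts larger than half the image size to negative values.
            dy = dy - ref.shape[0] if dy > ref.shape[0] // 2 else dy
            dx = dx - ref.shape[1] if dx > ref.shape[1] // 2 else dx
            return int(dy), int(dx)   # shift `frame` by (-dy, -dx) to align it to `ref`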

  7. moco: Fast Motion Correction for Calcium Imaging.

    Science.gov (United States)

    Dubbs, Alexander; Guevara, James; Yuste, Rafael

    2016-01-01

    Motion correction is the first step in a pipeline of algorithms to analyze calcium imaging videos and extract biologically relevant information, for example the network structure of the neurons therein. Fast motion correction is especially critical for closed-loop activity-triggered stimulation experiments, where accurate detection and targeting of specific cells is necessary. We introduce a novel motion-correction algorithm which uses a Fourier-transform approach, and a combination of judicious downsampling and the accelerated computation of many L2 norms using dynamic programming and two-dimensional, FFT-accelerated convolutions, to enhance its efficiency. Its accuracy is comparable to that of established community-used algorithms, and it is more stable to large translational motions. It is programmed in Java and is compatible with ImageJ.

  8. Correctness is not enough

    CERN Document Server

    Pryor, Louise

    2008-01-01

    The usual aim of spreadsheet audit is to verify correctness. There are two problems with this: first, it is often difficult to tell whether the spreadsheets in question are correct, and second, even if they are, they may still give the wrong results. These problems are explained in this paper, which presents the key criteria for judging a spreadsheet and discusses how those criteria can be achieved

  9. Nested Quantum Annealing Correction

    OpenAIRE

    Vinci, Walter; Albash, Tameem; Lidar, Daniel A.

    2015-01-01

    We present a general error-correcting scheme for quantum annealing that allows for the encoding of a logical qubit into an arbitrarily large number of physical qubits. Given any Ising model optimization problem, the encoding replaces each logical qubit by a complete graph of degree $C$, representing the distance of the error-correcting code. A subsequent minor-embedding step then implements the encoding on the underlying hardware graph of the quantum annealer. We demonstrate experimentally th...

  10. Accurate transition rates for intercombination lines of singly ionized nitrogen

    International Nuclear Information System (INIS)

    The transition energies and rates for the 2s²2p² ³P₁,₂ - 2s2p³ ⁵S₂° and 2s²2p3s - 2s²2p3p intercombination transitions have been calculated using term-dependent nonorthogonal orbitals in the multiconfiguration Hartree-Fock approach. Several sets of spectroscopic and correlation nonorthogonal functions have been chosen to describe adequately the term dependence of the wave functions and various correlation corrections. Special attention has been focused on the accurate representation of strong interactions between the 2s2p³ ¹,³P₁° and 2s²2p3s ¹,³P₁° levels. The relativistic corrections are included through the one-body mass correction, Darwin, and spin-orbit operators and the two-body spin-other-orbit and spin-spin operators in the Breit-Pauli Hamiltonian. The importance of core-valence correlation effects has been examined. The accuracy of the present transition rates is evaluated by the agreement between the length and velocity formulations combined with the agreement between the calculated and measured transition energies. The present results for transition probabilities, branching fractions, and lifetimes have been compared with previous calculations and experiments.

  11. Accurate ab initio vibrational energies of methyl chloride

    Energy Technology Data Exchange (ETDEWEB)

    Owens, Alec, E-mail: owens@mpi-muelheim.mpg.de [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Department of Physics and Astronomy, University College London, Gower Street, WC1E 6BT London (United Kingdom); Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan [Department of Physics and Astronomy, University College London, Gower Street, WC1E 6BT London (United Kingdom); Thiel, Walter [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany)

    2015-06-28

    Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH₃³⁵Cl and CH₃³⁷Cl. The respective PESs, CBS-35^HL and CBS-37^HL, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY₃Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35^HL and CBS-37^HL PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm⁻¹, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH₃Cl without empirical refinement of the respective PESs.

  12. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Energy Technology Data Exchange (ETDEWEB)

    Marcondes, Michel L., E-mail: michel@if.usp.br [Physics Institute, University of Sao Paulo, Sao Paulo, 05508-090 (Brazil); Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Shukla, Gaurav, E-mail: shukla@physics.umn.edu [School of Physics and Astronomy, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States); Silveira, Pedro da [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Wentzcovitch, Renata M., E-mail: wentz002@umn.edu [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States)

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures are still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  13. Laboratory Building for Accurate Determination of Plutonium

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The accurate determination of plutonium is one of the most important assay techniques for nuclear fuel; it is also the key to chemical measurement transfer and the basis of the nuclear material balance. An

  14. Accurate Calculation of the Differential Cross Section of Bhabha Scattering with Photon Chain Loops Contribution in QED

    Institute of Scientific and Technical Information of China (English)

    JIANG Min; FANG Zhen-Yun; SANG Wen-Long; GAO Fei

    2006-01-01

    In the minimal electromagnetic coupling model of the interaction between the photon and the electron (positron), we accurately calculate the photon-chain renormalized propagator and obtain an accurate result for the differential cross section of Bhabha scattering with a photon-chain renormalized propagator in quantum electrodynamics. The related radiative corrections are briefly reviewed and discussed.

  15. Partial Volume Correction in Quantitative Amyloid Imaging

    Science.gov (United States)

    Su, Yi; Blazey, Tyler M.; Snyder, Abraham Z.; Raichle, Marcus E.; Marcus, Daniel S.; Ances, Beau M.; Bateman, Randall J.; Cairns, Nigel J.; Aldea, Patricia; Cash, Lisa; Christensen, Jon J.; Friedrichsen, Karl; Hornbeck, Russ C.; Farrar, Angela M.; Owen, Christopher J.; Mayeux, Richard; Brickman, Adam M.; Klunk, William; Price, Julie C.; Thompson, Paul M.; Ghetti, Bernardino; Saykin, Andrew J.; Sperling, Reisa A.; Johnson, Keith A.; Schofield, Peter R.; Buckles, Virginia; Morris, John C.; Benzinger, Tammie. LS.

    2014-01-01

    Amyloid imaging is a valuable tool for research and diagnosis in dementing disorders. As positron emission tomography (PET) scanners have limited spatial resolution, measured signals are distorted by partial volume effects. Various techniques have been proposed for correcting partial volume effects, but there is no consensus as to whether these techniques are necessary in amyloid imaging, and, if so, how they should be implemented. We evaluated a two-component partial volume correction technique and a regional spread function technique using both simulated and human Pittsburgh compound B (PiB) PET imaging data. Both correction techniques compensated for partial volume effects and yielded improved detection of subtle changes in PiB retention. However, the regional spread function technique was more accurate in application to simulated data. Because PiB retention estimates depend on the correction technique, standardization is necessary to compare results across groups. Partial volume correction has sometimes been avoided because it increases the sensitivity to inaccuracy in image registration and segmentation. However, our results indicate that appropriate PVC may enhance our ability to detect changes in amyloid deposition. PMID:25485714
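
    To make the two-component idea concrete, the following Python sketch applies a Meltzer-style correction, dividing the PET signal by a tissue-fraction map smoothed with the scanner point spread function. The array names, PSF width and masking threshold are assumptions for illustration; the study's actual pipeline (including the regional spread function variant) is more involved.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def two_component_pvc(pet, brain_mask, fwhm_mm=6.0, voxel_mm=2.0):
            """Meltzer-style two-component partial volume correction (sketch).

            pet        : 3-D PET image (e.g. PiB uptake)
            brain_mask : binary tissue mask (1 = brain tissue, 0 = CSF/background)
            """
            sigma = fwhm_mm / (2.355 * voxel_mm)            # PSF FWHM -> Gaussian sigma (voxels)
            tissue_fraction = gaussian_filter(brain_mask.astype(float), sigma)
            corrected = np.zeros_like(pet, dtype=float)
            ok = tissue_fraction > 0.1                      # avoid dividing by tiny fractions
            corrected[ok] = pet[ok] / tissue_fraction[ok]
            return corrected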

  16. Radiative corrections to parity-non-conservation in atoms

    CERN Document Server

    Kuchiev, M Yu

    2003-01-01

    Recent progress in calculations of QED radiative corrections to parity nonconservation in atoms is reviewed. The QED vacuum polarization, the self-energy corrections and the vertex corrections are shown to be described very reliably by different methods used by different groups. All new calculations have recently converged to very close final values. Each separate radiative correction is very large, above 1% for heavy atoms, but having different signs they partly compensate each other. Our results for the radiative corrections for all atoms are presented. The corrections are -0.54% for 133Cs, and -0.70% for 205Tl, 208Pb, and 209Bi. The result for 133Cs reconciles the most accurate atomic experimental data for the 6s-7s PNC amplitude in 133Cs of Wood et al. with the standard model.

  17. Invariant Image Watermarking Using Accurate Zernike Moments

    Directory of Open Access Journals (Sweden)

    Ismail A. Ismail

    2010-01-01

    Full Text Available Problem statement: Digital image watermarking is the most popular method for image authentication, copyright protection and content description. Zernike moments are the most widely used moments in image processing and pattern recognition. The magnitudes of Zernike moments are rotation invariant, so they can be used just as a watermark signal or be further modified to carry embedded data. Zernike moments computed in Cartesian coordinates are not accurate due to geometric and numerical errors. Approach: In this study, we employed a robust image-watermarking algorithm using accurate Zernike moments. These moments are computed in polar coordinates, where both approximation and geometric errors are removed. Accurate Zernike moments are used in image watermarking and proved to be robust against different kinds of geometric attacks. The performance of the proposed algorithm is evaluated using standard images. Results: Experimental results show that accurate Zernike moments achieve a higher degree of robustness than approximate ones against rotation, scaling, flipping, shearing and affine transformation. Conclusion: By computing accurate Zernike moments, the embedded watermark bits can be extracted at a low error rate.
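
    The polar-coordinate computation referred to above can be sketched in a few lines of Python: the Zernike radial polynomial is evaluated exactly and the moment integral is approximated on a polar grid inside the unit disc, so no square-pixel geometric error is introduced. The grid sizes and toy image are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from math import factorial

        def radial_poly(n, m, rho):
            """Zernike radial polynomial R_n^|m|(rho)."""
            m = abs(m)
            r = np.zeros_like(rho)
            for k in range((n - m) // 2 + 1):
                c = ((-1) ** k * factorial(n - k)
                     / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k)))
                r += c * rho ** (n - 2 * k)
            return r

        def zernike_moment(f_polar, rho, theta, n, m):
            """A_nm ~ (n+1)/pi * sum f(rho,theta) R_n^m(rho) exp(-i m theta) rho drho dtheta."""
            drho = rho[1, 0] - rho[0, 0]
            dtheta = theta[0, 1] - theta[0, 0]
            kernel = radial_poly(n, m, rho) * np.exp(-1j * m * theta)
            return (n + 1) / np.pi * np.sum(f_polar * kernel * rho) * drho * dtheta

        # Toy image sampled on a 128 x 256 polar grid; |A_nm| is the rotation-invariant feature.
        rho, theta = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 2 * np.pi, 256), indexing="ij")
        f_polar = (rho ** 3) * np.cos(3 * theta)
        print(abs(zernike_moment(f_polar, rho, theta, 3, 3)))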

  18. An Improved Wavelet Correction for Zero Shifted Accelerometer Data

    Directory of Open Access Journals (Sweden)

    Timothy S. Edwards

    2003-01-01

    Full Text Available Accelerometer data from shock measurements often contains a spurious DC drifting phenomenon known as zero shifting. This erroneous signal can be caused by a variety of sources. The most conservative approach when dealing with such data is to discard it and collect a different set with steps taken to prevent the zero shifting. This approach is rarely practical, however. The test article may have been destroyed or it may be impossible or prohibitively costly to recreate the test. A method has been proposed by which wavelets may be used to correct the acceleration data. By comparing the corrected accelerometer data to an independent measurement of the acceleration from a laser vibrometer this paper shows that the corrected data, in the cases presented, accurately represents the shock. A method is presented by which the analyst may accurately choose the wavelet correction parameters. The comparisons are made in the time and frequency domains, as well as with the shock response spectrum.
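
    A minimal sketch of the wavelet idea is given below using PyWavelets: the signal is decomposed to a deep level and the coarsest approximation band, which carries the spurious DC drift, is zeroed before reconstruction. The choice of wavelet, the decomposition level and the synthetic shock record are assumptions for illustration; they are not the parameter-selection method described in the paper.

        import numpy as np
        import pywt

        def remove_zero_shift(accel, wavelet="db4", level=8):
            """Suppress spurious DC drift by zeroing the coarsest approximation band (sketch)."""
            coeffs = pywt.wavedec(accel, wavelet, level=level)
            coeffs[0] = np.zeros_like(coeffs[0])
            corrected = pywt.waverec(coeffs, wavelet)
            return corrected[: len(accel)]              # waverec may pad by one sample

        # Synthetic shock: decaying oscillation plus an artificial zero shift after 0.1 s.
        t = np.linspace(0.0, 1.0, 4096)
        shock = np.exp(-10 * t) * np.sin(2 * np.pi * 400 * t) + 0.3 * (t > 0.1)
        print(np.mean(remove_zero_shift(shock)[2048:]))  # residual offset should be near zero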

  19. Renormalons and Power Corrections

    CERN Document Server

    Beneke, Martin

    2000-01-01

    Even for short-distance dominated observables the QCD perturbation expansion is never complete. The divergence of the expansion through infrared renormalons provides formal evidence of this fact. In this article we review how this apparent failure can be turned into a useful tool to investigate power corrections to hard processes in QCD.

  20. ERRORS AND CORRECTION

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    To err is human. Since the 1960s, most second language teachers or language theorists have regarded errors as natural and inevitable in the language learning process. Instead of regarding them as terrible and disappointing, teachers have come to realize their value. This paper will consider these values, analyze some errors and propose some effective correction techniques.

  1. Text Induced Spelling Correction

    NARCIS (Netherlands)

    Reynaert, M.W.C.

    2004-01-01

    We present TISC, a language-independent and context-sensitive spelling checking and correction system designed to facilitate the automatic removal of non-word spelling errors in large corpora. Its lexicon is derived from a very large corpus of raw text, without supervision, and contains word unigram

  2. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    ... of the model correction factor method is that, in a simpler form not using gradient information on the original limit state function, or only using this information once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations of the reliability. Methods...

  3. Writing: Revisions and Corrections

    Science.gov (United States)

    Kohl, Herb

    1978-01-01

    A fifth grader wanted to know what he had to do to get all his ideas the way he wanted them in his story writing "and" have the spelling, punctuation and quotation marks correctly styled. His teacher encouraged him to think about writing as a process and provided the student with three steps as guidelines for effective writing. (Author/RK)

  4. 75 FR 68409 - Correction

    Science.gov (United States)

    2010-11-08

    Presidential Determination No. 2010-14 of September 3, 2010--Unexpected Urgent Refugee and Migration Needs Resulting From Flooding in Pakistan. Correction: In Presidential document 2010-27673 beginning..., the Presidential Determination number should read ``2010-14'' (Presidential Sig.) [FR Doc....

  5. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.

    Science.gov (United States)

    Fuchs, Franz G; Hjelmervik, Jon M

    2016-02-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results.

  6. Fast and accurate determination of modularity and its effect size

    CERN Document Server

    Treviño, Santiago; Del Genio, Charo I; Bassler, Kevin E

    2014-01-01

    We present a fast spectral algorithm for community detection in complex networks. Our method searches for the partition with the maximum value of the modularity via the interplay of several refinement steps that include both agglomeration and division. We validate the accuracy of the algorithm by applying it to several real-world benchmark networks. On all these, our algorithm performs as well or better than any other known polynomial scheme. This allows us to extensively study the modularity distribution in ensembles of Erdős-Rényi networks, producing theoretical predictions for means and variances inclusive of finite-size corrections. Our work provides a way to accurately estimate the effect size of modularity, providing a z-score measure of it and enabling a more informative comparison of networks with different numbers of nodes and links.
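
    The effect-size idea can be illustrated with a short networkx sketch that compares a graph's modularity against an ensemble of Erdős-Rényi graphs with the same numbers of nodes and links and reports the resulting z-score. A greedy modularity optimiser is used here purely for brevity; it is not the spectral algorithm of the paper, and the ensemble size is an arbitrary assumption.

        import numpy as np
        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities, modularity

        def modularity_z_score(G, n_random=50, seed=0):
            """z-score of G's modularity relative to Erdos-Renyi graphs with the same n and m."""
            q_obs = modularity(G, greedy_modularity_communities(G))
            n, m = G.number_of_nodes(), G.number_of_edges()
            rng = np.random.default_rng(seed)
            q_rand = []
            for _ in range(n_random):
                R = nx.gnm_random_graph(n, m, seed=int(rng.integers(1 << 30)))
                q_rand.append(modularity(R, greedy_modularity_communities(R)))
            return (q_obs - np.mean(q_rand)) / np.std(q_rand)

        print(modularity_z_score(nx.karate_club_graph()))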

  7. Cyclic period changes and the light-time effect in eclipsing binaries: A low-mass companion around the system VV Ursae Majoris

    Science.gov (United States)

    Tanrıver, Mehmet

    2015-04-01

    In this article, a period analysis of the late-type eclipsing binary VV UMa is presented. This work is based on the periodic variation of eclipse timings of the VV UMa binary. We determined the orbital properties and mass of a third orbiting body in the system by analyzing the light-travel time effect. The O-C diagram constructed for all available minima times of VV UMa exhibits a cyclic character superimposed on a linear variation. This variation includes three maxima and two minima within approximately 28,240 orbital periods of the system, which can be explained as the light-travel time effect (LITE) of an unseen third body in a triple system that causes variations of the eclipse arrival times. New parameter values of the light-travel time effect due to the third body were computed with a period of 23.22 ± 0.17 years in the system. The cyclic-variation analysis produces a value of 0.0139 day as the semi-amplitude of the light-travel time effect and 0.35 as the orbital eccentricity of the third body. The mass of the third body that orbits the eclipsing binary stars is 0.787 ± 0.02 M⊙, and the semi-major axis of its orbit is 10.75 AU.
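
    The timing delay behind such an O-C analysis is the classical light-travel-time term. The Python sketch below evaluates it over one outer orbit; the eccentricity matches the value quoted above, while the projected semi-major axis and argument of periastron are illustrative assumptions rather than the published solution.

        import numpy as np

        C_AU_PER_DAY = 173.1446          # speed of light in astronomical units per day

        def lite_delay(nu, a12_sini_au, e, omega):
            """Light-travel-time delay (days) of eclipse timings caused by a third body.

            nu    : true anomaly of the binary's barycentric orbit (rad)
            omega : argument of periastron (rad)
            """
            r_term = (1.0 - e**2) / (1.0 + e * np.cos(nu))
            return a12_sini_au / C_AU_PER_DAY * (r_term * np.sin(nu + omega) + e * np.sin(omega))

        nu = np.linspace(0.0, 2.0 * np.pi, 1000)
        delay = lite_delay(nu, a12_sini_au=2.4, e=0.35, omega=np.deg2rad(80.0))
        print("semi-amplitude ~", 0.5 * (delay.max() - delay.min()), "days")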

  8. Accurate tracking control in LOM application

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The fabrication of an accurate prototype directly from a CAD model in a short time depends on accurate tracking control and reference trajectory planning in Laminated Object Manufacturing (LOM) applications. An improvement in contour accuracy is acquired by the introduction of a tracking controller and a trajectory generation policy. A model of the X-Y positioning system of the LOM machine is developed as the design basis of the tracking controller. The ZPETC (Zero Phase Error Tracking Controller) is used to eliminate single-axis following error and thus reduce the contour error. The simulation is developed on a Matlab model based on a retrofitted LOM machine, and satisfactory results are acquired.

  9. Geometric correction of APEX hyperspectral data

    Directory of Open Access Journals (Sweden)

    Vreys Kristin

    2016-03-01

    Full Text Available Hyperspectral imagery originating from airborne sensors is nowadays widely used for the detailed characterization of land surface. The correct mapping of the pixel positions to ground locations largely contributes to the success of the applications. Accurate geometric correction, also referred to as “orthorectification”, is thus an important prerequisite which must be performed prior to using airborne imagery for evaluations like change detection, or mapping or overlaying the imagery with existing data sets or maps. A so-called “ortho-image” provides an accurate representation of the earth’s surface, having been adjusted for lens distortions, camera tilt and topographic relief. In this paper, we describe the different steps in the geometric correction process of APEX hyperspectral data, as applied in the Central Data Processing Center (CDPC) at the Flemish Institute for Technological Research (VITO), Mol, Belgium. APEX ortho-images are generated through direct georeferencing of the raw images, thereby making use of sensor interior and exterior orientation data, boresight calibration data and elevation data. They can be referenced to any user-specified output projection system and can be resampled to any output pixel size.

  10. Corrected transposition of the great arteries

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Young Hi; Park, Jae Hyung; Han, Man Chung [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    1981-12-15

    Corrected transposition of the great arteries is an unusual congenital cardiac malformation, which consists of transposition of the great arteries and ventricular inversion, and which is caused by abnormal development of the conotruncus and ventricular looping. The high frequency of associated cardiac malformations makes it difficult to obtain an accurate morphologic diagnosis. A total of 18 cases of corrected transposition of the great arteries is presented, in which cardiac catheterization and angiocardiography were done at the Department of Radiology, Seoul National University Hospital between September 1976 and June 1981. The clinical, radiographic, and operative findings, with emphasis on the angiocardiographic findings, were analyzed. The results are as follows: 1. Among 18 cases, 13 cases have normal cardiac position, 2 cases have dextrocardia with situs solitus, 2 cases have dextrocardia with situs inversus and 1 case has levocardia with situs inversus. 2. Segmental sets are (S, L, L) in 15 cases and (I, D, D) in 3 cases, and there is no exception to the loop rule. 3. Side-by-side interrelationships of both ventricles and both semilunar valves are noticed in 10 and 12 cases respectively. 4. Subaortic type conus is noted in all 18 cases. 5. Associated cardiac malformations are VSD in 14 cases, PS in 11, PDA in 3, PFO in 3, ASD in 2, right aortic arch in 2, tricuspid insufficiency, mitral prolapse, persistent left SVC and persistent right SVC in 1 case each. 6. For accurate diagnosis of corrected TGA, selective biventriculography using biplane cineradiography is an essential procedure.

  11. Accurate atomic data for industrial plasma applications

    Energy Technology Data Exchange (ETDEWEB)

    Griesmann, U.; Bridges, J.M.; Roberts, J.R.; Wiese, W.L.; Fuhr, J.R. [National Inst. of Standards and Technology, Gaithersburg, MD (United States)

    1997-12-31

    Reliable branching fraction, transition probability and transition wavelength data for radiative dipole transitions of atoms and ions in plasma are important in many industrial applications. Optical plasma diagnostics and modeling of the radiation transport in electrical discharge plasmas (e.g. in electrical lighting) depend on accurate basic atomic data. NIST has an ongoing experimental research program to provide accurate atomic data for radiative transitions. The new NIST UV-vis-IR high resolution Fourier transform spectrometer has become an excellent tool for accurate and efficient measurements of numerous transition wavelengths and branching fractions in a wide wavelength range. Recently, the authors have also begun to employ photon counting techniques for very accurate measurements of branching fractions of weaker spectral lines with the intent to improve the overall accuracy for experimental branching fractions to better than 5%. They have now completed their studies of transition probabilities of Ne I and Ne II. The results agree well with recent calculations and for the first time provide reliable transition probabilities for many weak intercombination lines.

  12. A highly accurate ab initio potential energy surface for methane

    Science.gov (United States)

    Owens, Alec; Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter

    2016-09-01

    A new nine-dimensional potential energy surface (PES) for methane has been generated using state-of-the-art ab initio theory. The PES is based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set limit and incorporates a range of higher-level additive energy corrections. These include core-valence electron correlation, higher-order coupled cluster terms beyond perturbative triples, scalar relativistic effects, and the diagonal Born-Oppenheimer correction. Sub-wavenumber accuracy is achieved for the majority of experimentally known vibrational energy levels with the four fundamentals of 12CH4 reproduced with a root-mean-square error of 0.70 cm-1. The computed ab initio equilibrium C-H bond length is in excellent agreement with previous values despite pure rotational energies displaying minor systematic errors as J (rotational excitation) increases. It is shown that these errors can be significantly reduced by adjusting the equilibrium geometry. The PES represents the most accurate ab initio surface to date and will serve as a good starting point for empirical refinement.

  13. Accurate phylogenetic classification of DNA fragments based onsequence composition

    Energy Technology Data Exchange (ETDEWEB)

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome datasets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.

  14. CTI Correction Code

    Science.gov (United States)

    Massey, Richard; Stoughton, Chris; Leauthaud, Alexie; Rhodes, Jason; Koekemoer, Anton; Ellis, Richard; Shaghoulian, Edgar

    2013-07-01

    Charge Transfer Inefficiency (CTI) due to radiation damage above the Earth's atmosphere creates spurious trailing in images from Charge-Coupled Device (CCD) imaging detectors. Radiation damage also creates unrelated warm pixels, which can be used to measure CTI. This code provides pixel-based correction for CTI and has proven effective in Hubble Space Telescope Advanced Camera for Surveys raw images, successfully reducing the CTI trails by a factor of ~30 everywhere in the CCD and at all flux levels. The core is written in java for speed, and a front-end user interface is provided in IDL. The code operates on raw data by returning individual electrons to pixels from which they were unintentionally dragged during readout. Correction takes about 25 minutes per ACS exposure, but is trivially parallelisable to multiple processors.

  15. Correction coil cable

    Science.gov (United States)

    Wang, Sou-Tien

    1994-11-01

    A wire cable assembly (10, 310) adapted for the winding of electrical coils is taught. A primary intended use is for use in particle tube assemblies (532) for the superconducting super collider. The correction coil cables (10, 310) have wires (14, 314) collected in wire arrays (12, 312) with a center rib (16, 316) sandwiched therebetween to form a core assembly (18, 318 ). The core assembly (18, 318) is surrounded by an assembly housing (20, 320) having an inner spiral wrap (22, 322) and a counter wound outer spiral wrap (24, 324). An alternate embodiment (410) of the invention is rolled into a keystoned shape to improve radial alignment of the correction coil cable (410) on a particle tube (733) in a particle tube assembly (732).

  16. Aberration Corrected Emittance Exchange

    CERN Document Server

    Nanni, Emilio A

    2015-01-01

    Full exploitation of emittance exchange (EEX) requires aberration-free performance of a complex imaging system including active radio-frequency (RF) elements which can add temporal distortions. We investigate the performance of an EEX line where the exchange occurs between two dimensions with normalized emittances which differ by orders of magnitude. The transverse emittance is exchanged into the longitudinal dimension using a double dog-leg emittance exchange setup with a 5 cell RF deflector cavity. Aberration correction is performed on the four most dominant aberrations. These include temporal aberrations that are corrected with higher order magnetic optical elements located where longitudinal and transverse emittance are coupled. We demonstrate aberration-free performance of emittances differing by 4 orders of magnitude, i.e. an initial transverse emittance of $\\epsilon_x=1$ pm-rad is exchanged with a longitudinal emittance of $\\epsilon_z=10$ nm-rad.

  17. Radiative corrections to DIS

    CERN Document Server

    Krasny, Mieczyslaw Witold

    2008-01-01

    Early deep inelastic scattering (DIS) experiments at SLAC discovered partons, identified them as quarks and gluons, and restricted the set of the candidate theories for strong interactions to those exhibiting the asymptotic freedom property. The next generation DIS experiments at FNAL and CERN confirmed the predictions of QCD for the size of the scaling violation effects in the nucleon structure functions. The QCD fits to their data resulted in determining the momentum distributions of the point-like constituents of nucleons. Interpretation of data coming from all these experiments and, in the case of the SLAC experiments, even an elaboration of the running strategies, would not have been possible without a precise understanding of the electromagnetic radiative corrections. In this note I recollect the important milestones, achieved in the period preceding the HERA era, in the high precision calculations of the radiative corrections to DIS, and in the development of the methods of their experimental control. ...

  18. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy.

    Science.gov (United States)

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both (4)He and (12)C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  19. The dynamic correction of collimation errors of CT slicing pictures

    Institute of Scientific and Technical Information of China (English)

    LIU Ya-xiong; Sekou Sing-are; LI Di-chen; LU Bing-heng

    2006-01-01

    To eliminate the motion artifacts of CT images caused by patient motion and other related errors, two kinds of correctors (A type and U type) are proposed to monitor the scanning process and correct the motion artifacts of the original images via reverse geometrical transformations such as reverse scaling, moving, rotating and offsetting. The results confirm that the correction method with either of the correctors can improve the accuracy and reliability of CT images, which facilitates eliminating or decreasing the motion artifacts and correcting other static errors and image-processing errors. This provides a foundation for the 3D reconstruction and accurate fabrication of customized implants.
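
    The reverse geometrical transformation described above amounts to resampling each slice through the estimated in-plane motion model. A minimal scipy.ndimage sketch is shown below; the motion parameters (scale, rotation, shift) are assumed to have already been recovered from the corrector markers, and the simple centred similarity model is an illustrative assumption rather than the paper's exact procedure.

        import numpy as np
        from scipy.ndimage import affine_transform

        def undo_inplane_motion(observed, scale, angle_deg, shift_rc):
            """Correct one CT slice given an estimated motion y = s*R(angle)@(x - c) + c + shift (sketch)."""
            theta = np.deg2rad(angle_deg)
            A = scale * np.array([[np.cos(theta), -np.sin(theta)],
                                  [np.sin(theta),  np.cos(theta)]])
            c = (np.array(observed.shape, dtype=float) - 1.0) / 2.0
            offset = c - A @ c + np.asarray(shift_rc, dtype=float)
            # affine_transform samples the observed slice at the moved coordinates, undoing the motion.
            return affine_transform(observed, A, offset=offset, order=1, mode="nearest")

        # Hypothetical per-slice motion: 1.02x scaling, 1.5 degree rotation, 3-pixel row shift.
        corrected = undo_inplane_motion(np.random.rand(512, 512), 1.02, 1.5, (3.0, 0.0))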

  20. Herschel SPIRE FTS telescope model correction

    CERN Document Server

    Hopwood, Rosalind; Polehampton, Edward T; Valtchanov, Ivan; Benielli, Dominique; Imhof, Peter; Lim, Tanya; Lu, Nanyao; Marchili, Nicola; Pearson, Chris P; Swinyard, Bruce M

    2014-01-01

    Emission from the Herschel telescope is the dominant source of radiation for the majority of SPIRE Fourier transform spectrometer (FTS) observations, despite the exceptionally low emissivity of the primary and secondary mirrors. Accurate modelling and removal of the telescope contribution is, therefore, an important and challenging aspect of FTS calibration and data reduction pipeline. A dust-contaminated telescope model with time invariant mirror emissivity was adopted before the Herschel launch. However, measured FTS spectra show a clear evolution of the telescope contribution over the mission and strong need for a correction to the standard telescope model in order to reduce residual background (of up to 7 Jy) in the final data products. Systematic changes in observations of dark sky, taken over the course of the mission, provide a measure of the evolution between observed telescope emission and the telescope model. These dark sky observations have been used to derive a time dependent correction to the tel...

  1. FIELD CORRECTION FACTORS FOR PERSONAL NEUTRON DOSEMETERS.

    Science.gov (United States)

    Luszik-Bhadra, M

    2016-09-01

    A field-dependent correction factor can be obtained by comparing the readings of two albedo neutron dosemeters fixed in opposite directions on a polyethylene sphere to the H*(10) reading as determined with a thermal neutron detector in the centre of the same sphere. The work shows that the field calibration technique as used for albedo neutron dosemeters can be generalised for all kind of dosemeters, since H*(10) is a conservative estimate of the sum of the personal dose equivalents Hp(10) in two opposite directions. This result is drawn from reference values as determined by spectrometers within the EVIDOS project at workplace of nuclear installations in Europe. More accurate field-dependent correction factors can be achieved by the analysis of several personal dosimeters on a phantom, but reliable angular responses of these dosemeters need to be taken into account. PMID:26493946

  2. Exemplar-based human action pose correction.

    Science.gov (United States)

    Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen

    2014-07-01

    The launch of Xbox Kinect has built a very successful computer vision product and made a big impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition. The accurate estimation of human poses from the depth image is universally a critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporating pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over contemporary approaches, including what is delivered by the current Kinect system. Our experiments on facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.

  3. Refining atmospheric correction for aquatic remote spectroscopy

    Science.gov (United States)

    Thompson, D. R.; Guild, L. S.; Negrey, K.; Kudela, R. M.; Palacios, S. L.; Gao, B. C.; Green, R. O.

    2015-12-01

    Remote spectroscopic investigations of aquatic ecosystems typically measure radiance at high spectral resolution and then correct these data for atmospheric effects to estimate Remote Sensing Reflectance (Rrs) at the surface. These reflectance spectra reveal phytoplankton absorption and scattering features, enabling accurate retrieval of traditional remote sensing parameters, such as chlorophyll-a, and new retrievals of additional parameters, such as phytoplankton functional type. Future missions will significantly expand coverage of these datasets with airborne campaigns (CORAL, ORCAS, and the HyspIRI Preparatory Campaign) and orbital instruments (EnMAP, HyspIRI). Remote characterization of phytoplankton can be influenced by errors in atmospheric correction due to uncertain atmospheric constituents such as aerosols. The "empirical line method" is an expedient solution that estimates a linear relationship between observed radiances and in-situ reflectance measurements. While this approach is common for terrestrial data, there are few examples involving aquatic scenes. Aquatic scenes are challenging due to the difficulty of acquiring in situ measurements from open water; with only a handful of reference spectra, the resulting corrections may not be stable. Here we present a brief overview of methods for atmospheric correction, and describe ongoing experiments on empirical line adjustment with AVIRIS overflights of Monterey Bay from the 2013-2014 HyspIRI preparatory campaign. We present new methods, based on generalized Tikhonov regularization, to improve stability and performance when few reference spectra are available. Copyright 2015 California Institute of Technology. All Rights Reserved. US Government Support Acknowledged.
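
    The regularised empirical line idea can be written as a per-band ridge fit: gain and offset are estimated from the few radiance/reflectance match-ups while being shrunk toward a prior, which keeps the correction stable when reference spectra are scarce. The variable names, prior and weight below are assumptions for illustration, not the authors' implementation.

        import numpy as np

        def empirical_line_ridge(radiance, reflectance, lam=1e-2, prior=(0.01, 0.0)):
            """Fit Rrs ~ gain * L + offset for one band with Tikhonov shrinkage toward a prior (sketch)."""
            A = np.column_stack([radiance, np.ones_like(radiance)])
            b = np.asarray(reflectance, dtype=float)
            x0 = np.asarray(prior, dtype=float)
            # Minimise ||A x - b||^2 + lam * ||x - x0||^2 (identity regularisation operator).
            lhs = A.T @ A + lam * np.eye(2)
            rhs = A.T @ b + lam * x0
            gain, offset = np.linalg.solve(lhs, rhs)
            return gain, offset

        # Hypothetical single band with only three in-situ match-ups.
        print(empirical_line_ridge(np.array([42.0, 55.0, 61.0]), np.array([0.004, 0.009, 0.012])))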

  4. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    Science.gov (United States)

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  5. A New Geometrical Correction Method for Inaccessible Area Imagery

    Institute of Scientific and Technical Information of China (English)

    Lee Hong-shik; Park Jun-ku; Lim Sam-sung

    2003-01-01

    The geometric correction of a satellite imagery is performed by making a systematic correction with satellite ephemerides and attitude angles, followed by employing Ground Control Points (GCPs) or Digital Elevation Models (DEMs). In a remote or inaccessible area, however, GCPs cannot be surveyed and thus can be obtained only by reading maps, which is not accurate in practice. In this study, we performed the systematic correction process for the inaccessible area and the precise geometric correction process for the adjacent accessible area using GCPs. We then analyzed the correlation between the two geo-referenced Korea Multipurpose Satellite (KOMPSAT-1 EOC) images. A new geometrical correction for the inaccessible area imagery is achieved by applying the correlation to the inaccessible imagery. By employing this new method, the accuracy of the inaccessible area imagery is significantly improved, both absolutely and relatively.

  6. Accurate estimation of indoor travel times

    DEFF Research Database (Denmark)

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan;

    2014-01-01

    The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood---both for routes traveled as well as for sub-routes thereof ... are collected within the building complex. Results indicate that InTraTime is superior with respect to metrics such as deployment cost, maintenance cost and estimation accuracy, yielding an average deviation from actual travel times of 11.7%. This accuracy was achieved despite using a minimal-effort setup ...

  7. Accurate guitar tuning by cochlear implant musicians.

    Directory of Open Access Journals (Sweden)

    Thomas Lu

    Full Text Available Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  8. Accurate Finite Difference Methods for Option Pricing

    OpenAIRE

    Persson, Jonas

    2006-01-01

    Stock options are priced numerically using space- and time-adaptive finite difference methods. European options on one and several underlying assets are considered. These are priced with adaptive numerical algorithms including a second order method and a more accurate method. For American options we use the adaptive technique to price options on one stock with and without stochastic volatility. In all these methods emphasis is put on the control of errors to fulfill predefined tolerance level...
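
    As a point of reference for the finite difference approach described above, the sketch below prices a European call under Black-Scholes with a plain (non-adaptive) Crank-Nicolson grid; the adaptive error control and stochastic-volatility extensions of the thesis are not reproduced, and the grid sizes are arbitrary assumptions.

        import numpy as np

        def bs_call_crank_nicolson(s0, k, r, sigma, t, ns=200, nt=200):
            """European call via a Crank-Nicolson finite difference grid (sketch)."""
            s_max = 4.0 * k
            dt = t / nt
            s = np.linspace(0.0, s_max, ns + 1)
            v = np.maximum(s - k, 0.0)                          # payoff at expiry

            i = np.arange(1, ns)                                # interior nodes
            alpha = 0.25 * dt * (sigma**2 * i**2 - r * i)
            beta = -0.50 * dt * (sigma**2 * i**2 + r)
            gamma = 0.25 * dt * (sigma**2 * i**2 + r * i)
            B = np.diag(beta) + np.diag(gamma[:-1], 1) + np.diag(alpha[1:], -1)
            m_impl = np.eye(ns - 1) - B
            m_expl = np.eye(ns - 1) + B

            for n in range(nt):                                 # march backwards from expiry
                tau_old, tau_new = n * dt, (n + 1) * dt         # time to expiry before/after the step
                bnd_old = s_max - k * np.exp(-r * tau_old)
                bnd_new = s_max - k * np.exp(-r * tau_new)
                rhs = m_expl @ v[1:-1]
                rhs[-1] += gamma[-1] * (bnd_old + bnd_new)      # upper boundary enters both sides
                v[1:-1] = np.linalg.solve(m_impl, rhs)
                v[0], v[-1] = 0.0, bnd_new

            return float(np.interp(s0, s, v))

        print(bs_call_crank_nicolson(100.0, 100.0, 0.05, 0.2, 1.0))   # roughly 10.45 analytically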

  9. Accurate variational forms for multiskyrmion configurations

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, A.D.; Weiss, C.; Wirzba, A.; Lande, A.

    1989-04-17

    Simple variational forms are suggested for the fields of a single skyrmion on a hypersphere, S/sub 3/(L), and of a face-centered cubic array of skyrmions in flat space, R/sub 3/. The resulting energies are accurate at the level of 0.2%. These approximate field configurations provide a useful alternative to brute-force solutions of the corresponding Euler equations.

  10. Efficient Accurate Context-Sensitive Anomaly Detection

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    For program behavior-based anomaly detection, the only way to ensure accurate monitoring is to construct an efficient and precise program behavior model. A new program behavior-based anomaly detection model, called the combined pushdown automaton (CPDA) model, was proposed, which is based on static binary executable analysis. The CPDA model incorporates the optimized call stack walk and code instrumentation technique to gain complete context information. Thereby the proposed method can detect more attacks, while retaining good performance.

  11. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance

    Science.gov (United States)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple `fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  12. Accurate phase-shift velocimetry in rock

    Science.gov (United States)

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  13. Accurate structural correlations from maximum likelihood superpositions.

    Directory of Open Access Journals (Sweden)

    Douglas L Theobald

    2008-02-01

    Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
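
    The final PCA step can be sketched as follows for an already-superposed ensemble: positional fluctuations are collected, a correlation matrix is formed, and its leading eigenvectors give the dominant correlation modes. Note that this sketch uses the plain sample estimator, not the maximum likelihood correlation estimate advocated above, and the toy ensemble is an assumption.

        import numpy as np

        def principal_correlation_modes(coords, n_modes=3):
            """PCA of positional correlations for an ensemble of superposed structures (sketch).

            coords : array of shape (n_structures, n_atoms, 3), already superposed
            """
            n_struct = coords.shape[0]
            x = coords.reshape(n_struct, -1).astype(float)
            x -= x.mean(axis=0)                              # fluctuations about the mean structure
            cov = x.T @ x / (n_struct - 1)
            d = np.sqrt(np.diag(cov))
            corr = cov / np.outer(d, d)
            evals, evecs = np.linalg.eigh(corr)
            order = np.argsort(evals)[::-1][:n_modes]
            return evals[order], evecs[:, order]

        # Toy ensemble: 20 NMR-like models of a 50-atom chain with random fluctuations.
        values, modes = principal_correlation_modes(np.random.default_rng(1).normal(size=(20, 50, 3)))
        print(values)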

  14. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    Science.gov (United States)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥ 40 points and ≥ 445 ms, respectively. In conclusion, 12-lead HF QRS ECG employing

  15. Brain Image Motion Correction

    DEFF Research Database (Denmark)

    Jensen, Rasmus Ramsbøl; Benjaminsen, Claus; Larsen, Rasmus;

    2015-01-01

    The application of motion tracking is wide, including: industrial production lines, motion interaction in gaming, computer-aided surgery and motion correction in medical brain imaging. Several devices for motion tracking exist using a variety of different methodologies. In order to use such devices...... offset and tracking noise in medical brain imaging. The data are generated from a phantom mounted on a rotary stage and have been collected using a Siemens High Resolution Research Tomograph for positron emission tomography. During acquisition the phantom was tracked with our latest tracking prototype...

  16. A Technique for Calculating Quantum Corrections to Solitons

    OpenAIRE

    Barnes, Chris; Turok, Neil

    1997-01-01

    We present a numerical scheme for calculating the first quantum corrections to the properties of static solitons. The technique is applicable to solitons of arbitrary shape, and may be used in 3+1 dimensions for multiskyrmions or other complicated solitons. We report on a test computation in 1+1 dimensions, where we accurately reproduce the analytical result with minimal numerical effort.

  17. Educational Programs in Adult Correctional Institutions: A Survey.

    Science.gov (United States)

    Dell'Apa, Frank

    A national survey of adult correctional institutions was conducted by questionnaire in 1973 to obtain an accurate picture of the current status of academic educational programs, particularly at the elementary and secondary levels, available to inmates. Questions were designed to obtain information regarding the degree of participation of inmates…

  18. OCT Motion Correction

    Science.gov (United States)

    Kraus, Martin F.; Hornegger, Joachim

    From the introduction of time domain OCT [1] up to recent swept source systems, motion continues to be an issue in OCT imaging. In contrast to normal photography, an OCT image does not represent a single point in time. Instead, conventional OCT devices sequentially acquire one-dimensional data over a period of several seconds, capturing one beam of light at a time and recording both the intensity and delay of reflections along its path through an object. In combination with unavoidable object motion which occurs in many imaging contexts, the problem of motion artifacts lies in the very nature of OCT imaging. Motion artifacts degrade image quality and make quantitative measurements less reliable. Therefore, it is desirable to come up with techniques to measure and/or correct object motion during OCT acquisition. In this chapter, we describe the effect of motion on OCT data sets and give an overview on the state of the art in the field of retinal OCT motion correction.

  19. Contact Lenses for Vision Correction

    Science.gov (United States)


  20. Precise and accurate train run data: Approximation of actual arrival and departure times

    DEFF Research Database (Denmark)

    Richter, Troels; Landex, Alex; Andersen, Jonas Lohmann Elkjær

    ... possible with the present systems. GPS data from a major Danish Railway Undertaking is used as an alternate data source with more accurate arrival and departure times. The offset is based on the median of the time difference between these two sources. Factors taken into consideration when constructing the correction function are location, message type, platform used and train type. The approximated correction values are then analysed to ensure that the interquartile range is within the defined criteria. The practical implementation is an additional column in the train run history database tables...
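
    The median-based offset described above can be sketched with pandas: for each group of events the correction is the median difference between GPS and system timestamps, stored as an extra column and added to the system time. The column names and grouping keys below are hypothetical.

        import pandas as pd

        def median_offset_correction(events: pd.DataFrame) -> pd.DataFrame:
            """Approximate actual arrival/departure times from system timestamps (sketch).

            Expects hypothetical columns: 'station', 'message_type', 'system_time', 'gps_time'.
            """
            events = events.copy()
            events["offset_s"] = (events["gps_time"] - events["system_time"]).dt.total_seconds()
            key = ["station", "message_type"]
            events["correction_s"] = events.groupby(key)["offset_s"].transform("median")
            events["corrected_time"] = events["system_time"] + pd.to_timedelta(events["correction_s"], unit="s")
            return events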

  1. A simple and accurate measurement method of current density of an electron accelerator for irradiation

    International Nuclear Information System (INIS)

    For simple and accurate measurement of the current distribution in a broad beam from electron accelerators, a method for detecting the charge absorbed in a graphite target exposed to the air has been examined. The present report means to solve several fundamental problems. The effective incidence area of the absorber is strictly defined by the design of the geometrical arrangement of the absorber assembly. Electron backscattering from the absorber is corrected with backscattering coefficients in consideration of oblique incidence on the absorber. The influence of ionic charge produced in air is ascribed to the contact potential between the absorber and the guard, and correction methods are proposed. (orig.)

  2. Illumination Correction on Biomedical Images

    OpenAIRE

    Edoardo Ardizzone; Roberto Pirrone; Orazio Gambino; Salvatore Vitabile

    2014-01-01

    RF-Inhomogeneity Correction (aka bias) artifact is an important research field in Magnetic Resonance Imaging (MRI). Bias corrupts MR images, altering their illumination even though they are acquired with the most recent scanners. Homomorphic Unsharp Masking (HUM) is a filtering technique aimed at correcting illumination inhomogeneity, but it produces a halo around the edges as a side effect. In this paper a novel correction scheme based on HUM is proposed to correct the artifact mentioned above...
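
    For reference, classical HUM can be sketched in a few lines: the slowly varying bias field is estimated with a heavy low-pass filter, the image is divided by it, and the result is rescaled to the original mean intensity. The Gaussian width is an arbitrary assumption, and the halo-reduction scheme proposed in the paper is not reproduced here.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def hum_correct(image, sigma=40.0, eps=1e-6):
            """Classical Homomorphic Unsharp Masking for slowly varying illumination bias (sketch)."""
            img = image.astype(float)
            illumination = gaussian_filter(img, sigma)      # estimate of the smooth bias field
            corrected = img / (illumination + eps)
            return corrected * img.mean()                   # restore the original mean intensity

        # Hypothetical use on a 2-D MR slice held in a NumPy array called mr_slice:
        # corrected_slice = hum_correct(mr_slice, sigma=40.0)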

  3. New law requires 'medically accurate' lesson plans.

    Science.gov (United States)

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.

  4. Niche Genetic Algorithm with Accurate Optimization Performance

    Institute of Scientific and Technical Information of China (English)

    LIU Jian-hua; YAN De-kun

    2005-01-01

    Based on the crowding mechanism, a novel niche genetic algorithm was proposed which can record the evolutionary direction dynamically during evolution. After evolution, the solutions' precision can be greatly improved by means of local searching along the recorded direction. Simulation shows that this algorithm can not only keep population diversity but also find accurate solutions. Although this method takes more time compared with the standard GA, it is really worth applying to cases that demand high solution precision.

  5. Investigations on Accurate Analysis of Microstrip Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min; Sørensen, S. B.; Kim, Oleksiy S.;

    2011-01-01

    An investigation on accurate analysis of microstrip reflectarrays is presented. Sources of error in reflectarray analysis are examined and solutions to these issues are proposed. The focus is on two sources of error, namely the determination of the equivalent currents to calculate the radiation...... pattern, and the inaccurate mutual coupling between array elements due to the lack of periodicity. To serve as reference, two offset reflectarray antennas have been designed, manufactured and measured at the DTUESA Spherical Near-Field Antenna Test Facility. Comparisons of simulated and measured data are...

  6. Accurate diagnosis is essential for amebiasis

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

    Amebiasis is one of the three most common causes of death from parasitic disease, and Entamoeba histolytica is the most widely distributed parasite in the world. In particular, Entamoeba histolytica infection in the developing countries is a significant health problem in amebiasis-endemic areas, with a significant impact on infant mortality[1]. In recent years a worldwide increase in the number of patients with amebiasis has refocused attention on this important infection. On the other hand, improving the quality of parasitological methods and the widespread use of accurate techniques have improved our knowledge about the disease.

  7. Universality: Accurate Checks in Dyson's Hierarchical Model

    Science.gov (United States)

    Godina, J. J.; Meurice, Y.; Oktay, M. B.

    2003-06-01

    In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10⁻⁸ and Δ = 0.4259469 ± 10⁻⁷, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.

  8. 78 FR 34245 - Miscellaneous Corrections

    Science.gov (United States)

    2013-06-07

    .... 2.346(f). (77 FR 46576-46578, 46584; August 3, 2012). This change implements the intended revision... regulations to make miscellaneous corrections. These changes include updating the name of its human capital office, correcting and adding missing cross-references, correcting grammatical errors, revising...

  9. Radiation camera motion correction system

    Science.gov (United States)

    Hoffer, P.B.

    1973-12-18

    The device determines the ratio of the intensity of radiation received by a radiation camera from two separate portions of the object. A correction signal is developed to maintain this ratio at a substantially constant value and this correction signal is combined with the camera signal to correct for object motion. (Official Gazette)

  10. Second order QCD corrections to inclusive semileptonic $b \\to X_c l \\bar{\\nu}$

    CERN Document Server

    Biswas, Sandip

    2009-01-01

    We extend previous computations of the second order QCD corrections to semileptonic b \\to c inclusive transitions, to the case where the charged lepton in the final state is massive. This allows an accurate description of $b \\to c \\tau \\bar{\\nu}$ transitions.

  11. Temperature Corrected Bootstrap Algorithm

    Science.gov (United States)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate the surface ice temperature, which in turn is used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
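
    A heavily simplified sketch of the processing chain described in the abstract is given below; the emissivity tie points, mixing rule and the distance-ratio concentration estimate are illustrative assumptions, not the actual Bootstrap coefficients.

```python
import math

# Illustrative only: tie points and mixing rule are invented for the example.

def effective_emissivity(ice_conc, e_ice=0.92, e_water=0.55):
    # linear mixing of ice and open-water emissivities at 6 GHz (assumed values)
    return ice_conc * e_ice + (1.0 - ice_conc) * e_water

def surface_temperature(tb_6ghz, first_guess_conc):
    # physical surface temperature estimated from the 6 GHz brightness temperature
    return tb_6ghz / effective_emissivity(first_guess_conc)

def emissivity(tb, t_surface):
    # convert a brightness temperature to an emissivity using the estimated T_s
    return tb / t_surface

def ice_concentration(e18, e37, water=(0.50, 0.65), ice=(0.93, 0.90)):
    # crude Bootstrap-style interpolation between water and ice reference points,
    # applied here in emissivity space instead of brightness-temperature space
    d_total = math.hypot(ice[0] - water[0], ice[1] - water[1])
    d_pixel = math.hypot(e18 - water[0], e37 - water[1])
    return max(0.0, min(1.0, d_pixel / d_total))

# example pixel: first-guess concentration from the standard algorithm, then refined
tb6, tb18, tb37 = 235.0, 245.0, 240.0
ts = surface_temperature(tb6, first_guess_conc=0.8)
conc = ice_concentration(emissivity(tb18, ts), emissivity(tb37, ts))
print(round(ts, 1), round(conc, 2))
```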

  12. XRF matrix corrections

    International Nuclear Information System (INIS)

    Full text: In order to obtain meaningful analytical information from an X-Ray Fluorescence spectrometer, it is necessary to correlate measured intensity values with sample concentrations. The ability to do this to a desired level of precision depends on taking care of a number of variables which influence measured intensity values. These variables include: the sample, which needs to be homogeneous, flat and critically thick to the analyte lines used for measurement; the spectrometer, which needs to perform any mechanical movements in a highly reproducible manner; the time taken to measure an analyte line, and the software, which needs to take care of detector dead-time, the contribution of background to the measured signal, the effects of line overlaps and matrix (absorption and enhancement) effects. This presentation will address commonly used correction procedures for matrix effects and their relative success in achieving their objective. Copyright (2002) Australian X-ray Analytical Association Inc

  13. EDITORIAL: Politically correct physics?

    Science.gov (United States)

    Pople Deputy Editor, Stephen

    1997-03-01

    If you were a caring, thinking, liberally minded person in the 1960s, you marched against the bomb, against the Vietnam war, and for civil rights. By the 1980s, your voice was raised about the destruction of the rainforests and the threat to our whole planetary environment. At the same time, you opposed discrimination against any group because of race, sex or sexual orientation. You reasoned that people who spoke or acted in a discriminatory manner should be discriminated against. In other words, you became politically correct. Despite its oft-quoted excesses, the political correctness movement sprang from well-founded concerns about injustices in our society. So, on balance, I am all for it. Or, at least, I was until it started to invade science. Biologists were the first to feel the impact. No longer could they refer to 'higher' and 'lower' orders, or 'primitive' forms of life. To the list of undesirable 'isms' - sexism, racism, ageism - had been added a new one: speciesism. Chemists remained immune to the PC invasion, but what else could you expect from a group of people so steeped in tradition that their principal unit, the mole, requires the use of the thoroughly unreconstructed gram? Now it is the turn of the physicists. This time, the offenders are not those who talk disparagingly about other people or animals, but those who refer to 'forms of energy' and 'heat'. Political correctness has evolved into physical correctness. I was always rather fond of the various forms of energy: potential, kinetic, chemical, electrical, sound and so on. My students might merge heat and internal energy into a single, fuzzy concept loosely associated with moving molecules. They might be a little confused at a whole new crop of energies - hydroelectric, solar, wind, geothermal and tidal - but they could tell me what devices turned chemical energy into electrical energy, even if they couldn't quite appreciate that turning tidal energy into geothermal energy wasn't part of the

  14. Anomaly Corrected Heterotic Horizons

    CERN Document Server

    Fontanella, A; Papadopoulos, G

    2016-01-01

    We consider supersymmetric near-horizon geometries in heterotic supergravity up to two loop order in sigma model perturbation theory. We identify the conditions for the horizons to admit enhancement of supersymmetry. We show that solutions which undergo supersymmetry enhancement exhibit an sl(2,R) symmetry, and we describe the geometry of their horizon sections. We also prove a modified Lichnerowicz type theorem, incorporating $\\alpha'$ corrections, which relates Killing spinors to zero modes of near-horizon Dirac operators. Furthermore, we demonstrate that there are no AdS2 solutions in heterotic supergravity up to second order in $\\alpha'$ for which the fields are smooth and the internal space is smooth and compact without boundary. We investigate a class of nearly supersymmetric horizons, for which the gravitino Killing spinor equation is satisfied on the spatial cross sections but not the dilatino one, and present a description of their geometry.

  15. Catalytic quantum error correction

    CERN Document Server

    Brun, T; Hsieh, M H; Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-01-01

    We develop the theory of entanglement-assisted quantum error correcting (EAQEC) codes, a generalization of the stabilizer formalism to the setting in which the sender and receiver have access to pre-shared entanglement. Conventional stabilizer codes are equivalent to dual-containing symplectic codes. In contrast, EAQEC codes do not require the dual-containing condition, which greatly simplifies their construction. We show how any quaternary classical code can be made into an EAQEC code. In particular, efficient modern codes, like LDPC codes, which attain the Shannon capacity, can be made into EAQEC codes attaining the hashing bound. In a quantum computation setting, EAQEC codes give rise to catalytic quantum codes which maintain a region of inherited noiseless qubits. We also give an alternative construction of EAQEC codes by making classical entanglement assisted codes coherent.

  16. Accurate radiative transfer calculations for layered media.

    Science.gov (United States)

    Selden, Adrian C

    2016-07-01

    Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics. PMID:27409700

  17. Accurate pose estimation for forensic identification

    Science.gov (United States)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  18. How Accurately can we Calculate Thermal Systems?

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.

  19. Accurate basis set truncation for wavefunction embedding

    Science.gov (United States)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012); doi:10.1021/ct300544e] to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  20. Accurate pattern registration for integrated circuit tomography

    Energy Technology Data Exchange (ETDEWEB)

    Levine, Zachary H.; Grantham, Steven; Neogi, Suneeta; Frigo, Sean P.; McNulty, Ian; Retsch, Cornelia C.; Wang, Yuxin; Lucatorto, Thomas B.

    2001-07-15

    As part of an effort to develop high resolution microtomography for engineered structures, a two-level copper integrated circuit interconnect was imaged using 1.83 keV x rays at 14 angles employing a full-field Fresnel zone plate microscope. A major requirement for high resolution microtomography is the accurate registration of the reference axes in each of the many views needed for a reconstruction. A reconstruction with 100 nm resolution would require registration accuracy of 30 nm or better. This work demonstrates that even images that have strong interference fringes can be used to obtain accurate fiducials through the use of Radon transforms. We show that we are able to locate the coordinates of the rectilinear circuit patterns to 28 nm. The procedure is validated by agreement between an x-ray parallax measurement of 1.41 ± 0.17 μm and a measurement of 1.58 ± 0.08 μm from a scanning electron microscope image of a cross section.

  1. Accurate determination of characteristic relative permeability curves

    Science.gov (United States)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.

  2. Accurate Classification of RNA Structures Using Topological Fingerprints

    Science.gov (United States)

    Li, Kejie; Gribskov, Michael

    2016-01-01

    While RNAs are well known to possess complex structures, functionally similar RNAs often have little sequence similarity. While the exact size and spacing of base-paired regions vary, functionally similar RNAs have pronounced similarity in the arrangement, or topology, of base-paired stems. Furthermore, predicted RNA structures often lack pseudoknots (a crucial aspect of biological activity), and are only partially correct, or incomplete. A topological approach addresses all of these difficulties. In this work we describe each RNA structure as a graph that can be converted to a topological spectrum (RNA fingerprint). The set of subgraphs in an RNA structure, its RNA fingerprint, can be compared with the fingerprints of other RNA structures to identify and correctly classify functionally related RNAs. Topologically similar RNAs can be identified even when a large fraction, up to 30%, of the stems are omitted, indicating that highly accurate structures are not necessary. We investigate the performance of the RNA fingerprint approach on a set of eight highly curated RNA families, with diverse sizes and functions, containing pseudoknots, and with little sequence similarity–an especially difficult test set. In spite of the difficult test set, the RNA fingerprint approach is very successful (ROC AUC > 0.95). Due to the inclusion of pseudoknots, the RNA fingerprint approach both covers a wider range of possible structures than methods based only on secondary structure, and its tolerance for incomplete structures suggests that it can be applied even to predicted structures. Source code is freely available at https://github.rcac.purdue.edu/mgribsko/XIOS_RNA_fingerprint. PMID:27755571

  3. Accurate, fully-automated NMR spectral profiling for metabolomics.

    Directory of Open Access Journals (Sweden)

    Siamak Ravanbakhsh

    Full Text Available Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile", i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's Nuclear Magnetic Resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures, including real biological samples (serum and CSF), defined mixtures and realistic computer-generated spectra involving > 50 compounds, show that BAYESIL can autonomously find the concentration of NMR-detectable metabolites accurately (~ 90% correct identification and ~ 10% quantification error) in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic publicly-accessible system that provides quantitative NMR spectral profiling effectively, with an accuracy on these biofluids that meets or exceeds the performance of trained experts. We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of...

  4. Accurate molecular classification of cancer using simple rules

    Directory of Open Access Journals (Sweden)

    Gotoh Osamu

    2009-10-01

    Full Text Available Abstract Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensionality gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often hampers the interpretability of the models. For a better understanding of the classification results, it is desirable to develop simpler rule-based models with as few marker genes as possible. Methods We screened a small number of informative single genes and gene pairs on the basis of their depended degrees proposed in rough sets. Applying the decision rules induced by the selected genes or gene pairs, we constructed cancer classifiers. We tested the efficacy of the classifiers by leave-one-out cross-validation (LOOCV) of training sets and classification of independent test sets. Results We applied our methods to five cancerous gene expression datasets: leukemia (acute lymphoblastic leukemia [ALL] vs. acute myeloid leukemia [AML]), lung cancer, prostate cancer, breast cancer, and leukemia (ALL vs. mixed-lineage leukemia [MLL] vs. AML). Accurate classification outcomes were obtained by utilizing just one or two genes. Some genes that correlated closely with the pathogenesis of relevant cancers were identified. In terms of both classification performance and algorithm simplicity, our approach outperformed or at least matched existing methods. Conclusion In cancerous gene expression datasets, a small number of genes, even one or two if selected correctly, is capable of achieving an ideal cancer classification effect. This finding also means that very simple rules may perform well for cancerous class prediction.
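
    The following hedged sketch illustrates the flavor of single-gene decision rules evaluated by leave-one-out cross-validation; it uses a plain mean threshold rather than the rough-set depended degree of the paper, and the data are synthetic.

```python
# Hedged illustration (not the paper's rough-set code): classify samples with a
# single-gene threshold rule, chosen and evaluated by leave-one-out cross-validation.
import numpy as np

def best_single_gene_rule(X, y):
    # X: samples x genes expression matrix, y: 0/1 class labels
    best = None
    for g in range(X.shape[1]):
        thr = X[:, g].mean()                  # simple threshold; the paper's
        for direction in (1, -1):             # depended-degree criterion would refine this
            pred = (direction * (X[:, g] - thr) > 0).astype(int)
            acc = (pred == y).mean()
            if best is None or acc > best[0]:
                best = (acc, g, thr, direction)
    return best

def loocv_accuracy(X, y):
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        _, g, thr, direction = best_single_gene_rule(X[mask], y[mask])
        hits += int((direction * (X[i, g] - thr) > 0) == y[i])
    return hits / len(y)

# toy data: 20 samples, 50 genes, gene 7 carries the class signal
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 20)
X = rng.normal(size=(20, 50))
X[:, 7] += 2.0 * y
print(loocv_accuracy(X, y))
```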

  5. Accuracy of 3D Virtual Planning of Corrective Osteotomies of the Distal Radius

    OpenAIRE

    Stockmans, Filip; Dezillie, Marleen; Vanhaecke, Jeroen

    2013-01-01

    Corrective osteotomies of the distal radius for symptomatic malunion are time-tested procedures that rely on accurate corrections. Patients with combined intra- and extra-articular malunions present a challenging deformity. Virtual planning and patient-specific instruments (PSIs) to transfer the planning into the operating room have been used both to simplify the surgery and to make it more accurate. This report focuses on the clinically achieved accuracy in four patients treated between 2008...

  6. A statistical method for assessing peptide identification confidence in accurate mass and time tag proteomics.

    Science.gov (United States)

    Stanley, Jeffrey R; Adkins, Joshua N; Slysz, Gordon W; Monroe, Matthew E; Purvine, Samuel O; Karpievitch, Yuliya V; Anderson, Gordon A; Smith, Richard D; Dabney, Alan R

    2011-08-15

    Current algorithms for quantifying peptide identification confidence in the accurate mass and time (AMT) tag approach assume that the AMT tags themselves have been correctly identified. However, there is uncertainty in the identification of AMT tags, because this is based on matching LC-MS/MS fragmentation spectra to peptide sequences. In this paper, we incorporate confidence measures for the AMT tag identifications into the calculation of probabilities for correct matches to an AMT tag database, resulting in a more accurate overall measure of identification confidence for the AMT tag approach. The method is referenced as Statistical Tools for AMT Tag Confidence (STAC). STAC additionally provides a uniqueness probability (UP) to help distinguish between multiple matches to an AMT tag and a method to calculate an overall false discovery rate (FDR). STAC is freely available for download, as both a command line and a Windows graphical application.

  7. Accurate Face Recognition Using PCA and LDA

    Directory of Open Access Journals (Sweden)

    Sukhvinder Singh

    2012-03-01

    Full Text Available Face recognition from images is a sub-area of the general object recognition problem and is of particular interest in a wide variety of applications. Here, face recognition is based on a proposed modified PCA algorithm that incorporates components of the LDA algorithm. The proposed algorithm measures the principal components of the faces and finds the shortest distance between them. Experimental results on the ORL face database show that the method achieves a higher correct recognition rate and higher recognition speed than the traditional PCA algorithm.
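
    A minimal eigenface-style sketch of PCA projection followed by shortest-distance matching is shown below; it does not reproduce the paper's modified PCA/LDA combination, and the gallery data and component count are arbitrary.

```python
# Minimal sketch of PCA-based face matching (eigenfaces + nearest neighbour);
# the paper's modified PCA/LDA combination is not reproduced here.
import numpy as np

def fit_pca(train, n_components=20):
    mean = train.mean(axis=0)
    # principal directions from an SVD of the mean-centered gallery
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(images, mean, components):
    return (images - mean) @ components.T

def recognize(probe, gallery_coords, gallery_labels, mean, components):
    coords = project(probe[None, :], mean, components)
    dists = np.linalg.norm(gallery_coords - coords, axis=1)
    return gallery_labels[int(np.argmin(dists))]      # shortest-distance match

# toy usage with random "images" (rows = flattened faces)
rng = np.random.default_rng(1)
gallery = rng.normal(size=(40, 32 * 32))
labels = np.repeat(np.arange(10), 4)
mean, comps = fit_pca(gallery)
gcoords = project(gallery, mean, comps)
probe = gallery[5] + 0.05 * rng.normal(size=32 * 32)
print(recognize(probe, gcoords, labels, mean, comps))
```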

  8. Accurate FRET Measurements within Single Diffusing Biomolecules Using Alternating-Laser Excitation

    OpenAIRE

    Lee, Nam Ki; Kapanidis, Achillefs N.; Wang, You; Michalet, Xavier; Mukhopadhyay, Jayanta; Ebright, Richard H.; Weiss, Shimon

    2005-01-01

    Fluorescence resonance energy transfer (FRET) between a donor (D) and an acceptor (A) at the single-molecule level currently provides qualitative information about distance, and quantitative information about kinetics of distance changes. Here, we used the sorting ability of confocal microscopy equipped with alternating-laser excitation (ALEX) to measure accurate FRET efficiencies and distances from single molecules, using corrections that account for cross-talk terms that contaminate the FRE...

  9. Toward Accurate and Quantitative Comparative Metagenomics

    Science.gov (United States)

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  10. Toward Accurate and Quantitative Comparative Metagenomics.

    Science.gov (United States)

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  11. Accurate guitar tuning by cochlear implant musicians.

    Science.gov (United States)

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  12. How accurate are SuperCOSMOS positions?

    CERN Document Server

    Schaefer, Adam; Johnston, Helen

    2014-01-01

    Optical positions from the SuperCOSMOS Sky Survey have been compared in detail with accurate radio positions that define the second realisation of the International Celestial Reference Frame (ICRF2). The comparison was limited to the IIIaJ plates from the UK/AAO and Oschin (Palomar) Schmidt telescopes. A total of 1373 ICRF2 sources was used, with the sample restricted to stellar objects brighter than $B_J=20$ and Galactic latitudes $|b|>10^{\\circ}$. Position differences showed an rms scatter of $0.16''$ in right ascension and declination. While overall systematic offsets were $<0.1''$ in each hemisphere, both the systematics and scatter were greater in the north.

  13. Accurate renormalization group analyses in neutrino sector

    Energy Technology Data Exchange (ETDEWEB)

    Haba, Naoyuki [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Kaneta, Kunio [Kavli IPMU (WPI), The University of Tokyo, Kashiwa, Chiba 277-8568 (Japan); Takahashi, Ryo [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Yamaguchi, Yuya [Department of Physics, Faculty of Science, Hokkaido University, Sapporo 060-0810 (Japan)

    2014-08-15

    We investigate accurate renormalization group analyses in the neutrino sector between the ν-oscillation and seesaw energy scales. We consider decoupling effects of the top quark and Higgs boson on the renormalization group equations of the light neutrino mass matrix. Since the decoupling effects arise at the standard model scale and are independent of high energy physics, our method can be applied to essentially any model beyond the standard model. We find that the decoupling effects of the Higgs boson are negligible, while those of the top quark are not. In particular, the decoupling effects of the top quark affect the neutrino mass eigenvalues, which are important for analyzing predictions such as mass squared differences and neutrinoless double beta decay in an underlying theory existing at a high energy scale.

  14. Accurate Telescope Mount Positioning with MEMS Accelerometers

    CERN Document Server

    Mészáros, László; Pál, András; Csépány, Gergely

    2014-01-01

    This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate and stateless positioning of telescope mounts. This provides a completely independent method from other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the sub-arcminute range, which is well below the field of view of conventional imaging telescope systems. Here we present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented as part of a telescope control system.

  15. Accurate Weather Forecasting for Radio Astronomy

    Science.gov (United States)

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.

  16. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    Science.gov (United States)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information can have negative effects, especially when the information is delayed. Travelers prefer the route reported to be in the best condition, but delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, which decreases capacity, increases oscillations, and drives the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR: when the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality helps to improve efficiency in terms of capacity, oscillation, and the gap from the system equilibrium.
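
    A toy simulation of the boundedly rational route-choice rule is sketched below; the travel-time model, delay and threshold values are assumptions chosen only to show how the BR threshold damps oscillations under delayed information.

```python
import random

# Toy two-route system with delayed information and a boundedly rational threshold BR.

def choose_route(t1, t2, BR):
    # routes whose reported travel times differ by less than BR are treated as equal
    if abs(t1 - t2) < BR:
        return random.choice((1, 2))
    return 1 if t1 < t2 else 2

def count_flips(steps=200, travelers=100, delay=3, BR=2.0):
    history = [(10.0, 10.0)] * (delay + 1)   # reported (t1, t2) travel times
    flips, prev_n1 = 0, None
    for _ in range(steps):
        reported = history[-delay - 1]       # travelers see outdated conditions
        n1 = sum(choose_route(*reported, BR) == 1 for _ in range(travelers))
        history.append((10.0 + 0.1 * n1, 10.0 + 0.1 * (travelers - n1)))
        if prev_n1 is not None and abs(n1 - prev_n1) > travelers // 2:
            flips += 1                       # count large oscillations in the split
        prev_n1 = n1
    return flips

print(count_flips(BR=0.0))   # accurate, fully rational choice: strong flip-flopping
print(count_flips(BR=2.0))   # bounded rationality damps the oscillations
```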

  17. Topologically correct cortical segmentation using Khalimsky's cubic complex framework

    Science.gov (United States)

    Cardoso, Manuel J.; Clarkson, Matthew J.; Modat, Marc; Talbot, Hugues; Couprie, Michel; Ourselin, Sébastien

    2011-03-01

    Automatic segmentation of the cerebral cortex from magnetic resonance brain images is a valuable tool for neuroscience research. Due to the presence of noise, intensity non-uniformity, partial volume effects, the limited resolution of MRI and the highly convoluted shape of the cerebral cortex, segmenting the brain in a robust, accurate and topologically correct way still poses a challenge. In this paper we describe a topologically correct Expectation Maximisation based Maximum a Posteriori segmentation algorithm formulated within the Khalimsky cubic complex framework, where both the solution of the EM algorithm and the information derived from a geodesic distance function are used to locally modify the weighting of a Markov Random Field and drive the topology correction operations. Experiments performed on 20 Brainweb datasets show that the proposed method obtains a topologically correct segmentation without significant loss in accuracy when compared to two well established techniques.

  18. Star catalog position and proper motion corrections in asteroid astrometry

    CERN Document Server

    Farnocchia, D; Chamberlin, A B; Tholen, D J

    2014-01-01

    We provide a scheme to correct asteroid astrometric observations for star catalog systematic errors due to inaccurate star positions and proper motions. As reference we select the most accurate stars in the PPMXL catalog, i.e., those based on 2MASS astrometry. We compute position and proper motion corrections for 19 of the most used star catalogs. The use of these corrections provides better ephemeris predictions and improves the error statistics of astrometric observations, e.g., by removing most of the regional systematic errors previously seen in Pan-STARRS PS1 asteroid astrometry. The correction table is publicly available at ftp://ssd.jpl.nasa.gov/pub/ssd/debias/debias_2014.tgz and can be freely used in orbit determination algorithms to obtain more reliable asteroid trajectories.

  19. An accurate δf method for neoclassical transport calculation

    International Nuclear Information System (INIS)

    A δf method, solving drift kinetic equation, for neoclassical transport calculation is presented in detail. It is demonstrated that valid results essentially rely on the correct evaluation of marker density g in weight calculation. A general and accurate weighting scheme is developed without using some assumed g in weight equation for advancing particle weights, unlike the previous schemes. This scheme employs an additional weight function to directly solve g from its kinetic equation using the idea of δf method. Therefore the severe constraint that the real marker distribution must be consistent with the initially assumed g during a simulation is relaxed. An improved like-particle collision scheme is presented. By performing compensation for momentum, energy and particle losses arising from numerical errors, the conservations of all the three quantities are greatly improved during collisions. Ion neoclassical transport due to self-collisions is examined under finite banana case as well as zero banana limit. A solution with zero particle and zero energy flux (in case of no temperature gradient) over whole poloidal section is obtained. With the improvement in both like-particle collision scheme and weighting scheme, the δf simulation shows a significantly upgraded performance for neoclassical transport study. (author)

  20. An accurate δf method for neoclassical transport calculation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, W.X.; Nakajima, N.; Murakami, S.; Okamoto, M. [National Inst. for Fusion Science, Toki, Gifu (Japan)

    1999-03-01

    A δf method, solving drift kinetic equation, for neoclassical transport calculation is presented in detail. It is demonstrated that valid results essentially rely on the correct evaluation of marker density g in weight calculation. A general and accurate weighting scheme is developed without using some assumed g in weight equation for advancing particle weights, unlike the previous schemes. This scheme employs an additional weight function to directly solve g from its kinetic equation using the idea of δf method. Therefore the severe constraint that the real marker distribution must be consistent with the initially assumed g during a simulation is relaxed. An improved like-particle collision scheme is presented. By performing compensation for momentum, energy and particle losses arising from numerical errors, the conservations of all the three quantities are greatly improved during collisions. Ion neoclassical transport due to self-collisions is examined under finite banana case as well as zero banana limit. A solution with zero particle and zero energy flux (in case of no temperature gradient) over whole poloidal section is obtained. With the improvement in both like-particle collision scheme and weighting scheme, the δf simulation shows a significantly upgraded performance for neoclassical transport study. (author)

  1. A Distributed Weighted Voting Approach for Accurate Eye Center Estimation

    Directory of Open Access Journals (Sweden)

    Gagandeep Singh

    2013-05-01

    Full Text Available This paper proposes a novel approach for accurate estimation of the eye center in face images. A distributed voting based approach, in which every pixel votes, is adopted to generate potential eye center candidates. The votes are distributed over a subset of pixels that lie in the direction opposite to the gradient direction, and the weighting of the votes is distributed according to a novel mechanism. First, the image is normalized to eliminate illumination variations and its edge map is generated using the Canny edge detector. Distributed voting is applied on the edge image to generate different eye center candidates. Morphological closing and local maxima search are used to reduce the number of candidates. A classifier based on spatial and intensity information is used to choose the correct candidates for the locations of the eye center. The proposed approach was tested on the BioID face database and resulted in a better iris detection rate than the state-of-the-art. The proposed approach is robust against illumination variation, small pose variations, the presence of eyeglasses and partial occlusion of the eyes. Defence Science Journal, 2013, 63(3), pp. 292-297, DOI: http://dx.doi.org/10.14429/dsj.63.2763
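
    The gradient-opposite voting idea can be sketched as follows; this simplified version omits the normalization, Canny edge map, morphological post-processing and classifier stages, and the weighting scheme and toy image are assumptions.

```python
# Simplified sketch of gradient-based voting for a dark circular center (eye pupil);
# the paper's weighting scheme, Canny edges and classifier stage are omitted.
import numpy as np

def vote_center(img, max_radius=15):
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    votes = np.zeros(img.shape, dtype=float)
    ys, xs = np.nonzero(mag > np.percentile(mag, 90))   # strong-gradient pixels only
    for y, x in zip(ys, xs):
        dx, dy = gx[y, x] / mag[y, x], gy[y, x] / mag[y, x]
        for r in range(1, max_radius):
            # cast a vote r pixels opposite to the gradient direction (toward darker)
            vy, vx = int(round(y - r * dy)), int(round(x - r * dx))
            if 0 <= vy < img.shape[0] and 0 <= vx < img.shape[1]:
                votes[vy, vx] += mag[y, x] / r          # nearer votes weighted higher
    return np.unravel_index(np.argmax(votes), votes.shape)

# toy usage: a dark disc on a bright background
yy, xx = np.mgrid[0:60, 0:80]
image = np.where((yy - 30) ** 2 + (xx - 50) ** 2 < 100, 40, 200)
print(vote_center(image))   # expected near (30, 50)
```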

  2. Accurate measurement of RF exposure from emerging wireless communication systems

    International Nuclear Information System (INIS)

    Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are submitted to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation but with the same root-mean-square (RMS) power. The two probes do not provide sufficiently accurate results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), nor for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with the emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known. In this case the measurement errors are shown to be systematic and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.

  3. Study of accurate volume measurement system for plutonium nitrate solution

    Energy Technology Data Exchange (ETDEWEB)

    Hosoma, T. [Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan). Tokai Works

    1998-12-01

    It is important for effective safeguarding of nuclear materials to establish a technique for accurate volume measurement of plutonium nitrate solution in an accountancy tank. The volume of the solution can be estimated from two differential pressures between three dip-tubes, through which air is purged by a compressor. One of the differential pressures corresponds to the density of the solution, and the other corresponds to the surface level of the solution in the tank. The measurement of the differential pressure is subject to many sources of error, such as the precision of the pressure transducer, fluctuation of the back-pressure, generation of bubbles at the tips of the dip-tubes, non-uniformity of temperature and density of the solution, pressure drop in the dip-tube, and so on. The various excess pressures involved in the volume measurement are discussed and corrected by a reasonable method. A high-precision differential pressure measurement system has been developed with a quartz oscillation type transducer which converts a differential pressure to a digital signal. The developed system is used for inspection by the government and the IAEA. (M. Suetake)
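
    A back-of-the-envelope sketch of the dip-tube principle is given below; the tube separation, tank cross-section and example pressures are illustrative, and none of the excess-pressure corrections discussed in the report are included.

```python
# Sketch of dip-tube (pneumercator) volume measurement: one differential pressure
# gives the solution density, the other the liquid level. All geometry and example
# pressures are assumed values for illustration only.

G = 9.80665            # standard gravity, m/s^2

def density(dp_density_pa, tube_separation_m=0.30):
    # ΔP between two submerged dip tubes separated vertically by a known distance
    return dp_density_pa / (G * tube_separation_m)

def level(dp_level_pa, rho):
    # ΔP between the lowest dip tube and the tank head space
    return dp_level_pa / (rho * G)

def volume(level_m, tank_area_m2=0.50):
    # simple constant-cross-section tank; a real accountancy tank would use a
    # calibrated level-to-volume curve instead
    return tank_area_m2 * level_m

rho = density(4200.0)          # Pa  -> kg/m^3
h = level(18000.0, rho)        # Pa  -> m
print(round(rho, 1), round(h, 3), round(volume(h), 4))
```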

  4. Accurate measurement of RF exposure from emerging wireless communication systems

    Science.gov (United States)

    Letertre, Thierry; Monebhurrun, Vikass; Toffano, Zeno

    2013-04-01

    Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are submitted to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation but with the same root-mean-square (RMS) power. The two probes do not provide sufficiently accurate results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), nor for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with the emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known. In this case the measurement errors are shown to be systematic and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.

  5. Analytical model for relativistic corrections to the nuclear magnetic shielding constant in atoms

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Rodolfo H. [Facultad de Ciencias Exactas, Universidad Nacional del Nordeste, Avenida Libertad 5500 (3400), Corrientes (Argentina)]. E-mail: rhromero@exa.unne.edu.ar; Gomez, Sergio S. [Facultad de Ciencias Exactas, Universidad Nacional del Nordeste, Avenida Libertad 5500 (3400), Corrientes (Argentina)

    2006-04-24

    We present a simple analytical model for calculating and rationalizing the main relativistic corrections to the nuclear magnetic shielding constant in atoms. It provides good estimates for those corrections and their trends, in reasonable agreement with accurate four-component calculations and perturbation methods. The origin of the effects in deep core atomic orbitals is manifestly shown.

  6. Real-time lens distortion correction: speed, accuracy and efficiency

    Science.gov (United States)

    Bax, Michael R.; Shahidi, Ramin

    2014-11-01

    Optical lens systems suffer from nonlinear geometrical distortion. Optical imaging applications such as image-enhanced endoscopy and image-based bronchoscope tracking require correction of this distortion for accurate localization, tracking, registration, and measurement of image features. Real-time capability is desirable for interactive systems and live video. The use of a texture-mapping graphics accelerator, which is standard hardware on current motherboard chipsets and add-in video graphics cards, to perform distortion correction is proposed. Mesh generation for image tessellation, an error analysis, and performance results are presented. It is shown that distortion correction using commodity graphics hardware is substantially faster than using the main processor and can be performed at video frame rates (faster than 30 frames per second), and that the polar-based method of mesh generation proposed here is more accurate than a conventional grid-based approach. Using graphics hardware to perform distortion correction is not only fast and accurate but also efficient as it frees the main processor for other tasks, which is an important issue in some real-time applications.
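
    The mesh-based remapping idea can be illustrated in software as follows; a one-parameter radial model stands in for the real lens calibration, and the grid step, principal point and focal length are assumptions (the paper evaluates a polar mesh on texture-mapping hardware, which is not reproduced here).

```python
# Sketch of mesh-based radial distortion correction done in software; the paper
# maps an analogous mesh onto texture-mapping graphics hardware. The distortion
# coefficient k1 and image size are illustrative.
import numpy as np

def distort(xn, yn, k1=-0.25):
    # simple one-parameter radial model in normalized coordinates
    r2 = xn ** 2 + yn ** 2
    s = 1.0 + k1 * r2
    return xn * s, yn * s

def correction_mesh(width, height, step=16):
    # sparse mesh: for each undistorted output vertex, where to sample in the
    # distorted source image (graphics hardware interpolates between vertices)
    xs = np.arange(0, width + 1, step)
    ys = np.arange(0, height + 1, step)
    cx, cy, f = width / 2.0, height / 2.0, float(width)   # assumed principal point/focal
    mesh = np.empty((len(ys), len(xs), 2))
    for j, y in enumerate(ys):
        for i, x in enumerate(xs):
            dx, dy = distort((x - cx) / f, (y - cy) / f)
            mesh[j, i] = (dx * f + cx, dy * f + cy)
    return mesh

mesh = correction_mesh(640, 480)
print(mesh.shape, mesh[0, 0], mesh[-1, -1])   # corners map inward for k1 < 0
```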

  7. Nested Quantum Error Correction Codes

    CERN Document Server

    Wang, Zhuo; Fan, Hen; Vedral, Vlatko

    2009-01-01

    The theory of quantum error correction was established more than a decade ago as the primary tool for fighting decoherence in quantum information processing. Although great progress has already been made in this field, few methods are available for constructing new quantum error correction codes from old codes. Here we exhibit a simple and general method to construct new quantum error correction codes by nesting certain quantum codes together. The problem of finding long quantum error correction codes is reduced to that of searching for several short quantum codes with certain properties. Our method works for codes of any length and any distance, and is quite efficient for constructing optimal or near-optimal codes. The two main known methods for constructing new codes from old codes in quantum error-correction theory, concatenation and pasting, can be understood in the framework of nested quantum error correction codes.

  8. Food systems in correctional settings

    DEFF Research Database (Denmark)

    Smoyer, Amy; Kjær Minke, Linda

    Food is a central component of life in correctional institutions and plays a critical role in the physical and mental health of incarcerated people and the construction of prisoners' identities and relationships. An understanding of the role of food in correctional settings and the effective management of food systems may improve outcomes for incarcerated people and help correctional administrators to maximize their health and safety. This report summarizes existing research on food systems in correctional settings and provides examples of food programmes in prison and remand facilities, including a case study of food-related innovation in the Danish correctional system. It offers specific conclusions for policy-makers, administrators of correctional institutions and prison-food-service professionals, and makes proposals for future research.

  9. Asymptotic expansion based equation of state for hard-disk fluids offering accurate virial coefficients

    CERN Document Server

    Tian, Jianxiang; Mulero, A

    2016-01-01

    Despite the fact that more than 30 analytical expressions for the equation of state of hard-disk fluids have been proposed in the literature, none of them is capable of reproducing the currently accepted numeric or estimated values for the first eighteen virial coefficients. Using the asymptotic expansion method, extended to the first ten virial coefficients for hard-disk fluids, fifty-seven new expressions for the equation of state have been studied. Of these, a new equation of state is selected which reproduces accurately all of the first eighteen virial coefficients. Comparisons of the compressibility factor with computer simulations show that this new equation is as accurate as other similar expressions with the same number of parameters. Finally, the location of the poles of the 57 new equations shows that there are some particular configurations which could give both the accurate virial coefficients and the correct closest packing fraction in the future when higher virial coefficients than the t...

  10. Health care in correctional facilities.

    OpenAIRE

    Thorburn, K M

    1995-01-01

    More than 1.3 million adults are in correctional facilities, including jails and federal and state prisons, in the United States. Health care of the inmates is an integral component of correctional management. Health services in correctional facilities underwent dramatic improvements during the 1970s. Public policy trends beginning in the early 1980s substantially affected the demographics and health status of jail and prison populations and threatened earlier gains in the health care of inma...

  11. Comparison of Topographic Correction Methods

    Directory of Open Access Journals (Sweden)

    Rudolf Richter

    2009-07-01

    Full Text Available A comparison of topographic correction methods is conducted for Landsat-5 TM, Landsat-7 ETM+, and SPOT-5 imagery from different geographic areas and seasons. Three successful and known methods are compared: the semi-empirical C correction, the Gamma correction depending on the incidence and exitance angles, and a modified Minnaert approach. In the majority of cases the modified Minnaert approach performed best, but no method is superior in all cases.
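
    Of the three methods compared, the semi-empirical C correction is easy to sketch; the following example uses synthetic reflectances and illumination angles, with the empirical c parameter obtained from a scene-wide regression (band values, angles and the toy terrain signal are assumptions).

```python
# Sketch of the semi-empirical C correction (one of the three compared methods);
# the band reflectances and angles below are illustrative values.
import numpy as np

def c_correction(reflectance, cos_i, sun_zenith_deg):
    # cos_i: cosine of the local solar incidence angle per pixel (from slope/aspect)
    cos_sz = np.cos(np.radians(sun_zenith_deg))
    # regress reflectance against cos_i over the scene to get the empirical c = b/m
    m, b = np.polyfit(cos_i.ravel(), reflectance.ravel(), 1)
    c = b / m
    return reflectance * (cos_sz + c) / (cos_i + c)

rng = np.random.default_rng(2)
cos_i = rng.uniform(0.2, 1.0, size=(100, 100))
rho = 0.3 * cos_i + 0.02 + 0.005 * rng.normal(size=cos_i.shape)   # synthetic terrain effect
flat = c_correction(rho, cos_i, sun_zenith_deg=40.0)
# correlation with illumination should be near zero after correction
print(round(float(np.corrcoef(flat.ravel(), cos_i.ravel())[0, 1]), 3))
```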

  12. Corrective Feedback and Teacher Development

    OpenAIRE

    Ellis, Rod

    2009-01-01

    This article examines a number of controversies relating to how corrective feedback (CF) has been viewed in SLA and language pedagogy. These controversies address (1) whether CF contributes to L2 acquisition, (2) which errors should be corrected, (3) who should do the correcting (the teacher or the learner him/herself), (4) which type of CF is the most effective, and (5) what is the best timing for CF (immediate or delayed). In discussing these controversies, both the pedagogic and SLA litera...

  13. Accurate, Meshless Methods for Magneto-Hydrodynamics

    CERN Document Server

    Hopkins, Philip F

    2016-01-01

    Recently, we developed a pair of meshless finite-volume Lagrangian methods for hydrodynamics: the 'meshless finite mass' (MFM) and 'meshless finite volume' (MFV) methods. These capture advantages of both smoothed-particle hydrodynamics (SPH) and adaptive mesh-refinement (AMR) schemes. Here, we extend these to include ideal magneto-hydrodynamics (MHD). The MHD equations are second-order consistent and conservative. We augment these with a divergence-cleaning scheme, which maintains div B ~ 0 to high accuracy. We implement these in the code GIZMO, together with a state-of-the-art implementation of SPH MHD. In every one of a large suite of test problems, the new methods are competitive with moving-mesh and AMR schemes using constrained transport (CT) to ensure div B = 0. They are able to correctly capture the growth and structure of the magneto-rotational instability (MRI), MHD turbulence, and the launching of magnetic jets, in some cases converging more rapidly than AMR codes. Compared to SPH, the MFM/MFV methods e...

  14. Cool Cluster Correctly Correlated

    Energy Technology Data Exchange (ETDEWEB)

    Sergey Aleksandrovich Varganov

    2005-12-17

    Atomic clusters are unique objects, which occupy an intermediate position between atoms and condensed matter systems. For a long time it was thought that physical and chemical properties of atomic clusters monotonically change with increasing size of the cluster from a single atom to a condensed matter system. However, recently it has become clear that many properties of atomic clusters can change drastically with the size of the clusters. Because physical and chemical properties of clusters can be adjusted simply by changing the cluster's size, different applications of atomic clusters were proposed. One example is the catalytic activity of clusters of specific sizes in different chemical reactions. Another example is a potential application of atomic clusters in microelectronics, where their band gaps can be adjusted by simply changing cluster sizes. In recent years significant advances in experimental techniques have allowed one to synthesize and study atomic clusters of specified sizes. However, the interpretation of the results is often difficult. Theoretical methods are frequently used to help in the interpretation of complex experimental data. Most of the theoretical approaches have been based on empirical or semiempirical methods. These methods allow one to study large and small clusters using the same approximations. However, since empirical and semiempirical methods rely on simple models with many parameters, it is often difficult to estimate the quantitative and even qualitative accuracy of the results. On the other hand, because of significant advances in quantum chemical methods and computer capabilities, it is now possible to do high quality ab-initio calculations not only on systems of few atoms but on clusters of practical interest as well. In addition to accurate results for specific clusters, such methods can be used for benchmarking of different empirical and semiempirical approaches. The atomic clusters studied in this work contain from a few atoms

  15. QCD corrections to triboson production

    Science.gov (United States)

    Lazopoulos, Achilleas; Melnikov, Kirill; Petriello, Frank

    2007-07-01

    We present a computation of the next-to-leading order QCD corrections to the production of three Z bosons at the Large Hadron Collider. We calculate these corrections using a completely numerical method that combines sector decomposition to extract infrared singularities with contour deformation of the Feynman parameter integrals to avoid internal loop thresholds. The NLO QCD corrections to pp→ZZZ are approximately 50% and are badly underestimated by the leading order scale dependence. However, the kinematic dependence of the corrections is minimal in phase space regions accessible at leading order.

  16. Accurate free energy calculation along optimized paths.

    Science.gov (United States)

    Chen, Changjun; Xiao, Yi

    2010-05-01

    The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its non-hydrogen-atom dihedrals in the initial state and set the equilibrium angles of the potentials to those of the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate the free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of the beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method for accurate free energy calculation.
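
    The record above turns on the thermodynamic-integration identity ΔF = ∫ ⟨∂U/∂λ⟩ dλ. As a purely illustrative sketch (not the authors' restrained-dihedral path construction), the snippet below assumes the ensemble averages ⟨∂U/∂λ⟩ have already been sampled at a grid of λ values (here they are placeholder numbers) and integrates them numerically.

        # Thermodynamic-integration sketch: dF = integral of <dU/dlambda> over lambda.
        # Illustrative only; the averages below are placeholders, not simulation output.
        import numpy as np

        lambdas = np.linspace(0.0, 1.0, 11)              # coupling-parameter grid
        mean_du_dlam = 5.0 * np.cos(np.pi * lambdas)     # placeholder <dU/dlambda> values

        delta_F = np.trapz(mean_du_dlam, lambdas)        # trapezoidal-rule estimate
        print(f"Estimated free energy difference: {delta_F:.3f} (energy units)")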

  17. Accurate fission data for nuclear safety

    CERN Document Server

    Solders, A; Jokinen, A; Kolhinen, V S; Lantz, M; Mattera, A; Penttila, H; Pomp, S; Rakopoulos, V; Rinta-Antila, S

    2013-01-01

    The Accurate fission data for nuclear safety (AlFONS) project aims at high precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high current light ion cyclotron at the University of Jyvaskyla. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies 1 - 30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons...

  18. Fast and Provably Accurate Bilateral Filtering.

    Science.gov (United States)

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S . The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
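
    For reference, the brute-force bilateral filter that the record's algorithm accelerates can be written directly; the sketch below is that O(S)-per-pixel baseline with Gaussian spatial and range kernels. It is not the paper's O(1) approximation, and the parameter values are arbitrary.

        # Brute-force bilateral filter with Gaussian spatial and range kernels --
        # the O(S)-per-pixel baseline that the fast algorithm above approximates.
        # This is not the paper's O(1) method; parameters below are arbitrary.
        import numpy as np

        def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
            h, w = img.shape
            out = np.zeros_like(img)
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))   # spatial kernel
            padded = np.pad(img, radius, mode='edge')
            for i in range(h):
                for j in range(w):
                    window = padded[i:i + 2*radius + 1, j:j + 2*radius + 1]
                    rng_k = np.exp(-(window - img[i, j])**2 / (2.0 * sigma_r**2))  # range kernel
                    weights = spatial * rng_k
                    out[i, j] = np.sum(weights * window) / np.sum(weights)
            return out

        smoothed = bilateral_filter(np.random.rand(64, 64))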

  19. A self-interaction-free local hybrid functional: Accurate binding energies vis-à-vis accurate ionization potentials from Kohn-Sham eigenvalues

    CERN Document Server

    Schmidt, Tobias; Makmal, Adi; Kronik, Leeor; Kümmel, Stephan

    2014-01-01

    We present and test a new approximation for the exchange-correlation (xc) energy of Kohn-Sham density functional theory. It combines exact exchange with a compatible non-local correlation functional. The functional is by construction free of one-electron self-interaction, respects constraints derived from uniform coordinate scaling, and has the correct asymptotic behavior of the xc energy density. It contains one parameter that is not determined ab initio. We investigate whether it is possible to construct a functional that yields accurate binding energies and affords other advantages, specifically Kohn-Sham eigenvalues that reliably reflect ionization potentials. Tests for a set of atoms and small molecules show that within our local-hybrid form accurate binding energies can be achieved by proper optimization of the free parameter in our functional, along with an improvement in dissociation energy curves and in Kohn-Sham eigenvalues. However, the correspondence of the latter to experimental ionization potent...

  20. New orbit correction method uniting global and local orbit corrections

    Science.gov (United States)

    Nakamura, N.; Takaki, H.; Sakai, H.; Satoh, M.; Harada, K.; Kamiya, Y.

    2006-01-01

    A new orbit correction method, called the eigenvector method with constraints (EVC), is proposed and formulated to unite global and local orbit corrections for ring accelerators, especially synchrotron radiation (SR) sources. The EVC can exactly correct the beam positions at arbitrarily selected ring positions such as light source points, simultaneously reducing closed orbit distortion (COD) around the whole ring. Computer simulations clearly demonstrate these features of the EVC for both cases of the Super-SOR light source and the Advanced Light Source (ALS) that have typical structures of high-brilliance SR sources. In addition, the effects of errors in beam position monitor (BPM) reading and steering magnet setting on the orbit correction are analytically expressed and also compared with the computer simulations. Simulation results show that the EVC is very effective and useful for orbit correction and beam position stabilization in SR sources.
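
    The EVC formalism itself is not reproduced in the record; as a hedged illustration of the underlying idea, the sketch below performs a generic SVD-based global orbit correction, solving for corrector strengths that minimize the measured closed orbit distortion in a least-squares sense through an orbit response matrix. The exact-constraint step that distinguishes the EVC is not included, and the matrices are random placeholders.

        # Generic SVD-based global orbit correction (illustrative only; the EVC
        # additionally enforces exact constraints at selected ring positions,
        # which is not done here). Response matrix and orbit are placeholders.
        import numpy as np

        rng = np.random.default_rng(0)
        n_bpm, n_corr = 40, 20
        R = rng.normal(size=(n_bpm, n_corr))        # orbit response matrix (placeholder)
        cod = rng.normal(scale=0.5, size=n_bpm)     # measured closed orbit distortion (placeholder)

        theta = -np.linalg.pinv(R) @ cod            # least-squares corrector strengths
        residual = cod + R @ theta
        print("rms COD before:", cod.std(), "after:", residual.std())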

  1. Accurate paleointensities - the multi-method approach

    Science.gov (United States)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed -the pseudo-Thellier protocol- which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old; the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  2. Towards Accurate Application Characterization for Exascale (APEX)

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines and many large capability resources including ASCI Red and RedStorm were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia’s production users/developers.

  3. Accurate hydrocarbon estimates attained with radioactive isotope

    International Nuclear Information System (INIS)

    To make accurate economic evaluations of new discoveries, an oil company needs to know how much gas and oil a reservoir contains. The porous rocks of these reservoirs are not completely filled with gas or oil, but contain a mixture of gas, oil and water. It is extremely important to know what volume percentage of this water--called connate water--is contained in the reservoir rock. The percentage of connate water can be calculated from electrical resistivity measurements made downhole. The accuracy of this method can be improved if a pure sample of connate water can be analyzed or if the chemistry of the water can be determined by conventional logging methods. Because of the similarity of the mud filtrate--the water in a water-based drilling fluid--and the connate water, this is not always possible. If the oil company cannot distinguish between connate water and mud filtrate, its oil-in-place calculations could be incorrect by ten percent or more. It is clear that unless an oil company can be sure that a sample of connate water is pure, or at the very least knows exactly how much mud filtrate it contains, its assessment of the reservoir's water content--and consequently its oil or gas content--will be distorted. The oil companies have opted for the Repeat Formation Tester (RFT) method. Label the drilling fluid with small doses of tritium--a radioactive isotope of hydrogen--and it will be easy to detect and quantify in the sample

  4. Correct and efficient accelerator programming

    OpenAIRE

    Cohen, Albert; Donaldson, Alistair F.; Huisman, Marieke; Katoen, Joost-Pieter

    2013-01-01

    This report documents the program and the outcomes of Dagstuhl Seminar 13142 “Correct and Efficient Accelerator Programming”. The aim of this Dagstuhl seminar was to bring together researchers from various sub-disciplines of computer science to brainstorm and discuss the theoretical foundations, design and implementation of techniques and tools for correct and efficient accelerator programming.

  5. Atmospheric correction of satellite data

    Science.gov (United States)

    Shmirko, Konstantin; Bobrikov, Alexey; Pavlov, Andrey

    2015-11-01

    The atmosphere accounts for more than 90% of all radiation measured by satellite. Because of this, atmospheric correction plays an important role in separating the water-leaving radiance from the signal and in evaluating the concentrations of various water constituents (chlorophyll-A, DOM, CDOM, etc.). The elimination of the atmospheric intrinsic radiance from the remote sensing signal is referred to as atmospheric correction.

  6. Accurate and precise 40Ar/39Ar dating by high-resolution, multi-collection, mass spectrometry

    DEFF Research Database (Denmark)

    Storey, Michael; Rivera, Tiffany; Flude, Stephanie

    analog enabling precise measurement of very small 36Ar signals for accurate correction for atmospheric-derived 40Ar; (iii) higher mass resolution allows pseudo resolution of hydrocarbon isobaric interferences from 36Ar through to 40Ar; (iv) multi-collection, allowing more data to be gathered in a fixed...

  7. Relativistic corrections to stopping powers

    International Nuclear Information System (INIS)

    Relativistic corrections to the nonrelativistic Bethe-Bloch formula for the stopping power of matter for charged particles are traditionally computed by considering close collisions separately from distant collisions. The close collision contribution is further divided into the Mott correction appropriate for very small impact parameters, and the Bloch correction, computed for larger values. This division of the region of close collisions leads to a very cumbersome result if one generalizes the original Bloch procedure to relativistic energies. The authors avoid the resulting poorly specified scattering angle θ0 that divides the Mott and Bloch correction regimes by using the procedure suggested by Lindhard and applied by Golovchenko, Cox and Goland to determine the Bloch correction for relativistic velocities. 25 references, 2 figures

  8. Shell corrections in stopping powers

    Science.gov (United States)

    Bichsel, H.

    2002-05-01

    One of the theories of the electronic stopping power S for fast light ions was derived by Bethe. The algorithm currently used for the calculation of S includes terms known as the mean excitation energy I, the shell correction, the Barkas correction, and the Bloch correction. These terms are described here. For the calculation of the shell corrections an atomic model is used, which is more realistic than the hydrogenic approximation used so far. A comparison is made with similar calculations in which the local plasma approximation is utilized. Close agreement with the experimental data for protons with energies from 0.3 to 10 MeV traversing Al and Si is found without the need for adjustable parameters for the shell corrections.

  9. On the accurate estimation of gap fraction during daytime with digital cover photography

    Science.gov (United States)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.

    2015-12-01

    Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered the computation of accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The novel method computes gap fraction using a single unsaturated DCP raw image which is corrected for scattering effects by canopies and a reconstructed sky image from the raw format image. To test the sensitivity of the novel method's derived gap fraction to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REVs from 0 to -5. The novel method showed little variation in gap fraction across different REVs in both dense and sparse canopies over a diverse range of solar zenith angles. The perforated panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method resulted in accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful in monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.

  10. Scattering Correction For Image Reconstruction In Flash Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Liangzhi; Wang, Mengqi; Wu, Hongchun; Liu, Zhouyu; Cheng, Yuxiong; Zhang, Hongbo [Xi' an Jiaotong Univ., Xi' an (China)

    2013-08-15

    Scattered photons cause blurring and distortions in flash radiography, reducing the accuracy of image reconstruction significantly. The effect of the scattered photons is taken into account and an iterative deduction of the scattered photons is proposed to amend the scattering effect for image restoration. In order to deduct the scattering contribution, the flux of scattered photons is estimated as the sum of two components. The single scattered component is calculated accurately together with the uncollided flux along the characteristic ray, while the multiple scattered component is evaluated using correction coefficients pre-obtained from Monte Carlo simulations. The arbitrary geometry pretreatment and ray tracing are carried out based on the customization of AutoCAD. With the above model, an Iterative Procedure for image restORation code, IPOR, is developed. Numerical results demonstrate that the IPOR code is much more accurate than the direct reconstruction solution without scattering correction and it has a very high computational efficiency.
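
    To make the iterative deduction concrete, the sketch below runs a schematic scatter-subtraction loop: an estimated scatter flux (a single-scatter term plus a multiple-scatter term expressed through a pre-obtained coefficient) is subtracted from the measurement and the uncollided estimate is updated. The scatter models are crude numerical stand-ins, not the ray-traced and Monte Carlo models of the IPOR code.

        # Schematic iterative scatter-deduction loop. The scatter estimators are
        # crude stand-ins for the record's single-scatter ray calculation and the
        # Monte Carlo multiple-scatter coefficients; all data are placeholders.
        import numpy as np

        def estimate_scatter(uncollided, multiple_coeff=0.15, blur=5):
            kernel = np.ones(blur) / blur
            single = 0.10 * np.convolve(uncollided, kernel, mode='same')  # single-scatter stand-in
            return single + multiple_coeff * uncollided                   # plus multiple scatter

        measured = np.random.rand(256) + 0.3         # placeholder projection data
        uncollided = measured.copy()
        for _ in range(10):                           # iterative deduction
            uncollided = np.clip(measured - estimate_scatter(uncollided), 0.0, None)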

  11. Scattering Correction For Image Reconstruction In Flash Radiography

    International Nuclear Information System (INIS)

    Scattered photons cause blurring and distortions in flash radiography, reducing the accuracy of image reconstruction significantly. The effect of the scattered photons is taken into account and an iterative deduction of the scattered photons is proposed to amend the scattering effect for image restoration. In order to deduct the scattering contribution, the flux of scattered photons is estimated as the sum of two components. The single scattered component is calculated accurately together with the uncollided flux along the characteristic ray, while the multiple scattered component is evaluated using correction coefficients pre-obtained from Monte Carlo simulations. The arbitrary geometry pretreatment and ray tracing are carried out based on the customization of AutoCAD. With the above model, an Iterative Procedure for image restORation code, IPOR, is developed. Numerical results demonstrate that the IPOR code is much more accurate than the direct reconstruction solution without scattering correction and it has a very high computational efficiency.

  12. Evaluation of QNI corrections in porous media applications

    Science.gov (United States)

    Radebe, M. J.; de Beer, F. C.; Nshimirimana, R.

    2011-09-01

    Qualitative measurements using digital neutron imaging have been explored more than accurate quantitative measurements. The reason for this bias is that quantitative measurements require correction for background and material scatter, and for neutron spectral effects. The Quantitative Neutron Imaging (QNI) software package has resulted from efforts at the Paul Scherrer Institute, Helmholtz Zentrum Berlin (HZB) and Necsa to correct for these effects, while the sample-detector distance (SDD) principle has previously been demonstrated as a measure to eliminate the material scatter effect. This work evaluates the capabilities of the QNI software package to produce accurate quantitative results on specific characteristics of porous media, and its role in nondestructive quantification of material with and without calibration. The work further complements QNI's abilities by the use of different SDDs. Studies of the effective porosity of mortar and the attenuation coefficient of water using QNI and the SDD principle are reported.

  13. High order QED corrections in Z physics

    International Nuclear Information System (INIS)

    In this thesis a number of calculations of higher order QED corrections are presented, all applying to the standard LEP/SLC processes e+e-→ f-bar f, where f stands for any fermion. In cases where f≠ e-, νe, the above process is only possible via annihilation of the incoming electron-positron pair. At LEP/SLC this mainly occurs via the production and the subsequent decay of a Z boson, i.e. the cross section is heavily dominated by the Z resonance. These processes and the corrections to them, treated in a semi-analytical way, are discussed (ch. 2). In the case f = e- (Bhabha scattering) the process can also occur via the exchange of a virtual photon in the t-channel. Since the latter contribution is dominant at small scattering angles, one has to exclude these angles if one is interested in Z physics. Having excluded that region one has to recalculate all QED corrections (ch. 3). The techniques introduced there enable the calculation of the difference between forward and backward scattering, the forward-backward asymmetry, for the cases f ≠ e-, νe (ch. 4). At small scattering angles, where Bhabha scattering is dominated by photon exchange in the t-channel, this process is used in experiments to determine the luminosity of the e+e- accelerator. Hence an accurate theoretical description of this process at small angles is of vital interest to the overall normalization of all measurements at LEP/SLC. Ch. 5 gives such a description in a semi-analytical way. The last two chapters discuss Monte Carlo techniques that are used for the cases f≠ e-, νe. Ch. 6 describes the simulation of two photon bremsstrahlung, which is a second order QED correction effect. The results are compared with results of the semi-analytical treatment in ch. 2. Finally ch. 7 reviews several techniques that have been used to simulate higher order QED corrections for the cases f≠ e-, νe. (author). 132 refs.; 10 figs.; 16 tabs

  14. Surface consistent finite frequency phase corrections

    Science.gov (United States)

    Kimman, W. P.

    2016-07-01

    Static time-delay corrections are frequency independent and ignore velocity variations away from the assumed vertical ray path through the subsurface. There is therefore a clear potential for improvement if the finite frequency nature of wave propagation can be properly accounted for. Such a method is presented here based on the Born approximation, the assumption of surface consistency and the misfit of instantaneous phase. The concept of instantaneous phase lends itself very well to sweep-like signals, hence these are the focus of this study. Analytical sensitivity kernels are derived that accurately predict frequency-dependent phase shifts due to P-wave anomalies in the near surface. They are quick to compute and robust near the source and receivers. An additional correction is presented that re-introduces the nonlinear relation between model perturbation and phase delay, which becomes relevant for stronger velocity anomalies. The phase shift as a function of frequency is a slowly varying signal; its computation therefore does not require fine sampling even for broad-band sweeps. The kernels reveal interesting features of the sensitivity of seismic arrivals to the near surface: small anomalies can have a relatively large impact resulting from the medium field term that is dominant near the source and receivers. Furthermore, even simple velocity anomalies can produce a distinct frequency-dependent phase behaviour. Unlike statics, the predicted phase corrections are smooth in space. Verification with spectral element simulations shows an excellent match for the predicted phase shifts over the entire seismic frequency band. Applying the phase shift to the reference sweep corrects for wavelet distortion, making the technique akin to surface consistent deconvolution, even though no division in the spectral domain is involved. As long as multiple scattering is mild, surface consistent finite frequency phase corrections outperform traditional statics for moderately large

  15. Holographic thermalization with Weyl corrections

    Science.gov (United States)

    Dey, Anshuman; Mahapatra, Subhash; Sarkar, Tapobrata

    2016-01-01

    We consider holographic thermalization in the presence of a Weyl correction in five dimensional AdS space. We first obtain the Weyl corrected black brane solution perturbatively, up to first order in the coupling. The corresponding AdS-Vaidya like solution is then constructed. This is then used to numerically analyze the time dependence of the two point correlation functions and the expectation values of rectangular Wilson loops in the boundary field theory, and we discuss how the Weyl correction can modify the thermalization time scales in the dual field theory. In this context, the subtle interplay between the Weyl coupling constant and the chemical potential is studied in detail.

  16. How well does multiple OCR error correction generalize?

    Science.gov (United States)

    Lund, William B.; Ringger, Eric K.; Walker, Daniel D.

    2013-12-01

    As the digitization of historical documents, such as newspapers, becomes more common, the need of the archive patron for accurate digital text from those documents increases. Building on our earlier work, the contributions of this paper are: (1) demonstrating the applicability of novel methods for correcting optical character recognition (OCR) on disparate data sets, including a new synthetic training set, (2) enhancing the correction algorithm with novel features, and (3) assessing the data requirements of the correction learning method. First, we correct errors using conditional random fields (CRF) trained on synthetic training data sets in order to demonstrate the applicability of the methodology to unrelated test sets. Second, we show the strength of lexical features from the training sets on two unrelated test sets, yielding a relative reduction in word error rate (WER) on the test sets of 6.52%. New features capture the recurrence of hypothesis tokens and yield an additional relative reduction in WER of 2.30%. Further, we show that only 2.0% of the full training corpus of over 500,000 feature cases is needed to achieve correction results comparable to those using the entire training corpus, effectively reducing both the complexity of the training process and the learned correction model.

  17. Software for Correcting the Dynamic Error of Force Transducers

    Directory of Open Access Journals (Sweden)

    Naoki Miyashita

    2014-07-01

    Software which corrects the dynamic error of force transducers in impact force measurements using their own output signal has been developed. The software corrects the output waveform of the transducers using the output waveform itself, estimates its uncertainty and displays the results. In the experiment, the dynamic errors of three transducers of the same model are evaluated using the Levitation Mass Method (LMM), in which the impact forces applied to the transducers are accurately determined as the inertial force of the moving part of the aerostatic linear bearing. The parameters for correcting the dynamic error are determined from the results of one set of impact measurements of one transducer. Then, the validity of the obtained parameters is evaluated using the results of the other sets of measurements of all the three transducers. The uncertainties in the uncorrected force and those in the corrected force are also estimated. If manufacturers determine the correction parameters for each model using the proposed method, and provide the software with the parameters corresponding to each model, then users can obtain the waveform corrected against dynamic error and its uncertainty. The present status and the future prospects of the developed software are discussed in this paper.

  18. An accurate and practical method for inference of weak gravitational lensing from galaxy images

    Science.gov (United States)

    Bernstein, Gary M.; Armstrong, Robert; Krawiec, Christina; March, Marisa C.

    2016-07-01

    We demonstrate highly accurate recovery of weak gravitational lensing shear using an implementation of the Bayesian Fourier Domain (BFD) method proposed by Bernstein & Armstrong, extended to correct for selection biases. The BFD formalism is rigorously correct for Nyquist-sampled, background-limited, uncrowded images of background galaxies. BFD does not assign shapes to galaxies, instead compressing the pixel data D into a vector of moments M, such that we have an analytic expression for the probability P(M|g) of obtaining the observations with gravitational lensing distortion g along the line of sight. We implement an algorithm for conducting BFD's integrations over the population of unlensed source galaxies which measures ≈10 galaxies s⁻¹ core⁻¹ with good scaling properties. Initial tests of this code on ≈10⁹ simulated lensed galaxy images recover the simulated shear to a fractional accuracy of m = (2.1 ± 0.4) × 10⁻³, substantially more accurate than has been demonstrated previously for any generally applicable method. Deep sky exposures generate a sufficiently accurate approximation to the noiseless, unlensed galaxy population distribution assumed as input to BFD. Potential extensions of the method include simultaneous measurement of magnification and shear; multiple-exposure, multiband observations; and joint inference of photometric redshifts and lensing tomography.

  19. Spectroscopically Accurate Line Lists for Application in Sulphur Chemistry

    Science.gov (United States)

    Underwood, D. S.; Azzam, A. A. A.; Yurchenko, S. N.; Tennyson, J.

    2013-09-01

    for inclusion in standard atmospheric and planetary spectroscopic databases. The methods involved in computing the ab initio potential energy and dipole moment surfaces involved minor corrections to the equilibrium S-O distance, which produced a good agreement with experimentally determined rotational energies. However, the purely ab initio method was not able to reproduce an equally spectroscopically accurate representation of vibrational motion. We therefore present an empirical refinement to this original, ab initio potential surface, based on the experimental data available. This will not only be used to reproduce the room-temperature spectrum to a greater degree of accuracy, but is essential in the production of a larger, accurate line list necessary for the simulation of higher temperature spectra: we aim for coverage suitable for T ≤ 800 K. Our preliminary studies on SO3 have also shown it to exhibit an interesting "forbidden" rotational spectrum and "clustering" of rotational states; to our knowledge this phenomenon has not been observed in other examples of trigonal planar molecules and is also an investigative avenue we wish to pursue. Finally, the IR absorption bands for SO2 and SO3 exhibit a strong overlap, and the inclusion of SO2 as a complement to our studies is something that we will be interested in doing in the near future.

  20. Quantum corrections for Boltzmann equation

    Institute of Scientific and Technical Information of China (English)

    Levy, Peter M.

    2008-01-01

    We present the lowest order quantum correction to the semiclassical Boltzmann distribution function, and the equation satisfied by this correction is given. Our equation for the quantum correction is obtained from the conventional quantum Boltzmann equation by explicitly expressing the Planck constant in the gradient approximation, and the quantum Wigner distribution function is expanded in powers of the Planck constant, too. The negative quantum correlation in the Wigner distribution function, which is just the quantum correction terms, is naturally singled out, thus obviating the need for Husimi's coarse-grain averaging that is usually done to remove the negative quantum part of the Wigner distribution function. We also discuss the classical limit of quantum thermodynamic entropy in the above framework.

  1. Dispersion based beam tilt correction

    CERN Document Server

    Guetg, Marc W; Prat, Eduard; Reiche, Sven

    2013-01-01

    In Free Electron Lasers (FEL), a transverse centroid misalignment of longitudinal slices in an electron bunch reduces the effective overlap between radiation field and electron bunch and therefore the FEL performance. The dominant sources of slice misalignments for FELs are the incoherent and coherent synchrotron radiation within bunch compressors as well as transverse wake fields in the accelerating cavities. This is of particular importance for over-compression, which is required for one of the key operation modes of the SwissFEL planned at the Paul Scherrer Institute. The centroid shift is corrected using corrector magnets in dispersive sections, e.g. the bunch compressors. First and second order corrections are achieved by pairs of sextupole and quadrupole magnets in the horizontal plane while skew quadrupoles correct to first order in the vertical plane. Simulations and measurements at the SwissFEL Injector Test Facility are done to investigate the proposed correction scheme for SwissFEL. This paper pres...

  2. Spelling Correction in Agglutinative Languages

    CERN Document Server

    Oflazer, K

    1994-01-01

    This paper presents an approach to spelling correction in agglutinative languages that is based on two-level morphology and a dynamic programming based search algorithm. Spelling correction in agglutinative languages is significantly different than in languages like English. The concept of a word in such languages is much wider than the entries found in a dictionary, owing to productive word formation by derivational and inflectional affixations. After an overview of certain issues and relevant mathematical preliminaries, we formally present the problem and our solution. We then present results from our experiments with spelling correction in Turkish, a Ural-Altaic agglutinative language. Our results indicate that we can find the intended correct word in 95% of the cases and offer it as the first candidate in 74% of the cases, when the edit distance is 1.
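
    The dynamic-programming search above is built around an edit-distance measure between a misspelled string and candidate correct forms. The sketch below shows only the standard Levenshtein recurrence; the two-level morphological analyser that generates valid candidate words is not reproduced, and the Turkish example pair is merely illustrative.

        # Standard dynamic-programming (Levenshtein) edit distance between two
        # strings -- the distance measure the correction search is built on.
        def edit_distance(a: str, b: str) -> int:
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, start=1):
                curr = [i]
                for j, cb in enumerate(b, start=1):
                    curr.append(min(prev[j] + 1,                 # deletion
                                    curr[j - 1] + 1,             # insertion
                                    prev[j - 1] + (ca != cb)))   # substitution
                prev = curr
            return prev[-1]

        assert edit_distance("kitapda", "kitapta") == 1   # illustrative pair, distance 1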

  3. Multipole correction in large synchrotrons

    International Nuclear Information System (INIS)

    A new method of correcting dynamic nonlinearities due to the multipole content of a synchrotron such as the Superconducting Super Collider is discussed. The method uses lumped multipole elements placed at the center (C) of the accelerator half-cells as well as elements near the focusing (F) and defocusing (D) quads. In a first approximation, the corrector strengths follow Simpson's Rule. Correction of second-order sextupole nonlinearities may also be obtained with the F, C, and D octupoles. Correction of nonlinearities by about three orders of magnitude is obtained, and simple solutions to a fundamental problem in synchrotrons are demonstrated. Applications to the CERN Large Hadron Collider and lower energy machines, as well as extensions for quadrupole correction, are also discussed

  4. Analytic method for geometrical parameter correction of planar HPGe detector

    International Nuclear Information System (INIS)

    A numerical integration formula was introduced to calculate the response of a planar HPGe detector to photons emitted from a point source. The formula was then used to correct the geometrical parameter of the planar HPGe detector. 241Am and 137Cs point sources were placed at a series of distances (1-20 cm) from the entrance window to obtain the corresponding detection efficiencies. The detector parameters were determined by weighted least-squares fitting of the formula to the experimental efficiencies. This correction method is accurate and time-saving. The simulation result from MCNP using the corrected parameters shows that the relative deviations between simulated and experimental efficiencies are less than 1% for 59.5 and 661.6 keV photons at distances of 1-20 cm. (authors)
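
    As a hedged illustration of that fitting step, the sketch below adjusts a single geometry parameter of a toy efficiency model so that it reproduces measured point-source efficiencies in a weighted least-squares sense. The inverse-square-style model and all numbers are hypothetical stand-ins, not the numerical integration formula of the record.

        # Weighted least-squares fit of one detector-geometry parameter so that a
        # model efficiency reproduces measured efficiencies. The simple model and
        # the data below are hypothetical stand-ins for the record's formula.
        import numpy as np
        from scipy.optimize import curve_fit

        def model_eff(distance_cm, offset_cm, eff0=0.05):
            # toy model: inverse-square falloff with an effective-distance offset
            return eff0 / (1.0 + (distance_cm + offset_cm) ** 2)

        distances = np.array([1.0, 2.0, 5.0, 10.0, 20.0])             # cm
        measured = model_eff(distances, 0.3) * (1 + 0.01 * np.random.randn(5))
        sigma = 0.02 * measured                                        # uncertainties

        popt, pcov = curve_fit(model_eff, distances, measured, p0=[0.1],
                               sigma=sigma, absolute_sigma=True)
        print("fitted offset:", popt[0], "+/-", np.sqrt(pcov[0, 0]), "cm")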

  5. Correcting the Chromatic Aberration in Barrel Distortion of Endoscopic Images

    Directory of Open Access Journals (Sweden)

    Y. M. Harry Ng

    2003-04-01

    Modern endoscopes offer physicians a wide-angle field of view (FOV) for minimally invasive therapies. However, the high level of barrel distortion may prevent accurate perception of the image. Fortunately, this kind of distortion may be corrected by digital image processing. In this paper we investigate the chromatic aberrations in the barrel distortion of endoscopic images. In the past, chromatic aberration in endoscopes was corrected by achromatic lenses or active lens control. In contrast, we take a computational approach by modifying the concept of image warping and the existing barrel distortion correction algorithm to tackle the chromatic aberration problem. In addition, an error function for the determination of the level of centroid coincidence is proposed. Simulation and experimental results confirm the effectiveness of our method.
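
    A hedged sketch of the general idea follows: barrel distortion is corrected by warping each colour channel separately, so that slightly different radial distortion coefficients per channel also compensate lateral chromatic aberration. The single-coefficient polynomial model and the coefficients are assumptions for illustration, not the paper's calibrated endoscope model.

        # Per-channel radial (barrel) distortion correction -- warping each colour
        # channel with its own coefficient to counter chromatic aberration.
        # The one-parameter model and coefficients are illustrative assumptions.
        import numpy as np
        from scipy.ndimage import map_coordinates

        def undistort_channel(channel, k1):
            h, w = channel.shape
            yy, xx = np.mgrid[0:h, 0:w].astype(float)
            cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
            x, y = (xx - cx) / cx, (yy - cy) / cy            # normalised coordinates
            r2 = x**2 + y**2
            xs, ys = x * (1 + k1 * r2), y * (1 + k1 * r2)    # where to sample the distorted image
            return map_coordinates(channel, [ys * cy + cy, xs * cx + cx], order=1)

        img = np.random.rand(240, 320, 3)                    # placeholder endoscopic frame
        coeffs = (0.10, 0.11, 0.12)                          # hypothetical per-channel k1 values
        corrected = np.dstack([undistort_channel(img[..., c], k)
                               for c, k in enumerate(coeffs)])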

  6. Reflection error correction of gas turbine blade temperature

    Science.gov (United States)

    Kipngetich, Ketui Daniel; Feng, Chi; Gao, Shan

    2016-03-01

    Accurate measurement of gas turbine blades' temperature is one of the greatest challenges encountered in gas turbine temperature measurements. Within an enclosed gas turbine environment with surfaces of varying temperature and low emissivities, a new challenge is introduced into the use of radiation thermometers due to the problem of reflection error. A method for correcting this error has been proposed and demonstrated in this work through computer simulation and experiment. The method assumed that the emissivities of all surfaces exchanging thermal radiation are known. Simulations were carried out considering targets with low and high emissivities of 0.3 and 0.8, respectively, while experimental measurements were carried out on blades with an emissivity of 0.76. Simulated results showed the possibility of achieving an error of less than 1%, while the experimental result corrected the error to 1.1%. It was thus concluded that the method is appropriate for correcting the reflection error commonly encountered in temperature measurement of gas turbine blades.
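
    As a hedged illustration of the underlying correction, the snippet below subtracts the reflected radiance component from a measured spectral radiance before inverting Planck's law, assuming a known target emissivity and, purely for illustration, a single effective blackbody temperature for the surroundings. The record's full surface-exchange model is not reproduced, and the wavelength and temperatures are arbitrary.

        # Reflection-error correction sketch for single-wavelength radiation
        # thermometry: remove the reflected component, then invert Planck's law.
        # A single effective surroundings temperature is an assumption made here.
        import numpy as np

        C1, C2 = 1.191e8, 1.4388e4      # radiation constants (W um^4 m^-2 sr^-1, um K)

        def planck(T, lam_um=1.6):
            return C1 / (lam_um**5 * (np.exp(C2 / (lam_um * T)) - 1.0))

        def invert_planck(L, lam_um=1.6):
            return C2 / (lam_um * np.log(C1 / (lam_um**5 * L) + 1.0))

        eps_blade, T_surround, T_true = 0.76, 1100.0, 900.0
        measured = eps_blade * planck(T_true) + (1 - eps_blade) * planck(T_surround)

        T_apparent = invert_planck(measured)                                  # uncorrected
        L_corr = (measured - (1 - eps_blade) * planck(T_surround)) / eps_blade
        T_corrected = invert_planck(L_corr)
        print(f"apparent {T_apparent:.1f} K, corrected {T_corrected:.1f} K, true {T_true} K")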

  7. Radiative corrections to Bose condensation

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, A. (Academia de Ciencias de Cuba, La Habana. Inst. de Matematica, Cibernetica y Computacion)

    1985-04-01

    The Bose condensation of the scalar field in a theory behaving in the Coleman-Weinberg mode is considered. The effective potential of the model is computed within the semiclassical approximation in a dimensional regularization scheme. Radiative corrections are shown to introduce certain μ-dependent ultraviolet divergences in the effective potential coming from the Many-Particle theory. The weight of radiative corrections in the dynamics of the system is strongly modified by the charge density.

  8. Colour correction for panoramic imaging

    OpenAIRE

    Tian, Gui Yun; Gledhill, Duke; Taylor, D.

    2002-01-01

    This paper reports the problem of colour distortion in panoramic imaging. Particularly when image mosaicing is used for panoramic imaging, the images are captured under different lighting conditions and viewpoints. The paper analyses several linear approaches for their colour transform and mapping. A new approach of colour histogram based colour correction is provided, which is robust to image capturing conditions such as viewpoints and scaling. The procedure for the colour correction is intr...

  9. Finite Size Corrections for Dimers

    CERN Document Server

    Nigro, Alessandro

    2012-01-01

    In this paper we derive the finite size corrections to the energy eigenvalues for 2D dimers on a square lattice. These finite size corrections, as in the case of Critical Dense Polymers, are proportional to the eigenvalues of the Local Integrals of Motion of Bazhanov, Lukyanov and Zamolodchikov for central charge $c=-2$. This sheds more light on the status of the Dimer model as a conformal field theory with this value of the central charge.

  10. Coincidence corrections for a multi-detector gamma spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Britton, R., E-mail: r.britton@surrey.ac.uk [University of Surrey, Guildford GU2 7XH (United Kingdom); AWE, Aldermaston, Reading, Berkshire RG7 4PR (United Kingdom); Burnett, J.L.; Davies, A.V. [AWE, Aldermaston, Reading, Berkshire RG7 4PR (United Kingdom); Regan, P.H. [University of Surrey, Guildford GU2 7XH (United Kingdom)

    2015-01-01

    List-mode data acquisition has been utilised in conjunction with a high-efficiency γ–γ coincidence system, allowing both the energetic and temporal information to be retained for each recorded event. The collected data are re-processed multiple times to extract any coincidence information from the γ-spectroscopy system, correct for the time-walk of low-energy events, and remove accidental coincidences from the projected coincidence spectra. The time-walk correction has resulted in a reduction in the width of the coincidence delay gate of 18.4±0.4%, and thus an equivalent removal of ‘background’ coincidences. The correction factors were applied to ∼5.6% of events up to ∼500 keV for a combined ¹³⁷Cs and ⁶⁰Co source, and are crucial for accurate coincidence measurements of low-energy events that may otherwise be missed by a standard delay gate. By extracting both the delay gate and a representative ‘background’ region for the coincidences, a coincidence background subtracted spectrum is projected from the coincidence matrix, which effectively removes ∼100% of the accidental coincidences (up to 16.6±0.7% of the total coincidence events seen during this work). This accidental-coincidence removal is crucial for accurate characterisation of the events seen in coincidence systems, as without this correction false coincidence signatures may be incorrectly interpreted.
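
    A hedged sketch of the accidental-coincidence removal step follows: events inside a prompt time-difference gate are projected into an energy spectrum, and the spectrum of an off-prompt ('accidentals only') gate, scaled by the ratio of gate widths, is subtracted. Event data, gate widths and binning are placeholders, not the settings used in the record.

        # Accidental-coincidence subtraction: prompt-gate spectrum minus a scaled
        # off-prompt ("accidentals only") spectrum. All numbers are placeholders.
        import numpy as np

        rng = np.random.default_rng(1)
        dt = rng.uniform(-5000, 5000, 100_000)          # time differences (ns), placeholder
        energy = rng.uniform(0, 1500, 100_000)          # event energies (keV), placeholder

        prompt = np.abs(dt) < 100                               # prompt gate, 200 ns wide
        accidental = (np.abs(dt) > 1000) & (np.abs(dt) < 2000)  # background gate, 2000 ns wide
        scale = 200.0 / 2000.0                                   # ratio of gate widths

        bins = np.arange(0, 1502, 2)
        prompt_spec, _ = np.histogram(energy[prompt], bins=bins)
        acc_spec, _ = np.histogram(energy[accidental], bins=bins)
        true_coincidences = prompt_spec - scale * acc_spec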

  11. Water-table correction factors applied to gasoline contamination

    International Nuclear Information System (INIS)

    The application of correction factors to measured ground-water elevations is an important step in the process of characterizing sites contaminated by petroleum products such as gasoline. The water-table configuration exerts a significant control on the migration of free product (e.g., gasoline) and dissolved hydrocarbon constituents. An accurate representation of this configuration cannot be made on the basis of measurements obtained from monitoring wells containing free product, unless correction factors are applied. By applying correction factors, the effect of the overlying product on the apparent water-table configuration is removed, and the water table can be analyzed at its ambient (undisturbed) level. A case history is presented where corrected water-table elevations and elevations measured at wells unaffected by free product are combined as control points. The use of the combined data facilitates a more accurate assessment of the shape of the water table, which leads to better conclusions regarding the source(s) of contamination, the extent of free-product accumulation, and optimal areas for focusing remediation efforts
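
    The record does not state which correction form was used; as a hedged illustration, the sketch below applies the commonly used density-ratio correction for wells containing floating product, where the ambient water-table elevation is taken as the product/water interface elevation plus the product thickness times the product's specific gravity. The specific gravity value is an assumed typical value for gasoline.

        # Density-ratio correction for a monitoring well containing free product.
        # The specific gravity is an assumed typical value for gasoline; the
        # record does not specify the exact correction the authors applied.
        def corrected_water_table(interface_elev_ft, product_thickness_ft,
                                  specific_gravity=0.73):
            """Ambient water-table elevation from the product/water interface."""
            return interface_elev_ft + product_thickness_ft * specific_gravity

        # Example: 2.5 ft of product above an interface measured at 312.40 ft.
        print(corrected_water_table(312.40, 2.5))   # ~314.2 ft ambient water table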

  12. The neural correlates of correctly rejecting lures during memory retrieval: the role of item relatedness.

    Science.gov (United States)

    Bowman, Caitlin R; Dennis, Nancy A

    2015-06-01

    Successful memory retrieval is predicated not only on recognizing old information, but also on correctly rejecting new information (lures) in order to avoid false memories. Correctly rejecting lures is more difficult when they are perceptually or semantically related to information presented at study as compared to when lures are distinct from previously studied information. This behavioral difference suggests that the cognitive and neural basis of correct rejections differs with respect to the relatedness between lures and studied items. The present study sought to identify neural activity that aids in suppressing false memories by examining the network of brain regions underlying correct rejection of related and unrelated lures. Results showed neural overlap in the right hippocampus and anterior parahippocampal gyrus associated with both related and unrelated correct rejections, indicating that some neural regions support correctly rejecting lures regardless of their semantic/perceptual characteristics. Direct comparisons between related and unrelated correct rejections showed that unrelated correct rejections were associated with greater activity in bilateral middle and inferior temporal cortices, regions that have been associated with categorical processing and semantic labels. Related correct rejections showed greater activation in visual and lateral prefrontal cortices, which have been associated with perceptual processing and retrieval monitoring. Thus, while related and unrelated correct rejections show some common neural correlates, related correct rejections are driven by greater perceptual processing whereas unrelated correct rejections show greater reliance on salient categorical cues to support quick and accurate memory decisions. PMID:25862563

  13. 42 CFR 460.194 - Corrective action.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Corrective action. 460.194 Section 460.194 Public...) Federal/State Monitoring § 460.194 Corrective action. (a) A PACE organization must take action to correct... corrective actions. (c) Failure to correct deficiencies may result in sanctions or termination, as...

  14. Fully 3D refraction correction dosimetry system

    Science.gov (United States)

    Manjappa, Rakesh; Sharath Makki, S.; Kumar, Rajesh; Mohan Vasu, Ram; Kanhirodan, Rajan

    2016-02-01

    medium is 71.8%, an increase of 6.4% compared to that achieved using the conventional ART algorithm. Smaller diameter dosimeters are scanned with dry air scanning by using a wide-angle lens that collects refracted light. The images reconstructed using cone beam geometry are seen to deteriorate in some planes as those regions are not scanned. Refraction correction is important and needs to be taken into consideration to achieve quantitatively accurate dose reconstructions. Refraction modeling is crucial in array based scanners as it is not possible to identify refracted rays in the sinogram space.
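
    For orientation, the 'conventional ART' baseline mentioned above is an algebraic reconstruction technique of the Kaczmarz type; the sketch below runs that generic update on a random placeholder system, with no ray tracing or refraction correction, and is not the refraction-corrected reconstruction of the record.

        # Minimal ART (Kaczmarz) reconstruction of the kind referred to above as
        # the "conventional ART algorithm". The system matrix is a random
        # placeholder, not a ray-traced or refraction-corrected projection model.
        import numpy as np

        rng = np.random.default_rng(2)
        A = rng.random((300, 100))          # placeholder projection (system) matrix
        x_true = rng.random(100)            # placeholder dose distribution
        b = A @ x_true                      # simulated projection data

        x = np.zeros(100)
        for sweep in range(20):             # ART sweeps
            for i in range(A.shape[0]):
                ai = A[i]
                x += (b[i] - ai @ x) / (ai @ ai) * ai   # Kaczmarz row update

        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))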

  15. Accurate early positions for Swift GRBs: enhancing X-ray positions with UVOT astrometry

    CERN Document Server

    Goad, M R; Beardmore, A P; Evans, P A; Rosen, S R; Osborne, J P; Starling, R L C; Marshall, F E; Yershov, V; Burrows, D N; Gehrels, N; Roming, P; Moretti, A; Capalbi, M; Hill, J E; Kennea, J; Koch, S; Berk, D Vanden

    2007-01-01

    Here we describe an autonomous way of producing more accurate prompt XRT positions for Swift-detected GRBs and their afterglows, based on UVOT astrometry and a detailed mapping between the XRT and UVOT detectors. The latter significantly reduces the dominant systematic error -- the star-tracker solution to the World Coordinate System. This technique, which is limited to times when there is significant overlap between UVOT and XRT PC-mode data, provides a factor of 2 improvement in the localisation of XRT refined positions on timescales of less than a few hours. Furthermore, the accuracy achieved is superior to astrometrically corrected XRT PC mode images at early times (for up to 24 hours), for the majority of bursts, and is comparable to the accuracy achieved by astrometrically corrected X-ray positions based on deep XRT PC-mode imaging at later times (abridged).

  16. Toward an Accurate Estimate of the Exfoliation Energy of Black Phosphorus: A Periodic Quantum Chemical Approach.

    Science.gov (United States)

    Sansone, Giuseppe; Maschio, Lorenzo; Usvyat, Denis; Schütz, Martin; Karttunen, Antti

    2016-01-01

    The black phosphorus (black-P) crystal is formed of covalently bound layers of phosphorene stacked together by weak van der Waals interactions. An experimental measurement of the exfoliation energy of black-P is not available presently, making theoretical studies the most important source of information for the optimization of phosphorene production. Here, we provide an accurate estimate of the exfoliation energy of black-P on the basis of multilevel quantum chemical calculations, which include the periodic local Møller-Plesset perturbation theory of second order, augmented by higher-order corrections, which are evaluated with finite clusters mimicking the crystal. Very similar results are also obtained by density functional theory with the D3-version of Grimme's empirical dispersion correction. Our estimate of the exfoliation energy for black-P of -151 meV/atom is substantially larger than that of graphite, suggesting the need for different strategies to generate isolated layers for these two systems. PMID:26651397

  17. ACCURATE KAP METER CALIBRATION AS A PREREQUISITE FOR OPTIMISATION IN PROJECTION RADIOGRAPHY.

    Science.gov (United States)

    Malusek, A; Sandborg, M; Carlsson, G Alm

    2016-06-01

    Modern X-ray units register the air kerma-area product, PKA, with a built-in KAP meter. Some KAP meters show an energy-dependent bias comparable with the maximum uncertainty articulated by the IEC (25 %), adversely affecting dose-optimisation processes. To correct for the bias, a reference KAP meter calibrated at a standards laboratory and two calibration methods described here can be used. In one method, beam quality corrections transfer the calibration from the beam quality used at the standards laboratory, Q0, to any beam quality, Q, in the clinic. Alternatively, beam quality corrections are measured with an energy-independent dosemeter via a reference beam quality in the clinic, Q1, to beam quality, Q. Biases up to 35 % of built-in KAP meter readings were noted. Energy-dependent calibration factors are needed for unbiased PKA.

  18. An Accurate Calculation of the Big-Bang Prediction for the Abundance of Primordial Helium

    CERN Document Server

    Lopez, Robert E.; Turner, Michael S.

    1999-01-01

    Within the standard model of particle physics and cosmology we have calculated the big-bang prediction for the primordial abundance of Helium to a theoretical uncertainty of 0.1% (δY_P = ±0.0002). At this accuracy the uncertainty in the abundance is dominated by the experimental uncertainty in the neutron mean lifetime, τ_n = 885.3 ± 2.0 sec. The following physical effects were included in the calculation: the zero and finite-temperature radiative, Coulomb and finite-nucleon mass corrections to the weak rates; order-α quantum-electrodynamic correction to the plasma density, electron mass, and neutrino temperature; and incomplete neutrino decoupling. New results for the finite-temperature radiative correction and the QED plasma correction were used. In addition, we wrote a new and independent nucleosynthesis code to control numerical errors to less than 0.1%. Our predictions for the ⁴He abundance are summarized with an accurate fitting formula. Summarizing our work...

  19. Accurate gap levels and their role in the reliability of other calculated defect properties

    Energy Technology Data Exchange (ETDEWEB)

    Deak, Peter; Aradi, Balint; Frauenheim, Thomas [Bremen Center for Computational Materials Science, Universitaet Bremen, POB 330440, 28334 Bremen (Germany); Gali, Adam [Department Atomic Physics, Budapest University of Technology and Economics, 1521 Budapest (Hungary)

    2011-04-15

    The functionality of semiconductors and insulators depends mainly on defects which modify the electronic, optical, and magnetic spectra through their gap levels. Accurate calculation of the latter is not only important for the experimental identification of the defect, but influences also the accuracy of other calculated defect properties, and is the most difficult challenge for defect theory. The electron self-interaction error in the standard implementations of ab initio density functional theory causes a severe underestimation of the band gap, leading to a corresponding uncertainty in the defect level positions in it. This is a widely known problem which is usually dealt with by a posteriori corrections. A wide range of correction schemes are used, ranging from ad hoc scaling or shifting, through procedures of limited validity (like the scissor operator or various alignment schemes), to more rigorous quasiparticle corrections based on many-body perturbation theory. We will demonstrate in this paper that consequences of the gap error must be taken into account in the total energy, and simply correcting the band energy with the gap level shifts is of limited applicability. Therefore, the self-consistent determination of the total energy, free of the gap error, is preferred. We will show that semi-empirical screened hybrid functionals can successfully be used for this purpose. (Copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  20. An efficient and accurate method for calculating nonlinear diffraction beam fields

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Hyun Jo; Cho, Sung Jong; Nam, Ki Woong; Lee, Jang Hyun [Division of Mechanical and Automotive Engineering, Wonkwang University, Iksan (Korea, Republic of)

    2016-04-15

    This study develops an efficient and accurate method for calculating nonlinear diffraction beam fields propagating in fluids or solids. The Westervelt equation and quasilinear theory, from which the integral solutions for the fundamental and second harmonics can be obtained, are first considered. A computationally efficient method is then developed using a multi-Gaussian beam (MGB) model that easily separates the diffraction effects from the plane wave solution. The MGB models provide accurate beam fields when compared with the integral solutions for a number of transmitter-receiver geometries. These models can also serve as fast, powerful modeling tools for many nonlinear acoustics applications, especially in making diffraction corrections for the nonlinearity parameter determination, because of their computational efficiency and accuracy.

  1. Technical evaluation of TomoTherapy automatic roll correction.

    Science.gov (United States)

    Laub, Steve; Snyder, Michael; Burmeister, Jay

    2015-01-01

    The TomoTherapy Hi·Art System allows the application of rotational corrections as a part of the pretreatment image guidance process. This study outlines a custom method to perform an end-to-end evaluation of the TomoTherapy Hi·Art roll correction feature. A roll-sensitive plan was designed and delivered to a cylindrical solid water phantom to test the accuracy of roll corrections, as well as the ability of the automatic registration feature to detect induced roll. Cylindrical target structures containing coaxial inner avoidance structures were placed adjacent to the plane bisecting the phantom and 7 cm laterally off central axis. The phantom was positioned at isocenter with the target-plane parallel to the couch surface. Varying degrees of phantom roll were induced and dose to the targets and inner avoidance structures was measured using Kodak EDR2 films placed in the target-plane. Normalized point doses were compared with baseline (no roll) data to determine the sensitivity of the test and the effectiveness of the roll correction feature. Gamma analysis comparing baseline, roll-corrected, and uncorrected films was performed using film analysis software. MVCT images were acquired prior to plan delivery. Measured roll was compared with induced roll to evaluate the automatic registration feature's ability to detect rotational misalignment. Rotations beyond 0.3° result in statistically significant deviation from baseline point measurements. Gamma pass rates begin to drop below 90% at approximately 0.5° induced rotation at 3%/3 mm and between 0.2° and 0.3° for 2%/2 mm. With roll correction applied, point dose measurements for all rotations are indistinguishable from baseline, and gamma pass rates exceed 96% when using 3% and 3 mm as evaluation criteria. Measured roll via the automatic registration algorithm agrees with induced rotation to within the test sensitivity for nearly all imaging settings. The TomoTherapy automatic registration system accurately detects

  2. Arthroscopically assisted Latarjet procedure: A new surgical approach for accurate coracoid graft placement and compression

    Directory of Open Access Journals (Sweden)

    Ettore Taverna

    2013-01-01

    Full Text Available The Latarjet procedure is a confirmed method for the treatment of shoulder instability in the presence of bone loss. It is a challenging procedure for which a key point is the correct placement of the coracoid graft onto the glenoid neck. We here present our technique for an arthroscopically assisted Latarjet procedure with a new drill guide, permitting an accurate and reproducible positioning of the coracoid graft, with optimal compression of the graft onto the glenoid neck due to the perfect position of the screws: perpendicular to the graft and the glenoid neck and parallel between them.

  3. Arthroscopically assisted Latarjet procedure: A new surgical approach for accurate coracoid graft placement and compression.

    Science.gov (United States)

    Taverna, Ettore; Ufenast, Henri; Broffoni, Laura; Garavaglia, Guido

    2013-07-01

    The Latarjet procedure is a confirmed method for the treatment of shoulder instability in the presence of bone loss. It is a challenging procedure for which a key point is the correct placement of the coracoid graft onto the glenoid neck. We here present our technique for an arthroscopically assisted Latarjet procedure with a new drill guide, permitting an accurate and reproducible positioning of the coracoid graft, with optimal compression of the graft onto the glenoid neck due to the perfect position of the screws: perpendicular to the graft and the glenoid neck and parallel between them.

  4. Construction of modified Godunov type schemes accurate at any Mach number for the compressible Euler system

    OpenAIRE

    Dellacherie, Stéphane; Jung, Jonathan; Omnes, Pascal; Raviart, Pierre-Arnaud

    2013-01-01

    Through a linear analysis, we show how to modify Godunov type schemes applied to the compressible Euler system to make them accurate at any Mach number. This allows us to propose all-Mach Godunov type schemes. A linear stability result is proposed and a formal asymptotic analysis justifies the construction in the barotropic case when the Godunov type scheme is a Roe scheme. We also underline that we may have to introduce a cut-off in the all-Mach correction to avoid the creation of non-entropic ...

  5. Accurate on-line mass flow measurements in supercritical fluid chromatography.

    Science.gov (United States)

    Tarafder, Abhijit; Vajda, Péter; Guiochon, Georges

    2013-12-13

    This work demonstrates the possible advantages and the challenges of accurate on-line measurements of the CO2 mass flow rate during supercritical fluid chromatography (SFC) operations. Only the mass flow rate is constant along the column in SFC. The volume flow rate is not. The critical importance of accurate measurements of mass flow rates for the achievement of reproducible data and the serious difficulties encountered in supercritical fluid chromatography for its assessment were discussed earlier based on the physical properties of carbon dioxide. In this report, we experimentally demonstrate the problems encountered when performing mass flow rate measurements and the gain that can possibly be achieved by acquiring reproducible data using a Coriolis flow meter. The results obtained show how the use of a highly accurate mass flow meter permits, besides the determination of accurate values of the mass flow rate, a systematic, constant diagnosis of the correct operation of the instrument and the monitoring of the condition of the carbon dioxide pump. PMID:24210558

  6. Gravitomagnetic corrections on gravitational waves

    CERN Document Server

    Capozziello, S; Forte, L; Garufi, F; Milano, L

    2009-01-01

    Gravitational waveforms and production could be considerably affected by gravitomagnetic corrections considered in the relativistic theory of orbits. Besides the standard periastron effect of General Relativity, new nutation effects appear when c^{-3} corrections are taken into account. Such corrections emerge as soon as matter-current densities and vector gravitational potentials can no longer be neglected in the dynamics. We study the gravitational waves emitted, via the quadrupole approximation, during capture in the gravitational field of massive binary systems (e.g. a very massive black hole on which a stellar object is inspiralling), considering precession and nutation effects. We present a numerical study to obtain the gravitational wave luminosity, the total energy output and the gravitational radiation amplitude. From a crude estimate of the expected number of events towards peculiar targets (e.g. globular clusters) and in particular, the rate of events per year for dense stellar clusters at the Galactic Cen...

  7. Delegation in Correctional Nursing Practice.

    Science.gov (United States)

    Tompkins, Frances

    2016-07-01

    Correctional nurses face daily challenges as a result of their work environment. Common challenges include availability of resources for appropriate care delivery, negotiating with custody staff for access to patients, adherence to scope of practice standards, and working with a varied staffing mix. Professional correctional nurses must consider the educational backgrounds and competency of other nurses and assistive personnel in planning for care delivery. Budgetary constraints and varied staff preparation can be a challenge for the professional nurse. Adequate care planning requires understanding the educational level and competency of licensed and unlicensed staff. Delegation is the process of assessing patient needs and transferring responsibility for care to appropriately educated and competent staff. Correctional nurses can benefit from increased knowledge about delegation. PMID:27302707

  8. Local Correction of Boolean Functions

    CERN Document Server

    Alon, Noga

    2011-01-01

    A Boolean function f over n variables is said to be q-locally correctable if, given a black-box access to a function g which is "close" to an isomorphism f_sigma of f, we can compute f_sigma(x) for any x in Z_2^n with good probability using q queries to g. We observe that any k-junta, that is, any function which depends only on k of its input variables, is O(2^k)-locally correctable. Moreover, we show that there are examples where this is essentially best possible, and locally correcting some k-juntas requires a number of queries which is exponential in k. These examples, however, are far from being typical, and indeed we prove that for almost every k-junta, O(k log k) queries suffice.

  9. Accurate Jones Matrix of the Practical Faraday Rotator

    Institute of Scientific and Technical Information of China (English)

    王林斗; 祝昇翔; 李玉峰; 邢文烈; 魏景芝

    2003-01-01

    The Jones matrix of practical Faraday rotators is often used in the engineering calculation of non-reciprocal optical fields. Nevertheless, only the approximate Jones matrix of practical Faraday rotators has been presented until now. Based on the theory of polarized light, this paper presents the accurate Jones matrix of practical Faraday rotators. In addition, an experiment has been carried out to verify the validity of the accurate Jones matrix. This matrix accurately describes the optical characteristics of practical Faraday rotators, including rotation, loss and depolarization of the polarized light. The accurate Jones matrix can be used to obtain accurate results for the practical Faraday rotator transforming the polarized light, which paves the way for the accurate analysis and calculation of practical Faraday rotators in relevant engineering applications.
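
    For context, the commonly used approximate description of a Faraday rotator is a scalar amplitude loss multiplying a pure rotation by the Faraday angle; such a Jones matrix cannot express depolarization, which is part of what the accurate matrix above is meant to capture. The snippet below is only a sketch of that textbook approximate form, not the accurate matrix derived in the paper.

    # Conventional (approximate) Jones matrix of a Faraday rotator: a scalar
    # amplitude loss times a rotation by the Faraday angle. Depolarization cannot
    # be represented in this form, which motivates the accurate matrix of the paper.
    import numpy as np

    def faraday_rotator_jones(theta_rad, amplitude_loss=1.0):
        c, s = np.cos(theta_rad), np.sin(theta_rad)
        return amplitude_loss * np.array([[c, -s],
                                          [s,  c]])

    # A 45-degree rotator acting on horizontally polarized light.
    e_in = np.array([1.0, 0.0])
    print(faraday_rotator_jones(np.pi / 4) @ e_in)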

  10. When correction turns positive: processing corrective prosody in Dutch.

    Directory of Open Access Journals (Sweden)

    Diana V Dimitrova

    Full Text Available Current research on spoken language does not provide a consistent picture as to whether prosody, the melody and rhythm of speech, conveys a specific meaning. Perception studies show that English listeners assign meaning to prosodic patterns, and, for instance, associate some accents with contrast, whereas Dutch listeners behave more controversially. In two ERP studies we tested how Dutch listeners process words carrying two types of accents, which either provided new information (new information accents) or corrected information (corrective accents), both in single sentences (experiment 1) and after corrective and new information questions (experiment 2). In both experiments corrective accents elicited a sustained positivity as compared to new information accents, which started earlier in context than in single sentences. The positivity was not modulated by the nature of the preceding question, suggesting that the underlying neural mechanism likely reflects the construction of an interpretation to the accented word, either by identifying an alternative in context or by inferring it when no context is present. Our experimental results provide strong evidence for inferential processes related to prosodic contours in Dutch.

  11. When correction turns positive: processing corrective prosody in Dutch.

    Science.gov (United States)

    Dimitrova, Diana V; Stowe, Laurie A; Hoeks, John C J

    2015-01-01

    Current research on spoken language does not provide a consistent picture as to whether prosody, the melody and rhythm of speech, conveys a specific meaning. Perception studies show that English listeners assign meaning to prosodic patterns, and, for instance, associate some accents with contrast, whereas Dutch listeners behave more controversially. In two ERP studies we tested how Dutch listeners process words carrying two types of accents, which either provided new information (new information accents) or corrected information (corrective accents), both in single sentences (experiment 1) and after corrective and new information questions (experiment 2). In both experiments corrective accents elicited a sustained positivity as compared to new information accents, which started earlier in context than in single sentences. The positivity was not modulated by the nature of the preceding question, suggesting that the underlying neural mechanism likely reflects the construction of an interpretation to the accented word, either by identifying an alternative in context or by inferring it when no context is present. Our experimental results provide strong evidence for inferential processes related to prosodic contours in Dutch.

  12. Correction

    Science.gov (United States)

    2016-09-01

    The feature article “Neutrons for new drugs” (August pp26–29) stated that neutron crystallography was used to determine the structures of “wellknown complex biological molecules such as lysine, insulin and trypsin”.

  13. Correction

    CERN Multimedia

    2007-01-01

    From left to right: Luis, Carmen, Mario, Christian and José listening to speeches by theorists Alvaro De Rújula and Luis Alvarez-Gaumé (right) at their farewell gathering on 15 May.We unfortunately cut out a part of the "Word of thanks" from the team retiring from Restaurant No. 1. The complete message is published below: Dear friends, You are the true "nucleus" of CERN. Every member of this extraordinary human mosaic will always remain in our affections and in our thoughts. We have all been very touched by your spontaneous generosity. Arrivederci, Mario Au revoir,Christian Hasta Siempre Carmen, José and Luis PS: Lots of love to the theory team and to the hidden organisers. So long!

  14. Correction.

    Science.gov (United States)

    1991-11-29

    Because of a production error, the photographs of Pierre Chambon and Harald zur Hausen, which appeared on pages 1116 and 1117 of last week's issue (22 November), were transposed. Here's what you should have seen: Chambon is on the left, zur Hausen on the right.

  15. Correction

    Institute of Scientific and Technical Information of China (English)

    2015-01-01

    The paper "A comparative study on the transplantation of different concentrations of human umbilical mesenchymal cells into diabetic rat" (DOI: 10.3980/j.issn.2222-3959.2015.02.08) was published in the No. 2 issue of IJO on 18th April. Jia-Hui Kong, Dan Zheng, Song Chen, Hong-Tao Duan, Yue-Xin Wang, Meng Dong, Jian Song. Clinical College of Ophthalmology, Tianjin Medical University, Tianjin Eye Hospital, Tianjin Institute of Ophthalmology,

  16. Accurate and Timely Forecasting of CME-Driven Geomagnetic Storms

    Science.gov (United States)

    Chen, J.; Kunkel, V.; Skov, T. M.

    2015-12-01

    Wide-spread and severe geomagnetic storms are primarily caused by the ejecta of coronal mass ejections (CMEs) that impose long durations of strong southward interplanetary magnetic field (IMF) on the magnetosphere, the duration and magnitude of the southward IMF (Bs) being the main determinants of geoeffectiveness. Another important quantity to forecast is the arrival time of the expected geoeffective CME ejecta. In order to accurately forecast these quantities in a timely manner (say, 24--48 hours of advance warning time), it is necessary to calculate the evolving CME ejecta---its structure and magnetic field vector in three dimensions---using remote sensing solar data alone. We discuss a method based on the validated erupting flux rope (EFR) model of CME dynamics. It has been shown using STEREO data that the model can calculate the correct size, magnetic field, and the plasma parameters of a CME ejecta detected at 1 AU, using the observed CME position-time data alone as input (Kunkel and Chen 2010). One disparity is in the arrival time, which is attributed to the simplified geometry of the circular toroidal axis of the CME flux rope. Accordingly, the model has been extended to self-consistently include the transverse expansion of the flux rope (Kunkel 2012; Kunkel and Chen 2015). We show that the extended formulation provides a better prediction of arrival time even if the CME apex does not propagate directly toward the earth. We apply the new method to a number of CME events and compare predicted flux ropes at 1 AU to the observed ejecta structures inferred from in situ magnetic and plasma data. The EFR model also predicts the asymptotic ambient solar wind speed (Vsw) for each event, which has not been validated yet. The predicted Vsw values are tested using the ENLIL model. We discuss the minimum and sufficient required input data for an operational forecasting system for predicting the drivers of large geomagnetic storms. Kunkel, V., and Chen, J., ApJ Lett, 715, L80, 2010. Kunkel, V., Ph

  17. Atmospheric Error Correction of the Laser Beam Ranging

    Directory of Open Access Journals (Sweden)

    J. Saydi

    2014-01-01

    Full Text Available Atmospheric models based on surface measurements of pressure, temperature, and relative humidity have been used to increase the laser ranging accuracy by ray tracing. Atmospheric refraction can cause significant errors in laser ranging systems. In the present research, the atmospheric effects on the laser beam were investigated using the principles of laser ranging. The atmospheric correction was calculated for 0.532, 1.3, and 10.6 micron wavelengths under the weather conditions of Tehran, Isfahan, and Bushehr in Iran from March 2012 to March 2013, based on monthly mean meteorological data received from meteorological stations in those cities. The atmospheric correction was calculated for laser beam propagation over 11, 100, and 200 kilometers at rising angles of 30°, 60°, and 90° for each propagation. The results showed that, for the same months and beam emission angles, the atmospheric correction was most accurate for the 10.6 micron wavelength, and that the laser ranging error decreased as the laser emission angle increased. The atmospheric corrections obtained with the Marini-Murray and Mendes-Pavlis models were also compared for the 0.532 micron wavelength.

  18. Quantitative SPECT reconstruction using CT-derived corrections

    Science.gov (United States)

    Willowson, Kathy; Bailey, Dale L.; Baldock, Clive

    2008-06-01

    A method for achieving quantitative single-photon emission computed tomography (SPECT) based upon corrections derived from x-ray computed tomography (CT) data is presented. A CT-derived attenuation map is used to perform transmission-dependent scatter correction (TDSC) in conjunction with non-uniform attenuation correction. The original CT data are also utilized to correct for partial volume effects in small volumes of interest. The accuracy of the quantitative technique has been evaluated with phantom experiments and clinical lung ventilation/perfusion SPECT/CT studies. A comparison of calculated values with the known total activities and concentrations in a mixed-material cylindrical phantom, and in liver and cardiac inserts within an anthropomorphic torso phantom, produced accurate results. The total activity in corrected ventilation-subtracted perfusion images was compared to the calibrated injected dose of [99mTc]-MAA (macro-aggregated albumin). The average difference over 12 studies between the known and calculated activities was found to be -1%, with a range of ±7%.

  19. Quantitative SPECT reconstruction using CT-derived corrections

    Energy Technology Data Exchange (ETDEWEB)

    Willowson, Kathy; Bailey, Dale L; Baldock, Clive [Institute of Medical Physics, School of Physics, University of Sydney, Camperdown, NSW 2006 (Australia)], E-mail: K.Willowson@physics.usyd.edu.au

    2008-06-21

    A method for achieving quantitative single-photon emission computed tomography (SPECT) based upon corrections derived from x-ray computed tomography (CT) data is presented. A CT-derived attenuation map is used to perform transmission-dependent scatter correction (TDSC) in conjunction with non-uniform attenuation correction. The original CT data are also utilized to correct for partial volume effects in small volumes of interest. The accuracy of the quantitative technique has been evaluated with phantom experiments and clinical lung ventilation/perfusion SPECT/CT studies. A comparison of calculated values with the known total activities and concentrations in a mixed-material cylindrical phantom, and in liver and cardiac inserts within an anthropomorphic torso phantom, produced accurate results. The total activity in corrected ventilation-subtracted perfusion images was compared to the calibrated injected dose of [{sup 99m}Tc]-MAA (macro-aggregated albumin). The average difference over 12 studies between the known and calculated activities was found to be -1%, with a range of {+-}7%.

  20. An adaptive optics approach for laser beam correction in turbulence utilizing a modified plenoptic camera

    Science.gov (United States)

    Ko, Jonathan; Wu, Chensheng; Davis, Christopher C.

    2015-09-01

    Adaptive optics has been widely used in the field of astronomy to correct for atmospheric turbulence while viewing images of celestial bodies. The slightly distorted incoming wavefronts are typically sensed with a Shack-Hartmann sensor and then corrected with a deformable mirror. Although this approach has proven to be effective for astronomical purposes, a new approach must be developed when correcting for the deep turbulence experienced in ground-to-ground optical systems. We propose the use of a modified plenoptic camera as a wavefront sensor capable of accurately representing an incoming wavefront that has been significantly distorted by strong turbulence conditions (characterized by a large refractive index structure parameter Cn2). After the large distortions have been corrected, a secondary mode utilizing more traditional adaptive optics algorithms can take over to fine tune the wavefront correction. This two-stage algorithm can find use in free space optical communication systems, in directed energy applications, as well as for image correction purposes.

  1. Technical Note: Bias correcting climate model simulated daily temperature extremes with quantile mapping

    Directory of Open Access Journals (Sweden)

    B. Thrasher

    2012-09-01

    Full Text Available When applying a quantile mapping-based bias correction to daily temperature extremes simulated by a global climate model (GCM), the transformed values of maximum and minimum temperatures are changed, and the diurnal temperature range (DTR) can become physically unrealistic. While causes are not thoroughly explored, there is a strong relationship between GCM biases in snow albedo feedback during snowmelt and bias correction resulting in unrealistic DTR values. We propose a technique to bias correct DTR, based on comparing observations and GCM historic simulations, and combine that with either bias correcting daily maximum temperatures and calculating daily minimum temperatures or vice versa. By basing the bias correction on a base period of 1961–1980 and validating it during a test period of 1981–1999, we show that bias correcting DTR and maximum daily temperature can produce more accurate estimations of daily temperature extremes while avoiding the pathological cases of unrealistic DTR values.

  2. Technical Note: Bias correcting climate model simulated daily temperature extremes with quantile mapping

    Directory of Open Access Journals (Sweden)

    B. L. Thrasher

    2012-04-01

    Full Text Available When applying a quantile-mapping based bias correction to daily temperature extremes simulated by a global climate model (GCM), the transformed values of maximum and minimum temperatures are changed, and the diurnal temperature range (DTR) can become physically unrealistic. While causes are not thoroughly explored, there is a strong relationship between GCM biases in snow albedo feedback during snowmelt and bias correction resulting in unrealistic DTR values. We propose a technique to bias correct DTR, based on comparing observations and GCM historic simulations, and combine that with either bias correcting daily maximum temperatures and calculating daily minimum temperatures or vice versa. By basing the bias correction on a base period of 1961–1980 and validating it during a test period of 1981–1999, we show that bias correcting DTR and maximum daily temperature can produce more accurate estimations of daily temperature extremes while avoiding the pathological cases of unrealistic DTR values.
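
    The procedure described in these two records reduces to applying quantile mapping to daily maximum temperature and to the DTR, then recovering the minimum temperature as Tmax minus the corrected DTR, so that the derived Tmin can never exceed Tmax. The sketch below illustrates that idea with a plain empirical quantile mapping; the array names and the simple mapping scheme are assumptions, not the authors' implementation.

    # Minimal sketch of quantile-mapping bias correction for Tmax and DTR,
    # with Tmin recovered as Tmax - DTR (assumed arrays, not the authors' code).
    import numpy as np

    def quantile_map(model_hist, obs_hist, model_future):
        """Empirical quantile mapping: map each model value to the observed
        value at the same empirical quantile of the historical period."""
        quantiles = np.linspace(0.0, 1.0, 101)
        model_q = np.quantile(model_hist, quantiles)
        obs_q = np.quantile(obs_hist, quantiles)
        return np.interp(model_future, model_q, obs_q)

    def correct_extremes(tmax_hist, tmin_hist, tmax_obs, tmin_obs, tmax_fut, tmin_fut):
        dtr_hist, dtr_obs, dtr_fut = (tmax_hist - tmin_hist,
                                      tmax_obs - tmin_obs,
                                      tmax_fut - tmin_fut)
        tmax_bc = quantile_map(tmax_hist, tmax_obs, tmax_fut)   # bias-correct Tmax
        dtr_bc = quantile_map(dtr_hist, dtr_obs, dtr_fut)       # bias-correct DTR
        tmin_bc = tmax_bc - dtr_bc                               # derive Tmin, keeping DTR physical
        return tmax_bc, tmin_bc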

  3. Biomimetic Approach for Accurate, Real-Time Aerodynamic Coefficients Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Aerodynamic and structural reliability and efficiency depend critically on the ability to accurately assess the aerodynamic loads and moments for each lifting...

  4. Bunch mode specific rate corrections for PILATUS3 detectors

    Energy Technology Data Exchange (ETDEWEB)

    Trueb, P., E-mail: peter.trueb@dectris.com [DECTRIS Ltd, 5400 Baden (Switzerland); Dejoie, C. [ETH Zurich, 8093 Zurich (Switzerland); Kobas, M. [DECTRIS Ltd, 5400 Baden (Switzerland); Pattison, P. [EPF Lausanne, 1015 Lausanne (Switzerland); Peake, D. J. [School of Physics, The University of Melbourne, Victoria 3010 (Australia); Radicci, V. [DECTRIS Ltd, 5400 Baden (Switzerland); Sobott, B. A. [School of Physics, The University of Melbourne, Victoria 3010 (Australia); Walko, D. A. [Argonne National Laboratory, Argonne, IL 60439 (United States); Broennimann, C. [DECTRIS Ltd, 5400 Baden (Switzerland)

    2015-04-09

    The count rate behaviour of PILATUS3 detectors has been characterized for seven bunch modes at four different synchrotrons. The instant retrigger technology of the PILATUS3 application-specific integrated circuit is found to reduce the dependency of the required rate correction on the synchrotron bunch mode. The improvement of using bunch mode specific rate corrections based on a Monte Carlo simulation is quantified. PILATUS X-ray detectors are in operation at many synchrotron beamlines around the world. This article reports on the characterization of the new PILATUS3 detector generation at high count rates. As for all counting detectors, the measured intensities have to be corrected for the dead-time of the counting mechanism at high photon fluxes. The large number of different bunch modes at these synchrotrons as well as the wide range of detector settings presents a challenge for providing accurate corrections. To avoid the intricate measurement of the count rate behaviour for every bunch mode, a Monte Carlo simulation of the counting mechanism has been implemented, which is able to predict the corrections for arbitrary bunch modes and a wide range of detector settings. This article compares the simulated results with experimental data acquired at different synchrotrons. It is found that the usage of bunch mode specific corrections based on this simulation improves the accuracy of the measured intensities by up to 40% for high photon rates and highly structured bunch modes. For less structured bunch modes, the instant retrigger technology of PILATUS3 detectors substantially reduces the dependency of the rate correction on the bunch mode. The acquired data also demonstrate that the instant retrigger technology allows for data acquisition up to 15 million photons per second per pixel.

  5. Bunch mode specific rate corrections for PILATUS3 detectors

    International Nuclear Information System (INIS)

    The count rate behaviour of PILATUS3 detectors has been characterized for seven bunch modes at four different synchrotrons. The instant retrigger technology of the PILATUS3 application-specific integrated circuit is found to reduce the dependency of the required rate correction on the synchrotron bunch mode. The improvement of using bunch mode specific rate corrections based on a Monte Carlo simulation is quantified. PILATUS X-ray detectors are in operation at many synchrotron beamlines around the world. This article reports on the characterization of the new PILATUS3 detector generation at high count rates. As for all counting detectors, the measured intensities have to be corrected for the dead-time of the counting mechanism at high photon fluxes. The large number of different bunch modes at these synchrotrons as well as the wide range of detector settings presents a challenge for providing accurate corrections. To avoid the intricate measurement of the count rate behaviour for every bunch mode, a Monte Carlo simulation of the counting mechanism has been implemented, which is able to predict the corrections for arbitrary bunch modes and a wide range of detector settings. This article compares the simulated results with experimental data acquired at different synchrotrons. It is found that the usage of bunch mode specific corrections based on this simulation improves the accuracy of the measured intensities by up to 40% for high photon rates and highly structured bunch modes. For less structured bunch modes, the instant retrigger technology of PILATUS3 detectors substantially reduces the dependency of the rate correction on the bunch mode. The acquired data also demonstrate that the instant retrigger technology allows for data acquisition up to 15 million photons per second per pixel
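
    The bunch-mode-specific correction described above rests on a photon-by-photon Monte Carlo of the counting mechanism. The fragment below is only a generic sketch of such a simulation for a non-paralyzable counter with a fixed dead-time driven by an arbitrary bunch pattern; the revolution period, the one-count-per-bunch behaviour, and all other parameters are illustrative assumptions, not the DECTRIS model of the instant-retrigger circuit.

    # Generic Monte Carlo of dead-time losses for a counting pixel driven by a
    # synchrotron bunch pattern (illustrative only; parameters are assumptions).
    import numpy as np

    def simulate_counts(true_rate, bunch_times, dead_time, n_turns, rng=None):
        """Return the mean registered count rate for a non-paralyzable counter.

        true_rate   -- incident photons per second on the pixel
        bunch_times -- arrival times (s) of the bunches within one storage-ring turn
        dead_time   -- counter dead-time in seconds
        n_turns     -- number of ring revolutions to simulate
        """
        rng = rng or np.random.default_rng(0)
        turn_period = 1e-6                      # assumed revolution period (1 us)
        mean_per_bunch = true_rate * turn_period / len(bunch_times)
        registered, last_count = 0, -np.inf
        for turn in range(n_turns):
            for t in bunch_times:
                now = turn * turn_period + t
                n_photons = rng.poisson(mean_per_bunch)
                if n_photons > 0 and now - last_count >= dead_time:
                    registered += 1             # at most one registered hit per bunch
                    last_count = now
        return registered / (n_turns * turn_period)

    # A correction table can then be built by tabulating simulated vs. true rates
    # for the bunch mode of interest and inverting it when processing images.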

  6. The correct "ball bearings" data.

    Science.gov (United States)

    Caroni, C

    2002-12-01

    The famous data on fatigue failure times of ball bearings have been quoted incorrectly from Lieblein and Zelen's original paper. The correct data include censored values, as well as non-fatigue failures that must be handled appropriately. They could be described by a mixture of Weibull distributions, corresponding to different modes of failure.

  7. Quantum Convolutional Error Correction Codes

    OpenAIRE

    Chau, H. F.

    1998-01-01

    I report two general methods to construct quantum convolutional codes for quantum registers with internal $N$ states. Using one of these methods, I construct a quantum convolutional code of rate 1/4 which is able to correct one general quantum error for every eight consecutive quantum registers.

  8. CORRECTIVE ACTION IN CAR MANUFACTURING

    Directory of Open Access Journals (Sweden)

    H. Rohne

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: In this paper the important issues involved in successfully implementing corrective action systems in quality management are discussed. The work is based on experience in implementing and operating such a system in an automotive manufacturing enterprise in South Africa. The core of a corrective action system is good documentation, supported by a computerised information system. Secondly, a systematic problem solving methodology is essential to resolve the quality related problems identified by the system. In the following paragraphs the general corrective action process is discussed and the elements of a corrective action system are identified, followed by a more detailed discussion of each element. Finally specific results from the application are discussed.

    AFRIKAANSE OPSOMMING: Important considerations in the successful implementation of corrective action systems in quality management are discussed in this article. The work is based on experience in implementing and operating such a system at a motor manufacturer in South Africa. The core of a corrective action system is good documentation, supported by a computerised information system. Secondly, a systematic problem-solving methodology is needed to address the quality-related problems that the system identifies. In the following paragraphs the general corrective action process is discussed and the elements of the corrective action system are identified. Each element is then discussed in more detail. Finally, specific results of the application are briefly treated.

  9. Multilingual text induced spelling correction

    NARCIS (Netherlands)

    Reynaert, M.W.C.

    2004-01-01

    We present TISC, a multilingual, language-independent and context-sensitive spelling checking and correction system designed to facilitate the automatic removal of non-word spelling errors in large corpora. Its lexicon is derived from raw text corpora, without supervision, and contains word unigrams

  10. 78 FR 34604 - Submitting Complete and Accurate Information

    Science.gov (United States)

    2013-06-10

    ... COMMISSION 10 CFR Part 50 Submitting Complete and Accurate Information AGENCY: Nuclear Regulatory Commission... accurate information as would a licensee or an applicant for a license.'' DATES: Submit comments by August... may submit comments by any of the following methods (unless this document describes a different...

  11. Weather radar equation correction for frequency agile and phased array radars

    OpenAIRE

    Knorr, Jeffrey B.

    2007-01-01

    This paper presents the derivation of a correction to the Probert-Jones weather radar equation for use with advanced frequency agile, phased array radars. It is shown that two additional terms are required to account for frequency hopping and electronic beam pointing. The corrected weather radar equation provides a basis for accurate and efficient computation of a reflectivity estimate from the weather signal data samples. Lastly, an understanding of calibration requirements for these advance...

  12. IDENTIFICATION AND CORRECTION OF COORDINATE MEASURING MACHINE GEOMETRICAL ERRORS USING LASERTRACER SYSTEMS

    Directory of Open Access Journals (Sweden)

    Adam Gąska

    2013-12-01

    Full Text Available LaserTracer (LT) systems are the most sophisticated and accurate laser tracking devices. They are mainly used for the correction of geometrical errors of machine tools and coordinate measuring machines. This process is about four times faster than standard methods based on the use of laser interferometers. The methodology of using the LaserTracer for the correction of geometrical errors, including a presentation of the system, the multilateration method, and the software that was used, is described in detail in this paper.
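
    Multilateration, mentioned above, recovers the position of a point from distance measurements taken from several tracker stations. The sketch below shows only that geometric core as a least-squares problem with known station positions and absolute distances; a real LaserTracer calibration additionally solves for the station positions and interferometer dead paths, so this is an illustration of the principle rather than the software referred to in the record.

    # Minimal multilateration sketch: recover a 3-D point from absolute distances
    # to known tracker positions (illustrative; real LaserTracer calibration also
    # solves for the tracker positions and interferometer dead paths).
    import numpy as np
    from scipy.optimize import least_squares

    def locate_point(trackers, distances, x0=None):
        trackers = np.asarray(trackers, dtype=float)      # shape (m, 3), m >= 4
        distances = np.asarray(distances, dtype=float)    # shape (m,)

        def residuals(p):
            return np.linalg.norm(trackers - p, axis=1) - distances

        x0 = np.zeros(3) if x0 is None else x0
        return least_squares(residuals, x0).x

    # Example with four assumed tracker positions and a synthetic target point.
    trackers = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
    target = np.array([0.3, 0.4, 0.2])
    dists = [np.linalg.norm(target - np.array(t)) for t in trackers]
    print(locate_point(trackers, dists))   # ~ [0.3, 0.4, 0.2]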

  13. Highly accurate potential energy surface for the He-H2 dimer.

    Science.gov (United States)

    Bakr, Brandon W; Smith, Daniel G A; Patkowski, Konrad

    2013-10-14

    A new highly accurate interaction potential is constructed for the He-H2 van der Waals complex. This potential is fitted to 1900 ab initio energies computed at the very large-basis coupled-cluster level and augmented by corrections for higher-order excitations (up to full configuration interaction level) and the diagonal Born-Oppenheimer correction. At the vibrationally averaged H-H bond length of 1.448736 bohrs, the well depth of our potential, 15.870 ± 0.065 K, is nearly 1 K larger than the most accurate previous studies have indicated. In addition to constructing our own three-dimensional potential in the van der Waals region, we present a reparameterization of the Boothroyd-Martin-Peterson potential surface [A. I. Boothroyd, P. G. Martin, and M. R. Peterson, J. Chem. Phys. 119, 3187 (2003)] that is suitable for all configurations of the triatomic system. Finally, we use the newly developed potentials to compute the properties of the lone bound states of (4)He-H2 and (3)He-H2 and the interaction second virial coefficient of the hydrogen-helium mixture. PMID:24116617

  14. Speed-of-sound compensated photoacoustic tomography for accurate imaging

    CERN Document Server

    Jose, Jithin; Steenbergen, Wiendelt; Slump, Cornelis H; van Leeuwen, Ton G; Manohar, Srirang

    2012-01-01

    In most photoacoustic (PA) measurements, variations in speed-of-sound (SOS) of the subject are neglected under the assumption of acoustic homogeneity. Biological tissue with spatially heterogeneous SOS cannot be accurately reconstructed under this assumption. We present experimental and image reconstruction methods with which 2-D SOS distributions can be accurately acquired and reconstructed, and with which the SOS map can be used subsequently to reconstruct highly accurate PA tomograms. We begin with a 2-D iterative reconstruction approach in an ultrasound transmission tomography (UTT) setting, which uses ray refracted paths instead of straight ray paths to recover accurate SOS images of the subject. Subsequently, we use the SOS distribution in a new 2-D iterative approach, where refraction of rays originating from PA sources are accounted for in accurately retrieving the distribution of these sources. Both the SOS reconstruction and SOS-compensated PA reconstruction methods utilize the Eikonal equation to m...

  15. Short- and long-range corrected hybrid density functionals with the D3 dispersion corrections

    CERN Document Server

    Wang, Chih-Wei; Chai, Jeng-Da

    2016-01-01

    We propose a short- and long-range corrected (SLC) hybrid scheme employing 100% Hartree-Fock (HF) exchange at both zero and infinite interelectronic distances, wherein three SLC hybrid density functionals with the D3 dispersion corrections (SLC-LDA-D3, SLC-PBE-D3, and SLC-B97-D3) are developed. SLC-PBE-D3 and SLC-B97-D3 are shown to be accurate for a very diverse range of applications, such as core ionization and excitation energies, thermochemistry, kinetics, noncovalent interactions, dissociation of symmetric radical cations, vertical ionization potentials, vertical electron affinities, fundamental gaps, and valence, Rydberg, and long-range charge-transfer excitation energies. Relative to ωB97X-D, SLC-B97-D3 provides significant improvement for core ionization and excitation energies and noticeable improvement for the self-interaction, asymptote, energy-gap, and charge-transfer problems, while performing similarly for thermochemistry, kinetics, and noncovalent interactions.

  16. Scheme construction with numerical flux residual correction (NFRC) and group velocity control (GVC)

    Institute of Scientific and Technical Information of China (English)

    MA Yanwen; FU Dexun

    2006-01-01

    For simulating multi-scale complex flow fields like turbulent flows, high-order accurate schemes are preferred. In this paper, a scheme construction with numerical flux residual correction (NFRC) is presented. A difference approximation of any order of accuracy can be obtained with the NFRC. To improve the resolution of shocks, the constructed schemes are modified with group velocity control (GVC) and weighted group velocity control (WGVC). The method of scheme construction is simple, and it is applied to practical problems.

  17. Frequency-domain correction of sensor dynamic error for step response

    Science.gov (United States)

    Yang, Shuang-Long; Xu, Ke-Jun

    2012-11-01

    To obtain accurate results in dynamic measurements, the sensors must have good dynamic performance. In practice, sensors have non-ideal dynamic characteristics due to their small damping ratios and low natural frequencies. In this case, dynamic error correction methods can be applied to the sensor responses to eliminate the effect of their dynamic characteristics. Frequency-domain correction of sensor dynamic error is a common method. With the existing calculation method, however, the correct frequency-domain correction function (FCF) cannot be obtained from the step-response calibration data. This is because of the leakage error and invalid FCF values caused by the cyclic extension of the finite-length intercepted step input-output data. To solve these problems, data-splicing preprocessing and FCF interpolation are put forward, and the FCF calculation steps as well as the sensor dynamic error correction procedure using the calculated FCF are presented in this paper. The proposed solution is applied to the dynamic error correction of a bar-shaped wind-tunnel strain gauge balance to verify its effectiveness. The results show that the settling time of the balance step response is shortened to 10 ms (less than 1/30 of the value before correction) after frequency-domain correction, and the overshoot falls within 5% (less than 1/10 of the value before correction). The dynamic measurement accuracy of the balance is improved significantly.
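
    The essence of the frequency-domain method is to form a frequency correction function from a step calibration (the ratio of the spectra of the ideal and the measured responses) and multiply the spectrum of each subsequent measurement by it. The sketch below shows only that basic operation and omits the data-splicing preprocessing and FCF interpolation that the paper introduces to avoid leakage and invalid FCF values; the signal names and the ideal-response input are assumptions.

    # Bare-bones frequency-domain dynamic error correction: build the FCF from a
    # step calibration pair and apply it to a measured signal. The data-splicing
    # and interpolation steps of the paper are omitted; inputs are assumed arrays.
    import numpy as np

    def correct_signal(signal, measured_step, ideal_step):
        """Correct `signal` for sensor dynamics using a step calibration pair.

        measured_step and ideal_step must have the same length, and signal is
        assumed to be no longer than the calibration records.
        """
        n = len(measured_step)
        eps = 1e-12                                   # guard against division by ~0
        fcf = np.fft.rfft(ideal_step) / (np.fft.rfft(measured_step) + eps)
        spectrum = np.fft.rfft(signal, n=n)
        return np.fft.irfft(spectrum * fcf, n=n)[: len(signal)]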

  18. Accurate Impedance Calculation for Underground and Submarine Power Cables using MoM-SO and a Multilayer Ground Model

    OpenAIRE

    Patel, Utkarsh R.; Triverio, Piero

    2015-01-01

    An accurate knowledge of the per-unit length impedance of power cables is necessary to correctly predict electromagnetic transients in power systems. In particular, skin, proximity, and ground return effects must be properly estimated. In many applications, the medium that surrounds the cable is not uniform and can consist of multiple layers of different conductivity, such as dry and wet soil, water, or air. We introduce a multilayer ground model for the recently-proposed MoM-SO method, suita...

  19. Energy dependence corrections to MOSFET dosimetric sensitivity.

    Science.gov (United States)

    Cheung, T; Butson, M J; Yu, P K N

    2009-03-01

    Metal oxide semiconductor field effect transistors (MOSFETs) are dosimeters that are now frequently utilized in radiotherapy treatment applications. An improved MOSFET clinical semiconductor dosimetry system (CSDS), which utilizes improved packaging for the MOSFET device, has been studied for the energy dependence of its sensitivity to x-ray radiation. The energy dependence from 50 kVp to 10 MV x-rays was studied and found to vary by up to a factor of 3.2, with 75 kVp producing the highest sensitivity response. The detector's average life span in high-sensitivity mode is energy related and ranges from approximately 100 Gy for 75 kVp x-rays to approximately 300 Gy at 6 MV x-ray energy. The MOSFET detector has also been studied for sensitivity variations with integrated dose history. It was found to become less sensitive to radiation with age, and the magnitude of this effect is dependent on radiation energy, with lower energies producing a larger sensitivity reduction with integrated dose. The reduction in sensitivity is, however, approximated reproducibly by a slightly non-linear, second-order polynomial function, allowing corrections to be made to readings to account for this effect and provide more accurate dose assessments both in phantom and in vivo. PMID:19400548
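
    Because the sensitivity loss with accumulated dose is reproducible and close to a second-order polynomial, a later reading can be rescaled to its fresh-detector equivalent once the polynomial has been fitted for the beam energy in use. The snippet below sketches that correction; the coefficient values are hypothetical placeholders, not data from this study.

    # Sketch of an integrated-dose sensitivity correction for a MOSFET reading.
    # The quadratic coefficients are hypothetical placeholders fitted per beam energy.
    def relative_sensitivity(accumulated_dose_gy, a0=1.0, a1=-8e-4, a2=5e-7):
        """Second-order polynomial model of sensitivity vs. accumulated dose."""
        return a0 + a1 * accumulated_dose_gy + a2 * accumulated_dose_gy ** 2

    def corrected_dose(reading_mv, calibration_mv_per_gy, accumulated_dose_gy):
        """Convert a raw MOSFET voltage shift to dose, compensating for ageing."""
        return reading_mv / (calibration_mv_per_gy * relative_sensitivity(accumulated_dose_gy))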

  20. Proximity effect correction sensitivity analysis

    Science.gov (United States)

    Zepka, Alex; Zimmermann, Rainer; Hoppe, Wolfgang; Schulz, Martin

    2010-05-01

    Determining the quality of a proximity effect correction (PEC) is often done via 1-dimensional measurements such as: CD deviations from target, corner rounding, or line-end shortening. An alternative approach would compare the entire perimeter of the exposed shape and its original design. Unfortunately, this is not a viable solution as there is a practical limit to the number of metrology measurements that can be done in a reasonable amount of time. In this paper we make use of simulated results and introduce a method which may be considered complementary to the standard way of PEC qualification. It compares simulated contours with the target layout via a Boolean XOR operation with the area of the XOR differences providing a direct measure of how close a corrected layout approximates the target.
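
    The comparison described above is simply a Boolean XOR between the simulated contour and the target polygon, with the area of the symmetric difference serving as the quality score. A minimal sketch using the Shapely geometry library follows; the rectangle and the rounded contour are made-up stand-ins for a design target and a simulated resist contour.

    # Area of the XOR (symmetric difference) between a target layout polygon and a
    # simulated contour, as a scalar PEC quality score (toy geometry, Shapely assumed).
    from shapely.geometry import Polygon

    def pec_xor_score(target, contour):
        """Smaller is better: area where contour and target disagree."""
        return target.symmetric_difference(contour).area

    # Toy example: a 100 x 100 nm target versus a contour with rounded, pulled-back corners.
    target = Polygon([(0, 0), (100, 0), (100, 100), (0, 100)])
    contour = Polygon([(2, 2), (98, 2), (98, 98), (2, 98)]).buffer(2, join_style=1)
    print(pec_xor_score(target, contour))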

  1. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. These errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  2. Personalized recommendation with corrected similarity

    International Nuclear Information System (INIS)

    Personalized recommendation has attracted a surge of interdisciplinary research. In particular, similarity-based methods have achieved great success in real recommendation systems. However, the computed similarities are overestimated or underestimated, in particular because of the defective strategy of unidirectional similarity estimation. In this paper, we address this drawback by leveraging the mutual correction of forward and backward similarity estimations, and propose a new personalized recommendation index, i.e., corrected similarity based inference (CSI). Through extensive experiments on four benchmark datasets, the results show a clear improvement of CSI in comparison with mainstream baselines. A detailed analysis is presented to unveil and understand the origin of the difference between CSI and mainstream indices. (paper)

  3. An evaluation of effective radiuses of bulk-wave ultrasonic transducers as circular piston sources for accurate velocity measurements.

    Science.gov (United States)

    Arakawa, Mototaka; Kushibiki, Jun-ichi; Aoki, Naoya

    2004-05-01

    The effective radius of a bulk-wave ultrasonic transducer as a circular piston source, fabricated on one end of a synthetic silica (SiO2) glass buffer rod, was evaluated for accurate velocity measurements of dispersive specimens over a wide frequency range. The effective radius was determined by comparing measured and calculated phase variations due to diffraction in an ultrasonic transmission line of the SiO2 buffer rod/water-couplant/SiO2 standard specimen, using radio-frequency (RF) tone burst ultrasonic waves. Fourteen devices with different device parameters were evaluated. The velocities of the nondispersive standard specimen (C-7940) were found to be 5934.10 +/- 0.35 m/s at 70 to 290 MHz, after diffraction correction using the nominal radius (0.75 mm) for an ultrasonic device with an operating center frequency of about 400 MHz. Corrected velocities were more accurately found to be 5934.15 +/- 0.03 m/s by using the effective radius (0.780 mm) for the diffraction correction. Bulk-wave ultrasonic devices calibrated by this experimental procedure enable conducting extremely accurate velocity dispersion measurements. PMID:15217227

  4. HMM-FRAME: accurate protein domain classification for metagenomic sequences containing frameshift errors

    Directory of Open Access Journals (Sweden)

    Sun Yanni

    2011-05-01

    Full Text Available Abstract Background Protein domain classification is an important step in metagenomic annotation. The state-of-the-art method for protein domain classification is profile HMM-based alignment. However, the relatively high rates of insertions and deletions in homopolymer regions of pyrosequencing reads create frameshifts, causing conventional profile HMM alignment tools to generate alignments with marginal scores. This makes error-containing gene fragments unclassifiable with conventional tools. Thus, there is a need for an accurate domain classification tool that can detect and correct sequencing errors. Results We introduce HMM-FRAME, a protein domain classification tool based on an augmented Viterbi algorithm that can incorporate error models from different sequencing platforms. HMM-FRAME corrects sequencing errors and classifies putative gene fragments into domain families. It achieved high error detection sensitivity and specificity in a data set with annotated errors. We applied HMM-FRAME in Targeted Metagenomics and a published metagenomic data set. The results showed that our tool can correct frameshifts in error-containing sequences, generate much longer alignments with significantly smaller E-values, and classify more sequences into their native families. Conclusions HMM-FRAME provides a complementary protein domain classification tool to conventional profile HMM-based methods for data sets containing frameshifts. Its current implementation is best used for small-scale metagenomic data sets. The source code of HMM-FRAME can be downloaded at http://www.cse.msu.edu/~zhangy72/hmmframe/ and at https://sourceforge.net/projects/hmm-frame/.

  5. Unitarity Corrections and Structure Functions

    CERN Document Server

    Gay-Ducati, M B

    2002-01-01

    We have studied the color dipole picture for the description of the deep inelastic process, mainly the structure functions which are driven directly by the gluon distribution. Estimates for those functions are obtained using the effective dipole cross section given by the Glauber-Mueller approach in QCD, encoding the corrections due to the unitarity effects associated with the saturation phenomenon. Frame invariance is verified in the calculations of the observables when analysing the experimental data.

  6. Interaction and self-correction

    DEFF Research Database (Denmark)

    Satne, Glenda Lucila

    2014-01-01

    In this paper, I address the question of how to account for the normative dimension involved in conceptual competence in a naturalistic framework. First, I present what I call the naturalist challenge (NC), referring to both the phylogenetic and ontogenetic dimensions of conceptual possession and......-correction that I develop with the help of the interactionist theory of mutual understanding arising from recent developments in phenomenology and developmental psychology. © 2014 Satne....

  7. EPS Young Physicist Prize - CORRECTION

    CERN Document Server

    2009-01-01

    The original text for the article 'Prizes aplenty in Krakow' in Bulletin 30-31 assigned the award of the EPS HEPP Young Physicist Prize to Maurizio Pierini. In fact he shared the prize with Niki Saoulidou of Fermilab, who was rewarded for her contribution to neutrino physics, as the article now correctly indicates. We apologise for not having named Niki Saoulidou in the original article.

  8. Holographic superconductors with Weyl corrections

    Science.gov (United States)

    Momeni, Davood; Raza, Muhammad; Myrzakulov, Ratbay

    2016-10-01

    A quick review of the analytical aspects of holographic superconductors (HSCs) with Weyl corrections is presented. Mainly, we focus on the matching method and variational approaches. Different types of such HSCs have been investigated: s-wave, p-wave and Stückelberg ones. We also review the fundamental construction of a p-wave type, in which the non-Abelian gauge field is coupled to the Weyl tensor. The results are compared from numerics to analytical results.

  9. The fallacies of QT correction

    OpenAIRE

    Lokhandwala, Yash; Toal, SC

    2003-01-01

    “Not to correct QT, but how to, that is the question.” The QT interval is a reflection of the action potential in the cardiac cells. Homogeneous or heterogeneous changes in the action potential duration lead to alteration of the QT interval (in addition to morphological changes of T & U waves) 1. Such changes can be due to changes in heart rate and autonomic tone. They can also be markers of abnormal repolarization, depolarization or both as a result of electrolyte disturbances, cardiac diseases, drug...
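
    For readers unfamiliar with the rate corrections the article questions, the most common formulas divide the measured QT interval by a power of the preceding RR interval. The short sketch below shows the Bazett and Fridericia corrections only to make the discussion concrete; it is not an endorsement of either formula.

    # Common heart-rate corrections of the QT interval (intervals in seconds).
    def qtc_bazett(qt_s, rr_s):
        """Bazett: QTc = QT / RR**(1/2)."""
        return qt_s / rr_s ** 0.5

    def qtc_fridericia(qt_s, rr_s):
        """Fridericia: QTc = QT / RR**(1/3)."""
        return qt_s / rr_s ** (1.0 / 3.0)

    # Example: QT = 0.36 s at a heart rate of 75 bpm (RR = 0.8 s).
    print(qtc_bazett(0.36, 0.8), qtc_fridericia(0.36, 0.8))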

  10. Lightweight Specifications for Parallel Correctness

    OpenAIRE

    Burnim, Jacob Samuels

    2012-01-01

    With the spread of multicore processors, it is increasingly necessary for programmers to write parallel software. Yet writing correct parallel software with explicit multithreading remains a difficult undertaking. Though many tools exist to help test, debug, and verify parallel programs, such tools are often hindered by a lack of any specification from the programmer of the intended, correct parallel behavior of his or her software. In this dissertation, we propose three novel lightweight specificati...

  11. Performance of TPC crosstalk correction

    CERN Document Server

    Dydak, F; Krasnoperov, A; Nefedov, Y; Wotschack, J; Zhemchugov, A

    2004-01-01

    The performance of the CERN-Dubna-Milano (CDM) algorithm for TPC crosstalk correction is presented. The algorithm is designed to correct for uni-directional and bi-directional crosstalk, but not for self-crosstalk. It reduces the number of clusters, and the number of pads with a signal above threshold, at the 10% level. Despite dramatic effects in selected channels with complicated crosstalk patterns, the average longitudinal signal shape of a hit, and the average transverse signal shape of a cluster, are little affected by uni-directional and bi-directional crosstalk. The longitudinal signal shape of hits is understood in terms of preamplifier response, longitudinal diffusion, track inclination, and self-crosstalk. The transverse signal shape of clusters is understood in terms of the TPC's pad response function. The CDM crosstalk correction leads to an average charge decrease at the level of 15%, though with significant differences between TPC sectors. On the whole, crosstalk constitutes a relatively benig...

  12. A powerful test of independent assortment that determines genome-wide significance quickly and accurately.

    Science.gov (United States)

    Stewart, W C L; Hager, V R

    2016-08-01

    In the analysis of DNA sequences on related individuals, most methods strive to incorporate as much information as possible, with little or no attention paid to the issue of statistical significance. For example, a modern workstation can easily handle the computations needed to perform a large-scale genome-wide inheritance-by-descent (IBD) scan, but accurate assessment of the significance of that scan is often hindered by inaccurate approximations and computationally intensive simulation. To address these issues, we developed gLOD, a test of co-segregation that, for large samples, models chromosome-specific IBD statistics as a collection of stationary Gaussian processes. With this simple model, the parametric bootstrap yields an accurate and rapid assessment of significance: the genome-wide corrected P-value. Furthermore, we show that (i) under the null hypothesis, the limiting distribution of the gLOD is the standard Gumbel distribution; (ii) our parametric bootstrap simulator is approximately 40 000 times faster than gene-dropping methods, and it is more powerful than methods that approximate the adjusted P-value; and (iii) the gLOD has the same statistical power as the widely used maximum Kong and Cox LOD. Thus, our approach gives researchers the ability to determine quickly and accurately the significance of most large-scale IBD scans, which may contain multiple traits, thousands of families and tens of thousands of DNA sequences.
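
    The significance assessment summarized above amounts to simulating, under the null, the genome-wide maximum of the chromosome-specific Gaussian processes and reading off the tail probability of the observed statistic. The toy sketch below uses a stationary AR(1) Gaussian process as a stand-in for each fitted chromosome model; the correlation, the chromosome lengths, and the bootstrap size are assumptions, not the gLOD implementation.

    # Toy parametric bootstrap of a genome-wide corrected P-value: simulate the
    # maximum of stationary Gaussian processes (one per chromosome) under the null.
    # The AR(1) correlation and chromosome lengths are illustrative assumptions.
    import numpy as np

    def simulate_genome_max(chrom_lengths, rho, rng):
        peak = -np.inf
        for n in chrom_lengths:
            z = np.empty(n)
            z[0] = rng.standard_normal()
            for i in range(1, n):               # stationary AR(1) Gaussian process
                z[i] = rho * z[i - 1] + np.sqrt(1.0 - rho ** 2) * rng.standard_normal()
            peak = max(peak, z.max())
        return peak

    def genome_wide_p(observed_stat, chrom_lengths, rho=0.95, n_boot=1000, seed=1):
        rng = np.random.default_rng(seed)
        maxima = [simulate_genome_max(chrom_lengths, rho, rng) for _ in range(n_boot)]
        return float(np.mean(np.asarray(maxima) >= observed_stat))

    # Example: 22 chromosomes of 200 grid positions each, observed peak of 3.8.
    print(genome_wide_p(3.8, [200] * 22))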

  13. Correcting electrode impedance effects in broadband SIP measurements

    Science.gov (United States)

    Huisman, Johan Alexander; Zimmermann, Egon; Esser, Odilia; Haegel, Franz-Hubert; Vereecken, Harry

    2016-04-01

    Broadband spectral induced polarization (SIP) measurements of the complex electrical resistivity can be affected by the contact impedance of the potential electrodes above 100 Hz. In this study, we present a correction procedure to remove electrode impedance effects from SIP measurements. The first step in this correction procedure is to estimate the electrode impedance using a measurement with reversed current and potential electrodes. In a second step, this estimated electrode impedance is used to correct SIP measurements based on a simplified electrical model of the SIP measurement system. We evaluated this new correction procedure using SIP measurements on water because of its well-defined dielectric properties. It was found that the difference between the corrected and expected phase of the complex electrical resistivity of water was below 0.1 mrad at 1 kHz for a wide range of electrode impedances. In addition, SIP measurements on a saturated unconsolidated sediment sample with two types of potential electrodes showed that the measured phase of the electrical resistivity was very similar for both electrode types. Finally, SIP measurements on variably saturated unconsolidated sand were made. Here, the plausibility of the phase of the electrical resistivity was improved for frequencies up to 1 kHz, but errors remained for higher frequencies due to the approximate nature of the electrode impedance estimates and some remaining unknown parasitic capacitances that led to current leakage. It was concluded that the proposed correction procedure for SIP measurements improved the accuracy of the phase measurements by an order of magnitude in the kHz frequency range. Further improvement of this accuracy requires a method to accurately estimate parasitic capacitances in situ.

  14. Drift correction of the dissolved signal in single particle ICPMS.

    Science.gov (United States)

    Cornelis, Geert; Rauch, Sebastien

    2016-07-01

    A method is presented in which drift, the slow random fluctuation of the signal intensity, is compensated for based on an estimate of the drift function obtained with a moving average. Using single particle ICPMS (spICPMS) measurements of 10 and 60 nm Au NPs, it was shown that drift reduces the accuracy of spICPMS analysis at the calibration stage and during calculation of the particle size distribution (PSD), but that the present method can restore both the average signal intensity and the signal distribution of particle-containing samples skewed by drift. Moreover, deconvolution, a method that models the signal distributions of dissolved signals, fails in some cases when standards and samples are affected by drift, but the present method was shown to restore accuracy in those cases as well. Relatively high particle signals have to be removed prior to drift correction in this procedure, which was done using a 3 × sigma method; the particle signals are treated separately and added back afterwards. The method can also correct for flicker noise, which increases when the signal intensity is increased because of drift. Accuracy was improved in many cases when flicker correction was used, and when accurate results were obtained despite drift, the correction procedures did not reduce accuracy. The procedure may be useful to extract results from experimental runs that would otherwise have to be run again. Graphical Abstract: A spICP-MS signal affected by drift (left) is corrected (right) by adjusting the local (moving) averages (green) and standard deviations (purple) to the respective values at a reference time (red). In combination with removing particle events (blue) in the case of calibration standards, this method is shown to yield particle size distributions where that would otherwise be impossible, even when the deconvolution method is used to discriminate dissolved and particle signals.
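
    The drift compensation outlined above can be pictured as three steps: flag particle events with an iterative 3 × sigma cut, estimate the slowly varying baseline with a moving average over the remaining dissolved signal, and rescale each point so the local mean matches a reference level. The sketch below covers only that mean correction on an assumed raw signal; the window length and reference choice are assumptions, and the flicker-noise (local standard deviation) rescaling is omitted.

    # Sketch of moving-average drift correction of a dissolved spICP-MS baseline.
    # Particle events are excluded with an iterative 3-sigma cut before the moving
    # average is estimated (window length and reference segment are assumptions).
    import numpy as np

    def particle_mask(signal, n_iter=5):
        """Boolean mask of dwell times flagged as particle events (3-sigma rule)."""
        mask = np.zeros(signal.size, dtype=bool)
        for _ in range(n_iter):
            background = signal[~mask]
            mask = signal > background.mean() + 3 * background.std()
        return mask

    def correct_drift(signal, window=501):
        is_particle = particle_mask(signal)
        baseline = signal.astype(float)
        baseline[is_particle] = np.nan                      # ignore particles in the average
        pad = window // 2
        padded = np.pad(baseline, pad, mode="edge")
        local_mean = np.array([np.nanmean(padded[i:i + window]) for i in range(signal.size)])
        reference = np.nanmean(baseline[:window])           # reference level: first window
        return signal * (reference / local_mean)            # rescale local mean to reference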

  15. Correction parameters in conventional dental radiography for dental implant

    Directory of Open Access Journals (Sweden)

    Barunawaty Yunus

    2009-12-01

    Full Text Available Background: Radiographic imaging as a supportive diagnostic tool is an essential component of treatment planning for dental implants. It helps the dentist assess the target area of the implant, as recommended by many previous developments in radiographic imaging. Along with the progress of science and technology and the increasing demand for easier and simpler treatment methods, modern radiographic diagnostics for dental implants are needed. However, Makassar, and in particular the Faculty of Dentistry of Hasanuddin University, has only conventional dental radiography. The aim was therefore to optimize the available equipment to obtain corrected parameters of the jaw for accurate dental implant placement. Purpose: This study aimed to examine the difference in the radiographic imaging of the dental implant size to be placed in a patient before and after correction. Method: The study was analytical observational with a cross-sectional design, using non-random sampling. The sample comprised 30 people, male and female, aged 20–50 years. The correction value was evaluated from the measured width, height, and thickness of the jaw, corrected with a metal ball reference using conventional dental radiography to assess accuracy. Data were analyzed with the SPSS 14 for Windows program using T-tests. Result: T-test analysis gave significant values (p<0.05) for the width and height in the panoramic radiography technique, the width and height in the periapical radiography technique, and the thickness in the occlusal radiography technique before and after correction. Conclusion: There is a significant difference between the results of panoramic, periapical, and occlusal radiography before and after correction.

  16. INVERSE CORRECTION OF FOURIER TRANSFORMS FOR ONE-DIMENSIONAL STRONGLY SCATTERING APERIODIC LATTICES

    Directory of Open Access Journals (Sweden)

    Y. F. Hsin

    2016-05-01

    Full Text Available The accuracy of the Fourier transform (FT), advantageous for aperiodic lattice (AL) design, is significantly improved for strongly scattering periodic lattices (PLs) and ALs. The approach is to inversely obtain corrected parameters for the FT from an accurate transfer matrix method. We establish a corrected FT that improves the spectral inaccuracy for strongly scattering PLs by redefining the wave numbers and the reflective intensity. We further correct the FT for strongly scattering ALs by implementing the improvements applied to strongly scattering PLs and then making detailed wave-number adjustments in the main-band spectral region. Silicon lattice simulations are presented.

  17. LANDSAT-4 Radiometric and Geometric Correction and Image Enhancement Results. [San Francisco Bay, California

    Science.gov (United States)

    Bernstein, R.; Lotspiech, J. B.

    1984-01-01

    Techniques were developed or improved to calibrate, enhance, and geometrically correct LANDSAT-4 satellite data. Statistical techniques to correct data radiometry were evaluated and were found to minimize striping and banding. Conventional techniques cause striping even with perfect calibration parameters. Intensity enhancement techniques were improved to display image data with large variation in intensity or brightness. Data were geometrically corrected to conform to a 1:100,000 map reference and image products produced with the map overlay. It is shown that these products can serve as accurate map products. A personal computer was experimentally used for digital image processing.

  18. Correction factors for gravimetric measurement of peritumoural oedema in man.

    Science.gov (United States)

    Bell, B A; Smith, M A; Tocher, J L; Miller, J D

    1987-01-01

    The water content of samples of normal and oedematous brain in lobectomy specimens from 16 patients with cerebral tumours has been measured by gravimetry and by wet and dry weighing. Uncorrected gravimetry underestimated the water content of oedematous peritumoural cortex by a mean of 1.17%, and of oedematous peritumoural white matter by a mean of 2.52%. Gravimetric correction equations calculated theoretically and from an animal model of serum infusion white matter oedema overestimate peritumoural white matter oedema in man, and empirical gravimetric error correction factors for oedematous peritumoural human white matter and cortex have therefore been derived. These enable gravimetry to be used to accurately determine peritumoural oedema in man. PMID:3268140

  20. Professional orientation and pluralistic ignorance among jail correctional officers.

    Science.gov (United States)

    Cook, Carrie L; Lane, Jodi

    2014-06-01

    Research about the attitudes and beliefs of correctional officers has historically been conducted in prison facilities while ignoring jail settings. This study contributes to our understanding of correctional officers by examining the perceptions of those who work in jails, specifically measuring professional orientations about counseling roles, punitiveness, corruption of authority by inmates, and social distance from inmates. The study also examines whether officers are accurate in estimating these same perceptions of their peers, a line of inquiry that has been relatively ignored. Findings indicate that the sample was concerned about various aspects of their job and the management of inmates. Specifically, officers were uncertain about adopting counseling roles, were somewhat punitive, and were concerned both with maintaining social distance from inmates and with an inmate's ability to corrupt their authority. Officers also misperceived the professional orientation of their fellow officers and assumed their peer group to be less progressive than they actually were. PMID:23422025

  1. Double Layered Sheath in Accurate HV XLPE Cable Modeling

    DEFF Research Database (Denmark)

    Gudmundsdottir, Unnur Stella; Silva, J. De; Bak, Claus Leth;

    2010-01-01

    This paper discusses modelling of high voltage AC underground cables. For long cables, when crossbonding points are present, not only the coaxial mode of propagation is excited during transient phenomena, but also the intersheath mode. This causes inaccurate simulation results for high frequency studies of crossbonded cables. For the intersheath mode, the correct physical representation of the cable's sheath as well as the proximity effect plays a large role and will ensure correct calculation of the series impedance matrix and therefore a correct simulation for the actual cable. This paper gives a new, more correct method for modelling the actual physical layout of the sheath. It is shown by comparison to field measurements how the new method of simulating the cable's sheath results in simulations with less deviation from field test results.

  2. SPECT Compton-scattering correction by analysis of energy spectra.

    Science.gov (United States)

    Koral, K F; Wang, X Q; Rogers, W L; Clinthorne, N H; Wang, X H

    1988-02-01

    The hypothesis that energy spectra at individual spatial locations in single photon emission computed tomographic projection images can be analyzed to separate the Compton-scattered component from the unscattered component is tested indirectly. An axially symmetric phantom consisting of a cylinder with a sphere is imaged with either the cylinder or the sphere containing 99mTc. An iterative peak-erosion algorithm and a fitting algorithm are given and employed to analyze the acquired spectra. Adequate separation into an unscattered component and a Compton-scattered component is judged on the basis of filtered-backprojection reconstruction of corrected projections. In the reconstructions, attenuation correction is based on the known geometry and the total attenuation cross section for water. An independent test of the accuracy of separation is not made. For both algorithms, reconstructed slices for the cold-sphere, hot-surround phantom have the correct shape as confirmed by simulation results that take into account the measured dependence of system resolution on depth. For the inverse phantom, a hot sphere in a cold surround, quantitative results with the fitting algorithm are accurate but with a particular number of iterations of the erosion algorithm are less good. (A greater number of iterations would improve the 26% error with the algorithm, however.) These preliminary results encourage us to believe that a method for correcting for Compton-scattering in a wide variety of objects can be found, thus helping to achieve quantitative SPECT. PMID:3258023

  3. Assessment of ionospheric and tropospheric corrections for PPP-RTK

    Science.gov (United States)

    de Oliveira, Paulo; Fund, François; Morel, Laurent; Monico, João; Durand, Stéphane; Durand, Fréderic

    2016-04-01

    The PPP-RTK is a state-of-the-art GNSS (Global Navigation Satellite System) technique employed to determine accurate positions in real-time. To perform PPP-RTK it is necessary to accomplish the SSR (State Space Representation) of the spatially correlated errors affecting the GNSS observables, such as the tropospheric delay and the ionospheric effect. Using GNSS data of local or regional GNSS active networks, it is possible to determine quite well the atmospheric errors for any position in the network coverage area, by modeling these effects or biases. This work presents the results of tropospheric and ionospheric modeling employed to obtain the respective corrections. The region in the study is France and the Orphéon GNSS active network is used to generate the atmospheric corrections. The CNES (Centre National d'Etudes Spatiales) satellite orbit products are used to perform ambiguity fixing in GNSS processing. Two atmospheric modeling approaches are considered: 1) generation of a priori corrections by coefficients estimated using the GNSS network and 2) the use of interpolated ionospheric and tropospheric effects from the closest reference stations to the user's location, as suggested in the second stage of RTCM (Radio Technical Commission for Maritime Services) messages development. Finally, the atmospheric corrections are introduced in PPP-RTK as a priori values to allow improvements in ambiguity fixing and to reduce its convergence time. The discussion emphasizes the positive and negative points of each solution, as well as of their combined use.

  4. Children's perception of their synthetically corrected speech production.

    Science.gov (United States)

    Strömbergsson, Sofia; Wengelin, Asa; House, David

    2014-06-01

    We explore children's perception of their own speech - in its online form, in its recorded form, and in synthetically modified forms. Children with phonological disorder (PD) and children with typical speech and language development (TD) performed tasks of evaluating accuracy of the different types of speech stimuli, either immediately after having produced the utterance or after a delay. In addition, they performed a task designed to assess their ability to detect synthetic modification. Both groups showed high performance in tasks involving evaluation of other children's speech, whereas in tasks of evaluating one's own speech, the children with PD were less accurate than their TD peers. The children with PD were less sensitive to misproductions in immediate conjunction with their production of an utterance, and more accurate after a delay. Within-category modification often passed undetected, indicating a satisfactory quality of the generated speech. Potential clinical benefits of using corrective re-synthesis are discussed.

  5. Atlas-guided correction of brain histology distortion

    Directory of Open Access Journals (Sweden)

    Xi Qiu

    2011-01-01

    Full Text Available Histological tissue preparation stages (e.g., cutting, sectioning, etc.) often introduce tissue distortions that prevent a smooth 3D reconstruction from being built. In this paper, we propose a method to correct histology distortions by running a piecewise registration scheme. It takes the information of several consecutive slices in a neighborhood into account. In order to achieve an accurate anatomic presentation, we run the method iteratively with the assistance of a pre-segmented brain atlas. The registration parameters are optimized to accommodate different brain sub-regions, e.g., cerebellum, hippocampus, etc. The results are evaluated by both visual and quantitative approaches. The proposed method has proved robust enough for reconstructing an accurate and smooth mouse brain volume.

  6. Accurate Sliding-Mode Control System Modeling for Buck Converters

    DEFF Research Database (Denmark)

    Høyerby, Mikkel Christian Wendelboe; Andersen, Michael Andreas E.

    2007-01-01

    This paper shows that classical sliding mode theory fails to correctly predict the output impedance of the highly useful sliding mode PID compensated buck converter. The reason for this is identified as the assumption of the sliding variable being held at zero during sliding mode, effectively...... approach also predicts the self-oscillating switching action of the sliding-mode control system correctly. Analytical findings are verified by simulation as well as experimentally in a 10-30V/3A buck converter....

  7. β—Correction Spectrophotometric Determination of Cadmium with Cadion

    Institute of Scientific and Technical Information of China (English)

    郜洪文

    1995-01-01

    Cadmium has been determined by β-correction spectrophotometry with cadion, p-nitrobenzenediazoaminoazobenzene, and a non-ionic surfactant, Triton X-100. The real absorbance of the Cd-cadion chelate in the colored solution can be accurately determined, and the complexation ratio of cadion to Cd(II) has been worked out to be 2. Beer's law is obeyed over the concentration range of 0-0.20 mg/L cadmium and the detection limit for cadmium is only 0.003 mg/L. Satisfactory experimental results are presented with respect to the determination of trace cadmium in wastewaters.

  8. Automated motion correction based on target tracking for dynamic nuclear medicine studies

    Science.gov (United States)

    Cao, Xinhua; Tetrault, Tracy; Fahey, Fred; Treves, Ted

    2008-03-01

    Nuclear medicine dynamic studies of kidneys, bladder and stomach are important diagnostic tools. Accurate generation of time-activity curves from regions of interest (ROIs) requires that the patient remains motionless for the duration of the study. This is not always possible since some dynamic studies may last from several minutes to one hour. Several motion correction solutions have been explored. Motion correction using external point sources is inconvenient and not accurate especially when motion results from breathing, organ motion or feeding rather than from body motion alone. Centroid-based motion correction assumes that activity distribution is only inside the single organ (without background) and uniform, but this approach is impractical in most clinical studies. In this paper, we present a novel technique of motion correction that first tracks the organ of interest in a dynamic series then aligns the organ. The implementation algorithm for target tracking-based motion correction consists of image preprocessing, target detection, target positioning, motion estimation and prediction, tracking (new search region generation) and target alignment. The targeted organ is tracked from the first frame to the last one in the dynamic series to generate a moving trajectory of the organ. Motion correction is implemented by aligning the organ ROIs in the image series to the location of the organ in the first image. The proposed method of motion correction has been applied to several dynamic nuclear medicine studies including radionuclide cystography, dynamic renal scintigraphy, diuretic renography and gastric emptying scintigraphy.
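    The pipeline in this record (preprocessing, target detection, positioning, motion estimation, tracking, alignment) can be illustrated with a much-reduced sketch in which the organ is simply taken to be the brightest region of each frame and every frame is shifted so that the organ's intensity centroid returns to its position in the first frame; the threshold fraction and the use of scipy.ndimage are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def align_dynamic_series(frames, threshold_frac=0.5):
    """Track a bright organ in each frame via its intensity centroid and
    shift every frame so the organ stays at its position in frame 0."""
    frames = np.asarray(frames, dtype=float)
    aligned = np.empty_like(frames)
    ref_centroid = None

    for k, frame in enumerate(frames):
        # Crude target detection: threshold at a fraction of the frame maximum.
        mask = frame > threshold_frac * frame.max()
        centroid = np.array(ndimage.center_of_mass(frame * mask))
        if ref_centroid is None:
            ref_centroid = centroid            # organ position in the first frame
        # Motion estimate = centroid displacement; align back to frame 0.
        displacement = ref_centroid - centroid
        aligned[k] = ndimage.shift(frame, displacement, order=1, mode='nearest')
    return aligned
```

    A clinical implementation would restrict the search to a predicted region around the previous position and use a more robust detector than a global threshold, as described in the record.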

  9. A new, accurate predictive model for incident hypertension

    DEFF Research Database (Denmark)

    Völzke, Henry; Fung, Glenn; Ittermann, Till;

    2013-01-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures....

  10. A Fast and Accurate Universal Kepler Solver without Stumpff Series

    CERN Document Server

    Wisdom, Jack

    2015-01-01

    We derive and present a fast and accurate solution of the initial value problem for Keplerian motion in universal variables that does not use the Stumpff series. We find that it performs better than methods based on the Stumpff series.

  11. Highly Accurate Sensor for High-Purity Oxygen Determination Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this STTR effort, Los Gatos Research (LGR) and the University of Wisconsin (UW) propose to develop a highly-accurate sensor for high-purity oxygen determination....

  12. ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In this paper, a second order linear differential equation is considered, and an accurate estimate method of characteristic exponent for it is presented. Finally, we give some examples to verify the feasibility of our result.

  13. Accurate wall thickness measurement using autointerference of circumferential Lamb wave

    International Nuclear Information System (INIS)

    In this paper, a method of accurately measuring the pipe wall thickness by using a noncontact air-coupled ultrasonic transducer (NAUT) is presented. In this method, accurate measurement of the angular wave number (AWN) is a key technique because the AWN changes minutely with the wall thickness. An autointerference of the circumferential (C-) Lamb wave was used for accurate measurement of the AWN. The principle of the method is first explained. A modified method for measuring the wall thickness near a butt weld line is also proposed, and its accuracy was evaluated to be within a 6 μm error. It is also shown in the paper that the wall thickness measurement was carried out accurately regardless of differences among the sensors by calibrating the frequency response of the sensors. (author)

  14. Plans for Jet Energy Corrections at CMS

    Science.gov (United States)

    Mishra, Kalanand

    2009-05-01

    We present a plan for Jet Energy Corrections at CMS. Jet corrections at CMS will come initially from simulation tuned on test beam data, directly from collision data when available, and ultimately from a simulation tuned on collision data. The corrections will be factorized into a fixed sequence of sub-corrections associated with different detector and physics effects. The following three factors are minimum requirements for most analysis: offset corrections for pile-up and noise; correction for the response of the calorimeter as a function of jet pseudorapidity relative to the barrel; correction for the absolute response as a function of transverse momentum in the barrel. The required correction gives a jet Lorentz vector equivalent to the sum of particles in the jet cone emanating from a QCD hard collision. We discuss the status of these corrections, the planned data-driven techniques for their derivation, and their anticipated evolution with the stages of the CMS experiment.

  15. Atmospheric correction of APEX hyperspectral data

    Directory of Open Access Journals (Sweden)

    Sterckx Sindy

    2016-03-01

    Full Text Available Atmospheric correction plays a crucial role among the processing steps applied to remotely sensed hyperspectral data. Atmospheric correction comprises a group of procedures needed to remove atmospheric effects from observed spectra, i.e. the transformation from at-sensor radiances to at-surface radiances or reflectances. In this paper we present the different steps in the atmospheric correction process for APEX hyperspectral data as applied by the Central Data Processing Center (CDPC) at the Flemish Institute for Technological Research (VITO, Mol, Belgium). The MODerate resolution atmospheric TRANsmission program (MODTRAN) is used to determine the source of radiation and for applying the actual atmospheric correction. As part of the overall correction process, supporting algorithms are provided in order to derive MODTRAN configuration parameters and to account for specific effects, e.g. correction for adjacency effects, haze and shadow correction, and topographic BRDF correction. The methods and theory underlying these corrections and an example of an application are presented.

  16. Algorithmic scatter correction in dual-energy digital mammography

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xi; Mou, Xuanqin [Institute of Image Processing and Pattern Recognition, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China); Nishikawa, Robert M.; Lau, Beverly A. [Department of Radiology, The University of Chicago, Chicago, Illinois 60637 (United States); Chan, Suk-tak [Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung Hom (Hong Kong); Zhang, Lei [Department of Computing, The Hong Kong Polytechnic University, Hung Hom (Hong Kong)

    2013-11-15

    … The range of background DE calcification signals using scatter-uncorrected data was reduced by 58% with scatter-corrected data by the algorithmic method. With the scatter-correction algorithm and denoising, the minimum visible calcification size can be reduced from 380 to 280 μm. Conclusions: When applying the proposed algorithmic scatter correction to images, the resultant background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method has similar or even better performance than the pinhole-array interpolation method in scatter correction for DEDM; moreover, this method is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it is validated by a 5-cm-thick phantom with calcifications and homogeneous background. The method should be tested on structured backgrounds to more accurately gauge effectiveness.

  17. The Importance of Slow-roll Corrections During Multi-field Inflation

    CERN Document Server

    Avgoustidis, Anastasios; Davis, Anne-Christine; Ribeiro, Raquel H; Turzynski, Krzysztof; Watson, Scott

    2011-01-01

    We re-examine the importance of slow-roll corrections during the evolution of cosmological perturbations in models of multi-field inflation. We find that in many instances the presence of light degrees of freedom leads to situations in which next to leading order slow-roll corrections become significant. Examples where we expect such corrections to be crucial include models in which modes exit the Hubble radius while the inflationary trajectory undergoes an abrupt turn in field space, or during a phase transition. We illustrate this with two examples -- hybrid inflation and double quadratic inflation. Utilizing both analytic estimates and full numerical results, we find that corrections can be as large as 20%. Our results have implications for many existing models in the literature, as these corrections must be included to obtain accurate observational predictions -- particularly given the level of accuracy expected from CMB experiments such as Planck.

  18. Scatter correction method with primary modulator for dual energy digital radiography: a preliminary study

    Science.gov (United States)

    Jo, Byung-Du; Lee, Young-Jin; Kim, Dae-Hong; Jeon, Pil-Hyun; Kim, Hee-Joung

    2014-03-01

    In conventional digital radiography (DR) using a dual energy subtraction technique, a significant fraction of the detected photons are scattered within the body, resulting in the scatter component. Scattered radiation can significantly deteriorate image quality in diagnostic X-ray imaging systems. Various methods of scatter correction, including both measurement and non-measurement-based methods have been proposed in the past. Both methods can reduce scatter artifacts in images. However, non-measurement-based methods require a homogeneous object and have insufficient scatter component correction. Therefore, we employed a measurement-based method to correct for the scatter component of inhomogeneous objects from dual energy DR (DEDR) images. We performed a simulation study using a Monte Carlo simulation with a primary modulator, which is a measurement-based method for the DEDR system. The primary modulator, which has a checkerboard pattern, was used to modulate primary radiation. Cylindrical phantoms of variable size were used to quantify imaging performance. For scatter estimation, we used Discrete Fourier Transform filtering. The primary modulation method was evaluated using a cylindrical phantom in the DEDR system. The scatter components were accurately removed using a primary modulator. When the results acquired with scatter correction and without correction were compared, the average contrast-to-noise ratio (CNR) with the correction was 1.35 times higher than that obtained without correction, and the average root mean square error (RMSE) with the correction was 38.00% better than that without correction. In the subtraction study, the average CNR with correction was 2.04 (aluminum subtraction) and 1.38 (polymethyl methacrylate (PMMA) subtraction) times higher than that obtained without the correction. The analysis demonstrated the accuracy of scatter correction and the improvement of image quality using a primary modulator and showed the feasibility of

  19. Holographic Thermalization with Weyl Corrections

    CERN Document Server

    Dey, Anshuman; Sarkar, Tapobrata

    2015-01-01

    We consider holographic thermalization in the presence of a Weyl correction in five dimensional AdS space. We numerically analyze the time dependence of the two point correlation functions and the expectation values of rectangular Wilson loops in the boundary field theory. The subtle interplay between the Weyl coupling constant and the chemical potential is studied in detail. An outcome of our analysis is the appearance of a swallow tail behaviour in the thermalization curve, and we give evidence that this might indicate distinct physical situations relating to different length scales in the problem.

  20. Correct Linearization of Einstein's Equations

    Directory of Open Access Journals (Sweden)

    Rabounski D.

    2006-06-01

    Full Text Available Regularly Einstein's equations can be reduced to a wave form (linearly dependent on the second derivatives of the space metric) in the absence of gravitation, the space rotation and Christoffel's symbols. As shown here, the origin of the problem is that one uses the general covariant theory of measurement. Here the wave form of Einstein's equations is obtained in terms of Zelmanov's chronometric invariants (physically observable projections on the observer's time line and spatial section). The obtained equations depend solely on the second derivatives, even if gravitation, the space rotation and Christoffel's symbols are present. The correct linearization proves: the Einstein equations are completely compatible with weak waves of the metric.

  1. Correction of gene expression data

    DEFF Research Database (Denmark)

    Darbani Shirvanehdeh, Behrooz; Stewart, C. Neal, Jr.; Noeparvar, Shahin;

    2014-01-01

    This report investigates for the first time the potential inter-treatment bias source of cell number for gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies. For maximal reliability of analysis, therefore, comparisons should be performed at the cellular level. This could be accomplished using an appropriate correction method that can detect and remove the inter-treatment bias for cell number. Based on inter-treatment variations of reference genes, we introduce ...

  2. Matrix Models and Gravitational Corrections

    CERN Document Server

    Dijkgraaf, R; Temurhan, M; Dijkgraaf, Robbert; Sinkovics, Annamaria; Temurhan, Mine

    2002-01-01

    We provide evidence of the relation between supersymmetric gauge theories and matrix models beyond the planar limit. We compute gravitational R^2 couplings in gauge theories perturbatively, by summing genus one matrix model diagrams. These diagrams give the leading 1/N^2 corrections in the large N limit of the matrix model and can be related to twist field correlators in a collective conformal field theory. In the case of softly broken SU(N) N=2 super Yang-Mills theories, we find that these exact solutions of the matrix models agree with results obtained by topological field theory methods.

  3. Heisenberg coupling constant predicted for molecular magnets with pairwise spin-contamination correction

    International Nuclear Information System (INIS)

    A new method to eliminate the spin-contamination in broken symmetry density functional theory (BS DFT) calculations is introduced. Unlike conventional spin-purification correction, this method is based on canonical Natural Orbitals (NO) for each high/low spin coupled electron pair. We derive an expression to extract the energy of the pure singlet state given in terms of the energy of the BS DFT solution, the occupation number of the bonding NO, and the energy of the higher spin state built on these bonding and antibonding NOs (not self-consistent Kohn–Sham orbitals of the high spin state). Compared to the other spin-contamination correction schemes, spin-correction is applied to each correlated electron pair individually. We investigate two binuclear Mn(IV) molecular magnets using this pairwise correction. While one of the molecules is described by magnetic orbitals strongly localized on the metal centers, and the spin gap is accurately predicted by the Noodleman and Yamaguchi schemes, for the other one the gap is predicted poorly by these schemes due to strong delocalization of the magnetic orbitals onto the ligands. We show our new correction to yield more accurate results in both cases. - Highlights: • Magnetic orbitals obtained for high and low spin states are not related. • Spin-purification correction becomes inaccurate for delocalized magnetic orbitals. • We use the natural orbitals of the broken symmetry state to build the high spin state. • This new correction is made separately for each electron pair. • Our spin-purification correction is more accurate for delocalised magnetic orbitals.

  4. Heisenberg coupling constant predicted for molecular magnets with pairwise spin-contamination correction

    Energy Technology Data Exchange (ETDEWEB)

    Masunov, Artëm E., E-mail: amasunov@ucf.edu [NanoScience Technology Center, Department of Chemistry, and Department of Physics, University of Central Florida, Orlando, FL 32826 (United States); Photochemistry Center RAS, ul. Novatorov 7a, Moscow 119421 (Russian Federation); Gangopadhyay, Shruba [Department of Physics, University of California, Davis, CA 95616 (United States); IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95120 (United States)

    2015-12-15

    A new method to eliminate the spin-contamination in broken symmetry density functional theory (BS DFT) calculations is introduced. Unlike conventional spin-purification correction, this method is based on canonical Natural Orbitals (NO) for each high/low spin coupled electron pair. We derive an expression to extract the energy of the pure singlet state given in terms of the energy of the BS DFT solution, the occupation number of the bonding NO, and the energy of the higher spin state built on these bonding and antibonding NOs (not self-consistent Kohn–Sham orbitals of the high spin state). Compared to the other spin-contamination correction schemes, spin-correction is applied to each correlated electron pair individually. We investigate two binuclear Mn(IV) molecular magnets using this pairwise correction. While one of the molecules is described by magnetic orbitals strongly localized on the metal centers, and the spin gap is accurately predicted by the Noodleman and Yamaguchi schemes, for the other one the gap is predicted poorly by these schemes due to strong delocalization of the magnetic orbitals onto the ligands. We show our new correction to yield more accurate results in both cases. - Highlights: • Magnetic orbitals obtained for high and low spin states are not related. • Spin-purification correction becomes inaccurate for delocalized magnetic orbitals. • We use the natural orbitals of the broken symmetry state to build the high spin state. • This new correction is made separately for each electron pair. • Our spin-purification correction is more accurate for delocalised magnetic orbitals.

  5. Simulation of Kelvin-Helmholtz Instability with Flux-Corrected Transport Method

    Institute of Scientific and Technical Information of China (English)

    WANG Li-Feng; YE Wen-Hua; FAN Zheng-Feng; LI Ying-Jun

    2009-01-01

    The sixth-order accurate phase error flux-corrected transport numerical algorithm is introduced, and used to simulate Kelvin-Helmholtz instability. Linear growth rates of the simulation agree with the linear theories of Kelvin-Helmholtz instability. It indicates the validity and accuracy of this simulation method. The method also has good capturing ability of the instability interface deformation.

  6. A novel energy mapping approach for CT-based attenuation correction in PET

    NARCIS (Netherlands)

    Teimourian, B.; Ay, M. R.; Zafarghandi, M. Shamsaie; Ghafarian, P.; Ghadiri, H.; Zaidi, H.

    2012-01-01

    Purpose: Dual-energy CT (DECT) is arguably the most accurate energy mapping technique in CT-based attenuation correction (CTAC) implemented on hybrid PET/CT systems. However, this approach is not attractive for clinical use owing to increased patient dose. The authors propose a novel energy mapping

  7. The Role of the Components of Knowledge of Results Information in Error Correction.

    Science.gov (United States)

    Reeve, T. Gilmour; Magill, Richard A.

    1981-01-01

    A study was done to determine the usefulness of the components of a knowledge of results (KR) statement for organizing response correction. Errors in direction and distance components of a KR statement testing psychomotor skills were manipulated across four groups. The groups receiving directional information were more accurate in error…

  8. Drift-corrected nanoplasmonic hydrogen sensing by polarization

    Science.gov (United States)

    Wadell, Carl; Langhammer, Christoph

    2015-06-01

    Accurate and reliable hydrogen sensors are an important enabling technology for the large-scale introduction of hydrogen as a fuel or energy storage medium. As an example, in a hydrogen-powered fuel cell car of the type now introduced to the market, more than 15 hydrogen sensors are required for safe operation. To enable the long-term use of plasmonic sensors in this particular context, we introduce a concept for drift-correction based on light polarization utilizing symmetric sensor and sensing material nanoparticles arranged in a heterodimer. In this way the inert gold sensor element of the plasmonic dimer couples to a sensing-active palladium element if illuminated in the dimer-parallel polarization direction but not the perpendicular one. Thus the perpendicular polarization readout can be used to efficiently correct for drifts occurring due to changes of the sensor element itself or due to non-specific events like a temperature change. Furthermore, by the use of a polarizing beamsplitter, both polarization signals can be read out simultaneously making it possible to continuously correct the sensor response to eliminate long-term drift and ageing effects. Since our approach is generic, we also foresee its usefulness for other applications of nanoplasmonic sensors than hydrogen sensing.

  9. Quantitative photoluminescence of broad band absorbing melanins: A procedure to correct for inner filter and re-absorption effects

    CERN Document Server

    Riesz, J; Meredith, P; Riesz, Jennifer; Gilmore, Joel; Meredith, Paul

    2004-01-01

    We report methods for correcting the photoluminescence emission and excitation spectra of highly absorbing samples for re-absorption and inner filter effects. We derive the general form of the correction, and investigate various methods for determining the parameters. Additionally, the correction methods are tested with highly absorbing fluorescein and melanin (broadband absorption) solutions; the expected linear relationships between absorption and emission are recovered upon application of the correction, indicating that the methods are valid. These procedures allow accurate quantitative analysis of the emission of low quantum yield samples (such as melanin) at concentrations where absorption is significant.
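    For orientation, the widely used first-order correction for primary and secondary inner-filter effects in a standard right-angle, 1 cm cuvette geometry multiplies the observed intensity by 10^((A_ex + A_em)/2). The sketch below implements that textbook approximation; it is not necessarily the general form derived in this record, and all numerical values are illustrative only.

```python
import numpy as np

def inner_filter_correct(F_obs, A_ex, A_em):
    """Approximate inner-filter correction for a right-angle fluorimeter
    with a 1 cm path length: F_corr = F_obs * 10**((A_ex + A_em) / 2),
    where A_ex and A_em are the absorbances at the excitation and
    emission wavelengths."""
    F_obs = np.asarray(F_obs, dtype=float)
    return F_obs * 10.0 ** ((np.asarray(A_ex, dtype=float) + np.asarray(A_em, dtype=float)) / 2.0)

# Example: an emission spectrum from a strongly absorbing solution
# (values are made up for illustration).
F_measured = np.array([120.0, 150.0, 90.0])
A_excitation = 0.8                          # absorbance at the excitation wavelength
A_emission = np.array([0.30, 0.25, 0.20])   # absorbance at each emission wavelength
F_corrected = inner_filter_correct(F_measured, A_excitation, A_emission)
```

    At high absorbances this first-order form breaks down, which is why the record above derives a more general correction for broadband absorbers such as melanin.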

  10. 78 FR 16753 - Service Delivery Plan; Correction

    Science.gov (United States)

    2013-03-18

    ... ADMINISTRATION Service Delivery Plan; Correction AGENCY: Social Security Administration. ACTION: Notice; request for comments; Correction. SUMMARY: The Social Security Administration published a document in the..., Office of Regulations, Social Security Administration. BILLING CODE 4191-02-P...

  11. Automatic Power Factor Correction Using Capacitive Bank

    OpenAIRE

    Mr. Anant Kumar Tiwari; Mrs. Durga Sharma

    2014-01-01

    The power factor correction of electrical loads is a problem common to all industrial companies. Earlier, power factor correction was done by adjusting the capacitive bank manually [1]. The automated power factor corrector (APFC) using a capacitive load bank is helpful in providing power factor correction. The proposed automated project involves measuring the power factor of the load using a microcontroller. The design of this auto-adjustable power factor correction is ...
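    For context, the size of the capacitor bank that such a controller switches in follows from the reactive power needed to move the load from the measured to the target power factor; a minimal sketch of that standard calculation (the values and function name are illustrative):

```python
import math

def required_capacitor_kvar(p_kw, pf_measured, pf_target):
    """Reactive power (kvar) a capacitor bank must supply to raise the
    power factor of a load from pf_measured to pf_target:
    Qc = P * (tan(phi1) - tan(phi2)), with phi = acos(pf)."""
    phi1 = math.acos(pf_measured)
    phi2 = math.acos(pf_target)
    return p_kw * (math.tan(phi1) - math.tan(phi2))

# Example: a 100 kW load at 0.70 lagging corrected to 0.95
print(required_capacitor_kvar(100.0, 0.70, 0.95))   # roughly 69 kvar
```

    An APFC controller of the kind described above effectively performs this calculation continuously and switches in the nearest available combination of capacitor steps.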

  12. Network error correction with unequal link capacities

    OpenAIRE

    Kim, Sukwon; Ho, Tracey; Effros, Michelle; Avestimehr, Amir Salman

    2010-01-01

    We study network error correction with unequal link capacities. Previous results on network error correction assume unit link capacities. We consider network error correction codes that can correct arbitrary errors occurring on up to z links. We find the capacity of a network consisting of parallel links, and a generalized Singleton outer bound for any arbitrary network. We show by example that linear coding is insufficient for achieving capacity in general. In our exampl...

  13. Correcting for Telluric Absorption: Methods, Case Studies, and Release of the TelFit Code

    CERN Document Server

    Gullikson, Kevin; Kraus, Adam

    2014-01-01

    Ground-based astronomical spectra are contaminated by the Earth's atmosphere to varying degrees in all spectral regions. We present a Python code that can accurately fit a model to the telluric absorption spectrum present in astronomical data, with residuals of $\\sim 3-5\\%$ of the continuum for moderately strong lines. We demonstrate the quality of the correction by fitting the telluric spectrum in a nearly featureless A0V star, HIP 20264, as well as to a series of dwarf M star spectra near the 819 nm sodium doublet. We directly compare the results to an empirical telluric correction of HIP 20264 and find that our model-fitting procedure is at least as good and sometimes more accurate. The telluric correction code, which we make freely available to the astronomical community, can be used as a replacement for telluric standard star observations for many purposes.

  14. Correcting for telluric absorption: Methods, case studies, and release of the TelFit code

    Energy Technology Data Exchange (ETDEWEB)

    Gullikson, Kevin; Kraus, Adam [Department of Astronomy, University of Texas, 2515 Speedway, Stop C1400, Austin, TX 78712 (United States); Dodson-Robinson, Sarah [Department of Physics and Astronomy, 217 Sharp Lab, Newark, DE 19716 (United States)

    2014-09-01

    Ground-based astronomical spectra are contaminated by the Earth's atmosphere to varying degrees in all spectral regions. We present a Python code that can accurately fit a model to the telluric absorption spectrum present in astronomical data, with residuals of ∼3%-5% of the continuum for moderately strong lines. We demonstrate the quality of the correction by fitting the telluric spectrum in a nearly featureless A0V star, HIP 20264, as well as to a series of dwarf M star spectra near the 819 nm sodium doublet. We directly compare the results to an empirical telluric correction of HIP 20264 and find that our model-fitting procedure is at least as good and sometimes more accurate. The telluric correction code, which we make freely available to the astronomical community, can be used as a replacement for telluric standard star observations for many purposes.

  15. Application of bias correction methods to improve the accuracy of quantitative radar rainfall in Korea

    Directory of Open Access Journals (Sweden)

    J.-K. Lee

    2015-04-01

    Full Text Available There are many potential sources of bias in the radar rainfall estimation process. This study classified the biases from the rainfall estimation process into the reflectivity measurement bias and the QPE model bias and also conducted bias correction methods to improve the accuracy of the Radar-AWS Rainrate (RAR) calculation system operated by the Korea Meteorological Administration (KMA). For the Z bias correction, this study utilized the bias correction algorithm for the reflectivity. The concept of this algorithm is that the reflectivity of the target single-pol radars is corrected based on a reference dual-pol radar corrected for hardware and software bias. This study then dealt with two post-processing methods, the Mean Field Bias Correction (MFBC) method and the Local Gauge Correction (LGC) method, to correct the rainfall bias. The Z bias and rainfall-bias correction methods were applied to the RAR system. The accuracy of the RAR system improved after correcting the Z bias. For rainfall types, although the accuracy of Changma front and local torrential cases was slightly improved without the Z bias correction, the accuracy of typhoon cases in particular got worse than the existing results. As a result of the rainfall-bias correction, the accuracy of the RAR system with Z bias_LGC was especially superior to the MFBC method because different rainfall biases were applied to each grid rainfall amount in the LGC method. For rainfall types, results of the Z bias_LGC showed that rainfall estimates for all types were more accurate than with only the Z bias and, especially, the outcomes in typhoon cases were vastly superior to the others.
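    A minimal sketch of the two post-processing ideas in their simplest textbook form: a single mean-field bias factor from collocated gauge and radar totals, and a local correction in which every grid cell is scaled by an inverse-distance-weighted mean of the gauge/radar ratios at nearby gauges. The weighting scheme and variable names are illustrative assumptions, not the scheme used in the RAR system.

```python
import numpy as np

def mean_field_bias(gauge_totals, radar_totals):
    """Single multiplicative bias factor from collocated gauge/radar pairs (MFBC)."""
    gauge_totals = np.asarray(gauge_totals, dtype=float)
    radar_totals = np.asarray(radar_totals, dtype=float)
    return gauge_totals.sum() / radar_totals.sum()

def local_gauge_correction(cell_xy, radar_cells, gauge_xy, gauge_rain, radar_at_gauge, power=2.0):
    """Per-cell bias (LGC-like): inverse-distance-weighted mean of the
    gauge/radar ratios at the gauge locations, applied to each radar cell."""
    cell_xy = np.asarray(cell_xy, dtype=float)
    radar_cells = np.asarray(radar_cells, dtype=float)
    gauge_xy = np.asarray(gauge_xy, dtype=float)
    ratios = np.asarray(gauge_rain, dtype=float) / np.asarray(radar_at_gauge, dtype=float)

    corrected = np.empty_like(radar_cells)
    for i, (x, y) in enumerate(cell_xy):
        d = np.hypot(gauge_xy[:, 0] - x, gauge_xy[:, 1] - y)
        w = 1.0 / np.maximum(d, 1e-6) ** power      # avoid division by zero at a gauge
        corrected[i] = radar_cells[i] * np.sum(w * ratios) / np.sum(w)
    return corrected
```

    The contrast drawn in the record follows directly from this structure: MFBC applies one factor everywhere, whereas the local correction lets the bias vary from grid cell to grid cell.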

  16. Application of bias correction methods to improve the accuracy of quantitative radar rainfall in Korea

    Directory of Open Access Journals (Sweden)

    J.-K. Lee

    2015-11-01

    Full Text Available There are many potential sources of biases in the radar rainfall estimation process. This study classified the biases from the rainfall estimation process into the reflectivity measurement bias and the rainfall estimation bias of the Quantitative Precipitation Estimation (QPE) model and also conducted bias correction methods to improve the accuracy of the Radar-AWS Rainrate (RAR) calculation system operated by the Korea Meteorological Administration (KMA). In the Z bias correction for the reflectivity biases that occur when measuring the rainfall, this study utilized the bias correction algorithm. The concept of this algorithm is that the reflectivity of the target single-pol radars is corrected based on a reference dual-pol radar corrected for hardware and software bias. This study then dealt with two post-processing methods, the Mean Field Bias Correction (MFBC) method and the Local Gauge Correction (LGC) method, to correct the rainfall estimation bias of the QPE model. The Z bias and rainfall estimation bias correction methods were applied to the RAR system. The accuracy of the RAR system was improved after correcting the Z bias. For the rainfall types, although the accuracy of the Changma front and the local torrential cases was slightly improved without the Z bias correction, the accuracy of the typhoon cases in particular got worse than the existing results. As a result of the rainfall estimation bias correction, the Z bias_LGC was especially superior to the MFBC method because different rainfall biases were applied to each grid rainfall amount in the LGC method. For the rainfall types, the results of the Z bias_LGC showed that the rainfall estimates for all types were more accurate than with only the Z bias and, especially, the outcomes in the typhoon cases were vastly superior to the others.

  17. A rigid motion correction method for helical computed tomography (CT)

    International Nuclear Information System (INIS)

    We propose a method to compensate for six degree-of-freedom rigid motion in helical CT of the head. The method is demonstrated in simulations and in helical scans performed on a 16-slice CT scanner. Scans of a Hoffman brain phantom were acquired while an optical motion tracking system recorded the motion of the bed and the phantom. Motion correction was performed by restoring projection consistency using data from the motion tracking system, and reconstructing with an iterative fully 3D algorithm. Motion correction accuracy was evaluated by comparing reconstructed images with a stationary reference scan. We also investigated the effects on accuracy of tracker sampling rate, measurement jitter, interpolation of tracker measurements, and the synchronization of motion data and CT projections. After optimization of these aspects, motion corrected images corresponded remarkably closely to images of the stationary phantom with correlation and similarity coefficients both above 0.9. We performed a simulation study using volunteer head motion and found similarly that our method is capable of compensating effectively for realistic human head movements. To the best of our knowledge, this is the first practical demonstration of generalized rigid motion correction in helical CT. Its clinical value, which we have yet to explore, may be significant. For example it could reduce the necessity for repeat scans and resource-intensive anesthetic and sedation procedures in patient groups prone to motion, such as young children. It is not only applicable to dedicated CT imaging, but also to hybrid PET/CT and SPECT/CT, where it could also ensure an accurate CT image for lesion localization and attenuation correction of the functional image data. (paper)

  18. Addendum to the Corrective Action Investigation Plan for Corrective Action Unit 321: Area 22 Weather Station Fuel Storage, Nevada Test Site, Nevada (Rev. 0, November 2000)

    Energy Technology Data Exchange (ETDEWEB)

    DOE/NV

    2000-11-03

    This addendum to the Corrective Action Investigation Plan (CAIP) contains the U.S. Department of Energy, Nevada Operations Office's approach to determine the extent of contamination existing at Corrective Action Unit (CAU) 321. This addendum was required when the extent of contamination exceeded the estimate in the original Corrective Action Decision Document (CADD). Located in Area 22 on the Nevada Test Site, Corrective Action Unit 321, Weather Station Fuel Storage, consists of Corrective Action Site 22-99-05, Fuel Storage Area, which was used to store fuel and other petroleum products necessary for motorized operations at the historic Camp Desert Rock facility. This facility was operational from 1951 to 1958 and dismantled after 1958. Based on site history and earlier investigation activities at CAU 321, the contaminant of potential concern (COPC) was previously identified as total petroleum hydrocarbons (diesel-range organics). The scope of this corrective action investigation for the Fuel Storage Area will include the selection of biased sample locations to determine the vertical and lateral extent of contamination, collection of soil samples using rotary sonic drilling techniques, and the utilization of field-screening methods to accurately determine the extent of COPC contamination. The results of this field investigation will support a defensible evaluation of corrective action alternatives and be included in the revised CADD.

  19. 75 FR 2510 - Procurement List; Corrections

    Science.gov (United States)

    2010-01-15

    ... services on January 11, 2010 (75 FR 1354-1355). The correct date that comments should be received is... FR 1355-1356). The correct effective date should be February 11, 2010. ADDRESSES: Committee for... PEOPLE WHO ARE BLIND OR SEVERELY DISABLED Procurement List; Corrections AGENCY: Committee for...

  20. 45 CFR 1225.19 - Corrective action.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false Corrective action. 1225.19 Section 1225.19 Public... Corrective action. (a) When discrimination is found, Peace Corps or ACTION must take appropriate action to... corrective action to the agent and other class members in accordance with § 1225.10 of this part. (b)...

  1. 40 CFR 192.04 - Corrective action.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Corrective action. 192.04 Section 192... Corrective action. If the groundwater concentration limits established for disposal sites under provisions of § 192.02(c) are found or projected to be exceeded, a corrective action program shall be placed...

  2. 45 CFR 1225.10 - Corrective action.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false Corrective action. 1225.10 Section 1225.10 Public... Corrective action. When it has been determined by Final Agency Decision that the aggrieved party has been subjected to illegal discrimination, the following corrective actions may be taken: (a) Selection as...

  3. 10 CFR 72.172 - Corrective action.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Corrective action. 72.172 Section 72.172 Energy NUCLEAR... Corrective action. The licensee, applicant for a license, certificate holder, and applicant for a CoC shall... that the cause of the condition is determined and corrective action is taken to preclude...

  4. 42 CFR 431.246 - Corrective action.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Corrective action. 431.246 Section 431.246 Public... Recipients Procedures § 431.246 Corrective action. The agency must promptly make corrective payments, retroactive to the date an incorrect action was taken, and, if appropriate, provide for admission...

  5. 40 CFR 35.3170 - Corrective action.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Corrective action. 35.3170 Section 35... STATE AND LOCAL ASSISTANCE State Water Pollution Control Revolving Funds § 35.3170 Corrective action. (a... will notify the State of such noncompliance and prescribe the necessary corrective action. Failure...

  6. 34 CFR 200.42 - Corrective action.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false Corrective action. 200.42 Section 200.42 Education... Programs Operated by Local Educational Agencies Lea and School Improvement § 200.42 Corrective action. (a) Definition. “Corrective action” means action by an LEA that— (1) Substantially and directly responds to—...

  7. 10 CFR 71.133 - Corrective action.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Corrective action. 71.133 Section 71.133 Energy NUCLEAR....133 Corrective action. The licensee, certificate holder, and applicant for a CoC shall establish... determined and corrective action taken to preclude repetition. The identification of the...

  8. Simple and accurate analytical calculation of shortest path lengths

    CERN Document Server

    Melnik, Sergey

    2016-01-01

    We present an analytical approach to calculating the distribution of shortest paths lengths (also called intervertex distances, or geodesic paths) between nodes in unweighted undirected networks. We obtain very accurate results for synthetic random networks with specified degree distribution (the so-called configuration model networks). Our method allows us to accurately predict the distribution of shortest path lengths on real-world networks using their degree distribution, or joint degree-degree distribution. Compared to some other methods, our approach is simpler and yields more accurate results. In order to obtain the analytical results, we use the analogy between an infection reaching a node in $n$ discrete time steps (i.e., as in the susceptible-infected epidemic model) and that node being at a distance $n$ from the source of the infection.
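    As a baseline against which such analytical predictions can be checked, the exact distribution of shortest path lengths on a given unweighted network can be computed by breadth-first search from every node; a short sketch (adjacency supplied as a plain dict of neighbour lists):

```python
from collections import Counter, deque

def shortest_path_length_distribution(adj):
    """Empirical distribution P(d) of shortest path lengths between all
    connected node pairs of an unweighted, undirected graph.

    adj: dict mapping each node to an iterable of its neighbours.
    """
    counts = Counter()
    for source in adj:
        dist = {source: 0}
        queue = deque([source])
        while queue:                          # breadth-first search from `source`
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        counts.update(d for node, d in dist.items() if node != source)
    total = sum(counts.values())
    return {d: c / total for d, c in sorted(counts.items())}

# Example: a 4-node path graph 0-1-2-3 gives P(1)=1/2, P(2)=1/3, P(3)=1/6.
print(shortest_path_length_distribution({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}))
```

    The analytical approach in the record predicts this distribution from the degree (or joint degree-degree) distribution alone, without running the search on the full network.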

  9. Accurate level set method for simulations of liquid atomization

    Institute of Scientific and Technical Information of China (English)

    Changxiao Shao; Kun Luo; Jianshan Yang; Song Chen; Jianren Fan

    2015-01-01

    Computational fluid dynamics is an efficient numerical approach for spray atomization study, but it is challenging to accurately capture the gas–liquid interface. In this work, an accurate conservative level set method is introduced to accurately track the gas–liquid interfaces in liquid atomization. To validate the capability of this method, binary drop collision and drop impact on a liquid film are investigated. The results are in good agreement with experimental observations. In addition, primary atomization (swirling sheet atomization) is studied using this method. For the swirling sheet atomization, it is found that Rayleigh–Taylor instability in the azimuthal direction causes the primary breakup of the liquid sheet, and complex vortex structures are clustered around the rim of the liquid sheet. The effects of central gas velocity and liquid–gas density ratio on atomization are also investigated. This work lays a solid foundation for further studying the mechanism of spray atomization.

  10. Producing accurate wave propagation time histories using the global matrix method

    International Nuclear Information System (INIS)

    This paper presents a reliable method for producing accurate displacement time histories for wave propagation in laminated plates using the global matrix method. The existence of inward and outward propagating waves in the general solution is highlighted while examining the axisymmetric case of a circular actuator on an aluminum plate. Problems with previous attempts to isolate the outward wave for anisotropic laminates are shown. The updated method develops a correction signal that can be added to the original time history solution to cancel the inward wave and leave only the outward propagating wave. The paper demonstrates the effectiveness of the new method for circular and square actuators bonded to the surface of isotropic laminates, and these results are compared with exact solutions. Results for circular actuators on cross-ply laminates are also presented and compared with experimental results, showing the ability of the new method to successfully capture the displacement time histories for composite laminates. (paper)

  11. Fast and accurate prediction of numerical relativity waveforms from binary black hole mergers using surrogate models

    CERN Document Server

    Blackman, Jonathan; Galley, Chad R; Szilagyi, Bela; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-01-01

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. In this paper, we construct an accurate and fast-to-evaluate surrogate model for numerical relativity (NR) waveforms from non-spinning binary black hole coalescences with mass ratios from $1$ to $10$ and durations corresponding to about $15$ orbits before merger. Our surrogate, which is built using reduced order modeling techniques, is distinct from traditional modeling efforts. We find that the full multi-mode surrogate model agrees with waveforms generated by NR to within the numerical error of the NR code. In particular, we show that our modeling strategy produces surrogates which can correctly predict NR waveforms that were {\em not} used for the surrogate's training. For all practical purposes, then, the surrogate waveform model is equivalent to the high-accuracy, large-scale simulation waveform but can be evaluated in a millisecond to a second dependin...

  12. Accurate estimation of influenza epidemics using Google search data via ARGO.

    Science.gov (United States)

    Yang, Shihao; Santillana, Mauricio; Kou, S C

    2015-11-24

    Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.
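
    The published ARGO implementation is not reproduced here; the sketch below only illustrates the general idea the abstract describes, an autoregression on recent flu activity augmented with search-query covariates and an L1 penalty, on synthetic data. All variable names and parameter values are invented for illustration.

```python
# Toy ARGO-style regression: this week's flu activity regressed on its own recent lags
# plus Google-search-style covariates, with a lasso penalty to select informative terms.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
weeks, n_queries, n_lags = 300, 20, 4

flu = np.abs(np.cumsum(rng.normal(size=weeks)))          # synthetic flu activity series
queries = flu[:, None] * rng.uniform(0.5, 1.5, n_queries) + rng.normal(size=(weeks, n_queries))

X, y = [], []
for t in range(n_lags, weeks):
    X.append(np.concatenate([flu[t - n_lags:t], queries[t]]))   # AR lags + current queries
    y.append(flu[t])
X, y = np.array(X), np.array(y)

split = int(0.8 * len(y))
model = Lasso(alpha=0.05, max_iter=10000).fit(X[:split], y[:split])
print("held-out R^2:", model.score(X[split:], y[split:]))
```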

  13. ACCURATE KAP METER CALIBRATION AS A PREREQUISITE FOR OPTIMISATION IN PROJECTION RADIOGRAPHY.

    Science.gov (United States)

    Malusek, A; Sandborg, M; Carlsson, G Alm

    2016-06-01

    Modern X-ray units register the air kerma-area product, PKA, with a built-in KAP meter. Some KAP meters show an energy-dependent bias comparable with the maximum uncertainty articulated by the IEC (25%), adversely affecting dose-optimisation processes. To correct for the bias, a reference KAP meter calibrated at a standards laboratory and the two calibration methods described here can be used to reduce this uncertainty: the calibration of an energy-independent dosemeter is transferred via a reference beam quality in the clinic, Q1, to the beam quality of interest, Q. Biases up to 35% of built-in KAP meter readings were noted. Energy-dependent calibration factors are needed for unbiased PKA measurements.

  14. Generation of accurate integral surfaces in time-dependent vector fields.

    Science.gov (United States)

    Garth, Christoph; Krishnan, Han; Tricoche, Xavier; Bobach, Tom; Joy, Kenneth I

    2008-01-01

    We present a novel approach for the direct computation of integral surfaces in time-dependent vector fields. As opposed to previous work, which we analyze in detail, our approach is based on a separation of integral surface computation into two stages: surface approximation and generation of a graphical representation. This allows us to overcome several limitations of existing techniques. We first describe an algorithm for surface integration that approximates a series of time lines using iterative refinement and computes a skeleton of the integral surface. In a second step, we generate a well-conditioned triangulation. Our approach allows a highly accurate treatment of very large time-varying vector fields in an efficient, streaming fashion. We examine the properties of the presented methods on several example datasets and perform a numerical study of its correctness and accuracy. Finally, we investigate some visualization aspects of integral surfaces. PMID:18988990

  15. Importance of local exact exchange potential in hybrid functionals for accurate excited states

    CERN Document Server

    Kim, Jaewook; Hwang, Sang-Yeon; Ryu, Seongok; Choi, Sunghwan; Kim, Woo Youn

    2016-01-01

    Density functional theory has been an essential analysis tool for both theoretical and experimental chemists since accurate hybrid functionals were developed. Here we propose a local hybrid method derived from the optimized effective potential (OEP) method and compare its distinct features with conventional nonlocal ones from the Hartree-Fock (HF) exchange operator. Both are formally exact for ground states and thus show similar accuracy for atomization energies and reaction barrier heights. For excited states, the local version yields virtual orbitals with N-electron character, while those of the nonlocal version have mixed characters between N- and (N+1)-electron orbitals. As a result, the orbital energy gaps from the former well approximate excitation energies with a small mean absolute error (MAE = 0.40 eV) for the Caricato benchmark set. The correction from time-dependent density functional theory with a simple local density approximation kernel further improves its accuracy by incorporating multi-config...

  16. Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method.

    Science.gov (United States)

    Zhao, Yan; Cao, Liangcai; Zhang, Hao; Kong, Dezhao; Jin, Guofan

    2015-10-01

    Fast calculation and correct depth cue are crucial issues in the calculation of computer-generated hologram (CGH) for high quality three-dimensional (3-D) display. An angular-spectrum based algorithm for layer-oriented CGH is proposed. Angular spectra from each layer are synthesized as a layer-corresponded sub-hologram based on the fast Fourier transform without paraxial approximation. The proposed method can avoid the huge computational cost of the point-oriented method and yield accurate predictions of the whole diffracted field compared with other layer-oriented methods. CGHs of versatile formats of 3-D digital scenes, including computed tomography and 3-D digital models, are demonstrated with precise depth performance and advanced image quality. PMID:26480062
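
    As a rough illustration of the angular-spectrum step the abstract refers to, the sketch below propagates a single depth layer to the hologram plane with the exact (non-paraxial) transfer function. The grid size, pixel pitch and wavelength are arbitrary choices, and the full layer-oriented CGH synthesis is not reproduced.

```python
# Minimal angular-spectrum propagation of one depth layer: FFT, multiply by the
# non-paraxial transfer function exp(i*kz*z), inverse FFT. Evanescent components are dropped.
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a sampled complex field by distance z (all units in metres)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)        # exact (non-paraxial) transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# One illustrative layer: a small square aperture propagated 5 cm at 532 nm.
layer = np.zeros((512, 512), dtype=complex)
layer[240:272, 240:272] = 1.0
sub_hologram = angular_spectrum_propagate(layer, 532e-9, 8e-6, 0.05)
print(np.abs(sub_hologram).max())
```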

  17. Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.

    Science.gov (United States)

    Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian

    2015-09-01

    Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography, and more recently, the use of single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations in varying degrees of signal noise. Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need to
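
    A conceptual sketch of the kind of solve the abstract describes is given below: each single-axis accelerometer at position r_i with sensing direction n_i measures n_i·(a + α×r_i + ω×(ω×r_i)); treating ω as known from the previous sample (the finite-difference linearization mentioned above) leaves a 6×6 linear system in the linear and angular accelerations. The sensor geometry and signals are invented, and this is not the paper's exact algorithm.

```python
# Solve for linear and angular acceleration from six single-axis accelerometer signals,
# with the centripetal term linearized using the angular velocity of the previous step.
import numpy as np

def solve_step(positions, directions, signals, omega_prev):
    """Recover linear acceleration a_cm and angular acceleration alpha for one sample."""
    A = np.zeros((6, 6))
    b = np.zeros(6)
    for i, (r, n, s) in enumerate(zip(positions, directions, signals)):
        A[i, :3] = n                          # coefficient of a_cm: n . a_cm
        A[i, 3:] = np.cross(r, n)             # n . (alpha x r) = alpha . (r x n)
        centripetal = np.cross(omega_prev, np.cross(omega_prev, r))
        b[i] = s - n @ centripetal            # linearized term moved to the right-hand side
    return np.linalg.solve(A, b)              # first three entries: a_cm, last three: alpha

# Hypothetical geometry: six sensors at distinct positions and orientations.
rng = np.random.default_rng(1)
positions = rng.normal(size=(6, 3)) * 0.05    # metres from the head's centre of mass
directions = rng.normal(size=(6, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

true_a = np.array([10.0, 0.0, 0.0])           # m/s^2
true_alpha = np.array([0.0, 50.0, 0.0])       # rad/s^2
omega = np.array([0.0, 0.0, 5.0])             # rad/s, assumed known from the previous step
signals = [n @ (true_a + np.cross(true_alpha, r) + np.cross(omega, np.cross(omega, r)))
           for r, n in zip(positions, directions)]

print(solve_step(positions, directions, signals, omega))   # recovers [true_a, true_alpha]
```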

  18. Pileup correction of microdosimetric spectra

    CERN Document Server

    Langen, K M; Lennox, A J; Kroc, T K; De Luca, P M

    2002-01-01

    Microdosimetric spectra were measured at the Fermilab neutron therapy facility using low pressure proportional counters operated in pulse mode. The neutron beam has a very low duty cycle (<0.1%) and consequently a high instantaneous dose rate which causes distortions of the microdosimetric spectra due to pulse pileup. The determination of undistorted spectra at this facility necessitated (i) the modified operation of the proton accelerator to reduce the instantaneous dose rate and (ii) the establishment of a computational procedure to correct the measured spectra for remaining pileup distortions. In support of the latter effort, two different pileup simulation algorithms using analytical and Monte-Carlo-based approaches were developed. While the analytical algorithm allows a detailed analysis of pileup processes it only treats two-pulse and three-pulse pileup and its validity is hence restricted. A Monte-Carlo-based pileup algorithm was developed that inherently treats all degrees of pileup. This algorithm...

  19. A Quantum Correction To Chaos

    CERN Document Server

    Fitzpatrick, A Liam

    2016-01-01

    We use results on Virasoro conformal blocks to study chaotic dynamics in CFT$_2$ at large central charge c. The Lyapunov exponent $\lambda_L$, which is a diagnostic for the early onset of chaos, receives $1/c$ corrections that may be interpreted as $\lambda_L = \frac{2\pi}{\beta}\left(1 + \frac{12}{c}\right)$. However, out of time order correlators receive other equally important $1/c$ suppressed contributions that do not have such a simple interpretation. We revisit the proof of a bound on $\lambda_L$ that emerges at large $c$, focusing on CFT$_2$ and explaining why our results do not conflict with the analysis leading to the bound. We also comment on relationships between chaos, scattering, causality, and bulk locality.

  20. Radiative corrections in bumblebee electrodynamics

    Directory of Open Access Journals (Sweden)

    R.V. Maluf

    2015-10-01

    Full Text Available We investigate some quantum features of the bumblebee electrodynamics in flat spacetimes. The bumblebee field is a vector field that leads to a spontaneous Lorentz symmetry breaking. For a smooth quadratic potential, the massless excitation (Nambu–Goldstone boson) can be identified as the photon, transversal to the vacuum expectation value of the bumblebee field. Besides, there is a massive excitation associated with the longitudinal mode whose presence leads to instability in the spectrum of the theory. By using the principal-value prescription, we show that no one-loop radiative corrections to the mass term are generated. Moreover, the bumblebee self-energy is not transverse, showing that the propagation of the longitudinal mode cannot be excluded from the effective theory.

  1. Correct Linearization of Einstein's Equations

    Directory of Open Access Journals (Sweden)

    Rabounski D.

    2006-04-01

    Full Text Available Routinely, Einstein’s equations are reduced to a wave form (linearly independent of the second derivatives of the space metric) in the absence of gravitation, the space rotation and Christoffel’s symbols. As shown herein, the origin of the problem is the use of the general covariant theory of measurement. Herein the wave form of Einstein’s equations is obtained in terms of Zelmanov’s chronometric invariants (physically observable projections on the observer’s time line and spatial section). The equations so obtained depend solely upon the second derivatives, even for gravitation, the space rotation and Christoffel’s symbols. The correct linearization proves that the Einstein equations are completely compatible with weak waves of the metric.

  2. Method of accurate grinding for single enveloping TI worm

    Institute of Scientific and Technical Information of China (English)

    SUN Yuehai; ZHENG Huijiang; BI Qingzhen; WANG Shuren

    2005-01-01

    TI worm drive consists of an involute helical gear and its enveloping hourglass worm. Accurate grinding of the TI worm is the key manufacturing technology for popularizing and applying TI worm gearing. According to the theory of gear meshing, the equations of the tooth surface of the worm drive are obtained, and the equation of the axial section profile of a grinding wheel that can accurately grind the TI worm is derived. Simultaneously, the relation of position and motion between the TI worm and the grinding wheel is expounded. The method for precisely grinding the single enveloping TI worm is thereby obtained.

  3. Equivalent method for accurate solution to linear interval equations

    Institute of Scientific and Technical Information of China (English)

    王冲; 邱志平

    2013-01-01

    Based on linear interval equations, an accurate interval finite element method for solving structural static problems with uncertain parameters in terms of optimization is discussed. On the premise of ensuring the consistency of solution sets, the original interval equations are equivalently transformed into some deterministic inequations. On this basis, calculating the structural displacement response with interval parameters is reduced to a number of deterministic linear optimization problems. The results are proved to be accurate with respect to the interval governing equations. Finally, a numerical example is given to demonstrate the feasibility and efficiency of the proposed method.
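
    As a toy illustration of turning an interval equation into deterministic problems, the sketch below bounds the displacements of a two-spring system by solving the deterministic system at every vertex of the parameter box (exact only under monotone dependence). The stiffness values are hypothetical, and this is not the optimization formulation of the paper.

```python
# Vertex enumeration over an interval parameter box for K(p) u = f: bound each
# displacement component by its minimum and maximum over all parameter-box vertices.
import itertools
import numpy as np

def stiffness(p):
    # Toy 2-DOF spring chain with two uncertain stiffnesses (hypothetical example).
    k1, k2 = p
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

f = np.array([0.0, 1.0])
intervals = [(900.0, 1100.0), (450.0, 550.0)]            # interval parameters

solutions = [np.linalg.solve(stiffness(v), f)
             for v in itertools.product(*intervals)]      # all parameter-box vertices
lower, upper = np.min(solutions, axis=0), np.max(solutions, axis=0)
print("displacement bounds per DOF:", list(zip(lower, upper)))
```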

  4. Fringe capacitance correction for a coaxial soil cell.

    Science.gov (United States)

    Pelletier, Mathew G; Viera, Joseph A; Schwartz, Robert C; Lascano, Robert J; Evett, Steven R; Green, Tim R; Wanjura, John D; Holt, Greg A

    2011-01-01

    Accurate measurement of moisture content is a prime requirement in hydrological, geophysical and biogeochemical research as well as for material characterization and process control. Within these areas, accurate measurements of the surface area and bound water content are becoming increasingly important for providing answers to many fundamental questions ranging from characterization of cotton fiber maturity, to accurate characterization of soil water content in soil water conservation research to bio-plant water utilization to chemical reactions and diffusions of ionic species across membranes in cells as well as in the dense suspensions that occur in surface films. One promising technique to address the increasing demands for higher accuracy water content measurements is utilization of electrical permittivity characterization of materials. This technique has enjoyed a strong following in the soil-science and geological community through measurements of apparent permittivity via time-domain-reflectometry (TDR) as well as in many process control applications. Recent research, however, indicates a need to increase the accuracy beyond that available from traditional TDR. The most logical pathway then becomes a transition from TDR based measurements to network analyzer measurements of absolute permittivity that will remove the adverse effects that high surface area soils and conductivity impart onto the measurements of apparent permittivity in traditional TDR applications. This research examines an observed experimental error for the coaxial probe, from which the modern TDR probe originated, which is hypothesized to be due to fringe capacitance. The research provides an experimental and theoretical basis for the cause of the error and provides a technique by which to correct the system to remove this source of error. To test this theory, a Poisson model of a coaxial cell was formulated to calculate the effective theoretical extra length caused by the fringe capacitance.

  5. Fringe Capacitance Correction for a Coaxial Soil Cell

    Directory of Open Access Journals (Sweden)

    John D. Wanjura

    2011-01-01

    Full Text Available Accurate measurement of moisture content is a prime requirement in hydrological, geophysical and biogeochemical research as well as for material characterization and process control. Within these areas, accurate measurements of the surface area and bound water content are becoming increasingly important for providing answers to many fundamental questions ranging from characterization of cotton fiber maturity, to accurate characterization of soil water content in soil water conservation research to bio-plant water utilization to chemical reactions and diffusions of ionic species across membranes in cells as well as in the dense suspensions that occur in surface films. One promising technique to address the increasing demands for higher accuracy water content measurements is utilization of electrical permittivity characterization of materials. This technique has enjoyed a strong following in the soil-science and geological community through measurements of apparent permittivity via time-domain-reflectometry (TDR) as well as in many process control applications. Recent research, however, indicates a need to increase the accuracy beyond that available from traditional TDR. The most logical pathway then becomes a transition from TDR based measurements to network analyzer measurements of absolute permittivity that will remove the adverse effects that high surface area soils and conductivity impart onto the measurements of apparent permittivity in traditional TDR applications. This research examines an observed experimental error for the coaxial probe, from which the modern TDR probe originated, which is hypothesized to be due to fringe capacitance. The research provides an experimental and theoretical basis for the cause of the error and provides a technique by which to correct the system to remove this source of error. To test this theory, a Poisson model of a coaxial cell was formulated to calculate the effective theoretical extra length caused by the

  6. Generator maintenance electrical testing : the importance of trending and accurate interpretation : a case study

    Energy Technology Data Exchange (ETDEWEB)

    Rebich, N.J. [AGT Services Inc., Amsterdam, NY (United States)

    2005-07-01

    Electrical testing and diagnostics of rotating electrical machinery are important for condition assessment that ensures reliable service. The testing generally involves a thorough visual inspection and an evaluation of both the electrical insulation and conductor circuit integrity using specific electrical test equipment and pre-established acceptance criteria. Most electric utilities have specialists that conduct maintenance testing both on and off line. They document and evaluate electrical test data to determine what actions are needed to ensure reliability of equipment. They also determine if there are any unwanted trends that can influence reliability. These trends are useful in planning and budgeting future maintenance outage repairs and in providing experience-based and accurate risk assessment for deferral action. This paper presents a case study of a 1958 vintage General Electric 166 MVA, 18 KV 45 psig hydrogen inner-gas cooled winding generator. A problem in the unit began in 1990, but it was not accurately diagnosed and corrected until 2004, at which time it had reached a condition of impending failure. This paper describes the initial testing and inspection routines, initial findings, repair and a summary of further investigation and repair considerations. It was suggested that the strand failure was caused by high cycle fatigue of the unsupported strands within the clip caps. A map of failure locations with respect to the winding circuit was constructed by AGT Services to determine if there was any correlation between failures in terms of the electrical operational characteristics of the machine. Review of the data showed that the failures were random with respect to the electrical circuit. Determining the extent and location of the damage made it possible to develop a reliable repair strategy while avoiding a complete stator rewind. It also made it possible to correct inherent original design deficiencies. The unit was returned to full service in 2004.

  7. Non-linear crustal corrections in high-resolution regional waveform seismic tomography

    Science.gov (United States)

    Marone, Federica; Romanowicz, Barbara

    2007-07-01

    We compare 3-D upper mantle anisotropic structures beneath the North American continent obtained using standard and improved crustal corrections in the framework of Non-linear Asymptotic Coupling Theory (NACT) applied to long period three component fundamental and higher mode surface waveform data. Our improved approach to correct for crustal structure in high-resolution regional waveform tomographic models goes beyond the linear perturbation approximation, and is therefore more accurate in accounting for large variations in Moho topography within short distances as observed, for instance, at ocean-continent margins. This improved methodology decomposes the shallow-layer correction into a linear and non-linear part and makes use of 1-D sensitivity kernels defined according to local tectonic structure, both for the forward computation and for the computation of sensitivity kernels for inversion. The comparison of the 3-D upper mantle anisotropic structures derived using the standard and improved crustal correction approaches shows that the model norm is not strongly affected. However, significant variations are observed in the retrieved 3-D perturbations. The largest differences in the velocity models are present below 250 km depth and not in the uppermost mantle, as would be expected. We suggest that inaccurate crustal corrections preferentially map into the least constrained part of the model and therefore accurate corrections for shallow-layer structure are essential to improve our knowledge of parts of the upper mantle where our data have the smallest sensitivity.

  8. Correction of a Depth-Dependent Lateral Distortion in 3D Super-Resolution Imaging.

    Directory of Open Access Journals (Sweden)

    Lina Carlini

    Full Text Available Three-dimensional (3D) localization-based super-resolution microscopy (SR) requires correction of aberrations to accurately represent 3D structure. Here we show how a depth-dependent lateral shift in the apparent position of a fluorescent point source, which we term `wobble`, results in warped 3D SR images and provide a software tool to correct this distortion. This system-specific lateral shift is typically > 80 nm across an axial range of ~ 1 μm. A theoretical analysis based on phase retrieval data from our microscope suggests that the wobble is caused by non-rotationally symmetric phase and amplitude aberrations in the microscope's pupil function. We then apply our correction to the bacterial cytoskeletal protein FtsZ in live bacteria and demonstrate that the corrected data more accurately represent the true shape of this vertically-oriented ring-like structure. We also include this correction method in a registration procedure for dual-color, 3D SR data and show that it improves target registration error (TRE) at the axial limits over an imaging depth of 1 μm, yielding TRE values of < 20 nm. This work highlights the importance of correcting aberrations in 3D SR to achieve high fidelity between the measurements and the sample.

  9. Accurate Mass Determinations in Decay Chains with Missing Energy

    OpenAIRE

    Cheng, Hsin-Chia; Engelhardt, Dalit; Gunion, John F.; Han, Zhenyu; McElrath, Bob

    2008-01-01

    Many beyond the Standard Model theories include a stable dark matter candidate that yields missing / invisible energy in collider detectors. If observed at the Large Hadron Collider, we must determine if its mass and other properties (and those of its partners) predict the correct dark matter relic density. We give a new procedure for determining its mass with small error.

  10. An accurate analytic description of neutrino oscillations in matter

    Energy Technology Data Exchange (ETDEWEB)

    Niro, Viviana [Max-Planck-Institut fuer Kernphysik, Heidelberg (Germany)

    2009-07-01

    We present a simple closed-form analytic expression for the probability of two-flavour neutrino oscillations in matter with an arbitrary density profile. Our formula is based on a perturbative expansion and allows an easy calculation of higher order corrections. We demonstrate the validity of our results using a few model density profiles, including the PREM density profile of the Earth.

  11. Fermions tunnelling with quantum gravity correction

    CERN Document Server

    Liu, Zhen-Yu

    2014-01-01

    Quantum gravity corrections are truly important for studying the tunnelling process of black holes. Based on the generalized uncertainty principle, we investigate the influence of quantum gravity, and the result tells us that the quantum gravity correction accelerates the evaporation of the black hole. Using the corrected Dirac equation in curved spacetime and the Hamilton-Jacobi method, we address the tunnelling of fermions in a 4-dimensional Schwarzschild spacetime. After solving the equation of motion of the spin-1/2 field, we obtain the corrected Hawking temperature. It turns out that the correction depends not only on the mass of the black hole but also on the mass of the emitted fermions. In our calculation, the quantum gravity correction explicitly accelerates the increase of the Hawking temperature during the radiation. This correction leads to faster evaporation of the black hole.

  12. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs advanced error correcting techniques.

  13. Empirical corrections for atmospheric neutral density derived from thermospheric models

    Science.gov (United States)

    Forootan, Ehsan; Kusche, Jürgen; Börger, Klaus; Henze, Christina; Löcher, Anno; Eickmans, Marius; Agena, Jens

    2016-04-01

    Accurately predicting satellite positions is a prerequisite for various applications from space situational awareness to precise orbit determination (POD). Given the fact that atmospheric drag represents a dominant influence on the position of low-Earth orbit objects, an accurate evaluation of thermospheric mass density is of great importance to low Earth orbital prediction. Over decades, various empirical atmospheric models have been developed to support computation of density changes within the atmosphere. The quality of these models is, however, restricted mainly due to the complexity of atmospheric density changes and the limited resolution of indices used to account for atmospheric temperature and neutral density changes caused by solar and geomagnetic activity. Satellite missions, such as Challenging Mini-Satellite Payload (CHAMP) and Gravity Recovery and Climate Experiment (GRACE), provide a direct measurement of non-conservative accelerations, acting on the surface of satellites. These measurements provide valuable data for improving our knowledge of thermosphere density and winds. In this paper we present two empirical frameworks to correct model-derived neutral density simulations by the along-track thermospheric density measurements of CHAMP and GRACE. First, empirical scale factors are estimated by analyzing daily CHAMP and GRACE acceleration measurements and are used to correct the density simulation of Jacchia and MSIS (Mass-Spectrometer-Incoherent-Scatter) thermospheric models. The evolution of daily scale factors is then related to solar and magnetic activity enabling their prediction in time. In the second approach, principal component analysis (PCA) is applied to extract the dominant modes of differences between CHAMP/GRACE observations and thermospheric model simulations. Afterwards an adaptive correction procedure is used to account for long-term and high-frequency differences. We conclude the study by providing recommendations on possible
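
    The first framework described above amounts to a daily least-squares scale factor between observed and model-derived densities; a minimal sketch with synthetic stand-ins for the CHAMP/GRACE-derived densities is shown below (array sizes, units and noise levels are arbitrary).

```python
# Daily least-squares scale factor s that best rescales model density to observations:
# s = <obs, model> / <model, model>, computed independently for each day.
import numpy as np

rng = np.random.default_rng(2)
n_days, samples_per_day = 30, 96

model_rho = rng.uniform(1e-12, 5e-12, size=(n_days, samples_per_day))   # kg/m^3, synthetic
true_scale = 1.0 + 0.3 * np.sin(np.linspace(0, 3, n_days))              # synthetic "truth"
obs_rho = model_rho * true_scale[:, None] * rng.normal(1.0, 0.05, size=model_rho.shape)

scale = (obs_rho * model_rho).sum(axis=1) / (model_rho ** 2).sum(axis=1)
print(scale[:5])   # daily factors that could then be related to solar/geomagnetic indices
```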

  14. Comparison and Analysis of Geometric Correction Models of Spaceborne SAR.

    Science.gov (United States)

    Jiang, Weihao; Yu, Anxi; Dong, Zhen; Wang, Qingsong

    2016-06-25

    Following the development of synthetic aperture radar (SAR), SAR images have become increasingly common. Many researchers have conducted large studies on geolocation models, but little work has been conducted on the available models for the geometric correction of SAR images of different terrain. To address the terrain issue, four different models were compared and are described in this paper: a rigorous range-doppler (RD) model, a rational polynomial coefficients (RPC) model, a revised polynomial (PM) model and an elevation derivation (EDM) model. The results of comparisons of the geolocation capabilities of the models show that a proper model for a SAR image of a specific terrain can be determined. A solution table was obtained to recommend a suitable model for users. Three TerraSAR-X images, two ALOS-PALSAR images and one Envisat-ASAR image were used for the experiment, including flat terrain and mountain terrain SAR images as well as two large area images. Geolocation accuracies of the models for different terrain SAR images were computed and analyzed. The comparisons of the models show that the RD model was accurate but was the least efficient; therefore, it is not the ideal model for real-time implementations. The RPC model is sufficiently accurate and efficient for the geometric correction of SAR images of flat terrain, whose precision is below 0.001 pixels. The EDM model is suitable for the geolocation of SAR images of mountainous terrain, and its precision can reach 0.007 pixels. Although the PM model does not produce results as precise as the other models, its efficiency is excellent and its potential should not be underestimated. With respect to the geometric correction of SAR images over large areas, the EDM model has higher accuracy under one pixel, whereas the RPC model consumes one third of the time of the EDM model.

  15. Comparison and Analysis of Geometric Correction Models of Spaceborne SAR

    Directory of Open Access Journals (Sweden)

    Weihao Jiang

    2016-06-01

    Full Text Available Following the development of synthetic aperture radar (SAR, SAR images have become increasingly common. Many researchers have conducted large studies on geolocation models, but little work has been conducted on the available models for the geometric correction of SAR images of different terrain. To address the terrain issue, four different models were compared and are described in this paper: a rigorous range-doppler (RD model, a rational polynomial coefficients (RPC model, a revised polynomial (PM model and an elevation derivation (EDM model. The results of comparisons of the geolocation capabilities of the models show that a proper model for a SAR image of a specific terrain can be determined. A solution table was obtained to recommend a suitable model for users. Three TerraSAR-X images, two ALOS-PALSAR images and one Envisat-ASAR image were used for the experiment, including flat terrain and mountain terrain SAR images as well as two large area images. Geolocation accuracies of the models for different terrain SAR images were computed and analyzed. The comparisons of the models show that the RD model was accurate but was the least efficient; therefore, it is not the ideal model for real-time implementations. The RPC model is sufficiently accurate and efficient for the geometric correction of SAR images of flat terrain, whose precision is below 0.001 pixels. The EDM model is suitable for the geolocation of SAR images of mountainous terrain, and its precision can reach 0.007 pixels. Although the PM model does not produce results as precise as the other models, its efficiency is excellent and its potential should not be underestimated. With respect to the geometric correction of SAR images over large areas, the EDM model has higher accuracy under one pixel, whereas the RPC model consumes one third of the time of the EDM model.

  16. Comparison and Analysis of Geometric Correction Models of Spaceborne SAR.

    Science.gov (United States)

    Jiang, Weihao; Yu, Anxi; Dong, Zhen; Wang, Qingsong

    2016-01-01

    Following the development of synthetic aperture radar (SAR), SAR images have become increasingly common. Many researchers have conducted large studies on geolocation models, but little work has been conducted on the available models for the geometric correction of SAR images of different terrain. To address the terrain issue, four different models were compared and are described in this paper: a rigorous range-doppler (RD) model, a rational polynomial coefficients (RPC) model, a revised polynomial (PM) model and an elevation derivation (EDM) model. The results of comparisons of the geolocation capabilities of the models show that a proper model for a SAR image of a specific terrain can be determined. A solution table was obtained to recommend a suitable model for users. Three TerraSAR-X images, two ALOS-PALSAR images and one Envisat-ASAR image were used for the experiment, including flat terrain and mountain terrain SAR images as well as two large area images. Geolocation accuracies of the models for different terrain SAR images were computed and analyzed. The comparisons of the models show that the RD model was accurate but was the least efficient; therefore, it is not the ideal model for real-time implementations. The RPC model is sufficiently accurate and efficient for the geometric correction of SAR images of flat terrain, whose precision is below 0.001 pixels. The EDM model is suitable for the geolocation of SAR images of mountainous terrain, and its precision can reach 0.007 pixels. Although the PM model does not produce results as precise as the other models, its efficiency is excellent and its potential should not be underestimated. With respect to the geometric correction of SAR images over large areas, the EDM model has higher accuracy under one pixel, whereas the RPC model consumes one third of the time of the EDM model. PMID:27347973

  17. Communication: Accurate higher-order van der Waals coefficients between molecules from a model dynamic multipole polarizability

    International Nuclear Information System (INIS)

    Due to the absence of the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of the vdW correction. However, the vdW correction based on the leading-order coefficient C6 alone can only achieve limited accuracy, while accurate modeling of higher-order coefficients remains a formidable task, due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C8 and C10 between small molecules. We find that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C8 and 7% for C10. Inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry

  18. Accurate momentum transfer cross section for the attractive Yukawa potential

    OpenAIRE

    Khrapak, Sergey

    2014-01-01

    An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with the numerical results to better than 2% in the regime relevant for ion-particle collisions in complex (dusty) plasmas.

  19. Accurate momentum transfer cross section for the attractive Yukawa potential

    OpenAIRE

    Khrapak, S. A.

    2014-01-01

    An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with the numerical results to better than $\pm 2\%$ in the regime relevant for ion-particle collisions in complex (dusty) plasmas.

  20. $H_{2}^{+}$ ion in a strong magnetic field: an accurate calculation

    CERN Document Server

    López, J C; Turbiner, A V

    1997-01-01

    Using a unique trial function we perform an accurate calculation of the ground state $1\sigma_g$ of the hydrogenic molecular ion $H^+_2$ in a constant uniform magnetic field ranging from $0$ to $10^{13}$ G. We show that this trial function also makes it possible to study the negative parity ground state $1\sigma_u$.

  1. Is a Writing Sample Necessary for "Accurate Placement"?

    Science.gov (United States)

    Sullivan, Patrick; Nielsen, David

    2009-01-01

    The scholarship about assessment for placement is extensive and notoriously ambiguous. Foremost among the questions that continue to be unresolved in this scholarship is this one: Is a writing sample necessary for "accurate placement"? Using a robust data sample of student assessment essays and ACCUPLACER test scores, we put this question to the…

  2. A Simple and Accurate Method for Measuring Enzyme Activity.

    Science.gov (United States)

    Yip, Din-Yan

    1997-01-01

    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  3. Accurate Period Approximation for Any Simple Pendulum Amplitude

    Institute of Scientific and Technical Information of China (English)

    XUE De-Sheng; ZHOU Zhao; GAO Mei-Zhen

    2012-01-01

    Accurate approximate analytical formulae of the pendulum period composed of a few elementary functions for any amplitude are constructed. Based on an approximation of the elliptic integral, two new logarithmic formulae for large amplitude close to 180° are obtained. Considering the trigonometric function modulation results from the dependence of relative error on the amplitude, we realize accurate approximation period expressions for any amplitude between 0 and 180°. A relative error less than 0.02% is achieved for any amplitude. This kind of modulation is also effective for other large-amplitude logarithmic approximation expressions.
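
    For orientation, the sketch below compares the exact pendulum period, written with the complete elliptic integral, against the standard large-amplitude logarithmic asymptote T/T0 ≈ (2/π) ln(4/cos(θ0/2)); this is the textbook asymptote, not necessarily the modulated formulae constructed in the record above, and the amplitudes are arbitrary.

```python
# Exact pendulum period T/T0 = (2/pi) K(k^2), k = sin(theta0/2), versus the
# large-amplitude logarithmic asymptote; the asymptote improves as theta0 -> 180 degrees.
import numpy as np
from scipy.special import ellipk

theta0 = np.deg2rad(np.array([30, 90, 150, 170, 179]))   # amplitudes in degrees
k = np.sin(theta0 / 2)

exact = (2 / np.pi) * ellipk(k**2)                        # scipy's ellipk takes m = k^2
log_approx = (2 / np.pi) * np.log(4 / np.cos(theta0 / 2))

for a, e, l in zip(np.rad2deg(theta0), exact, log_approx):
    print(f"{a:5.0f} deg  exact {e:7.4f}  log-approx {l:7.4f}  rel. err {abs(l - e) / e:.3%}")
```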

  4. Fast and Accurate Residential Fire Detection Using Wireless Sensor Networks

    NARCIS (Netherlands)

    Bahrepour, Majid; Meratnia, Nirvana; Havinga, Paul J.M.

    2010-01-01

    Prompt and accurate residential fire detection is important for on-time fire extinguishing and consequently reducing damages and life losses. To detect fire, sensors are needed to measure the environmental parameters, and algorithms are required to decide about the occurrence of fire. Recently, wireless s

  5. Accurate momentum transfer cross section for the attractive Yukawa potential

    Energy Technology Data Exchange (ETDEWEB)

    Khrapak, S. A., E-mail: Sergey.Khrapak@dlr.de [Forschungsgruppe Komplexe Plasmen, Deutsches Zentrum für Luft- und Raumfahrt, Oberpfaffenhofen (Germany)

    2014-04-15

    An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with the numerical results to better than ±2% in the regime relevant for ion-particle collisions in complex (dusty) plasmas.

  6. Novel multi-beam radiometers for accurate ocean surveillance

    DEFF Research Database (Denmark)

    Cappellin, C.; Pontoppidan, K.; Nielsen, P. H.;

    2014-01-01

    Novel antenna architectures for real aperture multi-beam radiometers providing high resolution and high sensitivity for accurate sea surface temperature (SST) and ocean vector wind (OVW) measurements are investigated. On the basis of the radiometer requirements set for future SST/OVW missions...

  7. Accurate analysis of planar metamaterials using the RLC theory

    DEFF Research Database (Denmark)

    Malureanu, Radu; Lavrinenko, Andrei

    2008-01-01

    In this work we will present an accurate description of the metallic pads’ response using RLC theory. In order to calculate such a response we take into account several factors including the mutual inductances, a precise formula for determining the capacitance and also the pads’ resistance considering the...

  8. Accurate segmentation of dense nanoparticles by partially discrete electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Roelandts, T., E-mail: tom.roelandts@ua.ac.be [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Batenburg, K.J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, 1098 XG Amsterdam (Netherlands); Biermans, E. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Kuebel, C. [Institute of Nanotechnology, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Sijbers, J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium)

    2012-03-15

    Accurate segmentation of nanoparticles within various matrix materials is a difficult problem in electron tomography. Due to artifacts related to image series acquisition and reconstruction, global thresholding of reconstructions computed by established algorithms, such as weighted backprojection or SIRT, may result in unreliable and subjective segmentations. In this paper, we introduce the Partially Discrete Algebraic Reconstruction Technique (PDART) for computing accurate segmentations of dense nanoparticles of constant composition. The particles are segmented directly by the reconstruction algorithm, while the surrounding regions are reconstructed using continuously varying gray levels. As no properties are assumed for the other compositions of the sample, the technique can be applied to any sample where dense nanoparticles must be segmented, regardless of the surrounding compositions. For both experimental and simulated data, it is shown that PDART yields significantly more accurate segmentations than those obtained by optimal global thresholding of the SIRT reconstruction. - Highlights: • We present a novel reconstruction method for partially discrete electron tomography. • It accurately segments dense nanoparticles directly during reconstruction. • The gray level to use for the nanoparticles is determined objectively. • The method expands the set of samples for which discrete tomography can be applied.

  9. Efficient and accurate sound propagation using adaptive rectangular decomposition.

    Science.gov (United States)

    Raghuvanshi, Nikunj; Narain, Rahul; Lin, Ming C

    2009-01-01

    Accurate sound rendering can add significant realism to complement visual display in interactive applications, as well as facilitate acoustic predictions for many engineering applications, like accurate acoustic analysis for architectural design. Numerical simulation can provide this realism most naturally by modeling the underlying physics of wave propagation. However, wave simulation has traditionally posed a tough computational challenge. In this paper, we present a technique which relies on an adaptive rectangular decomposition of 3D scenes to enable efficient and accurate simulation of sound propagation in complex virtual environments. It exploits the known analytical solution of the Wave Equation in rectangular domains, and utilizes an efficient implementation of the Discrete Cosine Transform on Graphics Processors (GPU) to achieve at least a 100-fold performance gain compared to a standard Finite-Difference Time-Domain (FDTD) implementation with comparable accuracy, while also being 10-fold more memory efficient. Consequently, we are able to perform accurate numerical acoustic simulation on large, complex scenes in the kilohertz range. To the best of our knowledge, it was not previously possible to perform such simulations on a desktop computer. Our work thus enables acoustic analysis on large scenes and auditory display for complex virtual environments on commodity hardware. PMID:19590105
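
    The core of the rectangular-decomposition idea, in a single partition with rigid walls, is that a DCT diagonalizes the wave equation so each cosine mode can be advanced analytically; a minimal sketch of that modal update is given below. Grid size, time step and sound speed are arbitrary, and the adaptive decomposition, interface handling and GPU implementation are not reproduced.

```python
# Modal wave-equation update in one rigid-walled rectangular partition: one DCT per step,
# then each cosine mode advances exactly via P_next = 2*cos(omega*dt)*P_curr - P_prev.
import numpy as np
from scipy.fft import dctn, idctn

nx, ny, h, c, dt = 64, 64, 0.1, 343.0, 1e-4              # grid, cell size, sound speed, step

kx = np.pi * np.arange(nx) / (nx * h)
ky = np.pi * np.arange(ny) / (ny * h)
omega = c * np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2) # modal angular frequencies

p = np.zeros((nx, ny))
p[nx // 2, ny // 2] = 1.0                                 # initial pressure impulse
P_prev = dctn(p, norm="ortho")
P_curr = P_prev.copy()                                    # zero initial velocity

for _ in range(200):
    P_next = 2 * np.cos(omega * dt) * P_curr - P_prev
    P_prev, P_curr = P_curr, P_next

print(idctn(P_curr, norm="ortho").max())                  # pressure field after 200 steps
```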

  10. Practical schemes for accurate forces in quantum Monte Carlo

    NARCIS (Netherlands)

    Moroni, S.; Saccani, S.; Filippi, C.

    2014-01-01

    While the computation of interatomic forces has become a well-established practice within variational Monte Carlo (VMC), the use of the more accurate Fixed-Node Diffusion Monte Carlo (DMC) method is still largely limited to the computation of total energies on structures obtained at a lower level of

  11. Combined registration and motion correction of longitudinal retinal OCT data

    Science.gov (United States)

    Lang, Andrew; Carass, Aaron; Al-Louzi, Omar; Bhargava, Pavan; Solomon, Sharon D.; Calabresi, Peter A.; Prince, Jerry L.

    2016-03-01

    Optical coherence tomography (OCT) has become an important modality for examination of the eye. To measure layer thicknesses in the retina, automated segmentation algorithms are often used, producing accurate and reliable measurements. However, subtle changes over time are difficult to detect since the magnitude of the change can be very small. Thus, tracking disease progression over short periods of time is difficult. Additionally, unstable eye position and motion alter the consistency of these measurements, even in healthy eyes. Thus, both registration and motion correction are important for processing longitudinal data of a specific patient. In this work, we propose a method to jointly do registration and motion correction. Given two scans of the same patient, we initially extract blood vessel points from a fundus projection image generated on the OCT data and estimate point correspondences. Due to saccadic eye movements during the scan, motion is often very abrupt, producing a sparse set of large displacements between successive B-scan images. Thus, we use lasso regression to estimate the movement of each image. By iterating between this regression and a rigid point-based registration, we are able to simultaneously align and correct the data. With longitudinal data from 39 healthy control subjects, our method improves the registration accuracy by 43% compared to simple alignment to the fovea and 8% when using point-based registration only. We also show improved consistency of repeated total retina thickness measurements.
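
    A hedged sketch of the lasso idea mentioned above is shown below: per-B-scan lateral offsets are modeled as cumulative sums of per-frame jumps, and the L1 penalty keeps all but a few saccadic jumps at zero. The data are synthetic, and this is not the authors' joint registration pipeline.

```python
# Sparse estimation of saccadic jumps from noisy per-frame offsets via lasso regression.
import numpy as np
from sklearn.linear_model import Lasso

n_frames = 200
rng = np.random.default_rng(3)

jumps = np.zeros(n_frames)
jumps[[50, 120, 160]] = [8.0, -5.0, 3.0]                  # sparse saccadic jumps (pixels)
observed = np.cumsum(jumps) + rng.normal(0, 0.3, n_frames)

# Design matrix: the offset of frame t is the sum of all jumps up to t (lower-triangular ones).
X = np.tril(np.ones((n_frames, n_frames)))
fit = Lasso(alpha=0.05, fit_intercept=False, max_iter=10000).fit(X, observed)
print(np.nonzero(np.abs(fit.coef_) > 0.5)[0])             # frames flagged as saccades
```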

  12. Importance of Attenuation Correction (AC for Small Animal PET Imaging

    Directory of Open Access Journals (Sweden)

    Henrik H. El Ali

    2012-10-01

    Full Text Available The purpose of this study was to investigate whether a correction for annihilation photon attenuation in small objects such as mice is necessary. The attenuation recovery for specific organs and subcutaneous tumors was investigated. A comparison between different attenuation correction methods was performed. Methods: Ten NMRI nude mice with subcutaneous implantation of human breast cancer cells (MCF-7) were scanned consecutively in small animal PET and CT scanners (MicroPET™ Focus 120 and ImTek’s MicroCAT™ II). CT-based AC, PET-based AC and uniform AC methods were compared. Results: The activity concentration in the same organ with and without AC revealed an overall attenuation recovery of 9–21% for MAP reconstructed images, i.e., SUV without AC could underestimate the true activity at this level. For subcutaneous tumors, the attenuation was 13 ± 4% (9–17%), for kidneys 20 ± 1% (19–21%), and for bladder 18 ± 3% (15–21%). The FBP reconstructed images showed almost the same attenuation levels as the MAP reconstructed images for all organs. Conclusions: The annihilation photons suffer attenuation even in small subjects. Both PET-based and CT-based methods are adequate for AC. The amplitude of the AC recovery could be overestimated using the uniform map. Therefore, application of a global attenuation factor on PET data might not be accurate for attenuation correction.
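
    As a quick worked example of the attenuation-recovery figure quoted above, the relative difference between corrected and uncorrected activity concentrations can be computed directly; the two values below are illustrative, not taken from the study.

```python
# Attenuation recovery = (AC - nonAC) / AC, i.e. the fraction by which the uncorrected
# value underestimates the attenuation-corrected activity concentration.
ac_activity = 520.0      # kBq/mL with attenuation correction (hypothetical)
nac_activity = 440.0     # kBq/mL without attenuation correction (hypothetical)

recovery = (ac_activity - nac_activity) / ac_activity
print(f"attenuation recovery: {recovery:.1%}")   # ~15%, within the 9-21% range reported
```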

  13. Accurate and systematically improvable density functional theory embedding for correlated wavefunctions

    Energy Technology Data Exchange (ETDEWEB)

    Goodpaster, Jason D.; Barnes, Taylor A.; Miller, Thomas F., E-mail: tfm@caltech.edu [Division of Chemistry and Chemical Engineering, California Institute of Technology, Pasadena, California 91125 (United States); Manby, Frederick R., E-mail: fred.manby@bristol.ac.uk [Centre for Computational Chemistry, School of Chemistry, University of Bristol, Bristol BS8 ITS (United Kingdom)

    2014-05-14

    We analyze the sources of error in quantum embedding calculations in which an active subsystem is treated using wavefunction methods, and the remainder using density functional theory. We show that the embedding potential felt by the electrons in the active subsystem makes only a small contribution to the error of the method, whereas the error in the nonadditive exchange-correlation energy dominates. We test an MP2 correction for this term and demonstrate that the corrected embedding scheme accurately reproduces wavefunction calculations for a series of chemical reactions. Our projector-based embedding method uses localized occupied orbitals to partition the system; as with other local correlation methods, abrupt changes in the character of the localized orbitals along a reaction coordinate can lead to discontinuities in the embedded energy, but we show that these discontinuities are small and can be systematically reduced by increasing the size of the active region. Convergence of reaction energies with respect to the size of the active subsystem is shown to be rapid for all cases where the density functional treatment is able to capture the polarization of the environment, even in conjugated systems, and even when the partition cuts across a double bond.

  14. Accurate membrane tracing in three-dimensional reconstructions from electron cryotomography data

    International Nuclear Information System (INIS)

    The connection between the extracellular matrix and the cell is of major importance for mechanotransduction and mechanobiology. Electron cryo-tomography, in principle, enables better than nanometer-resolution analysis of these connections, but restrictions of data collection geometry hamper the accurate extraction of the ventral membrane location from these tomograms, an essential prerequisite for the analysis. Here, we introduce a novel membrane tracing strategy that enables ventral membrane extraction at high fidelity and extraordinary accuracy. The approach is based on detecting the boundary between the inside and the outside of the cell rather than trying to explicitly trace the membrane. Simulation studies show that over 99% of the membrane can be correctly modeled using this principle and the excellent match of visually identifiable membrane stretches with the extracted boundary of experimental data indicates that the accuracy is comparable for actual data. - Highlights: • The connection between the ECM and the cell is of major importance. • Electron cryo-tomography provides nanometer-resolution information. • Data collection geometry hampers extraction of membranes from tomograms. • We introduce a novel membrane tracing strategy allowing high fidelity extraction. • Simulations show that over 99% of the membrane can be correctly modeled this way

  15. Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work.

    Science.gov (United States)

    Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet

    2016-05-01

    Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree planting work. 10-min seated rest periods were imposed during the workday to estimate the HR thermal component (ΔHRT) per Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step-test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. The VO2 values estimated from measured HR and from corrected HR (thermal component removed) were compared to VO2 measured during work and rest. Varied levels of the HR thermal component (ΔHRTavg range: 0-38 bpm), originating from a wide range of ambient thermal conditions, thermal clothing insulation worn, and physical load exerted during work, were observed. Using raw HR significantly overestimated measured work VO2 by 30% on average (range: 1%-64%). 74% of the VO2 prediction error variance was explained by the HR thermal component. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which can be implemented easily by the practitioner with inexpensive instruments. PMID:26851474
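
    A minimal sketch of the correction described above: the thermal component estimated from the rest pauses is subtracted from the working heart rate before converting HR to VO2 through the individual HR-VO2 relation from the step test. All numbers below are hypothetical.

```python
# Convert heart rate to VO2 through an individual HR-VO2 calibration line, before and
# after removing the thermal component of heart rate.
import numpy as np

# Individual HR-VO2 calibration from the submaximal step test (hypothetical fit).
step_hr = np.array([85, 100, 115, 130])            # beats per minute
step_vo2 = np.array([0.9, 1.3, 1.7, 2.1])          # litres per minute
slope, intercept = np.polyfit(step_hr, step_vo2, 1)

work_hr = 135.0                                     # HR measured during a work bout
delta_hr_thermal = 18.0                             # thermal component from the seated rest pause

vo2_raw = slope * work_hr + intercept
vo2_corrected = slope * (work_hr - delta_hr_thermal) + intercept
print(f"raw estimate {vo2_raw:.2f} L/min, corrected {vo2_corrected:.2f} L/min")
```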

  16. Accurate membrane tracing in three-dimensional reconstructions from electron cryotomography data

    Energy Technology Data Exchange (ETDEWEB)

    Page, Christopher; Hanein, Dorit; Volkmann, Niels, E-mail: niels@burnham.org

    2015-08-15

    The connection between the extracellular matrix and the cell is of major importance for mechanotransduction and mechanobiology. Electron cryo-tomography, in principle, enables better than nanometer-resolution analysis of these connections, but restrictions of data collection geometry hamper the accurate extraction of the ventral membrane location from these tomograms, an essential prerequisite for the analysis. Here, we introduce a novel membrane tracing strategy that enables ventral membrane extraction at high fidelity and extraordinary accuracy. The approach is based on detecting the boundary between the inside and the outside of the cell rather than trying to explicitly trace the membrane. Simulation studies show that over 99% of the membrane can be correctly modeled using this principle and the excellent match of visually identifiable membrane stretches with the extracted boundary of experimental data indicates that the accuracy is comparable for actual data. - Highlights: • The connection between the ECM and the cell is of major importance. • Electron cryo-tomography provides nanometer-resolution information. • Data collection geometry hampers extraction of membranes from tomograms. • We introduce a novel membrane tracing strategy allowing high fidelity extraction. • Simulations show that over 99% of the membrane can be correctly modeled this way.

  17. Accurate computation of Stokes flow driven by an open immersed interface

    Science.gov (United States)

    Li, Yi; Layton, Anita T.

    2012-06-01

    We present numerical methods for computing two-dimensional Stokes flow driven by forces singularly supported along an open, immersed interface. Two second-order accurate methods are developed: one for accurately evaluating boundary integral solutions at a point, and another for computing Stokes solution values on a rectangular mesh. We first describe a method for computing singular or nearly singular integrals, such as a double layer potential due to sources on a curve in the plane, evaluated at a point on or near the curve. To improve the accuracy of the numerical quadrature, we add corrections for the errors arising from discretization, which are found by asymptotic analysis. When used to solve the Stokes equations with sources on an open, immersed interface, the method generates second-order approximations, for both the pressure and the velocity, and preserves the jumps in the solutions and their derivatives across the boundary. We then combine the method with a mesh-based solver to yield a hybrid method for computing Stokes solutions at N² grid points on a rectangular grid. Numerical results are presented which exhibit second-order accuracy. To demonstrate its applicability, we use the method to simulate fluid dynamics induced by the beating motion of a cilium. The method preserves the sharp jumps in the Stokes solution and their derivatives across the immersed boundary. Model results illustrate the distinct hydrodynamic effects generated by the effective stroke and by the recovery stroke of the ciliary beat cycle.

  18. RNASequel: accurate and repeat tolerant realignment of RNA-seq reads.

    Science.gov (United States)

    Wilson, Gavin W; Stein, Lincoln D

    2015-10-15

    RNA-seq is a key technology for understanding the biology of the cell because of its ability to profile transcriptional and post-transcriptional regulation at single-nucleotide resolution. Compared to DNA sequencing alignment algorithms, RNA-seq alignment algorithms have a diminished ability to accurately detect and map base pair substitutions, gaps, discordant pairs and repetitive regions. These shortcomings adversely affect experiments that require a high degree of accuracy, notably the ability to detect RNA editing. We have developed RNASequel, a software package that runs as a post-processing step in conjunction with an RNA-seq aligner and systematically corrects common alignment artifacts. Its key innovations are a two-pass splice junction alignment system that includes de novo splice junctions and the use of an empirically determined estimate of the fragment size distribution when resolving read pairs. We demonstrate that RNASequel produces improved alignments when used in conjunction with STAR or Tophat2 using two simulated datasets. We then show that RNASequel improves the identification of adenosine to inosine RNA editing sites on biological datasets. This software will be useful in applications requiring the accurate identification of variants in RNA sequencing data, the discovery of RNA editing sites and the analysis of alternative splicing.

  19. Rapid and accurate prediction and scoring of water molecules in protein binding sites.

    Directory of Open Access Journals (Sweden)

    Gregory A Ross

    Full Text Available Water plays a critical role in ligand-protein interactions. However, it is still challenging to predict accurately not only where water molecules prefer to bind, but also which of those water molecules might be displaceable. The latter is often seen as a route to optimizing affinity of potential drug candidates. Using a protocol we call WaterDock, we show that the freely available AutoDock Vina tool can be used to predict accurately the binding sites of water molecules. WaterDock was validated using data from X-ray crystallography, neutron diffraction and molecular dynamics simulations and correctly predicted 97% of the water molecules in the test set. In addition, we combined data-mining, heuristic and machine learning techniques to develop probabilistic water molecule classifiers. When applied to WaterDock predictions in the Astex Diverse Set of protein ligand complexes, we could identify whether a water molecule was conserved or displaced to an accuracy of 75%. A second model predicted whether water molecules were displaced by polar groups or by non-polar groups to an accuracy of 80%. These results should prove useful for anyone wishing to undertake rational design of new compounds where the displacement of water molecules is being considered as a route to improved affinity.

  20. Towards an accurate model of the redshift space clustering of halos in the quasilinear regime

    CERN Document Server

    Reid, Beth A

    2011-01-01

    Observations of redshift-space distortions in spectroscopic galaxy surveys offer an attractive method for measuring the build-up of cosmological structure, which depends both on the expansion rate of the Universe and our theory of gravity. Galaxies occupy dark matter halos, whose redshift space clustering has a complex dependence on bias that cannot be inferred from the behavior of matter. We identify two distinct corrections on quasilinear scales (~ 30-80 Mpc/h): the non-linear mapping between real and redshift space positions, and the non-linear suppression of power in the velocity divergence field. We model the first non-perturbatively using the scale-dependent Gaussian streaming model, which we show is accurate for the monopole (quadrupole) halo correlation function on scales s > 10 (s > 25) Mpc/h. We use perturbation theory to predict the real space pairwise halo velocity statistics. Our fully analytic model is accurate at the 2 per cent level only on scales s > 40 Mpc/h. Recent models that neglect the correctio...
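
    For reference, the scale-dependent Gaussian streaming model is commonly written as the following convolution of the real-space correlation function with a Gaussian pairwise-velocity distribution; this is the standard textbook form rather than an expression quoted from the paper, and the symbol names are the conventional ones.

```latex
1+\xi_s(s_\perp,s_\parallel)=\int_{-\infty}^{\infty}
\frac{1+\xi(r)}{\sqrt{2\pi\,\sigma_{12}^2(r,\mu)}}\,
\exp\!\left\{-\frac{\left[s_\parallel-y-\mu\,v_{12}(r)\right]^2}{2\,\sigma_{12}^2(r,\mu)}\right\}dy,
\qquad r=\sqrt{s_\perp^2+y^2},\quad \mu=\frac{y}{r},
```

    where ξ(r) is the real-space halo correlation function, v12(r) the mean pairwise infall velocity and σ12²(r, μ) the pairwise velocity dispersion, the latter two taken from perturbation theory as described in the abstract above.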

  1. Fast and accurate solution of the Poisson equation in an immersed setting

    CERN Document Server

    Marques, Alexandre Noll; Rosales, Rodolfo Ruben

    2014-01-01

    We present a fast and accurate algorithm for the Poisson equation in complex geometries, using regular Cartesian grids. We consider a variety of configurations, including Poisson equations with interfaces across which the solution is discontinuous (of the type arising in multi-fluid flows). The algorithm is based on a combination of the Correction Function Method (CFM) and Boundary Integral Methods (BIM). Interface and boundary conditions can be treated in a fast and accurate manner using boundary integral equations, and the associated BIM. Unfortunately, BIM can be costly when the solution is needed everywhere in a grid, e.g. fluid flow problems. We use the CFM to circumvent this issue. The solution from the BIM is used to rewrite the problem as a series of Poisson equations in rectangular domains --- which requires the BIM solution at interfaces/boundaries only. These Poisson equations involve discontinuities at interfaces, of the type that the CFM can handle. Hence we use the CFM to solve them (to high ord...

  2. Using fatty acids to fingerprint biofilm communities: a means to quickly and accurately assess stream quality.

    Science.gov (United States)

    DeForest, Jared L; Drerup, Samuel A; Vis, Morgan L

    2016-05-01

    The assessment of lotic ecosystem quality plays an essential role in determining the extent of environmental stress and the effectiveness of restoration activities. Methods that incorporate biological properties are considered ideal because they provide a direct assessment of the end goal of a vigorous biological community. Our primary objective was to use biofilm lipids to develop an accurate biomonitoring tool that requires little expertise and time to facilitate assessment. A model was created from the fatty acid biomarkers most associated with a predetermined stream quality classification, exceptional warm water habitat (EWH), warm water habitat (WWH), and limited resource (LR-AMD), and validated along a gradient of known stream qualities. The fatty acid fingerprint of the biofilm community was statistically different (P = 0.03) and was generally unique to the recognized stream quality. One striking difference was that essential fatty acids (DHA, EPA, and ARA) were absent from LR-AMD and recovered only from WWH and EWH, with 45% more in EWH than WWH. When the model was tested independently along a stream quality gradient, it correctly categorized six of the seven sites, the remaining site yielding no match because of low sample biomass. These results provide compelling evidence that biofilm fatty acid analysis can be a sensitive, accurate, and cost-effective biomonitoring tool. We envision future studies expanding this research toward more in-depth assessment of remediation efforts, determination of the geographic area over which the method applies, and the addition of multiple stressors, with the possibility of distinguishing among stressors. PMID:27061804

  3. A Novel Method for Accurate Operon Predictions in All SequencedProkaryotes

    Energy Technology Data Exchange (ETDEWEB)

    Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.; Arkin, Adam P.

    2004-12-01

    We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.
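
    To illustrate one ingredient of such predictors, the toy sketch below scores an adjacent, co-directional gene pair by intergenic distance alone using a log-odds ratio between a "same operon" and a "different operon" distance distribution; the distributions, names and thresholds are made up, and the published method additionally uses comparative genomic measures.

```python
import numpy as np

def operon_log_odds(distance_bp, same_operon_dists, diff_operon_dists, bins):
    """Toy log-odds that an adjacent, co-directional gene pair lies in one operon,
    based only on intergenic distance (bp). Histograms stand in for the distance
    distributions of known same-operon and different-operon pairs."""
    p_same, _ = np.histogram(same_operon_dists, bins=bins, density=True)
    p_diff, _ = np.histogram(diff_operon_dists, bins=bins, density=True)
    i = np.clip(np.digitize(distance_bp, bins) - 1, 0, len(p_same) - 1)
    eps = 1e-9
    return np.log((p_same[i] + eps) / (p_diff[i] + eps))

# Illustrative: same-operon pairs tend to have short (often overlapping) spacings.
bins = np.arange(-50, 601, 25)
same = np.random.normal(20, 40, 2000)    # fake "known operon pair" distances
diff = np.random.normal(150, 120, 2000)  # fake "non-operon pair" distances
print(operon_log_odds(10, same, diff, bins))   # positive -> likely same operon
print(operon_log_odds(400, same, diff, bins))  # negative -> likely different
```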

  4. On the importance of having accurate data for astrophysical modelling

    Science.gov (United States)

    Lique, Francois

    2016-06-01

    The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation for wavelengths ranging from far infrared to sub-millimeter with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data and I will show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for molecular line modelling beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star forming conditions, have allowed solving the problem of their respective abundance in cold molecular clouds. I will also present the latest collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present the recent work on the ortho-para-H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para-H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.

  5. Pulse compressor with aberration correction

    Energy Technology Data Exchange (ETDEWEB)

    Mankos, Marian [Electron Optica, Inc., Palo Alto, CA (United States)

    2015-11-30

    In this SBIR project, Electron Optica, Inc. (EOI) is developing an electron mirror-based pulse compressor attachment to new and retrofitted dynamic transmission electron microscopes (DTEMs) and ultrafast electron diffraction (UED) cameras for improving the temporal resolution of these instruments from the characteristic range of a few picoseconds to a few nanoseconds and beyond, into the sub-100 femtosecond range. The improvement will enable electron microscopes and diffraction cameras to better resolve the dynamics of reactions in the areas of solid state physics, chemistry, and biology. EOI’s pulse compressor technology utilizes the combination of electron mirror optics and a magnetic beam separator to compress the electron pulse. The design exploits the symmetry inherent in reversing the electron trajectory in the mirror in order to compress the temporally broadened beam. This system also simultaneously corrects the chromatic and spherical aberration of the objective lens for improved spatial resolution. This correction will be found valuable as the source size is reduced with laser-triggered point source emitters. With such emitters, it might be possible to significantly reduce the illuminated area and carry out ultrafast diffraction experiments from small regions of the sample, e.g. from individual grains or nanoparticles. During phase I, EOI drafted a set of candidate pulse compressor architectures and evaluated the trade-offs between temporal resolution and electron bunch size to achieve the optimum design for two particular applications with market potential: increasing the temporal and spatial resolution of UEDs, and increasing the temporal and spatial resolution of DTEMs. Specialized software packages that have been developed by MEBS, Ltd. were used to calculate the electron optical properties of the key pulse compressor components: namely, the magnetic prism, the electron mirror, and the electron lenses. In the final step, these results were folded

  6. Improved plant economics through accurate feedwater flow measurement with the crossflow ultrasonic flowmeter

    International Nuclear Information System (INIS)

    The crossflow ultrasonic flowmeter (UFM) improves nuclear power plant performance through more accurate and reliable feedwater flow measurement. Reactor power levels are typically monitored via secondary-side calorimetric calculations that depend on the accurate measurement of feedwater flow. The feedwater flow is measured with calibrated venturis in most plants. These are subject to chemical fouling and other mechanical problems. If the loss in accuracy of the feedwater flow measurement overstates the actual flow rate, the result is a direct loss in megawatts generated by the plant. This paper describes a new, innovative ultrasonic technique to improve the accuracy, stability and repeatability of ultrasonic flow measurements. By employing this advanced technology to provide a continuous correction to the venturi-measured feedwater flow rate, plants have reported the recovery of between 5 and 25 MWe. This technology has been implemented in a new flowmeter called CROSSFLOW. The CROSSFLOW meter utilizes a mathematical process called cross-correlation to process the ultrasonic signal, which is modulated by flow eddies, to determine the velocity of the feedwater. It replaces the older, less accurate transit-time methodology. Comparisons with weigh tank tests, calibrated plant instrumentation, and chemical tracer tests have demonstrated a repeatable accuracy of 0.21% or better with this advanced cross-correlation technology. The paper discusses the history of the cross-correlation technique and its theoretical basis, illustrates how this technique addresses the measurement sensitivities for various parameters, demonstrates the calculation of the accuracy of the meter, and discusses the recently completed NRC review of the CROSSFLOW System and methodology. The paper also discusses recent precision flow measurement applications being performed with CROSSFLOW at nuclear plants worldwide. Among these applications are the measurement of Reactor Coolant System flow and the
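
    The cross-correlation principle behind the meter can be illustrated with two synthetic sensor signals: the transit time of the eddy pattern between two measurement planes is found from the peak of their cross-correlation, and velocity follows from the known sensor spacing. This is a simplified sketch, not the commercial CROSSFLOW processing, and all names and numbers are illustrative.

```python
import numpy as np

def crosscorr_velocity(upstream, downstream, fs, sensor_spacing_m):
    """Estimate flow velocity from two signals modulated by the same eddies,
    by locating the peak of their cross-correlation."""
    a = upstream - upstream.mean()
    b = downstream - downstream.mean()
    corr = np.correlate(b, a, mode="full")
    lag = np.argmax(corr) - (len(a) - 1)     # samples the pattern needs to travel
    transit_time = lag / fs                  # seconds between the two sensor planes
    return sensor_spacing_m / transit_time   # m/s

# Synthetic check: a noisy pattern arriving 5 ms later at the downstream sensor.
fs = 10_000
t = np.arange(0, 1.0, 1 / fs)
pattern = np.random.randn(len(t))
delay = int(0.005 * fs)
down = np.roll(pattern, delay) + 0.1 * np.random.randn(len(t))
print(crosscorr_velocity(pattern, down, fs, sensor_spacing_m=0.05))  # ~10 m/s
```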

  7. Correction of oral contrast artifacts in CT-based attenuation correction of PET images using an automated segmentation algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ahmadian, Alireza; Ay, Mohammad R.; Sarkar, Saeed [Medical Sciences/University of Tehran, Research Center for Science and Technology in Medicine, Tehran (Iran); Medical Sciences/University of Tehran, Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran (Iran); Bidgoli, Javad H. [Medical Sciences/University of Tehran, Research Center for Science and Technology in Medicine, Tehran (Iran); East Tehran Azad University, Department of Electrical and Computer Engineering, Tehran (Iran); Zaidi, Habib [Geneva University Hospital, Division of Nuclear Medicine, Geneva (Switzerland)

    2008-10-15

    Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium with high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μmap), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high CT number object segmentation using combined region- and boundary-based segmentation and second, object classification to bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of generated μmaps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique
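
    A generic bilinear ("piecewise") conversion from CT numbers to 511 keV attenuation coefficients, of the kind referred to above, might look as follows; the break point and coefficients are rough illustrative values, not those used in the cited study.

```python
import numpy as np

def hu_to_mu_511kev(hu, mu_water=0.096, mu_bone=0.172, hu_bone=1000.0):
    """Convert CT numbers (HU) to linear attenuation coefficients (cm^-1) at
    511 keV with a generic bilinear calibration curve. Coefficients are
    illustrative approximations only."""
    hu = np.asarray(hu, dtype=float)
    mu = np.where(
        hu <= 0,
        mu_water * (1.0 + hu / 1000.0),                  # air / soft-tissue segment
        mu_water + hu * (mu_bone - mu_water) / hu_bone,  # bone-like segment
    )
    return np.clip(mu, 0.0, None)

print(hu_to_mu_511kev([-1000, 0, 500, 1500]))  # air, water, soft bone, dense bone
```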

  8. Preferred color correction for digital LCD TVs

    Science.gov (United States)

    Kim, Kyoung Tae; Kim, Choon-Woo; Ahn, Ji-Young; Kang, Dong-Woo; Shin, Hyun-Ho

    2009-01-01

    Instead of colorimetric color reproduction, preferred color correction is applied to digital TVs to improve subjective image quality. The first step of the preferred color correction is to survey the preferred color coordinates of memory colors. This can be achieved by off-line human visual tests. The next step is to extract pixels of memory colors representing skin, grass and sky. For the detected pixels, colors are shifted towards the desired coordinates identified in advance. This correction process may result in undesirable contours on the boundaries between the corrected and un-corrected areas. For digital TV applications, the process of extraction and correction should be applied in every frame of the moving images. This paper presents a preferred color correction method in LCH color space. Values of chroma and hue are corrected independently. Undesirable contours on the boundaries of correction are minimized. The proposed method changes the coordinates of memory color pixels towards the target color coordinates. The amount of correction is determined based on the averaged coordinates of the extracted pixels. The proposed method maintains the relative color difference within memory color areas. Performance of the proposed method is evaluated using paired comparison. Results of experiments indicate that the proposed method can reproduce perceptually pleasing images for viewers.
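
    A minimal sketch of the correction step is given below, assuming the memory-color pixels have already been detected and converted to LCH; the common offset is computed from the mean of the detected pixels so relative differences inside the region are preserved. Function names, targets and the strength parameter are illustrative, not taken from the paper.

```python
import numpy as np

def shift_memory_colors(lch_pixels, target_c, target_h, strength=0.5):
    """Shift detected memory-colour pixels (rows of [L, C, H]) toward a preferred
    chroma/hue target. Chroma and hue are corrected independently."""
    lch = np.array(lch_pixels, dtype=float)
    mean_c, mean_h = lch[:, 1].mean(), lch[:, 2].mean()
    dc = strength * (target_c - mean_c)                         # chroma offset
    dh = strength * (((target_h - mean_h) + 180) % 360 - 180)   # wrapped hue offset
    out = lch.copy()
    out[:, 1] = np.clip(out[:, 1] + dc, 0, None)
    out[:, 2] = (out[:, 2] + dh) % 360
    return out

# Example: nudge "skin" pixels toward an assumed preferred skin coordinate.
skin = [[65, 30, 40], [60, 35, 45], [70, 28, 38]]
print(shift_memory_colors(skin, target_c=33, target_h=50))
```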

  9. Accurate and precise determination of critical properties from Gibbs ensemble Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Dinpajooh, Mohammadhasan [Department of Chemistry and Chemical Theory Center, University of Minnesota, 207 Pleasant Street SE, Minneapolis, Minnesota 55455 (United States); Bai, Peng; Allan, Douglas A. [Department of Chemical Engineering and Materials Science, University of Minnesota, 421 Washington Avenue SE, Minneapolis, Minnesota 55455 (United States); Siepmann, J. Ilja, E-mail: siepmann@umn.edu [Department of Chemistry and Chemical Theory Center, University of Minnesota, 207 Pleasant Street SE, Minneapolis, Minnesota 55455 (United States); Department of Chemical Engineering and Materials Science, University of Minnesota, 421 Washington Avenue SE, Minneapolis, Minnesota 55455 (United States)

    2015-09-21

    Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor–liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields T_c = 1.3128 ± 0.0016, ρ_c = 0.316 ± 0.004, and p_c = 0.1274 ± 0.0013 in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρ_t ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using r_cut = 3.5σ yield T_c and p_c that are higher by 0.2% and 1.4% than simulations with r_cut = 5 and 8σ but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that r_cut = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard

  10. Processing of airborne laser scanning data to generate accurate DTM for floodplain wetland

    Science.gov (United States)

    Szporak-Wasilewska, Sylwia; Mirosław-Świątek, Dorota; Grygoruk, Mateusz; Michałowski, Robert; Kardel, Ignacy

    2015-10-01

    The structure of the floodplain, especially its topography and vegetation, influences the overland flow and the dynamics of floods, which are key factors shaping ecosystems in surface water-fed wetlands. Therefore, elaboration of a digital terrain model (DTM) of high spatial accuracy is crucial in hydrodynamic flow modelling in river valleys. In this study the research was conducted in a unique Central European complex of fens and marshes - the Lower Biebrza river valley. The area is represented mainly by peat ecosystems which, according to the EU Water Framework Directive (WFD), are called "water-dependent ecosystems". Developing an accurate DTM in these areas, which are overgrown by dense wetland vegetation consisting of alder forest, willow shrubs, reed, sedges and grass, is very difficult; therefore, to represent the terrain with high accuracy, airborne laser scanning (ALS) data with a scanning density of 4 points/m² were used and a correction of the "vegetation effect" on the DTM was carried out. This correction was performed utilizing remotely sensed images, a topographical survey using Real Time Kinematic positioning, and vegetation height measurements. In order to classify different types of vegetation within the research area, object based image analysis (OBIA) was used. OBIA allowed partitioning remotely sensed imagery into meaningful image-objects and assessing their characteristics through spatial and spectral scale. The final maps of vegetation patches, which include attributes of vegetation height and vegetation spectral properties, utilized both the laser scanning data and the vegetation indices developed on the basis of airborne and satellite imagery. These data were used in the process of segmentation, attribution and classification. Several different vegetation indices were tested to distinguish different types of vegetation in the wetland area. The OBIA classification allowed correction of the "vegetation effect" on the DTM. The final digital terrain model was compared and examined

  11. A comparison of different experimental methods for general recombination correction for liquid ionization chambers

    DEFF Research Database (Denmark)

    Andersson, Jonas; Kaiser, Franz-Joachim; Gomez, Faustino;

    2012-01-01

    of the charge carriers, as compared to using air as the sensitive medium has to be corrected for. Due to the presence of initial recombination in LICs, the correction for general recombination losses is more complicated than for air-filled ionization chambers. In the present work, recently published...... experimental methods for general recombination correction for LICs are compared and investigated for both pulsed and continuous beams. The experimental methods are all based on one of two approaches: either measurements at two different dose rates (two-dose-rate methods), or measurements at three different LIC...... polarizing voltages (three-voltage methods). In a comparison with the two-dose-rate methods, the three-voltage methods fail to achieve accurate corrections in several instances, predominantly at low polarizing voltages and dose rates. However, for continuous beams in the range of polarizing voltages...

  12. Hypernatremia: Correction Rate and Hemodialysis

    Directory of Open Access Journals (Sweden)

    Saima Nur

    2014-01-01

    Full Text Available Severe hypernatremia is defined as serum sodium levels above 152 mEq/L, with a mortality rate ≥60%. An 85-year-old gentleman was brought to the emergency room with an altered level of consciousness after refusing to eat for a week at a skilled nursing facility. On admission, the patient was nonverbal with stable vital signs and was responsive only to painful stimuli. Laboratory evaluation was significant for a serum sodium of 188 mmol/L and a water deficit of 12.0 L. The patient was admitted to the medical intensive care unit and, after an inadequate response to suboptimal fluid repletion, hemodialysis was used to correct the hypernatremia. Within the first fourteen hours, the sodium concentration changed by only 1 mEq/L with fluid repletion; however, it dropped by more than 20 mEq/L within two hours during hemodialysis. Despite such a drastic drop in sodium concentration, the patient did not develop any neurological sequelae and was at baseline mental status at the time of discharge.
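
    For orientation, the reported 12 L deficit is consistent with the standard free-water-deficit estimate shown below; the 70 kg body weight and the 0.5 total-body-water fraction (typical for an elderly man) are assumptions for illustration, not values given in the report.

```latex
\text{free water deficit} \approx \mathrm{TBW}\times\left(\frac{[\mathrm{Na}^+]_{\text{serum}}}{140}-1\right)
\approx (0.5 \times 70\ \mathrm{kg})\times\left(\frac{188}{140}-1\right)\approx 12\ \mathrm{L}
```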

  13. Rulison Site corrective action report

    International Nuclear Information System (INIS)

    Project Rulison was a joint US Atomic Energy Commission (AEC) and Austral Oil Company (Austral) experiment, conducted under the AEC's Plowshare Program, to evaluate the feasibility of using a nuclear device to stimulate natural gas production in low-permeability gas-producing geologic formations. The experiment was conducted on September 10, 1969, and consisted of detonating a 40-kiloton nuclear device at a depth of 2,568 m below ground surface (BGS). This Corrective Action Report describes the cleanup of petroleum hydrocarbon- and heavy-metal-contaminated sediments from an old drilling effluent pond and characterization of the mud pits used during drilling of the R-EX well at the Rulison Site. The Rulison Site is located approximately 65 kilometers (40 miles) northeast of Grand Junction, Colorado. The effluent pond was used for the storage of drilling mud during drilling of the emplacement hole for the 1969 gas stimulation test conducted by the AEC. This report also describes the activities performed to determine whether contamination is present in mud pits used during the drilling of well R-EX, the gas production well drilled at the site to evaluate the effectiveness of the detonation in stimulating gas production. The investigation activities described in this report were conducted during the autumn of 1995, concurrent with the cleanup of the drilling effluent pond. This report describes the activities performed during the soil investigation and provides the analytical results for the samples collected during that investigation

  14. Rulison Site corrective action report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    Project Rulison was a joint US Atomic Energy Commission (AEC) and Austral Oil Company (Austral) experiment, conducted under the AEC's Plowshare Program, to evaluate the feasibility of using a nuclear device to stimulate natural gas production in low-permeability gas-producing geologic formations. The experiment was conducted on September 10, 1969, and consisted of detonating a 40-kiloton nuclear device at a depth of 2,568 m below ground surface (BGS). This Corrective Action Report describes the cleanup of petroleum hydrocarbon- and heavy-metal-contaminated sediments from an old drilling effluent pond and characterization of the mud pits used during drilling of the R-EX well at the Rulison Site. The Rulison Site is located approximately 65 kilometers (40 miles) northeast of Grand Junction, Colorado. The effluent pond was used for the storage of drilling mud during drilling of the emplacement hole for the 1969 gas stimulation test conducted by the AEC. This report also describes the activities performed to determine whether contamination is present in mud pits used during the drilling of well R-EX, the gas production well drilled at the site to evaluate the effectiveness of the detonation in stimulating gas production. The investigation activities described in this report were conducted during the autumn of 1995, concurrent with the cleanup of the drilling effluent pond. This report describes the activities performed during the soil investigation and provides the analytical results for the samples collected during that investigation.

  15. A simple and efficient dispersion correction to the Hartree-Fock theory (2): Incorporation of a geometrical correction for the basis set superposition error.

    Science.gov (United States)

    Yoshida, Tatsusada; Hayashi, Takahisa; Mashima, Akira; Chuman, Hiroshi

    2015-10-01

    One of the most challenging problems in computer-aided drug discovery is the accurate prediction of the binding energy between a ligand and a protein. For accurate estimation of the net binding energy ΔEbind in the framework of the Hartree-Fock (HF) theory, it is necessary to estimate two additional energy terms: the dispersion interaction energy (Edisp) and the basis set superposition error (BSSE). We previously reported a simple and efficient dispersion correction, Edisp, to the Hartree-Fock theory (HF-Dtq). In the present study, an approximation procedure for estimating BSSE proposed by Kruse and Grimme, the geometrical counterpoise correction (gCP), was incorporated into HF-Dtq (HF-Dtq-gCP). The relative weights of the Edisp (Dtq) and BSSE (gCP) terms were determined to reproduce ΔEbind calculated with CCSD(T)/CBS or /aug-cc-pVTZ (HF-Dtq-gCP (scaled)). The performance of HF-Dtq-gCP (scaled) was compared with that of B3LYP-D3(BJ)-bCP (dispersion corrected B3LYP with the Boys and Bernardi counterpoise correction (bCP)), by taking ΔEbind (CCSD(T)-bCP) of small non-covalent complexes as a 'gold standard'. As a critical test, HF-Dtq-gCP (scaled)/6-31G(d) and B3LYP-D3(BJ)-bCP/6-31G(d) were applied to the complex model for HIV-1 protease and its potent inhibitor, KNI-10033. The present results demonstrate that HF-Dtq-gCP (scaled) is a useful and powerful remedy for accurately and promptly predicting ΔEbind between a ligand and a protein, even though it is a simple correction procedure.
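
    Schematically, the composite correction can be summarized as below; the weighting coefficients wD and wgCP stand for the scaling factors fitted against the CCSD(T) reference binding energies, and this notation is ours rather than the paper's.

```latex
\Delta E_{\text{bind}}^{\text{HF-Dtq-gCP (scaled)}} \;\approx\;
\Delta E_{\text{HF}} \;+\; w_{\mathrm{D}}\,\Delta E_{\text{disp}}^{\text{Dtq}}
\;+\; w_{\mathrm{gCP}}\,\Delta E_{\text{BSSE}}^{\text{gCP}}
```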

  16. Carbon-wire loop based artifact correction outperforms post-processing EEG/fMRI corrections--A validation of a real-time simultaneous EEG/fMRI correction method.

    Science.gov (United States)

    van der Meer, Johan N; Pampel, André; Van Someren, Eus J W; Ramautar, Jennifer R; van der Werf, Ysbrand D; Gomez-Herrero, German; Lepsien, Jöran; Hellrung, Lydia; Hinrichs, Hermann; Möller, Harald E; Walter, Martin

    2016-01-15

    Simultaneous EEG-fMRI combines two powerful neuroimaging techniques, but the EEG signal suffers from severe artifacts in the MRI environment that are difficult to remove. These are the MR scanning artifact and the blood-pulsation artifact--strategies to remove them are a topic of ongoing research. Additionally large, unsystematic artifacts are produced across the full frequency spectrum by the magnet's helium pump (and ventilator) systems which are notoriously hard to remove. As a consequence, experimenters routinely deactivate the helium pump during simultaneous EEG-fMRI acquisitions which potentially risks damaging the MRI system and necessitates more frequent and expensive helium refills. We present a novel correction method addressing both helium pump and ballisto-cardiac (BCG) artifacts, consisting of carbon-wire loops (CWL) as additional sensors to accurately track unpredictable artifacts related to subtle movements in the scanner, and an EEGLAB plugin to perform artifact correction. We compare signal-to-noise metrics of EEG data, corrected with CWL and three conventional correction methods, for helium pump off and on measurements. Because the CWL setup records signals in real-time, it fits requirements of applications where immediate correction is necessary, such as neuro-feedback applications or stimulation time-locked to specific sleep oscillations. The comparison metrics in this paper relate to: (1) the EEG signal itself, (2) the "eyes open vs. eyes closed" effect, and (3) an assessment of how the artifact corrections impacts the ability to perform meaningful correlations between EEG alpha power and the BOLD signal. Results show that the CWL correction corrects for He pump artifact and also produces EEG data more comparable to EEG obtained outside the magnet than conventional post-processing methods.
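
    One simple way to use reference sensors of this kind is to regress their signals out of every EEG channel; the sketch below shows such an ordinary least-squares correction and is a simplified illustration of reference-based cleaning, not the exact algorithm of the EEGLAB plugin described in the paper. All names and data are synthetic.

```python
import numpy as np

def regress_out_cwl(eeg, cwl):
    """Remove artifacts shared with carbon-wire-loop (CWL) reference channels from
    EEG by least-squares regression.
    eeg: (n_samples, n_eeg) array; cwl: (n_samples, n_cwl) array."""
    design = np.column_stack([cwl, np.ones(len(cwl))])   # CWL signals + intercept
    beta, *_ = np.linalg.lstsq(design, eeg, rcond=None)  # fit per EEG channel
    return eeg - design @ beta                           # residual = cleaned EEG

# Synthetic check: EEG = brain signal + artifact that the CWL sensors also see.
rng = np.random.default_rng(0)
cwl = rng.standard_normal((5000, 4))
brain = rng.standard_normal((5000, 2))
eeg = brain + cwl @ rng.standard_normal((4, 2)) * 3.0
cleaned = regress_out_cwl(eeg, cwl)
print(np.corrcoef(cleaned[:, 0], brain[:, 0])[0, 1])  # close to 1
```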

  17. Visual texture accurate material appearance measurement, representation and modeling

    CERN Document Server

    Haindl, Michal

    2013-01-01

    This book surveys the state of the art in multidimensional, physically-correct visual texture modeling. Features: reviews the entire process of texture synthesis, including material appearance representation, measurement, analysis, compression, modeling, editing, visualization, and perceptual evaluation; explains the derivation of the most common representations of visual texture, discussing their properties, advantages, and limitations; describes a range of techniques for the measurement of visual texture, including BRDF, SVBRDF, BTF and BSSRDF; investigates the visualization of textural info

  18. Effectiveness of Corrective Feedback on Writing

    Institute of Scientific and Technical Information of China (English)

    高砚

    2012-01-01

    This study aims to find out the effectiveness of corrective feedback on ESL writing. By reviewing and analyzing six previous research studies, the author tries to reveal the most effective way to provide corrective feedback for L2 students and the factors that affect the processing of error feedback. Findings indicated that corrective feedback helps students improve ESL writing in both accuracy and fluency. Furthermore, correction and direct corrective feedback, as well as oral and written meta-linguistic explanation, are the most effective ways to help students improve their writing. However, individual learner differences influence how corrective feedback is processed. Finally, limitations of the present study and suggestions for future research are presented.

  19. Automatic Power Factor Correction Using Capacitive Bank

    Directory of Open Access Journals (Sweden)

    Mr.Anant Kumar Tiwari,

    2014-02-01

    Full Text Available The power factor correction of electrical loads is a problem common to all industrial companies. Earlier, the power factor correction was done by adjusting the capacitive bank manually [1]. The automated power factor corrector (APFC) using a capacitive load bank is helpful in providing the power factor correction. The proposed automated project involves measuring the power factor of the load using a microcontroller. This auto-adjustable power factor correction is designed to ensure that the entire power system always preserves unity power factor. The software and hardware required to implement the suggested automatic power factor correction scheme are explained and its operation is described. APFC thus helps to decrease the time taken to correct the power factor, which helps to increase efficiency.
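
    The sizing calculation at the heart of any such corrector is the required reactive power, Qc = P(tan φ1 - tan φ2). The sketch below shows that calculation together with a single-capacitor equivalent; the line voltage, frequency and load values are assumptions for illustration only.

```python
import math

def capacitor_bank_kvar(p_kw, pf_actual, pf_target=0.99):
    """Reactive power (kvar) a capacitor bank must supply to raise the power
    factor of a load drawing p_kw from pf_actual to pf_target."""
    phi1 = math.acos(pf_actual)
    phi2 = math.acos(pf_target)
    return p_kw * (math.tan(phi1) - math.tan(phi2))

def capacitance_uF(q_kvar, v_rms=400.0, freq=50.0):
    """Capacitance (uF) of a single capacitor across v_rms delivering q_kvar
    (simplified single-phase equivalent)."""
    return q_kvar * 1e3 / (2 * math.pi * freq * v_rms**2) * 1e6

q = capacitor_bank_kvar(p_kw=50.0, pf_actual=0.75)   # ~37 kvar
print(round(q, 1), round(capacitance_uF(q), 1), "uF")
```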

  20. Motion correction in MRI of the brain

    Science.gov (United States)

    Godenschweger, F.; Kägebein, U.; Stucht, D.; Yarach, U.; Sciarra, A.; Yakupov, R.; Lüsebrink, F.; Schulze, P.; Speck, O.

    2016-03-01

    Subject motion in MRI is a relevant problem in the daily clinical routine as well as in scientific studies. Since the beginning of clinical use of MRI, many research groups have developed methods to suppress or correct motion artefacts. This review focuses on rigid body motion correction of head and brain MRI and its application in diagnosis and research. It explains the sources and types of motion and related artefacts, classifies and describes existing techniques for motion detection, compensation and correction and lists established and experimental approaches. Retrospective motion correction modifies the MR image data during the reconstruction, while prospective motion correction performs an adaptive update of the data acquisition. Differences, benefits and drawbacks of different motion correction methods are discussed.

  1. Proof-Carrying Code with Correct Compilers

    Science.gov (United States)

    Appel, Andrew W.

    2009-01-01

    In the late 1990s, proof-carrying code was able to produce machine-checkable safety proofs for machine-language programs even though (1) it was impractical to prove correctness properties of source programs and (2) it was impractical to prove correctness of compilers. But now it is practical to prove some correctness properties of source programs, and it is practical to prove correctness of optimizing compilers. We can produce more expressive proof-carrying code, that can guarantee correctness properties for machine code and not just safety. We will construct program logics for source languages, prove them sound w.r.t. the operational semantics of the input language for a proved-correct compiler, and then use these logics as a basis for proving the soundness of static analyses.

  2. Online versus offline corrections: opposition or evolution? A comparison of two electronic portal imaging approaches for locally advanced prostate cancer

    International Nuclear Information System (INIS)

    Given the onset of dose escalation and increased planning target volume (PTV) conformity, the requirement of accurate field placement has also increased. This study compares and contrasts a combination offline/online electronic portal imaging (EPI) device correction with a complete online correction protocol and assesses their relative effectiveness in managing set-up error. Field placement data was collected on patients receiving radical radiotherapy to the prostate. Ten patients were on an initial combination offline/online correction protocol, followed by another 10 patients on a complete online correction protocol. Analysis of 1480 portal images from 20 patients was carried out, illustrating that a combination offline/online approach can be very effective in dealing with the systematic component of set-up error, but it is only when a complete online correction protocol is employed that both systematic and random set-up errors can be managed. Now, EPI protocols have evolved considerably and online corrections are a highly effective tool in the quest for more accurate field placement. This study discusses the clinical workload impact issues that need to be addressed in order for an online correction protocol to be employed, and addresses many of the practical issues that need to be resolved. Management of set-up error is paramount when seeking to dose escalate and only an online correction protocol can manage both components of set-up error. Both systematic and random errors are important and can be effectively and efficiently managed

  3. Accurate analysis of arbitrarily-shaped helical groove waveguide

    Institute of Scientific and Technical Information of China (English)

    Liu Hong-Tao; Wei Yan-Yu; Gong Yu-Bin; Yue Ling-Na; Wang Wen-Xiang

    2006-01-01

    This paper presents a theory for accurately analysing the dispersion relation and the interaction impedance of electromagnetic waves propagating through a helical groove waveguide with arbitrary groove shape, in which the complex groove profile is synthesized by a series of rectangular steps. By representing the influence of high-order evanescent modes at the connection of any two neighbouring steps with an equivalent susceptance under a modified admittance matching condition, the assumption of neglecting the discontinuity capacitance made in previously published analyses is avoided, and the accurate dispersion equation is obtained by means of a combination of the field-matching method and the admittance-matching technique. The validity of this theory is proved by comparison between the measurements and the numerical calculations for two kinds of helical groove waveguides with different groove shapes.

  4. Accurate and Simple Calibration of DLP Projector Systems

    DEFF Research Database (Denmark)

    Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus

    2014-01-01

    require a camera and involve feature extraction from a known projected pattern. In this work we present a novel calibration technique for DLP Projector systems based on phase shifting profilometry projection onto a printed calibration target. In contrast to most current methods, the one presented here...... does not rely on an initial camera calibration, and so does not carry over the error into projector calibration. A radial interpolation scheme is used to convert features coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination...... of parameters including lens distortion. Our implementation acquires printed planar calibration scenes in less than 1s. This makes our method both fast and convenient. We evaluate our method in terms of reprojection errors and structured light image reconstruction quality....

  5. Method for Accurately Calibrating a Spectrometer Using Broadband Light

    Science.gov (United States)

    Simmons, Stephen; Youngquist, Robert

    2011-01-01

    A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. This new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
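
    The "simple theory" referred to is essentially the channelled spectrum of a two-beam interferometer, I(λ) proportional to 1 + cos(2π OPD / λ). The sketch below evaluates that model so it can be compared with the recorded spectrum; the path imbalance and pixel grid are assumed values for illustration.

```python
import numpy as np

def michelson_spectrum(wavelength_nm, opd_nm, source=1.0, visibility=1.0):
    """Spectral fringe pattern produced by broadband light passing through an
    unbalanced Michelson interferometer with optical path difference opd_nm.
    Comparing this prediction with the recorded spectrum lets one refine the
    spectrometer's pixel-to-wavelength assignment (illustrative model only)."""
    return source * 0.5 * (1.0 + visibility * np.cos(2 * np.pi * opd_nm / wavelength_nm))

# With a 50 micrometre path imbalance the fringes are dense enough that a small
# wavelength-assignment error produces a visible phase shift.
wl = np.linspace(400, 800, 2048)   # nominal wavelength of each pixel, nm
print(michelson_spectrum(wl, opd_nm=50_000)[:5])
```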

  6. Accurate prediction of secondary metabolite gene clusters in filamentous fungi

    DEFF Research Database (Denmark)

    Andersen, Mikael Rørdam; Nielsen, Jakob Blæsbjerg; Klitgaard, Andreas;

    2013-01-01

    Biosynthetic pathways of secondary metabolites from fungi are currently subject to an intense effort to elucidate the genetic basis for these compounds due to their large potential within pharmaceutics and synthetic biochemistry. The preferred method is methodical gene deletions to identify...... supporting enzymes for key synthases one cluster at a time. In this study, we design and apply a DNA expression array for Aspergillus nidulans in combination with legacy data to form a comprehensive gene expression compendium. We apply a guilt-by-association-based analysis to predict the extent...... of the biosynthetic clusters for the 58 synthases active in our set of experimental conditions. A comparison with legacy data shows the method to be accurate in 13 of 16 known clusters and nearly accurate for the remaining 3 clusters. Furthermore, we apply a data clustering approach, which identifies cross...
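
    The guilt-by-association idea can be illustrated with a toy routine that walks outwards from a synthase gene along the chromosome and keeps adjacent genes whose expression correlates with it across the compendium; this is a simplified sketch under made-up data, not the published prediction method.

```python
import numpy as np

def cluster_extent(expr, synthase_idx, corr_cutoff=0.7):
    """Estimate the extent of a gene cluster around a synthase by keeping
    physically adjacent genes whose expression profile correlates with it.
    expr: (n_genes, n_conditions) matrix ordered by chromosomal position."""
    corr = np.array([np.corrcoef(expr[i], expr[synthase_idx])[0, 1]
                     for i in range(len(expr))])
    lo = hi = synthase_idx
    while lo - 1 >= 0 and corr[lo - 1] >= corr_cutoff:
        lo -= 1
    while hi + 1 < len(expr) and corr[hi + 1] >= corr_cutoff:
        hi += 1
    return lo, hi   # inclusive index range of the predicted cluster

rng = np.random.default_rng(1)
profile = rng.standard_normal(40)
expr = rng.standard_normal((9, 40)) * 0.3
expr[3:7] += profile                        # genes 3..6 co-expressed with the synthase
print(cluster_extent(expr, synthase_idx=4))  # -> (3, 6)
```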

  7. A fast and accurate method for echocardiography strain rate imaging

    Science.gov (United States)

    Tavakoli, Vahid; Sahba, Nima; Hajebi, Nima; Nambakhsh, Mohammad Saleh

    2009-02-01

    Recently, strain and strain rate imaging have proved superior to classical motion estimation methods for myocardial evaluation as a novel technique for the quantitative analysis of myocardial function. In this paper, we propose a novel strain rate imaging algorithm using a new optical flow technique which is more rapid and accurate than previous correlation-based methods. The new method presumes spatiotemporal constancy of the intensity and magnitude of the image, and makes use of spline moments in a multiresolution approach. The cardiac central point is obtained using a combination of the center of mass and endocardial tracking. The proposed method is shown to overcome the intensity variations of ultrasound texture while preserving the ability of the motion estimation technique to handle different motions and orientations. Evaluation is performed on simulated, phantom (a contractile rubber balloon) and real sequences and shows that this technique is more accurate and faster than previous methods.

  8. Simple and High-Accurate Schemes for Hyperbolic Conservation Laws

    Directory of Open Access Journals (Sweden)

    Renzhong Feng

    2014-01-01

    Full Text Available The paper constructs a class of simple high-accurate schemes (SHA schemes) with third order approximation accuracy in both space and time to solve linear hyperbolic equations, using linear data reconstruction and the Lax-Wendroff scheme. The schemes can be made even fourth order accurate with a special choice of parameter. In order to avoid spurious oscillations in the vicinity of strong gradients, we make the SHA schemes total variation diminishing (TVD schemes for short) by setting a flux limiter in their numerical fluxes and then extend these schemes to solve the nonlinear Burgers' equation and the Euler equations. The numerical examples show that these schemes give high order of accuracy and high resolution results. The advantages of these schemes are their simplicity and high order of accuracy.
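
    To illustrate the flux-limiting idea in general (not the paper's third-order SHA schemes), the sketch below applies a minmod-limited Lax-Wendroff step to linear advection with periodic boundaries; the limiter keeps the solution free of spurious oscillations near the jumps of a square pulse. All parameters are illustrative.

```python
import numpy as np

def tvd_advection_step(u, nu):
    """One flux-limited (minmod) Lax-Wendroff step for u_t + a*u_x = 0 with a > 0,
    periodic boundaries and Courant number nu = a*dt/dx. phi = 1 recovers plain
    Lax-Wendroff, phi = 0 recovers first-order upwind."""
    du = np.roll(u, -1) - u                      # u_{i+1} - u_i
    du_up = u - np.roll(u, 1)                    # u_i - u_{i-1}
    with np.errstate(divide="ignore", invalid="ignore"):
        r = np.where(du != 0, du_up / du, 0.0)   # smoothness ratio
    phi = np.maximum(0.0, np.minimum(1.0, r))    # minmod limiter
    flux = u + 0.5 * (1.0 - nu) * phi * du       # numerical flux / a at i+1/2
    return u - nu * (flux - np.roll(flux, 1))

# Advect a square pulse once around a periodic domain.
x = np.linspace(0, 1, 200, endpoint=False)
u = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)
nu = 0.5
for _ in range(int(200 / nu)):
    u = tvd_advection_step(u, nu)
print(u.min(), u.max())   # stays within [0, 1]: no spurious oscillations
```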

  9. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Directory of Open Access Journals (Sweden)

    Zhiwei Zhao

    2015-02-01

    Full Text Available Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation.

  10. Accurate multireference study of Si3 electronic manifold

    CERN Document Server

    Goncalves, Cayo Emilio Monteiro; Braga, Joao Pedro

    2016-01-01

    Since it has been shown that the silicon trimer has a highly multi-reference character, accurate multi-reference configuration interaction calculations are performed to elucidate its electronic manifold. Emphasis is given to the long range part of the potential, aiming to understand the dynamical aspects of atom-diatom collisions and to describe conical intersections and important saddle points along the reactive path. An analysis of the main features of the potential energy surface is performed for benchmarking, and highly accurate values for structures, vibrational constants and energy gaps are reported, as well as the previously unpublished spin-orbit coupling magnitude. The results predict that inter-system crossings will play an important role in dynamical simulations, especially in triplet state quenching, making the problem of constructing a precise potential energy surface more complicated and multi-layer dependent. The ground state is predicted to be the singlet one, but since the singlet-triplet gap is rather small (2.448 kJ/mol) bo...

  11. Efficient and Accurate Robustness Estimation for Large Complex Networks

    CERN Document Server

    Wandelt, Sebastian

    2016-01-01

    Robustness estimation is critical for the design and maintenance of resilient networks, one of the global challenges of the 21st century. Existing studies exploit network metrics to generate attack strategies, which simulate intentional attacks in a network, and compute a metric-induced robustness estimation. While some metrics are easy to compute, e.g. degree centrality, other, more accurate, metrics require considerable computation efforts, e.g. betweenness centrality. We propose a new algorithm for estimating the robustness of a network in sub-quadratic time, i.e., significantly faster than betweenness centrality. Experiments on real-world networks and random networks show that our algorithm estimates the robustness of networks close to or even better than betweenness centrality, while being orders of magnitude faster. Our work contributes towards scalable, yet accurate methods for robustness estimation of large complex networks.
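
    A baseline, metric-induced robustness estimate of the kind discussed above can be computed as follows: remove nodes one by one according to an attack strategy and accumulate the size of the largest connected component. The sketch uses a degree-based attack (the cheap metric); swapping in betweenness gives the slower, more accurate baseline. The graph and parameters are illustrative.

```python
import networkx as nx

def robustness_R(G):
    """Attack-based robustness: repeatedly remove the highest-degree node
    (recomputed after every removal) and accumulate the fraction of nodes left
    in the largest connected component. Larger R means a more robust network."""
    H = G.copy()
    n = H.number_of_nodes()
    total = 0.0
    for _ in range(n):
        target = max(H.degree, key=lambda kv: kv[1])[0]
        H.remove_node(target)
        if H.number_of_nodes():
            total += max(len(c) for c in nx.connected_components(H)) / n
    return total / n

G = nx.barabasi_albert_graph(300, 3, seed=42)
print(round(robustness_R(G), 3))
```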

  12. Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping

    Science.gov (United States)

    Rehak, M.; Skaloud, J.

    2015-08-01

    In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that together with a pre-calibrated camera enables accurate corridor mapping. The design of the platform is based on widely available model components to which we integrate an open-source autopilot, a customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  13. Library preparation for highly accurate population sequencing of RNA viruses

    Science.gov (United States)

    Acevedo, Ashley; Andino, Raul

    2015-01-01

    Circular resequencing (CirSeq) is a novel technique for efficient and highly accurate next-generation sequencing (NGS) of RNA virus populations. The foundation of this approach is the circularization of fragmented viral RNAs, which are then redundantly encoded into tandem repeats by ‘rolling-circle’ reverse transcription. When sequenced, the redundant copies within each read are aligned to derive a consensus sequence of their initial RNA template. This process yields sequencing data with error rates far below the variant frequencies observed for RNA viruses, facilitating ultra-rare variant detection and accurate measurement of low-frequency variants. Although library preparation takes ~5 d, the high-quality data generated by CirSeq simplifies downstream data analysis, making this approach substantially more tractable for experimentalists. PMID:24967624
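
    The consensus step can be illustrated with a toy routine that majority-votes across the tandem copies within a read; this simplified sketch assumes the repeat length is known and the copies are in frame, whereas the real CirSeq pipeline first infers the repeat structure from the read itself.

```python
from collections import Counter

def repeat_consensus(read, repeat_len):
    """Collapse a read containing tandem copies of the same RNA fragment into a
    consensus sequence by majority vote at each position of the repeat unit."""
    copies = [read[i:i + repeat_len] for i in range(0, len(read), repeat_len)]
    copies = [c for c in copies if len(c) == repeat_len]   # drop the trailing stub
    consensus = []
    for pos in range(repeat_len):
        base, _ = Counter(c[pos] for c in copies).most_common(1)[0]
        consensus.append(base)
    return "".join(consensus)

# Three tandem copies of ACGTACGGT; the third copy carries a sequencing error.
print(repeat_consensus("ACGTACGGT" "ACGTACGGT" "ACGAACGGT", repeat_len=9))
```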

  14. Accurate parameter estimation for unbalanced three-phase system.

    Science.gov (United States)

    Chen, Yuan; So, Hing Cheung

    2014-01-01

    Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS.
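
    The pre-processing step, converting the three-phase waveforms into a pair of orthogonal signals, is the amplitude-invariant Clarke (αβ) transformation; a minimal sketch and a balanced-system sanity check follow, with illustrative signal values only.

```python
import numpy as np

def clarke_transform(va, vb, vc):
    """Amplitude-invariant Clarke (alpha-beta) transformation of three-phase
    samples, returning the pair of orthogonal signals (v_alpha, v_beta)."""
    v_alpha = (2.0 * va - vb - vc) / 3.0
    v_beta = (vb - vc) / np.sqrt(3.0)
    return v_alpha, v_beta

# Balanced 50 Hz example: alpha and beta come out as a quadrature pair.
t = np.linspace(0, 0.04, 400)
w = 2 * np.pi * 50
va = np.cos(w * t)
vb = np.cos(w * t - 2 * np.pi / 3)
vc = np.cos(w * t + 2 * np.pi / 3)
v_alpha, v_beta = clarke_transform(va, vb, vc)
print(np.allclose(v_alpha, np.cos(w * t)), np.allclose(v_beta, np.sin(w * t)))
```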

  15. Accurate Load Modeling Based on Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Zhenshu Wang

    2016-01-01

    Full Text Available Establishing an accurate load model is a critical problem in power system modeling and has significant implications for power system digital simulation and dynamic security analysis. The synthesis load model (SLM) considers the impact of the power distribution network and compensation capacitors, while the randomness of the power load is more precisely described by the traction power system load model (TPSLM). On the basis of these two load models, a load modeling method that combines the synthesis load with the traction power load is proposed in this paper. The method uses the analytic hierarchy process (AHP) to weight the two load models. Weight coefficients of the two models are calculated after formulating criteria and judgment matrices, and a synthesis model is then established from these weight coefficients. The effectiveness of the proposed method was examined through simulation. The results show that accurate load modeling based on AHP can effectively improve the accuracy of the load model and prove the validity of this method.
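
    The AHP weighting step can be sketched as computing the principal eigenvector of a pairwise-comparison (judgment) matrix, with a consistency ratio as a sanity check; the example judgment values below are illustrative and not taken from the paper.

```python
import numpy as np

def ahp_weights(judgment):
    """Weights from an AHP pairwise-comparison matrix via its principal eigenvector,
    plus the consistency ratio CR (Saaty's random index for n = 2..5)."""
    A = np.asarray(judgment, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = len(A)
    ci = (vals[k].real - n) / (n - 1) if n > 2 else 0.0   # consistency index
    ri = {2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)
    cr = ci / ri if ri else 0.0
    return w, cr

# Example: the synthesis load model judged 3x as important as the traction model.
weights, cr = ahp_weights([[1, 3], [1 / 3, 1]])
print(weights, cr)   # ~[0.75, 0.25], CR = 0 for a 2x2 matrix
```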

  16. Accurate adjoint design sensitivities for nano metal optics.

    Science.gov (United States)

    Hansen, Paul; Hesselink, Lambertus

    2015-09-01

    We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature we obtain highly-accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics. PMID:26368483

  17. Automatic remote correcting system for MOOCS

    OpenAIRE

    Rochat, Pierre-Yves

    2014-01-01

    An automatic correcting system was designed to correct the programming exercises during a Massive Open Online Course (MOOC) about microcontrollers, followed by thousands of students. Built around the MSP430G Launchpad, it has corrected more than 30,000 submissions in 7 weeks. This document provides general information about the system, the results obtained during a MOOC on the Coursera.org platform, extensions toward remote experiments, and future projects.

  18. Noncommutative corrections to classical black holes

    International Nuclear Information System (INIS)

    We calculate the leading long-distance noncommutative corrections to the classical Schwarzschild black hole sourced by a massive noncommutative scalar field. The energy-momentum tensor is taken to O(ℓ^4) in the noncommutative parameter ℓ and is treated in the semiclassical (tree-level) approximation. These noncommutative corrections dominate the classical post-post-Newtonian corrections if ℓ > 1/M_P. However, they are still too small to be observable in present-day experiments.

  19. Noncommutative corrections to classical black holes

    OpenAIRE

    Kobakhidze, Archil(ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, The University of Sydney, NSW, 2006, Australia)

    2007-01-01

    We calculate leading long-distance noncommutative corrections to the classical Schwarzschild black hole which is sourced by a massive noncommutative scalar field. The energy-momentum tensor is taken up to ${\\cal O}(\\ell^4)$ in noncommutative parameter, and is treated in semiclassical (tree level) approximation. These noncommutative corrections can dominate classical post-post-Newtonian corrections providing $\\ell > 1/M_P$, however, they are still too small to be observable in present-day expe...

  20. Radiative corrections to pion Compton scattering

    OpenAIRE

    Kaiser, N.(Physik Department T39, Technische Universität München, Garching, D-85747, Germany); Friedrich, J. M.

    2008-01-01

    We calculate the one-photon loop radiative corrections to charged pion Compton scattering, $\\pi^- \\gamma \\to \\pi^- \\gamma $. Ultraviolet and infrared divergencies are both treated in dimensional regularization. Analytical expressions for the ${\\cal O}(\\alpha)$ corrections to the invariant Compton scattering amplitudes, $A(s,u)$ and $B(s,u)$, are presented for 11 classes of contributing one-loop diagrams. Infrared finiteness of the virtual radiative corrections is achieved (in the standard way...

  1. Overburden Corrections for CosmoALEPH

    CERN Document Server

    Schmelling, Michael

    2006-01-01

    The determination of the decoherence curve from coincidence rates between the different CosmoALEPH stations requires, among other corrections, one for the different overburdens that affect the measured rates. This note describes the calculation of the overburden corrections based on a simple parametrization of the muon flux at sea level and a simple propagation model for muons through the overburden. The results are expressed as corrections to a reference muon flux at a depth of 320 mwe below surface.

  2. Accurate quantum state estimation via "Keeping the experimentalist honest"

    CERN Document Server

    Blume-Kohout, R; Blume-Kohout, Robin; Hayden, Patrick

    2006-01-01

    In this article, we derive a unique procedure for quantum state estimation from a simple, self-evident principle: an experimentalist's estimate of the quantum state generated by an apparatus should be constrained by honesty. A skeptical observer should subject the estimate to a test that guarantees that a self-interested experimentalist will report the true state as accurately as possible. We also find a non-asymptotic, operational interpretation of the quantum relative entropy function.

  3. Continuous glucose monitors prove highly accurate in critically ill children

    OpenAIRE

    Bridges, Brian C.; Preissig, Catherine M; Maher, Kevin O.; Rigby, Mark R

    2010-01-01

    Introduction Hyperglycemia is associated with increased morbidity and mortality in critically ill patients and strict glycemic control has become standard care for adults. Recent studies have questioned the optimal targets for such management and reported increased rates of iatrogenic hypoglycemia in both critically ill children and adults. The ability to provide accurate, real-time continuous glucose monitoring would improve the efficacy and safety of this practice in critically ill patients...

  4. A novel automated image analysis method for accurate adipocyte quantification

    OpenAIRE

    Osman, Osman S.; Selway, Joanne L; Kępczyńska, Małgorzata A; Stocker, Claire J.; O’Dowd, Jacqueline F; Cawthorne, Michael A.; Arch, Jonathan RS; Jassim, Sabah; Langlands, Kenneth

    2013-01-01

    Increased adipocyte size and number are associated with many of the adverse effects observed in metabolic disease states. While methods to quantify such changes in the adipocyte are of scientific and clinical interest, manual methods to determine adipocyte size are both laborious and intractable to large scale investigations. Moreover, existing computational methods are not fully automated. We, therefore, developed a novel automatic method to provide accurate measurements of the cross-section...

  5. Ultra accurate collaborative information filtering via directed user similarity

    OpenAIRE

    Guo, Qiang; Song, Wen-Jun; Liu, Jian-Guo

    2014-01-01

    A key challenge of collaborative filtering (CF) information filtering is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users are larger than the ones in the opposite direction, the large-degree users' selections are recommended extensively by the traditional second-order CF algorithms. By considering the users' similarity direction and the second-order correlations to depress the influen...

  6. Evaluation of accurate eye corner detection methods for gaze estimation

    OpenAIRE

    Bengoechea, Jose Javier; Cerrolaza, Juan J.; Villanueva, Arantxa; Cabeza, Rafael

    2014-01-01

    Accurate detection of iris center and eye corners appears to be a promising approach for low cost gaze estimation. In this paper we propose novel eye inner corner detection methods. Appearance and feature based segmentation approaches are suggested. All these methods are exhaustively tested on a realistic dataset containing images of subjects gazing at different points on a screen. We have demonstrated that a method based on a neural network presents the best performance even in light changin...

  7. Combinatorial Approaches to Accurate Identification of Orthologous Genes

    OpenAIRE

    Shi, Guanqun

    2011-01-01

    The accurate identification of orthologous genes across different species is a critical and challenging problem in comparative genomics and has a wide spectrum of biological applications including gene function inference, evolutionary studies and systems biology. During the past several years, many methods have been proposed for ortholog assignment based on sequence similarity, phylogenetic approaches, synteny information, and genome rearrangement. Although these methods share many commonly a...

  8. An accurate and robust gyroscope-based pedometer.

    Science.gov (United States)

    Lim, Yoong P; Brown, Ian T; Khoo, Joshua C T

    2008-01-01

    Pedometers are known to have step-estimation issues, mainly attributable to their acceleration-based sensing. A pedometer based on a micro-machined gyroscope (which has better immunity to acceleration) is proposed. Through syntactic recognition of a priori knowledge of human shank dynamics and temporally precise detection of heel strikes provided by wavelet decomposition, an accurate and robust pedometer is obtained. PMID:19163737

  9. A highly accurate method to solve Fisher’s equation

    Indian Academy of Sciences (India)

    Mehdi Bastani; Davod Khojasteh Salkuyeh

    2012-03-01

    In this study, we present a new and very accurate numerical method to approximate Fisher's-type equations. First, the spatial derivative in the proposed equation is approximated by a sixth-order compact finite difference (CFD6) scheme. Second, we solve the obtained system of differential equations using a third-order total variation diminishing Runge–Kutta (TVD-RK3) scheme. Numerical examples are given to illustrate the efficiency of the proposed method.
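    A compact sketch of the two building blocks named above, under simplifying assumptions (periodic boundaries, a small uniform grid; not the authors' code): a sixth-order compact finite difference for the second derivative and the Shu-Osher TVD-RK3 time stepper, applied to Fisher's equation u_t = u_xx + u(1 - u).

```python
# Sketch only: CFD6 spatial derivative + TVD-RK3 time stepping for Fisher's equation.
import numpy as np

N, L = 200, 40.0
h = L / N
x = np.arange(N) * h
dt = 0.2 * h**2                                # conservative explicit time step

# Sixth-order compact scheme (periodic):
# (2/11) u''_{i-1} + u''_i + (2/11) u''_{i+1}
#   = (12/11)(u_{i+1} - 2u_i + u_{i-1})/h^2 + (3/11)(u_{i+2} - 2u_i + u_{i-2})/(4h^2)
alpha, a, b = 2/11.0, 12/11.0, 3/11.0
A = (np.eye(N) + alpha*(np.eye(N, k=1) + np.eye(N, k=-1)
                        + np.eye(N, k=N-1) + np.eye(N, k=-(N-1))))

def uxx_cfd6(u):
    rhs_vec = (a*(np.roll(u, -1) - 2*u + np.roll(u, 1)) / h**2
               + b*(np.roll(u, -2) - 2*u + np.roll(u, 2)) / (4*h**2))
    return np.linalg.solve(A, rhs_vec)

def rhs(u):                                    # Fisher's equation right-hand side
    return uxx_cfd6(u) + u*(1.0 - u)

def tvd_rk3_step(u):                           # Shu-Osher third-order TVD Runge-Kutta
    u1 = u + dt*rhs(u)
    u2 = 0.75*u + 0.25*(u1 + dt*rhs(u1))
    return u/3.0 + 2.0/3.0*(u2 + dt*rhs(u2))

u = np.exp(-0.5*(x - L/2)**2)                  # smooth initial hump
for _ in range(800):
    u = tvd_rk3_step(u)
print(f"t = {800*dt:.2f}, u in [{u.min():.3f}, {u.max():.3f}]")
```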

  10. A robust and accurate formulation of molecular and colloidal electrostatics

    Science.gov (United States)

    Sun, Qiang; Klaseboer, Evert; Chan, Derek Y. C.

    2016-08-01

    This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries with the benefits of the following: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures, (ii) being able to use quadratic or spline function surface elements to represent the surface more accurately and the variation of the functions within each element is represented to a consistent level of precision by appropriate interpolation functions, (iii) being able to calculate electric fields, even at boundaries, accurately and directly from the potential without having to solve hypersingular integral equations and this imparts high precision in calculating the Maxwell stress tensor and consequently, intermolecular or colloidal forces, (iv) a reliable way to handle geometric configurations in which different parts of the boundary can be very close together without being affected by numerical instabilities, therefore potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact, and (v) having the simplicity of a formulation that does not require complex algorithms to handle singularities will result in significant savings in coding effort and in the reduction of opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics.

  11. Accurate Method for Determining Adhesion of Cantilever Beams

    Energy Technology Data Exchange (ETDEWEB)

    Michalske, T.A.; de Boer, M.P.

    1999-01-08

    Using surface micromachined samples, we demonstrate the accurate measurement of cantilever beam adhesion by using test structures which are adhered over long attachment lengths. We show that this configuration has a deep energy well, such that a fracture equilibrium is easily reached. When compared to the commonly used method of determining the shortest attached beam, the present method is much less sensitive to variations in surface topography or to details of capillary drying.

  12. Accurate calculation of thermal noise in multilayer coating

    OpenAIRE

    Gurkovsky, Alexey; Vyatchanin, Sergey

    2010-01-01

    We derive accurate formulas for thermal fluctuations in multilayer interferometric coating taking into account light propagation inside the coating. In particular, we calculate the reflected wave phase as a function of small displacements of the boundaries between the layers using transmission line model for interferometric coating and derive formula for spectral density of reflected phase in accordance with Fluctuation-Dissipation Theorem. We apply the developed approach for calculation of t...

  13. Fast and Accurate Bilateral Filtering using Gauss-Polynomial Decomposition

    OpenAIRE

    Chaudhury, Kunal N.

    2015-01-01

    The bilateral filter is a versatile non-linear filter that has found diverse applications in image processing, computer vision, computer graphics, and computational photography. A widely-used form of the filter is the Gaussian bilateral filter in which both the spatial and range kernels are Gaussian. A direct implementation of this filter requires $O(\\sigma^2)$ operations per pixel, where $\\sigma$ is the standard deviation of the spatial Gaussian. In this paper, we propose an accurate approxi...
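    For context, the following is a direct (brute-force) Gaussian bilateral filter, i.e. the O(σ²)-per-pixel baseline that the Gauss-polynomial decomposition is designed to accelerate; it is not the paper's fast algorithm. Window size, kernel parameters and the test image are illustrative.

```python
# Sketch only: direct Gaussian bilateral filter (spatial and range kernels both Gaussian).
import numpy as np

def bilateral_filter(img, sigma_s=3.0, sigma_r=0.1):
    r = int(3 * sigma_s)                        # window radius
    pad = np.pad(img, r, mode='reflect')
    out = np.zeros_like(img)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    spatial = np.exp(-(x**2 + y**2) / (2.0 * sigma_s**2))   # fixed spatial kernel
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 2*r + 1, j:j + 2*r + 1]
            # Range kernel depends on intensity differences to the center pixel.
            range_k = np.exp(-(patch - img[i, j])**2 / (2.0 * sigma_r**2))
            w = spatial * range_k
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out

img = np.clip(np.random.default_rng(0).normal(0.5, 0.05, (64, 64)), 0.0, 1.0)
smoothed = bilateral_filter(img)
```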

  14. Accurate, inexpensive testing of laser pointer power for safe operation

    International Nuclear Information System (INIS)

    An accurate, inexpensive test-bed for the measurement of optical power emitted from handheld lasers is described. The setup consists of a power meter, optical bandpass filters, an adjustable iris and self-centering lens mounts. We demonstrate this test-bed by evaluating the output power of 23 laser pointers with respect to the limits imposed by the US Code of Federal Regulations. We find a compliance rate of only 26%. A discussion of potential laser pointer hazards is included. (paper)

  15. Building with Drones: Accurate 3D Facade Reconstruction using MAVs

    OpenAIRE

    Daftry, Shreyansh; Hoppe, Christof; Bischof, Horst

    2015-01-01

    Automatic reconstruction of 3D models from images using multi-view Structure-from-Motion methods has been one of the most fruitful outcomes of computer vision. These advances combined with the growing popularity of Micro Aerial Vehicles as an autonomous imaging platform, have made 3D vision tools ubiquitous for large number of Architecture, Engineering and Construction applications among audiences, mostly unskilled in computer vision. However, to obtain high-resolution and accurate reconstruc...

  16. Accurate Multisteps Traffic Flow Prediction Based on SVM

    OpenAIRE

    Zhang Mingheng; Zhen Yaobao; Hui Ganglong; Chen Gang

    2013-01-01

    Accurate traffic flow prediction is a prerequisite for realizing intelligent traffic control and guidance, and it is also an objective requirement of intelligent traffic management. Due to the strongly nonlinear, stochastic, time-varying characteristics of urban transport systems, artificial intelligence methods such as the support vector machine (SVM) are now receiving more and more attention in this research field. Compared with the traditional single-step prediction method, the mul...

  17. Accurate molecular classification of cancer using simple rules

    OpenAIRE

    Gotoh Osamu; Wang Xiaosheng

    2009-01-01

    Abstract Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensionality gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often ...

  18. Weather-Corrected Performance Ratio

    Energy Technology Data Exchange (ETDEWEB)

    Dierauf, T.; Growitz, A.; Kurtz, S.; Cruz, J. L. B.; Riley, E.; Hansen, C.

    2013-04-01

    Photovoltaic (PV) system performance depends on both the quality of the system and the weather. One simple way to communicate the system performance is to use the performance ratio (PR): the ratio of the electricity generated to the electricity that would have been generated if the plant consistently converted sunlight to electricity at the level expected from the DC nameplate rating. The annual system yield for flat-plate PV systems is estimated by the product of the annual insolation in the plane of the array, the nameplate rating of the system, and the PR, which provides an attractive way to estimate expected annual system yield. Unfortunately, the PR is, again, a function of both the PV system efficiency and the weather. If the PR is measured during the winter or during the summer, substantially different values may be obtained, making this metric insufficient to use as the basis for a performance guarantee when precise confidence intervals are required. This technical report defines a way to modify the PR calculation to neutralize biases that may be introduced by variations in the weather, while still reporting a PR that reflects the annual PR at that site given the project design and the project weather file. This resulting weather-corrected PR gives more consistent results throughout the year, enabling its use as a metric for performance guarantees while still retaining the familiarity this metric brings to the industry and the value of its use in predicting actual annual system yield. A testing protocol is also presented to illustrate the use of this new metric with the intent of providing a reference starting point for contractual content.
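    A minimal sketch of the idea (not the report's reference implementation): a plain performance ratio and a temperature-corrected variant in which each period's expected energy is rescaled to the annual average cell temperature, so that seasonal temperature swings largely cancel. The column values, temperature coefficient and reference cell temperature below are assumptions for illustration; the exact formulation and parameters should be taken from the report itself.

```python
# Sketch only: plain PR vs. a temperature-corrected PR from hourly plant data.
import numpy as np

P_stc = 1000.0           # DC nameplate rating (kW), assumed
G_stc = 1000.0           # reference irradiance (W/m^2)
gamma = -0.004           # power temperature coefficient (1/degC), assumed
T_ref = 45.0             # annual average cell temperature (degC), assumed

# Hypothetical hourly measurements: AC energy (kWh), POA irradiance (W/m^2),
# cell temperature (degC).
energy = np.array([800.0, 950.0, 400.0, 900.0])
g_poa = np.array([900.0, 1000.0, 500.0, 950.0])
t_cell = np.array([35.0, 55.0, 25.0, 50.0])

# Conventional PR: measured energy over nameplate energy scaled by in-plane insolation.
expected = P_stc * g_poa / G_stc               # kWh expected in each hour
pr = energy.sum() / expected.sum()

# Temperature-corrected PR: rescale each hour's expected energy to the annual
# average cell temperature so hot (or cold) periods do not bias the ratio.
expected_corr = expected * (1.0 + gamma * (t_cell - T_ref))
pr_corr = energy.sum() / expected_corr.sum()

print(f"PR = {pr:.3f}, weather-corrected PR = {pr_corr:.3f}")
```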

  19. Accurate computations of the structures and binding energies of the imidazole⋯benzene and pyrrole⋯benzene complexes

    Energy Technology Data Exchange (ETDEWEB)

    Ahnen, Sandra; Hehn, Anna-Sophia [Institute of Physical Chemistry, Karlsruhe Institute of Technology (KIT), Fritz-Haber-Weg 2, D-76131 Karlsruhe (Germany); Vogiatzis, Konstantinos D. [Institute of Physical Chemistry, Karlsruhe Institute of Technology (KIT), Fritz-Haber-Weg 2, D-76131 Karlsruhe (Germany); Center for Functional Nanostructures, Karlsruhe Institute of Technology (KIT), Wolfgang-Gaede-Straße 1a, D-76131 Karlsruhe (Germany); Trachsel, Maria A.; Leutwyler, Samuel [Department of Chemistry and Biochemistry, University of Bern, Freiestrasse 3, CH-3012 Bern (Switzerland); Klopper, Wim, E-mail: klopper@kit.edu [Institute of Physical Chemistry, Karlsruhe Institute of Technology (KIT), Fritz-Haber-Weg 2, D-76131 Karlsruhe (Germany); Center for Functional Nanostructures, Karlsruhe Institute of Technology (KIT), Wolfgang-Gaede-Straße 1a, D-76131 Karlsruhe (Germany)

    2014-09-30

    Highlights: • We have computed accurate binding energies of two NH⋯π hydrogen bonds. • We compare to results from dispersion-corrected density-functional theory. • A double-hybrid functional with explicit correlation has been proposed. • First results of explicitly-correlated ring-coupled-cluster theory are presented. • A double-hybrid functional with random-phase approximation is investigated. - Abstract: Using explicitly-correlated coupled-cluster theory with single and double excitations, the intermolecular distances and interaction energies of the T-shaped imidazole⋯benzene and pyrrole⋯benzene complexes have been computed in a large augmented correlation-consistent quadruple-zeta basis set, adding also corrections for connected triple excitations and remaining basis-set-superposition errors. The results of these computations are used to assess other methods such as Møller–Plesset perturbation theory (MP2), spin-component-scaled MP2 theory, dispersion-weighted MP2 theory, interference-corrected explicitly-correlated MP2 theory, dispersion-corrected double-hybrid density-functional theory (DFT), DFT-based symmetry-adapted perturbation theory, the random-phase approximation, explicitly-correlated ring-coupled-cluster-doubles theory, and double-hybrid DFT with a correlation energy computed in the random-phase approximation.

  20. An accurate metric for the spacetime around neutron stars

    CERN Document Server

    Pappas, George

    2016-01-01

    The problem of having an accurate description of the spacetime around neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a neutron star. Furthermore, an accurate appropriately parameterised metric, i.e., a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to infer the properties of the structure of a neutron star from astrophysical observations. In this work we present such an approximate stationary and axisymmetric metric for the exterior of neutron stars, which is constructed using the Ernst formalism and is parameterised by the relativistic multipole moments of the central object. This metric is given in terms of an expansion on the Weyl-Papapetrou coordinates with the multipole moments as free parameters and is shown to be extremely accurate in capturing the physical propert...

  1. Accurate genome relative abundance estimation based on shotgun metagenomic reads.

    Directory of Open Access Journals (Sweden)

    Li C Xia

    Full Text Available Accurate estimation of microbial community composition based on metagenomic sequencing data is fundamental for subsequent metagenomics analysis. Prevalent estimation methods are mainly based on directly summarizing alignment results or their variants and often result in biased and/or unstable estimates. We have developed a unified probabilistic framework (named GRAMMy) that explicitly models read assignment ambiguities, genome size biases and read distributions along the genomes. A maximum likelihood method is employed to compute the Genome Relative Abundance of microbial communities using the Mixture Model theory (GRAMMy). GRAMMy has been demonstrated to give estimates that are accurate and robust across both simulated and real read benchmark datasets. We applied GRAMMy to a collection of 34 metagenomic read sets from four metagenomics projects and identified 99 frequent species (minimally 0.5% abundant in at least 50% of the datasets) in the human gut samples. Our results show substantial improvements over previous studies, such as adjusting the over-estimated abundance of Bacteroides species in human gut samples, by providing a new reference-based strategy for metagenomic sample comparisons. GRAMMy can be used flexibly with many read assignment tools (mapping, alignment or composition-based), even with low-sensitivity mapping results from huge short-read datasets. It will be increasingly useful as an accurate and robust tool for abundance estimation with the growing size of read sets and the expanding database of reference genomes.
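    An illustrative EM sketch in the spirit of the mixture-model framework described above (not the GRAMMy code): ambiguously mapped reads carry probabilities for several genomes, the mixture weights are the maximum-likelihood read proportions, and genome length is normalized out to obtain relative abundances. The mapping-probability matrix and genome lengths are invented.

```python
# Sketch only: EM for genome relative abundance from ambiguous read assignments.
import numpy as np

# p[i, j]: probability of read i given genome j (e.g. from alignment scores);
# zero means the read does not map to that genome.
p = np.array([[0.9, 0.1, 0.0],
              [0.5, 0.5, 0.0],
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0],
              [0.6, 0.4, 0.0]])
genome_len = np.array([2.0e6, 4.0e6, 1.0e6])

pi = np.full(p.shape[1], 1.0 / p.shape[1])     # initial mixture weights
for _ in range(200):
    resp = pi * p                              # E-step: read-to-genome responsibilities
    resp /= resp.sum(axis=1, keepdims=True)
    pi = resp.sum(axis=0) / p.shape[0]         # M-step: updated mixture weights

# Convert read-level mixture weights into genome relative abundance by dividing
# out genome length (longer genomes attract proportionally more reads per cell).
abundance = pi / genome_len
abundance /= abundance.sum()
print("mixture weights:", np.round(pi, 3))
print("relative abundance:", np.round(abundance, 3))
```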

  2. Is bioelectrical impedance accurate for use in large epidemiological studies?

    Directory of Open Access Journals (Sweden)

    Merchant Anwar T

    2008-09-01

    Full Text Available Abstract Percentage of body fat is strongly associated with the risk of several chronic diseases but its accurate measurement is difficult. Bioelectrical impedance analysis (BIA) is a relatively simple, quick and non-invasive technique to measure body composition. It measures body fat accurately in controlled clinical conditions but its performance in the field is inconsistent. In large epidemiologic studies simpler surrogate techniques such as body mass index (BMI), waist circumference, and waist-hip ratio are frequently used instead of BIA to measure body fatness. We reviewed the rationale, theory, and technique of recently developed systems such as foot (or hand-to-foot) BIA measurement, and the elements that could influence its results in large epidemiologic studies. BIA results are influenced by factors such as the environment, ethnicity, phase of menstrual cycle, and underlying medical conditions. We concluded that BIA measurements validated for specific ethnic groups, populations and conditions can accurately measure body fat in those populations, but not in others, and suggest that for large epidemiological studies with diverse populations BIA may not be the appropriate choice for body composition measurement unless specific calibration equations are developed for the different groups participating in the study.

  3. Can blind persons accurately assess body size from the voice?

    Science.gov (United States)

    Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka

    2016-04-01

    Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. PMID:27095264

  4. Accurate and efficient Nyström volume integral equation method for the Maxwell equations for multiple 3-D scatterers

    Science.gov (United States)

    Chen, Duan; Cai, Wei; Zinser, Brian; Cho, Min Hyung

    2016-09-01

    In this paper, we develop an accurate and efficient Nyström volume integral equation (VIE) method for the Maxwell equations for a large number of 3-D scatterers. The Cauchy Principal Values that arise from the VIE are computed accurately using a finite size exclusion volume together with explicit correction integrals consisting of removable singularities. Also, the hyper-singular integrals are computed using interpolated quadrature formulae with tensor-product quadrature nodes for cubes, spheres and cylinders, that are frequently encountered in the design of meta-materials. The resulting Nyström VIE method is shown to have high accuracy with a small number of collocation points and demonstrates p-convergence for computing the electromagnetic scattering of these objects. Numerical calculations of multiple scatterers of cubic, spherical, and cylindrical shapes validate the efficiency and accuracy of the proposed method.

  5. QCD corrections to tri-boson production

    CERN Document Server

    Lazopoulos, A; Petriello, F J; Lazopoulos, Achilleas; Melnikov, Kirill; Petriello, Frank

    2007-01-01

    We present a computation of the next-to-leading order QCD corrections to the production of three Z bosons at the LHC. We calculate these corrections using a completely numerical method that combines sector decomposition to extract infrared singularities with contour deformation of the Feynman parameter integrals to avoid internal loop thresholds. The NLO QCD corrections to pp -> ZZZ are approximately 50%, and are badly underestimated by the leading order scale dependence. However, the kinematic dependence of the corrections is minimal in phase space regions accessible at leading order.

  6. Higgs Pseudo Observables and Radiative Corrections

    CERN Document Server

    Bordone, Marzia; Isidori, Gino; Marzocca, David; Pattori, Andrea

    2015-01-01

    We show how leading radiative corrections can be implemented in the general description of $h\\to 4\\ell$ decays by means of Pseudo Observables (PO). With the inclusion of such corrections, the PO description of $h\\to 4\\ell$ decays can be matched to next-to-leading-order electroweak calculations both within and beyond the Standard Model (SM). In particular, we demonstrate that with the inclusion of such corrections the complete next-to-leading-order Standard Model prediction for the $h\\to 2e2\\mu$ dilepton mass spectrum is recovered within 1% accuracy. The impact of radiative corrections for non-standard PO is also briefly discussed.

  7. The prosody of speech error corrections revisited

    OpenAIRE

    Shattuck-Hufnagel, S.; Cutler, A.

    1999-01-01

    A corpus of digitized speech errors is used to compare the prosody of correction patterns for word-level vs. sound-level errors. Results for both peak F0 and perceived prosodic markedness confirm that speakers are more likely to mark corrections of word-level errors than corrections of sound-level errors, and that errors ambiguous between word-level and sound-level (such as boat for moat) show correction patterns like those for sound-level errors. This finding increases the plausibility of the...

  8. Higgs pseudo observables and radiative corrections

    Energy Technology Data Exchange (ETDEWEB)

    Bordone, Marzia; Marzocca, David; Pattori, Andrea [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Greljo, Admir [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); University of Sarajevo, Faculty of Science, Sarajevo (Bosnia and Herzegovina); Isidori, Gino [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); INFN, Laboratori Nazionali di Frascati, Frascati (Italy)

    2015-08-15

    We show how leading radiative corrections can be implemented in the general description of h → 4l decays by means of pseudo observables (PO). With the inclusion of such corrections, the PO description of h → 4l decays can be matched to next-to-leading-order electroweak calculations both within and beyond the Standard Model (SM). In particular, we demonstrate that with the inclusion of such corrections the complete next-to-leading-order SM prediction for the h → 2e2μ dilepton mass spectrum is recovered within 1% accuracy. The impact of radiative corrections for non-standard PO is also briefly discussed. (orig.)

  9. A Global Correction to PPMXL Proper Motions

    CERN Document Server

    Vickers, John J; Grebel, Eva K

    2016-01-01

    In this paper we notice that extragalactic sources appear to have non-zero proper motions in the PPMXL proper motion catalog. We collect a large, all-sky sample of extragalactic objects and fit their reported PPMXL proper motions to an ensemble of spherical harmonics in magnitude shells. A magnitude-dependent proper motion correction is thus constructed. This correction is applied to a set of fundamental radio sources, quasars, and is compared to similar corrections to assess its utility. We publish, along with this paper, code which may be used to correct proper motions over the full sky for PPMXL sources that have 2 Micron All Sky Survey photometry.
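    An illustrative sketch of the correction strategy (not the published code, and ignoring the magnitude-shell dependence): the proper motions of extragalactic objects, which should vanish, are fit with a few low-order real harmonics on the sky, and the fitted pattern is then subtracted from catalogue proper motions. Positions, the injected pattern and noise levels are synthetic.

```python
# Sketch only: fit and subtract a smooth sky pattern from quasar proper motions.
import numpy as np

def design(ra_deg, dec_deg):
    """Low-order (l <= 1) real-harmonic basis evaluated at sky positions."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.column_stack([np.ones_like(ra),
                            np.sin(dec),
                            np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra)])

rng = np.random.default_rng(1)
ra_q = rng.uniform(0.0, 360.0, 5000)                     # synthetic quasar sample
dec_q = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, 5000)))
X = design(ra_q, dec_q)
true_pattern = np.array([1.5, -2.0, 0.8, 0.3])           # mas/yr, made up
pm_ra = X @ true_pattern + rng.normal(0.0, 4.0, 5000)    # spurious "measured" motions

coeff, *_ = np.linalg.lstsq(X, pm_ra, rcond=None)        # fitted systematic pattern

def correct_pm(ra_deg, dec_deg, pm):
    """Subtract the fitted systematic pattern from a catalogue proper motion."""
    return pm - (design(np.atleast_1d(ra_deg), np.atleast_1d(dec_deg)) @ coeff).item()

print("fitted pattern:", np.round(coeff, 2))
print("corrected example:", round(correct_pm(120.0, -30.0, 5.0), 2), "mas/yr")
```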

  10. SPELL CHECKING AND ERROR CORRECTING SYSTEM FOR TEXT PARAGRAPHS WRITTEN IN PUNJABI OR HINDI LANGUAGE USING HYBRID APPROACH

    OpenAIRE

    Amandeep Singh; Prof. Meenakshi Sharma

    2016-01-01

    Spell-checking is the process of detecting and correcting incorrectly spelled words in a text paragraph. A spell checking system first detects the incorrect words and then provides the best possible corrected words. A spell checking system is a combination of handcrafted rules of the language for which the spell checker is created and a dictionary which contains the accurate spellings of various words. Better rules and a larger dictionary of words help to improve the rate of e...

  11. A Comparative Assessment of Spalart-Shur Rotation/Curvature Correction in RANS Simulations in a Centrifugal Pump Impeller

    OpenAIRE

    Ran Tao; Ruofu Xiao; Wei Yang; Fujun Wang

    2014-01-01

    RANS simulation is widely used in the flow prediction of centrifugal pumps. Influenced by impeller rotation and streamline curvature, the eddy viscosity models with turbulence isotropy assumption are not accurate enough. In this study, Spalart-Shur rotation/curvature correction was applied on the SST k-ω turbulence model. The comparative assessment of the correction was proceeded in the simulations of a centrifugal pump impeller. CFD results were compared with existing PIV and LDV data under ...

  12. Iterative CT shading correction with no prior information

    International Nuclear Information System (INIS)

    Shading artifacts in CT images are caused by scatter contamination, beam-hardening effect and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge of the relatively uniform CT number distribution in one tissue component. The CT image is first segmented to construct a template image where each structure is filled with the same CT number of a specific tissue type. Then, by subtracting the ideal template from the CT image, the residual image from various error sources are generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes shading artifacts. A compensation map is reconstructed from the filtered line integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan©600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to be less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast object is faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is only assisted by general anatomical information without relying on prior knowledge. The proposed method is thus practical

  13. Iterative CT shading correction with no prior information

    Science.gov (United States)

    Wu, Pengwei; Sun, Xiaonan; Hu, Hongjie; Mao, Tingyu; Zhao, Wei; Sheng, Ke; Cheung, Alice A.; Niu, Tianye

    2015-11-01

    Shading artifacts in CT images are caused by scatter contamination, beam-hardening effect and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge of the relatively uniform CT number distribution in one tissue component. The CT image is first segmented to construct a template image where each structure is filled with the same CT number of a specific tissue type. Then, by subtracting the ideal template from the CT image, the residual image from various error sources are generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes shading artifacts. A compensation map is reconstructed from the filtered line integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan©600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to be less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast object is faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is only assisted by general anatomical information without relying on prior knowledge. The proposed method is thus practical
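    A deliberately simplified 2D sketch of the iterative loop described above, using parallel-beam radon/iradon from scikit-image as stand-ins for the cone-beam forward projection and FDK reconstruction; segmentation thresholds, tissue values and filter widths are illustrative, not the authors' settings.

```python
# Sketch only: iterative, prior-free shading correction (2D parallel-beam surrogate).
import numpy as np
from scipy.ndimage import gaussian_filter1d
from skimage.transform import radon, iradon

def make_template(img, thresholds=(-500.0, 200.0)):
    """Segment into air / soft tissue / bone and fill each class with one CT number."""
    template = np.full_like(img, -1000.0)                # air
    soft = (img >= thresholds[0]) & (img < thresholds[1])
    template[soft] = 0.0                                 # nominal soft-tissue HU
    template[img >= thresholds[1]] = 700.0               # nominal bone HU
    return template

def shading_correction(img, n_iter=5,
                       theta=np.linspace(0.0, 180.0, 180, endpoint=False)):
    """Assumes a square float image; returns the shading-corrected image."""
    corrected = img.astype(float).copy()
    for _ in range(n_iter):
        residual = corrected - make_template(corrected)  # shading + segmentation error
        sino = radon(residual, theta=theta)              # forward-project the residual
        sino_lp = gaussian_filter1d(sino, sigma=20, axis=0)  # keep low frequencies only
        compensation = iradon(sino_lp, theta=theta, filter_name='ramp',
                              output_size=img.shape[0])  # reconstruct shading estimate
        corrected -= compensation                        # remove estimated shading
    return corrected
```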

  14. Accurate measurements of carbon monoxide in humid air using the cavity ring-down spectroscopy (CRDS technique

    Directory of Open Access Journals (Sweden)

    H. Chen

    2012-09-01

    Full Text Available Accurate measurements of carbon monoxide (CO in humid air have been made using the cavity ring-down spectroscopy (CRDS technique. The measurements of CO mole fractions are determined from the strength of its spectral absorption in the near infrared region (∼1.57 μm after removing interferences from adjacent carbon dioxide (CO2 and water vapor (H2O absorption lines. Water correction functions that account for the dilution and pressure-broadening effects as well as absorption line interferences from adjacent CO2 and H2O lines have been derived for CO2 mole fractions between 360–390 ppm. The line interference corrections are independent of CO mole fractions. The dependence of the line interference correction on CO2 abundance is estimated to be approximately −0.3 ppb/100 ppm CO2 for dry mole fractions of CO. Comparisons of water correction functions from different analyzers of the same type show significant differences, making it necessary to perform instrument-specific water tests for each individual analyzer. The CRDS analyzer was flown on an aircraft in Alaska from April to November in 2011, and the accuracy of the CO measurements by the CRDS analyzer has been validated against discrete NOAA/ESRL flask sample measurements made on board the same aircraft, with a mean difference between integrated in situ and flask measurements of −0.6 ppb and a standard deviation of 2.8 ppb. Preliminary testing of CRDS instrumentation that employs new spectroscopic analysis (available since the beginning of 2012 indicates a smaller water vapor dependence than the models discussed here, but more work is necessary to fully validate the performance. The CRDS technique provides an accurate and low-maintenance method of monitoring the atmospheric dry mole fractions of CO in humid air streams.
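    A minimal sketch of an instrument-specific water-correction workflow consistent with the description above (coefficients and test data are invented, not the paper's values): a quadratic correction function is fitted from a water-droplet test in which the dry CO mole fraction is known, and is then used to convert wet field measurements to dry mole fractions.

```python
# Sketch only: fit and apply an instrument-specific water correction for CO.
import numpy as np

# Hypothetical droplet-test data: reported H2O (%), wet CO (ppb), known dry CO (ppb).
h2o = np.array([0.1, 0.5, 1.0, 1.5, 2.0, 2.5])
co_wet = np.array([149.6, 147.9, 145.7, 143.4, 141.0, 138.5])
co_dry_ref = 150.0

# Fit co_wet / co_dry = 1 + a*h2o + b*h2o**2 (dilution + pressure broadening).
ratio = co_wet / co_dry_ref
a, b = np.linalg.lstsq(np.column_stack([h2o, h2o**2]), ratio - 1.0, rcond=None)[0]

def to_dry(co_wet_meas, h2o_meas):
    """Apply the fitted water correction to wet field measurements."""
    return co_wet_meas / (1.0 + a*h2o_meas + b*h2o_meas**2)

print(f"a = {a:.4e}, b = {b:.4e}; dry CO at 1.8% H2O: {to_dry(142.0, 1.8):.1f} ppb")
```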

  15. ARMA Prediction of SBAS Ephemeris and Clock Corrections for Low Earth Orbiting Satellites

    Directory of Open Access Journals (Sweden)

    Jeongrae Kim

    2015-01-01

    Full Text Available For low earth orbit (LEO) satellite GPS receivers, space-based augmentation system (SBAS) ephemeris/clock corrections can be applied to improve positioning accuracy in real time. The SBAS correction is only available within its service area, and prediction of the SBAS corrections during the outage period can extend the coverage area. Two time series forecasting models, autoregressive moving average (ARMA) and autoregressive (AR), are proposed to predict the corrections outside the service area. A simulated GPS satellite visibility condition is applied to the WAAS correction data, and the degradation of prediction accuracy with time is investigated. Prediction results using the SBAS rate-of-change information are compared, and the ARMA method yields better accuracy than the rate method. The error reductions of the ephemeris and clock by the ARMA method over the rate method are 37.8% and 38.5%, respectively. The AR method shows slightly better orbit accuracy than the rate method, but its clock accuracy is even worse than that of the rate method. If the SBAS correction is sufficiently accurate compared with the required ephemeris accuracy of a real-time navigation filter, then the predicted SBAS correction may improve orbit determination accuracy.
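    A minimal sketch (assuming the statsmodels library; the series, model orders and outage length are synthetic and illustrative): an ARMA model is fitted to past SBAS clock-correction samples and used to predict the corrections over a short outage, with a simple rate (last-slope) extrapolation as the baseline the study compares against.

```python
# Sketch only: ARMA prediction of SBAS corrections during an outage vs. a rate baseline.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
t = np.arange(600)                              # past correction epochs
ar_noise = np.zeros(600)
for k in range(1, 600):                         # AR(1) noise, illustrative only
    ar_noise[k] = 0.8*ar_noise[k-1] + 0.01*rng.standard_normal()
clock_corr = 0.02*np.sin(2*np.pi*t/300.0) + ar_noise   # synthetic clock correction (m)

fit = ARIMA(clock_corr, order=(2, 0, 1)).fit()  # ARMA(2, 1), no differencing
outage_len = 120                                # epochs to predict outside coverage
arma_pred = fit.forecast(steps=outage_len)

# Rate-of-change baseline: extrapolate with the last observed slope.
rate = clock_corr[-1] - clock_corr[-2]
rate_pred = clock_corr[-1] + rate*np.arange(1, outage_len + 1)
print("ARMA:", np.round(arma_pred[:3], 4), " rate:", np.round(rate_pred[:3], 4))
```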

  16. Comparative study of van der Waals corrections to the bulk properties of graphite.

    Science.gov (United States)

    Rêgo, Celso R C; Oliveira, Luiz N; Tereshchuk, Polina; Da Silva, Juarez L F

    2015-10-21

    Graphite is a stack of honeycomb (graphene) layers bound together by nonlocal, long-range van der Waals (vdW) forces, which are poorly described by density functional theory (DFT) within local or semilocal exchange-correlation functionals. Several approximations have been proposed to add a vdW correction to the DFT total energies (Stefan Grimme's D2 and D3 with different damping functions (D3-BJ), and Tkatchenko-Scheffler (TS) without and with self-consistent screening (TS + SCS) effects). Those corrections have remarkably improved the agreement between our results and experiment for the interlayer distance (from 3.9 to 0.6%) [corrected] and high-level random-phase approximation (RPA) calculations for the interlayer binding energy (from 69.5 to 1.5%) [corrected]. We report a systematic investigation of various structural, energetic and electronic properties with the aforementioned vdW corrections, followed by comparison with experimental and theoretical RPA data. Comparison between the resulting relative errors shows that the TS + SCS correction provides the best results; the other corrections yield significantly larger errors for at least one of the studied properties. If considerations of computational cost or convergence problems rule out the TS + SCS approach, we recommend the D3-BJ correction. Comparison between the computed π(z)Γ-splitting and experimental results shows disagreements of 10% or more with all vdW corrections. Even the computationally more expensive hybrid PBE0 has proved unable to improve the agreement with the measured splitting. Our results indicate that improvements of the exchange-correlation functionals beyond the vdW corrections are necessary to accurately describe the band structure of graphite.

  17. Escaping the correction for body surface area when calculating glomerular filtration rate in children

    Energy Technology Data Exchange (ETDEWEB)

    Piepsz, Amy; Tondeur, Marianne [CHU St. Pierre, Department of Radioisotopes, Brussels (Belgium); Ham, Hamphrey [University Hospital Ghent, Department of Nuclear Medicine, Ghent (Belgium)

    2008-09-15

    {sup 51}Cr ethylene diamine tetraacetic acid ({sup 51}Cr EDTA) clearance is nowadays considered an accurate and reproducible method for measuring glomerular filtration rate (GFR) in children. Normal values as a function of age, corrected for body surface area, have recently been updated. However, much criticism has been expressed about the validity of the body surface area correction. The aim of the present paper was to present the normal GFR values, not corrected for body surface area, with the associated percentile curves. For that purpose, the same patients as in the previous paper were selected, namely those with no recent urinary tract infection, a normal left-to-right {sup 99m}Tc MAG3 uptake ratio and a normal kidney morphology on the early parenchymal images. A single blood sample method was used for {sup 51}Cr EDTA clearance measurement. Clearance values, not corrected for body surface area, increased progressively up to adolescence. The percentile curves were determined and allow, for a single patient, accurate estimation of the level of non-corrected clearance and of its evolution with time, whatever the age. (orig.)

  18. Next-to-Leading Order Corrections to Higgs Boson Pair Production in Gluon Fusion

    CERN Document Server

    Kerner, Matthias

    2016-01-01

    We present a calculation of the next-to-leading order QCD corrections to the production of Higgs boson pairs in gluon fusion keeping the full dependence on the mass of the top quark. The virtual corrections, involving two-loop integrals with up to four mass scales, have been calculated numerically and we present an efficient algorithm to obtain accurate results of the virtual amplitude using numerical integrations. Taking the top quark mass into account we obtain significant differences compared to results obtained in the heavy top limit.

  19. Comparative study on atmospheric correction methods of visible and near-infrared hyperspectral image

    Science.gov (United States)

    He, Qian; Wu, Jingli; Wang, Guangping; Liu, Chang; Tao, Tao

    2015-03-01

    Currently, common atmospheric correction methods are based either on the statistical information of the image itself, which yields relative reflectance, or on a radiative transfer model with meteorological parameters for accurate calculations. In order to compare the advantages and disadvantages of these methods, we carried out atmospheric correction experiments based on AVIRIS Airborne Visible and Near-Infrared hyperspectral data. The experiments showed that the statistical method is simple and convenient but has limited adaptability and can only provide relative reflectance, whereas the radiative transfer model method is more complex and requires auxiliary information, but yields precise absolute reflectance of surface features.

  20. Continuous quantum error correction by cooling

    CERN Document Server

    Sarovar, M; Sarovar, Mohan

    2005-01-01

    We describe an implementation of quantum error correction that operates continuously in time and requires no active interventions such as measurements or gates. The mechanism for carrying away the entropy introduced by errors is a cooling procedure. We evaluate the effectiveness of the scheme by simulation, and remark on the connections between this error correction scheme and the quantum Zeno effect.

  1. Relativistic Scott correction for atoms and molecules

    DEFF Research Database (Denmark)

    Solovej, Jan Philip; Sørensen, Thomas Østergaard; Spitzer, Wolfgang Ludwig

    2010-01-01

    We prove the first correction to the leading Thomas-Fermi energy for the ground state energy of atoms and molecules in a model where the kinetic energy of the electrons is treated relativistically. The leading Thomas-Fermi energy, established in [25], as well as the correction given here, are of ...

  2. 40 CFR 68.195 - Required corrections.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Required corrections. 68.195 Section 68.195 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CHEMICAL ACCIDENT PREVENTION PROVISIONS Risk Management Plan § 68.195 Required corrections. The owner...

  3. Corrections to scaling at the Anderson transition

    OpenAIRE

    Slevin, Keith; Ohtsuki, Tomi

    1998-01-01

    We report a numerical analysis of corrections to finite size scaling at the Anderson transition due to irrelevant scaling variables and non-linearities of the scaling variables. By taking proper account of these corrections, the universality of the critical exponent for the orthogonal universality class for three different distributions of the random potential is convincingly demonstrated.

  4. 76 FR 11337 - Presidential Library Facilities; Correction

    Science.gov (United States)

    2011-03-02

    ..., June 17, 2008 (73 FR 34197) that are the subject of this correction, NARA adopted and incorporated by... RECORDS ADMINISTRATION 36 CFR Part 1281 RIN 3095-AA82 Presidential Library Facilities; Correction AGENCY... libraries and information required in NARA's reports to Congress before accepting title to or entering...

  5. Thermoelastic Correction in the Torsion Pendulum Experiment

    Institute of Scientific and Technical Information of China (English)

    胡忠坤; 王雪黎; 罗俊

    2001-01-01

    The thermoelastic effect of the suspension fibre in the torsion pendulum experiment with magnetic damping was studied. The disagreement in the oscillation periods was reduced by one order of magnitude through monitoring the ambient temperature and thermoelastic correction. We also found that the period on uncertainty due to noise increases with the amplitude attenuation after thermoelastic correction.

  6. Error Correction as a Cultural Phenomenon

    Science.gov (United States)

    McGarry, Richard

    2004-01-01

    This study examines the pedagogical and pragmatic motives behind error correction both in classroom contexts and in everyday conversation among native Spanish-speaking English teachers in Costa Rica. Survey and interview data are analyzed and discussed in terms of participants' attitudes toward correction of errors in L1 and L2 in various…

  7. FISICO: Fast Image SegmentatIon COrrection

    Science.gov (United States)

    Valenzuela, Waldo; Ferguson, Stephen J.; Ignasiak, Dominika; Diserens, Gaëlle; Häni, Levin; Wiest, Roland; Vermathen, Peter; Boesch, Chris

    2016-01-01

    Background and Purpose In clinical diagnosis, medical image segmentation plays a key role in the analysis of pathological regions. Despite advances in automatic and semi-automatic segmentation techniques, time-effective correction tools are commonly needed to improve segmentation results. Therefore, these tools must provide faster corrections with a lower number of interactions, and a user-independent solution to reduce the time frame between image acquisition and diagnosis. Methods We present a new interactive method for correcting image segmentations. Our method provides 3D shape corrections through 2D interactions. This approach enables intuitive and natural correction of 3D segmentation results. The developed method has been implemented into a software tool and has been evaluated for the task of lumbar muscle and knee joint segmentations from MR images. Results Experimental results show that full segmentation corrections could be performed within an average correction time of 5.5±3.3 minutes and an average of 56.5±33.1 user interactions, while maintaining the quality of the final segmentation result within an average Dice coefficient of 0.92±0.02 for both anatomies. In addition, for users with different levels of expertise, our method yields a decrease in correction time from 38±19.2 to 6.4±4.3 minutes and in the number of interactions from 339±157.1 to 67.7±39.6, respectively. PMID:27224061

  8. 78 FR 15877 - Taxable Medical Devices; Correction

    Science.gov (United States)

    2013-03-13

    ..., 2012 (77 FR 72924). The final regulations provide guidance on the excise tax imposed on the sale of... Accordingly, the final regulations (TD 9604), that are the subject of FR Doc. 2012-29628, are corrected as... device to the FDA's'' is corrected to read ``of a taxable medical device to the FDA's''. 8. On page...

  9. 76 FR 3837 - Nuclear Decommissioning Funds; Correction

    Science.gov (United States)

    2011-01-21

    ... Internal Revenue Service 26 CFR Part 1 RIN 1545-BF08 Nuclear Decommissioning Funds; Correction AGENCY... 23, 2010 (75 FR 80697) relating to deductions for contributions to trusts maintained for decommissioning nuclear power plants. DATES: This correction is effective on January 21, 2011, and is...

  10. 77 FR 61229 - Availability of Records; Correction

    Science.gov (United States)

    2012-10-09

    ... published in the February 27, 2012, Federal Register (77 FR 11384) and provides the correct facsimile number... / Tuesday, October 9, 2012 / Rules and Regulations FEDERAL RETIREMENT THRIFT INVESTMENT BOARD 5 CFR Part 1631 Availability of Records; Correction AGENCY: Federal Retirement Thrift Investment...

  11. A refined tip correction based on decambering

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær; Dag, Kaya Onur; Ramos García, Néstor

    2016-01-01

    A new tip correction for use in performance codes based on the blade element momentum (BEM) or the lifting-line technique is presented. The correction modifies the circulation by taking into account the additional influence of the induction of the vortices in the wake, using the so-called decamberi...

  12. Optical advantages of astigmatic aberration corrected heliostats

    Science.gov (United States)

    van Rooyen, De Wet; Schöttl, Peter; Bern, Gregor; Heimsath, Anna; Nitz, Peter

    2016-05-01

    Astigmatic aberration corrected heliostats adapt their shape depending on the incidence angle of the sun on the heliostat. Simulations show that this optical correction leads to a higher concentration ratio at the target and thus to a decrease in the required receiver aperture, in particular for smaller heliostat fields.

  13. A Hybrid Approach for Correcting Grammatical Errors

    Science.gov (United States)

    Lee, Kiyoung; Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun

    2015-01-01

    This paper presents a hybrid approach for correcting grammatical errors in the sentences uttered by Korean learners of English. The error correction system plays an important role in GenieTutor, which is a dialogue-based English learning system designed to teach English to Korean students. During the talk with GenieTutor, grammatical error…

  14. 5 CFR 930.113 - Corrective action.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Corrective action. 930.113 Section 930....113 Corrective action. An agency will take adverse, disciplinary, or other appropriate action against... such action against an operator or an incidental operator: (a) The employee is convicted of...

  15. 40 CFR 1065.672 - Drift correction.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Drift correction. 1065.672 Section 1065.672 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.672 Drift correction. (a) Scope...

  16. 75 FR 9100 - Proxy Disclosure Enhancements; Correction

    Science.gov (United States)

    2010-03-01

    ... allows certain wholly-owned subsidiaries to omit the disclosure of shareholder voting results. We also... the Federal Register on December 23, 2009 (74 FR 68334). Specifically, we are correcting Forms 10-Q... following corrections to Release No. 33-9089 (December 16, 2009), which was published in FR Doc....

  17. Beyond Political Correctness: Toward the Inclusive University.

    Science.gov (United States)

    Richer, Stephen, Ed.; Weir, Lorna, Ed.

    This collection of 12 essays examines the history of the discourse over political correctness (PC) in Canadian academia, focusing on the neoconservative backlash to affirmative action, inclusive policies, and feminist and anti-racist teaching in the classroom. It includes: (1) "Introduction: Political Correctness and the Inclusive University"…

  18. 76 FR 32866 - Cable Landing Licenses; Correction

    Science.gov (United States)

    2011-06-07

    ... Systems Agency in the regulations that we published in the Federal Register of January 14, 2002, 67 FR... COMMISSION 47 CFR Part 1 Cable Landing Licenses; Correction AGENCY: Federal Communications Commission. ACTION... streamlined processing of cable landing license applications. Need for Correction As published, the...

  19. Preparing and correcting extracted BRITE observations

    CERN Document Server

    Buysschaert, B; Neiner, C

    2016-01-01

    Extracted BRITE lightcurves must be carefully prepared and corrected for instrumental effects before a scientific analysis can be performed. Therefore, we have created a suite of Python routines to prepare and correct the lightcurves, which is publicly available. In this paper we describe the method and successive steps performed by these routines.

  20. 75 FR 41530 - Petitions for Modification; Correction

    Science.gov (United States)

    2010-07-16

    ... Safety and Health Administration Petitions for Modification; Correction AGENCY: Mine Safety and Health Administration, Labor. ACTION: Notice; correction. SUMMARY: The Mine Safety and Health Administration (MSHA... Affected: 30 CFR 75.507-1(a) (Electric equipment other than power-connection points; outby the last...