Accurate adiabatic correction in the hydrogen molecule
Pachucki, Krzysztof, E-mail: krp@fuw.edu.pl [Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw (Poland)]; Komasa, Jacek, E-mail: komasa@man.poznan.pl [Faculty of Chemistry, Adam Mickiewicz University, Umultowska 89b, 61-614 Poznań (Poland)]
2014-12-14
A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in James-Coolidge basis functions. Systematic enlargement of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, almost three orders of magnitude better than the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of present-day theoretical predictions for rovibrational levels.
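The last step described above — solving the nuclear Schrödinger equation on the corrected potential — can be sketched numerically. The following is an illustrative finite-difference bound-state solver, not the paper's method; it is validated here on a harmonic potential, and the reduced-mass value is only approximate:

```python
import numpy as np

def bound_states(V, x, mu, n_levels=5):
    """Lowest eigenvalues of -1/(2*mu) d^2/dx^2 + V(x) (atomic units)
    on a uniform grid, via a finite-difference discretization."""
    h = x[1] - x[0]
    diag = 1.0 / (mu * h**2) + V          # kinetic + potential diagonal
    off = np.full(len(x) - 1, -1.0 / (2.0 * mu * h**2))
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:n_levels]

# Sanity check on a harmonic potential, where E_n = omega*(n + 1/2)
mu, omega = 918.0, 0.02                   # ~H2 reduced mass (a.u.); omega arbitrary
x = np.linspace(-2.0, 2.0, 2000)
E = bound_states(0.5 * mu * omega**2 * x**2, x, mu)
print(E / omega)                          # ≈ [0.5, 1.5, 2.5, 3.5, 4.5]
```

For a molecular application, V would be the tabulated Born-Oppenheimer potential plus the adiabatic correction, interpolated onto the grid.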
Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions
Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara
2012-01-01
This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…
Jeong, Hyunjo; Zhang, Shuzeng; Barnard, Dan; Li, Xiongbing
2015-09-01
The accurate measurement of the acoustic nonlinearity parameter β for fluids or solids generally requires corrections for diffraction effects due to the finite-size geometry of the transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and were therefore not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with less than 5% error when the exact second harmonic diffraction corrections are used together with the negligible attenuation correction effects, on the basis of the linear frequency dependence of the attenuation coefficients, α2 ≃ 2α1.
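The structure of such a measurement can be illustrated with the lossless plane-wave quasilinear relation A2 = β·k²·z·A1²/8. In the sketch below, the diffraction (D1, D2) and attenuation (M) correction factors are assumed inputs that would in practice come from multi-Gaussian beam models; they are not derived here:

```python
import numpy as np

def beta_from_harmonics(A1, A2, freq, z, c, D1=1.0, D2=1.0, M=1.0):
    """Nonlinearity parameter from measured fundamental (A1) and second
    harmonic (A2) amplitudes at propagation distance z, via the lossless
    plane-wave relation A2 = beta * k^2 * z * A1^2 / 8. The measured
    amplitudes are divided by assumed diffraction corrections (D1, D2)
    and multiplied by an assumed attenuation correction M."""
    k = 2 * np.pi * freq / c
    return 8 * (A2 / D2) * M / (k**2 * z * (A1 / D1) ** 2)

# Round-trip check in water-like conditions (no corrections applied)
c, freq, z, beta_true, A1 = 1500.0, 5e6, 0.1, 3.5, 1e-9
k = 2 * np.pi * freq / c
A2 = beta_true * k**2 * z * A1**2 / 8
print(beta_from_harmonics(A1, A2, freq, z, c))  # → 3.5
```

The point of the abstract is precisely that D1, D2, and M must be computed carefully (here from MGB/KZK theory) for the recovered β to be trustworthy.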
Allam, Amin
2015-07-14
Motivation: Next-generation sequencing generates large amounts of data affected by errors in the form of substitutions, insertions or deletions of bases. Error correction based on high-coverage information typically improves de novo assembly. Most existing tools can correct substitution errors only; some support insertions and deletions, but accuracy in many cases is low. Results: We present Karect, a novel error correction technique based on multiple alignment. Our approach supports substitution, insertion and deletion errors. It can handle non-uniform coverage as well as moderately covered areas of the sequenced genome. Experiments with data from Illumina, 454 FLX and Ion Torrent sequencing machines demonstrate that Karect is more accurate than previous methods, both in terms of correcting individual-base errors (up to 10% increase in accuracy gain) and post-de novo assembly quality (up to 10% increase in NGA50). We also introduce an improved framework for evaluating the quality of error correction.
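The core idea — correcting a read by consensus over a multiple alignment of overlapping reads — can be shown with a toy column-wise majority vote. This is an illustration only, not Karect's algorithm, and it assumes the reads are already aligned (real tools build the alignment themselves and weight by quality and coverage):

```python
from collections import Counter

def consensus_correct(aligned_reads):
    """Correct each read by majority vote over aligned columns
    (gaps written as '-'); a base is replaced only when a strict
    majority of reads disagrees with it."""
    corrected = []
    for read in aligned_reads:
        out = []
        for i, base in enumerate(read):
            winner, votes = Counter(r[i] for r in aligned_reads).most_common(1)[0]
            out.append(winner if votes > len(aligned_reads) // 2 else base)
        corrected.append("".join(out).replace("-", ""))
    return corrected

# One substitution (ACCT...) and one deletion ('-') get repaired
reads = ["ACGT-ACGT", "ACGTAACGT", "ACCTAACGT", "ACGTAACGT"]
print(consensus_correct(reads))  # → ['ACGTAACGT'] * 4
```

Note how a gap column lets the vote repair a deletion as well as a substitution, which is the property that distinguishes alignment-based correction from substitution-only methods.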
Fullerton, G D; Keener, C R; Cameron, I L
1994-12-01
The authors describe empirical corrections to ideally dilute expressions for freezing point depression of aqueous solutions to arrive at new expressions accurate up to 3 molal concentration. The method assumes non-ideality is due primarily to solute/solvent interactions, such that the correct free water mass Mwc is the mass of water in solution Mw minus I·Ms, where Ms is the mass of solute and I an empirical solute/solvent interaction coefficient. The interaction coefficient is easily derived from the constant in the linear regression fit to the experimental plot of Mw/Ms as a function of 1/ΔT (inverse freezing point depression). The I-value, when substituted into the new thermodynamic expressions derived from the assumption of equivalent activity of water in solution and ice, provides accurate predictions of freezing point depression (±0.05 °C) up to 2.5 molal concentration for all the test molecules evaluated: glucose, sucrose, glycerol, and ethylene glycol. The concentration limit is the approximate monolayer water coverage limit for the solutes, which suggests that direct solute/solute interactions are negligible below this limit. This is contrary to the view of many authors, owing to the common practice of including hydration forces (a soft potential added to the hard-core atomic potential) in the interaction potential between solute particles. When this is recognized, the two viewpoints are in fundamental agreement. PMID:7699200
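The fitting recipe above can be sketched as follows. Using the ideal-dilute form with the corrected free-water mass Mwc = Mw − I·Ms, the plot of Mw/Ms against 1/ΔT is linear with slope 1000·Kf/MM and intercept I, so a linear regression recovers the interaction coefficient. Kf and the molar mass are standard values; the I value is illustrative, and the paper's full thermodynamic expressions differ in detail:

```python
import numpy as np

Kf = 1.86      # K*kg/mol, cryoscopic constant of water
MM = 342.3     # g/mol, molar mass of sucrose (example solute)

def delta_T(r, I):
    """Freezing point depression for r = Ms/Mw grams solute per gram
    water, with corrected free-water mass Mwc = Mw - I*Ms
    (ideal-dilute form)."""
    return 1000.0 * Kf * r / (MM * (1.0 - I * r))

# The paper's recipe: Mw/Ms is linear in 1/deltaT, with intercept I.
I_true = 0.3                                   # illustrative I value
r = np.array([0.1, 0.2, 0.4, 0.6, 0.8])        # Ms/Mw ratios
slope, intercept = np.polyfit(1.0 / delta_T(r, I_true), 1.0 / r, 1)
print(intercept)                               # → recovers I ≈ 0.3
```

With experimental (Mw/Ms, ΔT) pairs in place of the synthetic ones, the intercept of the same regression gives the empirical I for a given solute.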
Fakhri, G.E. [Harvard Medical School, Boston, MA (United States). Dept. of Radiology; U494 INSERM, CHU Pitie-Salpetriere, Paris (France)]; Buvat, I.; Todd-Pokropek, A.; Benali, H. [U494 INSERM, CHU Pitie-Salpetriere, Paris (France)]; Almeida, P. [Servico de Medicina Nuclear, Hospital Garcia de Orta, Almada (Portugal)]; Bendriem, B. [CTI, Inc., Knoxville, TN (United States)]
2000-09-01
Ideally, reliable quantitation in single-photon emission tomography (SPET) requires both emission and transmission data to be scatter free. Although scatter in emission data has been extensively studied, it is not well known how scatter in transmission data affects relative and absolute quantitation in reconstructed images. We studied SPET quantitative accuracy for different amounts of scatter in emission and transmission data using a Utah phantom and a cardiac Data Spectrum phantom including different attenuating media. Acquisitions over 180° were considered and three projection sets were derived: 20% window images and Jaszczak and triple-energy-window scatter-corrected projections. Transmission data were acquired using gadolinium-153 line sources in a 90-110 keV window using a narrow or wide scanning window. The transmission scans were performed either simultaneously with the emission acquisition or 24 h later. Transmission maps were reconstructed using filtered backprojection and μ values were linearly scaled from 100 to 140 keV. Attenuation-corrected images were reconstructed using a conjugate gradient minimal residual algorithm. The μ value underestimation varied between 4% with a narrow transmission window in soft tissue and 22% with a wide window in a material simulating bone. Scatter in the emission and transmission data had little effect on the uniformity of activity distribution in the left ventricle wall and in a uniformly hot compartment of the Utah phantom. Correcting the transmission data for scatter had no impact on contrast between a hot and a cold region or on signal-to-noise ratio (SNR) in regions with uniform activity distribution, while correcting the emission data for scatter improved contrast and reduced SNR. For absolute quantitation, the most accurate results (bias <4% in both phantoms) were obtained when reducing scatter in both emission and transmission data. In conclusion, trying to obtain the same amount of scatter in emission and
Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon
2016-03-01
In this paper, we propose a CT-CBCT registration method to accurately predict tumor volume changes based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although the physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of the registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performance on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and on every other fraction's CBCT, to which the GTV contours propagated by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean ± std in cc) between the average of the manual segmentations and the automatic segmentations are 3.70 ± 2.30 (B-spline), 1.25 ± 1.78 (demons), 0.93 ± 1.14 (optical flow), and 4.39 ± 3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
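The intensity-correction step can be illustrated with a global histogram-matching sketch. The paper applies the matching locally and iteratively; this simplified version just maps the empirical CDF of one image onto the other:

```python
import numpy as np

def match_histogram(source, reference):
    """Map source intensities onto the reference histogram by matching
    empirical CDFs (a global version of the correction; the paper
    applies it per local region)."""
    s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source quantile, look up the reference intensity
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return np.interp(source.ravel(), s_vals, mapped).reshape(source.shape)

rng = np.random.default_rng(0)
cbct = rng.normal(50, 5, (64, 64))      # toy "CBCT" intensities
ct = rng.normal(100, 10, (64, 64))      # toy "planning CT" intensities
corrected = match_histogram(cbct, ct)
print(corrected.mean(), corrected.std())  # close to the CT statistics
```

After matching, the two images share an intensity distribution, which is what lets intensity-based DIR metrics behave across the CT/CBCT modality gap.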
Berland, Kristian
2016-01-01
A computationally inexpensive k·p-based interpolation scheme is developed that can extend the eigenvalues and momentum matrix elements of a sparsely sampled k-point grid into a densely sampled one. Dense sampling, often required to accurately describe transport and optical properties of bulk materials, can be computationally demanding, for instance, in combination with hybrid functionals within density functional theory (DFT) or with perturbative expansions beyond DFT such as the GW method. The scheme is based on solving the k·p method and extrapolating from multiple reference k points. It includes a correction term that reduces the number of empty bands needed and ameliorates band discontinuities. We show that the scheme can be used to generate accurate band structures, densities of states, and dielectric functions. Several examples are given, using traditional and hybrid functionals, with Si, TiNiSn, and Cu as test cases. We illustrate that d-electron and semi-core states, which are partic...
A Highly Accurate Classification of TM Data through Correction of Atmospheric Effects
Bill Smith; Frank Scarpace; Widad Elmahboub
2009-07-01
Atmospheric correction impacts on the accuracy of satellite image-based land cover classification are a growing concern among scientists. In this study, the principal objective was to enhance classification accuracy by minimizing contamination effects from aerosol scattering in Landsat TM images due to the variation in solar zenith angle corresponding to cloud-free earth targets. We have derived a mathematical model for aerosols to compute and subtract the aerosol scattering noise per pixel for different vegetation classes from TM images of Nicolet in north-eastern Wisconsin. An algorithm in C++ has been developed with iterations to simulate, model, and correct for the solar zenith angle influences on scattering. Results from a supervised classification with corrected TM images showed increased class accuracy for land cover types over uncorrected images. The overall accuracy of the supervised classification was improved substantially (between 13% and 18%). The z-score shows a significant difference between the corrected data and the raw data (between 4.0 and 12.0). Therefore, atmospheric correction was essential for enhancing the image classification.
Mihaleva, V.V.; Vorst, O.F.J.; Maliepaard, C.A.; Verhoeven, H.A.; Vos, de C.H.; Hall, R.D.; Ham, van R.C.H.J.
2008-01-01
Compound identification and annotation in (untargeted) metabolomics experiments based on accurate mass require the highest possible accuracy of the mass determination. Experimental LC/TOF-MS platforms equipped with a time-to-digital converter (TDC) give the best mass estimate for those mass signals
Perez, Kristy L.; Mann, Steve D.; Pachon, Jan H.; Madhav, Priti; Tornai, Martin P.
2010-01-01
Attenuation correction is necessary for SPECT quantification. There are a variety of methods to create attenuation maps. For dedicated breast SPECT imaging, it is unclear whether a SPECT- or CT-based attenuation map provides the most accurate quantification, and whether segmenting the different tissue types has an effect on the quantification. For these experiments, 99mTc diluted in methanol and water was filled into geometric and anthropomorphic breast phantoms and was image...
Oyeyemi, Victor B; Krisiloff, David B; Keith, John A; Libisch, Florian; Pavone, Michele; Carter, Emily A
2014-01-28
Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs. PMID:25669533
Calbo, Joaquín; Ortí, Enrique; Sancho-García, Juan C; Aragó, Juan
2015-03-10
In this work, we present a thorough assessment of the performance of representative double-hybrid density functionals (revPBE0-DH-NL and B2PLYP-NL), as well as their parent hybrid and GGA counterparts, in combination with the most modern version of the nonlocal (NL) van der Waals correction, for describing very large weakly interacting molecular systems dominated by noncovalent interactions. Prior to the assessment, an accurate and homogeneous set of reference interaction energies was computed for the supramolecular complexes constituting the L7 and S12L data sets by using the novel, precise, and efficient DLPNO-CCSD(T) method at the complete basis set (CBS) limit. The correction of the basis set superposition error and the inclusion of the deformation energies (for the S12L set) were crucial for obtaining precise DLPNO-CCSD(T)/CBS interaction energies. Among the density functionals evaluated, the double-hybrids revPBE0-DH-NL and B2PLYP-NL with the three-body dispersion correction provide remarkably accurate association energies, very close to chemical accuracy. Overall, the NL van der Waals approach combined with proper density functionals can be seen as an accurate and affordable computational tool for modeling large weakly bonded supramolecular systems. PMID:26579747
We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error, and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the New York University Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.
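The shape of the method can be sketched as a quadratic in a single observed color. The coefficients below are purely hypothetical placeholders; the paper tabulates fitted, redshift-dependent values per SDSS filter:

```python
# Hypothetical coefficients for illustration only: the paper provides
# fitted values per filter and redshift for its quadratic form.
def k_correction(color, a=0.05, b=-0.3, c=0.4):
    """Quadratic K-correction (mag) in a single observed color."""
    return a + b * color + c * color ** 2

def absolute_magnitude(m_app, dist_mod, color):
    """M = m - DM(z) - K(color); the distance modulus DM(z)
    is supplied by the caller (cosmology-dependent)."""
    return m_app - dist_mod - k_correction(color)

print(absolute_magnitude(18.0, 40.0, 1.2))   # → about -22.27
```

Because K depends on one measured color only, propagating photometric and redshift errors through this expression is straightforward, which is the advantage the abstract emphasizes.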
Koubar, Khodor; Bekaert, Virgile; Brasse, David; Laquerriere, Patrice
2015-06-01
Bone mineral density plays an important role in the determination of bone strength and fracture risk; consequently, it is very important to obtain accurate bone mineral density measurements. Microcomputerized tomography provides 3D information about the architectural properties of bone. Quantitative analysis accuracy is decreased by the presence of artefacts in the reconstructed images, mainly beam hardening artefacts (such as cupping artefacts). In this paper, we introduce a new beam hardening correction method based on a postreconstruction technique performed with off-line water and bone linearization curves, experimentally calculated, that aims to take into account the nonhomogeneity of the scanned animal. In order to evaluate the mass correction rate, a calibration line was established to convert the reconstructed linear attenuation coefficients into bone masses. The presented correction method was then applied to a multimaterial cylindrical phantom and to mouse skeleton images. Mass correction rates of up to 18% between uncorrected and corrected images were obtained, and a marked improvement in the calculated mass of a mouse femur was observed. Results were also compared to those obtained with the simple water linearization technique, which does not take the nonhomogeneity of the object into account. PMID:25818096
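The linearization idea can be sketched for a single material: measured polychromatic projections are mapped by a fitted polynomial onto the line integrals an equivalent monochromatic beam would produce. The spectrum weights and attenuation coefficients below are illustrative, and this shows only a water-type curve; the paper additionally uses a bone curve to handle nonhomogeneity:

```python
import numpy as np

# Polychromatic projection through thickness t of one material:
# p_poly(t) = -ln( sum_i w_i * exp(-mu_i * t) ), which grows
# sublinearly in t (beam hardening). Values are illustrative.
w = np.array([0.3, 0.5, 0.2])          # spectrum weights
mu = np.array([0.35, 0.20, 0.15])      # attenuation at those energies, 1/cm
mu_ref = 0.20                          # reference monochromatic mu, 1/cm

t = np.linspace(0.0, 10.0, 50)
p_poly = -np.log(w @ np.exp(-np.outer(mu, t)))
p_mono = mu_ref * t

# Linearization curve: polynomial mapping p_poly -> p_mono, to be
# applied to every sinogram value before reconstruction.
coeffs = np.polyfit(p_poly, p_mono, 4)
corrected = np.polyval(coeffs, p_poly)
print(np.max(np.abs(corrected - p_mono)))   # small residual
```

Applying such a curve to the sinogram before filtered backprojection is what removes the cupping artefact in the single-material case.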
Accurate plutonium waste measurements using the 252Cf add-a-source technique for matrix corrections
We have developed a new measurement technique to improve the accuracy and sensitivity of the nondestructive assay (NDA) of plutonium scrap and waste. The 200-L drum assay system uses the classical NDA method of counting passive-neutron coincidences from plutonium but adds the new features of ''add-a-source'' to improve the accuracy of matrix corrections and statistical techniques to improve the low-level detectability limits. The add-a-source technique introduces a small source of 252Cf (10⁻⁸ g) near the external surface of the sample drum. The drum perturbs the rate at which coincident neutrons from the 252Cf are counted. The perturbation provides the data to correct for the matrix and plutonium inside the drum. The errors introduced by matrix materials in 200-L drums have been reduced by an order of magnitude using the add-a-source technique. In addition, the add-a-source method can detect unexpected neutron-shielding material inside the drum that might hide the presence of special nuclear materials. The detectability limit of the new waste-drum assay system for plutonium is better than that of prior systems for actual waste materials. For the in-plant installation at a mixed-oxide fabrication facility, the detectability limit is 0.73 mg of 240Pu (or 2.3 mg of high-burnup plutonium) for a 15-min measurement. For a drum containing 100 kg of waste, this translates to about 7 nCi/g. This excellent sensitivity was achieved using a special low-background detector design, good overhead shielding, and statistical techniques in the software to selectively reduce the cosmic-ray neutron background.
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurements was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and
Peng, Xiangda; Zhang, Yuebin; Chu, Huiying; Li, Yan; Zhang, Dinglin; Cao, Liaoran; Li, Guohui
2016-06-14
Classical molecular dynamics (MD) simulation of membrane proteins faces significant challenges in accurately reproducing and predicting experimental observables such as ion conductance and permeability, owing to its inability to describe precisely the electronic interactions in heterogeneous systems. In this work, the free energy profiles of K⁺ and Na⁺ permeating through the gramicidin A channel are characterized by using the AMOEBA polarizable force field with a total sampling time of 1 μs. Our results indicate that by explicitly introducing multipole terms and polarization into the electrostatic potentials, the permeation free energy barrier of K⁺ through the gA channel is considerably reduced compared to the overestimated results obtained from the fixed-charge model. Moreover, the estimated maximum conductances, without any corrections, for both K⁺ and Na⁺ passing through the gA channel are much closer to the experimental results than those from any classical MD simulations, demonstrating the power of AMOEBA in investigating membrane proteins. PMID:27171823
Bubin, Sergiy; Stanke, Monika; Adamowicz, Ludwik
2011-08-21
In this work we report very accurate variational calculations of the complete pure vibrational spectrum of the D₂ molecule performed within a framework where the Born-Oppenheimer (BO) approximation is not assumed. After the elimination of the center-of-mass motion, D₂ becomes a three-particle problem in this framework. As the considered states correspond to zero total angular momentum, their wave functions are expanded in terms of all-particle, one-center, spherically symmetric explicitly correlated Gaussian functions multiplied by even non-negative powers of the internuclear distance. The nonrelativistic energies of the states obtained in the non-BO calculations are corrected for relativistic effects of order α² (where α = 1/c is the fine-structure constant), calculated as expectation values of the operators representing these effects. PMID:21861559
2015-11-01
In the article by Heuslein et al, which published online ahead of print on September 3, 2015 (DOI: 10.1161/ATVBAHA.115.305775), a correction was needed. Brett R. Blackman was added as the penultimate author of the article. The article has been corrected for publication in the November 2015 issue. PMID:26490278
Koesters, Thomas; Friedman, Kent P.; Fenchel, Matthias; Zhan, Yiqiang; Hermosillo, Gerardo; Babb, James; Jelescu, Ileana O.; Faul, David; Boada, Fernando E.; Shepherd, Timothy M.
2016-01-01
Simultaneous PET/MR of the brain is a promising new technology for characterizing patients with suspected cognitive impairment or epilepsy. Unlike CT though, MR signal intensities do not provide a direct correlate to PET photon attenuation correction (AC) and inaccurate radiotracer standard uptake value (SUV) estimation could limit future PET/MR clinical applications. We tested a novel AC method that supplements standard Dixon-based tissue segmentation with a superimposed model-based bone com...
2016-02-01
In the article by Guessous et al (Guessous I, Pruijm M, Ponte B, Ackermann D, Ehret G, Ansermot N, Vuistiner P, Staessen J, Gu Y, Paccaud F, Mohaupt M, Vogt B, Pechère-Bertschi A, Martin PY, Burnier M, Eap CB, Bochud M. Associations of ambulatory blood pressure with urinary caffeine and caffeine metabolite excretions. Hypertension. 2015;65:691–696. doi: 10.1161/HYPERTENSIONAHA.114.04512), which published online ahead of print December 8, 2014, and appeared in the March 2015 issue of the journal, a correction was needed.One of the author surnames was misspelled. Antoinette Pechère-Berstchi has been corrected to read Antoinette Pechère-Bertschi.The authors apologize for this error. PMID:26763012
Park, C G; Ha, B
1995-09-01
Most of the attempts and efforts in cleft lip repair have been directed toward the skin incision. The importance of the orbicularis oris muscle repair has been emphasized in recent years. The well-designed skin incision with simple repair of the orbicularis oris muscle has produced a considerable improvement in the appearance of the upper lip; however, the repaired upper lip seems to change its shape abnormally in motion and has a tendency to be distorted with age if the orbicularis oris muscle is not repaired precisely and accurately. Following the dissection of the normal upper lip and unilateral cleft lip in cadavers, we could find two different components in the orbicularis oris muscle, a superficial and a deep component. One is a retractor and the other is a constrictor of the lip. They have antagonistic actions to each other during lip movement. We also can identify these two different components of the muscle in the cleft lip patient during operation. We thought inaccurate and mixed connection between these two different functional components could make the repaired lip distorted and unbalanced, which would get worse during growth. By identification and separate repair of the two different muscular components of the orbicularis oris muscle (i.e., repair of the superficial and deep components on the lateral side with the corresponding components on the medial side), better results in the dynamic and three-dimensional configuration of the upper lip can be achieved, and unfavorable distortion can be avoided as the patients grow.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:7652051
Film dosimetry is an attractive tool for dose distribution verification in intensity modulated radiotherapy (IMRT). A critical aspect of radiochromic film dosimetry is the scanner used for film readout: its output needs to be calibrated in dose response and corrected for pixel-value and spatially dependent nonuniformity caused by light scattering, and these procedures can take a long time. A method for fast and accurate calibration and uniformity correction for radiochromic film dosimetry is presented, in which a single film exposure is used for both calibration and correction. Gafchromic EBT films were read with two flatbed charge-coupled device scanners (Epson V750 and 1680Pro). The accuracy of the method is investigated with specific dose patterns and an IMRT beam. Comparisons with a two-dimensional array of ionization chambers using an 18×18 cm² open field and an inverse pyramid dose pattern show an increase in the percentage of points passing the gamma analysis (tolerance parameters of 3% and 3 mm): from 55% and 64% for the 1680Pro and V750 scanners, respectively, to 94% for both scanners for the 18×18 cm² open field, and from 76% and 75% to 91% for the inverse pyramid pattern. Application to an IMRT beam also shows better gamma index results, passing from 88% and 86% for the two scanners, respectively, to 94% for both. The number of points and the dose range considered for correction and calibration appear appropriate for use in IMRT verification. The method proved fast, corrected the nonuniformity properly, and has been adopted for routine clinical IMRT dose verification.
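The gamma analysis used above for comparing dose distributions can be sketched in one dimension: a point passes when some reference point lies within the combined dose/distance tolerance ellipsoid (gamma ≤ 1). This is a brute-force global 3%/3 mm version for illustration:

```python
import numpy as np

def gamma_pass_rate(x, dose_eval, dose_ref, dose_tol=0.03, dist_tol=3.0):
    """1-D global gamma analysis: for each evaluated point, search all
    reference points for the minimum combined dose/distance metric.
    dose_tol is relative to the reference maximum; dist_tol in mm."""
    d_max = dose_ref.max()
    gammas = []
    for xi, de in zip(x, dose_eval):
        dist2 = ((x - xi) / dist_tol) ** 2
        dose2 = ((dose_ref - de) / (dose_tol * d_max)) ** 2
        gammas.append(np.sqrt(np.min(dist2 + dose2)))
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

x = np.arange(0.0, 100.0, 1.0)                   # positions in mm
ref = np.exp(-((x - 50) / 20.0) ** 2)            # reference profile
meas = 1.02 * np.exp(-((x - 51) / 20.0) ** 2)    # 2% high, 1 mm shifted
print(gamma_pass_rate(x, meas, ref))             # → 100.0
```

A 2% dose offset combined with a 1 mm shift stays inside the 3%/3 mm tolerance, so every point passes; larger discrepancies would push gamma above 1 and lower the pass rate.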
2002-01-01
The photo on the second page of the Bulletin n°48/2002, from 25 November 2002, illustrating the article «Spanish Visit to CERN» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption. The Spanish delegation, accompanied by Spanish scientists at CERN, also visited the LHC superconducting magnet test hall (photo). From left to right: Felix Rodriguez Mateos of CERN LHC Division, Josep Piqué i Camps, Spanish Minister of Science and Technology, César Dopazo, Director-General of CIEMAT (Spanish Research Centre for Energy, Environment and Technology), Juan Antonio Rubio, ETT Division Leader at CERN, Manuel Aguilar-Benitez, Spanish Delegate to Council, Manuel Delfino, IT Division Leader at CERN, and Gonzalo León, Secretary-General of Scientific Policy to the Minister.
The self-absorption of large volume samples is an important issue in gamma-ray spectrometry using high purity germanium (HPGe) detectors. After the Fukushima Daiichi Nuclear Power Plant accident, a large number of radioactivity measurements of various environmental samples have been performed using 1000 cm3 containers. This study uses Monte Carlo simulations and a semiempirical function to address the self-absorption correction factor for the samples in the 1000 cm3 Marinelli container that has been widely marketed after the accident. The presented factor was validated by experiments using test sources and was shown to be accurate for a wide range of linear attenuation coefficients μ(0.05 - 1.0 cm-1). This suggests that the proposed correction factor is applicable to almost all environmental samples. In addition, an interlaboratory comparison where participants were asked to determine the radioactivity of a certified reference material demonstrated that the proposed correction factor can be used with HPGe detectors of different crystal sizes. (author)
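As a much simplified illustration of how a transmission-averaged self-absorption factor varies with the linear attenuation coefficient μ, the sketch below uses a uniform slab source rather than the 1000 cm3 Marinelli geometry treated in the study; the slab formula and the reference-matrix normalization are illustrative assumptions, not the paper's semiempirical function.

```python
import math

def slab_self_absorption(mu, thickness):
    """Average photon escape probability from a uniform slab source of a
    given thickness (cm) and linear attenuation coefficient mu (cm^-1):
    the integral of exp(-mu*x) over the slab gives (1 - exp(-mu*L))/(mu*L)."""
    x = mu * thickness
    return 1.0 if x < 1e-12 else (1.0 - math.exp(-x)) / x

def correction_factor(mu_sample, mu_reference, thickness):
    """Self-absorption correction relative to a calibration (reference)
    matrix: multiply the measured activity by this factor."""
    return (slab_self_absorption(mu_reference, thickness)
            / slab_self_absorption(mu_sample, thickness))
```

A denser sample (larger μ) lets fewer photons escape, so its correction factor relative to a lighter calibration matrix exceeds 1.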
DiLabio, Gino A; Torres, Edmanuel
2013-01-01
We recently showed that dispersion-correcting potentials (DCPs), atom-centered Gaussian-type functions developed for use with B3LYP (J. Phys. Chem. Lett. 2012, 3, 1738-1744), greatly improve the ability of the underlying functional to predict non-covalent interactions. However, the application of B3LYP-DCP to the β-scission of the cumyloxyl radical led to a calculated barrier height that was over-estimated by ca. 8 kcal/mol. We show in the present work that the source of this error is the previously developed carbon-atom DCPs, which erroneously alter the electron density in the C-C covalent-bonding region. In this work, we present a new C-DCP with a form that was expected to influence the electron density farther from the nucleus. Tests of the new C-DCP, with previously published H-, N- and O-DCPs, at the B3LYP-DCP/6-31+G(2d,2p) level on the S66, S22B, HSG-A, and HC12 databases of non-covalently interacting dimers showed that it is one of the most accurate methods available for treating intermolecular i...
Light time calculations in high precision deep space navigation
Bertone, Stefano; Lainey, Valéry
2013-01-01
During the last decade, the precision of spacecraft tracking has constantly improved. With the recent discovery of a few astrometric anomalies, such as the Pioneer and Earth flyby anomalies, it becomes important to analyze in depth the operational modeling currently adopted in Deep Space Navigation (DSN). Our study shows that some traditional approximations can lead to neglecting tiny terms that could have consequences for the orbit determination of a probe in specific configurations, such as during an Earth flyby. Here we suggest a way to improve the light time calculation used for probe tracking.
Szidarovszky, Tamás [MTA-ELTE Research Group on Complex Chemical Systems, P.O. Box 32, H-1518 Budapest 112 (Hungary); Császár, Attila G., E-mail: csaszar@chem.elte.hu [MTA-ELTE Research Group on Complex Chemical Systems, P.O. Box 32, H-1518 Budapest 112 (Hungary); Laboratory on Molecular Structure and Dynamics, Institute of Chemistry, Eötvös University, Pázmány Péter sétány 1/A, H-1117 Budapest (Hungary)
2015-01-07
The total partition functions Q(T) and their first two moments Q{sup ′}(T) and Q{sup ″}(T), together with the isobaric heat capacities C{sub p}(T), are computed a priori for three major MgH isotopologues on the temperature range of T = 100–3000 K using the recent highly accurate potential energy curve, spin-rotation, and non-adiabatic correction functions of Henderson et al. [J. Phys. Chem. A 117, 13373 (2013)]. Nuclear motion computations are carried out on the ground electronic state to determine the (ro)vibrational energy levels and the scattering phase shifts. The effect of resonance states is found to be significant above about 1000 K and it increases with temperature. Even very short-lived states, due to their relatively large number, have significant contributions to Q(T) at elevated temperatures. The contribution of scattering states is around one fourth of that of resonance states but opposite in sign. Uncertainty estimates are given for the possible error sources, suggesting that all computed thermochemical properties have an accuracy better than 0.005% up to 1200 K. Between 1200 and 2500 K, the uncertainties can rise to around 0.1%, while between 2500 K and 3000 K, a further increase to 0.5% might be observed for Q{sup ″}(T) and C{sub p}(T), principally due to the neglect of excited electronic states. The accurate thermochemical data determined are presented in the supplementary material for the three isotopologues of {sup 24}MgH, {sup 25}MgH, and {sup 26}MgH at 1 K increments. These data, which differ significantly from older standard data, should prove useful for astronomical models incorporating thermodynamic properties of these species.
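The bound-state part of such thermochemistry reduces to Boltzmann sums over (ro)vibrational levels. The sketch below ignores the resonance and scattering contributions, nuclear-spin degeneracies, and excited electronic states discussed in the abstract, and the level data in the usage example are a synthetic harmonic ladder, not MgH.

```python
import math

K_CM = 0.6950348  # Boltzmann constant in cm^-1 per K

def q_moments(levels_cm, degens, T):
    """Boltzmann sums over bound levels (energies in cm^-1): returns
    Q = sum g*exp(-x), sum g*x*exp(-x), and sum g*x^2*exp(-x),
    with x = E/(kT)."""
    q0 = q1 = q2 = 0.0
    for E, g in zip(levels_cm, degens):
        x = E / (K_CM * T)
        w = g * math.exp(-x)
        q0 += w
        q1 += w * x
        q2 += w * x * x
    return q0, q1, q2

def heat_capacity_cp(levels_cm, degens, T):
    """Ideal-gas isobaric heat capacity per mole, J/(mol K):
    Cp = R*(5/2 + <x^2> - <x>^2), where 5/2 R covers translation
    plus the pV term and the variance term is the internal part."""
    R = 8.31446
    q0, q1, q2 = q_moments(levels_cm, degens, T)
    return R * (2.5 + q2 / q0 - (q1 / q0) ** 2)
```

For a dense harmonic ladder at high temperature the internal term approaches R, so Cp approaches 7/2 R, the classical diatomic vibrational limit.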
Rocklin, Gabriel J. [Department of Pharmaceutical Chemistry, University of California San Francisco, 1700 4th St., San Francisco, California 94143-2550, USA and Biophysics Graduate Program, University of California San Francisco, 1700 4th St., San Francisco, California 94143-2550 (United States); Mobley, David L. [Departments of Pharmaceutical Sciences and Chemistry, University of California Irvine, 147 Bison Modular, Building 515, Irvine, California 92697-0001, USA and Department of Chemistry, University of New Orleans, 2000 Lakeshore Drive, New Orleans, Louisiana 70148 (United States); Dill, Ken A. [Laufer Center for Physical and Quantitative Biology, 5252 Stony Brook University, Stony Brook, New York 11794-0001 (United States); Hünenberger, Philippe H., E-mail: phil@igc.phys.chem.ethz.ch [Laboratory of Physical Chemistry, Swiss Federal Institute of Technology, ETH, 8093 Zürich (Switzerland)
2013-11-14
The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol{sup −1}) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non
Chimot, J.; Vlemmix, T.; Veefkind, J. P.; de Haan, J. F.; Levelt, P. F.
2016-02-01
The Ozone Monitoring Instrument (OMI) has provided daily global measurements of tropospheric NO2 for more than a decade. Numerous studies have drawn attention to the complexities related to measurements of tropospheric NO2 in the presence of aerosols. Fine particles affect the OMI spectral measurements and the length of the average light path followed by the photons. However, they are not explicitly taken into account in the current operational OMI tropospheric NO2 retrieval chain (DOMINO - Derivation of OMI tropospheric NO2) product. Instead, the operational OMI O2 - O2 cloud retrieval algorithm is applied both to cloudy and to cloud-free scenes (i.e. clear sky) dominated by the presence of aerosols. This paper describes in detail the complex interplay between the spectral effects of aerosols in the satellite observation and the associated response of the OMI O2 - O2 cloud retrieval algorithm. Then, it evaluates the impact on the accuracy of the tropospheric NO2 retrievals through the computed Air Mass Factor (AMF) with a focus on cloud-free scenes. For that purpose, collocated OMI NO2 and MODIS (Moderate Resolution Imaging Spectroradiometer) Aqua aerosol products are analysed over the strongly industrialized East China area. In addition, aerosol effects on the tropospheric NO2 AMF and the retrieval of OMI cloud parameters are simulated. Both the observation-based and the simulation-based approach demonstrate that the retrieved cloud fraction increases with increasing Aerosol Optical Thickness (AOT), but the magnitude of this increase depends on the aerosol properties and surface albedo. This increase is induced by the additional scattering effects of aerosols which enhance the scene brightness. The decreasing effective cloud pressure with increasing AOT primarily represents the shielding effects of the O2 - O2 column located below the aerosol layers. The study cases show that the aerosol correction based on the implemented OMI cloud model results in biases
New analysis of the light time effect in TU Ursae Majoris
Liška, J.; Skarka, M.; Mikulášek, Z.; Zejda, M.; Chrastina, M.
2016-04-01
Context. Recent statistical studies prove that the percentage of RR Lyrae pulsators that are located in binaries or multiple stellar systems is considerably lower than might be expected. This can be better understood from an in-depth analysis of individual candidates. We investigate in detail the light time effect of the most probable binary candidate TU UMa. This is complicated because the pulsation period shows secular variation. Aims: We model the possible light time effect of TU UMa using a new code applied to previously available and newly determined maxima timings to confirm binarity and refine the parameters of the orbit of the RRab component in the binary system. The binary hypothesis is also tested using radial velocity measurements. Methods: We used a new approach to determine brightness maxima timings based on template fitting, which can also be used on sparse or scattered data. This approach was successfully applied to measurements from different sources. To determine the orbital parameters of the double star TU UMa, we developed a new code to analyse the light time effect that also includes secular variation in the pulsation period. Its usability was successfully tested on CL Aur, an eclipsing binary with mass transfer in a triple system that shows similar changes in the O-C diagram. Since orbital motion would cause systematic shifts in mean radial velocities (dominated by pulsations), we computed and compared our model with centre-of-mass velocities. They were determined using high-quality templates of radial velocity curves of RRab stars. Results: Maxima timings adopted from the GEOS database (168) together with those newly determined from sky surveys and new measurements (85) were used to construct an O-C diagram spanning almost five proposed orbital cycles. This data set is three times larger than the data sets used by previous authors. Modelling of the O-C dependence resulted in a 23.3-yr orbital period, which translates into a minimum mass of the second component of
Gillespie, Thomas W; Frankenberg, Elizabeth; Chum, Kai Fung; Thomas, Duncan
2014-01-01
On 26 December 2004, a magnitude 9.2 earthquake off the west coast of northern Sumatra, Indonesia, left 160,000 Indonesians dead. We examine the Defense Meteorological Satellite Program-Operational Linescan System (DMSP-OLS) nighttime light imagery brightness values for 307 communities in the Study of the Tsunami Aftermath and Recovery (STAR), a household survey in Sumatra from 2004 to 2008. We examined relationships between the night light time series of annual brightness and the extent of damage and economic metrics collected from STAR households and aggregated to the community level. There were significant changes in brightness values from 2004 to 2008, with a significant drop in brightness values in 2005 due to the tsunami and with pre-tsunami nighttime light values returning in 2006 for all damage zones. There were significant relationships between the nighttime imagery brightness and per capita expenditures and spending on energy and on food. Results suggest that Defense Meteorological Satellite Program nighttime light imagery can be used to capture the impacts of and recovery from the tsunami and other natural disasters and to estimate time series economic metrics at the community level in developing countries. PMID:25419471
New Analysis of the Light Time Effect in TU Ursae Majoris
Liska, Jiri; Mikulasek, Zdenek; Zejda, Miloslav; Chrastina, Marek
2015-01-01
This paper attempts to model the possible light time effect (LiTE) of TU UMa using a new code applied to formerly available and newly determined maxima timings, in order to confirm binarity and refine the orbital parameters of the RRab component in the binary system. The binary hypothesis is further tested using radial velocity measurements. A new approach to determining maxima timings, based on template fitting and usable even on sparse or scattered data, is described. This approach was successfully applied to measurements from different sources. For the determination of the orbital parameters of the double star TU UMa, we developed a new code for the analysis of the LiTE that also involves secular variation in the pulsation period. Its usability was successfully tested on CL Aur, an eclipsing binary with mass transfer in a triple system showing similar changes in the O-C diagram. Since orbital motion would cause systematic shifts in mean radial velocities (dominated by pulsations), we computed and compared our model with center-of-mass veloci...
ACE: accurate correction of errors using K-mer tries
Sheikhizadeh Anari, S.; Ridder, de D.
2015-01-01
The quality of high-throughput next-generation sequencing data significantly influences the performance and memory consumption of assembly and mapping algorithms. The most ubiquitous platform, Illumina, mainly suffers from substitution errors. We have developed a tool, ACE, based on K-mer tries to c
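The k-mer-spectrum idea behind such substitution-error correctors can be sketched as follows. This minimal illustration uses a plain hash-based counter rather than the K-mer tries of ACE, and the "solid" frequency threshold and greedy substitution search are illustrative assumptions, not the published algorithm.

```python
from collections import Counter

def kmer_counts(reads, k):
    """Build the k-mer spectrum (frequency table) of a read set."""
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def correct_read(read, counts, k, solid=2):
    """Greedily fix substitution errors: a position is suspect if some
    k-mer covering it is rare; accept a substitution only if it makes
    every covering k-mer solid (frequency >= threshold)."""
    read, n = list(read), len(read)

    def all_solid(seq, i):
        lo, hi = max(0, i - k + 1), min(i, n - k)
        return all(counts["".join(seq[j:j + k])] >= solid
                   for j in range(lo, hi + 1))

    for i in range(n):
        if all_solid(read, i):
            continue
        for base in "ACGT":
            trial = read[:i] + [base] + read[i + 1:]
            if all_solid(trial, i):
                read = trial
                break
    return "".join(read)
```

A read carrying a single substitution produces a run of rare k-mers; restoring the consensus base makes all covering k-mers frequent again, which is the acceptance criterion above.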
Accurate ab initio spin densities
Boguslawski, Katharina; Legeza, Örs; Reiher, Markus
2012-01-01
We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as the basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CA...
Accurate Finite Difference Algorithms
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
Accurate backgrounds to Higgs production at the LHC
Kauer, N
2007-01-01
Corrections of 10-30% for backgrounds to the H --> WW --> l^+ l^- + missing-p_T search in vector boson and gluon fusion at the LHC are reviewed to make the case for precise and accurate theoretical background predictions.
The variety of corrective actions which have been attempted at many radioactive waste disposal sites points to less than ideal performance by present-day standards at many closed and presently-operating sites. In humid regions, most of the problems have encompassed some kind of water intrusion into the buried waste. In arid regions, the problems have centered on trench subsidence and intrusion by plant roots and animals. It is overwhelmingly apparent that any protective barrier for the buried waste, whether for water or biological intrusion, will depend on stable support from the underlying burial trenches. Trench subsidence must be halted, prevented, or circumscribed in some manner to assure this necessary long-term support. Final corrective actions will differ considerably from site to site, depending on unique geological, pedological, and meteorological environments. In the meantime, many of the shorter-term corrective actions described in this chapter can be implemented as immediate needs dictate.
Yang, Y.-G.; Dai, H.-F. [School of Physics and Electronic Information, Huaibei Normal University, 235000 Huaibei, Anhui Province (China); Li, H.-L., E-mail: yygcn@163.com [National Astronomical Observatories, Chinese Academy of Sciences, 100012 Beijing (China)
2012-01-15
We present the CCD photometry of two Algol-type binaries, AL Gem and BM Mon, observed from 2008 November to 2011 January. With the updated Wilson-Devinney program, photometric solutions were deduced from their EA-type light curves. The mass ratios and fill-out factors of the primaries are found to be q{sub ph} = 0.090({+-} 0.005) and f{sub 1} = 47.3%({+-} 0.3%) for AL Gem, and q{sub ph} = 0.275({+-} 0.007) and f{sub 1} = 55.4%({+-} 0.5%) for BM Mon, respectively. By analyzing the O-C curves, we discovered that the periods of AL Gem and BM Mon change in a quasi-sinusoidal mode, which may possibly result from the light-time effect via the presence of a third body. Periods, amplitudes, and eccentricities of light-time orbits are 78.83({+-} 1.17) yr, 0.0204({+-} 0.0007) days, and 0.28({+-} 0.02) for AL Gem and 97.78({+-} 2.67) yr, 0.0175({+-} 0.0006) days, and 0.29({+-} 0.02) for BM Mon, respectively. Assumed to be in a coplanar orbit with the binary, the masses of the third bodies would be 0.29 M{sub Sun} for AL Gem and 0.26 M{sub Sun} for BM Mon. This kind of additional companion can extract angular momentum from the close binary orbit, and such processes may play an important role in multiple star evolution.
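The third-body mass in such light-time analyses follows from the mass function of the light-time orbit: the projected semi-major axis of the binary's barycentric orbit is a12 sin i = c*A, where A is the semi-amplitude of the O-C variation. Below is a hedged sketch assuming a circular orbit, sin i = 1, and an ASSUMED total binary mass (the component masses of AL Gem are not given in this record).

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
MSUN = 1.989e30      # solar mass, kg
YR, DAY = 3.156e7, 86400.0

def lite_mass_function(amp_days, period_yr):
    """Mass function f(M3) = 4*pi^2*(a12 sin i)^3 / (G*P^2) implied by a
    light-time orbit of semi-amplitude A days (a12 sin i = c*A); in kg."""
    a_proj = C * amp_days * DAY
    period = period_yr * YR
    return 4.0 * math.pi ** 2 * a_proj ** 3 / (G * period ** 2)

def minimum_third_mass(amp_days, period_yr, m_binary_msun):
    """Minimum companion mass (sin i = 1): solve m^3/(M+m)^2 = f
    by bisection; result in solar masses."""
    f = lite_mass_function(amp_days, period_yr) / MSUN
    lo, hi = 0.0, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid ** 3 / (m_binary_msun + mid) ** 2 < f:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# AL Gem's quoted light-time orbit with an assumed 1.5 Msun binary:
m3 = minimum_third_mass(0.0204, 78.83, 1.5)
```

With that assumed binary mass the solution lands near the ~0.3 M{sub Sun} scale quoted for the third bodies; the exact value depends on the adopted component masses.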
NSGIC GIS Inventory (aka Ramona) — This Prisons and Correctional Facilities dataset, published at 1:12000 (1in=1000ft) scale, was produced all or in part from Orthoimagery information as of 2011. It...
Towards accurate emergency response behavior
Nuclear reactor operator emergency response behavior has persisted as a training problem through a lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior under both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge-based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge-based behavior, are described in detail.
Deconvolution with correct sampling
Magain, P; Sohy, S
1997-01-01
A new method for improving the resolution of astronomical images is presented. It is based on the principle that sampled data cannot be fully deconvolved without violating the sampling theorem. Thus, the sampled image should not be deconvolved by the total Point Spread Function, but by a narrower function chosen so that the resolution of the deconvolved image is compatible with the adopted sampling. Our deconvolution method gives results which are markedly superior to those of other existing techniques: in particular, it does not produce ringing around point sources superimposed on a smooth background. Moreover, it allows accurate astrometry and photometry of crowded fields. These improvements are a consequence of both the correct treatment of sampling and the recognition that the most probable astronomical image is not a flat one. The method is also well adapted to the optimal combination of different images of the same object, as can be obtained, e.g., via adaptive optics techniques.
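The core idea, deconvolving by only part of the PSF so the result stays band-limited at the adopted sampling, can be sketched in Fourier space. The Gaussian PSFs, grid size, and regularization constant below are illustrative assumptions, not the authors' algorithm (which additionally treats point sources and the smooth background separately).

```python
import numpy as np

def gaussian_psf(n, sigma):
    """Centered, normalized 2-D Gaussian PSF on an n x n grid."""
    y, x = np.mgrid[0:n, 0:n] - n // 2
    p = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return p / p.sum()

def partial_deconvolve(image, total_psf, target_psf, eps=1e-9):
    """Deconvolve by the *ratio* of PSFs: instead of removing the total
    PSF entirely (which would violate the sampling theorem), recover the
    image as it would appear through the narrower target PSF."""
    I = np.fft.fft2(image)
    T = np.fft.fft2(np.fft.ifftshift(total_psf))   # total-PSF transfer function
    R = np.fft.fft2(np.fft.ifftshift(target_psf))  # target-PSF transfer function
    return np.real(np.fft.ifft2(I * R / (T + eps)))
```

A point source observed through the total PSF is recovered as the target PSF, i.e. a still-resolved peak rather than an unattainable delta function.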
Accurate determination of antenna directivity
Dich, Mikael
1997-01-01
The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power...
Accurate shear measurement with faint sources
Zhang, Jun; Foucaud, Sebastien [Center for Astronomy and Astrophysics, Department of Physics and Astronomy, Shanghai Jiao Tong University, 955 Jianchuan road, Shanghai, 200240 (China); Luo, Wentao, E-mail: betajzhang@sjtu.edu.cn, E-mail: walt@shao.ac.cn, E-mail: foucaud@sjtu.edu.cn [Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical Observatory, Nandan Road 80, Shanghai, 200030 (China)
2015-01-01
For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.
Universality of Quantum Gravity Corrections
Das, Saurya
2008-01-01
We show that the existence of a minimum measurable length and the related Generalized Uncertainty Principle (GUP), predicted by theories of Quantum Gravity, influence all quantum Hamiltonians. Thus, they predict quantum gravity corrections to various quantum phenomena. We compute such corrections to the Lamb Shift, the Landau levels and the tunnelling current in a Scanning Tunnelling Microscope (STM). We show that these corrections can be interpreted in two ways: (a) either that they are exceedingly small, beyond the reach of current experiments, or (b) that they predict upper bounds on the quantum gravity parameter in the GUP, compatible with experiments at the electroweak scale. Thus, more accurate measurements in the future would either be able to test these predictions, or further tighten the above bounds and predict an intermediate length scale, between the electroweak and the Planck scale.
Probabilistic error correction for RNA sequencing
Le, Hai-Son; Schulz, Marcel H.; McCauley, Brenna M.; Hinman, Veronica F.; Bar-Joseph, Ziv
2013-01-01
Sequencing of RNAs (RNA-Seq) has revolutionized the field of transcriptomics, but the reads obtained often contain errors. Read error correction can have a large impact on our ability to accurately assemble transcripts. This is especially true for de novo transcriptome analysis, where a reference genome is not available. Current read error correction methods, developed for DNA sequence data, cannot handle the overlapping effects of non-uniform abundance, polymorphisms and alternative splicing...
Accurate Modeling of Advanced Reflectarrays
Zhou, Min
The accuracy of reflectarray analysis depends on the modeling of the incident field, the choice of basis functions, and the technique used to calculate the far field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical Near-Field Antenna Test Facility, it was concluded that the three latter factors are particularly important. In contrast to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained using the GDOT, demonstrating its capabilities. To verify the accuracy of the GDOT, two offset contoured-beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility.
Accurate thickness measurement of graphene
Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.
2016-03-01
Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
Thermodynamics of Error Correction
Sartori, Pablo; Pigolotti, Simone
2015-10-01
Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
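For orientation, the kinetic-proofreading case study mentioned above can be sketched numerically in its textbook limit: a single equilibrium discrimination step with free-energy gap ΔG gives an error of order e^{-ΔG/kT}, and an ideal proofreading cycle repeats the discrimination, roughly squaring the error. This is the standard Hopfield picture, a simplification, not the paper's entropy-production bound.

```python
import math

def discrimination_error(dG_kT):
    """Misincorporation probability of one equilibrium discrimination
    step with free-energy gap dG (in units of kT): w/(1+w), w = e^-dG."""
    w = math.exp(-dG_kT)
    return w / (1.0 + w)

def proofread_error(dG_kT):
    """Ideal proofreading repeats the discrimination once more, so the
    wrong/right statistical weight ratio is squared: error ~ e^{-2 dG/kT}."""
    w = math.exp(-dG_kT) ** 2
    return w / (1.0 + w)
```

For ΔG = 4 kT the single-step error is about 1.8%, while the proofread error drops close to the square of that value, illustrating why proofreading must be driven away from equilibrium to beat the single-step bound.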
Motion-corrected Fourier ptychography
Bian, Liheng; Guo, Kaikai; Suo, Jinli; Yang, Changhuei; Chen, Feng; Dai, Qionghai
2016-01-01
Fourier ptychography (FP) is a recently proposed computational imaging technique for high space-bandwidth product imaging. In real setups such as endoscopes and transmission electron microscopes, the common sample motion largely degrades the FP reconstruction and limits its practicability. In this paper, we propose a novel FP reconstruction method to efficiently correct for unknown sample motion. Specifically, we adaptively update the sample's Fourier spectrum from low spatial-frequency regions towards high spatial-frequency ones, with an additional motion recovery and phase-offset compensation procedure for each sub-spectrum. Benefiting from the phase retrieval redundancy theory, the required large overlap between adjacent sub-spectra offers an accurate guide for successful motion recovery. Experimental results on both simulated data and real captured data show that the proposed method can correct for unknown sample motion with its standard deviation being up to 10% of the field-of-view scale. We have released...
MR image intensity inhomogeneity correction
MR technology is one of the best and most reliable ways of studying the brain. Its main drawback is the so-called intensity inhomogeneity, or bias field, which impairs visual inspection and medical proceedings for diagnosis and strongly affects quantitative image analysis. Noise is yet another artifact in medical images. To restore the original signal accurately and effectively, filtering, bias correction, and quantitative evaluation of the correction are considered. In this report, two denoising algorithms are used: (i) Basis rotation fields of experts (BRFoE) and (ii) Anisotropic diffusion (considering Gaussian noise, the Perona-Malik and Tukey's biweight conductance functions, and the standard deviation of the noise of the input image).
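The Perona-Malik scheme named above is a standard anisotropic-diffusion filter: small intensity gradients (noise) are diffused away while large gradients (edges) conduct little and are preserved. A minimal explicit-update sketch follows; the periodic borders via np.roll and the kappa/dt values are illustrative simplifications.

```python
import numpy as np

def perona_malik_step(u, kappa=1.0, dt=0.2):
    """One explicit Perona-Malik anisotropic-diffusion update with the
    exponential conductance g(s) = exp(-(s/kappa)^2). np.roll wraps
    around, so the borders are treated as periodic here."""
    g = lambda d: np.exp(-(d / kappa) ** 2)
    dN = np.roll(u, -1, axis=0) - u   # differences to the four neighbours
    dS = np.roll(u, 1, axis=0) - u
    dE = np.roll(u, -1, axis=1) - u
    dW = np.roll(u, 1, axis=1) - u
    return u + dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
```

On a noisy flat region the update acts like a heat-equation step and reduces the variance; near a strong edge the conductance g collapses toward zero and the edge survives.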
Hees, A; Poncin-Lafitte, C Le
2014-01-01
Given the extreme accuracy of modern space science, a precise relativistic modeling of observations is required. In particular, it is important to describe properly light propagation through the Solar System. For two decades, several modeling efforts based on the solution of the null geodesic equations have been proposed, but they are mainly valid only at the first post-Newtonian approximation. However, with the increasing precision of ongoing space missions such as Gaia, GAME, BepiColombo, JUNO or JUICE, we know that some corrections up to the second order have to be taken into account for future experiments. We present a procedure to compute the relativistic coordinate time delay, Doppler and astrometric observables avoiding the integration of the null geodesic equation. This is possible using the Time Transfer Function formalism, a powerful tool providing key quantities such as the time of flight of a light signal between two point-events and the tangent vector to its null geodesic. Indeed we show how to ...
A More Accurate Fourier Transform
Courtney, Elya
2015-01-01
Fourier transform methods are used to analyze functions and data sets to provide frequencies, amplitudes, and phases of underlying oscillatory components. Fast Fourier transform (FFT) methods offer speed advantages over evaluation of explicit integrals (EI) that define Fourier transforms. This paper compares frequency, amplitude, and phase accuracy of the two methods for well resolved peaks over a wide array of data sets including cosine series with and without random noise and a variety of physical data sets, including atmospheric CO₂ concentrations, tides, temperatures, sound waveforms, and atomic spectra. The FFT uses MIT's FFTW3 library. The EI method uses the rectangle method to compute the areas under the curve via complex math. Results support the hypothesis that EI methods are more accurate than FFT methods. Errors range from 5 to 10 times higher when determining peak frequency by FFT, 1.4 to 60 times higher for peak amplitude, and 6 to 10 times higher for phase under a peak. The ability t...
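The comparison above can be illustrated with a small sketch (not the paper's code): a single cosine whose frequency falls between FFT bins is analyzed both with NumPy's FFT and with a rectangle-rule evaluation of the explicit Fourier integral on a finer frequency grid. The sample rate, record length, and test frequency are illustrative choices.

```python
import numpy as np

# Sketch: FFT vs. explicit-integral (EI) frequency/amplitude estimation.
fs = 100.0                       # sample rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)     # 10 s record, N = 1000 samples
f0, a0 = 7.33, 2.0               # true frequency (off the 0.1 Hz FFT grid) and amplitude
x = a0 * np.cos(2 * np.pi * f0 * t)

# FFT estimate: frequency and amplitude of the peak bin
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
k = int(np.argmax(np.abs(X[1:]))) + 1    # skip the DC bin
fft_freq = freqs[k]
fft_amp = 2 * np.abs(X[k]) / len(x)

# EI estimate: rectangle-rule Fourier integral on a finer frequency grid
grid = np.arange(7.0, 7.6, 0.001)
mags = np.array([abs(np.sum(x * np.exp(-2j * np.pi * f * t))) / len(x) for f in grid])
ei_freq = grid[int(np.argmax(mags))]
ei_amp = 2 * mags.max()

print(fft_freq, ei_freq)   # the EI grid resolves f0 between the 0.1 Hz FFT bins
```

The FFT peak can only land on a 0.1 Hz grid (fs/N), while the explicit integral can be evaluated at arbitrary frequencies, which is the accuracy advantage the abstract describes for well resolved peaks.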
McCane-Bowling, Sara J.; Strait, Andrea D.; Guess, Pamela E.; Wiedo, Jennifer R.; Muncie, Eric
2014-01-01
This study examined the predictive utility of five formative reading measures: words correct per minute, number of comprehension questions correct, reading comprehension rate, number of maze correct responses, and maze accurate response rate (MARR). Broad Reading cluster scores obtained via the Woodcock-Johnson III (WJ III) Tests of Achievement…
Accurate, meshless methods for magnetohydrodynamics
Hopkins, Philip F.; Raives, Matthias J.
2016-01-01
Recently, we explored new meshless finite-volume Lagrangian methods for hydrodynamics: the 'meshless finite mass' (MFM) and 'meshless finite volume' (MFV) methods; these capture advantages of both smoothed particle hydrodynamics (SPH) and adaptive mesh refinement (AMR) schemes. We extend these to include ideal magnetohydrodynamics (MHD). The MHD equations are second-order consistent and conservative. We augment these with a divergence-cleaning scheme, which maintains ∇·B ≈ 0. We implement these in the code GIZMO, together with state-of-the-art SPH MHD. We consider a large test suite, and show that on all problems the new methods are competitive with AMR using constrained transport (CT) to ensure ∇·B = 0. They correctly capture the growth/structure of the magnetorotational instability, MHD turbulence, and launching of magnetic jets, in some cases converging more rapidly than state-of-the-art AMR. Compared to SPH, the MFM/MFV methods exhibit convergence at fixed neighbour number, sharp shock-capturing, and dramatically reduced noise, divergence errors, and diffusion. Still, 'modern' SPH can handle most test problems, at the cost of larger kernels and 'by hand' adjustment of artificial diffusion. Compared to non-moving meshes, the new methods exhibit enhanced 'grid noise' but reduced advection errors and diffusion, easily include self-gravity, and feature velocity-independent errors and superior angular momentum conservation. They converge more slowly on some problems (smooth, slow-moving flows), but more rapidly on others (involving advection/rotation). In all cases, we show divergence control beyond the Powell 8-wave approach is necessary, or all methods can converge to unphysical answers even at high resolution.
NWS Corrections to Observations
National Oceanic and Atmospheric Administration, Department of Commerce — Form B-14 is the National Weather Service form entitled 'Notice of Corrections to Weather Records.' The forms are used to make corrections to observations on forms...
Dr. Grace Zhang
2000-01-01
Error correction is an important issue in foreign language acquisition. This paper investigates how students feel about the way in which error correction should take place in a Chinese-as-a-foreign-language classroom, based on large-scale empirical data. The study shows that there is a general consensus that error correction is necessary. In terms of correction strategy, the students preferred a combination of direct and indirect corrections, or a direct-only correction. The former choice indicates that students would be happy to take either so long as the correction gets done. Most students did not mind peer correction provided it is conducted in a constructive way. More than half of the students would feel uncomfortable if the same error they make in class is corrected consecutively more than three times. Taking these findings into consideration, we may want to encourage peer correction, use a combination of correction strategies (direct only if suitable), and do it in a non-threatening and sensitive way. It is hoped that this study will contribute to the effectiveness of error correction in a Chinese language classroom and may also have wider implications for other languages.
38 CFR 4.46 - Accurate measurement.
2010-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
孟瑞锋; 马小康; 王州博; 董龙梅; 杨涛; 刘东红
2015-01-01
abnormal sample points and checking out the regression coefficient of the model by t-test. The developed model had high prediction accuracy and stability with the maximum prediction error of 0.25 g/100 g, the determination coefficient of calibration (Rcal2) of 0.9992, the determination coefficient of validation (Rval2) of 0.9988, the root mean square error of calibration (RMSEC) of 0.0894 g/100 g, the root mean square error of prediction (RMSEP) of 0.1015 g/100 g and the ratio performance deviation (RPD) of 28.57, which indicated that the model could be used for practical detection accurately and steadily, and was helpful for on-line measuring.
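The calibration statistics quoted above (RMSEC, RMSEP, R², RPD) are standard chemometric quantities; a hedged sketch of their definitions, applied to hypothetical reference and predicted values (not the study's data), is:

```python
import numpy as np

# Sketch: the model-quality metrics above, on hypothetical values in g/100 g.
def rmse(y, yhat):
    """Root-mean-square error (RMSEC on calibration data, RMSEP on prediction data)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def r2(y, yhat):
    """Coefficient of determination (Rcal^2 or Rval^2)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2))

def rpd(y, yhat):
    """Ratio performance deviation: reference SD over prediction error."""
    return float(np.std(np.asarray(y, float), ddof=1) / rmse(y, yhat))

y_ref = [1.0, 2.0, 3.0, 4.0, 5.0]    # hypothetical reference values
y_pred = [1.1, 1.9, 3.0, 4.2, 4.9]   # hypothetical model predictions
print(rmse(y_ref, y_pred), r2(y_ref, y_pred), rpd(y_ref, y_pred))
```

An RPD of 28.57, as reported, means the spread of the reference values is nearly thirty times the prediction error, which is why the model is judged suitable for practical on-line measurement.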
Source distribution dependent scatter correction for PVI
Source-distribution-dependent scatter correction methods which incorporate different amounts of information about the source position and material distribution have been developed and tested. The techniques use image-to-projection integral transformation incorporating varying degrees of information on the distribution of scattering material, or convolution-subtraction methods, with some information about the scattering material included in one of the convolution methods. To test the techniques, the authors apply them to data generated by Monte Carlo simulations which use geometric shapes or a voxelized density map to model the scattering material. Source position and material distribution have been found to have some effect on scatter correction. An image-to-projection method which incorporates a density map produces accurate scatter correction but is computationally expensive. Simpler methods, both image-to-projection and convolution, can also provide effective scatter correction
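The simpler convolution class of methods mentioned above can be sketched in a few lines. This is an illustrative 1D convolution-subtraction correction, not the authors' implementation: the scatter estimate is a scaled, blurred copy of the measured projection, which is then subtracted. The kernel width and scatter fraction are assumed values.

```python
import numpy as np

# Sketch: convolution-subtraction scatter correction on a toy 1D projection.
def gaussian_kernel(n, sigma):
    x = np.arange(n) - n // 2
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

def convolution_subtraction(projection, scatter_fraction=0.3, sigma=5.0):
    """Estimate scatter as k * (projection convolved with a broad kernel)."""
    kernel = gaussian_kernel(31, sigma)           # assumed kernel width
    scatter = scatter_fraction * np.convolve(projection, kernel, mode="same")
    return projection - scatter, scatter

proj = np.zeros(101)
proj[40:61] = 100.0                # toy projection of a uniform object
corrected, scatter = convolution_subtraction(proj)
print(corrected[50], scatter[50])  # counts at the centre reduced by the scatter tail
```

Methods that instead incorporate a density map replace the single stationary kernel with source- and material-dependent weights, which is where the accuracy/cost trade-off described above comes from.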
The FLUKA code: An accurate simulation tool for particle therapy
Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis
2016-01-01
Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...
Accurate characterization of OPVs: Device masking and different solar simulators
Gevorgyan, Suren; Carlé, Jon Eggert; Søndergaard, Roar R.;
2013-01-01
One of the prime objectives of organic solar cell research has been to improve the power conversion efficiency. Unfortunately, the accurate determination of this property is not straightforward and has led to the recommendation that record devices be tested and certified at a few accredited laboratories following rigorous ASTM and IEC standards. This work tries to address some of the issues confronting the standard laboratory in this regard. Solar simulator lamps are investigated for their light field homogeneity and direct versus diffuse components, as well as the correct device area...
Diophantine Correct Open Induction
Raffer, Sidney
2010-01-01
We give an induction-free axiom system for diophantine correct open induction. We relate the problem of whether a finitely generated ring of Puiseux polynomials is diophantine correct to a problem about the value-distribution of a tuple of semialgebraic functions with integer arguments. We use this result, and a theorem of Bergelson and Leibman on generalized polynomials, to identify a class of diophantine correct subrings of the field of descending Puiseux series with real coefficients.
Attenuation correction for small animal PET tomographs
Chow, Patrick L [David Geffen School of Medicine at UCLA, Crump Institute for Molecular Imaging, University of California, 700 Westwood Plaza, Los Angeles, CA 90095 (United States); Rannou, Fernando R [Departamento de Ingenieria Informatica, Universidad de Santiago de Chile (USACH), Av. Ecuador 3659, Santiago (Chile); Chatziioannou, Arion F [David Geffen School of Medicine at UCLA, Crump Institute for Molecular Imaging, University of California, 700 Westwood Plaza, Los Angeles, CA 90095 (United States)
2005-04-21
Attenuation correction is one of the important corrections required for quantitative positron emission tomography (PET). This work will compare the quantitative accuracy of attenuation correction using a simple global scale factor with traditional transmission-based methods acquired either with a small animal PET or a small animal x-ray computed tomography (CT) scanner. Two phantoms (one mouse-sized and one rat-sized) and two animal subjects (one mouse and one rat) were scanned in CTI Concorde Microsystems' microPET® Focus™ for emission and transmission data and in ImTek's MicroCAT™ II for transmission data. PET emission image values were calibrated against a scintillation well counter. Results indicate that the scale factor method of attenuation correction places the average measured activity concentration near the expected value, without correcting for the cupping artefact from attenuation. Noise analysis in the phantom studies with the PET-based method shows that noise in the transmission data increases the noise in the corrected emission data. The CT-based method was accurate and delivered low-noise images suitable for both PET data correction and PET tracer localization.
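The "global scale factor" idea above can be sketched simply: for a uniform, water-like subject, every line of response is corrected by one average attenuation factor exp(μL) instead of a measured, position-dependent one. The phantom diameters below are assumed, and μ ≈ 0.096 cm⁻¹ is the approximate linear attenuation coefficient of water at 511 keV.

```python
import math

# Sketch: single global attenuation correction factor for a uniform phantom.
MU_511 = 0.096          # cm^-1, approx. water at 511 keV
D_MOUSE = 3.0           # cm, assumed mouse-sized phantom diameter
D_RAT = 6.0             # cm, assumed rat-sized phantom diameter

def global_attenuation_correction(measured, path_length_cm, mu=MU_511):
    """Multiply measured coincidences by exp(mu * L) for one average path L."""
    return measured * math.exp(mu * path_length_cm)

for d in (D_MOUSE, D_RAT):
    print(f"{d:.0f} cm phantom: correction factor {global_attenuation_correction(1.0, d):.2f}")
```

Because one factor is applied everywhere, the average activity lands near the right value but the radial "cupping" (shorter paths near the edge, longer through the centre) is left uncorrected, exactly as the results above report.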
Kužel, Petr; Němec, Hynek; Kadlec, Filip; Kadlec, Christelle
2010-01-01
Vol. 18, No. 15 (2010), pp. 15338-15348. ISSN 1094-4087 R&D Projects: GA ČR GC202/09/J045 Institutional research plan: CEZ:AV0Z10100520 Keywords: terahertz spectroscopy * Gouy phase shift * gaussian beams * refractive index Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 3.749, year: 2010
Spelling Correction in Context
Pinot, Guillaume; Enguehard, Chantal
2005-01-01
Spelling checkers, frequently used nowadays, cannot correct real-word errors: the erroneous replacement of dessert by desert, for example, is not detected. In this article we propose an algorithm based on the examination of the context of words to correct this kind of spelling error. The algorithm is trained on a raw corpus.
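A minimal sketch of the idea (not the paper's algorithm): train bigram counts on a raw corpus, then prefer whichever member of a confusion set scores better in the word's immediate context. The tiny corpus and the desert/dessert confusion set are illustrative.

```python
from collections import Counter

# Toy training corpus (raw text, whitespace-tokenized)
corpus = ("after dinner we had dessert . the camel crossed the desert . "
          "we ate dessert . the desert was hot").split()
bigrams = Counter(zip(corpus, corpus[1:]))

# Hypothetical confusion set for real-word errors
CONFUSION = {"desert": {"desert", "dessert"}, "dessert": {"desert", "dessert"}}

def correct(tokens):
    """Replace a word by a confusion-set member with better bigram context."""
    out = list(tokens)
    for i, w in enumerate(tokens):
        def score(word):
            left = bigrams[(tokens[i - 1], word)] if i > 0 else 0
            right = bigrams[(word, tokens[i + 1])] if i + 1 < len(tokens) else 0
            return left + right
        for cand in CONFUSION.get(w, {w}):
            if score(cand) > score(out[i]):
                out[i] = cand
    return out

print(correct("we had desert".split()))   # context favors "dessert"
```

Real systems would smooth the counts and use larger context windows, but the mechanism is the same: the surrounding words, not the word's own spelling, decide the correction.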
Derivative corrections from noncommutativity
We show that an infinite subset of the higher-derivative α' corrections to the DBI and Chern-Simons actions of ordinary commutative open-string theory can be determined using noncommutativity. Our predictions are compared to some lowest order α' corrections that have been computed explicitly by Wyllard (hep-th/0008125), and shown to agree. (author)
Hybrid scatter correction for CT imaging
The purpose of this study was to develop and evaluate the hybrid scatter correction algorithm (HSC) for CT imaging. Therefore, two established ways to perform scatter correction, i.e. physical scatter correction based on Monte Carlo simulations and a convolution-based scatter correction algorithm, were combined in order to perform an object-dependent, fast and accurate scatter correction. Based on a reconstructed CT volume, patient-specific scatter intensity is estimated by a coarse Monte Carlo simulation that uses a reduced amount of simulated photons in order to reduce the simulation time. To further speed up the Monte Carlo scatter estimation, scatter intensities are simulated only for a fraction of all projections. In a second step, the high noise estimate of the scatter intensity is used to calibrate the open parameters in a convolution-based algorithm which is then used to correct measured intensities for scatter. Furthermore, the scatter-corrected intensities are used in order to reconstruct a scatter-corrected CT volume data set. To evaluate the scatter reduction potential of HSC, we conducted simulations in a clinical CT geometry and measurements with a flat detector CT system. In the simulation study, HSC-corrected images were compared to scatter-free reference images. For the measurements, no scatter-free reference image was available. Therefore, we used an image corrected with a low-noise Monte Carlo simulation as a reference. The results show that the HSC can significantly reduce scatter artifacts. Compared to the reference images, the error due to scatter artifacts decreased from 100% for uncorrected images to a value below 20% for HSC-corrected images for both the clinical (simulated data) and the flat detector CT geometry (measurement). Compared to a low-noise Monte Carlo simulation, with the HSC the number of photon histories can be reduced by about a factor of 100 per projection without losing correction accuracy. Furthermore, it was sufficient to
Accurate ab initio vibrational energies of methyl chloride
Owens, Alec, E-mail: owens@mpi-muelheim.mpg.de [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Department of Physics and Astronomy, University College London, Gower Street, WC1E 6BT London (United Kingdom); Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan [Department of Physics and Astronomy, University College London, Gower Street, WC1E 6BT London (United Kingdom); Thiel, Walter [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany)
2015-06-28
Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH₃³⁵Cl and CH₃³⁷Cl. The respective PESs, CBS-35 HL and CBS-37 HL, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY₃Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35 HL and CBS-37 HL PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm⁻¹, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH₃Cl without empirical refinement of the respective PESs.
Accurate transition rates for intercombination lines of singly ionized nitrogen
The transition energies and rates for the 2s²2p² ³P₁,₂ – 2s2p³ ⁵S₂ᵒ and 2s²2p3s – 2s²2p3p intercombination transitions have been calculated using term-dependent nonorthogonal orbitals in the multiconfiguration Hartree-Fock approach. Several sets of spectroscopic and correlation nonorthogonal functions have been chosen to describe adequately the term dependence of wave functions and various correlation corrections. Special attention has been focused on the accurate representation of strong interactions between the 2s2p³ ¹,³P₁ᵒ and 2s²2p3s ¹,³P₁ᵒ levels. The relativistic corrections are included through the one-body mass correction, Darwin, and spin-orbit operators and the two-body spin-other-orbit and spin-spin operators in the Breit-Pauli Hamiltonian. The importance of core-valence correlation effects has been examined. The accuracy of the present transition rates is evaluated by the agreement between the length and velocity formulations combined with the agreement between the calculated and measured transition energies. The present results for transition probabilities, branching fractions, and lifetimes have been compared with previous calculations and experiments.
Accurate thermoelastic tensor and acoustic velocities of NaCl
Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.
2015-12-01
Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures are still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.
Can clinicians accurately assess esophageal dilation without fluoroscopy?
Bailey, A D; Goldner, F
1990-01-01
This study questioned whether clinicians could determine the success of esophageal dilation accurately without the aid of fluoroscopy. Twenty patients were enrolled with the diagnosis of distal esophageal stenosis, including benign peptic stricture (17), Schatzki's ring (2), and squamous cell carcinoma of the esophagus (1). Dilation attempts using only Maloney dilators were monitored fluoroscopically by the principal investigator, the physician and patient being unaware of the findings. Physicians then predicted whether or not their dilations were successful, and they examined various features to determine their usefulness in predicting successful dilation. They were able to predict successful dilation accurately in 97% of the cases studied; however, their predictions of unsuccessful dilation were correct only 60% of the time. Features helpful in predicting passage included easy passage of the dilator (98%) and the patient feeling the dilator in the stomach (95%). Excessive resistance suggesting unsuccessful passage was an unreliable feature and was often due to the dilator curling in the stomach. When Maloney dilators are used to dilate simple distal strictures, if the physician predicts successful passage, he is reliably accurate without the use of fluoroscopy; however, if unsuccessful passage is suspected, fluoroscopy must be used for confirmation. PMID:2210278
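The 97% and 60% figures above are positive and negative predictive values. As a hedged sketch (with illustrative counts, not the study's actual 2×2 table), they fall out of a confusion matrix of predictions against the fluoroscopy ground truth:

```python
# Sketch: predictive values from a prediction-vs-fluoroscopy confusion matrix.
def predictive_values(tp, fp, tn, fn):
    ppv = tp / (tp + fp)   # fraction of predicted successes that truly succeeded
    npv = tn / (tn + fn)   # fraction of predicted failures that truly failed
    return ppv, npv

# Hypothetical counts chosen to roughly match the reported 97% / 60%:
ppv, npv = predictive_values(tp=33, fp=1, tn=3, fn=2)
print(round(ppv, 2), round(npv, 2))  # → 0.97 0.6
```

The asymmetry (high PPV, low NPV) is exactly the clinical conclusion: a predicted success can be trusted, while a predicted failure needs fluoroscopic confirmation.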
Mobile image based color correction using deblurring
Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.
2015-03-01
Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed by utilizing food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e., a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique combining image deblurring and color correction. The contribution consists of introducing an automatic camera-shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space.
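The checkerboard-based color correction above can be sketched as a least-squares fit. This is a first-order (affine) instance of the polynomial model, not TADA's exact formulation: the known reference colors of the fiducial patches and their observed values under the unknown illuminant determine a correction matrix, which is then applied to the whole image.

```python
import numpy as np

# Sketch: fit an affine color-correction matrix from fiducial-marker patches.
def fit_color_correction(observed, reference):
    """observed, reference: (N, 3) RGB arrays for the N checkerboard patches."""
    A = np.hstack([observed, np.ones((observed.shape[0], 1))])  # affine term
    M, *_ = np.linalg.lstsq(A, reference, rcond=None)           # (4, 3) matrix
    return M

def apply_color_correction(M, pixels):
    A = np.hstack([pixels, np.ones((pixels.shape[0], 1))])
    return A @ M

rng = np.random.default_rng(0)
ref = rng.uniform(0, 1, (24, 3))           # 24 hypothetical reference patch colors
true_M = np.array([[0.9, 0.05, 0.0],
                   [0.0, 0.8, 0.1],
                   [0.1, 0.0, 0.85],
                   [0.02, 0.03, 0.01]])    # simulated color cast + offset
obs = np.hstack([ref, np.ones((24, 1))]) @ true_M
M = fit_color_correction(obs, ref)
corrected = apply_color_correction(M, obs)
print(np.abs(corrected - ref).max())       # near zero in this noiseless example
```

Higher-order polynomial models add cross terms (r·g, r², ...) as extra columns of A; the fit itself stays the same linear least-squares problem.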
moco: Fast Motion Correction for Calcium Imaging.
Dubbs, Alexander; Guevara, James; Yuste, Rafael
2016-01-01
Motion correction is the first step in a pipeline of algorithms to analyze calcium imaging videos and extract biologically relevant information, for example the network structure of the neurons therein. Fast motion correction is especially critical for closed-loop activity-triggered stimulation experiments, where accurate detection and targeting of specific cells is necessary. We introduce a novel motion-correction algorithm which uses a Fourier-transform approach and a combination of judicious downsampling and the accelerated computation of many L2 norms using dynamic programming and two-dimensional, FFT-accelerated convolutions to enhance its efficiency. Its accuracy is comparable to that of established community-used algorithms, and it is more stable to large translational motions. It is programmed in Java and is compatible with ImageJ. PMID:26909035
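The general idea behind Fourier-transform motion correction can be sketched with phase correlation (this illustrates the family of methods, not moco's Java implementation): the translation between a frame and a template appears as a sharp peak in the inverse FFT of their normalized cross-power spectrum.

```python
import numpy as np

# Sketch: rigid translation estimation by FFT phase correlation.
def estimate_shift(frame, template):
    """Return (dy, dx) such that frame ≈ np.roll(template, (dy, dx), axis=(0, 1))."""
    F = np.fft.fft2(frame) * np.conj(np.fft.fft2(template))
    F /= np.abs(F) + 1e-12                    # keep only the phase
    corr = np.fft.ifft2(F).real               # sharp peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around indices to signed shifts
    if dy > frame.shape[0] // 2:
        dy -= frame.shape[0]
    if dx > frame.shape[1] // 2:
        dx -= frame.shape[1]
    return dy, dx

rng = np.random.default_rng(1)
template = rng.standard_normal((64, 64))              # toy reference frame
frame = np.roll(template, shift=(5, -3), axis=(0, 1))  # simulated motion
print(estimate_shift(frame, template))
```

Once the shift is known, correction is just translating the frame back; downsampling and reusing partial sums, as the abstract describes, are ways of making the many per-frame comparisons cheap.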
Attitudinally correct nomenclature
Cook, A C; Anderson, R. H.
2002-01-01
For half a century, inappropriate terms have been used to describe the various parts of the heart in a clinical context. Does the cardiological community have the fortitude to correct these mistakes?
Nested Quantum Annealing Correction
Vinci, Walter; Albash, Tameem; Lidar, Daniel A.
2015-01-01
We present a general error-correcting scheme for quantum annealing that allows for the encoding of a logical qubit into an arbitrarily large number of physical qubits. Given any Ising model optimization problem, the encoding replaces each logical qubit by a complete graph of degree $C$, representing the distance of the error-correcting code. A subsequent minor-embedding step then implements the encoding on the underlying hardware graph of the quantum annealer. We demonstrate experimentally th...
Laboratory Building for Accurate Determination of Plutonium
2008-01-01
The accurate determination of plutonium is one of the most important assay techniques for nuclear fuel; it is also the key to chemical measurement transfer and the basis of the nuclear material balance. An
JIANG Min; FANG Zhen-Yun; SANG Wen-Long; GAO Fei
2006-01-01
In the minimal electromagnetic coupling model of the interaction between photons and electrons (positrons), we accurately calculate the photon-chain renormalized propagator and obtain an accurate result for the differential cross section of Bhabha scattering with a photon-chain renormalized propagator in quantum electrodynamics. The related radiative corrections are briefly reviewed and discussed.
Tanrıver, Mehmet
2015-04-01
In this article, a period analysis of the late-type eclipsing binary VV UMa is presented. This work is based on the periodic variation of the eclipse timings of the VV UMa binary. We determined the orbital properties and mass of a third orbiting body in the system by analyzing the light-travel time effect. The O-C diagram constructed for all available minima times of VV UMa exhibits a cyclic character superimposed on a linear variation. This variation includes three maxima and two minima within approximately 28,240 orbital periods of the system, and can be explained as the light-travel time effect (LITE) of an unseen third body in a triple system, which causes variations of the eclipse arrival times. New parameter values of the LITE due to the third body were computed, with a period of 23.22 ± 0.17 years in the system. The cyclic-variation analysis yields a value of 0.0139 day for the semi-amplitude of the light-travel time effect and 0.35 for the orbital eccentricity of the third body. The mass of the third body that orbits the eclipsing binary stars is 0.787 ± 0.02 M⊙, and the semi-major axis of its orbit is 10.75 AU.
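A back-of-envelope check of the LITE numbers above (a hedged sketch, not the authors' full fit, and ignoring the eccentricity's effect on the amplitude): the O-C semi-amplitude times the speed of light gives the projected orbit of the binary's centre of mass, from which the third-body mass function follows.

```python
# Sketch: mass function of the third body from the LITE amplitude and period.
C_AU_PER_DAY = 173.1446   # speed of light in AU/day
A_LITE = 0.0139           # day, O-C (LITE) semi-amplitude from the analysis above
P3 = 23.22                # yr, third-body period

a12_sini = A_LITE * C_AU_PER_DAY      # AU, projected semi-major axis of the binary's orbit
fm = a12_sini ** 3 / P3 ** 2          # solar masses, mass function f(m3)
print(a12_sini, fm)
```

Solving f(m3) = m3³ sin³i / (M₁₂ + m3)² for m3 then requires the binary's total mass (and an inclination assumption); that final step is what yields the quoted third-body mass.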
Invariant Image Watermarking Using Accurate Zernike Moments
Ismail A. Ismail
2010-01-01
Problem statement: Digital image watermarking is the most popular method for image authentication, copyright protection and content description. Zernike moments are the most widely used moments in image processing and pattern recognition. The magnitudes of Zernike moments are rotation invariant, so they can be used just as a watermark signal or be further modified to carry embedded data. Zernike moments computed in Cartesian coordinates are not accurate due to geometrical and numerical errors. Approach: In this study, we employed a robust image-watermarking algorithm using accurate Zernike moments. These moments are computed in polar coordinates, where both approximation and geometric errors are removed. Accurate Zernike moments are used in image watermarking and proved to be robust against different kinds of geometric attacks. The performance of the proposed algorithm is evaluated using standard images. Results: Experimental results show that accurate Zernike moments achieve a higher degree of robustness than approximated ones against rotation, scaling, flipping, shearing and affine transformation. Conclusion: By computing accurate Zernike moments, the embedded watermark bits can be extracted at a low error rate.
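The polar-coordinate computation above is built on the Zernike radial polynomials R_n^m(r); here is a minimal sketch of their textbook closed form (an illustration only, not the paper's error-free integration scheme):

```python
from math import factorial

def zernike_radial(n, m, r):
    """Radial polynomial R_n^m(r) of the Zernike basis (requires n - |m| even).
    The full moment integrates f(r, theta) * R_n^m(r) * exp(-1j*m*theta) over
    the unit disk; rotation only changes the exp phase, so |moment| is invariant."""
    m = abs(m)
    if (n - m) % 2 != 0 or n < m:
        raise ValueError("need n >= |m| and n - |m| even")
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
        * r ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )

print(zernike_radial(2, 0, 0.5))  # R_2^0(r) = 2r^2 - 1 -> -0.5
```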
Model Correction Factor Method
Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes
1997-01-01
The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, considering a computationally time consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit state based on an idealized mechanical model, to be adapted to the original limit state by the model correction factor. Reliable approximations are obtained by iterative use of gradient information on the original limit state function, analogously to previous response surface approaches. However, the strength of the model correction factor method is that in simpler form, not using gradient information on the original limit state function or only using this information once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations on the reliability.
For the practical application of the method proposed by J. Bryant, the authors carried out a series of small corrections, related to the background, the dead time of the detectors and channels, the resolution time of the coincidences, the accidental coincidences, the decay scheme, and the gamma efficiency of the beta detector and the beta efficiency of the gamma detector. The calculation of the correction formula is presented in the development of the present report, with 25 combinations presented of the probability of the first state existing at the time of one disintegration and of the second state at the time of the following disintegration. (Author)
Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.
Fuchs, Franz G; Hjelmervik, Jon M
2016-02-01
A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results. PMID:26731454
Second-order accurate finite volume method for well-driven flows
Dotlić, M.; Vidović, D.; Pokorni, B.; Pušić, M.; Dimkić, M.
2016-02-01
We consider a finite volume method for a well-driven fluid flow in a porous medium. Due to the singularity of the well, modeling in the near-well region with standard numerical schemes results in a completely wrong total well flux and an inaccurate hydraulic head. Local grid refinement can help, but it comes at computational cost. In this article we propose two methods to address the well singularity. In the first method the flux through well faces is corrected using a logarithmic function, in a way related to the Peaceman model. Coupling this correction with a non-linear second-order accurate two-point scheme gives a greatly improved total well flux, but the resulting scheme is still inconsistent. In the second method fluxes in the near-well region are corrected by representing the hydraulic head as a sum of a logarithmic and a linear function. This scheme is second-order accurate.
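The logarithmic representation used in the second method is closely related to the classic Peaceman well model; a minimal sketch of that idea (the formula and the standard r_e ≈ 0.2Δx equivalent radius are textbook assumptions, not the authors' scheme):

```python
from math import log, pi

def peaceman_well_flux(h_cell, h_well, transmissivity, dx, r_well):
    """Well flux implied by the steady radial (logarithmic) head profile
    h(r) = h_well + Q / (2*pi*T) * ln(r / r_well), evaluated at Peaceman's
    equivalent radius r_e ~ 0.2*dx for a square grid cell."""
    r_e = 0.2 * dx
    return 2.0 * pi * transmissivity * (h_cell - h_well) / log(r_e / r_well)

q = peaceman_well_flux(h_cell=2.0, h_well=1.0, transmissivity=1.0, dx=1.0, r_well=0.1)
```

Plugging q back into the logarithmic profile reproduces the cell-center head exactly, which is the consistency a naive linear flux approximation loses near the singularity.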
An Improved Wavelet Correction for Zero Shifted Accelerometer Data
Timothy S. Edwards
2003-01-01
Accelerometer data from shock measurements often contains a spurious DC drifting phenomenon known as zero shifting. This erroneous signal can be caused by a variety of sources. The most conservative approach when dealing with such data is to discard it and collect a different set with steps taken to prevent the zero shifting. This approach is rarely practical, however. The test article may have been destroyed or it may be impossible or prohibitively costly to recreate the test. A method has been proposed by which wavelets may be used to correct the acceleration data. By comparing the corrected accelerometer data to an independent measurement of the acceleration from a laser vibrometer this paper shows that the corrected data, in the cases presented, accurately represents the shock. A method is presented by which the analyst may accurately choose the wavelet correction parameters. The comparisons are made in the time and frequency domains, as well as with the shock response spectrum.
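The core idea of wavelet-based zero-shift removal (decompose, discard the drift-carrying low-frequency content, reconstruct) can be sketched with a plain Haar transform; this toy version is an assumption for illustration and omits the paper's guidance on choosing the wavelet and correction parameters:

```python
import numpy as np

def haar_detrend(signal, levels):
    """Remove slow drift by zeroing the coarsest Haar approximation:
    decompose `levels` times, discard the final approximation, rebuild.
    Requires len(signal) divisible by 2**levels."""
    x = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        a = (x[0::2] + x[1::2]) / 2.0      # approximation (low-pass)
        d = (x[0::2] - x[1::2]) / 2.0      # detail (high-pass)
        details.append(d)
        x = a
    x = np.zeros_like(x)                   # drop the drift-carrying approximation
    for d in reversed(details):
        rec = np.empty(2 * d.size)
        rec[0::2] = x + d                  # exact inverse of the split above
        rec[1::2] = x - d
        x = rec
    return x
```

A constant offset (the "zero shift") lives entirely in the discarded approximation, so the corrected signal always comes back with zero mean.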
The Digital Correction Unit: A data correction/compaction chip
The Digital Correction Unit (DCU) is a semi-custom CMOS integrated circuit which corrects and compacts data for the SLD experiment. It performs a piece-wise linear correction to data, and implements two separate compaction algorithms. This paper describes the basic functionality of the DCU and its correction and compaction algorithms
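A piece-wise linear correction of the kind described amounts to interpolating raw values through a small table of calibration breakpoints; a sketch (the breakpoint values are made up for illustration, not the DCU's calibration):

```python
import numpy as np

# Hypothetical calibration table: raw ADC value -> corrected value.
breakpoints = np.array([0.0, 100.0, 200.0, 255.0])
corrected   = np.array([0.0,  90.0, 210.0, 255.0])

def correct_raw(raw):
    """Piece-wise linear correction: linear interpolation between breakpoints."""
    return np.interp(raw, breakpoints, corrected)

# A raw reading of 50 falls on the first segment (slope 0.9), mapping to 45.
value = correct_raw(50.0)
```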
Text Induced Spelling Correction
Reynaert, M.W.C.
2004-01-01
We present TISC, a language-independent and context-sensitive spelling checking and correction system designed to facilitate the automatic removal of non-word spelling errors in large corpora. Its lexicon is derived from a very large corpus of raw text, without supervision, and contains word unigram
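TISC itself is corpus-driven and context-sensitive; as a much smaller illustration of the corpus-derived-lexicon idea, here is a frequency-based corrector over edit-distance-1 candidates (a generic sketch, not TISC's actual algorithm):

```python
from collections import Counter

# Unsupervised "lexicon": unigram frequencies derived from raw text.
corpus = "the cat sat on the mat the cat ran".split()
freq = Counter(corpus)

def edits1(word):
    """All strings one deletion, replacement, or insertion away from word."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + replaces + inserts)

def correct_word(word):
    if word in freq:                        # known word: leave it alone
        return word
    candidates = [w for w in edits1(word) if w in freq]
    return max(candidates, key=freq.get) if candidates else word
```

Non-words are mapped to the most frequent in-lexicon neighbor; words the corpus never saw in any nearby form pass through unchanged.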
Writing: Revisions and Corrections
Kohl, Herb
1978-01-01
A fifth grader wanted to know what he had to do to get all his ideas the way he wanted them in his story writing "and" have the spelling, punctuation and quotation marks correctly styled. His teacher encouraged him to think about writing as a process and provided the student with three steps as guidelines for effective writing. (Author/RK)
The EUROGAM data-acquisition has to handle a large number of events/s. Typical in-beam experiments using heavy-ion fusion reactions assume the production of about 50 000 compound nuclei per second deexciting via particle and γ-ray emissions. The very powerful γ-ray detection of EUROGAM is expected to produce high-fold event rates as large as 104 events/s. Such high count rates introduce, in a common dead time mode, large dead times for the whole system associated with the processing of the pulse, its digitization and its readout (from the preamplifier pulse up to the readout of the information). In order to minimize the dead time the shaping time constant τ, usually about 3 μs for large volume Ge detectors has to be reduced. Smaller shaping times, however, will adversely affect the energy resolution due to ballistic deficit. One possible solution is to operate the linear amplifier, with a somewhat smaller shaping time constant (in the present case we choose τ = 1.5 μs), in combination with a ballistic deficit compensator. The ballistic deficit can be corrected in different ways using a Gated Integrator, a hardware correction or even a software correction. In this paper we present a comparative study of the software and hardware corrections as well as gated integration
Philips Pro-Trace: accurate quantification near the limits of detection
Pro-Trace is a new module for Philips' SuperQ analytical software, developed specifically for the analysis of trace elements in a wide variety of matrices. It enables the full potential of the sub-ppm quantification achievable by Philips Magix/PW240x spectrometers to be realized. Accurate trace element analysis requires very accurate determination of net count rates (i.e. after all the corrections for background, spectral overlap and matrix have been made) together with careful selection of instrumental parameters, which comes through experience. Pro-Trace has been developed with both in mind. On the application side Pro-Trace offers: superior background correction; background correction for fixed channels; iterated spectral overlap correction; correction for low-level spectral impurity; correction of inter-element matrix effects using mass absorption coefficients; jump-edge matrix correction; LLD and error calculation for every element in every sample. From the user standpoint, Pro-Trace operates entirely within SuperQ, which is familiar to many. Much of the experience required in setting up a trace element application has been incorporated into a Smart Element Selector and an application setup wizard. A set of high-purity setup standards and blanks has also been developed for the Pro-Trace package. This set contains all the samples required for background correction, line overlap correction, MAC calibration and concentration calibration for 40 elements. This presentation will be illustrated by examples of calibrations and data obtained using Pro-Trace. Copyright (2002) Australian X-ray Analytical Association Inc
Geometric correction of APEX hyperspectral data
Vreys Kristin
2016-03-01
Hyperspectral imagery originating from airborne sensors is nowadays widely used for the detailed characterization of the land surface. The correct mapping of pixel positions to ground locations largely contributes to the success of the applications. Accurate geometric correction, also referred to as “orthorectification”, is thus an important prerequisite which must be performed prior to using airborne imagery for evaluations like change detection, or mapping or overlaying the imagery with existing data sets or maps. A so-called “ortho-image” provides an accurate representation of the earth’s surface, having been adjusted for lens distortions, camera tilt and topographic relief. In this paper, we describe the different steps in the geometric correction process of APEX hyperspectral data, as applied in the Central Data Processing Center (CDPC) at the Flemish Institute for Technological Research (VITO), Mol, Belgium. APEX ortho-images are generated through direct georeferencing of the raw images, thereby making use of sensor interior and exterior orientation data, boresight calibration data and elevation data. They can be referenced to any user-specified output projection system and can be resampled to any output pixel size.
Accurate atomic data for industrial plasma applications
Griesmann, U.; Bridges, J.M.; Roberts, J.R.; Wiese, W.L.; Fuhr, J.R. [National Inst. of Standards and Technology, Gaithersburg, MD (United States)
1997-12-31
Reliable branching fraction, transition probability and transition wavelength data for radiative dipole transitions of atoms and ions in plasma are important in many industrial applications. Optical plasma diagnostics and modeling of the radiation transport in electrical discharge plasmas (e.g. in electrical lighting) depend on accurate basic atomic data. NIST has an ongoing experimental research program to provide accurate atomic data for radiative transitions. The new NIST UV-vis-IR high resolution Fourier transform spectrometer has become an excellent tool for accurate and efficient measurements of numerous transition wavelengths and branching fractions in a wide wavelength range. Recently, the authors have also begun to employ photon counting techniques for very accurate measurements of branching fractions of weaker spectral lines with the intent to improve the overall accuracy for experimental branching fractions to better than 5%. They have now completed their studies of transition probabilities of Ne I and Ne II. The results agree well with recent calculations and for the first time provide reliable transition probabilities for many weak intercombination lines.
More accurate picture of human body organs
Computerized tomography and nuclear magnetic resonance tomography (NMRT) are revolutionary contributions to radiodiagnosis because they make it possible to obtain a more accurate image of human body organs. The principles of both methods are described. Attention is mainly devoted to NMRT, which has been in clinical use for only three years. It does not burden the organism with ionizing radiation. (Ha)
Isomerism of Cyanomethanimine: Accurate Structural, Energetic, and Spectroscopic Characterization.
Puzzarini, Cristina
2015-11-25
The structures, relative stabilities, and rotational and vibrational parameters of the Z-C-, E-C-, and N-cyanomethanimine isomers have been evaluated using state-of-the-art quantum-chemical approaches. Equilibrium geometries have been calculated by means of a composite scheme based on coupled-cluster calculations that accounts for the extrapolation to the complete basis set limit and core-correlation effects. The latter approach is proved to provide molecular structures with an accuracy of 0.001-0.002 Å and 0.05-0.1° for bond lengths and angles, respectively. Systematically extrapolated ab initio energies, accounting for electron correlation through coupled-cluster theory, including up to single, double, triple, and quadruple excitations, and corrected for core-electron correlation and anharmonic zero-point vibrational energy, have been used to accurately determine relative energies and the Z-E isomerization barrier with an accuracy of about 1 kJ/mol. Vibrational and rotational spectroscopic parameters have been investigated by means of hybrid schemes that allow us to obtain rotational constants accurate to about a few megahertz and vibrational frequencies with a mean absolute error of ∼1%. Where available, for all properties considered, a very good agreement with experimental data has been observed. PMID:26529434
Accurate phylogenetic classification of DNA fragments based onsequence composition
McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore
2006-05-01
Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome datasets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
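The composition signal such a classifier exploits can be illustrated with a toy k-mer profile and nearest-reference assignment (a deliberately simplified stand-in; PhyloPythia itself uses a trained multi-class classifier over richer composition features):

```python
from collections import Counter
from itertools import product

def kmer_profile(seq, k=2):
    """Normalized k-mer composition vector of a DNA sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {"".join(p): counts["".join(p)] / total for p in product("ACGT", repeat=k)}

def classify(fragment, references, k=2):
    """Assign a fragment to the reference with the closest composition (L1 distance)."""
    prof = kmer_profile(fragment, k)

    def dist(name):
        ref = kmer_profile(references[name], k)
        return sum(abs(prof[m] - ref[m]) for m in prof)

    return min(references, key=dist)
```

Even this crude distance separates compositionally distinct sources, which is why composition carries a phylogenetic signal at all.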
The FLUKA Code: An Accurate Simulation Tool for Particle Therapy
Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis
2016-01-01
Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth–dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956
Massey, Richard; Stoughton, Chris; Leauthaud, Alexie; Rhodes, Jason; Koekemoer, Anton; Ellis, Richard; Shaghoulian, Edgar
2013-07-01
Charge Transfer Inefficiency (CTI) due to radiation damage above the Earth's atmosphere creates spurious trailing in images from Charge-Coupled Device (CCD) imaging detectors. Radiation damage also creates unrelated warm pixels, which can be used to measure CTI. This code provides pixel-based correction for CTI and has proven effective in Hubble Space Telescope Advanced Camera for Surveys raw images, successfully reducing the CTI trails by a factor of ~30 everywhere in the CCD and at all flux levels. The core is written in java for speed, and a front-end user interface is provided in IDL. The code operates on raw data by returning individual electrons to pixels from which they were unintentionally dragged during readout. Correction takes about 25 minutes per ACS exposure, but is trivially parallelisable to multiple processors.
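The "return electrons to the pixels they were dragged from" idea can be sketched as iterative inversion of a forward trailing model (a toy one-trap model; the actual code's CCD trap species and readout simulation are far more detailed):

```python
import numpy as np

def add_trail(column, f=0.1):
    """Toy readout model: each pixel leaks a fraction f of its charge
    into the pixel read out after it (a single trailing pixel)."""
    out = (1 - f) * column
    out[1:] += f * column[:-1]
    return out

def correct_trail(observed, f=0.1, iterations=20):
    """Pixel-based correction by fixed-point iteration on the forward model:
    repeatedly add the mismatch between what we observed and what the current
    estimate would look like after trailing."""
    estimate = observed.copy()
    for _ in range(iterations):
        estimate = estimate + (observed - add_trail(estimate, f))
    return estimate
```

At the fixed point, trailing the estimate reproduces the observation exactly, so the estimate equals the untrailed image.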
Aberration Corrected Emittance Exchange
Nanni, Emilio A
2015-01-01
Full exploitation of emittance exchange (EEX) requires aberration-free performance of a complex imaging system including active radio-frequency (RF) elements which can add temporal distortions. We investigate the performance of an EEX line where the exchange occurs between two dimensions with normalized emittances which differ by orders of magnitude. The transverse emittance is exchanged into the longitudinal dimension using a double dog-leg emittance exchange setup with a 5 cell RF deflector cavity. Aberration correction is performed on the four most dominant aberrations. These include temporal aberrations that are corrected with higher order magnetic optical elements located where longitudinal and transverse emittance are coupled. We demonstrate aberration-free performance of emittances differing by 4 orders of magnitude, i.e. an initial transverse emittance of $\epsilon_x = 1$ pm-rad is exchanged with a longitudinal emittance of $\epsilon_z = 10$ nm-rad.
Quantum Error Correcting Subsystems and Self-Correcting Quantum Memories
Bacon, D
2005-01-01
The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. In this paper we use this fact to define subsystems with quantum error correcting capabilities. In standard quantum error correcting codes, one requires the ability to apply a procedure which exactly reverses on the error correcting subspace any correctable error. In contrast, for quantum error correcting subsystems, the correction procedure need not undo the error which has occurred, but instead one must perform correction only modulo the subsystem structure. Here we present two examples of quantum error correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form of a simple mean-field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system will be a natural high-temperature qua...
Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi
2012-06-01
One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705
1994-03-01
In the heading of David Cassidy's review of The Private Lives of Albert Einstein (18 February, p. 997) the price of the book as sold by its British publisher, Faber and Faber, was given incorrectly; the correct price is £15.99. The book is also to be published in the United States by St. Martin's Press, New York, in April, at a price of $23.95. PMID:17817438
Druinsky, Alex
2012-01-01
Several widely-used textbooks lead the reader to believe that solving a linear system of equations Ax = b by multiplying the vector b by a computed inverse inv(A) is inaccurate. Virtually all other textbooks on numerical analysis and numerical linear algebra advise against using computed inverses without stating whether this is accurate or not. In fact, under reasonable assumptions on how the inverse is computed, x = inv(A)*b is as accurate as the solution computed by the best backward-stable solvers. This fact is not new, but obviously obscure. We review the literature on the accuracy of this computation and present a self-contained numerical analysis of it.
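A quick numerical experiment along these lines (one random, well-conditioned system; an illustration of the claim, not a substitute for the backward-error analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))          # random matrices are well-conditioned w.h.p.
x_true = rng.standard_normal(n)
b = A @ x_true

x_solve = np.linalg.solve(A, b)          # backward-stable LU solver
x_inv = np.linalg.inv(A) @ b             # the "discouraged" route

res_solve = np.linalg.norm(A @ x_solve - b) / np.linalg.norm(b)
res_inv = np.linalg.norm(A @ x_inv - b) / np.linalg.norm(b)
print(res_solve, res_inv)                # both tiny for a well-conditioned A
```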
Accurate guitar tuning by cochlear implant musicians.
Lu, Thomas; Huang, Juan; Zeng, Fan-Gang
2014-01-01
Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081
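The beat cue the subjects exploited is easy to reproduce numerically: two tones a couple of hertz apart produce audible amplitude beating at exactly |f1 - f2|, turning a fine spectral judgment into counting slow loudness swells (a minimal numpy sketch):

```python
import numpy as np

fs = 8000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
f_ref, f_string = 110.0, 112.0     # reference tone vs. a string tuned 2 Hz sharp
mix = np.sin(2 * np.pi * f_ref * t) + np.sin(2 * np.pi * f_string * t)

def rms_around(center, half_width=0.05):
    """RMS level of the mixture in a short window -- a crude loudness meter."""
    sel = (t > center - half_width) & (t < center + half_width)
    return float(np.sqrt(np.mean(mix[sel] ** 2)))

loud = rms_around(0.50)    # beat maximum: the two tones add in phase
quiet = rms_around(0.25)   # beat minimum: the two tones cancel
```

The envelope repeats at |f_string - f_ref| = 2 Hz, and it vanishes entirely when the string is in tune, so sub-Hz matching by beats needs no fine pitch discrimination at all.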
Talus avulsion fractures: are they accurately diagnosed?
Robinson, Karen P; Davies, Mark B
2015-10-01
Dorsal talus avulsion fractures occurring along the supination line of the foot can cause pain and discomfort. Examination of the foot and ankle using the Ottawa ankle rules does not include examination of the talus, so an injury here is easily missed, causing concern to the patient. This is a retrospective study carried out in a major trauma centre to look at the assessment and diagnosis of all patients with dorsal talus and navicular avulsion fractures over a one-year period. Nineteen patients with an isolated dorsal talus avulsion fracture and five patients with an isolated dorsal navicular fracture were included. The correct diagnosis was made in 12 of the 19 patients with isolated dorsal talus avulsion fractures; 7 patients were given an incorrect diagnosis after misreading of the radiograph. Four patients with a dorsal navicular avulsion fracture were given the correct diagnosis. If not correctly diagnosed on presentation, patients can be overly concerned that a 'fracture was missed', which can lead to confusion and anxiety. Therefore these injuries need to be recognised early, promptly diagnosed, treated symptomatically, and reassurance given. We recommend the routine palpation of the talus in addition to the examination set out in the Ottawa Ankle Rules and the close inspection of plain radiographs to adequately diagnose an injury in this area. PMID:26190632
Accurate Finite Difference Methods for Option Pricing
Persson, Jonas
2006-01-01
Stock options are priced numerically using space- and time-adaptive finite difference methods. European options on one and several underlying assets are considered. These are priced with adaptive numerical algorithms including a second order method and a more accurate method. For American options we use the adaptive technique to price options on one stock with and without stochastic volatility. In all these methods emphasis is put on the control of errors to fulfill predefined tolerance level...
Accurate, reproducible measurement of blood pressure.
Campbell, N. R.; Chockalingam, A; Fodor, J. G.; McKay, D. W.
1990-01-01
The diagnosis of mild hypertension and the treatment of hypertension require accurate measurement of blood pressure. Blood pressure readings are altered by various factors that influence the patient, the techniques used and the accuracy of the sphygmomanometer. The variability of readings can be reduced if informed patients prepare in advance by emptying their bladder and bowel, by avoiding over-the-counter vasoactive drugs the day of measurement and by avoiding exposure to cold, caffeine con...
Accurate variational forms for multiskyrmion configurations
Jackson, A.D.; Weiss, C.; Wirzba, A.; Lande, A.
1989-04-17
Simple variational forms are suggested for the fields of a single skyrmion on a hypersphere, S₃(L), and of a face-centered cubic array of skyrmions in flat space, R₃. The resulting energies are accurate at the level of 0.2%. These approximate field configurations provide a useful alternative to brute-force solutions of the corresponding Euler equations.
Efficient Accurate Context-Sensitive Anomaly Detection
Anonymous
2007-01-01
For program behavior-based anomaly detection, the only way to ensure accurate monitoring is to construct an efficient and precise program behavior model. A new program behavior-based anomaly detection model, called the combined pushdown automaton (CPDA) model, was proposed, which is based on static analysis of binary executables. The CPDA model incorporates the optimized call stack walk and code instrumentation technique to gain complete context information. Thereby the proposed method can detect more attacks while retaining good performance.
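The context-sensitivity a pushdown model provides can be illustrated with a tiny stack-based monitor (the call edges and function names below are hypothetical; the real model is extracted automatically from static binary analysis):

```python
# Allowed call edges learned from the program: (caller, callee).
allowed_calls = {("main", "read"), ("main", "parse"), ("parse", "read")}

def trace_is_normal(events):
    """Accept a trace of ('call', fn) / ('ret', fn) events only if every
    return matches the innermost open call (pushdown behavior) and every
    call edge was seen during model construction."""
    stack = ["main"]
    for kind, fn in events:
        if kind == "call":
            if (stack[-1], fn) not in allowed_calls:
                return False          # unknown call edge: flag as anomaly
            stack.append(fn)
        elif kind == "ret":
            if stack[-1] != fn:
                return False          # impossible return: flag as anomaly
            stack.pop()
    return True
```

A finite-state model would accept any interleaving of known calls and returns; the stack is what rejects returns that could never happen in a real execution.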
Towards accurate modeling of moving contact lines
Holmgren, Hanna
2015-01-01
The present thesis treats the numerical simulation of immiscible incompressible two-phase flows with moving contact lines. The conventional Navier–Stokes equations combined with a no-slip boundary condition lead to a non-integrable stress singularity at the contact line. The singularity in the model can be avoided by allowing the contact line to slip. Implementing slip conditions in an accurate way is not straightforward, and different regularization techniques exist where ad-hoc procedures ...
Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo
2016-02-01
Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.
Accurate phase-shift velocimetry in rock
Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.
2016-06-01
Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.
A precise technique for manufacturing correction coil
An automated method of manufacturing correction coils has been developed which provides a precise embodiment of the coil design. Numerically controlled machines have been developed to accurately position coil windings on the beam tube. Two types of machines have been built. One machine bonds the wire to a substrate which is wrapped around the beam tube after it is completed, while the second machine bonds the wire directly to the beam tube. Both machines use the Multiwire® technique of bonding the wire to the substrate utilizing an ultrasonic stylus. These machines are being used to manufacture coils for both the SSC and RHIC.
High Frequency QRS ECG Accurately Detects Cardiomyopathy
Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds
2005-01-01
High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥ 40 points and ≥ 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing
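As a quick plausibility check, the reported 85% overall accuracy follows arithmetically from the 88% sensitivity and 82% specificity because the two study groups are the same size (66 patients, 66 controls). A short sketch of that arithmetic:

```python
# Balanced study groups as reported in the abstract
n_patients = n_controls = 66
sensitivity, specificity = 0.88, 0.82

true_pos = round(sensitivity * n_patients)   # patients correctly flagged
true_neg = round(specificity * n_controls)   # controls correctly cleared
accuracy = (true_pos + true_neg) / (n_patients + n_controls)
print(true_pos, true_neg, round(accuracy, 2))  # → 58 54 0.85
```

With balanced groups, accuracy is simply the mean of sensitivity and specificity, which matches the 85% figure quoted above.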
Threshold Corrections to the Bottom Quark Mass Revisited
Anandakrishnan, Archana; Raby, Stuart
2014-01-01
Threshold corrections to the bottom quark mass are often estimated under the approximation that tan β enhanced contributions are the most dominant. In this work we revisit this common approximation made in the estimation of the supersymmetric threshold corrections to the bottom quark mass. We calculate the full one-loop supersymmetric corrections to the bottom quark mass and survey a large part of the phenomenological MSSM parameter space to study the validity of considering only the tan β enhanced corrections. Our analysis demonstrates that this approximation severely breaks down in parts of the parameter space. The size of the threshold corrections has significant consequences for the estimation of fits to the bottom quark mass, couplings to Higgses, and flavor observables, and therefore the approximate expressions must be replaced with the full contributions for accurate estimations.
Assessing the correctional orientation of corrections officers in South Korea.
Moon, Byongook; Maxwell, Sheila Royo
2004-12-01
The correctional goal in South Korea has recently changed from the straightforward punishment of inmates to rehabilitation. Currently, emphases are being placed on education, counseling, and other treatment programs. These changes have consequently begun to also change the corrections officers' roles from a purely custodial role to a human service role, in which officers are expected to manage rehabilitation and treatment programs. Despite these changes, few studies have examined the attitudes of corrections officers toward rehabilitation programming. This is an important dimension to examine in rehabilitation programming, as corrections officers play a major role in the delivery of institutional programs. This study examines the attitudes of South Korean corrections officers toward rehabilitation programs. Approximately 430 corrections officers were sampled. Results show that correctional attitudes are largely influenced by not only officers' own motivations for joining corrections but also by institutional factors such as job stress. Policy implications are discussed. PMID:15538029
A Technique for Calculating Quantum Corrections to Solitons
Barnes, Chris; Turok, Neil
1997-01-01
We present a numerical scheme for calculating the first quantum corrections to the properties of static solitons. The technique is applicable to solitons of arbitrary shape, and may be used in 3+1 dimensions for multiskyrmions or other complicated solitons. We report on a test computation in 1+1 dimensions, where we accurately reproduce the analytical result with minimal numerical effort.
78 FR 16611 - Freedom of Information Act; Correction
2013-03-18
...The Federal Trade Commission published a final rule on February 28, 2013 revising its Rules of Practice governing access to agency records. In one of its amendatory instructions, the final rule mentioned a paragraph that was not being affected. This document makes a technical correction to the amendatory instruction so that it accurately reflects the amendments carried...
Educational Programs in Adult Correctional Institutions: A Survey.
Dell'Apa, Frank
A national survey of adult correctional institutions was conducted by questionnaire in 1973 to obtain an accurate picture of the current status of academic educational programs, particularly at the elementary and secondary levels, available to inmates. Questions were designed to obtain information regarding the degree of participation of inmates…
Prior to the implementation of the Corrective Action Program at Asco NPP, the station was already using a number of systems for troubleshooting problems and identifying areas for improvement in areas such as maintenance, operating experience and quality assurance. These systems coexisted with little interaction among each other. The publication of UNESA Guide CEN-13 led Asco NPP to implement the Program, which was then included in the SISC (Inspection Base Plan for Integrated Supervision System of NPPs), which is the Spanish version of the ROP. (Author).
Jensen, Rasmus Ramsbøl; Benjaminsen, Claus; Larsen, Rasmus;
2015-01-01
The application of motion tracking is wide, including: industrial production lines, motion interaction in gaming, computer-aided surgery and motion correction in medical brain imaging. Several devices for motion tracking exist using a variety of different methodologies. In order to use such devices...... offset and tracking noise in medical brain imaging. The data are generated from a phantom mounted on a rotary stage and have been collected using a Siemens High Resolution Research Tomograph for positron emission tomography. During acquisition the phantom was tracked with our latest tracking prototype...
For simple and accurate measurement of the current distribution in a broad beam from electron accelerators, a method for detecting the charge absorbed in a graphite target exposed to the air has been examined. The present report means to solve several fundamental problems. The effective incidence area of the absorber is strictly defined by the design of the geometrical arrangement of the absorber assembly. Electron backscattering from the absorber is corrected with backscattering coefficients in consideration of oblique incidence on the absorber. The influence of ionic charge produced in air is ascribed to the contact potential between the absorber and the guard, and correction methods are proposed. (orig.)
Niche Genetic Algorithm with Accurate Optimization Performance
LIU Jian-hua; YAN De-kun
2005-01-01
Based on a crowding mechanism, a novel niche genetic algorithm is proposed that dynamically records the evolutionary direction during evolution. After evolution, the precision of the solutions can be greatly improved by local searching along the recorded direction. Simulation shows that this algorithm not only maintains population diversity but also finds accurate solutions. Although this method takes more time than the standard GA, it is worthwhile in cases that demand high solution precision.
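A minimal sketch of the kind of scheme the abstract describes — crowding-based replacement plus a recorded improving direction that seeds a final local search — might look as follows. The test function, parameters and update rules are illustrative assumptions, not the authors' actual algorithm:

```python
import math
import random

random.seed(1)

def fitness(x):
    # Multimodal test function with peaks at x = 0.1, 0.3, 0.5, 0.7, 0.9
    return math.sin(5 * math.pi * x) ** 6

def niche_ga(pop_size=40, gens=60, step=0.05):
    pop = [random.random() for _ in range(pop_size)]
    direction = [0.0] * pop_size   # last improving move per individual
    for _ in range(gens):
        for i in range(pop_size):
            # Crowding: a child competes only with its own parent, which
            # preserves individuals sitting on different peaks (niches).
            child = min(1.0, max(0.0, pop[i] + random.gauss(0.0, step)))
            if fitness(child) > fitness(pop[i]):
                direction[i] = child - pop[i]  # record evolutionary direction
                pop[i] = child
    # Local search along the recorded direction with shrinking steps.
    for i in range(pop_size):
        x, d = pop[i], direction[i]
        while abs(d) > 1e-12:
            if fitness(x + d) > fitness(x):
                x += d
            else:
                d *= 0.5
        pop[i] = x
    return pop

pop = niche_ga()
best = max(pop, key=fitness)
print(round(fitness(best), 6))
```

The final hill-climb refines each individual well beyond the resolution of the mutation step, which is the precision gain the abstract attributes to searching along the recorded direction.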
How accurately can we calculate thermal systems?
The objective was to determine how accurately simple reactor lattice integral parameters can be determined, considering user input, differences in the methods, source data and the data processing procedures and assumptions. Three simple square lattice test cases with different fuel-to-moderator ratios were defined. The effect of the thermal scattering models was shown to be important and much bigger than the spread in the results. Nevertheless, differences of up to 0.4% in the K-eff calculated by continuous energy Monte Carlo codes were observed even when the same source data were used. (author)
Accurate diagnosis is essential for amebiasis
[No author listed]
2004-01-01
Amebiasis is one of the three most common causes of death from parasitic disease, and Entamoeba histolytica is the most widely distributed parasite in the world. In particular, Entamoeba histolytica infection in the developing countries is a significant health problem in amebiasis-endemic areas, with a significant impact on infant mortality [1]. In recent years a worldwide increase in the number of patients with amebiasis has refocused attention on this important infection. On the other hand, improvements in the quality of parasitological methods and the widespread use of accurate techniques have improved our knowledge about the disease.
Investigations on Accurate Analysis of Microstrip Reflectarrays
Zhou, Min; Sørensen, S. B.; Kim, Oleksiy S.;
2011-01-01
An investigation on accurate analysis of microstrip reflectarrays is presented. Sources of error in reflectarray analysis are examined and solutions to these issues are proposed. The focus is on two sources of error, namely the determination of the equivalent currents to calculate the radiation...... pattern, and the inaccurate mutual coupling between array elements due to the lack of periodicity. To serve as reference, two offset reflectarray antennas have been designed, manufactured and measured at the DTUESA Spherical Near-Field Antenna Test Facility. Comparisons of simulated and measured data are...
Kraus, Martin F.; Hornegger, Joachim
From the introduction of time domain OCT [1] up to recent swept source systems, motion continues to be an issue in OCT imaging. In contrast to normal photography, an OCT image does not represent a single point in time. Instead, conventional OCT devices sequentially acquire one-dimensional data over a period of several seconds, capturing one beam of light at a time and recording both the intensity and delay of reflections along its path through an object. In combination with unavoidable object motion which occurs in many imaging contexts, the problem of motion artifacts lies in the very nature of OCT imaging. Motion artifacts degrade image quality and make quantitative measurements less reliable. Therefore, it is desirable to come up with techniques to measure and/or correct object motion during OCT acquisition. In this chapter, we describe the effect of motion on OCT data sets and give an overview on the state of the art in the field of retinal OCT motion correction.
Contact Lenses for Vision Correction
Attenuation correction for myocardial scintigraphy: state-of-the-art
Myocardial perfusion imaging has been proven an accurate, noninvasive method for diagnosis of coronary artery disease with a high prognostic value. However image artifacts, which decrease sensitivity and in particular specificity, degrade the clinical impact of this method. Soft tissue attenuation is regarded as one of the most important factors of impaired image quality. Different approaches to correct for tissue attenuation have been implemented by the camera manufacturers. The principle is to derive an attenuation map from the transmission data and to correct the emission data for nonuniform photon attenuation with this map. There have been several reports published demonstrating an improved specificity with no substantial change in sensitivity by this method. To accurately perform attenuation correction, quality control measurements and adequate training of technologists and physicians are mandatory. (orig.)
Accurate radiative transfer calculations for layered media.
Selden, Adrian C
2016-07-01
Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics. PMID:27409700
Accurate basis set truncation for wavefunction embedding
Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.
2013-07-01
Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.
Accurate pose estimation for forensic identification
Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk
2010-04-01
In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.
Accurate determination of characteristic relative permeability curves
Krause, Michael H.; Benson, Sally M.
2015-09-01
A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.
Mixed Burst Error Correcting Codes
Sethi, Amita
2015-01-01
In this paper, we construct codes which improve on previously known blockwise burst error correcting codes in terms of their error correcting capabilities. Along with different bursts in different sub-blocks, the given codes also correct overlapping bursts of a given length in two consecutive sub-blocks of a code word. Such codes are called mixed burst correcting (mbc) codes.
Accurate, fully-automated NMR spectral profiling for metabolomics.
Siamak Ravanbakhsh
Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile", i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's Nuclear Magnetic Resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures, including real biological samples (serum and CSF), defined mixtures and realistic computer-generated spectra involving > 50 compounds, show that BAYESIL can autonomously find the concentrations of NMR-detectable metabolites accurately (~90% correct identification and ~10% quantification error) in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic publicly-accessible system that provides quantitative NMR spectral profiling effectively, with an accuracy on these biofluids that meets or exceeds the performance of trained experts. We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of
Accurate molecular classification of cancer using simple rules
Gotoh Osamu
2009-10-01
Background: One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensional gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often hampers the interpretability of the models. For a better understanding of the classification results, it is desirable to develop simpler rule-based models with as few marker genes as possible. Methods: We screened a small number of informative single genes and gene pairs on the basis of their depended degrees proposed in rough sets. Applying the decision rules induced by the selected genes or gene pairs, we constructed cancer classifiers. We tested the efficacy of the classifiers by leave-one-out cross-validation (LOOCV) of training sets and classification of independent test sets. Results: We applied our methods to five cancerous gene expression datasets: leukemia (acute lymphoblastic leukemia [ALL] vs. acute myeloid leukemia [AML]), lung cancer, prostate cancer, breast cancer, and leukemia (ALL vs. mixed-lineage leukemia [MLL] vs. AML). Accurate classification outcomes were obtained by utilizing just one or two genes. Some genes that correlated closely with the pathogenesis of relevant cancers were identified. In terms of both classification performance and algorithm simplicity, our approach outperformed or at least matched existing methods. Conclusion: In cancerous gene expression datasets, a small number of genes, even one or two if selected correctly, is capable of achieving an ideal cancer classification effect. This finding also means that very simple rules may perform well for cancerous class prediction.
Accuracy of 3D Virtual Planning of Corrective Osteotomies of the Distal Radius
Stockmans, Filip; Dezillie, Marleen; Vanhaecke, Jeroen
2013-01-01
Corrective osteotomies of the distal radius for symptomatic malunion are time-tested procedures that rely on accurate corrections. Patients with combined intra- and extra-articular malunions present a challenging deformity. Virtual planning and patient-specific instruments (PSIs) to transfer the planning into the operating room have been used both to simplify the surgery and to make it more accurate. This report focuses on the clinically achieved accuracy in four patients treated between 2008...
Anomaly Corrected Heterotic Horizons
Fontanella, A; Papadopoulos, G
2016-01-01
We consider supersymmetric near-horizon geometries in heterotic supergravity up to two loop order in sigma model perturbation theory. We identify the conditions for the horizons to admit enhancement of supersymmetry. We show that solutions which undergo supersymmetry enhancement exhibit an sl(2,R) symmetry, and we describe the geometry of their horizon sections. We also prove a modified Lichnerowicz type theorem, incorporating α' corrections, which relates Killing spinors to zero modes of near-horizon Dirac operators. Furthermore, we demonstrate that there are no AdS2 solutions in heterotic supergravity up to second order in α' for which the fields are smooth and the internal space is smooth and compact without boundary. We investigate a class of nearly supersymmetric horizons, for which the gravitino Killing spinor equation is satisfied on the spatial cross sections but not the dilatino one, and present a description of their geometry.
Full text: In order to obtain meaningful analytical information from an X-Ray Fluorescence spectrometer, it is necessary to correlate measured intensity values with sample concentrations. The ability to do this to a desired level of precision depends on taking care of a number of variables which influence measured intensity values. These variables include: the sample, which needs to be homogeneous, flat and critically thick to the analyte lines used for measurement; the spectrometer, which needs to perform any mechanical movements in a highly reproducible manner; the time taken to measure an analyte line, and the software, which needs to take care of detector dead-time, the contribution of background to the measured signal, the effects of line overlaps and matrix (absorption and enhancement) effects. This presentation will address commonly used correction procedures for matrix effects and their relative success in achieving their objective. Copyright (2002) Australian X-ray Analytical Association Inc
Temperature Corrected Bootstrap Algorithm
Comiso, Joey C.; Zwally, H. Jay
1997-01-01
A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation to the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in the area where ice temperature is expected to vary considerably such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
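The emissivity mixing and brightness-temperature conversion steps described above can be sketched as follows. The linear mixing formula and the simple T_B = e · T_s inversion are standard first-order approximations, and all numeric values are illustrative, not the algorithm's actual tie points:

```python
def effective_emissivity(ice_conc, e_ice, e_water):
    # Linear mixing of ice and open-water emissivities, weighted by the
    # ice concentration from the current Bootstrap retrieval.
    return ice_conc * e_ice + (1.0 - ice_conc) * e_water

def surface_temperature(t_b, emissivity):
    # Invert the simple radiometric relation T_B = e * T_s for T_s.
    return t_b / emissivity

def to_emissivity(t_b, t_s):
    # Convert a channel brightness temperature to an emissivity.
    return t_b / t_s

# Illustrative (made-up) values: the 6 GHz channel yields the effective
# surface emissivity and temperature; a 37 GHz brightness temperature is
# then converted to an emissivity for use in the concentration retrieval.
e_eff = effective_emissivity(ice_conc=0.8, e_ice=0.92, e_water=0.55)
t_s = surface_temperature(t_b=230.0, emissivity=e_eff)
e_37 = to_emissivity(t_b=210.0, t_s=t_s)
print(round(e_eff, 3), round(t_s, 1), round(e_37, 3))
```

Working in emissivity space rather than brightness-temperature space is what removes the sensitivity to physical ice temperature noted for the continental and marginal ice zones.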
EDITORIAL: Politically correct physics?
Pople Deputy Editor, Stephen
1997-03-01
If you were a caring, thinking, liberally minded person in the 1960s, you marched against the bomb, against the Vietnam war, and for civil rights. By the 1980s, your voice was raised about the destruction of the rainforests and the threat to our whole planetary environment. At the same time, you opposed discrimination against any group because of race, sex or sexual orientation. You reasoned that people who spoke or acted in a discriminatory manner should be discriminated against. In other words, you became politically correct. Despite its oft-quoted excesses, the political correctness movement sprang from well-founded concerns about injustices in our society. So, on balance, I am all for it. Or, at least, I was until it started to invade science. Biologists were the first to feel the impact. No longer could they refer to 'higher' and 'lower' orders, or 'primitive' forms of life. To the list of undesirable 'isms' - sexism, racism, ageism - had been added a new one: speciesism. Chemists remained immune to the PC invasion, but what else could you expect from a group of people so steeped in tradition that their principal unit, the mole, requires the use of the thoroughly unreconstructed gram? Now it is the turn of the physicists. This time, the offenders are not those who talk disparagingly about other people or animals, but those who refer to 'forms of energy' and 'heat'. Political correctness has evolved into physical correctness. I was always rather fond of the various forms of energy: potential, kinetic, chemical, electrical, sound and so on. My students might merge heat and internal energy into a single, fuzzy concept loosely associated with moving molecules. They might be a little confused at a whole new crop of energies - hydroelectric, solar, wind, geothermal and tidal - but they could tell me what devices turned chemical energy into electrical energy, even if they couldn't quite appreciate that turning tidal energy into geothermal energy wasn't part of the
Accurate FRET Measurements within Single Diffusing Biomolecules Using Alternating-Laser Excitation
Lee, Nam Ki; Kapanidis, Achillefs N.; Wang, You; Michalet, Xavier; Mukhopadhyay, Jayanta; Ebright, Richard H.; Weiss, Shimon
2005-01-01
Fluorescence resonance energy transfer (FRET) between a donor (D) and an acceptor (A) at the single-molecule level currently provides qualitative information about distance, and quantitative information about kinetics of distance changes. Here, we used the sorting ability of confocal microscopy equipped with alternating-laser excitation (ALEX) to measure accurate FRET efficiencies and distances from single molecules, using corrections that account for cross-talk terms that contaminate the FRE...
Accurate Telescope Mount Positioning with MEMS Accelerometers
Mészáros, László; Pál, András; Csépány, Gergely
2014-01-01
This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate and stateless positioning of telescope mounts. This provides a method completely independent of other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the sub-arcminute range, which is well below the field of view of conventional imaging telescope systems. Here we present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented as part of a telescope control system.
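As a minimal illustration of the sensing principle, a static mount's tilt can be recovered from the accelerometer's reading of the gravity vector. The sketch below assumes a simple axis convention and noise-free readings; it is not the paper's calibration procedure:

```python
import math

def tilt_from_accel(ax, ay, az):
    """Pitch and roll (radians) of a static sensor, inferred from a 3-axis
    accelerometer reading of gravity (any consistent units).
    Axis convention (x forward, y right, z down-ish) is an assumption."""
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll
```

Since 1 arcminute is about 2.9 × 10⁻⁴ rad, reaching the sub-arcminute range in practice requires averaging many samples and calibrating sensor bias and scale on top of this raw formula.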
Accurate estimation of indoor travel times
Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan
2014-01-01
We present the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. During operation the method learns travel routes, travel times and their respective likelihoods, both for routes traveled and for sub-routes thereof. InTraTime allows temporal and other query parameters to be specified, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include a minimal-effort setup and self-improving operation due to unsupervised learning, as it is able to adapt implicitly to factors influencing indoor travel times such as elevators, revolving doors or changes in building layout. We evaluate and compare the proposed InTraTime method to indoor adaptations...
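The learning scheme described above can be sketched as a model that buckets observed travel times by route and time-of-day and credits sub-routes a share of each observation. The class name, the hour-of-day bucketing, and the proportional per-leg split are illustrative assumptions, not the published algorithm:

```python
from collections import defaultdict
from statistics import median

class TravelTimeModel:
    """Minimal InTraTime-style sketch: learn travel times per
    (route, hour-of-day) bucket, including sub-route legs."""

    def __init__(self):
        self._obs = defaultdict(list)

    def observe(self, route, hour, seconds):
        self._obs[(tuple(route), hour)].append(seconds)
        # Also credit each leg a proportional share of the travel time,
        # so unseen routes can be estimated from their legs.
        legs = list(zip(route, route[1:]))
        for leg in legs:
            self._obs[(leg, hour)].append(seconds / len(legs))

    def estimate(self, route, hour):
        key = (tuple(route), hour)
        if self._obs[key]:
            return median(self._obs[key])
        # Fallback for unseen routes: sum per-leg medians
        # (raises if a leg has never been observed).
        return sum(median(self._obs[(leg, hour)])
                   for leg in zip(route, route[1:]))
```

A real system would also weight observations by recency and traveler identity, as the abstract describes.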
Accurate sky background modelling for ESO facilities
Ground-based measurements such as high-resolution spectroscopy are heavily influenced by several physical processes. Among others, line absorption/emission, airglow from OH molecules, and scattering of photons within the Earth's atmosphere make observations, in particular from facilities like the future European Extremely Large Telescope, a challenge. Additionally, emission from unresolved extrasolar objects, the zodiacal light, the Moon, and even thermal emission from the telescope and the instrument contribute significantly to the broad-band background over a wide wavelength range. In our talk we review these influences and give an overview of how they can be accurately modeled to increase the overall precision of spectroscopic and imaging measurements. (author)
Toward Accurate and Quantitative Comparative Metagenomics.
Nayfach, Stephen; Pollard, Katherine S
2016-08-25
Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
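As one example of an abundance statistic that estimates a meaningful community parameter, read counts can be normalized by genome length before computing relative abundances, so values are comparable across taxa of different genome sizes. This is a generic sketch, not the specific estimator advocated in the paper:

```python
def relative_abundance(read_counts, genome_lengths):
    """Length-normalized relative abundance per taxon (sums to 1).
    Dividing counts by genome length estimates the relative number of
    genome copies rather than the relative number of reads."""
    density = {t: read_counts[t] / genome_lengths[t] for t in read_counts}
    total = sum(density.values())
    return {t: d / total for t, d in density.items()}
```

Without the length normalization, a taxon with a genome twice as long would appear twice as abundant at equal cell counts.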
Accurate valence band width of diamond
An accurate width is determined for the valence band of diamond by imaging photoelectron momentum distributions for a variety of initial- and final-state energies. The experimental result of 23.0±0.2 eV agrees well with first-principles quasiparticle calculations (23.0 and 22.88 eV) and significantly exceeds the local-density-functional width, 21.5±0.2 eV. This difference quantifies effects of creating an excited hole state (with associated many-body effects) in a band measurement vs studying ground-state properties treated by local-density-functional calculations. copyright 1997 The American Physical Society
Accurate Weather Forecasting for Radio Astronomy
Maddalena, Ronald J.
2010-01-01
The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MPM model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
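The radiative-transfer step described above can be sketched for a single frequency: each layer emits according to its temperature and opacity and is attenuated by the opacity between it and the observer. The plane-parallel geometry, ground-up layer ordering, and Rayleigh-Jeans emission form are assumptions of this sketch, not the production code:

```python
import math

def sky_brightness(layer_temps_K, layer_opacities):
    """Single-frequency radiative transfer through atmospheric layers,
    ordered from the ground upward. Returns (total zenith opacity in
    nepers, sky brightness temperature in K): each layer contributes
    T_i * (1 - exp(-tau_i)), attenuated by exp(-tau) of the layers
    between it and the observer."""
    tau_below = 0.0
    t_sky = 0.0
    for T, tau in zip(layer_temps_K, layer_opacities):
        t_sky += T * (1.0 - math.exp(-tau)) * math.exp(-tau_below)
        tau_below += tau
    return tau_below, t_sky
```

The returned brightness temperature is the atmospheric term that adds to Tsys, which is what the forecast comparisons at 22.2 and 44 GHz check.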
Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi
2015-02-01
With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. Previous strategies provide accurate information to travelers, yet our simulation results show that accurate information brings negative effects, especially in the delayed case: travelers prefer the best-condition route under accurate information, while delayed information reflects past rather than current traffic conditions. Travelers then make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between two routes is less than BR, the routes have equal probability of being chosen. Bounded rationality helps improve efficiency in terms of capacity, oscillation, and the gap from the system equilibrium.
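The boundedly rational choice rule itself is simple to state in code. The function below implements only the threshold rule for two routes, not the paper's full traffic simulation; the signature is an illustrative assumption:

```python
import random

def choose_route(travel_times, br_threshold, rng=random):
    """Boundedly rational two-route choice: if the reported travel-time
    difference is within the threshold BR, pick either route with equal
    probability; otherwise take the (reportedly) faster one."""
    t0, t1 = travel_times
    if abs(t0 - t1) <= br_threshold:
        return rng.choice([0, 1])
    return 0 if t0 < t1 else 1
```

Under delayed information, the randomization inside the BR band is what prevents all travelers from herding onto the route that merely *was* faster.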
Airborne experiment results for spaceborne atmospheric synchronous correction system
Cui, Wenyu; Yi, Weining; Du, Lili; Liu, Xiao
2015-10-01
The image quality of optical remote sensing satellites is affected by the atmosphere, so the images need to be corrected. Owing to the spatial and temporal variability of atmospheric conditions, correction using synchronously measured atmospheric parameters can effectively improve remote sensing image quality. For this reason, a small, lightweight spaceborne instrument, the atmospheric synchronous correction device (airborne prototype), was developed by AIOFM of CAS (Anhui Institute of Optics and Fine Mechanics, Chinese Academy of Sciences). With this instrument, whose detection mode is time-synchronized and spatially covering, atmospheric parameters consistent in time and space with the images to be corrected can be obtained, and the correction is then achieved with a radiative transfer model. To verify the technical process and treatment effect of the spaceborne atmospheric correction system, the first airborne experiment was designed and completed. The experiment was implemented by the "satellite-airborne-ground" synchronous measuring method. A high-resolution (0.4 m) camera and the atmospheric correction device were mounted on the aircraft, which photographed the ground simultaneously with the satellite observing overhead. Aerosol optical depth (AOD) and columnar water vapor (CWV) in the imaged area were also acquired and used for the atmospheric correction of the satellite and aerial images. Experimental results show that correcting aviation and satellite images with the AOD and CWV retrieved from the device's data improves image definition and contrast by more than 30% and more than doubles the MTF, which means atmospheric correction of satellite images using data from the spaceborne atmospheric synchronous correction device is accurate and effective.
Study of accurate volume measurement system for plutonium nitrate solution
Hosoma, T. [Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan), Tokai Works]
1998-12-01
It is important for effective safeguarding of nuclear materials to establish a technique for accurate volume measurement of plutonium nitrate solution in an accountancy tank. The volume of the solution can be estimated from two differential pressures between three dip tubes, through which air is purged by a compressor. One of the differential pressures corresponds to the density of the solution, and the other corresponds to the surface level of the solution in the tank. The measurement of the differential pressure contains many sources of uncertainty, such as the precision of the pressure transducer, fluctuation of back-pressure, generation of bubbles at the tips of the dip tubes, non-uniformity of temperature and density of the solution, pressure drop in the dip tube, and so on. The various excess pressures arising in the volume measurement are discussed and corrected by a reasonable method. A high-precision differential pressure measurement system is developed with a quartz-oscillation-type transducer which converts a differential pressure to a digital signal. The developed system is used for inspection by the government and the IAEA. (M. Suetake)
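The two differential pressures map to density and level through hydrostatics: the pressure difference between two submerged dip tubes separated by a known height h gives ρ = ΔP/(g·h), and the level above the lower tube follows from ΔP_level = ρ·g·L. A minimal sketch that ignores all of the excess-pressure corrections discussed above:

```python
G = 9.80665  # standard gravity, m/s^2

def solution_density(dp_density_Pa, tube_separation_m):
    """Density (kg/m^3) from the differential pressure across two dip
    tubes a known vertical distance apart: rho = dP / (g * h)."""
    return dp_density_Pa / (G * tube_separation_m)

def solution_level(dp_level_Pa, density_kg_m3):
    """Liquid level (m) above the lower dip tube: L = dP / (rho * g)."""
    return dp_level_Pa / (density_kg_m3 * G)
```

The level would then be converted to volume via the tank's calibration curve; the paper's contribution is precisely the correction of the many error terms this sketch omits.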
An accurate δf method for neoclassical transport calculation
Wang, W.X.; Nakajima, N.; Murakami, S.; Okamoto, M. [National Inst. for Fusion Science, Toki, Gifu (Japan)
1999-03-01
A δf method, solving drift kinetic equation, for neoclassical transport calculation is presented in detail. It is demonstrated that valid results essentially rely on the correct evaluation of marker density g in weight calculation. A general and accurate weighting scheme is developed without using some assumed g in weight equation for advancing particle weights, unlike the previous schemes. This scheme employs an additional weight function to directly solve g from its kinetic equation using the idea of δf method. Therefore the severe constraint that the real marker distribution must be consistent with the initially assumed g during a simulation is relaxed. An improved like-particle collision scheme is presented. By performing compensation for momentum, energy and particle losses arising from numerical errors, the conservations of all the three quantities are greatly improved during collisions. Ion neoclassical transport due to self-collisions is examined under finite banana case as well as zero banana limit. A solution with zero particle and zero energy flux (in case of no temperature gradient) over whole poloidal section is obtained. With the improvement in both like-particle collision scheme and weighting scheme, the δf simulation shows a significantly upgraded performance for neoclassical transport study. (author)
A Distributed Weighted Voting Approach for Accurate Eye Center Estimation
Gagandeep Singh
2013-05-01
This paper proposes a novel approach for accurate estimation of the eye center in face images. A distributed voting based approach, in which every pixel votes, is adopted for potential eye center candidates. The votes are distributed over a subset of pixels which lie in the direction opposite to the gradient direction, and the weightage of votes is distributed according to a novel mechanism. First, the image is normalized to eliminate illumination variations and its edge map is generated using the Canny edge detector. Distributed voting is applied on the edge image to generate different eye center candidates. Morphological closing and local maxima search are used to reduce the number of candidates. A classifier based on spatial and intensity information is used to choose the correct candidates for the locations of the eye center. The proposed approach was tested on the BioID face database and resulted in a better iris detection rate than the state-of-the-art. The proposed approach is robust against illumination variation, small pose variations, presence of eyeglasses and partial occlusion of the eyes. Defence Science Journal, 2013, 63(3), pp. 292-297, DOI: http://dx.doi.org/10.14429/dsj.63.2763
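The core voting idea, stripped of the paper's weighting mechanism, morphological filtering, and classifier stages, can be sketched as follows: every strong-gradient pixel casts votes at steps along the direction opposite to its gradient (toward the dark iris interior), and the accumulator maximum estimates the center. Parameter names and values are illustrative assumptions:

```python
import numpy as np

def eye_center_votes(gray, grad_thresh=0.1, max_r=15):
    """Distributed-voting sketch: each pixel with gradient magnitude
    above grad_thresh votes at integer steps 1..max_r along the
    direction opposite to its gradient. Returns (row, col) of the
    accumulator maximum."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    acc = np.zeros_like(mag)
    h, w = gray.shape
    ys, xs = np.nonzero(mag > grad_thresh)
    for y, x in zip(ys, xs):
        ux, uy = -gx[y, x] / mag[y, x], -gy[y, x] / mag[y, x]
        for r in range(1, max_r + 1):
            vy, vx = int(round(y + r * uy)), int(round(x + r * ux))
            if 0 <= vy < h and 0 <= vx < w:
                acc[vy, vx] += 1.0
    return np.unravel_index(np.argmax(acc), acc.shape)
```

On a dark disk against a bright background the inward rays converge, so the accumulator peaks near the disk center; the paper's distance-dependent weighting sharpens exactly this peak.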
Accurate measurement of RF exposure from emerging wireless communication systems
Letertre, Thierry; Monebhurrun, Vikass; Toffano, Zeno
2013-04-01
Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper this problematic is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are submitted to signals with variable duty cycles (DC) and crest factors (CF) either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation but with the same root-mean-square (RMS) power. The two probes do not provide accurate enough results for deterministic signals such as Worldwide Interoperability for Microwave Access (WIMAX) or Long Term Evolution (LTE) as well as for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope for the emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known. In this case the measurement errors are shown to be systematic and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.
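The systematic nature of the error is what makes a correction factor possible: when the signal statistics are known, the true RMS power can be computed from the envelope and a scale factor calibrated against the probe reading. A minimal sketch using a duty-cycled constant envelope in arbitrary units (not the authors' measurement protocol):

```python
def rms_power(samples):
    """Mean-square (RMS) power of a sampled signal envelope."""
    return sum(s * s for s in samples) / len(samples)

def correction_factor(true_rms_power, probe_reading):
    """Calibration factor mapping a probe's systematically biased
    reading onto the true RMS power of a known signal class."""
    return true_rms_power / probe_reading
```

For a burst of amplitude A with duty cycle DC, the RMS power is A²·DC (e.g. A = 2, DC = 0.25 gives 1.0); a probe that mis-weights the bursts reads something else, and the ratio above is the per-signal-class correction the abstract refers to.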
Tian, Jianxiang; Mulero, A
2016-01-01
Despite the fact that more than 30 analytical expressions for the equation of state of hard-disk fluids have been proposed in the literature, none of them is capable of reproducing the currently accepted numerical or estimated values for the first eighteen virial coefficients. Using the asymptotic expansion method, extended to the first ten virial coefficients for hard-disk fluids, fifty-seven new expressions for the equation of state have been studied. Of these, a new equation of state is selected which accurately reproduces all of the first eighteen virial coefficients. Comparisons of the compressibility factor with computer simulations show that this new equation is as accurate as other similar expressions with the same number of parameters. Finally, the location of the poles of the 57 new equations shows that there are some particular configurations which could give both the accurate virial coefficients and the correct closest packing fraction in the future when higher virial coefficients than the t...
Addition of noise by scatter correction methods in PVI
Effective scatter correction techniques are required to account for errors due to high scatter fraction seen in positron volume imaging (PVI). To be effective, the correction techniques must be accurate and practical, but they also must not add excessively to the statistical noise in the image. The authors have investigated the noise added by three correction methods: a convolution/subtraction method; a method that interpolates the scatter from the events outside the object; and a dual energy window method with and without smoothing of the scatter estimate. The methods were applied to data generated by Monte Carlo simulation to determine their effect on the variance of the corrected projections. The convolution and interpolation methods did not add significantly to the variance. The dual energy window subtraction method without smoothing increased the variance by a factor of more than twelve, but this factor was improved to 1.2 by smoothing the scatter estimate
The topside segment of the International Reference Ionosphere (IRI) electron density model (and also of the Bent model) is based on the limited amount of topside data available at the time (∼40,000 Alouette 1 profiles). Being established from such a small database it is therefore not surprising that these models have well-known shortcomings, for example, at high solar activities. Meanwhile a large data base of close to 200,000 topside profiles from Alouette 1, 2, and ISIS 1, 2 has become available online. A program of automated scaling and inversion of a large volume of digitized ionograms adds continuously to this data pool. We have used the currently available ISIS/Alouette topside profiles to evaluate the IRI topside model and to investigate ways of improving the model. The IRI model performs generally well at middle latitudes and shows discrepancies at low and high latitudes and these discrepancies are largest during high solar activity. In the upper topside IRI consistently overestimates the measurements. Based on averages of the data-model ratios we have established correction factors for the IRI model. These factors vary with altitude, modified dip latitude, and local time. (author)
Real-time lens distortion correction: speed, accuracy and efficiency
Bax, Michael R.; Shahidi, Ramin
2014-11-01
Optical lens systems suffer from nonlinear geometric distortion. Optical imaging applications such as image-enhanced endoscopy and image-based bronchoscope tracking require correction of this distortion for accurate localization, tracking, registration, and measurement of image features. Real-time capability is desirable for interactive systems and live video. The use of a texture-mapping graphics accelerator, which is standard hardware on current motherboard chipsets and add-in video graphics cards, to perform distortion correction is proposed. Mesh generation for image tessellation, an error analysis, and performance results are presented. It is shown that distortion correction using commodity graphics hardware is substantially faster than using the main processor and can be performed at video frame rates (faster than 30 frames per second), and that the polar-based method of mesh generation proposed here is more accurate than a conventional grid-based approach. Using graphics hardware to perform distortion correction is not only fast and accurate but also efficient, as it frees the main processor for other tasks, which is an important issue in some real-time applications.
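The remapping underlying such correction can be sketched on the CPU (the GPU version tessellates the image into a mesh and lets texture mapping interpolate between mesh vertices). The one-coefficient radial model and nearest-neighbour sampling below are simplifying assumptions, not the paper's polar mesh method:

```python
import numpy as np

def undistort(image, k1, cx, cy):
    """Correct first-order radial lens distortion by inverse mapping:
    for each output (undistorted) pixel, sample the input image at the
    distorted location given by r_d = r_u * (1 + k1 * r_u^2), where
    radii are measured from the distortion center (cx, cy)."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    dx, dy = xs - cx, ys - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2
    src_x = np.clip(np.round(cx + dx * scale), 0, w - 1).astype(int)
    src_y = np.clip(np.round(cy + dy * scale), 0, h - 1).astype(int)
    return image[src_y, src_x]
```

With k1 = 0 the mapping is the identity; a barrel-distorted endoscope image uses k1 > 0, and bilinear interpolation (which the texture hardware provides for free) would replace the nearest-neighbour lookup.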
Accurate, Meshless Methods for Magneto-Hydrodynamics
Hopkins, Philip F
2016-01-01
Recently, we developed a pair of meshless finite-volume Lagrangian methods for hydrodynamics: the 'meshless finite mass' (MFM) and 'meshless finite volume' (MFV) methods. These capture advantages of both smoothed-particle hydrodynamics (SPH) and adaptive mesh-refinement (AMR) schemes. Here, we extend these to include ideal magneto-hydrodynamics (MHD). The MHD equations are second-order consistent and conservative. We augment these with a divergence-cleaning scheme, which maintains div*B~0 to high accuracy. We implement these in the code GIZMO, together with a state-of-the-art implementation of SPH MHD. In every one of a large suite of test problems, the new methods are competitive with moving-mesh and AMR schemes using constrained transport (CT) to ensure div*B=0. They are able to correctly capture the growth and structure of the magneto-rotational instability (MRI), MHD turbulence, and the launching of magnetic jets, in some cases converging more rapidly than AMR codes. Compared to SPH, the MFM/MFV methods e...
Fast and Provably Accurate Bilateral Filtering.
Chaudhury, Kunal N; Dabhade, Swapnil D
2016-06-01
The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S . The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
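The flavor of such an approximation can be sketched by expanding the Gaussian range kernel into N+1 shiftable cosine terms (via cosᴺ ≈ Gaussian), each requiring one spatial filtering; with a box spatial filter computed from an integral image, the cost per pixel is independent of the window size S. This is a sketch in the spirit of the paper, not the authors' exact algorithm or coefficients:

```python
import numpy as np
from math import comb, sqrt

def box_mean(x, win):
    """Mean over a win x win window via an integral image: O(1)/pixel."""
    p = win // 2
    xp = np.pad(x, p, mode='edge')
    c = np.pad(xp.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = x.shape
    s = (c[win:win + h, win:win + w] - c[:h, win:win + w]
         - c[win:win + h, :w] + c[:h, :w])
    return s / (win * win)

def bilateral_fast(img, win, sigma_r, N=30):
    """Bilateral filter with a box spatial kernel, approximating the
    Gaussian range kernel exp(-s^2 / 2 sigma_r^2) by cos^N(gamma * s),
    expanded into complex exponentials so each of the N+1 terms needs
    only one spatial filtering of a (complex) image."""
    t = img - 0.5 * (img.max() + img.min())  # center intensities
    gamma = 1.0 / (sigma_r * sqrt(N))
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    for n in range(N + 1):
        a = (2 * n - N) * gamma
        c = comb(N, n) / 2.0 ** N          # binomial coefficient of cos^N
        phase = np.exp(1j * a * t)          # e^{i a t_p}
        g = np.exp(-1j * a * t)             # e^{-i a t_q}, filtered below
        num += c * (phase * box_mean(g * img, win)).real
        den += c * (phase * box_mean(g, win)).real
    return num / den
```

The order N controls accuracy exactly as the abstract describes: larger N tightens the cosᴺ fit to the Gaussian, at the price of more (still S-independent) filterings.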
Accurate fission data for nuclear safety
Solders, A; Jokinen, A; Kolhinen, V S; Lantz, M; Mattera, A; Penttila, H; Pomp, S; Rakopoulos, V; Rinta-Antila, S
2013-01-01
The Accurate fission data for nuclear safety (AlFONS) project aims at high precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high current light ion cyclotron at the University of Jyvaskyla. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies of 1-30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons...
Schmidt, Tobias; Makmal, Adi; Kronik, Leeor; Kümmel, Stephan
2014-01-01
We present and test a new approximation for the exchange-correlation (xc) energy of Kohn-Sham density functional theory. It combines exact exchange with a compatible non-local correlation functional. The functional is by construction free of one-electron self-interaction, respects constraints derived from uniform coordinate scaling, and has the correct asymptotic behavior of the xc energy density. It contains one parameter that is not determined ab initio. We investigate whether it is possible to construct a functional that yields accurate binding energies and affords other advantages, specifically Kohn-Sham eigenvalues that reliably reflect ionization potentials. Tests for a set of atoms and small molecules show that within our local-hybrid form accurate binding energies can be achieved by proper optimization of the free parameter in our functional, along with an improvement in dissociation energy curves and in Kohn-Sham eigenvalues. However, the correspondence of the latter to experimental ionization potent...
Food systems in correctional settings
Smoyer, Amy; Kjær Minke, Linda
Food is a central component of life in correctional institutions and plays a critical role in the physical and mental health of incarcerated people and the construction of prisoners' identities and relationships. An understanding of the role of food in correctional settings and the effective management of food systems may improve outcomes for incarcerated people and help correctional administrators to maximize their health and safety. This report summarizes existing research on food systems in correctional settings and provides examples of food programmes in prison and remand facilities, including a case study of food-related innovation in the Danish correctional system. It offers specific conclusions for policy-makers, administrators of correctional institutions and prison-food-service professionals, and makes proposals for future research.
Health care in correctional facilities.
Thorburn, K M
1995-01-01
More than 1.3 million adults are in correctional facilities, including jails and federal and state prisons, in the United States. Health care of the inmates is an integral component of correctional management. Health services in correctional facilities underwent dramatic improvements during the 1970s. Public policy trends beginning in the early 1980s substantially affected the demographics and health status of jail and prison populations and threatened earlier gains in the health care of inma...
Corrective Feedback and Teacher Development
Ellis, Rod
2009-01-01
This article examines a number of controversies relating to how corrective feedback (CF) has been viewed in SLA and language pedagogy. These controversies address (1) whether CF contributes to L2 acquisition, (2) which errors should be corrected, (3) who should do the correcting (the teacher or the learner him/herself), (4) which type of CF is the most effective, and (5) what is the best timing for CF (immediate or delayed). In discussing these controversies, both the pedagogic and SLA litera...
Comparison of Topographic Correction Methods
Rudolf Richter
2009-07-01
A comparison of topographic correction methods is conducted for Landsat-5 TM, Landsat-7 ETM+, and SPOT-5 imagery from different geographic areas and seasons. Three successful and well-known methods are compared: the semi-empirical C correction, the Gamma correction depending on the incidence and exitance angles, and a modified Minnaert approach. In the majority of cases the modified Minnaert approach performed best, but no method is superior in all cases.
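Of the three methods, the semi-empirical C correction is the simplest to sketch: regress radiance against the local illumination cos(i), set c to the intercept-to-slope ratio, and rescale each pixel toward the flat-terrain illumination cos(sza). A minimal single-band sketch assuming precomputed illumination angles:

```python
import numpy as np

def c_correction(radiance, cos_i, cos_sza):
    """Semi-empirical C correction for topographic effects.
    cos_i: per-pixel cosine of the local illumination angle (from the
    DEM and sun position); cos_sza: cosine of the solar zenith angle.
    L_corr = L * (cos_sza + c) / (cos_i + c), with c = intercept/slope
    of the band-wise linear regression L = m * cos_i + b."""
    slope, intercept = np.polyfit(cos_i.ravel(), radiance.ravel(), 1)
    c = intercept / slope
    return radiance * (cos_sza + c) / (cos_i + c)
```

For radiance that is exactly linear in cos(i), the correction removes the illumination dependence entirely, mapping every pixel to the value it would have on flat terrain; the c term is what keeps faintly lit slopes from being over-amplified, which is the known failure mode of the plain cosine correction.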
Cool Cluster Correctly Correlated
Sergey Aleksandrovich Varganov
2005-12-17
Atomic clusters are unique objects, which occupy an intermediate position between atoms and condensed matter systems. For a long time it was thought that physical and chemical properties of atomic clusters change monotonically with increasing size of the cluster, from a single atom to a condensed matter system. However, recently it has become clear that many properties of atomic clusters can change drastically with the size of the clusters. Because physical and chemical properties of clusters can be adjusted simply by changing the cluster's size, different applications of atomic clusters have been proposed. One example is the catalytic activity of clusters of specific sizes in different chemical reactions. Another example is a potential application of atomic clusters in microelectronics, where their band gaps can be adjusted by simply changing cluster sizes. In recent years significant advances in experimental techniques have allowed one to synthesize and study atomic clusters of specified sizes. However, the interpretation of the results is often difficult. Theoretical methods are frequently used to help in the interpretation of complex experimental data. Most of the theoretical approaches have been based on empirical or semiempirical methods. These methods allow one to study large and small clusters using the same approximations. However, since empirical and semiempirical methods rely on simple models with many parameters, it is often difficult to estimate the quantitative and even qualitative accuracy of the results. On the other hand, because of significant advances in quantum chemical methods and computer capabilities, it is now possible to do high quality ab initio calculations not only on systems of a few atoms but on clusters of practical interest as well. In addition to accurate results for specific clusters, such methods can be used for benchmarking of different empirical and semiempirical approaches. The atomic clusters studied in this work contain from a few atoms
QCD corrections to triboson production
Lazopoulos, Achilleas; Melnikov, Kirill; Petriello, Frank
2007-07-01
We present a computation of the next-to-leading order QCD corrections to the production of three Z bosons at the Large Hadron Collider. We calculate these corrections using a completely numerical method that combines sector decomposition to extract infrared singularities with contour deformation of the Feynman parameter integrals to avoid internal loop thresholds. The NLO QCD corrections to pp→ZZZ are approximately 50% and are badly underestimated by the leading order scale dependence. However, the kinematic dependence of the corrections is minimal in phase space regions accessible at leading order.
Entropic Corrections to Coulomb's Law
Hendi, S. H.; Sheykhi, A.
2012-04-01
Two well-known quantum corrections to the area law have been introduced in the literature: logarithmic and power-law corrections. Logarithmic corrections arise in loop quantum gravity from thermal equilibrium fluctuations and quantum fluctuations, while the power-law correction appears when dealing with the entanglement of quantum fields inside and outside the horizon. Inspired by Verlinde's argument on the entropic force, and assuming the quantum-corrected relation for the entropy, we propose in this note an entropic origin for Coulomb's law. We also investigate the Uehling potential, a radiative correction to the Coulomb potential at 1-loop order, and show that for certain distances the entropic corrections to Coulomb's law are compatible with the vacuum-polarization correction in QED. We thus derive the modified Coulomb's law as well as the entropy-corrected Poisson's equation governing the evolution of the scalar potential ϕ. Our study further supports the unification of gravity and electromagnetic interactions based on the holographic principle.
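Schematically, the two quantum-corrected entropy-area relations referred to in this record are often quoted in the forms below (the coefficients α, K and exponent ν are model dependent; this is an illustration of the generic structure, not the paper's exact notation):

```latex
S = \frac{A}{4\ell_p^{2}} + \alpha \ln\frac{A}{4\ell_p^{2}} + \cdots
\qquad \text{(logarithmic; loop quantum gravity)}
\\[4pt]
S = \frac{A}{4\ell_p^{2}}\left[1 - K\,A^{\,1-\nu/2}\right]
\qquad \text{(power law; entanglement of quantum fields)}
```

Inserting such corrected entropies into Verlinde's entropic-force argument is what generates the modified Coulomb's law and the entropy-corrected Poisson equation the abstract describes.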
Towards a more accurate concept of fuels
Full text: The introduction of LEU in Atucha and the approval of CARA represent an advance for the fuels of the Argentine power stations, which stimulates and indicates a direction to follow. In the first case, the use of enriched U fuel relaxes an important restriction related to neutron economy, meaning that it is possible to design less penalized fuels using more Zry. The second case allows a decrease in the linear power of the rods, enabling better performance of the fuel in normal and also in accident conditions. In this work we wish to emphasize this last point, seeking a design in which the surface power of the rod is diminished. Hence, in accident conditions owing to loss of coolant, the cladding tube will not reach temperatures that produce oxidation (with the corresponding H2 formation), plasticity sufficient to form blisters that would obstruct reflooding, or hydration that would produce embrittlement and rupture of the cladding tube, with the corresponding dispersion of radioactive material. This work aims to find rod designs with quasi-rectangular geometry that lower the surface power of the rods, in order to obtain a lower central temperature of the rod. Thus, critical temperatures will not be reached in case of loss of coolant. This design is becoming a reality after PPFAE's efforts in the fabrication of cladding tubes with different circumferential values, rectangular in particular. This geometry, with an appropriate pellet design, can minimize the pellet-cladding interaction and, through proper width selection, non-rectified pellets could be used. This would mean an important economy in pellet production, as well as an advance toward the fabrication of fuels in glove boxes and hot cells in the future. The sequence to determine critical geometrical parameters is described and some rod configurations are explored
Accurate orbit propagation with planetary close encounters
Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca
2015-08-01
We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging both from the point of view of the dynamical stability of the formulation and the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size will also be changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep, formulation and initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee the numerical stability during the propagation; 3) a new definition of the region of influence in the phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and of the propagator represented by a variable step-size and order multistep method combined with Cowell's formulation (i.e. direct integration of position and velocity in either the physical or a fictitious time).
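The classical criterion for switching the primary body of attraction is the Laplace sphere of influence; the paper proposes a new phase-space definition, which the following minimal sketch does not reproduce (values are illustrative):

```python
def sphere_of_influence(a_km, m_body, m_primary):
    """Laplace sphere-of-influence radius: r_SOI ~ a * (m/M)**(2/5).

    a_km: semi-major axis of the body's orbit about the primary, in km.
    """
    return a_km * (m_body / m_primary) ** 0.4

# Earth about the Sun (illustrative masses in kg): r_SOI is ~0.9 million km
r_soi_earth = sphere_of_influence(1.496e8, 5.972e24, 1.989e30)
```

Inside this radius the planet would be taken as the new primary and the propagation restarted, as described in point 1) of the abstract.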
Accurate paleointensities - the multi-method approach
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic time) have seen significant improvements, and various alternative techniques have been proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al., 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed - the pseudo-Thellier protocol - which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
Towards Accurate Application Characterization for Exascale (APEX)
Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-09-01
Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.
Accurate hydrocarbon estimates attained with radioactive isotope
To make accurate economic evaluations of new discoveries, an oil company needs to know how much gas and oil a reservoir contains. The porous rocks of these reservoirs are not completely filled with gas or oil, but contain a mixture of gas, oil and water. It is extremely important to know what volume percentage of this water--called connate water--is contained in the reservoir rock. The percentage of connate water can be calculated from electrical resistivity measurements made downhole. The accuracy of this method can be improved if a pure sample of connate water can be analyzed or if the chemistry of the water can be determined by conventional logging methods. Because of the similarity of the mud filtrate--the water in a water-based drilling fluid--and the connate water, this is not always possible. If the oil company cannot distinguish between connate water and mud filtrate, its oil-in-place calculations could be incorrect by ten percent or more. It is clear that unless an oil company can be sure that a sample of connate water is pure, or at the very least knows exactly how much mud filtrate it contains, its assessment of the reservoir's water content--and consequently its oil or gas content--will be distorted. The oil companies have opted for the Repeat Formation Tester (RFT) method. Label the drilling fluid with small doses of tritium--a radioactive isotope of hydrogen--and it will be easy to detect and quantify in the sample
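The resistivity-to-water-saturation calculation this record alludes to is conventionally done with Archie's equation (not named in the abstract, but the standard relation); a hedged sketch with illustrative parameter values follows. Mis-estimating the connate-water resistivity Rw because of mud-filtrate contamination propagates directly into the saturation and hence the oil-in-place figure:

```python
def archie_sw(rw, rt, phi, a=1.0, m=2.0, n=2.0):
    """Water saturation from Archie's law: Sw = (a*Rw / (phi**m * Rt))**(1/n).

    rw: connate-water resistivity (ohm-m), rt: true formation resistivity
    (ohm-m), phi: porosity (fraction); a, m, n are rock-dependent constants.
    """
    return (a * rw / (phi ** m * rt)) ** (1.0 / n)

# Illustrative values: Rw = 0.05 ohm-m, Rt = 10 ohm-m, 20% porosity
sw = archie_sw(rw=0.05, rt=10.0, phi=0.20)
hydrocarbon_fraction = 1.0 - sw   # pore volume fraction holding oil/gas
```

A 10% error in Rw (the kind of bias a contaminated sample introduces) shifts Sw by roughly 5% here because of the 1/n exponent, which is consistent with the "ten percent or more" oil-in-place error the record warns about.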
Fast, accurate standardless XRF analysis with IQ+
Full text: Due to both chemical and physical effects, the most accurate XRF data are derived from calibrations set up using in-type standards, necessitating some prior knowledge of the samples being analysed. Whilst this is often the case for routine samples, particularly in production control, for completely unknown samples the identification and availability of in-type standards can be problematic. Under these circumstances standardless analysis can offer a viable solution. Successful analysis of completely unknown samples requires a complete chemical overview of the specimen together with the flexibility of a fundamental parameters (FP) algorithm to handle wide-ranging compositions. Although FP algorithms are improving all the time, most still require set-up samples to define the spectrometer response to a particular element. Whilst such materials may be referred to as standards, the emphasis in this kind of analysis is that only a single calibration point is required per element and that the standard chosen does not have to be in-type. The high sensitivities of modern XRF spectrometers, together with recent developments in detector counting electronics that possess a large dynamic range and high-speed data processing capacity, bring significant advances to fast, standardless analysis. Illustrated with a tantalite-columbite heavy-mineral concentrate grading use-case, this paper will present the philosophy behind the semi-quantitative IQ+ software and the required hardware. This combination can give a rapid scan-based overview and quantification of the sample in less than two minutes, together with the ability to define channels for specific elements of interest where higher accuracy and lower limits of quantification are required. The accuracy, precision and limitations of standardless analysis will be assessed using certified reference materials of widely differing chemical and physical composition. Copyright (2002) Australian X-ray Analytical Association Inc
PET measurements of cerebral metabolism corrected for CSF contributions
Thirty-three subjects have been studied with PET and anatomic imaging (proton-NMR and/or CT) in order to determine the effect of cerebral atrophy on calculations of metabolic rates. Subgroups of neurologic disease investigated include stroke, brain tumor, epilepsy, psychosis, and dementia. Anatomic images were digitized through a Vidicon camera and analyzed volumetrically. Relative areas for ventricles, sulci, and brain tissue were calculated. Preliminary analysis suggests that ventricular volumes as determined by NMR and CT are similar, while sulcal volumes are larger on NMR scans. Metabolic rates (18F-FDG) were calculated before and after correction for CSF spaces, with initial focus upon dementia and normal aging. Correction for atrophy led to a greater increase (%) in global metabolic rates in demented individuals (18.2 ± 5.3) compared to elderly controls (8.3 ± 3.0, p < .05). A trend towards significantly lower glucose metabolism in demented subjects before CSF correction was not seen following correction for atrophy. These data suggest that volumetric analysis of NMR images may more accurately reflect the degree of cerebral atrophy, since NMR does not suffer from beam hardening artifact due to bone-parenchyma juxtapositions. Furthermore, appropriate correction for CSF spaces should be employed if current resolution PET scanners are to accurately measure residual brain tissue metabolism in various pathological states
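The CSF correction described amounts to rescaling the measured global metabolic rate by the brain-tissue fraction of the region, since CSF contributes essentially no glucose metabolism. A hypothetical partial-volume-style sketch consistent with that description (function name and numbers are illustrative, not the study's exact procedure):

```python
def csf_corrected_cmr(cmr_measured, csf_fraction):
    """Rescale a measured metabolic rate by the tissue fraction of the ROI.

    Assumes CSF spaces contribute ~zero metabolism, so the measured value
    is diluted by the CSF volume fraction determined from NMR/CT volumetry.
    """
    return cmr_measured / (1.0 - csf_fraction)

# e.g. measured global CMRglc of 4.0 mg/100 g/min with 15% CSF in the ROI
corrected = csf_corrected_cmr(4.0, 0.15)
percent_increase = 100.0 * (corrected / 4.0 - 1.0)
```

With 15% CSF the correction raises the rate by about 17.6%, the same order as the 18.2% mean increase reported for the demented group.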
Evaluation of inhomogeneity correction algorithm in 3DCRT for the purpose of gated treatments
It has been established that tumors in the chest and abdomen, such as lung tumors, move during the course of treatment, and that it is more accurate to treat them with gated imaging and treatment. However, the increased dose per fraction delivered with the tighter margins used in this kind of treatment makes it imperative to verify that inhomogeneity corrections are applied accurately in the treatment planning system. The purpose of this work is to check the inhomogeneity corrections applied in the treatment planning system against phantom measurements and to relate them to other correction methods such as ETPR
Error analysis and correction for laser speckle photography
Song, Y.Z.; Kulenovic, R.; Groll, M. [Univ. Stuttgart (Germany). Inst. of Nuclear Technology and Energy Systems
1995-12-31
This paper deals with the error analysis of experimental data from a laser speckle photography (LSP) application which measures the temperature field of natural convection around a heated cylindrical tube. A method for error correction is proposed and presented in detail. Experimental and theoretical investigations have shown that errors in the measurements are induced by four causes. These error sources are discussed and suggestions to avoid the errors are given. Owing to the error analysis and the introduced correction methods, the temperature distribution, and hence the temperature gradient in the thermal boundary layer, can be obtained more accurately.
Correct and efficient accelerator programming
Cohen, Albert; Donaldson, Alistair F.; Huisman, Marieke; Katoen, Joost-Pieter
2013-01-01
This report documents the program and the outcomes of Dagstuhl Seminar 13142 “Correct and Efficient Accelerator Programming”. The aim of this Dagstuhl seminar was to bring together researchers from various sub-disciplines of computer science to brainstorm and discuss the theoretical foundations, design and implementation of techniques and tools for correct and efficient accelerator programming.
Santocchia, Attilio
2009-01-01
Many physics measurements in CMS will rely on the precise reconstruction of jets. Correction of the raw jet energy measured by the CMS detector will be a fundamental step for most analyses in which hadron activity is investigated. Jet correction plans in CMS have been widely studied for different conditions: at start-up, simulation tuned on test-beam data will be used; then data-driven methods will become available; and finally, simulation tuned on collision data will give us the ultimate procedure for calculating jet corrections. Jet transverse energy is corrected first for pile-up and noise offset; correction for the response of the calorimeter as a function of jet pseudorapidity relative to the barrel comes afterwards, and correction for the absolute response as a function of transverse momentum in the barrel is the final standard sub-correction applied. Other effects like flavour and parton corrections can optionally be applied to the jet $E_T$ depending on the measurement requests. In this paper w...
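The factorized chain just described (offset subtraction, then eta-relative, then pt-absolute response) multiplies out as in the following sketch; the numerical correction factors are purely illustrative, not CMS values:

```python
def correct_jet_et(raw_et, offset, c_rel, c_abs):
    """Factorized jet energy correction applied in the order described:
    subtract the pile-up/noise offset, then apply the relative response
    correction vs pseudorapidity and the absolute response vs pt."""
    et = raw_et - offset   # pile-up and noise offset (GeV)
    et *= c_rel            # response vs eta, relative to the barrel
    et *= c_abs            # absolute response vs transverse momentum
    return et

# Illustrative 50 GeV raw jet, 2 GeV offset, 5% and 10% response corrections
et = correct_jet_et(raw_et=50.0, offset=2.0, c_rel=1.05, c_abs=1.10)
```

Optional flavour- and parton-level factors would simply extend the multiplicative chain after the absolute correction.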
Fine-Tuning Corrective Feedback.
Han, ZhaoHong
2001-01-01
Explores the notion of "fine-tuning" in connection with the corrective feedback process. Describes a longitudinal case study, conducted in the context of Norwegian as a second a language, that shows how fine-tuning and lack thereof in the provision of written corrective feedback differentially affects a second language learner's restructuring of…
Relativistic corrections to stopping powers
Relativistic corrections to the nonrelativistic Bethe-Bloch formula for the stopping power of matter for charged particles are traditionally computed by considering close collisions separately from distant collisions. The close collision contribution is further divided into the Mott correction appropriate for very small impact parameters, and the Bloch correction, computed for larger values. This division of the region of close collisions leads to a very cumbersome result if one generalizes the original Bloch procedure to relativistic energies. The authors avoid the resulting poorly specified scattering angle θ₀ that divides the Mott and Bloch correction regimes by using the procedure suggested by Lindhard and applied by Golovchenko, Cox and Goland to determine the Bloch correction for relativistic velocities. 25 references, 2 figures
Shell corrections in stopping powers
Bichsel, H.
2002-05-01
One of the theories of the electronic stopping power S for fast light ions was derived by Bethe. The algorithm currently used for the calculation of S includes terms known as the mean excitation energy I, the shell correction, the Barkas correction, and the Bloch correction. These terms are described here. For the calculation of the shell corrections an atomic model is used, which is more realistic than the hydrogenic approximation used so far. A comparison is made with similar calculations in which the local plasma approximation is utilized. Close agreement with the experimental data for protons with energies from 0.3 to 10 MeV traversing Al and Si is found without the need for adjustable parameters for the shell corrections.
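The terms listed in this record assemble into the standard stopping-number expansion; schematically, in ICRU-style notation (density-effect term omitted for brevity):

```latex
-\frac{dE}{dx} = \frac{4\pi e^{4} z^{2} N Z}{m_{e} v^{2}}\,L(\beta),
\qquad L = L_{0} + z\,L_{1} + z^{2}L_{2},
\qquad L_{0} = \ln\frac{2 m_{e} c^{2}\beta^{2}\gamma^{2}}{I} - \beta^{2} - \frac{C}{Z}
```

Here $I$ is the mean excitation energy, $C/Z$ the shell correction (the term Bichsel recomputes with a more realistic atomic model), $z\,L_{1}$ the Barkas correction, and $z^{2}L_{2}$ the Bloch correction.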
Accurate calculation of ³¹P NMR chemical shifts in polyoxometalates.
Pascual-Borràs, Magda; López, Xavier; Poblet, Josep M
2015-04-14
We search for the best density functional theory strategy for the determination of ³¹P nuclear magnetic resonance (NMR) chemical shifts, δ(³¹P), in polyoxometalates. Among the variables governing the quality of the quantum modelling, we tackle herein the influence of the functional and the basis set. The spin-orbit and solvent effects were routinely included. To do so we analysed the family of structures α-[P2W18-xMxO62]n- with M = Mo(VI), V(V) or Nb(V); [P2W17O62(M'R)]n- with M' = Sn(IV), Ge(IV) and Ru(II); and [PW12-xMxO40]n- with M = Pd(IV), Nb(V) and Ti(IV). The main results suggest that, to date, the best procedure for the accurate calculation of δ(³¹P) in polyoxometalates is the combination TZP/PBE//TZ2P/OPBE (for the NMR//optimization steps). The hybrid functionals (PBE0, B3LYP) tested herein for the NMR step, besides being more CPU-consuming, do not outperform pure GGA functionals. Although previous studies on ¹⁸³W NMR suggested that very large basis sets like QZ4P were needed for geometry optimization, the present results indicate that TZ2P suffices if the functional is optimal. Moreover, scaling corrections were applied to the results, providing low mean absolute errors below 1 ppm for δ(³¹P), which is a step forward in confirming or predicting chemical shifts in polyoxometalates. Finally, via a simplified molecular model, we establish how the small variations in δ(³¹P) arise from energy changes in the occupied and virtual orbitals of the PO4 group. PMID:25738630
Scattering Correction For Image Reconstruction In Flash Radiography
Cao, Liangzhi; Wang, Mengqi; Wu, Hongchun; Liu, Zhouyu; Cheng, Yuxiong; Zhang, Hongbo [Xi'an Jiaotong Univ., Xi'an (China)
2013-08-15
Scattered photons cause blurring and distortions in flash radiography, reducing the accuracy of image reconstruction significantly. The effect of the scattered photons is taken into account and an iterative deduction of the scattered photons is proposed to amend the scattering effect for image restoration. In order to deduct the scattering contribution, the flux of scattered photons is estimated as the sum of two components. The single scattered component is calculated accurately together with the uncollided flux along the characteristic ray, while the multiple scattered component is evaluated using correction coefficients pre-obtained from Monte Carlo simulations. The arbitrary geometry pretreatment and ray tracing are carried out based on the customization of AutoCAD. With the above model, an Iterative Procedure for image restORation code, IPOR, is developed. Numerical results demonstrate that the IPOR code is much more accurate than the direct reconstruction solution without scattering correction and it has a very high computational efficiency.
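The iterative deduction can be understood as a fixed-point subtraction: the scatter predicted from the current direct-flux estimate is removed from the measurement, and the estimate is refined. A toy 1-D sketch (a hypothetical blur-based scatter operator, not the IPOR single/multiple-scatter model):

```python
def blur(x):
    """Simple 3-point smoothing kernel [0.25, 0.5, 0.25], zero at the edges."""
    n = len(x)
    out = []
    for i in range(n):
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i < n - 1 else 0.0
        out.append(0.25 * left + 0.5 * x[i] + 0.25 * right)
    return out

def scatter_op(direct, k=0.3):
    """Toy scatter model: a fraction k of the direct flux, blurred."""
    return [k * v for v in blur(direct)]

true_direct = [0.0, 1.0, 4.0, 9.0, 4.0, 1.0, 0.0]
measured = [d + s for d, s in zip(true_direct, scatter_op(true_direct))]

# Iterative deduction: subtract the scatter predicted from the current
# estimate of the direct flux, then re-estimate.
estimate = list(measured)
for _ in range(30):
    scatter = scatter_op(estimate)
    estimate = [m - s for m, s in zip(measured, scatter)]
```

Because the toy scatter operator is a contraction (k < 1), the iteration converges geometrically to the true direct flux; the real code replaces this operator with ray-traced single scatter plus Monte Carlo multiple-scatter coefficients.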
Thermal Correction to the Molar Polarizability of a Boltzmann Gas
Jentschura, U D; Mohr, P J
2013-01-01
Metrology in atomic physics has been crucial for a number of advanced determinations of fundamental constants. In addition to very precise frequency measurements, the molar polarizability of an atomic gas has recently also been measured very accurately. Part of the motivation for the measurements is due to ongoing efforts to redefine the International System of Units (SI), for which an accurate value of the Boltzmann constant is needed. Here, we calculate the dominant shift of the molar polarizability in an atomic gas due to thermal effects. It is given by the relativistic correction to the dipole interaction, which emerges when the probing electric field is Lorentz transformed into the rest frame of the atoms that undergo thermal motion. While this effect is small when compared to currently available experimental accuracy, the relativistic correction to the dipole interaction is much larger than the thermal shift of the polarizability induced by blackbody radiation.
Thermal correction to the molar polarizability of a Boltzmann gas
Jentschura, U. D.; Puchalski, M.; Mohr, P. J.
2011-12-01
Metrology in atomic physics has been crucial for a number of advanced determinations of fundamental constants. In addition to very precise frequency measurements, the molar polarizability of an atomic gas has recently also been measured very accurately. Part of the motivation for the measurements is due to ongoing efforts to redefine the International System of Units (SI), for which an accurate value of the Boltzmann constant is needed. Here we calculate the dominant shift of the molar polarizability in an atomic gas due to thermal effects. It is given by the relativistic correction to the dipole interaction, which emerges when the probing electric field is Lorentz transformed into the rest frame of the atoms that undergo thermal motion. While this effect is small when compared to currently available experimental accuracy, the relativistic correction to the dipole interaction is much larger than the thermal shift of the polarizability induced by blackbody radiation.
Evaluation of QNI corrections in porous media applications
Radebe, M. J.; de Beer, F. C.; Nshimirimana, R.
2011-09-01
Qualitative measurement using digital neutron imaging has been explored more thoroughly than accurate quantitative measurement. The reason for this bias is that quantitative measurements require correction for background and material scatter and for neutron spectral effects. The Quantitative Neutron Imaging (QNI) software package has resulted from efforts at the Paul Scherrer Institute, Helmholtz Zentrum Berlin (HZB) and Necsa to correct for these effects, while the sample-detector distance (SDD) principle has previously been demonstrated as a measure to eliminate the material scatter effect. This work evaluates the capabilities of the QNI software package to produce accurate quantitative results on specific characteristics of porous media, and its role in nondestructive quantification of materials with and without calibration. The work further complements QNI's abilities by the use of different SDDs. Studies of the effective %porosity of mortar and the attenuation coefficient of water using QNI and the SDD principle are reported.
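The quantitative step behind such attenuation-coefficient and porosity measurements is Beer-Lambert attenuation of the neutron beam; a hedged sketch with illustrative numbers (not Necsa data), assuming scatter and spectral effects have already been corrected:

```python
import math

def attenuation_coeff(i0, i, thickness_cm):
    """Effective attenuation coefficient from Beer-Lambert: I = I0*exp(-mu*t)."""
    return -math.log(i / i0) / thickness_cm

def porosity(mu_sample, mu_solid):
    """Effective porosity if the pores are empty: mu_sample = (1 - p)*mu_solid."""
    return 1.0 - mu_sample / mu_solid

# Illustrative: 1 cm sample transmits 36.8% of the open-beam intensity,
# and the fully dense solid would have mu ~ 1.25 cm^-1.
mu = attenuation_coeff(i0=1000.0, i=368.0, thickness_cm=1.0)
p = porosity(mu, mu_solid=1.25)
```

Uncorrected material scatter inflates the detected intensity I, biasing mu low and the inferred porosity high, which is why the QNI corrections and the SDD principle matter for quantitative work.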
High order QED corrections in Z physics
In this thesis a number of calculations of higher order QED corrections are presented, all applying to the standard LEP/SLC processes e+e- → f-bar f, where f stands for any fermion. In cases where f ≠ e-, νe, the above process is only possible via annihilation of the incoming electron-positron pair. At LEP/SLC this mainly occurs via the production and subsequent decay of a Z boson, i.e. the cross section is heavily dominated by the Z resonance. These processes and the corrections to them, treated in a semi-analytical way, are discussed (ch. 2). In the case f = e- (Bhabha scattering) the process can also occur via the exchange of a virtual photon in the t-channel. Since the latter contribution is dominant at small scattering angles, one has to exclude these angles if one is interested in Z physics. Having excluded that region, one has to recalculate all QED corrections (ch. 3). The techniques introduced there enable the calculation of the difference between forward and backward scattering, the forward-backward asymmetry, for the cases f ≠ e-, νe (ch. 4). At small scattering angles, where Bhabha scattering is dominated by photon exchange in the t-channel, this process is used in experiments to determine the luminosity of the e+e- accelerator. Hence an accurate theoretical description of this process at small angles is of vital interest for the overall normalization of all measurements at LEP/SLC. Ch. 5 gives such a description in a semi-analytical way. The last two chapters discuss Monte Carlo techniques that are used for the cases f ≠ e-, νe. Ch. 6 describes the simulation of two-photon bremsstrahlung, which is a second order QED correction effect. The results are compared with those of the semi-analytical treatment in ch. 2. Finally, ch. 7 reviews several techniques that have been used to simulate higher order QED corrections for the cases f ≠ e-, νe. (author). 132 refs.; 10 figs.; 16 tabs
Surface consistent finite frequency phase corrections
Kimman, W. P.
2016-07-01
Static time-delay corrections are frequency independent and ignore velocity variations away from the assumed vertical ray path through the subsurface. There is therefore a clear potential for improvement if the finite frequency nature of wave propagation can be properly accounted for. Such a method is presented here based on the Born approximation, the assumption of surface consistency and the misfit of instantaneous phase. The concept of instantaneous phase lends itself very well for sweep-like signals, hence these are the focus of this study. Analytical sensitivity kernels are derived that accurately predict frequency-dependent phase shifts due to P-wave anomalies in the near surface. They are quick to compute and robust near the source and receivers. An additional correction is presented that re-introduces the nonlinear relation between model perturbation and phase delay, which becomes relevant for stronger velocity anomalies. The phase shift as function of frequency is a slowly varying signal, its computation therefore does not require fine sampling even for broad-band sweeps. The kernels reveal interesting features of the sensitivity of seismic arrivals to the near surface: small anomalies can have a relative large impact resulting from the medium field term that is dominant near the source and receivers. Furthermore, even simple velocity anomalies can produce a distinct frequency-dependent phase behaviour. Unlike statics, the predicted phase corrections are smooth in space. Verification with spectral element simulations shows an excellent match for the predicted phase shifts over the entire seismic frequency band. Applying the phase shift to the reference sweep corrects for wavelet distortion, making the technique akin to surface consistent deconvolution, even though no division in the spectral domain is involved. As long as multiple scattering is mild, surface consistent finite frequency phase corrections outperform traditional statics for moderately large
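Instantaneous phase, the misfit quantity used here, is conventionally obtained from the analytic signal; a minimal numpy sketch using an FFT-based Hilbert transform (this illustrates the phase observable only, not the paper's analytical sensitivity kernels):

```python
import numpy as np

def instantaneous_phase(x):
    """Unwrapped instantaneous phase via the analytic signal (FFT Hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0   # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0       # Nyquist bin for even-length signals
    analytic = np.fft.ifft(X * h)
    return np.unwrap(np.angle(analytic))

# 10 Hz cosine sampled at 1 kHz: phase should advance at ~2*pi*10 rad/s
dt = 1e-3
t = np.arange(0, 1.0, dt)
phase = instantaneous_phase(np.cos(2 * np.pi * 10 * t))
```

The phase-shift-versus-frequency signal extracted this way varies slowly, which is why, as the abstract notes, it does not require fine frequency sampling even for broad-band sweeps.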
Surface Consistent Finite Frequency Phase Corrections
Kimman, W. P.
2016-04-01
Static time-delay corrections are frequency independent and ignore velocity variations away from the assumed vertical ray-path through the subsurface. There is therefore a clear potential for improvement if the finite frequency nature of wave propagation can be properly accounted for. Such a method is presented here based on the Born approximation, the assumption of surface consistency, and the misfit of instantaneous phase. The concept of instantaneous phase lends itself very well for sweep-like signals, hence these are the focus of this study. Analytical sensitivity kernels are derived that accurately predict frequency dependent phase shifts due to P-wave anomalies in the near surface. They are quick to compute and robust near the source and receivers. An additional correction is presented that re-introduces the non-linear relation between model perturbation and phase delay, which becomes relevant for stronger velocity anomalies. The phase shift as function of frequency is a slowly varying signal, its computation therefore doesn't require fine sampling even for broadband sweeps. The kernels reveal interesting features of the sensitivity of seismic arrivals to the near surface: small anomalies can have a relative large impact resulting from the medium field term that is dominant near the source and receivers. Furthermore, even simple velocity anomalies can produce a distinct frequency dependent phase behaviour. Unlike statics, the predicted phase corrections are smooth in space. Verification with spectral element simulations shows an excellent match for the predicted phase shifts over the entire seismic frequency band. Applying the phase shift to the reference sweep corrects for wavelet distortion, making the technique akin to surface consistent deconvolution, even though no division in the spectral domain is involved. As long as multiple scattering is mild, surface consistent finite frequency phase corrections outperform traditional statics for moderately large
An accurate and practical method for inference of weak gravitational lensing from galaxy images
Bernstein, Gary M.; Armstrong, Robert; Krawiec, Christina; March, Marisa C.
2016-07-01
We demonstrate highly accurate recovery of weak gravitational lensing shear using an implementation of the Bayesian Fourier Domain (BFD) method proposed by Bernstein & Armstrong, extended to correct for selection biases. The BFD formalism is rigorously correct for Nyquist-sampled, background-limited, uncrowded images of background galaxies. BFD does not assign shapes to galaxies, instead compressing the pixel data D into a vector of moments M, such that we have an analytic expression for the probability P(M|g) of obtaining the observations with gravitational lensing distortion g along the line of sight. We implement an algorithm for conducting BFD's integrations over the population of unlensed source galaxies which measures ≈10 galaxies s⁻¹ core⁻¹ with good scaling properties. Initial tests of this code on ≈10⁹ simulated lensed galaxy images recover the simulated shear to a fractional accuracy of m = (2.1 ± 0.4) × 10⁻³, substantially more accurate than has been demonstrated previously for any generally applicable method. Deep sky exposures generate a sufficiently accurate approximation to the noiseless, unlensed galaxy population distribution assumed as input to BFD. Potential extensions of the method include simultaneous measurement of magnification and shear; multiple-exposure, multiband observations; and joint inference of photometric redshifts and lensing tomography.
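The quoted fractional accuracy m is the multiplicative term of the standard linear shear-bias model g_obs = (1 + m) g_true + c. A toy fit of that model (all numbers invented, not from the paper's simulations) looks like:

```python
# Sketch: estimating multiplicative (m) and additive (c) shear bias by a
# linear fit of recovered vs. true shear. Toy data; values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
g_true = rng.uniform(-0.05, 0.05, size=1000)          # input shears
m_true, c_true = 2.1e-3, 1.0e-4                       # injected biases
g_obs = (1.0 + m_true) * g_true + c_true + rng.normal(0, 1e-5, g_true.size)

# Least-squares fit of g_obs = (1 + m) * g_true + c
A = np.vstack([g_true, np.ones_like(g_true)]).T
(slope, c_hat), *_ = np.linalg.lstsq(A, g_obs, rcond=None)
m_hat = slope - 1.0
```

In practice the measurement noise per galaxy is far larger, which is why ≈10⁹ galaxies are needed to pin m down at the 10⁻³ level.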
Comparative evaluation of scatter correction techniques in 3D positron emission tomography
Zaidi, H
2000-01-01
Much research and development has been concentrated on the scatter compensation required for quantitative 3D PET. Increasingly sophisticated scatter correction procedures are under investigation, particularly those based on accurate scatter models, and iterative reconstruction-based scatter compensation approaches. The main difference among the correction methods is the way in which the scatter component in the selected energy window is estimated. Monte Carlo methods give further insight and might in themselves offer a possible correction procedure. Methods: Five scatter correction methods are compared in this paper where applicable: the dual-energy window (DEW) technique, the convolution-subtraction (CVS) method, two variants of the Monte Carlo-based scatter correction technique (MCBSC1 and MCBSC2), and our newly developed statistical reconstruction-based scatter correction (SRBSC) method. These scatter correction techniques are evaluated using Monte Carlo simulation studies, experimental phantom measurements...
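The convolution-subtraction (CVS) method mentioned above can be sketched as an iterative estimate of the scatter component; the Gaussian kernel width and scatter fraction below are illustrative assumptions, not values from the paper:

```python
# Sketch of the convolution-subtraction (CVS) idea: scatter is modelled as
# the current primary estimate convolved with a broad kernel and scaled,
# then subtracted. Kernel width and scatter fraction are invented.
import numpy as np
from scipy.ndimage import gaussian_filter

def cvs_correct(projection, scatter_fraction=0.3, kernel_sigma=8.0, n_iter=3):
    """Iterative convolution-subtraction scatter correction."""
    primary = projection.copy()
    for _ in range(n_iter):
        scatter = scatter_fraction * gaussian_filter(primary, kernel_sigma)
        primary = projection - scatter
    return primary

# Toy projection: a hot square plus a synthetic broad scatter background.
measured = np.zeros((64, 64))
measured[28:36, 28:36] = 100.0
measured = measured + 0.3 * gaussian_filter(measured, 8.0)
corrected = cvs_correct(measured)
```

Iterating refines the scatter estimate because each pass convolves a cleaner primary image rather than the scatter-contaminated measurement.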
Spectroscopically Accurate Line Lists for Application in Sulphur Chemistry
Underwood, D. S.; Azzam, A. A. A.; Yurchenko, S. N.; Tennyson, J.
2013-09-01
for inclusion in standard atmospheric and planetary spectroscopic databases. The methods involved in computing the ab initio potential energy and dipole moment surfaces involved minor corrections to the equilibrium S-O distance, which produced good agreement with experimentally determined rotational energies. However, the purely ab initio method was not able to reproduce an equally spectroscopically accurate representation of vibrational motion. We therefore present an empirical refinement to this original, ab initio potential surface, based on the experimental data available. This will not only be used to reproduce the room-temperature spectrum to a greater degree of accuracy, but is essential in the production of a larger, accurate line list necessary for the simulation of higher temperature spectra: we aim for coverage suitable for T ≤ 800 K. Our preliminary studies on SO3 have also shown it to exhibit an interesting "forbidden" rotational spectrum and "clustering" of rotational states; to our knowledge this phenomenon has not been observed in other examples of trigonal planar molecules and is also an investigative avenue we wish to pursue. Finally, the IR absorption bands for SO2 and SO3 exhibit a strong overlap, and the inclusion of SO2 as a complement to our studies is something that we will be interested in doing in the near future.
Holographic thermalization with Weyl corrections
Dey, Anshuman; Mahapatra, Subhash; Sarkar, Tapobrata
2016-01-01
We consider holographic thermalization in the presence of a Weyl correction in five dimensional AdS space. We first obtain the Weyl corrected black brane solution perturbatively, up to first order in the coupling. The corresponding AdS-Vaidya like solution is then constructed. This is then used to numerically analyze the time dependence of the two point correlation functions and the expectation values of rectangular Wilson loops in the boundary field theory, and we discuss how the Weyl correction can modify the thermalization time scales in the dual field theory. In this context, the subtle interplay between the Weyl coupling constant and the chemical potential is studied in detail.
Segmented attenuation correction using artificial neural networks in positron tomography
The measured attenuation correction technique is widely used in cardiac positron tomographic studies. However, the success of this technique is limited because of the insufficient counting statistics achievable in practical transmission scan times, and because of the scattered radiation in the transmission measurement, which leads to an underestimation of the attenuation coefficients. In this work, a segmented attenuation correction technique has been developed that uses artificial neural networks. The technique has been validated in phantoms and verified in human studies. The results indicate that attenuation coefficients measured in the segmented transmission image are accurate and reproducible. Activity concentrations measured in the reconstructed emission image can also be recovered accurately using this new technique. The accuracy of the technique is subject independent and insensitive to scatter contamination in the transmission data. This technique has the potential of reducing the transmission scan time, and satisfactory results are obtained if the transmission data contain about 400 000 true counts per plane. It can predict accurately the value of any attenuation coefficient in the range from air to water in a transmission image with or without scatter correction. (author)
Software for Correcting the Dynamic Error of Force Transducers
Naoki Miyashita
2014-07-01
Software which corrects the dynamic error of force transducers in impact force measurements using their own output signal has been developed. The software corrects the output waveform of the transducers using the output waveform itself, estimates its uncertainty and displays the results. In the experiment, the dynamic errors of three transducers of the same model are evaluated using the Levitation Mass Method (LMM), in which the impact forces applied to the transducers are accurately determined as the inertial force of the moving part of the aerostatic linear bearing. The parameters for correcting the dynamic error are determined from the results of one set of impact measurements of one transducer. Then, the validity of the obtained parameters is evaluated using the results of the other sets of measurements of all three transducers. The uncertainties in the uncorrected force and those in the corrected force are also estimated. If manufacturers determine the correction parameters for each model using the proposed method, and provide the software with the parameters corresponding to each model, then users can obtain the waveform corrected against dynamic error and its uncertainty. The present status and the future prospects of the developed software are discussed in this paper.
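If the transducer is idealized as a second-order system, correcting its output using the output itself reduces to adding scaled derivative terms. This sketch assumes a known natural frequency and damping ratio (w0, zeta), which are invented here rather than taken from the paper:

```python
# Sketch: correcting a second-order transducer response using its own output.
# Sensor model: u'' + 2*zeta*w0*u' + w0**2 * u = w0**2 * F, hence
# F = u + (2*zeta/w0)*u' + u''/w0**2. Parameters below are illustrative.
import numpy as np
from scipy.signal import lsim

def correct_dynamic_error(u, fs, w0, zeta):
    """Recover the applied force F from the transducer output u."""
    du = np.gradient(u) * fs           # first derivative by central differences
    d2u = np.gradient(du) * fs         # second derivative
    return u + (2.0 * zeta / w0) * du + d2u / w0**2

fs = 100_000.0                         # sampling rate, Hz
w0, zeta = 2 * np.pi * 2000.0, 0.1     # assumed sensor parameters
t = np.arange(0, 0.01, 1 / fs)
F = np.exp(-((t - 0.004) / 0.0008) ** 2)   # smooth impact-like pulse
# Simulate the transducer output for the known force:
_, u, _ = lsim(([w0**2], [1.0, 2 * zeta * w0, w0**2]), U=F, T=t)
F_hat = correct_dynamic_error(u, fs, w0, zeta)
```

The corrected waveform F_hat tracks the applied pulse far more closely than the raw output u, which is the effect the software exploits.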
How well does multiple OCR error correction generalize?
Lund, William B.; Ringger, Eric K.; Walker, Daniel D.
2013-12-01
As the digitization of historical documents, such as newspapers, becomes more common, the archive patron's need for accurate digital text from those documents increases. Building on our earlier work, the contributions of this paper are: 1. demonstrating the applicability of novel methods for correcting optical character recognition (OCR) on disparate data sets, including a new synthetic training set; 2. enhancing the correction algorithm with novel features; and 3. assessing the data requirements of the correction learning method. First, we correct errors using conditional random fields (CRF) trained on synthetic training data sets in order to demonstrate the applicability of the methodology to unrelated test sets. Second, we show the strength of lexical features from the training sets on two unrelated test sets, yielding a relative reduction in word error rate (WER) on the test sets of 6.52%. New features capture the recurrence of hypothesis tokens and yield an additional relative reduction in WER of 2.30%. Further, we show that only 2.0% of the full training corpus of over 500,000 feature cases is needed to achieve correction results comparable to those using the entire training corpus, effectively reducing both the complexity of the training process and the learned correction model.
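The word error rate (WER) figures above are ratios of word-level edit distance to reference length, and the quoted improvements are relative reductions of that ratio. A minimal sketch with toy strings (not the paper's data sets):

```python
# Sketch: word error rate via word-level edit distance, and the kind of
# relative reduction quoted in the abstract. Toy strings only.
def edit_distance(a, b):
    """Levenshtein distance between two sequences (Wagner-Fischer DP)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def wer(hypothesis, reference):
    ref = reference.split()
    return edit_distance(hypothesis.split(), ref) / len(ref)

reference = "the quick brown fox jumps"
baseline = wer("the qnick brown f0x jumps", reference)    # 2 errors -> 0.4
corrected = wer("the quick brown f0x jumps", reference)   # 1 error  -> 0.2
relative_reduction = (baseline - corrected) / baseline    # 0.5 here
```

A "relative reduction in WER of 6.52%" means the corrected WER is 6.52% lower than the baseline WER, not 6.52 percentage points lower.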
Surface corrections to the moment of inertia and shell structure in finite Fermi systems
Gorpinchenko, D. V.; Magner, A. G.; Bartel, J.; Blocki, J. P.
2016-02-01
The moment of inertia for nuclear collective rotations is derived within a semiclassical approach based on the Inglis cranking and Strutinsky shell-correction methods, improved by surface corrections within the nonperturbative periodic-orbit theory. For adiabatic (statistical-equilibrium) rotations it is approximated by the generalized rigid-body moment of inertia accounting for the shell corrections of the particle density. An improved phase-space trace formula allows one to express the shell components of the moment of inertia more accurately in terms of the free-energy shell correction. Evaluating their ratio within the extended Thomas-Fermi effective-surface approximation, one finds good agreement with the quantum calculations.
Surface corrections to the shell-structure of the moment of inertia
Gorpinchenko, D V; Bartel, J; Blocki, J P
2015-01-01
The moment of inertia for nuclear collective rotations is derived within a semiclassical approach based on the Inglis cranking and the Strutinsky shell-correction methods, improved by surface corrections within the non-perturbative periodic-orbit theory. For adiabatic (statistical-equilibrium) rotations it is approximated by the generalized rigid-body moment of inertia accounting for the shell corrections of the particle density. An improved phase-space trace formula allows one to express the shell components of the moment of inertia more accurately in terms of the free-energy shell correction, with their ratio evaluated within the extended Thomas-Fermi effective-surface approximation.
Reflection error correction of gas turbine blade temperature
Kipngetich, Ketui Daniel; Feng, Chi; Gao, Shan
2016-03-01
Accurate measurement of gas turbine blades' temperature is one of the greatest challenges encountered in gas turbine temperature measurements. Within an enclosed gas turbine environment with surfaces of varying temperature and low emissivities, a new challenge is introduced into the use of radiation thermometers due to the problem of reflection error. A method for correcting this error has been proposed and demonstrated in this work through computer simulation and experiment. The method assumed that the emissivities of all surfaces exchanging thermal radiation are known. Simulations were carried out considering targets with low and high emissivities of 0.3 and 0.8 respectively, while experimental measurements were carried out on blades with an emissivity of 0.76. The simulated results showed the possibility of achieving an error of less than 1%, while the experimental results corrected the error to 1.1%. It was thus concluded that the method is appropriate for correcting the reflection error commonly encountered in temperature measurement of gas turbine blades.
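For a gray, opaque target of known emissivity, the reflection term can be removed from a single-wavelength radiance reading and the true temperature recovered by inverting Planck's law. This is a simplified single-wavelength sketch of the general idea, not the paper's method; all numbers, and the assumption that the environment radiance is measured separately, are illustrative:

```python
# Sketch: removing reflected radiance from a radiation-thermometer reading.
# Gray, opaque target: L_meas = eps*L_bb(T) + (1 - eps)*L_env.
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m), temperature T (K)."""
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

def planck_inverse(lam, L):
    """Temperature whose blackbody radiance at lam equals L."""
    return H * C / (lam * KB * np.log1p(2 * H * C**2 / (lam**5 * L)))

def corrected_temperature(L_meas, L_env, eps, lam):
    L_bb = (L_meas - (1.0 - eps) * L_env) / eps   # strip the reflection term
    return planck_inverse(lam, L_bb)

lam, eps = 1.6e-6, 0.76            # 1.6 um thermometer; blade emissivity
T_blade, T_env = 1100.0, 1400.0    # hotter surroundings reflect onto blade
L_meas = eps * planck(lam, T_blade) + (1 - eps) * planck(lam, T_env)
T_hat = corrected_temperature(L_meas, planck(lam, T_env), eps, lam)
```

Without the correction, the hotter surroundings bias the apparent blade temperature upward; with it, T_hat returns to the true value.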
Analytic method for geometrical parameter correction of planar HPGe detector
A numerical integration formula was introduced to calculate the response of a planar HPGe detector to photons emitted from a point source. The formula was then used to correct the geometrical parameters of the planar HPGe detector. 241Am and 137Cs point sources were placed at distances of 1-20 cm from the entrance window to obtain the corresponding detection efficiencies. The detector parameters were calculated by weighted least-squares fitting of the formula to the experimental efficiencies. This correction method is accurate and time-saving. The simulation results from MCNP using the corrected parameters show that the relative deviations between simulated and experimental efficiencies are less than 1% for 59.5 and 661.6 keV photons at distances of 1-20 cm. (authors)
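The weighted least-squares step can be sketched with a bare solid-angle efficiency model standing in for the paper's numerical-integration formula; the detector radius, intrinsic efficiency, and noise level below are invented:

```python
# Sketch: fitting detector geometry by weighted least squares of a
# point-source efficiency model. The solid-angle model is a stand-in for
# the paper's formula; all parameter values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def efficiency(d, radius, eff0):
    """Intrinsic efficiency times fractional solid angle of a disc detector
    of given radius, seen on-axis from a point source at distance d (cm)."""
    omega = 0.5 * (1.0 - d / np.sqrt(d**2 + radius**2))
    return eff0 * omega

d = np.linspace(1.0, 20.0, 20)               # source distances, cm
true_radius, true_eff0 = 2.5, 0.6
rng = np.random.default_rng(1)
meas = efficiency(d, true_radius, true_eff0) * (1 + rng.normal(0, 0.01, d.size))
sigma = 0.01 * meas                          # ~1 % counting uncertainty
popt, pcov = curve_fit(efficiency, d, meas, p0=(2.0, 0.5), sigma=sigma)
radius_fit, eff0_fit = popt
```

Using the measured efficiencies at many distances constrains the geometry well because the solid angle varies strongly over 1-20 cm.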
Correcting the Chromatic Aberration in Barrel Distortion of Endoscopic Images
Y. M. Harry Ng
2003-04-01
Modern endoscopes offer physicians a wide-angle field of view (FOV) for minimally invasive therapies. However, the high level of barrel distortion may prevent accurate perception of the image. Fortunately, this kind of distortion may be corrected by digital image processing. In this paper we investigate the chromatic aberrations in the barrel distortion of endoscopic images. In the past, chromatic aberration in endoscopes was corrected by achromatic lenses or active lens control. In contrast, we take a computational approach by modifying the concept of image warping and the existing barrel distortion correction algorithm to tackle the chromatic aberration problem. In addition, an error function for the determination of the level of centroid coincidence is proposed. Simulation and experimental results confirm the effectiveness of our method.
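A per-channel radial correction removes lateral chromatic aberration together with the geometric barrel distortion. The single-coefficient model and the per-channel k values below are illustrative assumptions, not the paper's algorithm:

```python
# Sketch: per-channel barrel-distortion correction by inverse mapping with
# a one-coefficient radial model. The k values per colour channel are
# invented to mimic chromatic spread of the distortion.
import numpy as np

def undistort_channel(img, k):
    """Nearest-neighbour inverse mapping with r_d = r_u * (1 + k * r_u^2)."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    xn, yn = (x - cx) / cx, (y - cy) / cy          # normalised coordinates
    r2 = xn**2 + yn**2
    xs, ys = xn * (1 + k * r2), yn * (1 + k * r2)  # where to sample
    xi = np.clip(np.round(xs * cx + cx).astype(int), 0, w - 1)
    yi = np.clip(np.round(ys * cy + cy).astype(int), 0, h - 1)
    return img[yi, xi]

rgb = np.random.rand(64, 64, 3)
# A slightly different k per channel models the chromatic aberration.
corrected = np.dstack([undistort_channel(rgb[..., c], k)
                       for c, k in enumerate((-0.10, -0.12, -0.14))])
```

Warping each channel with its own coefficient brings the colour centroids back into coincidence, which is what the proposed error function measures.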
Neural network scatter correction technique for digital radiography
This paper presents a scatter correction technique based on artificial neural networks. The technique utilizes the acquisition of a conventional digital radiographic image, coupled with the acquisition of a multiple pencil beam (micro-aperture) digital image. Image subtraction results in a sparsely sampled estimate of the scatter component in the image. The neural network is trained to develop a causal relationship between image data on the low-pass filtered open field image and the sparsely sampled scatter image, and then the trained network is used to correct the entire image (pixel by pixel) in a manner which is operationally similar to but potentially more powerful than convolution. The technique is described and is illustrated using clinical primary component images combined with scatter component images that are realistically simulated using the results from previously reported Monte Carlo investigations. The results indicate that an accurate scatter correction can be realized using this technique
Water-table correction factors applied to gasoline contamination
The application of correction factors to measured ground-water elevations is an important step in the process of characterizing sites contaminated by petroleum products such as gasoline. The water-table configuration exerts a significant control on the migration of free product (e.g., gasoline) and dissolved hydrocarbon constituents. An accurate representation of this configuration cannot be made on the basis of measurements obtained from monitoring wells containing free product, unless correction factors are applied. By applying correction factors, the effect of the overlying product on the apparent water-table configuration is removed, and the water table can be analyzed at its ambient (undisturbed) level. A case history is presented where corrected water-table elevations and elevations measured at wells unaffected by free product are combined as control points. The use of the combined data facilitates a more accurate assessment of the shape of the water table, which leads to better conclusions regarding the source(s) of contamination, the extent of free-product accumulation, and optimal areas for focusing remediation efforts
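The standard free-product correction is simple arithmetic: the ambient water table sits above the measured product/water interface by the product thickness scaled by the product's specific gravity. A minimal sketch (the 0.73 specific gravity is a typical value for gasoline, not site data):

```python
# Sketch: correcting the apparent water table in a well containing free
# product. Elevations in feet; the specific gravity is illustrative.
def corrected_water_table(interface_elev_ft, product_thickness_ft,
                          specific_gravity=0.73):
    """Ambient water-table elevation = product/water interface elevation
    plus product thickness scaled by the product's specific gravity."""
    return interface_elev_ft + specific_gravity * product_thickness_ft

# A well with 2.0 ft of gasoline and the depressed water level at 98.0 ft:
ambient = corrected_water_table(98.0, 2.0)
```

The lighter product column depresses the water level in the well, so the uncorrected reading understates the ambient water-table elevation.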
Multipole correction in large synchrotrons
A new method of correcting dynamic nonlinearities due to the multipole content of a synchrotron such as the Superconducting Super Collider is discussed. The method uses lumped multipole elements placed at the center (C) of the accelerator half-cells as well as elements near the focusing (F) and defocusing (D) quads. In a first approximation, the corrector strengths follow Simpson's Rule. Correction of second-order sextupole nonlinearities may also be obtained with the F, C, and D octupoles. Correction of nonlinearities by about three orders of magnitude is obtained, and simple solutions to a fundamental problem in synchrotrons are demonstrated. Applications to the CERN Large Hadron Collider and lower energy machines, as well as extensions for quadrupole correction, are also discussed
Self-Correcting Quantum Computers
Bombin, H; Horodecki, M; Martín-Delgado, M A
2009-01-01
Is the notion of a quantum computer resilient to thermal noise unphysical? We address this question from a constructive perspective and show that local quantum Hamiltonian models provide self-correcting quantum computers. To this end, we first give a sufficient condition on the connectedness of excitations for a stabilizer code model to be a self-correcting quantum memory. We then study the two main examples of topological stabilizer codes in arbitrary dimensions and establish their self-correcting capabilities. Also, we address the transversality properties of topological color codes, showing that 6D color codes provide a self-correcting model that allows the transversal and local implementation of a universal set of operations in seven spatial dimensions. Finally, we give a procedure to initialize such quantum memories at finite temperature.
Quantum corrections for Boltzmann equation
Levy, Peter M.
2008-01-01
We present the lowest order quantum correction to the semiclassical Boltzmann distribution function, and the equation satisfied by this correction is given. Our equation for the quantum correction is obtained from the conventional quantum Boltzmann equation by explicitly expressing the Planck constant in the gradient approximation, and the quantum Wigner distribution function is expanded in powers of the Planck constant, too. The negative quantum correlation in the Wigner distribution function, which is just the quantum correction term, is naturally singled out, thus obviating the need for the Husimi coarse-grain averaging that is usually done to remove the negative quantum part of the Wigner distribution function. We also discuss the classical limit of quantum thermodynamic entropy in the above framework.
Spelling Correction in Agglutinative Languages
Oflazer, K
1994-01-01
This paper presents an approach to spelling correction in agglutinative languages that is based on two-level morphology and a dynamic-programming-based search algorithm. Spelling correction in agglutinative languages is significantly different from that in languages like English. The concept of a word in such languages is much wider than the entries found in a dictionary, owing to productive word formation by derivational and inflectional affixation. After an overview of certain issues and relevant mathematical preliminaries, we formally present the problem and our solution. We then present results from our experiments with spelling correction in Turkish, a Ural-Altaic agglutinative language. Our results indicate that we can find the intended correct word in 95% of the cases and offer it as the first candidate in 74% of the cases, when the edit distance is 1.
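Generating and ranking candidates at edit distance 1 can be sketched as follows; the flat toy lexicon of Turkish surface forms stands in for the two-level morphological analyzer the paper actually uses:

```python
# Sketch: candidate generation at edit distance 1 against a lexicon.
# In an agglutinative language the "lexicon" would be enumerated by a
# two-level morphology, not listed flat; this toy set is illustrative.
def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """All strings at edit distance 1 from word (delete/insert/replace/swap)."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    inserts = [a + ch + b for a, b in splits for ch in alphabet]
    replaces = [a + ch + b[1:] for a, b in splits if b for ch in alphabet]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    return set(deletes + inserts + replaces + transposes)

lexicon = {"evlerimizde", "evlerimizden", "evimizde"}   # toy surface forms

def candidates(misspelt):
    return sorted(edits1(misspelt) & lexicon)

print(candidates("evlerimzde"))   # ['evlerimizde'] (insert the missing i)
```

The paper's 95%/74% figures refer to the intended word appearing among, and first in, such a ranked candidate list when the true edit distance is 1.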
Self-correcting quantum computers
Is the notion of a quantum computer (QC) resilient to thermal noise unphysical? We address this question from a constructive perspective and show that local quantum Hamiltonian models provide self-correcting QCs. To this end, we first give a sufficient condition on the connectedness of excitations for a stabilizer code model to be a self-correcting quantum memory. We then study the two main examples of topological stabilizer codes in arbitrary dimensions and establish their self-correcting capabilities. Also, we address the transversality properties of topological color codes, showing that six-dimensional color codes provide a self-correcting model that allows the transversal and local implementation of a universal set of operations in seven spatial dimensions. Finally, we give a procedure for initializing such quantum memories at finite temperature. (paper)
Bowman, Caitlin R; Dennis, Nancy A
2015-06-01
Successful memory retrieval is predicated not only on recognizing old information, but also on correctly rejecting new information (lures) in order to avoid false memories. Correctly rejecting lures is more difficult when they are perceptually or semantically related to information presented at study as compared to when lures are distinct from previously studied information. This behavioral difference suggests that the cognitive and neural basis of correct rejections differs with respect to the relatedness between lures and studied items. The present study sought to identify neural activity that aids in suppressing false memories by examining the network of brain regions underlying correct rejection of related and unrelated lures. Results showed neural overlap in the right hippocampus and anterior parahippocampal gyrus associated with both related and unrelated correct rejections, indicating that some neural regions support correctly rejecting lures regardless of their semantic/perceptual characteristics. Direct comparisons between related and unrelated correct rejections showed that unrelated correct rejections were associated with greater activity in bilateral middle and inferior temporal cortices, regions that have been associated with categorical processing and semantic labels. Related correct rejections showed greater activation in visual and lateral prefrontal cortices, which have been associated with perceptual processing and retrieval monitoring. Thus, while related and unrelated correct rejections show some common neural correlates, related correct rejections are driven by greater perceptual processing whereas unrelated correct rejections show greater reliance on salient categorical cues to support quick and accurate memory decisions. PMID:25862563
Radiative corrections to Bose condensation
Gonzalez, A. (Academia de Ciencias de Cuba, La Habana. Inst. de Matematica, Cibernetica y Computacion)
1985-04-01
The Bose condensation of the scalar field in a theory behaving in the Coleman-Weinberg mode is considered. The effective potential of the model is computed within the semiclassical approximation in a dimensional regularization scheme. Radiative corrections are shown to introduce certain μ-dependent ultraviolet divergences in the effective potential coming from the many-particle theory. The weight of radiative corrections in the dynamics of the system is strongly modified by the charge density.
Colour correction for panoramic imaging
Tian, Gui Yun; Gledhill, Duke; Taylor, D.
2002-01-01
This paper addresses the problem of colour distortion in panoramic imaging. In particular, when image mosaicing is used for panoramic imaging, the images are captured under different lighting conditions and viewpoints. The paper analyses several linear approaches for their colour transform and mapping. A new approach of colour-histogram-based colour correction is provided, which is robust to image capturing conditions such as viewpoints and scaling. The procedure for the colour correction is intr...
Quantum error correction for beginners
Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation now constitute a much larger field, and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)
ACCURATE KAP METER CALIBRATION AS A PREREQUISITE FOR OPTIMISATION IN PROJECTION RADIOGRAPHY.
Malusek, A; Sandborg, M; Carlsson, G Alm
2016-06-01
Modern X-ray units register the air kerma-area product, PKA, with a built-in KAP meter. Some KAP meters show an energy-dependent bias comparable with the maximum uncertainty articulated by the IEC (25 %), adversely affecting dose-optimisation processes. To correct for the bias, a reference KAP meter calibrated at a standards laboratory can be used with one of the two calibration methods described here: either the calibration is transferred from the beam quality used at the standards laboratory, Q0, to any beam quality, Q, in the clinic, or beam-quality corrections are measured with an energy-independent dosemeter via a reference beam quality in the clinic, Q1, to the beam quality Q. Biases of up to 35 % of built-in KAP meter readings were noted. Energy-dependent calibration factors are needed for unbiased PKA measurements. PMID:26743261
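Applying such a calibration reduces to multiplying the raw reading by the laboratory calibration factor and a beam-quality correction. The factor names and all numbers below are illustrative, not values from the study:

```python
# Sketch: applying a beam-quality-dependent calibration to a built-in KAP
# meter reading. N_Q0 is the standards-laboratory factor at quality Q0;
# k_Q transfers it to the clinical quality Q. Numbers are invented.
def corrected_pka(reading_uGy_m2, n_q0, k_q):
    """P_KA = N_Q0 * k_Q * M, with M the raw KAP-meter reading."""
    return n_q0 * k_q * reading_uGy_m2

# A meter that over-responds by ~20 % at a soft clinical beam quality:
raw = 150.0                                  # uGy*m^2 shown by the meter
pka = corrected_pka(raw, n_q0=0.98, k_q=1.0 / 1.20)
```

Without the beam-quality factor k_Q, the energy-dependent over-response would propagate directly into the dose-optimisation figures.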
Accurate early positions for Swift GRBS: enhancing X-ray positions with UVOT astrometry
Goad, M R; Beardmore, A P; Evans, P A; Rosen, S R; Osborne, J P; Starling, R L C; Marshall, F E; Yershov, V; Burrows, D N; Gehrels, N; Roming, P; Moretti, A; Capalbi, M; Hill, J E; Kennea, J; Koch, S; Vanden Berk, D
2007-01-01
Here we describe an autonomous way of producing more accurate prompt XRT positions for Swift-detected GRBs and their afterglows, based on UVOT astrometry and a detailed mapping between the XRT and UVOT detectors. The latter significantly reduces the dominant systematic error -- the star-tracker solution to the World Coordinate System. This technique, which is limited to times when there is significant overlap between UVOT and XRT PC-mode data, provides a factor of 2 improvement in the localisation of XRT refined positions on timescales of less than a few hours. Furthermore, the accuracy achieved is superior to astrometrically corrected XRT PC mode images at early times (for up to 24 hours), for the majority of bursts, and is comparable to the accuracy achieved by astrometrically corrected X-ray positions based on deep XRT PC-mode imaging at later times (abridged).
Generation increases at Cofrentes Nuclear Power Plant based on accurate feedwater flow measurement
This paper discusses the application of Caldon LEFM ultrasonic flow and temperature measurement systems at Cofrentes Nuclear Power Plant. Based on plant instrumentation, Cofrentes engineering personnel estimated an 8 to 10 MW electric shortfall in generation due to venturi nozzle fouling. An external LEFM ultrasonic flow measurement system installed in October 2000 showed a shortfall of about 9 MW electric, consistent with expectations. The plant has increased generation by using the more accurate ultrasonic system to correct for the venturi nozzle bias. Following the recovery of generation lost to venturi fouling, Cofrentes plans to upgrade the flow meter to Caldon's LEFM CheckPlus system. This system is sufficiently accurate to warrant re-licensing for a power up-rate of up to 1.7% based on improved thermal power measurement. (author)
An Accurate Calculation of the Big-Bang Prediction for the Abundance of Primordial Helium
Lopez, Robert E.; Turner, Michael S.
1999-01-01
Within the standard model of particle physics and cosmology we have calculated the big-bang prediction for the primordial abundance of helium to a theoretical uncertainty of 0.1% (δY_P = ±0.0002). At this accuracy the uncertainty in the abundance is dominated by the experimental uncertainty in the neutron mean lifetime, τ_n = 885.3 ± 2.0 s. The following physical effects were included in the calculation: the zero- and finite-temperature radiative, Coulomb, and finite-nucleon-mass corrections to the weak rates; the order-α quantum-electrodynamic corrections to the plasma density, electron mass, and neutrino temperature; and incomplete neutrino decoupling. New results for the finite-temperature radiative correction and the QED plasma correction were used. In addition, we wrote a new and independent nucleosynthesis code to control numerical errors to less than 0.1%. Our predictions for the ⁴He abundance are summarized with an accurate fitting formula. Summarizing our work...
Accurate gap levels and their role in the reliability of other calculated defect properties
Deak, Peter; Aradi, Balint; Frauenheim, Thomas [Bremen Center for Computational Materials Science, Universitaet Bremen, POB 330440, 28334 Bremen (Germany); Gali, Adam [Department Atomic Physics, Budapest University of Technology and Economics, 1521 Budapest (Hungary)
2011-04-15
The functionality of semiconductors and insulators depends mainly on defects, which modify the electronic, optical, and magnetic spectra through their gap levels. Accurate calculation of the latter is not only important for the experimental identification of the defect, but also influences the accuracy of other calculated defect properties, and is the most difficult challenge for defect theory. The electron self-interaction error in the standard implementations of ab initio density functional theory causes a severe underestimation of the band gap, leading to a corresponding uncertainty in the defect level positions in it. This is a widely known problem which is usually dealt with by a posteriori corrections. A wide range of correction schemes is used, from ad hoc scaling or shifting, through procedures of limited validity (like the scissor operator or various alignment schemes), to more rigorous quasiparticle corrections based on many-body perturbation theory. We demonstrate in this paper that the consequences of the gap error must be taken into account in the total energy, and that simply correcting the band energy with the gap level shifts is of limited applicability. Therefore, the self-consistent determination of the total energy, free of the gap error, is preferred. We show that semi-empirical screened hybrid functionals can successfully be used for this purpose. (Copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Dellacherie, Stéphane; Jung, Jonathan; Omnes, Pascal; Raviart, Pierre-Arnaud
2013-01-01
Through a linear analysis, we show how to modify Godunov-type schemes applied to the compressible Euler system to make them accurate at any Mach number. This allows us to propose all-Mach Godunov-type schemes. A linear stability result is proposed and a formal asymptotic analysis justifies the construction in the barotropic case when the Godunov-type scheme is a Roe scheme. We also underline that we may have to introduce a cut-off in the all-Mach correction to avoid the creation of non-entropic ...
In today's rapidly changing power generation industry it is more critical than ever to acquire and maintain accurate records of previous and current electrical test data. Evaluation and trending of this data is essential to ensuring the reliable operation of the machine in an environment of extended maintenance outages and maintenance budget reductions. This paper presents a case study of a unique problem that originated as early as 1990 and was not properly diagnosed and corrected until 2004, by which time it had propagated to a condition of imminent failure. (author)
Accurate on-line mass flow measurements in supercritical fluid chromatography.
Tarafder, Abhijit; Vajda, Péter; Guiochon, Georges
2013-12-13
This work demonstrates the possible advantages and the challenges of accurate on-line measurements of the CO2 mass flow rate during supercritical fluid chromatography (SFC) operations. Only the mass flow rate is constant along the column in SFC. The volume flow rate is not. The critical importance of accurate measurements of mass flow rates for the achievement of reproducible data and the serious difficulties encountered in supercritical fluid chromatography for its assessment were discussed earlier based on the physical properties of carbon dioxide. In this report, we experimentally demonstrate the problems encountered when performing mass flow rate measurements and the gain that can possibly be achieved by acquiring reproducible data using a Coriolis flow meter. The results obtained show how the use of a highly accurate mass flow meter permits, besides the determination of accurate values of the mass flow rate, a systematic, constant diagnosis of the correct operation of the instrument and the monitoring of the condition of the carbon dioxide pump. PMID:24210558
42 CFR 460.194 - Corrective action.
2010-10-01
... 42 Public Health 4 2010-10-01 2010-10-01 false Corrective action. 460.194 Section 460.194 Public...) Federal/State Monitoring § 460.194 Corrective action. (a) A PACE organization must take action to correct... corrective actions. (c) Failure to correct deficiencies may result in sanctions or termination, as...
Technical evaluation of TomoTherapy automatic roll correction.
Laub, Steve; Snyder, Michael; Burmeister, Jay
2015-01-01
The TomoTherapy Hi·Art System allows the application of rotational corrections as part of the pretreatment image guidance process. This study outlines a custom method to perform an end-to-end evaluation of the TomoTherapy Hi·Art roll correction feature. A roll-sensitive plan was designed and delivered to a cylindrical solid water phantom to test the accuracy of roll corrections, as well as the ability of the automatic registration feature to detect induced roll. Cylindrical target structures containing coaxial inner avoidance structures were placed adjacent to the plane bisecting the phantom and 7 cm laterally off central axis. The phantom was positioned at isocenter with the target plane parallel to the couch surface. Varying degrees of phantom roll were induced, and dose to the targets and inner avoidance structures was measured using Kodak EDR2 films placed in the target plane. Normalized point doses were compared with baseline (no roll) data to determine the sensitivity of the test and the effectiveness of the roll correction feature. Gamma analysis comparing baseline, roll-corrected, and uncorrected films was performed using film analysis software. MVCT images were acquired prior to plan delivery. Measured roll was compared with induced roll to evaluate the automatic registration feature's ability to detect rotational misalignment. Rotations beyond 0.3° result in statistically significant deviation from baseline point measurements. Gamma pass rates begin to drop below 90% at approximately 0.5° induced rotation at 3%/3 mm, and between 0.2° and 0.3° at 2%/2 mm. With roll correction applied, point dose measurements for all rotations are indistinguishable from baseline, and gamma pass rates exceed 96% when using 3%/3 mm as evaluation criteria. Measured roll via the automatic registration algorithm agrees with induced rotation to within the test sensitivity for nearly all imaging settings. The TomoTherapy automatic registration system accurately detects induced roll.
Accurate Jones Matrix of the Practical Faraday Rotator
王林斗; 祝昇翔; 李玉峰; 邢文烈; 魏景芝
2003-01-01
The Jones matrix of practical Faraday rotators is often used in the engineering calculation of non-reciprocal optical field. Nevertheless, only the approximate Jones matrix of practical Faraday rotators has been presented by now. Based on the theory of polarized light, this paper presents the accurate Jones matrix of practical Faraday rotators. In addition, an experiment has been carried out to verify the validity of the accurate Jones matrix. This matrix accurately describes the optical characteristics of practical Faraday rotators, including rotation, loss and depolarization of the polarized light. The accurate Jones matrix can be used to obtain the accurate results for the practical Faraday rotator to transform the polarized light, which paves the way for the accurate analysis and calculation of practical Faraday rotators in relevant engineering applications.
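The abstract does not reproduce the matrix itself, so the following is only a hedged sketch of the common *approximate* Jones matrix the paper improves upon: a rotation by the Faraday angle combined with a scalar amplitude transmission factor for loss. Depolarization lies outside the Jones formalism (Mueller calculus would be needed), and all names below are illustrative.

```python
import numpy as np

# Hedged sketch only: the paper's exact matrix is not given in the abstract.
# An often-used approximate Jones matrix for a Faraday rotator combines an
# amplitude transmission factor t (loss) with a rotation by the Faraday
# angle theta. Depolarization is not representable in a pure Jones matrix.
def faraday_jones(theta: float, t: float = 1.0) -> np.ndarray:
    c, s = np.cos(theta), np.sin(theta)
    return t * np.array([[c, -s],
                         [s,  c]])

# A lossless 45-degree rotator maps horizontal polarization to the diagonal state:
E_out = faraday_jones(np.pi / 4) @ np.array([1.0, 0.0])
print(np.round(E_out, 4))  # both components ~0.7071
```

The accurate matrix discussed in the paper refines this form to also capture loss and depolarization of the polarized light.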
Biomimetic Approach for Accurate, Real-Time Aerodynamic Coefficients Project
National Aeronautics and Space Administration — Aerodynamic and structural reliability and efficiency depends critically on the ability to accurately assess the aerodynamic loads and moments for each lifting...
Ko, Jonathan; Wu, Chensheng; Davis, Christopher C.
2015-09-01
Adaptive optics has been widely used in the field of astronomy to correct for atmospheric turbulence while viewing images of celestial bodies. The slightly distorted incoming wavefronts are typically sensed with a Shack-Hartmann sensor and then corrected with a deformable mirror. Although this approach has proven effective for astronomical purposes, a new approach must be developed when correcting for the deep turbulence experienced in ground-to-ground optical systems. We propose the use of a modified plenoptic camera as a wavefront sensor capable of accurately representing an incoming wavefront that has been significantly distorted by strong (high Cn²) turbulence conditions. After the large distortions have been corrected, a secondary mode utilizing more traditional adaptive optics algorithms can take over to fine-tune the wavefront correction. This two-stage algorithm can find use in free-space optical communication systems, in directed-energy applications, as well as for image correction purposes.
Quantum error correction beyond qubits
Aoki, Takao; Takahashi, Go; Kajiya, Tadashi; Yoshikawa, Jun-Ichi; Braunstein, Samuel L.; van Loock, Peter; Furusawa, Akira
2009-08-01
Quantum computation and communication rely on the ability to manipulate quantum states robustly and with high fidelity. To protect fragile quantum-superposition states from corruption through so-called decoherence noise, some form of error correction is needed. Therefore, the discovery of quantum error correction (QEC) was a key step to turn the field of quantum information from an academic curiosity into a developing technology. Here, we present an experimental implementation of a QEC code for quantum information encoded in continuous variables, based on entanglement among nine optical beams. This nine-wave-packet adaptation of Shor's original nine-qubit scheme enables, at least in principle, full quantum error correction against an arbitrary single-beam error.
Fermilab Booster Correction Elements upgrade
The Fermilab Booster Correction Element Power Supply System is being upgraded to provide significant improvements in performance and versatility. At the same time, these improvements will complement raising the Booster injection energy from 200 MeV to 400 MeV and will allow an increased range of adjustment of tune, chromaticity, closed orbit, and harmonic corrections. All correction elements will be capable of ramping to give dynamic orbit, tune, and chromaticity control throughout the acceleration cycle. The power supplies are commercial switch-mode current sources capable of operating in all four current-voltage quadrants. External secondary feedback loops on the amplifiers have extended the small-signal bandwidth to 3 kHz and allow current ramps in excess of 1000 A/sec. The implementation and present status of the upgrade project are described in this paper. (author) 4 refs., 2 figs., 1 tab
Error-Correcting Data Structures
de Wolf, Ronald
2008-01-01
We study data structures in the presence of adversarial noise. We want to encode a given object in a succinct data structure that enables us to efficiently answer specific queries about the object, even if the data structure has been corrupted by a constant fraction of errors. This model is the common generalization of (static) data structures and locally decodable error-correcting codes. The main issue is the tradeoff between the space used by the data structure and the time (number of probes) needed to answer a query about the encoded object. We prove a number of upper and lower bounds on various natural error-correcting data structure problems. In particular, we show that the optimal length of error-correcting data structures for the Membership problem (where we want to store subsets of size s from a universe of size n) is closely related to the optimal length of locally decodable codes for s-bit strings.
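As a toy illustration of the space/probe tradeoff described above (not the paper's construction), a membership bit-vector can be protected with a trivial 3-fold repetition code: the encoding triples the space, and each query makes 3 probes and takes a majority vote, tolerating any single corrupted copy of a bit.

```python
# Toy illustration, not the paper's construction: protect a membership
# bit-vector with a 3-fold repetition code. Encoding triples the length;
# each query probes 3 positions and takes a majority vote, so any single
# corrupted copy of a bit is tolerated.
def encode(bits):
    return [b for b in bits for _ in range(3)]

def query(codeword, i):
    # 3 probes into the (possibly corrupted) structure, then majority vote.
    return 1 if sum(codeword[3 * i: 3 * i + 3]) >= 2 else 0

bits = [1, 0, 1, 1, 0]
cw = encode(bits)
cw[4] ^= 1  # corrupt one stored copy of element 1
print([query(cw, i) for i in range(len(bits))])  # [1, 0, 1, 1, 0]
```

Locally decodable codes achieve far better space than this naive tripling while keeping the probe count small, which is exactly the tradeoff the paper bounds.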
Electroweak corrections for LHC processes
Chiesa, Mauro [Istituto Nazionale di Fisica Nucleare, Pavia (Italy); Greiner, Nicolas [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Gruppe Theorie; Tramontano, Francesco [Napoli Univ. (Italy). Dept. of Physics; Istituto Nazionale di Fisica Nucleare, Naples (Italy)
2015-07-15
For the Run 2 of the LHC next-to-leading order electroweak corrections will play an important role. Even though they are typically moderate at the level of total cross sections they can lead to substantial deviations in the shapes of distributions. In particular for new physics searches but also for a precise determination of Standard Model observables their inclusion in the theoretical predictions is mandatory for a reliable estimation of the Standard Model contribution. In this article we review the status and recent developments in electroweak calculations and their automation for LHC processes. We discuss general issues and properties of NLO electroweak corrections and present some examples, including the full calculation of the NLO corrections to the production of a W-boson in association with two jets computed using GoSam interfaced to MadDipole.
Delegation in Correctional Nursing Practice.
Tompkins, Frances
2016-07-01
Correctional nurses face daily challenges as a result of their work environment. Common challenges include availability of resources for appropriate care delivery, negotiating with custody staff for access to patients, adherence to scope of practice standards, and working with a varied staffing mix. Professional correctional nurses must consider the educational backgrounds and competency of other nurses and assistive personnel in planning for care delivery. Budgetary constraints and varied staff preparation can be a challenge for the professional nurse. Adequate care planning requires understanding the educational level and competency of licensed and unlicensed staff. Delegation is the process of assessing patient needs and transferring responsibility for care to appropriately educated and competent staff. Correctional nurses can benefit from increased knowledge about delegation. PMID:27302707
Local Correction of Boolean Functions
Alon, Noga
2011-01-01
A Boolean function f over n variables is said to be q-locally correctable if, given a black-box access to a function g which is "close" to an isomorphism f_sigma of f, we can compute f_sigma(x) for any x in Z_2^n with good probability using q queries to g. We observe that any k-junta, that is, any function which depends only on k of its input variables, is O(2^k)-locally correctable. Moreover, we show that there are examples where this is essentially best possible, and locally correcting some k-juntas requires a number of queries which is exponential in k. These examples, however, are far from being typical, and indeed we prove that for almost every k-junta, O(k log k) queries suffice.
Quantitative SPECT reconstruction using CT-derived corrections
Willowson, Kathy; Bailey, Dale L.; Baldock, Clive
2008-06-01
A method for achieving quantitative single-photon emission computed tomography (SPECT) based upon corrections derived from x-ray computed tomography (CT) data is presented. A CT-derived attenuation map is used to perform transmission-dependent scatter correction (TDSC) in conjunction with non-uniform attenuation correction. The original CT data are also utilized to correct for partial volume effects in small volumes of interest. The accuracy of the quantitative technique has been evaluated with phantom experiments and clinical lung ventilation/perfusion SPECT/CT studies. A comparison of calculated values with the known total activities and concentrations in a mixed-material cylindrical phantom, and in liver and cardiac inserts within an anthropomorphic torso phantom, produced accurate results. The total activity in corrected ventilation-subtracted perfusion images was compared to the calibrated injected dose of [99mTc]-MAA (macro-aggregated albumin). The average difference over 12 studies between the known and calculated activities was found to be -1%, with a range of ±7%.
Atmospheric Error Correction of the Laser Beam Ranging
J. Saydi
2014-01-01
Atmospheric models based on surface measurements of pressure, temperature, and relative humidity have been used to increase laser ranging accuracy by ray tracing. Atmospheric refraction can cause significant errors in laser ranging systems. In the present research, the atmospheric effects on the laser beam were investigated using the principles of laser ranging. The atmospheric correction was calculated for 0.532, 1.3, and 10.6 micron wavelengths for the weather conditions of Tehran, Isfahan, and Bushehr in Iran from March 2012 to March 2013, on the basis of monthly means of meteorological data received from the meteorological stations in these cities. The atmospheric correction was calculated for 11, 100, and 200 kilometer laser beam propagations under 30°, 60°, and 90° elevation angles for each propagation. The results show that, for the same months and beam emission angles, the atmospheric correction was most accurate for the 10.6 micron wavelength, and that the laser ranging error decreased as the laser emission angle increased. The atmospheric corrections from the Marini-Murray and Mendes-Pavlis models were compared for the 0.532 micron wavelength.
2015-01-01
The paper "A comparative study on the transplantation of different concentrations of human umbilical mesenchymal cells into diabetic rat", DOI:10.3980/j.issn.2222-3959.2015.02.08, was published in the No. 2 issue of IJO on 18th April. Jia-Hui Kong, Dan Zheng, Song Chen, Hong-Tao Duan, Yue-Xin Wang, Meng Dong, Jian Song. Clinical College of Ophthalmology, Tianjin Medical University, Tianjin Eye Hospital, Tianjin Institute of Ophthalmology,
2015-06-01
Gillon R. Defending the four principles approach as a good basis for good medical practice and therefore for good medical ethics. J Med Ethics 2015;41:111–6. The author misrepresented Beauchamp and Childress when he wrote: ‘My own view (unlike Beauchamp and Childress who explicitly state that they make no such claim ( p. 421)1, is that all moral agents whether or not they are doctors or otherwise involved in healthcare have these prima facie moral obligations; but in the context of answering the question ‘what is it to do good medical ethics ?’ my claim is limited to the ethical obligations of doctors’. The author intended and should have written the following: ‘My own view, unlike Beauchamp and Childress who explicitly state that they make no such claim (p.421)1 is that these four prima facie principles can provide a basic moral framework not only for medical ethics but for ethics in general’. PMID:26002919
2007-01-01
From left to right: Luis, Carmen, Mario, Christian and José listening to speeches by theorists Alvaro De Rújula and Luis Alvarez-Gaumé (right) at their farewell gathering on 15 May.We unfortunately cut out a part of the "Word of thanks" from the team retiring from Restaurant No. 1. The complete message is published below: Dear friends, You are the true "nucleus" of CERN. Every member of this extraordinary human mosaic will always remain in our affections and in our thoughts. We have all been very touched by your spontaneous generosity. Arrivederci, Mario Au revoir,Christian Hasta Siempre Carmen, José and Luis PS: Lots of love to the theory team and to the hidden organisers. So long!
Decoupling correction system in RHIC
A global linear decoupling in the Relativistic Heavy Ion Collider (RHIC) will be performed with three families of skew quadrupoles. The operating horizontal and vertical betatron tunes in RHIC will be separated by one unit, νx = 28.19 and νy = 29.18. The linear coupling is corrected by minimizing the tune splitting Δν, i.e. the off-diagonal matrix m. The skew quadrupole correction system is located close to each of the six interaction regions. A detailed study of the system, carried out with the TEAPOT accelerator physics code, is presented.
Brane cosmology with curvature corrections
We study the cosmology of the Randall-Sundrum brane-world where the Einstein-Hilbert action is modified by curvature correction terms: a four-dimensional scalar curvature from induced gravity on the brane, and a five-dimensional Gauss-Bonnet curvature term. The combined effect of these curvature corrections to the action removes the infinite-density big bang singularity, although the curvature can still diverge for some parameter values. A radiation brane undergoes accelerated expansion near the minimal scale factor, for a range of parameters. This acceleration is driven by the geometric effects, without an inflaton field or negative pressures. At late times, conventional cosmology is recovered. (author)
Self-correcting Multigrid Solver
Jerome L.V. Lewandowski
2004-06-29
A new multigrid algorithm based on the method of self-correction for the solution of elliptic problems is described. The method exploits information contained in the residual to dynamically modify the source term (right-hand side) of the elliptic problem. It is shown that the self-correcting solver is more efficient at damping the short wavelength modes of the algebraic error than its standard equivalent. When used in conjunction with a multigrid method, the resulting solver displays an improved convergence rate with no additional computational work.
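The abstract does not give the algorithm, but its central object, the residual, can be shown with a generic sketch. For an elliptic model problem A u = b, the residual r = b − A u of an approximate solution measures the remaining algebraic error and is the quantity a self-correcting scheme would feed back into the source term. The smoother and model problem below are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Generic sketch under stated assumptions (NOT the paper's algorithm):
# a damped-Jacobi smoother for a 1-D Poisson model problem A u = b. The
# residual r = b - A u measures the remaining algebraic error and is what
# a self-correcting scheme would use to modify the right-hand side.
n = 31
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1-D Laplacian stencil
b = np.ones(n)
u = np.zeros(n)
omega = 2.0 / 3.0  # classic damping factor for Jacobi smoothing

for _ in range(200):
    r = b - A @ u          # residual: drives the correction
    u += omega * r / 2.0   # D^{-1} = (1/2) I for this stencil

print(np.linalg.norm(b - A @ u) < np.linalg.norm(b))  # True: residual reduced
```

Plain smoothing like this damps short-wavelength error efficiently but stalls on smooth modes, which is the gap that multigrid, and the self-correction described above, addresses.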
Bunch mode specific rate corrections for PILATUS3 detectors
Trueb, P., E-mail: peter.trueb@dectris.com [DECTRIS Ltd, 5400 Baden (Switzerland); Dejoie, C. [ETH Zurich, 8093 Zurich (Switzerland); Kobas, M. [DECTRIS Ltd, 5400 Baden (Switzerland); Pattison, P. [EPF Lausanne, 1015 Lausanne (Switzerland); Peake, D. J. [School of Physics, The University of Melbourne, Victoria 3010 (Australia); Radicci, V. [DECTRIS Ltd, 5400 Baden (Switzerland); Sobott, B. A. [School of Physics, The University of Melbourne, Victoria 3010 (Australia); Walko, D. A. [Argonne National Laboratory, Argonne, IL 60439 (United States); Broennimann, C. [DECTRIS Ltd, 5400 Baden (Switzerland)
2015-04-09
The count rate behaviour of PILATUS3 detectors has been characterized for seven bunch modes at four different synchrotrons. The instant retrigger technology of the PILATUS3 application-specific integrated circuit is found to reduce the dependency of the required rate correction on the synchrotron bunch mode. The improvement of using bunch mode specific rate corrections based on a Monte Carlo simulation is quantified. PILATUS X-ray detectors are in operation at many synchrotron beamlines around the world. This article reports on the characterization of the new PILATUS3 detector generation at high count rates. As for all counting detectors, the measured intensities have to be corrected for the dead-time of the counting mechanism at high photon fluxes. The large number of different bunch modes at these synchrotrons as well as the wide range of detector settings presents a challenge for providing accurate corrections. To avoid the intricate measurement of the count rate behaviour for every bunch mode, a Monte Carlo simulation of the counting mechanism has been implemented, which is able to predict the corrections for arbitrary bunch modes and a wide range of detector settings. This article compares the simulated results with experimental data acquired at different synchrotrons. It is found that the usage of bunch mode specific corrections based on this simulation improves the accuracy of the measured intensities by up to 40% for high photon rates and highly structured bunch modes. For less structured bunch modes, the instant retrigger technology of PILATUS3 detectors substantially reduces the dependency of the rate correction on the bunch mode. The acquired data also demonstrate that the instant retrigger technology allows for data acquisition up to 15 million photons per second per pixel.
Accurate formulas for the penalty caused by interferometric crosstalk
Rasmussen, Christian Jørgen; Liu, Fenghai; Jeppesen, Palle
2000-01-01
New simple formulas for the penalty caused by interferometric crosstalk in PIN receiver systems and optically preamplified receiver systems are presented. They are more accurate than existing formulas.
A new, accurate predictive model for incident hypertension
Völzke, Henry; Fung, Glenn; Ittermann, Till; Yu, Shipeng; Baumeister, Sebastian E; Dörr, Marcus; Lieb, Wolfgang; Völker, Uwe; Linneberg, Allan; Jørgensen, Torben; Felix, Stephan B; Rettig, Rainer; Rao, Bharat; Kroemer, Heyo K
2013-01-01
Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures.
78 FR 34604 - Submitting Complete and Accurate Information
2013-06-10
... COMMISSION 10 CFR Part 50 Submitting Complete and Accurate Information AGENCY: Nuclear Regulatory Commission... accurate information as would a licensee or an applicant for a license.'' DATES: Submit comments by August... may submit comments by any of the following methods (unless this document describes a different...
Highly accurate potential energy surface for the He-H2 dimer.
Bakr, Brandon W; Smith, Daniel G A; Patkowski, Konrad
2013-10-14
A new highly accurate interaction potential is constructed for the He-H2 van der Waals complex. This potential is fitted to 1900 ab initio energies computed at the very large-basis coupled-cluster level and augmented by corrections for higher-order excitations (up to full configuration interaction level) and the diagonal Born-Oppenheimer correction. At the vibrationally averaged H-H bond length of 1.448736 bohrs, the well depth of our potential, 15.870 ± 0.065 K, is nearly 1 K larger than the most accurate previous studies have indicated. In addition to constructing our own three-dimensional potential in the van der Waals region, we present a reparameterization of the Boothroyd-Martin-Peterson potential surface [A. I. Boothroyd, P. G. Martin, and M. R. Peterson, J. Chem. Phys. 119, 3187 (2003)] that is suitable for all configurations of the triatomic system. Finally, we use the newly developed potentials to compute the properties of the lone bound states of (4)He-H2 and (3)He-H2 and the interaction second virial coefficient of the hydrogen-helium mixture. PMID:24116617
Fu, Xi; Liu, JianFeng; Zou, Chong; Rui, Lu; Gui, Lai
2014-07-01
Screw fixation is used for accurate augmentation with a porous polyethylene implant in traumatic enophthalmos correction, to avoid complications such as migration and protrusion. We report an incident in which a titanium screw entered the maxillary sinus during enophthalmos correction with a porous polyethylene implant. Such incidents could be avoided by standard manipulation. We present this rare case and offer proposals for the screw fixation of porous polyethylene implants during traumatic enophthalmos correction. PMID:25006927
Ciancio, Dennis; Thompson, Kelly; Schall, Megan; Skinner, Christopher; Foorman, Barbara
2015-10-01
The relationship between reading comprehension rate measures and broad reading skill development was examined using data from approximately 1425 students (grades 1-3). Students read 3 passages, from a pool of 30, and answered open-ended comprehension questions. Accurate reading comprehension rate (ARCR) was calculated by dividing the percentage of questions answered correctly (%QC) by the seconds required to read the passage. Across all 30 passages, ARCR and its two components, %QC and reading time (1/seconds spent reading the passage), were significantly correlated with broad reading scores, with %QC yielding the lowest correlations. Two sequential regressions supported previous findings suggesting that ARCR measures consistently produced meaningful incremental increases beyond %QC in the amount of variance explained in broad reading skill; however, ARCR produced small or no incremental increases beyond reading time. Discussion focuses on the importance of the measure of reading time embedded in brief accurate reading rate measures and on directions for future research. PMID:26407836
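The ARCR statistic defined above (percentage of questions answered correctly divided by seconds spent reading) can be sketched in a few lines; the function name and example numbers are illustrative, not from the study.

```python
# Minimal sketch of the ARCR statistic described above; the function name
# and example numbers are illustrative, not taken from the study.
def arcr(questions_correct: int, questions_total: int, reading_seconds: float) -> float:
    """Accurate reading comprehension rate: %QC divided by reading time."""
    pct_correct = 100.0 * questions_correct / questions_total
    return pct_correct / reading_seconds

# A student answering 4 of 5 questions after reading for 120 s:
print(round(arcr(4, 5, 120.0), 3))  # 0.667 (%QC per second of reading)
```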
CORRECTIVE ACTION IN CAR MANUFACTURING
H. Rohne
2012-01-01
ENGLISH ABSTRACT: In this paper the important issues involved in successfully implementing corrective action systems in quality management are discussed. The work is based on experience in implementing and operating such a system in an automotive manufacturing enterprise in South Africa. The core of a corrective action system is good documentation, supported by a computerised information system. Secondly, a systematic problem-solving methodology is essential to resolve the quality-related problems identified by the system. In the following paragraphs the general corrective action process is discussed and the elements of a corrective action system are identified, followed by a more detailed discussion of each element. Finally, specific results from the application are discussed.
AFRIKAANSE OPSOMMING: Important considerations in the successful implementation of corrective action systems in quality management are discussed in this article. The work is based on experience in implementing and operating such a system at a motor manufacturer in South Africa. The core of a corrective action system is good documentation, supported by a computerised information system. Secondly, a systematic problem-solving methodology is needed to address the quality-related problems the system identifies. In the following paragraphs the general corrective action process is discussed and the elements of the corrective action system are identified. Each element is then discussed in more detail. Finally, specific results from the application are briefly treated.
Quantum Convolutional Error Correction Codes
Chau, H. F.
1998-01-01
I report two general methods to construct quantum convolutional codes for quantum registers with internal $N$ states. Using one of these methods, I construct a quantum convolutional code of rate 1/4 which is able to correct one general quantum error for every eight consecutive quantum registers.
Multilingual text induced spelling correction
Reynaert, M.W.C.
2004-01-01
We present TISC, a multilingual, language-independent and context-sensitive spelling checking and correction system designed to facilitate the automatic removal of non-word spelling errors in large corpora. Its lexicon is derived from raw text corpora, without supervision, and contains word unigrams
Interaction and self-correction
Satne, Glenda Lucila
2014-01-01
acquisition. I then criticize two models that have been dominant in thinking about conceptual competence, the interpretationist and the causalist models. Both fail to meet NC, by failing to account for the abilities involved in conceptual self-correction. I then offer an alternative account of self...
Entropic corrections to Newton's law
In this short paper, we calculate separately the generalized uncertainty principle (GUP) and self-gravitational corrections to Newton's gravitational formula. We show that for a complete description of the GUP and self-gravity effects, both the temperature and entropy must be modified. (paper)
Adam Gąska
2013-12-01
Full Text Available LaserTracer (LT) systems are the most sophisticated and accurate laser tracking devices. They are mainly used to correct the geometrical errors of machine tools and coordinate measuring machines, a process about four times faster than standard methods based on laser interferometers. The methodology of using the LaserTracer for correcting geometrical errors, including a presentation of the system, the multilateration method, and the software used, is described in detail in this paper.
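Multilateration, mentioned above, locates a point from distance measurements taken from several known positions. The following sketch uses hypothetical station coordinates and shows only the basic linearized least-squares step; a real LaserTracer calibration also solves for unknown station parameters and dead-path lengths.

```python
import numpy as np

# Hypothetical tracker station positions (known) and measured distances to one
# probed point.
stations = np.array([[0.0, 0.0, 0.0],
                     [2.0, 0.0, 0.0],
                     [0.0, 2.0, 0.0],
                     [0.0, 0.0, 2.0]])
true_point = np.array([0.7, 0.4, 0.9])
d = np.linalg.norm(stations - true_point, axis=1)  # simulated distance readings

# Linearize by subtracting the sphere equation of station 0 from the others:
# 2 (s_i - s_0) . p = |s_i|^2 - |s_0|^2 + d_0^2 - d_i^2
A = 2.0 * (stations[1:] - stations[0])
b = (d[0]**2 - d[1:]**2
     + np.sum(stations[1:]**2, axis=1) - np.sum(stations[0]**2))
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(p, true_point))  # True
```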
Weather radar equation correction for frequency agile and phased array radars
Knorr, Jeffrey B.
2007-01-01
This paper presents the derivation of a correction to the Probert-Jones weather radar equation for use with advanced frequency agile, phased array radars. It is shown that two additional terms are required to account for frequency hopping and electronic beam pointing. The corrected weather radar equation provides a basis for accurate and efficient computation of a reflectivity estimate from the weather signal data samples. Lastly, an understanding of calibration requirements for these advance...
Electrical response of molecular systems: the power of self-interaction corrected Kohn-Sham theory
Körzdörfer, T.; Mundt, M.; Kümmel, S.
2007-01-01
The accurate prediction of electronic response properties of extended molecular systems has been a challenge for conventional, explicit density functionals. We demonstrate that a self-interaction correction implemented rigorously within Kohn-Sham theory via the Optimized Effective Potential (OEP) yields polarizabilities close to the ones from highly accurate wavefunction-based calculations and exceeding the quality of exact-exchange-OEP. The orbital structure obtained with the OEP-SIC functio...
Lutnæs, O.B.; Teale, A.M.; Helgaker, T.; Tozer, D J; Ruud, K.; Gauss, J.
2009-01-01
An accurate set of benchmark rotational g tensors and magnetizabilities is calculated using coupled-cluster singles-doubles (CCSD) theory and coupled-cluster singles-doubles-perturbative-triples [CCSD(T)] theory, in a variety of basis sets consisting of (rotational) London atomic orbitals. The accuracy of the results obtained is established for the rotational g tensors by careful comparison with experimental data, taking into account zero-point vibrational corrections. After an analysis of th...
Short- and long-range corrected hybrid density functionals with the D3 dispersion corrections
Wang, Chih-Wei; Chai, Jeng-Da
2016-01-01
We propose a short- and long-range corrected (SLC) hybrid scheme employing 100% Hartree-Fock (HF) exchange at both zero and infinite interelectronic distances, wherein three SLC hybrid density functionals with the D3 dispersion corrections (SLC-LDA-D3, SLC-PBE-D3, and SLC-B97-D3) are developed. SLC-PBE-D3 and SLC-B97-D3 are shown to be accurate for a very diverse range of applications, such as core ionization and excitation energies, thermochemistry, kinetics, noncovalent interactions, dissociation of symmetric radical cations, vertical ionization potentials, vertical electron affinities, fundamental gaps, and valence, Rydberg, and long-range charge-transfer excitation energies. Relative to ωB97X-D, SLC-B97-D3 provides significant improvement for core ionization and excitation energies and noticeable improvement for the self-interaction, asymptote, energy-gap, and charge-transfer problems, while performing similarly for thermochemistry, kinetics, and noncovalent interactions.
A technique for accurate planning of stereotactic brain implants prior to head ring fixation
Purpose: A two-step procedure is described for accurate planning of stereotactic brain implants prior to head-ring fixation. Methods and Materials: Approximately 2 weeks prior to implant a CT scan without the head ring is performed for treatment-planning purposes. An entry point and a reference point, both marked with barium and later tattooed, facilitate planning and permit correlation of the images with a later CT scan. A plan is generated using a conventional treatment-planning system to determine the number and activity of I-125 seeds required and the position of each catheter. I-125 seed anisotropy is taken into account by means of a modification to the treatment planning program. On the day of the implant a second CT scan is performed with the head ring affixed to the skull and with the same points marked as in the previous scan. The planned catheter coordinates are then mapped into the coordinate system of the second CT scan by means of a manual translational correction and a computer-calculated rotational correction derived from the reference point coordinates in the two scans. Results: The rotational correction algorithm was verified experimentally in a Rando phantom before it was used clinically. For analysis of the results with individual patients a third CT scan is performed 1 day following the implant and is used for calculating the final dosimetry. Conclusion: The technique that is described has two important advantages: 1) the number and activity of seeds required can be accurately determined in advance; and 2) sufficient time is allowed to derive the best possible plan
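The coordinate mapping described above, a translation plus a rotation computed from reference-point coordinates in the two scans, can be illustrated in two dimensions. The function and point values below are hypothetical, not the authors' implementation:

```python
import math

def map_coordinates(p, entry1, ref1, entry2, ref2):
    """Map a planned point p from scan-1 to scan-2 coordinates: translate so
    the entry points coincide, then rotate about the entry point by the angle
    between the entry->reference vectors (2-D sketch)."""
    # Angle of the entry->reference vector in each scan
    a1 = math.atan2(ref1[1] - entry1[1], ref1[0] - entry1[0])
    a2 = math.atan2(ref2[1] - entry2[1], ref2[0] - entry2[0])
    da = a2 - a1
    # Express p relative to scan-1's entry point, rotate, re-anchor at scan-2's
    dx, dy = p[0] - entry1[0], p[1] - entry1[1]
    return (entry2[0] + dx * math.cos(da) - dy * math.sin(da),
            entry2[1] + dx * math.sin(da) + dy * math.cos(da))

# Mapping scan-1's reference point should land on scan-2's reference point:
print(map_coordinates((1.0, 0.0), (0.0, 0.0), (1.0, 0.0), (5.0, 5.0), (5.0, 6.0)))
```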
Frequency-domain correction of sensor dynamic error for step response
Yang, Shuang-Long; Xu, Ke-Jun
2012-11-01
To obtain accurate results in dynamic measurements it is required that the sensors have good dynamic performance. In practice, sensors have non-ideal dynamic characteristics due to their small damping ratios and low natural frequencies. In this case dynamic error correction methods can be adopted to process the sensor responses and eliminate the effect of their dynamic characteristics. Frequency-domain correction of sensor dynamic error is a common method. Using the existing calculation method, however, the correct frequency-domain correction function (FCF) cannot be obtained from the step response calibration experimental data. This is because of the leakage error and invalid FCF values caused by the cyclic extension of the finite-length step input-output data. To solve these problems, data splicing preprocessing and FCF interpolation are put forward, and the FCF calculation steps as well as the sensor dynamic error correction procedure using the calculated FCF are presented in this paper. The proposed solution is applied to the dynamic error correction of a bar-shaped wind tunnel strain gauge balance to verify its effectiveness. The correction results show that the adjustment time of the balance step response is shortened to 10 ms (less than 1/30 of its value before correction) after frequency-domain correction, and the overshoot falls within 5% (less than 1/10 of its value before correction). The dynamic measurement accuracy of the balance is improved significantly.
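The core idea of frequency-domain correction, dividing out the sensor's transfer characteristic, can be sketched as follows. A first-order lag with a hypothetical time constant stands in for the balance, and the paper's data-splicing and FCF-interpolation refinements are omitted:

```python
import numpy as np

fs = 1000.0                      # hypothetical sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
tau = 0.05                       # hypothetical sensor time constant, s

ideal_step = np.ones_like(t)
measured_step = 1.0 - np.exp(-t / tau)  # sluggish first-order step response

# FCF: ratio of ideal to measured spectra of the step calibration record
FCF = np.fft.rfft(ideal_step) / np.fft.rfft(measured_step)

# Self-consistency check: correcting the calibration record itself must
# recover the ideal step (real corrections would use the same FCF on new data).
corrected = np.fft.irfft(np.fft.rfft(measured_step) * FCF, n=len(t))
print(np.max(np.abs(corrected - ideal_step)) < 1e-8)  # True
```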
Arakawa, Mototaka; Kushibiki, Jun-ichi; Aoki, Naoya
2004-05-01
The effective radius of a bulk-wave ultrasonic transducer as a circular piston source, fabricated on one end of a synthetic silica (SiO2) glass buffer rod, was evaluated for accurate velocity measurements of dispersive specimens over a wide frequency range. The effective radius was determined by comparing measured and calculated phase variations due to diffraction in an ultrasonic transmission line of the SiO2 buffer rod/water-couplant/SiO2 standard specimen, using radio-frequency (RF) tone burst ultrasonic waves. Fourteen devices with different device parameters were evaluated. The velocities of the nondispersive standard specimen (C-7940) were found to be 5934.10 +/- 0.35 m/s at 70 to 290 MHz, after diffraction correction using the nominal radius (0.75 mm) for an ultrasonic device with an operating center frequency of about 400 MHz. Corrected velocities were more accurately found to be 5934.15 +/- 0.03 m/s by using the effective radius (0.780 mm) for the diffraction correction. Bulk-wave ultrasonic devices calibrated by this experimental procedure enable conducting extremely accurate velocity dispersion measurements. PMID:15217227
A fast and accurate method for computing the Sunyaev-Zeldovich signal of hot galaxy clusters
Chluba, Jens; Sazonov, Sergey; Nelson, Kaylea
2012-01-01
New generation ground and space-based CMB experiments have ushered in discoveries of massive galaxy clusters via the Sunyaev-Zeldovich (SZ) effect, providing a new window for studying cluster astrophysics and cosmology. Many of the newly discovered, SZ-selected clusters contain hot intracluster plasma (kTe > 10 keV) and exhibit disturbed morphology, indicative of frequent mergers with large peculiar velocity (v > 1000 km s^{-1}). It is well-known that for the interpretation of the SZ signal from hot, moving galaxy clusters, relativistic corrections must be taken into account, and in this work, we present a fast and accurate method for computing these effects. Our approach is based on an alternative derivation of the Boltzmann collision term which provides new physical insight into the sources of different kinematic corrections in the scattering problem. By explicitly imposing Lorentz-invariance of the scattering optical depth, we also show that the kinematic corrections to the SZ intensity signal found in thi...
Sun Yanni
2011-05-01
Full Text Available Abstract. Background: Protein domain classification is an important step in metagenomic annotation. The state-of-the-art method for protein domain classification is profile HMM-based alignment. However, the relatively high rates of insertions and deletions in homopolymer regions of pyrosequencing reads create frameshifts, causing conventional profile HMM alignment tools to generate alignments with marginal scores. This makes error-containing gene fragments unclassifiable with conventional tools. Thus, there is a need for an accurate domain classification tool that can detect and correct sequencing errors. Results: We introduce HMM-FRAME, a protein domain classification tool based on an augmented Viterbi algorithm that can incorporate error models from different sequencing platforms. HMM-FRAME corrects sequencing errors and classifies putative gene fragments into domain families. It achieved high error detection sensitivity and specificity in a data set with annotated errors. We applied HMM-FRAME in Targeted Metagenomics and a published metagenomic data set. The results showed that our tool can correct frameshifts in error-containing sequences, generate much longer alignments with significantly smaller E-values, and classify more sequences into their native families. Conclusions: HMM-FRAME provides a complementary protein domain classification tool to conventional profile HMM-based methods for data sets containing frameshifts. Its current implementation is best used for small-scale metagenomic data sets. The source code of HMM-FRAME can be downloaded at http://www.cse.msu.edu/~zhangy72/hmmframe/ and at https://sourceforge.net/projects/hmm-frame/.
Accurate calculation of diffraction-limited encircled and ensquared energy.
Andersen, Torben B
2015-09-01
Mathematical properties of the encircled and ensquared energy functions for the diffraction-limited point-spread function (PSF) are presented. These include power series and a set of linear differential equations that facilitate the accurate calculation of these functions. Asymptotic expressions are derived that provide very accurate estimates for the relative amount of energy in the diffraction PSF that falls outside a square or rectangular large detector. Tables with accurate values of the encircled and ensquared energy functions are also presented. PMID:26368873
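For the classical circular-aperture case, the encircled energy of the diffraction PSF has the well-known closed form E(v) = 1 − J0²(v) − J1²(v) (Rayleigh's result; this is textbook background to the paper, not its new asymptotic expressions). A self-contained numerical check, with J_n evaluated from its integral representation:

```python
import math

def bessel_j(n: int, x: float, steps: int = 20000) -> float:
    """J_n(x) via (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt (trapezoidal rule)."""
    h = math.pi / steps
    f = lambda th: math.cos(n * th - x * math.sin(th))
    total = 0.5 * (f(0.0) + f(math.pi)) + sum(f(k * h) for k in range(1, steps))
    return total * h / math.pi

def encircled_energy(v: float) -> float:
    """Fraction of Airy-pattern energy inside normalized radius v."""
    return 1.0 - bessel_j(0, v) ** 2 - bessel_j(1, v) ** 2

# The first dark ring (v = 3.8317, first zero of J1) encloses ~83.8% of the energy:
print(round(encircled_energy(3.8317), 3))  # 0.838
```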
Accurate bearing measurement in a non-cooperative passive location system
A non-cooperative passive location system based on an array is proposed. In the system, the target is detected by beamforming and Doppler matched filtering, and the bearing is measured by a long-baseline interferometer composed of widely separated sub-arrays. With such a long baseline, the bearing is measured accurately but ambiguously. To realize unambiguous, accurate bearing measurement, beam-width and multiple-constraint adaptive beamforming techniques are used to resolve the azimuth ambiguity. Theory and simulation results show that this method is effective for accurate bearing measurement in a non-cooperative passive location system. (authors)
Correcting ionospheric Faraday rotation for ASKAP
O'Sullivan, Shane; Gaensler, Bryan; Landecker, Tom L.; Willis, Tony
2012-10-01
Next-generation polarisation surveys, such as the POSSUM survey on ASKAP, aim to measure weak, statistical, cosmological effects associated with weak magnetic fields, and so will require unprecedented accuracy and stability for measuring polarisation vectors and their Faraday rotation measures (RMs). Ionospheric Faraday rotation (IFR) corrupts polarization observations and cannot be ignored at mid to low frequencies. In aperture-synthesis polarimetry IFR rotates individual visibilities and leads to a loss of coherence and accuracy of polarization angle determination. Through the POSSUM survey science team we have been involved in developing detailed ionospheric prediction software (POSSUM memos #10a,b) that will be used to correct the observed visibilities on ASKAP before imaging to obtain sufficiently accurate polarization and RM data. To provide a stringent test of this software, we propose a continuous 24 hr observing block using the 1.1-3.1 GHz band to monitor the variations caused by the time-variable ionosphere in the polarization angle and RM of a strongly polarized calibrator source, PKS B1903-802. We request a total of 96 hrs (4 x 24 hrs) to monitor the changes in the ionosphere every 3 to 6 months until BETA/ASKAP-12 is taking reliable polarization data.
Energy dependence corrections to MOSFET dosimetric sensitivity.
Cheung, T; Butson, M J; Yu, P K N
2009-03-01
Metal Oxide Semiconductor Field Effect Transistors (MOSFETs) are dosimeters which are now frequently utilized in radiotherapy treatment applications. An improved MOSFET clinical semiconductor dosimetry system (CSDS), which utilizes improved packaging for the MOSFET device, has been studied for the energy dependence of its sensitivity to x-ray radiation measurement. Energy dependence from 50 kVp to 10 MV x-rays has been studied and found to vary by up to a factor of 3.2, with 75 kVp producing the highest sensitivity response. The detector's average life span in high sensitivity mode is energy related and ranges from approximately 100 Gy for 75 kVp x-rays to approximately 300 Gy at 6 MV x-ray energy. The MOSFET detector has also been studied for sensitivity variations with integrated dose history. It was found to become less sensitive to radiation with age, and the magnitude of this effect is dependent on radiation energy, with lower energies producing a larger sensitivity reduction with integrated dose. The reduction in sensitivity is, however, reproducibly approximated by a slightly non-linear, second-order polynomial function, allowing corrections to be made to readings to account for this effect and provide more accurate dose assessments both in phantom and in vivo. PMID:19400548
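The second-order polynomial correction mentioned in the abstract can be sketched like this; the calibration numbers are invented for illustration, and real values would come from the detector's measured calibration history:

```python
# Hypothetical calibration points: relative sensitivity vs. integrated dose (Gy)
CAL = [(0.0, 1.00), (100.0, 0.93), (200.0, 0.82)]

def sensitivity(dose: float) -> float:
    """Second-order polynomial through the calibration points (Lagrange form)."""
    s = 0.0
    for i, (xi, yi) in enumerate(CAL):
        term = yi
        for j, (xj, _) in enumerate(CAL):
            if j != i:
                term *= (dose - xj) / (xi - xj)
        s += term
    return s

def corrected_dose(reading: float, accumulated_dose: float) -> float:
    """Scale a raw reading by the current fitted sensitivity to recover true dose."""
    return reading / sensitivity(accumulated_dose)

# At 150 Gy accumulated dose the fitted sensitivity is 0.88, so a raw
# reading of 1.80 Gy corrects to about 2.045 Gy:
print(round(corrected_dose(1.80, 150.0), 3))
```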
Interaction and Self-Correction
Satne, Glenda Lucila
2014-07-01
Full Text Available In this paper I address the question of how to account for the normative dimension involved in conceptual competence in a naturalistic framework. First, I present what I call the Naturalist Challenge (NC), referring to both the phylogenetic and ontogenetic dimensions of conceptual possession and acquisition. I then criticize two models that have been dominant in thinking about conceptual competence, the interpretationist and the causalist models. Both fail to meet NC, by failing to account for the abilities involved in conceptual self-correction. I then offer an alternative account of self-correction that I develop with the help of the interactionist theory of mutual understanding arising from recent developments in Phenomenology and Developmental Psychology.
Interaction and self-correction.
Satne, Glenda L
2014-01-01
In this paper, I address the question of how to account for the normative dimension involved in conceptual competence in a naturalistic framework. First, I present what I call the naturalist challenge (NC), referring to both the phylogenetic and ontogenetic dimensions of conceptual possession and acquisition. I then criticize two models that have been dominant in thinking about conceptual competence, the interpretationist and the causalist models. Both fail to meet NC, by failing to account for the abilities involved in conceptual self-correction. I then offer an alternative account of self-correction that I develop with the help of the interactionist theory of mutual understanding arising from recent developments in phenomenology and developmental psychology. PMID:25101044
Corrective action program reengineering project
A series of similar refueling floor events that occurred during the early 1990s prompted Susquehanna steam electric station (SSES) management to launch a broad-based review of how the Nuclear Department conducts business. This was accomplished through the formation of several improvement initiative teams. Clearly, one of the key areas that benefited from this management initiative was the corrective action program. The corrective action improvement team was charged with taking a comprehensive look at how the Nuclear Department identified and resolved problems. The 10-member team included management and bargaining unit personnel as well as an external management consultant. This paper provides a summary of this self-assessment initiative, including a discussion of the issues identified, opportunities for improvement, and subsequent completed or planned actions
Personalized recommendation with corrected similarity
Personalized recommendation has attracted a surge of interdisciplinary research. In particular, similarity-based methods have achieved great success in real recommendation systems. However, computed similarities are often overestimated or underestimated, in particular because of the defective strategy of unidirectional similarity estimation. In this paper, we address this drawback by leveraging mutual correction of forward and backward similarity estimations, and propose a new personalized recommendation index, i.e., corrected similarity based inference (CSI). Through extensive experiments on four benchmark datasets, the results show a clear improvement of CSI over mainstream baselines. A detailed analysis is presented to unveil and understand the origin of the difference between CSI and mainstream indices. (paper)
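The unidirectional-similarity problem the abstract refers to is easy to illustrate. The toy sets and the geometric-mean reconciliation below are illustrative only, not the paper's CSI definition:

```python
# Illustration of why unidirectional similarity is direction-dependent.
def forward_similarity(a: set, b: set) -> float:
    """Overlap normalized by the first set's size: s(a->b) != s(b->a) in general."""
    return len(a & b) / len(a)

users_a = {"item1", "item2", "item3", "item4"}
users_b = {"item1", "item2"}

fwd = forward_similarity(users_a, users_b)   # 2/4 = 0.5
bwd = forward_similarity(users_b, users_a)   # 2/2 = 1.0
symmetric = (fwd * bwd) ** 0.5               # one simple way to reconcile the two
print(fwd, bwd, round(symmetric, 4))
```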
Lightweight Specifications for Parallel Correctness
Burnim, Jacob Samuels
2012-01-01
With the spread of multicore processors, it is increasingly necessary for programmers to write parallel software. Yet writing correct parallel software with explicit multithreading remains a difficult undertaking. Though many tools exist to help test, debug, and verify parallel programs, such tools are often hindered by a lack of any specification from the programmer of the intended, correct parallel behavior of his or her software. In this dissertation, we propose three novel lightweight specificati...
EPS Young Physicist Prize - CORRECTION
2009-01-01
The original text for the article 'Prizes aplenty in Krakow' in Bulletin 30-31 assigned the award of the EPS HEPP Young Physicist Prize to Maurizio Pierini. In fact he shared the prize with Niki Saoulidou of Fermilab, who was rewarded for her contribution to neutrino physics, as the article now correctly indicates. We apologise for not having named Niki Saoulidou in the original article.
Logarithmic Corrections in Directed Percolation
Janssen, Hans-Karl; Stenull, Olaf
2003-01-01
We study directed percolation at the upper critical transverse dimension $d=4$, where critical fluctuations induce logarithmic corrections to the leading (mean-field) behavior. Viewing directed percolation as a kinetic process, we address the following properties of directed percolation clusters: the mass (the number of active sites or particles), the radius of gyration and the survival probability. Using renormalized dynamical field theory, we determine the leading and the next to leading lo...
Fisher Renormalization for Logarithmic Corrections
Kenna, Ralph; Hsu, Hsiao-Ping; Von Ferber, Christian
2008-01-01
For continuous phase transitions characterized by power-law divergences, Fisher renormalization prescribes how to obtain the critical exponents for a system under constraint from their ideal counterparts. In statistical mechanics, such ideal behaviour at phase transitions is frequently modified by multiplicative logarithmic corrections. Here, Fisher renormalization for the exponents of these logarithms is developed in a general manner. As for the leading exponents, Fisher renormalization at t...
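For reference, the classical Fisher prescription for leading power-law exponents, which the paper extends to the exponents of logarithmic corrections, maps the ideal exponents (with specific-heat exponent α > 0) to their constrained counterparts:

```latex
\alpha_X = \frac{-\alpha}{1-\alpha}, \qquad
\beta_X  = \frac{\beta}{1-\alpha},  \qquad
\gamma_X = \frac{\gamma}{1-\alpha}, \qquad
\nu_X    = \frac{\nu}{1-\alpha}.
```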
The fallacies of QT correction
Lokhandwala, Yash; Toal, SC
2003-01-01
“Not to correct QT, but how to, that is the question.” The QT interval is a reflection of the action potential in the cardiac cells. Homogenous or heterogenous changes in the action potential duration lead to alteration of the QT interval (in addition to morphological changes of T & U waves) [1]. Such changes can be due to changes in heart rate and autonomic tone. They can also be markers of abnormal repolarization, depolarization or both as a result of electrolyte disturbances, cardiac diseases, drug...
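The most common rate correction whose pitfalls such articles discuss is Bazett's formula, QTc = QT/√RR; a minimal sketch:

```python
import math

def qtc_bazett(qt_ms: float, rr_s: float) -> float:
    """Bazett's heart-rate correction: QTc = QT / sqrt(RR), with RR in seconds."""
    return qt_ms / math.sqrt(rr_s)

# At 60 bpm (RR = 1 s) the correction is the identity:
print(qtc_bazett(400.0, 1.0))            # 400.0
# At 120 bpm (RR = 0.5 s) the same QT is corrected upward:
print(round(qtc_bazett(400.0, 0.5), 1))  # 565.7
```

The known over-correction of Bazett's formula at high heart rates is one of the fallacies the article's title alludes to.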
Surgical correction of "rhinoplastic look"
Sciuto, S.; BIANCO, N.
2013-01-01
SUMMARY A pointed, narrow and exaggeratedly upturned nasal tip and concave dorsal profile can give the nose an unnatural and artificial appearance that is the unmistakable hallmark of plastic surgery. As a result of changes in social attitudes, noses that have evidently been operated on are no longer acceptable and requests are made for correction. While a more natural dorsal profile can be obtained with camouflage grafts of autologous cartilage or alloplastic material (EPTFE), autologous gra...
Sampling Correction in Pedigree Analysis
Ginsburg Emil; Malkin Ida; Elston Robert C
2003-01-01
Usually, a pedigree is sampled and included in the sample that is analyzed after following a predefined non-random sampling design comprising several specific procedures. To obtain a pedigree analysis result free from the bias caused by the sampling procedures, a correction is applied to the pedigree likelihood. The sampling procedures usually considered are: the pedigree ascertainment, determining whether a population unit is to be sampled; the intrafamilial pedigree extension, determining w...
BPM testing, analysis, and correction
A general purpose stretched-wire test station has been developed and used for mapping Beam Position Monitors (BPMs). A computer running LabVIEW software controlling a network analyzer and x-y positioning tables operates the station and generates data files. The data is analyzed in Excel and can be used to generate correction tables. Test results from a variety of BPMs used for the Fermilab Main Injector and elsewhere will be presented. copyright 1998 American Institute of Physics
Surgical correction of postoperative astigmatism
Lindstrom Richard
1990-01-01
The photokeratoscope has increased the understanding of the aspheric nature of the cornea and has led to a better understanding of normal corneal topography. This has significantly affected the development of newer and more predictable models of surgical astigmatic correction. Relaxing incisions effectively flatten the steeper meridian by an amount equivalent to that by which they steepen the flatter meridian. The net change in spherical equivalent is, therefore, negligible. Poor predictability is the major limit...
ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION
Anonymous
2009-01-01
In this paper, a second order linear differential equation is considered, and an accurate method for estimating its characteristic exponent is presented. Finally, we give some examples to verify the feasibility of our result.
Accurate wall thickness measurement using autointerference of circumferential Lamb wave
In this paper, a method of accurately measuring pipe wall thickness by using a noncontact air-coupled ultrasonic transducer (NAUT) is presented. In this method, accurate measurement of the angular wave number (AWN) is a key technique because the AWN changes minutely with the wall thickness. The autointerference of the circumferential (C-) Lamb wave was used for accurate measurement of the AWN. The principle of the method is first explained. A modified method for measuring the wall thickness near a butt weld line is also proposed, and its accuracy was evaluated to be within a 6 μm error. It is also shown in the paper that wall thickness measurement was accurately carried out beyond the differences among the sensors by calibrating the frequency response of the sensors. (author)
Highly Accurate Sensor for High-Purity Oxygen Determination Project
National Aeronautics and Space Administration — In this STTR effort, Los Gatos Research (LGR) and the University of Wisconsin (UW) propose to develop a highly-accurate sensor for high-purity oxygen determination....
Corrective camouflage in pediatric dermatology.
Tedeschi, Aurora; Dall'Oglio, Federica; Micali, Giuseppe; Schwartz, Robert A; Janniger, Camila K
2007-02-01
Many dermatologic diseases, including vitiligo and other pigmentary disorders, vascular malformations, acne, and disfiguring scars from surgery or trauma, can be distressing to pediatric patients and can cause psychological alterations such as depression, loss of self-esteem, deterioration of quality of life, emotional distress, and, in some cases, body dysmorphic disorder. Corrective camouflage can help cover cutaneous unaesthetic disorders using a variety of water-resistant and light to very opaque products that provide effective and natural coverage. These products also can serve as concealers during medical treatment or after surgical procedures before healing is complete. Between May 2001 and July 2003, corrective camouflage was used on 15 children and adolescents (age range, 7-16 years; mean age, 14 years). The majority of patients were girls. Six patients had acne vulgaris; 4 had vitiligo; 2 had Becker nevus; and 1 each had striae distensae, allergic contact dermatitis, and postsurgical scarring. Parents of all patients were satisfied with the cosmetic cover results. We consider corrective makeup to be a well-received and valid adjunctive therapy for use during traditional long-term treatment and as a therapeutic alternative in patients in whom conventional therapy is ineffective. PMID:17388210
Performance of TPC crosstalk correction
Dydak, F; Krasnoperov, A; Nefedov, Y; Wotschack, J; Zhemchugov, A
2004-01-01
The performance of the CERN-Dubna-Milano (CDM) algorithm for TPC crosstalk correction is presented. The algorithm is designed to correct for uni-directional and bi-directional crosstalk, but not for self-crosstalk. It reduces at the 10% level the number of clusters, and the number of pads with a signal above threshold. Despite dramatic effects in selected channels with complicated crosstalk patterns, the average longitudinal signal shape of a hit, and the average transverse signal shape of a cluster, are little affected by uni-directional and bi-directional crosstalk. The longitudinal signal shape of hits is understood in terms of preamplifier response, longitudinal diffusion, track inclination, and self-crosstalk. The transverse signal shape of clusters is understood in terms of the TPC's pad response function. The CDM crosstalk correction leads to an average charge decrease at the level of 15%, though with significant differences between TPC sectors. On the whole, crosstalk constitutes a relatively benig...
Bernstein, R.; Lotspiech, J. B.
1984-01-01
Techniques were developed or improved to calibrate, enhance, and geometrically correct LANDSAT-4 satellite data. Statistical techniques to correct data radiometry were evaluated and were found to minimize striping and banding. Conventional techniques cause striping even with perfect calibration parameters. Intensity enhancement techniques were improved to display image data with large variation in intensity or brightness. Data were geometrically corrected to conform to a 1:100,000 map reference and image products produced with the map overlay. It is shown that these products can serve as accurate map products. A personal computer was experimentally used for digital image processing.
Correcting electrode impedance effects in broadband SIP measurements
Huisman, Johan Alexander; Zimmermann, Egon; Esser, Odilia; Haegel, Franz-Hubert; Vereecken, Harry
2016-04-01
Broadband spectral induced polarization (SIP) measurements of the complex electrical resistivity can be affected by the contact impedance of the potential electrodes above 100 Hz. In this study, we present a correction procedure to remove electrode impedance effects from SIP measurements. The first step in this correction procedure is to estimate the electrode impedance using a measurement with reversed current and potential electrodes. In a second step, this estimated electrode impedance is used to correct SIP measurements based on a simplified electrical model of the SIP measurement system. We evaluated this new correction procedure using SIP measurements on water because of its well-defined dielectric properties. It was found that the difference between the corrected and expected phase of the complex electrical resistivity of water was below 0.1 mrad at 1 kHz for a wide range of electrode impedances. In addition, SIP measurements on a saturated unconsolidated sediment sample with two types of potential electrodes showed that the measured phase of the electrical resistivity was very similar (difference …). SIP measurements on variably saturated unconsolidated sand were also made. Here, the plausibility of the phase of the electrical resistivity was improved for frequencies up to 1 kHz, but errors remained for higher frequencies due to the approximate nature of the electrode impedance estimates and some remaining unknown parasitic capacitances that led to current leakage. It was concluded that the proposed correction procedure for SIP measurements improved the accuracy of the phase measurements by an order of magnitude in the kHz frequency range. Further improvement of this accuracy requires a method to accurately estimate parasitic capacitances in situ.
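The correction's second step relies on a simplified electrical model of the measurement system; a voltage-divider caricature of that idea looks like this (all impedance values are hypothetical, and the paper's actual model also handles parasitic capacitances):

```python
def correct_transfer_impedance(z_measured: complex, z_electrode: complex,
                               z_input: complex) -> complex:
    """Undo the attenuation a potential electrode of impedance z_electrode
    causes in front of an amplifier with input impedance z_input."""
    divider = z_input / (z_input + z_electrode)
    return z_measured / divider

# A 10 kOhm electrode in front of a 1 MOhm amplifier input attenuates by ~1%:
z_true = correct_transfer_impedance(100.0 + 0j, 1e4 + 0j, 1e6 + 0j)
print(round(abs(z_true), 6))  # 101.0
```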
Professional orientation and pluralistic ignorance among jail correctional officers.
Cook, Carrie L; Lane, Jodi
2014-06-01
Research about the attitudes and beliefs of correctional officers has historically been conducted in prison facilities while ignoring jail settings. This study contributes to our understanding of correctional officers by examining the perceptions of those who work in jails, specifically measuring professional orientations about counseling roles, punitiveness, corruption of authority by inmates, and social distance from inmates. The study also examines whether officers are accurate in estimating these same perceptions of their peers, a line of inquiry that has been relatively ignored. Findings indicate that the sample was concerned about various aspects of their job and the management of inmates. Specifically, officers were uncertain about adopting counseling roles, were somewhat punitive, and were concerned both with maintaining social distance from inmates and with an inmate's ability to corrupt their authority. Officers also misperceived the professional orientation of their fellow officers and assumed their peer group to be less progressive than they actually were. PMID:23422025
Correction factors for gravimetric measurement of peritumoural oedema in man.
Bell, B A; Smith, M A; Tocher, J L; Miller, J D
1987-01-01
The water content of samples of normal and oedematous brain in lobectomy specimens from 16 patients with cerebral tumours has been measured by gravimetry and by wet and dry weighing. Uncorrected gravimetry underestimated the water content of oedematous peritumoural cortex by a mean of 1.17%, and of oedematous peritumoural white matter by a mean of 2.52%. Gravimetric correction equations calculated theoretically and from an animal model of serum infusion white matter oedema overestimate peritumoural white matter oedema in man, and empirical gravimetric error correction factors for oedematous peritumoural human white matter and cortex have therefore been derived. These enable gravimetry to be used to accurately determine peritumoural oedema in man. PMID:3268140
SPECT Compton-scattering correction by analysis of energy spectra.
Koral, K F; Wang, X Q; Rogers, W L; Clinthorne, N H; Wang, X H
1988-02-01
The hypothesis that energy spectra at individual spatial locations in single photon emission computed tomographic projection images can be analyzed to separate the Compton-scattered component from the unscattered component is tested indirectly. An axially symmetric phantom consisting of a cylinder with a sphere is imaged with either the cylinder or the sphere containing 99mTc. An iterative peak-erosion algorithm and a fitting algorithm are given and employed to analyze the acquired spectra. Adequate separation into an unscattered component and a Compton-scattered component is judged on the basis of filtered-backprojection reconstruction of corrected projections. In the reconstructions, attenuation correction is based on the known geometry and the total attenuation cross section for water. An independent test of the accuracy of separation is not made. For both algorithms, reconstructed slices for the cold-sphere, hot-surround phantom have the correct shape, as confirmed by simulation results that take into account the measured dependence of system resolution on depth. For the inverse phantom, a hot sphere in a cold surround, quantitative results with the fitting algorithm are accurate, but those obtained with a particular number of iterations of the erosion algorithm are less accurate. (A greater number of iterations would reduce the 26% error with that algorithm, however.) These preliminary results encourage us to believe that a method for correcting for Compton scattering in a wide variety of objects can be found, thus helping to achieve quantitative SPECT. PMID:3258023
Determination of Barkas correction and general formula for stopping power
The aim of our work was to measure the stopping power of alpha particles in different gases and to obtain a general formula for stopping power. Precise measurements of alpha-particle stopping power in gases have been performed with a maximal error of 0.7%. The accurate stopping power values obtained in these measurements enable extraction of the Barkas correction term and completion of the general stopping power formula. The obtained formula was used to compare alpha-particle stopping powers in different media. Good agreement was obtained with stopping power measurements in solids (author)
Automated motion correction based on target tracking for dynamic nuclear medicine studies
Cao, Xinhua; Tetrault, Tracy; Fahey, Fred; Treves, Ted
2008-03-01
Nuclear medicine dynamic studies of kidneys, bladder and stomach are important diagnostic tools. Accurate generation of time-activity curves from regions of interest (ROIs) requires that the patient remains motionless for the duration of the study. This is not always possible since some dynamic studies may last from several minutes to one hour. Several motion correction solutions have been explored. Motion correction using external point sources is inconvenient and not accurate especially when motion results from breathing, organ motion or feeding rather than from body motion alone. Centroid-based motion correction assumes that activity distribution is only inside the single organ (without background) and uniform, but this approach is impractical in most clinical studies. In this paper, we present a novel technique of motion correction that first tracks the organ of interest in a dynamic series then aligns the organ. The implementation algorithm for target tracking-based motion correction consists of image preprocessing, target detection, target positioning, motion estimation and prediction, tracking (new search region generation) and target alignment. The targeted organ is tracked from the first frame to the last one in the dynamic series to generate a moving trajectory of the organ. Motion correction is implemented by aligning the organ ROIs in the image series to the location of the organ in the first image. The proposed method of motion correction has been applied to several dynamic nuclear medicine studies including radionuclide cystography, dynamic renal scintigraphy, diuretic renography and gastric emptying scintigraphy.
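The tracking-and-alignment pipeline described (detect the target, update its position frame by frame, then shift each frame back to the frame-0 location) can be illustrated with a deliberately minimal centroid tracker. This is a hypothetical toy implementation, not the authors' algorithm: it uses an intensity centroid inside a search window for positioning and integer-pixel `np.roll` shifts for alignment.

```python
import numpy as np

def track_and_align(frames, win=8):
    """Toy target-tracking motion correction: follow the activity centroid
    inside a search window around the previous position, then shift each
    frame so the organ stays where it was in frame 0."""
    frames = [np.asarray(f, dtype=float) for f in frames]

    def centroid(img):
        # intensity-weighted centre of mass of an image patch
        ys, xs = np.indices(img.shape)
        m = img.sum()
        return (ys * img).sum() / m, (xs * img).sum() / m

    ref = centroid(frames[0])        # target position in the first frame
    pos = ref
    aligned = [frames[0]]
    for f in frames[1:]:
        # new search region centred on the previous position (tracking step)
        y0, x0 = int(round(pos[0])), int(round(pos[1]))
        ys = slice(max(0, y0 - win), y0 + win + 1)
        xs = slice(max(0, x0 - win), x0 + win + 1)
        cy, cx = centroid(f[ys, xs])
        pos = (cy + ys.start, cx + xs.start)   # point on the moving trajectory
        dy, dx = ref[0] - pos[0], ref[1] - pos[1]
        aligned.append(np.roll(f, (int(round(dy)), int(round(dx))), axis=(0, 1)))
    return aligned
```

A real implementation would add the preprocessing, motion prediction, and sub-pixel interpolation steps the abstract mentions; the sketch only shows the track-then-align structure.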
The Importance of Slow-roll Corrections During Multi-field Inflation
Avgoustidis, Anastasios; Davis, Anne-Christine; Ribeiro, Raquel H; Turzynski, Krzysztof; Watson, Scott
2011-01-01
We re-examine the importance of slow-roll corrections during the evolution of cosmological perturbations in models of multi-field inflation. We find that in many instances the presence of light degrees of freedom leads to situations in which next-to-leading-order slow-roll corrections become significant. Examples where we expect such corrections to be crucial include models in which modes exit the Hubble radius while the inflationary trajectory undergoes an abrupt turn in field space, or during a phase transition. We illustrate this with two examples -- hybrid inflation and double quadratic inflation. Utilizing both analytic estimates and full numerical results, we find that corrections can be as large as 20%. Our results have implications for many existing models in the literature, as these corrections must be included to obtain accurate observational predictions -- particularly given the level of accuracy expected from CMB experiments such as Planck.
Algorithmic scatter correction in dual-energy digital mammography
Chen, Xi; Mou, Xuanqin [Institute of Image Processing and Pattern Recognition, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China); Nishikawa, Robert M.; Lau, Beverly A. [Department of Radiology, The University of Chicago, Chicago, Illinois 60637 (United States); Chan, Suk-tak [Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung Hom (Hong Kong); Zhang, Lei [Department of Computing, The Hong Kong Polytechnic University, Hung Hom (Hong Kong)
2013-11-15
The range of background DE calcification signals obtained with scatter-uncorrected data was reduced by 58% when the data were scatter-corrected by the algorithmic method. With the scatter-correction algorithm and denoising, the minimum visible calcification size can be reduced from 380 to 280 μm. Conclusions: When applying the proposed algorithmic scatter correction to images, the resultant background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method has similar or even better performance than the pinhole-array interpolation method in scatter correction for DEDM; moreover, this method is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it is validated by a 5-cm-thick phantom with calcifications and homogeneous background. The method should be tested on structured backgrounds to more accurately gauge effectiveness.
Accurate Monte Carlo modelling of the back compartments of SPECT cameras
Today, new single photon emission computed tomography (SPECT) reconstruction techniques rely on accurate Monte Carlo (MC) simulations to optimize reconstructed images. However, existing MC scintillation camera models which usually include an accurate description of the collimator and crystal, lack correct implementation of the gamma camera's back compartments. In the case of dual isotope simultaneous acquisition (DISA), where backscattered photons from the highest energy isotope are detected in the imaging energy window of the second isotope, this approximation may induce simulation errors. Here, we investigate the influence of backscatter compartment modelling on the simulation accuracy of high-energy isotopes. Three models of a scintillation camera were simulated: a simple model (SM), composed only of a collimator and a NaI(Tl) crystal; an intermediate model (IM), adding a simplified description of the backscatter compartments to the previous model and a complete model (CM), accurately simulating the materials and geometries of the camera. The camera models were evaluated with point sources (67Ga, 99mTc, 111In, 123I, 131I and 18F) in air without a collimator, in air with a collimator and in water with a collimator. In the latter case, sensitivities and point-spread functions (PSFs) simulated in the photopeak window with the IM and CM are close to the measured values (error below 10.5%). In the backscatter energy window, however, the IM and CM overestimate the FWHM of the detected PSF by 52% and 23%, respectively, while the SM underestimates it by 34%. The backscatter peak fluence is also overestimated by 20% and 10% with the IM and CM, respectively, whereas it is underestimated by 60% with the SM. The results show that an accurate description of the backscatter compartments is required for SPECT simulations of high-energy isotopes (above 300 keV) when the backscatter energy window is of interest.
Jo, Byung-Du; Lee, Young-Jin; Kim, Dae-Hong; Jeon, Pil-Hyun; Kim, Hee-Joung
2014-03-01
In conventional digital radiography (DR) using a dual energy subtraction technique, a significant fraction of the detected photons are scattered within the body, resulting in the scatter component. Scattered radiation can significantly deteriorate image quality in diagnostic X-ray imaging systems. Various methods of scatter correction, including both measurement and non-measurement-based methods, have been proposed in the past. Both methods can reduce scatter artifacts in images. However, non-measurement-based methods require a homogeneous object and provide insufficient correction of the scatter component. Therefore, we employed a measurement-based method to correct for the scatter component of inhomogeneous objects in dual energy DR (DEDR) images. We performed a simulation study using a Monte Carlo simulation with a primary modulator, which is a measurement-based method, for the DEDR system. The primary modulator, which has a checkerboard pattern, was used to modulate primary radiation. Cylindrical phantoms of variable size were used to quantify imaging performance. For scatter estimation, we used discrete Fourier transform filtering. The primary modulation method was evaluated using a cylindrical phantom in the DEDR system. The scatter components were accurately removed using a primary modulator. When the results acquired with scatter correction and without correction were compared, the average contrast-to-noise ratio (CNR) with the correction was 1.35 times higher than that obtained without correction, and the average root mean square error (RMSE) with the correction was 38.00% better than that without correction. In the subtraction study, the average CNR with correction was 2.04 (aluminum subtraction) and 1.38 (polymethyl methacrylate (PMMA) subtraction) times higher than that obtained without the correction. The analysis demonstrated the accuracy of scatter correction and the improvement of image quality using a primary modulator and showed the feasibility of the proposed approach.
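The primary-modulation idea can be demonstrated with a 1-D toy, under strong simplifying assumptions not taken from the paper: the modulator alternates between transmissions 1 and t at the Nyquist frequency, and the scatter is perfectly smooth, so in the Fourier domain the scatter lives near DC while the modulated primary also produces a component at the modulation frequency. Reading off those two components separates the signals.

```python
import numpy as np

def demodulate_1d(x, t):
    """1-D toy of primary-modulation scatter estimation (illustrative only,
    not the paper's 2-D checkerboard implementation). Assumes a constant
    primary P, constant scatter S, and modulation alternating 1, t, 1, t...
    so that x[n] = P*m[n] + S."""
    X = np.fft.fft(np.asarray(x, dtype=float))
    n = len(x)
    mean = X[0].real / n           # DC component: P*(1+t)/2 + S
    a_nyq = abs(X[n // 2]) / n     # Nyquist component: P*(1-t)/2
    primary = 2 * a_nyq / (1 - t)
    scatter = mean - primary * (1 + t) / 2
    return primary, scatter
```

In the real 2-D method the same separation is done with DFT filtering around the checkerboard frequency, followed by interpolation of the scatter estimate across the image.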
Ma, H.; Guo, S.; Hong, X.; Zhou, Y.
2015-05-01
The HJ-1A/B satellite offers free images with high spatial and temporal resolution, which are effective for dynamically monitoring cyanobacteria blooms. However, the HJ-1A/B satellite also receives distorted signals due to the influence of the atmosphere. To acquire accurate information about cyanobacteria blooms, atmospheric correction is needed. HJ-1A/B images were atmospherically corrected using the FLAASH atmospheric correction model. Considering the quantum effect within a certain wavelength range, a spectral response function was included in the process. Then the model was used to process HJ-1A/B images, and the NDVI after atmospheric correction was compared with that before correction. The standard deviation improved from 0.13 to 0.158. Results indicate that atmospheric correction effectively reduces the distorted signals. Finally, NDVI was utilized to monitor the cyanobacteria bloom in Donghu Lake. The accuracy was enhanced compared with that before correction.
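The comparison metric used above is the standard normalized difference vegetation index, computed from the near-infrared and red reflectances after atmospheric correction. The definition below is the textbook formula; the specific HJ-1A/B CCD band assignments are not assumed here.

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from surface reflectances.
    Values near +1 indicate dense vegetation (or surface cyanobacteria
    scums, which behave spectrally like vegetation); values near 0 or
    below indicate water or bare surfaces."""
    return (nir - red) / (nir + red)
```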
Correction to Moliere's formula for multiple scattering
The semiclassical correction to Moliere's formula for multiple scattering is derived. The consideration is based on the scattering amplitude obtained with the first semiclassical correction taken into account for an arbitrary localized but not spherically symmetric potential. Unlike the leading term, the correction to Moliere's formula contains the target density n and thickness L not only in the combination nL (areal density). Therefore, this correction can be referred to as the bulk density correction. It turns out that the bulk density correction is small even for high density. This result explains the wide range of applicability of Moliere's formula
Words Correct per Minute: The Variance in Standardized Reading Scores Accounted for by Reading Speed
Williams, Jacqueline L.; Skinner, Christopher H.; Floyd, Randy G.; Hale, Andrea D.; Neddenriep, Christine; Kirk, Emily P.
2011-01-01
The measure words correct per minute (WC/M) incorporates a measure of accurate aloud word reading and a measure of reading speed. The current article describes two studies designed to parse the variance in global reading scores accounted for by reading speed. In Study I, reading speed accounted for more than 40% of the reading composite score…
Masunov, Artëm E., E-mail: amasunov@ucf.edu [NanoScience Technology Center, Department of Chemistry, and Department of Physics, University of Central Florida, Orlando, FL 32826 (United States); Photochemistry Center RAS, ul. Novatorov 7a, Moscow 119421 (Russian Federation); Gangopadhyay, Shruba [Department of Physics, University of California, Davis, CA 95616 (United States); IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95120 (United States)
2015-12-15
A new method to eliminate the spin-contamination in broken symmetry density functional theory (BS DFT) calculations is introduced. Unlike conventional spin-purification correction, this method is based on canonical Natural Orbitals (NO) for each high/low spin coupled electron pair. We derive an expression to extract the energy of the pure singlet state given in terms of the energy of the BS DFT solution, the occupation number of the bonding NO, and the energy of the higher spin state built on these bonding and antibonding NOs (not self-consistent Kohn–Sham orbitals of the high spin state). Compared to the other spin-contamination correction schemes, the spin-correction is applied to each correlated electron pair individually. We investigate two binuclear Mn(IV) molecular magnets using this pairwise correction. While one of the molecules is described by magnetic orbitals strongly localized on the metal centers, and the spin gap is accurately predicted by the Noodleman and Yamaguchi schemes, for the other one the gap is predicted poorly by these schemes due to strong delocalization of the magnetic orbitals onto the ligands. We show our new correction to yield more accurate results in both cases. - Highlights: • Magnetic orbitals obtained for high and low spin states are not related. • Spin-purification correction becomes inaccurate for delocalized magnetic orbitals. • We use the natural orbitals of the broken symmetry state to build the high spin state. • This new correction is made separately for each electron pair. • Our spin-purification correction is more accurate for delocalized magnetic orbitals.
Correction of gene expression data
Darbani Shirvanehdeh, Behrooz; Stewart, C. Neal, Jr.; Noeparvar, Shahin;
2014-01-01
This report investigates for the first time the potential inter-treatment bias source of cell number for gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies. For maximal reliability of analysis, therefore, comparisons should be performed at the cellular level. This could be accomplished using an appropriate correction method that can detect and remove the inter-treatment bias for cell number. Based on inter-treatment variations of reference genes, we introduce an...
Correct Linearization of Einstein's Equations
Rabounski D.
2006-06-01
Regularly, Einstein's equations can be reduced to a wave form (linearly dependent on the second derivatives of the space metric) in the absence of gravitation, the space rotation and Christoffel's symbols. As shown here, the origin of the problem is that one uses the general covariant theory of measurement. Here the wave form of Einstein's equations is obtained in terms of Zelmanov's chronometric invariants (physically observable projections on the observer's time line and spatial section). The obtained equations depend solely on the second derivatives, even in the presence of gravitation, the space rotation and Christoffel's symbols. The correct linearization proves that the Einstein equations are completely compatible with weak waves of the metric.
Holographic Thermalization with Weyl Corrections
Dey, Anshuman; Sarkar, Tapobrata
2015-01-01
We consider holographic thermalization in the presence of a Weyl correction in five dimensional AdS space. We numerically analyze the time dependence of the two point correlation functions and the expectation values of rectangular Wilson loops in the boundary field theory. The subtle interplay between the Weyl coupling constant and the chemical potential is studied in detail. An outcome of our analysis is the appearance of a swallow tail behaviour in the thermalization curve, and we give evidence that this might indicate distinct physical situations relating to different length scales in the problem.
Drift-corrected nanoplasmonic hydrogen sensing by polarization
Wadell, Carl; Langhammer, Christoph
2015-06-01
Accurate and reliable hydrogen sensors are an important enabling technology for the large-scale introduction of hydrogen as a fuel or energy storage medium. As an example, in a hydrogen-powered fuel cell car of the type now introduced to the market, more than 15 hydrogen sensors are required for safe operation. To enable the long-term use of plasmonic sensors in this particular context, we introduce a concept for drift-correction based on light polarization utilizing symmetric sensor and sensing material nanoparticles arranged in a heterodimer. In this way the inert gold sensor element of the plasmonic dimer couples to a sensing-active palladium element if illuminated in the dimer-parallel polarization direction but not the perpendicular one. Thus the perpendicular polarization readout can be used to efficiently correct for drifts occurring due to changes of the sensor element itself or due to non-specific events like a temperature change. Furthermore, by the use of a polarizing beamsplitter, both polarization signals can be read out simultaneously making it possible to continuously correct the sensor response to eliminate long-term drift and ageing effects. Since our approach is generic, we also foresee its usefulness for other applications of nanoplasmonic sensors than hydrogen sensing.
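One simple way to apply a two-channel correction of this kind is to difference the hydrogen-sensitive (parallel) and hydrogen-insensitive (perpendicular) polarization readouts, so that common-mode drift cancels. This additive model is an illustrative assumption; the actual sensor calibration may use a more elaborate relation between the two channels.

```python
def drift_corrected(parallel, perpendicular):
    """Remove common-mode drift by differencing the two polarization
    channels. Assumes drift enters both readouts additively and equally,
    while the hydrogen signal appears only in the parallel channel."""
    return [p - q for p, q in zip(parallel, perpendicular)]
```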
Accurately measuring dynamic coefficient of friction in ultraform finishing
Briggs, Dennis; Echaves, Samantha; Pidgeon, Brendan; Travis, Nathan; Ellis, Jonathan D.
2013-09-01
UltraForm Finishing (UFF) is a deterministic sub-aperture computer numerically controlled grinding and polishing platform designed by OptiPro Systems. UFF is used to grind and polish a variety of optics from simple spherical to fully freeform, and numerous materials from glasses to optical ceramics. The UFF system consists of an abrasive belt around a compliant wheel that rotates and contacts the part to remove material. This work aims to accurately measure the dynamic coefficient of friction (μ), how it changes as a function of belt wear, and how this ultimately affects material removal rates. The coefficient of friction has been examined in terms of contact mechanics and Preston's equation to determine accurate material removal rates. By accurately predicting changes in μ, polishing iterations can be more accurately predicted, reducing the total number of iterations required to meet specifications. We have established an experimental apparatus that can accurately measure μ by measuring triaxial forces during translating loading conditions or while manufacturing the removal spots used to calculate material removal rates. Using this system, we will demonstrate μ measurements for UFF belts during different states of their lifecycle and assess the material removal function from spot diagrams as a function of wear. Ultimately, we will use this system for qualifying belt-wheel-material combinations to develop a spot-morphing model to better predict instantaneous material removal functions.
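The quantity being measured from the triaxial force data is simply the ratio of in-plane (tangential) force to normal load. The sketch below shows that relation under the assumption that the z-axis of the force sensor is aligned with the contact normal; it is not OptiPro's implementation.

```python
import math

def dynamic_mu(fx, fy, fz):
    """Dynamic coefficient of friction from one triaxial force sample:
    magnitude of the tangential (in-plane) force divided by the normal
    load. Assumes fz is aligned with the contact normal."""
    return math.hypot(fx, fy) / fz
```

In practice μ would be averaged over many samples taken during a translating load or a removal-spot run, and tracked against belt wear.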
Riesz, J; Meredith, P; Riesz, Jennifer; Gilmore, Joel; Meredith, Paul
2004-01-01
We report methods for correcting the photoluminescence emission and excitation spectra of highly absorbing samples for re-absorption and inner filter effects. We derive the general form of the correction, and investigate various methods for determining the parameters. Additionally, the correction methods are tested with highly absorbing fluorescein and melanin (broadband absorption) solutions; the expected linear relationships between absorption and emission are recovered upon application of the correction, indicating that the methods are valid. These procedures allow accurate quantitative analysis of the emission of low quantum yield samples (such as melanin) at concentrations where absorption is significant.
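For reference, the widely used first-order inner-filter correction for a 1 cm cuvette with central excitation and detection is shown below. This is the textbook form only; the paper derives a more general correction with experimentally determined parameters.

```python
def inner_filter_correct(f_obs, a_ex, a_em):
    """First-order inner-filter correction:
    F_corr = F_obs * 10**((A_ex + A_em) / 2),
    where A_ex and A_em are the absorbances at the excitation and emission
    wavelengths. Valid only for moderately absorbing samples; strongly
    absorbing ones need the more general treatment discussed in the text."""
    return f_obs * 10 ** ((a_ex + a_em) / 2.0)
```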
Accurate Sliding-Mode Control System Modeling for Buck Converters
Høyerby, Mikkel Christian Wendelboe; Andersen, Michael Andreas E.
2007-01-01
This paper shows that classical sliding mode theory fails to correctly predict the output impedance of the highly useful sliding mode PID compensated buck converter. The reason for this is identified as the assumption of the sliding variable being held at zero during sliding mode, effectively modeling the hysteretic comparator as an infinite gain. Correct prediction of output impedance is shown to be enabled by the use of a more elaborate, finite-gain model of the hysteretic comparator, which takes the effects of time delay and finite switching frequency into account. The demonstrated modeling approach also predicts the self-oscillating switching action of the sliding-mode control system correctly. Analytical findings are verified by simulation as well as experimentally in a 10-30V/3A buck converter.
Doppler angle correction in the measurement of intrarenal parameters
Mennitt K
2011-03-01
Jing Gao¹, Keith Hentel¹, Qiang Zhu², Teng Ma², George Shih¹, Kevin Mennitt¹, Robert Min¹. ¹Department of Radiology, New York Presbyterian Hospital, Weill Cornell Medical College, NY, USA; ²Division of Diagnostic Ultrasound, Department of Radiology, Beijing Tongren Hospital, Capital Medical University, Beijing, China. Background: The aim of this study was to assess differences in intrarenal artery Doppler parameters measured without and with Doppler angle correction. Methods: We retrospectively reviewed color duplex sonography in 30 normally functioning kidneys (20 native kidneys in 10 subjects and 10 transplanted kidneys in 10 subjects) performed between January 26, 2010 and July 26, 2010. There were 10 age-matched men and 10 age-matched women (mean age 39.8 ± 12.2, range 21–60 years) in this study. Depending on whether the Doppler angle was corrected in the spectral Doppler measurement, Doppler parameters including peak systolic velocity (PSV), end-diastolic velocity (EDV), and resistive index (RI) measured at the interlobar artery of the kidney were divided into two groups, i.e., initial Doppler parameters measured without Doppler angle correction (Group 1) and remeasured Doppler parameters with Doppler angle correction (Group 2). Values for PSV, EDV, and RI measured without Doppler angle correction were compared with those measured with Doppler angle correction, and were analyzed statistically with a paired-samples t-test. Results: There were statistical differences in PSV and EDV at the interlobar artery in the upper, mid, and lower poles of the kidney between Group 1 and Group 2 (all P < 0.001). PSV and EDV in Group 1 were significantly lower than in Group 2. RI in Group 1 was the same as that in Group 2 in the upper, mid, and lower poles of the kidneys. Conclusion: Doppler angle correction plays an important role in the accurate measurement of intrarenal blood flow velocity. The true flow velocity converted from the maximum Doppler velocity shift
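The angle correction at issue is the standard Doppler equation, v = c·Δf / (2·f₀·cos θ): leaving the beam-to-flow angle θ uncorrected (effectively taking cos θ = 1) underestimates velocity for any oblique insonation, which is consistent with Group 1's lower PSV and EDV. The sketch below assumes the conventional soft-tissue sound speed of 1540 m/s.

```python
import math

def doppler_velocity(delta_f, f0, theta_deg, c=1540.0):
    """Angle-corrected flow velocity (m/s) from the Doppler shift:
    v = c * delta_f / (2 * f0 * cos(theta)).
    delta_f: measured Doppler shift (Hz); f0: transmit frequency (Hz);
    theta_deg: beam-to-flow angle; c: assumed speed of sound in tissue."""
    return c * delta_f / (2.0 * f0 * math.cos(math.radians(theta_deg)))
```

Note that the resistive index RI = (PSV - EDV) / PSV is a ratio of velocities, so the cos θ factor cancels; this is why RI was identical in both groups.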
Correction of electric standing waves
Kester, Do; Avruch, Ian; Teyssier, David
2014-12-01
Electric Standing Waves (ESW) appear in some frequency bands of HIFI, a heterodyne spectrometer aboard the Herschel Space Observatory. ESWs consist of about 10 irregular ripples added to a continuum contribution. They distort the spectra and should be removed. ESWs change so rapidly that the standard ways to mitigate them do not work. We have built a catalog of thousands of spectra taken on empty sky that contain only the ESW contribution. All ESWs seem to belong to a limited number of multiplicative families. To find representative members of the families we modelled them as splines and chose one representative template model for each family based on Bayesian evidence. The resulting set of models is our catalog of possible ESW templates. To correct a spectrum taken on an astronomical source, we select the template from the catalog that fits with the highest Bayesian evidence and subtract it. This has to be done in the possible presence of spectral lines and of a true astronomical continuum. Both the true lines and continuum should be unaffected by the procedure. To exclude the lines we use a robustly weighted variant of the (Gaussian) likelihood. Ideally the correction should be part of the pipeline with which all HIFI observations have to be processed. This requires a procedure having no failures, no interaction, and limited CPU usage.
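The select-then-subtract step can be caricatured as follows. This is a hypothetical miniature, not the HIFI pipeline: an L1 misfit stands in for the Bayesian evidence comparison and for the robustly weighted likelihood, with the property that matters here, namely that a narrow spectral line acts as an outlier and barely influences which template is chosen.

```python
import numpy as np

def correct_esw(spectrum, templates):
    """Pick the ESW template with the smallest robust misfit and subtract
    it. The L1 norm is a simple outlier-insensitive stand-in for the
    robust likelihood used in the text, so spectral lines survive the
    correction rather than biasing the template choice."""
    spectrum = np.asarray(spectrum, dtype=float)

    def robust_misfit(resid):
        return np.abs(resid).sum()   # L1 norm: down-weights narrow lines

    best = min(templates,
               key=lambda t: robust_misfit(spectrum - np.asarray(t)))
    return spectrum - np.asarray(best)
```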
Simple and accurate analytical calculation of shortest path lengths
Melnik, Sergey
2016-01-01
We present an analytical approach to calculating the distribution of shortest path lengths (also called intervertex distances, or geodesic paths) between nodes in unweighted undirected networks. We obtain very accurate results for synthetic random networks with specified degree distribution (the so-called configuration model networks). Our method allows us to accurately predict the distribution of shortest path lengths on real-world networks using their degree distribution, or joint degree-degree distribution. Compared to some other methods, our approach is simpler and yields more accurate results. In order to obtain the analytical results, we use the analogy between an infection reaching a node in $n$ discrete time steps (i.e., as in the susceptible-infected epidemic model) and that node being at a distance $n$ from the source of the infection.
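The quantity the analytical method approximates, the distribution of intervertex distances, can be computed exactly on small graphs by breadth-first search from every node, which is also a direct picture of the infection analogy: the BFS frontier at step n is exactly the set of nodes a susceptible-infected epidemic reaches at time n. A minimal sketch:

```python
from collections import deque

def distance_distribution(adj):
    """Empirical distribution of shortest path lengths via BFS from every
    node. `adj` maps node -> iterable of neighbours (unweighted,
    undirected); unreachable ordered pairs are simply skipped."""
    counts = {}
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:                      # BFS: frontier at step n = nodes at distance n
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for node, d in dist.items():
            if node != src:
                counts[d] = counts.get(d, 0) + 1
    total = sum(counts.values())
    return {d: c / total for d, c in counts.items()}
```

This brute-force computation is O(N·(N+E)) and is what the analytical approach replaces for large configuration-model networks.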
Accurate and Simple Calibration of DLP Projector Systems
Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus
2014-01-01
Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available which require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively and most current methods require a camera and involve feature extraction from a known projected pattern. In this work we present a novel calibration technique for DLP Projector systems based on phase shifting profilometry projection onto a printed calibration target. In contrast to most current methods, the one presented here does not rely on an initial camera calibration, and so does not carry over the error into projector calibration. A radial interpolation scheme is used to convert feature coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination of...
Accurate level set method for simulations of liquid atomization☆
Changxiao Shao; Kun Luo; Jianshan Yang; Song Chen; Jianren Fan
2015-01-01
Computational fluid dynamics is an efficient numerical approach for spray atomization study, but it is challenging to accurately capture the gas–liquid interface. In this work, an accurate conservative level set method is introduced to accurately track the gas–liquid interfaces in liquid atomization. To validate the capability of this method, binary drop collision and a drop impacting on a liquid film are investigated. The results are in good agreement with experimental observations. In addition, primary atomization (swirling sheet atomization) is studied using this method. For the swirling sheet atomization, it is found that Rayleigh–Taylor instability in the azimuthal direction causes the primary breakup of the liquid sheet, and complex vortex structures are clustered around the rim of the liquid sheet. The effects of central gas velocity and liquid–gas density ratio on atomization are also investigated. This work lays a solid foundation for further studying the mechanism of spray atomization.
Accurate nuclear radii and binding energies from a chiral interaction
Ekstrom, A; Wendt, K A; Hagen, G; Papenbrock, T; Carlsson, B D; Forssen, C; Hjorth-Jensen, M; Navratil, P; Nazarewicz, W
2015-01-01
The accurate reproduction of nuclear radii and binding energies is a long-standing challenge in nuclear theory. To address this problem two-nucleon and three-nucleon forces from chiral effective field theory are optimized simultaneously to low-energy nucleon-nucleon scattering data, as well as binding energies and radii of few-nucleon systems and selected isotopes of carbon and oxygen. Coupled-cluster calculations based on this interaction, named NNLOsat, yield accurate binding energies and radii of nuclei up to 40Ca, and are consistent with the empirical saturation point of symmetric nuclear matter. In addition, the low-lying collective 3⁻ states in 16O and 40Ca are described accurately, while spectra for selected p- and sd-shell nuclei are in reasonable agreement with experiment.
Correcting for telluric absorption: Methods, case studies, and release of the TelFit code
Gullikson, Kevin; Kraus, Adam [Department of Astronomy, University of Texas, 2515 Speedway, Stop C1400, Austin, TX 78712 (United States); Dodson-Robinson, Sarah [Department of Physics and Astronomy, 217 Sharp Lab, Newark, DE 19716 (United States)
2014-09-01
Ground-based astronomical spectra are contaminated by the Earth's atmosphere to varying degrees in all spectral regions. We present a Python code that can accurately fit a model to the telluric absorption spectrum present in astronomical data, with residuals of ∼3%-5% of the continuum for moderately strong lines. We demonstrate the quality of the correction by fitting the telluric spectrum in a nearly featureless A0V star, HIP 20264, as well as to a series of dwarf M star spectra near the 819 nm sodium doublet. We directly compare the results to an empirical telluric correction of HIP 20264 and find that our model-fitting procedure is at least as good and sometimes more accurate. The telluric correction code, which we make freely available to the astronomical community, can be used as a replacement for telluric standard star observations for many purposes.
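A toy version of model-based telluric correction can be sketched by assuming a known transmission template and fitting only an effective airmass exponent via Beer's law (absorption depths scale as a power of the transmission). TelFit itself fits a full radiative-transfer model with many molecular parameters, so this is a conceptual sketch only, and `telluric_correct` is a hypothetical name:

```python
import numpy as np

def telluric_correct(observed, template, alphas=np.linspace(0.1, 3.0, 60)):
    """Scale a telluric transmission template by an effective airmass
    exponent (Beer's law: T**alpha), pick the alpha that best matches
    the observation in least squares, and divide it out."""
    best = min(alphas, key=lambda a: np.sum((observed - template**a) ** 2))
    return observed / template**best, best
```

Dividing by the scaled template rather than a standard-star spectrum is what lets a model-based approach replace dedicated telluric standard observations.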
Lindner, O.; Kammeier, A.; Fricke, E. [Inst. fuer Molekulare Biophysik, Radiopharmazie und Nuklearmedizin, Herz- und Diabeteszentrum NRW, Universitaetsklinik der Ruhr-Univ. Bochum, Bad Oeynhausen (Germany)
2004-09-01
Myocardial perfusion imaging has been proven an accurate, noninvasive method for the diagnosis of coronary artery disease with high prognostic value. However, image artifacts, which decrease sensitivity and in particular specificity, degrade the clinical impact of this method. Soft-tissue attenuation is regarded as one of the most important causes of impaired image quality. Different approaches to correcting for tissue attenuation have been implemented by the camera manufacturers. The principle is to derive an attenuation map from the transmission data and to correct the emission data for nonuniform photon attenuation with this map. Several published reports demonstrate improved specificity with no substantial change in sensitivity by this method. To perform attenuation correction accurately, quality control measurements and adequate training of technologists and physicians are mandatory. (orig.)
An Accurate Quartic Force Field and Vibrational Frequencies for HNO and DNO
Dateo, Christopher E.; Lee, Timothy J.; Schwenke, David W.
1994-01-01
An accurate ab initio quartic force field for HNO has been determined using the singles and doubles coupled-cluster method that includes a perturbational estimate of the effects of connected triple excitations, CCSD(T), in conjunction with the correlation-consistent polarized valence triple zeta (cc-pVTZ) basis set. Improved harmonic frequencies were determined with the cc-pVQZ basis set. Fundamental vibrational frequencies were determined using a second-order perturbation theory analysis and also using variational calculations. The N-O stretch and bending fundamentals are determined well from both vibrational analyses. The H-N stretch, however, is shown to have an unusually large anharmonic correction, and is not well determined using second-order perturbation theory. The H-N fundamental is well determined from the variational calculations, demonstrating the quality of the ab initio quartic force field. The zero-point energy of HNO that should be used in isodesmic reactions is also discussed.
Generation of accurate integral surfaces in time-dependent vector fields.
Garth, Christoph; Krishnan, Han; Tricoche, Xavier; Bobach, Tom; Joy, Kenneth I
2008-01-01
We present a novel approach for the direct computation of integral surfaces in time-dependent vector fields. As opposed to previous work, which we analyze in detail, our approach is based on a separation of integral surface computation into two stages: surface approximation and generation of a graphical representation. This allows us to overcome several limitations of existing techniques. We first describe an algorithm for surface integration that approximates a series of time lines using iterative refinement and computes a skeleton of the integral surface. In a second step, we generate a well-conditioned triangulation. Our approach allows a highly accurate treatment of very large time-varying vector fields in an efficient, streaming fashion. We examine the properties of the presented methods on several example datasets and perform a numerical study of their correctness and accuracy. Finally, we investigate some visualization aspects of integral surfaces. PMID:18988990
Accurate switching intensities and length scales in quasi-phase-matched materials
Bang, Ole; Graversen, Torben Winther; Corney, Joel Frederick
2001-01-01
We consider unseeded type I second-harmonic generation in quasi-phase-matched quadratic nonlinear materials and derive an accurate analytical expression for the evolution of the average intensity. The intensity-dependent nonlinear phase mismatch that is due to the cubic nonlinearity induced by quasi phase matching is found. The equivalent formula for the intensity of maximum conversion, the crossing of which changes the one-period nonlinear phase shift of the fundamental abruptly by π, corrects earlier estimates [Opt. Lett. 23, 506 (1998)] by a factor of 5.3. We find the crystal lengths that are necessary to obtain an optimal flat phase-versus-intensity response on either side of this separatrix intensity.
Blackman, Jonathan; Galley, Chad R; Szilagyi, Bela; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A
2015-01-01
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. In this paper, we construct an accurate and fast-to-evaluate surrogate model for numerical relativity (NR) waveforms from non-spinning binary black hole coalescences with mass ratios from $1$ to $10$ and durations corresponding to about $15$ orbits before merger. Our surrogate, which is built using reduced order modeling techniques, is distinct from traditional modeling efforts. We find that the full multi-mode surrogate model agrees with waveforms generated by NR to within the numerical error of the NR code. In particular, we show that our modeling strategy produces surrogates which can correctly predict NR waveforms that were {\\em not} used for the surrogate's training. For all practical purposes, then, the surrogate waveform model is equivalent to the high-accuracy, large-scale simulation waveform but can be evaluated in a millisecond to a second dependin...
Producing accurate wave propagation time histories using the global matrix method
This paper presents a reliable method for producing accurate displacement time histories for wave propagation in laminated plates using the global matrix method. The existence of inward and outward propagating waves in the general solution is highlighted while examining the axisymmetric case of a circular actuator on an aluminum plate. Problems with previous attempts to isolate the outward wave for anisotropic laminates are shown. The updated method develops a correction signal that can be added to the original time history solution to cancel the inward wave and leave only the outward propagating wave. The paper demonstrates the effectiveness of the new method for circular and square actuators bonded to the surface of isotropic laminates, and these results are compared with exact solutions. Results for circular actuators on cross-ply laminates are also presented and compared with experimental results, showing the ability of the new method to successfully capture the displacement time histories for composite laminates. (paper)
Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method.
Zhao, Yan; Cao, Liangcai; Zhang, Hao; Kong, Dezhao; Jin, Guofan
2015-10-01
Fast calculation and correct depth cue are crucial issues in the calculation of computer-generated hologram (CGH) for high quality three-dimensional (3-D) display. An angular-spectrum based algorithm for layer-oriented CGH is proposed. Angular spectra from each layer are synthesized as a layer-corresponded sub-hologram based on the fast Fourier transform without paraxial approximation. The proposed method can avoid the huge computational cost of the point-oriented method and yield accurate predictions of the whole diffracted field compared with other layer-oriented methods. CGHs of versatile formats of 3-D digital scenes, including computed tomography and 3-D digital models, are demonstrated with precise depth performance and advanced image quality. PMID:26480062
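The angular-spectrum propagation underlying the layer-oriented approach can be sketched with a generic FFT implementation, free of the paraxial approximation. The sampling parameters are illustrative, evanescent components are simply dropped, and this is a generic textbook propagator rather than the authors' exact algorithm:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Propagate a sampled complex field by `distance` with the angular
    spectrum method: FFT to spatial frequencies, multiply by the exact
    (non-paraxial) transfer function, inverse FFT back."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * distance) * (arg > 0)   # evanescent terms dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Summing such propagated fields from each depth layer at the hologram plane yields the layer-corresponded sub-holograms described above.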
Accurate determination of the 235U isotope abundance by gamma spectrometry
The purpose of this manual is to serve as a guide in applications of the Certified Reference Material EC-NRM-171/NBS-SRM-969 for accurate U-235 isotope abundance measurements on bulk uranium samples by means of gamma spectrometry. The manual provides a thorough description of this non-destructive assay technique. Crucial measurement parameters affecting the accuracy of the gamma-spectrometric U-235 isotope abundance determination are discussed in detail and, wherever possible, evaluated quantitatively. The correction terms and tolerance limits given refer both to physical and chemical properties of the samples under assay and to relevant parameters of typical measurement systems, such as counting geometry, signal processing, data evaluation and calibration. (orig.)
J.-K. Lee
2015-11-01
There are many potential sources of bias in the radar rainfall estimation process. This study classified the biases from the rainfall estimation process into the reflectivity measurement bias and the rainfall estimation bias of the Quantitative Precipitation Estimation (QPE) model, and applied bias correction methods to improve the accuracy of the Radar-AWS Rainrate (RAR) calculation system operated by the Korea Meteorological Administration (KMA). For the Z bias correction, which addresses the reflectivity biases incurred in measuring rainfall, this study used a bias correction algorithm whose concept is that the reflectivity of the target single-pol radars is corrected against a reference dual-pol radar that has itself been corrected for hardware and software bias. This study then dealt with two post-processing methods, the Mean Field Bias Correction (MFBC) method and the Local Gauge Correction (LGC) method, to correct the rainfall estimation bias of the QPE model. The Z bias and rainfall estimation bias correction methods were applied to the RAR system, and the accuracy of the RAR system improved after correcting the Z bias. By rainfall type, although the accuracy for the Changma front and local torrential cases improved slightly even without the Z bias correction, the accuracy for the typhoon cases in particular became worse than the existing results. As a result of the rainfall estimation bias correction, Z bias_LGC was clearly superior to the MFBC method, because the LGC method applies a different rainfall bias to each grid rainfall amount. By rainfall type, the Z bias_LGC results showed that the rainfall estimates for all types were more accurate than with the Z bias alone, and the outcomes for the typhoon cases in particular were vastly superior to the others.
J.-K. Lee
2015-04-01
There are many potential sources of bias in the radar rainfall estimation process. This study classified the biases from the rainfall estimation process into the reflectivity measurement bias and the QPE model bias, and applied bias correction methods to improve the accuracy of the Radar-AWS Rainrate (RAR) calculation system operated by the Korea Meteorological Administration (KMA). For the Z bias correction, this study used a bias correction algorithm for the reflectivity, whose concept is that the reflectivity of the target single-pol radars is corrected against a reference dual-pol radar that has itself been corrected for hardware and software bias. This study then dealt with two post-processing methods, the Mean Field Bias Correction (MFBC) method and the Local Gauge Correction (LGC) method, to correct the rainfall bias. The Z bias and rainfall-bias correction methods were applied to the RAR system, and the accuracy of the RAR system improved after correcting the Z bias. By rainfall type, although the accuracy for the Changma front and local torrential cases improved slightly even without the Z bias correction, the accuracy for the typhoon cases in particular became worse than the existing results. As a result of the rainfall-bias correction, the RAR system with Z bias_LGC was clearly superior to the MFBC method, because the LGC method applies a different rainfall bias to each grid rainfall amount. By rainfall type, the Z bias_LGC results showed that the rainfall estimates for all types were more accurate than with the Z bias alone, and the outcomes for the typhoon cases in particular were vastly superior to the others.
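The two post-processing corrections can be sketched as follows: MFBC applies one field-wide gauge/radar ratio, while LGC applies a per-grid-cell factor built from the individual gauge ratios. The inverse-distance interpolation below is an illustrative choice, not necessarily the KMA implementation, and both function names are hypothetical:

```python
import numpy as np

def mean_field_bias(gauge_val, radar_at_gauge):
    """MFBC: one multiplicative factor for the whole field, the ratio of
    summed gauge rainfall to summed radar rainfall at the gauge sites."""
    return np.sum(gauge_val) / np.sum(radar_at_gauge)

def local_gauge_correction(gauge_xy, gauge_val, radar_at_gauge, grid_xy, power=2.0):
    """LGC sketch: per-gauge bias ratios spread to each grid cell by
    inverse-distance weighting, so each cell gets its own factor."""
    ratios = gauge_val / np.maximum(radar_at_gauge, 1e-9)
    corr = np.empty(len(grid_xy))
    for i, p in enumerate(grid_xy):
        d2 = np.sum((gauge_xy - p) ** 2, axis=1)
        if np.any(d2 < 1e-12):                    # grid cell sits on a gauge
            corr[i] = ratios[np.argmin(d2)]
        else:
            w = d2 ** (-power / 2.0)
            corr[i] = np.sum(w * ratios) / np.sum(w)
    return corr
```

Where gauges disagree (one reading radar too low, a neighbour too high), a single MFBC factor averages the error away, while LGC preserves the local structure of the bias.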
DOE/NV
2000-11-03
This addendum to the Corrective Action Investigation Plan (CAIP) contains the U.S. Department of Energy, Nevada Operations Office's approach to determining the extent of contamination existing at Corrective Action Unit (CAU) 321. This addendum was required when the extent of contamination exceeded the estimate in the original Corrective Action Decision Document (CADD). Located in Area 22 on the Nevada Test Site, Corrective Action Unit 321, Weather Station Fuel Storage, consists of Corrective Action Site 22-99-05, Fuel Storage Area, which was used to store fuel and other petroleum products necessary for motorized operations at the historic Camp Desert Rock facility. This facility was operational from 1951 to 1958 and dismantled after 1958. Based on site history and earlier investigation activities at CAU 321, the contaminant of potential concern (COPC) was previously identified as total petroleum hydrocarbons (diesel-range organics). The scope of this corrective action investigation for the Fuel Storage Area will include the selection of biased sample locations to determine the vertical and lateral extent of contamination, collection of soil samples using rotary sonic drilling techniques, and the utilization of field-screening methods to accurately determine the extent of COPC contamination. The results of this field investigation will support a defensible evaluation of corrective action alternatives and be included in the revised CADD.
Network error correction with unequal link capacities
Kim, Sukwon; Ho, Tracey; Effros, Michelle; Avestimehr, Amir Salman
2010-01-01
We study network error correction with unequal link capacities. Previous results on network error correction assume unit link capacities. We consider network error correction codes that can correct arbitrary errors occurring on up to z links. We find the capacity of a network consisting of parallel links, and a generalized Singleton outer bound for any arbitrary network. We show by example that linear coding is insufficient for achieving capacity in general. In our exampl...
Equivalent method for accurate solution to linear interval equations
王冲; 邱志平
2013-01-01
Based on linear interval equations, an accurate interval finite element method for solving structural static problems with uncertain parameters in terms of optimization is discussed. On the premise of ensuring the consistency of solution sets, the original interval equations are equivalently transformed into deterministic inequations. On this basis, calculating the structural displacement response with interval parameters is reduced to a number of deterministic linear optimization problems. The results are proved to be accurate with respect to the interval governing equations. Finally, a numerical example is given to demonstrate the feasibility and efficiency of the proposed method.
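For very small systems, optimization-based bounds like these can be cross-checked by brute force: solve the system at every vertex of the interval parameter box and take the componentwise extremes. The cost is exponential and the vertex hull coincides with the exact hull only under regularity conditions, so this is a reference check under stated assumptions, not the paper's method:

```python
import itertools
import numpy as np

def interval_solve_vertices(A_lo, A_hi, b_lo, b_hi):
    """Enclose {x : A x = b, A in [A_lo, A_hi], b in [b_lo, b_hi]} by
    solving the deterministic system at every vertex of the interval box
    and recording the componentwise min/max of the solutions."""
    n = A_lo.shape[0]
    lo = np.full(n, np.inf)
    hi = np.full(n, -np.inf)
    for a_mask in itertools.product([0, 1], repeat=n * n):
        A = np.where(np.array(a_mask).reshape(n, n) == 1, A_hi, A_lo)
        for b_mask in itertools.product([0, 1], repeat=n):
            b = np.where(np.array(b_mask) == 1, b_hi, b_lo)
            x = np.linalg.solve(A, b)
            lo = np.minimum(lo, x)
            hi = np.maximum(hi, x)
    return lo, hi
```

With n unknowns this performs 2^(n²+n) solves, which is why the paper's reduction to a handful of linear optimization problems matters in practice.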
Accurate upwind-monotone (nonoscillatory) methods for conservation laws
Huynh, Hung T.
1992-01-01
The well known MUSCL scheme of Van Leer is constructed using a piecewise linear approximation. The MUSCL scheme is second order accurate at the smooth part of the solution except at extrema where the accuracy degenerates to first order due to the monotonicity constraint. To construct accurate schemes which are free from oscillations, the author introduces the concept of upwind monotonicity. Several classes of schemes, which are upwind monotone and of uniform second or third order accuracy are then presented. Results for advection with constant speed are shown. It is also shown that the new scheme compares favorably with state of the art methods.
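A minimal sketch of a MUSCL-type scheme for linear advection with the minmod limiter, one common upwind-monotone choice; the paper's uniformly second- and third-order constructions differ in detail, so this shows only the baseline behaviour they improve on:

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller slope when signs agree, else zero."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_advect(u, c, steps):
    """Advance u_t + a*u_x = 0 (a > 0, periodic grid) with a limited
    second-order upwind scheme; c = a*dt/dx must lie in (0, 1]."""
    for _ in range(steps):
        du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slope
        ur = u + 0.5 * (1.0 - c) * du                        # right-interface state
        u = u - c * (ur - np.roll(ur, 1))
    return u
```

With `du = 0` this degenerates to first-order upwind; near extrema minmod forces exactly that, which is the accuracy loss to first order at extrema that the abstract describes.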
A rigid motion correction method for helical computed tomography (CT)
We propose a method to compensate for six degree-of-freedom rigid motion in helical CT of the head. The method is demonstrated in simulations and in helical scans performed on a 16-slice CT scanner. Scans of a Hoffman brain phantom were acquired while an optical motion tracking system recorded the motion of the bed and the phantom. Motion correction was performed by restoring projection consistency using data from the motion tracking system, and reconstructing with an iterative fully 3D algorithm. Motion correction accuracy was evaluated by comparing reconstructed images with a stationary reference scan. We also investigated the effects on accuracy of tracker sampling rate, measurement jitter, interpolation of tracker measurements, and the synchronization of motion data and CT projections. After optimization of these aspects, motion corrected images corresponded remarkably closely to images of the stationary phantom with correlation and similarity coefficients both above 0.9. We performed a simulation study using volunteer head motion and found similarly that our method is capable of compensating effectively for realistic human head movements. To the best of our knowledge, this is the first practical demonstration of generalized rigid motion correction in helical CT. Its clinical value, which we have yet to explore, may be significant. For example it could reduce the necessity for repeat scans and resource-intensive anesthetic and sedation procedures in patient groups prone to motion, such as young children. It is not only applicable to dedicated CT imaging, but also to hybrid PET/CT and SPECT/CT, where it could also ensure an accurate CT image for lesion localization and attenuation correction of the functional image data. (paper)
Rebich, N.J. [AGT Services Inc., Amsterdam, NY (United States)
2005-07-01
Electrical testing and diagnostics of rotating electrical machinery are important for condition assessment that ensures reliable service. The testing generally involves a thorough visual inspection and an evaluation of both the electrical insulation and conductor circuit integrity using specific electrical test equipment and pre-established acceptance criteria. Most electric utilities have specialists who conduct maintenance testing both on and off line. They document and evaluate electrical test data to determine what actions are needed to ensure the reliability of equipment. They also determine if there are any unwanted trends that can influence reliability. These trends are useful in planning and budgeting future maintenance outage repairs and in providing experience-based and accurate risk assessment for deferral decisions. This paper presents a case study of a 1958-vintage General Electric 166 MVA, 18 kV, 45 psig hydrogen inner-gas-cooled winding generator. A problem in the unit began in 1990, but it was not accurately diagnosed and corrected until 2004, at which time it had reached a condition of impending failure. This paper describes the initial testing and inspection routines, initial findings, repair, and a summary of further investigation and repair considerations. It was suggested that the strand failure was caused by high-cycle fatigue of the unsupported strands within the clip caps. A map of failure locations with respect to the winding circuit was constructed by AGT Services to determine if there was any correlation between failures in terms of the electrical operational characteristics of the machine. Review of the data showed that the failures were random with respect to the electrical circuit. Determining the extent and location of the damage made it possible to develop a reliable repair strategy while avoiding a complete stator rewind. It also made it possible to correct inherent original design deficiencies. The unit was returned to full service in 2004.
Monte Carlo scatter correction for SPECT
Liu, Zemei
The goal of this dissertation is to present a quantitatively accurate and computationally fast scatter correction method that is robust and easily accessible for routine applications in SPECT imaging. A Monte Carlo based scatter estimation method is investigated and developed further. The Monte Carlo simulation program SIMIND (Simulating Medical Imaging Nuclear Detectors), was specifically developed to simulate clinical SPECT systems. The SIMIND scatter estimation (SSE) method was developed further using a multithreading technique to distribute the scatter estimation task across multiple threads running concurrently on multi-core CPU's to accelerate the scatter estimation process. An analytical collimator that ensures less noise was used during SSE. The research includes the addition to SIMIND of charge transport modeling in cadmium zinc telluride (CZT) detectors. Phenomena associated with radiation-induced charge transport including charge trapping, charge diffusion, charge sharing between neighboring detector pixels, as well as uncertainties in the detection process are addressed. Experimental measurements and simulation studies were designed for scintillation crystal based SPECT and CZT based SPECT systems to verify and evaluate the expanded SSE method. Jaszczak Deluxe and Anthropomorphic Torso Phantoms (Data Spectrum Corporation, Hillsborough, NC, USA) were used for experimental measurements and digital versions of the same phantoms employed during simulations to mimic experimental acquisitions. This study design enabled easy comparison of experimental and simulated data. The results have consistently shown that the SSE method performed similarly or better than the triple energy window (TEW) and effective scatter source estimation (ESSE) methods for experiments on all the clinical SPECT systems. The SSE method is proven to be a viable method for scatter estimation for routine clinical use.
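The triple-energy-window (TEW) method used above as a comparison baseline is a simple per-pixel formula: counts in two narrow windows flanking the photopeak are converted to counts/keV and trapezoid-averaged across the photopeak window to estimate the scatter under the peak. A sketch, with illustrative window widths:

```python
def tew_scatter(c_peak, c_lower, c_upper, w_peak, w_lower, w_upper):
    """TEW estimate for one pixel/bin.
    c_*: counts in the photopeak, lower-side, and upper-side windows;
    w_*: the corresponding window widths in keV.
    Returns (estimated primary counts, estimated scatter counts)."""
    scatter = (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0
    primary = max(c_peak - scatter, 0.0)
    return primary, scatter
```

Because TEW needs only three energy windows and no transport modeling, it is fast but noisy, which is the trade-off against Monte Carlo estimators such as the SSE method studied here.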
Correction magnet power supplies for APS machine
A number of correction magnets are required for the Advanced Photon Source (APS) machine to correct the beam. There are five kinds of correction magnets for the storage ring, two for the injector synchrotron, and two for the positron accumulator ring (PAR). Table I shows a summary of the correction magnet power supplies for the APS machine. For the storage ring, the displacement of the quadrupole magnets due to low-frequency vibration below 25 Hz has the most significant effect on the stability of the positron closed orbit. The primary external source of the low-frequency vibration is ground motion of approximately 20 μm amplitude, with frequency components concentrated below 10 Hz. These low-frequency vibrations can be corrected by using the correction magnets, whose field strengths are controlled individually through the feedback loop comprising the beam position monitoring system. The correction field required could be either positive or negative. Thus, for all the correction magnets, bipolar power supplies (BPSs) are required to produce both polarities of correction fields. Three different types of BPS are used for all the correction magnets. Type I BPSs cover all the correction magnets for the storage ring, except for the trim dipoles. The maximum output current of the Type I BPS is 140 Adc. A Type II BPS powers a trim dipole, and its maximum output current is 60 Adc. The injector synchrotron and PAR correction magnets are powered from Type III BPSs, whose maximum output current is 25 Adc.
75 FR 2510 - Procurement List; Corrections
2010-01-15
... services on January 11, 2010 (75 FR 1354-1355). The correct date that comments should be received is... FR 1355-1356). The correct effective date should be February 11, 2010. ADDRESSES: Committee for... PEOPLE WHO ARE BLIND OR SEVERELY DISABLED Procurement List; Corrections AGENCY: Committee for...
45 CFR 1225.19 - Corrective action.
2010-10-01
... 45 Public Welfare 4 2010-10-01 2010-10-01 false Corrective action. 1225.19 Section 1225.19 Public... Corrective action. (a) When discrimination is found, Peace Corps or ACTION must take appropriate action to... corrective action to the agent and other class members in accordance with § 1225.10 of this part. (b)...
40 CFR 192.04 - Corrective action.
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Corrective action. 192.04 Section 192... Corrective action. If the groundwater concentration limits established for disposal sites under provisions of § 192.02(c) are found or projected to be exceeded, a corrective action program shall be placed...
45 CFR 1225.10 - Corrective action.
2010-10-01
... 45 Public Welfare 4 2010-10-01 2010-10-01 false Corrective action. 1225.10 Section 1225.10 Public... Corrective action. When it has been determined by Final Agency Decision that the aggrieved party has been subjected to illegal discrimination, the following corrective actions may be taken: (a) Selection as...
10 CFR 72.172 - Corrective action.
2010-01-01
... 10 Energy 2 2010-01-01 2010-01-01 false Corrective action. 72.172 Section 72.172 Energy NUCLEAR... Corrective action. The licensee, applicant for a license, certificate holder, and applicant for a CoC shall... that the cause of the condition is determined and corrective action is taken to preclude...
42 CFR 431.246 - Corrective action.
2010-10-01
... 42 Public Health 4 2010-10-01 2010-10-01 false Corrective action. 431.246 Section 431.246 Public... Recipients Procedures § 431.246 Corrective action. The agency must promptly make corrective payments, retroactive to the date an incorrect action was taken, and, if appropriate, provide for admission...
40 CFR 35.3170 - Corrective action.
2010-07-01
... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Corrective action. 35.3170 Section 35... STATE AND LOCAL ASSISTANCE State Water Pollution Control Revolving Funds § 35.3170 Corrective action. (a... will notify the State of such noncompliance and prescribe the necessary corrective action. Failure...
34 CFR 200.42 - Corrective action.
2010-07-01
... 34 Education 1 2010-07-01 2010-07-01 false Corrective action. 200.42 Section 200.42 Education... Programs Operated by Local Educational Agencies Lea and School Improvement § 200.42 Corrective action. (a) Definition. “Corrective action” means action by an LEA that— (1) Substantially and directly responds to—...
10 CFR 71.133 - Corrective action.
2010-01-01
... 10 Energy 2 2010-01-01 2010-01-01 false Corrective action. 71.133 Section 71.133 Energy NUCLEAR....133 Corrective action. The licensee, certificate holder, and applicant for a CoC shall establish... determined and corrective action taken to preclude repetition. The identification of the...
Physics of the Power Corrections in QCD
Gubarev, F V; Zakharov, V I
1999-01-01
We review the physics of the power corrections to the parton model. In the first part, we consider the power corrections which characterize the infrared sensitivity of Feynman graphs when the contribution of short distances dominates. The second part is devoted to the hypothetical power corrections associated with nonperturbative effects at small distances.
Lim, Teik-Cheng
2016-05-01
For moderately thick plates, the use of First order Shear Deformation Theory (FSDT) with a constant shear correction factor of 5/6 is sufficient to take into account the plate deflection arising from transverse shear deformation. For very thick plates, the use of Third order Shear Deformation Theory (TSDT) is preferred as it allows the shear strain distribution to be varied through the plate thickness. Therefore no correction factor is required in TSDT, unlike FSDT. Due to the complexity involved in TSDT, this paper obtains a more accurate shear correction factor for use in FSDT of very thick simply supported and uniformly loaded isosceles right triangular plates based on the TSDT. By matching the maximum deflections for this plate according to FSDT and TSDT, a variable shear correction factor is obtained. Results show that the shear correction factor for the simplified TSDT, i.e. 14/17, is least accurate. The commonly adopted shear correction factor of 5/6 in FSDT is valid only for very thin or highly auxetic plates. This paper provides a variable shear correction for FSDT deflection that matches the plate deflection by TSDT. This variable shear correction factor allows designers to justify the use of a commonly adopted shear correction factor of 5/6 even for very thick plates as long as the Poisson’s ratio of the plate material is sufficiently negative.
Accurate Mass Determinations in Decay Chains with Missing Energy
Cheng, Hsin-Chia; Engelhardt, Dalit; Gunion, John F.; Han, Zhenyu; McElrath, Bob
2008-01-01
Many beyond the Standard Model theories include a stable dark matter candidate that yields missing / invisible energy in collider detectors. If observed at the Large Hadron Collider, we must determine if its mass and other properties (and those of its partners) predict the correct dark matter relic density. We give a new procedure for determining its mass with small error.
Fringe capacitance correction for a coaxial soil cell.
Pelletier, Mathew G; Viera, Joseph A; Schwartz, Robert C; Lascano, Robert J; Evett, Steven R; Green, Tim R; Wanjura, John D; Holt, Greg A
2011-01-01
Accurate measurement of moisture content is a prime requirement in hydrological, geophysical and biogeochemical research as well as for material characterization and process control. Within these areas, accurate measurement of surface area and bound water content is becoming increasingly important for providing answers to many fundamental questions, ranging from characterization of cotton fiber maturity, to accurate characterization of soil water content in soil water conservation research, to bio-plant water utilization, to chemical reactions and diffusion of ionic species across membranes in cells as well as in the dense suspensions that occur in surface films. One promising technique to address the increasing demands for higher accuracy water content measurements is electrical permittivity characterization of materials. This technique has enjoyed a strong following in the soil-science and geological community through measurements of apparent permittivity via time-domain reflectometry (TDR) as well as in many process control applications. Recent research, however, indicates a need to increase the accuracy beyond that available from traditional TDR. The most logical pathway then becomes a transition from TDR-based measurements to network analyzer measurements of absolute permittivity that will remove the adverse effects that high surface area soils and conductivity impart onto measurements of apparent permittivity in traditional TDR applications. This research examines an observed experimental error for the coaxial probe, from which the modern TDR probe originated, which is hypothesized to be due to fringe capacitance. The research provides an experimental and theoretical basis for the cause of the error and provides a technique by which to correct the system to remove this source of error. To test this theory, a Poisson model of a coaxial cell was formulated to calculate the effective theoretical extra length caused by the fringe capacitance.
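The effective-extra-length correction described above can be sketched numerically. This is a hedged illustration of the idea only; the cell constants, lengths, and permittivity below are invented, not values from the study.

```python
# Hedged sketch of the fringe-capacitance idea: model the fringe field as an
# effective extra length dL of the coaxial cell, so the measured capacitance
# behaves like C = eps_r * c0 * (L + dL) rather than eps_r * c0 * L.
# All symbols (c0, L, dL, eps_true) are illustrative assumptions.

def apparent_permittivity(C_measured, c0, L):
    """Naive permittivity estimate that ignores the fringe field."""
    return C_measured / (c0 * L)

def corrected_permittivity(C_measured, c0, L, dL):
    """Permittivity estimate using the effective extra length dL."""
    return C_measured / (c0 * (L + dL))

# A cell filled with a material of eps_r = 4 that is read out naively
# overestimates the permittivity; the extra-length correction removes this.
c0, L, dL, eps_true = 1.0e-10, 0.10, 0.005, 4.0   # hypothetical values
C = eps_true * c0 * (L + dL)                      # what the analyzer sees
eps_naive = apparent_permittivity(C, c0, L)
eps_corr = corrected_permittivity(C, c0, L, dL)
```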
Correction of a Depth-Dependent Lateral Distortion in 3D Super-Resolution Imaging.
Lina Carlini
Three-dimensional (3D) localization-based super-resolution microscopy (SR) requires correction of aberrations to accurately represent 3D structure. Here we show how a depth-dependent lateral shift in the apparent position of a fluorescent point source, which we term `wobble`, results in warped 3D SR images and provide a software tool to correct this distortion. This system-specific lateral shift is typically > 80 nm across an axial range of ~ 1 μm. A theoretical analysis based on phase retrieval data from our microscope suggests that the wobble is caused by non-rotationally symmetric phase and amplitude aberrations in the microscope's pupil function. We then apply our correction to the bacterial cytoskeletal protein FtsZ in live bacteria and demonstrate that the corrected data more accurately represent the true shape of this vertically-oriented ring-like structure. We also include this correction method in a registration procedure for dual-color, 3D SR data and show that it improves target registration error (TRE) at the axial limits over an imaging depth of 1 μm, yielding TRE values of < 20 nm. This work highlights the importance of correcting aberrations in 3D SR to achieve high fidelity between the measurements and the sample.
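The correction amounts to subtracting a calibrated, depth-dependent lateral shift from each localization. A minimal sketch, assuming a made-up piecewise-linear calibration of wobble versus axial position z (the actual tool fits system-specific phase-retrieval data):

```python
# Hedged sketch of a depth-dependent lateral ("wobble") correction: given a
# system-specific calibration of apparent lateral shift versus axial
# position z, subtract the interpolated shift from each 3D localization.
# The calibration samples below are invented for illustration.
import bisect

def interp(z, zs, vals):
    """Piecewise-linear interpolation of the calibration curve."""
    i = min(max(bisect.bisect_left(zs, z), 1), len(zs) - 1)
    t = (z - zs[i - 1]) / (zs[i] - zs[i - 1])
    return vals[i - 1] + t * (vals[i] - vals[i - 1])

def correct_localization(x, y, z, zs, wobble_x, wobble_y):
    """Remove the calibrated lateral shift at depth z (units: nm)."""
    return x - interp(z, zs, wobble_x), y - interp(z, zs, wobble_y), z

# Hypothetical calibration: lateral shift in nm over a ~1 um axial range.
zs       = [0.0, 250.0, 500.0, 750.0, 1000.0]
wobble_x = [0.0, 20.0, 45.0, 65.0, 80.0]
wobble_y = [0.0, -10.0, -15.0, -25.0, -30.0]
x, y, z = correct_localization(100.0, 200.0, 500.0, zs, wobble_x, wobble_y)
```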
Ruggiero, Michael T; Gooch, Jonathan; Zubieta, Jon; Korter, Timothy M
2016-02-18
The problem of nonlocal interactions in density functional theory calculations has in part been mitigated by the introduction of range-corrected functional methods. While promising solutions, the continued evaluation of range corrections in the structural simulations of complex molecular crystals is required to judge their efficacy in challenging chemical environments. Here, three pyridinium-based crystals, exhibiting a wide range of intramolecular and intermolecular interactions, are used as benchmark systems for gauging the accuracy of several range-corrected density functional techniques. The computational results are compared to low-temperature experimental single-crystal X-ray diffraction and terahertz spectroscopic measurements, enabling the direct assessment of range correction in the accurate simulation of the potential energy surface minima and curvatures. Ultimately, the simultaneous treatment of both short- and long-range effects by the ωB97-X functional was found to be central to its rank as the top performer in reproducing the complex array of forces that occur in the studied pyridinium solids. These results demonstrate that while long-range corrections are the most commonly implemented range-dependent improvements to density functionals, short-range corrections are vital for the accurate reproduction of forces that rapidly diminish with distance, such as quadrupole-quadrupole interactions. PMID:26814572
Non-linear crustal corrections in high-resolution regional waveform seismic tomography
Marone, Federica; Romanowicz, Barbara
2007-07-01
We compare 3-D upper mantle anisotropic structures beneath the North American continent obtained using standard and improved crustal corrections in the framework of Non-linear Asymptotic Coupling Theory (NACT) applied to long period three component fundamental and higher mode surface waveform data. Our improved approach to correct for crustal structure in high-resolution regional waveform tomographic models goes beyond the linear perturbation approximation, and is therefore more accurate in accounting for large variations in Moho topography within short distances as observed, for instance, at ocean-continent margins. This improved methodology decomposes the shallow-layer correction into a linear and non-linear part and makes use of 1-D sensitivity kernels defined according to local tectonic structure, both for the forward computation and for the computation of sensitivity kernels for inversion. The comparison of the 3-D upper mantle anisotropic structures derived using the standard and improved crustal correction approaches shows that the model norm is not strongly affected. However, significant variations are observed in the retrieved 3-D perturbations. The largest differences in the velocity models are present below 250 km depth and not in the uppermost mantle, as would be expected. We suggest that inaccurate crustal corrections preferentially map into the least constrained part of the model and therefore accurate corrections for shallow-layer structure are essential to improve our knowledge of parts of the upper mantle where our data have the smallest sensitivity.
Correct Linearization of Einstein's Equations
Rabounski D.
2006-04-01
Routinely, Einstein's equations can be reduced to a wave form (linearly dependent on the second derivatives of the space metric) in the absence of gravitation, the space rotation and Christoffel's symbols. As shown herein, the origin of the problem is the use of the general covariant theory of measurement. Herein the wave form of Einstein's equations is obtained in terms of Zelmanov's chronometric invariants (physically observable projections on the observer's time line and spatial section). The equations so obtained depend solely upon the second derivatives, even for gravitation, the space rotation and Christoffel's symbols. The correct linearization proves that the Einstein equations are completely compatible with weak waves of the metric.
Pileup correction of microdosimetric spectra
Langen, K M; Lennox, A J; Kroc, T K; De Luca, P M
2002-01-01
Microdosimetric spectra were measured at the Fermilab neutron therapy facility using low pressure proportional counters operated in pulse mode. The neutron beam has a very low duty cycle (<0.1%) and consequently a high instantaneous dose rate which causes distortions of the microdosimetric spectra due to pulse pileup. The determination of undistorted spectra at this facility necessitated (i) the modified operation of the proton accelerator to reduce the instantaneous dose rate and (ii) the establishment of a computational procedure to correct the measured spectra for remaining pileup distortions. In support of the latter effort, two different pileup simulation algorithms using analytical and Monte-Carlo-based approaches were developed. While the analytical algorithm allows a detailed analysis of pileup processes it only treats two-pulse and three-pulse pileup and its validity is hence restricted. A Monte-Carlo-based pileup algorithm was developed that inherently treats all degrees of pileup. This algorithm...
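The Monte-Carlo pileup idea described above can be sketched compactly: pulses from a Poisson process whose arrivals fall within a resolving time are summed into one recorded amplitude, distorting the measured spectrum. A hedged, generic illustration; the rate, resolving time, and pulse-height distribution below are invented and do not represent the facility's beam.

```python
# Hedged Monte-Carlo sketch of pulse pileup: consecutive arrivals closer
# than the resolving time merge into a single recorded amplitude.
import random

def simulate_pileup(rate, resolving_time, n_pulses, draw_height, seed=0):
    rng = random.Random(seed)
    t = 0.0
    recorded, current, last = [], None, float("-inf")
    for _ in range(n_pulses):
        t += rng.expovariate(rate)          # Poisson arrival times
        h = draw_height(rng)
        if current is not None and t - last < resolving_time:
            current += h                    # pileup: amplitudes add
        else:
            if current is not None:
                recorded.append(current)
            current = h
        last = t
    if current is not None:
        recorded.append(current)
    return recorded

# Monoenergetic pulses of height 1: any recorded height > 1 is pure pileup.
heights = simulate_pileup(rate=1e6, resolving_time=2e-6,
                          n_pulses=20000, draw_height=lambda r: 1.0)
pileup_fraction = sum(h > 1.0 for h in heights) / len(heights)
```

With the mean inter-pulse gap (1 μs) shorter than the resolving time (2 μs), most recorded events are piled up, mimicking the high-instantaneous-dose-rate regime the abstract describes.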
Fitzpatrick, A Liam
2016-01-01
We use results on Virasoro conformal blocks to study chaotic dynamics in CFT$_2$ at large central charge c. The Lyapunov exponent $\\lambda_L$, which is a diagnostic for the early onset of chaos, receives $1/c$ corrections that may be interpreted as $\\lambda_L = \\frac{2 \\pi}{\\beta} \\left( 1 + \\frac{12}{c} \\right)$. However, out of time order correlators receive other equally important $1/c$ suppressed contributions that do not have such a simple interpretation. We revisit the proof of a bound on $\\lambda_L$ that emerges at large $c$, focusing on CFT$_2$ and explaining why our results do not conflict with the analysis leading to the bound. We also comment on relationships between chaos, scattering, causality, and bulk locality.
Radiative corrections in bumblebee electrodynamics
R.V. Maluf
2015-10-01
We investigate some quantum features of the bumblebee electrodynamics in flat spacetimes. The bumblebee field is a vector field that leads to a spontaneous Lorentz symmetry breaking. For a smooth quadratic potential, the massless excitation (Nambu–Goldstone boson) can be identified as the photon, transversal to the vacuum expectation value of the bumblebee field. Besides, there is a massive excitation associated with the longitudinal mode, whose presence leads to instability in the spectrum of the theory. By using the principal-value prescription, we show that no one-loop radiative corrections to the mass term are generated. Moreover, the bumblebee self-energy is not transverse, showing that the propagation of the longitudinal mode cannot be excluded from the effective theory.
Tao, Jianmin, E-mail: jianmin.tao@temple.edu [Department of Physics, Temple University, Philadelphia, Pennsylvania 19122 (United States); Rappe, Andrew M. [Department of Chemistry, University of Pennsylvania, Philadelphia, Pennsylvania 19104-6323 (United States)
2016-01-21
Due to the absence of the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of the vdW correction. However, the vdW correction based on the leading-order coefficient C{sub 6} alone can only achieve limited accuracy, while accurate modeling of higher-order coefficients remains a formidable task, due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C{sub 8} and C{sub 10} between small molecules. We find that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C{sub 8} and 7% for C{sub 10}. Inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry.
Is Expressive Language Disorder an Accurate Diagnostic Category?
Leonard, Laurence B.
2009-01-01
Purpose: To propose that the diagnostic category of "expressive language disorder" as distinct from a disorder of both expressive and receptive language might not be accurate. Method: Evidence that casts doubt on a pure form of this disorder is reviewed from several sources, including the literature on genetic findings, theories of language…
Accurate momentum transfer cross section for the attractive Yukawa potential
Khrapak, S. A., E-mail: Sergey.Khrapak@dlr.de [Forschungsgruppe Komplexe Plasmen, Deutsches Zentrum für Luft- und Raumfahrt, Oberpfaffenhofen (Germany)
2014-04-15
An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with the numerical results to better than ±2% in the regime relevant for ion-particle collisions in complex (dusty) plasmas.
Is a Writing Sample Necessary for "Accurate Placement"?
Sullivan, Patrick; Nielsen, David
2009-01-01
The scholarship about assessment for placement is extensive and notoriously ambiguous. Foremost among the questions that continue to be unresolved in this scholarship is this one: Is a writing sample necessary for "accurate placement"? Using a robust data sample of student assessment essays and ACCUPLACER test scores, we put this question to the…
Fast and Accurate Residential Fire Detection Using Wireless Sensor Networks
Bahrepour, Majid; Meratnia, Nirvana; Havinga, Paul J.M.
2010-01-01
Prompt and accurate residential fire detection is important for on-time fire extinguishing and consequently reducing damage and loss of life. To detect fire, sensors are needed to measure environmental parameters, and algorithms are required to decide about the occurrence of fire. Recently, wireless s
Efficient and accurate sound propagation using adaptive rectangular decomposition.
Raghuvanshi, Nikunj; Narain, Rahul; Lin, Ming C
2009-01-01
Accurate sound rendering can add significant realism to complement visual display in interactive applications, as well as facilitate acoustic predictions for many engineering applications, like accurate acoustic analysis for architectural design. Numerical simulation can provide this realism most naturally by modeling the underlying physics of wave propagation. However, wave simulation has traditionally posed a tough computational challenge. In this paper, we present a technique which relies on an adaptive rectangular decomposition of 3D scenes to enable efficient and accurate simulation of sound propagation in complex virtual environments. It exploits the known analytical solution of the Wave Equation in rectangular domains, and utilizes an efficient implementation of the Discrete Cosine Transform on Graphics Processors (GPU) to achieve at least a 100-fold performance gain compared to a standard Finite-Difference Time-Domain (FDTD) implementation with comparable accuracy, while also being 10-fold more memory efficient. Consequently, we are able to perform accurate numerical acoustic simulation on large, complex scenes in the kilohertz range. To the best of our knowledge, it was not previously possible to perform such simulations on a desktop computer. Our work thus enables acoustic analysis on large scenes and auditory display for complex virtual environments on commodity hardware. PMID:19590105
Accurate Period Approximation for Any Simple Pendulum Amplitude
XUE De-Sheng; ZHOU Zhao; GAO Mei-Zhen
2012-01-01
Accurate approximate analytical formulae of the pendulum period composed of a few elementary functions for any amplitude are constructed. Based on an approximation of the elliptic integral, two new logarithmic formulae for large amplitude close to 180° are obtained. Considering the trigonometric function modulation results from the dependence of relative error on the amplitude, we realize accurate approximation period expressions for any amplitude between 0 and 180°. A relative error less than 0.02% is achieved for any amplitude. This kind of modulation is also effective for other large-amplitude logarithmic approximation expressions.
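The exact large-amplitude period that such approximate formulae are benchmarked against is T/T0 = (2/π) K(sin(θ0/2)), with K the complete elliptic integral of the first kind. A short sketch computing it via the arithmetic-geometric mean (a standard stdlib-only route; this is a generic reference computation, not the paper's approximation):

```python
# Exact pendulum period ratio T/T0 = (2/pi) * K(sin(theta0/2)), with K
# evaluated through the arithmetic-geometric mean: K(k) = pi / (2*AGM(1, k')).
import math

def ellipk(k):
    """Complete elliptic integral of the first kind K(k), 0 <= k < 1."""
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:               # AGM iteration converges fast
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return math.pi / (2.0 * a)

def period_ratio(theta0_deg):
    """Exact T/T0 for amplitude theta0 in degrees (T0: small-angle period)."""
    k = math.sin(math.radians(theta0_deg) / 2.0)
    return 2.0 / math.pi * ellipk(k)

# Small amplitudes recover T/T0 -> 1; at 90 degrees the period is ~18% longer.
r90 = period_ratio(90.0)
```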
Second-order accurate nonoscillatory schemes for scalar conservation laws
Huynh, Hung T.
1989-01-01
Explicit finite difference schemes for the computation of weak solutions of nonlinear scalar conservation laws are presented and analyzed. These schemes are uniformly second-order accurate and nonoscillatory in the sense that the number of extrema of the discrete solution is not increasing in time.
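A minimal example of this class of schemes, sketched under stated assumptions: a generic second-order, minmod-limited upwind (MUSCL-type) scheme for linear advection with periodic boundaries, not the paper's specific schemes. The limiter is what keeps the discrete solution from developing new extrema.

```python
# Hedged sketch of a second-order nonoscillatory (TVD) scheme for
# u_t + a u_x = 0, a > 0, on a periodic grid: upwind fluxes built from
# minmod-limited linear reconstructions.

def minmod(p, q):
    """Limited slope: smallest magnitude when signs agree, else zero."""
    if p > 0 and q > 0:
        return min(p, q)
    if p < 0 and q < 0:
        return max(p, q)
    return 0.0

def step(u, cfl):
    """One time step; cfl = a*dt/dx must satisfy 0 < cfl <= 1."""
    n = len(u)
    s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # Upwind flux through the right face of cell i, divided by a.
    f = [u[i] + 0.5 * (1.0 - cfl) * s[i] for i in range(n)]
    return [u[i] - cfl * (f[i] - f[i - 1]) for i in range(n)]

# Advect a step profile: the limiter keeps the solution within its initial
# bounds (no new extrema), unlike an unlimited second-order scheme.
u = [1.0 if 10 <= i < 30 else 0.0 for i in range(100)]
for _ in range(200):
    u = step(u, 0.5)
```

The update is in conservation form, so the discrete integral of u is preserved exactly up to roundoff.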
Accurate segmentation of dense nanoparticles by partially discrete electron tomography
Roelandts, T., E-mail: tom.roelandts@ua.ac.be [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Batenburg, K.J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, 1098 XG Amsterdam (Netherlands); Biermans, E. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Kuebel, C. [Institute of Nanotechnology, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Sijbers, J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium)
2012-03-15
Accurate segmentation of nanoparticles within various matrix materials is a difficult problem in electron tomography. Due to artifacts related to image series acquisition and reconstruction, global thresholding of reconstructions computed by established algorithms, such as weighted backprojection or SIRT, may result in unreliable and subjective segmentations. In this paper, we introduce the Partially Discrete Algebraic Reconstruction Technique (PDART) for computing accurate segmentations of dense nanoparticles of constant composition. The particles are segmented directly by the reconstruction algorithm, while the surrounding regions are reconstructed using continuously varying gray levels. As no properties are assumed for the other compositions of the sample, the technique can be applied to any sample where dense nanoparticles must be segmented, regardless of the surrounding compositions. For both experimental and simulated data, it is shown that PDART yields significantly more accurate segmentations than those obtained by optimal global thresholding of the SIRT reconstruction. - Highlights: • We present a novel reconstruction method for partially discrete electron tomography. • It accurately segments dense nanoparticles directly during reconstruction. • The gray level to use for the nanoparticles is determined objectively. • The method expands the set of samples for which discrete tomography can be applied.
Temperature correction in conductivity measurements
Smith, Stanford H.
1962-01-01
Electrical conductivity has been widely used in freshwater research but usual methods employed by limnologists for converting measurements to conductance at a given temperature have not given uniformly accurate results. The temperature coefficient used to adjust conductivity of natural waters to a given temperature varies depending on the kinds and concentrations of electrolytes, the temperature at the time of measurement, and the temperature to which measurements are being adjusted. The temperature coefficient was found to differ for various lake and stream waters, and showed seasonal changes. High precision can be obtained only by determining temperature coefficients for each water studied. Mean temperature coefficients are given for various temperature ranges that may be used where less precision is required.
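The usual linear compensation discussed above can be sketched in one line of arithmetic. Note the hedge the article itself makes: the coefficient alpha genuinely varies by water body and season, so the common 0.02 per °C default used here is only an illustration.

```python
# Hedged sketch of linear temperature compensation of conductivity:
# refer a reading at temperature T to the reference temperature 25 C.
# alpha = 0.02 (2% per degree C) is a common default, not a universal value.

def conductivity_at_25(ec_t, temp_c, alpha=0.02):
    """EC25 = EC_T / (1 + alpha * (T - 25))."""
    return ec_t / (1.0 + alpha * (temp_c - 25.0))

# A reading of 400 uS/cm at 15 C corresponds to 500 uS/cm at 25 C
# under this default coefficient.
ec25 = conductivity_at_25(400.0, 15.0)
```

Achieving the precision the article describes means replacing the default alpha with a coefficient measured for each water and temperature range.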
Accurate membrane tracing in three-dimensional reconstructions from electron cryotomography data
Page, Christopher; Hanein, Dorit; Volkmann, Niels, E-mail: niels@burnham.org
2015-08-15
The connection between the extracellular matrix and the cell is of major importance for mechanotransduction and mechanobiology. Electron cryo-tomography, in principle, enables better than nanometer-resolution analysis of these connections, but restrictions of data collection geometry hamper the accurate extraction of the ventral membrane location from these tomograms, an essential prerequisite for the analysis. Here, we introduce a novel membrane tracing strategy that enables ventral membrane extraction at high fidelity and extraordinary accuracy. The approach is based on detecting the boundary between the inside and the outside of the cell rather than trying to explicitly trace the membrane. Simulation studies show that over 99% of the membrane can be correctly modeled using this principle and the excellent match of visually identifiable membrane stretches with the extracted boundary of experimental data indicates that the accuracy is comparable for actual data. - Highlights: • The connection between the ECM and the cell is of major importance. • Electron cryo-tomography provides nanometer-resolution information. • Data collection geometry hampers extraction of membranes from tomograms. • We introduce a novel membrane tracing strategy allowing high fidelity extraction. • Simulations show that over 99% of the membrane can be correctly modeled this way.
Accurate acoustic and elastic beam migration without slant stack for complex topography
Huang, Jianping; Yuan, Maolin; Liao, Wenyuan; Li, Zhenchun; Yue, Yubo
2015-06-01
Recent trends in seismic exploration have led to the collection of more surveys, often with multi-component recording, in onshore settings where both topography and subsurface targets are complex, leading to challenges for processing methods. Gaussian beam migration (GBM) is an alternative to single-arrival Kirchhoff migration, although there are some issues resulting in unsatisfactory GBM images. For example, static correction will give rise to the distortion of wavefields when near-surface elevation and velocity vary rapidly. Moreover, Green’s function compensated for phase changes from the beam center to receivers is inaccurate when receivers are not placed within some neighborhood of the beam center, that is, GBM is slightly inflexible for irregular acquisition system and complex topography. As a result, the differences of both the near-surface velocity and the surface slope from the beam center to the receivers and the poor spatial sampling of the land data lead to inaccuracy and aliasing of the slant stack, respectively. In order to improve the flexibility and accuracy of GBM, we propose accurate acoustic, PP and polarity-corrected PS beam migration without slant stack for complex topography. The applications of this method to one-component synthetic data from a 2D Canadian Foothills model and a Zhongyuan oilfield fault model, one-component field data and an unseparated multi-component synthetic data demonstrate that the method is effective for structural and relatively amplitude-preserved imaging, but significantly more time-consuming.
Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work.
Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet
2016-05-01
Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree planting work. 10-min seated rest periods were imposed during the workday to estimate the HR thermal component (ΔHRT) per Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step-test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. The VO2 estimates from measured HR and from corrected HR (thermal component removed) were compared to VO2 measured during work and rest. Varied levels of HR thermal component (ΔHRTavg range: 0-38 bpm) originating from a wide range of ambient thermal conditions, thermal clothing insulation worn, and physical load exerted during work were observed. Using raw HR significantly overestimated measured work VO2 by 30% on average (range: 1%-64%). 74% of VO2 prediction error variance was explained by the HR thermal component. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which can be implemented easily by the practitioner with inexpensive instruments. PMID:26851474
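The estimation pipeline described above can be sketched as: calibrate an individual linear HR-to-VO2 relation from the step-test, then evaluate it at the heart rate with the thermal component subtracted. A hedged illustration; the calibration numbers are invented, not the study's data.

```python
# Hedged sketch of VO2 estimation from thermally corrected heart rate:
# fit VO2 = a + b*HR from a step-test, then use HR - dHR_thermal at work.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def estimate_vo2(hr_work, delta_hr_thermal, calib_hr, calib_vo2):
    """Estimate work VO2 from heart rate with the thermal component removed."""
    a, b = fit_linear(calib_hr, calib_vo2)
    return a + b * (hr_work - delta_hr_thermal)

# Hypothetical step-test calibration (HR in bpm, VO2 in L/min).
calib_hr  = [90.0, 110.0, 130.0, 150.0]
calib_vo2 = [1.0, 1.5, 2.0, 2.5]
raw  = estimate_vo2(140.0, 0.0, calib_hr, calib_vo2)    # ignores heat load
corr = estimate_vo2(140.0, 20.0, calib_hr, calib_vo2)   # 20-bpm thermal part
```

As in the study, ignoring the thermal component inflates the estimate: here the raw estimate exceeds the corrected one.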
Knight, Joseph W; Wang, Xiaopeng; Gallandi, Lukas; Dolgounitcheva, Olga; Ren, Xinguo; Ortiz, J Vincent; Rinke, Patrick; Körzdörfer, Thomas; Marom, Noa
2016-02-01
The performance of different GW methods is assessed for a set of 24 organic acceptors. Errors are evaluated with respect to coupled cluster singles, doubles, and perturbative triples [CCSD(T)] reference data for the vertical ionization potentials (IPs) and electron affinities (EAs), extrapolated to the complete basis set limit. Additional comparisons are made to experimental data, where available. We consider fully self-consistent GW (scGW), partial self-consistency in the Green's function (scGW0), non-self-consistent G0W0 based on several mean-field starting points, and a "beyond GW" second-order screened exchange (SOSEX) correction to G0W0. We also describe the implementation of the self-consistent Coulomb hole with screened exchange method (COHSEX), which serves as one of the mean-field starting points. The best performers overall are G0W0+SOSEX and G0W0 based on an IP-tuned long-range corrected hybrid functional with the former being more accurate for EAs and the latter for IPs. Both provide a balanced treatment of localized vs delocalized states and valence spectra in good agreement with photoemission spectroscopy (PES) experiments. PMID:26731609
Goodpaster, Jason D.; Barnes, Taylor A.; Miller, Thomas F., E-mail: tfm@caltech.edu [Division of Chemistry and Chemical Engineering, California Institute of Technology, Pasadena, California 91125 (United States); Manby, Frederick R., E-mail: fred.manby@bristol.ac.uk [Centre for Computational Chemistry, School of Chemistry, University of Bristol, Bristol BS8 ITS (United Kingdom)
2014-05-14
We analyze the sources of error in quantum embedding calculations in which an active subsystem is treated using wavefunction methods, and the remainder using density functional theory. We show that the embedding potential felt by the electrons in the active subsystem makes only a small contribution to the error of the method, whereas the error in the nonadditive exchange-correlation energy dominates. We test an MP2 correction for this term and demonstrate that the corrected embedding scheme accurately reproduces wavefunction calculations for a series of chemical reactions. Our projector-based embedding method uses localized occupied orbitals to partition the system; as with other local correlation methods, abrupt changes in the character of the localized orbitals along a reaction coordinate can lead to discontinuities in the embedded energy, but we show that these discontinuities are small and can be systematically reduced by increasing the size of the active region. Convergence of reaction energies with respect to the size of the active subsystem is shown to be rapid for all cases where the density functional treatment is able to capture the polarization of the environment, even in conjugated systems, and even when the partition cuts across a double bond.
Self-interaction correction to GW approximation
A general approach to correct the self-interaction error in GW approximation is proposed, and proved to be exact in the one-electron limit. The correction is expressed by vertex corrections to both the self-energy and the polarization, and the formulation can be shown to be equivalent to the Schneider-Taylor-Yaris approximation of many-body scattering theory. The suitability of this correction in many-electron systems is also discussed. Numerical calculations of the two-electron two-site Hubbard model are performed to illustrate the effects of the self-interaction correction on many-electron systems.
Joint Correction of Ionospheric Artifact and Orbital Error in L-band SAR Interferometry
Jung, H.; Liu, Z.; Lu, Z.
2012-12-01
Synthetic aperture radar interferometry (InSAR) is a powerful technique to measure surface deformation. However, the accuracy of this technique for L-band synthetic aperture radar (SAR) systems is largely compromised by ionospheric path delays on the radar signals. The ionospheric effect causes severe distortion called azimuth streaking in SAR backscattering intensity images as well as long wavelength phase distortion similar to orbital ramp error. Effective detection and correction of ionospheric phase distortion from L-band InSAR images are necessary to measure and interpret surface displacement accurately. Recently Jung et al. (2012) proposed an efficient method to correct ionospheric phase distortions using the multiple aperture interferometry (MAI) interferogram. In this study, we extend this technique to correct the ionosphere effect in InSAR measurements of interseismic deformation. We present case studies in southern California using L-band ALOS PALSAR data and in-situ GPS measurements and show that the long wavelength noise can be removed by joint correction of the ionospheric artifact and the orbital error. [Figure captions: displacement maps created from the 20070715-20091020 and 20071015-20091020 ALOS PALSAR pairs, (a-b) before and after joint correction of ionospheric artifact and orbital error, and (c) after correction from a 2D-polynomial fit.]
Highly accurate nitrogen dioxide (NO2) in nitrogen standards based on permeation.
Flores, Edgar; Viallon, Joële; Moussay, Philippe; Idrees, Faraz; Wielgosz, Robert Ian
2012-12-01
The development and operation of a highly accurate primary gas facility for the dynamic production of mixtures of nitrogen dioxide (NO(2)) in nitrogen (N(2)) based on continuous weighing of a permeation tube and accurate impurity quantification and correction of the gas mixtures using Fourier transform infrared spectroscopy (FT-IR) is described. NO(2) gas mixtures in the range of 5 μmol mol(-1) to 15 μmol mol(-1) with a standard relative uncertainty of 0.4% can be produced with this facility. To achieve an uncertainty at this level, significant efforts were made to reduce, identify and quantify potential impurities present in the gas mixtures, such as nitric acid (HNO(3)). A complete uncertainty budget, based on the analysis of the performance of the facility, including the use of a FT-IR spectrometer and a nondispersive UV analyzer as analytical techniques, is presented in this work. The mixtures produced by this facility were validated and then selected to provide reference values for an international comparison of the Consultative Committee for Amount of Substance (CCQM), number CCQM-K74, (1) which was designed to evaluate the consistency of primary NO(2) gas standards from 17 National Metrology Institutes. PMID:23148702
A Novel Method for Accurate Operon Predictions in All Sequenced Prokaryotes
Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.; Arkin, Adam P.
2004-12-01
We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.
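The distance component of such a predictor can be illustrated with a toy log-odds score; the distance distributions and scale parameters below are invented for illustration and are not the trained values from this work.

```python
import math

def operon_log_odds(distance_bp, same_strand=True):
    """Toy log-odds that two adjacent genes lie in one operon, using
    only intergenic distance. The exponential models and their scale
    parameters are assumptions for illustration."""
    if not same_strand:
        return float("-inf")  # operon members must be co-directional
    # Operon pairs cluster at short (often overlapping) spacings;
    # non-operon pairs are typically separated much more widely.
    p_operon = math.exp(-max(distance_bp, -20) / 50.0) / 50.0
    p_background = math.exp(-max(distance_bp, 0) / 300.0) / 300.0
    return math.log(p_operon / p_background)

# Short spacing favors an operon call; wide spacing argues against one.
close_pair = operon_log_odds(15)
far_pair = operon_log_odds(400)
```

In the actual method this distance evidence is combined with comparative genomic measures, and the distance behavior is re-estimated per genome from sequence alone.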
Accurate computation of Stokes flow driven by an open immersed interface
Li, Yi; Layton, Anita T.
2012-06-01
We present numerical methods for computing two-dimensional Stokes flow driven by forces singularly supported along an open, immersed interface. Two second-order accurate methods are developed: one for accurately evaluating boundary integral solutions at a point, and another for computing Stokes solution values on a rectangular mesh. We first describe a method for computing singular or nearly singular integrals, such as a double layer potential due to sources on a curve in the plane, evaluated at a point on or near the curve. To improve accuracy of the numerical quadrature, we add corrections for the errors arising from discretization, which are found by asymptotic analysis. When used to solve the Stokes equations with sources on an open, immersed interface, the method generates second-order approximations, for both the pressure and the velocity, and preserves the jumps in the solutions and their derivatives across the boundary. We then combine the method with a mesh-based solver to yield a hybrid method for computing Stokes solutions at N² grid points on a rectangular grid. Numerical results are presented which exhibit second-order accuracy. To demonstrate the applicability of the method, we use the method to simulate fluid dynamics induced by the beating motion of a cilium. The method preserves the sharp jumps in the Stokes solution and their derivatives across the immersed boundary. Model results illustrate the distinct hydrodynamic effects generated by the effective stroke and by the recovery stroke of the ciliary beat cycle.
A method for accurate localization of the first heart sound and possible applications
We have previously developed a method for localization of the first heart sound (S1) using wavelet denoising and ECG-gated peak-picking. In this study, an additional enhancement step based on cross-correlation and ECG-gated ensemble averaging (EA) is presented. The main objective of the improved method was to localize S1 with very high temporal accuracy in (pseudo-) real time. The performance of S1 detection and localization, with and without EA enhancement, was evaluated on simulated as well as experimental data. The simulation study showed that EA enhancement reduced the localization error considerably and that S1 could be accurately localized at much lower signal-to-noise ratios. The experimental data were taken from ten healthy subjects at rest and during invoked hyper- and hypotension. For this material, the number of correct S1 detections increased from 91% to 98% when using EA enhancement. Improved performance was also demonstrated when EA enhancement was used for continuous tracking of blood pressure changes and for respiration monitoring via the electromechanical activation time. These are two typical applications where accurate localization of S1 is essential for the results.
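The cross-correlation alignment and ensemble-averaging step can be sketched roughly as follows on synthetic data; the pulse shape, noise level, and beat jitters are invented, and the wavelet-denoising stage of the published method is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic S1-like pulse and noisy ECG-gated beats (invented data).
t = np.arange(200)
template = np.exp(-0.5 * ((t - 100) / 8.0) ** 2)

def make_beat(jitter):
    # A jittered copy of the pulse buried in additive noise.
    return np.roll(template, jitter) + 0.3 * rng.standard_normal(t.size)

beats = [make_beat(j) for j in (-5, 3, 0, 4, -2)]

def align_and_average(beats, reference):
    """Cross-correlate each beat with a reference, shift it into
    register, then ensemble-average to suppress uncorrelated noise."""
    aligned = []
    for b in beats:
        xc = np.correlate(b, reference, mode="full")
        lag = xc.argmax() - (len(reference) - 1)
        aligned.append(np.roll(b, -lag))
    return np.mean(aligned, axis=0)

ensemble = align_and_average(beats, template)
s1_location = int(ensemble.argmax())  # should land near sample 100
```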
Rapid and accurate prediction and scoring of water molecules in protein binding sites.
Gregory A Ross
Water plays a critical role in ligand-protein interactions. However, it is still challenging to predict accurately not only where water molecules prefer to bind, but also which of those water molecules might be displaceable. The latter is often seen as a route to optimizing affinity of potential drug candidates. Using a protocol we call WaterDock, we show that the freely available AutoDock Vina tool can be used to predict accurately the binding sites of water molecules. WaterDock was validated using data from X-ray crystallography, neutron diffraction and molecular dynamics simulations and correctly predicted 97% of the water molecules in the test set. In addition, we combined data-mining, heuristic and machine learning techniques to develop probabilistic water molecule classifiers. When applied to WaterDock predictions in the Astex Diverse Set of protein ligand complexes, we could identify whether a water molecule was conserved or displaced to an accuracy of 75%. A second model predicted whether water molecules were displaced by polar groups or by non-polar groups to an accuracy of 80%. These results should prove useful for anyone wishing to undertake rational design of new compounds where the displacement of water molecules is being considered as a route to improved affinity.
This paper presents a fast and accurate marker-based automatic registration technique for aligning uncalibrated projections taken from a transmission electron microscope (TEM) with different tilt angles and orientations. Most of the existing TEM image alignment methods estimate the similarity between images using the projection model with least-squares metric and guess alignment parameters by computationally expensive nonlinear optimization schemes. Approaches based on the least-squares metric which is sensitive to outliers may cause misalignment since automatic tracking methods, though reliable, can produce a few incorrect trajectories due to a large number of marker points. To decrease the influence of outliers, we propose a robust similarity measure using the projection model with a Gaussian weighting function. This function is very effective in suppressing outliers that are far from correct trajectories and thus provides a more robust metric. In addition, we suggest a fast search strategy based on the non-gradient Powell's multidimensional optimization scheme to speed up optimization as only meaningful parameters are considered during iterative projection model estimation. Experimental results show that our method brings more accurate alignment with less computational cost compared to conventional automatic alignment methods.
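The contrast between a least-squares metric and a Gaussian-weighted one can be shown on a toy residual set; `sigma` and the residual values below are assumptions for illustration, not values from the paper.

```python
import numpy as np

def least_squares_cost(residuals):
    # Classic sum of squared residuals: dominated by outliers.
    return np.sum(residuals ** 2)

def gaussian_weighted_cost(residuals, sigma=2.0):
    """Each residual is down-weighted by a Gaussian of its size, so
    trajectories far from the projection model contribute almost
    nothing. sigma is an assumed scale."""
    weights = np.exp(-(residuals ** 2) / (2.0 * sigma ** 2))
    return np.sum(weights * residuals ** 2)

inliers = np.array([0.3, -0.5, 0.2, 0.1, -0.4])  # good marker tracks
with_outlier = np.append(inliers, 50.0)          # one bad trajectory

ls_ratio = least_squares_cost(with_outlier) / least_squares_cost(inliers)
gw_ratio = gaussian_weighted_cost(with_outlier) / gaussian_weighted_cost(inliers)
```

A single bad trajectory inflates the least-squares cost by orders of magnitude, while the Gaussian-weighted cost is essentially unchanged, which is the robustness property exploited during projection model estimation.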
DeForest, Jared L; Drerup, Samuel A; Vis, Morgan L
2016-05-01
The assessment of lotic ecosystem quality plays an essential role to help determine the extent of environmental stress and the effectiveness of restoration activities. Methods that incorporate biological properties are considered ideal because they provide direct assessment of the end goal of a vigorous biological community. Our primary objective was to use biofilm lipids to develop an accurate biomonitoring tool that requires little expertise and time to facilitate assessment. A model was created of fatty acid biomarkers most associated with predetermined stream quality classification, exceptional warm water habitat (EWH), warm water habitat (WWH), and limited resource (LR-AMD), and validated along a gradient of known stream qualities. The fatty acid fingerprint of the biofilm community was statistically different (P = 0.03) and was generally unique to recognized stream quality. One striking difference was that the essential fatty acids (DHA, EPA, and ARA) were absent from LR-AMD and recovered only from WWH and EWH, with 45% more in EWH than in WWH. When tested independently along a stream quality gradient, the model correctly categorized six of the seven sites; the single mismatch was attributable to low sample biomass. These results provide compelling evidence that biofilm fatty acid analysis can be a sensitive, accurate, and cost-effective biomonitoring tool. We envision future studies expanding this research to more in-depth studies of remediation efforts, to determining the applicable geographic area for the method, and to the addition of multiple stressors, with the possibility of distinguishing among stressors. PMID:27061804
Fast and accurate solution of the Poisson equation in an immersed setting
Marques, Alexandre Noll; Rosales, Rodolfo Ruben
2014-01-01
We present a fast and accurate algorithm for the Poisson equation in complex geometries, using regular Cartesian grids. We consider a variety of configurations, including Poisson equations with interfaces across which the solution is discontinuous (of the type arising in multi-fluid flows). The algorithm is based on a combination of the Correction Function Method (CFM) and Boundary Integral Methods (BIM). Interface and boundary conditions can be treated in a fast and accurate manner using boundary integral equations, and the associated BIM. Unfortunately, BIM can be costly when the solution is needed everywhere in a grid, e.g. fluid flow problems. We use the CFM to circumvent this issue. The solution from the BIM is used to rewrite the problem as a series of Poisson equations in rectangular domains --- which requires the BIM solution at interfaces/boundaries only. These Poisson equations involve discontinuities at interfaces, of the type that the CFM can handle. Hence we use the CFM to solve them (to high ord...
Evaluating the capability of time-of-flight cameras for accurately imaging a cyclically loaded beam
Lahamy, Hervé; Lichti, Derek; El-Badry, Mamdouh; Qi, Xiaojuan; Detchev, Ivan; Steward, Jeremy; Moravvej, Mohammad
2015-05-01
Time-of-flight cameras are used for diverse applications ranging from human-machine interfaces and gaming to robotics and earth topography. This paper aims at evaluating the capability of the Mesa Imaging SR4000 and the Microsoft Kinect 2.0 time-of-flight cameras for accurately imaging the top surface of a concrete beam subjected to fatigue loading in laboratory conditions. Whereas previous work has demonstrated the success of such sensors for measuring the response at point locations, the aim here is to measure the entire beam surface in support of the overall objective of evaluating the effectiveness of concrete beam reinforcement with steel fibre reinforced polymer sheets. After applying corrections for lens distortions to the data and differencing images over time to remove systematic errors due to internal scattering, the periodic deflections experienced by the beam have been estimated for the entire top surface of the beam and at attached witness plates. The results have been assessed by comparison with measurements from highly-accurate laser displacement transducers. This study concludes that both the Microsoft Kinect 2.0 and the Mesa Imaging SR4000 are capable of sensing a moving surface with sub-millimeter accuracy once the image distortions have been modeled and removed.
On the importance of having accurate data for astrophysical modelling
Lique, Francois
2016-06-01
The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation for wavelengths ranging from far infrared to sub-millimeter with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data and I will show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for molecular line modelling beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star forming conditions, have allowed solving the problem of their respective abundance in cold molecular clouds. I will also present the latest collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present the recent work on ortho-para-H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para-H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.
Empirical corrections for atmospheric neutral density derived from thermospheric models
Forootan, Ehsan; Kusche, Jürgen; Börger, Klaus; Henze, Christina; Löcher, Anno; Eickmans, Marius; Agena, Jens
2016-04-01
Accurately predicting satellite positions is a prerequisite for various applications from space situational awareness to precise orbit determination (POD). Given the fact that atmospheric drag represents a dominant influence on the position of low-Earth orbit objects, an accurate evaluation of thermospheric mass density is of great importance to low Earth orbital prediction. Over decades, various empirical atmospheric models have been developed to support computation of density changes within the atmosphere. The quality of these models is, however, restricted mainly due to the complexity of atmospheric density changes and the limited resolution of indices used to account for atmospheric temperature and neutral density changes caused by solar and geomagnetic activity. Satellite missions, such as Challenging Mini-Satellite Payload (CHAMP) and Gravity Recovery and Climate Experiment (GRACE), provide a direct measurement of non-conservative accelerations, acting on the surface of satellites. These measurements provide valuable data for improving our knowledge of thermosphere density and winds. In this paper we present two empirical frameworks to correct model-derived neutral density simulations by the along-track thermospheric density measurements of CHAMP and GRACE. First, empirical scale factors are estimated by analyzing daily CHAMP and GRACE acceleration measurements and are used to correct the density simulation of Jacchia and MSIS (Mass-Spectrometer-Incoherent-Scatter) thermospheric models. The evolution of daily scale factors is then related to solar and magnetic activity enabling their prediction in time. In the second approach, principal component analysis (PCA) is applied to extract the dominant modes of differences between CHAMP/GRACE observations and thermospheric model simulations. Afterwards an adaptive correction procedure is used to account for long-term and high-frequency differences. We conclude the study by providing recommendations on possible
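The first (scale-factor) framework reduces to a simple ratio estimate per day; the densities below are synthetic stand-ins for accelerometer-derived observations and model output, with an assumed 30% model bias.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: a model biased high relative to along-track
# accelerometer-derived densities (kg m^-3, hourly samples for one day).
model_density = np.full(24, 4.0e-12)
observed_density = 3.1e-12 * (1.0 + 0.05 * rng.standard_normal(24))

def daily_scale_factor(observed, modeled):
    """One multiplicative factor per day that maps model output onto
    the observations, in the spirit of the first correction approach."""
    return np.mean(observed / modeled)

k = daily_scale_factor(observed_density, model_density)
corrected_density = k * model_density
```

The sequence of daily factors can then be related to solar and geomagnetic activity indices so that it can be predicted forward in time, as the abstract describes.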
Comparison and Analysis of Geometric Correction Models of Spaceborne SAR
Weihao Jiang
2016-06-01
Following the development of synthetic aperture radar (SAR), SAR images have become increasingly common. Many researchers have conducted large studies on geolocation models, but little work has been conducted on the available models for the geometric correction of SAR images of different terrain. To address the terrain issue, four different models were compared and are described in this paper: a rigorous range-doppler (RD) model, a rational polynomial coefficients (RPC) model, a revised polynomial (PM) model and an elevation derivation (EDM) model. The results of comparisons of the geolocation capabilities of the models show that a proper model for a SAR image of a specific terrain can be determined. A solution table was obtained to recommend a suitable model for users. Three TerraSAR-X images, two ALOS-PALSAR images and one Envisat-ASAR image were used for the experiment, including flat terrain and mountain terrain SAR images as well as two large area images. Geolocation accuracies of the models for different terrain SAR images were computed and analyzed. The comparisons of the models show that the RD model was accurate but was the least efficient; therefore, it is not the ideal model for real-time implementations. The RPC model is sufficiently accurate and efficient for the geometric correction of SAR images of flat terrain, whose precision is below 0.001 pixels. The EDM model is suitable for the geolocation of SAR images of mountainous terrain, and its precision can reach 0.007 pixels. Although the PM model does not produce results as precise as the other models, its efficiency is excellent and its potential should not be underestimated. With respect to the geometric correction of SAR images over large areas, the EDM model has higher accuracy under one pixel, whereas the RPC model consumes one third of the time of the EDM model.
Fermions tunnelling with quantum gravity correction
Liu, Zhen-Yu
2014-01-01
The quantum gravity correction is important in the study of black hole tunnelling processes. Based on the generalized uncertainty principle, we investigate the influence of quantum gravity, and the result tells us that the quantum gravity correction accelerates the evaporation of the black hole. Using the corrected Dirac equation in curved spacetime and the Hamilton-Jacobi method, we address the tunnelling of fermions in a 4-dimensional Schwarzschild spacetime. After solving the equation of motion of the spin 1/2 field, we obtain the corrected Hawking temperature. It turns out that the correction depends not only on the mass of the black hole but also on the mass of the emitted fermions. In our calculation, the quantum gravity correction explicitly accelerates the increase of the Hawking temperature during the radiation. This correction leads to an increase in the evaporation of the black hole.
Fermions Tunnelling with Quantum Gravity Correction
Based on the generalized uncertainty principle (GUP), we investigate the correction of quantum gravity to Hawking radiation of black hole by utilizing the tunnelling method. The result tells us that the quantum gravity correction retards the evaporation of black hole. Using the corrected covariant Dirac equation in curved spacetime, we study the tunnelling process of fermions in Schwarzschild spacetime and obtain the corrected Hawking temperature. It turns out that the correction depends not only on the mass of black hole but also on the mass of emitted fermions. In our calculation, the quantum gravity correction slows down the increase of Hawking temperature during the radiation explicitly. This correction leads to the remnants of black hole and avoids the evaporation singularity. (general)
Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models
Stovgaard Kasper
2010-08-01
Background: Genome sequencing projects have expanded the gap between the amount of known protein sequences and structures. The limitations of current high resolution structure determination methods make it unlikely that this gap will disappear in the near future. Small angle X-ray scattering (SAXS) is an established low resolution method for routinely determining the structure of proteins in solution. The purpose of this study is to develop a method for the efficient calculation of accurate SAXS curves from coarse-grained protein models. Such a method can for example be used to construct a likelihood function, which is paramount for structure determination based on statistical inference. Results: We present a method for the efficient calculation of accurate SAXS curves based on the Debye formula and a set of scattering form factors for dummy atom representations of amino acids. Such a method avoids the computationally costly iteration over all atoms. We estimated the form factors using generated data from a set of high quality protein structures. No ad hoc scaling or correction factors are applied in the calculation of the curves. Two coarse-grained representations of protein structure were investigated; two scattering bodies per amino acid led to significantly better results than a single scattering body. Conclusion: We show that the obtained point estimates allow the calculation of accurate SAXS curves from coarse-grained protein models. The resulting curves are on par with the current state-of-the-art program CRYSOL, which requires full atomic detail. Our method was also comparable to CRYSOL in recognizing native structures among native-like decoys. As a proof-of-concept, we combined the coarse-grained Debye calculation with a previously described probabilistic model of protein structure, TorusDBN. This resulted in a significant improvement in the decoy recognition performance. In conclusion, the presented method shows great promise for
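The Debye formula underlying the method can be sketched as follows; for simplicity the form factors here are constant amplitudes on three collinear points, whereas the paper estimates q-dependent form factors for each dummy body.

```python
import numpy as np

def debye_intensity(coords, form_factors, q_values):
    """Debye formula: I(q) = sum_ij f_i f_j sin(q r_ij) / (q r_ij).
    coords: (N, 3) positions of scattering bodies; form_factors:
    length-N amplitudes (q-independent here for simplicity)."""
    diffs = coords[:, None, :] - coords[None, :, :]
    r = np.sqrt((diffs ** 2).sum(axis=-1))  # all pair distances
    ff = np.outer(form_factors, form_factors)
    out = []
    for q in q_values:
        qr = q * r
        safe = np.where(qr > 1e-12, qr, 1.0)      # avoid 0/0 on the diagonal
        sinc = np.where(qr > 1e-12, np.sin(qr) / safe, 1.0)
        out.append(np.sum(ff * sinc))
    return np.array(out)

# Three collinear scattering bodies at a 3.8 angstrom C-alpha spacing.
coords = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [7.6, 0.0, 0.0]])
f = np.ones(3)
intensity = debye_intensity(coords, f, np.array([0.0, 0.1, 0.5]))
```

At q = 0 the intensity equals (sum of form factors) squared, and it falls off with increasing q, which is the expected small-angle behavior.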
The crossflow ultrasonic flowmeter (UFM) improves nuclear power plant performance through more accurate and reliable feedwater flow measurement. Reactor power levels are typically monitored via secondary-side calorimetric calculations that depend on the accurate measurement of feedwater flow. The feedwater flow is measured with calibrated venturis in most plants. These are subject to chemical fouling and other mechanical problems. If the loss in accuracy of the feedwater flow measurement overstates the actual flow rate, the result is a direct loss in megawatts generated by the plant. This paper describes a new, innovative ultrasonic technique to improve the accuracy, stability and repeatability of ultrasonic flow measurements. By employing this advanced technology to provide a continuous correction to the venturi-measured feedwater flow rate, plants have reported the recovery of between 5 and 25 MWe. This technology has been implemented in a new flowmeter called CROSSFLOW. The CROSSFLOW meter utilizes a mathematical process called cross-correlation to process the ultrasonic signal, which is modulated by the flow eddies, to determine the velocity of the feedwater. It replaces the older, less accurate transit-time methodology. Comparisons with weigh tank tests, calibrated plant instrumentation, and chemical tracer tests have demonstrated a repeatable accuracy of 0.21% or better with this advanced cross-correlation technology. The paper discusses the history of the cross-correlation technique and its theoretical basis, illustrates how this technique addresses the measurement sensitivities for various parameters, demonstrates the calculation of the accuracy of the meter, and discusses the recently completed NRC review of the CROSSFLOW System and methodology. The paper also discusses recent precision flow measurement applications being performed with CROSSFLOW at nuclear plants worldwide. Among these applications are the measurement of Reactor Coolant System flow and the
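The cross-correlation principle, estimating the transit delay of flow eddies between two axially separated measurement planes and converting it to a velocity, can be sketched on synthetic signals; the sample rate, sensor spacing, and velocity below are invented for illustration and are not CROSSFLOW parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 10_000.0            # sample rate in Hz (assumed)
sensor_spacing = 0.10    # m between upstream/downstream planes (assumed)
true_velocity = 5.0      # m/s, so the eddy transit delay is 0.02 s

# The same turbulent-eddy signature is seen upstream, then downstream
# after the transit delay (synthetic signals).
upstream = rng.standard_normal(5000)
delay = int(round(sensor_spacing / true_velocity * fs))
downstream = np.roll(upstream, delay)
downstream[:delay] = rng.standard_normal(delay)  # uncorrelated warm-up

# The cross-correlation peak locates the delay; velocity = spacing / delay.
xc = np.correlate(downstream, upstream, mode="full")
lag = xc.argmax() - (len(upstream) - 1)
measured_velocity = sensor_spacing / (lag / fs)
```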
Importance of Attenuation Correction (AC) for Small Animal PET Imaging
Henrik H. El Ali
2012-10-01
The purpose of this study was to investigate whether a correction for annihilation photon attenuation in small objects such as mice is necessary. The attenuation recovery for specific organs and subcutaneous tumors was investigated. A comparison between different attenuation correction methods was performed. Methods: Ten NMRI nude mice with subcutaneous implantation of human breast cancer cells (MCF-7) were scanned consecutively in small animal PET and CT scanners (MicroPET™ Focus 120 and ImTek's MicroCAT™ II). CT-based AC, PET-based AC and uniform AC methods were compared. Results: The activity concentration in the same organ with and without AC revealed an overall attenuation recovery of 9–21% for MAP reconstructed images, i.e., SUV without AC could underestimate the true activity at this level. For subcutaneous tumors, the attenuation was 13 ± 4% (9–17%), for kidneys 20 ± 1% (19–21%), and for bladder 18 ± 3% (15–21%). The FBP reconstructed images showed almost the same attenuation levels as the MAP reconstructed images for all organs. Conclusions: The annihilation photons suffer attenuation even in small subjects. Both PET-based and CT-based methods are adequate for AC. The amplitude of the AC recovery could be overestimated using the uniform map. Therefore, application of a global attenuation factor on PET data might not be accurate for attenuation correction.
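The magnitude of the effect follows from Beer-Lambert attenuation; the sketch below uses the textbook linear attenuation coefficient of water at 511 keV and illustrative mouse-scale path lengths, so it only approximates the 9–21% range reported above.

```python
import math

def attenuation_fraction(mu_per_cm, path_cm):
    """Beer-Lambert fraction of photons lost along a path:
    1 - exp(-mu * x). mu = 0.096 /cm is the textbook linear
    attenuation coefficient of water at 511 keV; the path lengths
    below are illustrative chords through a mouse."""
    return 1.0 - math.exp(-mu_per_cm * path_cm)

loss_1cm = attenuation_fraction(0.096, 1.0)  # roughly 9% for a 1 cm path
loss_2cm = attenuation_fraction(0.096, 2.0)  # roughly 17% for a 2 cm path
```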
OPC modeling and correction solutions for EUV lithography
Word, James; Zuniga, Christian; Lam, Michael; Habib, Mohamed; Adam, Kostas; Oliver, Michael
2011-11-01
The introduction of EUV lithography into the semiconductor fabrication process will enable a continuation of Moore's law below the 22nm technology node. EUV lithography will, however, introduce new sources of patterning distortions which must be accurately modeled and corrected with software. Flare caused by scattered light in the projection optics results in pattern-density-dependent imaging errors. The combination of non-telecentric reflective optics with reflective reticles results in mask shadowing effects. Reticle absorber materials are likely to have non-zero reflectivity due to a need to balance absorber stack height with minimization of mask shadowing effects. Depending upon the placement of adjacent fields on the wafer, reflectivity along their border can result in inter-field imaging effects near the edge of neighboring exposure fields. Finally, there exist the ever-present optical proximity effects caused by diffraction-limited imaging and resist and etch process effects. To enable EUV lithography in production, it is expected that OPC will be called upon to compensate for most of these effects. With the anticipated small imaging error budgets at sub-22nm nodes it is highly likely that only full model-based OPC solutions will have the required accuracy. The authors will explore the current capabilities of model-based OPC software to model and correct for each of the EUV imaging effects. Modeling, simulation, and correction methodologies will be defined, and experimental results of a full model-based OPC flow for EUV lithography will be presented.
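The pattern-density dependence of flare can be illustrated with a schematic model: a fraction TIS (total integrated scatter) of the light is redistributed as the convolution of the local pattern density with a long-range point-spread function. This is a generic textbook-style flare model, not the calibrated model used in production OPC.

```python
import numpy as np

def add_flare(aerial_image, pattern_density, psf, total_integrated_scatter):
    """Schematic flare model: blend the nominal aerial image with the
    pattern density convolved against a long-range scattering kernel.
    All inputs here are 1D for simplicity; real flare maps are 2D."""
    psf = psf / psf.sum()  # normalize the scattering kernel
    flare = np.convolve(pattern_density, psf, mode="same")
    return (1.0 - total_integrated_scatter) * aerial_image \
        + total_integrated_scatter * flare

# Uniform 50% pattern density with 10% TIS lifts a 0.8 aerial intensity
# to 0.9*0.8 + 0.1*0.5 in the interior.
out = add_flare(np.full(101, 0.8), np.full(101, 0.5), np.ones(11), 0.1)
```

Model-based OPC would invert such a model (with a measured PSF) to bias the mask pattern against the density-dependent dose error.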
Dinpajooh, Mohammadhasan; Bai, Peng; Allan, Douglas A; Siepmann, J Ilja
2015-09-21
Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor-liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields T(c) = 1.3128 ± 0.0016, ρ(c) = 0.316 ± 0.004, and p(c) = 0.1274 ± 0.0013 in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρ(t) ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using r(cut) = 3.5σ yield T(c) and p(c) that are higher by 0.2% and 1.4% than simulations with r(cut) = 5 and 8σ but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that r(cut) = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard-core square-well particles with
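The analytical tail corrections at issue are the standard Lennard-Jones expressions obtained by assuming g(r) = 1 beyond the cutoff. A minimal sketch in reduced units (σ = ε = 1) shows how large the neglected tail is at the near-critical density used above:

```python
import math

def lj_energy_tail_per_particle(rho, rcut, sigma=1.0, epsilon=1.0):
    """Standard LJ energy tail correction per particle:
    (8/3)*pi*rho*eps*sigma^3 * [ (1/3)(sigma/rcut)^9 - (sigma/rcut)^3 ],
    derived assuming a uniform fluid (g(r) = 1) beyond the cutoff."""
    sr3 = (sigma / rcut) ** 3
    return (8.0 / 3.0) * math.pi * rho * epsilon * sigma**3 * (sr3**3 / 3.0 - sr3)

def lj_pressure_tail(rho, rcut, sigma=1.0, epsilon=1.0):
    """Corresponding LJ pressure tail correction."""
    sr3 = (sigma / rcut) ** 3
    return (16.0 / 3.0) * math.pi * rho**2 * epsilon * sigma**3 * (2.0 * sr3**3 / 3.0 - sr3)

# At the critical density rho_c ~ 0.316, the tail beyond 3.5 sigma is an
# order of magnitude larger than the tail beyond 8 sigma, which is why
# the cutoff sensitivity reported above is worth checking.
u35 = lj_energy_tail_per_particle(rho=0.316, rcut=3.5)
u80 = lj_energy_tail_per_particle(rho=0.316, rcut=8.0)
```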
Dinpajooh, Mohammadhasan [Department of Chemistry and Chemical Theory Center, University of Minnesota, 207 Pleasant Street SE, Minneapolis, Minnesota 55455 (United States); Bai, Peng; Allan, Douglas A. [Department of Chemical Engineering and Materials Science, University of Minnesota, 421 Washington Avenue SE, Minneapolis, Minnesota 55455 (United States); Siepmann, J. Ilja, E-mail: siepmann@umn.edu [Department of Chemistry and Chemical Theory Center, University of Minnesota, 207 Pleasant Street SE, Minneapolis, Minnesota 55455 (United States); Department of Chemical Engineering and Materials Science, University of Minnesota, 421 Washington Avenue SE, Minneapolis, Minnesota 55455 (United States)
Processing of airborne laser scanning data to generate accurate DTM for floodplain wetland
Szporak-Wasilewska, Sylwia; Mirosław-Świątek, Dorota; Grygoruk, Mateusz; Michałowski, Robert; Kardel, Ignacy
2015-10-01
Structure of the floodplain, especially its topography and vegetation, influences the overland flow and dynamics of floods, which are key factors shaping ecosystems in surface water-fed wetlands. Therefore, elaboration of a digital terrain model (DTM) of high spatial accuracy is crucial in hydrodynamic flow modelling in river valleys. In this study the research was conducted in a unique Central European complex of fens and marshes - the Lower Biebrza river valley. The area is represented mainly by peat ecosystems, which according to the EU Water Framework Directive (WFD) are called "water-dependent ecosystems". Development of an accurate DTM in these areas, which are overgrown by dense wetland vegetation consisting of alder forest, willow shrubs, reed, sedges, and grass, is very difficult; therefore, to represent the terrain with high accuracy, airborne laser scanning (ALS) data with a scanning density of 4 points/m² were used and a correction of the "vegetation effect" on the DTM was executed. This correction was performed utilizing remotely sensed images, a topographical survey using Real Time Kinematic positioning, and vegetation height measurements. In order to classify different types of vegetation within the research area, object based image analysis (OBIA) was used. OBIA allowed partitioning of remotely sensed imagery into meaningful image-objects and assessment of their characteristics through spatial and spectral scale. The final maps of vegetation patches, which include attributes of vegetation height and vegetation spectral properties, utilized both the laser scanning data and vegetation indices developed on the basis of airborne and satellite imagery. These data were used in the process of segmentation, attribution, and classification. Several different vegetation indices were tested to distinguish different types of vegetation in the wetland area. The OBIA classification allowed correction of the "vegetation effect" on the DTM. The final digital terrain model was compared and examined
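The vegetation-effect correction can be sketched as subtracting a per-class elevation bias from the ALS-derived terrain model, where the bias for each vegetation class would come from RTK ground truth minus the uncorrected DTM. The class codes and bias values below are placeholders, not the study's calibration.

```python
import numpy as np

def correct_vegetation_effect(dtm, class_map, bias_by_class):
    """Subtract a per-vegetation-class elevation bias (metres) from a DTM
    grid. class_map holds an integer vegetation class per cell."""
    corrected = dtm.copy()
    for cls, bias in bias_by_class.items():
        corrected[class_map == cls] -= bias
    return corrected

# Toy 2x2 tile: class 1 = sedge (assumed 0.3 m bias), class 2 = reed
# (assumed 0.8 m bias).
dtm = np.array([[10.0, 10.5], [10.2, 11.0]])
classes = np.array([[1, 2], [1, 2]])
out = correct_vegetation_effect(dtm, classes, {1: 0.3, 2: 0.8})
```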
Proud, Simon Richard; Rasmussen, M.O.; Fensholt, R.
2010-01-01
In order to obtain high quality data, the correction of atmospheric perturbations acting upon land surface reflectance measurements recorded by a space-based sensor is an important topic within remote sensing. For many years the Second Simulation of the Satellite Signal in the Solar Spectrum (6S) radiative transfer model and the Simplified Method for Atmospheric Correction (SMAC) codes have been used for this atmospheric correction, but previous studies have shown that in a number of situations the quality of correction provided by the SMAC is low. This paper describes a method designed to improve the quality of the SMAC atmospheric correction algorithm through a slight increase in its computational complexity. Data gathered from the SEVIRI aboard Meteosat Second Generation (MSG) is used to validate the additions to SMAC, both by comparison to simulated data corrected using the highly accurate...
Ahmadian, Alireza; Ay, Mohammad R.; Sarkar, Saeed [Medical Sciences/University of Tehran, Research Center for Science and Technology in Medicine, Tehran (Iran); Medical Sciences/University of Tehran, Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran (Iran); Bidgoli, Javad H. [Medical Sciences/University of Tehran, Research Center for Science and Technology in Medicine, Tehran (Iran); East Tehran Azad University, Department of Electrical and Computer Engineering, Tehran (Iran); Zaidi, Habib [Geneva University Hospital, Division of Nuclear Medicine, Geneva (Switzerland)
2008-10-15
Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium with high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μmap), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high CT number object segmentation using combined region- and boundary-based segmentation and second, object classification to bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of generated μmaps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique
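The piecewise calibration curve mentioned above is commonly a bilinear mapping from CT numbers to 511 keV attenuation coefficients, with a shallower slope in the bone segment because bone raises CT numbers (at ~70 keV effective energy) more than it raises attenuation at 511 keV. The breakpoint and bone-segment slope below are illustrative, not the authors' fitted values.

```python
MU_WATER = 0.096          # cm^-1, water at 511 keV
MU_BONE_SLOPE = 0.0000495 # illustrative bone-segment slope, cm^-1 per HU

def hu_to_mu511(hu, breakpoint=0.0):
    """Bilinear CT-number (HU) to 511 keV linear attenuation conversion,
    a sketch of the piecewise calibration step described above."""
    if hu <= breakpoint:
        # soft-tissue segment: scale water attenuation linearly with HU
        return max(0.0, MU_WATER * (1.0 + hu / 1000.0))
    # bone segment: shallower slope for high-Z material
    return MU_WATER + MU_BONE_SLOPE * hu

mu_air = hu_to_mu511(-1000.0)
mu_water = hu_to_mu511(0.0)
mu_bone = hu_to_mu511(1000.0)
```

The artifact described in the abstract arises because contrast-enhanced bowel falls on the bone segment of such a curve even though its 511 keV attenuation is close to water; the SCC algorithm reclassifies those voxels before this conversion is applied.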
Visual texture accurate material appearance measurement, representation and modeling
Haindl, Michal
2013-01-01
This book surveys the state of the art in multidimensional, physically-correct visual texture modeling. Features: reviews the entire process of texture synthesis, including material appearance representation, measurement, analysis, compression, modeling, editing, visualization, and perceptual evaluation; explains the derivation of the most common representations of visual texture, discussing their properties, advantages, and limitations; describes a range of techniques for the measurement of visual texture, including BRDF, SVBRDF, BTF and BSSRDF; investigates the visualization of textural info
Method for Accurately Calibrating a Spectrometer Using Broadband Light
Simmons, Stephen; Youngquist, Robert
2011-01-01
A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. This new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
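The "simple theory" amounts to a channelled spectrum: an unbalanced Michelson with optical path difference OPD transmits 1 + V·cos(2π·OPD/λ), so fringe maxima fall at λ = OPD/m for integer orders m and act as an in-situ wavelength comb. The path difference and band below are assumptions for illustration, not the instrument's actual values.

```python
import numpy as np

def michelson_pattern(wavelengths_nm, opd_nm, visibility=1.0):
    """Spectral transmission of an unbalanced Michelson interferometer.
    Comparing this predicted pattern with the measured spectrum reveals
    wavelength-assignment errors pixel by pixel."""
    return 1.0 + visibility * np.cos(2.0 * np.pi * opd_nm / wavelengths_nm)

def fringe_peak_wavelengths(opd_nm, lam_min_nm, lam_max_nm):
    """Wavelengths of fringe maxima (integer orders m = OPD/lambda)
    inside a band, usable as calibration reference points."""
    m_min = int(np.ceil(opd_nm / lam_max_nm))
    m_max = int(np.floor(opd_nm / lam_min_nm))
    return np.array([opd_nm / m for m in range(m_min, m_max + 1)])

# A 100 um (1e5 nm) path difference puts dozens of fringe maxima across
# a 500-650 nm band, far denser than a typical line lamp.
peaks = fringe_peak_wavelengths(1e5, 500.0, 650.0)
```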
Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions
Jianhua Zhang
2014-01-01
This paper proposes a fast and accurate method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and multimodal sensors are installed around the head to obtain all EEG sensor positions simultaneously. A multi-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then design a special calibration body. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance and can be further applied to EEG source localization applications on the human brain.
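Registering sensor positions seen from multiple views reduces to estimating a rigid transform between corresponding 3D point sets. A generic sketch of that step is the Kabsch/SVD method below; it is a standard least-squares solution, not necessarily the authors' exact pipeline.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    src points onto dst via the Kabsch/SVD method."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Recover a known rotation/translation from noiseless point positions.
rng = np.random.default_rng(1)
pts = rng.standard_normal((10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.05])
R_est, t_est = rigid_transform(pts, pts @ R_true.T + t_true)
```

With noisy chessboard-corner correspondences the same solver gives the maximum-likelihood rigid fit under isotropic Gaussian noise.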
Accurate multireference study of Si3 electronic manifold
Goncalves, Cayo Emilio Monteiro; Braga, Joao Pedro
2016-01-01
Since it has been shown that the silicon trimer has a highly multi-reference character, accurate multi-reference configuration interaction calculations are performed to elucidate its electronic manifold. Emphasis is given to the long-range part of the potential, aiming to understand the dynamical aspects of atom-diatom collisions and to describe conical intersections and important saddle points along the reactive path. The main features of the potential energy surface are analyzed for benchmarking, and highly accurate values for structures, vibrational constants, and energy gaps are reported, as well as the previously unpublished spin-orbit coupling magnitude. The results predict that inter-system crossings will play an important role in dynamical simulations, especially in triplet state quenching, making the problem of constructing a precise potential energy surface more complicated and multi-layer dependent. The ground state is predicted to be the singlet one, but since the singlet-triplet gap is rather small (2.448 kJ/mol) bo...
Simple and High-Accurate Schemes for Hyperbolic Conservation Laws
Renzhong Feng
2014-01-01
The paper constructs a class of simple high-accurate schemes (SHA schemes) with third order approximation accuracy in both space and time to solve linear hyperbolic equations, using linear data reconstruction and the Lax-Wendroff scheme. The schemes can be made even fourth order accurate with a special choice of parameter. In order to avoid spurious oscillations in the vicinity of strong gradients, we make the SHA schemes total variation diminishing (TVD schemes for short) by setting a flux limiter in their numerical fluxes, and then extend these schemes to solve the nonlinear Burgers' equation and the Euler equations. The numerical examples show that these schemes give high order of accuracy and high resolution results. The advantages of these schemes are their simplicity and high order of accuracy.
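The flux-limiter idea can be illustrated on the simplest case: Lax-Wendroff for linear advection with a minmod limiter, which falls back to first-order upwind near strong gradients and so suppresses the spurious oscillations mentioned above. This is the classic second-order TVD construction, not the paper's third-order SHA scheme.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: zero at extrema, smallest-magnitude slope otherwise."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def tvd_lax_wendroff_step(u, c):
    """One step of flux-limited Lax-Wendroff for u_t + a u_x = 0 (a > 0),
    periodic boundaries, Courant number c = a*dt/dx in (0, 1]."""
    du_minus = u - np.roll(u, 1)    # u_i - u_{i-1}
    du_plus = np.roll(u, -1) - u    # u_{i+1} - u_i
    # limited anti-diffusive correction at interface i+1/2:
    # minmod(du_plus, du_minus) equals phi(r)*du_plus with phi = minmod(1, r)
    corr = 0.5 * c * (1.0 - c) * minmod(du_plus, du_minus)
    flux = c * u + corr             # upwind flux plus limited correction
    return u - (flux - np.roll(flux, 1))

# Advect a square wave exactly one period: the profile returns smeared
# but without over- or undershoot.
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)
u = u0.copy()
c = 0.5
for _ in range(int(n / c)):  # 400 steps of half a cell each
    u = tvd_lax_wendroff_step(u, c)
```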
Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping
Rehak, M.; Skaloud, J.
2015-08-01
In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components into which we integrate an open-source autopilot, a customized mass-market camera, and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.
Accurate Development of Thermal Neutron Scattering Cross Section Libraries
Hawari, Ayman; Dunn, Michael
2014-06-10
The objective of this project is to develop a holistic (fundamental and accurate) approach for generating thermal neutron scattering cross section libraries for a collection of important neutron moderators and reflectors. The primary components of this approach are the physical accuracy and completeness of the generated data libraries. Consequently, for the first time, thermal neutron scattering cross section data libraries will be generated that are based on accurate theoretical models, that are carefully benchmarked against experimental and computational data, and that contain complete covariance information that can be used in propagating the data uncertainties through the various components of the nuclear design and execution process. To achieve this objective, computational and experimental investigations will be performed on a carefully selected subset of materials that play a key role in all stages of the nuclear fuel cycle.
Accurate Load Modeling Based on Analytic Hierarchy Process
Zhenshu Wang
2016-01-01
Establishing an accurate load model is a critical problem in power system modeling, with significant implications for power system digital simulation and dynamic security analysis. The synthesis load model (SLM) considers the impact of the power distribution network and compensation capacitors, while the randomness of the power load is more precisely described by the traction power system load model (TPSLM). On the basis of these two load models, a load modeling method that combines synthesis load with traction power load is proposed in this paper. This method uses the analytic hierarchy process (AHP) to combine the two load models. Weight coefficients of the two models can be calculated after formulating criteria and judgment matrices, and a synthesis model is then established from the weight coefficients. The effectiveness of the proposed method was examined through simulation. The results show that accurate load modeling based on AHP can effectively improve the accuracy of the load model and prove the validity of this method.
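The AHP weighting step works by extracting priority weights from a pairwise judgment matrix as its principal eigenvector and checking Saaty's consistency ratio. The sketch below is the generic AHP calculation, not the paper's specific criteria or judgment values.

```python
import numpy as np

# Saaty's random-index values for the consistency ratio, n = 1..9
RANDOM_INDEX = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45]

def ahp_weights(judgment):
    """Priority weights from a pairwise judgment matrix via the principal
    eigenvector, plus the consistency ratio (CR < 0.1 is acceptable)."""
    A = np.asarray(judgment, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)
    cr = ci / RANDOM_INDEX[n - 1] if RANDOM_INDEX[n - 1] > 0 else 0.0
    return w, cr

# Two models compared with a single judgment: "model 1 is 3x as important".
# 2x2 reciprocal matrices are always perfectly consistent (CR = 0).
w, cr = ahp_weights([[1.0, 3.0], [1.0 / 3.0, 1.0]])
```

The two load models would then be blended with these weights, e.g. a synthesis model w[0]·SLM + w[1]·TPSLM.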
Accurate adjoint design sensitivities for nano metal optics.
Hansen, Paul; Hesselink, Lambertus
2015-09-01
We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature we obtain highly accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics. PMID:26368483
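The adjoint idea itself is compactly shown on a toy linear system: for J = cᵀu with A(p)u = b, one adjoint solve Aᵀλ = c gives dJ/dp = -λᵀ(dA/dp)u for every parameter p, replacing one forward solve per parameter. This is a generic illustration, not an electromagnetic FEM code.

```python
import numpy as np

def adjoint_sensitivity(A, dA_dp, b, c):
    """Sensitivity of J = c^T u, where A u = b, to a parameter p entering
    A. Differentiating A u = b gives du/dp = -A^{-1} (dA/dp) u, so
    dJ/dp = -lambda^T (dA/dp) u with the adjoint solve A^T lambda = c."""
    u = np.linalg.solve(A, b)
    lam = np.linalg.solve(A.T, c)  # adjoint solve
    return -lam @ (dA_dp @ u)

# Verify against a central finite difference on a small system.
p = 2.0
A0 = np.array([[4.0, 1.0], [1.0, 3.0]])
P = np.array([[1.0, 0.0], [0.0, 2.0]])  # A(p) = A0 + p*P, so dA/dp = P
b = np.array([1.0, 2.0])
c = np.array([1.0, -1.0])

dJ_adj = adjoint_sensitivity(A0 + p * P, P, b, c)
eps = 1e-6
J = lambda pp: c @ np.linalg.solve(A0 + pp * P, b)
dJ_fd = (J(p + eps) - J(p - eps)) / (2 * eps)
```

The corner problem described above enters through (dA/dp)u: near sharp metal corners the discrete fields in that product carry the largest numerical error, which is why rounding the geometry helps.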
Efficient and Accurate Robustness Estimation for Large Complex Networks
Wandelt, Sebastian
2016-01-01
Robustness estimation is critical for the design and maintenance of resilient networks, one of the global challenges of the 21st century. Existing studies exploit network metrics to generate attack strategies, which simulate intentional attacks on a network, and compute a metric-induced robustness estimation. While some metrics are easy to compute, e.g. degree centrality, other, more accurate metrics require considerable computation effort, e.g. betweenness centrality. We propose a new algorithm for estimating the robustness of a network in sub-quadratic time, i.e., significantly faster than betweenness centrality. Experiments on real-world networks and random networks show that our algorithm estimates the robustness of networks close to or even better than betweenness centrality, while being orders of magnitude faster. Our work contributes towards scalable, yet accurate methods for robustness estimation of large complex networks.
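A metric-induced robustness estimate can be sketched as follows: remove nodes in the order given by a metric (here static degree, the cheap end of the spectrum) and average the largest-connected-component fraction over all removals. This is the generic simulated-attack measure, not the paper's sub-quadratic estimator.

```python
from collections import deque

def largest_component(adj, removed):
    """Size of the largest connected component, ignoring removed nodes
    (adj is a dict: node -> list of neighbours)."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        size, queue = 0, deque([start])
        seen.add(start)
        while queue:
            node = queue.popleft()
            size += 1
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        best = max(best, size)
    return best

def degree_attack_robustness(adj):
    """Remove nodes by (static) degree, highest first, and average the
    largest-component fraction over all removal steps."""
    n = len(adj)
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    removed, total = set(), 0.0
    for v in order:
        removed.add(v)
        total += largest_component(adj, removed) / n
    return total / n

# A 6-node star collapses as soon as its hub is removed.
star = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}
R = degree_attack_robustness(star)
```

Recomputing components after every removal is what makes the naive measure expensive on large graphs, motivating the faster estimators discussed above.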
Self-correction coil: operation mechanism of self-correction coil
We discuss here the operation mechanism of the self-correction coil with a simple model. At the first stage, for the ideal self-correction coil, we calculate the self-inductance L of the self-correction coil, the mutual inductance M between the error-field coil and the self-correction coil, and, using the model, the current induced in the self-correction coil by the external magnetic error field together with the magnetic field induced by the self-correction coil. At the second stage, we extend this calculation to the non-ideal self-correction coil, where we find that the wire distribution of the self-correction coil is important for achieving a sufficiently high self-correction effect. As a measure of the completeness of the self-correction effect, we introduce the efficiency η of the self-correction coil, defined as the ratio of the magnetic field induced by the self-correction coil to the error field. As examples, we calculate L, M, and η for two cases: a single-block approximation of the self-correction coil winding and a two-block approximation. By choosing adequate angles of the self-correction coil winding, we can get about 98% efficiency for the single-block approximation and 99.8% for the two-block approximation. This means that by using the self-correction coil we can improve the field quality by about two orders of magnitude.
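The lumped model above can be sketched directly: flux conservation in the shorted self-correction coil gives an induced current I_sc = -(M/L)·I_err, and the efficiency is the ratio of the field that current produces to the error field. The field-per-unit-current constants and all numerical values below are illustrative assumptions, not a real magnet design.

```python
def self_correction_efficiency(L, M, k_sc, k_err):
    """Efficiency of a shorted self-correction coil in the simple lumped
    model: flux conservation gives I_sc = -(M/L) * I_err, and eta is the
    magnitude of the induced field divided by the error field.
    k_sc, k_err: field per unit current (T/A) of the self-correction and
    error-field coils at the point of interest (assumed constants)."""
    return (M * k_sc) / (L * k_err)

# With M*k_sc close to L*k_err the coil cancels ~98% of the error field,
# comparable to the single-block winding discussed above.
eta = self_correction_efficiency(L=1.0e-3, M=4.9e-4, k_sc=2.0e-2, k_err=1.0e-2)
```

The formula makes the text's point explicit: η depends on the winding through M and k_sc, which is why the wire distribution matters for reaching 98% versus 99.8%.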
Rulison Site corrective action report
NONE
1996-09-01
Project Rulison was a joint US Atomic Energy Commission (AEC) and Austral Oil Company (Austral) experiment, conducted under the AEC's Plowshare Program, to evaluate the feasibility of using a nuclear device to stimulate natural gas production in low-permeability gas-producing geologic formations. The experiment was conducted on September 10, 1969, and consisted of detonating a 40-kiloton nuclear device at a depth of 2,568 m below ground surface (BGS). This Corrective Action Report describes the cleanup of petroleum hydrocarbon- and heavy-metal-contaminated sediments from an old drilling effluent pond and characterization of the mud pits used during drilling of the R-EX well at the Rulison Site. The Rulison Site is located approximately 65 kilometers (40 miles) northeast of Grand Junction, Colorado. The effluent pond was used for the storage of drilling mud during drilling of the emplacement hole for the 1969 gas stimulation test conducted by the AEC. This report also describes the activities performed to determine whether contamination is present in mud pits used during the drilling of well R-EX, the gas production well drilled at the site to evaluate the effectiveness of the detonation in stimulating gas production. The investigation activities described in this report were conducted during the autumn of 1995, concurrent with the cleanup of the drilling effluent pond. This report describes the activities performed during the soil investigation and provides the analytical results for the samples collected during that investigation.
Hypernatremia: Correction Rate and Hemodialysis
Saima Nur
2014-01-01
Severe hypernatremia is defined as a serum sodium level above 152 mEq/L, with a mortality rate ≥60%. An 85-year-old gentleman was brought to the emergency room with an altered level of consciousness after refusing to eat for a week at a skilled nursing facility. On admission the patient was nonverbal, with stable vital signs, and was responsive only to painful stimuli. Laboratory evaluation was significant for a serum sodium of 188 mmol/L and a water deficit of 12.0 L. The patient was admitted to the medical intensive care unit and, after an inadequate response to suboptimal fluid repletion, hemodialysis was used to correct the hypernatremia. Within the first fourteen hours, the sodium concentration changed only 1 mEq/L with fluid repletion; however, the concentration dropped by more than 20 mEq/L within two hours during hemodialysis. Despite such a drastic drop in sodium concentration, the patient did not develop any neurological sequelae and was at baseline mental status at the time of discharge.
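The 12.0 L figure is consistent with the classic free-water-deficit estimate, TBW × (Na/140 − 1). The 70 kg weight and 0.5 total-body-water fraction below are assumptions chosen to reproduce the case's numbers, not data from the report.

```python
def free_water_deficit(weight_kg, serum_na, tbw_fraction=0.5, target_na=140.0):
    """Classic free-water-deficit estimate in litres:
    TBW * (Na/target - 1), with TBW = tbw_fraction * weight.
    tbw_fraction ~0.5 is a typical value for elderly men (illustrative,
    not individualized)."""
    return tbw_fraction * weight_kg * (serum_na / target_na - 1.0)

# An assumed 70 kg elderly man with Na 188 mmol/L: deficit ~12 L,
# matching the case above.
deficit = free_water_deficit(70.0, 188.0)
```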
A novel automated image analysis method for accurate adipocyte quantification
Osman, Osman S.; Selway, Joanne L; Kępczyńska, Małgorzata A; Stocker, Claire J.; O’Dowd, Jacqueline F; Cawthorne, Michael A.; Arch, Jonathan RS; Jassim, Sabah; Langlands, Kenneth
2013-01-01
Increased adipocyte size and number are associated with many of the adverse effects observed in metabolic disease states. While methods to quantify such changes in the adipocyte are of scientific and clinical interest, manual methods to determine adipocyte size are both laborious and intractable to large scale investigations. Moreover, existing computational methods are not fully automated. We, therefore, developed a novel automatic method to provide accurate measurements of the cross-section...
Combinatorial Approaches to Accurate Identification of Orthologous Genes
Shi, Guanqun
2011-01-01
The accurate identification of orthologous genes across different species is a critical and challenging problem in comparative genomics and has a wide spectrum of biological applications including gene function inference, evolutionary studies and systems biology. During the past several years, many methods have been proposed for ortholog assignment based on sequence similarity, phylogenetic approaches, synteny information, and genome rearrangement. Although these methods share many commonly a...
Strategy Guideline. Accurate Heating and Cooling Load Calculations
Burdick, Arlan [IBACOS, Inc., Pittsburgh, PA (United States)
2011-06-01
This guide presents the key criteria required to create accurate heating and cooling load calculations and offers examples of the implications when inaccurate adjustments are applied to the HVAC design process. The guide shows, through realistic examples, how various defaults and arbitrary safety factors can lead to significant increases in the load estimate. Emphasis is placed on the risks incurred from inaccurate adjustments or ignoring critical inputs of the load calculation.
Evaluation of accurate eye corner detection methods for gaze estimation
Bengoechea, Jose Javier; Cerrolaza, Juan J.; Villanueva, Arantxa; Cabeza, Rafael
2014-01-01
Accurate detection of iris center and eye corners appears to be a promising approach for low cost gaze estimation. In this paper we propose novel eye inner corner detection methods. Appearance and feature based segmentation approaches are suggested. All these methods are exhaustively tested on a realistic dataset containing images of subjects gazing at different points on a screen. We have demonstrated that a method based on a neural network presents the best performance even in light changin...
Building with Drones: Accurate 3D Facade Reconstruction using MAVs
Daftry, Shreyansh; Hoppe, Christof; Bischof, Horst
2015-01-01
Automatic reconstruction of 3D models from images using multi-view Structure-from-Motion methods has been one of the most fruitful outcomes of computer vision. These advances, combined with the growing popularity of Micro Aerial Vehicles as an autonomous imaging platform, have made 3D vision tools ubiquitous for a large number of Architecture, Engineering and Construction applications, among audiences mostly unskilled in computer vision. However, to obtain high-resolution and accurate reconstruc...
Mouse models of human AML accurately predict chemotherapy response
Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.
2009-01-01
The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to co...
Accurate Multisteps Traffic Flow Prediction Based on SVM
Zhang Mingheng; Zhen Yaobao; Hui Ganglong; Chen Gang
2013-01-01
Accurate traffic flow prediction is a prerequisite for realizing intelligent traffic control and guidance, and it is also an objective requirement of intelligent traffic management. Owing to the strongly nonlinear, stochastic, time-varying characteristics of urban transport systems, artificial intelligence methods such as the support vector machine (SVM) are now receiving more and more attention in this research field. Compared with the traditional single-step prediction method, the mul...
Accurate calibration of stereo cameras for machine vision
Li, Liangfu; Feng, Zuren; Feng, Yuanjing
2004-01-01
Camera calibration is an important task for machine vision, whose goal is to obtain the internal and external parameters of each camera. With these parameters, the 3D positions of a scene point, which is identified and matched in two stereo images, can be determined by the triangulation theory. This paper presents a new accurate estimation of CCD camera parameters for machine vision. We present a fast technique to estimate the camera center with special arrangement of calibration target and t...
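Once the internal and external parameters are known, triangulation in the rectified two-camera case reduces to the classic disparity relation Z = f·B/d. A minimal sketch of that relation (the numeric values are illustrative, not from the paper):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a matched point from a rectified stereo pair: Z = f * B / d,
    with focal length in pixels, baseline in metres, disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: f = 800 px, baseline 0.1 m, disparity 20 px -> depth 4.0 m.
print(stereo_depth(800, 0.1, 20))  # → 4.0
```

Note how depth resolution degrades as disparity shrinks, which is why accurate estimation of the camera parameters matters for distant points.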
Calibration Techniques for Accurate Measurements by Underwater Camera Systems
Mark Shortis
2015-01-01
Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation a...
Fast and Accurate Bilateral Filtering using Gauss-Polynomial Decomposition
Chaudhury, Kunal N.
2015-01-01
The bilateral filter is a versatile non-linear filter that has found diverse applications in image processing, computer vision, computer graphics, and computational photography. A widely-used form of the filter is the Gaussian bilateral filter in which both the spatial and range kernels are Gaussian. A direct implementation of this filter requires $O(\sigma^2)$ operations per pixel, where $\sigma$ is the standard deviation of the spatial Gaussian. In this paper, we propose an accurate approxi...
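As a baseline for what the Gauss-polynomial decomposition accelerates, a direct (brute-force) Gaussian bilateral filter in one dimension looks as follows; this is a generic sketch with arbitrary kernel widths, not the paper's fast algorithm:

```python
import math

def bilateral_1d(signal, sigma_s=2.0, sigma_r=0.2):
    """Direct Gaussian bilateral filter on a 1-D signal: each output sample
    is a spatially- and range-weighted average of its neighbours."""
    radius = int(3 * sigma_s)
    out = []
    for i, v in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            # Product of the spatial kernel (distance i-j) and the
            # range kernel (intensity difference v - signal[j]).
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((v - signal[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# A step edge survives filtering: the range kernel suppresses
# contributions from across the edge.
step = [0.0] * 10 + [1.0] * 10
filtered = bilateral_1d(step)
```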
Accurate Insertion Loss Measurements of the Juno Patch Array Antennas
Chamberlain, Neil; Chen, Jacqueline; Hodges, Richard; Demas, John
2010-01-01
This paper describes two independent methods for estimating the insertion loss of patch array antennas that were developed for the Juno Microwave Radiometer instrument. One method is based principally on pattern measurements while the other method is based solely on network analyzer measurements. The methods are accurate to within 0.1 dB for the measured antennas and show good agreement (to within 0.1 dB) with separate radiometric measurements.
Dejavu: An Accurate Energy-Efficient Outdoor Localization System
Aly, Heba; Youssef, Moustafa
2013-01-01
We present Dejavu, a system that uses standard cell-phone sensors to provide accurate and energy-efficient outdoor localization suitable for car navigation. Our analysis shows that different road landmarks have a unique signature on cell-phone sensors; for example, going inside tunnels, moving over bumps, going up a bridge, and even potholes all affect the inertial sensors on the phone in a unique pattern. Dejavu employs a dead-reckoning localization approach and leverages these road landmark...
Accurate Parameter Estimation for Unbalanced Three-Phase System
Yuan Chen; Hing Cheung So
2014-01-01
Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newt...
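The αβ-transformation mentioned above is the standard Clarke transformation. A minimal amplitude-invariant sketch, independent of the paper's NLS estimator (the sample values are illustrative):

```python
import math

def clarke_transform(va, vb, vc):
    """Amplitude-invariant alpha-beta (Clarke) transformation mapping
    three-phase quantities onto a pair of orthogonal signals."""
    alpha = (2.0 / 3.0) * (va - 0.5 * vb - 0.5 * vc)
    beta = (2.0 / 3.0) * (math.sqrt(3) / 2.0) * (vb - vc)
    return alpha, beta

# For a balanced three-phase set sampled at phase angle theta, the result
# traces a circle of the phase amplitude: alpha = A*cos(theta),
# beta = A*sin(theta).
A, theta = 10.0, 0.7
va = A * math.cos(theta)
vb = A * math.cos(theta - 2 * math.pi / 3)
vc = A * math.cos(theta + 2 * math.pi / 3)
alpha, beta = clarke_transform(va, vb, vc)
```

Frequency, phase, and amplitude estimation then operate on the orthogonal pair (alpha, beta) rather than on the three raw waveforms.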
Accurate, inexpensive testing of laser pointer power for safe operation
An accurate, inexpensive test-bed for the measurement of optical power emitted from handheld lasers is described. The setup consists of a power meter, optical bandpass filters, an adjustable iris and self-centering lens mounts. We demonstrate this test-bed by evaluating the output power of 23 laser pointers with respect to the limits imposed by the US Code of Federal Regulations. We find a compliance rate of only 26%. A discussion of potential laser pointer hazards is included. (paper)
DOMAC: an accurate, hybrid protein domain prediction server
Cheng, Jianlin
2007-01-01
Protein domain prediction is important for protein structure prediction, structure determination, function annotation, mutagenesis analysis and protein engineering. Here we describe an accurate protein domain prediction server (DOMAC) combining both template-based and ab initio methods. The preliminary version of the server was ranked among the top domain prediction servers in the seventh edition of Critical Assessment of Techniques for Protein Structure Prediction (CASP7), 2006. DOMAC server...
A multiple more accurate Hardy-Littlewood-Polya inequality
Qiliang Huang
2012-11-01
By introducing multi-parameters and conjugate exponents and using the Euler-Maclaurin summation formula, we estimate the weight coefficient and prove a multiple more accurate Hardy-Littlewood-Polya (H-L-P) inequality, which is an extension of some earlier published results. We also prove that the constant factor in the new inequality is the best possible, and obtain its equivalent forms.
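For context, one classical form of the Hardy-Littlewood-Polya inequality that such results refine (stated here from the standard literature, not taken from the paper itself) is, for non-negative square-summable sequences:

```latex
\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\frac{a_m b_n}{\max(m,n)}
< 4\left(\sum_{m=1}^{\infty}a_m^2\right)^{1/2}
   \left(\sum_{n=1}^{\infty}b_n^2\right)^{1/2},
```

where the constant 4 is best possible. "More accurate" versions of this type sharpen the weight coefficient, here estimated via the Euler-Maclaurin summation formula.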
Shock Emergence in Supernovae: Limiting Cases and Accurate Approximations
Ro, Stephen
2013-01-01
We examine the dynamics of accelerating normal shocks in stratified planar atmospheres, providing accurate fitting formulae for the scaling index relating shock velocity to the initial density and for the post-shock acceleration factor as functions of the polytropic and adiabatic indices which parameterize the problem. In the limit of a uniform initial atmosphere there are analytical formulae for these quantities. In the opposite limit of a very steep density gradient the solutions match the outcome of shock acceleration in exponential atmospheres.
An accurate and robust gyroscope-based pedometer.
Lim, Yoong P; Brown, Ian T; Khoo, Joshua C T
2008-01-01
Pedometers are known to have step-estimation issues, mainly attributable to their acceleration-based sensing. A pedometer based on a micro-machined gyroscope (which has better immunity to acceleration) is proposed. Through syntactic recognition using a priori knowledge of the human shank's dynamics, and temporally precise detection of heel strikes permitted by wavelet decomposition, an accurate and robust pedometer is obtained. PMID:19163737
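The heel-strike idea can be illustrated with a far simpler peak detector than the paper's wavelet approach. In this sketch the threshold and refractory window are arbitrary illustrative values, and the "signal" is synthetic:

```python
def count_steps(gyro, threshold=1.5, refractory=20):
    """Count heel strikes as threshold crossings of shank angular rate,
    with a refractory window (in samples) so one strike is not
    counted twice."""
    steps, last = 0, -refractory
    for i, w in enumerate(gyro):
        if w > threshold and i - last >= refractory:
            steps += 1
            last = i
    return steps

# Synthetic angular-rate trace: three well-separated spikes -> three steps.
signal = [0.0] * 100
for k in (10, 50, 90):
    signal[k] = 3.0
print(count_steps(signal))  # → 3
```

A wavelet decomposition, as used in the paper, localizes the strike in time far more robustly than a raw threshold, but the counting logic is the same in spirit.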
Accurate calculation of thermal noise in multilayer coating
Gurkovsky, Alexey; Vyatchanin, Sergey
2010-01-01
We derive accurate formulas for thermal fluctuations in multilayer interferometric coating taking into account light propagation inside the coating. In particular, we calculate the reflected wave phase as a function of small displacements of the boundaries between the layers using transmission line model for interferometric coating and derive formula for spectral density of reflected phase in accordance with Fluctuation-Dissipation Theorem. We apply the developed approach for calculation of t...
Novel multi-beam radiometers for accurate ocean surveillance
Cappellin, C.; Pontoppidan, K.; Nielsen, P. H.;
2014-01-01
Novel antenna architectures for real-aperture multi-beam radiometers providing high resolution and high sensitivity for accurate sea surface temperature (SST) and ocean vector wind (OVW) measurements are investigated. On the basis of the radiometer requirements set for future SST/OVW missions, conical scanners and push-broom antennas are compared. The comparison covers reflector optics and focal plane array configuration.
Strategy for accurate liver intervention by an optical tracking system
Lin, Qinyong; Yang, Rongqian; Cai, Ken; Guan, Peifeng; Xiao, Weihu; Wu, Xiaoming
2015-01-01
Image-guided navigation for radiofrequency ablation of liver tumors requires the accurate guidance of needle insertion into a tumor target. The main challenge of image-guided navigation for radiofrequency ablation of liver tumors is the occurrence of liver deformations caused by respiratory motion. This study reports a strategy of real-time automatic registration to track custom fiducial markers glued onto the surface of a patient’s abdomen to find the respiratory phase, in which the static p...
Efficient and Accurate Path Cost Estimation Using Trajectory Data
Dai, Jian; Yang, Bin; Guo, Chenjuan; Jensen, Christian S.
2015-01-01
Using the growing volumes of vehicle trajectory data, it becomes increasingly possible to capture time-varying and uncertain travel costs in a road network, including travel time and fuel consumption. The current paradigm represents a road network as a graph, assigns weights to the graph's edges by fragmenting trajectories into small pieces that fit the underlying edges, and then applies a routing algorithm to the resulting graph. We propose a new paradigm that targets more accurate and more ...
Accurate molecular classification of cancer using simple rules
Gotoh Osamu; Wang Xiaosheng
2009-01-01
Background: One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensional gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often ...
Accurate Identification of Fear Facial Expressions Predicts Prosocial Behavior
Marsh, Abigail A.; Kozak, Megan N.; Ambady, Nalini
2007-01-01
The fear facial expression is a distress cue that is associated with the provision of help and prosocial behavior. Prior psychiatric studies have found deficits in the recognition of this expression by individuals with antisocial tendencies. However, no prior study has shown accuracy for recognition of fear to predict actual prosocial or antisocial behavior in an experimental setting. In 3 studies, the authors tested the prediction that individuals who recognize fear more accurately will beha...
Continuous glucose monitors prove highly accurate in critically ill children
Bridges, Brian C.; Preissig, Catherine M; Maher, Kevin O.; Rigby, Mark R
2010-01-01
Introduction: Hyperglycemia is associated with increased morbidity and mortality in critically ill patients, and strict glycemic control has become standard care for adults. Recent studies have questioned the optimal targets for such management and reported increased rates of iatrogenic hypoglycemia in both critically ill children and adults. The ability to provide accurate, real-time continuous glucose monitoring would improve the efficacy and safety of this practice in critically ill patients...
Accurate quantum state estimation via "Keeping the experimentalist honest"
Blume-Kohout, Robin; Hayden, Patrick
2006-01-01
In this article, we derive a unique procedure for quantum state estimation from a simple, self-evident principle: an experimentalist's estimate of the quantum state generated by an apparatus should be constrained by honesty. A skeptical observer should subject the estimate to a test that guarantees that a self-interested experimentalist will report the true state as accurately as possible. We also find a non-asymptotic, operational interpretation of the quantum relative entropy function.
A highly accurate method to solve Fisher’s equation
Mehdi Bastani; Davod Khojasteh Salkuyeh
2012-03-01
In this study, we present a new and very accurate numerical method to approximate Fisher-type equations. First, the spatial derivative in the proposed equation is approximated by a sixth-order compact finite difference (CFD6) scheme. Second, we solve the obtained system of differential equations using a third-order total variation diminishing Runge-Kutta (TVD-RK3) scheme. Numerical examples are given to illustrate the efficiency of the proposed method.
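For orientation, a deliberately low-order sketch of the same Fisher equation u_t = u_xx + u(1 - u) follows, using second-order central differences and an explicit Euler step; this is a generic scheme for illustration, not the authors' CFD6/TVD-RK3 method:

```python
def fisher_step(u, dx, dt):
    """Advance u_t = u_xx + u(1 - u) by one explicit Euler step with a
    second-order central difference in space; the two boundary values
    are held fixed (Dirichlet conditions)."""
    n = len(u)
    new = u[:]
    for i in range(1, n - 1):
        uxx = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx ** 2
        new[i] = u[i] + dt * (uxx + u[i] * (1 - u[i]))
    return new

# The homogeneous state u = 1 is a fixed point of the reaction term
# u(1 - u), so the scheme leaves it unchanged.
u = [1.0] * 11
u = fisher_step(u, dx=0.1, dt=0.001)
```

The higher-order CFD6 spatial stencil and TVD-RK3 time integrator of the paper replace the central difference and Euler step here to gain accuracy and stability.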
Accurate Method for Determining Adhesion of Cantilever Beams
Michalske, T.A.; de Boer, M.P.
1999-01-08
Using surface micromachined samples, we demonstrate the accurate measurement of cantilever beam adhesion by using test structures which are adhered over long attachment lengths. We show that this configuration has a deep energy well, such that a fracture equilibrium is easily reached. When compared to the commonly used method of determining the shortest attached beam, the present method is much less sensitive to variations in surface topography or to details of capillary drying.
A robust and accurate formulation of molecular and colloidal electrostatics
Sun, Qiang; Klaseboer, Evert; Chan, Derek Y. C.
2016-08-01
This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries with the benefits of the following: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures, (ii) being able to use quadratic or spline function surface elements to represent the surface more accurately and the variation of the functions within each element is represented to a consistent level of precision by appropriate interpolation functions, (iii) being able to calculate electric fields, even at boundaries, accurately and directly from the potential without having to solve hypersingular integral equations and this imparts high precision in calculating the Maxwell stress tensor and consequently, intermolecular or colloidal forces, (iv) a reliable way to handle geometric configurations in which different parts of the boundary can be very close together without being affected by numerical instabilities, therefore potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact, and (v) having the simplicity of a formulation that does not require complex algorithms to handle singularities will result in significant savings in coding effort and in the reduction of opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics.
Robust Small Sample Accurate Inference in Moment Condition Models
Serigne N. Lo; Elvezio Ronchetti
2006-01-01
Procedures based on the Generalized Method of Moments (GMM) (Hansen, 1982) are basic tools in modern econometrics. In most cases, the theory available for making inference with these procedures is based on first order asymptotic theory. It is well-known that the (first order) asymptotic distribution does not provide accurate p-values and confidence intervals in moderate to small samples. Moreover, in the presence of small deviations from the assumed model, p-values and confidence intervals ba...
Is bioelectrical impedance accurate for use in large epidemiological studies?
Merchant Anwar T
2008-09-01
Percentage of body fat is strongly associated with the risk of several chronic diseases, but its accurate measurement is difficult. Bioelectrical impedance analysis (BIA) is a relatively simple, quick and non-invasive technique to measure body composition. It measures body fat accurately in controlled clinical conditions, but its performance in the field is inconsistent. In large epidemiologic studies, simpler surrogate techniques such as body mass index (BMI), waist circumference, and waist-hip ratio are frequently used instead of BIA to measure body fatness. We reviewed the rationale, theory, and technique of recently developed systems such as foot-to-foot (or hand-to-foot) BIA measurement, and the elements that could influence its results in large epidemiologic studies. BIA results are influenced by factors such as the environment, ethnicity, phase of menstrual cycle, and underlying medical conditions. We concluded that BIA measurements validated for specific ethnic groups, populations and conditions can accurately measure body fat in those populations, but not in others, and suggest that for large epidemiological studies with diverse populations, BIA may not be the appropriate choice for body composition measurement unless specific calibration equations are developed for the different groups participating in the study.
Can blind persons accurately assess body size from the voice?
Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka
2016-04-01
Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. PMID:27095264
An accurate determination of the flux within a slab
During the past decade, several articles have been written concerning accurate solutions to the monoenergetic neutron transport equation in infinite and semi-infinite geometries. The numerical formulations found in these articles were based primarily on the extensive theoretical investigations performed by the "transport greats" such as Chandrasekhar, Busbridge, Sobolev, and Ivanov, to name a few. The development of numerical solutions in infinite and semi-infinite geometries represents an example of how mathematical transport theory can be utilized to provide highly accurate and efficient numerical transport solutions. These solutions, or analytical benchmarks, are useful as "industry standards," which provide guidance to code developers and promote learning in the classroom. The high accuracy of these benchmarks is directly attributable to the rapid advancement of the state of computing and computational methods. Transport calculations that were beyond the capability of the "supercomputers" of just a few years ago are now possible at one's desk. In this paper, we again build upon the past to tackle the slab problem, which is of the next level of difficulty in comparison to infinite media problems. The formulation is based on the monoenergetic Green's function, which is the most fundamental transport solution. This method of solution requires a fast and accurate evaluation of the Green's function, which, with today's computational power, is now readily available
Accurate pose estimation using single marker single camera calibration system
Pati, Sarthak; Erat, Okan; Wang, Lejing; Weidert, Simon; Euler, Ekkehard; Navab, Nassir; Fallavollita, Pascal
2013-03-01
Visual marker based tracking is one of the most widely used tracking techniques in Augmented Reality (AR) applications. Generally, multiple square markers are needed to perform robust and accurate tracking. Various marker based methods for calibrating relative marker poses have already been proposed. However, the calibration accuracy of these methods relies on the order of the image sequence and pre-evaluation of pose-estimation errors, making the method offline. Several studies have shown that the accuracy of pose estimation for an individual square marker depends on camera distance and viewing angle. We propose a method to accurately model the error in the estimated pose and translation of a camera using a single marker via an online method based on the Scaled Unscented Transform (SUT). Thus, the pose estimation for each marker can be estimated with highly accurate calibration results independent of the order of image sequences compared to cases when this knowledge is not used. This removes the need for having multiple markers and an offline estimation system to calculate camera pose in an AR application.
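The Scaled Unscented Transform at the heart of the proposed method can be sketched in the scalar case as follows; this is the generic textbook form with standard parameters, not the authors' camera error model:

```python
import math

def scaled_unscented_transform(mean, var, f, alpha=1e-1, beta=2.0, kappa=0.0):
    """Propagate a scalar Gaussian (mean, var) through a nonlinearity f
    using the scaled unscented transform (dimension n = 1)."""
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    spread = math.sqrt((n + lam) * var)
    # 2n + 1 sigma points: the mean and symmetric points around it.
    sigma = [mean, mean + spread, mean - spread]
    wm0 = lam / (n + lam)                    # mean weight of centre point
    wc0 = wm0 + (1 - alpha ** 2 + beta)      # covariance weight of centre
    wi = 1.0 / (2 * (n + lam))               # weight of the outer points
    ys = [f(x) for x in sigma]
    y_mean = wm0 * ys[0] + wi * (ys[1] + ys[2])
    y_var = (wc0 * (ys[0] - y_mean) ** 2
             + wi * ((ys[1] - y_mean) ** 2 + (ys[2] - y_mean) ** 2))
    return y_mean, y_var

# For a linear map f(x) = 2x + 1 the transform is exact:
# N(3, 4) maps to N(7, 16).
m, v = scaled_unscented_transform(3.0, 4.0, lambda x: 2 * x + 1)
```

In the paper's setting, f would be the projection of the marker pose through the camera model, so the transform propagates pose uncertainty without linearization.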
Interacting with image hierarchies for fast and accurate object segmentation
Beard, David V.; Eberly, David H.; Hemminger, Bradley M.; Pizer, Stephen M.; Faith, R. E.; Kurak, Charles; Livingston, Mark
1994-05-01
Object definition is an increasingly important area of medical image research. Accurate and fairly rapid object definition is essential for measuring the size and, perhaps more importantly, the change in size of anatomical objects such as kidneys and tumors. Rapid and fairly accurate object definition is essential for 3D real-time visualization including both surgery planning and Radiation oncology treatment planning. One approach to object definition involves the use of 3D image hierarchies, such as Eberly's Ridge Flow. However, the image hierarchy segmentation approach requires user interaction in selecting regions and subtrees. Further, visualizing and comprehending the anatomy and the selected portions of the hierarchy can be problematic. In this paper we will describe the Magic Crayon tool which allows a user to define rapidly and accurately various anatomical objects by interacting with image hierarchies such as those generated with Eberly's Ridge Flow algorithm as well as other 3D image hierarchies. Preliminary results suggest that fairly complex anatomical objects can be segmented in under a minute with sufficient accuracy for 3D surgery planning, 3D radiation oncology treatment planning, and similar applications. Potential modifications to the approach for improved accuracy are summarized.
Chen, Duan; Cai, Wei; Zinser, Brian; Cho, Min Hyung
2016-09-01
In this paper, we develop an accurate and efficient Nyström volume integral equation (VIE) method for the Maxwell equations for a large number of 3-D scatterers. The Cauchy Principal Values that arise from the VIE are computed accurately using a finite size exclusion volume together with explicit correction integrals consisting of removable singularities. Also, the hyper-singular integrals are computed using interpolated quadrature formulae with tensor-product quadrature nodes for cubes, spheres and cylinders, that are frequently encountered in the design of meta-materials. The resulting Nyström VIE method is shown to have high accuracy with a small number of collocation points and demonstrates p-convergence for computing the electromagnetic scattering of these objects. Numerical calculations of multiple scatterers of cubic, spherical, and cylindrical shapes validate the efficiency and accuracy of the proposed method.
Ahnen, Sandra; Hehn, Anna-Sophia [Institute of Physical Chemistry, Karlsruhe Institute of Technology (KIT), Fritz-Haber-Weg 2, D-76131 Karlsruhe (Germany); Vogiatzis, Konstantinos D. [Institute of Physical Chemistry, Karlsruhe Institute of Technology (KIT), Fritz-Haber-Weg 2, D-76131 Karlsruhe (Germany); Center for Functional Nanostructures, Karlsruhe Institute of Technology (KIT), Wolfgang-Gaede-Straße 1a, D-76131 Karlsruhe (Germany); Trachsel, Maria A.; Leutwyler, Samuel [Department of Chemistry and Biochemistry, University of Bern, Freiestrasse 3, CH-3012 Bern (Switzerland); Klopper, Wim, E-mail: klopper@kit.edu [Institute of Physical Chemistry, Karlsruhe Institute of Technology (KIT), Fritz-Haber-Weg 2, D-76131 Karlsruhe (Germany); Center for Functional Nanostructures, Karlsruhe Institute of Technology (KIT), Wolfgang-Gaede-Straße 1a, D-76131 Karlsruhe (Germany)
2014-09-30
Highlights: • We have computed accurate binding energies of two NH⋯π hydrogen bonds. • We compare to results from dispersion-corrected density-functional theory. • A double-hybrid functional with explicit correlation has been proposed. • First results of explicitly-correlated ring-coupled-cluster theory are presented. • A double-hybrid functional with random-phase approximation is investigated. - Abstract: Using explicitly-correlated coupled-cluster theory with single and double excitations, the intermolecular distances and interaction energies of the T-shaped imidazole⋯benzene and pyrrole⋯benzene complexes have been computed in a large augmented correlation-consistent quadruple-zeta basis set, adding also corrections for connected triple excitations and remaining basis-set-superposition errors. The results of these computations are used to assess other methods such as Møller–Plesset perturbation theory (MP2), spin-component-scaled MP2 theory, dispersion-weighted MP2 theory, interference-corrected explicitly-correlated MP2 theory, dispersion-corrected double-hybrid density-functional theory (DFT), DFT-based symmetry-adapted perturbation theory, the random-phase approximation, explicitly-correlated ring-coupled-cluster-doubles theory, and double-hybrid DFT with a correlation energy computed in the random-phase approximation.
Given the onset of dose escalation and increased planning target volume (PTV) conformity, the requirement for accurate field placement has also increased. This study compares and contrasts a combined offline/online electronic portal imaging (EPI) correction protocol with a complete online correction protocol and assesses their relative effectiveness in managing set-up error. Field placement data were collected on patients receiving radical radiotherapy to the prostate. Ten patients were on an initial combined offline/online correction protocol, followed by another 10 patients on a complete online correction protocol. Analysis of 1480 portal images from 20 patients was carried out, illustrating that a combined offline/online approach can be very effective in dealing with the systematic component of set-up error, but it is only when a complete online correction protocol is employed that both systematic and random set-up errors can be managed. EPI protocols have evolved considerably, and online corrections are now a highly effective tool in the quest for more accurate field placement. This study discusses the clinical workload impact issues that need to be addressed in order for an online correction protocol to be employed, and addresses many of the practical issues that need to be resolved. Management of set-up error is paramount when seeking to dose escalate, and only an online correction protocol can manage both components of set-up error. Both systematic and random errors are important and can be effectively and efficiently managed
Yin, Wang-bao; Zhang, Lei; Wang, Le; Dong, Lei; Ma, Wei-guang; Jia, Suo-tang
2012-01-01
A technique for the accurate measurement of the oxygen content of coal in an air environment using laser-induced breakdown spectroscopy (LIBS) is introduced in the present paper. Coal samples were excited by the laser, and plasma spectra were obtained. By combining the internal standard method, a temperature correction method and multi-line methods, the oxygen content of the coal samples was precisely measured. The measurement precision for oxygen content in coal analysis is within 1.37%, which satisfies the requirements of coal-fired power plants for coal analysis. This method can also be applied in surveying, environmental protection, medicine, materials science, archaeology, food safety, biochemistry and metallurgy. PMID:22497159
Ideal flood field images for SPECT uniformity correction
Since as little as 2.5% camera non-uniformity can cause disturbing artifacts in SPECT imaging, the ideal flood field images for uniformity correction would be made with the collimator in place using a perfectly uniform sheet source. While such a source is not realizable, the equivalent images can be generated by mapping the activity distribution of a Co-57 sheet source and correcting subsequent images of the source with this mapping. Mapping is accomplished by analyzing equal-time images of the source made in multiple precisely determined positions. The ratio of counts detected in the same region of two images is a measure of the ratio of the activities of the two portions of the source imaged in that region. The activity distribution in the sheet source is determined from a set of such ratios. The more source positions imaged in a given time, the more accurate the source mapping, according to the results of a computer simulation. A 1.9 mCi Co-57 sheet source was shifted in 12 mm increments along the horizontal and vertical axes of the camera face to 9 positions on each axis. The source was imaged for 20 min in each position and 214 million total counts were accumulated. The activity distribution of the source, relative to the center pixel, was determined for a 31 x 31 array. The integral uniformity was found to be 2.8%. The RMS error for such a mapping was determined by computer simulation to be 0.46%. The activity distribution was used to correct a high count flood field image for non-uniformities attributable to the Co-57 source. Such a corrected image represents the camera plus collimator response to an almost perfectly uniform sheet source.
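The region-ratio argument in the abstract above can be sketched in a hypothetical 1-D analogue: imaging the same source in two shifted positions lets the ratio of counts in a fixed detector element cancel the camera sensitivity, leaving a chain of activity ratios. All names and numbers below are illustrative, not values from the study.

```python
def relative_activity(image_0, image_1):
    """Recover source activity relative to element 0 from a one-step shift.

    image_0[i] = sensitivity[i] * activity[i]      (source in position 0)
    image_1[i] = sensitivity[i] * activity[i - 1]  (source shifted by one element)
    so image_1[i] / image_0[i] = activity[i - 1] / activity[i]: the camera
    sensitivity cancels, exactly as in the region-ratio argument above.
    """
    rel = [1.0]
    for i in range(1, len(image_0)):
        ratio = image_1[i] / image_0[i]      # activity[i-1] / activity[i]
        rel.append(rel[i - 1] / ratio)       # activity[i] / activity[0]
    return rel

# synthetic check: non-uniform camera, non-uniform source
sens = [1.0, 0.9, 1.1, 0.95, 1.05]
act = [1.0, 1.2, 0.8, 1.1, 0.9]
img0 = [s * a for s, a in zip(sens, act)]
img1 = [0.0] + [sens[i] * act[i - 1] for i in range(1, len(act))]
rel = relative_activity(img0, img1)
```

With a perfect (noise-free) measurement the chained ratios recover the source distribution exactly, regardless of the camera's sensitivity pattern.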
A novel bias correction methodology for climate impact simulations
S. Sippel
2015-10-01
Understanding, quantifying and attributing the impacts of extreme weather and climate events in the terrestrial biosphere is crucial for societal adaptation in a changing climate. However, climate model simulations generated for this purpose typically exhibit biases in their output that hinder any straightforward assessment of impacts. To overcome this issue, various bias correction strategies are routinely used to alleviate climate model deficiencies, most of which have been criticized for physical inconsistency and non-preservation of the multivariate correlation structure. In this study, we introduce a novel, resampling-based bias correction scheme that fully preserves the physical consistency and multivariate correlation structure of the model output. This procedure strongly improves the representation of climatic extremes and variability in a large regional climate model ensemble (HadRM3P, climateprediction.net/weatherathome), which is illustrated for summer extremes in temperature and rainfall over Central Europe. Moreover, we simulate biosphere–atmosphere fluxes of carbon and water using a terrestrial ecosystem model (LPJmL) driven by the bias-corrected climate forcing. The resampling-based bias correction yields strongly improved statistical distributions of carbon and water fluxes, including the extremes. Our results thus highlight the importance of carefully considering statistical moments beyond the mean for climate impact simulations. In conclusion, the present study introduces an approach to alleviate climate model biases in a physically consistent way and demonstrates that this yields strongly improved simulations of climate extremes and associated impacts in the terrestrial biosphere. A wider uptake of our methodology by the climate and impact modelling community therefore seems desirable for accurately quantifying past, current and future extremes.
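The key property of a resampling-based correction, that whole model states are reused rather than transformed, can be sketched in a minimal toy version (an assumed simplification, not the scheme of the paper; all data are illustrative):

```python
def resample_correct(model_days, obs_temps):
    """model_days: (temp, precip) tuples; returns resampled model days whose
    temperatures track the observed values."""
    pool = sorted(model_days, key=lambda d: d[0])
    corrected = []
    for t_obs in sorted(obs_temps):
        # reuse the complete model day whose temperature is closest to the
        # observed one; precipitation comes along unchanged, so the model's
        # temperature-precipitation relationship is preserved by construction
        corrected.append(min(pool, key=lambda d: abs(d[0] - t_obs)))
    return corrected

model = [(30.0, 0.1), (34.0, 0.0), (26.0, 3.2), (31.0, 0.4), (24.0, 5.0)]
obs = [25.0, 27.0, 29.0, 31.0, 23.0]
out = resample_correct(model, obs)
```

Because every corrected day is one of the original model days, physical consistency between variables holds trivially, which is the selling point of resampling over variable-by-variable quantile mapping.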
H. Chen
2012-09-01
Accurate measurements of carbon monoxide (CO) in humid air have been made using the cavity ring-down spectroscopy (CRDS) technique. The measurements of CO mole fractions are determined from the strength of its spectral absorption in the near infrared region (∼1.57 μm) after removing interferences from adjacent carbon dioxide (CO_{2}) and water vapor (H_{2}O) absorption lines. Water correction functions that account for the dilution and pressure-broadening effects as well as absorption line interferences from adjacent CO_{2} and H_{2}O lines have been derived for CO_{2} mole fractions between 360 and 390 ppm. The line interference corrections are independent of CO mole fractions. The dependence of the line interference correction on CO_{2} abundance is estimated to be approximately −0.3 ppb/100 ppm CO_{2} for dry mole fractions of CO. Comparisons of water correction functions from different analyzers of the same type show significant differences, making it necessary to perform instrument-specific water tests for each individual analyzer. The CRDS analyzer was flown on an aircraft in Alaska from April to November in 2011, and the accuracy of the CO measurements by the CRDS analyzer has been validated against discrete NOAA/ESRL flask sample measurements made on board the same aircraft, with a mean difference between integrated in situ and flask measurements of −0.6 ppb and a standard deviation of 2.8 ppb. Preliminary testing of CRDS instrumentation that employs new spectroscopic analysis (available since the beginning of 2012) indicates a smaller water vapor dependence than the models discussed here, but more work is necessary to fully validate the performance. The CRDS technique provides an accurate and low-maintenance method of monitoring the atmospheric dry mole fractions of CO in humid air streams.
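A water correction of the kind described above typically has a linear dilution term plus a quadratic broadening term; a minimal sketch follows. The coefficients A and B are purely illustrative placeholders, since the paper stresses that they must be fitted per instrument:

```python
# illustrative coefficients (instrument-specific in practice; not from the paper)
A = -0.0102   # per %H2O: dilution plus first-order broadening
B = -0.0002   # per (%H2O)**2: second-order term

def co_dry(co_wet_ppb, h2o_percent):
    """Invert the water correction to recover the dry CO mole fraction."""
    return co_wet_ppb / (1.0 + A * h2o_percent + B * h2o_percent ** 2)

def co_wet(co_dry_ppb, h2o_percent):
    """Forward model: CO the analyzer would report in humid air."""
    return co_dry_ppb * (1.0 + A * h2o_percent + B * h2o_percent ** 2)
```

Applying `co_dry` to a reported (wet) value and the simultaneously measured water mole fraction yields the dry mole fraction; the forward/inverse pair round-trips exactly by construction.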
Proof-Carrying Code with Correct Compilers
Appel, Andrew W.
2009-01-01
In the late 1990s, proof-carrying code was able to produce machine-checkable safety proofs for machine-language programs even though (1) it was impractical to prove correctness properties of source programs and (2) it was impractical to prove correctness of compilers. But now it is practical to prove some correctness properties of source programs, and it is practical to prove correctness of optimizing compilers. We can produce more expressive proof-carrying code that can guarantee correctness properties for machine code, not just safety. We will construct program logics for source languages, prove them sound w.r.t. the operational semantics of the input language for a proved-correct compiler, and then use these logics as a basis for proving the soundness of static analyses.
Motion correction in MRI of the brain
Godenschweger, F.; Kägebein, U.; Stucht, D.; Yarach, U.; Sciarra, A.; Yakupov, R.; Lüsebrink, F.; Schulze, P.; Speck, O.
2016-03-01
Subject motion in MRI is a relevant problem in the daily clinical routine as well as in scientific studies. Since the beginning of clinical use of MRI, many research groups have developed methods to suppress or correct motion artefacts. This review focuses on rigid body motion correction of head and brain MRI and its application in diagnosis and research. It explains the sources and types of motion and related artefacts, classifies and describes existing techniques for motion detection, compensation and correction and lists established and experimental approaches. Retrospective motion correction modifies the MR image data during the reconstruction, while prospective motion correction performs an adaptive update of the data acquisition. Differences, benefits and drawbacks of different motion correction methods are discussed.
Effectiveness of Corrective Feedback on Writing
高砚
2012-01-01
This study aims to find out the effectiveness of corrective feedback on ESL writing. By reviewing and analyzing six previous research studies, the author tries to reveal the most effective way to provide corrective feedback for L2 students and the factors that affect the processing of error feedback. Findings indicated that corrective feedback helps students improve their ESL writing in both accuracy and fluency. Furthermore, correction and direct corrective feedback, as well as oral and written meta-linguistic explanation, are the most effective ways to help students improve their writing. However, individual learner differences influence the processing of corrective feedback. Finally, limitations of the present study and suggestions for future research are presented.
Bernaciak, C.; Wackeroth, D.
2012-01-01
The precision measurement of the mass of the $W$ boson is an important goal of the Fermilab Tevatron and the CERN Large Hadron Collider (LHC). It requires accurate theoretical calculations which incorporate both higher-order QCD and electroweak corrections, and also provide an interface to parton-shower Monte Carlo programs which make it possible to realistically simulate experimental data. In this paper, we present a combination of the full ${\\cal O}(\\alpha)$ electroweak corrections of {\\tt ...
Amandeep Singh; Prof. Meenakshi Sharma
2016-01-01
Spell-checking is the process of detecting and correcting incorrectly spelled words in a text. A spell-checking system first detects the incorrect words and then provides the best possible corrections. Such a system combines handcrafted rules of the language for which it is built with a dictionary containing the accurate spellings of words. Better rules and a larger dictionary of words help to improve the rate of e...
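The detect-and-suggest loop described above can be sketched as follows; the tiny dictionary and the ranking by plain Levenshtein edit distance are illustrative assumptions, not the system of the paper:

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def correct(word, dictionary):
    """Detect-and-suggest: return the word itself if known, else the nearest entry."""
    if word in dictionary:
        return word
    return min(dictionary, key=lambda w: edit_distance(word, w))

words = {"spell", "checking", "system", "language", "dictionary"}
```

A real system would add language-specific rules and frequency weighting on top of this distance-based candidate ranking.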
Ran Tao; Ruofu Xiao; Wei Yang; Fujun Wang
2014-01-01
RANS simulation is widely used in the flow prediction of centrifugal pumps. Influenced by impeller rotation and streamline curvature, eddy viscosity models with the turbulence isotropy assumption are not accurate enough. In this study, the Spalart-Shur rotation/curvature correction was applied to the SST k-ω turbulence model. A comparative assessment of the correction was carried out in simulations of a centrifugal pump impeller. CFD results were compared with existing PIV and LDV data under ...
Noncommutative corrections to classical black holes
We calculate leading long-distance noncommutative corrections to the classical Schwarzschild black hole sourced by a massive noncommutative scalar field. The energy-momentum tensor is taken at O(l^4) in the noncommutative parameter l and is treated in the semiclassical (tree-level) approximation. These noncommutative corrections dominate classical post-post-Newtonian corrections if l>1/M_P. However, they are still too small to be observable in present-day experiments.
Noncommutative corrections to classical black holes
Kobakhidze, Archil(ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, The University of Sydney, NSW, 2006, Australia)
2007-01-01
We calculate leading long-distance noncommutative corrections to the classical Schwarzschild black hole which is sourced by a massive noncommutative scalar field. The energy-momentum tensor is taken up to ${\\cal O}(\\ell^4)$ in noncommutative parameter, and is treated in semiclassical (tree level) approximation. These noncommutative corrections can dominate classical post-post-Newtonian corrections providing $\\ell > 1/M_P$, however, they are still too small to be observable in present-day expe...
Rule-Based Software Verification and Correction.
Ballis, Demis
2008-01-01
The increasing complexity of software systems has led to the development of sophisticated formal methodologies for verifying and correcting data and programs. In general, establishing whether a program behaves correctly w.r.t. the original programmer's intention, or checking the consistency and correctness of a large set of data, are not trivial tasks, as witnessed by many case studies in the literature. In this dissertation, we face two challenging problems of verification an...
Higher twist corrections to Bjorken sum rule
Some higher twist corrections to the Bjorken sum rule are estimated in the framework of a quark-diquark model of the nucleon. The parameters of the model have been previously fixed by fitting the measured higher twist corrections to the unpolarized structure function F2(x,Q2). The resulting corrections to the Bjorken sum rule turn out to be negligible. (author). 15 refs, 1 fig
Teacher correction versus peer-marking
Mourente Miguel Mariana Correia
2009-01-01
Written language is undoubtedly used more often than oral language in a variety of contexts, including both professional and academic life. Consequently, developing strategies for correcting compositions and improving students' written production is of vital importance. This article describes an experiment aimed at assessing the two most widely used methods of correction for compositions, traditional teacher correction and peer marking, and their effect on the frequency of errors...
Radiative corrections to pion Compton scattering
Kaiser, N.(Physik Department T39, Technische Universität München, Garching, D-85747, Germany); Friedrich, J. M.
2008-01-01
We calculate the one-photon loop radiative corrections to charged pion Compton scattering, $\\pi^- \\gamma \\to \\pi^- \\gamma $. Ultraviolet and infrared divergencies are both treated in dimensional regularization. Analytical expressions for the ${\\cal O}(\\alpha)$ corrections to the invariant Compton scattering amplitudes, $A(s,u)$ and $B(s,u)$, are presented for 11 classes of contributing one-loop diagrams. Infrared finiteness of the virtual radiative corrections is achieved (in the standard way...
Overburden Corrections for CosmoALEPH
Schmelling, Michael
2006-01-01
The determination of the decoherence curve from coincidence rates between the different CosmoALEPH stations requires, among other corrections, one for the different overburdens, which affect the measured rates. This note describes the calculation of the overburden corrections based on a simple parametrization of the muon flux at sea level and a simple propagation model for muons through the overburden. The results are expressed as corrections to a reference muon flux at a depth of 320 mwe below the surface.
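An overburden correction of the kind described above can be sketched as a depth-intensity relation plus a rescaling to the reference depth. The functional form and constants below are illustrative assumptions, not the parametrization used in the note:

```python
import math

def muon_intensity(depth_mwe):
    """Illustrative depth-intensity relation I(d) ~ d**-alpha * exp(-d / L)."""
    alpha, L = 1.5, 2500.0  # hypothetical shape parameters, not from the note
    return depth_mwe ** -alpha * math.exp(-depth_mwe / L)

def overburden_correction(depth_mwe, ref_depth_mwe=320.0):
    """Factor rescaling a station's measured rate to the 320 mwe reference depth."""
    return muon_intensity(ref_depth_mwe) / muon_intensity(depth_mwe)
```

A station under a thicker overburden sees a lower rate, so its correction factor is greater than one; at exactly 320 mwe the factor is unity by construction.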
Automatic remote correcting system for MOOCS
Rochat, Pierre-Yves
2014-01-01
An automatic correcting system was designed to correct the programming exercises during a Massive Open Online Course (MOOC) about microcontrollers, followed by thousands of students. Built around the MSP430G Launchpad, it corrected more than 30,000 submissions in 7 weeks. This document provides general information about the system, the results obtained during a MOOC on the Coursera.org platform, extensions to remote experiments, and future projects.
Iterative CT shading correction with no prior information
Wu, Pengwei; Sun, Xiaonan; Hu, Hongjie; Mao, Tingyu; Zhao, Wei; Sheng, Ke; Cheung, Alice A.; Niu, Tianye
2015-11-01
Shading artifacts in CT images are caused by scatter contamination, the beam-hardening effect and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge of the relatively uniform CT number distribution within one tissue component. The CT image is first segmented to construct a template image in which each structure is filled with the CT number of a specific tissue type. Then, by subtracting the ideal template from the CT image, the residual image from various error sources is generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes shading artifacts. A compensation map is reconstructed from the filtered line integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan©600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast objects are faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is assisted only by general anatomical information, without relying on prior knowledge. The proposed method is thus practical.
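The iterate-segment-filter-subtract loop above can be illustrated with a 1-D toy version that replaces the forward projection/FDK step with a simple moving-average low-pass filter. Tissue values, the threshold, and the window size are illustrative assumptions, not parameters from the study:

```python
def moving_average(x, w):
    """Simple low-pass filter with edge-shortened windows."""
    half = w // 2
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def segment(image):
    """Map each pixel to the nearest ideal tissue value (two-tissue toy model)."""
    return [0.0 if v < 500.0 else 1000.0 for v in image]

def shading_correct(image, n_iter=5, window=21):
    corrected = list(image)
    for _ in range(n_iter):
        template = segment(corrected)                      # ideal template
        residual = [a - b for a, b in zip(corrected, template)]
        shading = moving_average(residual, window)         # keep low frequencies
        corrected = [a - b for a, b in zip(corrected, shading)]
    return corrected

# piecewise-constant object plus a smooth low-frequency shading ramp
ideal = [0.0] * 20 + [1000.0] * 20 + [0.0] * 20
image = [v - 100.0 + 200.0 * i / 59 for i, v in enumerate(ideal)]
result = shading_correct(image)
```

Each pass removes the smooth component of the residual while leaving the sharp tissue boundaries in the template untouched, so the error shrinks with iteration, mirroring the convergence behaviour described in the abstract.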
QCD corrections to tri-boson production
Lazopoulos, A; Petriello, F J; Lazopoulos, Achilleas; Melnikov, Kirill; Petriello, Frank
2007-01-01
We present a computation of the next-to-leading order QCD corrections to the production of three Z bosons at the LHC. We calculate these corrections using a completely numerical method that combines sector decomposition to extract infrared singularities with contour deformation of the Feynman parameter integrals to avoid internal loop thresholds. The NLO QCD corrections to pp -> ZZZ are approximately 50%, and are badly underestimated by the leading order scale dependence. However, the kinematic dependence of the corrections is minimal in phase space regions accessible at leading order.
Fast, spatially varying CTF correction in TEM
We have developed new methods for contrast transfer function (CTF) correction of tilted and/or thick specimens. In order to achieve higher resolutions in cryo-electron tomography (CryoET), it is necessary to account for the defocus gradient on a tilted specimen and possibly the defocus gradient within a thick specimen. CTF correction methods which account for these defocus differences have recently gained interest. However, to date there is no global CTF correction method (one that processes the entire field of view at once) which can use different inverse filters, e.g. phase flipping or a Wiener filter, and which can do so within a reasonable time for realistic image sizes. We show that the CTF correction methods presented in this paper correctly account for the spatially varying defocus, can employ different inverse filters and are significantly faster (>50×) than existing methods. We provide proof-of-principle implementations of all the presented CTF correction methods online. -- Highlights: ► Computationally efficient method for tilted CTF correction. ► Computationally efficient method for 3D CTF correction. ► CTF correction methods can employ different inverse filters. ► Significant speed improvement (50×).
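The phase-flipping inverse filter mentioned above can be illustrated in a 1-D toy model (spatially invariant, unlike the tilted-specimen case): each Fourier coefficient of the distorted image is multiplied by the sign of the CTF at that frequency. The naive DFT and the cosine-shaped CTF are illustrative only:

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

n = 16
# toy object with components at spatial frequencies 2 and 3
signal = [math.sin(2 * math.pi * 3 * t / n) + 0.5 * math.cos(2 * math.pi * 2 * t / n)
          for t in range(n)]
# toy oscillating CTF, symmetric in k <-> n-k so the image stays real-valued
c = [math.cos(0.35 * min(k, n - k) ** 2) for k in range(n)]

S = dft(signal)
distorted = idft([S[k] * c[k] for k in range(n)])     # microscope's effect

# phase flipping: multiply each Fourier coefficient by the sign of the CTF
D = dft(distorted)
corrected = idft([D[k] * (-1.0 if c[k] < 0 else 1.0) for k in range(n)])
restored_spectrum = dft(corrected)
```

Phase flipping restores the correct sign of every frequency component while leaving the amplitude attenuation |CTF(k)| in place, which is why it is the cheapest of the inverse filters (a Wiener filter would also rescale the amplitudes).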
Higgs Pseudo Observables and Radiative Corrections
Bordone, Marzia; Isidori, Gino; Marzocca, David; Pattori, Andrea
2015-01-01
We show how leading radiative corrections can be implemented in the general description of $h\\to 4\\ell$ decays by means of Pseudo Observables (PO). With the inclusion of such corrections, the PO description of $h\\to 4\\ell$ decays can be matched to next-to-leading-order electroweak calculations both within and beyond the Standard Model (SM). In particular, we demonstrate that with the inclusion of such corrections the complete next-to-leading-order Standard Model prediction for the $h\\to 2e2\\mu$ dilepton mass spectrum is recovered within 1% accuracy. The impact of radiative corrections for non-standard PO is also briefly discussed.
Higgs pseudo observables and radiative corrections
Bordone, Marzia; Marzocca, David; Pattori, Andrea [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Greljo, Admir [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); University of Sarajevo, Faculty of Science, Sarajevo (Bosnia and Herzegovina); Isidori, Gino [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); INFN, Laboratori Nazionali di Frascati, Frascati (Italy)
2015-08-15
We show how leading radiative corrections can be implemented in the general description of h → 4l decays by means of pseudo observables (PO). With the inclusion of such corrections, the PO description of h → 4l decays can be matched to next-to-leading-order electroweak calculations both within and beyond the Standard Model (SM). In particular, we demonstrate that with the inclusion of such corrections the complete next-to-leading-order SM prediction for the h → 2e2μ dilepton mass spectrum is recovered within 1% accuracy. The impact of radiative corrections for non-standard PO is also briefly discussed. (orig.)
The prosody of speech error corrections revisited
Shattuck-Hufnagel, S.; Cutler, A.
1999-01-01
A corpus of digitized speech errors is used to compare the prosody of correction patterns for word-level vs. sound-level errors. Results for both peak F0 and perceived prosodic markedness confirm that speakers are more likely to mark corrections of word-level errors than corrections of sound-level errors, and that errors ambiguous between word level and sound level (such as boat for moat) show correction patterns like those for sound-level errors. This finding increases the plausibility of the...
A Global Correction to PPMXL Proper Motions
Vickers, John J; Grebel, Eva K
2016-01-01
In this paper we notice that extragalactic sources seem to have non-zero proper motions in the PPMXL proper motion catalog. We collect a large, all-sky sample of extragalactic objects and fit their reported PPMXL proper motions to an ensemble of spherical harmonics in magnitude shells. A magnitude dependent proper motion correction is thus constructed. This correction is applied to a set of fundamental radio sources, quasars, and is compared to similar corrections to assess its utility. We publish, along with this paper, code which may be used to correct proper motions in the PPMXL catalog over the full sky which have 2 Micron All Sky Survey photometry.
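The idea above, reduced to its simplest term (the monopole of the spherical-harmonic expansion, with no sky dependence), can be sketched as follows: extragalactic objects should have zero proper motion, so the mean quasar proper motion in each magnitude shell estimates that shell's systematic offset, which is then subtracted from every source. Shell edges and data are illustrative:

```python
def shell_index(mag, edges):
    for i in range(len(edges) - 1):
        if edges[i] <= mag < edges[i + 1]:
            return i
    raise ValueError("magnitude outside shells")

def fit_offsets(quasars, edges):
    """quasars: (mag, pmra, pmdec) tuples; returns per-shell mean proper motions."""
    acc = [[0.0, 0.0, 0] for _ in range(len(edges) - 1)]
    for mag, pmra, pmdec in quasars:
        i = shell_index(mag, edges)
        acc[i][0] += pmra
        acc[i][1] += pmdec
        acc[i][2] += 1
    return [(a / k, b / k) for a, b, k in acc]

def correct_pm(source, offsets, edges):
    mag, pmra, pmdec = source
    dra, ddec = offsets[shell_index(mag, edges)]
    return (mag, pmra - dra, pmdec - ddec)

edges = [10.0, 14.0, 18.0]                      # two illustrative magnitude shells
qsos = [(12.0, 2.1, -1.0), (13.0, 1.9, -1.2),   # shell 0: mean (2.0, -1.1)
        (15.0, -0.5, 0.4), (16.0, -0.7, 0.6)]   # shell 1: mean (-0.6, 0.5)
offsets = fit_offsets(qsos, edges)
star = correct_pm((12.5, 5.0, 3.0), offsets, edges)
```

The published correction fits a full ensemble of spherical harmonics per shell so the offset can vary across the sky; the monopole shown here is only the zeroth-order version of that fit.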
Angelis, G. I.; Kyme, A. Z.; Ryder, W. J.; Fulton, R. R.; Meikle, S. R.
2014-10-01
Attenuation correction in positron emission tomography brain imaging of freely moving animals is a very challenging problem, since the torso of the animal is often within the field of view and introduces a non-negligible attenuating factor that can degrade the quantitative accuracy of the reconstructed images. In the context of unrestrained small animal imaging, estimation of the attenuation correction factors without the need for a transmission scan is highly desirable. An attractive approach that avoids the need for a transmission scan involves the generation of the hull of the animal’s head based on the reconstructed motion corrected emission images. However, this approach ignores the attenuation introduced by the animal’s torso. In this work, we propose a virtual scanner geometry which moves in synchrony with the animal’s head and discriminates between those events that traversed only the animal’s head (and therefore can be accurately compensated for attenuation) and those that might have also traversed the animal’s torso. For each recorded pose of the animal’s head a new virtual scanner geometry is defined, and therefore a new system matrix must be calculated, leading to a time-varying system matrix. This new approach was evaluated on phantom data acquired on the microPET Focus 220 scanner using a custom-made phantom and step-wise motion. Results showed that when the animal’s torso is within the FOV and not appropriately accounted for during attenuation correction, it can lead to bias of up to 10%. Attenuation correction was more accurate when the virtual scanner was employed, leading to improved quantitative estimates and reducing the bias introduced by the extraneous compartment. Although the proposed method requires increased computational resources, it can provide a reliable approach towards quantitatively accurate attenuation correction for freely moving animal studies.
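The discrimination step described above can be sketched geometrically: model the head as a sphere moving with the animal and the torso as the half-space below a plane, keep a line of response (LOR) for hull-based attenuation correction only if it never dips into the torso region, and compute its correction factor from the head chord. Geometry and the attenuation coefficient are illustrative assumptions:

```python
import math

MU_WATER = 0.096  # 1/cm, approximate linear attenuation coefficient at 511 keV

def head_chord(p1, p2, center, radius):
    """Length of the part of segment p1-p2 inside a sphere (the 'head')."""
    d = [b - a for a, b in zip(p1, p2)]
    f = [a - c for a, c in zip(p1, center)]
    dd = sum(x * x for x in d)
    b2 = 2.0 * sum(x * y for x, y in zip(d, f))
    c0 = sum(x * x for x in f) - radius ** 2
    disc = b2 * b2 - 4.0 * dd * c0
    if disc <= 0.0:
        return 0.0                                   # LOR misses the sphere
    t1 = max((-b2 - math.sqrt(disc)) / (2.0 * dd), 0.0)
    t2 = min((-b2 + math.sqrt(disc)) / (2.0 * dd), 1.0)
    return max(0.0, t2 - t1) * math.sqrt(dd)

def attenuation_factor(chord_cm):
    """Correction factor exp(mu * L) for a water-equivalent head chord."""
    return math.exp(MU_WATER * chord_cm)

def traverses_torso(p1, p2, torso_z):
    """True if the straight LOR segment dips below the torso plane z = torso_z."""
    return min(p1[2], p2[2]) < torso_z
```

Events whose LORs pass only through the head sphere get the analytic correction factor; flagged LORs would need the separate handling the paper introduces via its time-varying virtual scanner geometry.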