Sample records for accurate light-time correction

  1. Correcting incompatible DN values and geometric errors in nighttime lights time series images

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Naizhuo [Texas Tech Univ., Lubbock, TX (United States)]; Zhou, Yuyu [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)]; Samson, Eric L. [Mayan Esteem Project, Farmington, CT (United States)]


    The Defense Meteorological Satellite Program's Operational Linescan System (DMSP-OLS) nighttime lights imagery has proven to be a powerful remote sensing tool for monitoring urbanization and assessing socioeconomic activity at large scales. However, incompatible digital number (DN) values and geometric errors severely limit the application of nighttime lights image data to multi-year quantitative research. In this study we extend and improve previous work on inter-calibrating nighttime lights image data to obtain more compatible and reliable nighttime lights time series (NLT) image data for China and the United States (US) through four steps: inter-calibration, geometric correction, steady increase adjustment, and population data correction. We then use gross domestic product (GDP) data to test the processed NLT image data indirectly, and find that sum light (the summed DN value of the pixels in a nighttime lights image) maintains a clear upward trend when GDP growth rates are relatively large but neither increases nor decreases when GDP growth rates are relatively small. As nighttime light is a sensitive indicator of economic activity, the temporally consistent trends between sum light and GDP growth rate imply that the brightness of nighttime lights on the ground is correctly represented by the processed NLT image data. Finally, by analyzing the corrected NLT image data from 1992 to 2008, we find that China experienced marked nighttime lights development in 1992-1997 and 2001-2008, while the US suffered nighttime lights decay over large areas after 2001.
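    The inter-calibration step described above is commonly implemented as a second-order regression of each satellite-year's DN values against a chosen reference year; the sketch below illustrates that form and the sum-light statistic, with placeholder coefficients rather than the study's fitted values.

```python
import numpy as np

def intercalibrate(dn, a, b, c):
    """Second-order inter-calibration DN' = a + b*DN + c*DN^2.

    The coefficients a, b, c are fitted against a reference
    satellite-year; the values used below are placeholders,
    not the study's published coefficients."""
    return a + b * dn + c * dn ** 2

def sum_light(image, mask=None):
    """Summed DN over the (optionally masked) pixels of a nighttime image."""
    img = np.asarray(image, dtype=float)
    if mask is not None:
        img = img[mask]
    return img.sum()
```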


    Institute of Scientific and Technical Information of China (English)

    Li Yanxing; Hu Xinkang; Shuai Ping; Zhang Zhongfu


    The propagation path of satellite signals in the atmosphere is a curve; it is therefore very difficult to calculate its flexure correction accurately, and strict calculating expressions have so far not been derived. In this study, the flexure correction of the refraction curve is divided into two parts and strict calculating expressions are derived for each. Using the standard atmospheric model, the accurate flexure correction of the refraction curve is calculated for different zenith distances Z. On this basis, a calculation model is constructed. This model is simple in structure, convenient to use and highly accurate. When Z is smaller than 85°, the accuracy of the correction is better than 0.06 mm. The flexure correction is basically proportional to tan²Z and increases rapidly with Z. When Z < 50°, the correction is smaller than 0.5 mm and can be neglected; when Z > 50°, the correction must be made. When Z is 85°, 88° and 89°, the corrections are 198 mm, 8.911 m and 28.497 km, respectively. The calculation results show that the correction estimated by Hopfield is correct when Z ≤ 80°, but too small when Z = 89°. The expression in this paper is applicable to any satellite.



  4. Karect: accurate correction of substitution, insertion and deletion errors for next-generation sequencing data

    KAUST Repository

    Allam, Amin


    Motivation: Next-generation sequencing generates large amounts of data affected by errors in the form of substitutions, insertions or deletions of bases. Error correction based on high-coverage information typically improves de novo assembly. Most existing tools can correct substitution errors only; some support insertions and deletions, but accuracy in many cases is low. Results: We present Karect, a novel error correction technique based on multiple alignment. Our approach supports substitution, insertion and deletion errors. It can handle non-uniform coverage as well as moderately covered areas of the sequenced genome. Experiments with data from Illumina, 454 FLX and Ion Torrent sequencing machines demonstrate that Karect is more accurate than previous methods, both in terms of correcting individual-base errors (up to 10% increase in accuracy gain) and post-de novo assembly quality (up to 10% increase in NGA50). We also introduce an improved framework for evaluating the quality of error correction.
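    Karect's actual multiple-alignment machinery is more involved; as a minimal illustration of the underlying idea, the sketch below corrects each read toward the column-wise majority of an already-computed multiple alignment, overwriting a base only when the column shows strong support.

```python
from collections import Counter

def correct_reads(aligned_reads, min_support=0.75):
    """Toy alignment-based error correction (not Karect's algorithm).

    aligned_reads : equal-length strings from a multiple alignment
                    ('-' may be used for gaps).
    A base is overwritten by the column consensus only when the
    consensus base has at least `min_support` fraction of the column.
    """
    cols = list(zip(*aligned_reads))
    out = []
    for read in aligned_reads:
        fixed = []
        for i, base in enumerate(read):
            winner, count = Counter(cols[i]).most_common(1)[0]
            # overwrite only a minority base in a strongly supported column
            if base != winner and count / len(cols[i]) >= min_support:
                fixed.append(winner)
            else:
                fixed.append(base)
        out.append("".join(fixed))
    return out
```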

  5. Accurate and efficient correction of adjacency effects for high resolution imagery: comparison to the Lambertian correction for Landsat (United States)

    Sei, Alain


    The state of the art in atmospheric correction for moderate- and high-resolution sensors is based on assuming that the surface reflectance at the bottom of the atmosphere is uniform. This assumption accounts for multiple scattering but ignores the contribution of neighboring pixels; that is, it ignores adjacency effects. Its great advantage, however, is to substantially reduce the computational cost of atmospheric correction and make the problem computationally tractable. In a recent paper (Sei, 2015), a computationally efficient method was introduced for the correction of adjacency effects through fast FFT-based evaluations of singular integrals and the use of analytic continuation. It was shown that divergent Neumann series can be avoided and accurate results obtained for clear and turbid atmospheres. In this paper we analyze the error of the standard Lambertian atmospheric correction method on Landsat imagery and compare it to our newly introduced method. We show that for high-contrast scenes the state-of-the-art atmospheric correction yields much larger errors than our method.
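    The adjacency (environment) contribution that the Lambertian assumption drops is, to first order, a convolution of the surface reflectance with an atmospheric point-spread function; the sketch below shows the FFT-based evaluation of that neighbourhood average (a generic illustration, not Sei's singular-integral scheme).

```python
import numpy as np

def environment_reflectance(rho, psf):
    """Neighbourhood-averaged reflectance <rho> via FFT convolution.

    rho : 2-D surface reflectance map
    psf : atmospheric point-spread function, same shape as rho,
          normalized so that psf.sum() == 1 (centered; ifftshift
          moves its center to the array origin for the FFT).
    This illustrates the FFT-based neighbourhood term of an
    adjacency-effect correction, not the paper's method.
    """
    return np.real(np.fft.ifft2(
        np.fft.fft2(rho) * np.fft.fft2(np.fft.ifftshift(psf))))
```

    Over a uniform scene the environment term equals the target reflectance, so the adjacency correction vanishes, as expected.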

  6. Significance of accurate diffraction corrections for the second harmonic wave in determining the acoustic nonlinearity parameter

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Hyunjo, E-mail: [Division of Mechanical and Automotive Engineering, Wonkwang University, Iksan, Jeonbuk 570-749 (Korea, Republic of)]; Zhang, Shuzeng; Li, Xiongbing [School of Traffic and Transportation Engineering, Central South University, Changsha, Hunan 410075 (China)]; Barnard, Dan [Center for Nondestructive Evaluation, Iowa State University, Ames, IA 50010 (United States)]


    The accurate measurement of the acoustic nonlinearity parameter β for fluids or solids generally requires corrections for diffraction effects due to the finite-size geometry of the transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and therefore were not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with less than 5% error when the exact second harmonic diffraction corrections are used, together with the negligible attenuation correction effects that follow from the linear frequency dependence of the attenuation coefficients, α₂ ≃ 2α₁.

  7. Accurate Modeling of Organic Molecular Crystals by Dispersion-Corrected Density Functional Tight Binding (DFTB). (United States)

    Brandenburg, Jan Gerit; Grimme, Stefan


    The ambitious goal of organic crystal structure prediction challenges theoretical methods regarding their accuracy and efficiency. Dispersion-corrected density functional theory (DFT-D) is in principle applicable, but the computational demands, for example to compute a huge number of polymorphs, are too high. Here, we demonstrate that this task can be carried out by a dispersion-corrected density functional tight binding (DFTB) method. The semiempirical Hamiltonian with the D3 correction can accurately and efficiently model both solid- and gas-phase inter- and intramolecular interactions at a speedup of two orders of magnitude compared to DFT-D. The mean absolute deviations of interaction (lattice) energies for various databases are typically 2-3 kcal/mol (10-20%), that is, only about two times larger than those for DFT-D. For zero-point phonon energies, small deviations of <0.5 kcal/mol compared to DFT-D are obtained.

  8. Accurate tracking of tumor volume change during radiotherapy by CT-CBCT registration with intensity correction (United States)

    Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon


    In this paper, we propose a CT-CBCT registration method to accurately predict tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although the physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performance on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and on every other fraction's CBCT, and the GTV contours propagated by DIR were compared with these manual segmentations. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean±std in cc) between the average of the manual segmentations and the automatic segmentations are 3.70±2.30 (B-spline), 1.25±1.78 (demons), 0.93±1.14 (optical flow), and 4.39±3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
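    The intensity correction at the core of the method is histogram matching; the sketch below shows the classic sorted-value form of histogram matching applied globally to one patch (the paper applies it locally and iteratively inside the DIR loop).

```python
import numpy as np

def match_histogram(source, reference):
    """Map source intensities onto the reference distribution
    (classic histogram matching via sorted-value lookup).

    The k-th smallest source voxel is assigned the value at the
    corresponding quantile of the reference patch."""
    s_flat = source.ravel()
    order = np.argsort(s_flat)
    matched = np.empty_like(s_flat, dtype=float)
    ref_sorted = np.sort(reference.ravel())
    idx = np.linspace(0, reference.size - 1, s_flat.size).astype(int)
    matched[order] = ref_sorted[idx]
    return matched.reshape(source.shape)
```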

  9. DEM sourcing guidelines for computing 1 Eö accurate terrain corrections for airborne gravity gradiometry (United States)

    Annecchione, Maria; Hatch, David; Hefford, Shane W.


    In this paper we investigate digital elevation model (DEM) sourcing requirements for computing gravity gradiometry terrain corrections accurate to 1 Eötvös (Eö) at observation heights of 80 m or more above ground. Such survey heights are typical in fixed-wing airborne surveying for resource exploration, where the maximum signal-to-noise ratio is sought. We consider the accuracy of terrain corrections relevant for recent commercial airborne gravity gradiometry systems operating at the 10 Eö noise level and for future systems with a target noise level of 1 Eö. We focus on the requirements for the vertical gradient of the vertical component of gravity (Gdd) because this element of the gradient tensor is most commonly interpreted qualitatively and quantitatively. Terrain correction accuracy depends on the bare-earth DEM accuracy and spatial resolution, which in turn depend on the DEM source. Two possible sources are considered: airborne LiDAR and the Shuttle Radar Topography Mission (SRTM). The accuracy of an SRTM DEM is affected by vegetation height; the SRTM footprint is also larger and the DEM resolution thus lower. However, resolution requirements relax as relief decreases. Publicly available LiDAR data and 1 arc-second and 3 arc-second SRTM data were selected over four study areas representing end-member cases of vegetation cover and relief. The four study areas are presented as reference material for processing airborne gravity gradiometry data at the 1 Eö noise level with 50 m spatial resolution. From this investigation we find that to achieve 1 Eö accuracy in the terrain correction at 80 m height, airborne LiDAR data are required even when terrain relief is a few tens of meters and the vegetation is sparse. However, as satellite ranging technologies progress, bare-earth DEMs of sufficient accuracy and resolution may be sourced at lesser cost. We found that a bare-earth DEM of 10 m resolution and 2 m accuracy is sufficient for

  10. Enabling accurate first-principle calculations of electronic properties with a corrected k dot p scheme

    CERN Document Server

    Berland, Kristian


    A computationally inexpensive k·p-based interpolation scheme is developed that can extend the eigenvalues and momentum matrix elements of a sparsely sampled k-point grid into a densely sampled one. Dense sampling, often required to accurately describe transport and optical properties of bulk materials, can be computationally demanding to compute, for instance, in combination with hybrid functionals within density functional theory (DFT) or with perturbative expansions beyond DFT such as the GW method. The scheme is based on solving the k·p method and extrapolating from multiple reference k points. It includes a correction term that reduces the number of empty bands needed and ameliorates band discontinuities. We show that the scheme can be used to generate accurate band structures, densities of states, and dielectric functions. Several examples are given, using traditional and hybrid functionals, with Si, TiNiSn, and Cu as test cases. We illustrate that d-electron and semi-core states, which are partic...

  11. Accurate mask model implementation in optical proximity correction model for 14-nm nodes and beyond (United States)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Farys, Vincent; Huguennet, Frederic; Armeanu, Ana-Maria; Bork, Ingo; Chomat, Michael; Buck, Peter; Schanen, Isabelle


    In a previous work, we demonstrated that the current optical proximity correction model assuming the mask pattern to be analogous to the designed data is no longer valid. An extreme case of line-end shortening shows a gap up to 10 nm difference (at mask level). For that reason, an accurate mask model has been calibrated for a 14-nm logic gate level. A model with a total RMS of 1.38 nm at mask level was obtained. Two-dimensional structures, such as line-end shortening and corner rounding, were well predicted using scanning electron microscopy pictures overlaid with simulated contours. The first part of this paper is dedicated to the implementation of our improved model in current flow. The improved model consists of a mask model capturing mask process and writing effects, and a standard optical and resist model addressing the litho exposure and development effects at wafer level. The second part will focus on results from the comparison of the two models, the new and the regular.

  12. Accurate relative location estimates for the North Korean nuclear tests using empirical slowness corrections (United States)

    Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.


    Declared North Korean nuclear tests in 2006, 2009, 2013 and 2016 were observed seismically at regional and teleseismic distances. Waveform similarity allows the events to be located relatively with far greater accuracy than the absolute locations can be determined from seismic data alone. There is now significant redundancy in the data given the large number of regional and teleseismic stations that have recorded multiple events, and relative location estimates can be confirmed independently by performing calculations on many mutually exclusive sets of measurements. Using a 1-D global velocity model, the distances between the events estimated using teleseismic P phases are found to be approximately 25 per cent shorter than the distances between events estimated using regional Pn phases. The 2009, 2013 and 2016 events all take place within 1 km of each other and the discrepancy between the regional and teleseismic relative location estimates is no more than about 150 m. The discrepancy is much more significant when estimating the location of the more distant 2006 event relative to the later explosions with regional and teleseismic estimates varying by many hundreds of metres. The relative location of the 2006 event is challenging given the smaller number of observing stations, the lower signal-to-noise ratio and significant waveform dissimilarity at some regional stations. The 2006 event is however highly significant in constraining the absolute locations in the terrain at the Punggye-ri test-site in relation to observed surface infrastructure. For each seismic arrival used to estimate the relative locations, we define a slowness scaling factor which multiplies the gradient of seismic traveltime versus distance, evaluated at the source, relative to the applied 1-D velocity model. A procedure for estimating correction terms which reduce the double-difference time residual vector norms is presented together with a discussion of the associated uncertainty. 
The modified
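    The slowness scaling factor defined above multiplies the model traveltime gradient when differential times are converted into inter-event offsets; a minimal sketch with hypothetical numbers:

```python
def offset_from_time_diff(dt_seconds, model_slowness, scale=1.0):
    """Convert a differential traveltime into a relative epicentral
    offset using a (possibly scaled) traveltime gradient.

    model_slowness : dT/dX from the 1-D velocity model at the source, s/km
    scale          : empirical slowness scaling factor (1.0 = model as-is)
    The numbers used below are illustrative, not values from the study.
    """
    return dt_seconds / (scale * model_slowness)

# A 0.1 s differential time with a model gradient of 0.06 s/km; an
# empirical factor > 1 shrinks the inferred inter-event distance.
d_model = offset_from_time_diff(0.1, 0.06)          # ~1.67 km
d_scaled = offset_from_time_diff(0.1, 0.06, 1.25)   # ~1.33 km
```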

  13. Accurate and Simple Time Synchronization and Frequency Offset Correction in OFDM System

    Institute of Scientific and Technical Information of China (English)

    LIU Xiao-ming; JIANG Wei-yu; LIU Yuan-an


    We present a new synchronization scheme for Orthogonal Frequency-Division Multiplexing (OFDM) systems. In this scheme, time synchronization and carrier frequency offset correction can be performed with one and the same training symbol. The time synchronization algorithm is robust and simple to operate, and its performance is independent of the carrier frequency offset. We derive the theoretical error variance of our time synchronization algorithm in an AWGN channel. We also derive the performance lower bound of our frequency offset correction algorithm. The frequency offset correction algorithm is highly accurate, and its performance degrades very little in multipath fading environments.
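    Although the paper's exact algorithm is not reproduced here, the standard way to estimate a carrier frequency offset from a training symbol with two identical halves is to correlate the halves and read the offset from the phase; a noise-free sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 64                                   # half-symbol length
half = rng.standard_normal(L) + 1j * rng.standard_normal(L)
symbol = np.concatenate([half, half])    # training symbol: two identical halves

eps_true = 0.21                          # CFO in units of the subcarrier spacing
n = np.arange(2 * L)
# Noise-free channel: the CFO rotates sample n by exp(j*2*pi*eps*n/N), N = 2L
rx = symbol * np.exp(2j * np.pi * eps_true * n / (2 * L))

# Correlating the halves: each pair differs by a fixed phase pi*eps,
# so the angle of the sum recovers eps exactly (for |eps| < 1).
P = np.sum(np.conj(rx[:L]) * rx[L:])
eps_hat = np.angle(P) / np.pi
```

    With noise, the same correlator remains the maximum-likelihood estimator up to the half-subcarrier ambiguity, which is why repeated-half training symbols are so widely used.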


    Energy Technology Data Exchange (ETDEWEB)

    Beare, Richard; Brown, Michael J. I.; Pimbblet, Kevin, E-mail: [Monash Centre for Astrophysics, Monash University, Clayton, Victoria 3800 (Australia)]


    We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the New York Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.
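    The method's form, an absolute magnitude obtained from a quadratic function of one observed color, can be sketched as follows; the coefficients shown are placeholders, not the published parameter tables.

```python
def k_correction(color, coeffs):
    """Quadratic K-correction in one observed color:
    K = c0 + c1*color + c2*color**2.
    The coefficients are hypothetical placeholders, not the
    published SDSS parameter tables."""
    c0, c1, c2 = coeffs
    return c0 + c1 * color + c2 * color ** 2

def absolute_magnitude(apparent_mag, distance_modulus, color, coeffs):
    # The usual definition: M = m - DM(z) - K(color)
    return apparent_mag - distance_modulus - k_correction(color, coeffs)
```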

  15. Accurate correction of magnetic field instabilities for high-resolution isochronous mass measurements in storage rings

    CERN Document Server

    Shuai, P; Zhang, Y H; Litvinov, Yu A; Wang, M; Tu, X L; Blaum, K; Zhou, X H; Yuan, Y J; Audi, G; Yan, X L; Chen, X C; Xu, X; Zhang, W; Sun, B H; Yamaguchi, T; Chen, R J; Fu, C Y; Ge, Z; Huang, W J; Liu, D W; Xing, Y M; Zeng, Q


    Isochronous mass spectrometry (IMS) in storage rings is a successful technique for accurate mass measurements of short-lived nuclides with a relative precision of about $10^{-5}-10^{-7}$. Instabilities of the magnetic fields in storage rings are one of the major contributions limiting the achievable mass resolving power, which is directly related to the precision of the obtained mass values. A new data analysis method is proposed that allows one to minimise the effect of such instabilities. The masses of the $^{41}$Ti, $^{43}$V, $^{47}$Mn, $^{49}$Fe, $^{53}$Ni and $^{55}$Cu nuclides, previously measured at the CSRe, were re-determined with this method. An improvement of the mass precision by a factor of $\sim 1.7$ has been achieved for $^{41}$Ti and $^{43}$V. The method can be applied to any isochronous mass experiment irrespective of the accelerator facility. Furthermore, the method can be used as an on-line tool for checking the isochronous conditions of the storage ring.

  16. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction (United States)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki


    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  17. Correction

    DEFF Research Database (Denmark)

    Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin


    [This corrects the article DOI: 10.1371/journal.ppat.1005000.] [This corrects the article DOI: 10.1371/journal.ppat.1005740.] [This corrects the article DOI: 10.1371/journal.ppat.1005679.]

  18. Harmonic allocation of authorship credit: source-level correction of bibliometric bias assures accurate publication and citation analysis.

    Directory of Open Access Journals (Sweden)

    Nils T Hagen

    Full Text Available Authorship credit for multi-authored scientific publications is routinely allocated either by issuing full publication credit repeatedly to all coauthors, or by dividing one credit equally among all coauthors. The ensuing inflationary and equalizing biases distort derived bibliometric measures of merit by systematically benefiting secondary authors at the expense of primary authors. Here I show how harmonic counting, which allocates credit according to authorship rank and the number of coauthors, provides simultaneous source-level correction for both biases as well as accommodating further decoding of byline information. I also demonstrate large and erratic effects of counting bias on the original h-index, and show how the harmonic version of the h-index provides unbiased bibliometric ranking of scientific merit while retaining the original's essential simplicity, transparency and intended fairness. Harmonic decoding of byline information resolves the conundrum of authorship credit allocation by providing a simple recipe for source-level correction of inflationary and equalizing bias. Harmonic counting could also offer unrivalled accuracy in automated assessments of scientific productivity, impact and achievement.
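    Harmonic counting has a closed form: the i-th of N coauthors receives (1/i) divided by the N-th harmonic number, so the credits over a byline sum to exactly one publication. A minimal sketch:

```python
def harmonic_credit(rank, n_authors):
    """Harmonic authorship credit: author at position `rank` of
    `n_authors` receives (1/rank) / (1/1 + 1/2 + ... + 1/n_authors)
    of one publication credit."""
    denom = sum(1.0 / k for k in range(1, n_authors + 1))
    return (1.0 / rank) / denom

# Example: three coauthors share one credit as 6/11, 3/11 and 2/11
credits = [harmonic_credit(i, 3) for i in (1, 2, 3)]
```

    The shares decrease with byline rank but always total one, which is what removes both the inflationary bias (full credit to everyone) and the equalizing bias (equal fractional credit).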

  19. Correction

    CERN Document Server


    Tile Calorimeter modules stored at CERN. The larger modules belong to the Barrel, whereas the smaller ones are for the two Extended Barrels. (The article was about the completion of the 64 modules for one of the latter.) The photo on the first page of the Bulletin n°26/2002, from 24 July 2002, illustrating the article «The ATLAS Tile Calorimeter gets into shape» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.

  20. Correction

    CERN Multimedia


    The photo on the second page of the Bulletin n°48/2002, from 25 November 2002, illustrating the article «Spanish Visit to CERN» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.   The Spanish delegation, accompanied by Spanish scientists at CERN, also visited the LHC superconducting magnet test hall (photo). From left to right: Felix Rodriguez Mateos of CERN LHC Division, Josep Piqué i Camps, Spanish Minister of Science and Technology, César Dopazo, Director-General of CIEMAT (Spanish Research Centre for Energy, Environment and Technology), Juan Antonio Rubio, ETT Division Leader at CERN, Manuel Aguilar-Benitez, Spanish Delegate to Council, Manuel Delfino, IT Division Leader at CERN, and Gonzalo León, Secretary-General of Scientific Policy to the Minister.

  1. Correction

    Directory of Open Access Journals (Sweden)


    Full Text Available Regarding Tagler, M. J., and Jeffers, H. M. (2013). Sex differences in attitudes toward partner infidelity. Evolutionary Psychology, 11, 821–832: The authors wish to correct values in the originally published manuscript. Specifically, incorrect 95% confidence intervals around the Cohen's d values were reported on page 826 of the manuscript, where we reported the within-sex simple effects for the significant Participant Sex × Infidelity Type interaction (first paragraph), and for attitudes toward partner infidelity (second paragraph). Corrected values are presented in bold below. The authors would like to thank Dr. Bernard Beins at Ithaca College for bringing these errors to our attention. Men rated sexual infidelity significantly more distressing (M = 4.69, SD = 0.74) than they rated emotional infidelity (M = 4.32, SD = 0.92), F(1, 322) = 23.96, p < .001, d = 0.44, 95% CI [0.23, 0.65], but there was little difference between women's ratings of sexual (M = 4.80, SD = 0.48) and emotional infidelity (M = 4.76, SD = 0.57), F(1, 322) = 0.48, p = .29, d = 0.08, 95% CI [−0.10, 0.26]. As expected, men rated sexual infidelity (M = 1.44, SD = 0.70) more negatively than they rated emotional infidelity (M = 2.66, SD = 1.37), F(1, 322) = 120.00, p < .001, d = 1.12, 95% CI [0.85, 1.39]. Although women also rated sexual infidelity (M = 1.40, SD = 0.62) more negatively than they rated emotional infidelity (M = 2.09, SD = 1.10), this difference was not as large, and thus in the evolutionary-theory-supportive direction, F(1, 322) = 72.03, p < .001, d = 0.77, 95% CI [0.60, 0.94].

  2. Correction. (United States)


    In the article by Quintavalle et al (Quintavalle C, Anselmi CV, De Micco F, Roscigno G, Visconti G, Golia B, Focaccio A, Ricciardelli B, Perna E, Papa L, Donnarumma E, Condorelli G, Briguori C. Neutrophil gelatinase–associated lipocalin and contrast-induced acute kidney injury. Circ Cardiovasc Interv. 2015;8:e002673. DOI: 10.1161/CIRCINTERVENTIONS.115.002673.), which published online September 2, 2015, and appears in the September 2015 issue of the journal, a correction was needed. On page 1, the institutional affiliation for Elvira Donnarumma, PhD, “SDN Foundation,” has been changed to read, “IRCCS SDN, Naples, Italy.” The institutional affiliation for Laura Papa, PhD, “Institute for Endocrinology and Experimental Oncology, National Research Council, Naples, Italy,” has been changed to read, “Institute of Genetics and Biomedical Research, Milan Unit, Milan, Italy” and “Humanitas Research Hospital, Rozzano, Italy.” The authors regret this error.

  3. Extension of the B3LYP - Dispersion-Correcting Potential Approach to the Accurate Treatment of both Inter- and Intramolecular Interactions

    CERN Document Server

    DiLabio, Gino A; Torres, Edmanuel


    We recently showed that dispersion-correcting potentials (DCPs), atom-centered Gaussian-type functions developed for use with B3LYP (J. Phys. Chem. Lett. 2012, 3, 1738-1744), greatly improved the ability of the underlying functional to predict non-covalent interactions. However, the application of B3LYP-DCP to the β-scission of the cumyloxyl radical led to a calculated barrier height that was over-estimated by ca. 8 kcal/mol. We show in the present work that the source of this error arises from the previously developed carbon atom DCPs, which erroneously alter the electron density in the C-C covalent-bonding region. In this work, we present a new C-DCP with a form that was expected to influence the electron density farther from the nucleus. Tests of the new C-DCP, with previously published H-, N- and O-DCPs, with B3LYP-DCP/6-31+G(2d,2p) on the S66, S22B, HSG-A, and HC12 databases of non-covalently interacting dimers showed that it is one of the most accurate methods available for treating intermolecular i...

  4. WAIS-IV reliable digit span is no more accurate than age corrected scaled score as an indicator of invalid performance in a veteran sample undergoing evaluation for mTBI. (United States)

    Spencer, Robert J; Axelrod, Bradley N; Drag, Lauren L; Waldron-Perrine, Brigid; Pangilinan, Percival H; Bieliauskas, Linas A


    Reliable Digit Span (RDS) is a measure of effort derived from the Digit Span subtest of the Wechsler intelligence scales. Some authors have suggested that the age-corrected scaled score provides a more accurate measure of effort than RDS. This study examined the relative diagnostic accuracy of the traditional RDS, an extended RDS including the new Sequencing task from the Wechsler Adult Intelligence Scale-IV, and the age-corrected scaled score, relative to performance validity as determined by the Test of Memory Malingering. Data were collected from 138 Veterans seen in a traumatic brain injury clinic. The traditional RDS (≤ 7), revised RDS (≤ 11), and Digit Span age-corrected scaled score (≤ 6) had respective sensitivities of 39%, 39%, and 33%, and respective specificities of 82%, 89%, and 91%. Of these indices, the revised RDS and the Digit Span age-corrected scaled score provided the most accurate measures of performance validity among the three.
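    The reported sensitivities and specificities follow from simple confusion-matrix ratios. As a minimal sketch (the counts below are illustrative values chosen only to reproduce the quoted rates for the traditional RDS cut-off, not the study's actual data):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts scaled to 100 invalid and 100 valid performers,
# matching the traditional RDS (<= 7) figures of 39% / 82%.
sensitivity, specificity = sens_spec(tp=39, fn=61, tn=82, fp=18)
```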

  5. The accurate calculation of the band gap of liquid water by means of GW corrections applied to plane-wave density functional theory molecular dynamics simulations

    NARCIS (Netherlands)

    Fang, Changming; Li, Wun Fan; Koster, Rik S.; Klimeš, Jiří; Van Blaaderen, Alfons; Van Huis, Marijn A.


    Knowledge about the intrinsic electronic properties of water is imperative for understanding the behaviour of aqueous solutions that are used throughout biology, chemistry, physics, and industry. The calculation of the electronic band gap of liquids is challenging, because the most accurate ab initi

  6. MCPerm: a Monte Carlo permutation method for accurately correcting the multiple testing in a meta-analysis of genetic association studies.

    Directory of Open Access Journals (Sweden)

    Yongshuai Jiang

    Traditional permutation (TradPerm) tests are usually considered the gold standard for multiple-testing correction. However, they can be difficult to complete for meta-analyses of genetic association studies based on multiple single-nucleotide polymorphism loci, because they depend on individual-level genotype and phenotype data to perform random shuffles, which are not easy to obtain. Most meta-analyses have therefore been performed using summary statistics from previously published studies. To carry out a permutation using only genotype counts without changing the size of the TradPerm P-value, we developed a Monte Carlo permutation (MCPerm) method. First, for each study included in the meta-analysis, we used a two-step hypergeometric distribution to generate a random number of genotypes in cases and controls. We then carried out a meta-analysis using these random genotype data. Finally, we obtained the corrected permutation P-value of the meta-analysis by repeating the entire process N times. We used five real datasets and five simulation datasets to evaluate the MCPerm method, with the following results: (1) MCPerm requires only the summary statistics of the genotype, without the need for individual-level data; (2) genotype counts generated by our two-step hypergeometric distributions had the same distributions as genotype counts generated by shuffling; (3) MCPerm had almost exactly the same permutation P-values as TradPerm (r = 0.999; P < 2.2e-16); (4) the calculation speed of MCPerm is much faster than that of TradPerm. In summary, MCPerm appears to be a viable alternative to TradPerm, and we have developed it as a freely available R package at CRAN:
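    The core of the procedure can be illustrated in a few lines. The sketch below (Python rather than the authors' R package) simplifies the two-step genotype scheme to a single hypergeometric draw over allele counts for one study; `mc_perm_pvalue` and its inputs are hypothetical names, and the statistic is a plain allele-frequency difference rather than the paper's meta-analysis statistic:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_perm_pvalue(case_alt, case_total, control_alt, control_total, n_perm=10_000):
    """Monte Carlo permutation P-value from summary counts only.
    Under the null, the alternate alleles are distributed between cases
    and controls hypergeometrically (margins fixed), so no individual-level
    shuffling is needed."""
    total_alt = case_alt + control_alt
    total = case_total + control_total
    # observed statistic: absolute allele-frequency difference
    obs = abs(case_alt / case_total - control_alt / control_total)
    # draw how many alternate alleles land in the case group under the null
    perm_case_alt = rng.hypergeometric(
        ngood=total_alt, nbad=total - total_alt, nsample=case_total, size=n_perm
    )
    perm_stat = np.abs(
        perm_case_alt / case_total - (total_alt - perm_case_alt) / control_total
    )
    # add-one correction keeps the P-value strictly positive
    return (np.sum(perm_stat >= obs) + 1) / (n_perm + 1)
```

With balanced counts the P-value is near 1; with a strong case/control imbalance it collapses toward 1/(N+1).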

  7. Ole Roemer and the Light-Time Effect (United States)

    Sterken, C.


    We discuss the observational background of Roemer's remarkable hypothesis that the velocity of light is finite. The outcome of the joint efforts of a highly skilled instrumentalist and a team of surveyors, driven to produce accurate maps and technically supported by the revolutionary advancements in horology, illustrates the synergy between the accuracy of the O and C terms in the O − C concept, which led to one of the most fundamental discoveries of the Renaissance.

  8. The Need for Accurate Geometric and Radiometric Corrections of Drone-Borne Hyperspectral Data for Mineral Exploration: MEPHySTo—A Toolbox for Pre-Processing Drone-Borne Hyperspectral Data

    Directory of Open Access Journals (Sweden)

    Sandra Jakob


    Drone-borne hyperspectral imaging is a new and promising technique for the fast and precise acquisition and delivery of high-resolution hyperspectral data to a large variety of end-users. Drones can bridge the scale gap between field and airborne remote sensing, providing high-resolution, multi-temporal data. They are easy to use, flexible, and deliver data at cm-scale resolution. So far, however, drone-borne imagery has been used almost exclusively in precision agriculture and photogrammetry, and drone technology currently relies mainly on structure-from-motion photogrammetry, aerial photography, and agricultural monitoring. Recently, a few hyperspectral sensors have become available for drones, but complex geometric and radiometric effects complicate their use in geology-related studies. Using two examples, we first show that precise corrections are required for any geological mapping. We then present a processing toolbox for frame-based hyperspectral imaging systems, adapted for the complex correction of drone-borne hyperspectral imagery. The toolbox performs sensor- and platform-specific geometric distortion corrections. Furthermore, a topographic correction step is implemented to correct for rough terrain surfaces; we recommend the c-factor algorithm for geological applications. To our knowledge, we demonstrate for the first time the applicability of the corrected dataset for lithological mapping and mineral exploration.

  9. High-Precision Tungsten Isotopic Analysis by Multicollection Negative Thermal Ionization Mass Spectrometry Based on Simultaneous Measurement of W and (18)O/(16)O Isotope Ratios for Accurate Fractionation Correction. (United States)

    Trinquier, Anne; Touboul, Mathieu; Walker, Richard J


    Determination of the (182)W/(184)W ratio to a precision of ±5 ppm (2σ) is desirable for constraining the timing of core formation and other early planetary differentiation processes. However, WO3(-) analysis by negative thermal ionization mass spectrometry normally results in a residual correlation between the instrumental-mass-fractionation-corrected (182)W/(184)W and (183)W/(184)W ratios, attributed to mass-dependent variability of O isotopes over the course of an analysis and between different analyses. A second-order correction using the (183)W/(184)W ratio relies on the assumption that this ratio is constant in nature; this may prove invalid, as has already been realized for other isotope systems. The present study utilizes simultaneous monitoring of the (18)O/(16)O and W isotope ratios to correct oxide interferences on a per-integration basis and thus avoid the need for a double normalization of W isotopes. After normalization of the W isotope ratios to a pair of W isotopes following the exponential law, no residual W-O isotope correlation is observed; however, a nonideal mass-bias residual correlation between (182)W/(i)W and (183)W/(i)W remains with time. Without double normalization of W isotopes, and on the basis of three or four duplicate analyses, the external reproducibility per session of (182)W/(184)W and (183)W/(184)W normalized to (186)W/(183)W is 5-6 ppm (2σ, 1-3 μg loads). The combined uncertainty per session is less than 4 ppm for (183)W/(184)W and less than 6 ppm for (182)W/(184)W (2σm) for loads between 50 and 3000 ng.

  10. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    Energy Technology Data Exchange (ETDEWEB)

    Rocklin, Gabriel J. [Department of Pharmaceutical Chemistry, University of California San Francisco, 1700 4th St., San Francisco, California 94143-2550, USA and Biophysics Graduate Program, University of California San Francisco, 1700 4th St., San Francisco, California 94143-2550 (United States); Mobley, David L. [Departments of Pharmaceutical Sciences and Chemistry, University of California Irvine, 147 Bison Modular, Building 515, Irvine, California 92697-0001, USA and Department of Chemistry, University of New Orleans, 2000 Lakeshore Drive, New Orleans, Louisiana 70148 (United States); Dill, Ken A. [Laufer Center for Physical and Quantitative Biology, 5252 Stony Brook University, Stony Brook, New York 11794-0001 (United States); Hünenberger, Philippe H., E-mail: [Laboratory of Physical Chemistry, Swiss Federal Institute of Technology, ETH, 8093 Zürich (Switzerland)


    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol{sup −1}) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non

  11. Impact of aerosols on the OMI tropospheric NO2 retrievals over industrialized regions: how accurate is the aerosol correction of cloud-free scenes via a simple cloud model? (United States)

    Chimot, J.; Vlemmix, T.; Veefkind, J. P.; de Haan, J. F.; Levelt, P. F.


    The Ozone Monitoring Instrument (OMI) has provided daily global measurements of tropospheric NO2 for more than a decade. Numerous studies have drawn attention to the complexities of measuring tropospheric NO2 in the presence of aerosols. Fine particles affect the OMI spectral measurements and the length of the average light path followed by the photons. However, they are not explicitly taken into account in the current operational OMI tropospheric NO2 retrieval chain (DOMINO - Derivation of OMI tropospheric NO2). Instead, the operational OMI O2 - O2 cloud retrieval algorithm is applied both to cloudy scenes and to cloud-free (i.e. clear-sky) scenes dominated by the presence of aerosols. This paper describes in detail the complex interplay between the spectral effects of aerosols in the satellite observation and the associated response of the OMI O2 - O2 cloud retrieval algorithm. It then evaluates the impact on the accuracy of the tropospheric NO2 retrievals through the computed Air Mass Factor (AMF), with a focus on cloud-free scenes. For that purpose, collocated OMI NO2 and MODIS (Moderate Resolution Imaging Spectroradiometer) Aqua aerosol products are analysed over the strongly industrialized East China area. In addition, aerosol effects on the tropospheric NO2 AMF and the retrieval of OMI cloud parameters are simulated. Both the observation-based and the simulation-based approaches demonstrate that the retrieved cloud fraction increases with increasing Aerosol Optical Thickness (AOT), but the magnitude of this increase depends on the aerosol properties and surface albedo. The increase is induced by the additional scattering effects of aerosols, which enhance the scene brightness. The decreasing effective cloud pressure with increasing AOT primarily represents the shielding effect of the O2 - O2 column located below the aerosol layers. The study cases show that the aerosol correction based on the implemented OMI cloud model results in biases

  12. Nighttime lights time series of tsunami damage, recovery, and economic metrics in Sumatra, Indonesia. (United States)

    Gillespie, Thomas W; Frankenberg, Elizabeth; Chum, Kai Fung; Thomas, Duncan


    On 26 December 2004, a magnitude-9.2 earthquake off the west coast of northern Sumatra, Indonesia, resulted in the deaths of 160,000 Indonesians. We examine Defense Meteorological Satellite Program-Operational Linescan System (DMSP-OLS) nighttime light imagery brightness values for 307 communities in the Study of the Tsunami Aftermath and Recovery (STAR), a household survey in Sumatra from 2004 to 2008. We examined relationships between annual brightness and the extent of damage, as well as economic metrics collected from STAR households and aggregated to the community level. There were significant changes in brightness values from 2004 to 2008, with a significant drop in 2005 due to the tsunami; pre-tsunami nighttime light values returned by 2006 for all damage zones. There were significant relationships between nighttime imagery brightness and per capita expenditures, as well as spending on energy and on food. The results suggest that DMSP nighttime light imagery can be used to capture the impacts of and recovery from the tsunami and other natural disasters, and to estimate time-series economic metrics at the community level in developing countries.

  13. ACE: accurate correction of errors using K-mer tries

    NARCIS (Netherlands)

    Sheikhizadeh Anari, S.; Ridder, de D.


    The quality of high-throughput next-generation sequencing data significantly influences the performance and memory consumption of assembly and mapping algorithms. The most ubiquitous platform, Illumina, mainly suffers from substitution errors. We have developed a tool, ACE, based on K-mer tries to c

  14. Accurate ab initio spin densities

    CERN Document Server

    Boguslawski, Katharina; Legeza, Örs; Reiher, Markus


    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm, which is used to calculate the spin density matrix elements as the basic quantities for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CA...

  15. The Near-contact Binary RZ Draconis with Two Possible Light-time Orbits (United States)

    Yang, Y.-G.; Li, H.-L.; Dai, H.-F.; Zhang, L.-Y.


    We present new multicolor photometry for RZ Draconis, observed in 2009 at the Xinglong Station of the National Astronomical Observatories of China. Using the updated version of the Wilson-Devinney code, the photometric-spectroscopic elements were deduced from the new photometric observations and published radial-velocity data. The mass ratio and orbital inclination are q = 0.375 (±0.002) and i = 84°.60 (±0°.13), respectively. The fill-out factor of the primary is f = 98.3%, implying that RZ Dra is an Algol-like near-contact binary. Based on 683 light-minimum times from 1907 to 2009, the orbital period change was investigated in detail. The O − C curve reveals that two quasi-sinusoidal variations may exist (i.e., P3 = 75.62 (±2.20) yr and P4 = 27.59 (±0.10) yr), which likely result from light-time effects due to the presence of two additional bodies. In orbits coplanar with the binary system, the third and fourth bodies would be low-mass dwarfs (i.e., M3 = 0.175 M⊙ and M4 = 0.074 M⊙). If this is true, RZ Dra may be a quadruple star. The additional bodies could extract angular momentum from the binary system, which may cause its orbit to shrink. As the orbit shrinks, the primary may fill its Roche lobe and RZ Dra will evolve into a contact configuration.

  16. Speaking Fluently And Accurately

    Institute of Scientific and Technical Information of China (English)



    Even after many years of study, students make frequent mistakes in English. In addition, many students still need a long time to think of what they want to say. For some reason, in spite of all the studying, students are still not quite fluent. When I teach, I use one technique that helps students speak not only more accurately but also more fluently. That technique is dictation.

  17. Accurate backgrounds to Higgs production at the LHC

    CERN Document Server

    Kauer, N


    Corrections of 10-30% to the backgrounds for the H → WW → l⁺l⁻ + missing-p_T search in vector-boson and gluon fusion at the LHC are reviewed to make the case for precise and accurate theoretical background predictions.

  18. Highly Accurate Measurement of the Electron Orbital Magnetic Moment

    CERN Document Server

    Awobode, A M


    We propose to accurately determine the orbital magnetic moment of the electron by measuring, in a magneto-optical or ion trap, the ratio of the Landé g-factors in two atomic states. From the measurement of (gJ1/gJ2), the quantity A, which depends on the corrections to the electron g-factors, can be extracted if the states are LS-coupled. Given that highly accurate values of the correction to the spin g-factor are currently available, accurate values of the correction to the orbital g-factor may also be determined. At present, (−1.8 ± 0.4) × 10⁻⁴ has been determined as a correction to the electron orbital g-factor, using earlier measurements of the ratio gJ1/gJ2 made on the Indium ²P₁/₂ and ²P₃/₂ states.

  19. Photometric Properties of Selected Algol-type Binaries. III. AL Geminorum and BM Monocerotis with Possible Light-time Orbits (United States)

    Yang, Y.-G.; Li, H.-L.; Dai, H.-F.


    We present CCD photometry of two Algol-type binaries, AL Gem and BM Mon, observed from November 2008 to January 2011. With the updated Wilson-Devinney program, photometric solutions were deduced from their EA-type light curves. The mass ratios and fill-out factors of the primaries are q_ph = 0.090 (±0.005) and f1 = 47.3% (±0.3%) for AL Gem, and q_ph = 0.275 (±0.007) and f1 = 55.4% (±0.5%) for BM Mon, respectively. By analyzing the O − C curves, we discovered that the periods of AL Gem and BM Mon change in a quasi-sinusoidal mode, possibly resulting from the light-time effect due to the presence of a third body. The periods, amplitudes, and eccentricities of the light-time orbits are 78.83 (±1.17) yr, 0.0204 d (±0.0007 d), and 0.28 (±0.02) for AL Gem, and 97.78 (±2.67) yr, 0.0175 d (±0.0006 d), and 0.29 (±0.02) for BM Mon, respectively. Assuming coplanar orbits with the binaries, the masses of the third bodies would be 0.29 M⊙ for AL Gem and 0.26 M⊙ for BM Mon. Such additional companions can extract angular momentum from the close binary orbit, and these processes may play an important role in multiple-star evolution.
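    A third-body mass of this kind follows from the light-time mass function: the O − C semi-amplitude A gives the projected semi-major axis a12·sin i = A·c, and f(m) = (a12 sin i)³/P3² = (M3 sin i)³/(M1+M2+M3)². A minimal sketch of that calculation follows; the binary mass of 1.56 M⊙ in the usage line is an illustrative assumption, not a value from the paper, and `third_body_mass` is a hypothetical helper:

```python
C_AU_PER_DAY = 173.1446  # speed of light in astronomical units per day

def third_body_mass(amp_days, p3_years, m_binary, sin_i=1.0):
    """Solve the light-time mass function for M3 (solar masses) by bisection.
    amp_days: O-C semi-amplitude in days; p3_years: third-body period in years;
    m_binary: assumed total mass of the eclipsing pair in solar masses."""
    a12 = amp_days * C_AU_PER_DAY            # projected semi-major axis in AU
    f = a12**3 / p3_years**2                 # mass function in solar masses
    g = lambda m3: (m3 * sin_i)**3 / (m_binary + m3)**2 - f
    lo, hi = 1e-6, 100.0                     # g is increasing on this bracket
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if g(mid) > 0 else (mid, hi)
    return 0.5 * (lo + hi)

# AL Gem's quoted amplitude and period, with a hypothetical binary mass
m3 = third_body_mass(amp_days=0.0204, p3_years=78.83, m_binary=1.56)
```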

  20. Accurate measurement of unsteady state fluid temperature (United States)

    Jaremkiewicz, Magdalena


    In this paper, two accurate methods for determining transient fluid temperature are presented. Measurements were conducted for boiling water, since its temperature is known. At the beginning, the thermometers are at ambient temperature; they are then immediately immersed into saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer that is widely available commercially. The temperature indicated by this thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was also proposed and used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with a sheathed thermocouple located at its center. The fluid temperature was determined from measurements taken on the axis of the solid cylindrical housing using the inverse space-marching method. Measurements of the transient temperature of air flowing through a wind tunnel were also carried out with the same thermometers. The proposed measurement technique provides more accurate results than industrial thermometers combined with a simple temperature correction based on a first- or second-order inertia model. A comparison of the results demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of rapidly changing fluid temperature is possible thanks to the low-inertia thermometer and the fast space-marching method applied to solve the inverse heat conduction problem.
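    The first-order inertia correction mentioned above can be sketched compactly. For a first-order thermometer with time constant τ, the fluid temperature satisfies T_fluid = T_indicated + τ·dT_indicated/dt; the sketch below (with a hypothetical τ, differentiating the measured signal numerically) is an illustration of this model, not the paper's implementation:

```python
import numpy as np

def correct_first_order(t, t_indicated, tau):
    """Recover the fluid temperature from a first-order-lag thermometer:
    T_fluid(t) = T_ind(t) + tau * dT_ind/dt.
    t, t_indicated: sample times and readings; tau: time constant (seconds)."""
    return t_indicated + tau * np.gradient(t_indicated, t)
```

Applied to a simulated step from 0 to 100 °C, the corrected signal recovers the true fluid temperature long before the raw reading settles.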

  1. Photometric Investigation and Possible Light-Time Effect in the Orbital Period of a Marginal Contact System, CW Cassiopeiae (United States)

    Jiang, Tian-Yu; Li, Li-Fang; Han, Zhan-Wen; Jiang, Deng-Kai


    The first complete charge-coupled device (CCD) light curves in the B and V passbands of a neglected contact binary system, CW Cassiopeiae (CW Cas), are presented. They were analyzed simultaneously using the Wilson and Devinney (WD) code (1971, ApJ, 166, 605). The photometric solution indicates that CW Cas is a W-type W UMa system with a mass ratio of m2/m1 = 2.234, and that it is in a marginal contact state with a contact degree of ~6.5% and a relatively large temperature difference of ~327 K between its two components. Based on the minimum times collected from the literature, together with the new ones obtained in this study, the orbital period changes of CW Cas were investigated in detail. It was found that a periodic variation overlaps with a secular period decrease in its orbital period. The long-term period decrease, with a rate of dP/dt = −3.44 × 10⁻⁸ d yr⁻¹, can be interpreted either by mass transfer from the more-massive component to the less-massive one, with a rate of dm2/dt = −3.6 × 10⁻⁸ M⊙ yr⁻¹, or by mass and angular-momentum losses through magnetic braking due to a magnetic stellar wind. A low-amplitude cyclic variation with a period of T = 63.7 yr might be caused by the light-time effect due to the presence of a third body.

  2. Investigation of CT number correction of kilo-voltage cone-beam CT images for accurate dose calculation in radiotherapy

    Institute of Scientific and Technical Information of China (English)

    王雪桃; 柏森; 李光俊; 蒋晓芹; 苏晨; 李衍龙; 朱智慧


    Objective: To develop a CT-number correction method for kilo-voltage cone-beam CT (KV-CBCT) images and improve their accuracy for dose calculation. Methods: Using the fan-beam planning CT as prior information, the CBCT images were rigidly registered to the planning CT. A scatter-background estimate was obtained by subtracting the planning CT from the CBCT images and was then low-pass filtered; the corrected CBCT images were obtained by subtracting the filtered scatter background from the raw CBCT images. KV-CBCT images of a Catphan600 phantom and of four patients with pelvic tumors, acquired with a linac-integrated CBCT system, were corrected in this way. CT numbers of the corrected CBCT and the planning CT were compared by paired t-test, and the image quality and dose-calculation accuracy of the corrected CBCT images were evaluated. Results: The proposed method markedly reduced artifacts in the CBCT images. Before correction, the mean CT numbers for air, fat, muscle, and femoral head differed from the planning CT by 232, 89, 29, and 66 HU, respectively; after correction the differences shrank to within 5 HU (P = 0.39, 0.66, 0.59, 1.00). Dose-calculation errors using the corrected CBCT images were within 2%. Conclusion: The corrected CBCT images have CT numbers similar to those of the planning CT and yield accurate dose-calculation results.
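    The correction pipeline (register, subtract, low-pass filter, subtract again) is simple to express. The sketch below assumes the CBCT is already registered to the planning CT and substitutes a crude separable box filter for the paper's unspecified low-pass step; all names are illustrative:

```python
import numpy as np

def box_blur(img, k):
    """Separable box low-pass filter (a stand-in for the paper's filter)."""
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, out)

def correct_cbct(cbct, plan_ct, k=15):
    """Estimate the scatter background as the low-pass residual between the
    (registered) CBCT and the planning CT, then subtract it from the CBCT."""
    scatter = box_blur(cbct - plan_ct, k)
    return cbct - scatter
```

On a synthetic slice with a uniform scatter offset, the interior of the corrected image returns to the planning-CT values.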

  3. Efficient and accurate fragmentation methods. (United States)

    Pruitt, Spencer R; Bertoni, Colleen; Brorsen, Kurt R; Gordon, Mark S


    Conspectus Three novel fragmentation methods that are available in the electronic structure program GAMESS (general atomic and molecular electronic structure system) are discussed in this Account. The fragment molecular orbital (FMO) method can be combined with any electronic structure method to perform accurate calculations on large molecular species with no reliance on capping atoms or empirical parameters. The FMO method is highly scalable and can take advantage of massively parallel computer systems. For example, the method has been shown to scale nearly linearly on up to 131 000 processor cores for calculations on large water clusters. There have been many applications of the FMO method to large molecular clusters, to biomolecules (e.g., proteins), and to materials that are used as heterogeneous catalysts. The effective fragment potential (EFP) method is a model potential approach that is fully derived from first principles and has no empirically fitted parameters. Consequently, an EFP can be generated for any molecule by a simple preparatory GAMESS calculation. The EFP method provides accurate descriptions of all types of intermolecular interactions, including Coulombic interactions, polarization/induction, exchange repulsion, dispersion, and charge transfer. The EFP method has been applied successfully to the study of liquid water, π-stacking in substituted benzenes and in DNA base pairs, solvent effects on positive and negative ions, electronic spectra and dynamics, non-adiabatic phenomena in electronic excited states, and nonlinear excited state properties. The effective fragment molecular orbital (EFMO) method is a merger of the FMO and EFP methods, in which interfragment interactions are described by the EFP potential, rather than the less accurate electrostatic potential. The use of EFP in this manner facilitates the use of a smaller value for the distance cut-off (Rcut). Rcut determines the distance at which EFP interactions replace fully quantum

  4. Accurate determination of antenna directivity

    DEFF Research Database (Denmark)

    Dich, Mikael


    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna, for which the radiated power density is known at a finite number of points on the far-field sphere, is presented. The main application of the formula is the determination of directivity from power-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...

  5. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry


    Fuchs, Franz G.; Hjelmervik, Jon M.


    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire...

  6. Accurate Modeling of Advanced Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min

    Analysis and optimization methods for the design of advanced printed reflectarrays have been investigated, and the study is focused on developing an accurate and efficient simulation tool. For the analysis, a good compromise between accuracy and efficiency can be obtained using the spectral domain... to the POT. The GDOT can optimize for the size as well as the orientation and position of arbitrarily shaped array elements. Both co- and cross-polar radiation can be optimized for multiple frequencies, dual polarization, and several feed illuminations. Several contoured beam reflectarrays have been designed... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility...

  7. The Accurate Particle Tracer Code

    CERN Document Server

    Wang, Yulei; Qin, Hong; Yu, Zhi


    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusion energy research, computational mathematics, software engineering, and high-performance computation. The APT code consists of seven main modules, including the I/O module, the initialization module, the particle pusher module, the parallelization module, the field configuration module, the external force-field module, and the extendible module. The I/O module, supported by Lua and Hdf5 projects, provides a user-friendly interface for both numerical simulation and data analysis. A series of new geometric numerical methods...

  8. Accurate thickness measurement of graphene. (United States)

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T


    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  10. Motion-corrected Fourier ptychography

    CERN Document Server

    Bian, Liheng; Guo, Kaikai; Suo, Jinli; Yang, Changhuei; Chen, Feng; Dai, Qionghai


    Fourier ptychography (FP) is a recently proposed computational imaging technique for high space-bandwidth product imaging. In real setups such as endoscopes and transmission electron microscopes, common sample motion largely degrades the FP reconstruction and limits its practical use. In this paper, we propose a novel FP reconstruction method to efficiently correct for unknown sample motion. Specifically, we adaptively update the sample's Fourier spectrum from low spatial-frequency regions towards high spatial-frequency ones, with an additional motion recovery and phase-offset compensation procedure for each sub-spectrum. Benefiting from phase retrieval redundancy theory, the required large overlap between adjacent sub-spectra offers an accurate guide for successful motion recovery. Experimental results on both simulated data and real captured data show that the proposed method can correct for unknown sample motion with its standard deviation being up to 10% of the field-of-view scale. We have released...

  11. MR image intensity inhomogeneity correction (United States)

    Vişan Pungǎ, Mirela; Moldovanu, Simona; Moraru, Luminita


    MR technology is one of the best and most reliable ways of studying the brain. Its main drawback is the so-called intensity inhomogeneity, or bias field, which impairs visual inspection and medical diagnosis and strongly affects quantitative image analysis. Noise is yet another artifact in medical images. To restore the original signal accurately and effectively, this report addresses filtering, bias correction and quantitative analysis of the correction. Two denoising algorithms are used: (i) basis rotation fields of experts (BRFoE) and (ii) anisotropic diffusion (considering Gaussian noise, the Perona-Malik and Tukey's biweight functions, and the standard deviation of the noise of the input image).
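
    A minimal sketch of the second algorithm, Perona-Malik anisotropic diffusion, is shown below (the conductance parameter kappa, the step size, and the test image are illustrative assumptions, not values from the report):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.3, step=0.2):
    """Edge-preserving denoising: diffusion is throttled where the
    local gradient is large, so edges survive while noise is smoothed."""
    u = img.astype(float).copy()
    # Perona-Malik conductance: near 1 for small differences, near 0 at edges
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # Finite differences toward the four neighbours (periodic borders)
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u, 1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        u += step * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u

rng = np.random.default_rng(1)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0          # step edge
noisy = clean + rng.normal(0, 0.1, clean.shape)
denoised = perona_malik(noisy)
# Denoising should reduce the mean squared error against the clean image
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

    Replacing the exponential conductance with Tukey's biweight function, as the report mentions, makes the diffusion stop even more sharply at strong edges.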

  12. Accurate and precise zinc isotope ratio measurements in urban aerosols. (United States)

    Gioia, Simone; Weiss, Dominik; Coles, Barry; Arnold, Tim; Babinski, Marly


    We developed an analytical method and constrained procedural boundary conditions that enable accurate and precise Zn isotope ratio measurements in urban aerosols. We also demonstrate the potential of this new isotope system for air pollutant source tracing. The procedural blank is around 5 ng and significantly lower than published methods due to a tailored ion chromatographic separation. Accurate mass bias correction using external correction with Cu is limited to a Zn sample content of approximately 50 ng due to the combined effect of blank contribution of Cu and Zn from the ion exchange procedure and the need to maintain a Cu/Zn ratio of approximately 1. Mass bias is corrected for by applying the common analyte internal standardization method approach. Comparison with other mass bias correction methods demonstrates the accuracy of the method. The average precision of delta(66)Zn determinations in aerosols is around 0.05 per thousand per atomic mass unit. The method was tested on aerosols collected in Sao Paulo City, Brazil. The measurements reveal significant variations in delta(66)Zn(Imperial), ranging between -0.96 and -0.37 per thousand in coarse and between -1.04 and 0.02 per thousand in fine particulate matter. This variability suggests that Zn isotopic compositions distinguish atmospheric sources. The isotopically light signature suggests traffic as the main source. We present further delta(66)Zn(Imperial) data for the standard reference material NIST SRM 2783 (delta(66)Zn(Imperial) = 0.26 +/- 0.10 per thousand).
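
    The delta notation used throughout follows the standard per-mil definition, sketched below (the ratio values are hypothetical; only the formula itself is standard):

```python
# delta(66)Zn: per-mil deviation of a sample's 66Zn/64Zn ratio from a
# reference standard (here, an in-house standard as in the abstract).
def delta66(r_sample, r_standard):
    return (r_sample / r_standard - 1.0) * 1000.0  # per thousand

r_std = 0.56502                    # hypothetical 66Zn/64Zn of the standard
r_sample = r_std * (1 - 0.96e-3)   # a sample 0.96 per mil lighter
print(round(delta66(r_sample, r_std), 2))  # -0.96
```

    Negative values therefore mean the sample is depleted in the heavy isotope relative to the standard, which is what "isotopically light" refers to above.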

  13. Accurate Fiber Length Measurement Using Time-of-Flight Technique (United States)

    Terra, Osama; Hussein, Hatem


    Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDR). In this paper accurate length measurement of different fiber lengths using the time-of-flight technique is performed. A setup is proposed to measure accurately lengths from 1 to 40 km at 1,550 and 1,310 nm using high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to meter by definition), by locking the time interval counter to the Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory of United Kingdom (NPL). Finally, a method is proposed to relatively correct the fiber refractive index to allow accurate fiber length measurement.
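
    The underlying time-of-flight relation can be sketched as follows (assuming a one-way transit and a typical group index for standard single-mode fiber at 1550 nm; both are our assumptions, not values from the paper):

```python
# Length from time of flight: L = c * dt / n_g, where n_g is the fiber's
# group refractive index. This is why the proposed refractive-index
# correction matters: any error in n_g maps directly into the length.
C = 299_792_458.0  # speed of light in vacuum, m/s

def fiber_length(dt_s, n_group=1.4682):
    """n_group ~1.468 is a typical group index for SMF at 1550 nm."""
    return C * dt_s / n_group

# A 40 km fiber delays the pulse by roughly 196 microseconds
dt = 40e3 * 1.4682 / C
print(round(fiber_length(dt) / 1e3, 1))  # 40.0
```

    Locking the time-interval counter to a GPS-disciplined oscillator, as in the paper, makes dt traceable to the SI second, so the length uncertainty is dominated by the group-index term.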

  14. Relativistic formulation of coordinate light time, Doppler and astrometric observables up to the second post-Minkowskian order

    CERN Document Server

    Hees, A; Poncin-Lafitte, C Le


    Given the extreme accuracy of modern space science, a precise relativistic modeling of observations is required. In particular, it is important to describe properly light propagation through the Solar System. For two decades, several modeling efforts based on the solution of the null geodesic equations have been proposed, but they are mainly valid only at the first post-Newtonian order. However, with the increasing precision of ongoing space missions such as Gaia, GAME, BepiColombo, JUNO or JUICE, we know that some corrections up to the second order have to be taken into account for future experiments. We present a procedure to compute the relativistic coordinate time delay, Doppler and astrometric observables avoiding the integration of the null geodesic equation. This is possible using the Time Transfer Function formalism, a powerful tool providing key quantities such as the time of flight of a light signal between two point-events and the tangent vector to its null-geodesic. Indeed we show how to ...

  15. A More Accurate Fourier Transform

    CERN Document Server

    Courtney, Elya


    Fourier transform methods are used to analyze functions and data sets to provide frequencies, amplitudes, and phases of underlying oscillatory components. Fast Fourier transform (FFT) methods offer speed advantages over evaluation of explicit integrals (EI) that define Fourier transforms. This paper compares frequency, amplitude, and phase accuracy of the two methods for well resolved peaks over a wide array of data sets including cosine series with and without random noise and a variety of physical data sets, including atmospheric CO2 concentrations, tides, temperatures, sound waveforms, and atomic spectra. The FFT uses MIT's FFTW3 library. The EI method uses the rectangle method to compute the areas under the curve via complex math. Results support the hypothesis that EI methods are more accurate than FFT methods. Errors range from 5 to 10 times higher when determining peak frequency by FFT, 1.4 to 60 times higher for peak amplitude, and 6 to 10 times higher for phase under a peak. The ability t...
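
    The comparison can be sketched as follows, using numpy.fft in place of the paper's FFTW3 and the rectangle rule for the explicit integral (the test signal is our own; for a bin-centred peak both methods recover the amplitude exactly, and the paper's differences appear for peaks that fall between bins):

```python
import numpy as np

def ei_component(t, x, freq):
    """Rectangle-rule evaluation of the Fourier integral at one frequency."""
    dt = t[1] - t[0]
    return np.sum(x * np.exp(-2j * np.pi * freq * t)) * dt

fs, n = 100.0, 1000                      # 10 s of data at 100 Hz
t = np.arange(n) / fs
x = 2.0 * np.cos(2 * np.pi * 7.0 * t)    # amplitude 2 at exactly 7 Hz

# Normalize both estimates to recover the cosine amplitude
amp_ei = 2 * abs(ei_component(t, x, 7.0)) / (n / fs)
amp_fft = 2 * abs(np.fft.rfft(x)[70]) / n   # 7 Hz = bin 70 at 0.1 Hz spacing
print(round(amp_ei, 3), round(amp_fft, 3))  # 2.0 2.0
```

    The EI method's advantage in the paper is that `freq` can be any real number, whereas the FFT only samples the spectrum on a fixed grid of bins.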

  16. Accurate, meshless methods for magnetohydrodynamics (United States)

    Hopkins, Philip F.; Raives, Matthias J.


    Recently, we explored new meshless finite-volume Lagrangian methods for hydrodynamics: the 'meshless finite mass' (MFM) and 'meshless finite volume' (MFV) methods; these capture advantages of both smoothed particle hydrodynamics (SPH) and adaptive mesh refinement (AMR) schemes. We extend these to include ideal magnetohydrodynamics (MHD). The MHD equations are second-order consistent and conservative. We augment these with a divergence-cleaning scheme, which maintains ∇·B ≈ 0. We implement these in the code GIZMO, together with state-of-the-art SPH MHD. We consider a large test suite, and show that on all problems the new methods are competitive with AMR using constrained transport (CT) to ensure ∇·B = 0. They correctly capture the growth/structure of the magnetorotational instability, MHD turbulence, and launching of magnetic jets, in some cases converging more rapidly than state-of-the-art AMR. Compared to SPH, the MFM/MFV methods exhibit convergence at fixed neighbour number, sharp shock-capturing, and dramatically reduced noise, divergence errors, and diffusion. Still, 'modern' SPH can handle most test problems, at the cost of larger kernels and 'by hand' adjustment of artificial diffusion. Compared to non-moving meshes, the new methods exhibit enhanced 'grid noise' but reduced advection errors and diffusion, easily include self-gravity, and feature velocity-independent errors and superior angular momentum conservation. They converge more slowly on some problems (smooth, slow-moving flows), but more rapidly on others (involving advection/rotation). In all cases, we show divergence control beyond the Powell 8-wave approach is necessary, or all methods can converge to unphysical answers even at high resolution.

  17. NWS Corrections to Observations (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Form B-14 is the National Weather Service form entitled 'Notice of Corrections to Weather Records.' The forms are used to make corrections to observations on forms...

  18. Error Correction in Classroom

    Institute of Scientific and Technical Information of China (English)

    Dr. Grace Zhang


    Error correction is an important issue in foreign language acquisition. This paper investigates how students feel about the way in which error correction should take place in a Chinese-as-a-foreign-language classroom, based on large-scale empirical data. The study shows that there is a general consensus that error correction is necessary. In terms of correction strategy, the students preferred a combination of direct and indirect corrections, or a direct-only correction. The former choice indicates that students would be happy to take either so long as the correction gets done. Most students didn't mind peer correction provided it is conducted in a constructive way. More than half of the students would feel uncomfortable if the same error they make in class is corrected consecutively more than three times. Taking these findings into consideration, we may want to encourage peer correction, use a combination of correction strategies (direct only if suitable) and do it in a non-threatening and sensitive way. It is hoped that this study will contribute to the effectiveness of error correction in Chinese language classrooms, and it may also have wider implications for other languages.

  19. Second-order accurate finite volume method for well-driven flows

    CERN Document Server

    Dotlić, Milan; Pokorni, Boris; Pušić, Milenko; Dimkić, Milan


    We consider a finite volume method for well-driven fluid flow in a porous medium. Due to the singularity of the well, modeling the near-well region with standard numerical schemes results in a completely wrong total well flux and an inaccurate hydraulic head. Local grid refinement can help, but it comes at a computational cost. In this article we propose two methods to address the well singularity. In the first method the flux through well faces is corrected using a logarithmic function, in a way related to the Peaceman correction. Coupling this correction with a second-order accurate two-point scheme greatly improves the total well flux, but the resulting scheme is still not even first-order accurate on coarse grids. In the second method fluxes in the near-well region are corrected by representing the hydraulic head as a sum of a logarithmic and a linear function. This scheme is second-order accurate.
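
    The logarithmic near-well behaviour that the first method builds on can be illustrated with the classic Peaceman well model (a sketch under assumed SI units and a square grid; this is the textbook correction the abstract says the authors' method is related to, not their scheme itself):

```python
import math

def peaceman_well_flux(k, h, mu, dx, r_w, p_cell, p_well):
    """Peaceman well model: the pressure varies logarithmically near the
    well, so the cell pressure is tied to the bottom-hole pressure through
    a well index evaluated at the equivalent radius r_eq ≈ 0.2*dx."""
    r_eq = 0.2 * dx                          # equivalent radius, square cells
    wi = 2 * math.pi * k * h / (mu * math.log(r_eq / r_w))
    return wi * (p_cell - p_well)            # volumetric flux into the well

# Illustrative values: 1 darcy permeability, 10 m thick layer, water
q = peaceman_well_flux(k=1e-12, h=10.0, mu=1e-3, dx=50.0,
                       r_w=0.1, p_cell=2.0e7, p_well=1.5e7)
print(q > 0)
```

    Without such a logarithmic correction, a linear two-point flux across the well cell badly underestimates the pressure gradient at the wellbore, which is the "completely wrong total well flux" the abstract describes.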

  20. Accurate skin dose measurements using radiochromic film in clinical applications. (United States)

    Devic, S; Seuntjens, J; Abdel-Rahman, W; Evans, M; Olivares, M; Podgorsak, E B; Vuong, Té; Soares, Christopher G


    Megavoltage x-ray beams exhibit the well-known phenomena of dose buildup within the first few millimeters of the incident phantom surface, or the skin. Results of the surface dose measurements, however, depend vastly on the measurement technique employed. Our goal in this study was to determine a correction procedure in order to obtain an accurate skin dose estimate at the clinically relevant depth based on radiochromic film measurements. To illustrate this correction, we have used as a reference point a depth of 70 micron. We used the new GAFCHROMIC dosimetry films (HS, XR-T, and EBT) that have effective points of measurement at depths slightly larger than 70 micron. In addition to films, we also used an Attix parallel-plate chamber and a home-built extrapolation chamber to cover tissue-equivalent depths in the range from 4 micron to 1 mm of water-equivalent depth. Our measurements suggest that within the first millimeter of the skin region, the PDD for a 6 MV photon beam and field size of 10 x 10 cm2 increases from 14% to 43%. For the three GAFCHROMIC dosimetry film models, the 6 MV beam entrance skin dose measurement corrections due to their effective point of measurement are as follows: 15% for the EBT, 15% for the HS, and 16% for the XR-T model GAFCHROMIC films. The correction factors for the exit skin dose due to the build-down region are negligible. There is a small field size dependence for the entrance skin dose correction factor when using the EBT GAFCHROMIC film model. Finally, a procedure that uses EBT model GAFCHROMIC film for an accurate measurement of the skin dose in a parallel-opposed pair 6 MV photon beam arrangement is described.

  1. How flatbed scanners upset accurate film dosimetry. (United States)

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S


    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a change in scanner readout along the lateral scan axis. Although anisotropic light scattering has been presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE of two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) with Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate the effect of light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels in the extreme lateral position. Light polarization due to the film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We conclude that any Gafchromic EBT-type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE, and therefore determination of the LSE per color channel and per dose delivered to the film.

  2. Design and test of a highly accurate device for measuring NaCl solution concentration by ultrasonic velocity with temperature correction

    Institute of Scientific and Technical Information of China (English)

    孟瑞锋; 马小康; 王州博; 董龙梅; 杨涛; 刘东红


    abnormal sample points and checking out the regression coefficient of the model by t-test. The developed model had high prediction accuracy and stability, with a maximum prediction error of 0.25 g/100 g, a determination coefficient of calibration (Rcal2) of 0.9992, a determination coefficient of validation (Rval2) of 0.9988, a root mean square error of calibration (RMSEC) of 0.0894 g/100 g, a root mean square error of prediction (RMSEP) of 0.1015 g/100 g, and a ratio of performance to deviation (RPD) of 28.57, which indicated that the model can be used for practical detection accurately and stably and is suitable for on-line measurement.
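
    The quoted statistics follow the standard chemometric definitions, sketched below on synthetic data (only the formulas are standard; the values here are not the paper's):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error (RMSEC/RMSEP, depending on the data split)."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def rpd(y_true, y_pred):
    """Ratio of performance to deviation: SD of the reference values
    divided by the prediction error; larger is better."""
    return float(np.std(y_true, ddof=1) / rmse(y_true, y_pred))

rng = np.random.default_rng(2)
conc = rng.uniform(0, 26, 50)            # synthetic NaCl concentrations, g/100 g
pred = conc + rng.normal(0, 0.1, 50)     # ~0.1 g/100 g model error
print(rmse(conc, pred) < 0.2, rpd(conc, pred) > 20)
```

    An RPD of 28.57, as reported, means the spread of the reference concentrations is nearly 29 times the prediction error, which is why the model is judged suitable for on-line use.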

  3. 38 CFR 4.46 - Accurate measurement. (United States)


    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  4. BASIC: A Simple and Accurate Modular DNA Assembly Method. (United States)

    Storch, Marko; Casini, Arturo; Mackrow, Ben; Ellis, Tom; Baldwin, Geoff S


    Biopart Assembly Standard for Idempotent Cloning (BASIC) is a simple, accurate, and robust DNA assembly method. The method is based on linker-mediated DNA assembly and provides highly accurate DNA assembly with 99 % correct assemblies for four parts and 90 % correct assemblies for seven parts [1]. The BASIC standard defines a single entry vector for all parts flanked by the same prefix and suffix sequences and its idempotent nature means that the assembled construct is returned in the same format. Once a part has been adapted into the BASIC format it can be placed at any position within a BASIC assembly without the need for reformatting. This allows laboratories to grow comprehensive and universal part libraries and to share them efficiently. The modularity within the BASIC framework is further extended by the possibility of encoding ribosomal binding sites (RBS) and peptide linker sequences directly on the linkers used for assembly. This makes BASIC a highly versatile library construction method for combinatorial part assembly including the construction of promoter, RBS, gene variant, and protein-tag libraries. In comparison with other DNA assembly standards and methods, BASIC offers a simple robust protocol; it relies on a single entry vector, provides for easy hierarchical assembly, and is highly accurate for up to seven parts per assembly round [2].

  5. [Orthognathic surgery: corrective bone operations]. (United States)

    Reuther, J


    The article reviews the history of orthognathic surgery from the middle of the last century up to the present. Initially, mandibular osteotomies were only performed in cases of severe malformations. But during the last century a precise and standardized procedure for correction of the mandible was established. Multiple modifications allowed control of small fragments, functionally stable osteosynthesis, and finally a precise positioning of the condyle. In 1955 Obwegeser and Trauner introduced the sagittal split osteotomy by an intraoral approach. It was the final breakthrough for orthognathic surgery as a standard treatment for corrections of the mandible. Surgery of the maxilla dates back to the nineteenth century. B. von Langenbeck from Berlin is said to have performed the first Le Fort I osteotomy in 1859. After minor changes, Wassmund corrected a posttraumatic malocclusion by a Le Fort I osteotomy in 1927. But it was Axhausen who risked the total mobilization of the maxilla in 1934. By additional modifications and further refinements, Obwegeser paved the way for this approach to become a standard procedure in maxillofacial surgery. Tessier mobilized the whole midface by a Le Fort III osteotomy and showed new perspectives in the correction of severe malformations of the facial bones, creating the basis of modern craniofacial surgery. While the last 150 years were distinguished by the creation and standardization of surgical methods, the present focus lies on precise treatment planning and the consideration of functional aspects of the whole stomatognathic system. To date, 3D visualization by CT scans, stereolithographic models, and computer-aided treatment planning and simulation allow surgery of complex cases and accurate predictions of soft tissue changes.

  6. Diophantine Correct Open Induction

    CERN Document Server

    Raffer, Sidney


    We give an induction-free axiom system for diophantine correct open induction. We relate the problem of whether a finitely generated ring of Puiseux polynomials is diophantine correct to a problem about the value-distribution of a tuple of semialgebraic functions with integer arguments. We use this result, and a theorem of Bergelson and Leibman on generalized polynomials, to identify a class of diophantine correct subrings of the field of descending Puiseux series with real coefficients.

  7. The FLUKA code: An accurate simulation tool for particle therapy

    CERN Document Server

    Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis


    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...

  8. An Accurate Technique for Calculation of Radiation From Printed Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min; Sorensen, Stig B.; Jorgensen, Erik


    The accuracy of various techniques for calculating the radiation from printed reflectarrays is examined, and an improved technique based on the equivalent currents approach is proposed. The equivalent currents are found from a continuous plane wave spectrum calculated by use of the spectral dyadic Green's function. This ensures a correct relation between the equivalent electric and magnetic currents and thus allows an accurate calculation of the radiation over the entire far-field sphere. A comparison to DTU-ESA Facility measurements of a reference offset reflectarray designed and manufactured...

  9. Accurate measurement of ultrasonic velocity by eliminating the diffraction effect

    Institute of Scientific and Technical Information of China (English)

    WEI Tingcun


    The accurate measurement of ultrasonic velocity by the pulse interference method with elimination of the diffraction effect has been investigated experimentally in the VHF range. Two silicate glasses were taken as the specimens, their frequency dependences of longitudinal velocities were measured in the frequency range 50-350 MHz, and the phase advances of ultrasonic signals caused by the diffraction effect were calculated using A. O. Williams' theoretical expression. For the frequency dependences of longitudinal velocities, the measurement results were in good agreement with the simulation ones in which the phase advances were included. It has been shown that the velocity error due to the diffraction effect can be corrected very well by this method.


    Institute of Scientific and Technical Information of China (English)


    Introduction During the teaching and learning process, teachers often check how much students have understood through written assignments. In this article I’d like to describe one method of correcting students’ written work by using a variety of symbols to indicate where students have gone wrong, then asking students to correct their work themselves.

  11. Surface EMG measurements during fMRI at 3T : Accurate EMG recordings after artifact correction

    NARCIS (Netherlands)

    van Duinen, Hiske; Zijdewind, Inge; Hoogduin, H; Maurits, N


    In this experiment, we have measured surface EMG of the first dorsal interosseus during predefined submaximal isometric contractions (5, 15, 30, 50, and 70% of maximal force) of the index finger simultaneously with fMRI measurements. Since we have used sparse sampling fMRI (3-s scanning; 2-s non-sca

  12. Probabilistic quantum error correction

    CERN Document Server

    Fern, Jesse; Terilla, John


    There are well known necessary and sufficient conditions for a quantum code to correct a set of errors. We study weaker conditions under which a quantum code may correct errors with probabilities that may be less than one. We work with stabilizer codes and as an application study how the nine qubit code, the seven qubit code, and the five qubit code perform when there are errors on more than one qubit. As a second application, we discuss the concept of syndrome quality and use it to suggest a way that quantum error correction can be practically improved.
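
    Syndrome-based correction of the kind studied here can be illustrated with the much simpler three-qubit bit-flip repetition code (a classical toy sketch of syndrome decoding, not the five-, seven-, or nine-qubit stabilizer codes analyzed in the paper):

```python
# Stabilizers Z1Z2 and Z2Z3 of the 3-qubit bit-flip code compare adjacent
# bits; the resulting syndrome identifies which single qubit flipped.
SYNDROME_TO_FLIP = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(bits):
    s = (bits[0] ^ bits[1], bits[1] ^ bits[2])   # measure the syndrome
    flip = SYNDROME_TO_FLIP[s]
    out = list(bits)
    if flip is not None:
        out[flip] ^= 1                           # apply the correction
    return tuple(out)

# Every single bit-flip error on the codeword 000 is corrected...
print(all(correct(e) == (0, 0, 0)
          for e in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]))
# ...but a two-bit error is decoded to the wrong codeword
print(correct((1, 1, 0)))  # (1, 1, 1)
```

    The second case shows why errors beyond a code's distance succeed only with some probability, which is the regime of "syndrome quality" the abstract discusses.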

  13. Corrected Age for Preemies (United States)


  14. moco: Fast Motion Correction for Calcium Imaging. (United States)

    Dubbs, Alexander; Guevara, James; Yuste, Rafael


    Motion correction is the first step in a pipeline of algorithms to analyze calcium imaging videos and extract biologically relevant information, for example the network structure of the neurons therein. Fast motion correction is especially critical for closed-loop activity-triggered stimulation experiments, where accurate detection and targeting of specific cells is necessary. We introduce a novel motion-correction algorithm which uses a Fourier-transform approach, and a combination of judicious downsampling and the accelerated computation of many L2 norms using dynamic programming and two-dimensional, FFT-accelerated convolutions, to enhance its efficiency. Its accuracy is comparable to that of established community-used algorithms, and it is more stable to large translational motions. It is programmed in Java and is compatible with ImageJ.
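
    The Fourier-transform registration idea can be sketched as follows (an illustrative NumPy cross-correlation sketch; moco itself is written in Java and uses a different, L2-norm-based criterion):

```python
import numpy as np

def estimate_shift(template, frame):
    """Translational shift between a frame and a template as the argmax
    of their cross-correlation, computed via the convolution theorem."""
    corr = np.fft.ifft2(np.fft.fft2(template) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around indices to signed shifts
    h, w = corr.shape
    return (int(dy) if dy <= h // 2 else int(dy) - h,
            int(dx) if dx <= w // 2 else int(dx) - w)

rng = np.random.default_rng(3)
template = rng.random((128, 128))
frame = np.roll(template, (-5, 3), axis=(0, 1))   # frame moved by (-5, 3)
print(estimate_shift(template, frame))  # (5, -3): shift that re-aligns it
```

    The FFT turns an O(N^2) search over shifts into O(N log N), which is the speed advantage that makes closed-loop use feasible.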

  16. Respiration correction by clustering in ultrasound images (United States)

    Wu, Kaizhi; Chen, Xi; Ding, Mingyue; Sang, Nong


    Respiratory motion is a challenging factor for image acquisition, image-guided procedures and perfusion quantification using contrast-enhanced ultrasound in the abdominal and thoracic region. In order to reduce the influence of respiratory motion, respiratory correction methods were investigated. In this paper we propose a novel, cluster-based respiratory correction method. In the proposed method, we first assign the image frames to the corresponding respiratory phases using spectral clustering. Then, we correct the images automatically by finding a cluster in which points are close to each other. Unlike traditional gating methods, we do not need to estimate the breathing cycle accurately, because images at the same respiratory phase are similar, and thus close to each other in high-dimensional space. The proposed method is tested on a simulated image sequence and a real ultrasound image sequence. The experimental results show the effectiveness of the proposed method both quantitatively and qualitatively.
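
    The clustering step can be illustrated with a toy two-phase example (our own sketch using a normalized graph Laplacian; the actual method must handle more phases and real frame-similarity features):

```python
import numpy as np

def two_phase_clusters(frames):
    """Split frames into two groups: build a similarity graph from
    pairwise frame correlations, then cut it with the sign pattern of
    the Fiedler vector of the normalized graph Laplacian."""
    flat = frames.reshape(len(frames), -1)
    sim = np.clip(np.corrcoef(flat), 0, None)     # nonnegative similarities
    d = sim.sum(axis=1)
    lap = np.eye(len(frames)) - sim / np.sqrt(np.outer(d, d))
    vals, vecs = np.linalg.eigh(lap)              # eigenvalues ascending
    fiedler = vecs[:, 1]                          # 2nd-smallest eigenvector
    return (fiedler > 0).astype(int)

# Synthetic sequence alternating between two "respiratory phases"
rng = np.random.default_rng(4)
base_a, base_b = rng.random((16, 16)), rng.random((16, 16))
frames = np.stack([ (base_a if i % 2 == 0 else base_b)
                    + rng.normal(0, 0.05, (16, 16)) for i in range(10)])
labels = two_phase_clusters(frames)
print(labels[0] != labels[1]
      and len(set(labels[::2])) == 1 and len(set(labels[1::2])) == 1)
```

    Because frames of the same phase form a tightly connected block in the similarity graph, no explicit breathing-period estimate is needed, which matches the abstract's argument against gating.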

  17. Mobile image based color correction using deblurring (United States)

    Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.


    Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed by utilizing food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e. a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique combining image de-blurring and color correction. The contribution consists of introducing an automatic camera-shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space.
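
    The core of a polynomial color correction model can be sketched as a least-squares fit from measured checker-patch colors to their reference values (RGB only; the paper's LMS-space refinement and the deblurring step are omitted, and the data is synthetic):

```python
import numpy as np

def fit_color_correction(measured, reference):
    """Fit a degree-2 polynomial map from measured to reference colors
    using the fiducial marker's patches as training pairs."""
    def expand(c):
        r, g, b = c.T
        return np.stack([np.ones_like(r), r, g, b,
                         r * g, r * b, g * b, r**2, g**2, b**2], axis=1)
    M, _, _, _ = np.linalg.lstsq(expand(measured), reference, rcond=None)
    return lambda colors: expand(colors) @ M   # apply to any pixel colors

rng = np.random.default_rng(5)
reference = rng.random((24, 3))              # e.g. a 24-patch checkerboard
measured = 0.8 * reference + 0.05            # a simple gain + cast distortion
correct_fn = fit_color_correction(measured, reference)
print(np.allclose(correct_fn(measured), reference, atol=1e-6))
```

    Since the simulated distortion is affine and the basis contains the affine terms, the fit recovers it exactly; real lighting changes are only approximated, which is why higher-order terms and a better color space help.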

  18. Adaptable DC offset correction (United States)

    Golusky, John M. (Inventor); Muldoon, Kelly P. (Inventor)


    Methods and systems for adaptable DC offset correction are provided. An exemplary adaptable DC offset correction system evaluates an incoming baseband signal to determine an appropriate DC offset removal scheme; removes a DC offset from the incoming baseband signal based on the appropriate DC offset scheme in response to the evaluated incoming baseband signal; and outputs a reduced DC baseband signal in response to the DC offset removed from the incoming baseband signal.
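The patent language is deliberately general, but the basic operation it covers, estimating and subtracting a DC component from a baseband signal, can be sketched with a simple one-pole running-mean remover (an illustrative scheme, not the patented one; the signal and the adaptation constant are made up):

```python
import math

def remove_dc(samples, alpha=0.01):
    """One-pole running-mean DC estimator: subtracts a slowly tracked offset.

    alpha controls adaptation speed: small alpha = slow but stable tracking.
    """
    estimate = 0.0
    out = []
    for s in samples:
        estimate += alpha * (s - estimate)   # track the DC component
        out.append(s - estimate)             # output the offset-reduced signal
    return out

# A tone riding on a +0.5 DC offset.
sig = [0.5 + math.sin(2 * math.pi * n / 32) for n in range(4096)]
cleaned = remove_dc(sig)

# Once the estimator settles, the mean of the output should be near zero.
tail_mean = sum(cleaned[-1024:]) / 1024
print(round(tail_mean, 3))
```

An adaptable system in the patent's sense would additionally choose among several such removal schemes based on properties of the incoming signal.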

  19. Correctness is not enough

    CERN Document Server

    Pryor, Louise


    The usual aim of spreadsheet audit is to verify correctness. There are two problems with this: first, it is often difficult to tell whether the spreadsheets in question are correct, and second, even if they are, they may still give the wrong results. This paper explains these problems, presents the key criteria for judging a spreadsheet, and discusses how those criteria can be achieved.

  20. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Energy Technology Data Exchange (ETDEWEB)

    Marcondes, Michel L., E-mail: [Physics Institute, University of Sao Paulo, Sao Paulo, 05508-090 (Brazil); Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Shukla, Gaurav, E-mail: [School of Physics and Astronomy, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States); Silveira, Pedro da [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Wentzcovitch, Renata M., E-mail: [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States)


    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  1. Laboratory Building for Accurate Determination of Plutonium

    Institute of Scientific and Technical Information of China (English)


    The accurate determination of plutonium is one of the most important assay techniques for nuclear fuel; it is also the key to chemical measurement transfer and the basis of the nuclear material balance. An

  2. Accurate Calculation of the Differential Cross Section of Bhabha Scattering with Photon Chain Loops Contribution in QED

    Institute of Scientific and Technical Information of China (English)

    JIANG Min; FANG Zhen-Yun; SANG Wen-Long; GAO Fei


    In the minimum electromagnetic coupling model of the interaction between photons and electrons (positrons), we accurately calculate the photon chain renormalized propagator and obtain an accurate result for the differential cross section of Bhabha scattering with a photon chain renormalized propagator in quantum electrodynamics. The related radiative corrections are briefly reviewed and discussed.

  3. Understanding the Code: keeping accurate records. (United States)

    Griffith, Richard


    In his continuing series looking at the legal and professional implications of the Nursing and Midwifery Council's revised Code of Conduct, Richard Griffith discusses the elements of accurate record keeping under Standard 10 of the Code. This article considers the importance of accurate record keeping for the safety of patients and protection of district nurses. The legal implications of records are explained along with how district nurses should write records to ensure these legal requirements are met.


    Institute of Scientific and Technical Information of China (English)


    Introduction: I have been teaching English for ten years and, like many other teachers in middle schools, I teach three big classes each year. Before I had the opportunity to further my study in the SMSTT project, run jointly by the British Council and the State Education Commission of China at Southwest China Teachers University, I found it somewhat difficult to correct students' homework since I had so many students. Now I still have three big classes, but I have found it easier to correct students' homework since I have been combining the techniques learned in the project with my own successful experience. In this article, I attempt to discuss my approach to correcting students' homework. I hope that it will be of some use to those who have not yet had the opportunity to further their training.

  5. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes


    The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, treating a computationally time-consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit state based on an idealized mechanical model, which is adapted to the original limit state by the model correction factor. Reliable approximations are obtained by iterative use of gradient information on the original limit state function, analogously to previous response surface approaches. However, the strength of the model correction factor method is that, in its simpler form, not using gradient information on the original limit state function (or using it only once), a drastic reduction in the number of limit state evaluations is obtained together with good approximations of the reliability. Methods...

  6. Probabilistic error correction for RNA sequencing. (United States)

    Le, Hai-Son; Schulz, Marcel H; McCauley, Brenna M; Hinman, Veronica F; Bar-Joseph, Ziv


    Sequencing of RNAs (RNA-Seq) has revolutionized the field of transcriptomics, but the reads obtained often contain errors. Read error correction can have a large impact on our ability to accurately assemble transcripts. This is especially true for de novo transcriptome analysis, where a reference genome is not available. Current read error correction methods, developed for DNA sequence data, cannot handle the overlapping effects of non-uniform abundance, polymorphisms and alternative splicing. Here we present SEquencing Error CorrEction in Rna-seq data (SEECER), a hidden Markov Model (HMM)-based method, which is the first to successfully address these problems. SEECER efficiently learns hundreds of thousands of HMMs and uses these to correct sequencing errors. Using human RNA-Seq data, we show that SEECER greatly improves on previous methods in terms of quality of read alignment to the genome and assembly accuracy. To illustrate the usefulness of SEECER for de novo transcriptome studies, we generated new RNA-Seq data to study the development of the sea cucumber Parastichopus parvimensis. Our corrected assembled transcripts shed new light on two important stages in sea cucumber development. Comparison of the assembled transcripts to known transcripts in other species has also revealed novel transcripts that are unique to sea cucumber, some of which we have experimentally validated. Supporting website:

  7. Fractal Correction of Well Logging Curves

    Institute of Scientific and Technical Information of China (English)


    Obtaining more accurate physical properties in thin and interbedded layers is always significant for assessing oil-bearing layers, especially in well logging data processing and the interpretation of non-marine oil beds. This paper presents a definition of measures that exhibit a power-law relation with the corresponding scale, as described by fractal theory. Logging curves can therefore be reconstructed according to this power-law relation. The method uses the local structure near concurrent points to compensate for the averaging effect of logging probes and for measurement errors. As an example, deep and medium induction conductivity (IDPH and IMPH) curves from ODP Leg 127 Hole 797C are reconstructed, or corrected. Comparison of the corrected curves with the originals shows fewer adjacent-bed effects, and the power spectra of the corrected curves contain more high-resolution components than the originals. The fractal correction method thus improves the resolution of well logging for thin beds.
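The power-law relation the correction relies on can be sketched with synthetic data (an illustration of the underlying idea only, not the paper's procedure; the exponent, prefactor and scales are made up): if a measure M scales with the observation scale s as M ∝ s^D, then D is recovered by a straight-line fit in log-log space.

```python
import numpy as np

# Synthetic measure-vs-scale data following M = c * s**D with mild noise.
rng = np.random.default_rng(2)
D_true, c = 1.7, 0.5
scales = np.array([1, 2, 4, 8, 16, 32], dtype=float)
measure = c * scales ** D_true * (1 + 0.01 * rng.standard_normal(scales.size))

# Power law M = c * s**D  =>  log M = D * log s + log c: a linear fit.
D_fit, log_c = np.polyfit(np.log(scales), np.log(measure), 1)
print(D_fit)
```

Once the exponent is known, the local power-law model can be used to reconstruct the curve at finer scales than the probe's averaging window, which is the essence of the correction described above.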

  8. Partial Volume Correction in Quantitative Amyloid Imaging (United States)

    Su, Yi; Blazey, Tyler M.; Snyder, Abraham Z.; Raichle, Marcus E.; Marcus, Daniel S.; Ances, Beau M.; Bateman, Randall J.; Cairns, Nigel J.; Aldea, Patricia; Cash, Lisa; Christensen, Jon J.; Friedrichsen, Karl; Hornbeck, Russ C.; Farrar, Angela M.; Owen, Christopher J.; Mayeux, Richard; Brickman, Adam M.; Klunk, William; Price, Julie C.; Thompson, Paul M.; Ghetti, Bernardino; Saykin, Andrew J.; Sperling, Reisa A.; Johnson, Keith A.; Schofield, Peter R.; Buckles, Virginia; Morris, John C.; Benzinger, Tammie. LS.


    Amyloid imaging is a valuable tool for research and diagnosis in dementing disorders. As positron emission tomography (PET) scanners have limited spatial resolution, measured signals are distorted by partial volume effects. Various techniques have been proposed for correcting partial volume effects, but there is no consensus as to whether these techniques are necessary in amyloid imaging, and, if so, how they should be implemented. We evaluated a two-component partial volume correction technique and a regional spread function technique using both simulated and human Pittsburgh compound B (PiB) PET imaging data. Both correction techniques compensated for partial volume effects and yielded improved detection of subtle changes in PiB retention. However, the regional spread function technique was more accurate in application to simulated data. Because PiB retention estimates depend on the correction technique, standardization is necessary to compare results across groups. Partial volume correction has sometimes been avoided because it increases the sensitivity to inaccuracy in image registration and segmentation. However, our results indicate that appropriate partial volume correction may enhance our ability to detect changes in amyloid deposition. PMID:25485714
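The two-component technique evaluated above can be illustrated with a toy Meltzer-style calculation (a sketch of the general idea, not the authors' pipeline; the retention value and tissue fractions are made up): the observed value in a voxel is modeled as the true gray-matter signal diluted by the gray-matter tissue fraction, so dividing by that fraction recovers the signal.

```python
# Toy two-component partial volume correction:
# observed = true_signal * gm_fraction, so corrected = observed / gm_fraction.
def pvc_two_component(observed, gm_fraction, min_fraction=0.1):
    """Divide out the gray-matter tissue fraction; skip near-empty voxels."""
    return [obs / f if f >= min_fraction else 0.0
            for obs, f in zip(observed, gm_fraction)]

true_signal = 2.0                          # hypothetical PiB retention value
gm = [1.0, 0.8, 0.5, 0.05]                 # gray-matter fraction per voxel
observed = [true_signal * f for f in gm]   # partial-volume-diluted measurements

corrected = pvc_two_component(observed, gm)
print(corrected)
```

The `min_fraction` guard illustrates why such corrections are sensitive to segmentation accuracy: where the estimated tissue fraction is small or wrong, the division amplifies noise rather than recovering signal.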

  9. Correction of ocular dystopia. (United States)

    Janecka, I P


    The purpose of this study was to examine results with elective surgical correction of enophthalmos. The study was a retrospective assessment in a university-based referral practice. A consecutive sample of 10 patients who developed ocular dystopia following orbital trauma was examined. The main outcome measures were a subjective evaluation by patients and objective measurements of patients' eye position. The intervention was three-dimensional orbital reconstruction with titanium plates. It is concluded that satisfactory correction of enophthalmos and ocular dystopia can be achieved with elective surgery using titanium plates. In addition, intraoperative measurement of eye position in three planes increases the precision of surgery.

  10. DNA barcode data accurately assign higher spider taxa

    Directory of Open Access Journals (Sweden)

    Jonathan A. Coddington


    Full Text Available The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best-case scenarios “barcodes” (whether single or multiple, organelle or nuclear loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families—taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and retained the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with the number of species per genus and genera per family in the library; above five genera per family and fifteen species per genus, all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However
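The threshold rule can be sketched in a few lines (the 95/91 cut-offs come from the abstract; the sequences are toy data, and a gap-free pairwise identity stands in for the BLAST-based PIdent):

```python
def percent_identity(a, b):
    """Percent of matching positions between two equal-length aligned sequences."""
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

def assign_rank(pident, genus_cut=95.0, family_cut=91.0):
    """Apply the heuristic thresholds reported for spider CO1 barcodes."""
    if pident > genus_cut:
        return "genus"
    if pident >= family_cut:
        return "family"
    return "unassigned"

# Hypothetical aligned CO1 fragments (toy data, not real barcodes).
query = "ACGTACGTACGTACGTACGT"
hit   = "ACGTACGTACGTACGTACGA"   # 19 of 20 positions match
p = percent_identity(query, hit)
print(p, assign_rank(p))
```

A 19/20 match gives PIdent = 95.0, which falls below the strict genus threshold (>95) but above the family one (≥91), so the hit supports only a family-level assignment.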

  11. Corrections for collaborators

    NARCIS (Netherlands)



    In the “Directions and Hints” for collaborators in Flora Malesiana, which has been forwarded to all collaborators, two corrections should be made, viz: 1) p. 12; Omit the explanatory notes under Jamaica Plain, Mass., and Cambridge, Mass. 2) p. 13; Add as number 12a; Stockholm, Paleobotaniska Avdelni


    Institute of Scientific and Technical Information of China (English)


    To err is human. Since the 1960s, most second language teachers and language theorists have regarded errors as natural and inevitable in the language learning process. Instead of regarding them as terrible and disappointing, teachers have come to realize their value. This paper will consider these values, analyze some errors and propose some effective correction techniques.

  13. General forecasting correcting formula



    A general forecasting correction formula is created as a framework for long-term, standardized forecasts. The formula provides new forecasting resources and new possibilities for expanding forecasting, including economic forecasting, into the areas of municipal needs, medium- and small-size business and even individual forecasting.

  14. Radiative Corrections and Z'

    CERN Document Server

    Erler, Jens


    Radiative corrections to parity violating deep inelastic electron scattering are reviewed including a discussion of the renormalization group evolution of the weak mixing angle. Recently obtained results on hypothetical Z' bosons - for which parity violating observables play an important role - are also presented.

  15. Renormalons and Power Corrections

    CERN Document Server

    Beneke, Martin


    Even for short-distance dominated observables the QCD perturbation expansion is never complete. The divergence of the expansion through infrared renormalons provides formal evidence of this fact. In this article we review how this apparent failure can be turned into a useful tool to investigate power corrections to hard processes in QCD.

  16. Fast and accurate determination of modularity and its effect size

    CERN Document Server

    Treviño, Santiago; Del Genio, Charo I; Bassler, Kevin E


    We present a fast spectral algorithm for community detection in complex networks. Our method searches for the partition with the maximum value of the modularity via the interplay of several refinement steps that include both agglomeration and division. We validate the accuracy of the algorithm by applying it to several real-world benchmark networks. On all of these, our algorithm performs as well as or better than any other known polynomial scheme. This allows us to extensively study the modularity distribution in ensembles of Erdős–Rényi networks, producing theoretical predictions for means and variances inclusive of finite-size corrections. Our work provides a way to accurately estimate the effect size of modularity, providing a z-score measure of it and enabling a more informative comparison of networks with different numbers of nodes and links.
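The z-score idea can be illustrated with a small stdlib-only sketch (a toy stand-in for the spectral algorithm itself; the example graph and the random-partition null model are assumptions): compute Newman's modularity Q for a partition, then compare it against Q values of random partitions of the same graph.

```python
import random

def modularity(edges, labels):
    """Newman's modularity Q for an undirected graph and a node partition."""
    m = len(edges)
    internal = {}   # edges with both endpoints in the same community
    degree = {}     # summed degree per community
    for u, v in edges:
        degree[labels[u]] = degree.get(labels[u], 0) + 1
        degree[labels[v]] = degree.get(labels[v], 0) + 1
        if labels[u] == labels[v]:
            internal[labels[u]] = internal.get(labels[u], 0) + 1
    return sum(internal.get(c, 0) / m - (degree[c] / (2 * m)) ** 2
               for c in degree)

def clique(nodes):
    """All edges of a complete graph on the given nodes."""
    return [(a, b) for i, a in enumerate(nodes) for b in nodes[i + 1:]]

# Two 4-cliques joined by a single bridge edge; the natural 2-community split.
edges = clique([0, 1, 2, 3]) + clique([4, 5, 6, 7]) + [(3, 4)]
labels = {n: 0 if n < 4 else 1 for n in range(8)}
q_obs = modularity(edges, labels)

# Effect size: compare against random bipartitions of the same graph.
random.seed(0)
q_rand = [modularity(edges, {n: random.randint(0, 1) for n in range(8)})
          for _ in range(200)]
mean = sum(q_rand) / len(q_rand)
var = sum((q - mean) ** 2 for q in q_rand) / (len(q_rand) - 1)
z = (q_obs - mean) / var ** 0.5
print(q_obs, z)
```

A large positive z indicates that the observed partition's modularity stands well outside the null distribution, which is the comparison-across-networks use case the abstract describes.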

  17. A fast and accurate FPGA based QRS detection system. (United States)

    Shukla, Ashish; Macchiarulo, Luca


    An accurate Field Programmable Gate Array (FPGA) based ECG analysis system is described in this paper. The design, based on a popular software-based QRS detection algorithm, calculates the threshold value for the next peak detection cycle from the median of eight previously detected peaks. The hardware design has an accuracy in excess of 96% in detecting beats correctly when tested with a subset of five 30-minute data records obtained from the MIT-BIH Arrhythmia database. The design, implemented using a proprietary design tool (System Generator), is an extension of our previous work; it uses 76% of the resources available in a small-sized FPGA device (Xilinx Spartan xc3s500), has a higher detection accuracy than our previous design, and takes almost half the analysis time of the software-based approach.
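The thresholding rule described, deriving the next detection threshold from the median of the eight most recently detected peaks, can be sketched in software (an illustration of the rule only, not the FPGA design; the peak heights and the 0.7 detection fraction are made up):

```python
from collections import deque
from statistics import median

class QRSThreshold:
    """Adaptive detection threshold from the median of the last 8 peak heights."""

    def __init__(self, initial=0.5, fraction=0.7, history=8):
        self.peaks = deque(maxlen=history)   # only the 8 most recent peaks kept
        self.initial = initial               # used before any peak is detected
        self.fraction = fraction             # detect at this fraction of the median

    @property
    def value(self):
        if not self.peaks:
            return self.initial
        return self.fraction * median(self.peaks)

    def record_peak(self, height):
        self.peaks.append(height)

thr = QRSThreshold()
for h in [1.0, 1.1, 0.9, 1.2, 1.0, 1.05, 0.95, 1.15]:   # hypothetical R-peak heights
    thr.record_peak(h)
print(thr.value)
```

Using the median rather than the mean makes the threshold robust to a single spuriously tall or short peak, which matters on noisy ECG records.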

  18. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry. (United States)

    Fuchs, Franz G; Hjelmervik, Jon M


    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results.

  19. Geometric correction of APEX hyperspectral data

    Directory of Open Access Journals (Sweden)

    Vreys Kristin


    Full Text Available Hyperspectral imagery originating from airborne sensors is nowadays widely used for the detailed characterization of land surface. The correct mapping of the pixel positions to ground locations largely contributes to the success of the applications. Accurate geometric correction, also referred to as “orthorectification”, is thus an important prerequisite which must be performed prior to using airborne imagery for evaluations like change detection, or mapping or overlaying the imagery with existing data sets or maps. A so-called “ortho-image” provides an accurate representation of the earth’s surface, having been adjusted for lens distortions, camera tilt and topographic relief. In this paper, we describe the different steps in the geometric correction process of APEX hyperspectral data, as applied in the Central Data Processing Center (CDPC) at the Flemish Institute for Technological Research (VITO), Mol, Belgium. APEX ortho-images are generated through direct georeferencing of the raw images, thereby making use of sensor interior and exterior orientation data, boresight calibration data and elevation data. They can be referenced to any user-specified output projection system and can be resampled to any output pixel size.

  20. Corrected transposition of the great arteries

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Young Hi; Park, Jae Hyung; Han, Man Chung [Seoul National University College of Medicine, Seoul (Korea, Republic of)


    Corrected transposition of the great arteries is an unusual congenital cardiac malformation which consists of transposition of the great arteries and ventricular inversion, and which is caused by abnormal development of the conotruncus and ventricular looping. The high frequency of associated cardiac malformations makes it difficult to obtain an accurate morphologic diagnosis. A total of 18 cases of corrected transposition of the great arteries is presented, in which cardiac catheterization and angiocardiography were done at the Department of Radiology, Seoul National University Hospital between September 1976 and June 1981. The clinical, radiographic, and operative findings, with emphasis on the angiocardiographic findings, were analyzed. The results are as follows: 1. Among the 18 cases, 13 have normal cardiac position, 2 have dextrocardia with situs solitus, 2 have dextrocardia with situs inversus, and 1 has levocardia with situs inversus. 2. Segmental sets are (S, L, L) in 15 cases and (I, D, D) in 3 cases, with no exception to the loop rule. 3. Side-by-side interrelationships of both ventricles and both semilunar valves are noted in 10 and 12 cases, respectively. 4. A subaortic-type conus is noted in all 18 cases. 5. Associated cardiac malformations are VSD in 14 cases, PS in 11, PDA in 3, PFO in 3, ASD in 2, right aortic arch in 2, and tricuspid insufficiency, mitral prolapse, persistent left SVC and persistent right SVC in 1 case each. 6. For accurate diagnosis of corrected TGA, selective biventriculography using biplane cineradiography is an essential procedure.

  1. Accurate tracking control in LOM application

    Institute of Scientific and Technical Information of China (English)


    The fabrication of an accurate prototype directly from a CAD model in a short time depends on accurate tracking control and reference trajectory planning in Laminated Object Manufacturing (LOM) applications. An improvement in contour accuracy is acquired by the introduction of a tracking controller and a trajectory generation policy. A model of the X-Y positioning system of the LOM machine is developed as the design basis of the tracking controller. The ZPETC (Zero Phase Error Tracking Controller) is used to eliminate single-axis following error, thus reducing the contour error. The simulation is developed on a Matlab model based on a retrofitted LOM machine, and satisfactory results are acquired.

  2. Accurate Switched-Voltage voltage averaging circuit


    金光, 一幸; 松本, 寛樹


    This paper proposes an accurate Switched-Voltage (SV) voltage averaging circuit. It is presented to compensate for NMOS mismatch error in MOS differential-type voltage averaging circuits. The proposed circuit consists of a voltage averaging circuit and an SV sample/hold (S/H) circuit. It can operate using non-overlapping three-phase clocks. The performance of this circuit is verified by PSpice simulations.

  3. Accurate overlaying for mobile augmented reality

    NARCIS (Netherlands)

    Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.


    Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency rendering.

  4. A highly accurate ab initio potential energy surface for methane (United States)

    Owens, Alec; Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter


    A new nine-dimensional potential energy surface (PES) for methane has been generated using state-of-the-art ab initio theory. The PES is based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set limit and incorporates a range of higher-level additive energy corrections. These include core-valence electron correlation, higher-order coupled cluster terms beyond perturbative triples, scalar relativistic effects, and the diagonal Born-Oppenheimer correction. Sub-wavenumber accuracy is achieved for the majority of experimentally known vibrational energy levels with the four fundamentals of 12CH4 reproduced with a root-mean-square error of 0.70 cm-1. The computed ab initio equilibrium C-H bond length is in excellent agreement with previous values despite pure rotational energies displaying minor systematic errors as J (rotational excitation) increases. It is shown that these errors can be significantly reduced by adjusting the equilibrium geometry. The PES represents the most accurate ab initio surface to date and will serve as a good starting point for empirical refinement.

  5. Aberration Corrected Emittance Exchange

    CERN Document Server

    Nanni, Emilio A


    Full exploitation of emittance exchange (EEX) requires aberration-free performance of a complex imaging system including active radio-frequency (RF) elements which can add temporal distortions. We investigate the performance of an EEX line where the exchange occurs between two dimensions with normalized emittances which differ by orders of magnitude. The transverse emittance is exchanged into the longitudinal dimension using a double dog-leg emittance exchange setup with a 5 cell RF deflector cavity. Aberration correction is performed on the four most dominant aberrations. These include temporal aberrations that are corrected with higher order magnetic optical elements located where longitudinal and transverse emittance are coupled. We demonstrate aberration-free performance of emittances differing by 4 orders of magnitude, i.e. an initial transverse emittance of $\\epsilon_x=1$ pm-rad is exchanged with a longitudinal emittance of $\\epsilon_z=10$ nm-rad.

  6. Radiative corrections to DIS

    CERN Document Server

    Krasny, Mieczyslaw Witold


    Early deep inelastic scattering (DIS) experiments at SLAC discovered partons, identified them as quarks and gluons, and restricted the set of the candidate theories for strong interactions to those exhibiting the asymptotic freedom property. The next generation DIS experiments at FNAL and CERN confirmed the predictions of QCD for the size of the scaling violation effects in the nucleon structure functions. The QCD fits to their data resulted in determining the momentum distributions of the point-like constituents of nucleons. Interpretation of data coming from all these experiments and, in the case of the SLAC experiments, even an elaboration of the running strategies, would not have been possible without a precise understanding of the electromagnetic radiative corrections. In this note I recollect the important milestones, achieved in the period preceding the HERA era, in the high precision calculations of the radiative corrections to DIS, and in the development of the methods of their experimental control. ...

  7. Quality metric for accurate overlay control in <20nm nodes (United States)

    Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki


    The semiconductor industry is moving toward 20 nm nodes and below. As the overlay (OVL) budget gets tighter at these advanced nodes, accuracy in each nanometer of OVL error becomes critical. When process owners select OVL targets and methods for their process, they must do so wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Going toward the 20 nm nodes and below, TMU will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named 'Qmerit' for its imaging-based OVL (IBO) targets, which is obtained on the fly for each OVL measurement point in X and Y. This Qmerit score will enable process owners to select compatible targets which provide accurate OVL values for their process and thereby improve their yield. Together with K-T Analyzer's ability to detect the symmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.

  8. Accurate phylogenetic classification of DNA fragments based on sequence composition

    Energy Technology Data Exchange (ETDEWEB)

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore


    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome data sets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
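The composition-based idea can be illustrated with a toy nearest-centroid classifier over k-mer frequencies (a drastic simplification of PhyloPythia, which uses far richer composition models; the "clades" and sequences here are invented for illustration):

```python
from collections import Counter
from itertools import product

def kmer_profile(seq, k=2):
    """Normalized k-mer frequency vector over the DNA alphabet."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts[km] for km in kmers), 1)
    return [counts[km] / total for km in kmers]

def classify(fragment, centroids, k=2):
    """Assign the fragment to the clade with the nearest composition centroid."""
    profile = kmer_profile(fragment, k)

    def dist(center):
        return sum((a - b) ** 2 for a, b in zip(profile, center))

    return min(centroids, key=lambda name: dist(centroids[name]))

# Toy "clades" with very different compositions (hypothetical training data).
centroids = {
    "AT-rich clade": kmer_profile("ATATATTAATATTAATATAT" * 5),
    "GC-rich clade": kmer_profile("GCGGCCGCGGCGCCGGCGGC" * 5),
}
print(classify("ATTATATATAATATATTATA", centroids))
```

The key property exploited is that composition signatures are carried by every fragment of a genome, so even fragments without marker genes can be binned, which is what makes this approach attractive for metagenomes.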

  9. Congenitally corrected transposition

    Directory of Open Access Journals (Sweden)

    Debich-Spicer Diane


    Full Text Available Congenitally corrected transposition is a rare cardiac malformation characterized by the combination of discordant atrioventricular and ventriculo-arterial connections, usually accompanied by other cardiovascular malformations. Incidence has been reported to be around 1/33,000 live births, accounting for approximately 0.05% of congenital heart malformations. Associated malformations may include interventricular communications, obstructions of the outlet from the morphologically left ventricle, and anomalies of the tricuspid valve. The clinical picture and age of onset depend on the associated malformations, with bradycardia, a single loud second heart sound and a heart murmur being the most common manifestations. In the rare cases where there are no associated malformations, congenitally corrected transposition can lead to progressive atrioventricular valvar regurgitation and failure of the systemic ventricle. The diagnosis can also be made late in life when the patient presents with complete heart block or cardiac failure. The etiology of congenitally corrected transposition is currently unknown, although an increased incidence among families with previous cases of congenitally corrected transposition has been reported. Diagnosis can be made by fetal echocardiography, but is more commonly made postnatally with a combination of clinical signs and echocardiography. The anatomical delineation can be further assessed by magnetic resonance imaging and catheterization. The differential diagnosis is centred on assessing whether the patient presents with isolated malformations or as part of a spectrum. Surgical management consists of repair of the associated malformations, or redirection of the systemic and pulmonary venous return associated with an arterial switch procedure, the so-called double switch approach. Prognosis is defined by the associated malformations, and by the timing and approach to palliative surgical care.

  10. Correcting Duporcq's theorem (United States)

    Nawratil, Georg


    In 1898, Ernest Duporcq stated a famous theorem about rigid-body motions with spherical trajectories, without giving a rigorous proof. Today, this theorem is again of interest, as it is strongly connected with the topic of self-motions of planar Stewart–Gough platforms. We discuss Duporcq's theorem from this point of view and demonstrate that it is not correct. Moreover, we also present a revised version of this theorem. PMID:25540467

  11. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy. (United States)

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis


    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field, as shown in the presented benchmarks against experimental data with both (4)He and (12)C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions leads to excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, with both proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is also capable of importing radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically similar cases will be presented, both in terms of absorbed dose and biological dose calculations, describing the various available features.

  12. The dynamic correction of collimation errors of CT slicing pictures

    Institute of Scientific and Technical Information of China (English)

    LIU Ya-xiong; Sekou Sing-are; LI Di-chen; LU Bing-heng


    To eliminate the motion artifacts of CT images caused by patient motions and other related errors, two kinds of correctors (A type and U type) are proposed to monitor the scanning process and correct the motion artifacts of the original images via reverse geometrical transformations such as reverse scaling, moving, rotating and offsetting. The results confirm that the correction method with either of the correctors can improve the accuracy and reliability of CT images, which helps to eliminate or reduce the motion artifacts and to correct other static errors and image processing errors. This provides a foundation for the 3D reconstruction and accurate fabrication of customized implants.
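    The reverse geometrical transformation described above amounts to inverting an estimated in-plane similarity transform (scale, rotation, offset). A minimal NumPy sketch with made-up parameter values and function names (not taken from the paper):

```python
import numpy as np

# Hedged sketch: undo an estimated in-plane motion (scale s, rotation
# theta, translation t) of a CT slice by applying the inverse similarity
# transform q -> p = R_inv @ (q - t) / s to point coordinates.
def inverse_similarity(points, s, theta, t):
    """points: (N, 2) coordinates measured in the distorted slice."""
    c, si = np.cos(theta), np.sin(theta)
    R_inv = np.array([[c, si], [-si, c]])      # inverse of a 2-D rotation
    return (points - t) @ R_inv.T / s

# Forward model, used here only for a round-trip sanity check.
def forward(points, s, theta, t):
    c, si = np.cos(theta), np.sin(theta)
    R = np.array([[c, -si], [si, c]])
    return s * points @ R.T + t

pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 5.0]])
moved = forward(pts, s=1.02, theta=0.05, t=np.array([1.5, -2.0]))
restored = inverse_similarity(moved, 1.02, 0.05, np.array([1.5, -2.0]))
# restored matches pts to numerical precision
```

    A real implementation would estimate the scale, rotation and offset from the corrector markers in each slice and then resample the image with the inverse transform.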

  13. Automated numerical calculation of Sagnac correction for photonic paths (United States)

    Šlapák, Martin; Vojtěch, Josef; Velc, Radek


    Relativistic effects must be taken into account for highly accurate time and frequency transfers. The most important is the Sagnac correction, which is also a source of non-reciprocity between the two directions of any transfer, in relation to the Earth's rotation. In this case, not all important parameters, such as the exact trajectory of the optical fibre path (leased fibres), are known with sufficient precision; it is therefore necessary to estimate lower and upper bounds on the computed corrections. The presented approach deals with uncertainty in the knowledge of detailed fibre paths, and also with complex paths containing loops. We have made the whole process of calculating the Sagnac correction fully automated.
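    For a path near the Earth's surface, the one-way Sagnac correction can be written as a line integral over the equatorial projection of the path, dt = (omega/c^2) * integral(x dy - y dx). A minimal sketch (our own function names; the fibre is approximated by straight chords between waypoints on a sphere, which is an assumption, not the paper's method):

```python
import math

OMEGA_E = 7.2921150e-5   # Earth rotation rate, rad/s
C = 299_792_458.0        # speed of light, m/s
R_E = 6_378_137.0        # equatorial radius, m (fibre assumed near the surface)

def sagnac_correction(waypoints):
    """One-way Sagnac correction (seconds) for a path given as
    (lat_deg, lon_deg) waypoints, using the line-integral form
    dt = (omega/c^2) * integral(x dy - y dx) over the equatorial
    projection of the path (straight chords between waypoints)."""
    xy = []
    for lat, lon in waypoints:
        phi, lam = math.radians(lat), math.radians(lon)
        xy.append((R_E * math.cos(phi) * math.cos(lam),
                   R_E * math.cos(phi) * math.sin(lam)))
    integral = sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2) in zip(xy, xy[1:]))
    return OMEGA_E / C**2 * integral

# Eastward path along the equator over 90 degrees of longitude,
# finely sampled; the correction is positive for eastward propagation.
path = [(0.0, lon) for lon in range(0, 91)]
dt = sagnac_correction(path)   # ~51.8 ns
```

    Bounding the correction for an uncertain route, as the paper does, would amount to evaluating this integral over the extreme plausible trajectories.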

  14. Herschel SPIRE FTS telescope model correction

    CERN Document Server

    Hopwood, Rosalind; Polehampton, Edward T; Valtchanov, Ivan; Benielli, Dominique; Imhof, Peter; Lim, Tanya; Lu, Nanyao; Marchili, Nicola; Pearson, Chris P; Swinyard, Bruce M


    Emission from the Herschel telescope is the dominant source of radiation for the majority of SPIRE Fourier transform spectrometer (FTS) observations, despite the exceptionally low emissivity of the primary and secondary mirrors. Accurate modelling and removal of the telescope contribution is, therefore, an important and challenging aspect of the FTS calibration and data reduction pipeline. A dust-contaminated telescope model with time invariant mirror emissivity was adopted before the Herschel launch. However, measured FTS spectra show a clear evolution of the telescope contribution over the mission and a strong need for a correction to the standard telescope model in order to reduce the residual background (of up to 7 Jy) in the final data products. Systematic changes in observations of dark sky, taken over the course of the mission, provide a measure of the discrepancy between the observed telescope emission and the telescope model. These dark sky observations have been used to derive a time dependent correction to the telescope model.

  15. Exemplar-based human action pose correction. (United States)

    Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen


    The launch of the Xbox Kinect has built a very successful computer vision product and made a big impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition. The accurate estimation of human poses from the depth image is universally a critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporating pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over contemporary approaches, including what is delivered by the current Kinect system. Our experiments on facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.

  16. A stochastic model of kinetochore-microtubule attachment accurately describes fission yeast chromosome segregation. (United States)

    Gay, Guillaume; Courtheoux, Thibault; Reyes, Céline; Tournier, Sylvie; Gachet, Yannick


    In fission yeast, erroneous attachments of spindle microtubules to kinetochores are frequent in early mitosis. Most are corrected before anaphase onset by a mechanism involving the protein kinase Aurora B, which destabilizes kinetochore microtubules (ktMTs) in the absence of tension between sister chromatids. In this paper, we describe a minimal mathematical model of fission yeast chromosome segregation based on the stochastic attachment and detachment of ktMTs. The model accurately reproduces the timing of correct chromosome biorientation and segregation seen in fission yeast. Prevention of attachment defects requires both appropriate kinetochore orientation and an Aurora B-like activity. The model also reproduces abnormal chromosome segregation behavior (caused by, for example, inhibition of Aurora B). It predicts that, in metaphase, merotelic attachment is prevented by a kinetochore orientation effect and corrected by an Aurora B-like activity, whereas in anaphase, it is corrected through unbalanced forces applied to the kinetochore. These unbalanced forces are sufficient to prevent aneuploidy.

  17. A New Geometrical Correction Method for Inaccessible Area Imagery

    Institute of Scientific and Technical Information of China (English)

    Lee Hong-shik; Park Jun-ku; Lim Sam-sung


    The geometric correction of satellite imagery is performed by making a systematic correction with satellite ephemerides and attitude angles, followed by employing Ground Control Points (GCPs) or Digital Elevation Models (DEMs). In a remote or inaccessible area, however, GCPs cannot be surveyed and thus can be obtained only by reading maps, which is not accurate in practice. In this study, we applied the systematic correction process to the inaccessible area and the precise geometric correction process, using GCPs, to the adjacent accessible area. We then analyzed the correlation between the two geo-referenced Korea Multipurpose Satellite (KOMPSAT-1 EOC) images. A new geometrical correction for the inaccessible-area imagery is achieved by applying this correlation to the inaccessible imagery. With this new method, the accuracy of the inaccessible-area imagery is significantly improved, both absolutely and relatively.

  18. Electroweak Corrections to the Neutralino Pair Production at CERN LHC

    CERN Document Server

    Ahmadov, A I


    We apply the leading and sub-leading electroweak (EW) corrections to the Drell-Yan process of neutralino pair production in proton-proton collisions, in order to calculate the effects of these corrections on neutralino pair production at the LHC. We provide an analysis of the dependence of the Born cross-sections for $pp\\rightarrow\\widetilde\\chi_{i}^{0}\\widetilde\\chi_{j}^{0}$, and of the EW corrections to this process, on the center-of-mass energy $\\sqrt s$, on the $M_2$-$\\mu$ mass plane and on the squark mass for three different scenarios. The numerical results show that the relative correction can reach the level of a few tens of percent as the center-of-mass energy increases, and that the evaluation of EW corrections is a crucial task for all accurate measurements of neutralino pair production processes.

  19. Accurate colorimetric feedback for RGB LED clusters (United States)

    Man, Kwong; Ashdown, Ian


    We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
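    The first- or second-order temperature equations mentioned above can be illustrated with a polynomial fit of a chromaticity coordinate against junction temperature. The calibration numbers below are invented for illustration; real coefficients must come from measurements on the actual LEDs:

```python
import numpy as np

# Hypothetical calibration: measured u' chromaticity of a red LED channel
# at several junction temperatures (deg C). Values are made up.
t_j   = np.array([25.0, 40.0, 55.0, 70.0])
u_prm = np.array([0.5535, 0.5548, 0.5565, 0.5586])

# Second-order fit, as the abstract suggests first- or second-order
# equations suffice to track chromaticity drift with temperature.
coeffs = np.polyfit(t_j, u_prm, deg=2)

def predicted_u(t):
    """Predicted u' chromaticity at junction temperature t (deg C)."""
    return np.polyval(coeffs, t)

# A feedback loop would compare the predicted chromaticity against the
# target white point and rescale the RGB drive currents accordingly.
```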

  20. Accurate guitar tuning by cochlear implant musicians.

    Directory of Open Access Journals (Sweden)

    Thomas Lu

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show the unexpected result that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with his CI than with his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger, at ∼30 Hz, for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between the CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.
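    The beat mechanism the study identifies is just the product-to-sum identity: two nearly equal tones sum to a carrier at the mean frequency whose amplitude is modulated at half the difference frequency, so the listener hears beats at |f2 - f1| Hz regardless of pitch resolution. A quick numerical check (tone values chosen for illustration):

```python
import numpy as np

f1, f2 = 110.0, 110.4          # two slightly mistuned string tones, Hz
fs = 8000                      # sample rate, Hz
t = np.arange(0, 5, 1 / fs)

mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Product-to-sum identity: the mix equals a (f1+f2)/2 carrier whose
# amplitude is modulated at (f2-f1)/2, i.e. audible beats at |f2-f1| Hz.
carrier = np.sin(2 * np.pi * (f1 + f2) / 2 * t)
envelope = 2 * np.cos(2 * np.pi * (f2 - f1) / 2 * t)
assert np.allclose(mix, envelope * carrier, atol=1e-9)

beat_rate = abs(f2 - f1)       # 0.4 Hz: one beat every 2.5 s
```

    Tuning by ear then reduces to slowing the beats to zero, a temporal task that survives the coarse spectral resolution of a CI.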

  1. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance (United States)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo


    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at

  2. Efficient Accurate Context-Sensitive Anomaly Detection

    Institute of Scientific and Technical Information of China (English)


    For program behavior-based anomaly detection, the only way to ensure accurate monitoring is to construct an efficient and precise program behavior model. A new program behavior-based anomaly detection model, called the combined pushdown automaton (CPDA) model, was proposed, based on static analysis of binary executables. The CPDA model incorporates the optimized call stack walk and code instrumentation techniques to gain complete context information. The proposed method can thereby detect more attacks while retaining good performance.

  3. On accurate determination of contact angle (United States)

    Concus, P.; Finn, R.


    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  4. Accurate Control of Josephson Phase Qubits (United States)


    Accurate control of Josephson phase qubits. Matthias Steffen, John M. Martinis, and Isaac L. Chuang, Physical Review B 68, 224518 (2003). Center for Bits and Atoms and Department of Physics, MIT, Cambridge, Massachusetts 02139, USA; Solid State and Photonics Laboratory, Stanford University.

  5. Accurate guitar tuning by cochlear implant musicians. (United States)

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang


    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  6. Synthesizing Accurate Floating-Point Formulas


    Ioualalen, Arnault; Martel, Matthieu


    Many critical embedded systems perform floating-point computations, yet their accuracy is difficult to assert and strongly depends on how formulas are written in programs. In this article, we focus on the synthesis of accurate formulas mathematically equal to the original formulas occurring in source codes. In general, an expression may be rewritten in many ways. To avoid any combinatorial explosion, we use an intermediate representation, called APEG, enabling us to rep...


    Institute of Scientific and Technical Information of China (English)


    Mistakes and their correction generally follow one another in the language classroom. Most teachers think that correction is a necessary part of teaching, while most students agree that making mistakes is a necessary part of learning. Although both teachers and students maintain that correction and mistakes are necessary, we often find that some correction helps students’ learning and some does not. Correction can make students lose confidence and interest in learning. In order to try and find out more about why this happens, I surveyed students’ attitudes towards mistakes and correction.

  8. Experimental repetitive quantum error correction. (United States)

    Schindler, Philipp; Barreiro, Julio T; Monz, Thomas; Nebendahl, Volckmar; Nigg, Daniel; Chwalla, Michael; Hennrich, Markus; Blatt, Rainer


    The computational potential of a quantum processor can only be unleashed if errors during a quantum computation can be controlled and corrected for. Quantum error correction works if imperfections of quantum gate operations and measurements are below a certain threshold and corrections can be applied repeatedly. We implement multiple quantum error correction cycles for phase-flip errors on qubits encoded with trapped ions. Errors are corrected by a quantum-feedback algorithm using high-fidelity gate operations and a reset technique for the auxiliary qubits. Up to three consecutive correction cycles are realized, and the behavior of the algorithm for different noise environments is analyzed.
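    The phase-flip cycles described here can be illustrated with the textbook three-qubit phase-flip code: encode a|+++> + b|--->, measure the stabilizers X1X2 and X2X3, and apply the indicated Z correction. This is a state-vector sketch of that code, not a simulation of the trapped-ion experiment:

```python
import numpy as np

# Minimal sketch of one round of the three-qubit phase-flip code.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op(single, pos):
    """Embed a single-qubit operator at position pos in a 3-qubit register."""
    mats = [I2, I2, I2]
    mats[pos] = single
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)

def encode(alpha, beta):
    """Code word alpha|+++> + beta|--->."""
    ppp = np.kron(np.kron(plus, plus), plus)
    mmm = np.kron(np.kron(minus, minus), minus)
    return alpha * ppp + beta * mmm

def correct(state):
    """Read stabilizers X1X2 and X2X3 (deterministic eigenvalues on these
    states) and apply the Z correction the syndrome indicates."""
    s1 = np.real(state.conj() @ op(X, 0) @ op(X, 1) @ state)
    s2 = np.real(state.conj() @ op(X, 1) @ op(X, 2) @ state)
    if s1 < 0 and s2 > 0:
        state = op(Z, 0) @ state
    elif s1 < 0 and s2 < 0:
        state = op(Z, 1) @ state
    elif s1 > 0 and s2 < 0:
        state = op(Z, 2) @ state
    return state

psi = encode(0.6, 0.8)
corrupted = op(Z, 1) @ psi          # single phase-flip on the middle qubit
fidelity = abs(np.vdot(correct(corrupted), psi)) ** 2   # recovers 1.0
```

    Repeating this cycle, with fresh auxiliary qubits each round as in the experiment, is what allows errors to be corrected repeatedly rather than only once.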

  9. Accurate structural correlations from maximum likelihood superpositions.

    Directory of Open Access Journals (Sweden)

    Douglas L Theobald


    The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
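    The core computation, PCA of a correlation matrix over an ensemble of conformations, can be sketched on toy data. Note the hedge: this uses an ordinary sample correlation estimate, not the maximum-likelihood estimator and superposition procedure of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble: 500 conformations of a 10-"atom" structure in which two
# atoms fluctuate together along x (a built-in positional correlation).
n_conf, n_atoms = 500, 10
base = rng.normal(size=(n_atoms, 3))
ensemble = np.tile(base, (n_conf, 1, 1)) + 0.01 * rng.normal(size=(n_conf, n_atoms, 3))
shared = rng.normal(size=n_conf)
ensemble[:, 2, 0] += 0.5 * shared       # atom 2, x coordinate
ensemble[:, 7, 0] += 0.5 * shared       # atom 7, x coordinate

# Flatten to (n_conf, 3*n_atoms) and do PCA on the sample correlation
# matrix of the coordinates.
coords = ensemble.reshape(n_conf, -1)
corr = np.corrcoef(coords, rowvar=False)
evals, evecs = np.linalg.eigh(corr)     # eigenvalues in ascending order
top_mode = evecs[:, -1]                 # dominant mode of correlation

# The dominant mode loads mainly on the x coordinates of atoms 2 and 7
# (flattened indices 6 and 21), recovering the planted correlation.
loads = np.abs(top_mode).reshape(n_atoms, 3)
```

    A "PCA plot" in the paper's sense would color-code these per-atom loadings onto the 3-D structure.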

  10. Chicago aberration correction work

    Energy Technology Data Exchange (ETDEWEB)

    Beck, V.D. [1 Hobby Drive, Ridgefield, CT 06877-01922 (United States)]


    The author describes from his personal involvement the many improvements to electron microscopy Albert Crewe and his group brought by minimizing the effects of aberrations. The Butler gun was developed to minimize aperture aberrations in a field emission electron gun. In the 1960s, Crewe anticipated using a spherical aberration corrector based on Scherzer's design. Since the tolerances could not be met mechanically, a method of moving the center of the octopoles electrically was developed by adding lower order multipole fields. Because the corrector was located about 15 cm ahead of the objective lens, combination aberrations would arise with the objective lens. This fifth order aberration would then limit the aperture of the microscope. The transformation of the off axis aberration coefficients of a round lens was developed and a means to cancel anisotropic coma was developed. A new method of generating negative spherical aberration was invented using the combination aberrations of hexapoles. Extensions of this technique to higher order aberrations were developed. An electrostatic electron mirror was invented, which allows the cancellation of primary spherical aberration and first order chromatic aberration. A reduction of chromatic aberration by two orders of magnitude was demonstrated using such a system. Highlights: • Crewe and his group made significant advances in aberration correction and reduction. • A deeper understanding of the quadrupole-octopole corrector was developed. • A scheme to correct spherical aberration using hexapoles was developed. • Chromatic aberration was corrected using a uniform field mirror.

  11. [Correction of hypospadias]. (United States)

    Bianchi, M


    A thorough evaluation of both the urethral and penile malformation is mandatory for the choice of the best surgical treatment of patients with hypospadias. The site and the size of the urethral meatus, the presence of a chordee and of a velamentous distal urethra must be carefully assessed. In distal (glandular and coronal) hypospadias, meatal advancement with glanduloplasty is the treatment of choice. In proximal hypospadias with chordee, the transverse preputial island flap according to Duckett's technique allows a one-stage hypospadias repair. Awareness of the possible psychologic impact of genital malformations in childhood recommends an early correction of hypospadias, if possible during the first year of life.

  12. Brain Image Motion Correction

    DEFF Research Database (Denmark)

    Jensen, Rasmus Ramsbøl; Benjaminsen, Claus; Larsen, Rasmus


    The application of motion tracking is wide, including: industrial production lines, motion interaction in gaming, computer-aided surgery and motion correction in medical brain imaging. Several devices for motion tracking exist using a variety of different methodologies. In order to use such devices...... offset and tracking noise in medical brain imaging. The data are generated from a phantom mounted on a rotary stage and have been collected using a Siemens High Resolution Research Tomograph for positron emission tomography. During acquisition the phantom was tracked with our latest tracking prototype...

  13. Calculating correct compilers

    DEFF Research Database (Denmark)

    Bahr, Patrick; Hutton, Graham


    In this article, we present a new approach to the problem of calculating compilers. In particular, we develop a simple but general technique that allows us to derive correct compilers from high-level semantics by systematic calculation, with all details of the implementation of the compilers...... falling naturally out of the calculation process. Our approach is based upon the use of standard equational reasoning techniques, and has been applied to calculate compilers for a wide range of language features and their combination, including arithmetic expressions, exceptions, state, various forms...

  14. Using Online Annotations to Support Error Correction and Corrective Feedback (United States)

    Yeh, Shiou-Wen; Lo, Jia-Jiunn


    Giving feedback on second language (L2) writing is a challenging task. This research proposed an interactive environment for error correction and corrective feedback. First, we developed an online corrective feedback and error analysis system called "Online Annotator for EFL Writing". The system consisted of five facilities: Document Maker,…

  15. Flange Correction For Metal-To-Metal Contacts (United States)

    Lieneweg, Udo; Hannaman, David J.


    Improved mathematical model provides correction for flange effect in estimating resistance of square contact between two metal layers from standard four-terminal measurements. Extended version of one developed previously for contact between metal layer and semiconductor layer, wherein flange effect important in semiconductor layer only. Here flange effect in both metal layers significant. Interfacial resistances extracted more accurately.

  16. Power corrections, renormalons and resummation

    Energy Technology Data Exchange (ETDEWEB)

    Beneke, M.


    I briefly review three topics of recent interest concerning power corrections, renormalons and Sudakov resummation: (a) 1/Q corrections to event shape observables in e+e- annihilation, (b) power corrections in Drell-Yan production and (c) factorial divergences that arise in the resummation of large infrared (Sudakov) logarithms in moment or 'real' space.

  17. 78 FR 34245 - Miscellaneous Corrections (United States)


    ... Federal Regulations is sold by the Superintendent of Documents. Prices of new books are listed in the... the name of its human capital office, correcting and adding missing cross-references, correcting grammatical errors, revising language...

  18. 75 FR 16516 - Dates Correction (United States)


    ... From the Federal Register Online via the Government Publishing Office ] NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Office of the Federal Register Dates Correction Correction In the Notices section... through 15499, the date at the top of each page is corrected to read ``Monday, March 29, 2010''....

  19. Radiation camera motion correction system (United States)

    Hoffer, P.B.


    The device determines the ratio of the intensity of radiation received by a radiation camera from two separate portions of the object. A correction signal is developed to maintain this ratio at a substantially constant value and this correction signal is combined with the camera signal to correct for object motion. (Official Gazette)

  20. Niche Genetic Algorithm with Accurate Optimization Performance

    Institute of Scientific and Technical Information of China (English)

    LIU Jian-hua; YAN De-kun


    Based on a crowding mechanism, a novel niche genetic algorithm was proposed which records the evolutionary direction dynamically during evolution. After evolution, the precision of the solutions can be greatly improved by means of local searching along the recorded direction. Simulation shows that this algorithm can not only keep population diversity but also find accurate solutions. Although this method takes more time than the standard GA, it is well worth applying to cases that demand high solution precision.
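    The two-phase idea, crowding-based niching followed by a local refinement step, can be sketched generically. This is textbook deterministic crowding plus a simple hill-climb standing in for the paper's "recorded direction" search; the function, operators and parameters are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(x):
    return np.sin(5 * np.pi * x) ** 2   # five equal peaks on [0, 1]

# Crowding: each child competes only with its most similar parent,
# which preserves several niches (peaks) at once.
pop = rng.random(40)
for _ in range(100):
    rng.shuffle(pop)
    for i in range(0, len(pop), 2):
        p1, p2 = pop[i], pop[i + 1]
        c1 = np.clip((p1 + p2) / 2 + rng.normal(0, 0.05), 0, 1)
        c2 = np.clip(p1 + rng.normal(0, 0.05), 0, 1)
        for c in (c1, c2):
            j = i if abs(c - p1) < abs(c - p2) else i + 1
            if fitness(c) > fitness(pop[j]):
                pop[j] = c

# Post-evolution refinement: a short local search from each survivor,
# standing in for the paper's search along the recorded direction.
def hill_climb(x, step=1e-3, iters=300):
    for _ in range(iters):
        for cand in (x + step, x - step):
            cand = float(np.clip(cand, 0, 1))
            if fitness(cand) > fitness(x):
                x = cand
    return x

refined = np.array([hill_climb(x) for x in pop])
best = fitness(refined).max()       # close to 1.0 after refinement
```

    The refinement step is where the extra runtime the abstract mentions is spent, in exchange for higher solution precision.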

  1. Accurate estimation of indoor travel times

    DEFF Research Database (Denmark)

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan


    the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood---both for routes traveled as well as for sub-routes thereof. In...... are collected within the building complex. Results indicate that InTraTime is superior with respect to metrics such as deployment cost, maintenance cost and estimation accuracy, yielding an average deviation from actual travel times of 11.7 %. This accuracy was achieved despite using a minimal-effort setup...
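    The core idea, learning travel times for routes and all their sub-routes from historical traces and estimating by lookup with a fallback, can be sketched in plain Python. The class and method names here are invented for illustration, not the InTraTime API:

```python
from collections import defaultdict
from statistics import mean

class TravelTimeModel:
    """Illustrative sketch: learn travel times for routes and their
    sub-routes from historical indoor position traces."""

    def __init__(self):
        self.times = defaultdict(list)   # route tuple -> observed durations (s)

    def observe(self, trace):
        """trace: list of (room, timestamp_s) visits in order; records
        every contiguous sub-route of the trace."""
        for i in range(len(trace)):
            for j in range(i + 1, len(trace)):
                route = tuple(r for r, _ in trace[i:j + 1])
                self.times[route].append(trace[j][1] - trace[i][1])

    def estimate(self, route):
        """Mean travel time for a route; unseen routes fall back to
        chaining pairwise legs (assumes every leg has been observed)."""
        route = tuple(route)
        if route in self.times:
            return mean(self.times[route])
        return sum(self.estimate(route[k:k + 2]) for k in range(len(route) - 1))

model = TravelTimeModel()
model.observe([("A", 0), ("B", 40), ("C", 100)])
model.observe([("A", 0), ("B", 50)])
model.estimate(("A", "B", "C"))   # -> 100 (one full observation)
```

    A production system would additionally weight observations by recency and keep per-route likelihoods, as the abstract describes.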

  2. Accurate diagnosis is essential for amebiasis

    Institute of Scientific and Technical Information of China (English)


    Amebiasis is one of the three most common causes of death from parasitic disease, and Entamoeba histolytica is the most widely distributed parasite in the world. In particular, Entamoeba histolytica infection in the developing countries is a significant health problem in amebiasis-endemic areas, with a significant impact on infant mortality[1]. In recent years a worldwide increase in the number of patients with amebiasis has refocused attention on this important infection. On the other hand, improvements in the quality of parasitological methods and the widespread use of accurate techniques have improved our knowledge of the disease.

  3. The first accurate description of an aurora (United States)

    Schröder, Wilfried


    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  4. New law requires 'medically accurate' lesson plans. (United States)


    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.

  5. Universality: Accurate Checks in Dyson's Hierarchical Model (United States)

    Godina, J. J.; Meurice, Y.; Oktay, M. B.


    In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10^-8 and Δ = 0.4259469 ± 10^-7, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
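    The "linear fitting" step is a log-log fit of the susceptibility divergence χ ≈ A(βc − β)^(−γ). A sketch on synthetic, noiseless data (the numbers A and βc are made up; only γ echoes the abstract's value, and the hierarchical model itself is not reproduced here):

```python
import numpy as np

# Synthetic susceptibility chi = A * (beta_c - beta)**(-gamma) with
# invented A and beta_c; recover the leading exponent by a linear fit
# of log(chi) against log(beta_c - beta).
beta_c, gamma_true, A = 1.179, 1.2991, 0.75
beta = beta_c - np.logspace(-6, -2, 40)
chi = A * (beta_c - beta) ** (-gamma_true)

slope, intercept = np.polyfit(np.log(beta_c - beta), np.log(chi), 1)
gamma_est = -slope        # recovers 1.2991 on noiseless data
```

    On real data, the subleading exponent Δ enters as a correction term χ ≈ A(βc − β)^(−γ)(1 + B(βc − β)^Δ), which is why the paper fits both.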

  6. Optimal estimation of ship's attitudes for beampattern corrections in a coaxial circular array

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.; Dev, K.K.

    A study is conducted to accurately estimate the attitude of a ship in motion, and the estimate is used to arrive at the corrections required for the farfield pattern of a coaxial circular array. The relevant analytical expression is developed...

  7. Second order QCD corrections to inclusive semileptonic b → X_c l ν̄

    CERN Document Server

    Biswas, Sandip


    We extend previous computations of the second order QCD corrections to semileptonic b → c inclusive transitions to the case where the charged lepton in the final state is massive. This allows an accurate description of b → c τ ν̄ transitions.

  8. Direct anharmonic correction method by molecular dynamics (United States)

    Liu, Zhong-Li; Li, Rui; Zhang, Xiu-Lu; Qu, Nuo; Cai, Ling-Cang


    The quick calculation of accurate anharmonic effects of lattice vibrations is crucial to the calculation of thermodynamic properties, the construction of multi-phase diagrams and equations of state of materials, and the theoretical design of new materials. In this paper, we propose a direct free energy interpolation (DFEI) method based on the temperature-dependent phonon density of states (TD-PDOS) reduced from molecular dynamics simulations. Using the DFEI method, after anharmonic free energy corrections we reproduced the thermal expansion coefficients, the specific heat, the thermal pressure, the isothermal bulk modulus, and the Hugoniot P-V-T relationships of Cu easily and accurately. Extensive tests on other materials, including a metal, an alloy, a semiconductor, and an insulator, also show that the DFEI method can easily recover the residual anharmonicity that the quasi-harmonic approximation (QHA) omits. The DFEI method is thus a very efficient way to conduct anharmonic corrections beyond the QHA, and it is much more straightforward than previous anharmonic methods.

  9. Anomaly corrected heterotic horizons (United States)

    Fontanella, A.; Gutowski, J. B.; Papadopoulos, G.


    We consider supersymmetric near-horizon geometries in heterotic supergravity up to two loop order in sigma model perturbation theory. We identify the conditions for the horizons to admit enhancement of supersymmetry. We show that solutions which undergo supersymmetry enhancement exhibit an sl(2,ℝ) symmetry, and we describe the geometry of their horizon sections. We also prove a modified Lichnerowicz type theorem, incorporating α' corrections, which relates Killing spinors to zero modes of near-horizon Dirac operators. Furthermore, we demonstrate that there are no AdS2 solutions in heterotic supergravity up to second order in α' for which the fields are smooth and the internal space is smooth and compact without boundary. We investigate a class of nearly supersymmetric horizons, for which the gravitino Killing spinor equation is satisfied on the spatial cross sections but not the dilatino one, and present a description of their geometry.

  10. Anomaly Corrected Heterotic Horizons

    CERN Document Server

    Fontanella, A; Papadopoulos, G


    We consider supersymmetric near-horizon geometries in heterotic supergravity up to two loop order in sigma model perturbation theory. We identify the conditions for the horizons to admit enhancement of supersymmetry. We show that solutions which undergo supersymmetry enhancement exhibit an sl(2,R) symmetry, and we describe the geometry of their horizon sections. We also prove a modified Lichnerowicz type theorem, incorporating α' corrections, which relates Killing spinors to zero modes of near-horizon Dirac operators. Furthermore, we demonstrate that there are no AdS2 solutions in heterotic supergravity up to second order in α' for which the fields are smooth and the internal space is smooth and compact without boundary. We investigate a class of nearly supersymmetric horizons, for which the gravitino Killing spinor equation is satisfied on the spatial cross sections but not the dilatino one, and present a description of their geometry.

  11. Catalytic quantum error correction

    CERN Document Server

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu


    We develop the theory of entanglement-assisted quantum error-correcting (EAQEC) codes, a generalization of the stabilizer formalism to the setting in which the sender and receiver have access to pre-shared entanglement. Conventional stabilizer codes are equivalent to dual-containing symplectic codes. In contrast, EAQEC codes do not require the dual-containing condition, which greatly simplifies their construction. We show how any quaternary classical code can be made into an EAQEC code. In particular, efficient modern codes, like LDPC codes, which attain the Shannon capacity, can be made into EAQEC codes attaining the hashing bound. In a quantum computation setting, EAQEC codes give rise to catalytic quantum codes which maintain a region of inherited noiseless qubits. We also give an alternative construction of EAQEC codes by making classical entanglement-assisted codes coherent.

  12. EDITORIAL: Politically correct physics? (United States)

    Pople Deputy Editor, Stephen


    If you were a caring, thinking, liberally minded person in the 1960s, you marched against the bomb, against the Vietnam war, and for civil rights. By the 1980s, your voice was raised about the destruction of the rainforests and the threat to our whole planetary environment. At the same time, you opposed discrimination against any group because of race, sex or sexual orientation. You reasoned that people who spoke or acted in a discriminatory manner should be discriminated against. In other words, you became politically correct. Despite its oft-quoted excesses, the political correctness movement sprang from well-founded concerns about injustices in our society. So, on balance, I am all for it. Or, at least, I was until it started to invade science. Biologists were the first to feel the impact. No longer could they refer to 'higher' and 'lower' orders, or 'primitive' forms of life. To the list of undesirable 'isms' - sexism, racism, ageism - had been added a new one: speciesism. Chemists remained immune to the PC invasion, but what else could you expect from a group of people so steeped in tradition that their principal unit, the mole, requires the use of the thoroughly unreconstructed gram? Now it is the turn of the physicists. This time, the offenders are not those who talk disparagingly about other people or animals, but those who refer to 'forms of energy' and 'heat'. Political correctness has evolved into physical correctness. I was always rather fond of the various forms of energy: potential, kinetic, chemical, electrical, sound and so on. My students might merge heat and internal energy into a single, fuzzy concept loosely associated with moving molecules. They might be a little confused at a whole new crop of energies - hydroelectric, solar, wind, geothermal and tidal - but they could tell me what devices turned chemical energy into electrical energy, even if they couldn't quite appreciate that turning tidal energy into geothermal energy wasn't part of the

  13. How Accurately can we Calculate Thermal Systems?

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A


    I would like to determine how accurately a variety of neutron transport code packages (codes and cross section libraries) can calculate simple integral parameters, such as K_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore, rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully this will eventually lead to improvements in both our codes and the thermal scattering models that they use. In order to accomplish this I propose a number of extremely simple systems involving thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium-fueled thermal system, i.e., our typical thermal reactors.

  14. Accurate pattern registration for integrated circuit tomography

    Energy Technology Data Exchange (ETDEWEB)

    Levine, Zachary H.; Grantham, Steven; Neogi, Suneeta; Frigo, Sean P.; McNulty, Ian; Retsch, Cornelia C.; Wang, Yuxin; Lucatorto, Thomas B.


    As part of an effort to develop high resolution microtomography for engineered structures, a two-level copper integrated circuit interconnect was imaged using 1.83 keV x rays at 14 angles employing a full-field Fresnel zone plate microscope. A major requirement for high resolution microtomography is the accurate registration of the reference axes in each of the many views needed for a reconstruction. A reconstruction with 100 nm resolution would require registration accuracy of 30 nm or better. This work demonstrates that even images that have strong interference fringes can be used to obtain accurate fiducials through the use of Radon transforms. We show that we are able to locate the coordinates of the rectilinear circuit patterns to 28 nm. The procedure is validated by agreement between an x-ray parallax measurement of 1.41 ± 0.17 µm and a measurement of 1.58 ± 0.08 µm from a scanning electron microscope image of a cross section.

  15. Accurate determination of characteristic relative permeability curves (United States)

    Krause, Michael H.; Benson, Sally M.


    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.

  16. Accurate pose estimation for forensic identification (United States)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk


    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constrains on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  17. Accurate taxonomic assignment of short pyrosequencing reads. (United States)

    Clemente, José C; Jansson, Jesper; Valiente, Gabriel


    Ambiguities in the taxonomy dependent assignment of pyrosequencing reads are usually resolved by mapping each read to the lowest common ancestor in a reference taxonomy of all those sequences that match the read. This conservative approach has the drawback of mapping a read to a possibly large clade that may also contain many sequences not matching the read. A more accurate taxonomic assignment of short reads can be made by mapping each read to the node in the reference taxonomy that provides the best precision and recall. We show that given a suffix array for the sequences in the reference taxonomy, a short read can be mapped to the node of the reference taxonomy with the best combined value of precision and recall in time linear in the size of the taxonomy subtree rooted at the lowest common ancestor of the matching sequences. An accurate taxonomic assignment of short reads can thus be made with about the same efficiency as when mapping each read to the lowest common ancestor of all matching sequences in a reference taxonomy. We demonstrate the effectiveness of our approach on several metagenomic datasets of marine and gut microbiota.
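The best-precision-and-recall mapping described above can be illustrated with a toy sketch. The taxonomy, the read matches, and the use of F1 (with recall as a tie-breaker) to combine precision and recall are illustrative assumptions; the paper's suffix-array machinery is omitted.

```python
# Hedged sketch: assigning a read to the taxonomy node with the best
# precision/recall, compared with the conservative lowest-common-ancestor
# (LCA) rule. Toy taxonomy and matches are invented for illustration.

TAXONOMY = {                     # internal node -> children
    "root": ["Bacteria", "Archaea"],
    "Bacteria": ["Firmicutes", "Proteobacteria"],
    "Firmicutes": ["sp1", "sp2"],
    "Proteobacteria": ["sp3", "sp4"],
    "Archaea": ["sp5"],
}

def leaves(node):
    """All leaf sequences in the subtree rooted at `node`."""
    kids = TAXONOMY.get(node)
    if not kids:
        return {node}
    return set().union(*(leaves(k) for k in kids))

def best_assignment(matches):
    """Node maximizing F1 of precision/recall over the matching sequences."""
    nodes = set(TAXONOMY) | {c for kids in TAXONOMY.values() for c in kids}
    best, best_key = None, (-1.0, -1.0)
    for node in nodes:
        under = leaves(node)
        hit = len(under & matches)
        if hit == 0:
            continue
        prec, rec = hit / len(under), hit / len(matches)
        f1 = 2 * prec * rec / (prec + rec)
        if (f1, rec) > best_key:     # tie-break on recall
            best, best_key = node, (f1, rec)
    return best

# A read matching sp1 and sp2: LCA and best-F1 node agree ("Firmicutes").
# A read matching sp1 and sp3: the LCA "Bacteria" has precision only 0.5,
# and the F1 criterion makes that trade-off explicit.
print(best_assignment({"sp1", "sp2"}))  # Firmicutes
```

The real method achieves this search in time linear in the subtree rooted at the LCA; the brute-force loop above is only meant to show the scoring.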

  18. Accurate Classification of RNA Structures Using Topological Fingerprints (United States)

    Li, Kejie; Gribskov, Michael


    While RNAs are well known to possess complex structures, functionally similar RNAs often have little sequence similarity. While the exact size and spacing of base-paired regions vary, functionally similar RNAs have pronounced similarity in the arrangement, or topology, of base-paired stems. Furthermore, predicted RNA structures often lack pseudoknots (a crucial aspect of biological activity), and are only partially correct, or incomplete. A topological approach addresses all of these difficulties. In this work we describe each RNA structure as a graph that can be converted to a topological spectrum (RNA fingerprint). The set of subgraphs in an RNA structure, its RNA fingerprint, can be compared with the fingerprints of other RNA structures to identify and correctly classify functionally related RNAs. Topologically similar RNAs can be identified even when a large fraction, up to 30%, of the stems are omitted, indicating that highly accurate structures are not necessary. We investigate the performance of the RNA fingerprint approach on a set of eight highly curated RNA families, with diverse sizes and functions, containing pseudoknots, and with little sequence similarity–an especially difficult test set. In spite of the difficult test set, the RNA fingerprint approach is very successful (ROC AUC > 0.95). Due to the inclusion of pseudoknots, the RNA fingerprint approach both covers a wider range of possible structures than methods based only on secondary structure, and its tolerance for incomplete structures suggests that it can be applied even to predicted structures. Source code is freely available at PMID:27755571

  19. Accurate molecular classification of cancer using simple rules

    Directory of Open Access Journals (Sweden)

    Gotoh Osamu


    Full Text Available Abstract Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensional gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often hampers the interpretability of the models. For a better understanding of the classification results, it is desirable to develop simpler rule-based models with as few marker genes as possible. Methods We screened a small number of informative single genes and gene pairs on the basis of their depended degrees proposed in rough sets. Applying the decision rules induced by the selected genes or gene pairs, we constructed cancer classifiers. We tested the efficacy of the classifiers by leave-one-out cross-validation (LOOCV) of training sets and classification of independent test sets. Results We applied our methods to five cancerous gene expression datasets: leukemia (acute lymphoblastic leukemia [ALL] vs. acute myeloid leukemia [AML]), lung cancer, prostate cancer, breast cancer, and leukemia (ALL vs. mixed-lineage leukemia [MLL] vs. AML). Accurate classification outcomes were obtained by utilizing just one or two genes. Some genes that correlated closely with the pathogenesis of relevant cancers were identified. In terms of both classification performance and algorithm simplicity, our approach outperformed or at least matched existing methods. Conclusion In cancerous gene expression datasets, a small number of genes, even one or two if selected correctly, is capable of achieving an ideal cancer classification effect. This finding also means that very simple rules may perform well for cancerous class prediction.

  20. Accurate, fully-automated NMR spectral profiling for metabolomics.

    Directory of Open Access Journals (Sweden)

    Siamak Ravanbakhsh

    Full Text Available Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile", i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's Nuclear Magnetic Resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures, including real biological samples (serum and CSF), defined mixtures, and realistic computer-generated spectra involving >50 compounds, show that BAYESIL can autonomously find the concentrations of NMR-detectable metabolites accurately (~90% correct identification and ~10% quantification error) in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic, publicly-accessible system that provides quantitative NMR spectral profiling effectively, with an accuracy on these biofluids that meets or exceeds the performance of trained experts.
We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of

  1. Accurate microfour-point probe sheet resistance measurements on small samples

    DEFF Research Database (Denmark)

    Thorsteinsson, Sune; Wang, Fei; Petersen, Dirch Hjorth


    We show that accurate sheet resistance measurements on small samples may be performed using microfour-point probes without applying correction factors. Using dual configuration measurements, the sheet resistance may be extracted with high accuracy when the microfour-point probes are in proximity of a mirror plane on small samples with dimensions of a few times the probe pitch. We calculate theoretically the size of the "sweet spot," where sufficiently accurate sheet resistances result, and show that even for very small samples it is feasible to do correction-free extraction of the sheet resistance with sufficient accuracy. As an example, the sheet resistance of a 40 µm (50 µm) square sample may be characterized with an accuracy of 0.3% (0.1%) using a 10 µm pitch microfour-point probe and assuming a probe alignment accuracy of ±2.5 µm. ©2009 American Institute of Physics
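Dual-configuration extraction of this kind rests on a van der Pauw-type identity between the two configuration resistances. A minimal sketch, assuming an infinite sheet and the Rymaszewski relation exp(2πR_A/R_s) = exp(2πR_B/R_s) + 1 (the solver and the synthetic numbers are illustrative, not the paper's data):

```python
# Hedged sketch: extracting sheet resistance from dual-configuration
# four-point probe measurements. On an infinite sheet the two configuration
# resistances obey the Rymaszewski relation
#     exp(2*pi*R_A/R_s) = exp(2*pi*R_B/R_s) + 1,
# independent of in-line probe position errors.
import math

def sheet_resistance(r_a, r_b, lo=1e-3, hi=1e6):
    """Solve the dual-configuration relation for R_s by bisection (r_a > r_b)."""
    def f(rs):
        x, y = 2 * math.pi * r_a / rs, 2 * math.pi * r_b / rs
        if x > 700.0:              # avoid overflow; f > 0 for very small rs
            return float("inf")
        return math.exp(x) - math.exp(y) - 1.0
    for _ in range(200):           # bisect: f > 0 near lo, f < 0 near hi
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Synthetic check: for R_s = 100 ohm/sq and equidistant collinear probes on
# an infinite sheet, R_A = (R_s/pi)*ln 2 and R_B = (R_s/(2*pi))*ln 3.
rs_true = 100.0
r_a = rs_true / math.pi * math.log(2)
r_b = rs_true / (2 * math.pi) * math.log(3)
print(round(sheet_resistance(r_a, r_b), 6))  # 100.0
```

On a finite sample near a mirror plane the configuration resistances differ from the infinite-sheet values, which is exactly where the paper's "sweet spot" analysis applies; this sketch only shows why the dual-configuration combination cancels probe position errors.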

  2. Accurate Astrometry and Photometry of Saturated and Coronagraphic Point Spread Functions

    CERN Document Server

    Marois, C; Lafrenière, D


    Accurate astrometry and photometry of saturated and coronagraphic point spread functions (PSFs) are fundamental to both ground- and space-based high contrast imaging projects. For ground-based adaptive optics imaging, differential atmospheric refraction and flexure introduce a small drift of the PSF with time, and seeing and sky transmission variations modify the PSF flux distribution. For space-based imaging, vibrations, thermal fluctuations and pointing jitters can modify the PSF core position and flux. These effects need to be corrected to properly combine the images and obtain optimal signal-to-noise ratios, accurate relative astrometry and photometry of detected objects as well as precise detection limits. Usually, one can easily correct for these effects by using the PSF core, but this is impossible when high dynamic range observing techniques are used, like coronagraphy with a non-transmissive occulting mask, or if the stellar PSF core is saturated. We present a new technique that can solve these issues...

  3. A statistical method for assessing peptide identification confidence in accurate mass and time tag proteomics. (United States)

    Stanley, Jeffrey R; Adkins, Joshua N; Slysz, Gordon W; Monroe, Matthew E; Purvine, Samuel O; Karpievitch, Yuliya V; Anderson, Gordon A; Smith, Richard D; Dabney, Alan R


    Current algorithms for quantifying peptide identification confidence in the accurate mass and time (AMT) tag approach assume that the AMT tags themselves have been correctly identified. However, there is uncertainty in the identification of AMT tags, because this is based on matching LC-MS/MS fragmentation spectra to peptide sequences. In this paper, we incorporate confidence measures for the AMT tag identifications into the calculation of probabilities for correct matches to an AMT tag database, resulting in a more accurate overall measure of identification confidence for the AMT tag approach. The method is referred to as Statistical Tools for AMT Tag Confidence (STAC). STAC additionally provides a uniqueness probability (UP) to help distinguish between multiple matches to an AMT tag, and a method to calculate an overall false discovery rate (FDR). STAC is freely available for download as both a command line and a Windows graphical application.
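STAC's specifics are not reproduced here, but the general idea of turning per-match confidence probabilities into an overall FDR can be sketched as follows; the simple posterior-averaging estimator and the scores are illustrative assumptions, not STAC's actual model.

```python
# Hedged sketch: estimating an overall false discovery rate from per-match
# confidence probabilities, in the spirit of probability-based filtering.
def fdr_at_threshold(probs, threshold):
    """Estimated FDR among matches whose probability >= threshold."""
    accepted = [p for p in probs if p >= threshold]
    if not accepted:
        return 0.0
    # expected fraction of incorrect matches among those accepted
    return sum(1.0 - p for p in accepted) / len(accepted)

probs = [0.99, 0.97, 0.95, 0.80, 0.60, 0.30]
print(round(fdr_at_threshold(probs, 0.90), 3))  # 0.03
```

Sweeping the threshold trades off the number of accepted identifications against the estimated FDR, which is how such scores are typically used in practice.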

  4. Arthroscopically assisted Latarjet procedure: A new surgical approach for accurate coracoid graft placement and compression


    Ettore Taverna; Henri Ufenast; Laura Broffoni; Guido Garavaglia


    The Latarjet procedure is a confirmed method for the treatment of shoulder instability in the presence of bone loss. It is a challenging procedure for which a key point is the correct placement of the coracoid graft onto the glenoid neck. Here we present our technique for an arthroscopically assisted Latarjet procedure with a new drill guide, permitting an accurate and reproducible positioning of the coracoid graft, with optimal compression of the graft onto the glenoid neck due to the perfect...

  5. A stochastic model of kinetochore–microtubule attachment accurately describes fission yeast chromosome segregation


    Gay, Guillaume; Courtheoux, Thibault; Reyes, Céline; Tournier, Sylvie; Gachet, Yannick


    In fission yeast, erroneous attachments of spindle microtubules to kinetochores are frequent in early mitosis. Most are corrected before anaphase onset by a mechanism involving the protein kinase Aurora B, which destabilizes kinetochore microtubules (ktMTs) in the absence of tension between sister chromatids. In this paper, we describe a minimal mathematical model of fission yeast chromosome segregation based on the stochastic attachment and detachment of ktMTs. The model accurately reproduce...

  6. Airborne experiment results for spaceborne atmospheric synchronous correction system (United States)

    Cui, Wenyu; Yi, Weining; Du, Lili; Liu, Xiao


    The image quality of optical remote sensing satellites is affected by the atmosphere, so images need to be corrected. Due to the spatial and temporal variability of atmospheric conditions, correction using synchronously measured atmospheric parameters can effectively improve remote sensing image quality. For this reason, a small, lightweight spaceborne instrument, the atmospheric synchronous correction device (airborne prototype), was developed by AIOFM of CAS (Anhui Institute of Optics and Fine Mechanics, Chinese Academy of Sciences). With this instrument, whose detection mode provides timing synchronization and spatial coverage, atmospheric parameters consistent in time and space with the images to be corrected can be obtained, and the correction is then achieved with a radiative transfer model. To verify the technical process and effectiveness of the spaceborne atmospheric correction system, a first airborne experiment was designed and completed. The experiment used a "satellite-airborne-ground" synchronous measurement method. A high resolution (0.4 m) camera and the atmospheric correction device were mounted on an aircraft, which photographed the ground simultaneously with the satellite observing overhead. Aerosol optical depth (AOD) and columnar water vapor (CWV) in the imaged area were also acquired and used for the atmospheric correction of the satellite and aerial images. Experimental results show that correcting aviation and satellite images with the AOD and CWV retrieved from the device's data improves image definition and contrast by more than 30% and more than doubles the MTF, demonstrating that atmospheric correction of satellite images using data from the spaceborne atmospheric synchronous correction device is accurate and effective.

  7. Toward Accurate and Quantitative Comparative Metagenomics (United States)

    Nayfach, Stephen; Pollard, Katherine S.


    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  8. Apparatus for accurately measuring high temperatures (United States)

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of airborne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  9. Accurate renormalization group analyses in neutrino sector

    Energy Technology Data Exchange (ETDEWEB)

    Haba, Naoyuki [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Kaneta, Kunio [Kavli IPMU (WPI), The University of Tokyo, Kashiwa, Chiba 277-8568 (Japan); Takahashi, Ryo [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Yamaguchi, Yuya [Department of Physics, Faculty of Science, Hokkaido University, Sapporo 060-0810 (Japan)


    We investigate accurate renormalization group analyses in the neutrino sector between the ν-oscillation and seesaw energy scales. We consider the decoupling effects of the top quark and the Higgs boson on the renormalization group equations of the light neutrino mass matrix. Since the decoupling effects arise at the standard model scale and are independent of high energy physics, our method can be applied to essentially any model beyond the standard model. We find that the decoupling effects of the Higgs boson are negligible, while those of the top quark are not. In particular, the decoupling effects of the top quark affect the neutrino mass eigenvalues, which are important for analyzing predictions such as mass squared differences and neutrinoless double beta decay in an underlying theory at a high energy scale.

  10. Accurate Weather Forecasting for Radio Astronomy (United States)

    Maddalena, Ronald J.


    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing, where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews ( rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.

  11. Correction, improvement and model verification of CARE 3, version 3 (United States)

    Rose, D. M.; Manke, J. W.; Altschul, R. E.; Nelson, D. L.


    An independent verification of the CARE 3 mathematical model and computer code was conducted and reported in NASA Contractor Report 166096, 'Review and Verification of CARE 3 Mathematical Model and Code: Interim Report.' The study uncovered some implementation errors that were corrected and are reported in this document. The corrected CARE 3 program is called version 4. The document 'Correction, Improvement, and Model Verification of CARE 3, Version 3' was written in April 1984. It is being published now because it has been determined to contain a more accurate representation of CARE 3 than the preceding document of April 1983. This edition supersedes NASA-CR-166122, entitled 'Correction and Improvement of CARE 3, Version 3,' April 1983.

  12. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system (United States)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi


    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case. Because travelers prefer the route reported to be in the best condition, and delayed information reflects past rather than current traffic conditions, travelers make wrong routing decisions, causing a decrease in capacity, an increase in oscillations, and a deviation of the system from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes have equal probability of being chosen. Bounded rationality is shown to improve efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
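A minimal sketch of the boundedly rational choice rule described above, with illustrative names (the paper's exact feedback and update dynamics are not reproduced here):

```python
import random

def choose_route(travel_times, br_threshold):
    """Boundedly rational route choice between two routes.

    If the (possibly delayed) reported travel-time difference falls
    within the threshold BR, the traveler is indifferent and picks at
    random; otherwise the reportedly faster route is chosen.
    Illustrative sketch only; names are not from the paper."""
    t1, t2 = travel_times
    if abs(t1 - t2) <= br_threshold:
        return random.randint(0, 1)  # indifferent: equal probability
    return 0 if t1 < t2 else 1
```

When the reported difference falls inside BR, travelers stop chasing stale "best route" information, which is the mechanism that damps the oscillations described in the abstract.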

  13. Analytical model for relativistic corrections to the nuclear magnetic shielding constant in atoms

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Rodolfo H. [Facultad de Ciencias Exactas, Universidad Nacional del Nordeste, Avenida Libertad 5500 (3400), Corrientes (Argentina)]; Gomez, Sergio S. [Facultad de Ciencias Exactas, Universidad Nacional del Nordeste, Avenida Libertad 5500 (3400), Corrientes (Argentina)]


    We present a simple analytical model for calculating and rationalizing the main relativistic corrections to the nuclear magnetic shielding constant in atoms. It provides good estimates for those corrections and their trends, in reasonable agreement with accurate four-component calculations and perturbation methods. The origin of the effects in deep core atomic orbitals is manifestly shown.

  14. Accurate Completion of Medical Report on Diagnosing Death. (United States)

    Savić, Slobodan; Alempijević, Djordje; Andjelić, Sladjana


    Diagnosing death and issuing a Death Diagnosing Form (DDF) is an activity that carries a great deal of public responsibility for medical professionals of the Emergency Medical Services (EMS) and is perpetually exposed to the scrutiny of the general public. Diagnosing death is necessary to confirm true death, to exclude apparent death, and consequently to avoid burying a person alive, i.e., someone only apparently dead. These expert-methodological guidelines, based on the most up-to-date medical evidence, have the goal of helping EMS physicians accurately fill out a medical report on diagnosing death. If the outcome of applied cardiopulmonary resuscitation measures is negative, or when the person is found dead, the physician is under obligation to diagnose death and correctly fill out the DDF. It is also recommended to perform electrocardiography (EKG) and record asystole in at least two leads. In the process of diagnostics and treatment, it is a moral obligation of each Belgrade EMS physician to apply all available achievements and knowledge of modern medicine acquired from extensive international studies, which have indeed been the major theoretical basis for the creation of these expert-methodological guidelines. Those acting differently do so in accordance with their conscience and risk professional and even criminal sanctions.

  15. An accurate {delta}f method for neoclassical transport calculation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, W.X.; Nakajima, N.; Murakami, S.; Okamoto, M. [National Inst. for Fusion Science, Toki, Gifu (Japan)


    A {delta}f method, solving the drift kinetic equation, for neoclassical transport calculation is presented in detail. It is demonstrated that valid results essentially rely on the correct evaluation of the marker density g in the weight calculation. A general and accurate weighting scheme is developed that, unlike previous schemes, does not use an assumed g in the weight equation for advancing particle weights. This scheme employs an additional weight function to solve g directly from its kinetic equation using the idea of the {delta}f method. Therefore, the severe constraint that the real marker distribution must remain consistent with the initially assumed g during a simulation is relaxed. An improved like-particle collision scheme is also presented. By compensating for the momentum, energy, and particle losses arising from numerical errors, conservation of all three quantities during collisions is greatly improved. Ion neoclassical transport due to self-collisions is examined for the finite-banana case as well as the zero-banana limit. A solution with zero particle flux and zero energy flux (in the case of no temperature gradient) over the whole poloidal section is obtained. With the improvements in both the like-particle collision scheme and the weighting scheme, the {delta}f simulation shows significantly upgraded performance for neoclassical transport studies. (author)

  16. Accurate measurement of RF exposure from emerging wireless communication systems (United States)

    Letertre, Thierry; Monebhurrun, Vikass; Toffano, Zeno


    Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are subjected to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation, but with the same root-mean-square (RMS) power. The two probes do not provide sufficiently accurate results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), or for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with the emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known. In this case the measurement errors are shown to be systematic, and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.
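The role of duty cycle and crest factor can be made concrete with a toy on/off envelope model; this is a generic textbook relation, not the paper's measurement protocol:

```python
import math

def pulsed_rms(peak, duty_cycle):
    """RMS of an on/off pulsed carrier envelope: rms = peak * sqrt(D),
    where D is the fraction of time the signal is 'on'."""
    return peak * math.sqrt(duty_cycle)

def crest_factor(duty_cycle):
    """Crest factor = peak / rms = 1 / sqrt(D) for the same toy model;
    it is independent of the absolute peak level."""
    return 1.0 / math.sqrt(duty_cycle)
```

Two signals can share the same RMS power while differing strongly in crest factor (e.g. D = 1 versus D = 0.25), which is exactly the situation where a peak-responding probe and a true-RMS detector disagree.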

  17. Study of accurate volume measurement system for plutonium nitrate solution

    Energy Technology Data Exchange (ETDEWEB)

    Hosoma, T. [Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan). Tokai Works


    It is important for effective safeguarding of nuclear materials to establish a technique for accurate volume measurement of plutonium nitrate solution in an accountancy tank. The volume of the solution can be estimated from two differential pressures between three dip-tubes, through which air is purged by a compressor. One of the differential pressures corresponds to the density of the solution, and the other corresponds to the surface level of the solution in the tank. The measurement of the differential pressure contains many sources of error, such as the precision of the pressure transducer, fluctuation of the back-pressure, generation of bubbles at the tips of the dip-tubes, non-uniformity of the temperature and density of the solution, pressure drop in the dip-tube, and so on. The various excess pressures in the volume measurement are discussed and corrected by a reasonable method. A high-precision differential pressure measurement system is developed with a quartz-oscillation-type transducer that converts a differential pressure to a digital signal. The developed system is used for inspection by the government and the IAEA. (M. Suetake)
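The basic dip-tube relations behind the two differential pressures can be sketched with idealized hydrostatics; the paper's contribution is precisely the correction of the excess-pressure terms this sketch ignores:

```python
G = 9.80665  # standard gravity, m/s^2

def solution_density(dp_density_pa, tube_separation_m):
    """Density (kg/m^3) from the differential pressure between the two
    submerged dip-tubes a known vertical distance apart:
    dP = rho * g * dz  =>  rho = dP / (g * dz)."""
    return dp_density_pa / (G * tube_separation_m)

def solution_level(dp_level_pa, density_kg_m3):
    """Liquid level (m) above the lower dip-tube from the level
    differential pressure: dP = rho * g * h  =>  h = dP / (rho * g).
    Volume then follows from the tank's calibration curve."""
    return dp_level_pa / (density_kg_m3 * G)
```

In practice each measured pressure must first be corrected for the bubble, back-pressure, and tube-drop effects listed in the abstract before these relations are applied.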

  18. A Distributed Weighted Voting Approach for Accurate Eye Center Estimation

    Directory of Open Access Journals (Sweden)

    Gagandeep Singh


    This paper proposes a novel approach for accurate estimation of the eye center in face images. A distributed voting-based approach, in which every pixel votes, is adopted to generate potential eye center candidates. The votes are distributed over a subset of pixels which lie in the direction opposite to the gradient direction, and the weightage of the votes is distributed according to a novel mechanism. First, the image is normalized to eliminate illumination variations and its edge map is generated using the Canny edge detector. Distributed voting is applied on the edge image to generate different eye center candidates. Morphological closing and local maxima search are used to reduce the number of candidates. A classifier based on spatial and intensity information is used to choose the correct candidates for the locations of the eye center. The proposed approach was tested on the BioID face database and resulted in a better iris detection rate than the state-of-the-art. The proposed approach is robust against illumination variation, small pose variations, presence of eyeglasses and partial occlusion of the eyes. Defence Science Journal, 2013, 63(3), pp. 292-297, DOI:

  19. Accurate Evaluation of Expected Shortfall for Linear Portfolios with Elliptically Distributed Risk Factors

    Directory of Open Access Journals (Sweden)

    Dobrislav Dobrev


    We provide an accurate closed-form expression for the expected shortfall of linear portfolios with elliptically distributed risk factors. Our results aim to correct inaccuracies that originate in Kamdem (2005) and are present also in at least thirty other papers referencing it, including the recent survey by Nadarajah et al. (2014) on estimation methods for expected shortfall. In particular, we show that the correction we provide in the popular multivariate Student t setting eliminates understatement of expected shortfall by a factor varying from at least four to more than 100 across different tail quantiles and degrees of freedom. As such, the resulting economic impact in financial risk management applications could be significant. We further correct such errors encountered also in closely related results in Kamdem (2007, 2009) for mixtures of elliptical distributions. More generally, our findings point to the extra scrutiny required when deploying new methods for expected shortfall estimation in practice.
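For the univariate standard Student-t case the closed-form expected shortfall can be written down and cross-checked numerically. This is the standard textbook expression, shown only to illustrate the kind of formula being corrected, not a reproduction of the paper's multivariate results:

```python
import math

def t_pdf(x, nu):
    """Standard Student-t probability density with nu degrees of freedom."""
    c = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    return c * (1 + x * x / nu) ** (-(nu + 1) / 2)

def _tail(f, q, hi=60.0, n=100000):
    """Crude midpoint-rule tail integral, used only for the self-check."""
    h = (hi - q) / n
    return h * sum(f(q + (i + 0.5) * h) for i in range(n))

def es_closed_form(q, nu):
    """Expected shortfall E[X | X > q] for the standard Student-t:
    ES = f(q) * (nu + q^2) / ((nu - 1) * alpha),  alpha = P(X > q).
    Location/scale versions follow by affine transformation."""
    alpha = _tail(lambda x: t_pdf(x, nu), q)
    return t_pdf(q, nu) * (nu + q * q) / ((nu - 1) * alpha)

def es_numeric(q, nu):
    """Direct numerical ES for cross-checking the closed form."""
    alpha = _tail(lambda x: t_pdf(x, nu), q)
    return _tail(lambda x: x * t_pdf(x, nu), q) / alpha
```

Cross-checking the analytic expression against direct integration is the kind of sanity test that would have caught the understatement the paper documents.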

  20. Real-time lens distortion correction: speed, accuracy and efficiency (United States)

    Bax, Michael R.; Shahidi, Ramin


    Optical lens systems suffer from nonlinear geometrical distortion. Optical imaging applications such as image-enhanced endoscopy and image-based bronchoscope tracking require correction of this distortion for accurate localization, tracking, registration, and measurement of image features. Real-time capability is desirable for interactive systems and live video. The use of a texture-mapping graphics accelerator, which is standard hardware on current motherboard chipsets and add-in video graphics cards, to perform distortion correction is proposed. Mesh generation for image tessellation, an error analysis, and performance results are presented. It is shown that distortion correction using commodity graphics hardware is substantially faster than using the main processor and can be performed at video frame rates (faster than 30 frames per second), and that the polar-based method of mesh generation proposed here is more accurate than a conventional grid-based approach. Using graphics hardware to perform distortion correction is not only fast and accurate but also efficient as it frees the main processor for other tasks, which is an important issue in some real-time applications.
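The mesh-based correction can be sketched as follows: precompute a mesh whose vertices pair an undistorted screen position with the texture coordinate at which the distorted image must be sampled, and let the graphics hardware interpolate in between. The radial model and coefficient below are illustrative, not the paper's calibration:

```python
import math

def distort_radius(r, k1=-0.2):
    """Simple one-parameter radial distortion model: r_d = r * (1 + k1 * r^2).
    k1 and the model are illustrative placeholders for a real calibration."""
    return r * (1 + k1 * r * r)

def polar_mesh(n_rings=8, n_spokes=16, k1=-0.2):
    """Vertices of a polar mesh over the unit disk. Each vertex pairs an
    undistorted screen position with the texture coordinate where the
    distorted source image must be sampled; texture-mapping hardware
    interpolates between vertices, performing the correction per frame."""
    verts = []
    for i in range(1, n_rings + 1):
        r = i / n_rings
        rd = distort_radius(r, k1)
        for j in range(n_spokes):
            th = 2 * math.pi * j / n_spokes
            verts.append(((r * math.cos(th), r * math.sin(th)),     # screen
                          (rd * math.cos(th), rd * math.sin(th))))  # texture
    return verts
```

Tessellating in polar coordinates concentrates mesh resolution along the radius, where a radial distortion model actually varies, which is the intuition behind the paper's finding that polar meshes beat grid meshes for the same vertex budget.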

  1. Exchange-Hole Dipole Dispersion Model for Accurate Energy Ranking in Molecular Crystal Structure Prediction. (United States)

    Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R


    Accurate energy ranking is a key facet to the problem of first-principles crystal-structure prediction (CSP) of molecular crystals. This work presents a systematic assessment of B86bPBE-XDM, a semilocal density functional combined with the exchange-hole dipole moment (XDM) dispersion model, for energy ranking using 14 compounds from the first five CSP blind tests. Specifically, the set of crystals studied comprises 11 rigid, planar compounds and 3 co-crystals. The experimental structure was correctly identified as the lowest in lattice energy for 12 of the 14 total crystals. One of the exceptions is 4-hydroxythiophene-2-carbonitrile, for which the experimental structure was correctly identified once a quasi-harmonic estimate of the vibrational free-energy contribution was included, evidencing the occasional importance of thermal corrections for accurate energy ranking. The other exception is an organic salt, where charge-transfer error (also called delocalization error) is expected to cause the base density functional to be unreliable. Provided the choice of base density functional is appropriate and an estimate of temperature effects is used, XDM-corrected density-functional theory is highly reliable for the energetic ranking of competing crystal structures.

  2. Food systems in correctional settings

    DEFF Research Database (Denmark)

    Smoyer, Amy; Kjær Minke, Linda

    Food is a central component of life in correctional institutions and plays a critical role in the physical and mental health of incarcerated people and the construction of prisoners' identities and relationships. An understanding of the role of food in correctional settings and the effective management of food systems may improve outcomes for incarcerated people and help correctional administrators to maximize their health and safety. This report summarizes existing research on food systems in correctional settings and provides examples of food programmes in prison and remand facilities, including a case study of food-related innovation in the Danish correctional system. It offers specific conclusions for policy-makers, administrators of correctional institutions and prison-food-service professionals, and makes proposals for future research.

  3. Gravitational Correction to Vacuum Polarization

    CERN Document Server

    Jentschura, U D


    We consider the gravitational correction to (electronic) vacuum polarization in the presence of a gravitational background field. The Dirac propagators for the virtual fermions are modified to include the leading gravitational correction (potential term) which corresponds to a coordinate-dependent fermion mass. The mass term is assumed to be uniform over a length scale commensurate with the virtual electron-positron pair. The on-mass shell renormalization condition ensures that the gravitational correction vanishes on the mass shell of the photon, i.e., the speed of light is unaffected by the quantum field theoretical loop correction, in full agreement with the equivalence principle. Nontrivial corrections are obtained for off-shell, virtual photons. We compare our findings to other works on generalized Lorentz transformations and combined quantum-electrodynamic gravitational corrections to the speed of light which have recently appeared in the literature.

  4. Nested Quantum Error Correction Codes

    CERN Document Server

    Wang, Zhuo; Fan, Hen; Vedral, Vlatko


    The theory of quantum error correction was established more than a decade ago as the primary tool for fighting decoherence in quantum information processing. Although great progress has already been made in this field, few methods are available for constructing new quantum error correction codes from old codes. Here we exhibit a simple and general method to construct new quantum error correction codes by nesting certain quantum codes together. The problem of finding long quantum error correction codes is reduced to that of searching for several short quantum codes with certain properties. Our method works for codes of all lengths and distances, and is quite efficient for constructing optimal or near-optimal codes. The two main known methods for constructing new codes from old codes in quantum error-correction theory, concatenation and pasting, can be understood in the framework of nested quantum error correction codes.

  5. Asymptotic expansion based equation of state for hard-disk fluids offering accurate virial coefficients

    CERN Document Server

    Tian, Jianxiang; Mulero, A


    Although more than 30 analytical expressions for the equation of state of hard-disk fluids have been proposed in the literature, none of them is capable of reproducing the currently accepted numeric or estimated values for the first eighteen virial coefficients. Using the asymptotic expansion method, extended to the first ten virial coefficients for hard-disk fluids, fifty-seven new expressions for the equation of state have been studied. Of these, a new equation of state is selected which accurately reproduces all of the first eighteen virial coefficients. Comparisons of the compressibility factor with computer simulations show that this new equation is as accurate as other similar expressions with the same number of parameters. Finally, the location of the poles of the 57 new equations shows that there are some particular configurations which could give both the accurate virial coefficients and the correct closest packing fraction in the future when higher virial coefficients than the t...

  6. Processor register error correction management (United States)

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.


    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
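A minimal sketch of the table-building idea, with hypothetical register names and table format (the patent-style text above does not specify either): the compiler marks the logical registers whose corruption would be most damaging, and the table tells the processor which ones to shadow with a duplicate.

```python
# Hypothetical set of sensitive logical registers, as would be chosen
# by compiler analysis of the generated executable code.
SENSITIVE = {"r3", "r7"}

def build_ec_table(registers):
    """Build an error-correction table mapping each sensitive logical
    register to a duplicate ('shadow') register name. The processor
    would consult this table to know which registers to duplicate."""
    return {r: r + "_dup" for r in registers if r in SENSITIVE}
```

The table lives in memory accessible to the processor, which keeps the duplicate in sync and can use it to detect or repair a corrupted sensitive register.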

  7. Cool Cluster Correctly Correlated

    Energy Technology Data Exchange (ETDEWEB)

    Varganov, Sergey Aleksandrovich [Iowa State Univ., Ames, IA (United States)


    Atomic clusters are unique objects, which occupy an intermediate position between atoms and condensed matter systems. For a long time it was thought that physical and chemical properties of atomic clusters change monotonically with increasing size of the cluster, from a single atom to a condensed matter system. However, recently it has become clear that many properties of atomic clusters can change drastically with the size of the clusters. Because physical and chemical properties of clusters can be adjusted simply by changing the cluster's size, different applications of atomic clusters have been proposed. One example is the catalytic activity of clusters of specific sizes in different chemical reactions. Another example is a potential application of atomic clusters in microelectronics, where their band gaps can be adjusted by simply changing cluster sizes. In recent years significant advances in experimental techniques have allowed one to synthesize and study atomic clusters of specified sizes. However, the interpretation of the results is often difficult, and theoretical methods are frequently used to help in the interpretation of complex experimental data. Most of the theoretical approaches have been based on empirical or semiempirical methods. These methods allow one to study large and small clusters using the same approximations. However, since empirical and semiempirical methods rely on simple models with many parameters, it is often difficult to estimate the quantitative and even qualitative accuracy of the results. On the other hand, because of significant advances in quantum chemical methods and computer capabilities, it is now possible to do high quality ab-initio calculations not only on systems of a few atoms but on clusters of practical interest as well. In addition to accurate results for specific clusters, such methods can be used for benchmarking of different empirical and semiempirical approaches. The atomic clusters studied in this work contain from a few atoms

  8. Open quantum systems and error correction (United States)

    Shabani Barzegar, Alireza

    Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems which are designed for this purpose suffer from harmful interactions with their surrounding environment or from inaccuracy in control forces. Engineering different methods to combat errors in quantum devices is highly demanding. In this thesis, I focus on realistic formulations of quantum error correction methods. A realistic formulation is one that incorporates experimental challenges. This thesis is presented in two sections: open quantum systems and quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory. It is essential to first study a noise process and then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on the geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The quantum error correction section is presented in chapters 4, 5, 6 and 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In Chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). 
Chapter 6 is devoted to a theory of quantum error correction (QEC

  9. Accurate, Meshless Methods for Magneto-Hydrodynamics

    CERN Document Server

    Hopkins, Philip F


    Recently, we developed a pair of meshless finite-volume Lagrangian methods for hydrodynamics: the 'meshless finite mass' (MFM) and 'meshless finite volume' (MFV) methods. These capture advantages of both smoothed-particle hydrodynamics (SPH) and adaptive mesh-refinement (AMR) schemes. Here, we extend these to include ideal magneto-hydrodynamics (MHD). The MHD equations are second-order consistent and conservative. We augment these with a divergence-cleaning scheme, which maintains div*B~0 to high accuracy. We implement these in the code GIZMO, together with a state-of-the-art implementation of SPH MHD. In every one of a large suite of test problems, the new methods are competitive with moving-mesh and AMR schemes using constrained transport (CT) to ensure div*B=0. They are able to correctly capture the growth and structure of the magneto-rotational instability (MRI), MHD turbulence, and the launching of magnetic jets, in some cases converging more rapidly than AMR codes. Compared to SPH, the MFM/MFV methods e...

  10. Optimal arbitrarily accurate composite pulse sequences (United States)

    Low, Guang Hao; Yoder, Theodore


    Implementing a single qubit unitary is often hampered by imperfect control. Systematic amplitude errors ɛ, caused by incorrect duration or strength of a pulse, are an especially common problem. But a sequence of imperfect pulses can provide a better implementation of a desired operation than a single primitive pulse. We find optimal pulse sequences consisting of L primitive π or 2π rotations that suppress such errors to arbitrary order O(ɛ^n) on arbitrary initial states. Optimality is demonstrated by proving an L = O(n) lower bound and saturating it with L = 2n solutions. Closed-form solutions for arbitrary rotation angles are given for n = 1, 2, 3, 4. Perturbative solutions for any n are proven for small angles, while arbitrary-angle solutions are obtained by analytic continuation up to n = 12. The derivation proceeds by a novel algebraic and non-recursive approach, in which finding amplitude error correcting sequences can be reduced to solving polynomial equations.

  11. Fast and accurate exhaled breath ammonia measurement. (United States)

    Solga, Steven F; Mudalel, Matthew L; Spacek, Lisa A; Risby, Terence H


    This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz-enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real-time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations.

  12. Noninvasive hemoglobin monitoring: how accurate is enough? (United States)

    Rice, Mark J; Gravenstein, Nikolaus; Morey, Timothy E


    Evaluating the accuracy of medical devices has traditionally been a blend of statistical analyses, at times without contextualizing the clinical application. There have been a number of recent publications on the accuracy of a continuous noninvasive hemoglobin measurement device, the Masimo Radical-7 Pulse Co-oximeter, focusing on the traditional statistical metrics of bias and precision. In this review, which contains material presented at the Innovations and Applications of Monitoring Perfusion, Oxygenation, and Ventilation (IAMPOV) Symposium at Yale University in 2012, we critically investigated these metrics as applied to the new technology, exploring what is required of a noninvasive hemoglobin monitor and whether the conventional statistics adequately answer our questions about clinical accuracy. We discuss the glucose error grid, well known in the glucose monitoring literature, and describe an analogous version for hemoglobin monitoring. This hemoglobin error grid can be used to evaluate the required clinical accuracy (±g/dL) of a hemoglobin measurement device to provide more conclusive evidence on whether to transfuse an individual patient. The important decision to transfuse a patient usually requires both an accurate hemoglobin measurement and a physiologic reason to elect transfusion. It is our opinion that the published accuracy data of the Masimo Radical-7 is not good enough to make the transfusion decision.

  13. Accurate free energy calculation along optimized paths. (United States)

    Chen, Changjun; Xiao, Yi


    The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method in accurate free energy calculation.
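Thermodynamic integration itself can be illustrated on a toy system with a known answer; this 1-D harmonic example is only meant to show the ⟨dU/dλ⟩ integration over a path, not the dihedral-restraint path construction of the article:

```python
import math
import random

def thermo_integration(k0=1.0, k1=4.0, kT=1.0, n_lambda=21, n_samp=20000, seed=1):
    """Thermodynamic integration between two 1-D harmonic wells
    U_lam(x) = 0.5 * ((1 - lam) * k0 + lam * k1) * x^2, so that
    dU/dlam = 0.5 * (k1 - k0) * x^2 and dF = integral of <dU/dlam> dlam.
    Sampling a harmonic well at temperature kT is exactly Gaussian with
    variance kT / k_lam, so no molecular dynamics is needed here."""
    rng = random.Random(seed)
    lams = [i / (n_lambda - 1) for i in range(n_lambda)]
    means = []
    for lam in lams:
        k = (1 - lam) * k0 + lam * k1
        sigma = math.sqrt(kT / k)
        # Monte Carlo estimate of <x^2> at this lambda
        m = sum(rng.gauss(0.0, sigma) ** 2 for _ in range(n_samp)) / n_samp
        means.append(0.5 * (k1 - k0) * m)
    # trapezoid rule over lambda
    return sum((means[i] + means[i + 1]) / 2 * (lams[i + 1] - lams[i])
               for i in range(n_lambda - 1))
```

The analytic answer for this toy case is ΔF = ½ kT ln(k1/k0) ≈ 0.693 for k1/k0 = 4, so the estimate can be checked directly; for a peptide, the article's smooth optimized path plays the role that the exact λ-interpolation plays here.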

  14. Accurate fission data for nuclear safety

    CERN Document Server

    Solders, A; Jokinen, A; Kolhinen, V S; Lantz, M; Mattera, A; Penttila, H; Pomp, S; Rakopoulos, V; Rinta-Antila, S


    The Accurate fission data for nuclear safety (AlFONS) project aims at high-precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high-current light-ion cyclotron at the University of Jyvaskyla. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron-induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies of 1 - 30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons...

  15. Towards Accurate Modeling of Moving Contact Lines

    CERN Document Server

    Holmgren, Hanna


    A main challenge in numerical simulations of moving contact line problems is that the adherence, or no-slip, boundary condition leads to a non-integrable stress singularity at the contact line. In this report we perform the first steps in developing the macroscopic part of an accurate multiscale model for a moving contact line problem in two space dimensions. We assume that a micro model has been used to determine a relation between the contact angle and the contact line velocity. An intermediate region is introduced where an analytical expression for the velocity exists. This expression is used to implement boundary conditions for the moving contact line at a macroscopic scale, along a fictitious boundary located a small distance away from the physical boundary. Model problems where the shape of the interface is constant throughout the simulation are introduced. For these problems, experiments show that the errors in the resulting contact line velocities converge with the grid size $h$ at a rate of convergence $...

  16. Does a pneumotach accurately characterize voice function? (United States)

    Walters, Gage; Krane, Michael


    A study is presented which addresses how a pneumotach might adversely affect clinical measurements of voice function. A pneumotach is a device, typically a mask worn over the mouth, used to measure time-varying glottal volume flow. By measuring the time-varying difference in pressure across a known aerodynamic resistance element in the mask, the glottal volume flow waveform is estimated. Because it adds aerodynamic resistance to the vocal system, there is some concern that using a pneumotach may not accurately portray the behavior of the voice. To test this hypothesis, experiments were performed in a simplified airway model with the principal dimensions of an adult human upper airway. A compliant constriction, fabricated from silicone rubber, modeled the vocal folds. Variations of transglottal pressure, time-averaged volume flow, model vocal fold vibration amplitude, and radiated sound with subglottal pressure were measured, with and without the pneumotach in place, and differences noted. We acknowledge support of NIH Grant 2R01DC005642-10A1.
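    For an ideally linear resistance element, the flow estimate at the heart of a pneumotach reduces to Q = ΔP / R. A minimal sketch with illustrative numbers (not values from the study):

```python
def glottal_flow(delta_p_pa, resistance_pa_s_per_m3):
    """Estimate volume flow (m^3/s) from the pressure drop across the
    pneumotach's known linear aerodynamic resistance: Q = dP / R."""
    return delta_p_pa / resistance_pa_s_per_m3

# Hypothetical numbers: a 40 Pa drop across a 2.0e5 Pa*s/m^3 element.
q = glottal_flow(40.0, 2.0e5)   # 2.0e-4 m^3/s, i.e. 200 mL/s
```

Real pneumotachs are only approximately linear, which is one reason the added resistance may distort the measured waveform.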

  17. Accurate lineshape spectroscopy and the Boltzmann constant. (United States)

    Truong, G-W; Anstie, J D; May, E F; Stace, T M; Luiten, A N


    Spectroscopy has an illustrious history of delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, with applications ranging from trace materials detection to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate value for the excited-state (6P1/2) hyperfine splitting in Cs and reveal a breakdown of the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m. and an uncertainty of 71 p.p.m.
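    The link between a measured Doppler width and Boltzmann's constant can be illustrated with the standard Gaussian Doppler-broadening relation σ_ν = ν0·sqrt(k_B·T/(m·c²)); inverting it for k_B is the idea behind such a determination. The sketch below is a round-trip check against the CODATA value, not the paper's data:

```python
import math

C = 299_792_458.0                   # speed of light, m/s
M_CS = 132.905 * 1.660_539e-27      # mass of a Cs atom, kg

def boltzmann_from_doppler(sigma_nu, nu0, temperature):
    """Invert the Gaussian Doppler width sigma_nu = nu0*sqrt(kB*T/(m*c^2))
    of a thermal vapour line to recover Boltzmann's constant."""
    return M_CS * C**2 * (sigma_nu / nu0) ** 2 / temperature

# Round trip: synthesize a width from the CODATA kB, then invert it.
kb_true = 1.380_649e-23
nu0 = C / 894.59e-9                 # Cs D1 transition frequency, Hz
T = 296.0
sigma = nu0 * math.sqrt(kb_true * T / (M_CS * C**2))
kb_est = boltzmann_from_doppler(sigma, nu0, T)   # recovers kb_true
```

The experimental difficulty lies entirely in measuring σ_ν and T accurately enough, which is where the lineshape model beyond the Voigt profile matters.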

  18. Accurate upper body rehabilitation system using kinect. (United States)

    Sinha, Sanjana; Bhowmick, Brojeshwar; Chakravarty, Kingshuk; Sinha, Aniruddha; Das, Abhijit


    The growing importance of Kinect as a tool for clinical assessment and rehabilitation is due to its portability, low cost and markerless system for human motion capture. However, the accuracy of Kinect in measuring three-dimensional body joint center locations often fails to meet clinical standards when compared to marker-based motion capture systems such as Vicon. The length of the body segment connecting any two joints, measured as the distance between three-dimensional Kinect skeleton joint coordinates, has been observed to vary with time. The orientation of the line connecting adjoining Kinect skeletal coordinates has also been seen to differ from the actual orientation of the physical body segment. Hence we have proposed an optimization method that utilizes Kinect depth and RGB information to search for the joint center location that satisfies constraints on both body segment length and orientation. An experimental study has been carried out on ten healthy participants performing upper body range-of-motion exercises. The results show a 72% reduction in body segment length variance and a 2° improvement in range of motion (ROM) angle, enabling more accurate measurements for upper limb exercises.
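    The segment-length inconsistency that motivates the optimization is easy to quantify: compute the per-frame length of a segment from the raw skeleton and look at its variance. A toy sketch with hypothetical joint positions (not study data):

```python
import math

def segment_lengths(joints_a, joints_b):
    """Per-frame Euclidean length of the body segment joining two joints,
    each given as a sequence of (x, y, z) positions over time."""
    return [math.dist(a, b) for a, b in zip(joints_a, joints_b)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Hypothetical noisy Kinect frames: the shoulder-elbow length should be
# constant, but the raw skeleton makes it wobble from frame to frame.
shoulder = [(0.0, 1.4, 2.0)] * 4
elbow_raw = [(0.30, 1.4, 2.0), (0.33, 1.4, 2.0),
             (0.28, 1.4, 2.0), (0.31, 1.4, 2.0)]
lengths = segment_lengths(shoulder, elbow_raw)   # 0.30, 0.33, 0.28, 0.31 m
wobble = variance(lengths)                        # nonzero for raw data
```

The proposed method searches for joint centers that drive this variance toward zero while also respecting segment orientation.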

  19. Accurate thermoplasmonic simulation of metallic nanoparticles (United States)

    Yu, Da-Miao; Liu, Yan-Nan; Tian, Fa-Lin; Pan, Xiao-Min; Sheng, Xin-Qing


    Thermoplasmonics leads to enhanced heat generation due to the localized surface plasmon resonances. The measurement of heat generation is fundamentally a complicated task, which necessitates the development of theoretical simulation techniques. In this paper, an efficient and accurate numerical scheme is proposed for applications with complex metallic nanostructures. Light absorption and temperature increase are, respectively, obtained by solving the volume integral equation (VIE) and the steady-state heat diffusion equation through the method of moments (MoM). Previously, methods based on surface integral equations (SIEs) were utilized to obtain light absorption. However, computing light absorption from the equivalent current is as expensive as O(NsNv), where Ns and Nv, respectively, denote the numbers of surface and volumetric unknowns. Our approach reduces the cost to O(Nv) by using the VIE. The accuracy, efficiency and capability of the proposed scheme are validated by multiple simulations. The simulations show that our proposed method is more efficient than the approach based on SIEs under comparable accuracy, especially for cases where many incident excitations are of interest. The simulations also indicate that the temperature profile can be tuned by several factors, such as the geometric configuration of the array, the beam direction, and the light wavelength.
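    The volumetric heat source obtained from a VIE solution is, in discretized form, the standard Joule-heating sum P_abs = (ω·ε0·Im(ε_r)/2)·Σ|E_i|²·ΔV over the mesh cells, which is O(Nv) to evaluate. A sketch with hypothetical numbers (not from the paper):

```python
import math

EPS0 = 8.854_187_8128e-12   # vacuum permittivity, F/m

def absorbed_power(e_field_sq, im_eps_r, omega, cell_volume):
    """Discretized Joule-heating sum for a lossy dielectric:
    P_abs = (omega * eps0 * Im(eps_r) / 2) * sum_i |E_i|^2 * dV,
    evaluated directly from the volumetric field (O(Nv) work)."""
    return 0.5 * omega * EPS0 * im_eps_r * cell_volume * sum(e_field_sq)

# Illustrative, hypothetical inputs: 1000 cubic cells of 2 nm edge with a
# uniform |E|^2 of 1e10 V^2/m^2 under green-light illumination.
omega = 2 * math.pi * 299_792_458.0 / 532e-9
p = absorbed_power([1e10] * 1000, 2.0, omega, (2e-9) ** 3)
```

This heat source then drives the steady-state heat diffusion equation for the temperature increase.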

  20. Fast and Provably Accurate Bilateral Filtering. (United States)

    Chaudhury, Kunal N; Dabhade, Swapnil D


    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy.
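    For reference, the target that such fast algorithms approximate is the direct bilateral filter, which costs O(S) per sample. A minimal 1-D sketch of the definition (not the authors' fast algorithm):

```python
import math

def bilateral_1d(signal, sigma_s, sigma_r, radius):
    """Direct (O(S)-per-sample) bilateral filter of a 1-D signal: each
    output is a Gaussian-spatial, Gaussian-range weighted average."""
    out = []
    for i, fi in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
                 * math.exp(-((signal[j] - fi) ** 2) / (2 * sigma_r ** 2)))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# A step edge survives, unlike with a plain Gaussian blur: samples on the
# far side of the edge get a near-zero range weight.
step = [0.0] * 8 + [1.0] * 8
smoothed = bilateral_1d(step, sigma_s=2.0, sigma_r=0.1, radius=4)
```

The paper's contribution is replacing the inner loop's range kernel with an order-N expansion so the whole computation reduces to N+1 ordinary spatial filterings.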

  1. A self-interaction-free local hybrid functional: Accurate binding energies vis-à-vis accurate ionization potentials from Kohn-Sham eigenvalues

    CERN Document Server

    Schmidt, Tobias; Makmal, Adi; Kronik, Leeor; Kümmel, Stephan


    We present and test a new approximation for the exchange-correlation (xc) energy of Kohn-Sham density functional theory. It combines exact exchange with a compatible non-local correlation functional. The functional is by construction free of one-electron self-interaction, respects constraints derived from uniform coordinate scaling, and has the correct asymptotic behavior of the xc energy density. It contains one parameter that is not determined ab initio. We investigate whether it is possible to construct a functional that yields accurate binding energies and affords other advantages, specifically Kohn-Sham eigenvalues that reliably reflect ionization potentials. Tests for a set of atoms and small molecules show that within our local-hybrid form accurate binding energies can be achieved by proper optimization of the free parameter in our functional, along with an improvement in dissociation energy curves and in Kohn-Sham eigenvalues. However, the correspondence of the latter to experimental ionization potent...

  2. Development of a Drosophila cell-based error correction assay

    Directory of Open Access Journals (Sweden)

    Jeffrey D. Salemi


    Full Text Available Accurate transmission of the genome through cell division requires microtubules from opposing spindle poles to interact with protein super-structures called kinetochores that assemble on each sister chromatid. Most kinetochores establish erroneous attachments that are destabilized through a process called error correction. Failure to correct improper kinetochore-microtubule (kt-MT interactions before anaphase onset results in chromosomal instability (CIN, which has been implicated in tumorigenesis and tumor adaptation. Thus, it is important to characterize the molecular basis of error correction to better comprehend how CIN occurs and how it can be modulated. An error correction assay has been previously developed in cultured mammalian cells in which incorrect kt-MT attachments are created through the induction of monopolar spindle assembly via chemical inhibition of kinesin-5. Error correction is then monitored following inhibitor wash out. Implementing the error correction assay in Drosophila melanogaster S2 cells would be valuable because kt-MT attachments are easily visualized and the cells are highly amenable to RNAi and high-throughput screening. However, Drosophila kinesin-5 (Klp61F is unaffected by available small molecule inhibitors. To overcome this limitation, we have rendered S2 cells susceptible to kinesin-5 inhibitors by functionally replacing Klp61F with human kinesin-5 (Eg5. Eg5 expression rescued the assembly of monopolar spindles typically caused by Klp61F depletion. Eg5-mediated bipoles collapsed into monopoles due to the activity of kinesin-14 (Ncd when treated with the kinesin-5 inhibitor S-trityl-L-cysteine (STLC. Furthermore, bipolar spindles reassembled and error correction was observed after STLC wash out. Importantly, error correction in Eg5-expressing S2 cells was dependent on the well-established error correction kinase Aurora B. 
This system provides a powerful new cell-based platform for studying error correction and CIN.

  3. Development of a Drosophila cell-based error correction assay. (United States)

    Salemi, Jeffrey D; McGilvray, Philip T; Maresca, Thomas J


    Accurate transmission of the genome through cell division requires microtubules from opposing spindle poles to interact with protein super-structures called kinetochores that assemble on each sister chromatid. Most kinetochores establish erroneous attachments that are destabilized through a process called error correction. Failure to correct improper kinetochore-microtubule (kt-MT) interactions before anaphase onset results in chromosomal instability (CIN), which has been implicated in tumorigenesis and tumor adaptation. Thus, it is important to characterize the molecular basis of error correction to better comprehend how CIN occurs and how it can be modulated. An error correction assay has been previously developed in cultured mammalian cells in which incorrect kt-MT attachments are created through the induction of monopolar spindle assembly via chemical inhibition of kinesin-5. Error correction is then monitored following inhibitor wash out. Implementing the error correction assay in Drosophila melanogaster S2 cells would be valuable because kt-MT attachments are easily visualized and the cells are highly amenable to RNAi and high-throughput screening. However, Drosophila kinesin-5 (Klp61F) is unaffected by available small molecule inhibitors. To overcome this limitation, we have rendered S2 cells susceptible to kinesin-5 inhibitors by functionally replacing Klp61F with human kinesin-5 (Eg5). Eg5 expression rescued the assembly of monopolar spindles typically caused by Klp61F depletion. Eg5-mediated bipoles collapsed into monopoles due, in part, to kinesin-14 (Ncd) activity when treated with the kinesin-5 inhibitor S-trityl-L-cysteine (STLC). Furthermore, bipolar spindles reassembled and error correction was observed after STLC wash out. Importantly, error correction in Eg5-expressing S2 cells was dependent on the well-established error correction kinase Aurora B. This system provides a powerful new cell-based platform for studying error correction and CIN.

  4. Learning-Based Topological Correction for Infant Cortical Surfaces (United States)

    Hao, Shijie; Li, Gang; Wang, Li; Meng, Yu


    Reconstruction of topologically correct and accurate cortical surfaces from infant MR images is of great importance in neuroimaging mapping of early brain development. However, due to rapid growth and ongoing myelination, infant MR images exhibit extremely low tissue contrast and dynamic appearance patterns, leading to many more topological errors (holes and handles) in the cortical surfaces derived from tissue segmentation results than in those derived from adult MR images, which typically have good tissue contrast. Existing methods for topological correction either rely on minimal-correction criteria or on ad hoc rules based on image intensity priors, and thus often result in erroneous corrections and large anatomical errors in reconstructed infant cortical surfaces. To address these issues, we propose to correct topological errors by learning information from anatomical references, i.e., manually corrected images. Specifically, in our method, we first locate candidate voxels of topologically defected regions by using a topology-preserving level set method. Then, by leveraging the rich information of the corresponding patches from reference images, we build region-specific dictionaries from the anatomical references and infer the correct labels of candidate voxels using sparse representation. Notably, we further integrate these two steps into an iterative framework to enable gradual correction of large topological errors, which frequently occur in infant images and cannot be completely corrected using one-shot sparse representation. Extensive experiments on infant cortical surfaces demonstrate that our method not only effectively corrects the topological defects, but also leads to better anatomical consistency, compared to the state-of-the-art methods.

  5. New orbit correction method uniting global and local orbit corrections (United States)

    Nakamura, N.; Takaki, H.; Sakai, H.; Satoh, M.; Harada, K.; Kamiya, Y.


    A new orbit correction method, called the eigenvector method with constraints (EVC), is proposed and formulated to unite global and local orbit corrections for ring accelerators, especially synchrotron radiation (SR) sources. The EVC can exactly correct the beam positions at arbitrarily selected ring positions such as light source points, simultaneously reducing closed orbit distortion (COD) around the whole ring. Computer simulations clearly demonstrate these features of the EVC for both the Super-SOR light source and the Advanced Light Source (ALS), which have typical structures of high-brilliance SR sources. In addition, the effects of errors in beam position monitor (BPM) reading and steering magnet setting on the orbit correction are analytically expressed and also compared with the computer simulations. Simulation results show that the EVC is very effective and useful for orbit correction and beam position stabilization in SR sources.
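    The core of uniting global and local correction is a constrained least-squares problem: minimize the residual orbit everywhere while forcing it to zero at selected monitors. A toy sketch of that KKT/Lagrange solve (not the EVC implementation; a random matrix stands in for a real response matrix):

```python
import numpy as np

def corrected_kicks(R, x, hard):
    """Steering-magnet kicks theta minimizing the global COD ||x + R@theta||
    while zeroing the orbit exactly at the monitors listed in `hard`
    (e.g. light-source points), via the KKT system of the constrained
    least-squares problem."""
    A, b = R, -x
    C, d = R[hard, :], -x[hard]
    n, m = A.shape[1], C.shape[0]
    K = np.block([[2 * A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([2 * A.T @ b, d])
    return np.linalg.solve(K, rhs)[:n]

rng = np.random.default_rng(0)
R = rng.normal(size=(10, 6))      # toy response matrix: 10 BPMs x 6 steerers
x = rng.normal(size=10)           # measured closed-orbit distortion
theta = corrected_kicks(R, x, hard=[2, 5])
residual = x + R @ theta          # exactly zero at BPMs 2 and 5
```

The EVC itself works in the eigenvector basis of the response matrix, but the hard-constraint-plus-global-minimization structure is the same.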

  6. PET measurements of cerebral metabolism corrected for CSF contributions

    Energy Technology Data Exchange (ETDEWEB)

    Chawluk, J.; Alavi, A.; Dann, R.; Kushner, M.J.; Hurtig, H.; Zimmerman, R.A.; Reivich, M.


    Thirty-three subjects have been studied with PET and anatomic imaging (proton-NMR and/or CT) in order to determine the effect of cerebral atrophy on calculations of metabolic rates. Subgroups of neurologic disease investigated include stroke, brain tumor, epilepsy, psychosis, and dementia. Anatomic images were digitized through a Vidicon camera and analyzed volumetrically. Relative areas for ventricles, sulci, and brain tissue were calculated. Preliminary analysis suggests that ventricular volumes as determined by NMR and CT are similar, while sulcal volumes are larger on NMR scans. Metabolic rates (18F-FDG) were calculated before and after correction for CSF spaces, with initial focus upon dementia and normal aging. Correction for atrophy led to a greater percentage increase in global metabolic rates in demented individuals (18.2 ± 5.3) than in elderly controls (8.3 ± 3.0, p < .05). A trend towards significantly lower glucose metabolism in demented subjects before CSF correction was not seen following correction for atrophy. These data suggest that volumetric analysis of NMR images may more accurately reflect the degree of cerebral atrophy, since NMR does not suffer from beam-hardening artifacts due to bone-parenchyma juxtapositions. Furthermore, appropriate correction for CSF spaces should be employed if current-resolution PET scanners are to accurately measure residual brain tissue metabolism in various pathological states.
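    The atrophy correction itself amounts to rescaling a measured rate by the fraction of the region that is actually brain tissue rather than CSF. A sketch of that idea with illustrative numbers (the study's exact procedure is not reproduced here):

```python
def atrophy_corrected_rate(measured_rate, brain_fraction):
    """Scale a PET metabolic rate by the fraction of the ROI volume that is
    brain tissue (1 - CSF fraction), so atrophied brains are not penalized
    for partial-volume dilution by metabolically inert CSF."""
    if not 0.0 < brain_fraction <= 1.0:
        raise ValueError("brain_fraction must lie in (0, 1]")
    return measured_rate / brain_fraction

# Illustrative: 20% of the intracranial volume is CSF (ventricles + sulci),
# so a measured global rate of 4.0 (arbitrary units) corrects to 5.0.
corrected = atrophy_corrected_rate(4.0, 0.80)
```

Because demented subjects have larger CSF fractions, this correction raises their rates more than those of controls, consistent with the percentage increases quoted above.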

  7. Towards Accurate Application Characterization for Exascale (APEX)

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)


    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia’s production users/developers.

  8. Optimizing cell arrays for accurate functional genomics

    Directory of Open Access Journals (Sweden)

    Fengler Sven


    Full Text Available Abstract Background Cellular responses emerge from a complex network of dynamic biochemical reactions. In order to investigate them, it is necessary to develop methods that allow perturbing a high number of gene products in a flexible and fast way. Cell arrays (CA) enable such experiments on microscope slides via reverse transfection of cellular colonies growing on spotted genetic material. In contrast to multi-well plates, CA are susceptible to contamination among neighboring spots, hindering accurate quantification in cell-based screening projects. Here we have developed a quality control protocol for quantifying and minimizing contamination in CA. Results We imaged checkered CA that express two distinct fluorescent proteins and segmented images into single cells to quantify the transfection efficiency and interspot contamination. Compared with standard procedures, we measured a 3-fold reduction of contaminants when arrays containing HeLa cells were washed shortly after cell seeding. We proved that nucleic acid uptake during cell seeding, rather than migration among neighboring spots, was the major source of contamination. Arrays of MCF7 cells developed without the washing step showed a 7-fold lower percentage of contaminant cells, demonstrating that contamination depends on specific cell properties. Conclusions Previously published methodological work has focused on achieving high transfection rates in densely packed CA. Here we focus on an equally important parameter: the interspot contamination. The presented quality control is essential for estimating the rate of contamination, a major source of false positives and negatives in current microscopy-based functional genomics screenings. We have demonstrated that a washing step after seeding enhances CA quality for HeLa cells but is not necessary for MCF7. The described method provides a way to find optimal seeding protocols for cell lines intended to be used for the first time in CA.

  9. Accurate paleointensities - the multi-method approach (United States)

    de Groot, Lennart


    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic time) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units, an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  10. Feature Referenced Error Correction Apparatus. (United States)

    A feature referenced error correction apparatus utilizing the multiple images of the interstage level image format to compensate for positional...images and by the generation of an error correction signal in response to the sub-frame registration errors. (Author)

  11. Precision Corrections to Fine Tuning in SUSY

    CERN Document Server

    Buckley, Matthew R; Shih, David


    Requiring that the contributions of supersymmetric particles to the Higgs mass are not highly tuned places upper limits on the masses of superpartners -- in particular the higgsino, stop, and gluino. We revisit the details of the tuning calculation and introduce a number of improvements, including RGE resummation, two-loop effects, a proper treatment of UV vs. IR masses, and threshold corrections. This improved calculation more accurately connects the tuning measure with the physical masses of the superpartners at LHC-accessible energies. After these refinements, the tuning bound on the stop is now also sensitive to the masses of the 1st and 2nd generation squarks, which limits how far these can be decoupled in Effective SUSY scenarios. We find that, for a fixed level of tuning, our bounds can allow for heavier gluinos and stops than previously considered. Despite this, the natural region of supersymmetry is under pressure from the LHC constraints, with high messenger scales particularly disfavored.

  12. Scattering Correction For Image Reconstruction In Flash Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Liangzhi; Wang, Mengqi; Wu, Hongchun; Liu, Zhouyu; Cheng, Yuxiong; Zhang, Hongbo [Xi' an Jiaotong Univ., Xi' an (China)


    Scattered photons cause blurring and distortions in flash radiography, significantly reducing the accuracy of image reconstruction. The effect of the scattered photons is taken into account, and an iterative subtraction of the scattered photons is proposed to correct the scattering effect for image restoration. To subtract the scattering contribution, the flux of scattered photons is estimated as the sum of two components: the singly scattered component is calculated accurately together with the uncollided flux along the characteristic ray, while the multiply scattered component is evaluated using correction coefficients pre-obtained from Monte Carlo simulations. The arbitrary-geometry pretreatment and ray tracing are carried out based on the customization of AutoCAD. With the above model, an Iterative Procedure for image restORation code, IPOR, is developed. Numerical results demonstrate that the IPOR code is much more accurate than the direct reconstruction solution without scattering correction and has very high computational efficiency.
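    The iterative subtraction can be illustrated as a fixed-point iteration: since measured = primary + scatter(primary), one can iterate primary <- measured - scatter(primary), which converges when the scatter operator is a contraction. The toy scatter model below is purely illustrative, not the IPOR physics:

```python
def deduct_scatter(measured, scatter_of, n_iter=20):
    """Toy fixed-point iteration for scatter removal: the measured image is
    primary + scatter(primary), so iterate p <- measured - scatter(p)."""
    primary = list(measured)
    for _ in range(n_iter):
        s = scatter_of(primary)
        primary = [m - si for m, si in zip(measured, s)]
    return primary

# Hypothetical scatter model: 30% of the mean primary leaks uniformly
# into every pixel (a contraction, so the iteration converges).
def scatter_of(p):
    leak = 0.3 * sum(p) / len(p)
    return [leak] * len(p)

true_primary = [1.0, 2.0, 3.0, 4.0]
measured = [t + s for t, s in zip(true_primary, scatter_of(true_primary))]
recovered = deduct_scatter(measured, scatter_of)   # close to true_primary
```

In IPOR the scatter estimate is the sum of an accurately ray-traced single-scatter term and a Monte Carlo-calibrated multiple-scatter term, but the subtraction loop has this structure.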

  13. On the accurate estimation of gap fraction during daytime with digital cover photography (United States)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.


    Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered computing accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The method computes gap fraction using a single unsaturated raw DCP image, which is corrected for scattering effects by canopies, and a sky image reconstructed from the raw-format image. To test the sensitivity of the derived gap fraction to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REVs from 0 to -5. The method showed little variation of gap fraction across different REVs in both dense and sparse canopies over a diverse range of solar zenith angles. The perforated-panel experiment, used to test the accuracy of the estimated gap fraction, confirmed that the method yields accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful for monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
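    At its core, gap fraction is the sky-pixel fraction of a canopy image; the paper's contribution is removing the subjective exposure and threshold choices. A deliberately naive sketch in which the threshold is an explicit assumption:

```python
def gap_fraction(pixels, sky_threshold=200):
    """Naive gap fraction of a (toy) grayscale canopy photo: the fraction
    of pixels at least as bright as a sky/canopy threshold. The threshold
    here is a hand-picked assumption, which is exactly the subjectivity
    the paper's method is designed to eliminate."""
    sky = sum(1 for p in pixels if p >= sky_threshold)
    return sky / len(pixels)

# Perforated-panel analogue: 25 bright "holes" in a 100-pixel dark panel.
img = [250] * 25 + [30] * 75
gf = gap_fraction(img)   # 0.25
```

The perforated-panel experiment works the same way: the true hole fraction is known by construction, so it benchmarks the estimated gap fraction.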

  14. Surface consistent finite frequency phase corrections (United States)

    Kimman, W. P.


    Static time-delay corrections are frequency independent and ignore velocity variations away from the assumed vertical ray path through the subsurface. There is therefore clear potential for improvement if the finite-frequency nature of wave propagation can be properly accounted for. Such a method is presented here, based on the Born approximation, the assumption of surface consistency, and the misfit of instantaneous phase. The concept of instantaneous phase lends itself very well to sweep-like signals, hence these are the focus of this study. Analytical sensitivity kernels are derived that accurately predict frequency-dependent phase shifts due to P-wave anomalies in the near surface. They are quick to compute and robust near the source and receivers. An additional correction is presented that re-introduces the nonlinear relation between model perturbation and phase delay, which becomes relevant for stronger velocity anomalies. The phase shift as a function of frequency is a slowly varying signal; its computation therefore does not require fine sampling, even for broad-band sweeps. The kernels reveal interesting features of the sensitivity of seismic arrivals to the near surface: small anomalies can have a relatively large impact, resulting from the medium-field term that is dominant near the source and receivers. Furthermore, even simple velocity anomalies can produce distinct frequency-dependent phase behaviour. Unlike statics, the predicted phase corrections are smooth in space. Verification with spectral element simulations shows an excellent match for the predicted phase shifts over the entire seismic frequency band. Applying the phase shift to the reference sweep corrects for wavelet distortion, making the technique akin to surface consistent deconvolution, even though no division in the spectral domain is involved.
As long as multiple scattering is mild, surface consistent finite frequency phase corrections outperform traditional statics for moderately large
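    The instantaneous phase on which this misfit is based can be computed from the analytic signal. A generic sketch (not the paper's code) using an FFT-based Hilbert transform, applied to a pair of sweeps:

```python
import numpy as np

def instantaneous_phase(x):
    """Unwrapped instantaneous phase of a real signal, via the analytic
    signal computed with an FFT-based Hilbert transform."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:          # keep DC and Nyquist, double positive freqs
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.unwrap(np.angle(np.fft.ifft(spec * h)))

# Phase misfit between a reference sweep and a slightly delayed arrival:
t = np.linspace(0.0, 1.0, 2048, endpoint=False)
sweep = np.sin(2 * np.pi * (5.0 * t + 10.0 * t**2))     # 5->25 Hz chirp
delayed = np.sin(2 * np.pi * (5.0 * (t - 0.01) + 10.0 * (t - 0.01) ** 2))
misfit = instantaneous_phase(sweep) - instantaneous_phase(delayed)
```

This per-sample phase difference is the slowly varying quantity that the sensitivity kernels predict as a function of frequency.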

  15. Updating quasar bolometric luminosity corrections

    CERN Document Server

    Runnoe, Jessie C; Shang, Zhaohui


    Bolometric corrections are used in quasar studies to quantify total energy output based on a measurement of a monochromatic luminosity. First, we enumerate and discuss the practical difficulties of determining such corrections; then we present bolometric luminosities between 1 μm and 8 keV rest frame, and corrections derived from the detailed spectral energy distributions of 63 bright quasars of low to moderate redshift (z = 0.03-1.4). Exploring several mathematical fittings, we provide practical bolometric corrections of the forms L_iso = ζ λL_λ and log(L_iso) = A + B log(λL_λ) for λ = 1450, 3000, and 5100 Å, where L_iso is the bolometric luminosity calculated under the assumption of isotropy. The significant scatter in the 5100 Å bolometric correction can be reduced by adding a first-order correction using the optical slope, α_λ,opt. We recommend an adjustment to the bolometric correction to account for viewing angle and the anisotropic emission expected fr...
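    The two fitted forms can be applied as follows; the coefficients here are placeholders chosen for illustration only, not the values derived in the paper:

```python
import math

# Hypothetical coefficients for the two functional forms the paper fits;
# the real values depend on the chosen wavelength and the fitted sample.
ZETA = 4.2          # linear form:      L_iso = zeta * lambda*L_lambda
A, B = 4.5, 0.91    # log-linear form:  log10(L_iso) = A + B*log10(lambda*L_lambda)

def bolometric_linear(lam_L_lam):
    """Isotropic bolometric luminosity from the linear correction."""
    return ZETA * lam_L_lam

def bolometric_loglinear(lam_L_lam):
    """Isotropic bolometric luminosity from the log-linear correction."""
    return 10.0 ** (A + B * math.log10(lam_L_lam))

lum = 1e46                          # monochromatic lambda*L_lambda, erg/s
l_lin = bolometric_linear(lum)      # ~4.2e46 erg/s
l_log = bolometric_loglinear(lum)
```

The log-linear form allows the correction factor itself to vary with luminosity (B != 1), which is one of the fitting choices the paper explores.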

  16. Comparative evaluation of scatter correction techniques in 3D positron emission tomography

    CERN Document Server

    Zaidi, H


    Much research and development has been concentrated on the scatter compensation required for quantitative 3D PET. Increasingly sophisticated scatter correction procedures are under investigation, particularly those based on accurate scatter models and iterative reconstruction-based scatter compensation approaches. The main difference among the correction methods is the way in which the scatter component in the selected energy window is estimated. Monte Carlo methods give further insight and might in themselves offer a possible correction procedure. Methods: Five scatter correction methods are compared in this paper where applicable: the dual-energy window (DEW) technique, the convolution-subtraction (CVS) method, two variants of the Monte Carlo-based scatter correction technique (MCBSC1 and MCBSC2), and our newly developed statistical reconstruction-based scatter correction (SRBSC) method. These scatter correction techniques are evaluated using Monte Carlo simulation studies, experimental phantom measurements...

  17. Accurate estimation of the boundaries of a structured light pattern. (United States)

    Lee, Sukhan; Bui, Lam Quang


    Depth recovery based on structured light using stripe patterns, especially for a region-based codec, demands accurate estimation of the true boundary of a light pattern captured on a camera image. This is because the accuracy of the estimated boundary has a direct impact on the accuracy of the depth recovery. However, recovering the true boundary of a light pattern is considered difficult due to the deformation incurred primarily by the texture-induced variation of the light reflectance at surface locales. Especially for heavily textured surfaces, the deformation of pattern boundaries becomes rather severe. We present here a novel (to the best of our knowledge) method to estimate the true boundaries of a light pattern that are severely deformed due to the heavy textures involved. First, a general formula that models the deformation of the projected light pattern at the imaging end is presented, taking into account not only the light reflectance variation but also the blurring along the optical passages. The local reflectance indices are then estimated by applying the model to two specially chosen reference projections, all-bright and all-dark. The estimated reflectance indices are then used to transform the edge-deformed, captured pattern signal into the edge-corrected, canonical pattern signal. A canonical pattern is the virtual pattern that would have resulted if there were neither reflectance variation nor blurring in the imaging optics. Finally, we estimate the boundaries of a light pattern by intersecting the canonical form of a light pattern with that of its inverse pattern. The experimental results show that the proposed method results in significant improvements in the accuracy of the estimated boundaries under various adverse conditions.

  18. Generalised geometry for string corrections

    CERN Document Server

    Coimbra, André; Triendl, Hagen; Waldram, Daniel


    We present a general formalism for incorporating the string corrections in generalised geometry, which necessitates the extension of the generalised tangent bundle. Not only are such extensions obstructed, but string symmetries and the existence of a well-defined effective action also require a precise choice of the (generalised) connection. The action takes a universal form given by a generalised Lichnerowicz--Bismut theorem. As examples of this construction we discuss the corrections linear in $\\alpha'$ in heterotic strings and the absence of such corrections for type II theories.

  19. A New Method for Correcting Vehicle License Plate Tilt

    Institute of Scientific and Technical Information of China (English)

    Mei-Sen Pan; Qi Xiong; Jun-Biao Yan


    In the course of vehicle license plate (VLP) automatic recognition, tilt correction is a very crucial process. According to Karhunen-Loeve (K-L) transformation, the coordinates of characters in the image are arranged into a two-dimensional covariance matrix, on the basis of which the centering process is carried out. Then, the eigenvector and the rotation angle α are computed in turn. The whole image is rotated by -α. Thus, horizontal tilt correction of the image is performed. In the vertical tilt correction process, three correction methods, namely the K-L transformation method, the line fitting method based on K-means clustering (LFMBKC), and the line fitting method based on least squares (LFMBLS), are put forward to compute the vertical tilt angle θ. After shear transformation (ST) is imposed on the rotated image, the final corrected image is obtained. The experimental results verify that this proposed method can be easily implemented, and can quickly and accurately obtain the tilt angle. It also provides a new and effective way for VLP image tilt correction.
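The horizontal correction step described above (K-L transformation of character coordinates: centering, covariance, principal eigenvector, rotation by -α) can be sketched roughly as follows. This is our illustration under the assumption that character centroid coordinates have already been extracted; all names are hypothetical:

```python
import numpy as np

def horizontal_tilt_angle(points):
    """Estimate the horizontal tilt angle (radians) of a license plate
    from character centroid coordinates via the K-L (PCA) transform.

    points: (N, 2) array of (x, y) character coordinates. The principal
    eigenvector of the centered covariance matrix points along the
    character row; its direction gives the tilt angle alpha.
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)           # centering process
    cov = centered.T @ centered / len(pts)      # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    principal = eigvecs[:, np.argmax(eigvals)]  # dominant direction
    return np.arctan2(principal[1], principal[0])

def rotate(points, alpha):
    """Rotate points by -alpha to undo the estimated tilt."""
    c, s = np.cos(-alpha), np.sin(-alpha)
    R = np.array([[c, -s], [s, c]])
    return np.asarray(points, dtype=float) @ R.T
```

Note that the eigenvector's sign is arbitrary, so the estimated angle is only defined modulo 180 degrees; either choice flattens the character row onto the horizontal.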

  20. Software for Correcting the Dynamic Error of Force Transducers

    Directory of Open Access Journals (Sweden)

    Naoki Miyashita


    Software which corrects the dynamic error of force transducers in impact force measurements using their own output signal has been developed. The software corrects the output waveform of the transducers using the output waveform itself, estimates its uncertainty and displays the results. In the experiment, the dynamic errors of three transducers of the same model are evaluated using the Levitation Mass Method (LMM), in which the impact forces applied to the transducers are accurately determined as the inertial force of the moving part of the aerostatic linear bearing. The parameters for correcting the dynamic error are determined from the results of one set of impact measurements of one transducer. Then, the validity of the obtained parameters is evaluated using the results of the other sets of measurements of all three transducers. The uncertainties in the uncorrected force and those in the corrected force are also estimated. If manufacturers determine the correction parameters for each model using the proposed method, and provide the software with the parameters corresponding to each model, then users can obtain the waveform corrected against dynamic error and its uncertainty. The present status and the future prospects of the developed software are discussed in this paper.
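The abstract does not give the correction model, but a common approach to this kind of dynamic error correction is to assume second-order transducer dynamics and invert them using derivatives of the measured output. The sketch below is our illustration of that idea, not the developed software; the model form and parameter names are assumptions:

```python
import numpy as np

def correct_dynamic_error(measured, dt, omega_n, zeta):
    """Correct a force transducer's output for dynamic error, assuming
    second-order dynamics with natural frequency omega_n and damping zeta:

        y'' + 2*zeta*omega_n*y' + omega_n**2 * y = omega_n**2 * f

    Inverting for the true force f from the measured output y gives

        f = y + (2*zeta/omega_n) * y' + y'' / omega_n**2

    with the derivatives estimated numerically from the sampled signal.
    """
    y = np.asarray(measured, dtype=float)
    y1 = np.gradient(y, dt)    # first derivative of the output
    y2 = np.gradient(y1, dt)   # second derivative of the output
    return y + (2.0 * zeta / omega_n) * y1 + y2 / omega_n ** 2
```

For a sinusoidal input in steady state, this inversion recovers the input force exactly up to numerical-differentiation error, which shrinks with the sampling interval.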

  1. Singularity Correction for Long-Range-Corrected Density Functional Theory with Plane-Wave Basis Sets. (United States)

    Kawashima, Yukio; Hirao, Kimihiko


    We introduced two methods to correct the singularity in the calculation of long-range Hartree-Fock (HF) exchange for long-range-corrected density functional theory (LC-DFT) calculations in plane-wave basis sets. The first method introduces an auxiliary function to cancel out the singularity. The second method introduces a truncated long-range Coulomb potential, which has no singularity. We assessed the introduced methods using the LC-BLYP functional by applying it to isolated systems of naphthalene and pyridine. We first compared the total energies and the HOMO energies of the singularity-corrected and uncorrected calculations and confirmed that singularity correction is essential for LC-DFT calculations using plane-wave basis sets. The LC-DFT calculation results converged rapidly with respect to the cell size, as did those of the other functionals, and they were in good agreement with the calculated results obtained using Gaussian basis sets. LC-DFT succeeded in obtaining accurate orbital energies and excitation energies. We next applied LC-DFT with singularity correction methods to the electronic structure calculations of the extended systems Si and SiC. We confirmed that singularity correction is important for calculations of extended systems as well. The calculation results of the valence and conduction bands by LC-BLYP showed good convergence with respect to the number of k points sampled. The introduced methods succeeded in overcoming the singularity problem in the HF exchange calculation. We investigated the effect of the singularity correction on the excited-state calculation and found that it requires more careful treatment of the singularities than ground-state calculations do. We finally examined the excitonic effect on the band gap of the extended systems. We calculated the excitation energies to the first excited state of the extended systems using a supercell model at the Γ point and found that the excitonic binding energy, supposed to be small for

  2. An accurate and practical method for inference of weak gravitational lensing from galaxy images (United States)

    Bernstein, Gary M.; Armstrong, Robert; Krawiec, Christina; March, Marisa C.


    We demonstrate highly accurate recovery of weak gravitational lensing shear using an implementation of the Bayesian Fourier Domain (BFD) method proposed by Bernstein & Armstrong, extended to correct for selection biases. The BFD formalism is rigorously correct for Nyquist-sampled, background-limited, uncrowded images of background galaxies. BFD does not assign shapes to galaxies, instead compressing the pixel data D into a vector of moments M, such that we have an analytic expression for the probability P(M|g) of obtaining the observations with gravitational lensing distortion g along the line of sight. We implement an algorithm for conducting BFD's integrations over the population of unlensed source galaxies which measures ≈10 galaxies s⁻¹ core⁻¹ with good scaling properties. Initial tests of this code on ≈10⁹ simulated lensed galaxy images recover the simulated shear to a fractional accuracy of m = (2.1 ± 0.4) × 10⁻³, substantially more accurate than has been demonstrated previously for any generally applicable method. Deep sky exposures generate a sufficiently accurate approximation to the noiseless, unlensed galaxy population distribution assumed as input to BFD. Potential extensions of the method include simultaneous measurement of magnification and shear; multiple-exposure, multiband observations; and joint inference of photometric redshifts and lensing tomography.

  3. Spelling Correction in Agglutinative Languages

    CERN Document Server

    Oflazer, K


    This paper presents an approach to spelling correction in agglutinative languages that is based on two-level morphology and a dynamic programming based search algorithm. Spelling correction in agglutinative languages is significantly different from that in languages like English. The concept of a word in such languages is much wider than the entries found in a dictionary, owing to productive word formation by derivational and inflectional affixations. After an overview of certain issues and relevant mathematical preliminaries, we formally present the problem and our solution. We then present results from our experiments with spelling correction in Turkish, a Ural-Altaic agglutinative language. Our results indicate that we can find the intended correct word in 95% of the cases and offer it as the first candidate in 74% of the cases, when the edit distance is 1.
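The dynamic-programming search mentioned above ranks candidates by edit distance. A minimal illustration (ours; it deliberately omits the two-level morphology component, which is the hard part for agglutinative languages) is:

```python
def edit_distance(a, b):
    """Standard dynamic-programming edit distance with unit-cost
    insertions, deletions, and substitutions."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                              # delete all of a's prefix
    for j in range(n + 1):
        d[0][j] = j                              # insert all of b's prefix
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[m][n]

def best_candidates(misspelled, lexicon, max_dist=1):
    """Return lexicon entries within max_dist edits, closest first."""
    scored = [(edit_distance(misspelled, w), w) for w in lexicon]
    return [w for dist, w in sorted(scored) if dist <= max_dist]
```

In a real agglutinative-language corrector the "lexicon" is not a finite word list but the set of strings accepted by a two-level morphological analyzer, which is why the paper interleaves the distance computation with the morphological search.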

  4. Quantum corrections for Boltzmann equation

    Institute of Scientific and Technical Information of China (English)

    Levy, Peter M.


    We present the lowest order quantum correction to the semiclassical Boltzmann distribution function, and the equation satisfied by this correction is given. Our equation for the quantum correction is obtained from the conventional quantum Boltzmann equation by explicitly expressing the Planck constant in the gradient approximation, and the quantum Wigner distribution function is expanded in powers of the Planck constant, too. The negative quantum correlation in the Wigner distribution function, which is just the quantum correction term, is naturally singled out, thus obviating the need for Husimi's coarse grain averaging that is usually done to remove the negative quantum part of the Wigner distribution function. We also discuss the classical limit of quantum thermodynamic entropy in the above framework.

  5. Dispersion based beam tilt correction

    CERN Document Server

    Guetg, Marc W; Prat, Eduard; Reiche, Sven


    In Free Electron Lasers (FEL), a transverse centroid misalignment of longitudinal slices in an electron bunch reduces the effective overlap between radiation field and electron bunch and therefore the FEL performance. The dominant sources of slice misalignments for FELs are the incoherent and coherent synchrotron radiation within bunch compressors as well as transverse wake fields in the accelerating cavities. This is of particular importance for over-compression which is required for one of the key operation modes for the SwissFEL planned at the Paul Scherrer Institute. The centroid shift is corrected using corrector magnets in dispersive sections, e.g. the bunch compressors. First and second order corrections are achieved by pairs of sextupole and quadrupole magnets in the horizontal plane while skew quadrupoles correct to first order in the vertical plane. Simulations and measurements at the SwissFEL Injector Test Facility are done to investigate the proposed correction scheme for SwissFEL. This paper pres...

  6. General correcting formula of forecasting?



    A general correcting formula of forecasting (as a framework for long-use and standardized forecasts) is proposed. The formula provides new forecasting resources and areas of application including economic forecasting.

  7. Long term changes of altimeter range and geophysical corrections at altimetry calibration sites

    DEFF Research Database (Denmark)

    Andersen, Ole Baltazar; Cheng, Yongcun; Pascal Willis


    Accurate sea level trend determination is fundamentally related both to calibration of the instrument and to investigating whether there are linear trends in the set of standard geophysical and range corrections applied to the sea level observations. Long term changes in range corrections can leak...... trends in the sum of range corrections are found for the calibration sites both at local scales (within 50 km around the selected site) and at regional scales (within 300 km). However, the geophysical corrections accounting for atmospheric pressure loading and high frequency sea level variations...

  8. Surface corrections to the shell-structure of the moment of inertia

    CERN Document Server

    Gorpinchenko, D V; Bartel, J; Blocki, J P


    The moment of inertia for nuclear collective rotations is derived within a semiclassical approach based on the Inglis cranking and the Strutinsky shell-correction methods, improved by surface corrections within the non-perturbative periodic-orbit theory. For adiabatic (statistical-equilibrium) rotations it is approximated by the generalized rigid-body moment of inertia accounting for the shell corrections of the particle density. An improved phase-space trace formula allows one to express the shell components of the moment of inertia more accurately in terms of the free-energy shell correction, with their ratio evaluated within the extended Thomas-Fermi effective-surface approximation.

  9. Correcting the Chromatic Aberration in Barrel Distortion of Endoscopic Images

    Directory of Open Access Journals (Sweden)

    Y. M. Harry Ng


    Modern endoscopes offer physicians a wide-angle field of view (FOV) for minimally invasive therapies. However, the high level of barrel distortion may prevent accurate perception of the image. Fortunately, this kind of distortion may be corrected by digital image processing. In this paper we investigate the chromatic aberrations in the barrel distortion of endoscopic images. In the past, chromatic aberration in endoscopes was corrected by achromatic lenses or active lens control. In contrast, we take a computational approach by modifying the concept of image warping and the existing barrel distortion correction algorithm to tackle the chromatic aberration problem. In addition, an error function for the determination of the level of centroid coincidence is proposed. Simulation and experimental results confirm the effectiveness of our method.
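The computational approach can be illustrated by applying a radial (barrel) distortion model with a separate coefficient per color channel, so that the per-channel corrections also cancel lateral chromatic aberration. This is our simplified single-parameter sketch, not the paper's algorithm; the names and the model order are assumptions:

```python
import numpy as np

def undistort_points(xy, center, k1):
    """Map distorted image coordinates to corrected ones with a
    single-parameter polynomial radial model about the distortion
    center:  r_corrected = r * (1 + k1 * r**2)."""
    xy = np.asarray(xy, dtype=float)
    c = np.asarray(center, dtype=float)
    v = xy - c
    r2 = np.sum(v * v, axis=-1, keepdims=True)  # squared radius per point
    return c + v * (1.0 + k1 * r2)

def undistort_rgb_points(xy, center, k1_rgb):
    """Correct each color channel with its own coefficient; fitting the
    three coefficients so that red, green and blue centroids of a
    calibration target coincide removes the chromatic aberration."""
    return {ch: undistort_points(xy, center, k1)
            for ch, k1 in zip("rgb", k1_rgb)}
```

The error function mentioned in the abstract would score how well the per-channel corrected centroids coincide; minimizing it fixes the three channel coefficients.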

  10. Relativistic Corrections to the Zeeman Effect of Helium Atom

    Institute of Scientific and Technical Information of China (English)

    关晓旭; 李白文; 王治文


    The high-order relativistic corrections to the Zeeman g-factors of the helium atom are calculated. All the relativistic correction terms and the term describing the motion of the mass centre are treated as perturbations. Most of our results are in good agreement with those of Yan and Drake [Phys. Rev. A 50 (1994) R1980], who used wavefunctions constructed in Hylleraas coordinates. For the correction δg of the g-factor of the 3 3P state in 4He, our result, 2.91415 × 10⁻⁷ a.u., should be more reasonable and accurate, although there are no experimental data available in the literature to compare.

  11. Three-Dimensional Turbulent RANS Adjoint-Based Error Correction (United States)

    Park, Michael A.


    Engineering problems commonly require functional outputs of computational fluid dynamics (CFD) simulations with specified accuracy. These simulations are performed with limited computational resources. Computable error estimates offer the possibility of quantifying accuracy on a given mesh and predicting a fine grid functional on a coarser mesh. Such an estimate can be computed by solving the flow equations and the associated adjoint problem for the functional of interest. An adjoint-based error correction procedure is demonstrated for transonic inviscid and subsonic laminar and turbulent flow. A mesh adaptation procedure is formulated to target uncertainty in the corrected functional and terminate when the error remaining in the calculation is less than a user-specified error tolerance. This adaptation scheme is shown to yield anisotropic meshes with corrected functionals that are more accurate for a given number of grid points than isotropic adapted and uniformly refined grids.

  12. Reflection error correction of gas turbine blade temperature (United States)

    Kipngetich, Ketui Daniel; Feng, Chi; Gao, Shan


    Accurate measurement of gas turbine blades' temperature is one of the greatest challenges encountered in gas turbine temperature measurements. Within an enclosed gas turbine environment with surfaces of varying temperature and low emissivities, a new challenge is introduced into the use of radiation thermometers due to the problem of reflection error. A method for correcting this error has been proposed and demonstrated in this work through computer simulation and experiment. The method assumes that the emissivities of all surfaces exchanging thermal radiation are known. Simulations were carried out considering targets with low and high emissivities of 0.3 and 0.8 respectively, while experimental measurements were carried out on blades with an emissivity of 0.76. Simulated results showed the possibility of achieving an error of less than 1%, while the experimental result corrected the error to 1.1%. It was thus concluded that the method is appropriate for correcting the reflection error commonly encountered in temperature measurement of gas turbine blades.
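As a rough illustration of the reflection-correction idea, assume a gray, opaque target with known emissivity and a known effective radiance of the reflecting surroundings; the reflected component is then subtracted before inverting for temperature. This single-bounce, total-radiation (Stefan-Boltzmann) simplification is ours, not the paper's model, which treats full radiative exchange between surfaces:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def corrected_temperature(measured_radiance, emissivity, env_radiance):
    """Remove the reflected component from a radiance measurement of an
    opaque gray target, then invert Stefan-Boltzmann for temperature.

    Model:  measured = emissivity * sigma * T**4
                       + (1 - emissivity) * env_radiance
    """
    emitted = measured_radiance - (1.0 - emissivity) * env_radiance
    if emitted <= 0.0:
        raise ValueError("reflected component exceeds measurement")
    return (emitted / (emissivity * SIGMA)) ** 0.25
```

With a low-emissivity blade in hot surroundings, ignoring the reflected term would bias the reading high; the subtraction recovers the true surface temperature under the stated model.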

  13. Radiative corrections to Bose condensation

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, A. (Academia de Ciencias de Cuba, La Habana. Inst. de Matematica, Cibernetica y Computacion)


    The Bose condensation of the scalar field in a theory behaving in the Coleman-Weinberg mode is considered. The effective potential of the model is computed within the semiclassical approximation in a dimensional regularization scheme. Radiative corrections are shown to introduce certain ultraviolet divergences in the effective potential coming from the Many-Particle theory. The weight of radiative corrections in the dynamics of the system is strongly modified by the charge density.

  14. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.


    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number...... of such frames. In particular we argue that a three-error-correcting BCH code is the best choice for the component code in such systems....

  15. Proving Program Correctness. Volume V. (United States)


    Task 2. Proving Program Correctness (P.I.: J.C. Reynolds). This group is working towards programming language designs which increase the probability...certain syntactic difficulties: the natural abstract syntax is ambiguous, and syntactic correctness is violated by certain beta reductions. 3 - These...concept of a functor to express appropriate restrictions on implicit conversion functions. In a similar vein, we can use the concept of a natural

  16. Quantum error correction for beginners. (United States)

    Devitt, Simon J; Munro, William J; Nemoto, Kae


    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation now form a much larger field, and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future.
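As the simplest concrete example of the ideas surveyed, the three-qubit bit-flip code can be illustrated classically: encode one logical bit into three physical bits, extract two parity syndromes, and correct. This sketch (ours) captures only the classical skeleton; real QEC measures the syndromes without reading out the data qubits:

```python
def encode(bit):
    """Encode one logical bit into three physical bits (bit-flip code)."""
    return [bit, bit, bit]

def apply_error(codeword, flip_index):
    """Flip one physical bit, modeling a single bit-flip error."""
    corrupted = list(codeword)
    corrupted[flip_index] ^= 1
    return corrupted

def decode(codeword):
    """Syndrome extraction via the two parity checks (bits 0,1) and
    (bits 1,2), followed by correction; for at most one error this is
    equivalent to a majority vote."""
    s1 = codeword[0] ^ codeword[1]
    s2 = codeword[1] ^ codeword[2]
    corrected = list(codeword)
    if s1 and s2:
        corrected[1] ^= 1   # both checks fail: middle bit flipped
    elif s1:
        corrected[0] ^= 1   # only first check fails
    elif s2:
        corrected[2] ^= 1   # only second check fails
    return corrected[0]
```

Any single bit-flip is corrected; two flips defeat the code, which is why fault-tolerant schemes concatenate or use larger codes.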

  17. Surgical options for correction of refractive error following cataract surgery. (United States)

    Abdelghany, Ahmed A; Alio, Jorge L


    Refractive errors are frequently found following cataract surgery and refractive lens exchange. Accurate biometric analysis, selection and calculation of the adequate intraocular lens (IOL) and modern techniques for cataract surgery all contribute to achieving the goal of cataract surgery as a refractive procedure with no refractive error. However, in spite of all these advances, residual refractive error still occasionally occurs after cataract surgery and laser in situ keratomileusis (LASIK) can be considered the most accurate method for its correction. Lens-based procedures, such as IOL exchange or piggyback lens implantation are also possible alternatives especially in cases with extreme ametropia, corneal abnormalities, or in situations where excimer laser is unavailable. In our review, we have found that piggyback IOL is safer and more accurate than IOL exchange. Our aim is to provide a review of the recent literature regarding target refraction and residual refractive error in cataract surgery.

  18. Fully 3D refraction correction dosimetry system (United States)

    Manjappa, Rakesh; Sharath Makki, S.; Kumar, Rajesh; Mohan Vasu, Ram; Kanhirodan, Rajan


    medium is 71.8%, an increase of 6.4% compared to that achieved using the conventional ART algorithm. Smaller diameter dosimeters are scanned with dry air scanning by using a wide-angle lens that collects refracted light. The images reconstructed using cone beam geometry are seen to deteriorate in some planes, as those regions are not scanned. Refraction correction is important and needs to be taken into consideration to achieve quantitatively accurate dose reconstructions. Refraction modeling is crucial in array based scanners as it is not possible to identify refracted rays in the sinogram space.

  19. Fully 3D refraction correction dosimetry system. (United States)

    Manjappa, Rakesh; Makki, S Sharath; Kumar, Rajesh; Vasu, Ram Mohan; Kanhirodan, Rajan


    medium is 71.8%, an increase of 6.4% compared to that achieved using the conventional ART algorithm. Smaller diameter dosimeters are scanned with dry air scanning by using a wide-angle lens that collects refracted light. The images reconstructed using cone beam geometry are seen to deteriorate in some planes, as those regions are not scanned. Refraction correction is important and needs to be taken into consideration to achieve quantitatively accurate dose reconstructions. Refraction modeling is crucial in array based scanners as it is not possible to identify refracted rays in the sinogram space.

  20. An efficient and accurate method for calculating nonlinear diffraction beam fields

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Hyun Jo; Cho, Sung Jong; Nam, Ki Woong; Lee, Jang Hyun [Division of Mechanical and Automotive Engineering, Wonkwang University, Iksan (Korea, Republic of)


    This study develops an efficient and accurate method for calculating nonlinear diffraction beam fields propagating in fluids or solids. The Westervelt equation and quasilinear theory, from which the integral solutions for the fundamental and second harmonics can be obtained, are first considered. A computationally efficient method is then developed using a multi-Gaussian beam (MGB) model that easily separates the diffraction effects from the plane wave solution. The MGB models provide accurate beam fields when compared with the integral solutions for a number of transmitter-receiver geometries. These models can also serve as fast, powerful modeling tools for many nonlinear acoustics applications, especially in making diffraction corrections for the nonlinearity parameter determination, because of their computational efficiency and accuracy.

  1. Arthroscopically assisted Latarjet procedure: A new surgical approach for accurate coracoid graft placement and compression

    Directory of Open Access Journals (Sweden)

    Ettore Taverna


    The Latarjet procedure is a confirmed method for the treatment of shoulder instability in the presence of bone loss. It is a challenging procedure for which a key point is the correct placement of the coracoid graft onto the glenoid neck. We here present our technique for an arthroscopically assisted Latarjet procedure with a new drill guide, permitting an accurate and reproducible positioning of the coracoid graft, with optimal compression of the graft onto the glenoid neck due to the perfect position of the screws: perpendicular to the graft and the glenoid neck and parallel between them.

  2. Arthroscopically assisted Latarjet procedure: A new surgical approach for accurate coracoid graft placement and compression. (United States)

    Taverna, Ettore; Ufenast, Henri; Broffoni, Laura; Garavaglia, Guido


    The Latarjet procedure is a confirmed method for the treatment of shoulder instability in the presence of bone loss. It is a challenging procedure for which a key point is the correct placement of the coracoid graft onto the glenoid neck. We here present our technique for an arthroscopically assisted Latarjet procedure with a new drill guide, permitting an accurate and reproducible positioning of the coracoid graft, with optimal compression of the graft onto the glenoid neck due to the perfect position of the screws: perpendicular to the graft and the glenoid neck and parallel between them.

  3. Local stretch zeroing NMO correction (United States)

    Kazemi, N.; Siahkoohi, H. R.


    In this paper we present a new method of normal move-out (NMO) correction called the local stretch zeroing (LSZ) method that avoids NMO stretch. The method eliminates the theoretical curves that generate the interpolated data samples responsible for NMO stretch. The pre-correction time sampling interval is preserved by reassigning and zero padding of true data samples. The optimum mute zone selection feature of the LSZ method eliminates all interfering reflection events at far offsets. The resulting stacked section from the LSZ method generally contains higher frequency components than a normal stack, and preserves most of the shallow reflectors. The LSZ method requires that the zero-offset width of the time gate, i.e. the zero-offset time difference between two adjacent reflections, be larger than the dominant period. The major shortcoming of the method occurs when CMP data are over- or under-NMO corrected. Both synthetic and real world examples show the efficiency of the LSZ method over the conventional NMO (CNMO) correction.
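For contrast, conventional NMO correction can be sketched as follows; the interpolation step at the end is exactly what produces the stretch that LSZ avoids by reassigning true samples and zero padding instead. This is our generic illustration, not the paper's code:

```python
import numpy as np

def nmo_correct(trace, dt, offset, velocity):
    """Conventional NMO correction of a single seismic trace.

    For each zero-offset time t0, the corrected sample is pulled from
    the hyperbolic traveltime t(x) = sqrt(t0**2 + (x/v)**2) by linear
    interpolation; this interpolation is the source of NMO stretch.
    """
    n = len(trace)
    t0 = np.arange(n) * dt                         # zero-offset times
    tx = np.sqrt(t0 ** 2 + (offset / velocity) ** 2)  # hyperbolic moveout
    sample_times = np.arange(n) * dt
    # Samples whose moveout time falls beyond the trace are muted (0).
    return np.interp(tx, sample_times, trace, left=0.0, right=0.0)
```

Because tx - t0 varies with t0, neighboring samples are pulled from non-uniformly spaced times, which locally stretches the waveform at shallow times and far offsets.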

  4. Large continuous perspective transformations are necessary and sufficient for accurate perception of metric shape. (United States)

    Bingham, Geoffrey P; Lind, Mats


    We investigated the ability to perceive the metric shape of elliptical cylinders. A large number of previous studies have shown that small perspective variations (less than 45 degrees) do not allow accurate perception of metric shape. If space perception is affine (Koenderink & van Doorn, 1991), observers are unable to compare or relate lengths in depth to frontoparallel lengths (i.e., widths). Frontoparallel lengths can be perceived correctly, whereas lengths in depth generally are not. We measured reaches to evaluate shape perception and investigated whether larger perspective variations would allow accurate perception of shape. In Experiment 1, we replicated previous results showing poor perception with small perspective variations. In Experiment 2, we found that a 90 degrees continuous change in perspective, which swapped depth and width, allowed accurate perception of the depth/width aspect ratio. In Experiment 3, we found that discrete views differing by 90 degrees were insufficient to allow accurate perception of metric shape and that perception of a continuous perspective change was required. In Experiment 4, we investigated continuous perspective changes of 30 degrees, 45 degrees, 60 degrees, and 90 degrees and discovered that a 45 degrees change or greater allowed accurate perception of the aspect ratio and that less than this did not. In conclusion, we found that perception of metric shape is possible with continuous perspective transformations somewhat larger than those investigated in the substantial number of previous studies.

  5. Binary Error Correcting Network Codes

    CERN Document Server

    Wang, Qiwen; Li, Shuo-Yen Robert


    We consider network coding for networks experiencing worst-case bit-flip errors, and argue that this is a reasonable model for highly dynamic wireless network transmissions. We demonstrate that in this setup prior network error-correcting schemes can be arbitrarily far from achieving the optimal network throughput. We propose a new metric for errors under this model. Using this metric, we prove a new Hamming-type upper bound on the network capacity. We also show a commensurate lower bound based on GV-type codes that can be used for error-correction. The codes used to attain the lower bound are non-coherent (do not require prior knowledge of network topology). The end-to-end nature of our design enables our codes to be overlaid on classical distributed random linear network codes. Further, we free internal nodes from having to implement potentially computationally intensive link-by-link error-correction.

  6. Gravitomagnetic corrections on gravitational waves

    CERN Document Server

    Capozziello, S; Forte, L; Garufi, F; Milano, L


    Gravitational waveforms and production could be considerably affected by gravitomagnetic corrections considered in the relativistic theory of orbits. Besides the standard periastron effect of General Relativity, new nutation effects come out when c^{-3} corrections are taken into account. Such corrections emerge as soon as matter-current densities and vector gravitational potentials cannot be discarded from the dynamics. We study the gravitational waves emitted through the capture, in the gravitational field of massive binary systems (e.g. a very massive black hole on which a stellar object is inspiralling) via the quadrupole approximation, considering precession and nutation effects. We present a numerical study to obtain the gravitational wave luminosity, the total energy output and the gravitational radiation amplitude. From a crude estimate of the expected number of events towards peculiar targets (e.g. globular clusters) and in particular, the rate of events per year for dense stellar clusters at the Galactic Cen...

  7. Aberration Correction in Electron Microscopy

    CERN Document Server

    Rose, Harald H


    The resolution of conventional electron microscopes is limited by spherical and chromatic aberrations. Both defects are unavoidable in the case of static rotationally symmetric electromagnetic fields (Scherzer theorem). Multipole correctors and electron mirrors have been designed and built, which compensate for these aberrations. The principles of correction will be demonstrated for the tetrode mirror, the quadrupole-octopole corrector and the hexapole corrector. Electron mirrors require a magnetic beam separator free of second-order aberrations. The multipole correctors are highly symmetric telescopic systems compensating for the defects of the objective lens. The hexapole corrector has the simplest structure yet eliminates only the spherical aberration, whereas the mirror and the quadrupole-octopole corrector are able to correct for both aberrations. Chromatic correction is achieved in the latter corrector by crossed electric and magnetic quadrupoles acting as first-order Wien filters. Micrographs obtaine...

  8. Classical Corrections in String Cosmology

    CERN Document Server

    Brustein, Ram; Brustein, Ram; Madden, Richard


    An important element in a model of non-singular string cosmology is a phase in which classical corrections saturate the growth of curvature in a de Sitter-like phase with a linearly growing dilaton (an `algebraic fixed point'). As the form of the classical corrections is not well known, here we look for evidence, based on a suggested symmetry of the action, scale factor duality and on conformal field theory considerations, that they can produce this saturation. It has previously been observed that imposing scale factor duality on the $O(\\alpha')$ corrections is not compatible with fixed point behavior. Here we present arguments that these problems persist to all orders in $\\alpha'$. We also present evidence for the form of a solution to the equations of motion using conformal perturbation theory, examine its implications for the form of the effective action and find novel fixed point structure.

  9. Local Correction of Boolean Functions

    CERN Document Server

    Alon, Noga


    A Boolean function f over n variables is said to be q-locally correctable if, given a black-box access to a function g which is "close" to an isomorphism f_sigma of f, we can compute f_sigma(x) for any x in Z_2^n with good probability using q queries to g. We observe that any k-junta, that is, any function which depends only on k of its input variables, is O(2^k)-locally correctable. Moreover, we show that there are examples where this is essentially best possible, and locally correcting some k-juntas requires a number of queries which is exponential in k. These examples, however, are far from being typical, and indeed we prove that for almost every k-junta, O(k log k) queries suffice.
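
    The flavour of such query-based correction can be sketched with the classic random self-reducibility argument for parity functions (an illustrative aside, not the k-junta construction of the paper; the oracle and function names below are invented):

```python
import random

def locally_correct_parity(g, x, trials=25):
    # BLR-style local correction for parity functions: a parity f
    # satisfies f(x) = f(x XOR r) XOR f(r), so f(x) can be recovered
    # from a possibly corrupted oracle g by majority vote over random
    # shifts r, each vote costing two queries to g.
    n = len(x)
    votes = 0
    for _ in range(trials):
        r = [random.randrange(2) for _ in range(n)]
        x_xor_r = [a ^ b for a, b in zip(x, r)]
        votes += 1 if g(x_xor_r) ^ g(r) else -1
    return 1 if votes > 0 else 0

# Example oracle: the exact parity of bits 0 and 2 (uncorrupted here;
# with a mildly corrupted g the majority vote still succeeds with
# high probability).
parity = lambda x: x[0] ^ x[2]
```

    Each vote costs two queries, so correcting one point uses O(trials) queries; the abstract's O(k log k) bound for typical k-juntas is a far more refined statement than this sketch.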

  10. String-Corrected Black Holes

    Energy Technology Data Exchange (ETDEWEB)

    Hubeny, Veronika; Maloney, Alexander; Rangamani, Mukund


    We investigate the geometry of four dimensional black hole solutions in the presence of stringy higher curvature corrections to the low energy effective action. For certain supersymmetric two charge black holes these corrections drastically alter the causal structure of the solution, converting seemingly pathological null singularities into timelike singularities hidden behind a finite area horizon. We establish, analytically and numerically, that the string-corrected two-charge black hole metric has the same Penrose diagram as the extremal four-charge black hole. The higher derivative terms lead to another dramatic effect -- the gravitational force exerted by a black hole on an inertial observer is no longer purely attractive! The magnitude of this effect is related to the size of the compactification manifold.

  11. When correction turns positive: processing corrective prosody in Dutch. (United States)

    Dimitrova, Diana V; Stowe, Laurie A; Hoeks, John C J


    Current research on spoken language does not provide a consistent picture as to whether prosody, the melody and rhythm of speech, conveys a specific meaning. Perception studies show that English listeners assign meaning to prosodic patterns and, for instance, associate some accents with contrast, whereas results for Dutch listeners are more mixed. In two ERP studies we tested how Dutch listeners process words carrying two types of accents, which either provided new information (new information accents) or corrected information (corrective accents), both in single sentences (experiment 1) and after corrective and new information questions (experiment 2). In both experiments corrective accents elicited a sustained positivity as compared to new information accents, which started earlier in context than in single sentences. The positivity was not modulated by the nature of the preceding question, suggesting that the underlying neural mechanism likely reflects the construction of an interpretation of the accented word, either by identifying an alternative in context or by inferring it when no context is present. Our experimental results provide strong evidence for inferential processes related to prosodic contours in Dutch.

  12. Correction. (United States)


    Because of a production error, the photographs of Pierre Chambon and Harald zur Hausen, which appeared on pages 1116 and 1117 of last week's issue (22 November), were transposed. Here's what you should have seen: Chambon is on the left, zur Hausen on the right.

  13. Correction (United States)


    The feature article “Neutrons for new drugs” (August pp26-29) stated that neutron crystallography was used to determine the structures of “well-known complex biological molecules such as lysine, insulin and trypsin”.

  14. Corrections (United States)


    1. The first photograph on p12 of News in Physics Education January 2004 is of Prof. Paul Black and not Prof. Jonathan Osborne, as stated. 2. The review of Flowlog on p209 of the March 2004 issue wrongly gives the maximum sampling rate of the analogue inputs as 25 kHz (40 ms) instead of 25 kHz (40 µs) and the digital inputs as 100 kHz (10 ms) instead of 100 kHz (10 µs). 3. The letter entitled 'A trial of two energies' by Eric McIldowie on pp212-4 of the March 2004 issue was edited to fit the space available. We regret that a few small errors were made in doing this. Rather than detail these, the interested reader can access the whole of the original letter as a Word file from the link below.

  15. Correction

    CERN Multimedia


    From left to right: Luis, Carmen, Mario, Christian and José listening to speeches by theorists Alvaro De Rújula and Luis Alvarez-Gaumé (right) at their farewell gathering on 15 May. We unfortunately cut out a part of the "Word of thanks" from the team retiring from Restaurant No. 1. The complete message is published below: Dear friends, You are the true "nucleus" of CERN. Every member of this extraordinary human mosaic will always remain in our affections and in our thoughts. We have all been very touched by your spontaneous generosity. Arrivederci, Mario Au revoir, Christian Hasta Siempre Carmen, José and Luis PS: Lots of love to the theory team and to the hidden organisers. So long!

  16. Accurate Jones Matrix of the Practical Faraday Rotator

    Institute of Scientific and Technical Information of China (English)

    王林斗; 祝昇翔; 李玉峰; 邢文烈; 魏景芝


    The Jones matrix of practical Faraday rotators is often used in the engineering calculation of non-reciprocal optical field. Nevertheless, only the approximate Jones matrix of practical Faraday rotators has been presented by now. Based on the theory of polarized light, this paper presents the accurate Jones matrix of practical Faraday rotators. In addition, an experiment has been carried out to verify the validity of the accurate Jones matrix. This matrix accurately describes the optical characteristics of practical Faraday rotators, including rotation, loss and depolarization of the polarized light. The accurate Jones matrix can be used to obtain the accurate results for the practical Faraday rotator to transform the polarized light, which paves the way for the accurate analysis and calculation of practical Faraday rotators in relevant engineering applications.
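
    As a rough illustration of the approximate (rotation-plus-loss) description that the accurate matrix refines, one might write the following (a sketch; the function and parameter names are invented, and true depolarization requires the Mueller formalism rather than a Jones matrix):

```python
import numpy as np

def faraday_rotator_jones(theta, transmission=1.0):
    # Approximate Jones matrix of a Faraday rotator: a pure rotation by
    # angle theta combined with a scalar amplitude transmission (loss).
    # Illustrative only: the paper's accurate matrix also models
    # depolarization, which a plain Jones matrix cannot capture.
    c, s = np.cos(theta), np.sin(theta)
    return np.sqrt(transmission) * np.array([[c, -s], [s, c]])

E_in = np.array([1.0, 0.0])                       # horizontally polarized
E_out = faraday_rotator_jones(np.pi / 4) @ E_in   # rotated by 45 degrees
```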

  17. Atmospheric Error Correction of the Laser Beam Ranging

    Directory of Open Access Journals (Sweden)

    J. Saydi


    Full Text Available Atmospheric models based on surface measurements of pressure, temperature, and relative humidity have been used to increase laser ranging accuracy by ray tracing. Atmospheric refraction can cause significant errors in laser ranging systems. In the present research, the atmospheric effects on the laser beam were investigated using the principles of laser ranging. The atmospheric correction was calculated for 0.532, 1.3, and 10.6 micron wavelengths under the weather conditions of Tehran, Isfahan, and Bushehr in Iran from March 2012 to March 2013, using monthly means of meteorological data received from meteorological stations in those cities. The atmospheric correction was calculated for 11, 100, and 200 kilometer laser beam propagations under 30°, 60°, and 90° elevation angles for each propagation. The results of the study showed that, for the same months and beam emission angles, the atmospheric correction was most accurate for the 10.6 micron wavelength, and the laser ranging error decreased with increasing laser emission angle. The atmospheric corrections from the Marini-Murray and Mendes-Pavlis models were also compared for the 0.532 micron wavelength.
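
    For orientation, the size of such range corrections can be sketched with a crude hydrostatic zenith-delay constant and a flat-atmosphere mapping function (an illustrative simplification; Marini-Murray and Mendes-Pavlis include wavelength, humidity, and Earth-curvature terms that this sketch omits):

```python
import math

def zenith_range_correction_m(pressure_hpa):
    # Rough hydrostatic zenith delay (Saastamoinen-style constant):
    # about 2.28 mm of excess optical path per hPa of surface pressure.
    return 2.2768e-3 * pressure_hpa

def range_correction_m(pressure_hpa, elevation_deg):
    # Map the zenith delay to lower elevation angles with a simple
    # flat-atmosphere 1/sin(E) mapping function (illustrative only).
    return zenith_range_correction_m(pressure_hpa) / math.sin(
        math.radians(elevation_deg))
```

    At standard pressure this gives roughly 2.3 m at zenith and about twice that at a 30° elevation angle, consistent with the abstract's observation that ranging error shrinks as the emission angle rises.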

  18. Correction of gene expression data

    DEFF Research Database (Denmark)

    Darbani Shirvanehdeh, Behrooz; Stewart, C. Neal, Jr.; Noeparvar, Shahin;


    This report investigates for the first time the potential inter-treatment bias source of cell number for gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies...... an analytical approach to examine the suitability of correction methods by considering the inter-treatment bias as well as the inter-replicate variance, which allows use of the best correction method with minimum residual bias. Analyses of RNA sequencing and microarray data showed that the efficiencies...

  19. Accurate and Timely Forecasting of CME-Driven Geomagnetic Storms (United States)

    Chen, J.; Kunkel, V.; Skov, T. M.


    Wide-spread and severe geomagnetic storms are primarily caused by the ejecta of coronal mass ejections (CMEs) that impose long durations of strong southward interplanetary magnetic field (IMF) on the magnetosphere, the duration and magnitude of the southward IMF (Bs) being the main determinants of geoeffectiveness. Another important quantity to forecast is the arrival time of the expected geoeffective CME ejecta. In order to accurately forecast these quantities in a timely manner (say, 24--48 hours of advance warning time), it is necessary to calculate the evolving CME ejecta (its structure and magnetic field vector in three dimensions) using remote sensing solar data alone. We discuss a method based on the validated erupting flux rope (EFR) model of CME dynamics. It has been shown using STEREO data that the model can calculate the correct size, magnetic field, and plasma parameters of a CME ejecta detected at 1 AU, using the observed CME position-time data alone as input (Kunkel and Chen 2010). One disparity is in the arrival time, which is attributed to the simplified geometry of the circular toroidal axis of the CME flux rope. Accordingly, the model has been extended to self-consistently include the transverse expansion of the flux rope (Kunkel 2012; Kunkel and Chen 2015). We show that the extended formulation provides a better prediction of arrival time even if the CME apex does not propagate directly toward the earth. We apply the new method to a number of CME events and compare predicted flux ropes at 1 AU to the observed ejecta structures inferred from in situ magnetic and plasma data. The EFR model also predicts the asymptotic ambient solar wind speed (Vsw) for each event, which has not been validated yet. The predicted Vsw values are tested using the ENLIL model. We discuss the minimum and sufficient required input data for an operational forecasting system for predicting the drivers of large geomagnetic storms. Kunkel, V., and Chen, J., ApJ Lett, 715, L80, 2010. Kunkel, V., Ph

  20. Biomimetic Approach for Accurate, Real-Time Aerodynamic Coefficients Project (United States)

    National Aeronautics and Space Administration — Aerodynamic and structural reliability and efficiency depends critically on the ability to accurately assess the aerodynamic loads and moments for each lifting...

  1. Bunch mode specific rate corrections for PILATUS3 detectors

    Energy Technology Data Exchange (ETDEWEB)

    Trueb, P., E-mail: [DECTRIS Ltd, 5400 Baden (Switzerland); Dejoie, C. [ETH Zurich, 8093 Zurich (Switzerland); Kobas, M. [DECTRIS Ltd, 5400 Baden (Switzerland); Pattison, P. [EPF Lausanne, 1015 Lausanne (Switzerland); Peake, D. J. [School of Physics, The University of Melbourne, Victoria 3010 (Australia); Radicci, V. [DECTRIS Ltd, 5400 Baden (Switzerland); Sobott, B. A. [School of Physics, The University of Melbourne, Victoria 3010 (Australia); Walko, D. A. [Argonne National Laboratory, Argonne, IL 60439 (United States); Broennimann, C. [DECTRIS Ltd, 5400 Baden (Switzerland)


    The count rate behaviour of PILATUS3 detectors has been characterized for seven bunch modes at four different synchrotrons. The instant retrigger technology of the PILATUS3 application-specific integrated circuit is found to reduce the dependency of the required rate correction on the synchrotron bunch mode. The improvement of using bunch mode specific rate corrections based on a Monte Carlo simulation is quantified. PILATUS X-ray detectors are in operation at many synchrotron beamlines around the world. This article reports on the characterization of the new PILATUS3 detector generation at high count rates. As for all counting detectors, the measured intensities have to be corrected for the dead-time of the counting mechanism at high photon fluxes. The large number of different bunch modes at these synchrotrons as well as the wide range of detector settings presents a challenge for providing accurate corrections. To avoid the intricate measurement of the count rate behaviour for every bunch mode, a Monte Carlo simulation of the counting mechanism has been implemented, which is able to predict the corrections for arbitrary bunch modes and a wide range of detector settings. This article compares the simulated results with experimental data acquired at different synchrotrons. It is found that the usage of bunch mode specific corrections based on this simulation improves the accuracy of the measured intensities by up to 40% for high photon rates and highly structured bunch modes. For less structured bunch modes, the instant retrigger technology of PILATUS3 detectors substantially reduces the dependency of the rate correction on the bunch mode. The acquired data also demonstrate that the instant retrigger technology allows for data acquisition up to 15 million photons per second per pixel.
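
    The generic counting-loss problem behind these corrections can be sketched with the textbook non-paralyzable counter model (a simplification: the article's point is precisely that real corrections depend on the bunch mode and detector settings, and the 120 ns dead-time used below is an arbitrary illustrative value):

```python
def observed_rate(true_rate, dead_time):
    # Non-paralyzable counter model: each registered photon blinds the
    # counter for dead_time seconds, so high incident rates are
    # under-counted.
    return true_rate / (1.0 + true_rate * dead_time)

def corrected_rate(measured_rate, dead_time):
    # Invert the model to estimate the true incident photon rate from
    # the measured one.
    return measured_rate / (1.0 - measured_rate * dead_time)

tau = 120e-9                    # illustrative dead-time [s], not PILATUS3's
m = observed_rate(5e6, tau)     # a 5 Mcps true rate is under-counted
```

    The bunch-mode dependence arises because photons arrive only during filled bunches, so the effective loss deviates from this continuous-beam formula; that is what the Monte Carlo simulation in the article captures.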

  2. Multilingual text induced spelling correction

    NARCIS (Netherlands)

    Reynaert, M.W.C.


    We present TISC, a multilingual, language-independent and context-sensitive spelling checking and correction system designed to facilitate the automatic removal of non-word spelling errors in large corpora. Its lexicon is derived from raw text corpora, without supervision, and contains word unigrams

  3. The correct "ball bearings" data. (United States)

    Caroni, C


    The famous data on fatigue failure times of ball bearings have been quoted incorrectly from Lieblein and Zelen's original paper. The correct data include censored values, as well as non-fatigue failures that must be handled appropriately. They could be described by a mixture of Weibull distributions, corresponding to different modes of failure.


    Directory of Open Access Journals (Sweden)

    H. Rohne


    Full Text Available

    ENGLISH ABSTRACT: In this paper the important issues involved in successfully implementing corrective action systems in quality management are discussed. The work is based on experience in implementing and operating such a system in an automotive manufacturing enterprise in South Africa. The core of a corrective action system is good documentation, supported by a computerised information system. Secondly, a systematic problem solving methodology is essential to resolve the quality related problems identified by the system. In the following paragraphs the general corrective action process is discussed and the elements of a corrective action system are identified, followed by a more detailed discussion of each element. Finally specific results from the application are discussed.

    AFRIKAANSE OPSOMMING (translated): Important considerations in the successful implementation of corrective action systems in quality management are discussed in this article. The work is based on experience in implementing and operating such a system at an automotive manufacturer in South Africa. The core of a corrective action system is good documentation, supported by a computerised information system. Secondly, a systematic problem-solving methodology is needed to address the quality-related problems that the system identifies. In the following paragraphs the general corrective action process is discussed and the elements of the corrective action system are identified. Each element is then discussed in more detail. Finally, specific results of the application are briefly covered.

  5. Clinical Evaluation of Zero-Echo-Time Attenuation Correction for Brain 18F-FDG PET/MRI: Comparison with Atlas Attenuation Correction. (United States)

    Sekine, Tetsuro; Ter Voert, Edwin E G W; Warnock, Geoffrey; Buck, Alfred; Huellner, Martin; Veit-Haibach, Patrick; Delso, Gaspar


    Accurate attenuation correction (AC) on PET/MR is still challenging. The purpose of this study was to evaluate the clinical feasibility of AC based on fast zero-echo-time (ZTE) MRI by comparing it with the default atlas-based AC on a clinical PET/MR scanner.

  6. The statistical nature of the second order corrections to the thermal SZE



    This paper shows that the accepted expressions for the second order corrections in the parameter $z$ to the thermal Sunyaev-Zel'dovich effect can be accurately reproduced by a simple convolution integral approach. This representation allows the second order SZE corrections to be separated into two types of components: one associated with a single line broadening, directly related to the even derivative terms present in the distortion intensity curve, while the other is related to a frequency shift, ...

  7. Short- and long-range corrected hybrid density functionals with the D3 dispersion corrections

    CERN Document Server

    Wang, Chih-Wei; Chai, Jeng-Da


    We propose a short- and long-range corrected (SLC) hybrid scheme employing 100% Hartree-Fock (HF) exchange at both zero and infinite interelectronic distances, wherein three SLC hybrid density functionals with the D3 dispersion corrections (SLC-LDA-D3, SLC-PBE-D3, and SLC-B97-D3) are developed. SLC-PBE-D3 and SLC-B97-D3 are shown to be accurate for a very diverse range of applications, such as core ionization and excitation energies, thermochemistry, kinetics, noncovalent interactions, dissociation of symmetric radical cations, vertical ionization potentials, vertical electron affinities, fundamental gaps, and valence, Rydberg, and long-range charge-transfer excitation energies. Relative to ωB97X-D, SLC-B97-D3 provides significant improvement for core ionization and excitation energies and noticeable improvement for the self-interaction, asymptote, energy-gap, and charge-transfer problems, while performing similarly for thermochemistry, kinetics, and noncovalent interactions.

  8. Accurate reading comprehension rate as an indicator of broad reading in students in first, second, and third grades. (United States)

    Ciancio, Dennis; Thompson, Kelly; Schall, Megan; Skinner, Christopher; Foorman, Barbara


    The relationship between reading comprehension rate measures and broad reading skill development was examined using data from approximately 1425 students (grades 1-3). Students read 3 passages, from a pool of 30, and answered open-ended comprehension questions. Accurate reading comprehension rate (ARCR) was calculated by dividing the percentage of questions answered correctly (%QC) by the seconds required to read the passage. Across all 30 passages, ARCR and its two components, %QC and reading time (1/seconds spent reading the passage), were significantly correlated with broad reading scores, with %QC yielding the lowest correlations. Two sequential regressions supported previous findings suggesting that ARCR measures consistently produced meaningful incremental increases beyond %QC in the amount of variance explained in broad reading skill; however, ARCR produced small or no incremental increases beyond reading time. Discussion focuses on the importance of the measure of reading time embedded in brief accurate reading rate measures and on directions for future research.
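
    The ARCR measure itself is a one-line computation (a sketch; the function and argument names are invented here):

```python
def arcr(questions_correct, questions_total, reading_seconds):
    # Accurate reading comprehension rate: percentage of questions
    # answered correctly divided by the seconds spent reading.
    pct_correct = 100.0 * questions_correct / questions_total
    return pct_correct / reading_seconds

rate = arcr(8, 10, 120.0)   # 80% correct after a 120-second read
```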

  9. Speed-of-sound compensated photoacoustic tomography for accurate imaging

    CERN Document Server

    Jose, Jithin; Steenbergen, Wiendelt; Slump, Cornelis H; van Leeuwen, Ton G; Manohar, Srirang


    In most photoacoustic (PA) measurements, variations in speed-of-sound (SOS) of the subject are neglected under the assumption of acoustic homogeneity. Biological tissue with spatially heterogeneous SOS cannot be accurately reconstructed under this assumption. We present experimental and image reconstruction methods with which 2-D SOS distributions can be accurately acquired and reconstructed, and with which the SOS map can be used subsequently to reconstruct highly accurate PA tomograms. We begin with a 2-D iterative reconstruction approach in an ultrasound transmission tomography (UTT) setting, which uses ray refracted paths instead of straight ray paths to recover accurate SOS images of the subject. Subsequently, we use the SOS distribution in a new 2-D iterative approach, where refraction of rays originating from PA sources are accounted for in accurately retrieving the distribution of these sources. Both the SOS reconstruction and SOS-compensated PA reconstruction methods utilize the Eikonal equation to m...

  10. 5 CFR 1601.34 - Error correction. (United States)


    ... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Error correction. 1601.34 Section 1601.34... Contribution Allocations and Interfund Transfer Requests § 1601.34 Error correction. Errors in processing... in the wrong investment fund, will be corrected in accordance with the error correction...

  11. 7 CFR 800.165 - Corrected certificates. (United States)


    ... this process shall be corrected according to this section. (b) Who may correct. Only official personnel.... According to this section and the instructions, corrected certificates shall show (i) the terms “Corrected... that has been superseded by another certificate or on the basis of a subsequent analysis for...

  12. Correcting ligands, metabolites, and pathways

    Directory of Open Access Journals (Sweden)

    Vriend Gert


    Full Text Available Abstract Background A wide range of research areas in bioinformatics, molecular biology and medicinal chemistry require precise chemical structure information about molecules and reactions, e.g. drug design, ligand docking, metabolic network reconstruction, and systems biology. Most available databases, however, treat chemical structures more as illustrations than as data fields in their own right. Lack of chemical accuracy impedes progress in the areas mentioned above. We present a database of metabolites called BioMeta that augments the existing pathway databases by explicitly assessing the validity, correctness, and completeness of chemical structure and reaction information. Description The main bulk of the data in BioMeta were obtained from the KEGG Ligand database. We developed a tool for chemical structure validation which assesses the chemical validity and stereochemical completeness of a molecule description. The validation tool was used to examine the compounds in BioMeta, showing that a relatively small number of compounds had an incorrect constitution (connectivity only, not considering stereochemistry) and that a considerable number (about one third) had incomplete or even incorrect stereochemistry. We made a large effort to correct the errors and to complete the structural descriptions. A total of 1468 structures were corrected and/or completed. We also established the reaction balance of the reactions in BioMeta and corrected 55% of the unbalanced (stoichiometrically incorrect) reactions in an automatic procedure. The BioMeta database was implemented in PostgreSQL and provided with a web-based interface. Conclusion We demonstrate that the validation of metabolite structures and reactions is a feasible and worthwhile undertaking, and that the validation results can be used to trigger corrections and improvements to BioMeta, our metabolite database. BioMeta provides some tools for rational drug design, reaction searches, and
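
    The reaction-balance check described above amounts to comparing element counts on both sides of a reaction. A minimal sketch (simple formulas only, with no parentheses, charges, or isotopes, and not BioMeta's actual implementation):

```python
import re
from collections import Counter

def formula_counts(formula):
    # Parse a plain molecular formula such as "C6H12O6" into per-element
    # atom counts.
    counts = Counter()
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] += int(num) if num else 1
    return counts

def is_balanced(reactants, products):
    # A reaction is stoichiometrically balanced when every element
    # occurs equally often on both sides.
    lhs = sum((formula_counts(f) for f in reactants), Counter())
    rhs = sum((formula_counts(f) for f in products), Counter())
    return lhs == rhs
```

    For example, glucose combustion (C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O) balances, while H2 + O2 -> H2O does not.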

  13. A highly accurate and analytic equation of state for a hard sphere fluid in random porous media. (United States)

    Holovko, M; Dong, W


    An analytical equation of state (EOS) for a hard sphere fluid confined in random porous media is derived by extending the scaled particle theory to such complex systems with quenched disorder. A simple empirical correction allows us to obtain a highly accurate EOS, with errors within those of the simulations. These are the first analytical results for nontrivial off-lattice quench-annealed systems.
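
    For comparison, the best-known analytical hard-sphere EOS in the bulk (no porous matrix) is the Carnahan-Starling form, shown here only as a familiar baseline rather than the Holovko-Dong porous-media result:

```python
def carnahan_starling_Z(eta):
    # Compressibility factor Z = p/(rho*k*T) of the bulk hard-sphere
    # fluid at packing fraction eta (Carnahan-Starling EOS).
    return (1.0 + eta + eta**2 - eta**3) / (1.0 - eta)**3
```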

  14. New miRNA Profiles Accurately Distinguish Renal Cell Carcinomas and Upper Tract Urothelial Carcinomas from the Normal Kidney


    Apostolos Zaravinos; George I Lambrou; Nikos Mourmouras; Patroklos Katafygiotis; Gregory Papagregoriou; Krinio Giannikou; Dimitris Delakas; Constantinos Deltas


    BACKGROUND: Upper tract urothelial carcinomas (UT-UC) can invade the pelvicalyceal system, making differential diagnosis of the various histologically distinct renal cell carcinoma (RCC) subtypes and UT-UC difficult. Correct diagnosis is critical for determining appropriate surgery and post-surgical treatments. We aimed to identify microRNA (miRNA) signatures that can accurately distinguish the most prevalent RCC subtypes and UT-UC from the normal kidney. METHODS AND FINDINGS: miRNA profiling...

  15. A multiresolution image based approach for correction of partial volume effects in emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Boussion, N; Hatt, M; Lamare, F; Bizais, Y; Turzo, A; Rest, C Cheze-Le; Visvikis, D [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Brest (France)


    Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography. They lead to a loss of signal in tissues of size similar to the point spread function and induce activity spillover between regions. Although PVE can be corrected for by using algorithms that provide the correct radioactivity concentration in a series of regions of interest (ROIs), so far little attention has been given to the possibility of creating improved images as a result of PVE correction. Potential advantages of PVE-corrected images include the ability to accurately delineate functional volumes as well as improving the tumour-to-background ratio, resulting in an associated improvement in the analysis of response to therapy studies and diagnostic examinations, respectively. The objective of our study was therefore to develop a methodology for PVE correction not only to enable the accurate recovery of activity concentrations, but also to generate PVE-corrected images. In the multiresolution analysis that we define here, details of a high-resolution image H (MRI or CT) are extracted, transformed and integrated in a low-resolution image L (PET or SPECT). A discrete wavelet transform of both H and L images is performed by using the à trous algorithm, which allows the spatial frequencies (details, edges, textures) to be obtained easily at a level of resolution common to H and L. A model is then inferred to build the lacking details of L from the high-frequency details in H. The process was successfully tested on synthetic and simulated data, proving the ability to obtain accurately corrected images. Quantitative PVE correction was found to be comparable with a method considered as a reference but limited to ROI analyses. Visual improvement and quantitative correction were also obtained in two examples of clinical images, the first using a combined PET/CT scanner with a lymphoma patient and the second using a FDG brain PET and corresponding T1
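
    A 1-D sketch of the à trous decomposition the method builds on (illustrative only; the paper applies it in 2-D to the H and L images and then models the missing high-frequency detail planes of L):

```python
import numpy as np

def a_trous_details(signal, levels=3):
    # "A trous" stationary wavelet transform: smooth with a B3-spline
    # kernel whose taps are spread apart by 2**j zeros ("holes") at
    # level j; the detail planes are differences of successive
    # smoothings, so signal == sum(details) + final smooth exactly.
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    smooth, details = np.asarray(signal, float), []
    for j in range(levels):
        holed = np.zeros((len(kernel) - 1) * 2**j + 1)
        holed[::2**j] = kernel                 # insert the "holes"
        next_smooth = np.convolve(smooth, holed, mode="same")
        details.append(smooth - next_smooth)   # detail plane at scale j
        smooth = next_smooth
    return details, smooth
```

    The telescoping differences guarantee exact reconstruction, which is what makes the transform convenient for transplanting high-frequency detail planes from H into L.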

  16. Proximity effect correction sensitivity analysis (United States)

    Zepka, Alex; Zimmermann, Rainer; Hoppe, Wolfgang; Schulz, Martin


    Determining the quality of a proximity effect correction (PEC) is often done via 1-dimensional measurements such as: CD deviations from target, corner rounding, or line-end shortening. An alternative approach would compare the entire perimeter of the exposed shape and its original design. Unfortunately, this is not a viable solution as there is a practical limit to the number of metrology measurements that can be done in a reasonable amount of time. In this paper we make use of simulated results and introduce a method which may be considered complementary to the standard way of PEC qualification. It compares simulated contours with the target layout via a Boolean XOR operation with the area of the XOR differences providing a direct measure of how close a corrected layout approximates the target.
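
    On rasterized contours the XOR metric is nearly a one-liner (a sketch with invented masks; production PEC tools perform the Boolean operation on polygon geometry rather than pixels):

```python
import numpy as np

def xor_area(mask_a, mask_b, pixel_area=1.0):
    # Area of the symmetric difference between two rasterized shapes;
    # zero means the simulated contour matches the target exactly.
    return float(np.logical_xor(mask_a, mask_b).sum()) * pixel_area

target = np.zeros((8, 8), dtype=bool)
target[2:6, 2:6] = True         # 4x4 target feature
printed = np.zeros((8, 8), dtype=bool)
printed[2:6, 2:5] = True        # simulated contour lost one column
```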

  17. Interaction and self-correction

    DEFF Research Database (Denmark)

    Satne, Glenda Lucila


    In this paper, I address the question of how to account for the normative dimension involved in conceptual competence in a naturalistic framework. First, I present what I call the naturalist challenge (NC), referring to both the phylogenetic and ontogenetic dimensions of conceptual possession ... and acquisition. I then criticize two models that have been dominant in thinking about conceptual competence, the interpretationist and the causalist models. Both fail to meet NC, by failing to account for the abilities involved in conceptual self-correction. I then offer an alternative account of self-correction that I develop with the help of the interactionist theory of mutual understanding arising from recent developments in phenomenology and developmental psychology. © 2014 Satne.

  18. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L


    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and applies several error concealment techniques in the decoder. The decoder resynchronizes more quickly, with fewer errors, than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  19. Holographic superconductors with Weyl corrections (United States)

    Momeni, Davood; Raza, Muhammad; Myrzakulov, Ratbay


    A quick review of the analytical aspects of holographic superconductors (HSCs) with Weyl corrections is presented. We focus mainly on the matching method and variational approaches. Different types of such HSCs have been investigated: s-wave, p-wave and Stückelberg ones. We also review the fundamental construction of a p-wave type, in which the non-Abelian gauge field is coupled to the Weyl tensor. The analytical results are compared with numerics.

  20. HMM-FRAME: accurate protein domain classification for metagenomic sequences containing frameshift errors

    Directory of Open Access Journals (Sweden)

    Sun Yanni


    Full Text Available Abstract Background Protein domain classification is an important step in metagenomic annotation. The state-of-the-art method for protein domain classification is profile HMM-based alignment. However, the relatively high rates of insertions and deletions in homopolymer regions of pyrosequencing reads create frameshifts, causing conventional profile HMM alignment tools to generate alignments with marginal scores. This makes error-containing gene fragments unclassifiable with conventional tools. Thus, there is a need for an accurate domain classification tool that can detect and correct sequencing errors. Results We introduce HMM-FRAME, a protein domain classification tool based on an augmented Viterbi algorithm that can incorporate error models from different sequencing platforms. HMM-FRAME corrects sequencing errors and classifies putative gene fragments into domain families. It achieved high error detection sensitivity and specificity in a data set with annotated errors. We applied HMM-FRAME in Targeted Metagenomics and a published metagenomic data set. The results showed that our tool can correct frameshifts in error-containing sequences, generate much longer alignments with significantly smaller E-values, and classify more sequences into their native families. Conclusions HMM-FRAME provides a complementary protein domain classification tool to conventional profile HMM-based methods for data sets containing frameshifts. Its current implementation is best used for small-scale metagenomic data sets. The source code of HMM-FRAME can be downloaded at and at

  1. Corrective camouflage in pediatric dermatology. (United States)

    Tedeschi, Aurora; Dall'Oglio, Federica; Micali, Giuseppe; Schwartz, Robert A; Janniger, Camila K


    Many dermatologic diseases, including vitiligo and other pigmentary disorders, vascular malformations, acne, and disfiguring scars from surgery or trauma, can be distressing to pediatric patients and can cause psychological alterations such as depression, loss of self-esteem, deterioration of quality of life, emotional distress, and, in some cases, body dysmorphic disorder. Corrective camouflage can help cover cutaneous unaesthetic disorders using a variety of water-resistant and light to very opaque products that provide effective and natural coverage. These products also can serve as concealers during medical treatment or after surgical procedures before healing is complete. Between May 2001 and July 2003, corrective camouflage was used on 15 children and adolescents (age range, 7-16 years; mean age, 14 years). The majority of patients were girls. Six patients had acne vulgaris; 4 had vitiligo; 2 had Becker nevus; and 1 each had striae distensae, allergic contact dermatitis, and postsurgical scarring. Parents of all patients were satisfied with the cosmetic cover results. We consider corrective makeup to be a well-received and valid adjunctive therapy for use during traditional long-term treatment and as a therapeutic alternative in patients in whom conventional therapy is ineffective.

  2. Quasar bolometric corrections: theoretical considerations

    CERN Document Server

    Nemmen, Rodrigo S


    Bolometric corrections based on the optical-to-ultraviolet continuum spectrum of quasars are widely used to quantify their radiative output, although such estimates are affected by a myriad of uncertainties, such as the generally unknown line-of-sight angle to the central engine. In order to shed light on these issues, we investigate the state-of-the-art models of Hubeny et al. that describe the continuum spectrum of thin accretion discs and include relativistic effects. We explore the bolometric corrections as a function of mass accretion rates, black hole masses and viewing angles, restricted to the parameter space expected for type-1 quasars. We find that a nonlinear relationship log L_bol=A + B log(lambda L_lambda) with B<=0.9 is favoured by the models and becomes tighter as the wavelength decreases. We calculate from the model the bolometric corrections corresponding to the wavelengths lambda = 1450A, 3000A and 5100A. In particular, for lambda=3000A we find A=9.24 +- 0.77 and B=0.81 +- 0.02. We demons...
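As a back-of-the-envelope illustration, the fitted relation log L_bol = A + B log(lambda L_lambda) can be applied directly. The sketch below uses the 3000 A coefficients quoted above (A = 9.24, B = 0.81, luminosities in erg/s); the input luminosity in the example is an invented value.

```python
import math

# Hedged sketch: apply the nonlinear bolometric-correction relation
# log L_bol = A + B log(lambda L_lambda) quoted in the abstract for
# lambda = 3000 A, with A = 9.24 and B = 0.81 (luminosities in erg/s).

def bolometric_luminosity(lam_L_lam, A=9.24, B=0.81):
    """Return L_bol in erg/s given the monochromatic luminosity lambda*L_lambda."""
    return 10.0 ** (A + B * math.log10(lam_L_lam))

# Example: a quasar with an (invented) lambda*L_lambda(3000 A) = 1e45 erg/s
L_bol = bolometric_luminosity(1e45)
```

Because B < 1, the bolometric correction L_bol/(lambda L_lambda) decreases toward higher monochromatic luminosities, which is the nonlinearity the abstract emphasizes.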

  3. Correction parameters in conventional dental radiography for dental implant

    Directory of Open Access Journals (Sweden)

    Barunawaty Yunus


    Full Text Available Background: Radiographic imaging, as a supportive diagnostic tool, is an essential component of treatment planning for dental implants. It helps the dentist assess the target implant area, building on many previous developments in radiographic imaging. With the progress of science and technology, and the increasing demand for easier and simpler treatment methods, modern radiographic diagnostics for dental implants are needed. In Makassar, however, and especially in the Faculty of Dentistry of Hasanuddin University, only conventional dental radiography is available. The researchers therefore sought to optimize the existing equipment to obtain corrected jaw parameters for accurate dental implant placement. Purpose: This study aimed to assess the difference in the radiographically measured size of the dental implant site before and after correction. Method: The study is analytical and observational with a cross-sectional design, using non-random sampling. The sample comprised 30 people, male and female, aged 20-50 years. The correction value was evaluated from the width, height, and thickness of the jaw, corrected with a metal ball using conventional dental radiography to assess accuracy. Data were analyzed with T-tests in SPSS 14 for Windows. Result: The T-test analysis gave significant values (p<0.05) for the width and height in the panoramic radiography technique, the width and height in the periapical radiography technique, and the thickness in the occlusal radiography technique before and after correction. Conclusion: There is a significant difference between the results of panoramic, periapical, and occlusal radiography before and after correction.

  4. Drift correction of the dissolved signal in single particle ICPMS. (United States)

    Cornelis, Geert; Rauch, Sebastien


    A method is presented in which drift, the random fluctuation of the signal intensity, is compensated for based on an estimate of the drift function obtained from a moving average. Single particle ICPMS (spICPMS) measurements of 10 and 60 nm Au NPs showed that drift reduces the accuracy of spICPMS analysis at the calibration stage and during calculation of the particle size distribution (PSD), but that the present method can restore both the average signal intensity and the signal distribution of particle-containing samples skewed by drift. Moreover, deconvolution, a method that models the signal distributions of dissolved signals, fails in some cases when standards and samples are affected by drift, but the present method was shown to restore accuracy here as well. Relatively high particle signals have to be removed prior to drift correction in this procedure, which was done using a 3 × sigma method; these signals are treated separately and added back afterwards. The method can also correct for flicker noise, which increases as signal intensity increases because of drift. Flicker correction improved accuracy in many cases, and when accurate results were obtained despite drift, the correction procedures did not reduce accuracy. The procedure may be useful for extracting results from experimental runs that would otherwise have to be repeated. Graphical Abstract A method is presented where a spICP-MS signal affected by drift (left) is corrected (right) by adjusting the local (moving) averages (green) and standard deviations (purple) to their respective values at a reference time (red). In combination with removal of particle events (blue) in the case of calibration standards, this method is shown to obtain particle size distributions where that would otherwise be impossible, even when the deconvolution method is used to discriminate dissolved and particle signals.
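A minimal sketch of the correction idea described above, assuming a simple moving-average model of the drift: particle events are removed with an iterative 3 × sigma cut, and the dissolved signal's local (moving) mean and standard deviation are rescaled to their values at a reference time. The window length, reference index and function names are illustrative choices, not the published procedure.

```python
import numpy as np

def remove_particle_events(signal, n_sigma=3.0, max_iter=20):
    """Iteratively flag points above mean + n_sigma*std as particle events."""
    keep = np.ones(signal.size, dtype=bool)
    for _ in range(max_iter):
        mu, sd = signal[keep].mean(), signal[keep].std()
        new_keep = signal <= mu + n_sigma * sd
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return keep

def drift_correct(signal, window=201, ref_index=None):
    """Rescale the local (moving) mean and std of a dissolved signal
    to their values at a reference time (illustrative sketch)."""
    if ref_index is None:
        ref_index = window  # default: just past the first full window
    kernel = np.ones(window) / window
    local_mean = np.convolve(signal, kernel, mode="same")
    local_var = np.convolve(signal ** 2, kernel, mode="same") - local_mean ** 2
    local_std = np.sqrt(np.clip(local_var, 1e-12, None))
    ref_mean, ref_std = local_mean[ref_index], local_std[ref_index]
    return (signal - local_mean) / local_std * ref_std + ref_mean
```

In a real workflow the flagged particle events would be corrected separately and added back, as the abstract describes.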

  5. An Accurate Heading Solution using MEMS-based Gyroscope and Magnetometer Integrated System (Preliminary Results) (United States)

    El-Diasty, M.


    An accurate heading solution is required for many applications, and it can be achieved with high-grade (high-cost) gyroscopes (gyros), which may not be suitable for such applications. Micro-Electro-Mechanical Systems (MEMS) is an emerging technology with the potential of providing a heading solution using a low-cost MEMS-based gyro. However, a MEMS-gyro-based heading solution drifts significantly over time. The heading can also be estimated with a MEMS-based magnetometer by measuring the horizontal components of the Earth's magnetic field. The MEMS-magnetometer-based heading solution does not drift over time, but it is contaminated by a high level of noise and may be disturbed by the presence of magnetic field sources such as metal objects. This paper proposes an accurate heading estimation procedure based on the integration of MEMS-based gyro and magnetometer measurements: gyro angular rates of change are estimated from the magnetometer measurements and then integrated with the measured gyro angular rates of change in a robust filter to estimate the heading. The proposed integration solution is implemented using two data sets; one was collected in static mode without magnetic disturbances and the second in kinematic mode with magnetic disturbances. The results showed that the proposed integrated heading solution provides an accurate, smooth and undisturbed solution when compared with the magnetometer-based and gyro-based heading solutions.
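The integration idea can be sketched with a simple complementary filter, assuming the magnetometer supplies a drift-free but noisy heading and the gyro a smooth but biased rate; the gain, sign convention and update rule here are illustrative stand-ins for the robust filter used in the paper.

```python
import math

# Hedged sketch of gyro/magnetometer heading fusion: the gyro rate is
# integrated for smoothness, and the estimate is pulled toward the
# magnetometer heading so the gyro bias cannot accumulate.
# The gain is an illustrative choice, not the paper's robust filter.

def magnetometer_heading(mx, my):
    """Heading (rad) from the horizontal magnetic-field components
    (sign convention is an assumption of this sketch)."""
    return math.atan2(-my, mx)

def fuse_heading(heading, gyro_rate, mag_heading, dt, gain=0.02):
    """One complementary-filter step: integrate gyro, pull toward magnetometer."""
    predicted = heading + gyro_rate * dt
    error = math.atan2(math.sin(mag_heading - predicted),
                       math.cos(mag_heading - predicted))  # wrap to [-pi, pi]
    return predicted + gain * error
```

With a small gain, magnetometer noise and short disturbances are averaged out while the long-term gyro drift is still removed, which mirrors the complementary error behaviour described in the abstract.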

  6. Accurate Evaluation of the Dispersion Energy in the Simulation of Gas Adsorption into Porous Zeolites. (United States)

    Fraccarollo, Alberto; Canti, Lorenzo; Marchese, Leonardo; Cossi, Maurizio


    The force fields used to simulate gas adsorption in porous materials are strongly dominated by the van der Waals (vdW) terms. Here we discuss the delicate problem of estimating these terms accurately, analyzing the effect of different models. To this end, we simulated the physisorption of CH4, CO2, and Ar into various Al-free microporous zeolites (ITQ-29, SSZ-13, and silicalite-1), comparing the theoretical results with accurate experimental isotherms. The vdW terms in the force fields were parametrized against the free gas densities and high-level quantum mechanical (QM) calculations, comparing different methods of evaluating the dispersion energies. In particular, MP2 and DFT with semiempirical corrections, with suitable basis sets, were chosen to approximate the best QM calculations; either Lennard-Jones or Morse expressions were used to include the vdW terms in the force fields. The comparison of the simulated and experimental isotherms revealed that a strong interplay exists between the definition of the dispersion energies and the functional form used in the force field; these results are fairly general and reproducible, at least for the systems considered here. On this basis, the reliability of different models can be discussed, and a recipe can be provided to obtain accurate simulated adsorption isotherms.
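The two vdW functional forms compared in the study can be sketched as follows; the well depth, minimum position and Morse width below are invented illustration values, chosen only so the two wells share the same depth and minimum.

```python
import math

# Hedged illustration of the two vdW functional forms named in the
# abstract. Parameters (eps, r_min, Morse width a) are invented; both
# wells are placed at the same depth and minimum so their different
# curvatures and long-range tails can be compared.

def lennard_jones(r, eps=0.2, r_min=3.8):
    """12-6 Lennard-Jones well, minimum -eps at r = r_min (kcal/mol, A)."""
    x = (r_min / r) ** 6
    return eps * (x * x - 2.0 * x)

def morse(r, eps=0.2, r_min=3.8, a=1.5):
    """Morse well with the same depth and minimum position."""
    e = math.exp(-a * (r - r_min))
    return eps * (e * e - 2.0 * e)
```

The Morse tail decays exponentially while the Lennard-Jones tail decays as r^-6, so the two forms distribute the same well depth very differently at long range, which is one reason the abstract finds a strong interplay between the dispersion-energy definition and the functional form.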

  7. A powerful test of independent assortment that determines genome-wide significance quickly and accurately. (United States)

    Stewart, W C L; Hager, V R


    In the analysis of DNA sequences on related individuals, most methods strive to incorporate as much information as possible, with little or no attention paid to the issue of statistical significance. For example, a modern workstation can easily handle the computations needed to perform a large-scale genome-wide inheritance-by-descent (IBD) scan, but accurate assessment of the significance of that scan is often hindered by inaccurate approximations and computationally intensive simulation. To address these issues, we developed gLOD, a test of co-segregation that, for large samples, models chromosome-specific IBD statistics as a collection of stationary Gaussian processes. With this simple model, the parametric bootstrap yields an accurate and rapid assessment of significance: the genome-wide corrected P-value. Furthermore, we show that (i) under the null hypothesis, the limiting distribution of the gLOD is the standard Gumbel distribution; (ii) our parametric bootstrap simulator is approximately 40 000 times faster than gene-dropping methods, and it is more powerful than methods that approximate the adjusted P-value; and (iii) the gLOD has the same statistical power as the widely used maximum Kong and Cox LOD. Thus, our approach gives researchers the ability to determine quickly and accurately the significance of most large-scale IBD scans, which may contain multiple traits, thousands of families and tens of thousands of DNA sequences.
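The parametric-bootstrap idea can be sketched as follows, assuming each chromosome's IBD statistic is modelled as a stationary Gaussian AR(1) process (a simple stand-in for the gLOD's Gaussian-process model); the chromosome lengths, correlation and replicate count are illustrative.

```python
import numpy as np

# Hedged sketch: simulate the null distribution of the genome-wide
# maximum of chromosome-wise stationary Gaussian processes, and report
# the fraction of simulated maxima exceeding the observed statistic
# (the genome-wide corrected P-value). AR(1) is an illustrative process.

def simulate_genome_max(chrom_lengths, rho, rng):
    """Maximum over chromosomes of stationary AR(1) Gaussian paths."""
    best = -np.inf
    for n in chrom_lengths:
        z = np.empty(n)
        z[0] = rng.standard_normal()
        innov = rng.standard_normal(n - 1) * np.sqrt(1.0 - rho ** 2)
        for i in range(1, n):
            z[i] = rho * z[i - 1] + innov[i - 1]
        best = max(best, z.max())
    return best

def genomewide_corrected_p(observed_max, chrom_lengths, rho=0.9,
                           n_rep=500, seed=1):
    """Corrected P-value with the usual +1 bootstrap adjustment."""
    rng = np.random.default_rng(seed)
    maxima = [simulate_genome_max(chrom_lengths, rho, rng)
              for _ in range(n_rep)]
    return (1 + sum(m >= observed_max for m in maxima)) / (n_rep + 1)
```

For large scans the distribution of such maxima approaches a Gumbel law, which is the limiting behaviour the abstract states for the gLOD.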


    Directory of Open Access Journals (Sweden)

    Y. F. Hsin


    Full Text Available The accuracy of the Fourier transform (FT), advantageous for aperiodic lattice (AL) design, is significantly improved for strongly scattering periodic lattices (PLs) and ALs. The approach is to inversely obtain corrected parameters for the FT from an accurate transfer matrix method. We establish a corrected FT that improves the spectral inaccuracy for strongly scattering PLs by redefining wave numbers and reflective intensity. We further correct the FT for strongly scattering ALs by applying the improvements developed for strongly scattering PLs and then making detailed wave-number adjustments in the main-band spectral region. Silicon lattice simulations are presented.
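The transfer-matrix reference that the corrected FT is calibrated against can be sketched for a one-dimensional stack at normal incidence; the layer indices and thicknesses below are illustrative, not the silicon lattices of the paper.

```python
import numpy as np

# Hedged sketch of the transfer matrix method (normal incidence):
# each homogeneous layer contributes a 2x2 characteristic matrix, and
# the stack reflectance follows from the matrix product.

def layer_matrix(n, d, wavelength):
    """Characteristic matrix of one homogeneous layer of index n, thickness d."""
    phi = 2 * np.pi * n * d / wavelength
    return np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                     [1j * n * np.sin(phi), np.cos(phi)]])

def reflectance(layers, wavelength, n_in=1.0, n_out=1.0):
    """layers: list of (index, thickness) pairs, thickness in wavelength units."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, wavelength)
    (A, B), (C, D) = M
    r = (A * n_in + B * n_in * n_out - C - D * n_out) / \
        (A * n_in + B * n_in * n_out + C + D * n_out)
    return abs(r) ** 2
```

For a strongly scattering quarter-wave stack the reflectance approaches unity, the regime in which the plain FT becomes inaccurate and the corrected parameters are needed.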

  9. Aspects of probe correction for odd-order probes in spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Laitinen, Tommi; Pivnenko, Sergey N.; Breinbjerg, Olav


    Probe correction aspects for the spherical near-field antenna measurements are investigated. First, the spherical mode analyses of the radiated fields of several antennas are performed. It is shown that many common antennas are essentially so-called odd-order antennas. Second, the errors caused...... by the use of the first-order probe correction [1] for a rectangular waveguide probe, that is an odd-order antenna, are demonstrated. Third, a recently developed probe correction technique for odd-order probes is applied for the rectangular waveguide probe and shown to provide accurate results....

  10. Realization of Quadrature Signal Generator Using Accurate Magnitude Integrator

    DEFF Research Database (Denmark)

    Xin, Zhen; Yoon, Changwoo; Zhao, Rende


    -signal parameters, especially when a fast response is required for usages such as grid synchronization. As a result, the parameter design of the SOGI-QSG becomes complicated. Theoretical analysis shows that this is caused by the inaccurate magnitude-integration characteristic of the SOGI-QSG. To solve this problem......, an Accurate-Magnitude-Integrator based QSG (AMI-QSG) is proposed. The AMI has an accurate magnitude-integration characteristic for the sinusoidal signal, which gives the AMI-QSG a more accurate First-Order-System (FOS) magnitude characteristic than the SOGI-QSG. The parameter design process...
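For context, a conventional SOGI-QSG (the structure whose magnitude-integration behaviour is analysed above) can be sketched with a forward-Euler discretization; the gain k, grid frequency and step size are illustrative, and the proposed AMI-QSG itself is not reproduced here.

```python
import math

# Hedged sketch of a standard second-order generalized integrator
# quadrature signal generator (SOGI-QSG), forward-Euler discretized:
#   dv/dt  = omega * (k*(u - v) - qv)   (in-phase output v tracks u)
#   dqv/dt = omega * v                  (qv settles 90 deg behind v)

def sogi_qsg(u_samples, omega, dt, k=1.4142):
    """Return (in_phase, quadrature) output sequences for input samples."""
    v, qv = 0.0, 0.0
    in_phase, quadrature = [], []
    for u in u_samples:
        dv = omega * (k * (u - v) - qv)
        dqv = omega * v
        v += dv * dt
        qv += dqv * dt
        in_phase.append(v)
        quadrature.append(qv)
    return in_phase, quadrature
```

At the tuned frequency the two outputs settle to equal-amplitude, 90-degree-shifted sinusoids; the transient toward that steady state is what the magnitude-integration analysis above characterizes.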

  11. Fabricating an Accurate Implant Master Cast: A Technique Report. (United States)

    Balshi, Thomas J; Wolfinger, Glenn J; Alfano, Stephen G; Cacovean, Jeannine N; Balshi, Stephen F


    The technique for fabricating an accurate implant master cast following the 12-week healing period after Teeth in a Day® dental implant surgery is detailed. The clinical, functional, and esthetic details captured during the final master impression are vital to creating an accurate master cast. This technique uses the properties of the all-acrylic resin interim prosthesis to capture these details. This impression captures the relationship between the remodeled soft tissue and the interim prosthesis. This provides the laboratory technician with an accurate orientation of the implant replicas in the master cast with which a passive fitting restoration can be fabricated.

  12. Causal MRI reconstruction via Kalman prediction and compressed sensing correction. (United States)

    Majumdar, Angshul


    This technical note addresses the problem of causal online reconstruction of dynamic MRI, i.e. given the reconstructed frames till the previous time instant, we reconstruct the frame at the current instant. Our work follows a prediction-correction framework. Given the previous frames, the current frame is predicted based on a Kalman estimate. The difference between the estimate and the current frame is then corrected based on the k-space samples of the current frame; this reconstruction assumes that the difference is sparse. The method is compared against prior Kalman filtering based techniques and Compressed Sensing based techniques. Experimental results show that the proposed method is more accurate than these and considerably faster.
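The prediction-correction framework can be sketched as follows, assuming a pixelwise random-walk state model for the Kalman step and a single soft-thresholded back-projection in place of a full compressed-sensing solver; the function names and parameters are hypothetical.

```python
import numpy as np

# Hedged sketch of the prediction-correction idea: a pixelwise
# random-walk Kalman filter predicts the current frame, and the sparse
# prediction residual is estimated from undersampled Fourier (k-space)
# samples of the current frame. One soft-thresholded back-projection
# stands in for the full compressed-sensing reconstruction.

def kalman_predict(prev_est, prev_var, process_var=0.01):
    """Random-walk state model: the prediction is the previous estimate."""
    return prev_est, prev_var + process_var

def cs_correct(prediction, kspace_samples, mask, threshold=0.05):
    """Estimate the sparse residual from the masked k-space mismatch."""
    residual_k = np.where(mask, kspace_samples - np.fft.fft2(prediction), 0)
    residual = np.real(np.fft.ifft2(residual_k))
    # soft-threshold: keep only significant (sparse) frame-to-frame changes
    sparse = np.sign(residual) * np.maximum(np.abs(residual) - threshold, 0)
    return prediction + sparse
```

Because only the difference between prediction and current frame must be recovered, far fewer k-space samples suffice than for reconstructing each frame from scratch, which is what makes the causal online scheme fast.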

  13. Nonlinear hydrodynamic corrections to supersonic F-KPP wave fronts (United States)

    Antoine, C.; Dumazer, G.; Nowakowski, B.; Lemarchand, A.


    We study the hydrodynamic corrections to the dynamics and structure of an exothermic chemical wave front of Fisher-Kolmogorov-Petrovskii-Piskunov (F-KPP) type which travels in a one-dimensional gaseous medium. We show in particular that its long time dynamics, cut-off sensitivity and leading edge behavior are almost entirely controlled by the hydrodynamic front speed correction δUh which characterizes the pushed nature of the front. Reducing the problem to an effective comoving heterogeneous F-KPP equation, we determine two analytical expressions for δUh: an accurate one, derived from a variational method, and an approximate one, from which one can assess the δUh sensitivity to the shear viscosity and heat conductivity of the fluid of interest.

  14. Correction factors for gravimetric measurement of peritumoural oedema in man. (United States)

    Bell, B A; Smith, M A; Tocher, J L; Miller, J D


    The water content of samples of normal and oedematous brain in lobectomy specimens from 16 patients with cerebral tumours has been measured by gravimetry and by wet and dry weighing. Uncorrected gravimetry underestimated the water content of oedematous peritumoural cortex by a mean of 1.17%, and of oedematous peritumoural white matter by a mean of 2.52%. Gravimetric correction equations calculated theoretically and from an animal model of serum infusion white matter oedema overestimate peritumoural white matter oedema in man, and empirical gravimetric error correction factors for oedematous peritumoural human white matter and cortex have therefore been derived. These enable gravimetry to be used to accurately determine peritumoural oedema in man.

  15. Automatic correction of hand pointing in stereoscopic depth. (United States)

    Song, Yalin; Sun, Yaoru; Zeng, Jinhua; Wang, Fang


    In order to examine whether stereoscopic depth information could drive fast automatic correction of hand pointing, an experiment was designed in a 3D visual environment in which participants were asked to point to a target at different stereoscopic depths as quickly and accurately as possible within a limited time window (≤300 ms). The experiment consisted of two tasks: "depthGO" in which participants were asked to point to the new target position if the target jumped, and "depthSTOP" in which participants were instructed to abort their ongoing movements after the target jumped. The depth jump was designed to occur in 20% of the trials in both tasks. Results showed that fast automatic correction of hand movements could be driven by stereoscopic depth to occur in as early as 190 ms.

  16. Corrected Kondo temperature beyond the conventional Kondo scaling limit. (United States)

    Li, ZhenHua; Wei, JianHua; Zheng, Xiao; Yan, YiJing; Luo, Hong-Gang


    In Kondo systems such as a magnetic impurity screened by the conduction electrons in a metal host, as well as quantum dots connected to leads, the low-energy behaviors have a universal dependence on the [Formula: see text] or [Formula: see text], where [Formula: see text] is the conventional Kondo temperature. However, it was shown that this scaling behavior is only valid at low energy; this is called the Kondo scaling limit. Here we explore the extension of the scaling parameter range by introducing the corrected Kondo temperature T_K, which may depend on the temperature and bias, as well as on other external parameters. We define the corrected Kondo temperature by scaling the local density of states near the Fermi level, obtained by the accurate hierarchy-of-equations-of-motion approach at finite temperature and finite bias, and thus obtain a phenomenological expression for the corrected Kondo temperature. By using the corrected Kondo temperature as a characteristic energy scale, the conductance of the quantum dot can be well scaled over a wide parameter range, even two orders beyond the conventional scaling parameter range. Our work indicates that the Kondo scaling, although dominated by the conventional Kondo temperature at low energies of the Kondo system, can be extended to a higher energy regime, which is useful for analyzing the physics of Kondo transport in non-equilibrium or high-temperature cases.

  17. Children's perception of their synthetically corrected speech production. (United States)

    Strömbergsson, Sofia; Wengelin, Asa; House, David


    We explore children's perception of their own speech - in its online form, in its recorded form, and in synthetically modified forms. Children with phonological disorder (PD) and children with typical speech and language development (TD) performed tasks of evaluating accuracy of the different types of speech stimuli, either immediately after having produced the utterance or after a delay. In addition, they performed a task designed to assess their ability to detect synthetic modification. Both groups showed high performance in tasks involving evaluation of other children's speech, whereas in tasks of evaluating one's own speech, the children with PD were less accurate than their TD peers. The children with PD were less sensitive to misproductions in immediate conjunction with their production of an utterance, and more accurate after a delay. Within-category modification often passed undetected, indicating a satisfactory quality of the generated speech. Potential clinical benefits of using corrective re-synthesis are discussed.

  18. Relativistic and QED corrections for the beryllium atom. (United States)

    Pachucki, Krzysztof; Komasa, Jacek


    Complete relativistic and quantum electrodynamics corrections of order alpha^2 Ry and alpha^3 Ry are calculated for the ground state of the beryllium atom and its positive ion. A basis set of correlated Gaussian functions is used, with exponents optimized against nonrelativistic binding energies. The results for the Bethe logarithms, ln k_0(Be) = 5.750 34(3) and ln k_0(Be+) = 5.751 67(3), demonstrate the availability of high-precision theoretical predictions for energy levels of the beryllium atom and light ions. Our recommended value of the ionization potential, 75 192.514(80) cm^-1, agrees with the equally accurate available experimental values.

  19. β—Correction Spectrophotometric Determination of Cadmium with Cadion

    Institute of Scientific and Technical Information of China (English)



    Cadmium has been determined by β-correction spectrophotometry with cadion, p-nitrobenzenediazoaminoazobenzene, and a non-ionic surfactant, Triton X-100. The real absorbance of the Cd-cadion chelate in the colored solution can be accurately determined, and the complex ratio of cadion to Cd(II) has been found to be 2. Beer's law is obeyed over the concentration range of 0-0.20 mg/L cadmium and the detection limit for cadmium is only 0.003 mg/L. Satisfactory experimental results are presented with respect to the determination of trace cadmium in wastewaters.
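The quantitative step, Beer's-law linearity over the stated 0-0.20 mg/L range, can be sketched with an ordinary least-squares calibration; the calibration points below are invented illustration values.

```python
# Hedged sketch: once the real absorbance of the Cd-cadion chelate is
# known, Beer's law gives a linear calibration A = slope*c + intercept
# over the stated 0-0.20 mg/L range. Calibration data here are invented.

def fit_beer_law(concentrations, absorbances):
    """Least-squares slope/intercept of A = slope*c + intercept."""
    n = len(concentrations)
    cbar = sum(concentrations) / n
    abar = sum(absorbances) / n
    slope = (sum((c - cbar) * (a - abar)
                 for c, a in zip(concentrations, absorbances))
             / sum((c - cbar) ** 2 for c in concentrations))
    return slope, abar - slope * cbar

def concentration(absorbance, slope, intercept):
    """Invert the calibration line for an unknown sample."""
    return (absorbance - intercept) / slope
```

The β-correction itself supplies the "real absorbance" fed into such a calibration by removing the spectral interference of excess reagent; that correction step is not reproduced here.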

  20. Adaptive dispersion formula for index interpolation and chromatic aberration correction. (United States)

    Li, Chia-Ling; Sasián, José


    This paper defines and discusses a glass dispersion formula that is adaptive. The formula exhibits superior convergence with a minimum number of coefficients. Using this formula we rationalize the correction of chromatic aberration per spectrum order. We compare the formula with the Sellmeier and Buchdahl formulas for glasses in the Schott catalogue. The six-coefficient adaptive formula is found to be the most accurate, with an average maximum index-of-refraction error of 2.91 × 10^-6 within the visible band.
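For comparison, the three-term Sellmeier formula against which the adaptive formula is benchmarked can be sketched as follows, using the widely published coefficients for Schott N-BK7 (wavelength in micrometres).

```python
import math

# Hedged sketch of the three-term Sellmeier dispersion formula:
#   n^2(lambda) = 1 + sum_i B_i lambda^2 / (lambda^2 - C_i)
# Coefficients below are the widely published ones for Schott N-BK7.

NBK7_B = (1.03961212, 0.231792344, 1.01046945)
NBK7_C = (0.00600069867, 0.0200179144, 103.560653)

def sellmeier_index(lam_um, B=NBK7_B, C=NBK7_C):
    """Refractive index n(lambda) from the Sellmeier formula (lambda in um)."""
    lam2 = lam_um ** 2
    n2 = 1.0 + sum(b * lam2 / (lam2 - c) for b, c in zip(B, C))
    return math.sqrt(n2)
```

An adaptive formula of the kind described above would be fitted per glass to reduce the residual interpolation error below that of a fixed functional form such as this one.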

  1. Accurate Sliding-Mode Control System Modeling for Buck Converters

    DEFF Research Database (Denmark)

    Høyerby, Mikkel Christian Wendelboe; Andersen, Michael Andreas E.


    This paper shows that classical sliding mode theory fails to correctly predict the output impedance of the highly useful sliding mode PID compensated buck converter. The reason for this is identified as the assumption of the sliding variable being held at zero during sliding mode, effectively...... approach also predicts the self-oscillating switching action of the sliding-mode control system correctly. Analytical findings are verified by simulation as well as experimentally in a 10-30V/3A buck converter....

  2. Highly Accurate Sensor for High-Purity Oxygen Determination Project (United States)

    National Aeronautics and Space Administration — In this STTR effort, Los Gatos Research (LGR) and the University of Wisconsin (UW) propose to develop a highly-accurate sensor for high-purity oxygen determination....

  3. Multi-objective optimization of inverse planning for accurate radiotherapy

    Institute of Scientific and Technical Information of China (English)

    曹瑞芬; 吴宜灿; 裴曦; 景佳; 李国丽; 程梦云; 李贵; 胡丽琴


    The multi-objective optimization of inverse planning based on the Pareto solution set, according to the multi-objective character of inverse planning in accurate radiotherapy, was studied in this paper. Firstly, the clinical requirements of a treatment pl

  4. Controlling Hay Fever Symptoms with Accurate Pollen Counts (United States)

    This article has been reviewed by Thanai ... rhinitis known as hay fever is caused by pollen carried in the air during different times of ...

  5. Digital system accurately controls velocity of electromechanical drive (United States)

    Nichols, G. B.


    Digital circuit accurately regulates electromechanical drive mechanism velocity. The gain and phase characteristics of digital circuits are relatively unimportant. Control accuracy depends only on the stability of the input signal frequency.


    Institute of Scientific and Technical Information of China (English)


    In this paper, a second-order linear differential equation is considered, and an accurate method for estimating its characteristic exponent is presented. Finally, we give some examples to verify the feasibility of our result.

  7. Mass spectrometry based protein identification with accurate statistical significance assignment


    Alves, Gelio; Yu, Yi-Kuo


    Motivation: Assigning statistical significance accurately has become increasingly important as meta data of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of meta data at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry based proteomics, even though accurate statistics for peptide identification can now be ach...

  8. High-Accurate, Physics-Based Wake Simulation Techniques (United States)


    Final Technical Report (1/27/2015; reporting period 02/25/10 - 08/31/14), contract N00014-10-C-0190, Andrew Shelton. A code was developed that utilizes the discontinuous Galerkin method to solve the Euler equations while utilizing a modal artificial viscosity sensor. The aim is to demonstrate a physically accurate problem as well as to show that the sensor can account for artificial viscosity where needed but not overload the problem and "wash out" ...

  9. Taylor spatial frame-software-controlled fixator for deformity correction-the early Indian experience

    Directory of Open Access Journals (Sweden)

    Chaudhary Milind


    Full Text Available Background: Complex deformity correction and fracture treatment with the Ilizarov method need extensive preoperative analysis and laborious postoperative fixator alterations, which are error-prone. We report our initial experience in treating the first 22 patients with fractures, complex deformities and shortening using the software-controlled Taylor spatial frame (TSF) external fixator, for its ease of use and accuracy in achieving fracture reduction and complex deformity correction. Settings and Design: The struts of the TSF fixator have multiplane hinges at both ends, and the six struts allow correction in all six axes. Hence the same struts act to correct angulation, translation or rotation. With a single construct assembled during surgery, all the desired axis corrections can be performed without a change of the montage, as is needed with the Ilizarov fixator. Materials and Methods: Twenty-seven limb segments were operated on with the TSF fixator. There were 23 tibiae, two femora, one knee joint and one ankle joint. Seven patients had comminuted fractures. Ten patients who had 13 deformed segments achieved full correction. Eight patients had lengthening in 10 tibiae (five of these also had simultaneous correction of deformities). One patient each had correction of knee and ankle deformities. Accurate reduction of fractures and correction of deformities and length could be achieved in all of our patients with minimal postoperative fixator alterations as compared to the Ilizarov system. X-ray visualization of the osteotomy or lengthening site was hindered by the six crossing struts, and the added bulk of the fixator rings made positioning in bed and walking slightly more difficult as compared to the Ilizarov fixator. Conclusions: The TSF external fixator allows accurate fracture reduction and deformity correction without tedious analysis and postoperative frame alterations. The high cost of the fixator is a deterrent.
The need for an internet

  10. Matrix Models and Gravitational Corrections

    CERN Document Server

    Dijkgraaf, R; Temurhan, M; Dijkgraaf, Robbert; Sinkovics, Annamaria; Temurhan, Mine


    We provide evidence of the relation between supersymmetric gauge theories and matrix models beyond the planar limit. We compute gravitational R^2 couplings in gauge theories perturbatively, by summing genus one matrix model diagrams. These diagrams give the leading 1/N^2 corrections in the large N limit of the matrix model and can be related to twist field correlators in a collective conformal field theory. In the case of softly broken SU(N) N=2 super Yang-Mills theories, we find that these exact solutions of the matrix models agree with results obtained by topological field theory methods.

  11. A Quantum Correction To Chaos


    A. Fitzpatrick; Kaplan, Jared


    We use results on Virasoro conformal blocks to study chaotic dynamics in CFT_2 at large central charge c. The Lyapunov exponent λ_L, which is a diagnostic for the early onset of chaos, receives 1/c corrections that may be interpreted as λ_L = (2π/β)(1 + 12/c). However, out-of-time-order correlators receive other equally important 1/c-suppressed contributions that do not have such a simple interpretation. We revisit the proof ...

  12. Holographic Thermalization with Weyl Corrections

    CERN Document Server

    Dey, Anshuman; Sarkar, Tapobrata


    We consider holographic thermalization in the presence of a Weyl correction in five dimensional AdS space. We numerically analyze the time dependence of the two point correlation functions and the expectation values of rectangular Wilson loops in the boundary field theory. The subtle interplay between the Weyl coupling constant and the chemical potential is studied in detail. An outcome of our analysis is the appearance of a swallow tail behaviour in the thermalization curve, and we give evidence that this might indicate distinct physical situations relating to different length scales in the problem.

  13. Correct Linearization of Einstein's Equations

    Directory of Open Access Journals (Sweden)

    Rabounski D.


    Full Text Available Regularly, Einstein's equations can be reduced to a wave form (linearly dependent on the second derivatives of the space metric) in the absence of gravitation, space rotation and Christoffel's symbols. As shown here, the origin of the problem is that one uses the general covariant theory of measurement. Here the wave form of Einstein's equations is obtained in terms of Zelmanov's chronometric invariants (physically observable projections on the observer's time line and spatial section). The obtained equations depend solely on the second derivatives, even if gravitation, space rotation and Christoffel's symbols are present. The correct linearization proves that the Einstein equations are completely compatible with weak waves of the metric.

  14. Heisenberg coupling constant predicted for molecular magnets with pairwise spin-contamination correction

    Energy Technology Data Exchange (ETDEWEB)

    Masunov, Artëm E., E-mail: [NanoScience Technology Center, Department of Chemistry, and Department of Physics, University of Central Florida, Orlando, FL 32826 (United States); Photochemistry Center RAS, ul. Novatorov 7a, Moscow 119421 (Russian Federation); Gangopadhyay, Shruba [Department of Physics, University of California, Davis, CA 95616 (United States); IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95120 (United States)


    A new method to eliminate spin contamination in broken symmetry density functional theory (BS DFT) calculations is introduced. Unlike the conventional spin-purification correction, this method is based on canonical natural orbitals (NOs) for each high/low spin coupled electron pair. We derive an expression to extract the energy of the pure singlet state in terms of the energy of the BS DFT solution, the occupation number of the bonding NO, and the energy of the higher spin state built on these bonding and antibonding NOs (not the self-consistent Kohn–Sham orbitals of the high spin state). Compared to the other spin-contamination correction schemes, the spin correction is applied to each correlated electron pair individually. We investigate two binuclear Mn(IV) molecular magnets using this pairwise correction. While one of the molecules is described by magnetic orbitals strongly localized on the metal centers, and its spin gap is accurately predicted by the Noodleman and Yamaguchi schemes, for the other the gap is predicted poorly by these schemes due to strong delocalization of the magnetic orbitals onto the ligands. We show our new correction to yield more accurate results in both cases. - Highlights: • Magnetic orbitals obtained for high and low spin states are not related. • Spin-purification correction becomes inaccurate for delocalized magnetic orbitals. • We use the natural orbitals of the broken symmetry state to build the high spin state. • This new correction is made separately for each electron pair. • Our spin-purification correction is more accurate for delocalized magnetic orbitals.

  15. Simulation of Kelvin-Helmholtz Instability with Flux-Corrected Transport Method

    Institute of Scientific and Technical Information of China (English)

    WANG Li-Feng; YE Wen-Hua; FAN Zheng-Feng; LI Ying-Jun


    The sixth-order accurate phase error flux-corrected transport numerical algorithm is introduced and used to simulate the Kelvin-Helmholtz instability. Linear growth rates of the simulation agree with the linear theory of the Kelvin-Helmholtz instability, indicating the validity and accuracy of this simulation method. The method also captures the deformation of the instability interface well.

  16. Corrections for shear and rotatory inertia on flexural vibrations of beams

    NARCIS (Netherlands)

    Nederveen, C.J.; Schwarzl, F.R.


    Different correction formulae for the influence of shear and rotatory inertia on flexural vibrations of freely supported beams are compared with the exact solution. It appears that in most cases a simple formula is sufficient because of the appearance of a constant which is not accurately known, viz

  17. On the Effects of Error Correction Strategies on the Grammatical Accuracy of the Iranian English Learners (United States)

    Aliakbari, Mohammad; Toni, Arman


    Writing, as a productive skill, requires an accurate in-depth knowledge of the grammar system, language form and sentence structure. The emphasis on accuracy is justified in the sense that it can lead to the production of structurally correct instances of second language, and to prevent inaccuracy that may result in the production of structurally…

  18. Salting-out effects by pressure-corrected 3D-RISM (United States)

    Misin, Maksim; Vainikka, Petteri A.; Fedorov, Maxim V.; Palmer, David S.


    We demonstrate that using a pressure corrected three-dimensional reference interaction site model one can accurately predict salting-out (Setschenow's) constants for a wide range of organic compounds in aqueous solutions of NaCl. The approach, based on classical molecular force fields, offers an alternative to more heavily parametrized methods.


    Institute of Scientific and Technical Information of China (English)

    黄磊; 刘建业; 曾庆化


    Traditional coning algorithms are based on the first-order coning correction reference model. Usually they reduce the algorithm error of the coning axis (z) by increasing the number of samples in one iteration interval. But the increase of sample numbers requires faster output rates of the sensors. Therefore, the algorithms are often limited in practical use. Moreover, the noncommutativity error of rotation usually exists on all three axes, and the increase of sample numbers has little positive effect on reducing the algorithm errors of the orthogonal axes (x, y). Considering that the errors of the orthogonal axes cannot be neglected in high-precision applications, a coning algorithm with an additional second-order coning correction term is developed to further improve the performance of the coning algorithm. Compared with the traditional algorithms, the new second-order coning algorithm can effectively reduce the algorithm error without increasing the sample numbers. Theoretical analyses validate that in a coning environment with low frequency, the new algorithm has better performance than the traditional time-series and frequency-series coning algorithms, while in a maneuver environment the new algorithm has the same order of accuracy as the traditional time-series and frequency-series algorithms. Finally, the practical feasibility of the new coning algorithm is demonstrated by digital simulations and practical turntable tests.
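    The classical two-sample coning correction that the abstract builds on can be sketched as follows. The 2/3 coefficient is the standard two-sample value from the inertial-navigation literature; the function name is ours, and the paper's additional second-order correction term is not reproduced here:

```python
import numpy as np

def coning_two_sample(dtheta1, dtheta2):
    """Classic two-sample coning algorithm (illustrative sketch).

    dtheta1, dtheta2: successive gyro angular-increment vectors (rad)
    within one iteration interval. The rotation vector over the interval
    is their sum plus a noncommutativity (coning) correction term built
    from their cross product; for parallel increments the correction
    vanishes, as it should.
    """
    dtheta1 = np.asarray(dtheta1, dtype=float)
    dtheta2 = np.asarray(dtheta2, dtype=float)
    return dtheta1 + dtheta2 + (2.0 / 3.0) * np.cross(dtheta1, dtheta2)
```

For parallel angular increments the cross-product term is zero and the result reduces to simple summation, which is why pure coning motion (increments rotating in a cone) is the stressing case these algorithms are designed for.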

  20. Language Trajectory through Corrective Feedback

    Directory of Open Access Journals (Sweden)

    S. Saber Alavi


    Full Text Available This quasi-experimental study was designed to investigate the effects of corrective feedback on SLA/EFL to determine the potential benefits of two different corrective feedback techniques, namely recasts and elicitation. The research hypotheses were: (1) learners who are exposed to an interactive focused task that requires CR will benefit more than those who are exposed to communicative activities only; (2) elicitation will be more effective than recasts in leading to L2 development. Three intensive EFL classes in a language center in Songkhla province, Thailand were selected to participate in the study. Based on the study design, two classes were assigned to the treatment conditions (elicitation group and recasts group) and the third was used as a control group. The treatment took place over a period of 9 meetings focusing on teaching the third person singular -s morpheme and the provision of CF where it was necessary. The participants' knowledge of the intended syntactic point was tested before treatment and post-tested after receiving the treatment. A multiple-choice and focused-cloze reading grammar test was used in the pre-test and the post-test to evaluate the effects of the treatments on the learners' acquisition of the third person singular morpheme. This classroom-based study showed that the two treatment groups benefited from CF strategies, but according to the study, the elicitation group outperformed the recast one.

  1. Simplified correction of g-value measurements

    DEFF Research Database (Denmark)

    Duer, Karsten


    A double glazed unit (Ipasol Natura 66/34) has been investigated in the Danish experimental setup METSET. The corrections of the experimental data are very important for the investigated sample as it shows significant spectral selectivity. In (Duer, 1998) and in (Platzer, 1998) the corrections have been carried out using a detailed physical model based on ISO9050 and prEN410, but using polarized data for non-normal incidence. This model is only valid for plane, clear glazings and is therefore not suited for corrections of measurements performed on complex glazings. To investigate a more general correction procedure, the results from the measurements on the Interpane DGU have been corrected using the principle outlined in (Rosenfeld, 1996). This correction procedure is more general as corrections can be carried out without a correct physical model of the investigated glazing. On the other hand...

  2. Juvenile Correctional Institutions Library Services: A Bibliography. (United States)

    McAlister, Annette M.

    This bibliography lists citations for 14 articles, books, and reports concerned with library services in juvenile correctional institutions. A second section lists 21 additional materials on adult correctional libraries which also contain information relevant to the juvenile library. (KP)

  3. Correcting for telluric absorption: Methods, case studies, and release of the TelFit code

    Energy Technology Data Exchange (ETDEWEB)

    Gullikson, Kevin; Kraus, Adam [Department of Astronomy, University of Texas, 2515 Speedway, Stop C1400, Austin, TX 78712 (United States); Dodson-Robinson, Sarah [Department of Physics and Astronomy, 217 Sharp Lab, Newark, DE 19716 (United States)


    Ground-based astronomical spectra are contaminated by the Earth's atmosphere to varying degrees in all spectral regions. We present a Python code that can accurately fit a model to the telluric absorption spectrum present in astronomical data, with residuals of ∼3%-5% of the continuum for moderately strong lines. We demonstrate the quality of the correction by fitting the telluric spectrum in a nearly featureless A0V star, HIP 20264, as well as to a series of dwarf M star spectra near the 819 nm sodium doublet. We directly compare the results to an empirical telluric correction of HIP 20264 and find that our model-fitting procedure is at least as good and sometimes more accurate. The telluric correction code, which we make freely available to the astronomical community, can be used as a replacement for telluric standard star observations for many purposes.
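    The last step of any model-based telluric correction, dividing the observed spectrum by a fitted transmission model, can be sketched as follows. This is only an illustration of the idea with a single optical-depth scale factor chosen by least scatter; TelFit itself fits a full atmospheric model, and the function and variable names here are ours:

```python
import numpy as np

def telluric_correct(observed, telluric_model):
    """Divide out a fitted telluric transmission spectrum (sketch).

    observed: observed flux on some wavelength grid.
    telluric_model: model transmission in (0, 1] on the same grid.
    Scales the model's optical depth by a single factor chosen so that
    the corrected spectrum is as flat as possible, then divides it out.
    """
    # Convert transmission to optical depth: T = exp(-tau).
    tau = -np.log(np.clip(telluric_model, 1e-6, 1.0))
    best_s, best_err = 1.0, np.inf
    for s in np.linspace(0.5, 2.0, 151):
        resid = observed / np.exp(-s * tau)
        err = np.std(resid)  # flattest corrected spectrum wins
        if err < best_err:
            best_s, best_err = s, err
    return observed / np.exp(-best_s * tau)
```

A real pipeline fits water vapour, pressure, temperature, and instrument resolution rather than one scale factor, but the divide-out step at the end has this shape.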

  4. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors (United States)

    Nocera, A.; Alvarez, G.


    Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. This paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper then studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.
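    A minimal sketch of the Krylov-space idea, stripped of all DMRG machinery: approximate the correction vector x = (zI − H)⁻¹ b in a small Lanczos-built Krylov space of a symmetric matrix H, here a dense stand-in for the DMRG effective Hamiltonian. All names and the dense-matrix setting are our assumptions:

```python
import numpy as np

def correction_vector_krylov(H, b, z, m=30):
    """Approximate x = (z*I - H)^{-1} b in an m-dimensional Krylov space.

    H: real symmetric matrix (stand-in for the effective Hamiltonian).
    b: right-hand side, playing the role of A|psi0>.
    z: complex frequency, e.g. z = E0 + omega + i*eta.
    """
    n = len(b)
    m = min(m, n)
    V = np.zeros((n, m))          # Lanczos basis of the Krylov space
    alpha = np.zeros(m)
    beta = np.zeros(m)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = H @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j + 1 < m:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:   # invariant subspace found: stop early
                m = j + 1
                break
            V[:, j + 1] = w / beta[j]
    # Tridiagonal projection T = V^T H V, solved in the small space.
    T = np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    e1 = np.zeros(m)
    e1[0] = np.linalg.norm(b)
    y = np.linalg.solve(z * np.eye(m) - T, e1 + 0j)
    return V[:, :m] @ y
```

The spectral function then follows from -Im⟨b|x⟩/π. When m equals the full dimension, the Krylov solution coincides with the exact solve; the practical gain is that a modest m already suffices.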

  5. Nonexposure Accurate Location K-Anonymity Algorithm in LBS

    Directory of Open Access Journals (Sweden)

    Jinying Jia


    Full Text Available This paper tackles location privacy protection in current location-based services (LBS), where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user’s accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existent cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user’s accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas which were reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existent cloaking algorithms, need not have all the users reporting their locations all the time, and can generate smaller ASRs.

  6. Nonexposure accurate location K-anonymity algorithm in LBS. (United States)

    Jia, Jinying; Zhang, Fengli


    This paper tackles location privacy protection in current location-based services (LBS) where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existent cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas which were reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existent cloaking algorithms, need not have all the users reporting their locations all the time, and can generate smaller ASRs.
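    The grid-ID-based cloaking idea can be sketched as follows, assuming users report only the ID of the grid cell they occupy, never their coordinates. This is an illustration of the general principle, not the paper's exact algorithms, and all names are ours:

```python
import collections

def cloak(grid_reports, user_id, K):
    """Grid-ID based K-anonymous cloaking (illustrative sketch).

    grid_reports: maps user_id -> (row, col) grid-cell ID; accurate
    coordinates are never reported to any party. Starting from the
    querying user's cell, grow a square block of cells until it covers
    at least K reported users, then return that block as the anonymous
    spatial region (ASR).
    """
    if K > len(grid_reports):
        raise ValueError("not enough users for K-anonymity")
    cell_counts = collections.Counter(grid_reports.values())
    r0, c0 = grid_reports[user_id]
    radius = 0
    while True:
        cells = [(r, c)
                 for r in range(r0 - radius, r0 + radius + 1)
                 for c in range(c0 - radius, c0 + radius + 1)]
        if sum(cell_counts.get(cell, 0) for cell in cells) >= K:
            return cells  # ASR as a list of grid-cell IDs
        radius += 1
```

Because the server only ever sees cell IDs, the user's accurate location is not exposed even to the anonymizer, which is the point of the nonexposure design.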

  7. Analysis of transient electromagnetic interactions on nanodevices using a quantum corrected integral equation approach

    KAUST Repository

    Uysal, Ismail E.


    Analysis of electromagnetic interactions on nanodevices can oftentimes be carried out accurately using “traditional” electromagnetic solvers. However, if a gap of sub-nanometer scale exists between any two surfaces of the device, quantum-mechanical effects including tunneling should be taken into account for an accurate characterization of the device's response. Since first-principle quantum simulators cannot be used efficiently to fully characterize a typical-size nanodevice, a quantum corrected electromagnetic model has been proposed as an efficient and accurate alternative (R. Esteban et al., Nat. Commun., 3(825), 2012). The quantum correction is achieved through an effective layered medium introduced into the gap between the surfaces. The dielectric constant of each layer is obtained using a first-principle quantum characterization of the gap with a different dimension.

  8. 5 CFR 1604.6 - Error correction. (United States)


    ... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Error correction. 1604.6 Section 1604.6 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD UNIFORMED SERVICES ACCOUNTS § 1604.6 Error correction. (a) General rule. A service member's employing agency must correct the service member's...

  9. Sequences of Closed Operators and Correctness

    Directory of Open Access Journals (Sweden)

    Sabra Ramadan


    Full Text Available In applications and in the equations of mathematical physics it is very important for the mathematical model corresponding to a given problem to be correctly posed. In this research we study the relationship between the convergence of a sequence of closed operators An→A and the correctness of the equation Ax = y. We also introduce a criterion for correctness.

  10. Deformation field correction for spatial normalization of PET images (United States)

    Bilgel, Murat; Carass, Aaron; Resnick, Susan M.; Wong, Dean F.; Prince, Jerry L.


    Spatial normalization of positron emission tomography (PET) images is essential for population studies, yet the current state of the art in PET-to-PET registration is limited to the application of conventional deformable registration methods that were developed for structural images. A method is presented for the spatial normalization of PET images that improves their anatomical alignment over the state of the art. The approach works by correcting the deformable registration result using a model that is learned from training data having both PET and structural images. In particular, viewing the structural registration of training data as ground truth, correction factors are learned by using a generalized ridge regression at each voxel given the PET intensities and voxel locations in a population-based PET template. The trained model can then be used to obtain more accurate registration of PET images to the PET template without the use of a structural image. A cross validation evaluation on 79 subjects shows that the proposed method yields more accurate alignment of the PET images compared to deformable PET-to-PET registration as revealed by 1) a visual examination of the deformed images, 2) a smaller error in the deformation fields, and 3) a greater overlap of the deformed anatomical labels with ground truth segmentations. PMID:26142272
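    The per-voxel learning step described above can be sketched as ordinary ridge regression mapping PET intensities and voxel-location covariates to a 3D correction of the deformation field. The array shapes and function names below are our assumptions, not the paper's implementation:

```python
import numpy as np

def fit_voxel_correction(features, corrections, lam=1.0):
    """Ridge regression at one voxel (illustrative sketch).

    features: (n_subjects, n_features) PET intensities plus voxel-location
    covariates from the training population.
    corrections: (n_subjects, 3) residual displacements between the
    PET-based and structural ('ground truth') registrations.
    Returns a weight matrix mapping features to a 3D correction.
    """
    X = np.asarray(features, dtype=float)
    Y = np.asarray(corrections, dtype=float)
    d = X.shape[1]
    # Closed-form ridge solution: (X'X + lam*I)^{-1} X'Y.
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def apply_correction(features, W):
    """Predict the deformation correction for new subjects."""
    return np.asarray(features, dtype=float) @ W
```

At test time no structural image is needed: the trained per-voxel weights correct the PET-to-PET registration directly, which is what makes the approach usable on PET-only data.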

  11. Simple and accurate analytical calculation of shortest path lengths

    CERN Document Server

    Melnik, Sergey


    We present an analytical approach to calculating the distribution of shortest path lengths (also called intervertex distances, or geodesic paths) between nodes in unweighted undirected networks. We obtain very accurate results for synthetic random networks with specified degree distribution (the so-called configuration model networks). Our method allows us to accurately predict the distribution of shortest path lengths on real-world networks using their degree distribution, or joint degree-degree distribution. Compared to some other methods, our approach is simpler and yields more accurate results. In order to obtain the analytical results, we use the analogy between an infection reaching a node in $n$ discrete time steps (i.e., as in the susceptible-infected epidemic model) and that node being at a distance $n$ from the source of the infection.
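    As a point of reference for what the analytical approach approximates, the exact shortest-path-length distribution of a small unweighted network can be computed by breadth-first search, which spreads one hop per discrete time step exactly as in the SI-epidemic analogy:

```python
import collections

def shortest_path_length_distribution(adj):
    """Exact distribution of shortest path lengths via BFS from every node.

    adj: maps node -> iterable of neighbours (unweighted, undirected).
    Returns a Counter: path length -> number of ordered node pairs at
    that distance (the quantity the analytical method approximates).
    """
    dist_counts = collections.Counter()
    for source in adj:
        # BFS: the frontier at step n is exactly the set of nodes an
        # SI infection started at `source` reaches at time n.
        dist = {source: 0}
        queue = collections.deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for node, d in dist.items():
            if node != source:
                dist_counts[d] += 1
    return dist_counts
```

For large networks this exact computation is the expensive baseline; the paper's analytical recursion reproduces its output from the degree distribution alone.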

  12. Accurate level set method for simulations of liquid atomization

    Institute of Scientific and Technical Information of China (English)

    Changxiao Shao; Kun Luo; Jianshan Yang; Song Chen; Jianren Fan


    Computational fluid dynamics is an efficient numerical approach for spray atomization study, but it is challenging to accurately capture the gas–liquid interface. In this work, an accurate conservative level set method is introduced to accurately track the gas–liquid interfaces in liquid atomization. To validate the capability of this method, binary drop collision and drop impact on a liquid film are investigated. The results are in good agreement with experimental observations. In addition, primary atomization (swirling sheet atomization) is studied using this method. For the swirling sheet atomization, it is found that the Rayleigh–Taylor instability in the azimuthal direction causes the primary breakup of the liquid sheet, and complex vortex structures are clustered around the rim of the liquid sheet. The effects of central gas velocity and liquid–gas density ratio on atomization are also investigated. This work lays a solid foundation for further studying the mechanism of spray atomization.

  13. Memory conformity affects inaccurate memories more than accurate memories. (United States)

    Wright, Daniel B; Villalba, Daniella K


    After controlling for initial confidence, inaccurate memories were shown to be more easily distorted than accurate memories. In two experiments groups of participants viewed 50 stimuli and were then presented with these stimuli plus 50 fillers. During this test phase participants reported their confidence that each stimulus was originally shown. This was followed by computer-generated responses from a bogus participant. After being exposed to this response participants again rated the confidence of their memory. The computer-generated responses systematically distorted participants' responses. Memory distortion depended on initial memory confidence, with uncertain memories being more malleable than confident memories. This effect was moderated by whether the participant's memory was initially accurate or inaccurate. Inaccurate memories were more malleable than accurate memories. The data were consistent with a model describing two types of memory (i.e., recollective and non-recollective memories), which differ in how susceptible these memories are to memory distortion.

  14. Accurate nuclear radii and binding energies from a chiral interaction

    CERN Document Server

    Ekstrom, A; Wendt, K A; Hagen, G; Papenbrock, T; Carlsson, B D; Forssen, C; Hjorth-Jensen, M; Navratil, P; Nazarewicz, W


    The accurate reproduction of nuclear radii and binding energies is a long-standing challenge in nuclear theory. To address this problem two-nucleon and three-nucleon forces from chiral effective field theory are optimized simultaneously to low-energy nucleon-nucleon scattering data, as well as binding energies and radii of few-nucleon systems and selected isotopes of carbon and oxygen. Coupled-cluster calculations based on this interaction, named NNLOsat, yield accurate binding energies and radii of nuclei up to 40Ca, and are consistent with the empirical saturation point of symmetric nuclear matter. In addition, the low-lying collective 3- states in 16O and 40Ca are described accurately, while spectra for selected p- and sd-shell nuclei are in reasonable agreement with experiment.

  15. Accurate and Robust Attitude Estimation Using MEMS Gyroscopes and a Monocular Camera (United States)

    Kobori, Norimasa; Deguchi, Daisuke; Takahashi, Tomokazu; Ide, Ichiro; Murase, Hiroshi

    In order to estimate accurate rotations of mobile robots and vehicles, we propose a hybrid system which combines a low-cost monocular camera with gyro sensors. Gyro sensors have drift errors that accumulate over time. On the other hand, a camera cannot obtain the rotation continuously in the case where feature points cannot be extracted from images, although its accuracy is better than that of gyro sensors. To solve these problems we propose a method for combining these sensors based on an Extended Kalman Filter. The errors of the gyro sensors are corrected by referring to the rotations obtained from the camera. In addition, by using a reliability judgment of camera rotations and devising the state value of the Extended Kalman Filter, even when the rotation is not continuously observable from the camera, the proposed method shows good performance. Experimental results showed the effectiveness of the proposed method.
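    Reduced to a single axis, the fusion idea is a Kalman filter that integrates the (drifting) gyro in the predict step and corrects with camera yaw when a measurement is available. This single-state sketch omits the full attitude EKF and the reliability-judgment logic of the paper; all names and noise values are ours:

```python
def fuse_gyro_camera(gyro_rates, cam_yaw, dt=0.01, q=1e-4, r=1e-2):
    """1-axis Kalman filter fusing a drifting gyro with camera yaw.

    gyro_rates: angular-rate samples (rad/s), possibly biased.
    cam_yaw: absolute yaw measurements (rad), or None at steps where no
    feature points could be extracted from the image.
    q, r: process and measurement noise variances (assumed values).
    """
    theta, P = 0.0, 1.0
    estimates = []
    for rate, z in zip(gyro_rates, cam_yaw):
        # Predict: integrate the gyro rate; uncertainty grows (drift).
        theta += rate * dt
        P += q
        # Update: only when the camera actually observed the rotation.
        if z is not None:
            K = P / (P + r)
            theta += K * (z - theta)
            P *= (1.0 - K)
        estimates.append(theta)
    return estimates
```

With no camera updates the output is pure gyro integration (and inherits its bias); each camera measurement pulls the estimate back toward the absolute rotation, which is exactly the complementary behaviour the abstract describes.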


    Malusek, A; Sandborg, M; Carlsson, G Alm


    Modern X-ray units register the air kerma-area product, P_KA, with a built-in KAP meter. Some KAP meters show an energy-dependent bias comparable with the maximum uncertainty articulated by the IEC (25%), adversely affecting dose-optimisation processes. To correct for the bias, a reference KAP meter calibrated at a standards laboratory and the two calibration methods described here can be used to transfer the calibration from a reference beam quality in the clinic, Q1, to the beam quality of interest, Q. Biases of up to 35% of built-in KAP meter readings were noted. Energy-dependent calibration factors are needed for unbiased P_KA measurements. Accurate KAP meter calibration is a prerequisite for optimisation in projection radiography.

  17. Accurate estimation of influenza epidemics using Google search data via ARGO. (United States)

    Yang, Shihao; Santillana, Mauricio; Kou, S C


    Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.
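    The structure of an ARGO-style model, autoregressive lags of the flu series plus exogenous search-query terms fitted by regularized least squares, can be sketched as follows. The published ARGO uses 52 weekly lags and L1 regularization; this toy version uses three lags, ridge regularization, and made-up data shapes, so every name here is an assumption:

```python
import numpy as np

def fit_argo_style(y, queries, p=3, lam=0.1):
    """Minimal ARGO-style tracker (sketch, not the published model).

    y: (T,) flu-activity series; queries: (T, q) search-volume series.
    Builds features [y_{t-1}, ..., y_{t-p}, queries_t] to predict y_t
    and fits ridge-regularized least squares.
    """
    T = len(y)
    rows, targets = [], []
    for t in range(p, T):
        rows.append(np.concatenate([y[t - p:t][::-1], queries[t]]))
        targets.append(y[t])
    X = np.array(rows)
    z = np.array(targets)
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ z)

def predict_argo_style(w, y_hist, query_now, p=3):
    """One-step-ahead prediction from recent history and today's queries."""
    x = np.concatenate([np.asarray(y_hist)[-p:][::-1], query_now])
    return float(x @ w)
```

Refitting the weights on a rolling window gives the self-correcting behaviour the abstract mentions: as search behaviour changes, the coefficients on individual query terms adapt.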

  18. Fast and accurate prediction of numerical relativity waveforms from binary black hole mergers using surrogate models

    CERN Document Server

    Blackman, Jonathan; Galley, Chad R; Szilagyi, Bela; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A


    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. In this paper, we construct an accurate and fast-to-evaluate surrogate model for numerical relativity (NR) waveforms from non-spinning binary black hole coalescences with mass ratios from $1$ to $10$ and durations corresponding to about $15$ orbits before merger. Our surrogate, which is built using reduced order modeling techniques, is distinct from traditional modeling efforts. We find that the full multi-mode surrogate model agrees with waveforms generated by NR to within the numerical error of the NR code. In particular, we show that our modeling strategy produces surrogates which can correctly predict NR waveforms that were {\\em not} used for the surrogate's training. For all practical purposes, then, the surrogate waveform model is equivalent to the high-accuracy, large-scale simulation waveform but can be evaluated in a millisecond to a second dependin...

  19. What's in a Name? The Impact of Accurate Staphylococcus pseudintermedius Identification on Appropriate Antimicrobial Susceptibility Testing. (United States)

    Limbago, Brandi M


    Bacteria in the Staphylococcus intermedius group, including Staphylococcus pseudintermedius, often encode mecA-mediated methicillin resistance. Reliable detection of this phenotype for proper treatment and infection control decisions requires that these coagulase-positive staphylococci are accurately identified and, specifically, that they are not misidentified as S. aureus. As correct species-level bacterial identification becomes more commonplace in clinical laboratories, one can expect to see changes in guidance for antimicrobial susceptibility testing and interpretation. The study by Wu et al. in this issue (M. T. Wu, C.-A. D. Burnham, L. F. Westblade, J. Dien Bard, S. D. Lawhon, M. A. Wallace, T. Stanley, E. Burd, J. Hindler, R. M. Humphries, J Clin Microbiol 54:535-542, 2016) highlights the impact of robust identification of S. intermedius group organisms on the selection of appropriate antimicrobial susceptibility testing methods and interpretation.

  20. Importance of local exact exchange potential in hybrid functionals for accurate excited states

    CERN Document Server

    Kim, Jaewook; Hwang, Sang-Yeon; Ryu, Seongok; Choi, Sunghwan; Kim, Woo Youn


    Density functional theory has been an essential analysis tool for both theoretical and experimental chemists since accurate hybrid functionals were developed. Here we propose a local hybrid method derived from the optimized effective potential (OEP) method and compare its distinct features with conventional nonlocal ones from the Hartree-Fock (HF) exchange operator. Both are formally exact for ground states and thus show similar accuracy for atomization energies and reaction barrier heights. For excited states, the local version yields virtual orbitals with N-electron character, while those of the nonlocal version have mixed characters between N- and (N+1)-electron orbitals. As a result, the orbital energy gaps from the former well approximate excitation energies with a small mean absolute error (MAE = 0.40 eV) for the Caricato benchmark set. The correction from time-dependent density functional theory with a simple local density approximation kernel further improves its accuracy by incorporating multi-config...

  1. Quantum Corrections in Massive Gravity

    CERN Document Server

    de Rham, Claudia; Ribeiro, Raquel H


    We compute the one-loop quantum corrections to the potential of ghost-free massive gravity. We show how the mass of external matter fields contribute to the running of the cosmological constant, but do not change the ghost-free structure of the massive gravity potential at one-loop. When considering gravitons running in the loops, we show how the structure of the potential gets destabilized at the quantum level, but in a way which would never involve a ghost with a mass smaller than the Planck scale. This is done by explicitly computing the one-loop effective action and supplementing it with the Vainshtein mechanism. We conclude that to one-loop order the special mass structure of ghost-free massive gravity is technically natural.

  2. Quantum corrections in massive gravity (United States)

    de Rham, Claudia; Heisenberg, Lavinia; Ribeiro, Raquel H.


    We compute the one-loop quantum corrections to the potential of ghost-free massive gravity. We show how the mass of external matter fields contributes to the running of the cosmological constant, but does not change the ghost-free structure of the massive gravity potential at one-loop. When considering gravitons running in the loops, we show how the structure of the potential gets destabilized at the quantum level, but in a way which would never involve a ghost with a mass smaller than the Planck scale. This is done by explicitly computing the one-loop effective action and supplementing it with the Vainshtein mechanism. We conclude that to one-loop order the special mass structure of ghost-free massive gravity is technically natural.

  3. A Quantum Correction To Chaos

    CERN Document Server

    Fitzpatrick, A Liam


    We use results on Virasoro conformal blocks to study chaotic dynamics in CFT$_2$ at large central charge c. The Lyapunov exponent $\\lambda_L$, which is a diagnostic for the early onset of chaos, receives $1/c$ corrections that may be interpreted as $\\lambda_L = \\frac{2 \\pi}{\\beta} \\left( 1 + \\frac{12}{c} \\right)$. However, out of time order correlators receive other equally important $1/c$ suppressed contributions that do not have such a simple interpretation. We revisit the proof of a bound on $\\lambda_L$ that emerges at large $c$, focusing on CFT$_2$ and explaining why our results do not conflict with the analysis leading to the bound. We also comment on relationships between chaos, scattering, causality, and bulk locality.

  4. Radiative corrections in bumblebee electrodynamics

    Directory of Open Access Journals (Sweden)

    R.V. Maluf


    Full Text Available We investigate some quantum features of the bumblebee electrodynamics in flat spacetimes. The bumblebee field is a vector field that leads to a spontaneous Lorentz symmetry breaking. For a smooth quadratic potential, the massless excitation (Nambu–Goldstone boson) can be identified as the photon, transversal to the vacuum expectation value of the bumblebee field. Besides, there is a massive excitation associated with the longitudinal mode, whose presence leads to instability in the spectrum of the theory. By using the principal-value prescription, we show that no one-loop radiative corrections to the mass term are generated. Moreover, the bumblebee self-energy is not transverse, showing that the propagation of the longitudinal mode cannot be excluded from the effective theory.

  5. Fringe Capacitance Correction for a Coaxial Soil Cell

    Directory of Open Access Journals (Sweden)

    John D. Wanjura


    Full Text Available Accurate measurement of moisture content is a prime requirement in hydrological, geophysical and biogeochemical research as well as for material characterization and process control. Within these areas, accurate measurements of surface area and bound water content are becoming increasingly important for providing answers to many fundamental questions, ranging from characterization of cotton fiber maturity, to accurate characterization of soil water content in soil water conservation research, to bio-plant water utilization, to chemical reactions and diffusion of ionic species across membranes in cells as well as in the dense suspensions that occur in surface films. One promising technique to address the increasing demands for higher-accuracy water content measurements is the utilization of electrical permittivity characterization of materials. This technique has enjoyed a strong following in the soil-science and geological community through measurements of apparent permittivity via time-domain reflectometry (TDR), as well as in many process control applications. Recent research, however, indicates a need to increase the accuracy beyond that available from traditional TDR. The most logical pathway then becomes a transition from TDR-based measurements to network analyzer measurements of absolute permittivity, which will remove the adverse effects that high-surface-area soils and conductivity impart onto measurements of apparent permittivity in traditional TDR applications. This research examines an observed experimental error for the coaxial probe, from which the modern TDR probe originated, which is hypothesized to be due to fringe capacitance. The research provides an experimental and theoretical basis for the cause of the error and provides a technique by which to correct the system to remove this source of error.
To test this theory, a Poisson model of a coaxial cell was formulated to calculate the effective theoretical extra length caused by the

  6. Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements. (United States)

    Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian


    Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography, and more recently, the use of single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations in varying degrees of signal noise. 
Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need to
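    The core step of such an algorithm can be sketched as a linear solve: once the angular velocity is approximated by a finite difference (as the abstract describes), each single-axis reading is linear in the six unknown acceleration components. The sketch below is not the authors' formulation; the sensor positions, axes, and the plain Gaussian-elimination solver are illustrative assumptions.

    ```python
    # Sketch: recover linear acceleration a and angular acceleration alpha from
    # six single-axis accelerometer readings, assuming the angular velocity
    # omega is already known from a finite-difference estimate. Each reading is
    #   s_i = n_i . (a + alpha x r_i + omega x (omega x r_i)),
    # which is linear in (a, alpha) once omega is fixed.

    def cross(u, v):
        return (u[1]*v[2] - u[2]*v[1],
                u[2]*v[0] - u[0]*v[2],
                u[0]*v[1] - u[1]*v[0])

    def dot(u, v):
        return sum(a*b for a, b in zip(u, v))

    def solve(A, b):
        """Gaussian elimination with partial pivoting for a small dense system."""
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                M[r] = [mr - f*mc for mr, mc in zip(M[r], M[col])]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][c]*x[c] for c in range(r + 1, n))) / M[r][r]
        return x

    def recover(sensors, readings, omega):
        """sensors: list of (position r, axis n); returns (a, alpha)."""
        A, b = [], []
        for (r, n), s in zip(sensors, readings):
            # n.(alpha x r) = alpha.(r x n), so the row is [n | r x n]
            A.append(list(n) + list(cross(r, n)))
            # move the (known) centripetal term to the right-hand side
            b.append(s - dot(n, cross(omega, cross(omega, r))))
        x = solve(A, b)
        return tuple(x[:3]), tuple(x[3:])
    ```

    With a well-chosen (non-singular) sensor arrangement, the six readings determine all six components exactly at each time step.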

  7. Evaluation of Range-Corrected Density Functionals for the Simulation of Pyridinium-Containing Molecular Crystals. (United States)

    Ruggiero, Michael T; Gooch, Jonathan; Zubieta, Jon; Korter, Timothy M


    The problem of nonlocal interactions in density functional theory calculations has in part been mitigated by the introduction of range-corrected functional methods. While promising solutions, the continued evaluation of range corrections in the structural simulations of complex molecular crystals is required to judge their efficacy in challenging chemical environments. Here, three pyridinium-based crystals, exhibiting a wide range of intramolecular and intermolecular interactions, are used as benchmark systems for gauging the accuracy of several range-corrected density functional techniques. The computational results are compared to low-temperature experimental single-crystal X-ray diffraction and terahertz spectroscopic measurements, enabling the direct assessment of range correction in the accurate simulation of the potential energy surface minima and curvatures. Ultimately, the simultaneous treatment of both short- and long-range effects by the ωB97-X functional was found to be central to its rank as the top performer in reproducing the complex array of forces that occur in the studied pyridinium solids. These results demonstrate that while long-range corrections are the most commonly implemented range-dependent improvements to density functionals, short-range corrections are vital for the accurate reproduction of forces that rapidly diminish with distance, such as quadrupole-quadrupole interactions.

  8. Correction of a Depth-Dependent Lateral Distortion in 3D Super-Resolution Imaging.

    Directory of Open Access Journals (Sweden)

    Lina Carlini

    Full Text Available Three-dimensional (3D) localization-based super-resolution microscopy (SR) requires correction of aberrations to accurately represent 3D structure. Here we show how a depth-dependent lateral shift in the apparent position of a fluorescent point source, which we term `wobble`, results in warped 3D SR images, and we provide a software tool to correct this distortion. This system-specific lateral shift is typically > 80 nm across an axial range of ~ 1 μm. A theoretical analysis based on phase retrieval data from our microscope suggests that the wobble is caused by non-rotationally symmetric phase and amplitude aberrations in the microscope's pupil function. We then apply our correction to the bacterial cytoskeletal protein FtsZ in live bacteria and demonstrate that the corrected data more accurately represent the true shape of this vertically oriented ring-like structure. We also include this correction method in a registration procedure for dual-color, 3D SR data and show that it improves target registration error (TRE) at the axial limits over an imaging depth of 1 μm, yielding TRE values of < 20 nm. This work highlights the importance of correcting aberrations in 3D SR to achieve high fidelity between the measurements and the sample.
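    The correction itself reduces to subtracting a calibrated depth-dependent shift from each localization. A minimal sketch, assuming a piecewise-linear calibration table (the values below are made up; a real system would measure wobble(z) with beads scanned through the axial range):

    ```python
    # Sketch of a wobble correction: subtract a depth-dependent lateral shift,
    # known from calibration, from each 3D localization (units: nm).
    from bisect import bisect_left

    def interp(z, zs, vs):
        """Piecewise-linear interpolation of a calibration curve, clamped at the ends."""
        if z <= zs[0]:
            return vs[0]
        if z >= zs[-1]:
            return vs[-1]
        i = bisect_left(zs, z)
        t = (z - zs[i - 1]) / (zs[i] - zs[i - 1])
        return vs[i - 1] + t * (vs[i] - vs[i - 1])

    def correct_wobble(locs, zs, wobble_x, wobble_y):
        """locs: list of (x, y, z); returns wobble-corrected localizations."""
        return [(x - interp(z, zs, wobble_x),
                 y - interp(z, zs, wobble_y),
                 z) for x, y, z in locs]
    ```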

  9. Accurate torque-speed performance prediction for brushless dc motors (United States)

    Gipper, Patrick D.

    Desirable characteristics of the brushless dc motor (BLDCM) have resulted in its application in electrohydrostatic (EH) and electromechanical (EM) actuation systems. Effective application of the BLDCM, however, requires accurate prediction of performance. The minimum necessary performance characteristics are motor torque versus speed, peak and average supply current, and efficiency. BLDCM nonlinear simulation software specifically adapted for torque-speed prediction is presented. The capability of the software to quickly and accurately predict performance has been verified on fractional- to integral-horsepower motor sizes, and the results are presented. Additionally, the capability of torque-speed prediction with commutation angle advance is demonstrated.
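    For intuition about what a torque-speed curve looks like, the idealized (linear) dc-motor baseline below is useful; it is only a textbook sketch with made-up parameter values, not the nonlinear simulation described in the abstract.

    ```python
    # Idealized torque-speed relation for a dc motor: back-EMF reduces winding
    # current, so steady-state torque falls linearly from stall to no-load speed.
    # All parameter values used here are illustrative assumptions.

    def torque_speed(v_supply, k_t, k_e, r_winding, omega):
        """Shaft torque (N*m) at mechanical speed omega (rad/s)."""
        current = (v_supply - k_e * omega) / r_winding
        return k_t * current

    def no_load_speed(v_supply, k_e):
        """Speed at which back-EMF cancels the supply voltage (zero torque)."""
        return v_supply / k_e

    def stall_torque(v_supply, k_t, r_winding):
        """Torque at zero speed (maximum winding current)."""
        return k_t * v_supply / r_winding
    ```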

  10. Method of accurate grinding for single enveloping TI worm

    Institute of Scientific and Technical Information of China (English)

    SUN Yuehai; ZHENG Huijiang; BI Qingzhen; WANG Shuren


    TI worm drive consists of an involute helical gear and its enveloping hourglass worm. Accurate grinding of the TI worm is the key manufacturing technology for popularizing and applying TI worm gearing. According to the theory of gear meshing, the equations of the tooth surface of the worm drive are obtained, and the equation of the axial-section profile of a grinding wheel that can accurately grind the TI worm is derived. Simultaneously, the relations of position and motion between the TI worm and the grinding wheel are expounded. A method for precisely grinding the single enveloping TI worm is thus obtained.

  11. Accurate analysis of planar metamaterials using the RLC theory

    DEFF Research Database (Denmark)

    Malureanu, Radu; Lavrinenko, Andrei


    In this work we present an accurate description of the response of metallic pads using RLC theory. In order to calculate this response we take into account several factors, including the mutual inductances, a precise formula for determining the capacitance, and the pads' resistance, considering...... the variation of permittivity due to small thicknesses. Even if complex, such a strategy gives accurate results and we believe that, after further refinement, it can be used to calculate the full response of a complex metallic structure placed on a substrate far faster than full simulation programs do....

  12. Distortion correction in EPI using an extended PSF method with a reversed phase gradient approach.

    Directory of Open Access Journals (Sweden)

    Myung-Ho In

    Full Text Available In echo-planar imaging (EPI), such as commonly used for functional MRI (fMRI) and diffusion-tensor imaging (DTI), compressed distortion is a more difficult challenge than local stretching, as spatial information can be lost in strongly compressed areas. In addition, the effects are more severe at ultra-high field (UHF), such as 7T, due to increased field inhomogeneity. To resolve this problem, two EPIs with opposite phase-encoding (PE) polarity were acquired and combined after distortion correction. For distortion correction, a point spread function (PSF) mapping method was chosen due to its high correction accuracy and extended to perform distortion correction of both EPIs with opposite PE polarity, thus reducing the PSF reference scan time. Because the amount of spatial information differs between the opposite-PE datasets, the method was further extended to incorporate a weighted combination of the two distortion-corrected images to maximize the spatial information content of the final corrected image. The correction accuracy of the proposed method was evaluated on distortion-corrected data using both forward and reverse phase-encoded PSF reference data and compared with the reversed-gradient approaches suggested previously. Further, we demonstrate that the extended PSF method with an improved weighted combination can recover local distortions and spatial information loss and can be applied successfully not only to spin-echo EPI, but also to gradient-echo EPIs acquired with both PE directions to perform geometrically accurate image reconstruction.
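    The weighted-combination idea can be sketched voxel-wise: favor the polarity that was locally stretched (and therefore retained more spatial information) over the one that was compressed. Weighting by the local geometric expansion (Jacobian) factor is an assumption for illustration, not necessarily the exact weighting used in the paper.

    ```python
    # Sketch: combine two distortion-corrected EPI images (opposite PE
    # polarities) with voxel-wise weights proportional to the local expansion
    # factor of each acquisition (>1 where stretched, <1 where compressed).

    def combine(img_up, img_down, jac_up, jac_down, eps=1e-6):
        """img_*: corrected intensities (flat lists); jac_*: expansion factors."""
        out = []
        for v_u, v_d, j_u, j_d in zip(img_up, img_down, jac_up, jac_down):
            w_u = j_u / (j_u + j_d + eps)   # weight toward the stretched polarity
            out.append(w_u * v_u + (1.0 - w_u) * v_d)
        return out
    ```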

  13. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe


    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in the design, implementation, and optimization of hardware/software systems for error correction. The book's chapters are written by internationally recognized experts in the field. Topics include the evolution of error correction techniques, industrial user needs, and architectures and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc.). This book provides access to recent results and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for current and next-generation standards; • Provides coverage of industrial user needs and advanced error correcting techniques.

  14. Accurate characterization of OPVs: Device masking and different solar simulators

    DEFF Research Database (Denmark)

    Gevorgyan, Suren; Carlé, Jon Eggert; Søndergaard, Roar R.;


    laboratories following rigorous ASTM and IEC standards. This work tries to address some of the issues confronting the standard laboratory in this regard. Solar simulator lamps are investigated for their light field homogeneity and direct versus diffuse components, as well as the correct device area...

  15. An accurate analytic description of neutrino oscillations in matter

    Energy Technology Data Exchange (ETDEWEB)

    Niro, Viviana [Max-Planck-Institut fuer Kernphysik, Heidelberg (Germany)


    We present a simple closed-form analytic expression for the probability of two-flavour neutrino oscillations in matter with an arbitrary density profile. Our formula is based on a perturbative expansion and allows an easy calculation of higher-order corrections. We demonstrate the validity of our results using a few model density profiles, including the PREM density profile of the Earth.

  16. Comparison and Analysis of Geometric Correction Models of Spaceborne SAR. (United States)

    Jiang, Weihao; Yu, Anxi; Dong, Zhen; Wang, Qingsong


    Following the development of synthetic aperture radar (SAR), SAR images have become increasingly common. Many researchers have conducted large studies on geolocation models, but little work has been conducted on the available models for the geometric correction of SAR images of different terrain. To address the terrain issue, four different models were compared and are described in this paper: a rigorous range-Doppler (RD) model, a rational polynomial coefficients (RPC) model, a revised polynomial (PM) model and an elevation derivation (EDM) model. The results of comparisons of the geolocation capabilities of the models show that a proper model for a SAR image of a specific terrain can be determined. A solution table was obtained to recommend a suitable model for users. Three TerraSAR-X images, two ALOS-PALSAR images and one Envisat-ASAR image were used for the experiment, including flat terrain and mountain terrain SAR images as well as two large area images. Geolocation accuracies of the models for different terrain SAR images were computed and analyzed. The comparisons of the models show that the RD model was accurate but was the least efficient; therefore, it is not the ideal model for real-time implementations. The RPC model is sufficiently accurate and efficient for the geometric correction of SAR images of flat terrain, whose precision is below 0.001 pixels. The EDM model is suitable for the geolocation of SAR images of mountainous terrain, and its precision can reach 0.007 pixels. Although the PM model does not produce results as precise as the other models, its efficiency is excellent and its potential should not be underestimated. With respect to the geometric correction of SAR images over large areas, the EDM model has higher accuracy under one pixel, whereas the RPC model consumes one third of the time of the EDM model.
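    The RPC model's structure is a ratio of polynomials in normalized ground coordinates. The sketch below shows only that structure with first-order terms and made-up coefficients; a real RPC model uses two 20-coefficient cubic polynomials per image axis, so this is not the full standard.

    ```python
    # Illustrative rational-polynomial mapping from normalized ground
    # coordinates (lat, lon, height) to one normalized image coordinate.

    def poly1(c, lat, lon, h):
        """First-order polynomial; c = (c0, c_lat, c_lon, c_h)."""
        return c[0] + c[1] * lat + c[2] * lon + c[3] * h

    def rpc_coord(num, den, lat, lon, h):
        """Image coordinate as numerator/denominator polynomial ratio."""
        return poly1(num, lat, lon, h) / poly1(den, lat, lon, h)
    ```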


  18. Importance of Attenuation Correction (AC) for Small Animal PET Imaging

    Directory of Open Access Journals (Sweden)

    Henrik H. El Ali


    Full Text Available The purpose of this study was to investigate whether a correction for annihilation photon attenuation in small objects such as mice is necessary. The attenuation recovery for specific organs and subcutaneous tumors was investigated, and a comparison between different attenuation correction methods was performed. Methods: Ten NMRI nude mice with subcutaneous implantations of human breast cancer cells (MCF-7) were scanned consecutively in small animal PET and CT scanners (MicroPET™ Focus 120 and ImTek's MicroCAT™ II). CT-based AC, PET-based AC and uniform AC methods were compared. Results: The activity concentration in the same organ with and without AC revealed an overall attenuation recovery of 9–21% for MAP-reconstructed images, i.e., SUV without AC could underestimate the true activity at this level. For subcutaneous tumors, the attenuation was 13 ± 4% (9–17%), for kidneys 20 ± 1% (19–21%), and for the bladder 18 ± 3% (15–21%). The FBP-reconstructed images showed almost the same attenuation levels as the MAP-reconstructed images for all organs. Conclusions: Annihilation photons suffer attenuation even in small subjects. Both PET-based and CT-based methods are adequate for AC. The amplitude of the AC recovery could be overestimated using the uniform map. Therefore, application of a global attenuation factor on PET data might not be accurate for attenuation correction.
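    The arithmetic behind the quoted underestimation is simple: if a fraction f of annihilation photons is lost to attenuation, the uncorrected uptake is low by that fraction. The sketch below assumes a single multiplicative loss factor, which is only a back-of-envelope model of the per-organ figures above.

    ```python
    # Back-of-envelope attenuation correction: with a photon-loss fraction f
    # (9-21% for the organs in the abstract), the uncorrected measurement is
    # measured = true * (1 - f), so the correction divides it back out.

    def corrected_uptake(measured, attenuation_fraction):
        """Recover true activity assuming a simple multiplicative photon loss."""
        return measured / (1.0 - attenuation_fraction)
    ```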

  19. Combined registration and motion correction of longitudinal retinal OCT data (United States)

    Lang, Andrew; Carass, Aaron; Al-Louzi, Omar; Bhargava, Pavan; Solomon, Sharon D.; Calabresi, Peter A.; Prince, Jerry L.


    Optical coherence tomography (OCT) has become an important modality for examination of the eye. To measure layer thicknesses in the retina, automated segmentation algorithms are often used, producing accurate and reliable measurements. However, subtle changes over time are difficult to detect since the magnitude of the change can be very small. Thus, tracking disease progression over short periods of time is difficult. Additionally, unstable eye position and motion alter the consistency of these measurements, even in healthy eyes. Thus, both registration and motion correction are important for processing longitudinal data of a specific patient. In this work, we propose a method to jointly do registration and motion correction. Given two scans of the same patient, we initially extract blood vessel points from a fundus projection image generated on the OCT data and estimate point correspondences. Due to saccadic eye movements during the scan, motion is often very abrupt, producing a sparse set of large displacements between successive B-scan images. Thus, we use lasso regression to estimate the movement of each image. By iterating between this regression and a rigid point-based registration, we are able to simultaneously align and correct the data. With longitudinal data from 39 healthy control subjects, our method improves the registration accuracy by 43% compared to simple alignment to the fovea and 8% when using point-based registration only. We also show improved consistency of repeated total retina thickness measurements.

  20. Accounting for Chromatic Atmospheric Effects on Barycentric Corrections (United States)

    Blackman, Ryan T.; Szymkowiak, Andrew E.; Fischer, Debra A.; Jurgenson, Colby A.


    Atmospheric effects on stellar radial velocity measurements for exoplanet discovery and characterization have not yet been fully investigated for extreme precision levels. We carry out calculations to determine the wavelength dependence of barycentric corrections across optical wavelengths, due to the ubiquitous variations in air mass during observations. We demonstrate that radial velocity errors of at least several cm s‑1 can be incurred if the wavelength dependence is not included in the photon-weighted barycentric corrections. A minimum of four wavelength channels across optical spectra (380–680 nm) are required to account for this effect at the 10 cm s‑1 level, with polynomial fits of the barycentric corrections applied to cover all wavelengths. Additional channels may be required in poor observing conditions or to avoid strong telluric absorption features. Furthermore, consistent flux sampling on the order of seconds throughout the observation is necessary to ensure that accurate photon weights are obtained. Finally, we describe how a multiple-channel exposure meter will be implemented in the EXtreme PREcision Spectrograph (EXPRES).
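    The two ingredients described above — photon weighting within an exposure and a polynomial across a handful of wavelength channels — can be sketched as follows. The channel wavelengths, correction values, and photon counts are illustrative assumptions, not EXPRES data.

    ```python
    # Sketch of a wavelength-dependent barycentric correction (BC): weight the
    # BC time series by photon flux within each channel, then interpolate a
    # polynomial across channels so every wavelength gets its own correction.

    def photon_weighted_bc(bc_series, photon_counts):
        """Photon-weighted mean of barycentric corrections over an exposure."""
        total = sum(photon_counts)
        return sum(b * w for b, w in zip(bc_series, photon_counts)) / total

    def lagrange_fit(channels, values):
        """Return a function interpolating (wavelength, bc) pairs exactly;
        with four channels this is the cubic the abstract suggests."""
        def bc(lam):
            s = 0.0
            for i, (xi, yi) in enumerate(zip(channels, values)):
                term = yi
                for j, xj in enumerate(channels):
                    if j != i:
                        term *= (lam - xj) / (xi - xj)
                s += term
            return s
        return bc
    ```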

  1. Assessment of density functional methods with correct asymptotic behavior

    CERN Document Server

    Tsai, Chen-Wei; Li, Guan-De; Chai, Jeng-Da


    Long-range corrected (LC) hybrid functionals and asymptotically corrected (AC) model potentials are two distinct density functional methods with correct asymptotic behavior. They are known to be accurate for properties that are sensitive to the asymptote of the exchange-correlation potential, such as the highest occupied molecular orbital energies and Rydberg excitation energies of molecules. To provide a comprehensive comparison, we investigate the performance of the two schemes and others on a very wide range of applications, including the asymptote problems, self-interaction-error problems, energy-gap problems, charge-transfer problems, and many others. The LC hybrid scheme is shown to consistently outperform the AC model potential scheme. In addition, to be consistent with the molecules collected in the IP131 database [Y.-S. Lin, C.-W. Tsai, G.-D. Li, and J.-D. Chai, J. Chem. Phys. 136, 154109 (2012)], we expand the EA115 and FG115 databases to include, respectively, the vertical electron affinities and f...

  2. Total energy evaluation in the Strutinsky shell correction method. (United States)

    Zhou, Baojing; Wang, Yan Alexander


    We analyze the total energy evaluation in the Strutinsky shell correction method (SCM) of Ullmo et al. [Phys. Rev. B 63, 125339 (2001)], where a series expansion of the total energy is developed based on perturbation theory. In agreement with Yannouleas and Landman [Phys. Rev. B 48, 8376 (1993)], we also identify the first-order SCM result to be the Harris functional [Phys. Rev. B 31, 1770 (1985)]. Further, we find that the second-order correction of the SCM turns out to be the second-order error of the Harris functional, which involves the a priori unknown exact Kohn-Sham (KS) density, rho(KS)(r). Interestingly, the approximation of rho(KS)(r) by rho(out)(r), the output density of the SCM calculation, in the evaluation of the second-order correction leads to the Hohenberg-Kohn-Sham functional. By invoking an auxiliary system in the framework of orbital-free density functional theory, Ullmo et al. designed a scheme to approximate rho(KS)(r), but with several drawbacks. An alternative is designed to utilize the optimal density from a high-quality density mixing method to approximate rho(KS)(r). Our new scheme allows more accurate and complex kinetic energy density functionals and nonlocal pseudopotentials to be employed in the SCM. The efficiency of our new scheme is demonstrated in atomistic calculations on the cubic diamond Si and face-centered-cubic Ag systems.

  3. Modeling Battery Behavior for Accurate State-of-Charge Indication

    NARCIS (Netherlands)

    Pop, V.; Bergveld, H.J.; Veld, op het J.H.G.; Regtien, P.P.L.; Danilov, D.; Notten, P.H.L.


    Li-ion is the most commonly used battery chemistry in portable applications nowadays. Accurate state-of-charge (SOC) and remaining run-time indication for portable devices is important for the user's convenience and to prolong the lifetime of batteries. A new SOC indication system, combining the ele

  4. Accurate and Simple Calibration of DLP Projector Systems

    DEFF Research Database (Denmark)

    Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus


    Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available which require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively and most current methods...

  5. Speed-of-sound compensated photoacoustic tomography for accurate imaging

    NARCIS (Netherlands)

    Jose, J.; Willemink, G.H.; Steenbergen, W.; Leeuwen, van A.G.J.M.; Manohar, S.


    Purpose: In most photoacoustic (PA) tomographic reconstructions, variations in speed-of-sound (SOS) of the subject are neglected under the assumption of acoustic homogeneity. Biological tissue with spatially heterogeneous SOS cannot be accurately reconstructed under this assumption. The authors pres


    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...

  7. Accurate segmentation of dense nanoparticles by partially discrete electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Roelandts, T., E-mail: [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Batenburg, K.J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, 1098 XG Amsterdam (Netherlands); Biermans, E. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Kuebel, C. [Institute of Nanotechnology, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Sijbers, J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium)


    Accurate segmentation of nanoparticles within various matrix materials is a difficult problem in electron tomography. Due to artifacts related to image series acquisition and reconstruction, global thresholding of reconstructions computed by established algorithms, such as weighted backprojection or SIRT, may result in unreliable and subjective segmentations. In this paper, we introduce the Partially Discrete Algebraic Reconstruction Technique (PDART) for computing accurate segmentations of dense nanoparticles of constant composition. The particles are segmented directly by the reconstruction algorithm, while the surrounding regions are reconstructed using continuously varying gray levels. As no properties are assumed for the other compositions of the sample, the technique can be applied to any sample where dense nanoparticles must be segmented, regardless of the surrounding compositions. For both experimental and simulated data, it is shown that PDART yields significantly more accurate segmentations than those obtained by optimal global thresholding of the SIRT reconstruction. -- Highlights: • We present a novel reconstruction method for partially discrete electron tomography. • It accurately segments dense nanoparticles directly during reconstruction. • The gray level to use for the nanoparticles is determined objectively. • The method expands the set of samples for which discrete tomography can be applied.

  8. A Simple and Accurate Method for Measuring Enzyme Activity. (United States)

    Yip, Din-Yan


    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  9. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method (United States)

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey


    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  10. $H_{2}^{+}$ ion in strong magnetic field: an accurate calculation

    CERN Document Server

    López, J C; Turbiner, A V


    Using a unique trial function we perform an accurate calculation of the ground state $1\sigma_g$ of the hydrogenic molecular ion $H^+_2$ in a constant uniform magnetic field ranging over $0-10^{13}$ G. We show that this trial function also makes it possible to study the negative-parity ground state $1\sigma_u$.

  11. Fast, Accurate and Detailed NoC Simulations

    NARCIS (Netherlands)

    Wolkotte, P.T.; Hölzenspies, P.K.F.; Smit, G.J.M.; Kellenberger, P.


    Network-on-Chip (NoC) architectures have a wide variety of parameters that can be adapted to the designer's requirements. Fast exploration of this parameter space is only possible at a high-level and several methods have been proposed. Cycle and bit accurate simulation is necessary when the actual r

  12. Accurate Period Approximation for Any Simple Pendulum Amplitude

    Institute of Scientific and Technical Information of China (English)

    XUE De-Sheng; ZHOU Zhao; GAO Mei-Zhen


    Accurate approximate analytical formulae for the pendulum period, composed of a few elementary functions, are constructed for any amplitude. Based on an approximation of the elliptic integral, two new logarithmic formulae for large amplitudes close to 180° are obtained. Considering the trigonometric-function modulation that results from the dependence of the relative error on the amplitude, we realize accurate approximate period expressions for any amplitude between 0° and 180°. A relative error of less than 0.02% is achieved for any amplitude. This kind of modulation is also effective for other large-amplitude logarithmic approximation expressions.
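    The exact quantity being approximated is the period ratio T/T0 = (2/π) K(sin(θ0/2)), where K is the complete elliptic integral of the first kind; K can be computed quickly with the arithmetic-geometric mean (AGM). The sketch below is a reference against which such approximations can be checked, not the authors' formulae themselves.

    ```python
    # Exact large-amplitude pendulum period via the AGM evaluation of K(k):
    #   K(k) = pi / (2 * AGM(1, sqrt(1 - k^2))),  k = sin(theta0 / 2).
    import math

    def agm(a, g, tol=1e-15):
        """Arithmetic-geometric mean of a and g."""
        while abs(a - g) > tol * a:
            a, g = (a + g) / 2.0, math.sqrt(a * g)
        return a

    def period_ratio(theta0):
        """T / T0 for amplitude theta0 in radians, with T0 = 2*pi*sqrt(L/g)."""
        k = math.sin(theta0 / 2.0)
        K = math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))
        return 2.0 * K / math.pi
    ```

    At θ0 = 90° the exact ratio is about 1.1803, i.e. an 18% lengthening of the period that the small-angle formula misses entirely.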

  13. On accurate boundary conditions for a shape sensitivity equation method (United States)

    Duvigneau, R.; Pelletier, D.


    This paper studies the application of the continuous sensitivity equation method (CSEM) for the Navier-Stokes equations in the particular case of shape parameters. Boundary conditions for shape parameters involve flow derivatives at the boundary. Thus, accurate flow gradients are critical to the success of the CSEM. A new approach is presented to extract accurate flow derivatives at the boundary. High order Taylor series expansions are used on layered patches in conjunction with a constrained least-squares procedure to evaluate accurate first and second derivatives of the flow variables at the boundary, required for Dirichlet and Neumann sensitivity boundary conditions. The flow and sensitivity fields are solved using an adaptive finite-element method. The proposed methodology is first verified on a problem with a closed form solution obtained by the Method of Manufactured Solutions. The ability of the proposed method to provide accurate sensitivity fields for realistic problems is then demonstrated. The flow and sensitivity fields for a NACA 0012 airfoil are used for fast evaluation of the nearby flow over an airfoil of different thickness (NACA 0015).
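
    The layered-patch idea of recovering boundary derivatives from a least-squares Taylor fit can be illustrated in one dimension; the sketch below (a simplified, unconstrained analogue, not the authors' implementation) fits a second-order expansion to nearby samples and reads off the first and second derivatives at the boundary point:

```python
def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system.
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def boundary_derivatives(xs, us, x0):
    """Least-squares fit of a 2nd-order Taylor polynomial
    u ~ c0 + c1*(x-x0) + c2*(x-x0)^2/2 to nearby samples;
    returns (du/dx, d2u/dx2) evaluated at the boundary point x0."""
    rows = [[1.0, x - x0, 0.5 * (x - x0) ** 2] for x in xs]
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Atb = [sum(r[i] * u for r, u in zip(rows, us)) for i in range(3)]
    c0, c1, c2 = solve3(AtA, Atb)
    return c1, c2

xs = [0.0, 0.1, 0.2, 0.3, 0.4]
us = [3 + 2 * x + 2 * x ** 2 for x in xs]   # exact quadratic test field
print(boundary_derivatives(xs, us, 0.0))    # ~(2.0, 4.0)
```

    The derivatives extracted this way feed the Dirichlet and Neumann sensitivity boundary conditions described above.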

  14. A Self-Instructional Device for Conditioning Accurate Prosody. (United States)

    Buiten, Roger; Lane, Harlan


    A self-instructional device for conditioning accurate prosody in second-language learning is described in this article. The Speech Auto-Instructional Device (SAID) is electro-mechanical and performs three functions: SAID (1) presents to the student tape-recorded pattern sentences that are considered standards in prosodic performance; (2) processes…

  15. Practical schemes for accurate forces in quantum Monte Carlo

    NARCIS (Netherlands)

    Moroni, S.; Saccani, S.; Filippi, C.


    While the computation of interatomic forces has become a well-established practice within variational Monte Carlo (VMC), the use of the more accurate Fixed-Node Diffusion Monte Carlo (DMC) method is still largely limited to the computation of total energies on structures obtained at a lower level of

  16. Bioaccessibility tests accurately estimate bioavailability of lead to quail (United States)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb, we incorporated Pb-contaminated soils or Pb acetate into diets for Japanese quail (Coturnix japonica), fed the quail for 15 days, and ...

  17. Accurate Simulations of Binary Black-Hole Mergers in Force-Free Electrodynamics

    CERN Document Server

    Alic, Daniela; Rezzolla, Luciano; Zanotti, Olindo; Jaramillo, Jose Luis


    We provide additional information on our recent study of the electromagnetic emission produced during the inspiral and merger of supermassive black holes when these are immersed in a force-free plasma threaded by a uniform magnetic field. As anticipated in a recent letter, our results show that although a dual-jet structure is present, the associated luminosity is ~ 100 times smaller than the total one, which is predominantly quadrupolar. We here discuss the details of our implementation of the equations in which the force-free condition is not implemented at a discrete level, but rather obtained via a damping scheme which drives the solution to satisfy the correct condition. We show that this is important for a correct and accurate description of the current sheets that can develop in the course of the simulation. We also study in greater detail the three-dimensional charge distribution produced as a consequence of the inspiral and show that during the inspiral it possesses a complex but ordered structure wh...

  18. Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work. (United States)

    Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet


    Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree planting work. 10-min seated rest periods were imposed during the workday to estimate the HR thermal component (ΔHRT) per Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step-test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. The VO2 estimates from measured HR and from corrected HR (thermal component removed) were compared to VO2 measured during work and rest. Varied levels of the HR thermal component (ΔHRTavg range: 0-38 bpm), originating from a wide range of ambient thermal conditions, thermal clothing insulation worn, and physical load exerted during work, were observed. Using raw HR significantly overestimated measured work VO2 by 30% on average (range: 1%-64%), and 74% of the VO2 prediction error variance was explained by the HR thermal component. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which can be implemented easily by the practitioner with inexpensive instruments.
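
    The correction amounts to shifting the measured work HR by the thermal component before applying an individual HR-VO2 calibration line; a toy Python sketch (the calibration points and values below are hypothetical, not from the study):

```python
# Hypothetical step-test calibration points (HR in bpm, VO2 in mL/kg/min).
step_hr  = [90.0, 110.0, 130.0, 150.0]
step_vo2 = [12.0, 18.0, 24.0, 30.0]

def linfit(x, y):
    # Ordinary least-squares line: returns (intercept, slope).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def vo2_from_hr(hr_work, delta_hr_thermal):
    """Estimate work VO2 from the individual HR-VO2 calibration line,
    after removing the thermal component from the measured work HR."""
    a, b = linfit(step_hr, step_vo2)
    return a + b * (hr_work - delta_hr_thermal)

# Raw HR of 140 bpm with a 20-bpm thermal component: estimate uses 120 bpm.
print(vo2_from_hr(140.0, 20.0))   # 21.0 with this calibration line
```

    With these toy numbers, ignoring the thermal component (ΔHRT = 0) would inflate the estimate to 27.0, illustrating the overestimation reported above.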

  19. Fast and accurate solution of the Poisson equation in an immersed setting

    CERN Document Server

    Marques, Alexandre Noll; Rosales, Rodolfo Ruben


    We present a fast and accurate algorithm for the Poisson equation in complex geometries, using regular Cartesian grids. We consider a variety of configurations, including Poisson equations with interfaces across which the solution is discontinuous (of the type arising in multi-fluid flows). The algorithm is based on a combination of the Correction Function Method (CFM) and Boundary Integral Methods (BIM). Interface and boundary conditions can be treated in a fast and accurate manner using boundary integral equations and the associated BIM. Unfortunately, BIM can be costly when the solution is needed everywhere in a grid, e.g., in fluid flow problems. We use the CFM to circumvent this issue. The solution from the BIM is used to rewrite the problem as a series of Poisson equations in rectangular domains, which requires the BIM solution at interfaces/boundaries only. These Poisson equations involve discontinuities at interfaces, of the type that the CFM can handle. Hence we use the CFM to solve them (to high ord...

  20. Accurate and efficient computation of nonlocal potentials based on Gaussian-sum approximation (United States)

    Exl, Lukas; Mauser, Norbert J.; Zhang, Yong


    We introduce an accurate and efficient method for the numerical evaluation of nonlocal potentials, including the 3D/2D Coulomb, 2D Poisson and 3D dipole-dipole potentials. Our method is based on a Gaussian-sum approximation of the singular convolution kernel combined with a Taylor expansion of the density. Starting from the convolution formulation of the nonlocal potential, for smooth and fast-decaying densities we make full use of the Fourier pseudospectral (plane wave) approximation of the density and a separable Gaussian-sum approximation of the kernel in an interval where the singularity (the origin) is excluded. The potential is separated into a regular integral and a near-field singular correction integral. The first is computed with the Fourier pseudospectral method, while the latter is well resolved utilizing a low-order Taylor expansion of the density. Both parts are accelerated by fast Fourier transforms (FFT). The method is accurate (14-16 digits), efficient (O(N log N) complexity), low in storage, easily adaptable to different kernels, applicable to anisotropic densities, and highly parallelizable.

  1. RNASequel: accurate and repeat tolerant realignment of RNA-seq reads. (United States)

    Wilson, Gavin W; Stein, Lincoln D


    RNA-seq is a key technology for understanding the biology of the cell because of its ability to profile transcriptional and post-transcriptional regulation at single nucleotide resolutions. Compared to DNA sequencing alignment algorithms, RNA-seq alignment algorithms have a diminished ability to accurately detect and map base pair substitutions, gaps, discordant pairs and repetitive regions. These shortcomings adversely affect experiments that require a high degree of accuracy, notably the ability to detect RNA editing. We have developed RNASequel, a software package that runs as a post-processing step in conjunction with an RNA-seq aligner and systematically corrects common alignment artifacts. Its key innovations are a two-pass splice junction alignment system that includes de novo splice junctions and the use of an empirically determined estimate of the fragment size distribution when resolving read pairs. We demonstrate that RNASequel produces improved alignments when used in conjunction with STAR or Tophat2 using two simulated datasets. We then show that RNASequel improves the identification of adenosine to inosine RNA editing sites on biological datasets. This software will be useful in applications requiring the accurate identification of variants in RNA sequencing data, the discovery of RNA editing sites and the analysis of alternative splicing.

  2. A Novel Method for Accurate Operon Predictions in All Sequenced Prokaryotes

    Energy Technology Data Exchange (ETDEWEB)

    Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.; Arkin, Adam P.


    We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.
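
    The distance-only component of such a predictor can be caricatured in a few lines; the sketch below (a naive threshold rule, not the paper's trained model) flags adjacent same-strand genes with short intergenic gaps as same-operon candidates:

```python
# Each gene: (start, end, strand); toy list sorted by start coordinate.
genes = [(100, 400, '+'), (420, 900, '+'), (1500, 2000, '+'), (2100, 2600, '-')]

def predict_operon_pairs(genes, max_gap=50):
    """Naive sketch: adjacent genes on the same strand separated by a short
    intergenic distance are predicted to lie in the same operon."""
    pairs = []
    for (s1, e1, st1), (s2, e2, st2) in zip(genes, genes[1:]):
        gap = s2 - e1
        pairs.append(st1 == st2 and gap <= max_gap)
    return pairs

print(predict_operon_pairs(genes))   # [True, False, False]
```

    The paper's genome-specific tailoring corresponds to learning the gap threshold (and combining it with comparative genomic signals) rather than fixing it by hand.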

  3. Novel micelle PCR-based method for accurate, sensitive and quantitative microbiota profiling (United States)

    Boers, Stefan A.; Hays, John P.; Jansen, Ruud


    In the last decade, many researchers have embraced 16S rRNA gene sequencing techniques, which has led to a wealth of publications and documented differences in the composition of microbial communities derived from many different ecosystems. However, comparison between different microbiota studies is currently very difficult due to the lack of a standardized 16S rRNA gene sequencing protocol. Here we report on a novel approach employing micelle PCR (micPCR) in combination with an internal calibrator that allows for standardization of microbiota profiles via their absolute abundances. The addition of an internal calibrator allows the researcher to express the resulting operational taxonomic units (OTUs) as a measure of 16S rRNA gene copies by correcting the number of sequences of each individual OTU in a sample for efficiency differences in the NGS process. Additionally, accurate quantification of OTUs obtained from negative extraction control samples allows for the subtraction of contaminating bacterial DNA derived from the laboratory environment or chemicals/reagents used. Using equimolar synthetic microbial community samples and low-biomass clinical samples, we demonstrate that the calibrated micPCR/NGS methodology possesses much higher precision and a lower limit of detection compared with traditional PCR/NGS, resulting in more accurate microbiota profiles suitable for multi-study comparison. PMID:28378789
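
    The internal-calibrator correction reduces to a simple rescaling of read counts by the known spiked-in copy number; a hypothetical Python sketch (names and numbers are illustrative, not from the paper):

```python
def absolute_abundances(otu_reads, calibrator_reads, calibrator_copies):
    """Convert OTU read counts to 16S rRNA gene copies using an internal
    calibrator spiked in at a known copy number (illustrative sketch)."""
    copies_per_read = calibrator_copies / calibrator_reads
    return {otu: reads * copies_per_read for otu, reads in otu_reads.items()}

sample = {"OTU_1": 5000, "OTU_2": 1250}
# 10,000 calibrator copies added; 2,500 calibrator reads recovered.
print(absolute_abundances(sample, 2500, 10000))
# {'OTU_1': 20000.0, 'OTU_2': 5000.0}
```

    Because each sample carries its own calibrator, the same conversion also makes contamination estimated from negative extraction controls subtractable on an absolute scale.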

  4. Pulse compressor with aberration correction

    Energy Technology Data Exchange (ETDEWEB)

    Mankos, Marian [Electron Optica, Inc., Palo Alto, CA (United States)


    In this SBIR project, Electron Optica, Inc. (EOI) is developing an electron mirror-based pulse compressor attachment to new and retrofitted dynamic transmission electron microscopes (DTEMs) and ultrafast electron diffraction (UED) cameras for improving the temporal resolution of these instruments from the characteristic range of a few picoseconds to a few nanoseconds down into the sub-100 femtosecond range. The improvement will enable electron microscopes and diffraction cameras to better resolve the dynamics of reactions in the areas of solid state physics, chemistry, and biology. EOI's pulse compressor technology utilizes the combination of electron mirror optics and a magnetic beam separator to compress the electron pulse. The design exploits the symmetry inherent in reversing the electron trajectory in the mirror in order to compress the temporally broadened beam. This system also simultaneously corrects the chromatic and spherical aberration of the objective lens for improved spatial resolution. This correction will be valuable as the source size is reduced with laser-triggered point source emitters. With such emitters, it might be possible to significantly reduce the illuminated area and carry out ultrafast diffraction experiments from small regions of the sample, e.g. from individual grains or nanoparticles. During phase I, EOI drafted a set of candidate pulse compressor architectures and evaluated the trade-offs between temporal resolution and electron bunch size to achieve the optimum design for two particular applications with market potential: increasing the temporal and spatial resolution of UEDs, and increasing the temporal and spatial resolution of DTEMs. Specialized software packages that have been developed by MEBS, Ltd. were used to calculate the electron optical properties of the key pulse compressor components: namely, the magnetic prism, the electron mirror, and the electron lenses. In the final step, these results were folded

  5. On the importance of having accurate data for astrophysical modelling (United States)

    Lique, Francois


    The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation for wavelengths ranging from the far infrared to the sub-millimeter, with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data, and I will show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for molecular line modelling beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star-forming conditions, have allowed solving the problem of their respective abundances in cold molecular clouds. I will also present the latest collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present recent work on ortho-para-H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.

  6. Correction of oral contrast artifacts in CT-based attenuation correction of PET images using an automated segmentation algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ahmadian, Alireza; Ay, Mohammad R.; Sarkar, Saeed [Medical Sciences/University of Tehran, Research Center for Science and Technology in Medicine, Tehran (Iran); Medical Sciences/University of Tehran, Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran (Iran); Bidgoli, Javad H. [Medical Sciences/University of Tehran, Research Center for Science and Technology in Medicine, Tehran (Iran); East Tehran Azad University, Department of Electrical and Computer Engineering, Tehran (Iran); Zaidi, Habib [Geneva University Hospital, Division of Nuclear Medicine, Geneva (Switzerland)


    Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis, as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium as high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μ-map), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high-CT-number object segmentation using combined region- and boundary-based segmentation; and second, object classification into bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled, followed by Gaussian smoothing to match the resolution of the PET images. A piecewise calibration curve is then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of the generated μ-maps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions, depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique.
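
    The final CT-to-attenuation step is commonly implemented as a piecewise (bilinear) mapping from CT numbers to 511-keV attenuation coefficients; a hedged sketch (the slopes below are illustrative placeholders, not the paper's calibration curve):

```python
def mu_511kev(hu, mu_water=0.096, k_bone=0.5):
    """Piecewise (bilinear) conversion of CT numbers (HU) to linear
    attenuation coefficients (cm^-1) at 511 keV.  The water value and the
    bone-region slope here are illustrative, not the paper's calibration."""
    if hu <= 0:                          # air-water mixture region
        return mu_water * (1.0 + hu / 1000.0)
    return mu_water * (1.0 + k_bone * hu / 1000.0)  # water-bone region

print(mu_511kev(-1000))  # ~0.0 (air)
print(mu_511kev(0))      # 0.096 (water)
```

    Substituting contrast-medium pixels with effective bone CT numbers before this conversion is what prevents the overestimated μ values described above.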

  7. Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models

    Directory of Open Access Journals (Sweden)

    Stovgaard Kasper


    Background: Genome sequencing projects have expanded the gap between the number of known protein sequences and structures. The limitations of current high-resolution structure determination methods make it unlikely that this gap will disappear in the near future. Small angle X-ray scattering (SAXS) is an established low-resolution method for routinely determining the structure of proteins in solution. The purpose of this study is to develop a method for the efficient calculation of accurate SAXS curves from coarse-grained protein models. Such a method can, for example, be used to construct a likelihood function, which is paramount for structure determination based on statistical inference. Results: We present a method for the efficient calculation of accurate SAXS curves based on the Debye formula and a set of scattering form factors for dummy-atom representations of amino acids. Such a method avoids the computationally costly iteration over all atoms. We estimated the form factors using generated data from a set of high-quality protein structures. No ad hoc scaling or correction factors are applied in the calculation of the curves. Two coarse-grained representations of protein structure were investigated; two scattering bodies per amino acid led to significantly better results than a single scattering body. Conclusion: We show that the obtained point estimates allow the calculation of accurate SAXS curves from coarse-grained protein models. The resulting curves are on par with the current state-of-the-art program CRYSOL, which requires full atomic detail. Our method was also comparable to CRYSOL in recognizing native structures among native-like decoys. As a proof of concept, we combined the coarse-grained Debye calculation with a previously described probabilistic model of protein structure, TorusDBN. This resulted in a significant improvement in decoy recognition performance. In conclusion, the presented method shows great promise for
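
    The Debye sum at the heart of the method is straightforward to state; a minimal Python sketch (toy scattering bodies with unit form factors, not the fitted dummy-atom form factors of the paper):

```python
import math

def debye_intensity(q, bodies):
    """Debye formula: I(q) = sum_ij f_i f_j sin(q r_ij)/(q r_ij), summed
    over all pairs of scattering bodies ((x, y, z), form factor f)."""
    I = 0.0
    for (p1, f1) in bodies:
        for (p2, f2) in bodies:
            x = q * math.dist(p1, p2)
            I += f1 * f2 * (math.sin(x) / x if x > 1e-12 else 1.0)
    return I

# Two unit scatterers 1 apart: I(q) = 2 + 2 sin(q)/q -> 4 as q -> 0.
bodies = [((0.0, 0.0, 0.0), 1.0), ((1.0, 0.0, 0.0), 1.0)]
print(debye_intensity(1e-6, bodies))   # ~4.0
print(debye_intensity(2.0, bodies))    # 2 + sin(2), ~2.909
```

    Coarse-graining reduces the cost because the double sum runs over one or two bodies per residue instead of over all atoms.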

  8. Accurate mass replacement method for the sediment concentration measurement with a constant volume container (United States)

    Ban, Yunyun; Chen, Tianqin; Yan, Jun; Lei, Tingwu


    The measurement of sediment concentration in water is of great importance in soil erosion research and in soil and water loss monitoring systems. The traditional weighing method has long been the foundation of all the other measuring methods and of instrument calibration. The development of a new method to replace the traditional oven-drying method is of interest in research and practice for the quick and efficient measurement of sediment concentration, especially in field measurements. A new method is advanced in this study for accurately measuring the sediment concentration based on the accurate measurement of the mass of the sediment-water mixture in a confined constant volume container (CVC). A sediment-laden water sample is put into the CVC to determine its mass before the CVC is filled with water and weighed again for the total mass of the water and sediments in the container. The known volume of the CVC, the mass of sediment-laden water, and the sediment particle density are used to calculate the mass of water that is replaced by sediments, from which the sediment concentration of the sample is calculated. The influence of water temperature was corrected for by measuring the water temperature to determine water density before measurements were conducted. The CVC was used to eliminate the surface-tension effect so as to obtain the accurate volume of the water and sediment mixture. Experimental results showed that the method was capable of measuring sediment concentrations from 0.5 up to 1200 kg m⁻³. A good linear relationship existed between the designed and measured sediment concentrations, with all coefficients of determination greater than 0.999 and an average relative error of less than 0.2%. All of this indicates that the new method is capable of measuring the full range of sediment concentrations above 0.5 kg m⁻³ and can replace the traditional oven-drying method as a standard method for evaluating and calibrating other methods.
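
    The mass-replacement inversion follows from the fact that each kilogram of sediment displaces 1/ρs cubic meters of water, so the filled container weighs more than a water-only container by m_s·(1 − ρw/ρs); a Python sketch under assumed densities (ρw = 1000, ρs = 2650 kg m⁻³):

```python
def sediment_mass(m_total_filled, v_cvc, rho_w=1000.0, rho_s=2650.0):
    """Mass of sediment (kg) in a constant volume container (CVC) of volume
    v_cvc (m^3), weighed after being topped up with water.  The excess mass
    over a water-only container is m_s * (1 - rho_w / rho_s)."""
    excess = m_total_filled - rho_w * v_cvc
    return excess / (1.0 - rho_w / rho_s)

# Forward check: 0.1 kg sediment + 0.9 kg water sample in a 2 L CVC.
v_sample = 0.9 / 1000.0 + 0.1 / 2650.0       # sample volume (m^3)
m_filled = 1.0 + (0.002 - v_sample) * 1000.0  # total mass once topped up
print(sediment_mass(m_filled, 0.002))         # ~0.1
```

    In practice ρw would come from the measured water temperature, which is exactly the temperature correction described above.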

  9. Rulison Site corrective action report

    Energy Technology Data Exchange (ETDEWEB)



    Project Rulison was a joint US Atomic Energy Commission (AEC) and Austral Oil Company (Austral) experiment, conducted under the AEC's Plowshare Program, to evaluate the feasibility of using a nuclear device to stimulate natural gas production in low-permeability gas-producing geologic formations. The experiment was conducted on September 10, 1969, and consisted of detonating a 40-kiloton nuclear device at a depth of 2,568 m below ground surface (BGS). This Corrective Action Report describes the cleanup of petroleum hydrocarbon- and heavy-metal-contaminated sediments from an old drilling effluent pond and characterization of the mud pits used during drilling of the R-EX well at the Rulison Site. The Rulison Site is located approximately 65 kilometers (40 miles) northeast of Grand Junction, Colorado. The effluent pond was used for the storage of drilling mud during drilling of the emplacement hole for the 1969 gas stimulation test conducted by the AEC. This report also describes the activities performed to determine whether contamination is present in mud pits used during the drilling of well R-EX, the gas production well drilled at the site to evaluate the effectiveness of the detonation in stimulating gas production. The investigation activities described in this report were conducted during the autumn of 1995, concurrent with the cleanup of the drilling effluent pond. This report describes the activities performed during the soil investigation and provides the analytical results for the samples collected during that investigation.

  10. Self-correction coil: operation mechanism of self-correction coil

    Energy Technology Data Exchange (ETDEWEB)

    Hosoyama, K.


    We discuss here the operation mechanism of the self-correction coil with a simple model. At the first stage, for the ideal self-correction coil, we calculate the self-inductance L of the self-correction coil, the mutual inductance M between the error field coil and the self-correction coil, and, using the model, the current induced in the self-correction coil by the external magnetic error field and the magnetic field induced by the self-correction coil. At the second stage, we extend this calculation method to the non-ideal self-correction coil case, where we find that the wire distribution of the self-correction coil is important for obtaining a high enough self-correction effect. As a measure of the completeness of the self-correction effect, we introduce the efficiency eta of the self-correction coil, defined as the ratio of the magnetic field induced by the self-correction coil to the error field. As examples, we calculate L, M, and eta for two cases: one is a single-block approximation of the self-correction coil winding and the other is a two-block approximation. By choosing adequate angles of the self-correction coil winding, we can get about 98% efficiency for the single-block approximation case and 99.8% for the two-block approximation case. This means that by using the self-correction coil we can improve the field quality by about two orders of magnitude.
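
    A minimal flux-conservation model captures the mechanism: the shorted superconducting correction coil keeps its total flux constant, so L·I + M·I_err = 0 and the induced current opposes the error field. The sketch below uses illustrative numbers and field constants of my own choosing, not values from the paper:

```python
def correction_efficiency(L, M, k_sc, k_err):
    """Flux conservation in the shorted superconducting correction coil,
    L*I + M*I_err = 0, gives induced current I = -M*I_err/L.  With field
    constants k (field per unit current), the cancellation efficiency is
    eta = |k_sc * I| / |k_err * I_err| = k_sc * M / (k_err * L).
    All numbers below are illustrative, not from the paper."""
    return k_sc * M / (k_err * L)

# Toy numbers: eta near 1 means the error field is almost fully cancelled.
print(correction_efficiency(L=1.0e-3, M=4.9e-4, k_sc=2.0e-2, k_err=1.0e-2))  # 0.98
```

    In this picture, improving the winding distribution raises M and k_sc together, which is how the two-block winding reaches the higher efficiency quoted above.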

  11. Processing of airborne laser scanning data to generate accurate DTM for floodplain wetland (United States)

    Szporak-Wasilewska, Sylwia; Mirosław-Świątek, Dorota; Grygoruk, Mateusz; Michałowski, Robert; Kardel, Ignacy


    The structure of a floodplain, especially its topography and vegetation, influences the overland flow and dynamics of floods, which are key factors shaping ecosystems in surface water-fed wetlands. Therefore, elaboration of a digital terrain model (DTM) of high spatial accuracy is crucial in hydrodynamic flow modelling in river valleys. In this study the research was conducted in a unique Central European complex of fens and marshes: the Lower Biebrza river valley. The area is represented mainly by peat ecosystems, which according to the EU Water Framework Directive (WFD) are called "water-dependent ecosystems". Development of an accurate DTM in these areas, which are overgrown by dense wetland vegetation consisting of alder forest, willow shrubs, reed, sedges and grass, is very difficult; therefore, to represent the terrain with high accuracy, airborne laser scanning (ALS) data with a scanning density of 4 points/m² were used and a correction of the "vegetation effect" on the DTM was executed. This correction was performed utilizing remotely sensed images, a topographical survey using Real Time Kinematic positioning, and vegetation height measurements. In order to classify different types of vegetation within the research area, object-based image analysis (OBIA) was used. OBIA allowed partitioning remotely sensed imagery into meaningful image-objects and assessing their characteristics through spatial and spectral scale. The final maps of vegetation patches, which include attributes of vegetation height and vegetation spectral properties, utilized both the laser scanning data and the vegetation indices developed on the basis of airborne and satellite imagery. These data were used in the process of segmentation, attribution and classification. Several different vegetation indices were tested to distinguish different types of vegetation in the wetland area. The OBIA classification allowed correction of the "vegetation effect" on the DTM.
The final digital terrain model was compared and examined
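
    The "vegetation effect" correction can be caricatured as a per-class height subtraction over the gridded terrain model; a toy Python sketch with placeholder class offsets (not the values derived for the Biebrza site):

```python
# Per-class mean vegetation height offsets (m) -- placeholder values,
# not those measured for the Lower Biebrza site.
veg_offset = {"reed": 1.8, "sedge": 0.4, "willow_shrub": 2.5, "bare": 0.0}

def correct_dtm(raw_dtm, classes):
    """Remove the 'vegetation effect' from a laser-scanning terrain model
    by subtracting a per-class vegetation height offset from each cell."""
    return [[z - veg_offset[c] for z, c in zip(zrow, crow)]
            for zrow, crow in zip(raw_dtm, classes)]

raw = [[102.1, 101.9], [101.2, 100.7]]
cls = [["reed", "sedge"], ["bare", "willow_shrub"]]
print(correct_dtm(raw, cls))  # top-left cell: 102.1 - 1.8 = ~100.3
```

    In the study the class map comes from the OBIA classification and the offsets from field vegetation height measurements, rather than from a hand-written table.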

  12. Accurate and precise determination of critical properties from Gibbs ensemble Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Dinpajooh, Mohammadhasan [Department of Chemistry and Chemical Theory Center, University of Minnesota, 207 Pleasant Street SE, Minneapolis, Minnesota 55455 (United States); Bai, Peng; Allan, Douglas A. [Department of Chemical Engineering and Materials Science, University of Minnesota, 421 Washington Avenue SE, Minneapolis, Minnesota 55455 (United States); Siepmann, J. Ilja, E-mail: [Department of Chemistry and Chemical Theory Center, University of Minnesota, 207 Pleasant Street SE, Minneapolis, Minnesota 55455 (United States); Department of Chemical Engineering and Materials Science, University of Minnesota, 421 Washington Avenue SE, Minneapolis, Minnesota 55455 (United States)


    Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor–liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region, varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ, and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields T_c = 1.3128 ± 0.0016, ρ_c = 0.316 ± 0.004, and p_c = 0.1274 ± 0.0013, in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρ_t ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using r_cut = 3.5σ yield T_c and p_c that are higher by 0.2% and 1.4% than simulations with r_cut = 5 and 8σ, but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that r_cut = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard
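
    The near-critical extrapolation typically combines the scaling law for the coexistence density difference with the law of rectilinear diameters; a Python sketch with synthetic coexistence data (an illustration of these standard fits, not the paper's analysis):

```python
def linfit(x, y):
    # Ordinary least-squares line: returns (intercept, slope).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def critical_point(T, rho_l, rho_v, beta=0.325):
    """Estimate T_c and rho_c from coexistence densities via the scaling
    law (rho_l - rho_v) ~ B*(T_c - T)^beta and the law of rectilinear
    diameters (rho_l + rho_v)/2 = rho_c + A*(T_c - T)."""
    # (rho_l - rho_v)^(1/beta) = B^(1/beta)*(T_c - T) is linear in T.
    y = [(l - v) ** (1.0 / beta) for l, v in zip(rho_l, rho_v)]
    a, b = linfit(T, y)
    Tc = -a / b
    d = [(l + v) / 2.0 for l, v in zip(rho_l, rho_v)]
    a2, b2 = linfit(T, d)
    return Tc, a2 + b2 * Tc

# Synthetic data generated from Tc = 1.312, rho_c = 0.316 recovers both.
Tc_true, rc_true, B, A = 1.312, 0.316, 0.5, 0.1
T = [1.27, 1.28, 1.29, 1.30]
rho_l = [rc_true + A * (Tc_true - t) + 0.5 * B * (Tc_true - t) ** 0.325 for t in T]
rho_v = [rc_true + A * (Tc_true - t) - 0.5 * B * (Tc_true - t) ** 0.325 for t in T]
print(critical_point(T, rho_l, rho_v))   # ~(1.312, 0.316)
```

    In the paper the inputs are the GEMC box densities with statistical uncertainties, so the fitted critical constants carry the quoted error bars.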

  13. A simple and efficient dispersion correction to the Hartree-Fock theory (2): Incorporation of a geometrical correction for the basis set superposition error. (United States)

    Yoshida, Tatsusada; Hayashi, Takahisa; Mashima, Akira; Chuman, Hiroshi


    One of the most challenging problems in computer-aided drug discovery is the accurate prediction of the binding energy between a ligand and a protein. For accurate estimation of the net binding energy ΔEbind in the framework of the Hartree-Fock (HF) theory, it is necessary to estimate two additional energy terms: the dispersion interaction energy (Edisp) and the basis set superposition error (BSSE). We previously reported a simple and efficient dispersion correction, Edisp, to the Hartree-Fock theory (HF-Dtq). In the present study, an approximation procedure for estimating BSSE proposed by Kruse and Grimme, the geometrical counterpoise correction (gCP), was incorporated into HF-Dtq (HF-Dtq-gCP). The relative weights of the Edisp (Dtq) and BSSE (gCP) terms were determined to reproduce ΔEbind calculated with CCSD(T)/CBS or /aug-cc-pVTZ (HF-Dtq-gCP (scaled)). The performance of HF-Dtq-gCP (scaled) was compared with that of B3LYP-D3(BJ)-bCP (dispersion-corrected B3LYP with the Boys and Bernardi counterpoise correction (bCP)), taking ΔEbind (CCSD(T)-bCP) of small non-covalent complexes as a 'gold standard'. As a critical test, HF-Dtq-gCP (scaled)/6-31G(d) and B3LYP-D3(BJ)-bCP/6-31G(d) were applied to the complex model of HIV-1 protease and its potent inhibitor, KNI-10033. The present results demonstrate that HF-Dtq-gCP (scaled) is a useful and powerful remedy for accurately and promptly predicting ΔEbind between a ligand and a protein, despite being a simple correction procedure.

  14. Carbon-wire loop based artifact correction outperforms post-processing EEG/fMRI corrections--A validation of a real-time simultaneous EEG/fMRI correction method. (United States)

    van der Meer, Johan N; Pampel, André; Van Someren, Eus J W; Ramautar, Jennifer R; van der Werf, Ysbrand D; Gomez-Herrero, German; Lepsien, Jöran; Hellrung, Lydia; Hinrichs, Hermann; Möller, Harald E; Walter, Martin


    Simultaneous EEG-fMRI combines two powerful neuroimaging techniques, but the EEG signal suffers from severe artifacts in the MRI environment that are difficult to remove. These are the MR scanning artifact and the blood-pulsation artifact--strategies to remove them are a topic of ongoing research. Additionally, large, unsystematic artifacts are produced across the full frequency spectrum by the magnet's helium pump (and ventilator) systems, which are notoriously hard to remove. As a consequence, experimenters routinely deactivate the helium pump during simultaneous EEG-fMRI acquisitions, which potentially risks damaging the MRI system and necessitates more frequent and expensive helium refills. We present a novel correction method addressing both the helium pump and ballistocardiogram (BCG) artifacts, consisting of carbon-wire loops (CWL) as additional sensors to accurately track unpredictable artifacts related to subtle movements in the scanner, and an EEGLAB plugin to perform artifact correction. We compare signal-to-noise metrics of EEG data, corrected with CWL and three conventional correction methods, for helium pump off and on measurements. Because the CWL setup records signals in real-time, it fits the requirements of applications where immediate correction is necessary, such as neuro-feedback applications or stimulation time-locked to specific sleep oscillations. The comparison metrics in this paper relate to: (1) the EEG signal itself, (2) the "eyes open vs. eyes closed" effect, and (3) an assessment of how the artifact corrections impact the ability to perform meaningful correlations between EEG alpha power and the BOLD signal. Results show that the CWL correction removes the helium pump artifact and also produces EEG data more comparable to EEG obtained outside the magnet than conventional post-processing methods.

  15. Automatic Power Factor Correction Using Capacitive Bank

    Directory of Open Access Journals (Sweden)

    Anant Kumar Tiwari


    Full Text Available The power factor correction of electrical loads is a problem common to all industrial companies. Earlier, power factor correction was done by adjusting the capacitive bank manually [1]. The automated power factor corrector (APFC) using a capacitive load bank is helpful in providing power factor correction. The proposed automated system measures the power factor of the load using a microcontroller. This auto-adjustable power factor correction is designed to ensure that the entire power system always preserves unity power factor. The software and hardware required to implement the suggested automatic power factor correction scheme are explained and its operation is described. APFC thus decreases the time taken to correct the power factor, which helps to increase efficiency.
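    The sizing calculation behind such a capacitive bank follows from the power triangle: the reactive power the bank must inject is Q_c = P·(tan φ1 − tan φ2), where cos φ1 is the measured power factor and cos φ2 the target. A minimal sketch with illustrative load values (not from the article):

```python
# Capacitor-bank sizing for power factor correction (illustrative values).
import math

def correction_kvar(p_kw, pf_initial, pf_target=1.0):
    """Reactive power (kVAR) the capacitor bank must supply."""
    phi1 = math.acos(pf_initial)   # load phase angle before correction
    phi2 = math.acos(pf_target)    # desired phase angle after correction
    return p_kw * (math.tan(phi1) - math.tan(phi2))

q = correction_kvar(100.0, 0.70, 0.95)  # 100 kW load, 0.70 -> 0.95
print(f"required bank size: {q:.1f} kVAR")
```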

  16. Proof-Carrying Code with Correct Compilers (United States)

    Appel, Andrew W.


    In the late 1990s, proof-carrying code was able to produce machine-checkable safety proofs for machine-language programs even though (1) it was impractical to prove correctness properties of source programs and (2) it was impractical to prove correctness of compilers. But now it is practical to prove some correctness properties of source programs, and it is practical to prove correctness of optimizing compilers. We can produce more expressive proof-carrying code, that can guarantee correctness properties for machine code and not just safety. We will construct program logics for source languages, prove them sound w.r.t. the operational semantics of the input language for a proved-correct compiler, and then use these logics as a basis for proving the soundness of static analyses.

  17. Effectiveness of Corrective Feedback on Writing

    Institute of Scientific and Technical Information of China (English)



      This study aims to find out the effectiveness of corrective feedback on ESL writing. By reviewing and analyzing six previous research studies, the author tries to reveal the most effective way to provide corrective feedback for L2 students and the factors that impact the processing of error feedback. Findings indicated that corrective feedback helps students to improve ESL writing in both accuracy and fluency. Furthermore, correction and direct corrective feedback, as well as oral and written meta-linguistic explanation, are the most effective ways to help students improve their writing. However, individual learner differences influence the processing of corrective feedback. Finally, limitations of the present study are discussed and suggestions for future research are made.

  18. Visual texture accurate material appearance measurement, representation and modeling

    CERN Document Server

    Haindl, Michal


    This book surveys the state of the art in multidimensional, physically-correct visual texture modeling. Features: reviews the entire process of texture synthesis, including material appearance representation, measurement, analysis, compression, modeling, editing, visualization, and perceptual evaluation; explains the derivation of the most common representations of visual texture, discussing their properties, advantages, and limitations; describes a range of techniques for the measurement of visual texture, including BRDF, SVBRDF, BTF and BSSRDF; investigates the visualization of textural info

  19. Loop corrections to pion and kaon neutrinoproduction

    CERN Document Server

    Siddikov, Marat


    In this paper we study the next-to-leading order corrections to deeply virtual pion and kaon production in neutrino experiments. We estimate these corrections in the kinematics of the MINERvA experiment at Fermilab, and find that they are sizable, increasing the leading-order cross-section by up to a factor of two. We provide a code which can be used for the evaluation of the cross-sections, taking into account these corrections and employing various GPD models.

  20. Illumination correction in psoriasis lesions images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær


    An approach to automatically correct illumination problems in dermatological images is presented. The illumination function is estimated after combining the thematic map indicating skin--produced by an automated classification scheme--with the dermatological image data. The user is only required to specify the class for which its thematic map is most suitable to be used in the illumination correction. Results are shown for real examples. It is also shown that the classification output improves after illumination correction.

  1. Experimental demonstration of topological error correction



    Scalable quantum computing can only be achieved if qubits are manipulated fault-tolerantly. Topological error correction - a novel method which combines topological quantum computing and quantum error correction - possesses the highest known tolerable error rate for a local architecture. This scheme makes use of cluster states with topological properties and requires only nearest-neighbour interactions. Here we report the first experimental demonstration of topological error correction with a...

  2. Spatial Light Modulator for wavefront correction

    CERN Document Server

    Vyas, Akondi; Banyal, Ravinder Kumar; Prasad, B Raghavendra


    We present a liquid crystal method of correcting the phase of an aberrated wavefront using a spatial light modulator. A simple and efficient lab model has been demonstrated for wavefront correction. The crux of a wavefront correcting system in adaptive optics lies in the speed and the image quality that can be achieved. The speed and accuracy of wavefront representation using Zernike polynomials are presented, based on a very fast method of computation.

  3. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    Directory of Open Access Journals (Sweden)

    Jianhua Zhang


    Full Text Available This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and the multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between the views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance, and can be further applied to EEG source localization applications on the human brain.

  4. Accurate multireference study of Si3 electronic manifold

    CERN Document Server

    Goncalves, Cayo Emilio Monteiro; Braga, Joao Pedro


    Since it has been shown that the silicon trimer has a highly multi-reference character, accurate multi-reference configuration interaction calculations are performed to elucidate its electronic manifold. Emphasis is given to the long-range part of the potential, aiming to understand the dynamical aspects of atom-diatom collisions and to describe conical intersections and important saddle points along the reactive path. Analyses of the main features of the potential energy surface are performed for benchmarking, and highly accurate values for structures, vibrational constants and energy gaps are reported, as well as the previously unpublished spin-orbit coupling magnitude. The results predict that inter-system crossings will play an important role in dynamical simulations, especially in triplet state quenching, making the problem of constructing a precise potential energy surface more complicated and multi-layer dependent. The ground state is predicted to be the singlet one, but since the singlet-triplet gap is rather small (2.448 kJ/mol) bo...

  5. A fast and accurate method for echocardiography strain rate imaging (United States)

    Tavakoli, Vahid; Sahba, Nima; Hajebi, Nima; Nambakhsh, Mohammad Saleh


    Strain and strain rate imaging have recently proved their superiority over classical motion estimation methods in myocardial evaluation, as a novel technique for quantitative analysis of myocardial function. In this paper, we propose a novel strain rate imaging algorithm using a new optical flow technique which is more rapid and accurate than previous correlation-based methods. The new method presumes spatiotemporal constancy of the intensity and magnitude of the image, and makes use of the spline moment in a multiresolution approach. Moreover, the cardiac central point is obtained using a combination of the center of mass and endocardial tracking. It is shown that the proposed method helps overcome the intensity variations of ultrasound texture while preserving the ability of the motion estimation technique for different motions and orientations. Evaluation is performed on simulated, phantom (a contractile rubber balloon) and real sequences, and proves that this technique is more accurate and faster than previous methods.

  6. Accurate speed and slip measurement of induction motors

    Energy Technology Data Exchange (ETDEWEB)

    Ho, S.Y.S.; Langman, R. [Tasmania Univ., Hobart, TAS (Australia)


    Two alternative hardware circuits, for the accurate measurement of low slip in cage induction motors, are discussed. Both circuits compare the periods of the fundamental of the supply frequency and pulses from a shaft-connected toothed wheel. The better of the two achieves accuracy to 0.5 percent of slip over the range 0.1 to 0.005, or better than 0.001 percent of speed over the range. This method is considered useful for slip measurement of motors supplied by either constant-frequency mains or variable-speed controllers with PWM waveforms. It is demonstrated that accurate slip measurement supports the conclusions of work previously done on the detection of broken rotor bars. (author). 1 tab., 6 figs., 13 refs.
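    The arithmetic such circuits implement is simple once the two periods are measured: synchronous speed follows from the supply frequency and pole count, rotor speed from the toothed-wheel pulse rate, and slip is their normalized difference. A minimal sketch with illustrative values (pole count, tooth count and pulse rate are not taken from the paper):

```python
# Slip computation from supply frequency and shaft-encoder pulses
# (illustrative parameters).

def slip(f_supply_hz, pulses_per_s, teeth, poles):
    n_sync = 120.0 * f_supply_hz / poles   # synchronous speed, rpm
    n_rotor = 60.0 * pulses_per_s / teeth  # measured shaft speed, rpm
    return (n_sync - n_rotor) / n_sync

# 4-pole motor on 50 Hz mains, 60-tooth wheel, rotor at 1485 rpm
s = slip(50.0, 1485.0, 60, 4)
print(f"slip = {s:.3f}")  # 1500 rpm sync vs 1485 rpm rotor -> 0.010
```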

  7. Accurate analysis of arbitrarily-shaped helical groove waveguide

    Institute of Scientific and Technical Information of China (English)

    Liu Hong-Tao; Wei Yan-Yu; Gong Yu-Bin; Yue Ling-Na; Wang Wen-Xiang


    This paper presents a theory for accurately analysing the dispersion relation and the interaction impedance of electromagnetic waves propagating through a helical groove waveguide with arbitrary groove shape, in which the complex groove profile is synthesized by a series of rectangular steps. By introducing the influence of high-order evanescent modes on the connection of any two neighbouring steps through an equivalent susceptance under a modified admittance matching condition, the assumption of neglecting the discontinuity capacitance made in previously published analyses is avoided, and the accurate dispersion equation is obtained by means of a combination of the field-matching method and the admittance-matching technique. The validity of this theory is proved by comparison between measurements and numerical calculations for two kinds of helical groove waveguides with different groove shapes.

  8. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Directory of Open Access Journals (Sweden)

    Zhiwei Zhao


    Full Text Available Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation.

  9. Accurate measurement of the helical twisting power of chiral dopants (United States)

    Kosa, Tamas; Bodnar, Volodymyr; Taheri, Bahman; Palffy-Muhoray, Peter


    We propose a method for the accurate determination of the helical twisting power (HTP) of chiral dopants. In the usual Cano-wedge method, the wedge angle is determined from the far-field separation of laser beams reflected from the windows of the test cell. Here we propose to use an optical fiber based spectrometer to accurately measure the cell thickness. Knowing the cell thickness at the positions of the disclination lines allows determination of the HTP. We show that this extension of the Cano-wedge method greatly increases the accuracy with which the HTP is determined. We show the usefulness of this method by determining the HTP of ZLI811 in a variety of hosts with negative dielectric anisotropy.
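    In a Cano-wedge cell the cholesteric pitch can be read off directly from the thickness measurements the abstract describes: the cell thickness grows by half a pitch between adjacent disclination lines, and the helical twisting power then follows as HTP = 1/(p·c). A minimal sketch with illustrative thickness readings (not the authors' data):

```python
# Pitch and HTP from spectrometer-measured cell thicknesses at
# successive disclination lines (illustrative numbers).
import numpy as np

d = np.array([2.1, 4.6, 7.1, 9.6])  # thickness at disclinations, micrometres
c = 0.01                            # dopant weight fraction

half_pitches = np.diff(d)           # each thickness step equals p/2
p_um = 2.0 * half_pitches.mean()    # cholesteric pitch, micrometres
htp = 1.0 / (p_um * c)              # HTP in inverse micrometres per wt fraction
print(f"pitch = {p_um:.2f} um, HTP = {htp:.1f} um^-1")
```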

  10. Efficient and Accurate Robustness Estimation for Large Complex Networks

    CERN Document Server

    Wandelt, Sebastian


    Robustness estimation is critical for the design and maintenance of resilient networks, one of the global challenges of the 21st century. Existing studies exploit network metrics to generate attack strategies, which simulate intentional attacks in a network, and compute a metric-induced robustness estimation. While some metrics are easy to compute, e.g. degree centrality, other, more accurate, metrics require considerable computation effort, e.g. betweenness centrality. We propose a new algorithm for estimating the robustness of a network in sub-quadratic time, i.e., significantly faster than betweenness centrality. Experiments on real-world networks and random networks show that our algorithm estimates the robustness of networks close to or even better than betweenness centrality, while being orders of magnitude faster. Our work contributes towards scalable, yet accurate methods for robustness estimation of large complex networks.
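    The baseline the abstract refers to, a metric-induced robustness estimate, can be sketched in a few lines: remove nodes in order of a chosen metric (here plain degree centrality, not the authors' sub-quadratic algorithm) and average the relative size of the largest connected component over the attack, R = (1/N)·Σ S(q):

```python
# Degree-attack robustness estimate on an undirected graph (sketch).
from collections import defaultdict

def largest_cc(nodes, adj):
    """Size of the largest connected component among `nodes`."""
    seen, best = set(), 0
    for s in nodes:
        if s in seen:
            continue
        stack, comp = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            comp += 1
            for v in adj[u]:
                if v in nodes and v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, comp)
    return best

def degree_attack_robustness(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes = set(adj)
    n = len(nodes)
    order = sorted(nodes, key=lambda u: -len(adj[u]))  # attack high degree first
    total = 0.0
    for u in order:
        nodes.discard(u)
        if nodes:
            total += largest_cc(nodes, adj) / n
    return total / n

ring = [(i, (i + 1) % 6) for i in range(6)]  # small illustrative graph
r = degree_attack_robustness(ring)
print(f"R = {r:.3f}")
```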

  11. Accurate parameter estimation for unbalanced three-phase system. (United States)

    Chen, Yuan; So, Hing Cheung


    Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS.
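    The core of an NLS estimator for a sinusoidal waveform can be sketched with a Gauss-Newton iteration (a close cousin of the Newton-Raphson scheme the paper uses; the αβ-transformation step and the three-phase model are not reproduced here). Amplitude and phase enter linearly and are solved per step, leaving only the frequency as the nonlinear parameter:

```python
# Gauss-Newton NLS fit of frequency, amplitude and phase to a sampled
# sinusoid (sketch; synthetic data, not the paper's signal model).
import numpy as np

def nls_sinusoid(x, w0, iters=20):
    n = np.arange(len(x))
    w = w0
    for _ in range(iters):
        # solve the linear part: x ~ a*cos(w n) + b*sin(w n)
        C = np.column_stack([np.cos(w * n), np.sin(w * n)])
        a, b = np.linalg.lstsq(C, x, rcond=None)[0]
        r = x - C @ np.array([a, b])
        # Jacobian w.r.t. (a, b, w); update frequency from residual
        J = np.column_stack([np.cos(w * n), np.sin(w * n),
                             n * (-a * np.sin(w * n) + b * np.cos(w * n))])
        w += np.linalg.lstsq(J, r, rcond=None)[0][2]
    amp = np.hypot(a, b)
    phase = np.arctan2(-b, a)  # x ~ amp * cos(w n + phase)
    return w, amp, phase

rng = np.random.default_rng(0)
n = np.arange(200)
x = 1.5 * np.cos(0.3 * n + 0.7) + 0.001 * rng.standard_normal(200)
w, amp, phase = nls_sinusoid(x, w0=0.298)
```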

  12. Symmetric Uniformly Accurate Gauss-Runge-Kutta Method

    Directory of Open Access Journals (Sweden)

    Dauda G. YAKUBU


    Full Text Available Symmetric methods are particularly attractive for solving stiff ordinary differential equations. In this paper, by the selection of Gauss points for both interpolation and collocation, we derive a high-order symmetric single-step Gauss-Runge-Kutta collocation method for the accurate solution of ordinary differential equations. The resulting symmetric method with continuous coefficients is then evaluated as a block method. More interestingly, the block method is self-starting, with an adequate absolute stability interval, and is capable of producing simultaneously a dense approximation to the solution of ordinary differential equations at a block of points. The use of this method leads to a maximal gain in efficiency as well as minimal function evaluations per step.

  13. Library preparation for highly accurate population sequencing of RNA viruses (United States)

    Acevedo, Ashley; Andino, Raul


    Circular resequencing (CirSeq) is a novel technique for efficient and highly accurate next-generation sequencing (NGS) of RNA virus populations. The foundation of this approach is the circularization of fragmented viral RNAs, which are then redundantly encoded into tandem repeats by ‘rolling-circle’ reverse transcription. When sequenced, the redundant copies within each read are aligned to derive a consensus sequence of their initial RNA template. This process yields sequencing data with error rates far below the variant frequencies observed for RNA viruses, facilitating ultra-rare variant detection and accurate measurement of low-frequency variants. Although library preparation takes ~5 d, the high-quality data generated by CirSeq simplifies downstream data analysis, making this approach substantially more tractable for experimentalists. PMID:24967624
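    The consensus step described above, aligning the redundant copies within a read and taking a per-position vote, can be illustrated with a toy sketch (majority vote over fixed-length tandem repeats; real CirSeq pipelines must also locate the repeat boundaries):

```python
# Toy consensus over tandem repeat copies within a single read.
from collections import Counter

def consensus(read, repeat_len):
    # slice the read into its repeat copies
    copies = [read[i:i + repeat_len]
              for i in range(0, len(read) - repeat_len + 1, repeat_len)]
    # per-position majority vote across copies suppresses random errors
    return "".join(Counter(col).most_common(1)[0][0]
                   for col in zip(*copies))

# three copies of ACGTACGT, with a different error in two of the copies
read = "ACGTACGA" + "ACGTACGT" + "ACCTACGT"
print(consensus(read, 8))  # -> ACGTACGT
```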


    Directory of Open Access Journals (Sweden)

    M. Rehak


    Full Text Available In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that together with a pre-calibrated camera enables accurate corridor mapping. The design of the platform is based on widely available model components to which we integrate an open-source autopilot, a customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  15. Weather-Corrected Performance Ratio

    Energy Technology Data Exchange (ETDEWEB)

    Dierauf, T.; Growitz, A.; Kurtz, S.; Cruz, J. L. B.; Riley, E.; Hansen, C.


    Photovoltaic (PV) system performance depends on both the quality of the system and the weather. One simple way to communicate the system performance is to use the performance ratio (PR): the ratio of the electricity generated to the electricity that would have been generated if the plant consistently converted sunlight to electricity at the level expected from the DC nameplate rating. The annual system yield for flat-plate PV systems is estimated by the product of the annual insolation in the plane of the array, the nameplate rating of the system, and the PR, which provides an attractive way to estimate expected annual system yield. Unfortunately, the PR is, again, a function of both the PV system efficiency and the weather. If the PR is measured during the winter or during the summer, substantially different values may be obtained, making this metric insufficient to use as the basis for a performance guarantee when precise confidence intervals are required. This technical report defines a way to modify the PR calculation to neutralize biases that may be introduced by variations in the weather, while still reporting a PR that reflects the annual PR at that site given the project design and the project weather file. This resulting weather-corrected PR gives more consistent results throughout the year, enabling its use as a metric for performance guarantees while still retaining the familiarity this metric brings to the industry and the value of its use in predicting actual annual system yield. A testing protocol is also presented to illustrate the use of this new metric with the intent of providing a reference starting point for contractual content.
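    The correction the report defines can be sketched numerically: each interval's expected energy, P_STC·(G_POA/G_STC), is additionally scaled by the module's power temperature coefficient toward the site's annual average cell temperature, so seasonal temperature swings cancel out of the ratio. A minimal sketch with illustrative hourly samples (not a real plant, and a simplified form of the report's equation):

```python
# Performance ratio vs. weather-corrected performance ratio (sketch).
import numpy as np

p_stc = 100.0   # nameplate DC rating, kW
g_stc = 1000.0  # STC irradiance, W/m^2
gamma = -0.004  # power temperature coefficient, 1/degC (assumed)

g_poa = np.array([400.0, 800.0, 1000.0, 600.0])  # plane-of-array, W/m^2
t_cell = np.array([25.0, 45.0, 55.0, 35.0])      # cell temperature, degC
e_ac = np.array([31.0, 60.0, 71.0, 46.0])        # measured AC energy, kWh

t_avg = 40.0  # annual average cell temperature for the site (assumed)

expected = p_stc * (g_poa / g_stc)               # expected energy per hour
pr = e_ac.sum() / expected.sum()                 # conventional PR

# scale expectations by temperature deviation from the annual average
expected_corr = expected * (1.0 + gamma * (t_cell - t_avg))
pr_corr = e_ac.sum() / expected_corr.sum()       # weather-corrected PR
print(f"PR = {pr:.3f}, weather-corrected PR = {pr_corr:.3f}")
```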

  16. Novel multi-beam radiometers for accurate ocean surveillance

    DEFF Research Database (Denmark)

    Cappellin, C.; Pontoppidan, K.; Nielsen, P. H.


    Novel antenna architectures for real aperture multi-beam radiometers providing high resolution and high sensitivity for accurate sea surface temperature (SST) and ocean vector wind (OVW) measurements are investigated. On the basis of the radiometer requirements set for future SST/OVW missions, conical scanners and push-broom antennas are compared. The comparison will cover reflector optics and focal plane array configuration.

  17. Accurate Insertion Loss Measurements of the Juno Patch Array Antennas (United States)

    Chamberlain, Neil; Chen, Jacqueline; Hodges, Richard; Demas, John


    This paper describes two independent methods for estimating the insertion loss of patch array antennas that were developed for the Juno Microwave Radiometer instrument. One method is based principally on pattern measurements while the other is based solely on network analyzer measurements. The methods are accurate to within 0.1 dB for the measured antennas and show good agreement (to within 0.1 dB) with separate radiometric measurements.

  18. The highly accurate anteriolateral portal for injecting the knee

    Directory of Open Access Journals (Sweden)

    Chavez-Chiang Colbert E


    Full Text Available Abstract Background The extended knee lateral midpatellar portal for intraarticular injection of the knee is accurate but is not practical for all patients. We hypothesized that a modified anteriolateral portal where the synovial membrane of the medial femoral condyle is the target would be highly accurate and effective for intraarticular injection of the knee. Methods 83 subjects with non-effusive osteoarthritis of the knee were randomized to intraarticular injection using the modified anteriolateral bent knee versus the standard lateral midpatellar portal. After hydrodissection of the synovial membrane with lidocaine using a mechanical syringe (reciprocating procedure device), 80 mg of triamcinolone acetonide were injected into the knee with a 2.0-in (5.1-cm) 21-gauge needle. Baseline pain, procedural pain, and pain at outcome (2 weeks and 6 months) were determined with the 10 cm Visual Analogue Pain Score (VAS). The accuracy of needle placement was determined by sonographic imaging. Results The lateral midpatellar and anteriolateral portals resulted in equivalent clinical outcomes including procedural pain (VAS midpatellar: 4.6 ± 3.1 cm; anteriolateral: 4.8 ± 3.2 cm; p = 0.77), pain at outcome (VAS midpatellar: 2.6 ± 2.8 cm; anteriolateral: 1.7 ± 2.3 cm; p = 0.11), responders (midpatellar: 45%; anteriolateral: 56%; p = 0.33), duration of therapeutic effect (midpatellar: 3.9 ± 2.4 months; anteriolateral: 4.1 ± 2.2 months; p = 0.69), and time to next procedure (midpatellar: 7.3 ± 3.3 months; anteriolateral: 7.7 ± 3.7 months; p = 0.71). The anteriolateral portal was 97% accurate by real-time ultrasound imaging. Conclusion The modified anteriolateral bent knee portal is an effective, accurate, and equivalent alternative to the standard lateral midpatellar portal for intraarticular injection of the knee. Trial Registration NCT00651625

  19. Ultra accurate collaborative information filtering via directed user similarity


    Guo, Qiang; Song, Wen-Jun; Liu, Jian-Guo


    A key challenge of collaborative filtering (CF) information filtering is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users are larger than those in the opposite direction, the large-degree users' selections are recommended extensively by traditional second-order CF algorithms. By considering the users' similarity direction and the second-order correlations to depress the influen...

  20. Accurate quantum state estimation via "Keeping the experimentalist honest"

    CERN Document Server

    Blume-Kohout, R; Blume-Kohout, Robin; Hayden, Patrick


    In this article, we derive a unique procedure for quantum state estimation from a simple, self-evident principle: an experimentalist's estimate of the quantum state generated by an apparatus should be constrained by honesty. A skeptical observer should subject the estimate to a test that guarantees that a self-interested experimentalist will report the true state as accurately as possible. We also find a non-asymptotic, operational interpretation of the quantum relative entropy function.

  1. Accurate Method for Determining Adhesion of Cantilever Beams

    Energy Technology Data Exchange (ETDEWEB)

    Michalske, T.A.; de Boer, M.P.


    Using surface micromachined samples, we demonstrate the accurate measurement of cantilever beam adhesion by using test structures which are adhered over long attachment lengths. We show that this configuration has a deep energy well, such that a fracture equilibrium is easily reached. When compared to the commonly used method of determining the shortest attached beam, the present method is much less sensitive to variations in surface topography or to details of capillary drying.

  2. Accurate and Simple Calibration of DLP Projector Systems


    Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus


    Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available which require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively and most current methods require a camera and involve feature extraction from a known projected pattern. In this work we present a novel calibration technique for DLP Projector systems based on phase shifting profilometry pr...

  3. A highly accurate method to solve Fisher’s equation

    Indian Academy of Sciences (India)

    Mehdi Bastani; Davod Khojasteh Salkuyeh


    In this study, we present a new and very accurate numerical method to approximate the Fisher’s-type equations. Firstly, the spatial derivative in the proposed equation is approximated by a sixth-order compact finite difference (CFD6) scheme. Secondly, we solve the obtained system of differential equations using a third-order total variation diminishing Runge–Kutta (TVD-RK3) scheme. Numerical examples are given to illustrate the efficiency of the proposed method.
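    The method-of-lines structure described above can be sketched with one substitution: an explicit sixth-order central stencil stands in for the implicit CFD6 compact scheme (a simplification, not the authors' discretization), coupled with the three-stage TVD-RK3 integrator, applied to Fisher's equation u_t = u_xx + u(1 − u) with periodic boundaries:

```python
# Method-of-lines solver for Fisher's equation (sketch): sixth-order
# central difference in space, TVD-RK3 in time, periodic domain [0, 1).
import numpy as np

nx, L, dt, steps = 50, 1.0, 1e-4, 500
x = np.linspace(0.0, L, nx, endpoint=False)
dx = L / nx

def d2(u):
    # explicit sixth-order central approximation of u_xx (periodic)
    c = np.array([2.0, -27.0, 270.0, -490.0, 270.0, -27.0, 2.0]) / (180.0 * dx * dx)
    return sum(ci * np.roll(u, 3 - i) for i, ci in enumerate(c))

def rhs(u):
    return d2(u) + u * (1.0 - u)  # diffusion + logistic reaction

u = 0.5 + 0.4 * np.sin(2.0 * np.pi * x)  # smooth initial profile in (0, 1)
for _ in range(steps):
    # three-stage TVD-RK3 (Shu-Osher form)
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    u = u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))
```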

  4. Bayesian long branch attraction bias and corrections. (United States)

    Susko, Edward


    Previous work on the star-tree paradox has shown that Bayesian methods suffer from a long branch attraction bias. That work is extended to settings involving more taxa and partially resolved trees. The long branch attraction bias is confirmed to arise more broadly and an additional source of bias is found. A by-product of the analysis is methods that correct for biases toward particular topologies. The corrections can be easily calculated using existing Bayesian software. Posterior support for a set of two or more trees can thus be supplemented with corrected versions to cross-check or replace results. Simulations show the corrections to be highly effective.

  5. A Global Correction to PPMXL Proper Motions

    CERN Document Server

    Vickers, John J; Grebel, Eva K


    In this paper we notice that extragalactic sources seem to have non-zero proper motions in the PPMXL proper motion catalog. We collect a large, all-sky sample of extragalactic objects and fit their reported PPMXL proper motions to an ensemble of spherical harmonics in magnitude shells. A magnitude-dependent proper motion correction is thus constructed. This correction is applied to a set of fundamental radio sources, quasars, and is compared to similar corrections to assess its utility. We publish, along with this paper, code which may be used to correct proper motions over the full sky for PPMXL catalog sources that have 2 Micron All Sky Survey photometry.
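    The idea behind the correction can be sketched with the scheme truncated to its l = 0 (mean offset) term: extragalactic sources should have zero proper motion, so their mean apparent motion per magnitude shell estimates the catalog's systematic error in that shell (the paper fits full spherical harmonics per shell; the data here are synthetic):

```python
# Magnitude-shell mean-offset correction from QSO proper motions (sketch).
import numpy as np

rng = np.random.default_rng(1)
mag = rng.uniform(10.0, 16.0, 3000)          # synthetic source magnitudes
bias = 2.0 + 0.5 * np.floor(mag)             # injected mag-dependent bias, mas/yr
pm = bias + rng.normal(0.0, 4.0, mag.size)   # "observed" QSO proper motions

edges = np.arange(10.0, 17.0)                # one-magnitude shells
shell = np.digitize(mag, edges) - 1          # shell index per source
correction = np.array([pm[shell == s].mean() for s in range(len(edges) - 1)])

pm_corrected = pm - correction[shell]        # subtract shell-wise systematic
print(f"raw mean pm = {pm.mean():.2f}, "
      f"corrected = {pm_corrected.mean():.2e} mas/yr")
```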

  6. Using corrected Cone-Beam CT image for accelerated partial breast irradiation treatment dose verification: the preliminary experience


    Wang, Jiazhou; Hu, Weigang; Cai, Gang; Peng, Jiayuan; Pan, Ziqiang; Guo, Xiaomao; Chen, Jiayi


    Background Accurate target localization is mandatory in accelerated partial breast irradiation (APBI) delivery. Dosimetric verification of positional errors will further guarantee the accuracy of treatment delivery. The purpose of this study is to evaluate the clinical feasibility of a cone-beam computed tomography (CBCT) image correction method in APBI. Methods A CBCT image correction method was developed. First, rigid image registration was performed between CTs and CBCTs; second, these im...

  7. Detector to detector corrections: a comprehensive experimental study of detector specific correction factors for beam output measurements for small radiotherapy beams

    DEFF Research Database (Denmark)

    Azangwe, Godfrey; Grochowska, Paulina; Georg, Dietmar


    …-doped aluminium oxide (Al2O3:C), organic plastic scintillators, diamond detectors, a liquid filled ion chamber, and a range of small volume air filled ionization chambers (volumes ranging from 0.002 cm3 to 0.3 cm3). All detector measurements were corrected for the volume averaging effect and compared with dose ratios … 148 for the 0.14 cm3 air filled ionization chamber and were as high as 1.924 for the 0.3 cm3 ionization chamber. After applying volume averaging corrections, the detector readings were consistent among themselves and with alanine measurements for several small detectors but they differed for larger detectors …, in particular for some small ionization chambers with volumes larger than 0.1 cm3. Conclusions: The results demonstrate how important it is for the appropriate corrections to be applied to give consistent and accurate measurements for a range of detectors in small beam geometry. The results further demonstrate…

  8. Accurate genome relative abundance estimation based on shotgun metagenomic reads.

    Directory of Open Access Journals (Sweden)

    Li C Xia

    Full Text Available Accurate estimation of microbial community composition based on metagenomic sequencing data is fundamental for subsequent metagenomics analysis. Prevalent estimation methods are mainly based on directly summarizing alignment results or their variants, and often yield biased and/or unstable estimates. We have developed a unified probabilistic framework (named GRAMMy) that explicitly models read assignment ambiguities, genome size biases and read distributions along the genomes. The maximum likelihood method is employed to compute the Genome Relative Abundance of microbial communities using mixture model theory (GRAMMy). GRAMMy has been demonstrated to give estimates that are accurate and robust across both simulated and real read benchmark datasets. We applied GRAMMy to a collection of 34 metagenomic read sets from four metagenomics projects and identified 99 frequent species (minimally 0.5% abundant in at least 50% of the datasets) in the human gut samples. Our results show substantial improvements over previous studies, such as adjusting the over-estimated abundance of Bacteroides species in human gut samples, by providing a new reference-based strategy for metagenomic sample comparisons. GRAMMy can be used flexibly with many read assignment tools (mapping, alignment or composition-based), even with low-sensitivity mapping results from huge short-read datasets. It will be increasingly useful as an accurate and robust tool for abundance estimation with the growing size of read sets and the expanding database of reference genomes.
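    The mixture-model idea behind this kind of framework can be sketched with a tiny expectation-maximization (EM) loop. This is a generic illustration under simplifying assumptions (a precomputed read-by-genome likelihood matrix, abundance corrected only for genome length), not the actual GRAMMy implementation:

```python
import numpy as np

def em_abundance(like, genome_len, iters=200):
    """EM estimate of genome relative abundance from ambiguous read assignments.
    like[i, j]: likelihood of read i given genome j (e.g. from mapping scores);
    genome_len[j] corrects for larger genomes contributing more reads."""
    like = np.asarray(like, float)
    n_reads, n_gen = like.shape
    a = np.full(n_gen, 1.0 / n_gen)                 # mixing proportions
    for _ in range(iters):
        w = like * a                                # E-step: read posteriors
        w /= w.sum(axis=1, keepdims=True)
        a = w.sum(axis=0) / n_reads                 # M-step: new proportions
    rel = a / np.asarray(genome_len, float)         # per-copy abundance
    return rel / rel.sum()

# Toy example: two genomes, 20 of 100 reads map ambiguously to both.
like = np.array([[1.0, 0.0]] * 60 + [[0.0, 1.0]] * 20 + [[0.6, 0.4]] * 20)
abund = em_abundance(like, genome_len=[5e6, 2e6])
```

    The E-step resolves each ambiguous read probabilistically given the current abundances; the M-step re-estimates abundances from the expected read counts, which is what makes the estimates stable under read assignment ambiguity.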

  9. Accurate and simple calibration of DLP projector systems (United States)

    Wilm, Jakob; Olesen, Oline V.; Larsen, Rasmus


    Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available which require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively, and most current methods require a camera and involve feature extraction from a known projected pattern. In this work we present a novel calibration technique for DLP projector systems based on phase shifting profilometry projection onto a printed calibration target. In contrast to most current methods, the one presented here does not rely on an initial camera calibration, and so does not carry that error over into the projector calibration. A radial interpolation scheme is used to convert feature coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination of parameters including lens distortion. Our implementation acquires printed planar calibration scenes in less than 1 s. This makes our method both fast and convenient. We evaluate our method in terms of reprojection errors and structured light image reconstruction quality.

  10. An accurate metric for the spacetime around neutron stars

    CERN Document Server

    Pappas, George


    The problem of having an accurate description of the spacetime around neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a neutron star. Furthermore, an accurate appropriately parameterised metric, i.e., a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to infer the properties of the structure of a neutron star from astrophysical observations. In this work we present such an approximate stationary and axisymmetric metric for the exterior of neutron stars, which is constructed using the Ernst formalism and is parameterised by the relativistic multipole moments of the central object. This metric is given in terms of an expansion on the Weyl-Papapetrou coordinates with the multipole moments as free parameters and is shown to be extremely accurate in capturing the physical propert...

  11. [Spectroscopy technique and ruminant methane emissions accurate inspecting]. (United States)

    Shang, Zhan-Huan; Guo, Xu-Sheng; Long, Rui-Jun


    The increase in atmospheric CH4 concentration causes climate change directly, through the radiation process, and indirectly, by altering many atmospheric chemical processes. The rapid growth of atmospheric methane has gained the attention of governments and scientists. All countries now treat global climate change as an important task requiring reductions in greenhouse gas emissions, but monitoring of methane concentration, in particular precision monitoring, is needed to provide a scientific basis for emission reduction measures. So far, CH4 emissions from different animal production systems have received extensive research. The methane emissions by ruminants reported in the literature are only estimates. Because of the various factors that affect methane production in ruminants and the many variables associated with the techniques for measuring it, currently developed techniques are unable to accurately determine the dynamics of methane emission by ruminants, and there is therefore an urgent need to develop an accurate method for this purpose. Currently, spectroscopy is a relatively more accurate and reliable approach. Various spectroscopy techniques, such as modified infrared spectroscopy methane measuring systems and laser and near-infrared sensory systems, are able to determine the dynamic methane emission of both domestic and grazing ruminants. Spectroscopy is therefore an important methane measuring technique, and contributes to proposing methane reduction methods.

  12. Accurate measurements of carbon monoxide in humid air using the cavity ring-down spectroscopy (CRDS) technique

    Directory of Open Access Journals (Sweden)

    H. Chen


    Full Text Available Accurate measurements of carbon monoxide (CO) in humid air have been made using the cavity ring-down spectroscopy (CRDS) technique. The measurements of CO mole fractions are determined from the strength of its spectral absorption in the near infrared region (∼1.57 μm) after removing interferences from adjacent carbon dioxide (CO2) and water vapor (H2O) absorption lines. Water correction functions that account for the dilution and pressure-broadening effects as well as absorption line interferences from adjacent CO2 and H2O lines have been derived for CO2 mole fractions between 360–390 ppm. The line interference corrections are independent of CO mole fractions. The dependence of the line interference correction on CO2 abundance is estimated to be approximately −0.3 ppb/100 ppm CO2 for dry mole fractions of CO. Comparisons of water correction functions from different analyzers of the same type show significant differences, making it necessary to perform instrument-specific water tests for each individual analyzer. The CRDS analyzer was flown on an aircraft in Alaska from April to November in 2011, and the accuracy of the CO measurements by the CRDS analyzer has been validated against discrete NOAA/ESRL flask sample measurements made on board the same aircraft, with a mean difference between integrated in situ and flask measurements of −0.6 ppb and a standard deviation of 2.8 ppb. Preliminary testing of CRDS instrumentation that employs new spectroscopic analysis (available since the beginning of 2012) indicates a smaller water vapor dependence than the models discussed here, but more work is necessary to fully validate the performance. The CRDS technique provides an accurate and low-maintenance method of monitoring the atmospheric dry mole fractions of CO in humid air streams.
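    Applying such a water correction is straightforward once the instrument-specific coefficients are known. The quadratic form below is a common shape for dilution plus pressure-broadening corrections, but the coefficient values here are purely illustrative placeholders; as the abstract stresses, each analyzer needs its own empirically determined coefficients:

```python
def co_dry(co_wet_ppb, h2o_pct, a=-0.0123, b=-0.00022):
    """Convert a wet CO mole fraction (ppb) to a dry mole fraction using an
    empirical quadratic water correction:
        CO_dry = CO_wet / (1 + a*H2O + b*H2O**2)
    h2o_pct is the reported water vapor mole fraction in percent.
    a and b are ILLUSTRATIVE values only; they must be measured per analyzer."""
    return co_wet_ppb / (1.0 + a * h2o_pct + b * h2o_pct ** 2)

# Example: a reading of 150 ppb CO at 2% water vapor is revised upward,
# since water dilutes the sample and broadens the absorption line.
corrected = co_dry(150.0, 2.0)
```

    With these placeholder coefficients the denominator is below one, so the dry mole fraction comes out slightly above the wet reading, reflecting the dilution effect.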

  13. ARMA Prediction of SBAS Ephemeris and Clock Corrections for Low Earth Orbiting Satellites

    Directory of Open Access Journals (Sweden)

    Jeongrae Kim


    Full Text Available For low earth orbit (LEO) satellite GPS receivers, space-based augmentation system (SBAS) ephemeris/clock corrections can be applied to improve positioning accuracy in real time. The SBAS correction is only available within its service area, and prediction of the SBAS corrections during the outage period can extend the coverage area. Two time series forecasting models, autoregressive moving average (ARMA) and autoregressive (AR), are proposed to predict the corrections outside the service area. A simulated GPS satellite visibility condition is applied to the WAAS correction data, and the degradation of prediction accuracy over time is investigated. Prediction results using the SBAS rate-of-change information are compared, and the ARMA method yields a better accuracy than the rate method. The error reductions of the ephemeris and clock by the ARMA method over the rate method are 37.8% and 38.5%, respectively. The AR method shows a slightly better orbit accuracy than the rate method, but its clock accuracy is even worse than the rate method. If the SBAS correction is sufficiently accurate compared with the required ephemeris accuracy of a real-time navigation filter, the predicted SBAS correction may improve orbit determination accuracy.
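    The AR branch of this approach can be sketched compactly: fit AR(p) coefficients to the correction history by least squares, then iterate the recursion forward through the outage. This is a generic illustration on a synthetic slowly varying series, not the paper's implementation (the real data are WAAS correction streams):

```python
import numpy as np

def ar_fit(x, p):
    """Fit AR(p) coefficients by least squares: x[t] ~ c + sum_k a_k * x[t-k].
    Returns [c, a_1, ..., a_p]."""
    X = np.column_stack([np.ones(len(x) - p)] +
                        [x[p - k:len(x) - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef

def ar_forecast(x, coef, steps):
    """Iterated multi-step forecast of the correction during an outage."""
    p = len(coef) - 1
    hist = list(x[-p:])
    out = []
    for _ in range(steps):
        pred = coef[0] + sum(coef[k] * hist[-k] for k in range(1, p + 1))
        out.append(pred)
        hist.append(pred)
    return np.array(out)

# Synthetic stand-in for a slowly varying SBAS correction.
t = np.arange(400)
x = np.sin(0.05 * t)
coef = ar_fit(x, p=2)
pred = ar_forecast(x, coef, steps=30)
true_future = np.sin(0.05 * (t[-1] + 1 + np.arange(30)))
```

    As in the paper's finding, multi-step forecasts degrade with horizon length; the toy sinusoid is exactly AR(2), so here the 30-step forecast stays essentially exact.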

  14. Comparative study of van der Waals corrections to the bulk properties of graphite. (United States)

    Rêgo, Celso R C; Oliveira, Luiz N; Tereshchuk, Polina; Da Silva, Juarez L F


    Graphite is a stack of honeycomb (graphene) layers bound together by nonlocal, long-range van der Waals (vdW) forces, which are poorly described by density functional theory (DFT) within local or semilocal exchange-correlation functionals. Several approximations have been proposed to add a vdW correction to the DFT total energies (those of Stefan Grimme (D2 and D3) with different damping functions (D3-BJ), and of Tkatchenko-Scheffler (TS) without and with self-consistent screening (TS + SCS) effects). Those corrections have remarkably improved the agreement between our results and experiment for the interlayer distance (from 3.9 to 0.6%) [corrected] and high-level random-phase approximation (RPA) calculations for the interlayer binding energy (from 69.5 to 1.5%) [corrected]. We report a systematic investigation of various structural, energetic and electronic properties with the aforementioned vdW corrections, followed by comparison with experimental and theoretical RPA data. Comparison between the resulting relative errors shows that the TS + SCS correction provides the best results; the other corrections yield significantly larger errors for at least one of the studied properties. If considerations of computational cost or convergence problems rule out the TS + SCS approach, we recommend the D3-BJ correction. Comparison between the computed π(z)Γ-splitting and experimental results shows disagreements of 10% or more with all vdW corrections. Even the computationally more expensive hybrid PBE0 proved unable to improve the agreement with the measured splitting. Our results indicate that improvements to the exchange-correlation functionals beyond the vdW corrections are necessary to accurately describe the band structure of graphite.

  15. Dispersion-corrected density functional theory for aromatic interactions in complex systems. (United States)

    Ehrlich, Stephan; Moellmann, Jonas; Grimme, Stefan


    Aromatic interactions play a key role in many chemical and biological systems. However, even if very simple models are chosen, the systems of interest are often too large to be handled with standard wave function theory (WFT). Although density functional theory (DFT) can easily treat systems of more than 200 atoms, standard semilocal (hybrid) density functional approximations fail to describe the London dispersion energy, a factor that is essential for accurate predictions of inter- and intramolecular noncovalent interactions. Therefore dispersion-corrected DFT provides a unique tool for the investigation and analysis of a wide range of complex aromatic systems. In this Account, we start with an analysis of the noncovalent interactions in simple model dimers of hexafluorobenzene (HFB) and benzene, with a focus on electrostatic and dispersion interactions. The minima for the parallel-displaced dimers of HFB/HFB and HFB/benzene can only be explained when taking into account all contributions to the interaction energy and not by electrostatics alone. By comparison of saturated and aromatic model complexes, we show that increased dispersion coefficients for sp(2)-hybridized carbon atoms play a major role in aromatic stacking. Modern dispersion-corrected DFT yields accurate results (about 5-10% error for the dimerization energy) for the relatively large porphyrin and coronene dimers, systems for which WFT can provide accurate reference data only with huge computational effort. In this example, it is also demonstrated that new nonlocal, density-dependent dispersion corrections and atom pairwise schemes mutually agree with each other. The dispersion energy is also important for the complex inter- and intramolecular interactions that arise in the molecular crystals of aromatic molecules. In studies of hexahelicene, dispersion-corrected DFT yields "the right answer for the right reason". By comparison, standard DFT calculations reproduce intramolecular distances quite

  16. Escaping the correction for body surface area when calculating glomerular filtration rate in children

    Energy Technology Data Exchange (ETDEWEB)

    Piepsz, Amy; Tondeur, Marianne [CHU St. Pierre, Department of Radioisotopes, Brussels (Belgium); Ham, Hamphrey [University Hospital Ghent, Department of Nuclear Medicine, Ghent (Belgium)


    {sup 51}Cr ethylene diamine tetraacetic acid ({sup 51}Cr EDTA) clearance is nowadays considered an accurate and reproducible method for measuring glomerular filtration rate (GFR) in children. Normal values as a function of age, corrected for body surface area, have recently been updated. However, much criticism has been expressed about the validity of the body surface area correction. The aim of the present paper was to present normal GFR values, not corrected for body surface area, with the associated percentile curves. For that purpose, the same patients as in the previous paper were selected, namely those with no recent urinary tract infection, a normal left-to-right {sup 99m}Tc MAG3 uptake ratio and a normal kidney morphology on the early parenchymal images. A single blood sample method was used for the {sup 51}Cr EDTA clearance measurement. Clearance values, not corrected for body surface area, increased progressively up to adolescence. The percentile curves were determined and allow one, for a single patient, to accurately estimate the level of non-corrected clearance and its evolution with time, whatever the age. (orig.)

  17. Improving transcriptome assembly through error correction of high-throughput sequence reads. (United States)

    Macmanes, Matthew D; Eisen, Michael B


    The study of functional genomics, particularly in non-model organisms, has been dramatically improved over the last few years by the use of transcriptomes and RNAseq. While these studies are potentially extremely powerful, a computationally intensive procedure, the de novo construction of a reference transcriptome, must be completed as a prerequisite to further analyses. An accurate reference is critically important, as all downstream steps, including estimating transcript abundance, depend on it. Though a substantial amount of research has been done on assembly, only recently have the pre-assembly procedures been studied in detail. Specifically, several stand-alone error correction modules have been reported on and, while they have been shown to be effective in reducing errors at the level of sequencing reads, how error correction impacts assembly accuracy is largely unknown. Here, we show via use of simulated and empirical datasets that applying error correction to sequencing reads has significant positive effects on assembly accuracy, and should be applied to all datasets. A complete collection of commands which will allow for the production of Reptile corrected reads is available at and as File S1.

  18. Next-to-Leading Order Corrections to Higgs Boson Pair Production in Gluon Fusion

    CERN Document Server

    Kerner, Matthias


    We present a calculation of the next-to-leading order QCD corrections to the production of Higgs boson pairs in gluon fusion keeping the full dependence on the mass of the top quark. The virtual corrections, involving two-loop integrals with up to four mass scales, have been calculated numerically and we present an efficient algorithm to obtain accurate results of the virtual amplitude using numerical integrations. Taking the top quark mass into account we obtain significant differences compared to results obtained in the heavy top limit.

  19. Radial lens distortion correction with sub-pixel accuracy for X-ray micro-tomography. (United States)

    Vo, Nghia T; Atwood, Robert C; Drakopoulos, Michael


    Distortion correction or camera calibration for an imaging system which is highly configurable and requires frequent disassembly for maintenance or replacement of parts needs a speedy method for recalibration. Here we present direct techniques for calculating distortion parameters of a non-linear model based on the correct determination of the center of distortion. These techniques are fast, very easy to implement, and accurate at sub-pixel level. The implementation at the X-ray tomography system of the I12 beamline, Diamond Light Source, which strictly requires sub-pixel accuracy, shows excellent performance in the calibration image and in the reconstructed images.
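    The abstract does not spell out its distortion model, so as a generic illustration (an assumed polynomial model, not the paper's exact formulation): radial distortion is corrected by rescaling each pixel's displacement from the center of distortion by an even polynomial in the radius, which is why determining that center correctly matters so much.

```python
import numpy as np

def undistort(points, center, coeffs):
    """Correct radial lens distortion about a known center of distortion.
    Assumed model: r_u = r_d * (1 + k1*r_d**2 + k2*r_d**4).
    points: (N, 2) distorted pixel coordinates; coeffs: (k1, k2)."""
    p = np.asarray(points, float) - np.asarray(center, float)
    r2 = (p ** 2).sum(axis=1, keepdims=True)       # squared radius per point
    k1, k2 = coeffs
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2           # radial rescaling factor
    return p * scale + np.asarray(center, float)

# With zero coefficients the correction is the identity.
same = undistort(np.array([[100.0, 0.0]]), (0.0, 0.0), (0.0, 0.0))
```

    Sub-pixel accuracy then comes down to estimating the center and the k coefficients well, which is what the paper's direct calculation techniques provide.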

  20. The statistical nature of the second order corrections to the thermal SZE

    CERN Document Server

    Sandoval-Villalbazo, A


    This paper shows that the accepted expressions for the second order corrections in the parameter $z$ to the thermal Sunyaev-Zel'dovich effect can be accurately reproduced by a simple convolution integral approach. This representation allows the second order SZE corrections to be separated into two types of components: one associated with a single line broadening, directly related to the even-derivative terms present in the distortion intensity curve, and another related to a frequency shift, which is in turn related to the first-derivative term.
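    The two components can be made explicit with a short expansion (our own sketch of the kind of convolution argument the abstract describes). Writing the distorted intensity as a convolution of the unperturbed curve $I_0$ with a kernel $P$ and Taylor-expanding $I_0(x - s)$ about $x$:

```latex
\Delta I(x) = \int I_0(x - s)\, P(s)\, ds
\approx I_0(x) - \langle s \rangle\, I_0'(x)
+ \tfrac{1}{2}\, \langle s^2 \rangle\, I_0''(x) + \cdots
```

    The first-derivative term, weighted by the kernel's mean, acts as a frequency shift of the line, while the even-derivative terms, weighted by its higher moments, broaden it.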

  1. Comparative study on atmospheric correction methods of visible and near-infrared hyperspectral image (United States)

    He, Qian; Wu, Jingli; Wang, Guangping; Liu, Chang; Tao, Tao


    Currently, common atmospheric correction methods are based either on the statistical information of the image itself, for relative reflectance calculation, or on a radiative transfer model and meteorological parameters, for accurate calculation. In order to compare the advantages and disadvantages of these methods, we carried out atmospheric correction experiments based on AVIRIS airborne visible and near-infrared hyperspectral data. The experiments showed that the statistical method is simple and convenient but not widely adaptable, and can only yield relative reflectance, while the radiative transfer model method is complex and requires the support of auxiliary information, but can yield the precise absolute reflectance of surface features.
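    A minimal example of the statistical family of methods is Internal Average Relative Reflectance (IARR); the sketch below is a generic illustration, not the specific method used in the study. It uses only the image's own statistics, which is exactly why it can never deliver absolute reflectance:

```python
import numpy as np

def iarr(cube):
    """Internal Average Relative Reflectance: a purely statistical
    atmospheric correction. Each pixel spectrum is divided by the
    scene-mean spectrum, so the output is RELATIVE reflectance only.
    cube: (rows, cols, bands) array of radiance or DN values."""
    mean_spectrum = cube.reshape(-1, cube.shape[-1]).mean(axis=0)
    return cube / mean_spectrum

# Toy scene: 4x4 pixels, 3 spectral bands.
cube = np.random.default_rng(1).uniform(100.0, 200.0, (4, 4, 3))
rel = iarr(cube)
```

    By construction the scene-average of each band of the output is exactly 1; a radiative-transfer method would instead require sensor geometry and atmospheric parameters to recover absolute surface reflectance.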

  2. Evaluation of DFT-D3 dispersion corrections for various structural benchmark sets (United States)

    Schröder, Heiner; Hühnert, Jens; Schwabe, Tobias


    We present an evaluation of our newly developed density functional theory (DFT)-D3 dispersion correction D3(CSO) in comparison to its predecessor D3(BJ) for geometry optimizations. Therefore, various benchmark sets covering bond lengths, rotational constants, and center of mass distances of supramolecular complexes have been chosen. Overall both corrections give accurate structures and show no systematic differences. Additionally, we present an optimized algorithm for the computation of the DFT-D3 gradient, which reduces the formal scaling of the gradient calculation from O(N^3) to O(N^2).

  3. A Hybrid Approach for Correcting Grammatical Errors (United States)

    Lee, Kiyoung; Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun


    This paper presents a hybrid approach for correcting grammatical errors in the sentences uttered by Korean learners of English. The error correction system plays an important role in GenieTutor, which is a dialogue-based English learning system designed to teach English to Korean students. During the talk with GenieTutor, grammatical error…

  4. Beyond Political Correctness: Toward the Inclusive University. (United States)

    Richer, Stephen, Ed.; Weir, Lorna, Ed.

    This collection of 12 essays examines the history of the discourse over political correctness (PC) in Canadian academia, focusing on the neoconservative backlash to affirmative action, inclusive policies, and feminist and anti-racist teaching in the classroom. It includes: (1) "Introduction: Political Correctness and the Inclusive University"…

  5. Correcting Poor Posture without Awareness or Willpower (United States)

    Wernik, Uri


    In this article, a new technique for correcting poor posture is presented. Rather than intentionally increasing awareness or mobilizing willpower to correct posture, this approach offers a game using randomly drawn cards with easy daily assignments. A case using the technique is presented to emphasize the subjective experience of living with poor…

  6. 76 FR 32866 - Cable Landing Licenses; Correction (United States)


    ... Systems Agency in the regulations that we published in the Federal Register of January 14, 2002, 67 FR... COMMISSION 47 CFR Part 1 Cable Landing Licenses; Correction AGENCY: Federal Communications Commission. ACTION... streamlined processing of cable landing license applications. Need for Correction As published, the...

  7. ART and SIRT correction factors in geotomography (United States)

    Balanis, C. A.; Hill, H. W.; Freeland, K. A.


    The ART and SIRT image reconstruction techniques are introduced and a correction factor which provides high resolution images is presented. The techniques were tested using synthetic measurements. In order to examine the manner in which each reconstruction progresses, the Euclidean distance for the reconstructions was plotted versus iteration number. Results indicate that ART produces better reconstructed profiles than SIRT (using identical correction factors).

  8. Electroweak Corrections at the LHC with MCFM

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, John M. [Fermilab; Wackeroth, Doreen [SUNY, Buffalo; Zhou, Jia [SUNY, Buffalo


    Electroweak (EW) corrections at the LHC can be enhanced at high energies due to soft/collinear radiation of W and Z bosons, being dominated by Sudakov-like corrections of the form $\alpha_W^l \log^n(Q^2/M_W^2)$ $(n \le 2l,\ \alpha_W = \alpha/(4\pi\sin^2\theta_W))$ when the energy scale $Q$ enters the TeV regime. Thus, the inclusion of EW corrections in LHC predictions is important for the search for possible signals of new physics in the tails of kinematic distributions. EW corrections should also be taken into account by virtue of their size ($\mathcal{O}(\alpha)$) being comparable to that of higher order QCD corrections ($\mathcal{O}(\alpha_s^2)$). We calculated the next-to-leading-order (NLO) weak corrections to the neutral-current (NC) Drell-Yan process, top-quark pair production and di-jet production, and implemented them in the Monte Carlo program MCFM. This enables a combined study with the corresponding NLO QCD corrections. We provide both the full NLO weak corrections and their weak Sudakov approximation, valid at high energies. The latter is often used for a fast evaluation of weak effects, and having the exact result available as well allows us to quantify the validity of the Sudakov approximation.
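    The size of the leading Sudakov logarithm is easy to estimate numerically. The snippet below uses illustrative round values for the couplings (alpha ~ 1/128, sin^2 theta_W ~ 0.23), not the precise inputs of the calculation, to show why these terms matter once Q reaches the TeV scale:

```python
import math

# Rough magnitude of the leading EW Sudakov term alpha_W * log^2(Q^2/M_W^2)
# at Q = 1 TeV, with illustrative coupling values.
alpha = 1.0 / 128.0          # EM coupling near the weak scale (approximate)
sin2w = 0.23                 # sin^2(theta_W), approximate
MW = 80.4                    # W boson mass in GeV
Q = 1000.0                   # energy scale in GeV

alpha_w = alpha / (4.0 * math.pi * sin2w)
L = math.log(Q ** 2 / MW ** 2)
sudakov = alpha_w * L ** 2
print(sudakov)
```

    At Q = 1 TeV this comes out at the several-percent level, comparable to NNLO QCD effects, which is the abstract's point about tails of kinematic distributions.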

  9. Optical advantages of astigmatic aberration corrected heliostats (United States)

    van Rooyen, De Wet; Schöttl, Peter; Bern, Gregor; Heimsath, Anna; Nitz, Peter


    Astigmatic aberration corrected heliostats adapt their shape depending on the incidence angle of the sun on the heliostat. Simulations show that this optical correction leads to a higher concentration ratio at the target and thus to a decrease in the required receiver aperture, in particular for smaller heliostat fields.

  10. Offset Correction Techniques for Voltage Sense Amplifiers

    NARCIS (Netherlands)

    Groeneveld, S.


    This report deals with offset correction techniques for voltage sense amplifiers and is divided into two parts: 1) mismatch and 2) offset correction techniques. First, a literature study is done on the subject of mismatch, with a special focus on the future. Mismatch of a transistor is determin

  11. Relativistic Scott correction for atoms and molecules

    DEFF Research Database (Denmark)

    Solovej, Jan Philip; Sørensen, Thomas Østergaard; Spitzer, Wolfgang Ludwig


    We prove the first correction to the leading Thomas-Fermi energy for the ground state energy of atoms and molecules in a model where the kinetic energy of the electrons is treated relativistically. The leading Thomas-Fermi energy, established in [25], as well as the correction given here, are of ...

  12. 40 CFR 1065.672 - Drift correction. (United States)


    ...-zero or post-span might have occurred after one or more subsequent test intervals. (5) If you do not...) Correction principles. The calculations in this section utilize a gas analyzer's responses to reference zero... correction is based on an analyzer's mean responses to reference zero and span gases, and it is based on...

  13. Thermoelastic Correction in the Torsion Pendulum Experiment

    Institute of Scientific and Technical Information of China (English)

    胡忠坤; 王雪黎; 罗俊


    The thermoelastic effect of the suspension fibre in the torsion pendulum experiment with magnetic damping was studied. The disagreement in the oscillation periods was reduced by one order of magnitude by monitoring the ambient temperature and applying a thermoelastic correction. We also found that the uncertainty in the period due to noise increases with the amplitude attenuation after thermoelastic correction.

  14. 76 FR 11337 - Presidential Library Facilities; Correction (United States)


    ..., June 17, 2008 (73 FR 34197) that are the subject of this correction, NARA adopted and incorporated by... RECORDS ADMINISTRATION 36 CFR Part 1281 RIN 3095-AA82 Presidential Library Facilities; Correction AGENCY... libraries and information required in NARA's reports to Congress before accepting title to or entering...

  15. Correction of errors in power measurements

    DEFF Research Database (Denmark)

    Pedersen, Knud Ole Helgesen


    Small errors in voltage and current measuring transformers cause inaccuracies in power measurements. In this report correction factors are derived to compensate for such errors.

  16. The Organization of Delayed Second Language Correction (United States)

    Rolin-Ianziti, Jeanne


    The present study uses a conversation analytic framework to examine the organization of a type of classroom talk: the delayed correction sequence. Such talk occurs when teacher and students interactively correct errors after the students have completed a communicative activity. This study investigates naturally occurring instances of correction…

  17. Optimal correction of independent and correlated errors


    Jacobsen, Sol H.; Mintert, Florian


    We identify optimal quantum error correction codes for situations that do not admit perfect correction. We provide analytic n-qubit results for standard cases with correlated errors on multiple qubits and demonstrate significant improvements to the fidelity bounds and optimal entanglement decay profiles.

  18. Radiative corrections to electron-proton scattering

    NARCIS (Netherlands)

    Maximon, LC; Tjon, JA


    The radiative corrections to elastic electron-proton scattering are analyzed in a hadronic model including the finite size of the nucleon. For initial electron energies above 8 GeV and large scattering angles, the proton vertex correction in this model increases by at least 2% of the overall factor

  19. Preparing and correcting extracted BRITE observations

    CERN Document Server

    Buysschaert, B; Neiner, C


    Extracted BRITE lightcurves must be carefully prepared and corrected for instrumental effects before a scientific analysis can be performed. Therefore, we have created a suite of Python routines to prepare and correct the lightcurves, which is publicly available. In this paper we describe the method and successive steps performed by these routines.

  20. 75 FR 41530 - Petitions for Modification; Correction (United States)


    ... Safety and Health Administration Petitions for Modification; Correction AGENCY: Mine Safety and Health Administration, Labor. ACTION: Notice; correction. SUMMARY: The Mine Safety and Health Administration (MSHA... Affected: 30 CFR 75.507-1(a) (Electric equipment other than power-connection points; outby the last...

  1. 9 CFR 417.3 - Corrective actions. (United States)


    ... REGULATORY REQUIREMENTS UNDER THE FEDERAL MEAT INSPECTION ACT AND THE POULTRY PRODUCTS INSPECTION ACT HAZARD ANALYSIS AND CRITICAL CONTROL POINT (HACCP) SYSTEMS § 417.3 Corrective actions. (a) The written HACCP plan.... The HACCP plan shall describe the corrective action to be taken, and assign responsibility for...

  2. 16 CFR 1209.37 - Corrective actions. (United States)


    ... taken. Corrective action includes changes to the manufacturing process as well as reworking the insulation product itself. Corrective action may consist of equipment adjustment, equipment repair, equipment replacement, change in chemical formulation, change in chemical quantity, change in cellulosic stock, or...

  3. Characteristics of Correctional Instruction, 1789-1875. (United States)

    Gehring, Thom


    In the 19th century, ministers taught reading to prisoners on Sunday evenings in so-called sabbath schools. Expansion of these efforts into other subjects led to correctional education. Inadequate resources and facilities and resistance from administrators and prisoners parallel the struggles of today's correctional educators. (SK)

  4. The Organization of Correctional Education Services (United States)

    Gehring, Thom


    There have been five major types of correctional education organizations over the centuries: Sabbath school, traditional or decentralized, bureau, correctional school district (CSD), and integral education. The middle three are modern organizational patterns that can be implemented throughout a system: Decentralized, bureau, and CSD. The…

  5. Quantum Error Correction Beyond Completely Positive Maps


    Shabani, A.; Lidar, D. A.


    By introducing an operator sum representation for arbitrary linear maps, we develop a generalized theory of quantum error correction (QEC) that applies to any linear map, in particular maps that are not completely positive (CP). This theory of "linear quantum error correction" is applicable in cases where the standard and restrictive assumption of a factorized initial system-bath state does not apply.

  6. 7 CFR 868.73 - Corrected certificates. (United States)


    ... authorized agent who affixed the name or signature, or both. Errors found during this process shall be... this section and the instructions, corrected certificates shall show— (i) The terms “Corrected Original... that has been superseded by another certificate or on the basis of a subsequent analysis for quality....

  7. Minimal Exit Trajectories with Optimum Correctional Manoeuvres

    Directory of Open Access Journals (Sweden)

    T. N. Srivastava


    Minimal exit trajectories with optimum correctional manoeuvres for a rocket between two coplanar, noncoaxial elliptic orbits in an inverse square gravitational field have been investigated. The case of trajectories with no correctional manoeuvres has been analysed. Finally, minimal exit trajectories through specified orbital terminals are discussed, and the problem of ref. (2) is derived as a particular case.

  8. Correctional Education Reform--School Libraries. (United States)

    Puffer, Margaret; Burton, Linda


    School library services are essential to standards-based education reform. Correctional educators must be aware of information literacy standards as well as library standards for juvenile facilities and should incorporate library media specialists into correctional education programs. (Contains 32 references.) (SK)

  9. FISICO: Fast Image SegmentatIon COrrection.

    Directory of Open Access Journals (Sweden)

    Waldo Valenzuela

    In clinical diagnosis, medical image segmentation plays a key role in the analysis of pathological regions. Despite advances in automatic and semi-automatic segmentation techniques, time-effective correction tools are still needed to improve segmentation results. Such tools should provide faster corrections with fewer interactions and a user-independent solution, to reduce the time frame between image acquisition and diagnosis. We present a new interactive method for correcting image segmentations. Our method provides 3D shape corrections through 2D interactions, enabling intuitive and natural correction of 3D segmentation results. The method has been implemented in a software tool and evaluated for the tasks of lumbar muscle and knee joint segmentation from MR images. Experimental results show that full segmentation corrections could be performed within an average correction time of 5.5±3.3 minutes and an average of 56.5±33.1 user interactions, while maintaining the quality of the final segmentation within an average Dice coefficient of 0.92±0.02 for both anatomies. In addition, for users with different levels of expertise, our method reduces the correction time from 38±19.2 to 6.4±4.3 minutes and the number of interactions from 339±157.1 to 67.7±39.6.

  10. Background correction in near-infrared spectra of plant extracts by orthogonal signal correction

    Institute of Scientific and Technical Information of China (English)

    QU Hai-bin; OU Dan-lin; CHENG Yi-yu


    In near-infrared (NIR) analysis of plant extracts, excessive background often exists in the spectra. This background makes detection of the active constituents difficult, and correcting for it remains a challenge. In this work, the orthogonal signal correction (OSC) method was used to remove the excessive background. The method was compared with several classical background correction methods: offset correction, multiplicative scatter correction (MSC), standard normal variate (SNV) transformation, de-trending (DT), first derivative, second derivative and wavelet methods. A simulated dataset and a real NIR spectral dataset were used to test the efficiency of the different background correction methods. The results showed that OSC was the only effective method for correcting the excessive background.
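
The basic OSC operation described above, removing from the spectra the dominant variation that is orthogonal to the property of interest, can be sketched as follows. This is a minimal single-component version in Python/NumPy under simplifying assumptions (SVD-based scores, a simple deflation loop); it is illustrative, not the implementation used in the paper.

```python
import numpy as np

def osc_correct(X, y, n_components=1):
    """Remove from X components that are orthogonal to y
    (a minimal orthogonal signal correction sketch)."""
    X = X.astype(float).copy()
    y = y.astype(float).reshape(-1, 1)
    for _ in range(n_components):
        # 1. leading principal-component score of X
        u, s, vt = np.linalg.svd(X, full_matrices=False)
        t = u[:, :1] * s[0]
        # 2. orthogonalize the score against y
        t_orth = t - y @ np.linalg.lstsq(y, t, rcond=None)[0]
        # 3. loading for the orthogonal component, then deflate X
        p = X.T @ t_orth / (t_orth.T @ t_orth)
        X -= t_orth @ p.T
    return X
```

The returned matrix keeps the y-correlated signal while the y-orthogonal background direction is deflated away.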

  11. Towards a single proposal in spelling correction

    CERN Document Server

    Agirre, E; Sarasola, K


    The study presented here relies on the integrated use of different kinds of knowledge to improve first-guess accuracy in non-word, context-sensitive correction for general unrestricted texts. State-of-the-art spelling correction systems, e.g. ispell, apart from detecting spelling errors, also assist the user by offering a set of candidate corrections that are close to the misspelled word. Based on the correction proposals of ispell, we built several guessers, which were combined in different ways. Firstly, we evaluated all possibilities and selected the best ones on a corpus with artificially generated typing errors. Secondly, the best combinations were tested on texts with genuine spelling errors. The results for the latter suggest that we can expect automatic non-word correction for all the errors in free running text with 80% precision and a single proposal 98% of the time (1.02 proposals on average).
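
One simple way to combine several guessers into a single proposal, as the abstract describes, is weighted rank voting over the shared candidate set. The sketch below is illustrative Python; the guessers, weights and Borda-style scoring are assumptions, not the combination schemes evaluated in the paper.

```python
from collections import defaultdict

def combine_guessers(candidates, guessers, weights=None):
    """Combine ranked candidate lists from several guessers into a
    single proposal by weighted Borda-style voting (illustrative)."""
    weights = weights or [1.0] * len(guessers)
    scores = defaultdict(float)
    for guesser, w in zip(guessers, weights):
        ranking = guesser(candidates)          # best candidate first
        for rank, word in enumerate(ranking):
            scores[word] += w * (len(ranking) - rank)
    return max(candidates, key=lambda c: scores[c])
```

For instance, a frequency-based guesser and a simple alphabetical guesser can be combined, and the candidate that accumulates the highest weighted score becomes the single proposal offered to the user.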

  12. Key Techniques of Terminal Correction Mortar Projectiles

    Institute of Scientific and Technical Information of China (English)

    XU Jin-xiang


    The operational principle, the impulse force and the terminal guidance laws of terminal correction mortar projectiles (TCMP) are studied in this paper. Using a TCMP simulation program, key techniques are examined, including the influence on miss distance of the acting point of the impulse force, the impulse force value, the correction threshold and the number of impulse rockets; a dual-pulse control scheme is also studied. Simulation results indicate that the best acting point is near the center of gravity, that sufficient correction resources are needed, that the miss distance is insensitive to the correction threshold, and that properly increasing the number of impulse rockets improves hit precision. The velocity pursuit guidance law yields a smaller miss distance, and in the dual-impulse control scheme the change of the attack angle is milder and the transient time is shorter. These conclusions are important for choosing parameters and designing impulse correction schemes for TCMP.

  13. Quantum gravitational corrections for spinning particles

    CERN Document Server

    Fröb, Markus B


    We calculate the quantum corrections to the gauge-invariant gravitational potentials of spinning particles in flat space, induced by loops of both massive and massless matter fields of various types. While the corrections to the Newtonian potential induced by massless conformal matter for spinless particles are well known, and the corresponding corrections due to massless minimally coupled scalars [S. Park and R. P. Woodard, Class. Quant. Grav. 27 (2010) 245008], massless non-conformal scalars [A. Marunovic and T. Prokopec, Phys. Rev. D 87 (2013) 104027] and massive scalars, fermions and vector bosons [D. Burns and A. Pilaftsis, Phys. Rev. D 91 (2015) 064047] have recently been derived, spinning particles receive additional corrections which are the subject of the present work. We give fully analytic results valid for all distances from the particle, and present numerical results as well as asymptotic expansions. At large distances from the particle, the corrections due to massive fields are exponentially suppressed...

  14. Mechanism for Corrective Action on Budget Imbalances

    Directory of Open Access Journals (Sweden)

    Ion Lucian CATRINA


    The European Fiscal Compact obliges the signatory states to establish an automatic mechanism for taking corrective action on budget imbalances. Nevertheless, the European Treaty says nothing about the tools that should be used to reach the desired budget equilibrium, only that the mechanism should aim at correcting deviations from the medium-term objective or the adjustment path, including their cumulated impact on government debt dynamics. This paper aims to show that each member state has to build its correction mechanism according to the impact of the chosen tools on economic growth and on general government revenues. We also emphasize that the correction mechanism should be built not only on spending- or tax-based adjustments but also on a high-quality package of economic policies.

  15. Interpretation and application of reaction class transition state theory for accurate calculation of thermokinetic parameters using isodesmic reaction method. (United States)

    Wang, Bi-Yao; Li, Ze-Rong; Tan, Ning-Xin; Yao, Qian; Li, Xiang-Yuan


    We present a further interpretation of reaction class transition state theory (RC-TST), proposed by Truong et al. for the accurate calculation of rate coefficients for reactions in a class. It is found that RC-TST can be interpreted through the isodesmic reaction method, which is usually used to calculate the reaction enthalpy or the enthalpy of formation of a species, and that the theory can also be used to calculate reaction barriers and reaction enthalpies for reactions in a class. A correction scheme based on this theory is proposed for the calculation of the reaction barriers and reaction enthalpies for reactions in a class. To validate the scheme, 16 combinations of various ab initio levels with various basis sets are used as the approximate methods, and the CCSD(T)/CBS method is used as the benchmark, to calculate the reaction energies and energy barriers for a representative set of five reactions from the reaction class R(c)CH(R(b))CR(a)CH2 + OH(•) → R(c)C(•)(R(b))CR(a)CH2 + H2O (R(a), R(b), and R(c) in the reaction formula represent an alkyl group or hydrogen). The results of the approximate methods are then corrected by the theory. The maximum average deviations of the energy barrier and the reaction enthalpy are 99.97 kJ/mol and 70.35 kJ/mol, respectively, before correction and are reduced to 4.02 kJ/mol and 8.19 kJ/mol, respectively, after correction, indicating that after correction the results are not sensitive to the level of the ab initio method and the size of the basis set, as they are before correction. Therefore, reaction energies and energy barriers for reactions in a class can be calculated accurately at a relatively low ab initio level using our scheme. It is also shown that the rate coefficients for the five representative reactions calculated at the BHandHLYP/6-31G(d,p) level of theory via our scheme are very close to the values calculated at the CCSD(T)/CBS level. Finally, reaction...
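
The correction scheme amounts to shifting each low-level barrier in the class by the benchmark/low-level difference observed for a reference reaction in the same class. A worked toy example (function name and all numbers invented for illustration):

```python
def class_corrected_barrier(approx_target, approx_ref, benchmark_ref):
    """Correct a low-level barrier for a reaction in a class using the
    benchmark-minus-low-level difference of a reference reaction in the
    same class (an isodesmic-style, RC-TST-flavoured correction)."""
    return approx_target + (benchmark_ref - approx_ref)
```

For example, if the reference reaction has a low-level barrier of 30.0 kJ/mol against a benchmark of 25.0 kJ/mol, the class correction is -5.0 kJ/mol, so a low-level target barrier of 42.0 kJ/mol is corrected to 37.0 kJ/mol.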

  16. Segmental Orthodontics for the Correction of Cross Bites (United States)

    Mathur, Rinku


    ABSTRACT Cross bite is a condition where one or more teeth may be abnormally malposed buccally, lingually or labially with reference to the opposing tooth or teeth. Cross bite correction is highly recommended, as this kind of malocclusion does not diminish with age. An uncorrected cross bite may lead to abnormal wear of the lower anteriors and cuspal interference, a mandibular shift resulting in mandibular asymmetry, and temporomandibular joint dysfunction syndrome. There are several methods for treating this type of malocclusion. In this article, segmental orthodontics is highlighted using 2 × 4 appliance therapy and lingual buttons with cross elastics. This appliance offers many advantages: it provides complete control of anterior tooth position, is extremely well tolerated, requires no adjustment by the patient and allows accurate and rapid positioning of teeth. PMID:27616858

  17. Evolutionary modeling-based approach for model errors correction (United States)

    Wan, S. Q.; He, W. P.; Wang, L.; Jiang, W.; Zhang, W.


    The inverse problem of using the information in historical data to estimate model errors is one of the frontier research topics in the field. In this study, we investigate such a problem using the classic Lorenz (1963) equations as the prediction model and the Lorenz equations with a periodic evolutionary function as an accurate representation of reality to generate "observational data." On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. Thereby, a new approach to estimating model errors based on EM is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it realizes a combination of statistics and dynamics to a certain extent.

  18. Differential aberration correction (DAC) microscopy: a new molecular ruler. (United States)

    Vallotton, P


    Considerable effort has been deployed towards measuring molecular-range distances in fluorescence microscopy. In the 1-10 nm range, Förster energy transfer microscopy is difficult to beat. Above 300 nm, conventional diffraction-limited microscopy is suitable. We introduce a simple experimental technique that bridges the gap between those two resolution scales in both 2D and 3D, with a resolution of about 20 nm. The method relies on a computational approach to accurately correct optical aberrations over the whole field of view. The method is differential because the probes of interest are affected by aberrations in exactly the same manner as the reference probes used to construct the aberration deformation field. We expect that this technique will have significant implications for investigating structural and functional questions in bio-molecular sciences.
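
The differential idea, fitting a deformation field from reference probes and applying it to the probes of interest, can be illustrated with a least-squares fit. The sketch below uses a 2D affine field for brevity; the actual method fits a more general aberration field over the whole field of view, so function and variable names here are illustrative only.

```python
import numpy as np

def correct_aberration(ref_true, ref_measured, probes_measured):
    """Fit an affine deformation field from reference probes and use it
    to correct measured probe positions (simplified 2D sketch)."""
    # design matrix [x, y, 1] built from measured reference positions
    A = np.hstack([ref_measured, np.ones((len(ref_measured), 1))])
    # least-squares affine map: measured coordinates -> true coordinates
    coeffs, *_ = np.linalg.lstsq(A, ref_true, rcond=None)
    P = np.hstack([probes_measured, np.ones((len(probes_measured), 1))])
    return P @ coeffs
```

Because the probes of interest and the reference probes see the same deformation, applying the fitted map to the probes of interest removes the aberration to within the fit residual.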

  19. Relic density computations at NLO: infrared finiteness and thermal correction

    CERN Document Server

    Beneke, Martin; Hryczuk, Andrzej


    There is an increasing interest in accurate dark matter relic density predictions, which requires next-to-leading order (NLO) calculations. The method applied up to now uses zero-temperature NLO calculations of annihilation cross sections in the standard Boltzmann equation for freeze-out, and is conceptually problematic, since it ignores the finite-temperature infrared (IR) divergences from soft and collinear radiation and virtual effects. We address this problem systematically by starting from non-equilibrium quantum field theory, and demonstrate on a realistic model that soft and collinear temperature-dependent divergences cancel in the collision term. Our analysis provides justification for the use of the freeze-out equation in its conventional form and determines the leading finite-temperature correction to the annihilation cross section. This turns out to have a remarkably simple structure.

  20. Correcting GRACE gravity fields for ocean tide effects

    DEFF Research Database (Denmark)

    Knudsen, Per; Andersen, Ole Baltazar


    The GRACE mission will be launched in early 2002 and will map the Earth's gravity field and its variations with unprecedented accuracy during its 5-year lifetime. Unless ocean tide signals and their load upon the solid earth are removed from the GRACE data, their long-period aliases obscure the more subtle climate signals at which GRACE aims. The difference between two existing ocean tide models can be used as an estimate of the current tidal model error for the M2, S2, K1, and O1 constituents. When compared with the expected accuracy of the GRACE system, both expressed as spherical harmonic degree variances, we find that the current ocean tide models are not accurate enough to correct GRACE data at harmonic degrees lower than 35. The accumulated tidal errors may affect the GRACE data up to harmonic degree 56. Furthermore, the atmospheric (radiation) tides may cause significant errors in the ocean...

  1. Improving comparability between microarray probe signals by thermodynamic intensity correction

    DEFF Research Database (Denmark)

    Bruun, G. M.; Wernersson, Rasmus; Juncker, Agnieszka


    Signals from different oligonucleotide probes against the same target show great variation in intensities. However, detection of differences along a sequence, e.g. to reveal intron/exon architecture or transcription boundaries, as well as simple absent/present calls, depends on comparisons between different probes. It is therefore of great interest to correct for the variation between probes, much of which is sequence dependent. We demonstrate that a thermodynamic model for hybridization of either DNA or RNA to a DNA microarray, which takes the sequence-dependent probe affinities into account, significantly reduces the signal fluctuation between probes targeting the same gene transcript. For a test set of tightly tiled yeast genes, the model reduces the variance by up to a factor of approximately 1/3. As a consequence of this reduction, the model is shown to yield a more accurate...

  2. Accurate LAI retrieval method based on PROBA/CHRIS data

    Directory of Open Access Journals (Sweden)

    W. Fan


    Leaf area index (LAI) is one of the key structural variables of terrestrial vegetation ecosystems, and remote sensing offers a chance to derive LAI accurately at regional scales. Variations of the background, atmospheric conditions and the anisotropy of canopy reflectance are three factors that can strongly limit the accuracy of retrieved LAI. Based on a hybrid canopy reflectance model, a new hyperspectral directional second derivative (DSD) method is proposed in this paper. This method estimates LAI accurately by analyzing the canopy anisotropy, and the effect of the background can also be effectively removed, so the inversion precision and the dynamic range are improved remarkably, as shown by numerical simulations. As derivative methods are very sensitive to random noise, we put forward an innovative filtering approach by which the data can be de-noised in the spectral and spatial dimensions simultaneously; it removes the random noise effectively, so the method can be applied to remotely sensed hyperspectral images. The study region is situated in Zhangye, Gansu Province, China; hyperspectral and multi-angular images of the study region were acquired by the Compact High-Resolution Imaging Spectrometer/Project for On-Board Autonomy (CHRIS/PROBA) on 4 and 14 June 2008. After the pre-processing procedures, the DSD method was applied, and the retrieved LAI was validated against ground truth from 11 sites, showing that, with the innovative filtering method, the new LAI inversion method is accurate and effective.
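
The sensitivity of derivative methods to noise, and the benefit of filtering before differentiation, can be seen in a one-dimensional sketch: a moving-average filter followed by a central-difference second derivative. This toy version does not capture the paper's directional derivatives or its joint spectral/spatial filtering; the function and parameter names are illustrative.

```python
import numpy as np

def second_derivative(spectrum, wavelengths, window=5):
    """Noise-robust second derivative of a 1D spectrum: moving-average
    smoothing followed by a central difference (illustrative sketch)."""
    kernel = np.ones(window) / window
    smooth = np.convolve(spectrum, kernel, mode="same")
    h = wavelengths[1] - wavelengths[0]          # assumes uniform sampling
    d2 = np.zeros_like(smooth)
    d2[1:-1] = (smooth[2:] - 2 * smooth[1:-1] + smooth[:-2]) / h**2
    return d2
```

For a smooth signal the filter leaves the second derivative essentially unchanged away from the edges, while for a noisy signal it suppresses the noise amplification that raw differencing would cause.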

  3. Solvent effects on zero-point vibrational corrections to optical rotations and nuclear magnetic resonance shielding constants (United States)

    Kongsted, Jacob; Ruud, Kenneth


    We present a study of solvent effects on the zero-point vibrational corrections (ZPVCs) to optical rotations and nuclear magnetic resonance shielding constants of solvated molecules. The model used to calculate the vibrational corrections relies on an expansion of the potential and property surfaces around an effective molecular geometry and includes both harmonic and anharmonic corrections. Numerical examples are presented for (S)-propylene oxide in various solvents as well as for acetone and the three diazene molecules. We find that solvent effects on the ZPVCs may be significant and in some cases crucial to accurately predicting solvent shifts of molecular properties.

  4. Accurate Programming: Thinking about programs in terms of properties

    Directory of Open Access Journals (Sweden)

    Walid Taha


    Accurate programming is a practical approach to producing high quality programs. It combines ideas from test automation, test-driven development, agile programming, and other state-of-the-art software development methods. In addition to building on approaches that have proven effective in practice, it emphasizes concepts that help programmers sharpen their understanding of both the problems they are solving and the solutions they come up with. This is achieved by encouraging programmers to think about programs in terms of properties.

  5. Accurate emulators for large-scale computer experiments

    CERN Document Server

    Haaland, Ben; doi:10.1214/11-AOS929


    Large-scale computer experiments are becoming increasingly important in science. A multi-step procedure is introduced for modeling such experiments, which builds an accurate interpolator in multiple steps. In practice, the procedure shows substantial improvements in overall accuracy, but its theoretical properties are not well established. We introduce the terms nominal and numeric error and decompose the overall error of an interpolator into nominal and numeric portions. Bounds on the numeric and nominal error are developed to show theoretically that substantial gains in overall accuracy can be attained with the multi-step approach.

  6. Accurate studies on dissociation energies of diatomic molecules

    Institute of Scientific and Technical Information of China (English)

    SUN; WeiGuo; FAN; QunChao


    The molecular dissociation energies of some electronic states of hydride and N2 molecules were studied using a parameter-free analytical formula suggested in this study and the algebraic method (AM) proposed recently. The results show that the accurate AM dissociation energies De(AM) agree excellently with the experimental dissociation energies De(expt), and that the dissociation energy of an electronic state whose experimental value is not available, such as the 2³Δg state of ⁷Li₂, can be predicted using the new formula.

  7. Accurate Excited State Geometries within Reduced Subspace TDDFT/TDA. (United States)

    Robinson, David


    A method for the calculation of TDDFT/TDA excited state geometries within a reduced subspace of Kohn-Sham orbitals has been implemented and tested. Accurate geometries are found for all of the fluorophore-like molecules tested, with at most all occupied valence orbitals and half of the virtual orbitals included, and for some molecules even fewer orbitals. Efficiency gains of between 15 and 30% are found for essentially the same level of accuracy as a standard TDDFT/TDA excited state geometry optimization calculation.

  8. Accurate strand-specific quantification of viral RNA.

    Directory of Open Access Journals (Sweden)

    Nicole E Plaskon

    The presence of full-length complements of viral genomic RNA is a hallmark of RNA virus replication within an infected cell. As such, methods for detecting and measuring specific strands of viral RNA in infected cells and tissues are important in the study of RNA viruses. Strand-specific quantitative real-time PCR (ssqPCR) assays are increasingly being used for this purpose, but the accuracy of these assays depends on the assumption that the amount of cDNA measured during the quantitative PCR (qPCR) step accurately reflects the amount of a specific viral RNA strand present in the RT reaction. To specifically test this assumption, we developed multiple ssqPCR assays for the positive-strand RNA virus o'nyong-nyong (ONNV) that were based upon the most prevalent ssqPCR assay design types in the literature. We then compared various parameters of the ONNV-specific assays. We found that an assay employing standard unmodified virus-specific primers failed to discern the difference between cDNAs generated from virus-specific primers and those generated through false priming. Further, we were unable to accurately measure levels of ONNV (-) strand RNA with this assay when higher levels of cDNA generated from the (+) strand were present. Taken together, these results suggest that assays of this type do not accurately quantify levels of the anti-genomic strand present during RNA virus infectious cycles. However, an assay permitting the use of a tag-specific primer was able to distinguish cDNAs transcribed from ONNV (-) strand RNA from other cDNAs present, thus allowing accurate quantification of the anti-genomic strand. We also report the sensitivities of two different detection strategies and chemistries, SYBR® Green and DNA hydrolysis probes, used with our tagged ONNV-specific ssqPCR assays. Finally, we describe the development, design and validation of ssqPCR assays for chikungunya virus (CHIKV), the recent cause of large outbreaks of disease in the Indian Ocean...

  9. A novel simple and accurate flatness measurement method

    CERN Document Server

    Thang, H L


    Flatness measurement of a surface plate is a long-standing research topic. However, the ISO definition and related measurement methods are difficult to apply and/or complicated in data analysis. In particular, existing methods do not deal clearly and straightforwardly with the inclining angle that is always present in any flatness measurement. In this report a novel, simple and accurate flatness measurement method is introduced to overcome this shortcoming of the available methods. The mathematical modeling of the method is also presented, making its underlying nature transparent. Worked examples show consistent results.
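
One straightforward way to remove the inclining angle the abstract refers to is to fit and subtract a least-squares reference plane, then report the peak-to-valley of the residual heights. This is a common convention, sketched below; it is not necessarily the exact definition used in the report.

```python
import numpy as np

def flatness(points):
    """Flatness of a surface from (x, y, z) samples: subtract the
    least-squares best-fit plane (removing any tilt), then return the
    peak-to-valley of the residual heights."""
    pts = np.asarray(points, float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residuals = pts[:, 2] - A @ coeffs
    return residuals.max() - residuals.min()
```

A perfectly flat but tilted plate gives a flatness near zero, since the plane fit absorbs the tilt entirely; any local bump or hollow then shows up directly in the residuals.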

  10. Fourth order accurate compact scheme with group velocity control (GVC)

    Institute of Scientific and Technical Information of China (English)


    For solving complex flow fields with multi-scale structure, higher-order accurate schemes are preferred, and among them compact schemes have higher resolving efficiency. When compact and upwind compact schemes are used to solve aerodynamic problems, numerical oscillations appear near shocks. These oscillations arise from the non-uniform group velocity of wave packets in the numerical solution. To improve the resolution of the shock, a parameter function is introduced into the compact scheme to control the group velocity. The newly developed method is simple, has higher accuracy, and uses a smaller stencil of grid points.
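
As a baseline for what the group-velocity-controlled scheme modifies, a standard fourth-order compact (Padé) first derivative on a periodic grid can be sketched as follows; the paper's GVC parameter function is not included here, and the dense solve is for clarity (a real implementation would use a tridiagonal solver).

```python
import numpy as np

def compact_derivative(f, h):
    """Fourth-order compact first derivative on a periodic grid:
    (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = 3 (f_{i+1} - f_{i-1}) / (4h)."""
    n = len(f)
    A = np.eye(n) + 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
    A[0, -1] = A[-1, 0] = 0.25                 # periodic closure
    rhs = 3.0 * (np.roll(f, -1) - np.roll(f, 1)) / (4.0 * h)
    return np.linalg.solve(A, rhs)
```

The implicit left-hand side couples neighbouring derivative values, which is what gives compact schemes their high resolving efficiency on a small stencil.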

  11. Rapid and Accurate Idea Transfer: Presenting Ideas with Concept Maps (United States)


    Questions from the Pre-test were included in the Post-test, for example: "At the end of the day, people in the camps take home about __ taka to feed their families." Teachers are not paid regularly; tuition fees for different classes are 40 to 80 taka (Bangladeshi currency) per month, which is very high for the families. (In April 2005 the exchange rate was one $ = 60 taka.)

  12. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    Directory of Open Access Journals (Sweden)

    Mark Shortis


    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  13. Calibration Techniques for Accurate Measurements by Underwater Camera Systems. (United States)

    Shortis, Mark


    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  14. Calculating NMR parameters in aluminophosphates: evaluation of dispersion correction schemes. (United States)

    Sneddon, Scott; Dawson, Daniel M; Pickard, Chris J; Ashbrook, Sharon E


    Periodic density functional theory (DFT) calculations have recently emerged as a popular tool for assigning solid-state nuclear magnetic resonance (NMR) spectra. However, in order for the calculations to yield accurate results, accurate structural models are also required. In many cases the structural model (often derived from crystallographic diffraction) must be optimised (i.e., to an energy minimum) using DFT prior to the calculation of NMR parameters. However, DFT does not reproduce weak long-range "dispersion" interactions well, and optimisation using some functionals can expand the crystallographic unit cell, particularly when dispersion interactions are important in defining the structure. Recently, dispersion-corrected DFT (DFT-D) has been extended to periodic calculations, to compensate for these missing interactions. Here, we investigate whether dispersion corrections are important for aluminophosphate zeolites (AlPOs) by comparing the structures optimised by DFT and DFT-D (using the PBE functional). For as-made AlPOs (containing cationic structure-directing agents (SDAs) and framework-bound anions) dispersion interactions appear to be important, with significant changes between the DFT and DFT-D unit cells. However, for calcined AlPOs, where the SDA-anion pairs are removed, dispersion interactions appear much less important, and the DFT and DFT-D unit cells are similar. We show that, while the different optimisation strategies yield similar calculated NMR parameters (providing that the atomic positions are optimised), the DFT-D optimisations provide structures in better agreement with the experimental diffraction measurements. Therefore, it appears that DFT-D calculations can, and should, be used for the optimisation of calcined and as-made AlPOs, in order to provide the closest agreement with all experimental measurements.
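
The "missing interactions" that DFT-D adds back are pairwise dispersion terms. A minimal Grimme-D2-style sketch is shown below, with invented parameter values and a single shared van der Waals radius rather than the element-specific values and periodic lattice sums of a real implementation.

```python
import numpy as np

def d2_dispersion(coords, c6, s6=0.75, d=20.0, r0=3.0):
    """Pairwise D2-style dispersion energy:
    E = -s6 * sum_{i<j} C6_ij / r_ij^6 * f_damp(r_ij), with the damping
    f_damp(r) = 1 / (1 + exp(-d (r / r0 - 1))) switching the term off at
    short range. Parameters here are illustrative simplifications."""
    E = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            c6ij = np.sqrt(c6[i] * c6[j])   # geometric-mean combination rule
            fdamp = 1.0 / (1.0 + np.exp(-d * (r / r0 - 1.0)))
            E -= s6 * c6ij / r**6 * fdamp
    return E
```

The energy is attractive (negative) and falls off as 1/r^6, which is why neglecting it tends to expand unit cells held together by weak interactions.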

  15. Corrective Action Plan for Corrective Action Unit 424: Area 3 Landfill Complex, Tonopah Test Range, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Bechtel Nevada


    This corrective action plan provides the closure implementation methods for the Area 3 Landfill Complex, Corrective Action Unit (CAU) 424, located at the Tonopah Test Range. The Area 3 Landfill Complex consists of 8 landfill sites, each designated as a separate corrective action site.

  16. A Form—Correcting System of Chinese Characters Using a Model of Correcting Procedures of Calligraphists

    Institute of Scientific and Technical Information of China (English)

    曾建超; Hidehiko Sanada; et al.


    A support system for form-correction of Chinese characters is developed based upon the generation model SAM, and its feasibility is evaluated. SAM is an excellent model for generating Chinese characters, but determining appropriate parameters is difficult because calligraphic knowledge is required. Noticing that the calligraphic knowledge of calligraphists is embodied in their corrective actions, we adopt a strategy of acquiring that knowledge by monitoring, recording, and analyzing the corrective actions of calligraphists, and aim to realize an environment in which calligraphists can easily correct character forms and which records their corrective actions without interfering with them. In this paper, we first construct a model of the correcting procedures of calligraphists, composed of typical correcting procedures acquired by extensively observing their corrective actions and interviewing them, and then develop a form-correcting system for brush-written Chinese characters using this model. Secondly, through actual correcting experiments, we demonstrate that parameters within SAM can easily be corrected at the level of character patterns with our system, and show that the system is effective and easy for calligraphists to use by evaluating the effectiveness of the correcting model, the sufficiency of its functions, and its execution speed.

  17. A Simple and Accurate Closed-Form EGN Model Formula

    CERN Document Server

    Poggiolini, P; Carena, A; Forghieri, F


    The GN model of non-linear fiber propagation has been shown to overestimate the variance of non-linearity due to the signal Gaussianity approximation, leading to maximum reach predictions for typical optical systems that may be pessimistic by about 5% to 15%, depending on fiber type and system set-up. Various models have been proposed that improve on the GN model's accuracy. One of them is the EGN model, which completely removes the Gaussianity approximation from all non-linear interference (NLI) components. The EGN model is, however, substantially more complex than the GN model. Recently, we proposed a simple closed-form formula that approximates the EGN model, starting from the GN model. It was, however, limited to all-identical, equispaced channels, and did not correct single-channel NLI (also called SCI). In this follow-up contribution, we propose an improved version that both addresses non-identical channels and corrects the SCI contribution as well. Extensive simulative testing shows the n...

  18. Accurate Runout Measurement for HDD Spinning Motors and Disks (United States)

    Jiang, Quan; Bi, Chao; Lin, Song

    As hard disk drive (HDD) areal density increases, track width becomes smaller and smaller, and so must non-repeatable runout. The HDD industry needs more accurate, higher-resolution runout measurements of spinning spindle motors and media platters in both the axial and radial directions. This paper introduces a new system that precisely measures the runout of HDD spinning disks and motors by synchronously acquiring the rotor position signal and the displacements in the axial or radial direction. In order to minimize the synchronization error between the rotor position and the displacement signal, a high-resolution counter is adopted instead of the conventional phase-lock loop method. With a laser Doppler vibrometer and proper signal processing, the proposed runout system can precisely measure the runout of HDD spinning disks and motors with 1 nm resolution and 0.2% accuracy at a proper sampling rate. It can provide an effective and accurate means to measure the runout of high areal density HDDs, in particular next-generation HDDs such as patterned-media HDDs and HAMR HDDs.

  19. Simple and accurate optical height sensor for wafer inspection systems (United States)

    Shimura, Kei; Nakai, Naoya; Taniguchi, Koichi; Itoh, Masahide


    An accurate method for measuring wafer surface height is required in wafer inspection systems to adjust the focus of the inspection optics quickly and precisely. A method that projects a laser spot onto the wafer surface obliquely and detects its image displacement using a one-dimensional position-sensitive detector is known, and a variety of methods have been proposed to improve accuracy by compensating for the measurement error caused by surface patterns. We have developed a simple and accurate method in which an image of a reticle with eight slits is projected onto the wafer surface and its reflected image is detected using an image sensor. The surface height is calculated by averaging the coordinates of the slit images in both directions of the captured image. Pattern-related measurement error was reduced by applying this coordinate averaging to the multiple-slit-projection method. Accuracy better than 0.35 μm was achieved for a patterned wafer at the reference height and at ±0.1 mm from the reference height, in a simple configuration.
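    The coordinate-averaging idea can be sketched as below, assuming a standard oblique-incidence triangulation relation in which a height change h shifts the reflected image laterally by d = 2·h·sin θ (the factor of two comes from specular reflection). The function name, pixel pitch, and angle are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np

    def height_from_slits(meas_cols, ref_cols, angle_deg, px_um):
        """Estimate wafer height from averaged slit-image coordinate shifts.

        meas_cols:  measured centroid column of each slit image (pixels)
        ref_cols:   centroid columns recorded at the reference height (pixels)
        angle_deg:  oblique incidence angle measured from the wafer normal
        px_um:      image-sensor pixel pitch in micrometres

        Averaging the per-slit shifts suppresses pattern-dependent centroid
        errors before applying the triangulation relation d = 2*h*sin(theta).
        """
        shift_px = np.mean(np.asarray(meas_cols) - np.asarray(ref_cols))
        shift_um = shift_px * px_um
        return shift_um / (2.0 * np.sin(np.radians(angle_deg)))

    # Illustration: eight slits, 70 degree incidence, 1 um pixels,
    # wafer raised 10 um above the reference height.
    ref = np.arange(8) * 40.0
    true_h = 10.0
    shift = 2.0 * true_h * np.sin(np.radians(70.0))   # expected shift in px
    h = height_from_slits(ref + shift, ref, 70.0, 1.0)
    ```

    In a real system each slit's shift would carry a different pattern-induced error, and the mean over the eight slits is what reduces that error; here all shifts are identical, so the height is recovered exactly.
    
    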

  20. Cerebral fat embolism: Use of MR spectroscopy for accurate diagnosis

    Directory of Open Access Journals (Sweden)

    Laxmi Kokatnur


    Cerebral fat embolism (CFE is an uncommon but serious complication following orthopedic procedures. It usually presents with altered mental status, and can be part of fat embolism syndrome (FES if associated with cutaneous and respiratory manifestations. Because of the presence of other common factors affecting mental status, particularly in the postoperative period, the diagnosis of CFE can be challenging. Magnetic resonance imaging (MRI of the brain typically shows multiple lesions distributed predominantly in the subcortical region, which appear as hyperintense lesions on T2 and diffusion-weighted images. Although the location offers a clue, the MRI findings are not specific for CFE; watershed infarcts, hypoxic encephalopathy, disseminated infections, demyelinating disorders, and diffuse axonal injury can show similar changes on brain MRI. The presence of fat in these hyperintense lesions, identified by MR spectroscopy as raised lipid peaks, helps in the accurate diagnosis of CFE. Normal brain tissue, and conditions producing similar MRI changes, will not show a lipid peak on MR spectroscopy. We present a case of CFE initially misdiagnosed as brain stem stroke based on clinical presentation and cranial computed tomography (CT scan, in which MR spectroscopy later elucidated the accurate diagnosis.