Accurate light-time correction due to a gravitating mass
Ashby, Neil
2009-01-01
This work arose as an aftermath of Cassini's 2002 experiment \cite{bblipt03}, in which the PPN parameter $\gamma$ was measured with an accuracy $\sigma_\gamma = 2.3\times 10^{-5}$ and found consistent with the prediction $\gamma = 1$ of general relativity. The Orbit Determination Program (ODP) of NASA's Jet Propulsion Laboratory, which was used in the data analysis, is based on an expression for the gravitational delay which differs from the standard formula; this difference is of second order in powers of $m$ -- the sun's gravitational radius -- but in Cassini's case it was much larger than the expected order of magnitude $m^2/b$, where $b$ is the ray's closest approach distance. Since the ODP does not account for any other second-order terms, it is necessary, also in view of future more accurate experiments, to systematically evaluate higher-order corrections and to determine which terms are significant. Light propagation in a static spacetime is equivalent to a problem in ordinary geometrical optics; Fermat...
Correcting incompatible DN values and geometric errors in nighttime lights time series images
Zhao, Naizhuo [Texas Tech Univ., Lubbock, TX (United States)]; Zhou, Yuyu [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)]; Samson, Eric L. [Mayan Esteem Project, Farmington, CT (United States)]
2014-09-19
The Defense Meteorological Satellite Program's Operational Linescan System (DMSP-OLS) nighttime lights imagery has proven to be a powerful remote sensing tool to monitor urbanization and assess socioeconomic activities at large scales. However, the existence of incompatible digital number (DN) values and geometric errors severely limits the application of nighttime light image data to multi-year quantitative research. In this study we extend and improve previous studies on inter-calibrating nighttime lights image data to obtain more compatible and reliable nighttime lights time series (NLT) image data for China and the United States (US) through four steps: inter-calibration, geometric correction, steady increase adjustment, and population data correction. We then use gross domestic product (GDP) data to test the processed NLT image data indirectly and find that sum light (the summed DN value of pixels in a nighttime light image) shows clear increasing trends under relatively large GDP growth rates but neither increases nor decreases under relatively small GDP growth rates. As nighttime light is a sensitive indicator of economic activity, the temporally consistent trends between sum light and GDP growth rate imply that the brightness of nighttime lights on the ground is correctly represented by the processed NLT image data. Finally, by analyzing the corrected NLT image data from 1992 to 2008, we find that China experienced marked nighttime lights development in 1992-1997 and 2001-2008, while the US suffered nighttime lights decay over large areas after 2001.
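A common way to implement an inter-calibration step of this kind is to fit a second-order polynomial that maps each satellite-year's DN values onto a reference image over a radiometrically stable region. The sketch below illustrates that idea on synthetic data; the generating coefficients and noise level are illustrative, not the calibration actually used in the study.

```python
import numpy as np

# Hypothetical illustration of DN inter-calibration: fit a quadratic mapping
# from one satellite-year's DN values to a reference image's DN values.
rng = np.random.default_rng(0)
dn_reference = rng.uniform(0, 63, 500)            # reference DN values (0-63 range)
# Simulate a sensor whose response drifts nonlinearly, plus measurement noise
dn_target = 0.8 * dn_reference + 0.003 * dn_reference**2 + rng.normal(0, 0.5, 500)

# Fit DN_ref ~ c0 + c1*DN_target + c2*DN_target**2 by least squares
c2, c1, c0 = np.polyfit(dn_target, dn_reference, 2)
dn_calibrated = c0 + c1 * dn_target + c2 * dn_target**2

# After calibration the target DNs should track the reference closely
rmse = np.sqrt(np.mean((dn_calibrated - dn_reference) ** 2))
print(f"RMSE after inter-calibration: {rmse:.2f} DN")
```

In practice the fit would be restricted to pixels in a region whose lights are assumed stable across years, and one polynomial would be fitted per satellite-year.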
AN ACCURATE MODEL FOR CALCULATING CORRECTION OF PATH FLEXURE OF SATELLITE SIGNALS
Li Yanxing; Hu Xinkang; Shuai Ping; Zhang Zhongfu
2003-01-01
The propagation path of a satellite signal in the atmosphere is a curve, so it is very difficult to calculate its flexure correction accurately, and strict calculating expressions have so far not been derived. In this study, the flexure correction of the refraction curve is divided into two parts and strict calculating expressions are derived for each. Using the standard atmospheric model, the accurate flexure correction of the refraction curve is calculated for different zenith distances Z. On this basis, a calculation model is constructed. This model is simple in structure, convenient to use and highly accurate. When Z is smaller than 85°, the accuracy of the correction is better than 0.06 mm. The flexure correction is basically proportional to tan²Z and increases rapidly with increasing Z. When Z ≤ 50°, the correction is smaller than 0.5 mm and can be neglected; when Z > 50°, the correction must be made. When Z is 85°, 88° and 89°, the corrections are 198 mm, 8.911 m and 28.497 km, respectively. The calculation results show that the correction estimated by Hopfield is correct when Z ≤ 80°, but too small when Z = 89°. The expression in this paper is applicable to any satellite.
Allam, Amin
2015-07-14
Motivation: Next-generation sequencing generates large amounts of data affected by errors in the form of substitutions, insertions or deletions of bases. Error correction based on high-coverage information typically improves de novo assembly. Most existing tools can correct substitution errors only; some support insertions and deletions, but accuracy in many cases is low. Results: We present Karect, a novel error correction technique based on multiple alignment. Our approach supports substitution, insertion and deletion errors. It can handle non-uniform coverage as well as moderately covered areas of the sequenced genome. Experiments with data from Illumina, 454 FLX and Ion Torrent sequencing machines demonstrate that Karect is more accurate than previous methods, both in terms of correcting individual-base errors (up to 10% increase in accuracy gain) and post-de novo assembly quality (up to 10% increase in NGA50). We also introduce an improved framework for evaluating the quality of error correction.
Sei, Alain
2016-10-01
The state of the art of atmospheric correction for moderate- and high-resolution sensors is based on assuming that the surface reflectance at the bottom of the atmosphere is uniform. This assumption accounts for multiple scattering but ignores the contribution of neighboring pixels; that is, it ignores adjacency effects. Its great advantage, however, is to substantially reduce the computational cost of performing atmospheric correction and make the problem computationally tractable. In a recent paper (Sei, 2015), a computationally efficient method was introduced for the correction of adjacency effects through the use of fast FFT-based evaluations of singular integrals and the use of analytic continuation. It was shown that divergent Neumann series can be avoided and accurate results obtained for clear and turbid atmospheres. In this paper we analyze the error of the standard state-of-the-art Lambertian atmospheric correction method on Landsat imagery and compare it to our newly introduced method. We show that for high-contrast scenes the state-of-the-art atmospheric correction yields much larger errors than our method.
Brandenburg, Jan Gerit; Grimme, Stefan
2014-06-05
The ambitious goal of organic crystal structure prediction challenges theoretical methods regarding their accuracy and efficiency. Dispersion-corrected density functional theory (DFT-D) is in principle applicable, but the computational demands, for example to compute a huge number of polymorphs, are too high. Here, we demonstrate that this task can be carried out by a dispersion-corrected density functional tight binding (DFTB) method. The semiempirical Hamiltonian with the D3 correction can accurately and efficiently model both solid- and gas-phase inter- and intramolecular interactions at a speedup of two orders of magnitude compared to DFT-D. The mean absolute deviations of interaction (lattice) energies for various databases are typically 2-3 kcal/mol (10-20%), that is, only about two times larger than those for DFT-D. For zero-point phonon energies, small deviations of <0.5 kcal/mol compared to DFT-D are obtained.
Linge Johnsen, S A; Bollmann, J; Lee, H W; Zhou, Y
2017-09-21
Here a work flow towards an accurate representation of interference colours (Michel-Lévy chart) digitally captured on a polarised light microscope using dry and oil immersion objectives is presented. The work flow includes accurate rendering of interference colours considering the colour temperature of the light source of the microscope and chromatic adaptation to white points of RGB colour spaces, as well as the colour correction of the camera using readily available colour targets. The quality of different colour correction profiles was tested independently on an IT8.7/1 target. The best-performing profile used the XYZ cLUT algorithm and revealed a ΔE00 of 1.9 (6.4 with no profile) at 5× and 1.1 (8.4 with no profile) at 100× magnification, respectively. The overall performance of the workflow was tested by comparing rendered interference colours with colour-corrected images of a quartz wedge captured over a retardation range of 80-2500 nm at 5× magnification. Uncorrected images of the quartz wedge in sRGB colour space revealed a mean ΔE00 of 12.3, which could be reduced to a mean of 4.9 by applying a camera correction profile based on an IT8.7/1 target and the Matrix only algorithm (ΔE00 < 1.0 signifies colour differences imperceptible to the human eye). ΔE00 varied significantly over the 80-2500 nm retardation range of the quartz wedge, but the reasons for this variation are not well understood, and the quality of colour correction might be further improved in future by using custom-made colour targets specifically designed for the analysis of high-order interference colours. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon
2016-03-01
In this paper, we propose a CT-CBCT registration method to accurately predict the tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of the registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performances on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and on every other fraction CBCT, to which the GTV contours propagated by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean ± std in cc) between the average of the manual segmentations and the automatic segmentations are 3.70 ± 2.30 (B-spline), 1.25 ± 1.78 (demons), 0.93 ± 1.14 (optical flow), and 4.39 ± 3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
Annecchione, Maria; Hatch, David; Hefford, Shane W.
2017-01-01
In this paper we investigate digital elevation model (DEM) sourcing requirements to compute gravity gradiometry terrain corrections accurate to 1 Eötvös (Eö) at observation heights of 80 m or more above ground. Such survey heights are typical in fixed-wing airborne surveying for resource exploration, where the maximum signal-to-noise ratio is sought. We consider the accuracy of terrain corrections relevant for recent commercial airborne gravity gradiometry systems operating at the 10 Eö noise level and for future systems with a target noise level of 1 Eö. We focus on the requirements for the vertical gradient of the vertical component of gravity (Gdd) because this element of the gradient tensor is most commonly interpreted qualitatively and quantitatively. Terrain correction accuracy depends on the bare-earth DEM accuracy and spatial resolution, which in turn depend on the DEM's source. Two possible sources are considered: airborne LiDAR and the Shuttle Radar Topography Mission (SRTM). The accuracy of an SRTM DEM is affected by vegetation height. The SRTM footprint is also larger and the DEM resolution is thus lower. However, resolution requirements relax as relief decreases. Publicly available LiDAR data and 1 arc-second and 3 arc-second SRTM data were selected over four study areas representing end-member cases of vegetation cover and relief. The four study areas are presented as reference material for processing airborne gravity gradiometry data at the 1 Eö noise level with 50 m spatial resolution. From this investigation we find that, to achieve 1 Eö accuracy in the terrain correction at 80 m height, airborne LiDAR data are required even when terrain relief is a few tens of meters and the vegetation is sparse. However, as satellite ranging technologies progress, bare-earth DEMs of sufficient accuracy and resolution may be sourced at lesser cost. We found that a bare-earth DEM of 10 m resolution and 2 m accuracy are sufficient for
Berland, Kristian
2016-01-01
A computationally inexpensive k$\cdot$p-based interpolation scheme is developed that can extend the eigenvalues and momentum matrix elements of a sparsely sampled k-point grid into a densely sampled one. Dense sampling, often required to accurately describe transport and optical properties of bulk materials, can be computationally demanding, for instance, in combination with hybrid functionals within density functional theory (DFT) or with perturbative expansions beyond DFT such as the GW method. The scheme is based on solving the k$\cdot$p method and extrapolating from multiple reference k points. It includes a correction term that reduces the number of empty bands needed and ameliorates band discontinuities. We show that the scheme can be used to generate accurate band structures, densities of states, and dielectric functions. Several examples are given, using traditional and hybrid functionals, with Si, TiNiSn, and Cu as test cases. We illustrate that d-electron and semi-core states, which are partic...
Accurate mask model implementation in optical proximity correction model for 14-nm nodes and beyond
Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Farys, Vincent; Huguennet, Frederic; Armeanu, Ana-Maria; Bork, Ingo; Chomat, Michael; Buck, Peter; Schanen, Isabelle
2016-04-01
In a previous work, we demonstrated that the current optical proximity correction model assuming the mask pattern to be analogous to the designed data is no longer valid. An extreme case of line-end shortening shows a gap up to 10 nm difference (at mask level). For that reason, an accurate mask model has been calibrated for a 14-nm logic gate level. A model with a total RMS of 1.38 nm at mask level was obtained. Two-dimensional structures, such as line-end shortening and corner rounding, were well predicted using scanning electron microscopy pictures overlaid with simulated contours. The first part of this paper is dedicated to the implementation of our improved model in current flow. The improved model consists of a mask model capturing mask process and writing effects, and a standard optical and resist model addressing the litho exposure and development effects at wafer level. The second part will focus on results from the comparison of the two models, the new and the regular.
Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.
2017-01-01
Declared North Korean nuclear tests in 2006, 2009, 2013 and 2016 were observed seismically at regional and teleseismic distances. Waveform similarity allows the events to be located relatively with far greater accuracy than the absolute locations can be determined from seismic data alone. There is now significant redundancy in the data given the large number of regional and teleseismic stations that have recorded multiple events, and relative location estimates can be confirmed independently by performing calculations on many mutually exclusive sets of measurements. Using a 1-D global velocity model, the distances between the events estimated using teleseismic P phases are found to be approximately 25 per cent shorter than the distances between events estimated using regional Pn phases. The 2009, 2013 and 2016 events all take place within 1 km of each other and the discrepancy between the regional and teleseismic relative location estimates is no more than about 150 m. The discrepancy is much more significant when estimating the location of the more distant 2006 event relative to the later explosions with regional and teleseismic estimates varying by many hundreds of metres. The relative location of the 2006 event is challenging given the smaller number of observing stations, the lower signal-to-noise ratio and significant waveform dissimilarity at some regional stations. The 2006 event is however highly significant in constraining the absolute locations in the terrain at the Punggye-ri test-site in relation to observed surface infrastructure. For each seismic arrival used to estimate the relative locations, we define a slowness scaling factor which multiplies the gradient of seismic traveltime versus distance, evaluated at the source, relative to the applied 1-D velocity model. A procedure for estimating correction terms which reduce the double-difference time residual vector norms is presented together with a discussion of the associated uncertainty. The modified
Accurate and Simple Time Synchronization and Frequency Offset Correction in OFDM System
LIU Xiao-ming; JIANG Wei-yu; LIU Yuan-an
2004-01-01
We present a new synchronization scheme for Orthogonal Frequency-Division Multiplexing (OFDM) systems. In this scheme, time synchronization and carrier frequency offset correction can be performed on one identical training symbol. The time synchronization algorithm is robust and simple to operate, and its performance is independent of the carrier frequency offset. We derive the theoretical error variance of our time synchronization algorithm in an AWGN channel. We also derive the performance lower bound of our frequency offset correction algorithm. The frequency offset correction algorithm is highly accurate, and its performance degrades very little in multipath fading environments.
Ionospheric Correction of InSAR for Accurate Ice Motion Mapping at High Latitudes
Liao, H.; Meyer, F. J.
2016-12-01
Monitoring the motion of the large ice sheets is of great importance for determining ice mass balance and its contribution to sea level rise. Recently the first comprehensive ice motion maps of Greenland and Antarctica have been generated with InSAR. However, these studies have indicated that the performance of InSAR-based ice motion mapping is limited by the presence of the ionosphere. This is particularly true at high latitudes and for low-frequency SAR data. Filter-based and empirical methods (e.g., removing polynomials), which have often been used to mitigate ionospheric effects, are often ineffective in these areas due to the typically strong spatial variability of ionospheric phase delay at high latitudes and due to the risk of removing true deformation signals from the observations. In this study, we will first present an outline of our split-spectrum InSAR-based ionospheric correction approach and particularly highlight how our method improves upon published techniques, such as the multiple sub-band approach to boost estimation accuracy as well as advanced error correction and filtering algorithms. We applied our workflow to a large number of ionosphere-affected datasets over the large ice sheets to estimate the benefit of ionospheric correction on ice motion mapping accuracy. Appropriate test sites over Greenland and the Antarctic have been chosen in cooperation with authors (UW, Ian Joughin) of previous ice motion studies. To demonstrate the magnitude of ionospheric noise and to showcase the performance of ionospheric correction, we will show examples of ionosphere-affected InSAR data alongside our ionosphere-corrected results for visual comparison. We also quantitatively compared the corrected phase data to known ice velocity fields for the analyzed areas, provided by experts in ice velocity mapping. From our studies we found that ionospheric correction significantly reduces biases in ice velocity estimates and boosts accuracy by a factor that depends on a
Beare, Richard; Brown, Michael J. I.; Pimbblet, Kevin, E-mail: richard@beares.net [Monash Centre for Astrophysics, Monash University, Clayton, Victoria 3800 (Australia)
2014-12-20
We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the New York Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.
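The functional form described above is simple enough to sketch directly: the K-correction is a quadratic in one suitably chosen observed colour, with coefficients tabulated per filter and redshift bin. The coefficients below are placeholders for illustration, not values from the published parameter tables.

```python
# Illustrative sketch of the paper's functional form:
#   K(c) = a + b*c + d*c**2  for one observed colour c,
# with (a, b, d) looked up per filter and redshift bin.

def k_correction(color, coeffs):
    """Quadratic K-correction in a single observed colour."""
    a, b, d = coeffs
    return a + b * color + d * color ** 2

# Hypothetical coefficients for some filter/redshift bin (illustrative only)
coeffs_r_z02 = (0.05, 0.30, 0.04)

g_minus_r = 0.8                       # observed colour of a galaxy
K = k_correction(g_minus_r, coeffs_r_z02)
# The absolute magnitude then follows from M = m - DM(z) - K
print(f"K-correction: {K:.3f} mag")
```

The appeal of this form is that the random error in M can be propagated analytically from the photometric and redshift errors through the quadratic, which is what allows the authors to quote reliable per-galaxy uncertainties.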
Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen
2017-03-01
Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of said system as new lots have to wait until the previous lot is measured. One solution is using a less dense overlay sampling scheme and employing computationally up-sampled data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system shown in Fig. 1 that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.
Dral, Pavlo O.; Owens, Alec; Yurchenko, Sergei N.; Thiel, Walter
2017-06-01
We present an efficient approach for generating highly accurate molecular potential energy surfaces (PESs) using self-correcting, kernel ridge regression (KRR) based machine learning (ML). We introduce structure-based sampling to automatically assign nuclear configurations from a pre-defined grid to the training and prediction sets, respectively. Accurate high-level ab initio energies are required only for the points in the training set, while the energies for the remaining points are provided by the ML model with negligible computational cost. The proposed sampling procedure is shown to be superior to random sampling and also eliminates the need for training several ML models. Self-correcting machine learning has been implemented such that each additional layer corrects errors from the previous layer. The performance of our approach is demonstrated in a case study on a published high-level ab initio PES of methyl chloride with 44 819 points. The ML model is trained on sets of different sizes and then used to predict the energies for tens of thousands of nuclear configurations within seconds. The resulting datasets are utilized in variational calculations of the vibrational energy levels of CH3Cl. By using both structure-based sampling and self-correction, the size of the training set can be kept small (e.g., 10% of the points) without any significant loss of accuracy. In ab initio rovibrational spectroscopy, it is thus possible to reduce the number of computationally costly electronic structure calculations through structure-based sampling and self-correcting KRR-based machine learning by up to 90%.
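As a rough illustration of the self-correcting idea (not the authors' actual molecular descriptors, kernels or structure-based sampling), the sketch below fits a toy one-dimensional "PES" with a first KRR layer and then trains a second KRR layer on the residual errors of the first:

```python
import numpy as np

def krr_fit(X, y, gamma, alpha):
    """Kernel ridge regression with an RBF kernel: solve (K + alpha*I) w = y."""
    K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)
    return np.linalg.solve(K + alpha * np.eye(len(X)), y)

def krr_predict(X_train, w, X_new, gamma):
    K = np.exp(-gamma * (X_new[:, None] - X_train[None, :]) ** 2)
    return K @ w

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, 200)
y = np.sin(X) + 0.1 * X**2            # toy 1-D "potential energy surface"

# Layer 1 fits the energies; layer 2 fits the residual errors of layer 1
w1 = krr_fit(X, y, gamma=0.5, alpha=1e-6)
resid = y - krr_predict(X, w1, X, gamma=0.5)
w2 = krr_fit(X, resid, gamma=2.0, alpha=1e-6)

# Predictions for new geometries: base model plus its correction layer
X_new = rng.uniform(-3, 3, 100)
y_true = np.sin(X_new) + 0.1 * X_new**2
y_pred = krr_predict(X, w1, X_new, 0.5) + krr_predict(X, w2, X_new, 2.0)
print("test MAE:", np.mean(np.abs(y_pred - y_true)))
```

Each additional layer sees only the errors left by the previous one, so layers can use progressively sharper kernels without destabilizing the base fit; the real application replaces the scalar coordinate with a nuclear-configuration descriptor and the toy function with ab initio energies.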
Yamamoto, Katsuyuki; Niwayama, Masatsugu; Lin, Ling; Shiga, Toshikazu; Kudo, Nobuki; Takahashi, Makoto
1998-01-01
Although the inhomogeneity of tissue structure affects the sensitivity of tissue oxygenation measurement by reflectance near-infrared spectroscopy, few analyses of this effect have been reported. In this study, the influence of a subcutaneous fat layer on muscle oxygenation measurement was investigated by Monte Carlo simulation and experimental studies. In the experiments, measurement sensitivity was examined by measuring the falling rate of oxygenation in occlusion tests on the forearm using a tissue oxygen monitor. The fat layer thickness was measured by ultrasonography. Results of the simulation and occlusion tests clearly showed that the presence of a fat layer greatly decreases the measurement sensitivity and increases the light intensity at the detector. The correction factors of sensitivity were obtained from this relationship and were successfully validated by experiments on 12 subjects whose fat layer thickness ranged from 3.5 to 8 mm.
Bing, Chenchen; Staruch, Robert M; Tillander, Matti; Köhler, Max O; Mougenot, Charles; Ylihautala, Mika; Laetsch, Theodore W; Chopra, Rajiv
2016-09-01
There is growing interest in performing hyperthermia treatments with clinical magnetic resonance imaging-guided high-intensity focused ultrasound (MR-HIFU) therapy systems designed for tissue ablation. During hyperthermia treatment, however, due to the narrow therapeutic window (41-45 °C), careful evaluation of the accuracy of proton resonance frequency (PRF) shift MR thermometry for these types of exposures is required. The purpose of this study was to evaluate the accuracy of MR thermometry using a clinical MR-HIFU system equipped with a hyperthermia treatment algorithm. Mild heating was performed in a tissue-mimicking phantom with implanted temperature sensors using the clinical MR-HIFU system. The influence of image-acquisition settings and post-acquisition correction algorithms on the accuracy of temperature measurements was investigated. The ability to achieve uniform heating for up to 40 min was evaluated in rabbit experiments. Automatic centre-frequency adjustments prior to image acquisition corrected image shifts on the order of 0.1 mm/min. Zero- and first-order phase variations were observed over time, supporting the use of a combined drift correction algorithm. The temperature accuracy achieved using both centre-frequency adjustment and the combined drift correction algorithm was 0.57 ± 0.58 °C in the heated region and 0.54 ± 0.42 °C in the unheated region. Accurate temperature monitoring of hyperthermia exposures using PRF shift MR thermometry is possible through careful implementation of image-acquisition settings and drift correction algorithms. For the evaluated clinical MR-HIFU system, centre-frequency adjustment eliminated image shifts, and a combined drift correction algorithm achieved temperature measurements with an acceptable accuracy for monitoring and controlling hyperthermia exposures.
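For context, PRF-shift thermometry converts the phase difference between successive images into a temperature change via the standard relation ΔT = Δφ / (2π α γ B₀ TE). The sketch below uses typical textbook values (α ≈ -0.01 ppm/°C), not parameters of the evaluated clinical system:

```python
import math

# Standard PRF-shift temperature calculation (textbook formula; parameter
# values below are typical, not those of the cited clinical MR-HIFU system).
GYROMAGNETIC = 42.577e6       # proton gyromagnetic ratio, Hz/T
PRF_COEFF = -0.01e-6          # PRF thermal coefficient alpha, ~ -0.01 ppm/degC

def delta_temperature(delta_phase_rad, b0_tesla, te_seconds):
    """Temperature change from the inter-image phase difference."""
    return delta_phase_rad / (2 * math.pi * PRF_COEFF * GYROMAGNETIC
                              * b0_tesla * te_seconds)

# Example: 3 T scanner, TE = 20 ms, observed phase change of -0.32 rad
dT = delta_temperature(-0.32, 3.0, 0.020)
print(f"temperature rise: {dT:.1f} degC")
```

The drift corrections discussed in the abstract amount to subtracting a fitted zero- or first-order spatial phase term (estimated from unheated reference regions) from Δφ before applying this conversion.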
Shuai, P; Zhang, Y H; Litvinov, Yu A; Wang, M; Tu, X L; Blaum, K; Zhou, X H; Yuan, Y J; Audi, G; Yan, X L; Chen, X C; Xu, X; Zhang, W; Sun, B H; Yamaguchi, T; Chen, R J; Fu, C Y; Ge, Z; Huang, W J; Liu, D W; Xing, Y M; Zeng, Q
2014-01-01
Isochronous mass spectrometry (IMS) in storage rings is a successful technique for accurate mass measurements of short-lived nuclides with a relative precision of about $10^{-5}-10^{-7}$. Instabilities of the magnetic fields in storage rings are one of the major contributions limiting the achievable mass resolving power, which is directly related to the precision of the obtained mass values. A new data analysis method is proposed allowing one to minimise the effect of such instabilities. The masses of the $^{41}$Ti, $^{43}$V, $^{47}$Mn, $^{49}$Fe, $^{53}$Ni and $^{55}$Cu nuclides previously measured at the CSRe were re-determined with this method. An improvement of the mass precision by a factor of $\sim 1.7$ has been achieved for $^{41}$Ti and $^{43}$V. The method can be applied to any isochronous mass experiment irrespective of the accelerator facility. Furthermore, the method can be used as an on-line tool for checking the isochronous conditions of the storage ring.
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of the SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and
Khalifé, Maya; Fernandez, Brice; Jaubert, Olivier; Soussan, Michael; Brulon, Vincent; Buvat, Irène; Comtat, Claude
2017-10-01
In brain PET/MR applications, accurate attenuation maps are required for accurate PET image quantification. An implemented attenuation correction (AC) method for brain imaging is the single-atlas approach, which estimates an AC map from an averaged CT template. As an alternative, we propose to use a zero echo time (ZTE) pulse sequence to segment bone, air and soft tissue. A linear relationship between histogram-normalized ZTE intensity and measured CT density in Hounsfield units (HU) in bone was established using a CT-MR database of 16 patients. Continuous AC maps were computed from the segmented ZTE by setting a fixed linear attenuation coefficient (LAC) for air and soft tissue and by using the linear relationship to generate continuous μ values for bone. Additionally, for the purpose of comparison, four other AC maps were generated: a ZTE-derived AC map with a fixed LAC for bone, an AC map based on the single-atlas approach as provided by the PET/MR manufacturer, a soft-tissue-only AC map and, finally, the CT-derived attenuation map used as the gold standard (CTAC). All these AC maps were used with different levels of smoothing for PET image reconstruction with and without time-of-flight (TOF). The subject-specific AC map generated by combining ZTE-based segmentation and linear scaling of the normalized ZTE signal into HU was found to be a good substitute for the measured CTAC map in brain PET/MR when used with a Gaussian smoothing kernel of 4 mm, corresponding to the PET scanner's intrinsic resolution. As expected, TOF reduces AC error regardless of the AC method. The continuous ZTE-AC performed better than the other alternative MR-derived AC methods, reducing the quantification error between the MRAC-corrected PET image and the reference CTAC-corrected PET image.
2013-01-01
Background Population stratification is a systematic difference in allele frequencies between subpopulations. This can lead to spurious association findings in case–control genome-wide association studies (GWASs) used to identify single nucleotide polymorphisms (SNPs) associated with disease-linked phenotypes. Methods such as self-declared ancestry, ancestry informative markers, genomic control, structured association, and principal component analysis are used to assess and correct for population stratification, but each has limitations. We provide an alternative technique to address population stratification. Results We propose a novel machine learning method, ETHNOPRED, which uses the genotype and ethnicity data from the HapMap project to learn ensembles of disjoint decision trees capable of accurately predicting an individual's continental and sub-continental ancestry. To predict an individual's continental ancestry, ETHNOPRED produced an ensemble of 3 decision trees involving a total of 10 SNPs, with 10-fold cross-validation accuracy of 100% using the HapMap II dataset. We extended this model to 29 disjoint decision trees over 149 SNPs, and showed that this ensemble has an accuracy of ≥ 99.9%, even if some of those 149 SNP values are missing. On an independent dataset, predominantly of Caucasian origin, our continental classifier showed 96.8% accuracy and improved genomic control's λ from 1.22 to 1.11. We next used the HapMap III dataset to learn classifiers to distinguish European subpopulations (North-Western vs. Southern), East Asian subpopulations (Chinese vs. Japanese), African subpopulations (Eastern vs. Western), North American subpopulations (European vs. Chinese vs. African vs. Mexican vs. Indian), and Kenyan subpopulations (Luhya vs. Maasai). In these cases, ETHNOPRED produced ensembles of 3, 39, 21, 11, and 25 disjoint decision trees, respectively, involving 31, 502, 526, 242 and 271 SNPs, with 10-fold cross-validation accuracy of
Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin
2016-01-01
[This corrects the article DOI: 10.1371/journal.ppat.1005000.] [This corrects the article DOI: 10.1371/journal.ppat.1005740.] [This corrects the article DOI: 10.1371/journal.ppat.1005679.]
Nils T Hagen
Full Text Available Authorship credit for multi-authored scientific publications is routinely allocated either by issuing full publication credit repeatedly to all coauthors, or by dividing one credit equally among all coauthors. The ensuing inflationary and equalizing biases distort derived bibliometric measures of merit by systematically benefiting secondary authors at the expense of primary authors. Here I show how harmonic counting, which allocates credit according to authorship rank and the number of coauthors, provides simultaneous source-level correction for both biases as well as accommodating further decoding of byline information. I also demonstrate large and erratic effects of counting bias on the original h-index, and show how the harmonic version of the h-index provides unbiased bibliometric ranking of scientific merit while retaining the original's essential simplicity, transparency and intended fairness. Harmonic decoding of byline information resolves the conundrum of authorship credit allocation by providing a simple recipe for source-level correction of inflationary and equalizing bias. Harmonic counting could also offer unrivalled accuracy in automated assessments of scientific productivity, impact and achievement.
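As a sketch of the counting scheme described above: harmonic counting gives the i-th of N coauthors a share (1/i)/(1 + 1/2 + … + 1/N) of one publication credit, so the shares always sum to a single credit. The function name below is mine, not from the paper:

```python
def harmonic_credit(n_authors):
    """Harmonic allocation: the i-th of N coauthors receives
    (1/i) / (1/1 + 1/2 + ... + 1/N) of one publication credit."""
    h = sum(1.0 / k for k in range(1, n_authors + 1))
    return [(1.0 / i) / h for i in range(1, n_authors + 1)]

# Credits sum to exactly one publication, removing both the
# inflationary bias of full counting (N credits per paper) and the
# equalizing bias of fractional counting (1/N to every author).
print(harmonic_credit(3))  # first author receives 6/11 of the credit
```

For three coauthors the shares are 6/11, 3/11, and 2/11, so the first author receives three times the credit of the last, in line with authorship rank.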
2002-01-01
Tile Calorimeter modules stored at CERN. The larger modules belong to the Barrel, whereas the smaller ones are for the two Extended Barrels. (The article was about the completion of the 64 modules for one of the latter.) The photo on the first page of the Bulletin n°26/2002, from 24 July 2002, illustrating the article «The ATLAS Tile Calorimeter gets into shape» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.
2002-01-01
The photo on the second page of the Bulletin n°48/2002, from 25 November 2002, illustrating the article «Spanish Visit to CERN» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption. The Spanish delegation, accompanied by Spanish scientists at CERN, also visited the LHC superconducting magnet test hall (photo). From left to right: Felix Rodriguez Mateos of CERN LHC Division, Josep Piqué i Camps, Spanish Minister of Science and Technology, César Dopazo, Director-General of CIEMAT (Spanish Research Centre for Energy, Environment and Technology), Juan Antonio Rubio, ETT Division Leader at CERN, Manuel Aguilar-Benitez, Spanish Delegate to Council, Manuel Delfino, IT Division Leader at CERN, and Gonzalo León, Secretary-General of Scientific Policy to the Minister.
2012-01-01
Full Text Available Regarding Gorelik, G., & Shackelford, T. K. (2011). Human sexual conflict from molecules to culture. Evolutionary Psychology, 9, 564–587: The authors wish to correct an omission in citation to the existing literature. In the final paragraph on p. 570, we neglected to cite Burch and Gallup (2006) [Burch, R. L., & Gallup, G. G., Jr. (2006). The psychobiology of human semen. In S. M. Platek & T. K. Shackelford (Eds.), Female infidelity and paternal uncertainty (pp. 141–172). New York: Cambridge University Press.]. Burch and Gallup (2006) reviewed the relevant literature on FSH and LH discussed in this paragraph, and should have been cited accordingly. In addition, Burch and Gallup (2006) should have been cited as the originators of the hypothesis regarding the role of FSH and LH in the semen of rapists. The authors apologize for this oversight.
2014-01-01
Full Text Available Regarding Tagler, M. J., and Jeffers, H. M. (2013). Sex differences in attitudes toward partner infidelity. Evolutionary Psychology, 11, 821–832: The authors wish to correct values in the originally published manuscript. Specifically, incorrect 95% confidence intervals around the Cohen's d values were reported on page 826 of the manuscript, where we reported the within-sex simple effects for the significant Participant Sex × Infidelity Type interaction (first paragraph) and for attitudes toward partner infidelity (second paragraph). Corrected values are presented in bold below. The authors would like to thank Dr. Bernard Beins at Ithaca College for bringing these errors to our attention. Men rated sexual infidelity significantly more distressing (M = 4.69, SD = 0.74) than they rated emotional infidelity (M = 4.32, SD = 0.92), F(1, 322) = 23.96, p < .001, d = 0.44, 95% CI [0.23, 0.65], but there was little difference between women's ratings of sexual (M = 4.80, SD = 0.48) and emotional infidelity (M = 4.76, SD = 0.57), F(1, 322) = 0.48, p = .29, d = 0.08, 95% CI [−0.10, 0.26]. As expected, men rated sexual infidelity (M = 1.44, SD = 0.70) more negatively than they rated emotional infidelity (M = 2.66, SD = 1.37), F(1, 322) = 120.00, p < .001, d = 1.12, 95% CI [0.85, 1.39]. Although women also rated sexual infidelity (M = 1.40, SD = 0.62) more negatively than they rated emotional infidelity (M = 2.09, SD = 1.10), this difference was not as large, and thus in the evolutionary-theory-supportive direction, F(1, 322) = 72.03, p < .001, d = 0.77, 95% CI [0.60, 0.94].
2015-10-01
In the article by Quintavalle et al (Quintavalle C, Anselmi CV, De Micco F, Roscigno G, Visconti G, Golia B, Focaccio A, Ricciardelli B, Perna E, Papa L, Donnarumma E, Condorelli G, Briguori C. Neutrophil gelatinase–associated lipocalin and contrast-induced acute kidney injury. Circ Cardiovasc Interv. 2015;8:e002673. DOI: 10.1161/CIRCINTERVENTIONS.115.002673.), which published online September 2, 2015, and appears in the September 2015 issue of the journal, a correction was needed. On page 1, the institutional affiliation for Elvira Donnarumma, PhD, “SDN Foundation,” has been changed to read, “IRCCS SDN, Naples, Italy.” The institutional affiliation for Laura Papa, PhD, “Institute for Endocrinology and Experimental Oncology, National Research Council, Naples, Italy,” has been changed to read, “Institute of Genetics and Biomedical Research, Milan Unit, Milan, Italy” and “Humanitas Research Hospital, Rozzano, Italy.” The authors regret this error.
Fang, Changming; Li, Wun-Fan; Koster, Rik S; Klimeš, Jiří; van Blaaderen, Alfons; van Huis, Marijn A
2015-01-07
Knowledge about the intrinsic electronic properties of water is imperative for understanding the behaviour of aqueous solutions that are used throughout biology, chemistry, physics, and industry. The calculation of the electronic band gap of liquids is challenging, because the most accurate ab initio approaches can be applied only to small numbers of atoms, while large numbers of atoms are required to obtain configurations representative of a liquid. Here we show that a high-accuracy value for the electronic band gap of water can be obtained by combining beyond-DFT methods and statistical time-averaging. Liquid water is simulated at 300 K using a plane-wave density functional theory molecular dynamics (PW-DFT-MD) simulation and a van der Waals density functional (optB88-vdW). After applying a self-consistent GW correction, the band gap of liquid water at 300 K is calculated as 7.3 eV, in good agreement with recent experimental observations in the literature (6.9 eV). For simulations of phase transformations and chemical reactions in water or aqueous solutions in which an accurate description of the electronic structure is required, we suggest using these advanced GW corrections in combination with the statistical analysis of quantum mechanical MD simulations.
DiLabio, Gino A; Torres, Edmanuel
2013-01-01
We recently showed that dispersion-correcting potentials (DCPs), atom-centered Gaussian-type functions developed for use with B3LYP (J. Phys. Chem. Lett. 2012, 3, 1738-1744), greatly improved the ability of the underlying functional to predict non-covalent interactions. However, the application of B3LYP-DCP to the β-scission of the cumyloxyl radical led to a calculated barrier height that was over-estimated by ca. 8 kcal/mol. We show in the present work that the source of this error is the previously developed carbon-atom DCPs, which erroneously alter the electron density in the C-C covalent-bonding region. In this work, we present a new C-DCP with a form that was expected to influence the electron density farther from the nucleus. Tests of the new C-DCP, together with previously published H-, N-, and O-DCPs, at the B3LYP-DCP/6-31+G(2d,2p) level on the S66, S22B, HSG-A, and HC12 databases of non-covalently interacting dimers showed that it is one of the most accurate methods available for treating intermolecular i...
Spencer, Robert J; Axelrod, Bradley N; Drag, Lauren L; Waldron-Perrine, Brigid; Pangilinan, Percival H; Bieliauskas, Linas A
2013-01-01
Reliable Digit Span (RDS) is a measure of effort derived from the Digit Span subtest of the Wechsler intelligence scales. Some authors have suggested that the age-corrected scaled score provides a more accurate measure of effort than RDS. This study examined the relative diagnostic accuracy of the traditional RDS, an extended RDS including the new Sequencing task from the Wechsler Adult Intelligence Scale-IV, and the age-corrected scaled score, relative to performance validity as determined by the Test of Memory Malingering. Data were collected from 138 Veterans seen in a traumatic brain injury clinic. The traditional RDS (≤ 7), revised RDS (≤ 11), and Digit Span age-corrected scaled score ( ≤ 6) had respective sensitivities of 39%, 39%, and 33%, and respective specificities of 82%, 89%, and 91%. Of these indices, revised RDS and the Digit Span age-corrected scaled score provide the most accurate measure of performance validity among the three measures.
Yongshuai Jiang
Full Text Available Traditional permutation (TradPerm) tests are usually considered the gold standard for multiple testing corrections. However, they can be difficult to complete for the meta-analyses of genetic association studies based on multiple single nucleotide polymorphism loci, as they depend on individual-level genotype and phenotype data to perform random shuffles, which are not easy to obtain. Most meta-analyses have therefore been performed using summary statistics from previously published studies. To carry out a permutation using only genotype counts without changing the size of the TradPerm P-value, we developed a Monte Carlo permutation (MCPerm) method. First, for each study included in the meta-analysis, we used a two-step hypergeometric distribution to generate a random number of genotypes in cases and controls. We then carried out a meta-analysis using these random genotype data. Finally, we obtained the corrected permutation P-value of the meta-analysis by repeating the entire process N times. We used five real datasets and five simulation datasets to evaluate the MCPerm method, and our results showed the following: (1) MCPerm requires only the summary statistics of the genotype, without the need for individual-level data; (2) genotype counts generated by our two-step hypergeometric distributions had the same distributions as genotype counts generated by shuffling; (3) MCPerm had almost exactly the same permutation P-values as TradPerm (r = 0.999, P < 2.2e-16); (4) the calculation speed of MCPerm is much faster than that of TradPerm. In summary, MCPerm appears to be a viable alternative to TradPerm, and we have developed it as a freely available R package at CRAN: http://cran.r-project.org/web/packages/MCPerm/index.html.
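The two-step hypergeometric draw described above can be sketched for a single study with a biallelic SNP (three genotype categories). This is a minimal illustration under my own naming, not code from the MCPerm package: given the pooled genotype totals and the number of cases, it partitions the totals into random case counts exactly as a case/control label shuffle would.

```python
import numpy as np

def sample_case_genotypes(totals, n_cases, rng):
    """Two-step hypergeometric draw: split pooled genotype totals
    (AA, Aa, aa) into random case counts, as if case/control labels
    had been shuffled.  Control counts follow by subtraction."""
    n_aa, n_ab, n_bb = totals
    # Step 1: of the n_cases drawn from the pool, how many are AA?
    x_aa = rng.hypergeometric(n_aa, n_ab + n_bb, n_cases)
    # Step 2: of the remaining case slots, how many are Aa?
    x_ab = rng.hypergeometric(n_ab, n_bb, n_cases - x_aa)
    return x_aa, x_ab, n_cases - x_aa - x_ab

rng = np.random.default_rng(0)
totals = (50, 120, 30)          # pooled genotype counts, cases + controls
cases = sample_case_genotypes(totals, 80, rng)
controls = tuple(t - c for t, c in zip(totals, cases))
```

Repeating the draw for every study, recomputing the meta-analysis statistic, and counting how often it exceeds the observed value would then give the permutation P-value, using genotype counts only.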
Ole Roemer and the Light-Time Effect
Sterken, C.
2005-07-01
We discuss the observational background of Roemer's remarkable hypothesis that the velocity of light is finite. The outcome of the joint efforts of a highly-skilled instrumentalist and a team of surveyors driven to produce accurate maps and technically supported by the revolutionary advancements in horology, illustrates the synergy between the accuracy of the O and the C terms in the O-C concept which led to one of the most fundamental discoveries of the Renaissance.
Sandra Jakob
2017-01-01
Full Text Available Drone-borne hyperspectral imaging is a new and promising technique for the fast and precise acquisition and delivery of high-resolution hyperspectral data to a large variety of end-users. Drones can bridge the scale gap between field and airborne remote sensing, providing high-resolution and multi-temporal data. They are easy to use, flexible, and deliver data at cm-scale resolution. So far, however, drone-borne imagery has been used successfully almost solely in precision agriculture and photogrammetry, with current practice relying mainly on structure-from-motion photogrammetry, aerial photography, and agricultural monitoring. Recently, a few hyperspectral sensors became available for drones, but complex geometric and radiometric effects complicate their use for geology-related studies. Using two examples, we first show that precise corrections are required for any geological mapping. We then present a processing toolbox for frame-based hyperspectral imaging systems, adapted for the complex correction of drone-borne hyperspectral imagery. The toolbox performs sensor- and platform-specific geometric distortion corrections. Furthermore, a topographic correction step is implemented to correct for rough terrain surfaces; we recommend the c-factor algorithm for geological applications. To our knowledge, we demonstrate for the first time the applicability of the corrected dataset for lithological mapping and mineral exploration.
Trinquier, Anne; Touboul, Mathieu; Walker, Richard J
2016-02-02
Determination of the (182)W/(184)W ratio to a precision of ± 5 ppm (2σ) is desirable for constraining the timing of core formation and other early planetary differentiation processes. However, WO3(-) analysis by negative thermal ionization mass spectrometry normally results in a residual correlation between the instrumental-mass-fractionation-corrected (182)W/(184)W and (183)W/(184)W ratios that is attributed to mass-dependent variability of O isotopes over the course of an analysis and between different analyses. A second-order correction using the (183)W/(184)W ratio relies on the assumption that this ratio is constant in nature. This may prove invalid, as has already been realized for other isotope systems. The present study utilizes simultaneous monitoring of the (18)O/(16)O and W isotope ratios to correct oxide interferences on a per-integration basis and thus avoid the need for a double normalization of W isotopes. After normalization of W isotope ratios to a pair of W isotopes, following the exponential law, no residual W-O isotope correlation is observed. However, there is a nonideal mass bias residual correlation between (182)W/(i)W and (183)W/(i)W with time. Without double normalization of W isotopes and on the basis of three or four duplicate analyses, the external reproducibility per session of (182)W/(184)W and (183)W/(184)W normalized to (186)W/(183)W is 5-6 ppm (2σ, 1-3 μg loads). The combined uncertainty per session is less than 4 ppm for (183)W/(184)W and less than 6 ppm for (182)W/(184)W (2σm) for loads between 3000 and 50 ng.
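The exponential-law normalization referred to above can be sketched as follows: the mass-bias exponent β is inferred from a chosen normalizing isotope pair and then applied to the ratio of interest. The interface, and the value 0.92767 used below as the reference (186)W/(184)W ratio, are illustrative assumptions drawn from common literature practice, not from this paper (which normalizes to (186)W/(183)W).

```python
import math

# Approximate atomic masses (u) of the tungsten isotopes.
MASS = {182: 181.948204, 183: 182.950223, 184: 183.950931, 186: 185.954364}

def exp_law_correct(r_meas, i, j, r_norm_meas, a, b, r_norm_true):
    """Exponential-law mass-fractionation correction: infer beta from
    the normalizing pair a/b, then apply it to the measured ratio i/j:
        r_corr = r_meas * (m_i / m_j) ** beta,
        beta   = ln(r_norm_true / r_norm_meas) / ln(m_a / m_b)."""
    beta = math.log(r_norm_true / r_norm_meas) / math.log(MASS[a] / MASS[b])
    return r_meas * (MASS[i] / MASS[j]) ** beta

# Correct a measured 182W/184W ratio using 186W/184W as the
# normalizing pair (reference value assumed, see lead-in).
corrected = exp_law_correct(0.8651, 182, 184, 0.92790, 186, 184, 0.92767)
```

When the measured normalizing ratio equals the reference value, β = 0 and the ratio passes through unchanged; any residual correlation between the corrected ratios, as discussed above, signals a departure from this ideal mass-dependent behaviour.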
Rocklin, Gabriel J. [Department of Pharmaceutical Chemistry, University of California San Francisco, 1700 4th St., San Francisco, California 94143-2550, USA and Biophysics Graduate Program, University of California San Francisco, 1700 4th St., San Francisco, California 94143-2550 (United States); Mobley, David L. [Departments of Pharmaceutical Sciences and Chemistry, University of California Irvine, 147 Bison Modular, Building 515, Irvine, California 92697-0001, USA and Department of Chemistry, University of New Orleans, 2000 Lakeshore Drive, New Orleans, Louisiana 70148 (United States); Dill, Ken A. [Laufer Center for Physical and Quantitative Biology, 5252 Stony Brook University, Stony Brook, New York 11794-0001 (United States); Hünenberger, Philippe H., E-mail: phil@igc.phys.chem.ethz.ch [Laboratory of Physical Chemistry, Swiss Federal Institute of Technology, ETH, 8093 Zürich (Switzerland)
2013-11-14
The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol⁻¹) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non
Chimot, J.; Vlemmix, T.; Veefkind, J. P.; de Haan, J. F.; Levelt, P. F.
2016-02-01
The Ozone Monitoring Instrument (OMI) has provided daily global measurements of tropospheric NO2 for more than a decade. Numerous studies have drawn attention to the complexities related to measurements of tropospheric NO2 in the presence of aerosols. Fine particles affect the OMI spectral measurements and the length of the average light path followed by the photons. However, they are not explicitly taken into account in the current operational OMI tropospheric NO2 retrieval chain (DOMINO - Derivation of OMI tropospheric NO2) product. Instead, the operational OMI O2 - O2 cloud retrieval algorithm is applied both to cloudy and to cloud-free scenes (i.e. clear sky) dominated by the presence of aerosols. This paper describes in detail the complex interplay between the spectral effects of aerosols in the satellite observation and the associated response of the OMI O2 - O2 cloud retrieval algorithm. Then, it evaluates the impact on the accuracy of the tropospheric NO2 retrievals through the computed Air Mass Factor (AMF) with a focus on cloud-free scenes. For that purpose, collocated OMI NO2 and MODIS (Moderate Resolution Imaging Spectroradiometer) Aqua aerosol products are analysed over the strongly industrialized East China area. In addition, aerosol effects on the tropospheric NO2 AMF and the retrieval of OMI cloud parameters are simulated. Both the observation-based and the simulation-based approach demonstrate that the retrieved cloud fraction increases with increasing Aerosol Optical Thickness (AOT), but the magnitude of this increase depends on the aerosol properties and surface albedo. This increase is induced by the additional scattering effects of aerosols which enhance the scene brightness. The decreasing effective cloud pressure with increasing AOT primarily represents the shielding effects of the O2 - O2 column located below the aerosol layers. The study cases show that the aerosol correction based on the implemented OMI cloud model results in biases
The effect of the motion of the Sun on the light-time in interplanetary relativity experiments
Bertotti, B [Dipartimento di Fisica Nucleare e Teorica, Universita di Pavia, via U Bassi 6, 27100 Pavia (Italy); Ashby, N [Department of Physics, University of Colorado, Boulder, CO 80309-0390 (United States); Iess, L [Dipartimento di Ingegneria Aerospaziale ed Astronautica, Universita La Sapienza, via Eudossiana 18, 00184 Rome (Italy)], E-mail: bb.142857@pv.infn.it, E-mail: ashby@boulder.nist.gov, E-mail: luciano.iess@uniroma1.it
2008-02-21
In 2002, a measurement of the effect of solar gravity upon the phase of coherent microwave beams passing near the Sun was carried out by the Cassini mission, allowing a very accurate measurement of the PPN parameter γ. The data have been analysed with NASA's Orbit Determination Program (ODP) in the Barycentric Celestial Reference System, in which the Sun moves around the centre of mass of the solar system with a velocity v⊙ of about 15 m s⁻¹; the question arises: what correction does this imply for the predicted phase shift? After a review of the way the ODP works, we set the problem in the framework of Lorentz (and Galilean) transformations and evaluate the correction; it is several orders of magnitude below the experimental accuracy. We also discuss a recent paper (Kopeikin et al 2007 Phys. Lett. A 367 276), which claims wrong and much larger corrections, and clarify the reasons for the discrepancy.
Gillespie, Thomas W; Frankenberg, Elizabeth; Chum, Kai Fung; Thomas, Duncan
2014-01-01
On 26 December 2004, a magnitude 9.2 earthquake off the west coast of northern Sumatra, Indonesia, resulted in 160,000 Indonesians killed. We examine Defense Meteorological Satellite Program-Operational Linescan System (DMSP-OLS) nighttime light imagery brightness values for 307 communities in the Study of the Tsunami Aftermath and Recovery (STAR), a household survey in Sumatra from 2004 to 2008. We examined the night light time series for relationships between annual brightness and the extent of damage and economic metrics collected from STAR households and aggregated to the community level. There were significant changes in brightness values from 2004 to 2008, with a significant drop in 2005 due to the tsunami and a return to pre-tsunami nighttime light values in 2006 for all damage zones. There were significant relationships between nighttime imagery brightness and per capita expenditures, and spending on energy and on food. Results suggest that DMSP nighttime light imagery can be used to capture the impacts of and recovery from the tsunami and other natural disasters, and to estimate time-series economic metrics at the community level in developing countries.
ACE: accurate correction of errors using K-mer tries
Sheikhizadeh Anari, S.; Ridder, de D.
2015-01-01
The quality of high-throughput next-generation sequencing data significantly influences the performance and memory consumption of assembly and mapping algorithms. The most ubiquitous platform, Illumina, mainly suffers from substitution errors. We have developed a tool, ACE, based on K-mer tries to c
Centi-pixel accurate real-time inverse distortion correction
De Villiers, Johan P
2008-11-01
Full Text Available lens distortion model is that of Brown [1, 2] and Conrady [3]. The equation for Brown's model is:

x_u = x_d + (x_d − x_c)(K₁r² + K₂r⁴ + …) + (P₁(r² + 2(x_d − x_c)²) + 2P₂(x_d − x_c)(y_d − y_c))(1 + P₃r² + …)
y_u = y_d + (y_d − y_c)(K₁r² + K₂r⁴ + …) + (2P₁(x_d − x_c)(y_d − y_c) + P₂(r² + 2(y_d − y_c)²))(1 + P₃r² + …)    (1)

where: (x_u, y_u) = undistorted image point, (x_d, y_d) = distorted image point, (x_c, y_c) = centre of distortion, K_n = nth radial distortion coefficient, P_n = nth tangential distortion coefficient, and r² = (x_d − x_c)² + (y_d − y_c)².
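Brown's model as given above can be sketched in a few lines, truncating after K₂ and P₂ and taking the trailing (1 + P₃r² + …) factor as unity (i.e. P₃ = 0). Function and variable names are illustrative:

```python
def undistort_point(xd, yd, xc, yc, k1, k2, p1, p2):
    """Brown's distortion model, truncated after K2 (radial) and
    P2 (tangential); the (1 + P3*r^2 + ...) factor is taken as 1."""
    dx, dy = xd - xc, yd - yc
    r2 = dx * dx + dy * dy          # squared distance from distortion centre
    radial = k1 * r2 + k2 * r2 * r2
    xu = xd + dx * radial + (p1 * (r2 + 2 * dx * dx) + 2 * p2 * dx * dy)
    yu = yd + dy * radial + (2 * p1 * dx * dy + p2 * (r2 + 2 * dy * dy))
    return xu, yu

# With all coefficients zero the mapping is the identity; a positive
# K1 pushes points outward (correcting barrel distortion).
```

Inverting this mapping (distorted from undistorted) has no closed form in general, which is what motivates fitting a fast inverse model as the article's title suggests.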
Accurate ab initio spin densities
Boguslawski, Katharina; Legeza, Örs; Reiher, Markus
2012-01-01
We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CA...
The Near-contact Binary RZ Draconis with Two Possible Light-time Orbits
Yang, Y.-G.; Li, H.-L.; Dai, H.-F.; Zhang, L.-Y.
2010-12-01
We present new multicolor photometry for RZ Draconis, observed in 2009 at the Xinglong Station of the National Astronomical Observatories of China. Using the updated version of the Wilson-Devinney code, photometric-spectroscopic elements were deduced from the new photometric observations and published radial velocity data. The mass ratio and orbital inclination are q = 0.375 (±0.002) and i = 84.60° (±0.13°), respectively. The fill-out factor of the primary is f = 98.3%, implying that RZ Dra is an Algol-like near-contact binary. Based on 683 light minimum times from 1907 to 2009, the orbital period change was investigated in detail. From the O − C curve, it is found that two quasi-sinusoidal variations may exist (i.e., P3 = 75.62 (±2.20) yr and P4 = 27.59 (±0.10) yr), which likely result from light-time effects due to the presence of two additional bodies. In a coplanar orbit with the binary system, the third and fourth bodies may be low-mass dwarfs (i.e., M3 = 0.175 M_sun and M4 = 0.074 M_sun). If this is true, RZ Dra may be a quadruple star. The additional bodies could extract angular momentum from the binary system, which may cause the orbit to shrink. With the orbit shrinking, the primary may fill its Roche lobe and RZ Dra will evolve into a contact configuration.
Speaking Fluently And Accurately
JosephDeVeto
2004-01-01
Even after many years of study, students make frequent mistakes in English. In addition, many students still need a long time to think of what they want to say. For some reason, in spite of all the studying, students are still not quite fluent. When I teach, I use one technique that helps students speak not only more accurately, but also more fluently. That technique is dictation.
Accurate backgrounds to Higgs production at the LHC
Kauer, N
2007-01-01
Corrections of 10-30% for backgrounds to the H → WW → l⁺l⁻ + missing-p_T search in vector boson and gluon fusion at the LHC are reviewed to make the case for precise and accurate theoretical background predictions.
BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE ...
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33% to 63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. The tests were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test”, and the “Waterfowl Physiologically Based Extraction Test”. All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%).
Highly Accurate Measurement of the Electron Orbital Magnetic Moment
Awobode, A M
2015-01-01
We propose to accurately determine the orbital magnetic moment of the electron by measuring, in a magneto-optical or ion trap, the ratio of the Landé g-factors in two atomic states. From the measurement of (g_J1/g_J2), the quantity A, which depends on the corrections to the electron g-factors, can be extracted if the states are LS coupled. Given that highly accurate values of the correction to the spin g-factor are currently available, accurate values of the correction to the orbital g-factor may also be determined. At present, (-1.8 ± 0.4) × 10⁻⁴ has been determined as a correction to the electron orbital g-factor, by using earlier measurements of the ratio g_J1/g_J2 made on the indium ²P₁/₂ and ²P₃/₂ states.
Groundwater recharge: Accurately representing evapotranspiration
Bugan, Richard DH
2011-09-01
Groundwater recharge is the basis for accurate estimation of groundwater resources, for determining the modes of water allocation and groundwater resource susceptibility to climate change. Accurate estimations of groundwater recharge with models...
Fast and accurate methods for phylogenomic analyses
Warnow Tandy
2011-10-01
Background: Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing in the specific algorithmic technique used and in the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results: We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions: Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.
Performance Evaluation of Blind Tropospheric Delay correction ...
Performance Evaluation of Blind Tropospheric Delay correction Models over Africa. ... consisting of surface meteorological models and global empirical models. ... GPT2w and UNB3M models with accurate International GNSS Service (IGS)- ...
Yang, Y.-G.; Li, H.-L.; Dai, H.-F.
2012-01-01
We present the CCD photometry of two Algol-type binaries, AL Gem and BM Mon, observed from 2008 November to 2011 January. With the updated Wilson-Devinney program, photometric solutions were deduced from their EA-type light curves. The mass ratios and fill-out factors of the primaries are found to be q_ph = 0.090(±0.005) and f_1 = 47.3%(±0.3%) for AL Gem, and q_ph = 0.275(±0.007) and f_1 = 55.4%(±0.5%) for BM Mon, respectively. By analyzing the O-C curves, we discovered that the periods of AL Gem and BM Mon change in a quasi-sinusoidal mode, which may possibly result from the light-time effect due to the presence of a third body. The periods, amplitudes, and eccentricities of the light-time orbits are 78.83(±1.17) yr, 0.0204(±0.0007) d, and 0.28(±0.02) for AL Gem, and 97.78(±2.67) yr, 0.0175(±0.0006) d, and 0.29(±0.02) for BM Mon, respectively. Assuming a coplanar orbit with the binary, the masses of the third bodies would be 0.29 M⊙ for AL Gem and 0.26 M⊙ for BM Mon. This kind of additional companion can extract angular momentum from the close binary orbit, and such processes may play an important role in multiple star evolution.
Accurate measurement of unsteady state fluid temperature
Jaremkiewicz, Magdalena
2017-03-01
In this paper, two accurate methods for determining the transient fluid temperature are presented. Measurements were conducted for boiling water, since its temperature is known. Initially the thermometers are at ambient temperature; they are then suddenly immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by this thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheathed thermocouple located at its center. The fluid temperature was determined from measurements taken at the axis of the solid cylindrical element (housing) using the inverse space-marching method. Measurements of the transient temperature of air flowing through a wind tunnel were also carried out with the same thermometers. The proposed measurement technique provides more accurate results than measurements using industrial thermometers combined with a simple temperature correction based on a first- or second-order inertia model. A comparison of the results demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurements of rapidly changing fluid temperature are possible thanks to the low-inertia thermometer and the fast space-marching method applied to solve the inverse heat conduction problem.
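The first-order inertia correction mentioned above has a simple closed form, T_fluid ≈ T_meas + τ·dT_meas/dt. The sketch below is a hedged illustration of that formula only (not the paper's inverse space-marching method); the time constant τ and all temperature values are assumed, synthetic numbers.

```python
import numpy as np

def correct_first_order(t, t_meas, tau):
    """Recover the fluid temperature from a lagged sensor reading,
    assuming a first-order sensor: T_fluid = T_meas + tau * dT_meas/dt."""
    dTdt = np.gradient(t_meas, t)   # numerical time derivative
    return t_meas + tau * dTdt

# Synthetic step from 20 degC ambient into 100 degC boiling water,
# measured by a sensor with an assumed time constant tau = 5 s.
tau = 5.0
t = np.linspace(0.0, 30.0, 3001)
t_meas = 100.0 + (20.0 - 100.0) * np.exp(-t / tau)
t_fluid = correct_first_order(t, t_meas, tau)   # approximately 100 throughout
```

With noisy real data the derivative would need smoothing, which is one motivation for the paper's more robust inverse method.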
NNLOPS accurate associated HW production
Astill, William; Re, Emanuele; Zanderighi, Giulia
2016-01-01
We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.
Jiang, Tian-Yu; Li, Li-Fang; Han, Zhan-Wen; Jiang, Deng-Kai
2010-04-01
The first complete charge-coupled device (CCD) light curves in the B and V passbands of a neglected contact binary system, CW Cassiopeiae (CW Cas), are presented. They were analyzed simultaneously by using the Wilson and Devinney (WD) code (1971, ApJ, 166, 605). The photometric solution indicates that CW Cas is a W-type W UMa system with a mass ratio of m2/m1 = 2.234, and that it is in a marginal contact state with a contact degree of ~6.5% and a relatively large temperature difference of ~327 K between its two components. Based on the minimum times collected from the literature, together with the new ones obtained in this study, the orbital period changes of CW Cas were investigated in detail. It was found that a periodic variation overlaps with a secular period decrease in its orbital period. The long-term period decrease, with a rate of dP/dt = -3.44 × 10⁻⁸ d yr⁻¹, can be interpreted either by mass transfer from the more-massive component to the less-massive one at a rate of dm2/dt = -3.6 × 10⁻⁸ M⊙ yr⁻¹, or by mass and angular-momentum losses through magnetic braking due to a magnetic stellar wind. A low-amplitude cyclic variation with a period of T = 63.7 yr might be caused by the light-time effect due to the presence of a third body.
王雪桃; 柏森; 李光俊; 蒋晓芹; 苏晨; 李衍龙; 朱智慧
2015-01-01
Objective: To study a CT-number correction method for kilovoltage cone-beam CT (kV-CBCT) images and improve their accuracy for dose calculation. Methods: Using the fan-beam planning CT as prior information, the CBCT images were rigidly registered to the planning CT images; a scatter-background estimate was obtained by subtracting the planning CT from the CBCT, the background was processed with a low-pass filter, and the filtered scatter background was subtracted from the raw CBCT images to obtain the corrected CBCT images. kV-CBCT images of a Catphan600 phantom and of four patients with pelvic malignancies, acquired with the linac-integrated CBCT system, were corrected in this way. Differences in CT numbers between CBCT and planning CT before and after correction were assessed with paired t-tests, and the corrected CBCT images were evaluated for image quality and dose-calculation accuracy. Results: The proposed method reduced the artifacts of the CBCT images significantly. Before correction, the mean CT numbers for air, fat, muscle, and femoral head differed from the planning CT by 232, 89, 29, and 66 HU, respectively; after correction the differences shrank to within 5 HU (P = 0.39, 0.66, 0.59, 1.00). Dose calculated on the corrected CBCT images agreed with the planning CT to within 2%. Conclusion: The CT numbers of the corrected CBCT images are similar to those of the planning CT, so the corrected images can be used for accurate dose calculation.
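The correction pipeline described above (subtract the registered planning CT, low-pass filter the residual, subtract the filtered scatter from the raw CBCT) can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the volumes are assumed already registered and resampled to a common grid, and the Gaussian filter width is a made-up parameter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_cbct(cbct, plan_ct, sigma=8.0):
    """Scatter-background correction of a registered CBCT slice."""
    scatter = cbct - plan_ct                      # step 1: residual background
    scatter_lp = gaussian_filter(scatter, sigma)  # step 2: keep low frequencies
    return cbct - scatter_lp                      # step 3: subtract smooth scatter

# Toy example: a planning-CT slice plus a smooth, cupping-like bias
plan_ct = np.zeros((64, 64))
plan_ct[20:40, 20:40] = 50.0                      # "muscle" insert, in HU
yy, xx = np.mgrid[0:64, 0:64]
bias = 200.0 * np.exp(-((xx - 32.0)**2 + (yy - 32.0)**2) / 800.0)
cbct = plan_ct + bias
corrected = correct_cbct(cbct, plan_ct)           # HU error greatly reduced
```

Only the slowly varying part of the residual is removed, so genuine anatomical detail present in the raw CBCT (and not in the planning CT) would survive the subtraction.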
Efficient and accurate fragmentation methods.
Pruitt, Spencer R; Bertoni, Colleen; Brorsen, Kurt R; Gordon, Mark S
2014-09-16
Conspectus Three novel fragmentation methods that are available in the electronic structure program GAMESS (general atomic and molecular electronic structure system) are discussed in this Account. The fragment molecular orbital (FMO) method can be combined with any electronic structure method to perform accurate calculations on large molecular species with no reliance on capping atoms or empirical parameters. The FMO method is highly scalable and can take advantage of massively parallel computer systems. For example, the method has been shown to scale nearly linearly on up to 131 000 processor cores for calculations on large water clusters. There have been many applications of the FMO method to large molecular clusters, to biomolecules (e.g., proteins), and to materials that are used as heterogeneous catalysts. The effective fragment potential (EFP) method is a model potential approach that is fully derived from first principles and has no empirically fitted parameters. Consequently, an EFP can be generated for any molecule by a simple preparatory GAMESS calculation. The EFP method provides accurate descriptions of all types of intermolecular interactions, including Coulombic interactions, polarization/induction, exchange repulsion, dispersion, and charge transfer. The EFP method has been applied successfully to the study of liquid water, π-stacking in substituted benzenes and in DNA base pairs, solvent effects on positive and negative ions, electronic spectra and dynamics, non-adiabatic phenomena in electronic excited states, and nonlinear excited state properties. The effective fragment molecular orbital (EFMO) method is a merger of the FMO and EFP methods, in which interfragment interactions are described by the EFP potential, rather than the less accurate electrostatic potential. The use of EFP in this manner facilitates the use of a smaller value for the distance cut-off (Rcut). Rcut determines the distance at which EFP interactions replace fully quantum
Accurate determination of antenna directivity
Dich, Mikael
1997-01-01
The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...
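A crude numerical analogue of the directivity computation (simple rectangle-rule integration of sampled power density, rather than the spherical-wave-expansion formula derived in the paper; the sampling grid is an illustrative assumption):

```python
import numpy as np

def directivity(p, theta, phi):
    """D = 4*pi * P_max / P_rad, with P_rad obtained by rectangle-rule
    integration of the sampled power density over the sphere."""
    dth = theta[1] - theta[0]
    dph = phi[1] - phi[0]
    p_rad = np.sum(p * np.sin(theta)[:, None]) * dth * dph  # sin(theta) weights
    return 4.0 * np.pi * p.max() / p_rad

theta = np.linspace(0.0, np.pi, 721)                       # includes both poles
phi = np.linspace(0.0, 2.0 * np.pi, 1440, endpoint=False)
p = np.sin(theta)[:, None] ** 2 * np.ones((1, len(phi)))   # Hertzian dipole
d = directivity(p, theta, phi)   # analytic directivity of this pattern is 1.5
```

The point of the paper's spherical-wave approach is that far fewer, well-chosen samples suffice; the brute-force grid above needs dense sampling to reach the same accuracy.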
K-corrections and extinction corrections for Type Ia supernovae
Nugent, Peter; Kim, Alex; Perlmutter, Saul
2002-05-21
The measurement of the cosmological parameters from Type Ia supernovae hinges on our ability to compare nearby and distant supernovae accurately. Here we present an advance on a method for performing generalized K-corrections for Type Ia supernovae which allows us to compare these objects from the UV to near-IR over the redshift range 0 < z < 2. We discuss the errors currently associated with this method and how future data can improve upon it significantly. We also examine the effects of reddening on the K-corrections and the light curves of Type Ia supernovae. Finally, we provide a few examples of how these techniques affect our current understanding of a sample of both nearby and distant supernovae.
Motion-corrected Fourier ptychography
Bian, Liheng; Guo, Kaikai; Suo, Jinli; Yang, Changhuei; Chen, Feng; Dai, Qionghai
2016-01-01
Fourier ptychography (FP) is a recently proposed computational imaging technique for high space-bandwidth product imaging. In real setups such as endoscope and transmission electron microscope, the common sample motion largely degrades the FP reconstruction and limits its practicability. In this paper, we propose a novel FP reconstruction method to efficiently correct for unknown sample motion. Specifically, we adaptively update the sample's Fourier spectrum from low spatial-frequency regions towards high spatial-frequency ones, with an additional motion recovery and phase-offset compensation procedure for each sub-spectrum. Benefiting from the phase retrieval redundancy theory, the required large overlap between adjacent sub-spectra offers an accurate guide for successful motion recovery. Experimental results on both simulated data and real captured data show that the proposed method can correct for unknown sample motion with its standard deviation being up to 10% of the field-of-view scale. We have released...
MR image intensity inhomogeneity correction
Vişan Pungă, Mirela; Moldovanu, Simona; Moraru, Luminita
2015-01-01
MR technology is one of the best and most reliable ways of studying the brain. Its main drawback is the so-called intensity inhomogeneity, or bias field, which impairs visual inspection and medical diagnosis and strongly affects quantitative image analysis. Noise is yet another artifact in medical images. In order to accurately and effectively restore the original signal, we consider filtering, bias correction, and quantitative assessment of the correction. In this report, two denoising algorithms are used: (i) basis rotation fields of experts (BRFoE) and (ii) anisotropic diffusion (for Gaussian noise, using the Perona-Malik and Tukey's biweight functions and the standard deviation of the noise of the input image).
Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry
Fuchs, Franz G.; Hjelmervik, Jon M.
2014-01-01
A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire...
The Accurate Particle Tracer Code
Wang, Yulei; Qin, Hong; Yu, Zhi
2016-01-01
The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusion energy research, computational mathematics, software engineering, and high-performance computation. The APT code consists of seven main modules, including the I/O module, the initialization module, the particle pusher module, the parallelization module, the field configuration module, the external force-field module, and the extendible module. The I/O module, supported by Lua and Hdf5 projects, provides a user-friendly interface for both numerical simulation and data analysis. A series of new geometric numerical methods...
Accurate Modeling of Advanced Reflectarrays
Zhou, Min
Analysis and optimization methods for the design of advanced printed reflectarrays have been investigated, and the study is focused on developing an accurate and efficient simulation tool. For the analysis, a good compromise between accuracy and efficiency can be obtained using the spectral domain… to the POT. The GDOT can optimize for the size as well as the orientation and position of arbitrarily shaped array elements. Both co- and cross-polar radiation can be optimized for multiple frequencies, dual polarization, and several feed illuminations. Several contoured beam reflectarrays have been designed… using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility…
Accurate thickness measurement of graphene
Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.
2016-03-01
Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
Accurate and precise zinc isotope ratio measurements in urban aerosols.
Gioia, Simone; Weiss, Dominik; Coles, Barry; Arnold, Tim; Babinski, Marly
2008-12-15
We developed an analytical method and constrained procedural boundary conditions that enable accurate and precise Zn isotope ratio measurements in urban aerosols. We also demonstrate the potential of this new isotope system for air pollutant source tracing. The procedural blank is around 5 ng, significantly lower than in published methods due to a tailored ion-chromatographic separation. Accurate mass-bias correction using external correction with Cu is limited to a Zn sample content of approximately 50 ng, due to the combined effect of the blank contribution of Cu and Zn from the ion exchange procedure and the need to maintain a Cu/Zn ratio of approximately 1. Mass bias is corrected for by applying the common analyte internal standardization approach. Comparison with other mass-bias correction methods demonstrates the accuracy of the method. The average precision of δ⁶⁶Zn determinations in aerosols is around 0.05‰ per atomic mass unit. The method was tested on aerosols collected in São Paulo City, Brazil. The measurements reveal significant variations in δ⁶⁶Zn_Imperial, ranging between -0.96 and -0.37‰ in coarse and between -1.04 and 0.02‰ in fine particulate matter. This variability suggests that Zn isotopic compositions distinguish atmospheric sources. The isotopically light signature suggests traffic as the main source. We present further δ⁶⁶Zn_Imperial data for the standard reference material NIST SRM 2783 (δ⁶⁶Zn_Imperial = 0.26 ± 0.10‰).
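An exponential-law mass-bias correction with Cu as the external standard can be sketched as follows. The instrumental fractionation exponent and the Zn ratio are simulated, illustrative values; the SRM 976 Cu ratio is the commonly quoted certified value, but treat all numbers here as assumptions for the sketch.

```python
import math

# Cu and Zn isotope masses (u)
M63, M65 = 62.9296, 64.9278
M64, M66 = 63.9291, 65.9260

def correct_zn_ratio(r_zn_meas, r_cu_meas, r_cu_true):
    """Exponential-law correction: derive the fractionation exponent f
    from the admixed Cu standard and apply it to the Zn ratio."""
    f = math.log(r_cu_true / r_cu_meas) / math.log(M65 / M63)
    return r_zn_meas * (M66 / M64) ** f

# Simulated measurement: both elements fractionate with the same exponent
f_inst = 1.3
r_cu_true = 0.4456                           # certified 65Cu/63Cu, NIST SRM 976
r_zn_true = 0.2782                           # illustrative 66Zn/64Zn
r_cu_meas = r_cu_true * (M65 / M63) ** (-f_inst)
r_zn_meas = r_zn_true * (M66 / M64) ** (-f_inst)
r_zn_corr = correct_zn_ratio(r_zn_meas, r_cu_meas, r_cu_true)   # recovers r_zn_true
```

The key assumption, as the abstract notes, is that Cu and Zn share the same fractionation behaviour, which in practice requires a Cu/Zn ratio near 1.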
Hees, A; Poncin-Lafitte, C Le
2014-01-01
Given the extreme accuracy of modern space science, a precise relativistic modeling of observations is required. In particular, it is important to describe properly light propagation through the Solar System. For two decades, several modeling efforts based on the solution of the null geodesic equations have been proposed, but they are mainly valid only at the first post-Newtonian order. However, with the increasing precision of ongoing space missions such as Gaia, GAME, BepiColombo, JUNO or JUICE, we know that some corrections up to the second order have to be taken into account for future experiments. We present a procedure to compute the relativistic coordinate time delay, Doppler and astrometric observables avoiding the integration of the null geodesic equation. This is possible using the Time Transfer Function formalism, a powerful tool providing key quantities such as the time of flight of a light signal between two point-events and the tangent vector to its null geodesic. Indeed we show how to ...
Accurate Fiber Length Measurement Using Time-of-Flight Technique
Terra, Osama; Hussein, Hatem
2016-06-01
Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDR). In this paper, accurate measurements of different fiber lengths using the time-of-flight technique are performed. A setup is proposed to accurately measure lengths from 1 to 40 km at 1,550 and 1,310 nm using a high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to the meter by definition), by locking the time interval counter to the Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory of the United Kingdom (NPL). Finally, a method is proposed to relatively correct the fiber refractive index to allow accurate fiber length measurement.
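The underlying length relation is simply L = c·Δt/n_g, halved for a round-trip (there-and-back) measurement. A minimal sketch with an assumed group index (the 1.468 value at 1550 nm is a typical figure, not the paper's calibrated one):

```python
C0 = 299_792_458.0   # speed of light in vacuum, m/s (exact by definition)

def fiber_length(transit_time_s, n_group, round_trip=False):
    """Fiber length from a time-of-flight measurement."""
    length = C0 * transit_time_s / n_group
    return length / 2.0 if round_trip else length

# Example: with an assumed group index of 1.468 at 1550 nm, a one-way
# transit of ~4.9 microseconds corresponds to roughly 1 km of fiber.
n_g = 1.468
dt_one_way = 1000.0 * n_g / C0            # transit time for exactly 1 km
length_m = fiber_length(dt_one_way, n_g)  # recovers 1000.0 m
```

The relation makes clear why the final step of the paper matters: any relative error in the assumed group index maps one-to-one into a relative length error.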
A More Accurate Fourier Transform
Courtney, Elya
2015-01-01
Fourier transform methods are used to analyze functions and data sets to provide frequencies, amplitudes, and phases of underlying oscillatory components. Fast Fourier transform (FFT) methods offer speed advantages over evaluation of explicit integrals (EI) that define Fourier transforms. This paper compares frequency, amplitude, and phase accuracy of the two methods for well resolved peaks over a wide array of data sets including cosine series with and without random noise and a variety of physical data sets, including atmospheric $\\mathrm{CO_2}$ concentrations, tides, temperatures, sound waveforms, and atomic spectra. The FFT uses MIT's FFTW3 library. The EI method uses the rectangle method to compute the areas under the curve via complex math. Results support the hypothesis that EI methods are more accurate than FFT methods. Errors range from 5 to 10 times higher when determining peak frequency by FFT, 1.4 to 60 times higher for peak amplitude, and 6 to 10 times higher for phase under a peak. The ability t...
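A toy version of the comparison (numpy's FFT standing in for the FFTW3 library): estimate the frequency of an off-bin cosine from the FFT peak bin and from an explicit rectangle-rule Fourier integral evaluated on a fine frequency grid. All signal and grid parameters are illustrative assumptions.

```python
import numpy as np

fs, n = 100.0, 1000                  # sample rate (Hz) and sample count
t = np.arange(n) / fs
f_true = 7.37                        # deliberately between FFT bins
x = np.cos(2 * np.pi * f_true * t)

# (a) FFT: peak bin, limited to the bin spacing fs/n = 0.1 Hz
spec = np.abs(np.fft.rfft(x))
f_fft = np.fft.rfftfreq(n, 1 / fs)[np.argmax(spec)]

# (b) explicit integral: rectangle-rule Fourier sum on a 0.001 Hz grid,
# free to place the peak anywhere on that grid
f_grid = np.arange(6.0, 9.0, 0.001)
amp = np.abs(np.exp(-2j * np.pi * np.outer(f_grid, t)) @ x) / n
f_ei = f_grid[np.argmax(amp)]
```

Here the explicit-integral estimate lands much closer to the true frequency than the FFT bin does, illustrating (in a simplified way) the accuracy gap the paper quantifies; interpolation between FFT bins can narrow, but not fully close, that gap.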
NWS Corrections to Observations
National Oceanic and Atmospheric Administration, Department of Commerce — Form B-14 is the National Weather Service form entitled 'Notice of Corrections to Weather Records.' The forms are used to make corrections to observations on forms...
Dr. Grace Zhang
2000-01-01
Error correction is an important issue in foreign language acquisition. This paper investigates how students feel about the way in which error correction should take place in a Chinese-as-a-foreign-language classroom, based on large-scale empirical data. The study shows that there is a general consensus that error correction is necessary. In terms of correction strategy, the students preferred a combination of direct and indirect corrections, or a direct-only correction. The former choice indicates that students would be happy to take either, so long as the correction gets done. Most students did not mind peer correction provided it was conducted in a constructive way. More than half of the students would feel uncomfortable if the same error they make in class were corrected consecutively more than three times. Taking these findings into consideration, we may want to encourage peer correction, use a combination of correction strategies (direct only if suitable), and do it in a non-threatening and sensitive way. It is hoped that this study will contribute to the effectiveness of error correction in a Chinese language classroom and may also have wider implications for other languages.
Accurate, meshless methods for magnetohydrodynamics
Hopkins, Philip F.; Raives, Matthias J.
2016-01-01
Recently, we explored new meshless finite-volume Lagrangian methods for hydrodynamics: the 'meshless finite mass' (MFM) and 'meshless finite volume' (MFV) methods; these capture advantages of both smoothed particle hydrodynamics (SPH) and adaptive mesh refinement (AMR) schemes. We extend these to include ideal magnetohydrodynamics (MHD). The MHD equations are second-order consistent and conservative. We augment these with a divergence-cleaning scheme, which maintains ∇·B ≈ 0. We implement these in the code GIZMO, together with state-of-the-art SPH MHD. We consider a large test suite, and show that on all problems the new methods are competitive with AMR using constrained transport (CT) to ensure ∇·B = 0. They correctly capture the growth/structure of the magnetorotational instability, MHD turbulence, and the launching of magnetic jets, in some cases converging more rapidly than state-of-the-art AMR. Compared to SPH, the MFM/MFV methods exhibit convergence at fixed neighbour number, sharp shock-capturing, and dramatically reduced noise, divergence errors, and diffusion. Still, 'modern' SPH can handle most test problems, at the cost of larger kernels and 'by hand' adjustment of artificial diffusion. Compared to non-moving meshes, the new methods exhibit enhanced 'grid noise' but reduced advection errors and diffusion, easily include self-gravity, and feature velocity-independent errors and superior angular momentum conservation. They converge more slowly on some problems (smooth, slow-moving flows), but more rapidly on others (involving advection/rotation). In all cases, we show that divergence control beyond the Powell 8-wave approach is necessary, or all methods can converge to unphysical answers even at high resolution.
Second-order accurate finite volume method for well-driven flows
Dotlić, Milan; Pokorni, Boris; Pušić, Milenko; Dimkić, Milan
2013-01-01
We consider a finite volume method for well-driven fluid flow in a porous medium. Due to the singularity at the well, modeling in the near-well region with standard numerical schemes results in a completely wrong total well flux and an inaccurate hydraulic head. Local grid refinement can help, but it comes at a computational cost. In this article we propose two methods to address the well singularity. In the first method the flux through well faces is corrected using a logarithmic function, in a way related to the Peaceman correction. Coupling this correction with a second-order accurate two-point scheme greatly improves the total well flux, but the resulting scheme is still not even first-order accurate on coarse grids. In the second method fluxes in the near-well region are corrected by representing the hydraulic head as a sum of a logarithmic and a linear function. This scheme is second-order accurate.
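The logarithmic correction in the first method is related to Peaceman's well model, whose core idea can be sketched in a few lines. A minimal illustration, assuming an isotropic square grid and single-phase Darcy flow; the function name and parameter values are illustrative, not taken from the paper:

```python
import math

def peaceman_well_flux(k, h, mu, p_block, p_well, dx, r_w):
    """Total well flux from the pressure of the grid block containing the
    well, using Peaceman's logarithmic correction: the block pressure is
    interpreted as the pressure at an equivalent radius r_eq ~ 0.2*dx."""
    r_eq = 0.2 * dx  # Peaceman equivalent radius (isotropic square grid)
    return 2 * math.pi * k * h * (p_block - p_well) / (mu * math.log(r_eq / r_w))
```

A naive two-point flux proportional to (p_block - p_well)/dx ignores the logarithmic pressure profile around the well; the logarithmic factor restores the correct near-well behaviour.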
Accurate skin dose measurements using radiochromic film in clinical applications.
Devic, S; Seuntjens, J; Abdel-Rahman, W; Evans, M; Olivares, M; Podgorsak, E B; Vuong, Té; Soares, Christopher G
2006-04-01
Megavoltage x-ray beams exhibit the well-known phenomenon of dose buildup within the first few millimeters of the incident phantom surface, or the skin. Results of surface dose measurements, however, depend strongly on the measurement technique employed. Our goal in this study was to determine a correction procedure in order to obtain an accurate skin dose estimate at the clinically relevant depth based on radiochromic film measurements. To illustrate this correction, we have used as a reference point a depth of 70 micron. We used the new GAFCHROMIC dosimetry films (HS, XR-T, and EBT) that have effective points of measurement at depths slightly larger than 70 micron. In addition to films, we also used an Attix parallel-plate chamber and a home-built extrapolation chamber to cover tissue-equivalent depths in the range from 4 micron to 1 mm of water-equivalent depth. Our measurements suggest that within the first millimeter of the skin region, the percentage depth dose (PDD) for a 6 MV photon beam and field size of 10 x 10 cm2 increases from 14% to 43%. For the three GAFCHROMIC dosimetry film models, the 6 MV beam entrance skin dose measurement corrections due to their effective point of measurement are as follows: 15% for the EBT, 15% for the HS, and 16% for the XR-T model GAFCHROMIC films. The correction factors for the exit skin dose due to the build-down region are negligible. There is a small field size dependence for the entrance skin dose correction factor when using the EBT GAFCHROMIC film model. Finally, a procedure that uses the EBT model GAFCHROMIC film for an accurate measurement of the skin dose in a parallel-opposed pair 6 MV photon beam arrangement is described.
孟瑞锋; 马小康; 王州博; 董龙梅; 杨涛; 刘东红
2015-01-01
abnormal sample points and checking the regression coefficients of the model by t-test. The developed model had high prediction accuracy and stability, with a maximum prediction error of 0.25 g/100 g, a determination coefficient of calibration (Rcal2) of 0.9992, a determination coefficient of validation (Rval2) of 0.9988, a root mean square error of calibration (RMSEC) of 0.0894 g/100 g, a root mean square error of prediction (RMSEP) of 0.1015 g/100 g, and a ratio of performance to deviation (RPD) of 28.57, which indicated that the model could be used for practical detection accurately and stably, and was helpful for on-line measurement.
How flatbed scanners upset accurate film dosimetry.
van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S
2016-01-21
Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE of two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate the effect of light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range of 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% change for pixels at the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and per dose delivered to the film.
38 CFR 4.46 - Accurate measurement.
2010-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
BASIC: A Simple and Accurate Modular DNA Assembly Method.
Storch, Marko; Casini, Arturo; Mackrow, Ben; Ellis, Tom; Baldwin, Geoff S
2017-01-01
Biopart Assembly Standard for Idempotent Cloning (BASIC) is a simple, accurate, and robust DNA assembly method. The method is based on linker-mediated DNA assembly and provides highly accurate DNA assembly with 99 % correct assemblies for four parts and 90 % correct assemblies for seven parts [1]. The BASIC standard defines a single entry vector for all parts flanked by the same prefix and suffix sequences and its idempotent nature means that the assembled construct is returned in the same format. Once a part has been adapted into the BASIC format it can be placed at any position within a BASIC assembly without the need for reformatting. This allows laboratories to grow comprehensive and universal part libraries and to share them efficiently. The modularity within the BASIC framework is further extended by the possibility of encoding ribosomal binding sites (RBS) and peptide linker sequences directly on the linkers used for assembly. This makes BASIC a highly versatile library construction method for combinatorial part assembly including the construction of promoter, RBS, gene variant, and protein-tag libraries. In comparison with other DNA assembly standards and methods, BASIC offers a simple robust protocol; it relies on a single entry vector, provides for easy hierarchical assembly, and is highly accurate for up to seven parts per assembly round [2].
[Orthognathic surgery: corrective bone operations].
Reuther, J
2000-05-01
The article reviews the history of orthognathic surgery from the middle of the last century up to the present. Initially, mandibular osteotomies were only performed in cases of severe malformations. But during the last century a precise and standardized procedure for correction of the mandible was established. Multiple modifications allowed control of small fragments, functionally stable osteosynthesis, and finally a precise positioning of the condyle. In 1955 Obwegeser and Trauner introduced the sagittal split osteotomy by an intraoral approach. It was the final breakthrough for orthognathic surgery as a standard treatment for corrections of the mandible. Surgery of the maxilla dates back to the nineteenth century. B. von Langenbeck from Berlin is said to have performed the first Le Fort I osteotomy in 1859. After minor changes, Wassmund corrected a posttraumatic malocclusion by a Le Fort I osteotomy in 1927. But it was Axhausen who risked the total mobilization of the maxilla in 1934. By additional modifications and further refinements, Obwegeser paved the way for this approach to become a standard procedure in maxillofacial surgery. Tessier mobilized the whole midface by a Le Fort III osteotomy and showed new perspectives in the correction of severe malformations of the facial bones, creating the basis of modern craniofacial surgery. While the last 150 years were distinguished by the creation and standardization of surgical methods, the present focus lies on precise treatment planning and the consideration of functional aspects of the whole stomatognathic system. To date, 3D visualization by CT scans, stereolithographic models, and computer-aided treatment planning and simulation allow surgery of complex cases and accurate predictions of soft tissue changes.
Diophantine Correct Open Induction
Raffer, Sidney
2010-01-01
We give an induction-free axiom system for diophantine correct open induction. We relate the problem of whether a finitely generated ring of Puiseux polynomials is diophantine correct to a problem about the value-distribution of a tuple of semialgebraic functions with integer arguments. We use this result, and a theorem of Bergelson and Leibman on generalized polynomials, to identify a class of diophantine correct subrings of the field of descending Puiseux series with real coefficients.
The FLUKA code: An accurate simulation tool for particle therapy
Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis
2016-01-01
Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...
1994-01-01
Introduction During the teaching and learning process, teachers often check how much students have understood through written assignments. In this article I’d like to describe one method of correcting students’ written work by using a variety of symbols to indicate where students have gone wrong, then asking students to correct their work themselves.
Accurate measurement of ultrasonic velocity by eliminating the diffraction effect
WEI Tingcun
2003-01-01
The accurate measurement method of ultrasonic velocity by the pulse interference method with elimination of the diffraction effect has been investigated experimentally in the VHF range. Two silicate glasses were taken as the specimens; their frequency dependences of longitudinal velocities were measured in the frequency range 50-350 MHz, and the phase advances of ultrasonic signals caused by the diffraction effect were calculated using A. O. Williams' theoretical expression. For the frequency dependences of longitudinal velocities, the measurement results were in good agreement with the simulation results in which the phase advances were included. It has been shown that the velocity error due to the diffraction effect can be corrected very well by this method.
Simple and accurate temperature correction for moisture pin calibrations in oriented strand board
Charles Boardman; Samuel V. Glass; Patricia K. Lebow
2017-01-01
Oriented strand board (OSB) is commonly used in the residential construction market in North America and its moisture-related durability is a critical consideration for building envelope design. Measurement of OSB moisture content (MC), a key determinant of durability, is often done using moisture pins and relies on a correlation between MC and the electrical...
Surface EMG measurements during fMRI at 3T : Accurate EMG recordings after artifact correction
van Duinen, Hiske; Zijdewind, Inge; Hoogduin, H; Maurits, N
2005-01-01
In this experiment, we have measured surface EMG of the first dorsal interosseus during predefined submaximal isometric contractions (5, 15, 30, 50, and 70% of maximal force) of the index finger simultaneously with fMRI measurements. Since we have used sparse sampling fMRI (3-s scanning; 2-s non-sca
Probabilistic quantum error correction
Fern, J; Fern, Jesse; Terilla, John
2002-01-01
There are well known necessary and sufficient conditions for a quantum code to correct a set of errors. We study weaker conditions under which a quantum code may correct errors with probabilities that may be less than one. We work with stabilizer codes and as an application study how the nine qubit code, the seven qubit code, and the five qubit code perform when there are errors on more than one qubit. As a second application, we discuss the concept of syndrome quality and use it to suggest a way that quantum error correction can be practically improved.
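The flavour of syndrome-based correction, and of how it fails when more qubits than the code can handle are hit, can be seen in the classical three-bit repetition code, the toy analogue of the quantum bit-flip code whose Z1Z2 and Z2Z3 stabilizers measure the same parities. A sketch of that classical shadow (not the stabilizer formalism itself):

```python
def syndrome(bits):
    """Parity checks of the 3-bit repetition code; these mirror the
    Z1Z2 and Z2Z3 stabilizer measurements of the quantum bit-flip code."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Flip the single bit implicated by the syndrome, if any.
    Miscorrects when two or more bits are flipped, which is the regime
    where error correction only succeeds with some probability."""
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    out = list(bits)
    if flip is not None:
        out[flip] ^= 1
    return tuple(out)
```

Single-bit errors are always repaired; a two-bit error produces the same syndrome as the complementary single-bit error and is decoded to the wrong codeword.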
Correction of Neonatal Hypovolemia
V. V. Moskalev
2007-01-01
Objective: to evaluate the efficiency of hydroxyethyl starch solution (6% refortane, Berlin-Chemie) versus fresh frozen plasma used to correct neonatal hypovolemia. Materials and methods. In 12 neonatal infants with hypocoagulation, hypovolemia was corrected with fresh frozen plasma (10 ml/kg body weight). In 13 neonates, it was corrected with 6% refortane infusion in a dose of 10 ml/kg. Doppler echocardiography was used to study central hemodynamic parameters and Doppler study was employed to examine regional blood flow in the anterior cerebral and renal arteries. Results. Infusion of 6% refortane and fresh frozen plasma at a rate of 10 ml/hour for an hour was found to normalize the parameters of central hemodynamics and regional blood flow. Conclusion. Comparative analysis of the findings suggests that 6% refortane is the drug of choice in correcting neonatal hypovolemia. Fresh frozen plasma should be infused in hemostatic disorders.
Corrected Age For Preemies
Pryor, Louise
2008-01-01
The usual aim of spreadsheet audit is to verify correctness. There are two problems with this: first, it is often difficult to tell whether the spreadsheets in question are correct, and second, even if they are, they may still give the wrong results. These problems are explained in this paper, which presents the key criteria for judging a spreadsheet and discusses how those criteria can be achieved.
Adaptable DC offset correction
Golusky, John M. (Inventor); Muldoon, Kelly P. (Inventor)
2009-01-01
Methods and systems for adaptable DC offset correction are provided. An exemplary adaptable DC offset correction system evaluates an incoming baseband signal to determine an appropriate DC offset removal scheme; removes a DC offset from the incoming baseband signal based on the appropriate DC offset scheme in response to the evaluated incoming baseband signal; and outputs a reduced DC baseband signal in response to the DC offset removed from the incoming baseband signal.
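One simple removal scheme of the kind such a system might select is a one-pole (exponential moving average) offset tracker that follows the slowly varying DC component and subtracts it. This is an illustrative sketch of that generic technique, not the patented method itself; the function name and `alpha` value are assumptions:

```python
def remove_dc(samples, alpha=0.01):
    """Adaptively track and subtract the DC offset of a baseband signal
    using a one-pole (exponential moving average) estimator. `alpha`
    trades tracking speed against attenuation of low-frequency content."""
    dc = 0.0
    out = []
    for x in samples:
        dc += alpha * (x - dc)   # slowly track the running offset
        out.append(x - dc)       # emit the reduced-DC sample
    return out
```

On a constant input the tracker converges to the offset, so the output decays toward zero at a rate set by `alpha`.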
Respiration correction by clustering in ultrasound images
Wu, Kaizhi; Chen, Xi; Ding, Mingyue; Sang, Nong
2016-03-01
Respiratory motion is a challenging factor for image acquisition, image-guided procedures, and perfusion quantification using contrast-enhanced ultrasound in the abdominal and thoracic region. In order to reduce the influence of respiratory motion, respiratory correction methods were investigated. In this paper we propose a novel, cluster-based respiratory correction method. In the proposed method, we first assign the image frames to the corresponding respiratory phase using spectral clustering. We then achieve image correction automatically by finding a cluster in which points are close to each other. Unlike the traditional gating method, we do not need to estimate the breathing cycle accurately, because images at the corresponding respiratory phase are similar and therefore close in high-dimensional space. The proposed method is tested on a simulated image sequence and a real ultrasound image sequence. The experimental results show the effectiveness of our proposed method both quantitatively and qualitatively.
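The phase-grouping step can be illustrated with minimal two-way spectral clustering: build a Gaussian-affinity graph over frame feature vectors and split on the sign of the Fiedler vector. This is a generic sketch of 2-way spectral clustering under assumed frame features, not the authors' exact pipeline:

```python
import numpy as np

def spectral_bipartition(frames, sigma=1.0):
    """Split frames into two groups (e.g. respiratory phases) by the sign
    of the Fiedler vector of a Gaussian-affinity graph Laplacian."""
    X = np.asarray(frames, dtype=float)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    W = np.exp(-d2 / (2 * sigma ** 2))                   # Gaussian affinities
    L = np.diag(W.sum(1)) - W                            # unnormalized Laplacian
    vals, vecs = np.linalg.eigh(L)                       # ascending eigenvalues
    fiedler = vecs[:, 1]       # eigenvector of the 2nd-smallest eigenvalue
    return fiedler > 0         # boolean group labels
```

Frames from the same respiratory phase form a tightly connected subgraph, so the Fiedler vector takes one sign on each phase group.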
Mobile image based color correction using deblurring
Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.
2015-03-01
Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for the prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed by utilizing food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e. a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique that combines image deblurring and color correction. The contribution consists of introducing an automatic camera shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space.
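The baseline for checker-based color correction is fitting an affine transform from observed patch colors to their reference values by least squares; the polynomial model in the paper extends this with higher-order terms. A minimal sketch with synthetic patch values (the numbers are illustrative only):

```python
import numpy as np

def fit_color_correction(observed, reference):
    """Fit an affine color-correction matrix M (4x3) mapping observed
    RGB checker patches to their reference values by least squares.
    Apply with: corrected = np.hstack([rgb_rows, ones_column]) @ M"""
    obs = np.hstack([observed, np.ones((observed.shape[0], 1))])  # bias column
    M, *_ = np.linalg.lstsq(obs, reference, rcond=None)
    return M

# usage sketch with synthetic checker patches
obs = np.array([[0.1, 0.2, 0.3], [0.5, 0.4, 0.6],
                [0.9, 0.8, 0.7], [0.2, 0.9, 0.1]])
ref = obs * 1.1 + 0.05            # synthetic "true" patch colors
M = fit_color_correction(obs, ref)
```

Because the synthetic mapping is itself affine, the fit recovers it exactly; real checker data would be fit in the least-squares sense.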
moco: Fast Motion Correction for Calcium Imaging
Alexander eDubbs
2016-02-01
Motion correction is the first step in a pipeline of algorithms to analyze calcium imaging videos and extract biologically relevant information, for example the network structure of the neurons therein. Fast motion correction is especially critical for closed-loop activity triggered stimulation experiments, where accurate detection and targeting of specific cells is necessary. We introduce a novel motion-correction algorithm that uses a Fourier-transform approach, and a combination of judicious downsampling and the accelerated computation of many L2 norms using dynamic programming and two-dimensional, FFT-accelerated convolutions, to enhance its efficiency. Its accuracy is comparable to that of established community-used algorithms, and it is more stable to large translational motions. It is programmed in Java and is compatible with ImageJ.
moco: Fast Motion Correction for Calcium Imaging.
Dubbs, Alexander; Guevara, James; Yuste, Rafael
2016-01-01
Motion correction is the first step in a pipeline of algorithms to analyze calcium imaging videos and extract biologically relevant information, for example the network structure of the neurons therein. Fast motion correction is especially critical for closed-loop activity triggered stimulation experiments, where accurate detection and targeting of specific cells is necessary. We introduce a novel motion-correction algorithm which uses a Fourier-transform approach, and a combination of judicious downsampling and the accelerated computation of many L2 norms using dynamic programming and two-dimensional, FFT-accelerated convolutions, to enhance its efficiency. Its accuracy is comparable to that of established community-used algorithms, and it is more stable to large translational motions. It is programmed in Java and is compatible with ImageJ.
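The core of any Fourier-transform approach to motion correction is estimating each frame's translation from the peak of an FFT-computed cross-correlation. A minimal sketch of that step (integer shifts only; moco itself adds downsampling and L2-distance refinements beyond this):

```python
import numpy as np

def estimate_shift(ref, frame):
    """Integer (dy, dx) shift that realigns `frame` with `ref`, found at
    the peak of the circular cross-correlation computed via FFT."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peak indices from [0, N) to signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

Applying the estimated shift back to the frame (e.g. with `np.roll`) recovers the reference, up to the circular boundary assumption.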
Accurate thermoelastic tensor and acoustic velocities of NaCl
Marcondes, Michel L., E-mail: michel@if.usp.br [Physics Institute, University of Sao Paulo, Sao Paulo, 05508-090 (Brazil); Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Shukla, Gaurav, E-mail: shukla@physics.umn.edu [School of Physics and Astronomy, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States); Silveira, Pedro da [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Wentzcovitch, Renata M., E-mail: wentz002@umn.edu [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States)
2015-12-15
Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.
Laboratory Building for Accurate Determination of Plutonium
2008-01-01
The accurate determination of plutonium is one of the most important assay techniques for nuclear fuel, the key to chemical measurement transfer, and the basis of the nuclear material balance.
1996-01-01
Introduction. I have been teaching English for ten years and, like many other teachers in middle schools, I teach three big classes each year. Before I had the opportunity to further my study in the SMSTT project, run jointly by the British Council and the State Education Commission of China at Southwest China Teachers University, I found it somewhat difficult to correct students' homework since I had so many students. Now I still have three big classes, but I have found it easier to correct students' homework since I have been combining the techniques learned in the project with my own successful experience. In this article, I attempt to discuss my approach to correcting students' homework. I hope that it will be of some use to those who have not yet had the opportunity to further their training.
Model Correction Factor Method
Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes
1997-01-01
The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, considering a computationally time-consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit state based on an idealized mechanical model to be adapted to the original limit state by the model correction factor. Reliable approximations are obtained by iterative use of gradient information on the original limit state function, analogously to previous response surface approaches. However, the strength of the model correction factor method is that in its simpler form, not using gradient information on the original limit state function or only using this information once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations of the reliability.
JIANG Min; FANG Zhen-Yun; SANG Wen-Long; GAO Fei
2006-01-01
In the minimal electromagnetic coupling model of the interaction between photon and electron (positron), we accurately calculate the photon-chain renormalized propagator and obtain an accurate result for the differential cross section of Bhabha scattering with a photon-chain renormalized propagator in quantum electrodynamics. The related radiative corrections are briefly reviewed and discussed.
Understanding the Code: keeping accurate records.
Griffith, Richard
2015-10-01
In his continuing series looking at the legal and professional implications of the Nursing and Midwifery Council's revised Code of Conduct, Richard Griffith discusses the elements of accurate record keeping under Standard 10 of the Code. This article considers the importance of accurate record keeping for the safety of patients and protection of district nurses. The legal implications of records are explained along with how district nurses should write records to ensure these legal requirements are met.
Correction of ocular dystopia.
Janecka, I P
1996-04-01
The purpose of this study was to examine results with elective surgical correction of enophthalmos. The study was a retrospective assessment in a university-based referral practice. A consecutive sample of 10 patients who developed ocular dystopia following orbital trauma was examined. The main outcome measures were a subjective evaluation by patients and objective measurements of patients' eye position. The intervention was three-dimensional orbital reconstruction with titanium plates. It is concluded that satisfactory correction of enophthalmos and ocular dystopia can be achieved with elective surgery using titanium plates. In addition, intraoperative measurements of eye position in three planes increases the precision of surgery.
Probabilistic error correction for RNA sequencing.
Le, Hai-Son; Schulz, Marcel H; McCauley, Brenna M; Hinman, Veronica F; Bar-Joseph, Ziv
2013-05-01
Sequencing of RNAs (RNA-Seq) has revolutionized the field of transcriptomics, but the reads obtained often contain errors. Read error correction can have a large impact on our ability to accurately assemble transcripts. This is especially true for de novo transcriptome analysis, where a reference genome is not available. Current read error correction methods, developed for DNA sequence data, cannot handle the overlapping effects of non-uniform abundance, polymorphisms and alternative splicing. Here we present SEquencing Error CorrEction in Rna-seq data (SEECER), a hidden Markov Model (HMM)-based method, which is the first to successfully address these problems. SEECER efficiently learns hundreds of thousands of HMMs and uses these to correct sequencing errors. Using human RNA-Seq data, we show that SEECER greatly improves on previous methods in terms of quality of read alignment to the genome and assembly accuracy. To illustrate the usefulness of SEECER for de novo transcriptome studies, we generated new RNA-Seq data to study the development of the sea cucumber Parastichopus parvimensis. Our corrected assembled transcripts shed new light on two important stages in sea cucumber development. Comparison of the assembled transcripts to known transcripts in other species has also revealed novel transcripts that are unique to sea cucumber, some of which we have experimentally validated. Supporting website: http://sb.cs.cmu.edu/seecer/.
Partial Volume Correction in Quantitative Amyloid Imaging
Su, Yi; Blazey, Tyler M.; Snyder, Abraham Z.; Raichle, Marcus E.; Marcus, Daniel S.; Ances, Beau M.; Bateman, Randall J.; Cairns, Nigel J.; Aldea, Patricia; Cash, Lisa; Christensen, Jon J.; Friedrichsen, Karl; Hornbeck, Russ C.; Farrar, Angela M.; Owen, Christopher J.; Mayeux, Richard; Brickman, Adam M.; Klunk, William; Price, Julie C.; Thompson, Paul M.; Ghetti, Bernardino; Saykin, Andrew J.; Sperling, Reisa A.; Johnson, Keith A.; Schofield, Peter R.; Buckles, Virginia; Morris, John C.; Benzinger, Tammie. LS.
2014-01-01
Amyloid imaging is a valuable tool for research and diagnosis in dementing disorders. As positron emission tomography (PET) scanners have limited spatial resolution, measured signals are distorted by partial volume effects. Various techniques have been proposed for correcting partial volume effects, but there is no consensus as to whether these techniques are necessary in amyloid imaging, and, if so, how they should be implemented. We evaluated a two-component partial volume correction technique and a regional spread function technique using both simulated and human Pittsburgh compound B (PiB) PET imaging data. Both correction techniques compensated for partial volume effects and yielded improved detection of subtle changes in PiB retention. However, the regional spread function technique was more accurate in application to simulated data. Because PiB retention estimates depend on the correction technique, standardization is necessary to compare results across groups. Partial volume correction has sometimes been avoided because it increases the sensitivity to inaccuracy in image registration and segmentation. However, our results indicate that appropriate PVC may enhance our ability to detect changes in amyloid deposition. PMID:25485714
Fractal Correction of Well Logging Curves
[Anonymous]
2001-01-01
It is always significant for the assessment and evaluation of oil-bearing layers, and especially for well logging data processing and the interpretation of non-marine oil beds, to obtain more accurate physical properties in thin and interbedded layers. This paper presents a definition of measures; the measure exhibits a power-law relation with the corresponding scale, as described by fractal theory. Thus, logging curves can be reconstructed according to this power-law relation. The method uses the local structure near concurrent points to compensate for the averaging effect of logging probes and for measurement errors. As an example, the deep and medium induced conductivity (IMPH and IDPH) curves in ODP Leg 127 Hole 797C are reconstructed or corrected. Comparison of the corrected curves with the original ones shows that the corrected curves suffer less from adjacent-bed effects. Moreover, the power spectra of the corrected well logging curves contain more high-resolution components than the original ones. Thus, the fractal correction method gives well logging better resolution for thin beds.
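The power-law relation between measure and scale that drives the reconstruction can be fitted by linear regression in log-log space. A generic sketch with synthetic data (not the paper's exact estimator):

```python
import math

def fit_power_law(scales, measures):
    """Fit M(s) = c * s**alpha by ordinary least squares on
    log M = log c + alpha * log s."""
    xs = [math.log(s) for s in scales]
    ys = [math.log(m) for m in measures]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    alpha = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    c = math.exp(ybar - alpha * xbar)
    return c, alpha
```

On exact power-law data the regression recovers the prefactor and exponent; on noisy logging data it returns the least-squares fit used for reconstruction.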
Refraction corrections for surveying
Lear, W. M.
1980-01-01
Optical measurements of range and elevation angles are distorted by refraction of Earth's atmosphere. Theoretical discussion of effect, along with equations for determining exact range and elevation corrections, is presented in report. Potentially useful in optical site surveying and related applications, analysis is easily programmed on pocket calculator. Input to equation is measured range and measured elevation; output is true range and true elevation.
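A standard correction of this kind, for the combined effect of Earth curvature and atmospheric refraction on a sighted height, is (1 - k)d²/(2R) with refraction coefficient k ≈ 0.13. This is the textbook surveying formula, sketched here as an illustration rather than the report's own equations:

```python
def curvature_refraction_correction(distance_m, k=0.13, R=6_371_000.0):
    """Combined Earth-curvature and atmospheric-refraction correction (in
    metres) to add to a sighted height over `distance_m` of horizontal
    distance. k is the refraction coefficient (~0.13 in standard air)."""
    return (1 - k) * distance_m ** 2 / (2 * R)
```

The correction grows quadratically with distance: roughly 7 cm at 1 km, and four times that at 2 km.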
Renormalons and Power Corrections
Beneke, Martin
2000-01-01
Even for short-distance dominated observables the QCD perturbation expansion is never complete. The divergence of the expansion through infrared renormalons provides formal evidence of this fact. In this article we review how this apparent failure can be turned into a useful tool to investigate power corrections to hard processes in QCD.
Erler, Jens
2009-01-01
Radiative corrections to parity violating deep inelastic electron scattering are reviewed including a discussion of the renormalization group evolution of the weak mixing angle. Recently obtained results on hypothetical Z' bosons - for which parity violating observables play an important role - are also presented.
General forecasting correcting formula
Harin, Alexander
2009-01-01
A general forecasting correcting formula is created as a framework for long-term, standardized forecasts. The formula provides new forecasting resources and new possibilities for extending forecasting, including economic forecasting, into the areas of municipal needs, medium-sized and small businesses, and even individual forecasting.
1998-01-01
To err is human. Since the 1960s, most second language teachers and language theorists have regarded errors as natural and inevitable in the language learning process. Instead of regarding them as terrible and disappointing, teachers have come to realize their value. This paper considers these values, analyzes some errors, and proposes some effective correction techniques.
NN,
1953-01-01
In the ”Directions and Hints” for collaborators in Flora Malesiana, which has been forwarded to all collaborators, two corrections should be made, viz: 1) p. 12; Omit the explanatory notes under Jamaica Plain, Mass., and Cambridge, Mass. 2) p. 13; Add as number 12a; Stockholm, Paleobotaniska Avdelni
DNA barcode data accurately assign higher spider taxa
Jonathan A. Coddington
2016-07-01
Full Text Available The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios “barcodes” (whether single or multiple, organelle or nuclear loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families—taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However
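The threshold rule described above can be sketched in a few lines. The PIdent cutoffs (>95 for genus, ≥91 for family) are the ones reported in the abstract; the hit tuples, field layout, and best-hit tie-breaking are illustrative assumptions:

```python
def assign_higher_taxa(top_hits, genus_threshold=95.0, family_threshold=91.0):
    """Assign genus/family from the best BLAST hit using PIdent thresholds:
    genus if PIdent > 95, family if PIdent >= 91 (values from the abstract).
    `top_hits` is a list of (pident, genus, family) tuples.
    Returns (genus_or_None, family_or_None)."""
    if not top_hits:
        return None, None
    pident, genus, family = max(top_hits, key=lambda h: h[0])
    assigned_genus = genus if pident > genus_threshold else None
    assigned_family = family if pident >= family_threshold else None
    return assigned_genus, assigned_family

# 96.2% identity: confident at both genus and family level.
genus, family = assign_higher_taxa(
    [(96.2, "Araneus", "Araneidae"), (93.0, "Larinioides", "Araneidae")])

# 92.5% identity: too low for a genus call, good enough for a family call.
weak_genus, weak_family = assign_higher_taxa([(92.5, "Larinioides", "Araneidae")])
```
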
Accurately determining log and bark volumes of saw logs using high-resolution laser scan data
R. Edward Thomas; Neal D. Bennett
2014-01-01
Accurately determining the volume of logs and bark is crucial to estimating the total expected value recovery from a log. Knowing the correct size and volume of a log helps to determine which processing method, if any, should be used on a given log. However, applying volume estimation methods consistently can be difficult. Errors in log measurement and oddly shaped...
Geometric correction of APEX hyperspectral data
Vreys Kristin
2016-03-01
Full Text Available Hyperspectral imagery originating from airborne sensors is nowadays widely used for the detailed characterization of land surface. The correct mapping of the pixel positions to ground locations largely contributes to the success of the applications. Accurate geometric correction, also referred to as “orthorectification”, is thus an important prerequisite which must be performed prior to using airborne imagery for evaluations like change detection, or mapping or overlaying the imagery with existing data sets or maps. A so-called “ortho-image” provides an accurate representation of the earth’s surface, having been adjusted for lens distortions, camera tilt and topographic relief. In this paper, we describe the different steps in the geometric correction process of APEX hyperspectral data, as applied in the Central Data Processing Center (CDPC) at the Flemish Institute for Technological Research (VITO), Mol, Belgium. APEX ortho-images are generated through direct georeferencing of the raw images, thereby making use of sensor interior and exterior orientation data, boresight calibration data and elevation data. They can be referenced to any user-specified output projection system and can be resampled to any output pixel size.
Corrected transposition of the great arteries
Choi, Young Hi; Park, Jae Hyung; Han, Man Chung [Seoul National University College of Medicine, Seoul (Korea, Republic of)
1981-12-15
The corrected transposition of the great arteries is an unusual congenital cardiac malformation, which consists of transposition of the great arteries and ventricular inversion, and which is caused by abnormal development of the conotruncus and of ventricular looping. The high frequency of associated cardiac malformations makes it difficult to obtain an accurate morphologic diagnosis. A total of 18 cases of corrected transposition of the great arteries is presented, in which cardiac catheterization and angiocardiography were done at the Department of Radiology, Seoul National University Hospital between September 1976 and June 1981. The clinical, radiographic, and operative findings, with emphasis on the angiocardiographic findings, were analyzed. The results are as follows: 1. Among 18 cases, 13 cases have normal cardiac position, 2 cases have dextrocardia with situs solitus, 2 cases have dextrocardia with situs inversus and 1 case has levocardia with situs inversus. 2. Segmental sets are (S, L, L) in 15 cases and (I, D, D) in 3 cases, and there is no exception to the loop rule. 3. Side-by-side interrelationships of both ventricles and both semilunar valves are noticed in 10 and 12 cases respectively. 4. Subaortic type conus is noted in all 18 cases. 5. Associated cardiac malformations are VSD in 14 cases, PS in 11, PDA in 3, PFO in 3, ASD in 2, right aortic arch in 2, and tricuspid insufficiency, mitral prolapse, persistent left SVC and persistent right SVC in 1 case each. 6. For accurate diagnosis of corrected TGA, selective biventriculography using biplane cineradiography is an essential procedure.
Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.
Fuchs, Franz G; Hjelmervik, Jon M
2016-02-01
A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results.
A fast and accurate FPGA based QRS detection system.
Shukla, Ashish; Macchiarulo, Luca
2008-01-01
An accurate Field Programmable Gate Array (FPGA) based ECG analysis system is described in this paper. The design, based on a popular software-based QRS detection algorithm, calculates the threshold value for the next peak detection cycle from the median of eight previously detected peaks. The hardware design has accuracy in excess of 96% in detecting beats correctly when tested with a subset of five 30-minute data records obtained from the MIT-BIH Arrhythmia database. The design, implemented using a proprietary design tool (System Generator), is an extension of our previous work; it uses 76% of the resources available in a small-sized FPGA device (Xilinx Spartan xc3s500), has a higher detection accuracy than our previous design, and takes almost half the analysis time of the software-based approach.
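The median-of-eight thresholding rule can be sketched as follows. The threshold fraction (0.5) and the refractory period are assumptions for illustration, not values taken from the paper, and the detector is a toy peak-picker rather than the full QRS algorithm:

```python
from statistics import median

def detect_qrs(signal, fs, init_threshold, refractory_s=0.2):
    """Toy adaptive-threshold QRS detector: the detection threshold is a fixed
    fraction of the median of the last eight accepted peak amplitudes, echoing
    the median-based rule described in the abstract."""
    peaks, recent = [], []
    threshold = init_threshold
    refractory = int(refractory_s * fs)   # skip samples right after a beat
    i = 1
    while i < len(signal) - 1:
        s = signal[i]
        if s > threshold and s >= signal[i - 1] and s >= signal[i + 1]:
            peaks.append(i)
            recent = (recent + [s])[-8:]        # keep the last eight peaks
            threshold = 0.5 * median(recent)    # assumed fraction: 0.5
            i += refractory
        else:
            i += 1
    return peaks

# Synthetic ECG: one unit-height spike per second at fs = 250 Hz.
fs = 250
sig = [0.0] * (10 * fs)
for beat in range(10):
    sig[beat * fs + 50] = 1.0
beats = detect_qrs(sig, fs, init_threshold=0.3)
```

Using a median rather than a mean makes the threshold robust to a single anomalously large or small peak, which matters for noisy ambulatory recordings.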
Fast and accurate determination of modularity and its effect size
Treviño, Santiago; Del Genio, Charo I; Bassler, Kevin E
2014-01-01
We present a fast spectral algorithm for community detection in complex networks. Our method searches for the partition with the maximum value of the modularity via the interplay of several refinement steps that include both agglomeration and division. We validate the accuracy of the algorithm by applying it to several real-world benchmark networks. On all these, our algorithm performs as well or better than any other known polynomial scheme. This allows us to extensively study the modularity distribution in ensembles of Erd\\H{o}s-R\\'enyi networks, producing theoretical predictions for means and variances inclusive of finite-size corrections. Our work provides a way to accurately estimate the effect size of modularity, providing a $z$-score measure of it and enabling a more informative comparison of networks with different numbers of nodes and links.
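The modularity measure and its z-score effect size can be written down compactly. The graph, partition, and ensemble statistics below are illustrative placeholders (the paper derives the ensemble mean and variance analytically, with finite-size corrections; here they are just numbers fed to the formula):

```python
def modularity(adj, partition):
    """Newman-Girvan modularity Q = sum_c [ L_c/m - (d_c/(2m))^2 ] for an
    undirected graph given as a node -> neighbor-set dict and a
    node -> community map."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2.0
    q = 0.0
    for c in set(partition.values()):
        nodes = [v for v in adj if partition[v] == c]
        # intra-community edges (each counted twice in the double loop)
        l_c = sum(1 for v in nodes for w in adj[v] if partition[w] == c) / 2.0
        d_c = sum(len(adj[v]) for v in nodes)  # total degree of the community
        q += l_c / m - (d_c / (2.0 * m)) ** 2
    return q

def modularity_z_score(q_observed, q_ensemble_mean, q_ensemble_std):
    """Effect size: how many ensemble standard deviations the observed
    modularity lies above the random-graph expectation."""
    return (q_observed - q_ensemble_mean) / q_ensemble_std

# Two 3-cliques joined by a single edge: a clearly modular toy graph.
adj = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
    3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4},
}
part = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
q = modularity(adj, part)
z = modularity_z_score(q, 0.20, 0.02)  # ensemble mean/std are placeholders
```

For this toy graph Q = 5/14 ≈ 0.357, and the z-score expresses how far that exceeds what comparable random graphs achieve.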
Accurate Iterative Analysis of the K-V Equations
Anderson, O.A.
2005-05-09
Those working with alternating-gradient (A-G) systems look for simple, accurate ways to analyze A-G performance for matched beams. The useful K-V equations are easily solved in the smooth approximation, but this approximate solution becomes quite inaccurate for applications with large focusing fields and phase advances. Previous efforts to improve the accuracy have tended to be indirect or complex. The generalized equations presented previously by the author gave better accuracy in a simple explicit format; however, the method used to derive those results (expansion in powers of a small parameter) was complex and hard to follow, and reference 7 gave only low-order correction formulas. The present paper uses a straightforward iteration method and obtains equations of higher order than those in the previous paper.
Accurate tracking control in LOM application
[no author listed]
2003-01-01
The fabrication of an accurate prototype directly from a CAD model in a short time depends on accurate tracking control and reference trajectory planning in Laminated Object Manufacturing (LOM). An improvement in contour accuracy is achieved by introducing a tracking controller and a trajectory generation policy. A model of the X-Y positioning system of the LOM machine is developed as the design basis of the tracking controller. A Zero Phase Error Tracking Controller (ZPETC) is used to eliminate single-axis following error and thus reduce the contour error. The simulation is developed in a Matlab model based on a retrofitted LOM machine, and satisfactory results are obtained.
Krasny, Mieczyslaw Witold
2008-01-01
Early deep inelastic scattering (DIS) experiments at SLAC discovered partons, identified them as quarks and gluons, and restricted the set of the candidate theories for strong interactions to those exhibiting the asymptotic freedom property. The next generation DIS experiments at FNAL and CERN confirmed the predictions of QCD for the size of the scaling violation effects in the nucleon structure functions. The QCD fits to their data resulted in determining the momentum distributions of the point-like constituents of nucleons. Interpretation of data coming from all these experiments and, in the case of the SLAC experiments, even an elaboration of the running strategies, would not have been possible without a precise understanding of the electromagnetic radiative corrections. In this note I recollect the important milestones, achieved in the period preceding the HERA era, in the high precision calculations of the radiative corrections to DIS, and in the development of the methods of their experimental control. ...
Aberration Corrected Emittance Exchange
Nanni, Emilio A
2015-01-01
Full exploitation of emittance exchange (EEX) requires aberration-free performance of a complex imaging system including active radio-frequency (RF) elements which can add temporal distortions. We investigate the performance of an EEX line where the exchange occurs between two dimensions with normalized emittances which differ by orders of magnitude. The transverse emittance is exchanged into the longitudinal dimension using a double dog-leg emittance exchange setup with a 5 cell RF deflector cavity. Aberration correction is performed on the four most dominant aberrations. These include temporal aberrations that are corrected with higher order magnetic optical elements located where longitudinal and transverse emittance are coupled. We demonstrate aberration-free performance of emittances differing by 4 orders of magnitude, i.e. an initial transverse emittance of $\\epsilon_x=1$ pm-rad is exchanged with a longitudinal emittance of $\\epsilon_z=10$ nm-rad.
Nawratil, Georg
2014-01-01
In 1898, Ernest Duporcq stated a famous theorem about rigid-body motions with spherical trajectories, without giving a rigorous proof. Today, this theorem is again of interest, as it is strongly connected with the topic of self-motions of planar Stewart–Gough platforms. We discuss Duporcq's theorem from this point of view and demonstrate that it is not correct. Moreover, we also present a revised version of this theorem. PMID:25540467
Congenitally corrected transposition
Debich-Spicer Diane
2011-05-01
Full Text Available Abstract Congenitally corrected transposition is a rare cardiac malformation characterized by the combination of discordant atrioventricular and ventriculo-arterial connections, usually accompanied by other cardiovascular malformations. Incidence has been reported to be around 1/33,000 live births, accounting for approximately 0.05% of congenital heart malformations. Associated malformations may include interventricular communications, obstructions of the outlet from the morphologically left ventricle, and anomalies of the tricuspid valve. The clinical picture and age of onset depend on the associated malformations, with bradycardia, a single loud second heart sound and a heart murmur being the most common manifestations. In the rare cases where there are no associated malformations, congenitally corrected transposition can lead to progressive atrioventricular valvar regurgitation and failure of the systemic ventricle. The diagnosis can also be made late in life when the patient presents with complete heart block or cardiac failure. The etiology of congenitally corrected transposition is currently unknown, although an increased incidence among families with previous cases of congenitally corrected transposition has been reported. Diagnosis can be made by fetal echocardiography, but is more commonly made postnatally with a combination of clinical signs and echocardiography. The anatomical delineation can be further assessed by magnetic resonance imaging and catheterization. The differential diagnosis centres on assessing whether the patient presents with isolated malformations or as part of a spectrum. Surgical management consists of repair of the associated malformations, or redirection of the systemic and pulmonary venous return associated with an arterial switch procedure, the so-called double switch approach. Prognosis is defined by the associated malformations and by the timing and approach to palliative surgical care.
Accurate Switched-Voltage voltage averaging circuit
金光, 一幸; 松本, 寛樹
2006-01-01
Abstract This paper proposes an accurate Switched-Voltage (SV) voltage averaging circuit, presented to compensate for NMOS mismatch error in a MOS differential-type voltage averaging circuit. The proposed circuit consists of a voltage averaging circuit and an SV sample/hold (S/H) circuit. It can operate using non-overlapping three-phase clocks. Performance of the circuit is verified by PSpice simulations.
Accurate overlaying for mobile augmented reality
Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.
1999-01-01
Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency rendering.
Herschel SPIRE FTS telescope model correction
Hopwood, Rosalind; Polehampton, Edward T; Valtchanov, Ivan; Benielli, Dominique; Imhof, Peter; Lim, Tanya; Lu, Nanyao; Marchili, Nicola; Pearson, Chris P; Swinyard, Bruce M
2014-01-01
Emission from the Herschel telescope is the dominant source of radiation for the majority of SPIRE Fourier transform spectrometer (FTS) observations, despite the exceptionally low emissivity of the primary and secondary mirrors. Accurate modelling and removal of the telescope contribution is, therefore, an important and challenging aspect of the FTS calibration and data reduction pipeline. A dust-contaminated telescope model with time-invariant mirror emissivity was adopted before the Herschel launch. However, measured FTS spectra show a clear evolution of the telescope contribution over the mission and a strong need for a correction to the standard telescope model in order to reduce the residual background (of up to 7 Jy) in the final data products. Systematic changes in observations of dark sky, taken over the course of the mission, provide a measure of the evolution of the observed telescope emission relative to the telescope model. These dark sky observations have been used to derive a time-dependent correction to the tel...
Quality metric for accurate overlay control in <20nm nodes
Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki
2013-04-01
The semiconductor industry is moving toward 20nm nodes and below. As the Overlay (OVL) budget is getting tighter at these advanced nodes, the importance in the accuracy in each nanometer of OVL error is critical. When process owners select OVL targets and methods for their process, they must do it wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Going towards the 20nm nodes and below, TMU will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named `Qmerit' for its imaging based OVL (IBO) targets, which is obtained on the-fly for each OVL measurement point in X & Y. This Qmerit score will enable the process owners to select compatible targets which provide accurate OVL values for their process and thereby improve their yield. Together with K-T Analyzer's ability to detect the symmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.
Accurate phylogenetic classification of DNA fragments based onsequence composition
McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore
2006-05-01
Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome data sets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
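The idea of composition-based classification can be shown with a much-simplified stand-in: a nearest-centroid classifier on k-mer frequency vectors rather than PhyloPythia's actual SVM machinery. The clade names and "genomes" below are synthetic toys with exaggerated GC bias:

```python
import random
from collections import Counter
from itertools import product
from math import sqrt

def kmer_profile(seq, k=4):
    """Normalized k-mer frequency vector: a composition signature of a DNA sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: counts.get(kmer, 0) / total
            for kmer in map("".join, product("ACGT", repeat=k))}

def distance(p, q):
    """Euclidean distance between two k-mer profiles."""
    return sqrt(sum((p[kmer] - q[kmer]) ** 2 for kmer in p))

def classify(fragment, training):
    """Assign the fragment to the clade whose composition signature is nearest."""
    prof = kmer_profile(fragment)
    return min(training, key=lambda clade: distance(prof, training[clade]))

# Toy "genomes" with strongly different composition biases (GC-rich vs AT-rich).
random.seed(1)
gc_rich = "".join(random.choices("ACGT", weights=[1, 4, 4, 1], k=5000))
at_rich = "".join(random.choices("ACGT", weights=[4, 1, 1, 4], k=5000))
training = {"cladeGC": kmer_profile(gc_rich), "cladeAT": kmer_profile(at_rich)}

# A 1 kb fragment drawn with the GC-rich bias: composition alone identifies it.
fragment = "".join(random.choices("ACGT", weights=[1, 4, 4, 1], k=1000))
label = classify(fragment, training)
```

Real genomes differ far more subtly than these toys, which is why PhyloPythia trains a discriminative model on hundreds of genomes instead of comparing raw profiles.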
Automated numerical calculation of Sagnac correction for photonic paths
Šlapák, Martin; Vojtěch, Josef; Velc, Radek
2017-04-01
Relativistic effects must be taken into account for highly accurate time and frequency transfer. The most important is the Sagnac correction, which, owing to the Earth's rotation, is also a source of non-reciprocity between the two directions of any transfer. In practice, not all important parameters, such as the exact trajectory of the optical fibre path (leased fibres), are known with sufficient precision, so it is necessary to estimate lower and upper bounds on the computed corrections. The presented approach deals with uncertainty in the knowledge of detailed fibre paths, and also with complex paths containing loops. We made the whole process of calculating the Sagnac correction fully automated.
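The core of such a calculation can be sketched with the standard expression Δt = 2ωA_E/c², where A_E is the equatorial projection of the area swept by the geocentric radius vector along the signal path. The sketch below assumes a spherical Earth, ignores path height, and approximates the path by straight chords between waypoints, so it is a lower-bound-style estimate rather than the paper's automated pipeline:

```python
import math

OMEGA_E = 7.2921150e-5   # Earth rotation rate [rad/s]
C = 299_792_458.0        # speed of light [m/s]
R_E = 6_378_137.0        # equatorial radius [m]; path height ignored here

def ecef_xy(lat_deg, lon_deg):
    """Equatorial-plane (x, y) coordinates of a point on a spherical Earth."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    r = R_E * math.cos(lat)
    return r * math.cos(lon), r * math.sin(lon)

def sagnac_correction(waypoints):
    """One-way Sagnac correction (seconds) for a signal path given as
    (lat, lon) waypoints: dt = (2*omega/c^2) * A_E, with A_E the signed
    equatorial projection of the swept area (positive for eastward paths)."""
    pts = [ecef_xy(lat, lon) for lat, lon in waypoints]
    area = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
        area += 0.5 * (x1 * y2 - x2 * y1)   # signed triangle with the geocenter
    return 2.0 * OMEGA_E * area / C**2

# ~1100 km eastward hop along the equator: a correction of a few nanoseconds,
# with exactly the opposite sign for the reverse direction (non-reciprocity).
dt_east = sagnac_correction([(0.0, 0.0), (0.0, 10.0)])
dt_west = sagnac_correction([(0.0, 10.0), (0.0, 0.0)])
```

The sign flip between the two directions is the non-reciprocity the abstract refers to: averaging the two one-way measurements cancels the Sagnac term, while one-way transfer must correct for it explicitly.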
The dynamic correction of collimation errors of CT slicing pictures
LIU Ya-xiong; Sekou Sing-are; LI Di-chen; LU Bing-heng
2006-01-01
To eliminate the motion artifacts in CT images caused by patient motion and other related errors, two kinds of correctors (A type and U type) are proposed to monitor the scanning process and correct the motion artifacts in the original images via reverse geometric transformations such as reverse scaling, translation, rotation and offsetting. The results confirm that the correction method, with either corrector, can improve the accuracy and reliability of CT images, helping to eliminate or reduce motion artifacts and to correct other static errors and image processing errors. This provides a foundation for the 3D reconstruction and accurate fabrication of customized implants.
Exemplar-based human action pose correction.
Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen
2014-07-01
The launch of Xbox Kinect has built a very successful computer vision product and made a big impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition. The accurate estimation of human poses from the depth image is universally a critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporation of pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over the contemporary approaches, including what is delivered by the current Kinect system. Our experiments for the facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.
The FLUKA Code: An Accurate Simulation Tool for Particle Therapy.
Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis
2016-01-01
Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both (4)He and (12)C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features.
Electroweak Corrections to the Neutralino Pair Production at CERN LHC
Ahmadov, A I
2013-01-01
We apply the leading and sub-leading electroweak (EW) corrections to the Drell-Yan process of neutralino pair production in proton-proton collisions, in order to calculate the effects of these corrections on neutralino pair production at the LHC. We analyze the dependence of the Born cross-sections for $pp\rightarrow\widetilde\chi_{i}^{0}\widetilde\chi_{j}^{0}$, and of the EW corrections to this process, on the center-of-mass energy $\sqrt s$, on the $M_2$-$\mu$ mass plane, and on the squark mass for three different scenarios. The numerical results show that the relative correction can reach the level of a few tens of percent as the center-of-mass energy increases, and that the evaluation of EW corrections is a crucial task for all accurate measurements of neutralino pair production processes.
A New Geometrical Correction Method for Inaccessible Area Imagery
Lee Hong-shik; Park Jun-ku; Lim Sam-sung
2003-01-01
The geometric correction of a satellite image is performed by making a systematic correction with satellite ephemerides and attitude angles, followed by employing Ground Control Points (GCPs) or Digital Elevation Models (DEMs). In a remote or inaccessible area, however, GCPs cannot be surveyed and can be obtained only by reading maps, which is not accurate in practice. In this study, we applied the systematic correction process to the inaccessible area and the precise geometric correction process, using GCPs, to the adjacent accessible area. We then analyzed the correlation between the two geo-referenced Korea Multipurpose Satellite (KOMPSAT-1 EOC) images. A new geometric correction for the inaccessible-area imagery is achieved by applying this correlation to the inaccessible imagery. With this new method, the accuracy of the inaccessible-area imagery is significantly improved, both absolutely and relatively.
Chicago aberration correction work
Beck, V.D., E-mail: vnlbeck@earthlink.net [1 Hobby Drive, Ridgefield, CT 06877-01922 (United States)
2012-12-15
The author describes from his personal involvement the many improvements to electron microscopy Albert Crewe and his group brought by minimizing the effects of aberrations. The Butler gun was developed to minimize aperture aberrations in a field emission electron gun. In the 1960s, Crewe anticipated using a spherical aberration corrector based on Scherzer's design. Since the tolerances could not be met mechanically, a method of moving the center of the octopoles electrically was developed by adding lower order multipole fields. Because the corrector was located about 15 cm ahead of the objective lens, combination aberrations would arise with the objective lens. This fifth order aberration would then limit the aperture of the microscope. The transformation of the off axis aberration coefficients of a round lens was developed and a means to cancel anisotropic coma was developed. A new method of generating negative spherical aberration was invented using the combination aberrations of hexapoles. Extensions of this technique to higher order aberrations were developed. An electrostatic electron mirror was invented, which allows the cancellation of primary spherical aberration and first order chromatic aberration. A reduction of chromatic aberration by two orders of magnitude was demonstrated using such a system. -- Highlights: • Crewe and his group made significant advances in aberration correction and reduction. • A deeper understanding of the quadrupole octopole corrector was developed. • A scheme to correct spherical aberration using hexapoles was developed. • Chromatic aberration was corrected using a uniform field mirror.
HOW CORRECTION CAN MOTIVATE LEARNING
1996-01-01
Introduction: Mistakes and their correction generally follow one another in the language classroom. Most teachers think that correction is a necessary part of teaching, while most students agree that making mistakes is a necessary part of learning. Although both teachers and students maintain that correction and mistakes are necessary, we often find that some correction helps students' learning and some does not. Correction can make students lose confidence and interest in learning. In order to find out more about why this happens, I surveyed students' attitudes towards mistakes and correction.
Experimental repetitive quantum error correction.
Schindler, Philipp; Barreiro, Julio T; Monz, Thomas; Nebendahl, Volckmar; Nigg, Daniel; Chwalla, Michael; Hennrich, Markus; Blatt, Rainer
2011-05-27
The computational potential of a quantum processor can only be unleashed if errors during a quantum computation can be controlled and corrected for. Quantum error correction works if imperfections of quantum gate operations and measurements are below a certain threshold and corrections can be applied repeatedly. We implement multiple quantum error correction cycles for phase-flip errors on qubits encoded with trapped ions. Errors are corrected by a quantum-feedback algorithm using high-fidelity gate operations and a reset technique for the auxiliary qubits. Up to three consecutive correction cycles are realized, and the behavior of the algorithm for different noise environments is analyzed.
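A minimal classical caricature of repetitive error correction, a majority-vote repetition code versus an unprotected qubit with phase flips tracked as classical bits, can illustrate why repeated correction cycles help. This is an illustrative sketch, not the trapped-ion implementation:

```python
import random

def run(trials, cycles, p, protected, seed=1):
    """Monte Carlo of repeated phase-flip correction with a 3-qubit
    repetition code (majority vote each cycle) vs. a bare qubit.
    Phase flips are tracked as classical bits for simplicity."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        if protected:
            bits = [0, 0, 0]              # encoded logical 0
            for _ in range(cycles):
                # Each physical qubit flips independently with probability p.
                bits = [b ^ (rng.random() < p) for b in bits]
                # Syndrome measurement + correction: reset to the majority.
                maj = 1 if sum(bits) >= 2 else 0
                bits = [maj] * 3
            failures += bits[0]           # logical flip survived correction
        else:
            b = 0
            for _ in range(cycles):
                b ^= rng.random() < p
            failures += b
    return failures / trials

protected = run(5000, 3, 0.05, True)
bare = run(5000, 3, 0.05, False)
```

Per cycle the encoded qubit fails only when two or more physical flips occur (probability about 3p² for small p), so its logical error rate sits well below the bare qubit's even after several cycles.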
Bahr, Patrick; Hutton, Graham
2015-01-01
In this article, we present a new approach to the problem of calculating compilers. In particular, we develop a simple but general technique that allows us to derive correct compilers from high-level semantics by systematic calculation, with all details of the implementation of the compilers...... falling naturally out of the calculation process. Our approach is based upon the use of standard equational reasoning techniques, and has been applied to calculate compilers for a wide range of language features and their combination, including arithmetic expressions, exceptions, state, various forms...
Bianchi, M
1998-12-01
A thorough evaluation of both urethral and penile malformations is mandatory for choosing the best surgical treatment of patients with hypospadias. The site and the size of the urethral meatus, and the presence of a chordee and of a velamentous distal urethra, must be carefully assessed. In distal (glandular and coronal) hypospadias, meatal advancement with glanduloplasty is the treatment of choice. In proximal hypospadias with chordee, the transverse preputial island flap according to Duckett's technique allows a one-stage hypospadias repair. Awareness of the possible psychological impact of genital malformations in childhood recommends early correction of hypospadias, if possible during the first year of life.
[Correction of paralytic lagophthalmos].
Iskusnykh, N S; Grusha, Y O
2015-01-01
Current options for correction of paralytic lagophthalmos are either temporary (external eyelid weight placement, hyaluronic acid gel or botulinum toxin A injection) or permanent (various procedures for narrowing of the palpebral fissure, upper eyelid weights or spring implantation). Neuroplastic surgery (cross-facial nerve grafting, nerve anastomoses) and muscle transposition surgery are not effective enough. The majority of elderly and medically compromised patients should not be considered for such complicated and long procedures. Upper eyelid weight implantation thus appears the most reliable and simple treatment.
Jensen, Rasmus Ramsbøl; Benjaminsen, Claus; Larsen, Rasmus
2015-01-01
The application of motion tracking is wide, including: industrial production lines, motion interaction in gaming, computer-aided surgery and motion correction in medical brain imaging. Several devices for motion tracking exist using a variety of different methodologies. In order to use such devices...... offset and tracking noise in medical brain imaging. The data are generated from a phantom mounted on a rotary stage and have been collected using a Siemens High Resolution Research Tomograph for positron emission tomography. During acquisition the phantom was tracked with our latest tracking prototype...
Gay, Guillaume; Courtheoux, Thibault; Reyes, Céline; Tournier, Sylvie; Gachet, Yannick
2012-03-19
In fission yeast, erroneous attachments of spindle microtubules to kinetochores are frequent in early mitosis. Most are corrected before anaphase onset by a mechanism involving the protein kinase Aurora B, which destabilizes kinetochore microtubules (ktMTs) in the absence of tension between sister chromatids. In this paper, we describe a minimal mathematical model of fission yeast chromosome segregation based on the stochastic attachment and detachment of ktMTs. The model accurately reproduces the timing of correct chromosome biorientation and segregation seen in fission yeast. Prevention of attachment defects requires both appropriate kinetochore orientation and an Aurora B-like activity. The model also reproduces abnormal chromosome segregation behavior (caused by, for example, inhibition of Aurora B). It predicts that, in metaphase, merotelic attachment is prevented by a kinetochore orientation effect and corrected by an Aurora B-like activity, whereas in anaphase, it is corrected through unbalanced forces applied to the kinetochore. These unbalanced forces are sufficient to prevent aneuploidy.
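The stochastic attachment/detachment picture can be caricatured with a two-state Markov simulation of a single kinetochore-microtubule attachment site. The rates below are hypothetical and the detachment rate stands in for the model's Aurora-B-like destabilization:

```python
import random

def simulate(k_attach, k_detach, dt=0.01, steps=20000, seed=2):
    """Discrete-time Markov sketch of one kinetochore-MT site that
    attaches with rate k_attach and detaches with rate k_detach."""
    rng = random.Random(seed)
    attached = False
    time_attached = 0
    for _ in range(steps):
        if attached:
            if rng.random() < k_detach * dt:
                attached = False
        else:
            if rng.random() < k_attach * dt:
                attached = True
        time_attached += attached
    return time_attached / steps

# A tension-bearing (stabilized) attachment: low detachment rate.
frac = simulate(k_attach=1.0, k_detach=0.25)
# At steady state the attached fraction approaches k_a / (k_a + k_d) = 0.8;
# raising k_detach (strong Aurora-B-like activity) drives it down.
```

In the paper's full model many such sites evolve in parallel and are coupled to forces on the kinetochores; this sketch only shows the elementary stochastic kinetics.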
Atmospheric correction with multi-angle polarimeters: information content assessment
Knobelspiesse, K. D.; Chowdhary, J.; Franz, B. A.
2016-12-01
Accurate ocean color remote sensing requires an appropriate atmospheric correction, to compensate for the atmosphere so that ocean geophysical properties can be determined. At optical wavelengths, atmospheric aerosols are the largest contributor to atmospheric correction uncertainty. In canonical missions such as SeaWiFS (Sea-Viewing Wide Field-of-View Sensor) and MODIS (Moderate Resolution Imaging Spectroradiometer), atmospheric correction uses observations in the Near Infrared (NIR) to determine aerosol optical properties, which are extrapolated to shorter wavelengths in the visible (VIS), where they are used to correct for the aerosol signal. This works because ocean reflectance is very small in the NIR, but the technique is limited by the ability to determine aerosol optical properties in only that spectral range. The Ocean Color Instrument (OCI) on the upcoming NASA PACE (Plankton, Aerosol, Cloud, and ocean Ecosystem) mission will have greater spectral sensitivity and range than previous instruments, requiring atmospheric correction technique improvements. For this reason, PACE is considering an additional instrument: a multi-angle, multi-spectral polarimeter. Such an instrument could provide more information about aerosols and significantly improve atmospheric correction. However, the atmospheric correction process is complex and nonlinear, and understanding the relationship between instrument characteristics and atmospheric correction success can be difficult without quantitative tools. We present a toolset we have developed, which couples radiative transfer simulations with information content assessment tools, to predict and explore the atmospheric correction benefit of different multi-angle polarimeter designs.
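The NIR-to-VIS extrapolation step can be sketched as fitting a power law to the aerosol reflectance in two NIR bands where the water is assumed black. The band centers and reflectance values below are hypothetical:

```python
import math

def aerosol_powerlaw(nir_bands, nir_rho):
    """Fit rho = A * lambda**(-alpha) to two NIR reflectances (where the
    water-leaving signal is assumed negligible, so it is all aerosol)."""
    (l1, l2), (r1, r2) = nir_bands, nir_rho
    alpha = math.log(r1 / r2) / math.log(l2 / l1)
    A = r1 * l1 ** alpha
    return A, alpha

def correct(total_rho, wavelength, A, alpha):
    """Remove the extrapolated aerosol reflectance at a visible band."""
    return total_rho - A * wavelength ** (-alpha)

# Hypothetical NIR reflectances at 765 and 865 nm.
A, alpha = aerosol_powerlaw((765.0, 865.0), (0.02, 0.018))
# Correct a hypothetical total reflectance at the 443 nm blue band.
water = correct(0.05, 443.0, A, alpha)
```

The polarimeter discussed in the abstract would constrain the aerosol model with far more than two degrees of freedom; this two-band power law only illustrates the canonical SeaWiFS/MODIS-style logic.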
Using Online Annotations to Support Error Correction and Corrective Feedback
Yeh, Shiou-Wen; Lo, Jia-Jiunn
2009-01-01
Giving feedback on second language (L2) writing is a challenging task. This research proposed an interactive environment for error correction and corrective feedback. First, we developed an online corrective feedback and error analysis system called "Online Annotator for EFL Writing". The system consisted of five facilities: Document Maker,…
Accurate estimation of indoor travel times
Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan
2014-01-01
The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose...... the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood---both for routes traveled as well as for sub-routes thereof. InTraTime...... allows to specify temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include...
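The route/time-of-day learning idea can be sketched with a simple lookup keyed on route and hour, falling back to pooled observations when a query has no exact match. The class and its names are illustrative, not the InTraTime API:

```python
from collections import defaultdict
from statistics import median

class TravelTimeEstimator:
    """Sketch in the spirit of InTraTime: learn travel times from
    position traces and answer queries conditioned on time-of-day."""
    def __init__(self):
        self.times = defaultdict(list)  # (origin, dest, hour) -> durations (s)

    def observe(self, origin, dest, hour, duration_s):
        self.times[(origin, dest, hour)].append(duration_s)

    def estimate(self, origin, dest, hour):
        obs = self.times.get((origin, dest, hour))
        if obs:
            return median(obs)
        # Fall back to observations from all hours for the same route.
        pooled = [d for (o, t, _), ds in self.times.items()
                  if (o, t) == (origin, dest) for d in ds]
        return median(pooled) if pooled else None

est = TravelTimeEstimator()
for d in (95, 100, 105):                     # hypothetical morning traces
    est.observe("ward-A", "lab-3", 9, d)
est.observe("ward-A", "lab-3", 14, 70)       # one afternoon trace
```

The real method also mines sub-routes and likelihoods and accepts generic position traces; the medians here only show the conditional-query shape.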
Accurate colorimetric feedback for RGB LED clusters
Man, Kwong; Ashdown, Ian
2006-08-01
We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
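The second-order temperature model can be sketched as a quadratic fit of a measured chromaticity coordinate against junction temperature, which a feedback controller could then invert. The calibration numbers below are hypothetical:

```python
import numpy as np

# Hypothetical calibration: chromaticity coordinate u' of a red LED measured
# at several junction temperatures; behaviour is assumed near-quadratic.
temps = np.array([25.0, 40.0, 55.0, 70.0])
u_meas = np.array([0.520, 0.523, 0.527, 0.532])

# Second-order fit, as the abstract suggests is sufficient.
coeffs = np.polyfit(temps, u_meas, 2)

def predict_u(t_junction):
    """Feed-forward colorimetric prediction at a given junction temperature."""
    return np.polyval(coeffs, t_junction)

u_at_60 = predict_u(60.0)
```

With one such fit per channel, an RGB controller can predict each cluster's chromaticity drift and re-balance drive currents to hold the mixed white point.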
Accurate guitar tuning by cochlear implant musicians.
Thomas Lu
Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.
Synthesizing Accurate Floating-Point Formulas
Ioualalen, Arnault; Martel, Matthieu
2013-01-01
Many critical embedded systems perform floating-point computations, yet their accuracy is difficult to assert and strongly depends on how formulas are written in programs. In this article, we focus on the synthesis of accurate formulas mathematically equal to the original formulas occurring in source codes. In general, an expression may be rewritten in many ways. To avoid any combinatorial explosion, we use an intermediate representation, called APEG, enabling us to rep...
Efficient Accurate Context-Sensitive Anomaly Detection
[No author listed]
2007-01-01
For program behavior-based anomaly detection, the only way to ensure accurate monitoring is to construct an efficient and precise program behavior model. A new program behavior-based anomaly detection model, called the combined pushdown automaton (CPDA) model, was proposed, based on static analysis of binary executables. The CPDA model incorporates the optimized call stack walk and code instrumentation techniques to gain complete context information. The proposed method can thereby detect more attacks, while retaining good performance.
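The context-sensitive idea, that a system call is judged together with the call stack under which it occurs, can be sketched with a simple set-based model (a much weaker stand-in for the CPDA; all names are illustrative):

```python
class StackAnomalyDetector:
    """Toy context-sensitive model: learn the set of (call-stack, syscall)
    pairs seen in training traces; anything unseen is flagged."""
    def __init__(self):
        self.allowed = set()

    def train(self, trace):
        for stack, syscall in trace:
            self.allowed.add((tuple(stack), syscall))

    def is_anomalous(self, stack, syscall):
        return (tuple(stack), syscall) not in self.allowed

det = StackAnomalyDetector()
det.train([(["main"], "open"),
           (["main", "read_config"], "read"),
           (["main"], "close")])

ok = det.is_anomalous(["main", "read_config"], "read")        # seen in training
attack = det.is_anomalous(["main", "read_config"], "execve")  # never seen here
```

Because the stack is part of the key, an `execve` that is legitimate in one calling context can still be flagged in another, which is precisely what a context-insensitive syscall-set model cannot do.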
Accurate Control of Josephson Phase Qubits
2016-04-14
Matthias Steffen, John M. Martinis, and Isaac L. Chuang, "Accurate control of Josephson phase qubits," Physical Review B 68, 224518 (2003). Center for Bits and Atoms and Department of Physics, MIT, Cambridge, Massachusetts 02139, USA; Solid State and Photonics Laboratory, Stanford University.
On accurate determination of contact angle
Concus, P.; Finn, R.
1992-01-01
Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.
Accurate guitar tuning by cochlear implant musicians.
Lu, Thomas; Huang, Juan; Zeng, Fan-Gang
2014-01-01
Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.
Accurate integration of forced and damped oscillators
García Alonso, Fernando Luis; Cortés Molina, Mónica; Villacampa, Yolanda; Reyes Perales, José Antonio
2016-01-01
The new methods accurately integrate forced and damped oscillators. A family of analytical functions is introduced known as T-functions which are dependent on three parameters. The solution is expressed as a series of T-functions calculating their coefficients by means of recurrences which involve the perturbation function. In the T-functions series method the perturbation parameter is the factor in the local truncation error. Furthermore, this method is zero-stable and convergent. An applica...
Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo
2016-02-01
Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.
Flange Correction For Metal-To-Metal Contacts
Lieneweg, Udo; Hannaman, David J.
1991-01-01
Improved mathematical model provides correction for flange effect in estimating resistance of square contact between two metal layers from standard four-terminal measurements. Extended version of one developed previously for contact between metal layer and semiconductor layer, wherein flange effect important in semiconductor layer only. Here flange effect in both metal layers significant. Interfacial resistances extracted more accurately.
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
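The core computation, PCA of a correlation matrix estimated from an ensemble of structures, can be sketched on synthetic one-dimensional data. The plain sample correlation below stands in for the maximum likelihood estimate, and the "atoms" and their motion are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "ensemble": 200 models of a 10-atom chain (1-D coordinates).
# Atoms 0-4 move together (one collective mode) on top of independent noise.
n_models, n_atoms = 200, 10
collective = rng.normal(size=(n_models, 1))
coords = rng.normal(scale=0.2, size=(n_models, n_atoms))
coords[:, :5] += collective

# Correlation matrix of atomic positions across the ensemble, then PCA.
corr = np.corrcoef(coords, rowvar=False)
evals, evecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
mode = evecs[:, -1]                   # dominant mode of structural correlation

# The dominant mode should load almost entirely on the correlated atoms,
# which is what a "PCA plot" would color onto the structure.
loading = np.abs(mode)
```

In the paper's setting the coordinates come from a maximum likelihood superposition (removing fit uncertainty and heterogeneity), but the eigen-decomposition step has exactly this shape.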
Accurate finite element modeling of acoustic waves
Idesman, A.; Pham, D.
2014-07-01
In the paper we suggest an accurate finite element approach for the modeling of acoustic waves under a suddenly applied load. We consider the standard linear elements and the linear elements with reduced dispersion for the space discretization as well as the explicit central-difference method for time integration. The analytical study of the numerical dispersion shows that the most accurate results can be obtained with the time increments close to the stability limit. However, even in this case and the use of the linear elements with reduced dispersion, mesh refinement leads to divergent numerical results for acoustic waves under a suddenly applied load. This is explained by large spurious high-frequency oscillations. For the quantification and the suppression of spurious oscillations, we have modified and applied a two-stage time-integration technique that includes the stage of basic computations and the filtering stage. This technique allows accurate convergent results at mesh refinement as well as significantly reduces the numerical anisotropy of solutions. We should mention that the approach suggested is very general and can be equally applied to any loading as well as for any space-discretization technique and any explicit or implicit time-integration method.
78 FR 34245 - Miscellaneous Corrections
2013-06-07
... Federal Regulations is sold by the Superintendent of Documents. Prices of new books are listed in the... office, correcting and adding missing cross-references, correcting grammatical errors, revising language... the name of its human capital office...
Power corrections, renormalons and resummation
Beneke, M.
1996-08-01
I briefly review three topics of recent interest concerning power corrections, renormalons and Sudakov resummation: (a) 1/Q corrections to event shape observables in e+e- annihilation, (b) power corrections in Drell-Yan production and (c) factorial divergences that arise in resummation of large infrared (Sudakov) logarithms in moment or 'real' space.
Radiation camera motion correction system
Hoffer, P.B.
1973-12-18
The device determines the ratio of the intensity of radiation received by a radiation camera from two separate portions of the object. A correction signal is developed to maintain this ratio at a substantially constant value and this correction signal is combined with the camera signal to correct for object motion. (Official Gazette)
75 FR 16516 - Dates Correction
2010-04-01
... From the Federal Register Online via the Government Publishing Office. NATIONAL ARCHIVES AND RECORDS ADMINISTRATION, Office of the Federal Register: Dates Correction. In the Notices section... through 15499, the date at the top of each page is corrected to read ``Monday, March 29, 2010''.
Direct anharmonic correction method by molecular dynamics
Liu, Zhong-Li; Li, Rui; Zhang, Xiu-Lu; Qu, Nuo; Cai, Ling-Cang
2017-04-01
The quick calculation of accurate anharmonic effects of lattice vibrations is crucial to the calculation of thermodynamic properties, the construction of multi-phase diagrams and equations of state of materials, and the theoretical design of new materials. In this paper, we propose a direct free energy interpolation (DFEI) method based on the temperature-dependent phonon density of states (TD-PDOS) reduced from molecular dynamics simulations. Using the DFEI method, after anharmonic free energy corrections we reproduced the thermal expansion coefficients, the specific heat, the thermal pressure, the isothermal bulk modulus, and the Hugoniot P-V-T relationships of Cu easily and accurately. Extensive tests on other materials, including a metal, an alloy, a semiconductor and an insulator, also show that the DFEI method can easily uncover the residual anharmonicity that the quasi-harmonic approximation (QHA) omits. The DFEI method is thus a very efficient way to conduct anharmonic corrections beyond the QHA; more importantly, it is much more straightforward and easier than previous anharmonic methods.
Anomaly corrected heterotic horizons
Fontanella, A.; Gutowski, J. B.; Papadopoulos, G.
2016-10-01
We consider supersymmetric near-horizon geometries in heterotic supergravity up to two loop order in sigma model perturbation theory. We identify the conditions for the horizons to admit enhancement of supersymmetry. We show that solutions which undergo supersymmetry enhancement exhibit an sl(2,R) symmetry, and we describe the geometry of their horizon sections. We also prove a modified Lichnerowicz type theorem, incorporating α' corrections, which relates Killing spinors to zero modes of near-horizon Dirac operators. Furthermore, we demonstrate that there are no AdS2 solutions in heterotic supergravity up to second order in α' for which the fields are smooth and the internal space is smooth and compact without boundary. We investigate a class of nearly supersymmetric horizons, for which the gravitino Killing spinor equation is satisfied on the spatial cross sections but not the dilatino one, and present a description of their geometry.
Anomaly Corrected Heterotic Horizons
Fontanella, A; Papadopoulos, G
2016-01-01
We consider supersymmetric near-horizon geometries in heterotic supergravity up to two loop order in sigma model perturbation theory. We identify the conditions for the horizons to admit enhancement of supersymmetry. We show that solutions which undergo supersymmetry enhancement exhibit an sl(2,R) symmetry, and we describe the geometry of their horizon sections. We also prove a modified Lichnerowicz type theorem, incorporating $\\alpha'$ corrections, which relates Killing spinors to zero modes of near-horizon Dirac operators. Furthermore, we demonstrate that there are no AdS2 solutions in heterotic supergravity up to second order in $\\alpha'$ for which the fields are smooth and the internal space is smooth and compact without boundary. We investigate a class of nearly supersymmetric horizons, for which the gravitino Killing spinor equation is satisfied on the spatial cross sections but not the dilatino one, and present a description of their geometry.
Catalytic quantum error correction
Brun, T; Hsieh, M H; Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu
2006-01-01
We develop the theory of entanglement-assisted quantum error correcting (EAQEC) codes, a generalization of the stabilizer formalism to the setting in which the sender and receiver have access to pre-shared entanglement. Conventional stabilizer codes are equivalent to dual-containing symplectic codes. In contrast, EAQEC codes do not require the dual-containing condition, which greatly simplifies their construction. We show how any quaternary classical code can be made into an EAQEC code. In particular, efficient modern codes, like LDPC codes, which attain the Shannon capacity, can be made into EAQEC codes attaining the hashing bound. In a quantum computation setting, EAQEC codes give rise to catalytic quantum codes which maintain a region of inherited noiseless qubits. We also give an alternative construction of EAQEC codes by making classical entanglement-assisted codes coherent.
EDITORIAL: Politically correct physics?
Pople Deputy Editor, Stephen
1997-03-01
If you were a caring, thinking, liberally minded person in the 1960s, you marched against the bomb, against the Vietnam war, and for civil rights. By the 1980s, your voice was raised about the destruction of the rainforests and the threat to our whole planetary environment. At the same time, you opposed discrimination against any group because of race, sex or sexual orientation. You reasoned that people who spoke or acted in a discriminatory manner should be discriminated against. In other words, you became politically correct. Despite its oft-quoted excesses, the political correctness movement sprang from well-founded concerns about injustices in our society. So, on balance, I am all for it. Or, at least, I was until it started to invade science. Biologists were the first to feel the impact. No longer could they refer to 'higher' and 'lower' orders, or 'primitive' forms of life. To the list of undesirable 'isms' - sexism, racism, ageism - had been added a new one: speciesism. Chemists remained immune to the PC invasion, but what else could you expect from a group of people so steeped in tradition that their principal unit, the mole, requires the use of the thoroughly unreconstructed gram? Now it is the turn of the physicists. This time, the offenders are not those who talk disparagingly about other people or animals, but those who refer to 'forms of energy' and 'heat'. Political correctness has evolved into physical correctness. I was always rather fond of the various forms of energy: potential, kinetic, chemical, electrical, sound and so on. My students might merge heat and internal energy into a single, fuzzy concept loosely associated with moving molecules. They might be a little confused at a whole new crop of energies - hydroelectric, solar, wind, geothermal and tidal - but they could tell me what devices turned chemical energy into electrical energy, even if they couldn't quite appreciate that turning tidal energy into geothermal energy wasn't part of the
Second order QCD corrections to inclusive semileptonic $b \to X_c \ell \bar{\nu}$ decays
Biswas, Sandip
2009-01-01
We extend previous computations of the second order QCD corrections to semileptonic b \to c inclusive transitions to the case where the charged lepton in the final state is massive. This allows an accurate description of $b \to c \tau \bar{\nu}_\tau$ transitions.
New law requires 'medically accurate' lesson plans.
1999-09-17
The California Legislature has passed a bill requiring that all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.
Accurate diagnosis is essential for amebiasis
[No author listed]
2004-01-01
Amebiasis is one of the three most common causes of death from parasitic disease, and Entamoeba histolytica is the most widely distributed parasite in the world. In particular, Entamoeba histolytica infection in the developing countries is a significant health problem in amebiasis-endemic areas, with a significant impact on infant mortality[1]. In recent years a worldwide increase in the number of patients with amebiasis has refocused attention on this important infection. On the other hand, improving the quality of parasitological methods and widespread use of accurate techniques have improved our knowledge about the disease.
The first accurate description of an aurora
Schröder, Wilfried
2006-12-01
As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting glimpse into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.
Niche Genetic Algorithm with Accurate Optimization Performance
LIU Jian-hua; YAN De-kun
2005-01-01
Based on a crowding mechanism, a novel niche genetic algorithm is proposed that records the evolutionary direction dynamically during evolution. After evolution, the precision of the solutions can be greatly improved by local searching along the recorded direction. Simulation shows that this algorithm can not only keep population diversity but also find accurate solutions. Although this method takes more time than the standard GA, it is worth applying in cases that demand high solution precision.
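A deterministic-crowding sketch on a multimodal test function illustrates the niche-preservation idea. The direction-recording local search of the paper is omitted, and all parameters are illustrative:

```python
import math
import random

def fitness(x):
    # Five equal peaks on [0, 1] at x = 0.1, 0.3, 0.5, 0.7, 0.9.
    return math.sin(5 * math.pi * x) ** 2

def crowding_ga(pop_size=40, gens=150, seed=4):
    """Deterministic-crowding sketch: each child competes only with the
    parent it most resembles, which preserves multiple niches."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(gens):
        rng.shuffle(pop)
        for i in range(0, pop_size - 1, 2):
            p1, p2 = pop[i], pop[i + 1]
            # Arithmetic crossover plus a small mutation.
            child = min(1.0, max(0.0, (p1 + p2) / 2 + rng.gauss(0, 0.02)))
            # Replace the more similar parent, but only if the child is fitter.
            j = i if abs(child - p1) < abs(child - p2) else i + 1
            if fitness(child) > fitness(pop[j]):
                pop[j] = child
    return pop

pop = crowding_ga()
# Which of the five peaks still have individuals sitting nearly on top of them?
peaks_found = {round((x - 0.1) / 0.2) for x in pop if fitness(x) > 0.95}
```

Because replacement is restricted to the most similar parent, subpopulations near different peaks do not compete with each other, so several optima survive to the end of the run, the property a plain generational GA tends to lose.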
Universality: Accurate Checks in Dyson's Hierarchical Model
Godina, J. J.; Meurice, Y.; Oktay, M. B.
2003-06-01
In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10^{-8} and Δ = 0.4259469 ± 10^{-7}, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
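The linear-fitting step for the leading exponent can be sketched on synthetic power-law susceptibility data (the amplitude and grid below are invented; only the exponent value is taken from the abstract, for illustration):

```python
import numpy as np

# Synthetic susceptibility chi ~ C * t**(-gamma), with t = beta_c - beta,
# using the hierarchical-model value gamma ≈ 1.29914 for illustration.
gamma_true = 1.29914073
t = np.logspace(-6, -2, 50)
chi = 2.7 * t ** (-gamma_true)

# Leading exponent from a linear fit in log-log coordinates:
# log(chi) = log(C) - gamma * log(t).
slope, intercept = np.polyfit(np.log(t), np.log(chi), 1)
gamma_est = -slope
```

On real data the subleading exponent Δ shows up as curvature in this log-log plot, which is why the paper fits the correction term and cross-checks against the linearized renormalization group eigenvalues.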
Accurate Stellar Parameters for Exoplanet Host Stars
Brewer, John Michael; Fischer, Debra; Basu, Sarbani; Valenti, Jeff A.
2015-01-01
A large impediment to our understanding of planet formation is obtaining a clear picture of planet radii and densities. Although determining precise ratios between planet and stellar host is relatively easy, determining accurate stellar parameters is still a difficult and costly undertaking. High resolution spectral analysis has traditionally yielded precise values for some stellar parameters, but stars in common between catalogs from different authors or analyzed using different techniques often show offsets far in excess of their uncertainties. Most analyses now use some external constraint, when available, to break observed degeneracies between surface gravity, effective temperature, and metallicity, which can otherwise lead to correlated errors in results. However, these external constraints are impossible to obtain for all stars and can require more costly observations than the initial high resolution spectra. We demonstrate that these discrepancies can be mitigated by use of a larger line list that has carefully tuned atomic line data. We use an iterative modeling technique that does not require external constraints. We compare the surface gravity obtained with our spectral synthesis modeling to asteroseismically determined values for 42 Kepler stars. Our analysis agrees well, with only a 0.048 dex offset and an rms scatter of 0.05 dex. Such accurate stellar gravities can reduce the primary source of uncertainty in radii by almost an order of magnitude over unconstrained spectral analysis.
Accurate pose estimation for forensic identification
Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk
2010-04-01
In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.
Accurate pattern registration for integrated circuit tomography
Levine, Zachary H.; Grantham, Steven; Neogi, Suneeta; Frigo, Sean P.; McNulty, Ian; Retsch, Cornelia C.; Wang, Yuxin; Lucatorto, Thomas B.
2001-07-15
As part of an effort to develop high resolution microtomography for engineered structures, a two-level copper integrated circuit interconnect was imaged using 1.83 keV x rays at 14 angles employing a full-field Fresnel zone plate microscope. A major requirement for high resolution microtomography is the accurate registration of the reference axes in each of the many views needed for a reconstruction. A reconstruction with 100 nm resolution would require registration accuracy of 30 nm or better. This work demonstrates that even images that have strong interference fringes can be used to obtain accurate fiducials through the use of Radon transforms. We show that we are able to locate the coordinates of the rectilinear circuit patterns to 28 nm. The procedure is validated by agreement between an x-ray parallax measurement of 1.41 ± 0.17 µm and a measurement of 1.58 ± 0.08 µm from a scanning electron microscope image of a cross section.
Accurate basis set truncation for wavefunction embedding
Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.
2013-07-01
Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.
How Accurately can we Calculate Thermal Systems?
Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A
2004-04-20
I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as k-eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.
Accurate taxonomic assignment of short pyrosequencing reads.
Clemente, José C; Jansson, Jesper; Valiente, Gabriel
2010-01-01
Ambiguities in the taxonomy dependent assignment of pyrosequencing reads are usually resolved by mapping each read to the lowest common ancestor in a reference taxonomy of all those sequences that match the read. This conservative approach has the drawback of mapping a read to a possibly large clade that may also contain many sequences not matching the read. A more accurate taxonomic assignment of short reads can be made by mapping each read to the node in the reference taxonomy that provides the best precision and recall. We show that given a suffix array for the sequences in the reference taxonomy, a short read can be mapped to the node of the reference taxonomy with the best combined value of precision and recall in time linear in the size of the taxonomy subtree rooted at the lowest common ancestor of the matching sequences. An accurate taxonomic assignment of short reads can thus be made with about the same efficiency as when mapping each read to the lowest common ancestor of all matching sequences in a reference taxonomy. We demonstrate the effectiveness of our approach on several metagenomic datasets of marine and gut microbiota.
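The precision/recall criterion can be illustrated on a toy taxonomy. Here precision is the fraction of a node's subtree leaves that match the read, recall is the fraction of all matching leaves that fall inside the subtree, and the sketch scores every node by F1 as one plausible "combined value" (the paper's exact scoring and its suffix-array machinery are not reproduced):

```python
# Toy taxonomy as a child-list map; leaves are reference sequences.
children = {
    "root": ["A", "B"],
    "A": ["a1", "a2", "a3"],
    "B": ["b1", "b2"],
}

def leaves(node):
    """Set of leaf sequences in the subtree rooted at `node`."""
    kids = children.get(node, [])
    if not kids:
        return {node}
    out = set()
    for k in kids:
        out |= leaves(k)
    return out

def best_node(matches):
    """Map a read (given its set of matching leaves) to the taxonomy node
    with the best F1 of precision and recall."""
    nodes = set(children) | {l for n in children for l in children[n]}
    best, best_f1 = None, -1.0
    for v in nodes:
        sub = leaves(v)
        hit = len(sub & matches)
        if hit == 0:
            continue
        p = hit / len(sub)          # precision: matching leaves / subtree leaves
        r = hit / len(matches)      # recall: matching leaves captured
        f1 = 2 * p * r / (p + r)
        if f1 > best_f1:
            best, best_f1 = v, f1
    return best

# A read matching a1 and a2 maps to "A" (precision 2/3, recall 1)
# rather than to the LCA-style choice "root" (precision 2/5).
print(best_node({"a1", "a2"}))  # → A
```

The LCA of {a1, a2} is also "A" here; the two mappings differ when matches are scattered so that the LCA subtree contains many non-matching sequences.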
Accurate determination of characteristic relative permeability curves
Krause, Michael H.; Benson, Sally M.
2015-09-01
A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.
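The standard steady-state interpretation mentioned above reduces to Darcy's law applied per phase: each phase's effective relative permeability follows from its flow rate, viscosity, and the pressure drop. A minimal sketch with hypothetical numbers (the paper's simulations and rock data are not reproduced):

```python
def effective_kr(q, mu, L, k, A, dP):
    """Darcy's-law interpretation of a steady-state coreflood for one phase:
    kr = q * mu * L / (k * A * dP), all quantities in SI units."""
    return q * mu * L / (k * A * dP)

# Hypothetical numbers: 5e-8 m^3/s of brine (viscosity 1e-3 Pa·s) through a
# 0.1 m long core of 2e-3 m^2 cross-section and 1e-13 m^2 absolute
# permeability, at a 5e4 Pa pressure drop.
kr_w = effective_kr(q=5e-8, mu=1e-3, L=0.1, k=1e-13, A=2e-3, dP=5e4)
print(kr_w)  # → 0.5
```

The flowrate dependence discussed in the abstract enters because, on heterogeneous cores, the measured dP no longer reflects a uniform saturation field, so the kr inferred this way drifts with injection rate.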
Accurate Holdup Calculations with Predictive Modeling & Data Integration
Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering
2017-04-03
In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma rays and less frequently neutrons. Current methods to quantify holdup, i.e., Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such a configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, and hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes' Theorem, which resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use...
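The Bayesian rating of candidate configurations can be sketched with Bayes' theorem and a simple Gaussian measurement likelihood. The predicted counts, priors, and uncertainty below are hypothetical illustrations, not values from the project:

```python
import math

def posterior(predicted, priors, measured, sigma):
    """Rate candidate holdup configurations by Bayes' theorem, assuming a
    Gaussian likelihood for the measured detector count given the count
    each configuration's forward transport model predicts."""
    likes = [math.exp(-0.5 * ((measured - c) / sigma) ** 2) * p
             for c, p in zip(predicted, priors)]
    z = sum(likes)  # normalizing evidence
    return [l / z for l in likes]

# Two hypothetical source configurations predicting 100 and 140 detector
# counts, equal priors, and a measurement of 110 ± 10 counts:
post = posterior([100.0, 140.0], [0.5, 0.5], measured=110.0, sigma=10.0)
print([round(p, 3) for p in post])  # → [0.982, 0.018]
```

In the real problem the "candidates" form a continuous, under-determined space of source distributions, and the forward model is a full transport calculation rather than a single predicted count, but the rating principle is the same.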
Accurate Classification of RNA Structures Using Topological Fingerprints
Li, Kejie; Gribskov, Michael
2016-01-01
While RNAs are well known to possess complex structures, functionally similar RNAs often have little sequence similarity. While the exact size and spacing of base-paired regions vary, functionally similar RNAs have pronounced similarity in the arrangement, or topology, of base-paired stems. Furthermore, predicted RNA structures often lack pseudoknots (a crucial aspect of biological activity), and are only partially correct, or incomplete. A topological approach addresses all of these difficulties. In this work we describe each RNA structure as a graph that can be converted to a topological spectrum (RNA fingerprint). The set of subgraphs in an RNA structure, its RNA fingerprint, can be compared with the fingerprints of other RNA structures to identify and correctly classify functionally related RNAs. Topologically similar RNAs can be identified even when a large fraction, up to 30%, of the stems are omitted, indicating that highly accurate structures are not necessary. We investigate the performance of the RNA fingerprint approach on a set of eight highly curated RNA families, with diverse sizes and functions, containing pseudoknots, and with little sequence similarity – an especially difficult test set. In spite of the difficult test set, the RNA fingerprint approach is very successful (ROC AUC > 0.95). Due to the inclusion of pseudoknots, the RNA fingerprint approach covers a wider range of possible structures than methods based only on secondary structure, and its tolerance for incomplete structures suggests that it can be applied even to predicted structures. Source code is freely available at https://github.rcac.purdue.edu/mgribsko/XIOS_RNA_fingerprint. PMID:27755571
Accurate molecular classification of cancer using simple rules
Gotoh Osamu
2009-10-01
Background: One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensional gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often hampers the interpretability of the models. For a better understanding of the classification results, it is desirable to develop simpler rule-based models with as few marker genes as possible. Methods: We screened a small number of informative single genes and gene pairs on the basis of their depended degrees proposed in rough sets. Applying the decision rules induced by the selected genes or gene pairs, we constructed cancer classifiers. We tested the efficacy of the classifiers by leave-one-out cross-validation (LOOCV) of training sets and classification of independent test sets. Results: We applied our methods to five cancerous gene expression datasets: leukemia (acute lymphoblastic leukemia [ALL] vs. acute myeloid leukemia [AML]), lung cancer, prostate cancer, breast cancer, and leukemia (ALL vs. mixed-lineage leukemia [MLL] vs. AML). Accurate classification outcomes were obtained by utilizing just one or two genes. Some genes that correlated closely with the pathogenesis of relevant cancers were identified. In terms of both classification performance and algorithm simplicity, our approach outperformed or at least matched existing methods. Conclusion: In cancerous gene expression datasets, a small number of genes, even one or two if selected correctly, is capable of achieving an ideal cancer classification effect. This finding also means that very simple rules may perform well for cancerous class prediction.
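A single-gene decision rule of the kind described (a threshold on one marker gene, evaluated by LOOCV) can be sketched as follows. The rough-set depended-degree selection is not reproduced, and the expression values are made up:

```python
def learn_rule(values, labels):
    """Pick the expression threshold on one gene that best separates two
    classes on the training data; returns (threshold, label_above)."""
    uniq = sorted(set(values))
    cands = [(a + b) / 2 for a, b in zip(uniq, uniq[1:])]  # midpoints
    best = None
    for t in cands:
        for above in (0, 1):
            ok = sum((v > t) == (y == above) for v, y in zip(values, labels))
            if best is None or ok > best[0]:
                best = (ok, t, above)
    return best[1], best[2]

def loocv_accuracy(values, labels):
    """Leave-one-out cross-validation of the single-gene rule."""
    hits = 0
    for i in range(len(values)):
        tr_v = values[:i] + values[i + 1:]
        tr_y = labels[:i] + labels[i + 1:]
        t, above = learn_rule(tr_v, tr_y)
        pred = above if values[i] > t else 1 - above
        hits += pred == labels[i]
    return hits / len(values)

# Hypothetical marker gene: low expression in class 0, high in class 1.
vals = [1.0, 1.2, 0.9, 1.1, 3.0, 3.2, 2.9, 3.1]
labs = [0, 0, 0, 0, 1, 1, 1, 1]
print(loocv_accuracy(vals, labs))  # → 1.0
```

A gene-pair rule would combine two such thresholds; the abstract's point is that with a well-chosen marker this tiny rule can already classify perfectly.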
Accurate, fully-automated NMR spectral profiling for metabolomics.
Siamak Ravanbakhsh
Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile", i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's Nuclear Magnetic Resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures, including real biological samples (serum and CSF), defined mixtures, and realistic computer-generated spectra involving > 50 compounds, show that BAYESIL can autonomously find the concentration of NMR-detectable metabolites accurately (~90% correct identification and ~10% quantification error), in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic publicly-accessible system that provides quantitative NMR spectral profiling effectively, with an accuracy on these biofluids that meets or exceeds the performance of trained experts. We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of...
Accurate microfour-point probe sheet resistance measurements on small samples
Thorsteinsson, Sune; Wang, Fei; Petersen, Dirch Hjorth
2009-01-01
We show that accurate sheet resistance measurements on small samples may be performed using microfour-point probes without applying correction factors. Using dual configuration measurements, the sheet resistance may be extracted with high accuracy when the microfour-point probes are in proximity of a mirror plane on small samples with dimensions of a few times the probe pitch. We calculate theoretically the size of the “sweet spot,” where sufficiently accurate sheet resistances result, and show that even for very small samples it is feasible to do correction-free extraction of the sheet resistance with sufficient accuracy. As an example, the sheet resistance of a 40 µm (50 µm) square sample may be characterized with an accuracy of 0.3% (0.1%) using a 10 µm pitch microfour-point probe and assuming a probe alignment accuracy of ±2.5 µm.
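For background (textbook infinite-sheet results, not formulas from this paper): an equidistant collinear four-point probe far from any boundary gives R_A = Rs·ln(2)/π with current through the outer pins, and R_B = Rs·ln(3)/(2π) in one alternate configuration with current through pins 1 and 3. A dual-configuration pair can then be inverted for Rs twice and checked against the ideal ratio R_A/R_B = 2·ln(2)/ln(3):

```python
import math

def sheet_resistance_A(V, I):
    """Infinite sheet, equidistant collinear probes, configuration A
    (current through outer pins, voltage across inner pins):
    R_A = V/I = Rs * ln(2) / pi, so Rs = pi * (V/I) / ln(2)."""
    return math.pi * (V / I) / math.log(2)

def sheet_resistance_B(V, I):
    """Configuration B (current through pins 1 and 3, voltage across
    pins 2 and 4): R_B = V/I = Rs * ln(3) / (2*pi)."""
    return 2 * math.pi * (V / I) / math.log(3)

# For an ideal infinite sheet with Rs = 100 ohm/sq, both configurations
# recover the same value and the ratio R_A/R_B equals 2*ln(2)/ln(3).
Rs = 100.0
VA = Rs * math.log(2) / math.pi        # voltage per unit current, config A
VB = Rs * math.log(3) / (2 * math.pi)  # voltage per unit current, config B
print(round(sheet_resistance_A(VA, 1.0), 6),
      round(sheet_resistance_B(VB, 1.0), 6),
      round(VA / VB, 4))  # ratio ≈ 1.2619
```

On a finite sample near a mirror plane the measured ratio deviates from 1.2619, which is what the paper's dual-configuration analysis exploits to extract Rs without explicit correction factors.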
Stanley, Jeffrey R; Adkins, Joshua N; Slysz, Gordon W; Monroe, Matthew E; Purvine, Samuel O; Karpievitch, Yuliya V; Anderson, Gordon A; Smith, Richard D; Dabney, Alan R
2011-08-15
Current algorithms for quantifying peptide identification confidence in the accurate mass and time (AMT) tag approach assume that the AMT tags themselves have been correctly identified. However, there is uncertainty in the identification of AMT tags, because this is based on matching LC-MS/MS fragmentation spectra to peptide sequences. In this paper, we incorporate confidence measures for the AMT tag identifications into the calculation of probabilities for correct matches to an AMT tag database, resulting in a more accurate overall measure of identification confidence for the AMT tag approach. The method is referred to as Statistical Tools for AMT Tag Confidence (STAC). STAC additionally provides a uniqueness probability (UP) to help distinguish between multiple matches to an AMT tag and a method to calculate an overall false discovery rate (FDR). STAC is freely available for download, as both a command line and a Windows graphical application.
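The two ideas, folding tag-identification confidence into the match probability and deriving an FDR from posterior probabilities, can be sketched generically. This is an illustration of the concepts only, not STAC's actual statistical model:

```python
def overall_confidence(match_prob, tag_prob):
    """Fold the AMT tag's own identification confidence into the match
    probability (sketch: a simple product of the two probabilities)."""
    return match_prob * tag_prob

def estimated_fdr(probs, threshold):
    """Estimate the FDR of the set accepted at `threshold` as the mean
    incorrect-match probability among accepted matches."""
    accepted = [p for p in probs if p >= threshold]
    if not accepted:
        return 0.0
    return sum(1 - p for p in accepted) / len(accepted)

# Hypothetical (match probability, tag confidence) pairs:
confs = [overall_confidence(m, t) for m, t in
         [(0.8, 0.9), (0.9, 0.5), (0.5, 0.6)]]
print([round(c, 3) for c in confs])        # → [0.72, 0.45, 0.3]
print(round(estimated_fdr(confs, 0.5), 2))  # only 0.72 accepted → 0.28
```

The second pair shows why this matters: a confident mass-and-time match (0.9) built on a shaky tag identification (0.5) should not be reported with high overall confidence.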
Accurate Astrometry and Photometry of Saturated and Coronagraphic Point Spread Functions
Marois, C; Lafrenière, D
2006-01-01
Accurate astrometry and photometry of saturated and coronagraphic point spread functions (PSFs) are fundamental to both ground- and space-based high contrast imaging projects. For ground-based adaptive optics imaging, differential atmospheric refraction and flexure introduce a small drift of the PSF with time, and seeing and sky transmission variations modify the PSF flux distribution. For space-based imaging, vibrations, thermal fluctuations and pointing jitters can modify the PSF core position and flux. These effects need to be corrected to properly combine the images and obtain optimal signal-to-noise ratios, accurate relative astrometry and photometry of detected objects as well as precise detection limits. Usually, one can easily correct for these effects by using the PSF core, but this is impossible when high dynamic range observing techniques are used, like coronagraphy with a non-transmissive occulting mask, or if the stellar PSF core is saturated. We present a new technique that can solve these issues...
Airborne experiment results for spaceborne atmospheric synchronous correction system
Cui, Wenyu; Yi, Weining; Du, Lili; Liu, Xiao
2015-10-01
The image quality of optical remote sensing satellites is affected by the atmosphere, so the images need to be corrected. Because of the spatial and temporal variability of atmospheric conditions, correction using synchronously measured atmospheric parameters can effectively improve remote sensing image quality. For this reason, a small, lightweight spaceborne instrument, the atmospheric synchronous correction device (airborne prototype), was developed by AIOFM of CAS (Anhui Institute of Optics and Fine Mechanics, Chinese Academy of Sciences). With this instrument, whose detection mode is time-synchronized and spatially covering, atmospheric parameters consistent in time and space with the images to be corrected can be obtained, and the correction is then achieved with a radiative transfer model. To verify the technical process and performance of the spaceborne atmospheric correction system, a first airborne experiment was designed and completed. The experiment was implemented using the "satellite-airborne-ground" synchronous measuring method. A high resolution (0.4 m) camera and the atmospheric correction device were mounted on the aircraft, which photographed the ground simultaneously with the satellite observing overhead. Aerosol optical depth (AOD) and columnar water vapor (CWV) in the imaged area were also acquired and used for the atmospheric correction of the satellite and aerial images. Experimental results show that correcting aerial and satellite images with the AOD and CWV retrieved from the device's data can improve image definition and contrast by more than 30% and more than double the MTF, which means atmospheric correction of satellite images using data from the spaceborne atmospheric synchronous correction device is accurate and effective.
Correcting Reflux Laparoscopically
Eric C Poulin
1998-01-01
Most operations in the abdominal cavity and chest can be performed using minimally invasive techniques. As yet it has not been determined which laparoscopic procedures are preferable to the same operations done through conventional laparotomy. However, most surgeons who have completed the learning curves of these procedures believe that most minimally invasive techniques will be scientifically recognized soon. The evolution, validation and justification of advanced laparoscopic surgical methods seem inevitable. Most believe that the trend towards procedures that minimize or eliminate the trauma of surgery while adhering to accepted surgical principles is irreversible. The functional results of laparoscopic antireflux surgery in the seven years since its inception have been virtually identical to the success curves generated with open fundoplication in past years. Furthermore, overall patient outcomes with laparoscopic procedures have been superior to outcomes with the traditional approach. Success is determined by patient selection and operative technique. Patient evaluation should include esophagogastroduodenoscopy, barium swallow, 24 h pH study and esophageal motility study. Gastric emptying also should be evaluated. Patients who have abnormal propulsion in the esophagus should not receive a complete fundoplication (Nissen) because it adds a factor of obstruction. Dor or Toupet procedures are adequate alternatives. Prokinetic agents, dilation or pyloroplasty are used for pyloric obstruction ranging from mild to severe. Correcting reflux laparoscopically is more difficult in patients with obesity, peptic strictures, paraesophageal hernias, short esophagus, or a history of previous upper abdominal or antireflux surgery.
Ettore Taverna; Henri Ufenast; Laura Broffoni; Guido Garavaglia
2013-01-01
The Latarjet procedure is a confirmed method for the treatment of shoulder instability in the presence of bone loss. It is a challenging procedure for which a key point is the correct placement of the coracoid graft onto the glenoid neck. We here present our technique for an arthroscopically assisted Latarjet procedure with a new drill guide, permitting an accurate and reproducible positioning of the coracoid graft, with optimal compression of the graft onto the glenoid neck due to the perfect...
Gay, Guillaume; Courtheoux, Thibault; Reyes, Céline; Tournier, Sylvie; Gachet, Yannick
2012-01-01
In fission yeast, erroneous attachments of spindle microtubules to kinetochores are frequent in early mitosis. Most are corrected before anaphase onset by a mechanism involving the protein kinase Aurora B, which destabilizes kinetochore microtubules (ktMTs) in the absence of tension between sister chromatids. In this paper, we describe a minimal mathematical model of fission yeast chromosome segregation based on the stochastic attachment and detachment of ktMTs. The model accurately reproduces...
Correction, improvement and model verification of CARE 3, version 3
Rose, D. M.; Manke, J. W.; Altschul, R. E.; Nelson, D. L.
1987-01-01
An independent verification of the CARE 3 mathematical model and computer code was conducted and reported in NASA Contractor Report 166096, Review and Verification of CARE 3 Mathematical Model and Code: Interim Report. The study uncovered some implementation errors that were corrected and are reported in this document. The corrected CARE 3 program is called version 4. Thus the document, Correction, Improvement, and Model Verification of CARE 3, Version 3, was written in April 1984. It is being published now as it has been determined to contain a more accurate representation of CARE 3 than the preceding document of April 1983. This edition supersedes NASA-CR-166122, entitled 'Correction and Improvement of CARE 3,' version 3, April 1983.
Accurate Telescope Mount Positioning with MEMS Accelerometers
Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.
2014-08-01
This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate and stateless positioning of telescope mounts. This provides a method completely independent from other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the sub-arcminute range, which is well below the field of view of conventional imaging telescope systems. Here we present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented in order to be a part of a telescope control system.
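The core of accelerometer-based stateless positioning is recovering tilt angles from the measured gravity vector. A minimal sketch for a static mount (the axis conventions and sign choices below are assumptions, not the paper's):

```python
import math

def tilt_from_gravity(ax, ay, az):
    """Pitch and roll (radians) of a static mount from the gravity
    reaction measured by a MEMS accelerometer; assumed axes:
    x forward, y left, z up."""
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll

# Level mount: the accelerometer reads pure +z gravity reaction.
p, r = tilt_from_gravity(0.0, 0.0, 9.81)
print(round(math.degrees(p), 3), round(math.degrees(r), 3))  # → 0.0 0.0

# Mount pitched up 30 degrees: part of gravity appears on the x axis.
g = 9.81
p, r = tilt_from_gravity(-g * math.sin(math.radians(30)), 0.0,
                         g * math.cos(math.radians(30)))
print(round(math.degrees(p), 1))  # → 30.0
```

Reaching sub-arcminute accuracy in practice requires calibrating sensor bias, scale factor and axis misalignment, and averaging heavily to suppress noise; the raw two-axis formula above is only the geometric starting point.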
Apparatus for accurately measuring high temperatures
Smith, D.D.
The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.
Toward Accurate and Quantitative Comparative Metagenomics
Nayfach, Stephen; Pollard, Katherine S.
2016-01-01
Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
Accurate renormalization group analyses in neutrino sector
Haba, Naoyuki [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Kaneta, Kunio [Kavli IPMU (WPI), The University of Tokyo, Kashiwa, Chiba 277-8568 (Japan); Takahashi, Ryo [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Yamaguchi, Yuya [Department of Physics, Faculty of Science, Hokkaido University, Sapporo 060-0810 (Japan)
2014-08-15
We investigate accurate renormalization group analyses in the neutrino sector between the ν-oscillation and seesaw energy scales. We consider decoupling effects of the top quark and Higgs boson on the renormalization group equations of the light neutrino mass matrix. Since the decoupling effects arise at the standard model scale and are independent of high energy physics, our method can be applied to essentially any model beyond the standard model. We find that the decoupling effects of the Higgs boson are negligible, while those of the top quark are not. In particular, the decoupling effects of the top quark affect the neutrino mass eigenvalues, which are important for analyzing predictions such as mass squared differences and neutrinoless double beta decay in an underlying theory existing at a high energy scale.
Accurate Weather Forecasting for Radio Astronomy
Maddalena, Ronald J.
2010-01-01
The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing, where pointing is critical. Thus, to maximize productivity the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
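The layer-by-layer step described above can be sketched with a discrete sum for the zenith opacity and a simple top-down radiative-transfer recursion for the sky brightness. The layer values below are hypothetical, and Liebe's absorption model itself is not reproduced:

```python
import math

def zenith_opacity(alphas, dzs):
    """Total zenith opacity (nepers) as the sum over layers of
    absorption coefficient (nepers/km) times layer thickness (km)."""
    return sum(a * dz for a, dz in zip(alphas, dzs))

def sky_brightness(temps, alphas, dzs):
    """Discrete radiative transfer from the top of the atmosphere down:
    each layer emits T * (1 - exp(-tau)) and attenuates by exp(-tau)
    whatever radiation comes from the layers above it."""
    tb = 0.0
    for T, a, dz in zip(temps, alphas, dzs):  # index 0 = topmost layer
        tau = a * dz
        tb = tb * math.exp(-tau) + T * (1 - math.exp(-tau))
    return tb

# Two hypothetical layers: a cold upper layer over a warmer lower one.
temps = [220.0, 280.0]   # physical temperature, K
alphas = [0.005, 0.02]   # absorption coefficient, nepers/km
dzs = [10.0, 5.0]        # layer thickness, km
print(round(zenith_opacity(alphas, dzs), 3))  # → 0.15
print(round(sky_brightness(temps, alphas, dzs), 2))
```

The resulting sky brightness is the atmospheric contribution to Tsys mentioned in the abstract; the forecast system evaluates sums like these for each hour, wavelength, and all 60 model layers.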
Analytical model for relativistic corrections to the nuclear magnetic shielding constant in atoms
Romero, Rodolfo H. [Facultad de Ciencias Exactas, Universidad Nacional del Nordeste, Avenida Libertad 5500 (3400), Corrientes (Argentina)]. E-mail: rhromero@exa.unne.edu.ar; Gomez, Sergio S. [Facultad de Ciencias Exactas, Universidad Nacional del Nordeste, Avenida Libertad 5500 (3400), Corrientes (Argentina)
2006-04-24
We present a simple analytical model for calculating and rationalizing the main relativistic corrections to the nuclear magnetic shielding constant in atoms. It provides good estimates for those corrections and their trends, in reasonable agreement with accurate four-component calculations and perturbation methods. The origin of the effects in deep core atomic orbitals is manifestly shown.
Comparison of techniques for correction of magnification of pelvic x-rays for hip surgery planning
The, Bertram; Kootstra, Johan W. J.; Hosman, Anton H.; Verdonschot, Nico; Gerritsma, Carina L. E.; Diercks, Ron L.
2007-01-01
The aim of this study was to develop an accurate method for correction of magnification of pelvic x-rays to enhance accuracy of hip surgery planning. All investigated methods aim at estimating the anteroposterior location of the hip joint in supine position to correctly position a reference object f
Unpacking Corrections in Mobile Instruction
Levin, Lena; Cromdal, Jakob; Broth, Mathias
2017-01-01
This article deals with the organisation of correction in mobile instructional settings. Five sets of video data (>250 h) documenting how learners were instructed to fly aeroplanes, drive cars and ride bicycles in real life traffic were examined to reveal some common features of correction exchanges. Through detailed multimodal analysis of participants' actions, it is shown how instructors systematically elaborate their corrective instructions to include relevant information about the trouble and remedial action – a practice we refer to as unpacking corrections. It is proposed that the practice of unpacking the local particulars of corrections (i) provides for the instructional character of the interaction, and (ii) is highly sensitive to the relevant physical and mobile contingencies. These findings contribute to the existing literature on the interactional organisation of correction.
Gravitational Correction to Vacuum Polarization
Jentschura, U D
2015-01-01
We consider the gravitational correction to (electronic) vacuum polarization in the presence of a gravitational background field. The Dirac propagators for the virtual fermions are modified to include the leading gravitational correction (potential term), which corresponds to a coordinate-dependent fermion mass. The mass term is assumed to be uniform over a length scale commensurate with the virtual electron-positron pair. The on-mass-shell renormalization condition ensures that the gravitational correction vanishes on the mass shell of the photon, i.e., the speed of light is unaffected by the quantum field theoretical loop correction, in full agreement with the equivalence principle. Nontrivial corrections are obtained for off-shell, virtual photons. We compare our findings to other works on generalized Lorentz transformations and combined quantum-electrodynamic gravitational corrections to the speed of light which have recently appeared in the literature.
Food systems in correctional settings
Smoyer, Amy; Kjær Minke, Linda
Food is a central component of life in correctional institutions and plays a critical role in the physical and mental health of incarcerated people and the construction of prisoners' identities and relationships. An understanding of the role of food in correctional settings and the effective management of food systems may improve outcomes for incarcerated people and help correctional administrators to maximize their health and safety. This report summarizes existing research on food systems in correctional settings and provides examples of food programmes in prison and remand facilities, including a case study of food-related innovation in the Danish correctional system. It offers specific conclusions for policy-makers, administrators of correctional institutions and prison-food-service professionals, and makes proposals for future research.
Nested Quantum Error Correction Codes
Wang, Zhuo; Fan, Heng; Vedral, Vlatko
2009-01-01
The theory of quantum error correction was established more than a decade ago as the primary tool for fighting decoherence in quantum information processing. Although great progress has already been made in this field, limited methods are available for constructing new quantum error correction codes from old codes. Here we exhibit a simple and general method to construct new quantum error correction codes by nesting certain quantum codes together. The problem of finding long quantum error correction codes is reduced to that of searching for several short-length quantum codes with certain properties. Our method works for codes of all lengths and all distances, and is quite efficient for constructing optimal or near-optimal codes. The two main known methods for constructing new codes from old codes in quantum error-correction theory, concatenation and pasting, can be understood in the framework of nested quantum error correction codes.
Real-time lens distortion correction: speed, accuracy and efficiency
Bax, Michael R.; Shahidi, Ramin
2014-11-01
Optical lens systems suffer from nonlinear geometrical distortion. Optical imaging applications such as image-enhanced endoscopy and image-based bronchoscope tracking require correction of this distortion for accurate localization, tracking, registration, and measurement of image features. Real-time capability is desirable for interactive systems and live video. The use of a texture-mapping graphics accelerator, which is standard hardware on current motherboard chipsets and add-in video graphics cards, to perform distortion correction is proposed. Mesh generation for image tessellation, an error analysis, and performance results are presented. It is shown that distortion correction using commodity graphics hardware is substantially faster than using the main processor and can be performed at video frame rates (faster than 30 frames per second), and that the polar-based method of mesh generation proposed here is more accurate than a conventional grid-based approach. Using graphics hardware to perform distortion correction is not only fast and accurate but also efficient as it frees the main processor for other tasks, which is an important issue in some real-time applications.
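As a minimal illustration of the nonlinear radial distortion being corrected, the sketch below inverts a standard two-coefficient polynomial model by fixed-point iteration. The paper's mesh-based, texture-mapped correction is a different (hardware-accelerated) implementation; the names and coefficients here are illustrative:

```python
def undistort_point(xd, yd, k1, k2, cx=0.0, cy=0.0, iters=10):
    """Invert the radial polynomial distortion model
        x_d = x_u * (1 + k1*r^2 + k2*r^4),  r^2 = x_u^2 + y_u^2,
    by fixed-point iteration, a common approach when no closed form exists.
    Coordinates are taken relative to the distortion centre (cx, cy).
    """
    x, y = xd - cx, yd - cy          # initial guess: undistorted ~ distorted
    xd0, yd0 = x, y
    for _ in range(iters):
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd0 / factor, yd0 / factor
    return x + cx, y + cy
```

For moderate distortion coefficients, a forward-distorted point round-trips through this inversion to well below a pixel, which is the regime the fixed-point scheme is suited to.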
Cool Cluster Correctly Correlated
Varganov, Sergey Aleksandrovich [Iowa State Univ., Ames, IA (United States)
2005-01-01
Atomic clusters are unique objects, which occupy an intermediate position between atoms and condensed matter systems. For a long time it was thought that the physical and chemical properties of atomic clusters change monotonically with increasing cluster size, from a single atom to a condensed matter system. However, recently it has become clear that many properties of atomic clusters can change drastically with the size of the clusters. Because physical and chemical properties of clusters can be adjusted simply by changing the cluster's size, different applications of atomic clusters have been proposed. One example is the catalytic activity of clusters of specific sizes in different chemical reactions. Another example is a potential application of atomic clusters in microelectronics, where their band gaps can be adjusted by simply changing cluster sizes. In recent years significant advances in experimental techniques have allowed one to synthesize and study atomic clusters of specified sizes. However, the interpretation of the results is often difficult, and theoretical methods are frequently used to help interpret complex experimental data. Most theoretical approaches have been based on empirical or semiempirical methods. These methods allow one to study large and small clusters using the same approximations. However, since empirical and semiempirical methods rely on simple models with many parameters, it is often difficult to estimate the quantitative and even qualitative accuracy of the results. On the other hand, because of significant advances in quantum chemical methods and computer capabilities, it is now possible to do high quality ab initio calculations not only on systems of a few atoms but on clusters of practical interest as well. In addition to accurate results for specific clusters, such methods can be used for benchmarking different empirical and semiempirical approaches. The atomic clusters studied in this work contain from a few atoms
Processor register error correction management
Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.
2016-12-27
Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
Comparison of Topographic Correction Methods
Rudolf Richter
2009-07-01
A comparison of topographic correction methods is conducted for Landsat-5 TM, Landsat-7 ETM+, and SPOT-5 imagery from different geographic areas and seasons. Three successful and well-known methods are compared: the semi-empirical C correction, the Gamma correction depending on the incidence and exitance angles, and a modified Minnaert approach. In the majority of cases the modified Minnaert approach performed best, but no method is superior in all cases.
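For reference, the families of corrections being compared can each be written in one line. The sketch below shows one common form of the cosine, C, and Minnaert corrections (the paper's Gamma and modified Minnaert variants differ in detail); per-pixel radiance and illumination-angle cosines are assumed as inputs:

```python
def cosine_correction(radiance, cos_i, cos_sz):
    """Simple cosine (Lambertian) correction: i is the local solar
    incidence angle on the slope, sz the solar zenith angle on flat terrain."""
    return radiance * cos_sz / cos_i

def c_correction(radiance, cos_i, cos_sz, c):
    """Semi-empirical C correction; c = b/m is the ratio of intercept to
    slope from a per-band linear regression radiance = m*cos(i) + b."""
    return radiance * (cos_sz + c) / (cos_i + c)

def minnaert_correction(radiance, cos_i, cos_sz, k):
    """One common form of the Minnaert correction; k = 1 reduces to the
    plain cosine correction."""
    return radiance * (cos_sz / cos_i) ** k
```

The C and Minnaert parameters damp the over-correction that the plain cosine form produces at grazing incidence, which is why the empirical variants tend to win in comparisons like this one.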
Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi
2015-02-01
With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case: travelers prefer the route reported to be in the best condition, but delayed information reflects past rather than current traffic conditions, so travelers make wrong routing decisions, causing a decrease in capacity, an increase in oscillations, and deviation of the system from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between two routes is less than BR, the routes have equal probability of being chosen. Bounded rationality is helpful for improving efficiency in terms of capacity, oscillation, and the gap by which the system deviates from equilibrium.
Accurate measurement of RF exposure from emerging wireless communication systems
Letertre, Thierry; Monebhurrun, Vikass; Toffano, Zeno
2013-04-01
Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are subjected to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation but with the same root-mean-square (RMS) power. The two probes do not provide sufficiently accurate results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), or for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with the emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known; in this case the measurement errors are shown to be systematic, and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.
An accurate {delta}f method for neoclassical transport calculation
Wang, W.X.; Nakajima, N.; Murakami, S.; Okamoto, M. [National Inst. for Fusion Science, Toki, Gifu (Japan)
1999-03-01
A {delta}f method, solving the drift kinetic equation, for neoclassical transport calculation is presented in detail. It is demonstrated that valid results essentially rely on the correct evaluation of the marker density g in the weight calculation. A general and accurate weighting scheme is developed without using an assumed g in the weight equation for advancing particle weights, unlike previous schemes. This scheme employs an additional weight function to directly solve g from its kinetic equation using the idea of the {delta}f method. Therefore the severe constraint that the real marker distribution must be consistent with the initially assumed g during a simulation is relaxed. An improved like-particle collision scheme is presented. By compensating for the momentum, energy and particle losses arising from numerical errors, the conservation of all three quantities is greatly improved during collisions. Ion neoclassical transport due to self-collisions is examined in the finite banana regime as well as in the zero banana limit. A solution with zero particle flux and zero energy flux (in the case of no temperature gradient) over the whole poloidal section is obtained. With the improvements in both the like-particle collision scheme and the weighting scheme, the {delta}f simulation shows significantly upgraded performance for neoclassical transport studies. (author)
Accurate Completion of Medical Report on Diagnosing Death.
Savić, Slobodan; Alempijević, Djordje; Andjelić, Sladjana
2015-01-01
Diagnosing death and issuing a Death Diagnosing Form (DDF) represents an activity that carries a great deal of public responsibility for medical professionals of the Emergency Medical Services (EMS) and is perpetually exposed to the scrutiny of the general public. Diagnosing death is necessary so as to confirm true death, to exclude apparent death, and consequently to avoid burying a person alive, i.e. a person only apparently dead. These expert-methodological guidelines, based on the most up-to-date and medically based evidence, have the goal of helping the physicians of the EMS accurately fill out a medical report on diagnosing death. If the outcome of applied cardiopulmonary resuscitation measures is negative, or when the person is found dead, the physician is under obligation to diagnose death and correctly fill out the DDF. It is also recommended to perform electrocardiography (EKG) and record asystole in at least two leads. In the process of diagnostics and treatment, it is a moral obligation of each Belgrade EMS physician to apply all available achievements and knowledge of modern medicine acquired from extensive international studies, which have indeed been the major theoretical basis for the creation of these expert-methodological guidelines. Those acting differently do so in accordance with their conscience and risk professional, and even criminal, sanctions.
A Distributed Weighted Voting Approach for Accurate Eye Center Estimation
Gagandeep Singh
2013-05-01
This paper proposes a novel approach for accurate estimation of the eye center in face images. A distributed voting based approach, in which every pixel votes, is adopted for potential eye center candidates. The votes are distributed over a subset of pixels which lie in the direction opposite to the gradient direction, and the weightage of votes is distributed according to a novel mechanism. First, the image is normalized to eliminate illumination variations and its edge map is generated using the Canny edge detector. Distributed voting is applied on the edge image to generate different eye center candidates. Morphological closing and local maxima search are used to reduce the number of candidates. A classifier based on spatial and intensity information is used to choose the correct candidates for the locations of the eye center. The proposed approach was tested on the BioID face database and resulted in a better iris detection rate than the state-of-the-art. The proposed approach is robust against illumination variation, small pose variations, presence of eye glasses and partial occlusion of the eyes. Defence Science Journal, 2013, 63(3), pp. 292-297, DOI: http://dx.doi.org/10.14429/dsj.63.2763
Study of accurate volume measurement system for plutonium nitrate solution
Hosoma, T. [Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan). Tokai Works
1998-12-01
It is important for effective safeguarding of nuclear materials to establish a technique for accurate volume measurement of plutonium nitrate solution in an accountancy tank. The volume of the solution can be estimated from two differential pressures between three dip-tubes, in which the air is purged by a compressor. One of the differential pressures corresponds to the density of the solution, and the other corresponds to the surface level of the solution in the tank. The measurement of the differential pressure contains many uncertain errors, such as the precision of the pressure transducer, fluctuation of the back-pressure, generation of bubbles at the front of the dip-tubes, non-uniformity of temperature and density of the solution, pressure drop in the dip-tube, and so on. The various excess pressures in the volume measurement are discussed and corrected by a reasonable method. A high-precision differential pressure measurement system is developed with a quartz-oscillation-type transducer which converts a differential pressure to a digital signal. The developed system is used for inspection by the government and the IAEA. (M. Suetake)
Dobrislav Dobrev
2017-02-01
We provide an accurate closed-form expression for the expected shortfall of linear portfolios with elliptically distributed risk factors. Our results aim to correct inaccuracies that originate in Kamdem (2005) and are present also in at least thirty other papers referencing it, including the recent survey by Nadarajah et al. (2014) on estimation methods for expected shortfall. In particular, we show that the correction we provide in the popular multivariate Student t setting eliminates understatement of expected shortfall by a factor varying from at least four to more than 100 across different tail quantiles and degrees of freedom. As such, the resulting economic impact in financial risk management applications could be significant. We further correct such errors encountered also in closely related results in Kamdem (2007 and 2009) for mixtures of elliptical distributions. More generally, our findings point to the extra scrutiny required when deploying new methods for expected shortfall estimation in practice.
Open quantum systems and error correction
Shabani Barzegar, Alireza
Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interactions with their surrounding environment and from inaccuracy in control forces, so engineering methods to combat errors in quantum devices is in high demand. In this thesis, I focus on realistic formulations of quantum error correction methods; a realistic formulation is one that incorporates experimental challenges. This thesis is presented in two sections: open quantum systems and quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory; it is essential to first study a noise process and then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely-positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on the geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The section on quantum error correction is presented in chapters 4 through 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). Chapter 6 is devoted to a theory of quantum error correction (QEC)
Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R
2017-02-14
Accurate energy ranking is a key facet to the problem of first-principles crystal-structure prediction (CSP) of molecular crystals. This work presents a systematic assessment of B86bPBE-XDM, a semilocal density functional combined with the exchange-hole dipole moment (XDM) dispersion model, for energy ranking using 14 compounds from the first five CSP blind tests. Specifically, the set of crystals studied comprises 11 rigid, planar compounds and 3 co-crystals. The experimental structure was correctly identified as the lowest in lattice energy for 12 of the 14 total crystals. One of the exceptions is 4-hydroxythiophene-2-carbonitrile, for which the experimental structure was correctly identified once a quasi-harmonic estimate of the vibrational free-energy contribution was included, evidencing the occasional importance of thermal corrections for accurate energy ranking. The other exception is an organic salt, where charge-transfer error (also called delocalization error) is expected to cause the base density functional to be unreliable. Provided the choice of base density functional is appropriate and an estimate of temperature effects is used, XDM-corrected density-functional theory is highly reliable for the energetic ranking of competing crystal structures.
New orbit correction method uniting global and local orbit corrections
Nakamura, N.; Takaki, H.; Sakai, H.; Satoh, M.; Harada, K.; Kamiya, Y.
2006-01-01
A new orbit correction method, called the eigenvector method with constraints (EVC), is proposed and formulated to unite global and local orbit corrections for ring accelerators, especially synchrotron radiation (SR) sources. The EVC can exactly correct the beam positions at arbitrarily selected ring positions, such as light source points, while simultaneously reducing closed orbit distortion (COD) around the whole ring. Computer simulations clearly demonstrate these features of the EVC for both the Super-SOR light source and the Advanced Light Source (ALS), which have typical structures of high-brilliance SR sources. In addition, the effects of errors in beam position monitor (BPM) reading and steering magnet setting on the orbit correction are analytically expressed and also compared with the computer simulations. Simulation results show that the EVC is very effective and useful for orbit correction and beam position stabilization in SR sources.
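The EVC algorithm itself is formulated in the paper; as a point of comparison, a plain SVD least-squares global correction (without the exact-constraint step) can be sketched as follows, with the response matrix R mapping corrector kicks to BPM readings. Names and the regularization convention are illustrative:

```python
import numpy as np

def orbit_correction(R, orbit, n_sv=None):
    """Standard SVD least-squares orbit correction (not the EVC itself).

    R      -- (n_bpm, n_corr) response matrix: BPM shift per unit kick
    orbit  -- measured closed orbit distortion at the BPMs
    n_sv   -- number of singular values to keep (regularization); None = all
    Returns corrector kicks theta minimizing |orbit + R @ theta|.
    """
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    if n_sv is not None:
        U, s, Vt = U[:, :n_sv], s[:n_sv], Vt[:n_sv, :]
    return -Vt.T @ ((U.T @ orbit) / s)
```

Unlike this truncated-SVD minimizer, the EVC additionally enforces exact zeros of the residual orbit at the selected source points, which global least squares alone does not guarantee.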
Tian, Jianxiang; Mulero, A
2016-01-01
Despite the fact that more than 30 analytical expressions for the equation of state of hard-disk fluids have been proposed in the literature, none of them is capable of reproducing the currently accepted numeric or estimated values for the first eighteen virial coefficients. Using the asymptotic expansion method, extended to the first ten virial coefficients for hard-disk fluids, fifty-seven new expressions for the equation of state have been studied. Of these, a new equation of state is selected which accurately reproduces all of the first eighteen virial coefficients. Comparisons of the compressibility factor with computer simulations show that this new equation is as accurate as other similar expressions with the same number of parameters. Finally, the location of the poles of the 57 new equations shows that there are some particular configurations which could give both the accurate virial coefficients and the correct closest packing fraction in the future when higher virial coefficients than the t...
Development of a Drosophila cell-based error correction assay.
Salemi, Jeffrey D; McGilvray, Philip T; Maresca, Thomas J
2013-01-01
Accurate transmission of the genome through cell division requires microtubules from opposing spindle poles to interact with protein super-structures called kinetochores that assemble on each sister chromatid. Most kinetochores establish erroneous attachments that are destabilized through a process called error correction. Failure to correct improper kinetochore-microtubule (kt-MT) interactions before anaphase onset results in chromosomal instability (CIN), which has been implicated in tumorigenesis and tumor adaptation. Thus, it is important to characterize the molecular basis of error correction to better comprehend how CIN occurs and how it can be modulated. An error correction assay has been previously developed in cultured mammalian cells in which incorrect kt-MT attachments are created through the induction of monopolar spindle assembly via chemical inhibition of kinesin-5. Error correction is then monitored following inhibitor wash out. Implementing the error correction assay in Drosophila melanogaster S2 cells would be valuable because kt-MT attachments are easily visualized and the cells are highly amenable to RNAi and high-throughput screening. However, Drosophila kinesin-5 (Klp61F) is unaffected by available small molecule inhibitors. To overcome this limitation, we have rendered S2 cells susceptible to kinesin-5 inhibitors by functionally replacing Klp61F with human kinesin-5 (Eg5). Eg5 expression rescued the assembly of monopolar spindles typically caused by Klp61F depletion. Eg5-mediated bipoles collapsed into monopoles due, in part, to kinesin-14 (Ncd) activity when treated with the kinesin-5 inhibitor S-trityl-L-cysteine (STLC). Furthermore, bipolar spindles reassembled and error correction was observed after STLC wash out. Importantly, error correction in Eg5-expressing S2 cells was dependent on the well-established error correction kinase Aurora B. This system provides a powerful new cell-based platform for studying error correction and CIN.
Learning-Based Topological Correction for Infant Cortical Surfaces
Hao, Shijie; Li, Gang; Wang, Li; Meng, Yu
2017-01-01
Reconstruction of topologically correct and accurate cortical surfaces from infant MR images is of great importance in neuroimaging mapping of early brain development. However, due to rapid growth and ongoing myelination, infant MR images exhibit extremely low tissue contrast and dynamic appearance patterns, thus leading to many more topological errors (holes and handles) in the cortical surfaces derived from tissue segmentation results, in comparison to adult MR images, which typically have good tissue contrast. Existing methods for topological correction either rely on minimal correction criteria or on ad hoc rules based on image intensity priors, thus often resulting in erroneous corrections and large anatomical errors in reconstructed infant cortical surfaces. To address these issues, we propose to correct topological errors by learning information from anatomical references, i.e., manually corrected images. Specifically, in our method, we first locate candidate voxels of topologically defected regions by using a topology-preserving level set method. Then, by leveraging the rich information of the corresponding patches from reference images, we build region-specific dictionaries from the anatomical references and infer the correct labels of candidate voxels using sparse representation. Notably, we further integrate these two steps into an iterative framework to enable gradual correction of large topological errors, which occur frequently in infant images and cannot be completely corrected using one-shot sparse representation. Extensive experiments on infant cortical surfaces demonstrate that our method not only effectively corrects the topological defects, but also leads to better anatomical consistency, compared to the state-of-the-art methods.
PET measurements of cerebral metabolism corrected for CSF contributions
Chawluk, J.; Alavi, A.; Dann, R.; Kushner, M.J.; Hurtig, H.; Zimmerman, R.A.; Reivich, M.
1984-01-01
Thirty-three subjects have been studied with PET and anatomic imaging (proton-NMR and/or CT) in order to determine the effect of cerebral atrophy on calculations of metabolic rates. Subgroups of neurologic disease investigated include stroke, brain tumor, epilepsy, psychosis, and dementia. Anatomic images were digitized through a Vidicon camera and analyzed volumetrically. Relative areas for ventricles, sulci, and brain tissue were calculated. Preliminary analysis suggests that ventricular volumes as determined by NMR and CT are similar, while sulcal volumes are larger on NMR scans. Metabolic rates (18F-FDG) were calculated before and after correction for CSF spaces, with initial focus upon dementia and normal aging. Correction for atrophy led to a greater percentage increase in global metabolic rates in demented individuals (18.2 ± 5.3) than in elderly controls (8.3 ± 3.0, p < .05). A trend towards significantly lower glucose metabolism in demented subjects before CSF correction was not seen following correction for atrophy. These data suggest that volumetric analysis of NMR images may more accurately reflect the degree of cerebral atrophy, since NMR does not suffer from beam-hardening artifact due to bone-parenchyma juxtapositions. Furthermore, appropriate correction for CSF spaces should be employed if current-resolution PET scanners are to accurately measure residual brain tissue metabolism in various pathological states.
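The CSF correction amounts to rescaling the measured global rate by the fraction of the intracranial space that is brain tissue. A minimal sketch under that assumption (the helper name, and the use of relative areas rather than full volumes, are illustrative):

```python
def atrophy_corrected_rate(measured_rate, brain, ventricles, sulci):
    """Rescale a global metabolic rate by the brain-tissue fraction.

    brain, ventricles, sulci -- relative areas (or volumes) summing to the
    total intracranial space. Metabolically inert CSF dilutes the measured
    global rate, so dividing by the brain fraction recovers the rate of the
    remaining tissue.
    """
    brain_fraction = brain / (brain + ventricles + sulci)
    return measured_rate / brain_fraction
```

For example, a subject with 20% of the intracranial space occupied by CSF has their measured global rate scaled up by a factor of 1.25, which is why atrophic (demented) brains show the larger percentage correction.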
Optimal arbitrarily accurate composite pulse sequences
Low, Guang Hao; Yoder, Theodore
2014-03-01
Implementing a single qubit unitary is often hampered by imperfect control. Systematic amplitude errors ε, caused by incorrect duration or strength of a pulse, are an especially common problem. But a sequence of imperfect pulses can provide a better implementation of a desired operation, as compared to a single primitive pulse. We find optimal pulse sequences consisting of L primitive π or 2π rotations that suppress such errors to arbitrary order (ε^n) on arbitrary initial states. Optimality is demonstrated by proving an L = Ω(n) lower bound and saturating it with L = 2n solutions. Closed-form solutions for arbitrary rotation angles are given for n = 1, 2, 3, 4. Perturbative solutions for any n are proven for small angles, while arbitrary angle solutions are obtained by analytic continuation up to n = 12. The derivation proceeds by a novel algebraic and non-recursive approach, in which finding amplitude error correcting sequences can be reduced to solving polynomial equations.
Accurate, Meshless Methods for Magneto-Hydrodynamics
Hopkins, Philip F
2016-01-01
Recently, we developed a pair of meshless finite-volume Lagrangian methods for hydrodynamics: the 'meshless finite mass' (MFM) and 'meshless finite volume' (MFV) methods. These capture advantages of both smoothed-particle hydrodynamics (SPH) and adaptive mesh-refinement (AMR) schemes. Here, we extend these to include ideal magneto-hydrodynamics (MHD). The MHD equations are second-order consistent and conservative. We augment these with a divergence-cleaning scheme, which maintains ∇·B ≈ 0 to high accuracy. We implement these in the code GIZMO, together with a state-of-the-art implementation of SPH MHD. In every one of a large suite of test problems, the new methods are competitive with moving-mesh and AMR schemes using constrained transport (CT) to ensure ∇·B = 0. They are able to correctly capture the growth and structure of the magneto-rotational instability (MRI), MHD turbulence, and the launching of magnetic jets, in some cases converging more rapidly than AMR codes. Compared to SPH, the MFM/MFV methods e...
Accurate lineshape spectroscopy and the Boltzmann constant.
Truong, G-W; Anstie, J D; May, E F; Stace, T M; Luiten, A N
2015-10-14
Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate value for the excited-state (6P1/2) hyperfine splitting in Cs and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m.
MEMS accelerometers in accurate mount positioning systems
Mészáros, László; Pál, András.; Jaskó, Attila
2014-07-01
In order to attain precise, accurate and stateless positioning of telescope mounts we apply microelectromechanical accelerometer systems (also known as MEMS accelerometers). In common practice, feedback from the mount position is provided by electronic, optical or magneto-mechanical systems or via real-time astrometric solution based on the acquired images. In contrast, MEMS-based systems are completely independent of these mechanisms. Our goal is to investigate the advantages and challenges of applying such devices and to reach the sub-arcminute range, i.e., well below the field-of-view of conventional imaging telescope systems. We present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors. Basically, these sensors yield raw output within an accuracy of a few degrees. We show what kind of calibration procedures could exploit spherical and cylindrical constraints between accelerometer output channels in order to achieve the previously mentioned accuracy level. We also demonstrate how our implementation can be inserted in a telescope control system. Although this attainable precision is less than both the resolution of telescope mount drive mechanics and the accuracy of astrometric solutions, the independent nature of attitude determination could significantly increase the reliability of autonomous or remotely operated astronomical observations.
Does a pneumotach accurately characterize voice function?
Walters, Gage; Krane, Michael
2016-11-01
A study is presented which addresses how a pneumotach might adversely affect clinical measurements of voice function. A pneumotach is a device, typically a mask, worn over the mouth, in order to measure time-varying glottal volume flow. By measuring the time-varying difference in pressure across a known aerodynamic resistance element in the mask, the glottal volume flow waveform is estimated. Because it adds aerodynamic resistance to the vocal system, there is some concern that using a pneumotach may not accurately portray the behavior of the voice. To test this hypothesis, experiments were performed in a simplified airway model with the principal dimensions of an adult human upper airway. A compliant constriction, fabricated from silicone rubber, modeled the vocal folds. Variations of transglottal pressure, time-averaged volume flow, model vocal fold vibration amplitude, and radiated sound with subglottal pressure were performed, with and without the pneumotach in place, and differences noted. We acknowledge support of NIH Grant 2R01DC005642-10A1.
Towards Accurate Modeling of Moving Contact Lines
Holmgren, Hanna
2015-01-01
A main challenge in numerical simulations of moving contact line problems is that the adherence (no-slip) boundary condition leads to a non-integrable stress singularity at the contact line. In this report we perform the first steps in developing the macroscopic part of an accurate multiscale model for a moving contact line problem in two space dimensions. We assume that a micro model has been used to determine a relation between the contact angle and the contact line velocity. An intermediate region is introduced where an analytical expression for the velocity exists. This expression is used to implement boundary conditions for the moving contact line at a macroscopic scale, along a fictitious boundary located a small distance away from the physical boundary. Model problems where the shape of the interface is constant throughout the simulation are introduced. For these problems, experiments show that the errors in the resulting contact line velocities converge with the grid size $h$ at a rate of convergence $...
Accurate upper body rehabilitation system using kinect.
Sinha, Sanjana; Bhowmick, Brojeshwar; Chakravarty, Kingshuk; Sinha, Aniruddha; Das, Abhijit
2016-08-01
The growing importance of Kinect as a tool for clinical assessment and rehabilitation is due to its portability, low cost and markerless system for human motion capture. However, the accuracy of Kinect in measuring three-dimensional body joint center locations often fails to meet clinical standards of accuracy when compared to marker-based motion capture systems such as Vicon. The length of the body segment connecting any two joints, measured as the distance between three-dimensional Kinect skeleton joint coordinates, has been observed to vary with time. The orientation of the line connecting adjoining Kinect skeletal coordinates has also been seen to differ from the actual orientation of the physical body segment. Hence we have proposed an optimization method that utilizes Kinect Depth and RGB information to search for the joint center location that satisfies constraints on body segment length as well as orientation. An experimental study has been carried out on ten healthy participants performing upper body range of motion exercises. The results report a 72% reduction in body segment length variance and a 2° improvement in Range of Motion (ROM) angle, enabling more accurate measurements for upper limb exercises.
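The body-segment-length constraint at the heart of the method above can be sketched geometrically: given a reliable parent joint, a noisy child-joint estimate is moved to the nearest point lying at the known bone length. This is a minimal illustration of the constraint only, not the authors' full Depth+RGB optimization:

```python
import numpy as np

def enforce_bone_length(parent, child, length):
    """Project a noisy child-joint estimate onto the sphere of radius
    `length` centred on the parent joint (the nearest point in Euclidean
    distance), so the reported bone length stays constant over time."""
    parent = np.asarray(parent, dtype=float)
    child = np.asarray(child, dtype=float)
    direction = child - parent
    norm = np.linalg.norm(direction)
    if norm == 0.0:
        raise ValueError("child and parent coincide; direction undefined")
    return parent + length * direction / norm
```

For example, a shoulder-to-elbow estimate of (0.3, 0.4, 0.0) with a calibrated upper-arm length of 0.25 m is pulled back along the same direction to (0.15, 0.2, 0.0); repeating this per frame removes the frame-to-frame variation in apparent segment length.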
Fast and accurate exhaled breath ammonia measurement.
Solga, Steven F; Mudalel, Matthew L; Spacek, Lisa A; Risby, Terence H
2014-06-11
This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides a rationale for future innovations.
Noninvasive hemoglobin monitoring: how accurate is enough?
Rice, Mark J; Gravenstein, Nikolaus; Morey, Timothy E
2013-10-01
Evaluating the accuracy of medical devices has traditionally been a blend of statistical analyses, at times without contextualizing the clinical application. There have been a number of recent publications on the accuracy of a continuous noninvasive hemoglobin measurement device, the Masimo Radical-7 Pulse Co-oximeter, focusing on the traditional statistical metrics of bias and precision. In this review, which contains material presented at the Innovations and Applications of Monitoring Perfusion, Oxygenation, and Ventilation (IAMPOV) Symposium at Yale University in 2012, we critically investigated these metrics as applied to the new technology, exploring what is required of a noninvasive hemoglobin monitor and whether the conventional statistics adequately answer our questions about clinical accuracy. We discuss the glucose error grid, well known in the glucose monitoring literature, and describe an analogous version for hemoglobin monitoring. This hemoglobin error grid can be used to evaluate the required clinical accuracy (±g/dL) of a hemoglobin measurement device to provide more conclusive evidence on whether to transfuse an individual patient. The important decision to transfuse a patient usually requires both an accurate hemoglobin measurement and a physiologic reason to elect transfusion. It is our opinion that the published accuracy data of the Masimo Radical-7 is not good enough to make the transfusion decision.
Accurate free energy calculation along optimized paths.
Chen, Changjun; Xiao, Yi
2010-05-01
The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method in accurate free energy calculation.
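The path-based free energy calculation mentioned above can be illustrated with textbook thermodynamic integration on a toy harmonic system, where ΔF = ∫₀¹ ⟨∂U/∂λ⟩ dλ has a closed-form answer. This sketch is a generic illustration of the path-based idea, not the authors' dihedral-restraint protocol:

```python
import numpy as np

def ti_harmonic(k0, k1, kT=1.0, n=2001):
    """Thermodynamic integration for U(x; lam) = 0.5*k(lam)*x**2 with
    k(lam) = (1-lam)*k0 + lam*k1.  Here dU/dlam = 0.5*(k1-k0)*x**2 and
    <x**2> = kT/k(lam), so the exact thermal average of the integrand
    is (k1-k0)*kT/(2*k(lam)).  Integrating over lam with the trapezoid
    rule recovers the analytic result (kT/2)*ln(k1/k0)."""
    lam = np.linspace(0.0, 1.0, n)
    k = (1.0 - lam) * k0 + lam * k1
    integrand = (k1 - k0) * kT / (2.0 * k)
    # Trapezoid rule, written out explicitly.
    return float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(lam)) / 2.0)
```

Stiffening the spring from k0 = 1 to k1 = 4 gives ΔF ≈ (kT/2) ln 4 ≈ 0.693 kT; in a real simulation the exact thermal average would be replaced by a molecular dynamics estimate at each λ, which is where a smooth path matters.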
Accurate fission data for nuclear safety
Solders, A; Jokinen, A; Kolhinen, V S; Lantz, M; Mattera, A; Penttila, H; Pomp, S; Rakopoulos, V; Rinta-Antila, S
2013-01-01
The Accurate fission data for nuclear safety (AlFONS) project aims at high precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high current light ion cyclotron at the University of Jyvaskyla. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies 1 - 30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons...
Fast and Provably Accurate Bilateral Filtering.
Chaudhury, Kunal N; Dabhade, Swapnil D
2016-06-01
The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy.
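For reference, the direct O(S)-per-pixel computation that the paper accelerates can be sketched as follows. This is a plain NumPy illustration of the baseline bilateral filter (Gaussian spatial and range kernels), not the authors' fast algorithm:

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Direct bilateral filter on a 2D float image in [0, 1].
    Cost is O(S) per pixel with S = (2*radius + 1)**2."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img)
    # Spatial (domain) Gaussian weights, precomputed once.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: penalizes intensity differences, so the
            # average is taken mostly over same-side-of-edge pixels.
            rng = np.exp(-(patch - img[i, j])**2 / (2.0 * sigma_r**2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```

Because the range kernel depends on the center pixel, the weights change at every pixel; the paper's contribution is to approximate this Gaussian range kernel so the whole operation reduces to N+1 ordinary (shift-invariant) spatial filterings.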
Accurate thermoplasmonic simulation of metallic nanoparticles
Yu, Da-Miao; Liu, Yan-Nan; Tian, Fa-Lin; Pan, Xiao-Min; Sheng, Xin-Qing
2017-01-01
Thermoplasmonics leads to enhanced heat generation due to the localized surface plasmon resonances. The measurement of heat generation is fundamentally a complicated task, which necessitates the development of theoretical simulation techniques. In this paper, an efficient and accurate numerical scheme is proposed for applications with complex metallic nanostructures. Light absorption and temperature increase are, respectively, obtained by solving the volume integral equation (VIE) and the steady-state heat diffusion equation through the method of moments (MoM). Previously, methods based on surface integral equations (SIEs) were utilized to obtain light absorption. However, computing light absorption from the equivalent current is as expensive as O(NsNv), where Ns and Nv, respectively, denote the number of surface and volumetric unknowns. Our approach reduces the cost to O(Nv) by using VIE. The accuracy, efficiency and capability of the proposed scheme are validated by multiple simulations. The simulations show that our proposed method is more efficient than the approach based on SIEs under comparable accuracy, especially when many incident fields are of interest. The simulations also indicate that the temperature profile can be tuned by several factors, such as the geometric configuration of the array, beam direction, and light wavelength.
Schmidt, Tobias; Makmal, Adi; Kronik, Leeor; Kümmel, Stephan
2014-01-01
We present and test a new approximation for the exchange-correlation (xc) energy of Kohn-Sham density functional theory. It combines exact exchange with a compatible non-local correlation functional. The functional is by construction free of one-electron self-interaction, respects constraints derived from uniform coordinate scaling, and has the correct asymptotic behavior of the xc energy density. It contains one parameter that is not determined ab initio. We investigate whether it is possible to construct a functional that yields accurate binding energies and affords other advantages, specifically Kohn-Sham eigenvalues that reliably reflect ionization potentials. Tests for a set of atoms and small molecules show that within our local-hybrid form accurate binding energies can be achieved by proper optimization of the free parameter in our functional, along with an improvement in dissociation energy curves and in Kohn-Sham eigenvalues. However, the correspondence of the latter to experimental ionization potent...
Unpacking Corrections in Mobile Instruction
Levin, Lene; Broth, Mathias; Cromdal, Jakob
2017-01-01
This article deals with the organisation of correction in mobile instructional settings. Five sets of video data (>250 h) documenting how learners were instructed to fly aeroplanes, drive cars and ride bicycles in real life traffic were examined to reveal some common features of correction exchan...
Feature Referenced Error Correction Apparatus.
A feature referenced error correction apparatus utilizing the multiple images of the interstage level image format to compensate for positional...images and by the generation of an error correction signal in response to the sub-frame registration errors. (Author)
Precision Corrections to Fine Tuning in SUSY
Buckley, Matthew R; Shih, David
2016-01-01
Requiring that the contributions of supersymmetric particles to the Higgs mass are not highly tuned places upper limits on the masses of superpartners -- in particular the higgsino, stop, and gluino. We revisit the details of the tuning calculation and introduce a number of improvements, including RGE resummation, two-loop effects, a proper treatment of UV vs. IR masses, and threshold corrections. This improved calculation more accurately connects the tuning measure with the physical masses of the superpartners at LHC-accessible energies. After these refinements, the tuning bound on the stop is now also sensitive to the masses of the 1st and 2nd generation squarks, which limits how far these can be decoupled in Effective SUSY scenarios. We find that, for a fixed level of tuning, our bounds can allow for heavier gluinos and stops than previously considered. Despite this, the natural region of supersymmetry is under pressure from the LHC constraints, with high messenger scales particularly disfavored.
Accurate paleointensities - the multi-method approach
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed -the pseudo-Thellier protocol- which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
Towards Accurate Application Characterization for Exascale (APEX)
Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-09-01
Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines and many large capability resources including ASCI Red and RedStorm were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.
Optimizing cell arrays for accurate functional genomics
Fengler Sven
2012-07-01
Background: Cellular responses emerge from a complex network of dynamic biochemical reactions. In order to investigate them it is necessary to develop methods that allow perturbing a high number of gene products in a flexible and fast way. Cell arrays (CA) enable such experiments on microscope slides via reverse transfection of cellular colonies growing on spotted genetic material. In contrast to multi-well plates, CA are susceptible to contamination among neighboring spots, hindering accurate quantification in cell-based screening projects. Here we have developed a quality control protocol for quantifying and minimizing contamination in CA. Results: We imaged checkered CA that express two distinct fluorescent proteins and segmented images into single cells to quantify the transfection efficiency and interspot contamination. Compared with standard procedures, we measured a 3-fold reduction of contaminants when arrays containing HeLa cells were washed shortly after cell seeding. We proved that nucleic acid uptake during cell seeding rather than migration among neighboring spots was the major source of contamination. Arrays of MCF7 cells developed without the washing step showed a 7-fold lower percentage of contaminant cells, demonstrating that contamination is dependent on specific cell properties. Conclusions: Previously published methodological works have focused on achieving high transfection rates in densely packed CA. Here, we focused on an equally important parameter: the interspot contamination. The presented quality control is essential for estimating the rate of contamination, a major source of false positives and negatives in current microscopy-based functional genomics screenings. We have demonstrated that a washing step after seeding enhances CA quality for HeLa but is not necessary for MCF7. The described method provides a way to find optimal seeding protocols for cell lines intended to be used for the first time in CA.
Important Nearby Galaxies without Accurate Distances
McQuinn, Kristen
2014-10-01
The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis for which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.
Atmospheric Corrections for Altimetry Studies over Inland Water
M. Joana Fernandes
2014-05-01
Originally designed for applications over the ocean, satellite altimetry has been proven to be a useful tool for hydrologic studies. Altimeter products, mainly conceived for oceanographic studies, often fail to provide atmospheric corrections suitable for inland water studies. The focus of this paper is the analysis of the main issues related with the atmospheric corrections that need to be applied to the altimeter range to get precise water level heights. Using the corrections provided on the Radar Altimeter Database System, the main errors present in the dry and wet tropospheric corrections and in the ionospheric correction of the various satellites are reported. It has been shown that the model-based tropospheric corrections are not modeled properly and in a consistent way in the various altimetric products. While over the ocean, the dry tropospheric correction (DTC) is one of the most precise range corrections, in some of the present altimeter products, it is the correction with the largest errors over continental water regions, causing large biases of several decimeters, and along-track interpolation errors up to several centimeters, both with small temporal variations. The wet tropospheric correction (WTC) from the on-board microwave radiometers is hampered by the contamination on the radiometer measurements of the surrounding lands, making it usable only in the central parts of large lakes. In addition, the WTC from atmospheric models may also have large errors when it is provided at sea level instead of surface height. These errors cannot be corrected by the user, since no accurate expression exists for the height variation of the WTC. Alternative and accurate corrections can be computed from in situ data, e.g., DTC from surface pressure at barometric stations and WTC from Global Navigation Satellite System permanent stations. The latter approach is particularly favorable for small lakes and reservoirs, where GNSS-derived WTC at a single
Thermal Correction to the Molar Polarizability of a Boltzmann Gas
Jentschura, U D; Mohr, P J
2013-01-01
Metrology in atomic physics has been crucial for a number of advanced determinations of fundamental constants. In addition to very precise frequency measurements, the molar polarizability of an atomic gas has recently also been measured very accurately. Part of the motivation for the measurements is due to ongoing efforts to redefine the International System of Units (SI) for which an accurate value of the Boltzmann constant is needed. Here, we calculate the dominant shift of the molar polarizability in an atomic gas due to thermal effects. It is given by the relativistic correction to the dipole interaction, which emerges when the probing electric field is Lorentz transformed into the rest frame of the atoms that undergo thermal motion. While this effect is small when compared to currently available experimental accuracy, the relativistic correction to the dipole interaction is much larger than the thermal shift of the polarizability induced by blackbody radiation.
Scattering Correction For Image Reconstruction In Flash Radiography
Cao, Liangzhi; Wang, Mengqi; Wu, Hongchun; Liu, Zhouyu; Cheng, Yuxiong; Zhang, Hongbo [Xi'an Jiaotong Univ., Xi'an (China)]
2013-08-15
Scattered photons cause blurring and distortions in flash radiography, reducing the accuracy of image reconstruction significantly. The effect of the scattered photons is taken into account and an iterative deduction of the scattered photons is proposed to amend the scattering effect for image restoration. In order to deduct the scattering contribution, the flux of scattered photons is estimated as the sum of two components. The single scattered component is calculated accurately together with the uncollided flux along the characteristic ray, while the multiple scattered component is evaluated using correction coefficients pre-obtained from Monte Carlo simulations. The arbitrary geometry pretreatment and ray tracing are carried out based on the customization of AutoCAD. With the above model, an Iterative Procedure for image restORation code, IPOR, is developed. Numerical results demonstrate that the IPOR code is much more accurate than the direct reconstruction solution without scattering correction and it has a very high computational efficiency.
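The iterative deduction of scattered photons can be sketched as a fixed-point loop. This is a generic illustration of the iteration only; `scatter_model` here is a placeholder standing in for the paper's single-scatter ray tracing plus Monte Carlo-derived multiple-scatter coefficients:

```python
import numpy as np

def iterative_scatter_correction(measured, scatter_model, n_iter=30):
    """Estimate the uncollided flux u from a measured image m via the
    fixed-point iteration u <- m - S(u), where S(u) predicts the
    scattered contribution generated by the current estimate.
    Converges when S is a contraction (scatter fraction below one)."""
    measured = np.asarray(measured, dtype=float)
    u = measured.copy()
    for _ in range(n_iter):
        u = measured - scatter_model(u)
    return u
```

As a toy check, with a linear model S(u) = 0.2u the iteration converges to u = m/1.2, i.e., it removes the 20% scatter fraction from the raw measurement; in the real code the scatter operator is spatially varying and image-dependent, but the loop structure is the same.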
Surface consistent finite frequency phase corrections
Kimman, W. P.
2016-07-01
Static time-delay corrections are frequency independent and ignore velocity variations away from the assumed vertical ray path through the subsurface. There is therefore a clear potential for improvement if the finite frequency nature of wave propagation can be properly accounted for. Such a method is presented here based on the Born approximation, the assumption of surface consistency and the misfit of instantaneous phase. The concept of instantaneous phase lends itself very well to sweep-like signals, hence these are the focus of this study. Analytical sensitivity kernels are derived that accurately predict frequency-dependent phase shifts due to P-wave anomalies in the near surface. They are quick to compute and robust near the source and receivers. An additional correction is presented that re-introduces the nonlinear relation between model perturbation and phase delay, which becomes relevant for stronger velocity anomalies. The phase shift as a function of frequency is a slowly varying signal; its computation therefore does not require fine sampling even for broad-band sweeps. The kernels reveal interesting features of the sensitivity of seismic arrivals to the near surface: small anomalies can have a relatively large impact resulting from the medium field term that is dominant near the source and receivers. Furthermore, even simple velocity anomalies can produce a distinct frequency-dependent phase behaviour. Unlike statics, the predicted phase corrections are smooth in space. Verification with spectral element simulations shows an excellent match for the predicted phase shifts over the entire seismic frequency band. Applying the phase shift to the reference sweep corrects for wavelet distortion, making the technique akin to surface consistent deconvolution, even though no division in the spectral domain is involved. As long as multiple scattering is mild, surface consistent finite frequency phase corrections outperform traditional statics for moderately large
Updating quasar bolometric luminosity corrections
Runnoe, Jessie C; Shang, Zhaohui
2012-01-01
Bolometric corrections are used in quasar studies to quantify total energy output based on a measurement of a monochromatic luminosity. First, we enumerate and discuss the practical difficulties of determining such corrections, then we present bolometric luminosities between 1 \\mu m and 8 keV rest frame and corrections derived from the detailed spectral energy distributions of 63 bright quasars of low to moderate redshift (z = 0.03-1.4). Exploring several mathematical fittings, we provide practical bolometric corrections of the forms L_iso=\\zeta \\lambda L_{\\lambda} and log(L_iso)=A+B log(\\lambda L_{\\lambda}) for \\lambda= 1450, 3000, and 5100 \\AA, where L_iso is the bolometric luminosity calculated under the assumption of isotropy. The significant scatter in the 5100 \\AA\\ bolometric correction can be reduced by adding a first order correction using the optical slope, \\alpha_\\lambda,opt. We recommend an adjustment to the bolometric correction to account for viewing angle and the anisotropic emission expected fr...
[Atmospheric adjacency effect correction of ETM images].
Liu, Cheng-yu; Chen, Chun; Zhang, Shu-qing; Gao, Ji-yue
2010-09-01
Accurately retrieving the ground surface reflectance is an important precondition for improving subsequent remote sensing products and for the quantitative application of remote sensing. However, because electromagnetic waves are scattered by the atmosphere during transmission from the ground surface to the sensor, the target signal received by the sensor contains a contribution from the background: this is the adjacency effect. Because of the adjacency effect, remote sensing images become blurred and their contrast is reduced, so the ground surface reflectance retrieved from them is inaccurate, degrading both the quality of subsequent products and the accuracy of quantitative applications. In the present paper, an atmospheric adjacency effect correction experiment on ETM images was carried out using the point spread function method, based on the radiative transfer equation. The result of the experiment indicated that the contrast of the corrected ETM images increased, and the ground surface reflectance retrieved from those images was more accurate.
An automated method for accurate vessel segmentation
Yang, Xin; Liu, Chaoyue; Le Minh, Hung; Wang, Zhiwei; Chien, Aichi; Cheng, Kwang-Ting (Tim)
2017-05-01
Vessel segmentation is a critical task for various medical applications, such as diagnosis assistance of diabetic retinopathy, quantification of cerebral aneurysm growth, and guiding surgery in neurosurgical procedures. Despite technology advances in image segmentation, existing methods still suffer from low accuracy for vessel segmentation in two challenging yet common scenarios in clinical usage: (1) regions with a low signal-to-noise ratio (SNR), and (2) vessel boundaries disturbed by adjacent non-vessel pixels. In this paper, we present an automated system which can achieve highly accurate vessel segmentation for both 2D and 3D images even under these challenging scenarios. Three key contributions achieved by our system are: (1) a progressive contrast enhancement method to adaptively enhance contrast of challenging pixels that were otherwise indistinguishable, (2) a boundary refinement method to effectively improve segmentation accuracy at vessel borders based on Canny edge detection, and (3) a content-aware region-of-interest (ROI) adjustment method to automatically determine the locations and sizes of ROIs which contain ambiguous pixels and demand further verification. Extensive evaluation of our method is conducted on both 2D and 3D datasets. On a public 2D retinal dataset (named DRIVE (Staal 2004 IEEE Trans. Med. Imaging 23 501-9)) and our 2D clinical cerebral dataset, our approach achieves superior performance to the state-of-the-art methods including a vesselness based method (Frangi 1998 Int. Conf. on Medical Image Computing and Computer-Assisted Intervention) and an optimally oriented flux (OOF) based method (Law and Chung 2008 European Conf. on Computer Vision). An evaluation on 11 clinical 3D CTA cerebral datasets shows that our method can achieve 94% average accuracy with respect to the manual segmentation reference, which is 23% to 33% better than the five baseline methods (Yushkevich 2006 Neuroimage 31 1116-28; Law and Chung 2008
Generalised geometry for string corrections
Coimbra, André; Triendl, Hagen; Waldram, Daniel
2014-01-01
We present a general formalism for incorporating the string corrections in generalised geometry, which necessitates the extension of the generalised tangent bundle. Not only are such extensions obstructed; string symmetries and the existence of a well-defined effective action also require a precise choice of the (generalised) connection. The action takes a universal form given by a generalised Lichnerowicz-Bismut theorem. As examples of this construction we discuss the corrections linear in $\\alpha'$ in heterotic strings and the absence of such corrections for type II theories.
On the accurate estimation of gap fraction during daytime with digital cover photography
Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.
2015-12-01
Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered computing accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The novel method computes gap fraction from a single unsaturated DCP raw image, which is corrected for scattering effects by canopies using a sky image reconstructed from the raw-format image. To test the sensitivity of the gap fraction derived with the novel method to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REVs from 0 to -5. The novel method showed little variation of gap fraction across different REVs in both dense and sparse canopies over a diverse range of solar zenith angles. The perforated-panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method produced accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful for monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
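The core quantity, gap fraction as the sky-classified pixel fraction of a photograph, can be sketched as below. The fixed threshold and the toy image are assumptions for illustration; the paper's method instead reconstructs the sky image from the raw file and corrects for scattering by the canopy before classifying pixels.

```python
import numpy as np

def gap_fraction(image, threshold):
    """Gap fraction = fraction of pixels classified as sky.
    A fixed brightness threshold stands in for the paper's raw-image
    sky reconstruction and scattering correction (an assumption)."""
    return (image > threshold).mean()

# Hypothetical 4x4 'canopy photo': bright sky pixels (>200) against
# dark foliage pixels (<60).
img = np.array([[230, 40, 220, 35],
                [50, 240, 45, 210],
                [225, 55, 30, 215],
                [40, 235, 50, 45]])
print(gap_fraction(img, 128))  # 0.4375 (7 of 16 pixels are sky)
```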
Comparative evaluation of scatter correction techniques in 3D positron emission tomography
Zaidi, H
2000-01-01
Much research and development has been concentrated on the scatter compensation required for quantitative 3D PET. Increasingly sophisticated scatter correction procedures are under investigation, particularly those based on accurate scatter models, and iterative reconstruction-based scatter compensation approaches. The main difference among the correction methods is the way in which the scatter component in the selected energy window is estimated. Monte Carlo methods give further insight and might in themselves offer a possible correction procedure. Methods: five scatter correction methods are compared in this paper where applicable: the dual-energy window (DEW) technique, the convolution-subtraction (CVS) method, two variants of the Monte Carlo-based scatter correction technique (MCBSC1 and MCBSC2) and our newly developed statistical reconstruction-based scatter correction (SRBSC) method. These scatter correction techniques are evaluated using Monte Carlo simulation studies, experimental phantom measurements...
Kawashima, Yukio; Hirao, Kimihiko
2017-02-24
We introduced two methods to correct the singularity in the calculation of long-range Hartree-Fock (HF) exchange for long-range-corrected density functional theory (LC-DFT) calculations in plane-wave basis sets. The first method introduces an auxiliary function to cancel out the singularity. The second method introduces a truncated long-range Coulomb potential, which has no singularity. We assessed the introduced methods using the LC-BLYP functional by applying it to isolated systems of naphthalene and pyridine. We first compared the total energies and the HOMO energies of the singularity-corrected and uncorrected calculations and confirmed that singularity correction is essential for LC-DFT calculations using plane-wave basis sets. The LC-DFT results converged rapidly with respect to the cell size, as for the other functionals, and were in good agreement with the results obtained using Gaussian basis sets. LC-DFT succeeded in obtaining accurate orbital energies and excitation energies. We next applied LC-DFT with the singularity correction methods to electronic structure calculations of the extended systems Si and SiC. We confirmed that singularity correction is important for calculations of extended systems as well. The valence and conduction bands calculated by LC-BLYP showed good convergence with respect to the number of k points sampled. The introduced methods succeeded in overcoming the singularity problem in the HF exchange calculation. We investigated the effect of the singularity correction on excited-state calculations and found that more careful treatment of the singularities is required than in ground-state calculations. We finally examined the excitonic effect on the band gap of the extended systems. We calculated the excitation energies to the first excited state of the extended systems using a supercell model at the Γ point and found that the excitonic binding energy, supposed to be small for
Software for Correcting the Dynamic Error of Force Transducers
Naoki Miyashita
2014-07-01
Software which corrects the dynamic error of force transducers in impact force measurements using their own output signal has been developed. The software corrects the output waveform of the transducers using the output waveform itself, estimates its uncertainty and displays the results. In the experiment, the dynamic error of three transducers of the same model is evaluated using the Levitation Mass Method (LMM), in which the impact forces applied to the transducers are accurately determined as the inertial force of the moving part of an aerostatic linear bearing. The parameters for correcting the dynamic error are determined from the results of one set of impact measurements of one transducer. Then, the validity of the obtained parameters is evaluated using the results of the other sets of measurements of all three transducers. The uncertainties in the uncorrected force and those in the corrected force are also estimated. If manufacturers determine the correction parameters for each model using the proposed method, and provide the software with the parameters corresponding to each model, then users can obtain the waveform corrected against dynamic error and its uncertainty. The present status and the future prospects of the developed software are discussed in this paper.
A New Method for Correcting Vehicle License Plate Tilt
Mei-Sen Pan; Qi Xiong; Jun-Biao Yan
2009-01-01
In the course of vehicle license plate (VLP) automatic recognition, tilt correction is a very crucial process. According to the Karhunen-Loeve (K-L) transformation, the coordinates of characters in the image are arranged into a two-dimensional covariance matrix, on the basis of which the centring process is carried out. Then, the eigenvector and the rotation angle α are computed in turn, and the whole image is rotated by -α; thus, horizontal tilt correction is performed. In the vertical tilt correction process, three methods are put forward to compute the vertical tilt angle θ: the K-L transformation method, the line-fitting method based on K-means clustering (LFMBKC), and the line-fitting method based on least squares (LFMBLS). After a shear transformation (ST) is imposed on the rotated image, the final corrected image is obtained. The experimental results verify that the proposed method can be easily implemented, and can quickly and accurately obtain the tilt angle. It provides a new effective way for VLP image tilt correction.
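The horizontal-correction step described above (covariance of character coordinates, centring, principal eigenvector, rotation by -α) can be sketched as follows; the toy coordinates are hypothetical.

```python
import numpy as np

def horizontal_tilt_angle(points):
    """Tilt angle (degrees, folded into [-90, 90)) of the dominant
    text-line direction via K-L transformation: centre the character
    pixel coordinates, form the 2x2 covariance matrix, and take the
    angle of its principal eigenvector. Rotating the image by the
    negative of this angle levels the text line."""
    pts = np.array(points, dtype=float)
    pts -= pts.mean(axis=0)                      # centring step
    cov = pts.T @ pts / len(pts)                 # covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    major = vecs[:, np.argmax(vals)]             # principal axis
    ang = np.degrees(np.arctan2(major[1], major[0]))
    return (ang + 90.0) % 180.0 - 90.0           # fold sign ambiguity

# Hypothetical toy data: character centroids on a line tilted by 10 degrees.
theta = np.radians(10)
xs = np.linspace(0, 100, 7)
pts = np.c_[xs * np.cos(theta), xs * np.sin(theta)]
print(round(horizontal_tilt_angle(pts), 1))  # 10.0
```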
Long-Range Corrected Hybrid Density Functionals with Improved Dispersion Corrections
Lin, You-Sheng; Mao, Shan-Ping; Chai, Jeng-Da
2012-01-01
By incorporating the improved empirical atom-atom dispersion corrections from DFT-D3 [Grimme, S.; Antony, J.; Ehrlich, S.; Krieg, H. J. Chem. Phys. 2010, 132, 154104], two long-range corrected (LC) hybrid density functionals are proposed. Our resulting LC hybrid functionals, omegaM06-D3 and omegaB97X-D3, are shown to be accurate for a very wide range of applications, such as thermochemistry, kinetics, noncovalent interactions, frontier orbital energies, fundamental gaps, and long-range charge-transfer excitations, when compared with common global and LC hybrid functionals. Relative to omegaB97X-D [Chai, J.-D.; Head-Gordon, M. Phys. Chem. Chem. Phys. 2008, 10, 6615], omegaB97X-D3 (reoptimization of omegaB97X-D with improved dispersion corrections) is shown to be superior for non-bonded interactions, and similar in performance for bonded interactions, while omegaM06-D3 is shown to be superior for general applications.
Accurate estimation of the boundaries of a structured light pattern.
Lee, Sukhan; Bui, Lam Quang
2011-06-01
Depth recovery based on structured light using stripe patterns, especially for a region-based codec, demands accurate estimation of the true boundary of a light pattern captured on a camera image. This is because the accuracy of the estimated boundary has a direct impact on the accuracy of the depth recovery. However, recovering the true boundary of a light pattern is considered difficult due to the deformation incurred primarily by the texture-induced variation of the light reflectance at surface locales. Especially for heavily textured surfaces, the deformation of pattern boundaries becomes rather severe. We present here a novel (to the best of our knowledge) method to estimate the true boundaries of a light pattern that are severely deformed due to the heavy textures involved. First, a general formula that models the deformation of the projected light pattern at the imaging end is presented, taking into account not only the light reflectance variation but also the blurring along the optical passages. The local reflectance indices are then estimated by applying the model to two specially chosen reference projections, all-bright and all-dark. The estimated reflectance indices are then used to transform the edge-deformed, captured pattern signal into the edge-corrected, canonical pattern signal. A canonical pattern implies the virtual pattern that would have resulted if there were neither the reflectance variation nor the blurring in imaging optics. Finally, we estimate the boundaries of a light pattern by intersecting the canonical form of a light pattern with that of its inverse pattern. The experimental results show that the proposed method results in significant improvements in the accuracy of the estimated boundaries under various adverse conditions.
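A minimal sketch of the reference-projection idea: per-pixel reflectance is estimated from the all-bright and all-dark captures and used to map the captured pattern to its canonical form. Blurring along the optical passages, which the paper's full model includes, is omitted here, so this is only the reflectance-normalization half of the method.

```python
import numpy as np

def canonical_pattern(captured, all_bright, all_dark, eps=1e-6):
    """Map a captured stripe-pattern image to an approximately canonical
    (reflectance-normalized) signal. The per-pixel reflectance index is
    the span between the all-bright and all-dark reference captures;
    dividing by it removes texture-induced albedo variation."""
    span = np.clip(all_bright - all_dark, eps, None)
    return np.clip((captured - all_dark) / span, 0.0, 1.0)

# Toy example: a bright/dark stripe observed on a surface whose albedo
# varies across the image (texture), plus a small ambient term.
albedo = np.array([0.9, 0.9, 0.2, 0.2])      # texture-induced variation
pattern = np.array([1.0, 1.0, 0.0, 0.0])     # projected stripe (ideal)
ambient = 0.05
captured = albedo * pattern + ambient        # observed image
bright = albedo * 1.0 + ambient              # all-bright reference
dark = albedo * 0.0 + ambient                # all-dark reference
print(canonical_pattern(captured, bright, dark))  # [1. 1. 0. 0.]
```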
Spelling Correction in Agglutinative Languages
Oflazer, K
1994-01-01
This paper presents an approach to spelling correction in agglutinative languages that is based on two-level morphology and a dynamic programming based search algorithm. Spelling correction in agglutinative languages is significantly different than in languages like English. The concept of a word in such languages is much wider than the entries found in a dictionary, owing to productive word formation by derivational and inflectional affixations. After an overview of certain issues and relevant mathematical preliminaries, we formally present the problem and our solution. We then present results from our experiments with spelling correction in Turkish, a Ural-Altaic agglutinative language. Our results indicate that we can find the intended correct word in 95\\% of the cases and offer it as the first candidate in 74\\% of the cases, when the edit distance is 1.
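The candidate-generation side of such a corrector can be sketched as below for edit distance 1. A plain lexicon membership test stands in for the two-level morphological recognizer, which is the essential (and much harder) component for an agglutinative language like Turkish, where valid words cannot be enumerated in a dictionary.

```python
def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """All strings at edit distance 1 from `word`
    (deletions, transpositions, substitutions, insertions)."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {a + b[1:] for a, b in splits if b}
    transposes = {a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1}
    substitutes = {a + c + b[1:] for a, b in splits if b for c in alphabet}
    inserts = {a + c + b for a, b in splits for c in alphabet}
    return deletes | transposes | substitutes | inserts

def correct(word, recognizer):
    """Return candidate corrections accepted by `recognizer`. In
    Oflazer's method the recognizer is a two-level morphological
    analyzer and the search is pruned dynamically by a cut-off edit
    distance; here a membership test stands in for it (an assumption)."""
    if recognizer(word):
        return [word]
    return sorted(w for w in edits1(word) if recognizer(w))

lexicon = {"correct", "corrected", "correction"}
print(correct("corection", lexicon.__contains__))  # ['correction']
```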
Dispersion based beam tilt correction
Guetg, Marc W; Prat, Eduard; Reiche, Sven
2013-01-01
In Free Electron Lasers (FEL), a transverse centroid misalignment of longitudinal slices in an electron bunch reduces the effective overlap between radiation field and electron bunch and therefore the FEL performance. The dominant sources of slice misalignments for FELs are the incoherent and coherent synchrotron radiation within bunch compressors as well as transverse wake fields in the accelerating cavities. This is of particular importance for over-compression which is required for one of the key operation modes for the SwissFEL planned at the Paul Scherrer Institute. The centroid shift is corrected using corrector magnets in dispersive sections, e.g. the bunch compressors. First and second order corrections are achieved by pairs of sextupole and quadrupole magnets in the horizontal plane while skew quadrupoles correct to first order in the vertical plane. Simulations and measurements at the SwissFEL Injector Test Facility are done to investigate the proposed correction scheme for SwissFEL. This paper pres...
Quantum corrections for Boltzmann equation
Levy, Peter M.
2008-01-01
We present the lowest order quantum correction to the semiclassical Boltzmann distribution function, and the equation satisfied by this correction is given. Our equation for the quantum correction is obtained from the conventional quantum Boltzmann equation by explicitly expressing the Planck constant in the gradient approximation, and the quantum Wigner distribution function is expanded in powers of the Planck constant, too. The negative quantum correlation in the Wigner distribution function, which is just the quantum correction terms, is naturally singled out, thus obviating the need for the Husimi coarse-grain averaging that is usually done to remove the negative quantum part of the Wigner distribution function. We also discuss the classical limit of quantum thermodynamic entropy in the above framework.
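For reference, the lowest-order ($\hbar^2$) correction of the kind described here is, in the standard Wigner-Kirkwood form (a sketch; the paper's own conventions and normalization may differ):

$$ f_W(\mathbf{r},\mathbf{p}) \,=\, e^{-\beta H}\left[1 + \hbar^2\left(-\frac{\beta^2}{8m}\,\nabla^2 V + \frac{\beta^3}{24m}\,(\nabla V)^2 + \frac{\beta^3}{24m^2}\,(\mathbf{p}\cdot\nabla)^2 V\right) + O(\hbar^4)\right], $$

where $H = p^2/2m + V(\mathbf{r})$. The bracketed $\hbar^2$ terms can make the distribution locally negative, consistent with the negative quantum part of the Wigner function mentioned in the abstract.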
General correcting formula of forecasting?
Harin, Alexander
2009-01-01
A general correcting formula of forecasting (as a framework for long-use and standardized forecasts) is proposed. The formula provides new forecasting resources and areas of application including economic forecasting.
Error correcting coding for OTN
Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.
2010-01-01
Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number of such frames. In particular we argue that a three-error correcting BCH code is the best choice for the component code in such systems.
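As a rough illustration of why a t = 3 component code keeps overhead low, the sketch below computes the parameters of a triple-error-correcting primitive binary BCH code and the rate of a product code built from it. The m = 10 code length is an assumption for illustration, not the paper's exact OTN construction.

```python
def bch_params(m, t):
    """Parameters (n, k) of a primitive binary BCH code of length
    2^m - 1 correcting t errors, using the standard bound of m*t
    parity bits (exact for the small t considered here)."""
    n = 2**m - 1
    return n, n - m * t

def product_code_rate(m, t):
    """Rate of a product code whose rows and columns both use the
    (n, k) BCH code above; an illustrative sketch, not the exact
    OTN framing discussed in the paper."""
    n, k = bch_params(m, t)
    return (k / n) ** 2

n, k = bch_params(10, 3)
print(n, k, round(product_code_rate(10, 3), 4))  # 1023 993 0.9422
```

Even squared in the product construction, the t = 3 component keeps the overall rate above 94%, while hard-decision decoding of the product structure corrects many more than three errors per frame.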
Radiative corrections to Bose condensation
Gonzalez, A. (Academia de Ciencias de Cuba, La Habana. Inst. de Matematica, Cibernetica y Computacion)
1985-04-01
The Bose condensation of the scalar field in a theory behaving in the Coleman-Weinberg mode is considered. The effective potential of the model is computed within the semiclassical approximation in a dimensional regularization scheme. Radiative corrections are shown to introduce certain μ-dependent ultraviolet divergences in the effective potential coming from the many-particle theory. The weight of radiative corrections in the dynamics of the system is strongly modified by the charge density.
Proving Program Correctness. Volume V.
1981-11-01
Task 2. Proving Program Correctness (P.I.: J.C. Reynolds). This group is working towards programming language designs which increase the probability...certain syntactic difficulties: the natural abstract syntax is ambiguous, and syntactic correctness is violated by certain beta reductions. 3 - These...concept of a functor to express appropriate restrictions on implicit conversion functions. In a similar vein, we can use the concept of a natural
Quantum error correction for beginners.
Devitt, Simon J; Munro, William J; Nemoto, Kae
2013-07-01
Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future.
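The classical skeleton of the simplest QEC scheme, the three-qubit bit-flip (repetition) code, can be sketched as below: two parity checks locate any single flipped bit without reading out the encoded value, which is the essential feature that carries over to genuinely quantum codes.

```python
def encode(bit):
    return [bit, bit, bit]                 # |0> -> |000>, |1> -> |111>

def syndrome(word):
    """The two parity checks of the bit-flip code (classical analogue
    of measuring Z1Z2 and Z2Z3): they locate a single flipped position
    without revealing the encoded bit."""
    return (word[0] ^ word[1], word[1] ^ word[2])

LOOKUP = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(word):
    pos = LOOKUP[syndrome(word)]
    if pos is not None:
        word[pos] ^= 1                     # undo the located flip
    return word

def decode(word):
    return max(set(word), key=word.count)  # majority vote

# Every single bit-flip error on either codeword is corrected.
for bit in (0, 1):
    for pos in range(3):
        word = encode(bit)
        word[pos] ^= 1                     # inject one error
        assert decode(correct(word)) == bit
print("all six single-bit errors corrected")
```

The quantum version replaces the parity reads with stabilizer measurements so that superpositions survive; this sketch only shows the syndrome-lookup logic.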
Three-Dimensional Turbulent RANS Adjoint-Based Error Correction
Park, Michael A.
2003-01-01
Engineering problems commonly require functional outputs of computational fluid dynamics (CFD) simulations with specified accuracy. These simulations are performed with limited computational resources. Computable error estimates offer the possibility of quantifying accuracy on a given mesh and predicting a fine-grid functional on a coarser mesh. Such an estimate can be computed by solving the flow equations and the associated adjoint problem for the functional of interest. An adjoint-based error correction procedure is demonstrated for transonic inviscid and subsonic laminar and turbulent flow. A mesh adaptation procedure is formulated to target uncertainty in the corrected functional and terminate when the error remaining in the calculation is less than a user-specified tolerance. This adaptation scheme is shown to yield anisotropic meshes with corrected functionals that are more accurate for a given number of grid points than isotropically adapted and uniformly refined grids.
Tilt correction method of text image based on wavelet pyramid
Yu, Mingyang; Zhu, Qiguo
2017-04-01
Text images captured by a camera may be tilted and distorted, which is unfavorable for document character recognition. Therefore, a method of text image tilt correction based on a wavelet pyramid is proposed in this paper. The first step is to convert the captured text image to a binary image. After binarization, the image is decomposed by the wavelet transform to achieve noise reduction, enhancement and compression. Afterwards, edges are detected with the Canny operator and straight lines are extracted by the Radon transform. In the final step, the method computes the intersections of the straight lines and obtains the corrected text image from the intersection points via a perspective transformation. The experimental results show that this method can correct text images accurately.
Relativistic Corrections to the Zeeman Effect of Helium Atom
关晓旭; 李白文; 王治文
2002-01-01
The high-order relativistic corrections to the Zeeman g-factors of the helium atom are calculated. All the relativistic correction terms and the term describing the motion of the mass centre are treated as perturbations. Most of our results are in good agreement with those of Yan and Drake [Phys. Rev. A 50 (1994) R1980], who used wavefunctions constructed in Hylleraas coordinates. For the correction δg of the g-factor of the 3 3P state in 4He, our result, 2.91415 × 10^-7 a.u., should be more reasonable and accurate, although there are no experimental data available in the literature to compare.
Reflection error correction of gas turbine blade temperature
Kipngetich, Ketui Daniel; Feng, Chi; Gao, Shan
2016-03-01
Accurate measurement of gas turbine blades' temperature is one of the greatest challenges encountered in gas turbine temperature measurements. Within an enclosed gas turbine environment with surfaces of varying temperature and low emissivities, a new challenge is introduced into the use of radiation thermometers by the problem of reflection error. A method for correcting this error is proposed and demonstrated in this work through computer simulation and experiment. The method assumes that the emissivities of all surfaces exchanging thermal radiation are known. Simulations were carried out considering targets with low and high emissivities of 0.3 and 0.8 respectively, while experimental measurements were carried out on blades with an emissivity of 0.76. Simulated results showed the possibility of achieving an error of less than 1%, while the experiment reduced the error to 1.1%. It was thus concluded that the method is appropriate for correcting the reflection error commonly encountered in temperature measurement of gas turbine blades.
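The correction principle, subtract the reflected component (assuming known emissivities) and invert the emitted part for temperature, can be sketched in a broadband Stefan-Boltzmann form. The paper works with spectral radiation thermometry, so this is only illustrative, and the 1300 K surroundings are an assumption.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def corrected_temperature(measured, emissivity, reflected_irradiance):
    """Remove the reflected component from a radiometric reading and
    invert the emitted part for the surface temperature. Broadband
    simplification of the spectral treatment in the paper; all surface
    emissivities are assumed known, as the method requires."""
    emitted = measured - (1 - emissivity) * reflected_irradiance
    return (emitted / (emissivity * SIGMA)) ** 0.25

# Toy case: blade at 1100 K with emissivity 0.76, reflecting irradiance
# equivalent to hypothetical 1300 K surroundings.
T_true, eps = 1100.0, 0.76
G = SIGMA * 1300.0**4                        # reflected irradiance
M = eps * SIGMA * T_true**4 + (1 - eps) * G  # what the thermometer sees
T_apparent = (M / SIGMA) ** 0.25             # uncorrected reading (too high)
print(T_apparent, corrected_temperature(M, eps, G))
```

The uncorrected reading overestimates the blade temperature by tens of kelvin here; the correction recovers 1100 K exactly because the toy forward model matches the inversion.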
Correcting the Chromatic Aberration in Barrel Distortion of Endoscopic Images
Y. M. Harry Ng
2003-04-01
Modern endoscopes offer physicians a wide-angle field of view (FOV) for minimally invasive therapies. However, the high level of barrel distortion may prevent accurate perception of the image. Fortunately, this kind of distortion may be corrected by digital image processing. In this paper we investigate the chromatic aberrations in the barrel distortion of endoscopic images. In the past, chromatic aberration in endoscopes has been corrected by achromatic lenses or active lens control. In contrast, we take a computational approach, modifying the concept of image warping and the existing barrel distortion correction algorithm to tackle the chromatic aberration problem. In addition, an error function for determining the level of centroid coincidence is proposed. Simulation and experimental results confirm the effectiveness of our method.
Surface corrections to the shell-structure of the moment of inertia
Gorpinchenko, D V; Bartel, J; Blocki, J P
2015-01-01
The moment of inertia for nuclear collective rotations is derived within a semiclassical approach based on the Inglis cranking and Strutinsky shell-correction methods, improved by surface corrections within the non-perturbative periodic-orbit theory. For adiabatic (statistical-equilibrium) rotations it is approximated by the generalized rigid-body moment of inertia accounting for the shell corrections of the particle density. An improved phase-space trace formula allows one to express the shell components of the moment of inertia more accurately in terms of the free-energy shell correction, with their ratio evaluated within the extended Thomas-Fermi effective-surface approximation.
Long term changes of altimeter range and geophysical corrections at altimetry calibration sites
Andersen, Ole Baltazar; Cheng, Yongcun; Pascal Willis
2013-01-01
Accurate sea level trend determination is fundamentally related to calibration of both the instrument as well as to investigate if there are linear trends in the set of standard geophysical and range corrections applied to the sea level observations. Long term changes in range corrections can leak...... trends in the sum of range corrections are found for the calibrations sites both for local scales (within 50km around the selected site) and for regional scales (within 300km). However, the geophysical corrections accounting for atmospheric pressure loading and high frequency sea level variations...
An accurate and practical method for inference of weak gravitational lensing from galaxy images
Bernstein, Gary M.; Armstrong, Robert; Krawiec, Christina; March, Marisa C.
2016-07-01
We demonstrate highly accurate recovery of weak gravitational lensing shear using an implementation of the Bayesian Fourier Domain (BFD) method proposed by Bernstein & Armstrong, extended to correct for selection biases. The BFD formalism is rigorously correct for Nyquist-sampled, background-limited, uncrowded images of background galaxies. BFD does not assign shapes to galaxies, instead compressing the pixel data D into a vector of moments M, such that we have an analytic expression for the probability P(M|g) of obtaining the observations with gravitational lensing distortion g along the line of sight. We implement an algorithm for conducting BFD's integrations over the population of unlensed source galaxies which measures ≈10 galaxies s^-1 core^-1 with good scaling properties. Initial tests of this code on ≈10^9 simulated lensed galaxy images recover the simulated shear to a fractional accuracy of m = (2.1 ± 0.4) × 10^-3, substantially more accurate than has been demonstrated previously for any generally applicable method. Deep sky exposures generate a sufficiently accurate approximation to the noiseless, unlensed galaxy population distribution assumed as input to BFD. Potential extensions of the method include simultaneous measurement of magnification and shear; multiple-exposure, multiband observations; and joint inference of photometric redshifts and lensing tomography.
Surgical options for correction of refractive error following cataract surgery.
Abdelghany, Ahmed A; Alio, Jorge L
2014-01-01
Refractive errors are frequently found following cataract surgery and refractive lens exchange. Accurate biometric analysis, selection and calculation of the adequate intraocular lens (IOL) and modern techniques for cataract surgery all contribute to achieving the goal of cataract surgery as a refractive procedure with no refractive error. However, in spite of all these advances, residual refractive error still occasionally occurs after cataract surgery and laser in situ keratomileusis (LASIK) can be considered the most accurate method for its correction. Lens-based procedures, such as IOL exchange or piggyback lens implantation are also possible alternatives especially in cases with extreme ametropia, corneal abnormalities, or in situations where excimer laser is unavailable. In our review, we have found that piggyback IOL is safer and more accurate than IOL exchange. Our aim is to provide a review of the recent literature regarding target refraction and residual refractive error in cataract surgery.
Updating quasar bolometric luminosity corrections - III. [O iii] bolometric corrections
Pennell, Alison; Runnoe, Jessie C.; Brotherton, M. S.
2017-06-01
We present quasar bolometric corrections using the [O III] λ 5007 narrow emission line luminosity based on the detailed spectral energy distributions of 53 bright quasars at low to moderate redshift (0.0345 diversity, introduces scatter into the L_{[O III]}-Liso relationship. We found that the {[O III]} bolometric correction can be significantly improved by adding a term including the equivalent width ratio R_{Fe II} ≡ EW_{{Fe II}}/EW_{Hβ }, which is an EV1 indicator. Inclusion of R_{Fe II} in predicting Liso is significant at nearly the 3σ level and reduces the scatter and systematic offset of the luminosity residuals. Typically, {[O III]} bolometric corrections are adopted for Type 2 sources where the quasar continuum is not observed and in these cases, R_{Fe II} cannot be measured. We searched for an alternative measure of EV1 that could be measured in the optical spectra of Type 2 sources but were unable to identify one. Thus, the main contribution of this work is to present an improved {[O III]} bolometric correction based on measured bolometric luminosities and highlight the EV1 dependence of the correction in Type 1 sources.
Fully 3D refraction correction dosimetry system
Manjappa, Rakesh; Sharath Makki, S.; Kumar, Rajesh; Mohan Vasu, Ram; Kanhirodan, Rajan
2016-02-01
…medium is 71.8%, an increase of 6.4% compared to that achieved using the conventional ART algorithm. Smaller-diameter dosimeters are scanned in dry air using a wide-angle lens that collects refracted light. The images reconstructed using cone-beam geometry deteriorate in some planes, as those regions are not scanned. Refraction correction is important and needs to be taken into consideration to achieve quantitatively accurate dose reconstructions. Refraction modeling is crucial in array-based scanners, as it is not possible to identify refracted rays in the sinogram space.
Local stretch zeroing NMO correction
Kazemi, N.; Siahkoohi, H. R.
2012-01-01
In this paper we present a new method of normal move-out (NMO) correction, called the local stretch zeroing (LSZ) method, that avoids NMO stretch. The method eliminates the theoretical curves that generate the interpolated data samples responsible for NMO stretch. The pre-correction time sampling interval is preserved by reassigning and zero-padding the true data samples. The optimum mute zone selection feature of the LSZ method eliminates all interfering reflection events at far offsets. The resulting stacked section from the LSZ method generally contains higher frequency components than a conventional stack and preserves most of the shallow reflectors. The LSZ method requires that the zero-offset width of the time gate, i.e. the zero-offset time difference between two adjacent reflections, be larger than the dominant period. The major shortcoming of the method occurs when CMP data are over- or under-NMO corrected, where it loses its superiority. Both synthetic and real-world examples show the efficiency of the LSZ method over conventional NMO (CNMO) correction.
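For context, conventional NMO maps each offset trace back to zero-offset time along the hyperbola t(x) = sqrt(t0² + x²/v²), and the interpolation implied by that mapping is what lowers the frequency of shallow, far-offset events. A minimal illustration of the stretch factor follows; this is a sketch of the underlying geometry, not the LSZ implementation:

```python
import math

def nmo_time(t0, offset, v):
    # two-way traveltime (s) along the hyperbolic moveout curve
    # t0: zero-offset time (s), offset (m), v: stacking velocity (m/s)
    return math.sqrt(t0 ** 2 + (offset / v) ** 2)

def stretch_factor(t0, offset, v):
    # ratio of moveout time to zero-offset time; conventional NMO
    # lowers event frequency by roughly this factor, so values much
    # greater than 1 (shallow events at far offsets) mark severe stretch
    return nmo_time(t0, offset, v) / t0
```

For example, at 2 km offset and 2 km/s velocity, a 0.2 s event is stretched by a factor of about 5, while a 2 s event is stretched by only about 1.1, which is why stretch-related muting removes mainly shallow, far-offset data.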
2004-05-01
1. The first photograph on p12 of News in Physics Education January 2004 is of Prof. Paul Black and not Prof. Jonathan Osborne, as stated. 2. The review of Flowlog on p209 of the March 2004 issue wrongly gives the maximum sampling rate of the analogue inputs as 25 kHz (40 ms) instead of 25 kHz (40 µs) and the digital inputs as 100 kHz (10 ms) instead of 100 kHz (10 µs). 3. The letter entitled 'A trial of two energies' by Eric McIldowie on pp212-4 of the March 2004 issue was edited to fit the space available. We regret that a few small errors were made in doing this. Rather than detail these, the interested reader can access the whole of the original letter as a Word file from the link below.
2016-04-01
Sukcharanjit S, Tan AS, Loo AV, Chan XL, Wang CY. The effect of a forced-air warming blanket on patients’ end-tidal and transcutaneous carbon dioxide partial pressures during eye surgery under local anaesthesia: a single-blind, randomised controlled trial. Anaesthesia 2015; 70: 1390–4. In the article [1] by Sukcharanjit et al., data in the ‘Systolic blood pressure; mmHg’ row in Table 1 is listed incorrectly. It should be: 158.0 (14.3) in the Forced air warmer column and 160.9 (15.6) in the Heated Overblanket column.
1991-11-29
Because of a production error, the photographs of Pierre Chambon and Harald zur Hausen, which appeared on pages 1116 and 1117 of last week's issue (22 November), were transposed. Here's what you should have seen: Chambon is on the left, zur Hausen on the right.
1999-11-01
Synsedimentary deformation in the Jurassic of southeastern Utah—A case of impact shaking? COMMENT Geology, v. 27, p. 661 (July 1999) The sentence on p. 661, first column, second paragraph, line one, should read: The 1600 m of Pennsylvanian Paradox Formation is 75–90% salt in Arches National Park. The sentence on p. 661, second column, third paragraph, line seven, should read: This high-pressure hydrothermal solution created the clastic dikes, chert nodules from reprecipitated siliceous cement that have been called "siliceous impactites" (Kriens et al., 1997), and much of the present structure at Upheaval Dome by further faulting.
2016-09-01
The feature article “Neutrons for new drugs” (August pp26-29) stated that neutron crystallography was used to determine the structures of “wellknown complex biological molecules such as lysine, insulin and trypsin”.
2007-01-01
From left to right: Luis, Carmen, Mario, Christian and José listening to speeches by theorists Alvaro De Rújula and Luis Alvarez-Gaumé (right) at their farewell gathering on 15 May. We unfortunately cut out a part of the "Word of thanks" from the team retiring from Restaurant No. 1. The complete message is published below: Dear friends, You are the true "nucleus" of CERN. Every member of this extraordinary human mosaic will always remain in our affections and in our thoughts. We have all been very touched by your spontaneous generosity. Arrivederci, Mario. Au revoir, Christian. Hasta siempre, Carmen, José and Luis. PS: Lots of love to the theory team and to the hidden organisers. So long!
Local Correction of Boolean Functions
Alon, Noga
2011-01-01
A Boolean function f over n variables is said to be q-locally correctable if, given black-box access to a function g which is "close" to an isomorphism f_sigma of f, we can compute f_sigma(x) for any x in Z_2^n with good probability using q queries to g. We observe that any k-junta, that is, any function which depends only on k of its input variables, is O(2^k)-locally correctable. Moreover, we show that there are examples where this is essentially best possible, and locally correcting some k-juntas requires a number of queries which is exponential in k. These examples, however, are far from being typical, and indeed we prove that for almost every k-junta, O(k log k) queries suffice.
Hubeny, Veronika; Maloney, Alexander; Rangamani, Mukund
2005-02-07
We investigate the geometry of four dimensional black hole solutions in the presence of stringy higher curvature corrections to the low energy effective action. For certain supersymmetric two charge black holes these corrections drastically alter the causal structure of the solution, converting seemingly pathological null singularities into timelike singularities hidden behind a finite area horizon. We establish, analytically and numerically, that the string-corrected two-charge black hole metric has the same Penrose diagram as the extremal four-charge black hole. The higher derivative terms lead to another dramatic effect -- the gravitational force exerted by a black hole on an inertial observer is no longer purely attractive! The magnitude of this effect is related to the size of the compactification manifold.
Aberration Correction in Electron Microscopy
Rose, Harald H
2005-01-01
The resolution of conventional electron microscopes is limited by spherical and chromatic aberrations. Both defects are unavoidable in the case of static rotationally symmetric electromagnetic fields (Scherzer theorem). Multipole correctors and electron mirrors have been designed and built which compensate for these aberrations. The principles of correction are demonstrated for the tetrode mirror, the quadrupole-octopole corrector and the hexapole corrector. Electron mirrors require a magnetic beam separator free of second-order aberrations. The multipole correctors are highly symmetric telescopic systems compensating for the defects of the objective lens. The hexapole corrector has the simplest structure yet eliminates only the spherical aberration, whereas the mirror and the quadrupole-octopole corrector are able to correct for both aberrations. Chromatic correction is achieved in the latter corrector by crossed electric and magnetic quadrupoles acting as first-order Wien filters. Micrographs obtained...
Classical Corrections in String Cosmology
Brustein, Ram; Madden, Richard
1999-01-01
An important element in a model of non-singular string cosmology is a phase in which classical corrections saturate the growth of curvature in a deSitter-like phase with a linearly growing dilaton (an `algebraic fixed point'). As the form of the classical corrections is not well known, here we look for evidence, based on a suggested symmetry of the action, scale factor duality and on conformal field theory considerations, that they can produce this saturation. It has previously been observed that imposing scale factor duality on the $O(\\alpha')$ corrections is not compatible with fixed point behavior. Here we present arguments that these problems persist to all orders in $\\alpha'$. We also present evidence for the form of a solution to the equations of motion using conformal perturbation theory, examine its implications for the form of the effective action and find novel fixed point structure.
Binary Error Correcting Network Codes
Wang, Qiwen; Li, Shuo-Yen Robert
2011-01-01
We consider network coding for networks experiencing worst-case bit-flip errors, and argue that this is a reasonable model for highly dynamic wireless network transmissions. We demonstrate that in this setup prior network error-correcting schemes can be arbitrarily far from achieving the optimal network throughput. We propose a new metric for errors under this model. Using this metric, we prove a new Hamming-type upper bound on the network capacity. We also show a commensurate lower bound based on GV-type codes that can be used for error-correction. The codes used to attain the lower bound are non-coherent (do not require prior knowledge of network topology). The end-to-end nature of our design enables our codes to be overlaid on classical distributed random linear network codes. Further, we free internal nodes from having to implement potentially computationally intensive link-by-link error-correction.
Automatic orientation correction for radiographs
Luo, Hui; Luo, Jiebo; Wang, Xiaohui
2006-03-01
In picture archiving and communications systems (PACS), images need to be displayed in standardized ways for radiologists' interpretations. However, for most radiographs acquired by computed radiography (CR), digital radiography (DR), or digitized films, the image orientation is undetermined because of the variation of examination conditions and patient situations. To address this problem, an automatic orientation correction method is presented. It first detects the most indicative region for orientation in a radiograph, and then extracts a set of low-level visual features sensitive to rotation from the region. Based on these features, a trained classifier based on a support vector machine is employed to recognize the correct orientation of the radiograph and reorient it to a desired position. A large-scale experiment has been conducted on more than 12,000 radiographs covering a large variety of body parts and projections to validate the method. The overall performance is quite promising, with the success rate of orientation correction reaching 95.2%.
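The pipeline in this abstract (rotation-sensitive low-level features plus a classifier over candidate orientations) can be illustrated with a toy sketch. This is not the paper's SVM or feature set; the half-image intensity means below are hypothetical stand-ins for its visual features, and matching a single reference stands in for a trained classifier:

```python
import numpy as np

def orientation_features(img):
    # crude rotation-sensitive features: mean intensity of the top,
    # bottom, left and right halves of the image
    h, w = img.shape
    return np.array([img[:h // 2].mean(), img[h // 2:].mean(),
                     img[:, :w // 2].mean(), img[:, w // 2:].mean()])

def detect_rotation(img, reference_feats):
    # pick the number k of 90-degree counterclockwise rotations whose
    # application makes the image's features best match the reference
    return min(range(4), key=lambda k: np.linalg.norm(
        orientation_features(np.rot90(img, k)) - reference_feats))
```

Applying `np.rot90(img, detect_rotation(img, ref))` reorients the image; a real system would replace the reference-matching step with a classifier (the paper uses a support vector machine) trained on many labeled radiographs.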
Gravitomagnetic corrections on gravitational waves
Capozziello, S; Forte, L; Garufi, F; Milano, L
2009-01-01
Gravitational waveforms and production could be considerably affected by gravitomagnetic corrections in the relativistic theory of orbits. Besides the standard periastron effect of General Relativity, new nutation effects appear when c^{-3} corrections are taken into account. Such corrections emerge as soon as matter-current densities and vector gravitational potentials cannot be discarded from the dynamics. We study the gravitational waves emitted through the capture, in the gravitational field of massive binary systems (e.g. a very massive black hole onto which a stellar object is inspiralling), via the quadrupole approximation, considering precession and nutation effects. We present a numerical study to obtain the gravitational wave luminosity, the total energy output and the gravitational radiation amplitude. From a crude estimate of the expected number of events towards peculiar targets (e.g. globular clusters) and in particular, the rate of events per year for dense stellar clusters at the Galactic Cen...
An efficient and accurate method for calculating nonlinear diffraction beam fields
Jeong, Hyun Jo; Cho, Sung Jong; Nam, Ki Woong; Lee, Jang Hyun [Division of Mechanical and Automotive Engineering, Wonkwang University, Iksan (Korea, Republic of)
2016-04-15
This study develops an efficient and accurate method for calculating nonlinear diffraction beam fields propagating in fluids or solids. The Westervelt equation and quasilinear theory, from which the integral solutions for the fundamental and second harmonics can be obtained, are first considered. A computationally efficient method is then developed using a multi-Gaussian beam (MGB) model that easily separates the diffraction effects from the plane wave solution. The MGB models provide accurate beam fields when compared with the integral solutions for a number of transmitter-receiver geometries. These models can also serve as fast, powerful modeling tools for many nonlinear acoustics applications, especially in making diffraction corrections for the nonlinearity parameter determination, because of their computational efficiency and accuracy.
When correction turns positive: processing corrective prosody in Dutch.
Dimitrova, Diana V; Stowe, Laurie A; Hoeks, John C J
2015-01-01
Current research on spoken language does not provide a consistent picture as to whether prosody, the melody and rhythm of speech, conveys a specific meaning. Perception studies show that English listeners assign meaning to prosodic patterns, and, for instance, associate some accents with contrast, whereas Dutch listeners behave more controversially. In two ERP studies we tested how Dutch listeners process words carrying two types of accents, which either provided new information (new information accents) or corrected information (corrective accents), both in single sentences (experiment 1) and after corrective and new information questions (experiment 2). In both experiments corrective accents elicited a sustained positivity as compared to new information accents, which started earlier in context than in single sentences. The positivity was not modulated by the nature of the preceding question, suggesting that the underlying neural mechanism likely reflects the construction of an interpretation to the accented word, either by identifying an alternative in context or by inferring it when no context is present. Our experimental results provide strong evidence for inferential processes related to prosodic contours in Dutch.
Taverna, Ettore; Ufenast, Henri; Broffoni, Laura; Garavaglia, Guido
2013-07-01
The Latarjet procedure is a confirmed method for the treatment of shoulder instability in the presence of bone loss. It is a challenging procedure for which a key point is the correct placement of the coracoid graft onto the glenoid neck. We here present our technique for an arthroscopically assisted Latarjet procedure with a new drill guide, permitting accurate and reproducible positioning of the coracoid graft, with optimal compression of the graft onto the glenoid neck due to the perfect position of the screws: perpendicular to the graft and the glenoid neck and parallel to each other.
SELF CORRECTION WORKS BETTER THAN TEACHER CORRECTION IN EFL SETTING
Azizollah Dabaghi
2012-11-01
Learning a foreign language takes place step by step, and mistakes are to be expected at all stages of learning. EFL learners are usually afraid of making mistakes, which prevents them from being receptive and responsive. Overcoming the fear of mistakes depends on the way mistakes are rectified. Notions of autonomy and learner-centeredness suggest that in some settings a learner's self-correction of mistakes might be more beneficial for language learning than teacher correction. This assumption has been the subject of debate for some time. Some researchers believe that correction, whether by the teacher or by the learners themselves, is effective in showing them how their current interlanguage differs from the target (Long & Robinson, 1998). Others suggest that correcting students, whether directly or through recasts, is ambiguous and may be perceived by the learner as confirmation of meaning rather than feedback on form (Lyster, 1998a). This study investigates the effects of correction on Iranian intermediate EFL learners' writing composition at Payam Noor University. For this purpose, 90 English-major students studying at Isfahan Payam Noor University were invited to participate in the experiment. They all received a sample TOEFL test, and 60 participants whose scores were within one standard deviation of the mean were divided into two equal groups, experimental and control. The experimental group received correction during the experiment, while the control group remained intact and ordinary teaching processes went on. Each group received twelve sessions of two-hour classes every week on an advanced writing course in which some activities from Modern English (II) were selected. After the treatment both groups received an immediate post-test, and the experimental group took a second post-test as a delayed recall test with the same design as the
MAC: identifying and correcting annotation for multi-nucleotide variations.
Wei, Lei; Liu, Lu T; Conroy, Jacob R; Hu, Qiang; Conroy, Jeffrey M; Morrison, Carl D; Johnson, Candace S; Wang, Jianmin; Liu, Song
2015-08-01
Next-Generation Sequencing (NGS) technologies have rapidly advanced our understanding of human variation in cancer. To accurately translate the raw sequencing data into practical knowledge, annotation tools, algorithms and pipelines must be developed that keep pace with the rapidly evolving technology. Currently, a challenge exists in accurately annotating multi-nucleotide variants (MNVs). These tandem substitutions, when affecting multiple nucleotides within a single protein codon of a gene, result in a translated amino acid involving all nucleotides in that codon. Most existing variant callers report a MNV as individual single-nucleotide variants (SNVs), often resulting in multiple triplet codon sequences and incorrect amino acid predictions. To correct potentially misannotated MNVs among reported SNVs, a primary challenge resides in haplotype phasing which is to determine whether the neighboring SNVs are co-located on the same chromosome. Here we describe MAC (Multi-Nucleotide Variant Annotation Corrector), an integrative pipeline developed to correct potentially mis-annotated MNVs. MAC was designed as an application that only requires a SNV file and the matching BAM file as data inputs. Using an example data set containing 3024 SNVs and the corresponding whole-genome sequencing BAM files, we show that MAC identified eight potentially mis-annotated SNVs, and accurately updated the amino acid predictions for seven of the variant calls. MAC can identify and correct amino acid predictions that result from MNVs affecting multiple nucleotides within a single protein codon, which cannot be handled by most existing SNV-based variant pipelines. The MAC software is freely available and represents a useful tool for the accurate translation of genomic sequence to protein function.
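The codon-grouping step at the heart of the problem (do neighboring SNVs fall in the same protein codon, so their amino-acid effect must be predicted jointly?) can be sketched as follows. This is a toy illustration only, not MAC's implementation: `cds_start` is assumed to be a 0-based, in-frame coding-sequence origin, and real data would additionally require haplotype phasing from the BAM reads to confirm the SNVs are co-located on the same chromosome:

```python
def codon_of(pos, cds_start=0):
    # codon index of a 0-based coding-sequence position
    return (pos - cds_start) // 3

def group_mnvs(snvs, cds_start=0):
    """Group SNVs -- (pos, ref, alt) tuples sorted by pos -- that fall in
    the same codon. Each group with more than one SNV is a candidate MNV
    whose translated amino acid involves all nucleotides in that codon."""
    groups, current = [], []
    for snv in snvs:
        if current and codon_of(snv[0], cds_start) == codon_of(current[-1][0], cds_start):
            current.append(snv)
        else:
            if current:
                groups.append(current)
            current = [snv]
    if current:
        groups.append(current)
    return groups
```

Annotating each multi-SNV group as a single substitution, rather than as independent SNVs, is what prevents the incorrect amino-acid predictions described above.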
Detecting and Correcting Speech Repairs
Heeman, P A; Heeman, Peter; Allen, James
1994-01-01
Interactive spoken dialog provides many new challenges for spoken language systems. One of the most critical is the prevalence of speech repairs. This paper presents an algorithm that detects and corrects speech repairs based on finding the repair pattern. The repair pattern is built by finding word matches and word replacements, and identifying fragments and editing terms. Rather than using a set of prebuilt templates, we build the pattern on the fly. In a fair test, our method, when combined with a statistical model to filter possible repairs, was successful at detecting and correcting 80% of the repairs, without using prosodic information or a parser.
Self-correcting Multigrid Solver
Jerome L.V. Lewandowski
2004-06-29
A new multigrid algorithm based on the method of self-correction for the solution of elliptic problems is described. The method exploits information contained in the residual to dynamically modify the source term (right-hand side) of the elliptic problem. It is shown that the self-correcting solver is more efficient at damping the short wavelength modes of the algebraic error than its standard equivalent. When used in conjunction with a multigrid method, the resulting solver displays an improved convergence rate with no additional computational work.
Correction of gene expression data
Darbani Shirvanehdeh, Behrooz; Stewart, C. Neal, Jr.; Noeparvar, Shahin
2014-01-01
This report investigates for the first time the potential inter-treatment bias source of cell number for gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies… We present an analytical approach to examine the suitability of correction methods by considering the inter-treatment bias as well as the inter-replicate variance, which allows use of the best correction method with minimum residual bias. Analyses of RNA sequencing and microarray data showed that the efficiencies…
Bingham, Geoffrey P; Lind, Mats
2008-04-01
We investigated the ability to perceive the metric shape of elliptical cylinders. A large number of previous studies have shown that small perspective variations (…) yield poor perception of metric shape. If space perception is affine (Koenderink & van Doorn, 1991), observers are unable to compare or relate lengths in depth to frontoparallel lengths (i.e., widths). Frontoparallel lengths can be perceived correctly, whereas lengths in depth generally are not. We measured reaches to evaluate shape perception and investigated whether larger perspective variations would allow accurate perception of shape. In Experiment 1, we replicated previous results showing poor perception with small perspective variations. In Experiment 2, we found that a 90 degrees continuous change in perspective, which swapped depth and width, allowed accurate perception of the depth/width aspect ratio. In Experiment 3, we found that discrete views differing by 90 degrees were insufficient to allow accurate perception of metric shape and that perception of a continuous perspective change was required. In Experiment 4, we investigated continuous perspective changes of 30 degrees, 45 degrees, 60 degrees, and 90 degrees and discovered that a 45 degrees change or greater allowed accurate perception of the aspect ratio and that less than this did not. In conclusion, we found that perception of metric shape is possible with continuous perspective transformations somewhat larger than those investigated in the substantial number of previous studies.
Touchless attitude correction for satellite with constant magnetic moment
Ao, Hou-jun; Yang, Le-ping; Zhu, Yan-wei; Zhang, Yuan-wen; Huang, Huan
2017-09-01
Rescue of a satellite with an attitude fault is of great value. A satellite with an improper injection attitude may lose contact with the ground as the antenna points in the wrong direction, or encounter energy problems as the solar arrays do not face the sun. An improper uploaded command may set the attitude out of control, as exemplified by the Japanese Hitomi spacecraft. In engineering practice, traditional physical-contact approaches have been applied, yet with a potential risk of collision and a lack of versatility, since the mechanical systems are mission-specific. This paper puts forward a touchless attitude correction approach in which three satellites are considered, one having a constant dipole and two having magnetic coils to control the attitude of the first. Particular correction configurations are designed and analyzed to maintain the target's orbit during the attitude correction process. A reference coordinate system is introduced to simplify the control process and avoid the singular-value problem of Euler angles. Based on spherical-triangle relations, the accurately varying geomagnetic field is considered in the attitude dynamic model. A sliding mode control method is utilized to design the correction law. Finally, numerical simulation is conducted to verify the theoretical derivation. It can be safely concluded that the non-contact attitude correction approach for a satellite with a uniaxial constant magnetic moment is feasible and potentially applicable to on-orbit operations.
Atmospheric Error Correction of the Laser Beam Ranging
J. Saydi
2014-01-01
Atmospheric models based on surface measurements of pressure, temperature, and relative humidity have been used to increase laser ranging accuracy by ray tracing. Atmospheric refraction can cause significant errors in laser ranging systems. In the present research, the atmospheric effects on the laser beam were investigated using the principles of laser ranging. The atmospheric correction was calculated for 0.532, 1.3, and 10.6 micron wavelengths under the weather conditions of Tehran, Isfahan, and Bushehr in Iran from March 2012 to March 2013, on the basis of monthly mean meteorological data received from stations in those cities. The atmospheric correction was calculated for 11, 100, and 200 kilometer laser beam propagation paths under 30°, 60°, and 90° elevation angles for each propagation. The results showed that for the same months and beam emission angles, the atmospheric correction was most accurate for the 10.6 micron wavelength. The laser ranging error decreased with increasing laser emission angle. The atmospheric corrections from the Marini-Murray and Mendes-Pavlis models for 0.532 micron were compared.
B. Thrasher
2012-09-01
When applying a quantile mapping-based bias correction to daily temperature extremes simulated by a global climate model (GCM), the transformed values of maximum and minimum temperatures are changed, and the diurnal temperature range (DTR) can become physically unrealistic. While the causes are not thoroughly explored, there is a strong relationship between GCM biases in snow albedo feedback during snowmelt and bias correction resulting in unrealistic DTR values. We propose a technique to bias correct DTR, based on comparing observations and GCM historical simulations, and combine it with either bias correcting daily maximum temperatures and calculating daily minimum temperatures, or vice versa. By basing the bias correction on a base period of 1961-1980 and validating it during a test period of 1981-1999, we show that bias correcting DTR and maximum daily temperature can produce more accurate estimates of daily temperature extremes while avoiding the pathological cases of unrealistic DTR values.
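A minimal empirical quantile-mapping sketch of this idea (bias correct Tmax and DTR separately, then derive Tmin) follows. It assumes matched historical samples `obs` and `mod` from the base period and is an illustration of the approach, not the authors' code:

```python
import numpy as np

def quantile_map(x, obs, mod):
    # Empirical quantile mapping: find x's quantile in the model's
    # historical distribution, return the observed value at that quantile.
    q = np.searchsorted(np.sort(mod), x) / float(len(mod))
    return float(np.quantile(obs, min(max(q, 0.0), 1.0)))

def corrected_tmin(tmax_raw, dtr_raw, obs_tmax, mod_tmax, obs_dtr, mod_dtr):
    # Bias correct Tmax and DTR, then derive Tmin = Tmax_bc - DTR_bc so
    # the diurnal range stays physically plausible (Tmax >= Tmin when
    # the corrected DTR is positive).
    tmax_bc = quantile_map(tmax_raw, obs_tmax, mod_tmax)
    dtr_bc = quantile_map(dtr_raw, obs_dtr, mod_dtr)
    return tmax_bc - dtr_bc
```

Correcting Tmax and Tmin independently is what allows their difference to go unrealistic; deriving one of the pair from the corrected DTR removes that failure mode by construction.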
Accurate Jones Matrix of the Practical Faraday Rotator
王林斗; 祝昇翔; 李玉峰; 邢文烈; 魏景芝
2003-01-01
The Jones matrix of practical Faraday rotators is often used in engineering calculations of non-reciprocal optical fields. Nevertheless, only an approximate Jones matrix of practical Faraday rotators has been available to date. Based on the theory of polarized light, this paper presents the accurate Jones matrix of practical Faraday rotators, and an experiment has been carried out to verify its validity. This matrix accurately describes the optical characteristics of practical Faraday rotators, including rotation, loss and depolarization of the polarized light. The accurate Jones matrix can be used to obtain accurate results for how a practical Faraday rotator transforms polarized light, paving the way for accurate analysis and calculation of practical Faraday rotators in engineering applications.
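For orientation, the simplest lossy Jones-matrix model of a Faraday rotator combines a rotation with an isotropic amplitude loss. This is an idealized sketch, not the paper's accurate matrix: true depolarization strictly requires the Mueller formalism, and the angle and transmittance below are hypothetical:

```python
import numpy as np

def faraday_jones(theta_rad, transmittance=1.0):
    """Idealized Jones matrix of a lossy Faraday rotator: rotation by theta
    combined with an isotropic amplitude factor sqrt(T). Depolarization is
    not representable in this simple form."""
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    return np.sqrt(transmittance) * np.array([[c, -s], [s, c]])

# Horizontally polarized input rotated by 45 degrees with 10% power loss
E_in = np.array([1.0, 0.0])
J = faraday_jones(np.pi / 4, transmittance=0.9)
E_out = J @ E_in
power = np.vdot(E_out, E_out).real  # ~0.9: the power transmittance survives
```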
Retrograde weight implantation for correction of lagophthalmos.
Kao, Chuan-Hsiang; Moe, Kris S
2004-09-01
Gold weight implantation is the most commonly used method for surgical correction of paralytic lagophthalmos. Numerous techniques for placement of the weight have been described, yet complications with these methods continue to occur (implant migration or extrusion, wound infection, failure to correct the lagophthalmos, and excessive postoperative ptosis). We developed a retrograde, postlevator aponeurosis method for implantation to improve the placement and fixation of the weight. This study describes the rationale, technique, and surgical outcome of the retrograde approach. Retrospective analysis. Data maintained and collected on 25 consecutive cases of retrograde upper lid weight implantation for paralytic lagophthalmos. Pre- and postoperative photographs were obtained, and patients were followed for at least 6 months. All procedures were performed by or under the direction of a single surgeon at tertiary academic medical centers (University of California, San Diego and University of Zurich, Switzerland). Twenty-five consecutive patients were evaluated, 16 male and 9 female, ranging in age from 27 to 86 years. There were no surgical failures or perioperative complications and no instances of implant migration or extrusion. One patient developed a delayed infection requiring removal of the implant, and one patient required replacement of the gold weight with a platinum chain implant to better fit the contour of her eyelid. Retrograde implantation allows more accurate placement of the weight while creating a permanent circumferential seal for fixation. The procedure is minimally invasive, less traumatic than previous methods, and produces an excellent cosmetic result. The efficacy has been demonstrated in the outcome of the 25 cases described in this study.
Accurate and Timely Forecasting of CME-Driven Geomagnetic Storms
Chen, J.; Kunkel, V.; Skov, T. M.
2015-12-01
Wide-spread and severe geomagnetic storms are primarily caused by the ejecta of coronal mass ejections (CMEs) that impose long durations of strong southward interplanetary magnetic field (IMF) on the magnetosphere, the duration and magnitude of the southward IMF (Bs) being the main determinants of geoeffectiveness. Another important quantity to forecast is the arrival time of the expected geoeffective CME ejecta. In order to accurately forecast these quantities in a timely manner (say, 24-48 hours of advance warning time), it is necessary to calculate the evolving CME ejecta (its structure and magnetic field vector in three dimensions) using remote sensing solar data alone. We discuss a method based on the validated erupting flux rope (EFR) model of CME dynamics. It has been shown using STEREO data that the model can calculate the correct size, magnetic field, and plasma parameters of a CME ejecta detected at 1 AU, using the observed CME position-time data alone as input (Kunkel and Chen 2010). One disparity is in the arrival time, which is attributed to the simplified geometry of the circular toroidal axis of the CME flux rope. Accordingly, the model has been extended to self-consistently include the transverse expansion of the flux rope (Kunkel 2012; Kunkel and Chen 2015). We show that the extended formulation provides a better prediction of arrival time even if the CME apex does not propagate directly toward the earth. We apply the new method to a number of CME events and compare predicted flux ropes at 1 AU to the observed ejecta structures inferred from in situ magnetic and plasma data. The EFR model also predicts the asymptotic ambient solar wind speed (Vsw) for each event, which has not been validated yet. The predicted Vsw values are tested using the ENLIL model. We discuss the minimum and sufficient required input data for an operational forecasting system for predicting the drivers of large geomagnetic storms. Kunkel, V., and Chen, J., ApJ Lett, 715, L80, 2010. Kunkel, V., Ph
UAV multirotor platform for accurate turbulence measurements in the atmosphere
Carbajo Fuertes, Fernando; Wilhelm, Lionel; Sin, Kevin Edgar; Hofer, Matthias; Porté-Agel, Fernando
2017-04-01
One of the most challenging tasks in atmospheric field studies for wind energy is to obtain accurate turbulence measurements at any location inside the region of interest for a wind farm study. This volume would ideally extend from several hundred meters to several kilometers around the farm and from ground level to the top of the boundary layer. An array of meteorological masts equipped with several sonic anemometers covering all points of interest would be best in terms of accuracy and data availability, but it is an obviously unfeasible solution. On the other hand, the evolution of wind LiDAR technology allows measurement at any point in space, but it involves two important limitations: the relatively low spatial and temporal resolution compared to a sonic anemometer, and the fact that the measurements are limited to the velocity component parallel to the laser beam (radial velocity). To overcome these drawbacks, a UAV multirotor platform has been developed. It is based on a state-of-the-art octocopter with enough payload to carry laboratory-grade instruments for the measurement of time-resolved atmospheric pressure, the three-component velocity vector and temperature, and enough autonomy to fly for 10 to 20 minutes, which is a standard averaging time in most atmospheric measurement applications. The UAV uses a gyroscope, an accelerometer and a GPS, and an algorithm has been developed and integrated to correct for any orientation and movement. This platform opens many possibilities for studying features that until now have been examined almost exclusively in wind tunnels, such as wind turbine blade tip vortex characteristics, near-wake to far-wake transition, momentum entrainment from the higher part of the boundary layer in wind farms, etc. The validation of this new measurement technique has been performed against sonic anemometry in terms of wind speed and temperature time series as well as
Bunch mode specific rate corrections for PILATUS3 detectors
Trueb, P., E-mail: peter.trueb@dectris.com [DECTRIS Ltd, 5400 Baden (Switzerland); Dejoie, C. [ETH Zurich, 8093 Zurich (Switzerland); Kobas, M. [DECTRIS Ltd, 5400 Baden (Switzerland); Pattison, P. [EPF Lausanne, 1015 Lausanne (Switzerland); Peake, D. J. [School of Physics, The University of Melbourne, Victoria 3010 (Australia); Radicci, V. [DECTRIS Ltd, 5400 Baden (Switzerland); Sobott, B. A. [School of Physics, The University of Melbourne, Victoria 3010 (Australia); Walko, D. A. [Argonne National Laboratory, Argonne, IL 60439 (United States); Broennimann, C. [DECTRIS Ltd, 5400 Baden (Switzerland)
2015-04-09
The count rate behaviour of PILATUS3 detectors has been characterized for seven bunch modes at four different synchrotrons. The instant retrigger technology of the PILATUS3 application-specific integrated circuit is found to reduce the dependency of the required rate correction on the synchrotron bunch mode. The improvement of using bunch mode specific rate corrections based on a Monte Carlo simulation is quantified. PILATUS X-ray detectors are in operation at many synchrotron beamlines around the world. This article reports on the characterization of the new PILATUS3 detector generation at high count rates. As for all counting detectors, the measured intensities have to be corrected for the dead-time of the counting mechanism at high photon fluxes. The large number of different bunch modes at these synchrotrons as well as the wide range of detector settings presents a challenge for providing accurate corrections. To avoid the intricate measurement of the count rate behaviour for every bunch mode, a Monte Carlo simulation of the counting mechanism has been implemented, which is able to predict the corrections for arbitrary bunch modes and a wide range of detector settings. This article compares the simulated results with experimental data acquired at different synchrotrons. It is found that the usage of bunch mode specific corrections based on this simulation improves the accuracy of the measured intensities by up to 40% for high photon rates and highly structured bunch modes. For less structured bunch modes, the instant retrigger technology of PILATUS3 detectors substantially reduces the dependency of the rate correction on the bunch mode. The acquired data also demonstrate that the instant retrigger technology allows for data acquisition up to 15 million photons per second per pixel.
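A minimal Monte Carlo of dead-time losses under a bunch structure, in the spirit of (but far simpler than) the simulation described, might look as follows. All rates, dead times, and the two-level fill pattern are illustrative assumptions, and the counting model is plain nonparalyzable rather than the instant-retrigger logic of the actual ASIC:

```python
import random

def simulate_counts(rate_per_s, dead_time_s, duration_s,
                    bunch_period_s, fill, seed=1):
    """Monte Carlo of a counting pixel with nonparalyzable dead time.
    Photons arrive only during the filled fraction of each bunch period,
    a crude stand-in for a synchrotron bunch mode. Returns (true, counted)."""
    rng = random.Random(seed)
    t, true_hits, counted, busy_until = 0.0, 0, 0, -1.0
    eff_rate = rate_per_s / fill  # instantaneous rate inside filled gaps
    while t < duration_s:
        t += rng.expovariate(eff_rate)
        # Fold into the bunch structure: keep only arrivals in the filled part
        if (t % bunch_period_s) / bunch_period_s > fill:
            continue
        true_hits += 1
        if t >= busy_until:          # detector ready: count and go dead
            counted += 1
            busy_until = t + dead_time_s

    return true_hits, counted

true_hits, counted = simulate_counts(
    rate_per_s=2e6, dead_time_s=120e-9, duration_s=0.05,
    bunch_period_s=1e-6, fill=0.5)
correction = true_hits / counted  # the bunch-mode-specific rate correction
```

Running the same simulation across fill patterns shows why the required correction factor depends on the bunch mode: the same average flux concentrated into shorter gaps produces higher instantaneous rates and larger dead-time losses.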
Corrective Physical Education. Revised Edition.
Wilmington Public Schools, DE.
This guide, prepared to assist students who have postural and other remedial defects, is divided into four sections. Section one outlines the organization and administration of a remedial physical education program and gives information concerning the administration of alignment tests and corrections. Section two discusses anteroposterior…
The correct "ball bearings" data.
Caroni, C
2002-12-01
The famous data on fatigue failure times of ball bearings have been quoted incorrectly from Lieblein and Zelen's original paper. The correct data include censored values, as well as non-fatigue failures that must be handled appropriately. They could be described by a mixture of Weibull distributions, corresponding to different modes of failure.
"Free Speech" and "Political Correctness"
Scott, Peter
2016-01-01
"Free speech" and "political correctness" are best seen not as opposing principles, but as part of a spectrum. Rather than attempting to establish some absolute principles, this essay identifies four trends that impact on this debate: (1) there are, and always have been, legitimate debates about the--absolute--beneficence of…
Multilingual text induced spelling correction
Reynaert, M.W.C.
2004-01-01
We present TISC, a multilingual, language-independent and context-sensitive spelling checking and correction system designed to facilitate the automatic removal of non-word spelling errors in large corpora. Its lexicon is derived from raw text corpora, without supervision, and contains word unigrams
Speech Correction in the Schools.
Eisenson, Jon; Ogilvie, Mardel
An introduction to the problems and therapeutic needs of school age children whose speech requires remedial attention, the text is intended for both the classroom teacher and the speech correctionist. General considerations include classification and incidence of speech defects, speech correction services, the teacher as a speaker, the mechanism…
ADMINISTRATIVE GUIDE IN SPEECH CORRECTION.
HEALEY, WILLIAM C.
WRITTEN PRIMARILY FOR SCHOOL SUPERINTENDENTS, PRINCIPALS, SPEECH CLINICIANS, AND SUPERVISORS, THIS GUIDE OUTLINES THE MECHANICS OF ORGANIZING AND CONDUCTING SPEECH CORRECTION ACTIVITIES IN THE PUBLIC SCHOOLS. IT INCLUDES THE REQUIREMENTS FOR CERTIFICATION OF A SPEECH CLINICIAN IN MISSOURI AND DESCRIBES ESSENTIAL STEPS FOR THE DEVELOPMENT OF A…
CORRECTIVE ACTION IN CAR MANUFACTURING
H. Rohne
2012-01-01
Full Text Available
ENGLISH ABSTRACT: In this paper the important issues involved in successfully implementing corrective action systems in quality management are discussed. The work is based on experience in implementing and operating such a system at an automotive manufacturing enterprise in South Africa. The core of a corrective action system is good documentation, supported by a computerised information system. Secondly, a systematic problem-solving methodology is essential to resolve the quality-related problems identified by the system. In the following paragraphs the general corrective action process is discussed and the elements of a corrective action system are identified, followed by a more detailed discussion of each element. Finally, specific results from the application are discussed.
AFRIKAANSE OPSOMMING (translated): Important considerations in the successful implementation of corrective action systems in quality management are discussed in this article. The work is based on experience in implementing and operating such a system at a motor manufacturer in South Africa. The core of a corrective action system is good documentation, supported by a computerised information system. Secondly, a systematic problem-solving methodology is needed to address the quality-related problems that the system identifies. In the following paragraphs the general corrective action process is discussed and the elements of the corrective action system are identified. Each element is then discussed in more detail. Finally, specific results from the application are briefly treated.
Biomimetic Approach for Accurate, Real-Time Aerodynamic Coefficients Project
National Aeronautics and Space Administration — Aerodynamic and structural reliability and efficiency depends critically on the ability to accurately assess the aerodynamic loads and moments for each lifting...
Sekine, Tetsuro; Ter Voert, Edwin E G W; Warnock, Geoffrey; Buck, Alfred; Huellner, Martin; Veit-Haibach, Patrick; Delso, Gaspar
2016-12-01
Accurate attenuation correction (AC) on PET/MR is still challenging. The purpose of this study was to evaluate the clinical feasibility of AC based on fast zero-echo-time (ZTE) MRI by comparing it with the default atlas-based AC on a clinical PET/MR scanner.
The statistical nature of the second order corrections to the thermal SZE
2004-01-01
This paper shows that the accepted expressions for the second order corrections in the parameter $z$ to the thermal Sunyaev-Zel'dovich effect can be accurately reproduced by a simple convolution integral approach. This representation allows the second order SZE corrections to be separated into two types of components: one associated with a single line broadening, directly related to the even derivative terms present in the distortion intensity curve, while the other is related to a frequency shift, ...
Short- and long-range corrected hybrid density functionals with the D3 dispersion corrections
Wang, Chih-Wei; Chai, Jeng-Da
2016-01-01
We propose a short- and long-range corrected (SLC) hybrid scheme employing 100% Hartree-Fock (HF) exchange at both zero and infinite interelectronic distances, wherein three SLC hybrid density functionals with the D3 dispersion corrections (SLC-LDA-D3, SLC-PBE-D3, and SLC-B97-D3) are developed. SLC-PBE-D3 and SLC-B97-D3 are shown to be accurate for a very diverse range of applications, such as core ionization and excitation energies, thermochemistry, kinetics, noncovalent interactions, dissociation of symmetric radical cations, vertical ionization potentials, vertical electron affinities, fundamental gaps, and valence, Rydberg, and long-range charge-transfer excitation energies. Relative to ωB97X-D, SLC-B97-D3 provides significant improvement for core ionization and excitation energies and noticeable improvement for the self-interaction, asymptote, energy-gap, and charge-transfer problems, while performing similarly for thermochemistry, kinetics, and noncovalent interactions.
Correcting ligands, metabolites, and pathways
Vriend Gert
2006-11-01
Full Text Available Abstract Background A wide range of research areas in bioinformatics, molecular biology and medicinal chemistry require precise chemical structure information about molecules and reactions, e.g. drug design, ligand docking, metabolic network reconstruction, and systems biology. Most available databases, however, treat chemical structures more as illustrations than as data fields in their own right. Lack of chemical accuracy impedes progress in the areas mentioned above. We present a database of metabolites called BioMeta that augments the existing pathway databases by explicitly assessing the validity, correctness, and completeness of chemical structure and reaction information. Description The main bulk of the data in BioMeta were obtained from the KEGG Ligand database. We developed a tool for chemical structure validation which assesses the chemical validity and stereochemical completeness of a molecule description. The validation tool was used to examine the compounds in BioMeta, showing that a relatively small number of compounds had an incorrect constitution (connectivity only, not considering stereochemistry) and that a considerable number (about one third) had incomplete or even incorrect stereochemistry. We made a large effort to correct the errors and to complete the structural descriptions. A total of 1468 structures were corrected and/or completed. We also established the reaction balance of the reactions in BioMeta and corrected 55% of the unbalanced (stoichiometrically incorrect) reactions in an automatic procedure. The BioMeta database was implemented in PostgreSQL and provided with a web-based interface. Conclusion We demonstrate that the validation of metabolite structures and reactions is a feasible and worthwhile undertaking, and that the validation results can be used to trigger corrections and improvements to BioMeta, our metabolite database. BioMeta provides some tools for rational drug design, reaction searches, and
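The reaction-balance step can be illustrated with a toy element-count check. The parser below handles only simple molecular formulas without parentheses or charges, and is a hypothetical sketch rather than BioMeta's actual validation tool:

```python
import re
from collections import Counter

def parse_formula(formula):
    """Element counts of a flat molecular formula, e.g. 'C6H12O6'.
    No parentheses, hydrates or charges: enough for a balance sketch."""
    counts = Counter()
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if elem:
            counts[elem] += int(num) if num else 1
    return counts

def is_balanced(reactants, products):
    """Stoichiometric balance check: summed element counts must match.
    Each side is a list of (coefficient, formula) pairs."""
    def side_counts(side):
        total = Counter()
        for coeff, formula in side:
            for elem, n in parse_formula(formula).items():
                total[elem] += coeff * n
        return total
    return side_counts(reactants) == side_counts(products)

# Glucose + ATP -> glucose 6-phosphate + ADP (neutral formulas)
ok = is_balanced([(1, "C6H12O6"), (1, "C10H16N5O13P3")],
                 [(1, "C6H13O9P"), (1, "C10H15N5O10P2")])  # True: balanced
```

A real checker must also carry stereochemistry and implicit protons/water consistently, which is exactly where the automatically corrected 55% of unbalanced reactions would come from.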
7 CFR 800.165 - Corrected certificates.
2010-01-01
... this process shall be corrected according to this section. (b) Who may correct. Only official personnel.... According to this section and the instructions, corrected certificates shall show (i) the terms “Corrected... that has been superseded by another certificate or on the basis of a subsequent analysis for...
5 CFR 1601.34 - Error correction.
2010-01-01
... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Error correction. 1601.34 Section 1601.34... Contribution Allocations and Interfund Transfer Requests § 1601.34 Error correction. Errors in processing... in the wrong investment fund, will be corrected in accordance with the error correction...
Speed-of-sound compensated photoacoustic tomography for accurate imaging
Jose, Jithin; Steenbergen, Wiendelt; Slump, Cornelis H; van Leeuwen, Ton G; Manohar, Srirang
2012-01-01
In most photoacoustic (PA) measurements, variations in speed-of-sound (SOS) of the subject are neglected under the assumption of acoustic homogeneity. Biological tissue with spatially heterogeneous SOS cannot be accurately reconstructed under this assumption. We present experimental and image reconstruction methods with which 2-D SOS distributions can be accurately acquired and reconstructed, and with which the SOS map can be used subsequently to reconstruct highly accurate PA tomograms. We begin with a 2-D iterative reconstruction approach in an ultrasound transmission tomography (UTT) setting, which uses ray refracted paths instead of straight ray paths to recover accurate SOS images of the subject. Subsequently, we use the SOS distribution in a new 2-D iterative approach, where refraction of rays originating from PA sources are accounted for in accurately retrieving the distribution of these sources. Both the SOS reconstruction and SOS-compensated PA reconstruction methods utilize the Eikonal equation to m...
Video Error Correction Using Steganography
Robie David L
2002-01-01
Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and applies several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
Proximity effect correction sensitivity analysis
Zepka, Alex; Zimmermann, Rainer; Hoppe, Wolfgang; Schulz, Martin
2010-05-01
Determining the quality of a proximity effect correction (PEC) is often done via 1-dimensional measurements such as: CD deviations from target, corner rounding, or line-end shortening. An alternative approach would compare the entire perimeter of the exposed shape and its original design. Unfortunately, this is not a viable solution as there is a practical limit to the number of metrology measurements that can be done in a reasonable amount of time. In this paper we make use of simulated results and introduce a method which may be considered complementary to the standard way of PEC qualification. It compares simulated contours with the target layout via a Boolean XOR operation with the area of the XOR differences providing a direct measure of how close a corrected layout approximates the target.
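The XOR-based metric is straightforward to sketch on rasterized layouts. The geometry below, a square target against a simulated contour with a one-pixel edge placement error, is hypothetical:

```python
import numpy as np

def xor_area(target_mask, contour_mask, pixel_area=1.0):
    """PEC quality metric sketch: area of the symmetric difference (XOR)
    between the target layout and the simulated printed contour, both
    rasterized onto the same grid as boolean masks. Returns the XOR area
    and its fraction of the target area."""
    xor = np.logical_xor(target_mask, contour_mask)
    return xor.sum() * pixel_area, xor.sum() / max(target_mask.sum(), 1)

# Hypothetical 100 x 100 grid at 1 nm/pixel: a 40 x 40 nm square target,
# and a "printed" contour shifted by 1 px, i.e. a 1 nm edge placement error.
y, x = np.mgrid[0:100, 0:100]
target = (x >= 30) & (x < 70) & (y >= 30) & (y < 70)
printed = np.roll(target, 1, axis=1)

area, frac = xor_area(target, printed)  # -> 80.0 nm^2, 5% of the target area
```

Because the XOR area integrates deviations along the entire perimeter, it captures corner rounding and line-end shortening in a single number, which is the complementarity to 1-D CD measurements that the paper exploits.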
Interaction and self-correction
Satne, Glenda Lucila
2014-01-01
In this paper, I address the question of how to account for the normative dimension involved in conceptual competence in a naturalistic framework. First, I present what I call the naturalist challenge (NC), referring to both the phylogenetic and ontogenetic dimensions of conceptual possession and acquisition. I then criticize two models that have been dominant in thinking about conceptual competence, the interpretationist and the causalist models. Both fail to meet NC, by failing to account for the abilities involved in conceptual self-correction. I then offer an alternative account of self-correction that I develop with the help of the interactionist theory of mutual understanding arising from recent developments in phenomenology and developmental psychology. © 2014 Satne.
Historical aspects of aberration correction.
Rose, Harald H
2009-06-01
A brief history of the development of direct aberration correction in electron microscopy is outlined starting from the famous Scherzer theorem established in 1936. Aberration correction is the long story of many seemingly fruitless efforts to improve the resolution of electron microscopes by compensating for the unavoidable resolution-limiting aberrations of round electron lenses over a period of 50 years. The successful breakthrough, in 1997, can be considered as a quantum step in electron microscopy because it provides genuine atomic resolution approaching the size of the radius of the hydrogen atom. The additional realization of monochromators, aberration-free imaging energy filters and spectrometers has been leading to a new generation of analytical electron microscopes providing elemental and electronic information about the object on an atomic scale.
Quantum corrections to unimodular gravity
Álvarez, Enrique; González-Martín, Sergio; Herrero-Valea, Mario [Instituto de Física Teórica UAM/CSIC,C/Nicolas Cabrera, 13-15, C.University Cantoblanco, 28049 Madrid (Spain); Departamento de Física Teórica,Universidad Autónoma de Madrid, 20849 Madrid (Spain); Martín, Carmelo P. [Universidad Complutense de Madrid (UCM), Departamento de Física Téorica I,Facultad de Ciencias Físicas, Av. Complutense S/N (Ciudad University), 28040 Madrid (Spain)
2015-08-17
The problem of the cosmological constant appears in a new light in Unimodular Gravity. In particular, the zero momentum piece of the potential (that is, the constant piece independent of the matter fields) does not automatically produce a cosmological constant proportional to it. The aim of this paper is to give some details on a calculation showing that quantum corrections do not renormalize the classical value of this observable.
Holographic superconductors with Weyl corrections
Momeni, Davood; Raza, Muhammad; Myrzakulov, Ratbay
2016-10-01
A quick review of the analytical aspects of holographic superconductors (HSCs) with Weyl corrections is presented. Mainly, we focus on the matching method and variational approaches. Different types of such HSCs have been investigated: s-wave, p-wave and Stückelberg ones. We also review the fundamental construction of a p-wave type, in which the non-Abelian gauge field is coupled to the Weyl tensor. Numerical and analytical results are compared.
EPS Young Physicist Prize - CORRECTION
2009-01-01
The original text for the article 'Prizes aplenty in Krakow' in Bulletin 30-31 assigned the award of the EPS HEPP Young Physicist Prize to Maurizio Pierini. In fact he shared the prize with Niki Saoulidou of Fermilab, who was rewarded for her contribution to neutrino physics, as the article now correctly indicates. We apologise for not having named Niki Saoulidou in the original article.
Corrective camouflage in pediatric dermatology.
Tedeschi, Aurora; Dall'Oglio, Federica; Micali, Giuseppe; Schwartz, Robert A; Janniger, Camila K
2007-02-01
Many dermatologic diseases, including vitiligo and other pigmentary disorders, vascular malformations, acne, and disfiguring scars from surgery or trauma, can be distressing to pediatric patients and can cause psychological alterations such as depression, loss of self-esteem, deterioration of quality of life, emotional distress, and, in some cases, body dysmorphic disorder. Corrective camouflage can help cover cutaneous unaesthetic disorders using a variety of water-resistant and light to very opaque products that provide effective and natural coverage. These products also can serve as concealers during medical treatment or after surgical procedures before healing is complete. Between May 2001 and July 2003, corrective camouflage was used on 15 children and adolescents (age range, 7-16 years; mean age, 14 years). The majority of patients were girls. Six patients had acne vulgaris; 4 had vitiligo; 2 had Becker nevus; and 1 each had striae distensae, allergic contact dermatitis, and postsurgical scarring. Parents of all patients were satisfied with the cosmetic cover results. We consider corrective makeup to be a well-received and valid adjunctive therapy for use during traditional long-term treatment and as a therapeutic alternative in patients in whom conventional therapy is ineffective.
Quasar bolometric corrections: theoretical considerations
Nemmen, Rodrigo S
2010-01-01
Bolometric corrections based on the optical-to-ultraviolet continuum spectrum of quasars are widely used to quantify their radiative output, although such estimates are affected by a myriad of uncertainties, such as the generally unknown line-of-sight angle to the central engine. In order to shed light on these issues, we investigate the state-of-the-art models of Hubeny et al. that describe the continuum spectrum of thin accretion discs and include relativistic effects. We explore the bolometric corrections as a function of mass accretion rates, black hole masses and viewing angles, restricted to the parameter space expected for type-1 quasars. We find that a nonlinear relationship log L_bol=A + B log(lambda L_lambda) with B<=0.9 is favoured by the models and becomes tighter as the wavelength decreases. We calculate from the model the bolometric corrections corresponding to the wavelengths lambda = 1450A, 3000A and 5100A. In particular, for lambda=3000A we find A=9.24 +- 0.77 and B=0.81 +- 0.02. We demons...
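As an illustration, the fitted log-linear relation at lambda = 3000 A can be applied directly; the coefficients are those quoted in the abstract, while the luminosity value below is hypothetical:

```python
def log_lbol(log_lambda_l_lambda, A=9.24, B=0.81):
    """Nonlinear bolometric relation log L_bol = A + B log(lambda*L_lambda),
    with the central fitted coefficients for lambda = 3000 A from the abstract
    (A = 9.24 +- 0.77, B = 0.81 +- 0.02; luminosities in erg/s)."""
    return A + B * log_lambda_l_lambda

# A quasar with lambda*L_lambda(3000 A) = 10^45 erg/s:
log_l = log_lbol(45.0)                       # 9.24 + 0.81 * 45 = 45.69
bolometric_correction = 10**(log_l - 45.0)   # L_bol / (lambda*L_lambda), ~4.9
```

Note that because B < 1, the implied bolometric correction decreases with monochromatic luminosity, which is the nonlinearity the models favour over a constant multiplicative correction.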
Boussion, N; Hatt, M; Lamare, F; Bizais, Y; Turzo, A; Rest, C Cheze-Le; Visvikis, D [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Brest (France)
2006-04-07
Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography. They lead to a loss of signal in tissues of size similar to the point spread function and induce activity spillover between regions. Although PVE can be corrected for by using algorithms that provide the correct radioactivity concentration in a series of regions of interest (ROIs), so far little attention has been given to the possibility of creating improved images as a result of PVE correction. Potential advantages of PVE-corrected images include the ability to accurately delineate functional volumes as well as improving tumour-to-background ratio, resulting in an associated improvement in the analysis of response to therapy studies and diagnostic examinations, respectively. The objective of our study was therefore to develop a methodology for PVE correction not only to enable the accurate recuperation of activity concentrations, but also to generate PVE-corrected images. In the multiresolution analysis that we define here, details of a high-resolution image H (MRI or CT) are extracted, transformed and integrated in a low-resolution image L (PET or SPECT). A discrete wavelet transform of both H and L images is performed by using the 'a trous' algorithm, which allows the spatial frequencies (details, edges, textures) to be obtained easily at a level of resolution common to H and L. A model is then inferred to build the lacking details of L from the high-frequency details in H. The process was successfully tested on synthetic and simulated data, proving the ability to obtain accurately corrected images. Quantitative PVE correction was found to be comparable with a method considered as a reference but limited to ROI analyses. Visual improvement and quantitative correction were also obtained in two examples of clinical images, the first using a combined PET/CT scanner with a lymphoma patient and the second using a FDG brain PET and corresponding T1
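The 'a trous' decomposition at the heart of this multiresolution scheme can be sketched in one dimension. The full method operates on images and transfers detail planes from H into L; this hypothetical sketch shows only the decomposition and its exact-reconstruction property:

```python
import numpy as np

def a_trous(signal, levels):
    """1-D 'a trous' (stationary) wavelet transform sketch: at each level j
    the B3-spline kernel [1,4,6,4,1]/16 is dilated by inserting 2^j - 1
    zeros ("holes") between taps; wavelet planes are differences of
    successive smoothings, so the planes sum back to the input exactly."""
    base = np.array([1, 4, 6, 4, 1]) / 16.0
    c = np.asarray(signal, dtype=float)
    planes = []
    for j in range(levels):
        kernel = np.zeros(4 * 2**j + 1)
        kernel[:: 2**j] = base              # dilate the kernel with holes
        smoothed = np.convolve(c, kernel, mode="same")
        planes.append(c - smoothed)         # detail (wavelet) plane at scale j
        c = smoothed
    planes.append(c)                        # final coarse residual
    return planes

sig = np.sin(np.linspace(0, 4 * np.pi, 256)) \
      + 0.1 * np.random.default_rng(2).normal(size=256)
planes = a_trous(sig, 3)
```

Because the transform is undecimated, every plane lives on the same grid as the input, which is what makes it possible to inject the high-frequency planes of the anatomical image into the smoothed functional image at a common resolution level.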
A highly accurate and analytic equation of state for a hard sphere fluid in random porous media.
Holovko, M; Dong, W
2009-05-07
An analytical equation of state (EOS) for a hard sphere fluid confined in random porous media is derived by extending the scaled particle theory to such complex systems with quenched disorder. A simple empirical correction allows us to obtain a highly accurate EOS, with errors within those of the simulations. These are the first analytical results for nontrivial off-lattice quench-annealed systems.
Apostolos Zaravinos; George I Lambrou; Nikos Mourmouras; Patroklos Katafygiotis; Gregory Papagregoriou; Krinio Giannikou; Dimitris Delakas; Constantinos Deltas
2014-01-01
BACKGROUND: Upper tract urothelial carcinomas (UT-UC) can invade the pelvicalyceal system, making differential diagnosis of the various histologically distinct renal cell carcinoma (RCC) subtypes and UT-UC difficult. Correct diagnosis is critical for determining appropriate surgery and post-surgical treatments. We aimed to identify microRNA (miRNA) signatures that can accurately distinguish the most prevalent RCC subtypes and UT-UC from the normal kidney. METHODS AND FINDINGS: miRNA profiling...
Drift correction of the dissolved signal in single particle ICPMS.
Cornelis, Geert; Rauch, Sebastien
2016-07-01
A method is presented where drift, the random fluctuation of the signal intensity, is compensated for based on the estimation of the drift function by a moving average. It was shown using single particle ICPMS (spICPMS) measurements of 10 and 60 nm Au NPs that drift reduces accuracy of spICPMS analysis at the calibration stage and during calculations of the particle size distribution (PSD), but that the present method can again correct the average signal intensity as well as the signal distribution of particle-containing samples skewed by drift. Moreover, deconvolution, a method that models signal distributions of dissolved signals, fails in some cases when using standards and samples affected by drift, but the present method was shown to improve accuracy again. Relatively high particle signals have to be removed prior to drift correction in this procedure, which was done using a 3 × sigma method, and the signals are treated separately and added again. The method can also correct for flicker noise that increases when signal intensity is increased because of drift. The accuracy was improved in many cases when flicker correction was used, but when accurate results were obtained despite drift, the correction procedures did not reduce accuracy. The procedure may be useful to extract results from experimental runs that would otherwise have to be run again. Graphical Abstract A method is presented where a spICP-MS signal affected by drift (left) is corrected (right) by adjusting the local (moving) averages (green) and standard deviations (purple) to the respective values at a reference time (red). In combination with removing particle events (blue) in the case of calibration standards, this method is shown to obtain particle size distributions where that would otherwise be impossible, even when the deconvolution method is used to discriminate dissolved and particle signals.
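A minimal sketch of the moving-average drift correction with iterated 3-sigma spike suppression might look as follows. The data are synthetic, spikes are clipped rather than removed and re-added as in the published procedure, and the windowing details are illustrative assumptions:

```python
import numpy as np

def correct_drift(signal, window=501, sigma_cut=3.0):
    """Drift-correction sketch for a dissolved spICPMS trace: estimate the
    drift function as a moving average of the dissolved background (particle
    spikes suppressed by an iterated 3-sigma clip), then rescale the trace so
    its local mean matches the run-average reference level."""
    signal = np.asarray(signal, dtype=float)
    clipped = signal.copy()
    for _ in range(5):                      # iterated 3-sigma spike clipping
        mu, sd = clipped.mean(), clipped.std()
        clipped = np.minimum(clipped, mu + sigma_cut * sd)
    kernel = np.ones(window) / window
    norm = np.convolve(np.ones_like(clipped), kernel, mode="same")
    drift = np.convolve(clipped, kernel, mode="same") / norm  # edge-corrected
    return signal * (drift.mean() / drift)  # rescale to the reference level

# Synthetic run: Poisson background with a 30% sensitivity drift plus spikes
rng = np.random.default_rng(3)
n = 5000
drift_true = 100 * (1 + 0.3 * np.linspace(0, 1, n))
raw = rng.poisson(drift_true).astype(float)
raw[rng.choice(n, 50, replace=False)] += 2000   # particle events
corrected = correct_drift(raw)
```

After correction the local median at the start and end of the run agree, whereas the raw trace drifts by roughly 30%; the particle events survive the rescaling and can still be sized afterwards.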
Correction parameters in conventional dental radiography for dental implant
Barunawaty Yunus
2009-12-01
Background: Radiographic imaging is an essential supportive diagnostic component of treatment planning for dental implants, helping the dentist assess the target implant site. With advances in science and technology and growing demand for simpler treatment methods, modern radiographic diagnostics for dental implants are needed. However, the Faculty of Dentistry at Hasanuddin University in Makassar has only conventional dental radiography. The researcher therefore sought to optimize this equipment by obtaining corrected jaw parameters for accurate dental implant placement. Purpose: This study aimed to examine the difference in the radiographically measured size of a dental implant site before and after correction. Method: The study was analytic observational with a cross-sectional design; sampling was non-random. The sample comprised 30 people, male and female, aged 20-50 years. The correction value was evaluated from measurements of the width, height, and thickness of the jaw, corrected using a metal ball imaged with conventional dental radiography to assess accuracy. Data were analyzed in SPSS 14 for Windows using t-tests. Result: The t-tests yielded significant differences (p<0.05) before and after correction for width and height in panoramic radiography, width and height in periapical radiography, and thickness in occlusal radiography. Conclusion: There is a significant difference between panoramic, periapical, and occlusal radiographic measurements before and after correction.
Automatic correction of hand pointing in stereoscopic depth.
Song, Yalin; Sun, Yaoru; Zeng, Jinhua; Wang, Fang
2014-12-11
In order to examine whether stereoscopic depth information could drive fast automatic correction of hand pointing, an experiment was designed in a 3D visual environment in which participants were asked to point to a target at different stereoscopic depths as quickly and accurately as possible within a limited time window (≤300 ms). The experiment consisted of two tasks: "depthGO", in which participants were asked to point to the new target position if the target jumped, and "depthSTOP", in which participants were instructed to abort their ongoing movements after the target jumped. The depth jump occurred in 20% of the trials in both tasks. Results showed that stereoscopic depth could drive fast automatic correction of hand movements as early as 190 ms after the jump.
Correction factors for gravimetric measurement of peritumoural oedema in man.
Bell, B A; Smith, M A; Tocher, J L; Miller, J D
1987-01-01
The water content of samples of normal and oedematous brain in lobectomy specimens from 16 patients with cerebral tumours has been measured by gravimetry and by wet and dry weighing. Uncorrected gravimetry underestimated the water content of oedematous peritumoural cortex by a mean of 1.17%, and of oedematous peritumoural white matter by a mean of 2.52%. Gravimetric correction equations calculated theoretically and from an animal model of serum infusion white matter oedema overestimate peritumoural white matter oedema in man, and empirical gravimetric error correction factors for oedematous peritumoural human white matter and cortex have therefore been derived. These enable gravimetry to be used to accurately determine peritumoural oedema in man.
Causal MRI reconstruction via Kalman prediction and compressed sensing correction.
Majumdar, Angshul
2017-02-04
This technical note addresses the problem of causal online reconstruction of dynamic MRI: given the reconstructed frames up to the previous time instant, we reconstruct the frame at the current instant. Our work follows a prediction-correction framework. Given the previous frames, the current frame is predicted by a Kalman estimate. The difference between the estimate and the current frame is then corrected using the k-space samples of the current frame; this correction assumes that the difference is sparse. The method is compared against prior Kalman-filtering-based and compressed-sensing-based techniques. Experimental results show that the proposed method is more accurate than these and considerably faster.
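A toy 1-D sketch of the prediction-correction idea: the previous frame serves as the prediction, and the sparse frame-to-frame difference is recovered from undersampled Fourier samples by iterative soft-thresholding. The function names, step size, and thresholding scheme are illustrative, not the paper's algorithm:

```python
import numpy as np

def predict_correct(prev_frame, kspace_samples, sample_idx,
                    n_iter=200, thresh=0.01):
    """Toy 1-D prediction-correction reconstruction.

    Prediction: the previous reconstructed frame (a random-walk Kalman
    state model reduces to this when process noise dominates).
    Correction: recover the sparse frame-to-frame difference from the
    current frame's undersampled k-space via iterative soft-thresholding.
    """
    n = prev_frame.size
    # Residual k-space: measured samples minus the prediction's samples
    resid_k = kspace_samples - np.fft.fft(prev_frame)[sample_idx]
    d = np.zeros(n, dtype=complex)     # sparse difference estimate
    step = 1.0 / n                     # safe step for A = S*F (unnormalized FFT)
    for _ in range(n_iter):
        r = np.fft.fft(d)[sample_idx] - resid_k
        r_full = np.zeros(n, dtype=complex)
        r_full[sample_idx] = r
        grad = np.fft.ifft(r_full) * n  # adjoint of the subsampled FFT
        d = d - step * grad
        mag = np.abs(d)                 # complex soft-thresholding
        d = d * np.maximum(mag - thresh, 0.0) / np.maximum(mag, 1e-12)
    return (prev_frame + d).real
```

With a smooth previous frame and a few spike-like changes, roughly 60% of the k-space samples suffice to recover the new frame in this toy setting.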
Nonlinear hydrodynamic corrections to supersonic F-KPP wave fronts
Antoine, C.; Dumazer, G.; Nowakowski, B.; Lemarchand, A.
2012-03-01
We study the hydrodynamic corrections to the dynamics and structure of an exothermic chemical wave front of Fisher-Kolmogorov-Petrovskii-Piskunov (F-KPP) type which travels in a one-dimensional gaseous medium. We show in particular that its long time dynamics, cut-off sensitivity and leading edge behavior are almost entirely controlled by the hydrodynamic front speed correction δUh which characterizes the pushed nature of the front. Reducing the problem to an effective comoving heterogeneous F-KPP equation, we determine two analytical expressions for δUh: an accurate one, derived from a variational method, and an approximate one, from which one can assess the δUh sensitivity to the shear viscosity and heat conductivity of the fluid of interest.
Aspects of probe correction for odd-order probes in spherical near-field antenna measurements
Laitinen, Tommi; Pivnenko, Sergey N.; Breinbjerg, Olav
2004-01-01
Probe correction aspects of spherical near-field antenna measurements are investigated. First, spherical mode analyses of the radiated fields of several antennas are performed; it is shown that many common antennas are essentially so-called odd-order antennas. Second, the errors caused by the use of the first-order probe correction [1] for a rectangular waveguide probe, which is an odd-order antenna, are demonstrated. Third, a recently developed probe correction technique for odd-order probes is applied to the rectangular waveguide probe and shown to provide accurate results.
INVERSE CORRECTION OF FOURIER TRANSFORMS FOR ONE-DIMENSIONAL STRONGLY SCATTERING APERIODIC LATTICES
Y. F. Hsin
2016-05-01
The accuracy of the Fourier transform (FT), which is advantageous for aperiodic lattice (AL) design, is significantly improved for strongly scattering periodic lattices (PLs) and ALs. The approach is to inversely obtain corrected parameters for the FT from an accurate transfer matrix method. We establish a corrected FT that improves the spectral accuracy for strongly scattering PLs by redefining the wave numbers and reflective intensity. We further correct the FT for strongly scattering ALs by applying the improvements developed for strongly scattering PLs and then making detailed wave-number adjustments in the main spectral band. Silicon lattice simulations are presented.
Children's perception of their synthetically corrected speech production.
Strömbergsson, Sofia; Wengelin, Asa; House, David
2014-06-01
We explore children's perception of their own speech in its online form, in its recorded form, and in synthetically modified forms. Children with phonological disorder (PD) and children with typical speech and language development (TD) performed tasks of evaluating the accuracy of the different types of speech stimuli, either immediately after having produced the utterance or after a delay. In addition, they performed a task designed to assess their ability to detect synthetic modification. Both groups showed high performance in tasks involving evaluation of other children's speech, whereas in tasks of evaluating one's own speech, the children with PD were less accurate than their TD peers. The children with PD were less sensitive to misproductions in immediate conjunction with their production of an utterance, and more accurate after a delay. Within-category modification often passed undetected, indicating a satisfactory quality of the generated speech. Potential clinical benefits of using corrective re-synthesis are discussed.
Atlas-guided correction of brain histology distortion
Xi Qiu
2011-01-01
Histological tissue preparation stages (e.g., cutting and sectioning) often introduce tissue distortions that prevent a smooth 3D reconstruction from being built. In this paper, we propose a method to correct histology distortions by running a piecewise registration scheme that takes the information of several consecutive slices in a neighborhood into account. To achieve an accurate anatomical presentation, we run the method iteratively with the assistance of a pre-segmented brain atlas. The registration parameters are optimized to accommodate different brain sub-regions, e.g., cerebellum and hippocampus. The results are evaluated both visually and quantitatively. The proposed method proved robust enough to reconstruct an accurate and smooth mouse brain volume.
Assessment of ionospheric and tropospheric corrections for PPP-RTK
de Oliveira, Paulo; Fund, François; Morel, Laurent; Monico, João; Durand, Stéphane; Durand, Fréderic
2016-04-01
PPP-RTK is a state-of-the-art GNSS (Global Navigation Satellite System) technique for determining accurate positions in real time. To perform PPP-RTK it is necessary to produce an SSR (State Space Representation) of the spatially correlated errors affecting the GNSS observables, such as the tropospheric delay and the ionospheric effect. Using GNSS data from local or regional active networks, these atmospheric errors can be modeled well for any position within the network coverage area. This work presents results of the tropospheric and ionospheric modeling employed to obtain the respective corrections. The study region is France, and the Orphéon GNSS active network is used to generate the atmospheric corrections; CNES (Centre National d'Etudes Spatiales) satellite orbit products are used for ambiguity fixing in the GNSS processing. Two atmospheric modeling approaches are considered: 1) generation of a priori corrections from coefficients estimated using the GNSS network, and 2) interpolation of ionospheric and tropospheric effects from the reference stations closest to the user's location, as suggested in the second stage of RTCM (Radio Technical Commission for Maritime Services) message development. Finally, the atmospheric corrections are introduced into PPP-RTK as a priori values to improve ambiguity fixing and reduce convergence time. The discussion emphasizes the strengths and weaknesses of each solution, as well as their combined use.
Corrected Kondo temperature beyond the conventional Kondo scaling limit
Li, ZhenHua; Wei, JianHua; Zheng, Xiao; Yan, YiJing; Luo, Hong-Gang
2017-05-01
In Kondo systems such as a magnetic impurity screened by the conduction electrons in a metal host, as well as quantum dots connected to leads, the low-energy behavior has a universal dependence on $T/T_K^0$ or $eV/k_B T_K^0$, where $T_K^0$ is the conventional Kondo temperature. However, this scaling behavior is only valid at low energy; this is called the Kondo scaling limit. Here we explore the extension of the scaling parameter range by introducing a corrected Kondo temperature $T_K$, which may depend on the temperature and bias as well as other external parameters. We define the corrected Kondo temperature by scaling the local density of states near the Fermi level, obtained by an accurate hierarchical-equations-of-motion approach at finite temperature and finite bias, and thus obtain a phenomenological expression for the corrected Kondo temperature. Using the corrected Kondo temperature as the characteristic energy scale, the conductance of the quantum dot can be scaled well over a wide parameter range, even two orders of magnitude beyond the conventional scaling parameter range. Our work indicates that Kondo scaling, although dominated by the conventional Kondo temperature at low energies, can be extended to a higher energy regime, which is useful for analyzing Kondo transport in non-equilibrium or high-temperature cases.
Sun Yanni
2011-05-01
Background: Protein domain classification is an important step in metagenomic annotation. The state-of-the-art method for protein domain classification is profile-HMM-based alignment. However, the relatively high rates of insertions and deletions in homopolymer regions of pyrosequencing reads create frameshifts, causing conventional profile HMM alignment tools to generate alignments with marginal scores. This makes error-containing gene fragments unclassifiable with conventional tools. Thus, there is a need for an accurate domain classification tool that can detect and correct sequencing errors. Results: We introduce HMM-FRAME, a protein domain classification tool based on an augmented Viterbi algorithm that can incorporate error models from different sequencing platforms. HMM-FRAME corrects sequencing errors and classifies putative gene fragments into domain families. It achieved high error-detection sensitivity and specificity on a data set with annotated errors. We applied HMM-FRAME to Targeted Metagenomics and to a published metagenomic data set. The results showed that our tool can correct frameshifts in error-containing sequences, generate much longer alignments with significantly smaller E-values, and classify more sequences into their native families. Conclusions: HMM-FRAME provides a complementary protein domain classification tool to conventional profile-HMM-based methods for data sets containing frameshifts. Its current implementation is best used for small-scale metagenomic data sets. The source code of HMM-FRAME can be downloaded at http://www.cse.msu.edu/~zhangy72/hmmframe/ and at https://sourceforge.net/projects/hmm-frame/.
Adaptive dispersion formula for index interpolation and chromatic aberration correction.
Li, Chia-Ling; Sasián, José
2014-01-13
This paper defines and discusses a glass dispersion formula that is adaptive. The formula exhibits superior convergence with a minimum number of coefficients. Using this formula we rationalize the correction of chromatic aberration per spectral order. We compare the formula with the Sellmeier and Buchdahl formulas for glasses in the Schott catalogue. The six-coefficient adaptive formula is found to be the most accurate, with an average maximum index-of-refraction error of 2.91 × 10^(-6) within the visible band.
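As a simple illustration of fitting dispersion-formula coefficients to measured index data by least squares (using the classic Cauchy form as a stand-in; the paper's adaptive formula is not reproduced here):

```python
import numpy as np

def fit_cauchy(wavelengths_um, n_measured, order=3):
    """Least-squares fit of a Cauchy-type dispersion formula
    n(lambda) = c0 + c1/lambda^2 + c2/lambda^4 + ... (wavelength in um).
    An illustrative example of dispersion-coefficient fitting, not the
    adaptive formula of the paper."""
    lam = np.asarray(wavelengths_um, dtype=float)
    # Design matrix with columns 1, lambda^-2, lambda^-4, ...
    A = np.column_stack([lam ** (-2 * k) for k in range(order)])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(n_measured), rcond=None)
    return coeffs

def eval_cauchy(coeffs, wavelengths_um):
    """Evaluate the fitted Cauchy formula at the given wavelengths."""
    lam = np.asarray(wavelengths_um, dtype=float)
    return sum(c * lam ** (-2 * k) for k, c in enumerate(coeffs))
```

For data generated exactly by a Cauchy law, the fit recovers the coefficients to machine precision over the visible band.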
A string correction algorithm for cursive script recognition.
Bozinovic, R; Srihari, S N
1982-06-01
This paper deals with a method of estimating a correct string X from its noisy version Y produced by a cursive script recognition system. An accurate channel model that allows for splitting, merging, and substitution of symbols is introduced. The best estimate X is obtained by using a dynamic programming search which combines a known search strategy (stack decoding) with a trie structure representation of a dictionary. The computational complexity of the algorithm is derived and compared with that of a method based on the generalized Levenshtein metric. Experimental results with the algorithm on English text based on a dictionary of the 1027 most commonly occurring words are described.
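For reference, the generalized Levenshtein baseline mentioned above reduces, in its basic form, to the classic dynamic-programming edit distance. A minimal sketch with a brute-force dictionary lookup follows; the paper's stack decoding over a trie, and its split/merge channel, are more sophisticated:

```python
def levenshtein(x, y):
    """Classic dynamic-programming edit distance over substitution,
    insertion, and deletion (no split/merge operations)."""
    m, n = len(x), len(y)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if x[i - 1] == y[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def correct_string(noisy, dictionary):
    """Brute-force nearest dictionary word; a trie plus stack decoding
    avoids scoring every entry."""
    return min(dictionary, key=lambda w: levenshtein(noisy, w))
```

For example, `levenshtein("kitten", "sitting")` is 3, and `correct_string("hause", ["house", "horse", "mouse"])` returns `"house"`.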
Relativistic and QED corrections for the beryllium atom.
Pachucki, Krzysztof; Komasa, Jacek
2004-05-28
Complete relativistic and quantum electrodynamics corrections of order $\alpha^2$ Ry and $\alpha^3$ Ry are calculated for the ground state of the beryllium atom and its positive ion. A basis set of correlated Gaussian functions is used, with exponents optimized against nonrelativistic binding energies. The results for the Bethe logarithms, $\ln k_0(\mathrm{Be}) = 5.750\,34(3)$ and $\ln k_0(\mathrm{Be}^+) = 5.751\,67(3)$, demonstrate the availability of high-precision theoretical predictions for energy levels of the beryllium atom and light ions. Our recommended value of the ionization potential, 75 192.514(80) cm$^{-1}$, agrees with equally accurate available experimental values.
β-Correction Spectrophotometric Determination of Cadmium with Cadion
郜洪文
1995-01-01
Cadmium has been determined by β-correction spectrophotometry with cadion (p-nitrobenzenediazoaminoazobenzene) and a non-ionic surfactant, Triton X-100. The real absorbance of the Cd-cadion chelate in the colored solution can be accurately determined, and the complexing ratio of cadion to Cd(II) was found to be 2. Beer's law is obeyed over the concentration range of 0-0.20 mg/L cadmium, and the detection limit for cadmium is only 0.003 mg/L. Satisfactory experimental results are presented for the determination of trace cadmium in wastewaters.
El-Diasty, M.
2014-11-01
An accurate heading solution is required for many applications and can be achieved with high-grade (high-cost) gyroscopes (gyros), which may not be suitable for such applications. Micro-Electro-Mechanical Systems (MEMS) technology has the potential to provide a heading solution using a low-cost MEMS-based gyro. However, a MEMS-gyro-based heading solution drifts significantly over time. The heading can also be estimated with a MEMS-based magnetometer by measuring the horizontal components of the Earth's magnetic field. The MEMS-magnetometer-based heading solution does not drift over time, but it is contaminated by a high level of noise and may be disturbed by magnetic field sources such as metal objects. This paper proposes an accurate heading estimation procedure based on the integration of MEMS-based gyro and magnetometer measurements: gyro angular rates of change are estimated from the magnetometer measurements and then integrated with the measured gyro angular rates of change in a robust filter to estimate the heading. The proposed integrated solution is implemented using two data sets, one collected in static mode without magnetic disturbances and the second in kinematic mode with magnetic disturbances. The results show that the proposed integrated heading solution is accurate, smooth, and undisturbed when compared with the magnetometer-based and gyro-based heading solutions.
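The robust filter itself is not specified in the abstract; a minimal complementary-filter sketch illustrates the core idea of blending a smooth but drifting gyro integral with a noisy but drift-free magnetometer heading. The function name and the blending weight are illustrative assumptions:

```python
def fuse_heading(gyro_rates, mag_headings, dt, alpha=0.98):
    """Complementary-filter heading fusion (degrees): the gyro integral
    provides the smooth short-term estimate, while the magnetometer
    pulls it back toward the drift-free long-term reference."""
    heading = mag_headings[0] % 360.0
    out = []
    for rate, mag in zip(gyro_rates, mag_headings):
        gyro_pred = heading + rate * dt                    # integrate rate
        innov = (mag - gyro_pred + 180.0) % 360.0 - 180.0  # wrapped error
        heading = (gyro_pred + (1.0 - alpha) * innov) % 360.0
        out.append(heading)
    return out
```

With a purely biased gyro and a noisy magnetometer around a fixed true heading, the fused output stays near the true heading instead of drifting away with the gyro bias.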
Fraccarollo, Alberto; Canti, Lorenzo; Marchese, Leonardo; Cossi, Maurizio
2017-03-07
The force fields used to simulate gas adsorption in porous materials are strongly dominated by the van der Waals (vdW) terms. Here we discuss the delicate problem of estimating these terms accurately, analyzing the effect of different models. To this end, we simulated the physisorption of CH4, CO2, and Ar in various Al-free microporous zeolites (ITQ-29, SSZ-13, and silicalite-1), comparing the theoretical results with accurate experimental isotherms. The vdW terms in the force fields were parametrized against the free gas densities and high-level quantum mechanical (QM) calculations, comparing different methods of evaluating the dispersion energies. In particular, MP2 and DFT with semiempirical corrections, with suitable basis sets, were chosen to approximate the best QM calculations; either Lennard-Jones or Morse expressions were used to include the vdW terms in the force fields. The comparison of the simulated and experimental isotherms revealed a strong interplay between the definition of the dispersion energies and the functional form used in the force field; these results are fairly general and reproducible, at least for the systems considered here. On this basis, the reliability of different models can be discussed, and a recipe can be provided for obtaining accurate simulated adsorption isotherms.
Stewart, W C L; Hager, V R
2016-08-01
In the analysis of DNA sequences on related individuals, most methods strive to incorporate as much information as possible, with little or no attention paid to the issue of statistical significance. For example, a modern workstation can easily handle the computations needed to perform a large-scale genome-wide inheritance-by-descent (IBD) scan, but accurate assessment of the significance of that scan is often hindered by inaccurate approximations and computationally intensive simulation. To address these issues, we developed gLOD, a test of co-segregation that, for large samples, models chromosome-specific IBD statistics as a collection of stationary Gaussian processes. With this simple model, the parametric bootstrap yields an accurate and rapid assessment of significance: the genome-wide corrected P-value. Furthermore, we show that (i) under the null hypothesis, the limiting distribution of the gLOD is the standard Gumbel distribution; (ii) our parametric bootstrap simulator is approximately 40 000 times faster than gene-dropping methods, and it is more powerful than methods that approximate the adjusted P-value; and (iii) the gLOD has the same statistical power as the widely used maximum Kong and Cox LOD. Thus, our approach gives researchers the ability to determine quickly and accurately the significance of most large-scale IBD scans, which may contain multiple traits, thousands of families and tens of thousands of DNA sequences.
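The parametric-bootstrap idea can be sketched as follows, using a stationary AR(1) Gaussian process per chromosome as a stand-in for the paper's covariance model; the function name, AR(1) structure, and parameters are illustrative assumptions:

```python
import numpy as np

def genomewide_p(observed_max, chrom_lengths, rho=0.9,
                 n_boot=2000, seed=0):
    """Parametric-bootstrap sketch: model each chromosome's IBD statistic
    as a stationary unit-variance AR(1) Gaussian process, simulate the
    genome-wide maximum under the null, and return the corrected P-value
    of the observed maximum (with the usual +1 correction)."""
    rng = np.random.default_rng(seed)
    maxima = np.empty(n_boot)
    for b in range(n_boot):
        m = -np.inf
        for L in chrom_lengths:
            e = rng.standard_normal(L)
            z = np.empty(L)
            z[0] = e[0]
            for t in range(1, L):  # stationary AR(1), unit variance
                z[t] = rho * z[t - 1] + np.sqrt(1 - rho**2) * e[t]
            m = max(m, z.max())
        maxima[b] = m
    return (np.sum(maxima >= observed_max) + 1) / (n_boot + 1)
```

Small observed maxima yield corrected P-values near 1, while maxima far in the tail yield small corrected P-values, mirroring the genome-wide adjustment the paper computes.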
Realization of Quadrature Signal Generator Using Accurate Magnitude Integrator
Xin, Zhen; Yoon, Changwoo; Zhao, Rende
2016-01-01
-signal parameters, especially when a fast response is required for usages such as grid synchronization. As a result, the parameter design of the SOGI-QSG becomes complicated. Theoretical analysis shows that this is caused by the inaccurate magnitude-integration characteristic of the SOGI-QSG. To solve this problem..., an Accurate-Magnitude-Integrator based QSG (AMI-QSG) is proposed. The AMI has an accurate magnitude-integration characteristic for sinusoidal signals, which gives the AMI-QSG a more accurate First-Order-System (FOS) magnitude characteristic than the SOGI-QSG. The parameter design process...
Fabricating an Accurate Implant Master Cast: A Technique Report.
Balshi, Thomas J; Wolfinger, Glenn J; Alfano, Stephen G; Cacovean, Jeannine N; Balshi, Stephen F
2015-12-01
The technique for fabricating an accurate implant master cast following the 12-week healing period after Teeth in a Day® dental implant surgery is detailed. The clinical, functional, and esthetic details captured during the final master impression are vital to creating an accurate master cast. This technique uses the properties of the all-acrylic resin interim prosthesis to capture these details. This impression captures the relationship between the remodeled soft tissue and the interim prosthesis. This provides the laboratory technician with an accurate orientation of the implant replicas in the master cast with which a passive fitting restoration can be fabricated.
Surgical correction of pectus excavatum.
Holcomb, G W
1977-06-01
It has been observed that some patients who had correction of funnel chest deformity by methods which failed to provide fixed elevation of the involved sternal segment developed progressive sagging in later years, in spite of looking good at the operating table. This has led to the adoption of a new technique of double sternal support. This procedure has resulted in 35 of 37 children (94%) being classified as excellent or satisfactory. Double support was initially established in 1959 by overlapping the upper transected sternum while maintaining elevation of the lower end with a soft-tissue sling of perichondrium and intercostal muscle. Beginning in 1961, a rigid bridge of rib or a stainless steel bar was substituted at the lower end of the sternum. This has provided better support, and the current preference for the steel bar has been validated in this group of patients. The few disappointments were related to removal of the bar earlier than desired, failure to excise all the protruding sternal cartilage stumps or rib graft tips, and inability to cover the lateral sternal edges with pectoral muscles. If possible, the steel bar should not be removed before 12 months. When these pitfalls were avoided, the results were almost uniformly excellent. The wisdom of excising all depressed cartilaginous segments, as advocated by Ravitch in 1949, has been substantiated. A submammary transverse incision has provided an excellent cosmetic appearance. The morbidity has been low and the mortality zero. In spite of the absence of objective evidence of cardiopulmonary dysfunction, there seems to be an almost uniform improvement in appearance and in patient activity following successful correction of the funnel chest; the latter may be as much a psychological response as a physiologic one. The low morbidity, satisfactory long-term results, and general improvement in the patient's body image and outlook on life indicate the need to offer correction of the severe pectus excavatum
Misalignment corrections in optical interconnects
Song, Deqiang
Optical interconnects are considered a promising solution for long-distance, high-bitrate data transmission, outperforming electrical interconnects in terms of loss and dispersion. Owing to their bandwidth and distance advantages, longer links have been implemented with optics. Recent studies show that optical interconnects have clear advantages even at very short distances, i.e., intra-system interconnects. The biggest challenge for such optical interconnects is alignment tolerance: many free-space optical components require very precise assembly and installation, which can increase the overall cost. This thesis studied the misalignment tolerance and possible alignment-correction solutions for optical interconnects at the backplane or board level. First, the alignment tolerance for free-space couplers was simulated; the results indicated that the most critical alignments occur between the VCSEL, waveguide, and microlens arrays. An in-situ microlens array fabrication method was designed and experimentally demonstrated, with no observable misalignment with the waveguide array. At the receiver side, conical lens arrays were proposed to replace simple microlens arrays for a larger angular alignment tolerance. Multilayer simulation models in CodeV were built to optimize the refractive index and shape profiles of the conical lens arrays. Conical lenses fabricated with a micro-injection-molding machine and by fiber etching were characterized. An active component, a VCSOA, was used to correct misalignment in optical connectors between the board and backplane; its alignment-correction capability was characterized for both DC and AC (1 GHz) optical signals. The speed and bandwidth of the VCSOA were measured and compared with a VCSEL of the same structure. Based on the optical inverter studied in our lab, an all-optical flip-flop was demonstrated using a pair of VCSOAs. This memory cell with random-access ability can store one bit of optical signal with set or
Matrix Models and Gravitational Corrections
Dijkgraaf, R; Temurhan, M; Dijkgraaf, Robbert; Sinkovics, Annamaria; Temurhan, Mine
2002-01-01
We provide evidence of the relation between supersymmetric gauge theories and matrix models beyond the planar limit. We compute gravitational R^2 couplings in gauge theories perturbatively, by summing genus one matrix model diagrams. These diagrams give the leading 1/N^2 corrections in the large N limit of the matrix model and can be related to twist field correlators in a collective conformal field theory. In the case of softly broken SU(N) N=2 super Yang-Mills theories, we find that these exact solutions of the matrix models agree with results obtained by topological field theory methods.
A. Fitzpatrick; Kaplan, Jared
2016-01-01
We use results on Virasoro conformal blocks to study chaotic dynamics in CFT$_2$ at large central charge $c$. The Lyapunov exponent $\lambda_L$, which is a diagnostic for the early onset of chaos, receives $1/c$ corrections that may be interpreted as $\lambda_L = \frac{2\pi}{\beta}\left(1 + \frac{12}{c}\right)$. However, out-of-time-order correlators receive other equally important $1/c$-suppressed contributions that do not have such a simple interpretation. We revisit the proof ...
Sparse MRI for motion correction
Yang, Zai; Xie, Lihua
2013-01-01
MR image sparsity/compressibility has been widely exploited to accelerate imaging with the development of compressed sensing. A sparsity-based approach to rigid-body motion correction is presented for the first time in this paper. A motion is sought such that the compensated MR image is maximally sparse/compressible among the infinite candidates. Iterative algorithms are proposed that jointly estimate the motion and the image content. The proposed method has several merits, such as no need for additional data and loose requirements on the sampling sequence. Promising results are presented to demonstrate its performance.
Holographic Thermalization with Weyl Corrections
Dey, Anshuman; Sarkar, Tapobrata
2015-01-01
We consider holographic thermalization in the presence of a Weyl correction in five dimensional AdS space. We numerically analyze the time dependence of the two point correlation functions and the expectation values of rectangular Wilson loops in the boundary field theory. The subtle interplay between the Weyl coupling constant and the chemical potential is studied in detail. An outcome of our analysis is the appearance of a swallow tail behaviour in the thermalization curve, and we give evidence that this might indicate distinct physical situations relating to different length scales in the problem.
Correct Linearization of Einstein's Equations
Rabounski D.
2006-06-01
Regularly, Einstein's equations can be reduced to a wave form (linearly dependent on the second derivatives of the space metric) in the absence of gravitation, the space rotation and Christoffel's symbols. As shown here, the origin of the problem is that one uses the general covariant theory of measurement. Here the wave form of Einstein's equations is obtained in terms of Zelmanov's chronometric invariants (physically observable projections on the observer's time line and spatial section). The obtained equations depend solely on the second derivatives, even in the presence of gravitation, the space rotation and Christoffel's symbols. The correct linearization proves that the Einstein equations are completely compatible with weak waves of the metric.
Highly Accurate Sensor for High-Purity Oxygen Determination Project
National Aeronautics and Space Administration — In this STTR effort, Los Gatos Research (LGR) and the University of Wisconsin (UW) propose to develop a highly-accurate sensor for high-purity oxygen determination....
Multi-objective optimization of inverse planning for accurate radiotherapy
曹瑞芬; 吴宜灿; 裴曦; 景佳; 李国丽; 程梦云; 李贵; 胡丽琴
2011-01-01
Motivated by the multi-objective character of inverse planning in accurate radiotherapy, the multi-objective optimization of inverse planning based on the Pareto solution set was studied in this paper. Firstly, the clinical requirements of a treatment pl
ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION
Anonymous
2009-01-01
In this paper, a second order linear differential equation is considered, and an accurate estimate method of characteristic exponent for it is presented. Finally, we give some examples to verify the feasibility of our result.
Controlling Hay Fever Symptoms with Accurate Pollen Counts
This article has been reviewed by Thanai ... The allergic rhinitis known as hay fever is caused by pollen carried in the air during different times of ...
Digital system accurately controls velocity of electromechanical drive
Nichols, G. B.
1965-01-01
Digital circuit accurately regulates electromechanical drive mechanism velocity. The gain and phase characteristics of digital circuits are relatively unimportant. Control accuracy depends only on the stability of the input signal frequency.
Mass spectrometry based protein identification with accurate statistical significance assignment
Alves, Gelio; Yu, Yi-Kuo
2014-01-01
Motivation: Assigning statistical significance accurately has become increasingly important as meta data of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of meta data at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry based proteomics, even though accurate statistics for peptide identification can now be ach...
Chaudhary Milind
2007-01-01
Full Text Available Background: Complex deformity correction and fracture treatment with the Ilizarov method need extensive preoperative analysis and laborious postoperative fixator alterations, which are error-prone. We report our initial experience in treating the first 22 patients having fractures, complex deformities and shortening with the software-controlled Taylor spatial frame (TSF) external fixator, for its ease of use and accuracy in achieving fracture reduction and complex deformity correction. Settings and Design: The struts of the TSF fixator have multiplane hinges at both ends and the six struts allow correction in all six axes. Hence the same struts act to correct either angulation or translation or rotation. With a single construct assembled during surgery, all the desired axis corrections can be performed without a change of the montage, as is needed with the Ilizarov fixator. Materials and Methods: Twenty-seven limb segments were operated with the TSF fixator. There were 23 tibiae, two femora, one knee joint and one ankle joint. Seven patients had comminuted fractures. Ten patients who had 13 deformed segments achieved full correction. Eight patients had lengthening in 10 tibiae. (Five of these also had simultaneous correction of deformities.) One patient each had correction of knee and ankle deformities. Accurate reduction of fractures and correction of deformities and length could be achieved in all of our patients with minimum postoperative fixator alterations as compared to the Ilizarov system. X-ray visualization of the osteotomy or lengthening site was hindered by the six crossing struts, and the added bulk of the fixator rings made positioning in bed and walking slightly more difficult as compared to the Ilizarov fixator. Conclusions: The TSF external fixator allows accurate fracture reduction and deformity correction without tedious analysis and postoperative frame alterations. The high cost of the fixator is a deterrent. The need for an internet
Masunov, Artëm E., E-mail: amasunov@ucf.edu [NanoScience Technology Center, Department of Chemistry, and Department of Physics, University of Central Florida, Orlando, FL 32826 (United States); Photochemistry Center RAS, ul. Novatorov 7a, Moscow 119421 (Russian Federation); Gangopadhyay, Shruba [Department of Physics, University of California, Davis, CA 95616 (United States); IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95120 (United States)
2015-12-15
A new method to eliminate the spin contamination in broken symmetry density functional theory (BS DFT) calculations is introduced. Unlike the conventional spin-purification correction, this method is based on canonical Natural Orbitals (NO) for each high/low spin coupled electron pair. We derive an expression to extract the energy of the pure singlet state given in terms of the energy of the BS DFT solution, the occupation number of the bonding NO, and the energy of the higher spin state built on these bonding and antibonding NOs (not self-consistent Kohn–Sham orbitals of the high spin state). Compared to the other spin-contamination correction schemes, the spin correction is applied to each correlated electron pair individually. We investigate two binuclear Mn(IV) molecular magnets using this pairwise correction. While one of the molecules is described by magnetic orbitals strongly localized on the metal centers, and its spin gap is accurately predicted by the Noodleman and Yamaguchi schemes, for the other one the gap is predicted poorly by these schemes due to strong delocalization of the magnetic orbitals onto the ligands. We show our new correction to yield more accurate results in both cases. - Highlights: • Magnetic orbitals obtained for high and low spin states are not related. • Spin-purification correction becomes inaccurate for delocalized magnetic orbitals. • We use the natural orbitals of the broken symmetry state to build the high spin state. • This new correction is made separately for each electron pair. • Our spin-purification correction is more accurate for delocalized magnetic orbitals.
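The Yamaguchi scheme that the pairwise correction is compared against reduces to a one-line formula relating broken-symmetry and high-spin energies; a minimal sketch with illustrative numbers (not values from the paper):

```python
def yamaguchi_coupling(e_bs, e_hs, s2_bs, s2_hs):
    """Exchange coupling J from broken-symmetry (BS) and high-spin (HS)
    energies via the Yamaguchi spin-projection formula:
        J = (E_BS - E_HS) / (<S^2>_HS - <S^2>_BS)
    Energies in hartree; J is returned in the same units."""
    return (e_bs - e_hs) / (s2_hs - s2_bs)

# Illustrative numbers for a binuclear Mn(IV) complex (S = 3/2 per center):
# ideal <S^2> expectation values are 12.0 for the HS (S = 3) state and
# about 3.0 for the BS determinant.
j = yamaguchi_coupling(e_bs=-2500.001, e_hs=-2500.000, s2_bs=3.0, s2_hs=12.0)
# a negative J in this convention indicates antiferromagnetic coupling
```

As the abstract notes, this estimate degrades when the magnetic orbitals delocalize onto the ligands, which is what motivates the per-pair natural-orbital correction.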
Drift-corrected nanoplasmonic hydrogen sensing by polarization
Wadell, Carl; Langhammer, Christoph
2015-06-01
Accurate and reliable hydrogen sensors are an important enabling technology for the large-scale introduction of hydrogen as a fuel or energy storage medium. As an example, in a hydrogen-powered fuel cell car of the type now introduced to the market, more than 15 hydrogen sensors are required for safe operation. To enable the long-term use of plasmonic sensors in this particular context, we introduce a concept for drift-correction based on light polarization utilizing symmetric sensor and sensing material nanoparticles arranged in a heterodimer. In this way the inert gold sensor element of the plasmonic dimer couples to a sensing-active palladium element if illuminated in the dimer-parallel polarization direction but not the perpendicular one. Thus the perpendicular polarization readout can be used to efficiently correct for drifts occurring due to changes of the sensor element itself or due to non-specific events like a temperature change. Furthermore, by the use of a polarizing beamsplitter, both polarization signals can be read out simultaneously making it possible to continuously correct the sensor response to eliminate long-term drift and ageing effects. Since our approach is generic, we also foresee its usefulness for other applications of nanoplasmonic sensors than hydrogen sensing.
Corrections for shear and rotatory inertia on flexural vibrations of beams
Nederveen, C.J.; Schwarzl, F.R.
1964-01-01
Different correction formulae for the influence of shear and rotatory inertia on flexural vibrations of freely supported beams are compared with the exact solution. It appears that in most cases a simple formula is sufficient because of the appearance of a constant which is not accurately known, viz
Aliakbari, Mohammad; Toni, Arman
2009-01-01
Writing, as a productive skill, requires an accurate in-depth knowledge of the grammar system, language form and sentence structure. The emphasis on accuracy is justified in the sense that it can lead to the production of structurally correct instances of second language, and to prevent inaccuracy that may result in the production of structurally…
Simulation of Kelvin-Helmholtz Instability with Flux-Corrected Transport Method
WANG Li-Feng; YE Wen-Hua; FAN Zheng-Feng; LI Ying-Jun
2009-01-01
The sixth-order accurate phase error flux-corrected transport numerical algorithm is introduced and used to simulate the Kelvin-Helmholtz instability. Linear growth rates of the simulation agree with the linear theories of the Kelvin-Helmholtz instability, indicating the validity and accuracy of this simulation method. The method also captures the deformation of the instability interface well.
Salting-out effects by pressure-corrected 3D-RISM
Misin, Maksim; Vainikka, Petteri A.; Fedorov, Maxim V.; Palmer, David S.
2016-11-01
We demonstrate that using a pressure corrected three-dimensional reference interaction site model one can accurately predict salting-out (Setschenow's) constants for a wide range of organic compounds in aqueous solutions of NaCl. The approach, based on classical molecular force fields, offers an alternative to more heavily parametrized methods.
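Setschenow's relation, whose constants the pressure-corrected 3D-RISM approach predicts, is simple to state; a minimal sketch with illustrative values (not results from the paper):

```python
def setschenow_solubility(s0, ks, c_salt):
    """Solubility S of an organic solute in a salt solution from the
    Setschenow (salting-out) relation:
        log10(S0 / S) = Ks * c_salt   =>   S = S0 * 10**(-Ks * c_salt)
    s0: aqueous solubility (mol/L), ks: Setschenow constant (L/mol),
    c_salt: salt concentration (mol/L)."""
    return s0 * 10.0 ** (-ks * c_salt)

# e.g. a solute with aqueous solubility 1.0 mol/L and an assumed
# Ks = 0.2 L/mol is salted out in 1 M NaCl to 10**(-0.2) of its solubility
s = setschenow_solubility(s0=1.0, ks=0.2, c_salt=1.0)
```

A positive Ks (the common case for NaCl) means solubility decreases with salt concentration; a negative Ks would describe salting-in.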
Language Trajectory through Corrective Feedback
S. Saber Alavi
2016-08-01
Full Text Available This quasi-experimental study was designed to investigate the effects of corrective feedback on SLA/EFL to determine the potential benefits of two different corrective feedback techniques, namely recasts and elicitation. The research hypotheses were: (1) learners who are exposed to an interactive focused task that requires CF will benefit more than those who are exposed to communicative activities only; (2) elicitation will be more effective than recasts in leading to L2 development. Three intensive EFL classes in a language center in Songkhla province, Thailand were selected to participate in the study. Based on the study design, two classes were assigned to the treatment conditions (elicitation group and recasts group) and the third was used as a control group. The treatment took place over a period of 9 meetings focusing on teaching the third person singular -s morpheme and the provision of CF where it was necessary. The participants' knowledge of the intended syntactic point was tested before treatment and post-tested after receiving the treatment. A multiple-choice and focused-cloze reading grammar test was used in the pre-test and the post-test to evaluate the effects of the treatments on the learners' acquisition of the third person singular morpheme. This classroom-based study showed that the two treatment groups benefited from CF strategies, and according to the study, the elicitation group outperformed the recast one.
Aberration correction past and present.
Hawkes, P W
2009-09-28
Electron lenses are extremely poor: if glass lenses were as bad, we should see as well with the naked eye as with a microscope! The demonstration by Otto Scherzer in 1936 that skillful lens design could never eliminate the spherical and chromatic aberrations of rotationally symmetric electron lenses was therefore most unwelcome and the other great electron optician of those years, Walter Glaser, never ceased striving to find a loophole in Scherzer's proof. In the wartime and early post-war years, the first proposals for correcting C(s) were made and in 1947, in a second milestone paper, Scherzer listed these and other ways of correcting lenses; soon after, Dennis Gabor invented holography for the same purpose. These approaches will be briefly summarized and the work that led to the successful implementation of quadrupole-octopole and sextupole correctors in the 1990s will be analysed. In conclusion, the elegant role of image algebra in describing image formation and processing and, above all, in developing new methods will be mentioned.
OPTIMIZED STRAPDOWN CONING CORRECTION ALGORITHM
黄磊; 刘建业; 曾庆化
2013-01-01
Traditional coning algorithms are based on the first-order coning correction reference model. Usually they reduce the algorithm error of the coning axis (z) by increasing the number of samples in one iteration interval. But the increase of sample numbers requires faster output rates of the sensors. Therefore, the algorithms are often limited in practical use. Moreover, the noncommutativity error of rotation usually exists on all three axes, and the increase of sample numbers has little positive effect on reducing the algorithm errors of the orthogonal axes (x, y). Considering that the errors of the orthogonal axes cannot be neglected in high-precision applications, a coning algorithm with an additional second-order coning correction term is developed to further improve the performance of the coning algorithm. Compared with the traditional algorithms, the new second-order coning algorithm can effectively reduce the algorithm error without increasing the sample numbers. Theoretical analyses validate that in a coning environment with low frequency, the new algorithm performs better than the traditional time-series and frequency-series coning algorithms, while in a maneuver environment the new algorithm has the same order of accuracy as the traditional time-series and frequency-series algorithms. Finally, the practical feasibility of the new coning algorithm is demonstrated by digital simulations and practical turntable tests.
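For reference, the classical first-order two-sample coning correction that such reference models build on (not the paper's second-order algorithm) combines successive gyro angular increments as follows; a minimal sketch:

```python
import numpy as np

def two_sample_coning(dtheta1, dtheta2):
    """Classical two-sample coning update: combine two successive gyro
    angular-increment samples into one rotation-vector increment,
        phi = dtheta1 + dtheta2 + (2/3) * (dtheta1 x dtheta2).
    The cross-product term is the first-order coning correction that
    compensates the noncommutativity of finite rotations."""
    dtheta1 = np.asarray(dtheta1, float)
    dtheta2 = np.asarray(dtheta2, float)
    return dtheta1 + dtheta2 + (2.0 / 3.0) * np.cross(dtheta1, dtheta2)

# two small increments about different axes produce a z-axis correction
phi = two_sample_coning([1e-3, 0.0, 0.0], [0.0, 1e-3, 0.0])
```

In a pure coning motion the cross-product term accumulates a systematic drift about the coning axis, which is exactly the error the abstract's higher-order term further reduces.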
Simplified correction of g-value measurements
Duer, Karsten
1998-01-01
A double glazed unit (Ipasol Natura 66/34) has been investigated in the Danish experimental setup METSET. The corrections of the experimental data are very important for the investigated sample as it shows significant spectral selectivity. In (Duer, 1998) and in (Platzer, 1998) the corrections have been carried out using a detailed physical model based on ISO9050 and prEN410 but using polarized data for non-normal incidence. This model is only valid for plane, clear glazings and therefore not suited for corrections of measurements performed on complex glazings. To investigate a more general correction procedure, the results from the measurements on the Interpane DGU have been corrected using the principle outlined in (Rosenfeld, 1996). This correction procedure is more general as corrections can be carried out without a correct physical model of the investigated glazing. On the other hand...
Juvenile Correctional Institutions Library Services: A Bibliography.
McAlister, Annette M.
This bibliography lists citations for 14 articles, books, and reports concerned with library services in juvenile correctional institutions. A second section lists 21 additional materials on adult correctional libraries which also contain information relevant to the juvenile library. (KP)
Sequences of Closed Operators and Correctness
Sabra Ramadan
2010-01-01
Full Text Available In applications and in the equations of mathematical physics it is very important that the mathematical model corresponding to a given problem be correctly posed. In this research we study the relationship between a sequence of closed operators An→A and the correctness of the equation Ax = y. We also introduce a criterion for correctness.
5 CFR 1604.6 - Error correction.
2010-01-01
5 Administrative Personnel; FEDERAL RETIREMENT THRIFT INVESTMENT BOARD; UNIFORMED SERVICES ACCOUNTS; § 1604.6 Error correction. (a) General rule. A service member's employing agency must correct the service member's...
Monte Carlo scatter correction for SPECT
Liu, Zemei
The goal of this dissertation is to present a quantitatively accurate and computationally fast scatter correction method that is robust and easily accessible for routine applications in SPECT imaging. A Monte Carlo based scatter estimation method is investigated and developed further. The Monte Carlo simulation program SIMIND (Simulating Medical Imaging Nuclear Detectors), was specifically developed to simulate clinical SPECT systems. The SIMIND scatter estimation (SSE) method was developed further using a multithreading technique to distribute the scatter estimation task across multiple threads running concurrently on multi-core CPU's to accelerate the scatter estimation process. An analytical collimator that ensures less noise was used during SSE. The research includes the addition to SIMIND of charge transport modeling in cadmium zinc telluride (CZT) detectors. Phenomena associated with radiation-induced charge transport including charge trapping, charge diffusion, charge sharing between neighboring detector pixels, as well as uncertainties in the detection process are addressed. Experimental measurements and simulation studies were designed for scintillation crystal based SPECT and CZT based SPECT systems to verify and evaluate the expanded SSE method. Jaszczak Deluxe and Anthropomorphic Torso Phantoms (Data Spectrum Corporation, Hillsborough, NC, USA) were used for experimental measurements and digital versions of the same phantoms employed during simulations to mimic experimental acquisitions. This study design enabled easy comparison of experimental and simulated data. The results have consistently shown that the SSE method performed similarly or better than the triple energy window (TEW) and effective scatter source estimation (ESSE) methods for experiments on all the clinical SPECT systems. The SSE method is proven to be a viable method for scatter estimation for routine clinical use.
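The triple energy window (TEW) method that the SSE approach is benchmarked against is simple to state per pixel; a minimal sketch (window widths are illustrative, e.g. for a Tc-99m acquisition):

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_photopeak):
    """Triple-energy-window (TEW) scatter estimate for one pixel:
    trapezoidal interpolation of the scatter counts under the photopeak
    from counts in two narrow flanking energy windows,
        S = (C_l / W_l + C_u / W_u) * W_p / 2
    with counts C and window widths W in keV."""
    return (c_lower / w_lower + c_upper / w_upper) * w_photopeak / 2.0

# illustrative: 28 keV wide photopeak window flanked by two 3 keV windows
s = tew_scatter_estimate(c_lower=120.0, c_upper=30.0,
                         w_lower=3.0, w_upper=3.0, w_photopeak=28.0)
primary = 5000.0 - s  # scatter-corrected counts for this pixel
```

Monte Carlo scatter estimation, as in the dissertation, replaces this energy-window interpolation with an explicit simulation of the scatter component, at higher computational cost.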
Nocera, A.; Alvarez, G.
2016-11-01
Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. This paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper then studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.
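Outside DMRG, the correction-vector idea is just a shifted linear solve; a toy dense-matrix sketch (the Hamiltonian and operator below are illustrative, not from the paper):

```python
import numpy as np

def spectral_function(H, A, omega, eta=0.1):
    """Zero-temperature spectral function
        A(omega) = -(1/pi) Im <gs| A^T [omega + E0 + i*eta - H]^{-1} A |gs>
    computed via the correction vector
        |x> = [omega + E0 + i*eta - H]^{-1} A |gs>.
    Here the linear system is solved densely; DMRG solves it in a
    compressed matrix-product-state representation instead."""
    evals, evecs = np.linalg.eigh(H)
    e0, gs = evals[0], evecs[:, 0]
    rhs = A @ gs
    x = np.linalg.solve((omega + e0 + 1j * eta) * np.eye(len(H)) - H, rhs)
    return -np.imag(rhs.conj() @ x) / np.pi

# toy 2x2 "model": a peak appears near the excitation energy E1 - E0
H = np.array([[0.0, 0.2], [0.2, 1.0]])
A = np.array([[0.0, 1.0], [1.0, 0.0]])
vals = [spectral_function(H, A, w) for w in np.linspace(0.0, 2.0, 5)]
```

The Krylov-space approach of the paper replaces the direct solve with a Lanczos decomposition of H in the Krylov space generated from A|gs>, which is what makes it competitive with conjugate-gradient solvers.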
Correcting for telluric absorption: Methods, case studies, and release of the TelFit code
Gullikson, Kevin; Kraus, Adam [Department of Astronomy, University of Texas, 2515 Speedway, Stop C1400, Austin, TX 78712 (United States); Dodson-Robinson, Sarah [Department of Physics and Astronomy, 217 Sharp Lab, Newark, DE 19716 (United States)
2014-09-01
Ground-based astronomical spectra are contaminated by the Earth's atmosphere to varying degrees in all spectral regions. We present a Python code that can accurately fit a model to the telluric absorption spectrum present in astronomical data, with residuals of ∼3%-5% of the continuum for moderately strong lines. We demonstrate the quality of the correction by fitting the telluric spectrum in a nearly featureless A0V star, HIP 20264, as well as to a series of dwarf M star spectra near the 819 nm sodium doublet. We directly compare the results to an empirical telluric correction of HIP 20264 and find that our model-fitting procedure is at least as good and sometimes more accurate. The telluric correction code, which we make freely available to the astronomical community, can be used as a replacement for telluric standard star observations for many purposes.
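Once a telluric transmission model has been fitted, the correction itself is a division of the observed spectrum by the model; a minimal sketch of that final step (this is not TelFit's actual API):

```python
import numpy as np

def telluric_correct(flux, telluric_model, floor=0.05):
    """Divide an observed spectrum by a fitted telluric transmission
    spectrum (values in [0, 1]). Pixels where the atmosphere is nearly
    opaque (transmission below `floor`) are masked with NaN rather than
    amplified into large residuals."""
    flux = np.asarray(flux, float)
    t = np.asarray(telluric_model, float)
    return np.where(t >= floor, flux / np.maximum(t, floor), np.nan)
```

The quoted 3%-5% residuals refer to how well the fitted model matches the true absorption; the masking floor above is an assumed safeguard for saturated lines, not a figure from the paper.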
Uysal, Ismail E.
2015-10-26
Analysis of electromagnetic interactions on nanodevices can oftentimes be carried out accurately using “traditional” electromagnetic solvers. However, if a gap of sub-nanometer scale exists between any two surfaces of the device, quantum-mechanical effects including tunneling should be taken into account for an accurate characterization of the device's response. Since first-principle quantum simulators cannot be used efficiently to fully characterize a typical-size nanodevice, a quantum corrected electromagnetic model has been proposed as an efficient and accurate alternative (R. Esteban et al., Nat. Commun., 3(825), 2012). The quantum correction is achieved through an effective layered medium introduced into the gap between the surfaces. The dielectric constant of each layer is obtained using a first-principle quantum characterization of the gap with a different dimension.
Deformation field correction for spatial normalization of PET images
Bilgel, Murat; Carass, Aaron; Resnick, Susan M.; Wong, Dean F.; Prince, Jerry L.
2015-01-01
Spatial normalization of positron emission tomography (PET) images is essential for population studies, yet the current state of the art in PET-to-PET registration is limited to the application of conventional deformable registration methods that were developed for structural images. A method is presented for the spatial normalization of PET images that improves their anatomical alignment over the state of the art. The approach works by correcting the deformable registration result using a model that is learned from training data having both PET and structural images. In particular, viewing the structural registration of training data as ground truth, correction factors are learned by using a generalized ridge regression at each voxel given the PET intensities and voxel locations in a population-based PET template. The trained model can then be used to obtain more accurate registration of PET images to the PET template without the use of a structural image. A cross validation evaluation on 79 subjects shows that the proposed method yields more accurate alignment of the PET images compared to deformable PET-to-PET registration as revealed by 1) a visual examination of the deformed images, 2) a smaller error in the deformation fields, and 3) a greater overlap of the deformed anatomical labels with ground truth segmentations. PMID:26142272
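The per-voxel correction model can be sketched with ordinary ridge regression on PET intensity and voxel location (a simplification of the paper's generalized ridge formulation; the feature layout below is an assumption for illustration):

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Ridge regression weights
        w = (X^T X + lam * I)^{-1} X^T y,
    standing in for the per-voxel model that predicts a deformation-field
    correction from PET intensities and voxel locations in the template."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# illustrative training rows: [PET intensity, x, y, z]; target: one
# component of the deformation-field correction at this voxel
X = np.array([[0.8, 1.0, 2.0, 3.0],
              [1.2, 1.5, 2.5, 3.5],
              [0.9, 2.0, 3.0, 4.0]])
y = np.array([0.1, 0.3, 0.2])
w = ridge_fit(X, y, lam=0.5)
pred = X @ w  # corrections predicted for the training voxels
```

The regularization term lam keeps the per-voxel solve well-posed even when few training subjects constrain a voxel, which is why a ridge rather than ordinary least squares is used.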
Fitzpatrick, A Liam
2016-01-01
We use results on Virasoro conformal blocks to study chaotic dynamics in CFT$_2$ at large central charge c. The Lyapunov exponent $\\lambda_L$, which is a diagnostic for the early onset of chaos, receives $1/c$ corrections that may be interpreted as $\\lambda_L = \\frac{2 \\pi}{\\beta} \\left( 1 + \\frac{12}{c} \\right)$. However, out of time order correlators receive other equally important $1/c$ suppressed contributions that do not have such a simple interpretation. We revisit the proof of a bound on $\\lambda_L$ that emerges at large $c$, focusing on CFT$_2$ and explaining why our results do not conflict with the analysis leading to the bound. We also comment on relationships between chaos, scattering, causality, and bulk locality.
Pileup correction of microdosimetric spectra
Langen, K M; Lennox, A J; Kroc, T K; De Luca, P M
2002-01-01
Microdosimetric spectra were measured at the Fermilab neutron therapy facility using low pressure proportional counters operated in pulse mode. The neutron beam has a very low duty cycle (<0.1%) and consequently a high instantaneous dose rate which causes distortions of the microdosimetric spectra due to pulse pileup. The determination of undistorted spectra at this facility necessitated (i) the modified operation of the proton accelerator to reduce the instantaneous dose rate and (ii) the establishment of a computational procedure to correct the measured spectra for remaining pileup distortions. In support of the latter effort, two different pileup simulation algorithms using analytical and Monte-Carlo-based approaches were developed. While the analytical algorithm allows a detailed analysis of pileup processes it only treats two-pulse and three-pulse pileup and its validity is hence restricted. A Monte-Carlo-based pileup algorithm was developed that inherently treats all degrees of pileup. This algorithm...
Radiative corrections in bumblebee electrodynamics
R.V. Maluf
2015-10-01
Full Text Available We investigate some quantum features of the bumblebee electrodynamics in flat spacetimes. The bumblebee field is a vector field that leads to a spontaneous Lorentz symmetry breaking. For a smooth quadratic potential, the massless excitation (Nambu–Goldstone boson can be identified as the photon, transversal to the vacuum expectation value of the bumblebee field. Besides, there is a massive excitation associated with the longitudinal mode and whose presence leads to instability in the spectrum of the theory. By using the principal-value prescription, we show that no one-loop radiative corrections to the mass term are generated. Moreover, the bumblebee self-energy is not transverse, showing that the propagation of the longitudinal mode cannot be excluded from the effective theory.
Quantum Corrections in Massive Gravity
de Rham, Claudia; Ribeiro, Raquel H
2013-01-01
We compute the one-loop quantum corrections to the potential of ghost-free massive gravity. We show how the mass of external matter fields contributes to the running of the cosmological constant, but does not change the ghost-free structure of the massive gravity potential at one-loop. When considering gravitons running in the loops, we show how the structure of the potential gets destabilized at the quantum level, but in a way which would never involve a ghost with a mass smaller than the Planck scale. This is done by explicitly computing the one-loop effective action and supplementing it with the Vainshtein mechanism. We conclude that to one-loop order the special mass structure of ghost-free massive gravity is technically natural.
Quantum corrections in massive gravity
de Rham, Claudia; Heisenberg, Lavinia; Ribeiro, Raquel H.
2013-10-01
We compute the one-loop quantum corrections to the potential of ghost-free massive gravity. We show how the mass of external matter fields contributes to the running of the cosmological constant, but does not change the ghost-free structure of the massive gravity potential at one-loop. When considering gravitons running in the loops, we show how the structure of the potential gets destabilized at the quantum level, but in a way which would never involve a ghost with a mass smaller than the Planck scale. This is done by explicitly computing the one-loop effective action and supplementing it with the Vainshtein mechanism. We conclude that to one-loop order the special mass structure of ghost-free massive gravity is technically natural.
Correct Linearization of Einstein's Equations
Rabounski D.
2006-04-01
Full Text Available Routinely, Einstein’s equations can be reduced to a wave form (linearly dependent on the second derivatives of the space metric) in the absence of gravitation, the space rotation and Christoffel’s symbols. As shown herein, the origin of the problem is the use of the general covariant theory of measurement. Herein the wave form of Einstein’s equations is obtained in terms of Zelmanov’s chronometric invariants (physically observable projections on the observer’s time line and spatial section). The equations so obtained depend solely upon the second derivatives, even for gravitation, the space rotation and Christoffel’s symbols. The correct linearization proves that the Einstein equations are completely compatible with weak waves of the metric.
Fitzpatrick, A. Liam [Department of Physics, Boston University,590 Commonwealth Avenue, Boston, MA 02215 (United States); Kaplan, Jared [Department of Physics and Astronomy, Johns Hopkins University,3400 N. Charles St, Baltimore, MD 21218 (United States)
2016-05-12
We use results on Virasoro conformal blocks to study chaotic dynamics in CFT$_2$ at large central charge c. The Lyapunov exponent $\lambda_L$, which is a diagnostic for the early onset of chaos, receives $1/c$ corrections that may be interpreted as $\lambda_L = \frac{2\pi}{\beta}\left(1 + \frac{12}{c}\right)$. However, out of time order correlators receive other equally important $1/c$ suppressed contributions that do not have such a simple interpretation. We revisit the proof of a bound on $\lambda_L$ that emerges at large $c$, focusing on CFT$_2$ and explaining why our results do not conflict with the analysis leading to the bound. We also comment on relationships between chaos, scattering, causality, and bulk locality.
Static Correctness of Hierarchical Procedures
Schwartzbach, Michael Ignatieff
1990-01-01
A system of hierarchical, fully recursive types in a truly imperative language allows program fragments written for small types to be reused for all larger types. To exploit this property to enable type-safe hierarchical procedures, it is necessary to impose a static requirement on procedure calls....... We introduce an example language and prove the existence of a sound requirement which preserves static correctness while allowing hierarchical procedures. This requirement is further shown to be optimal, in the sense that it imposes as few restrictions as possible. This establishes the theoretical...... basis for a general type hierarchy with static type checking, which enables first-order polymorphism combined with multiple inheritance and specialization in a language with assignments. We extend the results to include opaque types. An opaque version of a type is different from the original but has...
Nonexposure Accurate Location K-Anonymity Algorithm in LBS
Jinying Jia
2014-01-01
Full Text Available This paper tackles location privacy protection in current location-based services (LBS) where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user’s accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existent cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user’s accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas which were reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existent cloaking algorithms, do not require all users to report their locations all the time, and can generate smaller ASR.
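The grid-ID idea can be sketched in a few lines: users report only coarse cell IDs, and the anonymizer grows a block of cells around the querying user until at least K users are covered. This is a simplification of the paper's algorithms and assumes K does not exceed the number of reporting users:

```python
def grid_id(x, y, cell=100.0):
    """Map an accurate coordinate to the ID of its grid cell; only this
    ID leaves the device, never the exact coordinate."""
    return (int(x // cell), int(y // cell))

def cloak(reports, user, k, cell=100.0):
    """Grow a square block of cells around `user`'s cell until it covers
    at least k reported users, then return that block as the cloaked
    anonymous spatial region (ASR). Assumes k <= len(reports)."""
    cx, cy = grid_id(*user, cell)
    r = 0
    while True:
        block = {(i, j) for i in range(cx - r, cx + r + 1)
                        for j in range(cy - r, cy + r + 1)}
        if sum(1 for g in reports if g in block) >= k:
            return block
        r += 1

reports = [grid_id(120, 80), grid_id(350, 90), grid_id(60, 420)]
region = cloak(reports, user=(120, 80), k=2)
```

Because the anonymizer only ever sees cell IDs, no party learns the exact coordinate, which is the "nonexposure" property the abstract emphasizes.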
Fringe Capacitance Correction for a Coaxial Soil Cell
John D. Wanjura
2011-01-01
Accurate measurement of moisture content is a prime requirement in hydrological, geophysical and biogeochemical research as well as for material characterization and process control. Within these areas, accurate measurements of surface area and bound water content are becoming increasingly important for answering many fundamental questions, ranging from characterization of cotton fiber maturity, to characterization of soil water content in soil water conservation research, to bio-plant water utilization, to chemical reactions and diffusion of ionic species across membranes in cells as well as in the dense suspensions that occur in surface films. One promising technique to address the increasing demands for higher-accuracy water content measurements is electrical permittivity characterization of materials. This technique has enjoyed a strong following in the soil-science and geological community through measurements of apparent permittivity via time-domain reflectometry (TDR), as well as in many process control applications. Recent research, however, indicates a need to increase the accuracy beyond that available from traditional TDR. The most logical pathway then becomes a transition from TDR-based measurements to network analyzer measurements of absolute permittivity, which will remove the adverse effects that high-surface-area soils and conductivity impart onto measurements of apparent permittivity in traditional TDR applications. This research examines an observed experimental error for the coaxial probe, from which the modern TDR probe originated, hypothesized to be due to fringe capacitance. The research provides an experimental and theoretical basis for the cause of the error and provides a technique by which to correct the system to remove this source of error. To test this theory, a Poisson model of a coaxial cell was formulated to calculate the effective theoretical extra length caused by the
Accurate Sliding-Mode Control System Modeling for Buck Converters
Høyerby, Mikkel Christian Wendelboe; Andersen, Michael Andreas E.
2007-01-01
This paper shows that classical sliding-mode theory fails to correctly predict the output impedance of the highly useful sliding-mode PID-compensated buck converter. The reason for this is identified as the assumption of the sliding variable being held at zero during sliding mode, effectively mod...... approach also predicts the self-oscillating switching action of the sliding-mode control system correctly. Analytical findings are verified by simulation as well as experimentally in a 10-30 V/3 A buck converter.
Milojević, Slavka; Stojanovic, Vojislav
2017-04-01
Due to the continuous development of seismic acquisition and processing methods, increasing the signal-to-noise ratio is always a current target. The correct application of the latest software solutions improves the processing results and justifies their development. Correct computation and application of static corrections is one of the most important tasks in pre-processing, and this phase is of great importance for further processing steps. Static corrections are applied to seismic data in order to compensate for the effects of irregular topography, for the difference between the elevations of source points and receivers relative to the datum level, for the near-surface low-velocity layer (weathering correction), or for any other factors that influence the spatial and temporal position of seismic traces. The refraction statics method is the most common method for computation of static corrections. It is successful both in resolving long-period statics problems and in determining differences in statics caused by abrupt lateral velocity changes in the near-surface layer. XtremeGeo FlatironsTM is a program whose main purpose is computation of static corrections by a refraction statics method; it supports picking of first arrivals, checking of geometry, multiple methods for analysis and modelling of statics, analysis of refractor anisotropy, and tomography (Eikonal tomography). The exploration area is located on the southern edge of the Pannonian Plain, in a plain area with altitudes of 50 to 195 meters. The largest part of the exploration area covers Deliblato Sands, where the geological structure of the terrain and the large differences in altitude significantly affect the calculation of static corrections. XtremeGeo FlatironsTM has powerful visualization and statistical analysis tools, which contribute to significantly more accurate assessment of the geometry close to the surface.
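The basic elevation (datum) static described in the abstract can be sketched numerically (illustrative values and a textbook formula, not the FlatironsTM computation): the travel time through the near-surface column between a station's elevation and the datum, at a single replacement velocity, is removed from each trace.

```python
def elevation_static_ms(elev_m, datum_m, v_repl_ms):
    """Static time shift (ms) moving a station from its surface elevation
    to the reference datum, assuming vertical raypaths and a single
    replacement velocity v_repl_ms (m/s)."""
    return -1000.0 * (elev_m - datum_m) / v_repl_ms

# Source at 195 m, receiver at 120 m, datum at 50 m, replacement velocity 2000 m/s:
t_src = elevation_static_ms(195.0, 50.0, 2000.0)   # source-side shift
t_rcv = elevation_static_ms(120.0, 50.0, 2000.0)   # receiver-side shift
total = t_src + t_rcv                              # total static for the trace
```

Refraction statics additionally estimates the weathering-layer delay from first-arrival picks; the elevation term above is only the simplest component.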
Correction of a Depth-Dependent Lateral Distortion in 3D Super-Resolution Imaging.
Lina Carlini
Three-dimensional (3D) localization-based super-resolution microscopy (SR) requires correction of aberrations to accurately represent 3D structure. Here we show how a depth-dependent lateral shift in the apparent position of a fluorescent point source, which we term `wobble`, results in warped 3D SR images, and provide a software tool to correct this distortion. This system-specific lateral shift is typically > 80 nm across an axial range of ~ 1 μm. A theoretical analysis based on phase retrieval data from our microscope suggests that the wobble is caused by non-rotationally symmetric phase and amplitude aberrations in the microscope's pupil function. We then apply our correction to the bacterial cytoskeletal protein FtsZ in live bacteria and demonstrate that the corrected data more accurately represent the true shape of this vertically oriented ring-like structure. We also include this correction method in a registration procedure for dual-color 3D SR data and show that it improves target registration error (TRE) at the axial limits over an imaging depth of 1 μm, yielding TRE values of < 20 nm. This work highlights the importance of correcting aberrations in 3D SR to achieve high fidelity between the measurements and the sample.
Ruggiero, Michael T; Gooch, Jonathan; Zubieta, Jon; Korter, Timothy M
2016-02-18
The problem of nonlocal interactions in density functional theory calculations has in part been mitigated by the introduction of range-corrected functional methods. While promising, range corrections require continued evaluation in structural simulations of complex molecular crystals to judge their efficacy in challenging chemical environments. Here, three pyridinium-based crystals, exhibiting a wide range of intramolecular and intermolecular interactions, are used as benchmark systems for gauging the accuracy of several range-corrected density functional techniques. The computational results are compared to low-temperature experimental single-crystal X-ray diffraction and terahertz spectroscopic measurements, enabling direct assessment of range correction in the accurate simulation of potential energy surface minima and curvatures. Ultimately, the simultaneous treatment of both short- and long-range effects by the ωB97-X functional was found to be central to its rank as the top performer in reproducing the complex array of forces that occur in the studied pyridinium solids. These results demonstrate that while long-range corrections are the most commonly implemented range-dependent improvements to density functionals, short-range corrections are vital for the accurate reproduction of forces that rapidly diminish with distance, such as quadrupole-quadrupole interactions.
Accurate level set method for simulations of liquid atomization☆
Changxiao Shao; Kun Luo; Jianshan Yang; Song Chen; Jianren Fan
2015-01-01
Computational fluid dynamics is an efficient numerical approach for spray atomization study, but it is challenging to accurately capture the gas–liquid interface. In this work, an accurate conservative level set method is introduced to accurately track the gas–liquid interfaces in liquid atomization. To validate the capability of this method, binary drop collision and drop impact on a liquid film are investigated. The results are in good agreement with experimental observations. In addition, primary atomization (swirling sheet atomization) is studied using this method. For the swirling sheet atomization, it is found that Rayleigh–Taylor instability in the azimuthal direction causes the primary breakup of the liquid sheet, and complex vortex structures are clustered around the rim of the liquid sheet. The effects of central gas velocity and liquid–gas density ratio on atomization are also investigated. This work lays a solid foundation for further studying the mechanism of spray atomization.
Simple and accurate analytical calculation of shortest path lengths
Melnik, Sergey
2016-01-01
We present an analytical approach to calculating the distribution of shortest path lengths (also called intervertex distances, or geodesic distances) between nodes in unweighted undirected networks. We obtain very accurate results for synthetic random networks with specified degree distribution (the so-called configuration model networks). Our method allows us to accurately predict the distribution of shortest path lengths on real-world networks using their degree distribution, or joint degree-degree distribution. Compared to some other methods, our approach is simpler and yields more accurate results. In order to obtain the analytical results, we use the analogy between an infection reaching a node in $n$ discrete time steps (i.e., as in the susceptible-infected epidemic model) and that node being at a distance $n$ from the source of the infection.
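The quantity being approximated, the distribution of shortest path lengths, can be computed exactly on small graphs by breadth-first search, which is also how the infection analogy reads: nodes first reached at step n are at distance n from the source. A minimal sketch (not the authors' analytical method):

```python
from collections import Counter, deque

def path_length_counts(adj):
    """Count unordered node pairs at each shortest-path distance in an
    unweighted undirected graph given as {node: [neighbours]}."""
    counts = Counter()
    for src in adj:                      # BFS from every node
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:        # first reach = shortest distance
                    dist[v] = dist[u] + 1
                    q.append(v)
        for node, d in dist.items():
            if d > 0:
                counts[d] += 1
    return Counter({d: c // 2 for d, c in counts.items()})  # each pair seen twice

# A 4-cycle: adjacent nodes are at distance 1, opposite corners at distance 2.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
dist_counts = path_length_counts(ring)
```

The analytical approach in the paper predicts this distribution from the degree distribution alone, without running any traversal.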
Accurate nuclear radii and binding energies from a chiral interaction
Ekstrom, A; Wendt, K A; Hagen, G; Papenbrock, T; Carlsson, B D; Forssen, C; Hjorth-Jensen, M; Navratil, P; Nazarewicz, W
2015-01-01
The accurate reproduction of nuclear radii and binding energies is a long-standing challenge in nuclear theory. To address this problem two-nucleon and three-nucleon forces from chiral effective field theory are optimized simultaneously to low-energy nucleon-nucleon scattering data, as well as binding energies and radii of few-nucleon systems and selected isotopes of carbon and oxygen. Coupled-cluster calculations based on this interaction, named NNLOsat, yield accurate binding energies and radii of nuclei up to 40Ca, and are consistent with the empirical saturation point of symmetric nuclear matter. In addition, the low-lying collective 3- states in 16O and 40Ca are described accurately, while spectra for selected p- and sd-shell nuclei are in reasonable agreement with experiment.
Accurate reconstruction of digital holography using frequency domain zero padding
Shin, Jun Geun; Kim, Ju Wan; Lee, Jae Hwi; Lee, Byeong Ha
2017-04-01
We propose an image reconstruction method for digital holography that yields more accurate reconstruction. Digital holography provides both the light amplitude and the phase of a specimen through recording of the interferogram. Since the Fresnel diffraction can be efficiently implemented by the Fourier transform, a zero padding technique can be applied to obtain more accurate information. In this work, we report the method of frequency domain zero padding (FDZP). Both in computer simulation and in experiments made with a USAF 1951 resolution chart and target, the FDZP gave more accurate reconstruction images. Although the FDZP requires more processing time, with the help of a graphics processing unit (GPU) it can find good applications in digital holography for 3-D profile imaging.
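The core trick, inserting zeros into the middle of a spectrum to interpolate the signal, can be shown on a one-dimensional example (a generic illustration of frequency-domain zero padding, not the authors' holographic reconstruction pipeline):

```python
import numpy as np

def fdzp_interpolate(signal, factor):
    """Interpolate a real signal by zero padding its spectrum: insert
    zeros at the high-frequency centre of the FFT, inverse-transform,
    and rescale to preserve amplitude."""
    n = len(signal)
    spec = np.fft.fft(signal)
    half = n // 2
    padded = np.concatenate([spec[:half],
                             np.zeros(n * (factor - 1), dtype=complex),
                             spec[half:]])
    return np.fft.ifft(padded).real * factor

# A band-limited test signal sampled at 16 points, interpolated to 64.
t = np.linspace(0, 1, 16, endpoint=False)
x = np.sin(2 * np.pi * 3 * t)
y = fdzp_interpolate(x, 4)       # 64 samples on the same interval
```

For a band-limited signal like this one the result matches the continuous waveform exactly; for general signals the split of the Nyquist bin needs extra care.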
Memory conformity affects inaccurate memories more than accurate memories.
Wright, Daniel B; Villalba, Daniella K
2012-01-01
After controlling for initial confidence, inaccurate memories were shown to be more easily distorted than accurate memories. In two experiments groups of participants viewed 50 stimuli and were then presented with these stimuli plus 50 fillers. During this test phase participants reported their confidence that each stimulus was originally shown. This was followed by computer-generated responses from a bogus participant. After being exposed to this response participants again rated the confidence of their memory. The computer-generated responses systematically distorted participants' responses. Memory distortion depended on initial memory confidence, with uncertain memories being more malleable than confident memories. This effect was moderated by whether the participant's memory was initially accurate or inaccurate. Inaccurate memories were more malleable than accurate memories. The data were consistent with a model describing two types of memory (i.e., recollective and non-recollective memories), which differ in how susceptible these memories are to memory distortion.
Accurate and Robust Attitude Estimation Using MEMS Gyroscopes and a Monocular Camera
Kobori, Norimasa; Deguchi, Daisuke; Takahashi, Tomokazu; Ide, Ichiro; Murase, Hiroshi
In order to estimate accurate rotations of mobile robots and vehicles, we propose a hybrid system which combines a low-cost monocular camera with gyro sensors. Gyro sensors have drift errors that accumulate over time. On the other hand, a camera cannot obtain the rotation continuously in the case where feature points cannot be extracted from images, although its accuracy is better than that of gyro sensors. To solve these problems we propose a method for combining these sensors based on an Extended Kalman Filter. The errors of the gyro sensors are corrected by referring to the rotations obtained from the camera. In addition, by using a reliability judgment of camera rotations and devising the state value of the Extended Kalman Filter, even when the rotation is not continuously observable from the camera, the proposed method shows good performance. Experimental results showed the effectiveness of the proposed method.
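The fusion idea can be reduced to a one-dimensional linear Kalman filter sketch (the paper uses an Extended Kalman Filter with a camera-reliability judgment; the numbers here are illustrative): the gyro rate is integrated every step, and an absolute camera angle, when available, pulls the drifting estimate back.

```python
def kf_step(angle, P, gyro_rate, dt, q, cam_angle=None, r=None):
    """One predict(+update) step of a scalar Kalman filter on heading.
    Predict: integrate the gyro rate (variance grows by q per step).
    Update: when the camera yields an absolute angle (variance r), correct."""
    angle = angle + gyro_rate * dt      # predict from gyro
    P = P + q
    if cam_angle is not None:           # camera measurement usable
        K = P / (P + r)                 # Kalman gain
        angle = angle + K * (cam_angle - angle)
        P = (1.0 - K) * P
    return angle, P

# Gyro with a constant +0.1 rad/s bias; true heading fixed at 0 rad.
angle, P = 0.0, 1.0
for step in range(50):
    cam = 0.0 if step % 5 == 0 else None    # camera only every 5th step
    angle, P = kf_step(angle, P, gyro_rate=0.1, dt=0.1,
                       q=0.01, cam_angle=cam, r=0.05)
```

Without the camera updates the estimate would drift to 0.5 rad over these 50 steps; with them it stays bounded near zero.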
Limbago, Brandi M
2016-03-01
Bacteria in the Staphylococcus intermedius group, including Staphylococcus pseudintermedius, often encode mecA-mediated methicillin resistance. Reliable detection of this phenotype for proper treatment and infection control decisions requires that these coagulase-positive staphylococci are accurately identified and specifically that they are not misidentified as S. aureus. As correct species level bacterial identification becomes more commonplace in clinical laboratories, one can expect to see changes in guidance for antimicrobial susceptibility testing and interpretation. The study by Wu et al. in this issue (M. T. Wu, C.-A. D. Burnham, L. F. Westblade, J. Dien Bard, S. D. Lawhon, M. A. Wallace, T. Stanley, E. Burd, J. Hindler, R. M. Humphries, J Clin Microbiol 54:535-542, 2016, http://dx.doi.org/10.1128/JCM.02864-15) highlights the impact of robust identification of S. intermedius group organisms on the selection of appropriate antimicrobial susceptibility testing methods and interpretation.
Accurate estimation of influenza epidemics using Google search data via ARGO.
Yang, Shihao; Santillana, Mauricio; Kou, S C
2015-11-24
Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.
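The "autoregression with exogenous search data" structure can be sketched with an ordinary least-squares fit on synthetic data (a bare-bones stand-in: the real ARGO model adds regularisation and dynamic retraining, and the function name here is hypothetical):

```python
import numpy as np

def fit_argo_like(y, X, lags=2):
    """Least-squares fit of y[t] ~ intercept + y[t-1..t-lags] + X[t]:
    autoregressive terms on the target plus exogenous regressors
    (e.g. search-volume series)."""
    rows = []
    for t in range(lags, len(y)):
        rows.append(np.concatenate(([1.0], y[t - lags:t][::-1], X[t])))
    A = np.array(rows)
    coef, *_ = np.linalg.lstsq(A, y[lags:], rcond=None)
    return coef            # [intercept, phi_1, ..., phi_lags, betas...]

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=(n, 1))              # one synthetic "search volume" series
y = np.zeros(n)
for t in range(2, n):                    # ground truth: AR(2) + exogenous term
    y[t] = 0.5 * y[t-1] - 0.2 * y[t-2] + 0.8 * x[t, 0] + 0.01 * rng.normal()
coef = fit_argo_like(y, x, lags=2)
```

On this synthetic series the fitted coefficients recover the generating values closely, which is the sanity check one would run before moving to real search data.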
Importance of local exact exchange potential in hybrid functionals for accurate excited states
Kim, Jaewook; Hwang, Sang-Yeon; Ryu, Seongok; Choi, Sunghwan; Kim, Woo Youn
2016-01-01
Density functional theory has been an essential analysis tool for both theoretical and experimental chemists since accurate hybrid functionals were developed. Here we propose a local hybrid method derived from the optimized effective potential (OEP) method and compare its distinct features with conventional nonlocal ones from the Hartree-Fock (HF) exchange operator. Both are formally exact for ground states and thus show similar accuracy for atomization energies and reaction barrier heights. For excited states, the local version yields virtual orbitals with N-electron character, while those of the nonlocal version have mixed characters between N- and (N+1)-electron orbitals. As a result, the orbital energy gaps from the former well approximate excitation energies with a small mean absolute error (MAE = 0.40 eV) for the Caricato benchmark set. The correction from time-dependent density functional theory with a simple local density approximation kernel further improves its accuracy by incorporating multi-config...
ACCURATE KAP METER CALIBRATION AS A PREREQUISITE FOR OPTIMISATION IN PROJECTION RADIOGRAPHY.
Malusek, A; Sandborg, M; Carlsson, G Alm
2016-06-01
Modern X-ray units register the air kerma-area product, PKA, with a built-in KAP meter. Some KAP meters show an energy-dependent bias comparable with the maximum uncertainty articulated by the IEC (25 %), adversely affecting dose-optimisation processes. To correct for the bias, a reference KAP meter calibrated at a standards laboratory and the two calibration methods described here can be used: the calibration is transferred from an energy-independent dosemeter via a reference beam quality in the clinic, Q1, to the beam quality of interest, Q. Biases up to 35 % of built-in KAP meter readings were noted. Energy-dependent calibration factors are needed for unbiased PKA.
Blackman, Jonathan; Galley, Chad R; Szilagyi, Bela; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A
2015-01-01
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. In this paper, we construct an accurate and fast-to-evaluate surrogate model for numerical relativity (NR) waveforms from non-spinning binary black hole coalescences with mass ratios from $1$ to $10$ and durations corresponding to about $15$ orbits before merger. Our surrogate, which is built using reduced order modeling techniques, is distinct from traditional modeling efforts. We find that the full multi-mode surrogate model agrees with waveforms generated by NR to within the numerical error of the NR code. In particular, we show that our modeling strategy produces surrogates which can correctly predict NR waveforms that were {\\em not} used for the surrogate's training. For all practical purposes, then, the surrogate waveform model is equivalent to the high-accuracy, large-scale simulation waveform but can be evaluated in a millisecond to a second dependin...
Advanced hardware design for error correcting codes
Coussy, Philippe
2015-01-01
This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book's chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc.). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for current and next-generation standards; • Provides coverage of industrial user needs and advanced error-correcting techniques.
Comparison and Analysis of Geometric Correction Models of Spaceborne SAR.
Jiang, Weihao; Yu, Anxi; Dong, Zhen; Wang, Qingsong
2016-06-25
Following the development of synthetic aperture radar (SAR), SAR images have become increasingly common. Many researchers have conducted large studies on geolocation models, but little work has been conducted on the available models for the geometric correction of SAR images of different terrain. To address the terrain issue, four different models were compared and are described in this paper: a rigorous range-Doppler (RD) model, a rational polynomial coefficients (RPC) model, a revised polynomial (PM) model and an elevation derivation (EDM) model. The results of comparisons of the geolocation capabilities of the models show that a proper model for a SAR image of a specific terrain can be determined. A solution table was obtained to recommend a suitable model for users. Three TerraSAR-X images, two ALOS-PALSAR images and one Envisat-ASAR image were used for the experiment, including flat terrain and mountain terrain SAR images as well as two large area images. Geolocation accuracies of the models for different terrain SAR images were computed and analyzed. The comparisons of the models show that the RD model was accurate but was the least efficient; therefore, it is not the ideal model for real-time implementations. The RPC model is sufficiently accurate and efficient for the geometric correction of SAR images of flat terrain, whose precision is below 0.001 pixels. The EDM model is suitable for the geolocation of SAR images of mountainous terrain, and its precision can reach 0.007 pixels. Although the PM model does not produce results as precise as the other models, its efficiency is excellent and its potential should not be underestimated. With respect to the geometric correction of SAR images over large areas, the EDM model has an accuracy of under one pixel, whereas the RPC model consumes one third of the time of the EDM model.
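An RPC model maps normalised ground coordinates to image coordinates through ratios of polynomials. A minimal sketch with only first-order terms (operational RPC sets use 20-term cubic polynomials per numerator and denominator; coefficients here are toy values):

```python
def rpc_project(lat, lon, h, num, den):
    """Evaluate one image coordinate as a ratio of two polynomials in the
    normalised ground coordinates. Only the constant and linear terms are
    modelled here; real RPCs use 20 cubic terms per polynomial."""
    terms = [1.0, lat, lon, h]
    p = sum(c * t for c, t in zip(num, terms))
    q = sum(c * t for c, t in zip(den, terms))
    return p / q

# Toy coefficients: the row coordinate depends mostly on latitude,
# slightly on height, with a trivial (constant) denominator.
row = rpc_project(lat=0.2, lon=-0.1, h=0.05,
                  num=[10.0, 500.0, 0.0, 2.0],
                  den=[1.0, 0.0, 0.0, 0.0])
```

The same evaluation is done for the column coordinate with a second coefficient set; geometric correction then inverts this mapping over a terrain model.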
Optics measurement and correction for the Relativistic Heavy Ion Collider
Shen, Xiaozhe
The quality of beam optics is of great importance for the performance of a high energy accelerator like the Relativistic Heavy Ion Collider (RHIC). The turn-by-turn (TBT) beam position monitor (BPM) data can be used to derive beam optics. However, the accuracy of the derived beam optics is often limited by the performance and imperfections of instruments as well as measurement methods and conditions. Therefore, a robust and model-independent data analysis method is highly desired to extract noise-free information from TBT BPM data. As a robust signal-processing technique, an independent component analysis (ICA) algorithm called second order blind identification (SOBI) has been proven to be particularly efficient in extracting physical beam signals from TBT BPM data even in the presence of instrument's noise and error. We applied the SOBI ICA algorithm to RHIC during the 2013 polarized proton operation to extract accurate linear optics from TBT BPM data of AC dipole driven coherent beam oscillation. From the same data, a first systematic estimation of RHIC BPM noise performance was also obtained by the SOBI ICA algorithm, and showed a good agreement with the RHIC BPM configurations. Based on the accurate linear optics measurement, a beta-beat response matrix correction method and a scheme of using horizontal closed orbit bumps at sextupoles for arc beta-beat correction were successfully applied to reach a record-low beam optics error at RHIC. This thesis presents principles of the SOBI ICA algorithm and theory as well as experimental results of optics measurement and correction at RHIC.
Accurate torque-speed performance prediction for brushless dc motors
Gipper, Patrick D.
Desirable characteristics of the brushless dc motor (BLDCM) have resulted in their application for electrohydrostatic (EH) and electromechanical (EM) actuation systems. But to effectively apply the BLDCM requires accurate prediction of performance. The minimum necessary performance characteristics are motor torque versus speed, peak and average supply current and efficiency. BLDCM nonlinear simulation software specifically adapted for torque-speed prediction is presented. The capability of the software to quickly and accurately predict performance has been verified on fractional to integral HP motor sizes, and is presented. Additionally, the capability of torque-speed prediction with commutation angle advance is demonstrated.
Accurate analysis of planar metamaterials using the RLC theory
Malureanu, Radu; Lavrinenko, Andrei
2008-01-01
In this work we present an accurate description of the response of metallic pads using RLC theory. To calculate this response we take into account several factors, including the mutual inductances, a precise formula for determining the capacitance, and the pads' resistance, considering the variation of permittivity due to small thicknesses. Even if complex, this strategy gives accurate results and we believe that, after more refinement, it can be used to calculate a complete metallic structure placed on a substrate far faster than full simulation programs do.
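In such an RLC description the pad response peaks at the familiar resonance frequency; a minimal sketch with illustrative element values (not taken from the paper), folding a mutual-inductance term into the effective inductance:

```python
import math

def resonance_hz(L, C, M=0.0):
    """Resonance frequency of an RLC pad model, with the mutual
    inductance M added to the effective series inductance."""
    return 1.0 / (2.0 * math.pi * math.sqrt((L + M) * C))

# Picohenry-scale inductance and attofarad-scale capacitance put the
# resonance in the tens-of-THz range typical of optical metamaterials.
f0 = resonance_hz(L=2e-12, C=5e-18, M=0.5e-12)
```

Shifting M or C moves the resonance, which is exactly the kind of dependence the RLC model lets one explore without a full-wave simulation.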
Method of accurate grinding for single enveloping TI worm
SUN; Yuehai; ZHENG; Huijiang; BI; Qingzhen; WANG; Shuren
2005-01-01
TI worm drive consists of an involute helical gear and its enveloping hourglass worm. Accurate grinding of the TI worm is the key manufacturing technology for TI worm gearing to be popularized and applied. According to the theory of gear meshing, the equations of the tooth surface of the worm drive are obtained, and the equation of the axial-section profile of the grinding wheel that can accurately grind the TI worm is extracted. Simultaneously, the relations of position and motion between the TI worm and the grinding wheel are expounded. The method for precisely grinding the single enveloping TI worm is obtained.
THE SELF-CORRECTION OF ENGLISH SPEECH ERRORS IN SECOND LANGUAGE LEARNING
Ketut Santi Indriani
2015-05-01
The process of second language (L2) learning is strongly influenced by the factors of error reconstruction that occur when the language is learned. Errors will definitely appear in the learning process. However, errors can be used as a step to accelerate the process of understanding the language. Doing self-correction (with or without cues) is one example. In the aspect of speaking, self-correction is done immediately after the error appears. This study is aimed at finding (i) what speech errors the L2 speakers are able to identify, (ii) of the errors identified, what speech errors the L2 speakers are able to self-correct and (iii) whether the self-correction of speech errors is able to immediately improve L2 learning. Based on the data analysis, it was found that the majority of identified errors are related to nouns (plurality), subject-verb agreement, grammatical structure and pronunciation. L2 speakers tend to correct errors properly. Of the 78% identified speech errors, as much as 66% could be self-corrected accurately by the L2 speakers. Based on the analysis, it was also found that self-correction is able to improve L2 learning ability directly. This is evidenced by the absence of repetition of the same error after the error had been corrected.
Distortion correction in EPI using an extended PSF method with a reversed phase gradient approach.
Myung-Ho In
In echo-planar imaging (EPI), such as commonly used for functional MRI (fMRI) and diffusion-tensor imaging (DTI), compressed distortion is a more difficult challenge than local stretching, as spatial information can be lost in strongly compressed areas. In addition, the effects are more severe at ultra-high field (UHF) such as 7T due to increased field inhomogeneity. To resolve this problem, two EPIs with opposite phase-encoding (PE) polarity were acquired and combined after distortion correction. For distortion correction, a point spread function (PSF) mapping method was chosen due to its high correction accuracy and extended to perform distortion correction of both EPIs with opposite PE polarity, thus reducing the PSF reference scan time. Because the amount of spatial information differs between the opposite PE datasets, the method was further extended to incorporate a weighted combination of the two distortion-corrected images to maximize the spatial information content of the final corrected image. The correction accuracy of the proposed method was evaluated in distortion-corrected data using both forward and reverse phase-encoded PSF reference data and compared with the reversed gradient approaches suggested previously. Further, we demonstrate that the extended PSF method with an improved weighted combination can recover local distortions and spatial information loss and can be applied successfully not only to spin-echo EPI, but also to gradient-echo EPIs acquired with both PE directions to perform geometrically accurate image reconstruction.
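The weighted-combination step can be sketched with a common choice of weight (a generic illustration, not necessarily the paper's exact weighting): where one polarity was locally compressed, the Jacobian of its distortion correction is small and the opposite-polarity image should dominate.

```python
import numpy as np

def combine_pe(img_up, img_down, jac_up, jac_down):
    """Weighted combination of two distortion-corrected EPIs acquired
    with opposite phase-encoding polarity. Weights follow the local
    Jacobian of each correction: where one polarity was compressed
    (small Jacobian, information lost), the other polarity dominates."""
    w_up = jac_up / (jac_up + jac_down)
    return w_up * img_up + (1.0 - w_up) * img_down

up = np.array([[1.0, 2.0], [3.0, 4.0]])
down = np.array([[3.0, 2.0], [1.0, 0.0]])
# The left column is strongly compressed in the "up" acquisition:
j_up = np.array([[0.2, 1.0], [0.2, 1.0]])
j_down = np.array([[1.8, 1.0], [1.8, 1.0]])
out = combine_pe(up, down, j_up, j_down)
```

In the left column the output is pulled toward the "down" image; where both Jacobians are equal the combination reduces to a plain average.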
Digital correction of computed X-radiographs for coral densitometry
Boucher, H.; Duprey, N.; Jiménez, C.
2011-12-01
Corals are widely used for assessment of environmental and climatic changes, as their skeletal growth is influenced by the surrounding environment. Variations in skeletal density are sensitive to environmental variations (water temperature, nutrient concentration, etc.). Digitized X-radiographs have been used for coral skeleton density measurements since the 1980s. However, the shape of the X-ray beam emitted during the irradiation process is strongly distorted due to spherical spreading (inverse square law) and the heel effect. Consequently, the X-ray intensity intersecting the surface of the sensitive film or the electronic sensor (e.g. PSL plate) is heterogeneous. These heterogeneities are characterized by an asymmetrical concentric pattern of decreasing intensity from the center to the edges of the X-radiographs. This commonly generates errors in density measurements that may reach up to 40%, twice as much as the seasonal density variations usually found in corals. Until now, extra X-ray images or aluminum standards were used to correct X-radiographs. Such corrective methods may be constraining when working with a high number of coral samples. We present an inexpensive, straightforward, and accurate method to correct strong heterogeneities of X-ray irradiation that affect X-ray images. The method relies on the relation between optical density (OD) and skeletal density; it is non-destructive and provides high-resolution measurements. Our method was applied to measure density variations in the Caribbean reef-building coral Siderastrea siderea from Costa Rica. The basic assumption is that the X-radiograph background, i.e., areas without objects, records the asymmetrical concentric pattern of X-ray intensity. A full image of this pattern was created with a natural neighbor interpolation. The resulting modeled image was then subtracted from the original X-ray image, thus permitting a reliable OD measurement directly on the corrected X-ray image. This Digital
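The background-modelling idea can be sketched as follows. Natural neighbor interpolation is not in the Python standard library, so inverse-distance weighting stands in for it here; all names are illustrative assumptions.

```python
# Sketch of the background-subtraction idea: sample the object-free
# background of an X-radiograph, interpolate a full intensity surface,
# and subtract it from the image. Inverse-distance weighting is a
# stand-in for the natural neighbor interpolation used in the paper.

def idw(samples, x, y, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from (sx, sy, value) samples."""
    num = den = 0.0
    for sx, sy, v in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return v  # exactly at a sample point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den

def flatten_background(image, background_mask):
    """image: 2-D list of intensities; background_mask: True where no object lies."""
    samples = [(i, j, image[i][j])
               for i, row in enumerate(image)
               for j, _ in enumerate(row) if background_mask[i][j]]
    return [[image[i][j] - idw(samples, i, j)
             for j in range(len(image[0]))] for i in range(len(image))]

# Toy radiograph: uniform background of 5 with one "object" pixel of 6.
img = [[5.0, 5.0, 5.0], [5.0, 6.0, 5.0], [5.0, 5.0, 5.0]]
mask = [[True, True, True], [True, False, True], [True, True, True]]
flat = flatten_background(img, mask)  # object retains its excess over background
```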
Assessment of density functional methods with correct asymptotic behavior
Tsai, Chen-Wei; Li, Guan-De; Chai, Jeng-Da
2012-01-01
Long-range corrected (LC) hybrid functionals and asymptotically corrected (AC) model potentials are two distinct density functional methods with correct asymptotic behavior. They are known to be accurate for properties that are sensitive to the asymptote of the exchange-correlation potential, such as the highest occupied molecular orbital energies and Rydberg excitation energies of molecules. To provide a comprehensive comparison, we investigate the performance of the two schemes and others on a very wide range of applications, including the asymptote problems, self-interaction-error problems, energy-gap problems, charge-transfer problems, and many others. The LC hybrid scheme is shown to consistently outperform the AC model potential scheme. In addition, to be consistent with the molecules collected in the IP131 database [Y.-S. Lin, C.-W. Tsai, G.-D. Li, and J.-D. Chai, J. Chem. Phys. 136, 154109 (2012)], we expand the EA115 and FG115 databases to include, respectively, the vertical electron affinities and f...
Accounting for Chromatic Atmospheric Effects on Barycentric Corrections
Blackman, Ryan T.; Szymkowiak, Andrew E.; Fischer, Debra A.; Jurgenson, Colby A.
2017-03-01
Atmospheric effects on stellar radial velocity measurements for exoplanet discovery and characterization have not yet been fully investigated for extreme precision levels. We carry out calculations to determine the wavelength dependence of barycentric corrections across optical wavelengths, due to the ubiquitous variations in air mass during observations. We demonstrate that radial velocity errors of at least several cm s⁻¹ can be incurred if the wavelength dependence is not included in the photon-weighted barycentric corrections. A minimum of four wavelength channels across optical spectra (380–680 nm) are required to account for this effect at the 10 cm s⁻¹ level, with polynomial fits of the barycentric corrections applied to cover all wavelengths. Additional channels may be required in poor observing conditions or to avoid strong telluric absorption features. Furthermore, consistent flux sampling on the order of seconds throughout the observation is necessary to ensure that accurate photon weights are obtained. Finally, we describe how a multiple-channel exposure meter will be implemented in the EXtreme PREcision Spectrograph (EXPRES).
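The photon-weighting concept for a single wavelength channel can be sketched as below; the function names and the linear toy model for the barycentric drift are illustrative assumptions, not EXPRES code.

```python
# Sketch of a photon-weighted barycentric correction for one wavelength
# channel: each flux sample recorded during the exposure weights the
# barycentric correction evaluated at that instant.

def photon_weighted_bc(times, fluxes, bc_of_t):
    """Flux-weighted mean barycentric correction over an exposure."""
    total_flux = sum(fluxes)
    return sum(f * bc_of_t(t) for t, f in zip(times, fluxes)) / total_flux

# Toy example: correction drifting linearly during a 60 s exposure;
# clouds cut the flux in the second half, so the effective correction
# is pulled toward the early, brighter samples.
times = [0, 15, 30, 45, 60]          # seconds into the exposure
fluxes = [1.0, 1.0, 1.0, 0.3, 0.3]   # relative counts per sample
bc = lambda t: 10.0 + 0.01 * t       # m/s, hypothetical drift
weighted = photon_weighted_bc(times, fluxes, bc)
midpoint = bc(30)                    # naive mid-exposure value
```

In practice one such weighted correction per channel, fitted with a polynomial in wavelength, covers the full spectrum, which is the multi-channel scheme the abstract describes.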
Combined registration and motion correction of longitudinal retinal OCT data
Lang, Andrew; Carass, Aaron; Al-Louzi, Omar; Bhargava, Pavan; Solomon, Sharon D.; Calabresi, Peter A.; Prince, Jerry L.
2016-03-01
Optical coherence tomography (OCT) has become an important modality for examination of the eye. To measure layer thicknesses in the retina, automated segmentation algorithms are often used, producing accurate and reliable measurements. However, subtle changes over time are difficult to detect since the magnitude of the change can be very small. Thus, tracking disease progression over short periods of time is difficult. Additionally, unstable eye position and motion alter the consistency of these measurements, even in healthy eyes. Thus, both registration and motion correction are important for processing longitudinal data of a specific patient. In this work, we propose a method to jointly do registration and motion correction. Given two scans of the same patient, we initially extract blood vessel points from a fundus projection image generated on the OCT data and estimate point correspondences. Due to saccadic eye movements during the scan, motion is often very abrupt, producing a sparse set of large displacements between successive B-scan images. Thus, we use lasso regression to estimate the movement of each image. By iterating between this regression and a rigid point-based registration, we are able to simultaneously align and correct the data. With longitudinal data from 39 healthy control subjects, our method improves the registration accuracy by 43% compared to simple alignment to the fovea and 8% when using point-based registration only. We also show improved consistency of repeated total retina thickness measurements.
Total energy evaluation in the Strutinsky shell correction method.
Zhou, Baojing; Wang, Yan Alexander
2007-08-14
We analyze the total energy evaluation in the Strutinsky shell correction method (SCM) of Ullmo et al. [Phys. Rev. B 63, 125339 (2001)], where a series expansion of the total energy is developed based on perturbation theory. In agreement with Yannouleas and Landman [Phys. Rev. B 48, 8376 (1993)], we also identify the first-order SCM result to be the Harris functional [Phys. Rev. B 31, 1770 (1985)]. Further, we find that the second-order correction of the SCM turns out to be the second-order error of the Harris functional, which involves the a priori unknown exact Kohn-Sham (KS) density, rho(KS)(r). Interestingly, the approximation of rho(KS)(r) by rho(out)(r), the output density of the SCM calculation, in the evaluation of the second-order correction leads to the Hohenberg-Kohn-Sham functional. By invoking an auxiliary system in the framework of orbital-free density functional theory, Ullmo et al. designed a scheme to approximate rho(KS)(r), but with several drawbacks. An alternative is designed to utilize the optimal density from a high-quality density mixing method to approximate rho(KS)(r). Our new scheme allows more accurate and complex kinetic energy density functionals and nonlocal pseudopotentials to be employed in the SCM. The efficiency of our new scheme is demonstrated in atomistic calculations on the cubic diamond Si and face-centered-cubic Ag systems.
Pulse compressor with aberration correction
Mankos, Marian [Electron Optica, Inc., Palo Alto, CA (United States)
2015-11-30
In this SBIR project, Electron Optica, Inc. (EOI) is developing an electron mirror-based pulse compressor attachment to new and retrofitted dynamic transmission electron microscopes (DTEMs) and ultrafast electron diffraction (UED) cameras for improving the temporal resolution of these instruments from the characteristic range of a few picoseconds to a few nanoseconds and beyond, into the sub-100 femtosecond range. The improvement will enable electron microscopes and diffraction cameras to better resolve the dynamics of reactions in the areas of solid state physics, chemistry, and biology. EOI’s pulse compressor technology utilizes the combination of electron mirror optics and a magnetic beam separator to compress the electron pulse. The design exploits the symmetry inherent in reversing the electron trajectory in the mirror in order to compress the temporally broadened beam. This system also simultaneously corrects the chromatic and spherical aberration of the objective lens for improved spatial resolution. This correction will be found valuable as the source size is reduced with laser-triggered point source emitters. With such emitters, it might be possible to significantly reduce the illuminated area and carry out ultrafast diffraction experiments from small regions of the sample, e.g. from individual grains or nanoparticles. During phase I, EOI drafted a set of candidate pulse compressor architectures and evaluated the trade-offs between temporal resolution and electron bunch size to achieve the optimum design for two particular applications with market potential: increasing the temporal and spatial resolution of UEDs, and increasing the temporal and spatial resolution of DTEMs. Specialized software packages that have been developed by MEBS, Ltd. were used to calculate the electron optical properties of the key pulse compressor components: namely, the magnetic prism, the electron mirror, and the electron lenses. In the final step, these results were folded
An accurate analytic description of neutrino oscillations in matter
Niro, Viviana [Max-Planck-Institut fuer Kernphysik, Heidelberg (Germany)
2009-07-01
We present a simple closed-form analytic expression for the probability of two-flavour neutrino oscillations in matter with an arbitrary density profile. Our formula is based on a perturbative expansion and allows an easy calculation of higher-order corrections. We demonstrate the validity of our results using a few model density profiles, including the PREM density profile of the Earth.
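For orientation, the constant-density special case that any arbitrary-profile expression must reproduce is the standard two-flavour MSW result (textbook material, with the usual symbols; this is not the paper's new formula):

```latex
P_{\nu_e \to \nu_\mu}
  = \sin^2 2\theta_m \,
    \sin^2\!\left(\frac{\Delta m^2_m\, L}{4E}\right),
\qquad
\sin^2 2\theta_m
  = \frac{\sin^2 2\theta}
         {\left(\cos 2\theta - A/\Delta m^2\right)^2 + \sin^2 2\theta},
```

with the effective splitting $\Delta m^2_m = \Delta m^2 \sqrt{\left(\cos 2\theta - A/\Delta m^2\right)^2 + \sin^2 2\theta}$ and the matter potential term $A = 2\sqrt{2}\, G_F N_e E$, where $N_e$ is the (here constant) electron number density.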
Accurate characterization of OPVs: Device masking and different solar simulators
Gevorgyan, Suren; Carlé, Jon Eggert; Søndergaard, Roar R.;
2013-01-01
laboratories following rigorous ASTM and IEC standards. This work tries to address some of the issues confronting the standard laboratory in this regard. Solar simulator lamps are investigated for their light field homogeneity and direct versus diffuse components, as well as the correct device area...
Accurate Period Approximation for Any Simple Pendulum Amplitude
XUE De-Sheng; ZHOU Zhao; GAO Mei-Zhen
2012-01-01
Accurate approximate analytical formulae for the pendulum period, composed of a few elementary functions, are constructed for any amplitude. Based on an approximation of the elliptic integral, two new logarithmic formulae for large amplitudes close to 180° are obtained. Considering the trigonometric-function modulation that results from the dependence of the relative error on the amplitude, we realize accurate approximate period expressions for any amplitude between 0° and 180°. A relative error of less than 0.02% is achieved for any amplitude. This kind of modulation is also effective for other large-amplitude logarithmic approximation expressions.
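The exact period these approximations target involves the complete elliptic integral K(k), which can itself be evaluated to machine precision with the arithmetic-geometric mean (AGM), giving a reference to test any elementary-function formula against. This is a standard identity, not the paper's approximation:

```python
import math

# Exact pendulum period: T = T0 * (2/pi) * K(k), with k = sin(theta0/2)
# and T0 = 2*pi*sqrt(L/g). K(k) via the AGM: K(k) = pi / (2 * AGM(1, sqrt(1-k^2))).

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a and b."""
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

def period_ratio(amplitude_deg):
    """Exact T / T0, where T0 is the small-angle period."""
    k = math.sin(math.radians(amplitude_deg) / 2.0)
    K = math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))
    return 2.0 * K / math.pi

small = period_ratio(1.0)     # ~1: small-angle limit
quarter = period_ratio(90.0)  # ~1.18
large = period_ratio(170.0)   # grows logarithmically as amplitude -> 180 deg
```

The logarithmic divergence of `period_ratio` near 180° is exactly why the paper needs dedicated logarithmic formulae for large amplitudes.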
On accurate boundary conditions for a shape sensitivity equation method
Duvigneau, R.; Pelletier, D.
2006-01-01
This paper studies the application of the continuous sensitivity equation method (CSEM) for the Navier-Stokes equations in the particular case of shape parameters. Boundary conditions for shape parameters involve flow derivatives at the boundary. Thus, accurate flow gradients are critical to the success of the CSEM. A new approach is presented to extract accurate flow derivatives at the boundary. High order Taylor series expansions are used on layered patches in conjunction with a constrained least-squares procedure to evaluate accurate first and second derivatives of the flow variables at the boundary, required for Dirichlet and Neumann sensitivity boundary conditions. The flow and sensitivity fields are solved using an adaptive finite-element method. The proposed methodology is first verified on a problem with a closed form solution obtained by the Method of Manufactured Solutions. The ability of the proposed method to provide accurate sensitivity fields for realistic problems is then demonstrated. The flow and sensitivity fields for a NACA 0012 airfoil are used for fast evaluation of the nearby flow over an airfoil of different thickness (NACA 0015).
A Simple and Accurate Method for Measuring Enzyme Activity.
Yip, Din-Yan
1997-01-01
Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…
The value of accurate A/R information.
Freeman, G; Allcorn, S
1985-01-01
The understanding and management of an accounts receivable system in a medical group practice is particularly important to administrators in today's economy. As the authors explain, an accurate information system can provide the medical group with valuable information regarding its financial condition and cash flow collections status, as well as help plan for future funding needs.
Technique to accurately quantify collagen content in hyperconfluent cell culture.
See, Eugene Yong-Shun; Toh, Siew Lok; Goh, James Cho Hong
2008-12-01
Tissue engineering aims to regenerate tissues that can successfully take over the functions of the native tissue when it is damaged or diseased. In most tissues, collagen makes up the bulk component of the extracellular matrix; thus, there is great emphasis on its accurate quantification in tissue engineering. It has already been reported that pepsin digestion is able to solubilize the collagen deposited within the cell layer for accurate quantification of collagen content in cultures, but this method has drawbacks when cultured cells are hyperconfluent. In this condition, pepsin digestion will result in fragments of the cell layers that cannot be completely resolved. These fragments of the undigested cell sheet are visible to the naked eye, which can bias the final results. To the best of our knowledge, there has been no reported method to accurately quantify the collagen content in hyperconfluent cell sheets. Therefore, this study aims to illustrate that sonication is able to aid pepsin digestion of hyperconfluent cell layers of fibroblasts and bone marrow mesenchymal stem cells, to solubilize all the collagen for accurate quantification purposes.
On a more accurate Hardy-Mulholland-type inequality
Bicheng Yang
2016-03-01
By using weight coefficients, techniques of real analysis, and Hermite-Hadamard's inequality, we give a more accurate Hardy-Mulholland-type inequality with multiparameters and a best possible constant factor related to the beta function. The equivalent forms, the reverses, the operator expressions, and some particular cases are also considered.
Improved fingercode alignment for accurate and compact fingerprint recognition
Brown, Dane
2016-05-01
The traditional texture-based fingerprint recognition system known as FingerCode is improved in this work. Texture-based fingerprint recognition methods are generally more accurate than other methods, but at the disadvantage of increased storage...
Accurate analysis of planar metamaterials using the RLC theory
Malureanu, Radu; Lavrinenko, Andrei
2008-01-01
In this work we will present an accurate description of the response of metallic pads using RLC theory. In order to calculate such a response we take into account several factors, including the mutual inductances, a precise formula for determining the capacitance, and also the pads' resistance, considering the...
$H_{2}^{+}$ ion in a strong magnetic field: an accurate calculation
López, J C; Turbiner, A V
1997-01-01
Using a unique trial function we perform an accurate calculation of the ground state $1\sigma_g$ of the hydrogenic molecular ion $H^+_2$ in a constant uniform magnetic field ranging from 0 to $10^{13}$ G. We show that this trial function also makes it possible to study the negative-parity ground state $1\sigma_u$.
BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...
Accurate and Simple Calibration of DLP Projector Systems
Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus
2014-01-01
Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available which require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively and most current methods...
Accurate segmentation of dense nanoparticles by partially discrete electron tomography
Roelandts, T., E-mail: tom.roelandts@ua.ac.be [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Batenburg, K.J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, 1098 XG Amsterdam (Netherlands); Biermans, E. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Kuebel, C. [Institute of Nanotechnology, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Sijbers, J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium)
2012-03-15
Accurate segmentation of nanoparticles within various matrix materials is a difficult problem in electron tomography. Due to artifacts related to image series acquisition and reconstruction, global thresholding of reconstructions computed by established algorithms, such as weighted backprojection or SIRT, may result in unreliable and subjective segmentations. In this paper, we introduce the Partially Discrete Algebraic Reconstruction Technique (PDART) for computing accurate segmentations of dense nanoparticles of constant composition. The particles are segmented directly by the reconstruction algorithm, while the surrounding regions are reconstructed using continuously varying gray levels. As no properties are assumed for the other compositions of the sample, the technique can be applied to any sample where dense nanoparticles must be segmented, regardless of the surrounding compositions. For both experimental and simulated data, it is shown that PDART yields significantly more accurate segmentations than those obtained by optimal global thresholding of the SIRT reconstruction. Highlights: (1) We present a novel reconstruction method for partially discrete electron tomography. (2) It accurately segments dense nanoparticles directly during reconstruction. (3) The gray level to use for the nanoparticles is determined objectively. (4) The method expands the set of samples for which discrete tomography can be applied.
Fast, Accurate and Detailed NoC Simulations
Wolkotte, P.T.; Hölzenspies, P.K.F.; Smit, G.J.M.; Kellenberger, P.
2007-01-01
Network-on-Chip (NoC) architectures have a wide variety of parameters that can be adapted to the designer's requirements. Fast exploration of this parameter space is only possible at a high level, and several methods have been proposed. Cycle- and bit-accurate simulation is necessary when the actual r
Novel multi-beam radiometers for accurate ocean surveillance
Cappellin, C.; Pontoppidan, K.; Nielsen, P. H.
2014-01-01
Novel antenna architectures for real aperture multi-beam radiometers providing high resolution and high sensitivity for accurate sea surface temperature (SST) and ocean vector wind (OVW) measurements are investigated. On the basis of the radiometer requirements set for future SST/OVW missions...
Practical schemes for accurate forces in quantum Monte Carlo
Moroni, S.; Saccani, S.; Filippi, Claudia
2014-01-01
While the computation of interatomic forces has become a well-established practice within variational Monte Carlo (VMC), the use of the more accurate Fixed-Node Diffusion Monte Carlo (DMC) method is still largely limited to the computation of total energies on structures obtained at a lower level of
Accurate eye center location through invariant isocentric patterns
Valenti, R.; Gevers, T.
2012-01-01
Locating the center of the eyes allows for valuable information to be captured and used in a wide range of applications. Accurate eye center location can be determined using commercial eye-gaze trackers, but additional constraints and expensive hardware make these existing solutions unattractive and
Creating a Culture of Accurate and Precise Data.
Bergren, Martha Dewey; Maughan, Erin D; Johnson, Kathleen H; Wolfe, Linda C; Watts, H Estelle S; Cole, Marjorie
2017-01-01
There are many stakeholders for school health data. Each one has a stake in the quality and accuracy of the health data collected and reported in schools. The joint NASN and NASSNC national school nurse data set initiative, Step Up & Be Counted!, heightens the need to assure accurate and precise data. The use of a standardized terminology allows the data on school health care delivered in local schools to be aggregated for use at the local, state, and national levels. The use of uniform terminology demands that data elements be defined and that accurate and reliable data are entered into the database. Barriers to accurate data are misunderstanding of accurate data needs, student caseloads that exceed the national recommendations, lack of electronic student health records, and electronic student health records that do not collect the indicators using the standardized terminology or definitions. The quality of the data that school nurses report and share has an impact at the personal, district, state, and national levels and influences the confidence and quality of the decisions made using that data.
Modeling Battery Behavior for Accurate State-of-Charge Indication
Pop, V.; Bergveld, H.J.; Veld, op het J.H.G.; Regtien, P.P.L.; Danilov, D.; Notten, P.H.L.
2006-01-01
Li-ion is the most commonly used battery chemistry in portable applications nowadays. Accurate state-of-charge (SOC) and remaining run-time indication for portable devices is important for the user's convenience and to prolong the lifetime of batteries. A new SOC indication system, combining the ele
Compact and Accurate Turbocharger Modelling for Engine Control
Sorenson, Spencer C; Hendricks, Elbert; Magnússon, Sigurjón
2005-01-01
(Engine Control Unit) as a table. This method uses a great deal of memory space and often requires on-line interpolation and thus a large amount of CPU time. In this paper a more compact, accurate and rapid method of dealing with the compressor modelling problem is presented and is applicable to all...
Speed-of-sound compensated photoacoustic tomography for accurate imaging
Jose, J.; Willemink, G.H.; Steenbergen, W.; Leeuwen, van A.G.J.M.; Manohar, S.
2012-01-01
Purpose: In most photoacoustic (PA) tomographic reconstructions, variations in speed-of-sound (SOS) of the subject are neglected under the assumption of acoustic homogeneity. Biological tissue with spatially heterogeneous SOS cannot be accurately reconstructed under this assumption. The authors pres
Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method
Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey
2013-01-01
Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…
Dynamic weighing for accurate fertilizer application and monitoring
Bergeijk, van J.; Goense, D.; Willigenburg, van L.G.; Speelman, L.
2001-01-01
The mass flow of fertilizer spreaders must be calibrated for the different types of fertilizers used. To obtain accurate fertilizer application, manual calibration of the actual mass flow must be repeated frequently. Automatic calibration is possible by measurement of the actual mass flow, based on
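The dynamic-weighing idea can be sketched simply: the actual mass flow is the (negative) slope of hopper weight versus time. The least-squares line below and all variable names are illustrative assumptions, not the paper's implementation.

```python
# Sketch of automatic calibration by dynamic weighing: estimate the
# actual mass flow as the negative slope of an ordinary least-squares
# line fitted to hopper weight samples over time.

def mass_flow(times, weights):
    """Least-squares slope of weight vs time; flow = -slope (kg/s)."""
    n = len(times)
    t_mean = sum(times) / n
    w_mean = sum(weights) / n
    cov = sum((t - t_mean) * (w - w_mean) for t, w in zip(times, weights))
    var = sum((t - t_mean) ** 2 for t in times)
    return -cov / var

# Hopper losing ~0.5 kg/s with a little measurement noise:
t = [0.0, 1.0, 2.0, 3.0, 4.0]
w = [100.0, 99.51, 99.0, 98.49, 98.0]
flow = mass_flow(t, w)
```

Averaging over a window of samples like this suppresses the vibration and shock noise that makes single-difference estimates of the flow unusable on a moving spreader.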
A Self-Instructional Device for Conditioning Accurate Prosody.
Buiten, Roger; Lane, Harlan
1965-01-01
A self-instructional device for conditioning accurate prosody in second-language learning is described in this article. The Speech Auto-Instructional Device (SAID) is electro-mechanical and performs three functions: SAID (1) presents to the student tape-recorded pattern sentences that are considered standards in prosodic performance; (2) processes…
Bioaccessibility tests accurately estimate bioavailability of lead to quail
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb, we incorporated Pb-contaminated soils or Pb acetate into diets for Japanese quail (Coturnix japonica), fed the quail for 15 days, and ...
Fast and Accurate Residential Fire Detection Using Wireless Sensor Networks
Bahrepour, M.; Meratnia, Nirvana; Havinga, Paul J.M.
2010-01-01
Prompt and accurate residential fire detection is important for on-time fire extinguishing and consequently reducing damages and life losses. To detect fire, sensors are needed to measure the environmental parameters, and algorithms are required to decide about the occurrence of fire. Recently, wireless
Rulison Site corrective action report
NONE
1996-09-01
Project Rulison was a joint US Atomic Energy Commission (AEC) and Austral Oil Company (Austral) experiment, conducted under the AEC's Plowshare Program, to evaluate the feasibility of using a nuclear device to stimulate natural gas production in low-permeability gas-producing geologic formations. The experiment was conducted on September 10, 1969, and consisted of detonating a 40-kiloton nuclear device at a depth of 2,568 m below ground surface (BGS). This Corrective Action Report describes the cleanup of petroleum hydrocarbon- and heavy-metal-contaminated sediments from an old drilling effluent pond and characterization of the mud pits used during drilling of the R-EX well at the Rulison Site. The Rulison Site is located approximately 65 kilometers (40 miles) northeast of Grand Junction, Colorado. The effluent pond was used for the storage of drilling mud during drilling of the emplacement hole for the 1969 gas stimulation test conducted by the AEC. This report also describes the activities performed to determine whether contamination is present in mud pits used during the drilling of well R-EX, the gas production well drilled at the site to evaluate the effectiveness of the detonation in stimulating gas production. The investigation activities described in this report were conducted during the autumn of 1995, concurrent with the cleanup of the drilling effluent pond. This report describes the activities performed during the soil investigation and provides the analytical results for the samples collected during that investigation.
Ahmadian, Alireza; Ay, Mohammad R.; Sarkar, Saeed [Medical Sciences/University of Tehran, Research Center for Science and Technology in Medicine, Tehran (Iran); Medical Sciences/University of Tehran, Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran (Iran); Bidgoli, Javad H. [Medical Sciences/University of Tehran, Research Center for Science and Technology in Medicine, Tehran (Iran); East Tehran Azad University, Department of Electrical and Computer Engineering, Tehran (Iran); Zaidi, Habib [Geneva University Hospital, Division of Nuclear Medicine, Geneva (Switzerland)
2008-10-15
Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis, as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium as high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μmap), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high-CT-number object segmentation using combined region- and boundary-based segmentation and, second, object classification into bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled, followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of generated μmaps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions, depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique
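The piecewise calibration step can be sketched with the commonly used bilinear CT-to-μ conversion; the breakpoint and coefficients below are illustrative assumptions, not the values used in this study.

```python
# Sketch of a piecewise (bilinear) calibration from CT numbers (HU) to
# linear attenuation coefficients at 511 keV, as commonly used in
# CT-based attenuation correction. Coefficients are illustrative only.

MU_WATER_511 = 0.096  # cm^-1, approximate value for water at 511 keV

def hu_to_mu_511(hu, break_hu=0.0, bone_slope=5.1e-5):
    """Water-like scaling below the breakpoint; a shallower bone-like
    slope above it, since bone attenuates relatively less at 511 keV
    than at diagnostic CT energies."""
    if hu <= break_hu:
        return max(MU_WATER_511 * (1.0 + hu / 1000.0), 0.0)
    return MU_WATER_511 + bone_slope * hu

mu_air = hu_to_mu_511(-1000.0)   # ~0 for air
mu_water = hu_to_mu_511(0.0)     # water reference
mu_bone = hu_to_mu_511(1000.0)   # less than a naive single-slope scaling
```

Substituting contrast-medium CT numbers with effective bone values before this conversion, as the SCC algorithm does, prevents the bone branch of the curve from overestimating μ in contrast-enhanced bowel.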
Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work.
Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet
2016-05-01
Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree planting work. Ten-minute seated rest periods were imposed during the workday to estimate the HR thermal component (ΔHRT) per Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step-test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. The VO2 values estimated from measured HR and from corrected HR (thermal component removed) were compared to VO2 measured during work and rest. Varied levels of the HR thermal component (ΔHRTavg range: 0-38 bpm), originating from a wide range of ambient thermal conditions, thermal clothing insulation worn, and physical loads exerted during work, were observed. Using raw HR significantly overestimated measured work VO2 by 30% on average (range: 1%-64%), and 74% of the VO2 prediction error variance was explained by the HR thermal component. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can thus be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which the practitioner can implement easily with inexpensive instruments.
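The correction amounts to subtracting the thermal component from the measured heart rate before applying an individual HR-to-VO2 calibration. A hedged sketch (the linear calibration form and all parameter values are illustrative assumptions, not the study's regression):

```python
def corrected_hr(hr_work, delta_hr_thermal):
    """Remove the thermal component (bpm) from the working heart rate,
    in the spirit of the Vogt et al. approach cited in the abstract."""
    return hr_work - delta_hr_thermal

def estimate_vo2(hr, hr_rest, vo2_rest, slope):
    """Linear HR -> VO2 calibration, e.g. fitted from an individual
    step-test. slope is L/min per bpm; values here are illustrative."""
    return vo2_rest + slope * (hr - hr_rest)
```

With a thermal component of 20 bpm, using the raw HR inflates the VO2 estimate relative to the corrected one, which mirrors the overestimation reported above.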
Towards first-principles based prediction of highly accurate electrochemical Pourbaix diagrams
Zeng, Zhenhua; Chan, Maria; Greeley, Jeff
2015-03-01
Electrochemical Pourbaix diagrams lie at the heart of aqueous electrochemical processes and are central to the identification of stable phases of metals for processes ranging from electrocatalysis to corrosion. Even though standard DFT calculations are potentially powerful tools for the prediction of such Pourbaix diagrams, inherent errors in the description of strongly correlated transition metal (hydr)oxides, together with the neglect of weak van der Waals (vdW) interactions, have limited the reliability of the predictions for even the simplest bulk systems; corresponding predictions for more complex alloy or surface structures are even more challenging. Through the introduction of a Hubbard U correction, the employment of a state-of-the-art van der Waals functional, and the use of pure water as a reference state for the calculations, these errors are systematically corrected. The strong performance is illustrated on a series of bulk transition metal (Mn, Fe, Co and Ni) hydroxides, oxyhydroxides, and binary and ternary oxides, whose thermodynamics of oxidation and reduction are accurately described with standard errors of less than 0.04 eV in comparison with experiment.
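Once corrected free energies are in hand, each bulk Pourbaix boundary reduces to a Nernst-type line in the (U, pH) plane. A minimal sketch using computational-hydrogen-electrode bookkeeping (the reaction stoichiometry and ΔG0 below are illustrative, not values from the study):

```python
import math

KT_LN10 = 0.0257 * math.log(10)  # eV per pH unit at ~298 K

def equilibrium_potential(dg0_eV, n_electrons, n_protons, pH):
    """Pourbaix phase boundary for an oxidation step that releases
    n_protons H+ and n_electrons e-: solve
        dG(U, pH) = dG0 - n_e*U - n_H*kT*ln(10)*pH = 0
    for U (V vs. SHE). dG0 is the corrected DFT reaction free energy."""
    return (dg0_eV - n_protons * KT_LN10 * pH) / n_electrons
```

When one proton accompanies each electron, the boundary has the familiar -59 mV/pH slope; the DFT corrections described above only shift ΔG0, not this bookkeeping.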
Accurate Simulations of Binary Black-Hole Mergers in Force-Free Electrodynamics
Alic, Daniela; Rezzolla, Luciano; Zanotti, Olindo; Jaramillo, Jose Luis
2012-01-01
We provide additional information on our recent study of the electromagnetic emission produced during the inspiral and merger of supermassive black holes when these are immersed in a force-free plasma threaded by a uniform magnetic field. As anticipated in a recent letter, our results show that although a dual-jet structure is present, the associated luminosity is ~ 100 times smaller than the total one, which is predominantly quadrupolar. We here discuss the details of our implementation of the equations in which the force-free condition is not implemented at a discrete level, but rather obtained via a damping scheme which drives the solution to satisfy the correct condition. We show that this is important for a correct and accurate description of the current sheets that can develop in the course of the simulation. We also study in greater detail the three-dimensional charge distribution produced as a consequence of the inspiral and show that during the inspiral it possesses a complex but ordered structure wh...
Fast and accurate solution of the Poisson equation in an immersed setting
Marques, Alexandre Noll; Rosales, Rodolfo Ruben
2014-01-01
We present a fast and accurate algorithm for the Poisson equation in complex geometries, using regular Cartesian grids. We consider a variety of configurations, including Poisson equations with interfaces across which the solution is discontinuous (of the type arising in multi-fluid flows). The algorithm is based on a combination of the Correction Function Method (CFM) and Boundary Integral Methods (BIM). Interface and boundary conditions can be treated in a fast and accurate manner using boundary integral equations, and the associated BIM. Unfortunately, BIM can be costly when the solution is needed everywhere in a grid, e.g. fluid flow problems. We use the CFM to circumvent this issue. The solution from the BIM is used to rewrite the problem as a series of Poisson equations in rectangular domains --- which requires the BIM solution at interfaces/boundaries only. These Poisson equations involve discontinuities at interfaces, of the type that the CFM can handle. Hence we use the CFM to solve them (to high ord...
RNASequel: accurate and repeat tolerant realignment of RNA-seq reads.
Wilson, Gavin W; Stein, Lincoln D
2015-10-15
RNA-seq is a key technology for understanding the biology of the cell because of its ability to profile transcriptional and post-transcriptional regulation at single-nucleotide resolution. Compared to DNA sequencing alignment algorithms, RNA-seq alignment algorithms have a diminished ability to accurately detect and map base pair substitutions, gaps, discordant pairs and repetitive regions. These shortcomings adversely affect experiments that require a high degree of accuracy, notably the ability to detect RNA editing. We have developed RNASequel, a software package that runs as a post-processing step in conjunction with an RNA-seq aligner and systematically corrects common alignment artifacts. Its key innovations are a two-pass splice junction alignment system that includes de novo splice junctions and the use of an empirically determined estimate of the fragment size distribution when resolving read pairs. We demonstrate that RNASequel produces improved alignments when used in conjunction with STAR or TopHat2 on two simulated datasets. We then show that RNASequel improves the identification of adenosine-to-inosine RNA editing sites on biological datasets. This software will be useful in applications requiring the accurate identification of variants in RNA sequencing data, the discovery of RNA editing sites and the analysis of alternative splicing.
A Novel Method for Accurate Operon Predictions in All Sequenced Prokaryotes
Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.; Arkin, Adam P.
2004-12-01
We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.
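The distance component of such a predictor can be sketched as a log-odds score comparing two intergenic-distance models for adjacent same-strand genes. The exponential distributions and mean parameters below are illustrative placeholders for the genome-specific distributions the method fits from sequence data alone:

```python
import math

def operon_log_odds(distance_bp, same_operon_mean=40.0, diff_operon_mean=150.0):
    """Toy log-odds score: genes in the same operon tend to have short
    intergenic distances. Positive score favors 'same operon'. The
    exponential forms and means are assumptions for illustration."""
    def exp_logpdf(x, mean):
        # log density of an exponential distribution with the given mean
        return -math.log(mean) - x / mean
    return (exp_logpdf(distance_bp, same_operon_mean)
            - exp_logpdf(distance_bp, diff_operon_mean))
```

A full predictor would combine this score with the comparative genomic measures mentioned above (e.g. conserved gene adjacency across genomes).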
Rapid and accurate prediction and scoring of water molecules in protein binding sites.
Gregory A Ross
Water plays a critical role in ligand-protein interactions. However, it is still challenging to predict accurately not only where water molecules prefer to bind, but also which of those water molecules might be displaceable. The latter is often seen as a route to optimizing affinity of potential drug candidates. Using a protocol we call WaterDock, we show that the freely available AutoDock Vina tool can be used to predict accurately the binding sites of water molecules. WaterDock was validated using data from X-ray crystallography, neutron diffraction and molecular dynamics simulations and correctly predicted 97% of the water molecules in the test set. In addition, we combined data-mining, heuristic and machine learning techniques to develop probabilistic water molecule classifiers. When applied to WaterDock predictions in the Astex Diverse Set of protein ligand complexes, we could identify whether a water molecule was conserved or displaced to an accuracy of 75%. A second model predicted whether water molecules were displaced by polar groups or by non-polar groups to an accuracy of 80%. These results should prove useful for anyone wishing to undertake rational design of new compounds where the displacement of water molecules is being considered as a route to improved affinity.
Novel micelle PCR-based method for accurate, sensitive and quantitative microbiota profiling
Boers, Stefan A.; Hays, John P.; Jansen, Ruud
2017-01-01
In the last decade, many researchers have embraced 16S rRNA gene sequencing techniques, which has led to a wealth of publications and documented differences in the composition of microbial communities derived from many different ecosystems. However, comparison between different microbiota studies is currently very difficult due to the lack of a standardized 16S rRNA gene sequencing protocol. Here we report on a novel approach employing micelle PCR (micPCR) in combination with an internal calibrator that allows for standardization of microbiota profiles via their absolute abundances. The addition of an internal calibrator allows the researcher to express the resulting operational taxonomic units (OTUs) as a measure of 16S rRNA gene copies by correcting the number of sequences of each individual OTU in a sample for efficiency differences in the NGS process. Additionally, accurate quantification of OTUs obtained from negative extraction control samples allows for the subtraction of contaminating bacterial DNA derived from the laboratory environment or from chemicals/reagents used. Using equimolar synthetic microbial community samples and low-biomass clinical samples, we demonstrate that the calibrated micPCR/NGS methodology possesses a much higher precision and a lower limit of detection compared with traditional PCR/NGS, resulting in more accurate microbiota profiles suitable for multi-study comparison. PMID:28378789
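The internal-calibrator correction described above can be sketched as a simple rescaling: the known number of calibrator copies spiked into the sample fixes a copies-per-read factor, which converts each OTU's read count into absolute 16S rRNA gene copies. A minimal sketch (function and variable names are assumptions for illustration):

```python
def absolute_abundance(otu_reads, calibrator_reads, calibrator_copies_added):
    """Convert OTU read counts to absolute 16S rRNA gene copies using a
    spiked-in internal calibrator of known copy number. Correcting each
    OTU by the same factor compensates for per-sample NGS efficiency."""
    if calibrator_reads == 0:
        raise ValueError("calibrator not detected; sample cannot be calibrated")
    copies_per_read = calibrator_copies_added / calibrator_reads
    return {otu: reads * copies_per_read for otu, reads in otu_reads.items()}
```

Contamination subtraction would then operate on these absolute values, e.g. subtracting the copies measured in a negative extraction control.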
Lee, Ho; Lee, Jeongjin; Shin, Yeong Gil; Lee, Rena; Xing, Lei
2010-06-21
This paper presents a fast and accurate marker-based automatic registration technique for aligning uncalibrated projections taken from a transmission electron microscope (TEM) with different tilt angles and orientations. Most of the existing TEM image alignment methods estimate the similarity between images using the projection model with least-squares metric and guess alignment parameters by computationally expensive nonlinear optimization schemes. Approaches based on the least-squares metric which is sensitive to outliers may cause misalignment since automatic tracking methods, though reliable, can produce a few incorrect trajectories due to a large number of marker points. To decrease the influence of outliers, we propose a robust similarity measure using the projection model with a Gaussian weighting function. This function is very effective in suppressing outliers that are far from correct trajectories and thus provides a more robust metric. In addition, we suggest a fast search strategy based on the non-gradient Powell's multidimensional optimization scheme to speed up optimization as only meaningful parameters are considered during iterative projection model estimation. Experimental results show that our method brings more accurate alignment with less computational cost compared to conventional automatic alignment methods.
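The Gaussian weighting idea can be sketched independently of the TEM-specific projection model: reprojection residuals from outlier marker trajectories receive near-zero weight in the alignment cost, so they barely influence the parameter estimate. The cost form and `sigma` below are illustrative assumptions, not the paper's implementation:

```python
import math

def gaussian_weights(residuals, sigma=1.0):
    """Downweight outliers: residuals far from zero get near-zero weight,
    in contrast to plain least squares where they dominate."""
    return [math.exp(-(r * r) / (2.0 * sigma * sigma)) for r in residuals]

def weighted_sse(residuals, weights):
    """Weighted sum of squared residuals; with Gaussian weights, a few
    incorrect marker trajectories contribute almost nothing."""
    return sum(w * r * r for w, r in zip(weights, residuals))
```

In an iterative scheme (e.g. inside Powell's method), the weights would be recomputed from the current residuals at each projection-model update.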
Accurate and efficient computation of nonlocal potentials based on Gaussian-sum approximation
Exl, Lukas; Mauser, Norbert J.; Zhang, Yong
2016-12-01
We introduce an accurate and efficient method for the numerical evaluation of nonlocal potentials, including the 3D/2D Coulomb, 2D Poisson and 3D dipole-dipole potentials. Our method is based on a Gaussian-sum approximation of the singular convolution kernel combined with a Taylor expansion of the density. Starting from the convolution formulation of the nonlocal potential, for smooth and fast decaying densities, we make full use of the Fourier pseudospectral (plane wave) approximation of the density and a separable Gaussian-sum approximation of the kernel in an interval where the singularity (the origin) is excluded. The potential is separated into a regular integral and a near-field singular correction integral. The first is computed with the Fourier pseudospectral method, while the latter is well resolved utilizing a low-order Taylor expansion of the density. Both parts are accelerated by fast Fourier transforms (FFT). The method is accurate (14-16 digits), efficient (O (Nlog N) complexity), low in storage, easily adaptable to other different kernels, applicable for anisotropic densities and highly parallelizable.
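The core kernel approximation can be illustrated for the 3D Coulomb case: 1/r is exactly a continuous superposition of Gaussians, and trapezoidal quadrature after an exponential substitution turns it into a discrete Gaussian sum that is accurate over a wide range of r. The quadrature parameters below are illustrative, not the paper's:

```python
import math

def gaussian_sum_coulomb(r, h=0.2, s_min=-20.0, s_max=6.0):
    """Discrete Gaussian-sum approximation of the Coulomb kernel 1/r via
        1/r = (2/sqrt(pi)) * int_0^inf exp(-(r*t)**2) dt,   t = exp(s),
    discretized with trapezoidal quadrature in s. With these (assumed)
    truncation parameters it is accurate for r roughly in [0.1, 10]."""
    n = int(round((s_max - s_min) / h))
    total = 0.0
    for k in range(n + 1):
        t = math.exp(s_min + k * h)
        # each term is a Gaussian in r with width set by t
        total += math.exp(-(r * t) ** 2) * t
    return (2.0 / math.sqrt(math.pi)) * h * total
```

Because each Gaussian term is separable in x, y, z, the convolution with the density factorizes, which is what makes the pseudospectral evaluation efficient.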
Accurate and efficient computation of nonlocal potentials based on Gaussian-sum approximation
Exl, Lukas; Zhang, Yong
2015-01-01
We introduce an accurate and efficient method for a class of nonlocal potential evaluations with free boundary condition, including the 3D/2D Coulomb, 2D Poisson and 3D dipolar potentials. Our method is based on a Gaussian-sum approximation of the singular convolution kernel and Taylor expansion of the density. Starting from the convolution formulation, for smooth and fast decaying densities, we make full use of the Fourier pseudospectral (plane wave) approximation of the density and a separable Gaussian-sum approximation of the kernel in an interval where the singularity (the origin) is excluded. Hence, the potential is separated into a regular integral and a near-field singular correction integral, where the first integral is computed with the Fourier pseudospectral method and the latter singular one can be well resolved utilizing a low-order Taylor expansion of the density. Both evaluations can be accelerated by fast Fourier transforms (FFT). The new method is accurate (14-16 digits), efficient ($O(N \lo...
Self-correction coil: operation mechanism of self-correction coil
Hosoyama, K.
1983-06-01
We discuss the operation mechanism of the self-correction coil with a simple model. At the first stage, for the ideal self-correction coil, we calculate the self-inductance L of the self-correction coil and the mutual inductance M between the error field coil and the self-correction coil, and using the model we obtain the current induced in the self-correction coil by the external magnetic error field and the magnetic field induced by the self-correction coil. At the second stage, we extend this calculation to the non-ideal self-correction coil, where we find that the wire distribution of the self-correction coil is important for obtaining a sufficiently high self-correction effect. As a measure of the completeness of the self-correction effect, we introduce the efficiency η of the self-correction coil, defined as the ratio of the magnetic field induced by the self-correction coil to the error field. As examples, we calculate L, M and η for two cases: a single-block approximation of the self-correction coil winding and a two-block approximation. By choosing appropriate angles of the self-correction coil winding, we obtain an efficiency of about 98% for the single-block approximation and 99.8% for the two-block approximation. This means that by using the self-correction coil we can improve the field quality by about two orders of magnitude.
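The induced-current part of this mechanism follows from flux conservation in a shorted superconducting loop. A minimal sketch (L and M as in the abstract; the per-amp field constants are assumed inputs, not quantities from the paper):

```python
def induced_current(M, L, i_error):
    """A shorted self-correction coil keeps its total flux at zero:
    L*I + M*i_error = 0, so the induced current is I = -M*i_error/L,
    opposing the error field that drove it."""
    return -M * i_error / L

def efficiency(M, L, b_scc_per_amp, b_error_per_amp):
    """eta = |field induced by the self-correction coil| / |error field|,
    evaluated for unit error current. b_*_per_amp are the field-per-ampere
    constants of the two coils (illustrative simplification)."""
    return abs(induced_current(M, L, 1.0)) * b_scc_per_amp / b_error_per_amp
```

In this simplified picture, an ideal coil (perfect coupling and matched field constants) gives η = 1; the winding-angle optimization in the abstract is about pushing a real winding's η toward that limit.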
On the importance of having accurate data for astrophysical modelling
Lique, Francois
2016-06-01
The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation for wavelengths ranging from the far infrared to the sub-millimeter, with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data and show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for modelling molecular lines beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star forming conditions, have allowed solving the problem of their respective abundances in cold molecular clouds. I will also present the latest collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present recent work on ortho-para H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.
A simplified method for correcting contaminant concentrations in eggs for moisture loss.
Heinz, Gary H.; Stebbins, Katherine R.; Klimstra, Jon D.; Hoffman, David J.
2009-01-01
We developed a simplified and highly accurate method for correcting contaminant concentrations in eggs for the moisture that is lost from an egg during incubation. To make the correction, one injects water into the air cell of the egg until it overflows. The amount of water injected compensates almost perfectly for the amount of water lost during incubation, or when an egg is left in the nest and dehydrates and deteriorates over time. To validate the new method, we weighed freshly laid chicken (Gallus gallus) eggs and then incubated sets of fertile and dead eggs for either 12 or 19 d. We then injected water into the air cells of these eggs and verified that the weights after water injection were almost identical to the weights of the eggs when they were fresh. The advantages of the new method are its speed, accuracy, and simplicity: it does not require the calculation of a correction factor that has to be applied to each contaminant residue.
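The arithmetic behind the method is simple: the injected water mass restores the egg's fresh wet weight, so the concentration can be computed directly on a fresh-wet-weight basis without a per-egg correction factor. A sketch (units and variable names are assumptions for illustration):

```python
def corrected_concentration(residue_mass_ug, degraded_egg_mass_g, water_injected_g):
    """Contaminant concentration on a fresh-wet-weight basis: the water
    injected into the air cell (until overflow) makes up, almost exactly,
    the moisture lost since laying, restoring the fresh egg mass."""
    fresh_mass_g = degraded_egg_mass_g + water_injected_g
    return residue_mass_ug / fresh_mass_g  # ug contaminant per g fresh egg
```

For example, a 45 g egg that takes 5 g of injected water is treated as a 50 g fresh egg when normalizing the residue mass.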
Ban, Yunyun; Chen, Tianqin; Yan, Jun; Lei, Tingwu
2017-04-01
The measurement of sediment concentration in water is of great importance in soil erosion research and in soil and water loss monitoring systems. The traditional weighing method has long been the foundation of all other measuring methods and of instrument calibration. The development of a new method to replace the traditional oven-drying method is of interest in research and practice for the quick and efficient measurement of sediment concentration, especially in field measurements. A new method is advanced in this study for accurately measuring sediment concentration, based on accurate measurement of the mass of the sediment-water mixture in a confined constant-volume container (CVC). A sediment-laden water sample is put into the CVC to determine its mass before the CVC is filled with water and weighed again for the total mass of the water and sediments in the container. The known volume of the CVC, the mass of the sediment-laden water, and the sediment particle density are used to calculate the mass of water displaced by the sediments, from which the sediment concentration of the sample is calculated. The influence of water temperature was corrected for by measuring the water temperature to determine water density before measurements were conducted. The CVC was used to eliminate the surface tension effect so as to obtain the accurate volume of the water and sediment mixture. Experimental results showed that the method was capable of measuring sediment concentrations from 0.5 up to 1200 kg m-3. A good linear relationship existed between the designed and measured sediment concentrations, with all coefficients of determination greater than 0.999 and an average relative error of less than 0.2%. These results indicate that the new method can measure the full range of sediment concentrations above 0.5 kg m-3 and can replace the traditional oven-drying method as a standard method for evaluating and calibrating other methods.
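The two weighings determine the sediment mass because each kilogram of sediment in the fixed container volume displaces a known mass of water. A minimal sketch of the mass balance (SI units; the quartz density and the concentration-per-mixture-volume convention are assumptions, not necessarily the paper's exact definitions):

```python
def sediment_concentration(m_sample, m_total, volume_cvc,
                           rho_w=1000.0, rho_s=2650.0):
    """Sediment concentration (kg per m^3 of mixture) from two weighings
    of a constant-volume container (CVC). m_sample: mass of the
    sediment-laden sample (kg); m_total: mass after topping the CVC up
    with water (kg); volume_cvc in m^3; rho_s = 2650 is a typical quartz
    particle density (assumed)."""
    # Filling the CVC with pure water would give rho_w * volume_cvc;
    # each kg of sediment displaces rho_w/rho_s kg of water, so the
    # excess mass pins down the sediment mass.
    m_sed = (m_total - rho_w * volume_cvc) / (1.0 - rho_w / rho_s)
    # Volume of the original sample = its water volume + sediment volume
    v_sample = (m_sample - m_sed) / rho_w + m_sed / rho_s
    return m_sed / v_sample
```

Temperature enters only through `rho_w`, which is why the paper corrects the water density before the weighings.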
Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models
Stovgaard Kasper
2010-08-01
Background: Genome sequencing projects have expanded the gap between the amount of known protein sequences and structures. The limitations of current high-resolution structure determination methods make it unlikely that this gap will disappear in the near future. Small-angle X-ray scattering (SAXS) is an established low-resolution method for routinely determining the structure of proteins in solution. The purpose of this study is to develop a method for the efficient calculation of accurate SAXS curves from coarse-grained protein models. Such a method can, for example, be used to construct a likelihood function, which is paramount for structure determination based on statistical inference. Results: We present a method for the efficient calculation of accurate SAXS curves based on the Debye formula and a set of scattering form factors for dummy-atom representations of amino acids. Such a method avoids the computationally costly iteration over all atoms. We estimated the form factors using generated data from a set of high-quality protein structures. No ad hoc scaling or correction factors are applied in the calculation of the curves. Two coarse-grained representations of protein structure were investigated; two scattering bodies per amino acid led to significantly better results than a single scattering body. Conclusion: We show that the obtained point estimates allow the calculation of accurate SAXS curves from coarse-grained protein models. The resulting curves are on par with the current state-of-the-art program CRYSOL, which requires full atomic detail. Our method was also comparable to CRYSOL in recognizing native structures among native-like decoys. As a proof of concept, we combined the coarse-grained Debye calculation with a previously described probabilistic model of protein structure, TorusDBN. This resulted in a significant improvement in decoy recognition performance. In conclusion, the presented method shows great promise for
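The Debye formula at the heart of the method sums pairwise scattering contributions over coarse-grained sites. A minimal sketch with constant form factors (the paper estimates q-dependent per-amino-acid form factors; names and values here are illustrative):

```python
import math

def debye_intensity(q, positions, form_factors):
    """Scattering intensity I(q) from the Debye formula,
        I(q) = sum_ij f_i(q) f_j(q) * sin(q*r_ij) / (q*r_ij),
    with one dummy scattering body per coarse-grained site. Constant
    form factors stand in for the fitted q-dependent ones."""
    n = len(positions)
    total = 0.0
    for i in range(n):
        for j in range(n):
            r = math.dist(positions[i], positions[j])
            # sinc term -> 1 as q*r -> 0 (includes the i == j terms)
            sinc = 1.0 if q * r == 0.0 else math.sin(q * r) / (q * r)
            total += form_factors[i] * form_factors[j] * sinc
    return total
```

Coarse-graining pays off here because the double loop is quadratic in the number of scattering bodies, so two bodies per residue is vastly cheaper than all-atom evaluation.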