Accurate light-time correction due to a gravitating mass
Energy Technology Data Exchange (ETDEWEB)
Ashby, Neil, E-mail: ashby@boulder.nist.go [Department of Physics, University of Colorado, Boulder, CO (United States)]; Bertotti, Bruno [Dipartimento di Fisica Nucleare e Teorica, Universita di Pavia (Italy)]
2010-07-21
This technical paper of mathematical physics arose as an aftermath of the 2002 Cassini experiment (Bertotti et al 2003 Nature 425 374-6), in which the PPN parameter γ was measured with an accuracy σ_γ = 2.3 × 10⁻⁵ and found consistent with the prediction γ = 1 of general relativity. The Orbit Determination Program (ODP) of NASA's Jet Propulsion Laboratory, which was used in the data analysis, is based on an expression (8) for the gravitational delay Δt that differs from the standard formula (2); this difference is of second order in powers of m (the gravitational radius of the Sun), but in Cassini's case it was much larger than the expected order of magnitude m²/b, where b is the distance of closest approach of the ray. Since the ODP does not take into account any other second-order terms, it is necessary, also in view of future more accurate experiments, to revisit the whole problem, to systematically evaluate higher-order corrections and to determine which terms, and why, are larger than the expected value. We note that light propagation in a static spacetime is equivalent to a problem in ordinary geometrical optics; Fermat's action functional at its minimum is just the light-time between the two end points A and B. A new and powerful formulation is thus obtained. This method is closely connected with the much more general approach of Le Poncin-Lafitte et al (2004 Class. Quantum Grav. 21 4463-83), which is based on Synge's world function. Asymptotic power series are necessary to provide a safe and automatic way of selecting which terms to keep at each order. Higher-order approximations to the required quantities, in particular the delay and the deflection, are easily obtained. We also show that in a close superior conjunction, when b is much smaller than the distances of A and B from the Sun, say of order R, the second-order correction has an enhanced part of order m²R/b², which …
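The standard first-order delay (the "formula (2)" referred to above) is the familiar Shapiro logarithm. A minimal numerical sketch, using illustrative Cassini-like geometry (all distance values below are assumed for illustration, not taken from the paper):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
C = 2.998e8          # speed of light, m/s

def shapiro_delay(r_a, r_b, r_ab):
    """First-order one-way gravitational delay (s) between points A and B:
    dt = (2GM/c^3) * ln((r_a + r_b + r_ab) / (r_a + r_b - r_ab)),
    with heliocentric distances r_a, r_b and A-B distance r_ab."""
    return (2 * G * M_SUN / C**3) * math.log((r_a + r_b + r_ab) / (r_a + r_b - r_ab))

AU = 1.496e11
r_a, r_b = 1.0 * AU, 8.43 * AU          # Earth and Saturn distances (assumed)
b = 1.6 * 6.96e8                        # impact parameter ~1.6 solar radii (assumed)
# For a near-grazing ray: r_ab ~ r_a + r_b - b^2*(r_a + r_b)/(2*r_a*r_b)
r_ab = r_a + r_b - b**2 * (r_a + r_b) / (2 * r_a * r_b)
dt = shapiro_delay(r_a, r_b, r_ab)      # on the order of 100 microseconds
```

The enhanced second-order term of order m²R/b² discussed above is what this first-order formula omits.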
Onboard Autonomous Corrections for Accurate IRF Pointing.
Jorgensen, J. L.; Betto, M.; Denver, T.
2002-05-01
filtered GPS updates, a world time clock, astrometric correction tables, and an attitude output transform system, which allow the ASC to deliver the spacecraft attitude relative to the Inertial Reference Frame (IRF) in real time. This paper describes the operation of the onboard autonomy of the ASC, which removes the residuals from the attitude measurements in real time, whereby a timely IRF attitude at the arcsecond level is delivered to the AOCS (or sent to ground). Achievable robustness and accuracy are discussed and compared with in-flight results from the operation of the two Advanced Stellar Compasses (ASCs) flying in LEO onboard the German geo-potential research satellite CHAMP. The ASCs onboard CHAMP are dual-head versions, i.e. each processing unit is attached to two star camera heads. The dual-head configuration is primarily employed to achieve carefree AOCS control with respect to the Sun, Moon and Earth, and to increase the attitude accuracy, but it also enables onboard estimation and removal of thermally generated biases.
Using an eye tracker for accurate eye movement artifact correction
Kierkels, J.J.M.; Riani, J.; Bergmans, J.W.M.; Boxtel, van G.J.M.
2007-01-01
We present a new method to correct eye movement artifacts in electroencephalogram (EEG) data. By using an eye tracker, whose data cannot be corrupted by any electrophysiological signals, an accurate method for correction is developed. The eye-tracker data is used in a Kalman filter to estimate which
Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions
Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara
2012-01-01
This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…
Using BRDFs for accurate albedo calculations and adjacency effect corrections
Energy Technology Data Exchange (ETDEWEB)
Borel, C.C.; Gerstl, S.A.W.
1996-09-01
In this paper the authors discuss two uses of BRDFs in remote sensing: (1) in determining the clear sky top of the atmosphere (TOA) albedo, (2) in quantifying the effect of the BRDF on the adjacency point-spread function and on atmospheric corrections. The TOA spectral albedo is an important parameter retrieved by the Multi-angle Imaging Spectro-Radiometer (MISR). Its accuracy depends mainly on how well one can model the surface BRDF for many different situations. The authors present results from an algorithm which matches several semi-empirical functions to the nine MISR measured BRFs that are then numerically integrated to yield the clear sky TOA spectral albedo in four spectral channels. They show that absolute accuracies in the albedo of better than 1% are possible for the visible and better than 2% in the near infrared channels. Using a simplified extensive radiosity model, the authors show that the shape of the adjacency point-spread function (PSF) depends on the underlying surface BRDFs. The adjacency point-spread function at a given offset (x,y) from the center pixel is given by the integral of transmission-weighted products of BRDF and scattering phase function along the line of sight.
Defect correction and multigrid for an efficient and accurate computation of airfoil flows
Koren, B.
1988-01-01
Results are presented for an efficient solution method for second-order accurate discretizations of the 2D steady Euler equations. The solution method is based on iterative defect correction. Several schemes are considered for the computation of the second-order defect. In each defect correction
International Nuclear Information System (INIS)
Jeong, Hyunjo; Zhang, Shuzeng; Li, Xiongbing; Barnard, Dan
2015-01-01
The accurate measurement of acoustic nonlinearity parameter β for fluids or solids generally requires making corrections for diffraction effects due to finite size geometry of transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and therefore not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations which were developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with less than 5% errors when the exact second harmonic diffraction corrections are used together with the negligible attenuation correction effects on the basis of linear frequency dependence between attenuation coefficients, α₂ ≃ 2α₁.
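The quasilinear relation underlying such β measurements can be illustrated with the lossless plane-wave formula β = 8A₂/(k²zA₁²); this is the textbook form, not the paper's multi-Gaussian beam expressions, and all frequencies and amplitudes below are assumed for illustration:

```python
import math

def beta_plane_wave(a1, a2, k, z):
    """Lossless quasilinear plane-wave estimate: beta = 8*A2 / (k^2 * z * A1^2).
    (Textbook form; the paper's MGB diffraction and attenuation corrections
    modify the measured A1 and A2 before this step.)"""
    return 8.0 * a2 / (k ** 2 * z * a1 ** 2)

beta_true = 3.5                  # illustrative beta for water
f, c0 = 5e6, 1500.0              # 5 MHz tone in water (assumed)
k = 2 * math.pi * f / c0         # wavenumber, rad/m
z = 0.05                         # 50 mm propagation distance (assumed)
a1 = 1e-9                        # fundamental displacement amplitude, m (assumed)
a2 = beta_true * k ** 2 * z * a1 ** 2 / 8.0   # synthesize the second harmonic
beta_est = beta_plane_wave(a1, a2, k, z)      # round trip recovers beta
```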
Allam, Amin
2015-07-14
Motivation: Next-generation sequencing generates large amounts of data affected by errors in the form of substitutions, insertions or deletions of bases. Error correction based on high-coverage information typically improves de novo assembly. Most existing tools can correct substitution errors only; some support insertions and deletions, but accuracy in many cases is low. Results: We present Karect, a novel error correction technique based on multiple alignment. Our approach supports substitution, insertion and deletion errors. It can handle non-uniform coverage as well as moderately covered areas of the sequenced genome. Experiments with data from Illumina, 454 FLX and Ion Torrent sequencing machines demonstrate that Karect is more accurate than previous methods, both in terms of correcting individual-base errors (up to 10% increase in accuracy gain) and post-de novo assembly quality (up to 10% increase in NGA50). We also introduce an improved framework for evaluating the quality of error correction.
Energy Technology Data Exchange (ETDEWEB)
Jeong, Hyunjo, E-mail: hjjeong@wku.ac.kr [Division of Mechanical and Automotive Engineering, Wonkwang University, Iksan, Jeonbuk 570-749 (Korea, Republic of); Zhang, Shuzeng; Li, Xiongbing [School of Traffic and Transportation Engineering, Central South University, Changsha, Hunan 410075 (China); Barnard, Dan [Center for Nondestructive Evaluation, Iowa State University, Ames, IA 50010 (United States)
2015-09-15
The accurate measurement of acoustic nonlinearity parameter β for fluids or solids generally requires making corrections for diffraction effects due to finite size geometry of transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and therefore not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations which were developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with less than 5% errors when the exact second harmonic diffraction corrections are used together with the negligible attenuation correction effects on the basis of linear frequency dependence between attenuation coefficients, α₂ ≃ 2α₁.
Highly accurate fluorogenic DNA sequencing with information theory-based error correction.
Chen, Zitian; Zhou, Wenxiong; Qiao, Shuo; Kang, Li; Duan, Haifeng; Xie, X Sunney; Huang, Yanyi
2017-12-01
Eliminating errors in next-generation DNA sequencing has proved challenging. Here we present error-correction code (ECC) sequencing, a method to greatly improve sequencing accuracy by combining fluorogenic sequencing-by-synthesis (SBS) with an information theory-based error-correction algorithm. ECC embeds redundancy in sequencing reads by creating three orthogonal degenerate sequences, generated by alternate dual-base reactions. This is similar to encoding and decoding strategies that have proved effective in detecting and correcting errors in information communication and storage. We show that, when combined with a fluorogenic SBS chemistry with raw accuracy of 98.1%, ECC sequencing provides single-end, error-free sequences up to 200 bp. ECC approaches should enable accurate identification of extremely rare genomic variations in various applications in biology and medicine.
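The information-theoretic idea of embedding redundancy so that errors can be detected and corrected can be illustrated with a classic Hamming(7,4) code; this is a generic ECC sketch, not the paper's degenerate dual-base encoding:

```python
import numpy as np

# Systematic Hamming(7,4): G = [I4 | P], H = [P^T | I3].
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    """Encode 4 data bits into a 7-bit codeword (mod-2 arithmetic)."""
    return data4 @ G % 2

def correct(recv7):
    """Correct up to one flipped bit: the syndrome H @ r equals the
    H-column of the erroneous position (or zero if no error)."""
    s = H @ recv7 % 2
    r = recv7.copy()
    if s.any():
        err = int(np.argmax((H.T == s).all(axis=1)))
        r[err] ^= 1
    return r

d = np.array([1, 0, 1, 1])
c = encode(d)
r = c.copy()
r[2] ^= 1                        # introduce a single-bit error
decoded = correct(r)[:4]         # systematic code: first 4 bits are the data
```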
Fischer, Michael; Angel, Ross J.
2017-05-01
Density-functional theory (DFT) calculations incorporating a pairwise dispersion correction were employed to optimize the structures of various neutral-framework compounds with zeolite topologies. The calculations used the PBE functional for solids (PBEsol) in combination with two different dispersion correction schemes, the D2 correction devised by Grimme and the TS correction of Tkatchenko and Scheffler. In the first part of the study, a benchmarking of the DFT-optimized structures against experimental crystal structure data was carried out, considering a total of 14 structures (8 all-silica zeolites, 4 aluminophosphate zeotypes, and 2 dense phases). Both PBEsol-D2 and PBEsol-TS showed an excellent performance, improving significantly over the best-performing approach identified in a previous study (PBE-TS). The temperature dependence of lattice parameters and bond lengths was assessed for those zeotypes where the available experimental data permitted such an analysis. In most instances, the agreement between DFT and experiment improved when the experimental data were corrected for the effects of thermal motion and when low-temperature structure data rather than room-temperature structure data were used as a reference. In the second part, a benchmarking against experimental enthalpies of transition (with respect to α-quartz) was carried out for 16 all-silica zeolites. Excellent agreement was obtained with the PBEsol-D2 functional, with the overall error being in the same range as the experimental uncertainty. Altogether, PBEsol-D2 can be recommended as a computationally efficient DFT approach that simultaneously delivers accurate structures and energetics of neutral-framework zeotypes.
BLESS 2: accurate, memory-efficient and fast error correction method.
Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming
2016-08-01
The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when it was executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Freely available at https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Qian, Cheng; Kovalchik, Kevin A; MacLennan, Matthew S; Huang, Xiaohua; Chen, David D Y
2017-06-01
Capillary electrophoresis frontal analysis (CE-FA) can be used to determine the binding affinity of molecular interactions. However, its current data processing method mandates specific requirements on the mobilities of the binding pair in order to obtain accurate binding constants. This work shows that significant errors result when the mobilities of the interacting species do not meet these requirements, so the applicability of CE-FA in many real-world applications becomes questionable. An electrophoretic mobility-based correction method is developed in this work based on the flux of each species. A simulation program and a pair of model compounds are used to verify the new equations and evaluate the effectiveness of the method. Ibuprofen and hydroxypropyl-β-cyclodextrin are used to demonstrate the differences in the binding constant obtained by CE-FA when different calculation methods are used, and the results are compared with those obtained by affinity capillary electrophoresis (ACE). The results suggest that CE-FA, with the mobility-based correction method, can be a generally applicable method for a much wider range of applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.
2017-01-01
Declared North Korean nuclear tests in 2006, 2009, 2013 and 2016 were observed seismically at regional and teleseismic distances. Waveform similarity allows the events to be located relatively with far greater accuracy than the absolute locations can be determined from seismic data alone. There is now significant redundancy in the data given the large number of regional and teleseismic stations that have recorded multiple events, and relative location estimates can be confirmed independently by performing calculations on many mutually exclusive sets of measurements. Using a 1-D global velocity model, the distances between the events estimated using teleseismic P phases are found to be approximately 25 per cent shorter than the distances between events estimated using regional Pn phases. The 2009, 2013 and 2016 events all take place within 1 km of each other and the discrepancy between the regional and teleseismic relative location estimates is no more than about 150 m. The discrepancy is much more significant when estimating the location of the more distant 2006 event relative to the later explosions with regional and teleseismic estimates varying by many hundreds of metres. The relative location of the 2006 event is challenging given the smaller number of observing stations, the lower signal-to-noise ratio and significant waveform dissimilarity at some regional stations. The 2006 event is however highly significant in constraining the absolute locations in the terrain at the Punggye-ri test-site in relation to observed surface infrastructure. For each seismic arrival used to estimate the relative locations, we define a slowness scaling factor which multiplies the gradient of seismic traveltime versus distance, evaluated at the source, relative to the applied 1-D velocity model. A procedure for estimating correction terms which reduce the double-difference time residual vector norms is presented together with a discussion of the associated uncertainty. The modified
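The core of such relative location estimates, differential arrival times projected onto the traveltime gradient (slowness vector) at the source, can be sketched as a small least-squares problem; the station geometry and slowness values below are synthetic, not the actual network:

```python
import numpy as np

# For two nearby events separated by dx (km), the arrival-time difference at
# station i is approximately s_i . dx, where s_i is the horizontal slowness
# vector (traveltime gradient evaluated at the source).
rng = np.random.default_rng(2)
n_sta = 12
az = rng.uniform(0, 2 * np.pi, n_sta)           # station azimuths (synthetic)
slow = rng.uniform(0.06, 0.14, n_sta)           # |grad T|, s/km (Pn-like values)
S = np.column_stack([slow * np.cos(az), slow * np.sin(az)])

dx_true = np.array([1.2, -0.8])                 # true inter-event offset, km
dt = S @ dx_true + rng.normal(0, 1e-3, n_sta)   # differential times + noise

dx_est, *_ = np.linalg.lstsq(S, dt, rcond=None)
# A slowness scaling factor c, as introduced in the abstract, would enter as
# (c * S) @ dx: a biased c stretches or shrinks all estimated distances,
# which is exactly the regional-vs-teleseismic discrepancy described above.
```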
Matsuda, Atsushi; Schermelleh, Lothar; Hirano, Yasuhiro; Haraguchi, Tokuko; Hiraoka, Yasushi
2018-05-15
Correction of chromatic shift is necessary for precise registration of multicolor fluorescence images of biological specimens. New emerging technologies in fluorescence microscopy with increasing spatial resolution and penetration depth have prompted the need for more accurate methods to correct chromatic aberration. However, the amount of chromatic shift of the region of interest in biological samples often deviates from the theoretical prediction because of unknown dispersion in the biological samples. To measure and correct chromatic shift in biological samples, we developed a quadrisection phase correlation approach to computationally calculate translation, rotation, and magnification from reference images. Furthermore, to account for local chromatic shifts, images are split into smaller elements, for which the phase correlation between channels is measured individually and corrected accordingly. We implemented this method in an easy-to-use open-source software package, called Chromagnon, that is able to correct shifts with a 3D accuracy of approximately 15 nm. Applying this software, we quantified the level of uncertainty in chromatic shift correction, depending on the imaging modality used, and for different existing calibration methods, along with the proposed one. Finally, we provide guidelines to choose the optimal chromatic shift registration method for any given situation.
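The phase-correlation step for registering two color channels can be sketched with the standard normalized cross-power spectrum method (a generic sketch of translation estimation, not Chromagnon's quadrisection implementation, which also recovers rotation and magnification):

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Integer (dy, dx) shift that maps `ref` onto `img`, estimated from the
    normalized cross-power spectrum (phase correlation)."""
    F_ref, F_img = np.fft.fft2(ref), np.fft.fft2(img)
    cross = F_img * np.conj(F_ref)
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep only the phase
    corr = np.fft.ifft2(cross).real             # delta peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped indices to signed shifts
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape))

# Synthetic two-channel test: shift a random image by (3, -5) and recover it.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
img = np.roll(ref, shift=(3, -5), axis=(0, 1))
shift = phase_correlation_shift(ref, img)
```

Splitting the images into smaller tiles and running this per tile, as the abstract describes, yields the local shift map.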
Accurate and simple wavefunctions for the helium isoelectronic sequence with correct cusp conditions
Energy Technology Data Exchange (ETDEWEB)
Rodriguez, K V [Departamento de Fisica, Universidad Nacional del Sur and Consejo Nacional de Investigaciones CientIficas y Tecnicas, 8000 BahIa Blanca, Buenos Aires (Argentina); Gasaneo, G [Departamento de Fisica, Universidad Nacional del Sur and Consejo Nacional de Investigaciones CientIficas y Tecnicas, 8000 BahIa Blanca, Buenos Aires (Argentina); Mitnik, D M [Instituto de AstronomIa y Fisica del Espacio, and Departamento de Fisica, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, C C 67, Suc. 28 (C1428EGA) Buenos Aires (Argentina)
2007-10-14
Simple and accurate wavefunctions for the He atom and He-like isoelectronic ions are presented. These functions, the product of hydrogenic one-electron solutions and a fully correlated part, satisfy all the coalescence cusp conditions at the Coulomb singularities. Functions with different numbers of parameters and different degrees of accuracy are discussed. Simple analytic expressions for the wavefunction and the energy, valid for a wide range of nuclear charges, are presented. The wavefunctions are tested, in the case of helium, through the calculations of various cross sections which probe different regions of the configuration space, mostly those close to the two-particle coalescence points.
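The electron-nucleus (Kato) cusp condition that such wavefunctions satisfy, (1/ψ)(dψ/dr) → −Z at the coalescence point, can be checked numerically for the hydrogenic one-electron factor:

```python
import math

def hydrogenic_1s(r, Z):
    """Unnormalized hydrogenic 1s orbital, psi(r) = exp(-Z*r), atomic units."""
    return math.exp(-Z * r)

def cusp_ratio(psi, r0=1e-6, h=1e-7):
    """Central-difference estimate of (dpsi/dr)/psi just off r = 0."""
    d = (psi(r0 + h) - psi(r0 - h)) / (2 * h)
    return d / psi(r0)

Z = 2.0  # helium nuclear charge
ratio = cusp_ratio(lambda r: hydrogenic_1s(r, Z))
# The electron-nucleus cusp condition requires this ratio to approach -Z.
```

The electron-electron coalescence imposes an analogous condition (slope +1/2 in the interelectronic coordinate), which the fully correlated factor handles.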
Rcorrector: efficient and accurate error correction for Illumina RNA-seq reads.
Song, Li; Florea, Liliana
2015-01-01
Next-generation sequencing of cellular RNA (RNA-seq) is rapidly becoming the cornerstone of transcriptomic analysis. However, sequencing errors in the already short RNA-seq reads complicate bioinformatics analyses, in particular alignment and assembly. Error correction methods have been highly effective for whole-genome sequencing (WGS) reads, but are unsuitable for RNA-seq reads, owing to the variation in gene expression levels and alternative splicing. We developed a k-mer based method, Rcorrector, to correct random sequencing errors in Illumina RNA-seq reads. Rcorrector uses a De Bruijn graph to compactly represent all trusted k-mers in the input reads. Unlike WGS read correctors, which use a global threshold to determine trusted k-mers, Rcorrector computes a local threshold at every position in a read. Rcorrector has an accuracy higher than or comparable to existing methods, including the only other method (SEECER) designed for RNA-seq reads, and is more time and memory efficient. With a 5 GB memory footprint for 100 million reads, it can be run on virtually any desktop or server. The software is available free of charge under the GNU General Public License from https://github.com/mourisl/Rcorrector/.
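The trusted-k-mer idea can be illustrated with a toy counter-based corrector; note that Rcorrector itself uses a De Bruijn graph with per-position local thresholds, whereas this sketch uses a single global threshold and only corrects a k-mer's last base:

```python
from collections import Counter

def kmers(seq, k):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def build_counts(reads, k):
    """Count all k-mers in the read set (a stand-in for the compact
    De Bruijn graph of trusted k-mers)."""
    counts = Counter()
    for r in reads:
        counts.update(kmers(r, k))
    return counts

def correct_read(read, counts, k, threshold=2):
    """Greedy toy corrector: where the covering k-mer is untrusted, try the
    single-base substitution at its last position that makes it trusted."""
    read = list(read)
    for i in range(len(read) - k + 1):
        km = "".join(read[i:i + k])
        if counts[km] >= threshold:
            continue
        for base in "ACGT":
            trial = km[:-1] + base
            if counts[trial] >= threshold:
                read[i + k - 1] = base
                break
    return "".join(read)

reads = ["ACGTACGTAC"] * 5 + ["ACGTACGTAT"]   # last read has a final-base error
counts = build_counts(reads, k=4)
fixed = correct_read("ACGTACGTAT", counts, k=4)   # -> "ACGTACGTAC"
```

The variation in expression levels mentioned above is why Rcorrector replaces the fixed `threshold` with one computed locally at each read position.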
Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen
2017-03-01
Reducing overlay error via an accurate APC feedback system is one of the main challenges in high-volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting the overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former and reducing the latter is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high-resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of the system, as new lots have to wait until the previous lot is measured. One solution is to use a less dense overlay sampling scheme and computationally up-sample the data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system, shown in Fig. 1, that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.
Cahuantzi, Roberto; Buckley, Alastair
2017-09-01
Making accurate and reliable measurements of solar irradiance is important for understanding performance in the photovoltaic energy sector. In this paper, we present design details and performance of a number of fibre optic couplers for use in irradiance measurement systems employing remote light sensors applicable for either spectrally resolved or broadband measurement. The angular and spectral characteristics of different coupler designs are characterised and compared with existing state-of-the-art commercial technology. The new coupler designs are fabricated from polytetrafluorethylene (PTFE) rods and operate through forward scattering of incident sunlight on the front surfaces of the structure into an optic fibre located in a cavity to the rear of the structure. The PTFE couplers exhibit up to 4.8% variation in scattered transmission intensity between 425 nm and 700 nm and show minimal specular reflection, making the designs accurate and reliable over the visible region. Through careful geometric optimization near perfect cosine dependence on the angular response of the coupler can be achieved. The PTFE designs represent a significant improvement over the state of the art with less than 0.01% error compared with ideal cosine response for angles of incidence up to 50°.
Dral, Pavlo O.; Owens, Alec; Yurchenko, Sergei N.; Thiel, Walter
2017-06-01
We present an efficient approach for generating highly accurate molecular potential energy surfaces (PESs) using self-correcting, kernel ridge regression (KRR) based machine learning (ML). We introduce structure-based sampling to automatically assign nuclear configurations from a pre-defined grid to the training and prediction sets, respectively. Accurate high-level ab initio energies are required only for the points in the training set, while the energies for the remaining points are provided by the ML model with negligible computational cost. The proposed sampling procedure is shown to be superior to random sampling and also eliminates the need for training several ML models. Self-correcting machine learning has been implemented such that each additional layer corrects errors from the previous layer. The performance of our approach is demonstrated in a case study on a published high-level ab initio PES of methyl chloride with 44 819 points. The ML model is trained on sets of different sizes and then used to predict the energies for tens of thousands of nuclear configurations within seconds. The resulting datasets are utilized in variational calculations of the vibrational energy levels of CH3Cl. By using both structure-based sampling and self-correction, the size of the training set can be kept small (e.g., 10% of the points) without any significant loss of accuracy. In ab initio rovibrational spectroscopy, it is thus possible to reduce the number of computationally costly electronic structure calculations through structure-based sampling and self-correcting KRR-based machine learning by up to 90%.
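Self-correcting KRR, in which each layer is trained on the residuals of the previous one, can be sketched with a closed-form two-layer model on a toy 1-D surface (all hyperparameters and the sin target below are assumed for illustration, not from the methyl chloride PES):

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian (RBF) kernel matrix between row-vector sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, gamma, lam):
    """Closed-form kernel ridge regression: solve (K + lam*I) alpha = y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(Xnew, X, alpha, gamma):
    return rbf_kernel(Xnew, X, gamma) @ alpha

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(60, 1))
y = np.sin(3 * X[:, 0])                      # toy 1-D stand-in for a PES

# Layer 1 fits the energies; layer 2 ("self-correction") fits its residuals.
a1 = krr_fit(X, y, gamma=10.0, lam=1e-6)
resid = y - krr_predict(X, X, a1, gamma=10.0)
a2 = krr_fit(X, resid, gamma=50.0, lam=1e-6)

Xt = rng.uniform(-1, 1, size=(40, 1))
pred = krr_predict(Xt, X, a1, 10.0) + krr_predict(Xt, X, a2, 50.0)
err = np.abs(pred - np.sin(3 * Xt[:, 0])).max()
```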
Zhang, DaDi; Yang, Xiaolong; Zheng, Xiao; Yang, Weitao
2018-04-01
Electron affinity (EA) is the energy released when an additional electron is attached to an atom or a molecule. EA is a fundamental thermochemical property, and it is closely pertinent to other important properties such as electronegativity and hardness. However, accurate prediction of EA is difficult with density functional theory methods. The somewhat large error of the calculated EAs originates mainly from the intrinsic delocalisation error associated with the approximate exchange-correlation functional. In this work, we employ a previously developed non-empirical global scaling correction approach, which explicitly imposes the Perdew-Parr-Levy-Balduz condition to the approximate functional, and achieve a substantially improved accuracy for the calculated EAs. In our approach, the EA is given by the scaling corrected Kohn-Sham lowest unoccupied molecular orbital energy of the neutral molecule, without the need to carry out the self-consistent-field calculation for the anion.
Directory of Open Access Journals (Sweden)
Souichi Telada
2014-07-01
A highly accurate two-color interferometer with automatic correction of the refractive index of air was developed for crustal strain observation. The two-color interferometer, which can measure a geometrical distance of approximately 70 m with a relative resolution of 2 × 10⁻⁹, clearly detected strain changes due to Earth tides despite the optical path being in air. Moreover, a large strain change due to an earthquake could be observed without disturbing the measurement. We demonstrated the advantages of the two-color interferometer in air for geodetic observation.
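The two-color principle, in which the air refractive index cancels between two wavelengths, can be sketched with the standard correction D = L₁ − A(L₂ − L₁), A = (n₁ − 1)/(n₂ − n₁); the index values below are illustrative round numbers, not Edlén-equation values:

```python
def two_color_distance(L1, L2, n1, n2):
    """Two-color correction: D = L1 - A*(L2 - L1), A = (n1 - 1)/(n2 - n1).
    To first order A depends only on air dispersion, not on air density,
    so density fluctuations along the path cancel between the two colors."""
    A = (n1 - 1.0) / (n2 - n1)
    return L1 - A * (L2 - L1)

# Toy refractive indices at two wavelengths (illustrative values):
n_red, n_blue = 1.000271, 1.000285
D_true = 70.0                                     # metres, ~the paper's baseline
L_red, L_blue = n_red * D_true, n_blue * D_true   # measured optical path lengths
D = two_color_distance(L_red, L_blue, n_red, n_blue)   # geometric distance
```

Because A is large (here roughly 19), the second-wavelength measurement must be correspondingly precise, which is the practical challenge of the method.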
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation, namely the nonlinear dependence of phase retardation and birefringence on the SNR, was not overcome. So the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a-posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of the SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and
Khalifé, Maya; Fernandez, Brice; Jaubert, Olivier; Soussan, Michael; Brulon, Vincent; Buvat, Irène; Comtat, Claude
2017-10-01
In brain PET/MR applications, accurate attenuation maps are required for accurate PET image quantification. An implemented attenuation correction (AC) method for brain imaging is the single-atlas approach, which estimates an AC map from an averaged CT template. As an alternative, we propose to use a zero echo time (ZTE) pulse sequence to segment bone, air and soft tissue. A linear relationship between histogram-normalized ZTE intensity and measured CT density in Hounsfield units (HU) in bone was established using a CT-MR database of 16 patients. Continuous AC maps were computed based on the segmented ZTE by setting a fixed linear attenuation coefficient (LAC) for air and soft tissue and by using the linear relationship to generate continuous μ values for the bone. Additionally, for the purpose of comparison, four other AC maps were generated: a ZTE-derived AC map with a fixed LAC for the bone, an AC map based on the single-atlas approach as provided by the PET/MR manufacturer, a soft-tissue-only AC map and, finally, the CT-derived attenuation map used as the gold standard (CTAC). All these AC maps were used with different levels of smoothing for PET image reconstruction with and without time-of-flight (TOF). The subject-specific AC map generated by combining ZTE-based segmentation and linear scaling of the normalized ZTE signal into HU was found to be a good substitute for the measured CTAC map in brain PET/MR when used with a Gaussian smoothing kernel of 4 mm, corresponding to the PET scanner's intrinsic resolution. As expected, TOF reduces AC error regardless of the AC method. The continuous ZTE-AC performed better than the other alternative MR-derived AC methods, reducing the quantification error between the MRAC-corrected PET image and the reference CTAC-corrected PET image.
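The continuous-map construction amounts to simple per-voxel arithmetic once the segmentation exists. The sketch below is an assumption-laden illustration: the slope and intercept are placeholders, not the cohort-fitted coefficients from the paper, and the fixed HU values follow the usual CT conventions.

```python
import numpy as np

def zte_to_hu(zte_norm, bone_mask, air_mask=None,
              slope=-2000.0, intercept=2000.0,
              soft_tissue_hu=0.0, air_hu=-1000.0):
    """Continuous pseudo-CT from a segmented, histogram-normalized ZTE image.

    slope/intercept are hypothetical stand-ins for the fitted linear
    relation between normalized ZTE intensity and CT density in bone;
    air and soft tissue receive fixed values, as described above.
    """
    hu = np.full(zte_norm.shape, soft_tissue_hu)   # default: soft tissue
    if air_mask is not None:
        hu[air_mask] = air_hu                      # fixed air value
    hu[bone_mask] = slope * zte_norm[bone_mask] + intercept  # continuous bone
    return hu
```

The resulting HU map would then be converted to 511 keV linear attenuation coefficients by the reconstruction software's usual bilinear scaling.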
2013-01-01
Background Population stratification is a systematic difference in allele frequencies between subpopulations. This can lead to spurious association findings in the case–control genome-wide association studies (GWASs) used to identify single nucleotide polymorphisms (SNPs) associated with disease-linked phenotypes. Methods such as self-declared ancestry, ancestry informative markers, genomic control, structured association, and principal component analysis are used to assess and correct population stratification, but each has limitations. We provide an alternative technique to address population stratification. Results We propose a novel machine learning method, ETHNOPRED, which uses the genotype and ethnicity data from the HapMap project to learn ensembles of disjoint decision trees, capable of accurately predicting an individual's continental and sub-continental ancestry. To predict an individual's continental ancestry, ETHNOPRED produced an ensemble of 3 decision trees involving a total of 10 SNPs, with 10-fold cross-validation accuracy of 100% using the HapMap II dataset. We extended this model to involve 29 disjoint decision trees over 149 SNPs, and showed that this ensemble has an accuracy of ≥ 99.9%, even if some of those 149 SNP values were missing. On an independent dataset, predominantly of Caucasian origin, our continental classifier showed 96.8% accuracy and improved genomic control's λ from 1.22 to 1.11. We next used the HapMap III dataset to learn classifiers to distinguish European subpopulations (North-Western vs. Southern), East Asian subpopulations (Chinese vs. Japanese), African subpopulations (Eastern vs. Western), North American subpopulations (European vs. Chinese vs. African vs. Mexican vs. Indian), and Kenyan subpopulations (Luhya vs. Maasai). In these cases, ETHNOPRED produced ensembles of 3, 39, 21, 11, and 25 disjoint decision trees, respectively, involving 31, 502, 526, 242 and 271 SNPs, with 10-fold cross-validation accuracy of
DEFF Research Database (Denmark)
Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin
2016-01-01
[This corrects the article DOI: 10.1371/journal.ppat.1005000.] [This corrects the article DOI: 10.1371/journal.ppat.1005740.] [This corrects the article DOI: 10.1371/journal.ppat.1005679.]
Good, Nicholas; Mölter, Anna; Peel, Jennifer L; Volckens, John
2017-07-01
The AE51 micro-Aethalometer (microAeth) is a popular and useful tool for assessing personal exposure to particulate black carbon (BC). However, few users of the AE51 are aware that its measurements are biased low (by up to 70%) due to the accumulation of BC on the filter substrate over time; previous studies of personal black carbon exposure are likely to have suffered from this bias. Although methods to correct for bias in micro-Aethalometer measurements of particulate black carbon have been proposed, these methods have not been verified in the context of personal exposure assessment. Here, five Aethalometer loading correction equations based on published methods were evaluated. Laboratory-generated aerosols of varying black carbon content (ammonium sulfate, Aquadag and NIST diesel particulate matter) were used to assess the performance of these methods. Filters from a personal exposure assessment study were also analyzed to determine how the correction methods performed for real-world samples. Standard correction equations produced correction factors with root mean square errors of 0.10 to 0.13 and mean bias within ±0.10. An optimized correction equation is also presented, along with sampling recommendations for minimizing bias when assessing personal exposure to BC using the AE51 micro-Aethalometer.
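Published loading corrections of the kind evaluated here are typically simple functions of the filter attenuation (ATN). One widely cited family has the multiplicative form BC_corr = (1 + k·ATN)·BC_raw (Virkkula-style); the sketch below uses that form with a purely illustrative default k, which in practice must be fitted per instrument and aerosol type.

```python
def correct_loading(bc_raw, atn, k=0.0074):
    """Filter-loading correction of the form BC_corr = (1 + k * ATN) * BC_raw.

    A Virkkula-style correction: as BC accumulates on the filter, the raw
    reading is biased low, and the bias grows with attenuation ATN.  The
    default k is illustrative only; it is aerosol- and unit-dependent.
    """
    return (1.0 + k * atn) * bc_raw
```

At ATN = 0 (a fresh filter) the correction is the identity, and the correction factor grows linearly as the spot darkens.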
Dobbe, J G G; Vroemen, J C; Strackee, S D; Streekstra, G J
2014-11-01
Preoperative three-dimensional planning methods have been described extensively. However, transferring the virtual plan to the patient is often challenging. In this report, we describe the management of a severely malunited distal radius fracture using a patient-specific plate for accurate spatial positioning and fixation. Twenty months postoperatively the patient shows almost painless reconstruction and a nearly normal range of motion.
Marzouka, Nour-Al-Dain; Nordlund, Jessica; Bäcklin, Christofer L; Lönnerholm, Gudmar; Syvänen, Ann-Christine; Carlsson Almlöf, Jonas
2016-04-01
The Illumina Infinium HumanMethylation450 BeadChip (450k) is widely used for the evaluation of DNA methylation levels in large-scale datasets, particularly in cancer. The 450k design allows copy number variant (CNV) calling using existing bioinformatics tools. However, in cancer samples, numerous large-scale aberrations cause shifting in the probe intensities and thereby may result in erroneous CNV calling. Therefore, a baseline correction process is needed. We suggest the maximum peak of probe segment density to correct the shift in the intensities in cancer samples. CopyNumber450kCancer is implemented as an R package. The package with examples can be downloaded at http://cran.r-project.org. Contact: nour.marzouka@medsci.uu.se. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
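The maximum-peak-of-density idea can be illustrated in a few lines. This is a stand-in for the R package's behavior, not a port of it: it estimates the mode of the per-segment intensity distribution with a histogram and subtracts it, so that the (presumed copy-neutral) majority state defines the baseline.

```python
import numpy as np

def baseline_correct(segment_values, bins=512):
    """Shift copy-number segment values so the densest peak sits at zero.

    Illustrative sketch of density-peak baseline correction: the histogram
    mode approximates the intensity of the dominant (neutral) state, which
    large aberrations would otherwise drag away from zero.
    """
    counts, edges = np.histogram(segment_values, bins=bins)
    i = counts.argmax()
    peak = 0.5 * (edges[i] + edges[i + 1])     # center of the densest bin
    return segment_values - peak
```

A kernel density estimate would give a smoother mode than a histogram; the histogram keeps the sketch dependency-free.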
Bural, Gonca; Torigian, Drew; Basu, Sandip; Houseni, Mohamed; Zhuge, Ying; Rubello, Domenico; Udupa, Jayaram; Alavi, Abass
2015-12-01
Our aim was to explore a novel quantitative method [based upon an MRI-based image segmentation that allows actual calculation of grey matter, white matter and cerebrospinal fluid (CSF) volumes] for overcoming the difficulties associated with conventional techniques for measuring the actual metabolic activity of the grey matter. We included four patients with normal brain MRI and fluorine-18 fluorodeoxyglucose (18F-FDG)-PET scans (two women and two men; mean age 46±14 years) in this analysis. The time interval between the two scans was 0-180 days. We calculated the volumes of grey matter, white matter and CSF by using a novel segmentation technique applied to the MRI images. We measured the mean standardized uptake value (SUV) representing the whole metabolic activity of the brain from the 18F-FDG-PET images. We also calculated the white matter SUV from the upper transaxial slices (centrum semiovale) of the 18F-FDG-PET images. The whole brain volume was calculated by summing the volumes of the white matter, grey matter and CSF. The global cerebral metabolic activity was calculated by multiplying the mean SUV by the total brain volume. The whole brain white matter metabolic activity was calculated by multiplying the mean SUV for the white matter by the white matter volume. The global cerebral metabolic activity reflects only the grey matter and the white matter, as that of the CSF is zero. We subtracted the global white matter metabolic activity from that of the whole brain, leaving the global grey matter metabolism alone. We then divided the grey matter global metabolic activity by the grey matter volume to accurately calculate the SUV for the grey matter alone. The brain volumes ranged between 1546 and 1924 ml. The mean SUV for total brain was 4.8-7. Total metabolic burden of the brain ranged from 5565 to 9617. The mean SUV for white matter was 2.8-4.1. On the basis of these measurements we generated the grey matter SUV, which ranged from 8.1 to 11.3. The
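The subtraction-and-division arithmetic described above reduces to a few lines; the function name and argument order here are illustrative, not the authors' code.

```python
def grey_matter_suv(suv_brain, v_brain, suv_wm, v_wm, v_gm):
    """Grey-matter SUV from whole-brain and white-matter measurements.

    Global activity = mean SUV x total volume; subtracting the white-matter
    share (CSF contributes zero) and dividing by grey-matter volume yields
    the grey-matter SUV, as described in the text above.
    """
    global_activity = suv_brain * v_brain   # total metabolic burden
    wm_activity = suv_wm * v_wm             # white-matter contribution
    return (global_activity - wm_activity) / v_gm
```

With, say, a mean brain SUV of 5.0 over 1500 ml, a white-matter SUV of 3.0 over 500 ml, and 700 ml of grey matter, the grey-matter SUV comes out around 8.6, inside the 8.1-11.3 range reported.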
2002-01-01
Tile Calorimeter modules stored at CERN. The larger modules belong to the Barrel, whereas the smaller ones are for the two Extended Barrels. (The article was about the completion of the 64 modules for one of the latter.) The photo on the first page of the Bulletin n°26/2002, from 24 July 2002, illustrating the article «The ATLAS Tile Calorimeter gets into shape» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.
Elsayed, Mustafa M A; Vierl, Ulrich; Cevc, Gregor
2009-06-01
Potentiometric lipid membrane-water partition coefficient studies have to date neglected electrostatic interactions, which leads to incorrect results. We herein show how to account properly for such interactions in potentiometric data analysis. We conducted potentiometric titration experiments to determine lipid membrane-water partition coefficients of four illustrative drugs: bupivacaine, diclofenac, ketoprofen and terbinafine. We then analyzed the results conventionally and with an improved analytical approach that considers Coulombic electrostatic interactions. The new analytical approach delivers robust partition coefficient values. In contrast, the conventional data analysis yields apparent partition coefficients of the ionized drug forms that depend on experimental conditions (mainly the lipid-drug ratio and the bulk ionic strength). This is due to changing electrostatic effects originating from bound drug and/or lipid charges. A membrane comprising 10 mol-% mono-charged molecules in a 150 mM (monovalent) electrolyte solution yields results that differ by a factor of 4 from those of uncharged membranes. Allowance for Coulombic electrostatic interactions is a prerequisite for accurate and reliable determination of lipid membrane-water partition coefficients of ionizable drugs from potentiometric titration data. The same conclusion applies to all analytical methods involving drug binding to a surface.
Directory of Open Access Journals (Sweden)
2012-01-01
Full Text Available Regarding Gorelik, G., & Shackelford, T. K. (2011). Human sexual conflict from molecules to culture. Evolutionary Psychology, 9, 564–587: The authors wish to correct an omission in citation to the existing literature. In the final paragraph on p. 570, we neglected to cite Burch and Gallup (2006) [Burch, R. L., & Gallup, G. G., Jr. (2006). The psychobiology of human semen. In S. M. Platek & T. K. Shackelford (Eds.), Female infidelity and paternal uncertainty (pp. 141–172). New York: Cambridge University Press.]. Burch and Gallup (2006) reviewed the relevant literature on FSH and LH discussed in this paragraph, and should have been cited accordingly. In addition, Burch and Gallup (2006) should have been cited as the originators of the hypothesis regarding the role of FSH and LH in the semen of rapists. The authors apologize for this oversight.
2002-01-01
The photo on the second page of the Bulletin n°48/2002, from 25 November 2002, illustrating the article «Spanish Visit to CERN» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption. The Spanish delegation, accompanied by Spanish scientists at CERN, also visited the LHC superconducting magnet test hall (photo). From left to right: Felix Rodriguez Mateos of CERN LHC Division, Josep Piqué i Camps, Spanish Minister of Science and Technology, César Dopazo, Director-General of CIEMAT (Spanish Research Centre for Energy, Environment and Technology), Juan Antonio Rubio, ETT Division Leader at CERN, Manuel Aguilar-Benitez, Spanish Delegate to Council, Manuel Delfino, IT Division Leader at CERN, and Gonzalo León, Secretary-General of Scientific Policy to the Minister.
Directory of Open Access Journals (Sweden)
2014-01-01
Full Text Available Regarding Tagler, M. J., and Jeffers, H. M. (2013). Sex differences in attitudes toward partner infidelity. Evolutionary Psychology, 11, 821–832: The authors wish to correct values in the originally published manuscript. Specifically, incorrect 95% confidence intervals around the Cohen's d values were reported on page 826 of the manuscript, where we reported the within-sex simple effects for the significant Participant Sex × Infidelity Type interaction (first paragraph) and for attitudes toward partner infidelity (second paragraph). Corrected values are presented in bold below. The authors would like to thank Dr. Bernard Beins at Ithaca College for bringing these errors to our attention. Men rated sexual infidelity significantly more distressing (M = 4.69, SD = 0.74) than they rated emotional infidelity (M = 4.32, SD = 0.92), F(1, 322) = 23.96, p < .001, d = 0.44, 95% CI [0.23, 0.65], but there was little difference between women's ratings of sexual (M = 4.80, SD = 0.48) and emotional infidelity (M = 4.76, SD = 0.57), F(1, 322) = 0.48, p = .29, d = 0.08, 95% CI [−0.10, 0.26]. As expected, men rated sexual infidelity (M = 1.44, SD = 0.70) more negatively than they rated emotional infidelity (M = 2.66, SD = 1.37), F(1, 322) = 120.00, p < .001, d = 1.12, 95% CI [0.85, 1.39]. Although women also rated sexual infidelity (M = 1.40, SD = 0.62) more negatively than they rated emotional infidelity (M = 2.09, SD = 1.10), this difference was not as large and thus in the evolutionary-theory-supportive direction, F(1, 322) = 72.03, p < .001, d = 0.77, 95% CI [0.60, 0.94].
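Confidence intervals around Cohen's d of the kind corrected here follow a standard recipe from summary statistics. The sketch below uses the common independent-groups, large-sample approximation; the paper's within-subject effects would strictly require the repeated-measures variant, so treat this only as an illustration of the general computation.

```python
import math

def cohens_d_ci(m1, sd1, n1, m2, sd2, n2, z=1.96):
    """Cohen's d for two groups with an approximate 95% confidence interval.

    Pooled-SD d plus the usual large-sample variance approximation
    var(d) ~ (n1+n2)/(n1*n2) + d^2 / (2*(n1+n2)).  Independent-groups
    form only; within-subject designs need a different variance.
    """
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)
```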
Shuxia, ZHAO; Lei, ZHANG; Jiajia, HOU; Yang, ZHAO; Wangbao, YIN; Weiguang, MA; Lei, DONG; Liantuan, XIAO; Suotang, JIA
2018-03-01
The chemical composition of alloys directly determines their mechanical behaviors and application fields. Accurate and rapid analysis of both major and minor elements in alloys plays a key role in metallurgy quality control and material classification processes. A quantitative calibration-free laser-induced breakdown spectroscopy (CF-LIBS) analysis method, which carries out combined correction of plasma temperature and spectral intensity by using a second-order iterative algorithm and two boundary standard samples, is proposed to realize accurate composition measurements. Experimental results show that, compared to conventional CF-LIBS analysis, the relative errors for the major elements Cu and Zn and the minor element Pb in copper-lead alloys have been reduced from 12%, 26% and 32% to 1.8%, 2.7% and 13.4%, respectively. The measurement accuracy for all elements has been improved substantially.
Lin, Longting; Bivard, Andrew; Kleinig, Timothy; Spratt, Neil J; Levi, Christopher R; Yang, Qing; Parsons, Mark W
2018-04-01
This study aimed to assess how the ischemic core measured by perfusion computed tomography (CTP) is affected by the delay and dispersion effect. Ischemic stroke patients who had CTP performed within 6 hours of onset were included. The CTP data were processed twice, generating standard cerebral blood flow (sCBF) and delay- and dispersion-corrected CBF (ddCBF) maps, respectively. Ischemic core measured by the sCBF and ddCBF was then compared at the relative threshold core were used: acute diffusion-weighted imaging or 24-hour diffusion-weighted imaging in patients with complete recanalization. The difference in core volume between CTP and diffusion-weighted imaging was estimated by the Mann-Whitney U test and limits of agreement. Patients were also classified into favorable and unfavorable CTP patterns. The imaging-pattern classification by sCBF and ddCBF was compared by the χ2 test; their respective ability to predict good clinical outcome (3-month modified Rankin Scale score) was tested in logistic regression. Fifty-five patients were included in this study. Median sCBF ischemic core volume was 38.5 mL (12.4-61.9 mL), much larger than the median core volume of 17.2 mL measured by ddCBF (interquartile range, 5.5-38.8; P core much closer to diffusion-weighted imaging core references, with a mean volume difference of -0.1 mL (95% limits of agreement, -25.4 to 25.2; P = 0.97) and 16.7 mL (95% limits of agreement, -21.7 to 55.2; P core measurement on CTP. © 2018 American Heart Association, Inc.
International Nuclear Information System (INIS)
Slagmolen, Pieter; Hermans, Jeroen; Maes, Frederik; Budiharto, Tom; Haustermans, Karin; Heuvel, Frank van den
2010-01-01
Purpose: A robust and accurate method that allows the automatic detection of fiducial markers in MV and kV projection image pairs is proposed. The method allows automatic correction for inter- or intrafraction motion. Methods: Intratreatment MV projection images are acquired during each of five treatment beams of prostate cancer patients with four implanted fiducial markers. The projection images are first preprocessed using a series of marker-enhancing filters. 2D candidate marker locations are generated for each of the filtered projection images, and 3D candidate marker locations are reconstructed by pairing candidates in subsequent projection images. The correct marker positions are retrieved in 3D by the minimization of a cost function that combines 2D image intensity and 3D geometric or shape information for the entire marker configuration simultaneously. This optimization problem is solved using dynamic programming such that the globally optimal configuration for all markers is always found. Translational interfraction and intrafraction prostate motion and the required patient repositioning are assessed from the position of the centroid of the detected markers in different MV image pairs. The method was validated on a phantom using CT as ground truth and on clinical data sets of 16 patients using manual marker annotations as ground truth. Results: The entire setup was confirmed to be accurate to around 1 mm by the phantom measurements. The reproducibility of the manual marker selection was less than 3.5 pixels in the MV images. In patient images, markers were correctly identified in at least 99% of the cases for anterior projection images and 96% of the cases for oblique projection images. The average marker detection accuracy was 1.4±1.8 pixels in the projection images. The centroid of all four reconstructed marker positions in 3D was positioned within 2 mm of the ground-truth position in 99.73% of all cases. Detecting four markers in a pair of MV images
Krawczynski, M.; McLean, N.
2017-12-01
One of the most accurate and useful ways of determining the age of rocks that formed more than about 500,000 years ago is uranium-lead (U-Pb) geochronology. Earth scientists use U-Pb geochronology to put together the geologic history of entire regions and of specific events, like the mass extinction of all non-avian dinosaurs about 66 million years ago or the catastrophic eruptions of supervolcanoes like the one currently centered at Yellowstone. The mineral zircon is often utilized because it is abundant, durable, and readily incorporates uranium into its crystal structure. But it excludes thorium, whose isotope 230Th is part of the naturally occurring isotopic decay chain from 238U to 206Pb. Calculating a date from the relative abundances of 206Pb and 238U therefore requires a correction for the missing 230Th. Existing experimental and observational constraints on the way U and Th behave when zircon crystallizes from a melt are not known precisely enough, and thus the uncertainty in dates introduced by the 'Th correction' is currently one of the largest sources of systematic error in determining dates. Here we present preliminary results of our study of actinide partitioning between zircon and melt. Experiments have been conducted to grow zircon from melts doped with U and Th that mimic natural magmas over a range of temperatures and compositions. Synthetic zircons are separated from their coexisting glass, and the abundance and distribution of U and Th in each phase are determined using high-precision, high-spatial-resolution techniques. These preliminary experiments are the beginning of a study that will result in precise determination of the zircon/melt uranium and thorium partition coefficients under a wide variety of naturally occurring conditions. These data will be fit to a multidimensional surface using maximum likelihood regression techniques, so that the ratio of partition coefficients can be calculated for any set of known parameters. The results of
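The Th correction itself has a standard closed form (after Schärer): the measured 206Pb/238U ratio is adjusted by (λ238/λ230)(1 − f), where f is the ratio of Th/U in the zircon to Th/U in the melt, exactly the partitioning quantity these experiments aim to pin down. The sketch below is an illustrative implementation of that common form, not the authors' code, with approximate decay constants.

```python
import math

LAMBDA_238 = 1.55125e-10   # 238U decay constant, 1/yr
LAMBDA_230 = 9.17e-6       # 230Th decay constant, 1/yr (approximate)

def th_corrected_age(pb206_u238_measured, f_thu):
    """206Pb/238U age (yr) corrected for the initial 230Th deficit.

    f_thu = (Th/U)_zircon / (Th/U)_melt.  For zircon f < 1, so the raw
    ratio under-counts 206Pb and the corrected age is slightly older.
    Common textbook form of the correction; treat as illustrative.
    """
    ratio = pb206_u238_measured + (LAMBDA_238 / LAMBDA_230) * (1.0 - f_thu)
    return math.log(1.0 + ratio) / LAMBDA_238
```

In the limit f = 0 (no Th incorporated at all), the age shift approaches one 230Th mean life, roughly 0.11 Myr, which is why the uncertainty in f matters for high-precision dates.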
Energy Technology Data Exchange (ETDEWEB)
Hoff, M; Rane-Levandovsky, S; Andre, J [University of Washington, Seattle, WA (United States)
2016-06-15
Purpose: Traditional arterial spin labeling (ASL) acquisitions with echo planar imaging (EPI) readouts suffer from image distortion due to susceptibility effects, compromising ASL’s ability to accurately quantify cerebral blood flow (CBF) and assess disease-specific patterns associated with CBF abnormalities. Phase labeling for additional coordinate encoding (PLACE) can remove image distortion; our goal is to apply PLACE to improve the quantitative accuracy of ASL CBF in humans. Methods: Four subjects were imaged on a 3T Philips Ingenia scanner using a 16-channel receive coil with a 21/21/10cm (frequency/phase/slice direction) field-of-view. An ASL sequence with a pseudo-continuous ASL (pCASL) labeling scheme was employed to acquire thirty dynamics of single-shot EPI data, with control and label datasets for all dynamics, and PLACE gradients applied on odd dynamics. Parameters included a post-labeling delay = 2s, label duration = 1.8s, flip angle = 90°, TR/TE = 5000/23.5ms, and 2.9/2.9/5.0mm (frequency/phase/slice direction) voxel size. “M0” EPI-reference images and T1-weighted spin-echo images with 0.8/1.0/3.3mm (frequency/phase/slice directions) voxel size were also acquired. Complex conjugate image products of pCASL odd and even dynamics were formed, a linear phase ramp applied, and data expanded and smoothed. Data phase was extracted to map control, label, and M0 magnitude image pixels to their undistorted locations, and images were rebinned to original size. All images were corrected for motion artifacts in FSL 5.0. pCASL images were registered to M0 images, and control and label images were subtracted to compute quantitative CBF maps. Results: pCASL image and CBF map distortions were removed by PLACE in all subjects. Corrected images conformed well to the anatomical T1-weighted reference image, and deviations in corrected CBF maps were evident. Conclusion: Eliminating pCASL distortion with PLACE can improve CBF quantification accuracy using minimal
Kosar, Naveen; Mahmood, Tariq; Ayub, Khurshid
2017-12-01
A benchmark study has been carried out to find a cost-effective and accurate method for the bond dissociation energy (BDE) of the carbon-halogen (C-X) bond. The BDE of the C-X bond plays a vital role in chemical reactions, particularly for kinetic barriers and thermochemistry. The compounds (1-16, Fig. 1) with C-X bonds used for the current benchmark study are important reactants in organic, inorganic and bioorganic chemistry. Experimental data for C-X bond dissociation energies are compared with theoretical results. Statistical analysis tools such as root mean square deviation (RMSD), standard deviation (SD), Pearson's correlation (R) and mean absolute error (MAE) are used for comparison. Overall, thirty-one density functionals from eight different classes of density functional theory (DFT), along with Pople and Dunning basis sets, are evaluated. Among the different classes of DFT, the dispersion-corrected range-separated hybrid GGA class along with the 6-31G(d), 6-311G(d), aug-cc-pVDZ and aug-cc-pVTZ basis sets performed best for bond dissociation energy calculation of the C-X bond. ωB97XD shows the best performance, with small deviations (RMSD, SD) and mean absolute error (MAE) and a significant Pearson's correlation (R) when compared to experimental data. ωB97XD along with the Pople basis set 6-311G(d) has RMSD, SD, R and MAE of 3.14 kcal mol-1, 3.05 kcal mol-1, 0.97 and -1.07 kcal mol-1, respectively.
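The four descriptors used in this benchmark are straightforward to compute from paired calculated/experimental values. A minimal sketch (note the reported MAE of -1.07 kcal/mol is signed, so a signed mean error is used here rather than a mean of absolute values):

```python
import numpy as np

def benchmark_stats(calc, expt):
    """RMSD, SD of errors, Pearson's R, and signed mean error between
    calculated and experimental BDEs - the four comparison metrics above."""
    calc = np.asarray(calc, dtype=float)
    expt = np.asarray(expt, dtype=float)
    err = calc - expt
    rmsd = float(np.sqrt(np.mean(err**2)))
    sd = float(np.std(err, ddof=1))          # sample SD of the errors
    r = float(np.corrcoef(calc, expt)[0, 1]) # Pearson's correlation
    me = float(np.mean(err))                 # signed mean error (can be < 0)
    return rmsd, sd, r, me
```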
Directory of Open Access Journals (Sweden)
Yongshuai Jiang
Full Text Available Traditional permutation (TradPerm) tests are usually considered the gold standard for multiple testing corrections. However, they can be difficult to complete for the meta-analyses of genetic association studies based on multiple single nucleotide polymorphism loci, as they depend on individual-level genotype and phenotype data to perform random shuffles, which are not easy to obtain. Most meta-analyses have therefore been performed using summary statistics from previously published studies. To carry out a permutation using only genotype counts without changing the size of the TradPerm P-value, we developed a Monte Carlo permutation (MCPerm) method. First, for each study included in the meta-analysis, we used a two-step hypergeometric distribution to generate a random number of genotypes in cases and controls. We then carried out a meta-analysis using these random genotype data. Finally, we obtained the corrected permutation P-value of the meta-analysis by repeating the entire process N times. We used five real datasets and five simulation datasets to evaluate the MCPerm method, and our results showed the following: (1) MCPerm requires only the summary statistics of the genotype, without the need for individual-level data; (2) Genotype counts generated by our two-step hypergeometric distributions had the same distributions as genotype counts generated by shuffling; (3) MCPerm had almost exactly the same permutation P-values as TradPerm (r = 0.999; P < 2.2e-16); (4) The calculation speed of MCPerm is much faster than that of TradPerm. In summary, MCPerm appears to be a viable alternative to TradPerm, and we have developed it as a freely available R package at CRAN: http://cran.r-project.org/web/packages/MCPerm/index.html.
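The counts-only shuffle at the heart of MCPerm can be sketched for a single 2×3 genotype table (AA, Aa, aa in cases vs. controls). The authors' package is in R; this numpy version, with hypothetical names, only mirrors the idea: drawing case genotype counts by sequential hypergeometric sampling is equivalent to shuffling case/control labels over the pooled genotypes, but needs no individual-level data.

```python
import numpy as np

def shuffle_genotype_counts(case_counts, ctrl_counts, rng):
    """One MCPerm-style shuffle of a 2x3 genotype count table.

    Sequential hypergeometric draws assign the pooled genotypes to a
    random 'case' group of the original size; the remainder become the
    'controls'.  Illustrative sketch, not a port of the R package.
    """
    case_counts = np.asarray(case_counts)
    total = case_counts + np.asarray(ctrl_counts)
    n_case = int(case_counts.sum())
    new_case = np.zeros(3, dtype=int)
    remaining, pool = n_case, int(total.sum())
    for g in range(2):                      # the last genotype count is forced
        ngood, nbad = int(total[g]), pool - int(total[g])
        new_case[g] = rng.hypergeometric(ngood, nbad, remaining) if remaining else 0
        remaining -= new_case[g]
        pool -= int(total[g])
    new_case[2] = remaining
    return new_case, total - new_case
```

Repeating this for every study in the meta-analysis, recomputing the pooled statistic, and counting exceedances over N replicates yields the permutation P-value.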
International Nuclear Information System (INIS)
Li, Y.; Krieger, J.B.; Norman, M.R.; Iafrate, G.J.
1991-01-01
The optimized-effective-potential (OEP) method and a method developed recently by Krieger, Li, and Iafrate (KLI) are applied to band-structure calculations of noble-gas and alkali halide solids, employing the self-interaction-corrected (SIC) local-spin-density (LSD) approximation for the exchange-correlation energy functional. The resulting band gaps from both calculations are found to be in fair agreement with the experimental values. The discrepancies are typically within a few percent, with results that are nearly the same as those of previously published orbital-dependent multipotential SIC calculations, whereas the LSD results underestimate the band gaps by as much as 40%. As in the LSD (and, it is believed, even for the exact Kohn-Sham potential), both the OEP and KLI predict valence-band widths that are narrower than those of experiment. In all cases, the KLI method yields essentially the same results as the OEP
Wang, Jian; Shete, Sanjay
2011-11-01
We recently proposed a bias correction approach to evaluate accurate estimation of the odds ratio (OR) of genetic variants associated with a secondary phenotype, in which the secondary phenotype is associated with the primary disease, based on the original case-control data collected for the purpose of studying the primary disease. As reported in this communication, we further investigated the type I error probabilities and powers of the proposed approach, and compared the results to those obtained from logistic regression analysis (with or without adjustment for the primary disease status). We performed a simulation study based on a frequency-matching case-control study with respect to the secondary phenotype of interest. We examined the empirical distribution of the natural logarithm of the corrected OR obtained from the bias correction approach and found it to be normally distributed under the null hypothesis. On the basis of the simulation study results, we found that the logistic regression approaches that adjust or do not adjust for the primary disease status had low power for detecting secondary phenotype associated variants and highly inflated type I error probabilities, whereas our approach was more powerful for identifying the SNP-secondary phenotype associations and had better-controlled type I error probabilities. © 2011 Wiley Periodicals, Inc.
Detection of aurorae in daytime during rocket investigations of atmospheric radiation
International Nuclear Information System (INIS)
Khokhlov, V.N.
1996-01-01
Results of rocket observations of aurorae in the daytime were analyzed. Characteristic features of Rayleigh scattering, day airglow, solar radiation scattered in the device, and near-rocket glow were considered. The daytime contribution of aurorae was determined on the basis of analysis of rocket experiments, laboratory measurements and theoretical simulation. 4 refs., 2 figs
Directory of Open Access Journals (Sweden)
Sandra Jakob
2017-01-01
Full Text Available Drone-borne hyperspectral imaging is a new and promising technique for the fast and precise acquisition and delivery of high-resolution hyperspectral data to a large variety of end-users. Drones can bridge the scale gap between field and airborne remote sensing, thus providing high-resolution and multi-temporal data. They are easy to use, flexible, and deliver data at cm-scale resolution. So far, however, drone-borne imagery has been used almost solely, though prominently and successfully, in precision agriculture and photogrammetry. Drone technology currently relies mainly on structure-from-motion photogrammetry, aerial photography and agricultural monitoring. Recently, a few hyperspectral sensors became available for drones, but complex geometric and radiometric effects complicate their use for geology-related studies. Using two examples, we first show that precise corrections are required for any geological mapping. We then present a processing toolbox for frame-based hyperspectral imaging systems, adapted for the complex correction of drone-borne hyperspectral imagery. The toolbox performs sensor- and platform-specific geometric distortion corrections. Furthermore, a topographic correction step is implemented to correct for rough terrain surfaces. We recommend the c-factor algorithm for geological applications. To our knowledge, we demonstrate for the first time the applicability of the corrected dataset for lithological mapping and mineral exploration.
DEFF Research Database (Denmark)
Haack, Søren; Pedersen, Erik Morre; Vinding, Mads Sloth
in dose planning of radiotherapy. This study evaluates the use of k-means clustering for automatic user independent delineation of regions of reduced apparent diffusion coefficient (ADC) and the value of B0-correction of DW-MRI for reduction of geometrical distortions during dose planning of brachytherapy...
Rocklin, Gabriel J; Mobley, David L; Dill, Ken A; Hünenberger, Philippe H
2013-11-14
The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol(-1)) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB
Energy Technology Data Exchange (ETDEWEB)
Rocklin, Gabriel J. [Department of Pharmaceutical Chemistry, University of California San Francisco, 1700 4th St., San Francisco, California 94143-2550, USA and Biophysics Graduate Program, University of California San Francisco, 1700 4th St., San Francisco, California 94143-2550 (United States); Mobley, David L. [Departments of Pharmaceutical Sciences and Chemistry, University of California Irvine, 147 Bison Modular, Building 515, Irvine, California 92697-0001, USA and Department of Chemistry, University of New Orleans, 2000 Lakeshore Drive, New Orleans, Louisiana 70148 (United States); Dill, Ken A. [Laufer Center for Physical and Quantitative Biology, 5252 Stony Brook University, Stony Brook, New York 11794-0001 (United States); Hünenberger, Philippe H., E-mail: phil@igc.phys.chem.ethz.ch [Laboratory of Physical Chemistry, Swiss Federal Institute of Technology, ETH, 8093 Zürich (Switzerland)
2013-11-14
The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol{sup −1}) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non
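The finite-size artifact described above can be illustrated by its best-known leading term: the periodicity-induced self-interaction of a net charge in a cubic lattice-sum box (the Wigner term). This sketch is only one ingredient of a full continuum-electrostatics correction such as the schemes in the record; the constant 138.935458 kJ mol⁻¹ nm is the Coulomb prefactor e²/(4πε₀) in MD-style units:

```python
XI_EW = -2.837297      # Wigner constant for a cubic lattice (dimensionless)
E2_NM = 138.935458     # e^2 / (4*pi*eps0) in kJ mol^-1 nm

def wigner_self_energy(q_net, box_edge_nm):
    """Leading periodicity-induced self-interaction energy (kJ/mol) of a
    net charge q_net (units of e) in a cubic box of edge box_edge_nm under
    lattice-sum electrostatics. Only the 1/L term of a finite-size
    analysis, not a full correction scheme."""
    return XI_EW * E2_NM * q_net**2 / (2.0 * box_edge_nm)

# The artifact shrinks as the box grows, but only like 1/L, so it cannot
# be eliminated by merely enlarging the box (edges as in the record):
small_box = wigner_self_energy(1.0, 7.42)
large_box = wigner_self_energy(1.0, 11.02)
```

For a +1 e ligand these values are of the order of tens of kJ/mol, consistent with the magnitude (up to 17.1 kJ/mol) of the raw-free-energy artifacts reported above.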
Light-Time Effect and Mass Transfer in the Triple Star SW Lyncis
Directory of Open Access Journals (Sweden)
Chun-Hwey Kim
1999-06-01
Full Text Available In this paper all the photoelectric times of minimum for the triple star SW Lyn have been analyzed in terms of a light-time effect due to the third body and a secular period decrease induced by a mass-transfer process. The light-time orbit recently determined by Ogloza et al. (1998) was modified and improved, and it is found that the orbital period of SW Lyn has been decreasing secularly. The third body revolves around the mass center of the triple system every 5.77 yr in a highly eccentric elliptical orbit (e = 0.61). The third body, with a minimum mass of 1.13 M⊙, may be a binary or a white dwarf. The rate of secular period decrease was obtained as ΔP/P = -12.45 x 10^-11, implying mass transfer from the massive primary star to the secondary. The mass-loss rate from the primary was calculated as about 1.24 x 10^-8 M⊙/yr. It is noted that the direction of mass transfer in the SW Lyn system is opposite to that deduced from its Roche geometry by previous investigators.
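The light-time effect used above has a standard closed form once Kepler's equation is solved for the third body's orbit. The sketch below uses the abstract's e = 0.61 and P3 = 5.77 yr in its demonstration values, but the projected semi-major axis, epoch and argument of periastron are hypothetical placeholders:

```python
import math

AU_LIGHT_S = 499.004784  # light-travel time across 1 au, in seconds

def kepler_E(M, e, tol=1e-12):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = M if e < 0.8 else math.pi
    for _ in range(100):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def light_time_delay(t, P3, T0, e, omega, a12_sini_au):
    """O-C delay (seconds) of eclipse times caused by a third body:
    t, T0, P3 in days; omega in radians; a12_sini_au is the projected
    semi-major axis of the eclipsing pair's orbit about the barycenter."""
    M = 2.0 * math.pi * ((t - T0) % P3) / P3
    E = kepler_E(M, e)
    nu = 2.0 * math.atan2(math.sqrt(1.0 + e) * math.sin(E / 2.0),
                          math.sqrt(1.0 - e) * math.cos(E / 2.0))
    return a12_sini_au * AU_LIGHT_S * (
        (1.0 - e**2) / (1.0 + e * math.cos(nu)) * math.sin(nu + omega)
        + e * math.sin(omega))
```

Fitting this curve (plus a quadratic term for the secular period decrease) to the observed minus calculated times of minimum is exactly the analysis the abstract describes.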
Centi-pixel accurate real-time inverse distortion correction
CSIR Research Space (South Africa)
De Villiers, Johan P
2008-11-01
Full Text Available Inverse distortion is used to create an undistorted image from a distorted image. For each pixel in the undistorted image it is required to determine which pixel in the distorted image should be used. However, the process of characterizing a lens...
Spectrally accurate contour dynamics
International Nuclear Information System (INIS)
Van Buskirk, R.D.; Marcus, P.S.
1994-01-01
We present an exponentially accurate boundary integral method for calculating the equilibria and dynamics of piecewise-constant distributions of potential vorticity. The method represents contours of potential vorticity as a spectral sum and solves the Biot-Savart equation for the velocity by spectrally evaluating a desingularized contour integral. We use the technique in both an initial-value code and a Newton continuation method. Our methods are tested by comparing the numerical solutions with known analytic results, and it is shown that for the same amount of computational work our spectral methods are more accurate than other contour dynamics methods currently in use
DEFF Research Database (Denmark)
Turcot, Valérie; Lu, Yingchang; Highland, Heather M
2018-01-01
In the published version of this paper, the name of author Emanuele Di Angelantonio was misspelled. This error has now been corrected in the HTML and PDF versions of the article.
DEFF Research Database (Denmark)
Grundle, D S; Löscher, C R; Krahmann, G
2018-01-01
A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has not been fixed in the paper.
Accurate quantum chemical calculations
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
DEFF Research Database (Denmark)
Stokholm, Jakob; Blaser, Martin J.; Thorsen, Jonathan
2018-01-01
The originally published version of this Article contained an incorrect version of Figure 3 that was introduced following peer review and inadvertently not corrected during the production process. Both versions contain the same set of abundance data, but the incorrect version has the children...
DEFF Research Database (Denmark)
Flachsbart, Friederike; Dose, Janina; Gentschew, Liljana
2018-01-01
The original version of this Article contained an error in the spelling of the author Robert Häsler, which was incorrectly given as Robert Häesler. This has now been corrected in both the PDF and HTML versions of the Article.
DEFF Research Database (Denmark)
Roehle, Robert; Wieske, Viktoria; Schuetz, Georg M
2018-01-01
The original version of this article, published on 19 March 2018, unfortunately contained a mistake. The following correction has therefore been made in the original: The names of the authors Philipp A. Kaufmann, Ronny Ralf Buechel and Bernhard A. Herzog were presented incorrectly.
Full Text Available ... Corrective Jaw Surgery: Orthognathic surgery is performed to correct the misalignment of jaws ...
Measured attenuation correction methods
International Nuclear Information System (INIS)
Ostertag, H.; Kuebler, W.K.; Doll, J.; Lorenz, W.J.
1989-01-01
Accurate attenuation correction is a prerequisite for the determination of exact local radioactivity concentrations in positron emission tomography. Attenuation correction factors range from 4-5 in brain studies to 50-100 in whole body measurements. This report gives an overview of the different methods of determining the attenuation correction factors by transmission measurements using an external positron emitting source. The long-lived generator nuclide 68Ge/68Ga is commonly used for this purpose. The additional patient dose from the transmission source is usually a small fraction of the dose due to the subsequent emission measurement. Ring-shaped transmission sources as well as rotating point or line sources are employed in modern positron tomographs. By masking a rotating line or point source, random and scattered events in the transmission scans can be effectively suppressed. The problems of measured attenuation correction are discussed: transmission/emission mismatch, random and scattered event contamination, counting statistics, transmission/emission scatter compensation, transmission scan after administration of activity to the patient. By using a double-masking technique, simultaneous emission and transmission scans become feasible. (orig.)
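The measured correction described above can be sketched for a single line of response (simplified: randoms, scatter and dead time are ignored, and the 511 keV water attenuation coefficient used in the uniform-medium check is an assumed textbook value):

```python
import math

def attenuation_correction_factor(blank_counts, transmission_counts):
    """Measured ACF along one line of response: blank (no patient)
    transmission scan divided by the patient transmission scan."""
    return blank_counts / transmission_counts

def corrected_counts(emission_counts, blank_counts, transmission_counts):
    """Emission data multiplied by the measured ACF."""
    return emission_counts * attenuation_correction_factor(
        blank_counts, transmission_counts)

# For a uniform medium the same factor is exp(mu * d); e.g. ~20 cm of
# water at 511 keV (mu ~ 0.096 /cm) gives a factor of about 7, bracketing
# the 4-5 quoted above for brain studies over shorter paths.
acf_uniform = math.exp(0.096 * 20.0)
```

The point of measuring the ACF, rather than computing exp(∫μ dl) from an assumed outline, is that it captures the patient's actual, inhomogeneous attenuation.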
International Nuclear Information System (INIS)
Beenakker, W.J.P.
1989-01-01
The prospect of high-accuracy measurements investigating the weak interactions, which are expected to take place at the electron-positron storage ring LEP at CERN and the linear collider SLC at SLAC, offers the possibility of also studying the weak quantum effects. In order to distinguish whether the measured weak quantum effects lie within the margins set by the standard model or bear traces of new physics, one has to go beyond lowest order and include electroweak radiative corrections (EWRC) in the theoretical calculations. These higher-order corrections can also offer information about two particles present in the Glashow-Salam-Weinberg (GSW) model but not yet discovered: the top quark and the Higgs boson. In ch. 2 the GSW standard model of electroweak interactions is described. In ch. 3 special techniques are described for evaluating integrals that are responsible for numerical instabilities caused by large canceling terms encountered in the calculation of EWRC effects, together with methods needed to handle the extensive algebra typical of EWRC. In ch. 4 various aspects of EWRC effects are discussed, in particular their dependence on the unknown model parameters, the masses of the top quark and the Higgs boson. The processes discussed are the production of heavy fermions in electron-positron annihilation and the fermionic decay of the Z gauge boson. (H.W.). 106 refs.; 30 figs.; 6 tabs.; schemes
Fast and accurate methods for phylogenomic analyses
Directory of Open Access Journals (Sweden)
Warnow Tandy
2011-10-01
Full Text Available Abstract Background Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.
Highly accurate surface maps from profilometer measurements
Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.
2013-04-01
Many aspheres and free-form optical surfaces are measured using a single line trace profilometer which is limiting because accurate 3D corrections are not possible with the single trace. We show a method to produce an accurate fully 2.5D surface height map when measuring a surface with a profilometer using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains low form error only, the first 36 Zernikes. The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak to valley number. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and to a small effect, choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1°, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements. The part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, with an indicator, but the part must be edged to a clean diameter.
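The multi-trace idea above can be sketched as a least-squares fit of low-order Zernike-like terms to synthetic profilometer traces taken along rotated diameters. This is an illustration of the general approach, not the paper's algorithm; the basis is unnormalized, truncated to 7 terms, and the noise level is arbitrary:

```python
import numpy as np

def zernike_basis(r, theta):
    """A few low-order Zernike-like terms (piston, tilts, defocus,
    astigmatism, primary spherical); unnormalized, for illustration."""
    return np.column_stack([
        np.ones_like(r),
        r * np.cos(theta),
        r * np.sin(theta),
        2.0 * r**2 - 1.0,
        r**2 * np.cos(2.0 * theta),
        r**2 * np.sin(2.0 * theta),
        6.0 * r**4 - 6.0 * r**2 + 1.0,
    ])

def fit_from_traces(true_coeffs, n_traces=6, n_pts=101, noise=1e-4, seed=0):
    """Sample a synthetic surface along n_traces diameters (the part is
    rotated between traces) and recover the coefficients by least squares."""
    rng = np.random.default_rng(seed)
    r_list, th_list = [], []
    for k in range(n_traces):
        ang = np.pi * k / n_traces             # rotation angle of this trace
        s = np.linspace(-1.0, 1.0, n_pts)      # signed position along trace
        r_list.append(np.abs(s))
        th_list.append(np.where(s < 0.0, ang + np.pi, ang))
    r = np.concatenate(r_list)
    th = np.concatenate(th_list)
    A = zernike_basis(r, th)
    z = A @ true_coeffs + rng.normal(0.0, noise, r.size)
    est, *_ = np.linalg.lstsq(A, z, rcond=None)
    return est
```

Six diameters suffice here because the truncated basis only contains azimuthal frequencies up to 2; recovering the paper's 36 Zernikes from 6 traces relies on the same low-azimuthal-frequency restriction.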
Quantum-electrodynamics corrections in pionic hydrogen
Schlesser, S.; Le Bigot, E. -O.; Indelicato, P.; Pachucki, K.
2011-01-01
We investigate all pure quantum-electrodynamics corrections to the np --> 1s, n = 2-4 transition energies of pionic hydrogen larger than 1 meV, which requires an accurate evaluation of all relevant contributions up to order alpha 5. These values are needed to extract an accurate strong interaction
Position Error Covariance Matrix Validation and Correction
Frisbee, Joe, Jr.
2016-01-01
In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
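The validation idea above can be sketched with a standard consistency check: if a position error covariance is an accurate Gaussian description of the errors, the squared Mahalanobis distances of those errors follow a chi-square distribution with 3 degrees of freedom. The covariance values below are hypothetical, and this is a generic check rather than the presentation's specific correction procedure:

```python
import numpy as np

def mahalanobis_sq(errors, cov):
    """Squared Mahalanobis distances of 3-D position errors under cov."""
    inv = np.linalg.inv(cov)
    return np.einsum('ij,jk,ik->i', errors, inv, errors)

rng = np.random.default_rng(1)
cov = np.diag([4.0, 1.0, 0.25])   # km^2; a hypothetical RIC-frame covariance
errs = rng.multivariate_normal(np.zeros(3), cov, size=20000)
d2 = mahalanobis_sq(errs, cov)
# For a correct Gaussian covariance: mean of d2 is 3, and ~95% of the
# samples fall below the chi-square(3) 0.95 quantile, 7.815.
```

If the empirical distribution of d2 deviates from chi-square(3), the covariance is mis-scaled, which is the situation a correction (e.g. an inflation factor) is meant to repair.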
Accurate Evaluation of Quantum Integrals
Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)
1995-01-01
Combining an appropriate finite difference method with Richardson extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values, rather than the wavefunctions, to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
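A minimal version of the scheme described above, assuming the harmonic oscillator as the test problem (exact ground energy 1/2 in units of ħω); the grid sizes and domain are illustrative choices, not the authors':

```python
import numpy as np

def ground_energy(n):
    """Lowest eigenvalue of -(1/2) psi'' + (1/2) x^2 psi = E psi using
    three-point finite differences on [-8, 8] with n interior points."""
    x = np.linspace(-8.0, 8.0, n + 2)[1:-1]
    h = x[1] - x[0]
    main = 1.0 / h**2 + 0.5 * x**2          # diagonal of the Hamiltonian
    off = np.full(n - 1, -0.5 / h**2)       # off-diagonal kinetic coupling
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

e_h = ground_energy(199)              # step h = 0.08
e_h2 = ground_energy(399)             # step h = 0.04 (exactly halved)
e_rich = (4.0 * e_h2 - e_h) / 3.0     # Richardson: cancels the O(h^2) error
```

Because the three-point stencil has an O(h²) leading error, the combination (4·E(h/2) − E(h))/3 removes it, leaving an O(h⁴) residual; repeating the step on three or more grids extrapolates further.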
Device accurately measures and records low gas-flow rates
Branum, L. W.
1966-01-01
Free-floating piston in a vertical column accurately measures and records low gas-flow rates. The system may be calibrated, using an adjustable flow-rate gas supply, a low pressure gage, and a sequence recorder. From the calibration rates, a nomograph may be made for easy reduction. Temperature correction may be added for further accuracy.
A universal PWR spectral history correction
International Nuclear Information System (INIS)
Hutt, P.K.; Nunn, D.L.
1989-01-01
The accuracy of a form of universal correction for the difference between depletion conditions assumed in PWR assembly lattice calculations and those experienced in a reactor burn-up is investigated. The correction is based on lattice calculations in which only one such depletion history difference, depletion at two different water densities, is explicitly represented by lattice calculations. The assumption is made that other historical effects bear the same relationship to an appropriate time-average of the two-group neutron flux spectrum. The correction is shown to be accurate for the most important historical effects, depletion with burnable absorbers inserted, control rods inserted or at a different soluble boron level, in addition to density itself. The correction is less accurate for representing depletion at a different fuel or coolant temperature but even in these cases gives an improvement over no correction. In addition it is argued that these historic temperature effects are likely to be of minor importance. (author)
Towards accurate emergency response behavior
International Nuclear Information System (INIS)
Sargent, T.O.
1981-01-01
Nuclear reactor operator emergency response behavior has persisted as a training problem through lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge based behavior, are described in detail
Universality of quantum gravity corrections.
Das, Saurya; Vagenas, Elias C
2008-11-28
We show that the existence of a minimum measurable length and the related generalized uncertainty principle (GUP), predicted by theories of quantum gravity, influence all quantum Hamiltonians. Thus, they predict quantum gravity corrections to various quantum phenomena. We compute such corrections to the Lamb shift, the Landau levels, and the tunneling current in a scanning tunneling microscope. We show that these corrections can be interpreted in two ways: (a) either that they are exceedingly small, beyond the reach of current experiments, or (b) that they predict upper bounds on the quantum gravity parameter in the GUP, compatible with experiments at the electroweak scale. Thus, more accurate measurements in the future should either be able to test these predictions, or further tighten the above bounds and predict an intermediate length scale between the electroweak and the Planck scale.
Accurate metacognition for visual sensory memory representations.
Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F
2014-04-01
The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition (the degree of knowledge that subjects have about the correctness of their decisions) for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.
An accurate nonlinear Monte Carlo collision operator
International Nuclear Information System (INIS)
Wang, W.X.; Okamoto, M.; Nakajima, N.; Murakami, S.
1995-03-01
A three-dimensional nonlinear Monte Carlo collision model is developed, based on Coulomb binary collisions, with emphasis on both accuracy and implementation efficiency. The operator, of simple form, fulfills the particle number, momentum and energy conservation laws, and is equivalent to the exact Fokker-Planck operator in that it correctly reproduces the friction coefficient and diffusion tensor; in addition, it can effectively assure small-angle collisions with a binary scattering angle distributed in a limited range near zero. Two highly vectorizable algorithms are designed for its fast implementation. Various test simulations regarding relaxation processes, electrical conductivity, etc. are carried out in velocity space. The test results, which are in good agreement with theory, and timing results on vector computers show that the operator is practically applicable. It may be used for accurately simulating collisional transport problems in magnetized and unmagnetized plasmas. (author)
Geodetic analysis of disputed accurate qibla direction
Saksono, Tono; Fulazzaky, Mohamad Ali; Sari, Zamah
2018-04-01
Muslims performing the prayers facing towards the correct qibla direction is one of the practical issues that links theoretical studies with practice. The concept of facing towards the Kaaba in Mecca during the prayers has long been a source of controversy among muslim communities, not only in poor and developing countries but also in developed countries. The aim of this study was to analyse the geodetic azimuths of the qibla calculated using three different models of the Earth. The use of an ellipsoidal model of the Earth could be the best method for determining the accurate direction to the Kaaba from anywhere on the Earth's surface. A muslim cannot direct himself towards the qibla correctly if he cannot see the Kaaba, since the setting-out process and certain motions during the prayer can significantly shift the qibla direction away from the actual position of the Kaaba. The requirement that muslims pray facing towards the Kaaba is thus more a spiritual prerequisite than a matter of physical evidence.
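As an illustration of the azimuth computation discussed above, here is the spherical-Earth (great-circle) qibla bearing; the abstract's point is precisely that an ellipsoidal model refines this value slightly, so this is the simpler of the compared models:

```python
import math

KAABA_LAT, KAABA_LON = 21.4225, 39.8262   # degrees N, degrees E

def qibla_azimuth(lat_deg, lon_deg):
    """Great-circle initial bearing from (lat_deg, lon_deg) toward the
    Kaaba, in degrees clockwise from true north, on a spherical Earth."""
    lat = math.radians(lat_deg)
    klat = math.radians(KAABA_LAT)
    dlon = math.radians(KAABA_LON - lon_deg)
    x = math.cos(lat) * math.tan(klat) - math.sin(lat) * math.cos(dlon)
    az = math.degrees(math.atan2(math.sin(dlon), x))
    return az % 360.0
```

For example, from Jakarta (6.2° S, 106.8° E) this gives roughly 295°, noticeably north of due west because the great circle to Mecca bows northward.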
Accurate shear measurement with faint sources
International Nuclear Information System (INIS)
Zhang, Jun; Foucaud, Sebastien; Luo, Wentao
2015-01-01
For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys
Full Text Available ... Cleft Lip/Palate and Craniofacial Surgery: A cleft lip may require one or more ...
When Is Network Lasso Accurate?
Directory of Open Access Journals (Sweden)
Alexander Jung
2018-01-01
Full Text Available The “least absolute shrinkage and selection operator” (Lasso) method has been adapted recently for network-structured datasets. In particular, this network Lasso method makes it possible to learn graph signals from a small number of noisy signal samples by using the total variation of a graph signal for regularization. While efficient and scalable implementations of the network Lasso are available, little is known about the conditions on the underlying network structure which ensure the network Lasso to be accurate. By leveraging concepts of compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set which guarantee the network Lasso for a particular loss function to deliver an accurate estimate of the entire underlying graph signal. We also quantify the error incurred by network Lasso in terms of two constants which reflect the connectivity of the sampled nodes.
The Accurate Particle Tracer Code
Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi
2016-01-01
The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusio...
International Nuclear Information System (INIS)
Deslattes, R.D.
1987-01-01
Heavy ion accelerators are the most flexible and readily accessible sources of highly charged ions. Ions having only one or two remaining electrons have spectra whose accurate measurement is of considerable theoretical significance. Certain features of ion production by accelerators tend to limit the accuracy that can be realized in measurements of these spectra. This report aims to provide background on these spectroscopic limitations and to discuss how accelerator operations may be selected to permit attaining intrinsically limited data
Rethinking political correctness.
Ely, Robin J; Meyerson, Debra E; Davidson, Martin N
2006-09-01
Legal and cultural changes over the past 40 years ushered unprecedented numbers of women and people of color into companies' professional ranks. Laws now protect these traditionally underrepresented groups from blatant forms of discrimination in hiring and promotion. Meanwhile, political correctness has reset the standards for civility and respect in people's day-to-day interactions. Despite this obvious progress, the authors' research has shown that political correctness is a double-edged sword. While it has helped many employees feel unlimited by their race, gender, or religion, the PC rule book can hinder people's ability to develop effective relationships across race, gender, and religious lines. Companies need to equip workers with skills--not rules--for building these relationships. The authors offer the following five principles for healthy resolution of the tensions that commonly arise over difference: Pause to short-circuit the emotion and reflect; connect with others, affirming the importance of relationships; question yourself to identify blind spots and discover what makes you defensive; get genuine support that helps you gain a broader perspective; and shift your mind-set from one that says, "You need to change," to one that asks, "What can I change?" When people treat their cultural differences--and related conflicts and tensions--as opportunities to gain a more accurate view of themselves, one another, and the situation, trust builds and relationships become stronger. Leaders should put aside the PC rule book and instead model and encourage risk taking in the service of building the organization's relational capacity. The benefits will reverberate through every dimension of the company's work.
Accurate determination of antenna directivity
DEFF Research Database (Denmark)
Dich, Mikael
1997-01-01
The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power......-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...
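The abstract above is truncated and does not reproduce the paper's spherical-wave-expansion formula. As a hedged point of comparison only, directivity can be estimated from power-density samples on the far-field sphere by simple sin(θ)-weighted quadrature, D = 4πU_max/P_rad; the grid sizes and the Hertzian-dipole test pattern below are illustrative assumptions, not the paper's method:

```python
import numpy as np

# Naive quadrature estimate of directivity from sampled radiation
# intensity U(theta, phi) on a midpoint grid:
#   P_rad = sum over samples of U * sin(theta) * dtheta * dphi
#   D     = 4*pi * U_max / P_rad
def directivity(U, theta_mid, dtheta, dphi):
    """U: 2D array of radiation intensity on a (theta, phi) midpoint grid."""
    P_rad = np.sum(U * np.sin(theta_mid)[:, None]) * dtheta * dphi
    return 4.0 * np.pi * U.max() / P_rad

# Test pattern: Hertzian dipole, U proportional to sin^2(theta);
# the exact directivity is 1.5 (1.76 dBi).
n_t, n_p = 180, 360
dtheta, dphi = np.pi / n_t, 2.0 * np.pi / n_p
theta = (np.arange(n_t) + 0.5) * dtheta
phi = (np.arange(n_p) + 0.5) * dphi
U = np.sin(theta)[:, None] ** 2 * np.ones(n_p)[None, :]
D = directivity(U, theta, dtheta, dphi)
```

The paper's point is precisely that such brute-force quadrature needs many samples, whereas the spherical wave expansion gives a criterion for the minimum number required.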
Thermodynamics of Error Correction
Directory of Open Access Journals (Sweden)
Pablo Sartori
2015-12-01
Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
Deferred correction approach on generic transport equation
International Nuclear Information System (INIS)
Shah, I.A.; Ali, M.
2004-01-01
In this study, a two-dimensional steady convection-diffusion equation was solved using a deferred correction approach, and the results were compared with standard spatial discretization schemes. Numerical investigations were carried out based on the velocity and flow direction, for various diffusivity coefficients covering a range from diffusive to convective flows. The results show that the deferred correction approach gives more accurate and stable results than UDS and CDS discretization of the convective terms. The deferred correction approach suppresses the wiggles that arise for convective flows with central difference discretization of the equation, and also compensates for the dissipative error generated by first-order upwind discretization of the convective fluxes. (author)
Accurate Modeling of Advanced Reflectarrays
DEFF Research Database (Denmark)
Zhou, Min
to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared...... of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical NearField Antenna Test Facility, it was concluded that the three latter factors are particularly important...... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility...
Accurate thickness measurement of graphene
International Nuclear Information System (INIS)
Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T
2016-01-01
Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1–1.3 nm to 0.1–0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials. (paper)
Scatter and attenuation correction in SPECT
International Nuclear Information System (INIS)
Ljungberg, Michael
2004-01-01
The absorbed dose is related to the activity uptake in the organ and its temporal distribution. The count rate measured with scintillation cameras is related to activity through the system sensitivity (cps/MBq). By accounting for physical processes and imaging limitations we can measure the activity at different time points. Correction for physical factors, such as attenuation and scatter, is required for accurate quantitation. Both planar and SPECT imaging can be used to estimate activities for radiopharmaceutical dosimetry. Planar methods have been the most widely used, but planar imaging is a 2D technique. With accurate modelling of the imaging process in iterative reconstruction, SPECT methods will prove to be more accurate
Accurate measurements of neutron activation cross sections
International Nuclear Information System (INIS)
Semkova, V.
1999-01-01
The applications of some recent achievements of the neutron activation method on high-intensity neutron sources are considered from the viewpoint of the associated errors of cross-section data for neutron-induced reactions. The important corrections in γ-spectrometry ensuring precise determination of the induced radioactivity, methods for accurate determination of the energy and flux density of neutrons produced by different sources, and investigations of deuterium beam composition are considered as factors determining the precision of the experimental data. The influence of the ion beam composition on the mean energy of neutrons has been investigated by measurement of the energy of neutrons induced by different magnetically analysed deuterium ion groups. The Zr/Nb method for experimental determination of the neutron energy in the 13-15 MeV energy range makes it possible to measure the energy of neutrons from the D-T reaction with an uncertainty of 50 keV. Flux density spectra from D(d,n) at E_d = 9.53 MeV and Be(d,n) at E_d = 9.72 MeV are measured by PHRS and the foil activation method. Future applications of the activation method on NG-12 are discussed. (author)
NWS Corrections to Observations
National Oceanic and Atmospheric Administration, Department of Commerce — Form B-14 is the National Weather Service form entitled 'Notice of Corrections to Weather Records.' The forms are used to make corrections to observations on forms...
Accurate EPR radiosensitivity calibration using small sample masses
Hayes, R. B.; Haskell, E. H.; Barrus, J. K.; Kenner, G. H.; Romanyukha, A. A.
2000-03-01
We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed.
78 FR 4766 - Authority Citation Correction
2013-01-23
...-19-11] Authority Citation Correction AGENCY: Securities and Exchange Commission. ACTION: Final rule..., respectively) that each included an inaccurate amendatory instruction pertaining to an authority citation. The Commission is publishing this technical amendment to accurately reflect the authority citation in the Code of...
The accurate particle tracer code
Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun
2017-11-01
The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in Van Allen belt. As an important realization, the APT-SW version has been successfully distributed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting master-slave architecture of Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and improve the confinement of energetic runaway beam on the same time.
CTE Corrections for WFPC2 and ACS
Dolphin, Andrew
2003-07-01
The error budget for optical broadband photometry is dominated by three factors: CTE corrections, long-short anomaly corrections, and photometric zero points. Questions about the dependencies of the CTE have largely been resolved, and my CTE corrections have been included in the WFPC2 handbook and tutorial. What remains to be done is the determination of the "final" CTE correction at the end of the WFPC2 mission, which will increase the accuracy of photometry obtained in the final few cycles. The long-short anomaly is still the subject of much debate, as it remains unclear whether or not this effect is real and, if so, what its size and nature are. Photometric zero points have likewise varied by over 0.05 magnitudes in the literature, and will likely remain unresolved until the long-short anomaly is addressed (given that most calibration exposures are short while most science exposures are long). It is also becoming apparent that similar issues will affect the accuracy of ACS photometry, and consequently that an ACS CTE study analogous to my WFPC2 work would significantly improve the calibration of ACS. I therefore propose to use archival WFPC2 images of omega Cen and ACS images of 47 Tuc to continue my HST calibration work. I also propose to begin work on "next-generation" CTE corrections, in which corrections are applied to the images based on accurate charge-trapping models rather than to the reduced photometry. This technique will allow for more accurate CTE corrections in certain cases (such as a star above a bright star or on a variable background), improved PSF-fitting photometry of faint stars, and image restoration for accurate analysis of extended objects.
Accurate deuterium spectroscopy for fundamental studies
Wcisło, P.; Thibault, F.; Zaborowski, M.; Wójtewicz, S.; Cygan, A.; Kowzan, G.; Masłowski, P.; Komasa, J.; Puchalski, M.; Pachucki, K.; Ciuryło, R.; Lisak, D.
2018-07-01
We present an accurate measurement of the weak quadrupole S(2) 2-0 line in self-perturbed D2 and theoretical ab initio calculations of both collisional line-shape effects and energy of this rovibrational transition. The spectra were collected at the 247-984 Torr pressure range with a frequency-stabilized cavity ring-down spectrometer linked to an optical frequency comb (OFC) referenced to a primary time standard. Our line-shape modeling employed quantum calculations of molecular scattering (the pressure broadening and shift and their speed dependencies were calculated, while the complex frequency of optical velocity-changing collisions was fitted to experimental spectra). The velocity-changing collisions are handled with the hard-sphere collisional kernel. The experimental and theoretical pressure broadening and shift are consistent within 5% and 27%, respectively (the discrepancy for shift is 8% when referred not to the speed averaged value, which is close to zero, but to the range of variability of the speed-dependent shift). We use our high pressure measurement to determine the energy, ν0, of the S(2) 2-0 transition. The ab initio line-shape calculations allowed us to mitigate the expected collisional systematics reaching the 410 kHz accuracy of ν0. We report theoretical determination of ν0 taking into account relativistic and QED corrections up to α5. Our estimation of the accuracy of the theoretical ν0 is 1.3 MHz. We observe 3.4σ discrepancy between experimental and theoretical ν0.
How flatbed scanners upset accurate film dosimetry
van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.
2016-01-01
Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL), and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged between 0.2 to 9 Gy, i.e. an optical density range between 0.25 to 1.1. Measurements were performed in the scanner’s transmission mode, with red-green-blue channels. LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy increasing up to 14% at maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes 3% for pixels in the extreme lateral position. Light polarization due to film and the scanner’s optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of LSE, therefore, determination of the LSE per color channel and dose delivered to the film.
Source distribution dependent scatter correction for PVI
International Nuclear Information System (INIS)
Barney, J.S.; Harrop, R.; Dykstra, C.J.
1993-01-01
Source distribution dependent scatter correction methods which incorporate different amounts of information about the source position and material distribution have been developed and tested. The techniques use image to projection integral transformation incorporating varying degrees of information on the distribution of scattering material, or convolution subtraction methods, with some information about the scattering material included in one of the convolution methods. To test the techniques, the authors apply them to data generated by Monte Carlo simulations which use geometric shapes or a voxelized density map to model the scattering material. Source position and material distribution have been found to have some effect on scatter correction. An image to projection method which incorporates a density map produces accurate scatter correction but is computationally expensive. Simpler methods, both image to projection and convolution, can also provide effective scatter correction
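The convolution-subtraction family mentioned above estimates scatter by convolving the measured projection with a scatter kernel and subtracting a scaled version of it. A minimal one-dimensional sketch follows; the exponential kernel shape and the scatter fraction k are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Convolution-subtraction scatter correction (1D sketch):
#   scatter_estimate = k * (projection convolved with scatter kernel)
#   corrected        = projection - scatter_estimate
def scatter_correct(proj, kernel, k=0.3):
    """proj: 1D projection profile; kernel: normalized scatter kernel;
    k: assumed scatter fraction (illustrative value)."""
    scatter_est = k * np.convolve(proj, kernel, mode="same")
    return proj - scatter_est

# Example: exponential scatter kernel applied to a simple box profile.
x = np.arange(-10, 11)
kernel = np.exp(-np.abs(x) / 3.0)
kernel /= kernel.sum()          # normalize so convolution preserves counts
proj = np.zeros(64)
proj[24:40] = 100.0             # box-shaped "true + scatter" projection
corrected = scatter_correct(proj, kernel)
```

In practice the kernel and k are measured or, as in the paper's image-to-projection variants, derived from a density map of the scattering material; iterating the estimate on the corrected projection is also common.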
Corrections to primordial nucleosynthesis
International Nuclear Information System (INIS)
Dicus, D.A.; Kolb, E.W.; Gleeson, A.M.; Sudarshan, E.C.G.; Teplitz, V.L.; Turner, M.S.
1982-01-01
The changes in primordial nucleosynthesis resulting from small corrections to rates for weak processes that connect neutrons and protons are discussed. The weak rates are corrected by improved treatment of Coulomb and radiative corrections, and by inclusion of plasma effects. The calculations lead to a systematic decrease in the predicted 4He abundance of about ΔY = 0.0025. The relative changes in other primordial abundances are also 1 to 2%
Publisher Correction: Predicting unpredictability
Davis, Steven J.
2018-06-01
In this News & Views article originally published, the wrong graph was used for panel b of Fig. 1, and the numbers on the y axes of panels a and c were incorrect; the original and corrected Fig. 1 is shown below. This has now been corrected in all versions of the News & Views.
Exploring the relationship between sequence similarity and accurate phylogenetic trees.
Cantarel, Brandi L; Morrison, Hilary G; Pearson, William
2006-11-01
We have characterized the relationship between accurate phylogenetic reconstruction and sequence similarity, testing whether high levels of sequence similarity can consistently produce accurate evolutionary trees. We generated protein families with known phylogenies using a modified version of the PAML/EVOLVER program that produces insertions and deletions as well as substitutions. Protein families were evolved over a range of 100-400 point accepted mutations; at these distances 63% of the families shared significant sequence similarity. Protein families were evolved using balanced and unbalanced trees, with ancient or recent radiations. In families sharing statistically significant similarity, about 60% of multiple sequence alignments were 95% identical to true alignments. To compare recovered topologies with true topologies, we used a score that reflects the fraction of clades that were correctly clustered. As expected, the accuracy of the phylogenies was greatest in the least divergent families. About 88% of phylogenies clustered over 80% of clades in families that shared significant sequence similarity, using Bayesian, parsimony, distance, and maximum likelihood methods. However, for protein families with short ancient branches (ancient radiation), only 30% of the most divergent (but statistically significant) families produced accurate phylogenies, and only about 70% of the second most highly conserved families, with median expectation values better than 10^-60, produced accurate trees. These values represent upper bounds on expected tree accuracy for sequences with a simple divergence history; proteins from 700 Giardia families, with a similar range of sequence similarities but considerably more gaps, produced much less accurate trees. For our simulated insertions and deletions, correct multiple sequence alignments did not perform much better than those produced by T-COFFEE, and including sequences with expressed sequence tag-like sequencing errors did not
The FLUKA code: An accurate simulation tool for particle therapy
Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis
2016-01-01
Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...
An Accurate Technique for Calculation of Radiation From Printed Reflectarrays
DEFF Research Database (Denmark)
Zhou, Min; Sorensen, Stig B.; Jorgensen, Erik
2011-01-01
The accuracy of various techniques for calculating the radiation from printed reflectarrays is examined, and an improved technique based on the equivalent currents approach is proposed. The equivalent currents are found from a continuous plane wave spectrum calculated by use of the spectral dyadic...... Green's function. This ensures a correct relation between the equivalent electric and magnetic currents and thus allows an accurate calculation of the radiation over the entire far-field sphere. A comparison to DTU-ESA Facility measurements of a reference offset reflectarray designed and manufactured...
Accurate characterization of OPVs: Device masking and different solar simulators
DEFF Research Database (Denmark)
Gevorgyan, Suren; Carlé, Jon Eggert; Søndergaard, Roar R.
2013-01-01
One of the prime objects of organic solar cell research has been to improve the power conversion efficiency. Unfortunately, the accurate determination of this property is not straight forward and has led to the recommendation that record devices be tested and certified at a few accredited...... laboratories following rigorous ASTM and IEC standards. This work tries to address some of the issues confronting the standard laboratory in this regard. Solar simulator lamps are investigated for their light field homogeneity and direct versus diffuse components, as well as the correct device area...
Accurate and fast multiple-testing correction in eQTL studies
Sul, Jae Hoon; Raj, Towfique; de Jong, Simone; de Bakker, Paul I W; Raychaudhuri, Soumya; Ophoff, Roel A; Stranger, Barbara E; Eskin, Eleazar; Han, Buhm
2015-01-01
In studies of expression quantitative trait loci (eQTLs), it is of increasing interest to identify eGenes, the genes whose expression levels are associated with variation at a particular genetic variant. Detecting eGenes is important for follow-up analyses and prioritization because genes are the
Another method of dead time correction
International Nuclear Information System (INIS)
Sabol, J.
1988-01-01
A new method of correcting counting losses caused by a non-extended dead time of pulse detection systems is presented. The approach is based on the distribution of time intervals between pulses at the output of the system. The method was verified both experimentally and by Monte Carlo simulations. The results show that the suggested technique is more reliable and accurate than other methods based on a separate measurement of the dead time. (author) 5 refs
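The abstract does not give the interval-distribution formulas, but the conventional non-extended (non-paralyzable) dead-time correction that such methods improve upon can be sketched as follows; the count rate and dead-time values are hypothetical:

```python
def true_rate_nonextending(measured_rate, dead_time):
    """Classic correction for a non-extended (non-paralyzable) dead time:
    n = m / (1 - m * tau), with measured rate m and dead time tau.
    Valid only while m * tau < 1."""
    loss = measured_rate * dead_time
    if loss >= 1.0:
        raise ValueError("measured rate inconsistent with the dead time")
    return measured_rate / (1.0 - loss)

# 9000 counts/s measured with a 10 microsecond dead time
print(true_rate_nonextending(9000.0, 10e-6))  # ~9890.1 counts/s
```

This single-parameter formula is exactly what a separate dead-time measurement feeds; the interval-distribution approach of the paper avoids measuring tau separately.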
Reducing dose calculation time for accurate iterative IMRT planning
International Nuclear Information System (INIS)
Siebers, Jeffrey V.; Lauterbach, Marc; Tong, Shidong; Wu Qiuwen; Mohan, Radhe
2002-01-01
A time-consuming component of IMRT optimization is the dose computation required in each iteration for the evaluation of the objective function. Accurate superposition/convolution (SC) and Monte Carlo (MC) dose calculations are currently considered too time-consuming for iterative IMRT dose calculation. Thus, fast but less accurate algorithms such as pencil beam (PB) algorithms are typically used in most current IMRT systems. This paper describes two hybrid methods that utilize the speed of fast PB algorithms yet achieve the accuracy of optimizing based upon SC algorithms via the application of dose correction matrices. In one method, the ratio method, an infrequently computed voxel-by-voxel dose ratio matrix (R = D_SC/D_PB) is applied for each beam to the dose distributions calculated with the PB method during the optimization. That is, D_PB × R is used for the dose calculation during the optimization. The optimization proceeds until both the IMRT beam intensities and the dose correction ratio matrix converge. In the second method, the correction method, a periodically computed voxel-by-voxel correction matrix for each beam, defined to be the difference between the SC and PB dose computations, is used to correct PB dose distributions. To validate the methods, IMRT treatment plans developed with the hybrid methods are compared with those obtained when the SC algorithm is used for all optimization iterations and with those obtained when PB-based optimization is followed by SC-based optimization. In the 12 patient cases studied, no clinically significant differences exist in the final treatment plans developed with each of the dose computation methodologies. However, the number of time-consuming SC iterations is reduced from 6-32 for pure SC optimization to four or less for the ratio matrix method and five or less for the correction method. Because the PB algorithm is faster at computing dose, this reduces the inverse planning optimization time for our implementation.
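The ratio method described in the abstract can be sketched in a few lines; the voxel values below are hypothetical stand-ins for full 3D dose grids:

```python
import numpy as np

# Hypothetical voxel doses standing in for full 3D grids (Gy)
d_sc_ref = np.array([2.0, 1.8, 0.9, 0.0])  # one expensive SC computation
d_pb_ref = np.array([2.2, 1.7, 1.0, 0.0])  # PB dose for the same intensities

# Voxel-by-voxel ratio matrix R = D_SC / D_PB, computed infrequently
r = np.ones_like(d_pb_ref)
mask = d_pb_ref > 0
r[mask] = d_sc_ref[mask] / d_pb_ref[mask]

# During optimization, cheap PB doses are rescaled by the frozen R
d_pb_new = np.array([2.0, 1.6, 1.1, 0.0])  # PB dose after intensities change
d_corrected = d_pb_new * r                  # approximates the SC dose
```

In the actual scheme the ratio matrix is recomputed whenever the optimization has drifted far from the doses it was derived from, and the loop stops when both the intensities and R converge.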
Correction of Neonatal Hypovolemia
Directory of Open Access Journals (Sweden)
V. V. Moskalev
2007-01-01
Full Text Available Objective: to evaluate the efficiency of hydroxyethyl starch solution (6% refortane, Berlin-Chemie) versus fresh frozen plasma used to correct neonatal hypovolemia. Materials and methods. In 12 neonatal infants with hypocoagulation, hypovolemia was corrected with fresh frozen plasma (10 ml/kg body weight). In 13 neonates, it was corrected with 6% refortane infusion in a dose of 10 ml/kg. Doppler echocardiography was used to study central hemodynamic parameters and Doppler study was employed to examine regional blood flow in the anterior cerebral and renal arteries. Results. Infusion of 6% refortane and fresh frozen plasma at a rate of 10 ml/hour during an hour was found to normalize the parameters of central hemodynamics and regional blood flow. Conclusion. Comparative analysis of the findings suggests that 6% refortane is the drug of choice in correcting neonatal hypovolemia. Fresh frozen plasma should be infused in hemostatic disorders.
Full Text Available ... surgery. It is important to understand that your treatment, which will probably include orthodontics before and after ... to realistically estimate the time required for your treatment. Correction of Common Dentofacial Deformities The information provided ...
Full Text Available ... misalignment of jaws and teeth. Surgery can improve chewing, speaking and breathing. While the patient's appearance may ... indicate the need for corrective jaw surgery: Difficulty chewing, or biting food Difficulty swallowing Chronic jaw or ...
Full Text Available ... is performed by an oral and maxillofacial surgeon (OMS) to correct a wide range of minor and ... when sleeping, including snoring) Your dentist, orthodontist and OMS will work together to determine whether you are ...
ICT: isotope correction toolbox.
Jungreuthmayer, Christian; Neubauer, Stefan; Mairinger, Teresa; Zanghellini, Jürgen; Hann, Stephan
2016-01-01
Isotope tracer experiments are an invaluable technique to analyze and study the metabolism of biological systems. However, isotope labeling experiments are often affected by naturally abundant isotopes especially in cases where mass spectrometric methods make use of derivatization. The correction of these additive interferences--in particular for complex isotopic systems--is numerically challenging and still an emerging field of research. When positional information is generated via collision-induced dissociation, even more complex calculations for isotopic interference correction are necessary. So far, no freely available tools can handle tandem mass spectrometry data. We present isotope correction toolbox, a program that corrects tandem mass isotopomer data from tandem mass spectrometry experiments. Isotope correction toolbox is written in the multi-platform programming language Perl and, therefore, can be used on all commonly available computer platforms. Source code and documentation can be freely obtained under the Artistic License or the GNU General Public License from: https://github.com/jungreuc/isotope_correction_toolbox/ {christian.jungreuthmayer@boku.ac.at,juergen.zanghellini@boku.ac.at} Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Method of absorbance correction in a spectroscopic heating value sensor
Saveliev, Alexei; Jangale, Vilas Vyankatrao; Zelepouga, Sergeui; Pratapas, John
2013-09-17
A method and apparatus for absorbance correction in a spectroscopic heating value sensor in which a reference light intensity measurement is made on a non-absorbing reference fluid, a light intensity measurement is made on a sample fluid, and a measured light absorbance of the sample fluid is determined. A corrective light intensity measurement at a non-absorbing wavelength of the sample fluid is made on the sample fluid from which an absorbance correction factor is determined. The absorbance correction factor is then applied to the measured light absorbance of the sample fluid to arrive at a true or accurate absorbance for the sample fluid.
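A minimal sketch of the correction scheme just described, assuming a simple Beer-Lambert absorbance A = log10(I_ref/I_sample); the intensity values and function names are hypothetical, and the patent's actual signal processing is more involved:

```python
import math

def corrected_absorbance(i_ref, i_sample, i_ref_na, i_sample_na):
    """Measured absorbance A = log10(I_ref / I_sample) at the analytical
    wavelength; the same quantity at a wavelength where the sample does
    not absorb (suffix _na) captures non-absorption losses such as
    scattering or window fouling, and is subtracted as the correction."""
    a_measured = math.log10(i_ref / i_sample)
    a_correction = math.log10(i_ref_na / i_sample_na)
    return a_measured - a_correction
```

With a perfectly clean optical path the non-absorbing intensities match and the correction term vanishes, returning the raw absorbance unchanged.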
Geological Corrections in Gravimetry
Mikuška, J.; Marušiak, I.
2015-12-01
Applying corrections for the known geology to gravity data can be traced back to the first quarter of the 20th century. Later on, mostly in areas with sedimentary cover, at local and regional scales, the correction known as gravity stripping has been in use since the mid 1960s, provided that there was enough geological information. Stripping at regional to global scales became possible after the release of the CRUST 2.0 and later CRUST 1.0 models in the years 2000 and 2013, respectively. Especially the latter model provides quite a new view of the relevant geometries and of the topographic and crustal densities, as well as of the crust/mantle density contrast. Thus, the isostatic corrections, which have often been used in the past, can now be replaced by procedures working with independent information interpreted primarily from seismic studies. We have developed software for performing geological corrections in the space domain, based on a priori geometry and density grids which can be of either rectangular or spherical/ellipsoidal types with cells of the shapes of rectangles, tesseroids or triangles. It enables us to calculate the required gravitational effects not only in the form of surface maps or profiles but, for instance, also along vertical lines, which can shed some additional light on the nature of the geological correction. The software can work at a variety of scales and considers the input information to an optional distance from the calculation point up to the antipodes. Our main objective is to treat the geological correction as an alternative to accounting for the topography with varying densities, since the bottoms of the topographic masses, namely the geoid or ellipsoid, generally do not represent geological boundaries. We would also like to call attention to the possible distortions of the corrected gravity anomalies. This work was supported by the Slovak Research and Development Agency under the contract APVV-0827-12.
Accurate ab initio vibrational energies of methyl chloride
International Nuclear Information System (INIS)
Owens, Alec; Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter
2015-01-01
Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH₃³⁵Cl and CH₃³⁷Cl. The respective PESs, CBS-35^HL and CBS-37^HL, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY₃Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35^HL and CBS-37^HL PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm⁻¹, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH₃Cl without empirical refinement of the respective PESs.
Robust Active Label Correction
DEFF Research Database (Denmark)
Kremer, Jan; Sha, Fei; Igel, Christian
2018-01-01
Active label correction addresses the problem of learning from input data for which noisy labels are available (e.g., from imprecise measurements or crowd-sourcing) and each true label can be obtained at a significant cost (e.g., through additional measurements or human experts). To minimize...... To select labels for correction, we adopt the active learning strategy of maximizing the expected model change. We consider the change in regularized empirical risk functionals that use different pointwise loss functions for patterns with noisy and true labels, respectively. Different loss functions for the noisy data lead to different active label correction algorithms. If loss functions consider the label noise rates, these rates are estimated during learning, where importance weighting compensates for the sampling bias. We show empirically that viewing the true label as a latent variable and computing......
Generalised Batho correction factor
International Nuclear Information System (INIS)
Siddon, R.L.
1984-01-01
There are various approximate algorithms available to calculate the radiation dose in the presence of a heterogeneous medium. The Webb and Fox product over layers formulation of the generalised Batho correction factor requires determination of the number of layers and the layer densities for each ray path. It has been shown that the Webb and Fox expression is inefficient for the heterogeneous medium which is expressed as regions of inhomogeneity rather than layers. The inefficiency of the layer formulation is identified as the repeated problem of determining for each ray path which inhomogeneity region corresponds to a particular layer. It has been shown that the formulation of the Batho correction factor as a product over inhomogeneity regions avoids that topological problem entirely. The formulation in terms of a product over regions simplifies the computer code and reduces the time required to calculate the Batho correction factor for the general heterogeneous medium. (U.K.)
THE SECONDARY EXTINCTION CORRECTION
Energy Technology Data Exchange (ETDEWEB)
Zachariasen, W. H.
1963-03-15
It is shown that Darwin's formula for the secondary extinction correction, which has been universally accepted and extensively used, contains an appreciable error in the x-ray diffraction case. The correct formula is derived. As a first order correction for secondary extinction, Darwin showed that one should use an effective absorption coefficient mu + gQ, where an unpolarized incident beam is presumed. The new derivation shows that the effective absorption coefficient is mu + 2gQ(1 + cos^4 2theta)/(1 + cos^2 2theta)^2, which gives mu + gQ at theta = 0 deg and theta = 90 deg, but mu + 2gQ at theta = 45 deg. Darwin's theory remains valid when applied to neutron diffraction. (auth)
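The angular limits quoted in the abstract can be checked numerically; this sketch only evaluates the corrected coefficient, with g = Q = 1 and mu = 0 for illustration:

```python
import math

def effective_mu(mu, g, Q, theta_deg):
    """Zachariasen's corrected effective absorption coefficient for
    secondary extinction (x-ray case, unpolarized incident beam):
    mu_eff = mu + 2 g Q (1 + cos^4 2theta) / (1 + cos^2 2theta)^2."""
    c2 = math.cos(math.radians(2.0 * theta_deg))
    return mu + 2.0 * g * Q * (1.0 + c2**4) / (1.0 + c2**2) ** 2

# Limits quoted in the abstract (mu = 0, g = Q = 1 so the gQ term is visible)
print(effective_mu(0.0, 1.0, 1.0, 0.0))   # gQ  -> 1.0
print(effective_mu(0.0, 1.0, 1.0, 45.0))  # 2gQ -> 2.0
print(effective_mu(0.0, 1.0, 1.0, 90.0))  # gQ  -> 1.0
```

At theta = 0 and 90 deg the angular factor reduces to 1/2 and the expression recovers Darwin's mu + gQ, while at theta = 45 deg it doubles to mu + 2gQ, matching the limits stated above.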
Can cancer researchers accurately judge whether preclinical reports will reproduce?
Directory of Open Access Journals (Sweden)
Daniel Benjamin
2017-06-01
Full Text Available There is vigorous debate about the reproducibility of research findings in cancer biology. Whether scientists can accurately assess which experiments will reproduce original findings is important to determining the pace at which science self-corrects. We collected forecasts from basic and preclinical cancer researchers on the first 6 replication studies conducted by the Reproducibility Project: Cancer Biology (RP:CB) to assess the accuracy of expert judgments on specific replication outcomes. On average, researchers forecasted a 75% probability of replicating the statistical significance and a 50% probability of replicating the effect size, yet none of these studies successfully replicated on either criterion (for the 5 studies with results reported). Accuracy was related to expertise: experts with higher h-indices were more accurate, whereas experts with more topic-specific expertise were less accurate. Our findings suggest that experts, especially those with specialized knowledge, were overconfident about the RP:CB replicating individual experiments within published reports; researcher optimism likely reflects a combination of overestimating the validity of original studies and underestimating the difficulties of repeating their methodologies.
Accurate thermoelastic tensor and acoustic velocities of NaCl
Energy Technology Data Exchange (ETDEWEB)
Marcondes, Michel L., E-mail: michel@if.usp.br [Physics Institute, University of Sao Paulo, Sao Paulo, 05508-090 (Brazil); Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Shukla, Gaurav, E-mail: shukla@physics.umn.edu [School of Physics and Astronomy, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States); Silveira, Pedro da [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Wentzcovitch, Renata M., E-mail: wentz002@umn.edu [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States)
2015-12-15
Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.
International Nuclear Information System (INIS)
Tejera R, A.; Cortes P, A.; Becerril V, A.
1990-03-01
For the practical application of the method proposed by J. Bryant, the authors carried out a series of small corrections, related to the background, the dead time of the detectors and channels, the resolving time of the coincidences, the accidental coincidences, the decay scheme, and the gamma efficiency of the beta detector and the beta efficiency of the gamma detector. The calculation of the correction formula is presented in the development of the present report; 25 combinations of the probability of the first state of one disintegration and the second state of the following disintegration are presented. (Author)
Model Correction Factor Method
DEFF Research Database (Denmark)
Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes
1997-01-01
The model correction factor method is proposed as an alternative to traditional polynomial based response surface techniques in structural reliability considering a computationally time consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit...... of the model correction factor method, is that in simpler form not using gradient information on the original limit state function or only using this information once, a drastic reduction of the number of limit state evaluation is obtained together with good approximations on the reliability. Methods...
Equipment upgrade - Accurate positioning of ion chambers
International Nuclear Information System (INIS)
Doane, Harry J.; Nelson, George W.
1990-01-01
Five adjustable clamps were made to firmly support and accurately position the ion chambers that provide signals to the power channels for the University of Arizona TRIGA reactor. The design requirements, fabrication procedure and installation are described.
The corrections to scaling within Mazenko's theory in the limit of low ...
Indian Academy of Sciences (India)
functions'. In fact both the scaling functions and scaling exponents describe only the leading behaviour in the theory of scaling phenomena. There may be, and usually are, subdominant corrections, known as corrections to scaling. These corrections cannot be neglected in practice if more accurate values for exponents and ...
Correction procedures for C-14 dates
International Nuclear Information System (INIS)
McKerrell, H.
1975-01-01
There are two quite separate criteria to satisfy before accepting as valid the corrections to C-14 dates which have been indicated for some years now by the bristlecone pine calibration. Firstly the correction figures have to be based upon all the available tree-ring data and derived in a manner that is mathematically sound, and secondly the correction figures have to produce accurate results on C-14 dates from archaeological test samples of known historical date, these covering as wide a period as possible. Neither of these basic prerequisites has yet been fully met. Thus the two-fold purpose of this paper is to bring together, and to compare with an independently based procedure, the various correction curves or tables that have been published up to Spring 1974, as well as to detail the correction results on reliable, historically dated Egyptian, Helladic and Minoan test samples from 3100 B.C. The nomenclature followed is strictly that adopted by the primary dating journal Radiocarbon, all C-14 dates quoted thus relate to the 5568 year half-life and the standard AD/BC system. (author)
Peak-by-peak correction of Ge(Li) gamma-ray spectra for photopeaks from background
International Nuclear Information System (INIS)
Cutshall, N.H.; Larsen, I.L.
1980-01-01
Background photopeaks can interfere with accurate measurement of low levels of radionuclides by gamma-ray spectrometry. A flowchart for peak-by-peak correction of sample spectra to produce accurate results is presented. (orig.)
Attenuation correction for SPECT
International Nuclear Information System (INIS)
Hosoba, Minoru
1986-01-01
Attenuation correction is required for the reconstruction of a quantitative SPECT image. A new method for detecting body contours, which are important for the correction of tissue attenuation, is presented. The effect of body contours, detected by the newly developed method, on the reconstructed images was evaluated using various techniques for attenuation correction. The count rates in the specified region of interest in the phantom image by the Radial Post Correction (RPC) method, the Weighted Back Projection (WBP) method, and Chang's method were strongly affected by the accuracy of the contours, as compared to those by Sorenson's method. To evaluate the effect of non-uniform attenuators on the cardiac SPECT, computer simulation experiments were performed using two types of models, the uniform attenuator model (UAM) and the non-uniform attenuator model (NUAM). The RPC method showed the lowest relative percent error (%ERROR) in UAM (11 %). However, a 20 to 30 percent increase in %ERROR was observed for NUAM reconstructed with the RPC, WBP, and Chang's methods. Introducing an average attenuation coefficient (0.12/cm for Tc-99m and 0.14/cm for Tl-201) in the RPC method decreased %ERROR to the levels for UAM. Finally, a comparison between images, which were obtained by 180 deg and 360 deg scans and reconstructed from the RPC method, showed that the degree of the distortion of the contour of the simulated ventricles in the 180 deg scan was 15 % higher than that in the 360 deg scan. (Namekawa, K.)
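Of the techniques compared above, the first-order Chang correction for a uniform attenuator is the simplest to sketch. The grid, attenuation coefficient, and path lengths below are hypothetical; a real implementation derives the per-angle path lengths from the detected body contour:

```python
import numpy as np

def chang_first_order(image, mu, path_lengths):
    """First-order Chang attenuation correction for a uniform attenuator:
    each reconstructed pixel is multiplied by C = M / sum_i exp(-mu * l_i),
    the inverse of the attenuation factor averaged over the M projection
    angles, where l_i is the path length from the pixel to the body
    contour at angle i."""
    atten = np.exp(-mu * path_lengths)               # shape (angles, ny, nx)
    correction = path_lengths.shape[0] / atten.sum(axis=0)
    return image * correction

# Hypothetical single-pixel example: 4 angles, 5 cm path each,
# mu = 0.12/cm (the Tc-99m value quoted in the abstract)
image = np.ones((1, 1))
paths = np.full((4, 1, 1), 5.0)
corrected = chang_first_order(image, 0.12, paths)
```

When all path lengths are equal the correction collapses to exp(mu * l), so the single-pixel example above is simply scaled by exp(0.6).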
Text Induced Spelling Correction
Reynaert, M.W.C.
2004-01-01
We present TISC, a language-independent and context-sensitive spelling checking and correction system designed to facilitate the automatic removal of non-word spelling errors in large corpora. Its lexicon is derived from a very large corpus of raw text, without supervision, and contains word
International Nuclear Information System (INIS)
Duchene, G.; Moszynski, M.; Curien, D.
1991-01-01
The EUROGAM data-acquisition has to handle a large number of events/s. Typical in-beam experiments using heavy-ion fusion reactions assume the production of about 50 000 compound nuclei per second deexciting via particle and γ-ray emissions. The very powerful γ-ray detection of EUROGAM is expected to produce high-fold event rates as large as 10⁴ events/s. Such high count rates introduce, in a common dead time mode, large dead times for the whole system associated with the processing of the pulse, its digitization and its readout (from the preamplifier pulse up to the readout of the information). In order to minimize the dead time the shaping time constant τ, usually about 3 μs for large volume Ge detectors, has to be reduced. Smaller shaping times, however, will adversely affect the energy resolution due to ballistic deficit. One possible solution is to operate the linear amplifier, with a somewhat smaller shaping time constant (in the present case we choose τ = 1.5 μs), in combination with a ballistic deficit compensator. The ballistic deficit can be corrected in different ways using a Gated Integrator, a hardware correction or even a software correction. In this paper we present a comparative study of the software and hardware corrections as well as gated integration.
Correctness of concurrent processes
E.R. Olderog (Ernst-Rüdiger)
1989-01-01
textabstractA new notion of correctness for concurrent processes is introduced and investigated. It is a relationship P sat S between process terms P built up from operators of CCS [Mi 80], CSP [Ho 85] and COSY [LTS 79] and logical formulas S specifying sets of finite communication sequences as in
Error Correcting Codes -34 ...
Indian Academy of Sciences (India)
information and coding theory. A large scale relay computer had failed to deliver the expected results due to a hardware fault. Hamming, one of the active proponents of computer usage, was determined to find an efficient means by which computers could detect and correct their own faults. A mathematician by training...
Full Text Available ... their surgery, orthognathic surgery is performed to correct functional problems. Jaw Surgery can have a dramatic effect on many aspects of life. Following are some of the conditions that may ... front, or side Facial injury Birth defects Receding lower jaw and ...
Indian Academy of Sciences (India)
successful consumer products of all time - the Compact Disc (CD) digital audio .... We can make ... only 2t additional parity check symbols are required, to be able to correct t .... display information (containing music related data and a table.
Indian Academy of Sciences (India)
Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 3; Issue 4. Algorithms - Correctness of Programs. R K Shyamasundar. Series Article Volume 3 ... Author Affiliations. R K Shyamasundar1. Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.
International Nuclear Information System (INIS)
Liu Quanwei; Luo Zhongyan; Zhu Haiqiao; Wu Jizong
2007-01-01
Because of the high radioactivity of dissolved spent fuel solution and of uranium product solution, the radiation hazard must be considered and reduced as much as possible during accurate determination of uranium. In this work automatic potentiometric titration was applied, and a sample containing only 10 mg of uranium was taken in order to reduce the radiation exposure of the analyzer. RSD < 0.06%; at the same time, the result can be corrected for a more reliable and accurate measurement. The determination method effectively reduces the radiation exposure of the analyzer and meets the requirement of reliable, accurate measurement of uranium. (authors)
More accurate thermal neutron coincidence counting technique
International Nuclear Information System (INIS)
Baron, N.
1978-01-01
Using passive thermal neutron coincidence counting techniques, the accuracy of nondestructive assays of fertile material can be improved significantly using a two-ring detector. It was shown how the use of a function of the coincidence count rate ring-ratio can provide a detector response rate that is independent of variations in neutron detection efficiency caused by varying sample moderation. Furthermore, the correction for multiplication caused by SF- and (α,n)-neutrons is shown to be separable into the product of a function of the effective mass of ²⁴⁰Pu (plutonium correction) and a function of the (α,n) reaction probability (matrix correction). The matrix correction is described by a function of the singles count rate ring-ratio. This correction factor is empirically observed to be identical for any combination of PuO₂ powder and matrix materials SiO₂ and MgO because of the similar relation of the (α,n)-Q value and (α,n)-reaction cross section among these matrix nuclei. However the matrix correction expression is expected to be different for matrix materials such as Na, Al, and/or Li. Nevertheless, it should be recognized that for comparison measurements among samples of similar matrix content, it is expected that some function of the singles count rate ring-ratio can be defined to account for variations in the matrix correction due to differences in the intimacy of mixture among the samples. Furthermore the magnitude of this singles count rate ring-ratio serves to identify the contaminant generating the (α,n)-neutrons. Such information is useful in process control.
Nagib, Hassan; Vinuesa, Ricardo
2013-11-01
The ability of available Pitot tube corrections to provide accurate mean velocity profiles in ZPG boundary layers is re-examined following the recent work by Bailey et al. Measurements by Bailey et al., carried out with probes of diameters ranging from 0.2 to 1.89 mm, together with new data taken with larger diameters up to 12.82 mm, show deviations with respect to available high-quality datasets and hot-wire measurements in the same Reynolds number range. These deviations are significant in the buffer region around y+ = 30-40 and lead to disagreement in the von Kármán coefficient κ extracted from the profiles. New forms for the shear, near-wall and turbulence corrections are proposed, highlighting the importance of the last of these. Improved agreement in mean velocity profiles is obtained with the new forms, in which the shear and near-wall corrections contribute around 85% of the total correction and the turbulence correction the remaining 15%. Finally, available algorithms to correct the wall position in profile measurements of wall-bounded flows are tested, using as benchmark the corrected Pitot measurements with artificially simulated probe shifts and blockage effects. We develop a new scheme, κB-Musker, which is able to accurately locate the wall position.
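For context, the classical turbulence correction for a Pitot probe can be sketched as follows. This is the textbook form, not necessarily the new correction proposed in this work: in turbulent flow the probe senses roughly the mean of the squared velocity, so the indicated velocity overshoots the true mean.

```python
import math

# Classical (textbook) turbulence correction for a Pitot probe:
# u_indicated^2 ~ u_true^2 + u_rms^2, so solve for u_true.
# This is NOT the specific new correction proposed in the paper.

def turbulence_corrected(u_indicated, u_rms):
    return math.sqrt(u_indicated**2 - u_rms**2)

# 10% turbulence intensity at an indicated 10 m/s:
print(turbulence_corrected(10.0, 1.0))  # ~9.95 m/s
```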
DNA barcode data accurately assign higher spider taxa
Directory of Open Access Journals (Sweden)
Jonathan A. Coddington
2016-07-01
The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best-case scenarios "barcodes" (whether single or multiple, organelle or nuclear loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families: taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus- and family-level assignment. We used BLAST queries of each sequence against the entire library and retained the top ten hits, from which the percent sequence identity was reported (PIdent, range 75-100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and for families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with the number of species per genus and genera per family in the library; above five genera per family and fifteen species per genus, all higher-taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However
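The reported thresholds lend themselves to a simple decision rule. The sketch below assumes the top-hit PIdent has already been obtained from a BLAST query; the function name and labels are illustrative, not part of the study.

```python
# Decision rule from the abstract's heuristic thresholds for spiders:
# assign a genus when PIdent > 95, a family when PIdent >= 91,
# otherwise leave the higher taxon unassigned.

def assign_rank(pident):
    if pident > 95:
        return "genus"
    if pident >= 91:
        return "family"
    return "unassigned"

print(assign_rank(97.2))  # genus
print(assign_rank(93.0))  # family
print(assign_rank(88.5))  # unassigned
```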
Corrected transposition of the great arteries
Energy Technology Data Exchange (ETDEWEB)
Choi, Young Hi; Park, Jae Hyung; Han, Man Chung [Seoul National University College of Medicine, Seoul (Korea, Republic of)
1981-12-15
The corrected transposition of the great arteries is an unusual congenital cardiac malformation, which consists of transposition of the great arteries and ventricular inversion, and which is caused by abnormal development of the conotruncus and ventricular looping. The high frequency of associated cardiac malformations makes it difficult to obtain an accurate morphologic diagnosis. A total of 18 cases of corrected transposition of the great arteries is presented, in which cardiac catheterization and angiocardiography were done at the Department of Radiology, Seoul National University Hospital between September 1976 and June 1981. The clinical, radiographic, and operative findings, with emphasis on the angiocardiographic findings, were analyzed. The results are as follows: 1. Among 18 cases, 13 cases have normal cardiac position, 2 cases have dextrocardia with situs solitus, 2 cases have dextrocardia with situs inversus and 1 case has levocardia with situs inversus. 2. Segmental sets are (S, L, L) in 15 cases and (I, D, D) in 3 cases, and there is no exception to the loop rule. 3. Side-by-side interrelationships of both ventricles and both semilunar valves are noted in 10 and 12 cases, respectively. 4. A subaortic-type conus is noted in all 18 cases. 5. Associated cardiac malformations are VSD in 14 cases, PS in 11, PDA in 3, PFO in 3, ASD in 2, right aortic arch in 2, and tricuspid insufficiency, mitral prolapse, persistent left SVC and persistent right SVC in 1 case each. 6. For accurate diagnosis of corrected TGA, selective biventriculography using biplane cineradiography is an essential procedure.
Geometric correction of APEX hyperspectral data
Directory of Open Access Journals (Sweden)
Vreys Kristin
2016-03-01
Hyperspectral imagery originating from airborne sensors is nowadays widely used for the detailed characterization of the land surface. The correct mapping of pixel positions to ground locations contributes largely to the success of the applications. Accurate geometric correction, also referred to as "orthorectification", is thus an important prerequisite which must be performed prior to using airborne imagery for evaluations like change detection, or for mapping or overlaying the imagery with existing data sets or maps. A so-called "ortho-image" provides an accurate representation of the earth's surface, having been adjusted for lens distortions, camera tilt and topographic relief. In this paper, we describe the different steps in the geometric correction process of APEX hyperspectral data, as applied in the Central Data Processing Center (CDPC) at the Flemish Institute for Technological Research (VITO, Mol, Belgium). APEX ortho-images are generated through direct georeferencing of the raw images, thereby making use of sensor interior and exterior orientation data, boresight calibration data and elevation data. They can be referenced to any user-specified output projection system and can be resampled to any output pixel size.
Precise and accurate train run data: Approximation of actual arrival and departure times
DEFF Research Database (Denmark)
Richter, Troels; Landex, Alex; Andersen, Jonas Lohmann Elkjær
with the approximated actual arrival and departure times. As a result, all future statistics can now either be based on track circuit data with high precision or approximated actual arrival times with a high accuracy. Consequently, performance analysis will be more accurate, punctuality statistics more correct, KPI...
PET motion correction using PRESTO with ITK motion estimation
Energy Technology Data Exchange (ETDEWEB)
Botelho, Melissa [Institute of Biophysics and Biomedical Engineering, Science Faculty of University of Lisbon (Portugal); Caldeira, Liliana; Scheins, Juergen [Institute of Neuroscience and Medicine (INM-4), Forschungszentrum Jülich (Germany); Matela, Nuno [Institute of Biophysics and Biomedical Engineering, Science Faculty of University of Lisbon (Portugal); Kops, Elena Rota; Shah, N Jon [Institute of Neuroscience and Medicine (INM-4), Forschungszentrum Jülich (Germany)
2014-07-29
The Siemens BrainPET scanner is a hybrid MRI/PET system. PET images are prone to motion artefacts which degrade the image quality; therefore, motion correction is essential. The library PRESTO converts motion-corrected LORs into highly accurate generic projection data [1], providing high-resolution PET images. ITK is open-source software used for registering multidimensional data. ITK provides the motion estimation necessary for PRESTO.
Correction of refractive errors
Directory of Open Access Journals (Sweden)
Vladimir Pfeifer
2005-10-01
Background: Spectacles and contact lenses are the most frequently used, the safest and the cheapest way to correct refractive errors. The development of keratorefractive surgery has brought new opportunities for the correction of refractive errors in patients who need to be less dependent on spectacles or contact lenses. Until recently, RK was the most commonly performed refractive procedure for nearsighted patients. Conclusions: The introduction of the excimer laser in refractive surgery has given new opportunities for remodelling the cornea. The laser energy can be delivered on the stromal surface, as in PRK, or deeper in the corneal stroma by means of lamellar surgery. In LASIK the flap is created with a microkeratome, in LASEK with ethanol, and in epi-LASIK the ultra-thin flap is created mechanically.
Chanel, M; Rumolo, G; Tomás, R; CERN. Geneva. AB Department
2008-01-01
At the end of the 2007 run, orbit measurements were carried out in the 4 rings of the PS Booster (PSB) for different working points and beam energies. The aim of these measurements was to provide the necessary input data for a PSB realignment campaign during the 2007/2008 shutdown. Currently, only very few corrector magnets can be operated reliably in the PSB; therefore the orbit correction has to be achieved by displacing (horizontally and vertically) and/or tilting some of the defocusing quadrupoles (QDs). In this report we first describe the orbit measurements, followed by a detailed explanation of the orbit correction strategy. Results and conclusions are presented in the last section.
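A correction strategy of this kind, finding mover settings that best cancel a measured orbit, reduces in its simplest form to a linear least-squares problem. The sketch below uses a synthetic response matrix and synthetic misalignments, not PSB optics.

```python
import numpy as np

# Hedged sketch: generic closed-orbit correction by least squares.
# The PSB uses quadrupole displacements/tilts instead of dedicated
# correctors, but the mathematics is the same: find mover settings x
# minimizing ||R x + orbit||, where R is the (modelled or measured)
# orbit response matrix. All numbers here are synthetic.

rng = np.random.default_rng(0)
n_monitors, n_movers = 8, 3
R = rng.normal(size=(n_monitors, n_movers))   # response matrix
x_true = np.array([0.4, -0.2, 0.1])           # hidden misalignments [mm]
orbit = R @ x_true                            # measured orbit distortion

# least-squares corrective settings
x, *_ = np.linalg.lstsq(R, -orbit, rcond=None)
residual_orbit = orbit + R @ x
print(bool(np.max(np.abs(residual_orbit)) < 1e-10))  # True: orbit cancelled
```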
Hinds, Erold W. (Principal Investigator)
1996-01-01
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
Fast and accurate determination of modularity and its effect size
International Nuclear Information System (INIS)
Treviño, Santiago III; Nyberg, Amy; Bassler, Kevin E; Del Genio, Charo I
2015-01-01
We present a fast spectral algorithm for community detection in complex networks. Our method searches for the partition with the maximum value of the modularity via the interplay of several refinement steps that include both agglomeration and division. We validate the accuracy of the algorithm by applying it to several real-world benchmark networks. On all of these, our algorithm performs as well as or better than any other known polynomial scheme. This allows us to extensively study the modularity distribution in ensembles of Erdős–Rényi networks, producing theoretical predictions for means and variances inclusive of finite-size corrections. Our work provides a way to accurately estimate the effect size of modularity, providing a z-score measure of it and enabling a more informative comparison of networks with different numbers of nodes and links. (paper)
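For reference, the modularity being maximized is the standard Newman-Girvan Q. A minimal from-scratch evaluation on a toy graph (this is the objective, not the paper's spectral algorithm) looks like:

```python
# Newman-Girvan modularity Q of a given partition, evaluated from
# scratch on a toy graph. The effect size the abstract describes is
# then z = (Q - mean(Q_ER)) / std(Q_ER) over an Erdos-Renyi ensemble
# with the same numbers of nodes and links (ensemble loop omitted).

def modularity(n, edges, comm):
    m = len(edges)
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # fraction of edges inside communities
    q = sum(1.0 / m for u, v in edges if comm[u] == comm[v])
    # minus the expected within-community fraction (configuration model)
    for i in range(n):
        for j in range(n):
            if comm[i] == comm[j]:
                q -= deg[i] * deg[j] / (4.0 * m * m)
    return q

# two triangles joined by a single bridge edge, split into the triangles
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
comm = [0, 0, 0, 1, 1, 1]
print(round(modularity(6, edges, comm), 3))  # 0.357
```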
Accurate fluid force measurement based on control surface integration
Lentink, David
2018-01-01
Nonintrusive 3D fluid force measurements are still challenging to conduct accurately for freely moving animals, vehicles, and deforming objects. Two techniques, 3D particle image velocimetry (PIV) and a new technique, the aerodynamic force platform (AFP), address this. Both rely on the control volume integral for momentum; whereas PIV requires numerical integration of flow fields, the AFP performs the integration mechanically based on rigid walls that form the control surface. The accuracy of both PIV and AFP measurements based on the control surface integration is thought to hinge on determining the unsteady body force associated with the acceleration of the volume of displaced fluid. Here, I introduce a set of non-dimensional error ratios to show which fluid and body parameters make the error negligible. The unsteady body force is insignificant in all conditions where the average density of the body is much greater than the density of the fluid, e.g., in gas. Whenever a strongly deforming body experiences significant buoyancy and acceleration, the error is significant. Remarkably, this error can be entirely corrected for with an exact factor provided that the body has a sufficiently homogenous density or acceleration distribution, which is common in liquids. The correction factor for omitting the unsteady body force, 1 - ρf/(ρb + ρf), depends only on the fluid density, ρf, and the body density, ρb. Whereas these straightforward solutions work even at the liquid-gas interface in a significant number of cases, they do not work for generalized bodies undergoing buoyancy in combination with appreciable body density inhomogeneity, volume change (PIV), or volume rate-of-change (PIV and AFP). In these less common cases, the 3D body shape needs to be measured and resolved in time and space to estimate the unsteady body force. The analysis shows that accounting for the unsteady body force is straightforward to non
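The density regimes discussed above can be checked numerically, assuming the correction factor has the form 1 - ρf/(ρb + ρf). This is our reading of the garbled printed formula; consult the paper for the exact expression.

```python
# Numerical check of the density regimes for the unsteady-body-force
# correction factor, ASSUMING it reads 1 - rho_f / (rho_b + rho_f)
# (a reconstruction, not verified against the paper).

def unsteady_body_force_factor(rho_f, rho_b):
    return 1.0 - rho_f / (rho_b + rho_f)

# body much denser than the fluid (e.g. in air): error negligible
print(unsteady_body_force_factor(1.2, 1000.0))    # ~0.9988
# near-neutrally buoyant body in water: correction is large
print(unsteady_body_force_factor(1000.0, 1050.0)) # ~0.512
```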
More accurate picture of human body organs
International Nuclear Information System (INIS)
Kolar, J.
1985-01-01
Computerized tomography and nuclear magnetic resonance tomography (NMRT) are revolutionary contributions to radiodiagnosis because they make it possible to obtain a more accurate image of human body organs. The principles of both methods are described. Attention is mainly devoted to NMRT, which has been in clinical use for only three years. It does not burden the organism with ionizing radiation. (Ha)
Accurate overlaying for mobile augmented reality
Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.
1999-01-01
Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency
Accurate activity recognition in a home setting
van Kasteren, T.; Noulas, A.; Englebienne, G.; Kröse, B.
2008-01-01
A sensor system capable of automatically recognizing activities would allow many potential ubiquitous applications. In this paper, we present an easy to install sensor network and an accurate but inexpensive annotation method. A recorded dataset consisting of 28 days of sensor data and its
Exemplar-based human action pose correction.
Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen
2014-07-01
The launch of Xbox Kinect has built a very successful computer vision product and made a big impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition. The accurate estimation of human poses from the depth image is universally a critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporation of pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over the contemporary approaches, including what is delivered by the current Kinect system. Our experiments on facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.
A precise technique for manufacturing correction coil
International Nuclear Information System (INIS)
Schieber, L.
1992-01-01
An automated method of manufacturing correction coils has been developed which provides a precise embodiment of the coil design. Numerically controlled machines have been developed to accurately position coil windings on the beam tube. Two types of machines have been built. One machine bonds the wire to a substrate which is wrapped around the beam tube after it is completed, while the second machine bonds the wire directly to the beam tube. Both machines use the Multiwire® technique of bonding the wire to the substrate utilizing an ultrasonic stylus. These machines are being used to manufacture coils for both the SSC and RHIC.
Accurate phylogenetic classification of DNA fragments based onsequence composition
Energy Technology Data Exchange (ETDEWEB)
McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore
2006-05-01
Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome datasets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
A practical method for accurate quantification of large fault trees
International Nuclear Information System (INIS)
Choi, Jong Soo; Cho, Nam Zin
2007-01-01
This paper describes a practical method to accurately quantify top event probability and importance measures from incomplete minimal cut sets (MCS) of a large fault tree. The MCS-based fault tree method is extensively used in probabilistic safety assessments. Several sources of uncertainties exist in MCS-based fault tree analysis. The paper is focused on quantification of the following two sources of uncertainties: (1) the truncation neglecting low-probability cut sets and (2) the approximation in quantifying MCSs. The method proposed in this paper is based on a Monte Carlo simulation technique to estimate probability of the discarded MCSs and the sum of disjoint products (SDP) approach complemented by the correction factor approach (CFA). The method provides capability to accurately quantify the two uncertainties and estimate the top event probability and importance measures of large coherent fault trees. The proposed fault tree quantification method has been implemented in the CUTREE code package and is tested on the two example fault trees
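For context, the two textbook quantification routes that the correction-factor approach mediates between can be sketched on a toy set of minimal cut sets. The basic-event probabilities and cut sets below are illustrative, and the paper's Monte Carlo estimate of the truncated-MCS contribution is omitted.

```python
import itertools

# Hedged sketch: quantifying a top event from minimal cut sets (MCS)
# with (a) the rare-event approximation and (b) exact inclusion-
# exclusion, which here plays the role of the sum-of-disjoint-products
# result. Basic events are assumed independent.

def cut_prob(cut, p):
    prob = 1.0
    for event in cut:
        prob *= p[event]
    return prob

def rare_event(mcs, p):
    # simple sum of cut-set probabilities (slightly conservative)
    return sum(cut_prob(cut, p) for cut in mcs)

def inclusion_exclusion(mcs, p):
    # exact top-event probability for independent basic events
    total = 0.0
    for k in range(1, len(mcs) + 1):
        for combo in itertools.combinations(mcs, k):
            union = set().union(*combo)
            total += (-1) ** (k + 1) * cut_prob(union, p)
    return total

p = {"A": 1e-3, "B": 2e-3, "C": 5e-3}       # basic-event probabilities
mcs = [("A", "B"), ("A", "C"), ("B", "C")]  # 2-out-of-3 failure logic
print(rare_event(mcs, p))           # ~1.700e-05
print(inclusion_exclusion(mcs, p))  # ~1.698e-05, never above the first
```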
Quality metric for accurate overlay control in <20nm nodes
Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki
2013-04-01
The semiconductor industry is moving toward 20nm nodes and below. As the overlay (OVL) budget gets tighter at these advanced nodes, accuracy in each nanometer of OVL error is critical. When process owners select OVL targets and methods for their process, they must do so wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Going towards the 20nm nodes and below, TMU will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named 'Qmerit' for its imaging-based OVL (IBO) targets, which is obtained on-the-fly for each OVL measurement point in X and Y. This Qmerit score will enable process owners to select compatible targets which provide accurate OVL values for their process and thereby improve their yield. Together with K-T Analyzer's ability to detect the symmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.
DEFF Research Database (Denmark)
Jensen, Rasmus Ramsbøl; Benjaminsen, Claus; Larsen, Rasmus
2015-01-01
The application of motion tracking is wide, including: industrial production lines, motion interaction in gaming, computer-aided surgery and motion correction in medical brain imaging. Several devices for motion tracking exist using a variety of different methodologies. In order to use such devices...... offset and tracking noise in medical brain imaging. The data are generated from a phantom mounted on a rotary stage and have been collected using a Siemens High Resolution Research Tomograph for positron emission tomography. During acquisition the phantom was tracked with our latest tracking prototype...
Accurate guitar tuning by cochlear implant musicians.
Directory of Open Access Journals (Sweden)
Thomas Lu
Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show the unexpected result that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with his CI than with his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ~30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between the CI and normal-hearing ears but with less precision. This difference, along with electrical analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.
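The beat cue credited above is temporal rather than spectral: two tones close in frequency produce an amplitude envelope at the difference frequency, which can be counted even with poor pitch resolution. A minimal illustration (the frequencies are illustrative, not the study's stimuli):

```python
# Two nearly equal tones beat at |f1 - f2|: a temporal cue that
# survives the coarse spectral resolution of a cochlear implant.

def beat_frequency(f1, f2):
    return abs(f1 - f2)

# Guitar A string (110 Hz) against a slightly sharp 110.4 Hz reference:
print(round(beat_frequency(110.0, 110.4), 3))  # 0.4 -> one beat per 2.5 s
```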
Accurate estimation of indoor travel times
DEFF Research Database (Denmark)
Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan
2014-01-01
The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose...... the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood---both for routes traveled as well as for sub-routes thereof. InTraTime...... allows to specify temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include...
On accurate determination of contact angle
Concus, P.; Finn, R.
1992-01-01
Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.
Software Estimation: Developing an Accurate, Reliable Method
2011-08-01
based and size-based estimates is able to accurately plan, launch, and execute on schedule. Bob Sinclair, NAWCWD; Chris Rickets, NAWCWD; Brad Hodgins...Office by Carnegie Mellon University. SMPSP and SMTSP are service marks of Carnegie Mellon University. 1. Rickets, Chris A, "A TSP Software Maintenance...Life Cycle", CrossTalk, March, 2005. 2. Koch, Alan S, "TSP Can Be the Building Blocks for CMMI", CrossTalk, March, 2005. 3. Hodgins, Brad, Rickets
Highly Accurate Prediction of Jobs Runtime Classes
Reiner-Benaim, Anat; Grabarnick, Anna; Shmueli, Edi
2016-01-01
Separating the short jobs from the long is a known technique to improve scheduling performance. In this paper we describe a method we developed for accurately predicting the runtimes classes of the jobs to enable this separation. Our method uses the fact that the runtimes can be represented as a mixture of overlapping Gaussian distributions, in order to train a CART classifier to provide the prediction. The threshold that separates the short jobs from the long jobs is determined during the ev...
Accurate multiplicity scaling in isotopically conjugate reactions
International Nuclear Information System (INIS)
Golokhvastov, A.I.
1989-01-01
The generation of accurate scaling of multiplicity distributions is presented. The distributions of π⁻ mesons (negative particles) and π⁺ mesons in different nucleon-nucleon interactions (PP, NP and NN) are described by the same universal function Ψ(z) and the same energy dependence of the scale parameter, which determines the stretching factor applied to the unit function Ψ(z) to obtain the desired multiplicity distribution. 29 refs.; 6 figs
Mental models accurately predict emotion transitions.
Thornton, Mark A; Tamir, Diana I
2017-06-06
Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
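A mental model of emotion dynamics of the kind tested above can be caricatured as a first-order Markov transition model fitted to an experience-sampling sequence and used to predict the most likely next emotion. The emotion labels and toy sequence below are invented for illustration, not the study's data.

```python
from collections import Counter, defaultdict

# Toy first-order Markov model of emotion transitions: estimate
# transition probabilities from a sequence of sampled emotions, then
# predict the most probable successor of the current emotion.

def fit_transitions(seq):
    counts = defaultdict(Counter)
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def predict_next(model, current):
    # most probable successor emotion
    return max(model[current], key=model[current].get)

seq = ["calm", "calm", "happy", "calm", "anxious", "calm", "happy", "happy"]
model = fit_transitions(seq)
print(predict_next(model, "calm"))  # happy
```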
12 CFR 621.14 - Certification of correctness.
2010-01-01
... REQUIREMENTS Report of Condition and Performance § 621.14 Certification of correctness. Each report of financial condition and performance filed with the Farm Credit Administration shall be certified as having... accurate representation of the financial condition and performance of the institution to which it applies...
RCRA corrective action and closure
International Nuclear Information System (INIS)
1995-02-01
This information brief explains how RCRA corrective action and closure processes affect one another. It examines the similarities and differences between corrective action and closure, regulators' interests in RCRA facilities undergoing closure, and how the need to perform corrective action affects the closure of DOE's permitted facilities and interim status facilities
Hepatitis C and the correctional population.
Reindollar, R W
1999-12-27
The hepatitis C epidemic has extended well into the correctional population where individuals predominantly originate from high-risk environments and have high-risk behaviors. Epidemiologic data estimate that 30% to 40% of the 1.8 million inmates in the United States are infected with the hepatitis C virus (HCV), the majority of whom were infected before incarceration. As in the general population, injection drug use accounts for the majority of HCV infections in this group--one to two thirds of inmates have a history of injection drug use before incarceration and continue to do so while in prison. Although correctional facilities also represent a high-risk environment for HCV infection because of a continued high incidence of drug use and high-risk sexual activities, available data indicate a low HCV seroconversion rate of 1.1 per 100 person-years in prison. Moreover, a high annual turnover rate means that many inmates return to their previous high-risk environments and behaviors that are conducive either to acquiring or spreading HCV. Despite a very high prevalence of HCV infection within the US correctional system, identification and treatment of at-risk individuals is inconsistent, at best. Variable access to correctional health-care resources, limited funding, high inmate turnover rates, and deficient follow-up care after release represent a few of the factors that confound HCV control and prevention in this group. Future efforts must focus on establishing an accurate knowledge base and implementing education, policies, and procedures for the prevention and treatment of hepatitis C in correctional populations.
Accurately Assessing Lines on the Aging Face.
Renton, Kim; Keefe, Kathy Young
The ongoing positive aging trend has resulted in many research studies being conducted to determine the characteristics of aging and what steps we can take to prevent the extrinsic signs of aging. Much of this attention has been focused on the prevention and treatment of facial wrinkles. To treat or prevent facial wrinkles correctly, their cause first needs to be determined. Very compelling evidence has been published that the development of wrinkles is complex and is caused by more factors than just the combination of poor lifestyle choices.
Robust and accurate vectorization of line drawings.
Hilaire, Xavier; Tombre, Karl
2006-06-01
This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.
The first accurate description of an aurora
Schröder, Wilfried
2006-12-01
As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.
Accurate Charge Densities from Powder Diffraction
DEFF Research Database (Denmark)
Bindzus, Niels; Wahlberg, Nanna; Becker, Jacob
Synchrotron powder X-ray diffraction has in recent years advanced to a level where it has become realistic to probe extremely subtle electronic features. Compared to single-crystal diffraction, it may be superior for simple, high-symmetry crystals owing to negligible extinction effects and minimal peak overlap. Additionally, it offers the opportunity for collecting data on a single scale. For charge density studies, the critical task is to recover accurate and bias-free structure factors from the diffraction pattern. This is the focal point of the present study, scrutinizing the performance...
Arbitrarily accurate twin composite π-pulse sequences
Torosov, Boyan T.; Vitanov, Nikolay V.
2018-04-01
We present three classes of symmetric broadband composite pulse sequences. The composite phases are given by analytic formulas (rational fractions of π) valid for any number of constituent pulses. The transition probability is expressed by simple analytic formulas, and the order of pulse area error compensation grows linearly with the number of pulses. Therefore, any desired compensation order can be produced by an appropriate composite sequence; in this sense, they are arbitrarily accurate. These composite pulses perform equally well as or better than previously published ones. Moreover, the current sequences are more flexible as they allow total pulse areas of arbitrary integer multiples of π.
Systematization of Accurate Discrete Optimization Methods
Directory of Open Access Journals (Sweden)
V. A. Ovchinnikov
2015-01-01
Full Text Available The object of study of this paper is accurate methods for solving combinatorial optimization problems of structural synthesis. The aim of the work is to systematize the exact methods of discrete optimization and define their applicability to solving practical problems. The article presents the analysis, generalization and systematization of classical methods and algorithms described in the educational and scientific literature. As a result of this research, a systematic presentation of combinatorial methods for discrete optimization described in various sources is given, their capabilities are described, and the properties of the tasks to be solved using the appropriate methods are specified.
Optimal estimation of ship's attitudes for beampattern corrections in a coaxial circular array
Digital Repository Service at National Institute of Oceanography (India)
Chakraborty, B.; Dev, K.K.
A study is conducted to accurately estimate the attitude of a ship's motion, and the estimate is used to arrive at the corrections required for the farfield pattern of a coaxial circular array. The relevant analytical expression is developed...
The effect of regular corrective exercise on musculoskeletal deformities in Khorramabad school girls
Directory of Open Access Journals (Sweden)
bahman hasanvand
2011-06-01
Conclusion: The findings of this study point to reliable, accurate, feasible, and easy methods for decreasing abnormalities. Furthermore, they show that corrective exercise programs can reduce these abnormalities in old age.
How accurate is the 14C method
International Nuclear Information System (INIS)
Nydal, R.
1979-01-01
Radiocarbon daters have in recent years focussed their interest on accuracy and reliability of 14C dates. The use of dates for resolving fine chronological structures that are not dateable otherwise has stressed this point. The total uncertainty in dating an event is composed of errors relating to dating of the sample, i.e. uncertainty in measured quantities, deviations from assumed content of 14C in material when alive; and errors related to quality of sample material, i.e. contamination from carbon of different age, diffuse context between sample and event. Statistical variability in counting of 14C activity gives the most important contribution to measurement uncertainty - increasing with age and shortage of sample material. Corrections for isotopic fractionation and reservoir effects must be performed, and - most important when dates are compared with historical ages - the dendrochronological calibration will correct for past variations in the atmospheric 14C content. Future improvement of dating precision can however only be obtained by the combined efforts of both daters and submitters of samples, thus minimizing errors related to selection and handling of sample material as well as those related to the 14C method and measurements. (Auth.)
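The statement above that counting statistics dominate the measurement uncertainty, growing with age and shortage of material, can be sketched numerically. The snippet below is an illustrative calculation only (the function and inputs are invented for the example); it uses the conventional Libby mean-life of 8033 years that defines a conventional radiocarbon age, together with simple first-order error propagation.

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years; defines the conventional radiocarbon age

def radiocarbon_age(fraction_modern, sigma_fraction):
    """Conventional 14C age and its 1-sigma counting uncertainty.

    fraction_modern: measured 14C activity relative to the modern standard.
    sigma_fraction:  statistical (counting) uncertainty of that ratio.
    """
    age = -LIBBY_MEAN_LIFE * math.log(fraction_modern)
    # first-order propagation: |d(age)/dF| = LIBBY_MEAN_LIFE / F,
    # so the age error grows as the remaining activity F shrinks
    sigma_age = LIBBY_MEAN_LIFE * sigma_fraction / fraction_modern
    return age, sigma_age

# a sample retaining half its modern activity dates to one Libby half-life
age, err = radiocarbon_age(0.5, 0.005)
```

For the same counting precision on the activity ratio, an older sample (smaller remaining fraction) yields a larger age uncertainty, which is exactly the effect the abstract describes.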
Social contagion of correct and incorrect information in memory.
Rush, Ryan A; Clark, Steven E
2014-01-01
The present study examines how discussion between individuals regarding a shared memory affects their subsequent individual memory reports. In three experiments pairs of participants recalled items from photographs of common household scenes, discussed their recall with each other, and then recalled the items again individually. Results showed that after the discussion, individuals recalled more correct items and more incorrect items, with very small non-significant increases, or no change, in recall accuracy. The information people were exposed to during the discussion was generally accurate, although not as accurate as individuals' initial recall. Individuals incorporated correct exposure items into their subsequent recall at a higher rate than incorrect exposure items. Participants who were initially more accurate became less accurate, and initially less-accurate participants became more accurate as a result of their discussion. Comparisons to no-discussion control groups suggest that the effects were not simply the product of repeated recall opportunities or self-cueing, but rather reflect the transmission of information between individuals.
Aperiodicity Correction for Rotor Tip Vortex Measurements
Ramasamy, Manikandan; Paetzel, Ryan; Bhagwat, Mahendra J.
2011-01-01
The initial roll-up of a tip vortex trailing from a model-scale, hovering rotor was measured using particle image velocimetry. The unique feature of the measurements was that a microscope was attached to the camera to allow much higher spatial resolution than hitherto possible. This also posed some unique challenges. In particular, the existing methodologies to correct for aperiodicity in the tip vortex locations could not be easily extended to the present measurements. The difficulty stemmed from the inability to accurately determine the vortex center, which is a prerequisite for the correction procedure. A new method is proposed for determining the vortex center, as well as the vortex core properties, using a least-squares fit approach. This approach has the obvious advantage that the properties are derived from not just a few points near the vortex core, but from a much larger area of flow measurements. Results clearly demonstrate the advantage in the form of reduced variation in the estimated core properties, and also the self-consistent results obtained using three different aperiodicity correction methods.
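The whole-field least-squares idea can be illustrated with a toy reconstruction. The sketch below is not the authors' algorithm: it assumes an idealized Lamb-Oseen swirl profile with a known, fixed core radius, noiseless synthetic samples, and a coarse grid search over candidate centers; for each candidate, the best-fit circulation is a one-parameter linear least-squares solution.

```python
import math

def lamb_oseen_speed(r, gamma, r_core):
    # swirl speed of a Lamb-Oseen vortex at radius r (zero on the axis)
    if r == 0.0:
        return 0.0
    return gamma / (2.0 * math.pi * r) * (1.0 - math.exp(-(r / r_core) ** 2))

# synthetic "PIV" samples from a vortex at a known center
true_center, true_gamma, r_core = (0.3, -0.2), 1.0, 0.1
points = [(0.1 * i - 1.0, 0.1 * j - 1.0) for i in range(21) for j in range(21)]
speeds = [lamb_oseen_speed(math.hypot(x - true_center[0], y - true_center[1]),
                           true_gamma, r_core) for (x, y) in points]

def residual(cx, cy):
    # for a candidate center, the best-fit circulation is linear LSQ:
    # gamma* = sum(v_i * f_i) / sum(f_i^2), with f_i = model speed at gamma=1
    f = [lamb_oseen_speed(math.hypot(x - cx, y - cy), 1.0, r_core)
         for (x, y) in points]
    gamma = sum(v * fi for v, fi in zip(speeds, f)) / sum(fi * fi for fi in f)
    return sum((v - gamma * fi) ** 2 for v, fi in zip(speeds, f))

# grid search over candidate centers: a whole-field fit, not just a few
# points near the core, mirroring the advantage claimed in the abstract
grid = [(-0.5 + 0.1 * i, -0.5 + 0.1 * j) for i in range(11) for j in range(11)]
best = min(grid, key=lambda c: residual(*c))
```

Because every measurement in the field constrains the fit, the estimate is far less sensitive to noise near the core than a method that locates the center from a handful of core points.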
How Accurately can we Calculate Thermal Systems?
International Nuclear Information System (INIS)
Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A
2004-01-01
I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as k-eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully this will eventually lead to improvements in both our codes and the thermal scattering models that they use. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering and that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium-fueled thermal system, i.e., our typical thermal reactors
Accurate control testing for clay liner permeability
Energy Technology Data Exchange (ETDEWEB)
Mitchell, R J
1991-08-01
Two series of centrifuge tests were carried out to evaluate the use of centrifuge modelling as a method of accurate control testing of clay liner permeability. The first series used a large 3 m radius geotechnical centrifuge and the second series a small 0.5 m radius machine built specifically for research on clay liners. Two permeability cells were fabricated in order to provide direct data comparisons between the two methods of permeability testing. In both cases, the centrifuge method proved to be effective and efficient, and was found to be free of both the technical difficulties and leakage risks normally associated with laboratory permeability testing of fine grained soils. Two materials were tested, a consolidated kaolin clay having an average permeability coefficient of 1.2×10⁻⁹ m/s and a compacted illite clay having a permeability coefficient of 2.0×10⁻¹¹ m/s. Four additional tests were carried out to demonstrate that the 0.5 m radius centrifuge could be used for liner performance modelling to evaluate factors such as volumetric water content, compaction method and density, leachate compatibility and other construction effects on liner leakage. The main advantages of centrifuge testing of clay liners are rapid and accurate evaluation of hydraulic properties and realistic stress modelling for performance evaluations. 8 refs., 12 figs., 7 tabs.
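The "rapid evaluation" advantage mentioned above follows from standard geotechnical centrifuge scaling: model lengths scale as 1/N at N gravities, and diffusion-controlled processes such as seepage and consolidation scale in time as 1/N². A minimal sketch of that time scaling (the function name and values are illustrative, not from the paper):

```python
def prototype_seepage_time(model_time_hours, g_level):
    """Scale a seepage/consolidation time measured in a centrifuge model
    to the full-scale prototype. Under standard geotechnical centrifuge
    scaling, lengths scale by 1/N and diffusion-controlled times by 1/N^2,
    so t_prototype = N^2 * t_model.
    """
    return g_level ** 2 * model_time_hours

# one hour of seepage observed at 100 g represents ~10,000 hours
# (over a year) of seepage at full scale
t_proto = prototype_seepage_time(1.0, 100)
```

This is why a low-permeability liner whose laboratory test would take months can be characterized in hours on a centrifuge.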
2015-01-01
Reports an error in "Pedagogy of the privileged: Review of Deconstructing Privilege: Teaching and Learning as Allies in the Classroom" by Rebecca L. Toporek (Cultural Diversity and Ethnic Minority Psychology, 2014[Oct], Vol 20[4], 621-622). This article was originally published online incorrectly as a Brief Report. The article authored by Rebecca L. Toporek has been published correctly as a Book Review in the October 2014 print publication (Vol. 20, No. 4, pp. 621-622. http://dx.doi.org/10.1037/a0036529). (The following abstract of the original article appeared in record 2014-42484-006.) Reviews the book, Deconstructing Privilege: Teaching and Learning as Allies in the Classroom edited by Kim A. Case (2013). The purpose of this book is to provide a collection of resources for those teaching about privilege directly; much of this volume may also be useful for expanding the context within which educators teach all aspects of psychology. Understanding the history and systems of psychology, clinical practice, research methods, assessment, and all the core areas of psychology could be enhanced by consideration of the structural framework through which psychology has developed and is maintained. The book presents a useful guide for educators, and in particular, those who teach about systems of oppression and privilege directly. For psychologists, this guide provides scholarship and concrete strategies for facilitating students' awareness of multiple dimensions of privilege across content areas. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Radiation protection: A correction
International Nuclear Information System (INIS)
1972-01-01
An error in translation inadvertently distorted the sense of a paragraph in the article entitled 'Ecological Aspects of Radiation Protection', by Dr. P. Recht, which appeared in the Bulletin, Volume 14, No. 2 earlier this year. In the English text the error appears on Page 28, second paragraph, which reads, as published: 'An instance familiar to radiation protection specialists, which has since come to be regarded as a classic illustration of this approach, is the accidental release at the Windscale nuclear centre in the north of England.' In the French original of this text no reference was made, or intended, to the accidental release which took place in 1957; the reference was to the study of the critical population group exposed to routine releases from the centre, as the footnote made clear. A more correct translation of the relevant sentence reads: 'A classic example of this approach, well-known to radiation protection specialists, is that of releases from the Windscale nuclear centre, in the north of England.' A second error appeared in the footnote already referred to. In all languages, the critical population group studied in respect of the Windscale releases is named as that of Cornwall; the reference should be, of course, to that part of the population of Wales who eat laver bread. (author)
Cross plane scattering correction
International Nuclear Information System (INIS)
Shao, L.; Karp, J.S.
1990-01-01
Most previous scattering correction techniques for PET are based on assumptions made for a single transaxial plane and are independent of axial variations. These techniques will incorrectly estimate the scattering fraction for volumetric PET imaging systems since they do not take cross-plane scattering into account. In this paper, the authors propose a new two-dimensional point source scattering deconvolution method. Cross-plane scattering is incorporated into the algorithm by modeling a scattering point source function. In the model, the scattering dependence on both the axial and transaxial directions is reflected in the exponential fitting parameters, and these parameters are directly estimated from a limited number of measured point response functions. The authors' results comparing the standard in-plane point source deconvolution to their cross-plane source deconvolution show that, for a small source, the former technique overestimates the scatter fraction in the plane of the source and underestimates the scatter fraction in adjacent planes. In addition, the authors propose a simple approximation technique for deconvolution.
Accurate Holdup Calculations with Predictive Modeling & Data Integration
Energy Technology Data Exchange (ETDEWEB)
Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering
2017-04-03
In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
Accurate, fully-automated NMR spectral profiling for metabolomics.
Directory of Open Access Journals (Sweden)
Siamak Ravanbakhsh
Full Text Available Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile", i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's Nuclear Magnetic Resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures, including real biological samples (serum and CSF), defined mixtures, and realistic computer-generated spectra involving > 50 compounds, show that BAYESIL can autonomously find the concentration of NMR-detectable metabolites accurately (~ 90% correct identification and ~ 10% quantification error), in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic publicly-accessible system that provides quantitative NMR spectral profiling effectively - with an accuracy on these biofluids that meets or exceeds the performance of trained experts. We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of
Designing an accurate system for temperature measurements
Directory of Open Access Journals (Sweden)
Kochan Orest
2017-01-01
Full Text Available The method of compensating for changes in the temperature field along the legs of an inhomogeneous thermocouple, which measures the temperature of an object, is considered in this paper. This compensation is achieved by stabilizing the temperature field along the thermocouple. Such stabilization does not allow the error due to acquired thermoelectric inhomogeneity to manifest itself. The design of a furnace to stabilize the temperature field along the legs of the thermocouple measuring the temperature of an object is also proposed. This furnace is not integrated with the thermocouple mentioned above, so the thermocouple can be replaced with a new one when its legs become considerably inhomogeneous. A two-loop measuring system with error-correction capability is designed, which can simultaneously use an ordinary thermocouple as well as a thermocouple with a controlled profile of the temperature field. The latter can be used as a reference sensor for the former.
Food systems in correctional settings
DEFF Research Database (Denmark)
Smoyer, Amy; Kjær Minke, Linda
Food is a central component of life in correctional institutions and plays a critical role in the physical and mental health of incarcerated people and the construction of prisoners' identities and relationships. An understanding of the role of food in correctional settings and the effective management of food systems may improve outcomes for incarcerated people and help correctional administrators to maximize their health and safety. This report summarizes existing research on food systems in correctional settings and provides examples of food programmes in prison and remand facilities, including a case study of food-related innovation in the Danish correctional system. It offers specific conclusions for policy-makers, administrators of correctional institutions and prison-food-service professionals, and makes proposals for future research.
Accurate predictions for the LHC made easy
CERN. Geneva
2014-01-01
The data recorded by the LHC experiments is of a very high quality. To get the most out of the data, precise theory predictions, including uncertainty estimates, are needed to reduce as much as possible theoretical bias in the experimental analyses. Recently, significant progress has been made in computing Next-to-Leading Order (NLO) computations, including matching to the parton shower, that allow for these accurate, hadron-level predictions. I shall discuss one of these efforts, the MadGraph5_aMC@NLO program, that aims at the complete automation of predictions at the NLO accuracy within the SM as well as New Physics theories. I’ll illustrate some of the theoretical ideas behind this program, show some selected applications to LHC physics, as well as describe the future plans.
Apparatus for accurately measuring high temperatures
Smith, D.D.
The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800 to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high-pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.
Accurate Modeling Method for Cu Interconnect
Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko
This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for extraction of the model parameters, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameter Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what has conventionally been treated as random variation, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.
Corrective justice and contract law
Directory of Open Access Journals (Sweden)
Martín Hevia
2010-06-01
Full Text Available This article suggests that the central aspects of contract law in various jurisdictions can be explained within the idea of corrective justice. The article is divided into three parts. The first part distinguishes between corrective justice and distributive justice. The second part describes contract law. The third part focuses on actions for breach of contract and within that context reflects upon the idea of corrective justice.
Corrected ROC analysis for misclassified binary outcomes.
Zawistowski, Matthew; Sussman, Jeremy B; Hofer, Timothy P; Bentley, Douglas; Hayward, Rodney A; Wiitala, Wyndy L
2017-06-15
Creating accurate risk prediction models from Big Data resources such as Electronic Health Records (EHRs) is a critical step toward achieving precision medicine. A major challenge in developing these tools is accounting for imperfect aspects of EHR data, particularly the potential for misclassified outcomes. Misclassification, the swapping of case and control outcome labels, is well known to bias effect size estimates for regression prediction models. In this paper, we study the effect of misclassification on accuracy assessment for risk prediction models and find that it leads to bias in the area under the curve (AUC) metric from standard ROC analysis. The extent of the bias is determined by the false positive and false negative misclassification rates as well as disease prevalence. Notably, we show that simply correcting for misclassification while building the prediction model is not sufficient to remove the bias in AUC. We therefore introduce an intuitive misclassification-adjusted ROC procedure that accounts for uncertainty in observed outcomes and produces bias-corrected estimates of the true AUC. The method requires that misclassification rates are either known or can be estimated, quantities typically required for the modeling step. The computational simplicity of our method is a key advantage, making it ideal for efficiently comparing multiple prediction models on very large datasets. Finally, we apply the correction method to a hospitalization prediction model from a cohort of over 1 million patients from the Veterans Health Administration's EHR. Implementations of the ROC correction are provided for Stata and R. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
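The AUC bias described above is easy to reproduce in a small simulation. The sketch below handles only the simplest special case, equal-sized case/control groups with a symmetric, known flip rate r, where the bias works out to AUC_obs = (1 - 2r)·AUC_true + r and can be inverted directly; the paper's procedure is more general (unequal and estimated rates, prevalence effects), and all names and numbers here are invented for illustration.

```python
import random

def auc(pos_scores, neg_scores):
    # Mann-Whitney estimate of AUC: P(score_pos > score_neg), ties count 1/2
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

random.seed(0)
cases = [random.gauss(2.0, 1.0) for _ in range(300)]     # true cases score higher
controls = [random.gauss(0.0, 1.0) for _ in range(300)]
true_auc = auc(cases, controls)

# swap 20% of labels in each direction (symmetric misclassification)
r = 0.2
noisy = [(s, 1 - y) if random.random() < r else (s, y)
         for s, y in [(s, 1) for s in cases] + [(s, 0) for s in controls]]
obs_auc = auc([s for s, y in noisy if y == 1],
              [s for s, y in noisy if y == 0])

# in this equal-prevalence, symmetric special case the bias is linear,
# AUC_obs = (1 - 2r) * AUC_true + r, so a known r lets us invert it
corrected_auc = (obs_auc - r) / (1 - 2 * r)
```

The observed AUC is pulled toward 0.5, and, as the abstract notes, no amount of model refitting removes this: the correction has to act on the accuracy metric itself.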
Analytical model for relativistic corrections to the nuclear magnetic shielding constant in atoms
Energy Technology Data Exchange (ETDEWEB)
Romero, Rodolfo H. [Facultad de Ciencias Exactas, Universidad Nacional del Nordeste, Avenida Libertad 5500 (3400), Corrientes (Argentina)]. E-mail: rhromero@exa.unne.edu.ar; Gomez, Sergio S. [Facultad de Ciencias Exactas, Universidad Nacional del Nordeste, Avenida Libertad 5500 (3400), Corrientes (Argentina)
2006-04-24
We present a simple analytical model for calculating and rationalizing the main relativistic corrections to the nuclear magnetic shielding constant in atoms. It provides good estimates for those corrections and their trends, in reasonable agreement with accurate four-component calculations and perturbation methods. The origin of the effects in deep core atomic orbitals is manifestly shown.
Correction for sample self-absorption in activity determination by gamma spectrometry
International Nuclear Information System (INIS)
Galloway, R.B.
1991-01-01
Gamma ray spectrometry is a convenient method of determining the activity of the radioactive components in environmental samples. Commonly samples vary in gamma absorption or differ in absorption from the calibration standards available, so that accurate correction for self-absorption in the sample is essential. A versatile correction procedure is described. (orig.)
Analytical model for relativistic corrections to the nuclear magnetic shielding constant in atoms
International Nuclear Information System (INIS)
Romero, Rodolfo H.; Gomez, Sergio S.
2006-01-01
We present a simple analytical model for calculating and rationalizing the main relativistic corrections to the nuclear magnetic shielding constant in atoms. It provides good estimates for those corrections and their trends, in reasonable agreement with accurate four-component calculations and perturbation methods. The origin of the effects in deep core atomic orbitals is manifestly shown.
Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi
2015-02-01
With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information can bring negative effects, especially in the delayed case: travelers prefer the route reported as being in the best condition, yet delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, reducing capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR: when the difference between two routes is less than BR, the routes are chosen with equal probability. Bounded rationality is helpful for improving efficiency in terms of capacity, oscillation, and the gap from the system equilibrium.
Open quantum systems and error correction
Shabani Barzegar, Alireza
Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interaction with their surrounding environment or from inaccuracy in control forces. Engineering methods to combat errors in quantum devices is in high demand. In this thesis, I focus on realistic formulations of quantum error correction methods; a realistic formulation is one that incorporates experimental challenges. This thesis is presented in two parts: open quantum systems and quantum error correction. Chapters 2 and 3 cover open quantum system theory: it is essential to first study a noise process before contemplating methods to cancel its effect. In the second chapter, I present the non-completely-positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on the geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter; after introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The quantum error correction part comprises chapters 4 through 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In chapter 5, we present a semidefinite-program optimization approach to quantum error correction that yields codes and recovery procedures robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, the recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). Chapter 6 is devoted to a theory of quantum error correction (QEC
An accurate projection algorithm for array processor based SPECT systems
International Nuclear Information System (INIS)
King, M.A.; Schwinger, R.B.; Cool, S.L.
1985-01-01
A data re-projection algorithm has been developed for use in single photon emission computed tomography (SPECT) on an array processor based computer system. The algorithm makes use of an accurate representation of pixel activity (uniform square pixel model of intensity distribution), and is rapidly performed due to the efficient handling of an array based algorithm and the Fast Fourier Transform (FFT) on parallel processing hardware. The algorithm consists of using a pixel driven nearest neighbour projection operation to an array of subdivided projection bins. This result is then convolved with the projected uniform square pixel distribution before being compressed to original bin size. This distribution varies with projection angle and is explicitly calculated. The FFT combined with a frequency space multiplication is used instead of a spatial convolution for more rapid execution. The new algorithm was tested against other commonly used projection algorithms by comparing the accuracy of projections of a simulated transverse section of the abdomen against analytically determined projections of that transverse section. The new algorithm was found to yield comparable or better standard error and yet result in easier and more efficient implementation on parallel hardware. Applications of the algorithm include iterative reconstruction and attenuation correction schemes and evaluation of regions of interest in dynamic and gated SPECT
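The substitution of a frequency-space multiplication for a spatial convolution, as used in the algorithm above, rests on the convolution theorem. The sketch below (a naive O(N^2) DFT on made-up sample values, purely illustrative; the paper uses an FFT on array processor hardware) checks that pointwise multiplication of transforms reproduces the direct circular convolution:

```python
import cmath

def dft(x):
    # Naive discrete Fourier transform (an FFT is used in practice)
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
            for k in range(n)]

def idft(xf):
    # Inverse transform with 1/n normalization
    n = len(xf)
    return [sum(xf[k] * cmath.exp(2j * cmath.pi * k * m / n) for k in range(n)) / n
            for m in range(n)]

def circular_convolution(a, b):
    n = len(a)
    return [sum(a[m] * b[(k - m) % n] for m in range(n)) for k in range(n)]

pixels = [1.0, 2.0, 3.0, 0.0, 0.0, 0.0]     # projected pixel samples (made up)
kernel = [0.5, 0.25, 0.0, 0.0, 0.0, 0.25]   # stand-in pixel response (made up)
direct = circular_convolution(pixels, kernel)
via_fourier = [c.real for c in
               idft([p * q for p, q in zip(dft(pixels), dft(kernel))])]
```

The two results agree to floating-point precision, which is why the frequency-space route can replace the spatial convolution for faster execution.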
Study of accurate volume measurement system for plutonium nitrate solution
Energy Technology Data Exchange (ETDEWEB)
Hosoma, T. [Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan). Tokai Works
1998-12-01
It is important for effective safeguarding of nuclear materials to establish a technique for accurate volume measurement of plutonium nitrate solution in an accountancy tank. The volume of the solution can be estimated from two differential pressures between three dip-tubes, through which air is purged by a compressor. One of the differential pressures corresponds to the density of the solution, and the other corresponds to the surface level of the solution in the tank. The measurement of the differential pressure is subject to many sources of uncertainty, such as the precision of the pressure transducer, fluctuation of the back-pressure, generation of bubbles at the tips of the dip-tubes, non-uniformity of the temperature and density of the solution, pressure drop in the dip-tube, and so on. The various excess pressures arising during the volume measurement are discussed and corrected by a reasonable method. A high-precision differential pressure measurement system was developed using a quartz-oscillation-type transducer which converts a differential pressure to a digital signal. The developed system is used for inspection by the government and the IAEA. (M. Suetake)
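The relation between the two differential pressures and the tank contents follows from hydrostatics. The sketch below is a minimal illustration under a simple ρgh model (the symbols, numbers, and function names are our assumptions; it ignores the excess-pressure corrections the abstract discusses):

```python
G = 9.80665  # standard gravity, m/s^2

def density_from_dp(dp_density_pa, tube_separation_m):
    # Density from the differential pressure between two dip-tubes whose
    # outlets are separated vertically by tube_separation_m: dp = rho*g*h
    return dp_density_pa / (G * tube_separation_m)

def level_from_dp(dp_level_pa, density_kg_m3):
    # Liquid height above the lower tube outlet from the level reading
    return dp_level_pa / (G * density_kg_m3)

rho = density_from_dp(5883.99, 0.5)      # -> 1200 kg/m^3 (illustrative values)
level = level_from_dp(14121.576, rho)    # -> 1.2 m
```

In practice each reading would first be corrected for the excess pressures (bubble formation, tube pressure drop, temperature non-uniformity) enumerated above.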
Accurate measurement of RF exposure from emerging wireless communication systems
International Nuclear Information System (INIS)
Letertre, Thierry; Toffano, Zeno; Monebhurrun, Vikass
2013-01-01
Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are subjected to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation but with the same root-mean-square (RMS) power. The two probes do not provide sufficiently accurate results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), nor for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with the emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known. In this case the measurement errors are shown to be systematic, and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.
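The duty cycle and crest factor that drive the probe errors are simple functions of the waveform. This sketch (a synthetic 25% duty-cycle burst, not a real WiMAX/LTE trace) shows why two signals can share the same RMS power yet present very different crest factors to a probe:

```python
import math

def rms(samples):
    # Root-mean-square amplitude of a sampled waveform
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def crest_factor(samples):
    # Ratio of peak amplitude to RMS amplitude
    return max(abs(s) for s in samples) / rms(samples)

continuous = [0.5] * 100              # constant-envelope signal
burst = [1.0] * 25 + [0.0] * 75      # 25% duty cycle, same RMS power
```

Both waveforms have RMS 0.5, but the burst has twice the crest factor, which is exactly the regime in which a peak-sensitive broadband probe over- or under-reads the true RMS power.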
Accurate prediction of the enthalpies of formation for xanthophylls.
Lii, Jenn-Huei; Liao, Fu-Xing; Hu, Ching-Han
2011-11-30
This study investigates the application of computational approaches to the prediction of enthalpies of formation (ΔH(f)) for C-, H-, and O-containing compounds. The MM4 molecular mechanics method, density functional theory (DFT) combined with atomic equivalent (AE) and group equivalent (GE) schemes, and DFT-based correlation-corrected atomization (CCAZ) were used. We emphasize the application to xanthophylls: C-, H-, and O-containing carotenoids which consist of ∼100 atoms and extended π-delocalization systems. Within the training set, MM4 predictions are more accurate than those obtained using AE and GE; however, a systematic underestimation was observed in the extended systems. ΔH(f) for the training set molecules predicted by CCAZ combined with DFT are in very good agreement with the G3 results. The average absolute deviations (AADs) of CCAZ combined with B3LYP and MPWB1K are 0.38 and 0.53 kcal/mol compared with the G3 data, and 0.74 and 0.69 kcal/mol compared with the available experimental data, respectively. Consistency of the CCAZ approach for the selected xanthophylls is revealed by the AAD of 2.68 kcal/mol between B3LYP-CCAZ and MPWB1K-CCAZ. Copyright © 2011 Wiley Periodicals, Inc.
Podiatry Ankle Duplex Scan: Readily Learned and Accurate in Diabetes.
Normahani, Pasha; Powezka, Katarzyna; Aslam, Mohammed; Standfield, Nigel J; Jaffer, Usman
2018-03-01
We aimed to train podiatrists to perform a focused duplex ultrasound scan (DUS) of the tibial vessels at the ankle in diabetic patients; podiatry ankle (PodAnk) duplex scan. Thirteen podiatrists underwent an intensive 3-hour long simulation training session. Participants were then assessed performing bilateral PodAnk duplex scans of 3 diabetic patients with peripheral arterial disease. Participants were assessed using the duplex ultrasound objective structured assessment of technical skills (DUOSATS) tool and an "Imaging Score". A total of 156 vessel assessments were performed. All patients had abnormal waveforms with a loss of triphasic flow. Loss of triphasic flow was accurately detected in 145 (92.9%) vessels; the correct waveform was identified in 139 (89.1%) cases. Participants achieved excellent DUOSATS scores (median 24 [interquartile range: 23-25], max attainable score of 26) as well as "Imaging Scores" (8 [8-8], max attainable score of 8) indicating proficiency in technical skills. The mean time taken for each bilateral ankle assessment was 20.4 minutes (standard deviation ±6.7). We have demonstrated that a focused DUS for the purpose of vascular assessment of the diabetic foot is readily learned using intensive simulation training.
International Nuclear Information System (INIS)
El-Behay, A.Z.; Attawiya, M.Y.; Khattab, F.M.
1984-01-01
In an effort to obtain accurate results from the X-ray fluorescence technique for the analysis of trace elements in geological materials, two corrections were applied to the data: correction of the observed X-ray intensities for absorption and/or enhancement effects due to the presence of other elements in the system, and spectral deconvolution to account for overlapping lines. Significant improvement in precision and accuracy was obtained and evaluated.
Directory of Open Access Journals (Sweden)
Dobrislav Dobrev
2017-02-01
We provide an accurate closed-form expression for the expected shortfall of linear portfolios with elliptically distributed risk factors. Our results aim to correct inaccuracies that originate in Kamdem (2005) and are present also in at least thirty other papers referencing it, including the recent survey by Nadarajah et al. (2014) on estimation methods for expected shortfall. In particular, we show that the correction we provide in the popular multivariate Student t setting eliminates understatement of expected shortfall by a factor varying from at least four to more than 100 across different tail quantiles and degrees of freedom. As such, the resulting economic impact in financial risk management applications could be significant. We further correct such errors encountered also in closely related results in Kamdem (2007, 2009) for mixtures of elliptical distributions. More generally, our findings point to the extra scrutiny required when deploying new methods for expected shortfall estimation in practice.
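Closed-form expressions like those above are typically validated against the empirical estimator. The sketch below implements the standard tail-average estimator of expected shortfall (the loss sample is made up for illustration):

```python
def expected_shortfall(losses, alpha):
    # Empirical ES at level alpha: average of the worst alpha-fraction of losses
    worst_first = sorted(losses, reverse=True)
    k = max(1, int(round(alpha * len(losses))))
    return sum(worst_first[:k]) / k

losses = [float(x) for x in range(1, 101)]    # toy loss sample: 1..100
es_5pct = expected_shortfall(losses, 0.05)    # mean of the 5 worst: 96..100
```

A Monte Carlo version of this estimator on simulated Student t losses is one way to detect the kind of systematic understatement the paper documents in the published closed forms.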
PET measurements of cerebral metabolism corrected for CSF contributions
International Nuclear Information System (INIS)
Chawluk, J.; Alavi, A.; Dann, R.; Kushner, M.J.; Hurtig, H.; Zimmerman, R.A.; Reivich, M.
1984-01-01
Thirty-three subjects have been studied with PET and anatomic imaging (proton-NMR and/or CT) in order to determine the effect of cerebral atrophy on calculations of metabolic rates. Subgroups of neurologic disease investigated include stroke, brain tumor, epilepsy, psychosis, and dementia. Anatomic images were digitized through a Vidicon camera and analyzed volumetrically. Relative areas for ventricles, sulci, and brain tissue were calculated. Preliminary analysis suggests that ventricular volumes as determined by NMR and CT are similar, while sulcal volumes are larger on NMR scans. Metabolic rates (18F-FDG) were calculated before and after correction for CSF spaces, with initial focus upon dementia and normal aging. Correction for atrophy led to a greater percentage increase in global metabolic rates in demented individuals (18.2 ± 5.3) compared to elderly controls (8.3 ± 3.0, p < .05). A trend towards significantly lower glucose metabolism in demented subjects before CSF correction was not seen following correction for atrophy. These data suggest that volumetric analysis of NMR images may more accurately reflect the degree of cerebral atrophy, since NMR does not suffer from beam hardening artifact due to bone-parenchyma juxtapositions. Furthermore, appropriate correction for CSF spaces should be employed if current resolution PET scanners are to accurately measure residual brain tissue metabolism in various pathological states.
Unpacking Corrections in Mobile Instruction
DEFF Research Database (Denmark)
Levin, Lena; Cromdal, Jakob; Broth, Mathias
2017-01-01
that the practice of unpacking the local particulars of corrections (i) provides for the instructional character of the interaction, and (ii) is highly sensitive to the relevant physical and mobile contingencies. These findings contribute to the existing literature on the interactional organisation of correction...
Atmospheric correction of satellite data
Shmirko, Konstantin; Bobrikov, Alexey; Pavlov, Andrey
2015-11-01
The atmosphere accounts for more than 90% of all radiation measured by a satellite. Because of this, atmospheric correction plays an important role in separating the water-leaving radiance from the signal and in evaluating the concentration of various water pigments (chlorophyll-a, DOM, CDOM, etc.). The elimination of the atmosphere's intrinsic radiance from the remote sensing signal is referred to as atmospheric correction.
Stress Management in Correctional Recreation.
Card, Jaclyn A.
Current economic conditions have created additional sources of stress in the correctional setting. Often, recreation professionals employed in these settings also add to inmate stress. One of the major factors limiting stress management in correctional settings is a lack of understanding of the value, importance, and perceived freedom, of leisure.…
Implicit time accurate simulation of unsteady flow
van Buuren, René; Kuerten, Hans; Geurts, Bernard J.
2001-03-01
Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution was determined for comparison with the implicit second-order Crank-Nicolson scheme. The time step in the explicit scheme is restricted by both temporal accuracy and stability requirements, whereas in the A-stable implicit scheme the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted, and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems may occur that are closely related to a highly complex structure of the basins of attraction of the iterative method.
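The stability contrast between the explicit and the A-stable implicit scheme can be seen on the scalar model problem y' = -λy. The sketch below (λ, the step size, and the step count are illustrative; the paper's setting is a full shock-boundary layer flow) takes a step far beyond the explicit stability limit and shows that Crank-Nicolson stays bounded while forward Euler diverges:

```python
lam = 1000.0   # stiff decay rate in y' = -lam * y
h = 0.01       # step size far beyond the explicit limit h < 2/lam

y_explicit = 1.0   # forward Euler: y_{n+1} = (1 - lam*h) * y_n
y_implicit = 1.0   # Crank-Nicolson: y_{n+1} = (1 - lam*h/2)/(1 + lam*h/2) * y_n
for _ in range(50):
    y_explicit = y_explicit * (1.0 - lam * h)
    y_implicit = y_implicit * (1.0 - lam * h / 2.0) / (1.0 + lam * h / 2.0)
```

With lam*h = 10, the explicit amplification factor is -9 (blow-up), while the Crank-Nicolson factor is -2/3 (decay), which is the A-stability property that lets the implicit scheme take time steps 20-80 times larger than the explicit one.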
Spectrally accurate initial data in numerical relativity
Battista, Nicholas A.
Einstein's theory of general relativity has radically altered the way in which we perceive the universe. His breakthrough was to realize that the fabric of space is deformable in the presence of mass, and that space and time are linked into a continuum. Much evidence has been gathered in support of general relativity over the decades. Some of the indirect evidence for GR includes the phenomenon of gravitational lensing, the anomalous perihelion of mercury, and the gravitational redshift. One of the most striking predictions of GR, that has not yet been confirmed, is the existence of gravitational waves. The primary source of gravitational waves in the universe is thought to be produced during the merger of binary black hole systems, or by binary neutron stars. The starting point for computer simulations of black hole mergers requires highly accurate initial data for the space-time metric and for the curvature. The equations describing the initial space-time around the black hole(s) are non-linear, elliptic partial differential equations (PDE). We will discuss how to use a pseudo-spectral (collocation) method to calculate the initial puncture data corresponding to single black hole and binary black hole systems.
A stiffly accurate integrator for elastodynamic problems
Michels, Dominik L.
2017-07-21
We present a new integration algorithm for the accurate and efficient solution of stiff elastodynamic problems governed by the second-order ordinary differential equations of structural mechanics. Current methods have the shortcoming that their performance is highly dependent on the numerical stiffness of the underlying system that often leads to unrealistic behavior or a significant loss of efficiency. To overcome these limitations, we present a new integration method which is based on a mathematical reformulation of the underlying differential equations, an exponential treatment of the full nonlinear forcing operator as opposed to more standard partially implicit or exponential approaches, and the utilization of the concept of stiff accuracy which ensures that the efficiency of the simulations is significantly less sensitive to increased stiffness. As a consequence, we are able to tremendously accelerate the simulation of stiff systems compared to established integrators and significantly increase the overall accuracy. The advantageous behavior of this approach is demonstrated on a broad spectrum of complex examples like deformable bodies, textiles, bristles, and human hair. Our easily parallelizable integrator enables more complex and realistic models to be explored in visual computing without compromising efficiency.
Self-absorption corrections for well-type germanium detectors
International Nuclear Information System (INIS)
Appleby, P.G.; Richardson, N.; Nolan, P.J.
1992-01-01
Corrections for self-absorption are of vital importance to accurate determination by gamma spectrometry of radionuclides such as 210Pb, 241Am and 234Th which emit low-energy gamma radiation. A simple theoretical model for determining the necessary corrections for well-type germanium detectors is presented. In this model, self-absorption factors are expressed in terms of the mass attenuation coefficient of the sample and a parameter characterising the well geometry. Experimental measurements of self-absorption are used to evaluate the model and to determine a semi-empirical algorithm for improved estimates of the geometrical parameter. (orig.)
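A self-absorption factor of the kind described can be illustrated with the textbook uniform-slab model (this is a generic depth-averaged escape fraction, not the paper's well-geometry model, and the numbers are made up):

```python
import math

def slab_self_absorption(mu_cm_inv, thickness_cm):
    # Fraction of photons escaping a uniform slab source with linear
    # attenuation coefficient mu, averaged over emission depth:
    # f = (1 - exp(-mu*t)) / (mu*t); f -> 1 as mu*t -> 0
    mu_t = mu_cm_inv * thickness_cm
    if mu_t == 0.0:
        return 1.0
    return (1.0 - math.exp(-mu_t)) / mu_t

factor = slab_self_absorption(0.5, 2.0)    # mu*t = 1 -> factor ~ 0.632
corrected_counts = 1000.0 / factor         # measured counts divided by factor
```

Dividing the measured count rate by the factor recovers the emission rate; thicker or more absorbing samples give smaller factors and hence larger corrections, which is the trend the well-geometry model above parameterizes.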
Real-time scatter measurement and correction in film radiography
International Nuclear Information System (INIS)
Shaw, C.G.
1987-01-01
A technique for real-time scatter measurement and correction in scanning film radiography is described. With this technique, collimated x-ray fan beams are used to partially reject scattered radiation. Photodiodes are attached to the aft-collimator for sampled scatter measurement. Such measurement allows the scatter distribution to be reconstructed and subtracted from digitized film image data for accurate transmission measurement. In this presentation the authors discuss the physical and technical considerations of this scatter correction technique. Examples are shown that demonstrate the feasibility of the technique. Improved x-ray transmission measurement and dual-energy subtraction imaging are demonstrated with phantoms
Radiative corrections for the leptonic pair production
Energy Technology Data Exchange (ETDEWEB)
Elend, H H
1971-01-01
The one-photon bremsstrahlung correction for symmetric lepton pair production is recalculated. From all the Feynman diagrams, the subset that contributes essentially to the symmetric case is selected. The squared matrix element for the chosen subset is expressed through the Bethe-Heitler squared matrix element multiplied by certain kinematic factors (the Huld relation), where (a) an expansion in the energy of the bremsstrahlung quantum, assumed to be small, is carried out and the series is truncated after the second term beyond the infrared part, and (b) a high-energy approximation is made. Furthermore, (c) the structure of the target nucleus and the recoil transferred to it are neglected, and (d) the integration over the phase space of the bremsstrahlung quantum is carried out with a peaking approximation. All these approximations are discussed individually, and the limits of validity which they impose on the result are accurately stated.
Correcting sample drift using Fourier harmonics.
Bárcena-González, G; Guerrero-Lebrero, M P; Guerrero, E; Reyes, D F; Braza, V; Yañez, A; Nuñez-Moraleda, B; González, D; Galindo, P L
2018-07-01
During image acquisition of crystalline materials by high-resolution scanning transmission electron microscopy, the sample drift could lead to distortions and shears that hinder their quantitative analysis and characterization. In order to measure and correct this effect, several authors have proposed different methodologies making use of series of images. In this work, we introduce a methodology to determine the drift angle via Fourier analysis by using a single image based on the measurements between the angles of the second Fourier harmonics in different quadrants. Two different approaches, that are independent of the angle of acquisition of the image, are evaluated. In addition, our results demonstrate that the determination of the drift angle is more accurate by using the measurements of non-consecutive quadrants when the angle of acquisition is an odd multiple of 45°. Copyright © 2018 Elsevier Ltd. All rights reserved.
Methods of correcting Anger camera deadtime losses
International Nuclear Information System (INIS)
Sorenson, J.A.
1976-01-01
Three different methods of correcting for Anger camera deadtime loss were investigated: analytic methods (mathematical modeling), the marker-source method, and a new method based on counting "pileup" events appearing in a pulse-height analyzer window positioned above the photopeak of interest. The studies were done with 99mTc on a Searle Radiographics camera with a measured deadtime of about 6 μs. Analytic methods were found to be unreliable because of unpredictable changes in deadtime with changes in radiation scattering conditions. Both the marker-source method and the pileup-counting method were found to be accurate to within a few percent for true counting rates of up to about 200 kcps, with the pileup-counting method giving better results. This finding applied to sources at depths of up to 10 cm of pressed wood. The relative merits of the two methods are discussed.
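For reference, a non-paralyzable detector model admits a standard closed-form deadtime inversion. The sketch below applies it with the roughly 6 μs deadtime quoted above (the non-paralyzable model itself is an assumption on our part; the paper compares empirical correction methods rather than prescribing this formula):

```python
def true_rate(measured_cps, dead_time_s):
    # Non-paralyzable deadtime correction: n = m / (1 - m * tau)
    loss = measured_cps * dead_time_s
    if loss >= 1.0:
        raise ValueError("measured rate inconsistent with this deadtime")
    return measured_cps / (1.0 - loss)

n = true_rate(100_000.0, 6e-6)   # 100 kcps measured with 6 us deadtime
```

At 100 kcps measured, the detector is busy 60% of the time, so the inferred true rate is 2.5 times the measured one; it is this strong, condition-dependent nonlinearity that made purely analytic correction unreliable in the study above.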
Towards Accurate Application Characterization for Exascale (APEX)
Energy Technology Data Exchange (ETDEWEB)
Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-09-01
Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.
Accurate hydrocarbon estimates attained with radioactive isotope
International Nuclear Information System (INIS)
Hubbard, G.
1983-01-01
To make accurate economic evaluations of new discoveries, an oil company needs to know how much gas and oil a reservoir contains. The porous rocks of these reservoirs are not completely filled with gas or oil, but contain a mixture of gas, oil and water. It is extremely important to know what volume percentage of this water--called connate water--is contained in the reservoir rock. The percentage of connate water can be calculated from electrical resistivity measurements made downhole. The accuracy of this method can be improved if a pure sample of connate water can be analyzed or if the chemistry of the water can be determined by conventional logging methods. Because of the similarity of the mud filtrate--the water in a water-based drilling fluid--and the connate water, this is not always possible. If the oil company cannot distinguish between connate water and mud filtrate, its oil-in-place calculations could be incorrect by ten percent or more. It is clear that unless an oil company can be sure that a sample of connate water is pure, or at the very least knows exactly how much mud filtrate it contains, its assessment of the reservoir's water content--and consequently its oil or gas content--will be distorted. The oil companies have opted for the Repeat Formation Tester (RFT) method. Label the drilling fluid with small doses of tritium--a radioactive isotope of hydrogen--and it will be easy to detect and quantify in the sample
Determining spherical lens correction for astronaut training underwater.
Porter, Jason; Gibson, C Robert; Strauss, Samuel
2011-09-01
To develop a model that will accurately predict the distance spherical lens correction needed to be worn by National Aeronautics and Space Administration astronauts while training underwater. The replica space suit's helmet contains curved visors that induce refractive power when submersed in water. Anterior surface powers and thicknesses were measured for the helmet's protective and inside visors. The impact of each visor on the helmet's refractive power in water was analyzed using thick lens calculations and Zemax optical design software. Using geometrical optics approximations, a model was developed to determine the optimal distance spherical power needed to be worn underwater based on the helmet's total induced spherical power underwater and the astronaut's manifest spectacle plane correction in air. The validity of the model was tested using data from both eyes of 10 astronauts who trained underwater. The helmet's visors induced a total power of -2.737 D when placed underwater. The required underwater spherical correction (FW) was linearly related to the spectacle plane spherical correction in air (FAir): FW = FAir + 2.356 D. The mean magnitude of the difference between the actual correction worn underwater and the calculated underwater correction was 0.20 ± 0.11 D, and the actual and calculated values were highly correlated (r = 0.971). The model accurately predicts the actual values worn underwater by astronauts and can be applied more generally to determine a suitable spectacle lens correction to be worn behind other types of masks when submerged underwater.
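The paper's linear relation FW = FAir + 2.356 D lends itself to a one-line helper; the sketch below simply encodes the reported constants (the function name and the example prescription are our own illustrations):

```python
HELMET_UNDERWATER_POWER_D = -2.737   # total induced power of the visors in water
POWER_OFFSET_D = 2.356               # fitted offset: FW = FAir + 2.356 D

def underwater_spherical_correction(spectacle_power_air_d):
    # Distance spherical power to wear underwater, from the in-air correction
    return spectacle_power_air_d + POWER_OFFSET_D

fw = underwater_spherical_correction(-3.00)   # myopic example: -3.00 D in air
```

For a -3.00 D spectacle correction in air, the model prescribes about -0.644 D underwater, consistent with the helmet visors partially compensating myopic refractive error when submerged.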
High order QED corrections in Z physics
International Nuclear Information System (INIS)
Marck, S.C. van der.
1991-01-01
In this thesis a number of calculations of higher-order QED corrections are presented, all applying to the standard LEP/SLC processes e+e- → f f-bar, where f stands for any fermion. In cases where f ≠ e-, nu_e, the above process is only possible via annihilation of the incoming electron-positron pair. At LEP/SLC this mainly occurs via the production and subsequent decay of a Z boson, i.e. the cross section is heavily dominated by the Z resonance. These processes and the corrections to them, treated in a semi-analytical way, are discussed (ch. 2). In the case f = e- (Bhabha scattering) the process can also occur via the exchange of a virtual photon in the t-channel. Since the latter contribution is dominant at small scattering angles, one has to exclude these angles if one is interested in Z physics. Having excluded that region, one has to recalculate all QED corrections (ch. 3). The techniques introduced there enable the calculation of the difference between forward and backward scattering, the forward-backward asymmetry, for the cases f ≠ e-, nu_e (ch. 4). At small scattering angles, where Bhabha scattering is dominated by photon exchange in the t-channel, this process is used in experiments to determine the luminosity of the e+e- accelerator. Hence an accurate theoretical description of this process at small angles is of vital interest to the overall normalization of all measurements at LEP/SLC. Ch. 5 gives such a description in a semi-analytical way. The last two chapters discuss Monte Carlo techniques that are used for the cases f ≠ e-, nu_e. Ch. 6 describes the simulation of two-photon bremsstrahlung, which is a second-order QED correction effect. The results are compared with results of the semi-analytical treatment in ch. 2. Finally, ch. 7 reviews several techniques that have been used to simulate higher-order QED corrections for the cases f ≠ e-, nu_e. (author). 132 refs.; 10 figs.; 16 tabs
Scattering Correction For Image Reconstruction In Flash Radiography
International Nuclear Information System (INIS)
Cao, Liangzhi; Wang, Mengqi; Wu, Hongchun; Liu, Zhouyu; Cheng, Yuxiong; Zhang, Hongbo
2013-01-01
Scattered photons cause blurring and distortions in flash radiography, significantly reducing the accuracy of image reconstruction. The effect of the scattered photons is taken into account and an iterative subtraction of the scattered photons is proposed to correct the scattering effect for image restoration. In order to subtract the scattering contribution, the flux of scattered photons is estimated as the sum of two components. The single-scattered component is calculated accurately together with the uncollided flux along the characteristic ray, while the multiple-scattered component is evaluated using correction coefficients pre-obtained from Monte Carlo simulations. The arbitrary-geometry pretreatment and ray tracing are carried out based on the customization of AutoCAD. With the above model, an Iterative Procedure for image restORation code, IPOR, is developed. Numerical results demonstrate that the IPOR code is much more accurate than the direct reconstruction solution without scattering correction and that it has a very high computational efficiency
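The iterative scatter subtraction can be sketched as a fixed-point iteration. Everything numerical below is a hypothetical stand-in, not taken from the IPOR code: the single-scatter term, the multiple-scatter coefficient `k_multi` (standing in for the Monte Carlo pre-obtained correction coefficients), and the iteration count.

```python
import numpy as np

def restore(measured, single_scatter, k_multi=0.15, n_iter=20):
    """Iteratively remove scatter from a measured flux:
    total = uncollided + single-scatter + multiple-scatter,
    with the multiple-scatter flux approximated as k_multi * uncollided
    (k_multi plays the role of the MC-calibrated correction coefficient)."""
    uncollided = measured.copy()          # initial guess: no scatter at all
    for _ in range(n_iter):
        multiple = k_multi * uncollided   # current multiple-scatter estimate
        uncollided = measured - single_scatter - multiple
    return uncollided

rng = np.random.default_rng(0)
true_u = rng.uniform(1.0, 2.0, 8)         # "true" uncollided flux (synthetic)
single = 0.2 * true_u                     # hypothetical single-scatter flux
meas = true_u + single + 0.15 * true_u    # forward model: add both components
est = restore(meas, single)
print(np.max(np.abs(est - true_u)))       # iteration converges to the true flux
```

Because the multiple-scatter fraction is well below one, the iteration is a contraction and converges rapidly.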
Scattering Correction For Image Reconstruction In Flash Radiography
Energy Technology Data Exchange (ETDEWEB)
Cao, Liangzhi; Wang, Mengqi; Wu, Hongchun; Liu, Zhouyu; Cheng, Yuxiong; Zhang, Hongbo [Xi' an Jiaotong Univ., Xi' an (China)
2013-08-15
Scattered photons cause blurring and distortions in flash radiography, significantly reducing the accuracy of image reconstruction. The effect of the scattered photons is taken into account and an iterative subtraction of the scattered photons is proposed to correct the scattering effect for image restoration. In order to subtract the scattering contribution, the flux of scattered photons is estimated as the sum of two components. The single-scattered component is calculated accurately together with the uncollided flux along the characteristic ray, while the multiple-scattered component is evaluated using correction coefficients pre-obtained from Monte Carlo simulations. The arbitrary-geometry pretreatment and ray tracing are carried out based on the customization of AutoCAD. With the above model, an Iterative Procedure for image restORation code, IPOR, is developed. Numerical results demonstrate that the IPOR code is much more accurate than the direct reconstruction solution without scattering correction and that it has a very high computational efficiency.
Linear network error correction coding
Guang, Xuan
2014-01-01
There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank-metric codes for network error correction by representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an
Automatic computation of radiative corrections
International Nuclear Information System (INIS)
Fujimoto, J.; Ishikawa, T.; Shimizu, Y.; Kato, K.; Nakazawa, N.; Kaneko, T.
1997-01-01
Automated systems are reviewed, focusing on their general structure and the requirements specific to the calculation of radiative corrections. A detailed description of the system and its performance is presented, taking GRACE as a concrete example. (author)
Publisher Correction: On our bookshelf
Karouzos, Marios
2018-03-01
In the version of this Books and Arts originally published, the book title Spectroscopy for Amateur Astronomy was incorrect; it should have read Spectroscopy for Amateur Astronomers. This has now been corrected.
Self-correcting quantum computers
International Nuclear Information System (INIS)
Bombin, H; Chhajlany, R W; Horodecki, M; Martin-Delgado, M A
2013-01-01
Is the notion of a quantum computer (QC) resilient to thermal noise unphysical? We address this question from a constructive perspective and show that local quantum Hamiltonian models provide self-correcting QCs. To this end, we first give a sufficient condition on the connectedness of excitations for a stabilizer code model to be a self-correcting quantum memory. We then study the two main examples of topological stabilizer codes in arbitrary dimensions and establish their self-correcting capabilities. Also, we address the transversality properties of topological color codes, showing that six-dimensional color codes provide a self-correcting model that allows the transversal and local implementation of a universal set of operations in seven spatial dimensions. Finally, we give a procedure for initializing such quantum memories at finite temperature. (paper)
Correcting AUC for Measurement Error.
Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang
2015-12-01
Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
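The attenuation that the abstract describes, and the classical normality-based correction that the proposed method avoids, can be sketched for normally distributed biomarkers: measurement error shrinks the AUC toward 0.5, and when the biomarker's reliability is known the bias can be inverted. The reliability and AUC values below are illustrative only.

```python
from statistics import NormalDist

nd = NormalDist()

def observed_auc(true_auc, reliability):
    """AUC attenuated by classical measurement error: for normal biomarkers
    with reliability rho, AUC_obs = Phi(Phi^-1(AUC_true) * sqrt(rho))."""
    return nd.cdf(nd.inv_cdf(true_auc) * reliability ** 0.5)

def corrected_auc(obs_auc, reliability):
    """Invert the attenuation, given the (assumed known) reliability."""
    return nd.cdf(nd.inv_cdf(obs_auc) / reliability ** 0.5)

true_auc, rho = 0.80, 0.6                # illustrative true AUC and reliability
obs = observed_auc(true_auc, rho)        # biased toward 0.5 by measurement error
print(round(obs, 3), round(corrected_auc(obs, rho), 3))
```

The sketch shows why ignoring measurement error understates a biomarker's discriminative ability; the article's contribution is a correction that does not need the normality assumption used here.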
Libertarian Anarchism Is Apodictically Correct
Redford, James
2011-01-01
James Redford, "Libertarian Anarchism Is Apodictically Correct", Social Science Research Network (SSRN), Dec. 15, 2011, 9 pp., doi:10.2139/ssrn.1972733. ABSTRACT: It is shown that libertarian anarchism (i.e., consistent liberalism) is unavoidably true.
A new digitized reverse correction method for hypoid gears based on a one-dimensional probe
Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo
2017-12-01
In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometrical deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles and the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretical designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction method of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of the method of numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed by the organic combination of numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.
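The reverse-correction loop, mapping measured tooth-surface deviations back to machine-tool setting adjustments and iterating, can be sketched as a linearized least-squares problem. The sensitivity matrix `J` below is a random stand-in for the real mapping from machine-tool settings to surface points, and the setting errors are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.normal(size=(40, 5))    # sensitivity of 40 surface points to 5 settings (assumed)
p_true = np.array([0.02, -0.01, 0.03, 0.005, -0.02])   # unknown setting errors (invented)

def measure(p):
    """Tooth-surface deviation produced by a residual setting error p."""
    return J @ p

p = np.zeros(5)                  # accumulated machine-setting correction
for _ in range(3):               # "trial cut, measure, correct" loop
    dev = measure(p_true - p)    # measured deviations of the machined surface
    dp, *_ = np.linalg.lstsq(J, dev, rcond=None)   # reverse-solve for settings
    p += dp                      # apply the reverse correction

print(np.max(np.abs(measure(p_true - p))))   # residual surface deviation
```

In this linear toy model one correction already drives the deviations to numerical zero; the real problem is nonlinear, which is why the article needs an iterative reverse solution and trial cuts.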
Error correcting coding for OTN
DEFF Research Database (Denmark)
Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.
2010-01-01
Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard-decision decoding using product-type codes that cover a single OTN frame or a small number of such frames. In particular we argue that a three-error-correcting BCH code is the best choice for the component code in such systems.
Spelling Correction in User Interfaces.
1982-12-20
conventional typescript-oriented command language, where most commands consist of a verb followed by a sequence of arguments. Most user terminals are... and explanations, not part of the typescripts. 2. Design Issues. We were prompted to look for a new correction... the remaining 73% led us to wonder what other mechanisms might permit further corrections while retaining the typescript-style interface. Most of the other
Improved Correction of Misclassification Bias With Bootstrap Imputation.
van Walraven, Carl
2018-07-01
Diagnostic codes used in administrative database research can create bias due to misclassification. Quantitative bias analysis (QBA) can correct for this bias, requires only code sensitivity and specificity, but may return invalid results. Bootstrap imputation (BI) can also address misclassification bias but traditionally requires multivariate models to accurately estimate disease probability. This study compared misclassification bias correction using QBA and BI. Serum creatinine measures were used to determine severe renal failure status in 100,000 hospitalized patients. Prevalence of severe renal failure in 86 patient strata and its association with 43 covariates was determined and compared with results in which renal failure status was determined using diagnostic codes (sensitivity 71.3%, specificity 96.2%). Differences in results (misclassification bias) were then corrected with QBA or BI (using progressively more complex methods to estimate disease probability). In total, 7.4% of patients had severe renal failure. Imputing disease status with diagnostic codes exaggerated prevalence estimates [median relative change (range), 16.6% (0.8%-74.5%)] and its association with covariates [median (range) exponentiated absolute parameter estimate difference, 1.16 (1.01-2.04)]. QBA produced invalid results 9.3% of the time and increased bias in estimates of both disease prevalence and covariate associations. BI decreased misclassification bias with increasingly accurate disease probability estimates. QBA can produce invalid results and increase misclassification bias. BI avoids invalid results and can importantly decrease misclassification bias when accurate disease probability estimates are used.
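The prevalence side of quantitative bias analysis rests on the Rogan-Gladen estimator, which needs only code sensitivity and specificity; as the abstract notes, it can return invalid (out-of-range) results. The sketch below uses the article's reported sensitivity and specificity, but the observed prevalences are illustrative.

```python
def rogan_gladen(p_obs, sens, spec):
    """Correct an observed prevalence for misclassification:
    p_true = (p_obs + spec - 1) / (sens + spec - 1)."""
    return (p_obs + spec - 1.0) / (sens + spec - 1.0)

# With the article's code accuracy (sensitivity 71.3%, specificity 96.2%)
# and a plausible observed prevalence:
print(rogan_gladen(0.088, 0.713, 0.962))   # a valid corrected prevalence

# But an observed prevalence below the false-positive rate (1 - spec = 3.8%)
# drives the estimate negative -- an invalid result:
print(rogan_gladen(0.02, 0.713, 0.962))
```

This is exactly the failure mode the study quantifies (QBA invalid 9.3% of the time), and why bootstrap imputation with good disease-probability estimates is preferred.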
Segmented attenuation correction using artificial neural networks in positron tomography
International Nuclear Information System (INIS)
Yu, S.K.; Nahmias, C.
1996-01-01
The measured attenuation correction technique is widely used in cardiac positron tomographic studies. However, the success of this technique is limited because of the insufficient counting statistics achievable in practical transmission scan times and because of scattered radiation in the transmission measurement, which leads to an underestimation of the attenuation coefficients. In this work, a segmented attenuation correction technique has been developed that uses artificial neural networks. The technique has been validated in phantoms and verified in human studies. The results indicate that attenuation coefficients measured in the segmented transmission image are accurate and reproducible. Activity concentrations measured in the reconstructed emission image can also be recovered accurately using this new technique. The accuracy of the technique is subject independent and insensitive to scatter contamination in the transmission data. This technique has the potential of reducing the transmission scan time, and satisfactory results are obtained if the transmission data contain about 400 000 true counts per plane. It can accurately predict the value of any attenuation coefficient in the range from air to water in a transmission image with or without scatter correction. (author)
Use of eigenvectors in understanding and correcting storage ring orbits
International Nuclear Information System (INIS)
Friedman, A.; Bozoki, E.
1994-01-01
The response matrix A is defined by the equation X=AΘ, where Θ is the kick vector and X is the resulting orbit vector. Since A is not necessarily a symmetric or even a square matrix, we symmetrize it by using A T A. Then we find the eigenvalues and eigenvectors of this A T A matrix. The physical interpretation of the eigenvectors for circular machines is discussed. The task of the orbit correction is to find the kick vector Θ for a given measured orbit vector X. We present a method in which the kick vector is expressed as a linear combination of the eigenvectors. An additional advantage of this method is that it yields the smallest possible kick vector to correct the orbit. We illustrate the application of the method to the NSLS X-ray and UV storage rings and the resulting measurements. It will be evident that the accuracy of this method allows the combination of the global orbit correction and local optimization of the orbit for beam lines and insertion devices. The eigenvector decomposition can also be used for optimizing kick vectors, taking advantage of the fact that eigenvectors with small corresponding eigenvalues generate negligible orbit changes. Thus, one can reduce a kick vector calculated by any other correction method and still stay within the tolerance for orbit correction. The use of eigenvectors in accurately measuring the response matrix and the use of the eigenvalue decomposition orbit correction algorithm in digital feedback is discussed. (orig.)
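A minimal numerical sketch of the eigenvector method: symmetrize the response matrix with AᵀA, expand the kick in its eigenvectors, and drop the near-null eigenvalue directions so that only eigenvectors producing significant orbit changes contribute. The response matrix here is random, not a real lattice.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(30, 8))          # response matrix: orbit = A @ kick (synthetic)
x = rng.normal(size=30)               # measured orbit to be corrected (synthetic)

w, V = np.linalg.eigh(A.T @ A)        # eigen-decomposition of the symmetrized matrix
keep = w > 1e-8 * w.max()             # discard small-eigenvalue eigenvectors
coef = (V.T @ (A.T @ x)) / w          # expansion coefficients of the kick
theta = -V[:, keep] @ coef[keep]      # smallest kick vector correcting the orbit

residual = x + A @ theta              # orbit after applying the corrector kicks
print(np.linalg.norm(residual), np.linalg.norm(x))
```

With all eigenvectors kept this reduces to the least-squares (pseudoinverse) solution; truncating the small-eigenvalue terms is what lets one shrink the kick vector while staying within the orbit tolerance.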
Software for Correcting the Dynamic Error of Force Transducers
Directory of Open Access Journals (Sweden)
Naoki Miyashita
2014-07-01
Software which corrects the dynamic error of force transducers in impact force measurements using their own output signal has been developed. The software corrects the output waveform of the transducers using the output waveform itself, estimates its uncertainty and displays the results. In the experiment, the dynamic error of three transducers of the same model is evaluated using the Levitation Mass Method (LMM), in which the impact forces applied to the transducers are accurately determined as the inertial force of the moving part of the aerostatic linear bearing. The parameters for correcting the dynamic error are determined from the results of one set of impact measurements of one transducer. Then, the validity of the obtained parameters is evaluated using the results of the other sets of measurements of all three transducers. The uncertainties in the uncorrected force and those in the corrected force are also estimated. If manufacturers determine the correction parameters for each model using the proposed method, and provide the software with the parameters corresponding to each model, then users can obtain the waveform corrected against dynamic error and its uncertainty. The present status and the future prospects of the developed software are discussed in this paper.
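The generic idea of correcting a transducer's dynamic error from its own output can be sketched with a second-order sensor model: simulate the (distorted) output for a known impact force, then reconstruct the force from the output and its numerical derivatives. The natural frequency, damping ratio, and force profile below are invented, not parameters from the LMM experiments.

```python
import math

# Hypothetical transducer model: y'' + 2*z*w*y' + w^2*y = w^2*F(t)
w, z, dt = 2 * math.pi * 2000.0, 0.03, 1e-6   # assumed natural freq., damping, step
n = 5000
F = [math.sin(math.pi * i / n) ** 2 for i in range(n)]   # smooth 5 ms impact force

# Simulate the dynamically distorted transducer output (explicit Euler)
y, v, ys = 0.0, 0.0, []
for f in F:
    a = w * w * (f - y) - 2 * z * w * v
    y, v = y + v * dt, v + a * dt
    ys.append(y)

# Correct the dynamic error: F ~ y + (2*z/w)*y' + y''/w^2 (central differences)
Fc = [ys[i] + 2 * z / w * (ys[i + 1] - ys[i - 1]) / (2 * dt)
      + (ys[i + 1] - 2 * ys[i] + ys[i - 1]) / (dt * dt) / (w * w)
      for i in range(1, n - 1)]

err_raw = max(abs(ys[i] - F[i]) for i in range(1, n - 1))
err_cor = max(abs(Fc[i - 1] - F[i]) for i in range(1, n - 1))
print(err_raw, err_cor)   # corrected error is far smaller than the raw error
```

The correction inverts the sensor's transfer function; in practice (as in the article) the model parameters must themselves be identified from calibrated impact measurements.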
Comparative evaluation of scatter correction techniques in 3D positron emission tomography
Zaidi, H
2000-01-01
Much research and development has been concentrated on the scatter compensation required for quantitative 3D PET. Increasingly sophisticated scatter correction procedures are under investigation, particularly those based on accurate scatter models, and iterative reconstruction-based scatter compensation approaches. The main difference among the correction methods is the way in which the scatter component in the selected energy window is estimated. Monte Carlo methods give further insight and might in themselves offer a possible correction procedure. Methods: five scatter correction methods are compared in this paper where applicable: the dual-energy window (DEW) technique, the convolution-subtraction (CVS) method, two variants of the Monte Carlo-based scatter correction technique (MCBSC1 and MCBSC2) and our newly developed statistical reconstruction-based scatter correction (SRBSC) method. These scatter correction techniques are evaluated using Monte Carlo simulation studies, experimental phantom measurements...
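The convolution-subtraction (CVS) idea can be sketched in 1D: scatter is modeled as the (unknown) primary distribution convolved with a broad kernel, and is estimated iteratively starting from the measured profile. The kernel shape, scatter fraction, and iteration count below are illustrative, not values from any of the compared implementations.

```python
import numpy as np

def gauss_kernel(width, n=31):
    """Broad, normalized scatter kernel (illustrative Gaussian)."""
    x = np.arange(n) - n // 2
    k = np.exp(-0.5 * (x / width) ** 2)
    return k / k.sum()

def cvs_correct(measured, kernel, scatter_fraction=0.3, n_iter=10):
    """Convolution-subtraction: scatter ~ sf * (primary (*) kernel),
    refined iteratively from the measured profile."""
    primary = measured.copy()
    for _ in range(n_iter):
        scatter = scatter_fraction * np.convolve(primary, kernel, mode="same")
        primary = measured - scatter
    return primary

kern = gauss_kernel(6.0)
true_primary = np.zeros(101)
true_primary[40:60] = 1.0                       # simple synthetic emission profile
meas = true_primary + 0.3 * np.convolve(true_primary, kern, mode="same")
est = cvs_correct(meas, kern)
print(np.max(np.abs(est - true_primary)))       # converges to the true primary
```

Because the scatter fraction is below one, the iteration is a contraction and the estimated primary converges to the true profile in this idealized model.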
Quantum error correction for beginners
International Nuclear Information System (INIS)
Devitt, Simon J; Nemoto, Kae; Munro, William J
2013-01-01
Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)
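The 1995-era insight can be illustrated with the simplest example, the three-qubit repetition (bit-flip) code, simulated here purely classically for an X-error-only channel: encoding plus majority-vote recovery turns a per-qubit error rate p into a logical error rate of 3p²(1−p) + p³.

```python
import random

def encode(bit):
    """Repetition encoding: |0> -> |000>, |1> -> |111> (classical stand-in)."""
    return [bit, bit, bit]

def noisy(codeword, p, rng):
    """Flip each bit independently with probability p (bit-flip channel)."""
    return [b ^ (rng.random() < p) for b in codeword]

def decode(codeword):
    """Majority vote plays the role of syndrome measurement + correction."""
    return int(sum(codeword) >= 2)

rng = random.Random(0)
p = 0.05                 # per-qubit error probability
trials = 100_000
fails = sum(decode(noisy(encode(0), p, rng)) != 0 for _ in range(trials))
# Unencoded error rate: 5%. Encoded logical rate: 3p^2(1-p) + p^3 ~ 0.73%.
print(fails / trials)
```

The simulation is classical (it captures only X errors, not phase errors or superpositions), but it shows the essential active-error-correction gain that full QEC codes such as the Shor and stabilizer codes generalize.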
Methods and apparatus for environmental correction of thermal neutron logs
International Nuclear Information System (INIS)
Preeg, W.E.; Scott, H.D.
1983-01-01
An on-line environmentally-corrected measurement of the thermal neutron decay time (τ) of an earth formation traversed by a borehole is provided in a two-detector, pulsed neutron logging tool, by measuring τ at each detector and combining the two τ measurements in accordance with a previously established empirical relationship of the general form: τ = τ_F + A(τ_F + τ_N B) + C, where τ_F and τ_N are the τ measurements at the far-spaced and near-spaced detectors, respectively, A is a correction coefficient for borehole capture cross-section effects, B is a correction coefficient for neutron diffusion effects, and C is a constant related to parameters of the logging tool. Preferred numerical values of A, B and C are disclosed, as is a relationship for adapting the A term more accurately to specific borehole conditions. (author)
Neural network scatter correction technique for digital radiography
International Nuclear Information System (INIS)
Boone, J.M.
1990-01-01
This paper presents a scatter correction technique based on artificial neural networks. The technique utilizes the acquisition of a conventional digital radiographic image, coupled with the acquisition of a multiple pencil beam (micro-aperture) digital image. Image subtraction results in a sparsely sampled estimate of the scatter component in the image. The neural network is trained to develop a causal relationship between image data on the low-pass filtered open field image and the sparsely sampled scatter image, and then the trained network is used to correct the entire image (pixel by pixel) in a manner which is operationally similar to but potentially more powerful than convolution. The technique is described and is illustrated using clinical primary component images combined with scatter component images that are realistically simulated using the results from previously reported Monte Carlo investigations. The results indicate that an accurate scatter correction can be realized using this technique
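The pipeline described above (sparse micro-aperture scatter samples, a regressor trained on the low-pass open-field image, dense per-pixel scatter prediction, subtraction) can be sketched as follows. For brevity a linear least-squares fit stands in for the neural network, and all images and the scatter model are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
primary = rng.uniform(0.5, 1.5, (n, n))       # synthetic primary-component image
lowpass = primary                              # stand-in for the low-pass open-field image
scatter_true = 0.2 * lowpass + 0.05            # assumed scatter model (affine in intensity)
measured = primary + scatter_true              # conventional radiographic acquisition

# Sparse scatter samples, as from the micro-aperture (pencil-beam) subtraction image
idx = rng.choice(n * n, size=50, replace=False)
X = np.c_[lowpass.ravel()[idx], np.ones(50)]   # features: local intensity + bias term
y = scatter_true.ravel()[idx]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # "train" the regressor on sparse samples

scatter_est = coef[0] * lowpass + coef[1]      # predict scatter pixel by pixel
corrected = measured - scatter_est             # scatter-corrected image
print(np.max(np.abs(corrected - primary)))
```

In this idealized setup the correction is exact; the article's point is that a trained network can learn a richer, spatially varying intensity-to-scatter relationship than such a global fit or a fixed convolution kernel.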
Correcting the Chromatic Aberration in Barrel Distortion of Endoscopic Images
Directory of Open Access Journals (Sweden)
Y. M. Harry Ng
2003-04-01
Modern endoscopes offer physicians a wide-angle field of view (FOV) for minimally invasive therapies. However, the high level of barrel distortion may prevent accurate perception of the image. Fortunately, this kind of distortion may be corrected by digital image processing. In this paper we investigate the chromatic aberrations in the barrel distortion of endoscopic images. In the past, chromatic aberration in endoscopes was corrected by achromatic lenses or active lens control. In contrast, we take a computational approach by modifying the concept of image warping and the existing barrel distortion correction algorithm to tackle the chromatic aberration problem. In addition, an error function for the determination of the level of centroid coincidence is proposed. Simulation and experimental results confirm the effectiveness of our method.
Coincidence corrections for a multi-detector gamma spectrometer
Energy Technology Data Exchange (ETDEWEB)
Britton, R., E-mail: r.britton@surrey.ac.uk [University of Surrey, Guildford GU2 7XH (United Kingdom); AWE, Aldermaston, Reading, Berkshire RG7 4PR (United Kingdom); Burnett, J.L.; Davies, A.V. [AWE, Aldermaston, Reading, Berkshire RG7 4PR (United Kingdom); Regan, P.H. [University of Surrey, Guildford GU2 7XH (United Kingdom)
2015-01-01
List-mode data acquisition has been utilised in conjunction with a high-efficiency γ–γ coincidence system, allowing both the energetic and temporal information to be retained for each recorded event. Collected data are re-processed multiple times to extract any coincidence information from the γ-spectroscopy system, correct for the time-walk of low-energy events, and remove accidental coincidences from the projected coincidence spectra. The time-walk correction has resulted in a reduction in the width of the coincidence delay gate of 18.4±0.4%, and thus an equivalent removal of ‘background’ coincidences. The correction factors were applied to ∼5.6% of events up to ∼500 keV for a combined {sup 137}Cs and {sup 60}Co source, and are crucial for accurate coincidence measurements of low-energy events that may otherwise be missed by a standard delay gate. By extracting both the delay gate and a representative ‘background’ region for the coincidences, a coincidence-background-subtracted spectrum is projected from the coincidence matrix, which effectively removes ∼100% of the accidental coincidences (up to 16.6±0.7% of the total coincidence events seen during this work). This accidental-coincidence removal is crucial for accurate characterisation of the events seen in coincidence systems, as without this correction false coincidence signatures may be incorrectly interpreted.
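The accidental-coincidence removal can be sketched on a toy time-difference histogram: true coincidences cluster near Δt = 0 while accidentals are flat in Δt, so projecting a prompt delay gate and an equal-width off-peak background region and subtracting the two counts removes the accidental contribution. All rates, widths, and gate positions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy Δt distribution (ns): a time-walk-broadened true-coincidence peak
# plus a flat accidental background.
true_dt = rng.normal(0.0, 20.0, 5000)
acc_dt = rng.uniform(-500.0, 500.0, 20000)
dt = np.concatenate([true_dt, acc_dt])

gate = (-60.0, 60.0)     # prompt delay gate (+/- 3 sigma of the peak)
bkg = (200.0, 320.0)     # representative background region, same width

prompt = np.count_nonzero((dt > gate[0]) & (dt < gate[1]))
accidental = np.count_nonzero((dt > bkg[0]) & (dt < bkg[1]))
net = prompt - accidental   # background-subtracted coincidence count
print(net, len(true_dt))    # net recovers the number of true coincidences
```

This also shows why the time-walk correction matters: narrowing the delay gate (possible only once low-energy events are walk-corrected) shrinks the flat accidental contribution in direct proportion to the gate width.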
Tranchida, D.; Piccarolo, S.; Loos, J.; Alexeev, A.A.
2006-01-01
The Oliver and Pharr [J. Mater. Res. 7, 1564 (1992)] procedure is a widely used tool to analyze nanoindentation force curves obtained on metals or ceramics. Its application to polymers is, however, difficult, as Young's moduli are commonly overestimated mainly because of viscoelastic effects and
Dion, Nathalie; Cotart, Jean-Louis; Rabilloud, Muriel
2007-04-01
We quantified the link between tooth deterioration and malnutrition in institutionalized elderly subjects, taking into account the major risk factors for malnutrition and adjusting for the measurement error made in using the Mini Nutritional Assessment questionnaire. Data stem from a survey conducted in 2005 in 1094 subjects >or=60 y of age from a large sample of 100 institutions of the Rhône-Alpes region of France. A Bayesian approach was used to quantify the effect of tooth deterioration on malnutrition through a two-level logistic regression. This approach allowed taking into account the uncertainty on sensitivity and specificity of the Mini Nutritional Assessment questionnaire to adjust for the measurement error of that test. After adjustment for other risk factors, the risk of malnutrition increased significantly and continuously 1.15 times (odds ratio 1.15, 95% credibility interval 1.06-1.25) whenever the masticatory percentage decreased by 10 points, which is equivalent to the loss of two molars. The strongest factors that augmented the probability of malnutrition were deglutition disorders, depression, and verbal inconsistency. Dependency was also an important factor; the odds of malnutrition nearly doubled for each additional grade of dependency (graded 6 to 1). Diabetes, central neurodegenerative disease, and carcinoma tended to increase the probability of malnutrition but their effect was not statistically significant. Dental status should be considered a serious risk factor for malnutrition. Regular dental examination and care should preserve functional dental integrity to prevent malnutrition in institutionalized elderly people.
Beesems, Stefanie G.; Koster, Rudolph W.
2014-01-01
TrueCPR is a new real-time compression depth feedback device that measures changes in magnetic field strength between a back pad and a chest pad. We determined its accuracy with a manikin on a test bench and on various surfaces. First, calibration and accuracy of the manikin and TrueCPR was verified
Fully 3D refraction correction dosimetry system
International Nuclear Information System (INIS)
Manjappa, Rakesh; Makki, S Sharath; Kanhirodan, Rajan; Kumar, Rajesh; Vasu, Ram Mohan
2016-01-01
medium is 71.8%, an increase of 6.4% compared to that achieved using the conventional ART algorithm. Smaller-diameter dosimeters are scanned with dry air scanning by using a wide-angle lens that collects refracted light. The images reconstructed using cone-beam geometry are seen to deteriorate in some planes, as those regions are not scanned. Refraction correction is important and needs to be taken into consideration to achieve quantitatively accurate dose reconstructions. Refraction modeling is crucial in array-based scanners as it is not possible to identify refracted rays in the sinogram space. (paper)
Fully 3D refraction correction dosimetry system.
Manjappa, Rakesh; Makki, S Sharath; Kumar, Rajesh; Vasu, Ram Mohan; Kanhirodan, Rajan
2016-02-21
medium is 71.8%, an increase of 6.4% compared to that achieved using the conventional ART algorithm. Smaller-diameter dosimeters are scanned with dry air scanning by using a wide-angle lens that collects refracted light. The images reconstructed using cone-beam geometry are seen to deteriorate in some planes, as those regions are not scanned. Refraction correction is important and needs to be taken into consideration to achieve quantitatively accurate dose reconstructions. Refraction modeling is crucial in array-based scanners as it is not possible to identify refracted rays in the sinogram space.
Surgical correction of postoperative astigmatism
Directory of Open Access Journals (Sweden)
Lindstrom Richard
1990-01-01
The photokeratoscope has increased the understanding of the aspheric nature of the cornea as well as a better understanding of normal corneal topography. This has significantly affected the development of newer and more predictable models of surgical astigmatic correction. Relaxing incisions effectively flatten the steeper meridian by an equivalent amount as they steepen the flatter meridian. The net change in spherical equivalent is, therefore, negligible. Poor predictability is the major limitation of relaxing incisions. Wedge resection can correct large degrees of postkeratoplasty astigmatism. Resection of 0.10 mm of tissue results in approximately 2 diopters of astigmatic correction. Prolonged postoperative rehabilitation and induced irregular astigmatism are limitations of the procedure. Transverse incisions flatten the steeper meridian by an equivalent amount as they steepen the flatter meridian. Semiradial incisions result in two times the amount of flattening in the meridian of the incision compared to the meridian 90 degrees away. The combination of transverse incisions with semiradial incisions describes the trapezoidal astigmatic keratotomy. This procedure may correct from 5.5 to 11.0 diopters, depending upon the age of the patient. The use of the surgical keratometer is helpful in assessing a proper endpoint during surgical correction of astigmatism.
1994-05-27
In "Women in Science: Some Books of the Year" (11 March, p. 1458) the name of the senior editor of second edition of The History of Women and Science, Health, and Technology should have been given as Phyllis Holman Weisbard, and the name of the editor of the first edition should have been given as Susan Searing. Also, the statement that the author of A Matter of Choices: Memoirs of a Female Physicist, Fay Ajzenberg-Selove, is now retired was incorrect.
2016-02-01
In the October In Our Unit article by Cooper et al, “Against All Odds: Preventing Pressure Ulcers in High-Risk Cardiac Surgery Patients” (Crit Care Nurse. 2015;35[5]:76–82), there was an error in the reference citation on page 82. At the top of that page, reference 18 cited on the second line should be reference 23, which also should be added to the References list: 23. AHRQ website. Prevention and treatment program integrates actionable reports into practice, significantly reducing pressure ulcers in nursing home residents. November 2008. https://innovations.ahrq.gov/profiles/prevention-and-treatment-program-integrates-actionable-reports-practice-significantly. Accessed November 18, 2015
2015-06-01
Gillon R. Defending the four principles approach as a good basis for good medical practice and therefore for good medical ethics. J Med Ethics 2015;41:111–6. The author misrepresented Beauchamp and Childress when he wrote: ‘My own view (unlike Beauchamp and Childress who explicitly state that they make no such claim ( p. 421)1, is that all moral agents whether or not they are doctors or otherwise involved in healthcare have these prima facie moral obligations; but in the context of answering the question ‘what is it to do good medical ethics ?’ my claim is limited to the ethical obligations of doctors’. The author intended and should have written the following: ‘My own view, unlike Beauchamp and Childress who explicitly state that they make no such claim (p.421)1 is that these four prima facie principles can provide a basic moral framework not only for medical ethics but for ethics in general’.
2015-03-01
In the January 2015 issue of Cyberpsychology, Behavior, and Social Networking (vol. 18, no. 1, pp. 3–7), the article "Individual Differences in Cyber Security Behaviors: An Examination of Who Is Sharing Passwords." by Prof. Monica Whitty et al., has an error in wording in the abstract. The sentence in question was originally printed as: Contrary to our hypotheses, we found older people and individuals who score high on self-monitoring were more likely to share passwords. It should read: Contrary to our hypotheses, we found younger people and individuals who score high on self-monitoring were more likely to share passwords. The authors wish to apologize for the error.
2007-01-01
From left to right: Luis, Carmen, Mario, Christian and José listening to speeches by theorists Alvaro De Rújula and Luis Alvarez-Gaumé (right) at their farewell gathering on 15 May.We unfortunately cut out a part of the "Word of thanks" from the team retiring from Restaurant No. 1. The complete message is published below: Dear friends, You are the true "nucleus" of CERN. Every member of this extraordinary human mosaic will always remain in our affections and in our thoughts. We have all been very touched by your spontaneous generosity. Arrivederci, Mario Au revoir,Christian Hasta Siempre Carmen, José and Luis PS: Lots of love to the theory team and to the hidden organisers. So long!
2014-01-01
In the meeting report "Strategies to observe and understand processes and drivers in the biogeosphere," published in the 14 January 2014 issue of Eos (95(2), 16, doi:10.1002/2014EO020004), an incorrect affiliation was listed for one coauthor. Michael Young is with the University of Texas at Austin.
Energy Technology Data Exchange (ETDEWEB)
Hubeny, V.
2005-01-12
We investigate the geometry of four-dimensional black hole solutions in the presence of stringy higher-curvature corrections to the low-energy effective action. For certain supersymmetric two-charge black holes these corrections drastically alter the causal structure of the solution, converting seemingly pathological null singularities into timelike singularities hidden behind a finite-area horizon. We establish, analytically and numerically, that the string-corrected two-charge black hole metric has the same Penrose diagram as the extremal four-charge black hole. The higher-derivative terms lead to another dramatic effect: the gravitational force exerted by a black hole on an inertial observer is no longer purely attractive. The magnitude of this effect is related to the size of the compactification manifold.
Self correction works better than teacher correction in EFL setting
Directory of Open Access Journals (Sweden)
Azizollah Dabaghi
2012-11-01
Full Text Available Learning a foreign language takes place step by step, and mistakes are to be expected at all stages of learning. EFL learners are usually afraid of making mistakes, which prevents them from being receptive and responsive. Overcoming the fear of mistakes depends on the way mistakes are rectified. It is believed that autonomy and learner-centeredness suggest that in some settings a learner's self-correction of mistakes might be more beneficial for language learning than the teacher's correction. This assumption has been the subject of debate for some time. Some researchers believe that correction, whether by the teacher or by the learners themselves, is effective in showing them how their current interlanguage differs from the target (Long & Robinson, 1998). Others suggest that correcting the students, whether directly or through recasts, is ambiguous and may be perceived by the learner as confirmation of meaning rather than feedback on form (Lyster, 1998a). This study investigates the effects of correction on Iranian intermediate EFL learners' writing composition at Payam Noor University. For this purpose, 90 English-major students studying at Isfahan Payam Noor University were invited to participate in the experiment. They all received a sample TOEFL test, and a total of 60 participants whose scores were within one standard deviation below and above the mean were divided into two equal groups, experimental and control. The experimental group went through some correction during the experiment while the control group remained intact and the ordinary processes of teaching went on. Each group received twelve sessions of two-hour classes every week on an advanced writing course in which some activities of Modern English (II) were selected. Then, after the treatment, both groups received an immediate test as a post-test, and the experimental group took the second post-test as the delayed recall test with the same design as the
Correcting quantum errors with entanglement.
Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu
2006-10-20
We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.
Self-correcting Multigrid Solver
International Nuclear Information System (INIS)
Lewandowski, Jerome L.V.
2004-01-01
A new multigrid algorithm based on the method of self-correction for the solution of elliptic problems is described. The method exploits information contained in the residual to dynamically modify the source term (right-hand side) of the elliptic problem. It is shown that the self-correcting solver is more efficient at damping the short wavelength modes of the algebraic error than its standard equivalent. When used in conjunction with a multigrid method, the resulting solver displays an improved convergence rate with no additional computational work.
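As a generic illustration of the residual-driven idea (this is not the paper's self-correcting multigrid algorithm; the problem, function names, and parameter values here are our own), a weighted-Jacobi sweep on a small 1D Poisson problem shows how the residual measures the remaining algebraic error that any correction scheme works from:

```python
# Weighted-Jacobi sweeps on -u'' = b with A = tridiag(-1, 2, -1) and
# zero Dirichlet ends; the residual r = b - A x is what a
# residual-driven correction scheme would exploit.

def jacobi_sweep(x, b, omega=2.0 / 3.0):
    """One weighted-Jacobi sweep; boundary values stay fixed at zero."""
    x_new = x[:]
    for i in range(1, len(x) - 1):
        x_new[i] = (1.0 - omega) * x[i] + omega * 0.5 * (x[i - 1] + x[i + 1] + b[i])
    return x_new

def residual(x, b):
    """r = b - A x at the interior points (zero on the boundary)."""
    r = [0.0] * len(x)
    for i in range(1, len(x) - 1):
        r[i] = b[i] - (2.0 * x[i] - x[i - 1] - x[i + 1])
    return r

n = 9
b = [1.0] * n
x = [0.0] * n
norms = []
for _ in range(100):
    x = jacobi_sweep(x, b)
    norms.append(max(abs(v) for v in residual(x, b)))
```

The max-norm of the residual shrinks as the sweeps proceed; the slow decay of its smooth components is exactly what motivates coupling such smoothers with a multigrid cycle.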
Brane cosmology with curvature corrections
International Nuclear Information System (INIS)
Kofinas, Georgios; Maartens, Roy; Papantonopoulos, Eleftherios
2003-01-01
We study the cosmology of the Randall-Sundrum brane-world where the Einstein-Hilbert action is modified by curvature correction terms: a four-dimensional scalar curvature from induced gravity on the brane, and a five-dimensional Gauss-Bonnet curvature term. The combined effect of these curvature corrections to the action removes the infinite-density big bang singularity, although the curvature can still diverge for some parameter values. A radiation brane undergoes accelerated expansion near the minimal scale factor, for a range of parameters. This acceleration is driven by the geometric effects, without an inflation field or negative pressures. At late times, conventional cosmology is recovered. (author)
Characterization of 3-Dimensional PET Systems for Accurate Quantification of Myocardial Blood Flow.
Renaud, Jennifer M; Yip, Kathy; Guimond, Jean; Trottier, Mikaël; Pibarot, Philippe; Turcotte, Eric; Maguire, Conor; Lalonde, Lucille; Gulenchyn, Karen; Farncombe, Troy; Wisenberg, Gerald; Moody, Jonathan; Lee, Benjamin; Port, Steven C; Turkington, Timothy G; Beanlands, Rob S; deKemp, Robert A
2017-01-01
Three-dimensional (3D) mode imaging is the current standard for PET/CT systems. Dynamic imaging for quantification of myocardial blood flow with short-lived tracers, such as 82Rb-chloride, requires accuracy to be maintained over a wide range of isotope activities and scanner counting rates. We proposed new performance standard measurements to characterize the dynamic range of PET systems for accurate quantitative imaging. 82Rb or 13N-ammonia (1,100-3,000 MBq) was injected into the heart wall insert of an anthropomorphic torso phantom. A decaying isotope scan was obtained over 5 half-lives on 9 different 3D PET/CT systems and 1 3D/2-dimensional PET-only system. Dynamic images (28 × 15 s) were reconstructed using iterative algorithms with all corrections enabled. Dynamic range was defined as the maximum activity in the myocardial wall with less than 10% bias, from which corresponding dead-time, counting rates, and/or injected activity limits were established for each scanner. Scatter correction residual bias was estimated as the maximum cavity blood-to-myocardium activity ratio. Image quality was assessed via the coefficient of variation measuring nonuniformity of the left ventricular myocardium activity distribution. Maximum recommended injected activity/body weight, peak dead-time correction factor, counting rates, and residual scatter bias for accurate cardiac myocardial blood flow imaging were 3-14 MBq/kg, 1.5-4.0, 22-64 Mcps singles and 4-14 Mcps prompt coincidence counting rates, and 2%-10% on the investigated scanners. Nonuniformity of the myocardial activity distribution varied from 3% to 16%. Accurate dynamic imaging is possible on the 10 3D PET systems if the maximum injected MBq/kg values are respected to limit peak dead-time losses during the bolus first-pass transit. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
International Nuclear Information System (INIS)
Jain, P.C.
1985-12-01
The monthly average daily values of the extraterrestrial irradiation on a horizontal plane and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. These are generally calculated by solar scientists and engineers each time they are needed, often using approximate short-cut methods. Using the accurate analytical expressions developed by Spencer for the declination and the eccentricity correction factor, computations for these parameters have been made for all latitude values from 90 deg. N to 90 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Monthly average daily values of the maximum possible sunshine duration as recorded on a Campbell-Stokes sunshine recorder are also computed and presented. These tables avoid the need for repetitive and approximate calculations and serve as a useful ready reference providing accurate values to solar energy scientists and engineers.
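Spencer's analytical expressions referred to above are short Fourier series in the day angle; a sketch using the standard published coefficients (the function names are ours) also derives the maximum possible sunshine duration from the sunset hour angle:

```python
import math

# Spencer's Fourier-series approximations for solar declination and the
# eccentricity correction factor, plus the maximum possible sunshine
# duration obtained from the sunset hour angle.

def day_angle(n):
    """Day angle in radians for day-of-year n (1..365)."""
    return 2.0 * math.pi * (n - 1) / 365.0

def declination(n):
    """Solar declination (radians) from Spencer's series."""
    g = day_angle(n)
    return (0.006918 - 0.399912 * math.cos(g) + 0.070257 * math.sin(g)
            - 0.006758 * math.cos(2 * g) + 0.000907 * math.sin(2 * g)
            - 0.002697 * math.cos(3 * g) + 0.00148 * math.sin(3 * g))

def eccentricity_correction(n):
    """Eccentricity correction factor E0 = (r0/r)^2 from Spencer's series."""
    g = day_angle(n)
    return (1.000110 + 0.034221 * math.cos(g) + 0.001280 * math.sin(g)
            + 0.000719 * math.cos(2 * g) + 0.000077 * math.sin(2 * g))

def max_sunshine_hours(latitude_deg, n):
    """Maximum possible sunshine duration (hours), N = (2/15) * ws in degrees."""
    phi = math.radians(latitude_deg)
    cos_ws = -math.tan(phi) * math.tan(declination(n))
    cos_ws = max(-1.0, min(1.0, cos_ws))  # clamp for polar day/night
    return (2.0 / 15.0) * math.degrees(math.acos(cos_ws))
```

At the equator the day length is 12 h year-round, which makes a convenient sanity check; near the June solstice the declination peaks at about 23.45 degrees.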
Gould, Ian R; Wosinska, Zofia M; Farid, Samir
2006-01-01
Accurate oxidation potentials for organic compounds are critical for the evaluation of thermodynamic and kinetic properties of their radical cations. Except when using a specialized apparatus, electrochemical oxidation of molecules with reactive radical cations is usually an irreversible process, providing peak potentials, E(p), rather than thermodynamically meaningful oxidation potentials, E(ox). In a previous study on amines with radical cations that underwent rapid decarboxylation, we estimated E(ox) by correcting the E(p) from cyclic voltammetry with rate constants for decarboxylation obtained using laser flash photolysis. Here we use redox equilibration experiments to determine accurate relative oxidation potentials for the same amines. We also describe an extension of these experiments to show how relative oxidation potentials can be obtained in the absence of equilibrium, from a complete kinetic analysis of the reversible redox kinetics. The results provide support for the previous cyclic voltammetry/laser flash photolysis method for determining oxidation potentials.
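The redox-equilibration approach rests on the Nernst relation between an equilibrium constant and a potential difference; a minimal sketch (the function name is ours and the numbers are illustrative, not the paper's data):

```python
import math

# For the equilibration D1 + D2(radical cation) <=> D1(radical cation) + D2,
# the measured equilibrium constant K gives the difference in oxidation
# potentials via dE = (R*T/F) * ln(K).

R = 8.314462618   # molar gas constant, J/(mol K)
F = 96485.33212   # Faraday constant, C/mol

def delta_eox_volts(K, T=298.15):
    """Oxidation-potential difference (V) from equilibrium constant K at T."""
    return (R * T / F) * math.log(K)
```

At room temperature each factor of 10 in K corresponds to about 59 mV, which sets the precision scale for relative potentials obtained this way.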
An efficient and accurate method for calculating nonlinear diffraction beam fields
Energy Technology Data Exchange (ETDEWEB)
Jeong, Hyun Jo; Cho, Sung Jong; Nam, Ki Woong; Lee, Jang Hyun [Division of Mechanical and Automotive Engineering, Wonkwang University, Iksan (Korea, Republic of)
2016-04-15
This study develops an efficient and accurate method for calculating nonlinear diffraction beam fields propagating in fluids or solids. The Westervelt equation and quasilinear theory, from which the integral solutions for the fundamental and second harmonics can be obtained, are first considered. A computationally efficient method is then developed using a multi-Gaussian beam (MGB) model that easily separates the diffraction effects from the plane wave solution. The MGB models provide accurate beam fields when compared with the integral solutions for a number of transmitter-receiver geometries. These models can also serve as fast, powerful modeling tools for many nonlinear acoustics applications, especially in making diffraction corrections for the nonlinearity parameter determination, because of their computational efficiency and accuracy.
On an efficient and accurate method to integrate restricted three-body orbits
Murison, Marc A.
1989-01-01
This work is a quantitative analysis of the advantages of the Bulirsch-Stoer (1966) method, demonstrating that this method is certainly worth considering when working with small N dynamical systems. The results, qualitatively suspected by many users, are quantitatively confirmed as follows: (1) the Bulirsch-Stoer extrapolation method is very fast and moderately accurate; (2) regularization of the equations of motion stabilizes the error behavior of the method and is, of course, essential during close approaches; and (3) when applicable, a manifold-correction algorithm reduces numerical errors to the limits of machine accuracy. In addition, for the specific case of the restricted three-body problem, even a small eccentricity for the orbit of the primaries drastically affects the accuracy of integrations, whether regularized or not; the circular restricted problem integrates much more accurately.
Accurate adiabatic energy surfaces for the ground and first excited states of He2+
International Nuclear Information System (INIS)
Lee, E.P.F.
1993-01-01
Different factors affecting the accuracy of the computed energy surfaces of the ground and first excited state of He2+ have been examined, including the choice of the one- and many-particle bases, the configurational space in the MRCI (multi-reference configuration interaction) calculations, and other corrections such as the Davidson and the full counterpoise (CP) corrections. From basis-variation studies, it was concluded that multi-reference direct-CI calculations (MRDCI) using CASSCF MOs and/or natural orbitals (NOs) from a smaller CISD calculation gave results close to full CI. The computed dissociation energies, De, for the ground and first excited state of He2+ were 2.4670 (2.4659) eV and 17.2 (17.1) cm^-1, respectively, at the highest level [without and with CP correction for basis-set superposition errors (BSSE)] of calculation with an [11s8p3d1f] GTO contraction, in reasonably good agreement with previous calculations and estimated correct values, where available. It is believed that the computed De and the energy surface for the first excited state should be reasonably accurate. However, for the ground state, the effects of multiple f functions and/or functions of higher angular momentum have not been investigated owing to limitations of the available computing resources. This is probably the only weakness in the present study. (Author)
An attenuation correction method for PET/CT images
International Nuclear Information System (INIS)
Ue, Hidenori; Yamazaki, Tomohiro; Haneishi, Hideaki
2006-01-01
In PET/CT systems, accurate attenuation correction can be achieved by creating an attenuation map from an X-ray CT image. On the other hand, respiratory-gated PET acquisition is an effective method for avoiding motion blurring of the thoracic and abdominal organs caused by respiratory motion. In PET/CT systems employing respiratory-gated PET, using an X-ray CT image acquired during breath-holding for attenuation correction may have a large effect on the voxel values, especially in regions with substantial respiratory motion. In this report, we propose an attenuation correction method in which, as the first step, a set of respiratory-gated PET images is reconstructed without attenuation correction; as the second step, the motion of each phase's PET image relative to the PET image in the same phase as the CT acquisition timing is estimated by a previously proposed method; as the third step, the CT image corresponding to each respiratory phase is generated from the original CT image by deformation according to the motion vector maps; and as the final step, attenuation correction using these CT images and reconstruction are performed. The effectiveness of the proposed method was evaluated using 4D-NCAT phantoms, and good stability of the voxel values near the diaphragm was observed. (author)
Atmospheric Error Correction of the Laser Beam Ranging
Directory of Open Access Journals (Sweden)
J. Saydi
2014-01-01
Full Text Available Atmospheric models based on surface measurements of pressure, temperature, and relative humidity have been used to increase laser ranging accuracy by ray tracing. Atmospheric refraction can cause significant errors in laser ranging systems. In the present research, the atmospheric effects on the laser beam were investigated using the principles of laser ranging. The atmospheric correction was calculated for 0.532, 1.3, and 10.6 micron wavelengths under the weather conditions of Tehran, Isfahan, and Bushehr in Iran from March 2012 to March 2013, on the basis of monthly means of the meteorological data received from meteorological stations in Tehran, Isfahan, and Bushehr. The atmospheric correction was calculated for laser beam propagation over 11, 100, and 200 kilometers at rising angles of 30°, 60°, and 90° for each propagation. The results of the study showed that, for the same months and beam emission angles, the atmospheric correction was most accurate for the 10.6 micron wavelength. The laser ranging error decreased with increasing laser emission angle. The atmospheric corrections of the Marini-Murray and Mendes-Pavlis models for 0.532 micron were also compared.
Touchless attitude correction for satellite with constant magnetic moment
Ao, Hou-jun; Yang, Le-ping; Zhu, Yan-wei; Zhang, Yuan-wen; Huang, Huan
2017-09-01
Rescue of a satellite with an attitude fault is of great value. A satellite with an improper injection attitude may lose contact with the ground as the antenna points in the wrong direction, or encounter energy problems as the solar arrays do not face the sun. An improper uploaded command may put the attitude out of control, as exemplified by the Japanese Hitomi spacecraft. In engineering practice, traditional physical-contact approaches have been applied, yet with a potential risk of collision and a lack of versatility, since the mechanical systems are mission-specific. This paper puts forward a touchless attitude correction approach in which three satellites are considered, one having a constant dipole and two having magnetic coils to control the attitude of the first. Particular correction configurations are designed and analyzed to maintain the target's orbit during the attitude correction process. A reference coordinate system is introduced to simplify the control process and avoid the singular-value problem of Euler angles. Based on the basic relations of the spherical triangle, the accurate varying geomagnetic field is considered in the attitude dynamic model. A sliding-mode control method is utilized to design the correction law. Finally, numerical simulation is conducted to verify the theoretical derivation. It can be safely concluded that the touchless attitude correction approach for a satellite with a uniaxial constant magnetic moment is feasible and potentially applicable to on-orbit operations.
Directory of Open Access Journals (Sweden)
David M. Benoit
2011-08-01
Full Text Available We present a theoretical framework for the computation of anharmonic vibrational frequencies for large systems, with a particular focus on determining adsorbate frequencies from first principles. We give a detailed account of our local implementation of the vibrational self-consistent field approach and its correlation corrections. We show that our approach is robust and accurate, and can be easily deployed on computational grids to provide an efficient computational tool. We also present results on the vibrational spectrum of hydrogen fluoride on pyrene, on the thiophene molecule in the gas phase, and on small neutral gold clusters.
A Generalized Correction for Attenuation.
Petersen, Anne C.; Bock, R. Darrell
Use of the usual bivariate correction for attenuation with more than two variables presents two statistical problems. This pairwise method may produce a covariance matrix which is not at least positive semi-definite, and the bivariate procedure does not consider the possible influences of correlated errors among the variables. The method described…
Entropic corrections to Newton's law
International Nuclear Information System (INIS)
Setare, M R; Momeni, D; Myrzakulov, R
2012-01-01
In this short paper, we calculate separately the generalized uncertainty principle (GUP) and self-gravitational corrections to Newton's gravitational formula. We show that for a complete description of the GUP and self-gravity effects, both the temperature and entropy must be modified. (paper)
Correction of unrealizable service choreographies
Mancioppi, M.
2015-01-01
This thesis is devoted to the detection and correction of design flaws affecting service choreographies. Service choreographies are models that specify how software services are composed in a decentralized, message-driven fashion. In particular, this work focuses on flaws that compromise the
Multilingual text induced spelling correction
Reynaert, M.W.C.
2004-01-01
We present TISC, a multilingual, language-independent and context-sensitive spelling checking and correction system designed to facilitate the automatic removal of non-word spelling errors in large corpora. Its lexicon is derived from raw text corpora, without supervision, and contains word unigrams
The correct "ball bearings" data.
Caroni, C
2002-12-01
The famous data on fatigue failure times of ball bearings have been quoted incorrectly from Lieblein and Zelen's original paper. The correct data include censored values, as well as non-fatigue failures that must be handled appropriately. They could be described by a mixture of Weibull distributions, corresponding to different modes of failure.
Interaction and self-correction
DEFF Research Database (Denmark)
Satne, Glenda Lucila
2014-01-01
and acquisition. I then criticize two models that have been dominant in thinking about conceptual competence, the interpretationist and the causalist models. Both fail to meet NC, by failing to account for the abilities involved in conceptual self-correction. I then offer an alternative account of self...
CORRECTIVE ACTION IN CAR MANUFACTURING
Directory of Open Access Journals (Sweden)
H. Rohne
2012-01-01
Full Text Available
ENGLISH ABSTRACT: In this paper the important issues involved in successfully implementing corrective action systems in quality management are discussed. The work is based on experience in implementing and operating such a system in an automotive manufacturing enterprise in South Africa. The core of a corrective action system is good documentation, supported by a computerised information system. Secondly, a systematic problem-solving methodology is essential to resolve the quality-related problems identified by the system. In the following paragraphs the general corrective action process is discussed and the elements of a corrective action system are identified, followed by a more detailed discussion of each element. Finally, specific results from the application are discussed.
AFRIKAANSE OPSOMMING (translated): Important considerations in the successful implementation of corrective action systems in quality management are discussed in this article. The work is based on experience in implementing and operating such a system at a car manufacturer in South Africa. The core of a corrective action system is good documentation, supported by a computerised information system. Secondly, a systematic problem-solving methodology is needed to address the quality-related problems that the system identifies. In the following paragraphs the general corrective action process is discussed and the elements of the corrective action system are identified. Each element is then discussed in more detail. Finally, specific results from the application are briefly covered.
DEFF Research Database (Denmark)
Martinez Peñas, Umberto; Pellikaan, Ruud
2017-01-01
Error-correcting pairs were introduced as a general method of decoding linear codes with respect to the Hamming metric using coordinatewise products of vectors, and are used for many well-known families of codes. In this paper, we define new types of vector products, extending the coordinatewise ...
African Journals Online (AJOL)
TOSHIBA
Leaf area models are simple, accurate and non-destructive. They are important in many ... area model for S. macrocarpon using linear measurements. A total of 80 fully opened ... Regression analysis of leaf area obtained from graph tracing as ...
Deconvolution based attenuation correction for time-of-flight positron emission tomography
Lee, Nam-Yong
2017-10-01
For an accurate quantitative reconstruction of the radioactive tracer distribution in positron emission tomography (PET), we need to take into account the attenuation of the photons by the tissues. For this purpose, we propose an attenuation correction method for the case when a direct measurement of the attenuation distribution in the tissues is not available. The proposed method can determine the attenuation factor up to a constant multiple by exploiting the consistency condition that the exact deconvolution of the noise-free time-of-flight (TOF) sinogram must satisfy. Simulation studies show that the proposed method corrects attenuation artifacts quite accurately for TOF sinograms over a wide range of temporal resolutions and noise levels, and improves the image reconstruction for TOF sinograms of higher temporal resolutions by providing more accurate attenuation correction.
Bunch mode specific rate corrections for PILATUS3 detectors
Energy Technology Data Exchange (ETDEWEB)
Trueb, P., E-mail: peter.trueb@dectris.com [DECTRIS Ltd, 5400 Baden (Switzerland); Dejoie, C. [ETH Zurich, 8093 Zurich (Switzerland); Kobas, M. [DECTRIS Ltd, 5400 Baden (Switzerland); Pattison, P. [EPF Lausanne, 1015 Lausanne (Switzerland); Peake, D. J. [School of Physics, The University of Melbourne, Victoria 3010 (Australia); Radicci, V. [DECTRIS Ltd, 5400 Baden (Switzerland); Sobott, B. A. [School of Physics, The University of Melbourne, Victoria 3010 (Australia); Walko, D. A. [Argonne National Laboratory, Argonne, IL 60439 (United States); Broennimann, C. [DECTRIS Ltd, 5400 Baden (Switzerland)
2015-04-09
The count rate behaviour of PILATUS3 detectors has been characterized for seven bunch modes at four different synchrotrons. The instant retrigger technology of the PILATUS3 application-specific integrated circuit is found to reduce the dependency of the required rate correction on the synchrotron bunch mode. The improvement of using bunch mode specific rate corrections based on a Monte Carlo simulation is quantified. PILATUS X-ray detectors are in operation at many synchrotron beamlines around the world. This article reports on the characterization of the new PILATUS3 detector generation at high count rates. As for all counting detectors, the measured intensities have to be corrected for the dead-time of the counting mechanism at high photon fluxes. The large number of different bunch modes at these synchrotrons as well as the wide range of detector settings presents a challenge for providing accurate corrections. To avoid the intricate measurement of the count rate behaviour for every bunch mode, a Monte Carlo simulation of the counting mechanism has been implemented, which is able to predict the corrections for arbitrary bunch modes and a wide range of detector settings. This article compares the simulated results with experimental data acquired at different synchrotrons. It is found that the usage of bunch mode specific corrections based on this simulation improves the accuracy of the measured intensities by up to 40% for high photon rates and highly structured bunch modes. For less structured bunch modes, the instant retrigger technology of PILATUS3 detectors substantially reduces the dependency of the rate correction on the bunch mode. The acquired data also demonstrate that the instant retrigger technology allows for data acquisition up to 15 million photons per second per pixel.
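For context, the two textbook dead-time models already show why measured intensities must be corrected at high count rates; this is a hedged sketch only (function names are ours), not the bunch-mode-specific Monte Carlo simulation described above:

```python
import math

# Standard dead-time models for a counting detector with dead-time tau.
# Non-paralyzable: m = n / (1 + n*tau), which can be inverted exactly.
# Paralyzable:     m = n * exp(-n*tau), which cannot.

def observed_rate_nonparalyzable(true_rate, tau):
    """Measured rate m for true rate n under the non-paralyzable model."""
    return true_rate / (1.0 + true_rate * tau)

def corrected_rate_nonparalyzable(measured_rate, tau):
    """Invert the non-paralyzable model: n = m / (1 - m*tau)."""
    return measured_rate / (1.0 - measured_rate * tau)

def observed_rate_paralyzable(true_rate, tau):
    """Measured rate under the paralyzable model."""
    return true_rate * math.exp(-true_rate * tau)
```

For example, at a true rate of 10^6 counts/s and tau = 120 ns, the non-paralyzable model loses about 11% of the counts, and applying the inverse formula recovers the true rate; structured bunch modes complicate this simple picture, which is what the simulation in the article addresses.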
Correcting saturation of detectors for particle/droplet imaging methods
International Nuclear Information System (INIS)
Kalt, Peter A M
2010-01-01
Laser-based diagnostic methods are being applied to more and more flows of theoretical and practical interest and are revealing interesting new flow features. Imaging particles or droplets in nephelometry and laser sheet dropsizing methods requires a trade-off: maximizing the signal-to-noise ratio without over-saturating the detector. Droplet and particle imaging results in a lognormal distribution of pixel intensities, and it is possible to fit a derived lognormal distribution to the histogram of measured pixel intensities. If pixel intensities are clipped at a saturation value, the presumed probability density function (pdf) shape without the effects of saturation can be estimated from the lognormal fit to the unsaturated part of the histogram. Information about the presumed shape of the pixel intensity pdf is used to generate corrections that can be applied to data to account for saturation. The effects of even slight saturation are shown to be a significant source of error in the derived average. The influence of saturation on the derived root mean square (rms) is even more pronounced. It is found that errors in the determined average exceed 5% when the number of saturated samples exceeds 3% of the total. Errors in the rms are 20% for a similar saturation level. This study also attempts to delineate limits within which the detector saturation can be accurately corrected. It is demonstrated that a simple method for reshaping the clipped part of the pixel intensity histogram makes accurate corrections to account for saturated pixels. These outcomes can be used to correct a saturated signal, quantify the effect of saturation on a derived average, and offer a method to correct the derived average in the case of slight to moderate saturation of pixels.
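A minimal numerical sketch of the saturation bias described above, assuming lognormally distributed pixel intensities clipped at a detector saturation level (all parameter values here are illustrative, not taken from the study):

```python
import random

# Draw lognormal "pixel intensities", clip them at a saturation value,
# and compare the derived averages: clipping biases the mean low.

random.seed(0)
mu, sigma = 5.0, 0.5      # lognormal parameters in log-space (illustrative)
saturation = 250.0        # detector clipping level (illustrative)

samples = [random.lognormvariate(mu, sigma) for _ in range(100_000)]
clipped = [min(s, saturation) for s in samples]

true_mean = sum(samples) / len(samples)
clipped_mean = sum(clipped) / len(clipped)
frac_saturated = sum(s >= saturation for s in samples) / len(samples)
```

With these parameters a noticeable fraction of pixels saturates and the clipped average falls below the true one, which is the bias the lognormal-fit correction is designed to remove.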
A correction on coastal heads for groundwater flow models.
Lu, Chunhui; Werner, Adrian D; Simmons, Craig T; Luo, Jian
2015-01-01
We introduce a simple correction to coastal heads for constant-density groundwater flow models that contain a coastal boundary, based on previous analytical solutions for interface flow. The results demonstrate that accurate discharge to the sea in confined aquifers can be obtained by direct application of Darcy's law (for constant-density flow) if the coastal heads are corrected to ((α + 1)/α)hs - B/(2α), in which hs is the mean sea level above the aquifer base, B is the aquifer thickness, and α is the density factor. For unconfined aquifers, the coastal head should be assigned the value hs√((1 + α)/α). The accuracy of using these corrections is demonstrated by consistency between the constant-density Darcy solution and variable-density flow numerical simulations. The errors introduced by adopting two previous approaches (i.e., no correction, and using the equivalent fresh water head at the middle position of the aquifer to represent the hydraulic head at the coastal boundary) are evaluated. Sensitivity analysis shows that errors in discharge to the sea could be larger than 100% for typical coastal aquifer parameter ranges. The location of observation wells relative to the toe is a key factor controlling the estimation error, as it determines the relative aquifer length of constant-density flow relative to variable-density flow. The coastal head correction method introduced in this study facilitates the rapid and accurate estimation of the fresh water flux from a given hydraulic head measurement and allows for an improved representation of the coastal boundary condition in regional constant-density groundwater flow models. © 2014, National Ground Water Association.
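The quoted head corrections can be sketched directly; here alpha is the density factor (roughly 40 for fresh water over seawater), the unconfined expression is hs times the square root of (1 + alpha)/alpha, and the function names and sample values are our own:

```python
import math

# Corrected coastal heads for constant-density groundwater models.
# hs: mean sea level above the aquifer base; B: confined-aquifer
# thickness; alpha: density factor rho_f / (rho_s - rho_f).

def corrected_head_confined(hs, B, alpha):
    """Confined aquifer: ((alpha + 1)/alpha) * hs - B/(2*alpha)."""
    return (alpha + 1.0) / alpha * hs - B / (2.0 * alpha)

def corrected_head_unconfined(hs, alpha):
    """Unconfined aquifer: hs * sqrt((1 + alpha)/alpha)."""
    return hs * math.sqrt((1.0 + alpha) / alpha)
```

For hs = 30 m, B = 20 m, and alpha = 40, the confined correction raises the boundary head slightly above mean sea level (30.5 m), which is the adjustment that makes constant-density Darcy discharge match the variable-density result.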
Effect of Inhomogeneity correction for lung volume model in TPS
International Nuclear Information System (INIS)
Chung, Se Young; Lee, Sang Rok; Kim, Young Bum; Kwon, Young Ho
2004-01-01
see that the uncorrected value and the measured value show margins of error of 16% and 14%, respectively. Moreover, the 3D values showed a lower margin of error than the 2D values. Correction according to tissue density must be performed during radiation therapy planning. To ensure more accurate planning, use of a 3D planning system is recommended over a 2D planning system, as it allows a more accurate revision of the therapy plan. Moreover, the most accurate and appropriate inhomogeneity correction algorithm for the 3D planning system should be selected and verified through actual measurement. In addition, comparison and analysis through TLD or film dosimetry are needed.
Correcting ligands, metabolites, and pathways
Directory of Open Access Journals (Sweden)
Vriend Gert
2006-11-01
Full Text Available Abstract Background A wide range of research areas in bioinformatics, molecular biology and medicinal chemistry require precise chemical structure information about molecules and reactions, e.g. drug design, ligand docking, metabolic network reconstruction, and systems biology. Most available databases, however, treat chemical structures more as illustrations than as data fields in their own right. Lack of chemical accuracy impedes progress in the areas mentioned above. We present a database of metabolites called BioMeta that augments the existing pathway databases by explicitly assessing the validity, correctness, and completeness of chemical structure and reaction information. Description The main bulk of the data in BioMeta were obtained from the KEGG Ligand database. We developed a tool for chemical structure validation which assesses the chemical validity and stereochemical completeness of a molecule description. The validation tool was used to examine the compounds in BioMeta, showing that a relatively small number of compounds had an incorrect constitution (connectivity only, not considering stereochemistry) and that a considerable number (about one third) had incomplete or even incorrect stereochemistry. We made a large effort to correct the errors and to complete the structural descriptions. A total of 1468 structures were corrected and/or completed. We also established the reaction balance of the reactions in BioMeta and corrected 55% of the unbalanced (stoichiometrically incorrect) reactions in an automatic procedure. The BioMeta database was implemented in PostgreSQL and provided with a web-based interface. Conclusion We demonstrate that the validation of metabolite structures and reactions is a feasible and worthwhile undertaking, and that the validation results can be used to trigger corrections and improvements to BioMeta, our metabolite database. BioMeta provides some tools for rational drug design, reaction searches, and
Video Error Correction Using Steganography
Robie, David L.; Mersereau, Russell M.
2002-12-01
The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information, together with several error concealment techniques in the decoder. The decoder resynchronizes more quickly and with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
Personalized recommendation with corrected similarity
International Nuclear Information System (INIS)
Zhu, Xuzhen; Tian, Hui; Cai, Shimin
2014-01-01
Personalized recommendation has attracted a surge of interdisciplinary research. In particular, similarity-based methods have achieved great success in real recommendation systems. However, computed similarities are often over- or underestimated, in particular because of the defective strategy of unidirectional similarity estimation. In this paper, we address this drawback by leveraging mutual correction of forward and backward similarity estimations, and propose a new personalized recommendation index, corrected similarity based inference (CSI). Through extensive experiments on four benchmark datasets, the results show that CSI achieves a marked improvement over mainstream baselines. A detailed analysis is also presented to unveil and understand the origin of the difference between CSI and mainstream indices. (paper)
Video Error Correction Using Steganography
Directory of Open Access Journals (Sweden)
Robie David L
2002-01-01
Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information, together with several error concealment techniques in the decoder. The decoder resynchronizes more quickly and with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
Corrective action program reengineering project
International Nuclear Information System (INIS)
Vernick, H.R.
1996-01-01
A series of similar refueling floor events that occurred during the early 1990s prompted Susquehanna steam electric station (SSES) management to launch a broad-based review of how the Nuclear Department conducts business. This was accomplished through the formation of several improvement initiative teams. Clearly, one of the key areas that benefited from this management initiative was the corrective action program. The corrective action improvement team was charged with taking a comprehensive look at how the Nuclear Department identified and resolved problems. The 10-member team included management and bargaining unit personnel as well as an external management consultant. This paper provides a summary of this self-assessment initiative, including a discussion of the issues identified, opportunities for improvement, and subsequent completed or planned actions
Corrected body surface potential mapping.
Krenzke, Gerhard; Kindt, Carsten; Hetzer, Roland
2007-02-01
In the method for body surface potential mapping described here, the influence of thorax shape on measured ECG values is corrected. The distances of the ECG electrodes from the electrical heart midpoint are determined using a special device for ECG recording. These distances are used to correct the ECG values as if they had been measured on the surface of a sphere with a radius of 10 cm with its midpoint localized at the electrical heart midpoint. The equipotential lines of the electrical heart field are represented on the virtual surface of such a sphere. It is demonstrated that the character of a dipole field is better represented if the influence of the thorax shape is reduced. The site of the virtual reference electrode is also important for the dipole character of the representation of the electrical heart field.
Interaction and Self-Correction
Directory of Open Access Journals (Sweden)
Glenda Lucila Satne
2014-07-01
Full Text Available In this paper I address the question of how to account for the normative dimension involved in conceptual competence in a naturalistic framework. First, I present what I call the Naturalist Challenge (NC), referring to both the phylogenetic and ontogenetic dimensions of conceptual possession and acquisition. I then criticize two models that have been dominant in thinking about conceptual competence, the interpretationist and the causalist models. Both fail to meet NC, by failing to account for the abilities involved in conceptual self-correction. I then offer an alternative account of self-correction that I develop with the help of the interactionist theory of mutual understanding arising from recent developments in Phenomenology and Developmental Psychology.
Energy dependence corrections to MOSFET dosimetric sensitivity
International Nuclear Information System (INIS)
Cheung, T.; Yu, P.K.N.; Butson, M.J.; Illawarra Cancer Care Centre, Crown St, Wollongong
2009-01-01
Metal Oxide Semiconductor Field Effect Transistors (MOSFETs) are dosimeters now frequently utilized in radiotherapy treatment applications. An improved MOSFET clinical semiconductor dosimetry system (CSDS), which utilizes improved packaging for the MOSFET device, has been studied for energy dependence of sensitivity in x-ray radiation measurement. Energy dependence from 50 kVp to 10 MV x-rays has been studied and found to vary by up to a factor of 3.2, with 75 kVp producing the highest sensitivity response. The detector's average life span in high-sensitivity mode is energy related and ranges from approximately 100 Gy for 75 kVp x-rays to approximately 300 Gy at 6 MV x-ray energy. The MOSFET detector has also been studied for sensitivity variations with integrated dose history. It was found to become less sensitive to radiation with age, and the magnitude of this effect is dependent on radiation energy, with lower energies producing a larger sensitivity reduction with integrated dose. The reduction in sensitivity is, however, approximated reproducibly by a slightly non-linear, second-order polynomial function, allowing corrections to be made to readings to account for this effect and provide more accurate dose assessments both in phantom and in vivo.
Energy dependence corrections to MOSFET dosimetric sensitivity.
Cheung, T; Butson, M J; Yu, P K N
2009-03-01
Metal Oxide Semiconductor Field Effect Transistors (MOSFETs) are dosimeters now frequently utilized in radiotherapy treatment applications. An improved MOSFET clinical semiconductor dosimetry system (CSDS), which utilizes improved packaging for the MOSFET device, has been studied for energy dependence of sensitivity in x-ray radiation measurement. Energy dependence from 50 kVp to 10 MV x-rays has been studied and found to vary by up to a factor of 3.2, with 75 kVp producing the highest sensitivity response. The detector's average life span in high-sensitivity mode is energy related and ranges from approximately 100 Gy for 75 kVp x-rays to approximately 300 Gy at 6 MV x-ray energy. The MOSFET detector has also been studied for sensitivity variations with integrated dose history. It was found to become less sensitive to radiation with age, and the magnitude of this effect is dependent on radiation energy, with lower energies producing a larger sensitivity reduction with integrated dose. The reduction in sensitivity is, however, approximated reproducibly by a slightly non-linear, second-order polynomial function, allowing corrections to be made to readings to account for this effect and provide more accurate dose assessments both in phantom and in vivo.
EPS Young Physicist Prize - CORRECTION
2009-01-01
The original text for the article 'Prizes aplenty in Krakow' in Bulletin 30-31 assigned the award of the EPS HEPP Young Physicist Prize to Maurizio Pierini. In fact he shared the prize with Niki Saoulidou of Fermilab, who was rewarded for her contribution to neutrino physics, as the article now correctly indicates. We apologise for not having named Niki Saoulidou in the original article.
Publisher Correction: Eternal blood vessels
Hindson, Jordan
2018-05-01
This article was originally published with an incorrect reference for the original article. The reference has been amended. Please see the correct reference below. Qiu, Y. et al. Microvasculature-on-a-chip for the long-term study of endothelial barrier dysfunction and microvascular obstruction in disease. Nat. Biomed. Eng. https://doi.org/10.1038/s41551-018-0224-z (2018)
An overview of correctional psychiatry.
Metzner, Jeffrey; Dvoskin, Joel
2006-09-01
Supermax facilities may be an unfortunate and unpleasant necessity in modern corrections. Because of the serious dangers posed by prison gangs, they are unlikely to disappear completely from the correctional landscape any time soon. But such units should be carefully reserved for those inmates who pose the most serious danger to the prison environment. Further, the constitutional duty to provide medical and mental health care does not end at the supermax door. There is a great deal of common ground between the opponents of such environments and those who view them as a necessity. No one should want these expensive beds to be used for people who could be more therapeutically and safely managed in mental health treatment environments. No one should want people with serious mental illnesses to be punished for their symptoms. Finally, no one wants these units to make people more, instead of less, dangerous. It is in everyone's interests to learn as much as possible about the potential of these units for good and for harm. Corrections is a profession, and professions base their practices on data. If we are to avoid the most egregious and harmful effects of supermax confinement, we need to understand them far better than we currently do. Though there is a role for advocacy from those supporting or opposed to such environments, there is also a need for objective, scientifically rigorous study of these units and the people who live there.
Accurate formulas for the penalty caused by interferometric crosstalk
DEFF Research Database (Denmark)
Rasmussen, Christian Jørgen; Liu, Fenghai; Jeppesen, Palle
2000-01-01
New simple formulas for the penalty caused by interferometric crosstalk in PIN receiver systems and optically preamplified receiver systems are presented. They are more accurate than existing formulas.
A new, accurate predictive model for incident hypertension
DEFF Research Database (Denmark)
Völzke, Henry; Fung, Glenn; Ittermann, Till
2013-01-01
Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures.
Accurate and Simple Calibration of DLP Projector Systems
DEFF Research Database (Denmark)
Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus
2014-01-01
does not rely on an initial camera calibration, and so does not carry the error over into the projector calibration. A radial interpolation scheme is used to convert feature coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination...
Accurate Compton scattering measurements for N{sub 2} molecules
Energy Technology Data Exchange (ETDEWEB)
Kobayashi, Kohjiro [Advanced Technology Research Center, Gunma University, 1-5-1 Tenjin-cho, Kiryu, Gunma 376-8515 (Japan); Itou, Masayoshi; Tsuji, Naruki; Sakurai, Yoshiharu [Japan Synchrotron Radiation Research Institute (JASRI), 1-1-1 Kouto, Sayo-cho, Sayo-gun, Hyogo 679-5198 (Japan); Hosoya, Tetsuo; Sakurai, Hiroshi, E-mail: sakuraih@gunma-u.ac.jp [Department of Production Science and Technology, Gunma University, 29-1 Hon-cho, Ota, Gunma 373-0057 (Japan)
2011-06-14
The accurate Compton profiles of N{sub 2} gas were measured using 121.7 keV synchrotron x-rays. The present accurate measurement shows that the CI (configuration interaction) calculation agrees better with the data than the Hartree-Fock calculation does, and suggests the importance of multi-excitation in the CI calculations for the accuracy of wavefunctions in ground states.
A Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images
Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei
2018-04-01
Topographic correction of surface reflectance in rugged terrain is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of the slope surface from high-quality satellite images such as Landsat8 OLI. However, as more and more image data become available from various sensors, sometimes we cannot obtain the accurate sensor calibration parameters and atmospheric conditions needed by the physics-based topographic correction model. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images without accurate calibration parameters. Based on this model we can obtain the topographically corrected surface reflectance from DN data, and we tested and verified the model with image data from the Chinese satellites HJ and GF. The results show that the correlation factor was reduced by almost 85% for the near-infrared bands and that the overall classification accuracy increased by 14% after correction for HJ. The reflectance difference between slopes facing toward and away from the sun was also reduced after correction.
International Nuclear Information System (INIS)
Boussion, N; Hatt, M; Lamare, F; Bizais, Y; Turzo, A; Rest, C Cheze-Le; Visvikis, D
2006-01-01
Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography. They lead to a loss of signal in tissues of size similar to the point spread function and induce activity spillover between regions. Although PVE can be corrected for by using algorithms that provide the correct radioactivity concentration in a series of regions of interest (ROIs), so far little attention has been given to the possibility of creating improved images as a result of PVE correction. Potential advantages of PVE-corrected images include the ability to accurately delineate functional volumes as well as improving the tumour-to-background ratio, resulting in an associated improvement in the analysis of response-to-therapy studies and diagnostic examinations, respectively. The objective of our study was therefore to develop a methodology for PVE correction not only to enable the accurate recovery of activity concentrations, but also to generate PVE-corrected images. In the multiresolution analysis that we define here, details of a high-resolution image H (MRI or CT) are extracted, transformed and integrated in a low-resolution image L (PET or SPECT). A discrete wavelet transform of both H and L images is performed by using the 'à trous' algorithm, which allows the spatial frequencies (details, edges, textures) to be obtained easily at a level of resolution common to H and L. A model is then inferred to build the lacking details of L from the high-frequency details in H. The process was successfully tested on synthetic and simulated data, proving the ability to obtain accurately corrected images. Quantitative PVE correction was found to be comparable with a method considered as a reference but limited to ROI analyses. Visual improvement and quantitative correction were also obtained in two examples of clinical images, the first using a combined PET/CT scanner with a lymphoma patient and the second using a FDG brain PET and corresponding T1-weighted MRI in
Professional orientation and pluralistic ignorance among jail correctional officers.
Cook, Carrie L; Lane, Jodi
2014-06-01
Research about the attitudes and beliefs of correctional officers has historically been conducted in prison facilities while ignoring jail settings. This study contributes to our understanding of correctional officers by examining the perceptions of those who work in jails, specifically measuring professional orientations about counseling roles, punitiveness, corruption of authority by inmates, and social distance from inmates. The study also examines whether officers are accurate in estimating these same perceptions of their peers, a line of inquiry that has been relatively ignored. Findings indicate that the sample was concerned about various aspects of their job and the management of inmates. Specifically, officers were uncertain about adopting counseling roles, were somewhat punitive, and were concerned both with maintaining social distance from inmates and with an inmate's ability to corrupt their authority. Officers also misperceived the professional orientation of their fellow officers and assumed their peer group to be less progressive than they actually were.
Reduction of density-modification bias by β correction
International Nuclear Information System (INIS)
Skubák, Pavol; Pannu, Navraj S.
2011-01-01
A cross-validation-based method for bias reduction in ‘classical’ iterative density modification of experimental X-ray crystallography maps provides significantly more accurate phase-quality estimates and leads to improved automated model building. Density modification often suffers from an overestimation of phase quality, as seen by escalated figures of merit. A new cross-validation-based method to address this estimation bias by applying a bias-correction parameter ‘β’ to maximum-likelihood phase-combination functions is proposed. In tests on over 100 single-wavelength anomalous diffraction data sets, the method is shown to produce much more reliable figures of merit and improved electron-density maps. Furthermore, significantly better results are obtained in automated model building iterated with phased refinement using the more accurate phase probability parameters from density modification
Measurement and correction of leaf open times in helical tomotherapy
International Nuclear Information System (INIS)
Sevillano, David; Mínguez, Cristina; Sánchez, Alicia; Sánchez-Reyes, Alberto
2012-01-01
showed that, while treatments affected by latency effects were improved, those affected by individual leaf errors were not. Conclusions: Measurement of MLC performance in real treatments provides the authors with a valuable tool for ensuring the quality of HT delivery. The LOTs of MLC are very accurate in most cases. Sources of error were found and correction methods proposed and applied. The corrections decreased the amount of LOT errors. The dosimetric impact of these corrections should be evaluated more thoroughly using 3D dose distribution analysis.
Measurement and correction of leaf open times in helical tomotherapy
Energy Technology Data Exchange (ETDEWEB)
Sevillano, David; Minguez, Cristina; Sanchez, Alicia; Sanchez-Reyes, Alberto [Department of Medical Physics, Tomotherapy Unit, Grupo IMO, Madrid 28010 (Spain)
2012-11-15
showed that, while treatments affected by latency effects were improved, those affected by individual leaf errors were not. Conclusions: Measurement of MLC performance in real treatments provides the authors with a valuable tool for ensuring the quality of HT delivery. The LOTs of MLC are very accurate in most cases. Sources of error were found and correction methods proposed and applied. The corrections decreased the amount of LOT errors. The dosimetric impact of these corrections should be evaluated more thoroughly using 3D dose distribution analysis.
A Horizontal Tilt Correction Method for Ship License Numbers Recognition
Liu, Baolong; Zhang, Sanyuan; Hong, Zhenjie; Ye, Xiuzi
2018-02-01
An automatic ship license number (SLN) recognition system plays a significant role in intelligent waterway transportation systems, since it can be used to identify ships by recognizing the characters in SLNs. Tilt occurs frequently in many SLN images because the monitors and the ships often meet at large vertical or horizontal angles, which significantly decreases the accuracy and robustness of an SLN recognition system. In this paper, we present a horizontal tilt correction method for SLNs. For an input tilted SLN image, the proposed method accomplishes the correction task in three main steps. First, an MSER-based character center-point computation algorithm is designed to compute accurate center-points of the characters contained in the input SLN image. Second, an L1-L2 distance-based straight line is fitted to the computed center-points using an M-estimator algorithm; the tilt angle is estimated at this stage. Finally, based on the computed tilt angle, an affine rotation is applied to rotate the input SLN into the horizontal. Tested on 200 tilted SLN images, the proposed method proves effective, with a tilt correction rate of 80.5%.
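The three-step pipeline in this abstract (character center-points, line fit, rotation) can be sketched in a few lines. This is a simplified illustration, not the authors' code: a plain least-squares fit stands in for the MSER detection and the robust M-estimator fit, and the affine step reduces to a pure rotation:

```python
import numpy as np

def correct_horizontal_tilt(points):
    """Estimate the tilt angle from character center-points and
    return (angle in degrees, de-tilted points).

    points : (N, 2) array of (x, y) character centers.
    """
    pts = np.asarray(points, dtype=float)
    # Fit y = a*x + b; the tilt angle is atan(a).
    a, b = np.polyfit(pts[:, 0], pts[:, 1], 1)
    theta = np.arctan(a)
    # Rotate by -theta about the centroid to level the line of characters.
    c, s = np.cos(-theta), np.sin(-theta)
    R = np.array([[c, -s], [s, c]])
    centered = pts - pts.mean(axis=0)
    return np.degrees(theta), centered @ R.T + pts.mean(axis=0)

# Center-points lying on a 10-degree line come back horizontal:
ang = np.deg2rad(10.0)
line = np.c_[np.arange(5.0), np.tan(ang) * np.arange(5.0)]
theta_deg, fixed = correct_horizontal_tilt(line)
```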
Two dimensional spatial distortion correction algorithm for scintillation GAMMA cameras
International Nuclear Information System (INIS)
Chaney, R.; Gray, E.; Jih, F.; King, S.E.; Lim, C.B.
1985-01-01
Spatial distortion in an Anger gamma camera originates fundamentally from the discrete nature of scintillation light sampling with an array of PMTs. Historically, digital distortion correction started with a method based on distortion measurement using a 1-D slit pattern and subsequent on-line bi-linear approximation with 64 x 64 look-up tables for X and Y. However, the X and Y distortions are inherently two-dimensional in nature, and thus the validity of this 1-D calibration method becomes questionable as distortion amplitudes increase with the effort to obtain better spatial and energy resolutions. The authors have developed a new, accurate 2-D correction algorithm. This method involves the steps of: data collection from a 2-D orthogonal hole pattern, 2-D distortion vector measurement, 2-D Lagrangian polynomial interpolation, and transformation to the X, Y ADC frame. The impact of the numerical precision used in correction and the accuracy of the bilinear approximation with varying look-up table size have been carefully examined through computer simulation using a measured single-PMT light response function together with Anger positioning logic. The accuracy of different-order Lagrangian polynomial interpolations for correction table expansion from hole centroids was also investigated. The detailed algorithm and computer simulations are presented along with camera test results
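The on-line look-up-table step described above amounts to bilinear interpolation of stored distortion vectors at each event position. A hedged sketch follows; the table contents, frame convention, and function name are hypothetical, and the calibration that fills the tables is not shown:

```python
import numpy as np

def apply_distortion_correction(x, y, dx_table, dy_table, frame=1.0):
    """Correct an event position (x, y) by bilinearly interpolating
    64 x 64 distortion look-up tables and subtracting the result.

    x, y in [0, frame); dx_table/dy_table hold the distortion vectors
    sampled on a regular grid over the camera frame.
    """
    n = dx_table.shape[0]
    gx = x / frame * (n - 1)
    gy = y / frame * (n - 1)
    i = min(int(gx), n - 2)
    j = min(int(gy), n - 2)
    fx, fy = gx - i, gy - j

    def bilerp(t):
        # Standard bilinear blend of the four surrounding table cells.
        return ((1 - fx) * (1 - fy) * t[j, i] + fx * (1 - fy) * t[j, i + 1]
                + (1 - fx) * fy * t[j + 1, i] + fx * fy * t[j + 1, i + 1])

    return x - bilerp(dx_table), y - bilerp(dy_table)

# With a constant 0.01 shift stored in the X table, every event moves by -0.01:
dx = np.full((64, 64), 0.01)
dy = np.zeros((64, 64))
xc, yc = apply_distortion_correction(0.5, 0.5, dx, dy)
```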
A vibration correction method for free-fall absolute gravimeters
Qian, J.; Wang, G.; Wu, K.; Wang, L. J.
2018-02-01
An accurate determination of gravitational acceleration, usually approximated as 9.8 m s-2, has been playing an important role in the areas of metrology, geophysics, and geodesy. Absolute gravimetry has been experiencing rapid developments in recent years. Most absolute gravimeters today employ a free-fall method to measure gravitational acceleration. Noise from ground vibration has become one of the most serious factors limiting measurement precision. Compared to vibration isolators, the vibration correction method is a simple and feasible way to reduce the influence of ground vibrations. A modified vibration correction method is proposed and demonstrated. A two-dimensional golden section search algorithm is used to search for the best parameters of the hypothetical transfer function. Experiments using a T-1 absolute gravimeter are performed. It is verified that for an identical group of drop data, the modified method proposed in this paper can achieve better correction effects with much less computation than previous methods. Compared to vibration isolators, the correction method applies to more hostile environments and even dynamic platforms, and is expected to be used in a wider range of applications.
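The two-dimensional golden-section search mentioned above can be approximated by alternating one-dimensional golden-section sweeps over the two transfer-function parameters. This is an illustrative stand-in, not the authors' algorithm, and the toy objective below is hypothetical:

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Standard 1-D golden-section minimisation of unimodal f on [a, b]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2.0

def golden_section_2d(f, ax, bx, ay, by, iters=20):
    """Minimise f(x, y) by alternating 1-D golden-section sweeps,
    a simple stand-in for a two-dimensional golden-section scheme."""
    x, y = (ax + bx) / 2.0, (ay + by) / 2.0
    for _ in range(iters):
        x = golden_section(lambda u: f(u, y), ax, bx)
        y = golden_section(lambda v: f(x, v), ay, by)
    return x, y

# Toy residual with its minimum at (gain, delay) = (0.3, 1.2):
gain, delay = golden_section_2d(
    lambda g, t: (g - 0.3) ** 2 + (t - 1.2) ** 2, 0.0, 1.0, 0.0, 2.0)
```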
An Automated Baseline Correction Method Based on Iterative Morphological Operations.
Chen, Yunliang; Dai, Liankui
2018-05-01
Raman spectra usually suffer from baseline drift caused by fluorescence or other reasons. Therefore, baseline correction is a necessary and crucial step that must be performed before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method first adaptively determines the structuring element and then gradually removes the spectral peaks during iteration to obtain an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible enough to handle different kinds of baselines in various practical situations. Comparison of the proposed method with some state-of-the-art baseline correction methods demonstrates its advantages over the existing methods in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method can hopefully be applied to baseline correction of other analytical instrumental signals, such as IR spectra and chromatograms.
International Nuclear Information System (INIS)
Schmidt, Tobias; Kümmel, Stephan; Kraisler, Eli; Makmal, Adi; Kronik, Leeor
2014-01-01
We present and test a new approximation for the exchange-correlation (xc) energy of Kohn-Sham density functional theory. It combines exact exchange with a compatible non-local correlation functional. The functional is by construction free of one-electron self-interaction, respects constraints derived from uniform coordinate scaling, and has the correct asymptotic behavior of the xc energy density. It contains one parameter that is not determined ab initio. We investigate whether it is possible to construct a functional that yields accurate binding energies and affords other advantages, specifically Kohn-Sham eigenvalues that reliably reflect ionization potentials. Tests for a set of atoms and small molecules show that within our local-hybrid form accurate binding energies can be achieved by proper optimization of the free parameter in our functional, along with an improvement in dissociation energy curves and in Kohn-Sham eigenvalues. However, the correspondence of the latter to experimental ionization potentials is not yet satisfactory, and if we choose to optimize their prediction, a rather different value of the functional's parameter is obtained. We put this finding in a larger context by discussing similar observations for other functionals and possible directions for further functional development that our findings suggest
Atmospheric correction of APEX hyperspectral data
Directory of Open Access Journals (Sweden)
Sterckx Sindy
2016-03-01
Full Text Available Atmospheric correction plays a crucial role among the processing steps applied to remotely sensed hyperspectral data. Atmospheric correction comprises a group of procedures needed to remove atmospheric effects from observed spectra, i.e. the transformation from at-sensor radiances to at-surface radiances or reflectances. In this paper we present the different steps in the atmospheric correction process for APEX hyperspectral data as applied by the Central Data Processing Center (CDPC) at the Flemish Institute for Technological Research (VITO, Mol, Belgium). The MODerate resolution atmospheric TRANsmission program (MODTRAN) is used to determine the source of radiation and for applying the actual atmospheric correction. As part of the overall correction process, supporting algorithms are provided in order to derive MODTRAN configuration parameters and to account for specific effects, e.g. correction for adjacency effects, haze and shadow correction, and topographic BRDF correction. The methods and theory underlying these corrections and an example of an application are presented.
A technique for accurate planning of stereotactic brain implants prior to head ring fixation
International Nuclear Information System (INIS)
Ulin, Kenneth; Bornstein, Linda E.; Ling, Marilyn N.; Saris, Stephen; Wu, Julian K.; Curran, Bruce H.; Wazer, David E.
1997-01-01
Purpose: A two-step procedure is described for accurate planning of stereotactic brain implants prior to head-ring fixation. Methods and Materials: Approximately 2 weeks prior to implant a CT scan without the head ring is performed for treatment-planning purposes. An entry point and a reference point, both marked with barium and later tattooed, facilitate planning and permit correlation of the images with a later CT scan. A plan is generated using a conventional treatment-planning system to determine the number and activity of I-125 seeds required and the position of each catheter. I-125 seed anisotropy is taken into account by means of a modification to the treatment planning program. On the day of the implant a second CT scan is performed with the head ring affixed to the skull and with the same points marked as in the previous scan. The planned catheter coordinates are then mapped into the coordinate system of the second CT scan by means of a manual translational correction and a computer-calculated rotational correction derived from the reference point coordinates in the two scans. Results: The rotational correction algorithm was verified experimentally in a Rando phantom before it was used clinically. For analysis of the results with individual patients a third CT scan is performed 1 day following the implant and is used for calculating the final dosimetry. Conclusion: The technique that is described has two important advantages: 1) the number and activity of seeds required can be accurately determined in advance; and 2) sufficient time is allowed to derive the best possible plan
Misalignment corrections in optical interconnects
Song, Deqiang
Optical interconnects are considered a promising solution for long-distance and high-bitrate data transmission, outperforming electrical interconnects in terms of loss and dispersion. Due to the bandwidth and distance advantages of optical interconnects, longer links have been implemented with optics. Recent studies show that optical interconnects have clear advantages even at very short distances, i.e. intra-system interconnects. The biggest challenge for such optical interconnects is the alignment tolerance. Many free-space optical components require very precise assembly and installation, which can increase the overall cost. This thesis studied the misalignment tolerance and possible alignment correction solutions for optical interconnects at the backplane or board level. First, the alignment tolerance for free-space couplers was simulated, and the results indicated that the most critical alignments occur between the VCSEL, waveguide and microlens arrays. An in-situ microlens array fabrication method was designed and experimentally demonstrated, with no observable misalignment with the waveguide array. At the receiver side, conical lens arrays were proposed to replace simple microlens arrays for a larger angular alignment tolerance. Multilayer simulation models in CodeV were built to optimize the refractive index and shape profiles of the conical lens arrays. Conical lenses fabricated by micro injection molding and fiber etching were characterized. An active component, a VCSOA, was used to correct misalignment in optical connectors between the board and backplane. The alignment correction capability was characterized for both DC and AC (1 GHz) optical signals. The speed and bandwidth of the VCSOA were measured and compared with a VCSEL of the same structure. Based on the optical inverter being studied in our lab, an all-optical flip-flop was demonstrated using a pair of VCSOAs. This memory cell with random access ability can store one bit optical signal with set or
Accurately bearing measurement in non-cooperative passive location system
International Nuclear Information System (INIS)
Liu Zhiqiang; Ma Hongguang; Yang Lifeng
2007-01-01
A non-cooperative passive location system based on an array is proposed. In the system, the target is detected by beamforming and Doppler matched filtering, and the bearing is measured by a long-baseline interferometer composed of widely separated sub-arrays. With a long baseline, the bearing is measured accurately but ambiguously. To realize unambiguous, accurate bearing measurement, beam width and multiple-constraint adaptive beamforming are used to resolve the azimuth ambiguity. Theory and simulation results show that this method is effective for accurate bearing measurement in a non-cooperative passive location system. (authors)
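The ambiguity-resolution idea can be illustrated with a toy single-baseline model: the long baseline yields a precise but 2π-ambiguous phase, and a coarse (unambiguous) beamforming bearing selects the correct integer ambiguity. Function names and numbers are invented; the paper's multiple-constraint adaptive beamforming is not reproduced here:

```python
import math

def resolve_bearing(phase, d, lam, coarse_deg):
    """Resolve the 2*pi ambiguity of a long-baseline interferometer
    using a coarse (unambiguous) bearing from beamforming.

    phase : measured phase difference in radians, wrapped to [-pi, pi)
    d, lam: baseline length and wavelength (same units)
    Returns the fine bearing in degrees. Illustrative sketch only."""
    best = None
    kmax = int(d / lam) + 1
    for k in range(-kmax, kmax + 1):
        s = (phase / (2.0 * math.pi) + k) * lam / d
        if -1.0 <= s <= 1.0:
            theta = math.degrees(math.asin(s))
            if best is None or abs(theta - coarse_deg) < abs(best - coarse_deg):
                best = theta
    return best

# Invented scenario: true bearing 20 deg, 10-wavelength baseline,
# coarse beamforming estimate of 19 deg.
d, lam = 10.0, 1.0
phase = math.fmod(2.0 * math.pi * d * math.sin(math.radians(20.0)) / lam,
                  2.0 * math.pi)
if phase >= math.pi:        # wrap to [-pi, pi)
    phase -= 2.0 * math.pi
fine_deg = resolve_bearing(phase, d, lam, coarse_deg=19.0)
```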
Correcting slightly less simple movements
Directory of Open Access Journals (Sweden)
M.P. Aivar
2005-01-01
Many studies have analysed how goal-directed movements are corrected in response to changes in the properties of the target. However, only simple movements to single targets have been used in those studies, so little is known about movement corrections under more complex situations. Evidence from studies that ask for movements to several targets in sequence suggests that whole sequences of movements are planned together. Planning related segments of a movement together makes it possible to optimise the whole sequence, but it means that some parts are planned quite long in advance, so it is likely that they will have to be modified. In the present study we examined how people respond to changes that occur while they are moving to the first target of a sequence. Subjects moved a stylus across a digitising tablet. They moved from a specified starting point to two targets in succession. The first of these targets was always at the same position, but it could have one of two sizes. The second target could be in one of two different positions, and its size was different in each case. On some trials the first target changed size, and on others the second target changed size and position, as soon as the subject started to move. When the size of the first target changed, the subjects slowed down the first segment of their movements. Even the peak velocity, which occurred only about 150 ms after the change in size, was lower. Besides this fast response to the change itself, the dwell time at the first target was also affected: its duration increased after the change. Changing the size and position of the second target did not influence the first segment of the movement, but it also increased the dwell time. The dwell time was much longer for a small target, irrespective of its initial size. If subjects knew in advance which target could change, they moved faster than if they did not know which could change. Taken together, these
Correction of gene expression data
DEFF Research Database (Denmark)
Darbani Shirvanehdeh, Behrooz; Stewart, C. Neal, Jr.; Noeparvar, Shahin
2014-01-01
This report investigates for the first time the potential inter-treatment bias source of cell number for gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies....... For maximal reliability of analysis, therefore, comparisons should be performed at the cellular level. This could be accomplished using an appropriate correction method that can detect and remove the inter-treatment bias for cell-number. Based on inter-treatment variations of reference genes, we introduce...
Correct Linearization of Einstein's Equations
Directory of Open Access Journals (Sweden)
Rabounski D.
2006-06-01
Einstein's equations can regularly be reduced to a wave form (linearly dependent on the second derivatives of the space metric) in the absence of gravitation, space rotation and Christoffel's symbols. As shown here, the origin of the problem is that one uses the general covariant theory of measurement. Here the wave form of Einstein's equations is obtained in terms of Zelmanov's chronometric invariants (physically observable projections on the observer's time line and spatial section). The obtained equations depend solely on the second derivatives, even in the presence of gravitation, space rotation and Christoffel's symbols. The correct linearization proves that the Einstein equations are completely compatible with weak waves of the metric.
Neutron borehole logging correction technique
International Nuclear Information System (INIS)
Goldman, L.H.
1978-01-01
In accordance with an illustrative embodiment of the present invention, a method and apparatus are disclosed for logging earth formations traversed by a borehole, in which an earth formation is irradiated with neutrons and the gamma radiation produced thereby in the formation and in the borehole is detected. A sleeve or shield for capturing neutrons from the borehole and producing gamma radiation characteristic of that capture is provided to give an indication of the contribution of borehole capture events to the total detected gamma radiation. It is then possible to correct the total detected gamma radiation, and any earth formation parameters determined therefrom, for those borehole effects.
Accurate KAP meter calibration as a prerequisite for optimisation in projection radiography
International Nuclear Information System (INIS)
Malusek, A.; Sandborg, M.; Alm Carlsson, G.
2016-01-01
Modern X-ray units register the air kerma-area product, P_KA, with a built-in KAP meter. Some KAP meters show an energy-dependent bias comparable with the maximum uncertainty articulated by the IEC (25%), adversely affecting dose-optimisation processes. To correct for the bias, a reference KAP meter calibrated at a standards laboratory and the two calibration methods described here can be used to achieve an uncertainty of <7% as recommended by the IAEA. A computational model of the reference KAP meter is used to calculate beam quality correction factors for transfer of the calibration coefficient at the standards laboratory, Q_0, to any beam quality, Q, in the clinic. Alternatively, beam quality corrections are measured with an energy-independent dosemeter via a reference beam quality in the clinic, Q_1, to beam quality, Q. Biases up to 35% of built-in KAP meter readings were noted. Energy-dependent calibration factors are needed for unbiased P_KA. (authors)
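In the notation above, applying the calibration amounts to multiplying the raw meter reading by the standards-laboratory calibration coefficient N_Q0 and a beam-quality correction factor k_Q for the clinical quality Q. A one-line hedged sketch with invented numbers (the names follow common dosimetry convention, not this paper's code):

```python
def corrected_pka(raw_reading, n_q0, k_q):
    """Apply a KAP-meter calibration: raw reading times the
    standards-lab calibration coefficient N_Q0 and a beam-quality
    correction factor k_Q for the clinical beam quality Q."""
    return raw_reading * n_q0 * k_q

# Invented example values (reading in Gy*cm^2, dimensionless factors).
pka = corrected_pka(raw_reading=2.50, n_q0=1.02, k_q=1.10)
```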
Improving the description of sunglint for accurate prediction of remotely sensed radiances
Energy Technology Data Exchange (ETDEWEB)
Ottaviani, Matteo [Light and Life Laboratory, Department of Physics and Engineering Physics, Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ 07030 (United States)], E-mail: mottavia@stevens.edu; Spurr, Robert [RT Solutions Inc., 9 Channing Street, Cambridge, MA 02138 (United States); Stamnes, Knut; Li Wei [Light and Life Laboratory, Department of Physics and Engineering Physics, Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ 07030 (United States); Su Wenying [Science Systems and Applications Inc., 1 Enterprise Parkway, Hampton, VA 23666 (United States); Wiscombe, Warren [NASA GSFC, Greenbelt, MD 20771 (United States)
2008-09-15
The bidirectional reflection distribution function (BRDF) of the ocean is a critical boundary condition for radiative transfer calculations in the coupled atmosphere-ocean system. Existing models express the extent of the glint-contaminated region and its contribution to the radiance essentially as a function of the wind speed. An accurate treatment of the glint contribution and its propagation in the atmosphere would improve current correction schemes and hence rescue a significant portion of data presently discarded as 'glint contaminated'. In current satellite imagery, a correction to the sensor-measured radiances is limited to the region at the edge of the glint, where the contribution is below a certain threshold. This correction assumes the sunglint radiance to be directly transmitted through the atmosphere. To quantify the error introduced by this approximation we employ a radiative transfer code that allows for a user-specified BRDF at the atmosphere-ocean interface and rigorously accounts for multiple scattering. We show that the errors incurred by ignoring multiple scattering are very significant and typically lie in the range 10-90%. Multiple reflections and shadowing at the surface can also be accounted for, and we illustrate the importance of such processes at grazing geometries.
A Prototype of Tropospheric Delay Correction in L1-SAIF Augmentation
Takeichi, Noboru; Sakai, Takeyasu; Fukushima, Sounosuke; Ito, Ken
The L1-SAIF signal is one of the navigation signals of the Quasi-Zenith Satellite System, which provides an augmentation function for mobile users in Japan. This paper presents the details of the tropospheric delay correction in L1-SAIF augmentation. The tropospheric delay correction information is generated at the ground station using data collected at GEONET (GPS Earth Observation NETwork) stations. The correction message contains the zenith tropospheric delay (ZTD) values at 105 Tropospheric Grid Points (TGP) in the experiment area. From this message a mobile user can acquire the ZTD value at some neighboring TGPs and estimate the local ZTD value accurately by using a suitable ZTD model function. Only 3 L1-SAIF messages are necessary to provide all of the tropospheric correction information. Several investigations using actual data observed at many GEONET stations all over Japan have proved that it is possible to achieve a correction accuracy of 13.2 mm (rms).
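As a stand-in for the ZTD model-function fit mentioned above, the sketch below estimates a user's ZTD by inverse-distance weighting of neighbouring TGP values. The weighting scheme and coordinates are assumptions for illustration, not the L1-SAIF algorithm:

```python
def interpolate_ztd(user, grid_points, power=2):
    """Estimate the zenith tropospheric delay (ZTD) at the user
    position from neighbouring tropospheric grid points (TGPs) by
    inverse-distance weighting. Each grid point is (x, y, ztd);
    a placeholder for the model-function fit used in the paper."""
    num = den = 0.0
    for x, y, ztd in grid_points:
        d2 = (x - user[0]) ** 2 + (y - user[1]) ** 2
        if d2 == 0.0:
            return ztd  # user sits exactly on a grid point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * ztd
        den += w
    return num / den

# Invented symmetric example: four TGPs (ZTD in metres) around the user.
ztd_user = interpolate_ztd((0.0, 0.0),
                           [(-1.0, 0.0, 2.0), (1.0, 0.0, 2.2),
                            (0.0, 1.0, 2.1), (0.0, -1.0, 2.1)])
```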
Accurate Sliding-Mode Control System Modeling for Buck Converters
DEFF Research Database (Denmark)
Høyerby, Mikkel Christian Wendelboe; Andersen, Michael Andreas E.
2007-01-01
This paper shows that classical sliding mode theory fails to correctly predict the output impedance of the highly useful sliding mode PID compensated buck converter. The reason for this is identified as the assumption of the sliding variable being held at zero during sliding mode, effectively...... approach also predicts the self-oscillating switching action of the sliding-mode control system correctly. Analytical findings are verified by simulation as well as experimentally in a 10-30V/3A buck converter....
Boomerang pattern correction of gynecomastia.
Hurwitz, Dennis J
2015-02-01
After excess skin and fat are removed, a body-lift suture advances skin and suspends ptotic breasts, the mons pubis, and buttocks. For women, the lift includes sculpturing adiposity. While some excess fat may need removal, muscular men should receive a deliberate effort to achieve generalized tight skin closure to reveal superficial muscular bulk. For skin to be tightly bound to muscle, the excess needs to be removed both horizontally and vertically. To aesthetically accomplish that goal, a series of oblique elliptical excisions have been designed. Twenty-four consecutive patients received boomerang pattern correction of gynecomastia. In the last 12 patients, a J torsoplasty extension replaced the transverse upper body lift. Indirect undermining and the opposing force of a simultaneous abdominoplasty obliterate the inframammary fold. To complete effacement of the entire torso in 11 patients, an abdominoplasty was extended by oblique excisions over bulging flanks. Satisfactory improvement was observed in all 24 boomerang cases. A disgruntled patient was displeased with distorted nipples after revision surgery. Scar maturation in the chest is lengthy, with scars taking years to flatten and fade. Complications were limited and no major revisions were needed. In selected patients, comprehensive body contouring surgery consists of a boomerang correction of gynecomastia. J torsoplasty with an abdominoplasty and oblique excisions of the flanks has proven to be a practical means to achieve aesthetic goals. Gender-specific body lift surgery that goes far beyond the treatment of gynecomastia best serves the muscular male patient after massive weight loss. Therapeutic, IV.
Jo, Byung-Du; Lee, Young-Jin; Kim, Dae-Hong; Jeon, Pil-Hyun; Kim, Hee-Joung
2014-03-01
In conventional digital radiography (DR) using a dual energy subtraction technique, a significant fraction of the detected photons are scattered within the body, resulting in a scatter component. Scattered radiation can significantly deteriorate image quality in diagnostic X-ray imaging systems. Various methods of scatter correction, including both measurement-based and non-measurement-based methods, have been proposed in the past. Both can reduce scatter artifacts in images. However, non-measurement-based methods require a homogeneous object and provide insufficient correction of the scatter component. Therefore, we employed a measurement-based method to correct for the scatter component of inhomogeneous objects in dual energy DR (DEDR) images. We performed a simulation study using a Monte Carlo simulation with a primary modulator, which is a measurement-based method, for the DEDR system. The primary modulator, which has a checkerboard pattern, was used to modulate primary radiation. Cylindrical phantoms of variable size were used to quantify imaging performance. For scatter estimation, we used discrete Fourier transform filtering. The primary modulation method was evaluated using a cylindrical phantom in the DEDR system. The scatter components were accurately removed using the primary modulator. When the results acquired with and without scatter correction were compared, the average contrast-to-noise ratio (CNR) with the correction was 1.35 times higher than that obtained without correction, and the average root mean square error (RMSE) with the correction was 38.00% better than that without correction. In the subtraction study, the average CNR with correction was 2.04 (aluminum subtraction) and 1.38 (polymethyl methacrylate (PMMA) subtraction) times higher than that obtained without correction. The analysis demonstrated the accuracy of scatter correction and the improvement of image quality using a primary modulator and showed the feasibility of
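The measurement-based idea can be caricatured in one dimension: scatter varies slowly across the detector, so once the primary is tagged by the modulator, the scatter can be recovered as the low-frequency part of the measured signal and subtracted. The sketch below substitutes a simple moving average for the paper's checkerboard demodulation and DFT filtering, purely to illustrate the low-pass step:

```python
def estimate_scatter(measured, window=5):
    """Crude 1-D stand-in for measurement-based scatter estimation:
    treat the scatter field as the smooth (low-frequency) component
    of the measured signal and recover it with a moving average."""
    half = window // 2
    n = len(measured)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(measured[lo:hi]) / (hi - lo))
    return out

def correct_primary(measured, window=5):
    """Subtract the estimated scatter to recover the primary signal."""
    scatter = estimate_scatter(measured, window)
    return [m - s for m, s in zip(measured, scatter)]

# Toy signal: constant scatter (10) plus a rapidly alternating primary.
measured = [10.0 + (1.0 if i % 2 == 0 else -1.0) for i in range(11)]
scatter_est = estimate_scatter(measured)
primary_est = correct_primary(measured)
```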
Accurate modeling and maximum power point detection of ...
African Journals Online (AJOL)
Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ..... adaptive artificial neural network: Proposition for a new sizing procedure.
Accurate determination of light elements by charged particle activation analysis
International Nuclear Information System (INIS)
Shikano, K.; Shigematsu, T.
1989-01-01
To develop accurate determination of light elements by CPAA, accurate and practical standardization methods and uniform chemical etching are studied, based on the determination of carbon in gallium arsenide using the ¹²C(d,n)¹³N reaction. The following results are obtained: (1) The average stopping power method with thick-target yield is useful as an accurate and practical standardization method. (2) The front surface of the sample has to be etched for an accurate estimate of the incident energy. (3) CPAA can be utilized for calibration of light element analysis by physical methods. (4) The calibration factor of carbon analysis in gallium arsenide using the IR method is determined to be (9.2±0.3) × 10¹⁵ cm⁻¹. (author)
ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION
Institute of Scientific and Technical Information of China (English)
无
2009-01-01
In this paper, a second order linear differential equation is considered, and an accurate estimation method for its characteristic exponent is presented. Finally, we give some examples to verify the feasibility of our result.
Importance of molecular diagnosis in the accurate diagnosis of ...
Indian Academy of Sciences (India)
1Department of Health and Environmental Sciences, Kyoto University Graduate School of Medicine, Yoshida Konoecho, ... of molecular diagnosis in the accurate diagnosis of systemic carnitine deficiency. .... 'affecting protein function' by SIFT.
Ji, Songsong; Yang, Yibo; Pang, Gang; Antoine, Xavier
2018-01-01
The aim of this paper is to design some accurate artificial boundary conditions for the semi-discretized linear Schrödinger and heat equations in rectangular domains. The Laplace transform in time and discrete Fourier transform in space are applied to get Green's functions of the semi-discretized equations in unbounded domains with single-source. An algorithm is given to compute these Green's functions accurately through some recurrence relations. Furthermore, the finite-difference method is used to discretize the reduced problem with accurate boundary conditions. Numerical simulations are presented to illustrate the accuracy of our method in the case of the linear Schrödinger and heat equations. It is shown that the reflection at the corners is correctly eliminated.
High accurate time system of the Low Latitude Meridian Circle.
Yang, Jing; Wang, Feng; Li, Zhiming
In order to obtain a highly accurate time signal for the Low Latitude Meridian Circle (LLMC), a new GPS accurate time system was developed, which includes GPS, a 1 MC frequency source and a self-made clock system. The second signal of GPS is used to synchronize the clock system, and the information can be collected by a computer automatically. The difficulty of the cancellation of the time keeper can be overcome by using this system.
Accurate Alignment of Plasma Channels Based on Laser Centroid Oscillations
International Nuclear Information System (INIS)
Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim
2011-01-01
A technique has been developed to accurately align a laser beam through a plasma channel by minimizing the shift in laser centroid and angle at the channel output. If only the shift in centroid or angle is measured, then accurate alignment is provided by minimizing laser centroid motion at the channel exit as the channel properties are scanned. The improvement in alignment accuracy provided by this technique is important for minimizing electron beam pointing errors in laser plasma accelerators.
An accurate metric for the spacetime around neutron stars
Pappas, George
2016-01-01
The problem of having an accurate description of the spacetime around neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a neutron star. Furthermore, an accurate appropriately parameterised metric, i.e., a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to inf...
Accurate forced-choice recognition without awareness of memory retrieval
Voss, Joel L.; Baym, Carol L.; Paller, Ken A.
2008-01-01
Recognition confidence and the explicit awareness of memory retrieval commonly accompany accurate responding in recognition tests. Memory performance in recognition tests is widely assumed to measure explicit memory, but the generality of this assumption is questionable. Indeed, whether recognition in nonhumans is always supported by explicit memory is highly controversial. Here we identified circumstances wherein highly accurate recognition was unaccompanied by hallmark features of explicit ...
Accurate radiotherapy positioning system investigation based on video
International Nuclear Information System (INIS)
Tao Shengxiang; Wu Yican
2006-01-01
This paper introduces the latest research results on patient positioning methods in accurate radiotherapy produced by the Accurate Radiotherapy Treating System (ARTS) research team of the Institute of Plasma Physics, Chinese Academy of Sciences, such as a positioning system based on binocular vision, a position-measuring system based on contour matching and a breath-gating control system for positioning. Their basic principles, application occasions and prospects are briefly described. (authors)
static correction parameters in the lower flood plain of the central ...
African Journals Online (AJOL)
DJFLEX
which can be used for stratigraphic interpretation (Marsden, 1993). Datum static correction requires that the velocity and thickness of the weathering layer be known for the accurate mapping of the underlying structures for oil and gas exploration. Analysis of the weathering layer properties can reveal how they vary vertically ...
Order-α corrections to the decay rate of orthopositronium in the Fried-Yennie gauge
International Nuclear Information System (INIS)
Adkins, G.S.; Salahuddin, A.A.; Schalm, K.E.
1992-01-01
The order-α correction to the decay rate of orthopositronium is obtained using the Fried-Yennie gauge. The result, (mα⁷/π²)[-1.987 84(11)], is consistent with, but more accurate than, the results of previous evaluations.
Siegel, Linda S.
1983-01-01
Examines (1) whether and when the development of preterm children of very low birth weight would begin to approximate that of demographically matched full-term children, and (2) whether test scores corrected for degree of prematurity or those based on chronological age would be the more accurate predictors of subsequent development. (RH)
Charman, Steve D.; Wells, Gary L.
2008-01-01
Real-world eyewitnesses are often asked whether their lineup responses were affected by various external influences, but it is unknown whether they can accurately answer these types of questions. The witness-report-of-influence mental-correction model is proposed to explain witnesses' reports of influence. Two experiments used a new paradigm (the…
Energy Technology Data Exchange (ETDEWEB)
Masunov, Artëm E., E-mail: amasunov@ucf.edu [NanoScience Technology Center, Department of Chemistry, and Department of Physics, University of Central Florida, Orlando, FL 32826 (United States); Photochemistry Center RAS, ul. Novatorov 7a, Moscow 119421 (Russian Federation); Gangopadhyay, Shruba [Department of Physics, University of California, Davis, CA 95616 (United States); IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95120 (United States)
2015-12-15
A new method to eliminate the spin contamination in broken symmetry density functional theory (BS DFT) calculations is introduced. Unlike the conventional spin-purification correction, this method is based on canonical Natural Orbitals (NO) for each high/low spin coupled electron pair. We derive an expression to extract the energy of the pure singlet state, given in terms of the energy of the BS DFT solution, the occupation number of the bonding NO, and the energy of the higher spin state built on these bonding and antibonding NOs (not the self-consistent Kohn–Sham orbitals of the high spin state). Compared to the other spin-contamination correction schemes, the spin correction is applied to each correlated electron pair individually. We investigate two binuclear Mn(IV) molecular magnets using this pairwise correction. While one of the molecules is described by magnetic orbitals strongly localized on the metal centers, and its spin gap is accurately predicted by the Noodleman and Yamaguchi schemes, for the other one the gap is predicted poorly by these schemes due to strong delocalization of the magnetic orbitals onto the ligands. We show our new correction to yield more accurate results in both cases. - Highlights: • Magnetic orbitals obtained for high and low spin states are not related. • Spin-purification correction becomes inaccurate for delocalized magnetic orbitals. • We use the natural orbitals of the broken symmetry state to build the high spin state. • This new correction is made separately for each electron pair. • Our spin-purification correction is more accurate for delocalized magnetic orbitals.
Automatic Power Factor Correction Using Capacitive Bank
Mr.Anant Kumar Tiwari,; Mrs. Durga Sharma
2014-01-01
The power factor correction of electrical loads is a problem common to all industrial companies. Earlier, power factor correction was done by adjusting the capacitive bank manually [1]. An automated power factor corrector (APFC) using a capacitive load bank is helpful in providing power factor correction. The proposed automated project involves measuring the power factor value from the load using a microcontroller. The design of this auto-adjustable power factor correction is ...
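The bank sizing behind such a corrector follows the standard textbook relation Q_C = P(tan φ₁ − tan φ₂). A short sketch with invented load values; this is the general formula, not the project's microcontroller code:

```python
import math

def capacitor_kvar(p_kw, pf_initial, pf_target):
    """Reactive power (kvar) a capacitor bank must supply to raise the
    load power factor from pf_initial to pf_target, via
    Q_C = P * (tan(acos(pf_initial)) - tan(acos(pf_target)))."""
    phi1 = math.acos(pf_initial)
    phi2 = math.acos(pf_target)
    return p_kw * (math.tan(phi1) - math.tan(phi2))

# Invented example: a 100 kW load corrected from 0.70 to 0.95 lagging.
q_needed = capacitor_kvar(100.0, 0.70, 0.95)
```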
Accurate measurement of gene copy number for human alpha-defensin DEFA1A3.
Khan, Fayeza F; Carpenter, Danielle; Mitchell, Laura; Mansouri, Omniah; Black, Holly A; Tyson, Jess; Armour, John A L
2013-10-20
Multi-allelic copy number variants include examples of extensive variation between individuals in the copy number of important genes, most notably genes involved in immune function. The definition of this variation, and analysis of its impact on function, has been hampered by the technical difficulty of large-scale but accurate typing of genomic copy number. The copy-variable alpha-defensin locus DEFA1A3 on human chromosome 8 commonly varies between 4 and 10 copies per diploid genome, and presents considerable challenges for accurate high-throughput typing. In this study, we developed two paralogue ratio tests and three allelic ratio measurements that, in combination, provide an accurate and scalable method for measurement of DEFA1A3 gene number. We combined information from different measurements in a maximum-likelihood framework which suggests that most samples can be assigned to an integer copy number with high confidence, and applied it to typing 589 unrelated European DNA samples. Typing the members of three-generation pedigrees provided further reassurance that correct integer copy numbers had been assigned. Our results have allowed us to discover that the SNP rs4300027 is strongly associated with DEFA1A3 gene copy number in European samples. We have developed an accurate and robust method for measurement of DEFA1A3 copy number. Interrogation of rs4300027 and associated SNPs in Genome-Wide Association Study SNP data provides no evidence that alpha-defensin copy number is a strong risk factor for phenotypes such as Crohn's disease, type I diabetes, HIV progression and multiple sclerosis.
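The combination step can be sketched as a maximum-likelihood vote over integer copy numbers: each ratio measurement contributes a Gaussian likelihood, and the best-supported integer wins. The error model, the candidate range and all numbers below are simplifying assumptions, not the study's actual framework:

```python
import math

def assign_copy_number(measurements, candidates=range(4, 11)):
    """Maximum-likelihood integer copy-number call from several noisy
    measurements, each given as (estimate, sd). Gaussian error model;
    candidates span the 4-10 copies typical of DEFA1A3."""
    def loglik(n):
        return sum(-0.5 * ((est - n) / sd) ** 2 - math.log(sd)
                   for est, sd in measurements)
    return max(candidates, key=loglik)

# Invented measurements: two paralogue-ratio tests and an allelic
# ratio, all pointing near 7 copies with different precisions.
cn = assign_copy_number([(6.8, 0.4), (7.1, 0.3), (6.9, 0.5)])
```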
9 CFR 416.15 - Corrective Actions.
2010-01-01
... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Corrective Actions. 416.15 Section 416... SANITATION § 416.15 Corrective Actions. (a) Each official establishment shall take appropriate corrective... the procedures specified therein, or the implementation or maintenance of the Sanitation SOP's, may...
Working toward Literacy in Correctional Education ESL
Gardner, Susanne
2014-01-01
Correctional Education English as a Second Language (ESL) literacy programs vary from state to state, region to region. Some states enroll their correctional ESL students in adult basic education (ABE) classes; other states have separate classes and programs. At the Maryland Correctional Institution in Jessup, the ESL class is a self-contained…
78 FR 59798 - Small Business Subcontracting: Correction
2013-09-30
... SMALL BUSINESS ADMINISTRATION 13 CFR Part 125 RIN 3245-AG22 Small Business Subcontracting: Correction AGENCY: U.S. Small Business Administration. ACTION: Correcting amendments. SUMMARY: This document... business subcontracting to implement provisions of the Small Business Jobs Act of 2010. This correction...
Correction magnet power supplies for APS machine
International Nuclear Information System (INIS)
Kang, Y.G.
1991-04-01
A number of correction magnets are required for the Advanced Photon Source (APS) machine to correct the beam. There are five kinds of correction magnets for the storage ring, two for the injector synchrotron, and two for the positron accumulator ring (PAR). Table I shows a summary of the correction magnet power supplies for the APS machine. For the storage ring, the displacement of the quadrupole magnets due to low-frequency vibration below 25 Hz has the most significant effect on the stability of the positron closed orbit. The primary external source of the low-frequency vibration is the ground motion of approximately 20 μm amplitude, with frequency components concentrated below 10 Hz. These low-frequency vibrations can be corrected by using the correction magnets, whose field strengths are controlled individually through a feedback loop comprising the beam position monitoring system. The correction field required could be either positive or negative. Thus, for all the correction magnets, bipolar power supplies (BPSs) are required to produce both polarities of correction fields. Three different types of BPS are used for all the correction magnets. Type I BPSs cover all the correction magnets for the storage ring, except for the trim dipoles. The maximum output current of the Type I BPS is 140 Adc. A Type II BPS powers a trim dipole, and its maximum output current is 60 Adc. The injector synchrotron and PAR correction magnets are powered from Type III BPSs, whose maximum output current is 25 Adc.
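A single iteration of such a BPM-driven corrector loop can be sketched as a proportional update clamped to the supply's bipolar limit. The gain and the orbit-response calibration are invented for illustration; a real orbit-feedback system uses a measured response matrix across many correctors:

```python
def feedback_step(orbit_error_um, current_a, gain=0.5, limit_a=140.0):
    """One iteration of a proportional closed-orbit feedback: adjust a
    corrector power-supply setpoint from a beam-position-monitor error,
    clamped to the bipolar supply limit (140 Adc for a Type I BPS).
    Assumed calibration: 1 A of corrector current moves the orbit 1 um."""
    new_current = current_a - gain * orbit_error_um
    return max(-limit_a, min(limit_a, new_current))
```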
Forward induction reasoning and correct beliefs
Perea y Monsuwé, Andrés
2017-01-01
All equilibrium concepts implicitly make a correct beliefs assumption, stating that a player believes that his opponents are correct about his first-order beliefs. In this paper we show that in many dynamic games of interest, this correct beliefs assumption may be incompatible with a very basic form
Energy Technology Data Exchange (ETDEWEB)
Fitzpatrick, A. Liam [Department of Physics, Boston University,590 Commonwealth Avenue, Boston, MA 02215 (United States); Kaplan, Jared [Department of Physics and Astronomy, Johns Hopkins University,3400 N. Charles St, Baltimore, MD 21218 (United States)
2016-05-12
We use results on Virasoro conformal blocks to study chaotic dynamics in CFT{sub 2} at large central charge c. The Lyapunov exponent λ{sub L}, which is a diagnostic for the early onset of chaos, receives 1/c corrections that may be interpreted as λ{sub L}=((2π)/β)(1+(12/c)). However, out of time order correlators receive other equally important 1/c suppressed contributions that do not have such a simple interpretation. We revisit the proof of a bound on λ{sub L} that emerges at large c, focusing on CFT{sub 2} and explaining why our results do not conflict with the analysis leading to the bound. We also comment on relationships between chaos, scattering, causality, and bulk locality.
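The quoted exponent is a closed-form expression, so the size of the 1/c correction relative to the chaos bound 2π/β can be checked directly. The sketch below (plain Python; the β and c values are arbitrary choices for illustration, not from the paper) evaluates λ_L = (2π/β)(1 + 12/c) and its ratio to the bound:

```python
import math

def lyapunov_exponent(beta, c):
    """1/c-corrected Lyapunov exponent quoted in the abstract:
    lambda_L = (2*pi/beta) * (1 + 12/c)."""
    return (2 * math.pi / beta) * (1 + 12.0 / c)

def chaos_bound(beta):
    """Large-c chaos bound on the Lyapunov exponent, 2*pi/beta."""
    return 2 * math.pi / beta

beta = 1.0
for c in (10.0, 100.0, 1000.0):
    ratio = lyapunov_exponent(beta, c) / chaos_bound(beta)
    print(f"c = {c:7.1f}: lambda_L / (2*pi/beta) = {ratio:.4f}")
```

Note that the bare 1/c term alone exceeds the bound at finite c; as the abstract stresses, the other equally important 1/c-suppressed contributions are what make this consistent with the analysis leading to the bound.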
Radiative corrections in bumblebee electrodynamics
Directory of Open Access Journals (Sweden)
R.V. Maluf
2015-10-01
Full Text Available We investigate some quantum features of the bumblebee electrodynamics in flat spacetimes. The bumblebee field is a vector field that leads to a spontaneous Lorentz symmetry breaking. For a smooth quadratic potential, the massless excitation (Nambu–Goldstone boson) can be identified as the photon, transversal to the vacuum expectation value of the bumblebee field. Besides, there is a massive excitation associated with the longitudinal mode, whose presence leads to instability in the spectrum of the theory. By using the principal-value prescription, we show that no one-loop radiative corrections to the mass term are generated. Moreover, the bumblebee self-energy is not transverse, showing that the propagation of the longitudinal mode cannot be excluded from the effective theory.
International Nuclear Information System (INIS)
Fitzpatrick, A. Liam; Kaplan, Jared
2016-01-01
We use results on Virasoro conformal blocks to study chaotic dynamics in CFT_2 at large central charge c. The Lyapunov exponent λ_L, which is a diagnostic for the early onset of chaos, receives 1/c corrections that may be interpreted as λ_L=((2π)/β)(1+(12/c)). However, out of time order correlators receive other equally important 1/c suppressed contributions that do not have such a simple interpretation. We revisit the proof of a bound on λ_L that emerges at large c, focusing on CFT_2 and explaining why our results do not conflict with the analysis leading to the bound. We also comment on relationships between chaos, scattering, causality, and bulk locality.
Electromagnetic corrections to baryon masses
International Nuclear Information System (INIS)
Durand, Loyal; Ha, Phuoc
2005-01-01
We analyze the electromagnetic contributions to the octet and decuplet baryon masses using the heavy-baryon approximation in chiral effective field theory and methods we developed in earlier analyses of the baryon masses and magnetic moments. Our methods connect simply to Morpurgo's general parametrization of the electromagnetic contributions and to semirelativistic quark models. Our calculations are carried out including the one-loop mesonic corrections to the basic electromagnetic interactions, so to two loops overall. We find that to this order in the chiral loop expansion there are no three-body contributions. The Coleman-Glashow relation and other sum rules derived in quark models with only two-body terms therefore continue to hold, and violations involve at least three-loop processes and can be expected to be quite small. We present the complete formal results and some estimates of the matrix elements here. Numerical calculations will be presented separately
[Surgical correction of cleft palate].
Kimura, F T; Pavia Noble, A; Soriano Padilla, F; Soto Miranda, A; Medellín Rodríguez, A
1990-04-01
This study presents a statistical review of corrective surgery for cleft palate, based on cases treated at the maxillo-facial surgery units of the Pediatrics Hospital of the Centro Médico Nacional and at Centro Médico La Raza of the National Institute of Social Security of Mexico over a five-year period. The interdisciplinary management performed at the Cleft-Palate Clinic is described in detail: an integrated approach involving specialists in maxillo-facial surgery, maxillary orthopedics, genetics, social work and mental hygiene, aimed at reestablishing the stomatological and psychological functions of children afflicted by cleft palate. The frequency and classification of the various techniques practiced in that service are described, as well as surgical statistics for 188 patients, covering a total of 256 palate surgeries performed from March 1984 to March 1989, applying three different techniques and proposing a combination of them in a single operation in order to avoid complementary surgery.
A rigid motion correction method for helical computed tomography (CT)
International Nuclear Information System (INIS)
Kim, J-H; Kyme, A; Fulton, R; Nuyts, J; Kuncic, Z
2015-01-01
We propose a method to compensate for six degree-of-freedom rigid motion in helical CT of the head. The method is demonstrated in simulations and in helical scans performed on a 16-slice CT scanner. Scans of a Hoffman brain phantom were acquired while an optical motion tracking system recorded the motion of the bed and the phantom. Motion correction was performed by restoring projection consistency using data from the motion tracking system, and reconstructing with an iterative fully 3D algorithm. Motion correction accuracy was evaluated by comparing reconstructed images with a stationary reference scan. We also investigated the effects on accuracy of tracker sampling rate, measurement jitter, interpolation of tracker measurements, and the synchronization of motion data and CT projections. After optimization of these aspects, motion corrected images corresponded remarkably closely to images of the stationary phantom with correlation and similarity coefficients both above 0.9. We performed a simulation study using volunteer head motion and found similarly that our method is capable of compensating effectively for realistic human head movements. To the best of our knowledge, this is the first practical demonstration of generalized rigid motion correction in helical CT. Its clinical value, which we have yet to explore, may be significant. For example it could reduce the necessity for repeat scans and resource-intensive anesthetic and sedation procedures in patient groups prone to motion, such as young children. It is not only applicable to dedicated CT imaging, but also to hybrid PET/CT and SPECT/CT, where it could also ensure an accurate CT image for lesion localization and attenuation correction of the functional image data. (paper)
Uysal, Ismail Enes
2015-10-26
Analysis of electromagnetic interactions on nanodevices can oftentimes be carried out accurately using “traditional” electromagnetic solvers. However, if a gap of sub-nanometer scale exists between any two surfaces of the device, quantum-mechanical effects including tunneling should be taken into account for an accurate characterization of the device's response. Since first-principle quantum simulators cannot be used efficiently to fully characterize a typical-size nanodevice, a quantum corrected electromagnetic model has been proposed as an efficient and accurate alternative (R. Esteban et al., Nat. Commun., 3(825), 2012). The quantum correction is achieved through an effective layered medium introduced into the gap between the surfaces. The dielectric constant of each layer is obtained from a first-principle quantum characterization of a gap of the corresponding dimension.
Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Sacks, David B; Yu, Yi-Kuo
2018-06-05
Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html .
Advanced hardware design for error correcting codes
Coussy, Philippe
2015-01-01
This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in the design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include the evolution of error correction techniques, industrial user needs, and architectures and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc.). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for current and next generation standards; • Provides coverage of industrial user needs and advanced error correcting techniques.
Simplified correction of g-value measurements
DEFF Research Database (Denmark)
Duer, Karsten
1998-01-01
been carried out using a detailed physical model based on ISO 9050 and prEN 410 but using polarized data for non-normal incidence. This model is only valid for plane, clear glazings and is therefore not suited for correcting measurements performed on complex glazings. To investigate a more general correction procedure, the results from the measurements on the Interpane DGU have been corrected using the principle outlined in (Rosenfeld, 1996). This correction procedure is more general, as corrections can be carried out without a correct physical model of the investigated glazing. On the other hand, the way this “general” correction procedure is used is not always in accordance with the physical conditions.
Fast and accurate computation of projected two-point functions
Grasshorn Gebhardt, Henry S.; Jeong, Donghui
2018-01-01
We present the two-point function from the fast and accurate spherical Bessel transformation (2-FAST) algorithm (our code is available at https://github.com/hsgg/twoFAST) for a fast and accurate computation of integrals involving one or two spherical Bessel functions. These types of integrals occur when projecting the galaxy power spectrum P(k) onto the configuration space, ξℓν(r), or spherical harmonic space, Cℓ(χ,χ'). First, we employ the FFTLog transformation of the power spectrum to divide the calculation into P(k)-dependent coefficients and P(k)-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm, therefore, circumvents direct integration of highly oscillating spherical Bessel functions.
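To see why a dedicated algorithm is needed, here is the brute-force alternative that 2-FAST circumvents: direct quadrature of ξ(r) = (1/2π²) ∫ dk k² P(k) j₀(kr), whose integrand oscillates rapidly at large kr. The Gaussian toy power spectrum and all numerical parameters below are illustrative assumptions, not values from the paper:

```python
import math

def j0(x):
    # spherical Bessel function of order zero, j0(x) = sin(x)/x
    return 1.0 if x == 0 else math.sin(x) / x

def xi_direct(r, pk, kmin=1e-4, kmax=10.0, n=20000):
    """Brute-force monopole correlation function via the trapezoid rule:
    xi(r) = (1/2pi^2) Int dk k^2 P(k) j0(kr).
    This direct integration of an oscillatory integrand is exactly what
    the FFTLog-based 2-FAST decomposition is designed to avoid."""
    dk = (kmax - kmin) / n
    total = 0.0
    for i in range(n + 1):
        k = kmin + i * dk
        w = 0.5 if i in (0, n) else 1.0
        total += w * k * k * pk(k) * j0(k * r)
    return total * dk / (2 * math.pi ** 2)

# toy power spectrum: a Gaussian bump around k = 1 (assumption, for illustration)
pk = lambda k: math.exp(-(k - 1.0) ** 2 / 0.1)
print(xi_direct(5.0, pk))
```

For large r the integrand's oscillation period shrinks as 1/r, so the sample count n must grow accordingly; 2-FAST sidesteps this by evaluating the Bessel integrals analytically via recursion.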
Milojević, Slavka; Stojanovic, Vojislav
2017-04-01
Due to the continuous development of seismic acquisition and processing methods, increasing the signal-to-noise ratio is always a current target. The correct application of the latest software solutions improves the processing results and justifies their development. Correct computation and application of static corrections represents one of the most important tasks in pre-processing, and this phase is of great importance for further processing steps. Static corrections are applied to seismic data in order to compensate for the effects of irregular topography, the difference between the elevations of the source and receiver points relative to the datum level, the low-velocity near-surface layer (weathering correction), or any other effect that influences the spatial and temporal position of seismic traces. The refraction statics method is the most common method for computing static corrections. It is successful both in resolving long-period statics problems and in determining differences in statics caused by abrupt lateral velocity changes in the near-surface layer. XtremeGeo Flatirons™ is a program whose main purpose is the computation of static corrections through a refraction statics method, and it allows the application of the following procedures: picking of first arrivals, checking of geometry, multiple methods for the analysis and modelling of statics, analysis of refractor anisotropy, and tomography (Eikonal tomography). The exploration area is located on the southern edge of the Pannonian Plain, in flat terrain with altitudes of 50 to 195 meters. The largest part of the exploration area covers Deliblato Sands, where the geological structure of the terrain and the large differences in altitude significantly affect the calculation of static corrections. The XtremeGeo Flatirons™ software has powerful visualization and statistical analysis tools, which contribute to a significantly more accurate assessment of the near-surface geometry.
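A minimal sketch of the elevation-plus-weathering part of a static correction described above. The sign convention and the velocities (weathering and sub-weathering, in m/s) are placeholder assumptions for illustration, not values from this survey:

```python
def elevation_static(elev, weathering_thickness, datum,
                     v_weathering=600.0, v_subweathering=2400.0):
    """One common form of the datum static (seconds): the travel time
    through the weathered layer plus the time from its base down to the
    datum, which is then removed from the trace. Velocities are
    illustrative placeholder values."""
    t_weathering = weathering_thickness / v_weathering
    t_sub = (elev - weathering_thickness - datum) / v_subweathering
    return t_weathering + t_sub

# e.g. a receiver at 195 m on Deliblato Sands, datum at 50 m,
# with an assumed 10 m of dry, slow near-surface sand
print(elevation_static(195.0, 10.0, 50.0))
```

The large elevation range of the area (50 to 195 m) translates directly into tens of milliseconds of static shift, which is why an accurate near-surface model matters.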
Pulse compressor with aberration correction
Energy Technology Data Exchange (ETDEWEB)
Mankos, Marian [Electron Optica, Inc., Palo Alto, CA (United States)
2015-11-30
In this SBIR project, Electron Optica, Inc. (EOI) is developing an electron mirror-based pulse compressor attachment to new and retrofitted dynamic transmission electron microscopes (DTEMs) and ultrafast electron diffraction (UED) cameras for improving the temporal resolution of these instruments from the characteristic range of a few picoseconds to a few nanoseconds and beyond, into the sub-100 femtosecond range. The improvement will enable electron microscopes and diffraction cameras to better resolve the dynamics of reactions in the areas of solid state physics, chemistry, and biology. EOI’s pulse compressor technology utilizes the combination of electron mirror optics and a magnetic beam separator to compress the electron pulse. The design exploits the symmetry inherent in reversing the electron trajectory in the mirror in order to compress the temporally broadened beam. This system also simultaneously corrects the chromatic and spherical aberration of the objective lens for improved spatial resolution. This correction will be found valuable as the source size is reduced with laser-triggered point source emitters. With such emitters, it might be possible to significantly reduce the illuminated area and carry out ultrafast diffraction experiments from small regions of the sample, e.g. from individual grains or nanoparticles. During phase I, EOI drafted a set of candidate pulse compressor architectures and evaluated the trade-offs between temporal resolution and electron bunch size to achieve the optimum design for two particular applications with market potential: increasing the temporal and spatial resolution of UEDs, and increasing the temporal and spatial resolution of DTEMs. Specialized software packages that have been developed by MEBS, Ltd. were used to calculate the electron optical properties of the key pulse compressor components: namely, the magnetic prism, the electron mirror, and the electron lenses. In the final step, these results were folded
Correction of a Depth-Dependent Lateral Distortion in 3D Super-Resolution Imaging.
Directory of Open Access Journals (Sweden)
Lina Carlini
Full Text Available Three-dimensional (3D localization-based super-resolution microscopy (SR requires correction of aberrations to accurately represent 3D structure. Here we show how a depth-dependent lateral shift in the apparent position of a fluorescent point source, which we term `wobble`, results in warped 3D SR images and provide a software tool to correct this distortion. This system-specific, lateral shift is typically > 80 nm across an axial range of ~ 1 μm. A theoretical analysis based on phase retrieval data from our microscope suggests that the wobble is caused by non-rotationally symmetric phase and amplitude aberrations in the microscope's pupil function. We then apply our correction to the bacterial cytoskeletal protein FtsZ in live bacteria and demonstrate that the corrected data more accurately represent the true shape of this vertically-oriented ring-like structure. We also include this correction method in a registration procedure for dual-color, 3D SR data and show that it improves target registration error (TRE at the axial limits over an imaging depth of 1 μm, yielding TRE values of < 20 nm. This work highlights the importance of correcting aberrations in 3D SR to achieve high fidelity between the measurements and the sample.
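The correction described above amounts to subtracting a depth-dependent lateral shift from each localization. A minimal sketch of that idea, assuming a wobble calibration curve has already been measured (e.g. from beads scanned through the ~1 μm axial range); the calibration numbers below are made up for illustration and are not from the paper's software tool:

```python
import bisect

def correct_wobble(x, y, z, calib):
    """Subtract the depth-dependent lateral shift ('wobble') from one 3D
    localization. `calib` is a sorted list of (z, dx, dy) calibration
    samples in nm; the shift at depth z is linearly interpolated."""
    zs = [c[0] for c in calib]
    i = bisect.bisect_left(zs, z)
    i = min(max(i, 1), len(zs) - 1)      # clamp to a valid segment
    z0, dx0, dy0 = calib[i - 1]
    z1, dx1, dy1 = calib[i]
    t = (z - z0) / (z1 - z0)
    dx = dx0 + t * (dx1 - dx0)
    dy = dy0 + t * (dy1 - dy0)
    return x - dx, y - dy, z

# hypothetical bead calibration: 80 nm of x-wobble across 1000 nm of depth
calib = [(0.0, 0.0, 0.0), (500.0, 40.0, 10.0), (1000.0, 80.0, 20.0)]
print(correct_wobble(100.0, 100.0, 250.0, calib))
```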
International Nuclear Information System (INIS)
Lim, Teik-Cheng
2016-01-01
For moderately thick plates, the use of First order Shear Deformation Theory (FSDT) with a constant shear correction factor of 5/6 is sufficient to take into account the plate deflection arising from transverse shear deformation. For very thick plates, the use of Third order Shear Deformation Theory (TSDT) is preferred as it allows the shear strain distribution to be varied through the plate thickness. Therefore no correction factor is required in TSDT, unlike FSDT. Due to the complexity involved in TSDT, this paper obtains a more accurate shear correction factor for use in FSDT of very thick simply supported and uniformly loaded isosceles right triangular plates based on the TSDT. By matching the maximum deflections for this plate according to FSDT and TSDT, a variable shear correction factor is obtained. Results show that the shear correction factor for the simplified TSDT, i.e. 14/17, is least accurate. The commonly adopted shear correction factor of 5/6 in FSDT is valid only for very thin or highly auxetic plates. This paper provides a variable shear correction factor for the FSDT deflection that matches the plate deflection given by the TSDT. This variable shear correction factor allows designers to justify the use of the commonly adopted shear correction factor of 5/6 even for very thick plates as long as the Poisson’s ratio of the plate material is sufficiently negative. (paper)
Accurate isotope ratio mass spectrometry. Some problems and possibilities
International Nuclear Information System (INIS)
Bievre, P. de
1978-01-01
The review includes reference to 190 papers, mainly published during the last 10 years. It covers the following: important factors in accurate isotope ratio measurements (precision and accuracy of isotope ratio measurements - exemplified by determinations of 235U/238U and of other elements including 239Pu/240Pu; isotope fractionation - exemplified by curves for Rb, U); applications (atomic weights); the Oklo natural nuclear reactor (discovered by UF6 mass spectrometry at Pierrelatte); nuclear and other constants; isotope ratio measurements in nuclear geology and isotope cosmology - accurate age determination; isotope ratio measurements on very small samples - archaeometry; isotope dilution; miscellaneous applications; and future prospects. (U.K.)
ROLAIDS-CPM: A code for accurate resonance absorption calculations
International Nuclear Information System (INIS)
Kruijf, W.J.M. de.
1993-08-01
ROLAIDS is used to calculate group-averaged cross sections for specific zones in a one-dimensional geometry. This report describes ROLAIDS-CPM which is an extended version of ROLAIDS. The main extension in ROLAIDS-CPM is the possibility to use the collision probability method for a slab- or cylinder-geometry instead of the less accurate interface-currents method. In this way accurate resonance absorption calculations can be performed with ROLAIDS-CPM. ROLAIDS-CPM has been developed at ECN. (orig.)
Accurate evaluation of exchange fields in finite element micromagnetic solvers
Chang, R.; Escobar, M. A.; Li, S.; Lubarda, M. V.; Lomakin, V.
2012-04-01
Quadratic basis functions (QBFs) are implemented for solving the Landau-Lifshitz-Gilbert equation via the finite element method. This involves the introduction of a set of special testing functions compatible with the QBFs for evaluating the Laplacian operator. The QBF approach leads to significantly more accurate results than conventionally used approaches based on linear basis functions. Importantly, QBFs allow reducing the error of computing the exchange field by increasing the mesh density for structured and unstructured meshes. Numerical examples demonstrate the feasibility of the method.
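A one-dimensional analogue illustrates why quadratic basis functions outperform linear ones: on a single element, the three-node quadratic Lagrange shape functions reproduce any quadratic field exactly, while two-node linear interpolation does not. This is a hypothetical sketch of the general principle, not the authors' micromagnetic solver:

```python
def interp_linear(f, a, b, x):
    """Linear (two-node) element interpolation of f on [a, b]."""
    t = (x - a) / (b - a)
    return (1 - t) * f(a) + t * f(b)

def interp_quadratic(f, a, b, x):
    """Quadratic (three-node) element interpolation of f on [a, b],
    using Lagrange shape functions at a, the midpoint, and b."""
    m = 0.5 * (a + b)
    t = (x - a) / (b - a)          # reference coordinate in [0, 1]
    n_a = 2 * (t - 0.5) * (t - 1)  # shape function of node a
    n_m = 4 * t * (1 - t)          # shape function of the midpoint node
    n_b = 2 * t * (t - 0.5)        # shape function of node b
    return n_a * f(a) + n_m * f(m) + n_b * f(b)

f = lambda x: x ** 3               # smooth test field
x = 0.3
err_lin = abs(interp_linear(f, 0.0, 1.0, x) - f(x))
err_quad = abs(interp_quadratic(f, 0.0, 1.0, x) - f(x))
print(err_lin, err_quad)           # quadratic error is much smaller
```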
Accurate switching intensities and length scales in quasi-phase-matched materials
DEFF Research Database (Denmark)
Bang, Ole; Graversen, Torben Winther; Corney, Joel Frederick
2001-01-01
We consider unseeded type I second-harmonic generation in quasi-phase-matched quadratic nonlinear materials and derive an accurate analytical expression for the evolution of the average intensity. The intensity-dependent nonlinear phase mismatch that is due to the cubic nonlinearity induced by quasi phase matching is found. The equivalent formula for the intensity of maximum conversion, the crossing of which changes the one-period nonlinear phase shift of the fundamental abruptly by π, corrects earlier estimates [Opt. Lett. 23, 506 (1998)] by a factor of 5.3. We find the crystal lengths that are necessary to obtain an optimal flat phase versus intensity response on either side of this separatrix intensity.
Accurate estimation of influenza epidemics using Google search data via ARGO.
Yang, Shihao; Santillana, Mauricio; Kou, S C
2015-11-24
Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.
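The core idea behind ARGO, combining an autoregressive term on past flu activity with an exogenous search-volume signal, can be sketched with a two-parameter least-squares fit. ARGO itself uses many lags, many search terms, and regularization, so this toy model y[t] = a·y[t-1] + b·x[t] is an assumption-laden illustration, not the published model:

```python
def fit_ar_with_search(y, x):
    """Least-squares fit of y[t] = a*y[t-1] + b*x[t] via the 2x2
    normal equations, where y is the tracked signal (e.g. flu activity)
    and x is an exogenous regressor (e.g. search volume)."""
    s11 = sum(y[t - 1] * y[t - 1] for t in range(1, len(y)))
    s12 = sum(y[t - 1] * x[t] for t in range(1, len(y)))
    s22 = sum(x[t] * x[t] for t in range(1, len(y)))
    r1 = sum(y[t - 1] * y[t] for t in range(1, len(y)))
    r2 = sum(x[t] * y[t] for t in range(1, len(y)))
    det = s11 * s22 - s12 * s12
    a = (r1 * s22 - r2 * s12) / det
    b = (s11 * r2 - s12 * r1) / det
    return a, b

# synthetic, noiseless data generated with a = 0.8, b = 0.5
x = [1.0, 2.0, 1.5, 3.0, 2.5, 1.0, 2.0]
y = [1.0]
for t in range(1, len(x)):
    y.append(0.8 * y[t - 1] + 0.5 * x[t])
a, b = fit_ar_with_search(y, x)
print(round(a, 3), round(b, 3))  # recovers 0.8 and 0.5
```

Refitting a and b on a rolling window is one simple way to mimic the "self-correcting" behavior the abstract describes, since the coefficients then track changes in search behavior over time.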
Producing accurate wave propagation time histories using the global matrix method
International Nuclear Information System (INIS)
Obenchain, Matthew B; Cesnik, Carlos E S
2013-01-01
This paper presents a reliable method for producing accurate displacement time histories for wave propagation in laminated plates using the global matrix method. The existence of inward and outward propagating waves in the general solution is highlighted while examining the axisymmetric case of a circular actuator on an aluminum plate. Problems with previous attempts to isolate the outward wave for anisotropic laminates are shown. The updated method develops a correction signal that can be added to the original time history solution to cancel the inward wave and leave only the outward propagating wave. The paper demonstrates the effectiveness of the new method for circular and square actuators bonded to the surface of isotropic laminates, and these results are compared with exact solutions. Results for circular actuators on cross-ply laminates are also presented and compared with experimental results, showing the ability of the new method to successfully capture the displacement time histories for composite laminates. (paper)
Energy Technology Data Exchange (ETDEWEB)
Yu, Xiaojiang, E-mail: slsyxj@nus.edu.sg; Diao, Caozheng; Breese, Mark B. H. [Singapore Synchrotron Light Source, National University of Singapore, Singapore 117603 (Singapore)
2016-07-27
An aberration calculation method developed by Lu [1] can treat each individual aberration term precisely. The spectral aberration is the linear sum of these aberration terms, and the aberrations of multi-element systems can also be calculated correctly when the stretching ratio, defined herein, is unity. Evaluations of focusing mirror-grating systems which are optimized according to Lu’s method, along with the Light Path Function (LPF) and the Spot Diagram (SD) methods, are discussed to confirm the advantage of Lu’s methodology. Lu’s aberration terms are derived from a precise wave-front treatment, whereas the terms of the power series expansion of the light path function do not yield an accurate sum of the aberrations. Moreover, Lu’s aberration terms can be individually optimized. This is not possible with the analytical spot diagram formulae.
Accounting for Chromatic Atmospheric Effects on Barycentric Corrections
Energy Technology Data Exchange (ETDEWEB)
Blackman, Ryan T.; Szymkowiak, Andrew E.; Fischer, Debra A.; Jurgenson, Colby A., E-mail: ryan.blackman@yale.edu [Department of Astronomy, Yale University, 52 Hillhouse Avenue, New Haven, CT 06511 (United States)
2017-03-01
Atmospheric effects on stellar radial velocity measurements for exoplanet discovery and characterization have not yet been fully investigated for extreme precision levels. We carry out calculations to determine the wavelength dependence of barycentric corrections across optical wavelengths, due to the ubiquitous variations in air mass during observations. We demonstrate that radial velocity errors of at least several cm s{sup −1} can be incurred if the wavelength dependence is not included in the photon-weighted barycentric corrections. A minimum of four wavelength channels across optical spectra (380–680 nm) are required to account for this effect at the 10 cm s{sup −1} level, with polynomial fits of the barycentric corrections applied to cover all wavelengths. Additional channels may be required in poor observing conditions or to avoid strong telluric absorption features. Furthermore, consistent flux sampling on the order of seconds throughout the observation is necessary to ensure that accurate photon weights are obtained. Finally, we describe how a multiple-channel exposure meter will be implemented in the EXtreme PREcision Spectrograph (EXPRES).
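The prescription above, sample the barycentric correction in a few wavelength channels, then fit a polynomial to cover all wavelengths, can be sketched with Lagrange interpolation through four channel samples. The channel values below are made-up numbers of a plausible size (a few cm/s of chromatic variation), not measured corrections:

```python
def lagrange_eval(points, wl):
    """Evaluate the unique degree-(n-1) polynomial through n
    (wavelength, correction) samples at wavelength wl, via Lagrange
    interpolation. With four channels this is the cubic fit."""
    total = 0.0
    for i, (wi, ci) in enumerate(points):
        term = ci
        for j, (wj, _) in enumerate(points):
            if i != j:
                term *= (wl - wj) / (wi - wj)
        total += term
    return total

# four hypothetical channels spanning 380-680 nm;
# corrections in m/s relative to the band-averaged value
channels = [(380.0, -0.05), (480.0, -0.01), (580.0, 0.02), (680.0, 0.04)]
print(lagrange_eval(channels, 530.0))
```

In practice one would add channels (or mask wavelengths) near strong telluric features, as the abstract notes, since the fit is only as good as the per-channel photon weights.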
Topographic Correction Module at Storm (TC@Storm)
Zaksek, K.; Cotar, K.; Veljanovski, T.; Pehani, P.; Ostir, K.
2015-04-01
Different solar positions in combination with terrain slope and aspect result in different illumination of inclined surfaces. Therefore, the retrieved satellite data cannot be accurately transformed to the spectral reflectance, which depends only on the land cover. Topographic correction should remove this effect and enable further automatic processing of higher level products. The topographic correction module TC@STORM was developed within the SPACE-SI automatic near-real-time image processing chain STORM. It combines a physical approach with the standard Minnaert method. The total irradiance is modelled as a three-component irradiance: direct (dependent on the incidence angle, sun zenith angle and slope), diffuse from the sky (dependent mainly on the sky-view factor), and diffuse reflected from the terrain (dependent on the sky-view factor and albedo). For the computation of diffuse irradiance from the sky we assume an anisotropic brightness of the sky. We iteratively estimate a linear combination of 10 different models to provide the best results. Depending on the data resolution, we mask shadows based on radiometric (image) or geometric properties. The method was tested on RapidEye, Landsat 8, and PROBA-V data. The final results of the correction were evaluated and statistically validated for various topography settings and land cover classes. The images show great improvements in shaded areas.
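The three-component irradiance decomposition can be sketched as follows. This simplified version uses an isotropic-sky assumption rather than the module's anisotropic model, and the direct/diffuse magnitudes (W/m²) are illustrative placeholders:

```python
import math

def total_irradiance(sun_zenith_deg, incidence_deg, sky_view, albedo,
                     e_direct=1000.0, e_diffuse=100.0):
    """Three-component irradiance on an inclined surface:
    - direct: scaled by the cosine of the local incidence angle
      (clipped at zero for self-shadowed slopes);
    - diffuse from the sky: scaled by the sky-view factor
      (isotropic-sky simplification of the anisotropic model);
    - reflected from surrounding terrain: scaled by (1 - sky_view)
      and the terrain albedo."""
    direct = e_direct * max(0.0, math.cos(math.radians(incidence_deg)))
    diffuse_sky = e_diffuse * sky_view
    reflected = (e_direct * math.cos(math.radians(sun_zenith_deg))
                 + e_diffuse) * albedo * (1.0 - sky_view)
    return direct + diffuse_sky + reflected

# a moderately tilted, mostly open slope
print(total_irradiance(30.0, 45.0, sky_view=0.95, albedo=0.2))
```

Dividing the observed radiance by this modelled irradiance (rather than by the flat-terrain value) is what removes the slope/aspect signal and leaves the land-cover-dependent reflectance.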
THE SELF-CORRECTION OF ENGLISH SPEECH ERRORS IN SECOND LANGUAGE LEARNING
Directory of Open Access Journals (Sweden)
Ketut Santi Indriani
2015-05-01
Full Text Available The process of second language (L2) learning is strongly influenced by the factors of error reconstruction that occur when the language is learned. Errors will definitely appear in the learning process. However, errors can be used as a step to accelerate the process of understanding the language. Doing self-correction (with or without being given cues) is one example. In speaking, self-correction is done immediately after the error appears. This study is aimed at finding (i) what speech errors the L2 speakers are able to identify, (ii) of the errors identified, what speech errors the L2 speakers are able to self-correct and (iii) whether the self-correction of speech errors is able to immediately improve L2 learning. Based on the data analysis, it was found that the majority of identified errors are related to nouns (plurality), subject-verb agreement, grammatical structure and pronunciation. L2 speakers tend to correct errors properly. Of the 78% identified speech errors, as much as 66% could be self-corrected accurately by the L2 speakers. Based on the analysis, it was also found that self-correction is able to improve L2 learning ability directly. This is evidenced by the absence of repetition of the same error after the error had been corrected.
Rulison Site corrective action report
Energy Technology Data Exchange (ETDEWEB)
NONE
1996-09-01
Project Rulison was a joint US Atomic Energy Commission (AEC) and Austral Oil Company (Austral) experiment, conducted under the AEC's Plowshare Program, to evaluate the feasibility of using a nuclear device to stimulate natural gas production in low-permeability gas-producing geologic formations. The experiment was conducted on September 10, 1969, and consisted of detonating a 40-kiloton nuclear device at a depth of 2,568 m below ground surface (BGS). This Corrective Action Report describes the cleanup of petroleum hydrocarbon- and heavy-metal-contaminated sediments from an old drilling effluent pond and characterization of the mud pits used during drilling of the R-EX well at the Rulison Site. The Rulison Site is located approximately 65 kilometers (40 miles) northeast of Grand Junction, Colorado. The effluent pond was used for the storage of drilling mud during drilling of the emplacement hole for the 1969 gas stimulation test conducted by the AEC. This report also describes the activities performed to determine whether contamination is present in mud pits used during the drilling of well R-EX, the gas production well drilled at the site to evaluate the effectiveness of the detonation in stimulating gas production. The investigation activities described in this report were conducted during the autumn of 1995, concurrent with the cleanup of the drilling effluent pond. This report describes the activities performed during the soil investigation and provides the analytical results for the samples collected during that investigation.
Metrics with vanishing quantum corrections
International Nuclear Information System (INIS)
Coley, A A; Hervik, S; Gibbons, G W; Pope, C N
2008-01-01
We investigate solutions of the classical Einstein or supergravity equations that solve any set of quantum corrected Einstein equations in which the Einstein tensor plus a multiple of the metric is equated to a symmetric conserved tensor T_μν(g_αβ, ∂_τ g_αβ, ∂_τ ∂_σ g_αβ, ...) constructed from sums of terms involving contractions of the metric and powers of arbitrary covariant derivatives of the curvature tensor. A classical solution, such as an Einstein metric, is called universal if, when evaluated on that Einstein metric, T_μν is a multiple of the metric. A Ricci-flat classical solution is called strongly universal if, when evaluated on that Ricci-flat metric, T_μν vanishes. It is well known that pp-waves in four spacetime dimensions are strongly universal. We focus attention on a natural generalization: Einstein metrics with holonomy Sim(n - 2) in which all scalar invariants are zero or constant. In four dimensions we demonstrate that the generalized Ghanam-Thompson metric is weakly universal and that the Goldberg-Kerr metric is strongly universal; indeed, we show that universality extends to all four-dimensional Sim(2) Einstein metrics. We also discuss generalizations to higher dimensions.
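The two notions of universality defined verbally above can be stated compactly; a sketch of the conditions in the notation of the abstract:

```latex
% T_{\mu\nu} is the symmetric conserved tensor built from the metric and curvature.
% Universal (for an Einstein metric g):
T_{\mu\nu}[g] \;=\; \lambda\, g_{\mu\nu} \quad \text{for some constant } \lambda .
% Strongly universal (for a Ricci-flat metric g):
T_{\mu\nu}[g] \;=\; 0 .
```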
Accurate quasiparticle calculation of x-ray photoelectron spectra of solids.
Aoki, Tsubasa; Ohno, Kaoru
2018-05-31
It has been highly desirable to provide an accurate and reliable method to calculate core electron binding energies (CEBEs) of crystals and to understand the final-state screening effect on a core hole in high-resolution x-ray photoelectron spectroscopy (XPS), because the ΔSCF method cannot simply be used for bulk systems. We propose to use the quasiparticle calculation based on many-body perturbation theory for this problem. In this study, CEBEs of band-gapped crystals, silicon, diamond, β-SiC, BN, and AlP, are investigated by means of the GW approximation (GWA) using the full ω integration and compared with the preexisting XPS data. The screening effect on a deep core hole is also investigated in detail by evaluating the relaxation energy (RE) from the core and valence contributions separately. Calculated results show that not only the valence electrons but also the core electrons make an important contribution to the RE, and the GWA has a tendency to underestimate CEBEs due to the excess RE. This underestimation can be improved by introducing the self-screening correction to the GWA. The resulting C1s, B1s, N1s, Si2p, and Al2p CEBEs are in excellent agreement with the experiments within a 1 eV absolute error range. The present self-screening-corrected GW approach has the capability to achieve highly accurate prediction of CEBEs without any empirical parameter for band-gapped crystals, and provides a more reliable theoretical approach than the conventional ΔSCF-DFT method.
Looking for Holes in Sterile Wrapping: How Accurate Are We?
Rashidifard, Christopher H; Mayassi, Hani A; Bush, Chelsea M; Opalacz, Brian M; Richardson, Mark W; Muccino, Paul M; DiPasquale, Thomas G
2018-05-01
Defects in sterile surgical wrapping are identified by the presence of holes through which light can be seen. However, it is unknown how reliably the human eye can detect these defects. The purpose of this study was to determine (1) how often holes of various sizes in sterile packaging could be detected; and (2) whether differences in lighting, experience level of the observer, or time spent inspecting the packaging were associated with improved likelihood of detection of holes in sterile packaging. Thirty participants (10 surgical technicians, 13 operating room nurses, seven orthopaedic surgery residents) inspected sterile sheets for perforations under ambient operating room (OR) lighting and then again with a standard powered OR lamp in addition to ambient lighting. There were no additional criteria for eligibility other than willingness to participate. Each sheet contained one of nine defect sizes, with four sheets allocated to each defect size. Ten wraps were controls with no defects. Participants were allowed as much time as necessary for inspection. Holes ≥ 2.5 mm were detected more often than holes ≤ 2 mm (87% [832 of 960] versus 7% [82 of 1200]; odds ratio, 88.6 [95% confidence interval, 66.2-118.6]; p < 0.001). There was no difference in detection accuracy between the OR lamp and ambient lighting, nor across experience levels. There was no correlation between inspection time and detection accuracy. Defects ≤ 2 mm were not reliably detected regardless of lighting, time, or level of experience. Future research is warranted to determine defect sizes that are clinically meaningful. Level II, diagnostic study.
Procedures for accurately diluting and dispensing radioactive solutions
International Nuclear Information System (INIS)
1975-01-01
The techniques currently used by various laboratories participating in international comparisons of radioactivity measurements are surveyed and recommendations for good laboratory practice established. Thus one describes, for instance, the preparation of solutions, dilution techniques, the use of 'pycnometers', weighing procedures (including buoyancy correction), etc. It should be possible to keep random and systematic uncertainties below 0.1% of the final result
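The buoyancy correction mentioned above can be sketched numerically; the densities below (air, steel calibration weights, aqueous sample) are illustrative assumptions, not values from the report:

```python
def buoyancy_corrected_mass(reading_g, rho_sample, rho_air=1.2, rho_weights=8000.0):
    """Air-buoyancy correction of a balance reading (grams).

    Densities in kg/m^3; the defaults (air ~1.2, steel calibration
    weights ~8000) are illustrative assumptions, not values from the
    report.
    """
    return reading_g * (1.0 - rho_air / rho_weights) / (1.0 - rho_air / rho_sample)

# For an aqueous solution (~1000 kg/m^3) the correction is roughly +0.1%,
# i.e. the same size as the uncertainty target quoted in the abstract.
m = buoyancy_corrected_mass(10.0, rho_sample=1000.0)
```

This shows why the correction cannot be skipped at the stated 0.1% accuracy level: for dilute aqueous sources it is itself of that order.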
77 FR 3800 - Accurate NDE & Inspection, LLC; Confirmatory Order
2012-01-25
... corrective actions that included training provided to the radiography staff by the RSO on operating... staff during the annual refresher training conducted in October 2010 and the safety meeting conducted in... July 28, 2011. d. The training will include a discussion on the following topics: (1) The importance of...
International Nuclear Information System (INIS)
Ahmadian, Alireza; Ay, Mohammad R.; Sarkar, Saeed; Bidgoli, Javad H.; Zaidi, Habib
2008-01-01
Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium with high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μmap), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high CT number object segmentation using combined region- and boundary-based segmentation and second, object classification to bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of generated μmaps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique in
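The final conversion step described above (a piecewise calibration curve from CT numbers to linear attenuation coefficients at 511 keV) is commonly implemented as a bilinear mapping; a minimal sketch with illustrative constants, not the calibration curve used in this study:

```python
MU_WATER_511 = 0.096  # cm^-1 at 511 keV; illustrative value

def hu_to_mu_511(hu):
    """Piecewise-linear ('bilinear') conversion of a CT number (HU) to a
    linear attenuation coefficient at 511 keV.

    The break point at HU = 0 and the slope above it are illustrative of
    the generic CTAC approach; they are not the study's calibration.
    """
    if hu <= 0:  # air-to-water segment: scales linearly from 0 at -1000 HU
        return MU_WATER_511 * (hu + 1000.0) / 1000.0
    # above water, a shallower slope models bone-like (or contrast) media
    return MU_WATER_511 + hu * 4.1e-5
```

Misclassified contrast medium is mapped on the bone-like branch, which is exactly how the overestimation of μ described in the abstract arises.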
Prevalence of accurate nursing documentation in patient records
Paans, Wolter; Sermeus, Walter; Nieweg, Roos; van der Schans, Cees
2010-01-01
AIM: This paper is a report of a study conducted to describe the accuracy of nursing documentation in patient records in hospitals. Background. Accurate nursing documentation enables nurses to systematically review the nursing process and to evaluate the quality of care. Assessing nurses' reports
Accurate method of the magnetic field measurement of quadrupole magnets
International Nuclear Information System (INIS)
Kumada, M.; Sakai, I.; Someya, H.; Sasaki, H.
1983-01-01
We present an accurate method for the magnetic field measurement of quadrupole magnets. The method of obtaining the information on the field gradient and the effective focussing length is given. A new scheme to obtain the information on the skew field components is also proposed. The relative accuracy of the measurement was 1 × 10⁻⁴ or less. (author)
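The effective focusing length obtained from such a measurement is typically the integral of the gradient profile along the beam axis, normalized by the central gradient; a generic numerical sketch (the profile values are made up):

```python
def effective_length(z, gradient):
    """Effective focusing length L_eff = (1/G0) * integral of G(z) dz,
    with G0 taken as the plateau (centre) gradient.

    Trapezoidal integration over measured samples; a generic sketch of
    the standard definition, not the procedure of the paper.
    """
    g0 = max(gradient)  # centre/plateau gradient
    integral = sum((gradient[i] + gradient[i + 1]) * (z[i + 1] - z[i]) / 2.0
                   for i in range(len(z) - 1))
    return integral / g0

# made-up gradient profile: flat top with linear fringe fields
L_eff = effective_length([0, 1, 2, 3, 4], [0.0, 1.0, 1.0, 1.0, 0.0])
```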
Feedforward signal prediction for accurate motion systems using digital filters
Butler, H.
2012-01-01
A positioning system that needs to accurately track a reference can benefit greatly from using feedforward. When using a force actuator, the feedforward needs to generate a force proportional to the reference acceleration, which can be measured by means of an accelerometer or can be created by
Fishing site mapping using local knowledge provides accurate and ...
African Journals Online (AJOL)
Accurate fishing ground maps are necessary for fisheries monitoring. In Velondriake locally managed marine area (LMMA) we observed that the nomenclature of shared fishing sites (FS) is villages dependent. Additionally, the level of illiteracy makes data collection more complicated, leading to data collectors improvising ...
Laser guided automated calibrating system for accurate bracket ...
African Journals Online (AJOL)
It is widely recognized that accurate bracket placement is of critical importance in the efficient application of biomechanics and in realizing the full potential of a preadjusted edgewise appliance. Aim: The purpose of ... placement. Keywords: Hough transforms, Indirect bonding technique, Laser, Orthodontic bracket placement ...
Foresight begins with FMEA. Delivering accurate risk assessments.
Passey, R D
1999-03-01
If sufficient factors are taken into account and two- or three-stage analysis is employed, failure mode and effect analysis represents an excellent technique for delivering accurate risk assessments for products and processes, and for relating them to legal liability. This article describes a format that facilitates easy interpretation.
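FMEA results are conventionally summarized by a risk priority number (severity × occurrence × detection, each rated 1-10) used to rank failure modes; a minimal sketch with hypothetical failure modes, not examples from the article:

```python
def rpn(severity, occurrence, detection):
    """Risk priority number: product of conventional 1-10 FMEA ratings."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("FMEA ratings are conventionally 1-10")
    return severity * occurrence * detection

# hypothetical failure modes for a medical device, ranked by RPN
failure_modes = {
    "seal leak": rpn(8, 3, 4),
    "connector corrosion": rpn(5, 6, 2),
}
worst = max(failure_modes, key=failure_modes.get)
```

A two- or three-stage analysis, as the article suggests, would re-score the surviving high-RPN modes after mitigations are applied.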
Fast and Accurate Residential Fire Detection Using Wireless Sensor Networks
Bahrepour, Majid; Meratnia, Nirvana; Havinga, Paul J.M.
2010-01-01
Prompt and accurate residential fire detection is important for on-time fire extinguishing and consequently reducing damages and life losses. To detect fire sensors are needed to measure the environmental parameters and algorithms are required to decide about occurrence of fire. Recently, wireless
Towards accurate de novo assembly for genomes with repeats
Bucur, Doina
2017-01-01
De novo genome assemblers designed for short k-mer length or using short raw reads are unlikely to recover complex features of the underlying genome, such as repeats hundreds of bases long. We implement a stochastic machine-learning method which obtains accurate assemblies with repeats and
Dense and accurate whole-chromosome haplotyping of individual genomes
Porubsky, David; Garg, Shilpa; Sanders, Ashley D.; Korbel, Jan O.; Guryev, Victor; Lansdorp, Peter M.; Marschall, Tobias
2017-01-01
The diploid nature of the human genome is neglected in many analyses done today, where a genome is perceived as a set of unphased variants with respect to a reference genome. This lack of haplotype-level analyses can be explained by a lack of methods that can produce dense and accurate
Accurate automatic tuning circuit for bipolar integrated filters
de Heij, Wim J.A.; de Heij, W.J.A.; Hoen, Klaas; Hoen, Klaas; Seevinck, Evert; Seevinck, E.
1990-01-01
An accurate automatic tuning circuit for tuning the cutoff frequency and Q-factor of high-frequency bipolar filters is presented. The circuit is based on a voltage controlled quadrature oscillator (VCO). The frequency and the RMS (root mean square) amplitude of the oscillator output signal are
Laser Guided Automated Calibrating System for Accurate Bracket ...
African Journals Online (AJOL)
Background: The basic premise of preadjusted bracket system is accurate bracket positioning. ... using MATLAB ver. 7 software (The MathWorks Inc.). These images are in the form of matrices of size 640 × 480. 650 nm (red light) type III diode laser is used as ... motion control and Pitch, Yaw, Roll degrees of freedom (DOF).
Dynamic weighing for accurate fertilizer application and monitoring
Bergeijk, van J.; Goense, D.; Willigenburg, van L.G.; Speelman, L.
2001-01-01
The mass flow of fertilizer spreaders must be calibrated for the different types of fertilizers used. To obtain accurate fertilizer application manual calibration of actual mass flow must be repeated frequently. Automatic calibration is possible by measurement of the actual mass flow, based on
A Simple and Accurate Method for Measuring Enzyme Activity.
Yip, Din-Yan
1997-01-01
Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…
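Catalase activity is usually quantified as the initial rate of product formation, i.e. the slope of oxygen volume versus time over the early readings; a generic least-squares sketch with made-up data, not the article's protocol:

```python
def initial_rate(times_s, volumes_ml):
    """Least-squares slope of product amount versus time, the usual
    measure of enzyme activity (here mL of O2 per second).

    Generic sketch; the readings below are hypothetical, not data from
    the article.
    """
    n = len(times_s)
    mean_t = sum(times_s) / n
    mean_v = sum(volumes_ml) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in zip(times_s, volumes_ml))
    den = sum((t - mean_t) ** 2 for t in times_s)
    return num / den

# hypothetical readings: O2 collected every 30 s
rate = initial_rate([0, 30, 60, 90], [0.0, 2.1, 4.0, 6.2])
```

Repeating this at several temperatures or pH values gives the property investigations the abstract mentions.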
How Accurate are Government Forecasts of Economic Fundamentals?
C-L. Chang (Chia-Lin); Ph.H.B.F. Franses (Philip Hans); M.J. McAleer (Michael)
2009-01-01
textabstractA government’s ability to forecast key economic fundamentals accurately can affect business confidence, consumer sentiment, and foreign direct investment, among others. A government forecast based on an econometric model is replicable, whereas one that is not fully based on an
Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method
Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey
2013-01-01
Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…
General approach for accurate resonance analysis in transformer windings
Popov, M.
2018-01-01
In this paper, resonance effects in transformer windings are thoroughly investigated and analyzed. The resonance is determined by making use of an accurate approach based on the application of the impedance matrix of a transformer winding. The method is validated by a test coil and the numerical
Novel multi-beam radiometers for accurate ocean surveillance
DEFF Research Database (Denmark)
Cappellin, C.; Pontoppidan, K.; Nielsen, P. H.
2014-01-01
Novel antenna architectures for real aperture multi-beam radiometers providing high resolution and high sensitivity for accurate sea surface temperature (SST) and ocean vector wind (OVW) measurements are investigated. On the basis of the radiometer requirements set for future SST/OVW missions...
Planimetric volumetry of the prostate: how accurate is it?
Aarnink, R. G.; Giesen, R. J.; de la Rosette, J. J.; Huynen, A. L.; Debruyne, F. M.; Wijkstra, H.
1995-01-01
Planimetric volumetry is used in clinical practice when accurate volume determination of the prostate is needed. The prostate volume is determined by discretization of the 3D prostate shape. The area of the prostate is calculated in consecutive ultrasonographic cross-sections. This area is multiplied
Accurate conjugate gradient methods for families of shifted systems
Eshof, J. van den; Sleijpen, G.L.G.
We present an efficient and accurate variant of the conjugate gradient method for solving families of shifted systems. In particular we are interested in shifted systems that occur in Tikhonov regularization for inverse problems since these problems can be sensitive to roundoff errors. The
Accurate 3D Mapping Algorithm for Flexible Antennas
Directory of Open Access Journals (Sweden)
Saed Asaly
2018-01-01
This work addresses the problem of performing an accurate 3D mapping of a flexible antenna surface. Consider a high-gain satellite flexible antenna; even a submillimeter change in the antenna surface may lead to a considerable loss in the antenna gain. Using a robotic subreflector, such changes can be compensated for. Yet, in order to perform such tuning, an accurate 3D mapping of the main antenna is required. This paper presents a general method for performing an accurate 3D mapping of marked surfaces such as satellite dish antennas. Motivated by the novel technology for nanosatellites with flexible high-gain antennas, we propose a new accurate mapping framework which requires a small-sized monocamera and known patterns on the antenna surface. The experimental result shows that the presented mapping method can detect changes up to 0.1-millimeter accuracy, while the camera is located 1 meter away from the dish, allowing an RF antenna optimization for Ka and Ku frequencies. Such optimization process can improve the gain of the flexible antennas and allow an adaptive beam shaping. The presented method is currently being implemented on a nanosatellite which is scheduled to be launched at the end of 2018.
Hird, Sarah; Kubatko, Laura; Carstens, Bryan
2010-11-01
We describe a method for estimating species trees that relies on replicated subsampling of large data matrices. One application of this method is phylogeographic research, which has long depended on large datasets that sample intensively from the geographic range of the focal species; these datasets allow systematicists to identify cryptic diversity and understand how contemporary and historical landscape forces influence genetic diversity. However, analyzing any large dataset can be computationally difficult, particularly when newly developed methods for species tree estimation are used. Here we explore the use of replicated subsampling, a potential solution to the problem posed by large datasets, with both a simulation study and an empirical analysis. In the simulations, we sample different numbers of alleles and loci, estimate species trees using STEM, and compare the estimated to the actual species tree. Our results indicate that subsampling three alleles per species for eight loci nearly always results in an accurate species tree topology, even in cases where the species tree was characterized by extremely rapid divergence. Even more modest subsampling effort, for example one allele per species and two loci, was more likely than not (>50%) to identify the correct species tree topology, indicating that in nearly all cases, computing the majority-rule consensus tree from replicated subsampling provides a good estimate of topology. These results were supported by estimating the correct species tree topology and reasonable branch lengths for an empirical 10-locus great ape dataset. Copyright © 2010 Elsevier Inc. All rights reserved.
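The majority-rule consensus over subsampling replicates described above can be sketched as follows; the topology strings and replicate counts are hypothetical, and real pipelines would compare trees with proper phylogenetic tooling rather than canonical strings:

```python
from collections import Counter

def majority_rule_topology(replicate_topologies):
    """Return the topology recovered by the most subsampling replicates
    and its frequency.

    Topologies are represented as canonical strings (e.g. Newick) so
    that identical topologies compare equal; the replicates below are
    hypothetical, not results from the study.
    """
    counts = Counter(replicate_topologies)
    topology, n = counts.most_common(1)[0]
    return topology, n / len(replicate_topologies)

# ten hypothetical STEM runs on subsampled alleles/loci
reps = ["((A,B),C);"] * 7 + ["((A,C),B);"] * 3
topo, support = majority_rule_topology(reps)
```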
International Nuclear Information System (INIS)
Goodpaster, Jason D.; Barnes, Taylor A.; Miller, Thomas F.; Manby, Frederick R.
2014-01-01
We analyze the sources of error in quantum embedding calculations in which an active subsystem is treated using wavefunction methods, and the remainder using density functional theory. We show that the embedding potential felt by the electrons in the active subsystem makes only a small contribution to the error of the method, whereas the error in the nonadditive exchange-correlation energy dominates. We test an MP2 correction for this term and demonstrate that the corrected embedding scheme accurately reproduces wavefunction calculations for a series of chemical reactions. Our projector-based embedding method uses localized occupied orbitals to partition the system; as with other local correlation methods, abrupt changes in the character of the localized orbitals along a reaction coordinate can lead to discontinuities in the embedded energy, but we show that these discontinuities are small and can be systematically reduced by increasing the size of the active region. Convergence of reaction energies with respect to the size of the active subsystem is shown to be rapid for all cases where the density functional treatment is able to capture the polarization of the environment, even in conjugated systems, and even when the partition cuts across a double bond
XenoSite: accurately predicting CYP-mediated sites of metabolism with neural networks.
Zaretzki, Jed; Matlock, Matthew; Swamidass, S Joshua
2013-12-23
Understanding how xenobiotic molecules are metabolized is important because it influences the safety, efficacy, and dose of medicines and how they can be modified to improve these properties. The cytochrome P450s (CYPs) are proteins responsible for metabolizing 90% of drugs on the market, and many computational methods can predict which atomic sites of a molecule--sites of metabolism (SOMs)--are modified during CYP-mediated metabolism. This study improves on prior methods of predicting CYP-mediated SOMs by using new descriptors and machine learning based on neural networks. The new method, XenoSite, is faster to train and more accurate by as much as 4% or 5% for some isozymes. Furthermore, some "incorrect" predictions made by XenoSite were subsequently validated as correct predictions by re-evaluation of the source literature. Moreover, XenoSite output is interpretable as a probability, which reflects both the confidence of the model that a particular atom is metabolized and the statistical likelihood that its prediction for that atom is correct.
Accurate acoustic and elastic beam migration without slant stack for complex topography
International Nuclear Information System (INIS)
Huang, Jianping; Yuan, Maolin; Li, Zhenchun; Liao, Wenyuan; Yue, Yubo
2015-01-01
Recent trends in seismic exploration have led to the collection of more surveys, often with multi-component recording, in onshore settings where both topography and subsurface targets are complex, leading to challenges for processing methods. Gaussian beam migration (GBM) is an alternative to single-arrival Kirchhoff migration, although there are some issues resulting in unsatisfactory GBM images. For example, static correction will give rise to the distortion of wavefields when near-surface elevation and velocity vary rapidly. Moreover, the Green's function compensated for phase changes from the beam center to the receivers is inaccurate when receivers are not placed within some neighborhood of the beam center; that is, GBM is somewhat inflexible for irregular acquisition systems and complex topography. As a result, the differences in both the near-surface velocity and the surface slope from the beam center to the receivers, and the poor spatial sampling of the land data, lead to inaccuracy and aliasing of the slant stack, respectively. In order to improve the flexibility and accuracy of GBM, we propose accurate acoustic, PP and polarity-corrected PS beam migration without slant stack for complex topography. The applications of this method to one-component synthetic data from a 2D Canadian Foothills model and a Zhongyuan oilfield fault model, one-component field data and unseparated multi-component synthetic data demonstrate that the method is effective for structural and relatively amplitude-preserved imaging, but significantly more time-consuming. (paper)
Accurate donor electron wave functions from a multivalley effective mass theory.
Pendo, Luke; Hu, Xuedong
Multivalley effective mass (MEM) theories combine physical intuition with a marginal need for computational resources, but they tend to be insensitive to variations in the wavefunction. However, recent papers suggest full Bloch functions and suitable central cell donor potential corrections are essential to replicating qualitative and quantitative features of the wavefunction. In this talk, we consider a variational MEM method that can accurately predict both the spectrum and the wavefunction of isolated phosphorus donors. As per Gamble et al., we employ a truncated series representation of the Bloch function with a tetrahedrally symmetric central cell correction. We use a dynamic dielectric constant, a feature commonly seen in tight-binding methods. Uniquely, we use a freely extensible basis of either all Slater- or all Gaussian-type functions. With a large basis able to capture the influence of higher energy eigenstates, this method is well positioned to consider the influence of external perturbations, such as electric field or applied strain, on the charge density. This work is supported by the US Army Research Office (W911NF1210609).
Loureiro, A. D.; Gomes, L. M.; Ventura, L.
2018-02-01
The international standard ISO 12312-1 proposes transmittance tests that quantify how dark sunglasses lenses are and whether or not they are suitable for driving. A spectrometer is required to perform these tests. In this study, we present and analyze theoretically an accurate alternative method for performing these measurements using simple components. Using three LEDs and a four-channel sensor, we generated weighting functions similar to the standard ones for luminous and traffic-light transmittances. From spectroscopy data for 89 sunglasses lenses, we calculated luminous transmittances and signal detection quotients using our obtained weighting functions and the standard ones. Mean-difference Tukey plots were used to compare the results. All tested sunglasses lenses were classified in the right category and correctly as suitable or not for driving. The greatest absolute errors for luminous transmittance and the red, yellow, green and blue signal detection quotients were 0.15%, 0.17, 0.06, 0.04 and 0.18, respectively. This method will be used in a device capable of performing transmittance tests (visible, traffic lights and ultraviolet (UV)) according to the standard. It is important to measure luminous transmittance and the relative visual attenuation quotients correctly in order to report whether or not sunglasses are suitable for driving. Moreover, the standard's UV requirements depend on luminous transmittance.
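The luminous transmittance used in such tests is a weighted average of the lens's spectral transmittance over the photopic efficiency and the illuminant spectrum; a minimal numerical sketch on a made-up wavelength grid (the standard prescribes tabulated weighting functions, which are not reproduced here):

```python
def luminous_transmittance(tau, weight_v, weight_s):
    """Spectral transmittance tau(lambda) averaged with the photopic
    efficiency V(lambda) and illuminant spectrum S(lambda), all sampled
    on a common, equally spaced wavelength grid.

    A sketch of the weighting scheme; the spectra below are made up.
    """
    num = sum(t * v * s for t, v, s in zip(tau, weight_v, weight_s))
    den = sum(v * s for v, s in zip(weight_v, weight_s))
    return num / den

# a spectrally flat 20% lens must come out at exactly 20%
tau_v = luminous_transmittance([0.2] * 5, [0.1, 0.5, 1.0, 0.5, 0.1], [1.0] * 5)
```

The signal detection quotients mentioned above use the same structure with traffic-light signal spectra in place of the illuminant.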
High order corrections to the renormalon
International Nuclear Information System (INIS)
Faleev, S.V.
1997-01-01
High order corrections to the renormalon are considered. Each new type of insertion into the renormalon chain of graphs generates a correction to the asymptotics of perturbation theory of the order of ∝1. However, this series of corrections to the asymptotics is not itself asymptotic (i.e. the mth correction does not grow like m!). The summation of these corrections for the UV renormalon may change the asymptotics by a factor N^δ. For the traditional IR renormalon the mth correction diverges like (-2)^m. However, this divergence has no infrared origin and may be removed by a proper redefinition of the IR renormalon. On the other hand, for IR renormalons in hadronic event shapes one should naturally expect these multiloop contributions to decrease like (-2)^(-m). Some problems expected upon reaching the best accuracy of perturbative QCD are also discussed. (orig.)
Accurate Modelling of Surface Currents and Internal Tides in a Semi-enclosed Coastal Sea
Allen, S. E.; Soontiens, N. K.; Dunn, M. B. H.; Liu, J.; Olson, E.; Halverson, M. J.; Pawlowicz, R.
2016-02-01
The Strait of Georgia is a deep (400 m), strongly stratified, semi-enclosed coastal sea on the west coast of North America. We have configured a baroclinic model of the Strait of Georgia and surrounding coastal waters using the NEMO ocean community model. We run daily nowcasts and forecasts and publish our sea-surface results (including storm surge warnings) to the web (salishsea.eos.ubc.ca/storm-surge). Tides in the Strait of Georgia are mixed and large. The baroclinic model and previous barotropic models accurately represent tidal sea-level variations and depth mean currents. The baroclinic model reproduces accurately the diurnal but not the semi-diurnal baroclinic tidal currents. In the Southern Strait of Georgia, strong internal tidal currents at the semi-diurnal frequency are observed. Strong semi-diurnal tides are also produced in the model, but are almost 180 degrees out of phase with the observations. In the model, in the surface, the barotropic and baroclinic tides reinforce, whereas the observations show that at the surface the baroclinic tides oppose the barotropic. As such the surface currents are very poorly modelled. Here we will present evidence of the internal tidal field from observations. We will discuss the generation regions of the tides, the necessary modifications to the model required to correct the phase, the resulting baroclinic tides and the improvements in the surface currents.
Rapid and accurate prediction and scoring of water molecules in protein binding sites.
Directory of Open Access Journals (Sweden)
Gregory A Ross
Water plays a critical role in ligand-protein interactions. However, it is still challenging to predict accurately not only where water molecules prefer to bind, but also which of those water molecules might be displaceable. The latter is often seen as a route to optimizing the affinity of potential drug candidates. Using a protocol we call WaterDock, we show that the freely available AutoDock Vina tool can be used to accurately predict the binding sites of water molecules. WaterDock was validated using data from X-ray crystallography, neutron diffraction and molecular dynamics simulations, and correctly predicted 97% of the water molecules in the test set. In addition, we combined data-mining, heuristic and machine-learning techniques to develop probabilistic water-molecule classifiers. When applied to WaterDock predictions in the Astex Diverse Set of protein-ligand complexes, we could identify whether a water molecule was conserved or displaced to an accuracy of 75%. A second model predicted whether water molecules were displaced by polar groups or by non-polar groups to an accuracy of 80%. These results should prove useful for anyone wishing to undertake rational design of new compounds where the displacement of water molecules is being considered as a route to improved affinity.
Accurate Extraction of Charge Carrier Mobility in 4-Probe Field-Effect Transistors
Choi, Hyun Ho; Rodionov, Yaroslav I.; Paterson, Alexandra F.; Panidi, Julianna; Saranin, Danila; Kharlamov, Nikolai; Didenko, Sergei I.; Anthopoulos, Thomas D.; Cho, Kilwon; Podzorov, Vitaly
2018-01-01
Charge carrier mobility is an important characteristic of organic field-effect transistors (OFETs) and other semiconductor devices. However, accurate mobility determination in FETs is frequently compromised by issues related to Schottky-barrier contact resistance, which can be efficiently addressed by measurements in 4-probe/Hall-bar contact geometry. Here, it is shown that this technique, widely used in materials science, can still lead to significant mobility overestimation due to longitudinal channel shunting caused by voltage probes in 4-probe structures. This effect is investigated numerically and experimentally in specially designed multiterminal OFETs based on optimized novel organic-semiconductor blends and bulk single crystals. Numerical simulations reveal that 4-probe FETs with long but narrow channels and wide voltage probes are especially prone to channel shunting, which can lead to mobilities overestimated by as much as 350%. In addition, the first Hall effect measurements in blended OFETs are reported, and it is shown how Hall mobility can be affected by channel shunting. As a solution to this problem, a numerical correction factor is introduced that can be used to obtain much more accurate experimental mobilities. This methodology is relevant to the characterization of a variety of materials, including organic semiconductors, inorganic oxides, monolayer materials, as well as carbon nanotube and semiconductor nanocrystal arrays.
Accurate alpha sticking fractions from improved calculations relevant for muon catalyzed fusion
International Nuclear Information System (INIS)
Szalewicz, K.
1990-05-01
Recent experiments have shown that under proper conditions a single muon may catalyze almost two hundred fusions in its lifetime. This process proceeds through the formation of muonic molecular ions. Properties of these ions are central to the understanding of the phenomenon. Our work included the most accurate calculations of the energy levels and Coulombic sticking fractions for tdμ and other muonic molecular ions, calculations of Auger transition rates, calculations of corrections to the energy levels due to interactions with the host molecule, and calculation of the reactivation of muons from α particles. The majority of our effort has been devoted to the theory and computation of the influence of the strong nuclear forces on fusion rates and sticking fractions. We have calculated fusion rates for tdμ including the effects of nuclear forces on the molecular wave functions. We have also shown that these results can be reproduced to almost four-digit accuracy by using a very simple quasi-factorizable expression which does not require modifications of the molecular wave functions. Our sticking fractions are more accurate than any other theoretical values. We have used a more sophisticated theory than any other work, and our numerical calculations have converged to at least three significant digits.
Projection correction for the pixel-by-pixel basis in diffraction enhanced imaging
International Nuclear Information System (INIS)
Huang Zhifeng; Kang Kejun; Li Zheng
2006-01-01
Theories and methods of x-ray diffraction enhanced imaging (DEI) and computed tomography based on DEI (DEI-CT) have been investigated recently. However, the phenomenon of projection offsets, which may affect the accuracy of refraction-angle image extraction methods and of DEI-CT reconstruction algorithms, has seldom been addressed. This paper focuses on this issue. Projection offsets are revealed distinctly according to the equivalent rectilinear propagation model of the DEI. Then, an effective correction method using the equivalent positions of projection data is presented to eliminate the errors induced by projection offsets. The correction method is validated by a computer simulation experiment, and extraction methods or reconstruction algorithms based on the corrected data can give more accurate results. The limitations of the correction method are discussed at the end.
A new correction method for determination on carbohydrates in lignocellulosic biomass.
Li, Hong-Qiang; Xu, Jian
2013-06-01
Accurate determination of the key components in lignocellulosic biomass is the premise of pretreatment and bioconversion. Currently, the widely used 72% H2SO4 two-step hydrolysis quantitative saccharification (QS) procedure uses loss coefficients of monosaccharide standards to correct monosaccharide loss in the secondary hydrolysis (SH) step of QS, and may result in excessive correction. By studying the quantitative relationships between glucose and xylose losses under specific hydrolysis conditions and the HMF and furfural productions, a simple correction for the monosaccharide loss from both the primary hydrolysis (PH) and SH was established, using HMF and furfural as the calibrators. This method was applied to component determination of corn stover, Miscanthus and cotton stalk (raw materials and pretreated) and compared to the NREL method. It has been proved that this method can avoid excessive correction of samples with high carbohydrate contents. Copyright © 2013 Elsevier Ltd. All rights reserved.
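The calibration idea above, putting back the sugar represented by its dehydration products HMF and furfural, can be illustrated with plain reaction stoichiometry. The conversion factors and function below are a hypothetical sketch, not the paper's fitted correction coefficients:

```python
# Illustrative stoichiometric back-conversion of degradation products.
# Molar masses (g/mol): glucose 180.16 -> HMF 126.11 (loss of 3 H2O),
# xylose 150.13 -> furfural 96.08 (loss of 3 H2O).
GLU_PER_HMF = 180.16 / 126.11       # g glucose represented by 1 g HMF
XYL_PER_FURFURAL = 150.13 / 96.08   # g xylose represented by 1 g furfural

def corrected_sugars(glucose, xylose, hmf, furfural):
    """All inputs in g/L (e.g. from HPLC); returns loss-corrected amounts."""
    return (glucose + GLU_PER_HMF * hmf,
            xylose + XYL_PER_FURFURAL * furfural)

# Hypothetical hydrolysate measurements:
glu, xyl = corrected_sugars(glucose=30.0, xylose=15.0, hmf=0.5, furfural=0.4)
```

The correction only grows the measured values by the amount attributable to measured degradation, so it cannot over-correct the way a fixed loss coefficient can.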
Nazarparvar, Babak; Shamsaei, Mojtaba; Rajabi, Hossein
2012-01-01
The motion of the head during brain positron emission tomography (PET) acquisitions has been identified as a source of artifacts in the reconstructed image. In this study, a method is described to develop an image-based motion correction technique for correcting the post-acquisition data without using an external optical motion-tracking system such as POLARIS. In this technique, GATE has been used to simulate a PET brain scan using point sources mounted around the head to accurately monitor the position of the head during the time frames. The head motion measured in each frame defines a transformation of the image frame matrix, resulting in a fully corrected data set. Using different kinds of phantoms and motions, the accuracy of the correction method is tested and its applicability to experimental studies is demonstrated as well.
Correction magnet power supplies for APS machine
International Nuclear Information System (INIS)
Kang, Y.G.
1991-01-01
The Advanced Photon Source machine requires a number of correction magnets; five kinds for the storage ring, two for the injector synchrotron, and two for the positron accumulator ring. Three types of bipolar power supply will be used for all the correction magnets. This paper describes the design aspects and considerations for correction magnet power supplies for the APS machine. 3 refs., 3 figs., 1 tab
Fast and Accurate Rat Head Motion Tracking With Point Sources for Awake Brain PET.
Miranda, Alan; Staelens, Steven; Stroobants, Sigrid; Verhaeghe, Jeroen
2017-07-01
To avoid the confounding effects of anesthesia and immobilization stress in rat brain positron emission tomography (PET), motion tracking-based unrestrained awake rat brain imaging is being developed. In this paper, we propose a fast and accurate rat head motion tracking method based on small PET point sources. PET point sources (3-4) attached to the rat's head are tracked in image space using 15-32-ms time frames. Our point source tracking (PST) method was validated using a manually moved microDerenzo phantom that was simultaneously tracked with an optical tracker (OT) for comparison. The PST method was further validated in three awake [18F]FDG rat brain scans. Compared with the OT, the PST-based correction at the same frame rate (31.2 Hz) reduced the reconstructed FWHM by 0.39-0.66 mm for the different tested rod sizes of the microDerenzo phantom. The FWHM could be further reduced by another 0.07-0.13 mm when increasing the PST frame rate (66.7 Hz). Regional brain [18F]FDG uptake in the motion-corrected scan was strongly correlated with that of the anesthetized reference scan in all three cases. The proposed PST method allowed excellent and reproducible motion correction in awake in vivo experiments. In addition, there is no need for specialized tracking equipment or additional calibrations, the point sources are practically imperceptible to the rat, and PST is ideally suited for small-bore scanners, where optical tracking might be challenging.
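Turning the tracked positions of 3-4 point sources into a rigid head-motion estimate per time frame is a classic point-set registration problem. A minimal sketch using the standard Kabsch algorithm (an assumption: the abstract does not state the authors' exact estimator):

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid motion (Kabsch): find R, t with Q ~= P @ R.T + t.
    P, Q are (n, 3) arrays of matched marker positions in two time frames."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

With the per-frame (R, t) in hand, each frame's events can be transformed back to a reference pose before reconstruction, which is the essence of image-space motion correction.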
Yoshida, Tatsusada; Hayashi, Takahisa; Mashima, Akira; Chuman, Hiroshi
2015-10-01
One of the most challenging problems in computer-aided drug discovery is the accurate prediction of the binding energy between a ligand and a protein. For accurate estimation of the net binding energy ΔEbind in the framework of the Hartree-Fock (HF) theory, it is necessary to estimate two additional energy terms: the dispersion interaction energy (Edisp) and the basis set superposition error (BSSE). We previously reported a simple and efficient dispersion correction, Edisp, to the Hartree-Fock theory (HF-Dtq). In the present study, an approximation procedure for estimating BSSE proposed by Kruse and Grimme, the geometrical counterpoise correction (gCP), was incorporated into HF-Dtq (HF-Dtq-gCP). The relative weights of the Edisp (Dtq) and BSSE (gCP) terms were determined to reproduce ΔEbind calculated with CCSD(T)/CBS or /aug-cc-pVTZ (HF-Dtq-gCP (scaled)). The performance of HF-Dtq-gCP (scaled) was compared with that of B3LYP-D3(BJ)-bCP (dispersion-corrected B3LYP with the Boys and Bernardi counterpoise correction (bCP)), taking ΔEbind (CCSD(T)-bCP) of small non-covalent complexes as a 'gold standard'. As a critical test, HF-Dtq-gCP (scaled)/6-31G(d) and B3LYP-D3(BJ)-bCP/6-31G(d) were applied to the complex model for HIV-1 protease and its potent inhibitor, KNI-10033. The present results demonstrate that HF-Dtq-gCP (scaled) is a useful and powerful remedy for accurately and promptly predicting ΔEbind between a ligand and a protein, albeit a simple correction procedure. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ideal flood field images for SPECT uniformity correction
International Nuclear Information System (INIS)
Oppenheim, B.E.; Appledorn, C.R.
1984-01-01
Since as little as 2.5% camera non-uniformity can cause disturbing artifacts in SPECT imaging, the ideal flood field images for uniformity correction would be made with the collimator in place using a perfectly uniform sheet source. While such a source is not realizable, the equivalent images can be generated by mapping the activity distribution of a Co-57 sheet source and correcting subsequent images of the source with this mapping. Mapping is accomplished by analyzing equal-time images of the source made in multiple, precisely determined positions. The ratio of counts detected in the same region of two images is a measure of the ratio of the activities of the two portions of the source imaged in that region. The activity distribution in the sheet source is determined from a set of such ratios. The more source positions imaged in a given time, the more accurate the source mapping, according to the results of a computer simulation. A 1.9 mCi Co-57 sheet source was shifted in 12 mm increments along the horizontal and vertical axes of the camera face to 9 positions on each axis. The source was imaged for 20 min in each position, and 214 million total counts were accumulated. The activity distribution of the source, relative to the center pixel, was determined for a 31 x 31 array. The integral uniformity was found to be 2.8%. The RMS error for such a mapping was determined by computer simulation to be 0.46%. The activity distribution was used to correct a high-count flood field image for non-uniformities attributable to the Co-57 source. Such a corrected image represents the camera-plus-collimator response to an almost perfectly uniform sheet source.
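The ratio-mapping idea (the same camera region sees two different source portions in two positions, so the count ratio cancels the detector response) can be sketched in a noise-free 1D toy. The real procedure is 2D, uses 9 positions per axis, and must average over counting noise; this is only the core arithmetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 31
sens = 1.0 + 0.05 * rng.standard_normal(n)   # unknown camera sensitivity map
act = 1.0 + 0.03 * rng.standard_normal(n)    # unknown sheet-source activity

def acquire(shift):
    """Counts in detector bin r with the source shifted by `shift` bins."""
    img = np.zeros(n)
    for r in range(n):
        if 0 <= r - shift < n:
            img[r] = sens[r] * act[r - shift]
    return img

img0, img1 = acquire(0), acquire(1)

# Same detector bin, two source positions: img1[r]/img0[r] = act[r-1]/act[r],
# independent of the camera response. Chain these ratios from the center bin.
center = n // 2
act_est = np.ones(n)                          # activity relative to center bin
for i in range(center - 1, -1, -1):           # chain ratios leftwards
    act_est[i] = act_est[i + 1] * img1[i + 1] / img0[i + 1]
for i in range(center + 1, n):                # and rightwards
    act_est[i] = act_est[i - 1] * img0[i] / img1[i]
```

`act_est` recovers the source activity relative to the center bin exactly in this noise-free setting; a flood image divided by this map then reflects camera non-uniformity alone.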
Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models
Directory of Open Access Journals (Sweden)
Stovgaard Kasper
2010-08-01
Background: Genome sequencing projects have expanded the gap between the amount of known protein sequences and structures. The limitations of current high-resolution structure determination methods make it unlikely that this gap will disappear in the near future. Small angle X-ray scattering (SAXS) is an established low-resolution method for routinely determining the structure of proteins in solution. The purpose of this study is to develop a method for the efficient calculation of accurate SAXS curves from coarse-grained protein models. Such a method can, for example, be used to construct a likelihood function, which is paramount for structure determination based on statistical inference. Results: We present a method for the efficient calculation of accurate SAXS curves based on the Debye formula and a set of scattering form factors for dummy atom representations of amino acids. Such a method avoids the computationally costly iteration over all atoms. We estimated the form factors using generated data from a set of high quality protein structures. No ad hoc scaling or correction factors are applied in the calculation of the curves. Two coarse-grained representations of protein structure were investigated; two scattering bodies per amino acid led to significantly better results than a single scattering body. Conclusion: We show that the obtained point estimates allow the calculation of accurate SAXS curves from coarse-grained protein models. The resulting curves are on par with the current state-of-the-art program CRYSOL, which requires full atomic detail. Our method was also comparable to CRYSOL in recognizing native structures among native-like decoys. As a proof-of-concept, we combined the coarse-grained Debye calculation with a previously described probabilistic model of protein structure, TorusDBN. This resulted in a significant improvement in decoy recognition performance. In conclusion, the presented method shows great promise for
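The core of the method, the Debye formula evaluated over coarse-grained scattering bodies, is compact enough to sketch directly. The bead positions and constant form factors below are placeholders (the paper estimates q-dependent form factors per dummy body):

```python
import numpy as np

def debye_intensity(q, coords, f):
    """Debye formula: I(q) = sum_ij f_i f_j sin(q r_ij) / (q r_ij).
    coords: (N, 3) bead positions; f: (N,) form factors (q-independent here)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    s = np.sinc(q * d / np.pi)        # np.sinc(x) = sin(pi x)/(pi x); handles r_ij = 0
    return float(f @ s @ f)

# Toy "dummy body" model: random bead positions, unit form factors (hypothetical).
rng = np.random.default_rng(2)
beads = rng.uniform(0.0, 30.0, size=(50, 3))
ff = np.ones(50)
curve = [debye_intensity(q, beads, ff) for q in np.linspace(0.01, 0.5, 25)]
```

The double sum is O(N^2) in the number of scattering bodies, which is exactly why reducing an all-atom model to one or two bodies per residue makes the calculation cheap.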
BEYOND ELLIPSE(S): ACCURATELY MODELING THE ISOPHOTAL STRUCTURE OF GALAXIES WITH ISOFIT AND CMODEL
International Nuclear Information System (INIS)
Ciambur, B. C.
2015-01-01
This work introduces a new fitting formalism for isophotes that enables more accurate modeling of galaxies with non-elliptical shapes, such as disk galaxies viewed edge-on or galaxies with X-shaped/peanut bulges. Within this scheme, the angular parameter that defines quasi-elliptical isophotes is transformed from the commonly used, but inappropriate, polar coordinate to the “eccentric anomaly.” This provides a superior description of deviations from ellipticity, better capturing the true isophotal shape. Furthermore, this makes it possible to accurately recover both the surface brightness profile, using the correct azimuthally averaged isophote, and the two-dimensional model of any galaxy: the hitherto ubiquitous, but artificial, cross-like features in residual images are completely removed. The formalism has been implemented into the Image Reduction and Analysis Facility (IRAF) tasks Ellipse and Bmodel to create the new tasks “Isofit” and “Cmodel.” The new tools are demonstrated here with application to five galaxies, chosen as representative case studies for several areas where this technique makes it possible to gain new scientific insight. Specifically: properly quantifying boxy/disky isophotes via the fourth harmonic order in edge-on galaxies, quantifying X-shaped/peanut bulges, higher-order Fourier moments for modeling bars in disks, and complex isophote shapes. Higher order (n > 4) harmonics now become meaningful and may correlate with structural properties, as boxyness/diskyness is known to do. This work also illustrates how the accurate construction, and subtraction, of a model from a galaxy image facilitates the identification and recovery of overlapping sources such as globular clusters and the optical counterparts of X-ray sources.
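The geometric idea, perturbing the elliptical radius with Fourier harmonics expressed in the eccentric anomaly ψ rather than the polar angle, can be sketched as follows. Amplitudes and conventions here are illustrative, not Isofit's exact parameterization:

```python
import numpy as np

def isophote_xy(a, b, harmonics=None, n_pts=360):
    """Sample a quasi-elliptical isophote whose radial Fourier perturbations
    are functions of the eccentric anomaly psi, not the polar angle."""
    psi = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
    x, y = a * np.cos(psi), b * np.sin(psi)        # pure ellipse
    r = np.hypot(x, y)                             # elliptical radius at each psi
    theta = np.arctan2(y, x)                       # polar angle of each sample
    for order, (an, bn) in (harmonics or {}).items():
        r = r + an * np.cos(order * psi) + bn * np.sin(order * psi)
    return r * np.cos(theta), r * np.sin(theta)

# A boxy isophote: negative 4th-order cosine amplitude (illustrative values).
xb, yb = isophote_xy(a=10.0, b=6.0, harmonics={4: (-0.4, 0.0)})
```

With no harmonics the function returns an exact ellipse; the n = 4 term pinches the major-axis ends and bulges the diagonals, which is the boxy shape the polar-angle parameterization describes poorly.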
Quantum corrections to Schwarzschild black hole
Energy Technology Data Exchange (ETDEWEB)
Calmet, Xavier; El-Menoufi, Basem Kamal [University of Sussex, Department of Physics and Astronomy, Brighton (United Kingdom)
2017-04-15
Using effective field theory techniques, we compute quantum corrections to spherically symmetric solutions of Einstein's gravity and focus in particular on the Schwarzschild black hole. Quantum modifications are covariantly encoded in a non-local effective action. We work to quadratic order in curvatures simultaneously taking local and non-local corrections into account. Looking for solutions perturbatively close to that of classical general relativity, we find that an eternal Schwarzschild black hole remains a solution and receives no quantum corrections up to this order in the curvature expansion. In contrast, the field of a massive star receives corrections which are fully determined by the effective field theory. (orig.)
Towards Compensation Correctness in Interactive Systems
Vaz, Cátia; Ferreira, Carla
One fundamental idea of service-oriented computing is that applications should be developed by composing already available services. Due to the long-running nature of service interactions, a main challenge in service composition is ensuring correctness of failure recovery. In this paper, we use a process calculus suitable for modelling long-running transactions with a recovery mechanism based on compensations. Within this setting, we discuss and formally state correctness criteria for compositions of compensable processes, assuming that each process is correct with respect to failure recovery. Under our theory, we formally interpret self-healing compositions, which can detect and recover from failures, as correct compositions of compensable processes.
Class action litigation in correctional psychiatry.
Metzner, Jeffrey L
2002-01-01
Class action litigation has been instrumental in jail and prison reform during the past two decades. Correctional mental health systems have significantly benefited from such litigation. Forensic psychiatrists have been crucial in the litigation process and the subsequent evolution of correctional mental health care systems. This article summarizes information concerning basic demographics of correctional populations and costs of correctional health care and provides a brief history of such litigation. The role of psychiatric experts, with particular reference to standards of care, is described. Specifically discussed are issues relevant to suicide prevention, the prevalence of mentally ill inmates in supermax prisons, and discharge planning.
Correctional Practitioners on Reentry: A Missed Perspective
Directory of Open Access Journals (Sweden)
Elaine Gunnison
2015-06-01
Much of the literature on reentry of formerly incarcerated individuals revolves around discussions of failures they incur during reintegration, or the identification of needs and challenges that they have during reentry from the perspective of community corrections officers. The present research fills a gap in the reentry literature by examining the needs and challenges of formerly incarcerated individuals, and what makes for reentry success, from the perspective of correctional practitioners (i.e., wardens and non-wardens). The views of correctional practitioners are important for understanding the level of organizational commitment to reentry and the ways in which social distance between correctional professionals and their clients may impact reentry success. This research reports on the results of an email survey distributed to a national sample of correctional officials listed in the American Correctional Association 2012 Directory. Specifically, correctional officials were asked to report on needs and challenges facing formerly incarcerated individuals, define success, identify factors related to successful reentry, recount success stories, and report what could be done to assist them in successful outcomes. Housing and employment were raised by wardens and corrections officials as important needs for successful reentry. Corrections officials adopted organizational and systems perspectives in their responses and had differing opinions about social distance. Policy implications are presented.
Evaluation of liver fat in the presence of iron with MRI using T2* correction: a clinical approach.
Henninger, Benjamin; Kremser, Christian; Rauch, Stefan; Eder, Robert; Judmaier, Werner; Zoller, Heinz; Michaely, Henrik; Schocke, Michael
2013-06-01
To assess magnetic resonance imaging (MRI) with conventional chemical shift-based sequences, with and without T2* correction, for the evaluation of steatosis hepatis (SH) in the presence of iron. Thirty-one patients who underwent MRI and liver biopsy because of clinically suspected diffuse liver disease were retrospectively analysed. The signal intensity (SI) was calculated in co-localised regions of interest (ROIs) using a conventional spoiled gradient-echo T1 FLASH sequence in-phase and opposed-phase (IP/OP). T2* relaxation time was recorded with a fat-saturated multi-echo gradient-echo sequence. The fat fraction (FF) was calculated with non-corrected and T2*-corrected SIs. Results were correlated with liver biopsy. There was a significant difference between non-corrected and T2*-corrected FF in patients with SH and concomitant hepatic iron overload (HIO). Using 5% as a threshold resulted in eight false negative results with uncorrected FF, whereas T2*-corrected FF led to true positive results in 5/8 patients. ROC analysis yielded three threshold values (8.97%, 5.3% and 3.92%) for T2*-corrected FF, with accuracy 84%, sensitivity 83-91% and specificity 63-88%. FF with T2* correction is accurate for the diagnosis of hepatic fat in the presence of HIO. The findings of our study suggest the use of IP/OP imaging in combination with T2* correction. • Magnetic resonance helps quantify both iron and fat content within the liver • T2* correction helps to predict the correct diagnosis of steatosis hepatis • "Fat fraction" from T2*-corrected chemical shift-based sequences accurately quantifies hepatic fat • "Fat fraction" without T2* correction underestimates hepatic fat with iron overload.
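The underlying arithmetic is simple to sketch. With a short T2* (iron overload) the in-phase echo, acquired at the later TE, decays more than the opposed-phase echo, so the uncorrected two-point fat fraction is biased low; undoing the exp(-TE/T2*) decay restores it. The echo times and single-T2* signal model below are illustrative assumptions, not the study's protocol:

```python
import math

def fat_fraction(s_ip, s_op, te_ip, te_op, t2star=None):
    """Two-point Dixon fat fraction; if t2star (same units as TE) is given,
    both signals are first corrected for T2* decay. Illustrative sketch."""
    if t2star is not None:
        s_ip = s_ip * math.exp(te_ip / t2star)   # undo exp(-TE/T2*) decay
        s_op = s_op * math.exp(te_op / t2star)
    return (s_ip - s_op) / (2.0 * s_ip)

# Synthetic liver voxel: 10% fat, short T2* (4 ms) mimicking iron overload.
water, fat, t2s = 0.9, 0.1, 4.0
te_op, te_ip = 2.3, 4.6                          # ms, typical 1.5 T echo times
s_ip = (water + fat) * math.exp(-te_ip / t2s)    # in-phase: water + fat
s_op = (water - fat) * math.exp(-te_op / t2s)    # opposed-phase: water - fat

ff_uncorrected = fat_fraction(s_ip, s_op, te_ip, te_op)
ff_corrected = fat_fraction(s_ip, s_op, te_ip, te_op, t2star=t2s)
```

In this toy voxel the corrected FF recovers the true 10% exactly, while the uncorrected FF comes out far lower, mirroring the false negatives reported above.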
Atmospheric Correction Inter-comparison Exercise (ACIX)
Vermote, E.; Doxani, G.; Gascon, F.; Roger, J. C.; Skakun, S.
2017-12-01
The free and open data access policy for Landsat-8 (L-8) and Sentinel-2 (S-2) satellite imagery has encouraged the development of atmospheric correction (AC) approaches for generating Bottom-of-Atmosphere (BOA) products. Several entities have started to generate (or plan to generate in the short term) BOA reflectance products at global scale for the L-8 and S-2 missions. To this end, the European Space Agency (ESA) and the National Aeronautics and Space Administration (NASA) have initiated an exercise on the inter-comparison of the available AC processors. The results of the exercise are expected to point out the strengths and weaknesses, as well as the commonalities and discrepancies, of the various AC processors, in order to suggest and define ways for their further improvement. In particular, 11 atmospheric processors from five different countries participate in ACIX with the aim of inter-comparing their performance when applied to L-8 and S-2 data. All the processors should be operational without requiring parametrization when applied to different areas. A protocol describing in detail the inter-comparison metrics and the test dataset based on the AERONET sites was agreed unanimously during the 1st ACIX workshop in June 2016. In particular, a basic and an advanced run of each processor were requested in the frame of ACIX, with the aim of drawing robust and reliable conclusions on the processors' performance. The protocol also describes the comparison metrics for the aerosol optical thickness and water vapour products of the processors against the corresponding AERONET measurements. Moreover, concerning the surface reflectances, the inter-comparison among the processors is defined, as well as the comparison with the MODIS surface reflectance and with a reference surface reflectance product. Such a reference product will be obtained using the AERONET characterization of the aerosol (size distribution and refractive indices) and an accurate radiative transfer code.
International Nuclear Information System (INIS)
Dinpajooh, Mohammadhasan; Bai, Peng; Allan, Douglas A.; Siepmann, J. Ilja
2015-01-01
Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor–liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region, varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields T_c = 1.3128 ± 0.0016, ρ_c = 0.316 ± 0.004, and p_c = 0.1274 ± 0.0013, in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρ_t ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using r_cut = 3.5σ yield T_c and p_c that are higher by 0.2% and 1.4% than simulations with r_cut = 5σ and 8σ, but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that r_cut = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard-core square-well particles with various
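The analytical tail corrections at issue are standard textbook expressions; a quick sketch of the Allen-Tildesley energy tail shows how it shrinks as the cutoff grows from 3.5σ to 8σ:

```python
import math

def lj_tail_energy(rho, rc, sigma=1.0, eps=1.0):
    """Analytical tail correction to the potential energy per particle for a
    Lennard-Jones fluid truncated at rc, assuming g(r) = 1 beyond the cutoff:
    u_tail = (8/3) pi rho eps sigma^3 [ (1/3)(sigma/rc)^9 - (sigma/rc)^3 ]."""
    sr3 = (sigma / rc) ** 3
    return (8.0 / 3.0) * math.pi * rho * eps * sigma**3 * (sr3**3 / 3.0 - sr3)

# Near-critical density in reduced units, cutoffs from the study above.
tails = {rc: lj_tail_energy(rho=0.316, rc=rc) for rc in (3.5, 5.0, 8.0)}
```

The tail is attractive (negative) and falls off roughly as rc^-3, so the 3.5σ correction is an order of magnitude larger than the 8σ one; for a truncated and shifted potential this missing energy is never added back, which is consistent with the stronger cutoff sensitivity reported above.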
Iterative CT shading correction with no prior information
Wu, Pengwei; Sun, Xiaonan; Hu, Hongjie; Mao, Tingyu; Zhao, Wei; Sheng, Ke; Cheung, Alice A.; Niu, Tianye
2015-11-01
Shading artifacts in CT images are caused by scatter contamination, the beam-hardening effect and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge of the relatively uniform CT number distribution within one tissue component. The CT image is first segmented to construct a template image in which each structure is filled with the same CT number of a specific tissue type. Then, by subtracting the ideal template from the CT image, the residual image from various error sources is generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes shading artifacts. A compensation map is reconstructed from the filtered line integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan 600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast objects are faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is assisted only by general anatomical information, without relying on prior knowledge. The proposed method is thus practical.
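A heavily simplified 1D sketch of the iterative scheme: segment, build an ideal template, low-pass filter the residual, subtract, repeat. Unlike the actual method, this filters in the image domain rather than filtering the forward-projected line integrals and reconstructing with FDK, and it assumes the two tissue CT numbers are known:

```python
import numpy as np

def lowpass(x, keep=4):
    """Keep only the lowest `keep` Fourier modes (crude low-pass filter)."""
    X = np.fft.rfft(x)
    X[keep:] = 0.0
    return np.fft.irfft(X, len(x))

n = 256
truth = np.where(np.arange(n) % 64 < 32, 1000.0, 200.0)     # two tissue classes
shading = 150.0 * np.sin(2.0 * np.pi * np.arange(n) / n)    # smooth artifact
img = truth + shading                                       # shaded "CT image"

for _ in range(5):
    template = np.where(img > 600.0, 1000.0, 200.0)  # segment -> ideal template
    residual = img - template                        # shading + segmentation error
    img = img - lowpass(residual)                    # remove low-frequency error
```

Because the correction keeps only the low-frequency part of the residual, genuine sharp anatomy survives even where the segmentation is imperfect, which is the property that lets the real method iterate safely.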
More accurate theory for Bose-Einstein condensation fraction
International Nuclear Information System (INIS)
Biswas, Shyamal
2008-01-01
Bose-Einstein statistics is derived in the thermodynamic limit, when the ratio of system size to thermal de Broglie wavelength goes to infinity. However, in typical experiments on Bose-Einstein condensation of harmonically trapped alkali-atom gases, this ratio near the condensation temperature (T_0) is 30-50, and at ultralow temperatures well below T_0 it becomes comparable to 1. We argue that finite size as well as ultralow temperature induce corrections to Bose-Einstein statistics. From the corrected statistics we plot the condensate fraction versus temperature; this theoretical curve agrees well with the experimental data [A. Griesmaier et al., Phys. Rev. Lett. 94 (2005) 160401].
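For context, the textbook leading finite-size correction for a harmonically trapped ideal Bose gas (the N^(-1/3) term) already shows how finite particle number depresses the condensate fraction; this is the standard result, not the corrected statistics derived in the paper:

```python
import numpy as np

ZETA2, ZETA3 = np.pi**2 / 6, 1.2020569  # Riemann zeta(2) and zeta(3)

def condensate_fraction(t, N):
    """Condensate fraction versus reduced temperature t = T/T_0 for a 3-D
    harmonically trapped ideal Bose gas of N atoms, including the leading
    finite-size correction ~ N**(-1/3) (standard textbook expression)."""
    f = 1.0 - t**3 - (3.0 * ZETA2 / (2.0 * ZETA3**(2.0 / 3.0))) * t**2 * N**(-1.0 / 3.0)
    return np.clip(f, 0.0, 1.0)

# thermodynamic limit versus a small trapped cloud at T = 0.5 T_0
print(condensate_fraction(0.5, 1e24))   # ~0.875, i.e. 1 - (1/2)**3
print(condensate_fraction(0.5, 1e4))    # visibly lower for N = 10^4
```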
International Nuclear Information System (INIS)
Middleton, Mark; Medwell, Steve; Wong, Jacky; Lynton-Moll, Mary; Rolfo, Aldo; See, Andrew; Joon, Michael Lim
2006-01-01
Given the onset of dose escalation and increased planning target volume (PTV) conformity, the requirement for accurate field placement has also increased. This study compares and contrasts a combination offline/online electronic portal imaging (EPI) device correction protocol with a complete online correction protocol, and assesses their relative effectiveness in managing set-up error. Field placement data were collected on patients receiving radical radiotherapy to the prostate. Ten patients were on an initial combination offline/online correction protocol, followed by another 10 patients on a complete online correction protocol. Analysis of 1480 portal images from 20 patients was carried out, illustrating that a combination offline/online approach can be very effective in dealing with the systematic component of set-up error, but it is only when a complete online correction protocol is employed that both systematic and random set-up errors can be managed. EPI protocols have evolved considerably, and online corrections are a highly effective tool in the quest for more accurate field placement. This study discusses the clinical workload impact issues that need to be addressed in order for an online correction protocol to be employed, and addresses many of the practical issues that need to be resolved. Management of set-up error is paramount when seeking to dose escalate, and only an online correction protocol can manage both components of set-up error. Both systematic and random errors are important and can be effectively and efficiently managed
Accurate lithography simulation model based on convolutional neural networks
Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki
2017-07-01
Lithography simulation is an essential technique for today's semiconductor manufacturing process. To simulate an entire chip in realistic time, a compact resist model is commonly used; such a model is built for fast calculation. Obtaining an accurate compact resist model requires fitting a complicated non-linear model function, but it is difficult to choose an appropriate function manually because there are many options. This paper proposes a new compact resist model using convolutional neural networks (CNNs), a deep learning technique. The CNN model makes it possible to determine an appropriate model function automatically and achieve accurate simulation. Experimental results show that the CNN model can reduce CD prediction errors by 70% compared with the conventional model.
Fast and accurate edge orientation processing during object manipulation
Flanagan, J Randall; Johansson, Roland S
2018-01-01
Quickly and accurately extracting information about a touched object’s orientation is a critical aspect of dexterous object manipulation. However, the speed and acuity of tactile edge orientation processing with respect to the fingertips as reported in previous perceptual studies appear inadequate in these respects. Here we directly establish the tactile system’s capacity to process edge-orientation information during dexterous manipulation. Participants extracted tactile information about edge orientation very quickly, using it within 200 ms of first touching the object. Participants were also strikingly accurate. With edges spanning the entire fingertip, edge-orientation resolution was better than 3° in our object manipulation task, which is several times better than reported in previous perceptual studies. Performance remained impressive even with edges as short as 2 mm, consistent with our ability to precisely manipulate very small objects. Taken together, our results radically redefine the spatial processing capacity of the tactile system. PMID:29611804
A Highly Accurate Approach for Aeroelastic System with Hysteresis Nonlinearity
Directory of Open Access Journals (Sweden)
C. C. Cui
2017-01-01
We propose an accurate approach, based on the precise integration method, to solve the aeroelastic system of an airfoil with pitch hysteresis. A major procedure for achieving high precision is to design a predictor-corrector algorithm, which enables accurate determination of the switching points resulting from the hysteresis. Numerical examples show that the results obtained by the presented method are in excellent agreement with exact solutions. In addition, the high accuracy is maintained as the time step increases within a reasonable range. It is also found that the Runge-Kutta method may sometimes provide quite different and even fallacious results, even when its step length is much smaller than that adopted in the presented method. With such high computational accuracy, the presented method could be applicable in dynamical systems with hysteresis nonlinearities.
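The predictor-corrector idea for locating switching points can be illustrated with a generic sketch: take a full step (predictor) and, if the switching function changes sign inside the step, bisect on the substep length (corrector) until the switching point is pinned down. The RK4 integrator, the test system, and the tolerances here are illustrative, not the paper's precise integration method:

```python
import math

def rk4_step(f, t, x, h):
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def locate_switch(f, g, t, x, h, tol=1e-10):
    """Integrate x' = f(t, x); when the switching function g changes sign
    within a step (predictor), bisect on the substep length (corrector)
    to land on the switching point."""
    while True:
        x_new = rk4_step(f, t, x, h)
        if g(x) * g(x_new) < 0:            # switch inside this step
            lo, hi = 0.0, h
            for _ in range(100):           # bisection on the substep length
                mid = 0.5 * (lo + hi)
                if g(x) * g(rk4_step(f, t, x, mid)) < 0:
                    hi = mid
                else:
                    lo = mid
                if hi - lo < tol:
                    break
            return t + 0.5 * (lo + hi)
        t, x = t + h, x_new

# x' = -sin(t), x(0) = 1  =>  x(t) = cos(t), which crosses 0 at t = pi/2
t_switch = locate_switch(lambda t, x: -math.sin(t), lambda x: x, 0.0, 1.0, 0.1)
print(t_switch)  # close to pi/2 ~ 1.5707963
```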
An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance
Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun
2015-01-01
Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314
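As a rough sketch of what link-correlation estimation means in practice, the snippet below computes an empirical correlation from two receivers' binary packet-reception traces and blends a long-term and a short-term estimate. The blending weight and window size are invented for illustration and are not LACE's actual formula:

```python
import numpy as np

def reception_correlation(a, b):
    """Empirical link correlation between two receivers' packet-reception
    traces (1 = received): P(both) / (P(a) * P(b)); 1.0 means independent,
    above 1.0 means positively correlated."""
    pa, pb = a.mean(), b.mean()
    pab = (a & b).mean()
    return pab / (pa * pb) if pa * pb > 0 else 0.0

def lace_style_estimate(a, b, short_window=32, alpha=0.5):
    """Blend long-term and recent short-term behaviour (the weighting here
    is illustrative, not the exact LACE formulation)."""
    long_term = reception_correlation(a, b)
    short_term = reception_correlation(a[-short_window:], b[-short_window:])
    return alpha * long_term + (1 - alpha) * short_term

# two receivers hearing the same broadcasts through a shared channel effect
rng = np.random.default_rng(0)
shared = rng.random(1000) < 0.8
a = (shared & (rng.random(1000) < 0.9)).astype(int)
b = (shared & (rng.random(1000) < 0.9)).astype(int)
print(reception_correlation(a, b))   # above 1: losses are correlated
print(lace_style_estimate(a, b))
```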
Accurate phylogenetic tree reconstruction from quartets: a heuristic approach.
Reaz, Rezwana; Bayzid, Md Shamsuzzoha; Rahman, M Sohel
2014-01-01
Supertree methods construct trees on a set of taxa (species) by combining many smaller trees on overlapping subsets of the entire set of taxa. A 'quartet' is an unrooted tree over 4 taxa, hence quartet-based supertree methods combine many 4-taxon unrooted trees into a single and coherent tree over the complete set of taxa. Quartet-based phylogeny reconstruction methods have received considerable attention in recent years. An accurate and efficient quartet-based method might be competitive with the current best phylogenetic tree reconstruction methods (such as maximum likelihood or Bayesian MCMC analyses), without being as computationally intensive. In this paper, we present a novel and highly accurate quartet-based phylogenetic tree reconstruction method. We performed an extensive experimental study to evaluate the accuracy and scalability of our approach on both simulated and biological datasets.
Improved fingercode alignment for accurate and compact fingerprint recognition
CSIR Research Space (South Africa)
Brown, Dane
2016-05-01
Improved FingerCode alignment for accurate and compact fingerprint recognition. Dane Brown (Department of Computer Science, Rhodes University, Grahamstown, South Africa; Council for Scientific and Industrial Research, Modelling and Digital Sciences, Pretoria) and Karen Bradshaw (Department of Computer Science, Rhodes University)... The experimental analysis and results are discussed in Section IV. Section V concludes the paper. II. RELATED STUDIES. FingerCode [1] uses circular tessellation of filtered fingerprint images centered at the reference point, which results in a circular ROI...
D-BRAIN : Anatomically accurate simulated diffusion MRI brain data
Perrone, Daniele; Jeurissen, Ben; Aelterman, Jan; Roine, Timo; Sijbers, Jan; Pizurica, Aleksandra; Leemans, Alexander; Philips, Wilfried
2016-01-01
Diffusion Weighted (DW) MRI allows for the non-invasive study of water diffusion inside living tissues. As such, it is useful for the investigation of human brain white matter (WM) connectivity in vivo through fiber tractography (FT) algorithms. Many DW-MRI tailored restoration techniques and FT algorithms have been developed. However, it is not clear how accurately these methods reproduce the WM bundle characteristics in real-world conditions, such as in the presence of noise, partial volume...
Accurate Online Full Charge Capacity Modeling of Smartphone Batteries
Hoque, Mohammad A.; Siekkinen, Matti; Koo, Jonghoe; Tarkoma, Sasu
2016-01-01
Full charge capacity (FCC) refers to the amount of energy a battery can hold. It is the fundamental property of smartphone batteries that diminishes as the battery ages and is charged/discharged. We investigate the behavior of smartphone batteries while charging and demonstrate that the battery voltage and charging rate information can together characterize the FCC of a battery. We propose a new method for accurately estimating FCC without exposing low-level system details or introducing new ...
Multigrid time-accurate integration of Navier-Stokes equations
Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.
1993-01-01
Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.
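The dual-time-stepping idea behind this approach can be sketched on a scalar ODE: take an implicit (backward Euler) physical step and solve its nonlinear system by marching a pseudo-time residual to zero. Plain explicit pseudo-iterations stand in here for the paper's multigrid-accelerated Runge-Kutta scheme with local time stepping and residual smoothing:

```python
def implicit_step_dual_time(f, y_n, dt, d_tau=0.01, n_pseudo=2000):
    """One backward-Euler physical step: drive the pseudo-time residual
    R(y) = (y - y_n)/dt - f(y) to zero by explicit pseudo-time marching
    (a simple stand-in for multigrid-accelerated Runge-Kutta iterations)."""
    y = y_n
    for _ in range(n_pseudo):
        y -= d_tau * ((y - y_n) / dt - f(y))
    return y

# stiff-ish model problem y' = -lam * y with a large physical time step
lam, dt = 2.0, 0.5
y1 = implicit_step_dual_time(lambda y: -lam * y, 1.0, dt)
print(y1)  # converges to the backward-Euler value 1 / (1 + lam * dt) = 0.5
```

Because the physical step is implicit, dt is not limited by the explicit stability bound; only the inner pseudo-time iteration is, and that is exactly where acceleration techniques pay off.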
Fast, Accurate Memory Architecture Simulation Technique Using Memory Access Characteristics
小野, 貴継; 井上, 弘士; 村上, 和彰
2007-01-01
This paper proposes a fast and accurate memory architecture simulation technique. Memory architecture design commonly begins with trace-driven simulation; however, as the design space expands, evaluation time increases. Fast simulation can be achieved by reducing the trace size, but this reduces simulation accuracy. Our approach reduces the simulation time while maintaining the accuracy of the simulation results. In order to evaluate validity of proposed techniq...
Discrete sensors distribution for accurate plantar pressure analyses.
Claverie, Laetitia; Ille, Anne; Moretto, Pierre
2016-12-01
The aim of this study was to determine the distribution of discrete sensors under the footprint for accurate plantar pressure analyses. For this purpose, two different sensor layouts were tested and compared to determine which was the more accurate for monitoring plantar pressure with wireless devices in research and/or clinical practice. Ten healthy volunteers participated in the study (age range: 23-58 years). The barycenter of pressures (BoP) determined from the plantar pressure system (W-inshoe®) was compared to the center of pressures (CoP) determined from a force platform (AMTI) in the medial-lateral (ML) and anterior-posterior (AP) directions. Then, the vertical ground reaction force (vGRF) obtained from both W-inshoe® and the force platform was compared for both layouts for each subject. The BoP and vGRF determined from the plantar pressure system data showed good Spearman correlation coefficients (SCC) with those determined from the force platform data, notably for the second sensor layout (ML SCC=0.95; AP SCC=0.99; vGRF SCC=0.91). The study demonstrates that an adjusted placement of removable sensors is key to accurate plantar pressure analyses. These results are promising for plantar pressure recording outside clinical or laboratory settings, for long-term monitoring, real-time feedback, or for any activity requiring a low-cost system. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
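The barycenter of pressures from a discrete sensor layout is simply the force-weighted mean of the sensor positions; a minimal sketch, with the layout and loads invented for illustration:

```python
import numpy as np

def barycenter_of_pressure(xy, forces):
    """Barycenter of pressures (BoP): force-weighted mean of sensor positions."""
    xy = np.asarray(xy, dtype=float)
    forces = np.asarray(forces, dtype=float)
    return (xy * forces[:, None]).sum(axis=0) / forces.sum()

# hypothetical 4-sensor layout (metres) under heel/forefoot loading (newtons)
layout = [(0.00, 0.00), (0.05, 0.00), (0.00, 0.20), (0.05, 0.20)]
bop = barycenter_of_pressure(layout, [100.0, 100.0, 300.0, 300.0])
print(bop)  # x = 0.025 m, y = 0.15 m (pulled toward the loaded forefoot)
```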
Can blind persons accurately assess body size from the voice?
Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka
2016-04-01
Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. © 2016 The Author(s).
An accurate determination of the flux within a slab
International Nuclear Information System (INIS)
Ganapol, B.D.; Lapenta, G.
1993-01-01
During the past decade, several articles have been written concerning accurate solutions to the monoenergetic neutron transport equation in infinite and semi-infinite geometries. The numerical formulations found in these articles were based primarily on the extensive theoretical investigations performed by the "transport greats" such as Chandrasekhar, Busbridge, Sobolev, and Ivanov, to name a few. The development of numerical solutions in infinite and semi-infinite geometries represents an example of how mathematical transport theory can be utilized to provide highly accurate and efficient numerical transport solutions. These solutions, or analytical benchmarks, are useful as "industry standards," which provide guidance to code developers and promote learning in the classroom. The high accuracy of these benchmarks is directly attributable to the rapid advancement of the state of computing and computational methods. Transport calculations that were beyond the capability of the "supercomputers" of just a few years ago are now possible at one's desk. In this paper, we again build upon the past to tackle the slab problem, which is of the next level of difficulty in comparison to infinite media problems. The formulation is based on the monoenergetic Green's function, which is the most fundamental transport solution. This method of solution requires a fast and accurate evaluation of the Green's function, which, with today's computational power, is now readily available
Can Measured Synergy Excitations Accurately Construct Unmeasured Muscle Excitations?
Bianco, Nicholas A; Patten, Carolynn; Fregly, Benjamin J
2018-01-01
Accurate prediction of muscle and joint contact forces during human movement could improve treatment planning for disorders such as osteoarthritis, stroke, Parkinson's disease, and cerebral palsy. Recent studies suggest that muscle synergies, a low-dimensional representation of a large set of muscle electromyographic (EMG) signals (henceforth called "muscle excitations"), may reduce the redundancy of muscle excitation solutions predicted by optimization methods. This study explores the feasibility of using muscle synergy information extracted from eight muscle EMG signals (henceforth called "included" muscle excitations) to accurately construct muscle excitations from up to 16 additional EMG signals (henceforth called "excluded" muscle excitations). Using treadmill walking data collected at multiple speeds from two subjects (one healthy, one poststroke), we performed muscle synergy analysis on all possible subsets of eight included muscle excitations and evaluated how well the calculated time-varying synergy excitations could construct the remaining excluded muscle excitations (henceforth called "synergy extrapolation"). We found that some, but not all, eight-muscle subsets yielded synergy excitations that achieved >90% extrapolation variance accounted for (VAF). Using the top 10% of subsets, we developed muscle selection heuristics to identify included muscle combinations whose synergy excitations achieved high extrapolation accuracy. For 3, 4, and 5 synergies, these heuristics yielded extrapolation VAF values approximately 5% lower than corresponding reconstruction VAF values for each associated eight-muscle subset. These results suggest that synergy excitations obtained from experimentally measured muscle excitations can accurately construct unmeasured muscle excitations, which could help limit muscle excitations predicted by muscle force optimizations.
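The synergy-extrapolation procedure can be sketched with non-negative matrix factorization (NMF) on synthetic data: factor the included muscles into synergy excitations, then fit non-negative weights for the excluded muscles against those excitations and score the fit with VAF. The multiplicative-update NMF and the synthetic bump-shaped synergies are illustrative stand-ins for the study's EMG processing:

```python
import numpy as np

rng = np.random.default_rng(1)
EPS = 1e-9

def nmf(E, k, n_iter=1000):
    """Multiplicative-update NMF: E (muscles x time) ~= W @ H, all non-negative."""
    W = rng.random((E.shape[0], k))
    H = rng.random((k, E.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ E) / (W.T @ W @ H + EPS)
        W *= (E @ H.T) / (W @ H @ H.T + EPS)
    return W, H

def fit_weights(E, H, n_iter=1000):
    """Non-negative weight fit of new muscles against fixed synergy excitations H."""
    W = rng.random((E.shape[0], H.shape[0]))
    for _ in range(n_iter):
        W *= (E @ H.T) / (W @ H @ H.T + EPS)
    return W

def vaf(E, E_hat):
    """Variance accounted for."""
    return 1.0 - np.sum((E - E_hat) ** 2) / np.sum(E ** 2)

# synthetic excitations: 12 muscles driven by 3 shared, bump-shaped synergies
t = np.linspace(0.0, 1.0, 200)
H_true = np.vstack([np.exp(-(((t - c) / 0.08) ** 2)) for c in (0.2, 0.5, 0.8)])
W_true = rng.random((12, 3))
E = W_true @ H_true

E_inc, E_exc = E[:8], E[8:]        # 8 "included", 4 "excluded" muscles
W_inc, H = nmf(E_inc, 3)           # synergy excitations from the included set
W_exc = fit_weights(E_exc, H)      # extrapolate the excluded muscles
print(vaf(E_exc, W_exc @ H))       # extrapolation VAF, near 1 on this clean data
```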
Is bioelectrical impedance accurate for use in large epidemiological studies?
Directory of Open Access Journals (Sweden)
Merchant Anwar T
2008-09-01
Percentage of body fat is strongly associated with the risk of several chronic diseases, but its accurate measurement is difficult. Bioelectrical impedance analysis (BIA) is a relatively simple, quick and non-invasive technique to measure body composition. It measures body fat accurately in controlled clinical conditions, but its performance in the field is inconsistent. In large epidemiologic studies, simpler surrogate techniques such as body mass index (BMI), waist circumference, and waist-hip ratio are frequently used instead of BIA to measure body fatness. We reviewed the rationale, theory, and technique of recently developed systems such as foot-to-foot (or hand-to-foot) BIA measurement, and the elements that could influence its results in large epidemiologic studies. BIA results are influenced by factors such as the environment, ethnicity, phase of menstrual cycle, and underlying medical conditions. We concluded that BIA measurements validated for specific ethnic groups, populations and conditions can accurately measure body fat in those populations, but not in others, and suggest that for large epidemiological studies with diverse populations BIA may not be the appropriate choice for body composition measurement unless specific calibration equations are developed for the different groups participating in the study.
Accurate and approximate thermal rate constants for polyatomic chemical reactions
International Nuclear Information System (INIS)
Nyman, Gunnar
2007-01-01
In favourable cases it is possible to calculate thermal rate constants for polyatomic reactions to high accuracy from first principles. Here, we discuss the use of flux correlation functions combined with the multi-configurational time-dependent Hartree (MCTDH) approach to efficiently calculate cumulative reaction probabilities and thermal rate constants for polyatomic chemical reactions. Three isotopic variants of the H2 + CH3 → CH4 + H reaction are used to illustrate the theory. There is good agreement with experimental results, although the experimental rates generally are larger than the calculated ones, which are believed to be at least as accurate as the experimental rates. Approximations allowing evaluation of the thermal rate constant above 400 K are treated. It is also noted that, for the treated reactions, transition state theory (TST) gives accurate rate constants above 500 K. TST also gives accurate results for kinetic isotope effects in cases where the mass of the transferred atom is unchanged. Due to its neglect of tunnelling, however, TST fails below 400 K if the mass of the transferred atom changes between the isotopic reactions
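The TST behaviour described above can be illustrated with the conventional Eyring expression, which contains no tunnelling correction (precisely the omission that makes TST unreliable below about 400 K for hydrogen transfer); the barrier height used here is invented for illustration:

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def tst_rate(T, dG_activation):
    """Conventional TST (Eyring) rate constant in 1/s for an activation free
    energy in J/mol; no tunnelling correction is included."""
    return (KB * T / H) * math.exp(-dG_activation / (R * T))

# illustrative 50 kJ/mol barrier: rates rise steeply with temperature
for T in (300.0, 500.0, 800.0):
    print(T, tst_rate(T, 50e3))
```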
Indexed variation graphs for efficient and accurate resistome profiling.
Rowe, Will P M; Winn, Martyn D
2018-05-14
Antimicrobial resistance remains a major threat to global health. Profiling the collective antimicrobial resistance genes within a metagenome (the "resistome") facilitates greater understanding of antimicrobial resistance gene diversity and dynamics. In turn, this can allow for gene surveillance, individualised treatment of bacterial infections and more sustainable use of antimicrobials. However, resistome profiling can be complicated by high similarity between reference genes, as well as the sheer volume of sequencing data and the complexity of analysis workflows. We have developed an efficient and accurate method for resistome profiling that addresses these complications and improves upon currently available tools. Our method combines a variation graph representation of gene sets with an LSH Forest indexing scheme to allow for fast classification of metagenomic sequence reads using similarity-search queries. Subsequent hierarchical local alignment of classified reads against graph traversals enables accurate reconstruction of full-length gene sequences using a scoring scheme. We provide our implementation, GROOT, and show it to be both faster and more accurate than a current reference-dependent tool for resistome profiling. GROOT runs on a laptop and can process a typical 2 gigabyte metagenome in 2 minutes using a single CPU. Our method is not restricted to resistome profiling and has the potential to improve current metagenomic workflows. GROOT is written in Go and is available at https://github.com/will-rowe/groot (MIT license). will.rowe@stfc.ac.uk. Supplementary data are available at Bioinformatics online.
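The similarity-search idea underlying GROOT's index can be sketched with plain MinHash signatures over k-mer sets (the LSH Forest layer and the variation-graph representation are omitted); the sequences below are invented toy data:

```python
import hashlib

def kmers(seq, k=7):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash(kmer_set, n_hashes=64):
    """MinHash signature: for each of n salted hash functions, keep the
    minimum hash over the set; the fraction of matching positions between
    two signatures estimates the Jaccard similarity of the k-mer sets."""
    return [min(int(hashlib.md5(f"{salt}:{km}".encode()).hexdigest(), 16)
                for km in kmer_set)
            for salt in range(n_hashes)]

def similarity(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

gene_a = "ATGGCGTACGTTAGCAGGATCGATCGGCTA"
gene_b = "TTTACCGGATTACCAGGTTACGATGCATGC"
read = gene_a[5:25]                 # a short read drawn from gene_a

sig_read = minhash(kmers(read))
print(similarity(sig_read, minhash(kmers(gene_a))))  # high: read came from gene_a
print(similarity(sig_read, minhash(kmers(gene_b))))  # near zero
```

Classifying a read then reduces to comparing one small signature against an index of reference signatures, which is what makes the approach fast enough for laptop-scale metagenomes.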
International Nuclear Information System (INIS)
Angelis, G I; Kyme, A Z; Ryder, W J; Fulton, R R; Meikle, S R
2014-01-01
Attenuation correction in positron emission tomography brain imaging of freely moving animals is a very challenging problem, since the torso of the animal is often within the field of view and introduces a non-negligible attenuating factor that can degrade the quantitative accuracy of the reconstructed images. In the context of unrestrained small animal imaging, estimation of the attenuation correction factors without the need for a transmission scan is highly desirable. An attractive approach that avoids the need for a transmission scan involves the generation of the hull of the animal’s head based on the reconstructed motion corrected emission images. However, this approach ignores the attenuation introduced by the animal’s torso. In this work, we propose a virtual scanner geometry which moves in synchrony with the animal’s head and discriminates between those events that traversed only the animal’s head (and therefore can be accurately compensated for attenuation) and those that might have also traversed the animal’s torso. For each recorded pose of the animal’s head a new virtual scanner geometry is defined, and therefore a new system matrix must be calculated, leading to a time-varying system matrix. This new approach was evaluated on phantom data acquired on the microPET Focus 220 scanner using a custom-made phantom and step-wise motion. Results showed that when the animal’s torso is within the FOV and not appropriately accounted for during attenuation correction, it can lead to a bias of up to 10%. Attenuation correction was more accurate when the virtual scanner was employed, leading to improved quantitative estimates (bias < 2%), without the need to account for the attenuation introduced by the extraneous compartment. Although the proposed method requires increased computational resources, it can provide a reliable approach towards quantitatively accurate attenuation correction for freely moving animal studies. (paper)
International Nuclear Information System (INIS)
Boehlecke, Robert
2004-01-01
The six bunkers included in CAU 204 were primarily used to monitor atmospheric testing or store munitions. The 'Corrective Action Investigation Plan (CAIP) for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada' (NNSA/NV, 2002a) provides information relating to the history, planning, and scope of the investigation; therefore, it will not be repeated in this CADD. This CADD identifies potential corrective action alternatives and provides a rationale for the selection of a recommended corrective action alternative for each CAS within CAU 204. The evaluation of corrective action alternatives is based on process knowledge and the results of investigative activities conducted in accordance with the CAIP (NNSA/NV, 2002a) that was approved prior to the start of the Corrective Action Investigation (CAI). Record of Technical Change (ROTC) No. 1 to the CAIP (approval pending) documents changes to the preliminary action levels (PALs) agreed to by the Nevada Division of Environmental Protection (NDEP) and DOE, National Nuclear Security Administration Nevada Site Office (NNSA/NSO). This ROTC specifically discusses the radiological PALs and their application to the findings of the CAU 204 corrective action investigation. The scope of this CADD consists of the following: (1) Develop corrective action objectives; (2) Identify corrective action alternative screening criteria; (3) Develop corrective action alternatives; (4) Perform detailed and comparative evaluations of corrective action alternatives in relation to corrective action objectives and screening criteria; and (5) Recommend and justify a preferred corrective action alternative for each CAS within CAU 204
Study of lung density corrections in a clinical trial (RTOG 88-08)
International Nuclear Information System (INIS)
Orton, Colin G.; Chungbin, Suzanne; Klein, Eric E.; Gillin, Michael T.; Schultheiss, Timothy E.; Sause, William T.
1998-01-01
Purpose: To investigate the effect of lung density corrections on the dose delivered to lung cancer radiotherapy patients in a multi-institutional clinical trial, and to determine whether commonly available density-correction algorithms are sufficient to improve the accuracy and precision of dose calculation in the clinical trials setting. Methods and Materials: A benchmark problem was designed (and a corresponding phantom fabricated) to test density-correction algorithms under standard conditions for photon beams ranging from 60Co to 24 MV. Point doses and isodose distributions submitted for a Phase III trial in regionally advanced, unresectable non-small-cell lung cancer (Radiation Therapy Oncology Group 88-08) were calculated with and without density correction. Tumor doses were analyzed for 322 patients and 1236 separate fields. Results: For the benchmark problem studied here, the overall correction factor for a four-field treatment varied significantly with energy, ranging from 1.14 (60Co) to 1.05 (24 MV) for measured doses, or 1.17 (60Co) to 1.05 (24 MV) for doses calculated by conventional density-correction algorithms. For the patient data, overall correction factors (calculated) ranged from 0.95 to 1.28, with a mean of 1.05 and distributional standard deviation of 0.05. The largest corrections were for lateral fields, with a mean correction factor of 1.11 and standard deviation of 0.08. Conclusions: Lung inhomogeneities can lead to significant variations in delivered dose between patients treated in a clinical trial. Existing density-correction algorithms are accurate enough to significantly reduce these variations
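A toy version of a density correction scales the dose by the difference between geometric and water-equivalent (radiological) depth through an effective attenuation coefficient. The path segments and the coefficient below are illustrative only, not the trial's benchmark algorithm:

```python
import math

def radiological_depth(segments):
    """Water-equivalent depth: sum of thickness x relative density (mm)."""
    return sum(t * rho for t, rho in segments)

def density_correction_factor(segments, mu=0.006):
    """Ratio of density-corrected to uncorrected dose for a point beyond the
    inhomogeneity, via effective-attenuation scaling; mu (1/mm, roughly
    60Co-like) and the geometry are assumptions for illustration."""
    d_geom = sum(t for t, _ in segments)
    return math.exp(mu * (d_geom - radiological_depth(segments)))

# 40 mm chest wall (rho 1.0), 100 mm lung (rho 0.25), 20 mm tumour (rho 1.0)
beam_path = [(40.0, 1.0), (100.0, 0.25), (20.0, 1.0)]
print(density_correction_factor(beam_path))  # exp(0.006 * 75), about 1.57
```

Low-density lung shortens the radiological depth relative to the geometric depth, so the corrected dose beyond it is higher than a water-only calculation predicts, which is the direction of the trial's correction factors.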
International Nuclear Information System (INIS)
Piepsz, Amy; Tondeur, Marianne; Ham, Hamphrey
2008-01-01
51Cr-ethylenediaminetetraacetic acid (51Cr-EDTA) clearance is nowadays considered an accurate and reproducible method for measuring glomerular filtration rate (GFR) in children. Normal values as a function of age, corrected for body surface area, have recently been updated. However, much criticism has been expressed about the validity of the body surface area correction. The aim of the present paper was to present the normal GFR values, not corrected for body surface area, with the associated percentile curves. For that purpose, the same patients as in the previous paper were selected, namely those with no recent urinary tract infection, a normal left-to-right 99mTc-MAG3 uptake ratio and a normal kidney morphology on the early parenchymal images. A single blood sample method was used for 51Cr-EDTA clearance measurement. Clearance values, not corrected for body surface area, increased progressively up to adolescence. The percentile curves were determined and allow one, for a single patient, to accurately estimate the level of non-corrected clearance and its evolution with time, whatever the age. (orig.)
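The body-surface-area correction the paper questions is conventionally done by normalising the measured clearance to 1.73 m²; a minimal sketch using the Du Bois formula, with hypothetical patient values:

```python
def bsa_dubois(height_cm, weight_kg):
    """Du Bois & Du Bois body surface area (m^2)."""
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

def gfr_bsa_corrected(gfr_raw, height_cm, weight_kg):
    """Normalise a raw clearance (ml/min) to the conventional 1.73 m^2; the
    paper argues for reporting the uncorrected value against age-specific
    percentile curves instead."""
    return gfr_raw * 1.73 / bsa_dubois(height_cm, weight_kg)

# hypothetical child: 115 cm, 20 kg, measured 51Cr-EDTA clearance 70 ml/min
print(bsa_dubois(115, 20))             # about 0.80 m^2
print(gfr_bsa_corrected(70, 115, 20))  # about 151 ml/min per 1.73 m^2
```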
Improving transcriptome assembly through error correction of high-throughput sequence reads
Directory of Open Access Journals (Sweden)
Matthew D. MacManes
2013-07-01
The study of functional genomics, particularly in non-model organisms, has been dramatically improved over the last few years by the use of transcriptomes and RNAseq. While these studies are potentially extremely powerful, a computationally intensive procedure, the de novo construction of a reference transcriptome, must be completed as a prerequisite to further analyses. An accurate reference is critically important, as all downstream steps, including estimating transcript abundance, depend on its construction. Though a substantial amount of research has been done on assembly, only recently have the pre-assembly procedures been studied in detail. Specifically, several stand-alone error correction modules have been reported on and, while they have been shown to be effective in reducing errors at the level of sequencing reads, how error correction impacts assembly accuracy is largely unknown. Here, we show, using simulated and empirical datasets, that applying error correction to sequencing reads has significant positive effects on assembly accuracy, and should be applied to all datasets. A complete collection of commands which will allow for the production of Reptile-corrected reads is available at https://github.com/macmanes/error_correction/tree/master/scripts and as File S1.
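A greatly simplified spectrum-based corrector in the spirit of Reptile shows why read error correction helps downstream assembly: rare k-mers flag likely sequencing errors, and substitutions that restore "solid" (frequent) k-mers repair them. The reads, k, and cutoff below are toy values, not Reptile's actual parameters:

```python
from collections import Counter

def kmer_counts(reads, k):
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def correct_read(read, counts, k, cutoff=2):
    """Scan k-mers left to right; when one is rare (< cutoff), try each
    substitution at its last base and keep the first that yields a solid
    k-mer (a heavily simplified, Reptile-like heuristic)."""
    read = list(read)
    for i in range(len(read) - k + 1):
        km = "".join(read[i:i + k])
        if counts[km] >= cutoff:
            continue
        for base in "ACGT":
            if counts[km[:-1] + base] >= cutoff:
                read[i + k - 1] = base
                break
    return "".join(read)

# 30 identical error-free reads plus one read with a single substitution
true_seq = "ACGTACGGTTCAGCGTTAACGGATCCGTTA"
reads = [true_seq] * 30
bad = true_seq[:12] + "T" + true_seq[13:]
print(correct_read(bad, kmer_counts(reads, 7), 7) == true_seq)  # True
```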
7 CFR 1730.25 - Corrective action.
2010-01-01
... 7 Agriculture 11 2010-01-01 2010-01-01 false Corrective action. 1730.25 Section 1730.25... AGRICULTURE ELECTRIC SYSTEM OPERATIONS AND MAINTENANCE Operations and Maintenance Requirements § 1730.25 Corrective action. (a) For any items on the RUS Form 300 rated unsatisfactory (i.e., 0 or 1) by the borrower...
Fluorescence correction in electron probe microanalysis
International Nuclear Information System (INIS)
Castellano, Gustavo; Riveros, J.A.
1987-01-01
In this work, several expressions for characteristic fluorescence corrections are computed, for a compilation of experimental determinations on standard samples. Since this correction does not take significant values, the performance of the different models is nearly the same; this fact suggests the use of the simplest available expression. (Author) [es
9 CFR 417.3 - Corrective actions.
2010-01-01
... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Corrective actions. 417.3 Section 417.3 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE... ANALYSIS AND CRITICAL CONTROL POINT (HACCP) SYSTEMS § 417.3 Corrective actions. (a) The written HACCP plan...
Iterative optimization of quantum error correcting codes
International Nuclear Information System (INIS)
Reimpell, M.; Werner, R.F.
2005-01-01
We introduce a convergent iterative algorithm for finding the optimal coding and decoding operations for an arbitrary noisy quantum channel. This algorithm does not require any error syndrome to be corrected completely, and hence also finds codes outside the usual Knill-Laflamme definition of error correcting codes. The iteration is shown to improve the figure of merit 'channel fidelity' in every step
Publisher Correction: Invisible Trojan-horse attack
DEFF Research Database (Denmark)
Sajeed, Shihan; Minshull, Carter; Jain, Nitin
2017-01-01
A correction to this article has been published and is linked from the HTML version of this paper. The error has been fixed in the paper.
75 FR 17167 - Sunshine Act Meetings; Correction
2010-04-05
... NATIONAL COUNCIL ON DISABILITY Sunshine Act Meetings; Correction AGENCY: National Council on Disability. ACTION: Notice; correction. Type: Quarterly meeting. SUMMARY: NCD published a Sunshine Act Meeting Notice in the Federal Register on March 11, 2010, notifying the public of a quarterly meeting in...
A correction to the Watanabe potential
International Nuclear Information System (INIS)
Abul-Magd, A.Y.; Rabie, A.; El-Gazzar, M.A.
1980-10-01
Using the adiabatic approximation, an analytic expression for the correction to the Watanabe potential was obtained. In addition, a correction was made through a proper choice of the energy at which the potential parameters of the constituents of ⁶Li should be taken. (author)
21 CFR 123.7 - Corrective actions.
2010-04-01
... of their HACCP plans in accordance with § 123.6(c)(5), by which they predetermine the corrective... in accordance with § 123.10, to determine whether the HACCP plan needs to be modified to reduce the risk of recurrence of the deviation, and modify the HACCP plan as necessary. (d) All corrective actions...
Leading quantum correction to the Newtonian potential
International Nuclear Information System (INIS)
Donoghue, J.F.
1994-01-01
I argue that the leading quantum corrections, in powers of the energy or inverse powers of the distance, may be computed in quantum gravity through knowledge of only the low-energy structure of the theory. As an example, I calculate the leading quantum corrections to the Newtonian gravitational potential
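For reference, the result of this line of work can be written in closed form. The coefficients below are the values settled on in later recalculations (the numerical coefficient of the quantum term was revised after the original 1994 paper) and are shown only to fix notation:

```latex
V(r) = -\frac{G m_1 m_2}{r}
  \left[ 1
       + 3\,\frac{G (m_1 + m_2)}{r c^2}
       + \frac{41}{10\pi}\,\frac{G \hbar}{r^2 c^3}
  \right]
```

The first correction is the classical post-Newtonian term; the second is the genuinely quantum one, suppressed by the Planck length squared relative to $r^2$ and hence unobservably small at laboratory scales.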
Proving correctness of compilers using structured graphs
DEFF Research Database (Denmark)
Bahr, Patrick
2014-01-01
it into a compiler implementation using a graph type along with a correctness proof. The implementation and correctness proof of a compiler using a tree type without explicit jumps is simple, but yields code duplication. Our method provides a convenient way of improving such a compiler without giving up the benefits...
Correcting Poor Posture without Awareness or Willpower
Wernik, Uri
2012-01-01
In this article, a new technique for correcting poor posture is presented. Rather than intentionally increasing awareness or mobilizing willpower to correct posture, this approach offers a game using randomly drawn cards with easy daily assignments. A case using the technique is presented to emphasize the subjective experience of living with poor…
Euphemism and political correctness in contemporary English
Directory of Open Access Journals (Sweden)
Н Б Рубина
2011-12-01
Full Text Available The presented article is devoted to the consideration of such a linguistic category as political correctness, which has been widely adopted in the English-speaking countries and has had a considerable impact on the modern English language. Linguistic political correctness is a most intriguing language topic; to ignore it is to miss a major aspect of the modern English language.
Opportunistic Error Correction for WLAN Applications
Shao, X.; Schiphorst, Roelof; Slump, Cornelis H.
2008-01-01
The current error correction layer of IEEE 802.11a WLAN is designed for worst case scenarios, which often do not apply. In this paper, we propose a new opportunistic error correction layer based on Fountain codes and a resolution adaptive ADC. The key part in the new proposed system is that only
Accurate Classification of Chronic Migraine via Brain Magnetic Resonance Imaging
Schwedt, Todd J.; Chong, Catherine D.; Wu, Teresa; Gaw, Nathan; Fu, Yinlin; Li, Jing
2015-01-01
Background The International Classification of Headache Disorders provides criteria for the diagnosis and subclassification of migraine. Since there is no objective gold standard by which to test these diagnostic criteria, the criteria are based on the consensus opinion of content experts. Accurate migraine classifiers consisting of brain structural measures could serve as an objective gold standard by which to test and revise diagnostic criteria. The objectives of this study were to utilize magnetic resonance imaging measures of brain structure for constructing classifiers: 1) that accurately identify individuals as having chronic vs. episodic migraine vs. being a healthy control; and 2) that test the currently used threshold of 15 headache days/month for differentiating chronic migraine from episodic migraine. Methods Study participants underwent magnetic resonance imaging for determination of regional cortical thickness, cortical surface area, and volume. Principal components analysis combined structural measurements into principal components accounting for 85% of variability in brain structure. Models consisting of these principal components were developed to achieve the classification objectives. Ten-fold cross validation assessed classification accuracy within each of the ten runs, with data from 90% of participants randomly selected for classifier development and data from the remaining 10% of participants used to test classification performance. Headache frequency thresholds ranging from 5–15 headache days/month were evaluated to determine the threshold allowing for the most accurate subclassification of individuals into lower and higher frequency subgroups. Results Participants were 66 migraineurs and 54 healthy controls, 75.8% female, with an average age of 36 +/− 11 years. Average classifier accuracies were: a) 68% for migraine (episodic + chronic) vs. healthy controls; b) 67.2% for episodic migraine vs. healthy controls; c) 86.3% for chronic
International Nuclear Information System (INIS)
Winkler, Hanspeter; Taylor, Kenneth A.
2006-01-01
An image alignment method for electron tomography is presented which is based on cross-correlation techniques and which includes a simultaneous refinement of the tilt geometry. A coarsely aligned tilt series is iteratively refined with a procedure consisting of two steps for each cycle: area matching and subsequent geometry correction. The first step, area matching, brings into register equivalent specimen regions in all images of the tilt series. It determines four parameters of a linear two-dimensional transformation, not just translation and rotation as is done during the preceding coarse alignment with conventional methods. The refinement procedure also differs from earlier methods in that the alignment references are now computed from already aligned images by reprojection of a backprojected volume. The second step, geometry correction, refines the initially inaccurate estimates of the geometrical parameters, including the direction of the tilt axis, a tilt angle offset, and the inclination of the specimen with respect to the support film or specimen holder. The correction values serve as an indicator for the progress of the refinement. For each new iteration, the correction values are used to compute an updated set of geometry parameters by a least squares fit. Model calculations show that it is essential to refine the geometrical parameters as well as the accurate alignment of the images to obtain a faithful map of the original structure
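The "area matching" step above determines four parameters of a linear two-dimensional transformation from matched positions. A minimal sketch of that sub-problem (assuming matched coordinate pairs are already available, e.g. from cross-correlation) is an ordinary least-squares fit:

```python
import numpy as np

def fit_linear_transform(src, dst):
    """Least-squares fit of the 4 parameters of a 2D linear transform
    dst ≈ src @ A.T, for src, dst: N x 2 arrays of matched coordinates."""
    X, *_ = np.linalg.lstsq(src, dst, rcond=None)  # solves src @ X = dst
    return X.T  # the 2x2 transform matrix A

# synthetic check: recover a known scale/shear/rotation
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 2))
A_true = np.array([[1.02, 0.05], [-0.03, 0.98]])
dst = src @ A_true.T
A_est = fit_linear_transform(src, dst)
```

With noiseless matches the matrix is recovered exactly; with real data the fit residual serves as an indicator of matching quality.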
Mechanism for Corrective Action on Budget Imbalances
Directory of Open Access Journals (Sweden)
Ion Lucian CATRINA
2014-02-01
Full Text Available The European Fiscal Compact sets the obligation for the signatory states to establish an automatic mechanism for taking corrective action on budget imbalances. Nevertheless, the European Treaty says nothing about the tools that should be used in order to reach the desired equilibrium of budgets, only that the mechanism should aim at correcting deviations from the medium-term objective or the adjustment path, including their cumulated impact on government debt dynamics. This paper aims to show that each member state has to build the correction mechanism according to the impact of the chosen tools on economic growth and on general government revenues. We also emphasize that the correction mechanism should be built not only on corrective action through spending- or tax-based adjustments, but on a high-quality package of economic policies as well.
Chromaticity correction for the SSC collider rings
International Nuclear Information System (INIS)
Sen, T.; Nosochkov, Y.; Pilat, F.; Stiening, R.; Ritson, D.M.
1993-01-01
The authors address the issue of correcting higher order chromaticities of the collider with one or more low β insertions. The chromaticity contributed by the interaction regions (IRs) depends crucially on the maximum value of β in the two IRs in a cluster, the phase advance between adjacent interaction points (IPs), and the choice of global tune. They propose a correction scheme in which the linear chromaticity is corrected by a global distribution of sextupoles and the second order chromaticity of each IR is corrected by a more local set of sextupoles. Compared to the case where only the linear chromaticity is corrected, this configuration increases the momentum aperture more than three times and also reduces the β beat by this factor. With this scheme, the tune can be chosen to satisfy other constraints and the two IRs in a cluster can be operated independently at different luminosities without affecting the chromatic properties of the ring
Chromaticity correction for the SSC Collider Rings
International Nuclear Information System (INIS)
Sen, T.; Nosochkov, Y.; Pilat, F.; Stiening, R.; Ritson, D.M.
1993-05-01
We address the issue of correcting higher order chromaticities of the collider with one or more low β insertions. The chromaticity contributed by the interaction regions (IRs) depends crucially on the maximum value of β in the two IRs in a cluster, the phase advance between adjacent interaction points (IPs), and the choice of global tune. We propose a correction scheme in which the linear chromaticity is corrected by a global distribution of sextupoles and the second order chromaticity of each IR is corrected by a more local set of sextupoles. Compared to the case where only the linear chromaticity is corrected, this configuration increases the momentum aperture more than three times and also reduces the β beat by this factor. With this scheme, the tune can be chosen to satisfy other constraints and the two IRs in a cluster can be operated independently at different luminosities without affecting the chromatic properties of the ring
Johnson, Erin R; Contreras-García, Julia
2011-08-28
We develop a new density-functional approach combining physical insight from chemical structure with treatment of multi-reference character by real-space modeling of the exchange-correlation hole. We are able to recover, for the first time, correct fractional-charge and fractional-spin behaviour for atoms of groups 1 and 2. Based on Becke's non-dynamical correlation functional [A. D. Becke, J. Chem. Phys. 119, 2972 (2003)] and explicitly accounting for core-valence separation and pairing effects, this method is able to accurately describe dissociation and strong correlation in s-shell many-electron systems. © 2011 American Institute of Physics
Correction of Pressure Drop in Steam and Water System in Performance Test of Boiler
Liu, Jinglong; Zhao, Xianqiao; Hou, Fanjun; Wu, Xiaowu; Wang, Feng; Hu, Zhihong; Yang, Xinsen
2018-01-01
Steam and water pressure drop is one of the most important characteristics in the boiler performance test. Because the measuring points are not at the guaranteed positions and the test conditions fluctuate, the measured pressure drop of the steam and water system deviates both with measuring-point position and with the operating parameters of the test. In order to get an accurate pressure drop of the steam and water system, the corresponding correction should be carried out. This paper introduces the correction method for the steam and water pressure drop in the boiler performance test.
Radial lens distortion correction with sub-pixel accuracy for X-ray micro-tomography.
Vo, Nghia T; Atwood, Robert C; Drakopoulos, Michael
2015-12-14
Distortion correction or camera calibration for an imaging system which is highly configurable and requires frequent disassembly for maintenance or replacement of parts needs a speedy method for recalibration. Here we present direct techniques for calculating distortion parameters of a non-linear model based on the correct determination of the center of distortion. These techniques are fast, very easy to implement, and accurate at sub-pixel level. The implementation at the X-ray tomography system of the I12 beamline, Diamond Light Source, which strictly requires sub-pixel accuracy, shows excellent performance in the calibration image and in the reconstructed images.
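As a sketch of the kind of model involved (not the authors' exact parameterization), radial distortion about a known center of distortion can be corrected point-by-point with a polynomial "backward" model; the function below assumes the coefficients have already been calibrated:

```python
import numpy as np

def undistort_points(pts, center, coeffs):
    """Correct radially distorted points with a polynomial backward model
    about the center of distortion:
        r_undistorted = r * (c0 + c1*r + c2*r**2 + ...)
    pts: N x 2 array; center: (xc, yc); coeffs: (c0, c1, ...)."""
    center = np.asarray(center, dtype=float)
    d = pts - center                       # offsets from center of distortion
    r = np.hypot(d[:, 0], d[:, 1])         # radial distance of each point
    factor = sum(c * r**i for i, c in enumerate(coeffs))
    return center + d * factor[:, None]    # rescale offsets radially
```

With coefficients `(1.0,)` the mapping is the identity; the calibration task the paper addresses is determining the center and the higher-order coefficients to sub-pixel accuracy.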
Alphabet Soup: Sagittal Balance Correction Osteotomies of the Spine-What Radiologists Should Know.
Takahashi, T; Kainth, D; Marette, S; Polly, D
2018-04-01
Global sagittal malalignment has been demonstrated to correlate with clinical symptoms and is a key component to be restored in adult spinal deformity. In this article, various types of sagittal balance-correction osteotomies are reviewed primarily on the basis of the 3 most commonly used procedures: Smith-Petersen osteotomy, pedicle subtraction osteotomy, and vertebral column resection. Familiarity with the expected imaging appearance and commonly encountered complications seen on postoperative imaging studies following correction osteotomies is crucial for accurate image interpretation. © 2018 by American Journal of Neuroradiology.
International Nuclear Information System (INIS)
Yuan Lin; Zhou Ben-Hu; Zhao Yun-Hui; Xu Jun; Hai Wen-Hua
2012-01-01
A variational-integral perturbation method (VIPM) is established by combining the variational perturbation with the integral perturbation. The first-order corrected wave functions are constructed, and the second-order energy corrections for the ground state and several lower excited states are calculated by applying the VIPM to the hydrogen atom in a strong uniform magnetic field. Our calculations demonstrate that only the energy calculated by the VIPM takes a negative value, which indicates that the VIPM is more accurate than the other methods. Our study indicates that the VIPM not only increases the accuracy of the results but also keeps the convergence of the wave functions
Two-Loop Self-Energy Correction in a Strong Coulomb Nuclear Field
International Nuclear Information System (INIS)
Yerokhin, V.A.; Indelicato, P.; Shabaev, V.M.
2005-01-01
The two-loop self-energy correction to the ground-state energy levels of hydrogen-like ions with nuclear charges Z ≥ 10 is calculated without the Zα expansion, where α is the fine-structure constant. The data obtained are compared with the results of analytical calculations within the Zα expansion; significant disagreement with the analytical results of order α²(Zα)⁶ has been found. Extrapolation is used to obtain the most accurate value for the two-loop self-energy correction for the 1s state in hydrogen
NLO corrections to the photon impact factor: Combining real and virtual corrections
International Nuclear Information System (INIS)
Bartels, J.; Colferai, D.; Kyrieleis, A.; Gieseke, S.
2002-08-01
In this third part of our calculation of the QCD NLO corrections to the photon impact factor we combine our previous results for the real corrections with the singular pieces of the virtual corrections and present finite analytic expressions for the quark-antiquark-gluon intermediate state inside the photon impact factor. We begin with a list of the infrared singular pieces of the virtual correction, obtained in the first step of our program. We then list the complete results for the real corrections (longitudinal and transverse photon polarization). In the next step we define, for the real corrections, the collinear and soft singular regions and calculate their contributions to the impact factor. We then subtract the contribution due to the central region. Finally, we combine the real corrections with the singular pieces of the virtual corrections and obtain our finite results. (orig.)
Chen, H.; Karion, A.; Rella, C. W.; Winderlich, J.; Gerbig, C.; Filges, A.; Newberger, T.; Sweeney, C.; Tans, P. P.
2013-04-01
Accurate measurements of carbon monoxide (CO) in humid air have been made using the cavity ring-down spectroscopy (CRDS) technique. The measurements of CO mole fractions are determined from the strength of its spectral absorption in the near-infrared region (~1.57 μm) after removing interferences from adjacent carbon dioxide (CO2) and water vapor (H2O) absorption lines. Water correction functions that account for the dilution and pressure-broadening effects as well as absorption line interferences from adjacent CO2 and H2O lines have been derived for CO2 mole fractions between 360-390 ppm and for reported H2O mole fractions between 0-4%. The line interference corrections are independent of CO mole fractions. The dependence of the line interference correction on CO2 abundance is estimated to be approximately -0.3 ppb/100 ppm CO2 for dry mole fractions of CO. Comparisons of water correction functions from different analyzers of the same type show significant differences, making it necessary to perform instrument-specific water tests for each individual analyzer. The CRDS analyzer was flown on an aircraft in Alaska from April to November in 2011, and the accuracy of the CO measurements by the CRDS analyzer has been validated against discrete NOAA/ESRL flask sample measurements made on board the same aircraft, with a mean difference between integrated in situ and flask measurements of -0.6 ppb and a standard deviation of 2.8 ppb. Preliminary testing of CRDS instrumentation that employs improved spectroscopic model functions for CO2, H2O, and CO to fit the raw spectral data (available since the beginning of 2012) indicates a smaller water vapor dependence than the models discussed here, but more work is necessary to fully validate the performance. The CRDS technique provides an accurate and low-maintenance method of monitoring the atmospheric dry mole fractions of CO in humid air streams.
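The shape of such a water correction can be sketched as follows. The coefficient values below are placeholders only - the abstract's central point is that these coefficients are instrument-specific and must be fitted per analyzer; only the CO2 sensitivity (about -0.3 ppb per 100 ppm CO2) is taken from the text:

```python
def co_dry_ppb(co_wet_ppb, h2o_pct, co2_ppm,
               a=-0.012, b=-0.000267,            # placeholder dilution/broadening terms
               co2_sens_ppb_per_ppm=-0.3 / 100,  # approx. CO2 line-interference slope
               co2_ref_ppm=375.0):               # arbitrary reference CO2 level
    """Convert a wet CO reading to a dry mole fraction: divide out the
    water-vapor dilution/pressure-broadening response, then remove the
    CO2-dependent line-interference term. Illustrative coefficients only."""
    dry = co_wet_ppb / (1.0 + a * h2o_pct + b * h2o_pct ** 2)
    return dry - co2_sens_ppb_per_ppm * (co2_ppm - co2_ref_ppm)
```

At zero water vapor and the reference CO2 level the function returns the raw reading unchanged, as a correction of this form should.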
International Nuclear Information System (INIS)
Smith, N.; Pritchard, D.E.
1981-01-01
We have recently demonstrated that the energy corrected sudden (ECS) scaling law of De Pristo et al., when combined with the power-law assumption for the basis rates k_{l→0} ∝ [l(l+1)]^{-γ}, can accurately fit a wide body of rotational energy transfer data. We develop a simple and accurate approximation to this fitting law, and in addition mathematically show the connection between it and our earlier proposed energy-based law, which has also been successful in describing both theoretical and experimental data on rotationally inelastic collisions
Auto correct method of AD converters precision based on ethernet
Directory of Open Access Journals (Sweden)
NI Jifeng
2013-10-01
Full Text Available Ideal AD conversion should be a straight zero-crossing line in the Cartesian coordinate axis system. In practical engineering, however, the signal processing circuit, chip performance and other factors have an impact on the accuracy of conversion. Therefore a linear fitting method is adopted to improve the conversion accuracy. An automatic correction of AD conversion based on Ethernet, implemented in software and hardware, is presented. Just by tapping the mouse, the linearity correction of all AD converter channels can be automatically completed, and the error, SNR and ENOB (effective number of bits) are calculated. Then the coefficients of the linear correction are loaded into the onboard AD converter card's EEPROM. Compared with traditional methods, this method is more convenient, accurate and efficient, and has broad application prospects.
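The linear fitting step described above amounts to a least-squares fit of known input values against raw ADC codes, with the resulting gain and offset stored for later use (e.g. in the card's EEPROM). A minimal sketch, with hypothetical names:

```python
import numpy as np

def fit_adc_correction(codes, true_values):
    """Least-squares linear fit true ≈ gain * code + offset.
    Returns (gain, offset), the correction coefficients to store."""
    gain, offset = np.polyfit(codes, true_values, 1)
    return gain, offset

def apply_correction(code, gain, offset):
    """Map a raw ADC code to a corrected value."""
    return gain * code + offset
```

In a real calibration the `true_values` would come from a precision voltage source stepped across the input range of each channel.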
Evolutionary modeling-based approach for model errors correction
Directory of Open Access Journals (Sweden)
S. Q. Wan
2012-08-01
Full Text Available The inverse problem of using the information of historical data to estimate model errors is one of the science frontier research topics. In this study, we investigate such a problem using the classic Lorenz (1963 equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."
On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. Thereby, a new approach to estimating model errors based on EM is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it combines statistics and dynamics to a certain extent.
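The twin-experiment setup described above - the classic Lorenz (1963) system as the prediction model, and the same system with a periodic perturbation as "reality" - can be sketched as follows (the perturbation form and noise level are illustrative, not the paper's exact choices):

```python
import numpy as np

def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the classic Lorenz (1963) system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, state, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def perturbed_rhs(state, t, eps=0.5, period=5.0):
    """'Reality': Lorenz with a periodic evolutionary perturbation of rho."""
    return lorenz_rhs(state, rho=28.0 + eps * np.sin(2 * np.pi * t / period))

# Generate "observational data" from the perturbed (true) system;
# the unperturbed lorenz_rhs then plays the role of the imperfect model.
dt, n = 0.01, 500
truth = np.empty((n, 3))
truth[0] = [1.0, 1.0, 1.0]
for i in range(1, n):
    t = (i - 1) * dt
    truth[i] = rk4_step(lambda s: perturbed_rhs(s, t), truth[i - 1], dt)
obs = truth + np.random.default_rng(1).normal(0.0, 0.1, truth.shape)
```

The model-error estimation task is then to recover, from `obs` alone, the structural discrepancy between `lorenz_rhs` and `perturbed_rhs`.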
Closed orbit related problems: Correction, feedback, and analysis
International Nuclear Information System (INIS)
Bozoki, E.S.
1995-01-01
Orbit correction - moving the orbit to a desired orbit, orbit stability - keeping the orbit on the desired orbit using feedback to filter out unwanted noise, and orbit analysis - to learn more about the model of the machine, are strongly interrelated. They are the three facets of the same problem. The better one knows the model of the machine, the better the predictions that can be made on the behavior of the machine (inverse modeling) and the more accurately one can control the machine. On the other hand, one of the tools to learn more about the machine (modeling) is to study and analyze the orbit response to "kicks."
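A standard concrete form of the orbit-correction facet, assuming a measured BPM-to-corrector response matrix R (part of the "model of the machine"), is an SVD pseudo-inverse solve for corrector kicks; truncating small singular values is one way of filtering out noise:

```python
import numpy as np

def correct_orbit(R, orbit, n_sv=None):
    """Solve orbit + R @ theta ≈ 0 for corrector kicks theta via an
    SVD pseudo-inverse of the response matrix R (BPM readings per unit
    corrector kick). Keeping only the largest n_sv singular values
    regularizes the solve against noisy or degenerate modes."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    if n_sv is not None:
        U, s, Vt = U[:, :n_sv], s[:n_sv], Vt[:n_sv]
    return -Vt.T @ ((U.T @ orbit) / s)
```

When the measured orbit lies in the range of R, the computed kicks cancel it exactly; residuals otherwise reveal where the model and the machine disagree.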
New method in obtaining correction factor of power confirming
International Nuclear Information System (INIS)
Deng Yongjun; Li Rundong; Liu Yongkang; Zhou Wei
2010-01-01
The Westcott theory is the most widely used method in reactor power calibration and is particularly suited to research reactors. However, this method is cumbersome because it requires many correction parameters that rely on empirical formulas specific to each reactor type. The incidence coefficient between foil activity and reactor power was obtained by Monte Carlo calculation, carried out with a precise description of the reactor core and the foil positions in the MCNP input card. The reactor power was then determined from the core neutron fluence profile and the activity of the foil placed at the normalization position. This new method is simpler, more flexible and more accurate than the Westcott theory. In this paper, the theoretical results for the SPRR-300 obtained by the new method were compared with the experimental results, which verified the feasibility of this new method. (authors)
Easy and accurate reconstruction of whole HIV genomes from short-read sequence data with shiver
Blanquart, François; Golubchik, Tanya; Gall, Astrid; Bakker, Margreet; Bezemer, Daniela; Croucher, Nicholas J; Hall, Matthew; Hillebregt, Mariska; Ratmann, Oliver; Albert, Jan; Bannert, Norbert; Fellay, Jacques; Fransen, Katrien; Gourlay, Annabelle; Grabowski, M Kate; Gunsenheimer-Bartmeyer, Barbara; Günthard, Huldrych F; Kivelä, Pia; Kouyos, Roger; Laeyendecker, Oliver; Liitsola, Kirsi; Meyer, Laurence; Porter, Kholoud; Ristola, Matti; van Sighem, Ard; Cornelissen, Marion; Kellam, Paul; Reiss, Peter
2018-01-01
Abstract Studying the evolution of viruses and their molecular epidemiology relies on accurate viral sequence data, so that small differences between similar viruses can be meaningfully interpreted. Despite its higher throughput and more detailed minority variant data, next-generation sequencing has yet to be widely adopted for HIV. The difficulty of accurately reconstructing the consensus sequence of a quasispecies from reads (short fragments of DNA) in the presence of large between- and within-host diversity, including frequent indels, may have presented a barrier. In particular, mapping (aligning) reads to a reference sequence leads to biased loss of information; this bias can distort epidemiological and evolutionary conclusions. De novo assembly avoids this bias by aligning the reads to themselves, producing a set of sequences called contigs. However contigs provide only a partial summary of the reads, misassembly may result in their having an incorrect structure, and no information is available at parts of the genome where contigs could not be assembled. To address these problems we developed the tool shiver to pre-process reads for quality and contamination, then map them to a reference tailored to the sample using corrected contigs supplemented with the user’s choice of existing reference sequences. Run with two commands per sample, it can easily be used for large heterogeneous data sets. We used shiver to reconstruct the consensus sequence and minority variant information from paired-end short-read whole-genome data produced with the Illumina platform, for sixty-five existing publicly available samples and fifty new samples. We show the systematic superiority of mapping to shiver’s constructed reference compared with mapping the same reads to the closest of 3,249 real references: median values of 13 bases called differently and more accurately, 0 bases called differently and less accurately, and 205 bases of missing sequence recovered. We also
The cell pattern correction through design-based metrology
Kim, Yonghyeon; Lee, Kweonjae; Chang, Jinman; Kim, Taeheon; Han, Daehan; Lee, Kyusun; Hong, Aeran; Kang, Jinyoung; Choi, Bumjin; Lee, Joosung; Yeom, Kyehee; Lee, Jooyoung; Hong, Hyeongsun; Lee, Kyupil; Jin, Gyoyoung
2015-03-01
Starting with the sub-2Xnm node, the process window becomes smaller and tighter than before. A pattern-related error budget is required for accurate critical-dimension control of cell layers. Therefore, lithography has been faced with various difficulties, such as anomalous distributions, overlay error, patterning difficulty, etc. The distribution of cell patterns and overlay management are the most important factors in the DRAM field. Experience shows that fatal risk is caused by the patterns located in the tail of the distribution. Overlay error also induces various defect sources and misalignment issues. Even though these elements were known to be important, we could not classify the defect types of cell patterns, because there was no way to gather massive small-pattern CD samples in a cell unit block and to compare the layout with cell patterns using the CD-SEM. The CD-SEM is used to gather these data at high resolution, but it takes a long time to inspect and extract data because it measures a small FOV (field of view). The NGR (E-beam tool), however, provides high speed with a large FOV and high resolution. It also makes it possible to measure accurate overlay between the target layout and cell patterns because it provides DBM (Design Based Metrology). Using the massive measured data and various analysis techniques, such as cell distribution and defect analysis, pattern overlay error correction, etc., we extract persuasive results. We introduce how to correct cell patterns using DBM measurement and new analysis methods.
Precision Photometric Extinction Corrections from Direct Atmospheric Measurements
McGraw, John T.; Zimmer, P.; Linford, J.; Simon, T.; Measurement Astrophysics Research Group
2009-01-01
For decades astronomical extinction corrections have been accomplished using nightly mean extinction coefficients derived from Langley plots measured with the same telescope used for photometry. Because this technique results in lost time on program fields, observers only grudgingly made sporadic extinction measurements. Occasionally extinction corrections are not measured nightly but are made using tabulated mean monthly or even quarterly extinction coefficients. Any observer of the sky knows that Earth's atmosphere is an ever-changing fluid in which are embedded extinction sources ranging from Rayleigh (molecular) scattering, to aerosol, smoke and dust scattering and absorption, to "just plain cloudy." Our eyes also tell us that the type, direction and degree of extinction change on time scales of minutes or less - typically shorter than many astronomical observations. Thus, we should expect that atmospheric extinction can change significantly during a single observation. Mean extinction coefficients might be well-defined nightly means, but those means have high variance because they do not accurately record the wavelength-, time-, and angle-dependent extinction actually affecting each observation. Our research group is implementing lidar measurements made in the direction of observation with one-minute cadence, from which the absolute monochromatic extinction can be measured. Simultaneous spectrophotometry of nearby bright standard stars allows derivation and MODTRAN modeling of atmospheric transmission as a function of wavelength for the atmosphere through which an observation is made. Application of this technique is demonstrated. Accurate real-time extinction measurements are an enabling factor for sub-1% photometry. This research is supported by NSF Grant 0421087 and AFRL Grant #FA9451-04-2-0355.
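The traditional Langley-plot extinction coefficient that the lidar technique improves upon is just the slope of observed magnitude versus airmass; a minimal sketch:

```python
import numpy as np

def langley_fit(airmass, observed_mag):
    """Fit m_obs = m0 + k * X: the slope k is the extinction coefficient
    (magnitudes per airmass) and the intercept m0 is the magnitude that
    would be observed above the atmosphere (extrapolation to X = 0)."""
    k, m0 = np.polyfit(airmass, observed_mag, 1)
    return k, m0
```

The article's point is that a single nightly (k, m0) pair hides the real time-, angle-, and wavelength-dependence of the extinction along each line of sight.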