WorldWideScience

Sample records for saturation correction method

  1. Correcting saturation of detectors for particle/droplet imaging methods

    International Nuclear Information System (INIS)

    Kalt, Peter A M

    2010-01-01

    Laser-based diagnostic methods are being applied to more and more flows of theoretical and practical interest and are revealing interesting new flow features. Imaging particles or droplets in nephelometry and laser sheet dropsizing methods requires a trade-off between maximizing the signal-to-noise ratio and not over-saturating the detector. Droplet and particle imaging results in a lognormal distribution of pixel intensities. It is possible to fit a derived lognormal distribution to the histogram of measured pixel intensities. If pixel intensities are clipped at a saturation value, it is possible to estimate the presumed probability density function (pdf) shape, free of the effects of saturation, from the lognormal fit to the unsaturated histogram. Information about the presumed shape of the pixel intensity pdf is used to generate corrections that can be applied to data to account for saturation. The effects of even slight saturation are shown to be a significant source of error on the derived average. The influence of saturation on the derived root mean square (rms) is even more pronounced. It is found that errors on the determined average exceed 5% when the number of saturated samples exceeds 3% of the total. Errors on the rms are 20% for a similar saturation level. This study also attempts to delineate limits within which detector saturation can be accurately corrected. It is demonstrated that a simple method for reshaping the clipped part of the pixel intensity histogram makes accurate corrections to account for saturated pixels. These outcomes can be used to correct a saturated signal, quantify the effect of saturation on a derived average and offer a method to correct the derived average in the case of slight to moderate saturation of pixels.
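
    The correction amounts to fitting a lognormal to the right-censored histogram and then taking the moments of the fitted, unclipped distribution. Below is a minimal Python sketch of that idea (not code from the paper; the function name and the censored maximum-likelihood formulation are assumptions):

      import numpy as np
      from scipy import stats, optimize

      def fit_clipped_lognormal(pixels, sat_level):
          # Pixels at or above sat_level are treated as right-censored samples.
          unsat = pixels[pixels < sat_level]
          n_sat = np.count_nonzero(pixels >= sat_level)

          def nll(params):
              mu, sigma = params
              sigma = abs(sigma)  # keep the shape parameter positive
              ll = stats.lognorm.logpdf(unsat, s=sigma, scale=np.exp(mu)).sum()
              # Each saturated pixel contributes the log-probability of
              # exceeding the clip level (the survival function).
              ll += n_sat * stats.lognorm.logsf(sat_level, s=sigma,
                                                scale=np.exp(mu))
              return -ll

          res = optimize.minimize(nll, x0=[np.log(np.median(unsat)), 0.5],
                                  method="Nelder-Mead")
          mu, sigma = res.x[0], abs(res.x[1])
          mean = np.exp(mu + sigma**2 / 2)  # mean of the presumed pdf
          var = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)
          return mean, np.sqrt(var + mean**2)  # (mean, rms)

    Comparing these fitted moments with the naive mean and rms of the clipped data reproduces the kind of saturation-induced bias the abstract quantifies.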

  2. A gamma camera count rate saturation correction method for whole-body planar imaging

    Science.gov (United States)

    Hobbs, Robert F.; Baechler, Sébastien; Senthamizhchelvan, Srinivasan; Prideaux, Andrew R.; Esaias, Caroline E.; Reinhardt, Melvin; Frey, Eric C.; Loeb, David M.; Sgouros, George

    2010-02-01

    Whole-body (WB) planar imaging has long been one of the staple methods of dosimetry, and its quantification has been formalized by the MIRD Committee in pamphlet no. 16. One of the issues not specifically addressed in the formalism occurs when the count rates reaching the detector are sufficiently high to result in camera count saturation. Camera dead-time effects have been extensively studied, but all of the developed correction methods assume static acquisitions. However, during WB planar (sweep) imaging, a variable amount of imaged activity exists in the detector's field of view as a function of time, and therefore the camera saturation is time dependent. A new time-dependent algorithm was developed to correct for dead-time effects during WB planar acquisitions that accounts for relative motion between detector heads and imaged object. Static camera dead-time parameters were acquired by imaging decaying activity in a phantom and obtaining a saturation curve. Using these parameters, an iterative algorithm akin to Newton's method was developed, which takes into account the variable count rate seen by the detector as a function of time. The algorithm was tested on simulated data as well as on a whole-body scan of high-activity samarium-153 in an ellipsoid phantom. A complete set of parameters from unsaturated phantom data necessary for count rate to activity conversion was also obtained, including build-up and attenuation coefficients, in order to convert corrected count rate values to activity. The algorithm proved successful in accounting for motion- and time-dependent saturation effects in both the simulated and measured data and converged to any desired degree of precision. The clearance half-life from the ellipsoid phantom data was calculated to be 45.1 h after dead-time correction and 51.4 h with no correction; the physical decay half-life of samarium-153 is 46.3 h. Accurate WB planar dosimetry of high activities relies on successfully compensating...
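
    The heart of such a correction is inverting a saturation curve for the true rate, bin by bin along the sweep. A hedged sketch assuming a paralyzable dead-time model m = n*exp(-n*tau) inverted by Newton's method; the model choice, the tau value and the function name are illustrative, not taken from the paper:

      import numpy as np

      def true_rate(measured, tau, n_iter=30):
          # Solve m = n * exp(-n * tau) for n (valid on the branch n*tau < 1).
          n = np.asarray(measured, dtype=float).copy()  # start: no losses
          for _ in range(n_iter):
              f = n * np.exp(-n * tau) - measured
              fprime = (1.0 - n * tau) * np.exp(-n * tau)
              n -= f / fprime
          return n

      # Applied per time bin, so each bin uses the count rate the detector
      # actually saw at that position of the whole-body sweep.
      cps_bins = np.array([1.2e5, 3.4e5, 2.1e5])  # hypothetical measured cps
      print(true_rate(cps_bins, tau=5e-7))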

  3. On Neglecting Chemical Exchange Effects When Correcting in Vivo 31P MRS Data for Partial Saturation

    Science.gov (United States)

    Ouwerkerk, Ronald; Bottomley, Paul A.

    2001-02-01

    Signal acquisition in most MRS experiments requires a correction for partial saturation that is commonly based on a single exponential model for T1 that ignores effects of chemical exchange. We evaluated the errors in 31P MRS measurements introduced by this approximation in two-, three-, and four-site chemical exchange models under a range of flip-angles and pulse sequence repetition times (TR) that provide near-optimum signal-to-noise ratio (SNR). In two-site exchange, such as the creatine-kinase reaction involving phosphocreatine (PCr) and γ-ATP in human skeletal and cardiac muscle, errors in saturation factors were determined for the progressive saturation method and the dual-angle method of measuring T1. The analysis shows that these errors are negligible for the progressive saturation method if the observed T1 is derived from a three-parameter fit of the data. When T1 is measured with the dual-angle method, errors in saturation factors are less than 5% for all conceivable values of the chemical exchange rate and flip-angles that deliver useful SNR per unit time over the range T1/5 ≤ TR ≤ 2T1. Errors are also less than 5% for three- and four-site exchange when TR ≥ T1*/2, the so-called "intrinsic" T1's of the metabolites. The effect of changing metabolite concentrations and chemical exchange rates on observed T1's and saturation corrections was also examined with a three-site chemical exchange model involving ATP, PCr, and inorganic phosphate in skeletal muscle undergoing up to 95% PCr depletion. Although the observed T1's were dependent on metabolite concentrations, errors in saturation corrections for TR = 2 s could be kept within 5% for all exchanging metabolites using a simple interpolation of two dual-angle T1 measurements performed at the start and end of the experiment. Thus, the single-exponential model appears to be reasonably accurate for correcting 31P MRS data for partial saturation in the presence of chemical exchange. Even in systems where...
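
    For the single-exponential model assumed here, the saturation factor has a closed form, and only the ratio of the two metabolites' factors enters a metabolite ratio such as PCr/ATP. A small illustrative sketch (the T1, TR and flip-angle numbers are made up, not values from the paper):

      import numpy as np

      def saturation_factor(T1, TR, flip_deg):
          # Spoiled steady state: S = M0*sin(a)*(1 - E)/(1 - E*cos(a)),
          # so multiplying by this factor recovers the fully relaxed signal.
          E = np.exp(-TR / T1)
          a = np.radians(flip_deg)
          return (1 - E * np.cos(a)) / (1 - E)

      f_pcr = saturation_factor(T1=4.0, TR=2.0, flip_deg=60)
      f_atp = saturation_factor(T1=2.5, TR=2.0, flip_deg=60)
      print(f_pcr / f_atp)  # the factor applied to a measured PCr/ATP ratio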

  4. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial based response surface techniques in structural reliability considering a computationally time consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit...... of the model correction factor method, is that in simpler form not using gradient information on the original limit state function or only using this information once, a drastic reduction of the number of limit state evaluation is obtained together with good approximations on the reliability. Methods...

  5. Measured attenuation correction methods

    International Nuclear Information System (INIS)

    Ostertag, H.; Kuebler, W.K.; Doll, J.; Lorenz, W.J.

    1989-01-01

    Accurate attenuation correction is a prerequisite for the determination of exact local radioactivity concentrations in positron emission tomography. Attenuation correction factors range from 4-5 in brain studies to 50-100 in whole body measurements. This report gives an overview of the different methods of determining the attenuation correction factors by transmission measurements using an external positron emitting source. The long-lived generator nuclide 68Ge/68Ga is commonly used for this purpose. The additional patient dose from the transmission source is usually a small fraction of the dose due to the subsequent emission measurement. Ring-shaped transmission sources as well as rotating point or line sources are employed in modern positron tomographs. By masking a rotating line or point source, random and scattered events in the transmission scans can be effectively suppressed. The problems of measured attenuation correction are discussed: Transmission/emission mismatch, random and scattered event contamination, counting statistics, transmission/emission scatter compensation, transmission scan after administration of activity to the patient. By using a double masking technique simultaneous emission and transmission scans become feasible. (orig.)

  6. A preliminary study on method of saturated curve

    International Nuclear Information System (INIS)

    Cao Liguo; Chen Yan; Ao Qi; Li Huijuan

    1987-01-01

    Determining the absorption coefficient of a sample directly, with matrix effect correction, is an effective method. The absorption coefficient is calculated using the relation between the characteristic X-ray intensity and the sample thickness (the saturated curve). The method directly reflects the features of the sample and corrects for the enhancement effect under certain conditions. It is not the same as the usual approach, in which the absorption coefficient of the sample is determined from the absorption of X-rays penetrating the sample. The sensitivity factor KI0 is discussed. An approach for determining KI0 by experiment and a quasi-absolute measurement of the absorption coefficient μ are proposed. Experimental results with the correction under different conditions are shown.

  7. Classical gluon production amplitude for nucleus-nucleus collisions: first saturation correction in the projectile

    International Nuclear Information System (INIS)

    Chirilli, Giovanni A.; Kovchegov, Yuri V.; Wertepny, Douglas E.

    2015-01-01

    We calculate the classical single-gluon production amplitude in nucleus-nucleus collisions including the first saturation correction in one of the nuclei (the projectile) while keeping multiple-rescattering (saturation) corrections to all orders in the other nucleus (the target). In our approximation only two nucleons interact in the projectile nucleus: the single-gluon production amplitude we calculate is order-g^3 and is leading-order in the atomic number of the projectile, while resumming all order-one saturation corrections in the target nucleus. Our result is the first step towards obtaining an analytic expression for the first projectile saturation correction to the gluon production cross section in nucleus-nucleus collisions.

  8. An algorithm to correct saturated mass spectrometry ion abundances for enhanced quantitation and mass accuracy in omic studies

    Energy Technology Data Exchange (ETDEWEB)

    Bilbao, Aivett; Gibbons, Bryson C.; Slysz, Gordon W.; Crowell, Kevin L.; Monroe, Matthew E.; Ibrahim, Yehia M.; Smith, Richard D.; Payne, Samuel H.; Baker, Erin S.

    2018-04-01

    The mass accuracy and peak intensity of ions detected by mass spectrometry (MS) measurements are essential to facilitate compound identification and quantitation. However, high concentration species can easily cause problems if their ion intensities reach beyond the limits of the detection system, leading to distorted and non-ideal detector response (e.g. saturation), and largely precluding the calculation of accurate m/z and intensity values. Here we present an open source computational method to correct peaks above a defined intensity (saturated) threshold determined by the MS instrumentation such as the analog-to-digital converters or time-to-digital converters used in conjunction with time-of-flight MS. In this method, the isotopic envelope for each observed ion above the saturation threshold is compared to its expected theoretical isotopic distribution. The most intense isotopic peak for which saturation does not occur is then utilized to re-calculate the precursor m/z and correct the intensity, resulting in both higher mass accuracy and greater dynamic range. The benefits of this approach were evaluated with proteomic and lipidomic datasets of varying complexities. After correcting the high concentration species, reduced mass errors and enhanced dynamic range were observed for both simple and complex omic samples. Specifically, the mass error dropped by more than 50% in most cases with highly saturated species and dynamic range increased by 1-2 orders of magnitude for peptides in a blood serum sample.
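
    The essential move is anchoring the theoretical isotopic distribution on the most intense unsaturated isotope peak and re-estimating the clipped peaks from it (the paper similarly re-derives the precursor m/z from unsaturated peaks). A hedged sketch with hypothetical names and numbers:

      import numpy as np

      def correct_saturated_envelope(observed, theoretical, sat_threshold):
          # 'observed' and 'theoretical' are aligned isotope-peak intensities,
          # theoretical normalized to its largest peak.
          observed = np.asarray(observed, dtype=float)
          theoretical = np.asarray(theoretical, dtype=float)
          ok = observed < sat_threshold
          if not ok.any():
              raise ValueError("no unsaturated isotope peak to anchor on")
          anchor = np.argmax(np.where(ok, observed, -np.inf))
          scale = observed[anchor] / theoretical[anchor]
          corrected = observed.copy()
          corrected[~ok] = theoretical[~ok] * scale  # replace clipped peaks
          return corrected

      # Monoisotopic and first isotope clipped near the detector limit:
      print(correct_saturated_envelope([9.9e5, 9.9e5, 4.0e5],
                                       [1.00, 0.90, 0.40], sat_threshold=9.5e5))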

  9. Generalized subspace correction methods

    Energy Technology Data Exchange (ETDEWEB)

    Kolm, P. [Royal Institute of Technology, Stockholm (Sweden); Arbenz, P.; Gander, W. [Eidgenoessiche Technische Hochschule, Zuerich (Switzerland)

    1996-12-31

    A fundamental problem in scientific computing is the solution of large sparse systems of linear equations. Often these systems arise from the discretization of differential equations by finite difference, finite volume or finite element methods. Iterative methods exploiting these sparse structures have proven to be very effective on conventional computers for a wide range of applications. Due to the rapid development of parallel computers and the increasing demand for their large computing power, it has become important to design iterative methods specialized for these new architectures.

  10. Saturation and Energy Corrections for TeV Electrons and Photons

    CERN Document Server

    Clerbaux, Barbara; Mahmoud, Tariq; Marage, Pierre Edouard

    2006-01-01

    This note presents a study of the response of the CMS electromagnetic calorimeter ECAL to high energy electrons and photons (from 500 to 4000 GeV), using the full simulation of the CMS detector. The longitudinal containment and the lateral extension of high energy showers are discussed, and energy- and eta-dependent correction factors F(E_meas, eta), where E_meas = E_ECAL + E_HCAL, are determined in order to reconstruct the incident particle energy, using the energies measured in the ECAL and in the hadronic calorimeter HCAL. For ECAL barrel crystals with energy deposit higher than 1700 GeV, improvements are proposed to techniques aimed at correcting for the effects of electronics saturation.

  11. Near-Saturation Single-Photon Avalanche Diode Afterpulse and Sensitivity Correction Scheme for the LHC Longitudinal Density Monitor

    CERN Document Server

    Bravin, E; Palm, M

    2014-01-01

    Single-Photon Avalanche Diodes (SPADs) monitor the longitudinal density of the LHC beams by measuring the temporal distribution of synchrotron radiation. The relative population of nominally empty RF-buckets (satellites or ghosts) with respect to filled bunches is a key figure for the luminosity calibration of the LHC experiments. Since afterpulsing from a main bunch avalanche can be as high as, or higher than, the signal from satellites or ghosts, an accurate correction algorithm is needed. Furthermore, to reduce the integration time, the amount of light sent to the SPAD is large enough that pile-up effects and afterpulsing cannot be neglected. The SPAD sensitivity has also been found to vary at the end of the active quenching phase. We present a method to characterize and correct for SPAD deadtime, afterpulsing and sensitivity variation near saturation, together with laboratory benchmarking.

  12. Error of image saturation in the structured-light method.

    Science.gov (United States)

    Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin

    2018-01-01

    In the phase-measuring structured-light method, image saturation induces large phase errors. Usually, by selecting proper system parameters (such as the phase-shift number, exposure time, projection intensity, etc.), the phase error can be reduced. However, due to the lack of a complete theory of the phase error, there is no rational principle or basis for selecting the optimal system parameters. For this reason, the phase error due to image saturation is analyzed completely, and the effects of the two main factors, the phase-shift number and the saturation degree, on the phase error are studied in depth. In addition, the selection of optimal system parameters is discussed, including the proper range and the selection principle of the system parameters. The error analysis and the conclusions are verified by simulation and experimental results, and the conclusions can be used for optimal parameter selection in practice.
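
    The saturation-induced phase error the abstract analyzes can be probed numerically: clip a phase-shifted fringe sequence and compare the recovered phase with the truth. A minimal simulation sketch (illustrative parameter values; not the paper's code):

      import numpy as np

      def recovered_phase(phi, n_steps, amplitude, offset, sat_level):
          k = np.arange(n_steps)
          delta = 2 * np.pi * k / n_steps           # phase shifts
          I = offset + amplitude * np.cos(phi + delta)
          I = np.minimum(I, sat_level)              # detector clipping
          # Standard N-step least-squares phase estimate.
          return np.arctan2(-np.sum(I * np.sin(delta)),
                            np.sum(I * np.cos(delta)))

      phi_true = 0.7
      for n in (4, 8, 16):  # error shrinks as the phase-shift number grows
          print(n, recovered_phase(phi_true, n, 120, 150, 255) - phi_true)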

  13. Selective saturation method for EPR dosimetry with tooth enamel

    International Nuclear Information System (INIS)

    Ignatiev, E.A.; Romanyukha, A.A.; Koshta, A.A.; Wieser, A.

    1996-01-01

    The method of selective saturation is based on the difference in the microwave (mw) power dependence of the background and radiation-induced EPR components of the tooth enamel spectrum. The subtraction of the EPR spectrum recorded at low mw power from that recorded at higher mw power provides a considerable reduction of the background component in the spectrum. The resolution of the EPR spectrum could be improved 10-fold; however, the signal-to-noise ratio was simultaneously reduced by a factor of two. A detailed comparative study of reference samples with known absorbed doses was performed to demonstrate the advantage of the method. The application of the selective saturation method for EPR dosimetry with tooth enamel reduced the lower limit of EPR dosimetry to about 100 mGy. (author)

  14. Correcting human heart 31P NMR spectra for partial saturation. Evidence that saturation factors for PCr/ATP are homogeneous in normal and disease states

    Science.gov (United States)

    Bottomley, Paul A.; Hardy, Christopher J.; Weiss, Robert G.

    Heart PCr/ATP ratios measured from spatially localized 31P NMR spectra can be corrected for partial saturation effects using saturation factors derived from unlocalized chest surface-coil spectra acquired at the heart rate and approximate Ernst angle for phosphocreatine (PCr), and again under fully relaxed conditions, during each 31P exam. To validate this approach in studies of normal and disease states, where the possibility of heterogeneity in metabolite T1 values between chest muscle and heart and between normal and disease states exists, the properties of saturation factors for metabolite ratios were investigated theoretically under conditions applicable in typical cardiac spectroscopy exams, and empirically using data from 82 cardiac 31P exams in six study groups comprising normal controls (n = 19) and patients with dilated (n = 20) and hypertrophic (n = 5) cardiomyopathy, coronary artery disease (n = 16), heart transplants (n = 19), and valvular heart disease (n = 3). When TR ≪ T1(PCr), with T1(PCr) ⩾ T1(ATP), the saturation factor for PCr/ATP lies in the range 1.5 ± 0.5, regardless of the T1 values. The precise value depends on the ratio of metabolite T1 values rather than their absolute values and is insensitive to modest changes in TR. Published data suggest that the metabolite T1 ratio is the same in heart and muscle. Our empirical data reveal that the saturation factors do not vary significantly with disease state, nor with the relative fractions of muscle and heart contributing to the chest surface-coil spectra. Also, the corrected myocardial PCr/ATP ratios in each normal or disease state bear no correlation with the corresponding saturation factors nor with the fraction of muscle in the unlocalized chest spectra. However, application of the saturation correction (mean value, 1.36 ± 0.03 SE) significantly reduced scatter in the myocardial PCr/ATP data by 14 ± 11% (SD) (p ⩽ 0.05). The findings suggest that the relative T1 values of PCr and ATP are...

  15. Capillary pressure-saturation relationships for porous granular materials: Pore morphology method vs. pore unit assembly method

    Science.gov (United States)

    Sweijen, Thomas; Aslannejad, Hamed; Hassanizadeh, S. Majid

    2017-09-01

    In studies of two-phase flow in complex porous media it is often desirable to have an estimation of the capillary pressure-saturation curve prior to measurements. Therefore, we compare in this research the capability of three pore-scale approaches in reproducing experimentally measured capillary pressure-saturation curves. To do so, we have generated 12 packings of spheres that are representative of four different glass-bead packings and eight different sand packings, for which we have found experimental data on the capillary pressure-saturation curve in the literature. In generating the packings, we matched the particle size distributions and porosity values of the granular materials. We have used three different pore-scale approaches for generating the capillary pressure-saturation curves of each packing: i) the Pore Unit Assembly (PUA) method in combination with the Mayer and Stowe-Princen (MS-P) approximation for estimating the entry pressures of pore throats, ii) the PUA method in combination with the hemisphere approximation, and iii) the Pore Morphology Method (PMM) in combination with the hemisphere approximation. The three approaches were also used to produce capillary pressure-saturation curves for the coating layer of paper, used in inkjet printing. Curves for such layers are extremely difficult to determine experimentally, due to their very small thickness and the presence of extremely small pores (less than one micrometer in size). Results indicate that the PMM and PUA-hemisphere method give similar capillary pressure-saturation curves, because both methods rely on a hemisphere to represent the air-water interface. The ability of the hemisphere approximation and the MS-P approximation to reproduce correct capillary pressure seems to depend on the type of particle size distribution, with the hemisphere approximation working well for narrowly distributed granular materials.

  16. A hybrid numerical method for orbit correction

    International Nuclear Information System (INIS)

    White, G.; Himel, T.; Shoaee, H.

    1997-09-01

    The authors describe a simple hybrid numerical method for beam orbit correction in particle accelerators. The method both overcomes degeneracy in the linear system being solved and respects bounds on the solution. It uses the Singular Value Decomposition (SVD) to find and remove the null-space of the system, followed by a bounded Linear Least Squares analysis of the remaining recast problem. It was developed for correcting orbit and dispersion in the B-factory rings.
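
    In outline, the recipe is: take the SVD of the orbit response matrix, discard the (near-)null space, and hand the recast system to a bounded least-squares solver. A sketch with NumPy/SciPy under assumed shapes and limits (the threshold and kick bound are illustrative, not from the report):

      import numpy as np
      from scipy.optimize import lsq_linear

      def correct_orbit(R, orbit, kick_limit, svd_cut=1e-10):
          U, s, Vt = np.linalg.svd(R, full_matrices=False)
          keep = s > svd_cut * s[0]          # drop degenerate directions
          R_recast = (U[:, keep] * s[keep]) @ Vt[keep]
          res = lsq_linear(R_recast, -orbit,
                           bounds=(-kick_limit, kick_limit))
          return res.x                        # bounded corrector kicks

      rng = np.random.default_rng(0)
      R = rng.normal(size=(40, 12))
      R[:, 11] = R[:, 10]                     # two degenerate correctors
      kicks = correct_orbit(R, rng.normal(size=40), kick_limit=1.0)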

  17. A Single Image Dehazing Method Using Average Saturation Prior

    Directory of Open Access Journals (Sweden)

    Zhenfei Gu

    2017-01-01

    Outdoor images captured in bad weather are prone to yield poor visibility, which is a fatal problem for most computer vision applications. The majority of existing dehazing methods rely on an atmospheric scattering model and therefore share a common limitation; that is, the model is only valid when the atmosphere is homogeneous. In this paper, we propose an improved atmospheric scattering model to overcome this inherent limitation. By adopting the proposed model, a corresponding dehazing method is also presented. In this method, we first create a haze density distribution map of a hazy image, which enables us to segment the hazy image into scenes according to the haze density similarity. Then, in order to improve the atmospheric light estimation accuracy, we define an effective weight assignment function to locate a candidate scene based on the scene segmentation results and therefore avoid most potential errors. Next, we propose a simple but powerful prior named the average saturation prior (ASP), which is a statistic of extensive high-definition outdoor images. Using this prior combined with the improved atmospheric scattering model, we can directly estimate the scene atmospheric scattering coefficient and restore the scene albedo. The experimental results verify that our model is physically valid, and the proposed method outperforms several state-of-the-art single image dehazing methods in terms of both robustness and effectiveness.

  18. A New Class of Scaling Correction Methods

    International Nuclear Information System (INIS)

    Mei Li-Jie; Wu Xin; Liu Fu-Yao

    2012-01-01

    When conventional integrators like Runge-Kutta-type algorithms are used, numerical errors can make an orbit deviate from a hypersurface determined by many constraints, which leads to unreliable numerical solutions. Scaling correction methods are a powerful tool to avoid this. We focus on their applications, and also develop a family of new velocity multiple scaling correction methods where scale factors only act on the related components of the integrated momenta. They can preserve exactly some first integrals of motion in discrete or continuous dynamical systems, so that rapid growth of roundoff or truncation errors is suppressed significantly. (general)
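
    In the velocity-scaling flavor of these methods, the momenta are rescaled after each step so that a known first integral (here the energy) is restored exactly. A toy sketch for a unit-mass Kepler orbit (the potential and numbers are illustrative, not from the paper):

      import numpy as np

      def velocity_scaling(x, v, E0, potential):
          # Choose s so that 0.5*|s*v|^2 + U(x) = E0 exactly.
          ke_target = E0 - potential(x)
          ke = 0.5 * np.dot(v, v)
          if ke_target <= 0 or ke == 0:
              return v                  # scaling not applicable at this state
          return v * np.sqrt(ke_target / ke)

      U = lambda x: -1.0 / np.linalg.norm(x)
      x = np.array([1.0, 0.0])
      v = np.array([0.0, 1.01])         # speed slightly off after a step
      v = velocity_scaling(x, v, E0=-0.5, potential=U)
      print(0.5 * v @ v + U(x))         # restored to -0.5 up to roundoff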

  19. Another method of dead time correction

    International Nuclear Information System (INIS)

    Sabol, J.

    1988-01-01

    A new method of the correction of counting losses caused by a non-extended dead time of pulse detection systems is presented. The approach is based on the distribution of time intervals between pulses at the output of the system. The method was verified both experimentally and by using the Monte Carlo simulations. The results show that the suggested technique is more reliable and accurate than other methods based on a separate measurement of the dead time. (author) 5 refs

  20. Off-Angle Iris Correction Methods

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Thompson, Joseph T [ORNL; Karakaya, Mahmut [ORNL; Boehnen, Chris Bensing [ORNL

    2016-01-01

    In many real-world iris recognition systems obtaining consistent frontal images is problematic due to inexperienced or uncooperative users, untrained operators, or distracting environments. As a result many collected images are unusable by modern iris matchers. In this chapter we present four methods for correcting off-angle iris images to appear frontal, which makes them compatible with existing iris matchers. The methods include an affine correction, a ray-traced model of the human eye, measured displacements, and a genetic algorithm optimized correction. The affine correction represents a simple way to create an iris image that appears frontal, but it does not account for refractive distortions of the cornea. The other methods account for refraction. The ray-traced model simulates the optical properties of the cornea. The other two methods are data driven. The first uses optical flow to measure the displacements of the iris texture when compared to frontal images of the same subject. The second uses a genetic algorithm to learn a mapping that optimizes the Hamming Distance scores between off-angle and frontal images. In this chapter we hypothesize that the biological model presented in our earlier work does not adequately account for all variations in eye anatomy and therefore the two data-driven approaches should yield better performance. Results are presented using the commercial VeriEye matcher that show that the genetic algorithm method clearly improves over prior work and makes iris recognition possible up to 50 degrees off-angle.

  21. Iteration of ultrasound aberration correction methods

    Science.gov (United States)

    Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond

    2004-05-01

    Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimation of the TDA filter, and performing correction on transmit and receive, has proven difficult. It has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal. The other method uses eigenvalue decomposition of the receive cross-spectrum matrix, based upon a receive energy-maximizing criterion. Simulations of iterating aberration correction with a TDA filter have been investigated to study its convergence properties. Weak and strong human-body wall models generated the aberration; both emulated the human abdominal wall. Results after iteration improve aberration correction substantially, and both estimation methods converge, even for the case of strong aberration.

  22. PRO-QUEST: a rapid assessment method based on progressive saturation for quantifying exchange rates using saturation times in CEST.

    Science.gov (United States)

    Demetriou, Eleni; Tachrount, Mohamed; Zaiss, Moritz; Shmueli, Karin; Golay, Xavier

    2018-03-05

    To develop a new MRI technique to rapidly measure exchange rates in CEST MRI. A novel pulse sequence for measuring chemical exchange rates through a progressive saturation recovery process, called PRO-QUEST (progressive saturation for quantifying exchange rates using saturation times), has been developed. Using this method, the water magnetization is sampled under non-steady-state conditions, and off-resonance saturation is interleaved with the acquisition of images obtained through a Look-Locker type of acquisition. A complete theoretical framework has been set up, and simple equations to obtain the exchange rates have been derived. A reduction of scan time from 58 to 16 minutes has been obtained using PRO-QUEST versus the standard QUEST. Maps of both T1 of water and B1 can simply be obtained by repetition of the sequence without off-resonance saturation pulses. Simulations and calculated exchange rates from experimental data using amino acids such as glutamate, glutamine, taurine, and alanine were compared and found to be in good agreement. The PRO-QUEST sequence was also applied to healthy and infarcted rats after 24 hours, and revealed that imaging specificity to ischemic acidification during stroke was substantially increased relative to standard amide proton transfer-weighted imaging. Because of the reduced scan time and insensitivity to non-chemical-exchange factors such as direct water saturation, PRO-QUEST can serve as an excellent alternative for researchers and clinicians interested in mapping pH changes in vivo. © 2018 International Society for Magnetic Resonance in Medicine.

  23. Efficient orbit integration by manifold correction methods.

    Science.gov (United States)

    Fukushima, Toshio

    2005-12-01

    Triggered by a desire to investigate, numerically, the planetary precession through a long-term numerical integration of the solar system, we developed a new formulation of numerical integration of orbital motion named manifold correction methods. The main trick is to rigorously retain the consistency of physical relations, such as the orbital energy, the orbital angular momentum, or the Laplace integral, of a binary subsystem. This maintenance is done by applying a correction to the integrated variables at each integration step. Typical methods of correction are certain geometric transformations, such as spatial scaling and spatial rotation, which are commonly used in the comparison of reference frames, or mathematically reasonable operations, such as modularization of angle variables into the standard domain [-pi, pi). The manifold correction methods finally evolved into the orbital longitude methods, which enable us to conduct an extremely precise integration of orbital motions. In unperturbed orbits, the integration errors are suppressed at the machine epsilon level for an indefinitely long period. In perturbed cases, on the other hand, the errors initially grow in proportion to the square root of time and then increase more rapidly, the onset of which depends on the type and magnitude of the perturbations. This feature is also realized for highly eccentric orbits by applying the same idea as used in KS-regularization. In particular, the introduction of time elements greatly enhances the performance of numerical integration of KS-regularized orbits, whether the scaling is applied or not.

  24. Dead time corrections using the backward extrapolation method

    Energy Technology Data Exchange (ETDEWEB)

    Gilad, E., E-mail: gilade@bgu.ac.il [The Unit of Nuclear Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Dubi, C. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel); Geslot, B.; Blaise, P. [DEN/CAD/DER/SPEx/LPE, CEA Cadarache, Saint-Paul-les-Durance 13108 (France); Kolin, A. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel)

    2017-05-11

    Dead time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to create large bias in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing), and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled Count Per Second (CPS), based on backward extrapolation of the losses, created by increasingly growing artificially imposed dead time on the data, back to zero. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (of 1–2%) in restoring the corrected count rate. - Highlights: • A new method for dead time corrections is introduced and experimentally validated. • The method does not depend on any prior calibration nor assumes any specific model. • Different dead times are imposed on the signal and the losses are extrapolated to zero. • The method is implemented and validated using neutron measurements from the MINERVE. • Results show very good correspondence to empirical results.
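
    The mechanics of the backward extrapolation can be sketched directly on a list of event timestamps: impose increasingly large artificial dead times, record the surviving count rate, and extrapolate the trend back to zero imposed dead time. The sketch below (with a simple linear extrapolation) illustrates the idea only; the paper's extrapolation model is its own:

      import numpy as np

      def impose_dead_time(t, tau):
          # Non-paralyzing artificial dead time on sorted timestamps.
          count, last = 0, -np.inf
          for ti in t:
              if ti - last >= tau:
                  count += 1
                  last = ti
          return count

      def backward_extrapolated_cps(t, taus):
          duration = t[-1] - t[0]
          cps = [impose_dead_time(t, tau) / duration for tau in taus]
          slope, intercept = np.polyfit(taus, cps, 1)
          return intercept              # CPS extrapolated to tau = 0

      t = np.sort(np.random.default_rng(1).uniform(0.0, 10.0, 50_000))
      taus = np.linspace(2e-6, 2e-5, 10)
      print(backward_extrapolated_cps(t, taus))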

  25. Methods of correcting Anger camera deadtime losses

    International Nuclear Information System (INIS)

    Sorenson, J.A.

    1976-01-01

    Three different methods of correcting for Anger camera deadtime loss were investigated. These included analytic methods (mathematical modeling), the marker-source method, and a new method based on counting "pileup" events appearing in a pulse-height analyzer window positioned above the photopeak of interest. The studies were done with 99mTc on a Searle Radiographics camera with a measured deadtime of about 6 μsec. Analytic methods were found to be unreliable because of unpredictable changes in deadtime with changes in radiation scattering conditions. Both the marker-source method and the pileup-counting method were found to be accurate to within a few percent for true counting rates of up to about 200 kcps, with the pileup-counting method giving better results. This finding applied to sources at depths ranging up to 10 cm of pressed wood. The relative merits of the two methods are discussed.

  26. From dry to saturated thermal conductivity: mixing-model correction charts and new conversion equations for sedimentary rocks

    Science.gov (United States)

    Fuchs, Sven; Schütz, Felina; Förster, Andrea; Förster, Hans-Jürgen

    2013-04-01

    ... satisfying. To improve the fit of the models, correction equations are calculated based on the statistical data. In addition, the application of correction equations allows a significant improvement of the accuracy of bulk TC data calculated. However, the "corrected" geometric mean constitutes the only model universally applicable to different types of sedimentary rocks and, thus, is recommended for the calculation of bulk TC. Finally, the statistical analysis also resulted in lithotype-specific conversion equations, which permit a calculation of the water-saturated bulk TC from dry-measured TC and porosity (e.g., well-log-derived porosity). This approach has the advantage that the saturated bulk TC could be calculated readily without application of any mixing model. The expected errors with this approach are in the range between 5 and 10 % (Fuchs et al., 2013).

  27. Decay correction methods in dynamic PET studies

    International Nuclear Information System (INIS)

    Chen, K.; Reiman, E.; Lawson, M.

    1995-01-01

    In order to reconstruct positron emission tomography (PET) images in quantitative dynamic studies, the data must be corrected for radioactive decay. One of the two commonly used methods ignores physiological processes, including blood flow, that occur at the same time as radioactive decay; the other makes incorrect use of time-accumulated PET counts. In simulated dynamic PET studies using 11C-acetate and 18F-fluorodeoxyglucose (FDG), these methods are shown to result in biased estimates of the time-activity curve (TAC) and model parameters. New methods described in this article provide significantly improved parameter estimates in dynamic PET studies.
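
    For reference, the standard frame-based factor that properly accounts for decay within a frame (rather than evaluating decay only at one time point) looks as follows; this is the textbook correction, not the article's new model-based estimators:

      import numpy as np

      def decay_correction_factor(t_start, t_dur, half_life):
          # Refers counts in the frame [t_start, t_start + t_dur] back to
          # time zero, integrating the decay over the whole frame.
          lam = np.log(2) / half_life
          return np.exp(lam * t_start) * lam * t_dur / (1 - np.exp(-lam * t_dur))

      # 18F (half-life 109.8 min), 5-min frame starting 30 min post-injection:
      print(decay_correction_factor(t_start=30.0, t_dur=5.0, half_life=109.8))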

  28. A non-parametric method for correction of global radiation observations

    DEFF Research Database (Denmark)

    Bacher, Peder; Madsen, Henrik; Perers, Bengt

    2013-01-01

    ... in the observations are corrected. These are errors such as: tilt in the leveling of the sensor, shadowing from surrounding objects, clipping and saturation in the signal processing, and errors from dirt and wear. The method is based on a statistical non-parametric clear-sky model which is applied to both...

  29. Improved Saturated Hydraulic Conductivity Pedotransfer Functions Using Machine Learning Methods

    Science.gov (United States)

    Araya, S. N.; Ghezzehei, T. A.

    2017-12-01

    Saturated hydraulic conductivity (Ks) is one of the fundamental hydraulic properties of soils. Its measurement, however, is cumbersome, and pedotransfer functions (PTFs) are often used to estimate it instead. Despite a lot of progress over the years, generic PTFs that estimate hydraulic conductivity generally do not perform well. We develop significantly improved PTFs by applying state-of-the-art machine learning techniques coupled with high-performance computing on a large database of over 20,000 soils (the USKSAT and Florida Soil Characterization databases). We compared the performance of four machine learning algorithms (k-nearest neighbors, gradient boosted model, support vector machine, and relevance vector machine) and evaluated the relative importance of several soil properties in explaining Ks. An attempt is also made to better account for soil structural properties; we evaluated the importance of variables derived from transformations of soil water retention characteristics and other soil properties. The gradient boosted models gave the best performance, with root mean square errors less than 0.7 and mean errors on the order of 0.01 on a log scale of Ks [cm/h]. The effective particle size, D10, was found to be the single most important predictor. Other important predictors included percent clay, bulk density, organic carbon percent, coefficient of uniformity and values derived from water retention characteristics. Model performances were consistently better for Ks values greater than 10 cm/h. This study maximizes the extraction of information from a large database to develop generic machine learning based PTFs to estimate Ks. The study also evaluates the importance of various soil properties and their transformations in explaining Ks.
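
    A gradient-boosted PTF of this kind is straightforward to prototype with scikit-learn; the sketch below uses placeholder data and assumed feature columns (D10, clay %, bulk density, organic C %, coefficient of uniformity) with log10(Ks) as the target, and is not the authors' pipeline:

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.model_selection import train_test_split

      X = np.random.rand(500, 5)    # placeholder predictors (see lead-in)
      y = np.random.rand(500)       # placeholder log10(Ks [cm/h])
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      ptf = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05,
                                      max_depth=4, random_state=0)
      ptf.fit(X_tr, y_tr)
      rmse = np.sqrt(np.mean((ptf.predict(X_te) - y_te) ** 2))
      print(rmse, ptf.feature_importances_)  # importances rank the predictors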

  30. A Design Method of Robust Servo Internal Model Control with Control Input Saturation

    OpenAIRE

    山田, 功; 舩見, 洋祐

    2001-01-01

    In the present paper, we examine a design method of robust servo Internal Model Control with control input saturation. First of all, we clarify the condition under which Internal Model Control has robust servo characteristics for a system with control input saturation. From this consideration, we propose a new design method of Internal Model Control with robust servo characteristics. A numerical example illustrating the effectiveness of the proposed method is shown.

  31. Study of the orbital correction method

    International Nuclear Information System (INIS)

    Meserve, R.A.

    1976-01-01

    Two approximations of interest in atomic, molecular, and solid state physics are explored. First, a procedure for calculating an approximate Green's function for use in perturbation theory is derived. In lowest order it is shown to be equivalent to treating the contribution of the bound states of the unperturbed Hamiltonian exactly and representing the continuum contribution by plane waves orthogonalized to the bound states (OPW's). If the OPW approximation were inadequate, the procedure allows for systematic improvement of the approximation. For comparison purposes an exact but more limited procedure for performing second-order perturbation theory, one that involves solving an inhomogeneous differential equation, is also derived. Second, the Kohn-Sham many-electron formalism is discussed and formulae are derived and discussed for implementing perturbation theory within the formalism so as to find corrections to the total energy of a system through second order in the perturbation. Both approximations were used in the calculation of the polarizability of helium, neon, and argon. The calculation included direct and exchange effects by the Kohn-Sham method and full self-consistency was demanded. The results using the differential equation method yielded excellent agreement with the coupled Hartree-Fock results of others and with experiment. Moreover, the OPW approximation yielded satisfactory comparison with the results of calculation by the exact differential equation method. Finally, both approximations were used in the calculation of properties of hydrogen fluoride and methane. The appendix formulates a procedure using group theory and the internal coordinates of a molecular system to simplify the calculation of vibrational frequencies.

  32. A New Dyslexia Reading Method and Visual Correction Position Method.

    Science.gov (United States)

    Manilla, George T; de Braga, Joe

    2017-01-01

    Pediatricians and educators may interact daily with several dyslexic patients or students. One dyslexic author accidentally developed a personal, effective, corrective reading method. Its effectiveness was evaluated in 3 schools. One school utilized 8 demonstration special education students. Over 3 months, one student grew one-third of a year, 3 grew 1 year, and 4 grew 2 years. In another school, 6 sixth-, seventh-, and eighth-grade classroom teachers followed 45 treated dyslexic students. They all excelled and progressed beyond their classroom peers in 4 months. Using cyclovergence upper gaze, dyslexic reading problems disappeared at one of the Positional Reading Arc positions of 30°, 60°, 90°, 120°, or 150° for 10 dyslexics. Positional Reading Arc testing on 112 students of the second through eighth grades showed that words read per minute, reading errors, and comprehension improved. Dyslexia was visually corrected by use of a new reading method and Positional Reading Arc positions.

  33. Simultaneous Imaging of CBF Change and BOLD with Saturation-Recovery-T1 Method.

    Directory of Open Access Journals (Sweden)

    Xiao Wang

    A neuroimaging technique based on the saturation-recovery (SR) T1 MRI method was applied for simultaneously imaging blood oxygenation level dependent (BOLD) contrast and cerebral blood flow change (ΔCBF), which is determined by the CBF-sensitive T1 relaxation rate change (ΔR1CBF). This technique was validated by quantitatively examining the relationships among ΔR1CBF, ΔCBF, BOLD and relative CBF change (rCBF), which was simultaneously measured by laser Doppler flowmetry, under global ischemia and hypercapnia conditions, respectively, in the rat brain. It was found that during ischemia, BOLD decreased 23.1±2.8% in the cortical area; ΔR1CBF decreased 0.020±0.004 s-1, corresponding to a ΔCBF decrease of 1.07±0.24 ml/g/min and an 89.5±1.8% CBF reduction (n=5), resulting in a baseline CBF value (1.18 ml/g/min) consistent with literature reports. The CBF change quantification based on temperature-corrected ΔR1CBF had a better accuracy than the apparent R1 change (ΔR1app); nevertheless, ΔR1app without temperature correction still provides a good approximation for quantifying CBF change, since perfusion dominates the evolution of the longitudinal relaxation rate (R1app). In contrast to the excellent consistency between ΔCBF and rCBF measured during and after ischemia, the BOLD change during the post-ischemia period was temporally dissociated from ΔCBF, indicating distinct CBF and BOLD responses. Similar results were also observed for the hypercapnia study. The overall results demonstrate that the SR-T1 MRI method is effective for noninvasive and quantitative imaging of both ΔCBF and BOLD associated with physiological and/or pathological changes.

  34. Research on 3-D terrain correction methods of airborne gamma-ray spectrometry survey

    International Nuclear Information System (INIS)

    Liu Yanyang; Liu Qingcheng; Zhang Zhiyong

    2008-01-01

    The general method of height correction is not effective in complex terrain when interpreting airborne gamma-ray spectrometry data, and the 2-D terrain correction method studied in recent years is only applicable to correction of the measured section. A new method of 3-D sector terrain correction is studied. In this method, the ground radiator is divided into many small sector radiators; the irradiation rate at a given survey distance is calculated, and the total value over all small radiation sources is regarded as the irradiation rate of the ground radiator at a given point of the aerial survey. The correction coefficients for every point are calculated and then applied to the airborne gamma-ray spectrometry data. By dividing the ground radiator into many small sectors, the method can achieve forward calculation, inversion calculation and terrain correction for airborne gamma-ray spectrometry surveys in complex topography. Other factors are considered, such as the unsaturated degree of the measured scope and uneven radiator content on the ground. The results of a forward model and an example analysis show that the 3-D terrain correction method is proper and effective. (authors)

  35. Nowcasting Surface Meteorological Parameters Using Successive Correction Method

    National Research Council Canada - National Science Library

    Henmi, Teizi

    2002-01-01

    The successive correction method was examined and evaluated statistically as a nowcasting method for surface meteorological parameters including temperature, dew point temperature, and horizontal wind vector components...
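
    The successive correction method is classically a Cressman-type analysis: repeated passes with shrinking influence radii nudge a background grid toward the station observations. A generic Python sketch of that scheme (weights, radii and the nearest-node evaluation are illustrative; the report's exact configuration may differ):

      import numpy as np

      def successive_correction(grid_xy, background, obs_xy, obs_val, radii):
          analysis = background.copy()
          for R in radii:  # successively smaller influence radii
              # Current analysis value at each station (nearest grid node).
              nearest = np.argmin(((obs_xy[:, None, :] - grid_xy[None, :, :]) ** 2)
                                  .sum(-1), axis=1)
              innovation = obs_val - analysis[nearest]
              d2 = ((grid_xy[:, None, :] - obs_xy[None, :, :]) ** 2).sum(-1)
              w = np.clip((R**2 - d2) / (R**2 + d2), 0.0, None)  # Cressman weight
              wsum = w.sum(axis=1)
              upd = (w * innovation).sum(axis=1) / np.where(wsum > 0, wsum, 1.0)
              analysis = analysis + np.where(wsum > 0, upd, 0.0)
          return analysis

      # 1-D example: a 15-degree temperature background nudged by two stations.
      gx = np.stack([np.linspace(0, 10, 51), np.zeros(51)], axis=1)
      obs = np.array([[2.0, 0.0], [7.5, 0.0]])
      out = successive_correction(gx, np.full(51, 15.0), obs,
                                  np.array([17.0, 14.0]), radii=[4.0, 2.0, 1.0])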

  36. A method to correct coordinate distortion in EBSD maps

    International Nuclear Information System (INIS)

    Zhang, Y.B.; Elbrønd, A.; Lin, F.X.

    2014-01-01

    Drift during electron backscatter diffraction mapping leads to coordinate distortions in resulting orientation maps, which affects, in some cases significantly, the accuracy of analysis. A method, thin plate spline, is introduced and tested to correct such coordinate distortions in the maps after the electron backscatter diffraction measurements. The accuracy of the correction as well as theoretical and practical aspects of using the thin plate spline method is discussed in detail. By comparing with other correction methods, it is shown that the thin plate spline method is most efficient to correct different local distortions in the electron backscatter diffraction maps. - Highlights: • A new method is suggested to correct nonlinear spatial distortion in EBSD maps. • The method corrects EBSD maps more precisely than presently available methods. • Errors less than 1–2 pixels are typically obtained. • Direct quantitative analysis of dynamic data are available after this correction
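
    SciPy's RBFInterpolator offers a ready-made thin plate spline, so the correction can be prototyped by mapping matched control points from the distorted map onto their reference positions and then transforming every pixel coordinate. A sketch under that assumption (the control-point coordinates are invented; this is not the authors' implementation):

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def tps_correct(distorted_ctrl, reference_ctrl, map_coords):
          tps = RBFInterpolator(distorted_ctrl, reference_ctrl,
                                kernel="thin_plate_spline")
          return tps(map_coords)  # corrected coordinates for every pixel

      ctrl_dist = np.array([[0., 0.], [100., 2.], [1., 98.],
                            [103., 101.], [51., 49.]])
      ctrl_ref = np.array([[0., 0.], [100., 0.], [0., 100.],
                           [100., 100.], [50., 50.]])
      pix = np.stack(np.meshgrid(np.arange(0., 101., 10.),
                                 np.arange(0., 101., 10.)), -1).reshape(-1, 2)
      corrected = tps_correct(ctrl_dist, ctrl_ref, pix)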

  37. New decoding methods of interleaved burst error-correcting codes

    Science.gov (United States)

    Nakano, Y.; Kasahara, M.; Namekawa, T.

    1983-04-01

    A probabilistic method of single burst error correction, using the syndrome correlation of subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high capability of burst error correction with less decoding delay. By generalizing this method it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of subcodes, which are interleaved m-fold burst error detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.

  38. A new method for calculating gas saturation of low-resistivity shale gas reservoirs

    Directory of Open Access Journals (Sweden)

    Jinyan Zhang

    2017-09-01

    The Jiaoshiba shale gas field is located in the Fuling area of the Sichuan Basin, with the Upper Ordovician Wufeng–Lower Silurian Longmaxi Fm as the pay zone. At the bottom of the pay zone, a high-quality shale gas reservoir about 20 m thick is generally developed with high organic contents and gas abundance, but its resistivity is relatively low. Accordingly, the gas saturation calculated by formulas (e.g., Archie's) using electric logging data is often much lower than the experiment-derived value. In this paper, a new method was presented for calculating gas saturation more accurately based on non-electric logging data. Firstly, the causes for the low resistivity of shale gas reservoirs in this area were analyzed. Then, the limitation of traditional methods for calculating gas saturation based on electric logging data was diagnosed, and the feasibility of the neutron–density porosity overlay method was illustrated. According to the response characteristics of neutron, density and other porosity logging in shale gas reservoirs, a model for calculating the gas saturation of shale gas was established by core experimental calibration, based on the density logging value, the density porosity and the difference between density porosity and neutron porosity, by means of multiple methods (e.g., the dual-porosity overlay method) by optimizing the best overlay coefficient. This new method avoids the effect of low resistivity, and thus can provide normal calculated gas saturation of high-quality shale gas reservoirs. It works well in practical application. This new method provides a technical support for the calculation of shale gas reserves in this area. Keywords: Shale gas, Gas saturation, Low resistivity, Non-electric logging, Volume density, Compensated neutron, Overlay method, Reserves calculation, Sichuan Basin, Jiaoshiba shale gas field

  39. Methods of orbit correction system optimization

    International Nuclear Information System (INIS)

    Chao, Yu-Chiu.

    1997-01-01

    Extracting optimal performance out of an orbit correction system is an important component of accelerator design and evaluation. The question of effectiveness vs. economy, however, is not always easily tractable. This is especially true in cases where betatron function magnitude and phase advance do not have smooth or periodic dependencies on the physical distance. In this report a program is presented using linear algebraic techniques to address this problem. A systematic recipe is given, supported with quantitative criteria, for arriving at an orbit correction system design with the optimal balance between performance and economy. The orbit referred to in this context can be generalized to include angle, path length, orbit effects on the optical transfer matrix, and simultaneous effects on multiple pass orbits.

  40. A method to correct coordinate distortion in EBSD maps

    DEFF Research Database (Denmark)

    Zhang, Yubin; Elbrønd, Andreas Benjamin; Lin, Fengxiang

    2014-01-01

    Drift during electron backscatter diffraction mapping leads to coordinate distortions in resulting orientation maps, which affects, in some cases significantly, the accuracy of analysis. A method, thin plate spline, is introduced and tested to correct such coordinate distortions in the maps after...... the electron backscatter diffraction measurements. The accuracy of the correction as well as theoretical and practical aspects of using the thin plate spline method is discussed in detail. By comparing with other correction methods, it is shown that the thin plate spline method is most efficient to correct...

  41. Local defect correction for boundary integral equation methods

    NARCIS (Netherlands)

    Kakuba, G.; Anthonissen, M.J.H.

    2014-01-01

    The aim in this paper is to develop a new local defect correction approach to gridding for problems with localised regions of high activity in the boundary element method. The technique of local defect correction has been studied for other methods, such as finite difference and finite volume methods...

  42. Attenuation correction method for single photon emission CT

    Energy Technology Data Exchange (ETDEWEB)

    Morozumi, Tatsuru; Nakajima, Masato [Keio Univ., Yokohama (Japan). Faculty of Science and Technology]; Ogawa, Koichi; Yuta, Shinichi

    1983-10-01

    A correction method (the Modified Correction Matrix method) is proposed that implements iterative correction by exactly measuring the attenuation constant distribution in a test body, calculating a correction factor for every picture element, and then multiplying the image by these factors. Computer simulations comparing the results showed that the proposed method is more effective than the conventional correction matrix method, particularly when applied to test bodies in which the attenuation constant changes strongly. Since actual measurement data always contain quantum noise, the noise was taken into account in the simulation; the correction effect remained large even under noise. For verifying its clinical effectiveness, an experiment using an acrylic phantom was also carried out. As a result, the recovery of image quality in the parts with a small attenuation constant was remarkable compared with the conventional method.
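
    The closest widely published analogue of such per-pixel factors is Chang-type correction: for each pixel, average the attenuation survival probability over projection angles through the measured attenuation map and invert it. A slow-but-clear sketch of that analogue (not the paper's Modified Correction Matrix itself; geometry and values are illustrative):

      import numpy as np

      def chang_style_factors(mu, n_angles=16, step=1.0):
          ny, nx = mu.shape
          factors = np.ones_like(mu)
          angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
          for iy in range(ny):
              for ix in range(nx):
                  surv = []
                  for a in angles:
                      # Line integral of mu from the pixel out to the boundary.
                      path, x, y = 0.0, float(ix), float(iy)
                      while 0 <= x < nx and 0 <= y < ny:
                          path += mu[int(y), int(x)] * step
                          x += np.cos(a) * step
                          y += np.sin(a) * step
                      surv.append(np.exp(-path))
                  factors[iy, ix] = 1.0 / np.mean(surv)
          return factors

      mu = np.full((32, 32), 0.015)      # hypothetical uniform attenuation map
      factors = chang_style_factors(mu)  # multiply the image by these factors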

  43. A practical procedure to improve the accuracy of radiochromic film dosimetry. An integration of a uniformity correction method and a red/blue correction method

    International Nuclear Information System (INIS)

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-01-01

    It has been reported that light scattering can worsen the accuracy of dose distribution measurements using a radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and the light scattering was also evaluated. In addition, the efficacy of this correction method integrated with the red/blue correction method was assessed. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2, but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films, and a corrected dose distribution was subsequently created. The correction method improved pass ratios in the dose difference evaluation by more than 10% compared with no correction. The red/blue correction method resulted in a 5% improvement compared with the standard procedure that employed the red color only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and has potential for routine clinical intensity modulated radiation therapy (IMRT) dose verification if the accuracy of EBT2 is required to be similar to that of EDR2. The use of the red/blue correction method may improve the accuracy, but we recommend using it carefully and understanding the characteristics of EBT2 for both the red-only and the red/blue correction methods. (author)

  4. [A practical procedure to improve the accuracy of radiochromic film dosimetry: an integration of a uniformity correction method and a red/blue correction method].

    Science.gov (United States)

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-06-01

    It has been reported that light scattering can worsen the accuracy of dose distribution measurements made with radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and by light scattering was also evaluated, as was the efficacy of integrating this correction method with the red/blue correction method. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2 but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films, and corrected dose distribution data were subsequently created. The correction method showed more than 10%-better pass ratios in the dose difference evaluation than when it was not applied. The red/blue correction method resulted in a 5% improvement compared with the standard procedure that employed the red color only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and has potential for routine clinical IMRT dose verification if EBT2 is required to match the accuracy of EDR2. The red/blue correction method may improve accuracy, but it should be used carefully, with an understanding of the characteristics of EBT2 both for the red color only and for the red/blue correction method.
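    As a hedged illustration of the red/blue idea referenced in both records (the published procedure may differ in detail; the normalisation constant k is a stand-in for a real calibration):

      import numpy as np

      def optical_density(raw, blank):
          """Net OD from 16-bit scanner pixel values, one color channel."""
          return np.log10(np.maximum(blank, 1.0) / np.maximum(raw, 1.0))

      def red_blue_corrected_od(red_raw, blue_raw, red_blank, blue_blank, k=1.0):
          """Divide red-channel OD by blue-channel OD so that active-layer
          thickness and lateral scanner non-uniformity largely cancel;
          k would come from a uniformly irradiated calibration film."""
          od_r = optical_density(red_raw.astype(float), red_blank.astype(float))
          od_b = optical_density(blue_raw.astype(float), blue_blank.astype(float))
          return k * od_r / np.maximum(od_b, 1e-6)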

  5. Different partial volume correction methods lead to different conclusions

    DEFF Research Database (Denmark)

    Greve, Douglas N; Salat, David H; Bowen, Spencer L

    2016-01-01

    A cross-sectional group study of the effects of aging on brain metabolism as measured with (18)F-FDG-PET was performed using several different partial volume correction (PVC) methods: no correction (NoPVC), Meltzer (MZ), Müller-Gärtner (MG), and the symmetric geometric transfer matrix (SGTM) usin...

  6. Simulating water hammer with corrective smoothed particle method

    NARCIS (Netherlands)

    Hou, Q.; Kruisbrink, A.C.H.; Tijsseling, A.S.; Keramat, A.

    2012-01-01

    The corrective smoothed particle method (CSPM) is used to simulate water hammer. The spatial derivatives in the water-hammer equations are approximated by a corrective kernel estimate. For the temporal derivatives, the Euler-forward time integration algorithm is employed. The CSPM results are in

  7. Method of absorbance correction in a spectroscopic heating value sensor

    Science.gov (United States)

    Saveliev, Alexei; Jangale, Vilas Vyankatrao; Zelepouga, Sergeui; Pratapas, John

    2013-09-17

    A method and apparatus for absorbance correction in a spectroscopic heating value sensor in which a reference light intensity measurement is made on a non-absorbing reference fluid, a light intensity measurement is made on a sample fluid, and a measured light absorbance of the sample fluid is determined. A corrective light intensity measurement at a non-absorbing wavelength of the sample fluid is made on the sample fluid from which an absorbance correction factor is determined. The absorbance correction factor is then applied to the measured light absorbance of the sample fluid to arrive at a true or accurate absorbance for the sample fluid.
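    The patent abstract maps directly onto a few lines of arithmetic; variable names are illustrative:

      import numpy as np

      def corrected_absorbance(i_ref, i_sample, i_ref_na, i_sample_na):
          """Remove scattering/fouling losses using a measurement at a
          non-absorbing wavelength of the sample fluid."""
          a_measured = -np.log10(i_sample / i_ref)      # apparent absorbance
          a_offset = -np.log10(i_sample_na / i_ref_na)  # non-absorbing wavelength
          return a_measured - a_offset                  # corrected absorbance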

  8. Application of the finite volume method in the simulation of saturated flows of binary mixtures

    International Nuclear Information System (INIS)

    Murad, M.A.; Gama, R.M.S. da; Sampaio, R.

    1989-12-01

    This work presents the simulation of saturated flows of an incompressible Newtonian fluid through a rigid, homogeneous and isotropic porous medium. The employed mathematical model is derived from the Continuum Theory of Mixtures and generalizes the classical one which is based on Darcy's Law form of the momentum equation. In this approach fluid and porous matrix are regarded as continuous constituents of a binary mixture. The finite volume method is employed in the simulation. (author)

  9. A spectrum correction method for fuel assembly rehomogenization

    International Nuclear Information System (INIS)

    Lee, Kyung Taek; Cho, Nam Zin

    2004-01-01

    To overcome the limitation of existing homogenization methods based on the single assembly calculation with a zero-current boundary condition, we propose a new rehomogenization method, named the spectrum correction method (SCM), consisting of a multigroup energy spectrum approximation by spectrum correction and condensed two-group heterogeneous single assembly calculations with a non-zero current boundary condition. In SCM, the spectrum shift caused by current across assembly interfaces is first accounted for by spectrum correction at the group condensation stage. Then, heterogeneous single assembly calculations with two-group cross sections condensed using the corrected multigroup energy spectrum are performed to obtain rehomogenized nodal diffusion parameters, i.e., assembly-wise homogenized cross sections and discontinuity factors. To evaluate the performance of SCM, it was applied to the analytic function expansion nodal (AFEN) method and several test problems were solved. The results show that SCM can significantly reduce the errors both in multiplication factors and in assembly-averaged power distributions.

  10. Methods to Increase Educational Effectiveness in an Adult Correctional Setting.

    Science.gov (United States)

    Kuster, Byron

    1998-01-01

    A correctional educator reflects on methods that improve instructional effectiveness. These include teacher-student collaboration, clear goals, student accountability, positive classroom atmosphere, high expectations, and mutual respect. (SK)

  11. Automated general temperature correction method for dielectric soil moisture sensors

    Science.gov (United States)

    Kapilaratne, R. G. C. Jeewantinie; Lu, Minjiao

    2017-08-01

    An effective temperature correction method for dielectric sensors is important to ensure the accuracy of soil water content (SWC) measurements in local- to regional-scale soil moisture monitoring networks. These networks make extensive use of highly temperature-sensitive dielectric sensors because of their low cost, ease of use and low power consumption. Yet there is no general temperature correction method for dielectric sensors; instead, sensor- or site-dependent correction algorithms are employed. Such methods become ineffective for soil moisture monitoring networks with different sensor setups and those that cover diverse climatic conditions and soil types. This study attempted to develop a general temperature correction method for dielectric sensors that can be used regardless of differences in sensor type, climatic conditions and soil type, and without rainfall data. An automated general temperature correction method was developed by adapting previously developed temperature correction algorithms, based on time domain reflectometry (TDR) measurements, to ThetaProbe ML2X, Stevens Hydra probe II and Decagon Devices EC-TM sensor measurements. The procedure for removing rainy-day effects from SWC data was automated by incorporating a statistical inference technique into the temperature correction algorithms. The temperature correction method was evaluated using 34 stations from the International Soil Moisture Monitoring Network and another nine stations from a local soil moisture monitoring network in Mongolia. The soil moisture monitoring networks used in this study cover four major climates and six major soil types. Results indicated that the automated temperature correction algorithms developed in this study can successfully eliminate temperature effects from dielectric sensor measurements even without on-site rainfall data. Furthermore, it has been found that actual daily average of SWC has been changed due to temperature effects of dielectric sensors with a
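    A minimal sketch of the kind of correction being described, assuming a simple linear SWC-temperature sensitivity and omitting the rainy-day screening step:

      import numpy as np

      def temperature_correct_swc(swc, temp, t_ref=25.0):
          """Regress SWC anomalies on soil-temperature anomalies and
          re-reference the series to t_ref (generic sketch only)."""
          swc = np.asarray(swc, float)
          temp = np.asarray(temp, float)
          t_anom = temp - temp.mean()
          s_anom = swc - swc.mean()
          slope = (t_anom @ s_anom) / (t_anom @ t_anom)  # dSWC/dT
          return swc - slope * (temp - t_ref)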

  12. Correction

    DEFF Research Database (Denmark)

    Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin

    2016-01-01

    [This corrects the article DOI: 10.1371/journal.ppat.1005000.][This corrects the article DOI: 10.1371/journal.ppat.1005740.][This corrects the article DOI: 10.1371/journal.ppat.1005679.]

  13. Multiple Site-Directed and Saturation Mutagenesis by the Patch Cloning Method.

    Science.gov (United States)

    Taniguchi, Naohiro; Murakami, Hiroshi

    2017-01-01

    Constructing protein-coding genes with desired mutations is a basic step for protein engineering. Herein, we describe a multiple site-directed and saturation mutagenesis method, termed MUPAC. This method has been used to introduce multiple site-directed mutations in the green fluorescent protein gene and in the Moloney murine leukemia virus reverse transcriptase gene. Moreover, this method was also successfully used to introduce randomized codons at five desired positions in the green fluorescent protein gene, and for simple DNA assembly for cloning.

  14. Metric-based method of software requirements correctness improvement

    Directory of Open Access Journals (Sweden)

    Yaremchuk Svitlana

    2017-01-01

    The work highlights the most important principles of software reliability management (SRM). The SRM concept constitutes a basis for developing a method of requirements correctness improvement. The method assumes that complicated requirements contain more actual and potential design faults/defects. The method applies a newer metric to evaluate requirements complexity and a double sorting technique to evaluate the priority and complexity of a particular requirement. The method improves requirements correctness by enabling identification of a larger number of defects with restricted resources. Practical application of the proposed method in the course of requirements review produced a tangible technical and economic effect.

  15. Local defect correction for boundary integral equation methods

    NARCIS (Netherlands)

    Kakuba, G.; Anthonissen, M.J.H.

    2013-01-01

    This paper presents a new approach to gridding for problems with localised regions of high activity. The technique of local defect correction has been studied for other methods such as finite difference methods and finite volume methods. In this paper we develop the technique for the boundary element method.

  16. A method of detector correction for cosmic ray muon radiography

    International Nuclear Information System (INIS)

    Liu Yuanyuan; Zhao Ziran; Chen Zhiqiang; Zhang Li; Wang Zhentian

    2008-01-01

    Cosmic ray muon radiography, which has good penetrability and sensitivity to high-Z materials, is an effective way to detect shielded nuclear materials. Data correction is one of the key points of the muon radiography technique. Because of the influence of environmental background and noise and the error of the detectors, the raw data cannot be used directly: reconstructing from the raw data without any correction would produce severe artifacts. Based on the characteristics of the muon radiography system, and aimed at the error of the detectors, this paper proposes a method of detector correction. Simulation experiments demonstrate that this method can effectively correct the error produced by the detectors, taking the technique of cosmic ray muon radiography a further step toward practical use. (authors)

  17. On Neglecting Chemical Exchange When Correcting in Vivo 31P MRS Data for Partial Saturation: Commentary on "Pitfalls in the Measurement of Metabolite Concentrations Using the One-Pulse Experiment in in Vivo NMR"

    Science.gov (United States)

    Ouwerkerk, Ronald; Bottomley, Paul A.

    2001-04-01

    This article replies to Spencer et al. (J. Magn. Reson. 149, 251-257, 2001) concerning the degree to which chemical exchange affects partial saturation corrections using saturation factors. Considering the important case of in vivo 31P NMR, we employ differential analysis to demonstrate a broad range of experimental conditions over which chemical exchange minimally affects saturation factors, and near-optimum signal-to-noise ratio is preserved. The analysis contradicts Spencer et al.'s broad claim that chemical exchange results in a strong dependence of saturation factors upon M0's and T1 and exchange parameters. For Spencer et al.'s example of a dynamic 31P NMR experiment in which phosphocreatine varies 20-fold, we show that our strategy of measuring saturation factors at the start and end of the study reduces errors in saturation corrections to 2% for the high-energy phosphates.
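    For readers following the arithmetic, the correction at issue can be summarised as follows (our notation, a standard formulation rather than a quotation from either article): with $S_{\mathrm{sat}}$ the signal acquired under partial saturation and $S_{\infty}$ the fully relaxed signal,

      \[
        \mathrm{SF} = \frac{S_{\mathrm{sat}}}{S_{\infty}}, \qquad
        \hat{M}_0 = \frac{S_{\mathrm{sat,study}}}{\mathrm{SF}},
      \]

    and measuring SF at the start and end of a dynamic study allows it to be interpolated in between, which is what keeps the correction error near the 2% quoted above.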

  18. A corrective method for the inherent flaw of the asynchronization direct counting circuit

    International Nuclear Information System (INIS)

    Wang Renfei; Liu Congzhan; Jin Yongjie; Zhang Zhi; Li Yanguo

    2003-01-01

    As an inherent flaw of the asynchronization direct counting circuit, crosstalk, which results from the randomness of the timing signal, always exists between two adjacent channels. In order to reduce the counting error caused by the crosstalk, the authors propose an effective method to correct the flaw after analysing the mechanism of the crosstalk.

  19. Implementation of the Centroid Method for the Correction of Turbulence

    Directory of Open Access Journals (Sweden)

    Enric Meinhardt-Llopis

    2014-07-01

    The centroid method for the correction of turbulence consists in computing the Karcher-Fréchet mean of the sequence of input images. The direction of deformation between a pair of images is determined by the optical flow. A distinguishing feature of the centroid method is that it can produce useful results from an arbitrarily small set of input images.
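    A simplified sketch of the centroid idea using OpenCV's Farneback optical flow (register every frame to the running mean, then average; the published algorithm computes a proper Karcher-Fréchet mean, so treat this as an approximation and all parameters as assumptions):

      import cv2
      import numpy as np

      def centroid_image(frames, n_iter=3):
          """frames: list of equally sized 8-bit grayscale images."""
          mean = np.mean(frames, axis=0)
          h, w = mean.shape
          gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                               np.arange(h, dtype=np.float32))
          for _ in range(n_iter):
              prev8 = np.clip(mean, 0, 255).astype(np.uint8)
              warped = []
              for f in frames:
                  flow = cv2.calcOpticalFlowFarneback(prev8, f, None,
                                                      0.5, 3, 15, 3, 5, 1.2, 0)
                  # sample each frame where the flow says the mean's pixel went
                  warped.append(cv2.remap(f.astype(np.float32),
                                          gx + flow[..., 0], gy + flow[..., 1],
                                          cv2.INTER_LINEAR))
              mean = np.mean(warped, axis=0)
          return mean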

  20. Methods of the Detection and Identification of Structural Defects in Saturated Metallic Composite Castings

    Directory of Open Access Journals (Sweden)

    Gawdzińska K.

    2017-09-01

    Diagnostics of composite castings, due to their complex structure, requires that their characteristics be tested by an appropriate descriptive method; any deviation from the specified characteristic is regarded as a material defect. Detecting defects in composite castings is sometimes not sufficient, and the defects also have to be identified. This study classifies defects found in the structures of saturated metallic composite castings and indicates the stages of the process at which such defects are likely to form. The author not only determines the causes of structural defects and describes methods of their detection and identification, but also proposes a schematic procedure to be followed during the detection and identification of structural defects in castings made from saturated-reinforcement metallic composites. The alloys were examined after the technological process using destructive (macroscopic tests, light and scanning electron microscopy) and non-destructive (ultrasonic and X-ray defectoscopy, tomography, gravimetric) methods. The research presented in this article is part of the author's work on casting quality.

  1. [Study on phase correction method of spatial heterodyne spectrometer].

    Science.gov (United States)

    Wang, Xin-Qiang; Ye, Song; Zhang, Li-Juan; Xiong, Wei

    2013-05-01

    Phase distortion exists in collected interferograms for a variety of measurement reasons when spatial heterodyne spectrometers are used in practice, so an improved phase correction method is presented. The phase curve of the interferogram is obtained through an inverse Fourier transform of the extracted single-sided transform spectrum; the phase distortions are then obtained by fitting the phase slope, yielding the phase correction functions, and a convolution is performed between the transform spectrum and the phase correction function to implement spectral phase correction. The method was applied to the phase correction of an actually measured monochromatic spectrum and a simulated water vapor spectrum. Experimental results show that low-frequency false signals in the monochromatic spectrum fringes are effectively eliminated, increasing the periodicity and symmetry of the interferogram; in addition, when a continuous spectrum with imposed phase error was corrected, the standard deviation between it and the original spectrum was reduced from 0.47 to 0.20, improving the accuracy of the spectrum.
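    A compact numpy rendering of the slope-fit step (the frequency-domain convolution is replaced here by the equivalent multiplication with a unit-magnitude phase term; names are ours):

      import numpy as np

      def phase_correct(interferogram):
          spec = np.fft.fft(interferogram)
          side = spec[:spec.size // 2]                   # single-sided spectrum
          idx = np.arange(side.size)
          phase = np.unwrap(np.angle(side))
          slope, intercept = np.polyfit(idx, phase, 1)   # linear distortion
          return side * np.exp(-1j * (slope * idx + intercept))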

  2. An attenuation correction method for PET/CT images

    International Nuclear Information System (INIS)

    Ue, Hidenori; Yamazaki, Tomohiro; Haneishi, Hideaki

    2006-01-01

    In PET/CT systems, accurate attenuation correction can be achieved by creating an attenuation map from an X-ray CT image. On the other hand, respiratory-gated PET acquisition is an effective method for avoiding motion blurring of the thoracic and abdominal organs caused by respiratory motion. In PET/CT systems employing respiratory-gated PET, using an X-ray CT image acquired during breath-holding for attenuation correction may have a large effect on the voxel values, especially in regions with substantial respiratory motion. In this report, we propose an attenuation correction method in which, as the first step, a set of respiratory-gated PET images is reconstructed without attenuation correction, as the second step, the motion of each phase PET image from the PET image in the same phase as the CT acquisition timing is estimated by the previously proposed method, as the third step, the CT image corresponding to each respiratory phase is generated from the original CT image by deformation according to the motion vector maps, and as the final step, attenuation correction using these CT images and reconstruction are performed. The effectiveness of the proposed method was evaluated using 4D-NCAT phantoms, and good stability of the voxel values near the diaphragm was observed. (author)

  3. An Automated Baseline Correction Method Based on Iterative Morphological Operations.

    Science.gov (United States)

    Chen, Yunliang; Dai, Liankui

    2018-05-01

    Raman spectra usually suffer from baseline drift caused by fluorescence or other reasons. Therefore, baseline correction is a necessary and crucial step that must be performed before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method can adaptively determine the structuring element first and then gradually remove the spectral peaks during iteration to get an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible for handling different kinds of baselines in various practical situations. The comparison of the proposed method with some state-of-the-art baseline correction methods demonstrates its advantages over the existing methods in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method is hopefully to be used for the baseline correction of other analytical instrumental signals, such as IR spectra and chromatograms.
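    A small scipy sketch in the same spirit (fixed structuring element plus a smoothing step of our own choosing; the paper selects the element adaptively):

      import numpy as np
      from scipy.ndimage import grey_opening, uniform_filter1d

      def morphological_baseline(y, size=51, n_iter=20):
          """Iteratively shave peaks: open, smooth, and never let the
          estimate rise above the data."""
          b = np.asarray(y, float).copy()
          for _ in range(n_iter):
              opened = grey_opening(b, size=size)
              b = np.minimum(b, uniform_filter1d(opened, size))
          return b

      # corrected_spectrum = y - morphological_baseline(y)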

  4. The various correction methods to the high precision aeromagnetic data

    International Nuclear Information System (INIS)

    Xu Guocang; Zhu Lin; Ning Yuanli; Meng Xiangbao; Zhang Hongjian

    2014-01-01

    In airborne geophysical surveys, an outstanding result depends first on the measurement precision of the instrument, the choice of measurement conditions and the reliability of data collection, and then on correct processing of the measurement data and rational interpretation. Clearly, geophysical data processing is an important task for the comprehensive interpretation of the measurement results, and whether the processing method is correct directly affects the quality of the final results. In the course of actual production and scientific research in recent years, we have developed a set of personal computer software for processing aeromagnetic and radiometric survey data and have successfully applied it to production. The processing methods and flowcharts for high precision aeromagnetic data are briefly introduced in this paper, while the mathematical techniques of the various correction programs, for IGRF, flying height and magnetic diurnal variation, are discussed in detail and their effectiveness is illustrated with an example. (authors)

  5. Simple-MSSM: a simple and efficient method for simultaneous multi-site saturation mutagenesis.

    Science.gov (United States)

    Cheng, Feng; Xu, Jian-Miao; Xiang, Chao; Liu, Zhi-Qiang; Zhao, Li-Qing; Zheng, Yu-Guo

    2017-04-01

    To develop a practically simple and robust multi-site saturation mutagenesis (MSSM) method that enables simultaneous recombination of amino acid positions for focused mutant library generation. A general restriction enzyme-free and ligase-free MSSM method (Simple-MSSM) based on prolonged overlap extension PCR (POE-PCR) and Simple Cloning techniques. As a proof of principle of Simple-MSSM, the gene of eGFP (enhanced green fluorescent protein) was used as a template gene for simultaneous mutagenesis of five codons. Forty-eight randomly selected clones were sequenced. Sequencing revealed that all 48 clones showed at least one mutant codon (mutation efficiency = 100%), and 46 out of the 48 clones had mutations at all five codons. The diversities obtained at these five codons are 27, 24, 26, 26 and 22, respectively, which correspond to 84, 75, 81, 81 and 69% of the theoretical diversity offered by NNK-degeneration (32 codons; NNK, K = T or G). The enzyme-free Simple-MSSM method can simultaneously and efficiently saturate five codons within one day, and therefore avoids missing interactions between residues in interacting amino acid networks.

  6. A vibration correction method for free-fall absolute gravimeters

    Science.gov (United States)

    Qian, J.; Wang, G.; Wu, K.; Wang, L. J.

    2018-02-01

    An accurate determination of gravitational acceleration, usually approximated as 9.8 m s-2, has been playing an important role in the areas of metrology, geophysics and geodesy. Absolute gravimetry has been experiencing rapid developments in recent years. Most absolute gravimeters today employ a free-fall method to measure gravitational acceleration. Noise from ground vibration has become one of the most serious factors limiting measurement precision. Compared to vibration isolators, the vibration correction method is a simple and feasible way to reduce the influence of ground vibrations. A modified vibration correction method is proposed and demonstrated. A two-dimensional golden section search algorithm is used to search for the best parameters of the hypothetical transfer function. Experiments using a T-1 absolute gravimeter are performed. It is verified that, for an identical group of drop data, the modified method proposed in this paper can achieve better correction effects with much less computation than previous methods. Compared to vibration isolators, the correction method applies to more hostile environments and even dynamic platforms, and is expected to be used in a wider range of applications.
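    One hedged reading of the "two-dimensional golden section search" is a pair of alternating 1-D golden-section line searches over the two parameters of the hypothetical transfer function (say, a gain and a delay), minimising a residual such as the scatter of the corrected drop results; the sketch below is illustrative, not the authors' code:

      import math

      PHI = (math.sqrt(5) - 1) / 2        # golden ratio conjugate, ~0.618

      def golden_1d(f, a, b, tol=1e-4):
          """Standard 1-D golden-section minimisation of f on [a, b]."""
          c, d = b - PHI * (b - a), a + PHI * (b - a)
          while abs(b - a) > tol:
              if f(c) < f(d):
                  b, d = d, c
                  c = b - PHI * (b - a)
              else:
                  a, c = c, d
                  d = a + PHI * (b - a)
          return (a + b) / 2

      def golden_2d(f, x_bounds, y_bounds, sweeps=20):
          """Alternate 1-D searches over the two parameters."""
          x = sum(x_bounds) / 2
          y = sum(y_bounds) / 2
          for _ in range(sweeps):
              x = golden_1d(lambda u: f(u, y), *x_bounds)
              y = golden_1d(lambda v: f(x, v), *y_bounds)
          return x, y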

  7. A method for eliminating sulfur compounds from fluid, saturated, aliphatic hydrocarbons

    Energy Technology Data Exchange (ETDEWEB)

    Fakhriev, A.M.; Galiautdinov, N.G.; Kashevarov, L.A.; Mazgarov, A.M.

    1982-01-01

    The method for eliminating sulfur compounds from fluid, saturated, aliphatic hydrocarbons, which involves extracting hydrocarbons using a dimethylsulfoxide extractant, is improved by using a dimethylsulfoxide blend and 10-60 percent (by volume) diethylenetriamine or polyethylenepolyamine which contains diethylenetriamine, triethylenetetramine and tetraethylenepentamine, in order to eliminate the above compounds. Polyethylenepolyamine is produced as a by-product during the production of ethylenediamine. Elimination is performed at 0-50 degrees and 1-60 atmospheres of pressure. Here, the extractant may contain up to 10 percent water. The use of the proposed method, rather than the existing method, will make it possible to increase hydrocarbon elimination from mercaptans by 40 percent and from H/sub 2/S by 10 percent when the same amount is eliminated from dialkylsulfides.

  8. An efficient optimization method to improve the measuring accuracy of oxygen saturation by using triangular wave optical signal

    Science.gov (United States)

    Li, Gang; Yu, Yue; Zhang, Cui; Lin, Ling

    2017-09-01

    Oxygen saturation is one of the important parameters for evaluating human health. This paper presents an efficient optimization method that improves the accuracy of oxygen saturation measurement by employing an optical frequency-division triangular wave signal as the excitation signal to obtain the dynamic spectrum and calculate oxygen saturation. Compared with the traditional method, whose measured RMSE (root mean square error) of SpO2 is 0.1705, the proposed method significantly reduced the measured RMSE to 0.0965; the accuracy of oxygen saturation measurement is thus notably improved. The method can simplify the circuit and reduce the demands on components. Furthermore, it offers a useful reference for improving the signal-to-noise ratio of other physiological signals.
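    For context, the conventional ratio-of-ratios computation that such optimization methods build on looks like this (textbook formula with a common approximate calibration, not the paper's triangular-wave scheme):

      import numpy as np

      def spo2_ratio_of_ratios(red, ir):
          """red, ir: photoplethysmography waveforms from the two LEDs."""
          def ac_dc(x):
              x = np.asarray(x, float)
              return x.max() - x.min(), x.mean()
          ac_r, dc_r = ac_dc(red)
          ac_ir, dc_ir = ac_dc(ir)
          R = (ac_r / dc_r) / (ac_ir / dc_ir)
          return 110.0 - 25.0 * R      # approximate empirical calibration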

  9. A Horizontal Tilt Correction Method for Ship License Numbers Recognition

    Science.gov (United States)

    Liu, Baolong; Zhang, Sanyuan; Hong, Zhenjie; Ye, Xiuzi

    2018-02-01

    An automatic ship license numbers (SLNs) recognition system plays a significant role in intelligent waterway transportation systems, since it can be used to identify ships by recognizing the characters in SLNs. Tilt occurs frequently in many SLNs because the monitoring cameras and the ships usually meet at large vertical or horizontal angles, which significantly decreases the accuracy and robustness of a SLNs recognition system. In this paper, we present a horizontal tilt correction method for SLNs. For an input tilted SLN image, the proposed method accomplishes the correction task through three main steps. First, an MSER-based characters' center-points computation algorithm is designed to compute the accurate center-points of the characters contained in the input SLN image. Second, an L1-L2 distance-based straight line is fitted to the computed center-points using an M-estimator algorithm; the tilt angle is estimated at this stage. Finally, based on the computed tilt angle, an affine rotation is applied to rotate and correct the input SLN horizontally. Tested on 200 tilted SLN images, the proposed method proved effective, with a tilt correction rate of 80.5%.
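    An OpenCV sketch of the three steps (thresholds, the Huber M-estimator choice and the rotation convention are our assumptions):

      import cv2
      import numpy as np

      def correct_horizontal_tilt(img):
          gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
          regions, _ = cv2.MSER_create().detectRegions(gray)
          centers = np.array([r.mean(axis=0) for r in regions], np.float32)
          # robust straight-line fit through the character centre points
          vx, vy, _, _ = cv2.fitLine(centers, cv2.DIST_HUBER, 0, 0.01, 0.01)
          angle = np.degrees(np.arctan2(float(vy), float(vx)))
          h, w = gray.shape
          M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
          return cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_LINEAR)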

  10. Correction of measured multiplicity distributions by the simulated annealing method

    International Nuclear Information System (INIS)

    Hafidouni, M.

    1993-01-01

    Simulated annealing is a method used to solve combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs
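    As context for how such an unfolding can be set up, here is a generic simulated-annealing sketch (toy cost function and move set of our own choosing, assuming a known detector response matrix; the paper's specifics are not reproduced):

      import numpy as np

      def anneal_unfold(observed, response, t0=1.0, cooling=0.995,
                        n_steps=20000, seed=0):
          """Search for a non-negative true distribution p such that
          response @ p reproduces the observed distribution."""
          rng = np.random.default_rng(seed)
          p = np.asarray(observed, float).copy()
          cost = lambda q: np.sum((response @ q - observed) ** 2)
          c, t = cost(p), t0
          for _ in range(n_steps):
              q = np.clip(p + rng.normal(0.0, 0.01, p.size), 0.0, None)
              cq = cost(q)
              if cq < c or rng.random() < np.exp((c - cq) / t):
                  p, c = q, cq          # Metropolis acceptance
              t *= cooling              # geometric cooling schedule
          return p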

  11. A Hold-out method to correct PCA variance inflation

    DEFF Research Database (Denmark)

    Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Hansen, Lars Kai

    2012-01-01

    In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article a correction method based on a Leave-One-Out (LOO) procedure...

  12. Lipid Based Formulations of Biopharmaceutics Classification System (BCS Class II Drugs: Strategy, Formulations, Methods and Saturation

    Directory of Open Access Journals (Sweden)

    Šoltýsová I.

    2016-12-01

    Active ingredients in pharmaceuticals differ in their physico-chemical properties, and their bioavailability therefore varies. The most frequently used and most convenient route of administration of medicines is oral; however, many drugs are poorly soluble in water and are thus not sufficiently effective or suitable for such administration. For this reason a system of lipid based formulations (LBF) was developed. A series of formulations was prepared and tested in water and biorelevant media. On the basis of selection criteria, formulations were selected with the best emulsification potential, good dispersion in the environment and physical stability. Samples of structurally different drugs included in Class II of the Biopharmaceutics classification system (BCS) were obtained, namely Griseofulvin, Glibenclamide, Carbamazepine, Haloperidol, Itraconazol, Triclosan, Praziquantel and Rifaximin, for testing of maximal saturation in formulations prepared from commercially available excipients. Methods were developed for the preparation of formulations, observation and description of emulsification, determination of the maximum solubility of drug samples in the respective formulation, and subsequent analysis. Saturation of the formulations with the drugs showed that the formulations of 80 % XA and 20 % Xh, and of 35 % XF and 65 % Xh, were best able to dissolve the drugs, which supports the hypothesis that it is desirable to identify a limited series of formulations that could be generally applied for this purpose.

  13. Method for decoupling error correction from privacy amplification

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Hoi-Kwong [Department of Electrical and Computer Engineering and Department of Physics, University of Toronto, 10 King's College Road, Toronto, Ontario, Canada, M5S 3G4 (Canada)

    2003-04-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof.

  14. Method for decoupling error correction from privacy amplification

    International Nuclear Information System (INIS)

    Lo, Hoi-Kwong

    2003-01-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof

  15. Saturated salt solution method: a useful cadaver embalming for surgical skills training.

    Science.gov (United States)

    Hayashi, Shogo; Homma, Hiroshi; Naito, Munekazu; Oda, Jun; Nishiyama, Takahisa; Kawamoto, Atsuo; Kawata, Shinichi; Sato, Norio; Fukuhara, Tomomi; Taguchi, Hirokazu; Mashiko, Kazuki; Azuhata, Takeo; Ito, Masayuki; Kawai, Kentaro; Suzuki, Tomoya; Nishizawa, Yuji; Araki, Jun; Matsuno, Naoto; Shirai, Takayuki; Qu, Ning; Hatayama, Naoyuki; Hirai, Shuichi; Fukui, Hidekimi; Ohseto, Kiyoshige; Yukioka, Tetsuo; Itoh, Masahiro

    2014-12-01

    This article evaluates the suitability of cadavers embalmed by the saturated salt solution (SSS) method for surgical skills training (SST). SST courses using cadavers have been performed to advance a surgeon's techniques without any risk to patients. One important factor for improving SST is the suitability of specimens, which depends on the embalming method. In addition, the infectious risk and cost involved in using cadavers are problems that need to be solved. Six cadavers were embalmed by 3 methods: formalin solution, Thiel solution (TS), and SSS methods. Bacterial and fungal culture tests and measurement of ranges of motion were conducted for each cadaver. Fourteen surgeons evaluated the 3 embalming methods and 9 SST instructors (7 trauma surgeons and 2 orthopedists) operated the cadavers by 21 procedures. In addition, ultrasonography, central venous catheterization, and incision with cauterization followed by autosuture stapling were performed in some cadavers. The SSS method had a sufficient antibiotic effect and produced cadavers with flexible joints and a high tissue quality suitable for SST. The surgeons evaluated the cadavers embalmed by the SSS method to be highly equal to those embalmed by the TS method. Ultrasound images were clear in the cadavers embalmed by both the methods. Central venous catheterization could be performed in a cadaver embalmed by the SSS method and then be affirmed by x-ray. Lungs and intestines could be incised with cauterization and autosuture stapling in the cadavers embalmed by TS and SSS methods. Cadavers embalmed by the SSS method are sufficiently useful for SST. This method is simple, carries a low infectious risk, and is relatively of low cost, enabling a wider use of cadavers for SST.

  16. An efficient dose-compensation method for proximity effect correction

    International Nuclear Information System (INIS)

    Wang Ying; Han Weihua; Yang Xiang; Zhang Yang; Yang Fuhua; Zhang Renping

    2010-01-01

    A novel simple dose-compensation method is developed for proximity effect correction in electron-beam lithography. The sizes of exposed patterns depend on dose factors while other exposure parameters (including accelerate voltage, resist thickness, exposing step size, substrate material, and so on) remain constant. This method is based on two reasonable assumptions in the evaluation of the compensated dose factor: one is that the relation between dose factors and circle-diameters is linear in the range under consideration; the other is that the compensated dose factor is only affected by the nearest neighbors for simplicity. Four-layer-hexagon photonic crystal structures were fabricated as test patterns to demonstrate this method. Compared to the uncorrected structures, the homogeneity of the corrected hole-size in photonic crystal structures was clearly improved. (semiconductor technology)
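    A toy numerical rendering of the abstract's two assumptions (the linear dose-diameter relation and nearest-neighbour-only coupling; all calibration constants are hypothetical):

      def compensated_dose_factor(d_target, d_measured, dose_used,
                                  slope, n_neighbors, coupling):
          """slope: dose-factor change per unit diameter (from test shots);
          coupling: extra dose effectively delivered by each nearest
          neighbour. Returns the compensated dose factor for one hole."""
          base = dose_used + slope * (d_target - d_measured)  # linear size fix
          return base - n_neighbors * coupling                # neighbour term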

  17. Suggested Methods for Preventing Core Saturation Instability in HVDC Transmission Systems

    Energy Technology Data Exchange (ETDEWEB)

    Norheim, Ian

    2002-07-01

    In this thesis the HVDC-related phenomenon of core saturation instability, and methods to prevent it, are studied. There is reason to believe that this phenomenon caused the disconnection of the Skagerrak HVDC link on 10 August 1993. Internationally, core saturation instability has been reported at several HVDC schemes, and thorough studies of this complex phenomenon have been performed. This thesis gives a detailed description of the phenomenon and suggests some interesting methods to prevent its development. Core saturation instability and its consequences can be described in a simplified way as follows. Assume that a fundamental harmonic component is present in the DC side current. Due to the coupling between the AC side and the DC side of the HVDC converter, a subsequent second harmonic positive-sequence current and DC currents will be generated on the AC side. The DC currents will cause saturation in the converter transformers, which will cause the magnetizing current to also have a second harmonic positive-sequence component. If a high second harmonic impedance is seen from the commutation bus, a high positive-sequence second harmonic component will be present in the commutation voltages. This will result in a relatively high fundamental frequency component in the DC side voltage. If the fundamental frequency impedance on the DC side is relatively low, the fundamental component in the DC side current may become larger than it originally was. In addition, the HVDC control system may contribute to the fundamental frequency component in the DC side voltage, and in this way make the system even more sensitive to core saturation instability. The large magnetizing currents that eventually flow on the AC side cause large zero-sequence currents in the neutral conductors of the AC transmission lines connected to the HVDC link. This may result in disconnection of the lines. Alternatively, the harmonics in the large magnetizing currents may cause

  18. A rigid motion correction method for helical computed tomography (CT)

    International Nuclear Information System (INIS)

    Kim, J-H; Kyme, A; Fulton, R; Nuyts, J; Kuncic, Z

    2015-01-01

    We propose a method to compensate for six degree-of-freedom rigid motion in helical CT of the head. The method is demonstrated in simulations and in helical scans performed on a 16-slice CT scanner. Scans of a Hoffman brain phantom were acquired while an optical motion tracking system recorded the motion of the bed and the phantom. Motion correction was performed by restoring projection consistency using data from the motion tracking system, and reconstructing with an iterative fully 3D algorithm. Motion correction accuracy was evaluated by comparing reconstructed images with a stationary reference scan. We also investigated the effects on accuracy of tracker sampling rate, measurement jitter, interpolation of tracker measurements, and the synchronization of motion data and CT projections. After optimization of these aspects, motion corrected images corresponded remarkably closely to images of the stationary phantom with correlation and similarity coefficients both above 0.9. We performed a simulation study using volunteer head motion and found similarly that our method is capable of compensating effectively for realistic human head movements. To the best of our knowledge, this is the first practical demonstration of generalized rigid motion correction in helical CT. Its clinical value, which we have yet to explore, may be significant. For example it could reduce the necessity for repeat scans and resource-intensive anesthetic and sedation procedures in patient groups prone to motion, such as young children. It is not only applicable to dedicated CT imaging, but also to hybrid PET/CT and SPECT/CT, where it could also ensure an accurate CT image for lesion localization and attenuation correction of the functional image data. (paper)

  19. Quantitative chemical exchange saturation transfer (qCEST) MRI--RF spillover effect-corrected omega plot for simultaneous determination of labile proton fraction ratio and exchange rate.

    Science.gov (United States)

    Sun, Phillip Zhe; Wang, Yu; Dai, ZhuoZhi; Xiao, Gang; Wu, Renhua

    2014-01-01

    Chemical exchange saturation transfer (CEST) MRI is sensitive to dilute proteins and peptides as well as to microenvironmental properties. However, the complexity of the CEST MRI effect, which varies with the labile proton content, the exchange rate and the experimental conditions, underscores the need for quantitative CEST (qCEST) analysis. Towards this goal, it has been shown that the omega plot is capable of quantifying paramagnetic CEST MRI. However, the use of the omega plot is somewhat limited for diamagnetic CEST (DIACEST) MRI, which is more susceptible to direct radio frequency (RF) saturation (spillover) owing to the relatively small chemical shift. Recently, it has been found that, for dilute DIACEST agents that undergo slow to intermediate chemical exchange, the spillover effect varies little with the labile proton ratio and exchange rate. We therefore postulated that the omega plot analysis can be improved if the RF spillover effect is estimated and taken into account. Specifically, simulation showed that both the labile proton ratio and the exchange rate derived using the spillover effect-corrected omega plot were in good agreement with simulated values. In addition, the modified omega plot was confirmed experimentally: the derived labile proton ratio increased linearly with creatine concentration, validating the spillover effect-corrected omega plot for quantitative analysis of DIACEST MRI. Copyright © 2014 John Wiley & Sons, Ltd.

  20. Spectral-ratio radon background correction method in airborne γ-ray spectrometry based on compton scattering deduction

    International Nuclear Information System (INIS)

    Gu Yi; Xiong Shengqing; Zhou Jianxin; Fan Zhengguo; Ge Liangquan

    2014-01-01

    γ-rays released by radon daughters have a severe impact on airborne γ-ray spectrometry. The spectral-ratio method is one of the best mathematical methods for radon background removal in airborne γ-ray spectrometry. In this paper, an advanced spectral-ratio method is proposed that removes the Compton scattering contribution by means of the fast Fourier transform rather than stripping ratios. The relationship between survey height and the correction coefficient of the advanced spectral-ratio radon background correction method is studied, the corresponding mathematical model is established, and a ground saturation model calibration technique for the correction coefficient is proposed. The advanced spectral-ratio method improves applicability and correction efficiency and reduces application cost; furthermore, it avoids the loss of physical meaning and the possible errors caused by matrix computation and by mathematical fitting based on spectrum shape, which affect the traditional correction coefficient. (authors)

  1. Occurrence of two-photon absorption saturation in Ag nanocolloids, prepared by chemical reduction method

    Energy Technology Data Exchange (ETDEWEB)

    Rahulan, K. Mani, E-mail: krahul.au@gmail.com [Department of Physics, Anna University, Chennai (India); Ganesan, S. [Department of Physics, Anna University, Chennai (India); Aruna, P., E-mail: aruna@annauniv.edu [Department of Physics, Anna University, Chennai (India)

    2012-09-01

    Highlights: • Ag nanocolloids were synthesized via a chemical reduction method. • The molecules of PVP play an important role in the growth and agglomeration of silver nanocolloids. • Saturation behaviour followed by two-photon absorption was responsible for the good optical limiting characteristics of these nanocolloids. • The nonlinear optical parameters calculated from the data showed that these materials could be used as efficient optical limiters. - Abstract: Silver nanocolloids stabilized with polyvinyl pyrrolidone (PVP) have been prepared from AgNO{sub 3} by a chemical reduction method, involving the intermediate preparation of Ag{sub 2}O colloidal dispersions in the presence of sodium dodecyl sulfate as a surfactant and formaldehyde as the reducing agent. The molecules of PVP play an important role in the growth and agglomeration of the silver nanocolloids. The formation of the Ag nanocolloids was studied from the UV-vis absorption characteristics. An energy dispersive X-ray (EDX) spectrum and the X-ray diffraction peaks of the nanoparticles showed the highly crystalline nature of the silver structure. The particle size was found to be 40 nm as analyzed by field emission scanning electron microscopy (FESEM). The nonlinear optical and optical limiting properties of these nanoparticle dispersions were studied using the Z-scan technique at 532 nm. Experimental results show that the Ag nanocolloids possess a strong optical limiting effect, originating from absorption saturation followed by a two-photon mechanism. The data show that Ag nanocolloids have great potential for nonlinear optical devices.

  2. Correction of Misclassifications Using a Proximity-Based Estimation Method

    Directory of Open Access Journals (Sweden)

    Shmulevich Ilya

    2004-01-01

    An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial) information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies and the degree of improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.

  3. GPU accelerated manifold correction method for spinning compact binaries

    Science.gov (United States)

    Ran, Chong-xi; Liu, Song; Zhong, Shuang-ying

    2018-04-01

    The graphics processing unit (GPU) acceleration of the manifold correction algorithm, based on compute unified device architecture (CUDA) technology, is designed to simulate the dynamic evolution of the post-Newtonian (PN) Hamiltonian formulation of spinning compact binaries. The feasibility and efficiency of parallel computation on the GPU have been confirmed by various numerical experiments. The numerical comparisons show that the accuracy of the manifold correction method executed on the GPU agrees well with that of the codes executed on the central processing unit (CPU) alone. The acceleration achieved when the codes are implemented on the GPU can be increased enormously through the use of shared memory and register optimization techniques without additional hardware costs; the speedup is nearly 13 times that of the codes executed on the CPU for a phase space scan (including 314 × 314 orbits). In addition, the GPU-accelerated manifold correction method is used to numerically study how the dynamics are affected by the spin-induced quadrupole-monopole interaction for black hole binary systems.

  4. Equation-Method for correcting clipping errors in OFDM signals.

    Science.gov (United States)

    Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry

    2016-01-01

    Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations resulting in high peak-to-average power ratios, which can make an OFDM transmitter susceptible to non-linear distortion produced by its high power amplifiers (HPA). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA, but this causes in-band distortion and introduces bit-errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the Fast Fourier Transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can correct all clipping errors over a wide range of clipping thresholds. We show that numerical instability can be avoided and that new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.
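    The paper's exact equation set is not given in the abstract; the following self-contained numpy toy illustrates the general idea of solving linear FFT equations for the clipped peak amplitudes, under our added assumptions that a block of null subcarriers is reserved and that clipped samples are detectable because they sit exactly at the threshold:

      import numpy as np

      rng = np.random.default_rng(1)
      N = 64                           # FFT size (assumed)
      used = np.arange(0, 48)          # data subcarriers (assumed layout)
      null = np.arange(48, 64)         # reserved null subcarriers

      X = np.zeros(N, complex)
      X[used] = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], used.size)
      x = np.fft.ifft(X) * np.sqrt(N)  # unitary scaling

      A = 1.6 * np.sqrt(np.mean(np.abs(x) ** 2))   # clipping threshold
      mag = np.abs(x)
      clipped = mag > A
      xc = np.where(clipped, A * x / np.where(mag == 0, 1, mag), x)

      # Receiver: energy on null subcarriers must be clipping distortion,
      # which is non-zero only at the (detectable) clipped sample positions.
      F = np.fft.fft(np.eye(N)) / np.sqrt(N)       # unitary DFT matrix
      Y = np.fft.fft(xc) / np.sqrt(N)
      M = F[np.ix_(null, np.flatnonzero(clipped))]
      d, *_ = np.linalg.lstsq(M, Y[null], rcond=None)

      X_hat = Y - F[:, clipped] @ d                # undo the distortion
      print("max symbol error:", np.max(np.abs(X_hat[used] - X[used])))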

  5. Cryochromatography: a method for the separation of phosphoglycerides according to the number and length of saturated fatty acid components

    International Nuclear Information System (INIS)

    Henderson, R.F.; Clayton, M.H.

    1974-01-01

    A thin layer chromatographic method utilizing ultracold temperatures has been developed to separate phosphoglycerides containing only long-chain saturated fatty acids from phosphoglycerides containing fatty acids with any degree of unsaturation. The method is direct, nondiluting, and nondestructive. Since the surfactant lipids found in lung contain only long-chain, saturated fatty acids, the method should be particularly useful to those in lung lipid research. Studies on the uptake of labeled precursors into the lung surfactant lipids as well as work on quantitation of surfactant lecithins in the lung can be facilitated by this method. (U.S.)

  6. Research on evaluation method for water saturation of tight sandstone in Suxi region

    Science.gov (United States)

    Lv, Hong; Lai, Fuqiang; Chen, Liang; Li, Chao; Li, Jie; Yi, Heping

    2017-05-01

    The evaluation of irreducible water saturation is important for the qualitative and quantitative prediction of residual oil distribution; however, the accuracy of both the experimental measurement of irreducible water saturation and its logging evaluation remains to be improved. In this paper, a multi-functional core flooding experiment and a nuclear magnetic resonance centrifugation experiment were first carried out in the western Sulige gas field. The influence of particle size, porosity and permeability on water saturation was then discussed. Finally, an evaluation model for irreducible water saturation was established and the evaluation carried out. The results of the two experiments are both reliable. Irreducible water saturation is inversely proportional to the median particle size, porosity and permeability, and is most affected by the median particle size. The water saturation of the dry layer is higher than that of the general reservoir: the worse the reservoir properties, the greater the water saturation. The test results show that the irreducible water saturation model can be used to evaluate the water floor.

  7. An agent-based method for simulating porous fluid-saturated structures with indistinguishable components

    Science.gov (United States)

    Kashani, Jamal; Pettet, Graeme John; Gu, YuanTong; Zhang, Lihai; Oloyede, Adekunle

    2017-10-01

    Single-phase porous materials contain multiple components that intermingle up to the ultramicroscopic level. Although the structures of porous materials have been simulated with agent-based methods, the available methods continue to produce patterns of distinguishable solid and fluid agents, which do not represent materials with indistinguishable phases. This paper introduces a new agent (the hybrid agent) and a new category of rules (the intra-agent rule) that can be used to create emergent structures that more accurately represent single-phase structures and materials. The novel hybrid agent carries the characteristics of the system's elements and is capable of changing within itself, while also responding to its neighbours as they change. As an example, the hybrid agent under a one-dimensional cellular automata formalism in a two-dimensional domain is used to generate patterns that demonstrate striking morphological and characteristic similarities with porous saturated single-phase structures, where each agent of the "structure" carries a semi-permeability property and consists of both fluid and solid in space at all times. We conclude that the ability of the hybrid agent to change locally provides an enhanced protocol to simulate complex porous structures such as biological tissues, which could facilitate models for agent-based techniques and numerical methods.

  8. Method for measuring multiple scattering corrections between liquid scintillators

    Energy Technology Data Exchange (ETDEWEB)

    Verbeke, J.M., E-mail: verbeke2@llnl.gov; Glenn, A.M., E-mail: glenn22@llnl.gov; Keefer, G.J., E-mail: keefer1@llnl.gov; Wurtz, R.E., E-mail: wurtz1@llnl.gov

    2016-07-21

    A time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.

  9. A Method To Modify/Correct The Performance Of Amplifiers

    Directory of Open Access Journals (Sweden)

    Rohith Krishnan R

    2015-01-01

    The actual response of an amplifier may vary with the replacement of aged or damaged components, and this method compensates for that problem. Here we use the op-amp fixator as the design tool. The tool helps us to isolate the selected circuit component from the rest of the circuit, adjust its operating point to correct performance deviations, and modify the circuit without changing its other parts. A method to modify/correct the performance of amplifiers by properly redesigning the circuit is presented in this paper.

  10. New method in obtaining correction factor of power confirming

    International Nuclear Information System (INIS)

    Deng Yongjun; Li Rundong; Liu Yongkang; Zhou Wei

    2010-01-01

    Westcott theory is the most widely used method in reactor power calibration and is particularly suited to research reactors. But this method is cumbersome, because it needs many correction parameters that rely on empirical formulas specific to the reactor type. Here, the incidence coefficient between foil activity and reactor power was obtained by Monte Carlo calculation, carried out with a precise description of the reactor core and the foil arrangement positions in the MCNP input card. The reactor power was thus determined from the core neutron fluence profile and the activity of a foil placed in the position used for normalization. This new method is simpler, more flexible and more accurate than Westcott theory. In this paper, the theoretical results for SPRR-300 obtained by the new method were compared with the experimental results, which verified the feasibility of the new method. (authors)

  11. A Method for Correcting IMRT Optimizer Heterogeneity Dose Calculations

    International Nuclear Information System (INIS)

    Zacarias, Albert S.; Brown, Mellonie F.; Mills, Michael D.

    2010-01-01

    Radiation therapy treatment planning for volumes close to the patient's surface, in lung tissue, and in the head and neck region can be challenging for the planning system optimizer because of the complexity of the treatment and protected volumes, as well as strong heterogeneity corrections. Because it is often the goal of the planner to produce an isodose plan with uniform dose throughout the planning target volume (PTV), there is a need for improved planning optimization procedures for PTVs located in these anatomical regions. To illustrate such an improved procedure, we present a treatment planning case of a patient with a lung lesion located in the posterior right lung. The intensity-modulated radiation therapy (IMRT) plan generated using standard optimization procedures produced substantial dose nonuniformity across the tumor, caused by the effect of the lung tissue surrounding the tumor. We demonstrate a novel iterative method of dose correction performed on the initial IMRT plan to produce a more uniform dose distribution within the PTV. This optimization method corrected for the dose missing on the periphery of the PTV and reduced the maximum dose in the PTV from 120% to 106% on the representative IMRT plan.

  12. Saturation recovery EPR spin-labeling method for quantification of lipids in biological membrane domains.

    Science.gov (United States)

    Mainali, Laxman; Camenisch, Theodore G; Hyde, James S; Subczynski, Witold K

    2017-12-01

    The presence of integral membrane proteins induces the formation of distinct domains in the lipid bilayer portion of biological membranes. Qualitative application of both continuous wave (CW) and saturation recovery (SR) electron paramagnetic resonance (EPR) spin-labeling methods allowed discrimination of the bulk, boundary, and trapped lipid domains. A recently developed method, which is based on the CW EPR spectra of phospholipid (PL) and cholesterol (Chol) analog spin labels, allows evaluation of the relative amount of PLs (% of total PLs) in the boundary plus trapped lipid domain and the relative amount of Chol (% of total Chol) in the trapped lipid domain [M. Raguz, L. Mainali, W. J. O'Brien, and W. K. Subczynski (2015), Exp. Eye Res., 140:179-186]. Here, a new method is presented that, based on SR EPR spin-labeling, allows quantitative evaluation of the relative amounts of PLs and Chol in the trapped lipid domain of intact membranes. This new method complements the existing one, allowing acquisition of more detailed information about the distribution of lipids between domains in intact membranes. The methodological transition of the SR EPR spin-labeling approach from qualitative to quantitative is demonstrated. The abilities of this method are illustrated for intact cortical and nuclear fiber cell plasma membranes from porcine eye lenses. Statistical analysis (Student's t-test) of the data allowed determination of the separations of mean values above which differences can be treated as statistically significant (P ≤ 0.05) and can be attributed to sources other than preparation/technique.

  13. Thermoluminescence dating of chinese porcelain using a regression method of saturating exponential in pre-dose technique

    International Nuclear Information System (INIS)

    Wang Weida; Xia Junding; Zhou Zhixin; Leung, P.L.

    2001-01-01

    Thermoluminescence (TL) dating using a regression method of saturating exponential in the pre-dose technique is described. 23 porcelain samples from past dynasties of China were dated by this method. The results show that the TL ages are in reasonable agreement with archaeological dates, within a standard deviation of 27%. Such an error is acceptable in porcelain dating.
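
    The record does not quote the paper's exact parameterization, so the snippet below fits a generic saturating exponential of the kind used in pre-dose TL work; the functional form, starting values and data points are illustrative assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def saturating_exp(dose, s_max, d0, dose_c):
            # assumed form: signal saturating exponentially with dose
            return s_max * (1.0 - np.exp(-(dose + d0) / dose_c))

        dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])    # Gy, synthetic
        signal = np.array([0.8, 1.9, 2.8, 4.3, 5.9, 7.0])  # TL counts, synthetic

        (s_max, d0, dose_c), _ = curve_fit(saturating_exp, dose, signal,
                                           p0=(8.0, 0.5, 3.0))
        print(f"saturation level {s_max:.2f}, dose offset {d0:.2f} Gy")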

  14. Automatic correction method for AD converter precision based on Ethernet

    Directory of Open Access Journals (Sweden)

    NI Jifeng

    2013-10-01

    Full Text Available Ideal AD conversion would be a straight line through the origin in a Cartesian coordinate system. In practical engineering, however, the signal processing circuit, chip performance and other factors affect the accuracy of conversion. Therefore a linear fitting method is adopted to improve the conversion accuracy. An automatic correction of AD conversion based on Ethernet, implemented in software and hardware, is presented. Just by clicking the mouse, the linearity correction of all AD converter channels can be completed automatically, and the error, SNR and ENOB (effective number of bits) are calculated. The coefficients of the linear correction are then loaded into the EEPROM of the onboard AD converter card. Compared with traditional methods, this method is more convenient, accurate and efficient, and has broad application prospects.
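
    A minimal sketch of the linear-fit correction the abstract describes, under the assumption that a known voltage ramp is applied to the converter: a straight line is fitted to the raw codes, the gain/offset pair that would be written to the card's EEPROM is extracted, and an ENOB figure is estimated from the residuals. The test signal and channel model are made up.

        import numpy as np

        v_in = np.linspace(-1.0, 1.0, 64)                # applied test voltages
        rng = np.random.default_rng(7)
        codes = 2047.5 * v_in + 12.0 + rng.normal(0, 0.5, v_in.size)  # raw codes

        gain, offset = np.polyfit(v_in, codes, 1)        # least-squares line
        corrected_v = (codes - offset) / gain            # corrected conversion

        residual = corrected_v - v_in
        snr_db = 10 * np.log10(np.var(v_in) / np.var(residual))
        enob = (snr_db - 1.76) / 6.02                    # standard ENOB formula
        print(f"gain={gain:.1f} codes/V, offset={offset:.1f} codes, ENOB~{enob:.1f}")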

  15. Development of a method to determine the total C-14 content in saturated salt solutions

    International Nuclear Information System (INIS)

    Lucks, C.; Prautsch, C.

    2016-01-01

    The two-step method described here for the determination of the total carbon-14 content in saturated salt solutions is divided into the analysis of carbon-14 in the evaporable and the non-evaporable fractions. After driving off the inorganic carbon by acidification, the volatile carbon compounds and volatile decomposition products are carried, with rising temperature inside the sample vessel, in a mild stream of oxygen to a tube furnace equipped with a CuO catalyst, where the carbon compounds are oxidized to CO2 at a temperature of 800 °C. Water is condensed out with an intensive condenser and the released CO2 is absorbed in a wash bottle filled with sodium hydroxide. Similarly, an aliquot of the evaporation residue is put in the first zone of the tube furnace during the second step of the analysis. After heating the catalyst in the second zone of the furnace to 800 °C, the residue is heated stepwise to 800 °C. By proceeding in this way, the non-volatile compounds are decomposed or oxidized in the oxygen stream and finally completely oxidized with the aid of the catalyst. The released CO2 is again absorbed in another wash bottle. The carbonate of each fraction is then precipitated separately as BaCO3. Finally, the precipitate is washed, dried, finely ground and covered with toluene scintillation cocktail for measurement in an LSC. The detection limit is about 0.2 Bq/l for a sample volume of 250 ml.

  16. Using automatic calibration method for optimizing the performance of Pedotransfer functions of saturated hydraulic conductivity

    Directory of Open Access Journals (Sweden)

    Ahmed M. Abdelbaki

    2016-06-01

    Full Text Available Pedotransfer functions (PTFs) are an easy way to predict saturated hydraulic conductivity (Ksat) without measurements. This study aims to auto-calibrate 22 PTFs. The PTFs were divided into three groups according to their input requirements, and the shuffled complex evolution algorithm was used for calibration. The results showed great improvement in the performance of the functions compared to the original published functions. For group 1 PTFs, the geometric mean error ratio (GMER) and the geometric standard deviation of the error ratio (GSDER) values improved from the ranges (1.27–6.09) and (5.2–7.01) to (0.91–1.15) and (4.88–5.85), respectively. For group 2 PTFs, the GMER and GSDER values improved from (0.3–1.55) and (5.9–12.38) to (1.00–1.03) and (5.5–5.9), respectively. For group 3 PTFs, the GMER and GSDER values improved from (0.11–2.06) and (5.55–16.42) to (0.82–1.01) and (5.1–6.17), respectively. The results showed that automatic calibration is an efficient and accurate method to enhance the performance of PTFs.
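
    For reference, the two scores quoted above are commonly defined as the geometric mean and geometric standard deviation of the ratio of predicted to measured Ksat; the short sketch below computes them on made-up data under that assumed definition.

        import numpy as np

        def gmer(k_pred, k_meas):
            # geometric mean error ratio: 1 means unbiased predictions
            return np.exp(np.mean(np.log(k_pred / k_meas)))

        def gsder(k_pred, k_meas):
            # geometric standard deviation of the error ratio: 1 means no spread
            log_ratio = np.log(k_pred / k_meas)
            return np.exp(log_ratio.std(ddof=1))

        k_meas = np.array([1.2, 3.4, 0.8, 5.1, 2.2])  # measured Ksat, cm/h
        k_pred = np.array([1.0, 4.0, 1.1, 4.2, 2.5])  # PTF predictions, cm/h
        print(gmer(k_pred, k_meas), gsder(k_pred, k_meas))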

  17. Correction

    CERN Multimedia

    2002-01-01

    Tile Calorimeter modules stored at CERN. The larger modules belong to the Barrel, whereas the smaller ones are for the two Extended Barrels. (The article was about the completion of the 64 modules for one of the latter.) The photo on the first page of the Bulletin n°26/2002, from 24 July 2002, illustrating the article «The ATLAS Tile Calorimeter gets into shape» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.

  18. Filtering of SPECT reconstructions made using Bellini's attenuation correction method

    International Nuclear Information System (INIS)

    Glick, S.J.; Penney, B.C.; King, M.A.

    1991-01-01

    This paper evaluates a three-dimensional (3D) Wiener filter used to restore SPECT reconstructions made using Bellini's method of attenuation correction. Its performance is compared to that of several pre-reconstruction filters: the one-dimensional (1D) Butterworth, the two-dimensional (2D) Butterworth, and a 2D Wiener filter. A simulation study is used to compare the four filtering methods. An approximation to a clinical liver-spleen study was used as the source distribution, and an algorithm which accounts for the depth- and distance-dependent blurring in SPECT was used to compute noise-free projections. To study the effect of filtering method on tumor detection accuracy, a 2 cm diameter, cool spherical tumor (40% contrast) was placed at a known, but random, location within the liver. Projection sets for ten tumor locations were computed, and five noise realizations of each set were obtained by introducing Poisson noise. The simulated projections were either filtered with the 1D or 2D Butterworth or the 2D Wiener and then reconstructed using Bellini's intrinsic attenuation correction, or reconstructed first and then filtered with the 3D Wiener. The criteria used for comparison were: normalized mean square error (NMSE), cold spot contrast, and accuracy of tumor detection with an automated numerical method. Results indicate that restorations obtained with 3D Wiener filtering yielded significantly higher lesion contrast and lower NMSE values compared to the other methods of processing. The Wiener restoration filters and the 2D Butterworth all provided similar measures of detectability, which were noticeably higher than that obtained with 1D Butterworth smoothing.
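
    The following is a minimal frequency-domain Wiener restoration sketch, not the paper's 3D filter: the SPECT-specific depth-dependent blur and noise models are replaced by a Gaussian PSF and a scalar noise-to-signal ratio, both assumptions.

        import numpy as np

        def wiener_restore(image, psf, nsr=0.05):
            """Wiener deconvolution with a known PSF and a scalar NSR."""
            H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
            W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter
            return np.real(np.fft.ifft2(W * np.fft.fft2(image)))

        # toy test: blur a point source, then restore it
        img = np.zeros((64, 64)); img[32, 32] = 1.0
        x = np.arange(64) - 32
        psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * 2.0 ** 2))
        psf /= psf.sum()
        blurred = np.real(np.fft.ifft2(np.fft.fft2(img)
                                       * np.fft.fft2(np.fft.ifftshift(psf))))
        restored = wiener_restore(blurred, psf)
        print(np.unravel_index(restored.argmax(), restored.shape))  # (32, 32)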

  19. Development and Assessment of a Bundle Correction Method for CHF

    International Nuclear Information System (INIS)

    Hwang, Dae Hyun; Chang, Soon Heung

    1993-01-01

    A bundle correction method, based on the conservation laws of mass, energy, and momentum in an open subchannel, is proposed for the prediction of the critical heat flux (CHF) in rod bundles from round-tube CHF correlations without detailed subchannel analysis. It takes into account the effects of the enthalpy and mass velocity distributions at the subchannel level using the first derivatives of CHF with respect to the independent parameters. Three different CHF correlations for tubes (Groeneveld's CHF table, the Katto correlation, and the Biasi correlation) have been examined with uniformly heated bundle CHF data collected from various sources. A limited number of CHF data from a non-uniformly heated rod bundle are also evaluated with the aid of Tong's F-factor. The proposed method shows satisfactory CHF predictions for rod bundles with both uniform and non-uniform power distributions. (Author)

  20. Empirical method for matrix effects correction in liquid samples

    International Nuclear Information System (INIS)

    Vigoda de Leyt, Dora; Vazquez, Cristina

    1987-01-01

    A simple method for the determination of Cr, Ni and Mo in stainless steels is presented. In order to minimize matrix effects, the conditions of a liquid system to dissolve stainless steel chips have been developed. Pure-element solutions were used as standards. The preparation of synthetic solutions containing all the elements of steel, as well as mathematical corrections, are avoided. The result is a simple chemical operation that simplifies the method of analysis. The variance analysis of the results obtained with steel samples shows that the three elements may be determined by comparison with the analytical curves obtained with the pure elements if the same parameters are used in the calibration curves. The accuracy and the precision were checked against other techniques using the British Chemical Standards of the Bureau of Analysed Samples Ltd. (England). (M.E.L.) [es]

  1. Correction

    Directory of Open Access Journals (Sweden)

    2012-01-01

    Full Text Available Regarding Gorelik, G., & Shackelford, T.K. (2011). Human sexual conflict from molecules to culture. Evolutionary Psychology, 9, 564–587: The authors wish to correct an omission in citation to the existing literature. In the final paragraph on p. 570, we neglected to cite Burch and Gallup (2006) [Burch, R. L., & Gallup, G. G., Jr. (2006). The psychobiology of human semen. In S. M. Platek & T. K. Shackelford (Eds.), Female infidelity and paternal uncertainty (pp. 141–172). New York: Cambridge University Press.]. Burch and Gallup (2006) reviewed the relevant literature on FSH and LH discussed in this paragraph, and should have been cited accordingly. In addition, Burch and Gallup (2006) should have been cited as the originators of the hypothesis regarding the role of FSH and LH in the semen of rapists. The authors apologize for this oversight.

  2. Correction

    CERN Multimedia

    2002-01-01

    The photo on the second page of the Bulletin n°48/2002, from 25 November 2002, illustrating the article «Spanish Visit to CERN» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.   The Spanish delegation, accompanied by Spanish scientists at CERN, also visited the LHC superconducting magnet test hall (photo). From left to right: Felix Rodriguez Mateos of CERN LHC Division, Josep Piqué i Camps, Spanish Minister of Science and Technology, César Dopazo, Director-General of CIEMAT (Spanish Research Centre for Energy, Environment and Technology), Juan Antonio Rubio, ETT Division Leader at CERN, Manuel Aguilar-Benitez, Spanish Delegate to Council, Manuel Delfino, IT Division Leader at CERN, and Gonzalo León, Secretary-General of Scientific Policy to the Minister.

  3. Correction

    Directory of Open Access Journals (Sweden)

    2014-01-01

    Full Text Available Regarding Tagler, M. J., and Jeffers, H. M. (2013). Sex differences in attitudes toward partner infidelity. Evolutionary Psychology, 11, 821–832: The authors wish to correct values in the originally published manuscript. Specifically, incorrect 95% confidence intervals around the Cohen's d values were reported on page 826 of the manuscript, where we reported the within-sex simple effects for the significant Participant Sex × Infidelity Type interaction (first paragraph) and for attitudes toward partner infidelity (second paragraph). Corrected values are presented in bold below. The authors would like to thank Dr. Bernard Beins at Ithaca College for bringing these errors to our attention. Men rated sexual infidelity significantly more distressing (M = 4.69, SD = 0.74) than they rated emotional infidelity (M = 4.32, SD = 0.92), F(1, 322) = 23.96, p < .001, d = 0.44, 95% CI [0.23, 0.65], but there was little difference between women's ratings of sexual (M = 4.80, SD = 0.48) and emotional infidelity (M = 4.76, SD = 0.57), F(1, 322) = 0.48, p = .29, d = 0.08, 95% CI [−0.10, 0.26]. As expected, men rated sexual infidelity (M = 1.44, SD = 0.70) more negatively than they rated emotional infidelity (M = 2.66, SD = 1.37), F(1, 322) = 120.00, p < .001, d = 1.12, 95% CI [0.85, 1.39]. Although women also rated sexual infidelity (M = 1.40, SD = 0.62) more negatively than they rated emotional infidelity (M = 2.09, SD = 1.10), this difference was not as large, and thus in the evolutionary-theory-supportive direction, F(1, 322) = 72.03, p < .001, d = 0.77, 95% CI [0.60, 0.94].

  4. Gynecomastia: the horizontal ellipse method for its correction.

    Science.gov (United States)

    Gheita, Alaa

    2008-09-01

    Gynecomastia is an extremely disturbing deformity affecting males, especially when it occurs in young subjects. Such subjects generally have no hormonal anomalies, and thus either liposuction or surgical intervention, depending on the type and consistency of the breast, is required for treatment. If there is slight hypertrophy alone with no ptosis, then subcutaneous mastectomy is usually sufficient. However, when hypertrophy and/or ptosis are present, corrective surgery on the skin and breast is mandatory to obtain a good cosmetic result. Most of the procedures suggested for reduction of the male breast are derived from reduction mammaplasty methods used for females. They have some disadvantages, mainly the multiple scars, which remain apparent in males, an unusual shape, and a lack of symmetry with regard to the size of both breasts and/or the nipple position. The author presents a new, simple method that has proven superior to any previous method described so far. It consists of a horizontal excision ellipse of the breast's redundant skin and deep excess tissue and a superior pedicle flap carrying the areola-nipple complex to its new site on the chest wall. The method described yields excellent shape, symmetry, and minimal scars. A new method for treating gynecomastia is described in detail, its early and late operative results are shown, and its advantages are discussed.

  5. Biogeosystem Technique as a method to correct the climate

    Science.gov (United States)

    Kalinitchenko, Valery; Batukaev, Abdulmalik; Batukaev, Magomed; Minkina, Tatiana

    2017-04-01

    can be produced; the less energy is consumed for climate correction, the better. The proposed algorithm was never discussed before, because most of its ingredients were unenforceable. The possibility to execute the algorithm now exists in the framework of our new scientific-technical branch - Biogeosystem Technique (BGT*). BGT* is a transcendental (not imitating natural processes) approach to soil processing and to the regulation of energy, matter and water fluxes and the biological productivity of the biosphere: intra-soil machining to provide a new, highly productive dispersed soil system; intra-soil pulse continuous-discrete plant watering to reduce the transpiration rate and the water consumption of plants by 5-20 times; and intra-soil, environmentally safe return of matter during intra-soil milling processing and (or) intra-soil pulse continuous-discrete plant watering with nutrition. The following become possible: waste management; a reduced flow of nutrients to water systems; transformation of carbon and other organic and mineral substances in the soil into plant nutrition elements; less degradation of biological matter to greenhouse gases; increased biological sequestration of carbon dioxide by the photosynthesis of terrestrial systems; oxidation of methane and hydrogen sulfide by fresh, photosynthesis-ionized, biologically active oxygen; and expansion of the active terrestrial site of the biosphere. A high biological product output of the biosphere will be gained. BGT* robotic systems are of low cost and low energy and material consumption. By BGT* methods the uncertainties of climate and biosphere will be reduced. Key words: Biogeosystem Technique, method to correct, climate

  6. Diagnostics and correction of disregulation states by physical methods

    OpenAIRE

    Gorsha, O. V.; Gorsha, V. I.

    2017-01-01

    Nicolaus Copernicus University, Toruń, Poland; Ukrainian Research Institute for Medicine of Transport, Odesa, Ukraine. Gorsha O. V., Gorsha V. I. Diagnostics and correction of disregulation states by physical methods. Toruń, Odesa, 2017.

  7. Evaluation of a scattering correction method for high energy tomography

    Science.gov (United States)

    Tisseur, David; Bhatia, Navnina; Estre, Nicolas; Berge, Léonie; Eck, Daniel; Payan, Emmanuel

    2018-01-01

    One of the main drawbacks of Cone Beam Computed Tomography (CBCT) is the contribution of photons scattered by the object and the detector. Scattered photons are deflected from their original path after interaction with the object. This additional contribution results in increased measured intensities, since the scattered intensity simply adds to the transmitted intensity. The effect is seen as an overestimation of the measured intensity and thus an underestimation of absorption, which produces artifacts such as cupping, shading and streaks in the reconstructed images. Moreover, the scattered radiation biases quantitative tomographic reconstruction (for example, atomic number and volumic mass measurement with the dual-energy technique). The effect can be significant and difficult to handle in the MeV energy range with large objects, due to the higher Scatter-to-Primary Ratio (SPR). Additionally, incident high-energy photons scattered by the Compton effect are more forward-directed and hence more likely to reach the detector; for the MeV energy range, the contribution of photons produced by pair production and the bremsstrahlung process also becomes important. We propose an evaluation of a scattering correction technique based on the method named Scatter Kernel Superposition (SKS). The algorithm uses a continuously thickness-adapted kernels method. Analytical parameterizations of the scatter kernels are derived in terms of material thickness, to form continuously thickness-adapted kernel maps used to correct the projections. This approach has proved efficient in producing better sampling of the kernels with respect to object thickness. The technique is applicable over a wide range of imaging conditions and gives users an additional advantage. Moreover, since no extra hardware is required, it has a major advantage especially in those cases where

  8. [Models for quantification of fluid saturation in two-phase flow system by light transmission method and its application].

    Science.gov (United States)

    Zhang, Yan-Hong; Ye, Shu-Jun; Wu, Ji-Chun

    2014-06-01

    Based on the light transmission method for quantification of liquid saturation and its application in two-phase flow systems, two groups of sandbox experiments were set up to study the migration of gas or Dense Non-Aqueous Phase Liquids (DNAPLs) in water-saturated porous media. The migration of the gas or DNAPL was monitored throughout the study. Two modified Light Intensity-Saturation (LIS) models for the water/gas two-phase system were applied and verified against the experimental data, and two new LIS models for the NAPL/water system were developed and applied to simulate the DNAPL infiltration experiment. The gas injection experiment showed that gas moved upward to the top of the sandbox in the form of 'fingering' and finally formed a continuous distribution. The DNAPL infiltration experiment showed that TCE mainly moved downward under gravity, eventually forming an irregular plume that accumulated at the bottom of the sandbox. The outcomes of the two LIS models for the water/gas system (WG-A and WG-B) were consistent with the measured data. The results of the two LIS models for the NAPL/water system (NW-A and NW-B) fit the observations well, and model NW-A, based on the assumption of individual drainage, gave better results. This could be a useful reference for the quantification of NAPL/water saturation in porous media systems.

  9. Method and system of doppler correction for mobile communications systems

    Science.gov (United States)

    Georghiades, Costas N. (Inventor); Spasojevic, Predrag (Inventor)

    1999-01-01

    Doppler correction system and method comprising receiving a Doppler effected signal comprising a preamble signal (32). A delayed preamble signal (48) may be generated based on the preamble signal (32). The preamble signal (32) may be multiplied by the delayed preamble signal (48) to generate an in-phase preamble signal (60). The in-phase preamble signal (60) may be filtered to generate a substantially constant in-phase preamble signal (62). A plurality of samples of the substantially constant in-phase preamble signal (62) may be accumulated. A phase-shifted signal (76) may also be generated based on the preamble signal (32). The phase-shifted signal (76) may be multiplied by the delayed preamble signal (48) to generate an out-of-phase preamble signal (80). The out-of-phase preamble signal (80) may be filtered to generate a substantially constant out-of-phase preamble signal (82). A plurality of samples of the substantially constant out-of-phase signal (82) may be accumulated. A sum of the in-phase preamble samples and a sum of the out-of-phase preamble samples may be normalized relative to each other to generate an in-phase Doppler estimator (92) and an out-of-phase Doppler estimator (94).
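
    The preamble processing described here amounts to a delay-and-multiply frequency estimator: the product of the preamble with its delayed replica has a constant phase proportional to the Doppler offset, which the accumulated in-phase and out-of-phase sums recover. The sketch below implements that class of estimator on a synthetic tone; the sample rate, delay and the final arctangent step are assumptions, not values from the patent.

        import numpy as np

        fs = 1e4                # sample rate, Hz (assumed)
        f_doppler = 37.0        # true offset to recover, Hz (assumed)
        n, delay = 2048, 16     # preamble length and delay, samples (assumed)

        t = np.arange(n) / fs
        preamble = np.exp(1j * 2 * np.pi * f_doppler * t)  # received preamble

        delayed = np.conj(np.roll(preamble, delay))        # delayed replica
        prod = preamble[delay:] * delayed[delay:]          # delay-and-multiply
        i_sum = prod.real.sum()                            # in-phase accumulation
        q_sum = prod.imag.sum()                            # out-of-phase accumulation

        f_est = np.arctan2(q_sum, i_sum) * fs / (2 * np.pi * delay)
        print(f"estimated Doppler offset: {f_est:.1f} Hz")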

  10. Synthesis of high saturation magnetic iron oxide nanomaterials via low temperature hydrothermal method

    Energy Technology Data Exchange (ETDEWEB)

    Bhavani, P.; Rajababu, C.H. [Department of Materials Science & Nanotechnology, Yogivemana University, Vemanapuram 516003, Kadapa (India); Arif, M.D. [Environmental Magnetism Laboratory, Indian Institute of Geomagnetism (IIG), Navi Mumbai 410218, Mumbai (India); Reddy, I. Venkata Subba [Department of Physics, Gitam University, Hyderabad Campus, Rudraram, Medak 502329 (India); Reddy, N. Ramamanohar, E-mail: manoharphd@gmail.com [Department of Materials Science & Nanotechnology, Yogivemana University, Vemanapuram 516003, Kadapa (India)

    2017-03-15

    Iron oxide nanoparticles (IONPs) were synthesized through a simple low-temperature hydrothermal approach to obtain high saturation magnetization. Two series of iron precursors (sulfates and chlorides) were used in the synthesis process, varying the reaction temperature at a constant pH. The X-ray diffraction patterns indicate the inverse spinel structure of the synthesized IONPs. Field emission scanning electron microscopy and high resolution transmission electron microscopy studies revealed that the particles prepared using iron sulfate at 130 °C consisted of a mixture of spherical (16–40 nm) and rod-like (diameter ~20–25 nm, length <100 nm) morphologies, while the IONPs synthesized from iron chlorides were well-distributed spherical shapes in the size range 5–20 nm. On the other hand, the IONPs synthesized at a reaction temperature of 190 °C had spherical (16–46 nm) morphology in both series. The band gap values of the IONPs were calculated from the optical absorption spectra of the samples. The IONPs synthesized using iron sulfate at 130 °C exhibited a high saturation magnetization (Ms) of 103.017 emu/g and a low remanent magnetization (Mr) of 0.22 emu/g with a coercivity (Hc) of 70.9 Oe, which may be attributed to the smaller magnetic domains (dm) and dead magnetic layer thickness (t). - Highlights: • Comparison of iron oxide materials prepared with Fe+2/Fe+3 sulfates and chlorides at different temperatures. • We prepared superparamagnetic and soft ferromagnetic magnetite nanoparticles. • We report higher saturation magnetization with lower coercivity.

  11. Method of coupling 1-D unsaturated flow with 3-D saturated flow on large scale

    Directory of Open Access Journals (Sweden)

    Yan Zhu

    2011-12-01

    Full Text Available A coupled unsaturated-saturated water flow numerical model was developed. The water flow in the unsaturated zone is treated as one-dimensional vertical flow, which varies in the horizontal direction according to the groundwater table and the atmospheric boundary conditions. The groundwater flow is treated as three-dimensional flow. The recharge flux from soil water to groundwater is the bottom flux for the numerical simulation of the unsaturated zone and the upper flux for the groundwater simulation; it connects and unites the two otherwise separate flow systems. The soil water equation is solved based on an assumed groundwater table and the resulting predicted recharge flux. The groundwater equation is then solved with the predicted recharge flux as the upper boundary condition. Iteration continues until the discrepancy between the assumed and calculated groundwater nodal heads reaches a specified accuracy. Illustrative examples with different water flow scenarios involving the Dirichlet boundary condition, the Neumann boundary condition, the atmospheric boundary condition, and source or sink terms were calculated with the coupled model. The results are compared with those of other models, including Hydrus-1D, SWMS-2D, and FEFLOW, demonstrating that the coupled model is effective and accurate and can significantly reduce the computational time for saturated-unsaturated flow simulations with large numbers of nodes.
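
    The skeleton below mirrors only the feedback structure of the coupling scheme (assume a head, compute recharge, update the head, iterate to convergence); the two solvers are deliberately trivial algebraic stand-ins, not Richards-equation or 3-D groundwater solvers.

        def unsat_column_recharge(head):
            # stand-in 1-D unsaturated solver: recharge falls as the water
            # table rises (hypothetical linear relation)
            return 0.01 * (10.0 - head)

        def groundwater_head(recharge):
            # stand-in 3-D groundwater solver collapsed to a single node
            return 5.0 + 80.0 * recharge

        head = 8.0                                   # initial guess of the head
        for it in range(200):
            recharge = unsat_column_recharge(head)   # bottom flux of soil column
            new_head = groundwater_head(recharge)    # aquifer response
            if abs(new_head - head) < 1e-6:          # convergence of nodal heads
                break
            head = new_head
        print(f"converged in {it} iterations: head={head:.4f}")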

  12. History and future of human cadaver preservation for surgical training: from formalin to saturated salt solution method.

    Science.gov (United States)

    Hayashi, Shogo; Naito, Munekazu; Kawata, Shinichi; Qu, Ning; Hatayama, Naoyuki; Hirai, Shuichi; Itoh, Masahiro

    2016-01-01

    Traditionally, surgical training meant on-the-job training with live patients in an operating room. However, due to advancing surgical techniques, such as minimally invasive surgery, and increasing safety demands during procedures, human cadavers have been used for surgical training. When considering the use of human cadavers for surgical training, one of the most important factors is their preservation. In this review, we summarize four preservation methods: fresh-frozen cadaver, formalin, Thiel's, and saturated salt solution methods. Fresh-frozen cadaver is currently the model that is closest to reality, but it also presents myriad problems, including the requirement of freezers for storage, limited work time because of rapid putrefaction, and risk of infection. Formalin is still used ubiquitously due to its low cost and wide availability, but it is not ideal because formaldehyde has an adverse health effect and formalin-embalmed cadavers do not exhibit many of the qualities of living organs. Thiel's method results in soft and flexible cadavers with almost natural colors, and Thiel-embalmed cadavers have been appraised widely in various medical disciplines. However, Thiel's method is relatively expensive and technically complicated. In addition, Thiel-embalmed cadavers have a limited dissection time. The saturated salt solution method is simple, carries a low risk of infection, and is relatively low cost. Although more research is needed, this method seems to be sufficiently useful for surgical training and has noteworthy features that expand the capability of clinical training. The saturated salt solution method will contribute to a wider use of cadavers for surgical training.

  13. Gamma camera correction system and method for using the same

    International Nuclear Information System (INIS)

    Inbar, D.; Gafni, G.; Grimberg, E.; Bialick, K.; Koren, J.

    1986-01-01

    A gamma camera is described which consists of: (a) a detector head that includes photodetectors for producing output signals in response to radiation stimuli which are emitted by a radiation field and which interact with the detector head and produce an event; (b) signal processing circuitry responsive to the output signals of the photodetectors for producing a sum signal that is a measure of the total energy of the event; (c) an energy discriminator having a relatively wide window for comparison with the sum signal; (d) the signal processing circuitry including coordinate computation circuitry for operating on the output signals, and calculating an X,Y coordinate of an event when the sum signal lies within the window of the energy discriminator; (e) an energy correction table containing spatially dependent energy windows for producing a validation signal if the total energy of an event lies within the window associated with the X,Y coordinates of the event; (f) the signal processing circuitry including a dislocation correction table containing spatially dependent correction factors for converting the X,Y coordinates of an event to relocated coordinates in accordance with correction factors determined by the X,Y coordinates; (g) a digital memory for storing a map of the radiation field; and (h) means for recording an event at its relocated coordinates in the memory if the energy correction table produces a validation signal

  14. Effect of methods of myopia correction on visual acuity, contrast sensitivity, and depth of focus

    NARCIS (Netherlands)

    Nio, YK; Jansonius, NM; Wijdh, RHJ; Beekhuis, WH; Worst, JGF; Noorby, S; Kooijman, AC

    Purpose: To psychophysically measure spherical and irregular aberrations in patients with various types of myopia correction. Setting: Laboratory of Experimental Ophthalmology, University of Groningen, Groningen, The Netherlands. Methods: Three groups of patients with low myopia correction

  15. Peculiarities of application the method of autogenic training in the correction of eating behavior

    OpenAIRE

    Shebanova, Vitaliya

    2014-01-01

    The article presents the peculiarities of applying the autogenic training method in the correction of eating disorders. The stages of corrective work with maladaptive eating behavior are described. The author places emphasis on the rules for self-composing formulas of intention.

  16. Methods and apparatus for environmental correction of thermal neutron logs

    International Nuclear Information System (INIS)

    Preeg, W.E.; Scott, H.D.

    1983-01-01

    An on-line, environmentally corrected measurement of the thermal neutron decay time (tau) of an earth formation traversed by a borehole is provided in a two-detector pulsed neutron logging tool by measuring tau at each detector and combining the two tau measurements in accordance with a previously established empirical relationship of the general form: tau = tau_F + A(tau_F + tau_N·B) + C, where tau_F and tau_N are the tau measurements at the far-spaced and near-spaced detectors, respectively, A is a correction coefficient for borehole capture cross-section effects, B is a correction coefficient for neutron diffusion effects, and C is a constant related to parameters of the logging tool. Preferred numerical values of A, B and C are disclosed, as well as a relationship for adapting the A term more accurately to specific borehole conditions. (author)
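
    A one-line worked example of the combination above; the coefficient values and decay times are purely illustrative, not the patent's calibrated numbers.

        A, B, C = 0.15, 0.9, 5.0          # hypothetical correction coefficients
        tau_far, tau_near = 180.0, 150.0  # measured decay times, microseconds

        tau = tau_far + A * (tau_far + tau_near * B) + C
        print(tau)  # 232.25 microseconds, the environmentally corrected value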

  17. Two different modelling methods of the saturated steam turbine load rejection

    International Nuclear Information System (INIS)

    Negreanu, Gabriel-Paul; Oprea, Ion

    1999-01-01

    One of the most difficult operating regimes of a steam turbine is load rejection. It usually happens when the main switchgear of the unit opens unexpectedly due to some external or internal cause. At that moment the rotor balance collapses: the motor momentum is positive, the resistant momentum is zero, and the rotation speed increases rapidly. When this process occurs, the over-speed protection should activate the emergency stop valves and the control and intercept valves in order to stop the steam admission into the turbine. The paper presents two different approaches to modelling the fluid-dynamic processes in the flow sections of the saturated steam turbine of an NPP, in which the laws of mass and energy conservation are applied. In this manner, the 'power and speed versus time' diagrams can be drawn. The main parameters of this technical problem are the closure law of the valves, the large volume of the internal cavities, the huge inertial momentum of the rotor and, especially, the moisture of the steam, which evaporates when the pressure decreases and generates extra power in the turbine. (authors)

  18. Practical method of breast attenuation correction for cardiac SPECT

    International Nuclear Information System (INIS)

    Oliveira, Anderson de; Nogueira, Tindyua; Gutterres, Ricardo Fraga; Megueriam, Berdj Aram; Santos, Goncalo Rodrigues dos

    2007-01-01

    The breast attenuation effects in SPECT (Single Photon Emission Computed Tomography) myocardial perfusion procedures have lately been the subject of continuous inquiry. The required attenuation correction factors are usually obtained by transmission analysis, involving the exposure of a standard external source to the SPECT system as a routine step. However, its high cost makes this methodology unavailable to most nuclear medicine services in Brazil and abroad. To overcome the problem, a new approach is presented in this work, implementing computational models to compensate for the breast attenuation effects on the left ventricle anterior wall during myocardial perfusion scintigraphy procedures with SPECT. A neural network was employed to provide the attenuation correction indexes, based upon the following individual biotype features of the patients: mass, age, height, chest and breast thicknesses, heart size, as well as the administered activity levels. (author)

  19. Practical method of breast attenuation correction for cardiac SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Anderson de; Nogueira, Tindyua; Gutterres, Ricardo Fraga [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil). Coordenacao Geral de Instalacoes Medicas e Industriais (CGMI)]. E-mails: anderson@cnen.gov.br; tnogueira@cnen.gov.br; rguterre@cnen.gov.br; Megueriam, Berdj Aram [Instituto Nacional do Cancer (INCA), Rio de Janeiro, RJ (Brazil)]. E-mail: megueriam@hotmail.com; Santos, Goncalo Rodrigues dos [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)]. E-mail: goncalo@cnen.gov.br

    2007-07-01

    The breast attenuation effects in SPECT (Single Photon Emission Computed Tomography) myocardial perfusion procedures have lately been the subject of continuous inquiry. The required attenuation correction factors are usually obtained by transmission analysis, involving the exposure of a standard external source to the SPECT system as a routine step. However, its high cost makes this methodology unavailable to most nuclear medicine services in Brazil and abroad. To overcome the problem, a new approach is presented in this work, implementing computational models to compensate for the breast attenuation effects on the left ventricle anterior wall during myocardial perfusion scintigraphy procedures with SPECT. A neural network was employed to provide the attenuation correction indexes, based upon the following individual biotype features of the patients: mass, age, height, chest and breast thicknesses, heart size, as well as the administered activity levels. (author)

  20. Use of digital computers for correction of gamma method and neutron-gamma method indications

    International Nuclear Information System (INIS)

    Lakhnyuk, V.M.

    1978-01-01

    A program for the NAIRI-S computer is described which is intended to account for and eliminate the effects of side processes when interpreting gamma and neutron-gamma logging readings. With slight modifications, the program can also serve as a mathematical basis for standardizing logging diagrams by the method of multidimensional regression analysis and for estimating the reservoir properties of rocks.

  1. The open-pit truck dispatching method based on the completion of production target and the truck flow saturation

    Energy Technology Data Exchange (ETDEWEB)

    Xing, J.; Sun, X. [Northeastern University, Shenyang (China)

    2007-05-15

    To address current problems in the 'modular dispatch' dynamic programming system widely used in open-pit truck real-time dispatching, two concepts were proposed concerning the completion of production targets and truck flow saturation. Using truck flow programming and taking into account stochastic factors and transportation distance, truck real-time dispatching was optimised. The method is applicable both to matched and to mismatched shovel-truck configurations, and to the dispatching of both empty and loaded trucks. In an open-pit mine, production efficiency could be increased by 8% to 18%. 6 refs.

  2. ASSESSMENT OF ATMOSPHERIC CORRECTION METHODS FOR OPTIMIZING HAZY SATELLITE IMAGERIES

    Directory of Open Access Journals (Sweden)

    Umara Firman Rizidansyah

    2015-04-01

    Full Text Available The purpose of this research is to examine the suitability of three types of haze correction methods with respect to the distinctness of surface objects in land cover. Considering the formation of haze, the research area is divided into two regions: a rural one, assumed to be vegetated, and an urban one, assumed to be non-vegetated. Balaraja was selected as the rural region of interest and Penjaringan as the urban one. Haze reduction in the imagery employed the techniques Dark Object Subtraction (DOS), Virtual Cloud Point (VCP) and Histogram Match (HM). Applying the Haze Optimized Transformation equation HOT = DN_blue·sin(θ) − DN_red·cos(θ), the main results of this research are: in the AVNIR-Rural case, VCP performs well on band 1 while HM performs well on bands 2, 3 and 4, so HM can be applied; in the AVNIR-Urban case, DOS performs well on bands 1, 2 and 3 while HM performs well on band 4, so DOS can be applied; in the Landsat-Rural case, DOS performs well on bands 1, 2 and 6 while VCP performs well on bands 4 and 5, and the smallest average HOT value of 106.547 is obtained by VCP, so DOS and VCP can be applied; in the Landsat-Urban case, DOS performs well on bands 1, 2 and 6 while VCP performs well on bands 3, 4 and 5, so VCP can be applied. The aim of this study is to test the suitability of three haze correction methods with respect to the clarity of surface objects in vegetated and non-vegetated cover areas, in connection with removing haze from optical satellite imagery that has particular characteristics and presumably different haze-particle formation processes. The study area was therefore divided into a rural region assumed to be vegetated and an urban region assumed to be non-vegetated; the Balaraja district was chosen as rural and the Penjaringan district as urban. Each location used Avnir-2 and Landsat
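
    Computing the HOT index itself is a one-liner; the sketch below applies the equation quoted above to synthetic blue and red bands, with an assumed clear-line angle (larger HOT values flag hazier pixels).

        import numpy as np

        theta = np.deg2rad(55.0)     # slope angle of the clear line (assumed)
        rng = np.random.default_rng(5)
        dn_blue = rng.random((100, 100))  # blue-band digital numbers, synthetic
        dn_red = rng.random((100, 100))   # red-band digital numbers, synthetic

        hot = dn_blue * np.sin(theta) - dn_red * np.cos(theta)
        print(f"mean HOT = {hot.mean():.3f}")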

  3. New methods for the correction of 31P NMR spectra in in vivo NMR spectroscopy

    International Nuclear Information System (INIS)

    Starcuk, Z.; Bartusek, K.; Starcuk, Z. jr.

    1994-01-01

    New methods for the correction of 31P NMR spectra in in vivo NMR spectroscopy have been developed. A method for the baseline correction of the spectra, which combines time-domain and frequency-domain processing, is discussed. The method is very fast and efficient at minimizing the baseline artifacts caused by biological tissues.

  4. Scatter correction method with primary modulator for dual energy digital radiography: a preliminary study

    Science.gov (United States)

    Jo, Byung-Du; Lee, Young-Jin; Kim, Dae-Hong; Jeon, Pil-Hyun; Kim, Hee-Joung

    2014-03-01

    In conventional digital radiography (DR) using a dual energy subtraction technique, a significant fraction of the detected photons are scattered within the body, producing a scatter component. Scattered radiation can significantly deteriorate image quality in diagnostic X-ray imaging systems. Various methods of scatter correction, including both measurement- and non-measurement-based methods, have been proposed in the past. Both can reduce scatter artifacts in images, but non-measurement-based methods require a homogeneous object and correct the scatter component insufficiently. We therefore employed a measurement-based method to correct for the scatter component of inhomogeneous objects in dual energy DR (DEDR) images. We performed a Monte Carlo simulation study with a primary modulator, a measurement-based method, for the DEDR system. The primary modulator, which has a checkerboard pattern, was used to modulate the primary radiation. Cylindrical phantoms of variable size were used to quantify imaging performance. For scatter estimation, we used Discrete Fourier Transform filtering. The primary modulation method was evaluated using a cylindrical phantom in the DEDR system. The scatter components were accurately removed using the primary modulator. When the results acquired with and without scatter correction were compared, the average contrast-to-noise ratio (CNR) with the correction was 1.35 times higher than without it, and the average root mean square error (RMSE) with the correction was 38.00% better than without it. In the subtraction study, the average CNR with correction was 2.04 (aluminum subtraction) and 1.38 (polymethyl methacrylate (PMMA) subtraction) times higher than without correction. The analysis demonstrated the accuracy of the scatter correction and the improvement of image quality using a primary modulator, and showed the feasibility of

  5. Method and apparatus for optical phase error correction

    Science.gov (United States)

    DeRose, Christopher; Bender, Daniel A.

    2014-09-02

    The phase value of a phase-sensitive optical device, which includes an optical transport region, is modified by laser processing. At least a portion of the optical transport region is exposed to a laser beam such that the phase value is changed from a first phase value to a second phase value, where the second phase value is different from the first phase value. The portion of the optical transport region that is exposed to the laser beam can be a surface of the optical transport region or a portion of the volume of the optical transport region. In an embodiment of the invention, the phase value of the optical device is corrected by laser processing. At least a portion of the optical transport region is exposed to a laser beam until the phase value of the optical device is within a specified tolerance of a target phase value.

  6. Genomes correction and assembling: present methods and tools

    Science.gov (United States)

    Wojcieszek, Michał; Pawełkowicz, Magdalena; Nowak, Robert; Przybecki, Zbigniew

    2014-11-01

    Recent rapid development of next generation sequencing (NGS) technologies has had a significant impact on the field of genomics, enabling the implementation of many de novo sequencing projects for new species that were previously precluded by technological costs. Along with the advancement of NGS came the need to adjust assembly programs. New algorithms must cope with the computation of massive amounts of data within reasonable time limits, and processing power and hardware are also important factors. In this paper, we address the issue of the assembly pipeline for de novo genome assembly provided by programs presently available to scientists as both commercial and open-source software. The implementation of four different approaches - Greedy, Overlap-Layout-Consensus (OLC), De Bruijn and Integrated - and the resulting variation in performance is the main focus of our discussion, with additional insight into the issue of short- and long-read correction.

  7. Texture analysis by the Schulz reflection method: Defocalization corrections for thin films

    International Nuclear Information System (INIS)

    Chateigner, D.; Germi, P.; Pernet, M.

    1992-01-01

    A new method is described for correcting experimental data obtained from the texture analysis of thin films. The analysis usually requires the experimental defocalization curves of a randomly oriented specimen. In view of the difficulty of finding non-oriented films, a theoretical method for these corrections is proposed which uses the defocalization evolution for a bulk sample, the film thickness and the penetration depth of the incident beam in the material. This correction method is applied to a film of YBa2Cu3O7-δ on an SrTiO3 single-crystal substrate. (orig.)

  8. Band extension in digital methods of transfer function determination – signal conditioners asymmetry error corrections

    Directory of Open Access Journals (Sweden)

    Zbigniew Staroszczyk

    2014-12-01

    Full Text Available Abstract. In this paper, a calibration method for error correction in transfer function determination with the use of DSP is proposed. The correction limits or eliminates the influence of the transfer function input/output signal conditioners on the transfer functions estimated for the investigated object. The method exploits a frequency-domain descriptor of the conditioning paths, found during a training observation made on a known reference object. Keywords: transfer function, band extension, error correction, phase errors

  9. Correction to the method of Talmadge and Fitch

    International Nuclear Information System (INIS)

    Sincero, A.P.

    2002-01-01

    The method of Talmadge and Fitch, used for calculating thickener areas, was published in 1955. Although in the United States this method has largely been superseded by the solids flux method, other parts of the world still use it up to the present. The method, however, is erroneous, and this needs to be made known to potential users. The error lies in the assumption that the underflow concentration, C_u, and the time of thickening, t_u, in a continuous-flow thickener can be obtained from data measured in a single batch settling test. This paper shows that this assumption is incorrect. (author)

  10. μ+-meson method of investigation of monocrystalline samples of ferromagnetic metals magnetized to saturation

    International Nuclear Information System (INIS)

    Gorelkin, V.N.; Miloserdin, V.Yu.; Smilga, V.P.

    1977-01-01

    Analysis and calculation of the local magnetic fields in nickel, cobalt and iron lattices have been performed with the use of Ewald's method. Based on the calculation results, regularities have been established in the behaviour of muons in these ferromagnetic materials in the absence of muon diffusion. It has been found that the μ+ meson method makes it possible to study the position of the light hydrogen isotope (muonium) in the metal crystal lattice and the deformation and stress state of the lattice, and to measure the contact and dipole fields. The advantages of the μ+ meson method in the study of ferromagnetic properties are shown.

  11. A new digitized reverse correction method for hypoid gears based on a one-dimensional probe

    Science.gov (United States)

    Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo

    2017-12-01

    In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometrical deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles and the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretical designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction method of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of the method of numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed by the organic combination of numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.
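
    One plausible reading of the reverse-correction step is a linearized least-squares solve: if a sensitivity (Jacobian) matrix maps small machine-tool setting changes to tooth-surface deviation changes, the setting corrections that best cancel the measured deviations follow from a pseudo-inverse. The matrix sizes and values below are made up for illustration; this is a sketch of the numerical idea, not the authors' full model.

        import numpy as np

        rng = np.random.default_rng(1)
        n_points, n_settings = 45, 6    # deviation grid points, machine settings
        J = rng.normal(size=(n_points, n_settings))        # hypothetical sensitivities
        deviations = rng.normal(scale=0.02, size=n_points) # measured deviations, mm

        # deviations ~ J @ delta  =>  pick delta minimizing ||J @ delta + deviations||
        delta, *_ = np.linalg.lstsq(J, -deviations, rcond=None)
        residual = deviations + J @ delta
        rms = lambda v: np.sqrt(np.mean(v ** 2))
        print(f"rms deviation {rms(deviations):.4f} -> {rms(residual):.4f} mm")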

  12. Transformation of 3-chloroallyl alcohol in water-saturated subsoil studied with a column method

    NARCIS (Netherlands)

    Beltman, W.H.J.; Leistra, M.; Matser, A.M.

    1996-01-01

    The performance of a newly developed column method for pesticide transformation rate measurements in the subsoil was tested using (Z)- and (E)-3-chloroallyl alcohol as model compounds. The subsoil columns were filled in situ. In the column experiment the half-life ranged from 0.5 to 5.2 d for

  13. Analysis and development of methods of correcting for heterogeneities to cobalt-60: computing application

    International Nuclear Information System (INIS)

    Kappas, K.

    1982-11-01

    The purpose of this work is to analyse the influence of inhomogeneities of the human body on the determination of the dose in Cobalt-60 radiation therapy. The first part is dedicated to the physical characteristics of inhomogeneities and to the conventional methods of correction. New methods of correction are proposed based on the analysis of the scatter; this analysis allows the physical characteristics of the inhomogeneities, and the corresponding modifications of the dose, to be taken into account with greater accuracy: 'the differential TAR method' and 'the beam subtraction method'. The second part is dedicated to the computer implementation of the second correction method for routine application in hospitals [fr]

  14. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield.

    Science.gov (United States)

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-06-16

    Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data, but such statistical adjustment cannot correct SPP pixels for which no rain gauge data exist. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach that uses not the daily gauge data of the pixel to be corrected but the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that groups precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that varying the scale of analysis reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale.
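
    A minimal empirical quantile-mapping sketch in the spirit of the method described: each satellite value is passed through the satellite reference CDF and out through the gauge reference CDF of the pixel's hydroclimatic area. All series here are synthetic stand-ins, and the 101-point quantile grid is an assumption.

        import numpy as np

        def quantile_map(satellite, gauge_ref, satellite_ref):
            """Map satellite values through the two empirical reference CDFs."""
            q = np.linspace(0.0, 1.0, 101)
            sat_q = np.quantile(satellite_ref, q)   # satellite reference quantiles
            gauge_q = np.quantile(gauge_ref, q)     # gauge reference quantiles
            return np.interp(satellite, sat_q, gauge_q)

        rng = np.random.default_rng(42)
        gauge_ref = rng.gamma(2.0, 4.0, 3000)            # daily gauge rainfall
        satellite_ref = 1.3 * rng.gamma(2.0, 4.0, 3000)  # biased satellite record
        new_obs = 1.3 * rng.gamma(2.0, 4.0, 300)         # data to correct

        corrected = quantile_map(new_obs, gauge_ref, satellite_ref)
        print(new_obs.mean(), corrected.mean(), gauge_ref.mean())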

  15. Detection of Static Eccentricity Fault in Saturated Induction Motors by Air-Gap Magnetic Flux Signature Analysis Using Finite Element Method

    Directory of Open Access Journals (Sweden)

    N. Halem

    2013-06-01

    Unfortunately, motor current signature analysis (MCSA) cannot detect small degrees of purely static eccentricity (SE) defects, while air-gap magnetic flux signature analysis (FSA) is applied successfully. The simulation results are obtained by using the time stepping finite element (TSFE) method. In order to show the impact of magnetic saturation upon the diagnosis of SE faults, the analysis is carried out for saturated induction motors. The index signatures of the static eccentricity fault around the fundamental and the PSHs are detected successfully for the saturated motor.

  18. Comparison of saturated areas mapping methods in the Jizera Mountains, Czech Republic

    Czech Academy of Sciences Publication Activity Database

    Kulasová, A.; Beven, K. J.; Blažková, Š. D.; Řezáčová, Daniela; Cajthaml, J.

    2014-01-01

    Roč. 62, č. 2 (2014), s. 160-168 ISSN 0042-790X R&D Projects: GA ČR(CZ) GAP209/11/2045 Institutional support: RVO:68378289 Keywords : mapping variable source areas * boot method * piezometers * vegetation mapping Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 1.486, year: 2014 http://147.213.145.2/vc_articles/2014_62_2_Kulasova_160.pdf

  19. Evaluation and parameterization of ATCOR3 topographic correction method for forest cover mapping in mountain areas

    Science.gov (United States)

    Balthazar, Vincent; Vanacker, Veerle; Lambin, Eric F.

    2012-08-01

    A topographic correction of optical remote sensing data is necessary to improve the quality of quantitative forest cover change analyses in mountainous terrain. The implementation of semi-empirical correction methods requires the calibration of model parameters that are empirically defined. This study develops a method to improve the performance of topographic corrections for forest cover change detection in mountainous terrain through an iterative tuning method of model parameters based on a systematic evaluation of the performance of the correction. The latter was based on: (i) the general matching of reflectances between sunlit and shaded slopes and (ii) the occurrence of abnormal reflectance values, qualified as statistical outliers, in very low illuminated areas. The method was tested on Landsat ETM+ data for rough (Ecuadorian Andes) and very rough mountainous terrain (Bhutan Himalayas). Compared to a reference level (no topographic correction), the ATCOR3 semi-empirical correction method resulted in a considerable reduction of dissimilarities between reflectance values of forested sites in different topographic orientations. Our results indicate that optimal parameter combinations are depending on the site, sun elevation and azimuth and spectral conditions. We demonstrate that the results of relatively simple topographic correction methods can be greatly improved through a feedback loop between parameter tuning and evaluation of the performance of the correction model.
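
    The tune-and-evaluate loop can be sketched generically. ATCOR3 itself is commercial software, so the snippet below substitutes a simple C-type semi-empirical correction and scores each candidate parameter with the two criteria from the abstract (sunlit/shaded reflectance matching and outlier occurrence in very low illuminated areas); all data, thresholds and scoring weights are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    cos_i = rng.uniform(0.05, 1.0, 10000)                  # local illumination
    rho = 0.3 * cos_i + rng.normal(0.0, 0.01, cos_i.size)  # synthetic reflectance
    sunlit, shaded, low_illum = cos_i > 0.7, cos_i < 0.3, cos_i < 0.1

    def c_correct(rho, cos_i, c, cos_sz=0.8):
        """Generic semi-empirical (C-type) topographic correction."""
        return rho * (cos_sz + c) / (cos_i + c)

    def performance(r):
        match = abs(r[sunlit].mean() - r[shaded].mean())   # criterion (i)
        z = (r[low_illum] - r.mean()) / r.std()
        return match + np.mean(np.abs(z) > 3.0)            # criterion (ii)

    best_c = min(np.linspace(0.05, 2.0, 40),
                 key=lambda c: performance(c_correct(rho, cos_i, c)))
    ```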

  20. Limitations of amorphous content quantification by isothermal calorimetry using saturated salt solutions to control relative humidity: alternative methods.

    Science.gov (United States)

    Khalef, Nawel; Pinal, Rodolfo; Bakri, Aziz

    2010-04-01

    Despite the high sensitivity of isothermal calorimetry (IC), reported measurements of amorphous content by this technique show significant variability, even for the same compound. An investigation into the reasons behind such variability is presented, using amorphous lactose and salbutamol sulfate as model compounds. An analysis was carried out on the heat evolved as a result of the exchange of water vapor between the solid sample, during crystallization, and the saline solution reservoir. The use of saturated salt solutions as a means of controlling the vapor pressure of water within sealed ampoules bears inherent limitations that lead, in turn, to the variability associated with the IC technique. We present an alternative IC method, based on an open-cell configuration, that effectively addresses the limitations encountered with the sealed-ampoule system. The proposed approach yields an integral whose value is proportional to the amorphous content of the sample, thus enabling reliable and consistent quantification.

  1. Saturated salt method determination of hysteresis of Pinus sylvestris L. wood for 35 ºC isotherms

    Directory of Open Access Journals (Sweden)

    García Esteban, L.

    2004-12-01

    The saturated salts method was used in this study to quantify hysteresis in Pinus sylvestris L. wood, in an exercise that involved plotting the 35 ºC desorption and sorption isotherms. Nine salts were used, all of which establish stable and known relative humidity values when saturated in water. The wood was kept at the relative humidity generated by each of these salts until the equilibrium moisture content (EMC) was reached, both in the water loss (desorption) and the water uptake (sorption) processes. The Guggenheim method was used to fit the values obtained to the respective curves. Hysteresis was evaluated in terms of the hysteresis coefficient, for which a mean value of 0.87 was found.

    This work quantified the hysteresis of Pinus sylvestris L. wood. To this end, the 35 ºC sorption and desorption isotherms were constructed using the saturated salts method. Nine salts were used which, when saturated in water, give rise to stable and known relative humidities. The wood was placed under the different relative humidities conferred by each of the salts until the corresponding equilibrium moisture contents were reached, both in the water loss (desorption) process and in the water uptake (sorption) process. The values obtained were fitted to the respective sigmoids using the Guggenheim method. Hysteresis was assessed by means of the hysteresis coefficient, for which a mean value of 0.87 was obtained.
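
    A minimal sketch of the fitting and evaluation steps, assuming the Guggenheim (GAB) form of the isotherm and taking the hysteresis coefficient as the mean sorption-to-desorption EMC ratio (one common definition; the paper's exact formula is not reproduced here). The moisture data below are invented placeholders.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gab(aw, m0, c, k):
        """Guggenheim (GAB) sorption isotherm."""
        return m0 * c * k * aw / ((1 - k * aw) * (1 - k * aw + c * k * aw))

    # hypothetical equilibrium moisture contents (%) at the nine salt RHs
    aw = np.array([0.11, 0.23, 0.33, 0.44, 0.53, 0.65, 0.75, 0.85, 0.97])
    emc_des = np.array([3.5, 5.2, 6.4, 7.6, 8.8, 10.5, 12.6, 15.8, 22.0])
    emc_ads = 0.87 * emc_des          # sorption branch, toy data

    p_des, _ = curve_fit(gab, aw, emc_des, p0=[6.0, 10.0, 0.8])
    p_ads, _ = curve_fit(gab, aw, emc_ads, p0=[6.0, 10.0, 0.8])
    hyst = np.mean(gab(aw, *p_ads) / gab(aw, *p_des))  # hysteresis coefficient
    ```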

  2. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    International Nuclear Information System (INIS)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo; Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro; Kato, Rikio

    2005-01-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)
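
    A minimal sketch of the IBSC idea in Python, assuming a Gaussian scatter function and a scalar scatter fraction; the paper uses an image-based (spatially varying) scatter fraction function, so this is a simplification.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def ibsc_correct(img_ac, scatter_fraction, sigma=2.0):
        """Estimate the scatter component by blurring the attenuation-
        corrected image with a scatter function (Gaussian here, an
        assumption), scale by a scatter fraction, and subtract."""
        scatter = scatter_fraction * gaussian_filter(img_ac, sigma)
        return img_ac - scatter

    img = np.random.default_rng(2).poisson(100, (64, 64)).astype(float)
    corrected = ibsc_correct(img, scatter_fraction=0.3)
    ```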

  5. Autocalibration method for non-stationary CT bias correction.

    Science.gov (United States)

    Vegas-Sánchez-Ferrero, Gonzalo; Ledesma-Carbayo, Maria J; Washko, George R; Estépar, Raúl San José

    2018-02-01

    Computed tomography (CT) is a widely used imaging modality for screening and diagnosis. However, the deleterious effects of radiation exposure inherent in CT imaging require the development of image reconstruction methods which can reduce exposure levels. The development of iterative reconstruction techniques is now enabling the acquisition of low-dose CT images whose quality is comparable to that of CT images acquired with much higher radiation dosages. However, the characterization and calibration of the CT signal due to changes in dosage and reconstruction approaches is crucial to provide clinically relevant data. Although CT scanners are calibrated as part of the imaging workflow, the calibration is limited to select global reference values and does not consider other inherent factors of the acquisition that depend on the subject scanned (e.g. photon starvation, partial volume effect, beam hardening) and result in a non-stationary noise response. In this work, we analyze the effect of reconstruction biases caused by non-stationary noise and propose an autocalibration methodology to compensate for it. Our contributions are: 1) the derivation of a functional relationship between observed bias and non-stationary noise, 2) a robust and accurate method to estimate the local variance, 3) an autocalibration methodology that does not necessarily rely on a calibration phantom, attenuates the bias caused by noise and removes the systematic bias observed in devices from different vendors. The validation of the proposed methodology was performed with a physical phantom and clinical CT scans acquired with different configurations (kernels, doses, algorithms including iterative reconstruction). The results confirmed the suitability of the proposed methods for removing the intra-device and inter-device reconstruction biases.

  6. Simple method for correct enumeration of Staphylococcus aureus

    DEFF Research Database (Denmark)

    Haaber, J.; Cohn, M. T.; Petersen, A.

    2016-01-01

    culture. When grown in such liquid cultures, the human pathogen Staphylococcus aureus is characterized by its aggregation of single cells into clusters of variable size. Here, we show that aggregation during growth in the laboratory standard medium tryptic soy broth (TSB) is common among clinical...... and laboratory S. aureus isolates and that aggregation may introduce significant bias when applying standard enumeration methods on S. aureus growing in laboratory batch cultures. We provide a simple and efficient sonication procedure, which can be applied prior to optical density measurements to give...

  7. Comparison of classical methods for blade design and the influence of tip correction on rotor performance

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær; Okulov, Valery; Mikkelsen, Robert Flemming

    2016-01-01

    The classical blade-element/momentum (BE/M) method, which is used together with different types of corrections (e.g. the Prandtl or Glauert tip correction), is today the most basic tool in the design of wind turbine rotors. However, there are other classical techniques based on a combination...

  8. Application of pulse pile-up correction spectrum to the library least-squares method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang Hoon [Kyungpook National Univ., Daegu (Korea, Republic of)

    2006-12-15

    The Monte Carlo simulation code CEARPPU has been developed and updated to provide pulse pile-up correction spectra for high counting rate cases. For neutron activation analysis, CEARPPU correction spectra were used in the library least-squares method to give better isotopic activity results than conventional library least-squares fitting with uncorrected spectra.

  9. Resistivity Correction Factor for the Four-Probe Method: Experiment I

    Science.gov (United States)

    Yamashita, Masato; Yamaguchi, Shoji; Enjoji, Hideo

    1988-05-01

    Experimental verification of the theoretically derived resistivity correction factor (RCF) is presented. Resistivity and sheet resistance measurements by the four-probe method are made on three samples: isotropic graphite, ITO film and Au film. It is indicated that the RCF can correct the apparent variations of experimental data to yield reasonable resistivities and sheet resistances.
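
    For the in-line four-probe geometry, the infinite-sheet relation and the role of the RCF can be written compactly. The RCF value itself comes from theory or tables for the actual sample geometry; the numbers below are placeholders.

    ```python
    import math

    def sheet_resistance(v, i, rcf=1.0):
        """Four-probe sheet resistance: the infinite-sheet factor pi/ln(2)
        times V/I, multiplied by a geometry-dependent resistivity
        correction factor (RCF) for finite samples."""
        return (math.pi / math.log(2)) * (v / i) * rcf

    def resistivity(v, i, thickness_m, rcf=1.0):
        """Bulk resistivity of a thin film: sheet resistance x thickness."""
        return sheet_resistance(v, i, rcf) * thickness_m

    # hypothetical measurement on a 200 nm film
    rho = resistivity(v=1.2e-3, i=1.0e-3, thickness_m=200e-9, rcf=0.95)
    ```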

  10. Subspace Correction Methods for Total Variation and $\\ell_1$-Minimization

    KAUST Repository

    Fornasier, Massimo

    2009-01-01

    This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a sequence of orthogonal subspaces. On each subspace an iterative proximity-map algorithm is implemented via oblique thresholding, which is the main new tool introduced in this work. We provide convergence conditions for the algorithm in order to compute minimizers of the target energy. Analogous results are derived for a parallel variant of the algorithm. Applications are presented in domain decomposition methods for degenerate elliptic PDEs arising in total variation minimization and in accelerated sparse recovery algorithms based on ℓ1-minimization. We include numerical examples which show efficient solutions to classical problems in signal and image processing. © 2009 Society for Industrial and Applied Mathematics.
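
    Oblique thresholding itself is the paper's new tool. Purely as a flavor of the thresholding-based iterations it builds on, here is plain iterative soft-thresholding (ISTA) for the ℓ1 problem; this is a standard method, not the paper's algorithm.

    ```python
    import numpy as np

    def soft(x, t):
        """Soft-thresholding, the proximity map of the l1 norm."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, y, lam, n_iter=200):
        """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = soft(x - A.T @ (A @ x - y) / L, lam / L)
        return x

    rng = np.random.default_rng(3)
    A = rng.normal(size=(50, 200))
    x_true = np.zeros(200); x_true[:5] = 1.0   # sparse signal
    x_hat = ista(A, A @ x_true, lam=0.1)
    ```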

  11. RAPID COMMUNICATION: A novel time frequency-based 3D Lissajous figure method and its application to the determination of oxygen saturation from the photoplethysmogram

    Science.gov (United States)

    Addison, Paul S.; Watson, James N.

    2004-11-01

    We present a novel time-frequency method for the measurement of oxygen saturation using the photoplethysmogram (PPG) signals from a standard pulse oximeter machine. The method utilizes the time-frequency transformation of the red and infrared PPGs to derive a 3D Lissajous figure. By selecting the optimal Lissajous, the method provides an inherently robust basis for the determination of oxygen saturation as regions of the time-frequency plane where high- and low-frequency signal artefacts are to be found are automatically avoided.
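
    For orientation, the conventional (non-Lissajous) calculation that such methods improve upon is the ratio-of-ratios estimate, sketched below; the calibration constants a and b are device-specific placeholders. The paper's contribution is selecting the optimal time-frequency region via the 3D Lissajous figure before an equivalent ratio is formed, which this sketch does not do.

    ```python
    import numpy as np

    def spo2_ratio_of_ratios(red, ir, a=110.0, b=25.0):
        """Classic pulse-oximetry estimate from red/infrared PPG segments:
        SpO2 ~ a - b*R with R the ratio of normalized AC amplitudes."""
        def ac_dc(x):
            return x.max() - x.min(), x.mean()
        ac_r, dc_r = ac_dc(red)
        ac_i, dc_i = ac_dc(ir)
        R = (ac_r / dc_r) / (ac_i / dc_i)
        return a - b * R

    # toy PPG segments at a 1.2 Hz pulse rate
    t = np.arange(0.0, 5.0, 0.01)
    red = 1.0 + 0.02 * np.sin(2 * np.pi * 1.2 * t)
    ir = 1.2 + 0.03 * np.sin(2 * np.pi * 1.2 * t)
    print(spo2_ratio_of_ratios(red, ir))
    ```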

  12. Evaluation of Fresnel's corrections to the eikonal approximation by the separabilization method

    International Nuclear Information System (INIS)

    Musakhanov, M.M.; Zubarev, A.L.

    1975-01-01

    A method of separabilization of the potential over approximate solutions of the Schroedinger equation, leading to Schwinger's variational principle for the scattering amplitude, is suggested. The results are applied to the calculation of the Fresnel corrections to the Glauber approximation.

  13. A software-based x-ray scatter correction method for breast tomosynthesis

    OpenAIRE

    Jia Feng, Steve Si; Sechopoulos, Ioannis

    2011-01-01

    Purpose: To develop a software-based scatter correction method for digital breast tomosynthesis (DBT) imaging and investigate its impact on the image quality of tomosynthesis reconstructions of both phantoms and patients.

  14. Discussion on Boiler Efficiency Correction Method with Low Temperature Economizer-Air Heater System

    Science.gov (United States)

    Ke, Liu; Xing-sen, Yang; Fan-jun, Hou; Zhi-hong, Hu

    2017-05-01

    This paper points out that it is wrong to take the outlet flue gas temperature of the low temperature economizer as the exhaust gas temperature in boiler efficiency calculations based on GB10184-1988. It then proposes a new correction method, which decomposes the low temperature economizer-air heater system into two hypothetical parts, an air preheater and a pre-condensed water heater, and takes the equivalent outlet gas temperature of the air preheater as the exhaust gas temperature in the boiler efficiency calculation. This method makes the boiler efficiency calculation more concise, with no air heater correction, and provides a useful reference for dealing with this kind of problem correctly.
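
    The decomposition can be illustrated with a simple energy balance: the heat picked up by the air in the hypothetical air preheater is removed from the gas stream, giving an equivalent exhaust gas temperature. This is a hedged sketch of the idea only, not the standard's or the paper's exact procedure; all values are hypothetical.

    ```python
    def equivalent_exhaust_temp(t_gas_in, m_gas, cp_gas,
                                m_air, cp_air, t_air_out, t_air_in):
        """Equivalent air-preheater outlet gas temperature from an energy
        balance: subtract the heat transferred to the air from the gas."""
        q_air = m_air * cp_air * (t_air_out - t_air_in)  # heat to the air
        return t_gas_in - q_air / (m_gas * cp_gas)

    # hypothetical flows (kg/s), heat capacities (kJ/kg.K), temperatures (C)
    t_eq = equivalent_exhaust_temp(140.0, 100.0, 1.05, 95.0, 1.01, 60.0, 20.0)
    ```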

  15. Determination of Matric Suction and Saturation Degree for Unsaturated Soils, Comparative Study - Numerical Method versus Analytical Method

    Science.gov (United States)

    Chiorean, Vasile-Florin

    2017-10-01

    Matric suction is a soil parameter which influences the behaviour of unsaturated soils in terms of both shear strength and permeability. Knowing the variation of matric suction in the unsaturated soil zone is necessary for solving geotechnical problems such as the stability of unsaturated soil slopes or the bearing capacity of unsaturated foundation ground. The mathematical expression of the dependency between soil moisture content and matric suction (the soil water characteristic curve) is strongly nonlinear. This paper presents two methods to determine the variation of matric suction with depth between the groundwater level and the ground surface. The first is an analytical approach to one-dimensional steady-state unsaturated infiltration between the groundwater level and the ground surface; three situations were simulated in terms of boundary conditions: precipitation (inflow at the ground surface), evaporation (outflow at the ground surface), and perfect equilibrium (no flow at the ground surface). The second is the finite element method, used for steady-state, two-dimensional unsaturated infiltration calculations; identical boundary conditions were simulated as in the analytical approach. For both methods, the equation proposed by van Genuchten-Mualem (1980) was adopted as the mathematical expression of the soil water characteristic curve, and the van Genuchten-Mualem model was also adopted for predicting unsaturated soil permeability. The fitting parameters of these models were taken from the RETC 6.02 software as a function of soil type. The analyses were performed with both methods for three major soil types: clay, silt and sand, and for each soil type the three surface boundary conditions (inflow, outflow, and no flow) were analysed. The obtained results are presented in order to highlight the differences
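
    The adopted constitutive relations are standard; below is a short sketch of the van Genuchten saturation curve and the van Genuchten-Mualem relative permeability, with placeholder parameters (RETC would supply fitted values per soil type).

    ```python
    import numpy as np

    def vg_saturation(h, alpha, n):
        """van Genuchten effective saturation Se(h) for suction head h > 0."""
        m = 1.0 - 1.0 / n
        return (1.0 + (alpha * h) ** n) ** (-m)

    def mualem_k(se, ks, n):
        """van Genuchten-Mualem unsaturated hydraulic conductivity."""
        m = 1.0 - 1.0 / n
        return ks * np.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

    h = np.linspace(0.01, 10.0, 100)          # suction head [m]
    se = vg_saturation(h, alpha=1.5, n=2.0)   # sand-like toy parameters
    k = mualem_k(se, ks=1e-5, n=2.0)          # conductivity [m/s]
    ```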

  16. Error analysis of motion correction method for laser scanning of moving objects

    Science.gov (United States)

    Goel, S.; Lohani, B.

    2014-05-01

    The limitation of conventional laser scanning methods is that the objects being scanned should be static. The need to scan moving objects has resulted in the development of new methods capable of generating correct 3D geometry of moving objects. Limited literature is available, showing the development of only a few methods that address object motion during scanning, each utilizing its own models or sensors. Studies on error modelling or analysis of these motion correction methods are lacking in the literature. In this paper, we develop the error budget and present the analysis of one such 'motion correction' method. This method assumes the availability of position and orientation information for the moving object, which in general can be obtained by installing a POS system on board or by using tracking devices. It then uses this information, along with the laser scanner data, to correct the laser data, thus producing correct geometry despite the object being mobile during scanning. The major applications of this method lie in the shipping industry, to scan ships either moving or parked at sea, and in scanning objects such as hot air balloons or aerostats. Notably, the other 'motion correction' methods described in the literature cannot be applied to the objects mentioned here, which makes the chosen method quite unique. This paper presents some interesting insights into the functioning of the 'motion correction' method, as well as a detailed account of the behaviour and variation of the error due to different sensor components, alone and in combination with each other. The analysis can be used to gain insight into the optimal utilization of the available components for achieving the best results.
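
    The core correction step can be sketched as a per-return rigid-body transformation: each measured point, taken in the static scanner/world frame, is mapped into the object's body frame using the POS pose at that return's timestamp, so the geometry comes out as if the object had been static. A minimal sketch, assuming one pose per point (real systems interpolate poses to the laser timestamps).

    ```python
    import numpy as np

    def motion_correct(p_measured, R_obj, t_obj):
        """p_measured: (n,3) points in the world frame; R_obj, t_obj: object
        pose (rotation (n,3,3), translation (n,3)) at each point's timestamp.
        Returns points in the object's body frame: R^T (p - t)."""
        return np.einsum('nji,nj->ni', R_obj, p_measured - t_obj)

    # toy demo with identity poses (a stationary object)
    rng = np.random.default_rng(4)
    pts = rng.normal(size=(5, 3))
    R = np.tile(np.eye(3), (5, 1, 1))
    t = np.zeros((5, 3))
    body_pts = motion_correct(pts, R, t)
    ```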

  17. A new correction method for determination on carbohydrates in lignocellulosic biomass.

    Science.gov (United States)

    Li, Hong-Qiang; Xu, Jian

    2013-06-01

    The accurate determination of the key components in lignocellulosic biomass is a prerequisite for pretreatment and bioconversion. The widely used 72% H2SO4 two-step hydrolysis quantitative saccharification (QS) procedure uses loss coefficients of monosaccharide standards to correct for monosaccharide loss in the secondary hydrolysis (SH) step, which may result in excessive correction. By studying the quantitative relationships between glucose and xylose losses under specific hydrolysis conditions and the corresponding HMF and furfural production, a simple correction for the monosaccharide loss in both the primary hydrolysis (PH) and SH steps was established, using HMF and furfural as calibrators. The method was applied to the component determination of corn stover, Miscanthus and cotton stalk (raw and pretreated materials) and compared with the NREL method. It was shown that this method can avoid excessive correction for samples with high carbohydrate contents.
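
    The stoichiometric idea behind using the degradation products as calibrators can be sketched with a molar-mass back-conversion: HMF derives from hexoses and furfural from pentoses, so the measured degradation products can be converted back to the sugar they consumed. This is only the naive version; the paper's calibrated relationships may differ.

    ```python
    # molar masses in g/mol
    MW = {'glucose': 180.16, 'xylose': 150.13, 'hmf': 126.11, 'furfural': 96.08}

    def corrected_sugars(glucose, xylose, hmf, furfural):
        """Add back the sugar equivalents of the measured degradation
        products (all concentrations in the same units, e.g. g/L)."""
        g = glucose + hmf * MW['glucose'] / MW['hmf']
        x = xylose + furfural * MW['xylose'] / MW['furfural']
        return g, x

    print(corrected_sugars(10.0, 5.0, 0.2, 0.1))   # hypothetical values
    ```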

  18. Beam-Based Error Identification and Correction Methods for Particle Accelerators

    CERN Document Server

    AUTHOR|(SzGeCERN)692826; Tomas, Rogelio; Nilsson, Thomas

    2014-06-10

    Modern particle accelerators have tight tolerances on the acceptable deviation from their desired machine parameters. The control of these parameters is of crucial importance for safe machine operation and performance. This thesis focuses on beam-based methods and algorithms to identify and correct errors in particle accelerators. The optics measurements and corrections of the Large Hadron Collider (LHC), which resulted in an unprecedentedly low β-beat for a hadron collider, are described. The transverse coupling is another parameter which is important to control. Improvements in the reconstruction of the coupling from turn-by-turn data have resulted in a significant decrease of the measurement uncertainty. An automatic coupling correction method, which is based on the injected beam oscillations, has been successfully used in normal operation of the LHC. Furthermore, a new method to measure and correct chromatic coupling that was applied to the LHC is described. It resulted in a decrease of the chromatic coupli...

  19. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-07

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long-term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is described, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, compared with 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walk (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  2. RELIC: a novel dye-bias correction method for Illumina Methylation BeadChip.

    Science.gov (United States)

    Xu, Zongli; Langie, Sabine A S; De Boever, Patrick; Taylor, Jack A; Niu, Liang

    2017-01-03

    The Illumina Infinium HumanMethylation450 BeadChip and its successor, the Infinium MethylationEPIC BeadChip, have been extensively utilized in epigenome-wide association studies. Both arrays use two fluorescent dyes (Cy3-green/Cy5-red) to measure methylation level at CpG sites. However, performance differences between the dyes can result in biased estimates of methylation levels. Here we describe a novel method, called REgression on Logarithm of Internal Control probes (RELIC), to correct for dye bias across the whole array by utilizing the intensity values of paired internal control probes that monitor the two color channels. We evaluate the method in several datasets against other widely used dye-bias correction methods. Results on data quality improvement showed that RELIC correction statistically significantly outperforms alternative dye-bias correction methods. We incorporated the method into the R package ENmix, which is freely available from the Bioconductor website (https://www.bioconductor.org/packages/release/bioc/html/ENmix.html). RELIC is an efficient and robust method to correct for dye bias in Illumina Methylation BeadChip data. It outperforms alternative methods and is conveniently implemented in the R package ENmix to facilitate DNA methylation studies.
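
    The reference implementation is the R package ENmix. Purely as an illustration of the regression-on-log-intensities idea, here is a NumPy sketch with synthetic control-probe data; the actual RELIC model and its probe handling are more involved.

    ```python
    import numpy as np

    def relic_like_correction(ctrl_grn, ctrl_red, red):
        """Fit a regression between the log intensities of paired internal
        control probes monitoring the two channels, then use it to put
        red-channel intensities on the green-channel scale."""
        b, a = np.polyfit(np.log(ctrl_red), np.log(ctrl_grn), 1)
        return np.exp(a + b * np.log(red))

    rng = np.random.default_rng(5)
    grn_ctrl = rng.uniform(500.0, 5000.0, 85)
    red_ctrl = 1.3 * grn_ctrl ** 0.95                 # synthetic dye bias
    red_probes = 1.3 * rng.uniform(500.0, 5000.0, 1000) ** 0.95
    red_corrected = relic_like_correction(grn_ctrl, red_ctrl, red_probes)
    ```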

  3. A novel 3D absorption correction method for quantitative EDX-STEM tomography

    International Nuclear Information System (INIS)

    Burdet, Pierre; Saghi, Z.; Filippin, A.N.; Borrás, A.; Midgley, P.A.

    2016-01-01

    This paper presents a novel 3D method to correct for absorption in energy dispersive X-ray (EDX) microanalysis of heterogeneous samples of unknown structure and composition. By using STEM-based tomography coupled with EDX, an initial 3D reconstruction is used to extract the location of generated X-rays as well as the X-ray path through the sample to the surface. The absorption correction needed to retrieve the generated X-ray intensity is then calculated voxel-by-voxel estimating the different compositions encountered by the X-ray. The method is applied to a core/shell nanowire containing carbon and oxygen, two elements generating highly absorbed low energy X-rays. Absorption is shown to cause major reconstruction artefacts, in the form of an incomplete recovery of the oxide and an erroneous presence of carbon in the shell. By applying the correction method, these artefacts are greatly reduced. The accuracy of the method is assessed using reference X-ray lines with low absorption. - Highlights: • A novel 3D absorption correction method is proposed for 3D EDX-STEM tomography. • The absorption of X-rays along the path to the surface is calculated voxel-by-voxel. • The method is applied on highly absorbed X-rays emitted from a core/shell nanowire. • Absorption is shown to cause major artefacts in the reconstruction. • Using the absorption correction method, the reconstruction artefacts are greatly reduced.
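
    The voxel-by-voxel correction amounts to integrating the attenuation coefficient along the ray from each emitting voxel to the surface. A minimal sketch, assuming a straight path, a uniform step length, and a precomputed per-voxel μ for one X-ray line; the volume and values are hypothetical.

    ```python
    import numpy as np

    def absorption_factor(mu, start, direction, step=1.0):
        """Integrate mu (per voxel, one X-ray line) along the straight path
        from the emitting voxel to the sample surface; exp(-integral) is the
        fraction of generated X-rays reaching the detector."""
        pos = np.array(start, dtype=float)
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)
        total = 0.0
        while all(0 <= p < s for p, s in zip(pos, mu.shape)):
            total += mu[tuple(pos.astype(int))] * step
            pos += d * step
        return np.exp(-total)

    mu = np.full((32, 32, 32), 0.01)       # hypothetical attenuation volume
    f = absorption_factor(mu, start=(16, 16, 16), direction=(0, 0, 1))
    # generated intensity = detected intensity / f
    ```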

  5. Quantitative chemical exchange saturation transfer (qCEST) MRI - omega plot analysis of RF-spillover-corrected inverse CEST ratio asymmetry for simultaneous determination of labile proton ratio and exchange rate.

    Science.gov (United States)

    Wu, Renhua; Xiao, Gang; Zhou, Iris Yuwen; Ran, Chongzhao; Sun, Phillip Zhe

    2015-03-01

    Chemical exchange saturation transfer (CEST) MRI is sensitive to labile proton concentration and exchange rate, thus allowing measurement of dilute CEST agents and microenvironmental properties. However, the CEST measurement depends not only on the CEST agent properties but also on the experimental conditions. Quantitative CEST (qCEST) analysis has been proposed to address the limitations of the commonly used simplistic CEST-weighted calculation. Recent research has shown that the concomitant direct RF saturation (spillover) effect can be corrected using an inverse CEST ratio calculation. We postulated that a simplified qCEST analysis is feasible with omega plot analysis of the inverse CEST asymmetry calculation. Specifically, simulations showed that the numerically derived labile proton ratio and exchange rate were in good agreement with input values. In addition, the qCEST analysis was confirmed experimentally in a phantom with concurrent variation in CEST agent concentration and pH, and the derived labile proton ratio increased linearly with creatine concentration. The simplified qCEST analysis can thus simultaneously determine the labile proton ratio and exchange rate in a relatively complex in vitro CEST system.
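
    A sketch of the omega-plot step, assuming the Dixon-type linear relation between the inverse CEST metric and 1/ω1² (stated here as an assumption; the paper derives the spillover-corrected form). The synthetic data are constructed to satisfy the assumed model exactly.

    ```python
    import numpy as np

    def omega_plot_fit(omega1, cest_inv, r1w=1.0 / 1.5):
        """Fit cest_inv = intercept + slope/omega1^2, then (assumed
        Dixon-type relations): ksw = sqrt(slope/intercept) and
        fb = r1w / (intercept * ksw)."""
        slope, intercept = np.polyfit(1.0 / omega1**2, cest_inv, 1)
        ksw = np.sqrt(slope / intercept)
        fb = r1w / (intercept * ksw)
        return ksw, fb

    # toy data: B1 of 0.5-2.0 uT, gamma = 42.58 Hz/uT for 1H
    w1 = 2 * np.pi * np.array([0.5, 0.75, 1.0, 1.5, 2.0]) * 42.58  # rad/s
    ksw_true, fb_true, r1w = 100.0, 0.001, 1.0 / 1.5
    y = r1w / (fb_true * ksw_true) * (1 + ksw_true**2 / w1**2)
    print(omega_plot_fit(w1, y))   # recovers (ksw_true, fb_true)
    ```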

  6. Methods of correction of carriage of junior schoolchildren by facilities of physical exercises

    Directory of Open Access Journals (Sweden)

    Gagara V.F.

    2012-08-01

    The results of the influence of physical rehabilitation methods on children's bodies are presented. The study involved 16 primary school children with scoliotic changes of the thoracic spine. The complex of physical rehabilitation methods included special corrective and general health-improving exercises, therapeutic gymnastics, and positional correction. Therapeutic gymnastics sessions of 30-45 minutes were conducted 3-4 times per week. An improvement in the indices of spine mobility and posture of the schoolchildren was noted; the absolute indices of posture and spine flexibility approached normal physiological values. A rehabilitation complex is recommended that includes elements of corrective gymnastics, therapeutic physical culture, positional correction, and massage of the trunk muscles. It is also necessary to adhere to a rational daily regimen and diet, to ensure that working furniture meets normative parameters, and to maintain self-monitoring of posture.

  7. An efficient shutter-less non-uniformity correction method for infrared focal plane arrays

    Science.gov (United States)

    Huang, Xiyan; Sui, Xiubao; Zhao, Yao

    2017-02-01

    The non-uniform response of infrared focal plane array (IRFPA) detectors degrades images with fixed pattern noise. At present, it is common to use a shutter to block the radiation from the target and to update the non-uniformity correction parameters in infrared imaging systems. The use of a shutter causes the image to 'freeze', and inevitably raises problems of system stability and reliability, power consumption, and concealment of infrared detection. In this paper, we present an efficient shutter-less non-uniformity correction (NUC) method for infrared focal plane arrays. Using calibration data acquired in a thermostat, the imaging system calculates, in real time, the infrared radiation incident from the shell; the detector output, with the shell radiation removed, is then corrected by the gain coefficients. This method has been tested in a real infrared imaging system, reaching a high correction level, reducing fixed pattern noise, and adapting to a wide temperature range.
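
    The correction itself reduces to simple per-pixel arithmetic once the shell radiation has been predicted from the thermostat calibration data; a minimal sketch with hypothetical arrays.

    ```python
    import numpy as np

    def shutterless_nuc(raw, gain, shell_radiation):
        """Subtract the (temperature-dependent) shell radiation predicted
        from thermostat calibration, then apply per-pixel gain to flatten
        the response. All arguments are per-pixel arrays."""
        return gain * (raw - shell_radiation)

    raw = np.random.default_rng(6).normal(1000.0, 20.0, (240, 320))
    gain = np.ones((240, 320))                 # from radiometric calibration
    shell = np.full((240, 320), 150.0)         # predicted shell contribution
    corrected = shutterless_nuc(raw, gain, shell)
    ```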

  8. Efficient color correction method for smartphone camera-based health monitoring application.

    Science.gov (United States)

    Duc Dang; Chae Ho Cho; Daeik Kim; Oh Seok Kwon; Jo Woon Chong

    2017-07-01

    Smartphone health monitoring applications have recently been highlighted due to the rapid development of smartphone hardware and software. However, the color characteristics of images captured by different smartphone models are dissimilar, and this difference may give non-identical health monitoring results when applications monitor physiological information using the embedded smartphone cameras. In this paper, we investigate the differences in the color properties of images captured by different smartphone models and apply a color correction method to adjust the dissimilar color values obtained from different smartphone cameras. Experimental results show that images color-corrected using this method exhibit much smaller color intensity errors than images without correction. These results can be applied to enhance the consistency of smartphone camera-based health monitoring applications by reducing color intensity errors among images obtained from different smartphones.
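
    One generic way to implement such a correction is a least-squares 3x3 color correction matrix fitted on a color chart; this sketches that standard approach, which is not necessarily the paper's exact method, and all data are synthetic.

    ```python
    import numpy as np

    def fit_ccm(measured_rgb, reference_rgb):
        """Least-squares 3x3 matrix mapping a phone's measured RGB values
        of a chart onto reference values; apply as measured @ M."""
        M, _, _, _ = np.linalg.lstsq(measured_rgb, reference_rgb, rcond=None)
        return M

    meas = np.random.default_rng(7).uniform(0.0, 1.0, (24, 3))  # 24-patch chart
    ref = np.clip(meas @ np.diag([1.1, 0.9, 1.0]), 0.0, 1.0)    # synthetic shift
    M = fit_ccm(meas, ref)
    corrected = meas @ M
    ```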

  9. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    International Nuclear Information System (INIS)

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees

    2015-01-01

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts
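
    The HU-mapping step can be sketched as a lookup built from paired MRI intensities and HU values on an artifact-free slice; here a simple nearest-intensity lookup stands in for the paper's more comprehensive analysis (which also relies on the deformable registration). All arrays are synthetic placeholders.

    ```python
    import numpy as np

    def predict_hu(mri_corrupt, mri_clean, hu_clean):
        """Pair MRI intensities with HU on an artifact-free slice, then
        predict HU for corrupted pixels from their MRI intensities."""
        order = np.argsort(mri_clean.ravel())
        mri_sorted = mri_clean.ravel()[order]
        hu_sorted = hu_clean.ravel()[order]
        idx = np.searchsorted(mri_sorted, mri_corrupt.ravel())
        idx = idx.clip(0, mri_sorted.size - 1)
        return hu_sorted[idx].reshape(mri_corrupt.shape)

    rng = np.random.default_rng(8)
    mri_clean = rng.uniform(0.0, 1.0, (64, 64))
    hu_clean = 1000.0 * mri_clean - 200.0     # toy intensity-to-HU relation
    mri_corrupt = rng.uniform(0.0, 1.0, (64, 64))
    hu_pred = predict_hu(mri_corrupt, mri_clean, hu_clean)
    ```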

  10. Investigation of Compton scattering correction methods in cardiac SPECT by Monte Carlo simulations

    International Nuclear Information System (INIS)

    Silva, A.M. Marques da; Furlan, A.M.; Robilotta, C.C.

    2001-01-01

    The goal of this work was to use Monte Carlo simulations to investigate the effects of two scatter correction methods, the dual energy window (DEW) and the dual photopeak window (DPW), in quantitative cardiac SPECT reconstruction. The MCAT torso-cardiac phantom, with 99mTc and a non-uniform attenuation map, was simulated. Two different photopeak windows were evaluated for the DEW method: 15% and 20%. Two 10%-wide subwindows centered symmetrically within the photopeak were used in the DPW method. Iterative ML-EM reconstruction with a modified projector-backprojector for attenuation correction was applied. Results indicated that the choice of the scattering and photopeak windows determines the correction accuracy. For the 15% window, a fitted scatter fraction gives better results than k = 0.5. For the 20% window, DPW is the best method, but it requires parameter estimation using Monte Carlo simulations. (author)
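
    The DEW correction itself is one line; a sketch with synthetic projections, using the classical k = 0.5 that the study compares against a fitted scatter fraction.

    ```python
    import numpy as np

    def dew_correct(photopeak_counts, scatter_window_counts, k=0.5):
        """Dual-energy-window scatter correction: estimate scatter in the
        photopeak as k times the counts in a lower scatter window."""
        return photopeak_counts - k * scatter_window_counts

    proj_pp = np.random.default_rng(9).poisson(80, (64, 64)).astype(float)
    proj_sc = np.random.default_rng(10).poisson(30, (64, 64)).astype(float)
    primary = dew_correct(proj_pp, proj_sc, k=0.5)
    ```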

  11. Ballistic deficit correction methods for large Ge detectors-high counting rate study

    International Nuclear Information System (INIS)

    Duchene, G.; Moszynski, M.

    1995-01-01

    This study compares different ballistic deficit correction methods as a function of input count rate (from 3 to 50 kcounts/s), using four large Ge detectors of about 70% relative efficiency. It turns out that the Tennelec TC245 linear amplifier in BDC mode (Hinshaw method) is the best compromise for energy resolution throughout. All correction methods lead to narrow sum peaks indistinguishable from single γ lines. The full-energy-peak throughput is found to be representative of the pile-up inspection dead time of the corrector circuits. This work also presents a new and simple representation, plotting energy resolution and throughput simultaneously versus input count rate. (TEC). 12 refs., 11 figs

  12. N3 Bias Field Correction Explained as a Bayesian Modeling Method

    DEFF Research Database (Denmark)

    Larsen, Christian Thode; Iglesias, Juan Eugenio; Van Leemput, Koen

    2014-01-01

    Although N3 is perhaps the most widely used method for MRI bias field correction, its underlying mechanism is in fact not well understood. Specifically, the method relies on a relatively heuristic recipe of alternating iterative steps that does not optimize any particular objective function. In t...

  13. Integrals of random fields treated by the model correction factor method

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  14. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  15. Measurement of saturated hydraulic conductivity in fine-grained glacial tills in Iowa: Comparison of in situ and laboratory methods

    Science.gov (United States)

    Bruner, D. Roger; Lutenegger, Alan J.

    1994-01-01

    Nested-standpipe and vibrating-wire piezometers were installed in Pre-Illinoian Wolf Creek and Albernett formations at the Eastern Iowa Till Hydrology Site located in Linn County, Iowa. These surficial deposits are composed of fine-grained glacial diamicton (till) with occasional discontinuous lenses of sand and silt. They overlie the Silurian (dolomite) aquifer which provides private, public, and municipal drinking water supplies in the region. The saturated hydraulic conductivity of the Wolf Creek Formation was investigated in a sub-area of the Eastern Iowa Till Hydrology Site. Calculations of saturated hydraulic conductivity were based on laboratory flexible-wall permeameter tests, bailer tests, and pumping test data. Results show that bulk hydraulic conductivity increases by several orders of magnitude as the tested volume of till increases. Increasing values of saturated hydraulic conductivity at larger spatial scales conceptually support a double-porosity flow model for this till.

  16. Automated potentiometric titrations in KCl/water-saturated octanol: method for quantifying factors influencing ion-pair partitioning.

    Science.gov (United States)

    Scherrer, Robert A; Donovan, Stephen F

    2009-04-01

    The knowledge base of factors influencing ion pair partitioning is very sparse, primarily because of the difficulty in determining accurate log P(I) values of desirable low molecular weight (MW) reference compounds. We have developed a potentiometric titration procedure in KCl/water-saturated octanol that provides a link to log P(I) through the thermodynamic cycle of ionization and partitioning. These titrations have the advantage of being independent of the magnitude of log P, while maintaining a reproducibility of a few hundredths of a log P in the calculated difference between log P neutral and log P ion pair (diff (log P(N - I))). Simple model compounds can be used. The titration procedure is described in detail, along with a program for calculating pK(a)'' values incorporating the ionization of water in octanol. Hydrogen bonding and steric factors have a greater influence on ion pairs than they do on neutral species, yet these factors are missing from current programs used to calculate log P(I) and log D. In contrast to the common assumption that diff (log P(N - I)) is the same for all amines, they can actually vary more than 3 log units, as in our examples. A major factor affecting log P(I) is the ability of water and the counterion to approach the charge center. Bulky substituents near the charge center have a negative influence on log P(I). On the other hand, hydrogen bonding groups near the charge center have the opposite effect by lowering the free energy of the ion pair. The use of this titration method to determine substituent ion pair stabilization values (IPS) should bring about more accurate log D calculations and encourage species-specific QSAR involving log D(N) and log D(I). This work also brings attention to the fascinating world of nature's highly stabilized ion pairs.
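
    The practical payoff the authors point to is species-specific log D calculation; a minimal sketch for a monoprotic base, combining Henderson-Hasselbalch fractions with the neutral and ion-pair partition coefficients. All input values are hypothetical.

    ```python
    import numpy as np

    def log_d(ph, pka, log_p_n, diff_n_i):
        """log D of a monoprotic base from the neutral log P and the
        ion-pair coefficient log P_I = log P_N - diff(log P(N - I)),
        weighting each species by its fraction at the given pH."""
        log_p_i = log_p_n - diff_n_i
        f_ion = 1.0 / (1.0 + 10.0 ** (ph - pka))   # protonated fraction
        return np.log10((1.0 - f_ion) * 10.0**log_p_n + f_ion * 10.0**log_p_i)

    print(log_d(ph=7.4, pka=9.0, log_p_n=2.5, diff_n_i=3.0))
    ```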

  18. A Geometric Correction Method of Plane Image Based on OpenCV

    Directory of Open Access Journals (Sweden)

    Li Xiaopeng

    2014-02-01

    Using OpenCV, a geometric correction method for plane images, based on a single grid image taken from an unknown camera position, is presented. The method can remove perspective and lens distortions from an image. It is simple, easy to implement, and highly efficient. Experiments indicate that the method has high precision and can be used in domains such as plane measurement.
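
    A minimal sketch of the perspective part of such a correction with the OpenCV Python bindings; the grid-corner coordinates are hard-coded placeholders for what corner detection on the grid image would provide.

    ```python
    import cv2
    import numpy as np

    img = np.zeros((480, 640, 3), np.uint8)   # stand-in for the photograph
    # four detected grid corners in the image (hypothetical values)
    src = np.float32([[120, 80], [520, 60], [560, 420], [90, 440]])
    # target fronto-parallel rectangle
    dst = np.float32([[0, 0], [400, 0], [400, 400], [0, 400]])

    H = cv2.getPerspectiveTransform(src, dst)
    corrected = cv2.warpPerspective(img, H, (400, 400))
    # removing lens distortion would additionally require cv2.undistort
    # with calibrated camera intrinsics, which a grid image can provide
    ```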

  19. A New Online Calibration Method Based on Lord's Bias-Correction.

    Science.gov (United States)

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    Online calibration techniques have been widely employed to calibrate new items due to their advantages. Method A is the simplest online calibration method and has recently attracted much attention from researchers. However, a key assumption of Method A is that it treats the person-parameter estimates θ̂_s (obtained by maximum likelihood estimation [MLE]) as their true values θ_s; thus, the deviation of the estimated θ̂_s from their true values might yield inaccurate item calibration when the deviation is non-negligible. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂_s, which may adversely affect the item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and that MLE-LBCI-Method A did outperform Method A in almost all experimental conditions.

  20. An Investigation on the Efficiency Correction Method of the Turbocharger at Low Speed

    Directory of Open Access Journals (Sweden)

    Jin Eun Chung

    2018-01-01

    Full Text Available Heat transfer in the turbocharger occurs due to the temperature difference between the exhaust gas and the intake air, coolant, and oil. This heat transfer distorts the measured efficiencies of the compressor and turbine, an effect known to be exacerbated at low rotational speeds. This study therefore proposes a method to mitigate the distortion of test data caused by heat transfer in the turbocharger. With this method, a representative compressor temperature is defined and the heat transfer rate of the compressor is calculated by considering the effect of the oil and turbine inlet temperatures at low rotational speeds, with the cold and hot gas tests performed simultaneously. The correction of compressor efficiency as a function of turbine inlet temperature was performed through both hot and cold gas tests; the results showed a maximum error of 16% prior to correction and a maximum error of 3% after correction. In addition, the results show that the heat-transfer-induced efficiency distortion of the turbocharger can be corrected by means of the combined turbine efficiency, computed from the corrected compressor efficiency.

  1. Evaluation of bias-correction methods for ensemble streamflow volume forecasts

    Directory of Open Access Journals (Sweden)

    T. Hashino

    2007-01-01

    Full Text Available Ensemble prediction systems are used operationally to make probabilistic streamflow forecasts for seasonal time scales. However, hydrological models used for ensemble streamflow prediction often have simulation biases that degrade forecast quality and limit the operational usefulness of the forecasts. This study evaluates three bias-correction methods for ensemble streamflow volume forecasts. All three adjust the ensemble traces using a transformation derived with simulated and observed flows from a historical simulation. The quality of probabilistic forecasts issued when using the three bias-correction methods is evaluated using a distributions-oriented verification approach. Comparisons are made of retrospective forecasts of monthly flow volumes for a north-central United States basin (Des Moines River, Iowa), issued sequentially for each month over a 48-year record. The results show that all three bias-correction methods significantly improve forecast quality by eliminating unconditional biases and enhancing the potential skill. Still, subtle differences in the attributes of the bias-corrected forecasts have important implications for their use in operational decision-making. Diagnostic verification distinguishes these attributes in a context meaningful for decision-making, providing criteria to choose among bias-correction methods with comparable skill.

  2. A multilevel correction adaptive finite element method for Kohn-Sham equation

    Science.gov (United States)

    Hu, Guanghui; Xie, Hehu; Xu, Fei

    2018-02-01

    In this paper, an adaptive finite element method with a multilevel correction technique is proposed for solving the Kohn-Sham equation. In the method, the Kohn-Sham equation is solved on a fixed, appropriately coarse mesh with the finite element method, while the finite element space is successively improved by solving derived boundary value problems on a series of adaptively refined meshes. A main feature of the method is that solving the large-scale Kohn-Sham system is effectively avoided, and the derived boundary value problems can be handled efficiently by classical methods such as the multigrid method. Hence, significant acceleration is obtained in solving the Kohn-Sham equation with the proposed multilevel correction technique. The performance of the method is examined by a variety of numerical experiments.

  3. Analysis of slippery droplet on tilted plate by development of optical correction method

    Science.gov (United States)

    Ko, Han Seo; Gim, Yeonghyeon; Choi, Sung Ho; Jang, Dong Kyu; Sohn, Dong Kee

    2017-11-01

    Because of distortion effects on the surface of a sessile droplet, the inner flow field of the droplet measured by the PIV (particle image velocimetry) method has low reliability. To solve this problem, many researchers have studied and developed optical correction methods. However, most existing methods consider only axisymmetric droplets and cannot be applied to cases such as a tilted droplet or other asymmetrically shaped droplets. For the optical correction of an asymmetrically shaped droplet, the surface function was calculated by three-dimensional reconstruction using an ellipse curve-fitting method, and the optical correction using the surface function was verified by numerical simulation. The developed method was then applied to reconstruct the inner flow field of a droplet on a tilted plate. A colloidal water droplet on the tilted surface was used, and the distortion effect on the surface of the droplet was calculated. Using the obtained results and the PIV method, the corrected flow field for the inner and interface regions of the droplet was reconstructed. Consequently, the distortion-induced error in the velocity vectors located near the apex of the droplet was removed. National Research Foundation (NRF) of Korea, (2016R1A2B4011087).

  4. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    Science.gov (United States)

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed analyte concentration divided by the observed urinary creatinine concentration (UCR). This ratio-based method is flawed since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors, like age, gender, and race/ethnicity, that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of the ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method; however, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group in the numerator of this ratio (for example, males), these ratios were higher for the model-based method. When estimated UCRs were lower for the group in the numerator of this ratio (for example, NHW), these ratios were higher for the ratio-based method. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
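
    A minimal sketch of the two approaches on simulated data (all variable names and effect sizes hypothetical): the ratio-based correction divides by UCR, while the model-based correction enters log(UCR) as a regression covariate alongside demographic factors.

```python
# Sketch contrasting ratio-based and model-based creatinine correction.
# Data are hypothetical; the model-based variant follows the idea of using
# urinary creatinine (UCR) as a covariate rather than a divisor.
import numpy as np

rng = np.random.default_rng(1)
n = 200
ucr = rng.lognormal(mean=0.0, sigma=0.4, size=n)       # urinary creatinine
sex = rng.integers(0, 2, size=n)                       # 0 = female, 1 = male
analyte = np.exp(0.5 * sex + 0.8 * np.log(ucr) + rng.normal(0, 0.3, n))

# Ratio-based correction: divide the analyte concentration by creatinine.
ratio_corrected = analyte / ucr

# Model-based correction: regress log(analyte) on log(UCR) and covariates,
# then compare groups through the fitted coefficients.
X = np.column_stack([np.ones(n), np.log(ucr), sex])
beta, *_ = np.linalg.lstsq(X, np.log(analyte), rcond=None)
print("sex effect (model-based, log scale):", beta[2])
print("male/female ratio of GMs (ratio-based):",
      np.exp(np.log(ratio_corrected[sex == 1]).mean()
             - np.log(ratio_corrected[sex == 0]).mean()))
```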

  5. Method and apparatus for reconstructing in-cylinder pressure and correcting for signal decay

    Science.gov (United States)

    Huang, Jian

    2013-03-12

    A method comprises steps for reconstructing in-cylinder pressure data from a vibration signal collected from a vibration sensor mounted on an engine component where it can generate a signal with a high signal-to-noise ratio, and correcting the vibration signal for errors introduced by vibration signal charge decay and sensor sensitivity. The correction factors are determined as a function of estimated motoring pressure and the measured vibration signal itself with each of these being associated with the same engine cycle. Accordingly, the method corrects for charge decay and changes in sensor sensitivity responsive to different engine conditions to allow greater accuracy in the reconstructed in-cylinder pressure data. An apparatus is also disclosed for practicing the disclosed method, comprising a vibration sensor, a data acquisition unit for receiving the vibration signal, a computer processing unit for processing the acquired signal and a controller for controlling the engine operation based on the reconstructed in-cylinder pressure.

  6. Correction method of slit modulation transfer function on digital medical imaging system

    International Nuclear Information System (INIS)

    Kim, Jung Min; Jung, Hoi Woun; Min, Jung Whan; Im, Eon Kyung

    2006-01-01

    Using CR image pixel data, we examined how to calculate the MTF and the digital characteristic curve. Pixel data printed by digital x-ray equipment can be converted to a text file (Excel). We describe how to compute and correct the sharpness of digital images via the MTF, following Fujita's slit method. Excel was used for the calculations from slit radiographs. The digital characteristic curve, line spread function, discrete Fourier transform, and fast Fourier transform were computed in sequence. A big advantage of this method is that it is easily understood and yields results without a costly program and without deep knowledge of a computer language. Different correction methods yield noticeably different values; we therefore need to be familiar with the appropriate correction method and should perform many experiments to obtain precise MTF figures.
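
    A compact numpy sketch of the slit-method chain described above, using a synthetic Gaussian line spread function and an assumed 0.1 mm sampling pitch: the MTF is the magnitude of the Fourier transform of the LSF, normalized at zero frequency.

```python
# Sketch of the slit-method MTF calculation: Fourier transform the line
# spread function (LSF) and normalize by the zero-frequency component.
# The LSF and pixel pitch below are synthetic, for illustration only.
import numpy as np

pixel_pitch = 0.1                      # mm, assumed sampling pitch
x = np.arange(-64, 64) * pixel_pitch
lsf = np.exp(-0.5 * (x / 0.15) ** 2)   # synthetic LSF

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                          # normalize so MTF(0) = 1
freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch)  # cycles/mm

for f, m in list(zip(freqs, mtf))[:6]:
    print(f"{f:5.2f} cycles/mm  MTF = {m:.3f}")
```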

  7. A third-generation dispersion and third-generation hydrogen bonding corrected PM6 method

    DEFF Research Database (Denmark)

    Kromann, Jimmy Charnley; Christensen, Anders Steen; Svendsen, Casper Steinmann

    2014-01-01

    We present new dispersion and hydrogen bond corrections to the PM6 method, PM6-D3H+, and its implementation in the GAMESS program. The method combines the DFT-D3 dispersion correction by Grimme et al. with a modified version of the H+ hydrogen bond correction by Korth. Overall, the interaction...... in GAMESS, while the corresponding numbers for PM6-DH+ implemented in MOPAC are 54, 17, 15, and 2. The PM6-D3H+ method as implemented in GAMESS offers an attractive alternative to PM6-DH+ in MOPAC in cases where the LBFGS optimizer must be used and a vibrational analysis is needed, e.g., when computing...... vibrational free energies. While the GAMESS implementation is up to 10 times slower for geometry optimizations of proteins in bulk solvent, compared to MOPAC, it is sufficiently fast to make geometry optimizations of small proteins practically feasible....

  8. Correction to the count-rate detection limit and sample/blank time-allocation methods

    International Nuclear Information System (INIS)

    Alvarez, Joseph L.

    2013-01-01

    A common form of count-rate detection limits contains a propagation of uncertainty error. This error originated in methods to minimize uncertainty in the subtraction of the blank counts from the gross sample counts by allocation of blank and sample counting times. Correct uncertainty propagation showed that the time allocation equations have no solution. This publication presents the correct form of count-rate detection limits. -- Highlights: •The paper demonstrated a proper method of propagating uncertainty of count rate differences. •The standard count-rate detection limits were in error. •Count-time allocation methods for minimum uncertainty were in error. •The paper presented the correct form of the count-rate detection limit. •The paper discussed the confusion between count-rate uncertainty and count uncertainty
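
    For concreteness, a minimal sketch of the corrected propagation for a net count rate (standard Poisson counting statistics; the numbers are hypothetical):

```python
# Correct uncertainty propagation for a net count rate, the point at issue
# in the record above: with Poisson counts Ng, Nb over times tg, tb, the net
# rate is r = Ng/tg - Nb/tb with variance Ng/tg**2 + Nb/tb**2.
from math import sqrt

def net_rate(Ng, tg, Nb, tb):
    r = Ng / tg - Nb / tb
    sigma = sqrt(Ng / tg**2 + Nb / tb**2)
    return r, sigma

r, s = net_rate(Ng=480, tg=60.0, Nb=300, tb=60.0)   # hypothetical counts
print(f"net rate = {r:.3f} +/- {s:.3f} counts/s")
```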

  9. Characteristic of methods for prevention and correction of moral of alienation of students

    Directory of Open Access Journals (Sweden)

    Z. K. Malieva

    2014-01-01

    Full Text Available Moral alienation is a complex integrative phenomenon characterized by an individual's rejection of the universal spiritual and moral values of society. The last opportunity for a purposeful, competent solution to the problem of an individual's moral alienation lies in the space of professional education. The subject of study of this article is to identify methods for the prevention and correction of moral alienation of students that can be used by teachers both in extracurricular activities and in conducting classes in humanitarian disciplines. The purpose of the work is to study methods and techniques that enhance the effectiveness of the prevention and correction of moral alienation of students, and to identify their characteristics and application in the educational activities of teachers. The paper concretizes a definition of methods to prevent and correct the moral alienation of students, which represent a system of interrelated actions of educator and students aimed at: redefining negative values, rules, and norms of behavior; and overcoming negative mental states, negative attitudes, interests, and aptitudes of educatees. The article distinguishes and characterizes the most effective methods for the prevention and correction of moral alienation of students: conviction; the method of "Socrates"; understanding; semiotic analysis; suggestion; and the method of "explosion". It also presents the rules and necessary conditions for the application of these methods in the educational process. It is ascertained that the choice of effective preventive and corrective methods and techniques is determined by the content of the intrapersonal, psychological sources of moral alienation, associated with the following: negative attitudes due to previous experience; orientation toward particular negative values; inadequate self-esteem, which has a negative impact on the development and functioning of the individual's psyche and behavior; and mental states. The conclusions of the

  10. Output power PDF of a saturated semiconductor optical amplifier: Second-order noise contributions by path integral method

    DEFF Research Database (Denmark)

    Öhman, Filip; Mørk, Jesper; Tromborg, Bjarne

    2007-01-01

    We have developed a second-order small-signal model for describing the nonlinear redistribution of noise in a saturated semiconductor optical amplifier. In this paper, the details of the model are presented. A numerical example is used to compare the model to statistical simulations. We show that...

  11. Orbit Determination from Tracking Data of Artificial Satellite Using the Method of Differential Correction

    Directory of Open Access Journals (Sweden)

    Byoung-Sun Lee

    1988-06-01

    Full Text Available The differential correction process of determining osculating orbital elements, as accurately as possible at a given instant of time, from tracking data of an artificial satellite was accomplished. Preliminary orbital elements were used as initial values of the differential correction procedure and iterated until the residual between real observations (O) and computed observations (C) was minimized. The tracked satellite was NOAA-9 of the TIROS-N series. Two types of tracking data were used: prediction data precomputed from the mean orbital elements of TBUS, and real data obtained by tracking the 1.707 GHz HRPT signal of NOAA-9 using a 5 meter auto-track antenna at the Radio Research Laboratory. Depending on the tracking data, either the Gauss method or the Herrick-Gibbs method was applied to preliminary orbit determination. In the differential correction stage, both Escobal's (1975) analytical method and numerical methods were used, and their results are nearly consistent. The differentially corrected orbit converged to the same value in spite of the differences between the preliminary orbits of each time span.
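
    The differential-correction loop itself is generic nonlinear least squares; the sketch below shows its structure with a stand-in observation model rather than a real orbit propagator (Gauss-Newton with a numerical Jacobian is one common way to implement the iteration; the paper's exact formulation is not reproduced here).

```python
# Generic differential-correction (Gauss-Newton) loop: iteratively update the
# elements so that observed-minus-computed (O - C) residuals are minimized.
# The "model" below is a stand-in function, not a real orbit propagator.
import numpy as np

def differential_correction(model, elements, t_obs, y_obs, iters=10, h=1e-6):
    x = np.asarray(elements, dtype=float)
    for _ in range(iters):
        resid = y_obs - model(x, t_obs)                  # O - C residuals
        J = np.empty((len(t_obs), len(x)))               # numerical Jacobian
        for j in range(len(x)):
            dx = np.zeros_like(x)
            dx[j] = h
            J[:, j] = (model(x + dx, t_obs) - model(x - dx, t_obs)) / (2 * h)
        delta, *_ = np.linalg.lstsq(J, resid, rcond=None)
        x = x + delta                                    # corrected elements
    return x

model = lambda p, t: p[0] * np.sin(p[1] * t + p[2])   # toy observation model
t = np.linspace(0.0, 10.0, 50)
y = model(np.array([1.2, 0.9, 0.3]), t)               # synthetic observations
print(differential_correction(model, [1.1, 0.95, 0.2], t, y))
```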

  12. Bias-correction of CORDEX-MENA projections using the Distribution Based Scaling method

    Science.gov (United States)

    Bosshard, Thomas; Yang, Wei; Sjökvist, Elin; Arheimer, Berit; Graham, L. Phil

    2014-05-01

    Within the Regional Initiative for the Assessment of the Impact of Climate Change on Water Resources and Socio-Economic Vulnerability in the Arab Region (RICCAR), led by UN ESCWA, CORDEX RCM projections for the Middle East Northern Africa (MENA) domain are used to drive hydrological impact models. Bias-correction of the newly available CORDEX-MENA projections is a central part of this project. In this study, the distribution based scaling (DBS) method has been applied to 6 regional climate model projections driven by 2 RCP emission scenarios. The DBS method uses a quantile mapping approach and features a conditional temperature correction dependent on the wet/dry state in the climate model data. The CORDEX-MENA domain is particularly challenging for bias-correction as it spans very diverse climates showing pronounced dry and wet seasons. Results show that the regional climate models simulate temperatures that are too low and often have a displaced rainfall band compared to WATCH ERA-Interim forcing data in the reference period 1979-2008. DBS is able to correct the temperature biases as well as some aspects of the precipitation biases. Special focus is given to the analysis of the influence of the dry-frequency bias (i.e. climate models simulating too few rain days) on the bias-corrected projections and on the modification of the climate change signal by the DBS method.
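
    A purely empirical quantile-mapping sketch in the spirit of DBS, on synthetic data: real DBS fits parametric distributions and conditions the correction on wet/dry state, which this illustration omits.

```python
# Minimal empirical quantile mapping: map each simulated value to the
# observed distribution via its quantile in the model's reference climate.
# All data below are synthetic.
import numpy as np

rng = np.random.default_rng(2)
obs_ref = rng.gamma(2.0, 3.0, 5000)          # observed, reference period
sim_ref = rng.gamma(2.0, 4.0, 5000) + 1.0    # biased model, reference period
sim_fut = rng.gamma(2.2, 4.0, 5000) + 1.0    # biased model, scenario period

def quantile_map(x, sim_ref, obs_ref):
    # Find each value's quantile in the model's reference climate...
    q = np.clip(np.searchsorted(np.sort(sim_ref), x) / len(sim_ref), 0.0, 1.0)
    # ...and read off the observed value at the same quantile.
    return np.quantile(obs_ref, q)

corrected = quantile_map(sim_fut, sim_ref, obs_ref)
print(obs_ref.mean(), sim_fut.mean(), corrected.mean())
```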

  13. A new method to make gamma-ray self-absorption correction

    International Nuclear Information System (INIS)

    Tian Dongfeng; Xie Dong; Ho Yukun; Yang Fujia

    2001-01-01

    This paper discusses a new method to directly extract the information for geometric self-absorption correction through measurement of the characteristic γ radiation emitted spontaneously from fissile nuclear material. Numerical simulation tests show that the method can extract the purely original information needed for nondestructive assay from the measured γ-ray spectra, even though the geometric shape of the sample and the materials between sample and detector are not known in advance. (author)

  14. Gluon Saturation and EIC

    Energy Technology Data Exchange (ETDEWEB)

    Sichtermann, Ernst

    2016-12-15

    The fundamental structure of nucleons and nuclear matter is described by the properties and dynamics of quarks and gluons in quantum chromodynamics. Electron-nucleon collisions are a powerful method to study this structure. As the energy of the collisions is increased, the interaction process probes regions of progressively higher gluon density. This density must eventually saturate. A high-energy polarized Electron-Ion Collider (EIC) has been proposed to observe and study the saturated gluon density regime. Selected measurements are discussed, following a brief introduction.

  15. Corrected entropy of Friedmann-Robertson-Walker universe in tunneling method

    International Nuclear Information System (INIS)

    Zhu, Tao; Ren, Ji-Rong; Li, Ming-Fan

    2009-01-01

    In this paper, we study the thermodynamic quantities of the Friedmann-Robertson-Walker (FRW) universe by using the tunneling formalism beyond the semiclassical approximation developed by Banerjee and Majhi [25]. For this we first calculate the corrected Hawking-like temperature on the apparent horizon by considering both scalar-particle and fermion tunneling. With this corrected Hawking-like temperature, the explicit expressions of the corrected entropy of the apparent horizon for various gravity theories, including Einstein gravity, Gauss-Bonnet gravity, Lovelock gravity, f(R) gravity, and scalar-tensor gravity, are computed. Our results show that the corrected entropy formulae for the different gravity theories can be written in a general expression (4.39) of the same form. It is also shown that this expression is valid for black holes. This might imply that the expression for the corrected entropy derived from the tunneling method is independent of the gravity theory, the spacetime, and the dimension of the spacetime. Moreover, it is concluded that the basic thermodynamic property that the corrected entropy on the apparent horizon is a state function is satisfied by the FRW universe

  16. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers

    OpenAIRE

    Dobie, Robert A; Wojcik, Nancy C

    2015-01-01

    Objectives The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved....

  17. A Novel Flood Forecasting Method Based on Initial State Variable Correction

    Directory of Open Access Journals (Sweden)

    Kuang Li

    2017-12-01

    Full Text Available The influence of initial state variables on flood forecasting accuracy with conceptual hydrological models is analyzed in this paper, and a novel flood forecasting method based on correction of initial state variables is proposed. The new method is abbreviated as ISVC (Initial State Variable Correction). The ISVC takes the residual between the measured and forecasted flows during the initial period of the flood event as the objective function, and it uses a particle swarm optimization algorithm to correct the initial state variables, which are then used to drive the flood forecasting model. Historical flood events of 11 watersheds in south China are forecasted and verified, and important issues concerning the ISVC application are then discussed. The study results show that the ISVC is effective and applicable in flood forecasting tasks. It can significantly improve flood forecasting accuracy in most cases.

  18. Method for the determination of spectroradiometric corrections of data from multichannel aerospatial spectrometers

    International Nuclear Information System (INIS)

    Bakalova, K.P.; Bakalov, D.D.

    1984-01-01

    Various factors in aerospatial operating conditions may lead to changes in the transmission characteristics of the electron-optical environment of spectrometers for remote sensing of the Earth. Consequently, the data obtained need spectroradiometric corrections. In this paper, a unified approach to the determination of these corrections is suggested. The method uses measurements of standard sources with a smooth emission spectrum much wider than the width of the channels, such as an incandescent-filament lamp, the Sun, and other natural objects, without special spectral reference standards. Additional information about the character of the changes occurring in the measurements may considerably simplify the determination of corrections by setting appropriate values of a coefficient and the spectral shift. The method has been used with the Spectrum-15 and SMP-32 spectrometers on the Salyut-7 orbital station and the 'Meteor-Priroda' satellite of the Bulgaria-1300-II project

  19. Precise method for correcting count-rate losses in scintillation cameras

    International Nuclear Information System (INIS)

    Madsen, M.T.; Nickles, R.J.

    1986-01-01

    Quantitative studies performed with scintillation detectors often require corrections for lost data because of the finite resolving time of the detector. Methods that monitor losses by means of a reference source or pulser have unacceptably large statistical fluctuations associated with their correction factors. Analytic methods that model the detector as a paralyzable system require an accurate estimate of the system resolving time. Because the apparent resolving time depends on many variables, including the window setting, source distribution, and the amount of scattering material, significant errors can be introduced by relying on a resolving time obtained from phantom measurements. These problems can be overcome by curve-fitting the data from a reference source to a paralyzable model in which the true total count rate in the selected window is estimated from the observed total rate. The resolving time becomes a free parameter in this method which is optimized to provide the best fit to the observed reference data. The fitted curve has the inherent accuracy of the reference source method with the precision associated with the observed total image count rate. Correction factors can be simply calculated from the ratio of the true reference source rate and the fitted curve. As a result, the statistical uncertainty of the data corrected by this method is not significantly increased
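
    A sketch of the curve-fitting idea on synthetic data: fit the paralyzable-model resolving time tau from reference-source data, where the observed rate is m = n*exp(-n*tau), then invert the model on the low-rate branch to recover the true rate.

```python
# Fit the paralyzable dead-time model m = n * exp(-n * tau) to reference
# data, then invert it (low-rate branch) to correct observed rates.
# All rates here are synthetic.
import numpy as np
from scipy.optimize import brentq, curve_fit

def observed(n, tau):
    return n * np.exp(-n * tau)

rng = np.random.default_rng(3)
true_rates = np.linspace(1e3, 2e5, 40)        # known reference rates (1/s)
tau_true = 4e-6                                # s, synthetic resolving time
meas = observed(true_rates, tau_true) * (1 + 0.01 * rng.normal(size=40))

(tau_fit,), _ = curve_fit(observed, true_rates, meas, p0=[1e-6])

def true_rate(m, tau):
    """Invert m = n exp(-n tau) on the branch n < 1/tau."""
    return brentq(lambda n: observed(n, tau) - m, 0.0, 1.0 / tau)

m_obs = observed(1.5e5, tau_true)
print(tau_fit, true_rate(m_obs, tau_fit))      # ~4e-6 and ~1.5e5
```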

  20. Comparison of fluorescence rejection methods of baseline correction and shifted excitation Raman difference spectroscopy

    Science.gov (United States)

    Cai, Zhijian; Zou, Wenlong; Wu, Jianhong

    2017-10-01

    Raman spectroscopy has been extensively used in biochemical tests, explosive detection, food additives, and environmental pollutants. However, fluorescence disturbance poses a serious problem for applications of portable Raman spectrometers. Currently, baseline correction and shifted-excitation Raman difference spectroscopy (SERDS) are the most prevalent fluorescence-suppression methods. In this paper, we compared the performance of the baseline correction and SERDS methods, experimentally and in simulation. The comparison demonstrates that baseline correction can produce an acceptable fluorescence-removed Raman spectrum if the original Raman signal has a good signal-to-noise ratio, but it cannot recover small Raman signals from a large noise background. With the SERDS method, Raman signals can be clearly extracted even when they are very weak compared to the fluorescence intensity and noise level, and the fluorescence background can be completely rejected; the recovered Raman spectrum has a good signal-to-noise ratio. Baseline correction is thus more suitable for large bench-top Raman systems with better signal quality, while the SERDS method is more suitable for noisy devices, especially portable Raman spectrometers.
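
    The record does not specify the baseline-correction algorithm; a common choice in the Raman literature is asymmetric least squares (AsLS) smoothing, sketched below on a synthetic spectrum with typical (not the paper's) parameters.

```python
# Asymmetric least squares (AsLS) baseline estimation, a common baseline-
# correction choice (the record does not name its algorithm). lam and p are
# typical literature values, not the paper's settings.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    L = len(y)
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(L - 2, L))
    w = np.ones(L)
    for _ in range(n_iter):
        Z = sparse.diags(w) + lam * (D.T @ D)
        z = spsolve(Z.tocsc(), w * y)
        w = p * (y > z) + (1 - p) * (y < z)   # asymmetric weights
    return z

# Synthetic Raman-like spectrum: two peaks on a broad fluorescence background.
x = np.linspace(0.0, 1000.0, 1000)
spectrum = (np.exp(-0.5 * ((x - 300) / 5) ** 2)
            + 0.5 * np.exp(-0.5 * ((x - 650) / 8) ** 2)
            + 2.0 * np.exp(-((x - 500) / 700) ** 2))
corrected = spectrum - asls_baseline(spectrum)
```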

  1. Development and evaluation of attenuation and scatter correction techniques for SPECT using the Monte Carlo method

    International Nuclear Information System (INIS)

    Ljungberg, M.

    1990-05-01

    Quantitative scintigraphic images, obtained by NaI(Tl) scintillation cameras, are limited by photon attenuation and the contribution from scattered photons. A Monte Carlo program was developed in order to evaluate these effects. Simple source-phantom geometries and more complex nonhomogeneous cases can be simulated. Comparisons with experimental data for both homogeneous and nonhomogeneous regions and with published results have shown good agreement. The usefulness of the code for simulating parameters of scintillation camera systems, stationary as well as SPECT, has also been demonstrated. An attenuation correction method based on density maps and build-up functions has been developed. The maps were obtained from a transmission measurement using an external 57Co flood source, and the build-up was simulated by the Monte Carlo code. Two scatter correction methods, the dual-window method and the convolution-subtraction method, have been compared using the Monte Carlo method. The aim was to compare the estimated scatter with the true scatter in the photo-peak window. It was concluded that accurate depth-dependent scatter functions are essential for a proper scatter correction. A new scatter and attenuation correction method has been developed based on scatter line-spread functions (SLSF) obtained for different depths and lateral positions in the phantom. An emission image is used to determine the source location in order to estimate the scatter in the photo-peak window. Simulation studies of a clinically realistic source in different positions in cylindrical water phantoms were made for three photon energies. The SLSF correction method was also evaluated by simulation studies for (1) a myocardial source, (2) a uniform source in the lungs, and (3) a tumour located in the lungs in a realistic, nonhomogeneous computer phantom. The results showed that quantitative images could be obtained in nonhomogeneous regions. (67 refs.)

  2. Comparatively Studied Color Correction Methods for Color Calibration of Automated Microscopy Complex of Biomedical Specimens

    Directory of Open Access Journals (Sweden)

    T. A. Kravtsova

    2016-01-01

    Full Text Available The paper considers the task of generating requirements and creating a calibration target for automated microscopy systems (AMS) of biomedical specimens, to provide invariance of algorithms and software to the hardware configuration. The required number of color fields of the calibration target and their color coordinates are mostly determined by the color correction method, whose equation coefficients are estimated during the calibration process. The paper analyses existing color calibration techniques for digital imaging systems using an optical microscope and shows that there is a lack of published comparative studies demonstrating a particular useful color correction method for microscopic images. A comparative study of ten image color correction methods in RGB space, using polynomials and combinations of color coordinates of different orders, was carried out. The method of conditioned least squares was applied to estimate the coefficients of the color correction equations, using captured images of 217 color fields of the calibration target Kodak Q60-E3. The regularization parameter in this method was chosen experimentally. It was demonstrated that the best color correction quality is provided by the method that uses a combination of color coordinates of the 3rd order. The influence of the number and set of color fields included in the calibration target on color correction quality for microscopic images was also studied. Six training sets containing 30, 35, 40, 50, 60 and 80 color fields, and a test set of 47 color fields not included in any of the training sets, were formed. It was found that the training set of 60 color fields minimizes the color correction error values for both operating modes of the digital camera: using "default" color settings and with automatic white balance. At the same time it was established that the use of color fields from the widely used Kodak Q60-E3 target does not
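
    A minimal sketch of regularized polynomial color correction of the general kind studied above, using a second-order feature set for brevity (the study's best variant used third-order combinations) and random stand-in patch data:

```python
# Polynomial color correction by regularized (ridge) least squares, the
# general scheme evaluated in the record. Feature set is a small 2nd-order
# example; patch data are hypothetical.
import numpy as np

def features(rgb):
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.column_stack([np.ones_like(r), r, g, b,
                            r * g, r * b, g * b, r**2, g**2, b**2])

def fit_color_correction(measured, reference, alpha=1e-3):
    """measured, reference: (N, 3) arrays of target-patch colors."""
    X = features(measured)
    # Ridge solution: (X^T X + alpha I)^-1 X^T y, one column per channel.
    A = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ reference)

rng = np.random.default_rng(4)
ref = rng.random((60, 3))                             # "true" patch colors
meas = np.clip(ref ** 1.1 + 0.02 * rng.normal(size=ref.shape), 0, 1)
M = fit_color_correction(meas, ref)
corrected = features(meas) @ M
print(np.abs(corrected - ref).mean())                 # mean residual error
```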

  3. Assessment of Atmospheric Correction Methods for Sentinel-2 MSI Images Applied to Amazon Floodplain Lakes

    Directory of Open Access Journals (Sweden)

    Vitor Souza Martins

    2017-03-01

    Full Text Available Satellite data provide the only viable means for extensive monitoring of remote and large freshwater systems, such as the Amazon floodplain lakes. However, an accurate atmospheric correction is required to retrieve water constituents based on surface water reflectance (RW). In this paper, we assessed three atmospheric correction methods (Second Simulation of a Satellite Signal in the Solar Spectrum (6SV), ACOLITE and Sen2Cor) applied to an image acquired by the MultiSpectral Instrument (MSI) on-board the European Space Agency's Sentinel-2A platform, using concurrent in-situ measurements over four Amazon floodplain lakes in Brazil. In addition, we evaluated the correction of forest adjacency effects based on the linear spectral unmixing model, and performed a temporal evaluation of atmospheric constituents from Multi-Angle Implementation of Atmospheric Correction (MAIAC) products. The validation of MAIAC aerosol optical depth (AOD) indicated satisfactory retrievals over the Amazon region, with a correlation coefficient (R) of ~0.7 and 0.85 for Terra and Aqua products, respectively. The seasonal distribution of the cloud cover and AOD revealed a contrast between the first and second half of the year in the study area. Furthermore, simulation of top-of-atmosphere (TOA) reflectance showed a critical contribution of atmospheric effects (>50%) to all spectral bands, especially the deep blue (92%–96%) and blue (84%–92%) bands. The atmospheric correction results of the visible bands illustrate the limitation of the methods over dark lakes (RW < 1%), and a better match of the RW shape compared with in-situ measurements over turbid lakes, although the accuracy varied depending on the spectral bands and methods. Particularly above 705 nm, RW was highly affected by Amazon forest adjacency, and the proposed adjacency effect correction minimized the spectral distortions in RW (RMSE < 0.006). Finally, an extensive validation of the methods is required for

  4. Asteroseismic modelling of solar-type stars: internal systematics from input physics and surface correction methods

    Science.gov (United States)

    Nsamba, B.; Campante, T. L.; Monteiro, M. J. P. F. G.; Cunha, M. S.; Rendle, B. M.; Reese, D. R.; Verma, K.

    2018-04-01

    Asteroseismic forward modelling techniques are being used to determine fundamental properties (e.g. mass, radius, and age) of solar-type stars. The need to take into account all possible sources of error is of paramount importance towards a robust determination of stellar properties. We present a study of 34 solar-type stars for which high signal-to-noise asteroseismic data is available from multi-year Kepler photometry. We explore the internal systematics on the stellar properties, that is, associated with the uncertainty in the input physics used to construct the stellar models. In particular, we explore the systematics arising from: (i) the inclusion of the diffusion of helium and heavy elements; and (ii) the uncertainty in solar metallicity mixture. We also assess the systematics arising from (iii) different surface correction methods used in optimisation/fitting procedures. The systematics arising from comparing results of models with and without diffusion are found to be 0.5%, 0.8%, 2.1%, and 16% in mean density, radius, mass, and age, respectively. The internal systematics in age are significantly larger than the statistical uncertainties. We find the internal systematics resulting from the uncertainty in solar metallicity mixture to be 0.7% in mean density, 0.5% in radius, 1.4% in mass, and 6.7% in age. The surface correction method by Sonoi et al. and Ball & Gizon's two-term correction produce the lowest internal systematics among the different correction methods, namely, ˜1%, ˜1%, ˜2%, and ˜8% in mean density, radius, mass, and age, respectively. Stellar masses obtained using the surface correction methods by Kjeldsen et al. and Ball & Gizon's one-term correction are systematically higher than those obtained using frequency ratios.
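
    For reference, a sketch of a two-term surface correction of the Ball & Gizon type, with inverse and cubic frequency terms scaled by mode inertia and coefficients fitted by linear least squares; the exact functional form should be checked against Ball & Gizon (2014), and all input values here are synthetic.

```python
# Two-term surface correction of the Ball & Gizon type:
#   delta_nu = (a_m1 * (nu/nu_ac)**-1 + a_3 * (nu/nu_ac)**3) / inertia,
# with a_m1, a_3 fitted by linear least squares. Inputs are synthetic.
import numpy as np

def fit_two_term(nu_model, nu_obs, inertia, nu_ac):
    x = nu_model / nu_ac
    # Design matrix: inverse and cubic terms, scaled by mode inertia.
    X = np.column_stack([x**-1 / inertia, x**3 / inertia])
    coeff, *_ = np.linalg.lstsq(X, nu_obs - nu_model, rcond=None)
    return coeff, nu_model + X @ coeff   # coefficients, corrected frequencies

rng = np.random.default_rng(5)
nu_model = np.linspace(1800.0, 3200.0, 15)         # muHz, hypothetical modes
inertia = np.exp(-(nu_model - 1800.0) / 900.0)     # decreasing mode inertia
nu_ac = 5000.0                                      # acoustic cutoff, muHz
nu_obs = nu_model + (-2.0 * (nu_model / nu_ac)**-1
                     + 6.0 * (nu_model / nu_ac)**3) / inertia
coeff, nu_corr = fit_two_term(nu_model, nu_obs, inertia, nu_ac)
print(coeff)   # recovers [-2, 6] for this synthetic example
```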

  5. A novel energy conversion based method for velocity correction in molecular dynamics simulations

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Hanhui [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China); Collaborative Innovation Center of Advanced Aero-Engine, Hangzhou 310027 (China); Liu, Ningning [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China); Ku, Xiaoke, E-mail: xiaokeku@zju.edu.cn [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China); Fan, Jianren [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China)

    2017-05-01

    Molecular dynamics (MD) simulation has become an important tool for studying micro- or nano-scale dynamics and the statistical properties of fluids and solids. In MD simulations, there are mainly two approaches: equilibrium and non-equilibrium molecular dynamics (EMD and NEMD). In this paper, a new energy conversion based correction (ECBC) method for MD is developed. Unlike traditional systematic corrections based on macroscopic parameters, the ECBC method is developed strictly from the physical interaction processes between pairs of molecules or atoms. The ECBC method can be applied directly to both EMD and NEMD. When using MD with this method, the difference between EMD and NEMD is eliminated, and no macroscopic parameters such as externally imposed potentials or coefficients are needed. With this method, many limitations of MD are lifted, and the application scope of MD is greatly extended.

  6. Semiempirical Quantum-Chemical Orthogonalization-Corrected Methods: Benchmarks for Ground-State Properties.

    Science.gov (United States)

    Dral, Pavlo O; Wu, Xin; Spörkel, Lasse; Koslowski, Axel; Thiel, Walter

    2016-03-08

    The semiempirical orthogonalization-corrected OMx methods (OM1, OM2, and OM3) go beyond the standard MNDO model by including additional interactions in the electronic structure calculation. When augmented with empirical dispersion corrections, the resulting OMx-Dn approaches offer a fast and robust treatment of noncovalent interactions. Here we evaluate the performance of the OMx and OMx-Dn methods for a variety of ground-state properties using a large and diverse collection of benchmark sets from the literature, with a total of 13035 original and derived reference data. Extensive comparisons are made with the results from established semiempirical methods (MNDO, AM1, PM3, PM6, and PM7) that also use the NDDO (neglect of diatomic differential overlap) integral approximation. Statistical evaluations show that the OMx and OMx-Dn methods outperform the other methods for most of the benchmark sets.

  7. A novel energy conversion based method for velocity correction in molecular dynamics simulations

    International Nuclear Information System (INIS)

    Jin, Hanhui; Liu, Ningning; Ku, Xiaoke; Fan, Jianren

    2017-01-01

    Molecular dynamics (MD) simulation has become an important tool for studying micro- or nano-scale dynamics and the statistical properties of fluids and solids. In MD simulations, there are mainly two approaches: equilibrium and non-equilibrium molecular dynamics (EMD and NEMD). In this paper, a new energy conversion based correction (ECBC) method for MD is developed. Unlike traditional systematic corrections based on macroscopic parameters, the ECBC method is developed strictly from the physical interaction processes between pairs of molecules or atoms. The ECBC method can be applied directly to both EMD and NEMD. When using MD with this method, the difference between EMD and NEMD is eliminated, and no macroscopic parameters such as externally imposed potentials or coefficients are needed. With this method, many limitations of MD are lifted, and the application scope of MD is greatly extended.

  8. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers.

    Science.gov (United States)

    Dobie, Robert A; Wojcik, Nancy C

    2015-07-13

    The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Regression analysis was used to derive new age-correction values using audiometric data from the 1999-2006 US NHANES. Using the NHANES median, better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20-75 years. The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20-75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61-75 years. Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to

  9. Regression dilution bias: tools for correction methods and sample size calculation.

    Science.gov (United States)

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
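
    A minimal simulation sketch of the classical correction: the observed slope is divided by the reliability ratio estimated from duplicate measurements of the risk factor (all data synthetic).

```python
# Classical correction for regression dilution bias: divide the observed
# slope by the reliability ratio estimated from replicate measurements of
# the risk factor. Data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(6)
n = 500
x_true = rng.normal(0, 1, n)               # e.g. insulin sensitivity
y = 2.0 * x_true + rng.normal(0, 1, n)     # outcome, true slope = 2
x1 = x_true + rng.normal(0, 0.7, n)        # error-prone measurement
x2 = x_true + rng.normal(0, 0.7, n)        # replicate (reliability study)

beta_obs = np.polyfit(x1, y, 1)[0]          # attenuated slope

# Reliability ratio: between-subject variance over total variance.
var_err = np.var(x1 - x2, ddof=1) / 2.0     # within-subject error variance
lam = 1.0 - var_err / np.var(x1, ddof=1)
beta_corr = beta_obs / lam
print(beta_obs, beta_corr)                  # corrected slope ~ 2
```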

  10. A New High-Precision Correction Method of Temperature Distribution in Model Stellar Atmospheres

    Directory of Open Access Journals (Sweden)

    Sapar A.

    2013-06-01

    Full Text Available The main features of the temperature correction methods suggested and used in the modeling of plane-parallel stellar atmospheres are discussed, and the main features of the new method are described. A derivation of the formulae for a version of the Unsöld-Lucy method, used by us in the SMART (Stellar Model Atmospheres and Radiative Transport) software for modeling stellar atmospheres, is presented. The method corrects the model temperature distribution by minimizing deviations of the flux from its accepted constant value and by requiring the absence of a flux gradient, meaning that local source and sink terms of radiation must be equal. The final relative flux constancy obtainable by the method with the SMART code turned out to have a precision of the order of 0.5%. Some rapidly converging iteration steps can be useful before starting the high-precision model correction. Corrections of both the flux value and its gradient, as in the Unsöld-Lucy method, are unavoidably needed to obtain high-precision flux constancy. A new temperature correction method to obtain high-precision flux constancy for plane-parallel LTE model stellar atmospheres is proposed and studied. The non-linear optimization is carried out by least squares, applying the Levenberg-Marquardt correction method followed by an additional correction with a Broyden iteration loop. Small finite differences of temperature (δT/T = 10^-3) are used in the computations. A single Jacobian step appears to be mostly sufficient to achieve flux constancy of the order of 10^-2 %. Dual numbers and their generalization, the dual complex numbers (the duplex numbers), make it possible to obtain derivatives automatically in the nilpotent part of the dual numbers. A version of the SMART software is being refactored to dual and duplex numbers, which eliminates the finite differences as an additional source of lowering precision of the
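
    The dual-number idea mentioned at the end of the record can be illustrated in a few lines; the toy class below is only a sketch of the principle, not the SMART implementation.

```python
# Minimal dual-number sketch: carrying a nilpotent part (eps**2 = 0) gives
# exact derivatives without finite differences.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule lives in the nilpotent part.
        return Dual(self.val * o.val, self.val * o.der + self.der * o.val)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1      # df/dx = 6x + 2

x = Dual(2.0, 1.0)                     # seed derivative of x w.r.t. itself
y = f(x)
print(y.val, y.der)                    # 17.0 and 14.0, exactly
```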

  11. High-order multi-implicit spectral deferred correction methods for problems of reactive flow

    International Nuclear Information System (INIS)

    Bourlioux, Anne; Layton, Anita T.; Minion, Michael L.

    2003-01-01

    Models for reacting flow are typically based on advection-diffusion-reaction (A-D-R) partial differential equations. Many practical cases correspond to situations where the relevant time scales associated with each of the three sub-processes can be widely different, leading to disparate time-step requirements for robust and accurate time-integration. In particular, interesting regimes in combustion correspond to systems in which diffusion and reaction are much faster processes than advection. The numerical strategy introduced in this paper is a general procedure to account for this time-scale disparity. The proposed methods are high-order multi-implicit generalizations of spectral deferred correction methods (MISDC methods), constructed for the temporal integration of A-D-R equations. Spectral deferred correction methods compute a high-order approximation to the solution of a differential equation by using a simple, low-order numerical method to solve a series of correction equations, each of which increases the order of accuracy of the approximation. The key feature of MISDC methods is their flexibility in handling several sub-processes implicitly but independently, while avoiding the splitting errors present in traditional operator-splitting methods and also allowing for different time steps for each process. The stability, accuracy, and efficiency of MISDC methods are first analyzed using a linear model problem and the results are compared to semi-implicit spectral deferred correction methods. Furthermore, numerical tests on simplified reacting flows demonstrate the expected convergence rates for MISDC methods of orders three, four, and five. The gain in efficiency by independently controlling the sub-process time steps is illustrated for nonlinear problems, where reaction and diffusion are much stiffer than advection. Although the paper focuses on this specific time-scales ordering, the generalization to any ordering combination is straightforward

  12. The unbiasedness of a generalized mirage boundary correction method for Monte Carlo integration estimators of volume

    Science.gov (United States)

    Thomas B. Lynch; Jeffrey H. Gove

    2014-01-01

    The typical "double counting" application of the mirage method of boundary correction cannot be applied to sampling systems such as critical height sampling (CHS) that are based on a Monte Carlo sample of a tree (or debris) attribute because the critical height (or other random attribute) sampled from a mirage point is generally not equal to the critical...

  13. An FFT-based Method for Attenuation Correction in Fluorescence Confocal Microscopy

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Bakker, M.

    1993-01-01

    A problem in three-dimensional imaging by a confocal scanning laser microscope (CSLM) in the (epi)fluorescence mode is the darkening of the deeper layers due to absorption and scattering of both the excitation and the fluorescence light. In this paper we propose a new method to correct for these

  14. A brain MRI bias field correction method created in the Gaussian multi-scale space

    Science.gov (United States)

    Chen, Mingsheng; Qin, Mingxin

    2017-07-01

    A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to image-processing algorithms such as segmentation. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by convolving the inhomogeneous MR image with a two-dimensional Gaussian function. In the multi-scale Gaussian space, the method retrieves image details from the difference between the original image and the convolved image, and then obtains an image whose inhomogeneity is eliminated by the weighted sum of the image details in each layer of the space. Next, the bias-field-corrected MR image is obtained after a gamma correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated the superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
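
    A single-scale sketch of the underlying principle, with hypothetical parameter values: estimate the slowly varying bias field with a large Gaussian blur, divide it out, and apply a gamma step. The actual method combines image details across a full Gaussian multi-scale space.

```python
# Single-scale sketch of bias field correction: a large Gaussian blur
# approximates the slowly varying bias field, which is divided out; a gamma
# step then restores contrast. Parameters are hypothetical.
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_bias(image, sigma=30.0, gamma=0.8, eps=1e-6):
    smooth = gaussian_filter(image.astype(float), sigma)
    flat = image / (smooth + eps)            # divide out the slow bias field
    flat = flat / flat.max()                 # rescale to [0, 1]
    return flat ** gamma                     # gamma step to restore contrast

rng = np.random.default_rng(7)
phantom = rng.random((128, 128))             # stand-in for an MR slice
yy, xx = np.mgrid[0:128, 0:128]
bias = 0.5 + 0.5 * (xx / 127.0)              # synthetic multiplicative bias
corrected = correct_bias(phantom * bias)
```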

  15. QED radiative correction for the single-W production using a parton shower method

    International Nuclear Information System (INIS)

    Kurihara, Y.; Fujimoto, J.; Ishikawa, T.; Shimizu, Y.; Kato, K.; Tobimatsu, K.; Munehisa, T.

    2001-01-01

    A parton shower method for the photonic radiative correction is applied to single W-boson production processes. The energy scale for the evolution of the parton shower is determined so that the correct soft-photon emission is reproduced. Photon spectra radiated from the partons are compared with those from the exact matrix elements, and show a good agreement. Possible errors due to an inappropriate energy-scale selection or due to the ambiguity of the energy-scale determination are also discussed, particularly for the measurements on triple gauge couplings. (orig.)

  16. The two-phase flow IPTT method for measurement of nonwetting-wetting liquid interfacial areas at higher nonwetting saturations in natural porous media.

    Science.gov (United States)

    Zhong, Hua; Ouni, Asma El; Lin, Dan; Wang, Bingguo; Brusseau, Mark L

    2016-07-01

    Interfacial areas between nonwetting-wetting (NW-W) liquids in natural porous media were measured using a modified version of the interfacial partitioning tracer test (IPTT) method that employed simultaneous two-phase flow conditions, which allowed measurement at NW saturations higher than trapped residual saturation. Measurements were conducted over a range of saturations for a well-sorted quartz sand under three wetting scenarios of primary drainage (PD), secondary imbibition (SI), and secondary drainage (SD). Limited sets of experiments were also conducted for a model glass-bead medium and for a soil. The measured interfacial areas were compared to interfacial areas measured using the standard IPTT method for liquid-liquid systems, which employs residual NW saturations. In addition, the theoretical maximum interfacial areas estimated from the measured data are compared to specific solid surface areas measured with the N2/BET method and estimated based on geometrical calculations for smooth spheres. Interfacial areas increase linearly with decreasing water saturation over the range of saturations employed. The maximum interfacial areas determined for the glass beads, which have no surface roughness, are 32±4 and 36±5 cm^-1 for PD and SI cycles, respectively. The values are similar to the geometric specific solid surface area (31±2 cm^-1) and the N2/BET solid surface area (28±2 cm^-1). The maximum interfacial areas are 274±38, 235±27, and 581±160 cm^-1 for the sand for PD, SI, and SD cycles, respectively, and ~7625 cm^-1 for the soil for PD and SI. The maximum interfacial areas for the sand and soil are significantly larger than the estimated smooth-sphere specific solid surface areas (107±8 cm^-1 and 152±8 cm^-1, respectively), but much smaller than the N2/BET solid surface areas (1387±92 cm^-1 and 55224 cm^-1, respectively). The NW-W interfacial areas measured with the two-phase flow method compare well to values measured using the standard

  17. Correction method for the error of diamond tool's radius in ultra-precision cutting

    Science.gov (United States)

    Wang, Yi; Yu, Jing-chi

    2010-10-01

    Compensating for the error of the diamond tool's cutting edge is a bottleneck technology that hinders the direct formation of high-accuracy aspheric surfaces by single-point diamond turning. Traditionally, compensation was done according to measurement results from a profile meter, which required a long measurement time and led to low processing efficiency. A new compensation method is put forward in this article, in which the error of the diamond tool's cutting edge is corrected according to measurement results from a digital interferometer. First, the detailed theoretical calculation related to the compensation method is deduced. Then, the effect after compensation is simulated by computer. Finally, a φ50 mm workpiece was diamond turned and correction turned on a Nanotech 250. The tested surface achieved a high shape accuracy of PV 0.137λ and RMS 0.011λ, which confirms that the new compensation method agrees with the predictive analysis and offers high accuracy and fast error convergence.

  18. A distortion correction method for image intensifier and electronic portal images used in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Ioannidis, G T; Geramani, K N; Zamboglou, N [Strahlenklinik, Stadtische Kliniken Offenbach, Offenbach (Germany); Uzunoglu, N [Department of Electrical and Computer Engineering, National Technical University of Athens, Athens (Greece)

    1999-12-31

    At most radiation departments, a simulator and an 'on-line' verification system of the treated volume, in the form of an electronic portal imaging device (EPID), are available. Networking and digital handling (saving, archiving, etc.) of the image information are necessary in image-processing procedures in order to evaluate verification and simulation recordings on the computer screen. Distortion correction, on the other hand, is a prerequisite for quantitative comparison of the two image modalities. Another limiting factor for quantitative assertions is that irradiation fields in radiotherapy are usually larger than the field of view of an image intensifier; several segments of the irradiation field must therefore be acquired, and using pattern-recognition techniques these segments can be composed into a single image. In this paper a distortion correction method is presented. The method is based upon a well-defined grid which is embedded on the image during the registration process. The video signal from the image intensifier is acquired and processed, and the grid is recognised using image-processing techniques. Ideally, if all grid points are recognised, various methods can be applied to correct the distortion, but in practice overlapping structures (bones, etc.) mean that not all grid points can be recognised. Mathematical models from graph theory are therefore applied to reconstruct the whole grid. The deviation of the grid point positions from their nominal values is then used to calculate correction coefficients. This method (well-defined grid, grid recognition, correction factors) can also be applied to verification images from the EPID or to other image modalities, making a quantitative comparison in radiation treatment possible. The distortion correction method and its application to simulator images are presented. (authors)

  19. New component-based normalization method to correct PET system models

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki

    2011-01-01

    Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former method suffers from the problem that a huge count number in the blank scan data is required. Therefore, the latter methods have been proposed to obtain high statistical accuracy normalization coefficients with a small count number in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the obtained reconstructed images depends on the system modeling accuracy. Therefore, the normalization weighing approach, in which normalization coefficients are directly applied to the system matrix instead of a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct system model accuracy. In the proposed method, two components are defined and are calculated iteratively in such a way as to minimize errors of system modeling. To compare the proposed method and the direct method, we applied both methods to our small OpenPET prototype system. We achieved acceptable statistical accuracy of normalization coefficients while reducing the count number of the blank scan data to one-fortieth that required in the direct method. (author)

  20. A Time-Walk Correction Method for PET Detectors Based on Leading Edge Discriminators.

    Science.gov (United States)

    Du, Junwei; Schmall, Jeffrey P; Judenhofer, Martin S; Di, Kun; Yang, Yongfeng; Cherry, Simon R

    2017-09-01

    The leading edge timing pick-off technique is the simplest timing extraction method for PET detectors. Due to the inherent time walk of the leading edge technique, corrections should be made to improve the timing resolution, especially for time-of-flight PET. Time-walk correction can be done by utilizing the relationship between the threshold crossing time and the event energy on an event-by-event basis. In this paper, a time-walk correction method is proposed and evaluated using timing information from two identical detectors, both using leading edge discriminators. This differs from other techniques that use an external dedicated reference detector, such as a fast PMT-based detector using constant fraction techniques to pick off timing information. In our proposed method, one detector was used as the reference detector to correct the time walk of the other detector. Time walk in the reference detector was minimized by using events within a small energy window (508.5-513.5 keV). To validate this method, a coincidence detector pair was assembled using two SensL MicroFB SiPMs and two 2.5 mm × 2.5 mm × 20 mm polished LYSO crystals. Coincidence timing resolutions using different time pick-off techniques were obtained at a bias voltage of 27.5 V and a fixed temperature of 20 °C. The coincidence timing resolutions without time-walk correction were 389.0 ± 12.0 ps (425-650 keV energy window) and 670.2 ± 16.2 ps (250-750 keV energy window). The timing resolution with time-walk correction improved to 367.3 ± 0.5 ps (425-650 keV) and 413.7 ± 0.9 ps (250-750 keV). For comparison, timing resolutions were 442.8 ± 12.8 ps (425-650 keV) and 476.0 ± 13.0 ps (250-750 keV) using constant fraction techniques, and 367.3 ± 0.4 ps (425-650 keV) and 413.4 ± 0.9 ps (250-750 keV) using a reference detector based on the constant fraction technique. These results show that the proposed leading-edge-based time-walk correction method works well. Timing resolution obtained
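
    A hedged sketch of the event-by-event idea follows: the time difference measured against the narrow-window reference detector is modelled as a smooth function of event energy and subtracted. The 1/sqrt(E) walk model and all constants are assumptions for illustration, not the paper's calibration.

    ```python
    # Hedged sketch: model the measured time difference against a narrow-window
    # reference as a smooth function of event energy, then subtract it.
    import numpy as np

    def fit_time_walk(energy, dt, order=2):
        """Fit the walk-vs-energy curve; returns a callable correction (ps)."""
        u = 1.0 / np.sqrt(energy)          # leading-edge walk grows at low E
        coeff = np.polyfit(u, dt, order)
        return lambda e: np.polyval(coeff, 1.0 / np.sqrt(e))

    # Synthetic coincidences: Gaussian timing jitter + energy-dependent walk
    rng = np.random.default_rng(1)
    energy = rng.uniform(250, 750, 5000)               # keV
    walk = 20000.0 / np.sqrt(energy)                   # hypothetical walk, ps
    dt = walk + rng.normal(0, 150, energy.size)        # measured differences

    correction = fit_time_walk(energy, dt)
    dt_corr = dt - correction(energy)
    print("std before/after: %.0f / %.0f ps" % (dt.std(), dt_corr.std()))
    ```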

  1. Hydraulic correction method (HCM) to enhance the efficiency of SRTM DEM in flood modeling

    Science.gov (United States)

    Chen, Huili; Liang, Qiuhua; Liu, Yong; Xie, Shuguang

    2018-04-01

    Digital Elevation Model (DEM) is one of the most important controlling factors determining the simulation accuracy of hydraulic models. However, currently available global topographic data are confronted with limitations for application in 2-D hydraulic modeling, mainly due to the existence of vegetation bias, random errors and insufficient spatial resolution. A hydraulic correction method (HCM) for the SRTM DEM is proposed in this study to improve modeling accuracy. Firstly, we employ the global vegetation-corrected DEM (i.e. Bare-Earth DEM), developed from the SRTM DEM using both vegetation height and the SRTM vegetation signal. Then, a newly released DEM, with both vegetation bias and random errors removed (i.e. Multi-Error Removed DEM), is employed to overcome the limitation of height errors. Last, an approach to correct the Multi-Error Removed DEM is presented to account for the insufficiency of spatial resolution, ensuring flow connectivity of the river networks. The approach involves: (a) extracting river networks from the Multi-Error Removed DEM using an automated algorithm in ArcGIS; (b) correcting the location and layout of extracted streams with the aid of the Google Earth platform and remote sensing imagery; and (c) removing the positive biases of raised segments in the river networks based on bed slope to generate the hydraulically corrected DEM (sketched below). The proposed HCM utilizes easily available data and tools to improve the flow connectivity of river networks without manual adjustment. To demonstrate the advantages of HCM, an extreme flood event in the Huifa River Basin (China) is simulated on the original DEM, the Bare-Earth DEM, the Multi-Error Removed DEM, and the hydraulically corrected DEM using an integrated hydrologic-hydraulic model. A comparative analysis is subsequently performed to assess the simulation accuracy and performance of the four different DEMs, and favorable results have been obtained on the corrected DEM.
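
    Step (c) can be pictured with the minimal sketch below: each extracted stream is walked from upstream to downstream and raised segments are clamped so the bed elevation never increases, restoring flow connectivity. The real HCM works with bed slope and GIS tooling; this shows only the core monotonicity idea on a toy profile.

    ```python
    # Hedged sketch: clamp raised segments so elevation is non-increasing
    # downstream along an extracted stream profile.
    import numpy as np

    def remove_positive_bias(profile):
        """profile: bed elevations ordered from upstream to downstream (m)."""
        corrected = profile.copy()
        for k in range(1, corrected.size):
            # a raised segment is any cell higher than its upstream neighbour
            corrected[k] = min(corrected[k], corrected[k - 1])
        return corrected

    stream = np.array([105.0, 104.2, 104.9, 103.8, 104.6, 102.9])  # toy profile
    print(remove_positive_bias(stream))  # [105. 104.2 104.2 103.8 103.8 102.9]
    ```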

  2. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    Science.gov (United States)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of one or multiple gradient or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
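
    A toy sketch of the prediction-correction template on a scalar problem f(x, t) = 0.5(x − sin t)² illustrates the gain over a correction-only tracker; the concrete derivatives and step sizes below are assumptions of this example, not the paper's general algorithm.

    ```python
    # Hedged toy example: track x*(t) = sin(t), the minimizer of
    # f(x, t) = 0.5 * (x - sin t)^2. Prediction uses x <- x - h * f_xx^{-1} f_tx
    # (here f_xx = 1, f_tx = -cos t), followed by one gradient correction step
    # at the new sample time; a correction-only tracker is run for comparison.
    import numpy as np

    h, gamma, T = 0.1, 0.8, 200          # sampling interval, step size, samples
    x_pc, x_co = 0.0, 0.0                # prediction-correction / correction-only
    err_pc, err_co = [], []

    for k in range(T):
        t_next = (k + 1) * h
        x_pc += h * np.cos(k * h)                    # prediction step
        x_pc -= gamma * (x_pc - np.sin(t_next))      # gradient correction
        x_co -= gamma * (x_co - np.sin(t_next))      # correction only
        err_pc.append(abs(x_pc - np.sin(t_next)))
        err_co.append(abs(x_co - np.sin(t_next)))

    print("mean tracking error  PC: %.2e   CO: %.2e"
          % (np.mean(err_pc[-50:]), np.mean(err_co[-50:])))
    ```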

  3. A method of bias correction for maximal reliability with dichotomous measures.

    Science.gov (United States)

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  4. Method of correcting eddy current magnetic fields in particle accelerator vacuum chambers

    Science.gov (United States)

    Danby, Gordon T.; Jackson, John W.

    1991-01-01

    A method for correcting magnetic field aberrations produced by eddy currents induced in a particle accelerator vacuum chamber housing is provided. Correction windings are attached at selected positions on the housing and are energized by transformer action from secondary coils, which are inductively coupled to the poles of the electromagnets that are powered to confine the charged-particle beam within a desired orbit as the particles are accelerated through the vacuum chamber by a particle-driving rf field. The power inductively coupled to the secondary coils varies with the power supplied by the particle-accelerating rf field to the beam, so the current in the energized correction coils is effective to cancel the eddy-current flux fields that would otherwise be induced in the vacuum chamber by power variations in the particle beam.

  5. Consistent calculation of the polarization electric dipole moment by the shell-correction method

    International Nuclear Information System (INIS)

    Denisov, V.Yu.

    1992-01-01

    Macroscopic calculations of the polarization electric dipole moment which arises in nuclei with an octupole deformation are discussed in detail. This dipole moment is shown to depend on the position of the center of gravity. The conditions of consistency of the radii of the proton and neutron potentials and the radii of the proton and neutron surfaces, respectively, are discussed. These conditions must be incorporated in a shell-correction calculation of this dipole moment. A correct calculation of this moment by the shell-correction method is carried out. Dipole transitions between (on the one hand) levels belonging to an octupole vibrational band and (on the other) the ground state in rare-earth nuclei with a large quadrupole deformation are studied. 19 refs., 3 figs

  6. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    Science.gov (United States)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

    The laser-induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. The overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting passes are performed to obtain a lower residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectra of five different concentrations of CuSO4·5H2O solution were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. The calibration curve between the intensity and concentration of Cu was then established. The error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and can be applied to the decomposition and correction of overlapping peaks in LIBS spectra.
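
    The sketch below shows one plausible reading of the residual-feedback loop on a synthetic doublet: after each multi-peak fit, the residual against the raw spectrum is added back to the fitting target before the next pass. Peak positions, widths and the Gaussian line shape are illustrative assumptions, not real Cu/Fe line data, and the paper's exact feedback scheme may differ.

    ```python
    # Hedged sketch on a synthetic doublet: decompose overlapping peaks by
    # multi-Gaussian curve fitting with a simple residual-feedback loop.
    import numpy as np
    from scipy.optimize import curve_fit

    def doublet(x, a1, c1, w1, a2, c2, w2):
        g = lambda a, c, w: a * np.exp(-0.5 * ((x - c) / w) ** 2)
        return g(a1, c1, w1) + g(a2, c2, w2)

    x = np.linspace(321, 327, 400)                     # nm
    rng = np.random.default_rng(2)
    y = doublet(x, 1.0, 323.2, 0.35, 0.6, 324.1, 0.40) \
        + rng.normal(0, 0.01, x.size)

    p0, target = (0.8, 323.0, 0.3, 0.5, 324.3, 0.3), y.copy()
    for it in range(3):                                # multiple fitting passes
        popt, _ = curve_fit(doublet, x, target, p0=p0)
        residual = y - doublet(x, *popt)               # misfit vs raw spectrum
        print("pass %d RMS residual: %.4f"
              % (it + 1, np.sqrt(np.mean(residual ** 2))))
        target = y + residual                          # feed the residual back
        p0 = popt
    ```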

  7. Correcting for cryptic relatedness by a regression-based genomic control method

    Directory of Open Access Journals (Sweden)

    Yang Yaning

    2009-12-01

    Full Text Available Abstract Background: Genomic control (GC) is a useful method to correct for cryptic relatedness in population-based association studies. It was originally proposed for correcting the variance inflation of the Cochran-Armitage additive trend test by using information from unlinked null markers, and was later generalized to be applicable to other tests, with the additional requirement that the null markers be matched with the candidate marker in allele frequencies. However, matching allele frequencies limits the number of available null markers and thus the applicability of the GC method. On the other hand, errors in genotype/allele frequencies may cause further bias and variance inflation and thereby degrade the GC correction. Results: In this paper, we propose a regression-based GC method using null markers that are not necessarily matched in allele frequencies with the candidate marker. Variation in the allele frequencies of the null markers is adjusted for by a regression method. Conclusion: The proposed method can be readily applied to Cochran-Armitage trend tests other than the additive trend test, to Pearson's chi-square test, and to other robust efficiency tests. Simulation results show that the proposed method is effective in controlling type I error in the presence of population substructure.

  8. Attenuation correction of myocardial SPECT by scatter-photopeak window method in normal subjects

    International Nuclear Information System (INIS)

    Okuda, Koichi; Nakajima, Kenichi; Matsuo, Shinro; Kinuya, Seigo; Motomura, Nobutoku; Kubota, Masahiro; Yamaki, Noriyasu; Maeda, Hisato

    2009-01-01

    The segmentation with scatter and photopeak window data using attenuation correction (SSPAC) method can provide a patient-specific non-uniform attenuation coefficient map using only photopeak and scatter images, without X-ray computed tomography (CT). The purpose of this study is to evaluate the performance of attenuation correction (AC) by the SSPAC method on a normal myocardial perfusion database. A total of 32 sets of exercise-rest myocardial images with Tc-99m-sestamibi were acquired in both photopeak (140 keV ± 10%) and scatter (7% window on the lower side of the photopeak) energy windows. Myocardial perfusion databases for the SSPAC method and for non-AC (NC) were created from 15 female and 17 male subjects with a low likelihood of cardiac disease using quantitative perfusion SPECT software. Segmental myocardial counts of a 17-segment model from these databases were compared using a paired t-test. The AC average myocardial perfusion count was significantly higher than that of NC in the septal and inferior regions (P<0.02). On the contrary, the AC average count was significantly lower in the anterolateral and apical regions (P<0.01). The coefficient of variation of the AC count in the mid, apical and apex regions was lower than that of NC. The SSPAC method can improve average myocardial perfusion uptake in the septal and inferior regions and provide uniform distribution of myocardial perfusion. The SSPAC method could be a practical method of attenuation correction without X-ray CT. (author)

  9. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    Science.gov (United States)

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of the local blur phenomenon. Local blur caused by global light transport, such as camera defocus, projector defocus, and subsurface scattering, causes significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate for the phase errors. For the defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and a local frontal plane assumption, experiments show that the proposed method can effectively alleviate the systematic errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  10. Attenuation correction with region growing method used in the positron emission mammography imaging system

    Science.gov (United States)

    Gu, Xiao-Yue; Li, Lin; Yin, Peng-Fei; Yun, Ming-Kai; Chai, Pei; Huang, Xian-Chao; Sun, Xiao-Li; Wei, Long

    2015-10-01

    The Positron Emission Mammography imaging system (PEMi) provides a novel nuclear diagnosis method dedicated to breast imaging. With a better resolution than whole-body PET, PEMi can detect millimeter-sized breast tumors. To address the requirement of semi-quantitative analysis with a radiotracer concentration map of the breast, a new attenuation correction method based on three-dimensional seeded region growing image segmentation (3DSRG-AC) has been developed. The method gives a 3D connected region as the segmentation result instead of image slices. The continuity property of the segmentation result makes this new method free of the activity variation of breast tissues. Threshold selection is the key step of the segmentation method: the first valley in the grey-level histogram of the reconstructed image is set as the lower threshold, which works well in clinical application. Results show that attenuation correction for PEMi improves the image quality and the quantitative accuracy of the radioactivity distribution determination. Attenuation correction also improves the probability of detecting small and early breast tumors. Supported by Knowledge Innovation Project of The Chinese Academy of Sciences (KJCX2-EW-N06)
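
    A miniature sketch of the two ingredients — taking the lower threshold at the first histogram valley after the main peak, and growing one 6-connected 3-D region from a seed — is given below on a toy volume; it is a generic illustration, not the PEMi implementation.

    ```python
    # Hedged sketch: histogram-valley threshold + 3-D seeded region growing.
    import numpy as np
    from collections import deque

    def first_valley(volume, bins=64):
        hist, edges = np.histogram(volume, bins=bins)
        k0 = int(hist.argmax())            # start at the main (background) peak
        for k in range(k0 + 1, bins - 1):  # first local minimum to its right
            if hist[k] < hist[k - 1] and hist[k] <= hist[k + 1]:
                return edges[k + 1]
        return edges[bins // 2]

    def region_grow(volume, seed, lo):
        grown = np.zeros(volume.shape, dtype=bool)
        queue = deque([seed])
        grown[seed] = True
        offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
        while queue:                        # 6-connected flood fill
            z, y, x = queue.popleft()
            for dz, dy, dx in offsets:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                        and not grown[n] and volume[n] >= lo:
                    grown[n] = True
                    queue.append(n)
        return grown

    # Toy volume: a bright sphere ("tissue") in a noisy background
    zz, yy, xx = np.mgrid[:40, :40, :40]
    vol = 5.0 * (((zz-20)**2 + (yy-20)**2 + (xx-20)**2) < 14**2) \
          + np.random.default_rng(3).normal(0, 0.3, (40, 40, 40))
    mask = region_grow(vol, (20, 20, 20), first_valley(vol))
    print("segmented voxels:", mask.sum())
    ```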

  11. Scatter correction method for x-ray CT using primary modulation: Phantom studies

    International Nuclear Information System (INIS)

    Gao Hewei; Fahrig, Rebecca; Bennett, N. Robert; Sun Mingshan; Star-Lack, Josh; Zhu Lei

    2010-01-01

    Purpose: Scatter correction is a major challenge in x-ray imaging using large-area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: a Catphan©600 phantom, an anthropomorphic chest phantom, and the Catphan©600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan©600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast-to-noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study investigates the impact of object size on the efficiency of our method. The scatter-to-primary ratio estimation error on the Catphan©600 phantom without any annulus (20 cm in diameter) is at the level of 0.04; it rises to 0.07 and 0.1 on the phantom with an

  12. A software-based x-ray scatter correction method for breast tomosynthesis

    International Nuclear Information System (INIS)

    Jia Feng, Steve Si; Sechopoulos, Ioannis

    2011-01-01

    Purpose: To develop a software-based scatter correction method for digital breast tomosynthesis (DBT) imaging and investigate its impact on the image quality of tomosynthesis reconstructions of both phantoms and patients. Methods: A Monte Carlo (MC) simulation of x-ray scatter, with geometry matching that of the cranio-caudal (CC) view of a DBT clinical prototype, was developed using the Geant4 toolkit and used to generate maps of the scatter-to-primary ratio (SPR) of a number of homogeneous standard-shaped breasts of varying sizes. Dimension-matched SPR maps were then deformed and registered to the DBT acquisition projections, allowing for the estimation of the primary x-ray signal acquired by the imaging system. Noise filtering of the estimated projections was then performed to reduce the impact of the quantum noise of the x-ray scatter. Three-dimensional (3D) reconstruction was then performed using the maximum likelihood-expectation maximization (MLEM) method. This process was tested on acquisitions of a heterogeneous 50/50 adipose/glandular tomosynthesis phantom with embedded masses, fibers, and microcalcifications, and on acquisitions of patients. The image quality of the reconstructions of the scatter-corrected and uncorrected projections was analyzed by studying the signal-difference-to-noise ratio (SDNR), the integral of the signal in each mass lesion (integrated mass signal, IMS), and the modulation transfer function (MTF). Results: The reconstructions of the scatter-corrected projections demonstrated superior image quality. The SDNR of masses embedded in a 5 cm thick tomosynthesis phantom improved by 60%-66%, while the SDNR of the smallest mass in an 8 cm thick phantom improved by 59% (p < 0.01). The IMS of the masses in the 5 cm thick phantom also improved by 15%-29%, while the IMS of the masses in the 8 cm thick phantom improved by 26%-62% (p < 0.01). Some embedded microcalcifications in the tomosynthesis phantoms were visible only in the scatter-corrected

  13. Evaluation of a method for correction of scatter radiation in thorax cone beam CT

    International Nuclear Information System (INIS)

    Rinkel, J.; Dinten, J.M.; Esteve, F.

    2004-01-01

    Purpose: Cone beam CT (CBCT) enables three-dimensional imaging with isotropic resolution. X-ray scatter estimation is a major challenge for quantitative CBCT imaging of the thorax: the scatter level is significantly higher on cone beam systems compared to collimated fan beam systems. The effects of this scattered radiation are cupping artefacts, streaks, and quantification inaccuracies. The conventional beam-stop approach to scatter estimation can be used for CBCT but leads to a significant increase in dose and acquisition time. An original scatter management process without supplementary acquisition has been developed at CEA-LETI. Methods and Materials: This Analytical Plus Indexing-based (API) method of scatter correction in CBCT is based on scatter calibration through offline acquisitions with beam stops on lucite plates, combined with an analytical transformation derived from physical equations. This approach has been applied with success in bone densitometry and mammography. To evaluate the method in CBCT, acquisitions from a thorax phantom with and without beam stops were performed. To compare different scatter correction approaches, the Feldkamp algorithm was applied to raw data corrected for scatter by the API and beam-stop approaches. Results: The API method provides results in good agreement with the beam-stop array approach, suppressing the cupping artefact. The influence of the scatter correction method on the noise in the reconstructed images was also evaluated. Conclusion: The results indicate that the API method is effective for quantitative CBCT imaging of the thorax. Compared to a beam-stop array method, it needs a lower x-ray dose and shortens acquisition time. (authors)

  14. Joint de-blurring and nonuniformity correction method for infrared microscopy imaging

    Science.gov (United States)

    Jara, Anselmo; Torres, Sergio; Machuca, Guillermo; Ramírez, Wagner; Gutiérrez, Pablo A.; Viafora, Laura A.; Godoy, Sebastián E.; Vera, Esteban

    2018-05-01

    In this work, we present a new technique to simultaneously reduce two major degradation artifacts found in mid-wavelength infrared microscopy imagery, namely the inherent focal-plane array nonuniformity noise and the scene defocus due to the point spread function of the infrared microscope. We correct both nuisances using a novel recursive method that combines the constant-range nonuniformity correction algorithm with a frame-by-frame deconvolution approach. The ability of the method to jointly compensate for both nonuniformity noise and blur is demonstrated using two different real mid-wavelength infrared microscopic video sequences, captured from two microscopic living organisms using a Janos-Sofradir mid-wavelength infrared microscopy setup. The performance of the proposed method is assessed on real and simulated infrared data by computing the root-mean-square error and the roughness-Laplacian pattern index, which was specifically developed for the present work.

  15. Conservative multi-implicit integral deferred correction methods with adaptive mesh refinement

    International Nuclear Information System (INIS)

    Layton, A.T.

    2004-01-01

    In most models of reacting gas dynamics, the characteristic time scales of chemical reactions are much shorter than the hydrodynamic and diffusive time scales, rendering the reaction part of the model equations stiff. Moreover, nonlinear forcings may introduce into the solutions sharp gradients or shocks, the robust behavior and correct propagation of which require the use of specialized spatial discretization procedures. This study presents high-order conservative methods for the temporal integration of model equations of reacting flows. By means of a method of lines discretization on the flux difference form of the equations, these methods compute approximations to the cell-averaged or finite-volume solution. The temporal discretization is based on a multi-implicit generalization of integral deferred correction methods. The advection term is integrated explicitly, and the diffusion and reaction terms are treated implicitly but independently, with the splitting errors present in traditional operator splitting methods reduced via the integral deferred correction procedure. To reduce computational cost, time steps used to integrate processes with widely-differing time scales may differ in size. (author)

  16. Scatter measurement and correction method for cone-beam CT based on single grating scan

    Science.gov (United States)

    Huang, Kuidong; Shi, Wenlong; Wang, Xinyu; Dong, Yin; Chang, Taoqi; Zhang, Hua; Zhang, Dinghua

    2017-06-01

    In cone-beam computed tomography (CBCT) systems based on flat-panel detector imaging, the presence of scatter significantly reduces slice quality. Based on the concept of collimation, this paper presents a scatter measurement and correction method based on a single grating scan. First, according to the characteristics of CBCT imaging, a scan method using a single grating is worked out together with the design requirements of the grating. Second, by analyzing the composition of object projection images and object-and-grating projection images, a processing method for the scatter image at a single projection angle is proposed. In addition, to avoid additional scans, an angle interpolation method for scatter images is proposed to reduce scan cost. Finally, the experimental results show that the scatter images obtained by this method are accurate and reliable, and the effect of scatter correction is clear. When the additional object-and-grating projection images are collected and interpolated at intervals of 30 deg, the scatter correction error of slices can still be controlled within 3%.

  17. The study on the X-ray correction method of long fracture displacement

    International Nuclear Information System (INIS)

    Jia Bin; Huang Ailing; Chen Fuzhong; Men Chunyan; Sui Chengzong; Cui Yiming; Yang Yundong

    2010-01-01

    Objective: To explore image correction for fracture displacement using conventional X-ray photography (anteroposterior and lateral views) and to test it by computed tomography (CT). Methods: The correction method for fracture displacement was designed according to the geometry of X-ray photography. One mid-humeral fracture specimen designed with lateral shift and angular displacement was selected and scanned in the anteroposterior and lateral positions, respectively; it was also volume-scanned using CT, and the volume data were processed using multiplanar reconstruction (MPR) and shaded surface display (SSD). The displacement data derived from the X-ray images, from CT with MPR and SSD processing, and from the actual design of the specimen were compared. Results: The direction and degree of displacement obtained from the corrected X-ray images differed little from the MPR and SSD data and from the actual specimen design: the difference in location was <1.5 mm and the difference in angle <1.5°. Conclusion: Assessment of fracture displacement by conventional X-ray photography with coordinate correction is reliable and helps to clearly improve the diagnostic accuracy of the degree of fracture displacement. (authors)

  18. Modular correction method of bending elastic modulus based on sliding behavior of contact point

    International Nuclear Information System (INIS)

    Ma, Zhichao; Zhao, Hongwei; Zhang, Qixun; Liu, Changyi

    2015-01-01

    During three-point bending tests, sliding of the contact point between the specimen and the supports was observed; this sliding behavior was verified to affect the measurement of both deflection and span length, which directly enter the calculation of the bending elastic modulus. Based on the Hertz formula for the elastic contact deformation and a theoretical treatment of the sliding behavior of the contact point, a theoretical model was established that precisely describes the deflection and span length as functions of the bending load. Moreover, a modular correction method for the bending elastic modulus was proposed. Via comparison between the corrected elastic moduli of three materials (H63 copper-zinc alloy, AZ31B magnesium alloy and 2026 aluminum alloy) and the standard moduli obtained from standard uniaxial tensile tests, the universal feasibility of the proposed correction method was verified. In addition, the ratio of corrected to raw elastic modulus showed a monotonically decreasing tendency as the raw elastic modulus of the material increased. (technical note)

  19. Experimental aspects of buoyancy correction in measuring reliable high-pressure excess adsorption isotherms using the gravimetric method.

    Science.gov (United States)

    Nguyen, Huong Giang T; Horn, Jarod C; Thommes, Matthias; van Zee, Roger D; Espinal, Laura

    2017-12-01

    Addressing reproducibility issues in adsorption measurements is critical to accelerating the path to discovery of new industrial adsorbents and to understanding adsorption processes. A National Institute of Standards and Technology Reference Material, RM 8852 (ammonium ZSM-5 zeolite), and two gravimetric instruments with asymmetric two-beam balances were used to measure high-pressure adsorption isotherms. This work demonstrates how common approaches to buoyancy correction, a key factor in obtaining the mass change due to surface-excess gas uptake from the apparent mass change, can impact the adsorption isotherm data. Three different approaches to buoyancy correction were investigated and applied to subcritical CO2 and supercritical N2 adsorption isotherms at 293 K. It was observed that measuring a collective volume for all balance components for the buoyancy correction (helium method) introduces an inherent bias in the temperature partition when there is a temperature gradient (i.e. the analysis temperature is not equal to the instrument air-bath temperature). We demonstrate that a blank subtraction is effective in mitigating the biases associated with temperature partitioning, instrument calibration, and the determined volumes of the balance components. In general, the manual and subtraction methods allow better treatment of the temperature gradient during buoyancy correction. From the study, best practices specific to asymmetric two-beam balances and more general recommendations for measuring isotherms far from critical temperatures using gravimetric instruments are offered.

  20. A novel method to correct for pitch and yaw patient setup errors in helical tomotherapy

    International Nuclear Information System (INIS)

    Boswell, Sarah A.; Jeraj, Robert; Ruchala, Kenneth J.; Olivera, Gustavo H.; Jaradat, Hazim A.; James, Joshua A.; Gutierrez, Alonso; Pearson, Dave; Frank, Gary; Mackie, T. Rock

    2005-01-01

    An accurate means of determining and correcting for daily patient setup errors is important to the cancer outcome in radiotherapy. While many tools have been developed to detect setup errors, difficulty may arise in accurately adjusting the patient to account for the rotational error components. A novel, automated method to correct for rotational patient setup errors in helical tomotherapy is proposed for a treatment couch that is restricted to motion along translational axes. In tomotherapy, only a narrow superior/inferior section of the target receives a dose at any instant, thus rotations in the sagittal and coronal planes may be approximately corrected for by very slow continuous couch motion in a direction perpendicular to the scanning direction. Results from proof-of-principle tests indicate that the method improves the accuracy of treatment delivery, especially for long and narrow targets. Rotational corrections about an axis perpendicular to the transverse plane continue to be implemented easily in tomotherapy by adjustment of the initial gantry angle

  1. A Correction Method for UAV Helicopter Airborne Temperature and Humidity Sensor

    Directory of Open Access Journals (Sweden)

    Longqing Fan

    2017-01-01

    Full Text Available This paper presents a correction method for UAV helicopter airborne temperature and humidity measurements, comprising an error correction scheme and a bias-calibration scheme. Because rotor downwash inevitably introduces measurement errors into helicopter airborne sensors, the error correction scheme constructs a model between the rotor-induced velocity and temperature and humidity, by building the heat balance equation for the platinum resistance temperature sensor and a pressure correction term for the humidity sensor. The induced velocity at a spatial point below the rotor disc plane can be calculated as the sum of the induced velocities excited by the center-line vortex, the rotor disk vortex, and the skew cylinder vortex, based on generalized vortex theory. In order to minimize systematic biases, the bias-calibration scheme adopts a multiple linear regression to achieve results systematically consistent with tethered balloon profiles. Two temperature and humidity sensors were mounted on the "Z-5" UAV helicopter in the field experiment. Overall, the results of applying the calibration method show that the temperature and relative humidity obtained by the UAV helicopter align closely with the tethered balloon profiles of temperature and humidity within the marine atmospheric boundary layer.
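
    The bias-calibration idea reduces to an ordinary least-squares fit, sketched below on synthetic data: raw airborne readings (plus covariates) are regressed onto co-located reference values. Variable names and coefficients are illustrative assumptions, not the paper's.

    ```python
    # Hedged sketch: multiple linear regression mapping raw UAV readings onto
    # tethered-balloon reference values to remove systematic bias.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 300
    t_uav = rng.uniform(15, 25, n)                  # raw UAV temperature, °C
    rh_uav = rng.uniform(40, 90, n)                 # raw UAV humidity, %
    t_ref = 0.96 * t_uav + 0.02 * rh_uav + 0.5 \
            + rng.normal(0, 0.1, n)                 # synthetic balloon "truth"

    X = np.column_stack([np.ones(n), t_uav, rh_uav])   # intercept + predictors
    beta, *_ = np.linalg.lstsq(X, t_ref, rcond=None)   # least-squares fit
    t_cal = X @ beta                                   # bias-calibrated series
    print("bias before: %+.2f  after: %+.3f °C"
          % ((t_uav - t_ref).mean(), (t_cal - t_ref).mean()))
    ```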

  2. Perturbation theory corrections to the two-particle reduced density matrix variational method.

    Science.gov (United States)

    Juhasz, Tamas; Mazziotti, David A

    2004-07-15

    In the variational 2-particle-reduced-density-matrix (2-RDM) method, the ground-state energy is minimized with respect to the 2-particle reduced density matrix, constrained by N-representability conditions. Consider the $N$-electron Hamiltonian $H(\lambda)$ as a function of the parameter $\lambda$, where we recover the Fock Hamiltonian at $\lambda=0$ and the fully correlated Hamiltonian at $\lambda=1$. We explore using the accuracy of perturbation theory at small $\lambda$ to correct the 2-RDM variational energies at $\lambda=1$, where the Hamiltonian represents correlated atoms and molecules. A key assumption in the correction is that the 2-RDM method will capture a fairly constant percentage of the correlation energy for $\lambda \in (0,1]$ because the nonperturbative 2-RDM approach depends more significantly upon the nature rather than the strength of the two-body Hamiltonian interaction. For a variety of molecules we observe that this correction improves the 2-RDM energies in the equilibrium bonding region, while the 2-RDM energies at stretched or nearly dissociated geometries, already highly accurate, are not significantly changed. At equilibrium geometries the corrected 2-RDM energies are similar in accuracy to those from coupled-cluster singles and doubles (CCSD), but at nonequilibrium geometries the 2-RDM energies are often dramatically more accurate, as shown in the bond stretching and dissociation data for water and nitrogen. (c) 2004 American Institute of Physics.

  3. Saturation analysis

    International Nuclear Information System (INIS)

    1974-01-01

    The invention comprises a radioimmunoassay kit for steroid determination. Selenium-75 is used as the labelling element. The chemical preparation methods for various selenium-labelled keto-steroids and their derivatives, such as hydrocortisone, testosterone, corticosterone, estriol and other steroid hormones, as well as cardiac glycosides, are described. Analytical examples are presented.

  4. Investigating the Factors Affecting the Zahedan Aquifer Hydrogeochemistry Using Factor Analysis, Saturation Indices and Composite Diagrams Methods

    Directory of Open Access Journals (Sweden)

    J. Dowlati

    2014-12-01

    Full Text Available The Zahedan aquifer is located in the northern part of the Zahedan watershed. It is essential to evaluate the quality of its groundwater resources, since they provide part of the drinking, agricultural and industrial water of this city. In order to carry out groundwater quality monitoring, assess the controlling processes and determine the cation and anion sources of the groundwater, 26 wells were sampled and water quality parameters were measured. The results of the analysis showed that almost all of the samples were very saline, with electrical conductivity varying from 1,359 to 12,620 μS cm−1. In the Zahedan aquifer, sodium, chloride and sulfate were the predominant cation and anions, respectively, and sodium-chloride (Na-Cl) and sodium-sulfate (Na-SO4) were the dominant water types. Factor analysis of the sample results indicates that two factors, natural and human, controlled about 83.30% and 74.37% of the quality variations of the groundwater in October and February, respectively. The first and major factor, related to the natural processes of ion exchange and dissolution, had positive loadings of EC, Ca2+, Mg2+, Na+, Cl-, K+ and SO42-, and controlled 65.25% of the quality variations of the groundwater in October and 58.82% in February. The second factor, related to Ca2+ and NO3-, accounted for 18.05% of the quality variations in October and 15.56% in February and, given the urban development and limited agricultural development over the aquifer, is attributed to human activities. For the samples collected in October, the saturation indices of the calcite, gypsum and dolomite minerals showed saturated conditions; in February, calcite and dolomite showed saturated conditions for more than 60% and 90% of the samples, while the gypsum index revealed under-saturated conditions for almost all samples. The under-saturated conditions in the Zahedan groundwater aquifer result from insufficient residence time of the water in the aquifer to dissolve the minerals

  5. Monitor hemoglobin concentration and oxygen saturation in living mouse tail using photoacoustic CT scanner

    Science.gov (United States)

    Liu, Bo; Kruger, Robert; Reinecke, Daniel; Stantz, Keith M.

    2010-02-01

    Purpose: The purpose of this study is to use a PCT spectroscopy scanner to monitor hemoglobin concentration and oxygen saturation changes in a living mouse by imaging the artery and veins in the mouse tail. Materials and Methods: One mouse tail was scanned using the PCT small-animal scanner at the isosbestic wavelength (796 nm) to obtain its hemoglobin concentration. Immediately after the scan, the mouse was euthanized and its blood extracted from the heart. The true hemoglobin concentration was measured using a co-oximeter. A reconstruction correction algorithm was developed to compensate for the acoustic signal loss due to the bone structure in the mouse tail. After the correction, the hemoglobin concentration was calculated from the PCT images and compared with the co-oximeter result. Next, one mouse was immobilized in the PCT scanner. Gases with different concentrations of oxygen were given to the mouse to change its oxygen saturation. PCT tail-vessel spectroscopy scans were performed 15 minutes after the introduction of each gas, and the oxygen saturation values were calculated to monitor the oxygen saturation change of the mouse. Results: The systematic error of the hemoglobin concentration measurement was less than 5% based on preliminary analysis. The same correction technique was used for the oxygen saturation calculation. After correction, the oxygen saturation level change matches the change in oxygen volume ratio of the introduced gas. Conclusion: This living mouse tail experiment has shown that NIR PCT-spectroscopy can be used to monitor the oxygen saturation status in living small animals.

  6. Computer method to detect and correct cycle skipping on sonic logs

    International Nuclear Information System (INIS)

    Muller, D.C.

    1985-01-01

    A simple but effective computer method has been developed to detect cycle skipping on sonic logs and to replace cycle skips with estimates of correct traveltimes. The method can be used to correct observed traveltime pairs from the transmitter to both receivers. The basis of the method is the linearity of a plot of theoretical traveltime from the transmitter to the first receiver versus theoretical traveltime from the transmitter to the second receiver. Theoretical traveltime pairs are calculated assuming that the sonic logging tool is centered in the borehole, that the borehole diameter is constant, that the borehole fluid velocity is constant, and that the formation is homogeneous. The plot is linear for the full range of possible formation-rock velocity. Plots of observed traveltime pairs from a sonic logging tool are also linear but have a large degree of scatter due to borehole rugosity, sharp boundaries exhibiting large velocity contrasts, and system measurement uncertainties. However, this scatter can be reduced to a level that is less than scatter due to cycle skipping, so that cycle skips may be detected and discarded or replaced with estimated values of traveltime. Advantages of the method are that it can be applied in real time, that it can be used with data collected by existing tools, that it only affects data that exhibit cycle skipping and leaves other data unchanged, and that a correction trace can be generated which shows where cycle skipping occurs and the amount of correction applied. The method has been successfully tested on sonic log data taken in two holes drilled at the Nevada Test Site, Nye County, Nevada
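
    The linearity test lends itself to a compact sketch: fit a line to the observed (t1, t2) traveltime pairs, flag points whose residual exceeds a tolerance as cycle skips, and replace them with their on-line estimates. The tolerance and synthetic data below are assumptions, not values from the paper.

    ```python
    # Hedged sketch: detect cycle skips as outliers from the t1-vs-t2 line
    # and replace them with the fitted estimates.
    import numpy as np

    def fix_cycle_skips(t1, t2, tol=25.0):
        """t1, t2: transmitter-to-receiver traveltimes (µs); tol in µs."""
        slope, intercept = np.polyfit(t1, t2, 1)   # line through the cloud
        resid = t2 - (slope * t1 + intercept)
        skipped = np.abs(resid) > tol              # cycle skips sit far off-line
        t2_fixed = t2.copy()
        t2_fixed[skipped] = slope * t1[skipped] + intercept
        return t2_fixed, skipped

    rng = np.random.default_rng(5)
    t1 = rng.uniform(100, 200, 500)
    t2 = 1.1 * t1 + 20 + rng.normal(0, 3, 500)     # normal scatter
    t2[::50] += 120                                # inject skips of ~one cycle
    fixed, flags = fix_cycle_skips(t1, t2)
    print("flagged:", flags.sum(), "of", t2.size)
    ```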

  7. A new method of CCD dark current correction via extracting the dark Information from scientific images

    Science.gov (United States)

    Ma, Bin; Shang, Zhaohui; Hu, Yi; Liu, Qiang; Wang, Lifan; Wei, Peng

    2014-07-01

    We have developed a new method to correct dark current at relatively high temperatures in Charge-Coupled Device (CCD) images when dark frames cannot be obtained on the telescope. For images taken with the Antarctic Survey Telescopes (AST3) in 2012, the low cooling efficiency meant a median CCD temperature of -46°C, resulting in a high dark current level of about 3e-/pix/sec, comparable even to the sky brightness (10e-/pix/sec). If not corrected, the nonuniformity of the dark current could outweigh the photon noise of the sky background. However, dark frames could not be obtained during the observing season because the camera was operated in frame-transfer mode without a shutter, and the telescope was unattended in winter. Here we present an alternative, simple and effective method to derive the dark current frame from the scientific images themselves. We can then scale this dark frame to the temperature at which the scientific images were taken and apply the dark frame corrections to the scientific images. We have applied this method to the AST3 data and demonstrated that it can reduce the noise to a level roughly as low as the photon noise of the sky brightness, solving the high-noise problem and improving the photometric precision. This method will also be helpful for other projects that suffer from similar issues.

  8. Monte Carlo evaluation of scattering correction methods in 131I studies using pinhole collimator

    International Nuclear Information System (INIS)

    López Díaz, Adlin; San Pedro, Aley Palau; Martín Escuela, Juan Miguel; Rodríguez Pérez, Sunay; Díaz García, Angelina

    2017-01-01

    Scattering is quite important for image activity quantification. In order to study the scattering factors and the efficacy of three multiple-energy-window scatter correction methods in 131I thyroid studies with a pinhole collimator (5 mm hole), a Monte Carlo (MC) simulation was developed. The GAMOS MC code was used to model the gamma camera and the thyroid source geometry. First, to validate the MC gamma camera pinhole-source model, the sensitivities in air and water of the simulated and measured thyroid phantom geometries were compared. Next, simulations were performed to investigate scattering and the results of the triple energy window (TEW), double energy window (DW) and reduced double energy window (RDW) correction methods for different thyroid sizes and depth thicknesses. The relative discrepancies with respect to the MC true events were evaluated. Results: The accuracy of the GAMOS MC model was verified and validated. The scattering contribution to the image was significant, between 27-40%. The discrepancies between the results of the three multiple-energy-window correction methods were significant (between 9-86%). The Reduced Double Window method (15%) provided discrepancies of 9-16%. Conclusions: For the simulated thyroid geometry with pinhole, the RDW (15%) was the most effective. (author)
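
    Of the windowed methods compared, the triple-energy-window (TEW) estimate has a standard closed form, sketched below; the window widths and counts are illustrative only, not values from the study.

    ```python
    # Hedged sketch of the standard TEW scatter estimate: counts in two narrow
    # side windows approximate, by trapezoidal interpolation, the scatter
    # inside the photopeak window.
    import numpy as np

    def tew_primary(c_peak, c_low, c_up, w_peak, w_low, w_up):
        """Counts in photopeak/side windows and the window widths (keV)."""
        scatter = (c_low / w_low + c_up / w_up) * w_peak / 2.0
        return np.maximum(c_peak - scatter, 0.0)   # clip negative estimates

    # Toy pixel: 1000 photopeak counts with 10 keV side windows around a
    # 364 keV ± 10% photopeak window (values illustrative only)
    print(tew_primary(np.array([1000.0]), np.array([80.0]), np.array([30.0]),
                      72.8, 10.0, 10.0))
    ```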

  9. Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method

    International Nuclear Information System (INIS)

    Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin

    2015-01-01

    The powerful nondestructive characteristics of computed tomography (CT) are attracting more and more research into its use for dimensional metrology, where it offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty severely limit the further utilization of CT for dimensional metrology, due to many factors, among which the beam hardening (BH) effect plays a vital role. This paper mainly focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a penalty term is added to the cost function, enabling more accurate measurement results to be obtained by the simple global threshold method. The proposed method is efficient, and especially suited to cases where there is a large difference in gray value between material and background. Different spheres with known diameters are used to verify the accuracy of dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is of general feasibility. (paper)
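
    The entropy-minimization loop can be sketched as follows, assuming a simple exponential beam-hardening model and using skimage's radon/iradon as a stand-in reconstructor; the paper's actual model and its consistency penalty term are richer than this.

    ```python
    # Hedged sketch: choose the correction parameter that minimizes the
    # gray-level entropy of the reconstructed slice.
    import numpy as np
    from scipy.optimize import minimize_scalar
    from skimage.transform import radon, iradon

    # Toy phantom and beam-hardened projections: p_meas = (1 - exp(-a*p)) / a
    phantom = np.zeros((128, 128)); phantom[32:96, 32:96] = 1.0
    theta = np.linspace(0., 180., 120, endpoint=False)
    p_true = radon(phantom, theta=theta)
    a_true = 0.04
    p_meas = (1.0 - np.exp(-a_true * p_true)) / a_true   # softened projections

    def gray_entropy(img, bins=128):
        hist, _ = np.histogram(img, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    def cost(a):
        # invert the assumed exponential model, then reconstruct
        p_corr = -np.log(np.clip(1.0 - a * p_meas, 1e-6, None)) / max(a, 1e-6)
        return gray_entropy(iradon(p_corr, theta=theta, filter_name='ramp'))

    res = minimize_scalar(cost, bounds=(1e-3, 0.1), method='bounded')
    print("recovered a = %.3f (true %.3f)" % (res.x, a_true))
    ```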

  10. Simplified Transient Hot-Wire Method for Effective Thermal Conductivity Measurement in Geomaterials: Microstructure and Saturation Effect

    Directory of Open Access Journals (Sweden)

    B. Merckx

    2012-01-01

    Full Text Available The thermal conductivity measurement by a simplified transient hot-wire technique is applied to geomaterials in order to show the relationships which can exist between effective thermal conductivity, texture, and moisture of the materials. After a validation of the "one hot-wire" technique in water, toluene, and glass-bead assemblages, the investigations were performed (1) in glass-bead assemblages of different diameters in dried, water-saturated, and acetone-saturated states, in order to observe the role of grain size and saturation on the effective thermal conductivity, (2) in a compacted earth brick at different moisture states, and (3) in a lime-hemp concrete during the 110 days following its manufacture. The lime-hemp concrete allows measurements during the setting, desiccation and carbonation steps. The recorded ΔT/ln(t) diagrams allow the calculation of one effective thermal conductivity in the continuous and homogeneous fluids and two effective thermal conductivities in the heterogeneous solids. The first one, measured in the short-time acquisitions (<1 s), mainly depends on the contact between the wire and the grains, and thus on the microtexture and hydrated state of the material. The second one, measured for longer acquisition times, characterizes the mean effective thermal conductivity of the material.
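
    The working equation behind the ΔT/ln(t) diagrams is the ideal line-source solution ΔT(t) = (q/4πλ) ln t + C, so λ follows from the slope of a linear fit, as sketched below on synthetic data; the heating power and time window are illustrative assumptions.

    ```python
    # Hedged sketch: effective thermal conductivity from the slope of the
    # temperature rise versus ln(time) in the hot-wire line-source model.
    import numpy as np

    def hot_wire_lambda(t, dT, q_per_m):
        """t in s, dT in K, q_per_m: heating power per unit wire length (W/m)."""
        slope, _ = np.polyfit(np.log(t), dT, 1)
        return q_per_m / (4.0 * np.pi * slope)

    # Synthetic record for a water-like conductivity (~0.6 W/m/K), q = 2 W/m
    t = np.linspace(1.0, 10.0, 200)
    lam_true, q = 0.6, 2.0
    dT = q / (4 * np.pi * lam_true) * np.log(t) + 0.05 \
         + np.random.default_rng(6).normal(0, 0.002, t.size)
    print("lambda = %.3f W/m/K" % hot_wire_lambda(t, dT, q))
    ```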

  11. A method for correcting the depth-of-interaction blurring in PET cameras

    International Nuclear Information System (INIS)

    Rogers, J.G.

    1993-11-01

    A method is presented for correcting PET images for the blurring caused by variations in the depth of interaction in position-sensitive gamma-ray detectors. In the case of a fine-cut 50×50×30 mm BGO block detector, the method is shown to improve the detector resolution by about 25%, measured in the geometry corresponding to detection at the edge of the field of view. Strengths and weaknesses of the method are discussed and its potential usefulness for improving the images of future PET cameras is assessed. (author). 8 refs., 3 figs

  12. Hydrological modeling as an evaluation tool of EURO-CORDEX climate projections and bias correction methods

    Science.gov (United States)

    Hakala, Kirsti; Addor, Nans; Seibert, Jan

    2017-04-01

    Streamflow stemming from Switzerland's mountainous landscape will be influenced by climate change, which will pose significant challenges to the water management and policy sector. In climate change impact research, the determination of future streamflow is impeded by different sources of uncertainty, which propagate through the model chain. In this research, we explicitly considered the following sources of uncertainty: (1) climate models, (2) downscaling of the climate projections to the catchment scale, (3) bias correction method and (4) parameterization of the hydrological model. We utilize climate projections at 0.11° (about 12.5 km) resolution from the EURO-CORDEX project, which are the most recent climate projections for the European domain. EURO-CORDEX comprises regional climate model (RCM) simulations, which have been downscaled from global climate models (GCMs) of the CMIP5 archive, using both dynamical and statistical techniques. Uncertainties are explored by applying a modeling chain involving 14 GCM-RCMs to ten Swiss catchments. We utilize the rainfall-runoff model HBV Light, which has been widely used in operational hydrological forecasting. The Lindström measure, a combination of model efficiency and volume error, was used as the objective function to calibrate HBV Light. The ten best parameter sets are then obtained by calibrating with the genetic algorithm and Powell optimization (GAP) method. The GAP optimization method is based on the evolution of parameter sets, which works by selecting and recombining high-performing parameter sets. Once HBV is calibrated, we perform a quantitative comparison of the biases inherited from the climate model simulations with those stemming from the hydrological model. The evaluation is conducted over two time periods: i) 1980-2009, to characterize the simulation realism under the current climate, and ii) 2070-2099, to identify the magnitude of the projected change of
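
    As one concrete example of the bias correction step in such a model chain (the abstract does not prescribe a specific method), empirical quantile mapping is a widely used choice and is sketched below on synthetic data; the distributions are illustrative assumptions.

    ```python
    # Hedged sketch of empirical quantile mapping: simulated values are mapped
    # through the calibration-period quantile transfer so the corrected series
    # reproduces the observed distribution.
    import numpy as np

    def quantile_map(sim_cal, obs_cal, sim_out):
        """Map sim_out through the calibration-period quantile transfer."""
        q = np.linspace(0, 100, 101)
        sim_q = np.percentile(sim_cal, q)
        obs_q = np.percentile(obs_cal, q)
        return np.interp(sim_out, sim_q, obs_q)

    rng = np.random.default_rng(7)
    obs = rng.gamma(2.0, 3.0, 3000)           # "observed" daily precipitation
    sim = rng.gamma(2.5, 2.0, 3000) + 1.0     # biased RCM output, same period
    corrected = quantile_map(sim, obs, sim)
    print("means  obs %.2f  sim %.2f  corrected %.2f"
          % (obs.mean(), sim.mean(), corrected.mean()))
    ```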

  13. THE EFFECT OF DIFFERENT CORRECTIVE FEEDBACK METHODS ON THE OUTCOME AND SELF CONFIDENCE OF YOUNG ATHLETES

    Directory of Open Access Journals (Sweden)

    George Tzetzis

    2008-09-01

    Full Text Available This experiment investigated the effects of three corrective feedback methods, using different combinations of correction or error cues and positive feedback, on learning two badminton skills of different difficulty (forehand clear - low difficulty, backhand clear - high difficulty). Outcome and self-confidence scores were used as dependent variables. The 48 participants were randomly assigned to four groups. Group A received correction cues and positive feedback. Group B received cues on errors of execution. Group C received positive feedback, correction cues and error cues. Group D was the control group. Pre-, post- and retention tests were conducted. A three-way analysis of variance (ANOVA; 4 groups × 2 task difficulties × 3 measures) with repeated measures on the last factor revealed significant interactions for each dependent variable. All the corrective feedback groups increased their outcome scores over time for the easy skill, but only groups A and C did so for the difficult skill. Groups A and B had significantly better outcome scores than group C and the control group for the easy skill on the retention test. However, for the difficult skill, group C was better than groups A, B and D. The self-confidence scores of groups A and C improved over time for the easy skill, but not those of groups B and D. Again, for the difficult skill, only group C improved over time. Finally, a regression analysis showed that the improvement in performance predicted a proportion of the improvement in self-confidence for both the easy and the difficult skill. It was concluded that when young athletes are taught skills of different difficulty, different types of instruction might be more appropriate for improving outcome and self-confidence. A more integrated approach to teaching will assist coaches and physical education teachers to be more efficient and effective

  14. An improved correlated sampling method for calculating correction factor of detector

    International Nuclear Information System (INIS)

    Wu Zhen; Li Junli; Cheng Jianping

    2006-01-01

    In the case of a small detector lying inside a bulk of medium, there are two problems in calculating the correction factors of the detector. One is that the detector is too small for the particles to arrive at and collide in; the other is that the ratio of two quantities is not accurate enough. The method discussed in this paper, which combines correlated sampling with modified particle-collision auto-importance sampling and has been implemented on the MCNP-4C platform, can solve these two problems. In addition, three other variance reduction techniques are each combined with correlated sampling to calculate a simple model of detector correction factors. The results prove that, although all the variance reduction techniques combined with correlated sampling improve the calculating efficiency, the method combining modified particle-collision auto-importance sampling with correlated sampling is the most efficient one. (authors)

  15. A method of measuring and correcting tilt of anti-vibration wind turbines based on a screening algorithm

    Science.gov (United States)

    Xiao, Zhongxiu

    2018-04-01

    A method of measuring and correcting the tilt of anti-vibration wind turbines based on a screening algorithm is proposed in this paper. First of all, we design a device whose core is the ADXL203 acceleration sensor; the inclination is measured by installing it on the tower of the wind turbine as well as in the nacelle. Next, a Kalman filter algorithm is used to filter effectively, by establishing a state-space model for signal and noise, and MATLAB is used for simulation. Considering the impact of tower and nacelle vibration on the collected data, the original data and the filtered data are classified and stored by the screening algorithm, and the filtered data are filtered again to make the output data more accurate. Finally, we eliminate installation errors algorithmically to achieve the tilt correction. A device based on this method has the advantages of high precision, low cost and vibration resistance, and has a wide range of applications and promotion value.
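
    The filtering stage can be pictured with a scalar Kalman filter under a random-walk inclination model, as sketched below; the noise variances and data are assumptions, and the paper's screening and correction logic is not shown.

    ```python
    # Hedged sketch: scalar Kalman filter smoothing noisy tilt readings under
    # a (nearly) constant-inclination random-walk state model.
    import numpy as np

    def kalman_1d(z, q=1e-4, r=0.05):
        """z: raw tilt measurements (deg); q, r: process/measurement variances."""
        x, p = z[0], 1.0                   # state estimate and its variance
        out = np.empty_like(z)
        for k, zk in enumerate(z):
            p = p + q                      # predict (state assumed constant)
            g = p / (p + r)                # Kalman gain
            x = x + g * (zk - x)           # update with the new measurement
            p = (1.0 - g) * p
            out[k] = x
        return out

    rng = np.random.default_rng(8)
    true_tilt = 1.8                                    # deg
    z = true_tilt + rng.normal(0, 0.22, 400)           # vibration noise
    est = kalman_1d(z)
    print("final estimate: %.2f deg" % est[-1])
    ```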

  16. Corrected direct force balance method for atomic force microscopy lateral force calibration

    International Nuclear Information System (INIS)

    Asay, David B.; Hsiao, Erik; Kim, Seong H.

    2009-01-01

    This paper reports corrections and improvements of the previously reported direct force balance method (DFBM) developed for lateral calibration of atomic force microscopy. The DFBM employs the lateral force signal obtained during a force-distance measurement on a sloped surface and relates this signal to the applied load and the slope of the surface to determine the lateral calibration factor. In the original publication [Rev. Sci. Instrum. 77, 043903 (2006)], the tip-substrate contact was assumed to be pinned at the point of contact, i.e., no slip along the slope. In control experiments, the tip was found to slide along the slope during the force-distance curve measurement. This paper presents the correct force balance for lateral force calibration.

  17. Calibration of an accountability tank by bubbling pressure method: correction factors to be taken into account

    International Nuclear Information System (INIS)

    Cauchetier, Ph.

    1993-01-01

    To obtain the needed precision in the calibration of an accountability tank by the bubbling pressure method, very slow bubbling must be used. The measured data (mass and pressure) must be transformed into the physical dimensions of the vessel (height and volume). All the corrections to be taken into account (buoyancy, calibration curve of the sensor, density of the liquid, weight of the gas column, bubbling overpressure, temperature, etc.) are reviewed and evaluated. The equations used are given. (author). 3 figs., 1 tab., 2 refs
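
    The basic data reduction behind the method can be sketched as follows. The correction terms shown (gas-column weight, bubbling overpressure, air buoyancy on the weighed mass) are the ones listed above, reduced to their simplest textbook form; all numerical values are illustrative, the sign of each pressure term depends on the installation geometry, and the paper's full treatment also includes the sensor calibration curve and temperature.

```python
G = 9.80665  # m/s^2

def liquid_height(p_measured, rho_liquid, rho_gas, h_line, dp_bubble):
    """Height (m) of liquid above the bubbling nozzle.

    p_measured : gauge pressure at the sensor, Pa
    rho_liquid : liquid density, kg/m^3
    rho_gas    : bubbling-gas density, kg/m^3
    h_line     : height of gas line between sensor and nozzle, m
    dp_bubble  : bubbling overpressure (surface tension at the nozzle), Pa
    """
    p = p_measured + rho_gas * G * h_line - dp_bubble  # corrected pressure
    return p / (rho_liquid * G)

def true_mass(balance_reading, rho_sample, rho_air=1.2, rho_weights=8000.0):
    """Conventional air-buoyancy correction for a mass weighed in air."""
    return (balance_reading * (1 - rho_air / rho_weights)
            / (1 - rho_air / rho_sample))

print(liquid_height(12_000.0, 1000.0, 1.2, 1.5, 50.0))  # height in metres
```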

  18. The method of borderline anxiety-depressive disorder correction in patients with diabetes mellitus

    Directory of Open Access Journals (Sweden)

    A. Kozhanova

    2015-11-01

    The article presents the results of research on the effectiveness of the method developed by the authors for correcting borderline anxiety-depressive disorders in patients with type 2 diabetes through the use of magnet therapy. Tags: anxiety-depressive disorder, hidden depression, diabetes, medical rehabilitation, singlet-oxygen therapy.

  19. BIOFEEDBACK: A NEW METHOD FOR CORRECTION OF MOTOR DISORDERS IN PATIENTS WITH MULTIPLE SCLEROSIS

    Directory of Open Access Journals (Sweden)

    Ya. S. Pekker

    2014-01-01

    Full Text Available Motor disorders are among the major disabling factors in multiple sclerosis. Rehabilitation of such disorders is one of the most important medical and social problems. Currently, much attention is given to the development of methods for correction of motor disorders based on accessing the natural resources of the human body. One of these methods is adaptive control with biofeedback (BFB). The aim of our study was the correction of motor disorders in multiple sclerosis patients using biofeedback training. In the study, we developed training scenarios for a computer-based EMG biofeedback rehabilitation program aimed at correction of motor disorders in patients with multiple sclerosis (MS). The method was tested in the neurological clinic of SSMU. The study included 9 patients with a definite diagnosis of MS and a clinical picture of combined pyramidal and cerebellar symptoms. The effectiveness of the rehabilitation procedures with biofeedback training was assessed using specialized scales (the Kurtzke Functional Systems scale; the quality-of-life questionnaire SF-36; the Sickness Impact Profile, SIP; and the Fatigue Severity Scale, FSS). In the studied group of patients, the fatigue score (FSS) decreased, while motor control (SIP2) and the physical and mental components of health (SF-36) improved. There was a tendency toward reduction of the neurological deficit, reflected in lower scores for pyramidal dysfunction on the Kurtzke scale. Analysis of the course dynamics of EMG biofeedback training for the trained muscles indicates an increase in the recorded EMG signal from session to session. A tendency toward increased strength and coordination of the trained muscles was demonstrated. The positive results of biofeedback therapy in patients with MS suggest that this method can be recommended as part of complex rehabilitation measures to correct motor and psycho-emotional disorders.

  20. New calculation method for thermodynamic properties of humid air in humid air turbine cycle – The general model and solutions for saturated humid air

    International Nuclear Information System (INIS)

    Wang, Zidong; Chen, Hanping; Weng, Shilie

    2013-01-01

    The article proposes a new calculation method for the thermodynamic properties (i.e. specific enthalpy, specific entropy and specific volume) of humid air in the humid air turbine cycle. The pressure range studied is from 0.1 MPa to 5 MPa. The fundamental behaviors of dry air and water vapor in saturated humid air are explored in depth. The new model proposes and verifies the relationship between the total gas mixture pressure and the gas component pressures. This provides a good explanation of the fundamental behaviors of gas components in a gas mixture from a new perspective. Another discovery is that the water vapor component pressure of saturated humid air equals P_S, always smaller than its partial pressure (f·P_S) as was believed in past research. In the new model, the "Local Gas Constant" describes the interaction between similar molecules. The "Improvement Factor", proposed for the first time in this article, quantitatively describes the magnitude of the interaction between dissimilar molecules. Combined, they fully describe the real thermodynamic properties of humid air. The average error of the Revised Dalton's Method is within 0.1% compared to experimentally-based data. - Highlights: • The new model is suitable for calculating thermodynamic properties of humid air in the HAT cycle. • Fundamental behaviors of dry air and water vapor in saturated humid air are explored in depth. • The Local Gas Constant describes each component existing alone and the Improvement Factor describes the interaction between different components. • The new model proposes and verifies the relationship between total gas mixture pressure and component pressures. • It solves saturated humid air thoroughly and deviates from experimental data by less than 0.1%

  1. A New Variational Method for Bias Correction and Its Applications to Rodent Brain Extraction.

    Science.gov (United States)

    Chang, Huibin; Huang, Weimin; Wu, Chunlin; Huang, Su; Guan, Cuntai; Sekar, Sakthivel; Bhakoo, Kishore Kumar; Duan, Yuping

    2017-03-01

    Brain extraction is an important preprocessing step for further analysis of brain MR images. Significant intensity inhomogeneity can be observed in rodent brain images due to the high-field MRI technique. Unlike most existing brain extraction methods that require bias-corrected MRI, we present a high-order and L0-regularized variational model for bias correction and brain extraction. The model is composed of a data-fitting term, a piecewise-constant regularization and a smooth regularization, constructed on a 3-D formulation for medical images with anisotropic voxel sizes. We propose an efficient multi-resolution algorithm for fast computation. At each resolution layer, we solve an alternating direction scheme, all subproblems of which have closed-form solutions. The method is tested on three T2-weighted acquisition configurations comprising a total of 50 rodent brain volumes, acquired at field strengths of 4.7 Tesla, 9.4 Tesla and 17.6 Tesla, respectively. On the one hand, we compare the results of bias correction with N3 and N4 in terms of the coefficient of variation on 20 different tissues of the rodent brain. On the other hand, the results of brain extraction are compared against manually segmented gold standards, BET, BSE and 3-D PCNN, based on a number of metrics. With its high accuracy and efficiency, the proposed method can facilitate automatic processing of large-scale brain studies.

  2. A method based on moving least squares for XRII image distortion correction

    International Nuclear Information System (INIS)

    Yan Shiju; Wang Chengtao; Ye Ming

    2007-01-01

    This paper presents a novel integrated method to correct geometric distortions of XRII (x-ray image intensifier) images. The method has been compared, in terms of mean-squared residual error measured at control and intermediate points, with two traditional local methods and a traditional global method. The proposed method is based on moving least squares (MLS) and polynomial fitting. Extensive experiments were performed on simulated and real XRII images. In simulation, the effects of pincushion distortion, sigmoidal distortion, local distortion, noise, and the number of control points were tested. The traditional local methods were sensitive to pincushion and sigmoidal distortion. The traditional global method was only sensitive to sigmoidal distortion. The proposed method was sensitive to neither pincushion nor sigmoidal distortion. Its sensitivity to local distortion was lower than or comparable with that of the traditional global method. Its sensitivity to noise was higher than that of all three traditional methods; nevertheless, provided the standard deviation of the noise was not greater than 0.1 pixels, the accuracy of the proposed method was still higher than that of the traditional methods. Its sensitivity to the number of control points was much lower than that of the traditional methods. Provided that a proper cutoff radius is chosen, the accuracy of the proposed method is higher than that of the traditional methods. Experiments on real images, carried out using a 9 in. XRII, showed that the residual error of the proposed method (0.2544±0.2479 pixels) is lower than that of the traditional global method (0.4223±0.3879 pixels) and the local methods (0.4555±0.3518 pixels and 0.3696±0.4019 pixels, respectively)
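
    A minimal sketch of a moving-least-squares correction map is given below: for each query pixel, a locally weighted first-order polynomial is fitted to the control-point correspondences (distorted to true) and evaluated at that pixel. The Gaussian weight width plays the role of the cutoff radius discussed above; the basis, weights and test distortion are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def mls_correct(query, src, dst, sigma=50.0):
    """Map distorted coordinates query (N,2) to corrected coordinates, given
    control points src (M,2) in the distorted image and their true positions
    dst (M,2)."""
    out = np.empty_like(query, dtype=float)
    A = np.column_stack([np.ones(len(src)), src])        # basis [1, x, y]
    for i, q in enumerate(query):
        d2 = np.sum((src - q) ** 2, axis=1)
        sw = np.sqrt(np.exp(-d2 / (2.0 * sigma ** 2)))   # Gaussian weights
        # Weighted least squares, one fit per query point (the "moving" part).
        coef, *_ = np.linalg.lstsq(A * sw[:, None], dst * sw[:, None],
                                   rcond=None)
        out[i] = np.array([1.0, q[0], q[1]]) @ coef
    return out

# Toy test: a mild pincushion-like distortion of a 9x9 grid of control points.
gx, gy = np.meshgrid(np.linspace(-100, 100, 9), np.linspace(-100, 100, 9))
dst = np.column_stack([gx.ravel(), gy.ravel()])          # true positions
r2 = np.sum(dst ** 2, axis=1, keepdims=True)
src = dst * (1.0 + 1e-5 * r2)                            # distorted positions
print(mls_correct(np.array([[10.0, 20.0]]), src, dst))   # ~undistorted point
```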

  3. Correction of 157-nm lens based on phase ring aberration extraction method

    Science.gov (United States)

    Meute, Jeff; Rich, Georgia K.; Conley, Will; Smith, Bruce W.; Zavyalova, Lena V.; Cashmore, Julian S.; Ashworth, Dominic; Webb, James E.; Rich, Lisa

    2004-05-01

    Early manufacture and use of 157nm high-NA lenses has presented significant challenges, including intrinsic birefringence correction, control of optical surface contamination, and the use of relatively unproven materials, coatings, and metrology. Many of these issues were addressed during the manufacture and use of International SEMATECH's 0.85NA lens. Most significantly, we were the first to employ 157nm phase measurement interferometry (PMI) and birefringence modeling software for lens optimization. These efforts yielded significant wavefront improvement and produced one of the best wavefront-corrected 157nm lenses to date. After applying the best practices to the manufacture of the lens, we still had to overcome the difficulties of integrating the lens into the tool platform at International SEMATECH instead of at the supplier facility. After lens integration, alignment, and field optimization were complete, conventional lithography and phase ring aberration extraction techniques were used to characterize system performance. These techniques suggested a wavefront error of approximately 0.05 waves RMS, much larger than the 0.03 waves RMS predicted by 157nm PMI. In-situ wavefront correction was planned for in the early stages of this project to mitigate risks introduced by the use of development materials and techniques and by field integration of the lens. In this publication, we document the development and use of a phase ring aberration extraction method for characterizing imaging performance and a technique for correcting aberrations with the addition of an optical compensation plate. Imaging results before and after the lens correction are presented, and differences between actual and predicted results are discussed.

  4. Use of regularization method in the determination of ring parameters and orbit correction

    International Nuclear Information System (INIS)

    Tang, Y.N.; Krinsky, S.

    1993-01-01

    We discuss applying the regularization method of Tikhonov to the solution of inverse problems arising in accelerator operations. This approach has been successfully used for orbit correction on the NSLS storage rings, and is presently being applied to the determination of betatron functions and phases from the measured response matrix. Inverse problems of differential equations often lead to sets of integral equations of the first kind, which are ill-conditioned. The regularization method is used to combat this ill-posedness
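
    In the standard formulation, orbit correction solves an ill-conditioned linear system relating corrector strengths x to measured orbit distortions b through a response matrix A, and Tikhonov regularization replaces the naive least-squares solution with x = (A^T A + λI)^(-1) A^T b. A minimal sketch with a generic response matrix and an illustrative λ (not the NSLS system):

```python
import numpy as np

def tikhonov_correct(A, b, lam=1e-3):
    """Regularized corrector settings for response matrix A (m x n):
    minimizes ||A x - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 20))
A[:, -1] = A[:, -2] + 1e-6 * rng.standard_normal(40)  # nearly dependent columns
x_true = rng.standard_normal(20)
b = A @ x_true + 0.01 * rng.standard_normal(40)       # noisy orbit readings

# Regularization keeps the correction bounded despite the ill-conditioning.
print(np.linalg.norm(tikhonov_correct(A, b, 1e-3)))
```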

  5. Application of the spectral correction method to reanalysis data in South Africa

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Kruger, Andries C.

    2014-01-01

    The aim of this study is to evaluate the applicability of the method to the relevant region. The impacts of the two aspects are investigated for interior and coastal locations. Measurements from five stations in South Africa are used to evaluate the results from the spectral model S(f) = a·f^(-5/3) together...... with the hourly time series of the Climate Forecast System Reanalysis (CFSR) 10 m wind at 38 km resolution over South Africa. The results show that applying the spectral correction method to the CFSR wind data produces extreme wind atlases in acceptable agreement with the atlas made from limited measurements across

  6. Nonlinear effect of the structured light profilometry in the phase-shifting method and error correction

    International Nuclear Information System (INIS)

    Zhang Wan-Zhen; Chen Zhe-Bo; Xia Bin-Feng; Lin Bin; Cao Xiang-Qun

    2014-01-01

    Digital structured light (SL) profilometry is increasingly used in three-dimensional (3D) measurement technology. However, the nonlinearity of off-the-shelf projectors and cameras seriously reduces the measurement accuracy. In this paper, first, we review the nonlinear effects of the projector–camera system in the phase-shifting structured light depth measurement method. We show that high-order harmonic wave components lead to phase error in the phase-shifting method. Then a practical method based on frequency domain filtering is proposed for nonlinear error reduction. By using this method, the nonlinear calibration of the SL system is not required. Moreover, both the nonlinear effects of the projector and the camera can be effectively reduced. The simulations and experiments have verified our nonlinear correction method.

  7. A Novel Bias Correction Method for Soil Moisture and Ocean Salinity (SMOS) Soil Moisture: Retrieval Ensembles

    Directory of Open Access Journals (Sweden)

    Ju Hyoung Lee

    2015-12-01

    Full Text Available Bias correction is a very important pre-processing step in satellite data assimilation analysis, as data assimilation itself cannot circumvent satellite biases. We introduce a retrieval-algorithm-specific and spatially heterogeneous Instantaneous Field of View (IFOV) bias correction method for Soil Moisture and Ocean Salinity (SMOS) soil moisture. To the best of our knowledge, this is the first paper to present the probabilistic presentation of SMOS soil moisture using retrieval ensembles. We illustrate that retrieval ensembles effectively mitigated the overestimation problem of SMOS soil moisture arising from brightness temperature errors over West Africa in a computationally efficient way (ensemble size: 12, no time-integration). In contrast, the existing method of Cumulative Distribution Function (CDF) matching considerably increased the SMOS biases, due to the limitations of relying on imperfect reference data. From the validation at two semi-arid sites, Benin (moderately wet and vegetated area) and Niger (dry and sandy bare soils), it was shown that the SMOS errors arising from rain and vegetation attenuation were appropriately corrected by the ensemble approaches. In Benin, the Root Mean Square Errors (RMSEs) decreased from 0.1248 m3/m3 for CDF matching to 0.0678 m3/m3 for the proposed ensemble approach. In Niger, the RMSEs decreased from 0.14 m3/m3 for CDF matching to 0.045 m3/m3 for the ensemble approach.

  8. Determination of corrective factors for an ultrasonic flow measuring method in pipes accounting for perturbations

    International Nuclear Information System (INIS)

    Etter, S.

    1982-01-01

    With current ultrasonic flow measuring equipment (UFME), the mean velocity is measured along one or two measuring paths. This mean velocity is not equal to the velocity averaged over the flow cross-section, by means of which the flow rate is calculated. This difference is found already for axially symmetric, fully developed velocity profiles and, to a larger extent, for disturbed profiles varying in the flow direction and for nonsteady flow. Corrective factors are defined for steady and nonsteady flows. These factors can be derived from the flow profiles within the UFME. By mathematical simulation of the entrainment effect, the influence of cross and swirl flows on various ultrasonic measuring methods is studied. The applied UFME with crossed measuring paths is shown to be largely independent of cross and swirl flows. For computer evaluation of velocity network measurements in circular cross-sections, the equations for interpolation and integration are derived. The results of the mathematical method are the isotach profile, the flow rate and, for fully developed flow, directly the corrective factor. In the experimental part, corrective factors are determined for nonsteady flow in a measuring plane before and in four measuring planes behind a perturbation. (orig./RW)
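
    The notion of a corrective factor for a fully developed profile can be made concrete with a short numerical sketch: for a power-law profile u(r) = (1 − r/R)^(1/n), the ratio of the area-averaged velocity to the diametral path-averaged velocity works out to 2n/(2n+1). This is a textbook illustration, not the paper's UFME-specific computation.

```python
import numpy as np

def trap(y, x):
    # Explicit trapezoidal rule (kept simple for clarity).
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def correction_factor(n=7.0, samples=200_001):
    r = np.linspace(0.0, 1.0, samples)     # normalized radius r/R
    u = (1.0 - r) ** (1.0 / n)             # power-law profile, u = 1 on axis
    path_mean = trap(u, r)                 # average along a diametral path
    area_mean = trap(2.0 * r * u, r)       # average over the circular area
    return area_mean / path_mean

print(correction_factor(7.0))              # ~0.933, i.e. 2n/(2n+1) for n = 7
```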

  9. X-ray scatter correction method for dedicated breast computed tomography: improvements and initial patient testing

    International Nuclear Information System (INIS)

    Ramamurthy, Senthil; D’Orsi, Carl J; Sechopoulos, Ioannis

    2016-01-01

    A previously proposed x-ray scatter correction method for dedicated breast computed tomography was further developed and implemented so as to allow for initial patient testing. The method involves the acquisition of a complete second set of breast CT projections covering 360° with a perforated tungsten plate in the path of the x-ray beam. To make patient testing feasible, a wirelessly controlled electronic positioner for the tungsten plate was designed and added to a breast CT system. Other improvements to the algorithm were implemented, including automated exclusion of non-valid primary estimate points and the use of a different approximation method to estimate the full scatter signal. To evaluate the effectiveness of the algorithm, the resulting image quality was assessed with a breast phantom and with nine patient images. The improvements in the algorithm avoided the introduction of artifacts, especially at the object borders, which had been an issue in the previous implementation in some cases. Both the contrast, in terms of signal difference, and the signal difference-to-noise ratio were improved with the proposed method, as opposed to with the correction algorithm incorporated in the system, which does not recover contrast. Patient image evaluation also showed enhanced contrast, better cupping correction, and more consistent voxel values for the different tissues. The algorithm also reduces artifacts present in reconstructions of non-regularly shaped breasts. With the implemented hardware and software improvements, the proposed method can be reliably used during patient breast CT imaging, resulting in improved image quality, no introduction of artifacts, and in some cases reduction of artifacts already present. The impact of the algorithm on actual clinical performance for detection, diagnosis and other clinical tasks in breast imaging remains to be evaluated. (paper)

  10. Bias correction for estimated QTL effects using the penalized maximum likelihood method.

    Science.gov (United States)

    Zhang, J; Yue, C; Zhang, Y-M

    2012-04-01

    A penalized maximum likelihood method has been proposed as an important approach to the detection of epistatic quantitative trait loci (QTL). However, this approach is not optimal in two special situations: (1) closely linked QTL with effects in opposite directions and (2) small-effect QTL, because the method produces downwardly biased estimates of QTL effects. The present study aims to correct the bias by using correction coefficients and shifting from the use of a uniform prior on the variance parameter of a QTL effect to that of a scaled inverse chi-square prior. The results of Monte Carlo simulation experiments show that the improved method increases the power from 25 to 88% in the detection of two closely linked QTL of equal size in opposite directions and from 60 to 80% in the identification of QTL with small effects (0.5% of the total phenotypic variance). We used the improved method to detect QTL responsible for the barley kernel weight trait using 145 doubled haploid lines developed in the North American Barley Genome Mapping Project. Application of the proposed method to other shrinkage estimation of QTL effects is discussed.

  11. A new method of body habitus correction for total body potassium measurements

    International Nuclear Information System (INIS)

    O'Hehir, S; Green, S; Beddoe, A H

    2006-01-01

    This paper describes an accurate and time-efficient method for the determination of total body potassium via a combination of measurements in the Birmingham whole body counter and the use of the Monte Carlo n-particle (MCNP) simulation code. In developing this method, MCNP has also been used to derive values for some components of the total measurement uncertainty which are difficult to quantify experimentally. A method is proposed for MCNP-assessed body habitus corrections based on a simple generic anthropomorphic model, scaled for individual height and weight. The use of this model increases patient comfort by reducing the need for comprehensive anthropomorphic measurements. The analysis shows that the total uncertainty in potassium weight determination by this whole body counting methodology for water-filled phantoms with a known amount of potassium is 2.7% (SD). The uncertainty in the method of body habitus correction (applicable also to phantom-based methods) is 1.5% (SD). It is concluded that this new strategy provides a sufficiently accurate model for routine clinical use

  12. Nonuniform Illumination Correction Algorithm for Underwater Images Using Maximum Likelihood Estimation Method

    Directory of Open Access Journals (Sweden)

    Sonali Sachin Sankpal

    2016-01-01

    Full Text Available Scattering and absorption of light are the main reasons for limited visibility in water; suspended particles and dissolved chemical compounds in the water are responsible for both. The limited visibility results in degradation of underwater images. Visibility can be increased by using an artificial light source in the underwater imaging system, but the artificial light illuminates the scene in a nonuniform fashion, producing a bright spot at the center with dark regions at the surroundings. In some cases the imaging system itself creates dark regions in the image by casting shadows on the objects. The problem of nonuniform illumination is neglected in most image enhancement techniques for underwater images, and very few methods show results on color images. This paper suggests a method for nonuniform illumination correction of underwater images. The method assumes that natural underwater images are Rayleigh distributed and uses maximum likelihood estimation of the scale parameter to map the image distribution to a Rayleigh distribution. The method is compared with traditional methods for nonuniform illumination correction using no-reference image quality metrics such as average luminance, average information entropy, normalized neighborhood function, average contrast, and comprehensive assessment function.
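
    The estimator at the heart of the method is simple: for Rayleigh-distributed intensities, the maximum likelihood estimate of the scale parameter is sigma = sqrt(sum(x^2) / (2N)). The sketch below applies it blockwise to equalize the local scale against the global scale; the block size and the synthetic test image are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def rayleigh_scale(x):
    """MLE of the Rayleigh scale parameter: sigma^2 = sum(x^2) / (2N)."""
    x = np.asarray(x, dtype=float).ravel()
    return np.sqrt(np.mean(x ** 2) / 2.0)

def correct_illumination(img, block=32):
    """Rescale each block so its local Rayleigh scale matches the global one."""
    out = img.astype(float).copy()
    target = rayleigh_scale(img)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            tile = out[i:i + block, j:j + block]   # view into `out`
            s = rayleigh_scale(tile)
            if s > 0:
                tile *= target / s                 # map local scale to global
    return np.clip(out, 0, 255)

rng = np.random.default_rng(3)
gain = np.linspace(0.4, 1.6, 256)[None, :]             # nonuniform lighting
img = rng.rayleigh(scale=40.0, size=(256, 256)) * gain
fixed = correct_illumination(img)
print(rayleigh_scale(img[:, :32]), rayleigh_scale(fixed[:, :32]))
```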

  13. A new method of body habitus correction for total body potassium measurements

    Energy Technology Data Exchange (ETDEWEB)

    O' Hehir, S [University Hospital Birmingham Foundation NHS Trust, Birmingham (United Kingdom); Green, S [University Hospital Birmingham Foundation NHS Trust, Birmingham (United Kingdom); Beddoe, A H [University Hospital Birmingham Foundation NHS Trust, Birmingham (United Kingdom)

    2006-09-07

    This paper describes an accurate and time-efficient method for the determination of total body potassium via a combination of measurements in the Birmingham whole body counter and the use of the Monte Carlo n-particle (MCNP) simulation code. In developing this method, MCNP has also been used to derive values for some components of the total measurement uncertainty which are difficult to quantify experimentally. A method is proposed for MCNP-assessed body habitus corrections based on a simple generic anthropomorphic model, scaled for individual height and weight. The use of this model increases patient comfort by reducing the need for comprehensive anthropomorphic measurements. The analysis shows that the total uncertainty in potassium weight determination by this whole body counting methodology for water-filled phantoms with a known amount of potassium is 2.7% (SD). The uncertainty in the method of body habitus correction (applicable also to phantom-based methods) is 1.5% (SD). It is concluded that this new strategy provides a sufficiently accurate model for routine clinical use.

  14. Effects of projection and background correction method upon calculation of right ventricular ejection fraction using first-pass radionuclide angiography

    International Nuclear Information System (INIS)

    Caplin, J.L.; Flatman, W.D.; Dymond, D.S.

    1985-01-01

    There is no consensus as to the best projection or correction method for first-pass radionuclide studies of the right ventricle. We assessed the effects of two commonly used projections, 30 degrees right anterior oblique and anterior-posterior, on the calculation of right ventricular ejection fraction. In addition, two background correction methods were assessed: planar background correction to account for scatter, and right atrial correction to account for right atrio-ventricular overlap. Two first-pass radionuclide angiograms were performed in 19 subjects, one in each projection, using gold-195m (half-life 30.5 seconds), and each study was analysed using the two methods of correction. Right ventricular ejection fraction was highest using the right anterior oblique projection with right atrial correction, 35.6 +/- 12.5% (mean +/- SD), and lowest when using the anterior-posterior projection with planar background correction, 26.2 +/- 11% (p less than 0.001). The study design allowed assessment of the effects of correction method and projection independently. Correction method appeared to have relatively little effect on right ventricular ejection fraction: using right atrial correction the correlation coefficient (r) between projections was 0.92, and for planar background correction r = 0.76, both p less than 0.001. However, right ventricular ejection fraction was far more dependent upon projection. When the anterior-posterior projection was used, the calculated right ventricular ejection fraction was much more dependent on correction method (r = 0.65, p = not significant) than when using the right anterior oblique projection (r = 0.85, p less than 0.001)

  15. Analysis of efficient preconditioned defect correction methods for nonlinear water waves

    DEFF Research Database (Denmark)

    Engsig-Karup, Allan Peter

    2014-01-01

    Robust computational procedures for the solution of non-hydrostatic, free-surface, irrotational and inviscid free-surface water waves in three space dimensions can be based on iterative preconditioned defect correction (PDC) methods. Such methods can be made efficient and scalable to enable...... prediction of free-surface wave transformation and accurate wave kinematics in both deep and shallow waters in large marine areas or for predicting the outcome of experiments in large numerical wave tanks. We revisit the classical governing equations, which are the fully nonlinear and dispersive potential flow...... equations. We present a new detailed fundamental analysis using finite-amplitude wave solutions for iterative solvers. We demonstrate that the PDC method in combination with a high-order discretization method enables efficient and scalable solution of the linear system of equations arising in potential flow...
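
    The PDC iteration itself is compact: evaluate the defect d = b − Ax and apply an inexpensive preconditioner to correct the iterate. A generic linear-algebra sketch with a Jacobi preconditioner follows; it illustrates the iteration pattern only, not the paper's multigrid-preconditioned wave solver.

```python
import numpy as np

def pdc_solve(A, b, tol=1e-10, max_iter=500):
    """Preconditioned defect correction for A x = b with a Jacobi
    preconditioner M = diag(A):  x <- x + M^{-1} (b - A x)."""
    M_inv = 1.0 / np.diag(A)            # cheap approximate inverse of A
    x = np.zeros_like(b)
    for _ in range(max_iter):
        d = b - A @ x                   # defect (residual)
        if np.linalg.norm(d) < tol * np.linalg.norm(b):
            break
        x = x + M_inv * d               # preconditioned correction
    return x

# Diagonally dominant test system (guarantees convergence of the iteration).
rng = np.random.default_rng(4)
A = rng.standard_normal((50, 50)) * 0.1 + 10.0 * np.eye(50)
b = rng.standard_normal(50)
print(np.linalg.norm(A @ pdc_solve(A, b) - b))   # ~1e-9 or below
```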

  16. A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections

    Science.gov (United States)

    Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.

    2014-01-01

    A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations for the confidence intervals on the model weight and center of gravity coordinates and two different error analyses of the model weight prediction are also discussed in the appendices of the paper.

  17. Error-finding and error-correcting methods for the start-up of the SLC

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors, which affect the profile and trajectory of the beam respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time, we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time, we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist but in a more systematic fashion. The methods used in these procedures and some of the recent applications are described in this paper

  18. Non perturbative method for radiative corrections applied to lepton-proton scattering

    International Nuclear Information System (INIS)

    Chahine, C.

    1979-01-01

    We present a new, non-perturbative method to effect radiative corrections in lepton (electron or muon)-nucleon scattering, useful for existing or planned experiments. This method relies on a spectral function derived in a previous paper, which takes into account both real soft photons and virtual ones and hence is free from infrared divergence. Hard effects are computed perturbatively and then included in the form of 'hard factors' in the non-perturbative soft formulas. Practical computations are effected using the Gauss-Jacobi integration method, which reduces the relevant integrals to a rapidly converging sequence. For the simple problem of the radiative quasi-elastic peak, we get an exponentiated form conjectured by Schwinger and found by Yennie, Frautschi and Suura. We also compare our results with the peaking approximation, which we derive independently, and with the exact one-photon emission formula of Mo and Tsai. Applications of our method to the continuous spectrum include the radiative tail of the Δ33 resonance in e + p scattering and radiative corrections to the Feynman scale-invariant F2 structure function for the kinematics of two recent high-energy muon experiments
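
    The Gauss-Jacobi quadrature mentioned above absorbs an endpoint singularity of the form (1−x)^α (1+x)^β into the quadrature weight, so a handful of nodes already gives high accuracy. A hedged illustration with a toy integrand (not the paper's radiative-correction integrals):

```python
import numpy as np
from scipy.special import roots_jacobi

def gauss_jacobi_integral(f, alpha, beta, n):
    """Approximate integral of (1-x)^alpha (1+x)^beta f(x) over [-1, 1]
    using n Gauss-Jacobi nodes."""
    x, w = roots_jacobi(n, alpha, beta)
    return float(np.sum(w * f(x)))

# Integrand with an inverse-square-root singularity at x = 1:
# integral of cos(x) / sqrt(1 - x) over [-1, 1].
for n in (2, 4, 8):
    print(n, gauss_jacobi_integral(np.cos, -0.5, 0.0, n))
# The printed values settle rapidly with n, illustrating the
# "rapidly converging sequence" referred to in the abstract.
```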

  19. Corrections for hysteresis curves for rare earth magnet materials measured by open magnetic circuit methods

    International Nuclear Information System (INIS)

    Nakagawa, Yasuaki

    1996-01-01

    The methods for testing permanent magnets stipulated in the usual industrial standards are so-called closed magnetic circuit methods, which employ a loop tracer using an iron-core electromagnet. If the coercivity exceeds the highest magnetic field generated by the electromagnet, full hysteresis curves cannot be obtained. In the present work, magnetic fields up to 15 T were generated by a high-power water-cooled magnet, and the magnetization was measured by an induction method with an open magnetic circuit, in which the effect of the demagnetizing field should be taken into account. Various rare earth magnet materials such as sintered or bonded Sm-Co and Nd-Fe-B were provided by a number of manufacturers. Hysteresis curves were measured for cylindrical samples 10 mm in diameter and 2 mm, 3.5 mm, 5 mm, 14 mm or 28 mm in length. Correction for the demagnetizing field is rather difficult because of its non-uniformity. Roughly speaking, a mean demagnetizing factor for soft magnetic materials can be used for the correction, although the application of this factor to hard magnetic materials is hardly justified. Thus the dimensions of the sample should be specified when data obtained by the open magnetic circuit method are used as industrial standards. (author)
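
    The mean-field correction referred to above is H_int = H_app − N·M, with N a mean demagnetizing factor set by the sample shape. A minimal sketch follows; the magnetization curve is a toy function and the quoted N is an approximate value for a cylinder with length-to-diameter ratio near 1, not a figure from the paper.

```python
import numpy as np

def correct_hysteresis(H_applied, M, N):
    """Shift a measured M(H_applied) curve to internal-field coordinates:
    H_int = H_app - N * M (all quantities in A/m, SI convention)."""
    return H_applied - N * M

H_app = np.linspace(-12e6, 12e6, 5)          # applied field sweep, A/m
M = 8e5 * np.tanh(H_app / 4e6)               # toy magnetization curve, A/m
H_int = correct_hysteresis(H_app, M, N=0.27) # N ~ 0.27: short cylinder, axial
print(H_int)
```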

  20. Fast pressure-correction method for incompressible Navier-Stokes equations in curvilinear coordinates

    Science.gov (United States)

    Aithal, Abhiram; Ferrante, Antonino

    2017-11-01

    In order to perform direct numerical simulations (DNS) of turbulent flows over curved surfaces and axisymmetric bodies, we have developed a numerical methodology to solve the incompressible Navier-Stokes (NS) equations in curvilinear coordinates for orthogonal meshes. The orthogonal meshes are generated by solving a coupled system of non-linear Poisson equations. The NS equations in orthogonal curvilinear coordinates are discretized in space on a staggered mesh using a second-order central-difference scheme and are solved with an FFT-based pressure-correction method. The momentum equation is integrated in time using the second-order Adams-Bashforth scheme. The velocity field is advanced in time by applying the pressure correction to the approximate velocity such that it satisfies the divergence-free condition. The novelty of the method lies in solving the variable-coefficient Poisson equation for pressure using an FFT-based Poisson solver rather than slower multigrid methods. We present verification and validation results for the new numerical method and DNS results for transitional flow over a curved axisymmetric body.
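
    The kernel of such a projection method is a pressure Poisson solve. With periodic boundaries this reduces to one division per Fourier mode, as the hedged constant-coefficient sketch below shows; the variable-coefficient curvilinear case treated in the paper wraps solves of this kind inside its correction scheme.

```python
import numpy as np

def fft_poisson_2d(f, Lx=2*np.pi, Ly=2*np.pi):
    """Solve lap(p) = f on a periodic box by dividing Fourier modes
    by -(kx^2 + ky^2)."""
    ny, nx = f.shape
    kx = 2*np.pi*np.fft.fftfreq(nx, d=Lx/nx)
    ky = 2*np.pi*np.fft.fftfreq(ny, d=Ly/ny)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                      # avoid dividing the mean mode by zero
    p_hat = np.fft.fft2(f) / (-k2)
    p_hat[0, 0] = 0.0                   # pin the undetermined mean pressure
    return np.real(np.fft.ifft2(p_hat))

# Verify against an analytic solution: lap(sin x sin y) = -2 sin x sin y.
x = np.linspace(0, 2*np.pi, 64, endpoint=False)
X, Y = np.meshgrid(x, x)
p = fft_poisson_2d(-2.0*np.sin(X)*np.sin(Y))
print(np.max(np.abs(p - np.sin(X)*np.sin(Y))))   # ~1e-15 (spectral accuracy)
```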

  1. Covariate measurement error correction methods in mediation analysis with failure time data.

    Science.gov (United States)

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  2. Methods for the correction of vascular artifacts in PET O-15 water brain-mapping studies

    Science.gov (United States)

    Chen, Kewei; Reiman, E. M.; Lawson, M.; Yun, Lang-sheng; Bandy, D.; Palant, A.

    1996-12-01

    While positron emission tomographic (PET) measurements of regional cerebral blood flow (rCBF) can be used to map brain regions that are involved in normal and pathological human behaviors, measurements in the anteromedial temporal lobe can be confounded by the combined effects of radiotracer activity in neighboring arteries and partial-volume averaging. The authors now describe two simple methods to address this vascular artifact. One method utilizes the early frames of a dynamic PET study, while the other method utilizes a coregistered magnetic resonance image (MRI) to characterize the vascular region of interest (VROI). Both methods subsequently assign a common value to each pixel in the VROI for the control (baseline) scan and the activation scan. To study the vascular artifact and to demonstrate the ability of the proposed methods to correct it, four dynamic PET scans were performed in a single subject during the same behavioral state. For each of the four scans, a vascular scan containing vascular activity was computed as the summation of the images acquired 0-60 s after radiotracer administration, and a control scan containing minimal vascular activity was computed as the summation of the images acquired 20-80 s after radiotracer administration. t-score maps calculated from the four pairs of vascular and control scans were used to characterize regional blood flow differences related to vascular activity before and after the application of each vascular artifact correction method. Both methods eliminated the observed differences in vascular activity, as well as the vascular artifact observed in the anteromedial temporal lobes. Using PET data from a study of normal human emotion, these methods permitted the authors to identify rCBF increases in the anteromedial temporal lobe free from the potentially confounding, combined effects of vascular activity and partial-volume averaging.
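
    The correction step itself is a simple masking operation, sketched below under the stated assumption that one common value is assigned to all VROI voxels in both scans; the array shapes and the VROI are purely illustrative.

```python
import numpy as np

def neutralize_vroi(control, activation, vroi_mask):
    """Return copies of both scans with the VROI set to a common value, so
    vascular counts cannot drive control-vs-activation differences."""
    common = control[vroi_mask].mean()   # one shared value for all VROI voxels
    ctrl, act = control.copy(), activation.copy()
    ctrl[vroi_mask] = common
    act[vroi_mask] = common
    return ctrl, act

rng = np.random.default_rng(5)
control, activation = rng.random((2, 32, 32, 16))
vroi = np.zeros((32, 32, 16), dtype=bool)
vroi[10:14, 10:14, 4:8] = True           # VROI from early frames or MRI
ctrl, act = neutralize_vroi(control, activation, vroi)
print(np.allclose(ctrl[vroi], act[vroi]))  # True: no vascular difference left
```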

  3. Methods for the correction of vascular artifacts in PET O-15 water brain-mapping studies

    International Nuclear Information System (INIS)

    Chen, K.; Reiman, E.M.; Good Samaritan Regional Medical Center, Phoenix, AZ; Lawson, M.; Yun, L.S.; Bandy, D.

    1996-01-01

    While positron emission tomographic (PET) measurements of regional cerebral blood flow (rCBF) can be used to map brain regions that are involved in normal and pathological human behaviors, measurements in the anteromedial temporal lobe can be confounded by the combined effects of radiotracer activity in neighboring arteries and partial-volume averaging. The authors now describe two simple methods to address this vascular artifact. One method utilizes the early frames of a dynamic PET study, while the other method utilizes a coregistered magnetic resonance image (MRI) to characterize the vascular region of interest (VROI). Both methods subsequently assign a common value to each pixel in the VROI for the control scan and the activation scan. To study the vascular artifact and to demonstrate the ability of the proposed methods to correct it, four dynamic PET scans were performed in a single subject during the same behavioral state. For each of the four scans, a vascular scan containing vascular activity was computed as the summation of the images acquired 0--60 s after radiotracer administration, and a control scan containing minimal vascular activity was computed as the summation of the images acquired 20--80 s after radiotracer administration. t-score maps calculated from the four pairs of vascular and control scans were used to characterize regional blood flow differences related to vascular activity before and after the application of each vascular artifact correction method. Both methods eliminated the observed differences in vascular activity, as well as the vascular artifact observed in the anteromedial temporal lobes. Using PET data from a study of normal human emotion, these methods permitted us to identify rCBF increases in the anteromedial temporal lobe free from the potentially confounding, combined effects of vascular activity and partial-volume averaging

  4. Water saturation in shaly sands: logging parameters from log-derived values

    International Nuclear Information System (INIS)

    Miyairi, M.; Itoh, T.; Okabe, F.

    1976-01-01

    Methods are presented for determining the relation of porosity to formation factor and that of true formation resistivity to water saturation, investigated through the log interpretation of one of the oil and gas fields of the northern Japan Sea. The values of the coefficients "a" and "m" in the porosity-formation factor relation are derived from a cross-plot of porosity and formation resistivity corrected for clay content. The saturation exponent "n" is determined from a cross-plot of porosity and resistivity index on the assumption that the product of porosity and irreducible water saturation is constant. The relation of porosity to irreducible water saturation is also investigated from core analysis. The new logging parameters determined from these methods, a = 1, m = 2, n = 1.4, improved the values of water saturation by 6 percent on average, and made it easy to distinguish the points belonging to the productive zone from those belonging to the nonproductive zone
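
    With these parameters, water saturation follows from the standard Archie relation S_w = [(a / phi^m) · (R_w / R_t)]^(1/n). A minimal sketch using the reported a = 1, m = 2, n = 1.4; the input values in the example are illustrative, not from the paper.

```python
def water_saturation(phi, rw, rt, a=1.0, m=2.0, n=1.4):
    """Archie water saturation from porosity phi (fraction), formation-water
    resistivity rw (ohm-m), and clay-corrected true resistivity rt (ohm-m)."""
    return ((a / phi**m) * (rw / rt)) ** (1.0 / n)

# Illustrative inputs: 25% porosity, Rw = 0.05 ohm-m, Rt = 8 ohm-m.
print(water_saturation(phi=0.25, rw=0.05, rt=8.0))  # ~0.19 of pore volume
```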

  5. Evaluation of the ICS and DEW scatter correction methods for low statistical content scans in 3D PET

    International Nuclear Information System (INIS)

    Sossi, V.; Oakes, T.R.; Ruth, T.J.

    1996-01-01

    The performance of the Integral Convolution and the Dual Energy Window scatter correction methods in 3D PET has been evaluated over a wide range of statistical content of acquired data (1M to 400M events). The order in which scatter correction and detector normalization should be applied has also been investigated. Phantom and human neuroreceptor studies were used with the following figures of merit: axial and radial uniformity, sinogram and image noise, contrast accuracy and contrast accuracy uniformity. Both scatter correction methods perform reliably over the range of number of events examined. Normalization applied after scatter correction yields better radial uniformity and fewer image artifacts

  6. An improved bias correction method of daily rainfall data using a sliding window technique for climate change impact assessment

    Science.gov (United States)

    Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.

    2018-01-01

    Regional climate models (RCMs) are used to downscale coarse-resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from observed climatological data, and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods, and is tested on six watersheds in different climatic zones of India to assess the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data were used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed for evaluation of the different bias correction methods. The analysis suggests that the proposed method effectively corrects the daily bias in rainfall compared to using monthly factors. Methods such as local intensity scaling, modified power transformation and distribution mapping, which adjust the wet-day frequencies, performed better than the methods that do not consider wet-day frequency adjustment. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of observed data with NSE values above 0.81 over most parts of India.
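
    The sliding-window idea can be sketched for the simplest of the five methods (linear scaling): the correction factor for each calendar day is the ratio of observed to modeled mean rainfall over a ±15-day window pooled across all years, rather than one factor per calendar month. The data below are synthetic and purely illustrative.

```python
import numpy as np

def daily_scaling_factors(obs, mod, doy, half_window=15):
    """obs, mod: daily rainfall series; doy: day-of-year (1..365) per entry.
    Returns one multiplicative factor per calendar day."""
    factors = np.ones(366)
    for d in range(1, 367):
        # circular +/- half_window day window around calendar day d
        dist = np.minimum(np.abs(doy - d), 365 - np.abs(doy - d))
        sel = dist <= half_window
        if mod[sel].mean() > 0:
            factors[d - 1] = obs[sel].mean() / mod[sel].mean()
    return factors

rng = np.random.default_rng(6)
doy = np.tile(np.arange(1, 366), 20)                     # 20 years of days
obs = rng.gamma(2.0, 4.0, doy.size) * (1 + 0.5*np.sin(2*np.pi*doy/365))
mod = 0.8 * obs + rng.gamma(1.0, 1.0, doy.size)          # biased "model"
f = daily_scaling_factors(obs, mod, doy)
corrected = mod * f[doy - 1]
print(obs.mean(), mod.mean(), corrected.mean())          # bias largely removed
```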

  7. Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods

    International Nuclear Information System (INIS)

    Narita, Y.; Eberl, S.; Nakamura, T.

    1996-01-01

    Two independent scatter correction techniques, transmission-dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter and scatter plus primary) were simulated for 99mTc and 201Tl for numerical chest phantoms. Data were reconstructed with an ordered-subset ML-EM algorithm including attenuation correction using the transmission data. In the chest phantom simulation, TDCS provided better S/N than TEW, and better accuracy, i.e., 1.0% vs -7.2% in the myocardium, and -3.7% vs -30.1% in the ventricular chamber for 99mTc with TDCS and TEW, respectively. For 201Tl, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT
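
    For reference, the TEW estimate evaluated above approximates the scatter in the main window by trapezoidal interpolation from two narrow flanking windows. A hedged sketch with typical, not paper-specific, window widths:

```python
def tew_scatter(c_low, c_high, w_low=3.0, w_high=3.0, w_main=20.0):
    """Triple-energy-window scatter estimate for the main window:
    S = (C_low / W_low + C_high / W_high) / 2 * W_main,
    with counts C in each flanking sub-window and widths W in keV."""
    return (c_low / w_low + c_high / w_high) / 2.0 * w_main

# Example: subtract the scatter estimate from the main-window counts.
main_counts = 1.0e5
primary_estimate = main_counts - tew_scatter(c_low=900.0, c_high=300.0)
print(primary_estimate)
```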

  8. Methods for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry

    Science.gov (United States)

    Chan, George C. Y. (Bloomington, IN); Hieftje, Gary M. (Bloomington, IN)

    2010-08-03

    A method for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry (ICP-AES). ICP-AES analysis is performed across a plurality of selected locations in the plasma on an unknown sample, collecting the light intensity at one or more selected wavelengths of one or more sought-for analytes, creating a first dataset. The first dataset is then calibrated with a calibration dataset, creating a calibrated first dataset curve. If the calibrated first dataset curve varies with location in the plasma for a selected wavelength, errors are present. Plasma-related errors are then corrected by diluting the unknown sample and performing the same ICP-AES analysis on the diluted unknown sample, creating a calibrated second dataset curve (accounting for the dilution) for the one or more sought-for analytes. The cross-over point of the calibrated dataset curves yields the corrected value (free from plasma-related errors) for each sought-for analyte.

  9. A neural network method to correct bidirectional effects in water-leaving radiance

    Science.gov (United States)

    Fan, Yongzhen; Li, Wei; Voss, Kenneth J.; Gatebe, Charles K.; Stamnes, Knut

    2017-02-01

    The standard method to convert the measured water-leaving radiances from the observation direction to the nadir direction developed by Morel and coworkers requires knowledge of the chlorophyll concentration (CHL). Also, the standard method was developed for open ocean water, which makes it unsuitable for turbid coastal waters. We introduce a neural network method to convert the water-leaving radiance (or the corresponding remote sensing reflectance) from the observation direction to the nadir direction. This method does not require any prior knowledge of the water constituents or the inherent optical properties (IOPs). This method is fast, accurate and can be easily adapted to different remote sensing instruments. Validation using NuRADS measurements in different types of water shows that this method is suitable for both open ocean and coastal waters. In open ocean or chlorophyll-dominated waters, our neural network method produces corrections similar to those of the standard method. In turbid coastal waters, especially sediment-dominated waters, a significant improvement was obtained compared to the standard method.

  10. Effects of Atmospheric Refraction on an Airborne Weather Radar Detection and Correction Method

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2015-01-01

    Full Text Available This study investigates the effect of atmospheric refraction, affected by temperature, atmospheric pressure, and humidity, on airborne weather radar beam paths. Using three types of typical atmospheric background sounding data, we established a simulation model for an actual transmission path and a fitted correction path of an airborne weather radar beam during airplane take-offs and landings based on initial flight parameters and X-band airborne phased-array weather radar parameters. Errors in an ideal electromagnetic beam propagation path are much greater than those of a fitted path when atmospheric refraction is not considered. The rates of change in the atmospheric refraction index differ with weather conditions and the radar detection angles differ during airplane take-off and landing. Therefore, the airborne radar detection path must be revised in real time according to the specific sounding data and flight parameters. However, an error analysis indicates that a direct linear-fitting method produces significant errors in a negatively refractive atmosphere; a piecewise-fitting method can be adopted to revise the paths according to the actual atmospheric structure. This study provides researchers and practitioners in the aeronautics and astronautics field with updated information regarding the effect of atmospheric refraction on airborne weather radar detection and correction methods.
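
    The standard alternative to full ray tracing is the effective-earth-radius model, in which a beam launched at elevation angle theta reaches height h = sqrt(r^2 + (ka)^2 + 2·r·ka·sin(theta)) − ka, where k depends on the refractivity gradient (k = 4/3 for a standard atmosphere). The sketch below uses this textbook model to show how the assumed k moves the beam; it is not the paper's sounding-based piecewise-fitting procedure.

```python
import numpy as np

A_EARTH = 6.371e6  # mean Earth radius, m

def beam_height(r, theta_deg, k=4.0/3.0, h0=0.0):
    """Beam height (m) above ground after slant range r (m), for launch
    elevation theta_deg and effective-earth-radius factor k."""
    ka = k * A_EARTH
    theta = np.radians(theta_deg)
    return np.sqrt(r**2 + ka**2 + 2.0*r*ka*np.sin(theta)) - ka + h0

# Same beam under standard (k = 4/3) and sub-refractive (smaller k) air:
for k in (4.0/3.0, 1.2):
    print(k, beam_height(60e3, 0.5, k=k))  # heights differ by tens of meters
```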

  11. METHOD OF RADIOMETRIC DISTORTION CORRECTION OF MULTISPECTRAL DATA FOR THE EARTH REMOTE SENSING

    Directory of Open Access Journals (Sweden)

    A. N. Grigoriev

    2015-07-01

    Full Text Available The paper deals with technologies for ground-based secondary processing of heterogeneous multispectral data. The factors producing heterogeneous data include uneven illumination of objects on the Earth's surface caused by different properties of the relief. A procedure for the restoration of spectral-channel images by means of terrain distortion compensation is developed. The purpose of this paper is to improve the quality of the results of image restoration for areas with large and medium landforms. Methods. The research is based on elements of digital image processing theory, statistical processing of observation results and the theory of multi-dimensional arrays. Main Results. The author has introduced operations on multidimensional arrays: concatenation and elementwise division. An extended model description for input data about the area is given. The model contains all data necessary for image restoration. A correction method for radiometric distortions of multispectral Earth remote sensing data has been developed. The method consists of two phases: construction of empirical dependences of spectral reflectance on the relief properties, and restoration of spectral images according to semiempirical data. Practical Relevance. The research novelty lies in the development of the application theory of multidimensional arrays with respect to the processing of multispectral data, together with data on the topography and terrain objects. The results are usable for the development of radiometric data correction tools. Processing is performed on the basis of a digital terrain model without carrying out ground work connected with investigation of the objects' reflective properties.
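
    As a stand-in for the paper's semiempirical scheme, the widely used C-correction illustrates the same two-phase logic: regress radiance on the cosine of the local solar incidence angle (the empirical dependence), then rescale each pixel (the restoration). All data below are synthetic and the technique is named explicitly as a substitute, not the author's method.

```python
import numpy as np

def c_correction(radiance, cos_i, cos_sz):
    """Teillet C-correction. radiance: band image; cos_i: per-pixel cosine of
    the local solar incidence angle (from a DEM); cos_sz: cosine of the solar
    zenith angle (flat-terrain reference)."""
    b, a = np.polyfit(cos_i.ravel(), radiance.ravel(), 1)  # L = a + b*cos_i
    c = a / b
    return radiance * (cos_sz + c) / (cos_i + c)

rng = np.random.default_rng(7)
cos_i = np.clip(rng.normal(0.7, 0.15, (128, 128)), 0.05, 1.0)
radiance = 20 + 80 * cos_i + rng.normal(0, 2, (128, 128))  # toy band image
corrected = c_correction(radiance, cos_i, cos_sz=0.8)
# After correction the terrain-driven dependence is removed:
print(np.corrcoef(corrected.ravel(), cos_i.ravel())[0, 1])  # ~0
```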

  12. An Enhanced VOF Method Coupled with Heat Transfer and Phase Change to Characterise Bubble Detachment in Saturated Pool Boiling

    Directory of Open Access Journals (Sweden)

    Anastasios Georgoulas

    2017-02-01

    Full Text Available The present numerical investigation identifies quantitative effects of fundamental controlling parameters on the detachment characteristics of isolated bubbles in cases of pool boiling in the nucleate boiling regime. For this purpose, an improved Volume of Fluid (VOF) approach, developed previously in the general framework of the OpenFOAM Computational Fluid Dynamics (CFD) Toolbox, is further coupled with heat transfer and phase change. The predictions of the model are quantitatively verified against an existing analytical solution and experimental data in the literature. Following the model validation, four different series of parametric numerical experiments are performed, exploring the effect of the initial thermal boundary layer (ITBL) thickness for the case of saturated pool boiling of R113, as well as the effects of surface wettability, wall superheat and gravity level for the cases of the R113, R22 and R134a refrigerants. It is confirmed that the ITBL is a very important parameter in the bubble growth and detachment process. Furthermore, for all of the examined working fluids the bubble detachment characteristics are significantly affected by the triple-line contact angle (i.e., the wettability of the heated plate) for equilibrium contact angles higher than 45°. As expected, the simulations revealed that the heated wall superheat is very influential on the bubble growth and detachment process. Finally, besides the novelty of the numerical approach, a further finding is that the effect of gravity level variation on the bubble detachment time and volume diminishes with increasing ambient pressure.

  13. Temperature effects on pitfall catches of epigeal arthropods: a model and method for bias correction.

    Science.gov (United States)

    Saska, Pavel; van der Werf, Wopke; Hemerik, Lia; Luff, Martin L; Hatten, Timothy D; Honek, Alois; Pocock, Michael

    2013-02-01

    Carabids and other epigeal arthropods make important contributions to biodiversity, food webs and biocontrol of invertebrate pests and weeds. Pitfall trapping is widely used for sampling carabid populations, but this technique yields biased estimates of abundance ('activity-density') because individual activity - which is affected by climatic factors - affects the rate of catch. To date, the impact of temperature on pitfall catches, while suspected to be large, has not been quantified, and no method is available to account for it. This lack of knowledge and the unavailability of a method for bias correction affect the confidence that can be placed on results of ecological field studies based on pitfall data. Here, we develop a simple model for the effect of temperature, assuming a constant proportional change in the rate of catch per °C change in temperature, r, consistent with an exponential Q10 response to temperature. We fit this model to 38 time series of pitfall catches and accompanying temperature records from the literature, using first differences and other detrending methods to account for seasonality. We use meta-analysis to assess the consistency of the estimated parameter r among studies. The mean rate of increase in total catch across data sets was 0.0863 ± 0.0058 per °C of maximum temperature and 0.0497 ± 0.0107 per °C of minimum temperature. Multiple regression analyses of 19 data sets showed that temperature is the key climatic variable affecting total catch. Relationships between temperature and catch were also identified at the species level. Correction for temperature bias had substantial effects on seasonal trends of carabid catches. Synthesis and Applications. The effect of temperature on pitfall catches is shown here to be substantial and worthy of consideration when interpreting results of pitfall trapping. The exponential model can be used both for effect estimation and for bias correction of observed data.
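
    In practice, the exponential model yields a one-line correction: an observed catch at temperature T is standardized to a reference temperature T_ref by C_corr = C_obs · exp(−r·(T − T_ref)). The sketch below uses the reported mean r for maximum temperature; the catch data and the choice of T_ref are illustrative.

```python
import numpy as np

def correct_catch(catch, t_max, t_ref=20.0, r=0.0863):
    """Standardize pitfall catches to a common maximum temperature t_ref,
    assuming catch rate scales as exp(r * T)."""
    return np.asarray(catch, dtype=float) * np.exp(
        -r * (np.asarray(t_max, dtype=float) - t_ref))

catches = [12, 30, 8]           # trap totals per sampling interval
t_max = [18.0, 27.0, 15.0]      # maximum temperature per interval, deg C
print(correct_catch(catches, t_max))  # warm-period catches are scaled down
```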

  14. A method to correct sampling ghosts in historic near-infrared Fourier transform spectrometer (FTS) measurements

    Science.gov (United States)

    Dohe, S.; Sherlock, V.; Hase, F.; Gisi, M.; Robinson, J.; Sepúlveda, E.; Schneider, M.; Blumenstock, T.

    2013-08-01

    The Total Carbon Column Observing Network (TCCON) has been established to provide ground-based remote sensing measurements of the column-averaged dry air mole fractions (DMF) of key greenhouse gases. To ensure network-wide consistency, biases between Fourier transform spectrometers at different sites have to be well controlled. Errors in interferogram sampling can introduce significant biases in retrievals. In this study we investigate a two-step scheme to correct these errors. In the first step the laser sampling error (LSE) is estimated by determining the sampling shift which minimises the magnitude of the signal intensity in selected, fully absorbed regions of the solar spectrum. The LSE is estimated for every day with measurements which meet certain selection criteria to derive the site-specific time series of the LSEs. In the second step, this sequence of LSEs is used to resample all the interferograms acquired at the site, and hence correct the sampling errors. Measurements acquired at the Izaña and Lauder TCCON sites are used to demonstrate the method. At both sites the sampling error histories show changes in LSE due to instrument interventions (e.g. realignment). Estimated LSEs are in good agreement with sampling errors inferred from the ratio of primary and ghost spectral signatures in optically bandpass-limited tungsten lamp spectra acquired at Lauder. The original time series of Xair and XCO2 (XY: column-averaged DMF of the target gas Y) at both sites show discrepancies of 0.2-0.5% due to changes in the LSE associated with instrument interventions or changes in the measurement sample rate. After resampling, discrepancies are reduced to 0.1% or less at Lauder and 0.2% at Izaña. In the latter case, coincident changes in interferometer alignment may also have contributed to the residual difference. In the future the proposed method will be used to correct historical spectra at all TCCON sites.
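
    The two-step scheme lends itself to a compact numerical illustration. The sketch below rests on stated assumptions (a brute-force sub-sample shift grid, linear interpolation for resampling, and a caller-supplied boolean mask marking fully absorbed spectral bins); it mimics the idea of choosing the shift that minimises spectral magnitude in an absorbed band, and is not the TCCON processing code.

```python
import numpy as np

def estimate_lse(ifg, absorbed_band, shifts=np.linspace(-0.5, 0.5, 201)):
    """Step 1: grid-search the sampling shift (in samples) that
    minimises the mean spectral magnitude inside a fully absorbed
    band. `absorbed_band` is a boolean mask over the rfft bins
    (instrument-specific and assumed known here)."""
    grid = np.arange(len(ifg))
    best_shift, best_cost = 0.0, np.inf
    for s in shifts:
        resampled = np.interp(grid, grid + s, ifg)  # linear resampling
        cost = np.abs(np.fft.rfft(resampled))[absorbed_band].mean()
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

def resample_ifg(ifg, shift):
    """Step 2: resample the interferogram with the estimated LSE."""
    grid = np.arange(len(ifg))
    return np.interp(grid, grid + shift, ifg)
```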

  15. A method to correct sampling ghosts in historic near-infrared Fourier transform spectrometer (FTS) measurements

    Directory of Open Access Journals (Sweden)

    S. Dohe

    2013-08-01

    Full Text Available The Total Carbon Column Observing Network (TCCON) has been established to provide ground-based remote sensing measurements of the column-averaged dry air mole fractions (DMF) of key greenhouse gases. To ensure network-wide consistency, biases between Fourier transform spectrometers at different sites have to be well controlled. Errors in interferogram sampling can introduce significant biases in retrievals. In this study we investigate a two-step scheme to correct these errors. In the first step the laser sampling error (LSE) is estimated by determining the sampling shift which minimises the magnitude of the signal intensity in selected, fully absorbed regions of the solar spectrum. The LSE is estimated for every day with measurements which meet certain selection criteria to derive the site-specific time series of the LSEs. In the second step, this sequence of LSEs is used to resample all the interferograms acquired at the site, and hence correct the sampling errors. Measurements acquired at the Izaña and Lauder TCCON sites are used to demonstrate the method. At both sites the sampling error histories show changes in LSE due to instrument interventions (e.g. realignment). Estimated LSEs are in good agreement with sampling errors inferred from the ratio of primary and ghost spectral signatures in optically bandpass-limited tungsten lamp spectra acquired at Lauder. The original time series of Xair and XCO2 (XY: column-averaged DMF of the target gas Y) at both sites show discrepancies of 0.2–0.5% due to changes in the LSE associated with instrument interventions or changes in the measurement sample rate. After resampling, discrepancies are reduced to 0.1% or less at Lauder and 0.2% at Izaña. In the latter case, coincident changes in interferometer alignment may also have contributed to the residual difference. In the future the proposed method will be used to correct historical spectra at all TCCON sites.

  16. Optimal correction and design parameter search by modern methods of rigorous global optimization

    International Nuclear Information System (INIS)

    Makino, K.; Berz, M.

    2011-01-01

    Frequently the design of schemes for the correction of aberrations, or the determination of possible operating ranges for beamlines and cells in synchrotrons, exhibits multitudes of possibilities for their correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, an abundance of optimization runs is frequently carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners to adjust nonlinear parameters to achieve correction of high order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been a common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and in using the underestimators to rigorously and iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle
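
    To make the branch-and-bound idea concrete, here is a self-contained, simplified one-dimensional sketch that uses natural interval extensions as the rigorous underestimator; the test function and tolerance are arbitrary choices, and real optics-design problems would replace them with Differential Algebraic/Taylor model bounds.

```python
import heapq

def iadd(a, b): return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    p = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(p), max(p))

def isq(a):
    """Tight interval square (handles intervals containing zero)."""
    lo, hi = sorted((abs(a[0]), abs(a[1])))
    return (0.0 if a[0] <= 0.0 <= a[1] else lo*lo, hi*hi)

def f(x):
    return x**4 - 4.0*x**2 + x          # toy objective

def F(x):
    """Natural interval extension of f: rigorous lower/upper bounds."""
    x2 = isq(x)
    return iadd(iadd(isq(x2), imul((-4.0, -4.0), x2)), x)

def branch_and_bound(lo, hi, tol=1e-6):
    """Keep boxes whose rigorous lower bound beats the best verified
    upper bound; bisect until the best box is narrower than tol."""
    best_ub = min(f(lo), f(hi), f(0.5*(lo + hi)))
    heap = [(F((lo, hi))[0], lo, hi)]
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb > best_ub:                 # box cannot hold the minimum
            continue
        m = 0.5*(a + b)
        best_ub = min(best_ub, f(m))     # sample point -> upper bound
        if b - a < tol:
            return (a, b), best_ub
        for c, d in ((a, m), (m, b)):
            blb = F((c, d))[0]
            if blb <= best_ub:
                heapq.heappush(heap, (blb, c, d))

print(branch_and_bound(-3.0, 3.0))       # minimum near x ~ -1.45
```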

  17. Attenuation correction for renal scintigraphy with 99mTc-DMSA: comparison between Raynaud and the geometric mean methods

    International Nuclear Information System (INIS)

    Argenta, J.; Brambilla, C.R.; Marques da Silva, A.M.

    2009-01-01

    The evaluation of the index of renal function (IF) requires soft-tissue attenuation correction. This paper investigates the impact on the IF when attenuation correction is applied using the Raynaud method and the geometric mean method in renal planar scintigraphy, using posterior and anterior views. The study was conducted with Monte Carlo simulated images of five GSF family voxel phantoms with different relative uptakes in each kidney, from normal (50%-50%) to pathological (10%-90%). The results showed that the Raynaud method corrects more efficiently the cases where the renal depth is close to the value of the standard phantom. The geometric mean method showed results similar to the Raynaud method for Baby, Child and Golem. For the Helga and Donna models, the errors were above 20%, increasing with relative uptake. Further studies should be conducted to assess the influence of the standard phantom on the attenuation correction methods. (author)
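
    For readers unfamiliar with the geometric mean method compared here, the following sketch shows the standard conjugate-view calculation (the Raynaud method, which relies on renal depth estimates, is not reproduced). The attenuation coefficient, body thickness and camera sensitivity values are illustrative assumptions, not values from the study.

```python
import numpy as np

def conjugate_view_activity(counts_ant, counts_post, mu, thickness,
                            sensitivity):
    """Conjugate-view (geometric mean) quantification. For a source at
    depth d in tissue of thickness T: I_ant = I0*exp(-mu*d) and
    I_post = I0*exp(-mu*(T - d)), so sqrt(I_ant * I_post) =
    I0*exp(-mu*T/2) is independent of depth. `sensitivity` converts
    attenuation-corrected counts to activity."""
    gm = np.sqrt(counts_ant * counts_post)
    return gm * np.exp(mu * thickness / 2.0) / sensitivity

# mu ~ 0.153 cm^-1 for 140 keV (99mTc) in soft tissue, T = 18 cm,
# sensitivity in counts per unit activity (arbitrary units here):
print(conjugate_view_activity(5.2e4, 3.1e4, 0.153, 18.0, 90.0))
```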

  18. Attenuation correction for renal scintigraphy with 99mTc-DMSA: analysis between Raynaud and the geometric mean methods

    International Nuclear Information System (INIS)

    Argenta, Jackson; Brambilla, Claudia R.; Silva, Ana Maria M. da

    2010-01-01

    The evaluation of the index of renal function (IF) requires soft-tissue attenuation correction. This paper investigates the impact on the IF when attenuation correction is applied using the Raynaud method and the Geometric Mean method in renal planar scintigraphy, using posterior and anterior views. The study was conducted with Monte Carlo simulated images of five GSF family voxel phantoms with different relative uptakes in each kidney, from normal (50%-50%) to pathological (10%-90%). The results showed that the Raynaud method corrects more efficiently the cases where the renal depth is close to the value of the standard phantom. The Geometric Mean method showed results similar to the Raynaud method for Baby, Child and Golem. For the Helga and Donna models, the errors were above 20%, increasing with relative uptake. Further studies should be conducted to assess the influence of the standard phantom on the attenuation correction methods. (author)

  19. Development of a new technique for breast attenuation correction in myocardial perfusion scintigraphy using computational methods

    International Nuclear Information System (INIS)

    Oliveira, Anderson de

    2015-01-01

    Introduction: One of the limitations of nuclear medicine studies is false-positive results, which lead to unnecessary exams and procedures associated with morbidity and costs to the individual and society. One of the most frequent causes of reduced specificity in myocardial perfusion imaging (MPI) is photon attenuation, especially by the breasts in women. Objective: To develop a new technique to compensate for photon attenuation by women's breasts in myocardial perfusion imaging with 99mTc-sestamibi, using computational methods. Materials and methods: A procedure was proposed which integrates Monte Carlo simulation, computational methods and experimental techniques. Initially, the chest attenuation correction percentages were obtained using a Jaszczak phantom, and the breast attenuation percentages by the Monte Carlo simulation method, using the EGS4 program. The percentages of attenuation correction were linked to individual patients' characteristics by an artificial neural network and a multivariate analysis. A preliminary technical validation was done by comparing the results of MPI and catheterism (CAT), before and after applying the technique to 4 patients. The t test for parametric data, and the Wilcoxon, Mann-Whitney and χ² tests for the others, were used. Probability values less than 0.05 were considered statistically significant. Results: Each increment of 1 cm in breast thickness was associated with an average increment of 6% in photon attenuation, while the maximum increase related to breast composition was about 2%. The average chest attenuation percentage per unit was 2.9%. Both the artificial neural network and linear regression showed an error of less than 3% as predictive models for the percentage of female attenuation. The anatomical-functional correlation between MPI and CAT was maintained after the use of the technique. Conclusion: Results suggest that the proposed technique is promising and could be a possible alternative to other conventional methods employed

  20. pH-metric solubility. 2: correlation between the acid-base titration and the saturation shake-flask solubility-pH methods.

    Science.gov (United States)

    Avdeef, A; Berger, C M; Brownell, C

    2000-01-01

    The objective of this study was to compare the results of a normal saturation shake-flask method to a new potentiometric acid-base titration method for determining the intrinsic solubility and the solubility-pH profiles of ionizable molecules, and to report the solubility constants determined by the latter technique. The solubility-pH profiles of twelve generic drugs (atenolol, diclofenac.Na, famotidine, flurbiprofen, furosemide, hydrochlorothiazide, ibuprofen, ketoprofen, labetolol.HCl, naproxen, phenytoin, and propranolol.HCl), with solubilities spanning over six orders of magnitude, were determined both by the new pH-metric method and by a traditional approach (24 hr shaking of saturated solutions, followed by filtration, then HPLC assaying with UV detection). The 212 separate saturation shake-flask solubility measurements and those derived from 65 potentiometric titrations agreed well. The analysis produced the correlation equation: log(1/S)_titration = -0.063(±0.032) + 1.025(±0.011) log(1/S)_shake-flask, s = 0.20, r² = 0.978. The potentiometrically derived intrinsic solubilities of the drugs were: atenolol 13.5 mg/mL, diclofenac.Na 0.82 µg/mL, famotidine 1.1 mg/mL, flurbiprofen 10.6 µg/mL, furosemide 5.9 µg/mL, hydrochlorothiazide 0.70 mg/mL, ibuprofen 49 µg/mL, ketoprofen 118 µg/mL, labetolol.HCl 128 µg/mL, naproxen 14 µg/mL, phenytoin 19 µg/mL, and propranolol.HCl 70 µg/mL. The new potentiometric method was shown to be reliable for determining the solubility-pH profiles of uncharged ionizable drug substances. Its speed compared to conventional equilibrium measurements, its sound theoretical basis, its ability to generate the full solubility-pH profile from a single titration, and its dynamic range (currently estimated to be seven orders of magnitude) make the new pH-metric method an attractive addition to the traditional approaches used by preformulation and development scientists. It may be useful even to discovery
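
    The solubility-pH profile that both methods aim to recover follows, for a monoprotic weak acid, directly from the Henderson-Hasselbalch relation. A small sketch, in which only the intrinsic solubility value is taken from the abstract and the pKa is a literature-style value assumed for illustration:

```python
import numpy as np

def acid_solubility(pH, s0, pKa):
    """Monoprotic weak-acid solubility-pH profile:
    S = S0 * (1 + 10**(pH - pKa)), where S0 is the intrinsic
    (uncharged-form) solubility that both methods estimate."""
    return s0 * (1.0 + 10.0 ** (np.asarray(pH, dtype=float) - pKa))

# S0 = 49 ug/mL = 0.049 mg/mL for ibuprofen (from the abstract);
# pKa = 4.45 is an assumed illustrative value.
for ph in (2.0, 4.45, 7.4):
    print(ph, acid_solubility(ph, 0.049, 4.45), "mg/mL")
```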

  1. 76 FR 53819 - Methods of Accounting Used by Corporations That Acquire the Assets of Other Corporations; Correction

    Science.gov (United States)

    2011-08-30

    ... of Accounting Used by Corporations That Acquire the Assets of Other Corporations; Correction AGENCY... describes corrections to final regulations (TD 9534) relating to the methods of accounting, including the... corporate reorganizations and tax-free liquidations. These regulations were published in the Federal...

  2. A numerical method for determining the radial wave motion correction in plane wave couplers

    DEFF Research Database (Denmark)

    Cutanda Henriquez, Vicente; Barrera Figueroa, Salvador; Torras Rosell, Antoni

    2016-01-01

    Microphones are used for realising the unit of sound pressure level, the pascal (Pa). Electro-acoustic reciprocity is the preferred method for the absolute determination of the sensitivity. This method can be applied in different sound fields: uniform pressure, free field or diffuse field. Pressure...... solution is an analytical expression that estimates the difference between the ideal plane wave sound field and a more complex lossless sound field created by a non-planar movement of the microphone’s membranes. Alternatively, a correction may be calculated numerically by introducing a full model...... of the microphone-coupler system in a Boundary Element formulation. In order to obtain a realistic representation of the sound field, viscous losses must be introduced in the model. This paper presents such a model, and the results of the simulations for different combinations of microphones and couplers...

  3. Evaluation of Machine Learning Methods for LHC Optics Measurements and Corrections Software

    CERN Document Server

    AUTHOR|(CDS)2206853; Henning, Peter

    The field of artificial intelligence is driven by the goal of providing machines with human-like intelligence. However, modern science is currently facing problems of such high complexity that they cannot be solved by humans on the same timescale as by machines. There is therefore a demand for the automation of complex tasks. Identifying the category of tasks which can be performed by machines in the domain of optics measurements and correction on the Large Hadron Collider (LHC) is one of the central research subjects of this thesis. Applications of machine learning methods and concepts of artificial intelligence can be found in various industrial and scientific branches. In High Energy Physics these concepts are mostly used in the offline analysis of experimental data and to perform regression tasks. In Accelerator Physics the machine learning approach has not yet found wide application, so potential tasks for machine learning solutions can be specified in this domain. The appropriate methods and their suitability for...

  4. Correction method for critical extrapolation of control-rods-rising during physical start-up of reactor

    International Nuclear Information System (INIS)

    Zhang Fan; Chen Wenzhen; Yu Lei

    2008-01-01

    During the physical start-up of a nuclear reactor, the extrapolation curve obtained by lifting the control rods toward the critical state is often convex ('protruding'), which can lead to supercritical phenomena. In this paper, the reason why the curve protrudes is analysed. A correction method is introduced, and calculations are carried out with practical data from a nuclear power plant. The results show that the correction method reverses the protruding shape of the extrapolation curve, and that the risk of reactor supercriticality can be reduced by using the extrapolation curve obtained with the correction method during the physical start-up of the reactor. (authors)
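
    For context, the extrapolation in question is the classic inverse-multiplication (1/M) procedure. The sketch below illustrates it with a plain quadratic fit and made-up count rates; it is a stand-in for, not a reproduction of, the paper's correction.

```python
import numpy as np

def extrapolate_critical_position(rod_pos, count_rate, deg=2):
    """Classic inverse-multiplication (1/M) extrapolation: fit 1/M
    against rod position and return the zero crossing ahead of the
    last measurement, i.e. the predicted critical position. A plain
    quadratic fit is used here as a stand-in for the paper's
    correction of the convex ('protruding') curve."""
    inv_m = 1.0 / np.asarray(count_rate, dtype=float)
    roots = np.roots(np.polyfit(rod_pos, inv_m, deg))
    real = roots[np.isreal(roots)].real
    ahead = real[real > max(rod_pos)]      # positions not yet reached
    return ahead.min() if ahead.size else None

pos = [0, 100, 200, 300, 400]              # rod withdrawal, made-up units
rate = [50, 67, 100, 200, 1000]            # detector counts per second
print(extrapolate_critical_position(pos, rate))
```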

  5. Spectrum correction algorithm for detectors in airborne radioactivity monitoring equipment NH-UAV based on a ratio processing method

    International Nuclear Information System (INIS)

    Cao, Ye; Tang, Xiao-Bin; Wang, Peng; Meng, Jia; Huang, Xi; Wen, Liang-Sheng; Chen, Da

    2015-01-01

    The unmanned aerial vehicle (UAV) radiation monitoring method plays an important role in nuclear accident emergencies. In this research, a spectrum correction algorithm for the UAV airborne radioactivity monitoring equipment NH-UAV was studied, to measure the radioactive nuclides within a small area in real time and in a fixed place. The simulated spectra of the high-purity germanium (HPGe) detector and the lanthanum bromide (LaBr3) detector in the equipment were obtained using the Monte Carlo technique. Spectrum correction coefficients were calculated by applying ratio processing to the net peak areas of the two detectors, taking the detection spectrum of the HPGe detector as the accuracy reference for the LaBr3 detector. The relationship between the spectrum correction coefficient and the size of the source term was also investigated. A good linear relation exists between the spectrum correction coefficient and the corresponding energy (R² = 0.9765). The maximum relative deviation from the real condition was reduced from 1.65 to 0.035. The spectrum correction method was verified as feasible. - Highlights: • An airborne radioactivity monitoring equipment NH-UAV was developed to measure radionuclides after a nuclear accident. • A spectrum correction algorithm was proposed to obtain precise information on the detected radioactivity within a small area. • The spectrum correction method was verified as feasible. • The corresponding spectrum correction coefficients increase first and then stay constant

  6. Spectrum correction algorithm for detectors in airborne radioactivity monitoring equipment NH-UAV based on a ratio processing method

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Ye [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Tang, Xiao-Bin, E-mail: tangxiaobin@nuaa.edu.cn [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Jiangsu Key Laboratory of Nuclear Energy Equipment Materials Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Wang, Peng; Meng, Jia; Huang, Xi; Wen, Liang-Sheng [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Chen, Da [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Jiangsu Key Laboratory of Nuclear Energy Equipment Materials Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China)

    2015-10-11

    The unmanned aerial vehicle (UAV) radiation monitoring method plays an important role in nuclear accident emergencies. In this research, a spectrum correction algorithm for the UAV airborne radioactivity monitoring equipment NH-UAV was studied, to measure the radioactive nuclides within a small area in real time and in a fixed place. The simulated spectra of the high-purity germanium (HPGe) detector and the lanthanum bromide (LaBr3) detector in the equipment were obtained using the Monte Carlo technique. Spectrum correction coefficients were calculated by applying ratio processing to the net peak areas of the two detectors, taking the detection spectrum of the HPGe detector as the accuracy reference for the LaBr3 detector. The relationship between the spectrum correction coefficient and the size of the source term was also investigated. A good linear relation exists between the spectrum correction coefficient and the corresponding energy (R² = 0.9765). The maximum relative deviation from the real condition was reduced from 1.65 to 0.035. The spectrum correction method was verified as feasible. - Highlights: • An airborne radioactivity monitoring equipment NH-UAV was developed to measure radionuclides after a nuclear accident. • A spectrum correction algorithm was proposed to obtain precise information on the detected radioactivity within a small area. • The spectrum correction method was verified as feasible. • The corresponding spectrum correction coefficients increase first and then stay constant.

  7. Fission track dating of volcanic glass: experimental evidence for the validity of the Size-Correction Method

    International Nuclear Information System (INIS)

    Bernardes, C.; Hadler Neto, J.C.; Lattes, C.M.G.; Araya, A.M.O.; Bigazzi, G.; Cesar, M.F.

    1986-01-01

    Two techniques may be employed for correcting thermally lowered fission track ages of glass material: the so-called 'size-correction method' and the 'plateau method'. Several results from fission track dating on obsidian were analysed in order to compare the model underlying the size-correction method with experimental evidence. The results from this work can be summarized as follows: 1) The assumption that the mean sizes of spontaneous and induced etched tracks are equal in samples unaffected by partial fading is supported by experimental results. If reactor effects exist, such as an enhancement of the etching rate in the irradiated fraction due to radiation damage and/or to the fact that induced fission releases a quantity of energy slightly greater than spontaneous fission, their influence on the size-correction method is very small. 2) The above two correction techniques produce concordant results. 3) Several samples from the same obsidian, affected by 'instantaneous' as well as 'continuous' natural fading to different degrees, were analysed: the curve showing the decrease of the spontaneous track mean size vs. the fraction of spontaneous tracks lost by fading is in close agreement with the correction curve constructed for the same obsidian by imparting artificial thermal treatments to induced tracks. From the above points one can conclude that the assumptions on which the size-correction method is based are well supported, at least to a first approximation. (Author) [pt

  8. Comparison of oxygen saturation levels in patients receiving Technegas by the conventional unassisted method vs. the positive ventilation delivery system (PVDS)

    International Nuclear Information System (INIS)

    Dobson, M.P.; Leiper, C.A.; Lee, K.; Dixson, H.

    2000-01-01

    Full text: The purpose of this study is to compare oxygen saturation levels (SaO2) in 289 patients undergoing conventional lung ventilation scintigraphy (control group) and 27 patients undergoing the Positive Ventilation Delivery System (PVDS). The 27 patients were selected because their conventional method of inhalation proved to be inadequate or non-diagnostic. These patients underwent a second ventilation using PVDS, which improved the diagnostic quality of the ventilation image and assisted in clinical management decisions. Some patients in both the PVDS and the control group experienced a transient lowering of their SaO2. The mean initial SaO2 in the control group did not fall below 94.9%, and in the PVDS group it measured 90.6%. 93% (25/27) of patients in the PVDS group were assessed as non-CO2-retaining and received oxygen at 10 L/min during Technegas inhalation. The mean trough saturation in the PVDS group was 91.7%, which was significantly higher than that of the control group (86.9%). No patient in either group experienced any significant complication attributed to the transient fall in SaO2 during Technegas administration. We conclude that oxygen supplied as part of the PVDS system ameliorates the transient reduction in SaO2 seen during standard Technegas administration. Copyright (2000) The Australian and New Zealand Society of Nuclear Medicine Inc

  9. Landsliding in partially saturated materials

    Science.gov (United States)

    Godt, J.W.; Baum, R.L.; Lu, N.

    2009-01-01

    Rainfall-induced landslides are pervasive in hillslope environments around the world and among the most costly and deadly natural hazards. However, capturing their occurrence with scientific instrumentation in a natural setting is extremely rare. The prevailing thinking on landslide initiation, particularly for those landslides that occur under intense precipitation, is that the failure surface is saturated and has positive pore-water pressures acting on it. Most analytic methods used for landslide hazard assessment are based on the above perception and assume that the failure surface is located beneath a water table. By monitoring the pore water and soil suction response to rainfall, we observed shallow landslide occurrence under partially saturated conditions for the first time in a natural setting. We show that the partially saturated shallow landslide at this site is predictable using measured soil suction and water content and a novel unified effective stress concept for partially saturated earth materials. Copyright 2009 by the American Geophysical Union.

  10. A Novel Optimal Control Method for Impulsive-Correction Projectile Based on Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Ruisheng Sun

    2016-01-01

    Full Text Available This paper presents a new parametric optimization approach based on a modified particle swarm optimization (PSO) to design a class of impulsive-correction projectiles with discrete, flexible-time-interval, and finite-energy control. In terms of optimal control theory, the task is formulated as minimizing the number of working impulses and the control error, which involves reference model linearization, boundary conditions, and a discontinuous objective function. These properties make it difficult to find the global optimum solution by directly applying other optimization approaches, for example, the hp-adaptive pseudospectral method. Consequently, the PSO mechanism is employed for the optimal setting of the impulsive control, with the time intervals between two neighbouring lateral impulses treated as design variables, which keeps the optimization process brief. A modification of the basic PSO algorithm is developed to improve the convergence speed of the optimization by linearly decreasing the inertia weight. In addition, a suboptimal control and guidance law based on the PSO technique is put forward for real-time online design in practice. Finally, a simulation case coupled with a nonlinear flight dynamics model is applied to validate the modified PSO control algorithm. The results of the comparative study illustrate that the proposed optimal control algorithm performs well in obtaining the optimal control efficiently and accurately, and provides a reference approach to handling such impulsive-correction problems.
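
    The modification described (a linearly decreasing inertia weight) is easy to show in code. A generic minimal PSO sketch follows; the cost function is a toy stand-in and the swarm parameters are conventional defaults, not the paper's tuned values.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w_max=0.9, w_min=0.4,
        c1=2.0, c2=2.0, seed=0):
    """Minimal PSO with a linearly decreasing inertia weight (w goes
    from w_max to w_min over the run). `f` maps an array of shape
    (dim,) to a scalar cost; the impulse-timing design variables of
    the paper are abstracted into a generic vector here."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / (iters - 1)   # linear decrease
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy usage: minimise a shifted sphere function in 4 dimensions
best, cost = pso(lambda p: ((p - 1.5)**2).sum(), ([-5]*4, [5]*4))
print(best.round(3), cost)
```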

  11. Comparison between MRI-based attenuation correction methods for brain PET in dementia patients

    International Nuclear Information System (INIS)

    Cabello, Jorge; Lukas, Mathias; Pyka, Thomas; Nekolla, Stephan G.; Ziegler, Sibylle I.; Rota Kops, Elena; Shah, N. Jon; Ribeiro, Andre; Yakushev, Igor

    2016-01-01

    The combination of Positron Emission Tomography (PET) with magnetic resonance imaging (MRI) in hybrid PET/MRI scanners offers a number of advantages in investigating brain structure and function. A critical step of PET data reconstruction is attenuation correction (AC). Accounting for bone in attenuation maps (μ-maps) was shown to be important in brain PET studies. While there are a number of MRI-based AC methods, no systematic comparison between them has been performed so far. The aim of this work was to study the performance of some of the recent methods presented in the literature. To perform such a comparison, we focused on [18F]-Fluorodeoxyglucose PET/MRI in neurodegenerative dementing disorders, which are known to exhibit reduced levels of glucose metabolism in certain brain regions. Four novel methods were used to calculate μ-maps from MRI data of 15 patients with Alzheimer's dementia (AD). The methods cover two atlas-based methods, a segmentation method, and a hybrid template/segmentation method. Additionally, the Dixon-based and a UTE-based method, offered by a vendor, were included in the comparison. Performance was assessed at three levels: tissue identification accuracy in the μ-map, quantitative accuracy of reconstructed PET data in specific brain regions, and precision of diagnostic images at identifying hypometabolic areas. Quantitative regional errors of -20% to 10% were obtained using the vendor's AC methods, whereas the novel methods produced errors within a margin of ±5%. The precision at identifying areas with abnormally low levels of glucose uptake, potentially regions affected by AD, was 62.9 and 79.5% for the two vendor AC methods, the former ignoring bone and the latter including bone information. The precision increased to 87.5-93.3% on average for the four new methods, which exhibited similar performances. We confirm that the AC methods based on the Dixon and UTE sequences provided by the vendor are inferior

  12. Comparison between MRI-based attenuation correction methods for brain PET in dementia patients

    Energy Technology Data Exchange (ETDEWEB)

    Cabello, Jorge; Lukas, Mathias; Pyka, Thomas; Nekolla, Stephan G.; Ziegler, Sibylle I. [Technische Universitaet Muenchen, Nuklearmedizinische Klinik und Poliklinik, Klinikum rechts der Isar, Munich (Germany); Rota Kops, Elena; Shah, N. Jon [Forschungszentrum Juelich GmbH, Institute of Neuroscience and Medicine 4, Medical Imaging Physics, Juelich (Germany); Ribeiro, Andre [Forschungszentrum Juelich GmbH, Institute of Neuroscience and Medicine 4, Medical Imaging Physics, Juelich (Germany); Institute of Biophysics and Biomedical Engineering, Lisbon (Portugal); Yakushev, Igor [Technische Universitaet Muenchen, Nuklearmedizinische Klinik und Poliklinik, Klinikum rechts der Isar, Munich (Germany); Institute TUM Neuroimaging Center (TUM-NIC), Munich (Germany)

    2016-11-15

    The combination of Positron Emission Tomography (PET) with magnetic resonance imaging (MRI) in hybrid PET/MRI scanners offers a number of advantages in investigating brain structure and function. A critical step of PET data reconstruction is attenuation correction (AC). Accounting for bone in attenuation maps (μ-maps) was shown to be important in brain PET studies. While there are a number of MRI-based AC methods, no systematic comparison between them has been performed so far. The aim of this work was to study the performance of some of the recent methods presented in the literature. To perform such a comparison, we focused on [18F]-Fluorodeoxyglucose PET/MRI in neurodegenerative dementing disorders, which are known to exhibit reduced levels of glucose metabolism in certain brain regions. Four novel methods were used to calculate μ-maps from MRI data of 15 patients with Alzheimer's dementia (AD). The methods cover two atlas-based methods, a segmentation method, and a hybrid template/segmentation method. Additionally, the Dixon-based and a UTE-based method, offered by a vendor, were included in the comparison. Performance was assessed at three levels: tissue identification accuracy in the μ-map, quantitative accuracy of reconstructed PET data in specific brain regions, and precision of diagnostic images at identifying hypometabolic areas. Quantitative regional errors of -20% to 10% were obtained using the vendor's AC methods, whereas the novel methods produced errors within a margin of ±5%. The precision at identifying areas with abnormally low levels of glucose uptake, potentially regions affected by AD, was 62.9 and 79.5% for the two vendor AC methods, the former ignoring bone and the latter including bone information. The precision increased to 87.5-93.3% on average for the four new methods, which exhibited similar performances. We confirm that the AC methods based on the Dixon and UTE sequences provided by the vendor are

  13. An Improved Dynamical Downscaling Method with GCM Bias Corrections and Its Validation with 30 Years of Climate Simulations

    KAUST Repository

    Xu, Zhongfeng; Yang, Zong-Liang

    2012-01-01

    An improved dynamical downscaling method (IDD) with general circulation model (GCM) bias corrections is developed and assessed over North America. A set of regional climate simulations is performed with the Weather Research and Forecasting Model

  14. A level set method for cupping artifact correction in cone-beam CT

    International Nuclear Information System (INIS)

    Xie, Shipeng; Li, Haibo; Ge, Qi; Li, Chunming

    2015-01-01

    Purpose: To reduce cupping artifacts and improve the contrast-to-noise ratio in cone-beam computed tomography (CBCT). Methods: A level set method is proposed to reduce cupping artifacts in the reconstructed image of CBCT. The authors derive a local intensity clustering property of the CBCT image and define a local clustering criterion function of the image intensities in a neighborhood of each point. This criterion function defines an energy in terms of the level set functions, which represent a segmentation result and the cupping artifacts. The cupping artifacts are estimated as a result of minimizing this energy. Results: The cupping artifacts in CBCT are reduced by an average of 90%. The results indicate that the level set-based algorithm is practical and effective for reducing the cupping artifacts and preserving the quality of the reconstructed image. Conclusions: The proposed method focuses on the reconstructed image without requiring any additional physical equipment, is easily implemented, and provides cupping correction through a single-scan acquisition. The experimental results demonstrate that the proposed method successfully reduces the cupping artifacts

  15. Evaluation of three methods for retrospective correction of vignetting on medical microscopy images utilizing two open source software tools.

    Science.gov (United States)

    Babaloukas, Georgios; Tentolouris, Nicholas; Liatis, Stavros; Sklavounou, Alexandra; Perrea, Despoina

    2011-12-01

    Correction of vignetting on images obtained by a digital camera mounted on a microscope is essential before applying image analysis. The aim of this study is to evaluate three methods for the retrospective correction of vignetting on medical microscopy images and compare them with a prospective correction method. One digital image from each of four different tissues was used, and a vignetting effect was applied to each of these images. The resulting vignetted image was replicated four times, and in each replica a different method for vignetting correction was applied with the Fiji and GIMP software tools. The highest peak signal-to-noise ratio from the comparison of each method to the original image was obtained from the prospective method in all tissues. The morphological filtering method provided the highest peak signal-to-noise ratio value among the retrospective methods. The prospective method is suggested as the method of choice for the correction of vignetting; if it is not applicable, then morphological filtering may be suggested as the retrospective alternative. © 2011 The Authors Journal of Microscopy © 2011 Royal Microscopical Society.
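
    As an illustration of the best-ranked retrospective approach, here is a sketch of a morphological-filtering correction together with the PSNR figure of merit used in the study; the structuring-element size and smoothing are assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy import ndimage

def correct_vignetting(img, size=51):
    """Retrospective vignetting correction in the spirit of
    morphological filtering: estimate the smooth illumination field
    with a large grey-scale opening plus Gaussian smoothing, then
    divide it out. `size` (structuring element, pixels) is a tunable
    assumption."""
    img = img.astype(float)
    background = ndimage.grey_opening(img, size=(size, size))
    background = ndimage.gaussian_filter(background, sigma=size / 4)
    background /= background.mean()          # keep overall brightness
    return img / np.maximum(background, 1e-6)

def psnr(reference, test):
    """Peak signal-to-noise ratio used in the study to rank methods."""
    mse = np.mean((reference.astype(float) - test.astype(float))**2)
    return 10.0 * np.log10(reference.max()**2 / mse) if mse > 0 else np.inf
```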

  16. SPECT quantification: a review of the different correction methods with compton scatter, attenuation and spatial deterioration effects

    International Nuclear Information System (INIS)

    Groiselle, C.; Rocchisani, J.M.; Moretti, J.L.; Dreuille, O. de; Gaillard, J.F.; Bendriem, B.

    1997-01-01

    SPECT quantification: a review of the different correction methods for Compton scatter, attenuation and spatial deterioration effects. The improvement of gamma cameras and of acquisition and reconstruction software opens new perspectives in terms of image quantification in nuclear medicine. In order to meet the challenge, numerous works have been undertaken in recent years to correct for the different physical phenomena that prevent an exact estimation of the radioactivity distribution. The main phenomena that have to be taken into account are scatter, attenuation and resolution. In this work, the authors present the physical basis of each issue, its consequences for quantification and the main methods proposed to correct for them. (authors)

  17. A new image correction method for live cell atomic force microscopy

    International Nuclear Information System (INIS)

    Shen, Y; Sun, J L; Zhang, A; Hu, J; Xu, L X

    2007-01-01

    During live cell imaging via atomic force microscopy (AFM), the interactions between the AFM probe and the membrane yield distorted cell images. In this work, an image correction method was developed based on the force-distance curve and a modified Hertzian model. The normal loading and lateral forces exerted on the cell membrane by the AFM tip were both accounted for during the scanning. Two assumptions were made in the modelling based on the experimental measurements: (1) the lateral force on the endothelial cells varied linearly with the height; (2) the cell membrane Young's modulus could be derived from the displacement measurement of a normal force curve. Results have shown that the model could be used to recover up to 30% of the actual cell height, depending on the loading force. The accuracy of the model was also investigated with respect to the loading force and the mechanical properties of the cell membrane
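
    The depth correction implied by the modified Hertzian model can be illustrated with the basic spherical-tip Hertz formula; the load, modulus and tip radius below are illustrative assumptions, and the paper's lateral force term is omitted.

```python
def hertz_indentation(force, young_modulus, tip_radius, nu=0.5):
    """Hertz indentation depth for a spherical tip on a soft sample:
    delta = (3*F*(1 - nu**2) / (4*E*sqrt(R)))**(2/3). Adding delta
    back to the measured height is the simplest form of the recovery
    idea sketched here."""
    return (3.0 * force * (1.0 - nu**2)
            / (4.0 * young_modulus * tip_radius**0.5)) ** (2.0 / 3.0)

# Illustrative values (not from the paper): 1 nN load, E = 5 kPa,
# 2 um effective tip radius -> indentation of roughly 0.2 um.
print(hertz_indentation(1e-9, 5e3, 2e-6) * 1e9, "nm")
```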

  18. A new image correction method for live cell atomic force microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Y; Sun, J L; Zhang, A; Hu, J; Xu, L X [College of Life Science and Biotechnology, Shanghai Jiao Tong University, Shanghai 200030 (China)

    2007-04-21

    During live cell imaging via atomic force microscopy (AFM), the interactions between the AFM probe and the membrane yield distorted cell images. In this work, an image correction method was developed based on the force-distance curve and a modified Hertzian model. The normal loading and lateral forces exerted on the cell membrane by the AFM tip were both accounted for during the scanning. Two assumptions were made in the modelling based on the experimental measurements: (1) the lateral force on the endothelial cells varied linearly with the height; (2) the cell membrane Young's modulus could be derived from the displacement measurement of a normal force curve. Results have shown that the model could be used to recover up to 30% of the actual cell height, depending on the loading force. The accuracy of the model was also investigated with respect to the loading force and the mechanical properties of the cell membrane.

  19. Method for determining correction factors induced by irradiation of ionization chamber cables in large radiation field

    International Nuclear Information System (INIS)

    Rodrigues, L.L.C.

    1988-01-01

    A simple method was developed, to be suggested to hospital physicists, to be followed during large radiation field dosimetry in order to evaluate the effects of irradiating cables, connectors and extension cables, and to determine correction factors for each system or geometry. All quality control tests were performed according to the International Electrotechnical Commission for three clinical dosimeters. Photon and electron irradiation effects for cables, connectors and extension cables were investigated under different experimental conditions by means of measurements of chamber sensitivity to a standard 90Sr radiation source. The radiation-induced leakage current was also measured for cables, connectors and extension cables irradiated by photons and electrons. All measurements were performed under standard dosimetry conditions. Finally, measurements were performed in large fields. Cable factors and leakage factors were determined by the relation between chamber responses for irradiated and unirradiated cables. (author) [pt

  20. Method and apparatus for producing a porosity log of a subsurface formation corrected for detector standoff

    International Nuclear Information System (INIS)

    Allen, L.S.; Mills, W.R.; Stromswold, D.C.

    1991-01-01

    This paper describes a method and apparatus for producing a porosity log of a subsurface formation corrected for detector standoff. It includes: lowering a logging tool having a neutron source and a neutron detector into the borehole; irradiating the subsurface formation with neutrons from the neutron source as the logging tool is traversed along the subsurface formation; recording die-away signals representing the die-away of nuclear radiation in the subsurface formation as detected by the neutron detector; producing intensity signals representing the variations in intensity of the die-away signals; and producing a model of the die-away of nuclear radiation in the subsurface formation having terms varying exponentially in response to borehole, formation and background effects on the die-away of nuclear radiation as detected by the detector

  1. On the evaluation of the correction factor μ (rho', tau') for the periodic pulse method

    International Nuclear Information System (INIS)

    Mueller, J.W.

    1976-01-01

    The inconveniences associated with the purely numerical approach we have chosen to solve some of the problems which arise in connection with the source-pulser method are twofold. On the one hand, there is the trouble of calculating the tables for μ, requiring several nights of computer time. On the other hand, apart from some simple limiting values such as μ = 1 for τ′ = 0 or 1, and μ = 1/(0.5 + |0.5 − τ′|) for ρ′ → 0 (with 0 < τ′ < 1), no appropriate analytical form of sufficient precision is known for the correction factor μ at the moment. This drawback, we hope, is partly removed by a tabulation which should cover the whole region of practical interest. The computer programs for both the evaluation of μ and the Monte Carlo simulation are available upon request

  2. Can bias correction and statistical downscaling methods improve the skill of seasonal precipitation forecasts?

    Science.gov (United States)

    Manzanas, R.; Lucero, A.; Weisheimer, A.; Gutiérrez, J. M.

    2018-02-01

    Statistical downscaling methods are popular post-processing tools which are widely used in many sectors to adapt the coarse-resolution biased outputs from global climate simulations to the regional-to-local scale typically required by users. They range from simple and pragmatic Bias Correction (BC) methods, which directly adjust the model outputs of interest (e.g. precipitation) according to the available local observations, to more complex Perfect Prognosis (PP) ones, which indirectly derive local predictions (e.g. precipitation) from appropriate upper-air large-scale model variables (predictors). Statistical downscaling methods have been extensively used and critically assessed in climate change applications; however, their advantages and limitations in seasonal forecasting are not well understood yet. In particular, a key problem in this context is whether they serve to improve the forecast quality/skill of raw model outputs beyond the adjustment of their systematic biases. In this paper we analyze this issue by applying two state-of-the-art BC and two PP methods to downscale precipitation from a multimodel seasonal hindcast in a challenging tropical region, the Philippines. To properly assess the potential added value beyond the reduction of model biases, we consider two validation scores which are not sensitive to changes in the mean (correlation and reliability categories). Our results show that, whereas BC methods maintain or worsen the skill of the raw model forecasts, PP methods can yield significant skill improvement (worsening) in cases for which the large-scale predictor variables considered are better (worse) predicted by the model than precipitation. For instance, PP methods are found to increase (decrease) model reliability in nearly 40% of the stations considered in boreal summer (autumn). Therefore, the choice of a convenient downscaling approach (either BC or PP) depends on the region and the season.
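
    As a concrete example of the BC family discussed above, empirical quantile mapping can be sketched in a few lines. This generic version, applied to synthetic gamma-distributed "precipitation", is not the specific implementation assessed in the paper.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fcst):
    """Empirical quantile mapping, a standard Bias Correction method:
    each forecast value is mapped to the observed value at the same
    empirical quantile of the historical model distribution."""
    model_hist = np.sort(np.asarray(model_hist, dtype=float))
    obs_hist = np.asarray(obs_hist, dtype=float)
    q = np.searchsorted(model_hist, model_fcst) / len(model_hist)
    return np.quantile(obs_hist, np.clip(q, 0.0, 1.0))

# Toy usage: the "model" rains too lightly compared to "observations"
rng = np.random.default_rng(0)
model = rng.gamma(2.0, 2.0, 1000)
obs = rng.gamma(2.0, 4.0, 1000)
print(quantile_map(model, obs, np.array([1.0, 5.0, 20.0])))
```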

  3. An improved level set method for brain MR images segmentation and bias correction.

    Science.gov (United States)

    Chen, Yunjie; Zhang, Jianwei; Macione, Jim

    2009-10-01

    Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on the observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term in the level set framework. Our method is able to capture bias fields of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated application. The proposed method has been used on images of various modalities with promising results.

  4. A meshless scheme for incompressible fluid flow using a velocity-pressure correction method

    KAUST Repository

    Bourantas, Georgios

    2013-12-01

    A meshless point collocation method is proposed for the numerical solution of the steady-state, incompressible Navier-Stokes (NS) equations in their primitive u-v-p formulation. The flow equations are solved in their strong form using either a collocated or a semi-staggered "grid" configuration. The developed numerical scheme approximates the unknown field functions using the Moving Least Squares approximation. A velocity correction, along with a pressure correction scheme, is applied in the context of the meshless point collocation method. The proposed meshless point collocation (MPC) scheme has the following characteristics: (i) it is a truly meshless method; (ii) there is no need for pressure boundary conditions, since no pressure constitutive equation is solved; (iii) it combines simplicity and accuracy; (iv) results can be obtained using collocated or semi-staggered "grids"; (v) there is no need to use a curvilinear coordinate system; and (vi) it can solve steady and unsteady flows. The lid-driven cavity flow problem, for Reynolds numbers up to 5000, was considered, using both staggered and collocated grid configurations. Next, the backward-facing step (BFS) flow problem was considered for Reynolds numbers up to 800 using a staggered grid. As a final example, the case of laminar flow in a two-dimensional tube with an obstacle was examined. © 2013 Elsevier Ltd.

  5. Research of beam hardening correction method for CL system based on SART algorithm

    International Nuclear Information System (INIS)

    Cao Daquan; Wang Yaxiao; Que Jiemin; Sun Cuili; Wei Cunfeng; Wei Long

    2014-01-01

    Computed laminography (CL) is a non-destructive testing technique for large objects, especially planar objects. Beam hardening artifacts are widely observed in CL systems and significantly reduce image quality. This study proposes a novel simultaneous algebraic reconstruction technique (SART)-based beam hardening correction (BHC) method for the CL system, namely the SART-BHC algorithm for short. The SART-BHC algorithm takes the polychromatic attenuation process into account in formulating the iterative reconstruction update. A novel projection matrix calculation method, different from the conventional cone-beam or fan-beam geometry, was also studied for the CL system. The proposed method was evaluated with simulation data and experimental data, generated using the Monte Carlo simulation toolkit Geant4 and a bench-top CL system, respectively. All projection data were reconstructed with the SART-BHC algorithm and the standard filtered back projection (FBP) algorithm. The reconstructed images show that beam hardening artifacts are greatly reduced with the SART-BHC algorithm compared to the FBP algorithm. The SART-BHC algorithm does not need any prior knowledge about the object or the X-ray spectrum, and it can also mitigate interlayer aliasing. (authors)
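
    For orientation, the plain SART update that the SART-BHC algorithm builds on can be written compactly. The sketch below omits the beam-hardening (polychromatic) model that is the paper's actual contribution.

```python
import numpy as np

def sart(A, b, iters=50, relax=0.5):
    """Plain SART iteration (without the beam-hardening model the
    paper adds): x <- x + relax * (A^T ((b - A x) / row_sums)) / col_sums,
    where row_sums and col_sums are the sums of the system matrix."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    row_sums = A.sum(axis=1); row_sums[row_sums == 0] = 1.0
    col_sums = A.sum(axis=0); col_sums[col_sums == 0] = 1.0
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x += relax * (A.T @ ((b - A @ x) / row_sums)) / col_sums
    return x

# Tiny sanity check on a consistent 3x2 system (true x = [2, 3]):
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(sart(A, A @ np.array([2.0, 3.0])).round(3))
```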

  6. A new bias field correction method combining N3 and FCM for improved segmentation of breast density on MRI.

    Science.gov (United States)

    Lin, Muqing; Chan, Siwa; Chen, Jeon-Hor; Chang, Daniel; Nie, Ke; Chen, Shih-Ting; Lin, Cheng-Ju; Shih, Tzu-Ching; Nalcioglu, Orhan; Su, Min-Ying

    2011-01-01

    Quantitative breast density is known as a strong risk factor associated with the development of breast cancer. Measurement of breast density based on three-dimensional breast MRI may provide very useful information. One important step in the quantitative analysis of breast density on MRI is the correction of field inhomogeneity to allow an accurate segmentation of the fibroglandular (dense) tissue. A new bias field correction method combining the nonparametric nonuniformity normalization (N3) algorithm and a fuzzy C-means (FCM)-based inhomogeneity correction algorithm is developed in this work. The analysis is performed on non-fat-sat T1-weighted images acquired using a 1.5 T MRI scanner. A total of 60 breasts from 30 healthy volunteers were analyzed. N3 is known as a robust correction method, but it cannot correct a strong bias field over a large area. The FCM-based algorithm can correct the bias field over a large area, but it may change the tissue contrast and affect the segmentation quality. The proposed algorithm applies N3 first, followed by FCM, and then the generated bias field is smoothed using a Gaussian kernel and B-spline surface fitting to minimize the problem of mistakenly changed tissue contrast. The segmentation results based on the N3+FCM corrected images were compared to the N3 and FCM alone corrected images and to images corrected with another method, coherent local intensity clustering (CLIC). The segmentation quality based on the different correction methods was evaluated by a radiologist and ranked. The authors demonstrated that the iterative N3+FCM correction method brightens the signal intensity of fatty tissues and separates the histogram peaks between the fibroglandular and fatty tissues to allow an accurate segmentation between them. In the first reading session, the radiologist found (N3+FCM > N3 > FCM) ranking in 17 breasts, (N3+FCM > N3 = FCM) ranking in 7 breasts, (N3+FCM = N3 > FCM) in 32 breasts, (N3+FCM = N3 = FCM) in 2 breasts, and (N3 > N3

  7. Phylogeny Reconstruction with Alignment-Free Method That Corrects for Horizontal Gene Transfer.

    Directory of Open Access Journals (Sweden)

    Raquel Bromberg

    2016-06-01

    Full Text Available Advances in sequencing have generated a large number of complete genomes. Traditionally, phylogenetic analysis relies on alignments of orthologs, but defining orthologs and separating them from paralogs is a complex task that may not always be suited to the large datasets of the future. An alternative to traditional, alignment-based approaches are whole-genome, alignment-free methods. These methods are scalable and require minimal manual intervention. We developed SlopeTree, a new alignment-free method that estimates evolutionary distances by measuring the decay of exact substring matches as a function of match length. SlopeTree corrects for horizontal gene transfer, for composition variation and low complexity sequences, and for branch-length nonlinearity caused by multiple mutations at the same site. We tested SlopeTree on 495 bacteria, 73 archaea, and 72 strains of Escherichia coli and Shigella. We compared our trees to the NCBI taxonomy, to trees based on concatenated alignments, and to trees produced by other alignment-free methods. The results were consistent with current knowledge about prokaryotic evolution. We assessed differences in tree topology over different methods and settings and found that the majority of bacteria and archaea have a core set of proteins that evolves by descent. In trees built from complete genomes rather than sets of core genes, we observed some grouping by phenotype rather than phylogeny, for instance with a cluster of sulfur-reducing thermophilic bacteria coming together irrespective of their phyla. The source-code for SlopeTree is available at: http://prodata.swmed.edu/download/pub/slopetree_v1/slopetree.tar.gz.

  8. Phylogeny Reconstruction with Alignment-Free Method That Corrects for Horizontal Gene Transfer

    Science.gov (United States)

    Grishin, Nick V.; Otwinowski, Zbyszek

    2016-01-01

    Advances in sequencing have generated a large number of complete genomes. Traditionally, phylogenetic analysis relies on alignments of orthologs, but defining orthologs and separating them from paralogs is a complex task that may not always be suited to the large datasets of the future. An alternative to traditional, alignment-based approaches are whole-genome, alignment-free methods. These methods are scalable and require minimal manual intervention. We developed SlopeTree, a new alignment-free method that estimates evolutionary distances by measuring the decay of exact substring matches as a function of match length. SlopeTree corrects for horizontal gene transfer, for composition variation and low complexity sequences, and for branch-length nonlinearity caused by multiple mutations at the same site. We tested SlopeTree on 495 bacteria, 73 archaea, and 72 strains of Escherichia coli and Shigella. We compared our trees to the NCBI taxonomy, to trees based on concatenated alignments, and to trees produced by other alignment-free methods. The results were consistent with current knowledge about prokaryotic evolution. We assessed differences in tree topology over different methods and settings and found that the majority of bacteria and archaea have a core set of proteins that evolves by descent. In trees built from complete genomes rather than sets of core genes, we observed some grouping by phenotype rather than phylogeny, for instance with a cluster of sulfur-reducing thermophilic bacteria coming together irrespective of their phyla. The source-code for SlopeTree is available at: http://prodata.swmed.edu/download/pub/slopetree_v1/slopetree.tar.gz. PMID:27336403

  9. A direct ROI quantification method for inherent PVE correction: accuracy assessment in striatal SPECT measurements

    Energy Technology Data Exchange (ETDEWEB)

    Vanzi, Eleonora; De Cristofaro, Maria T.; Sotgia, Barbara; Mascalchi, Mario; Formiconi, Andreas R. [University of Florence, Clinical Pathophysiology, Florence (Italy); Ramat, Silvia [University of Florence, Neurological and Psychiatric Sciences, Florence (Italy)

    2007-09-15

    The clinical potential of striatal imaging with dopamine transporter (DAT) SPECT tracers is hampered by the limited capability to recover activity concentration ratios due to partial volume effects (PVE). We evaluated the accuracy of a least squares method that allows retrieval of activity in regions of interest directly from projections (LS-ROI). An Alderson striatal phantom was filled with striatal-to-background ratios of 6:1, 9:1 and 28:1; the striatal and background ROIs were drawn on a coregistered X-ray CT of the phantom. The activity ratios of these ROIs were derived both with the LS-ROI method and with conventional SPECT EM reconstruction (EM-SPECT). Moreover, the two methods were compared in seven patients with motor symptoms who were examined with N-3-fluoropropyl-2-β-carboxymethoxy-3-β-(4-iodophenyl) (FP-CIT) SPECT, calculating the binding potential (BP). In the phantom study, the activity ratios obtained with EM-SPECT were 3.5, 5.3 and 17.0, respectively, whereas the LS-ROI method resulted in ratios of 6.2, 9.0 and 27.3, respectively. With the LS-ROI method, the BP in the seven patients was approximately 60% higher than with EM-SPECT; a linear correlation between the LS-ROI and the EM estimates was found (r = 0.98, p = 0.03). The PVE correction capability of LS-ROI is mainly due to the fact that the ill-conditioning of the LS-ROI approach is lower than that of EM-SPECT. The LS-ROI method seems to be feasible and accurate in the examination of the dopaminergic system. This approach can be fruitful in the monitoring of disease progression and in clinical trials of dopaminergic drugs. (orig.)

  10. Electronic Transport as a Driver for Self-Interaction-Corrected Methods

    KAUST Repository

    Pertsova, Anna; Canali, Carlo Maria; Pederson, Mark R.; Rungger, Ivan; Sanvito, Stefano

    2015-01-01

    © 2015 Elsevier Inc. While spintronics often investigates striking collective spin effects in large systems, a very important research direction deals with spin-dependent phenomena in nanostructures, reaching the extreme of a single spin confined in a quantum dot, in a molecule, or localized on an impurity or dopant. The issue considered in this chapter involves taking this extreme to the nanoscale and the quest to use first-principles methods to predict and control the behavior of a few "spins" (down to 1 spin) when they are placed in an interesting environment. Particular interest is on environments for which addressing these systems with external fields and/or electric or spin currents is possible. The realization of such systems, including those that consist of a core of a few transition-metal (TM) atoms carrying a spin, connected and exchange-coupled through bridging oxo-ligands, has been due to work by many experimental researchers at the interface of atomic, molecular and condensed matter physics. This chapter addresses computational problems associated with understanding the behaviors of nano- and molecular-scale spin systems and reports on how the computational complexity increases when such systems are used as elements of electron transport devices. Especially for cases where these elements are attached to substrates with electronegativities very different from that of the molecule, for Coulomb blockade systems, or for cases where the spin-ordering within the molecules is weakly antiferromagnetic, the delocalization error in DFT is particularly problematic and requires solutions, such as self-interaction corrections, to move forward. We highlight the intersecting fields of spin-ordered nanoscale molecular magnets, electron transport, and Coulomb blockade and highlight cases where self-interaction-corrected methodologies can improve our predictive power in this emerging field.

  11. A graphical method for comparing nocturnal oxygen saturation profiles in individuals and populations: Application to healthy infants and preterm neonates.

    Science.gov (United States)

    Terrill, Philip I; Dakin, Carolyn; Edwards, Bradley A; Wilson, Stephen J; MacLean, Joanna E

    2018-05-01

    Pulse-oximetry (SpO2) allows the identification of important clinical physiology. However, summary statistics such as mean values and desaturation incidence do not capture the complexity of the information contained within continuous recordings. The aim of this study was to develop an objective method to quantify important SpO2 characteristics; and assess its utility in healthy infant and preterm neonate cohorts. An algorithm was developed to calculate the desaturation incidence, depth, and duration. These variables are presented using three plots: SpO2 cumulative-frequency relationship; desaturation-depth versus incidence; desaturation-duration versus incidence. This method was applied to two populations who underwent nocturnal pulse-oximetry: (1) thirty-four healthy term infants studied at 2-weeks, 3, 6, 12, and 24-months of age and (2) thirty-seven neonates born <26 weeks and studied at discharge from NICU (37-44 weeks post-conceptual age). The maturation in healthy infants was characterized by reduced desaturation index (27.2/h vs 3.3/h at 2-weeks and 24-months, P < 0.01), and increased percentage of desaturation events ≥6-s in duration (27.8% vs 43.2% at 2-weeks and 3-months, P < 0.01). Compared with term-infants, preterm infants had a greater desaturation incidence (54.8/h vs 27.2/h, P < 0.01), and these desaturations were deeper (52.9% vs 37.6% were ≥6% below baseline, P < 0.01). The incidence of longer desaturations (≥14-s) in preterm infants was correlated with healthcare utilization over the first 24-months (r = 0.63, P < 0.01). This tool allows the objective comparison of extended oximetry recordings between groups and for individuals; and serves as a basis for the development of reference ranges for populations. © 2018 Wiley Periodicals, Inc.
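
    A hedged sketch of the event-detection step, with illustrative thresholds rather than the paper's exact criteria (`drop`, the median baseline, and the sampling rate `fs` are assumptions):

    ```python
    import numpy as np

    def desaturation_events(spo2, fs=1.0, drop=4.0):
        # Baseline here is simply the recording median; events are contiguous
        # runs at least `drop` percentage points below it.
        spo2 = np.asarray(spo2, float)
        base = np.median(spo2)
        below = np.concatenate(([False], spo2 <= base - drop, [False]))
        starts = np.flatnonzero(~below[:-1] & below[1:])
        ends = np.flatnonzero(below[:-1] & ~below[1:])
        events = [{"depth": base - spo2[s:e].min(),      # percentage points
                   "duration": (e - s) / fs}             # seconds
                  for s, e in zip(starts, ends)]
        incidence_per_hour = len(events) / (len(spo2) / fs / 3600.0)
        return events, incidence_per_hour
    ```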

  12. Orbit Determination from Tracking Data of Artificial Satellite Using the Method of Differential Correction

    OpenAIRE

    Byoung-Sun Lee; Jung-Hyun Jo; Sang-Young Park; Kyu-Hong Choi; Chun-Hwey Kim

    1988-01-01

    The differential correction process for determining osculating orbital elements as accurately as possible at a given instant of time from tracking data of an artificial satellite was implemented. Preliminary orbital elements were used as the initial value of the differential correction procedure and iterated until the residual between the real observations (O) and the computed observations (C) was minimized. The tracked satellite was NOAA-9 of the TIROS-N series. Two types of tracking data were prediction data precomputed fro...
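
    A generic Gauss-Newton sketch of the differential correction loop; the observation model, Jacobian, and tolerance are placeholders, not the paper's specifics:

    ```python
    import numpy as np

    def differential_correction(x0, compute_obs, jacobian, observed,
                                tol=1e-10, max_iter=25):
        """Iteratively correct the orbital elements x until the O-C residuals
        stop improving; compute_obs(x) models the observations, jacobian(x)
        returns the partial derivatives d(obs)/dx."""
        x = np.asarray(x0, float).copy()
        for _ in range(max_iter):
            residual = observed - compute_obs(x)              # O - C
            dx, *_ = np.linalg.lstsq(jacobian(x), residual, rcond=None)
            x += dx
            if np.linalg.norm(dx) < tol:
                break
        return x
    ```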

  13. Resistivity Correction Factor for the Four-Probe Method: Experiment II

    Science.gov (United States)

    Yamashita, Masato; Yamaguchi, Shoji; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo

    1989-05-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F can be applied to a system consisting of a disk sample and a four-probe array. Measurements are made on isotropic graphite disks and crystalline ITO films. Factor F can correct the apparent variations of the data and lead to reasonable resistivities and sheet resistances. Here factor F is compared to other correction factors, i.e., F_ASTM and F_JIS.
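
    How such a correction factor is applied in practice, as a sketch; the infinite-sheet limit pi/ln 2 is standard four-probe theory, while a finite-disk value of F must come from the paper's (or the ASTM/JIS) tables:

    ```python
    import math

    # Infinite thin-sheet limit of the four-probe geometry factor (~4.532).
    F_INFINITE_SHEET = math.pi / math.log(2)

    def disk_resistivity(voltage, current, thickness, F=F_INFINITE_SHEET):
        """rho = F * t * V / I; pass the tabulated finite-disk factor
        (e.g. F_ASTM or F_JIS) instead of the default for real samples."""
        return F * thickness * voltage / current
    ```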

  14. Evaluation of scatter limitation correction: a new method of correcting photopenic artifacts caused by patient motion during whole-body PET/CT imaging.

    Science.gov (United States)

    Miwa, Kenta; Umeda, Takuro; Murata, Taisuke; Wagatsuma, Kei; Miyaji, Noriaki; Terauchi, Takashi; Koizumi, Mitsuru; Sasaki, Masayuki

    2016-02-01

    Overcorrection of scatter caused by patient motion during whole-body PET/computed tomography (CT) imaging can induce the appearance of photopenic artifacts in the PET images. The present study aimed to quantify the accuracy of scatter limitation correction (SLC) for eliminating photopenic artifacts. This study analyzed photopenic artifacts in (18)F-fluorodeoxyglucose ((18)F-FDG) PET/CT images acquired from 12 patients and from a National Electrical Manufacturers Association phantom with two peripheral plastic bottles that simulated the human body and arms, respectively. The phantom comprised a sphere (diameter, 10 or 37 mm) containing fluorine-18 solutions with target-to-background ratios of 2, 4, and 8. The plastic bottles were moved 10 cm posteriorly between CT and PET acquisitions. All PET data were reconstructed using model-based scatter correction (SC), no scatter correction (NSC), and SLC, and the presence or absence of artifacts on the PET images was visually evaluated. The SC and SLC images were also semiquantitatively evaluated using standardized uptake values (SUVs). Photopenic artifacts were not recognizable in any NSC and SLC image from all 12 patients in the clinical study. The SUVmax values of mismatched SLC PET/CT images were almost equal to those of matched SC and SLC PET/CT images. Applying NSC and SLC substantially eliminated the photopenic artifacts on SC PET images in the phantom study. SLC improved the activity concentration of the sphere for all target-to-background ratios. The highest %errors of the 10- and 37-mm spheres were 93.3 and 58.3%, respectively, for mismatched SC, and 73.2 and 22.0%, respectively, for mismatched SLC. Photopenic artifacts caused by SC error induced by CT and PET image misalignment were corrected using SLC, indicating that this method is useful and practical for clinical qualitative and quantitative PET/CT assessment.

  15. Temperature effect correction for muon flux at the Earth surface: estimation of the accuracy of different methods

    International Nuclear Information System (INIS)

    Dmitrieva, A N; Astapov, I I; Kovylyaeva, A A; Pankova, D V

    2013-01-01

    Correction of the muon flux at the Earth's surface for the temperature effect with the help of two simple methods is considered. In the first method, it is assumed that the major part of muons is generated at some effective generation level, whose altitude depends on the temperature profile of the atmosphere. In the second method, the dependence of the muon flux on the mass-averaged atmosphere temperature is considered. The methods were tested with data from the muon hodoscope URAGAN (Moscow, Russia). The difference between data corrected with differential-in-altitude temperature coefficients and with the simplified methods does not exceed 1-1.5%, so the latter may be used to introduce a fast preliminary correction.
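
    A sketch of the second (mass-averaged temperature) method; the linear form is the standard one, but the coefficient value below is only a placeholder, since URAGAN uses its own fitted coefficient:

    ```python
    def temperature_corrected_rate(rate, t_mass_avg, t_ref, alpha_pct_per_k=-0.2):
        """I_corr = I / (1 + alpha * (T_eff - T_ref)), with alpha the (negative)
        temperature coefficient in %/K; the value here is illustrative only."""
        return rate / (1.0 + alpha_pct_per_k / 100.0 * (t_mass_avg - t_ref))
    ```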

  16. Comparative study of chance coincidence correction in measuring 223Ra and 224Ra by delay coincidence method

    International Nuclear Information System (INIS)

    Yan Yongjun; Huang Derong; Zhou Jianliang; Qiu Shoukang

    2013-01-01

    The delayed coincidence measurement of 220Rn and 219Rn has been proved to be a valid indirect method for measuring 224Ra and 223Ra extracted from natural water, which can provide valuable information on estuarine/ocean mixing, submarine groundwater discharge, and water/soil interactions. In practical operation, chance coincidence correction must be considered, most often via Moore's correction method, but Moore's and Giffin's methods are incomplete in some respects. In this paper, a modification of Moore's method (method 1) and a new chance coincidence correction formula (method 2) are provided. Experimental results are presented to demonstrate the conclusions. The results show that precision is improved when the counting rate is less than 70 min⁻¹. (authors)
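
    A generic Poisson estimate of the chance-coincidence term, shown only to fix ideas; it is neither Moore's formula nor the authors' methods 1 and 2:

    ```python
    def chance_corrected_counts(gross_coincidences, n_triggers,
                                uncorrelated_rate, window_s):
        # Each trigger opens a delay window of length window_s seconds, into
        # which uncorrelated events fall at uncorrelated_rate counts/s.
        chance = n_triggers * uncorrelated_rate * window_s
        return gross_coincidences - chance
    ```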

  17. Iterative correction method for shift-variant blurring caused by collimator aperture in SPECT

    International Nuclear Information System (INIS)

    Ogawa, Koichi; Katsu, Haruto

    1996-01-01

    A collimation system in single photon emission computed tomography (SPECT) induces blurring on reconstructed images. The blurring varies with the collimator aperture, which is determined by the shape of the hole (its diameter and length), and with the distance between the collimator surface and the object; it therefore has shift-variant properties. This paper presents a new iterative method for correcting the shift-variant blurring. The method estimates the ratio of the 'ideal projection value' to the 'measured projection value' at each sample point. The term 'ideal projection value' means the number of photons which enter the hole perpendicular to the collimator surface, and the term 'measured projection value' means the number of photons which enter the hole at acute angles to the collimator aperture axis. If the estimation is accurate, the ideal projection value can be obtained as the product of the measured projection value and the estimated ratio. The accuracy of the estimation is improved iteratively by comparing the measured projection value with a weighted summation of several estimated projection values. The simulation results showed that spatial resolution was improved without amplification of artifacts due to statistical noise. (author)
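
    A multiplicative ratio-update sketch in the spirit of the record (the paper's exact weighting scheme is not reproduced); `blur` stands for the distance-dependent, shift-variant collimator response:

    ```python
    import numpy as np

    def ratio_deblur(measured, blur, n_iter=20):
        """Estimate the ideal projection by repeatedly applying the
        measured/re-blurred ratio, a Richardson-Lucy-flavoured stand-in."""
        estimate = np.clip(measured, 1e-12, None)
        for _ in range(n_iter):
            reblurred = blur(estimate)
            estimate = estimate * (measured / np.maximum(reblurred, 1e-12))
        return estimate
    ```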

  18. An inter-crystal scatter correction method for DOI PET image reconstruction

    International Nuclear Information System (INIS)

    Lam, Chih Fung; Hagiwara, Naoki; Obi, Takashi; Yamaguchi, Masahiro; Yamaya, Taiga; Murayama, Hideo

    2006-01-01

    New positron emission tomography (PET) scanners utilize depth-of-interaction (DOI) information to improve image resolution, particularly at the edge of the field-of-view, while maintaining high detector sensitivity. However, the inter-crystal scatter (ICS) effect cannot be neglected in DOI scanners due to the use of smaller crystals. ICS is the phenomenon wherein a single incident gamma photon produces multiple scintillations due to Compton scatter in the detecting crystals. In the case of ICS, only one scintillation position is approximated for detectors with Anger-type logic calculation. This causes an error in position detection, and ICS worsens the image contrast, particularly for smaller hotspots. In this study, we propose to model an ICS probability by using a Monte Carlo simulator. The probability is given as a statistical relationship between the gamma photon first-interaction crystal pair and the detected crystal pair. It is then used to improve the system matrix of a statistical image reconstruction algorithm, such as maximum likelihood expectation maximization (ML-EM), in order to correct for the position error caused by ICS. We apply the proposed method to simulated data of the jPET-D4, which is a four-layer DOI PET being developed at the National Institute of Radiological Sciences. Our computer simulations show that image contrast is recovered successfully by the proposed method. (author)
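
    The reconstruction side of the proposal, sketched with a standard ML-EM update; the record's contribution would enter through a system matrix A whose elements fold in the simulated ICS probabilities:

    ```python
    import numpy as np

    def mlem(A, y, n_iter=50):
        """Standard ML-EM iteration: x <- x * A^T(y / Ax) / A^T 1."""
        x = np.ones(A.shape[1])
        sensitivity = np.maximum(A.sum(axis=0), 1e-12)     # A^T 1
        for _ in range(n_iter):
            ratio = y / np.maximum(A @ x, 1e-12)
            x = x * (A.T @ ratio) / sensitivity
        return x
    ```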

  19. Single photon emission computed tomography using a regularizing iterative method for attenuation correction

    International Nuclear Information System (INIS)

    Soussaline, Francoise; Cao, A.; Lecoq, G.

    1981-06-01

    An analytically exact solution to the attenuated tomographic operator is proposed. The technique, called the Regularizing Iterative Method (RIM), belongs to the iterative class of procedures where a priori knowledge can be introduced on the evaluation of the size and shape of the activity domain to be reconstructed, and on the exact attenuation distribution. The relaxation factor used leads to fast convergence and provides noise filtering for a small number of iterations. The effectiveness of the method was tested on the Single Photon Emission Computed Tomography (SPECT) reconstruction problem, with the goal of precise correction for attenuation before quantitative study. Its implementation involves the use of a rotating scintillation camera based SPECT detector connected to a minicomputer system. Mathematical simulations of cylindrical uniformly attenuated phantoms indicate that, in the range of the a priori calculated relaxation factor, a fast converging solution can always be found with a (contrast) accuracy of the order of 0.2 to 4%, depending on whether numerical errors and noise are taken into account. The sensitivity of the RIM algorithm to errors in the size of the reconstructed object and in the value of the attenuation coefficient μ was studied using the same simulation data. Extreme variations of ±15% in these parameters will lead to errors of the order of ±20% in the quantitative results. Physical phantoms representing a variety of geometrical situations were also studied

  20. A forward bias method for lag correction of an a-Si flat panel detector

    International Nuclear Information System (INIS)

    Starman, Jared; Tognina, Carlo; Partain, Larry; Fahrig, Rebecca

    2012-01-01

    Purpose: Digital a-Si flat panel (FP) x-ray detectors can exhibit detector lag, or residual signal, of several percent that can cause ghosting in projection images or severe shading artifacts, known as the radar artifact, in cone-beam computed tomography (CBCT) reconstructions. A major contributor to detector lag is believed to be defect states, or traps, in the a-Si layer of the FP. Software methods to characterize and correct for the detector lag exist, but they may make assumptions such as system linearity and time invariance, which may not be true. The purpose of this work is to investigate a new hardware based method to reduce lag in an a-Si FP and to evaluate its effectiveness at removing shading artifacts in CBCT reconstructions. The feasibility of a novel, partially hardware based solution is also examined. Methods: The proposed hardware solution for lag reduction requires only a minor change to the FP. For pulsed irradiation, the proposed method inserts a new operation step between the readout and data collection stages. During this new stage the photodiode is operated in a forward bias mode, which fills the defect states with charge. A Varian 4030CB panel was modified to allow for operation in the forward bias mode. The contrast of residual lag ghosts was measured for lag frames 2 and 100 after irradiation ceased for standard and forward bias modes. Detector step response, lag, SNR, modulation transfer function (MTF), and detective quantum efficiency (DQE) measurements were made with standard and forward bias firmware. CBCT data of pelvic and head phantoms were also collected. Results: Overall, the 2nd and 100th detector lag frame residual signals were reduced 70%-88% using the new method. SNR, MTF, and DQE measurements show a small decrease in collected signal and a small increase in noise. The forward bias hardware successfully reduced the radar artifact in the CBCT reconstruction of the pelvic and head phantoms by 48%-81%. Conclusions: Overall, the

  1. Accuracy of radiotherapy dose calculations based on cone-beam CT: comparison of deformable registration and image correction based methods

    Science.gov (United States)

    Marchant, T. E.; Joshi, K. D.; Moore, C. J.

    2018-03-01

    Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT, and direct correction of CBCT image values are two methods proposed to allow heterogeneity corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT and several relevant dose metrics for target and OAR volumes were calculated. Accuracy of CBCT based dose metrics was determined using an ‘override ratio’ method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, use of deformed CT leads to slightly larger standard deviation of dose metric error than shading corrected CBCT with more dose metric errors greater than 2% observed (7% versus 1%).
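
    A hedged sketch of the 'override ratio' bookkeeping described above; all names are illustrative:

    ```python
    def override_ratio_error(metric_cbct, metric_cbct_bulk,
                             metric_ct, metric_ct_bulk):
        """Assume metric / metric-on-bulk-density-image is constant per
        patient; the deviation of the ratio of ratios from 1 is then the
        CBCT-based dose metric error relative to the planning-CT standard."""
        return (metric_cbct / metric_cbct_bulk) / (metric_ct / metric_ct_bulk) - 1.0
    ```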

  2. Shunt resistance and saturation current determination in CdTe and CIGS solar cells. Part 2: application to experimental IV measurements and comparison with other methods

    Science.gov (United States)

    Rangel-Kuoppa, Victor-Tapio; Albor-Aguilera, María-de-Lourdes; Hérnandez-Vásquez, César; Flores-Márquez, José-Manuel; Jiménez-Olarte, Daniel; Sastré-Hernández, Jorge; González-Trujillo, Miguel-Ángel; Contreras-Puente, Gerardo-Silverio

    2018-04-01

    In this Part 2 of this series of articles, the procedure proposed in Part 1, namely a new parameter extraction technique for the shunt resistance (R_sh) and saturation current (I_sat) of a current-voltage (I-V) measurement of a solar cell within the one-diode model, is applied to CdS-CdTe and CIGS-CdS solar cells. First, the Cheung method is used to obtain the series resistance (R_s) and the ideality factor n. Afterwards, procedures A and B proposed in Part 1 are used to obtain R_sh and I_sat. The procedure is compared with two other commonly used procedures. Better accuracy is obtained for the I-V curves simulated with the parameters extracted by our method. Also, the integral percentage errors of the simulated I-V curves using the method proposed in this study are one order of magnitude smaller than the integral percentage errors using the other two methods.
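
    For orientation, the one-diode model the record works within, plus a textbook short-circuit shortcut for R_sh; this is not the paper's procedures A/B, and the variable names are illustrative:

    ```python
    import numpy as np

    def one_diode_current(v, i, i_ph, i_sat, n, r_s, r_sh, v_t=0.02585):
        """Implicit one-diode model (thermal voltage v_t ~ 25.85 mV at 300 K):
        I = I_ph - I_sat*(exp((V + I*R_s)/(n*V_t)) - 1) - (V + I*R_s)/R_sh."""
        return i_ph - i_sat * np.expm1((v + i * r_s) / (n * v_t)) \
                   - (v + i * r_s) / r_sh

    def estimate_r_sh(v, i, k=5):
        """Near short circuit the I-V slope is dominated by the shunt path,
        so R_sh is roughly -1/slope over the first k points."""
        slope = np.polyfit(v[:k], i[:k], 1)[0]
        return -1.0 / slope
    ```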

  3. Investigation of Chemical Exchange at Intermediate Exchange Rates using a Combination of Chemical Exchange Saturation Transfer (CEST) and Spin-Locking methods (CESTrho)

    Science.gov (United States)

    Kogan, Feliks; Singh, Anup; Cai, Keija; Haris, Mohammad; Hariharan, Hari; Reddy, Ravinder

    2011-01-01

    Proton exchange imaging is important as it allows for visualization and quantification of the distribution of specific metabolites with conventional MRI. Current exchange mediated MRI methods suffer from poor contrast as well as confounding factors that influence exchange rates. In this study we developed a new method to measure proton exchange which combines chemical exchange saturation transfer (CEST) and T1ρ magnetization preparation methods (CESTrho). We demonstrated that this new CESTrho sequence can detect proton exchange in the slow to intermediate exchange regimes. It has a linear dependence on proton concentration which allows it to be used to quantitatively measure changes in metabolite concentration. Additionally, the magnetization scheme of this new method can be customized to make it insensitive to changes in exchange rate while retaining its dependency on solute concentration. Finally, we showed the feasibility of using CESTrho in vivo. This sequence is able to detect proton exchange at intermediate exchange rates and is unaffected by the confounding factors that influence proton exchange rates thus making it ideal for the measurement of metabolites with exchangeable protons in this exchange regime. PMID:22009759

  4. Investigation of chemical exchange at intermediate exchange rates using a combination of chemical exchange saturation transfer (CEST) and spin-locking methods (CESTrho).

    Science.gov (United States)

    Kogan, Feliks; Singh, Anup; Cai, Keija; Haris, Mohammad; Hariharan, Hari; Reddy, Ravinder

    2012-07-01

    Proton exchange imaging is important as it allows for visualization and quantification of the distribution of specific metabolites with conventional MRI. Current exchange mediated MRI methods suffer from poor contrast as well as confounding factors that influence exchange rates. In this study we developed a new method to measure proton exchange which combines chemical exchange saturation transfer and T1ρ magnetization preparation methods (CESTrho). We demonstrated that this new CESTrho sequence can detect proton exchange in the slow to intermediate exchange regimes. It has a linear dependence on proton concentration which allows it to be used to quantitatively measure changes in metabolite concentration. Additionally, the magnetization scheme of this new method can be customized to make it insensitive to changes in exchange rate while retaining its dependency on solute concentration. Finally, we showed the feasibility of using CESTrho in vivo. This sequence is able to detect proton exchange at intermediate exchange rates and is unaffected by the confounding factors that influence proton exchange rates thus making it ideal for the measurement of metabolites with exchangeable protons in this exchange regime. Copyright © 2011 Wiley Periodicals, Inc.

  5. Long-term results of forearm lengthening and deformity correction by the Ilizarov method.

    Science.gov (United States)

    Orzechowski, Wiktor; Morasiewicz, Leszek; Krawczyk, Artur; Dragan, Szymon; Czapiński, Jacek

    2002-06-30

    Background. Shortening and deformity of the forearm is most frequently caused by congenital disorders or posttraumatic injury. Given its complex anatomy and biomechanics, the forearm is clearly the most difficult segment for lengthening and deformity correction. Material and methods. We analyzed 16 patients with shortening and deformity of the forearm, treated surgically using the Ilizarov method in our Department from 1989 to 2001. In 9 cases one-stage surgery was sufficient, while the remaining 7 patients underwent 2-5 stages of treatment. A total of 31 surgical operations were performed. The extent of forearm shortening ranged from 1.5 to 14.5 cm (5-70%). We developed a new fixator based on Schanz half-pins. Results. The length of forearm lengthening per operative stage averaged 2.35 cm. The proportion of lengthening ranged from 6% to 48%, with an average of 18.3%. The mean lengthening index was 48.15 days/cm. The per-patient rate of complications was 88%, compared with 45% per stage of treatment, mostly limited rotational mobility and abnormal consolidation of regenerated bone. Conclusions. Despite the high complication rate, the Ilizarov method is the method of choice for patients with forearm shortenings and deformities. Treatment is particularly indicated in patients with shortening caused by disproportionate lengths of the forearm bones (ulna and radius). Treatment should be managed so as to cause the least possible damage to arm function, even at the cost of limited lengthening. Our new stabilizer based on Schanz half-pins makes it possible to preserve forearm rotation.

  6. Examination of attenuation correction method for cerebral blood Flow SPECT Using MR imaging

    International Nuclear Information System (INIS)

    Mizuno, Takashi; Takahashi, Masaaki

    2009-01-01

    The authors developed software for attenuation correction (AC) using MR imaging (MRAC) (Toshiba Medical Systems Engineering), based on the idea that the precision of AC could be improved by using the head contour in MRI T2-weighted images (T2WI) obtained before 123I-iofetamine (IMP) single photon emission computed tomography (SPECT) for cerebral blood flow (CBF) measurement. In the present study, this MRAC was retrospectively evaluated by comparison with the previous standard AC methods derived from transmission CT (TCT) and X-ray CT, which overcome the threshold-selection problem of the sinogram-based Chang method but still have cost and patient-exposure issues. MRAC was essentially performed in the Toshiba GMS5500/PI processor, where 3D registration was conducted between SPECT and MRI images of the same patient. The gamma camera for 123I-IMP SPECT and 99mTcO4- TCT was a Toshiba 3-detector GCA9300A equipped with the above processor for MRAC and with a low-energy high-resolution (LEHR) fan-beam collimator. The MRI and CT machines were a Siemens-Asahi Meditech MAGNETOM Symphony 1.5T and a SOMATOM plus4, respectively. MRAC was examined in 8 patients with images of T1WI, TCT and SPECT, and in 18 with T2WI, CT and SPECT. Evaluation was made by comparison of the attenuation coefficients (μ) from the 4 methods. As a result, the present MRAC was found to be closer to AC by TCT and CT than the Chang method since MRAC, owing to exact imaging of the head contour, was independent of radiation count, and was thought to be useful for improving the precision of CBF SPECT. (K.T.)

  7. Calibration of EBT2 film by the PDD method with scanner non-uniformity correction.

    Science.gov (United States)

    Chang, Liyun; Chui, Chen-Shou; Ding, Hueisch-Jy; Hwang, Ing-Ming; Ho, Sheng-Yow

    2012-09-21

    The EBT2 film together with a flatbed scanner is a convenient dosimetry QA tool for verification of clinical radiotherapy treatments. However, it suffers from a relatively high degree of uncertainty and a tedious film calibration process for every new lot of films, including cutting the films into several small pieces, exposing them to different doses, restoring them, and selecting the proper region of interest (ROI) for each piece for curve fitting. In this work, we present a percentage depth dose (PDD) method that can accurately calibrate the EBT2 film together with the scanner non-uniformity correction and provide an easy way to perform film dosimetry. All films were scanned before and after the irradiation in one of the two homemade 2 mm thick acrylic frames (one portrait and the other landscape), which was located at a fixed position on the scan bed of an Epson 10 000XL scanner. After the pre-irradiated scan, the film was placed parallel to the beam central axis and sandwiched between six polystyrene plates (5 cm thick each), followed by irradiation with a 20 × 20 cm² 6 MV photon beam. Two different beam-on times were used on two different films to deliver a dose to the film ranging from 32 to 320 cGy. After the post-irradiated scan, the net optical densities for a total of 235 points on the beam central axis on the films were auto-extracted and compared with the corresponding depth doses that were calculated through the measurement of a 0.6 cc Farmer chamber and the related PDD table to perform the curve fitting. The portrait film location was selected for routine calibration, since the central beam axis on the film is parallel to the scanning direction, where non-uniformity correction is not needed (Ferreira et al 2009 Phys. Med. Biol. 54 1073-85). To perform the scanner non-uniformity calibration, the cross-beam profiles of the film were analysed by referencing the measured profiles from a Profiler™. Finally, to verify our method, the films were

  8. Calibration of EBT2 film by the PDD method with scanner non-uniformity correction

    International Nuclear Information System (INIS)

    Chang Liyun; Ding, Hueisch-Jy; Chui, Chen-Shou; Hwang, Ing-Ming; Ho, Sheng-Yow

    2012-01-01

    The EBT2 film together with a flatbed scanner is a convenient dosimetry QA tool for verification of clinical radiotherapy treatments. However, it suffers from a relatively high degree of uncertainty and a tedious film calibration process for every new lot of films, including cutting the films into several small pieces, exposing them to different doses, restoring them, and selecting the proper region of interest (ROI) for each piece for curve fitting. In this work, we present a percentage depth dose (PDD) method that can accurately calibrate the EBT2 film together with the scanner non-uniformity correction and provide an easy way to perform film dosimetry. All films were scanned before and after the irradiation in one of the two homemade 2 mm thick acrylic frames (one portrait and the other landscape), which was located at a fixed position on the scan bed of an Epson 10 000XL scanner. After the pre-irradiated scan, the film was placed parallel to the beam central axis and sandwiched between six polystyrene plates (5 cm thick each), followed by irradiation with a 20 × 20 cm² 6 MV photon beam. Two different beam-on times were used on two different films to deliver a dose to the film ranging from 32 to 320 cGy. After the post-irradiated scan, the net optical densities for a total of 235 points on the beam central axis on the films were auto-extracted and compared with the corresponding depth doses that were calculated through the measurement of a 0.6 cc Farmer chamber and the related PDD table to perform the curve fitting. The portrait film location was selected for routine calibration, since the central beam axis on the film is parallel to the scanning direction, where non-uniformity correction is not needed (Ferreira et al 2009 Phys. Med. Biol. 54 1073–85). To perform the scanner non-uniformity calibration, the cross-beam profiles of the film were analysed by referencing the measured profiles from a Profiler™. Finally, to verify our method, the films were

  9. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    Science.gov (United States)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  10. Corrections in the gold foil activation method for determination of neutron beam density

    DEFF Research Database (Denmark)

    Als-Nielsen, Jens Aage

    1967-01-01

    A finite foil thickness and deviation of the cross section from the 1/v law imply corrections in the determination of neutron beam densities by means of foil activation. These corrections, which depend on the neutron velocity distribution, have been examined in general and are given in a specific...

  11. Determination of saturation functions and wettability for chalk based on measured fluid saturations

    Energy Technology Data Exchange (ETDEWEB)

    Olsen, D.; Bech, N.; Moeller Nielsen, C.

    1998-08-01

    The end effect of displacement experiments on low-permeability porous media is used for determination of relative permeability functions and capillary pressure functions. Saturation functions for a drainage process are determined from a primary drainage experiment. A reversal of the flooding direction creates an intrinsic imbibition process in the sample, which enables determination of imbibition saturation functions. The saturation functions are determined by a parameter estimation technique. Scanning effects are modelled by the method of Killough. Saturation profiles are determined by NMR. (au)

  12. Effect of inter-crystal scatter on estimation methods for random coincidences and subsequent correction

    International Nuclear Information System (INIS)

    Torres-Espallardo, I; Spanoudaki, V; Ziegler, S I; Rafecas, M; McElroy, D P

    2008-01-01

    Random coincidences can contribute substantially to the background in positron emission tomography (PET). Several estimation methods are being used for correcting them. The goal of this study was to investigate the validity of techniques for random coincidence estimation, with various low-energy thresholds (LETs). Simulated singles list-mode data of the MADPET-II small animal PET scanner were used as input. The simulations have been performed using the GATE simulation toolkit. Several sources with different geometries have been employed. We evaluated the number of random events using three methods: delayed window (DW), singles rate (SR) and time histogram fitting (TH). Since the GATE simulations allow random and true coincidences to be distinguished, a comparison between the number of random coincidences estimated using the standard methods and the number obtained using GATE was performed. An overestimation in the number of random events was observed using the DW and SR methods. This overestimation decreases for LETs higher than 255 keV. It is additionally reduced when the single events which have undergone a Compton interaction in crystals before being detected are removed from the data. These two observations lead us to infer that the overestimation is due to inter-crystal scatter. The effect of this mismatch in the reconstructed images is important for quantification because it leads to an underestimation of activity. This was shown using a hot-cold-background source with 3.7 MBq total activity in the background region and a 1.59 MBq total activity in the hot region. For both 200 keV and 400 keV LET, an overestimation of random coincidences for the DW and SR methods was observed, resulting in approximately 1.5% or more (at 200 keV LET: 1.7% for DW and 7% for SR) and less than 1% (at 400 keV LET: both methods) underestimation of activity within the background region. In almost all cases, images obtained by compensating for random events in the reconstruction

  13. A new method for evaluation and correction of thermal reactor power and present operational applications

    International Nuclear Information System (INIS)

    Langenstein, M.; Streit, S.; Laipple, B.; Eitschberger, H.

    2005-01-01

    The determination of the thermal reactor power is traditionally done by heat balance: 1) for a boiling water reactor (BWR), at the interface of the reactor control volume and the heat cycle; 2) for a pressurised-water reactor (PWR), at the interface of the steam generator control volume and the turbine island on the secondary side. The uncertainty of these traditional methods is not easy to determine and can be in the range of several percent. Technical and legal regulations (e.g. 10CFR50) cover an estimated instrumentation error of up to 2% by increasing the design thermal reactor power for emergency analysis to 102% of the licensed thermal reactor power. Basically, the licensee has the duty to warrant at any time operation inside the analyzed region for thermal reactor power. This is normally done by keeping the indicated reactor power at the licensed 100% value. The better way is to use a method which allows a continuous warranty evaluation. The quantification of the level of fulfilment of this warranty is only achievable by a method which: 1) is independent of single measurement accuracies; 2) results in a certified quality of single process values and of the total heat cycle analysis; 3) leads to complete results including 2-sigma deviations, especially for thermal reactor power. Here this method, which is called 'process data reconciliation based on the VDI 2048 guideline', is presented [1, 2]. This method allows the determination of the true process parameters with a statistical probability of 95%, by considering closed material, mass and energy balances following the Gaussian correction principle. The amount of redundant process information and the complexity of the process improve the final results. This represents the most probable state of the process with minimized uncertainty according to VDI 2048. Hence, calibration and control of the thermal reactor power are possible with low effort but high accuracy and independent of single measurement accuracies. Furthermore, VDI 2048
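
    The Gaussian correction principle behind such reconciliation, sketched for linear balance constraints (a simplification of the VDI 2048 machinery; names are illustrative):

    ```python
    import numpy as np

    def reconcile(measured, cov, A):
        """Minimize (x - m)^T cov^{-1} (x - m) subject to the mass/energy
        balances A x = 0; returns corrected values and reduced covariance."""
        m = np.asarray(measured, float)
        gain = cov @ A.T @ np.linalg.inv(A @ cov @ A.T)
        x_hat = m - gain @ (A @ m)
        cov_hat = cov - gain @ A @ cov
        return x_hat, cov_hat
    ```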

  14. Characterizing the marker-dye correction for Gafchromic(®) EBT2 film: a comparison of three analysis methods.

    Science.gov (United States)

    McCaw, Travis J; Micka, John A; Dewerd, Larry A

    2011-10-01

    Gafchromic(®) EBT2 film has a yellow marker dye incorporated into the active layer of the film that can be used to correct the film response for small variations in thickness. This work characterizes the effect of the marker-dye correction on the uniformity and uncertainty of dose measurements with EBT2 film. The effect of variations in time postexposure on the uniformity of EBT2 is also investigated. EBT2 films were used to measure the flatness of a (60)Co field to provide a high-spatial resolution evaluation of the film uniformity. As a reference, the flatness of the (60)Co field was also measured with Kodak EDR2 films. The EBT2 films were digitized with a flatbed document scanner 24, 48, and 72 h postexposure, and the images were analyzed using three methods: (1) the manufacturer-recommended marker-dye correction, (2) an in-house marker-dye correction, and (3) a net optical density (OD) measurement in the red color channel. The field flatness was calculated from orthogonal profiles through the center of the field using each analysis method, and the results were compared with the EDR2 measurements. Uncertainty was propagated through a dose calculation for each analysis method. The change in the measured field flatness for increasing times postexposure was also determined. Both marker-dye correction methods improved the field flatness measured with EBT2 film relative to the net OD method, with a maximum improvement of 1% using the manufacturer-recommended correction. However, the manufacturer-recommended correction also resulted in a dose uncertainty an order of magnitude greater than the other two methods. The in-house marker-dye correction lowered the dose uncertainty relative to the net OD method. The measured field flatness did not exhibit any unidirectional change with increasing time postexposure and showed a maximum change of 0.3%. The marker dye in EBT2 can be used to improve the response uniformity of the film. Depending on the film analysis method used

  15. Saturated Switching Systems

    CERN Document Server

    Benzaouia, Abdellah

    2012-01-01

    Saturated Switching Systems treats the problem of actuator saturation, inherent in all dynamical systems by using two approaches: positive invariance in which the controller is designed to work within a region of non-saturating linear behaviour; and saturation technique which allows saturation but guarantees asymptotic stability. The results obtained are extended from the linear systems in which they were first developed to switching systems with uncertainties, 2D switching systems, switching systems with Markovian jumping and switching systems of the Takagi-Sugeno type. The text represents a thoroughly referenced distillation of results obtained in this field during the last decade. The selected tool for analysis and design of stabilizing controllers is based on multiple Lyapunov functions and linear matrix inequalities. All the results are illustrated with numerical examples and figures many of them being modelled using MATLAB®. Saturated Switching Systems will be of interest to academic researchers in con...

  16. [Posttraumatic torsional deformities of the forearm : Methods of measurement and decision guidelines for correction].

    Science.gov (United States)

    Blossey, R D; Krettek, C; Liodakis, E

    2018-03-01

    Forearm fractures are common in all age groups. Even if the adjacent joints are not directly involved, these fractures have an intra-articular character. One of the most common complications of these injuries is a painful limitation of the range of motion and especially of pronation and supination. This is often due to an underdiagnosed torsional deformity; however, in recent years new methods have been developed to make these torsional differences visible and quantifiable through the use of sectional imaging. The principle of measurement corresponds to that of the torsion measurement of the lower limbs. Computed tomography (CT) or magnetic resonance imaging (MRI) scans are created at defined heights. By searching for certain landmarks, torsional angles are measured in relation to a defined reference line. A new alternative is the use of 3D reformation models. The presence of a torsional deformity, especially of the radius, leads to an impairment of the pronation and supination of the forearm. In the presence of torsional deformities, radiological measurements can help to decide whether an operation is needed. Unlike the lower limbs, there are still no uniform cut-off values as to when a correction is indicated. Decisions must be made together with the patient by taking the clinical and radiological results into account.

  17. Method for the depth corrected detection of ionizing events from a co-planar grids sensor

    Science.gov (United States)

    De Geronimo, Gianluigi [Syosset, NY; Bolotnikov, Aleksey E [South Setauket, NY; Carini, Gabriella [Port Jefferson, NY

    2009-05-12

    A method for the detection of ionizing events utilizing a co-planar grids sensor comprising a semiconductor substrate, cathode electrode, collecting grid and non-collecting grid. The semiconductor substrate is sensitive to ionizing radiation. A voltage less than 0 Volts is applied to the cathode electrode. A voltage greater than the voltage applied to the cathode is applied to the non-collecting grid. A voltage greater than the voltage applied to the non-collecting grid is applied to the collecting grid. The signals from the collecting grid and the non-collecting grid are summed and subtracted, creating a sum and a difference, respectively. The difference is divided by the sum, creating a ratio. A gain coefficient is determined for each depth (the distance between the ionizing event and the collecting grid), whereby the difference between the collecting electrode and the non-collecting electrode multiplied by the corresponding gain coefficient is the depth-corrected energy of an ionizing event. The energy of each ionizing event is therefore the difference between the collecting grid and the non-collecting grid multiplied by the corresponding gain coefficient. The depth of the ionizing event can also be determined from the ratio.
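
    The signal chain of the claim, condensed into a sketch; `gain_of_ratio` stands for the depth-calibration lookup the patent assumes has been measured beforehand:

    ```python
    def depth_corrected_energy(collecting, noncollecting, gain_of_ratio):
        # The (difference / sum) ratio encodes the interaction depth; the
        # depth-dependent gain then corrects the difference signal.
        diff = collecting - noncollecting
        ratio = diff / (collecting + noncollecting)
        return gain_of_ratio(ratio) * diff
    ```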

  18. Reliability Analysis of a Composite Wind Turbine Blade Section Using the Model Correction Factor Method: Numerical Study and Validation

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimirov; Friis-Hansen, Peter; Berggreen, Christian

    2013-01-01

    by the composite failure criteria. Each failure mode has been considered in a separate component reliability analysis, followed by a system analysis which gives the total probability of failure of the structure. The Model Correction Factor method used in connection with FORM (First-Order Reliability Method) proved...

  19. Statistical signal processing for gamma spectrometry: application for a pileup correction method

    International Nuclear Information System (INIS)

    Trigano, T.

    2005-12-01

    The main objective of gamma spectrometry is to characterize the radioactive elements of an unknown source by studying the energy of the emitted photons. When a photon interacts with a detector, its energy is converted into an electrical pulse. The histogram obtained by collecting the energies can be used to identify radioactive elements and measure their activity. However, at high counting rates, perturbations which are due to the stochastic aspect of the temporal signal can cripple the identification of the radioactive elements. More specifically, since the detector has a finite resolution, close arrival times of photons, which can be modeled as a homogeneous Poisson process, cause pile-ups of individual pulses. This phenomenon distorts energy spectra by introducing multiple fake spikes and artificially prolonging the Compton continuum, which can mask spikes of low intensity. The objective of this thesis is to correct the distortion caused by the pile-up phenomenon in energy spectra. Since the shape of photonic pulses depends on many physical parameters, we consider this problem in a nonparametric framework. By introducing an adapted model based on two marked point processes, we establish a nonlinear relation between the probability measure associated with the observations and the probability density function we wish to estimate. This relation is derived both for continuous- and discrete-time signals, and therefore can be used on a large set of detectors and from an analog or digital point of view. It also provides a framework for this problem, which can be considered as a problem of nonlinear density deconvolution and nonparametric density estimation from indirect measurements. Using these considerations, we propose an estimator obtained by direct inversion. We show that this estimator is consistent and almost achieves the usual rate of convergence obtained in classical nonparametric density estimation in the L² sense. We have applied our method to a set of

  20. SU-F-T-584: Investigating Correction Methods for Ion Recombination Effects in OCTAVIUS 1000 SRS Measurements

    International Nuclear Information System (INIS)

    Knill, C; Snyder, M; Rakowski, J; J, Burmeister; Zhuang, L; Matuszak, M

    2016-01-01

    Purpose: PTW’s Octavius 1000 SRS array performs IMRT QA measurements with liquid-filled ionization chambers (LICs). Collection efficiencies of LICs have been shown to change during IMRT delivery as a function of LINAC pulse frequency and pulse dose, which affects QA results. In this study, two methods were developed to correct changes in collection efficiencies during IMRT QA measurements, and the effects of these corrections on QA pass rates were compared. Methods: For the first correction, Matlab software was developed that calculates pulse frequency and pulse dose for each detector, using measurement and DICOM RT Plan files. Pulse information is converted to collection efficiency and measurements are corrected by multiplying detector dose by ratios of calibration to measured collection efficiencies. For the second correction, the MU/min in daily 1000 SRS calibration was chosen to match the average MU/min of the VMAT plan. The usefulness of the derived corrections was evaluated using 6MV and 10FFF SBRT RapidArc plans delivered to the OCTAVIUS 4D system using a TrueBeam equipped with an HD-MLC. Effects of the two corrections on QA results were examined by performing 3D gamma analysis comparing predicted to measured dose, with and without corrections. Results: After complex Matlab corrections, average 3D gamma pass rates improved by [0.07%,0.40%,1.17%] for 6MV and [0.29%,1.40%,4.57%] for 10FFF using [3%/3mm,2%/2mm,1%/1mm] criteria. Maximum changes in gamma pass rates were [0.43%,1.63%,3.05%] for 6MV and [1.00%,4.80%,11.2%] for 10FFF using [3%/3mm,2%/2mm,1%/1mm] criteria. On average, pass rates with the simple daily calibration corrections were within 1% of the complex Matlab corrections. Conclusion: Ion recombination effects can potentially be clinically significant for OCTAVIUS 1000 SRS measurements, especially for higher pulse dose unflattened beams when using tighter gamma tolerances. Matching daily 1000 SRS calibration MU/min to average planned MU/min is a simple correction that
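
    The first correction reduces to a per-detector rescaling, sketched here with illustrative names:

    ```python
    def correct_lic_dose(dose, eff_calibration, eff_delivery):
        """Scale each detector reading by the ratio of the collection
        efficiency at calibration to the efficiency inferred from the
        delivery's pulse frequency and pulse dose."""
        return dose * eff_calibration / eff_delivery
    ```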

  1. [A new method to orthodontically correct dental occlusal plane canting: wave-shaped arch].

    Science.gov (United States)

    Zheng, X; Hu, X X; Ma, N; Chen, X H

    2017-02-18

    ; after treatment the angles were from -0.17° to 2.57° with a median of 1.87°, the decrease of the angles between AOP and BBP after treatment ranged from 1.08° to 4.15° with a median of 2.21°. Paired Wilcoxon test P was 0.000. The wave-shaped arch can be used independently or in combination with other treatment methods, which can take advantage of left and right interactive anchorage to correct AOPC effectively, so it has certain application value in clinical practice.

  2. N3LO corrections to jet production in deep inelastic scattering using the Projection-to-Born method

    Science.gov (United States)

    Currie, J.; Gehrmann, T.; Glover, E. W. N.; Huss, A.; Niehues, J.; Vogt, A.

    2018-05-01

    Computations of higher-order QCD corrections for processes with exclusive final states require a subtraction method for real-radiation contributions. We present the first-ever generalisation of a subtraction method for third-order (N3LO) QCD corrections. The Projection-to-Born method is used to combine inclusive N3LO coefficient functions with an exclusive second-order (NNLO) calculation for a final state with an extra jet. The input requirements, advantages, and potential applications of the method are discussed, and validations at lower orders are performed. As a test case, we compute the N3LO corrections to kinematical distributions and production rates for single-jet production in deep inelastic scattering in the laboratory frame, and compare them with data from the ZEUS experiment at HERA. The corrections are small in the central rapidity region, where they stabilize the predictions to sub-per-cent level. The corrections increase substantially towards forward rapidity where large logarithmic effects are expected, thereby yielding an improved description of the data in this region.

  3. Estimating Chlorophyll Fluorescence Parameters Using the Joint Fraunhofer Line Depth and Laser-Induced Saturation Pulse (FLD-LISP) Method in Different Plant Species

    Directory of Open Access Journals (Sweden)

    Parinaz Rahimzadeh-Bajgiran

    2017-06-01

    Full Text Available A comprehensive evaluation of the recently developed Fraunhofer line depth (FLD) and laser-induced saturation pulse (FLD-LISP) method was conducted to measure chlorophyll fluorescence (ChlF) parameters of the quantum yield of photosystem II (ΦPSII), non-photochemical quenching (NPQ), and the photosystem II-based electron transport rate (ETR) in three plant species including paprika (C3 plant), maize (C4 plant), and pachira (C3 plant). First, the relationships between photosynthetic photon flux density (PPFD) and ChlF parameters retrieved using FLD-LISP and the pulse amplitude-modulated (PAM) methods were analyzed for all three species. Then the relationships between ChlF parameters measured using FLD-LISP and PAM were evaluated for the plants in different growth stages of leaves from mature to aging conditions. The relationships of ChlF parameters/PPFD were similar in both FLD-LISP and PAM methods in all plant species. ΦPSII showed a linear relationship with PPFD in all three species whereas NPQ was found to be linearly related to PPFD in paprika and maize, but not for pachira. The ETR/PPFD relationship was nonlinear with increasing values observed for PPFDs lower than about 800 μmol m−2 s−1 for paprika, lower than about 1200 μmol m−2 s−1 for maize, and lower than about 800 μmol m−2 s−1 for pachira. The ΦPSII, NPQ, and ETR of both the FLD-LISP and PAM methods were very well correlated (R² = 0.89, RMSE = 0.05; R² = 0.86, RMSE = 0.44; and R² = 0.88, RMSE = 24.69, respectively) for all plants. Therefore, the FLD-LISP method can be recommended as a robust technique for the estimation of ChlF parameters.
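
    For reference, the PSII-based ETR is commonly computed from ΦPSII and PPFD with assumed absorptance and photosystem-partitioning constants; the 0.84 and 0.5 below are the usual convention, not values stated in the record:

    ```python
    def electron_transport_rate(phi_psii, ppfd, absorptance=0.84, psii_fraction=0.5):
        """ETR = PhiPSII * PPFD * leaf absorptance * fraction of light to PSII,
        in umol electrons m^-2 s^-1 when PPFD is in umol photons m^-2 s^-1."""
        return phi_psii * ppfd * absorptance * psii_fraction
    ```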

  4. Dissipative dynamics with the corrected propagator method. Numerical comparison between fully quantum and mixed quantum/classical simulations

    International Nuclear Information System (INIS)

    Gelman, David; Schwartz, Steven D.

    2010-01-01

    The recently developed quantum-classical method has been applied to the study of dissipative dynamics in multidimensional systems. The method is designed to treat many-body systems consisting of a low dimensional quantum part coupled to a classical bath. Assuming the approximate zeroth order evolution rule, the corrections to the quantum propagator are defined in terms of the total Hamiltonian and the zeroth order propagator. Then the corrections are taken to the classical limit by introducing the frozen Gaussian approximation for the bath degrees of freedom. The evolution of the primary part is governed by the corrected propagator yielding the exact quantum dynamics. The method has been tested on two model systems coupled to a harmonic bath: (i) an anharmonic (Morse) oscillator and (ii) a double-well potential. The simulations have been performed at zero temperature. The results have been compared to the exact quantum simulations using the surrogate Hamiltonian approach.

  5. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data.

    Science.gov (United States)

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-06-15

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution is aggregated to daily to match in-situ observations for the period 2003-2010. Study objectives are to assess bias of the satellite estimates, to identify optimum window size for application of bias correction and to test effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW's) of 3, 5, 7, 9, …, 31 days with the aim to assess error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed existence of bias in the CMORPH rainfall. It is found that the 7 days SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach.
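
    A sketch of the sequential-window bias factor (with the 7-day window the study found best); the array names and the guard against division by zero are illustrative:

    ```python
    import numpy as np

    def sw_bias_factors(gauge, cmorph, window=7):
        """Ratio of gauge to CMORPH rainfall accumulated over non-overlapping
        `window`-day blocks; corrected rain = factor * CMORPH estimate."""
        n = (len(gauge) // window) * window
        g = np.asarray(gauge[:n], float).reshape(-1, window).sum(axis=1)
        s = np.asarray(cmorph[:n], float).reshape(-1, window).sum(axis=1)
        return g / np.maximum(s, 1e-6)
    ```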

  6. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data

    Directory of Open Access Journals (Sweden)

    Haris Akram Bhatti

    2016-06-01

    Full Text Available With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration’s (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution is aggregated to daily to match in-situ observations for the period 2003–2010. Study objectives are to assess bias of the satellite estimates, to identify optimum window size for application of bias correction and to test effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW’s) of 3, 5, 7, 9, …, 31 days with the aim to assess error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed existence of bias in the CMORPH rainfall. It is found that the 7 days SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach.

  7. Method and system for automatically correcting aberrations of a beam of charged particles

    International Nuclear Information System (INIS)

    1975-01-01

    The location of a beam of charged particles within a deflection field is determined by its orthogonal deflection voltages. Based on the beam's location in the field, correction currents are supplied to a focus coil and to each of a pair of stigmator coils to correct for the change of focal length and astigmatism caused by the beam being deflected away from the center of its deflection field.

  8. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data

    Science.gov (United States)

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-01-01

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining traction in the field of hydrology, particularly in rainfall-runoff modeling. Since such estimates are affected by errors, correction is required. In this study, we tested the high-resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km / 30 min resolution are aggregated to daily totals to match in-situ observations for the period 2003-2010. Study objectives are to assess the bias of the satellite estimates, to identify the optimum window size for application of bias correction, and to test the effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SWs) of 3, 5, 7, 9, …, 31 days, with the aim of assessing the error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by the Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station-based bias factors are spatially interpolated to yield a bias factor map. Reliability of the interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to yield bias-corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed the existence of bias in the CMORPH rainfall. The 7-day SW approach was found to perform best for bias correction of CMORPH rainfall. The outcome of this study demonstrates the efficiency of our bias correction approach. PMID:27314363
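
    The sequential-window bias factor described above reduces to a ratio of accumulated gauge rainfall to accumulated CMORPH rainfall per window. A minimal sketch of that computation and of applying the interpolated factor map (function and variable names are assumptions, not the authors' code):

      import numpy as np

      def sw_bias_factors(gauge, cmorph, window=7):
          # One multiplicative bias factor per non-overlapping 7-day block,
          # as the ratio of accumulated gauge to accumulated CMORPH rainfall.
          n = len(gauge) // window
          factors = np.ones(n)
          for i in range(n):
              s = slice(i * window, (i + 1) * window)
              sat_sum = cmorph[s].sum()
              if sat_sum > 0:
                  factors[i] = gauge[s].sum() / sat_sum
          return factors

      # Application, per the abstract: interpolate station factors to a
      # bias-factor map, then
      #   corrected_image = cmorph_image * bias_factor_map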

  9. Modified Ponseti method of treatment for correction of neglected clubfoot in older children and adolescents--a preliminary report.

    Science.gov (United States)

    Bashi, Ramin Haj Zargar; Baghdadi, Taghi; Shirazi, Mehdi Ramezan; Abdi, Reza; Aslani, Hossein

    2016-03-01

    Congenital talipes equinovarus may be the most common congenital orthopedic condition requiring treatment. Nonoperative treatment comprising various methods is generally accepted as the first step in deformity correction. Ignacio Ponseti introduced his nonsurgical approach to the treatment of clubfoot in the early 1940s. The method is reportedly successful in treating clubfoot in patients up to 9 years of age. However, whether age at the beginning of treatment affects the rate of effective correction and relapse is unknown. We applied the Ponseti method successfully, with some modifications, in 11 patients with a mean age of 11.2 years (range, 6 to 19 years) with neglected and untreated clubfeet. The mean follow-up was 15 months (12 to 36 months). Correction was achieved with a mean of nine casts (six to 13). Clinically, 17 out of 18 feet (94.4%) were considered to achieve a good result with no need for further surgery. This method of treatment is simple to apply and inexpensive in developing countries with limited financial and social resources for health services. To the best of the authors' knowledge, such a modified method has not previously been reported in the literature for the correction of neglected clubfoot in older children and adolescents.

  10. Software Design of Mobile Antenna for Auto Satellite Tracking Using Modem Correction and Elevation Azimuth Method

    Directory of Open Access Journals (Sweden)

    Djamhari Sirat

    2010-10-01

    Full Text Available Pointing accuracy is critical in satellite communication. Because the satellite is so far from the earth's surface, a pointing error of even 1 degree prevents the antenna from sending data to the satellite. To overcome this, an auto-tracking satellite controller was built. The system uses a microcontroller as the controller, GPS to indicate the antenna's location, a digital compass for the initial antenna pointing direction, rotary encoders as azimuth and elevation sensors, and a modem to read the Eb/No signal. The microcontroller reads all inputs over serial links, so the programming focuses on UART serial communication. The controller tracks satellites in two stages. The first stage is the elevation-azimuth method: from the GPS reading, the digital compass, and the satellite position (coordinates and altitude) stored in the microcontroller, the controller calculates the elevation and azimuth angles and moves the antenna accordingly. The second stage is modem correction: the controller uses only the modem as input and adjusts the antenna to maximize the Eb/No value. In operation, the controller improved the input level from -81.7 dB to -30.2 dB, with a final Eb/No value reaching 5.7 dB.
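
    The elevation-azimuth stage is the textbook look-angle computation toward a geostationary satellite. A self-contained sketch assuming a spherical earth (the names and the spherical-earth simplification are assumptions):

      import numpy as np

      def look_angles(sta_lat_deg, sta_lon_deg, sat_lon_deg):
          # Earth-station and satellite positions in ECEF coordinates.
          re, rgeo = 6378.0, 42164.0              # km, earth / GEO radii
          lat, lon = np.radians([sta_lat_deg, sta_lon_deg])
          sta = re * np.array([np.cos(lat) * np.cos(lon),
                               np.cos(lat) * np.sin(lon),
                               np.sin(lat)])
          slon = np.radians(sat_lon_deg)
          sat = rgeo * np.array([np.cos(slon), np.sin(slon), 0.0])
          d = sat - sta
          # Local east/north/up basis at the station.
          east = np.array([-np.sin(lon), np.cos(lon), 0.0])
          north = np.array([-np.sin(lat) * np.cos(lon),
                            -np.sin(lat) * np.sin(lon),
                            np.cos(lat)])
          up = sta / re
          az = np.degrees(np.arctan2(d @ east, d @ north)) % 360.0
          el = np.degrees(np.arcsin(d @ up / np.linalg.norm(d)))
          return az, el

      # Example: a station at 45 N, 0 E looking at a satellite at 0 E
      # yields azimuth 180 and elevation of roughly 38 degrees.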

  11. Intelligent error correction method applied on an active pixel sensor based star tracker

    Science.gov (United States)

    Schmidt, Uwe

    2005-10-01

    Star trackers are opto-electronic sensors used on board satellites for autonomous inertial attitude determination. In recent years star trackers have become more and more important among the attitude and orbit control system (AOCS) sensors. High-performance star trackers are to date based on charge coupled device (CCD) optical camera heads. The active pixel sensor (APS) technology, introduced in the early 1990s, now allows the beneficial replacement of CCD detectors by APS detectors with respect to performance, reliability, power, mass and cost. The company's heritage in star tracker design started in the early 1980s with the launch of the worldwide first fully autonomous star tracker system, ASTRO1, to the Russian MIR space station. Jena-Optronik recently developed an active pixel sensor based autonomous star tracker, "ASTRO APS", as successor to the CCD-based star tracker product series ASTRO1, ASTRO5, ASTRO10 and ASTRO15. Key features of the APS detector technology are true xy-address random access, multiple-window read-out, and on-chip signal processing including analogue-to-digital conversion. These features can be used for robust star tracking at high slew rates and under adverse conditions such as stray light and solar-flare-induced single event upsets. A special algorithm has been developed to manage the typical APS detector error contributors such as fixed pattern noise (FPN), dark signal non-uniformity (DSNU) and white spots. The algorithm works fully autonomously and adapts to, e.g., increasing DSNU and newly appearing white spots automatically, without ground maintenance or re-calibration. In contrast to conventional correction methods, the described algorithm does not need calibration data memory such as full-image-sized calibration data sets. The application of the presented algorithm managing the typical APS detector error contributors is a key element for the design of star trackers for long term satellite applications like
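
    The record does not disclose the algorithm's internals, so the following is only a generic illustration of calibration-free handling of the named error contributors (a running dark/FPN estimate plus white-spot masking), not Jena-Optronik's method:

      import numpy as np

      def correct_aps_frame(frame, dark_est, alpha=0.01, k=6.0):
          # Subtract the current per-pixel FPN/DSNU estimate.
          corrected = frame - dark_est
          # Flag white-spot outliers well above the residual noise floor.
          sigma = corrected.std()
          spots = corrected > k * sigma
          corrected[spots] = np.median(corrected)
          # Slowly adapt the dark estimate where no star/spot signal is
          # present, so DSNU drift and new white spots are tracked
          # autonomously, without stored calibration frames.
          quiet = ~spots
          dark_est[quiet] += alpha * (frame[quiet] - dark_est[quiet])
          return corrected, dark_est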

  12. Gluon saturation in a saturated environment

    International Nuclear Information System (INIS)

    Kopeliovich, B. Z.; Potashnikova, I. K.; Schmidt, Ivan

    2011-01-01

    A bootstrap equation for self-quenched gluon shadowing leads to a reduced magnitude of broadening for partons propagating through a nucleus. Saturation of small-x gluons in a nucleus, which takes the form of transverse momentum broadening of projectile gluons in pA collisions in the nuclear rest frame, leads to a modification of the parton distribution functions in the beam compared with pp collisions. In nucleus-nucleus collisions all participating nucleons acquire enhanced gluon density at small x, which further boosts the saturation scale. Solution of the reciprocity equations for central collisions of two heavy nuclei demonstrates a significant, up to severalfold, enhancement of Q_sA^2 in AA compared with pA collisions.

  13. Validation of phenol red versus gravimetric method for water reabsorption correction and study of gender differences in Doluisio's absorption technique.

    Science.gov (United States)

    Tuğcu-Demiröz, Fatmanur; Gonzalez-Alvarez, Isabel; Gonzalez-Alvarez, Marta; Bermejo, Marival

    2014-10-01

    The aim of the present study was to develop a method for measuring water flux reabsorption in Doluisio's perfusion technique, based on the use of phenol red as a non-absorbable marker, and to validate it by comparison with the gravimetric procedure. The compounds selected for the study were metoprolol, atenolol, cimetidine and cefadroxil, in order to include low, intermediate and high permeability drugs absorbed by passive diffusion and by carrier-mediated mechanisms. The intestinal permeabilities (Peff) of the drugs were obtained in male and female Wistar rats and calculated using both methods of water flux correction. The absorption rate coefficients of the assayed compounds did not show statistically significant differences between male and female rats, so all individual values were combined to compare the reabsorption methods. The absorption rate coefficients and permeability values did not show statistically significant differences between the two strategies of concentration correction. The apparent zero-order water absorption coefficients were also similar in both correction procedures. In conclusion, the gravimetric and phenol red methods for water reabsorption correction are accurate and interchangeable for permeability estimation in the closed-loop perfusion method. Copyright © 2014 Elsevier B.V. All rights reserved.
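
    The phenol red correction rests on conservation of the non-absorbable marker: the luminal volume at time t follows from the marker concentration ratio, and drug concentrations are rescaled so that only absorption changes them. A minimal sketch under that assumption (names are hypothetical):

      import numpy as np

      def ka_with_phenol_red(t, c_drug, c_pr):
          # Marker conservation gives V_t = V0 * c_pr[0] / c_pr[t], so the
          # drug concentration referred back to the initial volume is
          #   c_corr = c_drug * V_t / V0 = c_drug * c_pr[0] / c_pr.
          t, c_drug, c_pr = map(np.asarray, (t, c_drug, c_pr))
          c_corr = c_drug * c_pr[0] / c_pr
          # First-order absorption: ln(c_corr) falls linearly, slope -ka.
          slope, _ = np.polyfit(t, np.log(c_corr), 1)
          return -slope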

  14. Implementing a generic method for bias correction in statistical models using random effects, with spatial and population dynamics examples

    DEFF Research Database (Denmark)

    Thorson, James T.; Kristensen, Kasper

    2016-01-01

    Statistical models play an important role in fisheries science when reconciling ecological theory with available data for wild populations or experimental studies. Ecological models increasingly include both fixed and random effects, and are often estimated using maximum likelihood techniques...... configurations of an age-structured population dynamics model. This simulation experiment shows that the epsilon-method and the existing bias-correction method perform equally well in data-rich contexts, but the epsilon-method is slightly less biased in data-poor contexts. We then apply the epsilon......-method to a spatial regression model when estimating an index of population abundance, and compare results with an alternative bias-correction algorithm that involves Markov-chain Monte Carlo sampling. This example shows that the epsilon-method leads to a biologically significant difference in estimates of average...
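
    The bias that such corrections target is easy to see in miniature: a nonlinear function of a random effect evaluated at the effect's point estimate is not the expectation of that function. A numeric illustration of this retransformation bias (not the epsilon-method itself):

      import numpy as np

      # For u ~ Normal(mu, sigma^2), E[exp(u)] = exp(mu + sigma^2/2), which
      # exceeds the plug-in value exp(E[u]) = exp(mu).
      rng = np.random.default_rng(0)
      mu, sigma = 1.0, 0.8
      u = rng.normal(mu, sigma, size=1_000_000)
      print(np.exp(u).mean())           # ~3.74, the true expectation
      print(np.exp(mu))                 # ~2.72, plug-in estimate (biased low)
      print(np.exp(mu + sigma**2 / 2))  # analytic bias-corrected value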

  15. 4SM: A Novel Self-Calibrated Algebraic Ratio Method for Satellite-Derived Bathymetry and Water Column Correction.

    Science.gov (United States)

    Morel, Yann G; Favoretto, Fabio

    2017-07-21

    All empirical water column correction methods have consistently been reported to require existing depth sounding data for the purpose of calibrating a simple depth retrieval model, and they yield poor results over very bright or very dark bottoms. In contrast, we set out to use only the relative radiance data in the image, along with published data and several new assumptions, in order to specify and operate the simplified radiative transfer equation (RTE), for the purpose of retrieving both the satellite derived bathymetry (SDB) and the water column corrected spectral reflectance over shallow seabeds. Sea truth regressions show that SDB depths retrieved by the method only need tide correction. It is therefore demonstrated that, under these new assumptions, there is no need for (i) formal atmospheric correction; (ii) conversion of relative radiance into calibrated reflectance; or (iii) existing depth sounding data, to specify the simplified RTE and produce both SDB and spectral water column corrected radiance ready for bottom typing. Moreover, the use of the panchromatic band for that purpose is introduced. Altogether, we named this process the Self-Calibrated Supervised Spectral Shallow-sea Modeler (4SM). This approach requires a trained practitioner, though, to produce its results within hours of downloading the raw image. The ideal raw image should be a "near-nadir" view, exhibit a homogeneous atmosphere and water column, include some coverage of optically deep waters and bare land, and lend itself to quality removal of haze, atmospheric adjacency effect, and sun/sky glint.

  16. Automated 3-D method for the correction of axial artifacts in spectral-domain optical coherence tomography images

    Science.gov (United States)

    Antony, Bhavna; Abràmoff, Michael D.; Tang, Li; Ramdas, Wishal D.; Vingerling, Johannes R.; Jansonius, Nomdo M.; Lee, Kyungmoo; Kwon, Young H.; Sonka, Milan; Garvin, Mona K.

    2011-01-01

    The 3-D spectral-domain optical coherence tomography (SD-OCT) images of the retina often do not reflect the true shape of the retina and are distorted differently along the x and y axes. In this paper, we propose a novel technique that uses thin-plate splines in two stages to estimate and correct the distinct axial artifacts in SD-OCT images. The method was quantitatively validated using nine pairs of OCT scans obtained with orthogonal fast-scanning axes, where a segmented surface was compared after both datasets had been corrected. The mean unsigned difference computed between the locations of this artifact-corrected surface after the single-spline and dual-spline correction was 23.36 ± 4.04 μm and 5.94 ± 1.09 μm, respectively, and showed a significant difference (p < 0.001 from two-tailed paired t-test). The method was also validated using depth maps constructed from stereo fundus photographs of the optic nerve head, which were compared to the flattened top surface from the OCT datasets. Significant differences (p < 0.001) were noted between the artifact-corrected datasets and the original datasets, where the mean unsigned differences computed over 30 optic-nerve-head-centered scans (in normalized units) were 0.134 ± 0.035 and 0.302 ± 0.134, respectively. PMID:21833377
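
    A single-stage version of the spline-based flattening can be sketched with SciPy's thin-plate-spline interpolator; the paper's method applies two stages with its own parameters, so everything below, including the smoothing value, is an assumption:

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def axial_flattening_offsets(xy, z, smoothing=1e3):
          # xy: (N, 2) A-scan locations; z: (N,) segmented surface depths.
          # Fit a smooth thin-plate spline to the segmented surface; the
          # deviation of the fit from a flat plane gives the per-column
          # axial shift to subtract from the volume.
          tps = RBFInterpolator(xy, z, kernel="thin_plate_spline",
                                smoothing=smoothing)
          fitted = tps(xy)
          return fitted - fitted.mean()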

  17. Effect of attenuation by the cranium on quantitative SPECT measurements of cerebral blood flow and a correction method

    International Nuclear Information System (INIS)

    Iwase, Mikio; Kurono, Kenji; Iida, Akihiko.

    1998-01-01

    Attenuation correction for cerebral blood flow SPECT image reconstruction is usually performed by considering the head as a whole to be equivalent to water, and the effects of differences in attenuation between subjects produced by the cranium have not been taken into account. We determined the differences in attenuation between subjects and assessed a method for correcting quantitative cerebral blood flow values. Attenuation by the head on the right and left sides was measured before intravenous injection of 123I-IMP, and water-converted diameters of both sides (Ta) were calculated from the measurements obtained. After acquiring SPECT images, attenuation correction was conducted according to the method of Sorenson, and images were reconstructed. The diameters of the right and left sides in the same position as Ta (Tt) were calculated from the contours determined by threshold values. Using Ts given by 2Ts = Ta - Tt, the correction factor λ = exp(μ1 Ts) was calculated and applied as a multiplicative factor when rCBF was determined. The results revealed significant differences between Tt and Ta. Although no gender differences were observed in Tt, they were seen in both Ta and Ts. Thus, interindividual differences in attenuation by the cranium were found to have an influence that cannot be ignored. Inter-subject correction is needed to obtain accurate quantitative values. (author)
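
    The correction itself is a one-line formula once Ta and Tt are known; a sketch, with the linear attenuation coefficient μ1 an assumed placeholder value:

      import math

      def cranium_correction_factor(ta_cm, tt_cm, mu1=0.15):
          # Per the record's relation: 2*Ts = Ta - Tt, lambda = exp(mu1*Ts).
          # mu1 (1/cm) is an assumed value, not taken from the paper.
          ts = (ta_cm - tt_cm) / 2.0
          return math.exp(mu1 * ts)

      # rCBF_corrected = cranium_correction_factor(ta, tt) * rCBF_measured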

  18. METHODS FOR CORRECTION OF RHINOPHONIA IN PATIENTS WITH ACQUIRED MAXILLARY DEFECTS

    Directory of Open Access Journals (Sweden)

    E. G. Matyakin

    2012-01-01

    Full Text Available Speech recovery sessions were conducted in 63 patients with acquired maxillary defects. Assessment of speech quality in patients after maxillary resection without a prosthesis indicated significant rhinolalia and indistinct articulation in 100% of cases. Prosthetic replacement of the defect completely corrects the speech dysfunction and creates conditions for forming correct speech stereotypes. Speech therapy sessions and testing are aimed at increasing the performance of the speech apparatus and at improving the automatization of speaking skills. The techniques to remove nasal emission include: articulation exercises (activation of the muscles of the lips, cheeks, tongue, pharynx, neck, and larynx); speech respiratory gymnastics; and phonopedic (vocal) exercises. Elements of rational psychotherapy are applied extensively during each session and include suggestion, emotional exposure to correct personality disorders, and pedagogical elements.

  19. Method and apparatus for producing a porosity log of a subsurface formation corrected for detector standoff

    International Nuclear Information System (INIS)

    Allen, L.S.; Leland, F.P.; Lyle, W.D. Jr.; Stromswold, D.C.

    1993-01-01

    A borehole logging tool with a pulsed source of fast neutrons is lowered into a borehole traversing a subsurface formation, and a neutron detector measures the die-away of nuclear radiation in the formation. A model of the die-away is produced using exponential terms varying as the sum of borehole, formation and thermal neutron background components. Exponentially weighted moments of both the die-away measurements and the model are determined and equated. The formation decay constant is determined from the formation and thermal neutron background components. An epithermal neutron lifetime is determined from the formation decay constant and is used with the amplitude ratio by a trained neural network to determine a lifetime correction. A standoff-corrected lifetime is determined from the epithermal neutron lifetime and the lifetime correction. (author)
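
    The record extracts lifetimes via exponentially weighted moments and a trained neural network; as a simpler stand-in, the same three-component die-away model can be fit by nonlinear least squares (the synthetic data and parameter values below are invented):

      import numpy as np
      from scipy.optimize import curve_fit

      def die_away(t, a_b, tau_b, a_f, tau_f, bkg):
          # Borehole and formation exponentials plus a constant
          # thermal-neutron background, as in the record's model.
          return a_b * np.exp(-t / tau_b) + a_f * np.exp(-t / tau_f) + bkg

      t = np.linspace(1, 400, 200)                 # microseconds
      counts = die_away(t, 900, 15.0, 400, 120.0, 5.0)
      counts += np.random.default_rng(1).normal(0, 3, t.size)

      p0 = (800, 10.0, 300, 100.0, 1.0)            # rough initial guess
      popt, _ = curve_fit(die_away, t, counts, p0=p0)
      tau_formation = popt[3]   # epithermal neutron lifetime estimate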

  20. A comparison of different experimental methods for general recombination correction for liquid ionization chambers

    DEFF Research Database (Denmark)

    Andersson, Jonas; Kaiser, Franz-Joachim; Gomez, Faustino

    2012-01-01

    Radiation dosimetry of highly modulated dose distributions requires a detector with a high spatial resolution. Liquid filled ionization chambers (LICs) have the potential to become a valuable tool for the characterization of such radiation fields. However, the effect of an increased recombination...... of the charge carriers, as compared to using air as the sensitive medium has to be corrected for. Due to the presence of initial recombination in LICs, the correction for general recombination losses is more complicated than for air-filled ionization chambers. In the present work, recently published...

  1. The strategy of spectral shifts and the sets of correct methods for calculating eigenvalues of general tridiagonal matrices

    International Nuclear Information System (INIS)

    Emel'yanenko, G.A.; Sek, I.E.

    1988-01-01

    Correct methods for calculating the eigenvalues of general tridiagonal matrices with real elements are obtained, together with criteria for singular tridiagonal matrices, necessary and sufficient conditions for tridiagonal matrix degeneracy, and recurrence processes with boundary conditions for calculating the minors of general upper and lower tridiagonal matrices. 6 refs

  2. Reliability Analysis of Offshore Jacket Structures with Wave Load on Deck using the Model Correction Factor Method

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Friis-Hansen, P.; Nielsen, J.S.

    2006-01-01

    failure/collapse of jacket type platforms with wave in deck loads using the so-called Model Correction Factor Method (MCFM). A simple representative model for the RSR measure is developed and used in the MCFM technique. A realistic example is evaluated and it is seen that it is possible to perform...

  3. Automatic NAA. Saturation activities

    International Nuclear Information System (INIS)

    Westphal, G.P.; Grass, F.; Kuhnert, M.

    2008-01-01

    A system for Automatic NAA is based on a list of specific saturation activities determined for one irradiation position at a given neutron flux and a single detector geometry. Originally compiled from measurements of standard reference materials, the list may be extended also by the calculation of saturation activities from k0 and Q0 factors, and f and α values of the irradiation position. A systematic improvement of the SRM approach is currently being performed by pseudo-cyclic activation analysis, to reduce counting errors. From these measurements, the list of saturation activities is recalculated in an automatic procedure. (author)
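
    Extending the list by calculation requires backing out the activity at saturation from a measured peak using the standard irradiation-decay-counting factors; a sketch of that timing correction (variable names are assumptions):

      import math

      def saturation_activity(counts, eff, gamma_abund, lam,
                              t_irr, t_decay, t_count):
          # Standard NAA timing factors:
          #   S = 1 - exp(-lam*t_irr)    growth toward saturation
          #   D = exp(-lam*t_decay)      decay before counting
          #   C = (1 - exp(-lam*t_count)) / (lam*t_count)  decay while counting
          S = 1.0 - math.exp(-lam * t_irr)
          D = math.exp(-lam * t_decay)
          C = (1.0 - math.exp(-lam * t_count)) / (lam * t_count)
          rate = counts / t_count            # net peak count rate
          return rate / (eff * gamma_abund * S * D * C)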

  4. Saturation at Low X and Nonlinear Evolution

    International Nuclear Information System (INIS)

    Stasto, A.M.

    2002-01-01

    In this talk the results of the analytical and numerical analysis of the nonlinear Balitsky-Kovchegov equation are presented. The characteristic BFKL diffusion into the infrared regime is suppressed by the generation of the saturation scale Q_s. We identify the scaling and linear regimes of the solution. We also study the impact of subleading corrections on the nonlinear evolution. (author)

  5. Evaluation of a method for correction of scatter radiation in thorax cone beam CT; Evaluation d'une methode de correction du rayonnement diffuse en tomographie du thorax avec faisceau conique

    Energy Technology Data Exchange (ETDEWEB)

    Rinkel, J.; Dinten, J.M. [CEA Grenoble (DTBS/STD), Lab. d' Electronique et de Technologie de l' Informatique, LETI, 38 (France); Esteve, F. [European Synchrotron Radiation Facility (ESRF), 38 - Grenoble (France)

    2004-07-01

    Purpose: Cone beam CT (CBCT) enables three-dimensional imaging with isotropic resolution. X-ray scatter estimation is a major challenge for quantitative CBCT imaging of the thorax: the scatter level is significantly higher on cone beam systems than on collimated fan beam systems. The effects of this scattered radiation are cupping artefacts, streaks, and quantification inaccuracies. The conventional beam-stop approach to scatter estimation can be used for CBCT but leads to a significant increase in dose and acquisition time. An original scatter management process requiring no supplementary acquisition has been developed at CEA-LETI. Methods and Materials: This Analytical Plus Indexing-based (API) method of scatter correction in CBCT is based on scatter calibration through offline acquisitions with beam stops on lucite plates, combined with an analytical transformation derived from physical equations. This approach has been applied with success in bone densitometry and mammography. To evaluate this method in CBCT, acquisitions from a thorax phantom with and without beam stops were performed. To compare the different scatter correction approaches, the Feldkamp algorithm was applied to raw data corrected for scatter by the API and beam-stop approaches. Results: The API method provides results in good agreement with the beam-stop array approach, suppressing the cupping artefact. The influence of the scatter correction method on noise in the reconstructed images was also evaluated. Conclusion: The results indicate that the API method is effective for quantitative CBCT imaging of the thorax. Compared to a beam-stop array method it needs a lower x-ray dose and shortens the acquisition time. (authors)

  6. Proton dose distribution measurements using a MOSFET detector with a simple dose-weighted correction method for LET effects.

    Science.gov (United States)

    Kohno, Ryosuke; Hotta, Kenji; Matsuura, Taeko; Matsubara, Kana; Nishioka, Shie; Nishio, Teiji; Kawashima, Mitsuhiko; Ogino, Takashi

    2011-04-04

    We experimentally evaluated the proton beam dose reproducibility, sensitivity, angular dependence and depth-dose relationships for a new Metal Oxide Semiconductor Field Effect Transistor (MOSFET) detector. The detector was fabricated with a thinner oxide layer and was operated at high-bias voltages. In order to accurately measure dose distributions, we developed a practical method for correcting the MOSFET response to proton beams. The detector was tested by examining lateral dose profiles formed by protons passing through an L-shaped bolus. The dose reproducibility, angular dependence and depth-dose response were evaluated using a 190 MeV proton beam. Depth-output curves produced using the MOSFET detectors were compared with results obtained using an ionization chamber (IC). Since accurate measurements of proton dose distribution require correction for LET effects, we developed a simple dose-weighted correction method. The correction factors were determined as a function of proton penetration depth, or residual range. The residual proton range at each measurement point was calculated using the pencil beam algorithm. Lateral measurements in a phantom were obtained for pristine and SOBP beams. The reproducibility of the MOSFET detector was within 2%, and the angular dependence was less than 9%. The detector exhibited a good response at the Bragg peak (0.74 relative to the IC detector). For dose distributions resulting from protons passing through an L-shaped bolus, the corrected MOSFET dose agreed well with the IC results. Absolute proton dosimetry can be performed using MOSFET detectors to a precision of about 3% (1 sigma). A thinner oxide layer improved the LET dependence in proton dosimetry. By employing correction methods for LET dependence, it is possible to measure absolute proton dose using MOSFET detectors.

  7. Proton dose distribution measurements using a MOSFET detector with a simple dose‐weighted correction method for LET effects

    Science.gov (United States)

    Hotta, Kenji; Matsuura, Taeko; Matsubara, Kana; Nishioka, Shie; Nishio, Teiji; Kawashima, Mitsuhiko; Ogino, Takashi

    2011-01-01

    We experimentally evaluated the proton beam dose reproducibility, sensitivity, angular dependence and depth-dose relationships for a new Metal Oxide Semiconductor Field Effect Transistor (MOSFET) detector. The detector was fabricated with a thinner oxide layer and was operated at high-bias voltages. In order to accurately measure dose distributions, we developed a practical method for correcting the MOSFET response to proton beams. The detector was tested by examining lateral dose profiles formed by protons passing through an L-shaped bolus. The dose reproducibility, angular dependence and depth-dose response were evaluated using a 190 MeV proton beam. Depth-output curves produced using the MOSFET detectors were compared with results obtained using an ionization chamber (IC). Since accurate measurements of proton dose distribution require correction for LET effects, we developed a simple dose-weighted correction method. The correction factors were determined as a function of proton penetration depth, or residual range. The residual proton range at each measurement point was calculated using the pencil beam algorithm. Lateral measurements in a phantom were obtained for pristine and SOBP beams. The reproducibility of the MOSFET detector was within 2%, and the angular dependence was less than 9%. The detector exhibited a good response at the Bragg peak (0.74 relative to the IC detector). For dose distributions resulting from protons passing through an L-shaped bolus, the corrected MOSFET dose agreed well with the IC results. Absolute proton dosimetry can be performed using MOSFET detectors to a precision of about 3% (1 sigma). A thinner oxide layer improved the LET dependence in proton dosimetry. By employing correction methods for LET dependence, it is possible to measure absolute proton dose using MOSFET detectors. PACS number: 87.56.-v
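
    The dose-weighted LET correction amounts to interpolating a measured factor along residual range and applying it to the raw reading; a sketch with invented calibration values:

      import numpy as np

      # Correction factors versus residual proton range; the values below
      # are invented placeholders, not the paper's calibration.
      residual_range_cm = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
      factor = np.array([1.35, 1.20, 1.10, 1.05, 1.01, 1.00])

      def corrected_dose(raw_mosfet_dose, point_residual_range_cm):
          # Residual range at the measurement point would come from the
          # pencil beam calculation, as the record describes.
          f = np.interp(point_residual_range_cm, residual_range_cm, factor)
          return raw_mosfet_dose * f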

  8. Analysis of an automated background correction method for cardiovascular MR phase contrast imaging in children and young adults

    Energy Technology Data Exchange (ETDEWEB)

    Rigsby, Cynthia K.; Hilpipre, Nicholas; Boylan, Emma E.; Popescu, Andrada R.; Deng, Jie [Ann and Robert H. Lurie Children' s Hospital of Chicago, Department of Medical Imaging, Chicago, IL (United States); McNeal, Gary R. [Siemens Medical Solutions USA Inc., Customer Solutions Group, Cardiovascular MR R and D, Chicago, IL (United States); Zhang, Gang [Ann and Robert H. Lurie Children' s Hospital of Chicago Research Center, Biostatistics Research Core, Chicago, IL (United States); Choi, Grace [Ann and Robert H. Lurie Children' s Hospital of Chicago, Department of Pediatrics, Chicago, IL (United States); Greiser, Andreas [Siemens AG Healthcare Sector, Erlangen (Germany)

    2014-03-15

    Phase contrast magnetic resonance imaging (MRI) is a powerful tool for evaluating vessel blood flow. Inherent errors in acquisition, such as phase offset, eddy currents and gradient field effects, can cause significant inaccuracies in flow parameters. These errors can be rectified with the use of background correction software. To evaluate the performance of an automated phase contrast MRI background phase correction method in children and young adults undergoing cardiac MR imaging. We conducted a retrospective review of patients undergoing routine clinical cardiac MRI including phase contrast MRI for flow quantification in the aorta (Ao) and main pulmonary artery (MPA). When phase contrast MRI of the right and left pulmonary arteries was also performed, these data were included. We excluded patients with known shunts and metallic implants causing visible MRI artifact and those with more than mild to moderate aortic or pulmonary stenosis. Phase contrast MRI of the Ao, mid MPA, proximal right pulmonary artery (RPA) and left pulmonary artery (LPA) using 2-D gradient echo Fast Low Angle SHot (FLASH) imaging was acquired during normal respiration with retrospective cardiac gating. Standard phase image reconstruction and the automatic spatially dependent background-phase-corrected reconstruction were performed on each phase contrast MRI dataset. Non-background-corrected and background-phase-corrected net flow, forward flow, regurgitant volume, regurgitant fraction, and vessel cardiac output were recorded for each vessel. We compared standard non-background-corrected and background-phase-corrected mean flow values for the Ao and MPA. The ratio of pulmonary to systemic blood flow (Qp:Qs) was calculated for the standard non-background and background-phase-corrected data and these values were compared to each other and for proximity to 1. In a subset of patients who also underwent phase contrast MRI of the MPA, RPA, and LPA a comparison was made between standard non-background-corrected

  9. Comparison between the Gauss' law method and the zero current method to calculate multi-species ionic diffusion in saturated uncharged porous materials

    DEFF Research Database (Denmark)

    Johannesson, Björn

    2010-01-01

    There exist, mainly, two different continuum approaches to calculating transient multi-species ionic diffusion. One of them is based on explicitly assuming a zero current in the diffusing mixture together with the introduction of a streaming electrical potential in the constitutive equations...... of the coupled set of equations, in favor of the staggering approach. A one-step truly implicit time stepping scheme is adopted, together with an implementation of a modified Newton-Raphson iteration scheme to search for equilibrium at each considered time step. Results from the zero current case...... difference between the two types of potentials, that is, the streaming electrical potential and the electrical field, is carefully examined. A novel numerical method based on the finite element approach is established for the zero current method case. The proposed numerical method uses the direct calculation
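
    The zero-current condition fixes the streaming-potential gradient directly from the Nernst-Planck fluxes; a sketch of that standard relation:

      import numpy as np

      F_RT = 96485.0 / (8.314 * 298.15)   # F/(R*T) at 25 C, in 1/V

      def zero_current_potential_gradient(z, D, c, dcdx):
          # Nernst-Planck flux: J_i = -D_i (dc_i/dx + z_i c_i F/RT dphi/dx).
          # Imposing zero current, sum_i z_i J_i = 0, gives
          #   dphi/dx = - sum(z_i D_i dc_i/dx)
          #             / ( (F/RT) * sum(z_i^2 D_i c_i) ).
          z, D, c, dcdx = map(np.asarray, (z, D, c, dcdx))
          num = np.sum(z * D * dcdx)
          den = F_RT * np.sum(z**2 * D * c)
          return -num / den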

  10. Best Practices for Controlling Tuberculosis-Training in Correctional Facilities: A Mixed Methods Evaluation

    Science.gov (United States)

    Murray, Ellen R.

    2016-01-01

    According to the literature, identifying and treating tuberculosis (TB) in correctional facilities have been problematic for the inmates and also for the communities into which inmates are released. The importance of training those who can identify this disease early into incarceration is vital to halt the transmission. Although some training has…

  11. Unilateral canine crossbite correction in adults using the Invisalign method: a case report.

    Science.gov (United States)

    Giancotti, Aldo; Mampieri, Gianluca

    2012-01-01

    The aim of this paper is to present and discuss the treatment of a unilateral canine crossbite using clear aligners (Invisalign). The possibility of combining partial fixed appliances with removable elastics to optimize the final outcome is also described. The advantage of the protected movement provided by the aligners in jumping the occlusion during crossbite correction is also highlighted.

  12. Saturation flow versus green time at two-stage signal controlled intersections

    Directory of Open Access Journals (Sweden)

    A. Boumediene

    2009-12-01

    Full Text Available Intersections are the key components of road networks and considerably affect capacity. As flow levels and experience have increased over the years, methods and means have been developed to cope with the growing demand for traffic at road junctions. Among the various traffic control devices and techniques developed to cope with conflicting movements, traffic signals create artificial gaps to accommodate the impeded traffic streams. The majority of parameters that govern signalised intersection control and operations, such as the degree of saturation, delays, queue lengths, the level of service, etc., are very sensitive to saturation flow. Therefore, it is essential to evaluate saturation flow reliably when setting traffic signals, to avoid unnecessary delays and conflicts. Generally, almost all guidelines support the constancy of saturation flow irrespective of green time duration. This paper presents the results of field studies carried out to compare the performance of signalised intersections at different green time durations. It was found that saturation flow decreased slightly with growing green time; the reduction corresponded to between 2 and 5 pcus/gh per second of green time. However, analysis of the discharge rate during successive 6-second intervals showed a substantial reduction of 10% to 13% in saturation flow levels after 36 seconds of green time compared to those in the 6-36 second range. No reduction in saturation flow levels was detected at sites where only green periods of 44 seconds or less were implemented.

  13. A Fixed-Pattern Noise Correction Method Based on Gray Value Compensation for TDI CMOS Image Sensor.

    Science.gov (United States)

    Liu, Zhenwang; Xu, Jiangtao; Wang, Xinlei; Nie, Kaiming; Jin, Weimin

    2015-09-16

    In order to eliminate the fixed-pattern noise (FPN) in the output image of a time-delay-integration CMOS image sensor (TDI-CIS), an FPN correction method based on gray value compensation is proposed. One hundred images are first captured under uniform illumination. Then, row FPN (RFPN) and column FPN (CFPN) are estimated from the row-mean vector and column-mean vector of all collected images, respectively. Finally, RFPN is corrected by adding the estimated RFPN gray value to the original gray values of pixels in the corresponding row, and CFPN is corrected by subtracting the estimated CFPN gray value from the original gray values of pixels in the corresponding column. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in an image captured under uniform illumination with the proposed method, the standard deviation of the row-mean vector decreases from 5.6798 to 0.4214 LSB, and the standard deviation of the column-mean vector decreases from 15.2080 to 13.4623 LSB. Both kinds of FPN in real images captured by the TDI-CIS are effectively eliminated with the proposed method.
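
    The estimation step is a few lines of array arithmetic; a sketch consistent with the description, with sign conventions chosen so that the stated add/subtract wording flattens the image:

      import numpy as np

      def estimate_fpn(frames):
          # frames: (K, H, W) stack captured under uniform illumination.
          mean_img = frames.mean(axis=0)
          row_mean = mean_img.mean(axis=1)      # row-mean vector
          col_mean = mean_img.mean(axis=0)      # column-mean vector
          rfpn = row_mean.mean() - row_mean     # to be ADDED per row
          cfpn = col_mean - col_mean.mean()     # to be SUBTRACTED per column
          return rfpn, cfpn

      def correct_fpn(img, rfpn, cfpn):
          # Add the row estimate and subtract the column estimate,
          # following the record's wording.
          return img + rfpn[:, None] - cfpn[None, :]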

  14. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples

    Science.gov (United States)

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-01

    Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the measurements of the spectra may be performed on different instruments and the difference between the instruments must be corrected. Most calibration transfer methods require standard samples to construct the transfer model, using the spectra of the samples measured on the two instruments, named the master and slave instrument, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. This fact makes the coefficients of the linear models constructed from the spectra measured on different instruments similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the samples can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical applications.

  15. Evaluation of metal artifacts in MVCT systems using a model based correction method

    Energy Technology Data Exchange (ETDEWEB)

    Paudel, M. R.; Mackenzie, M.; Fallone, B. G.; Rathee, S. [Department of Oncology, Medical Physics Division, University of Alberta, Edmonton, Alberta; Department of Medical Physics, Cross Cancer Institute, Edmonton, Alberta; Department of Physics, University of Alberta, Edmonton, Alberta (Canada)]

    2012-10-15

    Purpose: To evaluate the performance of a model based image reconstruction method in reducing metal artifacts in the megavoltage computed tomography (MVCT) images of a phantom representing bilateral hip prostheses and to compare with the filtered-backprojection (FBP) technique. Methods: An iterative maximum likelihood polychromatic algorithm for CT (IMPACT) is used with an additional model for the pair/triplet production process and the energy dependent response of the detectors. The beam spectra for an in-house bench-top and TomoTherapy™ MVCTs are modeled for use in IMPACT. The empirical energy dependent response of detectors is calculated using a constrained optimization technique that predicts the measured attenuation of the beam by various thicknesses (0-24 cm) of solid water slabs. A cylindrical (19.1 cm diameter) plexiglass phantom containing various cylindrical inserts of relative electron densities 0.295-1.695 positioned between two steel rods (2.7 cm diameter) is scanned in the bench-top MVCT that utilizes the bremsstrahlung radiation from a 6 MeV electron beam passed through 4 cm solid water on the Varian Clinac 2300C and in the imaging beam of the TomoTherapy™ MVCT. The FBP technique in bench-top MVCT reconstructs images from raw signal normalized to air scan and corrected for beam hardening using a uniform plexiglass cylinder (20 cm diameter). The IMPACT starts with a FBP reconstructed seed image and reconstructs the final image in 150 iterations. Results: In both MVCTs, FBP produces visible dark shading in the image connecting the steel rods. In the IMPACT reconstructed images this shading is nearly removed and the uniform background is restored. The average attenuation coefficients of the inserts and the background are very close to the corresponding values in the absence of the steel inserts. In the FBP images of the bench-top MVCT, the shading causes 4%-9.5% underestimation of electron density at the central inserts

  16. Bias Correction Methods Explain Much of the Variation Seen in Breast Cancer Risks of BRCA1/2 Mutation Carriers.

    Science.gov (United States)

    Vos, Janet R; Hsu, Li; Brohet, Richard M; Mourits, Marian J E; de Vries, Jakob; Malone, Kathleen E; Oosterwijk, Jan C; de Bock, Geertruida H

    2015-08-10

    Recommendations for treating patients who carry a BRCA1/2 mutation are mainly based on cumulative lifetime risks (CLTRs) of breast cancer determined from retrospective cohorts. These risks vary widely (27% to 88%), and it is important to understand why. We analyzed the effects of methods of risk estimation and bias correction and of population factors on CLTRs in this retrospective clinical cohort of BRCA1/2 carriers. The following methods to estimate the breast cancer risk of BRCA1/2 carriers were identified from the literature: Kaplan-Meier, frailty, and modified segregation analyses with bias correction consisting of including or excluding index patients combined with including or excluding first-degree relatives (FDRs), or different conditional likelihoods. These were applied to clinical data of BRCA1/2 families derived from our family cancer clinic, for whom a simulation was also performed to evaluate the methods. CLTRs and 95% CIs were estimated and compared with the reference CLTRs. CLTRs ranged from 35% to 83% for BRCA1 and 41% to 86% for BRCA2 carriers at age 70 years (width of 95% CIs: 10% to 35% and 13% to 46%, respectively). Relative bias varied from -38% to +16%. Bias correction with inclusion of index patients and untested FDRs gave the smallest bias: +2% (SD, 2%) in BRCA1 and +0.9% (SD, 3.6%) in BRCA2. Much of the variation in breast cancer CLTRs in retrospective clinical BRCA1/2 cohorts is due to the bias-correction method, whereas a smaller part is due to population differences. Kaplan-Meier analyses with bias correction that includes index patients and a proportion of untested FDRs provide suitable CLTRs for carriers counseled in the clinic. © 2015 by American Society of Clinical Oncology.

  17. Development of a method to determine the total C-14 content in saturated salt solutions; Entwicklung eines Verfahrens zur Bestimmung von C-14{sub gesamt} in gesaettigten Salzloesungen

    Energy Technology Data Exchange (ETDEWEB)

    Lucks, C.; Prautsch, C. [Bundesamt fuer Strahlenschutz, Berlin (Germany)

    2016-07-01

    The two-step method described here for the determination of the total carbon-14 content in saturated salt solutions is divided into the analysis of carbon-14 in the evaporable and the non-evaporable fraction. After driving off the inorganic carbon by acidification, the volatile carbon compounds and volatile decomposition products are carried, with rising temperature inside the sample vessel, in a mild stream of oxygen to a tube furnace equipped with a CuO catalyst, where the carbon compounds are oxidized to CO2 at a temperature of 800 °C. Water is condensed out with an intensive condenser and the released CO2 is absorbed in a wash bottle filled with sodium hydroxide. Similarly, an aliquot of the evaporation residue is placed in the first zone of the tube furnace during the second step of the analysis. After heating the catalyst in the second zone of the furnace to 800 °C, the residue is heated stepwise to 800 °C. In this way the non-volatile compounds are decomposed or oxidised in the oxygen stream and finally completely oxidized with the aid of the catalyst. The released CO2 is again absorbed in another wash bottle. The carbonate of each fraction is then precipitated separately as BaCO3. Finally, the precipitate is washed, dried, finely ground and covered with toluene scintillation cocktail for measurement in an LSC. The detection limit is about 0.2 Bq/l for a sample volume of 250 ml.

  18. Study protocol: the empirical investigation of methods to correct for measurement error in biobanks with dietary assessment

    Directory of Open Access Journals (Sweden)

    Masson Lindsey F

    2011-10-01

    Full Text Available Abstract Background The Public Population Project in Genomics (P3G) is an organisation that aims to promote collaboration between researchers in the field of population-based genomics. The main objectives of P3G are to encourage collaboration between researchers and biobankers, optimize study design, promote the harmonization of information use in biobanks, and facilitate transfer of knowledge between interested parties. The importance of calibration and harmonisation of methods for environmental exposure assessment, to allow pooling of data across studies in the evaluation of gene-environment interactions, has been recognised by P3G, which has set up a methodological group on calibration with the aims of: (1) reviewing the published methodological literature on measurement error correction methods, with their assumptions and methods of implementation; (2) reviewing the evidence available from published nutritional epidemiological studies that have used a calibration approach; (3) disseminating information in the form of a comparison chart on approaches to perform calibration studies and how to obtain correction factors, in order to support research groups collaborating within the P3G network that are unfamiliar with the methods employed; and (4) with application to the field of nutritional epidemiology, including gene-diet interactions, ultimately developing an inventory of the typical correction factors for various nutrients. Methods/Design Systematic review of (a) the methodological literature on methods to correct for measurement error in epidemiological studies; and (b) studies that have been designed primarily to investigate the association between diet and disease and have also corrected for measurement error in dietary intake. Discussion The conduct of a systematic review of the methodological literature on calibration will facilitate the evaluation of methods to correct for measurement error and the design of calibration studies for the prospective pooling of

  19. INTESTINAL DYSBIOSIS IN CHILDREN WITH FOOD ALLERGY: PATHOGENETIC ASPECTS AND MODERN CORRECTION METHODS

    Directory of Open Access Journals (Sweden)

    S.G. Makarova

    2008-01-01

    Full Text Available This background paper analyses the role of intestinal microflora in the formation of immunity and the importance of intestinal microflora abnormalities in the development of allergic diseases (primarily food allergies), as well as the mechanisms by which dysbiosis affects allergic processes in the child's body. The study discusses the mechanisms underlying the therapeutic and preventive effects of probiotics in childhood allergic diseases. The work also specifies modern approaches to correcting dysbiotic abnormalities in children with food allergies, reviews the options for diet and medication treatment of food allergy, and suggests a new algorithm of stepwise treatment targeting the correction of dysbiosis in this patient category. Key words: children, food allergy, dysbiosis, probiotics, prebiotics, diet therapy.

  20. Qualitative evaluation of Chang method of attenuation correction on heart SPECT by using custom made heart phantom

    International Nuclear Information System (INIS)

    Takavar, A.; Eftekhari, M.; Beiki, D.; Saghari, M.; Mostaghim, N.; Sohrabi, M.

    2003-01-01

    SPECT detects γ-rays emitted by the administered radiopharmaceutical within the patient's body. The γ-rays pass through different tissues before reaching the detectors and are attenuated. Attenuation can cause artifacts; therefore different methods are used to minimize attenuation effects. In our study the efficacy of the Chang method of attenuation correction was evaluated using a custom-made heart phantom. Because different tissues surround the heart, attenuation is not uniform; moreover, the activity distribution around the heart is also non-uniform. In the Chang method, however, the distribution of radioactivity and the attenuation due to surrounding tissue are assumed to be uniform. Our phantom is a piece of plastic producing a SPECT image similar to the left ventricle. A dual-head ADAC system was used in our study. Images were acquired over 180° (limited-angle) and 360° (full-rotation) arcs, and images with and without attenuation correction were compared. Our results indicate that the Chang attenuation correction method is not capable of completely eliminating attenuation artifacts, in particular the attenuation effects caused by the breast.
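
    The (first-order) Chang method builds a per-pixel correction map under the uniform-attenuation assumption the record questions; a minimal, unoptimized sketch (μ and all names are assumptions):

      import numpy as np

      def chang_correction_map(mask, mu=0.12, n_angles=64):
          # First-order Chang: per-pixel correction is the reciprocal of the
          # attenuation factor averaged over projection angles,
          #   C(x) = 1 / mean_theta[ exp(-mu * L(x, theta)) ],
          # with L the path length from the pixel to the boundary of the
          # (assumed uniform) attenuating object; mu is in 1/pixel.
          h, w = mask.shape
          corr = np.ones((h, w))
          angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
          for y in range(h):
              for x in range(w):
                  if not mask[y, x]:
                      continue
                  att = 0.0
                  for th in angles:
                      cy, cx, L = float(y), float(x), 0.0
                      dy, dx = np.sin(th), np.cos(th)
                      while (0 <= cy < h and 0 <= cx < w
                             and mask[int(cy), int(cx)]):
                          cy += dy
                          cx += dx
                          L += 1.0
                      att += np.exp(-mu * L)
                  corr[y, x] = n_angles / att
          return corr  # multiply the reconstructed slice by this map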

  1. Current estimate of functional vision in patients with bifocal pseudophakia after correction of residual defocus by different methods

    Directory of Open Access Journals (Sweden)

    Yuri V Takhtaev

    2016-03-01

    Full Text Available In this article we evaluated the influence of different surgical methods for the correction of residual ametropia on contrast sensitivity under different light conditions and on higher-order aberrations in patients with bifocal pseudophakia. The study included 45 eyes (30 patients) after cataract surgery, in which we studied the dependence between contrast sensitivity and aberration level before and after surgical correction of residual ametropia by three methods: LASIK, Sulcoflex IOL implantation, or IOL exchange. Contrast sensitivity was measured with an Optec 6500 and aberrations with a Pentacam «OCULUS». We processed the results using the Mann-Whitney U-test. This study shows the correlation between each method and the residual aberration level, and their influence on contrast sensitivity.

  2. Overview of Akatsuki data products: definition of data levels, method and accuracy of geometric correction

    Science.gov (United States)

    Ogohara, Kazunori; Takagi, Masahiro; Murakami, Shin-ya; Horinouchi, Takeshi; Yamada, Manabu; Kouyama, Toru; Hashimoto, George L.; Imamura, Takeshi; Yamamoto, Yukio; Kashimura, Hiroki; Hirata, Naru; Sato, Naoki; Yamazaki, Atsushi; Satoh, Takehiko; Iwagami, Naomoto; Taguchi, Makoto; Watanabe, Shigeto; Sato, Takao M.; Ohtsuki, Shoko; Fukuhara, Tetsuya; Futaguchi, Masahiko; Sakanoi, Takeshi; Kameda, Shingo; Sugiyama, Ko-ichiro; Ando, Hiroki; Lee, Yeon Joo; Nakamura, Masato; Suzuki, Makoto; Hirose, Chikako; Ishii, Nobuaki; Abe, Takumi

    2017-12-01

    We provide an overview of data products from observations by the Japanese Venus Climate Orbiter, Akatsuki, and describe the definition and content of each data-processing level. Levels 1 and 2 consist of non-calibrated and calibrated radiance (or brightness temperature), respectively, as well as geometry information (e.g., illumination angles). Level 3 data are global-grid data in the regular longitude-latitude coordinate system, produced from the contents of Level 2. Non-negligible errors in navigational data and instrumental alignment can result in serious errors in the geometry calculations. Such errors cause mismapping of the data and lead to inconsistencies between radiances and illumination angles, along with errors in cloud-motion vectors. Thus, we carefully correct the boresight pointing of each camera by fitting an ellipse to the observed Venusian limb to provide improved longitude-latitude maps for Level 3 products, if possible. The accuracy of the pointing correction is also estimated statistically by simulating observed limb distributions. The results show that our algorithm successfully corrects instrumental pointing and will enable a variety of studies on the Venusian atmosphere using Akatsuki data.

  3. Self-consistent EXAFS PDF Projection Method by Matched Correction of Fourier Filter Signal Distortion

    International Nuclear Information System (INIS)

    Lee, Jay Min; Yang, Dong-Seok

    2007-01-01

    An inverse-problem computation was performed to solve for the PDF (pair distribution function) from simulated EXAFS data based on FEFF calculations. For a realistic comparison with experimental data, we chose a model of the first sub-shell Mn-O pair showing the Jahn-Teller distortion in crystalline LaMnO3. To restore the Fourier-filtering signal distortion involved in the first sub-shell information isolated from higher-shell contents, the relevant distortion matching function was computed initially from the proximity model, and iteratively from the prior guess during consecutive regularization computations. Adaptive computation of the EXAFS background correction is an issue for algorithm development, but our preliminary test was performed under a simulated background correction that perfectly excludes the higher-shell interference. In our numerical results, the efficient convergence of the iterative solution indicates a self-consistent tendency, in that a true PDF solution is confirmed as the counterpart of the genuine chi-data, provided that a background correction function is iteratively solved using an extended algorithm of MEPP (Matched EXAFS PDF Projection) under development.

  4. Recipe for residual oil saturation determination

    Energy Technology Data Exchange (ETDEWEB)

    Guillory, A.J.; Kidwell, C.M.

    1979-01-01

    In 1978, Shell Oil Co., in conjunction with the US Department of Energy, conducted a residual oil saturation study in a deep, hot, high-pressure Gulf Coast reservoir. The work was conducted prior to initiation of a CO2 tertiary recovery pilot. Many problems had to be resolved prior to and during the residual oil saturation determination. The problems confronted are outlined such that the procedure can be used much like a cookbook in designing future studies in similar reservoirs. The primary discussion centers on the planning and results of a log-inject-log operation used as the prime method to determine the residual oil saturation. Several independent methods were used to calculate the residual oil saturation in the subject well in an interval between 12,910 ft (3935 m) and 12,920 ft (3938 m). In general, these numbers were in good agreement and indicated a residual oil saturation between 22% and 24%. 10 references.

  5. GafChromic EBT film dosimetry with flatbed CCD scanner: a novel background correction method and full dose uncertainty analysis.

    Science.gov (United States)

    Saur, Sigrun; Frengen, Jomar

    2008-07-01

    Film dosimetry using radiochromic EBT film in combination with a flatbed charge coupled device scanner is a useful method both for two-dimensional verification of intensity-modulated radiation treatment plans and for general quality assurance of treatment planning systems and linear accelerators. Unfortunately, the response over the scanner area is nonuniform, and when not corrected for, this results in a systematic error in the measured dose which is both dose and position dependent. In this study a novel method for background correction is presented. The method is based on the subtraction of a correction matrix, a matrix that is based on scans of films that are irradiated to nine dose levels in the range 0.08-2.93 Gy. Because the response of the film is dependent on the film's orientation with respect to the scanner, correction matrices for both landscape oriented and portrait oriented scans were made. In addition to the background correction method, a full dose uncertainty analysis of the film dosimetry procedure was performed. This analysis takes into account the fit uncertainty of the calibration curve, the variation in response for different film sheets, the nonuniformity after background correction, and the noise in the scanned films. The film analysis was performed for film pieces of size 16 x 16 cm, all with the same lot number, and all irradiations were done perpendicular onto the films. The results show that the 2-sigma dose uncertainty at 2 Gy is about 5% and 3.5% for landscape and portrait scans, respectively. The uncertainty gradually increases as the dose decreases, but at 1 Gy the 2-sigma dose uncertainty is still as good as 6% and 4% for landscape and portrait scans, respectively. The study shows that film dosimetry using GafChromic EBT film, an Epson Expression 1680 Professional scanner and a dedicated background correction technique gives precise and accurate results. For the purpose of dosimetric verification, the calculated dose distribution

  6. GafChromic EBT film dosimetry with flatbed CCD scanner: A novel background correction method and full dose uncertainty analysis

    International Nuclear Information System (INIS)

    Saur, Sigrun; Frengen, Jomar

    2008-01-01

    Film dosimetry using radiochromic EBT film in combination with a flatbed charge coupled device scanner is a useful method both for two-dimensional verification of intensity-modulated radiation treatment plans and for general quality assurance of treatment planning systems and linear accelerators. Unfortunately, the response over the scanner area is nonuniform, and when not corrected for, this results in a systematic error in the measured dose which is both dose and position dependent. In this study a novel method for background correction is presented. The method is based on the subtraction of a correction matrix, a matrix that is based on scans of films that are irradiated to nine dose levels in the range 0.08-2.93 Gy. Because the response of the film is dependent on the film's orientation with respect to the scanner, correction matrices for both landscape oriented and portrait oriented scans were made. In addition to the background correction method, a full dose uncertainty analysis of the film dosimetry procedure was performed. This analysis takes into account the fit uncertainty of the calibration curve, the variation in response for different film sheets, the nonuniformity after background correction, and the noise in the scanned films. The film analysis was performed for film pieces of size 16x16 cm, all with the same lot number, and all irradiations were done perpendicular onto the films. The results show that the 2-sigma dose uncertainty at 2 Gy is about 5% and 3.5% for landscape and portrait scans, respectively. The uncertainty gradually increases as the dose decreases, but at 1 Gy the 2-sigma dose uncertainty is still as good as 6% and 4% for landscape and portrait scans, respectively. The study shows that film dosimetry using GafChromic EBT film, an Epson Expression 1680 Professional scanner and a dedicated background correction technique gives precise and accurate results. For the purpose of dosimetric verification, the calculated dose distribution can

  7. Measurement correction method for force sensor used in dynamic pressure calibration based on artificial neural network optimized by genetic algorithm

    Science.gov (United States)

    Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing

    2017-12-01

    We present an optimization algorithm to obtain low-uncertainty dynamic pressure measurements from a force-transducer-based device. In this paper, the advantages and disadvantages of the methods that are commonly used to measure the propellant powder gas pressure, the applicable scope of dynamic pressure calibration devices, and the shortcomings of the traditional comparison calibration method based on the drop-weight device are first analysed in detail. Then, a dynamic calibration method for measuring pressure using a force sensor based on a drop-weight device is introduced. This method can effectively save time when many pressure sensors are calibrated simultaneously and extend the life of expensive reference sensors. However, the force sensor is installed between the drop-weight and the hammerhead by transition pieces through the connection mode of bolt fastening, which causes adverse effects such as additional pretightening and inertia forces. To mitigate these effects, the influence mechanisms of the pretightening force, the inertia force and other factors on the force measurement are theoretically analysed. Then a measurement correction method for the force measurement is proposed based on an artificial neural network optimized by a genetic algorithm. The training and testing data sets are obtained from calibration tests, and the selection criteria for the key parameters of the correction model are discussed. The evaluation results for the test data show that the correction model can effectively improve the force measurement accuracy of the force sensor. Compared with the traditional high-accuracy comparison calibration method, the percentage difference of the impact-force-based measurement is less than 0.6% and the relative uncertainty of the corrected force value is 1.95%, which can meet the requirements of engineering applications.
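
    The paper does not give its network topology or genetic encoding, so the following is only a minimal sketch of the general idea: a genetic algorithm searching over the hyperparameters of a small regression network that maps raw force-sensor readings to corrected values. The genome layout, population settings and scikit-learn usage are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    def random_genome():
        # genome: (hidden-layer-1 size, hidden-layer-2 size, log10 of L2 penalty)
        return [rng.integers(4, 64), rng.integers(4, 64), rng.uniform(-5, -1)]

    def fitness(genome, X, y):
        net = MLPRegressor(hidden_layer_sizes=(int(genome[0]), int(genome[1])),
                           alpha=10.0 ** genome[2], max_iter=2000, random_state=0)
        # negative cross-validated MSE: higher is better
        return cross_val_score(net, X, y, cv=3,
                               scoring="neg_mean_squared_error").mean()

    def evolve(X, y, pop_size=12, generations=10):
        pop = [random_genome() for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda g: fitness(g, X, y), reverse=True)
            elite = pop[: pop_size // 2]                  # selection
            children = []
            while len(children) < pop_size - len(elite):
                a, b = rng.choice(len(elite), 2, replace=False)
                child = [elite[a][k] if rng.random() < 0.5 else elite[b][k]
                         for k in range(3)]               # uniform crossover
                if rng.random() < 0.3:                    # mutation
                    k = rng.integers(3)
                    child[k] = random_genome()[k]
                children.append(child)
            pop = elite + children
        return max(pop, key=lambda g: fitness(g, X, y))
    ```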

  8. Experimental aspects of buoyancy correction in measuring reliable high-pressure excess adsorption isotherms using the gravimetric method

    Science.gov (United States)

    Nguyen, Huong Giang T.; Horn, Jarod C.; Thommes, Matthias; van Zee, Roger D.; Espinal, Laura

    2017-12-01

    Addressing reproducibility issues in adsorption measurements is critical to accelerating the path to discovery of new industrial adsorbents and to understanding adsorption processes. A National Institute of Standards and Technology Reference Material, RM 8852 (ammonium ZSM-5 zeolite), and two gravimetric instruments with asymmetric two-beam balances were used to measure high-pressure adsorption isotherms. This work demonstrates how common approaches to buoyancy correction, a key factor in obtaining the mass change due to surface excess gas uptake from the apparent mass change, can impact the adsorption isotherm data. Three different approaches to buoyancy correction were investigated and applied to the subcritical CO2 and supercritical N2 adsorption isotherms at 293 K. It was observed that measuring a collective volume for all balance components for the buoyancy correction (helium method) introduces an inherent bias in temperature partition when there is a temperature gradient (i.e. analysis temperature is not equal to instrument air bath temperature). We demonstrate that a blank subtraction is effective in mitigating the biases associated with temperature partitioning, instrument calibration, and the determined volumes of the balance components. In general, the manual and subtraction methods allow for better treatment of the temperature gradient during buoyancy correction. From the study, best practices specific to asymmetric two-beam balances and more general recommendations for measuring isotherms far from critical temperatures using gravimetric instruments are offered.
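
    The core of any buoyancy correction is adding back the weight of gas displaced by everything loading the sample side of the balance, minus the counterweight side. A minimal sketch, with hypothetical component volumes and an externally supplied gas density (e.g. from an equation of state):

    ```python
    def excess_uptake(delta_m, rho_gas, volumes):
        """Recover the surface-excess mass from the apparent balance reading.

        delta_m : apparent mass change (g) relative to the evacuated state
        rho_gas : gas density (g/cm^3) at the analysis temperature and pressure
        volumes : displaced volumes (cm^3), signed by which balance arm they
                  load, e.g. {"sample": +0.21, "bucket": +0.35,
                  "counterweight": -0.45}  (illustrative values)
        """
        buoyancy = rho_gas * sum(volumes.values())
        # buoyancy pulls the apparent reading down, so it is added back
        return delta_m + buoyancy
    ```

    The temperature-partition bias discussed in the paper enters through the volumes: if components sit at different temperatures, a single helium-determined collective volume assigns the wrong gas density to part of the load, which is what the blank-subtraction approach mitigates.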

  9. Experimental study on the location of energy windows for scatter correction by the TEW method in 201Tl imaging

    International Nuclear Information System (INIS)

    Kojima, Akihiro; Matsumoto, Masanori; Ohyama, Yoichi; Tomiguchi, Seiji; Kira, Mitsuko; Takahashi, Mutsumasa.

    1997-01-01

    To investigate the validity of scatter correction by the TEW method in 201Tl imaging, we performed an experimental study using a gamma camera with the capability to perform the TEW method and a plate source with a defect. Images were acquired with the triple energy window recommended by the gamma camera manufacturer. The energy spectra showed that backscattered photons were included within the lower sub-energy window and the main energy window, and that the spectral shapes in the upper half region of the photopeak (70 keV) were not changed greatly by the source shape or the thickness of the scattering materials. The scatter fractions calculated using the energy spectra, together with visual observation and the contrast values measured at the defect in planar images, also showed that substantial primary photons were included in the upper sub-energy window. In the TEW method, the two sub-energy windows are expected to be placed in parts of the energy region where the total counts consist mainly of scattered photons. Therefore, it is necessary to investigate the use of the upper sub-energy window in scatter correction by the TEW method in 201Tl imaging. (author)
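
    For reference, the standard triple-energy-window estimate approximates the scatter under the photopeak by a trapezoid spanned by the count densities in the two flanking sub-windows. A minimal sketch (the window widths and counts below are illustrative, not the paper's acquisition settings):

    ```python
    def tew_primary_counts(c_main, c_low, c_up, w_main, w_low, w_up):
        """Triple-energy-window scatter correction: scatter in the main
        window is estimated as the area of a trapezoid whose sides are the
        count densities (counts per keV) in the two narrow sub-windows."""
        scatter = (c_low / w_low + c_up / w_up) * w_main / 2.0
        return max(c_main - scatter, 0.0)

    # e.g. a 70 keV photopeak with a 14 keV main window and 3 keV sub-windows
    primary = tew_primary_counts(c_main=10500, c_low=900, c_up=120,
                                 w_main=14.0, w_low=3.0, w_up=3.0)
    ```

    The paper's point is that for 201Tl the upper sub-window also contains substantial primary photons, so c_up overestimates the scatter density on the high-energy side.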

  10. A novel baseline correction method using convex optimization framework in laser-induced breakdown spectroscopy quantitative analysis

    Science.gov (United States)

    Yi, Cancan; Lv, Yong; Xiao, Han; Ke, Ke; Yu, Xun

    2017-12-01

    For the laser-induced breakdown spectroscopy (LIBS) quantitative analysis technique, baseline correction is an essential part of LIBS data preprocessing. As in many widely occurring cases, baseline drift is generated by fluctuations of the laser energy, inhomogeneity of sample surfaces and background noise, which has aroused the interest of many researchers. Most of the prevalent algorithms need to preset some key parameters, such as a suitable spline function and the fitting order, and thus lack adaptability. Based on the characteristics of LIBS, namely the sparsity of spectral peaks and the low-pass-filtered character of the baseline, a novel baseline correction and spectral data denoising method is studied in this paper. The technique uses a convex optimization scheme to form a non-parametric baseline correction model. Meanwhile, an asymmetric penalty function is used to enhance the signal-to-noise ratio (SNR) of the LIBS signal and improve reconstruction precision. Furthermore, an efficient iterative algorithm is applied to the optimization process, so as to ensure the convergence of the algorithm. To validate the proposed method, the concentrations of chromium (Cr), manganese (Mn) and nickel (Ni) contained in 23 certified high-alloy steel samples are assessed using quantitative models with partial least squares (PLS) and support vector machines (SVM). Because it requires no prior knowledge of the sample composition and no mathematical hypothesis, the method proposed in this paper has better accuracy in quantitative analysis than other methods, and fully reflects its adaptive ability.
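
    The paper's exact convex program is not reproduced here, but the classic asymmetric least-squares (AsLS) baseline of Eilers illustrates the same two ingredients the abstract names: a smoothness (low-pass) penalty on the baseline and an asymmetric penalty that lets sparse emission peaks sit far above it. The parameter values are typical defaults, not the authors':

    ```python
    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import spsolve

    def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
        """Asymmetric least-squares baseline estimate. lam controls
        smoothness; p << 0.5 makes points above the baseline (peaks)
        almost free while points below are strongly penalized."""
        n = len(y)
        D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(n, n - 2))
        w = np.ones(n)
        for _ in range(n_iter):
            W = sparse.spdiags(w, 0, n, n)
            z = spsolve(W + lam * D.dot(D.T), w * y)
            w = p * (y > z) + (1 - p) * (y < z)   # asymmetric reweighting
        return z

    # corrected_spectrum = y - asls_baseline(y)
    ```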

  11. The usefulness and the problems of attenuation correction using simultaneous transmission and emission data acquisition method. Studies on normal volunteers and phantom

    International Nuclear Information System (INIS)

    Kijima, Tetsuji; Kumita, Shin-ichiro; Mizumura, Sunao; Cho, Keiichi; Ishihara, Makiko; Toba, Masahiro; Kumazaki, Tatsuo; Takahashi, Munehiro.

    1997-01-01

    Attenuation correction using the simultaneous transmission data (TCT) and emission data (ECT) acquisition method was applied to 201Tl myocardial SPECT in ten normal adults and a phantom in order to validate the efficacy of attenuation correction using this method. The normal adult studies demonstrated improved 201Tl accumulation in the septal wall and the posterior wall of the left ventricle and relatively decreased activities in the lateral wall with attenuation correction (p < 0.05). High 201Tl uptake organs such as the liver and the stomach pushed up the activities in the septal wall and the posterior wall. Cardiac dynamic phantom studies showed that the partial volume effect due to cardiac motion contributed to under-correction of the apex, which might be overcome using gated SPECT. Although simultaneous TCT and ECT acquisition was conceived as an advantageous method for attenuation correction, mis-correction of specific myocardial segments should be taken into account when assessing attenuation-corrected images. (author)

  12. Initial evaluation of a practical PET respiratory motion correction method in clinical simultaneous PET/MRI

    International Nuclear Information System (INIS)

    Manber, Richard; Thielemans, Kris; Hutton, Brian; Barnes, Anna; Ourselin, Sebastien; Arridge, Simon; O’Meara, Celia; Atkinson, David

    2014-01-01

    Respiratory motion during PET acquisitions can cause image artefacts, with sharpness and tracer quantification adversely affected due to count 'smearing'. Motion correction by registration of PET gates becomes increasingly difficult with shorter scan times and fewer counts. The advent of simultaneous PET/MRI scanners allows the use of high-spatial-resolution MRI to capture motion states during respiration [1, 2]. In this work, we use a respiratory signal derived from the PET list-mode data [3], with no requirement for an external device or MR sequence modifications.

  13. 4SM: A Novel Self-Calibrated Algebraic Ratio Method for Satellite-Derived Bathymetry and Water Column Correction

    Directory of Open Access Journals (Sweden)

    Yann G. Morel

    2017-07-01

    All empirical water column correction methods have consistently been reported to require existing depth sounding data for the purpose of calibrating a simple depth retrieval model; they yield poor results over very bright or very dark bottoms. In contrast, we set out to (i) use only the relative radiance data in the image along with published data, and several new assumptions; (ii) in order to specify and operate the simplified radiative transfer equation (RTE); (iii) for the purpose of retrieving both the satellite derived bathymetry (SDB) and the water column corrected spectral reflectance over shallow seabeds. Sea truth regressions show that SDB depths retrieved by the method only need tide correction. Therefore it shall be demonstrated that, under such new assumptions, there is no need for (i) formal atmospheric correction; (ii) conversion of relative radiance into calibrated reflectance; or (iii) existing depth sounding data, to specify the simplified RTE and produce both SDB and spectral water column corrected radiance ready for bottom typing. Moreover, the use of the panchromatic band for that purpose is introduced. Altogether, we named this process the Self-Calibrated Supervised Spectral Shallow-sea Modeler (4SM). This approach requires a trained practitioner, though, to produce its results within hours of downloading the raw image. The ideal raw image should be a “near-nadir” view, exhibit homogeneous atmosphere and water column, include some coverage of optically deep waters and bare land, and lend itself to quality removal of haze, atmospheric adjacency effect, and sun/sky glint.

  14. Correction of measured charged-particle spectra for energy losses in the target - A comparison of three methods

    CERN Document Server

    Soederberg, J; Alm-Carlsson, G; Olsson, N

    2002-01-01

    The experimental facility MEDLEY at The Svedberg Laboratory in Uppsala has been constructed to measure neutron-induced charged-particle production cross-sections for (n, xp), (n, xd), (n, xt), (n, x3He) and (n, xα) reactions at neutron energies up to 100 MeV. Corrections for the energy loss of the charged particles in the target are needed in these measurements, as well as for the loss of particles. Different approaches have been used in the literature to solve this problem. In this work, a stripping method is developed and compared with the methods developed by Rezentes et al. and Slypen et al. The results obtained using the three codes are similar, and all three could be used for correction of experimental charged-particle spectra. Statistical fluctuations in the measured spectra cause problems independent of the applied technique, but the way they are handled differs between the three codes.

  15. Monte Carlo calculation of correction factors for radionuclide neutron source emission rate measurement by manganese bath method

    International Nuclear Information System (INIS)

    Li Chunjuan; Liu Yi'na; Zhang Weihua; Wang Zhiqiang

    2014-01-01

    The manganese bath method for measuring the neutron emission rate of radionuclide sources requires corrections to be made for emitted neutrons which are not captured by manganese nuclei. The Monte Carlo particle transport code MCNP was used to simulate the manganese bath system of the standards for the measurement of neutron source intensity. The correction factors were calculated and the reliability of the model was demonstrated through the key comparison for the radionuclide neutron source emission rate measurements organized by BIPM. The uncertainties in the calculated values were evaluated by considering the sensitivities to the solution density, the density of the radioactive material, the positioning of the source, the radius of the bath, and the interaction cross-sections. A new method for the evaluation of the uncertainties in Monte Carlo calculation was given. (authors)

  16. A Realization of Bias Correction Method in the GMAO Coupled System

    Science.gov (United States)

    Chang, Yehui; Koster, Randal; Wang, Hailan; Schubert, Siegfried; Suarez, Max

    2018-01-01

    Over the past several decades, a tremendous effort has been made to improve model performance in the simulation of the climate system. The cold or warm sea surface temperature (SST) bias in the tropics is still a problem common to most coupled ocean-atmosphere general circulation models (CGCMs). The precipitation biases in CGCMs are also accompanied by SST and surface wind biases. The deficiencies and biases over the equatorial oceans, through their influence on the Walker circulation, likely contribute to the precipitation biases over land surfaces. In this study, we introduce an approach to correcting model biases in CGCM modeling. This approach utilizes the history of the model's short-term forecasting errors and their seasonal dependence to modify the model's tendency term and minimize its climate drift. The study shows that such an approach removes most of the model's climate biases. A number of other aspects of the model simulation (e.g. extratropical transient activity) are also improved considerably due to the imposed pre-processed initial 3-hour model drift corrections. Because many regional biases in the GEOS-5 CGCM are common among other current models, our approaches and findings are applicable to these other models as well.

  17. A modification to the standard ionospheric correction method used in GPS radio occultation

    Directory of Open Access Journals (Sweden)

    S. B. Healy

    2015-08-01

    A modification to the standard bending-angle correction used in GPS radio occultation (GPS-RO) is proposed. The modified approach should reduce systematic residual ionospheric errors in GPS radio occultation climatologies. A new second-order term is introduced in order to account for a known source of systematic error, which is generally neglected. The new term has the form κ(a) × (αL1(a) − αL2(a))², where a is the impact parameter and (αL1, αL2) are the L1 and L2 bending angles, respectively. The variable κ is a weak function of the impact parameter a, but it does depend on a priori ionospheric information. The theoretical basis of the new term is examined. The sensitivity of κ to the assumed ionospheric parameters is investigated in one-dimensional simulations, and it is shown that κ ≃ 10–20 rad⁻¹. We note that the current implicit assumption is κ = 0, and this is probably adequate for numerical weather prediction applications. However, the uncertainty in κ should be included in the uncertainty estimates for the geophysical climatologies produced from GPS-RO measurements. The limitations of the new ionospheric correction when applied to CHAMP (Challenging Minisatellite Payload) measurements are noted. These arise because of the assumption, made when deriving bending angles from the Doppler shift values, that the refractive index is unity at the satellite.
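
    Writing the correction out in full: the standard dual-frequency (first-order) ionospheric correction is a linear combination of the two bending angles, and the proposed modification appends the second-order term quoted above, giving

    ```latex
    \alpha_c(a) \;=\; \frac{f_1^2\,\alpha_{L1}(a) \;-\; f_2^2\,\alpha_{L2}(a)}{f_1^2 - f_2^2}
    \;+\; \kappa(a)\,\bigl(\alpha_{L1}(a) - \alpha_{L2}(a)\bigr)^2 ,
    ```

    where f1 and f2 are the two GPS carrier frequencies. The first term is the classical correction (the implicit κ = 0 case); only the κ term is new in this work.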

  18. BLESS 2: accurate, memory-efficient and fast error correction method.

    Science.gov (United States)

    Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming

    2016-08-01

    The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when it was executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Availability: freely available at https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online.

  19. A third-generation dispersion and third-generation hydrogen bonding corrected PM6 method: PM6-D3H+

    Directory of Open Access Journals (Sweden)

    Jimmy C. Kromann

    2014-06-01

    We present new dispersion and hydrogen bond corrections to the PM6 method, PM6-D3H+, and its implementation in the GAMESS program. The method combines the DFT-D3 dispersion correction by Grimme et al. with a modified version of the H+ hydrogen bond correction by Korth. Overall, the interaction energy of PM6-D3H+ is very similar to PM6-DH2 and PM6-DH+, with RMSD and MAD values within 0.02 kcal/mol of one another. The main difference is that the geometry optimizations of 88 complexes result in 82, 6, 0, and 0 geometries with 0, 1, 2, and 3 or more imaginary frequencies using PM6-D3H+ implemented in GAMESS, while the corresponding numbers for PM6-DH+ implemented in MOPAC are 54, 17, 15, and 2. The PM6-D3H+ method as implemented in GAMESS offers an attractive alternative to PM6-DH+ in MOPAC in cases where the LBFGS optimizer must be used and a vibrational analysis is needed, e.g., when computing vibrational free energies. While the GAMESS implementation is up to 10 times slower for geometry optimizations of proteins in bulk solvent, compared to MOPAC, it is sufficiently fast to make geometry optimizations of small proteins practically feasible.

  20. Fixed-pattern noise correction method based on improved moment matching for a TDI CMOS image sensor.

    Science.gov (United States)

    Xu, Jiangtao; Nie, Huafeng; Nie, Kaiming; Jin, Weimin

    2017-09-01

    In this paper, an improved moment matching method based on a spatial correlation filter (SCF) and bilateral filter (BF) is proposed to correct the fixed-pattern noise (FPN) of a time-delay-integration CMOS image sensor (TDI-CIS). First, the values of row FPN (RFPN) and column FPN (CFPN) are estimated and added to the original image through the SCF and BF, respectively. Then the filtered image is processed by an improved moment matching method with a moving window. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in an image captured under uniform illumination, the standard deviation of the row mean vector (SDRMV) decreases from 5.6761 LSB to 0.1948 LSB, while the standard deviation of the column mean vector (SDCMV) decreases from 15.2005 LSB to 13.1949 LSB. In addition, for different images captured by different TDI-CISs, the average decreases in SDRMV and SDCMV are 5.4922 LSB and 2.0357 LSB, respectively. Comparative experimental results indicate that the proposed method can effectively correct the FPN of different TDI-CISs while maintaining image details, without any auxiliary equipment.
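
    The moment-matching core that the paper improves upon is simple: force each row or column to share the frame's global first and second moments. A minimal sketch of that baseline step for column FPN (the SCF/BF pre-filters and the moving window of the improved method are omitted):

    ```python
    import numpy as np

    def moment_match_columns(img):
        """Classic moment matching for column FPN: rescale every column so
        its mean and standard deviation match the global moments of the frame."""
        img = img.astype(np.float64)
        col_mean = img.mean(axis=0)
        col_std = img.std(axis=0)
        col_std[col_std == 0] = 1.0        # guard against flat columns
        return (img - col_mean) / col_std * img.std() + img.mean()
    ```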

  1. Correction for tissue attenuation in radionuclide gastric emptying studies: a comparison of a lateral image method and a geometric mean method

    Energy Technology Data Exchange (ETDEWEB)

    Collins, P.J.; Chatterton, B.E. (Royal Adelaide Hospital (Australia)); Horowitz, M.; Shearman, D.J.C. (Adelaide Univ. (Australia). Dept. of Medicine)

    1984-08-01

    Variation in the depth of radionuclide within the stomach may result in significant errors in the measurement of gastric emptying if no attempt is made to correct for gamma-ray attenuation by the patient's tissues. A method of attenuation correction, which uses a single posteriorly located scintillation camera and correction factors derived from a lateral image of the stomach, was compared with a two-camera geometric mean method, in phantom studies and in five volunteer subjects. A meal of 100 g of ground beef containing 99mTc-chicken liver, and 150 ml of water, was used in the in vivo studies. In all subjects the geometric mean data showed that solid food emptied in two phases: an initial lag period, followed by a linear emptying phase. Using the geometric mean data as a standard, the anterior camera overestimated the 50% emptying time (T50) by an average of 15% (range 5-18) and the posterior camera underestimated this parameter by 15% (4-22). The posterior data, corrected for attenuation using the lateral image method, underestimated the T50 by 2% (-7 to +7). The difference in the distances of the proximal and distal stomach from the posterior detector was large in all subjects (mean 5.7 cm, range 3.9-7.4).
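
    The geometric mean used as the standard here removes the depth dependence analytically: for a source at depth d in tissue of thickness T, the opposed views see attenuation factors exp(-μ(T-d)) and exp(-μd), and their geometric mean depends only on T. A short sketch (the μ value is a typical soft-tissue figure for illustration, not taken from the paper):

    ```python
    import numpy as np

    def geometric_mean_counts(anterior, posterior, mu=0.15, thickness=None):
        """Depth-independent counts from opposed views: sqrt(A*P) equals
        C0*exp(-mu*T/2), whatever the source depth d. If the patient
        thickness T (cm) is known, the attenuation can be undone entirely;
        mu ~ 0.15 /cm is a broad-beam value for 99mTc in soft tissue."""
        gm = np.sqrt(anterior * posterior)
        if thickness is not None:
            gm = gm * np.exp(mu * thickness / 2.0)
        return gm
    ```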

  2. Development of a self-absorption correction method used for a HPGe detector by means of a Monte Carlo simulation

    International Nuclear Information System (INIS)

    Itadzu, Hidesuke; Iguchi, Tetsuo; Suzuki, Toshikazu

    2013-01-01

    Quantitative analysis of food products and natural samples, to determine the activity of each radionuclide, can be made using a high-purity germanium (HPGe) gamma-ray spectrometer system. The analysis procedure is, in general, based upon the guidelines established by the Nuclear Safety Division of the Ministry of Education, Culture, Sports, Science and Technology in Japan (JP MEXT). For gamma-ray spectrum analysis of large-volume samples, re-entrant (Marinelli) containers are commonly used. The effect of photon attenuation in a large-volume sample, so-called “self-absorption”, should be corrected for precise determination of the activity. For Marinelli containers, two specific geometries are given in the JP MEXT guidelines, for 700 milliliter and 2 liter volumes, and the functions to obtain the self-absorption coefficients for these shapes are also provided in the document. Therefore, self-absorption corrections have so far been carried out only for these two containers with practical media. However, to measure radioactivity for samples in containers of volumes other than those described in the guidelines, the self-absorption correction functions must be obtained by measuring at least two standard multinuclide volume sources, which consist of different media or different linear attenuation coefficients. In this work, we developed a method to obtain these functions over a wide range of linear attenuation coefficients for self-absorption in various shapes of Marinelli containers using a Monte Carlo simulation. This method was applied to a 1-liter Marinelli container, which is widely used for the above quantitative analysis, although its self-absorption correction function had not yet been established. The validity of this method was experimentally checked through an analysis of natural samples with known activity levels. (author)
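
    The quantity such a simulation estimates is essentially the volume-averaged photon escape probability as a function of the sample's linear attenuation coefficient μ. The toy Monte Carlo below shows the idea for a plain cylinder (a Marinelli geometry adds a re-entrant well, but the sampling logic is the same); it is a schematic stand-in for a full transport code, tracking only exponential attenuation along straight escape paths:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def self_absorption_factor(mu, radius, height, n=200_000):
        """Mean escape probability exp(-mu*L) for photons born uniformly
        in a cylinder and emitted isotropically; L is the chord length to
        the surface. Ratios of this factor between media give a relative
        self-absorption correction."""
        # uniform birth points inside the cylinder
        r = radius * np.sqrt(rng.random(n))
        phi = 2 * np.pi * rng.random(n)
        x, y, z = r * np.cos(phi), r * np.sin(phi), height * rng.random(n)
        # isotropic directions
        ct = 2 * rng.random(n) - 1
        st = np.sqrt(1 - ct ** 2)
        az = 2 * np.pi * rng.random(n)
        dx, dy, dz = st * np.cos(az), st * np.sin(az), ct
        # distance to the top/bottom faces
        dz_safe = np.where(dz == 0, 1e-30, dz)
        t_ax = np.where(dz > 0, (height - z) / dz_safe,
                        np.where(dz < 0, -z / dz_safe, np.inf))
        # distance to the curved wall: solve |(x,y) + t*(dx,dy)| = radius
        a = dx ** 2 + dy ** 2 + 1e-30
        b = x * dx + y * dy
        t_rad = (-b + np.sqrt(b ** 2 + a * (radius ** 2 - x ** 2 - y ** 2))) / a
        return np.exp(-mu * np.minimum(t_ax, t_rad)).mean()
    ```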

  3. An Accurate CT Saturation Classification Using a Deep Learning Approach Based on Unsupervised Feature Extraction and Supervised Fine-Tuning Strategy

    Directory of Open Access Journals (Sweden)

    Muhammad Ali

    2017-11-01

    Current transformer (CT) saturation is one of the significant problems for protection engineers. If CT saturation is not tackled properly, it can have a disastrous effect on the stability of the power system, and may even cause a complete blackout. To cope with CT saturation properly, accurate detection or classification should come first. Recently, deep learning (DL) methods have brought a subversive revolution in the field of artificial intelligence (AI). This paper presents a new DL classification method based on unsupervised feature extraction and a supervised fine-tuning strategy to classify the saturated and unsaturated regions in cases of CT saturation. In other words, if the protection system is subjected to CT saturation, the proposed method will correctly classify the different levels of saturation with high accuracy. Traditional AI methods are mostly based on supervised learning and rely heavily on human-crafted features. This paper contributes an unsupervised feature extraction, using autoencoders and deep neural networks (DNNs) to extract features automatically without prior knowledge of the optimal features. To validate the effectiveness of the proposed method, a variety of simulation tests are conducted, and the classification results are analyzed using standard classification metrics. Simulation results confirm that the proposed method classifies the different levels of CT saturation with remarkable accuracy and has unique feature extraction capabilities. Lastly, we provide a potential future research direction to conclude this paper.
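
    The two-stage strategy the abstract describes can be sketched compactly. The window length, layer sizes and optimizer settings below are assumptions for illustration; the data loaders are presumed to yield windows of CT secondary-current samples with saturated/unsaturated labels:

    ```python
    import torch
    import torch.nn as nn

    # Stage 1: unsupervised feature extraction with an autoencoder.
    enc = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 8))
    dec = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 64))
    ae_opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

    def pretrain(loader, epochs=20):
        for _ in range(epochs):
            for x, _ in loader:                       # labels ignored here
                loss = nn.functional.mse_loss(dec(enc(x)), x)
                ae_opt.zero_grad(); loss.backward(); ae_opt.step()

    # Stage 2: supervised fine-tuning of the encoder plus a classifier head.
    head = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    ft_opt = torch.optim.Adam(list(enc.parameters()) + list(head.parameters()), lr=1e-4)

    def finetune(loader, epochs=20):
        for _ in range(epochs):
            for x, y in loader:
                loss = nn.functional.cross_entropy(head(enc(x)), y)
                ft_opt.zero_grad(); loss.backward(); ft_opt.step()
    ```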

  4. A review of neutron scattering correction for the calibration of neutron survey meters using the shadow cone method

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sang In; Kim, Bong Hwan; Kim, Jang Lyul; Lee, Jung Il [Health Physics Team, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-12-15

    The calibration methods for neutron-measuring devices such as neutron survey meters have advantages and disadvantages. To compare the calibration factors obtained by the shadow cone method and the semi-empirical method, 10 neutron survey meters of five different types were used in this study. The experiment was performed at the Korea Atomic Energy Research Institute (KAERI; Daejeon, South Korea), and the calibration neutron fields were constructed using a californium-252 (252Cf) neutron source positioned in the center of the neutron irradiation room. The neutron spectra of the calibration neutron fields were measured by a europium-activated lithium iodide scintillator in combination with KAERI's Bonner sphere system. When the shadow cone method was used, the 10 single-moderator-based survey meters exhibited calibration factors smaller by 3.1-9.3% than those of the semi-empirical method. This finding indicates that the neutron survey meters underestimated the scattered and attenuated neutrons (i.e., the total scatter corrections). This underestimation of the calibration factor was attributed to the fact that single-moderator-based survey meters have an under-ambient dose equivalent response in thermal or thermal-dominant neutron fields. As a result, when the shadow cone method is used for a single-moderator-based survey meter, an additional correction and the International Organization for Standardization standard 8529-2 for room-scattered neutrons should be considered.

  5. A novel scene-based non-uniformity correction method for SWIR push-broom hyperspectral sensors

    Science.gov (United States)

    Hu, Bin-Lin; Hao, Shi-Jing; Sun, De-Xin; Liu, Yin-Nian

    2017-09-01

    A novel scene-based non-uniformity correction (NUC) method for short-wavelength infrared (SWIR) push-broom hyperspectral sensors is proposed and evaluated. This method relies on the assumption that for each band there will be ground objects with similar reflectance to form uniform regions when a sufficient number of scanning lines are acquired. The uniform regions are extracted automatically through a sorting algorithm, and are used to compute the corresponding NUC coefficients. SWIR hyperspectral data from airborne experiment are used to verify and evaluate the proposed method, and results show that stripes in the scenes have been well corrected without any significant information loss, and the non-uniformity is less than 0.5%. In addition, the proposed method is compared to two other regular methods, and they are evaluated based on their adaptability to the various scenes, non-uniformity, roughness and spectral fidelity. It turns out that the proposed method shows strong adaptability, high accuracy and efficiency.
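
    The abstract's key assumption (that enough scan lines contain near-uniform ground for every detector) suggests a simple gain-estimation sketch. The uniformity ranking below, based on the spread of each line's central sorted values, and all sizes are illustrative guesses at the idea, not the paper's actual sorting algorithm:

    ```python
    import numpy as np

    def scene_nuc_gains(lines, n_uniform=200):
        """Per-detector multiplicative gains for one band of a push-broom
        sensor. lines: array (n_scan_lines, n_detectors) of raw DNs."""
        q = lines.shape[1] // 4
        central = np.sort(lines, axis=1)[:, q:-q]   # middle half of each line
        spread = central.ptp(axis=1)                # small spread ~ uniform scene
        uniform = lines[np.argsort(spread)[:n_uniform]]
        col_resp = uniform.mean(axis=0)             # mean response per detector
        return col_resp.mean() / col_resp

    # corrected = raw_band * scene_nuc_gains(raw_band)[None, :]
    ```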

  6. A review of neutron scattering correction for the calibration of neutron survey meters using the shadow cone method

    International Nuclear Information System (INIS)

    Kim, Sang In; Kim, Bong Hwan; Kim, Jang Lyul; Lee, Jung Il

    2015-01-01

    The calibration methods for neutron-measuring devices such as neutron survey meters have advantages and disadvantages. To compare the calibration factors obtained by the shadow cone method and the semi-empirical method, 10 neutron survey meters of five different types were used in this study. The experiment was performed at the Korea Atomic Energy Research Institute (KAERI; Daejeon, South Korea), and the calibration neutron fields were constructed using a californium-252 (252Cf) neutron source positioned in the center of the neutron irradiation room. The neutron spectra of the calibration neutron fields were measured by a europium-activated lithium iodide scintillator in combination with KAERI's Bonner sphere system. When the shadow cone method was used, the 10 single-moderator-based survey meters exhibited calibration factors smaller by 3.1-9.3% than those of the semi-empirical method. This finding indicates that the neutron survey meters underestimated the scattered and attenuated neutrons (i.e., the total scatter corrections). This underestimation of the calibration factor was attributed to the fact that single-moderator-based survey meters have an under-ambient dose equivalent response in thermal or thermal-dominant neutron fields. As a result, when the shadow cone method is used for a single-moderator-based survey meter, an additional correction and the International Organization for Standardization standard 8529-2 for room-scattered neutrons should be considered.

  7. How about a Bayesian M/EEG imaging method correcting for incomplete spatio-temporal priors

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Attias, Hagai T.; Sekihara, Kensuke

    2013-01-01

    In contrast to previous spatio-temporal inverse M/EEG models, the proposed model benefits from consisting of two source terms: a spatio-temporal pattern term limiting the source configuration to a spatio-temporal subspace, and a source-correcting term to pick up source activity not covered by the spatio-temporal prior belief. We have tested the model on both artificial data and real EEG data in order to demonstrate its efficacy. The model was tested at different SNRs (-10.0, -5.2, -3.0, -1.0, 0, 0.8, 3.0 dB) using white noise. At all SNRs the sAquavit performs best in the AUC measure, e.g. at SNR = 0 dB.

  8. Bias correction method for climate change impact assessment at a basin scale

    Science.gov (United States)

    Nyunt, C.; Jaranilla-sanchez, P. A.; Yamamoto, A.; Nemoto, T.; Kitsuregawa, M.; Koike, T.

    2012-12-01

    Climate change impact studies are mainly based on general circulation models (GCMs), and such studies play an important role in defining suitable adaptation strategies for a resilient environment at the basin scale. For this purpose, this study summarizes how to select appropriate GCMs so as to reduce uncertainty in the analysis. This was applied to the Pampanga, Angat and Kaliwa rivers in Luzon Island, the main island of the Philippines; these three river basins play important roles in irrigation water supply and as municipal water sources for Metro Manila. Based on GCM scores for both the seasonal evolution of the Asian summer monsoon and the spatial correlation and root mean squared error of atmospheric variables over the region, six GCMs were finally chosen. Next, we developed a complete, efficient and comprehensive statistical bias correction scheme covering extreme events, normal rainfall and the frequency of dry periods. Owing to the coarse resolution and parameterization schemes of GCMs, underestimation of extreme rainfall, too many rain days with low intensity, and poor representation of local seasonality are known GCM biases. Extreme rainfall has unusual characteristics and should be treated specifically; the estimated maximum extreme rainfall is crucial for the planning and design of infrastructure in a river basin. Developing countries have limited technical, financial and management resources for implementing adaptation measures, and they need detailed information on drought and flood for the near future. Traditionally, extremes have been analyzed using an annual maximum series (AMS) fitted to a Gumbel or lognormal distribution; the drawback is the loss of the second, third, etc. largest rainfalls. Another approach is the partial duration series (PDS), constructed from the values above a selected threshold and permitting more than one event per year. The generalized Pareto distribution (GPD) has been used to model the PDS, i.e. the series of excesses over a threshold.
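
    The peaks-over-threshold modelling the abstract ends on is straightforward to set up with scipy; the threshold choice (here a high percentile) and the return-level formula are standard practice rather than details from the paper:

    ```python
    import numpy as np
    from scipy import stats

    def pot_gpd_return_level(daily_rain, years, T=20, q=0.95):
        """Fit a generalized Pareto distribution to excesses over the
        q-quantile threshold and return the T-year return level."""
        u = np.quantile(daily_rain, q)
        excess = daily_rain[daily_rain > u] - u
        c, _, scale = stats.genpareto.fit(excess, floc=0.0)
        rate = excess.size / years                  # exceedances per year
        return u + stats.genpareto.ppf(1.0 - 1.0 / (T * rate), c, 0.0, scale)
    ```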

  9. Well-logging method using well-logging tools run through a drill stem test string for determining in-situ change in formation water saturation values

    International Nuclear Information System (INIS)

    Fertl, W.H.

    1975-01-01

    A logging tool (pulsed neutron or neutron-gamma ray) whose response indicates formation water saturation value, is run through an opening extending through a portion of a drill stem test string. A sample portion of the formation fluid in the zone of interest is removed and another logging run is made. The differences between the plots of the two logging runs indicate the formation potential productivity in the zone of interest

  10. [THE CORRECTION OF TROPHIC DISORDERS IN CHILDREN OF CHRONIC GASTRODUODENITIS WITH METHOD LOW-FREQUENCY LIGHT-MAGNETOTHERAPY].

    Science.gov (United States)

    Kolosova, T A; Sadovnikova, I V; Belousova, T E

    2015-01-01

    We present the results of a survey of school children with chronic gastroduodenitis who received medical rehabilitation with low-frequency light-magnetotherapy at an early period. During hospital treatment, vegetative-trophic status was evaluated by cardiointervalography and thermovision functional tests. Along with the normalization of clinical parameters, the dynamics of the children's vegetative status were corrected, which confirms the effectiveness of the therapy. It is shown that low-frequency light-magnetotherapy has a positive effect on the vegetative-trophic state of the organism and normalizes vegetative dysfunction.

  11. Contribution to regularizing iterative method development for attenuation correction in gamma emission tomography

    International Nuclear Information System (INIS)

    Cao, A.

    1981-07-01

    This study is concerned with transverse axial gamma emission tomography and raises the problem of self-attenuation of radiation in biological tissues. The regularizing iterative method is developed as a reconstruction method for three-dimensional images. The different steps, from acquisition to results, necessary for its application are described, and flowcharts for each step are explained. A notion of comparison between two reconstruction methods is introduced, and some methods used for the comparison, or to bring out the characteristics of a reconstruction technique, are defined. The studies carried out to test the regularizing iterative method are presented and the results analyzed. [fr]

  12. Method of correction of motive sphere for deaf schoolboys during an orientation on employments on health tourism

    Directory of Open Access Journals (Sweden)

    Baikina N.G.

    2012-08-01

    The purpose of this work is to develop a method for correcting the motor sphere, and developing speed and endurance running, in deaf schoolboys engaged in health tourism. Deaf schoolboys aged 12-14 years took part in the experiment. The latent periods of reaction to a light signal and changes in the indices of the neuromuscular apparatus were measured. Fundamentals of preparing schoolboys in orienteering tactics for health tourism classes are recommended. The features of speed and endurance were identified for deaf and hearing schoolboys in orienteering classes. It was established that correction of the motor sphere must be carried out on the basis of running preparation for speed and endurance. It is also necessary to expand and select the volume of initial verbal information: oral, written, haptic and gestural. The importance of introducing sports technical aids into running practice is noted, so that verbal information about the logic of the student's feedback actions can be repeated many times. Playing, repetition, competition and circuit methods must be combined with verbal components in all accessible forms, as well as with demonstration and operative correction of the pupils' activity.

  13. Performance of bias-correction methods for exposure measurement error using repeated measurements with and without missing data.

    Science.gov (United States)

    Batistatou, Evridiki; McNamee, Roseanne

    2012-12-10

    It is known that measurement error leads to bias in assessing exposure effects, which can, however, be corrected if independent replicates are available. For expensive replicates, two-stage (2S) studies that produce data 'missing by design' may be preferred over a single-stage (1S) study, because in the second stage, measurement of replicates is restricted to a sample of first-stage subjects. Motivated by an occupational study on the acute effect of carbon black exposure on respiratory morbidity, we compare the performance of several bias-correction methods for both designs in a simulation study: an instrumental variable method (EVROS IV) based on grouping strategies, which had been recommended especially when measurement error is large, the regression calibration method, and the simulation extrapolation method. For the 2S design, either the problem of 'missing' data was ignored or the 'missing' data were imputed using multiple imputations. In both 1S and 2S designs, in the case of small or moderate measurement error, regression calibration was shown to be the preferred approach in terms of root mean square error. For 2S designs, regression calibration as implemented by Stata software is not recommended, in contrast to our implementation of this method; the 'problematic' implementation of regression calibration nevertheless improved substantially with the use of multiple imputations. The EVROS IV method, under a good/fairly good grouping, outperforms the regression calibration approach in both design scenarios when exposure mismeasurement is severe. In both 1S and 2S designs with moderate or large measurement error, simulation extrapolation severely failed to correct for bias.

  14. Method and system for correcting an aberration of a beam of charged particles

    International Nuclear Information System (INIS)

    1975-01-01

    A beam of charged particles is deflected in a closed path such as a square over a cross wire grid, for example, at a constant velocity by an X Y deflection system. A small high frequency jitter is added at both axes of deflection to cause oscillation of the beam at 45deg to the X and Y axes. From the time that the leading edge of the oscillating beam passes over the wire until the trailing edge of the beam passes over the wire, an envelope of the oscillations produced by the jitter is obtained. A second envelope is obtained when the leading edge of the beam exits from being over the wire until the trailing edge of the beam ceases to be over the wire. Thus, a pair of envelopes is produced as the beam passes over each wire of the grid. The number of pulses exceeding ten percent of the peak voltage in the eight envelopes produced by the beam completing a cycle in its closed path around the grid are counted and compared with those counted during the previous cycle of the beam moving in its closed path over the grid. As the number of pulses decreases, the quality of the focus of the beam increases so that correction signals are applied to the focus coil in accordance with whether the number of pulses is increasing or decreasing

  15. PAIN IN RHEUMATOID ARTHRITIS: SPECIFIC FEATURES OF ITS DEVELOPMENT AND METHODS OF CORRECTION

    Directory of Open Access Journals (Sweden)

    Yuri Aleksandrovich Olyunin

    2010-06-01

    The pain syndrome holds a central position in the clinical picture of rheumatoid arthritis (RA). Articular inflammation is an essential, but not the only, factor that determines the occurrence of pain. Extraarticular soft tissue pathology can play an important role in the formation of pain perceptions in RA. Pain that increases on movement with involvement of affected structures, as well as local tenderness on palpation and dysfunction of an altered segment, are the major clinical manifestations of extraarticular soft tissue involvement in RA. Swelling in the area of the appropriate tendons and synovial bursae can be seen when superficially located anatomic formations are involved. Magnetic resonance imaging and ultrasonography permit more accurate determination of the site and pattern of the involvement. The pain and functional impairments associated with extraarticular soft tissue pathology determine a need for additional therapy that can correct the existing disorders and improve the quality of life of patients. The major components of this treatment are a sparing regimen and systemic and local drug therapy. Diclofenac sodium is one of the most universal agents, allowing simultaneous control of various pathogenetic mechanisms of the disease. Local glucocorticoids may be used if the sparing regimen and nonsteroidal anti-inflammatory drugs fail to control the pain syndrome effectively.

  16. Method and system for correcting an aberration of a beam of charged particles

    Energy Technology Data Exchange (ETDEWEB)

    1975-06-20

    A beam of charged particles is deflected in a closed path such as a square over a cross wire grid, for example, at a constant velocity by an X Y deflection system. A small high frequency jitter is added at both axes of deflection to cause oscillation of the beam at 45deg to the X and Y axes. From the time that the leading edge of the oscillating beam passes over the wire until the trailing edge of the beam passes over the wire, an envelope of the oscillations produced by the jitter is obtained. A second envelope is obtained when the leading edge of the beam exits from being over the wire until the trailing edge of the beam ceases to be over the wire. Thus, a pair of envelopes is produced as the beam passes over each wire of the grid. The number of pulses exceeding ten percent of the peak voltage in the eight envelopes produced by the beam completing a cycle in its closed path around the grid are counted and compared with those counted during the previous cycle of the beam moving in its closed path over the grid. As the number of pulses decreases, the quality of the focus of the beam increases so that correction signals are applied to the focus coil in accordance with whether the number of pulses is increasing or decreasing.

  17. Circuit and method for comparator offset error detection and correction in ADC

    NARCIS (Netherlands)

    2017-01-01

    PROBLEM TO BE SOLVED: To provide a method for calibrating an analog-to-digital converter (ADC). SOLUTION: The method comprises: sampling an input voltage signal; comparing the sampled input voltage signal with the output signal of a feedback digital-to-analog converter (DAC) 40; determining in a

  18. Determination of avermectins by the internal standard recovery correction - high performance liquid chromatography - quantitative Nuclear Magnetic Resonance method.

    Science.gov (United States)

    Zhang, Wei; Huang, Ting; Li, Hongmei; Dai, Xinhua; Quan, Can; He, Yajuan

    2017-09-01

    Quantitative nuclear magnetic resonance (qNMR) is widely used to determine the purity of organic compounds. For compounds of lower purity, especially those with molecular weight above 500, qNMR risks errors in the determined purity, because impurity peaks are likely to be incompletely separated from the peak of the major component. In this study, an offline ISRC-HPLC-qNMR (internal standard recovery correction - high performance liquid chromatography - qNMR) method was developed to overcome this problem. It is accurate, because the influence of impurities is excluded; it is low-cost, because common mobile phases are used; and it extends the applicable scope of qNMR. In this method, a mixed solution of the sample and an internal standard is separated by HPLC with common mobile phases, and only the eluents of the analyte and the internal standard are collected in the same tube. After evaporation and re-dissolution, the mixture is determined by qNMR. A recovery correction factor is determined by comparison of the solutions before and after these procedures. After correction, the mass fraction of the analyte is constant, accurate and precise, even though the sample loss varies during these procedures, or even when the HPLC resolution is poor. Avermectin B1a, with a purity of ~93% and a molecular weight of 873, was analyzed. Moreover, the homologues of avermectin B1a were determined based on identification and quantitative analysis by tandem mass spectrometry and HPLC, and the results were consistent with those of the traditional mass balance method. The results show that the method could be widely used for organic compounds, and could further promote qNMR to become a primary method in international metrological systems.
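
    In standard internal-standard qNMR the analyte purity follows from the integral ratio of one analyte signal to one standard signal; the recovery correction simply divides out the analyte/standard loss ratio measured across the separation step. A schematic of that arithmetic, with all argument names being illustrative rather than the paper's notation:

    ```python
    def purity_isrc(signal_ratio, n_std_over_n_a, M_a_over_M_std,
                    m_std_over_m_sample, purity_std, recovery_factor):
        """Mass fraction of the analyte by internal-standard qNMR.

        signal_ratio     : analyte/standard integral ratio measured by qNMR
                           after HPLC collection, evaporation, re-dissolution
        n_std_over_n_a   : ratio of contributing protons (standard/analyte)
        M_a_over_M_std   : molar-mass ratio (analyte/standard)
        recovery_factor  : analyte recovery divided by standard recovery,
                           from comparing solutions before/after separation
        """
        purity = (signal_ratio * n_std_over_n_a * M_a_over_M_std
                  * m_std_over_m_sample * purity_std)
        return purity / recovery_factor
    ```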

  19. A Maximum-Likelihood Method to Correct for Allelic Dropout in Microsatellite Data with No Replicate Genotypes

    Science.gov (United States)

    Wang, Chaolong; Schroeder, Kari B.; Rosenberg, Noah A.

    2012-01-01

    Allelic dropout is a commonly observed source of missing data in microsatellite genotypes, in which one or both allelic copies at a locus fail to be amplified by the polymerase chain reaction. Especially for samples with poor DNA quality, this problem causes a downward bias in estimates of observed heterozygosity and an upward bias in estimates of inbreeding, owing to mistaken classifications of heterozygotes as homozygotes when one of the two copies drops out. One general approach for avoiding allelic dropout involves repeated genotyping of homozygous loci to minimize the effects of experimental error. Existing computational alternatives often require replicate genotyping as well. These approaches, however, are costly and are suitable only when enough DNA is available for repeated genotyping. In this study, we propose a maximum-likelihood approach together with an expectation-maximization algorithm to jointly estimate allelic dropout rates and allele frequencies when only one set of nonreplicated genotypes is available. Our method considers estimates of allelic dropout caused by both sample-specific factors and locus-specific factors, and it allows for deviation from Hardy–Weinberg equilibrium owing to inbreeding. Using the estimated parameters, we correct the bias in the estimation of observed heterozygosity through the use of multiple imputations of alleles in cases where dropout might have occurred. With simulated data, we show that our method can (1) effectively reproduce patterns of missing data and heterozygosity observed in real data; (2) correctly estimate model parameters, including sample-specific dropout rates, locus-specific dropout rates, and the inbreeding coefficient; and (3) successfully correct the downward bias in estimating the observed heterozygosity. We find that our method is fairly robust to violations of model assumptions caused by population structure and by genotyping errors from sources other than allelic dropout. Because the data sets

  20. Correction for dispersion and Coulombic interactions in molecular clusters with density functional derived methods: Application to polycyclic aromatic hydrocarbon clusters

    Science.gov (United States)

    Rapacioli, Mathias; Spiegelman, Fernand; Talbi, Dahbia; Mineva, Tzonka; Goursot, Annick; Heine, Thomas; Seifert, Gotthard

    2009-06-01

    The density functional based tight binding (DFTB) method is a semiempirical method derived from density functional theory (DFT). It therefore inherits DFT's problems in treating van der Waals clusters. A major error comes from dispersion forces, which are poorly described by commonly used DFT functionals but can be accounted for by an a posteriori treatment, DFT-D. This correction is used for DFTB. The self-consistent charge (SCC) DFTB is built on Mulliken charges, which are known to give a poor representation of the Coulombic intermolecular potential. We propose to calculate this potential using the class IV/charge model 3 definition of atomic charges. The self-consistent calculation of these charges is introduced in the SCC procedure and the corresponding nuclear forces are derived. The benzene dimer is then studied as a benchmark system with this corrected DFTB (c-DFTB-D) method, but also, for comparison, with DFT-D. Both methods give similar results and are in agreement with reference calculations (CCSD(T) and symmetry-adapted perturbation theory). As a first application, the pyrene dimer is studied with the c-DFTB-D and DFT-D methods. For coronene clusters, only the c-DFTB-D approach is used, which finds the sandwich configurations to be more stable than the T-shaped ones.
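
    The a posteriori dispersion treatment mentioned here has, in its common pairwise form, the generic structure

    ```latex
    E_{\mathrm{disp}} \;=\; -\, s_6 \sum_{i<j} f_{\mathrm{damp}}(R_{ij})\, \frac{C_6^{ij}}{R_{ij}^{6}},
    \qquad
    f_{\mathrm{damp}}(R_{ij}) \;=\; \frac{1}{1 + e^{-d\,(R_{ij}/R_{0}^{ij} - 1)}} ,
    ```

    i.e. damped pairwise C6/R^6 terms added on top of the electronic-structure energy. This is the schematic DFT-D form, shown here with a D2-style damping function for brevity; the actual parameterization used with DFTB follows the authors' chosen DFT-D scheme.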

  1. A comparison of high-order explicit Runge–Kutta, extrapolation, and deferred correction methods in serial and parallel

    KAUST Repository

    Ketcheson, David I.

    2014-06-13

    We compare the three main types of high-order one-step initial value solvers: extrapolation, spectral deferred correction, and embedded Runge–Kutta pairs. We consider orders four through twelve, including both serial and parallel implementations. We cast extrapolation and deferred correction methods as fixed-order Runge–Kutta methods, providing a natural framework for the comparison. The stability and accuracy properties of the methods are analyzed by theoretical measures, and these are compared with the results of numerical tests. In serial, the eighth-order pair of Prince and Dormand (DOP8) is most efficient. But other high-order methods can be more efficient than DOP8 when implemented in parallel. This is demonstrated by comparing a parallelized version of the well-known ODEX code with the (serial) DOP853 code. For an N-body problem with N = 400, the experimental extrapolation code is as fast as the tuned Runge–Kutta pair at loose tolerances, and is up to two times as fast at tight tolerances.
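
    The serial baseline in this comparison, the Dormand–Prince 8(5,3) pair, is readily available; the snippet below sets up a small softened-gravity N-body test at a tight tolerance with scipy's DOP853 implementation. The problem size, softening and tolerances are illustrative, not the paper's benchmark configuration:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    n = 40  # bodies (the paper's test uses N = 400)

    def gravity(t, y):
        """Planar softened N-body RHS; y = [positions, velocities] flattened."""
        pos, vel = y[:2 * n].reshape(n, 2), y[2 * n:].reshape(n, 2)
        d = pos[:, None, :] - pos[None, :, :]
        r3 = (np.einsum("ijk,ijk->ij", d, d) + 1e-6) ** 1.5   # softened |r|^3
        acc = -(d / r3[:, :, None]).sum(axis=1)               # unit masses
        return np.concatenate([vel.ravel(), acc.ravel()])

    rng = np.random.default_rng(0)
    y0 = np.concatenate([rng.normal(size=2 * n), 0.1 * rng.normal(size=2 * n)])
    sol = solve_ivp(gravity, (0.0, 1.0), y0, method="DOP853",
                    rtol=1e-10, atol=1e-10)   # 8th-order Dormand–Prince pair
    ```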

  2. Comparison of three 15N methods to correct for microbial contamination when assessing in situ protein degradability of fresh forages.

    Science.gov (United States)

    Kamoun, M; Ammar, H; Théwis, A; Beckers, Y; France, J; López, S

    2014-11-01

    The use of stable (15)N as a marker to determine microbial contamination in nylon bag incubation residues, when estimating protein degradability, was investigated. Three methods using (15)N were compared: (15)N-labeled forage (dilution method, LF), (15)N enrichment of rumen solids-associated bacteria (SAB), and (15)N enrichment of rumen liquid-associated bacteria (LAB). Herbage from forages differing in protein and fiber contents (early-cut Italian ryegrass, late-cut Italian ryegrass, and red clover) was freeze-dried, ground, and then incubated in situ in the rumen of 3 steers for 3, 6, 12, 24, and 48 h using the nylon bag technique. The (15)N-labeled forages were obtained by fertilizing the plots where the herbage was grown with (15)NH4(15)NO3. Unlabeled forages (obtained from plots fertilized with NH4NO3) were incubated at the same time that ((15)NH4)2SO4 was continuously infused into the rumen of the steers, and pellets of labeled SAB and LAB were then isolated by differential centrifugation of samples of ruminal contents. The proportion of bacterial N in the incubation residues increased from 0.09 and 0.45 g bacterial N/g total N at 3 h of incubation to 0.37 and 0.85 g bacterial N/g total N at 48 h of incubation for early-cut and late-cut ryegrass, respectively. There were differences (P < 0.05) among the methods. The uncorrected degradability of the most contaminated forage (late-cut ryegrass) was 0.51, whereas the corrected values were 0.85, 0.84, and 0.77 for the LF, SAB, and LAB methods, respectively. With early-cut ryegrass and red clover, the differences between uncorrected and corrected values ranged between 6% and 13%, with small differences among the labeling methods. Generally, methods using labeled forage or labeled SAB and LAB provided similar corrected degradability values. The accuracy of estimating the extent of degradation of protein in the rumen from in situ disappearance curves is improved when values are corrected for microbial contamination of the bag residue.

  3. The decision optimization of product development by considering the customer demand saturation

    Directory of Open Access Journals (Sweden)

    Qing-song Xing

    2015-05-01

    Purpose: The purpose of this paper is to analyze the impact of over-satisfying customer demands on the product development process, on the basis of a quantitative model of customer demands, development cost and development time, and then to propose a corresponding product development optimization decision. Design/methodology/approach: First, customer demand information is obtained through a survey, and the customer demand weights are quantified using the variation coefficient method. Second, the relationship between customer demands and product development time and cost is analyzed on the basis of quality function deployment, and the corresponding mathematical model is established. On this basis, the concept of customer demand saturation and an optimization decision method for product development are put forward and applied to the notebook development process of a company. Finally, for the saturated case it is shown that strengthening the satisfaction of customer demands is consistent with prioritizing highly weighted demands, and that customer demand saturation is stable under different parameters. Findings: Development cost and time rise sharply when customer demands are over-satisfied. By taking customer demand saturation into account, the relationship between customer demand and development time and cost can be quantified and balanced, and the sequence in which customer demands are met is basically consistent with the customer demand survey results. Originality/value: The paper proposes a model of customer demand saturation and demonstrates the correctness and effectiveness of the resulting product development decision method.
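
    The variation coefficient weighting step mentioned above is a standard construction and can be sketched as follows (the survey matrix and all names are hypothetical):

    ```python
    import numpy as np

    def cv_weights(scores):
        """Weight each customer demand by its coefficient of variation.

        scores -- (n_respondents, n_demands) survey rating matrix
        """
        mu = scores.mean(axis=0)
        cv = scores.std(axis=0, ddof=1) / mu      # dispersion relative to mean
        return cv / cv.sum()                      # normalize to sum to one

    # Three demands rated by five respondents (made-up data):
    scores = np.array([[5, 3, 4],
                       [4, 3, 5],
                       [5, 2, 4],
                       [3, 3, 5],
                       [4, 3, 4]], dtype=float)
    print(cv_weights(scores))   # demands with more disagreement get more weight
    ```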

  4. A GENERALIZED NON-LINEAR METHOD FOR DISTORTION CORRECTION AND TOP-DOWN VIEW CONVERSION OF FISH EYE IMAGES

    Directory of Open Access Journals (Sweden)

    Vivek Singh Bawa

    2017-06-01

    Advanced driver assistance systems (ADAS) have been developed to automate and enhance vehicles for safety and a better driving experience. Among the computer vision modules in ADAS, 360-degree surround view generation of the vehicle's immediate surroundings is particularly important, owing to applications in on-road traffic assistance, parking assistance, etc. This paper presents a novel algorithm for fast and computationally efficient transformation of input fisheye images into the required top-down view, together with a generalized framework for generating the top-down view of images captured by vehicle-mounted fisheye cameras, irrespective of pitch or tilt angle. The proposed approach comprises two major steps: correcting the fisheye images to rectilinear images, and generating the top-view perspective of the corrected images. Images captured through a fisheye lens suffer from barrel distortion, for which a non-linear and non-iterative correction is used. Thereafter, homography is used to obtain the top-down view of the corrected images. The framework targets a wide, low-distortion field of view around the vehicle and a camera-perspective-independent top-down view, at the minimum computational cost, which is essential given the limited computing power available on vehicles.
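
    The two-step pipeline (undistort, then homography onto the ground plane) can be sketched with OpenCV's stock fisheye model standing in for the paper's non-iterative correction; the intrinsics, distortion coefficients, file name, and ground-plane correspondences below are placeholders:

    ```python
    import cv2
    import numpy as np

    # Hypothetical camera matrix K and fisheye distortion coefficients D (k1..k4).
    K = np.array([[420.0, 0.0, 640.0],
                  [0.0, 420.0, 400.0],
                  [0.0, 0.0, 1.0]])
    D = np.array([[-0.05], [0.01], [0.0], [0.0]])

    img = cv2.imread("fisheye_frame.png")      # placeholder input frame
    h, w = img.shape[:2]

    # Step 1: rectify the barrel distortion.
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    rect = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)

    # Step 2: homography from four points on the road to a bird's-eye rectangle.
    src = np.float32([[300, 700], [980, 700], [860, 420], [420, 420]])
    dst = np.float32([[300, 780], [980, 780], [980, 80], [300, 80]])
    H = cv2.getPerspectiveTransform(src, dst)
    top_down = cv2.warpPerspective(rect, H, (w, h))
    ```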

  5. An Improved Dynamical Downscaling Method with GCM Bias Corrections and Its Validation with 30 Years of Climate Simulations

    KAUST Repository

    Xu, Zhongfeng

    2012-09-01

    An improved dynamical downscaling method (IDD) with general circulation model (GCM) bias corrections is developed and assessed over North America. A set of regional climate simulations is performed with the Weather Research and Forecasting Model (WRF) version 3.3 embedded in the National Center for Atmospheric Research's (NCAR's) Community Atmosphere Model (CAM). The GCM climatological means and the amplitudes of the interannual variations are adjusted based on the National Centers for Environmental Prediction (NCEP)-NCAR global reanalysis products (NNRP) before they are used to drive WRF. In this study, the WRF downscaling experiments are identical except for the initial and lateral boundary conditions, which are derived from the NNRP, the original GCM output, and the bias-corrected GCM output, respectively. The analysis finds that IDD greatly improves the downscaled climate in both climatological means and extreme events relative to the traditional dynamical downscaling approach (TDD). The errors of the downscaled climatological mean air temperature, geopotential height, wind vector, moisture, and precipitation are greatly reduced when the GCM bias corrections are applied. At the same time, IDD also improves the downscaled extreme events, as characterized by reduced errors in the 2-yr return levels of surface air temperature and precipitation. In comparison with TDD, IDD is also able to produce a more realistic probability distribution of summer daily maximum temperature over the central U.S.-Canada region, as well as of summer and winter daily precipitation over the middle and eastern United States. © 2012 American Meteorological Society.
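
    A minimal numpy sketch of the kind of adjustment described, assuming the simplest case of matching the climatological mean and interannual standard deviation of a GCM series to a reanalysis series, gridpoint by gridpoint (all names and values are illustrative):

    ```python
    import numpy as np

    def adjust_gcm(gcm, rnl):
        """Rescale a GCM series (time on axis 0) so that its climatological
        mean and interannual standard deviation match a reanalysis series."""
        g_mean, r_mean = gcm.mean(axis=0), rnl.mean(axis=0)
        g_std, r_std = gcm.std(axis=0), rnl.std(axis=0)
        return r_mean + (gcm - g_mean) * (r_std / g_std)

    # Synthetic example: 30 years of annual means over a 10 x 10 grid.
    rng = np.random.default_rng(0)
    gcm = 285.0 + 2.0 * rng.standard_normal((30, 10, 10))  # warm, too variable
    rnl = 283.0 + 1.2 * rng.standard_normal((30, 10, 10))
    corrected = adjust_gcm(gcm, rnl)
    ```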

  6. cgCorrect: a method to correct for confounding cell-cell variation due to cell growth in single-cell transcriptomics

    Science.gov (United States)

    Blasi, Thomas; Buettner, Florian; Strasser, Michael K.; Marr, Carsten; Theis, Fabian J.

    2017-06-01

    Accessing gene expression at the single-cell level has unraveled often large heterogeneity among seemingly homogeneous cells, which remains obscured in traditional population-based approaches. The computational analysis of single-cell transcriptomics data, however, still poses unresolved challenges with respect to normalization, visualization and modeling of the data. One such issue is differences in cell size, which introduce additional variability into the data and for which appropriate normalization techniques are needed. Otherwise, these differences in cell size may obscure genuine heterogeneities among cell populations and lead to overdispersed steady-state distributions of mRNA transcript numbers. We present cgCorrect, a statistical framework to correct for differences in cell size that are due to cell growth in single-cell transcriptomics data. We derive the probability of the cell-growth-corrected mRNA transcript number given the measured, cell-size-dependent mRNA transcript number, based on the assumption that the average number of transcripts in a cell increases proportionally to the cell's volume during the cell cycle. cgCorrect can be used both for data normalization and to analyze the steady-state distributions used to infer the gene expression mechanism. We demonstrate its applicability on simulated data, on single-cell quantitative real-time polymerase chain reaction (PCR) data from mouse blood stem and progenitor cells, and on quantitative single-cell RNA-sequencing data obtained from mouse embryonic stem cells. We show that correcting for differences in cell size affects the interpretation of the data obtained by typically performed computational analyses.

  7. Gluon saturation beyond (naive) leading logs

    Energy Technology Data Exchange (ETDEWEB)

    Beuf, Guillaume

    2014-12-15

    An improved version of the Balitsky–Kovchegov equation is presented, with a consistent treatment of kinematics. That improvement allows one to resum the most severe of the large higher-order corrections that plague the conventional versions of high-energy evolution equations with approximate kinematics. This result represents a further step towards bringing high-energy QCD scattering processes under control beyond strict leading-logarithmic accuracy and with gluon saturation effects.
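
    For background, the conventional leading-order Balitsky–Kovchegov equation that this improvement starts from can be written in standard dipole notation as follows (this is the textbook form, not the improved equation of the record):

    ```latex
    \frac{\partial N(\mathbf{x}_{01}, Y)}{\partial Y}
      = \frac{\bar{\alpha}_s}{2\pi} \int \mathrm{d}^2\mathbf{x}_2\,
        \frac{x_{01}^2}{x_{02}^2\, x_{12}^2}
        \Bigl[ N(\mathbf{x}_{02},Y) + N(\mathbf{x}_{12},Y)
             - N(\mathbf{x}_{01},Y)
             - N(\mathbf{x}_{02},Y)\, N(\mathbf{x}_{12},Y) \Bigr]
    ```

    Here N is the dipole scattering amplitude, Y the rapidity, and the nonlinear term is what generates gluon saturation.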

  8. Should methods of correction for multiple comparisons be applied in pharmacovigilance?

    Directory of Open Access Journals (Sweden)

    Lorenza Scotti

    2015-12-01

    Purpose. In pharmacovigilance, spontaneous reporting databases are devoted to the early detection of adverse event ‘signals’ for marketed drugs. A common limitation of these systems is the large number of concurrently investigated associations, which implies a high probability of generating positive signals simply by chance. However, it is not clear whether methods that adjust for the multiple testing problem are needed when at least some of the drug-outcome relationships under study are already known. To this aim we applied a robust estimation method for the false discovery rate (rFDR) particularly suitable in the pharmacovigilance context. Methods. We exploited the data available for the SAFEGUARD project to apply the rFDR estimation method to detect potential false positive signals of adverse reactions attributable to the use of non-insulin blood glucose lowering drugs. Specifically, the number of signals generated from the conventional disproportionality measures was compared before and after application of the rFDR adjustment method. Results. Among the 311 evaluable pairs (i.e., drug-event pairs with at least one adverse event report), 106 (34%) signals were considered significant in the conventional analysis. Among them, 1 proved to be a false positive signal according to the rFDR method. Conclusions. The results of this study suggest that when a restricted number of drug-outcome pairs is considered and warnings about some of them are already known, multiple comparison methods for recognizing false positive signals are not as useful as theoretical considerations would suggest.
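
    For orientation, a plain Benjamini-Hochberg FDR screen, the textbook relative of the rFDR procedure discussed here (the rFDR estimator itself is not reproduced; this only illustrates how an FDR threshold prunes a list of disproportionality p-values):

    ```python
    import numpy as np

    def benjamini_hochberg(pvals, q=0.05):
        """Return a boolean mask of the signals kept at FDR level q."""
        p = np.asarray(pvals, dtype=float)
        order = np.argsort(p)
        m = p.size
        thresh = q * np.arange(1, m + 1) / m       # BH step-up thresholds
        passed = p[order] <= thresh
        keep = np.zeros(m, dtype=bool)
        if passed.any():
            k = np.max(np.nonzero(passed)[0])      # largest i with p_(i) <= iq/m
            keep[order[: k + 1]] = True
        return keep

    # Toy example: a few strong signals among many null drug-event pairs.
    rng = np.random.default_rng(1)
    pvals = np.concatenate([rng.uniform(size=300), [1e-6, 1e-5, 2e-4]])
    print(benjamini_hochberg(pvals).sum(), "signals kept")
    ```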

  9. A new correction method serving to eliminate the parabola effect of flatbed scanners used in radiochromic film dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Poppinga, D., E-mail: daniela.poppinga@uni-oldenburg.de; Schoenfeld, A. A.; Poppe, B. [Medical Radiation Physics, Carl v. Ossietzky University, Oldenburg 26127, Germany and Department for Radiation Oncology, Pius Hospital, Oldenburg 26121 (Germany); Doerner, K. J. [Radiotherapy Department, General Hospital, Celle 29223 (Germany); Blanck, O. [CyberKnife Center Northern Germany, Güstrow 18273, Germany and Department for Radiation Oncology, University Clinic Schleswig-Holstein, Lübeck 23562 (Germany); Harder, D. [Medical Physics and Biophysics, Georg-August-University, Göttingen 37073 (Germany)

    2014-02-15

    Purpose: The purpose of this study is the correction of the lateral scanner artifact, i.e., the effect that, on a large homogeneously exposed EBT3 film, a flatbed scanner measures different optical densities at different positions along the x axis, the axis parallel to the elongated light source. At constant dose, the measured optical density profiles along this axis have a parabolic shape with significant dose-dependent curvature; the effect is therefore shortly called the parabola effect. The objective of the algorithm developed in this study is to correct for the parabola effect: any optical density measured at a given position x is transformed into the equivalent optical density c at the apex of the parabola and then converted into the corresponding dose via the calibration of c versus dose. Methods: For the present study, EBT3 films and an Epson 10000XL scanner including a transparency unit were used for the analysis of the parabola effect. The films were irradiated with 6 MV photons from an Elekta Synergy accelerator in an RW3 slab phantom. In order to quantify the effect, ten film pieces with doses graded from 0 to 20.9 Gy were sequentially scanned at eight positions along the x axis and at six positions along the z axis (the movement direction of the light source), both for the portrait and landscape film orientations. In order to test the effectiveness of the new correction algorithm, the dose profiles of an open square field and an IMRT plan were measured by EBT3 films and compared with ionization chamber and ionization chamber array measurements. Results: The parabola effect has been numerically studied over the whole measuring field of the Epson 10000XL scanner for doses up to 20.9 Gy and for both film orientations. The presented algorithm transforms any optical density at position x into the equivalent optical density that would be measured at the same dose at the apex of the parabola. This correction method has been validated up to doses of 5.2 Gy all over the
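
    A sketch of how such an apex-equivalent transformation can look, under the simple assumption that the parabola's curvature grows linearly with the apex optical density (this functional form and the constants a0, a1 are assumptions for illustration, not the authors' published fit):

    ```python
    import numpy as np

    def apex_equivalent_od(od, x, x0=0.0, a0=1.0e-6, a1=5.0e-6):
        """Map an optical density measured at lateral position x (mm) to the
        equivalent value c at the parabola apex x0, assuming
        OD(x) = c + (a0 + a1 * c) * (x - x0)**2."""
        d2 = (x - x0) ** 2
        return (od - a0 * d2) / (1.0 + a1 * d2)

    # Calibration sketch: for each uniformly exposed strip, fit a parabola
    # with np.polyfit(x, od, 2), then regress its curvature on the apex OD
    # to obtain a0 and a1 before applying the transform above.
    print(apex_equivalent_od(od=0.62, x=80.0))
    ```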

  10. A new correction method serving to eliminate the parabola effect of flatbed scanners used in radiochromic film dosimetry

    International Nuclear Information System (INIS)

    Poppinga, D.; Schoenfeld, A. A.; Poppe, B.; Doerner, K. J.; Blanck, O.; Harder, D.

    2014-01-01

    Purpose: The purpose of this study is the correction of the lateral scanner artifact, i.e., the effect that, on a large homogeneously exposed EBT3 film, a flatbed scanner measures different optical densities at different positions along the x axis, the axis parallel to the elongated light source. At constant dose, the measured optical density profiles along this axis have a parabolic shape with significant dose-dependent curvature; the effect is therefore shortly called the parabola effect. The objective of the algorithm developed in this study is to correct for the parabola effect: any optical density measured at a given position x is transformed into the equivalent optical density c at the apex of the parabola and then converted into the corresponding dose via the calibration of c versus dose. Methods: For the present study, EBT3 films and an Epson 10000XL scanner including a transparency unit were used for the analysis of the parabola effect. The films were irradiated with 6 MV photons from an Elekta Synergy accelerator in an RW3 slab phantom. In order to quantify the effect, ten film pieces with doses graded from 0 to 20.9 Gy were sequentially scanned at eight positions along the x axis and at six positions along the z axis (the movement direction of the light source), both for the portrait and landscape film orientations. In order to test the effectiveness of the new correction algorithm, the dose profiles of an open square field and an IMRT plan were measured by EBT3 films and compared with ionization chamber and ionization chamber array measurements. Results: The parabola effect has been numerically studied over the whole measuring field of the Epson 10000XL scanner for doses up to 20.9 Gy and for both film orientations. The presented algorithm transforms any optical density at position x into the equivalent optical density that would be measured at the same dose at the apex of the parabola. This correction method has been validated up to doses of 5.2 Gy all over the

  11. Theoretical and experimental analysis of electroweak corrections to the inclusive jet process. Development of extreme topologies detection methods

    International Nuclear Information System (INIS)

    Meric, Nicolas

    2013-01-01

    We have studied the behaviour of the inclusive jet, W+jets and Z+jets processes from the phenomenological and experimental points of view in the ATLAS experiment at the LHC, in order to understand the importance of Sudakov logarithms in the electroweak corrections and in the associated production of weak vector bosons and jets at the LHC. We have computed the amplitude of the real electroweak corrections to the inclusive jet process due to the real emission of weak vector bosons from jets, using the MCFM and NLOjet++ generators at 7 TeV, 8 TeV and 14 TeV. This study shows that, for the inclusive jet process, a partial cancellation of the virtual weak corrections (due to weak bosons in loops) by the real electroweak corrections occurs, which means that the Bloch-Nordsieck violation is reduced for this process. We then participated in the measurement of the differential cross-sections for these different processes in the ATLAS experiment at 7 TeV; in particular, we were involved in technical aspects of the measurement, such as the study of the QCD background to the W+jets process in the muon channel. We then combined the different measurements in this channel to compare their behaviour. This tends to show that several effects give the electroweak corrections their relative importance, since the relative contribution of weak-boson-plus-jets processes to the inclusive jet process increases with the transverse momentum of the jets when electroweak bosons are explicitly required in the final state. This is currently only a preliminary study, aimed at showing that such an analysis can be useful for investigating the underlying structure of these processes. Finally, we have studied the noise affecting the ATLAS calorimeter. This has allowed the development of a new way of detecting problematic events using well-known theorems from statistics. This new method is able to detect bursts of noise and

  12. A combined method to calculate co-seismic displacements through strong motion acceleration baseline correction

    Science.gov (United States)

    Zhan, W.; Sun, Y.

    2015-12-01

    High-frequency strong motion data, especially near-field acceleration data, have been recorded widely by different observation station networks around the world. Owing to tilting and many other causes, recordings from these seismometers usually suffer from baseline drift when a big earthquake happens, and it is hard to obtain a reasonable and precise co-seismic displacement through simple double integration. Here we present a combined method using the wavelet transform and several simple linear procedures. Owing to the lack of dense high-rate GNSS data in most regions of the world, we do not include GNSS data in the method itself but use it as a benchmark for evaluating our results. This semi-automatic method unpacks a raw signal into two portions, a summation of high ranks and a summation of low ranks, using a cubic B-spline wavelet decomposition procedure. Independent linear treatments are applied to these two summations, which are then recombined to recover a usable and reasonable result. We use data from the 2008 Wenchuan earthquake, choosing stations with a nearby GPS recording, to validate this method; nearly all of them yield co-seismic displacements compatible with the GPS stations or field surveys. Since seismometer stations and GNSS stations of the Chinese observation networks are sometimes quite far from each other, we also test this method on other earthquakes (the 1999 Chi-Chi earthquake and the 2011 Tohoku earthquake). For the 2011 Tohoku earthquake, we introduce GPS recordings into the combined method, thanks to the dense GNSS network in Japan.
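
    A compressed sketch of the decompose-detrend-recombine-integrate chain, using a Daubechies wavelet from PyWavelets as a stand-in for the cubic B-spline wavelet and a single linear detrend as a stand-in for the paper's several linear steps (all parameters are illustrative):

    ```python
    import numpy as np
    import pywt
    from scipy.integrate import cumulative_trapezoid

    def baseline_corrected_displacement(acc, dt, wavelet="db4", level=6):
        """Split an acceleration record into low- and high-frequency parts,
        remove a linear trend from the low part, recombine, and double
        integrate to displacement."""
        coeffs = pywt.wavedec(acc, wavelet, level=level)
        # Reconstruct the low-frequency part from the approximation only.
        low = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]],
                           wavelet)[: len(acc)]
        high = acc - low
        t = np.arange(len(acc)) * dt
        low_detrended = low - np.polyval(np.polyfit(t, low, 1), t)
        corrected = high + low_detrended
        vel = cumulative_trapezoid(corrected, dx=dt, initial=0.0)
        return cumulative_trapezoid(vel, dx=dt, initial=0.0)
    ```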

  13. Monte Carlo and experimental evaluation of accuracy and noise properties of two scatter correction methods for SPECT

    International Nuclear Information System (INIS)

    Narita, Y.; Eberl, S.; Bautovich, G.; Iida, H.; Hutton, B.F.; Braun, M.; Nakamura, T.

    1996-01-01

    Scatter correction is a prerequisite for quantitative SPECT, but potentially increases noise. Monte Carlo simulations (EGS4) and physical phantom measurements were used to compare the accuracy and noise properties of two scatter correction techniques: the triple-energy window (TEW) and the transmission-dependent convolution subtraction (TDCS) techniques. Two scatter functions were investigated for TDCS: (i) the originally proposed mono-exponential function (TDCS-mono) and (ii) an exponential plus Gaussian scatter function (TDCS-Gauss), demonstrated to be superior by our Monte Carlo simulations. Signal-to-noise ratio (S/N) and accuracy were investigated in cylindrical phantoms and a chest phantom. Results from each method were compared to the true primary counts (simulations) or known activity concentrations (phantom studies). 99mTc was used in all cases. The optimized TDCS-Gauss method performed best overall, with an accuracy of better than 4% for all simulations and physical phantom studies. Maximum errors for TEW and TDCS-mono of -30% and -22%, respectively, were observed in the heart chamber of the simulated chest phantom. TEW had the worst S/N ratio of the three techniques. The S/N ratios of the two TDCS methods were similar and only slightly lower than those of simulated true primary data. Thus, accurate quantitation can be obtained with TDCS-Gauss, with a relatively small reduction in S/N ratio. (author)
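
    The TEW estimate itself is essentially a one-liner; a minimal version of the standard three-window formula (the window widths and counts below are placeholders, not values from this study):

    ```python
    def tew_primary(c_peak, c_left, c_right, w_peak, w_left, w_right):
        """Triple-energy-window scatter estimate: a trapezoid under the
        photopeak spanned by the two narrow flanking windows."""
        scatter = (c_left / w_left + c_right / w_right) * w_peak / 2.0
        return c_peak - scatter, scatter

    # 99mTc example: 20% photopeak window at 140 keV, two 3 keV subwindows.
    primary, scatter = tew_primary(c_peak=120000, c_left=9000, c_right=1500,
                                   w_peak=28.0, w_left=3.0, w_right=3.0)
    print(primary, scatter)
    ```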

  14. Application of the two-dose-rate method for general recombination correction for liquid ionization chambers in continuous beams

    International Nuclear Information System (INIS)

    Andersson, Jonas; Toelli, Heikki

    2011-01-01

    A method to correct for general recombination losses in liquid ionization chambers in continuous beams has been developed. The proposed method is derived from Greening's theory for continuous beams and is based on measuring the signal from a liquid ionization chamber and an air-filled monitor ionization chamber at two different dose rates. The method was tested with two plane-parallel liquid ionization chambers in a continuous x-ray beam with a tube voltage of 120 kV and dose rates between 2 and 13 Gy min^-1. The liquids used as sensitive media in the chambers were isooctane (C8H18) and tetramethylsilane (Si(CH3)4). The general recombination effect was studied using chamber polarizing voltages of 100, 300, 500, 700 and 900 V for both liquids. The relative standard deviation of the results for the collection efficiency with respect to general recombination was found to be at most 0.7% for isooctane and 2.4% for tetramethylsilane. The results are in excellent agreement with Greening's theory for collection efficiencies over 90%. The measured and corrected signals from the liquid ionization chambers used in this work are in very good agreement with the air-filled monitor chamber with respect to signal-to-dose linearity.
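
    A schematic of the two-dose-rate idea, assuming a Greening-type near-saturation form 1/f = 1 + g*d with d proportional to the monitor reading (this is a reconstruction of the principle for illustration, not the authors' exact equations):

    ```python
    import numpy as np

    def two_dose_rate(m1, m2, d1, d2):
        """Solve m_i * (1 + g * d_i) = c * d_i for the calibration factor c
        and recombination parameter g, given liquid-chamber signals m_i at
        two monitor dose rates d_i; return c, g and both efficiencies."""
        A = np.array([[d1, -m1 * d1],
                      [d2, -m2 * d2]])
        c, g = np.linalg.solve(A, np.array([m1, m2]))
        return c, g, 1.0 / (1.0 + g * d1), 1.0 / (1.0 + g * d2)

    # Example: 2 and 13 Gy/min with a few percent recombination loss
    # (synthetic signals generated with c = 1, g = 0.01):
    print(two_dose_rate(m1=1.960784, m2=11.504425, d1=2.0, d2=13.0))
    ```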

  15. Assessing species saturation: conceptual and methodological challenges.

    Science.gov (United States)

    Olivares, Ingrid; Karger, Dirk N; Kessler, Michael

    2018-05-07

    Is there a maximum number of species that can coexist? Intuitively, we assume an upper limit to the number of species in a given assemblage, or that a lineage can produce, but defining and testing this limit has proven problematic. Herein, we first outline seven general challenges of studies on species saturation, most of which are independent of the actual method used to assess saturation. Among these are the challenge of defining saturation conceptually and operationally, the importance of setting an appropriate referential system, and the need to discriminate among patterns, processes and mechanisms. Second, we list and discuss the methodological approaches that have been used to study species saturation. These approaches vary in time and spatial scales, and in the variables and assumptions needed to assess saturation. We argue that assessing species saturation is possible, but that many studies conducted to date have conceptual and methodological flaws that prevent us from currently attaining a good idea of the occurrence of species saturation. © 2018 Cambridge Philosophical Society.

  16. A chemometric method for correcting FTIR spectra of biomaterials for interference from water in KBr discs

    Science.gov (United States)

    FTIR analysis of solid biomaterials by the familiar KBr disc technique is very often frustrated by water interference in the important protein (amide I) and carbohydrate (hydroxyl) regions of their spectra. A method was therefore devised that overcomes the difficulty and measures FTIR spectra of so...

  17. Soft wheat and flour products methods review: solvent retention capacity equation correction

    Science.gov (United States)

    This article discusses the results of a significant change to calculations made within AACCI Approved methods 56-10 and 56-11, the Alkaline Water Retention Capacity (AWRC) test and the Solvent Retention Capacity (SRC) test. The AACCI Soft Wheat and Flour Products Technical Committee reviewed propos...

  18. Hirudotherapy as a method of hemoreological disorders correction efficiency on osteoarthrosis

    Directory of Open Access Journals (Sweden)

    Karimova D.J.

    2017-12-01

    The aim: to study the effectiveness of hirudotherapy for osteoarthrosis as a pathogenetic therapy, in view of the role of vascular disorders in the disease pathogenesis. Material and Methods. 81 patients with osteoarthrosis, predominantly gonarthrosis, aged 36 to 76 years and with a disease duration from 1 to 15 years, were under observation. Clinical and laboratory studies (those generally accepted in patients with osteoarthrosis) were performed; special studies included biomicroscopy of the bulbar conjunctiva and rheovasography of the lower limbs. Results. Hirudotherapy is a clinically effective method for treating the articular syndrome in various types of osteoarthrosis. It is advisable to use the method in patients with a disease duration of up to 15 years, with severe pain syndrome, concomitant peripheral vein pathology, and hypertension. Conclusion. The influence of hirudotherapy on the state of microcirculation in patients with osteoarthrosis was studied for the first time, with evaluation of the clinical effect together with rheovasography and conjunctival biomicroscopy, confirmed by instrumental methods. The greatest clinical efficacy was achieved with a high degree of local inflammatory activity, which once again emphasizes the role of microhemocirculation disorders in the development of synovitis in osteoarthrosis.

  19. A statistical method for correcting salinity observations from autonomous profiling floats: An ARGO perspective

    Digital Repository Service at National Institute of Oceanography (India)

    Durand, F.; Reverdin, G.

    the second consists of a least-squares adjustment of the data from the various floats. The authors' method exhibits good skill in retrieving the proper hydrological structure of the case study area. It significantly improves the consistency of the PALACE dataset...

  20. A meshless scheme for incompressible fluid flow using a velocity-pressure correction method

    KAUST Repository

    Bourantas, Georgios; Loukopoulos, Vassilios C.

    2013-01-01

    A meshless point collocation method is proposed for the numerical solution of the steady state, incompressible Navier-Stokes (NS) equations in their primitive u-v-p formulation. The flow equations are solved in their strong form using either a

  1. A Semi-Analytical Method for Rapid Estimation of Near-Well Saturation, Temperature, Pressure and Stress in Non-Isothermal CO2 Injection

    Science.gov (United States)

    LaForce, T.; Ennis-King, J.; Paterson, L.

    2015-12-01

    Reservoir cooling near the wellbore is expected when fluids are injected into a reservoir or aquifer in CO2 storage, enhanced oil or gas recovery, enhanced geothermal systems, and water injection for disposal. Ignoring thermal effects near the well can lead to under-prediction of changes in reservoir pressure and stress due to competition between increased pressure and contraction of the rock in the cooled near-well region. In this work a previously developed semi-analytical model for immiscible, nonisothermal fluid injection is generalised to include partitioning of components between two phases. Advection-dominated radial flow is assumed so that the coupled two-phase flow and thermal conservation laws can be solved analytically. The temperature and saturation profiles are used to find the increase in reservoir pressure, tangential, and radial stress near the wellbore in a semi-analytical, forward-coupled model. Saturation, temperature, pressure, and stress profiles are found for parameters representative of several CO2 storage demonstration projects around the world. General results on maximum injection rates vs depth for common reservoir parameters are also presented. Prior to drilling an injection well there is often little information about the properties that will determine the injection rate that can be achieved without exceeding fracture pressure, yet injection rate and pressure are key parameters in well design and placement decisions. Analytical solutions to simplified models such as these can quickly provide order of magnitude estimates for flow and stress near the well based on a range of likely parameters.

  2. Research for correction pre-operative MRI images of brain during operation using particle method simulation

    International Nuclear Information System (INIS)

    Shino, Ryosaku; Koshizuka, Seiichi; Sakai, Mikio; Ito, Hirotaka; Iseki, Hiroshi; Muragaki, Yoshihiro

    2010-01-01

    In neurosurgical procedures, the surgeon formulates a surgery plan based on pre-operative images such as MRI. However, the brain is deformed by removal of the affected area. In this paper, we propose a method for reconstructing pre-operative images to account for this deformation using physical simulation. First, the domain of the brain is identified in the pre-operative images. Second, we create particles for the physical simulation. Then, we carry out a linear elastic simulation taking gravity into account. Finally, we reconstruct the pre-operative images with the deformation according to the movement of the particles. We show the effectiveness of this method by reconstructing a pre-operative image actually taken before surgery. (author)

  3. Advanced Corrections of Hydrogen Bonding and Dispersion for Semiempirical Quantum Mechanical Methods

    Czech Academy of Sciences Publication Activity Database

    Řezáč, Jan; Hobza, Pavel

    2012-01-01

    Roč. 8, č. 1 (2012), s. 141-151 ISSN 1549-9618 Grant - others: European Social Fund(XE) CZ.1.05/2.1.00/03.0058 Institutional research plan: CEZ:AV0Z40550506 Keywords: tight-binding method * noncovalent complexes * base pairs * interaction energies Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 5.389, year: 2012

  4. An evaluation method for tornado missile strike probability with stochastic correction

    International Nuclear Information System (INIS)

    Eguchi, Yuzuru; Murakami, Takahiro; Hirakuchi, Hiromaru; Sugimoto, Soichiro; Hattori, Yasuo

    2017-01-01

    An efficient evaluation method for the probability of a tornado missile strike that does not use the Monte Carlo method is proposed in this paper. A major part of the proposed probability evaluation is based on numerical results computed using an in-house tornado-borne missile analysis code, which enables us to evaluate the liftoff and flight behaviors of unconstrained objects on the ground driven by a tornado. Using this code, we obtain a stochastic correlation between the local wind speed and the flight distance of each object, and this stochastic correlation is used to evaluate the conditional strike probability Q_V(r) of a missile located at position r, where the local wind speed is V. Meanwhile, the annual exceedance probability of the local wind speed, which can be computed using a tornado hazard analysis code, is used to derive the probability density function p(V). We finally obtain the annual probability of a tornado missile strike on a structure by integrating the product of Q_V(r) and p(V) over V. The evaluation method is applied to a simple problem to qualitatively confirm its validity, and to quantitatively verify the results for two extreme cases in which an object is located either in the immediate vicinity of, or far away from, the structure
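
    The final step reduces to a one-dimensional integral, P_strike = ∫ Q_V(r) p(V) dV, which is straightforward to evaluate numerically; in the sketch below the hazard pdf and the conditional-strike curve are placeholders, not outputs of the cited codes:

    ```python
    import numpy as np

    V = np.linspace(0.0, 120.0, 241)                 # local wind speed, m/s
    p = np.exp(-V / 15.0) / 15.0                     # placeholder hazard pdf p(V)
    Q = 1.0 / (1.0 + np.exp(-(V - 60.0) / 5.0))      # placeholder Q_V(r)

    dV = V[1] - V[0]
    annual_strike_probability = np.sum(Q * p) * dV   # rectangle-rule integral
    print(annual_strike_probability)
    ```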

  5. RCP: a novel probe design bias correction method for Illumina Methylation BeadChip.

    Science.gov (United States)

    Niu, Liang; Xu, Zongli; Taylor, Jack A

    2016-09-01

    The Illumina HumanMethylation450 BeadChip has been extensively utilized in epigenome-wide association studies. This array and its successor, the MethylationEPIC array, use two types of probes, Infinium I (type I) and Infinium II (type II), in order to increase genome coverage, but differences in probe chemistry result in different distributions of methylation values for type I and type II probes. Ignoring the difference in distributions between the two probe types may bias downstream analysis. Here, we developed a novel method, called Regression on Correlated Probes (RCP), which uses the existing correlation between pairs of nearby type I and type II probes to adjust the beta values of all type II probes. We evaluate the effect of this adjustment on reducing probe design type bias, reducing technical variation in duplicate samples, improving the accuracy of measurements against known standards, and retention of biological signal. We find that RCP is statistically significantly better than unadjusted data or adjustment with alternative methods including SWAN and BMIQ. We incorporated the method into the R package ENmix, which is freely available from the Bioconductor website (https://www.bioconductor.org/packages/release/bioc/html/ENmix.html). Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.

  6. An evaluation method for tornado missile strike probability with stochastic correction

    Energy Technology Data Exchange (ETDEWEB)

    Eguchi, Yuzuru; Murakami, Takahiro; Hirakuchi, Hiromaru; Sugimoto, Soichiro; Hattori, Yasuo [Nuclear Risk Research Center (External Natural Event Research Team), Central Research Institute of Electric Power Industry, Abiko (Japan)

    2017-03-15

    An efficient evaluation method for the probability of a tornado missile strike that does not use the Monte Carlo method is proposed in this paper. A major part of the proposed probability evaluation is based on numerical results computed using an in-house tornado-borne missile analysis code, which enables us to evaluate the liftoff and flight behaviors of unconstrained objects on the ground driven by a tornado. Using this code, we obtain a stochastic correlation between the local wind speed and the flight distance of each object, and this stochastic correlation is used to evaluate the conditional strike probability Q_V(r) of a missile located at position r, where the local wind speed is V. Meanwhile, the annual exceedance probability of the local wind speed, which can be computed using a tornado hazard analysis code, is used to derive the probability density function p(V). We finally obtain the annual probability of a tornado missile strike on a structure by integrating the product of Q_V(r) and p(V) over V. The evaluation method is applied to a simple problem to qualitatively confirm its validity, and to quantitatively verify the results for two extreme cases in which an object is located either in the immediate vicinity of, or far away from, the structure.

  7. Different partial volume correction methods lead to different conclusions: An (18)F-FDG-PET study of aging.

    Science.gov (United States)

    Greve, Douglas N; Salat, David H; Bowen, Spencer L; Izquierdo-Garcia, David; Schultz, Aaron P; Catana, Ciprian; Becker, J Alex; Svarer, Claus; Knudsen, Gitte M; Sperling, Reisa A; Johnson, Keith A

    2016-05-15

    A cross-sectional group study of the effects of aging on brain metabolism as measured with (18)F-FDG-PET was performed using several different partial volume correction (PVC) methods: no correction (NoPVC), Meltzer (MZ), Müller-Gärtner (MG), and the symmetric geometric transfer matrix (SGTM), using 99 subjects aged 65-87 years from the Harvard Aging Brain study. Sensitivity to parameter selection was tested for MZ and MG. The various methods and parameter settings resulted in an extremely wide range of conclusions as to the effects of age on metabolism, from almost no changes to virtually all of cortical regions showing a decrease with age. Simulations showed that NoPVC had significant bias that made the age effect on metabolism appear to be much larger and more significant than it is. MZ was found to be the same as NoPVC for liberal brain masks; for conservative brain masks, MZ showed few areas correlated with age. MG and SGTM were found to be similar; however, MG was sensitive to a thresholding parameter that can result in data loss. CSF uptake was surprisingly high, at about 15% of that in gray matter. The exclusion of CSF from SGTM and MG models, which is almost universally done, caused a substantial loss in the power to detect age-related changes. This diversity of results reflects the literature on the metabolism of aging and suggests that extreme care should be taken when applying PVC or interpreting results that have been corrected for partial volume effects. Using the SGTM, significant age-related changes of about 7% per decade were found in frontal and cingulate cortices as well as primary visual and insular cortices. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Different Partial Volume Correction Methods Lead to Different Conclusions: an 18F-FDG PET Study of Aging

    Science.gov (United States)

    Greve, Douglas N.; Salat, David H.; Bowen, Spencer L.; Izquierdo-Garcia, David; Schultz, Aaron P.; Catana, Ciprian; Becker, J. Alex; Svarer, Claus; Knudsen, Gitte; Sperling, Reisa A.; Johnson, Keith A.

    2016-01-01

    A cross-sectional group study of the effects of aging on brain metabolism as measured with 18F-FDG PET was performed using several different partial volume correction (PVC) methods: no correction (NoPVC), Meltzer (MZ), Müller-Gärtner (MG), and the symmetric geometric transfer matrix (SGTM) using 99 subjects aged 65-87 from the Harvard Aging Brain study. Sensitivity to parameter selection was tested for MZ and MG. The various methods and parameter settings resulted in an extremely wide range of conclusions as to the effects of age on metabolism, from almost no changes to virtually all of cortical regions showing a decrease with age. Simulations showed that NoPVC had significant bias that made the age effect on metabolism appear to be much larger and more significant than it is. MZ was found to be the same as NoPVC for liberal brain masks; for conservative brain masks, MZ showed few areas correlated with age. MG and SGTM were found to be similar; however, MG was sensitive to a thresholding parameter that can result in data loss. CSF uptake was surprisingly high at about 15% of that in gray matter. Exclusion of CSF from SGTM and MG models, which is almost universally done, caused a substantial loss in the power to detect age-related changes. This diversity of results reflects the literature on the metabolism of aging and suggests that extreme care should be taken when applying PVC or interpreting results that have been corrected for partial volume effects. Using the SGTM, significant age-related changes of about 7% per decade were found in frontal and cingulate cortices as well as primary visual and insular cortices. PMID:26915497

  9. A new correction method serving to eliminate the parabola effect of flatbed scanners used in radiochromic film dosimetry.

    Science.gov (United States)

    Poppinga, D; Schoenfeld, A A; Doerner, K J; Blanck, O; Harder, D; Poppe, B

    2014-02-01

    The purpose of this study is the correction of the lateral scanner artifact, i.e., the effect that, on a large homogeneously exposed EBT3 film, a flatbed scanner measures different optical densities at different positions along the x axis, the axis parallel to the elongated light source. At constant dose, the measured optical density profiles along this axis have a parabolic shape with significant dose dependent curvature. Therefore, the effect is shortly called the parabola effect. The objective of the algorithm developed in this study is to correct for the parabola effect. Any optical density measured at given position x is transformed into the equivalent optical density c at the apex of the parabola and then converted into the corresponding dose via the calibration of c versus dose. For the present study EBT3 films and an Epson 10000XL scanner including transparency unit were used for the analysis of the parabola effect. The films were irradiated with 6 MV photons from an Elekta Synergy accelerator in a RW3 slab phantom. In order to quantify the effect, ten film pieces with doses graded from 0 to 20.9 Gy were sequentially scanned at eight positions along the x axis and at six positions along the z axis (the movement direction of the light source) both for the portrait and landscape film orientations. In order to test the effectiveness of the new correction algorithm, the dose profiles of an open square field and an IMRT plan were measured by EBT3 films and compared with ionization chamber and ionization chamber array measurement. The parabola effect has been numerically studied over the whole measuring field of the Epson 10000XL scanner for doses up to 20.9 Gy and for both film orientations. The presented algorithm transforms any optical density at position x into the equivalent optical density that would be measured at the same dose at the apex of the parabola. This correction method has been validated up to doses of 5.2 Gy all over the scanner bed with 2D dose

  10. Quantitative MR thermometry based on phase-drift correction PRF shift method at 0.35 T.

    Science.gov (United States)

    Chen, Yuping; Ge, Mengke; Ali, Rizwan; Jiang, Hejun; Huang, Xiaoyan; Qiu, Bensheng

    2018-04-10

    Noninvasive magnetic resonance thermometry (MRT) at low field using the proton resonance frequency shift (PRFS) is a promising technique for monitoring ablation temperature, since low-field MR scanners with an open configuration are more suitable for interventional procedures than closed systems. In this study, phase-drift correction PRFS with first-order polynomial fitting was proposed to investigate the feasibility and accuracy of quantitative MR thermography during hyperthermia procedures in a 0.35 T open MR scanner. Unheated phantom and ex vivo porcine liver experiments were performed to evaluate the optimal polynomial order for the phase-drift correction PRFS. The temperature estimation approach was tested in brain temperature experiments on three healthy volunteers at room temperature, and in ex vivo porcine liver microwave ablation experiments. The output power of the microwave generator was set at 40 W for 330 s. In the unheated experiments, the temperature root mean square error (RMSE) in the inner region of interest was calculated to assess the best-fitting polynomial order. For the ablation experiments, the relative temperature difference profile measured by the phase-drift correction PRFS was compared with the temperature changes recorded by a fiber optic temperature probe near the microwave ablation antenna within the target thermal region. The phase-drift correction PRFS using first-order polynomial fitting achieved the smallest temperature RMSE in the unheated phantom, ex vivo porcine liver and in vivo human brain experiments. In the ex vivo porcine liver microwave ablation procedure, the temperature error between MRT and the fiber optic probe was less than 2 °C for all but six temperature points. Overall, the RMSE over all temperature points was 1.49 °C. Both in vivo and ex vivo experiments showed that MR thermometry based on the phase-drift correction PRFS with first-order polynomial fitting can be applied to monitor temperature changes during
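
    A compact sketch of the two ingredients, first-order (planar) phase-drift removal over unheated pixels followed by the standard PRFS conversion (the PRF constants are textbook values; the array names and acquisition parameters are placeholders):

    ```python
    import numpy as np

    GAMMA = 2 * np.pi * 42.576e6   # proton gyromagnetic ratio, rad/(s*T)
    ALPHA = -0.01e-6               # PRF thermal coefficient, fraction per degC

    def prfs_temperature_change(phase, phase_ref, mask_unheated, x, y,
                                b0=0.35, te=0.020):
        """PRFS temperature map with first-order polynomial drift correction.
        Fit a plane a + b*x + c*y to the phase drift over unheated pixels,
        subtract it, then convert the residual phase to temperature change."""
        dphi = phase - phase_ref
        A = np.column_stack([np.ones(mask_unheated.sum()),
                             x[mask_unheated], y[mask_unheated]])
        coef, *_ = np.linalg.lstsq(A, dphi[mask_unheated], rcond=None)
        drift = coef[0] + coef[1] * x + coef[2] * y
        return (dphi - drift) / (GAMMA * ALPHA * b0 * te)
    ```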

  11. A new method for x-ray scatter correction: first assessment on a cone-beam CT experimental setup

    International Nuclear Information System (INIS)

    Rinkel, J; Gerfault, L; Esteve, F; Dinten, J-M

    2007-01-01

    Cone-beam computed tomography (CBCT) enables three-dimensional imaging with isotropic resolution and a shorter acquisition time compared to a helical CT scanner. Because a larger object volume is exposed for each projection, scatter levels are much higher than in collimated fan-beam systems, resulting in cupping artifacts, streaks and quantification inaccuracies. In this paper, a general method to correct for scatter in CBCT, without supplementary on-line acquisition, is presented. This method is based on scatter calibration through off-line acquisition combined with on-line analytical transformation based on physical equations, to adapt calibration to the object observed. The method was tested on a PMMA phantom and on an anthropomorphic thorax phantom. The results were validated by comparison to simulation for the PMMA phantom and by comparison to scans obtained on a commercial multi-slice CT scanner for the thorax phantom. Finally, the improvements achieved with the new method were compared to those obtained using a standard beam-stop method. The new method provided results that closely agreed with the simulation and with the conventional CT scanner, eliminating cupping artifacts and significantly improving quantification. Compared to the beam-stop method, lower x-ray doses and shorter acquisition times were needed, both divided by a factor of 9 for the same scatter estimation accuracy

  12. A semiempirical method of applying the dechanneling correction in the extraction of disorder distribution

    International Nuclear Information System (INIS)

    Walker, R.S.; Thompson, D.A.; Poehlman, S.W.

    1977-01-01

    The application of single, plural or multiple scattering theories to the determination of defect dechanneling in channeling-backscattering disorder measurements is re-examined. A semiempirical modification to the method is described that makes the extracted disorder and disorder distribution relatively insensitive to the scattering model employed. The various models and modifications have been applied to the 1 to 2 MeV He+ channeling-backscatter data obtained from 20 to 80 keV H+ to Ne+ bombarded Si, GaP and GaAs at 50 K and 300 K. (author)

  13. Reassessing the forest impacts of protection: the challenge of nonrandom location and a corrective method.

    Science.gov (United States)

    Joppa, Lucas; Pfaff, Alexander

    2010-01-01

    Protected areas are leading tools in efforts to slow global species loss and appear also to have a role in climate change policy. Understanding their impacts on deforestation informs environmental policies. We review several approaches to evaluating protection's impact on deforestation, given three hurdles to empirical evaluation, and note that "matching" techniques from economic impact evaluation address those hurdles. The central hurdle derives from the fact that protected areas are distributed nonrandomly across landscapes. Nonrandom location can be intentional, and for good reasons, including biological and political ones. Yet even so, when protected areas are biased in their locations toward less-threatened areas, many methods for impact evaluation will overestimate protection's effect. The use of matching techniques allows one to control for known landscape biases when inferring the impact of protection. Applications of matching have revealed considerably lower impact estimates of forest protection than produced by other methods. A reduction in the estimated impact from existing parks does not suggest, however, that protection is unable to lower clearing. Rather, it indicates the importance of variation across locations in how much impact protection could possibly have on rates of deforestation. Matching, then, bundles improved estimates of the average impact of protection with guidance on where new parks' impacts will be highest. While many factors will determine where new protected areas will be sited in the future, we claim that the variation across space in protection's impact on deforestation rates should inform site choice.

  14. Aerodynamic optimization of wind turbine rotors using a blade element momentum method with corrections for wake rotation and expansion

    DEFF Research Database (Denmark)

    Døssing, Mads; Aagaard Madsen, Helge; Bak, Christian

    2012-01-01

    The blade element momentum (BEM) method is widely used for calculating the quasi-steady aerodynamics of horizontal-axis wind turbines. Recently, the BEM method has been expanded to include corrections for wake expansion and the pressure due to wake rotation, and more accurate solutions can now be obtained... by the positive effect of wake rotation, which locally causes the efficiency to exceed the Betz limit. Wake expansion has a negative effect, which is most important at high tip speed ratios. It was further found that, by using the corrected model, it is possible to obtain a 5% reduction in flap bending moment when compared with BEM... In short, the corrected model allows fast aerodynamic calculations and optimizations with a much higher degree of accuracy than the traditional BEM model. Copyright © 2011 John Wiley & Sons, Ltd.

  15. Comparing the Performance of Popular MEG/EEG Artifact Correction Methods in an Evoked-Response Study

    DEFF Research Database (Denmark)

    Haumann, Niels Trusbak; Parkkonen, Lauri; Kliuchko, Marina

    2016-01-01

    We here compared the results achieved by applying popular methods for reducing artifacts in magnetoencephalography (MEG) and electroencephalography (EEG) recordings of the auditory evoked mismatch negativity (MMN) responses in healthy adult subjects. We compared the Signal Space Separation (SSS) and temporal SSS (tSSS) methods for reducing noise from external and nearby sources. Our results showed that tSSS reduces the interference level more reliably than plain SSS, particularly for MEG gradiometers, also for healthy subjects not wearing strongly interfering magnetic material; therefore, tSSS is recommended over SSS. Furthermore, we found that better artifact correction is achieved by applying Independent Component Analysis (ICA) than Signal Space Projection (SSP). Although SSP reduces the baseline noise level more than ICA, SSP also significantly reduces the signal, slightly more than...

  16. A study of radon 222 permeation through plastic membranes. Application to a method of radon measurement in water and saturated soils

    International Nuclear Information System (INIS)

    Labed, V.

    1991-04-01

    In order to improve the BARASOL device and to use it in water-saturated soils and under pressure constraints, we have studied radon-222 permeation through plastic membranes. While the permeation process usually takes place between two media in the same state, most often gaseous, the present study describes the transfer of radon-222 from water to air via a membrane. Polypropylene membranes were tested with an experimental set-up by monitoring the evolution of the radon concentrations in water and in air; the permeation coefficient and the activation energy were calculated under various conditions. With a second experimental set-up, we tested the polyethylene membrane that has been fitted to the BARASOL. Under these conditions, we have shown that it is possible to measure radon in water at concentrations around 10^3 Bq m^-3.

  17. Correction Method of Wiener Spectrum (WS) on Digital Medical Imaging Systems

    International Nuclear Information System (INIS)

    Kim, Jung Min; Lee, Ki Sung; Kim, You Hyun

    2009-01-01

    Noise evaluation for an image has been performed using root mean square (RMS) granularity, the autocorrelation function (ACF), and the Wiener spectrum. RMS granularity is the standard deviation of the photon data, and the ACF is acquired by integration of a 1-D function of distance variation. The Fourier transform of the ACF yields the noise power spectrum, which in image quality evaluation is called the Wiener spectrum. The Wiener spectrum represents the noise itself and, along with the MTF, is an important factor in deriving the detective quantum efficiency (DQE). The proposed evaluation method using the Wiener spectrum is expected to contribute to teaching the concept of the Wiener spectrum in educational organizations, to choosing appropriate imaging detectors for clinical applications, and to maintaining image quality in digital imaging systems.
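
    A common way to estimate the Wiener (noise power) spectrum in practice is to average periodograms of mean-subtracted regions of interest from a uniformly exposed image; a minimal sketch, not the article's exact procedure:

    ```python
    import numpy as np

    def noise_power_spectrum(flat_image, px=0.1, roi=128):
        """Estimate the 2-D Wiener spectrum of a uniformly exposed image.

        px  -- pixel pitch in mm (sets the mm^2 normalization)
        roi -- side length of the square ROIs, in pixels
        """
        h, w = flat_image.shape
        spectra = []
        for i in range(0, h - roi + 1, roi):
            for j in range(0, w - roi + 1, roi):
                block = flat_image[i:i + roi, j:j + roi].astype(float)
                block -= block.mean()            # remove the large-area signal
                ps = np.abs(np.fft.fft2(block)) ** 2
                spectra.append(ps * px * px / (roi * roi))
        return np.fft.fftshift(np.mean(spectra, axis=0))
    ```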

  18. Ab initio O(N) elongation-counterpoise method for BSSE-corrected interaction energy analyses in biosystems

    Energy Technology Data Exchange (ETDEWEB)

    Orimoto, Yuuichi; Xie, Peng; Liu, Kai [Department of Material Sciences, Faculty of Engineering Sciences, Kyushu University, 6-1 Kasuga-Park, Fukuoka 816-8580 (Japan); Yamamoto, Ryohei [Department of Molecular and Material Sciences, Interdisciplinary Graduate School of Engineering Sciences, Kyushu University, 6-1 Kasuga-Park, Fukuoka 816-8580 (Japan); Imamura, Akira [Hiroshima Kokusai Gakuin University, 6-20-1 Nakano, Aki-ku, Hiroshima 739-0321 (Japan); Aoki, Yuriko, E-mail: aoki.yuriko.397@m.kyushu-u.ac.jp [Department of Material Sciences, Faculty of Engineering Sciences, Kyushu University, 6-1 Kasuga-Park, Fukuoka 816-8580 (Japan); Japan Science and Technology Agency, CREST, 4-1-8 Hon-chou, Kawaguchi, Saitama 332-0012 (Japan)

    2015-03-14

    An Elongation-counterpoise (ELG-CP) method was developed for performing accurate and efficient interaction energy analysis and correcting the basis set superposition error (BSSE) in biosystems. The method was achieved by combining our ab initio O(N) elongation method with the conventional counterpoise method proposed for solving the BSSE problem. As a test, the ELG-CP method was applied to the analysis of the inter-strand interaction energies of DNA with respect to the alkylation-induced base-pair mismatch phenomenon that causes a transition from G⋯C to A⋯T. It was found that the ELG-CP method showed high efficiency (nearly linear scaling) and high accuracy, with a negligibly small energy error in the total energy calculations (of the order of 10^-7 to 10^-8 hartree/atom) compared with the conventional method during the counterpoise treatment. Furthermore, the magnitude of the BSSE was found to be ca. −290 kcal/mol for the calculation of a DNA model with 21 base pairs. This emphasizes the importance of the BSSE correction when a limited-size basis set is used to study DNA models and compare small energy differences between them. In this work, we quantitatively estimated the inter-strand interaction energy for each possible step in the transition process from G⋯C to A⋯T by the ELG-CP method. It was found that the base-pair replacement in the process only affects the interaction energy in a limited area around the mismatch position, within a few adjacent base pairs. From the interaction energy point of view, our results showed that a base-pair sliding mechanism possibly occurs after the alkylation of guanine, so as to gain the maximum possible number of hydrogen bonds between the bases. In addition, the steps leading to the A⋯T replacement accompanied by replication were found to be unfavorable processes, corresponding to a loss of ca. 10 kcal/mol in stabilization energy. The present study indicated that the ELG-CP method is promising for
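
    The counterpoise ingredient that ELG-CP builds on is the standard Boys-Bernardi recipe; schematically (superscripts denote the basis set used, so E_A^{AB} is monomer A computed in the full dimer basis, i.e., with ghost functions on B):

    ```latex
    E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}^{AB} - E_{A}^{AB} - E_{B}^{AB},
    \qquad
    \Delta E_{\mathrm{BSSE}}
      = \bigl(E_{A}^{A} - E_{A}^{AB}\bigr) + \bigl(E_{B}^{B} - E_{B}^{AB}\bigr)
    ```

    The elongation method's contribution in this record is to make the monomer-in-dimer-basis calculations scale nearly linearly for long systems such as DNA strands.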

  19. A Comparison of Methods for a Priori Bias Correction in Soil Moisture Data Assimilation

    Science.gov (United States)

    Kumar, Sujay V.; Reichle, Rolf H.; Harrison, Kenneth W.; Peters-Lidard, Christa D.; Yatheendradas, Soni; Santanello, Joseph A.

    2011-01-01

    Data assimilation is being increasingly used to merge remotely sensed land surface variables such as soil moisture, snow and skin temperature with estimates from land models. Its success, however, depends on unbiased model predictions and unbiased observations. Here, a suite of continental-scale, synthetic soil moisture assimilation experiments is used to compare two approaches that address typical biases in soil moisture prior to data assimilation: (i) parameter estimation to calibrate the land model to the climatology of the soil moisture observations, and (ii) scaling of the observations to the model's soil moisture climatology. To enable this research, an optimization infrastructure was added to the NASA Land Information System (LIS) that includes gradient-based optimization methods and global, heuristic search algorithms. The land model calibration eliminates the bias but does not necessarily result in more realistic model parameters. Nevertheless, the experiments confirm that model calibration yields assimilation estimates of surface and root zone soil moisture that are as skillful as those obtained through scaling of the observations to the model's climatology. Analysis of innovation diagnostics underlines the importance of addressing bias in soil moisture assimilation and confirms that both approaches adequately address the issue.
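
    A small sketch of approach (ii), scaling observations to the model's climatology, done here by empirical CDF matching, a common implementation of such scaling used as an illustration rather than the exact operator in LIS (all names are made up):

    ```python
    import numpy as np

    def cdf_match(obs, obs_clim, model_clim):
        """Map observed soil moisture values into the model's climatology by
        matching empirical cumulative distribution functions."""
        obs_sorted = np.sort(obs_clim)
        mod_sorted = np.sort(model_clim)
        # Quantile of each observation under the observation climatology...
        q = np.interp(obs, obs_sorted, np.linspace(0.0, 1.0, obs_sorted.size))
        # ...then the model value at the same quantile.
        return np.interp(q, np.linspace(0.0, 1.0, mod_sorted.size), mod_sorted)
    ```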

  20. A convenient method to prepare emulsified polyacrylate nanoparticles from powders [corrected] for drug delivery applications.

    Science.gov (United States)

    Garay-Jimenez, Julio C; Turos, Edward

    2011-08-01

    We describe a method to obtain purified, polyacrylate nanoparticles in a homogeneous powdered form that can be readily reconstituted in aqueous media for in vivo applications. Polyacrylate-based nanoparticles can be easily prepared by emulsion polymerization using a 7:3 mixture of butyl acrylate and styrene in water containing sodium dodecyl sulfate as a surfactant and potassium persulfate as a water-soluble radical initiator. The resulting emulsions contain nanoparticles measuring 40-50 nm in diameter with uniform morphology, and can be purified by centrifugation and dialysis to remove larger coagulants as well as residual surfactant and monomers associated with toxicity. These purified emulsions can be lyophilized in the presence of maltose (a non-toxic cryoprotectant) to provide a homogeneous dried powder, which can be reconstituted as an emulsion by addition of an aqueous diluent. Dynamic light scattering and microbiological experiments were carried out on the reconstituted nanoparticles. This procedure allows for ready preparation of nanoparticle emulsions for drug delivery applications. Copyright © 2011 Elsevier Ltd. All rights reserved.