WorldWideScience

Sample records for saturation correction method

  1. Correcting saturation of detectors for particle/droplet imaging methods

    International Nuclear Information System (INIS)

    Kalt, Peter A M

    2010-01-01

    Laser-based diagnostic methods are being applied to more and more flows of theoretical and practical interest and are revealing interesting new flow features. Imaging particles or droplets in nephelometry and laser sheet dropsizing methods requires a trade-off that maximizes the signal-to-noise ratio without over-saturating the detector. Droplet and particle imaging results in a lognormal distribution of pixel intensities. It is possible to fit a derived lognormal distribution to the histogram of measured pixel intensities. If pixel intensities are clipped at a saturated value, it is possible to estimate the presumed probability density function (pdf) shape, free of the effects of saturation, from the lognormal fit to the unsaturated histogram. Information about the presumed shape of the pixel intensity pdf is used to generate corrections that can be applied to data to account for saturation. The effects of even slight saturation are shown to be a significant source of error on the derived average. The influence of saturation on the derived root mean square (rms) is even more pronounced. It is found that errors on the determined average exceed 5% when the number of saturated samples exceeds 3% of the total. Errors on the rms are 20% for a similar saturation level. This study also attempts to delineate limits within which detector saturation can be accurately corrected. It is demonstrated that a simple method for reshaping the clipped part of the pixel intensity histogram makes accurate corrections to account for saturated pixels. These outcomes can be used to correct a saturated signal, quantify the effect of saturation on a derived average, and correct the derived average in the case of slight to moderate saturation of pixels.
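
    A minimal numerical sketch of the correction idea described above, assuming a 12-bit detector and a lognormal pixel-intensity model: a lognormal shape is fitted to the unsaturated part of the histogram and the corrected average is taken from the fitted distribution. Function names and parameter values are illustrative, not from the paper.

      # A sketch, assuming a 12-bit detector and a lognormal pixel-intensity model.
      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.stats import lognorm

      def lognorm_counts(x, amp, s, scale):
          # Lognormal pdf scaled to histogram counts.
          return amp * lognorm.pdf(x, s, loc=0.0, scale=scale)

      def corrected_mean(pixels, sat_level=4095, nbins=256):
          counts, edges = np.histogram(pixels, bins=nbins, range=(0, sat_level))
          centers = 0.5 * (edges[:-1] + edges[1:])
          ok = centers < 0.95 * sat_level               # clearly unsaturated bins only
          p0 = (pixels.size * (edges[1] - edges[0]), 0.5, float(np.median(pixels)))
          (amp, s, scale), _ = curve_fit(lognorm_counts, centers[ok], counts[ok], p0=p0)
          naive = pixels.mean()                         # biased by clipping
          fitted = scale * np.exp(0.5 * s ** 2)         # mean of the fitted lognormal
          return naive, fitted

      rng = np.random.default_rng(0)
      true = rng.lognormal(mean=7.5, sigma=0.5, size=200_000)
      clipped = np.minimum(true, 4095)                  # simulate detector saturation
      print(corrected_mean(clipped), true.mean())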

  2. A gamma camera count rate saturation correction method for whole-body planar imaging

    Science.gov (United States)

    Hobbs, Robert F.; Baechler, Sébastien; Senthamizhchelvan, Srinivasan; Prideaux, Andrew R.; Esaias, Caroline E.; Reinhardt, Melvin; Frey, Eric C.; Loeb, David M.; Sgouros, George

    2010-02-01

    Whole-body (WB) planar imaging has long been one of the staple methods of dosimetry, and its quantification has been formalized by the MIRD Committee in pamphlet no. 16. One of the issues not specifically addressed in the formalism occurs when the count rates reaching the detector are sufficiently high to result in camera count saturation. Camera dead-time effects have been extensively studied, but all of the developed correction methods assume static acquisitions. However, during WB planar (sweep) imaging, a variable amount of imaged activity exists in the detector's field of view as a function of time, and therefore the camera saturation is time dependent. A new time-dependent algorithm was developed to correct for dead-time effects during WB planar acquisitions that accounts for relative motion between the detector heads and the imaged object. Static camera dead-time parameters were acquired by imaging decaying activity in a phantom and obtaining a saturation curve. Using these parameters, an iterative algorithm akin to Newton's method was developed, which takes into account the variable count rate seen by the detector as a function of time. The algorithm was tested on simulated data as well as on a whole-body scan of high-activity Samarium-153 in an ellipsoid phantom. A complete set of parameters from unsaturated phantom data necessary for count rate to activity conversion was also obtained, including build-up and attenuation coefficients, in order to convert corrected count rate values to activity. The algorithm proved successful in accounting for motion- and time-dependent saturation effects in both the simulated and measured data and converged to any desired degree of precision. The clearance half-life calculated from the ellipsoid phantom data was 45.1 h after dead-time correction and 51.4 h with no correction; the physical decay half-life of Samarium-153 is 46.3 h. Accurate WB planar dosimetry of high activities relies on successfully compensating
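
    As an illustration of the kind of time-dependent dead-time inversion described above (not the authors' exact algorithm), the sketch below applies Newton's method frame by frame to a paralyzable dead-time model; the dead-time value and frame rates are hypothetical.

      # Illustrative sketch: invert a paralyzable dead-time model m = n*exp(-n*tau)
      # with Newton's method, frame by frame, so the correction tracks the
      # time-varying rate seen during a whole-body sweep.
      import numpy as np

      def true_rate(measured, tau, tol=1e-9, max_iter=50):
          # Solve m = n*exp(-n*tau) for the true rate n, starting from n = m.
          n = float(measured)
          for _ in range(max_iter):
              f = n * np.exp(-n * tau) - measured
              fp = (1.0 - n * tau) * np.exp(-n * tau)
              step = f / fp
              n -= step
              if abs(step) < tol:
                  break
          return n

      tau = 1e-6                                   # assumed dead time, seconds
      measured_frames = [1.2e5, 2.4e5, 1.8e5]      # observed counts/s per sweep frame
      corrected = [true_rate(m, tau) for m in measured_frames]
      print(corrected)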

  3. On Neglecting Chemical Exchange Effects When Correcting in Vivo 31P MRS Data for Partial Saturation

    Science.gov (United States)

    Ouwerkerk, Ronald; Bottomley, Paul A.

    2001-02-01

    Signal acquisition in most MRS experiments requires a correction for partial saturation that is commonly based on a single exponential model for T1 that ignores effects of chemical exchange. We evaluated the errors in 31P MRS measurements introduced by this approximation in two-, three-, and four-site chemical exchange models under a range of flip angles and pulse sequence repetition times (TR) that provide near-optimum signal-to-noise ratio (SNR). In two-site exchange, such as the creatine-kinase reaction involving phosphocreatine (PCr) and γ-ATP in human skeletal and cardiac muscle, errors in saturation factors were determined for the progressive saturation method and the dual-angle method of measuring T1. The analysis shows that these errors are negligible for the progressive saturation method if the observed T1 is derived from a three-parameter fit of the data. When T1 is measured with the dual-angle method, errors in saturation factors are less than 5% for all conceivable values of the chemical exchange rate and flip angles that deliver useful SNR per unit time over the range T1/5 ≤ TR ≤ 2T1. Errors are also less than 5% for three- and four-site exchange when TR ≥ T1*/2, where the T1* are the so-called "intrinsic" T1's of the metabolites. The effect of changing metabolite concentrations and chemical exchange rates on observed T1's and saturation corrections was also examined with a three-site chemical exchange model involving ATP, PCr, and inorganic phosphate in skeletal muscle undergoing up to 95% PCr depletion. Although the observed T1's were dependent on metabolite concentrations, errors in saturation corrections for TR = 2 s could be kept within 5% for all exchanging metabolites using a simple interpolation of two dual-angle T1 measurements performed at the start and end of the experiment. Thus, the single-exponential model appears to be reasonably accurate for correcting 31P MRS data for partial saturation in the presence of chemical exchange. Even in systems where
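
    A short sketch of the standard single-exponential saturation-factor correction discussed above (deliberately ignoring chemical exchange, which is exactly the approximation under test); the T1, TR, and flip-angle values are illustrative.

      # Standard single-exponential partial-saturation correction (a sketch).
      import numpy as np

      def saturation_factor(tr, t1, flip_deg):
          # Fully-relaxed / steady-state signal ratio for a spoiled acquisition.
          e1 = np.exp(-tr / t1)
          theta = np.radians(flip_deg)
          steady = np.sin(theta) * (1.0 - e1) / (1.0 - e1 * np.cos(theta))
          return np.sin(theta) / steady

      t1, tr = 4.0, 2.0                                # illustrative values, seconds
      ernst = np.degrees(np.arccos(np.exp(-tr / t1)))  # Ernst angle for this TR/T1
      print(saturation_factor(tr, t1, ernst))          # multiply measured amplitude by this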

  4. Classical gluon production amplitude for nucleus-nucleus collisions: First saturation correction in the projectile

    International Nuclear Information System (INIS)

    Chirilli, Giovanni A.; Kovchegov, Yuri V.; Wertepny, Douglas E.

    2015-01-01

    We calculate the classical single-gluon production amplitude in nucleus-nucleus collisions including the first saturation correction in one of the nuclei (the projectile) while keeping multiple-rescattering (saturation) corrections to all orders in the other nucleus (the target). In our approximation only two nucleons interact in the projectile nucleus: the single-gluon production amplitude we calculate is of order g³ and is leading-order in the atomic number of the projectile, while resumming all order-one saturation corrections in the target nucleus. Our result is the first step towards obtaining an analytic expression for the first projectile saturation correction to the gluon production cross section in nucleus-nucleus collisions.

  5. A preliminary study on method of saturated curve

    International Nuclear Information System (INIS)

    Cao Liguo; Chen Yan; Ao Qi; Li Huijuan

    1987-01-01

    An effective method is presented for directly determining the absorption coefficient of a sample with matrix effect correction. The absorption coefficient is calculated using the relation between characteristic X-ray intensity and sample thickness (the saturation curve). The method directly reflects the features of the sample and corrects the enhancement effect under certain conditions. It differs from the usual approach, in which the absorption coefficient of the sample is determined from the absorption of X-rays penetrating the sample. The sensitivity factor KI0 is discussed, and the determination of KI0 by experiment and the quasi-absolute measurement of the absorption coefficient μ are proposed. Experimental results with correction under different conditions are shown

  6. An algorithm to correct saturated mass spectrometry ion abundances for enhanced quantitation and mass accuracy in omic studies

    Energy Technology Data Exchange (ETDEWEB)

    Bilbao, Aivett; Gibbons, Bryson C.; Slysz, Gordon W.; Crowell, Kevin L.; Monroe, Matthew E.; Ibrahim, Yehia M.; Smith, Richard D.; Payne, Samuel H.; Baker, Erin S.

    2018-04-01

    The mass accuracy and peak intensity of ions detected by mass spectrometry (MS) measurements are essential to facilitate compound identification and quantitation. However, high concentration species can easily cause problems if their ion intensities reach beyond the limits of the detection system, leading to distorted and non-ideal detector response (e.g. saturation), and largely precluding the calculation of accurate m/z and intensity values. Here we present an open source computational method to correct peaks above a defined intensity (saturated) threshold determined by the MS instrumentation such as the analog-to-digital converters or time-to-digital converters used in conjunction with time-of-flight MS. In this method, the isotopic envelope for each observed ion above the saturation threshold is compared to its expected theoretical isotopic distribution. The most intense isotopic peak for which saturation does not occur is then utilized to re-calculate the precursor m/z and correct the intensity, resulting in both higher mass accuracy and greater dynamic range. The benefits of this approach were evaluated with proteomic and lipidomic datasets of varying complexities. After correcting the high concentration species, reduced mass errors and enhanced dynamic range were observed for both simple and complex omic samples. Specifically, the mass error dropped by more than 50% in most cases with highly saturated species and dynamic range increased by 1-2 orders of magnitude for peptides in a blood serum sample.
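
    A simplified sketch of the envelope-based rescaling idea described above: saturated isotopic peaks are replaced using the most intense unsaturated peak and an assumed theoretical isotope distribution. The envelope, threshold, and distribution below are made up for illustration; the published tool operates on full LC-MS features rather than a single peak list.

      # Rescale saturated isotopic peaks from the best unsaturated one.
      import numpy as np

      def correct_envelope(observed, theoretical, sat_threshold):
          observed = np.asarray(observed, dtype=float)
          theoretical = np.asarray(theoretical, dtype=float)
          unsat = observed < sat_threshold
          if not unsat.any():
              raise ValueError("all isotopic peaks are saturated; cannot correct")
          ref = int(np.argmax(np.where(unsat, observed, -np.inf)))  # best unsaturated peak
          scale = observed[ref] / theoretical[ref]
          corrected = observed.copy()
          corrected[~unsat] = theoretical[~unsat] * scale           # replace clipped peaks
          return corrected

      obs = [8.0e6, 8.0e6, 5.1e6, 2.2e6, 0.8e6]     # first two isotopes clipped
      theo = [1.00, 0.85, 0.52, 0.23, 0.08]         # expected relative abundances
      print(correct_envelope(obs, theo, sat_threshold=8.0e6))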

  7. Saturation and Energy Corrections for TeV Electrons and Photons

    CERN Document Server

    Clerbaux, Barbara; Mahmoud, Tariq; Marage, Pierre Edouard

    2006-01-01

    This note presents a study of the response of the CMS electromagnetic calorimeter ECAL to high energy electrons and photons (from 500 to 4000 GeV), using the full simulation of the CMS detector. The longitudinal containment and the lateral extension of high energy showers are discussed, and energy and eta dependent correction factors F(E_meas, eta), where E_meas = E_ECAL + E_HCAL, are determined in order to reconstruct the incident particle energy, using the energies measured in the ECAL and in the hadronic calorimeter HCAL. For ECAL barrel crystals with energy deposit higher than 1700 GeV, improvements are proposed to techniques aimed at correcting for the effects of electronics saturation.

  8. Capillary pressure-saturation relationships for porous granular materials: Pore morphology method vs. pore unit assembly method

    Science.gov (United States)

    Sweijen, Thomas; Aslannejad, Hamed; Hassanizadeh, S. Majid

    2017-09-01

    In studies of two-phase flow in complex porous media it is often desirable to have an estimation of the capillary pressure-saturation curve prior to measurements. Therefore, we compare in this research the capability of three pore-scale approaches in reproducing experimentally measured capillary pressure-saturation curves. To do so, we have generated 12 packings of spheres that are representative of four different glass-bead packings and eight different sand packings, for which we have found experimental data on the capillary pressure-saturation curve in the literature. In generating the packings, we matched the particle size distributions and porosity values of the granular materials. We have used three different pore-scale approaches for generating the capillary pressure-saturation curves of each packing: i) the Pore Unit Assembly (PUA) method in combination with the Mayer and Stowe-Princen (MS-P) approximation for estimating the entry pressures of pore throats, ii) the PUA method in combination with the hemisphere approximation, and iii) the Pore Morphology Method (PMM) in combination with the hemisphere approximation. The three approaches were also used to produce capillary pressure-saturation curves for the coating layer of paper, used in inkjet printing. Curves for such layers are extremely difficult to determine experimentally, due to their very small thickness and the presence of extremely small pores (less than one micrometer in size). Results indicate that the PMM and PUA-hemisphere method give similar capillary pressure-saturation curves, because both methods rely on a hemisphere to represent the air-water interface. The ability of the hemisphere approximation and the MS-P approximation to reproduce correct capillary pressure seems to depend on the type of particle size distribution, with the hemisphere approximation working well for narrowly distributed granular materials.
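
    A toy sketch of the hemisphere (Young-Laplace) entry-pressure approximation mentioned above, used here to build a drainage capillary pressure-saturation curve for a handful of hypothetical pore throats; the accessibility and trapping rules that the PUA and PMM methods apply are ignored.

      # Throats are invaded in order of increasing Young-Laplace entry pressure.
      import numpy as np

      GAMMA = 0.072      # air-water surface tension, N/m
      THETA = 0.0        # contact angle, rad (perfect wetting assumed)

      def entry_pressure(r_throat):
          # Capillary entry pressure (Pa) of a throat with inscribed radius r (m).
          return 2.0 * GAMMA * np.cos(THETA) / np.asarray(r_throat)

      def drainage_curve(throat_radii, pore_volumes):
          pc = entry_pressure(throat_radii)
          order = np.argsort(pc)                          # lowest entry pressure first
          drained = np.cumsum(np.asarray(pore_volumes)[order])
          sw = 1.0 - drained / drained[-1]                # remaining water saturation
          return pc[order], sw

      radii = np.array([30e-6, 12e-6, 20e-6, 8e-6])       # hypothetical throat radii, m
      vols = np.array([1.0, 0.6, 0.8, 0.4])               # associated pore volumes
      print(drainage_curve(radii, vols))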

  9. Correcting human heart 31P NMR spectra for partial saturation. Evidence that saturation factors for PCr/ATP are homogeneous in normal and disease states

    Science.gov (United States)

    Bottomley, Paul A.; Hardy, Christopher J.; Weiss, Robert G.

    Heart PCr/ATP ratios measured from spatially localized 31P NMR spectra can be corrected for partial saturation effects using saturation factors derived from unlocalized chest surface-coil spectra acquired at the heart rate and approximate Ernst angle for phosphocreatine (PCr) and again under fully relaxed conditions during each 31P exam. To validate this approach in studies of normal and disease states where the possibility of heterogeneity in metabolite T1 values between both chest muscle and heart and normal and disease states exists, the properties of saturation factors for metabolite ratios were investigated theoretically under conditions applicable in typical cardiac spectroscopy exams and empirically using data from 82 cardiac 31P exams in six study groups comprising normal controls (n = 19) and patients with dilated (n = 20) and hypertrophic (n = 5) cardiomyopathy, coronary artery disease (n = 16), heart transplants (n = 19), and valvular heart disease (n = 3). When TR ≪ T1(PCr), with T1(PCr) ⩾ T1(ATP), the saturation factor for PCr/ATP lies in the range 1.5 ± 0.5, regardless of the T1 values. The precise value depends on the ratio of metabolite T1 values rather than their absolute values and is insensitive to modest changes in TR. Published data suggest that the metabolite T1 ratio is the same in heart and muscle. Our empirical data reveal that the saturation factors do not vary significantly with disease state, nor with the relative fractions of muscle and heart contributing to the chest surface-coil spectra. Also, the corrected myocardial PCr/ATP ratios in each normal or disease state bear no correlation with the corresponding saturation factors nor the fraction of muscle in the unlocalized chest spectra. However, application of the saturation correction (mean value, 1.36 ± 0.03 SE) significantly reduced scatter in myocardial PCr/ATP data by 14 ± 11% (SD) (p ⩽ 0.05). The findings suggest that the relative T1 values of PCr and ATP are

  10. A non-parametric method for correction of global radiation observations

    DEFF Research Database (Denmark)

    Bacher, Peder; Madsen, Henrik; Perers, Bengt

    2013-01-01

    in the observations are corrected. These are errors such as: tilt in the leveling of the sensor, shadowing from surrounding objects, clipping and saturation in the signal processing, and errors from dirt and wear. The method is based on a statistical non-parametric clear-sky model which is applied to both...

  11. Research on 3-D terrain correction methods of airborne gamma-ray spectrometry survey

    International Nuclear Information System (INIS)

    Liu Yanyang; Liu Qingcheng; Zhang Zhiyong

    2008-01-01

    The general method of height correction is not effective in complex terrain when interpreting airborne gamma-ray spectrometry data, and the 2-D terrain correction method studied in recent years is only applicable to the measured section. A new method of 3-D sector terrain correction is studied. In this method the ground radiator is divided into many small sector radiators, the irradiation rate at a given survey distance is calculated, and the total over all the small radiating sources is taken as the irradiation rate of the ground radiator at each point of the aerial survey; the correction coefficients of every point are then calculated and applied to correct the airborne gamma-ray spectrometry data. By dividing the ground radiator into many small sectors, the method can perform the forward calculation, the inversion calculation, and the terrain correction for airborne gamma-ray spectrometry surveys in complex topography. Other factors are also considered, such as the unsaturated degree of the measured field of view and uneven radiator content on the ground. The results of forward modelling and an example analysis show that the 3-D terrain correction method is appropriate and effective. (authors)
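
    An illustrative, geometry-only sketch of the sector-summation idea (not the authors' formulation): each small ground sector is treated as an attenuated point source, and the terrain correction coefficient is the ratio of the flat-terrain response to the actual-terrain response. The attenuation coefficient and distances are hypothetical.

      # Sum attenuated point-source contributions over sectors.
      import numpy as np

      MU_AIR = 0.0057    # assumed effective air attenuation coefficient, 1/m

      def sector_response(areas, distances):
          areas, distances = np.asarray(areas), np.asarray(distances)
          return np.sum(areas * np.exp(-MU_AIR * distances) / (4.0 * np.pi * distances ** 2))

      def terrain_correction(areas, r_actual, r_flat):
          # Multiply the measured count rate by this coefficient.
          return sector_response(areas, r_flat) / sector_response(areas, r_actual)

      areas = np.full(8, 500.0)                                 # m^2 per sector
      r_flat = np.full(8, 120.0)                                # slant distances, flat ground
      r_hill = np.array([90.0, 95, 110, 130, 150, 160, 140, 100])
      print(terrain_correction(areas, r_hill, r_flat))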

  12. Near-Saturation Single-Photon Avalanche Diode Afterpulse and Sensitivity Correction Scheme for the LHC Longitudinal density Monitor

    CERN Document Server

    Bravin, E; Palm, M

    2014-01-01

    Single-Photon Avalanche Diodes (SPADs) monitor the longitudinal density of the LHC beams by measuring the temporal distribution of synchrotron radiation. The relative population of nominally empty RF-buckets (satellites or ghosts) with respect to filled bunches is a key figure for the luminosity calibration of the LHC experiments. Since afterpulsing from a main bunch avalanche can be as high as, or higher than, the signal from satellites or ghosts, an accurate correction algorithm is needed. Furthermore, to reduce the integration time, the amount of light sent to the SPAD is enough so that pile-up effects and afterpulsing cannot be neglected. The SPAD sensitivity has also been found to vary at the end of the active quenching phase. We present a method to characterize and correct for SPAD deadtime, afterpulsing and sensitivity variation near saturation, together with laboratory benchmarking.

  13. Dead time corrections using the backward extrapolation method

    Energy Technology Data Exchange (ETDEWEB)

    Gilad, E., E-mail: gilade@bgu.ac.il [The Unit of Nuclear Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Dubi, C. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel); Geslot, B.; Blaise, P. [DEN/CAD/DER/SPEx/LPE, CEA Cadarache, Saint-Paul-les-Durance 13108 (France); Kolin, A. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel)

    2017-05-11

    Dead time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to introduce strong bias in physical experiments as the power grows above a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing), and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled Counts Per Second (CPS), based on backward extrapolation of the losses created by increasingly long, artificially imposed dead times on the data, back to zero dead time. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (of 1–2%) in restoring the corrected count rate. - Highlights: • A new method for dead time corrections is introduced and experimentally validated. • The method does not depend on any prior calibration nor assumes any specific model. • Different dead times are imposed on the signal and the losses are extrapolated to zero. • The method is implemented and validated using neutron measurements from the MINERVE. • Results show very good correspondence to empirical results.
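
    A minimal sketch of the backward-extrapolation idea, under simple assumptions (timestamped events, a non-paralyzing artificial dead time, a low-order polynomial fit in the imposed dead time); the event data below are synthetic.

      # Impose increasingly long artificial dead times, then extrapolate back to zero.
      import numpy as np

      def apply_dead_time(timestamps, tau):
          # Keep only events separated by more than tau from the last accepted one.
          kept, last = 0, -np.inf
          for t in timestamps:
              if t - last > tau:
                  kept += 1
                  last = t
          return kept

      def extrapolated_cps(timestamps, taus, duration):
          cps = np.array([apply_dead_time(timestamps, tau) / duration for tau in taus])
          coeffs = np.polyfit(taus, cps, deg=2)     # losses vs. imposed dead time
          return np.polyval(coeffs, 0.0)            # extrapolate back to tau = 0

      rng = np.random.default_rng(1)
      events = np.sort(rng.uniform(0.0, 1.0, size=50_000))   # ~5e4 counts/s over 1 s
      taus = np.linspace(5e-6, 50e-6, 10)                    # imposed dead times, s
      print(extrapolated_cps(events, taus, duration=1.0))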

  14. Spectral-ratio radon background correction method in airborne γ-ray spectrometry based on Compton scattering deduction

    International Nuclear Information System (INIS)

    Gu Yi; Xiong Shengqing; Zhou Jianxin; Fan Zhengguo; Ge Liangquan

    2014-01-01

    γ-rays released by radon daughters have a severe impact on airborne γ-ray spectrometry. The spectral-ratio method is one of the best mathematical methods for radon background deduction in airborne γ-ray spectrometry. In this paper, an advanced spectral-ratio method was proposed which subtracts the Compton scattering component using the fast Fourier transform rather than stripping ratios; the relationship between survey height and the correction coefficient of the advanced spectral-ratio radon background correction method was studied, the advanced spectral-ratio radon background correction mathematical model was established, and a ground saturation-model calibration technique for the correction coefficient was proposed. Compared with the traditional approach, the advanced spectral-ratio radon background correction method has improved applicability and correction efficiency, and reduces the application cost. Furthermore, it prevents the loss of physical meaning and avoids the possible errors caused by matrix computation and mathematical fitting based on spectrum shape, which are used in the traditional correction coefficients. (authors)

  15. From dry to saturated thermal conductivity: mixing-model correction charts and new conversion equations for sedimentary rocks

    Science.gov (United States)

    Fuchs, Sven; Schütz, Felina; Förster, Andrea; Förster, Hans-Jürgen

    2013-04-01

    satisfying. To improve the fit of the models, correction equations are calculated based on the statistical data. In addition, the application of correction equations allows a significant improvement in the accuracy of the calculated bulk TC data. However, the "corrected" geometric mean constitutes the only model universally applicable to different types of sedimentary rocks and, thus, is recommended for the calculation of bulk TC. Finally, the statistical analysis also resulted in lithotype-specific conversion equations, which permit a calculation of the water-saturated bulk TC from dry-measured TC and porosity (e.g., well-log-derived porosity). This approach has the advantage that the saturated bulk TC can be calculated readily without application of any mixing model. The expected errors with this approach are in the range between 5 and 10 % (Fuchs et al., 2013).
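
    For orientation, the sketch below shows the plain (uncorrected) geometric-mean mixing model that underlies this kind of dry-to-saturated conversion: the matrix conductivity is back-calculated from the dry measurement and the pore fluid is swapped from air to water. The study's lithotype-specific correction and conversion equations are not reproduced; the example values are hypothetical.

      # Plain geometric-mean mixing model for dry-to-saturated thermal conductivity.
      LAMBDA_AIR = 0.026     # W/(m K)
      LAMBDA_WATER = 0.604   # W/(m K)

      def dry_to_saturated(tc_dry, porosity):
          lam_matrix = (tc_dry / LAMBDA_AIR ** porosity) ** (1.0 / (1.0 - porosity))
          return lam_matrix ** (1.0 - porosity) * LAMBDA_WATER ** porosity

      print(dry_to_saturated(3.0, 0.15))   # dry TC of 3.0 W/(m K) at 15% porosity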

  16. Monitor hemoglobin concentration and oxygen saturation in living mouse tail using photoacoustic CT scanner

    Science.gov (United States)

    Liu, Bo; Kruger, Robert; Reinecke, Daniel; Stantz, Keith M.

    2010-02-01

    Purpose: The purpose of this study is to use PCT spectroscopy scanner to monitor the hemoglobin concentration and oxygen saturation change of living mouse by imaging the artery and veins in a mouse tail. Materials and Methods: One mouse tail was scanned using the PCT small animal scanner at the isosbestic wavelength (796nm) to obtain its hemoglobin concentration. Immediately after the scan, the mouse was euthanized and its blood was extracted from the heart. The true hemoglobin concentration was measured using a co-oximeter. Reconstruction correction algorithm to compensate the acoustic signal loss due to the existence of bone structure in the mouse tail was developed. After the correction, the hemoglobin concentration was calculated from the PCT images and compared with co-oximeter result. Next, one mouse were immobilized in the PCT scanner. Gas with different concentrations of oxygen was given to mouse to change the oxygen saturation. PCT tail vessel spectroscopy scans were performed 15 minutes after the introduction of gas. The oxygen saturation values were then calculated to monitor the oxygen saturation change of mouse. Results: The systematic error for hemoglobin concentration measurement was less than 5% based on preliminary analysis. Same correction technique was used for oxygen saturation calculation. After correction, the oxygen saturation level change matches the oxygen volume ratio change of the introduced gas. Conclusion: This living mouse tail experiment has shown that NIR PCT-spectroscopy can be used to monitor the oxygen saturation status in living small animals.
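
    A generic two-wavelength oximetry sketch consistent with the spectroscopic approach above: [HbO2] and [Hb] are solved from absorption coefficients at two wavelengths and combined into SO2. The extinction coefficients and measured values are illustrative only.

      # Two-wavelength estimate of oxygen saturation from absorption coefficients.
      import numpy as np

      # rows: two wavelengths (e.g. ~760 nm and ~850 nm); columns: (HbO2, Hb)
      EPS = np.array([[586.0, 1548.5],
                      [1058.0, 691.3]])

      def oxygen_saturation(mu_a):
          c_hbo2, c_hb = np.linalg.solve(EPS, np.asarray(mu_a, dtype=float))
          return c_hbo2 / (c_hbo2 + c_hb)

      print(oxygen_saturation([0.9, 1.1]))   # hypothetical absorption coefficients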

  17. Selective saturation method for EPR dosimetry with tooth enamel

    International Nuclear Information System (INIS)

    Ignatiev, E.A.; Romanyukha, A.A.; Koshta, A.A.; Wieser, A.

    1996-01-01

    The method of selective saturation is based on the difference in the microwave (mw) power dependence of the background and radiation-induced EPR components of the tooth enamel spectrum. The subtraction of the EPR spectrum recorded at low mw power from that recorded at higher mw power provides a considerable reduction of the background component in the spectrum. The resolution of the EPR spectrum could be improved 10-fold; however, the signal-to-noise ratio was simultaneously reduced by a factor of two. A detailed comparative study of reference samples with known absorbed doses was performed to demonstrate the advantage of the method. The application of the selective saturation method for EPR dosimetry with tooth enamel reduced the lower limit of EPR dosimetry to about 100 mGy. (author)

  18. Water saturation in shaly sands: logging parameters from log-derived values

    International Nuclear Information System (INIS)

    Miyairi, M.; Itoh, T.; Okabe, F.

    1976-01-01

    Methods are presented for determining the relation of porosity to formation factor and that of true formation resistivity to water saturation, which were investigated through the log interpretation of one of the oil and gas fields of the northern Japan Sea. The values of the coefficients "a" and "m" in the porosity-formation factor relation are derived from a cross-plot of porosity and formation resistivity corrected for clay content. The saturation exponent "n" is determined from a cross-plot of porosity and resistivity index on the assumption that the product of porosity and irreducible water saturation is constant. The relation of porosity to irreducible water saturation is also investigated from core analysis. The new logging parameters determined from these methods, a = 1, m = 2, n = 1.4, improved the values of water saturation by 6 percent on average, and made it easier to distinguish points belonging to the productive zone from those belonging to the nonproductive zone.
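
    The coefficients quoted above plug into the standard Archie relation, sketched below; the clay-content corrections used in the study are not reproduced, and the log values in the example are hypothetical.

      # Standard Archie water-saturation relation with the record's a, m, n values.
      def archie_sw(phi, rt, rw, a=1.0, m=2.0, n=1.4):
          # Water saturation from porosity phi, true resistivity rt, water resistivity rw.
          formation_factor = a / phi ** m
          return (formation_factor * rw / rt) ** (1.0 / n)

      # Hypothetical log values: 20% porosity, Rt = 20 ohm-m, Rw = 0.05 ohm-m.
      print(archie_sw(phi=0.20, rt=20.0, rw=0.05))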

  19. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial based response surface techniques in structural reliability considering a computationally time consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit...... of the model correction factor method, is that in simpler form not using gradient information on the original limit state function or only using this information once, a drastic reduction of the number of limit state evaluation is obtained together with good approximations on the reliability. Methods...

  20. A practical procedure to improve the accuracy of radiochromic film dosimetry. An integration with a correction method of uniformity correction and a red/blue correction method

    International Nuclear Information System (INIS)

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-01-01

    It has been reported that light scattering can worsen the accuracy of dose distribution measurements using a radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and the light scattering was also evaluated. In addition, the efficacy of this correction method integrated with the red/blue correction method was assessed. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000 G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2, but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films. A corrected dose distribution data set was subsequently created. The correction method showed more than 10% better pass ratios in the dose difference evaluation than when the correction method was not applied. The red/blue correction method resulted in a 5% improvement compared with the standard procedure that employed the red color only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and has potential for routine clinical intensity modulated radiation therapy (IMRT) dose verification if the accuracy of EBT2 is required to be similar to that of EDR2. The use of the red/blue correction method may improve the accuracy, but we recommend using the red/blue correction method carefully and understanding the characteristics of EBT2 both for the red color only and for the red/blue correction method. (author)

  1. [A practical procedure to improve the accuracy of radiochromic film dosimetry: an integration with a correction method of uniformity correction and a red/blue correction method].

    Science.gov (United States)

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-06-01

    It has been reported that light scattering can worsen the accuracy of dose distribution measurements using a radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and the light scattering was also evaluated. In addition, the efficacy of this correction method integrated with the red/blue correction method was assessed. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2, but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films. A corrected dose distribution data set was subsequently created. The correction method showed more than 10% better pass ratios in the dose difference evaluation than when the correction method was not applied. The red/blue correction method resulted in a 5% improvement compared with the standard procedure that employed the red color only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and has potential for routine clinical IMRT dose verification if the accuracy of EBT2 is required to be similar to that of EDR2. The use of the red/blue correction method may improve the accuracy, but we recommend using the red/blue correction method carefully and understanding the characteristics of EBT2 both for the red color only and for the red/blue correction method.

  2. A new method for calculating gas saturation of low-resistivity shale gas reservoirs

    Directory of Open Access Journals (Sweden)

    Jinyan Zhang

    2017-09-01

    The Jiaoshiba shale gas field is located in the Fuling area of the Sichuan Basin, with the Upper Ordovician Wufeng–Lower Silurian Longmaxi Fm as the pay zone. At the bottom of the pay zone, a high-quality shale gas reservoir about 20 m thick is generally developed with high organic contents and gas abundance, but its resistivity is relatively low. Accordingly, the gas saturation calculated by formulas (e.g. Archie) using electric logging data is often much lower than the experiment-derived value. In this paper, a new method is presented for calculating gas saturation more accurately based on non-electric logging data. Firstly, the causes of the low resistivity of shale gas reservoirs in this area were analyzed. Then, the limitation of traditional methods for calculating gas saturation based on electric logging data was diagnosed, and the feasibility of the neutron–density porosity overlay method was illustrated. According to the response characteristics of neutron, density and other porosity logs in shale gas reservoirs, a model for calculating the gas saturation of shale gas was established by core experimental calibration based on the density logging value, the density porosity and the difference between density porosity and neutron porosity, by means of multiple methods (e.g. the dual-porosity overlay method) and by optimizing the best overlay coefficient. This new method avoids the effect of low resistivity, and thus can provide normal calculated gas saturation for high-quality shale gas reservoirs. It works well in practical application. This new method provides technical support for the calculation of shale gas reserves in this area. Keywords: Shale gas, Gas saturation, Low resistivity, Non-electric logging, Volume density, Compensated neutron, Overlay method, Reserves calculation, Sichuan Basin, Jiaoshiba shale gas field

  3. On Neglecting Chemical Exchange When Correcting in Vivo 31P MRS Data for Partial Saturation: Commentary on: "Pitfalls in the Measurement of Metabolite Concentrations Using the One-Pulse Experiment in in Vivo NMR"

    Science.gov (United States)

    Ouwerkerk, Ronald; Bottomley, Paul A.

    2001-04-01

    This article replies to Spencer et al. (J. Magn. Reson. 149, 251-257, 2001) concerning the degree to which chemical exchange affects partial saturation corrections using saturation factors. Considering the important case of in vivo 31P NMR, we employ differential analysis to demonstrate a broad range of experimental conditions over which chemical exchange minimally affects saturation factors, and near-optimum signal-to-noise ratio is preserved. The analysis contradicts Spencer et al.'s broad claim that chemical exchange results in a strong dependence of saturation factors upon M0's and T1 and exchange parameters. For Spencer et al.'s example of a dynamic 31P NMR experiment in which phosphocreatine varies 20-fold, we show that our strategy of measuring saturation factors at the start and end of the study reduces errors in saturation corrections to 2% for the high-energy phosphates.

  4. Error of image saturation in the structured-light method.

    Science.gov (United States)

    Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin

    2018-01-01

    In the phase-measuring structured-light method, image saturation will induce large phase errors. Usually, by selecting proper system parameters (such as the phase-shift number, exposure time, projection intensity, etc.), the phase error can be reduced. However, due to lack of a complete theory of phase error, there is no rational principle or basis for the selection of the optimal system parameters. For this reason, the phase error due to image saturation is analyzed completely, and the effects of the two main factors, including the phase-shift number and saturation degree, on the phase error are studied in depth. In addition, the selection of optimal system parameters is discussed, including the proper range and the selection principle of the system parameters. The error analysis and the conclusion are verified by simulation and experiment results, and the conclusion can be used for optimal parameter selection in practice.
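
    For context, the sketch below shows the standard N-step phase-shifting retrieval used in this class of methods and how detector clipping distorts the recovered phase; the fringe parameters and 8-bit saturation level are illustrative.

      # N-step phase-shifting retrieval with a clipped (saturated) detector.
      import numpy as np

      def wrapped_phase(images):
          # images: array (N, H, W) of fringe images with phase shifts 2*pi*n/N.
          n = np.arange(images.shape[0])
          delta = 2.0 * np.pi * n / images.shape[0]
          num = np.tensordot(np.sin(delta), images, axes=(0, 0))
          den = np.tensordot(np.cos(delta), images, axes=(0, 0))
          return -np.arctan2(num, den)

      H, W, N = 1, 256, 4
      phi_true = np.linspace(-np.pi, np.pi, W).reshape(H, W)
      frames = np.stack([150 + 120 * np.cos(phi_true + 2 * np.pi * k / N) for k in range(N)])
      saturated = np.minimum(frames, 255.0)              # 8-bit detector clipping
      err = np.angle(np.exp(1j * (wrapped_phase(saturated) - phi_true)))
      print(float(np.abs(err).max()))                    # worst-case phase error, rad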

  5. A Design Method of Robust Servo Internal Model Control with Control Input Saturation

    OpenAIRE

    山田, 功; 舩見, 洋祐

    2001-01-01

    In the present paper, we examine a design method of robust servo Internal Model Control with control input saturation. First of all, we clarify the condition under which Internal Model Control has robust servo characteristics for a system with control input saturation. From this consideration, we propose a new design method of Internal Model Control with robust servo characteristics. A numerical example is shown to illustrate the effectiveness of the proposed method.

  6. Off-Angle Iris Correction Methods

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Thompson, Joseph T [ORNL; Karakaya, Mahmut [ORNL; Boehnen, Chris Bensing [ORNL

    2016-01-01

    In many real-world iris recognition systems, obtaining consistent frontal images is problematic due to inexperienced or uncooperative users, untrained operators, or distracting environments. As a result many collected images are unusable by modern iris matchers. In this chapter we present four methods for correcting off-angle iris images to appear frontal, which makes them compatible with existing iris matchers. The methods include an affine correction, a ray-traced model of the human eye, measured displacements, and a genetic algorithm optimized correction. The affine correction represents a simple way to create an iris image that appears frontal, but it does not account for refractive distortions of the cornea. The other methods account for refraction. The ray-traced model simulates the optical properties of the cornea. The other two methods are data driven. The first uses optical flow to measure the displacements of the iris texture when compared to frontal images of the same subject. The second uses a genetic algorithm to learn a mapping that optimizes the Hamming distance scores between off-angle and frontal images. In this paper we hypothesize that the biological model presented in our earlier work does not adequately account for all variations in eye anatomy and therefore the two data-driven approaches should yield better performance. Results are presented using the commercial VeriEye matcher, which show that the genetic algorithm method clearly improves over prior work and makes iris recognition possible up to 50 degrees off-angle.

  7. PRO-QUEST: a rapid assessment method based on progressive saturation for quantifying exchange rates using saturation times in CEST.

    Science.gov (United States)

    Demetriou, Eleni; Tachrount, Mohamed; Zaiss, Moritz; Shmueli, Karin; Golay, Xavier

    2018-03-05

    To develop a new MRI technique to rapidly measure exchange rates in CEST MRI. A novel pulse sequence for measuring chemical exchange rates through a progressive saturation recovery process, called PRO-QUEST (progressive saturation for quantifying exchange rates using saturation times), has been developed. Using this method, the water magnetization is sampled under non-steady-state conditions, and off-resonance saturation is interleaved with the acquisition of images obtained through a Look-Locker type of acquisition. A complete theoretical framework has been set up, and simple equations to obtain the exchange rates have been derived. A reduction of scan time from 58 to 16 minutes has been obtained using PRO-QUEST versus the standard QUEST. Maps of both T 1 of water and B 1 can simply be obtained by repetition of the sequence without off-resonance saturation pulses. Simulations and calculated exchange rates from experimental data using amino acids such as glutamate, glutamine, taurine, and alanine were compared and found to be in good agreement. The PRO-QUEST sequence was also applied on healthy and infarcted rats after 24 hours, and revealed that imaging specificity to ischemic acidification during stroke was substantially increased relative to standard amide proton transfer-weighted imaging. Because of the reduced scan time and insensitivity to nonchemical exchange factors such as direct water saturation, PRO-QUEST can serve as an excellent alternative for researchers and clinicians interested to map pH changes in vivo. © 2018 International Society for Magnetic Resonance in Medicine.

  8. Attenuation correction method for single photon emission CT

    Energy Technology Data Exchange (ETDEWEB)

    Morozumi, Tatsuru; Nakajima, Masato [Keio Univ., Yokohama (Japan). Faculty of Science and Technology; Ogawa, Koichi; Yuta, Shinichi

    1983-10-01

    A correction method (the Modified Correction Matrix method) is proposed to implement iterative correction by exactly measuring the attenuation constant distribution in a test body, calculating a correction factor for every picture element, and then multiplying the image by these factors. Computer simulations comparing the results showed that the proposed method was more effective than the conventional correction matrix method, particularly for test bodies in which the attenuation constant changes rapidly. Since actual measurement data always contain quantum noise, the noise was taken into account in the simulation; the correction effect was large even in the presence of noise. To verify its clinical effectiveness, an experiment using an acrylic phantom was also carried out. As a result, the recovery of image quality in regions with small attenuation constants was remarkable compared with the conventional method.

  9. Iteration of ultrasound aberration correction methods

    Science.gov (United States)

    Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond

    2004-05-01

    Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimation of the TDA filter, and performing correction on transmit and receive, has proven difficult. It has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal. The other method uses eigenvalue decomposition of the receive cross-spectrum matrix, based upon a receive energy-maximizing criterion. Simulations of iterating aberration correction with a TDA filter have been investigated to study its convergence properties. Weak and strong human body-wall models generated the aberration; both emulated the human abdominal wall. Results after iteration improve aberration correction substantially, and both estimation methods converge, even for the case of strong aberration.

  10. A method to correct coordinate distortion in EBSD maps

    International Nuclear Information System (INIS)

    Zhang, Y.B.; Elbrønd, A.; Lin, F.X.

    2014-01-01

    Drift during electron backscatter diffraction mapping leads to coordinate distortions in resulting orientation maps, which affects, in some cases significantly, the accuracy of analysis. A method, thin plate spline, is introduced and tested to correct such coordinate distortions in the maps after the electron backscatter diffraction measurements. The accuracy of the correction as well as theoretical and practical aspects of using the thin plate spline method is discussed in detail. By comparing with other correction methods, it is shown that the thin plate spline method is most efficient to correct different local distortions in the electron backscatter diffraction maps. - Highlights: • A new method is suggested to correct nonlinear spatial distortion in EBSD maps. • The method corrects EBSD maps more precisely than presently available methods. • Errors less than 1–2 pixels are typically obtained. • Direct quantitative analysis of dynamic data are available after this correction
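
    A minimal sketch of thin-plate-spline coordinate correction of the kind described above, using control points whose true positions are known; the point coordinates are hypothetical, and SciPy's RBFInterpolator stands in for whatever implementation the authors used.

      # Thin-plate-spline mapping from distorted to true coordinates.
      import numpy as np
      from scipy.interpolate import RBFInterpolator

      measured = np.array([[0, 0], [0, 90], [100, 0], [100, 95], [50, 48]], float)
      true_pos = np.array([[0, 0], [0, 100], [100, 0], [100, 100], [50, 50]], float)

      tps = RBFInterpolator(measured, true_pos, kernel="thin_plate_spline")

      ny, nx = 101, 101
      grid = np.stack(np.meshgrid(np.arange(nx), np.arange(ny)), axis=-1).reshape(-1, 2)
      corrected = tps(grid.astype(float))        # corrected coordinates for the whole map
      print(corrected[:3])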

  11. Measured attenuation correction methods

    International Nuclear Information System (INIS)

    Ostertag, H.; Kuebler, W.K.; Doll, J.; Lorenz, W.J.

    1989-01-01

    Accurate attenuation correction is a prerequisite for the determination of exact local radioactivity concentrations in positron emission tomography. Attenuation correction factors range from 4-5 in brain studies to 50-100 in whole body measurements. This report gives an overview of the different methods of determining the attenuation correction factors by transmission measurements using an external positron emitting source. The long-lived generator nuclide 68Ge/68Ga is commonly used for this purpose. The additional patient dose from the transmission source is usually a small fraction of the dose due to the subsequent emission measurement. Ring-shaped transmission sources as well as rotating point or line sources are employed in modern positron tomographs. By masking a rotating line or point source, random and scattered events in the transmission scans can be effectively suppressed. The problems of measured attenuation correction are discussed: Transmission/emission mismatch, random and scattered event contamination, counting statistics, transmission/emission scatter compensation, transmission scan after administration of activity to the patient. By using a double masking technique simultaneous emission and transmission scans become feasible. (orig.)

  12. Simultaneous Imaging of CBF Change and BOLD with Saturation-Recovery-T1 Method.

    Directory of Open Access Journals (Sweden)

    Xiao Wang

    A neuroimaging technique based on the saturation-recovery (SR)-T1 MRI method was applied for simultaneously imaging blood oxygenation level dependent (BOLD) contrast and cerebral blood flow change (ΔCBF), which is determined by the CBF-sensitive T1 relaxation rate change (ΔR1CBF). This technique was validated by quantitatively examining the relationships among ΔR1CBF, ΔCBF, BOLD and relative CBF change (rCBF), which was simultaneously measured by laser Doppler flowmetry under global ischemia and hypercapnia conditions, respectively, in the rat brain. It was found that during ischemia, BOLD decreased 23.1±2.8% in the cortical area; ΔR1CBF decreased 0.020±0.004 s-1, corresponding to a ΔCBF decrease of 1.07±0.24 ml/g/min and an 89.5±1.8% CBF reduction (n=5), resulting in a baseline CBF value (1.18 ml/g/min) consistent with literature reports. The CBF change quantification based on temperature-corrected ΔR1CBF had a better accuracy than the apparent R1 change (ΔR1app); nevertheless, ΔR1app without temperature correction still provides a good approximation for quantifying CBF change since perfusion dominates the evolution of the longitudinal relaxation rate (R1app). In contrast to the excellent consistency between ΔCBF and rCBF measured during and after ischemia, the BOLD change during the post-ischemia period was temporally disassociated with ΔCBF, indicating distinct CBF and BOLD responses. Similar results were also observed for the hypercapnia study. The overall results demonstrate that the SR-T1 MRI method is effective for noninvasive and quantitative imaging of both ΔCBF and BOLD associated with physiological and/or pathological changes.

  13. New decoding methods of interleaved burst error-correcting codes

    Science.gov (United States)

    Nakano, Y.; Kasahara, M.; Namekawa, T.

    1983-04-01

    A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high capability of burst error correction with less decoding delay. By generalizing this method it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of the subcodes, which are interleaved m-fold burst error detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.

  14. An efficient optimization method to improve the measuring accuracy of oxygen saturation by using triangular wave optical signal

    Science.gov (United States)

    Li, Gang; Yu, Yue; Zhang, Cui; Lin, Ling

    2017-09-01

    Oxygen saturation is one of the important parameters used to evaluate human health. This paper presents an efficient optimization method that can improve the accuracy of oxygen saturation measurement, which employs an optical frequency-division triangular wave signal as the excitation signal to obtain the dynamic spectrum and calculate oxygen saturation. Compared with the traditional method, whose measured RMSE (root mean square error) of SpO2 is 0.1705, the proposed method significantly reduces the measured RMSE to 0.0965. It is notable that the accuracy of oxygen saturation measurement has been improved significantly. The method can simplify the circuit and reduce the requirements on components. Furthermore, it provides a useful reference for improving the signal-to-noise ratio of other physiological signals.

  15. A method to correct coordinate distortion in EBSD maps

    DEFF Research Database (Denmark)

    Zhang, Yubin; Elbrønd, Andreas Benjamin; Lin, Fengxiang

    2014-01-01

    Drift during electron backscatter diffraction mapping leads to coordinate distortions in resulting orientation maps, which affects, in some cases significantly, the accuracy of analysis. A method, thin plate spline, is introduced and tested to correct such coordinate distortions in the maps after...... the electron backscatter diffraction measurements. The accuracy of the correction as well as theoretical and practical aspects of using the thin plate spline method is discussed in detail. By comparing with other correction methods, it is shown that the thin plate spline method is most efficient to correct...

  16. Local defect correction for boundary integral equation methods

    NARCIS (Netherlands)

    Kakuba, G.; Anthonissen, M.J.H.

    2014-01-01

    The aim in this paper is to develop a new local defect correction approach to gridding for problems with localised regions of high activity in the boundary element method. The technique of local defect correction has been studied for other methods, such as finite difference methods and finite volume

  17. The decision optimization of product development by considering the customer demand saturation

    Directory of Open Access Journals (Sweden)

    Qing-song Xing

    2015-05-01

    Purpose: The purpose of this paper is to analyze the impact of over-satisfying customer demands on the product development process, on the basis of a quantitative model of customer demands, development cost and time, and then to propose the corresponding product development optimization decision. Design/methodology/approach: First, customer demand information is obtained by investigation, and customer demand weights are quantified using the variation coefficient method. Secondly, the relationship between customer demands and product development time and cost is analyzed based on quality function deployment, and a corresponding mathematical model is established. On this basis, the concept of customer demand saturation and an optimization decision method for product development are put forward and then applied to the notebook development process of a company. Finally, when customer demand is saturated, the consistency between further satisfying customer demands and the most highly weighted customer demands, and the stability of customer demand saturation under different parameters, also need to be demonstrated. Findings: Development cost and time rise sharply when customer demands are over-satisfied. By considering customer demand saturation, the relationship between customer demand and development time and cost is quantified and balanced. In addition, the sequence in which customer demands are met is basically consistent with the customer demand survey results. Originality/value: The paper proposes a model of customer demand saturation and demonstrates the correctness and effectiveness of the product development decision method.

  18. An Accurate CT Saturation Classification Using a Deep Learning Approach Based on Unsupervised Feature Extraction and Supervised Fine-Tuning Strategy

    Directory of Open Access Journals (Sweden)

    Muhammad Ali

    2017-11-01

    Current transformer (CT) saturation is one of the significant problems for protection engineers. If CT saturation is not tackled properly, it can have a disastrous effect on the stability of the power system, and may even create a complete blackout. To cope with CT saturation properly, accurate detection or classification must come first. Recently, deep learning (DL) methods have brought a subversive revolution in the field of artificial intelligence (AI). This paper presents a new DL classification method based on an unsupervised feature extraction and supervised fine-tuning strategy to classify the saturated and unsaturated regions in case of CT saturation. In other words, if the protection system is subjected to CT saturation, the proposed method will correctly classify the different levels of saturation with a high accuracy. Traditional AI methods are mostly based on supervised learning and rely heavily on human-crafted features. This paper contributes an unsupervised feature extraction, using autoencoders and deep neural networks (DNNs) to extract features automatically without prior knowledge of optimal features. To validate the effectiveness of the proposed method, a variety of simulation tests are conducted, and classification results are analyzed using standard classification metrics. Simulation results confirm that the proposed method classifies the different levels of CT saturation with a remarkable accuracy and has unique feature extraction capabilities. Lastly, we provide a potential future research direction to conclude this paper.

  19. Multiple Site-Directed and Saturation Mutagenesis by the Patch Cloning Method.

    Science.gov (United States)

    Taniguchi, Naohiro; Murakami, Hiroshi

    2017-01-01

    Constructing protein-coding genes with desired mutations is a basic step for protein engineering. Herein, we describe a multiple site-directed and saturation mutagenesis method, termed MUPAC. This method has been used to introduce multiple site-directed mutations in the green fluorescent protein gene and in the moloney murine leukemia virus reverse transcriptase gene. Moreover, this method was also successfully used to introduce randomized codons at five desired positions in the green fluorescent protein gene, and for simple DNA assembly for cloning.

  20. A spectrum correction method for fuel assembly rehomogenization

    International Nuclear Information System (INIS)

    Lee, Kyung Taek; Cho, Nam Zin

    2004-01-01

    To overcome the limitation of existing homogenization methods based on the single-assembly calculation with a zero-current boundary condition, we propose a new rehomogenization method, named the spectrum correction method (SCM), consisting of a multigroup energy spectrum approximation by spectrum correction and condensed two-group heterogeneous single-assembly calculations with non-zero current boundary conditions. In SCM, the spectrum-shift phenomena caused by currents across assembly interfaces are first accounted for by the spectrum correction at the group condensation stage. Then, heterogeneous single-assembly calculations with two-group cross sections condensed using the corrected multigroup energy spectrum are performed to obtain rehomogenized nodal diffusion parameters, i.e., assembly-wise homogenized cross sections and discontinuity factors. To evaluate the performance of SCM, it was applied to the analytic function expansion nodal (AFEN) method and several test problems were solved. The results show that SCM can reduce the errors significantly, both in multiplication factors and in assembly-averaged power distributions.
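
    The flux-weighted group condensation that SCM builds on can be sketched as below; how the multigroup spectrum is corrected for interface currents (the core of SCM) is not reproduced, and the cross sections and fluxes are hypothetical.

      # Generic flux-weighted collapse of fine-group cross sections to coarse groups.
      import numpy as np

      def condense(sigma_fine, phi_fine, group_map):
          # Collapse fine-group cross sections to coarse groups with flux weighting.
          sigma_fine, phi_fine = np.asarray(sigma_fine), np.asarray(phi_fine)
          coarse = []
          for g in sorted(set(group_map)):
              idx = [i for i, gg in enumerate(group_map) if gg == g]
              coarse.append(np.sum(sigma_fine[idx] * phi_fine[idx]) / np.sum(phi_fine[idx]))
          return np.array(coarse)

      sigma = [0.01, 0.02, 0.08, 0.12]     # 4 fine groups (hypothetical)
      phi = [1.0, 0.8, 0.3, 0.2]           # the corrected spectrum would be used here
      print(condense(sigma, phi, group_map=[0, 0, 1, 1]))   # fast and thermal groups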

  1. A hybrid numerical method for orbit correction

    International Nuclear Information System (INIS)

    White, G.; Himel, T.; Shoaee, H.

    1997-09-01

    The authors describe a simple hybrid numerical method for beam orbit correction in particle accelerators. The method both overcomes degeneracy in the linear system being solved and respects bounds on the solution. It uses the Singular Value Decomposition (SVD) to find and remove the null space of the system, followed by a bounded Linear Least Squares analysis of the remaining, recast problem. It was developed for correcting orbit and dispersion in the B-factory rings.
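
    The abstract gives only the structure of the scheme (SVD to remove the null space, then bounded linear least squares on the recast problem), so the following is a hedged sketch of that structure with NumPy and SciPy; the response matrix, orbit data, tolerance and corrector limits are all hypothetical.

    # Remove the (near-)null space of the orbit response matrix with an SVD,
    # then solve the rank-reduced problem with bounded linear least squares.
    import numpy as np
    from scipy.optimize import lsq_linear

    def hybrid_orbit_correction(R, orbit, kick_limit, svd_tol=1e-10):
        """R: orbit response matrix (BPMs x correctors); orbit: measured orbit;
        kick_limit: bound on corrector strengths."""
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        keep = s > svd_tol * s.max()                  # drop singular values spanning the null space
        R_reduced = (U[:, keep] * s[keep]) @ Vt[keep, :]
        # Bounded least squares on the recast system: minimize |R_reduced @ x + orbit|.
        result = lsq_linear(R_reduced, -orbit, bounds=(-kick_limit, kick_limit))
        return result.x

    # Example with a random, deliberately degenerate response matrix.
    rng = np.random.default_rng(0)
    R = rng.normal(size=(20, 8))
    R[:, 7] = R[:, 6]                                  # two identical correctors -> null space
    orbit = rng.normal(size=20)
    print(hybrid_orbit_correction(R, orbit, kick_limit=1.0))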

  2. Saturated linkage map construction in Rubus idaeus using genotyping by sequencing and genome-independent imputation

    Directory of Open Access Journals (Sweden)

    Ward Judson A

    2013-01-01

    Full Text Available Abstract Background Rapid development of highly saturated genetic maps aids molecular breeding, which can accelerate gain per breeding cycle in woody perennial plants such as Rubus idaeus (red raspberry). Recently, robust genotyping methods based on high-throughput sequencing were developed, which provide high marker density, but result in some genotype errors and a large number of missing genotype values. Imputation can reduce the number of missing values and can correct genotyping errors, but current methods of imputation require a reference genome and thus are not an option for most species. Results Genotyping by Sequencing (GBS) was used to produce highly saturated maps for a R. idaeus pseudo-testcross progeny. While low coverage and high variance in sequencing resulted in a large number of missing values for some individuals, a novel method of imputation based on maximum likelihood marker ordering from initial marker segregation overcame the challenge of missing values, and made map construction computationally tractable. The two resulting parental maps contained 4521 and 2391 molecular markers spanning 462.7 and 376.6 cM respectively over seven linkage groups. Detection of precise genomic regions with segregation distortion was possible because of map saturation. Microsatellites (SSRs) linked these results to published maps for cross-validation and map comparison. Conclusions GBS together with genome-independent imputation provides a rapid method for genetic map construction in any pseudo-testcross progeny. Our method of imputation estimates the correct genotype call of missing values and corrects genotyping errors that lead to inflated map size and reduced precision in marker placement. Comparison of SSRs to published R. idaeus maps showed that the linkage maps constructed with GBS and our method of imputation were robust, and marker positioning reliable. The high marker density allowed identification of genomic regions with segregation distortion.

  3. Saturation flow versus green time at two-stage signal controlled intersections

    Directory of Open Access Journals (Sweden)

    A. Boumediene

    2009-12-01

    Full Text Available Intersections are the key components of road networks, considerably affecting capacity. As flow levels and experience have increased over the years, methods and means have been developed to cope with the growing traffic demand at road junctions. Among the various traffic control devices and techniques developed to cope with conflicting movements, traffic signals create artificial gaps to accommodate the impeded traffic streams. The majority of parameters that govern signalised intersection control and operations, such as degree of saturation, delays, queue lengths, level of service, etc., are very sensitive to saturation flow. Therefore, it is essential to evaluate saturation flow reliably in order to set traffic signals correctly and avoid unnecessary delays and conflicts. Generally, almost all guidelines support the constancy of saturation flow irrespective of green time duration. This paper presents the results of field studies carried out to enable the performance of signalised intersections to be compared at different green time durations. It was found that saturation flow decreased slightly with growing green time; the reduction corresponded to between 2 and 5 pcus/gh per second of green time. However, the analyses of the discharge rate during successive 6-second time intervals showed a substantial reduction of 10% to 13% in saturation flow levels after 36 seconds of green time compared to those relating to the 6–36 second range. No reduction in saturation flow levels was detected at the sites where only green periods of 44 seconds or less were implemented.

  4. An attenuation correction method for PET/CT images

    International Nuclear Information System (INIS)

    Ue, Hidenori; Yamazaki, Tomohiro; Haneishi, Hideaki

    2006-01-01

    In PET/CT systems, accurate attenuation correction can be achieved by creating an attenuation map from an X-ray CT image. On the other hand, respiratory-gated PET acquisition is an effective method for avoiding motion blurring of the thoracic and abdominal organs caused by respiratory motion. In PET/CT systems employing respiratory-gated PET, using an X-ray CT image acquired during breath-holding for attenuation correction may have a large effect on the voxel values, especially in regions with substantial respiratory motion. In this report, we propose an attenuation correction method that proceeds in four steps: first, a set of respiratory-gated PET images is reconstructed without attenuation correction; second, the motion of each phase PET image relative to the PET image in the same phase as the CT acquisition timing is estimated by the previously proposed method; third, the CT image corresponding to each respiratory phase is generated from the original CT image by deformation according to the motion vector maps; and finally, attenuation correction using these CT images and reconstruction are performed. The effectiveness of the proposed method was evaluated using 4D-NCAT phantoms, and good stability of the voxel values near the diaphragm was observed. (author)

  5. Clinical introduction of image lag correction for a cone beam CT system

    International Nuclear Information System (INIS)

    Stankovic, Uros; Ploeger, Lennert S.; Sonke, Jan-Jakob; Herk, Marcel van

    2016-01-01

    Purpose: Image lag in the flat-panel detector used for Linac-integrated cone beam computed tomography (CBCT) has a degrading effect on CBCT image quality. The most prominent visible artifact is a bright semicircular structure in the transverse view of the scans, also known as the radar artifact. Several correction strategies have been proposed, but until now the clinical introduction of such corrections has remained unreported. In November 2013, the authors clinically implemented a previously proposed image lag correction on all of their machines at their main site in Amsterdam. The purpose of this study was to retrospectively evaluate the effect of the correction on the quality of CBCT images and to evaluate the required calibration frequency. Methods: Image lag was measured in five clinical CBCT systems (Elekta Synergy 4.6) using an in-house developed beam-interrupting device that stops the x-ray beam midway through the data acquisition of an unattenuated beam for calibration. A triple-exponential falling-edge response was fitted to the measured data and used to correct image lag from projection images with an infinite impulse response filter. This filter, including an extrapolation for saturated pixels, was incorporated in the authors' in-house developed clinical CBCT reconstruction software. To investigate the short-term stability of the lag and associated parameters, a series of five image lag measurements over a period of three months was performed. For quantitative analysis, the authors retrospectively selected ten patients treated in the pelvic region. The apparent contrast was quantified in polar coordinates for scans reconstructed using the parameters obtained from different dates, with and without saturation handling. Results: Visually, the radar artifact was minimal in scans reconstructed using the image lag correction, especially when saturation handling was used. In patient imaging, there was a significant reduction of the apparent contrast from 43 ± 16.7 to
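
    The published correction fits a triple-exponential falling-edge response and applies it, with saturation handling, inside the clinical reconstruction software; none of the fitted parameters are given in the abstract. The sketch below only shows how a multi-exponential lag kernel, once its gains and time constants are known, can be removed recursively from a sequence of projection frames. The gains, time constants and frame spacing are hypothetical, and the extrapolation for saturated pixels is omitted.

    # Recursive removal of a lag kernel h[m] = sum_k a_k * r_k**m, with r_k = exp(-dt/tau_k).
    import numpy as np

    def correct_lag(frames, gains, taus, dt):
        """frames: array (n_frames, rows, cols) of measured projections; returns lag-corrected frames."""
        frames = np.asarray(frames, dtype=float)
        a = np.asarray(gains, dtype=float)
        r = np.exp(-dt / np.asarray(taus, dtype=float))    # per-term decay between frames
        states = [np.zeros(frames.shape[1:]) for _ in r]   # running exponential states
        corrected = np.empty_like(frames)
        gain_sum = a.sum()
        for n, y in enumerate(frames):
            carried = sum(ak * rk * sk for ak, rk, sk in zip(a, r, states))
            x = (y - carried) / gain_sum                    # deconvolved (lag-free) frame
            corrected[n] = x
            states = [x + rk * sk for rk, sk in zip(r, states)]
        return corrected

    # Example: three-term (triple-exponential) kernel, 2 ms frame spacing, random frames.
    frames = np.random.rand(100, 16, 16)
    print(correct_lag(frames, gains=[0.90, 0.07, 0.03], taus=[0.002, 0.05, 1.0], dt=0.002).shape)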

  6. Determination of saturation functions and wettability for chalk based on measured fluid saturations

    Energy Technology Data Exchange (ETDEWEB)

    Olsen, D.; Bech, N.; Moeller Nielsen, C.

    1998-08-01

    The end effect of displacement experiments on low-permeability porous media is used for determination of relative permeability functions and capillary pressure functions. Saturation functions for a drainage process are determined from a primary drainage experiment. A reversal of the flooding direction creates an intrinsic imbibition process in the sample, which enables determination of imbibition saturation functions. The saturation functions are determined by a parameter estimation technique. Scanning effects are modelled by the method of Killough. Saturation profiles are determined by NMR. (au)

  7. Efficient orbit integration by manifold correction methods.

    Science.gov (United States)

    Fukushima, Toshio

    2005-12-01

    Triggered by a desire to investigate, numerically, the planetary precession through a long-term numerical integration of the solar system, we developed a new formulation of numerical integration of orbital motion named manifold correction methods. The main trick is to rigorously retain the consistency of physical relations, such as the orbital energy, the orbital angular momentum, or the Laplace integral, of a binary subsystem. This maintenance is done by applying a correction to the integrated variables at each integration step. Typical methods of correction are certain geometric transformations, such as spatial scaling and spatial rotation, which are commonly used in the comparison of reference frames, or mathematically reasonable operations, such as modularization of angle variables into the standard domain [-pi, pi). The form into which the manifold correction methods finally evolved is the orbital longitude methods, which enable us to conduct an extremely precise integration of orbital motions. In unperturbed orbits, the integration errors are suppressed at the machine epsilon level for an indefinitely long period. In perturbed cases, on the other hand, the errors initially grow in proportion to the square root of time and then increase more rapidly, the onset of which depends on the type and magnitude of the perturbations. This feature is also realized for highly eccentric orbits by applying the same idea as used in KS-regularization. In particular, the introduction of time elements greatly enhances the performance of numerical integration of KS-regularized orbits, whether the scaling is applied or not.

  8. A vibration correction method for free-fall absolute gravimeters

    Science.gov (United States)

    Qian, J.; Wang, G.; Wu, K.; Wang, L. J.

    2018-02-01

    An accurate determination of the gravitational acceleration, usually approximated as 9.8 m s-2, has been playing an important role in the areas of metrology, geophysics, and geodesy. Absolute gravimetry has been experiencing rapid development in recent years. Most absolute gravimeters today employ a free-fall method to measure the gravitational acceleration. Noise from ground vibration has become one of the most serious factors limiting measurement precision. Compared to vibration isolators, the vibration correction method is a simple and feasible way to reduce the influence of ground vibrations. A modified vibration correction method is proposed and demonstrated. A two-dimensional golden section search algorithm is used to search for the best parameters of the hypothetical transfer function. Experiments using a T-1 absolute gravimeter are performed. It is verified that, for an identical group of drop data, the modified method proposed in this paper can achieve better correction effects with much less computation than previous methods. Compared to vibration isolators, the correction method applies to more hostile environments and even dynamic platforms, and is expected to be used in a wider range of applications.
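
    The transfer-function model and the residual that the gravimeter software minimizes are not described in the abstract, so the sketch below only shows a generic two-dimensional golden section search, implemented as a nested one-dimensional search over two parameters; the objective function and bounds are placeholders, not the paper's.

    # Nested golden-section minimization of a generic two-parameter objective f(p1, p2).
    import math

    PHI = (math.sqrt(5.0) - 1.0) / 2.0     # golden-ratio reduction factor

    def golden_1d(f, lo, hi, tol=1e-6):
        """Golden-section minimization of a unimodal one-dimensional function on [lo, hi]."""
        a, b = lo, hi
        c, d = b - PHI * (b - a), a + PHI * (b - a)
        while abs(b - a) > tol:
            if f(c) < f(d):
                b, d = d, c
                c = b - PHI * (b - a)
            else:
                a, c = c, d
                d = a + PHI * (b - a)
        return 0.5 * (a + b)

    def golden_2d(f, bounds1, bounds2, tol=1e-6):
        """Outer search over the first parameter; inner search over the second."""
        def best_p2(p1):
            return golden_1d(lambda p2: f(p1, p2), *bounds2, tol)
        p1 = golden_1d(lambda q: f(q, best_p2(q)), *bounds1, tol)
        return p1, best_p2(p1)

    # Toy objective standing in for the vibration-correction residual.
    print(golden_2d(lambda x, y: (x - 1.2) ** 2 + (y + 0.3) ** 2, (0.0, 3.0), (-2.0, 2.0)))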

  9. Metric-based method of software requirements correctness improvement

    Directory of Open Access Journals (Sweden)

    Yaremchuk Svitlana

    2017-01-01

    Full Text Available The work highlights the most important principles of software reliability management (SRM). The SRM concept constitutes a basis for developing a method of requirements correctness improvement. The method assumes that complicated requirements contain more actual and potential design faults/defects. The method applies a new metric to evaluate requirements complexity and a double-sorting technique that evaluates the priority and complexity of each particular requirement. The method improves requirements correctness by identifying a larger number of defects with restricted resources. Practical application of the proposed method in the course of requirements review yielded a tangible technical and economic effect.

  10. Methods of the Detection and Identification of Structural Defects in Saturated Metallic Composite Castings

    Directory of Open Access Journals (Sweden)

    Gawdzińska K.

    2017-09-01

    Full Text Available Diagnostics of composite castings, due to their complex structure, requires that their characteristics be tested by an appropriate description method. Any deviation from the specified characteristic will be regarded as a material defect. The detection of defects in composite castings is sometimes not sufficient, and the defects also have to be identified. This study classifies defects found in the structures of saturated metallic composite castings and indicates those stages of the process where such defects are likely to be formed. The author not only determines the causes of structural defects and describes methods of their detection and identification, but also proposes a schematic procedure to be followed during detection and identification of structural defects of castings made from saturated-reinforcement metallic composites. The castings were examined after the technological process using destructive (macroscopic tests, light and scanning electron microscopy) and non-destructive (ultrasonic and X-ray defectoscopy, tomography, gravimetric method) methods. The research presented in this article is part of the author's work on casting quality.

  11. Automated general temperature correction method for dielectric soil moisture sensors

    Science.gov (United States)

    Kapilaratne, R. G. C. Jeewantinie; Lu, Minjiao

    2017-08-01

    An effective temperature correction method for dielectric sensors is important to ensure the accuracy of soil water content (SWC) measurements in local- to regional-scale soil moisture monitoring networks. These networks extensively use highly temperature-sensitive dielectric sensors due to their low cost, ease of use, and low power consumption. Yet there is no general temperature correction method for dielectric sensors; instead, sensor- or site-dependent correction algorithms are employed. Such methods become ineffective at soil moisture monitoring networks with different sensor setups and those that cover diverse climatic conditions and soil types. This study attempted to develop a general temperature correction method for dielectric sensors which can be commonly used, regardless of differences in sensor type, climatic conditions and soil type, without rainfall data. In this work an automated general temperature correction method was developed by adapting previously developed temperature correction algorithms based on time domain reflectometry (TDR) measurements to ThetaProbe ML2X, Stevens Hydra Probe II and Decagon Devices EC-TM sensor measurements. The rainy-day effects removal procedure for SWC data was automated by incorporating a statistical inference technique with the temperature correction algorithms. The temperature correction method was evaluated using 34 stations from the International Soil Moisture Monitoring Network and another nine stations from a local soil moisture monitoring network in Mongolia. The soil moisture monitoring networks used in this study cover four major climates and six major soil types. Results indicated that the automated temperature correction algorithms developed in this study can eliminate temperature effects from dielectric sensor measurements successfully, even without on-site rainfall data. Furthermore, it has been found that the actual daily average of SWC has been changed due to temperature effects of dielectric sensors with a

  12. A New Class of Scaling Correction Methods

    International Nuclear Information System (INIS)

    Mei Li-Jie; Wu Xin; Liu Fu-Yao

    2012-01-01

    When conventional integrators like Runge-Kutta-type algorithms are used, numerical errors can make an orbit deviate from a hypersurface determined by many constraints, which leads to unreliable numerical solutions. Scaling correction methods are a powerful tool to avoid this. We focus on their applications, and also develop a family of new velocity multiple scaling correction methods where the scale factors only act on the related components of the integrated momenta. They can exactly preserve some first integrals of motion in discrete or continuous dynamical systems, so that the rapid growth of roundoff or truncation errors is suppressed significantly. (general)
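
    The abstract names the idea (scale factors applied to the integrated momenta so that known first integrals are preserved exactly) without giving the formulas, so the following is only the simplest single-factor illustration: after a raw integration step of a Kepler orbit, the velocity is rescaled so the conserved energy is restored. The paper's velocity multiple scaling variants apply separate factors to individual momentum components; the integrator and units here are placeholders.

    # Single-factor velocity scaling that restores the Kepler energy (GM = 1) after each step.
    import numpy as np

    def kepler_energy(r, v, gm=1.0):
        return 0.5 * np.dot(v, v) - gm / np.linalg.norm(r)

    def scale_velocity(r, v, e_ref, gm=1.0):
        """Rescale v so the orbital energy equals the reference value e_ref."""
        kinetic_needed = e_ref + gm / np.linalg.norm(r)
        if kinetic_needed <= 0.0:
            return v                              # scaling not applicable in this state
        s = np.sqrt(2.0 * kinetic_needed / np.dot(v, v))
        return s * v

    # One crude explicit-Euler step followed by the correction.
    r, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    e0 = kepler_energy(r, v)
    dt = 1e-3
    a = -r / np.linalg.norm(r) ** 3
    r, v = r + dt * v, v + dt * a
    v = scale_velocity(r, v, e0)
    print(kepler_energy(r, v) - e0)               # restored to ~0 up to roundoff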

  13. [Study on phase correction method of spatial heterodyne spectrometer].

    Science.gov (United States)

    Wang, Xin-Qiang; Ye, Song; Zhang, Li-Juan; Xiong, Wei

    2013-05-01

    Phase distortion exists in collected interferograms, owing to a variety of measurement factors, when spatial heterodyne spectrometers are used in practice, so an improved phase correction method is presented. The phase curve of the interferogram was obtained through an inverse Fourier transform used to extract the single-sided transform spectrum; based on this, the phase distortion was obtained by fitting the phase slope, as was the phase correction function, and the transform spectrum was convolved with the phase correction function to implement the spectral phase correction. The method was applied to phase correction of an actually measured monochromatic spectrum and a simulated water vapor spectrum. Experimental results show that the low-frequency false signals in the monochromatic spectral fringes are eliminated effectively, increasing the periodicity and symmetry of the interferogram; in addition, when the continuous spectrum with imposed phase error was corrected, the standard deviation between it and the original spectrum was reduced from 0.47 to 0.20, and thus the accuracy of the spectrum could be improved.
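
    The exact fitting and convolution details cannot be reconstructed from the abstract, so the following sketch only illustrates the overall idea on synthetic data: transform the interferogram, keep the single-sided spectrum, fit a linear phase over an assumed signal band, and remove the fitted phase (applied here as a multiplication in the spectral domain). The band selection, the absence of apodization, and the test fringe are all assumptions.

    # Linear phase fit and removal on the single-sided spectrum of an interferogram.
    import numpy as np

    def phase_correct(interferogram, band):
        """band: slice of spectral bins assumed to carry the signal."""
        spec = np.fft.fft(interferogram)
        half = spec[: len(spec) // 2]                       # single-sided transform spectrum
        k = np.arange(len(half))
        phase = np.unwrap(np.angle(half[band]))
        slope, intercept = np.polyfit(k[band], phase, 1)    # fitted phase distortion
        correction = np.exp(-1j * (slope * k + intercept))  # phase correction function
        return half * correction

    # Demo: cosine fringe with an artificial linear phase error.
    x = np.arange(1024)
    fringe = np.cos(2 * np.pi * 0.07 * x + 0.002 * x)
    corrected = phase_correct(fringe, band=slice(68, 78))
    print(np.abs(corrected).argmax())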

  14. Nuclear matter saturation in a U(1) circle-times chiral model

    International Nuclear Information System (INIS)

    Lin, Wei

    1989-01-01

    The mean-field approximation in the U(1) circle-times chiral model for nuclear matter saturation is reviewed. Results show that it cannot be the correct saturation mechanism. It is argued that in this chiral model, other than the fact that the ω mass can depend on the density of nuclear matter, saturation is still quite like the Walecka picture. 16 refs., 3 figs

  15. An Automated Baseline Correction Method Based on Iterative Morphological Operations.

    Science.gov (United States)

    Chen, Yunliang; Dai, Liankui

    2018-05-01

    Raman spectra usually suffer from baseline drift caused by fluorescence or other factors. Therefore, baseline correction is a necessary and crucial step that must be performed before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method can adaptively determine the structuring element first and then gradually remove the spectral peaks during iteration to obtain an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible for handling different kinds of baselines in various practical situations. The comparison of the proposed method with some state-of-the-art baseline correction methods demonstrates its advantages over the existing methods in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method can hopefully be used for the baseline correction of other analytical instrumental signals, such as IR spectra and chromatograms.
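
    The adaptive choice of structuring element and the stopping rule of the published method are not given in the abstract, so the sketch below is only a generic illustration of baseline estimation by iterated grey-scale openings with a fixed window; the window size, iteration limit, and synthetic test signal are assumptions.

    # Baseline estimation by iterated grey-scale opening (fixed window, simple stopping rule).
    import numpy as np
    from scipy.ndimage import grey_opening, uniform_filter1d

    def morphological_baseline(spectrum, window=75, max_iter=20, tol=1e-6):
        baseline = np.asarray(spectrum, dtype=float).copy()
        for _ in range(max_iter):
            opened = grey_opening(baseline, size=window)
            smoothed = uniform_filter1d(opened, size=window)    # soften staircase artefacts
            new_baseline = np.minimum(baseline, smoothed)
            if np.max(np.abs(new_baseline - baseline)) < tol:
                break
            baseline = new_baseline
        return baseline

    # Demo: synthetic Raman-like spectrum = two peaks + slow drift + noise.
    x = np.linspace(0.0, 1.0, 2000)
    peaks = np.exp(-(x - 0.3) ** 2 / 1e-4) + 0.5 * np.exp(-(x - 0.7) ** 2 / 5e-5)
    drift = 0.3 + 0.4 * x + 0.2 * np.sin(3 * x)
    y = peaks + drift + 0.01 * np.random.randn(x.size)
    corrected = y - morphological_baseline(y)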

  16. Method for decoupling error correction from privacy amplification

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Hoi-Kwong [Department of Electrical and Computer Engineering and Department of Physics, University of Toronto, 10 King's College Road, Toronto, Ontario, Canada, M5S 3G4 (Canada)

    2003-04-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof.

  17. Method for decoupling error correction from privacy amplification

    International Nuclear Information System (INIS)

    Lo, Hoi-Kwong

    2003-01-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof

  18. Simulating water hammer with corrective smoothed particle method

    NARCIS (Netherlands)

    Hou, Q.; Kruisbrink, A.C.H.; Tijsseling, A.S.; Keramat, A.

    2012-01-01

    The corrective smoothed particle method (CSPM) is used to simulate water hammer. The spatial derivatives in the water-hammer equations are approximated by a corrective kernel estimate. For the temporal derivatives, the Euler-forward time integration algorithm is employed. The CSPM results are in

  19. Unitarity corrections and high field strengths in high energy hard collisions

    International Nuclear Information System (INIS)

    Kovchegov, Y.V.; Mueller, A.H.

    1997-01-01

    Unitarity corrections to the BFKL description of high energy hard scattering are viewed in large N_c QCD in light-cone quantization. In a center of mass frame unitarity corrections to high energy hard scattering are manifestly perturbatively calculable and unrelated to questions of parton saturation. In a frame where one of the hadrons is initially at rest unitarity corrections are related to parton saturation effects and involve potential strengths A_μ ∝ 1/g. In such a frame we describe the high energy scattering in terms of the expectation value of a Wilson loop. The large potentials A_μ ∝ 1/g are shown to be pure gauge terms allowing perturbation theory to again describe unitarity corrections and parton saturation effects. Genuine nonperturbative effects only come in at energies well beyond those energies where unitarity constraints first become important. (orig.)

  20. Nowcasting Surface Meteorological Parameters Using Successive Correction Method

    National Research Council Canada - National Science Library

    Henmi, Teizi

    2002-01-01

    The successive correction method was examined and evaluated statistically as a nowcasting method for surface meteorological parameters including temperature, dew point temperature, and horizontal wind vector components...
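
    The report's parameters (influence radii, number of passes, weighting) are not given in this record, so the following is only a generic sketch of a successive correction analysis with Cressman-type weights and shrinking influence radii; the grid, observations and radii are placeholders.

    # Successive correction of a gridded first guess using Cressman-type weights.
    import numpy as np

    def successive_correction(grid_xy, background, obs_xy, obs_val, radii=(4.0, 2.0, 1.0)):
        """grid_xy: (N, 2) grid coordinates; background: (N,) first guess;
        obs_xy: (M, 2) observation coordinates; obs_val: (M,) observed values."""
        analysis = np.asarray(background, dtype=float).copy()
        for R in radii:
            # Observation increments against the current analysis (nearest grid point, for simplicity).
            nearest = np.argmin(np.linalg.norm(obs_xy[:, None, :] - grid_xy[None, :, :], axis=2), axis=1)
            increments = obs_val - analysis[nearest]
            d2 = np.sum((grid_xy[:, None, :] - obs_xy[None, :, :]) ** 2, axis=2)
            w = np.where(d2 < R ** 2, (R ** 2 - d2) / (R ** 2 + d2), 0.0)   # Cressman weights
            wsum = w.sum(axis=1)
            analysis += np.where(wsum > 0, (w * increments).sum(axis=1) / np.maximum(wsum, 1e-12), 0.0)
        return analysis

    # Tiny demo on a line of grid points with two observations.
    gx = np.stack([np.linspace(0, 10, 21), np.zeros(21)], axis=1)
    ox = np.array([[2.0, 0.0], [7.5, 0.0]])
    ov = np.array([1.0, -0.5])
    print(successive_correction(gx, np.zeros(21), ox, ov).round(2))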

  1. Another method of dead time correction

    International Nuclear Information System (INIS)

    Sabol, J.

    1988-01-01

    A new method of the correction of counting losses caused by a non-extended dead time of pulse detection systems is presented. The approach is based on the distribution of time intervals between pulses at the output of the system. The method was verified both experimentally and by using the Monte Carlo simulations. The results show that the suggested technique is more reliable and accurate than other methods based on a separate measurement of the dead time. (author) 5 refs

  2. Saturating time-delay transformer for overcurrent protection

    Science.gov (United States)

    Praeg, Walter F.

    1977-01-01

    Electrical loads connected to d-c supplies are protected from damage by overcurrent in the case of a load fault by connecting in series with the load a saturating transformer that detects a load fault and limits the fault current to a safe level for a period long enough to correct the fault or else disconnect the power supply.

  3. Determination of the activity of a molecular solute in saturated solution

    International Nuclear Information System (INIS)

    Nordstroem, Fredrik L.; Rasmuson, Ake C.

    2008-01-01

    Prediction of the solubility of a solid molecular compound in a solvent, as well as estimation of the solution activity coefficient from experimental solubility data, both require estimation of the activity of the solute in the saturated solution. The activity of the solute in the saturated solution is often defined using the pure melt at the same temperature as the thermodynamic reference. In the chemical engineering literature, the activity of the solid is also usually defined with respect to the same reference state. However, far below the melting temperature the properties of this reference state cannot be determined experimentally, and different simplifications and approximations are normally adopted. In the present work, a novel method is presented to determine the activity of the solute in the saturated solution (=ideal solubility) and the heat capacity difference between the pure supercooled melt and the solid. The approach is based on rigorous thermodynamics, using standard experimental thermodynamic data at the melting temperature of the pure compound and solubility measurements in different solvents at various temperatures. The method is illustrated using data for ortho-, meta-, and para-hydroxybenzoic acid, salicylamide and paracetamol. The results show that complete neglect of the heat capacity terms may lead to estimates of the activity that are incorrect by a factor of 12. Other commonly used simplifications may lead to estimates that are only one-third of the correct value
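
    The study's own procedure extracts the activity and the heat capacity difference from multi-solvent solubility data; that regression is not reproduced here. The sketch below only evaluates the standard rigorous expression for the activity of the solid relative to the pure supercooled melt, with and without the heat capacity term (assumed constant), to illustrate how large the difference can be. All numerical values are hypothetical, not data from the paper.

    # Activity of the solid relative to the pure supercooled melt, with and without
    # the (assumed constant) heat-capacity difference dCp between melt and solid.
    import math

    R = 8.314   # J/(mol K)

    def ln_activity(T, Tm, dHm, dCp=0.0):
        """ln a = -(dHm/R)(1/T - 1/Tm) - (dCp/R)[1 - Tm/T - ln(T/Tm)]"""
        term_h = -(dHm / R) * (1.0 / T - 1.0 / Tm)
        term_cp = -(dCp / R) * (1.0 - Tm / T - math.log(T / Tm))
        return term_h + term_cp

    # Hypothetical compound: Tm = 480 K, dHm = 28 kJ/mol, dCp = 90 J/(mol K), evaluated at 298.15 K.
    T, Tm, dHm, dCp = 298.15, 480.0, 28000.0, 90.0
    a_full = math.exp(ln_activity(T, Tm, dHm, dCp))
    a_no_cp = math.exp(ln_activity(T, Tm, dHm))        # common simplification: dCp neglected
    print(a_full, a_no_cp, a_full / a_no_cp)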

  4. Determination of the activity of a molecular solute in saturated solution

    Energy Technology Data Exchange (ETDEWEB)

    Nordstroem, Fredrik L. [Department of Chemical Engineering and Technology, Royal Institute of Technology, 100 44 Stockholm (Sweden); Rasmuson, Ake C. [Department of Chemical Engineering and Technology, Royal Institute of Technology, 100 44 Stockholm (Sweden)], E-mail: rasmuson@ket.kth.se

    2008-12-15

    Prediction of the solubility of a solid molecular compound in a solvent, as well as estimation of the solution activity coefficient from experimental solubility data, both require estimation of the activity of the solute in the saturated solution. The activity of the solute in the saturated solution is often defined using the pure melt at the same temperature as the thermodynamic reference. In the chemical engineering literature, the activity of the solid is also usually defined with respect to the same reference state. However, far below the melting temperature the properties of this reference state cannot be determined experimentally, and different simplifications and approximations are normally adopted. In the present work, a novel method is presented to determine the activity of the solute in the saturated solution (=ideal solubility) and the heat capacity difference between the pure supercooled melt and the solid. The approach is based on rigorous thermodynamics, using standard experimental thermodynamic data at the melting temperature of the pure compound and solubility measurements in different solvents at various temperatures. The method is illustrated using data for ortho-, meta-, and para-hydroxybenzoic acid, salicylamide and paracetamol. The results show that complete neglect of the heat capacity terms may lead to estimates of the activity that are incorrect by a factor of 12. Other commonly used simplifications may lead to estimates that are only one-third of the correct value.

  5. Simple-MSSM: a simple and efficient method for simultaneous multi-site saturation mutagenesis.

    Science.gov (United States)

    Cheng, Feng; Xu, Jian-Miao; Xiang, Chao; Liu, Zhi-Qiang; Zhao, Li-Qing; Zheng, Yu-Guo

    2017-04-01

    To develop a practically simple and robust multi-site saturation mutagenesis (MSSM) method that enables simultaneous recombination of amino acid positions for focused mutant library generation. A general restriction enzyme-free and ligase-free MSSM method (Simple-MSSM) based on prolonged overlap extension PCR (POE-PCR) and Simple Cloning techniques. As a proof of principle of Simple-MSSM, the gene of eGFP (enhanced green fluorescent protein) was used as a template gene for simultaneous mutagenesis of five codons. Forty-eight randomly selected clones were sequenced. Sequencing revealed that all the 48 clones showed at least one mutant codon (mutation efficiency = 100%), and 46 out of the 48 clones had mutations at all five codons. The obtained diversities at these five codons are 27, 24, 26, 26 and 22, respectively, which correspond to 84, 75, 81, 81, 69% of the theoretical diversity offered by NNK-degeneration (32 codons; NNK, K = T or G). The enzyme-free Simple-MSSM method can simultaneously and efficiently saturate five codons within one day, and therefore avoids missing interactions between residues in interacting amino acid networks.

  6. Coastal Zone Color Scanner atmospheric correction - Influence of El Chichon

    Science.gov (United States)

    Gordon, Howard R.; Castano, Diego J.

    1988-01-01

    The addition of an El Chichon-like aerosol layer in the stratosphere is shown to have very little effect on the basic CZCS atmospheric correction algorithm. The additional stratospheric aerosol is found to increase the total radiance exiting the atmosphere, thereby increasing the probability that the sensor will saturate. It is suggested that in the absence of saturation the correction algorithm should perform as well as in the absence of the stratospheric layer.

  7. A method for eliminating sulfur compounds from fluid, saturated, aliphatic hydrocarbons

    Energy Technology Data Exchange (ETDEWEB)

    Fakhriev, A.M.; Galiautdinov, N.G.; Kashevarov, L.A.; Mazgarov, A.M.

    1982-01-01

    The method for eliminating sulfur compounds from fluid, saturated, aliphatic hydrocarbons, which involves extracting the hydrocarbons with a dimethylsulfoxide extractant, is improved by using a blend of dimethylsulfoxide and 10-60 percent (by volume) diethylenetriamine, or a polyethylenepolyamine which contains diethylenetriamine, triethylenetetramine and tetraethylenepentamine, in order to eliminate the above compounds. Polyethylenepolyamine is produced as a by-product during the production of ethylenediamine. Elimination is performed at 0-50 degrees and 1-60 atmospheres of pressure. Here, the extractant may contain up to 10 percent water. The use of the proposed method, rather than the existing method, will make it possible to increase the elimination of mercaptans from the hydrocarbons by 40 percent and of H2S by 10 percent when the same amount is eliminated.

  8. Evaluation of bias-correction methods for ensemble streamflow volume forecasts

    Directory of Open Access Journals (Sweden)

    T. Hashino

    2007-01-01

    Full Text Available Ensemble prediction systems are used operationally to make probabilistic streamflow forecasts for seasonal time scales. However, hydrological models used for ensemble streamflow prediction often have simulation biases that degrade forecast quality and limit the operational usefulness of the forecasts. This study evaluates three bias-correction methods for ensemble streamflow volume forecasts. All three adjust the ensemble traces using a transformation derived with simulated and observed flows from a historical simulation. The quality of probabilistic forecasts issued when using the three bias-correction methods is evaluated using a distributions-oriented verification approach. Comparisons are made of retrospective forecasts of monthly flow volumes for a north-central United States basin (Des Moines River, Iowa), issued sequentially for each month over a 48-year record. The results show that all three bias-correction methods significantly improve forecast quality by eliminating unconditional biases and enhancing the potential skill. Still, subtle differences in the attributes of the bias-corrected forecasts have important implications for their use in operational decision-making. Diagnostic verification distinguishes these attributes in a context meaningful for decision-making, providing criteria to choose among bias-correction methods with comparable skill.
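
    The three bias-correction methods compared in the study are not spelled out in this record, so the sketch below shows just one common way such a transformation can be derived from historical simulated and observed flows: empirical quantile mapping. The synthetic flow data and quantile resolution are placeholders.

    # Empirical quantile mapping derived from historical simulated and observed flows.
    import numpy as np

    def build_quantile_map(sim_hist, obs_hist, n_quantiles=100):
        q = np.linspace(0.0, 1.0, n_quantiles)
        sim_q = np.quantile(sim_hist, q)
        obs_q = np.quantile(obs_hist, q)
        def correct(ensemble_traces):
            # Map each simulated volume to the observed distribution at the same quantile.
            return np.interp(ensemble_traces, sim_q, obs_q)
        return correct

    # Demo with synthetic monthly flow volumes (historical simulation biased high).
    rng = np.random.default_rng(1)
    obs = rng.gamma(shape=2.0, scale=50.0, size=576)          # 48 years of monthly flows
    sim = 1.3 * rng.gamma(shape=2.0, scale=50.0, size=576)    # biased simulation of the same period
    qmap = build_quantile_map(sim, obs)
    forecast_ensemble = 1.3 * rng.gamma(2.0, 50.0, size=30)
    print(forecast_ensemble.mean(), qmap(forecast_ensemble).mean())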

  9. Gluon saturation beyond (naive) leading logs

    Energy Technology Data Exchange (ETDEWEB)

    Beuf, Guillaume

    2014-12-15

    An improved version of the Balitsky–Kovchegov equation is presented, with a consistent treatment of kinematics. That improvement allows one to resum the most severe of the large higher-order corrections which plague the conventional versions of high-energy evolution equations with approximate kinematics. This result represents a further step towards having high-energy QCD scattering processes under control beyond strict Leading Logarithmic accuracy and with gluon saturation effects.

  10. Saturation at Low X and Nonlinear Evolution

    International Nuclear Information System (INIS)

    Stasto, A.M.

    2002-01-01

    In this talk the results of the analytical and numerical analysis of the nonlinear Balitsky-Kovchegov equation are presented. The characteristic BFKL diffusion into the infrared regime is suppressed by the generation of the saturation scale Q_s. We identify the scaling and linear regimes for the solution. We also study the impact of subleading corrections on the nonlinear evolution. (author)

  11. Research on evaluation method for water saturation of tight sandstone in Suxi region

    Science.gov (United States)

    Lv, Hong; Lai, Fuqiang; Chen, Liang; Li, Chao; Li, Jie; Yi, Heping

    2017-05-01

    The evaluation of irreducible water saturation is important for qualitative and quantitative prediction of residual oil distribution; however, the accuracy of both the experimental measurement of irreducible water saturation and its logging evaluation still needs to be improved. In this paper, a multi-functional core flooding experiment and a nuclear magnetic resonance centrifugation experiment are first carried out in the west of the Sulige gas field. Then, the influence of particle size, porosity and permeability on the irreducible water saturation is discussed. Finally, an evaluation model for irreducible water saturation is established and the evaluation is carried out. The results show that the results of the two experiments are both reliable. Irreducible water saturation is inversely proportional to the median particle size, porosity and permeability, and is most affected by the median particle size. The water saturation of the dry layer is higher than that of the general reservoir; the worse the reservoir properties, the greater the water saturation. The test results show that the irreducible water saturation model can be used to evaluate the water floor.

  12. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    International Nuclear Information System (INIS)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo; Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro; Kato, Rikio

    2005-01-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)
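
    The published scatter function and image-based scatter fraction are not reproduced in the abstract, so the sketch below only shows the convolution-subtraction structure of the correction, with a Gaussian kernel and a constant scatter fraction standing in for them.

    # Convolution-subtraction scatter correction on an attenuation-corrected slice.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def ibsc_like_correct(img_ac, scatter_sigma=3.0, scatter_fraction=0.3):
        """img_ac: reconstructed, attenuation-corrected image (stand-in for I_AC^mu_b)."""
        scatter_estimate = scatter_fraction * gaussian_filter(img_ac, sigma=scatter_sigma)
        return np.clip(img_ac - scatter_estimate, 0.0, None)

    # Demo on a synthetic 2-D slice.
    slice_ac = np.zeros((64, 64))
    slice_ac[20:44, 20:44] = 100.0
    print(ibsc_like_correct(slice_ac).max())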

  13. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method.

    Science.gov (United States)

    Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.

  14. Recipe for residual oil saturation determination

    Energy Technology Data Exchange (ETDEWEB)

    Guillory, A.J.; Kidwell, C.M.

    1979-01-01

    In 1978, Shell Oil Co., in conjunction with the US Department of Energy, conducted a residual oil saturation study in a deep, hot, high-pressure Gulf Coast reservoir. The work was conducted prior to initiation of a CO2 tertiary recovery pilot. Many problems had to be resolved prior to and during the residual oil saturation determination. The problems confronted are outlined such that the procedure can be used much like a cookbook in designing future studies in similar reservoirs. Primary discussion centers around planning and results of a log-inject-log operation used as the prime method to determine the residual oil saturation. Several independent methods were used to calculate the residual oil saturation in the subject well in an interval between 12,910 ft (3935 m) and 12,920 ft (3938 m). In general, these numbers were in good agreement and indicated a residual oil saturation between 22% and 24%. 10 references.

  15. Local defect correction for boundary integral equation methods

    NARCIS (Netherlands)

    Kakuba, G.; Anthonissen, M.J.H.

    2013-01-01

    This paper presents a new approach to gridding for problems with localised regions of high activity. The technique of local defect correction has been studied for other methods such as finite difference methods and finite volume methods. In this paper we develop the technique for the boundary element method.

  16. Cryochromatography: a method for the separation of phosphoglycerides according to the number and length of saturated fatty acid components

    International Nuclear Information System (INIS)

    Henderson, R.F.; Clayton, M.H.

    1974-01-01

    A thin layer chromatographic method utilizing ultracold temperatures has been developed to separate phosphoglycerides containing only long-chain saturated fatty acids from phosphoglycerides containing fatty acids with any degree of unsaturation. The method is direct, nondiluting, and nondestructive. Since the surfactant lipids found in lung contain only long-chain, saturated fatty acids, the method should be particularly useful to those in lung lipid research. Studies on the uptake of labeled precursors into the lung surfactant lipids as well as work on quantitation of surfactant lecithins in the lung can be facilitated by this method. (U.S.)

  17. A New Online Calibration Method Based on Lord's Bias-Correction.

    Science.gov (United States)

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    Online calibration technique has been widely employed to calibrate new items due to its advantages. Method A is the simplest online calibration method and has attracted much attention from researchers recently. However, a key assumption of Method A is that it treats person-parameter estimates θ̂_s (obtained by maximum likelihood estimation [MLE]) as their true values θ_s; thus the deviation of the estimated θ̂_s from their true values might yield inaccurate item calibration when the deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named as maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂_s which may adversely affect the item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and MLE-LBCI-Method A did outperform Method A in almost all experimental conditions.

  18. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    Energy Technology Data Exchange (ETDEWEB)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo [National Center for Geriatrics and Gerontology Research Institute, Department of Brain Science and Molecular Imaging, Obu, Aichi (Japan); Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro [National Cardiovascular Center Research Institute, Department of Investigative Radiology, Suita (Japan); Kato, Rikio [National Center for Geriatrics and Gerontology, Department of Radiology, Obu (Japan)

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)

  19. Suggested Methods for Preventing Core Saturation Instability in HVDC Transmission Systems

    Energy Technology Data Exchange (ETDEWEB)

    Norheim, Ian

    2002-07-01

    In this thesis a study of the HVDC-related phenomenon of core saturation instability, and of methods to prevent this phenomenon, is performed. There is reason to believe that this phenomenon caused the disconnection of the Skagerrak HVDC link on 10 August 1993. Internationally, core saturation instability has been reported at several HVDC schemes and thorough, complex studies of the phenomenon have been performed. This thesis gives a detailed description of the phenomenon and suggests some interesting methods to prevent its development. Core saturation instability and its consequences can be described in a simplified way as follows: It is now assumed that a fundamental harmonic component is present in the DC side current. Due to the coupling between the AC side and the DC side of the HVDC converter, a subsequent second harmonic positive-sequence current and DC currents will be generated on the AC side. The DC currents will cause saturation in the converter transformers. This will cause the magnetizing current to also have a second harmonic positive-sequence component. If a high second harmonic impedance is seen from the commutation bus, a high positive-sequence second harmonic component will be present in the commutation voltages. This will result in a relatively high fundamental frequency component in the DC side voltage. If the fundamental frequency impedance at the DC side is relatively low, the fundamental component in the DC side current may become larger than it originally was. In addition, the HVDC control system may contribute to the fundamental frequency component in the DC side voltage, and in this way cause a system even more sensitive to core saturation instability. The large magnetizing currents that eventually will flow on the AC side cause large zero-sequence currents in the neutral conductors of the AC transmission lines connected to the HVDC link. This may result in disconnection of the lines. Alternatively, the harmonics in the large magnetizing currents may cause

  20. Different partial volume correction methods lead to different conclusions

    DEFF Research Database (Denmark)

    Greve, Douglas N; Salat, David H; Bowen, Spencer L

    2016-01-01

    A cross-sectional group study of the effects of aging on brain metabolism as measured with (18)F-FDG-PET was performed using several different partial volume correction (PVC) methods: no correction (NoPVC), Meltzer (MZ), Müller-Gärtner (MG), and the symmetric geometric transfer matrix (SGTM) usin...

  1. Beam-Based Error Identification and Correction Methods for Particle Accelerators

    CERN Document Server

    Tomas, Rogelio; Nilsson, Thomas

    2014-06-10

    Modern particle accelerators have tight tolerances on the acceptable deviation from their desired machine parameters. The control of the parameters is of crucial importance for safe machine operation and performance. This thesis focuses on beam-based methods and algorithms to identify and correct errors in particle accelerators. The optics measurements and corrections of the Large Hadron Collider (LHC), which resulted in an unprecedentedly low β-beat for a hadron collider, are described. The transverse coupling is another parameter which is of importance to control. Improvement in the reconstruction of the coupling from turn-by-turn data has resulted in a significant decrease of the measurement uncertainty. An automatic coupling correction method, which is based on the injected beam oscillations, has been successfully used in normal operation of the LHC. Furthermore, a new method to measure and correct chromatic coupling that was applied to the LHC is described. It resulted in a decrease of the chromatic coupling.

  2. A Single Image Dehazing Method Using Average Saturation Prior

    Directory of Open Access Journals (Sweden)

    Zhenfei Gu

    2017-01-01

    Full Text Available Outdoor images captured in bad weather are prone to yield poor visibility, which is a fatal problem for most computer vision applications. The majority of existing dehazing methods rely on an atmospheric scattering model and therefore share a common limitation; that is, the model is only valid when the atmosphere is homogeneous. In this paper, we propose an improved atmospheric scattering model to overcome this inherent limitation. By adopting the proposed model, a corresponding dehazing method is also presented. In this method, we first create a haze density distribution map of a hazy image, which enables us to segment the hazy image into scenes according to the haze density similarity. Then, in order to improve the atmospheric light estimation accuracy, we define an effective weight assignment function to locate a candidate scene based on the scene segmentation results and therefore avoid most potential errors. Next, we propose a simple but powerful prior named the average saturation prior (ASP, which is a statistic of extensive high-definition outdoor images. Using this prior combined with the improved atmospheric scattering model, we can directly estimate the scene atmospheric scattering coefficient and restore the scene albedo. The experimental results verify that our model is physically valid, and the proposed method outperforms several state-of-the-art single image dehazing methods in terms of both robustness and effectiveness.
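
    The improved scattering model, the scene segmentation, and the albedo recovery of the method cannot be reconstructed from this summary; the sketch below only computes the statistic that the prior is built on, per-pixel HSV-style saturation and its average over a scene segment. The image data and mask are placeholders.

    # Per-pixel saturation and its average over one haze-density segment.
    import numpy as np

    def saturation_map(rgb):
        """rgb: float image in [0, 1] with shape (H, W, 3); returns HSV-style saturation."""
        c_max = rgb.max(axis=2)
        c_min = rgb.min(axis=2)
        return np.where(c_max > 0, (c_max - c_min) / np.maximum(c_max, 1e-12), 0.0)

    def average_saturation(rgb, segment_mask):
        """Mean saturation over the pixels of one scene segment (boolean mask)."""
        return saturation_map(rgb)[segment_mask].mean()

    # Demo: a bright, low-contrast (hazy-looking) block has low average saturation.
    img = np.random.rand(8, 8, 3) * 0.2 + 0.7
    mask = np.ones((8, 8), dtype=bool)
    print(average_saturation(img, mask))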

  3. GPU accelerated manifold correction method for spinning compact binaries

    Science.gov (United States)

    Ran, Chong-xi; Liu, Song; Zhong, Shuang-ying

    2018-04-01

    The graphics processing unit (GPU) acceleration of the manifold correction algorithm based on the compute unified device architecture (CUDA) technology is designed to simulate the dynamic evolution of the Post-Newtonian (PN) Hamiltonian formulation of spinning compact binaries. The feasibility and the efficiency of parallel computation on the GPU have been confirmed by various numerical experiments. The numerical comparisons show that the accuracy of the manifold correction method executed on the GPU agrees well with that of the codes executed on the central processing unit (CPU) alone. The acceleration achieved when the codes are implemented on the GPU can be increased enormously through the use of shared memory and register optimization techniques without additional hardware costs; the speedup is nearly 13 times that of the codes executed on the CPU for a phase space scan (including 314 × 314 orbits). In addition, the GPU-accelerated manifold correction method is used to numerically study how the dynamics are affected by the spin-induced quadrupole-monopole interaction for a black hole binary system.

  4. A New Dyslexia Reading Method and Visual Correction Position Method.

    Science.gov (United States)

    Manilla, George T; de Braga, Joe

    2017-01-01

    Pediatricians and educators may interact daily with several dyslexic patients or students. One dyslexic author accidentally developed a personal, effective, corrective reading method. Its effectiveness was evaluated in 3 schools. One school utilized 8 demonstration special education students. Over 3 months, one student grew one-third of a year, 3 grew 1 year, and 4 grew 2 years. In another school, 6 sixth-, seventh-, and eighth-grade classroom teachers followed 45 treated dyslexic students. They all excelled and progressed beyond their classroom peers in 4 months. Using cyclovergence upper gaze, dyslexic reading problems disappeared at one of the Positional Reading Arc positions of 30°, 60°, 90°, 120°, or 150° for 10 dyslexics. Positional Reading Arc testing on 112 students of the second through eighth grades showed that words read per minute, reading errors, and comprehension improved. Dyslexia was visually corrected by use of a new reading method and Positional Reading Arc positions.

  5. Error analysis of motion correction method for laser scanning of moving objects

    Science.gov (United States)

    Goel, S.; Lohani, B.

    2014-05-01

    The limitation of conventional laser scanning methods is that the objects being scanned must be static. The need to scan moving objects has resulted in the development of new methods capable of generating correct 3D geometry of moving objects. Only limited literature is available, describing very few methods capable of addressing the problem of object motion during scanning. All the existing methods utilize their own models or sensors. Studies on error modelling or analysis of any of these motion correction methods are found to be lacking in the literature. In this paper, we develop the error budget and present the analysis of one such `motion correction' method. This method assumes availability of position and orientation information of the moving object, which in general can be obtained by installing a POS system on board or by use of some tracking devices. It then uses this information along with laser scanner data to apply a correction to the laser data, thus resulting in correct geometry despite the object being mobile during scanning. The major application of this method lies in the shipping industry, to scan ships either moving or parked in the sea and to scan other objects like hot air balloons or aerostats. It is to be noted that the other methods of "motion correction" explained in the literature cannot be applied to scan the objects mentioned here, making the chosen method quite unique. This paper presents some interesting insights into the functioning of the "motion correction" method as well as a detailed account of the behavior and variation of the error due to different sensor components, alone and in combination with each other. The analysis can be used to obtain insights into optimal utilization of available components for achieving the best results.

  6. A Horizontal Tilt Correction Method for Ship License Numbers Recognition

    Science.gov (United States)

    Liu, Baolong; Zhang, Sanyuan; Hong, Zhenjie; Ye, Xiuzi

    2018-02-01

    An automatic ship license numbers (SLNs) recognition system plays a significant role in intelligent waterway transportation systems, since it can be used to identify ships by recognizing the characters in SLNs. Tilt occurs frequently in many SLNs because the monitors and the ships usually have large vertical or horizontal angles, which decreases the accuracy and robustness of an SLNs recognition system significantly. In this paper, we present a horizontal tilt correction method for SLNs. For an input tilted SLN image, the proposed method accomplishes the correction task in three main steps. First, an MSER-based characters' center-points computation algorithm is designed to compute accurate center-points of the characters contained in the input SLN image. Second, an L1-L2 distance-based straight line is fitted to the computed center-points using an M-estimator algorithm; the tilt angle is estimated at this stage. Finally, based on the computed tilt angle, an affine rotation is applied to rotate the input SLN back to horizontal. The proposed method is tested on 200 tilted SLN images and proves effective, with a tilt correction rate of 80.5%.
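
    A minimal sketch of the tilt-estimation and rotation steps: fit a straight line to detected character center-points and rotate the image by the recovered angle. An ordinary least-squares fit stands in for the paper's L1-L2/M-estimator line fit, scipy.ndimage.rotate stands in for the affine rotation, and the center-point coordinates are made-up numbers.

```python
# Estimate the baseline tilt from character center-points and rotate the image.
import numpy as np
from scipy.ndimage import rotate

def estimate_tilt_deg(centers):
    """centers: (N, 2) array of (x, y) character center-points."""
    x, y = centers[:, 0], centers[:, 1]
    slope, _ = np.polyfit(x, y, 1)           # simple least-squares line
    return np.degrees(np.arctan(slope))

def correct_tilt(image, centers):
    angle = estimate_tilt_deg(centers)
    # Sign convention depends on whether image rows run up or down.
    return rotate(image, angle, reshape=False, order=1)

centers = np.array([[10, 20], [30, 23], [50, 26], [70, 29], [90, 32]])
image = np.zeros((60, 100))
print("estimated tilt (deg):", estimate_tilt_deg(centers))
corrected = correct_tilt(image, centers)
```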

  7. Texture analysis by the Schulz reflection method: Defocalization corrections for thin films

    International Nuclear Information System (INIS)

    Chateigner, D.; Germi, P.; Pernet, M.

    1992-01-01

    A new method is described for correcting experimental data obtained from the texture analysis of thin films. The analysis employed for correcting the data usually requires the experimental curves of defocalization for a randomly oriented specimen. In view of difficulties in finding non-oriented films, a theoretical method for these corrections is proposed which uses the defocalization evolution for a bulk sample, the film thickness and the penetration depth of the incident beam in the material. This correction method is applied to a film of YBa₂Cu₃O₇₋δ on an SrTiO₃ single-crystal substrate. (orig.)

  8. Saturation and nucleation in hot nuclear systems

    International Nuclear Information System (INIS)

    Deangelis, A.R.

    1990-07-01

    We investigate nuclear fragmentation in a supersaturated system using classical nucleation theory. This allows us to go outside the normally applied constraint of chemical equilibrium. The system is governed by a virial equation of state, which we use to find an expression for the density as a function of pressure and temperature. The evolution of the system is discussed in terms of the phase diagram. Corrections are included to account for the droplet surface and all charges contained in the system. Using this model we investigate and discuss the effects of temperature and saturation, and compare the results to those of other models of fragmentation. We also discuss the limiting temperatures of the system for the cases with and without chemical equilibrium. We find that large nuclei will be formed in saturated systems, even above the limiting temperature as previously defined. We also find that saturation and temperature dominate surface and Coulomb effects. The effects are quite large, thus even a qualitative inspection of the yields may give an indication of the conditions during fragmentation

  9. Yucca Mountain Area Saturated Zone Dissolved Organic Carbon Isotopic Data

    International Nuclear Information System (INIS)

    Thomas, James; Decker, David; Patterson, Gary; Peterman, Zell; Mihevc, Todd; Larsen, Jessica; Hershey, Ronald

    2007-01-01

    groundwater ages. The DIC-calculated groundwater ages were compared with DOC-calculated groundwater ages, and both of these ages were compared to travel times developed in ground-water flow and transport models. If nuclear waste is stored in Yucca Mountain, the saturated zone is the final barrier against the release of radionuclides to the environment. The most recent rendition of the TSPA takes little credit for the presence of the saturated zone, a testament to the inadequate understanding of this important barrier. If radionuclides reach the saturated zone beneath Yucca Mountain, then there is a travel time before they would leave the Yucca Mountain area and flow down gradient to the Amargosa Valley area. Knowing how long it takes groundwater in the saturated zone to flow from beneath Yucca Mountain to down-gradient areas is critical information for potential radionuclide transport. Radionuclide transport in groundwater may be the quickest pathway for radionuclides in the proposed Yucca Mountain repository to reach land surface, by way of groundwater pumped in Amargosa Valley. An alternative to ground-water flow and transport models for determining the travel time of radionuclides from beneath Yucca Mountain to down-gradient areas in the saturated zone is carbon-14 dating of both inorganic and organic carbon dissolved in the groundwater. A standard method of determining ground-water ages is to measure the carbon-13 and carbon-14 of DIC in the groundwater and then correct the measured carbon-14 along a flow path for geochemical reactions that involve carbon-containing phases. These geochemical reactions are constrained by carbon-13 measurements and isotopic fractionation. Without correcting for geochemical reactions, the ground-water ages calculated from only the differences in carbon-14 measured along a flow path (assuming the decrease in carbon-14 is due strictly to radioactive decay) could be tens of thousands of years too old. The computer program NETPATH, developed by
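
    A back-of-the-envelope illustration of the carbon-14 age relation this record refers to, with a single lumped dilution factor q standing in for the geochemical correction. The activities, half-life rounding and the value of q are placeholder numbers for illustration, not data or parameters from the Yucca Mountain study or from NETPATH.

```python
# Decay age with a lumped geochemical (dead-carbon dilution) correction.
import math

T_HALF_C14 = 5730.0  # years

def c14_age(a_measured, a_initial, q=1.0):
    """Age in years; q < 1 corrects the initial activity for dead-carbon
    dilution by geochemical reactions (q = 1 means no correction)."""
    return (T_HALF_C14 / math.log(2.0)) * math.log(q * a_initial / a_measured)

print(c14_age(30.0, 100.0))         # assumes all carbon-14 loss is decay
print(c14_age(30.0, 100.0, q=0.8))  # geochemically corrected, younger age
```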

  10. History and future of human cadaver preservation for surgical training: from formalin to saturated salt solution method.

    Science.gov (United States)

    Hayashi, Shogo; Naito, Munekazu; Kawata, Shinichi; Qu, Ning; Hatayama, Naoyuki; Hirai, Shuichi; Itoh, Masahiro

    2016-01-01

    Traditionally, surgical training meant on-the-job training with live patients in an operating room. However, due to advancing surgical techniques, such as minimally invasive surgery, and increasing safety demands during procedures, human cadavers have been used for surgical training. When considering the use of human cadavers for surgical training, one of the most important factors is their preservation. In this review, we summarize four preservation methods: fresh-frozen cadaver, formalin, Thiel's, and saturated salt solution methods. Fresh-frozen cadaver is currently the model that is closest to reality, but it also presents myriad problems, including the requirement of freezers for storage, limited work time because of rapid putrefaction, and risk of infection. Formalin is still used ubiquitously due to its low cost and wide availability, but it is not ideal because formaldehyde has an adverse health effect and formalin-embalmed cadavers do not exhibit many of the qualities of living organs. Thiel's method results in soft and flexible cadavers with almost natural colors, and Thiel-embalmed cadavers have been appraised widely in various medical disciplines. However, Thiel's method is relatively expensive and technically complicated. In addition, Thiel-embalmed cadavers have a limited dissection time. The saturated salt solution method is simple, carries a low risk of infection, and is relatively low cost. Although more research is needed, this method seems to be sufficiently useful for surgical training and has noteworthy features that expand the capability of clinical training. The saturated salt solution method will contribute to a wider use of cadavers for surgical training.

  11. The effect of lipid saturation on nutrient digestibility of layer diets

    African Journals Online (AJOL)

    Ernest King

    2013-10-11

    Oct 11, 2013 ... indicated that factors such as the fatty acid chain length, unsaturated to saturated ... Other authors (Zollitsch et al., 1997; Honda et al., 2009) ... The AME value was corrected for nitrogen equilibrium by assuming that excreta.

  12. RELIC: a novel dye-bias correction method for Illumina Methylation BeadChip.

    Science.gov (United States)

    Xu, Zongli; Langie, Sabine A S; De Boever, Patrick; Taylor, Jack A; Niu, Liang

    2017-01-03

    The Illumina Infinium HumanMethylation450 BeadChip and its successor, the Infinium MethylationEPIC BeadChip, have been extensively utilized in epigenome-wide association studies. Both arrays use two fluorescent dyes (Cy3-green/Cy5-red) to measure methylation level at CpG sites. However, performance differences between the dyes can result in biased estimates of methylation levels. Here we describe a novel method, called REgression on Logarithm of Internal Control probes (RELIC), to correct for dye bias on the whole array by utilizing the intensity values of paired internal control probes that monitor the two color channels. We evaluate the method in several datasets against other widely used dye-bias correction methods. Results on data quality improvement showed that RELIC correction statistically significantly outperforms alternative dye-bias correction methods. We incorporated the method into the R package ENmix, which is freely available from the Bioconductor website ( https://www.bioconductor.org/packages/release/bioc/html/ENmix.html ). RELIC is an efficient and robust method to correct for dye bias in Illumina Methylation BeadChip data. It outperforms alternative methods and is conveniently implemented in the R package ENmix to facilitate DNA methylation studies.
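
    A schematic sketch of the regression-on-log-intensities idea in the spirit of RELIC: fit log(green) against log(red) over paired internal control probes, then map red-channel intensities onto the green-channel scale. This is only an illustration of the concept under simulated data; it is not the ENmix implementation, and the intensity ranges and bias model below are assumptions.

```python
# Dye-bias correction by regression on the logarithm of paired control probes.
import numpy as np

def fit_dye_bias(control_green, control_red):
    """Linear regression of log(green) on log(red) over paired control probes."""
    slope, intercept = np.polyfit(np.log(control_red), np.log(control_green), 1)
    return slope, intercept

def correct_red(red_intensities, slope, intercept):
    """Bring red-channel intensities onto the green-channel scale."""
    return np.exp(intercept + slope * np.log(red_intensities))

rng = np.random.default_rng(0)
green_ctrl = rng.uniform(500.0, 5000.0, 100)
red_ctrl = 0.7 * green_ctrl * rng.lognormal(0.0, 0.05, 100)  # dimmer red dye
slope, intercept = fit_dye_bias(green_ctrl, red_ctrl)
print(correct_red(np.array([700.0, 2100.0]), slope, intercept))
```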

  13. Differentiated-effect shims for medium field levels and saturation

    International Nuclear Information System (INIS)

    Richie, A.

    1976-01-01

    The arrangement of shims on the upstream and downstream ends of magnets may be based on the independent effects of variations in the geometric length and degree of saturation at the edges of the poles. This technique can be used to match the bending strength of an accelerator's magnets at two field levels (medium fields and maximum fields) and thus save special procedures (mixing the laminations, local compensation for errors by arranging the magnets in the appropriate order) and special devices (for instance, correcting dipoles) solely for correcting bending strengths at low field levels. (Auth.)

  14. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    Directory of Open Access Journals (Sweden)

    Huiliang Cao

    2016-01-01

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  15. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-07

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  16. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-01

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability. PMID:26751455
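
    Bias-stability and angle-random-walk figures like those quoted in this record are normally read off an Allan-deviation curve of the gyroscope rate output. The sketch below computes a non-overlapping Allan deviation for a simulated rate signal; the sampling rate, noise level, cluster sizes and simulated signal are arbitrary placeholders and do not reproduce the paper's device or data.

```python
# Non-overlapping Allan deviation of a simulated gyroscope rate output.
import numpy as np

def allan_deviation(rate, fs, m):
    """Allan deviation for cluster size m (in samples) at sample rate fs (Hz)."""
    n_clusters = len(rate) // m
    means = rate[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))

fs = 100.0                                    # Hz
rng = np.random.default_rng(1)
rate = 5.0 + rng.normal(0.0, 0.5, 360000)     # deg/h bias plus white noise, 1 h
for m in (10, 100, 1000, 10000):
    print(m / fs, "s cluster:", allan_deviation(rate, fs, m), "deg/h")
```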

  17. A method of detector correction for cosmic ray muon radiography

    International Nuclear Information System (INIS)

    Liu Yuanyuan; Zhao Ziran; Chen Zhiqiang; Zhang Li; Wang Zhentian

    2008-01-01

    Cosmic ray muon radiography, which has good penetrability and is sensitive to high-Z materials, is an effective way of detecting shielded nuclear materials. Data correction is one of the key issues in the muon radiography technique. Because of the influence of the environmental background, environmental noise and detector errors, the raw data cannot be used directly: if the raw data were used for reconstruction without any correction, severe artifacts would appear. Based on the characteristics of the muon radiography system and aimed at the detector errors, this paper proposes a method of detector correction. Simulation experiments demonstrate that this method can effectively correct the errors produced by the detectors, and it is therefore a further step towards bringing cosmic-ray muon radiography into practical use. (authors)

  18. Precise method for correcting count-rate losses in scintillation cameras

    International Nuclear Information System (INIS)

    Madsen, M.T.; Nickles, R.J.

    1986-01-01

    Quantitative studies performed with scintillation detectors often require corrections for lost data because of the finite resolving time of the detector. Methods that monitor losses by means of a reference source or pulser have unacceptably large statistical fluctuations associated with their correction factors. Analytic methods that model the detector as a paralyzable system require an accurate estimate of the system resolving time. Because the apparent resolving time depends on many variables, including the window setting, source distribution, and the amount of scattering material, significant errors can be introduced by relying on a resolving time obtained from phantom measurements. These problems can be overcome by curve-fitting the data from a reference source to a paralyzable model in which the true total count rate in the selected window is estimated from the observed total rate. The resolving time becomes a free parameter in this method which is optimized to provide the best fit to the observed reference data. The fitted curve has the inherent accuracy of the reference source method with the precision associated with the observed total image count rate. Correction factors can be simply calculated from the ratio of the true reference source rate and the fitted curve. As a result, the statistical uncertainty of the data corrected by this method is not significantly increased
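
    A minimal sketch of the paralyzable dead-time relation this record builds on, R_obs = R_true · exp(-R_true · τ), inverted for the true rate by fixed-point iteration. The resolving time and count rates below are illustrative values only, not parameters fitted by the authors' reference-source method.

```python
# Invert the paralyzable dead-time model for the lower-rate branch.
import math

def true_rate(r_obs, tau, n_iter=50):
    """Fixed-point iteration r <- r_obs * exp(r * tau); converges for r*tau < 1."""
    r = r_obs
    for _ in range(n_iter):
        r = r_obs * math.exp(r * tau)
    return r

tau = 6e-6          # s, detector resolving time
r_obs = 50e3        # observed counts per second
r_true = true_rate(r_obs, tau)
print("true rate:", r_true, "correction factor:", r_true / r_obs)
```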

  19. Equation-Method for correcting clipping errors in OFDM signals.

    Science.gov (United States)

    Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry

    2016-01-01

    Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations resulting in high peak-to-average power ratios, which can make an OFDM transmitter susceptible to non-linear distortion produced by its high power amplifiers (HPA). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA, but this causes in-band distortion and introduces bit errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the Fast Fourier Transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can correct all clipping errors over a wide range of clipping thresholds. We show that numerical instability can be avoided and that new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.

  20. Integer multiplication with overflow detection or saturation

    Energy Technology Data Exchange (ETDEWEB)

    Schulte, M.J.; Balzola, P.I.; Akkas, A.; Brocato, R.W.

    2000-01-11

    High-speed multiplication is frequently used in general-purpose and application-specific computer systems. These systems often support integer multiplication, where two n-bit integers are multiplied to produce a 2n-bit product. To prevent growth in word length, processors typically return the n least significant bits of the product and a flag that indicates whether or not overflow has occurred. Alternatively, some processors saturate results that overflow to the most positive or most negative representable number. This paper presents efficient methods for performing unsigned or two's complement integer multiplication with overflow detection or saturation. These methods have significantly less area and delay than conventional methods for integer multiplication with overflow detection and saturation.
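
    A minimal sketch of the behaviour described above — n-bit two's-complement multiplication that either wraps and raises an overflow flag or saturates to the representable extremes — written in plain Python rather than as the hardware designs the paper presents.

```python
# n-bit signed multiplication with overflow detection or saturation.
def multiply(a, b, n_bits=16, saturate=False):
    """Return (result, overflow) for n-bit two's-complement multiplication."""
    lo, hi = -(1 << (n_bits - 1)), (1 << (n_bits - 1)) - 1
    full = a * b
    overflow = not (lo <= full <= hi)
    if overflow and saturate:
        return (hi if full > 0 else lo), True
    # Otherwise return the n least significant bits, reinterpreted as signed.
    wrapped = full & ((1 << n_bits) - 1)
    if wrapped > hi:
        wrapped -= 1 << n_bits
    return wrapped, overflow

print(multiply(300, 300))                  # wraps, overflow flag set
print(multiply(300, 300, saturate=True))   # clamps to 32767
print(multiply(-200, 100))                 # fits, no overflow
```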

  1. A new correction method for determination on carbohydrates in lignocellulosic biomass.

    Science.gov (United States)

    Li, Hong-Qiang; Xu, Jian

    2013-06-01

    The accurate determination of the key components in lignocellulosic biomass is the premise of pretreatment and bioconversion. Currently, the widely used 72% H2SO4 two-step hydrolysis quantitative saccharification (QS) procedure uses the loss coefficient of monosaccharide standards to correct for monosaccharide loss in the secondary hydrolysis (SH) of QS, which may result in excessive correction. By studying the quantitative relationships between glucose and xylose losses under the specific hydrolysis conditions and the corresponding HMF and furfural production, a simple correction for the monosaccharide loss from both the primary hydrolysis (PH) and SH was established using HMF and furfural as calibrators. This method was applied to the component determination of corn stover, Miscanthus and cotton stalk (raw and pretreated materials) and compared to the NREL method. It is shown that this method avoids excessive correction for samples with high carbohydrate contents. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Detection of Static Eccentricity Fault in Saturated Induction Motors by Air-Gap Magnetic Flux Signature Analysis Using Finite Element Method

    Directory of Open Access Journals (Sweden)

    N. Halem

    2013-06-01

    Unfortunately, motor current signature analysis (MCSA) cannot detect small degrees of purely static eccentricity (SE) defects, while air-gap magnetic flux signature analysis (FSA) can be applied successfully. The simulation results are obtained using the time-stepping finite element (TSFE) method. In order to show the impact of magnetic saturation on the diagnosis of the SE fault, the analysis is carried out for saturated induction motors. The index signatures of the static eccentricity fault around the fundamental and the PSHs are detected successfully for the saturated motor.

  3. DETECTION OF STATIC ECCENTRICITY FAULT IN SATURATED INDUCTION MOTORS BY AIR-GAP MAGNETIC FLUX SIGNATURE ANALYSIS USING FINITE ELEMENT METHOD

    Directory of Open Access Journals (Sweden)

    N. Halem

    2013-06-01

    Unfortunately, motor current signature analysis (MCSA) cannot detect small degrees of purely static eccentricity (SE) defects, while air-gap magnetic flux signature analysis (FSA) can be applied successfully. The simulation results are obtained using the time-stepping finite element (TSFE) method. In order to show the impact of magnetic saturation on the diagnosis of the SE fault, the analysis is carried out for saturated induction motors. The index signatures of the static eccentricity fault around the fundamental and the PSHs are detected successfully for the saturated motor.

  4. DETECTION OF STATIC ECCENTRICITY FAULT IN SATURATED INDUCTION MOTORS BY AIR-GAP MAGNETIC FLUX SIGNATURE ANALYSIS USING FINITE ELEMENT METHOD

    Directory of Open Access Journals (Sweden)

    N. Halem

    2015-07-01

    Unfortunately, motor current signature analysis (MCSA) cannot detect small degrees of purely static eccentricity (SE) defects, while air-gap magnetic flux signature analysis (FSA) can be applied successfully. The simulation results are obtained using the time-stepping finite element (TSFE) method. In order to show the impact of magnetic saturation on the diagnosis of the SE fault, the analysis is carried out for saturated induction motors. The index signatures of the static eccentricity fault around the fundamental and the PSHs are detected successfully for the saturated motor.

  5. Unitary screening corrections in high energy hadron reactions

    Energy Technology Data Exchange (ETDEWEB)

    Maor, U

    1994-10-01

    The role of s-channel unitarity screening corrections, calculated in the eikonal approximation, is investigated for elastic and diffractive hadron-hadron and photon-hadron scattering in the high-energy limit. We examine the differences between our results and those obtained from the supercritical Pomeron-Reggeon model with no such corrections. It is argued that the saturation of cross sections is attained at different scales in different channels. In particular, we point out that whereas the saturation scale for elastic scattering is apparently above the Tevatron energy range, the corresponding diffraction scale is considerably lower and can be assessed with presently available data. A review of the relevant data and its implications is presented. (author). 12 refs, 3 figs, 2 tabs.

  6. Thermoluminescence dating of chinese porcelain using a regression method of saturating exponential in pre-dose technique

    International Nuclear Information System (INIS)

    Wang Weida; Xia Junding; Zhou Zhixin; Leung, P.L.

    2001-01-01

    Thermoluminescence (TL) dating using a regression method based on a saturating exponential in the pre-dose technique is described. 23 porcelain samples from past dynasties of China were dated by this method. The results show that the TL ages are in reasonable agreement with archaeological dates, within a standard deviation of 27%; such an error is acceptable in porcelain dating.
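
    A minimal sketch of fitting a saturating-exponential growth curve of the kind used in this regression approach, I(D) = I_max · (1 - exp(-D/D0)), and reading an equivalent dose off the fitted curve. The dose points, signal values and the natural-signal level below are made-up numbers for illustration, not data from the porcelain study, and the pre-dose sensitization step itself is not modelled.

```python
# Fit a saturating exponential dose response and invert it for an equivalent dose.
import numpy as np
from scipy.optimize import curve_fit

def saturating_exp(dose, i_max, d0):
    return i_max * (1.0 - np.exp(-dose / d0))

doses = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])       # Gy
signal = np.array([0.9, 1.7, 2.9, 4.4, 5.7, 6.4])       # arbitrary units
(i_max, d0), _ = curve_fit(saturating_exp, doses, signal, p0=(7.0, 4.0))

natural_signal = 2.2
equivalent_dose = -d0 * np.log(1.0 - natural_signal / i_max)
print("I_max:", i_max, "D0:", d0, "equivalent dose (Gy):", equivalent_dose)
```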

  7. Scatter correction method with primary modulator for dual energy digital radiography: a preliminary study

    Science.gov (United States)

    Jo, Byung-Du; Lee, Young-Jin; Kim, Dae-Hong; Jeon, Pil-Hyun; Kim, Hee-Joung

    2014-03-01

    In conventional digital radiography (DR) using a dual energy subtraction technique, a significant fraction of the detected photons are scattered within the body, resulting in the scatter component. Scattered radiation can significantly deteriorate image quality in diagnostic X-ray imaging systems. Various methods of scatter correction, including both measurement and non-measurement-based methods have been proposed in the past. Both methods can reduce scatter artifacts in images. However, non-measurement-based methods require a homogeneous object and have insufficient scatter component correction. Therefore, we employed a measurement-based method to correct for the scatter component of inhomogeneous objects from dual energy DR (DEDR) images. We performed a simulation study using a Monte Carlo simulation with a primary modulator, which is a measurement-based method for the DEDR system. The primary modulator, which has a checkerboard pattern, was used to modulate primary radiation. Cylindrical phantoms of variable size were used to quantify imaging performance. For scatter estimation, we used Discrete Fourier Transform filtering. The primary modulation method was evaluated using a cylindrical phantom in the DEDR system. The scatter components were accurately removed using a primary modulator. When the results acquired with scatter correction and without correction were compared, the average contrast-to-noise ratio (CNR) with the correction was 1.35 times higher than that obtained without correction, and the average root mean square error (RMSE) with the correction was 38.00% better than that without correction. In the subtraction study, the average CNR with correction was 2.04 (aluminum subtraction) and 1.38 (polymethyl methacrylate (PMMA) subtraction) times higher than that obtained without the correction. The analysis demonstrated the accuracy of scatter correction and the improvement of image quality using a primary modulator and showed the feasibility of

  8. Gluon Saturation and EIC

    Energy Technology Data Exchange (ETDEWEB)

    Sichtermann, Ernst

    2016-12-15

    The fundamental structure of nucleons and nuclear matter is described by the properties and dynamics of quarks and gluons in quantum chromodynamics. Electron-nucleon collisions are a powerful method to study this structure. As one increases the energy of the collisions, the interaction process probes regions of progressively higher gluon density. This density must eventually saturate. A high-energy polarized Electron-Ion Collider (EIC) has been proposed to observe and study the saturated gluon density regime. Selected measurements are discussed, following a brief introduction.

  9. Clinical introduction of image lag correction for a cone beam CT system.

    Science.gov (United States)

    Stankovic, Uros; Ploeger, Lennert S; Sonke, Jan-Jakob; van Herk, Marcel

    2016-03-01

    Image lag in the flat-panel detector used for Linac-integrated cone beam computed tomography (CBCT) has a degrading effect on CBCT image quality. The most prominent visible artifact is a bright semicircular structure in the transverse view of the scans, also known as the radar artifact. Several correction strategies have been proposed, but until now the clinical introduction of such corrections remains unreported. In November 2013, the authors clinically implemented a previously proposed image lag correction on all of their machines at their main site in Amsterdam. The purpose of this study was to retrospectively evaluate the effect of the correction on the quality of CBCT images and to evaluate the required calibration frequency. Image lag was measured in five clinical CBCT systems (Elekta Synergy 4.6) using an in-house developed beam-interrupting device that stops the x-ray beam midway through the data acquisition of an unattenuated beam for calibration. A triple-exponential falling edge response was fitted to the measured data and used to correct image lag from the projection images with an infinite impulse response filter. This filter, including an extrapolation for saturated pixels, was incorporated in the authors' in-house developed clinical CBCT reconstruction software. To investigate the short-term stability of the lag and associated parameters, a series of five image lag measurements over a period of three months was performed. For quantitative analysis, the authors retrospectively selected ten patients treated in the pelvic region. The apparent contrast was quantified in polar coordinates for scans reconstructed using the parameters obtained from different dates, with and without saturation handling. Visually, the radar artifact was minimal in scans reconstructed using the image lag correction, especially when saturation handling was used. In patient imaging, there was a significant reduction of the apparent contrast from 43 ± 16.7 to 15.5 ± 11.9 HU without the
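
    A minimal sketch of recursive image-lag removal with an exponential falling-edge model. The clinical correction described above fits a triple exponential; a single exponential term and made-up trap/decay parameters are used here to keep the illustration compact, and the saturated-pixel extrapolation is omitted.

```python
# Forward lag model and its exact recursive inverse, applied frame by frame.
import numpy as np

C, D = 0.05, 0.9   # fraction of signal trapped per frame, and its per-frame decay

def add_lag(frames):
    """Forward model: part of each frame is released into later frames."""
    out, state = np.empty_like(frames), np.zeros_like(frames[0])
    for n, x in enumerate(frames):
        out[n] = (1.0 - C) * x + state
        state = D * (state + C * x)
    return out

def remove_lag(frames):
    """Exact recursive inverse of add_lag."""
    out, state = np.empty_like(frames), np.zeros_like(frames[0])
    for n, y in enumerate(frames):
        x = (y - state) / (1.0 - C)
        out[n] = x
        state = D * (state + C * x)
    return out

truth = np.zeros((20, 4, 4))
truth[:10] = 1000.0                                          # bright then dark
print(np.max(np.abs(remove_lag(add_lag(truth)) - truth)))    # ~0
```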

  10. Massive Corrections to Entanglement in Minimal E8 Toda Field Theory

    Directory of Open Access Journals (Sweden)

    Olalla A. Castro-Alvaredo

    2017-02-01

    In this letter we study the exponentially decaying corrections to saturation of the second Rényi entropy of one interval of length L in minimal E8 Toda field theory. It has been known for some time that the entanglement entropy of a massive quantum field theory in 1+1 dimensions saturates to a constant value for m₁L ≫ 1, where m₁ is the mass of the lightest particle in the spectrum. Subsequently, results by Cardy, Castro-Alvaredo and Doyon have shown that there are exponentially decaying corrections to this behaviour which are characterised by Bessel functions with arguments proportional to m₁L. For the von Neumann entropy the leading correction to saturation takes the precise universal form -K₀(2m₁L)/8, whereas for the Rényi entropies leading corrections proportional to K₀(m₁L) are expected. Recent numerical work by Pálmai for the second Rényi entropy of minimal E8 Toda has found next-to-leading order corrections decaying as exp(-2m₁L) rather than the expected exp(-m₁L). In this paper we investigate the origin of this result and show that it is incorrect. An exact form factor computation of correlators of branch point twist fields reveals that the leading corrections are proportional to K₀(m₁L) as expected.

  11. The various correction methods to the high precision aeromagnetic data

    International Nuclear Information System (INIS)

    Xu Guocang; Zhu Lin; Ning Yuanli; Meng Xiangbao; Zhang Hongjian

    2014-01-01

    In an airborne geophysical survey, the quality of the final results depends first on the measurement precision of the instrument, the choice of measurement conditions and the reliability of the data collection, and then on the correctness of the data processing methods and the soundness of the data interpretation. Geophysical data processing is therefore an important task for the comprehensive interpretation of the measurement results; whether the processing method is correct directly affects the quality of the final results. In the course of production and scientific research in recent years, we have developed a set of personal computer software for processing aeromagnetic and radiometric survey data and have successfully applied it in production. The processing methods and flowcharts for high-precision aeromagnetic data are briefly introduced in this paper. The mathematical techniques of the various correction programs for the IGRF, flying height and magnetic diurnal variation are discussed in detail, and their effectiveness is illustrated with an example. (authors)

  12. Decay correction methods in dynamic PET studies

    International Nuclear Information System (INIS)

    Chen, K.; Reiman, E.; Lawson, M.

    1995-01-01

    In order to reconstruct positron emission tomography (PET) images in quantitative dynamic studies, the data must be corrected for radioactive decay. One of the two commonly used methods ignores physiological processes including blood flow that occur at the same time as radioactive decay; the other makes incorrect use of time-accumulated PET counts. In simulated dynamic PET studies using 11 C-acetate and 18 F-fluorodeoxyglucose (FDG), these methods are shown to result in biased estimates of the time-activity curve (TAC) and model parameters. New methods described in this article provide significantly improved parameter estimates in dynamic PET studies
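
    A minimal illustration of frame-wise decay correction in dynamic PET of the kind this record discusses: the correction factor accounts for decay during the frame as well as decay since the reference time, rather than simply scaling the accumulated counts at a single time point. The isotope half-life and frame times are example values, and this sketch is not the specific improved method proposed by the authors.

```python
# Frame-wise decay correction factor referred to t = 0 (e.g. scan start).
import math

def decay_correction_factor(t_start, t_end, half_life):
    """Multiply the frame counts by this factor to remove radioactive decay."""
    lam = math.log(2.0) / half_life
    duration = t_end - t_start
    return lam * duration / (math.exp(-lam * t_start) - math.exp(-lam * t_end))

half_life_f18 = 109.77 * 60.0   # seconds
print(decay_correction_factor(0.0, 60.0, half_life_f18))       # early frame
print(decay_correction_factor(3000.0, 3600.0, half_life_f18))  # late frame
```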

  13. Method of absorbance correction in a spectroscopic heating value sensor

    Science.gov (United States)

    Saveliev, Alexei; Jangale, Vilas Vyankatrao; Zelepouga, Sergeui; Pratapas, John

    2013-09-17

    A method and apparatus for absorbance correction in a spectroscopic heating value sensor in which a reference light intensity measurement is made on a non-absorbing reference fluid, a light intensity measurement is made on a sample fluid, and a measured light absorbance of the sample fluid is determined. A corrective light intensity measurement at a non-absorbing wavelength of the sample fluid is made on the sample fluid from which an absorbance correction factor is determined. The absorbance correction factor is then applied to the measured light absorbance of the sample fluid to arrive at a true or accurate absorbance for the sample fluid.

  14. An efficient dose-compensation method for proximity effect correction

    International Nuclear Information System (INIS)

    Wang Ying; Han Weihua; Yang Xiang; Zhang Yang; Yang Fuhua; Zhang Renping

    2010-01-01

    A novel, simple dose-compensation method is developed for proximity effect correction in electron-beam lithography. The sizes of the exposed patterns depend on the dose factors while the other exposure parameters (including accelerating voltage, resist thickness, exposure step size, substrate material, and so on) remain constant. This method is based on two reasonable assumptions in the evaluation of the compensated dose factor: one is that the relation between dose factors and circle diameters is linear in the range under consideration; the other is that the compensated dose factor is only affected by the nearest neighbors, for simplicity. Four-layer-hexagon photonic crystal structures were fabricated as test patterns to demonstrate this method. Compared to the uncorrected structures, the homogeneity of the corrected hole size in the photonic crystal structures was clearly improved. (semiconductor technology)

  15. A novel 3D absorption correction method for quantitative EDX-STEM tomography

    International Nuclear Information System (INIS)

    Burdet, Pierre; Saghi, Z.; Filippin, A.N.; Borrás, A.; Midgley, P.A.

    2016-01-01

    This paper presents a novel 3D method to correct for absorption in energy dispersive X-ray (EDX) microanalysis of heterogeneous samples of unknown structure and composition. By using STEM-based tomography coupled with EDX, an initial 3D reconstruction is used to extract the location of generated X-rays as well as the X-ray path through the sample to the surface. The absorption correction needed to retrieve the generated X-ray intensity is then calculated voxel-by-voxel estimating the different compositions encountered by the X-ray. The method is applied to a core/shell nanowire containing carbon and oxygen, two elements generating highly absorbed low energy X-rays. Absorption is shown to cause major reconstruction artefacts, in the form of an incomplete recovery of the oxide and an erroneous presence of carbon in the shell. By applying the correction method, these artefacts are greatly reduced. The accuracy of the method is assessed using reference X-ray lines with low absorption. - Highlights: • A novel 3D absorption correction method is proposed for 3D EDX-STEM tomography. • The absorption of X-rays along the path to the surface is calculated voxel-by-voxel. • The method is applied on highly absorbed X-rays emitted from a core/shell nanowire. • Absorption is shown to cause major artefacts in the reconstruction. • Using the absorption correction method, the reconstruction artefacts are greatly reduced.

  16. A novel 3D absorption correction method for quantitative EDX-STEM tomography

    Energy Technology Data Exchange (ETDEWEB)

    Burdet, Pierre, E-mail: pierre.burdet@a3.epfl.ch [Department of Materials Science and Metallurgy, University of Cambridge, Charles Babbage Road 27, Cambridge CB3 0FS, Cambridgeshire (United Kingdom); Saghi, Z. [Department of Materials Science and Metallurgy, University of Cambridge, Charles Babbage Road 27, Cambridge CB3 0FS, Cambridgeshire (United Kingdom); Filippin, A.N.; Borrás, A. [Nanotechnology on Surfaces Laboratory, Materials Science Institute of Seville (ICMS), CSIC-University of Seville, C/ Americo Vespucio 49, 41092 Seville (Spain); Midgley, P.A. [Department of Materials Science and Metallurgy, University of Cambridge, Charles Babbage Road 27, Cambridge CB3 0FS, Cambridgeshire (United Kingdom)

    2016-01-15

    This paper presents a novel 3D method to correct for absorption in energy dispersive X-ray (EDX) microanalysis of heterogeneous samples of unknown structure and composition. By using STEM-based tomography coupled with EDX, an initial 3D reconstruction is used to extract the location of generated X-rays as well as the X-ray path through the sample to the surface. The absorption correction needed to retrieve the generated X-ray intensity is then calculated voxel-by-voxel estimating the different compositions encountered by the X-ray. The method is applied to a core/shell nanowire containing carbon and oxygen, two elements generating highly absorbed low energy X-rays. Absorption is shown to cause major reconstruction artefacts, in the form of an incomplete recovery of the oxide and an erroneous presence of carbon in the shell. By applying the correction method, these artefacts are greatly reduced. The accuracy of the method is assessed using reference X-ray lines with low absorption. - Highlights: • A novel 3D absorption correction method is proposed for 3D EDX-STEM tomography. • The absorption of X-rays along the path to the surface is calculated voxel-by-voxel. • The method is applied on highly absorbed X-rays emitted from a core/shell nanowire. • Absorption is shown to cause major artefacts in the reconstruction. • Using the absorption correction method, the reconstruction artefacts are greatly reduced.

  17. Color correction optimization with hue regularization

    Science.gov (United States)

    Zhang, Heng; Liu, Huaping; Quan, Shuxue

    2011-01-01

    Previous work has suggested that observers are capable of judging the quality of an image without any knowledge of the original scene. When no reference is available, observers can extract the apparent objects in an image and compare them with the typical colors of similar objects recalled from their memories. Some generally agreed upon research results indicate that although perfect colorimetric rendering is not conspicuous and color errors can be well tolerated, the appropriate rendition of certain memory colors such as skin, grass, and sky is an important factor in the overall perceived image quality. These colors are appreciated in a fairly consistent manner and are memorized with slightly different hues and higher color saturation. The aim of color correction for a digital color pipeline is to transform the image data from a device dependent color space to a target color space, usually through a color correction matrix which in its most basic form is optimized through linear regressions between the two sets of data in two color spaces in the sense of minimized Euclidean color error. Unfortunately, this method could result in objectionable distortions if the color error biased certain colors undesirably. In this paper, we propose a color correction optimization method with preferred color reproduction in mind through hue regularization and present some experimental results.
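
    A minimal sketch of the baseline step this record starts from: a 3×3 color correction matrix obtained by ordinary least squares between device RGB and target RGB patches. The hue-regularized optimization the paper proposes would add a penalty on hue shifts of selected memory colors on top of such a fit; the patch data below are synthetic placeholders.

```python
# Least-squares 3x3 color correction matrix between device and target RGB.
import numpy as np

def fit_ccm(device_rgb, target_rgb):
    """Matrix M such that device_rgb @ M.T approximates target_rgb."""
    m, *_ = np.linalg.lstsq(device_rgb, target_rgb, rcond=None)
    return m.T

rng = np.random.default_rng(2)
device = rng.uniform(0.0, 1.0, (24, 3))                  # e.g. a 24-patch chart
true_m = np.array([[1.6, -0.4, -0.2], [-0.3, 1.5, -0.2], [-0.1, -0.5, 1.6]])
target = device @ true_m.T + rng.normal(0.0, 0.005, (24, 3))
ccm = fit_ccm(device, target)
print(np.round(ccm, 2))                                  # recovers true_m
```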

  18. Resistivity Correction Factor for the Four-Probe Method: Experiment I

    Science.gov (United States)

    Yamashita, Masato; Yamaguchi, Shoji; Enjoji, Hideo

    1988-05-01

    Experimental verification of the theoretically derived resistivity correction factor (RCF) is presented. Resistivity and sheet resistance measurements by the four-probe method are made on three samples: isotropic graphite, ITO film and Au film. It is indicated that the RCF can correct the apparent variations of experimental data to yield reasonable resistivities and sheet resistances.

  19. Input saturation in nonlinear multivariable processes resolved by nonlinear decoupling

    Directory of Open Access Journals (Sweden)

    Jens G. Balchen

    1995-04-01

    A new method is presented for the resolution of the problem of input saturation in nonlinear multivariable process control by means of elementary nonlinear decoupling (END). Input saturation can have serious consequences, particularly in multivariable control, because it may lead to very undesirable system behaviour and quite often system instability. Many authors have searched for systematic techniques for designing multivariable control systems in which saturation may occur in any of the control variables (inputs, manipulated variables). No generally accepted method seems to have been presented so far which gives a solution in closed form. The method of elementary nonlinear decoupling (END) can be applied directly to the case of saturating control variables by deriving as many control strategies as there are combinations of saturating control variables. The method is demonstrated by the multivariable control of a simulated Fluidized Catalytic Cracker (FCC) with very convincing results.

  20. Efficient color correction method for smartphone camera-based health monitoring application.

    Science.gov (United States)

    Duc Dang; Chae Ho Cho; Daeik Kim; Oh Seok Kwon; Jo Woon Chong

    2017-07-01

    Smartphone health monitoring applications have recently been highlighted due to the rapid development of the hardware and software performance of smartphones. However, the color characteristics of images captured by different smartphone models are dissimilar to each other, and this difference may give non-identical health monitoring results when smartphone health monitoring applications extract physiological information using the embedded smartphone cameras. In this paper, we investigate the differences in the color properties of images captured by different smartphone models and apply a color correction method to adjust the dissimilar color values obtained from different smartphone cameras. Experimental results show that the color-corrected images obtained using the correction method have much smaller color intensity errors compared to the images without correction. These results can be applied to enhance the consistency of smartphone camera-based health monitoring applications by reducing color intensity errors among the images obtained from different smartphones.

  1. Methods of correcting Anger camera deadtime losses

    International Nuclear Information System (INIS)

    Sorenson, J.A.

    1976-01-01

    Three different methods of correcting for Anger camera deadtime loss were investigated. These included analytic methods (mathematical modeling), the marker-source method, and a new method based on counting "pileup" events appearing in a pulse-height analyzer window positioned above the photopeak of interest. The studies were done with 99mTc on a Searle Radiographics camera with a measured deadtime of about 6 μsec. Analytic methods were found to be unreliable because of unpredictable changes in deadtime with changes in radiation scattering conditions. Both the marker-source method and the pileup-counting method were found to be accurate to within a few percent for true counting rates of up to about 200 K cps, with the pileup-counting method giving better results. This finding applied to sources at depths ranging up to 10 cm of pressed wood. The relative merits of the two methods are discussed

  2. Correction of Misclassifications Using a Proximity-Based Estimation Method

    Directory of Open Access Journals (Sweden)

    Shmulevich Ilya

    2004-01-01

    An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial) information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies and the degree of improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.

  3. Landsliding in partially saturated materials

    Science.gov (United States)

    Godt, J.W.; Baum, R.L.; Lu, N.

    2009-01-01

    Rainfall-induced landslides are pervasive in hillslope environments around the world and among the most costly and deadly natural hazards. However, capturing their occurrence with scientific instrumentation in a natural setting is extremely rare. The prevailing thinking on landslide initiation, particularly for those landslides that occur under intense precipitation, is that the failure surface is saturated and has positive pore-water pressures acting on it. Most analytic methods used for landslide hazard assessment are based on the above perception and assume that the failure surface is located beneath a water table. By monitoring the pore water and soil suction response to rainfall, we observed shallow landslide occurrence under partially saturated conditions for the first time in a natural setting. We show that the partially saturated shallow landslide at this site is predictable using measured soil suction and water content and a novel unified effective stress concept for partially saturated earth materials. Copyright 2009 by the American Geophysical Union.

  4. Conservative multi-implicit integral deferred correction methods with adaptive mesh refinement

    International Nuclear Information System (INIS)

    Layton, A.T.

    2004-01-01

    In most models of reacting gas dynamics, the characteristic time scales of chemical reactions are much shorter than the hydrodynamic and diffusive time scales, rendering the reaction part of the model equations stiff. Moreover, nonlinear forcings may introduce into the solutions sharp gradients or shocks, the robust behavior and correct propagation of which require the use of specialized spatial discretization procedures. This study presents high-order conservative methods for the temporal integration of model equations of reacting flows. By means of a method of lines discretization on the flux difference form of the equations, these methods compute approximations to the cell-averaged or finite-volume solution. The temporal discretization is based on a multi-implicit generalization of integral deferred correction methods. The advection term is integrated explicitly, and the diffusion and reaction terms are treated implicitly but independently, with the splitting errors present in traditional operator splitting methods reduced via the integral deferred correction procedure. To reduce computational cost, time steps used to integrate processes with widely-differing time scales may differ in size. (author)

  5. Higher order QCD corrections in small x physics

    International Nuclear Information System (INIS)

    Chachamis, G.

    2006-11-01

    We study higher order QCD corrections in small x physics. The numerical implementation of the full NLO photon impact factor is the remaining piece needed for testing the NLO BFKL resummation against data from physical processes, such as γ*γ* collisions. We perform the numerical integration over phase space for the virtual corrections to the NLO photon impact factor. This, along with the previously calculated real corrections, makes feasible in the near future first estimates for the γ*γ* total cross section, since the convolution of the full impact factor with the NLO BFKL gluon Green's function is now straightforward. The NLO corrections to the photon impact factor are sizeable and negative. In the second part of this thesis, we estimate higher order corrections to the BK equation. We are mainly interested in whether partonic saturation is delayed in rapidity when going beyond leading order. In our investigation, we use the so-called 'rapidity veto', which forbids two emissions from being very close in rapidity, to 'switch on' higher order corrections to the BK equation. From analytic and numerical analysis, we conclude that saturation is indeed delayed in rapidity when higher order corrections are taken into account. In the last part, we investigate higher order QCD corrections as additional corrections to the electroweak (EW) sector. The question of whether BFKL corrections are of any importance in the Regge limit for the EW sector seems natural; although they arise at higher loop level, the accumulation of logarithms in the energy s at high energies cannot be dismissed without an investigation. We focus on the process γγ→ZZ. We calculate the pQCD corrections in the forward region at leading logarithmic (LL) BFKL accuracy, which are of the order of a few percent at the TeV energy scale. (orig.)

  6. Higher order QCD corrections in small x physics

    Energy Technology Data Exchange (ETDEWEB)

    Chachamis, G.

    2006-11-15

    We study higher order QCD corrections in small x physics. The numerical implementation of the full NLO photon impact factor is the remaining piece needed for testing the NLO BFKL resummation against data from physical processes, such as γ*γ* collisions. We perform the numerical integration over phase space for the virtual corrections to the NLO photon impact factor. This, along with the previously calculated real corrections, makes feasible in the near future first estimates for the γ*γ* total cross section, since the convolution of the full impact factor with the NLO BFKL gluon Green's function is now straightforward. The NLO corrections to the photon impact factor are sizeable and negative. In the second part of this thesis, we estimate higher order corrections to the BK equation. We are mainly interested in whether partonic saturation is delayed in rapidity when going beyond leading order. In our investigation, we use the so-called 'rapidity veto', which forbids two emissions from being very close in rapidity, to 'switch on' higher order corrections to the BK equation. From analytic and numerical analysis, we conclude that saturation is indeed delayed in rapidity when higher order corrections are taken into account. In the last part, we investigate higher order QCD corrections as additional corrections to the electroweak (EW) sector. The question of whether BFKL corrections are of any importance in the Regge limit for the EW sector seems natural; although they arise at higher loop level, the accumulation of logarithms in the energy s at high energies cannot be dismissed without an investigation. We focus on the process γγ→ZZ. We calculate the pQCD corrections in the forward region at leading logarithmic (LL) BFKL accuracy, which are of the order of a few percent at the TeV energy scale. (orig.)

  7. Assessing species saturation: conceptual and methodological challenges.

    Science.gov (United States)

    Olivares, Ingrid; Karger, Dirk N; Kessler, Michael

    2018-05-07

    Is there a maximum number of species that can coexist? Intuitively, we assume an upper limit to the number of species in a given assemblage, or that a lineage can produce, but defining and testing this limit has proven problematic. Herein, we first outline seven general challenges of studies on species saturation, most of which are independent of the actual method used to assess saturation. Among these are the challenge of defining saturation conceptually and operationally, the importance of setting an appropriate referential system, and the need to discriminate among patterns, processes and mechanisms. Second, we list and discuss the methodological approaches that have been used to study species saturation. These approaches vary in time and spatial scales, and in the variables and assumptions needed to assess saturation. We argue that assessing species saturation is possible, but that many studies conducted to date have conceptual and methodological flaws that prevent us from currently attaining a good idea of the occurrence of species saturation. © 2018 Cambridge Philosophical Society.

  8. The two-phase flow IPTT method for measurement of nonwetting-wetting liquid interfacial areas at higher nonwetting saturations in natural porous media.

    Science.gov (United States)

    Zhong, Hua; Ouni, Asma El; Lin, Dan; Wang, Bingguo; Brusseau, Mark L

    2016-07-01

    Interfacial areas between nonwetting-wetting (NW-W) liquids in natural porous media were measured using a modified version of the interfacial partitioning tracer test (IPTT) method that employed simultaneous two-phase flow conditions, which allowed measurement at NW saturations higher than trapped residual saturation. Measurements were conducted over a range of saturations for a well-sorted quartz sand under three wetting scenarios of primary drainage (PD), secondary imbibition (SI), and secondary drainage (SD). Limited sets of experiments were also conducted for a model glass-bead medium and for a soil. The measured interfacial areas were compared to interfacial areas measured using the standard IPTT method for liquid-liquid systems, which employs residual NW saturations. In addition, the theoretical maximum interfacial areas estimated from the measured data are compared to specific solid surface areas measured with the N₂/BET method and estimated based on geometrical calculations for smooth spheres. Interfacial areas increase linearly with decreasing water saturation over the range of saturations employed. The maximum interfacial areas determined for the glass beads, which have no surface roughness, are 32±4 and 36±5 cm⁻¹ for PD and SI cycles, respectively. The values are similar to the geometric specific solid surface area (31±2 cm⁻¹) and the N₂/BET solid surface area (28±2 cm⁻¹). The maximum interfacial areas are 274±38, 235±27, and 581±160 cm⁻¹ for the sand for PD, SI, and SD cycles, respectively, and ~7625 cm⁻¹ for the soil for PD and SI. The maximum interfacial areas for the sand and soil are significantly larger than the estimated smooth-sphere specific solid surface areas (107±8 cm⁻¹ and 152±8 cm⁻¹, respectively), but much smaller than the N₂/BET solid surface areas (1387±92 cm⁻¹ and 55224 cm⁻¹, respectively). The NW-W interfacial areas measured with the two-phase flow method compare well to values measured using the standard

  9. A rigid motion correction method for helical computed tomography (CT)

    International Nuclear Information System (INIS)

    Kim, J-H; Kyme, A; Fulton, R; Nuyts, J; Kuncic, Z

    2015-01-01

    We propose a method to compensate for six degree-of-freedom rigid motion in helical CT of the head. The method is demonstrated in simulations and in helical scans performed on a 16-slice CT scanner. Scans of a Hoffman brain phantom were acquired while an optical motion tracking system recorded the motion of the bed and the phantom. Motion correction was performed by restoring projection consistency using data from the motion tracking system, and reconstructing with an iterative fully 3D algorithm. Motion correction accuracy was evaluated by comparing reconstructed images with a stationary reference scan. We also investigated the effects on accuracy of tracker sampling rate, measurement jitter, interpolation of tracker measurements, and the synchronization of motion data and CT projections. After optimization of these aspects, motion corrected images corresponded remarkably closely to images of the stationary phantom with correlation and similarity coefficients both above 0.9. We performed a simulation study using volunteer head motion and found similarly that our method is capable of compensating effectively for realistic human head movements. To the best of our knowledge, this is the first practical demonstration of generalized rigid motion correction in helical CT. Its clinical value, which we have yet to explore, may be significant. For example it could reduce the necessity for repeat scans and resource-intensive anesthetic and sedation procedures in patient groups prone to motion, such as young children. It is not only applicable to dedicated CT imaging, but also to hybrid PET/CT and SPECT/CT, where it could also ensure an accurate CT image for lesion localization and attenuation correction of the functional image data. (paper)

  10. Quantitative chemical exchange saturation transfer (qCEST) MRI--RF spillover effect-corrected omega plot for simultaneous determination of labile proton fraction ratio and exchange rate.

    Science.gov (United States)

    Sun, Phillip Zhe; Wang, Yu; Dai, ZhuoZhi; Xiao, Gang; Wu, Renhua

    2014-01-01

    Chemical exchange saturation transfer (CEST) MRI is sensitive to dilute proteins and peptides as well as microenvironmental properties. However, the complexity of the CEST MRI effect, which varies with the labile proton content, exchange rate and experimental conditions, underscores the need for developing quantitative CEST (qCEST) analysis. Towards this goal, it has been shown that the omega plot is capable of quantifying paramagnetic CEST MRI. However, the use of the omega plot is somewhat limited for diamagnetic CEST (DIACEST) MRI because it is more susceptible to direct radio frequency (RF) saturation (spillover) owing to the relatively small chemical shift. Recently, it has been found that, for dilute DIACEST agents that undergo slow to intermediate chemical exchange, the spillover effect varies little with the labile proton ratio and exchange rate. Therefore, we postulated that the omega plot analysis can be improved if the RF spillover effect could be estimated and taken into account. Specifically, simulation showed that both the labile proton ratio and the exchange rate derived using the spillover effect-corrected omega plot were in good agreement with simulated values. In addition, the modified omega plot was confirmed experimentally, and we showed that the derived labile proton ratio increased linearly with creatine concentration (p < 0.05), supporting the use of the spillover effect-corrected omega plot for quantitative analysis of DIACEST MRI. Copyright © 2014 John Wiley & Sons, Ltd.

  11. Correction of the closed orbit and vertical dispersion and the tuning and field correction system in ISABELLE

    International Nuclear Information System (INIS)

    Parzen, G.

    1979-01-01

    Each ring in ISABELLE will have 10 separately powered systematic field correction coils to make required corrections which are the same in corresponding magnets around the ring. These corrections include changing the ν-value, shaping the working line in ν-space, correction of field errors due to iron saturation effects, the conductor arrangements, the construction of the coil ends, diamagnetic effects in the superconductor and to rate-dependent induced currents. The twelve insertion quadrupoles in the insertion surrounding each crossing point will each have a quadrupole trim coil. The closed orbit will be controlled by a system of 84 horizontal dipole coils and 90 vertical dipole coils in each ring, each coil being separately powered. This system of dipole coils will also be used to correct the vertical dispersion at the crossing points. Two families of skew quadrupoles per ring will be provided for correction of the coupling between the horizontal and vertical motions. Although there will be 258 separately powered correction coils in each ring

  12. Corrected entropy of Friedmann-Robertson-Walker universe in tunneling method

    International Nuclear Information System (INIS)

    Zhu, Tao; Ren, Ji-Rong; Li, Ming-Fan

    2009-01-01

    In this paper, we study the thermodynamic quantities of the Friedmann-Robertson-Walker (FRW) universe by using the tunneling formalism beyond the semiclassical approximation developed by Banerjee and Majhi [25]. For this, we first calculate the corrected Hawking-like temperature on the apparent horizon by considering both scalar particle and fermion tunneling. With this corrected Hawking-like temperature, the explicit expressions of the corrected entropy of the apparent horizon for various gravity theories, including Einstein gravity, Gauss-Bonnet gravity, Lovelock gravity, f(R) gravity and scalar-tensor gravity, are computed. Our results show that the corrected entropy formulae for the different gravity theories can be written as a general expression (4.39) of the same form. It is also shown that this expression is valid for black holes. This might imply that the expression for the corrected entropy derived from the tunneling method is independent of the gravity theory, the spacetime and the dimension of the spacetime. Moreover, it is concluded that the basic thermodynamical property that the corrected entropy on the apparent horizon is a state function is satisfied by the FRW universe

  13. A multilevel correction adaptive finite element method for Kohn-Sham equation

    Science.gov (United States)

    Hu, Guanghui; Xie, Hehu; Xu, Fei

    2018-02-01

    In this paper, an adaptive finite element method is proposed for solving the Kohn-Sham equation with the multilevel correction technique. In the method, the Kohn-Sham equation is solved on a fixed and appropriately coarse mesh with the finite element method, and the finite element space is successively improved by solving derived boundary value problems on a series of adaptively refined meshes. A main feature of the method is that solving the large-scale Kohn-Sham system directly is avoided, while the derived boundary value problems can be handled efficiently by classical methods such as the multigrid method. Hence, significant acceleration can be obtained in solving the Kohn-Sham equation with the proposed multilevel correction technique. The performance of the method is examined by a variety of numerical experiments.

  14. In vivo detection of hemoglobin oxygen saturation and carboxyhemoglobin saturation with multiwavelength photoacoustic microscopy.

    Science.gov (United States)

    Chen, Zhongjiang; Yang, Sihua; Xing, Da

    2012-08-15

    A method for noninvasively detecting hemoglobin oxygen saturation (SO2) and carboxyhemoglobin saturation (SCO) in subcutaneous microvasculature with multiwavelength photoacoustic microscopy is presented. Blood samples mixed with different concentrations of carboxyhemoglobin were used to test the feasibility and accuracy of photoacoustic microscopy compared with the blood-gas analyzer. Moreover, fixed-point detection of SO2 and SCO in mouse ear was obtained, and the changes from normoxia to carbon monoxide hypoxia were dynamically monitored in vivo. Experimental results demonstrate that multiwavelength photoacoustic microscopy can detect SO2 and SCO, which has future potential clinical applications.
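
    The core computation behind such multiwavelength measurements is a linear spectral unmixing step: at each wavelength the photoacoustic amplitude is taken as proportional to the blood absorption coefficient, which is a weighted sum of the molar extinction coefficients of oxyhemoglobin (HbO2), deoxyhemoglobin (HbR) and carboxyhemoglobin (HbCO). The sketch below illustrates that unmixing; the wavelengths and extinction values are placeholder numbers, not data from the paper.

    ```python
    import numpy as np

    # Placeholder molar extinction coefficients at three assumed wavelengths (nm).
    # Real values must be taken from tabulated hemoglobin spectra.
    wavelengths = [532, 560, 576]
    eps = np.array([[44480.0, 53236.0, 49896.0],   # HbO2
                    [35464.0, 40584.0, 52276.0],   # HbR (deoxy)
                    [48484.0, 28324.0, 22064.0]]).T
    # eps[i, j]: extinction of species j at wavelength i.

    # Measured (relative) photoacoustic amplitudes at the same wavelengths.
    pa_amplitude = np.array([1.00, 1.08, 1.02])

    # Least-squares unmixing: PA(lambda) ~ sum_j eps(lambda, j) * C_j, up to a common scale.
    conc, *_ = np.linalg.lstsq(eps, pa_amplitude, rcond=None)
    conc = np.clip(conc, 0.0, None)            # concentrations are non-negative

    total = conc.sum()
    so2 = conc[0] / total                      # HbO2 fraction of total hemoglobin
    sco = conc[2] / total                      # HbCO fraction of total hemoglobin
    print(f"SO2 ~ {so2:.2f}, SCO ~ {sco:.2f}")
    ```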

  15. Comparison of pulseoximetry oxygen saturation and arterial oxygen saturation in open heart intensive care unit

    Directory of Open Access Journals (Sweden)

    Alireza Mahoori

    2013-08-01

    Background: Pulse oximetry is widely used in the critical care setting and is currently used to guide therapeutic interventions. Few studies have evaluated the accuracy of SpO2 (pulse oximetry oxygen saturation) in the intensive care unit after cardiac surgery. Our objective was to compare pulse oximetry with arterial oxygen saturation (SaO2) during clinical routine in such patients, and to examine the effect of mild acidosis on this relationship. Methods: In an observational prospective study, 80 patients were evaluated in the intensive care unit after cardiac surgery. SpO2 was recorded and compared with SaO2 obtained by blood gas analysis. Single or serial arterial blood gas analyses (ABGs) were performed via a radial artery line while a reliable pulse oximeter signal was present. One hundred thirty-seven samples were collected, and for each blood gas analysis SaO2 and SpO2 were recorded. Results: O2 saturation as a marker of peripheral perfusion was measured by pulse oximetry (SpO2). The mean difference between arterial oxygen saturation and pulse oximetry oxygen saturation was 0.12%±1.6%. A total of 137 paired readings demonstrated good correlation (r=0.754; P<0.0001) between changes in SpO2 and those in SaO2 in samples with normal hemoglobin. Also, in forty-seven samples with mild acidosis, paired readings demonstrated good correlation (r=0.799; P<0.0001) and the mean difference between SaO2 and SpO2 was 0.05%±1.5%. Conclusion: The data showed that in patients with stable hemodynamics and good signal quality, changes in pulse oximetry oxygen saturation reliably predict equivalent changes in arterial oxygen saturation. Mild acidosis does not alter the relation between SpO2 and SaO2 to any clinically important extent. In conclusion, the pulse oximeter is useful for monitoring oxygen saturation in patients with stable hemodynamics.

  16. Saturation Detection-Based Blocking Scheme for Transformer Differential Protection

    Directory of Open Access Journals (Sweden)

    Byung Eun Lee

    2014-07-01

    This paper describes a current differential relay for transformer protection that operates in conjunction with a core saturation detection-based blocking algorithm. The differential current for the magnetic inrush or over-excitation has a point of inflection at the start and end of each saturation period of the transformer core. At these instants, discontinuities arise in the first-difference function of the differential current. The second- and third-difference functions convert the points of inflection into pulses, the magnitudes of which are large enough to detect core saturation. The blocking signal is activated if the third difference of the differential current is larger than the threshold, and is maintained for one cycle. In addition, a method to discriminate between transformer saturation and current transformer (CT) saturation is included. The performance of the proposed blocking scheme was compared with that of a conventional harmonic blocking method. The test results indicate that the proposed scheme successfully discriminates internal faults even with CT saturation from the magnetic inrush, over-excitation, and external faults with CT saturation, and can significantly reduce the operating time delay of the relay.
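
    As described above, the blocking logic reduces to computing difference functions of the sampled differential current and asserting a blocking flag for one cycle whenever the third difference exceeds a threshold. A minimal sketch of that logic follows; the sample rate, threshold value and signal names are assumptions for illustration, not values from the paper.

    ```python
    import numpy as np

    def blocking_signal(i_diff, threshold, samples_per_cycle):
        """Boolean blocking flag per sample of the differential current.

        i_diff: sampled differential current (1-D array)
        threshold: magnitude a third-difference pulse must exceed (assumed value)
        samples_per_cycle: number of samples per power cycle
        """
        # The third-difference function turns the inflection points at the start
        # and end of each core-saturation interval into sharp pulses.
        d3 = np.diff(i_diff, n=3)
        block = np.zeros(len(i_diff), dtype=bool)
        for k in np.flatnonzero(np.abs(d3) > threshold):
            block[k:k + samples_per_cycle] = True   # hold the flag for one cycle
        return block

    # usage sketch on a synthetic, inrush-like half-wave current (assumed parameters)
    n_per_cycle = 64
    t = np.arange(4 * n_per_cycle) / n_per_cycle
    i_d = np.maximum(np.sin(2 * np.pi * t), 0.0) ** 2
    flag = blocking_signal(i_d, threshold=0.01, samples_per_cycle=n_per_cycle)
    ```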

  17. A New High-Precision Correction Method of Temperature Distribution in Model Stellar Atmospheres

    Directory of Open Access Journals (Sweden)

    Sapar A.

    2013-06-01

    The main features of the temperature correction methods suggested and used in modeling of plane-parallel stellar atmospheres are discussed, and the main features of the new method are described. A derivation of the formulae for a version of the Unsöld-Lucy method, used by us in the SMART (Stellar Model Atmospheres and Radiative Transport) software for modeling stellar atmospheres, is presented. The method corrects the model temperature distribution by minimizing the differences of the flux from its accepted constant value and by requiring the absence of a flux gradient, meaning that the local source and sink terms of radiation must be equal. The final relative flux constancy obtainable by the method with the SMART code turned out to have a precision of the order of 0.5%. Some rapidly converging iteration steps can be useful before starting the high-precision model correction. Corrections of both the flux value and its gradient, as in the Unsöld-Lucy method, are unavoidably needed to obtain high-precision flux constancy. A new temperature correction method to obtain high-precision flux constancy for plane-parallel LTE model stellar atmospheres is proposed and studied. The non-linear optimization is carried out by least squares, in which the Levenberg-Marquardt correction method and thereafter an additional correction by a Broyden iteration loop were applied. Small finite differences of temperature (δT/T = 10^-3) are used in the computations. A single Jacobian step appears to be mostly sufficient to obtain flux constancy of the order of 10^-2 %. The dual numbers and their generalization, the dual complex numbers (the duplex numbers), automatically give the derivatives in the nilpotent part of the dual numbers. A version of the SMART software is being refactored to dual and duplex numbers, which makes it possible to dispense with the finite differences as an additional source of lowering precision of the

  18. A Method for Correcting IMRT Optimizer Heterogeneity Dose Calculations

    International Nuclear Information System (INIS)

    Zacarias, Albert S.; Brown, Mellonie F.; Mills, Michael D.

    2010-01-01

    Radiation therapy treatment planning for volumes close to the patient's surface, in lung tissue and in the head and neck region, can be challenging for the planning system optimizer because of the complexity of the treatment and protected volumes, as well as striking heterogeneity corrections. Because it is often the goal of the planner to produce an isodose plan with uniform dose throughout the planning target volume (PTV), there is a need for improved planning optimization procedures for PTVs located in these anatomical regions. To illustrate such an improved procedure, we present a treatment planning case of a patient with a lung lesion located in the posterior right lung. The intensity-modulated radiation therapy (IMRT) plan generated using standard optimization procedures produced substantial dose nonuniformity across the tumor caused by the effect of lung tissue surrounding the tumor. We demonstrate a novel iterative method of dose correction performed on the initial IMRT plan to produce a more uniform dose distribution within the PTV. This optimization method corrected for the dose missing on the periphery of the PTV and reduced the maximum dose on the PTV to 106% from 120% on the representative IMRT plan.

  19. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    Science.gov (United States)

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed level of analyte concentration divided by the observed level of the urinary creatinine concentration (UCR). This ratio-based method is flawed since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature, that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors like age, gender, and race/ethnicity that affect UCR. Model-based creatinine correction in which observed UCRs are used as an independent variable in regression models has been proposed. This study was conducted to evaluate the performance of ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group (for example, males) in the numerator of this ratio, these ratios were higher for the model-based method, for example, male to female ratio of GMs. When estimated UCR were lower for the group (for example, NHW) in the numerator of this ratio, these ratios were higher for the ratio-based method, for example, NHW to NHB ratio of GMs. Model-based method is the method of choice if all factors that affect UCR are to be accounted for.
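
    The contrast between the two approaches can be made concrete with a short sketch: the ratio-based correction divides the analyte concentration by urinary creatinine, while the model-based correction enters creatinine (here its logarithm) as a covariate in a regression and compares groups on the adjusted analyte. The synthetic data, variable names and use of ordinary least squares below are illustrative assumptions, not the exact models of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    female = rng.integers(0, 2, n).astype(float)             # 1 = female, 0 = male (synthetic)
    ucr = np.exp(rng.normal(0.0, 0.4, n) - 0.2 * female)     # urinary creatinine (synthetic)
    analyte = np.exp(rng.normal(1.0, 0.5, n)) * ucr ** 0.7   # analyte partly tracks hydration

    # Ratio-based correction: analyte per unit creatinine, then compare geometric means.
    ratio_corrected = analyte / ucr
    gm_ratio_based = np.exp(np.log(ratio_corrected[female == 1]).mean()
                            - np.log(ratio_corrected[female == 0]).mean())

    # Model-based correction: regress log(analyte) on sex with log(creatinine) as covariate.
    X = np.column_stack([np.ones(n), female, np.log(ucr)])
    beta, *_ = np.linalg.lstsq(X, np.log(analyte), rcond=None)
    gm_model_based = np.exp(beta[1])                          # adjusted female/male GM ratio

    print(f"ratio-based GM ratio: {gm_ratio_based:.3f}")
    print(f"model-based GM ratio: {gm_model_based:.3f}")
    ```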

  20. Correcting for cryptic relatedness by a regression-based genomic control method

    Directory of Open Access Journals (Sweden)

    Yang Yaning

    2009-12-01

    Background: The genomic control (GC) method is a useful tool to correct for cryptic relatedness in population-based association studies. It was originally proposed for correcting the variance inflation of Cochran-Armitage's additive trend test by using information from unlinked null markers, and was later generalized to be applicable to other tests with the additional requirement that the null markers are matched with the candidate marker in allele frequencies. However, matching allele frequencies limits the number of available null markers and thus limits the applicability of the GC method. On the other hand, errors in genotype/allele frequencies may cause further bias and variance inflation and thereby aggravate the effect of GC correction. Results: In this paper, we propose a regression-based GC method using null markers that are not necessarily matched in allele frequencies with the candidate marker. Variation in the allele frequencies of the null markers is adjusted by a regression method. Conclusion: The proposed method can be readily applied to the Cochran-Armitage trend tests other than the additive trend test, the Pearson chi-square test and other robust efficiency tests. Simulation results show that the proposed method is effective in controlling type I error in the presence of population substructure.

  1. Weyl corrections to diffusion and chaos in holography

    Science.gov (United States)

    Li, Wei-Jia; Liu, Peng; Wu, Jian-Pin

    2018-04-01

    Using holographic methods in the Einstein-Maxwell-dilaton-axion (EMDA) theory, it was conjectured that the thermal diffusion in a strongly coupled metal without quasi-particles saturates a universal lower bound that is associated with the chaotic property of the system at infrared (IR) fixed points [1]. In this paper, we investigate the thermal transport and quantum chaos in the EMDA theory with a small Weyl coupling term. It is found that the Weyl coupling corrects the thermal diffusion constant D_Q and the butterfly velocity v_B in different ways, hence resulting in a modified relation between the two at IR fixed points. Unlike in the EMDA case, our results show that the ratio D_Q/(v_B^2 τ_L) always contains a non-universal Weyl correction which also depends on the bulk fields, as long as the U(1) current is marginally relevant in the IR.

  2. Comparison of empirical models and laboratory saturated hydraulic ...

    African Journals Online (AJOL)

    Numerous methods for estimating soil saturated hydraulic conductivity exist, which range from direct measurement in the laboratory to models that use only basic soil properties. A study was conducted to compare laboratory saturated hydraulic conductivity (Ksat) measurement and that estimated from empirical models.

  3. A new digitized reverse correction method for hypoid gears based on a one-dimensional probe

    Science.gov (United States)

    Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo

    2017-12-01

    In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometrical deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles and the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretical designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction method of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of the method of numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed by the organic combination of numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.

  4. New component-based normalization method to correct PET system models

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki

    2011-01-01

    Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former method suffers from the problem that a huge count number in the blank scan data is required. Therefore, the latter methods have been proposed to obtain high statistical accuracy normalization coefficients with a small count number in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the obtained reconstructed images depends on the system modeling accuracy. Therefore, the normalization weighting approach, in which normalization coefficients are directly applied to the system matrix instead of a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct system model accuracy. In the proposed method, two components are defined and are calculated iteratively in such a way as to minimize errors of system modeling. To compare the proposed method and the direct method, we applied both methods to our small OpenPET prototype system. We achieved acceptable statistical accuracy of normalization coefficients while reducing the count number of the blank scan data to one-fortieth that required in the direct method. (author)

  5. Analysis and development of methods of correcting for heterogeneities to cobalt-60: computing application

    International Nuclear Information System (INIS)

    Kappas, K.

    1982-11-01

    The purpose of this work is to analyse the influence of inhomogeneities of the human body on the determination of the dose in Cobalt-60 radiation therapy. The first part is dedicated to the physical characteristics of inhomogeneities and to the conventional methods of correction. New methods of correction are proposed based on the analysis of the scatter; this analysis allows the physical characteristics of the inhomogeneities and the corresponding modifications of the dose to be taken into account with greater accuracy: ''the differential TAR method'' and ''the Beam Subtraction Method''. The second part is dedicated to the computer implementation of the second method of correction for routine application in hospital [fr]

  6. Analysis of slippery droplet on tilted plate by development of optical correction method

    Science.gov (United States)

    Ko, Han Seo; Gim, Yeonghyeon; Choi, Sung Ho; Jang, Dong Kyu; Sohn, Dong Kee

    2017-11-01

    Because of distortion effects at the surface of a sessile droplet, the inner flow field of the droplet measured by a PIV (particle image velocimetry) method has low reliability. To solve this problem, many researchers have studied and developed optical correction methods. However, these methods cannot be applied to cases such as a tilted droplet or other asymmetrically shaped droplets, since most were developed only for axisymmetric droplets. For the optical correction of an asymmetrically shaped droplet, the surface function was calculated by three-dimensional reconstruction using an ellipse curve-fitting method, and the optical correction using the surface function was verified by numerical simulation. The developed method was then applied to reconstruct the inner flow field of a droplet on a tilted plate. A colloidal water droplet on the tilted surface was used, and the distortion effect at the droplet surface was calculated. Using the obtained results and the PIV method, the corrected flow field for the inner and interface parts of the droplet was reconstructed. Consequently, the error in the velocity vectors near the apex of the droplet caused by the distortion effect was removed. National Research Foundation (NRF) of Korea, (2016R1A2B4011087).

  7. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield.

    Science.gov (United States)

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-06-16

    Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach, without taking into account the daily gauge data of the pixel to be corrected, but the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale.
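
    Quantile mapping itself is compact to state: within each hydroclimatic area, every satellite value is replaced by the gauge value having the same empirical non-exceedance probability over a common calibration period. A minimal empirical sketch is given below; the synthetic data and array names are assumptions for illustration only.

    ```python
    import numpy as np

    def quantile_map(sat_cal, gauge_cal, sat_new):
        """Empirical quantile mapping bias correction for one hydroclimatic area."""
        sat_sorted = np.sort(sat_cal)
        # empirical non-exceedance probability of each value to be corrected
        p = np.searchsorted(sat_sorted, sat_new, side="right") / len(sat_sorted)
        p = np.clip(p, 0.0, 1.0)
        # map that probability onto the gauge distribution
        return np.quantile(np.sort(gauge_cal), p)

    # usage sketch with synthetic daily rainfall (illustrative only)
    rng = np.random.default_rng(1)
    gauge = rng.gamma(shape=0.6, scale=8.0, size=3000)                # "observed" rainfall
    sat = np.clip(gauge * 0.7 + rng.normal(0, 1.5, 3000), 0, None)    # biased satellite product
    corrected = quantile_map(sat, gauge, sat)
    ```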

  8. Nuclear determination of saturation profiles in core plugs

    International Nuclear Information System (INIS)

    Sletsgaard, J.; Oelgaard, P.L.

    1997-01-01

    A method to determine liquid saturations in core plugs during flooding is of importance when the relative permeability and capillary pressure function are to be determined. This part of the EFP-95 project uses transmission of γ-radiation to determine these saturations. In γ-transmission measurements, the electron density of the given substance is measured. This is an advantage as compared to methods that use electric conductivity, since neither oil nor gas conducts electricity. At the moment a single 137Cs source is used, but a theoretical investigation of whether it is possible to determine three saturations, using two radioactive sources with different γ-energies, has been performed. Measurements were made on three core plugs. To make sure that the measurements could be reproduced, all the plugs had a point of reference, i.e. a mark so that it was possible to place the plug the same way every time. Two computer programs for calculation of saturation and porosity and the experimental setup are listed. (EG)
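
    In practice the saturation at a measurement point follows from Beer-Lambert attenuation: the transmitted count rate through the plug is recorded for the dry and the fully saturated states as calibration end points, and the saturation of an intermediate state is obtained from the logarithms of the transmitted intensities. The short sketch below illustrates that two-point calculation; the count rates are invented for illustration.

    ```python
    import numpy as np

    def water_saturation(i_meas, i_dry, i_wet):
        """Two-point Beer-Lambert estimate of water saturation from gamma transmission.

        i_dry: transmitted count rate through the dry (0 % saturated) plug
        i_wet: transmitted count rate through the fully saturated plug
        i_meas: transmitted count rate at the unknown saturation
        Assumes the only difference between the states is the pore water content.
        """
        return np.log(i_dry / i_meas) / np.log(i_dry / i_wet)

    # illustrative numbers only (counts per second)
    print(water_saturation(i_meas=9200.0, i_dry=10000.0, i_wet=8500.0))  # ~0.51
    ```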

  9. RAPID COMMUNICATION: A novel time frequency-based 3D Lissajous figure method and its application to the determination of oxygen saturation from the photoplethysmogram

    Science.gov (United States)

    Addison, Paul S.; Watson, James N.

    2004-11-01

    We present a novel time-frequency method for the measurement of oxygen saturation using the photoplethysmogram (PPG) signals from a standard pulse oximeter machine. The method utilizes the time-frequency transformation of the red and infrared PPGs to derive a 3D Lissajous figure. By selecting the optimal Lissajous, the method provides an inherently robust basis for the determination of oxygen saturation as regions of the time-frequency plane where high- and low-frequency signal artefacts are to be found are automatically avoided.

  10. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    International Nuclear Information System (INIS)

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees

    2015-01-01

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts

  11. Computer method to detect and correct cycle skipping on sonic logs

    International Nuclear Information System (INIS)

    Muller, D.C.

    1985-01-01

    A simple but effective computer method has been developed to detect cycle skipping on sonic logs and to replace cycle skips with estimates of correct traveltimes. The method can be used to correct observed traveltime pairs from the transmitter to both receivers. The basis of the method is the linearity of a plot of theoretical traveltime from the transmitter to the first receiver versus theoretical traveltime from the transmitter to the second receiver. Theoretical traveltime pairs are calculated assuming that the sonic logging tool is centered in the borehole, that the borehole diameter is constant, that the borehole fluid velocity is constant, and that the formation is homogeneous. The plot is linear for the full range of possible formation-rock velocity. Plots of observed traveltime pairs from a sonic logging tool are also linear but have a large degree of scatter due to borehole rugosity, sharp boundaries exhibiting large velocity contrasts, and system measurement uncertainties. However, this scatter can be reduced to a level that is less than scatter due to cycle skipping, so that cycle skips may be detected and discarded or replaced with estimated values of traveltime. Advantages of the method are that it can be applied in real time, that it can be used with data collected by existing tools, that it only affects data that exhibit cycle skipping and leaves other data unchanged, and that a correction trace can be generated which shows where cycle skipping occurs and the amount of correction applied. The method has been successfully tested on sonic log data taken in two holes drilled at the Nevada Test Site, Nye County, Nevada
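
    The heart of the correction is a line fit: because theoretical near- and far-receiver traveltimes lie on a straight line, observed traveltime pairs falling far from the fitted line are flagged as cycle skips and replaced by the value predicted from the line. A minimal sketch of that screening is shown below; the tolerance and array names are assumptions. A more careful implementation would refit the line after discarding flagged pairs so that the skips do not bias the fit.

    ```python
    import numpy as np

    def fix_cycle_skips(t1, t2, tolerance_us=30.0):
        """Detect and replace cycle-skipped far-receiver traveltimes.

        t1, t2: observed traveltimes (microseconds) to the near and far receivers
        tolerance_us: residual beyond which a pair is treated as a cycle skip (assumed)
        """
        slope, intercept = np.polyfit(t1, t2, 1)       # linear t2-versus-t1 relation
        predicted = slope * t1 + intercept
        skipped = np.abs(t2 - predicted) > tolerance_us
        t2_fixed = np.where(skipped, predicted, t2)    # replace skips with estimates
        return t2_fixed, skipped
    ```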

  12. Evaluation and parameterization of ATCOR3 topographic correction method for forest cover mapping in mountain areas

    Science.gov (United States)

    Balthazar, Vincent; Vanacker, Veerle; Lambin, Eric F.

    2012-08-01

    A topographic correction of optical remote sensing data is necessary to improve the quality of quantitative forest cover change analyses in mountainous terrain. The implementation of semi-empirical correction methods requires the calibration of model parameters that are empirically defined. This study develops a method to improve the performance of topographic corrections for forest cover change detection in mountainous terrain through an iterative tuning method of model parameters based on a systematic evaluation of the performance of the correction. The latter was based on: (i) the general matching of reflectances between sunlit and shaded slopes and (ii) the occurrence of abnormal reflectance values, qualified as statistical outliers, in very low illuminated areas. The method was tested on Landsat ETM+ data for rough (Ecuadorian Andes) and very rough mountainous terrain (Bhutan Himalayas). Compared to a reference level (no topographic correction), the ATCOR3 semi-empirical correction method resulted in a considerable reduction of dissimilarities between reflectance values of forested sites in different topographic orientations. Our results indicate that optimal parameter combinations are depending on the site, sun elevation and azimuth and spectral conditions. We demonstrate that the results of relatively simple topographic correction methods can be greatly improved through a feedback loop between parameter tuning and evaluation of the performance of the correction model.

  13. Semiempirical Quantum-Chemical Orthogonalization-Corrected Methods: Benchmarks for Ground-State Properties.

    Science.gov (United States)

    Dral, Pavlo O; Wu, Xin; Spörkel, Lasse; Koslowski, Axel; Thiel, Walter

    2016-03-08

    The semiempirical orthogonalization-corrected OMx methods (OM1, OM2, and OM3) go beyond the standard MNDO model by including additional interactions in the electronic structure calculation. When augmented with empirical dispersion corrections, the resulting OMx-Dn approaches offer a fast and robust treatment of noncovalent interactions. Here we evaluate the performance of the OMx and OMx-Dn methods for a variety of ground-state properties using a large and diverse collection of benchmark sets from the literature, with a total of 13035 original and derived reference data. Extensive comparisons are made with the results from established semiempirical methods (MNDO, AM1, PM3, PM6, and PM7) that also use the NDDO (neglect of diatomic differential overlap) integral approximation. Statistical evaluations show that the OMx and OMx-Dn methods outperform the other methods for most of the benchmark sets.

  14. Methods to Increase Educational Effectiveness in an Adult Correctional Setting.

    Science.gov (United States)

    Kuster, Byron

    1998-01-01

    A correctional educator reflects on methods that improve instructional effectiveness. These include teacher-student collaboration, clear goals, student accountability, positive classroom atmosphere, high expectations, and mutual respect. (SK)

  15. Saturated Switching Systems

    CERN Document Server

    Benzaouia, Abdellah

    2012-01-01

    Saturated Switching Systems treats the problem of actuator saturation, inherent in all dynamical systems by using two approaches: positive invariance in which the controller is designed to work within a region of non-saturating linear behaviour; and saturation technique which allows saturation but guarantees asymptotic stability. The results obtained are extended from the linear systems in which they were first developed to switching systems with uncertainties, 2D switching systems, switching systems with Markovian jumping and switching systems of the Takagi-Sugeno type. The text represents a thoroughly referenced distillation of results obtained in this field during the last decade. The selected tool for analysis and design of stabilizing controllers is based on multiple Lyapunov functions and linear matrix inequalities. All the results are illustrated with numerical examples and figures many of them being modelled using MATLAB®. Saturated Switching Systems will be of interest to academic researchers in con...

  16. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    Science.gov (United States)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists either of one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotical error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
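
    For a concrete picture of the gradient trajectory tracking variant, the sketch below tracks the minimizer of a toy time-varying quadratic f(x; t) = 0.5 (x - a(t))^2: the prediction step propagates the iterate using a finite-difference estimate of the drift of the gradient, and the correction step applies one gradient descent step on the newly sampled objective. This is an illustration of the general prediction-correction idea under assumed parameters, not the authors' code.

    ```python
    import numpy as np

    h = 0.1        # sampling interval
    alpha = 0.8    # correction (gradient) step size, assumed
    T = 100        # number of sampling instants

    def a(t):                 # time-varying minimizer of the quadratic objective
        return np.sin(t)

    def grad(x, t):           # gradient of f(x; t) = 0.5 * (x - a(t))**2
        return x - a(t)

    x = 0.0
    for k in range(T):
        t_k, t_next = k * h, (k + 1) * h
        # Prediction: follow the drifting optimality condition; the Hessian is 1 here,
        # and the partial time derivative of the gradient is approximated by differences.
        dgrad_dt = (grad(x, t_next) - grad(x, t_k)) / h
        x_pred = x - h * dgrad_dt
        # Correction: one gradient step on the objective sampled at t_{k+1}.
        x = x_pred - alpha * grad(x_pred, t_next)

    print("final tracking error:", abs(x - a(T * h)))
    ```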

  17. Numerical method for two phase flow with a unstable interface

    International Nuclear Information System (INIS)

    Glimm, J.; Marchesin, D.; McBryan, O.

    1981-01-01

    The random choice method is used to compute the oil-water interface for two-dimensional porous media equations. The equations used are a pair of coupled equations: the (elliptic) pressure equation and the (hyperbolic) saturation equation. The equations do not include the dispersive capillary pressure term, and the computation does not introduce numerical diffusion. The method resolves saturation discontinuities sharply. The main conclusion of this paper is that the random choice method is a correct numerical procedure for this problem even in the highly fingered case. Two methods of inducing fingers are considered: deterministically, through the choice of Cauchy data, and through heterogeneity, by maximizing the randomness of the random choice method

  18. Bias-correction of CORDEX-MENA projections using the Distribution Based Scaling method

    Science.gov (United States)

    Bosshard, Thomas; Yang, Wei; Sjökvist, Elin; Arheimer, Berit; Graham, L. Phil

    2014-05-01

    Within the Regional Initiative for the Assessment of the Impact of Climate Change on Water Resources and Socio-Economic Vulnerability in the Arab Region (RICCAR) lead by UN ESCWA, CORDEX RCM projections for the Middle East Northern Africa (MENA) domain are used to drive hydrological impacts models. Bias-correction of newly available CORDEX-MENA projections is a central part of this project. In this study, the distribution based scaling (DBS) method has been applied to 6 regional climate model projections driven by 2 RCP emission scenarios. The DBS method uses a quantile mapping approach and features a conditional temperature correction dependent on the wet/dry state in the climate model data. The CORDEX-MENA domain is particularly challenging for bias-correction as it spans very diverse climates showing pronounced dry and wet seasons. Results show that the regional climate models simulate too low temperatures and often have a displaced rainfall band compared to WATCH ERA-Interim forcing data in the reference period 1979-2008. DBS is able to correct the temperature biases as well as some aspects of the precipitation biases. Special focus is given to the analysis of the influence of the dry-frequency bias (i.e. climate models simulating too few rain days) on the bias-corrected projections and on the modification of the climate change signal by the DBS method.

  19. Characteristic of methods for prevention and correction of moral of alienation of students

    Directory of Open Access Journals (Sweden)

    Z. K. Malieva

    2014-01-01

    Moral alienation is a complex integrative phenomenon characterized by an individual's rejection of the universal spiritual and moral values of society. The last opportunity to find a purposeful, competent solution to the problem of an individual's moral alienation lies in the space of professional education. The subject of this article is to identify methods for the prevention and correction of moral alienation of students that can be used by teachers both in extracurricular activities and in conducting classes in humanitarian disciplines. The purpose of the work is to study methods and techniques that enhance the effectiveness of the prevention and correction of moral alienation of students, and to identify their characteristics and application in the educational activities of teachers. The paper concretizes a definition of methods to prevent and correct the moral alienation of students, which represent a system of interrelated actions of educator and students aimed at: redefining negative values, rules and norms of behavior; and overcoming negative mental states, negative attitudes, interests and aptitudes of educatees. The article distinguishes and characterizes the most effective methods for the prevention and correction of moral alienation of students: conviction; the method of "Socrates"; understanding; semiotic analysis; suggestion; and the method of "explosion." It also presents the rules and necessary conditions for the application of these methods in the educational process. It is ascertained that the choice of effective preventive and corrective methods and techniques is determined by the content of the intrapersonal, psychological sources of moral alienation associated with the following: negative attitudes due to previous experience; orientation to particular negative values; inadequate self-esteem, having a negative impact on the development and functioning of the individual's psyche and behavior; and mental states. The conclusions of the

  20. RESEARCH ON THE DEGREE OF SATURATION INVESTIGATION BY THE SAMPLING OF THE SAND FOR LIQUEFACTION

    Science.gov (United States)

    Fujii, Nao; Ohuchi, Masatoshi; Sakai, Katsuo; Nishigaki, Makoto

    A liquefaction countermeasure technique in which the liquefaction strength is enhanced by keeping the sand deposit in an unsaturated state is currently under study. The authors have suggested a simple method of verifying the persistence of residual air using undisturbed samples at ordinary temperature and sampled groundwater, and have applied the method to the ground adjacent to the foundation of a viaduct pneumatic caisson, where air leaked during construction was considered to have been trapped. We demonstrate a method of correcting for the influence of the sampling pressure as well as of the dissolved air, and study the precision of the required degree of saturation. As a result, it is shown that the residual air entrapped in the sand deposit can persist for as long as about 28 years.

  1. Energy dependent saturable and reverse saturable absorption in cube-like polyaniline/polymethyl methacrylate film

    Energy Technology Data Exchange (ETDEWEB)

    Thekkayil, Remyamol [Department of Chemistry, Indian Institute of Space Science and Technology, Valiamala, Thiruvananthapuram 695 547 (India); Philip, Reji [Light and Matter Physics Group, Raman Research Institute, C.V. Raman Avenue, Bangalore 560 080 (India); Gopinath, Pramod [Department of Physics, Indian Institute of Space Science and Technology, Valiamala, Thiruvananthapuram 695 547 (India); John, Honey, E-mail: honey@iist.ac.in [Department of Chemistry, Indian Institute of Space Science and Technology, Valiamala, Thiruvananthapuram 695 547 (India)

    2014-08-01

    Solid films of cube-like polyaniline synthesized by inverse microemulsion polymerization method have been fabricated in a transparent PMMA host by an in situ free radical polymerization technique, and are characterized by spectroscopic and microscopic techniques. The nonlinear optical properties are studied by open aperture Z-scan technique employing 5 ns (532 nm) and 100 fs (800 nm) laser pulses. At the relatively lower laser pulse energy of 5 μJ, the film shows saturable absorption both in the nanosecond and femtosecond excitation domains. An interesting switchover from saturable absorption to reverse saturable absorption is observed at 532 nm when the energy of the nanosecond laser pulses is increased. The nonlinear absorption coefficient increases with increase in polyaniline concentration, with low optical limiting threshold, as required for a good optical limiter. - Highlights: • Synthesized cube-like polyaniline nanostructures. • Fabricated polyaniline/PMMA nanocomposite films. • At 5 μJ energy, saturable absorption is observed both at ns and fs regime. • Switchover from SA to RSA is observed as energy of laser beam increases. • Film (0.1 wt % polyaniline) shows high β_eff (230 cm GW^-1) and low limiting threshold at 150 μJ.

  2. Studies of non-isothermal flow in saturated and partially saturated porous media

    International Nuclear Information System (INIS)

    Ho, C.K.; Maki, K.S.; Glass, R.J.

    1993-01-01

    Physical and numerical experiments have been performed to investigate the behavior of nonisothermal flow in two-dimensional saturated and partially saturated porous media. The physical experiments were performed to identify non-isothermal flow fields and temperature distributions in fully saturated, half-saturated, and residually saturated two-dimensional porous media with bottom heating and top cooling. Two counter-rotating liquid-phase convective cells were observed to develop in the saturated regions of all three cases. Gas-phase convection was also evidenced in the unsaturated regions of the partially saturated experiments. TOUGH2 numerical simulations of the saturated case were found to be strongly dependent on the assumed boundary conditions of the physical system. Models including heat losses through the boundaries of the test cell produced temperature and flow fields that were in better agreement with the observed temperature and flow fields than models that assumed insulated boundary conditions. A sensitivity analysis also showed that a reduction of the bulk permeability of the porous media in the numerical simulations depressed the effects of convection, flattening the temperature profiles across the test cell

  3. Orbit Determination from Tracking Data of Artificial Satellite Using the Method of Differential Correction

    Directory of Open Access Journals (Sweden)

    Byoung-Sun Lee

    1988-06-01

    A differential correction process determining osculating orbital elements as accurately as possible at a given instant of time from tracking data of an artificial satellite was implemented. Preliminary orbital elements were used as the initial value of the differential correction procedure, which was iterated until the residual between the real observations (O) and the computed observations (C) was minimized. The tracked satellite was NOAA-9 (TIROS-N series). Two types of tracking data were used: prediction data precomputed from the mean orbital elements of TBUS, and real data obtained by tracking the 1.707 GHz HRPT signal of NOAA-9 using the 5 m auto-track antenna at the Radio Research Laboratory. Depending on the tracking data, either the Gauss method or the Herrick-Gibbs method was applied to the preliminary orbit determination. In the differential correction stage we used both the analytical method of Escobal (1975) and numerical methods, and their results are nearly consistent. The differentially corrected orbit converged to the same value in spite of the differences between the preliminary orbits of each time span.

  4. A Novel Flood Forecasting Method Based on Initial State Variable Correction

    Directory of Open Access Journals (Sweden)

    Kuang Li

    2017-12-01

    The influence of initial state variables on the accuracy of flood forecasting with conceptual hydrological models is analyzed in this paper, and a novel flood forecasting method based on the correction of initial state variables is proposed. The new method is abbreviated as ISVC (Initial State Variable Correction). The ISVC takes the residual between the measured and forecasted flows during the initial period of the flood event as the objective function, and it uses a particle swarm optimization algorithm to correct the initial state variables, which are then used to drive the flood forecasting model. The historical flood events of 11 watersheds in south China are forecasted and verified, and important issues concerning the ISVC application are then discussed. The study results show that the ISVC is effective and applicable in flood forecasting tasks. It can significantly improve the flood forecasting accuracy in most cases.

  5. A Correction Method for UAV Helicopter Airborne Temperature and Humidity Sensor

    Directory of Open Access Journals (Sweden)

    Longqing Fan

    2017-01-01

    This paper presents a correction method for UAV helicopter airborne temperature and humidity measurements, including an error correction scheme and a bias-calibration scheme. Since rotor downwash flow inevitably introduces measurement errors in helicopter airborne sensors, the error correction scheme constructs a model relating the rotor-induced velocity to temperature and humidity by building the heat balance equation for the platinum resistance temperature sensor and the pressure correction term for the humidity sensor. The induced velocity at a spatial point below the rotor disc plane can be calculated as the sum of the induced velocities excited by the center line vortex, the rotor disk vortex, and the skew cylinder vortex, based on generalized vortex theory. In order to minimize systematic biases, the bias-calibration scheme adopts a multiple linear regression to achieve a result systematically consistent with the tethered balloon profiles. Two temperature and humidity sensors were mounted on the “Z-5” UAV helicopter in the field experiment. Overall, the results of applying the calibration method show that the temperature and relative humidity obtained by the UAV helicopter closely align with the tethered balloon profiles in providing measurements of temperature and humidity profiles within marine atmospheric boundary layers.

  6. Multiscale optimization of saturated poroelastic actuators

    DEFF Research Database (Denmark)

    Andreasen, Casper Schousboe; Sigmund, Ole

    A multiscale method for optimizing the material micro structure in a macroscopically heterogeneous saturated poroelastic media with respect to macro properties is presented. The method is based on topology optimization using the homogenization technique, here applied to the optimization of a bi...

  7. A corrective method to correct the inherent flaw of the asynchronization direct counting circuit

    International Nuclear Information System (INIS)

    Wang Renfei; Liu Congzhan; Jin Yongjie; Zhang Zhi; Li Yanguo

    2003-01-01

    As an inherent flaw of the asynchronization direct counting circuit, crosstalk, which results from the randomness of the time signal, always exists between two adjacent channels. In order to reduce the counting error derived from the crosstalk, the authors propose an effective method to correct the flaw after analysing the mechanism of the crosstalk

  8. Investigation of Compton scattering correction methods in cardiac SPECT by Monte Carlo simulations

    International Nuclear Information System (INIS)

    Silva, A.M. Marques da; Furlan, A.M.; Robilotta, C.C.

    2001-01-01

    The goal of this work was the use of Monte Carlo simulations to investigate the effects of two scattering correction methods: dual energy window (DEW) and dual photopeak window (DPW), in quantitative cardiac SPECT reconstruction. MCAT torso-cardiac phantom, with 99m Tc and non-uniform attenuation map was simulated. Two different photopeak windows were evaluated in DEW method: 15% and 20%. Two 10% wide subwindows centered symmetrically within the photopeak were used in DPW method. Iterative ML-EM reconstruction with modified projector-backprojector for attenuation correction was applied. Results indicated that the choice of the scattering and photopeak windows determines the correction accuracy. For the 15% window, fitted scatter fraction gives better results than k = 0.5. For the 20% window, DPW is the best method, but it requires parameters estimation using Monte Carlo simulations. (author)
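
    The dual energy window correction evaluated in the study is simple to state: scatter in the photopeak window is estimated as a fixed fraction k of the counts acquired in a lower scatter window and subtracted projection by projection. A minimal sketch follows; the window limits are assumptions for illustration, and the abstract itself compares k = 0.5 against a fitted scatter fraction.

    ```python
    import numpy as np

    def dew_scatter_correct(photopeak_proj, scatter_proj, k=0.5):
        """Dual energy window (DEW) scatter correction of SPECT projection data.

        photopeak_proj: counts in the photopeak window (e.g. 126-154 keV for 99mTc, assumed)
        scatter_proj:   counts in the lower scatter window (e.g. 92-125 keV, assumed)
        k:              scatter multiplier; 0.5 is the classical choice, while a fitted
                        value may perform better depending on the photopeak window width.
        """
        corrected = photopeak_proj - k * scatter_proj
        return np.clip(corrected, 0, None)    # keep counts non-negative
    ```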

  9. Correction to the count-rate detection limit and sample/blank time-allocation methods

    International Nuclear Information System (INIS)

    Alvarez, Joseph L.

    2013-01-01

    A common form of count-rate detection limits contains a propagation of uncertainty error. This error originated in methods to minimize uncertainty in the subtraction of the blank counts from the gross sample counts by allocation of blank and sample counting times. Correct uncertainty propagation showed that the time allocation equations have no solution. This publication presents the correct form of count-rate detection limits. -- Highlights: •The paper demonstrated a proper method of propagating uncertainty of count rate differences. •The standard count-rate detection limits were in error. •Count-time allocation methods for minimum uncertainty were in error. •The paper presented the correct form of the count-rate detection limit. •The paper discussed the confusion between count-rate uncertainty and count uncertainty
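
    The propagation issue can be illustrated directly: for gross counts Ng accumulated over time tg and blank counts Nb over tb, the net count rate is r = Ng/tg - Nb/tb, and Poisson statistics give its standard deviation as sqrt(Ng/tg^2 + Nb/tb^2). The short sketch below computes both; the counts and times are made-up numbers.

    ```python
    from math import sqrt

    def net_rate_and_sigma(n_gross, t_gross, n_blank, t_blank):
        """Net count rate and its correctly propagated Poisson uncertainty."""
        rate = n_gross / t_gross - n_blank / t_blank
        sigma = sqrt(n_gross / t_gross**2 + n_blank / t_blank**2)
        return rate, sigma

    # illustrative numbers: 480 gross counts in 600 s, 300 blank counts in 1200 s
    r, s = net_rate_and_sigma(480, 600, 300, 1200)
    print(f"net rate = {r:.3f} +/- {s:.3f} counts/s")
    ```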

  10. Comparison of fluorescence rejection methods of baseline correction and shifted excitation Raman difference spectroscopy

    Science.gov (United States)

    Cai, Zhijian; Zou, Wenlong; Wu, Jianhong

    2017-10-01

    Raman spectroscopy has been extensively used in biochemical tests, explosive detection, food additive and environmental pollutant analysis. However, fluorescence interference poses a serious problem for the application of portable Raman spectrometers. Currently, baseline correction and shifted-excitation Raman difference spectroscopy (SERDS) are the most prevalent fluorescence suppression methods. In this paper, we compared the performance of the baseline correction and SERDS methods, both experimentally and in simulation. The comparison demonstrates that baseline correction can yield an acceptable fluorescence-removed Raman spectrum if the original Raman signal has a good signal-to-noise ratio, but it cannot recover small Raman signals from a large noise background. With the SERDS method, the Raman signals, even those very weak compared to the fluorescence intensity and noise level, can be clearly extracted, and the fluorescence background can be completely rejected. The Raman spectrum recovered by SERDS has a good signal-to-noise ratio. This shows that baseline correction is more suitable for large bench-top Raman systems with better signal quality or signal-to-noise ratio, while the SERDS method is more suitable for noisy devices, especially portable Raman spectrometers.
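
    The SERDS principle is easy to sketch: two spectra acquired at slightly shifted excitation wavelengths share essentially the same broad fluorescence background, so their difference retains only the shifted Raman features, from which a background-free spectrum can be recovered. The minimal reconstruction below simply integrates the difference spectrum; more elaborate deconvolution or fitting schemes are normally used, and the array names and shift size are assumptions.

    ```python
    import numpy as np

    def serds_reconstruct(spec_1, spec_2):
        """Crude fluorescence-free Raman spectrum from two shifted-excitation spectra.

        spec_1, spec_2: intensities on the same Raman-shift axis, acquired with
        excitation wavelengths differing by roughly one Raman linewidth (assumed).
        """
        diff = spec_1 - spec_2                  # broad fluorescence background cancels
        recon = np.cumsum(diff - diff.mean())   # integration-based reconstruction
        return recon - recon.min()              # shift to a non-negative baseline
    ```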

  11. [Models for quantification of fluid saturation in two-phase flow system by light transmission method and its application].

    Science.gov (United States)

    Zhang, Yan-Hong; Ye, Shu-Jun; Wu, Ji-Chun

    2014-06-01

    Based on the light transmission method for quantifying liquid saturation and its application to two-phase flow systems, two groups of sandbox experiments were set up to study the migration of gas or Dense Non-Aqueous Phase Liquids (DNAPLs) in water-saturated porous media. The migration of gas or DNAPL was monitored during the study. Two modified Light Intensity-Saturation (LIS) models for the water/gas two-phase system were applied and verified with the experimental data. In addition, two new LIS models for the NAPL/water system were developed and applied to simulate the DNAPL infiltration experiment. The gas injection experiment showed that gas moved upward to the top of the sandbox in the form of 'fingering' and finally formed a continuous distribution. The DNAPL infiltration experiment showed that TCE moved mainly downward under gravity, eventually forming an irregular plume and accumulating at the bottom of the sandbox. The outcomes of the two LIS models for the water/gas system (WG-A and WG-B) were consistent with the measured data. The results of the two LIS models for the NAPL/water system (NW-A and NW-B) fit the observations well, and model NW-A, based on the assumption of individual drainage, gave better results. This could be a useful reference for quantifying NAPL/water saturation in porous media systems.

  12. Correction method of slit modulation transfer function on digital medical imaging system

    International Nuclear Information System (INIS)

    Kim, Jung Min; Jung, Hoi Woun; Min, Jung Whan; Im, Eon Kyung

    2006-01-01

    Using CR image pixel data, we examined how to calculate the MTF and the digital characteristic curve. Pixel data acquired with digital x-ray equipment can be exported to a text file (Excel). We describe how to compute and correct the sharpness (MTF) of digital images following Fujita's method. An Excel program was used to perform the calculation from a radiograph of a slit: the digital characteristic curve, line spread function, discrete Fourier transform, and fast Fourier transform were computed in sequence. A major advantage of this method is that it is easy to understand and produces results without costly software or detailed knowledge of a programming language. Different correction methods yield noticeably different values, so an appropriate correction method must be chosen and repeated experiments are needed to obtain precise MTF figures.
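
    The core of the calculation described above (line spread function of the slit image, then its Fourier transform) can be expressed compactly; the sketch below, with an assumed pixel pitch and illustrative names, is meant only to show the sequence of steps, not the spreadsheet implementation.

      import numpy as np

      def mtf_from_lsf(lsf, pixel_pitch_mm=0.1):
          """Presampled MTF estimate: normalize the line spread function,
          take its FFT magnitude, and normalize to 1 at zero frequency."""
          lsf = np.asarray(lsf, dtype=float)
          lsf = lsf - lsf.min()                 # remove baseline offset
          lsf = lsf / lsf.sum()                 # unit area
          mtf = np.abs(np.fft.rfft(lsf))
          mtf = mtf / mtf[0]
          freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)   # cycles/mm
          return freqs, mtf

      # example: Gaussian-shaped LSF sampled at 0.1 mm pitch
      x = np.arange(-32, 32)
      lsf = np.exp(-0.5 * (x / 2.0) ** 2)
      freqs, mtf = mtf_from_lsf(lsf, pixel_pitch_mm=0.1)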

  13. Application of the finite volume method in the simulation of saturated flows of binary mixtures

    International Nuclear Information System (INIS)

    Murad, M.A.; Gama, R.M.S. da; Sampaio, R.

    1989-12-01

    This work presents the simulation of saturated flows of an incompressible Newtonian fluid through a rigid, homogeneous and isotropic porous medium. The employed mathematical model is derived from the Continuum Theory of Mixtures and generalizes the classical one which is based on Darcy's Law form of the momentum equation. In this approach fluid and porous matrix are regarded as continuous constituents of a binary mixture. The finite volume method is employed in the simulation. (author) [pt

  14. Regression dilution bias: tools for correction methods and sample size calculation.

    Science.gov (United States)

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
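
    The standard correction referred to above divides the observed slope by the reliability ratio (the intraclass correlation of the risk factor), which can be estimated from repeated measurements in a reliability study. The snippet below is a generic sketch of that textbook correction, not the authors' software; the variable names and the simple ICC estimator are assumptions.

      import numpy as np

      def reliability_ratio(first, second):
          """Estimate the reliability ratio (ICC) from paired repeat measurements
          of the risk factor: between-subject variance / total variance."""
          first, second = np.asarray(first, float), np.asarray(second, float)
          within_var = np.mean((first - second) ** 2) / 2.0
          total_var = np.var(np.concatenate([first, second]), ddof=1)
          return (total_var - within_var) / total_var

      def corrected_slope(observed_slope, icc):
          """Correct the regression slope for regression dilution bias."""
          return observed_slope / icc

      # e.g. observed slope 0.40 and ICC 0.65 give a corrected slope of ~0.62
      print(corrected_slope(0.40, 0.65))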

  15. Scatter correction method for x-ray CT using primary modulation: Phantom studies

    International Nuclear Information System (INIS)

    Gao Hewei; Fahrig, Rebecca; Bennett, N. Robert; Sun Mingshan; Star-Lack, Josh; Zhu Lei

    2010-01-01

    Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: a Catphan(c)600 phantom, an anthropomorphic chest phantom, and the Catphan(c)600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan(c)600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast to noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study investigates the impact of object size on the efficiency of our method. The scatter-to-primary ratio estimation error on the Catphan(c)600 phantom without any annulus (20 cm in diameter) is at the level of 0.04; it rises to 0.07 and 0.1 on the phantom with an

  16. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    Science.gov (United States)

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  17. An Investigation on the Efficiency Correction Method of the Turbocharger at Low Speed

    Directory of Open Access Journals (Sweden)

    Jin Eun Chung

    2018-01-01

    The heat transfer in a turbocharger occurs due to the temperature differences between the exhaust gas and the intake air, coolant, and oil. This heat transfer distorts the measured efficiency of the compressor and turbine, an effect known to be exacerbated at low rotational speeds. This study therefore proposes a method to mitigate the distortion of the test data caused by heat transfer in the turbocharger. With this method, a representative compressor temperature is defined and the heat transfer rate of the compressor is calculated by considering the effect of the oil and turbine inlet temperatures at low rotational speeds, with the cold and hot gas tests performed simultaneously. The correction of compressor efficiency as a function of turbine inlet temperature was performed through both hot and cold gas tests; the results showed a maximum error of 16% before correction and a maximum error of 3% after correction. In addition, it is shown that the efficiency distortion of the turbocharger due to heat transfer can be corrected by using the combined turbine efficiency based on the corrected compressor efficiency.

  18. Panorama of the Brazilian correctional structure

    Directory of Open Access Journals (Sweden)

    Renata de Oliveira Cartaxo

    2014-04-01

    Objective: To describe, based on the Penitentiary Information Integrated System (Sistema Integrado de Informações Penitenciárias - Infopen), aspects of the national correctional structure, the convicts' characteristics, and the profiles of the professionals available to guarantee the constitutional precept of healthcare. Methods: Descriptive, document-based study carried out with secondary data available in the Penitentiary Information Integrated System, assessing the Brazilian correctional structure, the prison inmates' profile regarding personal characteristics and the crime committed, and the professionals involved in healthcare. Results: There are 298,275 vacancies, occupied by 496,251 convicts in 1,857 prisons. Concerning the inmates' profile, 92.3% (461,444) are male, between 18 and 24 years old (25.6% - 126,929), dark-skinned (36.7% - 82,354), with incomplete elementary school (40.7% - 201,938), and the most common offense is drug trafficking (23.5% - 100,648). As for the composition of the health assistance teams, a total of 5,132 professionals were registered in the system. Conclusion: Based on the Penitentiary Information Integrated System, the Brazilian correctional structure is characterized by a deficit of vacancies, leading to overcrowding and/or saturation of the existing prisons, which makes it especially difficult to guarantee that the inmates' needs are met. doi:10.5020/18061230.2013.p266

  19. Discussion on Boiler Efficiency Correction Method with Low Temperature Economizer-Air Heater System

    Science.gov (United States)

    Ke, Liu; Xing-sen, Yang; Fan-jun, Hou; Zhi-hong, Hu

    2017-05-01

    This paper points out that it is incorrect to take the outlet flue gas temperature of the low-temperature economizer as the exhaust gas temperature in boiler efficiency calculations based on GB10184-1988. It proposes a new correction method that decomposes the low-temperature economizer-air heater system into two hypothetical parts, an air preheater and a pre-condensed water heater, and takes the equivalent outlet gas temperature of the air preheater as the exhaust gas temperature in the boiler efficiency calculation. This makes the boiler efficiency calculation more concise, with no air heater correction required, and provides a useful reference for handling this kind of problem correctly.

  20. Ballistic deficit correction methods for large Ge detectors-high counting rate study

    International Nuclear Information System (INIS)

    Duchene, G.; Moszynski, M.

    1995-01-01

    This study presents different ballistic deficit correction methods versus input count rate (from 3 to 50 kcounts/s) using four large Ge detectors of about 70% relative efficiency. It turns out that the Tennelec TC245 linear amplifier in the BDC mode (Hinshaw method) is the best overall compromise for energy resolution. All correction methods lead to narrow sum peaks indistinguishable from single γ lines. The full-energy-peak throughput is found to be representative of the pile-up inspection dead time of the corrector circuits. This work also presents a new and simple representation, plotting energy resolution and throughput simultaneously versus input count rate. (TEC). 12 refs., 11 figs

  1. A Hold-out method to correct PCA variance inflation

    DEFF Research Database (Denmark)

    Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Hansen, Lars Kai

    2012-01-01

    In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article a correction method based on a Leave-One-Out (LOO) procedure...

  2. Quantitative chemical exchange saturation transfer (qCEST) MRI - omega plot analysis of RF-spillover-corrected inverse CEST ratio asymmetry for simultaneous determination of labile proton ratio and exchange rate.

    Science.gov (United States)

    Wu, Renhua; Xiao, Gang; Zhou, Iris Yuwen; Ran, Chongzhao; Sun, Phillip Zhe

    2015-03-01

    Chemical exchange saturation transfer (CEST) MRI is sensitive to labile proton concentration and exchange rate, thus allowing measurement of dilute CEST agent and microenvironmental properties. However, CEST measurement depends not only on the CEST agent properties but also on the experimental conditions. Quantitative CEST (qCEST) analysis has been proposed to address the limitation of the commonly used simplistic CEST-weighted calculation. Recent research has shown that the concomitant direct RF saturation (spillover) effect can be corrected using an inverse CEST ratio calculation. We postulated that a simplified qCEST analysis is feasible with omega plot analysis of the inverse CEST asymmetry calculation. Specifically, simulations showed that the numerically derived labile proton ratio and exchange rate were in good agreement with input values. In addition, the qCEST analysis was confirmed experimentally in a phantom with concurrent variation in CEST agent concentration and pH. Also, we demonstrated that the derived labile proton ratio increased linearly with creatine concentration, showing that the proposed qCEST analysis can simultaneously determine the labile proton ratio and exchange rate in a relatively complex in vitro CEST system. Copyright © 2015 John Wiley & Sons, Ltd.
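
    For context, the omega-plot idea underlying such qCEST analyses is that the inverse CEST metric is linear in 1/ω1² (ω1 being the RF saturation amplitude); a straight-line fit then yields the exchange rate from slope/intercept and the labile proton ratio from their product. The sketch below assumes the commonly cited relation 1/CESTR_ind = (R1w/(fr·ksw))·(1 + ksw²/ω1²) and is only an illustration, not the authors' processing pipeline.

      import numpy as np

      def omega_plot_fit(omega1, inverse_cestr, r1w=1.0):
          """Fit 1/CESTR_ind = a + b*(1/omega1**2); then
          ksw = sqrt(b/a) and fr = r1w / sqrt(a*b)."""
          x = 1.0 / np.asarray(omega1, float) ** 2
          b, a = np.polyfit(x, np.asarray(inverse_cestr, float), 1)  # slope, intercept
          ksw = np.sqrt(b / a)        # exchange rate (same angular units as omega1)
          fr = r1w / np.sqrt(a * b)   # labile proton ratio
          return ksw, fr

      # synthetic check: fr = 1e-3, ksw = 500 rad/s, R1w = 1 /s
      fr_true, ksw_true, r1w = 1e-3, 500.0, 1.0
      omega1 = np.array([100.0, 200.0, 400.0, 800.0])            # rad/s
      cestr = fr_true * ksw_true / r1w * omega1**2 / (omega1**2 + ksw_true**2)
      print(omega_plot_fit(omega1, 1.0 / cestr, r1w))             # ~ (500.0, 0.001)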

  3. The study on the X-ray correction method of long fracture displacement

    International Nuclear Information System (INIS)

    Jia Bin; Huang Ailing; Chen Fuzhong; Men Chunyan; Sui Chengzong; Cui Yiming; Yang Yundong

    2010-01-01

    Objective: To explore the correction of fracture displacement measured on conventional X-ray radiographs (anteroposterior and lateral) and to verify it with computed tomography (CT). Methods: A correction method for fracture displacement was designed according to the geometry of X-ray projection imaging. A mid-humeral fracture specimen was prepared with a defined lateral shift and angular displacement, radiographed in the anteroposterior and lateral positions, and also volume-scanned with CT; the volume data were processed with multiplanar reconstruction (MPR) and shaded surface display (SSD). The displacement values derived from the X-ray images, from CT with MPR and SSD processing, and from the actual specimen design were compared. Results: The direction and degree of displacement obtained from the corrected X-ray data, the MPR and SSD data, and the actual specimen design differed little: positional differences were <1.5 mm and angular differences <1.5 degrees. Conclusion: Measuring fracture displacement on conventional radiographs with coordinate correction is reliable and clearly improves the diagnostic accuracy of the degree of fracture displacement. (authors)

  4. Accuracy in the quantification of chemical exchange saturation transfer (CEST) and relayed nuclear Overhauser enhancement (rNOE) saturation transfer effects.

    Science.gov (United States)

    Zhang, Xiao-Yong; Wang, Feng; Li, Hua; Xu, Junzhong; Gochberg, Daniel F; Gore, John C; Zu, Zhongliang

    2017-07-01

    Accurate quantification of chemical exchange saturation transfer (CEST) effects, including dipole-dipole mediated relayed nuclear Overhauser enhancement (rNOE) saturation transfer, is important for applications and studies of molecular concentration and transfer rate (and thereby pH or temperature). Although several quantification methods, such as Lorentzian difference (LD) analysis, multiple-pool Lorentzian fits, and the three-point method, have been extensively used in several preclinical and clinical applications, the accuracy of these methods has not been evaluated. Here we simulated multiple-pool Z spectra containing the pools that contribute to the main CEST and rNOE saturation transfer signals in the brain, numerically fit them using the different methods, and then compared their derived CEST metrics with the known solute concentrations and exchange rates. Our results show that the LD analysis overestimates contributions from amide proton transfer (APT) and intermediate exchanging amine protons; the three-point method significantly underestimates both APT and rNOE saturation transfer at -3.5 ppm (NOE(-3.5)). The multiple-pool Lorentzian fit is more accurate than the other two methods, but only at lower irradiation powers (≤1 μT at 9.4 T) within the range of our simulations. At higher irradiation powers, this method is also inaccurate because of the presence of a fast exchanging CEST signal that has a non-Lorentzian lineshape. Quantitative parameters derived from in vivo images of rodent brain tumor obtained using an irradiation power of 1 μT were also compared. Our results demonstrate that all three quantification methods show similar contrasts between tumor and contralateral normal tissue for both APT and the NOE(-3.5). However, the quantified values of the three methods are significantly different. Our work provides insight into the fitting accuracy obtainable in a complex tissue model and provides guidelines for evaluating other newly developed

  5. A software-based x-ray scatter correction method for breast tomosynthesis

    International Nuclear Information System (INIS)

    Jia Feng, Steve Si; Sechopoulos, Ioannis

    2011-01-01

    Purpose: To develop a software-based scatter correction method for digital breast tomosynthesis (DBT) imaging and investigate its impact on the image quality of tomosynthesis reconstructions of both phantoms and patients. Methods: A Monte Carlo (MC) simulation of x-ray scatter, with geometry matching that of the cranio-caudal (CC) view of a DBT clinical prototype, was developed using the Geant4 toolkit and used to generate maps of the scatter-to-primary ratio (SPR) of a number of homogeneous standard-shaped breasts of varying sizes. Dimension-matched SPR maps were then deformed and registered to DBT acquisition projections, allowing for the estimation of the primary x-ray signal acquired by the imaging system. Noise filtering of the estimated projections was then performed to reduce the impact of the quantum noise of the x-ray scatter. Three dimensional (3D) reconstruction was then performed using the maximum likelihood-expectation maximization (MLEM) method. This process was tested on acquisitions of a heterogeneous 50/50 adipose/glandular tomosynthesis phantom with embedded masses, fibers, and microcalcifications and on acquisitions of patients. The image quality of the reconstructions of the scatter-corrected and uncorrected projections was analyzed by studying the signal-difference-to-noise ratio (SDNR), the integral of the signal in each mass lesion (integrated mass signal, IMS), and the modulation transfer function (MTF). Results: The reconstructions of the scatter-corrected projections demonstrated superior image quality. The SDNR of masses embedded in a 5 cm thick tomosynthesis phantom improved 60%-66%, while the SDNR of the smallest mass in an 8 cm thick phantom improved by 59% (p < 0.01). The IMS of the masses in the 5 cm thick phantom also improved by 15%-29%, while the IMS of the masses in the 8 cm thick phantom improved by 26%-62% (p < 0.01). Some embedded microcalcifications in the tomosynthesis phantoms were visible only in the scatter-corrected
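
    The central arithmetic of such SPR-map-based corrections is simple: if a Monte Carlo map gives the scatter-to-primary ratio for a projection, the scatter estimate is measured·SPR/(1+SPR) and the primary estimate is the remainder. The sketch below shows only that step, with an optional smoothing of the scatter estimate to limit its quantum noise; it is a generic illustration rather than the authors' pipeline, and the function and parameter names are assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def scatter_correct_projection(measured, spr_map, smooth_sigma=2.0):
          """Estimate the primary signal of a measured projection given a
          registered scatter-to-primary-ratio (SPR) map."""
          scatter = measured * spr_map / (1.0 + spr_map)   # estimated scatter signal
          scatter = gaussian_filter(scatter, smooth_sigma) # reduce scatter quantum noise
          primary = measured - scatter
          return np.clip(primary, 0.0, None)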

  6. Saturated salt solution method: a useful cadaver embalming for surgical skills training.

    Science.gov (United States)

    Hayashi, Shogo; Homma, Hiroshi; Naito, Munekazu; Oda, Jun; Nishiyama, Takahisa; Kawamoto, Atsuo; Kawata, Shinichi; Sato, Norio; Fukuhara, Tomomi; Taguchi, Hirokazu; Mashiko, Kazuki; Azuhata, Takeo; Ito, Masayuki; Kawai, Kentaro; Suzuki, Tomoya; Nishizawa, Yuji; Araki, Jun; Matsuno, Naoto; Shirai, Takayuki; Qu, Ning; Hatayama, Naoyuki; Hirai, Shuichi; Fukui, Hidekimi; Ohseto, Kiyoshige; Yukioka, Tetsuo; Itoh, Masahiro

    2014-12-01

    This article evaluates the suitability of cadavers embalmed by the saturated salt solution (SSS) method for surgical skills training (SST). SST courses using cadavers have been performed to advance a surgeon's techniques without any risk to patients. One important factor for improving SST is the suitability of specimens, which depends on the embalming method. In addition, the infectious risk and cost involved in using cadavers are problems that need to be solved. Six cadavers were embalmed by 3 methods: formalin solution, Thiel solution (TS), and SSS methods. Bacterial and fungal culture tests and measurement of ranges of motion were conducted for each cadaver. Fourteen surgeons evaluated the 3 embalming methods and 9 SST instructors (7 trauma surgeons and 2 orthopedists) operated the cadavers by 21 procedures. In addition, ultrasonography, central venous catheterization, and incision with cauterization followed by autosuture stapling were performed in some cadavers. The SSS method had a sufficient antibiotic effect and produced cadavers with flexible joints and a high tissue quality suitable for SST. The surgeons evaluated the cadavers embalmed by the SSS method to be highly equal to those embalmed by the TS method. Ultrasound images were clear in the cadavers embalmed by both the methods. Central venous catheterization could be performed in a cadaver embalmed by the SSS method and then be affirmed by x-ray. Lungs and intestines could be incised with cauterization and autosuture stapling in the cadavers embalmed by TS and SSS methods. Cadavers embalmed by the SSS method are sufficiently useful for SST. This method is simple, carries a low infectious risk, and is relatively of low cost, enabling a wider use of cadavers for SST.

  7. Integrals of random fields treated by the model correction factor method

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  8. Microscopic analysis of saturable absorbers: Semiconductor saturable absorber mirrors versus graphene

    Energy Technology Data Exchange (ETDEWEB)

    Hader, J.; Moloney, J. V. [Nonlinear Control Strategies, Inc., 3542 N. Geronimo Ave., Tucson, Arizona 85705 (United States); College of Optical Sciences, University of Arizona, Tucson, Arizona 85721 (United States); Yang, H.-J.; Scheller, M. [College of Optical Sciences, University of Arizona, Tucson, Arizona 85721 (United States); Koch, S. W. [Department of Physics and Materials Sciences Center, Philipps Universität Marburg, Renthof 5, 35032 Marburg (Germany)

    2016-02-07

    Fully microscopic many-body calculations are used to study the influence of strong sub-picosecond pulses on the carrier distributions and corresponding optical response in saturable absorbers used for mode-locking—semiconductor (quantum well) saturable absorber mirrors (SESAMs) and single layer graphene based saturable absorber mirrors (GSAMs). Unlike in GSAMs, the saturation fluence and recovery time in SESAMs show a strong spectral dependence. While the saturation fluence in the SESAM is minimal at the excitonic bandgap, the optimal recovery time and least pulse distortion due to group delay dispersion are found for excitation higher in the first subband. For excitation near the SESAM bandgap, the saturation fluence is about one tenth of that in the GSAM. At energies above the bandgap, the fluences in both systems become similar. A strong dependence of the saturation fluence on the pulse width in both systems is caused by carrier relaxation during the pulse. The recovery time in graphene is found to be about two to four times faster than that in the SESAMs. The occurrence of negative differential transmission in graphene is shown to be caused by dopant related carriers. In SESAMs, a negative differential transmission is found when exciting below the excitonic resonance where excitation induced dephasing leads to an enhancement of the absorption. Comparisons of the simulation data to the experiment show a very good quantitative agreement.

  9. Implementation of the Centroid Method for the Correction of Turbulence

    Directory of Open Access Journals (Sweden)

    Enric Meinhardt-Llopis

    2014-07-01

    The centroid method for the correction of turbulence consists of computing the Karcher-Fréchet mean of the sequence of input images. The direction of deformation between a pair of images is determined by the optical flow. A distinguishing feature of the centroid method is that it can produce useful results from an arbitrarily small set of input images.

  10. A brain MRI bias field correction method created in the Gaussian multi-scale space

    Science.gov (United States)

    Chen, Mingsheng; Qin, Mingxin

    2017-07-01

    A pre-processing step is needed to correct for the bias field signal before corrupted MR images are submitted to image-processing algorithms such as segmentation. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by convolving the inhomogeneous MR image with a two-dimensional Gaussian function. In this multi-scale space, the method retrieves the image details from the difference between the original image and the convolved image, and then obtains an image whose inhomogeneity is eliminated as the weighted sum of the image details in each layer of the space. Finally, the bias-field-corrected MR image is obtained after a gamma (γ) correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software demonstrate the superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
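
    A loose sketch of this type of multi-scale correction follows; the number of scales, the weights, the gamma value, and the library calls are illustrative assumptions, not values taken from the paper.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def multiscale_bias_correction(image, sigmas=(2, 4, 8, 16),
                                     weights=None, gamma=0.8):
          """Sum weighted detail layers (image minus Gaussian-blurred image) across
          scales, then apply a gamma correction to restore contrast/brightness."""
          img = image.astype(float)
          if weights is None:
              weights = np.ones(len(sigmas)) / len(sigmas)
          details = sum(w * (img - gaussian_filter(img, s))
                        for w, s in zip(weights, sigmas))
          corrected = details - details.min()              # shift to non-negative
          corrected /= max(corrected.max(), 1e-12)         # normalize to [0, 1]
          return corrected ** gamma                        # gamma correction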

  11. Occurrence of two-photon absorption saturation in Ag nanocolloids, prepared by chemical reduction method

    Energy Technology Data Exchange (ETDEWEB)

    Rahulan, K. Mani, E-mail: krahul.au@gmail.com [Department of Physics, Anna University, Chennai (India); Ganesan, S. [Department of Physics, Anna University, Chennai (India); Aruna, P., E-mail: aruna@annauniv.edu [Department of Physics, Anna University, Chennai (India)

    2012-09-01

    Highlights: • Ag nanocolloids were synthesized via a chemical reduction method. • The molecules of PVP play an important role in the growth and agglomeration of silver nanocolloids. • Saturation behaviour followed by two-photon absorption was responsible for the good optical limiting characteristics of these nanocolloids. • The nonlinear optical parameters calculated from the data showed that these materials could be used as efficient optical limiters. - Abstract: Silver nanocolloids stabilized with polyvinyl pyrrolidone (PVP) have been prepared from AgNO{sub 3} by a chemical reduction method, involving the intermediate preparation of Ag{sub 2}O colloidal dispersions in the presence of sodium dodecyl sulfate as a surfactant and formaldehyde as the reducing agent. The molecules of PVP play an important role in the growth and agglomeration of the silver nanocolloids. The formation of the Ag nanocolloids was studied from the UV-vis absorption characteristics. An energy dispersive X-ray (EDX) spectrum and the X-ray diffraction peaks of the nanoparticles showed the highly crystalline nature of the silver structure. The particle size was found to be 40 nm as analyzed by field emission scanning electron microscopy (FESEM). The nonlinear optical and optical limiting properties of these nanoparticle dispersions were studied using the Z-scan technique at 532 nm. Experimental results show that the Ag nanocolloids possess a strong optical limiting effect, originating from absorption saturation followed by a two-photon mechanism. The data show that Ag nanocolloids have great potential for nonlinear optical devices.

  12. A Time-Walk Correction Method for PET Detectors Based on Leading Edge Discriminators.

    Science.gov (United States)

    Du, Junwei; Schmall, Jeffrey P; Judenhofer, Martin S; Di, Kun; Yang, Yongfeng; Cherry, Simon R

    2017-09-01

    The leading edge timing pick-off technique is the simplest timing extraction method for PET detectors. Due to the inherent time-walk of the leading edge technique, corrections should be made to improve timing resolution, especially for time-of-flight PET. Time-walk correction can be done by utilizing the relationship between the threshold crossing time and the event energy on an event by event basis. In this paper, a time-walk correction method is proposed and evaluated using timing information from two identical detectors both using leading edge discriminators. This differs from other techniques that use an external dedicated reference detector, such as a fast PMT-based detector using constant fraction techniques to pick-off timing information. In our proposed method, one detector was used as reference detector to correct the time-walk of the other detector. Time-walk in the reference detector was minimized by using events within a small energy window (508.5 - 513.5 keV). To validate this method, a coincidence detector pair was assembled using two SensL MicroFB SiPMs and two 2.5 mm × 2.5 mm × 20 mm polished LYSO crystals. Coincidence timing resolutions using different time pick-off techniques were obtained at a bias voltage of 27.5 V and a fixed temperature of 20 °C. The coincidence timing resolution without time-walk correction were 389.0 ± 12.0 ps (425 -650 keV energy window) and 670.2 ± 16.2 ps (250-750 keV energy window). The timing resolution with time-walk correction improved to 367.3 ± 0.5 ps (425 - 650 keV) and 413.7 ± 0.9 ps (250 - 750 keV). For comparison, timing resolutions were 442.8 ± 12.8 ps (425 - 650 keV) and 476.0 ± 13.0 ps (250 - 750 keV) using constant fraction techniques, and 367.3 ± 0.4 ps (425 - 650 keV) and 413.4 ± 0.9 ps (250 - 750 keV) using a reference detector based on the constant fraction technique. These results show that the proposed leading edge based time-walk correction method works well. Timing resolution obtained
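
    A schematic version of such an energy-dependent time-walk correction (fit the timing offset versus event energy against a reference detector restricted to a narrow energy window, then subtract the fitted walk event by event) could look like the following; the polynomial model, window, and names are assumptions, not the authors' code.

      import numpy as np

      def fit_time_walk(energies, time_differences, degree=3):
          """Fit the time difference (test detector minus reference detector) as a
          polynomial function of the test-detector event energy."""
          return np.polyfit(energies, time_differences, degree)

      def apply_time_walk_correction(energies, timestamps, walk_coeffs):
          """Subtract the fitted time walk from each event timestamp."""
          return timestamps - np.polyval(walk_coeffs, energies)

      # calibration: events where the reference detector lies in a narrow window
      # coeffs = fit_time_walk(cal_energies_keV, cal_dt_ps)
      # corrected_t = apply_time_walk_correction(event_energies_keV, event_t_ps, coeffs)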

  13. Saturated hydraulic conductivity values of some forest soils of ...

    African Journals Online (AJOL)

    A simple falling-head method is presented for the laboratory determination of the saturated hydraulic conductivity of some forest soils of Ghana. Using the procedure, it was found that saturated hydraulic conductivity was positively correlated with sand content and negatively correlated with clay content, both at the P = 0.05 level.
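
    For reference, the falling-head calculation normally used with such laboratory measurements is K = (a·L)/(A·Δt)·ln(h1/h2), where a is the standpipe cross-section, A the sample cross-section, L the sample length, and h1, h2 the head at the start and end of the interval Δt. A minimal sketch (variable names and the example numbers are assumptions):

      import math

      def falling_head_k(a_cm2, A_cm2, L_cm, h1_cm, h2_cm, dt_s):
          """Saturated hydraulic conductivity (cm/s) from a falling-head test."""
          return (a_cm2 * L_cm) / (A_cm2 * dt_s) * math.log(h1_cm / h2_cm)

      # e.g. 0.5 cm^2 standpipe, 20 cm^2 sample of length 10 cm,
      # head falling from 80 cm to 40 cm in 600 s -> ~2.9e-4 cm/s
      print(falling_head_k(0.5, 20.0, 10.0, 80.0, 40.0, 600.0))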

  14. Correction of measured multiplicity distributions by the simulated annealing method

    International Nuclear Information System (INIS)

    Hafidouni, M.

    1993-01-01

    Simulated annealing is a method used to solve combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs
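
    As a generic illustration of how simulated annealing can be applied to this kind of correction (unfolding a true multiplicity distribution from an observed one given a response/efficiency model), the sketch below minimizes a chi-square between the folded trial distribution and the observation using a Metropolis acceptance rule. The response model, cost function, and cooling schedule are assumptions, not those of the cited work.

      import numpy as np

      rng = np.random.default_rng(0)

      def chi2(trial, observed, response):
          folded = response @ trial                 # fold the trial truth with the response
          return np.sum((folded - observed) ** 2 / np.maximum(observed, 1.0))

      def anneal(observed, response, n_steps=20000, t0=1.0, cooling=0.9995):
          trial = observed.astype(float).copy()     # start from the observed distribution
          cost, temp = chi2(trial, observed, response), t0
          for _ in range(n_steps):
              candidate = trial.copy()
              i = rng.integers(trial.size)
              candidate[i] = max(0.0, candidate[i] + rng.normal(0.0, 1.0))
              new_cost = chi2(candidate, observed, response)
              # Metropolis criterion: always accept improvements, sometimes accept worse moves
              if new_cost < cost or rng.random() < np.exp((cost - new_cost) / temp):
                  trial, cost = candidate, new_cost
              temp *= cooling
          return trial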

  15. N3 Bias Field Correction Explained as a Bayesian Modeling Method

    DEFF Research Database (Denmark)

    Larsen, Christian Thode; Iglesias, Juan Eugenio; Van Leemput, Koen

    2014-01-01

    Although N3 is perhaps the most widely used method for MRI bias field correction, its underlying mechanism is in fact not well understood. Specifically, the method relies on a relatively heuristic recipe of alternating iterative steps that does not optimize any particular objective function. In t...

  16. High-order multi-implicit spectral deferred correction methods for problems of reactive flow

    International Nuclear Information System (INIS)

    Bourlioux, Anne; Layton, Anita T.; Minion, Michael L.

    2003-01-01

    Models for reacting flow are typically based on advection-diffusion-reaction (A-D-R) partial differential equations. Many practical cases correspond to situations where the relevant time scales associated with each of the three sub-processes can be widely different, leading to disparate time-step requirements for robust and accurate time-integration. In particular, interesting regimes in combustion correspond to systems in which diffusion and reaction are much faster processes than advection. The numerical strategy introduced in this paper is a general procedure to account for this time-scale disparity. The proposed methods are high-order multi-implicit generalizations of spectral deferred correction methods (MISDC methods), constructed for the temporal integration of A-D-R equations. Spectral deferred correction methods compute a high-order approximation to the solution of a differential equation by using a simple, low-order numerical method to solve a series of correction equations, each of which increases the order of accuracy of the approximation. The key feature of MISDC methods is their flexibility in handling several sub-processes implicitly but independently, while avoiding the splitting errors present in traditional operator-splitting methods and also allowing for different time steps for each process. The stability, accuracy, and efficiency of MISDC methods are first analyzed using a linear model problem and the results are compared to semi-implicit spectral deferred correction methods. Furthermore, numerical tests on simplified reacting flows demonstrate the expected convergence rates for MISDC methods of orders three, four, and five. The gain in efficiency by independently controlling the sub-process time steps is illustrated for nonlinear problems, where reaction and diffusion are much stiffer than advection. Although the paper focuses on this specific time-scales ordering, the generalization to any ordering combination is straightforward

  17. Transformation of seismic velocity data to extract porosity and saturation values for rocks

    International Nuclear Information System (INIS)

    Berryman, James G.; Berge, Patricia A.; Bonner, Brian P.

    2000-01-01

    For wave propagation at low frequencies in a porous medium, the Gassmann-Domenico relations are well-established for homogeneous partial saturation by a liquid. They provide the correct relations for seismic velocities in terms of constituent bulk and shear moduli, solid and fluid densities, porosity and saturation. It has not been possible, however, to invert these relations easily to determine porosity and saturation when the seismic velocities are known. Also, the state (or distribution) of saturation, i.e., whether or not liquid and gas are homogeneously mixed in the pore space, is another important variable for reservoir evaluation. A reliable ability to determine the state of saturation from velocity data continues to be problematic. It is shown how transforming compressional and shear wave velocity data to the (ρ/λ,μ/λ)-plane (where λ and μ are the Lame parameters and ρ is the total density) results in a set of quasi-orthogonal coordinates for porosity and liquid saturation that greatly aids in the interpretation of seismic data for the physical parameters of most interest. A second transformation of the same data then permits isolation of the liquid saturation value, and also provides some direct information about the state of saturation. By thus replotting the data in the (λ/μ, ρ/μ)-plane, inferences can be made concerning the degree of patchy (inhomogeneous) versus homogeneous saturation that is present in the region of the medium sampled by the data. Our examples include igneous and sedimentary rocks, as well as man-made porous materials. These results have potential applications in various areas of interest, including petroleum exploration and reservoir characterization, geothermal resource evaluation, environmental restoration monitoring, and geotechnical site characterization. (c) 2000 Acoustical Society of America
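
    The two coordinate transformations described above start from the standard elastic relations μ = ρVs² and λ = ρVp² − 2μ; a minimal sketch of computing both sets of coordinates from velocity and density data follows (names and the example values are illustrative only).

      import numpy as np

      def lame_coordinates(vp, vs, rho):
          """Return the (rho/lambda, mu/lambda) and (lambda/mu, rho/mu) coordinates
          used for porosity/saturation interpretation of velocity data."""
          vp, vs, rho = map(np.asarray, (vp, vs, rho))
          mu = rho * vs**2
          lam = rho * vp**2 - 2.0 * mu
          return (rho / lam, mu / lam), (lam / mu, rho / mu)

      # e.g. Vp = 2500 m/s, Vs = 1200 m/s, rho = 2100 kg/m^3
      first_plane, second_plane = lame_coordinates(2500.0, 1200.0, 2100.0)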

  18. Evaluation of a method for correction of scatter radiation in thorax cone beam CT; Evaluation d'une methode de correction du rayonnement diffuse en tomographie du thorax avec faisceau conique

    Energy Technology Data Exchange (ETDEWEB)

    Rinkel, J.; Dinten, J.M. [CEA Grenoble (DTBS/STD), Lab. d' Electronique et de Technologie de l' Informatique, LETI, 38 (France); Esteve, F. [European Synchrotron Radiation Facility (ESRF), 38 - Grenoble (France)

    2004-07-01

    Purpose: Cone beam CT (CBCT) enables three-dimensional imaging with isotropic resolution. X-ray scatter estimation is a major challenge for quantitative CBCT imaging of the thorax: the scatter level is significantly higher on cone beam systems than on collimated fan beam systems. The effects of this scattered radiation are cupping artefacts, streaks, and quantification inaccuracies. The conventional beam-stop approach to scatter estimation can be used for CBCT but leads to a significant increase in dose and acquisition time. An original scatter management process that requires no supplementary acquisition has been developed at CEA-LETI. Methods and Materials: This Analytical Plus Indexing-based (API) method of scatter correction in CBCT is based on scatter calibration through offline acquisitions with beam stops on Lucite plates, combined with an analytical transformation derived from physical equations. This approach has been applied successfully in bone densitometry and mammography. To evaluate the method in CBCT, acquisitions of a thorax phantom with and without beam stops were performed. To compare the different scatter correction approaches, the Feldkamp algorithm was applied to raw data corrected for scatter by the API and beam-stop approaches. Results: The API method provides results in good agreement with the beam-stop array approach, suppressing the cupping artefact. The influence of the scatter correction method on the noise in the reconstructed images was also evaluated. Conclusion: The results indicate that the API method is effective for quantitative CBCT imaging of the thorax. Compared with a beam-stop array method, it requires a lower x-ray dose and shortens the acquisition time. (authors)

  19. Fission track dating of volcanic glass: experimental evidence for the validity of the Size-Correction Method

    International Nuclear Information System (INIS)

    Bernardes, C.; Hadler Neto, J.C.; Lattes, C.M.G.; Araya, A.M.O.; Bigazzi, G.; Cesar, M.F.

    1986-01-01

    Two techniques may be employed for correcting thermally lowered fission track ages of glass material: the so-called 'size-correction method' and the 'plateau method'. Several results from fission track dating of obsidian were analysed in order to compare the model underlying the size-correction method with experimental evidence. The results of this work can be summarized as follows: 1) The assumption that the mean sizes of spontaneous and induced etched tracks are equal in samples unaffected by partial fading is supported by the experimental results. If reactor effects exist, such as an enhancement of the etching rate in the irradiated fraction due to radiation damage and/or to the fact that induced fission releases slightly more energy than spontaneous fission, their influence on the size-correction method is very small. 2) The two correction techniques produce concordant results. 3) Several samples from the same obsidian, affected by 'instantaneous' as well as 'continuous' natural fading to different degrees, were analysed: the curve showing the decrease of the spontaneous track mean size versus the fraction of spontaneous tracks lost by fading is in close agreement with the correction curve constructed for the same obsidian by imparting artificial thermal treatments to induced tracks. From the above points one can conclude that the assumptions on which the size-correction method is based are well supported, at least to a first approximation. (Author) [pt

  20. New methods for the correction of 31P NMR spectra in in vivo NMR spectroscopy

    International Nuclear Information System (INIS)

    Starcuk, Z.; Bartusek, K.; Starcuk, Z. jr.

    1994-01-01

    New methods for the correction of 31 P NMR spectra in in vivo NMR spectroscopy are presented. A method for the baseline correction of the spectra, which combines time-domain and frequency-domain processing, is discussed. The method is very fast and efficient in minimizing the baseline artifacts caused by biological tissues.

  1. Saturated tearing modes in tokamaks. Renewal proposal, progress report

    International Nuclear Information System (INIS)

    Bateman, G.

    1984-01-01

    We have completed a computer code (GTOR) implementing our quasilinear method for determining saturated tearing mode magnetic island widths in axisymmetric toroidal plasmas. With this code we have surveyed the effect of current profile, aspect ratio and plasma elongation on saturated tearing modes. Current peaking within the islands is found to have a particularly large effect. In support of this research, we have developed a direct method for computing Hamada coordinates from harmonics of the inverse Grad-Shafranov equation

  2. Saturation flow mathematical model based on multiple combinations of lane groups

    Energy Technology Data Exchange (ETDEWEB)

    Racila, L.

    2016-07-01

    The ideal value of the traffic stream that can pass through an intersection is known as the saturation flow rate, expressed in vehicles per hour of green time. The saturation flow is important for understanding the traffic signal cycle and, from there, the Level of Service. The paper evaluates, through a series of applied mathematical methods, the effect of different lane groupings and of the critical lane group concept on the saturation flow rate. The importance of this method is that it creates a basis for the timing plan of a signalized intersection. (Author)

  3. Nonlinear acoustics of water-saturated marine sediments

    DEFF Research Database (Denmark)

    Jensen, Leif Bjørnø

    1976-01-01

    Interest in the acoustic qualities of water-saturated marine sediments has increased considerably during recent years. The use of sources of high-intensity sound in oil prospecting, in geophysical and geological studies of bottom and subbottom materials and profiles and recently in marine...... archaeology has emphasized the need for information about the nonlinear acoustic qualities of water-saturated marine sediments. While the acoustic experiments and theoretical investigations hitherto performed have concentrated on a determination of the linear acoustic qualities of water-saturated marine...... sediments, their parameters of nonlinear acoustics are still unexplored. The strong absorption, increasing about linearly with frequency, found in most marine sediments and the occurrence of velocity dispersion by some marine sediments restrict the number of nonlinear acoustic test methods traditionally

  4. A third-generation dispersion and third-generation hydrogen bonding corrected PM6 method

    DEFF Research Database (Denmark)

    Kromann, Jimmy Charnley; Christensen, Anders Steen; Svendsen, Casper Steinmann

    2014-01-01

    We present new dispersion and hydrogen bond corrections to the PM6 method, PM6-D3H+, and its implementation in the GAMESS program. The method combines the DFT-D3 dispersion correction by Grimme et al. with a modified version of the H+ hydrogen bond correction by Korth. Overall, the interaction...... in GAMESS, while the corresponding numbers for PM6-DH+ implemented in MOPAC are 54, 17, 15, and 2. The PM6-D3H+ method as implemented in GAMESS offers an attractive alternative to PM6-DH+ in MOPAC in cases where the LBFGS optimizer must be used and a vibrational analysis is needed, e.g., when computing...... vibrational free energies. While the GAMESS implementation is up to 10 times slower for geometry optimizations of proteins in bulk solvent, compared to MOPAC, it is sufficiently fast to make geometry optimizations of small proteins practically feasible....

  5. A Geometric Correction Method of Plane Image Based on OpenCV

    Directory of Open Access Journals (Sweden)

    Li Xiaopeng

    2014-02-01

    Using OpenCV, a geometric correction method for plane images is presented that works from a single grid image taken with an unknown camera position. The method can remove perspective and lens distortions from an image. It is simple and easy to implement, and its efficiency is high. Experiments indicate that this method has high precision and can be used in domains such as plane measurement.
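
    A minimal sketch of the kind of OpenCV-based perspective correction described above, assuming the four outer grid corners have already been detected; the corner coordinates, output size, and file names are placeholders, and removing lens distortion via cv2.undistort would additionally require calibrated camera parameters.

      import cv2
      import numpy as np

      def correct_perspective(image, grid_corners_px, out_size=(800, 800)):
          """Map the detected grid corners (TL, TR, BR, BL order) onto a rectangle,
          removing the perspective distortion of the imaged plane."""
          w, h = out_size
          src = np.asarray(grid_corners_px, dtype=np.float32)
          dst = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
          H = cv2.getPerspectiveTransform(src, dst)
          return cv2.warpPerspective(image, H, (w, h))

      # img = cv2.imread("grid.png")
      # rectified = correct_perspective(img, [(102, 87), (912, 95), (934, 700), (90, 690)])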

  6. An efficient shutter-less non-uniformity correction method for infrared focal plane arrays

    Science.gov (United States)

    Huang, Xiyan; Sui, Xiubao; Zhao, Yao

    2017-02-01

    The non-uniform response of infrared focal plane array (IRFPA) detectors degrades images with fixed pattern noise. At present, infrared imaging systems commonly use a shutter to block the target radiation while the non-uniformity correction parameters are updated. The use of a shutter causes the image to freeze and inevitably raises problems of system stability and reliability, power consumption, and concealment of the infrared detection. In this paper, we present an efficient shutter-less non-uniformity correction (NUC) method for infrared focal plane arrays. The imaging system uses data acquired under thermostatted conditions to calculate, in real time, the infrared radiation contributed by the shell, and the detector output, with the shell radiation removed, is then corrected by the gain coefficients. This method has been tested in a real infrared imaging system, reaching a high correction level, reducing fixed pattern noise, and adapting to a wide temperature range.
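
    The usual gain/offset arithmetic behind such corrections can be sketched as follows; here the offset term stands for the shell (and other internal) radiation estimated from the calibration data, and all names are illustrative rather than taken from the paper.

      import numpy as np

      def nonuniformity_correct(raw_frame, gain_map, shell_offset):
          """Classic gain/offset NUC: remove the estimated internal (shell)
          contribution per pixel, then equalize the pixel responsivities."""
          corrected = gain_map * (raw_frame.astype(float) - shell_offset)
          return np.clip(corrected, 0.0, None)

      # gain_map: per-pixel gain coefficients from an offline (e.g. two-point)
      # calibration; shell_offset: per-pixel shell radiation estimated in real
      # time from a shell temperature model.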

  7. Methods of correction of carriage of junior schoolchildren by facilities of physical exercises

    Directory of Open Access Journals (Sweden)

    Gagara V.F.

    2012-08-01

    The results of the influence of physical rehabilitation methods on children's bodies are presented. The study included 16 primary-school children with scoliotic changes of the thoracic spine. The complex of physical rehabilitation methods included special corrective and general health-improving exercises, therapeutic gymnastics, and positional correction. Therapeutic gymnastics sessions of 30-45 minutes were conducted 3-4 times per week. An improvement in spinal mobility and in the posture of the schoolchildren was observed; the absolute indices of posture and spinal flexibility approached physiological values. A rehabilitation complex is recommended that includes elements of corrective gymnastics, therapeutic physical culture, positional correction, and massage of the trunk muscles. It is also necessary to adhere to a rational daily regime and diet, to provide working furniture of normative dimensions, and to encourage self-monitoring of posture.

  8. Band extension in digital methods of transfer function determination – signal conditioners asymmetry error corrections

    Directory of Open Access Journals (Sweden)

    Zbigniew Staroszczyk

    2014-12-01

    Full Text Available [b]Abstract[/b]. In the paper, the calibrating method for error correction in transfer function determination with the use of DSP has been proposed. The correction limits/eliminates influence of transfer function input/output signal conditioners on the estimated transfer functions in the investigated object. The method exploits frequency domain conditioning paths descriptor found during training observation made on the known reference object.[b]Keywords[/b]: transfer function, band extension, error correction, phase errors

  9. A new method to make gamma-ray self-absorption correction

    International Nuclear Information System (INIS)

    Tian Dongfeng; Xie Dong; Ho Yukun; Yang Fujia

    2001-01-01

    This paper discusses a new method to directly extract the geometric self-absorption correction from the measurement of the characteristic γ radiation emitted spontaneously by fissile nuclear material. Numerical simulation tests show that this method can extract the purely original information needed for the nondestructive assay of the measured γ-ray spectra, even when the geometric shape of the sample and the materials between sample and detector are not known in advance. (author)

  10. A method of bias correction for maximal reliability with dichotomous measures.

    Science.gov (United States)

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  11. Joint de-blurring and nonuniformity correction method for infrared microscopy imaging

    Science.gov (United States)

    Jara, Anselmo; Torres, Sergio; Machuca, Guillermo; Ramírez, Wagner; Gutiérrez, Pablo A.; Viafora, Laura A.; Godoy, Sebastián E.; Vera, Esteban

    2018-05-01

    In this work, we present a new technique to simultaneously reduce two major degradation artifacts found in mid-wavelength infrared microscopy imagery, namely the inherent focal-plane array nonuniformity noise and the scene defocus presented due to the point spread function of the infrared microscope. We correct both nuisances using a novel, recursive method that combines the constant range nonuniformity correction algorithm with a frame-by-frame deconvolution approach. The ability of the method to jointly compensate for both nonuniformity noise and blur is demonstrated using two different real mid-wavelength infrared microscopic video sequences, which were captured from two microscopic living organisms using a Janos-Sofradir mid-wavelength infrared microscopy setup. The performance of the proposed method is assessed on real and simulated infrared data by computing the root mean-square error and the roughness-laplacian pattern index, which was specifically developed for the present work.

  12. Delayed system control in presence of actuator saturation

    Directory of Open Access Journals (Sweden)

    A. Mahjoub

    2014-09-01

    The paper introduces a new design method for controllers of systems with input delay and actuator saturation, and focuses on how to force the system output to track a reference input that is not necessarily saturation-compatible. We propose a new norm based on the way we quantify tracking performance as a function of the saturation errors found using the same norm. The newly defined norm is related to the signal average power, making it possible to account for the most common reference signals, e.g., step and periodic signals. It is formally shown that, whatever the reference shape and amplitude, the achievable tracking quality is determined by a well-defined reference tracking mismatch error. The latter depends on the reference rate and its compatibility with the actuator saturation constraint. In fact, asymptotic output-reference tracking is achieved in the presence of constraint-compatible step-like references.

  13. A software-based x-ray scatter correction method for breast tomosynthesis

    OpenAIRE

    Jia Feng, Steve Si; Sechopoulos, Ioannis

    2011-01-01

    Purpose: To develop a software-based scatter correction method for digital breast tomosynthesis (DBT) imaging and investigate its impact on the image quality of tomosynthesis reconstructions of both phantoms and patients.

  14. On the transmit field inhomogeneity correction of relaxation‐compensated amide and NOE CEST effects at 7 T

    Science.gov (United States)

    Windschuh, Johannes; Siero, Jeroen C.W.; Zaiss, Moritz; Luijten, Peter R.; Klomp, Dennis W.J.; Hoogduin, Hans

    2017-01-01

    High field MRI is beneficial for chemical exchange saturation transfer (CEST) in terms of high SNR, CNR, and chemical shift dispersion. These advantages may, however, be counter‐balanced by the increased transmit field inhomogeneity normally associated with high field MRI. The relatively high sensitivity of the CEST contrast to B 1 inhomogeneity necessitates the development of correction methods, which is essential for the clinical translation of CEST. In this work, two B 1 correction algorithms for the most studied CEST effects, amide‐CEST and nuclear Overhauser enhancement (NOE), were analyzed. Both methods rely on fitting the multi‐pool Bloch‐McConnell equations to the densely sampled CEST spectra. In the first method, the correction is achieved by using a linear B 1 correction of the calculated amide and NOE CEST effects. The second method uses the Bloch‐McConnell fit parameters and the desired B 1 amplitude to recalculate the CEST spectra, followed by the calculation of B 1‐corrected amide and NOE CEST effects. Both algorithms were systematically studied in Bloch‐McConnell equations and in human data, and compared with the earlier proposed ideal interpolation‐based B 1 correction method. In the low B 1 regime of 0.15–0.50 μT (average power), a simple linear model was sufficient to mitigate B 1 inhomogeneity effects on a par with the interpolation B 1 correction, as demonstrated by a reduced correlation of the CEST contrast with B 1 in both the simulations and the experiments. PMID:28111824

  15. Gluon saturation in a saturated environment

    International Nuclear Information System (INIS)

    Kopeliovich, B. Z.; Potashnikova, I. K.; Schmidt, Ivan

    2011-01-01

    A bootstrap equation for self-quenched gluon shadowing leads to a reduced magnitude of broadening for partons propagating through a nucleus. Saturation of small-x gluons in a nucleus, which has the form of transverse momentum broadening of projectile gluons in pA collisions in the nuclear rest frame, leads to a modification of the parton distribution functions in the beam compared with pp collisions. In nucleus-nucleus collisions all participating nucleons acquire enhanced gluon density at small x, which boosts further the saturation scale. Solution of the reciprocity equations for central collisions of two heavy nuclei demonstrates a significant, up to several times, enhancement of the saturation scale Q_sA^2 in AA compared with pA collisions.

  16. Traveling wave fronts and the transition to saturation

    International Nuclear Information System (INIS)

    Munier, S.; Peschanski, R.

    2004-01-01

    We propose a general method to study the solutions to nonlinear QCD evolution equations, based on a deep analogy with the physics of traveling waves. In particular, we show that the transition to the saturation regime of high energy QCD is identical to the formation of the front of a traveling wave. Within this physical picture, we provide the expressions for the saturation scale and the gluon density profile as a function of the total rapidity and the transverse momentum. The application to the Balitskii-Kovchegov equation for both fixed and running coupling constants confirms the effectiveness of this method

  17. Peculiarities of application the method of autogenic training in the correction of eating behavior

    OpenAIRE

    Shebanova, Vitaliya

    2014-01-01

    The article presents the peculiarities of applying the method of autogenic training in the correction of eating disorders. The stages of corrective work with maladaptive eating behavior are described. The author places emphasis on the rules for independently composing formulas of intention.

  18. Method for measuring multiple scattering corrections between liquid scintillators

    Energy Technology Data Exchange (ETDEWEB)

    Verbeke, J.M., E-mail: verbeke2@llnl.gov; Glenn, A.M., E-mail: glenn22@llnl.gov; Keefer, G.J., E-mail: keefer1@llnl.gov; Wurtz, R.E., E-mail: wurtz1@llnl.gov

    2016-07-21

    A time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.

  19. A Method To Modify/Correct The Performance Of Amplifiers

    Directory of Open Access Journals (Sweden)

    Rohith Krishnan R

    2015-01-01

    Full Text Available The actual response of the amplifier may vary with the replacement of some aged or damaged components, and this method compensates for that problem. Here we use the op-amp fixator as the design tool. The tool helps us to isolate the selected circuit component from the rest of the circuit, adjust its operating point to correct the performance deviations, and modify the circuit without changing its other parts. A method to modify/correct the performance of amplifiers by properly redesigning the circuit is presented in this paper.

  20. Investigation of Dynamic Properties of Water-Saturated Sand by the Results of the Inverse Experiment Technique

    Science.gov (United States)

    Bragov, A. M.; Balandin, Vl. V.; Kotov, V. L.; Balandin, Vl. Vl.

    2018-04-01

    We present new experimental results on the dynamic properties of sand soil obtained with the inverse experiment technique using a measuring rod with a flat front-end face. The method that corrects the shape of the deformation pulse for dispersion during its propagation in the measuring rod is shown to have limited applicability. Estimates of the pulse maximum have been obtained, and the results of comparison of numerical calculations with experimental data are given. Sufficient accuracy in determining the drag force during the quasi-stationary stage of penetration has been established. The parameters of dynamic compressibility and resistance to shear of water-saturated sand have been determined in the course of the experimental-theoretical analysis of the maximum values of the drag force and its values at the quasi-stationary stage of penetration. It has been shown that with almost complete water saturation of sand its shear properties are reduced but remain significant in the practically important range of penetration rates.

  1. Method and apparatus for reconstructing in-cylinder pressure and correcting for signal decay

    Science.gov (United States)

    Huang, Jian

    2013-03-12

    A method comprises steps for reconstructing in-cylinder pressure data from a vibration signal collected from a vibration sensor mounted on an engine component where it can generate a signal with a high signal-to-noise ratio, and correcting the vibration signal for errors introduced by vibration signal charge decay and sensor sensitivity. The correction factors are determined as a function of estimated motoring pressure and the measured vibration signal itself with each of these being associated with the same engine cycle. Accordingly, the method corrects for charge decay and changes in sensor sensitivity responsive to different engine conditions to allow greater accuracy in the reconstructed in-cylinder pressure data. An apparatus is also disclosed for practicing the disclosed method, comprising a vibration sensor, a data acquisition unit for receiving the vibration signal, a computer processing unit for processing the acquired signal and a controller for controlling the engine operation based on the reconstructed in-cylinder pressure.

  2. Correction method for critical extrapolation of control-rods-rising during physical start-up of reactor

    International Nuclear Information System (INIS)

    Zhang Fan; Chen Wenzhen; Yu Lei

    2008-01-01

    During the physical start-up of a nuclear reactor, the curve obtained by lifting the control rods and extrapolating to the critical state is often convex (protruding), which can lead to supercriticality. In this paper, the reason why the curve is convex is analyzed. A correction method is introduced, and calculations are carried out using practical data from a nuclear power plant. The results show that the correction method reverses the convexity of the extrapolation curve, and that the risk of the reactor going supercritical can be reduced by using the extrapolated curve obtained with the correction method during physical start-up of the reactor. (authors)
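
    For context, a common way to extrapolate to criticality during rod withdrawal is the inverse-multiplication (1/M) plot; the sketch below shows that standard approach under the assumption of a constant source, and is not the specific correction proposed by the authors. All numbers are invented placeholders.

```python
import numpy as np

# Hypothetical detector count rates recorded at successive control-rod positions
rod_position = np.array([0.0, 10.0, 20.0, 30.0, 40.0])    # arbitrary units
count_rate   = np.array([100., 130., 180., 270., 520.])   # counts per second

inv_m = count_rate[0] / count_rate   # 1/M relative to the initial subcritical state

# Fit a straight line through the last few points and extrapolate 1/M -> 0
slope, intercept = np.polyfit(rod_position[-3:], inv_m[-3:], 1)
critical_position = -intercept / slope
print(f"estimated critical rod position: {critical_position:.1f}")
```

    When the 1/M curve is convex, such a straight-line extrapolation overestimates the remaining margin to criticality, which is the risk that the correction method in the record is meant to reduce.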

  3. Lipid order, saturation and surface property relationships: a study of human meibum saturation.

    Science.gov (United States)

    Mudgil, Poonam; Borchman, Douglas; Yappert, Marta C; Duran, Diana; Cox, Gregory W; Smith, Ryan J; Bhola, Rahul; Dennis, Gary R; Whitehall, John S

    2013-11-01

    Tear film stability decreases with age; however, the cause(s) of the instability remain speculative. Perhaps the more saturated meibum from infants may contribute to tear film stability. The meibum lipid phase transition temperature and lipid hydrocarbon chain order at physiological temperature (33 °C) decrease with increasing age. It is reasonable that stronger lipid-lipid interactions could stabilize the tear film since these interactions must be broken for tear break up to occur. In this study, meibum from a pool of adult donors was saturated catalytically. The influence of saturation on meibum hydrocarbon chain order was determined by infrared spectroscopy. Meibum is in an anhydrous state in the meibomian glands and on the surface of the eyelid. The influence of saturation on the surface properties of meibum was determined using Langmuir trough technology. Saturation of native human meibum did not change the minimum or maximum values of hydrocarbon chain order, so at temperatures far above or below the phase transition of human meibum, saturation does not play a role in ordering or disordering the lipid hydrocarbon chains. Saturation did increase the phase transition temperature in human meibum by over 20 °C, a relatively high amount. Surface pressure-area studies showing the late take off and higher maximum surface pressure of saturated meibum compared to native meibum suggest that the saturated meibum film is quite molecularly ordered (stiff molecular arrangement) and elastic (molecules are able to rearrange during compression and expansion) compared with native meibum films, which are more fluid, agreeing with the infrared spectroscopic results of this study. In saturated meibum, the formation of compacted ordered islands of lipids above the surfactant layer would be expected to decrease the rate of evaporation compared to fluid and more loosely packed native meibum. Higher surface pressure observed with films of saturated meibum compared to native meibum

  4. Stability and stabilization of linear systems with saturating actuators

    CERN Document Server

    Tarbouriech, Sophie; Gomes da Silva Jr, João Manoel; Queinnec, Isabelle

    2011-01-01

    Gives the reader an in-depth understanding of the phenomena caused by the more-or-less ubiquitous problem of actuator saturation. Proposes methods and algorithms designed to avoid, manage or overcome the effects of actuator saturation. Uses a state-space approach to ensure local and global stability of the systems considered. Compilation of fifteen years' worth of research results.

  5. Application of pulse pile-up correction spectrum to the library least-squares method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang Hoon [Kyungpook National Univ., Daegu (Korea, Republic of)

    2006-12-15

    The Monte Carlo simulation code CEARPPU has been developed and updated to provide pulse pile-up correction spectra for high counting rate cases. For neutron activation analysis, CEARPPU correction spectra were used in the library least-squares method to give better isotopic activity results than conventional library least-squares fitting with uncorrected spectra.
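
    A library least-squares fit expresses the measured spectrum as a non-negative linear combination of library spectra; the sketch below illustrates that general idea with synthetic placeholder spectra and does not reproduce the CEARPPU pile-up-corrected library.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic library: each column is the spectrum of one isotope (counts per unit activity)
channels = np.arange(256)
lib = np.column_stack([
    np.exp(-0.5 * ((channels - 60) / 5.0) ** 2),    # isotope A photopeak
    np.exp(-0.5 * ((channels - 150) / 6.0) ** 2),   # isotope B photopeak
])

true_activity = np.array([3.0, 1.5])
measured = lib @ true_activity + np.random.default_rng(0).poisson(0.2, channels.size)

# Library least-squares with a non-negativity constraint on the fitted activities
activity, residual = nnls(lib, measured)
print("fitted activities:", activity)
```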

  6. Stroke saturation on a MEMS deformable mirror for woofer-tweeter adaptive optics.

    Science.gov (United States)

    Morzinski, Katie; Macintosh, Bruce; Gavel, Donald; Dillon, Daren

    2009-03-30

    High-contrast imaging of extrasolar planet candidates around a main-sequence star has recently been realized from the ground using current adaptive optics (AO) systems. Advancing such observations will be a task for the Gemini Planet Imager, an upcoming "extreme" AO instrument. High-order "tweeter" and low-order "woofer" deformable mirrors (DMs) will supply a >90%-Strehl correction, a specialized coronagraph will suppress the stellar flux, and any planets can then be imaged in the "dark hole" region. Residual wavefront error scatters light into the DM-controlled dark hole, making planets difficult to image above the noise. It is crucial in this regard that the high-density tweeter, a micro-electrical mechanical systems (MEMS) DM, have sufficient stroke to deform to the shapes required by atmospheric turbulence. Laboratory experiments were conducted to determine the rate and circumstance of saturation, i.e. stroke insufficiency. A 1024-actuator 1.5-microm-stroke MEMS device was empirically tested with software Kolmogorov-turbulence screens of r0 = 10-15 cm. The MEMS when solitary suffered saturation approximately 4% of the time. Simulating a woofer DM with approximately 5-10 actuators across a 5-m primary mitigated MEMS saturation occurrence to a fraction of a percent. While no adjacent actuators were saturated at opposing positions, mid-to-high-spatial-frequency stroke did saturate more frequently than expected, implying that correlations through the influence functions are important. Analytical models underpredict the stroke requirements, so empirical studies are important.

  7. Transient performances analysis of wind turbine system with induction generator including flux saturation and skin effect

    DEFF Research Database (Denmark)

    Li, H.; Zhao, B.; Han, L.

    2010-01-01

    In order to analyze correctly the effect of different models for induction generators on the transient performances of large wind power generation, wind turbine driven squirrel cage induction generator (SCIG) models taking into account both main and leakage flux saturation and skin effect were...

  8. Comparatively Studied Color Correction Methods for Color Calibration of Automated Microscopy Complex of Biomedical Specimens

    Directory of Open Access Journals (Sweden)

    T. A. Kravtsova

    2016-01-01

    Full Text Available The paper considers the task of generating the requirements for and creating a calibration target for automated microscopy systems (AMS) of biomedical specimens, to provide the invariance of algorithms and software to the hardware configuration. The required number of color fields of the calibration target and their color coordinates are mostly determined by the color correction method, for which the coefficients of the equations are estimated during the calibration process. The paper analyses existing color calibration techniques for digital imaging systems using an optical microscope and shows that there is a lack of published comparative studies demonstrating a particularly useful color correction method for microscopic images. A comparative study of ten image color correction methods in RGB space, using polynomials and combinations of color coordinates of different orders, was carried out. The method of conditioned least squares was applied to estimate the coefficients in the color correction equations, using captured images of 217 color fields of the calibration target Kodak Q60-E3. The regularization parameter in this method was chosen experimentally. It was demonstrated that the best color correction quality characteristics are provided by the method that uses a combination of color coordinates of the 3rd order. A study of the influence of the number and the set of color fields included in the calibration target on the color correction quality for microscopic images was performed. Six training sets containing 30, 35, 40, 50, 60 and 80 color fields, and a test set of 47 color fields not included in any of the training sets, were formed. It was found that the training set of 60 color fields minimizes the color correction error values for both operating modes of the digital camera: using "default" color settings and with automatic white balance. At the same time it was established that the use of color fields from the widely used Kodak Q60-E3 target does not
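
    A polynomial color correction fitted with a regularized (ridge) least-squares estimate, in the spirit of the methods compared above, can be sketched as follows; the training colors are random placeholders rather than the Kodak Q60-E3 fields, a second-order basis is used instead of the third-order combinations favoured in the study, and the regularization parameter is arbitrary.

```python
import numpy as np

def poly_features(rgb):
    """Second-order polynomial expansion of RGB values."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    ones = np.ones_like(r)
    return np.column_stack([ones, r, g, b, r*g, r*b, g*b, r*r, g*g, b*b])

def fit_color_correction(measured_rgb, reference_rgb, lam=1e-3):
    """Ridge (regularized least-squares) fit of the correction coefficients."""
    X = poly_features(measured_rgb)
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ reference_rgb)   # coefficients, shape (10, 3)

def apply_color_correction(rgb, coeffs):
    return poly_features(rgb) @ coeffs

# Hypothetical training data: measured camera responses vs. known target colors
rng = np.random.default_rng(1)
reference = rng.uniform(0, 1, (60, 3))
measured = reference ** 1.1 + rng.normal(0, 0.01, (60, 3))   # simulated camera distortion

coeffs = fit_color_correction(measured, reference)
print(np.abs(apply_color_correction(measured, coeffs) - reference).mean())
```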

  9. Premature saturation in backpropagation networks: Mechanism and necessary conditions

    International Nuclear Information System (INIS)

    Vitela, J.E.; Reifman, J.

    1997-01-01

    The mechanism that gives rise to the phenomenon of premature saturation of the output units of feedforward multilayer neural networks during training with the standard backpropagation algorithm is described. The entire process of premature saturation is characterized by three distinct stages and it is concluded that the momentum term plays the leading role in the occurrence of the phenomenon. The necessary conditions for the occurrence of premature saturation are presented and a new method is proposed, based on these conditions, that eliminates the occurrence of the phenomenon. Validity of the conditions and the proposed method are illustrated through simulation results. Three case studies are presented. The first two come from a training session for classification of three component failures in a nuclear power plant. The last case comes from a training session for classification of welded fuel elements

  10. Scatter measurement and correction method for cone-beam CT based on single grating scan

    Science.gov (United States)

    Huang, Kuidong; Shi, Wenlong; Wang, Xinyu; Dong, Yin; Chang, Taoqi; Zhang, Hua; Zhang, Dinghua

    2017-06-01

    In cone-beam computed tomography (CBCT) systems based on flat-panel detector imaging, the presence of scatter significantly reduces the quality of slices. Based on the concept of collimation, this paper presents a scatter measurement and correction method based on a single grating scan. First, according to the characteristics of CBCT imaging, the scan method using a single grating and the design requirements of the grating are analyzed and established. Second, by analyzing the composition of object projection images and object-and-grating projection images, a processing method for the scatter image at a single projection angle is proposed. In addition, to avoid an additional scan, this paper proposes an angle interpolation method for scatter images to reduce scan cost. Finally, the experimental results show that the scatter images obtained by this method are accurate and reliable, and the effect of scatter correction is obvious. When the additional object-and-grating projection images are collected and interpolated at intervals of 30 deg, the scatter correction error of slices can still be controlled within 3%.
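
    The angle-interpolation step can be illustrated with a minimal sketch; the projection sizes, angles, and scatter maps below are placeholders, and the grating-based scatter estimation itself is not modelled.

```python
import numpy as np

def interpolate_scatter(meas_angles, scatter_maps, all_angles):
    """Linearly interpolate per-pixel scatter images measured at sparse angles
    (e.g. every 30 deg) onto the full set of projection angles."""
    scatter_maps = np.asarray(scatter_maps)                  # (n_meas, H, W)
    flat = scatter_maps.reshape(len(meas_angles), -1)
    out = np.empty((len(all_angles), flat.shape[1]))
    for j in range(flat.shape[1]):
        out[:, j] = np.interp(all_angles, meas_angles, flat[:, j])
    return out.reshape(len(all_angles), *scatter_maps.shape[1:])

# Hypothetical data: scatter estimated at 0, 30, ..., 330 deg for 16x16 projections
meas_angles = np.arange(0, 360, 30)
scatter = [np.full((16, 16), 0.1 * (1 + np.cos(np.radians(a)))) for a in meas_angles]
all_angles = np.arange(0, 360, 1)

scatter_full = interpolate_scatter(meas_angles, scatter, all_angles)
# corrected projection = measured projection - interpolated scatter at that angle
```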

  11. Evaluation of a method for correction of scatter radiation in thorax cone beam CT

    International Nuclear Information System (INIS)

    Rinkel, J.; Dinten, J.M.; Esteve, F.

    2004-01-01

    Purpose: Cone beam CT (CBCT) enables three-dimensional imaging with isotropic resolution. X-ray scatter estimation is a big challenge for quantitative CBCT imaging of the thorax: the scatter level is significantly higher on cone beam systems compared to collimated fan beam systems. The effects of this scattered radiation are cupping artefacts, streaks, and quantification inaccuracies. The conventional beam-stop approach to scatter estimation can be used for CBCT but leads to a significant increase in dose and acquisition time. An original scatter management process requiring no supplementary acquisition has been developed at CEA-LETI. Methods and Materials: This Analytical Plus Indexing-based method (API) of scatter correction in CBCT is based on scatter calibration through offline acquisitions with beam stops on Lucite plates, combined with an analytical transformation derived from physical equations. This approach has been applied with success in bone densitometry and mammography. To evaluate this method in CBCT, acquisitions from a thorax phantom with and without beam stops have been performed. To compare different scatter correction approaches, the Feldkamp algorithm has been applied to raw data corrected for scatter by the API and beam-stop approaches. Results: The API method provides results in good agreement with the beam-stop array approach, suppressing the cupping artefact. The influence of the scatter correction method on the noise in the reconstructed images has also been evaluated. Conclusion: The results indicate that the API method is effective for quantitative CBCT imaging of the thorax. Compared to a beam-stop array method it needs a lower x-ray dose and shortens acquisition time. (authors)

  12. Effects of projection and background correction method upon calculation of right ventricular ejection fraction using first-pass radionuclide angiography

    International Nuclear Information System (INIS)

    Caplin, J.L.; Flatman, W.D.; Dymond, D.S.

    1985-01-01

    There is no consensus as to the best projection or correction method for first-pass radionuclide studies of the right ventricle. We assessed the effects of two commonly used projections, 30 degrees right anterior oblique and anterior-posterior, on the calculation of right ventricular ejection fraction. In addition, two background correction methods were assessed: planar background correction to account for scatter, and right atrial correction to account for right atrio-ventricular overlap. Two first-pass radionuclide angiograms were performed in 19 subjects, one in each projection, using gold-195m (half-life 30.5 seconds), and each study was analysed using the two methods of correction. Right ventricular ejection fraction was highest using the right anterior oblique projection with right atrial correction, 35.6 +/- 12.5% (mean +/- SD), and lowest when using the anterior-posterior projection with planar background correction, 26.2 +/- 11% (p less than 0.001). The study design allowed assessment of the effects of correction method and projection independently. Correction method appeared to have relatively little effect on right ventricular ejection fraction. Using right atrial correction, the correlation coefficient (r) between projections was 0.92, and for planar background correction r = 0.76, both p less than 0.001. However, right ventricular ejection fraction was far more dependent upon projection. When the anterior-posterior projection was used, the calculated right ventricular ejection fraction was much more dependent on correction method (r = 0.65, p = not significant) than when using the right anterior oblique projection (r = 0.85, p less than 0.001)
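
    The background-corrected ejection fraction that underlies such comparisons can be written as a one-line calculation; the counts below are invented numbers, and the specific right-atrial correction used by the authors is not reproduced.

```python
def ejection_fraction(ed_counts, es_counts, background):
    """Right ventricular ejection fraction from background-corrected
    end-diastolic (ED) and end-systolic (ES) counts."""
    ed = ed_counts - background
    es = es_counts - background
    return (ed - es) / ed

# Hypothetical first-pass counts within the right-ventricular region of interest
print(f"RVEF = {ejection_fraction(12000, 8500, 2000):.2f}")   # -> 0.35
```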

  13. TAE Saturation of Alpha Particle Driven Instability in TFTR

    International Nuclear Information System (INIS)

    Berk, H.L.; Chen, Y.; Gorelenkov, N.N.; White, R.B.

    1998-01-01

    A nonlinear theory of kinetic instabilities near threshold [H.L. Berk, et al., Plasma Phys. Rep. 23, (1997) 842] is applied to calculate the saturation level of Toroidicity-induced Alfvén Eigenmodes (TAE) and compared with the predictions of (delta)f method calculations [Y. Chen, Ph.D. Thesis, Princeton University, 1998]. Good agreement is observed between the predictions of both methods, and the predicted saturation levels are comparable with experimentally measured amplitudes of the TAE oscillations in TFTR [D.J. Grove and D.M. Meade, Nucl. Fusion 25, (1985) 1167]

  14. An improved bias correction method of daily rainfall data using a sliding window technique for climate change impact assessment

    Science.gov (United States)

    Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.

    2018-01-01

    Regional climate models (RCMs) are used to downscale the coarse resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from the observed climatological data, and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods, and is tested on six watersheds in different climatic zones of India for assessing the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data were used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed for evaluation of the different bias correction methods. The analysis suggested that the proposed method effectively corrects the daily bias in rainfall compared to using monthly factors. Methods such as local intensity scaling, modified power transformation and distribution mapping, which adjusted the wet-day frequencies, performed better than the other methods, which did not consider adjustment of wet-day frequencies. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of the observed data with NSE values above 0.81 over most parts of India. Hydrological
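
    A sliding-window variant of distribution (quantile) mapping, in the spirit of the approach described above, can be sketched as follows; the window length, the synthetic data, and the empirical-quantile mapping are simplifying assumptions rather than the exact procedure of the study.

```python
import numpy as np

def quantile_map(model, obs, values):
    """Empirical quantile mapping: map 'values' from the model distribution
    onto the observed distribution."""
    q = np.linspace(0, 1, 101)
    return np.interp(values, np.quantile(model, q), np.quantile(obs, q))

def sliding_window_bias_correction(model_daily, obs_daily, to_correct, half_window=15):
    """Correct each calendar day using statistics pooled from a +/- half_window
    day window around it (365-day years assumed for simplicity)."""
    corrected = np.empty_like(to_correct, dtype=float)
    doy = np.arange(to_correct.size) % 365
    model_doy = np.arange(model_daily.size) % 365
    obs_doy = np.arange(obs_daily.size) % 365
    for d in range(365):
        # circular day-of-year distance to the target day d
        dist = np.abs((np.arange(365) - d + 182) % 365 - 182)
        window_days = np.where(dist <= half_window)[0]
        m = model_daily[np.isin(model_doy, window_days)]
        o = obs_daily[np.isin(obs_doy, window_days)]
        sel = doy == d
        corrected[sel] = quantile_map(m, o, to_correct[sel])
    return corrected

# Hypothetical 20-year daily series (mm/day): biased model vs. observations
rng = np.random.default_rng(0)
obs = rng.gamma(0.6, 8.0, 20 * 365)
model = rng.gamma(0.5, 12.0, 20 * 365)
corrected = sliding_window_bias_correction(model, obs, model)
```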

  15. New method in obtaining correction factor of power confirming

    International Nuclear Information System (INIS)

    Deng Yongjun; Li Rundong; Liu Yongkang; Zhou Wei

    2010-01-01

    The Westcott theory is the most widely used method in reactor power calibration and is particularly suited to research reactors. However, this method is cumbersome because it requires many correction parameters that rely on empirical formulas specific to the reactor type. In the present work, the incidence coefficient between foil activity and reactor power was obtained by Monte Carlo calculation, carried out with a precise description of the reactor core and the foil arrangement positions in the MCNP input card. The reactor power was thus determined from the core neutron fluence profile and the activity of a foil placed at the position used for normalization. This new method is simpler, more flexible and more accurate than the Westcott theory. In this paper, the theoretical results for SPRR-300 obtained by the new method were compared with the experimental results, which verified the feasibility of this new method. (authors)

  16. pH-metric solubility. 2: correlation between the acid-base titration and the saturation shake-flask solubility-pH methods.

    Science.gov (United States)

    Avdeef, A; Berger, C M; Brownell, C

    2000-01-01

    The objective of this study was to compare the results of a normal saturation shake-flask method to a new potentiometric acid-base titration method for determining the intrinsic solubility and the solubility-pH profiles of ionizable molecules, and to report the solubility constants determined by the latter technique. The solubility-pH profiles of twelve generic drugs (atenolol, diclofenac.Na, famotidine, flurbiprofen, furosemide, hydrochlorothiazide, ibuprofen, ketoprofen, labetolol.HCl, naproxen, phenytoin, and propranolol.HCl), with solubilities spanning over six orders of magnitude, were determined both by the new pH-metric method and by a traditional approach (24 hr shaking of saturated solutions, followed by filtration, then HPLC assaying with UV detection). The 212 separate saturation shake-flask solubility measurements and those derived from 65 potentiometric titrations agreed well. The analysis produced the correlation equation: log(1/S)titration = -0.063(+/- 0.032) + 1.025(+/- 0.011) log(1/S)shake-flask, s = 0.20, r2 = 0.978. The potentiometrically-derived intrinsic solubilities of the drugs were: atenolol 13.5 mg/mL, diclofenac.Na 0.82 microg/mL, famotidine 1.1 mg/ mL, flurbiprofen 10.6 microg/mL, furosemide 5.9 microg/mL, hydrochlorothiazide 0.70 mg/mL, ibuprofen 49 microg/mL, ketoprofen 118 microg/mL, labetolol.HCl 128 microg/mL, naproxen 14 microg/mL, phenytoin 19 microg/mL, and propranolol.HCl 70 microg/mL. The new potentiometric method was shown to be reliable for determining the solubility-pH profiles of uncharged ionizable drug substances. Its speed compared to conventional equilibrium measurements, its sound theoretical basis, its ability to generate the full solubility-pH profile from a single titration, and its dynamic range (currently estimated to be seven orders of magnitude) make the new pH-metric method an attractive addition to traditional approaches used by preformulation and development scientists. It may be useful even to discovery
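
    For a monoprotic weak acid, the solubility-pH profile that both methods probe follows the Henderson-Hasselbalch relation; the sketch below uses the intrinsic solubility of naproxen reported above together with an assumed pKa value (the pKa is not given in the record).

```python
def acid_solubility(ph, intrinsic_solubility, pka):
    """Total solubility of a monoprotic weak acid:
       S(pH) = S0 * (1 + 10**(pH - pKa)),
    valid below the pH at which the salt solubility limit is reached."""
    return intrinsic_solubility * (1.0 + 10.0 ** (ph - pka))

s0_naproxen = 14e-3    # mg/mL, intrinsic solubility from the record (14 microg/mL)
pka_assumed = 4.2      # assumed value, for illustration only

for ph in (2.0, 5.0, 7.4):
    print(f"pH {ph}: S = {acid_solubility(ph, s0_naproxen, pka_assumed):.3g} mg/mL")
```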

  17. Characterizing the marker-dye correction for Gafchromic(®) EBT2 film: a comparison of three analysis methods.

    Science.gov (United States)

    McCaw, Travis J; Micka, John A; Dewerd, Larry A

    2011-10-01

    Gafchromic(®) EBT2 film has a yellow marker dye incorporated into the active layer of the film that can be used to correct the film response for small variations in thickness. This work characterizes the effect of the marker-dye correction on the uniformity and uncertainty of dose measurements with EBT2 film. The effect of variations in time postexposure on the uniformity of EBT2 is also investigated. EBT2 films were used to measure the flatness of a (60)Co field to provide a high-spatial resolution evaluation of the film uniformity. As a reference, the flatness of the (60)Co field was also measured with Kodak EDR2 films. The EBT2 films were digitized with a flatbed document scanner 24, 48, and 72 h postexposure, and the images were analyzed using three methods: (1) the manufacturer-recommended marker-dye correction, (2) an in-house marker-dye correction, and (3) a net optical density (OD) measurement in the red color channel. The field flatness was calculated from orthogonal profiles through the center of the field using each analysis method, and the results were compared with the EDR2 measurements. Uncertainty was propagated through a dose calculation for each analysis method. The change in the measured field flatness for increasing times postexposure was also determined. Both marker-dye correction methods improved the field flatness measured with EBT2 film relative to the net OD method, with a maximum improvement of 1% using the manufacturer-recommended correction. However, the manufacturer-recommended correction also resulted in a dose uncertainty an order of magnitude greater than the other two methods. The in-house marker-dye correction lowered the dose uncertainty relative to the net OD method. The measured field flatness did not exhibit any unidirectional change with increasing time postexposure and showed a maximum change of 0.3%. The marker dye in EBT2 can be used to improve the response uniformity of the film. Depending on the film analysis method used
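
    The net optical density analysis (the third method above) is straightforward to express; the pixel values below are placeholders and the marker-dye corrections are not reproduced here.

```python
import numpy as np

def net_optical_density(exposed, unexposed, background=0.0):
    """Net OD in the red channel: OD = -log10(I/I0), netOD = OD_exposed - OD_unexposed,
    which reduces to log10(I_unexposed / I_exposed) for the same film region."""
    exposed = np.asarray(exposed, dtype=float) - background
    unexposed = np.asarray(unexposed, dtype=float) - background
    return np.log10(unexposed / exposed)

# Hypothetical 16-bit scanner values for the same film region before/after irradiation
print(net_optical_density(exposed=30000, unexposed=45000))   # ~0.176
```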

  18. Law of nonlinear flow in saturated clays and radial consolidation

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Using an equivalent concept of flow in porous media, it was derived that the average pore radius of clay is on the micro scale, ranging from 0.01 to 0.1 micron. There is good agreement between the derived results and the test results. Results of experiments show that flow in the micro-scale pores of saturated clays follows a law of nonlinear flow. Theoretical analyses demonstrate that the interaction of solid-liquid interfaces varies inversely with permeability or pore radius. This interaction is an important reason why nonlinear flow in saturated clays occurs. An exact mathematical model was presented for nonlinear flow in the micro-scale pores of saturated clays. The dimensions and physical meanings of its parameters are definite. A new law of nonlinear flow in saturated clays was established. It can describe the characteristics of the flow curve over the whole process of nonlinear flow, from low hydraulic gradient to high. Darcy's law is a special case of the new law. A mathematical model was presented for consolidation with nonlinear flow in the radial direction in saturated clays at constant rate, based on the new law of nonlinear flow. Equations of average mass conservation and of the moving boundary, and formulas for the excess pore pressure distribution and the average degree of consolidation for nonlinear flow in saturated clay, were derived by using the idea of a viscous boundary layer, a steady-state instead of transient-state method, and integration of the equation. Laws of the excess pore pressure distribution and of the change of the average degree of consolidation with time were obtained. Results show that the velocity of the moving boundary decreases because of the nonlinear flow in saturated clay. The results can provide geological and geotechnical engineering of saturated clay with new scientific bases. Calculations of the average degree of consolidation for Darcy flow are a special case of those for nonlinear flow.

  19. Attenuation correction with region growing method used in the positron emission mammography imaging system

    Science.gov (United States)

    Gu, Xiao-Yue; Li, Lin; Yin, Peng-Fei; Yun, Ming-Kai; Chai, Pei; Huang, Xian-Chao; Sun, Xiao-Li; Wei, Long

    2015-10-01

    The Positron Emission Mammography imaging system (PEMi) provides a novel nuclear diagnosis method dedicated to breast imaging. With a better resolution than whole-body PET, PEMi can detect millimeter-sized breast tumors. To address the requirement of semi-quantitative analysis with a radiotracer concentration map of the breast, a new attenuation correction method based on a three-dimensional seeded region growing image segmentation (3DSRG-AC) method has been developed. The method gives a 3D connected region as the segmentation result instead of image slices. The continuity property of the segmentation result makes this new method robust against activity variation of breast tissues. The choice of threshold value is the key step of the segmentation method. The first valley in the grey-level histogram of the reconstructed image is set as the lower threshold, which works well in clinical application. Results show that attenuation correction for PEMi improves the image quality and the quantitative accuracy of radioactivity distribution determination. Attenuation correction also improves the probability of detecting small and early breast tumors. Supported by Knowledge Innovation Project of The Chinese Academy of Sciences (KJCX2-EW-N06)
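
    A histogram-valley threshold combined with a seeded 3D region growing can be sketched in a few lines; the volume below is synthetic, the valley detector is a simplified stand-in for the "first valley" rule, and 26-connectivity and the seed choice are assumptions rather than details from the record.

```python
import numpy as np
from scipy import ndimage

def valley_threshold(volume, bins=64):
    """Simplified valley rule: smooth the grey-level histogram and take its
    minimum between the background peak (lower half of the grey-level range)
    and the object peak (upper half)."""
    hist, edges = np.histogram(volume, bins=bins)
    hist = np.convolve(hist, np.ones(5) / 5.0, mode="same")
    lo_peak = int(np.argmax(hist[: bins // 2]))
    hi_peak = bins // 2 + int(np.argmax(hist[bins // 2:]))
    valley = lo_peak + int(np.argmin(hist[lo_peak:hi_peak + 1]))
    return 0.5 * (edges[valley] + edges[valley + 1])

def seeded_region_grow(volume, seed, threshold):
    """3D seeded region growing: keep the 26-connected above-threshold
    component that contains the seed voxel."""
    mask = volume >= threshold
    labels, _ = ndimage.label(mask, structure=np.ones((3, 3, 3)))
    return labels == labels[seed]

# Synthetic volume: noisy background plus a brighter connected object
rng = np.random.default_rng(0)
vol = rng.normal(10, 2, (32, 32, 32))
vol[8:24, 8:24, 8:24] += 30
thr = valley_threshold(vol)
region = seeded_region_grow(vol, seed=(16, 16, 16), threshold=thr)
print(thr, region.sum())
```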

  20. A novel energy conversion based method for velocity correction in molecular dynamics simulations

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Hanhui [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China); Collaborative Innovation Center of Advanced Aero-Engine, Hangzhou 310027 (China); Liu, Ningning [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China); Ku, Xiaoke, E-mail: xiaokeku@zju.edu.cn [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China); Fan, Jianren [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China)

    2017-05-01

    Molecular dynamics (MD) simulation has become an important tool for studying micro- or nano-scale dynamics and the statistical properties of fluids and solids. In MD simulations, there are mainly two approaches: equilibrium and non-equilibrium molecular dynamics (EMD and NEMD). In this paper, a new energy conversion based correction (ECBC) method for MD is developed. Unlike the traditional systematic correction based on macroscopic parameters, the ECBC method is developed strictly based on the physical interaction processes between the pair of molecules or atoms. The developed ECBC method can apply to EMD and NEMD directly. While using MD with this method, the difference between the EMD and NEMD is eliminated, and no macroscopic parameters such as external imposed potentials or coefficients are needed. With this method, many limits of using MD are lifted. The application scope of MD is greatly extended.

  1. A novel energy conversion based method for velocity correction in molecular dynamics simulations

    International Nuclear Information System (INIS)

    Jin, Hanhui; Liu, Ningning; Ku, Xiaoke; Fan, Jianren

    2017-01-01

    Molecular dynamics (MD) simulation has become an important tool for studying micro- or nano-scale dynamics and the statistical properties of fluids and solids. In MD simulations, there are mainly two approaches: equilibrium and non-equilibrium molecular dynamics (EMD and NEMD). In this paper, a new energy conversion based correction (ECBC) method for MD is developed. Unlike the traditional systematic correction based on macroscopic parameters, the ECBC method is developed strictly based on the physical interaction processes between the pair of molecules or atoms. The developed ECBC method can apply to EMD and NEMD directly. While using MD with this method, the difference between the EMD and NEMD is eliminated, and no macroscopic parameters such as external imposed potentials or coefficients are needed. With this method, many limits of using MD are lifted. The application scope of MD is greatly extended.

  2. Mechanics of non-saturated soils

    International Nuclear Information System (INIS)

    Coussy, O.; Fleureau, J.M.

    2002-01-01

    This book presents the different ways to approach the mechanics of non-saturated soils, from the physico-chemical aspect to the mechanical aspect, from experiment to theoretical modeling, from the laboratory to the engineering structure, and from the microscopic scale to the macroscopic one. Contents: water and its representation; experimental bases of the behaviour of non-saturated soils; transfer laws in a non-saturated environment; energy approach of the behaviour of non-saturated soils; homogenization for non-saturated soils; plasticity and hysteresis; dams and backfilling; engineered barriers. (J.S.)

  3. Non perturbative method for radiative corrections applied to lepton-proton scattering

    International Nuclear Information System (INIS)

    Chahine, C.

    1979-01-01

    We present a new, non-perturbative method to effect radiative corrections in lepton (electron or muon)-nucleon scattering, useful for existing or planned experiments. This method relies on a spectral function derived in a previous paper, which takes into account both real soft photons and virtual ones and hence is free from infrared divergence. Hard effects are computed perturbatively and then included in the form of 'hard factors' in the non-perturbative soft formulas. Practical computations are effected using the Gauss-Jacobi integration method, which reduces the relevant integrals to a rapidly converging sequence. For the simple problem of the radiative quasi-elastic peak, we get an exponentiated form conjectured by Schwinger and found by Yennie, Frautschi and Suura. We also compare our results with the peaking approximation, which we derive independently, and with the exact one-photon emission formula of Mo and Tsai. Applications of our method to the continuous spectrum include the radiative tail of the Δ33 resonance in e + p scattering and radiative corrections to the Feynman scale-invariant F2 structure function for the kinematics of two recent high energy muon experiments

  4. Use of digital computers for correction of gamma method and neutron-gamma method indications

    International Nuclear Information System (INIS)

    Lakhnyuk, V.M.

    1978-01-01

    The program for the NAIRI-S computer is described, which is intended for accounting for and eliminating the effect of side processes when interpreting gamma and neutron-gamma logging indications. With slight modifications, the program can also be used as a mathematical basis for logging diagram standardization by the method of multidimensional regression analysis and for estimation of rock reservoir properties

  5. A distortion correction method for image intensifier and electronic portal images used in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Ioannidis, G T; Geramani, K N; Zamboglou, N [Strahlenklinik, Stadtische Kliniken Offenbach, Offenbach (Germany); Uzunoglu, N [Department of Electrical and Computer Engineering, National Technical University of Athens, Athens (Greece)

    1999-12-31

    At most radiation therapy departments, a simulator and an on-line verification system for the treated volume, in the form of an electronic portal imaging device (EPID), are available. Networking and digital handling (saving, archiving, etc.) of the image information is a necessity in image processing procedures in order to evaluate verification and simulation recordings at the computer screen. Distortion correction is, on the other hand, a prerequisite for quantitative comparison of both image modalities. Another limiting factor for quantitative assertions is the fact that the irradiation fields in radiotherapy are usually bigger than the field of view of an image intensifier. Several segments of the irradiation field must therefore be acquired. Using pattern recognition techniques these segments can be composed into a single image. In this paper a distortion correction method will be presented. The method is based upon a well-defined grid which is embedded on the image during the registration process. The video signal from the image intensifier is acquired and processed. The grid is then recognised using image processing techniques. Ideally, if all grid points are recognised, various methods can be applied in order to correct the distortion. But in practice this is not the case. As a consequence of overlapping structures (bones etc.), not all of the grid points can be recognised. Mathematical models from graph theory are applied in order to reconstruct the whole grid. The deviation of the grid point positions from their nominal values is then used to calculate correction coefficients. This method (well-defined grid, grid recognition, correction factors) can also be applied to verification images from the EPID or to other image modalities, and therefore a quantitative comparison in radiation treatment is possible. The distortion correction method and its application to simulator images will be presented. (authors)
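
    One common way to turn detected-versus-nominal grid point positions into a correction is to fit a low-order polynomial warp and resample the image; the sketch below assumes the grid points have already been detected and matched, which is the harder part addressed by the graph-theoretic reconstruction in the record, and the synthetic distortion is purely illustrative.

```python
import numpy as np
from scipy import ndimage

def fit_poly_warp(ideal_xy, detected_xy):
    """Least-squares fit of a second-order 2D polynomial mapping ideal
    (undistorted) coordinates to detected (distorted) coordinates."""
    x, y = ideal_xy[:, 0], ideal_xy[:, 1]
    basis = np.column_stack([np.ones_like(x), x, y, x * y, x * x, y * y])
    coeff, *_ = np.linalg.lstsq(basis, detected_xy, rcond=None)
    return coeff                                    # shape (6, 2)

def undistort(image, coeff):
    """Resample the distorted image onto the ideal (undistorted) pixel grid."""
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    x, y = xx.ravel().astype(float), yy.ravel().astype(float)
    basis = np.column_stack([np.ones_like(x), x, y, x * y, x * x, y * y])
    src = basis @ coeff                             # distorted-image position of each ideal pixel
    return ndimage.map_coordinates(image, [src[:, 1], src[:, 0]], order=1).reshape(image.shape)

# Hypothetical matched grid points (ideal vs. detected) from the grid-recognition step
ideal = np.array([[10., 10.], [50., 10.], [10., 50.], [50., 50.],
                  [30., 30.], [30., 10.], [10., 30.]])
detected = ideal + 0.002 * (ideal - 30.0) ** 2      # mild synthetic distortion
coeff = fit_poly_warp(ideal, detected)
```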

  6. Modular correction method of bending elastic modulus based on sliding behavior of contact point

    International Nuclear Information System (INIS)

    Ma, Zhichao; Zhao, Hongwei; Zhang, Qixun; Liu, Changyi

    2015-01-01

    During the three-point bending test, sliding of the contact point between the specimen and the supports was observed; this sliding behavior was verified to affect the measurements of both deflection and span length, which directly affect the calculation of the bending elastic modulus. Based on the Hertz formula to calculate the elastic contact deformation and a theoretical calculation of the sliding behavior of the contact point, a theoretical model to precisely describe the deflection and span length as a function of bending load was established. Moreover, a modular correction method for the bending elastic modulus was proposed. Via the comparison between the corrected elastic modulus of three materials (H63 copper–zinc alloy, AZ31B magnesium alloy and 2026 aluminum alloy) and the standard modulus obtained from standard uniaxial tensile tests, the universal feasibility of the proposed correction method was verified. Also, the ratio of corrected to raw elastic modulus showed a monotonically decreasing tendency as the raw elastic modulus of the materials increased. (technical note)

  7. A new method of body habitus correction for total body potassium measurements

    International Nuclear Information System (INIS)

    O'Hehir, S; Green, S; Beddoe, A H

    2006-01-01

    This paper describes an accurate and time-efficient method for the determination of total body potassium via a combination of measurements in the Birmingham whole body counter and the use of the Monte Carlo n-particle (MCNP) simulation code. In developing this method, MCNP has also been used to derive values for some components of the total measurement uncertainty which are difficult to quantify experimentally. A method is proposed for MCNP-assessed body habitus corrections based on a simple generic anthropomorphic model, scaled for individual height and weight. The use of this model increases patient comfort by reducing the need for comprehensive anthropomorphic measurements. The analysis shows that the total uncertainty in potassium weight determination by this whole body counting methodology for water-filled phantoms with a known amount of potassium is 2.7% (SD). The uncertainty in the method of body habitus correction (applicable also to phantom-based methods) is 1.5% (SD). It is concluded that this new strategy provides a sufficiently accurate model for routine clinical use

  8. A new method of body habitus correction for total body potassium measurements

    Energy Technology Data Exchange (ETDEWEB)

    O' Hehir, S [University Hospital Birmingham Foundation NHS Trust, Birmingham (United Kingdom); Green, S [University Hospital Birmingham Foundation NHS Trust, Birmingham (United Kingdom); Beddoe, A H [University Hospital Birmingham Foundation NHS Trust, Birmingham (United Kingdom)

    2006-09-07

    This paper describes an accurate and time-efficient method for the determination of total body potassium via a combination of measurements in the Birmingham whole body counter and the use of the Monte Carlo n-particle (MCNP) simulation code. In developing this method, MCNP has also been used to derive values for some components of the total measurement uncertainty which are difficult to quantify experimentally. A method is proposed for MCNP-assessed body habitus corrections based on a simple generic anthropomorphic model, scaled for individual height and weight. The use of this model increases patient comfort by reducing the need for comprehensive anthropomorphic measurements. The analysis shows that the total uncertainty in potassium weight determination by this whole body counting methodology for water-filled phantoms with a known amount of potassium is 2.7% (SD). The uncertainty in the method of body habitus correction (applicable also to phantom-based methods) is 1.5% (SD). It is concluded that this new strategy provides a sufficiently accurate model for routine clinical use.

  9. Diffuse reflectance spectroscopy for the measurement of tissue oxygen saturation

    International Nuclear Information System (INIS)

    Sircan-Kucuksayan, A; Canpolat, M; Uyuklu, M

    2015-01-01

    Tissue oxygen saturation (StO2) is a useful parameter for medical applications. A spectroscopic method has been developed to detect pathologic tissues, due to a lack of normal blood circulation, by measuring StO2. In this study, human blood samples with different levels of oxygen saturation were prepared and spectra were acquired using an optical fiber probe to investigate the correlation between the oxygen saturation levels and the spectra. A linear correlation between the oxygen saturation and the ratio of the intensities (760 nm to 790 nm) of the spectra acquired from blood samples was found. In a validation study, the oxygen saturations of the blood samples were estimated from the spectroscopic measurements with an error of 2.9%. It has also been shown that the linear dependence between the ratio and the oxygen saturation of the blood samples was valid for blood samples with different hematocrits. Spectra were acquired from the forearms of 30 healthy volunteers to estimate StO2 prior to, at the beginning of, after 2 min, and at the release of total vascular occlusion. The average StO2 of a forearm before and after the two-minute occlusion was significantly different. The results suggested that optical reflectance spectroscopy is a sensitive method to estimate the StO2 levels of human tissue. The technique developed to measure StO2 has the potential to detect ischemia in real time. (paper)
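
    The linear relationship between the 760/790 nm intensity ratio and oxygen saturation amounts to a two-parameter calibration; the slope, intercept, and calibration points below are invented placeholders, since the record reports only the existence of a linear correlation and a 2.9% validation error.

```python
import numpy as np

def fit_sto2_calibration(ratio_760_790, known_sto2):
    """Fit StO2 = a * ratio + b from reference blood samples with known saturation."""
    a, b = np.polyfit(ratio_760_790, known_sto2, 1)
    return a, b

def estimate_sto2(ratio_760_790, a, b):
    return a * np.asarray(ratio_760_790) + b

# Hypothetical calibration data (ratio of reflected intensities at 760 nm and 790 nm)
ratios = np.array([0.80, 0.90, 1.00, 1.10, 1.20])
sto2 = np.array([95.0, 80.0, 65.0, 50.0, 35.0])    # percent

a, b = fit_sto2_calibration(ratios, sto2)
print(estimate_sto2(1.05, a, b))                    # ~57.5 %
```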

  10. Library construction and evaluation for site saturation mutagenesis.

    Science.gov (United States)

    Sullivan, Bradford; Walton, Adam Z; Stewart, Jon D

    2013-06-10

    We developed a method for creating and evaluating site-saturation libraries that consistently yields an average of 27.4±3.0 codons of the 32 possible within a pool of 95 transformants. This was verified by sequencing 95 members from 11 independent libraries within the gene encoding alkene reductase OYE 2.6 from Pichia stipitis. Correct PCR primer design as well as a variety of factors that increase transformation efficiency were critical contributors to the method's overall success. We also developed a quantitative analysis of library quality (Q-values) that defines library degeneracy. Q-values can be calculated from standard fluorescence sequencing data (capillary electropherograms) and the degeneracy predicted from an early stage of library construction (pooled plasmids from the initial transformation) closely matched that observed after ca. 1000 library members were sequenced. Based on this experience, we suggest that this analysis can be a useful guide when applying our optimized protocol to new systems, allowing one to focus only on good-quality libraries and reject substandard libraries at an early stage. This advantage is particularly important when lower-throughput screening techniques such as chiral-phase GC must be employed to identify protein variants with desirable properties, e.g., altered stereoselectivities or when multiple codons are targeted for simultaneous randomization. Copyright © 2013 Elsevier Inc. All rights reserved.
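
    The reported average of 27.4 codons per 95-transformant pool can be compared with the idealized expectation for a perfectly unbiased 32-codon (NNK-type) library; the short calculation below makes that uniformity assumption, which is why it somewhat exceeds the observed value.

```python
# Expected number of distinct codons seen when sampling 95 clones uniformly
# from 32 equally likely codons (coupon-collector style calculation).
n_codons = 32
n_clones = 95
expected_distinct = n_codons * (1 - (1 - 1 / n_codons) ** n_clones)
print(f"{expected_distinct:.1f} of {n_codons} codons expected")   # ~30.4 vs. 27.4 observed
```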

  11. The viscosity of the refrigerant 1,1-difluoroethane along the saturation line

    Science.gov (United States)

    van der Gulik, P. S.

    1993-07-01

    The viscosity coefficient of the refrigerant R152a (1,1-difluoroethane) has been measured along the saturation line both in the saturated liquid and in the saturated vapor. The data have been obtained every 10 K from 243 up to 393 K by means of a vibrating-wire viscometer using the free damped oscillation method. The density along the saturation line was calculated from the equation of state given by Tamatsu et al. with application of the saturated vapor-pressure correlation given by Higashi et al. An interesting result is that in the neighborhood of the critical point, the kinematic viscosity of the saturated liquid seems to coincide with that of the saturated vapor. The results for the saturated liquid are in satisfactory agreement with those of Kumagai and Takahashi and of Phillips and Murphy. A comparison of the saturated-vapor data with the unsaturated-vapor data of Takahashi et al. shows some discrepancies.

  12. Effect of methods of myopia correction on visual acuity, contrast sensitivity, and depth of focus

    NARCIS (Netherlands)

    Nio, YK; Jansonius, NM; Wijdh, RHJ; Beekhuis, WH; Worst, JGF; Noorby, S; Kooijman, AC

    Purpose. To psychophysically measure spherical and irregular aberrations in patients with various types of myopia correction. Setting: Laboratory of Experimental Ophthalmology, University of Groningen, Groningen, The Netherlands. Methods: Three groups of patients with low myopia correction

  13. Two-beam interaction in saturable media

    DEFF Research Database (Denmark)

    Schjødt-Eriksen, Jens; Schmidt, Michel R.; Juul Rasmussen, Jens

    1998-01-01

    The dynamics of two coupled soliton solutions of the nonlinear Schrödinger equation with a saturable nonlinearity is investigated. It is shown by means of a variational method and by direct numerical calculations that two well-separated solitons can orbit around each other, if their initial velocity...
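
    For reference, a commonly studied form of the nonlinear Schrödinger equation with a saturable nonlinearity is shown below; the record does not specify the exact model, so this should be read as a representative example rather than the equation used by the authors.

```latex
i\,\frac{\partial u}{\partial z} + \frac{1}{2}\,\frac{\partial^{2} u}{\partial x^{2}}
  + \frac{|u|^{2}}{1 + s\,|u|^{2}}\,u = 0
```

    Here u(x, z) is the field envelope and s > 0 is the saturation parameter; in the limit s -> 0 the usual cubic (Kerr) nonlinearity is recovered.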

  14. Gluon saturation and baryon stopping in the SPS, RHIC, and LHC energy regions

    International Nuclear Information System (INIS)

    Li Shuang; Feng Shengqin

    2012-01-01

    A new geometrical scaling method with a gluon saturation rapidity limit is proposed to study the gluon saturation feature of the central rapidity region of relativistic nuclear collisions. The net-baryon number is essentially transported by valence quarks that probe the saturation regime in the target by multiple scattering. We take advantage of the gluon saturation model with geometric scaling of the rapidity limit to investigate net baryon distributions, nuclear stopping power and gluon saturation features in the SPS and RHIC energy regions. Predictions for net baryon rapidity distributions, mean rapidity loss and gluon saturation feature in central Pb + Pb collisions at the LHC are made in this paper. (authors)

  15. Corrected direct force balance method for atomic force microscopy lateral force calibration

    International Nuclear Information System (INIS)

    Asay, David B.; Hsiao, Erik; Kim, Seong H.

    2009-01-01

    This paper reports corrections and improvements of the previously reported direct force balance method (DFBM) developed for lateral calibration of atomic force microscopy. The DFBM method employs the lateral force signal obtained during a force-distance measurement on a sloped surface and relates this signal to the applied load and the slope of the surface to determine the lateral calibration factor. In the original publication [Rev. Sci. Instrum. 77, 043903 (2006)], the tip-substrate contact was assumed to be pinned at the point of contact, i.e., no slip along the slope. In control experiments, the tip was found to slide along the slope during force-distance curve measurement. This paper presents the correct force balance for lateral force calibration.

  16. Attenuation correction of myocardial SPECT by scatter-photopeak window method in normal subjects

    International Nuclear Information System (INIS)

    Okuda, Koichi; Nakajima, Kenichi; Matsuo, Shinro; Kinuya, Seigo; Motomura, Nobutoku; Kubota, Masahiro; Yamaki, Noriyasu; Maeda, Hisato

    2009-01-01

    The segmentation with scatter and photopeak window data using attenuation correction (SSPAC) method can provide a patient-specific non-uniform attenuation coefficient map using only photopeak and scatter images, without X-ray computed tomography (CT). The purpose of this study is to evaluate the performance of attenuation correction (AC) by the SSPAC method on a normal myocardial perfusion database. A total of 32 sets of exercise-rest myocardial images with Tc-99m-sestamibi were acquired in both photopeak (140 keV±10%) and scatter (7% window on the lower side of the photopeak window) energy windows. Myocardial perfusion databases for the SSPAC method and for non-AC (NC) were created from 15 female and 17 male subjects with a low likelihood of cardiac disease using quantitative perfusion SPECT software. Segmental myocardial counts of a 17-segment model from these databases were compared on the basis of the paired t test. The AC average myocardial perfusion count was significantly higher than that of NC in the septal and inferior regions (P<0.02). On the contrary, the AC average count was significantly lower in the anterolateral and apical regions (P<0.01). The coefficient of variation of the AC count in the mid, apical and apex regions was lower than that of NC. The SSPAC method can improve average myocardial perfusion uptake in the septal and inferior regions and provide a uniform distribution of myocardial perfusion. The SSPAC method could be a practical method of attenuation correction without X-ray CT. (author)

  17. Evaluation of three methods for retrospective correction of vignetting on medical microscopy images utilizing two open source software tools.

    Science.gov (United States)

    Babaloukas, Georgios; Tentolouris, Nicholas; Liatis, Stavros; Sklavounou, Alexandra; Perrea, Despoina

    2011-12-01

    Correction of vignetting on images obtained by a digital camera mounted on a microscope is essential before applying image analysis. The aim of this study is to evaluate three methods for retrospective correction of vignetting on medical microscopy images and compare them with a prospective correction method. One digital image from each of four different tissues was used and a vignetting effect was applied to each of these images. The resulting vignetted image was replicated four times and in each replica a different method for vignetting correction was applied with the Fiji and GIMP software tools. The highest peak signal-to-noise ratio from the comparison of each method to the original image was obtained from the prospective method in all tissues. The morphological filtering method provided the highest peak signal-to-noise ratio value amongst the retrospective methods. The prospective method is suggested as the method of choice for correction of vignetting and, if it is not applicable, then morphological filtering may be suggested as the retrospective alternative method. © 2011 The Authors Journal of Microscopy © 2011 Royal Microscopical Society.
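
    The peak signal-to-noise ratio used for the comparison is a standard image-quality measure; a minimal implementation is given below (the 8-bit peak value and the test images are assumptions).

```python
import numpy as np

def psnr(reference, corrected, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    vignetting-corrected image of the same size."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(corrected, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical 8-bit images
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64)).astype(float)
print(psnr(ref, ref + rng.normal(0, 2.0, ref.shape)))   # around 42 dB
```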

  18. N3LO corrections to jet production in deep inelastic scattering using the Projection-to-Born method

    Science.gov (United States)

    Currie, J.; Gehrmann, T.; Glover, E. W. N.; Huss, A.; Niehues, J.; Vogt, A.

    2018-05-01

    Computations of higher-order QCD corrections for processes with exclusive final states require a subtraction method for real-radiation contributions. We present the first-ever generalisation of a subtraction method for third-order (N3LO) QCD corrections. The Projection-to-Born method is used to combine inclusive N3LO coefficient functions with an exclusive second-order (NNLO) calculation for a final state with an extra jet. The input requirements, advantages, and potential applications of the method are discussed, and validations at lower orders are performed. As a test case, we compute the N3LO corrections to kinematical distributions and production rates for single-jet production in deep inelastic scattering in the laboratory frame, and compare them with data from the ZEUS experiment at HERA. The corrections are small in the central rapidity region, where they stabilize the predictions to sub per-cent level. The corrections increase substantially towards forward rapidity where large logarithmic effects are expected, thereby yielding an improved description of the data in this region.

  19. Temperature effect correction for muon flux at the Earth surface: estimation of the accuracy of different methods

    International Nuclear Information System (INIS)

    Dmitrieva, A N; Astapov, I I; Kovylyaeva, A A; Pankova, D V

    2013-01-01

    Correction of the muon flux at the Earth surface for temperature effect with the help of two simple methods is considered. In the first method, it is assumed that major part of muons are generated at some effective generation level, which altitude depends on the temperature profile of the atmosphere. In the second method, dependence of muon flux on the mass-averaged atmosphere temperature is considered. The methods were tested with the data of muon hodoscope URAGAN (Moscow, Russia). Difference between data corrected with the help of differential in altitude temperature coefficients and simplified methods does not exceed 1-1.5%, so the latter ones may be used for introduction of a fast preliminary correction.
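    The mass-averaged temperature method mentioned above amounts to scaling the measured rate by a single temperature coefficient. A minimal sketch follows; the coefficient and reference temperature are placeholders for illustration, not URAGAN's calibrated values.

```python
def correct_muon_rate_for_temperature(measured_rate, t_mass_avg_k,
                                      t_reference_k=220.0,
                                      alpha_t_per_k=-0.002):
    """Remove the atmospheric temperature effect using the mass-averaged
    temperature method: the relative rate variation is assumed proportional
    to the deviation of the mass-averaged temperature from a reference value.
    alpha_t_per_k and t_reference_k are placeholder values, not URAGAN's."""
    relative_effect = alpha_t_per_k * (t_mass_avg_k - t_reference_k)
    return measured_rate / (1.0 + relative_effect)

# usage sketch: a small rate dip on a warm day is compensated back up
# print(correct_muon_rate_for_temperature(99.0, 223.0))
```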

  20. Comparative study of chance coincidence correction in measuring 223Ra and 224Ra by delay coincidence method

    International Nuclear Information System (INIS)

    Yan Yongjun; Huang Derong; Zhou Jianliang; Qiu Shoukang

    2013-01-01

    The delay coincidence measurement of 220Rn and 219Rn has been proved to be a valid indirect method for measuring 224Ra and 223Ra extracted from natural water, which can provide valuable information on estuarine/ocean mixing, submarine groundwater discharge, and water/soil interactions. In practical operation, chance coincidence correction must be considered, most commonly with Moore's correction method, but Moore's and Giffin's methods are incomplete in some respects. In this paper a modification (method 1) and a new chance coincidence correction formula (method 2) are provided. Experimental results are presented to demonstrate the conclusions. The results show that precision is improved when the counting rate is less than 70 min⁻¹. (authors)
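    For orientation, the basic chance-coincidence term that such corrections estimate can be written as a first-order rate product. The sketch below is a generic illustration under that simple assumption; it is not the paper's method 1 or method 2, and the gate width and rates are placeholders.

```python
def chance_coincidence_rate(trigger_rate_cpm, total_rate_cpm, gate_width_min):
    """Generic first-order estimate of the accidental (chance) coincidence
    rate: every trigger opens a delayed gate of width gate_width_min, and
    uncorrelated counts arriving in that gate are registered as coincidences.
    Illustrative approximation only, not the paper's corrected formulas."""
    return trigger_rate_cpm * total_rate_cpm * gate_width_min

def true_coincidence_rate(measured_rate_cpm, trigger_rate_cpm,
                          total_rate_cpm, gate_width_min):
    """Subtract the chance contribution from the measured coincidence rate."""
    return measured_rate_cpm - chance_coincidence_rate(
        trigger_rate_cpm, total_rate_cpm, gate_width_min)

# usage sketch: at 60 counts/min total with a 0.01 min gate, the chance term
# is already a few percent of a 20 counts/min coincidence signal
# print(true_coincidence_rate(20.0, 60.0, 60.0, 0.01))
```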

  1. An agent-based method for simulating porous fluid-saturated structures with indistinguishable components

    Science.gov (United States)

    Kashani, Jamal; Pettet, Graeme John; Gu, YuanTong; Zhang, Lihai; Oloyede, Adekunle

    2017-10-01

    Single-phase porous materials contain multiple components that intermingle down to the ultramicroscopic level. Although the structures of porous materials have been simulated with agent-based methods, the available methods continue to produce patterns of distinguishable solid and fluid agents, which do not represent materials with indistinguishable phases. This paper introduces a new agent (the hybrid agent) and a new category of rules (intra-agent rules) that can be used to create emergent structures that more accurately represent single-phase structures and materials. The novel hybrid agent carries the characteristics of the system's elements and is capable of changing within itself, while also responding to its neighbours as they change. As an example, the hybrid agent, under a one-dimensional cellular automata formalism in a two-dimensional domain, is used to generate patterns that demonstrate striking morphological and characteristic similarities to porous saturated single-phase structures, where each agent of the "structure" carries a semi-permeability property and consists of both fluid and solid in space at all times. We conclude that the ability of the hybrid agent to change locally provides an enhanced protocol to simulate complex porous structures such as biological tissues, which could facilitate models for agent-based techniques and numerical methods.

  2. Method for the determination of spectroradiometric corrections of data from multichannel aerospatial spectrometers

    International Nuclear Information System (INIS)

    Bakalova, K.P.; Bakalov, D.D.

    1984-01-01

    Various factors in the aerospatial conditions of operation may lead to changes in the transmission characteristics of the electron-optical medium or environment of spectrometers for remote sensing of the Earth. Consequently, the data obtained need spectroradiometric corrections. In the paper, a unified approach to the determination of these corrections is suggested. The method uses measurements of standard sources with a smooth emission spectrum that is much wider than the width of the channels, such as an incandescent-filament lamp, the Sun and other natural objects, without special spectral reference standards. The presence of additional information about the character of the changes occurring in the measurements may considerably simplify the determination of corrections through setting appropriate values of a coefficient and the spectral shift. The method has been used with the Spectrum-15 and SMP-32 spectrometers on the Salyut-7 orbital station and the 'Meteor-Priroda' satellite of the Bulgaria-1300-ii project

  3. Femoral venous oxygen saturation is no surrogate for central venous oxygen saturation

    NARCIS (Netherlands)

    van Beest, Paul A.; van der Schors, Alice; Liefers, Henriëtte; Coenen, Ludo G. J.; Braam, Richard L.; Habib, Najib; Braber, Annemarije; Scheeren, Thomas W. L.; Kuiper, Michaël A.; Spronk, Peter E.

    2012-01-01

    Objective: The purpose of our study was to determine if central venous oxygen saturation and femoral venous oxygen saturation can be used interchangeably during surgery and in critically ill patients. Design: Prospective observational controlled study. Setting: Nonacademic university-affiliated

  4. Transport of water and ions in partially water-saturated porous media. Part 2. Filtration effects

    Science.gov (United States)

    Revil, A.

    2017-05-01

    A new set of constitutive equations describing the transport of ions and water through charged porous media, considering the effect of ion filtration, is applied to the problem of reverse osmosis and diffusion of a salt. Starting with the constitutive equations derived in Paper 1, I first determine specific formulae for the osmotic coefficient and the effective diffusion coefficient of a binary symmetric 1:1 salt (such as KCl or NaCl) as a function of a dimensionless number Θ corresponding to the ratio between the cation exchange capacity (CEC) and the salinity. The modeling is first carried out with the Donnan model used to describe the concentrations of the charge carriers in the pore water phase. Then a new model is developed in the thin double layer approximation to determine these concentrations. These models provide explicit relationships between the concentrations of the ionic species in the pore space and those in a neutral reservoir in local equilibrium with the pore space, and the CEC. The cases of reverse osmosis and of the diffusion coefficient are analyzed in detail for saturated and partially saturated porous materials. Comparisons are made with experimental data from the literature obtained on bentonite. The model correctly predicts the influence of salinity (including membrane behavior at high salinities), porosity, cation type (K+ versus Na+), and water saturation on the osmotic coefficient. It also correctly predicts the dependence of the diffusion coefficient of the salt on the salinity.

  5. Fast pressure-correction method for incompressible Navier-Stokes equations in curvilinear coordinates

    Science.gov (United States)

    Aithal, Abhiram; Ferrante, Antonino

    2017-11-01

    In order to perform direct numerical simulations (DNS) of turbulent flows over curved surfaces and axisymmetric bodies, we have developed a numerical methodology to solve the incompressible Navier-Stokes (NS) equations in curvilinear coordinates for orthogonal meshes. The orthogonal meshes are generated by solving a coupled system of non-linear Poisson equations. The NS equations in orthogonal curvilinear coordinates are discretized in space on a staggered mesh using a second-order central-difference scheme and are solved with an FFT-based pressure-correction method. The momentum equation is integrated in time using the second-order Adams-Bashforth scheme. The velocity field is advanced in time by applying the pressure correction to the approximate velocity such that it satisfies the divergence-free condition. The novelty of the method lies in solving the variable-coefficient Poisson equation for pressure using an FFT-based Poisson solver rather than slower multigrid methods. We present the verification and validation results of the new numerical method and the DNS results of transitional flow over a curved axisymmetric body.
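    To make the pressure-correction (projection) idea concrete, the sketch below shows one FFT-based correction step in the much simpler setting of a 2-D periodic Cartesian grid with constant density; it illustrates the projection step only and is not the curvilinear, variable-coefficient solver described in the abstract.

```python
import numpy as np

def pressure_projection_fft(u, v, dx, dy, dt, rho=1.0):
    """One pressure-correction step: solve lap(p) = rho/dt * div(u*) with FFTs,
    then subtract the pressure gradient so the corrected velocity field is
    divergence-free. Arrays u, v have shape (ny, nx) on a periodic grid."""
    ny, nx = u.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)

    # divergence of the provisional velocity (spectral derivatives)
    u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
    div_hat = 1j * KX * u_hat + 1j * KY * v_hat

    # Poisson solve: -(kx^2 + ky^2) * p_hat = (rho/dt) * div_hat
    k2 = KX ** 2 + KY ** 2
    k2[0, 0] = 1.0                      # avoid division by zero for the mean mode
    p_hat = -(rho / dt) * div_hat / k2
    p_hat[0, 0] = 0.0                   # pressure is defined up to a constant

    # velocity correction: u = u* - (dt/rho) * grad(p)
    u_new = u - (dt / rho) * np.real(np.fft.ifft2(1j * KX * p_hat))
    v_new = v - (dt / rho) * np.real(np.fft.ifft2(1j * KY * p_hat))
    return u_new, v_new, np.real(np.fft.ifft2(p_hat))
```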

  6. Perturbation theory corrections to the two-particle reduced density matrix variational method.

    Science.gov (United States)

    Juhasz, Tamas; Mazziotti, David A

    2004-07-15

    In the variational 2-particle-reduced-density-matrix (2-RDM) method, the ground-state energy is minimized with respect to the 2-particle reduced density matrix, constrained by N-representability conditions. Consider the N-electron Hamiltonian H(lambda) as a function of the parameter lambda where we recover the Fock Hamiltonian at lambda=0 and we recover the fully correlated Hamiltonian at lambda=1. We explore using the accuracy of perturbation theory at small lambda to correct the 2-RDM variational energies at lambda=1 where the Hamiltonian represents correlated atoms and molecules. A key assumption in the correction is that the 2-RDM method will capture a fairly constant percentage of the correlation energy for lambda in (0,1] because the nonperturbative 2-RDM approach depends more significantly upon the nature rather than the strength of the two-body Hamiltonian interaction. For a variety of molecules we observe that this correction improves the 2-RDM energies in the equilibrium bonding region, while the 2-RDM energies at stretched or nearly dissociated geometries, already highly accurate, are not significantly changed. At equilibrium geometries the corrected 2-RDM energies are similar in accuracy to those from coupled-cluster singles and doubles (CCSD), but at nonequilibrium geometries the 2-RDM energies are often dramatically more accurate as shown in the bond stretching and dissociation data for water and nitrogen. (c) 2004 American Institute of Physics.

  7. Evaluation of Fresnel's corrections to the eikonal approximation by the separabilization method

    International Nuclear Information System (INIS)

    Musakhanov, M.M.; Zubarev, A.L.

    1975-01-01

    A method of separabilization of the potential over approximate Schroedinger solutions, leading to Schwinger's variational principle for the scattering amplitude, is suggested. The results are applied to the calculation of the Fresnel corrections to the Glauber approximation

  8. An improved correlated sampling method for calculating correction factor of detector

    International Nuclear Information System (INIS)

    Wu Zhen; Li Junli; Cheng Jianping

    2006-01-01

    In the case of a small detector lying inside a bulk medium, there are two problems in the calculation of detector correction factors. One is that the detector is too small for particles to arrive at and collide in; the other is that the ratio of two quantities is not accurate enough. The method discussed in this paper, which combines correlated sampling with modified particle-collision auto-importance sampling and has been implemented on the MCNP-4C platform, can solve both problems. In addition, three other variance reduction techniques are each combined with correlated sampling to calculate a simple model of detector correction factors. The results prove that, although all the variance reduction techniques combined with correlated sampling improve the calculation efficiency, the method combining modified particle-collision auto-importance sampling with correlated sampling is the most efficient one. (authors)

  9. Bias correction for estimated QTL effects using the penalized maximum likelihood method.

    Science.gov (United States)

    Zhang, J; Yue, C; Zhang, Y-M

    2012-04-01

    A penalized maximum likelihood method has been proposed as an important approach to the detection of epistatic quantitative trait loci (QTL). However, this approach is not optimal in two special situations: (1) closely linked QTL with effects in opposite directions and (2) small-effect QTL, because the method produces downwardly biased estimates of QTL effects. The present study aims to correct the bias by using correction coefficients and shifting from the use of a uniform prior on the variance parameter of a QTL effect to that of a scaled inverse chi-square prior. The results of Monte Carlo simulation experiments show that the improved method increases the power from 25 to 88% in the detection of two closely linked QTL of equal size in opposite directions and from 60 to 80% in the identification of QTL with small effects (0.5% of the total phenotypic variance). We used the improved method to detect QTL responsible for the barley kernel weight trait using 145 doubled haploid lines developed in the North American Barley Genome Mapping Project. Application of the proposed method to other shrinkage estimation of QTL effects is discussed.

  10. Attenuation correction for renal scintigraphy with 99mTc - DMSA: comparison between Raynaud and the geometric mean methods

    International Nuclear Information System (INIS)

    Argenta, J.; Brambilla, C.R.; Marques da Silva, A.M.

    2009-01-01

    The evaluation of the index of renal function (IF) requires soft-tissue attenuation correction. This paper investigates the impact on the IF when attenuation correction is applied using the Raynaud method and the geometric mean method in renal planar scintigraphy, using posterior and anterior views. The study was conducted with Monte Carlo simulated images of five GSF family voxel phantoms with different relative uptakes in each kidney, from normal (50%-50%) to pathological (10%-90%). The results showed that the Raynaud method corrects more efficiently in the cases where the renal depth is close to the value of the standard phantom. The geometric mean method showed results similar to the Raynaud method for Baby, Child and Golem. For the Helga and Donna models, the errors were above 20%, increasing with relative uptake. Further studies should be conducted to assess the influence of the standard phantom on the attenuation correction methods. (author)
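    The geometric mean (conjugate-view) correction compared above combines anterior and posterior counts so that the source depth largely cancels. A minimal sketch follows, assuming a known body thickness and a single effective linear attenuation coefficient; the numerical values are placeholders, not those used in the study.

```python
import math

def geometric_mean_corrected_counts(anterior_counts, posterior_counts,
                                    body_thickness_cm, mu_per_cm=0.12):
    """Conjugate-view (geometric mean) attenuation correction: for a source at
    depth d in a body of thickness T, sqrt(A * P) ~ C0 * exp(-mu * T / 2), so
    the depth dependence cancels and only the total thickness remains.
    mu_per_cm is an assumed effective attenuation coefficient for Tc-99m."""
    geometric_mean = math.sqrt(anterior_counts * posterior_counts)
    return geometric_mean * math.exp(mu_per_cm * body_thickness_cm / 2.0)

# usage sketch: relative uptake of one kidney from corrected counts
# left = geometric_mean_corrected_counts(1800, 2600, 20.0)
# right = geometric_mean_corrected_counts(2100, 2900, 20.0)
# print(left / (left + right))
```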

  11. Attenuation correction for renal scintigraphy with 99mTc-DMSA: analysis between Raynaud and the geometric mean methods

    International Nuclear Information System (INIS)

    Argenta, Jackson; Brambilla, Claudia R.; Silva, Ana Maria M. da

    2010-01-01

    The evaluation of the index of renal function (IF) requires soft-tissue attenuation correction. This paper investigates the impact on the IF when attenuation correction is applied using the Raynaud method and the geometric mean method in renal planar scintigraphy, using posterior and anterior views. The study was conducted with Monte Carlo simulated images of five GSF family voxel phantoms with different relative uptakes in each kidney, from normal (50%-50%) to pathological (10%-90%). The results showed that the Raynaud method corrects more efficiently in the cases where the renal depth is close to the value of the standard phantom. The geometric mean method showed results similar to the Raynaud method for Baby, Child and Golem. For the Helga and Donna models, the errors were above 20%, increasing with relative uptake. Further studies should be conducted to assess the influence of the standard phantom on the attenuation correction methods. (author)

  12. Medium corrections to nucleon-nucleon interactions

    International Nuclear Information System (INIS)

    Dortmans, P.J.; Amos, K.

    1990-01-01

    The Bethe-Goldstone equations have been solved for both negative and positive energies to specify two-nucleon G-matrices fully off the energy shell. Medium-correction effects of Pauli blocking and of the auxiliary potential are included in infinite matter systems characterized by Fermi momenta in the range 0.5 fm⁻¹ to 1.8 fm⁻¹. The Paris interaction is used as the starting potential in most calculations. Medium corrections are shown to be very significant over a large range of energies and densities. On the energy shell, values of the G-matrices vary markedly from those of the free two-nucleon (NN) t-matrices, which have been obtained by solving the Lippmann-Schwinger equation. Off the energy shell, however, the free and medium-corrected Kowalski-Noyes f-ratios are quite similar, suggesting that a useful model of medium-corrected G-matrices is appropriately scaled free NN t-matrices. The choice of auxiliary potential form is also shown to play a decisive role in the negative energy regime, especially when the saturation of nuclear matter is considered. 30 refs., 7 tabs., 7 figs

  13. A technique for measuring oxygen saturation in biological tissues based on diffuse optical spectroscopy

    Science.gov (United States)

    Kleshnin, Mikhail; Orlova, Anna; Kirillin, Mikhail; Golubiatnikov, German; Turchin, Ilya

    2017-07-01

    A new approach to optical measurement of blood oxygen saturation was developed and implemented. The technique is based on an original three-stage algorithm for reconstructing the relative concentrations of biological chromophores (hemoglobin, water, lipids) from the measured spectra of diffusely scattered light at different distances from the probing radiation source. Numerical experiments and testing of the proposed technique on a biological phantom have shown high reconstruction accuracy and the possibility of correct calculation of hemoglobin oxygenation in the presence of additive noise and calibration errors. The results of animal studies agree with previously published results of other research groups and demonstrate the possibility of applying the developed technique to monitor oxygen saturation in tumor tissue.
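    Once relative concentrations of oxy- and deoxyhemoglobin have been reconstructed, the saturation itself is a simple ratio. The sketch below illustrates a generic least-squares unmixing and saturation calculation; the extinction values in the usage comment are placeholders, not the paper's three-stage algorithm or calibration.

```python
import numpy as np

def oxygen_saturation(c_hbo2, c_hb):
    """Oxygen saturation as the fraction of oxygenated hemoglobin."""
    return c_hbo2 / (c_hbo2 + c_hb)

def unmix_chromophores(absorption_spectrum, extinction_matrix):
    """Least-squares estimate of chromophore concentrations from a measured
    absorption spectrum: mu_a(lambda) ~ E(lambda, chromophore) @ c.
    extinction_matrix has one column per chromophore (HbO2, Hb, water, ...)."""
    concentrations, *_ = np.linalg.lstsq(extinction_matrix,
                                         absorption_spectrum, rcond=None)
    return concentrations

# usage sketch with two chromophores at three wavelengths (placeholder values)
# E = np.array([[0.9, 3.5], [1.1, 1.1], [2.8, 1.6]])   # columns: HbO2, Hb
# mu_a = E @ np.array([0.7, 0.3])
# c = unmix_chromophores(mu_a, E)
# print(oxygen_saturation(c[0], c[1]))
```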

  14. Asteroseismic modelling of solar-type stars: internal systematics from input physics and surface correction methods

    Science.gov (United States)

    Nsamba, B.; Campante, T. L.; Monteiro, M. J. P. F. G.; Cunha, M. S.; Rendle, B. M.; Reese, D. R.; Verma, K.

    2018-04-01

    Asteroseismic forward modelling techniques are being used to determine fundamental properties (e.g. mass, radius, and age) of solar-type stars. The need to take into account all possible sources of error is of paramount importance towards a robust determination of stellar properties. We present a study of 34 solar-type stars for which high signal-to-noise asteroseismic data is available from multi-year Kepler photometry. We explore the internal systematics on the stellar properties, that is, associated with the uncertainty in the input physics used to construct the stellar models. In particular, we explore the systematics arising from: (i) the inclusion of the diffusion of helium and heavy elements; and (ii) the uncertainty in solar metallicity mixture. We also assess the systematics arising from (iii) different surface correction methods used in optimisation/fitting procedures. The systematics arising from comparing results of models with and without diffusion are found to be 0.5%, 0.8%, 2.1%, and 16% in mean density, radius, mass, and age, respectively. The internal systematics in age are significantly larger than the statistical uncertainties. We find the internal systematics resulting from the uncertainty in solar metallicity mixture to be 0.7% in mean density, 0.5% in radius, 1.4% in mass, and 6.7% in age. The surface correction method by Sonoi et al. and Ball & Gizon's two-term correction produce the lowest internal systematics among the different correction methods, namely, ˜1%, ˜1%, ˜2%, and ˜8% in mean density, radius, mass, and age, respectively. Stellar masses obtained using the surface correction methods by Kjeldsen et al. and Ball & Gizon's one-term correction are systematically higher than those obtained using frequency ratios.
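    For context on the surface corrections compared above, the Ball & Gizon two-term form models the frequency offset as a combination of an inverse and a cubic term in frequency, scaled by the mode inertia. A hedged sketch of fitting such a correction is given below; the variable names and the normalising (acoustic cutoff) frequency handling are illustrative, not the exact pipeline used in the study.

```python
import numpy as np

def two_term_surface_correction(nu, inertia, a_minus1, a_3, nu_ac):
    """Ball & Gizon style two-term surface correction:
    d_nu = (a_-1 * (nu/nu_ac)**-1 + a_3 * (nu/nu_ac)**3) / inertia."""
    x = nu / nu_ac
    return (a_minus1 * x ** -1 + a_3 * x ** 3) / inertia

def fit_two_term_coefficients(nu_model, nu_observed, inertia, nu_ac):
    """Linear least-squares fit of the two coefficients to the
    observed-minus-model frequency differences."""
    x = nu_model / nu_ac
    design = np.column_stack((x ** -1 / inertia, x ** 3 / inertia))
    coeffs, *_ = np.linalg.lstsq(design, nu_observed - nu_model, rcond=None)
    return coeffs  # (a_minus1, a_3)
```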

  15. Two-loop corrections for nuclear matter in the Walecka model

    International Nuclear Information System (INIS)

    Furnstahl, R.J.; Perry, R.J.; Serot, B.D.; Department of Physics, The Ohio State University, Columbus, Ohio 43210; Physics Department and Nuclear Theory Center, Indiana University, Bloomington, Indiana 47405)

    1989-01-01

    Two-loop corrections for nuclear matter, including vacuum polarization, are calculated in the Walecka model to study the loop expansion as an approximation scheme for quantum hadrodynamics. Criteria for useful approximation schemes are discussed, and the concepts of strong and weak convergence are introduced. The two-loop corrections are evaluated first with one-loop parameters and mean fields and then by minimizing the total energy density with respect to the scalar field and refitting parameters to empirical nuclear matter saturation properties. The size and nature of the corrections indicate that the loop expansion is not convergent at two-loop order in either the strong or weak sense. Prospects for alternative approximation schemes are discussed

  16. Femoral venous oxygen saturation is no surrogate for central venous oxygen saturation

    NARCIS (Netherlands)

    van Beest, Paul A.; van der Schors, Alice; Liefers, Henriette; Coenen, Ludo G. J.; Braam, Richard L.; Habib, Najib; Braber, Annemarije; Scheeren, Thomas W. L.; Kuiper, Michael A.; Spronk, Peter E.

    2012-01-01

    Objective:  The purpose of our study was to determine if central venous oxygen saturation and femoral venous oxygen saturation can be used interchangeably during surgery and in critically ill patients. Design:  Prospective observational controlled study. Setting:  Nonacademic university-affiliated

  17. Monte Carlo evaluation of scattering correction methods in 131I studies using pinhole collimator

    International Nuclear Information System (INIS)

    López Díaz, Adlin; San Pedro, Aley Palau; Martín Escuela, Juan Miguel; Rodríguez Pérez, Sunay; Díaz García, Angelina

    2017-01-01

    Scattering is quite important for image activity quantification. In order to study the scattering factors and the efficacy of three multiple-energy-window scatter correction methods during 131I thyroid studies with a pinhole collimator (5 mm hole), a Monte Carlo (MC) simulation was developed. The GAMOS MC code was used to model the gamma camera and the thyroid source geometry. First, to validate the MC gamma camera pinhole-source model, the sensitivities in air and water of the simulated and measured thyroid phantom geometries were compared. Next, simulations to investigate scattering and the results of the triple energy window (TEW), double window (DW) and reduced double window (RDW) correction methods were performed for different thyroid sizes and depth thicknesses. The relative discrepancies with respect to the MC true events were evaluated. Results: The accuracy of the GAMOS MC model was verified and validated. The scattering contribution to the image was significant, between 27-40%. The discrepancies between the results of the three multiple-energy-window correction methods were significant (between 9-86%). The reduced double window method (15%) provided discrepancies of 9-16%. Conclusions: For the simulated thyroid geometry with pinhole, the RDW (15%) was the most effective. (author)
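    Of the window-based corrections compared above, the triple energy window (TEW) estimate is the most widely documented: counts in two narrow windows flanking the photopeak approximate the scatter under the peak as a trapezoid. A minimal sketch follows; the default window widths are illustrative values for a 131I photopeak window, not those of the cited study.

```python
def tew_scatter_estimate(counts_left, counts_right,
                         width_left_kev, width_right_kev, width_peak_kev):
    """Triple-energy-window estimate of the scatter counts inside the
    photopeak window: trapezoidal interpolation between the two flanking
    narrow windows."""
    return (counts_left / width_left_kev +
            counts_right / width_right_kev) * width_peak_kev / 2.0

def tew_primary_counts(counts_peak, counts_left, counts_right,
                       width_left_kev=6.0, width_right_kev=6.0,
                       width_peak_kev=73.0):
    """Scatter-corrected (primary) counts in the photopeak window."""
    scatter = tew_scatter_estimate(counts_left, counts_right,
                                   width_left_kev, width_right_kev,
                                   width_peak_kev)
    return max(counts_peak - scatter, 0.0)
```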

  18. Study of the orbital correction method

    International Nuclear Information System (INIS)

    Meserve, R.A.

    1976-01-01

    Two approximations of interest in atomic, molecular, and solid state physics are explored. First, a procedure for calculating an approximate Green's function for use in perturbation theory is derived. In lowest order it is shown to be equivalent to treating the contribution of the bound states of the unperturbed Hamiltonian exactly and representing the continuum contribution by plane waves orthogonalized to the bound states (OPW's). If the OPW approximation were inadequate, the procedure allows for systematic improvement of the approximation. For comparison purposes an exact but more limited procedure for performing second-order perturbation theory, one that involves solving an inhomogeneous differential equation, is also derived. Second, the Kohn-Sham many-electron formalism is discussed and formulae are derived and discussed for implementing perturbation theory within the formalism so as to find corrections to the total energy of a system through second order in the perturbation. Both approximations were used in the calculation of the polarizability of helium, neon, and argon. The calculation included direct and exchange effects by the Kohn-Sham method and full self-consistency was demanded. The results using the differential equation method yielded excellent agreement with the coupled Hartree-Fock results of others and with experiment. Moreover, the OPW approximation yielded satisfactory comparison with the results of calculation by the exact differential equation method. Finally, both approximations were used in the calculation of properties of hydrogen fluoride and methane. The appendix formulates a procedure using group theory and the internal coordinates of a molecular system to simplify the calculation of vibrational frequencies

  19. On the water saturation calculation in hydrocarbon sandstone reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Stalheim, Stein Ottar

    2002-07-01

    The main goal of this work was to identify the most important uncertainty sources in water saturation calculation and to examine the possibility of developing new S{sub w} equations, or methods to remove weaknesses and uncertainties in existing S{sub w} equations. Due to the need for industrial applicability of the equations, we aimed for results with the following properties: the accuracy in S{sub w} should increase compared with existing S{sub w} equations; the equations should be simple to use in petrophysical evaluations; the equations should be based on conventional logs and use as few input parameters as possible; and the equations should be numerically stable. This thesis includes an uncertainty and sensitivity analysis of the most common S{sub w} equations. The results are addressed in chapter 3 and were intended to find the most important uncertainty sources in water saturation calculation. To increase the knowledge of the relationship between R{sub t} and S{sub w} in hydrocarbon sandstone reservoirs and to understand how the pore geometry affects the conductivity (n and m) of the rock, a theoretical study was done. It was also an aim to examine the possibility of developing new S{sub w} equations (or investigating an effective medium model) valid in hydrocarbon sandstone reservoirs. The results are presented in paper 1. A new equation for water saturation calculation in clean sandstone oil reservoirs is addressed in paper 2. A recommendation for best practice of water saturation calculation in non-water-wet formations is addressed in paper 3. Finally, a new equation for water saturation calculation in thinly interbedded sandstone/mudstone reservoirs is presented in paper 4. The papers are titled: 1) Is the saturation exponent n a constant. 2) A New Model for Calculating Water Saturation In 3) Influence of wettability on water saturation modeling. 4) Water Saturation Calculations in Thinly Interbedded Sandstone/mudstone Reservoirs. A
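    The Sw equations discussed above build on Archie-type relationships between resistivity and saturation. A minimal sketch of the classic Archie calculation for clean (shale-free) sandstone is given below; the tortuosity factor, cementation and saturation exponents are typical textbook defaults, not the values recommended in the thesis.

```python
def archie_water_saturation(rt_ohmm, rw_ohmm, porosity,
                            a=1.0, m=2.0, n=2.0):
    """Archie's equation for clean sandstone:
        Sw = ( a * Rw / (phi**m * Rt) ) ** (1/n)
    Rt: true formation resistivity, Rw: formation water resistivity,
    phi: porosity (fraction). a, m, n are typical default values."""
    sw = (a * rw_ohmm / (porosity ** m * rt_ohmm)) ** (1.0 / n)
    return min(sw, 1.0)

# usage sketch: 20% porosity, Rw = 0.05 ohm-m, Rt = 10 ohm-m gives Sw ~ 0.35
# print(archie_water_saturation(10.0, 0.05, 0.20))
```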

  20. Filtering of SPECT reconstructions made using Bellini's attenuation correction method

    International Nuclear Information System (INIS)

    Glick, S.J.; Penney, B.C.; King, M.A.

    1991-01-01

    This paper evaluates a three-dimensional (3D) Wiener filter which is used to restore SPECT reconstructions made using Bellini's method of attenuation correction. Its performance is compared to that of several pre-reconstruction filters: the one-dimensional (1D) Butterworth, the two-dimensional (2D) Butterworth, and a 2D Wiener filter. A simulation study is used to compare the four filtering methods. An approximation to a clinical liver-spleen study was used as the source distribution, and an algorithm which accounts for the depth- and distance-dependent blurring in SPECT was used to compute noise-free projections. To study the effect of filtering method on tumor detection accuracy, a 2 cm diameter, cool spherical tumor (40% contrast) was placed at a known, but random, location within the liver. Projection sets for ten tumor locations were computed and five noise realizations of each set were obtained by introducing Poisson noise. The simulated projections were either filtered with the 1D or 2D Butterworth or the 2D Wiener and then reconstructed using Bellini's intrinsic attenuation correction, or reconstructed first and then filtered with the 3D Wiener. The criteria used for comparison were: normalized mean square error (NMSE), cold spot contrast, and accuracy of tumor detection with an automated numerical method. Results indicate that restorations obtained with 3D Wiener filtering yielded significantly higher lesion contrast and lower NMSE values compared to the other methods of processing. The Wiener restoration filters and the 2D Butterworth all provided similar measures of detectability, which were noticeably higher than that obtained with 1D Butterworth smoothing
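    For reference, the core of a frequency-domain Wiener restoration like the 2D/3D filters compared above can be written compactly. The sketch below covers the denoising-only case (no blur kernel) and assumes the signal and noise power spectra are available or estimated; it is a generic Wiener filter, not the specific restoration filter of the study.

```python
import numpy as np

def wiener_restore(image, noise_power, signal_power_spectrum):
    """Frequency-domain Wiener filter for the denoising-only case:
    W(f) = S(f) / (S(f) + N), applied multiplicatively in Fourier space.
    signal_power_spectrum must have the same shape as the image FFT."""
    img_fft = np.fft.fftn(image)
    wiener_gain = signal_power_spectrum / (signal_power_spectrum + noise_power)
    return np.real(np.fft.ifftn(img_fft * wiener_gain))

# usage sketch: estimate the signal spectrum from the noisy image itself
# psd_est = np.maximum(np.abs(np.fft.fftn(noisy))**2 / noisy.size - sigma2, 0.0)
# restored = wiener_restore(noisy, sigma2, psd_est)
```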

  1. Saturation behaviour of the LHC NEG coated beam pipes

    CERN Document Server

    Porcelli, T; Lanza, G; Baglin, V; Jimenez, J M

    2012-01-01

    In the CERN Large Hadron Collider (LHC), about 6 km of the UHV beam pipe are at room temperature and serve as experimental or utility insertions. TiZrV non-evaporable getter (NEG) coating is used to maintain the design pressure during beam operation. Molecular desorption due to dynamic effects is stimulated during high-intensity proton operation. This phenomenon produces an important gas load from the vacuum chamber walls, which could lead to a partial or total saturation of the NEG coating. To keep the design vacuum performance and to schedule technical interventions for NEG reactivation, it is necessary to take all these aspects into account and to regularly evaluate the saturation level of the NEG coating. Experimental studies of a typical LHC vacuum sector were conducted in the laboratory in order to identify the best method to assess the saturation level of the beam pipe. Partial saturation of the NEG was performed and the effective pumping speed, transmission and capture probability are analysed.

  2. Finite-time stabilization for a class of nonholonomic feedforward systems subject to inputs saturation.

    Science.gov (United States)

    Gao, Fangzheng; Yuan, Ye; Wu, Yuqiang

    2016-09-01

    This paper studies the problem of finite-time stabilization by state feedback for a class of uncertain nonholonomic systems in feedforward-like form subject to inputs saturation. Under the weaker homogeneous condition on systems growth, a saturated finite-time control scheme is developed by exploiting the adding a power integrator method, the homogeneous domination approach and the nested saturation technique. Together with a novel switching control strategy, the designed saturated controller guarantees that the states of closed-loop system are regulated to zero in a finite time without violation of the constraint. As an application of the proposed theoretical results, the problem of saturated finite-time control for vertical wheel on rotating table is solved. Simulation results are given to demonstrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  3. An FFT-based Method for Attenuation Correction in Fluorescence Confocal Microscopy

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Bakker, M.

    1993-01-01

    A problem in three-dimensional imaging by a confocal scanning laser microscope (CSLM) in the (epi)fluorescence mode is the darkening of the deeper layers due to absorption and scattering of both the excitation and the fluorescence light. In this paper we propose a new method to correct for these

  4. A novel method to correct for pitch and yaw patient setup errors in helical tomotherapy

    International Nuclear Information System (INIS)

    Boswell, Sarah A.; Jeraj, Robert; Ruchala, Kenneth J.; Olivera, Gustavo H.; Jaradat, Hazim A.; James, Joshua A.; Gutierrez, Alonso; Pearson, Dave; Frank, Gary; Mackie, T. Rock

    2005-01-01

    An accurate means of determining and correcting for daily patient setup errors is important to the cancer outcome in radiotherapy. While many tools have been developed to detect setup errors, difficulty may arise in accurately adjusting the patient to account for the rotational error components. A novel, automated method to correct for rotational patient setup errors in helical tomotherapy is proposed for a treatment couch that is restricted to motion along translational axes. In tomotherapy, only a narrow superior/inferior section of the target receives a dose at any instant, thus rotations in the sagittal and coronal planes may be approximately corrected for by very slow continuous couch motion in a direction perpendicular to the scanning direction. Results from proof-of-principle tests indicate that the method improves the accuracy of treatment delivery, especially for long and narrow targets. Rotational corrections about an axis perpendicular to the transverse plane continue to be implemented easily in tomotherapy by adjustment of the initial gantry angle

  5. Correction method for the error of diamond tool's radius in ultra-precision cutting

    Science.gov (United States)

    Wang, Yi; Yu, Jing-chi

    2010-10-01

    Compensation for the error of the diamond tool's cutting edge is a bottleneck technology that hinders the direct formation of high-accuracy aspheric surfaces after single-point diamond turning. Traditionally, compensation was done according to measurement results from a profilometer, which required a long measurement time and led to low processing efficiency. A new compensation method is put forward in this article, in which the correction of the error of the diamond tool's cutting edge is done according to measurement results from a digital interferometer. First, the detailed theoretical calculation related to the compensation method is derived. Then, the effect after compensation is simulated by computer. Finally, a φ50 mm workpiece was diamond turned and then correction turned on a Nanotech 250. The tested surface achieved a high shape accuracy of PV 0.137λ and rms = 0.011λ, which confirms that the new compensation method agrees with the predictive analysis and offers high accuracy and fast error convergence.

  6. Laser frequency locking based on the normal and abnormal saturated absorption spectroscopy of 87Rb

    International Nuclear Information System (INIS)

    Wan Jian-Hong; Liu Chang; Wang Yan-Hui

    2016-01-01

    We present a practical method to avoid the mis-locking phenomenon in saturated-absorption-spectrum laser-frequency-locking systems and set up a simple theoretical model to explain the abnormal saturated absorption spectrum. The method uses the normal and abnormal saturated absorption spectra of the same transition, 5²S₁/₂, F = 2 → 5²P₃/₂, F′ = 3, of the ⁸⁷Rb D₂ resonance line. After subtracting these two signals with the help of electronics, we can obtain a spectrum with a single peak to lock the laser. In our experiment, we use the normal and inverse signals of the 5²S₁/₂, F = 2 → 5²P₃/₂, F′ = 3 saturated absorption transition of the ⁸⁷Rb D₂ resonance line to lock a 780-nm distributed feedback (DFB) diode laser. This method improves the long-term locking performance and is suitable for other kinds of diode lasers. (paper)

  7. Lipid Based Formulations of Biopharmaceutics Classification System (BCS) Class II Drugs: Strategy, Formulations, Methods and Saturation

    Directory of Open Access Journals (Sweden)

    Šoltýsová I.

    2016-12-01

    Full Text Available Active ingredients in pharmaceuticals differ in their physico-chemical properties, and their bioavailability therefore varies. The most frequently used and most convenient way of administering medicines is oral; however, many drugs are poorly soluble in water and are thus not sufficiently effective or suitable for such administration. For this reason, a system of lipid based formulations (LBF) was developed. A series of formulations was prepared and tested in water and biorelevant media. On the basis of selection criteria, formulations with the best emulsification potential, good dispersion in the environment and physical stability were selected. Samples of structurally different drugs included in Class II of the Biopharmaceutics Classification System (BCS) were obtained, namely Griseofulvin, Glibenclamide, Carbamazepine, Haloperidol, Itraconazol, Triclosan, Praziquantel and Rifaximin, for testing of maximal saturation in formulations prepared from commercially available excipients. Methods were developed for preparation of the formulations, observation and description of emulsification, determination of the maximum solubility of the drug samples in the respective formulation, and subsequent analysis. Saturation of the formulations with the drugs showed that the formulations 80 % XA and 20 % Xh, and 35 % XF and 65 % Xh, were best able to dissolve the drugs, which supports the hypothesis that it is desirable to identify a limited series of formulations that could be generally applied for this purpose.

  8. Development and evaluation of attenuation and scatter correction techniques for SPECT using the Monte Carlo method

    International Nuclear Information System (INIS)

    Ljungberg, M.

    1990-05-01

    Quantitative scintigraphic images, obtained with NaI(Tl) scintillation cameras, are limited by photon attenuation and the contribution from scattered photons. A Monte Carlo program was developed in order to evaluate these effects. Simple source-phantom geometries and more complex nonhomogeneous cases can be simulated. Comparisons with experimental data for both homogeneous and nonhomogeneous regions and with published results have shown good agreement. The usefulness for simulation of parameters in scintillation camera systems, stationary as well as SPECT systems, has also been demonstrated. An attenuation correction method based on density maps and build-up functions has been developed. The maps were obtained from a transmission measurement using an external 57Co flood source, and the build-up was simulated by the Monte Carlo code. Two scatter correction methods, the dual-window method and the convolution-subtraction method, have been compared using the Monte Carlo method. The aim was to compare the estimated scatter with the true scatter in the photopeak window. It was concluded that accurate depth-dependent scatter functions are essential for a proper scatter correction. A new scatter and attenuation correction method has been developed based on scatter line-spread functions (SLSF) obtained for different depths and lateral positions in the phantom. An emission image is used to determine the source location in order to estimate the scatter in the photopeak window. Simulation studies of a clinically realistic source in different positions in cylindrical water phantoms were made for three photon energies. The SLSF correction method was also evaluated by simulation studies for (1) a myocardial source, (2) a uniform source in the lungs and (3) a tumour located in the lungs in a realistic, nonhomogeneous computer phantom. The results showed that quantitative images could be obtained in nonhomogeneous regions. (67 refs.)

  9. Nonlinear feedback control of chaotic pendulum in presence of saturation effect

    Energy Technology Data Exchange (ETDEWEB)

    Alasty, Aria [Center of Excellence in Design, Robotics, and Automation (CEDRA), Department of Mechanical Engineering, Sharif University of Technology, Azadi Avenue, Tehran 1458889694 (Iran, Islamic Republic of)]. E-mail: aalasti@sharif.edu; Salarieh, Hassan [Center of Excellence in Design, Robotics, and Automation (CEDRA), Department of Mechanical Engineering, Sharif University of Technology, Azadi Avenue, Tehran 1458889694 (Iran, Islamic Republic of)]. E-mail: salarieh@mehr.sharif.edu

    2007-01-15

    In the present paper, feedback linearization control is applied to control a chaotic pendulum system. Tracking of desired periodic orbits such as period-one, period-two, and period-four orbits is efficiently achieved. Due to the presence of saturation in real-world control signals, the stability of the controller is investigated in the presence of saturation and sufficient stability conditions are obtained. First, the feedback linearization control law is designed; then, to avoid the singularity condition, a saturating constraint is applied to the control signal. The stability conditions are obtained analytically and must be investigated numerically for each specific case. Simulation results show the effectiveness and robustness of the proposed controller. A major advantage of this method is its shorter chaotic transient time in comparison with other methods such as the OGY and Pyragas controllers.

  10. A novel scene-based non-uniformity correction method for SWIR push-broom hyperspectral sensors

    Science.gov (United States)

    Hu, Bin-Lin; Hao, Shi-Jing; Sun, De-Xin; Liu, Yin-Nian

    2017-09-01

    A novel scene-based non-uniformity correction (NUC) method for short-wavelength infrared (SWIR) push-broom hyperspectral sensors is proposed and evaluated. This method relies on the assumption that, for each band, there will be ground objects with similar reflectance that form uniform regions when a sufficient number of scanning lines are acquired. The uniform regions are extracted automatically through a sorting algorithm and are used to compute the corresponding NUC coefficients. SWIR hyperspectral data from an airborne experiment are used to verify and evaluate the proposed method, and the results show that stripes in the scenes have been well corrected without any significant information loss, with a residual non-uniformity of less than 0.5%. In addition, the proposed method is compared to two other common methods, and all are evaluated based on their adaptability to various scenes, non-uniformity, roughness and spectral fidelity. The proposed method shows strong adaptability, high accuracy and efficiency.
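    Scene-based NUC for a push-broom sensor boils down to estimating a gain and offset per detector element (image column) from scene statistics. The sketch below is an illustrative statistics-equalisation variant under that assumption; it is not the paper's sorting algorithm, and the percentile thresholds are placeholders.

```python
import numpy as np

def scene_based_nuc_gains_offsets(band_cube, low_pct=20, high_pct=80):
    """Simplified scene-based non-uniformity estimate for one band:
    band_cube has shape (lines, samples), each sample column corresponding to
    one detector element. Per-column percentiles over many lines stand in for
    the 'uniform regions' used to derive per-detector gain and offset."""
    col_low = np.percentile(band_cube, low_pct, axis=0)
    col_high = np.percentile(band_cube, high_pct, axis=0)
    ref_low, ref_high = col_low.mean(), col_high.mean()
    gain = (ref_high - ref_low) / np.maximum(col_high - col_low, 1e-6)
    offset = ref_low - gain * col_low
    return gain, offset

def apply_nuc(band_cube, gain, offset):
    """Apply the per-detector gain/offset correction to remove column stripes."""
    return band_cube * gain + offset
```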

  11. Nonuniform Illumination Correction Algorithm for Underwater Images Using Maximum Likelihood Estimation Method

    Directory of Open Access Journals (Sweden)

    Sonali Sachin Sankpal

    2016-01-01

    Full Text Available Scattering and absorption of light is the main reason for limited visibility in water; the suspended particles and dissolved chemical compounds in water are responsible for this scattering and absorption. The limited visibility in water results in degradation of underwater images. Visibility can be increased by using an artificial light source in the underwater imaging system, but the artificial light illuminates the scene in a nonuniform fashion, producing a bright spot at the center with a dark region at the surroundings. In some cases the imaging system itself creates dark regions in the image by casting shadows on the objects. The problem of nonuniform illumination is neglected in most image enhancement techniques for underwater images, and very few methods show results on color images. This paper suggests a method for nonuniform illumination correction for underwater images. The method assumes that natural underwater images are Rayleigh distributed and uses maximum likelihood estimation of the scale parameter to map the image distribution to a Rayleigh distribution. The method is compared with traditional methods for nonuniform illumination correction using no-reference image quality metrics such as average luminance, average information entropy, normalized neighborhood function, average contrast, and comprehensive assessment function.
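    The maximum likelihood estimate of the Rayleigh scale parameter referenced above has a simple closed form. A minimal sketch follows; the tile-based rescaling in the usage comment is an illustrative way to apply it, not necessarily the mapping used in the paper.

```python
import numpy as np

def rayleigh_scale_mle(pixels):
    """Maximum likelihood estimate of the Rayleigh scale parameter:
    sigma_hat = sqrt( sum(x_i^2) / (2N) ) for non-negative intensities x_i."""
    x = np.asarray(pixels, dtype=float).ravel()
    return np.sqrt(np.sum(x ** 2) / (2.0 * x.size))

# usage sketch: estimate the local scale on overlapping tiles and rescale each
# tile toward the global scale to flatten the nonuniform illumination
# sigma_global = rayleigh_scale_mle(image)
# tile_corrected = tile * sigma_global / rayleigh_scale_mle(tile)
```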

  12. Numerical Study of Frequency-dependent Seismoelectric Coupling in Partially-saturated Porous Media

    Directory of Open Access Journals (Sweden)

    Djuraev Ulugbek

    2017-01-01

    Full Text Available The seismoelectric phenomenon associated with the propagation of seismic waves in fluid-saturated porous media has been studied for many decades. The method has great potential to monitor subsurface fluid saturation changes associated with the production of hydrocarbons. The frequency of the seismic source has a significant impact on measurement of the seismoelectric effects. In this paper, the effects of seismic wave frequency and water saturation on the seismoelectric response of a partially-saturated porous medium are studied numerically. The conversion of the seismic wave to an electromagnetic wave was modelled by extending the theoretically developed seismoelectric coupling coefficient equation. We assumed constant values of pore radius and zeta-potential of 80 micrometers and 48 microvolts, respectively. Our calculations of the coupling coefficient were conducted at various water saturation values in the frequency range of 10 kHz to 150 kHz. The results show that the seismoelectric coupling is frequency-dependent and decreases exponentially as frequency increases. A similar trend is seen when water saturation is varied at different frequencies. However, when water saturation is less than about 0.6, the effect of frequency is significant; when the water saturation is greater than 0.6, the coupling coefficient shows a monotonic trend as water saturation is increased at constant frequency.

  13. Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method

    International Nuclear Information System (INIS)

    Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin

    2015-01-01

    Powerful nondestructive characteristics are attracting more and more research into computed tomography (CT) for dimensional metrology, which offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty severely limit the further utilization of CT for dimensional metrology due to many factors, among which the beam hardening (BH) effect plays a vital role. This paper focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a penalty term is added to the cost function, enabling more accurate measurement results to be obtained by a simple global threshold method. The proposed method is efficient, and especially suited to the case where there is a large difference in gray value between material and background. Different spheres with known diameters are used to verify the accuracy of dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is of general feasibility. (paper)
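    The overall workflow described above (parametric projection correction, reconstruction, gray-entropy minimisation) can be sketched compactly. The version below uses a single-parameter linearisation and no penalty term, purely for illustration; the `reconstruct` operator (e.g. filtered back-projection) is assumed to be supplied externally, and the correction form is not necessarily the exact exponential model of the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gray_entropy(volume, n_bins=256):
    """Shannon entropy of the gray-value histogram of a reconstructed volume."""
    hist, _ = np.histogram(volume, bins=n_bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return -np.sum(p * np.log(p))

def correct_projections(projections, gamma):
    """Illustrative single-parameter beam-hardening linearisation of the
    log-projections: p_corrected = (exp(gamma * p) - 1) / gamma, which tends
    to p as gamma -> 0."""
    if abs(gamma) < 1e-9:
        return projections
    return (np.exp(gamma * projections) - 1.0) / gamma

def fit_bh_parameter(projections, reconstruct, bounds=(0.0, 1.0)):
    """Pick the correction parameter that minimises the gray entropy of the
    reconstruction; `reconstruct` maps projections to a volume."""
    cost = lambda g: gray_entropy(reconstruct(correct_projections(projections, g)))
    return minimize_scalar(cost, bounds=bounds, method="bounded").x
```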

  14. State-Feedback Control for Fractional-Order Nonlinear Systems Subject to Input Saturation

    Directory of Open Access Journals (Sweden)

    Junhai Luo

    2014-01-01

    Full Text Available We give a state-feedback control method for fractional-order nonlinear systems subject to input saturation. First, a sufficient condition is derived for the asymptotic stability of a class of fractional-order nonlinear systems. Then, based on the Gronwall-Bellman lemma and a sector-bounded condition on the saturation function, a linear state-feedback controller is designed. Finally, two simulation examples are presented to show the validity of the proposed method.

  15. Arterial blood oxygen saturation during blood pressure cuff-induced hypoperfusion

    International Nuclear Information System (INIS)

    Kyriacou, P A; Shafqat, K; Pal, S K

    2007-01-01

    Pulse oximetry has been one of the most significant technological advances in clinical monitoring in the last two decades. Pulse oximetry is a non-invasive photometric technique that provides information about the arterial blood oxygen saturation (SpO2) and heart rate, and has widespread clinical applications. When peripheral perfusion is poor, as in states of hypovolaemia, hypothermia and vasoconstriction, oxygenation readings become unreliable or cease. The problem arises because conventional pulse oximetry sensors must be attached to the most peripheral parts of the body, such as finger, ear or toe, where pulsatile flow is most easily compromised. Pulse oximeters estimate arterial oxygen saturation by shining light at two different wavelengths, red and infrared, through vascular tissue. In this method the ac pulsatile photoplethysmographic (PPG) signal associated with cardiac contraction is assumed to be attributable solely to the arterial blood component. The amplitudes of the red and infrared ac PPG signals are sensitive to changes in arterial oxygen saturation because of differences in the light absorption of oxygenated and deoxygenated haemoglobin at these two wavelengths. From the ratios of these amplitudes, and the corresponding dc photoplethysmographic components, arterial blood oxygen saturation (SpO2) is estimated. Hence, the technique of pulse oximetry relies on the presence of adequate peripheral arterial pulsations, which are detected as photoplethysmographic (PPG) signals. The aim of this study was to investigate the effect of pressure cuff-induced hypoperfusion on photoplethysmographic signals and arterial blood oxygen saturation using a custom made finger blood oxygen saturation PPG/SpO2 sensor and a commercial finger pulse oximeter. Blood oxygen saturation values from the custom oxygen saturation sensor and a commercial finger oxygen saturation sensor were recorded from 14 healthy volunteers at various induced brachial pressures. Both pulse
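    The ratio-of-ratios calculation described above is straightforward to state in code. A minimal sketch follows, assuming the AC and DC components of the red and infrared PPG signals have already been extracted; the linear calibration coefficients are typical illustrative values, since real devices use empirically calibrated curves.

```python
def spo2_ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir,
                         cal_a=110.0, cal_b=25.0):
    """Estimate SpO2 (%) from the 'ratio of ratios'
        R = (AC_red / DC_red) / (AC_ir / DC_ir)
    using a linear empirical calibration SpO2 = a - b * R.
    cal_a and cal_b are illustrative, not a specific device's calibration."""
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    return max(0.0, min(100.0, cal_a - cal_b * r))

# usage sketch: a ratio near 0.5 maps to roughly 97-98% saturation here
# print(spo2_ratio_of_ratios(0.02, 1.0, 0.04, 1.0))
```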

  16. Arterial blood oxygen saturation during blood pressure cuff-induced hypoperfusion

    Science.gov (United States)

    Kyriacou, P. A.; Shafqat, K.; Pal, S. K.

    2007-10-01

    Pulse oximetry has been one of the most significant technological advances in clinical monitoring in the last two decades. Pulse oximetry is a non-invasive photometric technique that provides information about the arterial blood oxygen saturation (SpO2) and heart rate, and has widespread clinical applications. When peripheral perfusion is poor, as in states of hypovolaemia, hypothermia and vasoconstriction, oxygenation readings become unreliable or cease. The problem arises because conventional pulse oximetry sensors must be attached to the most peripheral parts of the body, such as finger, ear or toe, where pulsatile flow is most easily compromised. Pulse oximeters estimate arterial oxygen saturation by shining light at two different wavelengths, red and infrared, through vascular tissue. In this method the ac pulsatile photoplethysmographic (PPG) signal associated with cardiac contraction is assumed to be attributable solely to the arterial blood component. The amplitudes of the red and infrared ac PPG signals are sensitive to changes in arterial oxygen saturation because of differences in the light absorption of oxygenated and deoxygenated haemoglobin at these two wavelengths. From the ratios of these amplitudes, and the corresponding dc photoplethysmographic components, arterial blood oxygen saturation (SpO2) is estimated. Hence, the technique of pulse oximetry relies on the presence of adequate peripheral arterial pulsations, which are detected as photoplethysmographic (PPG) signals. The aim of this study was to investigate the effect of pressure cuff-induced hypoperfusion on photoplethysmographic signals and arterial blood oxygen saturation using a custom made finger blood oxygen saturation PPG/SpO2 sensor and a commercial finger pulse oximeter. Blood oxygen saturation values from the custom oxygen saturation sensor and a commercial finger oxygen saturation sensor were recorded from 14 healthy volunteers at various induced brachial pressures. Both pulse

  17. Method of correcting eddy current magnetic fields in particle accelerator vacuum chambers

    Science.gov (United States)

    Danby, Gordon T.; Jackson, John W.

    1991-01-01

    A method for correcting magnetic field aberrations produced by eddy currents induced in a particle accelerator vacuum chamber housing is provided wherein correction windings are attached to selected positions on the housing and the windings are energized by transformer action from secondary coils, which coils are inductively coupled to the poles of electro-magnets that are powered to confine the charged particle beam within a desired orbit as the charged particles are accelerated through the vacuum chamber by a particle-driving rf field. The power inductively coupled to the secondary coils varies as a function of variations in the power supplied by the particle-accelerating rf field to a beam of particles accelerated through the vacuum chamber, so the current in the energized correction coils is effective to cancel eddy current flux fields that would otherwise be induced in the vacuum chamber by power variations in the particle beam.

  18. Consistent calculation of the polarization electric dipole moment by the shell-correction method

    International Nuclear Information System (INIS)

    Denisov, V.Yu.

    1992-01-01

    Macroscopic calculations of the polarization electric dipole moment which arises in nuclei with an octupole deformation are discussed in detail. This dipole moment is shown to depend on the position of the center of gravity. The conditions of consistency of the radii of the proton and neutron potentials and the radii of the proton and neutron surfaces, respectively, are discussed. These conditions must be incorporated in a shell-correction calculation of this dipole moment. A correct calculation of this moment by the shell-correction method is carried out. Dipole transitions between (on the one hand) levels belonging to an octupole vibrational band and (on the other) the ground state in rare-earth nuclei with a large quadrupole deformation are studied. 19 refs., 3 figs

  19. Thermophysical properties of a fluid-saturated sandstone

    International Nuclear Information System (INIS)

    Abid, Muhammad; Hammerschmidt, Ulf; Koehler, Juergen

    2014-01-01

    Thermophysical properties of a fluid-saturated stone, obtained using the transient hot-bridge (THB) technique at ambient conditions, are presented. Measurements are performed successively after filling the porous stone structure first with six different fluids of distinct thermal conductivities and then with six different gases, also of different thermal conductivities. Variations in thermal conductivity, thermal diffusivity and volumetric specific heat due to liquid or gas saturation are discussed. The internal pore structure of the stone is studied using Scanning Electron Microscopy (SEM), Mercury Intrusion Porosimetry (MIP) and other standardized density methods at ambient conditions. The effect of interstitial pore pressure on the thermophysical properties is also discussed in the context of the Knudsen effect. (authors)

  20. Assessment of Atmospheric Correction Methods for Sentinel-2 MSI Images Applied to Amazon Floodplain Lakes

    Directory of Open Access Journals (Sweden)

    Vitor Souza Martins

    2017-03-01

    Full Text Available Satellite data provide the only viable means for extensive monitoring of remote and large freshwater systems, such as the Amazon floodplain lakes. However, an accurate atmospheric correction is required to retrieve water constituents based on surface water reflectance (R_W). In this paper, we assessed three atmospheric correction methods (Second Simulation of a Satellite Signal in the Solar Spectrum (6SV), ACOLITE and Sen2Cor) applied to an image acquired by the MultiSpectral Instrument (MSI) on board the European Space Agency's Sentinel-2A platform, using concurrent in-situ measurements over four Amazon floodplain lakes in Brazil. In addition, we evaluated the correction of forest adjacency effects based on the linear spectral unmixing model, and performed a temporal evaluation of atmospheric constituents from Multi-Angle Implementation of Atmospheric Correction (MAIAC) products. The validation of MAIAC aerosol optical depth (AOD) indicated satisfactory retrievals over the Amazon region, with a correlation coefficient (R) of ~0.7 and 0.85 for Terra and Aqua products, respectively. The seasonal distribution of the cloud cover and AOD revealed a contrast between the first and second half of the year in the study area. Furthermore, simulation of top-of-atmosphere (TOA) reflectance showed a critical contribution of atmospheric effects (>50%) to all spectral bands, especially the deep blue (92%–96%) and blue (84%–92%) bands. The atmospheric correction results for the visible bands illustrate the limitation of the methods over dark lakes (R_W < 1%), and a better match of the R_W shape compared with in-situ measurements over turbid lakes, although the accuracy varied depending on the spectral bands and methods. Particularly above 705 nm, R_W was highly affected by Amazon forest adjacency, and the proposed adjacency effect correction minimized the spectral distortions in R_W (RMSE < 0.006). Finally, an extensive validation of the methods is required for

  1. SATURATED ZONE IN-SITU TESTING

    Energy Technology Data Exchange (ETDEWEB)

    P.W. REIMUS

    2004-11-08

    The purpose of this scientific analysis is to document the results and interpretations of field experiments that test and validate conceptual flow and radionuclide transport models in the saturated zone (SZ) near Yucca Mountain, Nevada. The test interpretations provide estimates of flow and transport parameters used in the development of parameter distributions for total system performance assessment (TSPA) calculations. These parameter distributions are documented in ''Site-Scale Saturated Zone Flow Model (BSC 2004 [DIRS 170037]), Site-Scale Saturated Zone Transport'' (BSC 2004 [DIRS 170036]), Saturated Zone Colloid Transport (BSC 2004 [DIRS 170006]), and ''Saturated Zone Flow and Transport Model Abstraction'' (BSC 2004 [DIRS 170042]). Specifically, this scientific analysis contributes the following to the assessment of the capability of the SZ to serve as part of a natural barrier for waste isolation for the Yucca Mountain repository system: (1) The bases for selection of conceptual flow and transport models in the saturated volcanics and the saturated alluvium located near Yucca Mountain. (2) Results and interpretations of hydraulic and tracer tests conducted in saturated fractured volcanics at the C-wells complex near Yucca Mountain. The test interpretations include estimates of hydraulic conductivities, anisotropy in hydraulic conductivity, storativities, total porosities, effective porosities, longitudinal dispersivities, matrix diffusion mass transfer coefficients, matrix diffusion coefficients, fracture apertures, and colloid transport parameters. (3) Results and interpretations of hydraulic and tracer tests conducted in saturated alluvium at the Alluvial Testing Complex (ATC) located at the southwestern corner of the Nevada Test Site (NTS). The test interpretations include estimates of hydraulic conductivities, storativities, total porosities, effective porosities, longitudinal dispersivities, matrix diffusion mass

  2. SATURATED ZONE IN-SITU TESTING

    International Nuclear Information System (INIS)

    REIMUS, P.W.

    2004-01-01

    The purpose of this scientific analysis is to document the results and interpretations of field experiments that test and validate conceptual flow and radionuclide transport models in the saturated zone (SZ) near Yucca Mountain, Nevada. The test interpretations provide estimates of flow and transport parameters used in the development of parameter distributions for total system performance assessment (TSPA) calculations. These parameter distributions are documented in ''Site-Scale Saturated Zone Flow Model (BSC 2004 [DIRS 170037]), Site-Scale Saturated Zone Transport'' (BSC 2004 [DIRS 170036]), Saturated Zone Colloid Transport (BSC 2004 [DIRS 170006]), and ''Saturated Zone Flow and Transport Model Abstraction'' (BSC 2004 [DIRS 170042]). Specifically, this scientific analysis contributes the following to the assessment of the capability of the SZ to serve as part of a natural barrier for waste isolation for the Yucca Mountain repository system: (1) The bases for selection of conceptual flow and transport models in the saturated volcanics and the saturated alluvium located near Yucca Mountain. (2) Results and interpretations of hydraulic and tracer tests conducted in saturated fractured volcanics at the C-wells complex near Yucca Mountain. The test interpretations include estimates of hydraulic conductivities, anisotropy in hydraulic conductivity, storativities, total porosities, effective porosities, longitudinal dispersivities, matrix diffusion mass transfer coefficients, matrix diffusion coefficients, fracture apertures, and colloid transport parameters. (3) Results and interpretations of hydraulic and tracer tests conducted in saturated alluvium at the Alluvial Testing Complex (ATC) located at the southwestern corner of the Nevada Test Site (NTS). The test interpretations include estimates of hydraulic conductivities, storativities, total porosities, effective porosities, longitudinal dispersivities, matrix diffusion mass transfer coefficients, and colloid

  3. Automatic NAA. Saturation activities

    International Nuclear Information System (INIS)

    Westphal, G.P.; Grass, F.; Kuhnert, M.

    2008-01-01

    A system for Automatic NAA is based on a list of specific saturation activities determined for one irradiation position at a given neutron flux and a single detector geometry. Originally compiled from measurements of standard reference materials, the list may also be extended by calculating saturation activities from k0 and Q0 factors and the f and α values of the irradiation position. A systematic improvement of the SRM approach is currently being performed by pseudo-cyclic activation analysis to reduce counting errors. From these measurements, the list of saturation activities is recalculated in an automatic procedure. (author)
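
    To illustrate the kind of quantity tabulated in such a list, the sketch below converts a measured gamma-peak area into a specific saturation activity using the standard NAA saturation, decay and counting factors. It is a generic illustration under assumed notation and parameter names, not the authors' software.

```python
import math

def specific_saturation_activity(net_peak_counts, t_irr, t_decay, t_count,
                                 half_life, efficiency, gamma_abundance, mass_g):
    """Specific saturation activity (count rate per gram at saturation).

    Standard NAA correction factors: S accounts for activity build-up during
    irradiation, D for decay between irradiation and counting, C for decay
    during the counting interval. All times must share the same unit.
    """
    lam = math.log(2.0) / half_life
    S = 1.0 - math.exp(-lam * t_irr)                        # saturation factor
    D = math.exp(-lam * t_decay)                            # decay factor
    C = (1.0 - math.exp(-lam * t_count)) / (lam * t_count)  # counting factor
    count_rate = net_peak_counts / t_count
    return count_rate / (S * D * C * efficiency * gamma_abundance * mass_g)
```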

  4. Correlation between Oxygen Saturation and Hemoglobin and Hematokrit Levels in Tetralogy of Fallot Patients

    Directory of Open Access Journals (Sweden)

    Farhatul Inayah Adiputri

    2016-03-01

    Background: Hemoglobin and hematocrit levels increase in Tetralogy of Fallot (TOF), but the oxygen saturation declines. Reduced hemoglobin in circulating blood as a parameter of cyanosis does not indicate rising hemoglobin due to the 'not-working' hemoglobins that affect the oxygen saturation. Increasing hematocrit is the result of secondary erythrocytosis caused by a declining oxygen level in blood, which is related to the oxygen saturation. This study was conducted to find the correlation between oxygen saturation and hemoglobin and hematocrit levels in TOF patients. Methods: This study was undertaken at Dr. Hasan Sadikin General Hospital in the period of January 2011 to December 2012 using a cross-sectional analytic method with a total sampling technique. Inclusion criteria were medical records of TOF patients diagnosed based on echocardiography that included data on oxygen saturation, hemoglobin, and hematocrit. The exclusion criterion was a history of red blood transfusion. Results: Thirty medical records of TOF patients from Dr. Hasan Sadikin General Hospital Bandung were included in this study. Due to the skewed data distribution, the Spearman correlation test was used to analyze the data. There was a significant negative correlation between oxygen saturation and hematocrit level (r = -0.412; p = 0.024) and an insignificant correlation between oxygen saturation and hemoglobin (r = -0.329; p = 0.076). Conclusions: There is a weak negative correlation between oxygen saturation and hematocrit levels.
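
    As a minimal illustration of the statistical step described above, the following sketch computes a Spearman rank correlation with SciPy; the paired values are hypothetical, not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements (oxygen saturation in %, hematocrit in %)
spo2 = np.array([78, 82, 85, 88, 90, 72, 80, 86])
hematocrit = np.array([62, 58, 55, 50, 47, 66, 60, 52])

rho, p_value = stats.spearmanr(spo2, hematocrit)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
```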

  5. Methods of orbit correction system optimization

    International Nuclear Information System (INIS)

    Chao, Yu-Chiu.

    1997-01-01

    Extracting optimal performance out of an orbit correction system is an important component of accelerator design and evaluation. The question of effectiveness vs. economy, however, is not always easily tractable. This is especially true in cases where betatron function magnitude and phase advance do not have smooth or periodic dependencies on the physical distance. In this report a program is presented using linear algebraic techniques to address this problem. A systematic recipe is given, supported with quantitative criteria, for arriving at an orbit correction system design with the optimal balance between performance and economy. The orbit referred to in this context can be generalized to include angle, path length, orbit effects on the optical transfer matrix, and simultaneous effects on multiple pass orbits
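
    A common linear-algebraic formulation of this problem is the least-squares/SVD solution for corrector strengths sketched below. This is a generic illustration of the technique, not the program described in the report; the response-matrix convention is an assumption.

```python
import numpy as np

def corrector_strengths(response, orbit, n_sv=None):
    """Least-squares corrector kicks minimizing ||response @ kicks + orbit||.

    response[i, j]: orbit shift at monitor i per unit kick of corrector j
    (assumed convention). Truncating small singular values (n_sv) trades
    correction performance against corrector strength, i.e. economy.
    """
    U, s, Vt = np.linalg.svd(response, full_matrices=False)
    if n_sv is not None:
        U, s, Vt = U[:, :n_sv], s[:n_sv], Vt[:n_sv, :]
    # Pseudo-inverse solution for the kicks that best cancel the measured orbit
    return -Vt.T @ ((U.T @ orbit) / s)
```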

  6. Hydrological modeling as an evaluation tool of EURO-CORDEX climate projections and bias correction methods

    Science.gov (United States)

    Hakala, Kirsti; Addor, Nans; Seibert, Jan

    2017-04-01

    Streamflow stemming from Switzerland's mountainous landscape will be influenced by climate change, which will pose significant challenges to the water management and policy sector. In climate change impact research, the determination of future streamflow is impeded by different sources of uncertainty, which propagate through the model chain. In this research, we explicitly considered the following sources of uncertainty: (1) climate models, (2) downscaling of the climate projections to the catchment scale, (3) bias correction method and (4) parameterization of the hydrological model. We utilize climate projections at 0.11 degree (approximately 12.5 km) resolution from the EURO-CORDEX project, which are the most recent climate projections for the European domain. EURO-CORDEX comprises regional climate model (RCM) simulations, which have been downscaled from global climate models (GCMs) from the CMIP5 archive using both dynamical and statistical techniques. Uncertainties are explored by applying a modeling chain involving 14 GCM-RCMs to ten Swiss catchments. We utilize the rainfall-runoff model HBV Light, which has been widely used in operational hydrological forecasting. The Lindström measure, a combination of model efficiency and volume error, was used as an objective function to calibrate HBV Light. The ten best parameter sets are then obtained by calibrating with the genetic algorithm and Powell optimization (GAP) method. The GAP optimization method is based on the evolution of parameter sets and works by selecting and recombining high-performing parameter sets. Once HBV is calibrated, we then perform a quantitative comparison of the influence of biases inherited from climate model simulations with the biases stemming from the hydrological model. The evaluation is conducted over two time periods: i) 1980-2009, to characterize the simulation realism under the current climate, and ii) 2070-2099, to identify the magnitude of the projected change of
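
    For reference, a sketch of the Lindström measure as it is commonly defined (Nash-Sutcliffe efficiency penalized by the relative volume error, with a weight of about 0.1); the exact settings used in the study may differ.

```python
import numpy as np

def lindstrom_measure(observed, simulated, w=0.1):
    """Lindström measure: NSE minus w times the absolute relative volume error."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    rel_volume_error = abs(np.sum(sim - obs)) / np.sum(obs)
    return nse - w * rel_volume_error
```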

  7. Gynecomastia: the horizontal ellipse method for its correction.

    Science.gov (United States)

    Gheita, Alaa

    2008-09-01

    Gynecomastia is an extremely disturbing deformity affecting males, especially when it occurs in young subjects. Such subjects generally have no hormonal anomalies and thus either liposuction or surgical intervention, depending on the type and consistency of the breast, is required for treatment. If there is slight hypertrophy alone with no ptosis, then subcutaneous mastectomy is usually sufficient. However, when hypertrophy and/or ptosis are present, then corrective surgery on the skin and breast is mandatory to obtain a good cosmetic result. Most of the procedures suggested for reduction of the male breast are usually derived from reduction mammaplasty methods used for females. They have some disadvantages, mainly the multiple scars, which remain apparent in males, unusual shape, and the lack of symmetry with regard to the size of both breasts and/or the nipple position. The author presents a new, simple method that has proven superior to any previous method described so far. It consists of a horizontal excision ellipse of the breast's redundant skin and deep excess tissue and a superior pedicle flap carrying the areola-nipple complex to its new site on the chest wall. The method described yields excellent shape, symmetry, and minimal scars. A new method for treating gynecomastia is described in detail, its early and late operative results are shown, and its advantages are discussed.

  8. Compensation of Actuator’s Saturation by Using Fuzzy Logic and Imperialist Competitive Algorithm in a System with PID Controller

    Directory of Open Access Journals (Sweden)

    Abbas Ali Zamani

    2012-07-01

    Physical systems always include constraints and limits. In control systems, these limits and constraints usually appear as temperature and pressure limits or pump capacity. One of the limits in systems with a PID controller is associated with actuator saturation. When the actuator saturates, the controller's output and the plant's input differ, the controller output no longer drives the system, and the controller states are not updated correctly, which makes the system response undesirable. In this paper, by adding a fuzzy compensator whose parameters are tuned using the imperialist competitive algorithm, actuator saturation is prevented and important characteristics of the system response, such as settling time and overshoot, are improved.

  9. Adaptive fringe-pattern projection for image saturation avoidance in 3D surface-shape measurement.

    Science.gov (United States)

    Li, Dong; Kofman, Jonathan

    2014-04-21

    In fringe-projection 3D surface-shape measurement, image saturation results in incorrect intensities in captured images of fringe patterns, leading to phase and measurement errors. An adaptive fringe-pattern projection (AFPP) method was developed to adapt the maximum input gray level in projected fringe patterns to the local reflectivity of an object surface being measured. The AFPP method demonstrated improved 3D measurement accuracy by avoiding image saturation in highly-reflective surface regions while maintaining high intensity modulation across the entire surface. The AFPP method can avoid image saturation and handle varying surface reflectivity, using only two prior rounds of fringe-pattern projection and image capture to generate the adapted fringe patterns.

  10. STARL -- a Program to Correct CCD Image Defects

    Science.gov (United States)

    Narbutis, D.; Vanagas, R.; Vansevičius, V.

    We present a program tool, STARL, designed for automatic detection and correction of various defects in CCD images. It uses a genetic algorithm for deblending and restoring overlapping saturated stars in crowded stellar fields. Using Subaru Telescope Suprime-Cam images, we demonstrate that the program can be implemented in wide-field survey data processing pipelines for the production of high-quality color mosaics. The source code and examples are available at the STARL website.

  11. Development of synchronous generator saturation model from steady-state operating data

    Energy Technology Data Exchange (ETDEWEB)

    Jadric, Martin; Despalatovic, Marin; Terzic, Bozo [FESB University of Split, Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, Split (Croatia)

    2010-11-15

    A new method to estimate and model the saturated synchronous reactances of hydroturbine generators from operating data is presented. For the estimation process, measurements of only the generator steady-state variables are required. First, using a specific procedure, the field-to-armature turns ratio is estimated from steady-state variables measured at constant power generation and various excitation conditions. Subsequently, for each set of steady-state operating data, the saturated synchronous reactances are identified. Fitting surfaces, defined as polynomial functions in two variables, are then used to model these saturated reactances. It is shown that simpler polynomial functions can be used to model saturation at steady state than under dynamic conditions. The developed steady-state model is validated with measurements performed on a 34 MVA hydroturbine generator. (author)
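
    The fitting-surface idea can be sketched as an ordinary least-squares fit of a two-variable polynomial, as below; the choice of operating variables and polynomial degree is illustrative, not the authors' implementation.

```python
import numpy as np

def fit_poly_surface(x, y, z, deg=2):
    """Fit z ~ sum_{i+j<=deg} c_ij * x**i * y**j by linear least squares.

    x, y: two steady-state operating variables (e.g. loading and excitation,
    an assumed choice); z: the identified saturated reactance at each point.
    """
    terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return terms, coef

def eval_poly_surface(terms, coef, x, y):
    """Evaluate the fitted surface at new operating points."""
    return sum(c * x**i * y**j for (i, j), c in zip(terms, coef))
```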

  12. Calculation of heat exchangers with saturated and underheated superfluid helium (He-2)

    International Nuclear Information System (INIS)

    Belyakov, V.P.; Shaposhnikov, V.A.; Budrik, V.V.; Volkova, N.M.

    1986-01-01

    A calculation technique for heat exchangers with saturated and subcooled (underheated) He-2 under conditions of natural internal convection and of forced convection is presented. The following variants of heat exchangers are considered: inside a bath with saturated He-2, a tube is placed along which a subcooled He-2 flow moves at a constant rate; inside a bath with subcooled He-2, a tube is placed, both ends of which are in a bath with saturated He-2; inside a bath with saturated He-2, a tube is placed, both ends of which are in a bath with subcooled He-2. For all cases, examples of calculation and experimental data from heat exchanger tests are presented. The developed methods for calculating heat exchangers with saturated and subcooled He-2 make it possible to design and create superfluid helium cryostatting systems.

  13. Correction for tissue attenuation in radionuclide gastric emptying studies: a comparison of a lateral image method and a geometric mean method

    Energy Technology Data Exchange (ETDEWEB)

    Collins, P.J.; Chatterton, B.E. (Royal Adelaide Hospital (Australia)); Horowitz, M.; Shearman, D.J.C. (Adelaide Univ. (Australia). Dept. of Medicine)

    1984-08-01

    Variation in depth of radionuclide within the stomach may result in significant errors in the measurement of gastric emptying if no attempt is made to correct for gamma-ray attenuation by the patient's tissues. A method of attenuation correction, which uses a single posteriorly located scintillation camera and correction factors derived from a lateral image of the stomach, was compared with a two-camera geometric mean method, in phantom studies and in five volunteer subjects. A meal of 100 g of ground beef containing 99mTc-chicken liver, and 150 ml of water, was used in the in vivo studies. In all subjects the geometric mean data showed that solid food emptied in two phases: an initial lag period, followed by a linear emptying phase. Using the geometric mean data as a standard, the anterior camera overestimated the 50% emptying time (T50) by an average of 15% (range 5-18) and the posterior camera underestimated this parameter by 15% (4-22). The posterior data, corrected for attenuation using the lateral image method, underestimated the T50 by 2% (-7 to +7). The difference in the distances of the proximal and distal stomach from the posterior detector was large in all subjects (mean 5.7 cm, range 3.9-7.4).
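
    The geometric-mean reference method can be sketched as follows; the region-of-interest counts are hypothetical, and the transmission and build-up factors needed for absolute quantification are omitted.

```python
import numpy as np

def geometric_mean_counts(anterior, posterior):
    """Depth-independent count estimate from conjugate anterior/posterior views.

    sqrt(A * P) largely cancels the depth dependence of attenuation for a
    source located between the two detector positions.
    """
    return np.sqrt(np.asarray(anterior, float) * np.asarray(posterior, float))

# Hypothetical decay-corrected stomach ROI counts at successive time points
anterior = np.array([52e3, 48e3, 40e3, 31e3])
posterior = np.array([38e3, 36e3, 31e3, 25e3])
gm = geometric_mean_counts(anterior, posterior)
retention = gm / gm[0]   # fraction of the meal remaining in the stomach
```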

  14. Fault tolerant control of systems with saturations

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2013-01-01

    This paper presents a framework for fault tolerant controllers (FTC) that includes input saturation. The controller architecture known from FTC, which is based on the Youla-Jabr-Bongiorno-Kucera (YJBK) parameterization, is extended to handle input saturation. Applying this controller architecture in connection with faulty systems including input saturation gives an additional YJBK transfer function related to the input saturation. In the fault-free case, this additional YJBK transfer function can be applied directly for optimizing the feedback loop around the input saturation. In the faulty case, the design problem is a mixed design problem involving both parametric faults and input saturation.

  15. Evaluation of the ICS and DEW scatter correction methods for low statistical content scans in 3D PET

    International Nuclear Information System (INIS)

    Sossi, V.; Oakes, T.R.; Ruth, T.J.

    1996-01-01

    The performance of the Integral Convolution and the Dual Energy Window scatter correction methods in 3D PET has been evaluated over a wide range of statistical content of acquired data (1M to 400M events). The order in which scatter correction and detector normalization should be applied has also been investigated. Phantom and human neuroreceptor studies were used with the following figures of merit: axial and radial uniformity, sinogram and image noise, contrast accuracy and contrast accuracy uniformity. Both scatter correction methods perform reliably in the range of number of events examined. Normalization applied after scatter correction yields better radial uniformity and fewer image artifacts.
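
    For orientation, a sketch of a Jaszczak-type dual energy window correction, in which photopeak scatter is approximated as a fixed fraction of the counts in a lower scatter window. The default k = 0.5 is the commonly quoted value and would be calibrated per scanner; this is not the evaluated implementation itself.

```python
import numpy as np

def dew_correct(photopeak_counts, scatter_window_counts, k=0.5):
    """Dual Energy Window correction: primary ~ photopeak - k * scatter window."""
    primary = np.asarray(photopeak_counts, float) - k * np.asarray(scatter_window_counts, float)
    return np.clip(primary, 0.0, None)   # suppress negative count estimates
```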

  16. Effect of partial saturation of bonded neo magnet on the automotive accessory motor

    Directory of Open Access Journals (Sweden)

    Nimitkumar K. Sheth

    2017-05-01

    In this paper the effects of using a partially magnetized bonded neo (NdFeB) magnet in an automotive accessory motor are presented. The potential reason for partial saturation of the bonded neo magnet is explained and a simple method to ensure saturation of the magnet is discussed. A magnetizing fixture design using 2-D finite element analysis (FEA) is presented. The motor performance at various magnet saturation levels has been estimated using the 2-D FEA. Details of the thermal demagnetization test adopted by the automotive industry are also discussed and results of the motor performance for four saturation levels are detailed. These results indicate that the effect of demagnetization is more adverse in a motor with partially saturated magnets.

  17. Effect of partial saturation of bonded neo magnet on the automotive accessory motor

    Science.gov (United States)

    Sheth, Nimitkumar K.; Angara, Raghu C. S. Babu

    2017-05-01

    In this paper the effects of using a partially magnetized bonded neo (NdFeB) magnet in an automotive accessory motor are presented. The potential reason for partial saturation of the bonded neo magnet is explained and a simple method to ensure saturation of the magnet is discussed. A magnetizing fixture design using the 2-D Finite element analysis (FEA) is presented. The motor performance at various magnet saturation levels has been estimated using the 2-D FEA. Details of the thermal demagnetization test adopted by the automotive industry are also discussed and results of the motor performance for four saturation levels are detailed. These results indicate that the effect of demagnetization is more adverse in a motor with partially saturated magnets.

  18. Bioavailability of Oral Hydrocortisone Corrected for Binding Proteins and Measured by LC-MS/MS Using Serum Cortisol and Salivary Cortisone.

    Science.gov (United States)

    Johnson, T N; Whitaker, M J; Keevil, B; Ross, R J

    2018-01-01

    The assessment of the absolute bioavailability of oral hydrocortisone is complicated by its saturable binding to cortisol binding globulin (CBG). Previous assessments of bioavailability used a cortisol radioimmunoassay, which has cross-reactivity with other steroids. Salivary cortisone is a measure of free cortisol, and LC-MS/MS is the gold-standard method for measuring steroids. We here report the absolute bioavailability of hydrocortisone calculated using serum cortisol and salivary cortisone measured by LC-MS/MS. Fourteen healthy male dexamethasone-suppressed volunteers were administered 20 mg hydrocortisone either intravenously or orally by tablet. Samples of serum and saliva were taken and measured for cortisol and cortisone by LC-MS/MS. Serum cortisol was corrected for saturable binding using published data, and pharmacokinetic parameters were derived using the program WinNonlin. The mean (95% CI) bioavailability of oral hydrocortisone calculated from serum cortisol, unbound serum cortisol and salivary cortisone was 1.00 (0.89-1.14), 0.88 (0.75-1.05), and 0.93 (0.83-1.05), respectively. The data confirm that, after oral administration, hydrocortisone is completely absorbed. The data derived from serum cortisol corrected for protein binding and from salivary cortisone are similar, supporting the concept that salivary cortisone reflects serum free cortisol levels and that salivary cortisone can be used as a non-invasive method for measuring the pharmacokinetics of hydrocortisone.
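
    The bioavailability calculation itself reduces to a ratio of areas under the concentration-time curves; a minimal non-compartmental sketch is given below. It does not reproduce the WinNonlin analysis or the correction for saturable CBG binding, and the dose arguments are illustrative.

```python
import numpy as np

def absolute_bioavailability(t_oral, c_oral, t_iv, c_iv, dose_oral=20.0, dose_iv=20.0):
    """F = (AUC_oral / AUC_iv) * (dose_iv / dose_oral), trapezoidal AUCs."""
    auc_oral = np.trapz(c_oral, t_oral)
    auc_iv = np.trapz(c_iv, t_iv)
    return (auc_oral / auc_iv) * (dose_iv / dose_oral)
```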

  19. Covariate measurement error correction methods in mediation analysis with failure time data.

    Science.gov (United States)

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  20. Spectrum correction algorithm for detectors in airborne radioactivity monitoring equipment NH-UAV based on a ratio processing method

    International Nuclear Information System (INIS)

    Cao, Ye; Tang, Xiao-Bin; Wang, Peng; Meng, Jia; Huang, Xi; Wen, Liang-Sheng; Chen, Da

    2015-01-01

    The unmanned aerial vehicle (UAV) radiation monitoring method plays an important role in nuclear accident emergencies. In this research, a spectrum correction algorithm for the UAV airborne radioactivity monitoring equipment NH-UAV was studied to measure the radioactive nuclides within a small area in real time and in a fixed place. The simulation spectra of the high-purity germanium (HPGe) detector and the lanthanum bromide (LaBr3) detector in the equipment were obtained using the Monte Carlo technique. Spectrum correction coefficients were calculated by applying ratio processing to the net peak areas between the two detectors on the detection spectrum of the LaBr3 detector, taking the detection spectrum of the HPGe detector as the accuracy reference. The relationship between the spectrum correction coefficient and the size of the source term was also investigated. A good linear relation exists between the spectrum correction coefficient and the corresponding energy (R2 = 0.9765). The maximum relative deviation from the real condition was reduced from 1.65 to 0.035. The spectrum correction method was verified as feasible. - Highlights: • Airborne radioactivity monitoring equipment NH-UAV was developed to measure radionuclides after a nuclear accident. • A spectrum correction algorithm was proposed to obtain precise information on the detected radioactivity within a small area. • The spectrum correction method was verified as feasible. • The corresponding spectrum correction coefficients increase first and then stay constant.

  1. Spectrum correction algorithm for detectors in airborne radioactivity monitoring equipment NH-UAV based on a ratio processing method

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Ye [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Tang, Xiao-Bin, E-mail: tangxiaobin@nuaa.edu.cn [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Jiangsu Key Laboratory of Nuclear Energy Equipment Materials Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Wang, Peng; Meng, Jia; Huang, Xi; Wen, Liang-Sheng [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Chen, Da [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Jiangsu Key Laboratory of Nuclear Energy Equipment Materials Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China)

    2015-10-11

    The unmanned aerial vehicle (UAV) radiation monitoring method plays an important role in nuclear accident emergencies. In this research, a spectrum correction algorithm for the UAV airborne radioactivity monitoring equipment NH-UAV was studied to measure the radioactive nuclides within a small area in real time and in a fixed place. The simulation spectra of the high-purity germanium (HPGe) detector and the lanthanum bromide (LaBr{sub 3}) detector in the equipment were obtained using the Monte Carlo technique. Spectrum correction coefficients were calculated by applying ratio processing to the net peak areas between the two detectors on the detection spectrum of the LaBr{sub 3} detector, taking the detection spectrum of the HPGe detector as the accuracy reference. The relationship between the spectrum correction coefficient and the size of the source term was also investigated. A good linear relation exists between the spectrum correction coefficient and the corresponding energy (R{sup 2}=0.9765). The maximum relative deviation from the real condition was reduced from 1.65 to 0.035. The spectrum correction method was verified as feasible. - Highlights: • Airborne radioactivity monitoring equipment NH-UAV was developed to measure radionuclides after a nuclear accident. • A spectrum correction algorithm was proposed to obtain precise information on the detected radioactivity within a small area. • The spectrum correction method was verified as feasible. • The corresponding spectrum correction coefficients increase first and then stay constant.
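
    A rough sketch of the ratio-processing idea described in the two records above: per-line correction coefficients are formed from the ratio of net peak areas of the two detectors (assumed here as HPGe over LaBr3, with HPGe as the accuracy reference) and their linear energy dependence is fitted. All numbers and names are hypothetical.

```python
import numpy as np

def correction_coefficients(net_area_hpge, net_area_labr):
    """Per-peak correction coefficients from the ratio of net peak areas."""
    return np.asarray(net_area_hpge, float) / np.asarray(net_area_labr, float)

# Hypothetical gamma lines (keV) and net peak areas for the two detectors
energies = np.array([344.0, 662.0, 1173.0, 1332.0])
coeffs = correction_coefficients([9800, 8400, 5100, 4700],
                                 [9100, 7600, 4500, 4100])

# Linear energy dependence of the coefficients, as reported in the abstracts
slope, intercept = np.polyfit(energies, coeffs, 1)

def corrected_area(energy_kev, labr_net_area):
    """Apply the fitted correction to a LaBr3 net peak area at a given energy."""
    return (slope * energy_kev + intercept) * labr_net_area
```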

  2. QED radiative correction for the single-W production using a parton shower method

    International Nuclear Information System (INIS)

    Kurihara, Y.; Fujimoto, J.; Ishikawa, T.; Shimizu, Y.; Kato, K.; Tobimatsu, K.; Munehisa, T.

    2001-01-01

    A parton shower method for the photonic radiative correction is applied to single W-boson production processes. The energy scale for the evolution of the parton shower is determined so that the correct soft-photon emission is reproduced. Photon spectra radiated from the partons are compared with those from the exact matrix elements, and show a good agreement. Possible errors due to an inappropriate energy-scale selection or due to the ambiguity of the energy-scale determination are also discussed, particularly for the measurements on triple gauge couplings. (orig.)

  3. Auto correct method of AD converters precision based on ethernet

    Directory of Open Access Journals (Sweden)

    NI Jifeng

    2013-10-01

    Ideal AD conversion would be a straight line through zero in the Cartesian coordinate system. In practical engineering, however, the signal processing circuit, chip performance and other factors affect the accuracy of conversion. Therefore a linear fitting method is adopted to improve the conversion accuracy. An automatic correction of AD conversion based on Ethernet, using both software and hardware, is presented. With a single mouse click, the linearity correction of all AD converter channels can be completed automatically, and the error, SNR and ENOB (effective number of bits) are calculated. The coefficients of the linear correction are then loaded into the onboard AD converter card's EEPROM. Compared with traditional methods, this method is more convenient, accurate and efficient, and has broad application prospects.
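
    The linear-fitting step can be sketched as a per-channel gain/offset fit; the EEPROM storage and Ethernet control described above are outside this fragment, and the variable names are assumptions.

```python
import numpy as np

def fit_adc_correction(raw_codes, reference_values):
    """Least-squares gain/offset so that reference ~ gain * raw + offset."""
    gain, offset = np.polyfit(raw_codes, reference_values, 1)
    return gain, offset

def apply_adc_correction(raw, gain, offset):
    """Apply the stored linear correction to later readings."""
    return gain * np.asarray(raw, dtype=float) + offset
```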

  4. A Fixed-Pattern Noise Correction Method Based on Gray Value Compensation for TDI CMOS Image Sensor.

    Science.gov (United States)

    Liu, Zhenwang; Xu, Jiangtao; Wang, Xinlei; Nie, Kaiming; Jin, Weimin

    2015-09-16

    In order to eliminate the fixed-pattern noise (FPN) in the output image of a time-delay-integration CMOS image sensor (TDI-CIS), an FPN correction method based on gray value compensation is proposed. One hundred images are first captured under uniform illumination. Then, row FPN (RFPN) and column FPN (CFPN) are estimated from the row-mean vector and column-mean vector of all collected images, respectively. Finally, RFPN is corrected by adding the estimated RFPN gray value to the original gray values of pixels in the corresponding row, and CFPN is corrected by subtracting the estimated CFPN gray value from the original gray values of pixels in the corresponding column. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in an image captured under uniform illumination with the proposed method, the standard deviation of the row-mean vector decreases from 5.6798 to 0.4214 LSB, and the standard deviation of the column-mean vector decreases from 15.2080 to 13.4623 LSB. Both kinds of FPN in real images captured by the TDI-CIS are eliminated effectively with the proposed method.
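
    A compact sketch of the estimation and compensation steps: here both row and column offsets are estimated as signed deviations from the global mean of the averaged flat-field frames and subtracted, which is one plausible reading of the compensation described above rather than the authors' exact code.

```python
import numpy as np

def estimate_fpn(flat_frames):
    """Estimate row and column FPN from a stack of uniformly illuminated frames.

    Deviations of the row means and column means of the averaged frame from
    its global mean are taken as the RFPN and CFPN estimates.
    """
    mean_frame = np.mean(flat_frames, axis=0)
    global_mean = mean_frame.mean()
    rfpn = mean_frame.mean(axis=1) - global_mean   # one signed offset per row
    cfpn = mean_frame.mean(axis=0) - global_mean   # one signed offset per column
    return rfpn, cfpn

def correct_frame(frame, rfpn, cfpn):
    """Remove the estimated row and column offsets from a captured frame."""
    return frame - rfpn[:, None] - cfpn[None, :]
```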

  5. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  6. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers.

    Science.gov (United States)

    Dobie, Robert A; Wojcik, Nancy C

    2015-07-13

    The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents two options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Regression analysis was used to derive new age-correction values using audiometric data from the 1999-2006 US NHANES. Using the NHANES median better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20-75 years. The new age-correction values are presented as two options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20-75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61-75 years. Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to
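
    Applying such a table is straightforward; the sketch below subtracts the expected ageing contribution from a raw threshold shift. The lookup values are hypothetical placeholders, not the OSHA or NHANES-derived numbers.

```python
# Hypothetical age-correction values (dB) at one audiometric frequency
AGE_CORRECTION_DB = {30: 4, 40: 7, 50: 11, 60: 16, 70: 22}

def age_corrected_shift(current_db, baseline_db, age_now, age_baseline):
    """Raw threshold shift minus the expected contribution of ageing."""
    ageing = AGE_CORRECTION_DB[age_now] - AGE_CORRECTION_DB[age_baseline]
    return (current_db - baseline_db) - ageing

# Example: an 18 dB raw shift between ages 40 and 60 reduces to a 9 dB corrected shift
print(age_corrected_shift(current_db=33, baseline_db=15, age_now=60, age_baseline=40))
```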

  7. BIOFEEDBACK: A NEW METHOD FOR CORRECTION OF MOTOR DISORDERS IN PATIENTS WITH MULTIPLE SCLEROSIS

    Directory of Open Access Journals (Sweden)

    Ya. S. Pekker

    2014-01-01

    Motor disorders are among the major disabling factors in multiple sclerosis, and their rehabilitation is one of the most important medical and social problems. Currently, much attention is given to the development of methods for the correction of motor disorders based on mobilizing the body's own resources. One of these methods is adaptive control with biofeedback (BFB). The aim of our study was the correction of motor disorders in multiple sclerosis patients using biofeedback training. In the study, we developed training scenarios for a computer-based EMG biofeedback rehabilitation program aimed at the correction of motor disorders in patients with multiple sclerosis (MS). The method was tested in the neurological clinic of SSMU. The study included 9 patients with a definite diagnosis of MS and a clinical picture of combined pyramidal and cerebellar symptoms. The effectiveness of the biofeedback training was assessed using specialized scales (the Kurtzke functional systems rating scale, the SF-36 quality-of-life questionnaire, the Sickness Impact Profile (SIP) and the Fatigue Severity Scale (FSS)). In the studied group of patients, the fatigue score (FSS) decreased, while motor control (SIP2) and the physical and mental components of health (SF-36) improved. There was a tendency toward a reduction of the neurological deficit, reflected in lower scores for pyramidal impairment on the Kurtzke scale. Analysis of the course dynamics of EMG biofeedback training for the trained muscles indicates an increase in the recorded EMG signal from session to session. A tendency toward increased strength and coordination of the trained muscles was demonstrated in the studied patients. The positive results of biofeedback therapy in patients with MS support recommending this method as part of complex rehabilitation measures to correct motor and psycho-emotional disorders.

  8. Improved dq-Axes Model of PMSM Considering Airgap Flux Harmonics and Saturation

    DEFF Research Database (Denmark)

    Fasil, Muhammed; Antaloae, Ciprian; Mijatovic, Nenad

    2016-01-01

    In this work, the classical linear model of a permanent magnet synchronous motor (PMSM) is modified by adding d and q-axes harmonic inductances so that the modified model can consider non-linearities present in an interior permanent magnet (IPM) motor. Further, a method has been presented to assess...... the effect of saturation and cross-saturation on constant torque curves of PMSM. Two IPM motors with two different rotor topologies and different specifications are designed to evaluate the effect of saturation on synchronous and harmonic inductances, and on operating points of the machines...

  9. Mental abilities and performance efficacy under a simulated 480 meters helium-oxygen saturation diving

    Directory of Open Access Journals (Sweden)

    gonglin ehou

    2015-07-01

    Stress in extreme environments severely disrupts human physiology and mental abilities. The present study investigated the cognition and performance efficacy of four divers during a simulated 480-meter helium-oxygen saturation dive. We analyzed the spatial memory, 2D/3D mental rotation functioning, grip strength, and hand-eye coordination ability of the four divers during the 0–480 meter compression and decompression processes of the simulated dive. The results showed that, except for a mild decrease in grip strength, the high atmospheric pressure condition significantly impaired hand-eye coordination (especially at 300 meters), the reaction time and correct rate of mental rotation, as well as spatial memory (especially at 410 meters), with high individual variability. We conclude that human cognition and performance efficacy are significantly affected during deep-water saturation diving.

  10. A mathematical model of avian influenza with half-saturated incidence.

    Science.gov (United States)

    Chong, Nyuk Sian; Tchuenche, Jean Michel; Smith, Robert J

    2014-03-01

    The widespread impact of avian influenza viruses not only poses risks to birds, but also to humans. The viruses spread from birds to humans and from human to human. In addition, mutation in the primary strain will increase the infectiousness of avian influenza. We developed a mathematical model of avian influenza for both bird and human populations. The effect of half-saturated incidence on the transmission dynamics of the disease is investigated. The half-saturation constants determine the levels at which birds and humans contract avian influenza. To prevent the spread of avian influenza, the associated half-saturation constants must be increased, especially the half-saturation constant Hm for humans with the mutant strain. The quantity Hm plays an essential role in determining the basic reproduction number of this model. Furthermore, by decreasing the rate βm at which human-to-human mutant influenza is contracted, an outbreak can be controlled more effectively. To combat the outbreak, we propose both pharmaceutical (vaccination) and non-pharmaceutical (personal protection and isolation) control methods to reduce the transmission of avian influenza. Vaccination and personal protection will decrease βm, while isolation will increase Hm. Numerical simulations demonstrate that all proposed control strategies will lead to disease eradication; however, if we only employ vaccination, it will take slightly longer to eradicate the disease than applying only non-pharmaceutical measures or a combination of pharmaceutical and non-pharmaceutical control methods. In conclusion, it is important to adopt a combination of control methods to fight an avian influenza outbreak.
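
    The half-saturated incidence term can be illustrated with a deliberately reduced single-population sketch (one susceptible and one infected compartment, mutant strain only); the full model in the paper couples bird and human populations and several strains, and all parameter values below are hypothetical.

```python
from scipy.integrate import solve_ivp

def si_half_saturated(t, y, beta_m, H_m, mu, gamma, births):
    """Toy S-I dynamics with half-saturated incidence beta*S*I/(H + I)."""
    S, I = y
    incidence = beta_m * S * I / (H_m + I)   # saturates as I grows past H_m
    dS = births - incidence - mu * S
    dI = incidence - (gamma + mu) * I
    return [dS, dI]

sol = solve_ivp(si_half_saturated, (0.0, 365.0), y0=[1.0e6, 10.0],
                args=(0.4, 5.0e4, 1.0 / (70 * 365), 1.0 / 7, 40.0),
                dense_output=True)
```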

  11. Arterial blood oxygen saturation during blood pressure cuff-induced hypoperfusion

    Energy Technology Data Exchange (ETDEWEB)

    Kyriacou, P A [School of Engineering and Mathematical Sciences, City University, London EC1V 0HB (United Kingdom); Shafqat, K [School of Engineering and Mathematical Sciences, City University, London EC1V 0HB (United Kingdom); Pal, S K [St Andrew' s Centre for Plastic Surgery and Burns, Broomfield Hospital, Chelmsford, CM1 7ET (United Kingdom)

    2007-10-15

    Pulse oximetry has been one of the most significant technological advances in clinical monitoring in the last two decades. Pulse oximetry is a non-invasive photometric technique that provides information about the arterial blood oxygen saturation (SpO{sub 2}) and heart rate, and has widespread clinical applications. When peripheral perfusion is poor, as in states of hypovolaemia, hypothermia and vasoconstriction, oxygenation readings become unreliable or cease. The problem arises because conventional pulse oximetry sensors must be attached to the most peripheral parts of the body, such as finger, ear or toe, where pulsatile flow is most easily compromised. Pulse oximeters estimate arterial oxygen saturation by shining light at two different wavelengths, red and infrared, through vascular tissue. In this method the ac pulsatile photoplethysmographic (PPG) signal associated with cardiac contraction is assumed to be attributable solely to the arterial blood component. The amplitudes of the red and infrared ac PPG signals are sensitive to changes in arterial oxygen saturation because of differences in the light absorption of oxygenated and deoxygenated haemoglobin at these two wavelengths. From the ratios of these amplitudes, and the corresponding dc photoplethysmographic components, arterial blood oxygen saturation (SpO{sub 2}) is estimated. Hence, the technique of pulse oximetry relies on the presence of adequate peripheral arterial pulsations, which are detected as photoplethysmographic (PPG) signals. The aim of this study was to investigate the effect of pressure cuff-induced hypoperfusion on photoplethysmographic signals and arterial blood oxygen saturation using a custom made finger blood oxygen saturation PPG/SpO{sub 2} sensor and a commercial finger pulse oximeter. Blood oxygen saturation values from the custom oxygen saturation sensor and a commercial finger oxygen saturation sensor were recorded from 14 healthy volunteers at various induced brachial pressures
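
    The ratio-of-ratios estimate described above can be sketched as follows; the linear calibration SpO2 ~ 110 - 25*R is a commonly quoted empirical approximation, not the calibration of the custom or commercial probes used in this study.

```python
def spo2_ratio_of_ratios(red_ac, red_dc, ir_ac, ir_dc):
    """Estimate SpO2 (%) from the AC/DC components of red and infrared PPG."""
    R = (red_ac / red_dc) / (ir_ac / ir_dc)
    return 110.0 - 25.0 * R

# Example with hypothetical PPG amplitudes
print(spo2_ratio_of_ratios(red_ac=0.012, red_dc=1.9, ir_ac=0.020, ir_dc=2.1))
```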

  12. Comparison of classical methods for blade design and the influence of tip correction on rotor performance

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær; Okulov, Valery; Mikkelsen, Robert Flemming

    2016-01-01

    The classical blade-element/momentum (BE/M) method, which is used together with different types of corrections (e.g. the Prandtl or Glauert tip correction), is today the most basic tool in the design of wind turbine rotors. However, there are other classical techniques based on a combination...

  13. Empirical method for matrix effects correction in liquid samples

    International Nuclear Information System (INIS)

    Vigoda de Leyt, Dora; Vazquez, Cristina

    1987-01-01

    A simple method for the determination of Cr, Ni and Mo in stainless steels is presented. In order to minimize matrix effects, the conditions of a liquid system for dissolving stainless steel chips have been developed. Pure element solutions were used as standards. Preparation of synthetic solutions containing all the elements of the steel, as well as mathematical corrections, are avoided, resulting in a simple chemical operation which simplifies the method of analysis. The variance analysis of the results obtained with steel samples shows that the three elements may be determined by comparison with the analytical curves obtained with the pure elements if the same parameters are used in the calibration curves. The accuracy and the precision were checked against other techniques using the British Chemical Standards of the Bureau of Analysed Samples Ltd. (England). (M.E.L.)

  14. [Tissue oxygen saturation in the critically ill patient].

    Science.gov (United States)

    Gruartmoner, G; Mesquida, J; Baigorri, F

    2014-05-01

    Hemodynamic resuscitation seeks to correct global macrocirculatory parameters of pressure and flow. However, current evidence has shown that despite the normalization of these global parameters, microcirculatory and regional perfusion alterations can persist, and these alterations have been independently associated with a poorer patient prognosis. This in turn has led to growing interest in new technologies for exploring regional circulation and microcirculation. Near-infrared spectroscopy allows us to monitor tissue oxygen saturation, and has been proposed as a noninvasive, continuous and easy-to-obtain measure of regional circulation. The present review aims to summarize the existing evidence on near-infrared spectroscopy and its potential clinical role in the resuscitation of critically ill patients in shock. Copyright © 2013 Elsevier España, S.L. and SEMICYUC. All rights reserved.

  15. Saturated salt method determination of hysteresis of Pinus sylvestris L. wood for 35 ºC isotherms

    Directory of Open Access Journals (Sweden)

    García Esteban, L.

    2004-12-01

    The saturated salts method was used in this study to quantify hysteresis in Pinus sylvestris L. wood by plotting the 35 ºC desorption and sorption isotherms. Nine salts were used, all of which establish stable and known relative humidity values when saturated in water. The wood was kept at the relative humidity generated by each of these salts until the equilibrium moisture content (EMC) was reached, both in the water-loss (desorption) and the water-uptake (sorption) processes. The Guggenheim method was used to fit the values obtained to the respective curves. Hysteresis was evaluated in terms of the hysteresis coefficient, for which a mean value of 0.87 was found.
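
    The curve fitting and hysteresis evaluation can be sketched as below; the GAB form of the Guggenheim isotherm is used, all equilibrium moisture contents are hypothetical, and the hysteresis coefficient is taken here as the mean ratio of sorption to desorption EMC (an assumed definition).

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, wm, c, k):
    """Guggenheim (GAB) isotherm: EMC as a function of water activity aw."""
    return wm * c * k * aw / ((1 - k * aw) * (1 - k * aw + c * k * aw))

# Hypothetical EMC values (%) at the water activities fixed by nine saturated salts
aw = np.array([0.11, 0.23, 0.33, 0.44, 0.53, 0.65, 0.75, 0.85, 0.97])
emc_sorption = np.array([2.9, 4.6, 5.8, 7.1, 8.2, 10.0, 12.1, 15.3, 21.8])
emc_desorption = np.array([3.4, 5.3, 6.7, 8.1, 9.4, 11.5, 13.9, 17.2, 23.5])

pars_s, _ = curve_fit(gab, aw, emc_sorption, p0=(6.0, 10.0, 0.8))
pars_d, _ = curve_fit(gab, aw, emc_desorption, p0=(6.0, 10.0, 0.8))
hysteresis_coefficient = np.mean(emc_sorption / emc_desorption)  # ~0.88 for these values
```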


  16. SITE-SCALE SATURATED ZONE TRANSPORT

    International Nuclear Information System (INIS)

    S. KELLER

    2004-01-01

    This work provides a site-scale transport model for calculating radionuclide transport in the saturated zone (SZ) at Yucca Mountain, for use in the abstractions model in support of ''Total System Performance Assessment for License Application'' (TSPA-LA). The purpose of this model report is to provide documentation for the components of the site-scale SZ transport model in accordance with administrative procedure AP-SIII.10Q, Models. The initial documentation of this model report was conducted under the ''Technical Work Plan For: Saturated Zone Flow and Transport Modeling and Testing'' (BSC 2003 [DIRS 163965]). The model report has been revised in accordance with the ''Technical Work Plan For: Natural System--Saturated Zone Analysis and Model Report Integration'', Section 2.1.1.4 (BSC 2004 [DIRS 171421]) to incorporate Regulatory Integration Team comments. All activities listed in the technical work plan that are appropriate to the transport model are documented in this report and are described in Section 2.1.1.4 (BSC 2004 [DIRS 171421]). This report documents: (1) the advection-dispersion transport model including matrix diffusion (Sections 6.3 and 6.4); (2) a description and validation of the transport model (Sections 6.3 and 7); (3) the numerical methods for simulating radionuclide transport (Section 6.4); (4) the parameters (sorption coefficient, Kd ) and their uncertainty distributions used for modeling radionuclide sorption (Appendices A and C); (5) the parameters used for modeling colloid-facilitated radionuclide transport (Table 4-1, Section 6.4.2.6, and Appendix B); and (6) alternative conceptual models and their dispositions (Section 6.6). The intended use of this model is to simulate transport in saturated fractured porous rock (double porosity) and alluvium. The particle-tracking method of simulating radionuclide transport is incorporated in the finite-volume heat and mass transfer numerical analysis (FEHM) computer code, (FEHM V2.20, STN: 10086

  17. Saturation of alpha particle driven instability in Tokamak Fusion Test Reactor

    International Nuclear Information System (INIS)

    Gorelenkov, N.N.; Chen, Y.; White, R.B.; Berk, H.L.

    1999-01-01

    A nonlinear theory of kinetic instabilities near threshold [Berk et al., Plasma Phys. Rep. 23, 842 (1997)] is applied to calculate the saturation level of toroidicity-induced Alfven eigenmodes (TAE), and the results are compared with the predictions of δf method calculations (Y. Chen, Ph.D. thesis, Princeton University, 1998). Good agreement is observed between the predictions of both methods, and the predicted saturation levels are comparable to experimentally measured amplitudes of the TAE oscillations in the Tokamak Fusion Test Reactor [D. J. Grove and D. M. Meade, Nucl. Fusion 25, 1167 (1985)]. copyright 1999 American Institute of Physics

  18. Nitrogen saturation in stream ecosystems

    OpenAIRE

    Earl, S. R.; Valett, H. M.; Webster, J. R.

    2006-01-01

    The concept of nitrogen (N) saturation has organized the assessment of N loading in terrestrial ecosystems. Here we extend the concept to lotic ecosystems by coupling Michaelis-Menten kinetics and nutrient spiraling. We propose a series of saturation response types, which may be used to characterize the proximity of streams to N saturation. We conducted a series of short-term N releases using a 15N-NO3 tracer to measure uptake. Experiments were conducted in streams spanning a gradient ...

  19. SPECT quantification: a review of the different correction methods with Compton scatter, attenuation and spatial deterioration effects

    International Nuclear Information System (INIS)

    Groiselle, C.; Rocchisani, J.M.; Moretti, J.L.; Dreuille, O. de; Gaillard, J.F.; Bendriem, B.

    1997-01-01

    SPECT quantification: a review of the different correction methods with Compton scatter, attenuation and spatial deterioration effects. The improvement of gamma cameras and of acquisition and reconstruction software opens new perspectives in terms of image quantification in nuclear medicine. In order to meet the challenge, numerous works have been undertaken in recent years to correct for the different physical phenomena that prevent an exact estimation of the radioactivity distribution. The main phenomena that have to be taken into account are scatter, attenuation and resolution. In this work, the authors present the physical basis of each issue, its consequences on quantification and the main methods proposed to correct for them. (authors)

  20. Synthesis of high saturation magnetic iron oxide nanomaterials via low temperature hydrothermal method

    Energy Technology Data Exchange (ETDEWEB)

    Bhavani, P.; Rajababu, C.H. [Department of Materials Science & Nanotechnology, Yogivemana University, Vemanapuram 516003, Kadapa (India); Arif, M.D. [Environmental Magnetism Laboratory, Indian Institute of Geomagnetism (IIG), Navi Mumbai 410218, Mumbai (India); Reddy, I. Venkata Subba [Department of Physics, Gitam University, Hyderabad Campus, Rudraram, Medak 502329 (India); Reddy, N. Ramamanohar, E-mail: manoharphd@gmail.com [Department of Materials Science & Nanotechnology, Yogivemana University, Vemanapuram 516003, Kadapa (India)

    2017-03-15

    Iron oxide nanoparticles (IONPs) were synthesized through a simple low-temperature hydrothermal approach to obtain high saturation magnetization properties. Two series of iron precursors (sulfates and chlorides) were used in the synthesis process by varying the reaction temperature at a constant pH. The X-ray diffraction patterns indicate the inverse spinel structure of the synthesized IONPs. Field emission scanning electron microscopy and high resolution transmission electron microscopy studies revealed that the particles prepared using iron sulfate at 130 °C consisted of a mixture of spherical (16–40 nm) and rod (diameter ~20–25 nm, length <100 nm) morphologies, while the IONPs synthesized from iron chlorides were well-distributed spherical shapes in the size range 5–20 nm. On the other hand, the IONPs synthesized at a reaction temperature of 190 °C have spherical (16–46 nm) morphology in both series. The band gap values of the IONPs were calculated from the optical absorption spectra of the samples. The IONPs synthesized using iron sulfate at a temperature of 130 °C exhibited a high saturation magnetization (M{sub S}) of 103.017 emu/g and a low remanent magnetization (M{sub r}) of 0.22 emu/g with a coercivity (H{sub c}) of 70.9 Oe, which may be attributed to the smaller magnetic domains (d{sub m}) and dead magnetic layer thickness (t). - Highlights: • Comparison of iron oxide materials prepared with Fe{sup +2}/Fe{sup +3} sulfates and chlorides at different temperatures. • We prepared super-paramagnetic and soft ferromagnetic magnetite nanoparticles. • We report higher saturation magnetization with lower coercivity.

  1. Evaluation of six scatter correction methods based on spectral analysis in 99m Tc SPECT imaging using SIMIND Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    Mahsa Noori Asl

    2013-01-01

    Compton-scattered photons included within the photopeak pulse-height window result in the degradation of SPECT images both qualitatively and quantitatively. The purpose of this study is to evaluate and compare six scatter correction methods based on setting energy windows in the 99mTc spectrum. SIMIND Monte Carlo simulation is used to generate the projection images from a cold-sphere hot-background phantom. For the evaluation of the different scatter correction methods, three assessment criteria are considered: image contrast, signal-to-noise ratio (SNR) and relative noise of the background (RNB). Except for the dual-photopeak window (DPW) method, the image contrast of the five cold spheres is improved in the range of 2.7-26%. Among the methods considered, two methods show a nonuniform correction performance. The RNB for all of the scatter correction methods ranges from a minimum of 0.03 for the DPW method to a maximum of 0.0727 for the three energy window (TEW) method using the trapezoidal approximation. Because of its ease of implementation, good improvement of the image contrast and the SNR for the five cold spheres, and low noise level, the TEW method using the triangular approximation is proposed as the most appropriate correction method.
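
    For reference, a sketch of the triple energy window estimate, where scatter under the photopeak is approximated by the trapezoid spanned by the count densities in two narrow sub-windows; setting the upper sub-window counts to zero gives the triangular approximation favoured in the conclusion. The window widths are illustrative.

```python
import numpy as np

def tew_scatter(c_lower, c_upper, w_sub, w_main):
    """Scatter counts in the main window from the two sub-window count densities."""
    return (c_lower / w_sub + c_upper / w_sub) * w_main / 2.0

def tew_correct(photopeak, c_lower, c_upper=0.0, w_sub=2.0, w_main=20.0):
    """TEW-corrected primary counts; c_upper = 0 is the triangular approximation."""
    scatter = tew_scatter(np.asarray(c_lower, float), np.asarray(c_upper, float),
                          w_sub, w_main)
    return np.clip(np.asarray(photopeak, float) - scatter, 0.0, None)
```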

  2. Saturation recovery EPR spin-labeling method for quantification of lipids in biological membrane domains.

    Science.gov (United States)

    Mainali, Laxman; Camenisch, Theodore G; Hyde, James S; Subczynski, Witold K

    2017-12-01

    The presence of integral membrane proteins induces the formation of distinct domains in the lipid bilayer portion of biological membranes. Qualitative application of both continuous wave (CW) and saturation recovery (SR) electron paramagnetic resonance (EPR) spin-labeling methods allowed discrimination of the bulk, boundary, and trapped lipid domains. A recently developed method, which is based on the CW EPR spectra of phospholipid (PL) and cholesterol (Chol) analog spin labels, allows evaluation of the relative amount of PLs (% of total PLs) in the boundary plus trapped lipid domain and the relative amount of Chol (% of total Chol) in the trapped lipid domain [ M. Raguz, L. Mainali, W. J. O'Brien, and W. K. Subczynski (2015), Exp. Eye Res., 140:179-186 ]. Here, a new method is presented that, based on SR EPR spin-labeling, allows quantitative evaluation of the relative amounts of PLs and Chol in the trapped lipid domain of intact membranes. This new method complements the existing one, allowing acquisition of more detailed information about the distribution of lipids between domains in intact membranes. The methodological transition of the SR EPR spin-labeling approach from qualitative to quantitative is demonstrated. The abilities of this method are illustrated for intact cortical and nuclear fiber cell plasma membranes from porcine eye lenses. Statistical analysis (Student's t -test) of the data allowed determination of the separations of mean values above which differences can be treated as statistically significant ( P ≤ 0.05) and can be attributed to sources other than preparation/technique.

  3. The open-pit truck dispatching method based on the completion of production target and the truck flow saturation

    Energy Technology Data Exchange (ETDEWEB)

    Xing, J.; Sun, X. [Northeastern University, Shenyang (China)

    2007-05-15

    To address current problems in the 'modular dispatch' dynamic programming system widely used in open-pit truck real-time dispatching, two concepts, completion of production targets and truck flow saturation, were proposed. Using truck flow programming and taking into account stochastic factors and transportation distance, truck real-time dispatching was optimised. The method is applicable to both shovel-truck matching and mismatching, and to both empty and heavy truck dispatching. In an open-pit mine the production efficiency could be increased by between 8% and 18%. 6 refs.

  4. New calculation method for thermodynamic properties of humid air in humid air turbine cycle – The general model and solutions for saturated humid air

    International Nuclear Information System (INIS)

    Wang, Zidong; Chen, Hanping; Weng, Shilie

    2013-01-01

    The article proposes a new calculation method for the thermodynamic properties (i.e. specific enthalpy, specific entropy and specific volume) of humid air in the humid air turbine cycle. The research pressure range is from 0.1 MPa to 5 MPa. The fundamental behaviors of dry air and water vapor in saturated humid air are explored in depth. The new model proposes and verifies the relationship between the total gas mixture pressure and the gas component pressures. This provides a good explanation of the fundamental behaviors of gas components in a gas mixture from a new perspective. Another finding is that the water vapor component pressure of saturated humid air equals P{sub S}, always smaller than its partial pressure (f·P{sub S}) as was believed in past research. In the new model, the “Local Gas Constant” describes the interaction between similar molecules. The “Improvement Factor”, proposed for the first time in this article, quantitatively describes the magnitude of the interaction between dissimilar molecules. Combined, they fully describe the real thermodynamic properties of humid air. The average error of the Revised Dalton's Method is within 0.1% compared to experimentally-based data. - Highlights: • Our new model is suitable for calculating thermodynamic properties of humid air in the HAT cycle. • Fundamental behaviors of dry air and water vapor in saturated humid air are explored in depth. • The Local Gas Constant describes a component existing alone and the Improvement Factor describes the interaction between different components. • The new model proposes and verifies the relationship between the total gas mixture pressure and the component pressures. • It solves saturated humid air thoroughly and deviates from experimental data by less than 0.1%

  5. SU-F-T-584: Investigating Correction Methods for Ion Recombination Effects in OCTAVIUS 1000 SRS Measurements

    International Nuclear Information System (INIS)

    Knill, C; Snyder, M; Rakowski, J; J, Burmeister; Zhuang, L; Matuszak, M

    2016-01-01

    Purpose: PTW’s Octavius 1000 SRS array performs IMRT QA measurements with liquid filled ionization chambers (LICs). Collection efficiencies of LICs have been shown to change during IMRT delivery as a function of LINAC pulse frequency and pulse dose, which affects QA results. In this study, two methods were developed to correct changes in collection efficiencies during IMRT QA measurements, and the effects of these corrections on QA pass rates were compared. Methods: For the first correction, Matlab software was developed that calculates pulse frequency and pulse dose for each detector, using measurement and DICOM RT Plan files. Pulse information is converted to collection efficiency and measurements are corrected by multiplying detector dose by ratios of calibration to measured collection efficiencies. For the second correction, MU/min in daily 1000 SRS calibration was chosen to match average MU/min of the VMAT plan. Usefulness of derived corrections were evaluated using 6MV and 10FFF SBRT RapidArc plans delivered to the OCTAVIUS 4D system using a TrueBeam equipped with an HD- MLC. Effects of the two corrections on QA results were examined by performing 3D gamma analysis comparing predicted to measured dose, with and without corrections. Results: After complex Matlab corrections, average 3D gamma pass rates improved by [0.07%,0.40%,1.17%] for 6MV and [0.29%,1.40%,4.57%] for 10FFF using [3%/3mm,2%/2mm,1%/1mm] criteria. Maximum changes in gamma pass rates were [0.43%,1.63%,3.05%] for 6MV and [1.00%,4.80%,11.2%] for 10FFF using [3%/3mm,2%/2mm,1%/1mm] criteria. On average, pass rates of simple daily calibration corrections were within 1% of complex Matlab corrections. Conclusion: Ion recombination effects can potentially be clinically significant for OCTAVIUS 1000 SRS measurements, especially for higher pulse dose unflattened beams when using tighter gamma tolerances. Matching daily 1000 SRS calibration MU/min to average planned MU/min is a simple correction that
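
    A minimal sketch of the per-detector correction logic described in the abstract: the measured dose is rescaled by the ratio of the collection efficiency at calibration to the collection efficiency during the measurement. The efficiency model and all numbers below are illustrative assumptions, not the study's fitted relation between collection efficiency, pulse dose and pulse frequency.

        def collection_efficiency(pulse_dose, pulse_freq, k1=0.02, k2=1e-4):
            # Placeholder monotone model: efficiency drops with pulse dose and pulse rate.
            return 1.0 / (1.0 + k1 * pulse_dose + k2 * pulse_freq)

        def correct_detector_dose(measured_dose, pulse_dose, pulse_freq,
                                  cal_pulse_dose, cal_pulse_freq):
            f_meas = collection_efficiency(pulse_dose, pulse_freq)
            f_cal = collection_efficiency(cal_pulse_dose, cal_pulse_freq)
            return measured_dose * f_cal / f_meas   # ratio of calibration to measured efficiency

        print(correct_detector_dose(measured_dose=2.00, pulse_dose=1.2, pulse_freq=360.0,
                                    cal_pulse_dose=0.5, cal_pulse_freq=180.0))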

  6. Saturated and unsaturated stability analysis of slope subjected to rainfall infiltration

    OpenAIRE

    Gofar Nurly; Rahardjo Harianto

    2017-01-01

    This paper presents results of saturated and unsaturated stability analysis of typical residual slopes subjected to rainfall infiltration corresponds to 50 years rainfall return period. The slope angles considered were 45° and 70°. The saturated stability analyses were carried out for original and critical ground water level commonly considered by practicing engineer. The analyses were conducted using limit equilibrium method. Unsaturated stability analyses used combination of coupled stress–...

  7. Correction of 157-nm lens based on phase ring aberration extraction method

    Science.gov (United States)

    Meute, Jeff; Rich, Georgia K.; Conley, Will; Smith, Bruce W.; Zavyalova, Lena V.; Cashmore, Julian S.; Ashworth, Dominic; Webb, James E.; Rich, Lisa

    2004-05-01

    Early manufacture and use of 157nm high NA lenses has presented significant challenges including: intrinsic birefringence correction, control of optical surface contamination, and the use of relatively unproven materials, coatings, and metrology. Many of these issues were addressed during the manufacture and use of International SEMATECH's 0.85NA lens. Most significantly, we were the first to employ 157nm phase measurement interferometry (PMI) and birefringence modeling software for lens optimization. These efforts yielded significant wavefront improvement and produced one of the best wavefront-corrected 157nm lenses to date. After applying the best practices to the manufacture of the lens, we still had to overcome the difficulties of integrating the lens into the tool platform at International SEMATECH instead of at the supplier facility. After lens integration, alignment, and field optimization were complete, conventional lithography and phase ring aberration extraction techniques were used to characterize system performance. These techniques suggested a wavefront error of approximately 0.05 waves RMS--much larger than the 0.03 waves RMS predicted by 157nm PMI. In-situ wavefront correction was planned for in the early stages of this project to mitigate risks introduced by the use of development materials and techniques and field integration of the lens. In this publication, we document the development and use of a phase ring aberration extraction method for characterizing imaging performance and a technique for correcting aberrations with the addition of an optical compensation plate. Imaging results before and after the lens correction are presented and differences between actual and predicted results are discussed.

  8. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    Science.gov (United States)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

    The laser induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. Overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting passes are performed to obtain a lower residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm, obtained from the LIBS spectra of five different concentrations of CuSO4·5H2O solution, were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. It can be seen that the error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and it can be applied to the decomposition and correction of overlapping peaks in LIBS spectra.
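
    One plausible reading of the residual-feedback idea, sketched in Python: two overlapping lines are refit alternately, each against the spectrum with the current estimate of the other line removed, until the residual stops improving. Peak shapes, wavelengths and noise below are synthetic placeholders, not the paper's Cu-Fe data.

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss(x, a, c, s):
            return a * np.exp(-(x - c) ** 2 / (2.0 * s ** 2))

        x = np.linspace(321.0, 327.0, 400)
        rng = np.random.default_rng(1)
        spectrum = (gauss(x, 1.0, 324.75, 0.25) + gauss(x, 0.6, 325.4, 0.25)
                    + rng.normal(0.0, 0.02, x.size))

        p_a = np.array([0.8, 324.7, 0.3])     # initial guess, line A (e.g. Cu, illustrative)
        p_b = np.array([0.4, 325.5, 0.3])     # initial guess, line B (e.g. Fe, illustrative)
        for _ in range(5):
            # refit each peak against the data with the other peak's estimate subtracted
            p_a, _ = curve_fit(gauss, x, spectrum - gauss(x, *p_b), p0=p_a)
            p_b, _ = curve_fit(gauss, x, spectrum - gauss(x, *p_a), p0=p_b)

        residual = spectrum - gauss(x, *p_a) - gauss(x, *p_b)
        print("line A amplitude:", round(p_a[0], 3), " residual RMS:", round(float(residual.std()), 4))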

  9. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    Science.gov (United States)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  10. Semiconductor saturable absorbers for ultrafast terahertz signals

    DEFF Research Database (Denmark)

    Hoffmann, Matthias C.; Turchinovich, Dmitry

    2010-01-01

    We demonstrate saturable absorber behavior of the n-type semiconductors GaAs, GaP, and Ge in the terahertz (THz) frequency range at room temperature using nonlinear THz spectroscopy. The saturation mechanism is based on a decrease in the electron conductivity of the semiconductors at high electron momentum states, due to conduction band nonparabolicity and scattering into satellite valleys in strong THz fields. Saturable absorber parameters, such as linear and nonsaturable transmission, and saturation fluence, are extracted by fits to a classic saturable absorber model. Further, we observe THz pulse...
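
    A minimal fitting sketch under one common parameterization of such a saturable absorber, T(F) = T_ns - (T_ns - T_lin) * exp(-F / F_sat); the transmission-versus-fluence data below are synthetic placeholders, and the exact model form used in the paper may differ.

        import numpy as np
        from scipy.optimize import curve_fit

        def sat_abs_transmission(F, T_lin, T_ns, F_sat):
            # T_lin: linear (low-fluence) transmission, T_ns: nonsaturable (high-fluence)
            # transmission, F_sat: saturation fluence.
            return T_ns - (T_ns - T_lin) * np.exp(-F / F_sat)

        fluence = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0, 1000.0])   # a.u.
        trans = np.array([0.42, 0.44, 0.50, 0.58, 0.66, 0.70, 0.71])       # synthetic data

        (T_lin, T_ns, F_sat), _ = curve_fit(sat_abs_transmission, fluence, trans,
                                            p0=[0.4, 0.7, 50.0])
        print(f"T_lin={T_lin:.2f}, T_ns={T_ns:.2f}, F_sat={F_sat:.1f} (a.u.)")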

  11. Study protocol: the empirical investigation of methods to correct for measurement error in biobanks with dietary assessment

    Directory of Open Access Journals (Sweden)

    Masson Lindsey F

    2011-10-01

    Full Text Available Abstract Background The Public Population Project in Genomics (P3G) is an organisation that aims to promote collaboration between researchers in the field of population-based genomics. The main objectives of P3G are to encourage collaboration between researchers and biobankers, optimize study design, promote the harmonization of information use in biobanks, and facilitate transfer of knowledge between interested parties. The importance of calibration and harmonisation of methods for environmental exposure assessment to allow pooling of data across studies in the evaluation of gene-environment interactions has been recognised by P3G, which has set up a methodological group on calibration with the aim of: (1) reviewing the published methodological literature on measurement error correction methods with assumptions and methods of implementation; (2) reviewing the evidence available from published nutritional epidemiological studies that have used a calibration approach; (3) disseminating information in the form of a comparison chart on approaches to perform calibration studies and how to obtain correction factors in order to support research groups collaborating within the P3G network that are unfamiliar with the methods employed; (4) with application to the field of nutritional epidemiology, including gene-diet interactions, ultimately developing an inventory of the typical correction factors for various nutrients. Methods/Design Systematic review of (a) the methodological literature on methods to correct for measurement error in epidemiological studies; and (b) studies that have been designed primarily to investigate the association between diet and disease and have also corrected for measurement error in dietary intake. Discussion The conduct of a systematic review of the methodological literature on calibration will facilitate the evaluation of methods to correct for measurement error and the design of calibration studies for the prospective pooling of

  12. Biogeosystem Technique as a method to correct the climate

    Science.gov (United States)

    Kalinitchenko, Valery; Batukaev, Abdulmalik; Batukaev, Magomed; Minkina, Tatiana

    2017-04-01

    can be produced; The less energy is consumed for climate correction, the better. The proposed algorithm was never discussed before because most of its ingredients were unenforceable. Now the possibility to execute the algorithm exists in the framework of our new scientific-technical branch - Biogeosystem Technique (BGT*). The BGT* is a transcendental (non-imitating natural processes) approach to soil processing, regulation of energy, matter, water fluxes and biological productivity of biosphere: intra-soil machining to provide the new highly productive dispersed system of soil; intra-soil pulse continuous-discrete plants watering to reduce the transpiration rate and water consumption of plants for 5-20 times; intra-soil environmentally safe return of matter during intra-soil milling processing and (or) intra-soil pulse continuous-discrete plants watering with nutrition. Are possible: waste management; reducing flow of nutrients to water systems; carbon and other organic and mineral substances transformation into the soil to plant nutrition elements; less degradation of biological matter to greenhouse gases; increasing biological sequestration of carbon dioxide in terrestrial system's photosynthesis; oxidizing methane and hydrogen sulfide by fresh photosynthesis ionized biologically active oxygen; expansion of the active terrestrial site of biosphere. The high biological product output of biosphere will be gained. BGT* robotic systems are of low cost, energy and material consumption. By BGT* methods the uncertainties of climate and biosphere will be reduced. Key words: Biogeosystem Technique, method to correct, climate

  13. Analysis of efficient preconditioned defect correction methods for nonlinear water waves

    DEFF Research Database (Denmark)

    Engsig-Karup, Allan Peter

    2014-01-01

    Robust computational procedures for the solution of non-hydrostatic, free surface, irrotational and inviscid free-surface water waves in three space dimensions can be based on iterative preconditioned defect correction (PDC) methods. Such methods can be made efficient and scalable to enable... prediction of free-surface wave transformation and accurate wave kinematics in both deep and shallow waters in large marine areas or for predicting the outcome of experiments in large numerical wave tanks. We revisit the classical governing equations, which are the fully nonlinear and dispersive potential flow... equations. We present new detailed fundamental analysis using finite-amplitude wave solutions for iterative solvers. We demonstrate that the PDC method in combination with a high-order discretization method enables efficient and scalable solution of the linear system of equations arising in potential flow...
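
    The core of a preconditioned defect correction iteration can be shown on a toy linear system; a hedged sketch follows, with a plain Jacobi (diagonal) preconditioner standing in for the problem-specific preconditioners discussed in the abstract.

        import numpy as np

        A = np.array([[4.0, 1.0, 0.0],
                      [1.0, 4.0, 1.0],
                      [0.0, 1.0, 4.0]])
        b = np.array([1.0, 2.0, 3.0])
        M_inv = np.diag(1.0 / np.diag(A))    # preconditioner: inverse of diag(A), illustrative

        x = np.zeros(3)
        for k in range(100):
            defect = b - A @ x               # residual (defect) of the current iterate
            x = x + M_inv @ defect           # preconditioned defect correction
            if np.linalg.norm(defect) < 1e-12:
                break

        print(k, x, np.linalg.norm(b - A @ x))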

  14. Modeling variably saturated multispecies reactive groundwater solute transport with MODFLOW-UZF and RT3D

    Science.gov (United States)

    Bailey, Ryan T.; Morway, Eric D.; Niswonger, Richard G.; Gates, Timothy K.

    2013-01-01

    A numerical model was developed that is capable of simulating multispecies reactive solute transport in variably saturated porous media. This model consists of a modified version of the reactive transport model RT3D (Reactive Transport in 3 Dimensions) that is linked to the Unsaturated-Zone Flow (UZF1) package and MODFLOW. Referred to as UZF-RT3D, the model is tested against published analytical benchmarks as well as other published contaminant transport models, including HYDRUS-1D, VS2DT, and SUTRA, and the coupled flow and transport modeling system of CATHY and TRAN3D. Comparisons in one-dimensional, two-dimensional, and three-dimensional variably saturated systems are explored. While several test cases are included to verify the correct implementation of variably saturated transport in UZF-RT3D, other cases are included to demonstrate the usefulness of the code in terms of model run-time and handling the reaction kinetics of multiple interacting species in variably saturated subsurface systems. As UZF1 relies on a kinematic-wave approximation for unsaturated flow that neglects the diffusive terms in Richards equation, UZF-RT3D can be used for large-scale aquifer systems for which the UZF1 formulation is reasonable, that is, capillary-pressure gradients can be neglected and soil parameters can be treated as homogeneous. Decreased model run-time and the ability to include site-specific chemical species and chemical reactions make UZF-RT3D an attractive model for efficient simulation of multispecies reactive transport in variably saturated large-scale subsurface systems.

  15. Nitrogen saturation in stream ecosystems.

    Science.gov (United States)

    Earl, Stevan R; Valett, H Maurice; Webster, Jackson R

    2006-12-01

    The concept of nitrogen (N) saturation has organized the assessment of N loading in terrestrial ecosystems. Here we extend the concept to lotic ecosystems by coupling Michaelis-Menten kinetics and nutrient spiraling. We propose a series of saturation response types, which may be used to characterize the proximity of streams to N saturation. We conducted a series of short-term N releases using a tracer (15NO3-N) to measure uptake. Experiments were conducted in streams spanning a gradient of background N concentration. Uptake increased in four of six streams as NO3-N was incrementally elevated, indicating that these streams were not saturated. Uptake generally corresponded to Michaelis-Menten kinetics but deviated from the model in two streams where some other growth-critical factor may have been limiting. Proximity to saturation was correlated to background N concentration but was better predicted by the ratio of dissolved inorganic N (DIN) to soluble reactive phosphorus (SRP), suggesting phosphorus limitation in several high-N streams. Uptake velocity, a reflection of uptake efficiency, declined nonlinearly with increasing N amendment in all streams. At the same time, uptake velocity was highest in the low-N streams. Our conceptual model of N transport, uptake, and uptake efficiency suggests that, while streams may be active sites of N uptake on the landscape, N saturation contributes to nonlinear changes in stream N dynamics that correspond to decreased uptake efficiency.
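
    A small numerical illustration of the framework described above: areal uptake U follows Michaelis-Menten kinetics in concentration C, and uptake velocity v_f = U / C (the efficiency measure) declines nonlinearly as C rises. Parameter values are arbitrary placeholders, not values from the field experiments.

        U_max = 100.0   # maximum areal uptake (illustrative units)
        K_s = 50.0      # half-saturation concentration (illustrative units)

        for C in (10.0, 50.0, 200.0, 1000.0):       # increasing NO3-N amendment
            U = U_max * C / (K_s + C)               # Michaelis-Menten uptake
            v_f = U / C                             # uptake velocity (uptake efficiency)
            print(f"C={C:7.1f}  U={U:6.1f}  v_f={v_f:6.3f}")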

  16. Brain oxygen saturation assessment in neonates using T2-prepared blood imaging of oxygen saturation and near-infrared spectroscopy

    DEFF Research Database (Denmark)

    Alderliesten, Thomas; De Vis, Jill B; Lemmers, Petra Ma

    2017-01-01

    saturation in the sagittal sinus (R^2 = 0.49, p = 0.023), but no significant correlations could be demonstrated with frontal and whole brain cerebral blood flow. These results suggest that measuring oxygen saturation by T2-prepared blood imaging of oxygen saturation is feasible, even in neonates. Strong... sinus. A strong linear relation was found between the oxygen saturation measured by magnetic resonance imaging and the oxygen saturation measured by near-infrared spectroscopy (R^2 = 0.64, p ..., and magnetic resonance imaging measures of frontal cerebral blood flow, whole brain cerebral blood flow and venous oxygen saturation in the sagittal sinus (R^2 = 0.71, 0.50, 0.65; p

  17. An Analysis and Design for Nonlinear Quadratic Systems Subject to Nested Saturation

    Directory of Open Access Journals (Sweden)

    Minsong Zhang

    2013-01-01

    Full Text Available This paper considers the stability problem for nonlinear quadratic systems with nested saturation input. The treatment of nested saturation proposed here puts to use a well-established linear differential control tool. The new conclusions include the existing conclusions on this issue as special cases and are less conservative than before. A simulation example illustrates the effectiveness of the established methodologies.

  18. A method for correcting the depth-of-interaction blurring in PET cameras

    International Nuclear Information System (INIS)

    Rogers, J.G.

    1993-11-01

    A method is presented for the purpose of correcting PET images for the blurring caused by variations in the depth-of-interaction in position-sensitive gamma ray detectors. In the case of a fine-cut 50x50x30 mm BGO block detector, the method is shown to improve the detector resolution by about 25%, measured in the geometry corresponding to detection at the edge of the field-of-view. Strengths and weaknesses of the method are discussed and its potential usefulness for improving the images of future PET cameras is assessed. (author). 8 refs., 3 figs

  19. 76 FR 53819 - Methods of Accounting Used by Corporations That Acquire the Assets of Other Corporations; Correction

    Science.gov (United States)

    2011-08-30

    ... of Accounting Used by Corporations That Acquire the Assets of Other Corporations; Correction AGENCY... describes corrections to final regulations (TD 9534) relating to the methods of accounting, including the... corporate reorganizations and tax-free liquidations. These regulations were published in the Federal...

  20. Modeling of carbon sequestration in coal-beds: A variable saturated simulation

    International Nuclear Information System (INIS)

    Liu Guoxiang; Smirnov, Andrei V.

    2008-01-01

    Storage of carbon dioxide in deep coal seams is a profitable method to reduce the concentration of greenhouse gases in the atmosphere, while methane can be extracted as a byproduct during carbon dioxide injection into the coal seam. In this procedure, the key element is to keep carbon dioxide in the coal seam, without escaping, over the long term. This depends on many factors such as the properties of the coal basin, fracture state, phase equilibrium, etc., especially the porosity, permeability and saturation of the coal seam. In this paper, a variable saturation model was developed to predict the capacity of carbon dioxide sequestration and coal-bed methane recovery. This variable saturation model can be used to track the saturation variability with the partial pressure changes caused by carbon dioxide injection. Saturation variability is a key factor in predicting the capacity of carbon dioxide storage and methane recovery. Based on this variable saturation model, a set of related variables including capillary pressure, relative permeability, porosity, a coupled adsorption model, and concentration and temperature equations were solved. From the results of the simulation, historical data agree with the variable saturation model as well as the adsorption model constructed from Langmuir equations. Carbon dioxide sequestration in the Appalachian basin is modeled as an example in this paper. The results of the study and the developed models can provide projections for CO 2 sequestration and methane recovery in coal-beds within different regional specifics
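
    A hedged sketch of the Langmuir-type adsorption behaviour mentioned above, using the extended (binary) Langmuir isotherm as one standard way to couple CO2 and CH4 partial pressures in the coal matrix; all constants are illustrative, not values from the Appalachian basin model.

        def extended_langmuir(p_co2, p_ch4,
                              vl_co2=25.0, pl_co2=2.0,   # Langmuir volume (m3/t) and pressure (MPa) for CO2
                              vl_ch4=15.0, pl_ch4=4.0):  # ... and for CH4 (illustrative)
            b_co2, b_ch4 = 1.0 / pl_co2, 1.0 / pl_ch4
            denom = 1.0 + b_co2 * p_co2 + b_ch4 * p_ch4
            v_co2 = vl_co2 * b_co2 * p_co2 / denom       # adsorbed CO2 content
            v_ch4 = vl_ch4 * b_ch4 * p_ch4 / denom       # adsorbed CH4 content
            return v_co2, v_ch4

        # As the CO2 partial pressure rises during injection, adsorbed CH4 is displaced.
        for p_co2 in (0.0, 2.0, 6.0):
            print(p_co2, extended_langmuir(p_co2=p_co2, p_ch4=3.0))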

  1. Micromechanics of non-active clays in saturated state and DEM modelling

    Directory of Open Access Journals (Sweden)

    Pagano Arianna Gea

    2017-01-01

    Full Text Available The paper presents a conceptual micromechanical model for the 1-D compression behaviour of non-active clays in the saturated state. An experimental investigation was carried out on kaolin clay samples saturated with fluids of different pH and dielectric permittivity. The effect of pore fluid characteristics on the one-dimensional compressibility behaviour of kaolin was investigated. A three-dimensional Discrete Element Method (DEM) model was implemented in order to simulate the response of saturated kaolin observed during the experiments. A complex contact model was introduced, considering both the mechanical and physico-chemical microscopic interactions between clay particles. A simple analysis with spherical particles only was performed as a preliminary step in the DEM study in the elastic regime.

  2. The saturation phenomenon in Kr85 ion implantation in metallic targets

    International Nuclear Information System (INIS)

    Baptista Junior, V. de P.

    1978-01-01

    Noble gases, such as krypton containing the radioactive isotope Kr 85, can be stably incorporated into a wide variety of solids and used as tracers or as a sensitive probe to measure chemical and physical phenomena. A general review is presented of the methods of incorporation, with emphasis on ion bombardment and saturation. The problem of saturation of metal targets was correlated to certain properties in order to develop a mathematical approach. Six properties were chosen as the most significant to produce a simple model of saturation in experiments of ion implantation with Kr 85 at 45 keV. The accuracy of the model is limited by the experimental error, the available data and its own simplicity. (Author) [pt

  3. Transient response of a cylindrical cavity in viscoelastic saturated porous medium

    Directory of Open Access Journals (Sweden)

    LIU Tao

    2016-10-01

    Full Text Available The study of the dynamic characteristics of fluid-solid coupling systems in saturated porous media is of significant academic value and has broad potential applications. In this paper, the transient response of a cylindrical cavity with a circular lining in an infinite viscoelastic saturated porous medium is studied, and the corresponding results can be used in the design of foundation engineering, such as tunnel analyses in saturated soil, nuclear waste disposal engineering, and the exploitation and utilization of geothermal reservoirs. Firstly, based on porous media theory, the governing equations of the coupled system are presented, and the corresponding boundary conditions, initial conditions and joint conditions are derived. Then, the differential quadrature element method and the second-order backward difference scheme are applied to discretize the governing differential equations of the coupled system on the spatial and temporal domains, respectively. Finally, the Newton-Raphson method is adopted to solve the discretized equations with the initial conditions, the transient responses of the coupled system are analyzed, the effects of the parameters are considered, and the validity of the numerical method is verified.

  4. Space Charge Saturated Sheath Regime and Electron Temperature Saturation in Hall Thrusters

    International Nuclear Information System (INIS)

    Raitses, Y.; Staack, D.; Smirnov, A.; Fisch, N.J.

    2005-01-01

    Secondary electron emission in Hall thrusters is predicted to lead to space charge saturated wall sheaths resulting in enhanced power losses in the thruster channel. Analysis of experimentally obtained electron-wall collision frequency suggests that the electron temperature saturation, which occurs at high discharge voltages, appears to be caused by a decrease of the Joule heating rather than by the enhancement of the electron energy loss at the walls due to a strong secondary electron emission

  5. Saturated Zone Colloid-Facilitated Transport

    International Nuclear Information System (INIS)

    Wolfsberg, A.; Reimus, P.

    2001-01-01

    The purpose of the Saturated Zone Colloid-Facilitated Transport Analysis and Modeling Report (AMR), as outlined in its Work Direction and Planning Document (CRWMS MandO 1999a), is to provide retardation factors for colloids with irreversibly-attached radionuclides, such as plutonium, in the saturated zone (SZ) between their point of entrance from the unsaturated zone (UZ) and downgradient compliance points. Although it is not exclusive to any particular radionuclide release scenario, this AMR especially addresses those scenarios pertaining to evidence from waste degradation experiments, which indicate that plutonium and perhaps other radionuclides may be irreversibly attached to colloids. This report establishes the requirements and elements of the design of a methodology for calculating colloid transport in the saturated zone at Yucca Mountain. In previous Total Systems Performance Assessment (TSPA) analyses, radionuclide-bearing colloids were assumed to be unretarded in their migration. Field experiments in fractured tuff at Yucca Mountain and in porous media at other sites indicate that colloids may, in fact, experience retardation relative to the mean pore-water velocity, suggesting that contaminants associated with colloids should also experience some retardation. Therefore, this analysis incorporates field data where available and a theoretical framework when site-specific data are not available for estimating plausible ranges of retardation factors in both saturated fractured tuff and saturated alluvium. The distribution of retardation factors for tuff and alluvium are developed in a form consistent with the Performance Assessment (PA) analysis framework for simulating radionuclide transport in the saturated zone. To improve on the work performed so far for the saturated-zone flow and transport modeling, concerted effort has been made in quantifying colloid retardation factors in both fractured tuff and alluvium. The fractured tuff analysis used recent data

  6. A new bias field correction method combining N3 and FCM for improved segmentation of breast density on MRI.

    Science.gov (United States)

    Lin, Muqing; Chan, Siwa; Chen, Jeon-Hor; Chang, Daniel; Nie, Ke; Chen, Shih-Ting; Lin, Cheng-Ju; Shih, Tzu-Ching; Nalcioglu, Orhan; Su, Min-Ying

    2011-01-01

    Quantitative breast density is known as a strong risk factor associated with the development of breast cancer. Measurement of breast density based on three-dimensional breast MRI may provide very useful information. One important step for quantitative analysis of breast density on MRI is the correction of field inhomogeneity to allow an accurate segmentation of the fibroglandular tissue (dense tissue). A new bias field correction method combining the nonparametric nonuniformity normalization (N3) algorithm and a fuzzy-C-means (FCM)-based inhomogeneity correction algorithm is developed in this work. The analysis is performed on non-fat-sat T1-weighted images acquired using a 1.5 T MRI scanner. A total of 60 breasts from 30 healthy volunteers were analyzed. N3 is known as a robust correction method, but it cannot correct a strong bias field over a large area. The FCM-based algorithm can correct the bias field over a large area, but it may change the tissue contrast and affect the segmentation quality. The proposed algorithm applies N3 first, followed by FCM, and then the generated bias field is smoothed using a Gaussian kernel and B-spline surface fitting to minimize the problem of mistakenly changed tissue contrast. The segmentation results based on the N3+FCM corrected images were compared to the N3 and FCM alone corrected images and images corrected with another method, coherent local intensity clustering (CLIC). The segmentation quality based on the different correction methods was evaluated by a radiologist and ranked. The authors demonstrated that the iterative N3+FCM correction method brightens the signal intensity of fatty tissues and separates the histogram peaks between the fibroglandular and fatty tissues to allow an accurate segmentation between them. In the first reading session, the radiologist found (N3+FCM > N3 > FCM) ranking in 17 breasts, (N3+FCM > N3 = FCM) ranking in 7 breasts, (N3+FCM = N3 > FCM) in 32 breasts, (N3+FCM = N3 = FCM) in 2 breasts, and (N3 > N3
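
    To make the FCM step concrete, here is a minimal, self-contained sketch on a synthetic 2-D image: a two-class fuzzy C-means fit gives a piecewise-constant estimate of the true intensities, the ratio observed/estimate is taken as the bias field and smoothed with a Gaussian kernel before being divided out. The N3 stage, the body masking and the B-spline surface fitting of the actual method are omitted, and all parameters are illustrative.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)
        truth = np.where(rng.random((128, 128)) < 0.4, 200.0, 60.0)   # two "tissues"
        yy, xx = np.mgrid[0:128, 0:128]
        image = truth * (1.0 + 0.4 * xx / 127.0)                      # slow multiplicative bias

        # minimal two-class fuzzy C-means (fuzzifier m = 2) on the intensities
        c = np.array([80.0, 180.0])                                   # initial centroids
        flat = image.ravel()
        for _ in range(30):
            d = np.abs(flat[:, None] - c[None, :]) + 1e-9
            u = (1.0 / d ** 2) / (1.0 / d ** 2).sum(axis=1, keepdims=True)   # memberships
            c = (u ** 2 * flat[:, None]).sum(axis=0) / (u ** 2).sum(axis=0)  # centroid update

        reconstruction = (u ** 2 @ c) / (u ** 2).sum(axis=1)          # piecewise-constant estimate
        bias = gaussian_filter((flat / reconstruction).reshape(image.shape), sigma=15)
        corrected = image / bias
        print("residual bias range:", float((corrected / truth).min()), float((corrected / truth).max()))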

  7. Systematic underestimation of the age of samples with saturating exponential behaviour and inhomogeneous dose distribution

    International Nuclear Information System (INIS)

    Brennan, B.J.

    2000-01-01

    In luminescence and ESR studies, a systematic underestimate of the (average) equivalent dose, and thus also the age, of a sample can occur when there is significant variation of the natural dose within the sample and some regions approach saturation. This is demonstrated explicitly for a material that exhibits a single-saturating-exponential growth of signal with dose. The result is valid for any geometry (e.g. a plain layer, spherical grain, etc.) and some illustrative cases are modelled, with the age bias exceeding 10% in extreme cases. If the dose distribution within the sample can be modelled accurately, it is possible to correct for the bias in the estimates of equivalent dose and age. While quantifying the effect would be more difficult, similar systematic biases in dose and age estimates are likely in other situations more complex than the one modelled
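
    A short numerical illustration of the bias mechanism (with arbitrary values): if the signal grows as S(D) = S_max (1 - exp(-D / D0)) and the dose varies across the sample, inverting the averaged signal gives an apparent dose below the true average dose.

        import numpy as np

        D0 = 100.0                                   # characteristic (saturation) dose, Gy
        doses = np.linspace(50.0, 250.0, 101)        # dose profile across the sample, Gy
        signals = 1.0 - np.exp(-doses / D0)          # saturating exponential growth, S_max = 1

        true_mean_dose = doses.mean()
        apparent_dose = -D0 * np.log(1.0 - signals.mean())   # invert the averaged signal
        print(f"true mean dose = {true_mean_dose:.1f} Gy, apparent dose = {apparent_dose:.1f} Gy, "
              f"underestimate = {100 * (1 - apparent_dose / true_mean_dose):.1f} %")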

  8. TU-H-206-04: An Effective Homomorphic Unsharp Mask Filtering Method to Correct Intensity Inhomogeneity in Daily Treatment MR Images

    International Nuclear Information System (INIS)

    Yang, D; Gach, H; Li, H; Mutic, S

    2016-01-01

    Purpose: The daily treatment MRIs acquired on MR-IGRT systems, like diagnostic MRIs, suffer from intensity inhomogeneity issue, associated with B1 and B0 inhomogeneities. An improved homomorphic unsharp mask (HUM) filtering method, automatic and robust body segmentation, and imaging field-of-view (FOV) detection methods were developed to compute the multiplicative slow-varying correction field and correct the intensity inhomogeneity. The goal is to improve and normalize the voxel intensity so that the images could be processed more accurately by quantitative methods (e.g., segmentation and registration) that require consistent image voxel intensity values. Methods: HUM methods have been widely used for years. A body mask is required, otherwise the body surface in the corrected image would be incorrectly bright due to the sudden intensity transition at the body surface. In this study, we developed an improved HUM-based correction method that includes three main components: 1) Robust body segmentation on the normalized image gradient map, 2) Robust FOV detection (needed for body segmentation) using region growing and morphologic filters, and 3) An effective implementation of HUM using repeated Gaussian convolution. Results: The proposed method was successfully tested on patient images of common anatomical sites (H/N, lung, abdomen and pelvis). Initial qualitative comparisons showed that this improved HUM method outperformed three recently published algorithms (FCM, LEMS, MICO) in both computation speed (by 50+ times) and robustness (in intermediate to severe inhomogeneity situations). Currently implemented in MATLAB, it takes 20 to 25 seconds to process a 3D MRI volume. Conclusion: Compared to more sophisticated MRI inhomogeneity correction algorithms, the improved HUM method is simple and effective. The inhomogeneity correction, body mask, and FOV detection methods developed in this study would be useful as preprocessing tools for many MRI-related research and
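
    A minimal sketch of the core HUM step on a synthetic 2-D slice, assuming a known body mask: the correction field is a heavily smoothed (masked, normalized Gaussian) version of the image, and each voxel is rescaled by mean/low-pass so that slow intensity drifts are divided out. The robust body-segmentation and FOV-detection stages of the abstract are not reproduced here.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(3)
        yy, xx = np.mgrid[0:160, 0:160]
        body = (xx - 80) ** 2 + (yy - 80) ** 2 < 70 ** 2          # circular "body" mask
        image = np.where(body, 100.0 + 20.0 * rng.random((160, 160)), 0.0)
        image *= 1.0 + 0.5 * yy / 159.0                            # simulated slow shading

        field = image.copy()
        for _ in range(4):                                         # repeated masked Gaussian smoothing
            num = gaussian_filter(np.where(body, field, 0.0), sigma=12)
            den = gaussian_filter(body.astype(float), sigma=12)
            field = np.where(body, num / np.maximum(den, 1e-6), 0.0)

        corrected = np.zeros_like(image)
        corrected[body] = image[body] * image[body].mean() / np.maximum(field[body], 1e-6)
        print("coefficient of variation before/after:",
              round(float(image[body].std() / image[body].mean()), 3),
              round(float(corrected[body].std() / corrected[body].mean()), 3))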

  9. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples

    Science.gov (United States)

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-01

    Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the measurements of the spectra may be performed on different instruments and the difference between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model using the spectra of the samples measured on two instruments, named the master and slave instrument, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. This fact makes the coefficients of the linear models constructed from the spectra measured on different instruments similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical applications.
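
    A hedged sketch, not the authors' exact constrained optimization: the master coefficients b_m are adapted to the slave instrument from a handful of slave-measured spectra by penalized least squares that keeps the slave coefficients close in profile to b_m, i.e. b_s = argmin ||X_s b - y||^2 + lam ||b - b_m||^2. All data below are synthetic.

        import numpy as np

        rng = np.random.default_rng(0)
        n_wl = 60
        b_m = 0.1 * np.sin(np.linspace(0.0, 3.0 * np.pi, n_wl))   # master model coefficients
        X_s = rng.random((8, n_wl))                               # 8 spectra measured on the slave
        y = X_s @ (1.05 * b_m + 0.002)                            # reference values (synthetic)

        lam = 1.0                                                 # strength of the profile constraint
        A = X_s.T @ X_s + lam * np.eye(n_wl)
        b_s = np.linalg.solve(A, X_s.T @ y + lam * b_m)           # transferred (slave) coefficients
        print("max |b_s - b_m| =", float(np.abs(b_s - b_m).max()))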

  10. Fat-saturated diffusion-weighted imaging with three-dimensional MP-RAGE sequence

    International Nuclear Information System (INIS)

    Numano, Tomokazu; Homma, Kazuhiro; Takahashi, Nobuyuki; Hirose, Takeshi

    2005-01-01

    Image misrepresentation due to chemical shifts can create image artifacts on MR images, and distinguishing the tissue organization and the affected area can be difficult due to chemical shift artifacts. Chemical shift selective (CHESS) excitation is a method of decreasing chemical shift artifacts. In this study we have developed a new sequence for fat-saturated three-dimensional diffusion-weighted MR imaging. This imaging was done during in vivo studies using an animal-experiment MR imaging system at 2.0 T. In this sequence a preparation phase with a CHESS-90 deg RF-motion probing gradient (MPG)-180 deg RF-MPG-90 deg RF pulse train was used to sensitize the magnetization to fat-saturated diffusion. Centric k-space acquisition order is necessary to minimize saturation effects from tissues with short relaxation times. From experimental results obtained with a phantom, the effect of the diffusion weighting and the effect of the fat saturation were confirmed. From rat experimental results, fat-saturated diffusion-weighted image data (0.55 x 0.55 x 0.55 mm 3 voxel size) were obtained. This sequence was useful for in vivo imaging. (author)

  11. Modified Ponseti method of treatment for correction of neglected clubfoot in older children and adolescents--a preliminary report.

    Science.gov (United States)

    Bashi, Ramin Haj Zargar; Baghdadi, Taghi; Shirazi, Mehdi Ramezan; Abdi, Reza; Aslani, Hossein

    2016-03-01

    Congenital talipes equinovarus may be the most common congenital orthopedic condition requiring treatment. Nonoperative treatment including different methods is generally accepted as the first step in the deformity correction. Ignacio Ponseti introduced his nonsurgical approach to the treatment of clubfoot in the early 1940s. The method is reportedly successful in treating clubfoot in patients up to 9 years of age. However, whether age at the beginning of treatment affects the rate of effective correction and relapse is unknown. We have applied the Ponseti method successfully with some modifications for 11 patients with a mean age of 11.2 years (range, 6 to 19 years) with neglected and untreated clubbed feet. The mean follow-up was 15 months (12 to 36 months). Correction was achieved with a mean of nine casts (six to 13). Clinically, 17 out of 18 feet (94.4%) were considered to achieve a good result with no need for further surgery. The application of this method of treatment is very simple and also cheap in developing countries with limited financial and social resources for health service. To the best of the authors' knowledge, such a modified method as a correction method for clubfoot in older children and adolescents has not been applied previously for neglected clubfeet in older children in the literature.

  12. Dissipative dynamics with the corrected propagator method. Numerical comparison between fully quantum and mixed quantum/classical simulations

    International Nuclear Information System (INIS)

    Gelman, David; Schwartz, Steven D.

    2010-01-01

    The recently developed quantum-classical method has been applied to the study of dissipative dynamics in multidimensional systems. The method is designed to treat many-body systems consisting of a low dimensional quantum part coupled to a classical bath. Assuming the approximate zeroth order evolution rule, the corrections to the quantum propagator are defined in terms of the total Hamiltonian and the zeroth order propagator. Then the corrections are taken to the classical limit by introducing the frozen Gaussian approximation for the bath degrees of freedom. The evolution of the primary part is governed by the corrected propagator yielding the exact quantum dynamics. The method has been tested on two model systems coupled to a harmonic bath: (i) an anharmonic (Morse) oscillator and (ii) a double-well potential. The simulations have been performed at zero temperature. The results have been compared to the exact quantum simulations using the surrogate Hamiltonian approach.

  13. Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods

    International Nuclear Information System (INIS)

    Narita, Y.; Eberl, S.; Nakamura, T.

    1996-01-01

    Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter and scatter plus primary) were simulated for 99m Tc and 201 Tl for numerical chest phantoms. Data were reconstructed with ordered-subset ML-EM algorithm including attenuation correction using the transmission data. In the chest phantom simulation, TDCS provided better S/N than TEW, and better accuracy, i.e., 1.0% vs -7.2% in myocardium, and -3.7% vs -30.1% in the ventricular chamber for 99m Tc with TDCS and TEW, respectively. For 201 Tl, TDCS provided good visual and quantitative agreement with simulated true primary image without noticeably increasing the noise after scatter correction. Overall TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT

  14. Evaluation of Regression and Neuro_Fuzzy Models in Estimating Saturated Hydraulic Conductivity

    Directory of Open Access Journals (Sweden)

    J. Behmanesh

    2015-06-01

    Full Text Available The study of soil hydraulic properties such as saturated and unsaturated hydraulic conductivity is required in environmental investigations. Despite numerous studies, measuring saturated hydraulic conductivity by direct methods is still costly, time consuming and demanding, so estimating saturated hydraulic conductivity with rapid, low-cost methods of acceptable accuracy, such as pedo-transfer functions, has been developed. The purpose of this research was to compare and evaluate 11 pedo-transfer functions and an Adaptive Neuro-Fuzzy Inference System (ANFIS) for estimating the saturated hydraulic conductivity of soil. To this end, saturated hydraulic conductivity and physical properties at 40 points in Urmia were determined. The excavated soil was used in the lab to determine its easily accessible parameters. The results showed that, among the existing models, the Aimrun et al. model gave the best estimate of soil saturated hydraulic conductivity. For this model, the Root Mean Square Error and Mean Absolute Error were 0.174 and 0.028 m/day, respectively. The results of the present research emphasise the importance of effective porosity as an important, accessible parameter for the accuracy of pedo-transfer functions. Sand and silt percentage, bulk density and soil particle density were selected for application in 561 ANFIS models. In the training phase of the best ANFIS model, the R2 and RMSE were 1 and 1.2×10-7, respectively. These values in the test phase were 0.98 and 0.0006, respectively. Comparison of the regression and ANFIS models showed that the ANFIS model gave better results than the regression functions, and the Neuro-Fuzzy Inference System was capable of estimating with high accuracy for various soil textures.
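
    For reference, the two error measures quoted above are straightforward to compute from paired measured and estimated conductivities; the values in this sketch are placeholders, not the Urmia data.

        import numpy as np

        ks_measured = np.array([0.35, 0.80, 1.20, 0.55, 2.10])    # m/day (illustrative)
        ks_estimated = np.array([0.40, 0.70, 1.35, 0.50, 1.90])   # m/day, e.g. from a pedo-transfer function

        rmse = np.sqrt(np.mean((ks_estimated - ks_measured) ** 2))
        mae = np.mean(np.abs(ks_estimated - ks_measured))
        print(f"RMSE = {rmse:.3f} m/day, MAE = {mae:.3f} m/day")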

  15. Output feedback control of linear fractional transformation systems subject to actuator saturation

    Science.gov (United States)

    Ban, Xiaojun; Wu, Fen

    2016-11-01

    In this paper, the control problem for a class of linear parameter varying (LPV) plant subject to actuator saturation is investigated. For the saturated LPV plant depending on the scheduling parameters in linear fractional transformation (LFT) fashion, a gain-scheduled output feedback controller in the LFT form is designed to guarantee the stability of the closed-loop LPV system and provide optimised disturbance/error attenuation performance. By using the congruent transformation, the synthesis condition is formulated as a convex optimisation problem in terms of a finite number of LMIs for which efficient optimisation techniques are available. The nonlinear inverted pendulum problem is employed to demonstrate the effectiveness of the proposed approach. Moreover, the comparison between our LPV saturated approach with an existing linear saturated method reveals the advantage of the LPV controller when handling nonlinear plants.

  16. Misconceptions in Reporting Oxygen Saturation

    NARCIS (Netherlands)

    Toffaletti, John; Zijlstra, Willem G.

    2007-01-01

    BACKGROUND: We describe some misconceptions that have become common practice in reporting blood gas and cooximetry results. In 1980, oxygen saturation was incorrectly redefined in a report of a new instrument for analysis of hemoglobin (Hb) derivatives. Oxygen saturation (sO(2)) was redefined as the

  17. Torque Analysis With Saturation Effects for Non-Salient Single-Phase Permanent-Magnet Machines

    DEFF Research Database (Denmark)

    Lu, Kaiyuan; Ritchie, Ewen

    2011-01-01

    The effects of saturation on torque production for non-salient, single-phase, permanent-magnet machines are studied in this paper. An analytical torque equation is proposed to predict the instantaneous torque with saturation effects. Compared to the existing methods, it is computationally faster......-element results, and experimental results obtained on a prototype single-phase permanent-magnet machine....

  18. The relation between oxygen saturation level and retinopathy of prematurity

    Directory of Open Access Journals (Sweden)

    Mohammad Gharavi Fard

    2016-03-01

    Full Text Available Introduction: Oxygen therapy used for preterm infant disease might be associated with oxygen toxicity or oxidative stress. The exact oxygen concentration needed to control and maintain the arterial oxygen saturation balance is not entirely clear. We aimed to compare the efficacy of higher versus lower oxygen saturations on the development of severe retinopathy of prematurity, which is a major cause of blindness in preterm neonates. Methods: PubMed was searched to obtain the relevant articles. A total of seven articles were included after studying the titles, abstracts, and the full text of the articles retrieved in the initial search. Inclusion criteria were all English-language human randomized controlled clinical trials, with no time limitation, which studied the efficacy of low versus high oxygen saturation measured by pulse oximetry in preterm infants. Results: It can be suggested that lower limits of oxygen saturation have higher efficacy at a postmenstrual age of ≤28 weeks in preterm neonates. This relation has been demonstrated in five large clinical trials, including the three BOOST trials, COT, and SUPPORT. Discussion: Applying higher concentrations of oxygen supplementation at a postmenstrual age ≥32 weeks reduced the development of retinopathy of prematurity. Lower oxygen saturation targets decreased the incidence and the development of retinopathy of prematurity in preterm neonates when applied soon after birth. Conclusions: Targeting levels of oxygen saturation in the low or high range should be performed cautiously, with attention to the postmenstrual age of preterm infants at the time of starting the procedures.

  19. Tracking Controller for Intrinsic Output Saturated Systems in Presence of Amplitude and Rate Input Saturations

    DEFF Research Database (Denmark)

    Chater, E.; Giri, F.; Guerrero, Josep M.

    2014-01-01

    We consider the problem of controlling plants that are subject to multiple saturation constraints. In particular, we are interested in linear systems whose input is subject to amplitude and rate constraints of saturation type. Furthermore, the considered system's output is also subject to an intrinsic...

  20. Interrelated temperature dependence of bulk etch rate and track length saturation time in CR-39 detector

    International Nuclear Information System (INIS)

    Azooz, A.A.; Al-Jubbori, M.A.

    2013-01-01

    Highlights: • New empirical parameterization of the CR-39 bulk etch rate. • Bulk etch rate measurements using two different methods give consistent results. • Temperature independence of the track saturation length. • Two empirical relations between bulk etch rate and temperature are suggested. • Simple inverse relation between bulk etch rate and track saturation time. -- Abstract: Experimental measurements of the etching solution temperature dependence of the bulk etch rate using two independent methods revealed a few interesting properties. It is found that while the track saturation length is independent of etching temperature, the etching time needed to reach saturation is strongly temperature-dependent. It is demonstrated that there is a simple systematic inverse relation between track saturation time and etching solution temperature. In addition, although the relation between the bulk etch rate and the etching solution temperature can be reasonably described by a modified form of the Arrhenius equation, better fits can be obtained with another equation suggested in this work
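
    A brief illustration of the two relations highlighted above, assuming an Arrhenius-type bulk etch rate and a temperature-independent saturation length so that t_sat ~ L_sat / V_B; the activation energy and prefactor are illustrative, not the fitted values of the paper.

        import numpy as np

        k_B = 8.617e-5        # Boltzmann constant, eV/K
        E_a = 0.8             # illustrative activation energy, eV
        A = 8.6e11            # illustrative prefactor, um/h
        L_sat = 20.0          # track saturation length, um (taken temperature independent)

        for T_c in (50.0, 60.0, 70.0):
            T = T_c + 273.15
            v_bulk = A * np.exp(-E_a / (k_B * T))   # Arrhenius-type bulk etch rate, um/h
            t_sat = L_sat / v_bulk                  # time to reach the saturation length, h
            print(f"T = {T_c:.0f} C  V_B = {v_bulk:5.2f} um/h  t_sat = {t_sat:6.1f} h")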

  1. Diagnostics and correction of disregulation states by physical methods

    OpenAIRE

    Gorsha, O. V.; Gorsha, V. I.

    2017-01-01

    Nicolaus Copernicus University, Toruń, Poland; Ukrainian Research Institute for Medicine of Transport, Odesa, Ukraine. Gorsha O. V., Gorsha V. I. Diagnostics and correction of disregulation states by physical methods. Toruń, Odesa, 2017.

  2. Determination of diagnostic standards on saturated soil extracts for cut roses grown in greenhouses.

    Science.gov (United States)

    Franco-Hermida, John Jairo; Quintero, María Fernanda; Cabrera, Raúl Iskander; Guzman, José Miguel

    2017-01-01

    This work comprises the theoretical determination and validation of diagnostic standards for the analysis of saturated soil extracts for cut rose flower crops (Rosa spp.) growing in the Bogota Plateau, Colombia. The data included 684 plant tissue analyses and 684 corresponding analyses of saturated soil extracts, all collected between January 2009 and June 2013. The tissue and soil samples were selected from 13 rose farms, and from cultivars grafted on the 'Natal Briar' rootstock. These concurrent samples of soil and plant tissues represented 251 production units (locations) of approximately 10,000 m2 distributed across the study area. The standards were conceived as a tool to improve the nutritional balance in the leaf tissue of rose plants and thereby define the norms for expressing optimum productive potential relative to nutritional conditions in the soil. To this end, previously determined diagnostic standards for rose leaf tissues were employed to obtain rates of foliar nutritional balance at each analyzed location and as criteria for determining the diagnostic norms for saturated soil extracts. Applying this methodology to the foliar analyses showed a higher, significant correlation for the diagnostic indices. A similar behavior was observed in the analysis of saturated soil extracts, making it a powerful tool for integrated nutritional diagnosis. Leaf analyses determine the nutrients that most limit high yield, and analyses of saturated soil extracts make it possible to correct the fertigation formulations applied to soils or substrates. Recommendations are proposed to improve the balance in the soil-plant system, with which a yield increase becomes more probable. The main recommendations to increase and improve rose flower yields are: continuously check the pH values of the saturated soil extract (SSE), reduce the amounts of P, Fe, Zn and Cu in fertigation solutions, and carefully analyze the status of Mn in the soil-plant system.

  3. On the correctness of the thermoluminescent high-temperature ratio (HTR) method for estimating ionization density effects in mixed radiation fields

    International Nuclear Information System (INIS)

    Bilski, Pawel

    2010-01-01

    The high-temperature ratio (HTR) method which exploits changes in the LiF:Mg,Ti glow-curve due to high-LET radiation, has been used for several years to estimate LET in an unknown radiation field. As TL efficiency is known to decrease after doses of densely ionizing radiation, a LET estimate is used to correct the TLD-measured values of dose. The HTR method is purely empirical and its general correctness is questionable. The validity of the HTR method was investigated by theoretical simulation of various mixed radiation fields. The LET eff values estimated with the HTR method for mixed radiation fields were found in general to be incorrect, in some cases underestimating the true values of dose-averaged LET by an order of magnitude. The method produced correct estimates of average LET only in cases of almost mono-energetic fields (i.e. in non-mixed radiation conditions). The value of LET eff found by the HTR method may therefore be treated as a qualitative indicator of increased LET, but not as a quantitative estimator of average LET. However, HTR-based correction of the TLD-measured dose value (HTR-B method) was found to be quite reliable. In all cases studied, application of this technique improved the result. Most of the measured doses fell within 10% of the true values. A further empirical improvement to the method is proposed. One may therefore recommend the HTR-B method to correct for decreased TL efficiency in mixed high-LET fields.

  4. The usefulness and the problems of attenuation correction using simultaneous transmission and emission data acquisition method. Studies on normal volunteers and phantom

    International Nuclear Information System (INIS)

    Kijima, Tetsuji; Kumita, Shin-ichiro; Mizumura, Sunao; Cho, Keiichi; Ishihara, Makiko; Toba, Masahiro; Kumazaki, Tatsuo; Takahashi, Munehiro.

    1997-01-01

    Attenuation correction using a simultaneous transmission data (TCT) and emission data (ECT) acquisition method was applied to 201Tl myocardial SPECT in ten normal adults and a phantom in order to validate the efficacy of attenuation correction using this method. The normal adult studies demonstrated improved 201Tl accumulation in the septal wall and the posterior wall of the left ventricle and relatively decreased activities in the lateral wall with attenuation correction (statistically significant). Organs with high 201Tl uptake, such as the liver and the stomach, pushed up the activities in the septal wall and the posterior wall. Cardiac dynamic phantom studies showed that the partial volume effect due to cardiac motion contributed to under-correction of the apex, which might be overcome using gated SPECT. Although simultaneous TCT and ECT acquisition is considered an advantageous method for attenuation correction, mis-correction of specific myocardial segments should be taken into account when assessing attenuation-corrected images. (author)

  5. A New Variational Method for Bias Correction and Its Applications to Rodent Brain Extraction.

    Science.gov (United States)

    Chang, Huibin; Huang, Weimin; Wu, Chunlin; Huang, Su; Guan, Cuntai; Sekar, Sakthivel; Bhakoo, Kishore Kumar; Duan, Yuping

    2017-03-01

    Brain extraction is an important preprocessing step for further analysis of brain MR images. Significant intensity inhomogeneity can be observed in rodent brain images due to the high-field MRI technique. Unlike most existing brain extraction methods that require bias-corrected MRI, we present a high-order and L0-regularized variational model for bias correction and brain extraction. The model is composed of a data fitting term, a piecewise constant regularization and a smooth regularization, and is constructed on a 3-D formulation for medical images with anisotropic voxel sizes. We propose an efficient multi-resolution algorithm for fast computation. At each resolution layer, we solve an alternating direction scheme, all subproblems of which have closed-form solutions. The method is tested on three T2-weighted acquisition configurations comprising a total of 50 rodent brain volumes, acquired at field strengths of 4.7 Tesla, 9.4 Tesla and 17.6 Tesla, respectively. On one hand, we compare the results of bias correction with N3 and N4 in terms of the coefficient of variation on 20 different tissues of rodent brain. On the other hand, the results of brain extraction are compared against manually segmented gold standards, BET, BSE and 3-D PCNN based on a number of metrics. With its high accuracy and efficiency, the proposed method can facilitate automatic processing of large-scale brain studies.

  6. Evaluation of a scattering correction method for high energy tomography

    Science.gov (United States)

    Tisseur, David; Bhatia, Navnina; Estre, Nicolas; Berge, Léonie; Eck, Daniel; Payan, Emmanuel

    2018-01-01

    One of the main drawbacks of Cone Beam Computed Tomography (CBCT) is the contribution of the photons scattered by the object and the detector. Scattered photons are deflected from their original path after their interaction with the object. This additional contribution of the scattered photons results in increased measured intensities, since the scattered intensity simply adds to the transmitted intensity. The effect is seen as an overestimation of the measured intensity and thus an underestimation of absorption, which produces artifacts such as cupping, shading and streaks in the reconstructed images. Moreover, the scattered radiation biases quantitative tomographic reconstruction (for example atomic number and volumic mass measurement with the dual-energy technique). The effect can be significant, and difficult to correct, in the MeV energy range for large objects because of the higher Scatter to Primary Ratio (SPR). Additionally, the incident high-energy photons which are scattered by the Compton effect are more forward directed and hence more likely to reach the detector. Moreover, for the MeV energy range, the contribution of the photons produced by pair production and the Bremsstrahlung process also becomes important. We propose an evaluation of a scattering correction technique based on the method named Scatter Kernel Superposition (SKS). The algorithm uses a continuously thickness-adapted kernels method. The analytical parameterizations of the scatter kernels are derived in terms of material thickness, to form continuously thickness-adapted kernel maps used to correct the projections. This approach has proved efficient in producing better sampling of the kernels with respect to the object thickness. The technique offers applicability over a wide range of imaging conditions and requires no extra hardware, which is a major advantage especially in those cases where
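
    As a rough illustration of the kernel-superposition idea (not the authors' exact continuously thickness-adapted parameterization), the sketch below groups pixels into thickness bins, lets each bin spread scatter through a kernel whose amplitude and width grow with thickness, and subtracts the summed scatter estimate from the measured projection. The bin edges, amplitudes and widths are hypothetical placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sks_scatter_estimate(primary, thickness, bins=(0, 50, 100, 200, 400)):
    """Simplified scatter-kernel-superposition estimate.

    primary   : 2-D array, estimated primary (scatter-free) projection
    thickness : 2-D array, water-equivalent thickness (mm) per pixel
    bins      : thickness bin edges (mm); one kernel per bin (assumed values)
    """
    scatter = np.zeros_like(primary, dtype=float)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (thickness >= lo) & (thickness < hi)
        if not mask.any():
            continue
        t_mid = 0.5 * (lo + hi)
        # Hypothetical parameterization: amplitude and width grow with thickness.
        amplitude = 0.05 + 0.001 * t_mid       # scatter-to-primary fraction
        sigma_px = 5.0 + 0.05 * t_mid          # kernel width in pixels
        # Each pixel in this thickness bin spreads scatter via a Gaussian kernel.
        scatter += gaussian_filter(primary * mask * amplitude, sigma_px)
    return scatter

def correct_projection(measured, thickness, n_iter=3):
    """Iteratively remove scatter: primary = measured - scatter(primary)."""
    primary = measured.astype(float).copy()
    for _ in range(n_iter):
        primary = np.clip(measured - sks_scatter_estimate(primary, thickness), 0, None)
    return primary
```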

  7. Intensity correction method customized for multi-animal abdominal MR imaging with 3 T clinical scanner and multi-array coil

    International Nuclear Information System (INIS)

    Mitsuda, Minoru; Yamaguchi, Masayuki; Nakagami, Ryutaro; Furuta, Toshihiro; Fujii, Hirofumi; Sekine, Norio; Niitsu, Mamoru; Moriyama, Noriyuki

    2013-01-01

    Simultaneous magnetic resonance (MR) imaging of multiple small animals in a single session increases throughput of preclinical imaging experiments. Such imaging using a 3-tesla clinical scanner with multi-array coil requires correction of intensity variation caused by the inhomogeneous sensitivity profile of the coil. We explored a method for correcting intensity that we customized for multi-animal MR imaging, especially abdominal imaging. Our institutional committee for animal experimentation approved the protocol. We acquired high resolution T1-, T2-, and T2*-weighted images and low resolution proton density-weighted images (PDWIs) of 4 rat abdomens simultaneously using a 3T clinical scanner and custom-made multi-array coil. For comparison, we also acquired T1-, T2-, and T2*-weighted volume coil images in the same rats in 4 separate sessions. We used software created in-house to correct intensity variation. We applied thresholding to the PDWIs to produce binary images that displayed only a signal-producing area, calculated multi-array coil sensitivity maps by dividing low-pass filtered PDWIs by low-pass filtered binary images pixel by pixel, and divided uncorrected T1-, T2-, or T2*-weighted images by those maps to obtain intensity-corrected images. We compared tissue contrast among the liver, spinal canal, and muscle between intensity-corrected multi-array coil images and volume coil images. Our intensity correction method performed well for all pulse sequences studied and corrected variation in original multi-array coil images without deteriorating the throughput of animal experiments. Tissue contrasts were comparable between intensity-corrected multi-array coil images and volume coil images. Our intensity correction method customized for multi-animal abdominal MR imaging using a 3T clinical scanner and dedicated multi-array coil could facilitate image interpretation. (author)
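
    A minimal sketch of the described pipeline (threshold the PDWI, low-pass filter image and mask, ratio them into a sensitivity map, divide the weighted image), assuming NumPy/SciPy; the threshold and filter width are hypothetical choices, not the authors' values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sensitivity_map(pdwi, threshold=None, sigma=10.0):
    """Estimate a multi-array-coil sensitivity map from a low-resolution PDWI."""
    if threshold is None:
        threshold = 0.1 * pdwi.max()              # assumed threshold
    binary = (pdwi > threshold).astype(float)      # signal-producing area only
    smooth_pdwi = gaussian_filter(pdwi * binary, sigma)
    smooth_bin = gaussian_filter(binary, sigma)
    sens = np.divide(smooth_pdwi, smooth_bin,
                     out=np.ones_like(smooth_pdwi), where=smooth_bin > 1e-6)
    return sens / sens.max()                       # normalize to [0, 1]

def correct_intensity(weighted_img, pdwi):
    """Divide a T1-, T2- or T2*-weighted image by the coil sensitivity map."""
    sens = sensitivity_map(pdwi)
    return np.divide(weighted_img, sens,
                     out=weighted_img.astype(float), where=sens > 1e-3)
```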

  8. Correction factors for the NMi free-air ionization chamber for medium-energy x-rays calculated with the Monte Carlo method

    International Nuclear Information System (INIS)

    Grimbergen, T.W.M.; Dijk, E. van; Vries, W. de

    1998-01-01

    A new method is described for the determination of x-ray quality dependent correction factors for free-air ionization chambers. The method is based on weighting correction factors for mono-energetic photons, which are calculated using the Monte Carlo method, with measured air kerma spectra. With this method, correction factors for electron loss, scatter inside the chamber and transmission through the diaphragm and front wall have been calculated for the NMi free-air chamber for medium-energy x-rays for a wide range of x-ray qualities in use at NMi. The newly obtained correction factors were compared with the values in use at present, which are based on interpolation of experimental data for a specific set of x-ray qualities. For x-ray qualities which are similar to this specific set, the agreement between the correction factors determined with the new method and those based on the experimental data is better than 0.1%, except for heavily filtered x-rays generated at 250 kV. For x-ray qualities dissimilar to the specific set, differences up to 0.4% exist, which can be explained by uncertainties in the interpolation procedure of the experimental data. Since the new method does not depend on experimental data for a specific set of x-ray qualities, the new method allows for a more flexible use of the free-air chamber as a primary standard for air kerma for any x-ray quality in the medium-energy x-ray range. (author)
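
    The weighting step described above reduces to averaging the mono-energetic Monte Carlo correction factors over the measured air-kerma spectrum; a minimal sketch with hypothetical numbers is:

```python
import numpy as np

def spectrum_weighted_correction(kerma_spectrum, k_mono):
    """Weight mono-energetic correction factors k_mono(E) by the measured
    air-kerma spectrum S(E) to obtain the factor for one x-ray quality."""
    w = np.asarray(kerma_spectrum, dtype=float)
    k = np.asarray(k_mono, dtype=float)
    return float(np.sum(w * k) / np.sum(w))

# Hypothetical example: electron-loss correction for one medium-energy quality,
# tabulated at four spectrum bins (relative air-kerma weights).
S = np.array([0.10, 0.40, 0.35, 0.15])           # relative air-kerma spectrum
k_loss = np.array([1.000, 1.001, 1.003, 1.006])  # assumed mono-energetic factors
print(spectrum_weighted_correction(S, k_loss))
```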

  9. Saturation and linear transport equation

    International Nuclear Information System (INIS)

    Kutak, K.

    2009-03-01

    We show that the GBW saturation model provides an exact solution to the one dimensional linear transport equation. We also show that it is motivated by the BK equation considered in the saturated regime when the diffusion and the splitting term in the diffusive approximation are balanced by the nonlinear term. (orig.)
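
    For reference, the GBW dipole cross-section referred to above has the familiar saturating form (a standard expression; the fit parameters usually quoted with it are approximate and not taken from this record):

```latex
\sigma_{\mathrm{dip}}(x,r) \;=\; \sigma_0\left[\,1-\exp\!\left(-\tfrac{1}{4}\,r^{2}Q_s^{2}(x)\right)\right],
\qquad
Q_s^{2}(x) \;=\; \left(\frac{x_0}{x}\right)^{\lambda}\,\mathrm{GeV}^{2},
```

    with fitted values of roughly sigma_0 ~ 23 mb, x_0 ~ 3e-4 and lambda ~ 0.29 reported in the GBW literature.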

  10. TECHNIQUES OF EVALUATION OF HEMOGLOBIN OXYGEN SATURATION IN CLINICAL OPHTHALMOLOGY

    Directory of Open Access Journals (Sweden)

    S. Yu. Petrov

    2016-01-01

    Full Text Available Oxygen content in body fluids and tissues is an important indicator of life support functions. A number of ocular pathologies, e.g. glaucoma, are of presumable vascular origin which means altered blood supply and oxygen circulation. Most oxygen is transported in the blood in the association with hemoglobin. When passing through the capillaries, hemoglobin releases oxygen, converting from oxygenated form to deoxygenated form. This process is accompanied by the changes in spectral characteristics of hemoglobin which result in different colors of arterial and venous blood. Photometric technique for the measurement of oxygen saturation in blood is based on the differences in light absorption by different forms of hemoglobin. The measurement of saturation is called oximetry. Pulse oximetry with assessment of tissue oxygenation is the most commonly used method in medicine. The degree of hemoglobin oxygen saturation in the eye blood vessels is the most accessible for noninvasive studies during ophthalmoscopy and informative. Numerous studies showed the importance of this parameter for the diagnosis of retinopathy of various genesis, metabolic status analysis in hyperglycemia, diagnosis and control of treatment of glaucoma and other diseases involving alterations in eye blood supply. The specific method for evaluation of oxygen concentration is the measurement of pressure of oxygen dissolved in the blood, i.e. partial pressure of oxygen. In ophthalmological practice, this parameter is measured in anterior chamber fluid evaluating oxygen level for several ophthalmopathies including different forms of glaucoma, for instillations of hypotensive eye drops as well as in vitreous body near to the optic disc under various levels of intraocular pressure. Currently, monitoring of oxygen saturation in retinal blood vessels, i.e. retinal oximetry, is well developed. This technique is based on the assessment of light absorption by blood depending on

  11. Calculation on the heat of gasification for the saturated liquid of D2

    International Nuclear Information System (INIS)

    Ge Fangfang; China Academy of Engineering Physics, Mianyang; Zhu Zhenghe; Wang Hongbin; Zhou Weimin; Chen Hao; Liu Hongjie

    2005-01-01

    In general, the saturated vapour is regarded as an ideal gas when calculating the heat of gasification of the saturated liquid. However, the result of such a calculation is not consistent with the general law if D2 is treated as an ideal gas below Tc = 38.34 K, the critical temperature. Considering the change of volume from the liquid state to the gas state, this paper employed the Clapeyron differential equation and the equation of vapour-liquid equilibrium, and then obtained the heat of gasification and the entropy from 20 K to 38 K along the saturation curve. The method avoids treating the saturated D2 vapour as an ideal gas and ignoring the volume change from the liquid state to the gas state, improving the accuracy of the calculation. (authors)
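
    The underlying relation is the standard Clapeyron equation with both phase volumes retained (a textbook statement, not a result specific to this paper):

```latex
\frac{dP_{\mathrm{sat}}}{dT} \;=\; \frac{L}{T\,\bigl(v_{\mathrm{g}} - v_{\mathrm{l}}\bigr)}
\quad\Longrightarrow\quad
L \;=\; T\,\bigl(v_{\mathrm{g}} - v_{\mathrm{l}}\bigr)\,\frac{dP_{\mathrm{sat}}}{dT},
\qquad
\Delta s_{\mathrm{vap}} \;=\; \frac{L}{T},
```

    whereas the ideal-gas shortcut replaces v_g - v_l by RT/P and drops v_l, an approximation that breaks down as T approaches the critical temperature.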

  12. Development and Assessment of a Bundle Correction Method for CHF

    International Nuclear Information System (INIS)

    Hwang, Dae Hyun; Chang, Soon Heung

    1993-01-01

    A bundle correction method, based on the conservation laws of mass, energy, and momentum in an open subchannel, is proposed for the prediction of the critical heat flux (CHF) in rod bundles from round-tube CHF correlations without detailed subchannel analysis. It takes into account the effects of the enthalpy and mass velocity distributions at the subchannel level using the first derivatives of CHF with respect to the independent parameters. Three different CHF correlations for tubes (Groeneveld's CHF table, the Katto correlation, and the Biasi correlation) have been examined with uniformly heated bundle CHF data collected from various sources. A limited number of CHF data from a non-uniformly heated rod bundle are also evaluated with the aid of Tong's F-factor. The proposed method shows satisfactory CHF predictions for rod bundles with both uniform and non-uniform power distributions. (Author)
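
    A compact statement of the first-derivative idea (a sketch of the general approach, not necessarily the authors' exact formulation): the tube correlation is evaluated at bundle-average conditions and corrected for the subchannel-level deviations of local enthalpy h and mass velocity G,

```latex
q''_{\mathrm{CHF,\,bundle}} \;\approx\;
q''_{\mathrm{CHF,\,tube}}\bigl(\bar{h},\bar{G}\bigr)
\;+\; \left.\frac{\partial q''_{\mathrm{CHF}}}{\partial h}\right|_{\bar{h},\bar{G}} \bigl(h_{\mathrm{sub}} - \bar{h}\bigr)
\;+\; \left.\frac{\partial q''_{\mathrm{CHF}}}{\partial G}\right|_{\bar{h},\bar{G}} \bigl(G_{\mathrm{sub}} - \bar{G}\bigr),
```

    where the subscript "sub" denotes the value in the limiting subchannel obtained from the conservation-law balance.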

  13. A solution thermodynamics definition of the fiber saturation point and the derivation of a wood-water phase (state) diagram

    Science.gov (United States)

    Samuel L. Zelinka; Samuel V. Glass; Joseph E. Jakes; Donald S. Stone

    2016-01-01

    The fiber saturation point (FSP) is an important concept in wood– moisture relations that differentiates between the states of water in wood and has been discussed in the literature for over 100 years. Despite its importance and extensive study, the exact theoretical definition of the FSP and the operational definition (the correct way to measure the FSP) are still...

  14. Fixed-pattern noise correction method based on improved moment matching for a TDI CMOS image sensor.

    Science.gov (United States)

    Xu, Jiangtao; Nie, Huafeng; Nie, Kaiming; Jin, Weimin

    2017-09-01

    In this paper, an improved moment matching method based on a spatial correlation filter (SCF) and bilateral filter (BF) is proposed to correct the fixed-pattern noise (FPN) of a time-delay-integration CMOS image sensor (TDI-CIS). First, the values of row FPN (RFPN) and column FPN (CFPN) are estimated and added to the original image through the SCF and BF, respectively. Then the filtered image is processed by an improved moment matching method with a moving window. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in an image captured under uniform illumination, the standard deviation of the row mean vector (SDRMV) decreases from 5.6761 LSB to 0.1948 LSB, while the standard deviation of the column mean vector (SDCMV) decreases from 15.2005 LSB to 13.1949 LSB. In addition, for different images captured by different TDI-CISs, the average decreases of SDRMV and SDCMV are 5.4922 LSB and 2.0357 LSB, respectively. Comparative experimental results indicate that the proposed method can effectively correct the FPN of different TDI-CISs while maintaining image details without any auxiliary equipment.
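
    A minimal sketch of the moment-matching step alone (omitting the paper's spatial-correlation and bilateral pre-filters; the window width is an assumed value): each column's mean and standard deviation within a moving window are matched to the window-wide reference moments, and rows can be treated analogously on the transposed image.

```python
import numpy as np

def moment_match_columns(img, win=32):
    """Match per-column mean/std to local reference moments to suppress column FPN.

    img : 2-D array (rows x cols) captured under roughly uniform illumination.
    win : moving-window width in columns (assumed value).
    """
    img = img.astype(float)
    out = img.copy()
    half = win // 2
    for c in range(img.shape[1]):
        lo, hi = max(0, c - half), min(img.shape[1], c + half + 1)
        ref_mean, ref_std = img[:, lo:hi].mean(), img[:, lo:hi].std()
        col_mean, col_std = img[:, c].mean(), img[:, c].std()
        if col_std > 1e-9:
            out[:, c] = (img[:, c] - col_mean) / col_std * ref_std + ref_mean
    return out
```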

  15. Application of the spectral correction method to reanalysis data in South Africa

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Kruger, Andries C.

    2014-01-01

    of this study is to evaluate the applicability of the method to the relevant region. The impacts from the two aspects are investigated for interior and coastal locations. Measurements from five stations in South Africa are used to evaluate the results from the spectral model S(f) = a·f^(−5/3) together...... with the hourly time series of the Climate Forecast System Reanalysis (CFSR) 10 m wind at 38 km resolution over South Africa. The results show that applying the spectral correction method to the CFSR wind data produces extreme wind atlases in acceptable agreement with the atlas made from limited measurements across...

  16. Operator quantum error-correcting subsystems for self-correcting quantum memories

    International Nuclear Information System (INIS)

    Bacon, Dave

    2006-01-01

    The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. Recently this notion has led to a more general notion of quantum error correction known as operator quantum error correction. In standard quantum error-correcting codes, one requires the ability to apply a procedure which exactly reverses on the error-correcting subspace any correctable error. In contrast, for operator error-correcting subsystems, the correction procedure need not undo the error which has occurred, but instead one must perform corrections only modulo the subsystem structure. This does not lead to codes which differ from subspace codes, but does lead to recovery routines which explicitly make use of the subsystem structure. Here we present two examples of such operator error-correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form of a simple mean-field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system will be a natural high-temperature quantum memory, robust to noise without external intervening quantum error-correction procedures

  17. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data

    Science.gov (United States)

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-01-01

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration’s (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product (CMORPH) in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution is aggregated to daily to match in-situ observations for the period 2003–2010. Study objectives are to assess bias of the satellite estimates, to identify optimum window size for application of bias correction and to test effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW’s) of 3, 5, 7, 9, …, 31 days with the aim to assess error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed existence of bias in the CMORPH rainfall. It is found that the 7 days SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach. PMID:27314363

  18. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data.

    Science.gov (United States)

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-06-15

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product (CMORPH) in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution is aggregated to daily to match in-situ observations for the period 2003-2010. Study objectives are to assess bias of the satellite estimates, to identify optimum window size for application of bias correction and to test effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW's) of 3, 5, 7, 9, …, 31 days with the aim to assess error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed existence of bias in the CMORPH rainfall. It is found that the 7 days SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach.

  19. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data

    Directory of Open Access Journals (Sweden)

    Haris Akram Bhatti

    2016-06-01

    Full Text Available With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration’s (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product (CMORPH) in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution is aggregated to daily to match in-situ observations for the period 2003–2010. Study objectives are to assess bias of the satellite estimates, to identify optimum window size for application of bias correction and to test effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW’s) of 3, 5, 7, 9, …, 31 days with the aim to assess error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed existence of bias in the CMORPH rainfall. It is found that the 7 days SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach.
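
    A minimal sketch of the multiplicative bias factor in the sequential-window scheme described above (station selection, spatial interpolation of the factors, and the MW/FW/CW/BW variants are omitted; variable names are hypothetical):

```python
import numpy as np

def sequential_bias_factors(gauge, cmorph, window=7):
    """Multiplicative bias factors for non-overlapping (sequential) windows.

    gauge, cmorph : 1-D daily rainfall series at one station (same length)
    window        : sequential window length in days (7 days performed best here)
    """
    gauge, cmorph = np.asarray(gauge, float), np.asarray(cmorph, float)
    n = len(gauge) // window * window
    g = gauge[:n].reshape(-1, window).sum(axis=1)
    c = cmorph[:n].reshape(-1, window).sum(axis=1)
    # BF = sum(gauge) / sum(CMORPH); default to 1 where CMORPH is zero.
    return np.divide(g, c, out=np.ones_like(g), where=c > 0)

def apply_bias_correction(cmorph, bias_factors, window=7):
    """Scale each day's CMORPH estimate by its window's bias factor."""
    cmorph = np.asarray(cmorph, float)
    daily_bf = np.repeat(bias_factors, window)[:len(cmorph)]
    if len(daily_bf) < len(cmorph):                      # trailing partial window
        daily_bf = np.pad(daily_bf, (0, len(cmorph) - len(daily_bf)), mode="edge")
    return cmorph * daily_bf
```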

  20. Use of regularization method in the determination of ring parameters and orbit correction

    International Nuclear Information System (INIS)

    Tang, Y.N.; Krinsky, S.

    1993-01-01

    We discuss applying the regularization method of Tikhonov to the solution of inverse problems arising in accelerator operations. This approach has been used successfully for orbit correction on the NSLS storage rings, and is presently being applied to the determination of betatron functions and phases from the measured response matrix. The inverse problem of a differential equation often leads to a set of integral equations of the first kind, which are ill-conditioned. The regularization method is used to combat this ill-posedness.
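
    As a generic illustration of the regularized solve (not the NSLS implementation): the corrector strengths theta minimizing ||R·theta + x||^2 + lambda·||theta||^2, with R the measured response matrix and x the orbit error, follow from the damped normal equations.

```python
import numpy as np

def tikhonov_orbit_correction(R, x, lam=1e-3):
    """Solve min_theta ||R @ theta + x||^2 + lam * ||theta||^2.

    R   : (n_bpm x n_corrector) response matrix
    x   : measured orbit deviations at the BPMs
    lam : regularization weight (assumed value) limiting corrector strengths
    """
    n = R.shape[1]
    A = R.T @ R + lam * np.eye(n)
    b = -R.T @ x
    return np.linalg.solve(A, b)

# Hypothetical usage with a random response matrix:
rng = np.random.default_rng(0)
R = rng.normal(size=(40, 12))
x = rng.normal(size=40)
theta = tikhonov_orbit_correction(R, x)
print(np.linalg.norm(R @ theta + x), np.linalg.norm(x))  # residual shrinks
```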

  1. Hydraulic correction method (HCM) to enhance the efficiency of SRTM DEM in flood modeling

    Science.gov (United States)

    Chen, Huili; Liang, Qiuhua; Liu, Yong; Xie, Shuguang

    2018-04-01

    Digital Elevation Model (DEM) is one of the most important controlling factors determining the simulation accuracy of hydraulic models. However, the currently available global topographic data is confronted with limitations for application in 2-D hydraulic modeling, mainly due to the existence of vegetation bias, random errors and insufficient spatial resolution. A hydraulic correction method (HCM) for the SRTM DEM is proposed in this study to improve modeling accuracy. Firstly, we employ the global vegetation corrected DEM (i.e. Bare-Earth DEM), developed from the SRTM DEM to include both vegetation height and SRTM vegetation signal. Then, a newly released DEM, removing both vegetation bias and random errors (i.e. Multi-Error Removed DEM), is employed to overcome the limitation of height errors. Last, an approach to correct the Multi-Error Removed DEM is presented to account for the insufficiency of spatial resolution, ensuring flow connectivity of the river networks. The approach involves: (a) extracting river networks from the Multi-Error Removed DEM using an automated algorithm in ArcGIS; (b) correcting the location and layout of extracted streams with the aid of Google Earth platform and Remote Sensing imagery; and (c) removing the positive biases of the raised segment in the river networks based on bed slope to generate the hydraulically corrected DEM. The proposed HCM utilizes easily available data and tools to improve the flow connectivity of river networks without manual adjustment. To demonstrate the advantages of HCM, an extreme flood event in Huifa River Basin (China) is simulated on the original DEM, Bare-Earth DEM, Multi-Error removed DEM, and hydraulically corrected DEM using an integrated hydrologic-hydraulic model. A comparative analysis is subsequently performed to assess the simulation accuracy and performance of four different DEMs and favorable results have been obtained on the corrected DEM.

  2. Resistivity Correction Factor for the Four-Probe Method: Experiment II

    Science.gov (United States)

    Yamashita, Masato; Yamaguchi, Shoji; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo

    1989-05-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F can be applied to a system consisting of a disk sample and a four-probe array. Measurements are made on isotropic graphite disks and crystalline ITO films. Factor F can correct the apparent variations of the data and lead to reasonable resistivities and sheet resistances. Factor F is also compared to other correction factors, i.e. F_ASTM and F_JIS.
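
    For orientation, the in-line four-point-probe relations into which a geometric correction factor enters are, in the common convention (a textbook sketch; the paper's definition of F may be normalized differently):

```latex
R_{\mathrm{s}} \;=\; F\,\frac{\pi}{\ln 2}\,\frac{V}{I},
\qquad
\rho \;=\; R_{\mathrm{s}}\,t \;=\; F\,\frac{\pi}{\ln 2}\,\frac{V}{I}\,t,
```

    where t is the sample thickness and F tends to unity for an infinitely extended thin sheet.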

  3. Effects of Atmospheric Refraction on an Airborne Weather Radar Detection and Correction Method

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2015-01-01

    Full Text Available This study investigates the effect of atmospheric refraction, affected by temperature, atmospheric pressure, and humidity, on airborne weather radar beam paths. Using three types of typical atmospheric background sounding data, we established a simulation model for an actual transmission path and a fitted correction path of an airborne weather radar beam during airplane take-offs and landings based on initial flight parameters and X-band airborne phased-array weather radar parameters. Errors in an ideal electromagnetic beam propagation path are much greater than those of a fitted path when atmospheric refraction is not considered. The rates of change in the atmospheric refraction index differ with weather conditions and the radar detection angles differ during airplane take-off and landing. Therefore, the airborne radar detection path must be revised in real time according to the specific sounding data and flight parameters. However, an error analysis indicates that a direct linear-fitting method produces significant errors in a negatively refractive atmosphere; a piecewise-fitting method can be adopted to revise the paths according to the actual atmospheric structure. This study provides researchers and practitioners in the aeronautics and astronautics field with updated information regarding the effect of atmospheric refraction on airborne weather radar detection and correction methods.

  4. Low-loss saturable absorbers based on tapered fibers embedded in carbon nanotube/polymer composites

    Science.gov (United States)

    Martinez, Amos; Al Araimi, Mohammed; Dmitriev, Artemiy; Lutsyk, Petro; Li, Shen; Mou, Chengbo; Rozhin, Alexey; Sumetsky, Misha; Turitsyn, Sergei

    2017-12-01

    The emergence of low-dimensional materials has opened new opportunities in the fabrication of compact nonlinear photonic devices. Single-walled carbon nanotubes were among the first of those materials to attract the attention of the photonics community owing to their high third order susceptibility, broadband operation, and ultrafast response. Saturable absorption, in particular, has become a widespread application for nanotubes in the mode-locking of a fiber laser where they are used as nonlinear passive amplitude modulators to initiate pulsed operation. Numerous approaches have been proposed for the integration of nanotubes in fiber systems; these can be divided into those that rely on direct interaction (where the nanotubes are sandwiched between fiber connectors) and those that rely on lateral interaction with the evanescence field of the propagating wave. Tapered fibers, in particular, offer excellent flexibility to adjust the nonlinearity of nanotube-based devices but suffer from high losses (typically exceeding 50%) and poor saturable to non-saturable absorption ratios (typically above 1:5). In this paper, we propose a method to fabricate carbon nanotube saturable absorbers with controllable saturation power, low-losses (as low as 15%), and large saturable to non-saturable loss ratios approaching 1:1. This is achieved by optimizing the procedure of embedding tapered fibers in low-refractive index polymers. In addition, this study sheds light in the operation of these devices, highlighting a trade-off between losses and saturation power and providing guidelines for the design of saturable absorbers according to their application.

  5. A new method of CCD dark current correction via extracting the dark Information from scientific images

    Science.gov (United States)

    Ma, Bin; Shang, Zhaohui; Hu, Yi; Liu, Qiang; Wang, Lifan; Wei, Peng

    2014-07-01

    We have developed a new method to correct dark current at relatively high temperatures for Charge-Coupled Device (CCD) images when dark frames cannot be obtained on the telescope. For images taken with the Antarctic Survey Telescopes (AST3) in 2012, due to the low cooling efficiency, the median CCD temperature was -46°C, resulting in a high dark current level of about 3e-/pix/sec, even comparable to the sky brightness (10e-/pix/sec). If not corrected, the nonuniformity of the dark current could even outweigh the photon noise of the sky background. However, dark frames could not be obtained during the observing season because the camera was operated in frame-transfer mode without a shutter, and the telescope was unattended in winter. Here we present an alternative, but simple and effective method to derive the dark current frame from the scientific images. We can then scale this dark frame to the temperature at which the scientific images were taken, and apply the dark frame corrections to the scientific images. We have applied this method to the AST3 data, and demonstrated that it can reduce the noise to a level roughly as low as the photon noise of the sky brightness, solving the high noise problem and improving the photometric precision. This method will also be helpful for other projects that suffer from similar issues.

  6. Blood oxygen saturation determined by transmission spectrophotometry of hemolyzed blood samples

    Science.gov (United States)

    Malik, W. M.

    1967-01-01

    Use of the Lambert-Beer Transmission Law determines blood oxygen saturation of hemolyzed blood samples. This simplified method is based on the difference in optical absorption properties of hemoglobin and oxyhemoglobin.
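
    A sketch of the underlying two-wavelength Beer-Lambert algebra (generic; the extinction coefficients below are placeholders, not measured values): absorbances at two wavelengths give a 2x2 linear system for the oxy- and deoxyhemoglobin concentrations, and saturation is their ratio.

```python
import numpy as np

def oxygen_saturation(A1, A2, eps, path_cm=1.0):
    """Estimate SO2 from absorbances A1, A2 at two wavelengths.

    eps : 2x2 matrix of extinction coefficients
          [[eps_HbO2(l1), eps_Hb(l1)],
           [eps_HbO2(l2), eps_Hb(l2)]]   (placeholder values below)
    """
    c = np.linalg.solve(np.asarray(eps, float) * path_cm, np.array([A1, A2]))
    c_hbo2, c_hb = np.clip(c, 0, None)
    return c_hbo2 / (c_hbo2 + c_hb)

# Hypothetical extinction coefficients at two wavelengths:
eps = [[1.0, 3.0],    # wavelength 1: HbO2, Hb
       [1.2, 0.7]]    # wavelength 2: HbO2, Hb
print(oxygen_saturation(A1=2.1, A2=1.0, eps=eps))
```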

  7. Mapping species distributions with MAXENT using a geographically biased sample of presence data: a performance assessment of methods for correcting sampling bias.

    Science.gov (United States)

    Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean

    2014-01-01

    MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensual guideline to account for it. We compared here the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. Nevertheless, systematic sampling of records seems to be the most efficient correction method and should be advised in most cases.

  8. Single-mode saturation of the bump-on-tail instability

    International Nuclear Information System (INIS)

    Simon, A.; Rosenbluth, M.N.

    1976-01-01

    A slightly unstable plasma with only one or a few linear modes unstable is considered. Nonlinear saturation at small amplitudes has been treated by time-asymptotic analysis, which is a generalization of the methods of Bogolyubov and co-workers. In this paper the method is applied to an instability in a collisionless plasma governed by the Vlasov equation. The bump-on-tail instability is considered for a one-dimensional plasma

  9. Numerical simulation of seismic performance of the underground structure buried in the dense saturated sand

    International Nuclear Information System (INIS)

    Kawai, Tadashi

    2006-01-01

    The applicability of the advanced earthquake-resistant performance verification method for reinforced concrete underground structures developed by CRIEPI was previously investigated for structures buried in dry sand. For the advancement of the method in practical use, its applicability to structures buried in saturated ground needs to be verified. In this study, the applicability of the effective-stress-based soil modeling method in numerical analysis, which was proposed through modification of the model formerly developed by CRIEPI, was verified through non-linear dynamic numerical simulations of large centrifuge tests conducted using a model comprised of fully saturated sand and an aluminium duct-type structure specially prepared for measuring the load acting on the structure surface through soil-structure interaction. The magnitudes of the simulated loads and the resultant deformations of the structure were almost the same as those of the experiments. As a result, it is confirmed that the performance verification method is useful for structures buried in saturated ground when using the proposed effective-stress-based ground modeling method. (author)

  10. Validation of phenol red versus gravimetric method for water reabsorption correction and study of gender differences in Doluisio's absorption technique.

    Science.gov (United States)

    Tuğcu-Demiröz, Fatmanur; Gonzalez-Alvarez, Isabel; Gonzalez-Alvarez, Marta; Bermejo, Marival

    2014-10-01

    The aim of the present study was to develop a method for water flux reabsorption measurement in Doluisio's Perfusion Technique based on the use of phenol red as a non-absorbable marker and to validate it by comparison with the gravimetric procedure. The compounds selected for the study were metoprolol, atenolol, cimetidine and cefadroxil in order to include low, intermediate and high permeability drugs absorbed by passive diffusion and by carrier-mediated mechanisms. The intestinal permeabilities (Peff) of the drugs were obtained in male and female Wistar rats and calculated using both methods of water flux correction. The absorption rate coefficients of all the assayed compounds did not show statistically significant differences between male and female rats; consequently, all the individual values were combined to compare the reabsorption methods. The absorption rate coefficients and permeability values did not show statistically significant differences between the two strategies of concentration correction. The apparent zero-order water absorption coefficients were also similar in both correction procedures. In conclusion, the gravimetric and phenol red methods for water reabsorption correction are accurate and interchangeable for permeability estimation in the closed-loop perfusion method. Copyright © 2014 Elsevier B.V. All rights reserved.
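
    A sketch of the marker-based water-flux correction and the subsequent rate estimate (hypothetical variable names; the gravimetric variant instead weighs the luminal content): sampled luminal concentrations are rescaled by the phenol red ratio before fitting the first-order disappearance rate.

```python
import numpy as np

def correct_for_water_flux(conc, phenol_red, pr0=None):
    """Rescale sampled luminal concentrations using the non-absorbable marker.

    conc       : measured drug concentrations at the sampling times
    phenol_red : phenol red concentrations at the same times
    pr0        : initial phenol red concentration (defaults to the first sample)
    """
    conc = np.asarray(conc, float)
    pr = np.asarray(phenol_red, float)
    pr0 = pr[0] if pr0 is None else pr0
    # Marker amount is constant, so V(t) = V0 * pr0 / pr(t); refer C back to V0.
    return conc * pr0 / pr

def absorption_rate_constant(t_min, conc_corrected):
    """First-order absorption rate constant from the ln(C) vs t slope."""
    slope, _ = np.polyfit(np.asarray(t_min, float),
                          np.log(np.asarray(conc_corrected, float)), 1)
    return -slope   # ka in 1/min
```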

  11. Methods for Motion Correction Evaluation Using 18F-FDG Human Brain Scans on a High-Resolution PET Scanner

    DEFF Research Database (Denmark)

    Keller, Sune H.; Sibomana, Merence; Olesen, Oline Vinter

    2012-01-01

    Many authors have reported the importance of motion correction (MC) for PET. Patient motion during scanning disturbs kinetic analysis and degrades resolution. In addition, using misaligned transmission for attenuation and scatter correction may produce regional quantification bias in the reconstructed emission images. The purpose of this work was the development of quality control (QC) methods for MC procedures based on external motion tracking (EMT) for human scanning using an optical motion tracking system. Methods: Two scans with minor motion and 5 with major motion (as reported...... (automated image registration) software. The following 3 QC methods were used to evaluate the EMT and AIR MC: a method using the ratio between 2 regions of interest with gray matter voxels (GM) and white matter voxels (WM), called GM/WM; mutual information; and cross correlation. Results: The results...

  12. Attenuation correction for SPECT

    International Nuclear Information System (INIS)

    Hosoba, Minoru

    1986-01-01

    Attenuation correction is required for the reconstruction of a quantitative SPECT image. A new method for detecting body contours, which are important for the correction of tissue attenuation, is presented. The effect of body contours, detected by the newly developed method, on the reconstructed images was evaluated using various techniques for attenuation correction. The count rates in the specified region of interest in the phantom image obtained with the Radial Post Correction (RPC) method, the Weighted Back Projection (WBP) method, and Chang's method were strongly affected by the accuracy of the contours, as compared to those obtained with Sorenson's method. To evaluate the effect of non-uniform attenuators on cardiac SPECT, computer simulation experiments were performed using two types of models, the uniform attenuator model (UAM) and the non-uniform attenuator model (NUAM). The RPC method showed the lowest relative percent error (%ERROR) for the UAM (11 %). However, a 20 to 30 percent increase in %ERROR was observed for the NUAM reconstructed with the RPC, WBP, and Chang's methods. Introducing an average attenuation coefficient (0.12/cm for Tc-99m and 0.14/cm for Tl-201) in the RPC method decreased %ERROR to the levels obtained for the UAM. Finally, a comparison between images obtained by 180 deg and 360 deg scans and reconstructed with the RPC method showed that the degree of distortion of the contour of the simulated ventricles in the 180 deg scan was 15 % higher than that in the 360 deg scan. (Namekawa, K.)
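
    As a simplified illustration of one family of techniques compared above, the sketch below builds a first-order Chang-type correction map with a circular water-equivalent outline standing in for the detected body contour (the RPC, WBP and Sorenson methods differ in detail; pixel size and radius are hypothetical).

```python
import numpy as np

def chang_correction_map(shape, radius_px, mu=0.12, pixel_cm=0.6, n_angles=64):
    """First-order Chang correction factors for a circular attenuator.

    mu : average attenuation coefficient in 1/cm (0.12 for Tc-99m, 0.14 for Tl-201).
    """
    ny, nx = shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    px, py = xx - (nx - 1) / 2.0, yy - (ny - 1) / 2.0
    att = np.zeros(shape)
    for th in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        ux, uy = np.cos(th), np.sin(th)
        # Distance from each pixel to the circular boundary along direction (ux, uy).
        b = px * ux + py * uy
        disc = np.clip(b**2 - (px**2 + py**2 - radius_px**2), 0, None)
        d_cm = np.clip(-b + np.sqrt(disc), 0, None) * pixel_cm
        att += np.exp(-mu * d_cm)
    att /= n_angles                                   # mean attenuation factor
    inside = (px**2 + py**2) <= radius_px**2
    return np.where(inside & (att > 0), 1.0 / att, 1.0)   # multiply the slice by this map
```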

  13. Methods for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry

    Science.gov (United States)

    Chan, George C. Y. [Bloomington, IN; Hieftje, Gary M [Bloomington, IN

    2010-08-03

    A method for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry (ICP-AES). ICP-AES analysis is performed across a plurality of selected locations in the plasma on an unknown sample, collecting the light intensity at one or more selected wavelengths of one or more sought-for analytes, creating a first dataset. The first dataset is then calibrated with a calibration dataset creating a calibrated first dataset curve. If the calibrated first dataset curve has a variability along the location within the plasma for a selected wavelength, errors are present. Plasma-related errors are then corrected by diluting the unknown sample and performing the same ICP-AES analysis on the diluted unknown sample creating a calibrated second dataset curve (accounting for the dilution) for the one or more sought-for analytes. The cross-over point of the calibrated dataset curves yields the corrected value (free from plasma related errors) for each sought-for analyte.

  14. A Sensitive ANN Based Differential Relay for Transformer Protection with Security against CT Saturation and Tap Changer Operation

    OpenAIRE

    KHORASHADI-ZADEH, Hassan; LI, Zuyi

    2014-01-01

    This paper presents an artificial neural network (ANN) based scheme for fault identification in power transformer protection. The proposed scheme is featured by the application of ANN to identifying system patterns, the unique choice of harmonics of positive sequence differential currents as ANN inputs, the effective handling of current transformer (CT) saturation with an ANN based approach, and the consideration of tap changer position for correcting secondary CT current. Performanc...

  15. Comparison of oxygen saturation values obtained from fingers on physically restrained or unrestrained sides of the body.

    Science.gov (United States)

    Korhan, Esra Akin; Yönt, Gülendam Hakverdioğlu; Khorshid, Leyla

    2011-01-01

    The aim of this study was to compare semiexperimentally the pulse oximetry values obtained from a finger on restrained or unrestrained sides of the body. The pulse oximeter provides a noninvasive measurement of the oxygen saturation of hemoglobin in arterial blood. One of the procedures most frequently applied to patients in intensive care units is the application of physical restraint. Circulation problems are the most important complication in patients who are physically restrained. Evaluation of oxygen saturation from body parts in which circulation is impeded or has deteriorated can cause false results. The research sample consisted of 30 hospitalized patients who participated in the study voluntarily and who were concordant with the inclusion criteria of the study. Patient information and patient follow-up forms were used for data collection. Pulse oximetry values were measured simultaneously using OxiMax Nellcor finger sensors from fingers on the restrained and unrestrained sides of the body. Numeric and percentile distributions were used in evaluating the sociodemographic properties of patients. A significant difference was found between the oxygen saturation values obtained from a finger of an arm that had been physically restrained and a finger of an arm that had not been physically restrained. The mean oxygen saturation value measured from a finger of an arm that had been physically restrained was found to be 93.40 (SD, 2.97), and the mean oxygen saturation value measured from a finger of an arm that had not been physically restrained was found to be 95.53 (SD, 2.38). The results of this study indicate that nurses should use a finger of an arm that is not physically restrained when evaluating oxygen saturation values to evaluate them correctly.

  16. Relationship between Clinical and Polysomnography Measures Corrected for CPAP Use.

    Science.gov (United States)

    Kirkham, Erin M; Heckbert, Susan R; Weaver, Edward M

    2015-11-15

    The changes in patient-reported measures of obstructive sleep apnea (OSA) burden are largely discordant with the change in apnea-hypopnea index (AHI) and other polysomnography measures before and after treatment. For patients treated with continuous positive airway pressure (CPAP), some investigators have theorized that this discordance is due in part to the variability in CPAP use. We aim to test the hypothesis that patient-reported outcomes of CPAP treatment have stronger correlations with AHI when it is corrected for mean nightly CPAP use. This was a cross-sectional study of 459 adults treated with CPAP for OSA. Five patient-reported measures of OSA burden were collected at baseline and after 6 months of CPAP therapy. The correlations between the change in each patient-reported measure and the change in AHI as well as mean nightly AHI (corrected for CPAP use with a weighted average formula) were measured after 6 months of treatment. The same analysis was repeated for 4 additional polysomnography measures, including apnea index, arousal index, lowest oxyhemoglobin saturation, and desaturation index. The change in AHI was weakly but significantly correlated with change in 2 of the 5 clinical measures. The change in mean nightly AHI demonstrated statistically significant correlations with 4 out of 5 clinical measures, though each with coefficients less than 0.3. Similar results were seen for apnea index, arousal index, lowest oxyhemoglobin saturation, and desaturation index. Correction for CPAP use yielded overall small but significant improvements in the correlations between patient-reported measures of sleep apnea burden and polysomnography measures after 6 months of treatment. © 2015 American Academy of Sleep Medicine.
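
    A sketch of the weighted-average idea (the authors' exact formula may differ): the mean nightly AHI blends the treated AHI over the hours of CPAP use with the untreated AHI over the remaining sleep time,

```latex
\mathrm{AHI}_{\mathrm{mean}} \;=\;
\frac{\mathrm{AHI}_{\mathrm{on\,CPAP}}\; h_{\mathrm{CPAP}}
\;+\; \mathrm{AHI}_{\mathrm{untreated}}\;\bigl(h_{\mathrm{sleep}} - h_{\mathrm{CPAP}}\bigr)}
{h_{\mathrm{sleep}}}.
```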

  17. THE PROGNOSTIC AND DIAGNOSTIC VALUE OF REPEATED TRANSRECTAL PROSTATE SATURATION BIOPSY

    Directory of Open Access Journals (Sweden)

    M. A. Kurdzhiev

    2014-08-01

    Full Text Available Objective: to determine the rate of prostate cancer (PC) detection after repeated transrectal saturation prostate biopsy (RTRSPB), to study the characteristics of the diagnosed tumors, and to estimate their clinical significance from the data of radical retropubic prostatectomy (RRP). Materials and methods. The results of RTRSPB were analyzed in 226 patients with later evaluation of the tumor from the results of RRP. All the patients underwent at least 2 prostate biopsies (mean 2.4). The average number of biopsy cores was 26.7 (range 24-30). The average value of total prostate-specific antigen before saturation biopsy was 7.5 (range 7.5 to 28.6) ng/ml. The mean age of patients was 62 years (range 53 to 70). Results. PC was diagnosed in 14.6% of cases (33/226). An isolated lesion of the prostatic transition zone was found in 12.1% of cases. If this zone had been excluded from the biopsy scheme, the detection rate of PC during saturation biopsy would have been reduced by 13.8%. The better PC detectability during repeated saturation biopsy was generally due to localized forms of the disease (93.3%). The agreement of Gleason tumor grading in the biopsy and prostatectomy specimens was noted in 66.7% of cases. Conclusion. Saturation biopsy allows prediction of the pathological stage of PC, the Gleason grade of a tumor, and its site localization with a greater probability. Most tumors detectable by saturation biopsy were clinically significant, which makes it possible to recommend RTRSPB to a certain cohort of high PC-risk patients

  18. Automated 3-D method for the correction of axial artifacts in spectral-domain optical coherence tomography images

    Science.gov (United States)

    Antony, Bhavna; Abràmoff, Michael D.; Tang, Li; Ramdas, Wishal D.; Vingerling, Johannes R.; Jansonius, Nomdo M.; Lee, Kyungmoo; Kwon, Young H.; Sonka, Milan; Garvin, Mona K.

    2011-01-01

    The 3-D spectral-domain optical coherence tomography (SD-OCT) images of the retina often do not reflect the true shape of the retina and are distorted differently along the x and y axes. In this paper, we propose a novel technique that uses thin-plate splines in two stages to estimate and correct the distinct axial artifacts in SD-OCT images. The method was quantitatively validated using nine pairs of OCT scans obtained with orthogonal fast-scanning axes, where a segmented surface was compared after both datasets had been corrected. The mean unsigned difference computed between the locations of this artifact-corrected surface after the single-spline and dual-spline correction was 23.36 ± 4.04 μm and 5.94 ± 1.09 μm, respectively, and showed a significant difference (p < 0.001 from two-tailed paired t-test). The method was also validated using depth maps constructed from stereo fundus photographs of the optic nerve head, which were compared to the flattened top surface from the OCT datasets. Significant differences (p < 0.001) were noted between the artifact-corrected datasets and the original datasets, where the mean unsigned differences computed over 30 optic-nerve-head-centered scans (in normalized units) were 0.134 ± 0.035 and 0.302 ± 0.134, respectively. PMID:21833377
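
    A rough single-stage analogue of the spline-based flattening (the paper uses a two-stage, dual-spline scheme in 3-D; the control-point grid and smoothing value here are hypothetical): a thin-plate spline is fitted to a segmented reference surface and subtracted to estimate the per-A-scan axial shifts.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def axial_artifact_correction(surface_z, n_ctrl=12, smoothing=10.0):
    """Estimate the axial artifact of an SD-OCT volume from a segmented
    reference surface (surface_z[y, x] = axial position) with a thin-plate spline."""
    ny, nx = surface_z.shape
    ys = np.linspace(0, ny - 1, n_ctrl).astype(int)
    xs = np.linspace(0, nx - 1, n_ctrl).astype(int)
    pts = np.array([(y, x) for y in ys for x in xs], dtype=float)
    vals = surface_z[pts[:, 0].astype(int), pts[:, 1].astype(int)].astype(float)
    tps = RBFInterpolator(pts, vals, kernel="thin_plate_spline", smoothing=smoothing)
    yy, xx = np.mgrid[0:ny, 0:nx]
    artifact = tps(np.column_stack([yy.ravel(), xx.ravel()])).reshape(ny, nx)
    return surface_z - artifact   # per-A-scan axial shifts to apply to the volume
```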

  19. A neural network method to correct bidirectional effects in water-leaving radiance

    Science.gov (United States)

    Fan, Yongzhen; Li, Wei; Voss, Kenneth J.; Gatebe, Charles K.; Stamnes, Knut

    2017-02-01

    The standard method to convert the measured water-leaving radiances from the observation direction to the nadir direction developed by Morel and coworkers requires knowledge of the chlorophyll concentration (CHL). Also, the standard method was developed for open ocean water, which makes it unsuitable for turbid coastal waters. We introduce a neural network method to convert the water-leaving radiance (or the corresponding remote sensing reflectance) from the observation direction to the nadir direction. This method does not require any prior knowledge of the water constituents or the inherent optical properties (IOPs). This method is fast, accurate and can be easily adapted to different remote sensing instruments. Validation using NuRADS measurements in different types of water shows that this method is suitable for both open ocean and coastal waters. In open ocean or chlorophyll-dominated waters, our neural network method produces corrections similar to those of the standard method. In turbid coastal waters, especially sediment-dominated waters, a significant improvement was obtained compared to the standard method.

  20. On natural convection in enclosures filled with fluid-saturated porous media including viscous dissipation

    Energy Technology Data Exchange (ETDEWEB)

    Costa, V.A.F. [Departamento de Engenharia Mecanica, Universidade de Aveiro, Campus Universitario de Santiago, 3810-193 Aveiro (Portugal)

    2006-07-15

    Care needs to be taken when considering viscous dissipation in the energy conservation formulation of the natural convection problem in fluid-saturated porous media. The only energy formulation compatible with the First Law of Thermodynamics tells us that if the viscous dissipation term is taken into account, the work of pressure forces must also be taken into account. In integral terms, the work of pressure forces must equal the energy dissipated by viscous effects, and the net energy generation in the overall domain must be zero. If only the (positive) viscous dissipation term is considered in the energy conservation equation, the domain behaves as a heat multiplier, with a heat output greater than the heat input. Only the energy formulation consistent with the First Law of Thermodynamics leads to the correct flow and temperature fields, as well as to the correct heat transfer parameters characterizing the porous device involved. Attention is given to the natural convection problem in a square enclosure filled with a fluid-saturated porous medium, using the Darcy Law to describe the fluid flow, but the main ideas and conclusions apply equally to any general natural or mixed convection heat transfer problem. The validity of the Oberbeck-Boussinesq approximation when applied to natural convection problems in fluid-saturated porous media is also analyzed. (author)

  1. A Hybrid Wavelet-Based Method for the Peak Detection of Photoplethysmography Signals

    Directory of Open Access Journals (Sweden)

    Suyi Li

    2017-01-01

    Full Text Available The noninvasive peripheral oxygen saturation (SpO2 and the pulse rate can be extracted from photoplethysmography (PPG signals. However, the accuracy of the extraction is directly affected by the quality of the signal obtained and the peak of the signal identified; therefore, a hybrid wavelet-based method is proposed in this study. Firstly, we suppressed the partial motion artifacts and corrected the baseline drift by using a wavelet method based on the principle of wavelet multiresolution. And then, we designed a quadratic spline wavelet modulus maximum algorithm to identify the PPG peaks automatically. To evaluate this hybrid method, a reflective pulse oximeter was used to acquire ten subjects’ PPG signals under sitting, raising hand, and gently walking postures, and the peak recognition results on the raw signal and on the corrected signal were compared, respectively. The results showed that the hybrid method not only corrected the morphologies of the signal well but also optimized the peaks identification quality, subsequently elevating the measurement accuracy of SpO2 and the pulse rate. As a result, our hybrid wavelet-based method profoundly optimized the evaluation of respiratory function and heart rate variability analysis.

  2. A Hybrid Wavelet-Based Method for the Peak Detection of Photoplethysmography Signals.

    Science.gov (United States)

    Li, Suyi; Jiang, Shanqing; Jiang, Shan; Wu, Jiang; Xiong, Wenji; Diao, Shu

    2017-01-01

    The noninvasive peripheral oxygen saturation (SpO 2 ) and the pulse rate can be extracted from photoplethysmography (PPG) signals. However, the accuracy of the extraction is directly affected by the quality of the signal obtained and the peak of the signal identified; therefore, a hybrid wavelet-based method is proposed in this study. Firstly, we suppressed the partial motion artifacts and corrected the baseline drift by using a wavelet method based on the principle of wavelet multiresolution. And then, we designed a quadratic spline wavelet modulus maximum algorithm to identify the PPG peaks automatically. To evaluate this hybrid method, a reflective pulse oximeter was used to acquire ten subjects' PPG signals under sitting, raising hand, and gently walking postures, and the peak recognition results on the raw signal and on the corrected signal were compared, respectively. The results showed that the hybrid method not only corrected the morphologies of the signal well but also optimized the peaks identification quality, subsequently elevating the measurement accuracy of SpO 2 and the pulse rate. As a result, our hybrid wavelet-based method profoundly optimized the evaluation of respiratory function and heart rate variability analysis.

  3. A Hybrid Wavelet-Based Method for the Peak Detection of Photoplethysmography Signals

    Science.gov (United States)

    Jiang, Shanqing; Jiang, Shan; Wu, Jiang; Xiong, Wenji

    2017-01-01

    The noninvasive peripheral oxygen saturation (SpO2) and the pulse rate can be extracted from photoplethysmography (PPG) signals. However, the accuracy of the extraction is directly affected by the quality of the signal obtained and the peak of the signal identified; therefore, a hybrid wavelet-based method is proposed in this study. Firstly, we suppressed the partial motion artifacts and corrected the baseline drift by using a wavelet method based on the principle of wavelet multiresolution. Then, we designed a quadratic spline wavelet modulus maximum algorithm to identify the PPG peaks automatically. To evaluate this hybrid method, a reflective pulse oximeter was used to acquire ten subjects' PPG signals under sitting, hand-raising, and gently walking postures, and the peak recognition results on the raw signal and on the corrected signal were compared. The results showed that the hybrid method not only corrected the morphology of the signal well but also improved the quality of peak identification, thereby increasing the measurement accuracy of SpO2 and the pulse rate. As a result, our hybrid wavelet-based method substantially improved the evaluation of respiratory function and heart rate variability analysis. PMID:29250135

  4. ADS genes for reducing saturated fatty acid levels in seed oils

    Science.gov (United States)

    Heilmann, Ingo H.; Shanklin, John

    2010-02-02

    The present invention relates to enzymes involved in lipid metabolism. In particular, the present invention provides coding sequences for Arabidopsis Desaturases (ADS), the encoded ADS polypeptides, and methods for using the sequences and encoded polypeptides, where such methods include decreasing and increasing saturated fatty acid content in plant seed oils.

  5. Spacecraft reorientation control in presence of attitude constraint considering input saturation and stochastic disturbance

    Science.gov (United States)

    Cheng, Yu; Ye, Dong; Sun, Zhaowei; Zhang, Shijie

    2018-03-01

    This paper proposes a novel feedback control law for spacecraft to deal with attitude constraints, input saturation, and stochastic disturbance during the attitude reorientation maneuver. By applying a parameter selection method to improve the existence conditions of the repulsive potential function, the universality of the potential-function-based algorithm is enhanced. Moreover, utilizing an auxiliary system driven by the difference between the saturated torque and the command torque, a backstepping control law is presented that satisfies the input saturation constraint and guarantees spacecraft stability. Unlike some methods that passively rely on the inherent characteristics of the existing controller to handle the adverse effects of external stochastic disturbance, this paper puts forward a nonlinear disturbance observer to compensate for the disturbance in real time, which achieves better robustness. The simulation results validate the effectiveness, reliability, and universality of the proposed control law.

  6. Error-finding and error-correcting methods for the start-up of the SLC

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we will limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors, which affect the profile and trajectory of the beam, respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist but in a more systematic fashion. The methods used in these procedures and some of the recent applications will be described in this paper

  7. Serum albumin--a non-saturable carrier

    DEFF Research Database (Denmark)

    Brodersen, R; Honoré, B; Larsen, F G

    1984-01-01

    The shape of binding isotherms for sixteen ligands to human serum albumin showed no signs of approaching saturation at high ligand concentrations. It is suggested that ligand binding to serum albumin is essentially different from saturable binding of substrates to enzymes, of oxygen to haemoglobi...

  8. X-ray scatter correction method for dedicated breast computed tomography: improvements and initial patient testing

    International Nuclear Information System (INIS)

    Ramamurthy, Senthil; D’Orsi, Carl J; Sechopoulos, Ioannis

    2016-01-01

    A previously proposed x-ray scatter correction method for dedicated breast computed tomography was further developed and implemented so as to allow for initial patient testing. The method involves the acquisition of a complete second set of breast CT projections covering 360° with a perforated tungsten plate in the path of the x-ray beam. To make patient testing feasible, a wirelessly controlled electronic positioner for the tungsten plate was designed and added to a breast CT system. Other improvements to the algorithm were implemented, including automated exclusion of non-valid primary estimate points and the use of a different approximation method to estimate the full scatter signal. To evaluate the effectiveness of the algorithm, the resulting image quality was assessed with a breast phantom and with nine patient images. The improvements in the algorithm avoided the introduction of artifacts, especially at the object borders, which was an issue in the previous implementation in some cases. Both contrast, in terms of signal difference, and signal difference-to-noise ratio were improved with the proposed method, in contrast to the correction algorithm incorporated in the system, which does not recover contrast. Patient image evaluation also showed enhanced contrast, better cupping correction, and more consistent voxel values for the different tissues. The algorithm also reduces artifacts present in reconstructions of non-regularly shaped breasts. With the implemented hardware and software improvements, the proposed method can be reliably used during patient breast CT imaging, resulting in improved image quality, no introduction of artifacts, and in some cases a reduction of artifacts already present. The impact of the algorithm on actual clinical performance for detection, diagnosis and other clinical tasks in breast imaging remains to be evaluated. (paper)
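
    The hole-based scatter estimation can be illustrated with a rough Python sketch (not the authors' implementation): the primary signal is available at the sparse plate-hole positions, the scatter there is the difference from the full projection, and a smooth interpolation of those samples is subtracted everywhere. Array sizes, hole spacing and the synthetic "true" fields are assumptions.

        # Rough sketch with synthetic data; not the published algorithm.
        import numpy as np
        from scipy.interpolate import griddata
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)
        ny, nx = 128, 128
        primary_true = rng.random((ny, nx))
        scatter_true = gaussian_filter(rng.random((ny, nx)), 20)    # slowly varying scatter field
        full = primary_true + scatter_true                          # measured projection

        # Sparse hole locations where a primary-only estimate is available.
        hole_y, hole_x = np.mgrid[8:ny:16, 8:nx:16]
        holes = np.column_stack([hole_y.ravel(), hole_x.ravel()])
        scatter_samples = (full[holes[:, 0], holes[:, 1]]
                           - primary_true[holes[:, 0], holes[:, 1]])

        # Smoothly interpolate the sparse scatter estimates over the whole detector.
        grid_y, grid_x = np.mgrid[0:ny, 0:nx]
        scatter_est = griddata(holes, scatter_samples, (grid_y, grid_x),
                               method="cubic", fill_value=scatter_samples.mean())

        corrected = full - scatter_est
        print("RMS residual:", np.sqrt(np.mean((corrected - primary_true) ** 2)))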

  9. Saturation and forward jets at HERA

    International Nuclear Information System (INIS)

    Marquet, C.; Peschanski, R.; Royon, C.

    2004-01-01

    We analyse forward-jet production at HERA in the framework of the Golec-Biernat and Wusthoff saturation models. We obtain a good description of the forward-jet cross-sections measured by the H1 and ZEUS Collaborations in the two-hard-scale region (kT ~ Q >> ΛQCD) with two different parametrizations featuring either significant or weak saturation effects. The weak-saturation parametrization gives a scale compatible with the one found for the proton structure function F2. We argue that Mueller-Navelet jets at the Tevatron and the LHC could help distinguish between the two options

  10. The method of edge anxiety-depressive disorder correction in patients with diabetes mellitus

    Directory of Open Access Journals (Sweden)

    A. Kozhanova

    2015-11-01

    The article presents the results of research on the effectiveness of the method developed by the authors for correcting edge anxiety-depressive disorders in patients with type 2 diabetes through the use of magnetic therapy. Tags: anxiety-depressive disorder, hidden depression, diabetes, medical rehabilitation, singlet-oxygen therapy.

  11. Low-loss saturable absorbers based on tapered fibers embedded in carbon nanotube/polymer composites

    Directory of Open Access Journals (Sweden)

    Amos Martinez

    2017-12-01

    Full Text Available The emergence of low-dimensional materials has opened new opportunities in the fabrication of compact nonlinear photonic devices. Single-walled carbon nanotubes were among the first of those materials to attract the attention of the photonics community owing to their high third-order susceptibility, broadband operation, and ultrafast response. Saturable absorption, in particular, has become a widespread application for nanotubes in the mode-locking of fiber lasers, where they are used as nonlinear passive amplitude modulators to initiate pulsed operation. Numerous approaches have been proposed for the integration of nanotubes in fiber systems; these can be divided into those that rely on direct interaction (where the nanotubes are sandwiched between fiber connectors) and those that rely on lateral interaction with the evanescent field of the propagating wave. Tapered fibers, in particular, offer excellent flexibility to adjust the nonlinearity of nanotube-based devices but suffer from high losses (typically exceeding 50%) and poor saturable to non-saturable absorption ratios (typically above 1:5). In this paper, we propose a method to fabricate carbon nanotube saturable absorbers with controllable saturation power, low losses (as low as 15%), and large saturable to non-saturable loss ratios approaching 1:1. This is achieved by optimizing the procedure of embedding tapered fibers in low-refractive-index polymers. In addition, this study sheds light on the operation of these devices, highlighting a trade-off between losses and saturation power and providing guidelines for the design of saturable absorbers according to their application.

  12. Adaptive projection intensity adjustment for avoiding saturation in three-dimensional shape measurement

    Science.gov (United States)

    Chen, Chao; Gao, Nan; Wang, Xiangjun; Zhang, Zonghua

    2018-03-01

    Phase-based fringe projection methods have been commonly used for three-dimensional (3D) measurements. However, image saturation results in incorrect intensities in captured fringe pattern images, leading to phase and measurement errors. Existing solutions are complex. This paper proposes an adaptive projection intensity adjustment method to avoid image saturation and maintain good fringe modulation when measuring objects with a high range of surface reflectivities. The adapted fringe patterns are created using only one prior step of fringe-pattern projection and image capture. First, a set of phase-shifted fringe patterns with a maximum projection intensity value of 255 and a uniform gray-level pattern are projected onto the surface of an object. The patterns are reflected from and deformed by the object surface and captured by a digital camera. The best projection intensities corresponding to each saturated-pixel cluster are determined by fitting a polynomial function that transforms captured intensities into projected intensities. Subsequently, the adapted fringe patterns are constructed using the best projection intensities at the corresponding projector pixel coordinates. Finally, the adapted fringe patterns are projected for phase recovery and 3D shape calculation. The experimental results demonstrate that the proposed method achieves high measurement accuracy even for objects with a high range of surface reflectivities.
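
    A hedged Python sketch of the intensity-adaptation step, under the assumption of a simple per-cluster mapping: calibration pairs from the prior capture are fitted with a polynomial from captured to projected intensity, and the adapted projection level is the one predicted to give a target response below saturation. The calibration values and the 8-bit range are placeholders, not the authors' data.

        # Placeholder calibration values; not measured data.
        import numpy as np

        captured = np.array([30., 60., 95., 130., 160., 190., 215., 240.])   # camera gray levels
        projected = np.array([32., 64., 96., 128., 160., 192., 224., 255.])  # projector gray levels

        coeff = np.polyfit(captured, projected, deg=3)    # map: captured -> projected intensity

        def adapted_projection(target_captured=200.0):
            """Projection gray level expected to produce `target_captured` at the camera."""
            return float(np.clip(np.polyval(coeff, target_captured), 0, 255))

        print(adapted_projection())   # adapted intensity for a saturated-pixel cluster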

  13. Stochastic analysis of radionuclide migration in saturated-unsaturated soils

    International Nuclear Information System (INIS)

    Kawanishi, Moto

    1988-01-01

    In Japan, LLRW (low-level radioactive wastes) generated from nuclear power plants will begin to be stored centrally at the Shimokita site in 1990, and this storage could be converted to land disposal if its safety is positively confirmed. It is therefore hoped that a safety assessment method will be established for the land disposal of LLRW. In this study, a stochastic model was constructed to analyze radionuclide migration in saturated-unsaturated soils. The principal results are summarized as follows. 1) We presented a generalized approach for modeling radionuclide migration in saturated-unsaturated soils as an advective-dispersion phenomenon accompanied by the decay of radionuclides and their adsorption/desorption in soils. 2) Based on the radionuclide migration model mentioned above, we developed a stochastic analysis model of radionuclide migration in saturated-unsaturated soils. 3) From the comparison between the simulated results and the exact solution for a few simple one-dimensional advective-dispersion problems of radionuclides, the validity of this model was confirmed. 4) From the comparison between the simulated results of this model and the experimental results of radionuclide migration in a one-dimensional unsaturated soil column with rainfall, good applicability was shown. 5) As a stochastic model such as this has the advantages that it readily represents the underlying physical phenomena and has essentially no numerical dissipation, it should be well suited to the analysis of complicated radionuclide migration in saturated-unsaturated soils. (author)

  14. Experimental aspects of buoyancy correction in measuring reliable high-pressure excess adsorption isotherms using the gravimetric method.

    Science.gov (United States)

    Nguyen, Huong Giang T; Horn, Jarod C; Thommes, Matthias; van Zee, Roger D; Espinal, Laura

    2017-12-01

    Addressing reproducibility issues in adsorption measurements is critical to accelerating the path to discovery of new industrial adsorbents and to understanding adsorption processes. A National Institute of Standards and Technology Reference Material, RM 8852 (ammonium ZSM-5 zeolite), and two gravimetric instruments with asymmetric two-beam balances were used to measure high-pressure adsorption isotherms. This work demonstrates how common approaches to buoyancy correction, a key factor in obtaining the mass change due to surface excess gas uptake from the apparent mass change, can impact the adsorption isotherm data. Three different approaches to buoyancy correction were investigated and applied to the subcritical CO 2 and supercritical N 2 adsorption isotherms at 293 K. It was observed that measuring a collective volume for all balance components for the buoyancy correction (helium method) introduces an inherent bias in temperature partition when there is a temperature gradient (i.e. analysis temperature is not equal to instrument air bath temperature). We demonstrate that a blank subtraction is effective in mitigating the biases associated with temperature partitioning, instrument calibration, and the determined volumes of the balance components. In general, the manual and subtraction methods allow for better treatment of the temperature gradient during buoyancy correction. From the study, best practices specific to asymmetric two-beam balances and more general recommendations for measuring isotherms far from critical temperatures using gravimetric instruments are offered.
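
    As a worked illustration of the buoyancy term itself (not of the instrument-specific helium, manual or subtraction procedures), a short Python sketch with assumed numbers:

        # Placeholder densities/volumes; units are mg and cm^3.
        def excess_uptake(delta_m_apparent_mg, gas_density_mg_cm3, displaced_volume_cm3):
            """Surface-excess uptake = apparent mass change + buoyancy on the displaced volume."""
            return delta_m_apparent_mg + gas_density_mg_cm3 * displaced_volume_cm3

        # Example: CO2 near 1 bar and 293 K has a density of roughly 1.8 mg/cm^3 (assumed);
        # skeletal volume of sample plus immersed balance parts taken as 0.45 cm^3 (assumed).
        print(excess_uptake(delta_m_apparent_mg=3.2,
                            gas_density_mg_cm3=1.8,
                            displaced_volume_cm3=0.45))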

  15. Proton dose distribution measurements using a MOSFET detector with a simple dose-weighted correction method for LET effects.

    Science.gov (United States)

    Kohno, Ryosuke; Hotta, Kenji; Matsuura, Taeko; Matsubara, Kana; Nishioka, Shie; Nishio, Teiji; Kawashima, Mitsuhiko; Ogino, Takashi

    2011-04-04

    We experimentally evaluated the proton beam dose reproducibility, sensitivity, angular dependence and depth-dose relationships for a new Metal Oxide Semiconductor Field Effect Transistor (MOSFET) detector. The detector was fabricated with a thinner oxide layer and was operated at high-bias voltages. In order to accurately measure dose distributions, we developed a practical method for correcting the MOSFET response to proton beams. The detector was tested by examining lateral dose profiles formed by protons passing through an L-shaped bolus. The dose reproducibility, angular dependence and depth-dose response were evaluated using a 190 MeV proton beam. Depth-output curves produced using the MOSFET detectors were compared with results obtained using an ionization chamber (IC). Since accurate measurements of proton dose distribution require correction for LET effects, we developed a simple dose-weighted correction method. The correction factors were determined as a function of proton penetration depth, or residual range. The residual proton range at each measurement point was calculated using the pencil beam algorithm. Lateral measurements in a phantom were obtained for pristine and SOBP beams. The reproducibility of the MOSFET detector was within 2%, and the angular dependence was less than 9%. The detector exhibited a good response at the Bragg peak (0.74 relative to the IC detector). For dose distributions resulting from protons passing through an L-shaped bolus, the corrected MOSFET dose agreed well with the IC results. Absolute proton dosimetry can be performed using MOSFET detectors to a precision of about 3% (1 sigma). A thinner oxide layer thickness improved the LET in proton dosimetry. By employing correction methods for LET dependence, it is possible to measure absolute proton dose using MOSFET detectors.
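
    A heavily hedged Python sketch of how a residual-range-dependent correction of this kind could be applied; the tabulated factors below are invented placeholders rather than the published values, and in the actual workflow the residual range at each point comes from the pencil beam calculation.

        # Invented placeholder table; the real factors come from the published calibration.
        import numpy as np

        residual_range_cm = np.array([12.0, 8.0, 4.0, 2.0, 1.0, 0.3])      # deep -> near Bragg peak
        correction_factor = np.array([1.00, 1.02, 1.06, 1.12, 1.20, 1.35])

        def corrected_dose(raw_mosfet_dose, residual_range):
            order = np.argsort(residual_range_cm)          # np.interp needs increasing x
            f = np.interp(residual_range,
                          residual_range_cm[order], correction_factor[order])
            return raw_mosfet_dose * f                     # boost the under-response at high LET

        print(corrected_dose(raw_mosfet_dose=1.8, residual_range=0.5))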

  16. Proton dose distribution measurements using a MOSFET detector with a simple dose‐weighted correction method for LET effects

    Science.gov (United States)

    Hotta, Kenji; Matsuura, Taeko; Matsubara, Kana; Nishioka, Shie; Nishio, Teiji; Kawashima, Mitsuhiko; Ogino, Takashi

    2011-01-01

    We experimentally evaluated the proton beam dose reproducibility, sensitivity, angular dependence and depth‐dose relationships for a new Metal Oxide Semiconductor Field Effect Transistor (MOSFET) detector. The detector was fabricated with a thinner oxide layer and was operated at high‐bias voltages. In order to accurately measure dose distributions, we developed a practical method for correcting the MOSFET response to proton beams. The detector was tested by examining lateral dose profiles formed by protons passing through an L‐shaped bolus. The dose reproducibility, angular dependence and depth‐dose response were evaluated using a 190 MeV proton beam. Depth‐output curves produced using the MOSFET detectors were compared with results obtained using an ionization chamber (IC). Since accurate measurements of proton dose distribution require correction for LET effects, we developed a simple dose‐weighted correction method. The correction factors were determined as a function of proton penetration depth, or residual range. The residual proton range at each measurement point was calculated using the pencil beam algorithm. Lateral measurements in a phantom were obtained for pristine and SOBP beams. The reproducibility of the MOSFET detector was within 2%, and the angular dependence was less than 9%. The detector exhibited a good response at the Bragg peak (0.74 relative to the IC detector). For dose distributions resulting from protons passing through an L‐shaped bolus, the corrected MOSFET dose agreed well with the IC results. Absolute proton dosimetry can be performed using MOSFET detectors to a precision of about 3% (1 sigma). A thinner oxide layer thickness improved the LET in proton dosimetry. By employing correction methods for LET dependence, it is possible to measure absolute proton dose using MOSFET detectors. PACS number: 87.56.‐v

  17. Correcting AUC for Measurement Error.

    Science.gov (United States)

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.

  18. Effect of desaturation and re-saturation on shale in underground galleries

    International Nuclear Information System (INIS)

    Pham, Q.T.

    2006-03-01

    The aim of this thesis is to characterize, by experimental and numerical approaches, the hydric, mechanical and hydro-mechanical effects due to the desaturation and re-saturation of the Eastern argillite, host rock of the Bure site, the future underground radioactive waste disposal facility. Experimental and numerical approaches for the characterization of hydric transfers in argillites are presented. A simple identification method is proposed which uses the determination of the linearized hydric diffusivity from weight measurements performed on samples (thin tubes and plates) submitted to humidity steps according to a desaturation-re-saturation cycle. The hydric transfer is nonlinear. In order to interpret this phenomenon, a non-linear numerical model is established which takes into account the physical phenomena (hydraulic conduction, vapor diffusion, phase change...). The evolution of the physical and mechanical behaviour of the argillaceous rock with respect to the imposed humidity is then analyzed along a desaturation-re-saturation cycle applied in successive steps. The hydric deformation, the velocity of ultrasonic wave propagation, the elastic properties, the rupture characteristics and the delayed phenomena all depend on the hydric state of the material. The influence of desaturation and re-saturation on a scale model of a tunnel is analyzed. Thick tubes parallel or perpendicular to the stratification are used to reveal the anisotropy of the rock. These tubes are submitted to hydric loads by blowing air with variable hygrometry through their center hole. A nonlinear poro-elastic model is used to interpret the anisotropic hydro-mechanical phenomena observed. It is shown that hydric loads can lead to ruptures of the test samples which follow the anisotropic directions of the rock and which can be interpreted by the hydro-mechanical model as the violation of a rupture criterion in total tensile stress. Finally, numerical calculations for the phenomena generated by desaturation

  19. Ultrafast THz Saturable Absorption in Semiconductors

    DEFF Research Database (Denmark)

    Turchinovich, Dmitry; Hoffmann, Matthias C.

    2011-01-01

    We demonstrate THz saturable absorption in n-doped semiconductors GaAs, GaP, and Ge in a nonlinear THz time-domain spectroscopy experiment. Saturable absorption is caused by sample conductivity modulation due to electron heating and satellite valley scattering in the field of a strong THz pulse....

  20. An estimate of higher twist at small xB and low Q2 based upon a saturation model

    International Nuclear Information System (INIS)

    Bartels, J.; Peters, K.

    2000-03-01

    We investigate the influence of higher-twist corrections to deep inelastic structure functions in the low-Q2 and small-x HERA region. We review the general features of the lowest-order QCD diagrams which contribute to twist-4 at small x, in particular the sign structure of the longitudinal and transverse structure functions, which offers the possibility of strong cancellations in F2. For a numerical analysis we perform a twist analysis of the saturation model, which has been very successful both in describing the structure function and the DIS diffractive cross section at HERA. As the main conclusion, twist-4 corrections are not small in FL or FT, but in F2 = FL + FT they almost cancel. We point out that the FL analysis needs a large twist-4 correction. We also indicate the region of validity of the twist expansion. (orig.)

  1. A novel baseline correction method using convex optimization framework in laser-induced breakdown spectroscopy quantitative analysis

    Science.gov (United States)

    Yi, Cancan; Lv, Yong; Xiao, Han; Ke, Ke; Yu, Xun

    2017-12-01

    For the laser-induced breakdown spectroscopy (LIBS) quantitative analysis technique, baseline correction is an essential part of LIBS data preprocessing. As is widely observed, baseline drift is generated by fluctuations of the laser energy, inhomogeneity of sample surfaces and background noise, which has aroused the interest of many researchers. Most of the prevalent algorithms need to preset some key parameters, such as a suitable spline function and the fitting order, and thus lack adaptability. Based on the characteristics of LIBS, such as the sparsity of spectral peaks and the low-pass filtered nature of the baseline, a novel baseline correction and spectral data denoising method is studied in this paper. The proposed technique utilizes a convex optimization scheme to form a non-parametric baseline correction model. Meanwhile, an asymmetric penalty function is employed to enhance the signal-to-noise ratio (SNR) of the LIBS signal and improve reconstruction precision. Furthermore, an efficient iterative algorithm is applied to the optimization process, so as to ensure the convergence of the algorithm. To validate the proposed method, the concentration analysis of Chromium (Cr), Manganese (Mn) and Nickel (Ni) contained in 23 certified high-alloy steel samples is assessed using quantitative models with Partial Least Squares (PLS) and Support Vector Machine (SVM). Because it requires no prior knowledge of sample composition and no mathematical hypothesis, the method proposed in this paper has better accuracy in quantitative analysis than other methods, and fully reflects its adaptive ability.
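
    In the same spirit, a short Python sketch of a generic asymmetric-penalty baseline fit (asymmetric least squares); this is a textbook variant used here only to illustrate the idea of sparse peaks riding on a smooth baseline, not the authors' convex-optimization model, and the lam/p values are arbitrary.

        # Generic asymmetric least-squares baseline; lam and p are arbitrary here.
        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import spsolve

        def als_baseline(y, lam=1e5, p=0.01, n_iter=10):
            """Asymmetric penalty: points above the baseline (peaks) are down-weighted."""
            n = len(y)
            D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(n, n - 2))   # 2nd-difference operator
            w = np.ones(n)
            for _ in range(n_iter):
                W = sparse.spdiags(w, 0, n, n)
                z = spsolve(W + lam * (D @ D.T), w * y)
                w = p * (y > z) + (1 - p) * (y < z)
            return z

        x = np.linspace(0, 1, 500)
        spectrum = np.exp(-((x - 0.4) / 0.01) ** 2) + 0.5 * x + 0.2   # one peak on a drifting baseline
        corrected = spectrum - als_baseline(spectrum)
        print(round(corrected.max(), 3))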

  2. A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections

    Science.gov (United States)

    Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.

    2014-01-01

    A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations for the confidence intervals on the model weight and center of gravity coordinates and two different error analyses of the model weight prediction are also discussed in the appendices of the paper.
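
    A hedged numerical sketch of the two-step wind-off fit described above, with synthetic data and assumed body-axis conventions: the weight is estimated first from the gravity-force components at several attitudes, and the center-of-gravity coordinates are then fitted from the moments with that weight held fixed.

        # Synthetic data and assumed axis conventions; illustration only.
        import numpy as np

        rng = np.random.default_rng(1)
        W_true, r_cg_true = 120.0, np.array([0.05, -0.01, 0.02])     # weight (N), CG offset (m)

        # Wind-off attitudes: pitch/roll give the unit gravity vector in body axes.
        pitch = np.deg2rad(np.linspace(-10, 10, 9))
        roll = np.deg2rad(np.linspace(-5, 5, 9))
        g_unit = np.column_stack([-np.sin(pitch),
                                  np.cos(pitch) * np.sin(roll),
                                  np.cos(pitch) * np.cos(roll)])
        F_meas = W_true * g_unit + 0.05 * rng.standard_normal(g_unit.shape)

        # Step 1: least-squares weight from the stacked force components.
        W_hat = np.linalg.lstsq(g_unit.reshape(-1, 1), F_meas.reshape(-1), rcond=None)[0][0]

        # Step 2: CG coordinates from the moments M = r_cg x F, which are linear in r_cg.
        M_meas = np.cross(r_cg_true, F_meas) + 1e-3 * rng.standard_normal(F_meas.shape)
        A = np.vstack([np.array([[0.0, f[2], -f[1]],
                                 [-f[2], 0.0, f[0]],
                                 [f[1], -f[0], 0.0]]) for f in F_meas])
        r_cg_hat = np.linalg.lstsq(A, M_meas.reshape(-1), rcond=None)[0]
        print(W_hat, r_cg_hat)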

  3. Estimation of plasma ion saturation current and reduced tip arcing using Langmuir probe harmonics.

    Science.gov (United States)

    Boedo, J A; Rudakov, D L

    2017-03-01

    We present a method to calculate the ion saturation current, Isat, for Langmuir probes at high frequency (>100 kHz) using the harmonics technique, and we compare it to a direct measurement of Isat. It is noted that the Isat estimate can be made directly from the ratio of harmonic amplitudes, without explicitly calculating Te. We also demonstrate that, since the probe tips using the harmonic method oscillate near the floating potential and draw little power, this method reduces tip heating and arcing and allows plasma density measurements at a plasma power flux that would cause continuously biased tips to arc. A multi-probe array is used, with two spatially separated tips employing the harmonics technique and measuring the amplitude of at least two harmonics per tip. A third tip, located between the other two, measures the ion saturation current directly. We compare the measured and calculated ion saturation currents for a variety of plasma conditions and demonstrate the validity of the technique and its use in reducing arcs.
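
    The harmonic-ratio idea can be sketched with an idealized, noise-free model (an assumed exponential probe characteristic; scipy assumed available): for a drive V = Vf + V0*sin(wt), the n-th harmonic amplitude is proportional to Isat*In(V0/Te), so the 2nd/1st ratio fixes V0/Te and Isat then follows.

        # Idealized, noise-free model; scipy assumed available.
        import numpy as np
        from scipy.special import iv            # modified Bessel functions I_n
        from scipy.optimize import brentq

        V0 = 5.0                                # drive amplitude (V), assumed
        Te_true, Isat_true = 12.0, 0.8          # eV, A (synthetic "plasma")

        a_true = V0 / Te_true
        A1 = 2 * Isat_true * iv(1, a_true)      # "measured" 1st-harmonic amplitude
        A2 = 2 * Isat_true * iv(2, a_true)      # "measured" 2nd-harmonic amplitude

        # Invert I2(a)/I1(a) = A2/A1 for a = V0/Te, then recover Te and Isat.
        a_hat = brentq(lambda a: iv(2, a) / iv(1, a) - A2 / A1, 1e-6, 50.0)
        Te_hat = V0 / a_hat
        Isat_hat = A1 / (2 * iv(1, a_hat))
        print(Te_hat, Isat_hat)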

  4. Electrical conductivity modeling in fractal non-saturated porous media

    Science.gov (United States)

    Wei, W.; Cai, J.; Hu, X.; Han, Q.

    2016-12-01

    The variation of electrical conductivity under non-saturated conditions is important for studying electric conduction in natural sedimentary rocks. The electrical conductivity of completely saturated porous media is a function of porosity representing the complex connected behavior of the single conducting phase (pore fluid). For partially saturated conditions, the electrical conductivity becomes even more complicated since it also depends on the connectedness of the pore fluid. Archie's second law is an empirical electrical conductivity-porosity and -saturation model that has been used to predict the formation factor of non-saturated porous rock. However, the physical interpretation of its parameters, e.g., the cementation exponent m and the saturation exponent n, remains questionable. On the basis of our previous work, we combine it with the pore-solid fractal (PSF) model to build an electrical conductivity model for non-saturated porous media. Our theoretical porosity- and saturation-dependent models contain endmember properties, such as the fluid electrical conductivities, the pore fractal dimension and the tortuosity fractal dimension (representing the complexity of the electrical flow path). We find that comparison of the presented model with saturation-dependent electrical conductivity datasets indicates an excellent match between theory and experiment. This means that the values of the pore fractal dimension and the tortuosity fractal dimension change from medium to medium and depend not only on the geometrical properties of the pore structure but also on the characteristics of the electrical current flowing in the non-saturated porous media.
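
    For orientation, a one-line numerical reading of the Archie-type relation mentioned above; the exponents m and n are typical textbook assumptions rather than the fractal-model predictions of the abstract.

        # Typical textbook exponents, not fitted values.
        def bulk_conductivity(sigma_water, porosity, saturation, m=2.0, n=2.0):
            """Archie-type estimate: sigma = sigma_w * phi**m * Sw**n."""
            return sigma_water * porosity ** m * saturation ** n

        # Brine of 5 S/m, 20% porosity, 60% water saturation.
        print(bulk_conductivity(sigma_water=5.0, porosity=0.20, saturation=0.60))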

  5. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers

    OpenAIRE

    Dobie, Robert A; Wojcik, Nancy C

    2015-01-01

    Objectives The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved....

  6. Applications of Doppler-free saturation spectroscopy for edge physics studies (invited)

    Energy Technology Data Exchange (ETDEWEB)

    Martin, E. H., E-mail: martineh@ornl.gov; Caughman, J. B. O.; Isler, R. C.; Bell, G. L. [Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Zafar, A. [Department of Nuclear Engineering, North Carolina State University, Raleigh, North Carolina 27695 (United States)

    2016-11-15

    Doppler-free saturation spectroscopy provides a very powerful method to obtain detailed information about the electronic structure of the atom through measurement of the spectral line profile. This is achieved through a significant decrease in the Doppler broadening and essentially an elimination of the instrument broadening inherent to passive spectroscopic techniques. In this paper we present the technique and associated physics of Doppler-free saturation spectroscopy in addition to how one selects the appropriate transition. Simulations of H{sub δ} spectra are presented to illustrate the increased sensitivity to both electric field and electron density measurements.

  7. Determination of corrective factors for an ultrasonic flow measuring method in pipes accounting for perturbations

    International Nuclear Information System (INIS)

    Etter, S.

    1982-01-01

    With current ultrasonic flow measuring equipment (UFME), the mean velocity is measured along one or two measuring paths. This mean velocity is not equal to the velocity averaged over the flow cross-section, by means of which the flow rate is calculated. This difference is found even for axially symmetric, fully developed velocity profiles and, to a larger extent, for disturbed profiles varying in the flow direction and for nonsteady flow. Corrective factors are defined for steady and nonsteady flows. These factors can be derived from the flow profiles within the UFME. By mathematical simulation of the entrainment effect, the influence of cross and swirl flows on various ultrasonic measuring methods is studied. The applied UFME with crossed measuring paths is shown to be largely independent of cross and swirl flows. For computer evaluation of velocity network measurements in circular cross-sections, the equations for interpolation and integration are derived. Results of the mathematical method are the isotach profile, the flow rate and, for fully developed flow, directly the corrective factor. In the experimental part, corrective factors are determined for nonsteady flow in a measuring plane upstream of and in four measuring planes downstream of a perturbation. (orig./RW) [de
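
    A small numerical example of what such a corrective factor looks like for a diametral-path meter, assuming (purely for illustration) a 1/7-power turbulent velocity profile: the factor is the ratio of the area-averaged to the path-averaged velocity.

        # Assumed 1/7-power profile; purely illustrative.
        import numpy as np

        def corrective_factor(n_exp=7.0, samples=200001):
            r = np.linspace(0.0, 1.0, samples)        # radius normalized to the pipe radius R
            v = (1.0 - r) ** (1.0 / n_exp)            # power-law profile with v_max = 1
            v_path = v.mean()                         # average along a diametral path
            v_area = 2.0 * (v * r).mean()             # average over the circular cross-section
            return v_area / v_path                    # factor applied to the path-derived velocity

        print(corrective_factor())   # ~0.933 for a 1/7-power profile (2n/(2n+1) analytically)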

  8. Neutron activation detector saturation activities measured in the AAEC research reactor HIFAR

    International Nuclear Information System (INIS)

    Hilditch, R.J.; Lowenthal, G.C.

    1980-01-01

    Titanium and cobalt wires are irradiated with radiation damage specimens in each reactor period to determine variations in neutron flux densities. The results from these monitors constitute a considerable body of data with good statistical significance. However, a difficulty encountered when using measurements collected over a number of reactor periods for determining flux depression factors or cadmium ratios is accounting for the effects on saturation activities of different irradiation conditions, in particular the continuously changing fuel burn-up rates. This difficulty was overcome by correlating the saturation activities of (n,γ) reactions with the number of fissions in the fuel. The experimental saturation activities so correlated enable (1) flux depression factors to be obtained for cobalt and silver wires, relative to thin foils, and (2) use of these flux depression factors and others quoted in the literature to calculate the ratio of saturation activities of Co and Ag wires. Finally, reference is made to the potential usefulness of the 123Sb(n,γ) reaction as a resonance detector, given that a new method for making thin monitors can be readily applied to antimony

  9. Site-Scale Saturated Zone Flow Model

    International Nuclear Information System (INIS)

    G. Zyvoloski

    2003-01-01

    The purpose of this model report is to document the components of the site-scale saturated-zone flow model at Yucca Mountain, Nevada, in accordance with administrative procedure (AP)-SIII.10Q, ''Models''. This report provides validation and confidence in the flow model that was developed for site recommendation (SR) and will be used to provide flow fields in support of the Total Systems Performance Assessment (TSPA) for the License Application. The output from this report provides the flow model used in the ''Site-Scale Saturated Zone Transport'', MDL-NBS-HS-000010 Rev 01 (BSC 2003 [162419]). The Site-Scale Saturated Zone Transport model then provides output to the SZ Transport Abstraction Model (BSC 2003 [164870]). In particular, the output from the SZ site-scale flow model is used to simulate the groundwater flow pathways and radionuclide transport to the accessible environment for use in the TSPA calculations. Since the development and calibration of the saturated-zone flow model, more data have been gathered for use in model validation and confidence building, including new water-level data from Nye County wells, single- and multiple-well hydraulic testing data, and new hydrochemistry data. In addition, a new hydrogeologic framework model (HFM), which incorporates Nye County well lithology, also provides geologic data for corroboration and confidence in the flow model. The intended use of this work is to provide a flow model that generates flow fields to simulate radionuclide transport in saturated porous rock and alluvium under natural or forced gradient flow conditions. The flow model simulations are completed using the three-dimensional (3-D), finite-element, flow, heat, and transport computer code, FEHM Version (V) 2.20 (software tracking number (STN): 10086-2.20-00; LANL 2003 [161725]). Concurrently, a process-level transport model and methodology for calculating radionuclide transport in the saturated zone at Yucca Mountain using FEHM V 2.20 are being

  10. Accuracy of radiotherapy dose calculations based on cone-beam CT: comparison of deformable registration and image correction based methods

    Science.gov (United States)

    Marchant, T. E.; Joshi, K. D.; Moore, C. J.

    2018-03-01

    Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT, and direct correction of CBCT image values are two methods proposed to allow heterogeneity corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT and several relevant dose metrics for target and OAR volumes were calculated. Accuracy of CBCT based dose metrics was determined using an ‘override ratio’ method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, use of deformed CT leads to slightly larger standard deviation of dose metric error than shading corrected CBCT with more dose metric errors greater than 2% observed (7% versus 1%).

  11. Correction of Non-Linear Propagation Artifact in Contrast-Enhanced Ultrasound Imaging of Carotid Arteries: Methods and in Vitro Evaluation.

    Science.gov (United States)

    Yildiz, Yesna O; Eckersley, Robert J; Senior, Roxy; Lim, Adrian K P; Cosgrove, David; Tang, Meng-Xing

    2015-07-01

    Non-linear propagation of ultrasound creates artifacts in contrast-enhanced ultrasound images that significantly affect both qualitative and quantitative assessments of tissue perfusion. This article describes the development and evaluation of a new algorithm to correct for this artifact. The correction is a post-processing method that estimates and removes non-linear artifact in the contrast-specific image using the simultaneously acquired B-mode image data. The method is evaluated on carotid artery flow phantoms with large and small vessels containing microbubbles of various concentrations at different acoustic pressures. The algorithm significantly reduces non-linear artifacts while maintaining the contrast signal from bubbles to increase the contrast-to-tissue ratio by up to 11 dB. Contrast signal from a small vessel 600 μm in diameter buried in tissue artifacts before correction was recovered after the correction. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  12. Experimental study on distributed optical fiber-based approach monitoring saturation line in levee engineering

    Science.gov (United States)

    Su, Huaizhi; Li, Hao; Kang, Yeyuan; Wen, Zhiping

    2018-02-01

    Seepage is one of the key factors affecting the safety of levee engineering. Seepage that is not detected in time and met with a rapid response may lead to severe accidents such as seepage failure, slope instability, and even levee breach. More than 90 percent of levee breaches are caused by seepage. Accurate determination of the saturation line is therefore very important for identifying seepage behavior in levee engineering. Furthermore, the location of the saturation line has a major impact on slope stability in levee engineering. Considering the structural characteristics and service conditions of levee engineering, distributed optical fiber sensing technology is introduced to implement real-time observation of the saturation line in levee engineering. The monitoring principle for the saturation line in levee engineering, based on a distributed optical fiber temperature sensing system (DTS), is investigated. An experimental platform, which consists of the DTS, a heating system, a water-supply system, an auxiliary analysis system and a levee model, is designed and constructed. A monitoring experiment on the saturation line in the levee model is carried out on this platform. According to the experimental results, the numerical relationship between moisture content and thermal conductivity in the porous medium is identified. A line-heat-source-based distributed optical fiber method for obtaining the thermal conductivity of the porous medium is developed. A DTS-based approach is proposed to monitor the saturation line in levee engineering. The embedment pattern of the optical fiber for monitoring the saturation line is presented.
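
    A minimal Python sketch of the line-heat-source evaluation that underlies such a DTS approach, with a synthetic heating record and an invented conductivity-moisture calibration table; it illustrates the principle, not the authors' processing chain.

        # Synthetic heating record; the calibration pairs are invented placeholders.
        import numpy as np

        def thermal_conductivity(q_w_per_m, times_s, temp_rise_k):
            """lambda = q / (4*pi*slope), with slope = dT / d(ln t) from the heating record."""
            slope = np.polyfit(np.log(times_s), temp_rise_k, 1)[0]   # K per ln-second
            return q_w_per_m / (4.0 * np.pi * slope)

        t = np.linspace(30.0, 600.0, 50)                       # s, late-time heating window
        dT = 10.0 / (4.0 * np.pi * 1.5) * np.log(t / 5.0)      # synthetic: lambda = 1.5 W/(m K), q = 10 W/m
        lam = thermal_conductivity(10.0, t, dT)

        # Hypothetical calibration: conductivity rising with volumetric moisture content.
        lam_cal = np.array([0.6, 0.9, 1.2, 1.5, 1.8])          # W/(m K)
        theta_cal = np.array([0.05, 0.12, 0.20, 0.28, 0.36])   # m^3/m^3
        print(lam, np.interp(lam, lam_cal, theta_cal))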

  13. Deformations during saturation of the crushed aggregate, Olkiluoto tonalite

    International Nuclear Information System (INIS)

    Laaksonen, R.; Rathmayer, H.; Takala, J.; Toernqvist, J.

    1994-03-01

    Crushed aggregate tonalite produced from crystalline tonalite or a corresponding rock, with particle size up to 8 mm (or 16 mm), will be used as backfill material in the VLJ repository caverns at Olkiluoto (in Finland). The backfill material has to retard radionuclides, to restrict groundwater percolation and to support mechanically the concrete structure of the repository silos. The mechanical and hydraulic behaviour of crushed tonalite, when affected by stresses applied during compaction of the backfill and by groundwater percolation, was studied for three batches having different gradations. Information about the phenomenon of settlement due to saturation, as a function of the compaction method, was obtained from a literature survey. The maximum amount of possible deformation due to compaction was analyzed with a gyratory device, known to have good repeatability. In a group of simulation tests using a large oedometer cell, the amount of compression due to the saturation process was measured. Studies on the suitability of different compaction methods could also be done with these tests. (43 refs., 49 figs., 3 tabs.)

  14. Titration calorimetry of surfactant–drug interactions: Micelle formation and saturation studies

    International Nuclear Information System (INIS)

    Waters, Laura J.; Hussain, Talib; Parkes, Gareth M.B.

    2012-01-01

    Highlights: ► Isothermal titration calorimetry can be used to monitor the saturation of micelles with pharmaceutical compounds. ► The number of drug molecules per micelle varies depending on the drug used and the temperature of the calorimeter. ► The change in enthalpy for the saturation of micelles with drugs can be endothermic or exothermic. ► The critical micellar concentration of an anionic surfactant (SDS) does not appear to vary in the presence of drugs. - Abstract: Isothermal titration calorimetry (ITC) was employed to monitor the addition of five model drugs to anionic surfactant based micelles, composed of sodium dodecyl sulfate (SDS), through to the point at which they were saturated with drug. Analysis of the resultant data using this newly developed method has confirmed the suitability of the technique to acquire such data with saturation limits established in all cases. Values for the point at which saturation occurred ranged from 17 molecules of theophylline per micelle at T = 298 K up to 63 molecules of caffeine per micelle at 310 K. Micellar systems can be disrupted by the presence of additional chemicals, such as the drugs used in this study, therefore a separate investigation was undertaken to determine the critical micellar concentration (CMC) for SDS in the presence of each drug at T = 298 K and 310 K using ITC. In the majority of cases, there was no appreciable alteration to the CMC of SDS with drug present.

  15. Observability of linear systems with saturated outputs

    NARCIS (Netherlands)

    Koplon, R.; Sontag, E.D.; Hautus, M.L.J.

    1994-01-01

    We present necessary and sufficient conditions for observability of the class of output-saturated systems. These are linear systems whose output passes through a saturation function before it can be measured.

  16. Saturated Zone In-Situ Testing

    International Nuclear Information System (INIS)

    Reimus, P. W.; Umari, M. J.

    2003-01-01

    The purpose of this scientific analysis is to document the results and interpretations of field experiments that have been conducted to test and validate conceptual flow and radionuclide transport models in the saturated zone (SZ) near Yucca Mountain. The test interpretations provide estimates of flow and transport parameters that are used in the development of parameter distributions for Total System Performance Assessment (TSPA) calculations. These parameter distributions are documented in the revisions to the SZ flow model report (BSC 2003 [ 162649]), the SZ transport model report (BSC 2003 [ 162419]), the SZ colloid transport report (BSC 2003 [162729]), and the SZ transport model abstraction report (BSC 2003 [1648701]). Specifically, this scientific analysis report provides the following information that contributes to the assessment of the capability of the SZ to serve as a barrier for waste isolation for the Yucca Mountain repository system: (1) The bases for selection of conceptual flow and transport models in the saturated volcanics and the saturated alluvium located near Yucca Mountain. (2) Results and interpretations of hydraulic and tracer tests conducted in saturated fractured volcanics at the C-wells complex near Yucca Mountain. The test interpretations include estimates of hydraulic conductivities, anisotropy in hydraulic conductivity, storativities, total porosities, effective porosities, longitudinal dispersivities, matrix diffusion mass transfer coefficients, matrix diffusion coefficients, fracture apertures, and colloid transport parameters. (3) Results and interpretations of hydraulic and tracer tests conducted in saturated alluvium at the Alluvium Testing Complex (ATC), which is located at the southwestern corner of the Nevada Test Site (NTS). The test interpretations include estimates of hydraulic conductivities, storativities, total porosities, effective porosities, longitudinal dispersivities, matrix diffusion mass transfer coefficients, and

  17. Saturated Zone In-Situ Testing

    Energy Technology Data Exchange (ETDEWEB)

    P. W. Reimus; M. J. Umari

    2003-12-23

    The purpose of this scientific analysis is to document the results and interpretations of field experiments that have been conducted to test and validate conceptual flow and radionuclide transport models in the saturated zone (SZ) near Yucca Mountain. The test interpretations provide estimates of flow and transport parameters that are used in the development of parameter distributions for Total System Performance Assessment (TSPA) calculations. These parameter distributions are documented in the revisions to the SZ flow model report (BSC 2003 [ 162649]), the SZ transport model report (BSC 2003 [ 162419]), the SZ colloid transport report (BSC 2003 [162729]), and the SZ transport model abstraction report (BSC 2003 [1648701]). Specifically, this scientific analysis report provides the following information that contributes to the assessment of the capability of the SZ to serve as a barrier for waste isolation for the Yucca Mountain repository system: (1) The bases for selection of conceptual flow and transport models in the saturated volcanics and the saturated alluvium located near Yucca Mountain. (2) Results and interpretations of hydraulic and tracer tests conducted in saturated fractured volcanics at the C-wells complex near Yucca Mountain. The test interpretations include estimates of hydraulic conductivities, anisotropy in hydraulic conductivity, storativities, total porosities, effective porosities, longitudinal dispersivities, matrix diffusion mass transfer coefficients, matrix diffusion coefficients, fracture apertures, and colloid transport parameters. (3) Results and interpretations of hydraulic and tracer tests conducted in saturated alluvium at the Alluvium Testing Complex (ATC), which is located at the southwestern corner of the Nevada Test Site (NTS). The test interpretations include estimates of hydraulic conductivities, storativities, total porosities, effective porosities, longitudinal dispersivities, matrix diffusion mass transfer coefficients, and

  18. Low-cost but accurate radioactive logging for determining gas saturation in a reservoir

    International Nuclear Information System (INIS)

    Neuman, C.H.

    1976-01-01

    A method is disclosed for determining gas saturation in a petroleum reservoir using logging signals indirectly related to the abundances of oxygen and carbon nuclei in the reservoir rock. The first step of the invention is to record first and second logs sensitive to the abundance of oxygen and carbon nuclei, respectively, after the region surrounding the well bore is caused to have fluid saturations representative of the bulk of the reservoir. A purposeful change is then made in the fluid saturations in the region surrounding the well bore by injecting a liquid capable of displacing substantially all of the original fluids. The logs are recorded a second time. The displacing fluid is then itself displaced by brine, and a third suite of logs is recorded. The total fluid and oil saturations are then determined from the differences between respective corresponding logs and from known fractional volume oxygen and carbon contents of the reservoir brine and oil and the first injected liquid. Gas saturation is then calculated from differences between total fluid and oil saturation values. It is not necessary that the log responses be independent of the material in the borehole, the casing, the casing cement, or the reservoir rock. It is only necessary that changes in formation fluids content cause proportional changes in log responses. 7 Claims, 4 Figures

  19. Spherical aberration correction with threefold symmetric line currents.

    Science.gov (United States)

    Hoque, Shahedul; Ito, Hiroyuki; Nishi, Ryuji; Takaoka, Akio; Munro, Eric

    2016-02-01

    It has been shown that an N-fold symmetric line current (henceforth denoted as N-SYLC) produces 2N-pole magnetic fields. In this paper, a threefold symmetric line current (N3-SYLC in short) is proposed for correcting the 3rd-order spherical aberration of round lenses. N3-SYLC can be realized without using magnetic materials, which makes it free of the problems of hysteresis, inhomogeneity and saturation. We investigate theoretically the basic properties of an N3-SYLC configuration which can in principle be realized with simple wires. By optimizing the parameters of a system with a beam energy of 5.5 keV, the required excitation current for correcting a 3rd-order spherical aberration coefficient of 400 mm is less than 1 AT, and the residual higher-order aberrations can be kept sufficiently small to obtain a beam size of less than 1 nm for initial slopes up to 5 mrad. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Relating oxygen partial pressure, saturation and content: the haemoglobin-oxygen dissociation curve.

    Science.gov (United States)

    Collins, Julie-Ann; Rudenski, Aram; Gibson, John; Howard, Luke; O'Driscoll, Ronan

    2015-09-01

    The delivery of oxygen by arterial blood to the tissues of the body has a number of critical determinants including blood oxygen concentration (content), saturation (SO2) and partial pressure, haemoglobin concentration and cardiac output, including its distribution. The haemoglobin-oxygen dissociation curve, a graphical representation of the relationship between oxygen saturation and oxygen partial pressure, helps us to understand some of the principles underpinning this process. Historically this curve was derived from very limited data based on blood samples from small numbers of healthy subjects which were manipulated in vitro and ultimately determined by equations such as those described by Severinghaus in 1979. In a study of 3524 clinical specimens, we found that this equation estimated the SO2 in blood from patients with normal pH and SO2 >70% with remarkable accuracy and, to our knowledge, this is the first large-scale validation of this equation using clinical samples. Oxygen saturation by pulse oximetry (SpO2) is nowadays the standard clinical method for assessing arterial oxygen saturation, providing a convenient, pain-free means of continuously assessing oxygenation, provided the interpreting clinician is aware of important limitations. The use of pulse oximetry reduces the need for arterial blood gas analysis (SaO2) as many patients who are not at risk of hypercapnic respiratory failure or metabolic acidosis and have acceptable SpO2 do not necessarily require blood gas analysis. While arterial sampling remains the gold-standard method of assessing ventilation and oxygenation, in those patients in whom blood gas analysis is indicated, arterialised capillary samples also have a valuable role in patient care. The clinical role of venous blood gases however remains less well defined.
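
    The Severinghaus (1979) relation referred to above is commonly quoted in the following form for blood at normal pH (pO2 in mmHg); a short Python rendering is given as a worked example rather than as the study's validation code.

        def severinghaus_so2(po2_mmhg):
            """Estimated haemoglobin O2 saturation (fraction) from oxygen partial pressure."""
            return 1.0 / (23400.0 / (po2_mmhg ** 3 + 150.0 * po2_mmhg) + 1.0)

        for po2 in (27, 40, 60, 100):
            print(po2, round(100 * severinghaus_so2(po2), 1))   # ~50%, ~75%, ~91%, ~98%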

  1. Current estimate of functional vision in patients with bifocal pseudophakia after correction of residual defocus by different methods

    Directory of Open Access Journals (Sweden)

    Yuri V Takhtaev

    2016-03-01

    Full Text Available In this article we evaluated the influence of different surgical methods for the correction of residual ametropia on contrast sensitivity under different light conditions and on higher-order aberrations in patients with bifocal pseudophakia. The study included 45 eyes (30 people) after cataract surgery, in which the dependence between contrast sensitivity and aberration level was studied before and after surgical correction of residual ametropia by one of three methods - LASIK, Sulcoflex IOL implantation or IOL exchange. Contrast sensitivity was measured with an Optec 6500 and aberrations with a Pentacam «OCULUS». We processed the results using the Mann-Whitney U-test. This study shows the correlation between each method and the residual aberration level, and their influence on contrast sensitivity.

  2. SATURATION OF MAGNETOROTATIONAL INSTABILITY THROUGH MAGNETIC FIELD GENERATION

    International Nuclear Information System (INIS)

    Ebrahimi, F.; Prager, S. C.; Schnack, D. D.

    2009-01-01

    The saturation mechanism of magnetorotational instability (MRI) is examined through analytical quasi-linear theory and through nonlinear computation of a single mode in a rotating disk. We find that large-scale magnetic field is generated through the α-effect (the correlated product of velocity and magnetic field fluctuations) and causes the MRI mode to saturate. If the large-scale plasma flow is allowed to evolve, the mode can also saturate through its flow relaxation. In astrophysical plasmas, for which the flow cannot relax because of gravitational constraints, the mode saturates through field generation only.

  3. Broadband EIT borehole measurements with high phase accuracy using numerical corrections of electromagnetic coupling effects

    International Nuclear Information System (INIS)

    Zhao, Y; Zimmermann, E; Wolters, B; Van Waasen, S; Huisman, J A; Treichel, A; Kemna, A

    2013-01-01

    be made in the mHz to kHz frequency range. This increased accuracy in the kHz range will allow a more accurate field characterization of the complex electrical conductivity of soils and sediments, which may lead to the improved estimation of saturated hydraulic conductivity from electrical properties. Although the correction methods have been developed for a custom-made EIT system, they also have potential to improve the phase accuracy of EIT measurements made with commercial systems relying on multicore cables. (paper)

  4. Self-correcting Multigrid Solver

    International Nuclear Information System (INIS)

    Lewandowski, Jerome L.V.

    2004-01-01

    A new multigrid algorithm based on the method of self-correction for the solution of elliptic problems is described. The method exploits information contained in the residual to dynamically modify the source term (right-hand side) of the elliptic problem. It is shown that the self-correcting solver is more efficient at damping the short-wavelength modes of the algebraic error than its standard equivalent. When used in conjunction with a multigrid method, the resulting solver displays an improved convergence rate with no additional computational work.
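
    As background for how residual information drives a multigrid solver, the minimal two-grid sketch below applies standard coarse-grid correction (smooth, restrict the residual, solve the coarse error equation, prolong and correct) to a 1-D Poisson problem. It is a textbook illustration, not the self-correcting variant described in the abstract.

```python
import numpy as np

def smooth(u, f, h, sweeps=3, omega=2.0 / 3.0):
    """Weighted Jacobi sweeps for -u'' = f with zero Dirichlet boundaries."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid_cycle(u, f, h):
    u = smooth(u, f, h)
    r = residual(u, f, h)
    # Restrict the residual to the coarse grid (full weighting).
    rc = np.zeros((u.size + 1) // 2)
    rc[1:-1] = 0.25 * (r[1:-3:2] + 2.0 * r[2:-2:2] + r[3:-1:2])
    # Solve the coarse error equation -e'' = r_c directly.
    hc, nc = 2.0 * h, rc.size
    A = (np.diag(2.0 * np.ones(nc - 2))
         - np.diag(np.ones(nc - 3), 1)
         - np.diag(np.ones(nc - 3), -1)) / hc**2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    # Prolong the coarse error linearly and correct the fine-grid solution.
    u += np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)
    return smooth(u, f, h)

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)     # exact solution is sin(pi*x)
u = np.zeros(n)
for _ in range(15):
    u = two_grid_cycle(u, f, h)
print("max error vs exact solution:", np.abs(u - np.sin(np.pi * x)).max())
```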

  5. Facilitated transport near the carrier saturation limit

    Directory of Open Access Journals (Sweden)

    Anawat Sungpet

    2002-11-01

    Full Text Available Permeation of ethylbenzene, styrene and 1-hexene through perfluorosulfonate ionomer membranes was carried out with feed concentrations ranging from 1 M to pure solute. On comparison, fluxes of ethylbenzene through the Ag+-form membrane were the lowest. Only a small increase in ethylbenzene flux was observed after the feed concentration exceeded 3 M, indicating the existence of carrier saturation. The increase in styrene flux was suppressed to some degree at high concentration driving forces. In contrast, 1-hexene flux was the highest and continued to increase even at very high feed concentrations. After the experiments with pure feeds, extraction of the solutes from the membranes revealed that 62.5% of Ag+ ions reacted with 1-hexene as against 40.6% for styrene and 28.9% for ethylbenzene. Equilibrium constants of 1-hexene, styrene and ethylbenzene, determined by the distribution method, were 129, 2.2 and 0.7 M⁻¹ respectively, which suggested that the stability of the complex is a key factor in the carrier saturation phenomenon.

  6. Gamma camera correction system and method for using the same

    International Nuclear Information System (INIS)

    Inbar, D.; Gafni, G.; Grimberg, E.; Bialick, K.; Koren, J.

    1986-01-01

    A gamma camera is described which consists of: (a) a detector head that includes photodetectors for producing output signals in response to radiation stimuli which are emitted by a radiation field and which interact with the detector head and produce an event; (b) signal processing circuitry responsive to the output signals of the photodetectors for producing a sum signal that is a measure of the total energy of the event; (c) an energy discriminator having a relatively wide window for comparison with the sum signal; (d) the signal processing circuitry including coordinate computation circuitry for operating on the output signals, and calculating an X,Y coordinate of an event when the sum signal lies within the window of the energy discriminator; (e) an energy correction table containing spatially dependent energy windows for producing a validation signal if the total energy of an event lies within the window associated with the X,Y coordinates of the event; (f) the signal processing circuitry including a dislocation correction table containing spatially dependent correction factors for converting the X,Y coordinates of an event to relocated coordinates in accordance with correction factors determined by the X,Y coordinates; (g) a digital memory for storing a map of the radiation field; and (h) means for recording an event at its relocated coordinates in the memory if the energy correction table produces a validation signal
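
    To make the event-processing chain described above concrete, the sketch below walks one detected event through the stages listed in the claim: sum signal, wide discriminator window, X,Y computation, spatially dependent energy window, dislocation (relocation) correction, and recording in memory. All table shapes, thresholds and units are assumptions for illustration, not values from the patent.

```python
import numpy as np

WIDE_LOW, WIDE_HIGH = 100.0, 200.0   # relatively wide discriminator window (arbitrary units)

def estimate_xy(pmt_signals):
    """Placeholder coordinate computation; a real camera uses Anger-type centroiding."""
    return 32.0, 32.0

def process_event(pmt_signals, energy_windows, offsets, image):
    total = pmt_signals.sum()                               # sum signal: total event energy
    if not (WIDE_LOW <= total <= WIDE_HIGH):                # wide energy discriminator
        return
    x, y = estimate_xy(pmt_signals)                         # coordinate computation
    lo, hi = energy_windows[int(y), int(x)]                 # spatially dependent energy window
    if lo <= total <= hi:                                   # validation signal
        dx, dy = offsets[int(y), int(x)]                    # dislocation correction factors
        image[int(round(y + dy)), int(round(x + dx))] += 1  # record at relocated coordinates

# Tiny demo with dummy correction tables.
energy_windows = np.full((64, 64, 2), (120.0, 180.0))
offsets = np.zeros((64, 64, 2))
image = np.zeros((64, 64), dtype=int)
process_event(np.array([50.0, 60.0, 40.0]), energy_windows, offsets, image)
print(image.sum())   # 1 event accepted and recorded
```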

  7. Effect of attenuation by the cranium on quantitative SPECT measurements of cerebral blood flow and a correction method

    International Nuclear Information System (INIS)

    Iwase, Mikio; Kurono, Kenji; Iida, Akihiko.

    1998-01-01

    Attenuation correction for cerebral blood flow SPECT image reconstruction is usually performed by considering the head as a whole to be equivalent to water, and the effects of differences in attenuation between subjects produced by the cranium have not been taken into account. We determined the differences in attenuation between subjects and assessed a method of correcting quantitative cerebral blood flow values. Attenuation by the head on the right and left sides was measured before intravenous injection of ¹²³I-IMP, and water-equivalent diameters of both sides (Ta) were calculated from the measurements obtained. After acquiring SPECT images, attenuation correction was conducted according to the method of Sorenson, and images were reconstructed. The diameters of the right and left sides in the same position as Ta (Tt) were calculated from the contours determined by threshold values. Using Ts given by 2Ts = Ta − Tt, the correction factor λ = exp(μ₁Ts) was calculated and applied as a multiplicative factor when rCBF was determined. The results revealed significant differences between Tt and Ta. Although no gender differences were observed in Tt, they were seen in both Ta and Ts. Thus, interindividual differences in attenuation by the cranium were found to have an influence that cannot be ignored. Inter-subject correction is needed to obtain accurate quantitative values. (author)
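
    The two relations quoted in the abstract can be applied directly; the sketch below computes the correction factor λ = exp(μ₁·Ts) with Ts = (Ta − Tt)/2 and applies it to an uncorrected rCBF value. Variable names and the numerical values are illustrative assumptions only.

```python
import math

def cranium_correction_factor(ta_cm: float, tt_cm: float, mu1_per_cm: float) -> float:
    """lambda = exp(mu1 * Ts) with Ts = (Ta - Tt) / 2, as quoted in the abstract."""
    ts = (ta_cm - tt_cm) / 2.0
    return math.exp(mu1_per_cm * ts)

# Illustrative numbers: a water-equivalent diameter (Ta) 1 cm larger than the
# threshold-derived diameter (Tt) with mu1 = 0.15 /cm gives ~8% correction.
lam = cranium_correction_factor(ta_cm=16.0, tt_cm=15.0, mu1_per_cm=0.15)
rcbf_corrected = 45.0 * lam          # assumed uncorrected rCBF of 45 ml/100 g/min
print(round(lam, 3), round(rcbf_corrected, 1))
```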

  8. GafChromic EBT film dosimetry with flatbed CCD scanner: A novel background correction method and full dose uncertainty analysis

    International Nuclear Information System (INIS)

    Saur, Sigrun; Frengen, Jomar

    2008-01-01

    Film dosimetry using radiochromic EBT film in combination with a flatbed charge coupled device scanner is a useful method both for two-dimensional verification of intensity-modulated radiation treatment plans and for general quality assurance of treatment planning systems and linear accelerators. Unfortunately, the response over the scanner area is nonuniform, and when not corrected for, this results in a systematic error in the measured dose which is both dose and position dependent. In this study a novel method for background correction is presented. The method is based on the subtraction of a correction matrix, a matrix that is based on scans of films that are irradiated to nine dose levels in the range 0.08-2.93 Gy. Because the response of the film is dependent on the film's orientation with respect to the scanner, correction matrices for both landscape oriented and portrait oriented scans were made. In addition to the background correction method, a full dose uncertainty analysis of the film dosimetry procedure was performed. This analysis takes into account the fit uncertainty of the calibration curve, the variation in response for different film sheets, the nonuniformity after background correction, and the noise in the scanned films. The film analysis was performed for film pieces of size 16x16 cm, all with the same lot number, and all irradiations were done perpendicular onto the films. The results show that the 2-sigma dose uncertainty at 2 Gy is about 5% and 3.5% for landscape and portrait scans, respectively. The uncertainty gradually increases as the dose decreases, but at 1 Gy the 2-sigma dose uncertainty is still as good as 6% and 4% for landscape and portrait scans, respectively. The study shows that film dosimetry using GafChromic EBT film, an Epson Expression 1680 Professional scanner and a dedicated background correction technique gives precise and accurate results. For the purpose of dosimetric verification, the calculated dose distribution can
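
    As a rough illustration of the background-correction step described above, the sketch below interpolates between pre-measured correction matrices (one per calibration dose level and scan orientation) and subtracts the result from a scanned film image. The array names, shapes and dose levels are assumptions, not the matrices measured in the study.

```python
import numpy as np

def background_correct(scan, nominal_dose_gy, dose_levels_gy, correction_matrices):
    """scan: 2-D scanned response; correction_matrices: (n_levels, H, W) stack
    measured at the doses in dose_levels_gy (e.g. nine levels, 0.08-2.93 Gy)."""
    dose_levels_gy = np.asarray(dose_levels_gy)
    i = np.clip(np.searchsorted(dose_levels_gy, nominal_dose_gy), 1, len(dose_levels_gy) - 1)
    w = (nominal_dose_gy - dose_levels_gy[i - 1]) / (dose_levels_gy[i] - dose_levels_gy[i - 1])
    correction = (1 - w) * correction_matrices[i - 1] + w * correction_matrices[i]
    return scan - correction

# Dummy usage: a flat scan, two calibration levels, zero correction matrices.
scan = np.ones((4, 4))
corrected = background_correct(scan, 1.0, [0.5, 2.0], np.zeros((2, 4, 4)))
print(corrected.shape)
```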

  9. GafChromic EBT film dosimetry with flatbed CCD scanner: a novel background correction method and full dose uncertainty analysis.

    Science.gov (United States)

    Saur, Sigrun; Frengen, Jomar

    2008-07-01

    Film dosimetry using radiochromic EBT film in combination with a flatbed charge coupled device scanner is a useful method both for two-dimensional verification of intensity-modulated radiation treatment plans and for general quality assurance of treatment planning systems and linear accelerators. Unfortunately, the response over the scanner area is nonuniform, and when not corrected for, this results in a systematic error in the measured dose which is both dose and position dependent. In this study a novel method for background correction is presented. The method is based on the subtraction of a correction matrix, a matrix that is based on scans of films that are irradiated to nine dose levels in the range 0.08-2.93 Gy. Because the response of the film is dependent on the film's orientation with respect to the scanner, correction matrices for both landscape oriented and portrait oriented scans were made. In addition to the background correction method, a full dose uncertainty analysis of the film dosimetry procedure was performed. This analysis takes into account the fit uncertainty of the calibration curve, the variation in response for different film sheets, the nonuniformity after background correction, and the noise in the scanned films. The film analysis was performed for film pieces of size 16 x 16 cm, all with the same lot number, and all irradiations were done perpendicular onto the films. The results show that the 2-sigma dose uncertainty at 2 Gy is about 5% and 3.5% for landscape and portrait scans, respectively. The uncertainty gradually increases as the dose decreases, but at 1 Gy the 2-sigma dose uncertainty is still as good as 6% and 4% for landscape and portrait scans, respectively. The study shows that film dosimetry using GafChromic EBT film, an Epson Expression 1680 Professional scanner and a dedicated background correction technique gives precise and accurate results. For the purpose of dosimetric verification, the calculated dose distribution

  10. Measurement of saturated hydraulic conductivity in fine-grained glacial tills in Iowa: Comparison of in situ and laboratory methods

    Science.gov (United States)

    Bruner, D. Roger; Lutenegger, Alan J.

    1994-01-01

    Nested-standpipe and vibrating-wire piezometers were installed in Pre-Illinoian Wolf Creek and Albernett formations at the Eastern Iowa Till Hydrology Site located in Linn County, Iowa. These surficial deposits are composed of fine-grained glacial diamicton (till) with occasional discontinuous lenses of sand and silt. They overlie the Silurian (dolomite) aquifer which provides private, public, and municipal drinking water supplies in the region. The saturated hydraulic conductivity of the Wolf Creek Formation was investigated in a sub-area of the Eastern Iowa Till Hydrology Site. Calculations of saturated hydraulic conductivity were based on laboratory flexible-wall permeameter tests, bailer tests, and pumping test data. Results show that bulk hydraulic conductivity increases by several orders of magnitude as the tested volume of till increases. Increasing values of saturated hydraulic conductivity at larger spatial scales conceptually support a double-porosity flow model for this till.

  11. Improved dq-axes Model of PMSM Considering Airgap Flux Harmonics and Saturation

    DEFF Research Database (Denmark)

    Fasil, Muhammed; Antaloae, Ciprian; Mijatovic, Nenad

    The classical dq-axes model of permanent magnet synchronous machines (PMSM) uses linear approximation. This was not an issue in earlier versions of PMSM drives because they mostly used surface magnet motors. With the arrival of interior permanent magnet (IPM) machines, which use reluctance torque along with magnet torque, the accuracy of linear models is found to be insufficient. In this work, the effect of air gap flux harmonics is included in the classical model of PMSM using d- and q-axes harmonic inductances. Further, a method is presented to assess the effect of saturation and cross-saturation on constant torque curves of PMSM. Two interior permanent magnet motors with two different rotor topologies and different specifications are designed to evaluate the effect of saturation on synchronous and harmonic inductances, and on the operating points of the machines.
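
    For reference, the baseline that the paper extends is the classical linear dq-frame torque expression, in which magnet torque and reluctance torque add; the harmonic inductances and saturation effects discussed above are refinements on top of this. The values below are illustrative, not taken from the two machines designed in the paper.

```python
def pmsm_torque_linear(psi_m, l_d, l_q, i_d, i_q, pole_pairs):
    """Classical (linear, saturation-free) dq torque of a PMSM:
    T = 1.5 * p * (psi_m * i_q + (L_d - L_q) * i_d * i_q)."""
    return 1.5 * pole_pairs * (psi_m * i_q + (l_d - l_q) * i_d * i_q)

# Example IPM operating point: a negative d-axis current exploits reluctance
# torque because L_q > L_d in an interior-magnet rotor (illustrative values).
print(pmsm_torque_linear(psi_m=0.1, l_d=1e-3, l_q=3e-3, i_d=-50.0, i_q=100.0, pole_pairs=4))
```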

  12. Statistical methods to correct for verification bias in diagnostic studies are inadequate when there are few false negatives: a simulation study

    Directory of Open Access Journals (Sweden)

    Vickers Andrew J

    2008-11-01

    Full Text Available Abstract Background A common feature of diagnostic research is that results for a diagnostic gold standard are available primarily for patients who are positive for the test under investigation. Data from such studies are subject to what has been termed "verification bias". We evaluated statistical methods for verification bias correction when there are few false negatives. Methods A simulation study was conducted of a screening study subject to verification bias. We compared estimates of the area-under-the-curve (AUC corrected for verification bias varying both the rate and mechanism of verification. Results In a single simulated data set, varying false negatives from 0 to 4 led to verification bias corrected AUCs ranging from 0.550 to 0.852. Excess variation associated with low numbers of false negatives was confirmed in simulation studies and by analyses of published studies that incorporated verification bias correction. The 2.5th – 97.5th centile range constituted as much as 60% of the possible range of AUCs for some simulations. Conclusion Screening programs are designed such that there are few false negatives. Standard statistical methods for verification bias correction are inadequate in this circumstance.

  13. Saturation of bentonite dependent upon temperature

    International Nuclear Information System (INIS)

    Hausmannova, Lucie; Vasicek, Radek

    2010-01-01

    Document available in extended abstract form only. The fundamental idea behind the long-term safe operation of a deep repository is the use of the Multi-barrier system principle. Barriers may well differ according to the type of host rock in which the repository is located. It is assumed that the buffer in the granitic host rock environment will consist of swelling clays which boast the ideal properties for such a function i.e. low permeability, high swelling pressure, self-healing ability etc. all of which are affected primarily by mineralogy and dry density. Water content plays a crucial role in the activation of swelling pressure as well as, subsequently, in the potential self healing of the various contact areas of the numerous buffer components made from bentonite. In the case of a deep repository, a change in water content is not only connected with the possible intake of water from the host rock, but also with its redistribution owing to changes in temperature after the insertion of the heat source (disposal waste package containing spent fuel) into the repository 'nest'. The principal reason for the experimental testing of this high dry density material is the uncertainty with regard to its saturation ability (final water content or the degree of saturation) at higher temperatures. The results of the Mock-Up-CZ experiment showed that when the barrier is constantly supplied with a saturation medium over a long time period the water content in the barrier as well as the degree of saturation settle independently of temperature. The Mock-Up-CZ experiment was performed at temperatures of 30 deg. - 90 deg. C in the barrier; therefore it was decided to experimentally verify this behaviour by means of targeted laboratory tests. A temperature of 110 deg. C was added to the set of experimental temperatures resulting in samples being tested at 25 deg. C, 95 deg. C and 110 deg. C. The degree of saturation is defined as the ratio of pore water volume to pore

  14. Minimum K_2,3-saturated Graphs

    OpenAIRE

    Chen, Ya-Chen

    2010-01-01

    A graph is K_{2,3}-saturated if it has no subgraph isomorphic to K_{2,3}, but does contain a K_{2,3} after the addition of any new edge. We prove that the minimum number of edges in a K_{2,3}-saturated graph on n >= 5 vertices is sat(n, K_{2,3}) = 2n - 3.

  15. THE EFFECT OF DIFFERENT CORRECTIVE FEEDBACK METHODS ON THE OUTCOME AND SELF CONFIDENCE OF YOUNG ATHLETES

    Directory of Open Access Journals (Sweden)

    George Tzetzis

    2008-09-01

    Full Text Available This experiment investigated the effects of three corrective feedback methods, using different combinations of correction or error cues and positive feedback, for learning two badminton skills of different difficulty (forehand clear - low difficulty, backhand clear - high difficulty). Outcome and self-confidence scores were used as dependent variables. The 48 participants were randomly assigned to four groups. Group A received correction cues and positive feedback. Group B received cues on errors of execution. Group C received positive feedback, correction cues and error cues. Group D was the control group. Pre-, post- and retention tests were conducted. A three-way analysis of variance (ANOVA; 4 groups × 2 task difficulty × 3 measures) with repeated measures on the last factor revealed significant interactions for each dependent variable. All the corrective feedback groups increased their outcome scores over time for the easy skill, but only groups A and C did so for the difficult skill. Groups A and B had significantly better outcome scores than group C and the control group for the easy skill on the retention test. However, for the difficult skill, group C was better than groups A, B and D. The self-confidence scores of groups A and C improved over time for the easy skill, but not those of groups B and D. Again, for the difficult skill, only group C improved over time. Finally, a regression analysis showed that the improvement in performance predicted a proportion of the improvement in self-confidence for both the easy and the difficult skill. It was concluded that when young athletes are taught skills of different difficulty, different types of instruction might be more appropriate in order to improve outcome and self-confidence. A more integrated approach to teaching will assist coaches and physical education teachers in being more efficient and effective.

  16. 4SM: A Novel Self-Calibrated Algebraic Ratio Method for Satellite-Derived Bathymetry and Water Column Correction

    Directory of Open Access Journals (Sweden)

    Yann G. Morel

    2017-07-01

    Full Text Available All empirical water column correction methods have consistently been reported to require existing depth sounding data for the purpose of calibrating a simple depth retrieval model; they yield poor results over very bright or very dark bottoms. In contrast, we set out to (i) use only the relative radiance data in the image along with published data, and several new assumptions; (ii) in order to specify and operate the simplified radiative transfer equation (RTE); (iii) for the purpose of retrieving both the satellite derived bathymetry (SDB) and the water column corrected spectral reflectance over shallow seabeds. Sea truth regressions show that SDB depths retrieved by the method only need tide correction. Therefore it shall be demonstrated that, under such new assumptions, there is no need for (i) formal atmospheric correction; (ii) conversion of relative radiance into calibrated reflectance; or (iii) existing depth sounding data, to specify the simplified RTE and produce both SDB and spectral water column corrected radiance ready for bottom typing. Moreover, the use of the panchromatic band for that purpose is introduced. Altogether, we named this process the Self-Calibrated Supervised Spectral Shallow-sea Modeler (4SM). This approach requires a trained practitioner, though, to produce its results within hours of downloading the raw image. The ideal raw image should be a “near-nadir” view, exhibit homogeneous atmosphere and water column, include some coverage of optically deep waters and bare land, and lend itself to quality removal of haze, atmospheric adjacency effect, and sun/sky glint.

  17. 4SM: A Novel Self-Calibrated Algebraic Ratio Method for Satellite-Derived Bathymetry and Water Column Correction.

    Science.gov (United States)

    Morel, Yann G; Favoretto, Fabio

    2017-07-21

    All empirical water column correction methods have consistently been reported to require existing depth sounding data for the purpose of calibrating a simple depth retrieval model; they yield poor results over very bright or very dark bottoms. In contrast, we set out to (i) use only the relative radiance data in the image along with published data, and several new assumptions; (ii) in order to specify and operate the simplified radiative transfer equation (RTE); (iii) for the purpose of retrieving both the satellite derived bathymetry (SDB) and the water column corrected spectral reflectance over shallow seabeds. Sea truth regressions show that SDB depths retrieved by the method only need tide correction. Therefore it shall be demonstrated that, under such new assumptions, there is no need for (i) formal atmospheric correction; (ii) conversion of relative radiance into calibrated reflectance; or (iii) existing depth sounding data, to specify the simplified RTE and produce both SDB and spectral water column corrected radiance ready for bottom typing. Moreover, the use of the panchromatic band for that purpose is introduced. Altogether, we named this process the Self-Calibrated Supervised Spectral Shallow-sea Modeler (4SM). This approach requires a trained practitioner, though, to produce its results within hours of downloading the raw image. The ideal raw image should be a "near-nadir" view, exhibit homogeneous atmosphere and water column, include some coverage of optically deep waters and bare land, and lend itself to quality removal of haze, atmospheric adjacency effect, and sun/sky glint.

  18. METHOD OF RADIOMETRIC DISTORTION CORRECTION OF MULTISPECTRAL DATA FOR THE EARTH REMOTE SENSING

    Directory of Open Access Journals (Sweden)

    A. N. Grigoriev

    2015-07-01

    Full Text Available The paper deals with technologies for ground-based secondary processing of heterogeneous multispectral data. The sources of heterogeneity include uneven illumination of objects on the Earth's surface caused by different properties of the relief. A procedure for the restoration of spectral-channel images by means of terrain distortion compensation is developed. The aim of this paper is to improve the quality of image restoration results for areas with large and medium landforms. Methods. The research is based on elements of digital image processing theory, statistical processing of observation results and the theory of multidimensional arrays. Main Results. The author has introduced operations on multidimensional arrays: concatenation and elementwise division. An extended model description of the input data about the area is given. The model contains all the data necessary for image restoration. A correction method for radiometric distortions of multispectral Earth remote sensing data has been developed. The method consists of two phases: construction of empirical dependences of spectral reflectance on the relief properties, and restoration of spectral images according to the semiempirical data. Practical Relevance. The research novelty lies in the development of the theory of multidimensional arrays as applied to the processing of multispectral data together with data on the topography and terrain objects. The results are usable for the development of radiometric data correction tools. Processing is performed on the basis of a digital terrain model without carrying out ground work connected with studying the reflective properties of the objects.

  19. A comparison of high-order explicit Runge–Kutta, extrapolation, and deferred correction methods in serial and parallel

    KAUST Repository

    Ketcheson, David I.

    2014-06-13

    We compare the three main types of high-order one-step initial value solvers: extrapolation, spectral deferred correction, and embedded Runge–Kutta pairs. We consider orders four through twelve, including both serial and parallel implementations. We cast extrapolation and deferred correction methods as fixed-order Runge–Kutta methods, providing a natural framework for the comparison. The stability and accuracy properties of the methods are analyzed by theoretical measures, and these are compared with the results of numerical tests. In serial, the eighth-order pair of Prince and Dormand (DOP8) is most efficient. But other high-order methods can be more efficient than DOP8 when implemented in parallel. This is demonstrated by comparing a parallelized version of the well-known ODEX code with the (serial) DOP853 code. For an N-body problem with N = 400, the experimental extrapolation code is as fast as the tuned Runge–Kutta pair at loose tolerances, and is up to two times as fast at tight tolerances.
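
    The comparison above can be reproduced in spirit with any off-the-shelf high-order explicit Runge–Kutta pair; the sketch below runs SciPy's DOP853 implementation on a small Kepler two-body problem at a loose and a tight tolerance. This is only an illustration of the kind of solver and test problem discussed, not the authors' benchmark code.

```python
import numpy as np
from scipy.integrate import solve_ivp

def kepler(t, y):
    """Planar two-body problem in normalized units: y = (x1, x2, v1, v2)."""
    pos, vel = y[:2], y[2:]
    r3 = np.linalg.norm(pos) ** 3
    return np.concatenate([vel, -pos / r3])

y0 = np.array([1.0, 0.0, 0.0, 1.0])          # circular orbit with period 2*pi
for tol in (1e-5, 1e-12):                     # "loose" vs "tight" tolerances
    sol = solve_ivp(kepler, (0.0, 20 * np.pi), y0, method="DOP853", rtol=tol, atol=tol)
    err = np.linalg.norm(sol.y[:2, -1] - y0[:2])
    print(f"rtol={tol:.0e}: {sol.nfev} RHS evaluations, final position error {err:.2e}")
```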

  20. Effects of dietary saturated fat on LDL subclasses and apolipoprotein CIII in men

    OpenAIRE

    Faghihnia, Nastaran; Mangravite, Lara M.; Chiu, Sally; Bergeron, Nathalie; Krauss, Ronald M.

    2012-01-01

    Background/Objectives Small dense LDL particles and apolipoprotein (apo) CIII are risk factors for cardiovascular disease (CVD) that can be modulated by diet, but there is little information regarding the effects of dietary saturated fat on their plasma levels. We tested the effects of high vs. low saturated fat intake in the context of a high beef protein diet on levels and composition of LDL subclasses and on apoCIII levels in plasma and LDL. Subjects/Methods Following consumption of a base...

  1. Efficiency corrections in determining the 137Cs inventory of environmental soil samples by using relative measurement method and GEANT4 simulations

    International Nuclear Information System (INIS)

    Li, Gang; Liang, Yongfei; Xu, Jiayun; Bai, Lixin

    2015-01-01

    The determination of ¹³⁷Cs inventory is widely used to estimate the soil erosion or deposition rate. The generally used method to determine the activity of volumetric samples is the relative measurement method, which employs a calibration standard sample with accurately known activity. This method has great advantages in accuracy and operation only when there is a small difference in elemental composition, sample density and geometry between the measured samples and the calibration standard. Otherwise it needs additional efficiency corrections in the calculation process. Monte Carlo simulations can handle these correction problems easily, with lower financial cost and higher accuracy. This work presents a detailed description of the simulation and calibration procedure for a conventionally used commercial P-type coaxial HPGe detector with cylindrical sample geometry. The effects of sample elemental composition, density and geometry were discussed in detail and calculated in terms of efficiency correction factors. The effect of sample placement was also analyzed; the results indicate that the radioactive nuclides and sample density are not absolutely uniformly distributed along the axial direction. Finally, a unified binary quadratic functional relationship of efficiency correction factors as a function of sample density and height was obtained by the least-squares fitting method. This function covers the sample density and height ranges of 0.8–1.8 g/cm³ and 3.0–7.25 cm, respectively. The efficiency correction factors calculated by the fitted function are in good agreement with those obtained by the GEANT4 simulations, with the determination coefficient value greater than 0.9999. The results obtained in this paper make the above-mentioned relative measurements more accurate and efficient in the routine radioactive analysis of environmental cylindrical soil samples. - Highlights: • Determination of ¹³⁷Cs inventory in environmental soil samples by using relative
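
    The final fitting step described above amounts to an ordinary least-squares fit of a bivariate quadratic surface in sample density and height. The sketch below shows that fit on synthetic placeholder data; the coefficients and data are not the GEANT4 results from the paper.

```python
import numpy as np

rho = np.linspace(0.8, 1.8, 6)                 # sample density, g/cm^3
height = np.linspace(3.0, 7.25, 6)             # sample height, cm
R, H = np.meshgrid(rho, height)
F = 1.0 + 0.05 * (R - 1.0) + 0.02 * (H - 5.0)  # placeholder "correction factors"

# Design matrix for f = a0 + a1*rho + a2*h + a3*rho^2 + a4*h^2 + a5*rho*h
X = np.column_stack([np.ones(R.size), R.ravel(), H.ravel(),
                     R.ravel() ** 2, H.ravel() ** 2, (R * H).ravel()])
coeffs, *_ = np.linalg.lstsq(X, F.ravel(), rcond=None)
print("fitted coefficients:", np.round(coeffs, 4))
print("max fit residual:", np.abs(X @ coeffs - F.ravel()).max())
```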

  2. Corrections for hysteresis curves for rare earth magnet materials measured by open magnetic circuit methods

    International Nuclear Information System (INIS)

    Nakagawa, Yasuaki

    1996-01-01

    The methods for testing permanent magnets stipulated in the usual industrial standards are so-called closed magnetic circuit methods, which employ a loop tracer using an iron-core electromagnet. If the coercivity exceeds the highest magnetic field generated by the electromagnet, full hysteresis curves cannot be obtained. In the present work, magnetic fields up to 15 T were generated by a high-power water-cooled magnet, and the magnetization was measured by an induction method with an open magnetic circuit, in which the effect of a demagnetizing field should be taken into account. Various rare earth magnet materials such as sintered or bonded Sm-Co and Nd-Fe-B were provided by a number of manufacturers. Hysteresis curves were measured for cylindrical samples 10 mm in diameter and 2 mm, 3.5 mm, 5 mm, 14 mm or 28 mm in length. Correction for the demagnetizing field is rather difficult because of its non-uniformity. Roughly speaking, a mean demagnetizing factor for soft magnetic materials can be used for the correction, although the application of this factor to hard magnetic materials is hardly justified. Thus the dimensions of the sample should be specified when the data obtained by the open magnetic circuit method are used as industrial standards. (author)

  3. The Danish tax on saturated fat

    DEFF Research Database (Denmark)

    Vallgårda, Signild; Holm, Lotte; Jensen, Jørgen Dejgård

    2015-01-01

    BACKGROUND/OBJECTIVES: Health promoters have repeatedly proposed using economic policy tools, taxes and subsidies, as a means of changing consumer behaviour. As the first country in the world, Denmark introduced a tax on saturated fat in 2011. It was repealed in 2012. In this paper, we present the arguments and themes involved in the debates surrounding the introduction and the repeal. SUBJECTS/METHODS: An analysis of parliamentary debates, expert reports and media coverage; key informant interviews; and a review of studies about the effects of the tax on consumer behaviour. RESULTS: A tax ... indicates that the tax was effective in changing consumer behaviour.

  4. Implementing a generic method for bias correction in statistical models using random effects, with spatial and population dynamics examples

    DEFF Research Database (Denmark)

    Thorson, James T.; Kristensen, Kasper

    2016-01-01

    Statistical models play an important role in fisheries science when reconciling ecological theory with available data for wild populations or experimental studies. Ecological models increasingly include both fixed and random effects, and are often estimated using maximum likelihood techniques ... configurations of an age-structured population dynamics model. This simulation experiment shows that the epsilon-method and the existing bias-correction method perform equally well in data-rich contexts, but the epsilon-method is slightly less biased in data-poor contexts. We then apply the epsilon-method to a spatial regression model when estimating an index of population abundance, and compare results with an alternative bias-correction algorithm that involves Markov-chain Monte Carlo sampling. This example shows that the epsilon-method leads to a biologically significant difference in estimates of average...

  5. Quantitative evaluation of automated skull-stripping methods applied to contemporary and legacy images: effects of diagnosis, bias correction, and slice location

    DEFF Research Database (Denmark)

    Fennema-Notestine, Christine; Ozyurt, I Burak; Clark, Camellia P

    2006-01-01

    Extractor (BSE, Sandor and Leahy [1997] IEEE Trans Med Imag 16:41-54; Shattuck et al. [2001] Neuroimage 13:856-876) to manually stripped images. The methods were applied to uncorrected and bias-corrected datasets; Legacy and Contemporary T1-weighted image sets; and four diagnostic groups (depressed...... distances, and an Expectation-Maximization algorithm. Methods tended to perform better on contemporary datasets; bias correction did not significantly improve method performance. Mesial sections were most difficult for all methods. Although AD image sets were most difficult to strip, HWA and BSE were more...

  6. A bias-corrected CMIP5 dataset for Africa using the CDF-t method - a contribution to agricultural impact studies

    Science.gov (United States)

    Moise Famien, Adjoua; Janicot, Serge; Delfin Ochou, Abe; Vrac, Mathieu; Defrance, Dimitri; Sultan, Benjamin; Noël, Thomas

    2018-03-01

    The objective of this paper is to present a new dataset of bias-corrected CMIP5 global climate model (GCM) daily data over Africa. This dataset was obtained using the cumulative distribution function transform (CDF-t) method, a method that has been applied to several regions and contexts but never to Africa. Here CDF-t has been applied over the period 1950-2099 combining Historical runs and climate change scenarios for six variables: precipitation, mean near-surface air temperature, near-surface maximum air temperature, near-surface minimum air temperature, surface downwelling shortwave radiation, and wind speed, which are critical variables for agricultural purposes. WFDEI has been used as the reference dataset to correct the GCMs. Evaluation of the results over West Africa has been carried out on a list of priority user-based metrics that were discussed and selected with stakeholders. It includes simulated yield using a crop model simulating maize growth. These bias-corrected GCM data have been compared with another available dataset of bias-corrected GCMs using WATCH Forcing Data as the reference dataset. The impact of WFD, WFDEI, and also EWEMBI reference datasets has been also examined in detail. It is shown that CDF-t is very effective at removing the biases and reducing the high inter-GCM scattering. Differences with other bias-corrected GCM data are mainly due to the differences among the reference datasets. This is particularly true for surface downwelling shortwave radiation, which has a significant impact in terms of simulated maize yields. Projections of future yields over West Africa are quite different, depending on the bias-correction method used. However all these projections show a similar relative decreasing trend over the 21st century.
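
    As a simplified illustration of the distribution-matching idea behind such bias correction, the sketch below performs plain empirical quantile mapping of a model variable onto a reference dataset over a common historical period. The actual CDF-t method goes further (it transfers the calibration-period relationship to the future CDFs), so this is only a conceptual stand-in, not the dataset's processing chain.

```python
import numpy as np

def quantile_map(model_hist, reference, model_values):
    """Map model values onto the reference distribution via empirical CDFs
    estimated over the common historical period."""
    probs = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model_hist, probs)
    ref_q = np.quantile(reference, probs)
    p = np.interp(model_values, model_q, probs)   # probability under the model CDF
    return np.interp(p, probs, ref_q)             # reference value at that probability

rng = np.random.default_rng(1)
model_hist = rng.gamma(2.0, 3.0, 10_000)          # biased model precipitation (synthetic)
reference = rng.gamma(2.0, 2.0, 10_000)           # "observed" reference (synthetic)
corrected = quantile_map(model_hist, reference, model_hist)
print(round(model_hist.mean(), 2), round(corrected.mean(), 2), round(reference.mean(), 2))
```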

  7. Scintillation probe with photomultiplier tube saturation indicator

    International Nuclear Information System (INIS)

    Ruch, J.F.; Urban, D.J.

    1996-01-01

    A photomultiplier tube saturation indicator is formed by supplying a supplemental light source, typically a light-emitting diode (LED), adjacent to the photomultiplier tube. A switch allows the light source to be activated. The light is forwarded to the photomultiplier tube by an optical fiber. If the probe is properly light tight, then a meter attached to the indicator will register the light from the LED. If the probe is no longer light tight and the saturation indicator is saturated, no signal will be registered when the LED is activated. 2 figs

  8. Rejection of Erroneous Saturation Data in Optical Pulse Oximetry in Newborn Patients

    Science.gov (United States)

    Scalise, L.; Marchionni, Paolo; Carnielli, Virgilio P.

    2011-08-01

    Pulse oximetry (PO) is extensively used in the intensive care unit (ICU), mainly because it is a non-invasive, real-time monitoring method. PO allows measurement of arterial oxygen saturation (SaO2) and in particular haemoglobin oxygenation. Optical PO is typically realized by the use of a clip (applied to the ear or fingertip) containing a pair of monochromatic LED sources and a photodiode. The main drawback of PO is the presence of movement artifacts or disturbances due to the optical sources and the skin, causing erroneous saturation data. The aim of this work is to present a measurement procedure, based on a specially developed algorithm, able to reject erroneous oxygen saturation data during long-lasting monitoring of patients in the ICU, and to compare the measured data with reference data provided by a blood gas analyzer (EGA). We collected SaO2 data from a standard PO and used an intensive care unit monitor to collect data. This device was connected to our acquisition system; heart rate (HR) and SaO2 data were acquired, processed by our algorithm and directly displayed on the PC screen for use by the clinicians. The algorithm used for identifying and rejecting erroneous saturation data is based on the assessment of the difference between the heart rates measured, respectively, by the ECG and by the PO. We used a blood gas analyzer (EGA) for comparison of the measured data. The study was carried out in a neonatal intensive care unit (NICU), using 817 data points from 24 patients, with an observation time of about 10,000 hours. Results show a reduction in the maximum difference between the SaO2 values measured simultaneously on the same patient by the EGA and by the proposed method of 14.20%, and of 4.76% on average over the 817 samples. The proposed measurement method is therefore able to identify and eliminate the erroneous saturation data due to motion artifacts and reported by the pulse oximeter.
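
    The rejection rule described above is simple to state: a saturation sample is discarded whenever the pulse-oximeter heart rate disagrees with the ECG heart rate by more than a chosen margin. The sketch below implements that rule; the 10 bpm threshold is an assumption for illustration, not the value used in the study.

```python
import numpy as np

def reject_erroneous_spo2(spo2, hr_po, hr_ecg, max_hr_diff_bpm=10.0):
    """Flag SpO2 samples as invalid when |HR_PO - HR_ECG| exceeds the threshold."""
    spo2 = np.asarray(spo2, dtype=float)
    valid = np.abs(np.asarray(hr_po, dtype=float) - np.asarray(hr_ecg, dtype=float)) <= max_hr_diff_bpm
    cleaned = spo2.copy()
    cleaned[~valid] = np.nan        # rejected samples are flagged, not interpolated
    return cleaned, valid

spo2   = [97, 96, 72, 95]           # the 72% reading coincides with an HR mismatch
hr_po  = [130, 132, 180, 131]
hr_ecg = [131, 131, 132, 130]
cleaned, valid = reject_erroneous_spo2(spo2, hr_po, hr_ecg)
print(cleaned, valid)
```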

  9. An Uneven Illumination Correction Algorithm for Optical Remote Sensing Images Covered with Thin Clouds

    Directory of Open Access Journals (Sweden)

    Xiaole Shen

    2015-09-01

    Full Text Available The uneven illumination phenomenon caused by thin clouds will reduce the quality of remote sensing images, and bring adverse effects to the image interpretation. To remove the effect of thin clouds on images, an uneven illumination correction can be applied. In this paper, an effective uneven illumination correction algorithm is proposed to remove the effect of thin clouds and to restore the ground information of the optical remote sensing image. The imaging model of remote sensing images covered by thin clouds is analyzed. Due to the transmission attenuation, reflection, and scattering, the thin cloud cover usually increases region brightness and reduces saturation and contrast of the image. As a result, a wavelet domain enhancement is performed for the image in Hue-Saturation-Value (HSV) color space. We use images with thin clouds in the Wuhan area captured by the QuickBird and ZiYuan-3 (ZY-3) satellites for experiments. Three traditional uneven illumination correction algorithms, i.e., the multi-scale Retinex (MSR) algorithm, the homomorphic filtering (HF)-based algorithm, and the wavelet transform-based MASK (WT-MASK) algorithm, are performed for comparison. Five indicators, i.e., mean value, standard deviation, information entropy, average gradient, and hue deviation index (HDI), are used to analyze the effect of the algorithms. The experimental results show that the proposed algorithm can effectively eliminate the influences of thin clouds and restore the real color of ground objects under thin clouds.
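
    As a simplified stand-in for the correction described above, the sketch below estimates a low-frequency brightness field and divides it out of the V channel of an HSV image. The paper performs a wavelet-domain enhancement instead of the Gaussian low-pass used here, so this only illustrates the general idea of correcting the value channel while leaving hue and saturation untouched.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_uneven_illumination(hsv_image, sigma=50.0):
    """hsv_image: float array (H, W, 3) with channels in [0, 1]."""
    h, s, v = hsv_image[..., 0], hsv_image[..., 1], hsv_image[..., 2]
    illumination = gaussian_filter(v, sigma=sigma) + 1e-6     # smooth brightness field
    v_corrected = np.clip(v / illumination * v.mean(), 0.0, 1.0)
    return np.dstack([h, s, v_corrected])

# Dummy usage on a synthetic HSV image with a left-to-right brightness gradient.
gradient = np.tile(np.linspace(0.4, 1.0, 256), (256, 1))
hsv = np.dstack([np.zeros_like(gradient), np.zeros_like(gradient), gradient])
print(correct_uneven_illumination(hsv).shape)
```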

  10. Investigation of photobleaching and saturation of single molecules by fluorophore recrossing events

    Energy Technology Data Exchange (ETDEWEB)

    Burrows, Sean M.; Reif, Randall D. [Department of Chemistry and Biochemistry, Texas Tech University, Lubbock, TX 79409-1061 (United States); Pappas, Dimitri [Department of Chemistry and Biochemistry, Texas Tech University, Lubbock, TX 79409-1061 (United States)], E-mail: d.pappas@ttu.edu

    2007-08-15

    A method for investigation of photobleaching and saturation of single molecules by fluorophore recrossing events in a laser beam is described. The diffraction-limited probe volumes encountered in single-molecule detection (SMD) produce high excitation irradiance, which can decrease available signal. The single molecules of several dyes were detected and the data was used to extract interpeak times above a defined threshold value. The interpeak times revealed the number of fluorophore recrossing events. The number of molecules detected that were within 2 ms of each other represented a molecular recrossing for this work. Calcein, fluorescein and R-phycoerythrin were analyzed and the saturation irradiance and photobleaching effects were determined as a function of irradiance. This approach is simple and it serves as a method of optimizing experimental conditions for single-molecule detection.

  11. Experimental aspects of buoyancy correction in measuring reliable high-pressure excess adsorption isotherms using the gravimetric method

    Science.gov (United States)

    Nguyen, Huong Giang T.; Horn, Jarod C.; Thommes, Matthias; van Zee, Roger D.; Espinal, Laura

    2017-12-01

    Addressing reproducibility issues in adsorption measurements is critical to accelerating the path to discovery of new industrial adsorbents and to understanding adsorption processes. A National Institute of Standards and Technology Reference Material, RM 8852 (ammonium ZSM-5 zeolite), and two gravimetric instruments with asymmetric two-beam balances were used to measure high-pressure adsorption isotherms. This work demonstrates how common approaches to buoyancy correction, a key factor in obtaining the mass change due to surface excess gas uptake from the apparent mass change, can impact the adsorption isotherm data. Three different approaches to buoyancy correction were investigated and applied to the subcritical CO2 and supercritical N2 adsorption isotherms at 293 K. It was observed that measuring a collective volume for all balance components for the buoyancy correction (helium method) introduces an inherent bias in temperature partition when there is a temperature gradient (i.e. analysis temperature is not equal to instrument air bath temperature). We demonstrate that a blank subtraction is effective in mitigating the biases associated with temperature partitioning, instrument calibration, and the determined volumes of the balance components. In general, the manual and subtraction methods allow for better treatment of the temperature gradient during buoyancy correction. From the study, best practices specific to asymmetric two-beam balances and more general recommendations for measuring isotherms far from critical temperatures using gravimetric instruments are offered.
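
    The quantity being corrected is the standard gravimetric buoyancy term: the surface-excess mass equals the apparent (blank-subtracted) mass change plus the product of gas density and the displaced volume of sample, holder and balance components at the analysis conditions. The sketch below simply evaluates that relation with placeholder numbers; it does not reproduce the three correction approaches compared in the paper.

```python
def excess_mass_g(apparent_mass_change_g: float,
                  gas_density_g_cm3: float,
                  displaced_volume_cm3: float) -> float:
    """m_excess = delta_m_apparent + rho_gas * V_displaced (blank-subtracted data)."""
    return apparent_mass_change_g + gas_density_g_cm3 * displaced_volume_cm3

# Example (placeholder values): CO2 at ~0.05 g/cm^3 displacing 0.5 cm^3 adds a
# 25 mg buoyancy term to an apparent 10 mg uptake.
print(excess_mass_g(0.010, 0.05, 0.5))   # 0.035 g of surface excess
```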

  12. Bias Correction Methods Explain Much of the Variation Seen in Breast Cancer Risks of BRCA1/2 Mutation Carriers.

    Science.gov (United States)

    Vos, Janet R; Hsu, Li; Brohet, Richard M; Mourits, Marian J E; de Vries, Jakob; Malone, Kathleen E; Oosterwijk, Jan C; de Bock, Geertruida H

    2015-08-10

    Recommendations for treating patients who carry a BRCA1/2 gene are mainly based on cumulative lifetime risks (CLTRs) of breast cancer determined from retrospective cohorts. These risks vary widely (27% to 88%), and it is important to understand why. We analyzed the effects of methods of risk estimation and bias correction and of population factors on CLTRs in this retrospective clinical cohort of BRCA1/2 carriers. The following methods to estimate the breast cancer risk of BRCA1/2 carriers were identified from the literature: Kaplan-Meier, frailty, and modified segregation analyses with bias correction consisting of including or excluding index patients combined with including or excluding first-degree relatives (FDRs) or different conditional likelihoods. These were applied to clinical data of BRCA1/2 families derived from our family cancer clinic, for whom a simulation was also performed to evaluate the methods. CLTRs and 95% CIs were estimated and compared with the reference CLTRs. CLTRs ranged from 35% to 83% for BRCA1 and 41% to 86% for BRCA2 carriers at age 70 years (width of 95% CIs: 10% to 35% and 13% to 46%, respectively). Relative bias varied from -38% to +16%. Bias correction with inclusion of index patients and untested FDRs gave the smallest bias: +2% (SD, 2%) in BRCA1 and +0.9% (SD, 3.6%) in BRCA2. Much of the variation in breast cancer CLTRs in retrospective clinical BRCA1/2 cohorts is due to the bias-correction method, whereas a smaller part is due to population differences. Kaplan-Meier analyses with bias correction that includes index patients and a proportion of untested FDRs provide suitable CLTRs for carriers counseled in the clinic. © 2015 by American Society of Clinical Oncology.

  13. Seismic Evaluation of Hydrocarbon Saturation in Deep-Water Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Michael Batzle

    2006-04-30

    During this last period of the "Seismic Evaluation of Hydrocarbon Saturation in Deep-Water Reservoirs" project (Grant/Cooperative Agreement DE-FC26-02NT15342), we finalized integration of rock physics, well log analysis, seismic processing, and forward modeling techniques. Most of the last quarter was spent combining the results from the principal investigators and coming to some final conclusions about the project. Also, much of the effort was directed towards technology transfer through the Direct Hydrocarbon Indicators mini-symposium at UH and through publications. As a result we have: (1) Tested a new method to directly invert reservoir properties, water saturation, Sw, and porosity from seismic AVO attributes; (2) Constrained the seismic response based on fluid and rock property correlations; (3) Reprocessed seismic data from Ursa field; (4) Compared thin layer property distributions and averaging on AVO response; (5) Related pressure and sorting effects on porosity and their influence on DHIs; (6) Examined and compared gas saturation effects for deep and shallow reservoirs; (7) Performed forward modeling using geobodies from deepwater outcrops; (8) Documented velocities for deepwater sediments; (9) Continued incorporating outcrop descriptive models in seismic forward models; (10) Held an open DHI symposium to present the final results of the project; (11) Relations between Sw, porosity, and AVO attributes; (12) Models of complex, layered reservoirs; and (14) Technology transfer. Several factors can contribute to limit our ability to extract accurate hydrocarbon saturations in deep water environments. Rock and fluid properties are one factor, since, for example, hydrocarbon properties will be considerably different at great depths (high pressure) when compared to shallow properties. Significant overpressure, on the other hand, will make the rocks behave as if they were shallower. In addition to the physical properties, the scale and

  14. Fat-saturated post gadolinium T1 imaging of the brain in multiple sclerosis

    International Nuclear Information System (INIS)

    Al-Saeed, Osama; Sheikh, Mehraj; Ismail, Mohammed; Athyal, Reji

    2011-01-01

    Background Magnetic resonance imaging (MRI) is of vital importance in the diagnosis and follow-up of patients with multiple sclerosis (MS). Imaging sequences better demonstrating enhancing lesions can help in detecting active MS plaques. Purpose To evaluate the role of fat-saturated gadolinium-enhanced T1-weighted (T1W) images of the brain in MS and to assess the benefit of performing this additional sequence in the detection of enhancing lesions. Material and Methods In a prospective study over a six-month period, 70 consecutive patients with clinically diagnosed MS were enrolled. These comprised 14 male and 56 female patients between the ages of 21 and 44 years. All the patients underwent brain MRI on a 1.5 Tesla magnet. Gadolinium-enhanced T1 images with and without fat saturation were compared, and results were recorded and analyzed using a conspicuity score and the McNemar test. Results There were a total of 157 lesions detected in 70 patients on post-contrast T1W fat-saturated images compared with 139 lesions seen on the post-contrast T1W fast spin-echo (FSE) images. This was because 18 of the lesions (11.5%) were only seen on the fat-saturated images. In addition, 15 lesions were more conspicuous on the fat saturation sequence (9.5%). The total conspicuity score obtained, including all the lesions, was 2.24 ± 0.60 (SD). Using the two-tailed McNemar test for quantitative analysis, the P value obtained was <0.0001. Conclusion T1W fat-saturated gadolinium-enhanced images show better lesion enhancement than T1W images without fat saturation

  15. Robust control for spacecraft rendezvous system with actuator unsymmetrical saturation: a gain scheduling approach

    Science.gov (United States)

    Wang, Qian; Xue, Anke

    2018-06-01

    This paper proposes a robust control for the spacecraft rendezvous system, considering parameter uncertainties and actuator unsymmetrical saturation, based on the discrete gain scheduling approach. By a change of variables, we transform the actuator unsymmetrical saturation control problem into a symmetrical one. The main advantage of the proposed method is improving the dynamic performance of the closed-loop system with a region of attraction as large as possible. Using the Lyapunov approach and the scheduling technique, the existence conditions for the admissible controller are formulated in the form of linear matrix inequalities. A numerical simulation illustrates the effectiveness of the proposed method.
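
    The change of variables mentioned above is straightforward: an actuator limited to [u_min, u_max] is re-centred so that the new input saturates symmetrically. The sketch below shows only that re-centring, with arbitrary limits; the gain-scheduled controller itself is not reproduced here.

```python
def symmetrize_saturation(u_min: float, u_max: float):
    """Return the constant offset and the symmetric saturation level for v = u - offset."""
    offset = 0.5 * (u_max + u_min)    # absorbed into the plant/controller model
    level = 0.5 * (u_max - u_min)     # v saturates symmetrically at +/- level
    return offset, level

offset, level = symmetrize_saturation(u_min=-1.0, u_max=3.0)
print(offset, level)   # 1.0, 2.0 -> v is limited to [-2.0, +2.0]
```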

  16. A general dead-time correction method based on live-time stamping. Application to the measurement of short-lived radionuclides.

    Science.gov (United States)

    Chauvenet, B; Bobin, C; Bouchard, J

    2017-12-01

    Dead-time correction formulae are established in the general case of superimposed non-homogeneous Poisson processes. Based on the same principles as conventional live-timed counting, this method exploits the additional information made available using digital signal processing systems, and especially the possibility to store the time stamps of live-time intervals. No approximation needs to be made to obtain those formulae. Estimates of the variances of corrected rates are also presented. This method is applied to the activity measurement of short-lived radionuclides. Copyright © 2017 Elsevier Ltd. All rights reserved.
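
    The conventional principle that the paper generalises is live-timed counting: the corrected rate is the number of recorded events divided by the accumulated live time (the sum of the stored live-time intervals) rather than by the elapsed real time. The sketch below shows that baseline computation with placeholder numbers.

```python
def live_time_corrected_rate(n_counts: int, live_intervals_s) -> float:
    """Counts divided by accumulated live time (not by real time)."""
    return n_counts / sum(live_intervals_s)

real_time_s = 10.0                     # elapsed acquisition time
live_intervals_s = [0.9] * 10          # 9 s of live time, i.e. 10% dead time
print(live_time_corrected_rate(900, live_intervals_s))   # 100.0 counts/s, not 90
```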

  17. Saturated vapor pressure of lutetium tris-acetylacetonate

    Energy Technology Data Exchange (ETDEWEB)

    Trembovetskij, G.V.; Berdonosov, S.S.; Murav'eva, I.A.; Martynenko, L.I. (Moskovskij Gosudarstvennyj Univ. (USSR))

    1983-12-01

    By the static method, using the radioactive isotope ¹⁷⁷Lu, the saturated vapor pressure of anhydrous lutetium acetylacetonate at 130 to 160 deg is determined. The calculations are carried out assuming the vapor to be monomolecular. The equation for lg P versus 1/T takes the form: lg P(mmHg) = (8.7 ± 1.6) − (4110 ± 690)/T. The thermodynamic characteristics of LuA₃ sublimation are calculated to be ΔH_subl = 79 ± 13 kJ/mol and ΔS_subl = 111 ± 20 J/(K·mol).

  18. Robust method for TALEN-edited correction of pF508del in patient-specific induced pluripotent stem cells.

    Science.gov (United States)

    Camarasa, María Vicenta; Gálvez, Víctor Miguel

    2016-02-09

    Cystic fibrosis is one of the most frequent inherited rare diseases, caused by mutations in the cystic fibrosis transmembrane conductance regulator gene. Apart from symptomatic treatments, therapeutic protocols for curing the disease have not yet been established. The regeneration of genetically corrected, disease-free epithelia in cystic fibrosis patients is envisioned by designing a stem cell/genetic therapy in which patient-derived pluripotent stem cells are genetically corrected, from which target tissues are then derived. In this framework, we present an efficient method for seamless correction of the pF508del mutation in patient-specific induced pluripotent stem cells by gene-edited homologous recombination. Gene editing was performed with transcription activator-like effector nucleases and a homologous recombination donor vector which contains a PiggyBac transposon-based double selectable marker cassette. This new method has been designed to partially avoid xenobiotics in the culture system, improve cell culture efficiency and genome stability by using a robust culture system, and optimize timings. Overall, once the pluripotent cells have been amplified for the first nucleofection, the procedure can be completed in 69 days and can be easily adapted to edit and change any gene of interest.

  19. Pressure of saturated vapor of yttrium and zirconium acetylacetonates

    Energy Technology Data Exchange (ETDEWEB)

    Trembovetskij, G.V.; Berdonosov, S.S.; Murav'eva, I.A.; Martynenko, L.I. (Moskovskij Gosudarstvennyj Univ. (USSR))

    1984-08-01

    The static method and the flow method, using ⁹¹Y and ⁹⁵Zr radioactive indicators, have been applied to determine the saturated vapour pressure of yttrium and zirconium acetylacetonates. The thermodynamic functions ΔH_subl = (98 ± 16) kJ/mol and ΔS_subl = (155 ± 30) J/(mol·K) are calculated for sublimation of yttrium acetylacetonate. For sublimation of zirconium acetylacetonate, ΔH_subl = (116 ± 38) kJ/mol and ΔS_subl = (198 ± 65) J/(mol·K).

  20. Retinal oxygen saturation in relation to retinal thickness in diabetic macular edema

    DEFF Research Database (Denmark)

    Blindbæk, Søren Leer; Peto, Tunde; Grauslund, Jakob

    to retinal thickness in patients with diabetic macular edema (DME). Methods: We included 18 patients with DME that all had central retinal thickness (CRT) >300 µm and were free of active proliferative diabetic retinopathy. Optical coherence tomography (Topcon 3D OCT-2000 spectral domain OCT) was used...... for paracentral edema, the oxygen saturation in the upper and lower temporal arcade branches were compared to the corresponding upper and lower subfield thickness. Spearman’s rank was used to calculate correlation coefficients between CRT and retinal oximetry. Results: Median age and duration of diabetes was 59....... 92.3%, p=0.52). We found no correlation between CRT and retinal oxygen saturation, even when accounting for paracentral edema (p>0.05). Furthermore, there was no difference in retinal oxygen saturation between the macular hemisphere that was more or less affected by DME (p>0.05). Conclusion: Patients...