WorldWideScience

Sample records for nonuniformity correction method

  1. An efficient shutter-less non-uniformity correction method for infrared focal plane arrays

    Science.gov (United States)

    Huang, Xiyan; Sui, Xiubao; Zhao, Yao

    2017-02-01

    The non-uniform response of infrared focal plane array (IRFPA) detectors corrupts images with fixed-pattern noise. At present, infrared imaging systems commonly use a shutter to block the target radiation while the non-uniformity correction parameters are updated. The shutter, however, "freezes" the image, and it inevitably brings problems of system stability and reliability, power consumption, and concealment of infrared detection. In this paper, we present an efficient shutter-less non-uniformity correction (NUC) method for infrared focal plane arrays. Using data acquired in a thermostat, the imaging system calculates in real time the infrared radiation incident on the detector from the camera shell; the detector output, with the shell radiation removed, is then corrected by the gain coefficients. The method has been tested in a real infrared imaging system, achieving a high correction level, reducing fixed-pattern noise, and adapting to a wide temperature range.
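    A minimal sketch of the correction step described above, in Python with NumPy: the shell contribution predicted from thermostat calibration data is subtracted from the raw output, and gain coefficients equalize pixel responsivity. All names and the interpolation of the shell term are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def shell_offset_from_temp(t_shell, cal_temps, cal_offsets):
    """Interpolate the per-pixel shell term from calibration data recorded
    in a thermostat at a few shell temperatures (hypothetical data format;
    cal_temps must be sorted, cal_offsets is (n_temps, H, W))."""
    w = np.interp(t_shell, cal_temps, np.arange(len(cal_temps)))
    lo, hi = int(np.floor(w)), int(np.ceil(w))
    return cal_offsets[lo] * (1.0 - (w - lo)) + cal_offsets[hi] * (w - lo)

def shutterless_nuc(raw, gain, shell_offset):
    """Shutter-less NUC step: remove the predicted shell radiation, then
    apply per-pixel gain correction (a sketch of the abstract's scheme)."""
    return gain * (raw.astype(np.float64) - shell_offset)
```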

  2. Joint de-blurring and nonuniformity correction method for infrared microscopy imaging

    Science.gov (United States)

    Jara, Anselmo; Torres, Sergio; Machuca, Guillermo; Ramírez, Wagner; Gutiérrez, Pablo A.; Viafora, Laura A.; Godoy, Sebastián E.; Vera, Esteban

    2018-05-01

    In this work, we present a new technique to simultaneously reduce two major degradation artifacts found in mid-wavelength infrared microscopy imagery, namely the inherent focal-plane array nonuniformity noise and the scene defocus caused by the point spread function of the infrared microscope. We correct both nuisances using a novel, recursive method that combines the constant-range nonuniformity correction algorithm with a frame-by-frame deconvolution approach. The ability of the method to jointly compensate for both nonuniformity noise and blur is demonstrated on two different real mid-wavelength infrared microscopic video sequences, captured from two microscopic living organisms using a Janos-Sofradir mid-wavelength infrared microscopy setup. The performance of the proposed method is assessed on real and simulated infrared data by computing the root-mean-square error and the roughness-Laplacian pattern index, which was developed specifically for the present work.
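    The two ingredients the abstract combines can each be sketched independently: a constant-range NUC over the sequence, followed by frame-by-frame Wiener deconvolution with the microscope PSF. This is a hedged illustration of the two building blocks, not the paper's recursive joint estimator; the noise-to-signal ratio `nsr` and the PSF are assumed inputs.

```python
import numpy as np

def constant_range_nuc(frames):
    """Constant-range NUC: assume every pixel sees the same irradiance
    range over the sequence, so its min/max define gain and offset."""
    pmin = frames.min(axis=0)
    pmax = frames.max(axis=0)
    gain = 1.0 / np.maximum(pmax - pmin, 1e-6)   # map each pixel to [0, 1]
    return (frames - pmin) * gain

def wiener_deconvolve(img, psf, nsr=1e-2):
    """Frame-by-frame Wiener deconvolution. `psf` must have the same shape
    as `img` and be centered; `nsr` is an assumed noise-to-signal ratio
    that regularizes the inversion."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * G))
```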

  3. Nonuniform Illumination Correction Algorithm for Underwater Images Using Maximum Likelihood Estimation Method

    Directory of Open Access Journals (Sweden)

    Sonali Sachin Sankpal

    2016-01-01

    Scattering and absorption of light are the main reasons for limited visibility in water; both are caused by suspended particles and dissolved chemical compounds. The limited visibility degrades underwater images. Visibility can be increased by using an artificial light source in the underwater imaging system, but artificial light illuminates the scene non-uniformly, producing a bright spot at the center with dark surroundings. In some cases the imaging system itself creates dark regions in the image by casting shadows on the objects. Most image enhancement techniques for underwater images neglect the problem of nonuniform illumination, and very few methods report results on color images. This paper proposes a nonuniform illumination correction method for underwater images. The method assumes that natural underwater images are Rayleigh distributed and uses maximum likelihood estimation of the scale parameter to map the image distribution to a Rayleigh distribution. The method is compared with traditional nonuniform illumination correction methods using no-reference image quality metrics such as average luminance, average information entropy, normalized neighborhood function, average contrast, and comprehensive assessment function.
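    The core estimator is simple enough to state in code. For a Rayleigh sample, the maximum-likelihood estimate of the scale is sigma-hat = sqrt(sum(x_i^2) / 2N). One plausible way to "map the distribution of the image to a Rayleigh distribution", sketched below, is to push each channel's empirical CDF through the Rayleigh inverse CDF at that scale; the mapping step is an assumption, only the ML formula is standard.

```python
import numpy as np

def rayleigh_sigma_ml(channel):
    """ML estimate of the Rayleigh scale: sigma^2 = sum(x_i^2) / (2N)."""
    x = channel.astype(np.float64).ravel()
    return np.sqrt(np.sum(x ** 2) / (2.0 * x.size))

def map_to_rayleigh(channel, sigma):
    """Map the channel's empirical CDF onto a Rayleigh CDF with the
    estimated scale (one plausible reading of the distribution mapping)."""
    x = channel.astype(np.float64)
    ranks = x.argsort(axis=None).argsort(axis=None).reshape(x.shape)
    u = (ranks + 0.5) / x.size                       # empirical CDF in (0, 1)
    return sigma * np.sqrt(-2.0 * np.log(1.0 - u))   # Rayleigh inverse CDF
```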

  4. Calibration of EBT2 film by the PDD method with scanner non-uniformity correction.

    Science.gov (United States)

    Chang, Liyun; Chui, Chen-Shou; Ding, Hueisch-Jy; Hwang, Ing-Ming; Ho, Sheng-Yow

    2012-09-21

    The EBT2 film together with a flatbed scanner is a convenient dosimetry QA tool for verification of clinical radiotherapy treatments. However, it suffers from a relatively high degree of uncertainty and a tedious film calibration process for every new lot of films, which includes cutting the films into several small pieces, exposing them to different doses, restoring them and selecting the proper region of interest (ROI) for each piece for curve fitting. In this work, we present a percentage depth dose (PDD) method that can accurately calibrate the EBT2 film together with the scanner non-uniformity correction and provide an easy way to perform film dosimetry. All films were scanned before and after irradiation in one of two homemade 2 mm thick acrylic frames (one portrait and the other landscape), located at a fixed position on the scan bed of an Epson 10000XL scanner. After the pre-irradiation scan, the film was placed parallel to the beam central axis and sandwiched between six polystyrene plates (each 5 cm thick), followed by irradiation with a 20 × 20 cm² 6 MV photon beam. Two different beam-on times were used on two different films to deliver doses to the film ranging from 32 to 320 cGy. After the post-irradiation scan, the net optical densities for a total of 235 points on the beam central axis of the films were auto-extracted and compared with the corresponding depth doses, calculated from measurements with a 0.6 cc Farmer chamber and the related PDD table, to perform the curve fitting. The portrait film orientation was selected for routine calibration, since the central beam axis on the film is then parallel to the scanning direction, where non-uniformity correction is not needed (Ferreira et al 2009 Phys. Med. Biol. 54 1073-85). To perform the scanner non-uniformity calibration, the cross-beam profiles of the film were analysed by referencing the measured profiles from a Profiler™. Finally, to verify our method, the films were
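    In code, the two computational steps of this calibration are the net optical density and the dose-versus-netOD fit. A hedged sketch with NumPy/SciPy follows; the fitting function is a form commonly used for radiochromic film (a·netOD + b·netOD^n), which is an assumption here rather than the model reported in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def net_optical_density(pre, post):
    """Net OD from pre- and post-irradiation scans of the same film region
    (pixel values of one color channel; red is a common choice)."""
    return np.log10(pre.astype(np.float64) / post.astype(np.float64))

def dose_model(net_od, a, b, n):
    """Assumed calibration form: dose = a*netOD + b*netOD**n."""
    return a * net_od + b * net_od ** n

# netod:      1-D array of net ODs at points along the beam central axis
# depth_dose: corresponding doses (cGy) from the chamber-measured PDD table
# params, _ = curve_fit(dose_model, netod, depth_dose, p0=(300.0, 500.0, 2.5))
```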

  5. Calibration of EBT2 film by the PDD method with scanner non-uniformity correction

    International Nuclear Information System (INIS)

    Chang Liyun; Ding, Hueisch-Jy; Chui, Chen-Shou; Hwang, Ing-Ming; Ho, Sheng-Yow

    2012-01-01

    The EBT2 film together with a flatbed scanner is a convenient dosimetry QA tool for verification of clinical radiotherapy treatments. However, it suffers from a relatively high degree of uncertainty and a tedious film calibration process for every new lot of films, which includes cutting the films into several small pieces, exposing them to different doses, restoring them and selecting the proper region of interest (ROI) for each piece for curve fitting. In this work, we present a percentage depth dose (PDD) method that can accurately calibrate the EBT2 film together with the scanner non-uniformity correction and provide an easy way to perform film dosimetry. All films were scanned before and after irradiation in one of two homemade 2 mm thick acrylic frames (one portrait and the other landscape), located at a fixed position on the scan bed of an Epson 10000XL scanner. After the pre-irradiation scan, the film was placed parallel to the beam central axis and sandwiched between six polystyrene plates (each 5 cm thick), followed by irradiation with a 20 × 20 cm² 6 MV photon beam. Two different beam-on times were used on two different films to deliver doses to the film ranging from 32 to 320 cGy. After the post-irradiation scan, the net optical densities for a total of 235 points on the beam central axis of the films were auto-extracted and compared with the corresponding depth doses, calculated from measurements with a 0.6 cc Farmer chamber and the related PDD table, to perform the curve fitting. The portrait film orientation was selected for routine calibration, since the central beam axis on the film is then parallel to the scanning direction, where non-uniformity correction is not needed (Ferreira et al 2009 Phys. Med. Biol. 54 1073–85). To perform the scanner non-uniformity calibration, the cross-beam profiles of the film were analysed by referencing the measured profiles from a Profiler™. Finally, to verify our method, the films were

  6. A novel scene-based non-uniformity correction method for SWIR push-broom hyperspectral sensors

    Science.gov (United States)

    Hu, Bin-Lin; Hao, Shi-Jing; Sun, De-Xin; Liu, Yin-Nian

    2017-09-01

    A novel scene-based non-uniformity correction (NUC) method for short-wavelength infrared (SWIR) push-broom hyperspectral sensors is proposed and evaluated. The method relies on the assumption that, once a sufficient number of scan lines are acquired, each band will contain ground objects of similar reflectance that form uniform regions. The uniform regions are extracted automatically through a sorting algorithm and used to compute the corresponding NUC coefficients. SWIR hyperspectral data from an airborne experiment are used to verify and evaluate the proposed method; the results show that stripes in the scenes are well corrected without significant information loss, and the residual non-uniformity is less than 0.5%. In addition, the proposed method is compared with two other standard methods in terms of adaptability to various scenes, non-uniformity, roughness and spectral fidelity, and it shows strong adaptability, high accuracy and efficiency.
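    A simplified reading of the sorting step, sketched below under stated assumptions: sorting each detector's samples along the scan axis pairs pixels of similar radiance across detectors, the flattest rows of the sorted array are treated as uniform regions, and per-detector gains equalize them. The selection rule and threshold are illustrative, not the paper's algorithm.

```python
import numpy as np

def scene_based_gains(band, keep=0.2):
    """Estimate per-detector relative gains for one band of a push-broom
    sensor. `band` is (scan_lines, detectors); returns gains such that
    band * gains reduces striping."""
    s = np.sort(band, axis=0)                    # sort along the scan axis
    spread = s.std(axis=1) / np.maximum(s.mean(axis=1), 1e-6)
    rows = np.argsort(spread)[: int(keep * s.shape[0])]  # flattest rows
    ref = s[rows].mean()                         # target uniform level
    gains = ref / np.maximum(s[rows].mean(axis=0), 1e-6)
    return gains
```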

  7. Correction method and software for image distortion and nonuniform response in charge-coupled device-based x-ray detectors utilizing x-ray image intensifier

    International Nuclear Information System (INIS)

    Ito, Kazuki; Kamikubo, Hironari; Yagi, Naoto; Amemiya, Yoshiyuki

    2005-01-01

    An on-site method of correcting the image distortion and nonuniform response of a charge-coupled device (CCD)-based X-ray detector was developed using the response of the imaging plate as a reference. The CCD-based X-ray detector consists of a beryllium-windowed X-ray image intensifier (Be-XRII) and a CCD as the image sensor. An image distortion of 29% was improved to less than 1% after the correction. In the correction of nonuniform response due to image distortion, subpixel approximation was performed for the redistribution of pixel values. The optimal number of subpixels was also discussed. In an experiment with polystyrene (PS) latex, it was verified that the correction of both image distortion and nonuniform response worked properly. The correction for the 'contrast reduction' problem was also demonstrated for an isotropic X-ray scattering pattern from the PS latex. (author)

  8. Non-uniformity Correction of Infrared Images by Midway Equalization

    Directory of Open Access Journals (Sweden)

    Yohann Tendero

    2012-07-01

    Non-uniformity is a time-dependent noise caused by the lack of sensor equalization. We present here the detailed algorithm and online demo of the non-uniformity correction method by midway infrared equalization. The method was designed for infrared images, but it can also be applied to images produced, for example, by scanners or by push-broom satellites. The resulting single-image method works on static images, is fully automatic with no user parameters, and requires no registration. It needs no camera motion compensation, no closed-aperture sensor equalization, and is able to correct a fully non-linear non-uniformity.
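    For column-structured non-uniformity (the scanner and push-broom case the abstract mentions), midway equalization can be written in a few lines: every column's histogram is replaced by the "midway" histogram, the average of the columns' inverse CDFs. This global-midway sketch is a simplification; the published algorithm equalizes each column against a local neighborhood and includes further refinements.

```python
import numpy as np

def midway_column_equalization(img):
    """Give every column the midway histogram (the average of all columns'
    inverse CDFs), removing column-wise gain differences without choosing
    any single column as the reference."""
    order = np.argsort(img, axis=0)              # per-column sort order
    sorted_cols = np.take_along_axis(img, order, axis=0)
    midway = sorted_cols.mean(axis=1, keepdims=True)  # averaged inverse CDF
    out = np.empty_like(img, dtype=np.float64)
    # place the midway value of each rank back at each pixel's position
    np.put_along_axis(out, order, np.broadcast_to(midway, img.shape), axis=0)
    return out
```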

  9. Field nonuniformity correction for quantitative analysis of digitized mammograms

    International Nuclear Information System (INIS)

    Pawluczyk, Olga; Yaffe, Martin J.

    2001-01-01

    Several factors, including the heel effect, variation in distance from the x-ray source to points in the image, and path obliquity, contribute to the signal nonuniformity of mammograms. To best use digitized mammograms for quantitative image analysis, these field nonuniformities must be corrected. An empirically based correction method, which uses a bowl-shaped calibration phantom, has been developed. Due to the annular spherical shape of the phantom, its attenuation is constant over the entire image. Remaining nonuniformities are due only to the heel and inverse-square effects as well as the variable path through the beam filter, compression plate and image receptor. In logarithmic space, a normalized image of the phantom can be added to mammograms to correct for these effects. An analytical correction for path obliquity in the breast can then be applied to the images. It was found that the correction reduces the errors associated with field nonuniformity from 14% to 2% for a 4 cm block of material corresponding to a combination of 50% fibroglandular and 50% fatty breast tissue. A repeatability study showed that in regions as far as 20 cm away from the chest wall, variations due to imaging conditions and phantom alignment contribute <2% of the overall corrected signal.
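    The log-space correction the abstract describes amounts to adding the phantom image's deviation from its own mean, pixel by pixel, to the log of the mammogram. A minimal sketch (function and variable names assumed):

```python
import numpy as np

def flatfield_correct_log(mammogram, phantom):
    """Log-space field non-uniformity correction: add the normalized
    phantom image (its deviation from its own mean) to the log image,
    cancelling heel, inverse-square and filter-path effects."""
    log_img = np.log(mammogram.astype(np.float64))
    log_ph = np.log(phantom.astype(np.float64))
    correction = log_ph.mean() - log_ph          # normalized phantom image
    return log_img + correction
```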

  10. Physical Limitations To Nonuniformity Correction In IR Focal Plane Arrays

    Science.gov (United States)

    Scribner, D. A.; Kruer, M. R.; Gridley, J. C.; Sarkady, K.

    1988-05-01

    Simple nonuniformity correction algorithms currently in use can be severely limited by the nonlinear response characteristics of the individual pixels in an IR focal plane array. Although more complicated multi-point algorithms improve the correction process, they too can be limited by nonlinearities. Furthermore, analysis of single-pixel noise power spectra usually shows some level of 1/f noise. This causes pixel outputs to drift independently of each other, so the spatial noise (often called fixed-pattern noise) of the array increases as a function of time since the last calibration. Measurements are presented for two arrays (a HgCdTe hybrid and a Pt:Si CCD) describing pixel nonlinearities, 1/f noise, and residual spatial noise (after nonuniformity correction). Of particular emphasis is spatial noise as a function of the time elapsed since the last calibration and the calibration process selected. The resulting spatial noise is examined in terms of its effect on the NEΔT performance of each array tested, and comparisons are made. Finally, a discussion of implications for array developers is given.

  11. Test stand for non-uniformity correction of microbolometer focal plane arrays used in thermal cameras

    Science.gov (United States)

    Krupiński, Michał; Bareła, Jaroslaw; Firmanty, Krzysztof; Kastek, Mariusz

    2013-10-01

    Uneven response of individual detectors (pixels) to the same incident power of infrared radiation is an inherent feature of microbolometer focal plane arrays. The result is image degradation known as fixed-pattern noise (FPN), which distorts the thermal representation of the observed scene and impairs the parameters of a thermal camera. To compensate for this non-uniformity, NUC methods are applied in the digital data processing modules of thermal cameras. The coefficients required to perform the non-uniformity correction procedure (NUC coefficients) are determined by calibrating the camera against uniform radiation sources (blackbodies): the correction coefficients are calculated from the recorded detector responses to several values of radiant flux emitted by the reference IR sources. The measurement of correction coefficients requires a specialized setup, in which uniform, extended radiation sources with high temperature stability are a key element. The measurement stand for NUC developed at the Institute of Optoelectronics, MUT, comprises two integrated extended blackbodies with the following specifications: area 200 × 200 mm, stabilized absolute temperature range +15 °C to +100 °C, and uniformity of temperature distribution across the entire surface of ±0.014 °C. The test stand, the method used for the measurement of NUC coefficients, and the results obtained with a prototype thermal camera are presented in the paper.
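    The coefficients such a stand measures feed the standard two-point correction. A minimal sketch, assuming temporally averaged frames of the two blackbodies are available:

```python
import numpy as np

def two_point_nuc_coefficients(frames_cold, frames_hot):
    """Classic two-point NUC from frames of two extended blackbodies.
    Gains equalize pixel responsivity between the two radiance levels;
    offsets remove the residual fixed-pattern component."""
    cold = frames_cold.mean(axis=0)              # temporal mean per pixel
    hot = frames_hot.mean(axis=0)
    gain = (hot.mean() - cold.mean()) / np.maximum(hot - cold, 1e-6)
    offset = cold.mean() - gain * cold
    return gain, offset                          # corrected = gain*raw + offset
```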

  12. An improved non-uniformity correction algorithm and its GPU parallel implementation

    Science.gov (United States)

    Cheng, Kuanhong; Zhou, Huixin; Qin, Hanlin; Zhao, Dong; Qian, Kun; Rong, Shenghui

    2018-05-01

    The performance of the SLP-THP based non-uniformity correction algorithm is seriously affected by the result of the SLP filter, which often leads to image blurring and ghosting artifacts. To address this problem, an improved SLP-THP based non-uniformity correction method with a curvature constraint is proposed, built on a new way to estimate the spatial low-frequency component. First, the details and contours of the input image are obtained by minimizing the local Gaussian curvature and the mean curvature of the image surface, respectively. Then, a guided filter is used to combine these two parts into an estimate of the spatial low-frequency component. Finally, this SLP component is fed into the SLP-THP method to achieve non-uniformity correction. The performance of the proposed algorithm was verified on several real and simulated infrared image sequences. The experimental results indicate that the proposed algorithm reduces the non-uniformity without losing detail. A GPU-based parallel implementation that runs 150 times faster than the CPU version is also presented, showing that the proposed algorithm has great potential for real-time application.
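    A skeleton of the SLP-THP family for orientation, with a plain Gaussian low-pass standing in for the paper's curvature-constrained, guided-filter SLP estimate; this reproduces the general algorithm structure, not the proposed improvement.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

class SlpThpNuc:
    """SLP-THP skeleton: the fixed-pattern estimate is the temporal
    average of the spatial high-pass residual (frame minus a spatial
    low-pass); subtracting it removes slowly varying FPN."""

    def __init__(self, alpha=0.05, sigma=5.0):
        self.alpha, self.sigma = alpha, sigma    # update rate, SLP width
        self.fpn = None

    def correct(self, frame):
        slp = gaussian_filter(frame.astype(np.float64), self.sigma)
        residual = frame - slp                   # spatial high-pass part
        if self.fpn is None:
            self.fpn = residual
        else:                                    # recursive temporal average
            self.fpn = (1 - self.alpha) * self.fpn + self.alpha * residual
        return frame - self.fpn
```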

  13. Physical correction model for automatic correction of intensity non-uniformity in magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Stefan Leger

    2017-10-01

    Conclusion: The proposed PCM algorithm led to significantly improved image quality compared with the originally acquired images, suggesting that it is applicable to the correction of MRI data. It may thus help to reduce intensity non-uniformity, which is an important step for advanced image analysis.

  14. Subrandom methods for multidimensional nonuniform sampling.

    Science.gov (United States)

    Worley, Bradley

    2016-08-01

    Methods of nonuniform sampling that utilize pseudorandom number sequences to select points from a weighted Nyquist grid are commonplace in biomolecular NMR studies, due to the beneficial incoherence introduced by pseudorandom sampling. However, these methods require the specification of a non-arbitrary seed number in order to initialize a pseudorandom number generator. Because the performance of pseudorandom sampling schedules can substantially vary based on seed number, this can complicate the task of routine data collection. Approaches such as jittered sampling and stochastic gap sampling are effective at reducing random seed dependence of nonuniform sampling schedules, but still require the specification of a seed number. This work formalizes the use of subrandom number sequences in nonuniform sampling as a means of seed-independent sampling, and compares the performance of three subrandom methods to their pseudorandom counterparts using commonly applied schedule performance metrics. Reconstruction results using experimental datasets are also provided to validate claims made using these performance metrics.
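    As a concrete illustration, a van der Corput (subrandom) sequence needs no seed, and mapping it through a weighting function yields a fully deterministic nonuniform sampling schedule. The exponential weighting below is illustrative, not the paper's scheme, and the sketch assumes n_samples does not exceed the grid size.

```python
import numpy as np

def van_der_corput(i, base=2):
    """i-th term of the van der Corput sequence (1-D subrandom sequence)."""
    f, v = 1.0, 0.0
    while i > 0:
        f /= base
        v += f * (i % base)
        i //= base
    return v

def subrandom_schedule(grid_size, n_samples, tau=0.5):
    """Seed-free NUS schedule: push subrandom points through the inverse
    CDF of a truncated exponential so low grid indices are sampled more
    densely (illustrative weighting; assumes n_samples <= grid_size)."""
    t = tau * grid_size
    norm = 1.0 - np.exp(-grid_size / t)
    chosen, i = set(), 1
    while len(chosen) < n_samples:
        u = van_der_corput(i)
        chosen.add(int(-t * np.log(1.0 - u * norm)))
        i += 1
    return sorted(chosen)
```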

  15. Gamma camera system with improved means for correcting nonuniformity

    International Nuclear Information System (INIS)

    Lange, K.; Jeppesen, J.

    1979-01-01

    In a gamma camera system, means are provided for correcting nonuniformity, that is, lack of correspondence between the positions of scintillations and the x-y coordinates at which they are calculated and displayed. In an accumulation mode, pulse counts corresponding to scintillations in various areas of the radiation field are stored in memory locations corresponding to their locations in the radiation field. A uniform radiation source is presented to the detector, and accumulation continues until the accumulation is interrupted, at which time some locations have fewer counts in them than others. In the run mode, counts are stored in corresponding locations of a memory and are compared continuously with those stored in the accumulation mode. Means are provided for injecting, during the run mode, a number of counts proportional to the difference between the counts accumulated in a given area increment during the accumulation mode and the counts that should have been obtained from a uniform source.

  16. Temporal high-pass non-uniformity correction algorithm based on grayscale mapping and hardware implementation

    Science.gov (United States)

    Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo

    2015-08-01

    In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are: (1) detector fabrication inaccuracies; (2) non-linearity and variations in the read-out electronics; and (3) optical path effects. Non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are commonly divided into calibration-based (CBNUC) and scene-based (SBNUC) algorithms. Because non-uniformity drifts temporally, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not need, so SBNUC algorithms have become an essential part of infrared imaging systems. However, the poor robustness of SBNUC algorithms often leads to two defects: artifacts and over-correction. Moreover, their complicated calculation processes and large storage consumption make hardware implementation difficult, especially on a Field Programmable Gate Array (FPGA) platform. The THP and GM algorithm proposed in this paper can eliminate the non-uniformity without causing these defects. Its hardware implementation, based solely on an FPGA, has two advantages: (1) low resource consumption; and (2) small hardware delay of less than 20 lines. It can be transplanted to a variety of infrared detectors equipped with an FPGA image processing module, and it reduces both stripe non-uniformity and ripple non-uniformity.
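    The temporal high-pass backbone of such algorithms fits in a few lines. The sketch below omits the grayscale-mapping step, which is precisely what protects scene content from being absorbed into the filter, so this plain version exhibits the ghosting that THP and GM is designed to avoid.

```python
import numpy as np

def thp_nuc(frames, time_const=64):
    """Plain temporal high-pass NUC: each pixel's slowly varying component
    (FPN plus scene DC) is tracked by a recursive low-pass and subtracted;
    the frame mean is added back to keep the output level sensible."""
    f = frames[0].astype(np.float64)
    out = np.empty(frames.shape, dtype=np.float64)
    for n, x in enumerate(frames):
        f += (x - f) / time_const                # recursive temporal low-pass
        out[n] = x - f + f.mean()                # temporal high-pass output
    return out
```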

  17. Improvement of brain perfusion SPET using iterative reconstruction with scatter and non-uniform attenuation correction

    Energy Technology Data Exchange (ETDEWEB)

    Kauppinen, T.; Vanninen, E.; Kuikka, J.T. [Kuopio Central Hospital (Finland). Dept. of Clinical Physiology; Koskinen, M.O. [Dept. of Clinical Physiology and Nuclear Medicine, Tampere Univ. Hospital, Tampere (Finland); Alenius, S. [Signal Processing Lab., Tampere Univ. of Technology, Tampere (Finland)

    2000-09-01

    Filtered back-projection (FBP) is generally used as the reconstruction method for single-photon emission tomography although it produces noisy images with apparent streak artefacts. It is possible to improve the image quality by using an algorithm with iterative correction steps. The iterative reconstruction technique also has an additional benefit in that computation of attenuation correction can be included in the process. A commonly used iterative method, maximum-likelihood expectation maximisation (ML-EM), can be accelerated using ordered subsets (OS-EM). We have applied to the OS-EM algorithm a Bayesian one-step late correction method utilising median root prior (MRP). Methodological comparison was performed by means of measurements obtained with a brain perfusion phantom and using patient data. The aim of this work was to quantitate the accuracy of iterative reconstruction with scatter and non-uniform attenuation corrections and post-filtering in SPET brain perfusion imaging. SPET imaging was performed using a triple-head gamma camera with fan-beam collimators. Transmission and emission scans were acquired simultaneously. The brain phantom used was a high-resolution three-dimensional anthropomorphic JB003 phantom. Patient studies were performed in ten chronic pain syndrome patients. The images were reconstructed using conventional FBP and iterative OS-EM and MRP techniques including scatter and nonuniform attenuation corrections. Iterative reconstructions were individually post-filtered. The quantitative results obtained with the brain perfusion phantom were compared with the known actual contrast ratios. The calculated difference from the true values was largest with the FBP method; iteratively reconstructed images proved closer to the reality. Similar findings were obtained in the patient studies. The plain OS-EM method improved the contrast whereas in the case of the MRP technique the improvement in contrast was not so evident with post-filtering. (orig.)
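    For reference, the ML-EM update at the heart of this reconstruction is shown below; OS-EM applies the same update to ordered subsets of the projections, and scatter and non-uniform attenuation enter through the system matrix. A dense-matrix sketch with assumed inputs:

```python
import numpy as np

def mlem(A, proj, n_iter=20):
    """ML-EM reconstruction: A is the (n_bins, n_voxels) system matrix,
    which can fold in non-uniform attenuation; `proj` holds the measured
    projections. Each iteration multiplies the image by the backprojected
    measured/estimated projection ratio, normalized by the sensitivity."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])             # sensitivity image
    for _ in range(n_iter):
        ratio = proj / np.maximum(A @ x, 1e-12)  # measured / estimated
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```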

  18. Improvement of brain perfusion SPET using iterative reconstruction with scatter and non-uniform attenuation correction

    International Nuclear Information System (INIS)

    Kauppinen, T.; Vanninen, E.; Kuikka, J.T.; Alenius, S.

    2000-01-01

    Filtered back-projection (FBP) is generally used as the reconstruction method for single-photon emission tomography although it produces noisy images with apparent streak artefacts. It is possible to improve the image quality by using an algorithm with iterative correction steps. The iterative reconstruction technique also has an additional benefit in that computation of attenuation correction can be included in the process. A commonly used iterative method, maximum-likelihood expectation maximisation (ML-EM), can be accelerated using ordered subsets (OS-EM). We have applied to the OS-EM algorithm a Bayesian one-step late correction method utilising median root prior (MRP). Methodological comparison was performed by means of measurements obtained with a brain perfusion phantom and using patient data. The aim of this work was to quantitate the accuracy of iterative reconstruction with scatter and non-uniform attenuation corrections and post-filtering in SPET brain perfusion imaging. SPET imaging was performed using a triple-head gamma camera with fan-beam collimators. Transmission and emission scans were acquired simultaneously. The brain phantom used was a high-resolution three-dimensional anthropomorphic JB003 phantom. Patient studies were performed in ten chronic pain syndrome patients. The images were reconstructed using conventional FBP and iterative OS-EM and MRP techniques including scatter and nonuniform attenuation corrections. Iterative reconstructions were individually post-filtered. The quantitative results obtained with the brain perfusion phantom were compared with the known actual contrast ratios. The calculated difference from the true values was largest with the FBP method; iteratively reconstructed images proved closer to the reality. Similar findings were obtained in the patient studies. The plain OS-EM method improved the contrast whereas in the case of the MRP technique the improvement in contrast was not so evident with post-filtering. (orig.)

  19. Nonuniformity correction of infrared cameras by reading radiance temperatures with a spatially nonhomogeneous radiation source

    International Nuclear Information System (INIS)

    Gutschwager, Berndt; Hollandt, Jörg

    2017-01-01

    We present a novel method of nonuniformity correction (NUC) of infrared cameras and focal plane arrays (FPA) in a wide optical spectral range by reading radiance temperatures and applying a radiation source with an unknown and spatially nonhomogeneous radiance temperature distribution. The benefit of this novel method is that it works with the display and calculation of radiance temperatures, can be applied to radiation sources of arbitrary spatial radiance temperature distribution, and only requires sufficient temporal stability of this distribution during the measurement process. In contrast, a previously presented method calculated the NUC from readings of monitored radiance values. Both methods are based on recording several (at least three) images of a radiation source with a purposeful row- and line-shift of the subsequent images relative to the first, primary image. The mathematical procedure is explained in detail. Its numerical verification with a source of predefined nonhomogeneous radiance temperature distribution and a thermal imager of predefined nonuniform FPA responsivity is presented. (paper)

  20. Non-uniform versus uniform attenuation correction in brain perfusion SPET of healthy volunteers

    International Nuclear Information System (INIS)

    Van Laere, K.; Versijpt, J.; Dierckx, R.; Koole, M.

    2001-01-01

    Although non-uniform attenuation correction (NUAC) can supply more accurate absolute quantification, it is not entirely clear whether NUAC provides clear-cut benefits in the routine clinical practice of brain SPET imaging. The aim of this study was to compare the effect of NUAC versus uniform attenuation correction (UAC) on volume of interest (VOI)-based semi-quantification of a large age- and gender-stratified brain perfusion normal database. Eighty-nine healthy volunteers (46 females and 43 males, aged 20-81 years) underwent standardised high-resolution single-photon emission tomography (SPET) with 925 MBq 99mTc-ethyl cysteinate dimer (ECD) on a Toshiba GCA-9300A camera with 153Gd or 99mTc transmission CT scanning. Emission images were reconstructed by filtered back-projection and scatter corrected using the triple-energy window correction method. Both non-uniform Chang attenuation correction (one iteration) and uniform Sorenson correction (attenuation coefficient 0.09 cm⁻¹) were applied. Images were automatically re-oriented to a stereotactic template on which 35 predefined VOIs were defined for semi-quantification (normalisation on total VOI counts). Small but significant differences between relative VOI uptake values for NUAC versus UAC were found in the infratentorial region. VOI standard deviations were significantly smaller for UAC, 4.5% (range 2.6-7.5), than for NUAC, 5.0% (2.3-9.0). UAC thus yields relative 99mTc-ECD uptake values in healthy volunteers close to those obtained with NUAC, although values for the infratentorial region are slightly lower, and NUAC produces a slight increase in inter-subject variability. Further study is necessary in various patient populations to establish the full clinical impact of NUAC in brain perfusion SPET. (orig.)

  1. Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration

    Science.gov (United States)

    Lovejoy, McKenna Roberts

    Infrared imaging is an expansive field with many applications. Advances in infrared technology have led to greater demand from both commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity, which stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors, such as exposure time, temperature, and amplifier choice, affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration-based techniques commonly use a linear model to approximate the nonlinear response, which often leaves unacceptable levels of residual non-uniformity, and calibration often has to be repeated during use to continually correct the image. In this dissertation, alternatives to linear NUC algorithms are investigated. The goal of this dissertation is to determine and compare nonlinear non-uniformity correction algorithms; ideally the results will provide better NUC performance, resulting in less residual non-uniformity, and reduce the need for recalibration. This dissertation considers new approaches to nonlinear NUC, such as higher-order polynomials and exponentials. More specifically, a new gain equalization algorithm has been developed. The various nonlinear non-uniformity correction algorithms are compared with common linear non-uniformity correction algorithms. Performance is compared based on RMS errors, residual non-uniformity, and the impact quantization has on correction. Performance is improved by identifying and replacing bad pixels prior to correction; two bad-pixel identification and replacement techniques are investigated and compared. Performance is presented in the form of simulation results as well as before-and-after images taken with short-wave infrared cameras. The initial results show, using a third order

  2. Improvement of quantitation in SPECT: Attenuation and scatter correction using non-uniform attenuation data

    International Nuclear Information System (INIS)

    Mukai, T.; Torizuka, K.; Douglass, K.H.; Wagner, H.N.

    1985-01-01

    Quantitative assessment of tracer distribution with single photon emission computed tomography (SPECT) is difficult because of attenuation and scattering of gamma rays within the object. A method that takes the source geometry into account was developed, and the effects of attenuation and scatter on SPECT quantitation were studied using phantoms with non-uniform attenuation. The distribution of attenuation coefficients (μ) within the source was obtained by transmission CT. Attenuation correction was performed by an iterative reprojection technique. Scatter correction was done by convolving the attenuation-corrected image with an appropriate filter derived from line source studies; the filter characteristics depended on μ and the SPECT measurement at each pixel. The SPECT images obtained by this method were more reasonable than images reconstructed by the other methods. The scatter correction completely compensated for a 28% scatter component from a long line source, and a 61% component for a thick, extended source. Consideration of source geometries was necessary for effective corrections. The present method is expected to be valuable for the quantitative assessment of regional tracer activity.

  3. Quadratic Regression-based Non-uniform Response Correction for Radiochromic Film Scanners

    International Nuclear Information System (INIS)

    Jeong, Hae Sun; Kim, Chan Hyeong; Han, Young Yih; Kum, O Yeon

    2009-01-01

    In recent years, several types of radiochromic films have been extensively used for two-dimensional dose measurements, such as dosimetry in radiotherapy as well as imaging and radiation protection applications. One of the critical aspects of radiochromic film dosimetry is accurate scanner readout without dose distortion. However, most charge-coupled device (CCD) scanners used for optical density readout of the film employ a fluorescent lamp or a cold-cathode lamp as a light source, which leads to a significant amount of light scattering on the active layer of the film. Due to this light scattering, the response is non-uniform and the readout dose is distorted even when the film is uniformly irradiated. To correct the distorted doses, a method based on correction factors (CF) has been reported and used. However, the CF-based correction is applicable only when the incident doses are already known, so predicting the true incident doses is difficult when arbitrary doses are delivered to the film. In a previous study, therefore, a pixel-based algorithm with linear regression was developed to correct the dose distortion of a flatbed scanner and to estimate the initial doses. The results, however, were not very good in some cases, especially when the incident dose was below approximately 100 cGy. In the present study, the problem was addressed by replacing the linear regression with quadratic regression. The doses corrected by this method were also compared with the results of other conventional methods.
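    A vectorized sketch of per-pixel quadratic regression: each pixel gets its own dose = a·r² + b·r + c fit from calibration scans at known doses, solved via batched normal equations. Array shapes and names are assumptions, not the paper's implementation.

```python
import numpy as np

def fit_quadratic_response(readings, doses):
    """Fit dose = a*r^2 + b*r + c independently for every pixel.
    readings: (n_doses, H, W) scanner values; doses: (n_doses,)."""
    n, h, w = readings.shape
    r = readings.reshape(n, h * w).astype(np.float64)
    X = np.stack([r ** 2, r, np.ones_like(r)], axis=-1)    # (n, P, 3)
    X = X.transpose(1, 0, 2)                               # (P, n, 3)
    XtX = np.einsum('pni,pnj->pij', X, X)                  # normal equations
    Xty = np.einsum('pni,n->pi', X, doses.astype(np.float64))
    coef = np.linalg.solve(XtX, Xty)                       # (P, 3)
    return coef.reshape(h, w, 3)

def apply_quadratic_correction(reading, coef):
    """Map a raw scan to corrected dose with the per-pixel coefficients."""
    a, b, c = coef[..., 0], coef[..., 1], coef[..., 2]
    return a * reading ** 2 + b * reading + c
```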

  4. A DSP-based neural network non-uniformity correction algorithm for IRFPA

    Science.gov (United States)

    Liu, Chong-liang; Jin, Wei-qi; Cao, Yang; Liu, Xiu

    2009-07-01

    An effective neural network non-uniformity correction (NUC) algorithm based on a DSP is proposed in this paper. The non-uniform response of infrared focal plane array (IRFPA) detectors produces corrupted images with fixed-pattern noise (FPN). We introduce and analyze the artificial neural network scene-based non-uniformity correction (SBNUC) algorithm, and describe the design of a DSP-based NUC development platform for IRFPA. The DSP hardware platform has low power consumption, with a 32-bit fixed-point DSP (TMS320DM643) as the core processor. The dependability and extensibility of the software are improved by the DSP/BIOS real-time operating system and Reference Framework 5. To achieve real-time performance, the calibration parameter update is set at a lower task priority than video input and output in DSP/BIOS, so updating the calibration parameters does not affect the video streams. The work flow of the system and the strategy for real-time operation are introduced. Experiments on real infrared imaging sequences demonstrate that this algorithm requires only a few frames to obtain high-quality corrections. It is computationally efficient and suitable for all kinds of non-uniformity.

  5. A Method of Sky Ripple Residual Nonuniformity Reduction for a Cooled Infrared Imager and Hardware Implementation.

    Science.gov (United States)

    Li, Yiyang; Jin, Weiqi; Li, Shuo; Zhang, Xu; Zhu, Jin

    2017-05-08

    Cooled infrared detector arrays always suffer from undesired ripple residual nonuniformity (RNU) in sky scene observations. The ripple RNU seriously affects imaging quality, especially for small target detection, and it is difficult to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified temporal high-pass nonuniformity correction algorithm using fuzzy scene classification. The fuzzy scene classification is designed to control the correction threshold so that the algorithm can remove ripple RNU without degrading scene details. We test the algorithm on a real infrared sequence by comparing it to several well-established methods. The results show that the algorithm has clear advantages over the tested methods in terms of detail conservation and convergence speed for ripple RNU correction. Furthermore, we demonstrate our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA), which has two advantages: (1) low resource consumption; and (2) small hardware delay (less than 10 image rows). It has been successfully applied in an actual system.

  6. Computation of nonuniform transmission lines using the FDTD method

    Energy Technology Data Exchange (ETDEWEB)

    Miranda, G.C.; Paulino, J.O.S. [Universidade Federal de Minas Gerais, Belo Horizonte, MG (Brazil). School of Engineering

    1997-12-31

    The calculation of lightning overvoltages on transmission lines is described. Lightning-induced overvoltages are of great significance under certain conditions because of the main characteristics of the phenomena, and the lightning channel model is one of the most important parameters for obtaining the generated electromagnetic fields. In this study, nonuniform transmission line equations were solved using the finite difference method with a leap-frog scheme, i.e. the Finite Difference Time Domain (FDTD) method. The subroutine was interfaced with the Electromagnetic Transients Program (EMTP). Two models were used to represent the characteristic impedance of the nonuniform lines that model the transmission line towers and the main lightning channel. The advantages of the FDTD method were its much smaller code and faster processing time. 35 refs., 5 figs.
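    A minimal leap-frog FDTD sketch for a lossless nonuniform line, with per-segment L and C derived from a position-dependent characteristic impedance Z(x), the quantity such studies use to model towers and the lightning channel. Source and termination handling are deliberately simplistic, and all names are illustrative.

```python
import numpy as np

def fdtd_nonuniform_line(Z, v, length, t_max, src):
    """Leap-frog FDTD for the lossless telegrapher's equations on a
    nonuniform line. Z: per-segment characteristic impedance (nx,);
    v: propagation velocity; src(t): sending-end voltage source."""
    nx = Z.size
    dx = length / nx
    dt = 0.9 * dx / v                            # CFL-stable time step
    L = Z / v                                    # per-unit-length inductance
    C = 1.0 / (Z * v)                            # per-unit-length capacitance
    V = np.zeros(nx + 1)                         # voltages at nodes
    I = np.zeros(nx)                             # currents in segments
    for n in range(int(t_max / dt)):
        I -= dt / (L * dx) * (V[1:] - V[:-1])    # half-step current update
        V[1:-1] -= dt / (0.5 * (C[1:] + C[:-1]) * dx) * (I[1:] - I[:-1])
        V[0] = src(n * dt)                       # voltage source at the input
        V[-1] = 0.0                              # short-circuit termination
    return V
```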

  7. The non-uniformity correction factor for the cylindrical ionization chambers in dosimetry of an HDR 192Ir brachytherapy source

    International Nuclear Information System (INIS)

    Majumdar, Bishnu; Patel, Narayan Prasad; Vijayan, V.

    2006-01-01

    The aim of this study is to derive the non-uniformity correction factor of two therapy ionization chambers for dose measurements near a brachytherapy source. Two ionization chambers, of 0.6 cc and 0.1 cc volume, were used. Measurements in air were performed at distances between 0.8 cm and 20 cm from the source in a specially designed measurement jig, and the non-uniformity correction factors were derived from the measured values. The experimentally derived factors were compared with theoretically calculated non-uniformity correction factors, and close agreement was found between the two. The experimentally derived non-uniformity correction factors support the anisotropic theory. (author)

  8. Shutterless non-uniformity correction for the long-term stability of an uncooled long-wave infrared camera

    Science.gov (United States)

    Liu, Chengwei; Sui, Xiubao; Gu, Guohua; Chen, Qian

    2018-02-01

    For an uncooled long-wave infrared (LWIR) camera, the infrared (IR) irradiation received by the focal plane array (FPA) is a crucial factor affecting image quality. Ambient temperature fluctuations as well as system power consumption can change the FPA temperature and the radiation characteristics inside the IR camera, further degrading imaging performance. In this paper, we present a novel shutterless non-uniformity correction method that compensates for non-uniformity caused by ambient temperature variation. Our method combines a calibration-based method with the properties of a scene-based method to obtain correction parameters at different ambient temperatures, so that camera performance is less influenced by ambient temperature fluctuations or system power consumption. The calibration is carried out in a temperature chamber with slowly changing ambient temperature and a blackbody as a uniform radiation source; enough uniform images are captured and the gain coefficients are calculated during this period. In practical application, the offset parameters are then calculated via the least-squares method from the gain coefficients, the captured uniform images and the actual scene, yielding a corrected output through the gain coefficients and offset parameters. The performance of the proposed method is evaluated on realistic IR images and compared with two existing methods. The images used in the experiments were obtained with a 384 × 288 pixel uncooled LWIR camera. The results show that the proposed method adaptively updates the correction parameters as the target scene changes and is more stable under temperature fluctuations than the other two methods.

  9. Radiometric Non-Uniformity Characterization and Correction of Landsat 8 OLI Using Earth Imagery-Based Techniques

    Directory of Open Access Journals (Sweden)

    Frank Pesta

    2014-12-01

    Landsat 8 is the first satellite in the Landsat mission to acquire spectral imagery of the Earth using pushbroom sensor instruments. As a result, there are almost 70,000 unique detectors on the Operational Land Imager (OLI) alone to monitor. Due to minute variations in manufacturing and temporal degradation, every detector exhibits a different behavior when exposed to uniform radiance, causing a noticeable striping artifact in collected imagery. Solar collects using the OLI's on-board solar diffuser panels are the primary method of characterizing detector-level non-uniformity. This paper reports on an approach that uses a side-slither maneuver to estimate relative detector gains within each individual focal plane module (FPM) of the OLI. A method to characterize cirrus band detector-level non-uniformity using deep convective clouds (DCCs) is also presented. These approaches are discussed, and their correction results are compared with the diffuser-based method. Detector relative gain stability is assessed using the side-slither technique. Side-slither relative gains were found to correct streaking in test imagery with quality comparable to diffuser-based gains (within 0.005% for VNIR/PAN; 0.01% for SWIR) and identified a 0.5% temporal drift over a year. The DCC technique provided relative gains that visually decreased striping relative to the operational calibration in many images.

  10. Nonuniform grid implicit spatial finite difference method for acoustic wave modeling in tilted transversely isotropic media

    KAUST Repository

    Chu, Chunlei; Stoffa, Paul L.

    2012-01-01

    sampled models onto vertically nonuniform grids. We use a 2D TTI salt model to demonstrate its effectiveness and show that the nonuniform grid implicit spatial finite difference method can produce highly accurate seismic modeling results with enhanced

  11. Determination of non-uniformity correction factors for cylindrical ionization chambers close to 192Ir brachytherapy sources

    International Nuclear Information System (INIS)

    Toelli, H.; Bielajew, A. F.; Mattsson, O.; Sernbo, G.

    1995-01-01

    When ionization chambers are used in brachytherapy dosimetry, the measurements must be corrected for the non-uniformity of the incident photon fluence. The theory for determining non-uniformity correction factors, developed by Kondo and Randolph (Rad. Res. 1960), assumes that the electron fluence within the air cavity is isotropic and does not take into account material differences in the chamber wall. The theory was extended by Bielajew (PMB 1990) using an anisotropic electron angular fluence in the cavity. In contrast to the theory of Kondo and Randolph, the anisotropic theory predicts a wall-material dependence of the non-uniformity correction factors. This work presents experimentally determined non-uniformity correction factors at distances between 10 and 140 mm from an 192Ir source. The experimental work makes use of a PTW23331 chamber and Farmer-type chambers (NE2571 and NE2581) with different wall materials. The results of the experiments agree well with the anisotropic theory. Due to the geometrical shape of the NE-type chambers, it is shown that the full length of these chambers, 24.1 mm, is not an appropriate input parameter when theoretical non-uniformity correction factors are evaluated.

  12. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, considering a computationally time-consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit... A feature of the model correction factor method is that, in a simpler form not using gradient information on the original limit state function, or using this information only once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations of the reliability. Methods...

  13. A tuning method for nonuniform traveling-wave accelerating structures

    International Nuclear Information System (INIS)

    Gong Cunkui; Zheng Shuxin; Shao Jiahang; Jia Xiaoyu; Chen Huaibi

    2013-01-01

    The tuning method for uniform traveling-wave structures based on non-resonant perturbation measurement of the field distribution has been widely used for tuning both constant-impedance and constant-gradient structures. In this paper, a method for tuning nonuniform structures is proposed on the basis of the above theory. The internal reflection coefficient of each cell is obtained by analyzing the normalized voltage distribution. A numerical simulation of the tuning process according to coupled-cavity-chain theory has been performed, and the result shows that each cell has the correct phase advance after tuning. The method will be used in the tuning of a disk-loaded traveling-wave structure being developed at the Accelerator Laboratory, Tsinghua University. (authors)

  14. Dual-energy digital mammography for calcification imaging: Scatter and nonuniformity corrections

    International Nuclear Information System (INIS)

    Kappadath, S. Cheenu; Shaw, Chris C.

    2005-01-01

    Mammographic images of small calcifications, which are often the earliest signs of breast cancer, can be obscured by overlapping fibroglandular tissue. We have developed and implemented a dual-energy digital mammography (DEDM) technique for calcification imaging under full-field imaging conditions using a commercially available aSi:H/CsI:Tl flat-panel based digital mammography system. The low- and high-energy images were combined using a nonlinear mapping function to cancel the tissue structures and generate the dual-energy (DE) calcification images. The total entrance-skin exposure and mean glandular dose from the low- and high-energy images were constrained to be similar to screening-examination levels. To evaluate the DE calcification image, we designed a phantom using calcium carbonate crystals to simulate calcifications of various sizes (212-425 μm), overlaid with 5 cm of breast-tissue-equivalent material with a glandular-tissue ratio varying continuously from 0% to 100%. We report on the effects of scatter radiation and of nonuniformity in x-ray intensity and detector response on the DE calcification images. The nonuniformity was corrected by normalizing the low- and high-energy images with full-field reference images. Correcting for scatter in the low- and high-energy images significantly reduced the background signal in the DE calcification image. Under the current implementation of DEDM, with the mammography system and dose level tested, calcifications in the 300-355 μm size range were clearly visible in DE calcification images. The calcification threshold size decreased to the 250-280 μm range when the visibility criterion was lowered to barely visible; calcifications smaller than ∼250 μm were usually not visible. The visibility of calcifications with our DEDM imaging technique was limited by quantum noise, not system noise.

  15. Influence of Signal Intensity Non-Uniformity on Brain Volumetry Using an Atlas-Based Method

    International Nuclear Information System (INIS)

    Takao, Hidemasa; Kunimatsu, Akira; Mori, Harushi

    2012-01-01

    Many studies have reported pre-processing effects for brain volumetry; however, no study has investigated whether non-parametric non-uniform intensity normalization (N3) correction processing results in reduced system dependency when using an atlas-based method. To address this shortcoming, the present study assessed whether N3 correction processing provides reduced system dependency in atlas-based volumetry. Contiguous sagittal T1-weighted images of the brain were obtained from 21 healthy participants, by using five magnetic resonance protocols. After image preprocessing using the Statistical Parametric Mapping 5 software, we measured the structural volume of the segmented images with the WFU-PickAtlas software. We applied six different bias-correction levels (Regularization 10, Regularization 0.0001, Regularization 0, Regularization 10 with N3, Regularization 0.0001 with N3, and Regularization 0 with N3) to each set of images. The structural volume change ratio (%) was defined as the change ratio (%) = (100 × [measured volume - mean volume of five magnetic resonance protocols] / mean volume of five magnetic resonance protocols) for each bias-correction level. A low change ratio was synonymous with lower system dependency. The results showed that the images with the N3 correction had a lower change ratio compared with those without the N3 correction. The present study is the first atlas-based volumetry study to show that the precision of atlas-based volumetry improves when using N3-corrected images. Therefore, correction for signal intensity non-uniformity is strongly advised for multi-scanner or multi-site imaging trials.

  16. Influence of signal intensity non-uniformity on brain volumetry using an atlas-based method.

    Science.gov (United States)

    Goto, Masami; Abe, Osamu; Miyati, Tosiaki; Kabasawa, Hiroyuki; Takao, Hidemasa; Hayashi, Naoto; Kurosu, Tomomi; Iwatsubo, Takeshi; Yamashita, Fumio; Matsuda, Hiroshi; Mori, Harushi; Kunimatsu, Akira; Aoki, Shigeki; Ino, Kenji; Yano, Keiichi; Ohtomo, Kuni

    2012-01-01

    Many studies have reported pre-processing effects for brain volumetry; however, no study has investigated whether non-parametric non-uniform intensity normalization (N3) correction processing results in reduced system dependency when using an atlas-based method. To address this shortcoming, the present study assessed whether N3 correction processing provides reduced system dependency in atlas-based volumetry. Contiguous sagittal T1-weighted images of the brain were obtained from 21 healthy participants, by using five magnetic resonance protocols. After image preprocessing using the Statistical Parametric Mapping 5 software, we measured the structural volume of the segmented images with the WFU-PickAtlas software. We applied six different bias-correction levels (Regularization 10, Regularization 0.0001, Regularization 0, Regularization 10 with N3, Regularization 0.0001 with N3, and Regularization 0 with N3) to each set of images. The structural volume change ratio (%) was defined as the change ratio (%) = (100 × [measured volume - mean volume of five magnetic resonance protocols] / mean volume of five magnetic resonance protocols) for each bias-correction level. A low change ratio was synonymous with lower system dependency. The results showed that the images with the N3 correction had a lower change ratio compared with those without the N3 correction. The present study is the first atlas-based volumetry study to show that the precision of atlas-based volumetry improves when using N3-corrected images. Therefore, correction for signal intensity non-uniformity is strongly advised for multi-scanner or multi-site imaging trials.

  17. Influence of Signal Intensity Non-Uniformity on Brain Volumetry Using an Atlas-Based Method

    Energy Technology Data Exchange (ETDEWEB)

    Takao, Hidemasa; Kunimatsu, Akira; Mori, Harushi [University of Tokyo Hospital, Tokyo (Japan); and others

    2012-07-15

    Many studies have reported pre-processing effects for brain volumetry; however, no study has investigated whether non-parametric non-uniform intensity normalization (N3) correction processing results in reduced system dependency when using an atlas-based method. To address this shortcoming, the present study assessed whether N3 correction processing provides reduced system dependency in atlas-based volumetry. Contiguous sagittal T1-weighted images of the brain were obtained from 21 healthy participants, by using five magnetic resonance protocols. After image preprocessing using the Statistical Parametric Mapping 5 software, we measured the structural volume of the segmented images with the WFU-PickAtlas software. We applied six different bias-correction levels (Regularization 10, Regularization 0.0001, Regularization 0, Regularization 10 with N3, Regularization 0.0001 with N3, and Regularization 0 with N3) to each set of images. The structural volume change ratio (%) was defined as the change ratio (%) = (100 × [measured volume - mean volume of five magnetic resonance protocols] / mean volume of five magnetic resonance protocols) for each bias-correction level. A low change ratio was synonymous with lower system dependency. The results showed that the images with the N3 correction had a lower change ratio compared with those without the N3 correction. The present study is the first atlas-based volumetry study to show that the precision of atlas-based volumetry improves when using N3-corrected images. Therefore, correction for signal intensity non-uniformity is strongly advised for multi-scanner or multi-site imaging trials.

  18. CC-MUSIC: An Optimization Estimator for Mutual Coupling Correction of L-Shaped Nonuniform Array with Single Snapshot

    Directory of Open Access Journals (Sweden)

    Yuguan Hou

    2015-01-01

    Full Text Available In the single-snapshot case, the integrated SNR gain afforded by multiple snapshots cannot be obtained, which degrades mutual coupling correction performance at low SNR. In this paper, a Convex Chain MUSIC (CC-MUSIC) algorithm is proposed for the mutual coupling correction of an L-shaped nonuniform array with a single snapshot. It is an online self-calibration algorithm and requires neither prior initialization of the correction matrix nor a calibration source at a known position. An optimization is derived for the approximation between the mutual-coupling-free covariance matrix without the interpolated transformation and the covariance matrix with mutual coupling and the interpolated transformation. A global optimization problem is formed for the mutual coupling correction and the spatial spectrum estimation. Furthermore, this nonconvex global problem is transformed into a chain of convex optimizations, which is essentially an alternating optimization routine. The simulation results demonstrate the effectiveness of the proposed method, which improves the resolution and the estimation accuracy for multiple sources with a single snapshot.

  19. Measured attenuation correction methods

    International Nuclear Information System (INIS)

    Ostertag, H.; Kuebler, W.K.; Doll, J.; Lorenz, W.J.

    1989-01-01

    Accurate attenuation correction is a prerequisite for the determination of exact local radioactivity concentrations in positron emission tomography. Attenuation correction factors range from 4-5 in brain studies to 50-100 in whole body measurements. This report gives an overview of the different methods of determining the attenuation correction factors by transmission measurements using an external positron emitting source. The long-lived generator nuclide 68Ge/68Ga is commonly used for this purpose. The additional patient dose from the transmission source is usually a small fraction of the dose due to the subsequent emission measurement. Ring-shaped transmission sources as well as rotating point or line sources are employed in modern positron tomographs. By masking a rotating line or point source, random and scattered events in the transmission scans can be effectively suppressed. The problems of measured attenuation correction are discussed: Transmission/emission mismatch, random and scattered event contamination, counting statistics, transmission/emission scatter compensation, transmission scan after administration of activity to the patient. By using a double masking technique simultaneous emission and transmission scans become feasible. (orig.)
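
    As a rough numerical sketch of the transmission approach described above, the attenuation correction factor for each line of response can be taken as the ratio of blank-scan to transmission-scan counts; all array values below are hypothetical.

        import numpy as np

        def attenuation_correction_factors(blank_counts, transmission_counts):
            # ACF per line of response: ratio of counts without and with the
            # patient in the beam of the external positron source.
            blank = np.asarray(blank_counts, dtype=float)
            trans = np.asarray(transmission_counts, dtype=float)
            return blank / np.maximum(trans, 1.0)  # guard against empty bins

        blank = np.array([1.0e5, 1.0e5, 1.0e5])   # hypothetical sinogram bins
        trans = np.array([2.5e4, 2.0e3, 1.2e3])   # heavier attenuation, fewer counts
        acf = attenuation_correction_factors(blank, trans)  # ~4 (brain) to ~80 (body)
        emission_corrected = np.array([500.0, 300.0, 200.0]) * acf
        print(acf, emission_corrected)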

  20. Non-uniform sampling and wide range angular spectrum method

    International Nuclear Information System (INIS)

    Kim, Yong-Hae; Byun, Chun-Won; Oh, Himchan; Lee, JaeWon; Pi, Jae-Eun; Kim, Gi Heon; Lee, Myung-Lae; Ryu, Hojun; Chu, Hye-Yong; Hwang, Chi-Sun

    2014-01-01

    A novel method is proposed for simulating free-space field propagation from a source plane to a destination plane that is applicable to both small and large propagation distances. The angular spectrum method (ASM) is widely used for simulating near-field propagation, but it causes a numerical error when the propagation distance is large because of aliasing due to undersampling. The band-limited ASM satisfies the Nyquist condition on sampling by limiting the bandwidth of the propagation field to avoid aliasing errors, so that it can extend the applicable propagation distance of the ASM. However, the band-limited ASM also introduces an error, due to the decrease of the effective number of samples in Fourier space, when the propagation distance is large. In the proposed wide range ASM, we use nonuniform sampling in Fourier space to keep the effective number of samples constant even when the propagation distance is large. As a result, the wide range ASM can produce simulation results with high accuracy for both far- and near-field propagation. For non-paraxial wave propagation, we applied the wide range ASM to a shifted destination plane as well. (paper)
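
    For orientation, a minimal uniform-grid sketch of the band-limited ASM that the paper extends is given below; the band limit follows the usual Matsushima-type bound, and the wide range ASM's nonuniform Fourier sampling is not reproduced here.

        import numpy as np

        def asm_propagate(u0, wavelength, dx, z):
            # Uniform-grid band-limited angular spectrum propagation of the
            # N x N field u0 (sample spacing dx) over a distance z.
            n = u0.shape[0]
            f = np.fft.fftfreq(n, d=dx)
            fx, fy = np.meshgrid(f, f, indexing="ij")
            arg = 1.0 / wavelength**2 - fx**2 - fy**2
            kz = np.sqrt(np.maximum(arg, 0.0))
            h = np.exp(2j * np.pi * z * kz) * (arg > 0)   # evanescent waves dropped
            # Band limit to suppress aliasing at large z:
            f_lim = 1.0 / (wavelength * np.sqrt((2.0 * z / (n * dx)) ** 2 + 1.0))
            h *= (np.abs(fx) < f_lim) & (np.abs(fy) < f_lim)
            return np.fft.ifft2(np.fft.fft2(u0) * h)

        # Example: propagate a plane wave through a square aperture by 50 mm.
        u0 = np.zeros((512, 512), dtype=complex)
        u0[192:320, 192:320] = 1.0
        u1 = asm_propagate(u0, wavelength=633e-9, dx=10e-6, z=0.05)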

  1. A practical procedure to improve the accuracy of radiochromic film dosimetry. An integration with a correction method of uniformity correction and a red/blue correction method

    International Nuclear Information System (INIS)

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-01-01

    It has been reported that light scattering can worsen the accuracy of dose distribution measurements made with radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and by light scattering was also evaluated. In addition, the efficacy of this correction method integrated with the red/blue correction method was assessed. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2, but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method was applied to the film readout after a single film exposure, and a corrected dose distribution was subsequently created. The correction method improved the pass ratios in the dose-difference evaluation by more than 10% compared with no correction. The red/blue correction method yielded a 5% improvement over the standard procedure that employs the red channel only. The correction method with EBT2 proved able to correct non-uniformity rapidly, and it has potential for routine clinical intensity-modulated radiation therapy (IMRT) dose verification if the accuracy of EBT2 is required to be similar to that of EDR2. The red/blue correction method may improve the accuracy, but we recommend using it carefully, with an understanding of the characteristics of EBT2 under both the red-only and the red/blue correction methods. (author)

  2. [A practical procedure to improve the accuracy of radiochromic film dosimetry: an integration with a correction method of uniformity correction and a red/blue correction method].

    Science.gov (United States)

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-06-01

    It has been reported that light scattering can worsen the accuracy of dose distribution measurements made with radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and by light scattering was also evaluated. In addition, the efficacy of this correction method integrated with the red/blue correction method was assessed. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2, but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method was applied to the film readout after a single film exposure, and a corrected dose distribution was subsequently created. The correction method improved the pass ratios in the dose-difference evaluation by more than 10% compared with no correction. The red/blue correction method yielded a 5% improvement over the standard procedure that employs the red channel only. The correction method with EBT2 proved able to correct non-uniformity rapidly, and it has potential for routine clinical IMRT dose verification if the accuracy of EBT2 is required to be similar to that of EDR2. The red/blue correction method may improve the accuracy, but we recommend using it carefully, with an understanding of the characteristics of EBT2 under both the red-only and the red/blue correction methods.

  3. Alternative methods for evaluation of non-uniformity in nuclear medicine images

    International Nuclear Information System (INIS)

    Rasaneh, S.; Rajabi, H.; Hajizadeh, E.

    2005-01-01

    The non-uniformity test is among the most essential daily quality control procedures for nuclear medicine equipment. However, the calculation of non-uniformity is hindered by the high level of noise in nuclear medicine data. Non-uniformity may be considered a type of systematic error, while noise is certainly a random error. The present methods of uniformity evaluation are not able to distinguish between systematic and random error and therefore produce incorrect results when noise is significant. In the present study, two alternative methods were tested for the evaluation of non-uniformity in nuclear medicine images. Materials and Methods: Using the Monte Carlo method, uniform and non-uniform flood images of different matrix sizes and different counts were generated. The uniformity of the images was calculated using the conventional method and the proposed methods. The results were compared with the known non-uniformity data of the simulated images. Results: It was observed that the value of integral uniformity never went below the recommended values except for small matrix sizes at high counts (more than 80 million counts). The differential uniformity was quite insensitive to the degree of non-uniformity at large matrix sizes; only the 64 × 64 matrix size was found suitable for the calculation of differential uniformity. It was observed that in uniform images a small amount of non-uniformity changes the p-value of the Kolmogorov-Smirnov test and the noise amplitude of the fast Fourier transform test significantly, while the conventional methods failed to detect the non-uniformity. Conclusion: The conventional methods do not distinguish between noise, which is always present in the data, and occasional non-uniformity at low count density. In a uniform intact flood image, the difference between the maximum and minimum pixel counts (the value of integral uniformity) is much greater than the recommended values for non-uniformity. After filtration of the image, this difference decreases, but remains high
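
    The conventional figure of merit under discussion is easy to state; a minimal sketch of integral uniformity on a simulated flood image shows how Poisson noise alone inflates it at low count density (the NEMA smoothing and edge-masking steps are omitted).

        import numpy as np

        def integral_uniformity(flood):
            # Integral uniformity (%) = 100 * (max - min) / (max + min);
            # NEMA pre-filtering and edge masking are omitted in this sketch.
            mx, mn = float(flood.max()), float(flood.min())
            return 100.0 * (mx - mn) / (mx + mn)

        # Poisson noise alone drives the figure up at low count density,
        # which is why this metric cannot separate noise from true
        # non-uniformity.
        rng = np.random.default_rng(0)
        for mean_counts in (100, 10000):
            flood = rng.poisson(mean_counts, size=(64, 64))
            print(mean_counts, round(integral_uniformity(flood), 2))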

  4. Reduction of gas flow nonuniformity in gas turbine engines by means of gas-dynamic methods

    Science.gov (United States)

    Matveev, V.; Baturin, O.; Kolmakova, D.; Popov, G.

    2017-08-01

    Gas flow nonuniformity is one of the main sources of rotor blade vibrations in gas turbine engines. Usually, the circumferential flow nonuniformity occurs near the annular frames located in the flow channel of the engine. This leads to increased dynamic stresses in the blades and, as a consequence, to blade damage. The goal of the research was to find an acceptable method of reducing the level of gas flow nonuniformity as a source of dynamic stresses in the rotor blades. Two different methods were investigated during this research. This study thus gives ideas about methods of improving the flow structure in gas turbine engines. Depending on the circumstances (an engine under development or an existing one), it allows the selection of the most suitable method for reducing gas flow nonuniformity.

  5. Nonuniform grid implicit spatial finite difference method for acoustic wave modeling in tilted transversely isotropic media

    KAUST Repository

    Chu, Chunlei

    2012-01-01

    Discrete earth models are commonly represented by uniform structured grids. In order to ensure accurate numerical description of all wave components propagating through these uniform grids, the grid size must be determined by the slowest velocity of the entire model. Consequently, high velocity areas are always oversampled, which inevitably increases the computational cost. A practical solution to this problem is to use nonuniform grids. We propose a nonuniform grid implicit spatial finite difference method which utilizes nonuniform grids to obtain high efficiency and relies on implicit operators to achieve high accuracy. We present a simple way of deriving implicit finite difference operators of arbitrary stencil widths on general nonuniform grids for the first and second derivatives and, as a demonstration example, apply these operators to the pseudo-acoustic wave equation in tilted transversely isotropic (TTI) media. We propose an efficient gridding algorithm that can be used to convert uniformly sampled models onto vertically nonuniform grids. We use a 2D TTI salt model to demonstrate its effectiveness and show that the nonuniform grid implicit spatial finite difference method can produce highly accurate seismic modeling results with enhanced efficiency, compared to uniform grid explicit finite difference implementations. © 2011 Elsevier B.V.
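
    The paper derives implicit operators of arbitrary stencil width; as a much simpler illustration of differentiation on a nonuniform grid, here is the explicit three-point first-derivative stencil (not the authors' implicit scheme).

        import numpy as np

        def d1_weights_3pt(x0, x1, x2):
            # Weights w such that f'(x1) ~ w0*f(x0) + w1*f(x1) + w2*f(x2)
            # on a nonuniform grid (exact for quadratics).
            h1, h2 = x1 - x0, x2 - x1
            w0 = -h2 / (h1 * (h1 + h2))
            w1 = (h2 - h1) / (h1 * h2)
            w2 = h1 / (h2 * (h1 + h2))
            return w0, w1, w2

        x = np.array([0.0, 0.3, 1.0])          # nonuniform spacing
        w = d1_weights_3pt(*x)
        f = np.sin(x)
        approx = sum(wi * fi for wi, fi in zip(w, f))
        print(approx, np.cos(x[1]))            # approx ~ cos(0.3)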

  6. Generalized subspace correction methods

    Energy Technology Data Exchange (ETDEWEB)

    Kolm, P. [Royal Institute of Technology, Stockholm (Sweden); Arbenz, P.; Gander, W. [Eidgenoessiche Technische Hochschule, Zuerich (Switzerland)

    1996-12-31

    A fundamental problem in scientific computing is the solution of large sparse systems of linear equations. Often these systems arise from the discretization of differential equations by finite difference, finite volume or finite element methods. Iterative methods exploiting these sparse structures have proven to be very effective on conventional computers for a wide area of applications. Due to the rapid development and increasing demand for the large computing powers of parallel computers, it has become important to design iterative methods specialized for these new architectures.

  7. Correction of experimental photon pencil-beams for the effects of non-uniform and non-parallel measurement conditions

    International Nuclear Information System (INIS)

    Ceberg, Crister P.; Bjaerngard, Bengt E.

    1995-01-01

    An approximate experimental determination of photon pencil-beams can be based on the reciprocity theorem. The scatter part of the pencil-beam is then essentially the derivative, with respect to the field radius, of measured scatter-to-primary ratios in circular fields. Obtained in this way, however, the pencil-beam implicitly carries the influence of the lateral fluence and beam quality variations of the incident photons, as well as the effects of the divergence of the beam. In this work we show how these effects can be corrected for. The procedure was to calculate scatter-to-primary ratios using an analytical expression for the pencil-beam. By disregarding, one by one, the effects of the divergence and of the fluence and beam quality variations, the influence of each effect was separated and quantified. For instance, for a 6 MV beam of 20 × 20 cm² field size, at 20 cm depth and a source distance of 100 cm, the total effect was 3.9%; 2.0% was due to the non-uniform incident profile, 1.0% to the non-uniform beam quality, and 0.9% to the divergence of the beam. At a source distance of 400 cm, all these effects were much lower, adding up to a total of 0.3%. Using calculated correction factors like these, measured scatter-to-primary ratios were then stripped of the effects of non-uniform and non-parallel measurement conditions, and the scatter part of the pencil-beam was determined using the reciprocity theorem without approximations.
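
    A minimal numerical sketch of the reciprocity step mentioned above: differentiate measured scatter-to-primary ratios with respect to field radius. The SPR values are illustrative only, and the corrections for divergence and non-uniformity are assumed to have already been applied.

        import numpy as np

        # Hypothetical scatter-to-primary ratios SPR(r) measured in circular
        # fields of radius r (cm); illustrative values only.
        r = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
        spr = np.array([0.01, 0.03, 0.08, 0.14, 0.19, 0.23])

        # Per the reciprocity argument, the scatter part of the pencil-beam
        # at radius r is essentially dSPR/dr (up to geometric normalization);
        # np.gradient handles the nonuniform radii.
        scatter_kernel = np.gradient(spr, r)
        print(scatter_kernel)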

  8. Space cutter compensation method for five-axis nonuniform rational basis spline machining

    Directory of Open Access Journals (Sweden)

    Yanyu Ding

    2015-07-01

    Full Text Available In view of the good machining performance of traditional three-axis nonuniform rational basis spline interpolation and the space cutter compensation issue in multi-axis machining, this article presents a triple nonuniform rational basis spline five-axis interpolation method, which uses three nonuniform rational basis spline curves to describe the cutter center location, the cutter axis vector, and the cutter contact point trajectory, respectively. The relative position of the cutter and workpiece is calculated under the workpiece coordinate system, and the cutter machining trajectory can be described precisely and smoothly using this method. The three nonuniform rational basis spline curves are transformed into a 12-dimensional Bézier curve to carry out discretization during the discrete process. With the cutter contact point trajectory as the precision control condition, the discretization is fast. For different cutters and corners, a complete description method of the space cutter compensation vector is presented in this article. Finally, the five-axis nonuniform rational basis spline machining method is further verified on a two-turntable five-axis machine.

  9. Computer Simulation of Nonuniform MTLs via Implicit Wendroff and State-Variable Methods

    Directory of Open Access Journals (Sweden)

    L. Brancik

    2011-04-01

    Full Text Available The paper deals with techniques for the computer simulation of nonuniform multiconductor transmission lines (MTLs) based on the implicit Wendroff and the state-variable methods. The techniques fall into a class of finite-difference time-domain (FDTD) methods useful for solving various electromagnetic systems. Their basic variants are extended and modified to enable solving both voltage and current distributions along the wires of nonuniform MTLs, and their sensitivities with respect to lumped and distributed parameters. An experimental error analysis is performed based on the Thomson cable, whose analytical solutions are known, and some examples of the simulation of both uniform and nonuniform MTLs are presented. Based on a MATLAB implementation, CPU times are analyzed to compare the efficiency of the methods. Some results for nonlinear MTL simulation are presented as well.

  10. A discriminative model-constrained EM approach to 3D MRI brain tissue classification and intensity non-uniformity correction

    International Nuclear Information System (INIS)

    Wels, Michael; Hornegger, Joachim; Zheng, Yefeng; Comaniciu, Dorin; Huber, Martin

    2011-01-01

    We describe a fully automated method for tissue classification, which is the segmentation into cerebral gray matter (GM), cerebral white matter (WM), and cerebral spinal fluid (CSF), and intensity non-uniformity (INU) correction in brain magnetic resonance imaging (MRI) volumes. It combines supervised MRI modality-specific discriminative modeling and unsupervised statistical expectation maximization (EM) segmentation into an integrated Bayesian framework. While both the parametric observation models and the non-parametrically modeled INUs are estimated via EM during segmentation itself, a Markov random field (MRF) prior model regularizes segmentation and parameter estimation. Firstly, the regularization takes into account knowledge about spatial and appearance-related homogeneity of segments in terms of pairwise clique potentials of adjacent voxels. Secondly and more importantly, patient-specific knowledge about the global spatial distribution of brain tissue is incorporated into the segmentation process via unary clique potentials. They are based on a strong discriminative model provided by a probabilistic boosting tree (PBT) for classifying image voxels. It relies on the surrounding context and alignment-based features derived from a probabilistic anatomical atlas. The context considered is encoded by 3D Haar-like features of reduced INU sensitivity. Alignment is carried out fully automatically by means of an affine registration algorithm minimizing cross-correlation. Both types of features do not immediately use the observed intensities provided by the MRI modality but instead rely on specifically transformed features, which are less sensitive to MRI artifacts. Detailed quantitative evaluations on standard phantom scans and standard real-world data show the accuracy and robustness of the proposed method. They also demonstrate relative superiority in comparison to other state-of-the-art approaches to this kind of computational task: our method achieves average

  11. Assessment of a non-uniform heat flux correction model to predicting CHF in PWR rod bundles

    International Nuclear Information System (INIS)

    Hwang, Dae-Hyun; Zee, Sung-Quun

    2001-01-01

    The full text follows. The prediction of CHF (critical heat flux) has, in most cases, been based on empirical correlations. For PWR fuel assemblies the local parameter correlation requires the local thermal-hydraulic conditions usually calculated by a subchannel analysis code. The cross-sectional averaged fluid conditions of the subchannel, however, are not sufficient for determining CHF, especially in cases of non-uniform axial heat flux distributions. Many investigators have studied the effect of the upstream heat flux on the CHF. In terms of the upstream memory effect, two different approaches have been considered as the limiting cases. The 'local conditions' hypothesis assumes that there is a unique relationship between the CHF and the local thermal-hydraulic conditions, and consequently that there is no memory effect. In the 'overall power' hypothesis, on the other hand, it is assumed that the total power which can be fed into a tube with nonuniform heating will be the same as that for a uniformly heated tube of the same heated length with the same inlet conditions; thus the CHF is totally influenced by the upstream heat flux distribution. In view of experimental investigations such as DeBortoli's tests, it was revealed that the two approaches are inadequate in general. This means that the local critical heat flux may be affected to some extent by the heat flux distribution upstream of the CHF location. Some correction-factor models have been suggested to take this upstream memory effect into account. Typically, Tong devised a correction factor on the basis of the heat balance of the superheated liquid layer that spreads underneath a highly viscous bubbly layer along the heated surface. His physical model suggested that the fluid enthalpy obtained from an energy balance of the superheated liquid layer is a representative quantity for the onset of DNB (departure from nucleate boiling). A theoretically based correction factor model has been proposed by the

  12. A pipelined architecture for real time correction of non-uniformity in infrared focal plane arrays imaging system using multiprocessors

    Science.gov (United States)

    Zou, Liang; Fu, Zhuang; Zhao, YanZheng; Yang, JunYan

    2010-07-01

    This paper proposes a pipelined circuit architecture, implemented in an FPGA (a very-large-scale integrated circuit, VLSI), that efficiently handles the real-time non-uniformity correction (NUC) algorithm for infrared focal plane arrays (IRFPA). Dual Nios II soft-core processors and a DSP with a 64+ core together constitute this imaging system. Each processor undertakes its own task, coordinating its work with the others. The system on a programmable chip (SOPC) in the FPGA works steadily at a global clock frequency of 96 MHz. An adequate timing margin allows the FPGA to perform the NUC image pre-processing algorithm with ease, which guarantees the subsequent image post-processing in the DSP. At the same time, this paper presents a hardware (HW) and software (SW) co-design in the FPGA. This architecture thus yields a multiprocessor image processing system and a smart solution that satisfies the performance requirements of the system.
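
    The NUC pre-processing stage itself is typically a per-pixel gain/offset (two-point) correction; a minimal floating-point sketch follows. The FPGA pipeline would use fixed-point arithmetic, and the blackbody temperatures here are hypothetical.

        import numpy as np

        def two_point_nuc(cold, hot, t_cold, t_hot):
            # Per-pixel gain/offset from two uniform blackbody frames at
            # temperatures t_cold < t_hot (radiance approximated linearly).
            gain = (t_hot - t_cold) / (hot - cold)
            offset = t_cold - gain * cold
            return gain, offset

        def correct(frame, gain, offset):
            return gain * frame + offset        # applied per pixel, per frame

        # Hypothetical 2x2 detector responses to 20 degC and 40 degC references:
        cold = np.array([[100.0, 110.0], [95.0, 105.0]])
        hot = np.array([[200.0, 215.0], [190.0, 220.0]])
        g, o = two_point_nuc(cold, hot, 20.0, 40.0)
        print(correct(cold, g, o), correct(hot, g, o))  # flat 20 and 40 maps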

  13. Development and Assessment of a Bundle Correction Method for CHF

    International Nuclear Information System (INIS)

    Hwang, Dae Hyun; Chang, Soon Heung

    1993-01-01

    A bundle correction method, based on the conservation laws of mass, energy, and momentum in an open subchannel, is proposed for predicting the critical heat flux (CHF) in rod bundles from round-tube CHF correlations without detailed subchannel analysis. It takes into account the effects of the enthalpy and mass velocity distributions at the subchannel level using the first derivatives of CHF with respect to the independent parameters. Three different CHF correlations for tubes (Groeneveld's CHF table, the Katto correlation, and the Biasi correlation) have been examined with uniformly heated bundle CHF data collected from various sources. A limited number of CHF data from a non-uniformly heated rod bundle are also evaluated with the aid of Tong's F-factor. The proposed method shows satisfactory CHF predictions for rod bundles with both uniform and non-uniform power distributions. (Author)

  14. Stability Analysis of Nonuniform Rectangular Beams Using Homotopy Perturbation Method

    Directory of Open Access Journals (Sweden)

    Seval Pinarbasi

    2012-01-01

    Full Text Available The design of slender beams, that is, beams with large laterally unsupported lengths, is commonly controlled by stability limit states. Beam buckling, also called “lateral torsional buckling,” is different from column buckling in that a beam not only displaces laterally but also twists about its axis during buckling. The coupling between twist and lateral displacement makes the stability analysis of beams more complex than that of columns. For this reason, most analytical studies in the literature on beam stability concentrate on simple cases: uniform beams with ideal boundary conditions and simple loadings. This paper shows that complex beam stability problems, such as the lateral torsional buckling of rectangular beams with variable cross-sections, can successfully be solved using the homotopy perturbation method (HPM).

  15. A method to measure the mean thickness and non-uniformity of non-uniform thin film by alpha-ray thickness gauge

    International Nuclear Information System (INIS)

    Miyahara, Hiroshi; Yoshida, Makoto; Watanabe, Tamaki

    1977-01-01

    The α-ray thickness gauge is used to measure the thickness of thin films non-destructively, and until now only thin films of uniform thickness have been treated as objects of α-ray thickness gauging. When the thickness is determined from the displacement between the absorption curves in the presence and absence of the thin film, the absorption curve must be displaced in parallel. When many uniform particles were dispersed as a sample, the shape of the absorption curve was calculated as the sum of many absorption curves corresponding to thin films of different thicknesses. By comparing the calculated and measured absorption curves, the number of particles, or the mean superficial density, can be determined. This represents an extension of thickness measurement from uniform to non-uniform films. Furthermore, by applying these particle models to non-uniform thin films, the possibility of measuring the mean thickness and non-uniformity was discussed. As a result, if the maximum thickness difference was more than 0.2 mg/cm², the non-uniformity was considered distinguishable with the usual equipment. In this paper, an α-ray thickness gauge using the absorption curve method was treated, but the approach can easily be applied to an α-ray thickness gauge using α-ray energy spectra before and after penetration of the thin film. (auth.)

  16. Nonuniform Overlapping Method in Designing Microstrip Patch Antennas Using Genetic Algorithm Optimization

    Directory of Open Access Journals (Sweden)

    J. M. Jeevani W. Jayasinghe

    2015-01-01

    Full Text Available The genetic algorithm (GA) has been a popular optimization technique used for the performance improvement of microstrip patch antennas (MPAs). When using a GA, the patch geometry is optimized by dividing the patch area into small rectangular cells. This has an inherent problem of adjacent cells being connected to each other by infinitesimal connections, which may not be achievable in practice due to fabrication tolerances in chemical etching. As a solution, this paper presents a novel method of dividing the patch area into cells with nonuniform overlaps. The optimized design obtained by using fixed overlap sizes shows quad-band performance covering the GSM1800, GSM1900, LTE2300, and Bluetooth bands. In contrast, the use of nonuniform overlap sizes leads to a penta-band design covering the GSM1800, GSM1900, UMTS, LTE2300, and Bluetooth bands, with a fractional bandwidth of 38%, due to the extra design flexibility.

  17. Gas-Dynamic Methods to Reduce Gas Flow Nonuniformity from the Annular Frames of Gas Turbine Engines

    Science.gov (United States)

    Kolmakova, D.; Popov, G.

    2018-01-01

    Gas flow nonuniformity is one of the main sources of rotor blade vibrations in gas turbine engines. Usually, the circumferential flow nonuniformity occurs near the annular frames located in the flow channel of the engine. This leads to increased dynamic stresses in the blades and, consequently, to blade damage. The goal of the research was to find an acceptable method of reducing the level of gas flow nonuniformity. Two different methods were investigated during this research. This study thus gives ideas about methods of improving the flow structure in gas turbine engines. Depending on the circumstances (an engine under development or an existing one), it allows the selection of the most suitable method for reducing gas flow nonuniformity.

  18. MRI non-uniformity correction through interleaved bias estimation and B-spline deformation with a template.

    Science.gov (United States)

    Fletcher, E; Carmichael, O; Decarli, C

    2012-01-01

    We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer's disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions.

  19. MRI Non-Uniformity Correction Through Interleaved Bias Estimation and B-Spline Deformation with a Template*

    Science.gov (United States)

    Fletcher, E.; Carmichael, O.; DeCarli, C.

    2013-01-01

    We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer’s disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions. PMID:23365843
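
    A single iteration of the bias-estimation half of the algorithm described above can be sketched as follows, with Gaussian smoothing standing in for the thin-plate spline interpolation and the template assumed already deformed onto the subject; function and parameter names are illustrative.

        import numpy as np
        from scipy.ndimage import gaussian_filter, uniform_filter

        def estimate_bias(subject, template, patch=9, smooth=25.0, eps=1e-6):
            # Ratio of local patch means (subject over aligned template),
            # smoothed into a slowly varying multiplicative bias field.
            # Gaussian smoothing stands in for the thin-plate spline used
            # by the authors.
            ratio = uniform_filter(subject, patch) / (uniform_filter(template, patch) + eps)
            bias = gaussian_filter(ratio, smooth)
            return subject / (bias + eps), bias   # corrected image, bias field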

  20. SPET reconstruction with a non-uniform attenuation coefficient using an analytical regularizing iterative method

    International Nuclear Information System (INIS)

    Soussaline, F.; LeCoq, C.; Raynaud, C.; Kellershohn, C.

    1982-09-01

    The aim of this study is to evaluate the potential of the RIM technique when used in brain studies. The analytical Regularizing Iterative Method (RIM) is designed to provide fast and accurate reconstruction of tomographic images when non-uniform attenuation is to be accounted for. As indicated by phantom studies, this method improves the contrast and the signal-to-noise ratio compared with those obtained with the filtered back projection (FBP) technique. Preliminary results obtained in brain studies using AMPI-123 (isopropyl-amphetamine I-123) are very encouraging in terms of quantitative regional cellular activity. However, the clinical usefulness of this mathematically accurate reconstruction procedure is going to be demonstrated in our institution by comparing quantitative data in heart or liver studies, where control values can be obtained

  1. SPET reconstruction with a non-uniform attenuation coefficient using an analytical regularizing iterative method

    International Nuclear Information System (INIS)

    Soussaline, F.; LeCoq, C.; Raynaud, C.; Kellershohn, C.

    1982-01-01

    The potential of the Regularizing Iterative Method (RIM), when used in brain studies, is evaluated. RIM is designed to provide fast and accurate reconstruction of tomographic images when non-uniform attenuation is to be accounted for. As indicated by phantom studies, this method improves the contrast and the signal-to-noise ratio compared with those obtained with the filtered back projection (FBP) technique. Preliminary results obtained in brain studies using isopropyl-amphetamine I-123 (AMPI-123) are very encouraging in terms of quantitative regional cellular activity. However, the clinical usefulness of this mathematically accurate reconstruction procedure is going to be demonstrated by comparing quantitative data in heart or liver studies, where control values can be obtained

  2. A method for real time detecting of non-uniform magnetic field

    Science.gov (United States)

    Marusenkov, Andriy

    2015-04-01

    The principle of measuring magnetic signatures to observe diverse objects is widely used in near-surface work (unexploded ordnance (UXO); engineering and environmental; archaeology) as well as in security and vehicle detection systems. As a rule, the magnitude of the signals to be measured is much lower than that of the quasi-uniform Earth's magnetic field. Usually magnetometers for these purposes contain two or more spatially separated sensors to estimate the full tensor gradient of the magnetic field or, more frequently, only partial gradient components. Both scalar and vector magnetic sensors can be used. The identity of the scale factors and the proper alignment of the sensitivity axes of vector sensors are very important for deep suppression of the ambient field and detection of weak target signals. As a rule, a periodic calibration procedure is used to keep the sensors' parameters matched as closely as possible. In the present report we propose a technique for detecting magnetic anomalies that is almost insensitive to imperfect matching of the sensors. This method is based on the idea that the difference signal between two sensors behaves quite differently when the instrument is rotated or moved in uniform versus non-uniform fields. Due to the misfit of calibration parameters, the difference signal observed during rotation in a uniform field is similar to the total signal, the sum of the signals of both sensors. Zero change of both the difference and total signals is expected if the instrument moves along a straight line in a uniform field. In contrast, the same movement in a non-uniform field produces a response in each of the sensors. If one measures dB/dx and moves along the x direction, the sensor signals are shifted in time, with a lag given by the distance between the sensors divided by the speed of movement. This means that the difference signal looks like the derivative of the total signal when moving in a non-uniform field. So, using quite simple

  3. The calculation of wall and non-uniformity correction factors for the BIPM air-kerma standard for 60Co using the Monte Carlo code PENELOPE

    International Nuclear Information System (INIS)

    Burns, D.T.

    2002-01-01

    identified. This phase-space file was used to calculate k_wall for the BIPM standard using the technique of photon regeneration. At the point of each photon interaction in the chamber wall, a new photon is generated with the same energy and direction as the incoming photon. The deposition of energy in the air cavity by regenerated photons effectively corrects for attenuation in the wall. At the same time, any outgoing scattered photon is tagged so that a correction for the energy deposition due to scattered photons may be evaluated. The result of these calculations is k_wall = 1.0017 (statistical uncertainty 0.0001), which is in good agreement with previous results. The overall uncertainties remain to be evaluated. For the calculation of k_an, a modified technique was used which makes use of the full phase-space information rather than assuming, as is usual, that the beam is well approximated by a point source. When using the same model for the BIPM standard as used previously, the result k_an = 1.0032 (statistical uncertainty 0.0005) agrees reasonably well with previous results (the small difference may be due to the use of a point source rather than the realistic angular distribution). However, there is evidence that these new values are an artefact of the method and model and that the true non-uniformity correction is much closer to unity. Before implementing any new k_an value for the BIPM standard, a more detailed study will be undertaken to explain the large difference between the new and existing values

  4. A Non-Uniformly Under-Sampled Blade Tip-Timing Signal Reconstruction Method for Blade Vibration Monitoring

    Directory of Open Access Journals (Sweden)

    Zheng Hu

    2015-01-01

    Full Text Available High-speed blades are often prone to fatigue due to severe blade vibrations. In particular, synchronous vibrations can cause irreversible damage to the blade. Blade tip-timing (BTT) methods have become a promising way to monitor blade vibrations. However, synchronous vibrations cannot be suitably monitored by uniform BTT sampling. Therefore, non-equally mounted probes have been used, which results in non-uniformity of the sampled signal. Since under-sampling is an intrinsic drawback of BTT methods, analyzing non-uniformly under-sampled BTT signals is a big challenge. In this paper, a novel reconstruction method for non-uniformly under-sampled BTT data is presented. The method is based on the periodically non-uniform sampling theorem. Firstly, a mathematical model of the non-uniform BTT sampling process is built; it can be treated as the sum of certain uniform sample streams. For each stream, an interpolating function is required to prevent aliasing in the reconstructed signal. Secondly, simultaneous equations for all interpolating functions in each sub-band are built, and the corresponding solutions are derived to remove the unwanted replicas of the original signal caused by the sampling, which would otherwise overlay the original signal. In the end, numerical simulations and experiments are carried out to validate the feasibility of the proposed method. The results demonstrate that the accuracy of the reconstructed signal depends on the sampling frequency, the blade vibration frequency, the blade vibration bandwidth, the probe static offset, and the number of samples. In practice, both types of blade vibration signals can be reconstructed from non-uniform BTT data acquired with only two probes.

  5. Nonuniform fast Fourier transform method for numerical diffraction simulation on tilted planes.

    Science.gov (United States)

    Xiao, Yu; Tang, Xiahui; Qin, Yingxiong; Peng, Hao; Wang, Wei; Zhong, Lijing

    2016-10-01

    The method based on the rotation of the angular spectrum in the frequency domain is generally used for diffraction simulation between tilted planes. Because of the rotation of the angular spectrum, the interval between the sampling points in the Fourier domain is not even. For the conventional fast Fourier transform (FFT)-based methods, a spectrum interpolation is needed to obtain approximate values on equidistant sampling points. However, owing to the numerical error caused by the spectrum interpolation, the calculation accuracy degrades very quickly as the rotation angle increases. Here, the diffraction propagation between tilted planes is transformed into a discrete Fourier transform on unevenly spaced sampling points, which can be evaluated effectively and precisely through the nonuniform fast Fourier transform (NUFFT) method. The most important advantage of this method is that the conventional spectrum interpolation is avoided and high calculation accuracy can be guaranteed for different rotation angles, even when the rotation angle is close to π/2. Also, its calculation efficiency is comparable with that of the conventional FFT-based methods. Numerical examples as well as a discussion of the calculation accuracy and the sampling method are presented.
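
    Conceptually, the step replaced by the NUFFT is the evaluation of a Fourier sum over unevenly spaced frequency samples. A direct O(MN) reference version is sketched below; quadrature weights for the nonuniform spectral samples are omitted, and a real implementation would call an NUFFT library instead.

        import numpy as np

        def nudft2(spectrum, fx, fy, x, y):
            # Evaluate u(x_m, y_m) = sum_k F_k * exp(2*pi*i*(x_m*fx_k + y_m*fy_k))
            # for nonuniform frequency samples (fx_k, fy_k). Direct O(M*N)
            # sum; an NUFFT evaluates the same sum in roughly O(N log N).
            phase = 2j * np.pi * (np.outer(x, fx) + np.outer(y, fy))
            return np.exp(phase) @ spectrum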

  6. New method for solving inductive electric fields in the non-uniformly conducting ionosphere

    Directory of Open Access Journals (Sweden)

    H. Vanhamäki

    2006-10-01

    Full Text Available We present a new calculation method for solving inductive electric fields in the ionosphere. The time series of the potential part of the ionospheric electric field, together with the Hall and Pedersen conductances, serves as the input to this method. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfvén wave reflection from a uniformly conducting ionosphere.

  7. New method for solving inductive electric fields in the non-uniformly conducting ionosphere

    Science.gov (United States)

    Vanhamäki, H.; Amm, O.; Viljanen, A.

    2006-10-01

    We present a new calculation method for solving inductive electric fields in the ionosphere. The time series of the potential part of the ionospheric electric field, together with the Hall and Pedersen conductances, serves as the input to this method. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called the Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfvén wave reflection from a uniformly conducting ionosphere.

  8. A Method for Correcting IMRT Optimizer Heterogeneity Dose Calculations

    International Nuclear Information System (INIS)

    Zacarias, Albert S.; Brown, Mellonie F.; Mills, Michael D.

    2010-01-01

    Radiation therapy treatment planning for volumes close to the patient's surface, in lung tissue and in the head and neck region, can be challenging for the planning system optimizer because of the complexity of the treatment and protected volumes, as well as striking heterogeneity corrections. Because it is often the goal of the planner to produce an isodose plan with uniform dose throughout the planning target volume (PTV), there is a need for improved planning optimization procedures for PTVs located in these anatomical regions. To illustrate such an improved procedure, we present a treatment planning case of a patient with a lung lesion located in the posterior right lung. The intensity-modulated radiation therapy (IMRT) plan generated using standard optimization procedures produced substantial dose nonuniformity across the tumor caused by the effect of lung tissue surrounding the tumor. We demonstrate a novel iterative method of dose correction performed on the initial IMRT plan to produce a more uniform dose distribution within the PTV. This optimization method corrected for the dose missing on the periphery of the PTV and reduced the maximum dose on the PTV to 106% from 120% on the representative IMRT plan.

  9. Modified Hitschfeld-Bordan Equations for Attenuation-Corrected Radar Rain Reflectivity: Application to Nonuniform Beamfilling at Off-Nadir Incidence

    Science.gov (United States)

    Meneghini, Robert; Liao, Liang

    2013-01-01

    As shown by Takahashi et al., multiple path attenuation estimates over the field of view of an airborne or spaceborne weather radar are feasible for off-nadir incidence angles. This follows from the fact that the surface reference technique, which provides path attenuation estimates, can be applied to each radar range gate that intersects the surface. This study builds on this result by showing that three of the modified Hitschfeld-Bordan estimates for the attenuation-corrected radar reflectivity factor can be generalized to the case where multiple path attenuation estimates are available, thereby providing a correction to the effects of nonuniform beamfilling. A simple simulation is presented showing some strengths and weaknesses of the approach.
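
    For reference, the standard (single-constraint) Hitschfeld-Bordan solution for a power-law k-Z relation, k = αZ^β with specific attenuation k in dB/km, can be written as below; the modified forms in the paper additionally constrain this solution with the gate-by-gate surface-reference path attenuation estimates.

        % Standard Hitschfeld-Bordan attenuation correction; Z_m is the
        % measured (attenuated) reflectivity along range r.
        \[
          Z(r) = \frac{Z_m(r)}{\bigl[1 - q(r)\bigr]^{1/\beta}}, \qquad
          q(r) = 0.2 \ln(10)\, \beta \int_0^{r} \alpha\, Z_m^{\beta}(s)\, ds .
        \]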

  10. Factorization method for difference equations of hypergeometric type on nonuniform lattices

    Energy Technology Data Exchange (ETDEWEB)

    Alvarez-Nodarse, R. [Departamento de Analisis Matematico, Universidad de Sevilla, Sevilla (Spain); Instituto Carlos I de Fisica Teorica y Computacional, Universidad de Granada, Granada (Spain); Costas-Santos, R.S. [Departamento de Analisis Matematico, Universidad de Sevilla, Sevilla (Spain)

    2001-07-13

    We study the factorization of the hypergeometric-type difference equation of Nikiforov and Uvarov on nonuniform lattices. An explicit form of the raising and lowering operators is derived and some relevant examples are given. (author)

  11. Diagnosis of myocardial viability by dual-head coincidence gamma camera fluorine-18 fluorodeoxyglucose positron emission tomography with and without non-uniform attenuation correction

    International Nuclear Information System (INIS)

    Nowak, B.; Zimny, M.; Kaiser, H.-J.; Schaefer, W.; Reinartz, P.; Buell, U.; Schwarz, E.R.; Dahl, J. vom

    2000-01-01

    This study assessed a dual-head coincidence gamma camera (hybrid PET) equipped with single-photon transmission for myocardial fluorine-18 fluorodeoxyglucose (FDG) imaging by comparing this technique with conventional positron emission tomography (PET) using a dedicated ring PET scanner. Twenty-one patients were studied with dedicated FDG ring PET and FDG hybrid PET for evaluation of myocardial glucose metabolism, as well as technetium-99m tetrofosmin single-photon emission tomography (SPET) to estimate myocardial perfusion. All patients underwent transmission-based attenuation correction using germanium-68 rod sources for ring PET and caesium-137 point sources for hybrid PET. Ring PET and hybrid PET emission scans were started 61±12 and 98±15 min, respectively, after administration of 154±31 MBq FDG. Attenuation-corrected images were reconstructed iteratively for ring PET and hybrid PET (ac-hybrid PET), and non-attenuation-corrected images for hybrid PET (non-ac-hybrid PET) only. Tracer distribution was analysed semiquantitatively using a volumetric vector sampling method dividing the left ventricular wall into 13 segments. FDG distribution in non-ac-hybrid PET and ring PET correlated with r=0.36 (P<0.0001), and in ac-hybrid PET and ring PET with r=0.79 (P<0.0001). Non-ac-hybrid PET significantly overestimated FDG uptake in the apical and supra-apical segments, and underestimated FDG uptake in the remaining segments, with the exception of one lateral segment. Ac-hybrid PET significantly overestimated FDG uptake in the apical segment, and underestimated FDG uptake in only three posteroseptal segments. A three-grade score was used to classify diagnosis of viability by FDG PET in 136 segments with reduced perfusion as assessed by SPET. Compared with ring PET, non-ac-hybrid PET showed concordant diagnoses in 80 segments (59%) and ac-hybrid PET in 101 segments (74%) (P<0.001). Agreement between ring PET and non-ac-hybrid PET was best in the basal lateral wall and in the

  12. Nonuniform sampling and non-Fourier signal processing methods in multidimensional NMR.

    Science.gov (United States)

    Mobli, Mehdi; Hoch, Jeffrey C

    2014-11-01

    Beginning with the introduction of Fourier Transform NMR by Ernst and Anderson in 1966, time domain measurement of the impulse response (the free induction decay, FID) consisted of sampling the signal at a series of discrete intervals. For compatibility with the discrete Fourier transform (DFT), the intervals are kept uniform, and the Nyquist theorem dictates the largest value of the interval sufficient to avoid aliasing. With the proposal by Jeener of parametric sampling along an indirect time dimension, extension to multidimensional experiments employed the same sampling techniques used in one dimension, similarly subject to the Nyquist condition and suitable for processing via the discrete Fourier transform. The challenges of obtaining high-resolution spectral estimates from short data records using the DFT were already well understood, however. Despite techniques such as linear prediction extrapolation, the achievable resolution in the indirect dimensions is limited by practical constraints on measuring time. The advent of non-Fourier methods of spectrum analysis capable of processing nonuniformly sampled data has led to an explosion in the development of novel sampling strategies that avoid the limits on resolution and measurement time imposed by uniform sampling. The first part of this review discusses the many approaches to data sampling in multidimensional NMR, the second part highlights commonly used methods for signal processing of such data, and the review concludes with a discussion of other approaches to speeding up data acquisition in NMR. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. A hybrid numerical method for orbit correction

    International Nuclear Information System (INIS)

    White, G.; Himel, T.; Shoaee, H.

    1997-09-01

    The authors describe a simple hybrid numerical method for beam orbit correction in particle accelerators. The method both overcomes degeneracy in the linear system being solved and respects bounds on the solution. It uses the Singular Value Decomposition (SVD) to find and remove the null-space in the system, followed by a bounded linear least squares analysis of the remaining recast problem. It was developed for correcting orbit and dispersion in the B-factory rings.
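
    A minimal numerical sketch of the two stages described above, using numpy's SVD and scipy's bounded linear least squares; the singular-value cutoff and kick limit are illustrative parameters, not values from the paper.

        import numpy as np
        from scipy.optimize import lsq_linear

        def correct_orbit(response, orbit, sv_cutoff=1e-3, kick_limit=1.0):
            # response: BPM-reading change per unit corrector kick (m x n);
            # orbit: measured BPM readings to be cancelled.
            u, s, vt = np.linalg.svd(response, full_matrices=False)
            keep = s > sv_cutoff * s[0]                    # drop (near-)null space
            r_reduced = (u[:, keep] * s[keep]) @ vt[keep]  # recast, well-conditioned
            res = lsq_linear(r_reduced, -orbit, bounds=(-kick_limit, kick_limit))
            return res.x                                   # bounded corrector kicks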

  14. A METHOD FOR EVALUATION OF NON-UNIFORM RADIANT-CONVECTIVE LOAD ON HUMAN BODY DURING MENTAL WORK

    Directory of Open Access Journals (Sweden)

    Lenka Prokšová Zuská

    2017-10-01

    Full Text Available The objective of this study was to develop documentation for the amendment of the microclimatic part of the Czech Government Regulation, particularly for the evaluation of non-uniform radiant-convective load. Changes in the regulation were made based on experimental data obtained from a group of experimental subjects in a climatic chamber. One of the objectives of the climatic chamber experiments was to evaluate whether an alternative method, which utilizes a new quantity, the stereotemperature, could be used for the assessment. A group of 24 women was exposed to a non-uniform radiant-convective load in a climatic chamber for 1 hour during their computer work. Measurements were divided according to the globe temperature into 3 stages. The physical parameters of the air were continuously measured: the air temperature, globe temperature, air velocity, radiant temperature, relative humidity, and stereotemperature, together with physiological parameters. Thermal sensations of the experimental subjects were expressed on the seven-point scale according to EN ISO 7730. The thermal sensation correlated very well with the difference between the stereotemperature and the globe temperature. The stereotemperature correlated very well with the radiant temperature. In this work, the derived equations were used to develop limit values for thermal stress evaluation in uniform and non-uniform thermal environments at workplaces. By adding the stereotemperature to the government regulation, it is possible to determine how the body of an exposed person perceives non-uniform climatic conditions in the indoor environment.

  15. Determining optimum wavelength of ultraviolet rays to pre-exposure of non-uniformity error correction in Gafchromic EBT2 films

    Science.gov (United States)

    Katsuda, Toshizo; Gotanda, Rumi; Gotanda, Tatsuhiro; Akagawa, Takuya; Tanki, Nobuyoshi; Kuwano, Tadao; Noguchi, Atsushi; Yabunaka, Kouichi

    2018-03-01

    Gafchromic films have been used to measure X-ray doses in diagnostic radiology, for example in computed tomography. The double-exposure technique is used to correct the non-uniformity error of Gafchromic EBT2 films. Because of the heel effect of diagnostic X-rays, ultraviolet A (UV-A) is intended to be used as a substitute for X-rays. When using a UV-A light-emitting diode (LED), it is necessary to determine the optimal effective UV wavelength for the active layer of Gafchromic EBT2 films. This study evaluated the relation between the increase in color density of Gafchromic EBT2 films and the UV wavelength. First, to correct non-uniformity, a Gafchromic EBT2 film was pre-irradiated using uniform UV-A radiation for 60 min from a 72-cm distance. Second, the film was irradiated using a UV-LED with a wavelength of 353-410 nm for 60 min from a 5.3-cm distance. The maximum, minimum, and mean ± standard deviation (SD) of the pixel values of the subtraction images were evaluated using a 0.5-inch circular region of interest (ROI). The highest mean ± SD (8915.25 ± 608.86) of the pixel value was obtained at a wavelength of 375 nm. The results indicated that 375 nm is the most effective and sensitive UV-A wavelength for Gafchromic EBT2 films and that UV-A can be used as a substitute for X-rays in the double-exposure technique.

  16. Implementation of real-time nonuniformity correction with multiple NUC tables using FPGA in an uncooled imaging system

    Science.gov (United States)

    Oh, Gyong Jin; Kim, Lyang-June; Sheen, Sue-Ho; Koo, Gyou-Phyo; Jin, Sang-Hun; Yeo, Bo-Yeon; Lee, Jong-Ho

    2009-05-01

    This paper presents a real-time implementation of non-uniformity correction (NUC). Two-point correction and one-point correction with a shutter were carried out in an uncooled imaging system that will be applied to a missile application. To design a small, lightweight, and high-speed imaging system for a missile system, an SoPC (system on a programmable chip) comprising an FPGA and a soft core (MicroBlaze) was used. Real-time NUC and the generation of control signals are implemented in the FPGA. Also, three different NUC tables were made to shorten the operating time and to reduce power consumption over a large range of environmental temperatures. The imaging system consists of optics and four electronics boards: a detector interface board, an analog-to-digital converter board, a detector signal generation board, and a power supply board. To evaluate the imaging system, the NETD was measured; it was less than 160 mK at three different environmental temperatures.

  17. A New Class of Scaling Correction Methods

    International Nuclear Information System (INIS)

    Mei Li-Jie; Wu Xin; Liu Fu-Yao

    2012-01-01

    When conventional integrators like Runge-Kutta-type algorithms are used, numerical errors can make an orbit deviate from a hypersurface determined by many constraints, which leads to unreliable numerical solutions. Scaling correction methods are a powerful tool to avoid this. We focus on their applications, and also develop a family of new velocity multiple scaling correction methods where scale factors act only on the related components of the integrated momenta. They can exactly preserve some first integrals of motion in discrete or continuous dynamical systems, so that the rapid growth of roundoff or truncation errors is suppressed significantly. (general)
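
    As a concrete illustration of a velocity scaling correction, the sketch below rescales the integrated velocity after each step so the total energy stays on its conserved value. This is a generic single-scale-factor example of the idea, not the paper's specific multiple-scaling schemes:

```python
import numpy as np

def velocity_scaling_step(step, q, v, e0, potential):
    """Advance one step with any integrator `step`, then rescale the velocity
    so that 0.5*|v|^2 + V(q) is restored to the conserved value e0 (unit mass)."""
    q, v = step(q, v)
    kinetic = 0.5 * np.dot(v, v)
    target = e0 - potential(q)
    if kinetic > 0.0 and target > 0.0:
        v = v * np.sqrt(target / kinetic)   # scale factor acts on the velocity only
    return q, v
```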

  18. Another method of dead time correction

    International Nuclear Information System (INIS)

    Sabol, J.

    1988-01-01

    A new method is presented for correcting counting losses caused by the non-extended dead time of pulse detection systems. The approach is based on the distribution of time intervals between pulses at the output of the system. The method was verified both experimentally and by Monte Carlo simulations. The results show that the suggested technique is more reliable and accurate than other methods based on a separate measurement of the dead time. (author) 5 refs
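
    The abstract does not spell out its interval-distribution estimator, but the loss model it addresses is the standard non-extended (non-paralyzable) dead time, for which the textbook correction is n = m / (1 - m*tau). A small sketch:

```python
def deadtime_correct(measured_rate, tau):
    """True rate n from measured rate m for a non-extended dead time tau:
    n = m / (1 - m * tau). Raises if the measured rate saturates the system."""
    loss_factor = 1.0 - measured_rate * tau
    if loss_factor <= 0.0:
        raise ValueError("measured rate is inconsistent with the given dead time")
    return measured_rate / loss_factor

# e.g. 9.5e4 counts/s observed with tau = 1 microsecond -> ~1.05e5 counts/s true
print(deadtime_correct(9.5e4, 1e-6))
```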

  19. Off-Angle Iris Correction Methods

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Thompson, Joseph T [ORNL; Karakaya, Mahmut [ORNL; Boehnen, Chris Bensing [ORNL

    2016-01-01

    In many real-world iris recognition systems, obtaining consistent frontal images is problematic due to inexperienced or uncooperative users, untrained operators, or distracting environments. As a result, many collected images are unusable by modern iris matchers. In this chapter we present four methods for correcting off-angle iris images to appear frontal, which makes them compatible with existing iris matchers. The methods include an affine correction, a ray-traced model of the human eye, measured displacements, and a genetic algorithm optimized correction. The affine correction represents a simple way to create an iris image that appears frontal, but it does not account for refractive distortions of the cornea. The other methods account for refraction. The ray-traced model simulates the optical properties of the cornea. The other two methods are data driven. The first uses optical flow to measure the displacements of the iris texture when compared to frontal images of the same subject. The second uses a genetic algorithm to learn a mapping that optimizes the Hamming distance scores between off-angle and frontal images. We hypothesize that the biological model presented in our earlier work does not adequately account for all variations in eye anatomy and that the two data-driven approaches should therefore yield better performance. Results are presented using the commercial VeriEye matcher, showing that the genetic algorithm method clearly improves over prior work and makes iris recognition possible up to 50 degrees off-angle.
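
    Of the four methods, the affine correction is simple enough to sketch: stretch the gaze-foreshortened axis by 1/cos(theta) about the image centre. The snippet below is a schematic scipy version; it is crude by design (it ignores the corneal refraction the chapter's other methods model, and assumes the compression axis is horizontal):

```python
import numpy as np
from scipy import ndimage

def affine_frontalize(off_angle_img, theta_deg):
    """Undo the cos(theta) foreshortening of an off-angle iris image.

    ndimage.affine_transform maps *output* coordinates to input coordinates,
    so a diagonal factor cos(theta) on the column axis stretches the image.
    """
    c = np.cos(np.radians(theta_deg))
    center = (np.array(off_angle_img.shape, dtype=float) - 1.0) / 2.0
    matrix = np.diag([1.0, c])            # (row, col) output -> input
    offset = center - matrix @ center     # keep the image centre fixed
    return ndimage.affine_transform(off_angle_img, matrix, offset=offset, order=1)
```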

  20. Iteration of ultrasound aberration correction methods

    Science.gov (United States)

    Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond

    2004-05-01

    Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimating the TDA filter and performing correction on transmit and receive have proven difficult, and it has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods have been developed for estimating time-delay and amplitude variations in receive signals from random scatterers. One method correlates each element signal with a reference signal; the other uses eigenvalue decomposition of the receive cross-spectrum matrix, based on a receive energy-maximizing criterion. Iterated aberration correction with a TDA filter was simulated to study its convergence properties, with aberration generated by weak and strong human body-wall models, both emulating the human abdominal wall. Results after iteration improve aberration correction substantially, and both estimation methods converge, even in the case of strong aberration.

  1. Efficient orbit integration by manifold correction methods.

    Science.gov (United States)

    Fukushima, Toshio

    2005-12-01

    Triggered by a desire to investigate, numerically, the planetary precession through a long-term numerical integration of the solar system, we developed a new formulation of numerical integration of orbital motion named manifold correction methods. The main trick is to rigorously retain the consistency of physical relations, such as the orbital energy, the orbital angular momentum, or the Laplace integral, of a binary subsystem. This maintenance is done by applying a correction to the integrated variables at each integration step. Typical corrections are certain geometric transformations, such as spatial scaling and spatial rotation, which are commonly used in the comparison of reference frames, or mathematically reasonable operations, such as modularization of angle variables into the standard domain [-pi, pi). The final form into which the manifold correction methods evolved is the orbital longitude methods, which enable an extremely precise integration of orbital motions. In unperturbed orbits, the integration errors are suppressed at the machine-epsilon level for an indefinitely long period. In perturbed cases, on the other hand, the errors initially grow in proportion to the square root of time and then increase more rapidly, the onset of which depends on the type and magnitude of the perturbations. This feature also holds for highly eccentric orbits when the same idea as in KS-regularization is applied. In particular, the introduction of time elements greatly enhances the performance of numerical integration of KS-regularized orbits, whether the scaling is applied or not.

  2. Investigation of Compton scattering correction methods in cardiac SPECT by Monte Carlo simulations

    International Nuclear Information System (INIS)

    Silva, A.M. Marques da; Furlan, A.M.; Robilotta, C.C.

    2001-01-01

    The goal of this work was to use Monte Carlo simulations to investigate the effects of two scattering correction methods, the dual energy window (DEW) and the dual photopeak window (DPW), in quantitative cardiac SPECT reconstruction. An MCAT torso-cardiac phantom with 99mTc and a non-uniform attenuation map was simulated. Two different photopeak windows were evaluated in the DEW method: 15% and 20%. Two 10%-wide subwindows centered symmetrically within the photopeak were used in the DPW method. Iterative ML-EM reconstruction with a modified projector-backprojector for attenuation correction was applied. Results indicated that the choice of the scattering and photopeak windows determines the correction accuracy. For the 15% window, a fitted scatter fraction gives better results than k = 0.5. For the 20% window, DPW is the best method, but it requires parameter estimation using Monte Carlo simulations. (author)
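
    The DEW correction itself is a one-line estimate: the scatter-window projection, scaled by k, approximates the scatter inside the photopeak window and is subtracted projection by projection. A hedged numpy sketch (k = 0.5 is the conventional value the abstract compares against a fitted scatter fraction):

```python
import numpy as np

def dew_correct(photopeak_proj, scatter_proj, k=0.5):
    """Dual-energy-window scatter correction: subtract k times the scatter-window
    projection from the photopeak projection, clipping negatives to zero."""
    corrected = photopeak_proj - k * scatter_proj
    return np.clip(corrected, 0.0, None)
```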

  3. Methods of correcting Anger camera deadtime losses

    International Nuclear Information System (INIS)

    Sorenson, J.A.

    1976-01-01

    Three different methods of correcting for Anger camera deadtime loss were investigated: analytic methods (mathematical modeling), the marker-source method, and a new method based on counting "pileup" events appearing in a pulse-height analyzer window positioned above the photopeak of interest. The studies were done with 99mTc on a Searle Radiographics camera with a measured deadtime of about 6 μsec. Analytic methods were found to be unreliable because of unpredictable changes in deadtime with changes in radiation scattering conditions. Both the marker-source method and the pileup-counting method were found to be accurate to within a few percent for true counting rates of up to about 200 kcps, with the pileup-counting method giving better results. This finding applied to sources at depths of up to 10 cm of pressed wood. The relative merits of the two methods are discussed

  4. Decay correction methods in dynamic PET studies

    International Nuclear Information System (INIS)

    Chen, K.; Reiman, E.; Lawson, M.

    1995-01-01

    In order to reconstruct positron emission tomography (PET) images in quantitative dynamic studies, the data must be corrected for radioactive decay. One of the two commonly used methods ignores physiological processes, including blood flow, that occur at the same time as radioactive decay; the other makes incorrect use of time-accumulated PET counts. In simulated dynamic PET studies using 11C-acetate and 18F-fluorodeoxyglucose (FDG), these methods are shown to result in biased estimates of the time-activity curve (TAC) and model parameters. New methods described in this article provide significantly improved parameter estimates in dynamic PET studies
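
    For frame-wise data, the proper handling integrates the decay over the frame rather than scaling accumulated counts by a single factor; under the usual assumption that the tracer kinetics are roughly constant within a frame, the per-frame multiplicative factor is lambda*dt*exp(lambda*t_start)/(1 - exp(-lambda*dt)). A sketch (this standard frame-integrated factor is consistent with, but not necessarily identical to, the article's new methods):

```python
import numpy as np

def frame_decay_factor(t_start, t_dur, half_life):
    """Factor that converts measured frame counts into decay-corrected counts,
    referenced to t = 0 (injection); times and half-life in the same units."""
    lam = np.log(2.0) / half_life
    return lam * t_dur * np.exp(lam * t_start) / (1.0 - np.exp(-lam * t_dur))

# e.g. 11C (half-life ~20.4 min): a 5-min frame starting 30 min post-injection
print(frame_decay_factor(30.0, 5.0, 20.4))
```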

  5. Simple Moving Voltage Average Incremental Conductance MPPT Technique with Direct Control Method under Nonuniform Solar Irradiance Conditions

    Directory of Open Access Journals (Sweden)

    Amjad Ali

    2015-01-01

    Full Text Available A new simple moving voltage average (SMVA) technique with a fixed-step direct-control incremental conductance method is introduced to reduce solar photovoltaic voltage (VPV) oscillation under nonuniform solar irradiation conditions. To evaluate and validate the performance of the proposed SMVA method against the conventional fixed-step direct-control incremental conductance method under extreme conditions, different scenarios were simulated. Simulation results show that in most cases SMVA gives better and more stable results than the traditional fixed-step direct-control INC method, with faster tracking, reduced sustained oscillations, fast steady-state response, and robustness. The steady-state oscillations are almost eliminated because dP/dV is extremely small around the maximum power (MP) point, which verifies that the proposed method is suitable for standalone PV systems under extreme weather conditions, not only in terms of bus voltage stability but also in overall system efficiency.
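
    The core loop is ordinary incremental conductance on a smoothed voltage: at the MPP, dI/dV = -I/V, so the sign of dI/dV + I/V says which way to move the operating point. Below is a compact, illustrative Python version; the step size, window length and initial reference voltage are made-up parameters, not the paper's tuning:

```python
from collections import deque

class SmvaIncMppt:
    """Fixed-step incremental conductance fed by a simple moving voltage average."""

    def __init__(self, step=0.5, window=8, v_ref=30.0):
        self.step, self.v_ref = step, v_ref
        self.buf = deque(maxlen=window)
        self.v_prev = None
        self.i_prev = None

    def update(self, v_sample, i_sample):
        self.buf.append(v_sample)
        v = sum(self.buf) / len(self.buf)           # smoothed PV voltage (SMVA)
        if self.v_prev is not None:
            dv, di = v - self.v_prev, i_sample - self.i_prev
            if abs(dv) < 1e-6:                      # voltage unchanged
                self.v_ref += self.step if di > 0 else (-self.step if di < 0 else 0.0)
            else:
                g = di / dv + i_sample / v          # dI/dV + I/V: zero at the MPP
                self.v_ref += self.step if g > 0 else (-self.step if g < 0 else 0.0)
        self.v_prev, self.i_prev = v, i_sample
        return self.v_ref                           # voltage reference for the converter
```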

  6. Incorporation of Collision Probability Method in STREAM to Consider Non-uniform Material Composition in Fuel Subregions

    International Nuclear Information System (INIS)

    Choi, Sooyoung; Choe, Jiwon; Lee, Deokjung

    2016-01-01

    STREAM uses a pin-based slowing-down method (PSM) which solves pointwise energy slowing-down problems with a sub-divided fuel pellet, and shows great performance in calculating effective cross-sections (XS). Various issues in conventional resonance treatment methods (i.e., approximations of the resonance scattering source, the resonance interference effect, and the intra-pellet self-shielding effect) were successfully resolved by PSM. However, PSM assumes that a fuel rod has a uniform material composition and temperature, even though it calculates spatially dependent effective XSs of the fuel subregions. When depletion calculations or thermal/hydraulic (T/H) coupling are performed with sub-divided material meshes, each subregion has its own material condition depending on its position, and it has been reported that treating the distributed temperature is important for calculating an accurate fuel temperature coefficient (FTC). In order to avoid this approximation in PSM, the collision probability method (CPM) has been incorporated as a calculation option. The resonance treatment method, PSM, used in the transport code STREAM has thus been enhanced to accurately consider non-uniform material conditions; the method incorporates CPM in computing the collision probability of an isolated fuel pin. In numerical tests with pin-cell problems, STREAM with the new method gave very accurate results, with multiplication factor and FTC differences from the references of less than 83 pcm and 1.43%, respectively. The original PSM showed larger differences than the proposed method but still maintains high accuracy

  7. Incorporation of Collision Probability Method in STREAM to Consider Non-uniform Material Composition in Fuel Subregions

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Sooyoung; Choe, Jiwon; Lee, Deokjung [UNIST, Ulsan (Korea, Republic of)

    2016-10-15

    STREAM uses a pin-based slowing-down method (PSM) which solves pointwise energy slowing-down problems with a sub-divided fuel pellet, and shows great performance in calculating effective cross-sections (XS). Various issues in conventional resonance treatment methods (i.e., approximations of the resonance scattering source, the resonance interference effect, and the intra-pellet self-shielding effect) were successfully resolved by PSM. However, PSM assumes that a fuel rod has a uniform material composition and temperature, even though it calculates spatially dependent effective XSs of the fuel subregions. When depletion calculations or thermal/hydraulic (T/H) coupling are performed with sub-divided material meshes, each subregion has its own material condition depending on its position, and it has been reported that treating the distributed temperature is important for calculating an accurate fuel temperature coefficient (FTC). In order to avoid this approximation in PSM, the collision probability method (CPM) has been incorporated as a calculation option. The resonance treatment method, PSM, used in the transport code STREAM has thus been enhanced to accurately consider non-uniform material conditions; the method incorporates CPM in computing the collision probability of an isolated fuel pin. In numerical tests with pin-cell problems, STREAM with the new method gave very accurate results, with multiplication factor and FTC differences from the references of less than 83 pcm and 1.43%, respectively. The original PSM showed larger differences than the proposed method but still maintains high accuracy.

  8. Study of the orbital correction method

    International Nuclear Information System (INIS)

    Meserve, R.A.

    1976-01-01

    Two approximations of interest in atomic, molecular, and solid state physics are explored. First, a procedure for calculating an approximate Green's function for use in perturbation theory is derived. In lowest order it is shown to be equivalent to treating the contribution of the bound states of the unperturbed Hamiltonian exactly and representing the continuum contribution by plane waves orthogonalized to the bound states (OPWs). If the OPW approximation is inadequate, the procedure allows for systematic improvement of the approximation. For comparison purposes, an exact but more limited procedure for performing second-order perturbation theory, one that involves solving an inhomogeneous differential equation, is also derived. Second, the Kohn-Sham many-electron formalism is discussed, and formulae are derived and discussed for implementing perturbation theory within the formalism so as to find corrections to the total energy of a system through second order in the perturbation. Both approximations were used in the calculation of the polarizability of helium, neon, and argon. The calculations included direct and exchange effects by the Kohn-Sham method, and full self-consistency was demanded. The results using the differential equation method yielded excellent agreement with the coupled Hartree-Fock results of others and with experiment. Moreover, the OPW approximation yielded satisfactory comparison with the results of calculation by the exact differential equation method. Finally, both approximations were used in the calculation of properties of hydrogen fluoride and methane. The appendix formulates a procedure using group theory and the internal coordinates of a molecular system to simplify the calculation of vibrational frequencies

  9. A New Dyslexia Reading Method and Visual Correction Position Method.

    Science.gov (United States)

    Manilla, George T; de Braga, Joe

    2017-01-01

    Pediatricians and educators may interact daily with several dyslexic patients or students. One dyslexic author accidentally developed a personal, effective, corrective reading method. Its effectiveness was evaluated in 3 schools. One school evaluated it with 8 special-education demonstration students: over 3 months, one student gained one-third of a grade year, 3 gained 1 year, and 4 gained 2 years. In another school, 6 sixth-, seventh-, and eighth-grade classroom teachers followed 45 treated dyslexic students; they all excelled and progressed beyond their classroom peers in 4 months. Using cyclovergence upper gaze, dyslexic reading problems disappeared at one of the Positional Reading Arc positions of 30°, 60°, 90°, 120°, or 150° for 10 dyslexics. Applying the Positional Reading Arc to 112 students of the second through eighth grades showed improvements in words read per minute, reading errors, and comprehension. Dyslexia was visually corrected by use of a new reading method and Positional Reading Arc positions.

  10. Nowcasting Surface Meteorological Parameters Using Successive Correction Method

    National Research Council Canada - National Science Library

    Henmi, Teizi

    2002-01-01

    The successive correction method was examined and evaluated statistically as a nowcasting method for surface meteorological parameters including temperature, dew point temperature, and horizontal wind vector components...
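
    The record is brief, but the successive correction method it evaluates is the classic objective-analysis scheme: start from a first-guess field and repeatedly blend in observation-minus-analysis increments with distance weights whose influence radius shrinks on each pass. A generic Cressman-weighted sketch (the radii and the nearest-grid-point interpolation are simplifications, not the report's configuration):

```python
import numpy as np

def successive_correction(grid_xy, first_guess, obs_xy, obs_val, radii=(4.0, 2.0, 1.0)):
    """grid_xy: (n_grid, 2) point coordinates; first_guess: (n_grid,) field;
    obs_xy: (n_obs, 2); obs_val: (n_obs,). Returns the corrected field."""
    field = first_guess.astype(float).copy()
    for r in radii:
        d2 = ((obs_xy[:, None, :] - grid_xy[None, :, :]) ** 2).sum(axis=-1)
        w = np.clip((r * r - d2) / (r * r + d2), 0.0, None)     # Cressman weights
        innov = obs_val - field[d2.argmin(axis=1)]              # obs minus analysis
        den = w.sum(axis=0)
        field += np.where(den > 0,
                          (w * innov[:, None]).sum(axis=0) / np.maximum(den, 1e-12),
                          0.0)
    return field
```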

  11. A method to correct coordinate distortion in EBSD maps

    International Nuclear Information System (INIS)

    Zhang, Y.B.; Elbrønd, A.; Lin, F.X.

    2014-01-01

    Drift during electron backscatter diffraction mapping leads to coordinate distortions in the resulting orientation maps, which affects, in some cases significantly, the accuracy of analysis. A method, the thin plate spline, is introduced and tested to correct such coordinate distortions in the maps after the electron backscatter diffraction measurements. The accuracy of the correction as well as theoretical and practical aspects of using the thin plate spline method are discussed in detail. By comparing with other correction methods, it is shown that the thin plate spline method is the most efficient at correcting different local distortions in electron backscatter diffraction maps. - Highlights: • A new method is suggested to correct nonlinear spatial distortion in EBSD maps. • The method corrects EBSD maps more precisely than presently available methods. • Errors less than 1–2 pixels are typically obtained. • Direct quantitative analysis of dynamic data is available after this correction
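
    In practice the correction amounts to fitting a thin-plate-spline mapping on control points (features whose true positions are known, e.g. from a reference image) and evaluating it over the whole map. A sketch with scipy's RBF interpolator; the source of the control points is up to the user, and this is not the authors' code:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_correct(distorted_pts, true_pts, map_coords):
    """Fit distorted -> true on (n, 2) control points, then map every (row, col)
    coordinate of the EBSD map through the fitted thin plate spline."""
    tps = RBFInterpolator(distorted_pts, true_pts, kernel='thin_plate_spline')
    return tps(map_coords)   # corrected (m, 2) coordinates
```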

  12. Weighted backprojection implemented with a non-uniform attenuation map for improved SPECT quantitation

    International Nuclear Information System (INIS)

    Manglos, S.H.; Jaszczak, R.J.; Floyd, C.E.

    1988-01-01

    A method is developed to improve quantitation in SPECT imaging by using an attenuation compensation method which includes the correct non-uniform attenuation spatial distribution ("map"). The method is based on the technique of weighted backprojection, previously developed for uniform attenuation. The method is tested by imaging a non-uniform phantom, reconstructing with the known attenuation map, and quantitatively comparing the resultant image with the known activity distribution. Reconstructed image profiles are dramatically improved in comparison to reconstructions without compensation or with an assumed uniform attenuation map. Contrast measurements further quantify the improvement. Line spread function distortions seen previously in non-uniform geometries are essentially eliminated by the method. Therefore, the method appears to be appropriate for these geometries, if the non-uniform map can be determined. Some additional image distortions introduced by the compensation method are noted and will require further study

  13. Compensation Methods for Non-uniform and Incomplete Data Sampling in High Resolution PET with Multiple Scintillation Crystal Layers

    International Nuclear Information System (INIS)

    Lee, Jae Sung; Kim, Soo Mee; Lee, Dong Soo; Hong, Jong Hong; Sim, Kwang Souk; Rhee, June Tak

    2008-01-01

    The aim was to establish methods of sinogram formation and correction in order to appropriately apply the filtered backprojection (FBP) reconstruction algorithm to data acquired using a PET scanner with multiple scintillation crystal layers. The format for raw PET data storage and the methods for converting list-mode data to histograms and sinograms were optimized. To solve the various problems that occurred while the raw histogram was converted into a sinogram, the optimal sampling strategy and a sampling efficiency correction method were investigated. Gap compensation methods unique to this system were also investigated. All the sinogram data were reconstructed using a 2D filtered backprojection algorithm and compared to estimate the improvements brought by the correction algorithms. The optimal radial sampling interval and number of angular samples, in terms of the sampling theorem and the sampling efficiency correction algorithm, were pitch/2 and 120, respectively. By applying the sampling efficiency correction and gap compensation, artifacts and background noise in the reconstructed image could be reduced. A conversion method from histogram to sinogram was thus established for FBP reconstruction of data acquired using multiple scintillation crystal layers; it will be useful for fast 2D reconstruction of multiple-crystal-layer PET data

  14. Attenuation correction of myocardial SPECT by scatter-photopeak window method in normal subjects

    International Nuclear Information System (INIS)

    Okuda, Koichi; Nakajima, Kenichi; Matsuo, Shinro; Kinuya, Seigo; Motomura, Nobutoku; Kubota, Masahiro; Yamaki, Noriyasu; Maeda, Hisato

    2009-01-01

    The segmentation with scatter and photopeak window data using attenuation correction (SSPAC) method can provide a patient-specific non-uniform attenuation coefficient map using only photopeak and scatter images, without X-ray computed tomography (CT). The purpose of this study is to evaluate the performance of attenuation correction (AC) by the SSPAC method on a normal myocardial perfusion database. A total of 32 sets of exercise-rest myocardial images with Tc-99m-sestamibi were acquired in both photopeak (140 keV ± 10%) and scatter (7% width on the lower side of the photopeak window) energy windows. Myocardial perfusion databases for the SSPAC method and for no attenuation correction (NC) were created from 15 female and 17 male subjects with a low likelihood of cardiac disease, using quantitative perfusion SPECT software. Segmental myocardial counts in a 17-segment model from these databases were compared using a paired t-test. The AC average myocardial perfusion count was significantly higher than that of NC in the septal and inferior regions (P<0.02). In contrast, the AC average count was significantly lower in the anterolateral and apical regions (P<0.01). The coefficient of variation of the AC count in the mid, apical and apex regions was lower than that of NC. The SSPAC method can improve average myocardial perfusion uptake in the septal and inferior regions and provide a uniform distribution of myocardial perfusion. It could be a practical method of attenuation correction without X-ray CT. (author)

  15. New decoding methods of interleaved burst error-correcting codes

    Science.gov (United States)

    Nakano, Y.; Kasahara, M.; Namekawa, T.

    1983-04-01

    A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high burst-error-correction capability with less decoding delay. Generalizing this method yields a probabilistic method of multiple (m-fold) burst error correction: after estimating the burst error positions using the syndrome correlation of subcodes which are interleaved m-fold burst-error-detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.

  16. Methods of orbit correction system optimization

    International Nuclear Information System (INIS)

    Chao, Yu-Chiu.

    1997-01-01

    Extracting optimal performance out of an orbit correction system is an important component of accelerator design and evaluation. The question of effectiveness vs. economy, however, is not always easily tractable. This is especially true in cases where betatron function magnitude and phase advance do not have smooth or periodic dependencies on the physical distance. In this report a program is presented using linear algebraic techniques to address this problem. A systematic recipe is given, supported with quantitative criteria, for arriving at an orbit correction system design with the optimal balance between performance and economy. The orbit referred to in this context can be generalized to include angle, path length, orbit effects on the optical transfer matrix, and simultaneous effects on multiple pass orbits
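
    The report's linear-algebraic machinery is not reproduced here, but the standard workhorse for this problem is a (truncated) SVD pseudo-inverse of the orbit response matrix; dropping small singular values is exactly where the performance-versus-economy trade-off shows up. A generic sketch:

```python
import numpy as np

def corrector_strengths(response, orbit, n_sv=None):
    """Least-squares corrector settings that cancel the measured orbit.

    response: (n_monitors, n_correctors) orbit response matrix;
    orbit: (n_monitors,) measured orbit; n_sv: number of singular values kept."""
    u, s, vt = np.linalg.svd(response, full_matrices=False)
    if n_sv is not None:                       # regularize: drop weak directions
        u, s, vt = u[:, :n_sv], s[:n_sv], vt[:n_sv]
    return -(vt.T @ ((u.T @ orbit) / s))
```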

  17. Quantitative SPECT reconstruction for brain distribution with a non-uniform attenuation using a regularizing method

    International Nuclear Information System (INIS)

    Soussaline, F.; Bidaut, L.; Raynaud, C.; Le Coq, G.

    1983-06-01

    An analytical solution to the SPECT reconstruction problem, in which the actual attenuation effect can be included, was developed using a regularizing iterative method (RIM). The potential of this approach in quantitative brain studies using a tracer for cerebrovascular disorders is now under evaluation. Mathematical simulations of a distributed activity in the brain surrounded by the skull, and physical phantom studies, were performed using a rotating-camera-based SPECT system, allowing calibration of the system and evaluation of the adapted method. In the simulation studies, the contrast obtained along a profile was less than 5%, the standard deviation 8%, and the quantitative accuracy 13%, for a uniform emission distribution with a mean of 100 counts per pixel and a double attenuation coefficient of μ = 0.115 cm-1 and 0.5 cm-1. Clinical data obtained after injection of 123I (AMPI), without and with cerebrovascular diseases or lesion defects, were reconstructed using the RIM. Contour-finding techniques were used to delineate the brain and the skull, and measured attenuation coefficients were assumed within these two regions. Using volumes of interest selected on homogeneous regions of one hemisphere and mirrored symmetrically, the statistical uncertainty for 300 K events in the tomogram was found to be 12%, and the index of symmetry was 4% for a normal distribution. These results suggest that quantitative SPECT reconstruction of brain distributions is feasible, and that, combined with an adapted tracer and an adequate model, physiopathological parameters could be extracted

  18. A novel compensation method for the anode gain non-uniformity of multi-anode photomultiplier tubes.

    Science.gov (United States)

    Lee, Chan Mi; Il Kwon, Sun; Ko, Guen Bae; Ito, Mikiko; Yoon, Hyun Suk; Lee, Dong Soo; Hong, Seong Jong; Lee, Jae Sung

    2012-01-07

    The position-sensitive multi-anode photomultiplier tube (MA-PMT) is widely used in high-resolution scintillation detectors. However, the anode gain non-uniformity of this device is a limiting factor that degrades the intrinsic performance of the detector module. The aim of this work was to develop a gain compensation method for the MA-PMT and evaluate the resulting enhancement in detector performance. The method employs a circuit composed only of resistors, placed between the MA-PMT and a resistive charge division network (RCN) used for position encoding. The goal of the circuit is to divide the output current from each anode so that the same current flows into the RCN regardless of the anode gain. The current division is controlled by the combination of a fixed-value series resistor, whose resistance is much larger than the input impedance of the RCN, and a parallel resistor, which detours part of the current to ground. PSpice simulations of the compensation circuit and the RCN were performed to determine optimal values for the compensation resistors when used with Hamamatsu H8500 MA-PMTs. The intrinsic characteristics of a detector module consisting of this MA-PMT and a lutetium-gadolinium-oxyorthosilicate (LGSO) crystal array were tested with and without the gain compensation method. In simulation, the average coefficient of variation and the max/min ratio decreased from 15.7% to 2.7% and from 2.0 to 1.2, respectively. In the flood map of the LGSO-H8500 detector, the uniformity of the photopeak position for individual crystals and the energy resolution were much improved. The feasibility of the method was shown by applying it to an octagonal prototype positron emission tomography scanner.
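
    Under a simple current-divider model, the design reduces to choosing each anode's parallel resistor so that gain times divider fraction is constant: with fraction f = Rp/(Rp + Rs + Rin), set f_i proportional to 1/g_i and solve for Rp_i. The sketch below uses made-up component values and this simplified model, not the paper's PSpice-optimized circuit:

```python
def compensation_resistors(gains, r_series=10e3, r_in=100.0, f_max=0.9):
    """Parallel resistor per anode equalizing the current into the RCN.

    gains: relative anode gains; the lowest-gain anode keeps the largest
    feasible divider fraction f_max, all others are attenuated to match."""
    g_min = min(gains)
    resistors = []
    for g in gains:
        f = f_max * g_min / g                    # want g * f constant across anodes
        resistors.append(f * (r_series + r_in) / (1.0 - f))
    return resistors

# e.g. three anodes whose gains spread by a factor of two
print(compensation_resistors([1.0, 1.5, 2.0]))
```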

  19. A new method of CCD dark current correction via extracting the dark Information from scientific images

    Science.gov (United States)

    Ma, Bin; Shang, Zhaohui; Hu, Yi; Liu, Qiang; Wang, Lifan; Wei, Peng

    2014-07-01

    We have developed a new method to correct dark current at relatively high temperatures in Charge-Coupled Device (CCD) images when dark frames cannot be obtained on the telescope. For images taken with the Antarctic Survey Telescopes (AST3) in 2012, due to the low cooling efficiency, the median CCD temperature was -46°C, resulting in a high dark current level of about 3 e-/pix/sec, comparable even to the sky brightness (10 e-/pix/sec). If not corrected, the nonuniformity of the dark current could outweigh the photon noise of the sky background. However, dark frames could not be obtained during the observing season because the camera was operated in frame-transfer mode without a shutter, and the telescope was unattended in winter. Here we present an alternative, simple and effective method to derive the dark current frame from the scientific images themselves. We can then scale this dark frame to the temperature at which the scientific images were taken, and apply the dark frame correction to them. We have applied this method to the AST3 data and demonstrated that it can reduce the noise to a level roughly as low as the photon noise of the sky brightness, solving the high-noise problem and improving the photometric precision. This method will also be helpful for other projects that suffer from similar issues.
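
    Once a dark template has been extracted from the science frames, scaling it to another CCD temperature only needs the usual exponential law for dark current. A sketch, in which the doubling temperature is a typical assumed value rather than the AST3 calibration:

```python
import numpy as np

def scale_dark(dark_template, t_template, t_science, doubling_degC=6.0):
    """Scale a dark-current template, assuming dark current doubles every
    `doubling_degC` degrees Celsius (a common rule of thumb for CCDs)."""
    return dark_template * 2.0 ** ((t_science - t_template) / doubling_degC)
```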

  20. A method to correct coordinate distortion in EBSD maps

    DEFF Research Database (Denmark)

    Zhang, Yubin; Elbrønd, Andreas Benjamin; Lin, Fengxiang

    2014-01-01

    Drift during electron backscatter diffraction mapping leads to coordinate distortions in resulting orientation maps, which affects, in some cases significantly, the accuracy of analysis. A method, thin plate spline, is introduced and tested to correct such coordinate distortions in the maps after...... the electron backscatter diffraction measurements. The accuracy of the correction as well as theoretical and practical aspects of using the thin plate spline method is discussed in detail. By comparing with other correction methods, it is shown that the thin plate spline method is most efficient to correct...

  1. Local defect correction for boundary integral equation methods

    NARCIS (Netherlands)

    Kakuba, G.; Anthonissen, M.J.H.

    2014-01-01

    The aim in this paper is to develop a new local defect correction approach to gridding for problems with localised regions of high activity in the boundary element method. The technique of local defect correction has been studied for other methods such as finite difference methods and finite volume methods

  2. Attenuation correction method for single photon emission CT

    Energy Technology Data Exchange (ETDEWEB)

    Morozumi, Tatsuru; Nakajima, Masato [Keio Univ., Yokohama (Japan). Faculty of Science and Technology; Ogawa, Koichi; Yuta, Shinichi

    1983-10-01

    A correction method (the Modified Correction Matrix method) is proposed that implements iterative correction by exactly measuring the attenuation constant distribution in a test body, calculating a correction factor for every picture element, and then multiplying the image by these factors. Computer simulations comparing the results showed that the proposed method is more effective than the conventional correction matrix method, specifically for test bodies in which the attenuation constant changes rapidly. Since actual measurement data always contain quantum noise, noise was taken into account in the simulation; the correction effect remained large even in its presence. To verify its clinical effectiveness, an experiment using an acrylic phantom was also carried out. As a result, the recovery of image quality in the parts with a small attenuation constant was remarkable compared with the conventional method.

  3. Assessment of 12 CHF prediction methods, for an axially non-uniform heat flux distribution, with the RELAP5 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Ferrouk, M. [Laboratoire du Genie Physique des Hydrocarbures, University of Boumerdes, Boumerdes 35000 (Algeria)], E-mail: m_ferrouk@yahoo.fr; Aissani, S. [Laboratoire du Genie Physique des Hydrocarbures, University of Boumerdes, Boumerdes 35000 (Algeria); D' Auria, F.; DelNevo, A.; Salah, A. Bousbia [Dipartimento di Ingegneria Meccanica, Nucleare e della Produzione, Universita di Pisa (Italy)

    2008-10-15

    The present article covers the evaluation of the performance of twelve critical heat flux methods/correlations published in the open literature. The study concerns the simulation of an axially non-uniform heat flux distribution with the RELAP5 computer code in a single boiling water reactor channel benchmark problem. The nodalization scheme employed for the particular geometry considered, as modelled in the RELAP5 code, is described. For this purpose, a review of critical heat flux models/correlations applicable to non-uniform axial heat profiles is provided. Simulation results using the RELAP5 code were compared with those obtained from our computer program, which is based on three types of prediction methods: the local conditions, F-factor and boiling-length-average approaches.

  4. Simulation of therapeutic electron beam tracking through a non-uniform magnetic field using finite element method.

    Science.gov (United States)

    Tahmasebibirgani, Mohammad Javad; Maskani, Reza; Behrooz, Mohammad Ali; Zabihzadeh, Mansour; Shahbazian, Hojatollah; Fatahiasl, Jafar; Chegeni, Nahid

    2017-04-01

    In radiotherapy, megaelectron-volt (MeV) electrons are employed for the treatment of superficial cancers. Magnetic fields can be used to deflect and deform the electron flow, but the field produced by permanent magnets is non-uniform, and the primary electrons are neither mono-energetic nor completely parallel, so calculating the electron beam deflection requires complex mathematical methods. In this study, a device was made to apply a magnetic field to an electron beam, and the path of the electrons through the field was simulated using the finite element method. A mini-applicator equipped with two neodymium permanent magnets was designed that enables tuning of the distance between the magnets. This device was placed in a standard applicator of a Varian 2100 CD linear accelerator. The mini-applicator was simulated in the CST Studio finite element software, and the deflection angle and displacement of the electron beam after passing through the magnetic field were calculated. By setting the distance between the two poles to 2 to 5 cm, various transverse magnetic field intensities were created. The accelerator head was turned so that the deflected electrons became perpendicular to the water surface. To measure the displacement of the electron beam, EBT2 GafChromic films were employed. After exposure, the films were scanned using an HP G3010 reflection scanner and their optical density was extracted using a program written in the MATLAB environment. The measured displacement of the electron beam was compared with the simulation results after applying the magnetic field. The simulation results for the magnetic field showed good agreement with measured values. The maximum deflection angle for a 12 MeV beam was 32.9° and the minimum deflection for 15 MeV was 12.1°. Measurement with the film confirmed the precision of the simulation in predicting the displacement of the electron beam. A magnetic mini-applicator was thus made and simulated using the finite element method, and the deflection angle and displacement of the electron beam were calculated. With

  5. Corrections for hysteresis curves for rare earth magnet materials measured by open magnetic circuit methods

    International Nuclear Information System (INIS)

    Nakagawa, Yasuaki

    1996-01-01

    The methods for testing permanent magnets stipulated in the usual industrial standards are so-called closed magnetic circuit methods, which employ a loop tracer using an iron-core electromagnet. If the coercivity exceeds the highest magnetic field generated by the electromagnet, full hysteresis curves cannot be obtained. In the present work, magnetic fields up to 15 T were generated by a high-power water-cooled magnet, and the magnetization was measured by an induction method with an open magnetic circuit, in which the effect of the demagnetizing field must be taken into account. Various rare earth magnet materials, such as sintered or bonded Sm-Co and Nd-Fe-B, were provided by a number of manufacturers. Hysteresis curves were measured for cylindrical samples 10 mm in diameter and 2 mm, 3.5 mm, 5 mm, 14 mm or 28 mm in length. Correction for the demagnetizing field is rather difficult because of its non-uniformity. Roughly speaking, a mean demagnetizing factor for soft magnetic materials can be used for the correction, although applying this factor to hard magnetic materials is hardly justified. Thus the dimensions of the sample should be specified when data obtained by the open magnetic circuit method are used as industrial standards. (author)
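
    The correction discussed is the classical internal-field relation H_int = H_app - N*M; the whole difficulty the abstract raises is that N is neither uniform nor well defined for hard magnets. A sketch of the rough, mean-factor version:

```python
def internal_field(h_applied, magnetization, demag_factor):
    """Mean-demagnetizing-factor correction for open-circuit measurements:
    H_int = H_app - N * M (SI units; N depends on the sample aspect ratio)."""
    return h_applied - demag_factor * magnetization
```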

  6. Different partial volume correction methods lead to different conclusions

    DEFF Research Database (Denmark)

    Greve, Douglas N; Salat, David H; Bowen, Spencer L

    2016-01-01

    A cross-sectional group study of the effects of aging on brain metabolism as measured with (18)F-FDG-PET was performed using several different partial volume correction (PVC) methods: no correction (NoPVC), Meltzer (MZ), Müller-Gärtner (MG), and the symmetric geometric transfer matrix (SGTM) using...

  7. Simulating water hammer with corrective smoothed particle method

    NARCIS (Netherlands)

    Hou, Q.; Kruisbrink, A.C.H.; Tijsseling, A.S.; Keramat, A.

    2012-01-01

    The corrective smoothed particle method (CSPM) is used to simulate water hammer. The spatial derivatives in the water-hammer equations are approximated by a corrective kernel estimate. For the temporal derivatives, the Euler-forward time integration algorithm is employed. The CSPM results are in

  8. Method of absorbance correction in a spectroscopic heating value sensor

    Science.gov (United States)

    Saveliev, Alexei; Jangale, Vilas Vyankatrao; Zelepouga, Sergeui; Pratapas, John

    2013-09-17

    A method and apparatus for absorbance correction in a spectroscopic heating value sensor in which a reference light intensity measurement is made on a non-absorbing reference fluid, a light intensity measurement is made on a sample fluid, and a measured light absorbance of the sample fluid is determined. A corrective light intensity measurement at a non-absorbing wavelength of the sample fluid is made on the sample fluid from which an absorbance correction factor is determined. The absorbance correction factor is then applied to the measured light absorbance of the sample fluid to arrive at a true or accurate absorbance for the sample fluid.

  9. A spectrum correction method for fuel assembly rehomogenization

    International Nuclear Information System (INIS)

    Lee, Kyung Taek; Cho, Nam Zin

    2004-01-01

    To overcome the limitation of existing homogenization methods based on the single-assembly calculation with a zero-current boundary condition, we propose a new rehomogenization method, named the spectrum correction method (SCM), consisting of a multigroup energy spectrum approximation by spectrum correction and condensed two-group heterogeneous single-assembly calculations with a non-zero current boundary condition. In SCM, the spectrum-shifting phenomena caused by the current across assembly interfaces are first considered by the spectrum correction at the group condensation stage. Then, heterogeneous single-assembly calculations, with two-group cross sections condensed using the corrected multigroup energy spectrum, are performed to obtain rehomogenized nodal diffusion parameters, i.e., assembly-wise homogenized cross sections and discontinuity factors. To evaluate the performance of SCM, it was applied to the analytic function expansion nodal (AFEN) method and several test problems were solved. The results show that SCM can significantly reduce the errors both in multiplication factors and in assembly-averaged power distributions

  10. Methods to Increase Educational Effectiveness in an Adult Correctional Setting.

    Science.gov (United States)

    Kuster, Byron

    1998-01-01

    A correctional educator reflects on methods that improve instructional effectiveness. These include teacher-student collaboration, clear goals, student accountability, positive classroom atmosphere, high expectations, and mutual respect. (SK)

  11. Automated general temperature correction method for dielectric soil moisture sensors

    Science.gov (United States)

    Kapilaratne, R. G. C. Jeewantinie; Lu, Minjiao

    2017-08-01

    An effective temperature correction method for dielectric sensors is important to ensure the accuracy of soil water content (SWC) measurements in local- to regional-scale soil moisture monitoring networks. These networks extensively use highly temperature-sensitive dielectric sensors due to their low cost, ease of use and low power consumption. Yet there is no general temperature correction method for dielectric sensors; instead, sensor- or site-dependent correction algorithms are employed. Such methods become ineffective for soil moisture monitoring networks with different sensor setups and those that cover diverse climatic conditions and soil types. This study attempted to develop a general temperature correction method for dielectric sensors which can be used regardless of differences in sensor type, climatic conditions and soil type, and without rainfall data. In this work, an automated general temperature correction method was developed by adapting temperature correction algorithms previously developed for time domain reflectometry (TDR) measurements to ThetaProbe ML2X, Stevens Hydra Probe II and Decagon Devices EC-TM sensor measurements. The procedure for removing rainy-day effects from the SWC data was automated by incorporating a statistical inference technique into the temperature correction algorithms. The temperature correction method was evaluated using 34 stations from the International Soil Moisture Monitoring Network and another nine stations from a local soil moisture monitoring network in Mongolia. The soil moisture monitoring networks used in this study cover four major climates and six major soil types. Results indicated that the automated temperature correction algorithms developed in this study can successfully eliminate temperature effects from dielectric sensor measurements even without on-site rainfall data. Furthermore, it has been found that the actual daily average of SWC has been changed due to temperature effects of dielectric sensors with a

  12. Correction

    DEFF Research Database (Denmark)

    Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin

    2016-01-01

    [This corrects the article DOI: 10.1371/journal.ppat.1005000.][This corrects the article DOI: 10.1371/journal.ppat.1005740.][This corrects the article DOI: 10.1371/journal.ppat.1005679.]

  13. Metric-based method of software requirements correctness improvement

    Directory of Open Access Journals (Sweden)

    Yaremchuk Svitlana

    2017-01-01

    Full Text Available The work highlights the most important principles of software reliability management (SRM). The SRM concept forms a basis for developing a method of improving requirements correctness. The method assumes that complicated requirements contain more actual and potential design faults/defects. It applies a new metric to evaluate requirements complexity and a double sorting technique to evaluate the priority and complexity of a particular requirement. The method improves requirements correctness by enabling the identification of a higher number of defects with restricted resources. Practical application of the proposed method during requirements review yielded a measurable technical and economic effect.

  14. Local defect correction for boundary integral equation methods

    NARCIS (Netherlands)

    Kakuba, G.; Anthonissen, M.J.H.

    2013-01-01

    This paper presents a new approach to gridding for problems with localised regions of high activity. The technique of local defect correction has been studied for other methods such as finite difference methods and finite volume methods. In this paper we develop the technique for the boundary element method

  15. A method of detector correction for cosmic ray muon radiography

    International Nuclear Information System (INIS)

    Liu Yuanyuan; Zhao Ziran; Chen Zhiqiang; Zhang Li; Wang Zhentian

    2008-01-01

    Cosmic ray muon radiography, which has good penetrability and sensitivity to high-Z materials, is an effective way of detecting shielded nuclear materials. The problem of data correction is one of the key points of the muon radiography technique. Because of the influence of environmental background, environmental noise and detector errors, the raw data cannot be used directly; reconstructing from the raw data without any corrections would produce severe artifacts. Based on the characteristics of the muon radiography system, this paper proposes a detector correction method aimed at detector errors. The simulation experiments demonstrate that this method can effectively correct the errors produced by the detectors, taking a further step toward bringing cosmic ray muon radiography into real-life use. (authors)

  16. A corrective method for the inherent flaw of the asynchronization direct counting circuit

    International Nuclear Information System (INIS)

    Wang Renfei; Liu Congzhan; Jin Yongjie; Zhang Zhi; Li Yanguo

    2003-01-01

    As an inherent flaw of the asynchronization direct counting circuit, crosstalk, which results from the randomness of the timing signal, always exists between two adjacent channels. In order to reduce the counting error derived from the crosstalk, the authors propose an effective method to correct the flaw after analysing the mechanism of the crosstalk

  17. Implementation of the Centroid Method for the Correction of Turbulence

    Directory of Open Access Journals (Sweden)

    Enric Meinhardt-Llopis

    2014-07-01

    Full Text Available The centroid method for the correction of turbulence consists in computing the Karcher-Fréchet mean of the sequence of input images. The direction of deformation between a pair of images is determined by the optical flow. A distinguishing feature of the centroid method is that it can produce useful results from an arbitrarily small set of input images.

  18. [Study on phase correction method of spatial heterodyne spectrometer].

    Science.gov (United States)

    Wang, Xin-Qiang; Ye, Song; Zhang, Li-Juan; Xiong, Wei

    2013-05-01

    Phase distortion exists in the collected interferograms because of various measurement factors when spatial heterodyne spectrometers are used in practice, so an improved phase correction method is presented. The phase curve of the interferogram was obtained by extracting the single-sided transform spectrum through the inverse Fourier transform; based on this, the phase distortions were obtained by fitting the phase slope, the phase correction functions were derived from them, and the transform spectrum was convolved with the phase correction function to implement the spectral phase correction. The method was applied to the phase correction of an actually measured monochromatic spectrum and a simulated water vapor spectrum. Experimental results show that the low-frequency false signals in the monochromatic spectral fringes are effectively eliminated, increasing the periodicity and symmetry of the interferogram; in addition, when the phase error imposed on the continuous spectrum was corrected, the standard deviation between it and the original spectrum was reduced from 0.47 to 0.20, thus improving the spectral accuracy.
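
    The core of the procedure, fitting a linear phase to the single-sided transform spectrum and removing it, fits in a few lines. A schematic numpy version; it multiplies by the conjugate fitted phase (equivalent to the paper's convolution with a phase correction function), and the fit window around the peak is an arbitrary choice:

```python
import numpy as np

def phase_correct(interferogram):
    """Return the single-sided spectrum with its fitted linear phase removed."""
    spectrum = np.fft.fft(interferogram)
    half = spectrum[: spectrum.size // 2]          # single-sided transform spectrum
    k = np.arange(half.size)
    peak = int(np.abs(half).argmax())
    sel = slice(max(peak - 5, 1), peak + 6)        # fit near the fringe frequency
    coef = np.polyfit(k[sel], np.unwrap(np.angle(half[sel])), 1)
    return half * np.exp(-1j * np.polyval(coef, k))
```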

  19. An attenuation correction method for PET/CT images

    International Nuclear Information System (INIS)

    Ue, Hidenori; Yamazaki, Tomohiro; Haneishi, Hideaki

    2006-01-01

    In PET/CT systems, accurate attenuation correction can be achieved by creating an attenuation map from an X-ray CT image. On the other hand, respiratory-gated PET acquisition is an effective method for avoiding motion blurring of the thoracic and abdominal organs caused by respiratory motion. In PET/CT systems employing respiratory-gated PET, using an X-ray CT image acquired during breath-holding for attenuation correction may have a large effect on the voxel values, especially in regions with substantial respiratory motion. In this report, we propose an attenuation correction method in which, as the first step, a set of respiratory-gated PET images is reconstructed without attenuation correction; as the second step, the motion of each phase PET image relative to the PET image in the phase matching the CT acquisition timing is estimated by the previously proposed method; as the third step, the CT image corresponding to each respiratory phase is generated from the original CT image by deformation according to the motion vector maps; and as the final step, attenuation correction using these CT images and reconstruction are performed. The effectiveness of the proposed method was evaluated using 4D-NCAT phantoms, and good stability of the voxel values near the diaphragm was observed. (author)

  20. An Automated Baseline Correction Method Based on Iterative Morphological Operations.

    Science.gov (United States)

    Chen, Yunliang; Dai, Liankui

    2018-05-01

    Raman spectra usually suffer from baseline drift caused by fluorescence or other effects; baseline correction is therefore a necessary and crucial step that must be performed before subsequent processing and analysis. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method adaptively determines the structuring element first and then gradually removes the spectral peaks during iteration to obtain an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible enough to handle different kinds of baselines in various practical situations. Comparison with some state-of-the-art baseline correction methods demonstrates its advantages in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method can hopefully be used for baseline correction of other analytical instrumental signals, such as IR spectra and chromatograms.
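
    The backbone operation is greyscale morphological opening, which flattens any peak narrower than the structuring element while following the slowly varying baseline. A fixed-width scipy sketch; the paper's actual contribution, adaptive element selection and iterative peak stripping, is not reproduced here:

```python
import numpy as np
from scipy.ndimage import grey_opening, uniform_filter1d

def morphological_baseline(spectrum, width=101):
    """Estimate and remove a slowly varying baseline from a 1-D spectrum."""
    baseline = grey_opening(np.asarray(spectrum, dtype=float), size=width)
    baseline = uniform_filter1d(baseline, size=max(width // 2, 1))  # soften plateaus
    return spectrum - baseline, baseline
```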

  1. The various correction methods to the high precision aeromagnetic data

    International Nuclear Information System (INIS)

    Xu Guocang; Zhu Lin; Ning Yuanli; Meng Xiangbao; Zhang Hongjian

    2014-01-01

    In airborne geophysical surveys, an outstanding result depends first on the measurement precision of the instrument, the choice of measurement conditions and the reliability of data collection, and then on the correct method of processing the measurement data and the soundness of the data interpretation. Geophysical data processing is clearly an important task in the comprehensive interpretation of the measurement results; whether the processing method is correct is directly related to the quality of the final results. In the course of actual production and scientific research in recent years, we have developed a set of personal computer software for aeromagnetic and radiometric survey data processing and successfully applied it in production. The processing methods and flowcharts for high-precision aeromagnetic data are briefly introduced in this paper. The mathematical techniques of the various correction programs for IGRF, flying height and magnetic diurnal variation are discussed with emphasis, and their processing effectiveness is illustrated with an example. (authors)

  2. A vibration correction method for free-fall absolute gravimeters

    Science.gov (United States)

    Qian, J.; Wang, G.; Wu, K.; Wang, L. J.

    2018-02-01

    An accurate determination of the gravitational acceleration, usually approximated as 9.8 m s-2, has been playing an important role in the areas of metrology, geophysics, and geodesy. Absolute gravimetry has experienced rapid development in recent years. Most absolute gravimeters today employ a free-fall method to measure the gravitational acceleration, and noise from ground vibration has become one of the most serious factors limiting measurement precision. Compared to vibration isolators, the vibration correction method is a simple and feasible way to reduce the influence of ground vibrations. A modified vibration correction method is proposed and demonstrated, in which a two-dimensional golden section search algorithm is used to find the best parameters of the hypothesized transfer function. Experiments using a T-1 absolute gravimeter were performed, and it is verified that, for an identical group of drop data, the modified method proposed in this paper can achieve better correction with much less computation than previous methods. Compared to vibration isolators, the correction method applies to more hostile environments and even dynamic platforms, and is expected to be used in a wider range of applications.
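
    Golden-section search is the scalar workhorse behind the paper's two-dimensional parameter search. A compact version, applied coordinate-wise to a toy gain-and-delay vibration model; the transfer-function form, search ranges and circular shift are simplifying assumptions, not the paper's model:

```python
import numpy as np

def golden_min(f, a, b, tol=1e-5):
    """Minimize a unimodal scalar function f on [a, b] by golden-section search."""
    g = (5 ** 0.5 - 1) / 2
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

def fit_vibration_model(residuals, vibration, fs):
    """Coordinate-wise search for (gain, delay) minimizing the corrected residual."""
    def cost(gain, delay):
        shifted = np.roll(vibration, int(round(delay * fs)))
        return float(np.sum((residuals - gain * shifted) ** 2))
    gain = golden_min(lambda x: cost(x, 0.0), 0.0, 2.0)
    delay = golden_min(lambda x: cost(gain, x), 0.0, 1e-3)
    return gain, delay
```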

  3. GafChromic EBT film dosimetry with flatbed CCD scanner: a novel background correction method and full dose uncertainty analysis.

    Science.gov (United States)

    Saur, Sigrun; Frengen, Jomar

    2008-07-01

    Film dosimetry using radiochromic EBT film in combination with a flatbed charge coupled device scanner is a useful method both for two-dimensional verification of intensity-modulated radiation treatment plans and for general quality assurance of treatment planning systems and linear accelerators. Unfortunately, the response over the scanner area is nonuniform, and when not corrected for, this results in a systematic error in the measured dose which is both dose and position dependent. In this study a novel method for background correction is presented. The method is based on the subtraction of a correction matrix, a matrix that is based on scans of films that are irradiated to nine dose levels in the range 0.08-2.93 Gy. Because the response of the film is dependent on the film's orientation with respect to the scanner, correction matrices for both landscape oriented and portrait oriented scans were made. In addition to the background correction method, a full dose uncertainty analysis of the film dosimetry procedure was performed. This analysis takes into account the fit uncertainty of the calibration curve, the variation in response for different film sheets, the nonuniformity after background correction, and the noise in the scanned films. The film analysis was performed for film pieces of size 16 x 16 cm, all with the same lot number, and all irradiations were done perpendicular onto the films. The results show that the 2-sigma dose uncertainty at 2 Gy is about 5% and 3.5% for landscape and portrait scans, respectively. The uncertainty gradually increases as the dose decreases, but at 1 Gy the 2-sigma dose uncertainty is still as good as 6% and 4% for landscape and portrait scans, respectively. The study shows that film dosimetry using GafChromic EBT film, an Epson Expression 1680 Professional scanner and a dedicated background correction technique gives precise and accurate results. For the purpose of dosimetric verification, the calculated dose distribution
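
    The correction step itself is a pixel-wise subtraction of the dose-level-appropriate matrix. A sketch that linearly interpolates between the measured matrices; the interpolation between dose levels is an assumption on top of the paper's nine fixed calibration doses:

```python
import numpy as np

def background_correct(scan, dose_levels, corr_matrices, approx_dose):
    """Subtract the scanner non-uniformity matrix interpolated to approx_dose.

    dose_levels: sorted 1-D array (Gy); corr_matrices: matching list of 2-D arrays."""
    levels = np.asarray(dose_levels, dtype=float)
    i = int(np.clip(np.searchsorted(levels, approx_dose), 1, levels.size - 1))
    t = (approx_dose - levels[i - 1]) / (levels[i] - levels[i - 1])
    correction = (1.0 - t) * corr_matrices[i - 1] + t * corr_matrices[i]
    return scan - correction
```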

  5. A new bias field correction method combining N3 and FCM for improved segmentation of breast density on MRI.

    Science.gov (United States)

    Lin, Muqing; Chan, Siwa; Chen, Jeon-Hor; Chang, Daniel; Nie, Ke; Chen, Shih-Ting; Lin, Cheng-Ju; Shih, Tzu-Ching; Nalcioglu, Orhan; Su, Min-Ying

    2011-01-01

    Quantitative breast density is known as a strong risk factor associated with the development of breast cancer. Measurement of breast density based on three-dimensional breast MRI may provide very useful information. One important step in the quantitative analysis of breast density on MRI is the correction of field inhomogeneity to allow an accurate segmentation of the fibroglandular (dense) tissue. A new bias field correction method combining the nonparametric nonuniformity normalization (N3) algorithm and a fuzzy-C-means (FCM)-based inhomogeneity correction algorithm is developed in this work. The analysis is performed on non-fat-saturated T1-weighted images acquired using a 1.5 T MRI scanner. A total of 60 breasts from 30 healthy volunteers were analyzed. N3 is known as a robust correction method, but it cannot correct a strong bias field over a large area. The FCM-based algorithm can correct the bias field over a large area, but it may change the tissue contrast and affect the segmentation quality. The proposed algorithm applies N3 first, followed by FCM, and then smooths the generated bias field using a Gaussian kernel and B-spline surface fitting to minimize the problem of mistakenly changed tissue contrast. The segmentation results based on the N3+FCM corrected images were compared to those from images corrected by N3 alone, by FCM alone, and by another method, coherent local intensity clustering (CLIC). The segmentation quality achieved with the different correction methods was evaluated and ranked by a radiologist. The authors demonstrate that the iterative N3+FCM correction method brightens the signal intensity of fatty tissue and separates the histogram peaks of the fibroglandular and fatty tissues to allow an accurate segmentation between them. In the first reading session, the radiologist found the ranking (N3+FCM > N3 > FCM) in 17 breasts, (N3+FCM > N3 = FCM) in 7 breasts, (N3+FCM = N3 > FCM) in 32 breasts, and (N3+FCM = N3 = FCM) in 2 breasts.
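
    An illustrative sketch, not the authors' exact pipeline: a minimal FCM-style bias estimation step of the kind combined with N3 above. Intensities are clustered into two classes, a piecewise-constant reconstruction is built from memberships and class means, and the bias field is the heavily smoothed ratio of image to reconstruction (plain Gaussian smoothing stands in for the paper's Gaussian-kernel and B-spline surface fit).

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fcm_bias_correct(img, n_classes=2, m=2.0, n_iter=30, sigma=25.0):
        x = img.ravel().astype(float)
        centers = np.quantile(x, np.linspace(0.2, 0.8, n_classes))
        for _ in range(n_iter):
            d = np.abs(x[:, None] - centers[None, :]) + 1e-12
            u = 1.0 / (d ** (2 / (m - 1)))
            u /= u.sum(axis=1, keepdims=True)          # fuzzy memberships
            um = u ** m
            centers = (um * x[:, None]).sum(0) / um.sum(0)
        recon = (u * centers[None, :]).sum(1).reshape(img.shape)
        bias = gaussian_filter(img / np.maximum(recon, 1e-6), sigma)
        return img / np.maximum(bias, 1e-6), bias

    rng = np.random.default_rng(2)
    truth = rng.choice([100.0, 300.0], size=(128, 128))   # two tissue classes
    yy, xx = np.mgrid[0:128, 0:128]
    field = 1.0 + 0.4 * (xx / 127.0)                      # slow multiplicative bias
    corrected, est_field = fcm_bias_correct(truth * field)
    ```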

  6. A Horizontal Tilt Correction Method for Ship License Numbers Recognition

    Science.gov (United States)

    Liu, Baolong; Zhang, Sanyuan; Hong, Zhenjie; Ye, Xiuzi

    2018-02-01

    An automatic ship license number (SLN) recognition system plays a significant role in intelligent waterway transportation systems, since it can be used to identify ships by recognizing the characters in SLNs. Tilt occurs frequently in many SLNs because the monitors and the ships often meet at large vertical or horizontal angles, which significantly decreases the accuracy and robustness of an SLN recognition system. In this paper, we present a horizontal tilt correction method for SLNs. For an input tilted SLN image, the proposed method accomplishes the correction task in three main steps. First, an MSER-based character center-point computation algorithm is designed to compute accurate center-points of the characters contained in the input SLN image. Second, an L1-L2 distance-based straight line is fitted to the computed center-points using an M-estimator algorithm; the tilt angle is estimated at this stage. Finally, based on the computed tilt angle, an affine rotation is applied to rotate the input SLN to the horizontal. Tested on 200 tilted SLN images, the proposed method proves effective, with a tilt correction rate of 80.5%.
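
    A rough rendering of the three steps using OpenCV building blocks; the paper's exact MSER filtering and L1-L2 M-estimator are approximated here by cv2.MSER region centroids and cv2.fitLine with the DIST_L12 cost, and the input file name is hypothetical.

    ```python
    import cv2
    import numpy as np

    def correct_tilt(bgr):
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)

        # Step 1: candidate character regions and their center points.
        mser = cv2.MSER_create()
        regions, _ = mser.detectRegions(gray)
        centers = np.array([r.mean(axis=0) for r in regions], dtype=np.float32)
        if len(centers) < 2:
            return bgr  # nothing to fit

        # Step 2: robust straight-line fit through the centers -> tilt angle.
        vx, vy, _, _ = cv2.fitLine(centers, cv2.DIST_L12, 0, 0.01, 0.01).ravel()
        angle = np.degrees(np.arctan2(vy, vx))

        # Step 3: affine rotation bringing the fitted line to the horizontal.
        h, w = gray.shape
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        return cv2.warpAffine(bgr, M, (w, h), flags=cv2.INTER_LINEAR,
                              borderMode=cv2.BORDER_REPLICATE)

    # corrected = correct_tilt(cv2.imread("sln.jpg"))  # hypothetical input file
    ```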

  7. Correction of measured multiplicity distributions by the simulated annealing method

    International Nuclear Information System (INIS)

    Hafidouni, M.

    1993-01-01

    Simulated annealing is a method used to solve combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs
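
    A hedged illustration of how simulated annealing can correct (unfold) a measured multiplicity distribution. Everything below — the Gaussian response matrix, the least-squares cost, and the cooling schedule — is a generic assumption for demonstration, not the paper's setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 30
    true = np.exp(-np.arange(n) / 8.0); true /= true.sum()
    # Detector response: each true multiplicity smears into neighboring bins.
    R = np.array([[np.exp(-0.5 * ((i - j) / 1.5) ** 2) for j in range(n)]
                  for i in range(n)])
    R /= R.sum(axis=0, keepdims=True)
    observed = R @ true

    def cost(x):
        return np.sum((R @ x - observed) ** 2)

    x = np.full(n, 1.0 / n)            # initial guess: flat distribution
    T = 1e-3
    best, best_c = x.copy(), cost(x)
    for step in range(20000):
        trial = x.copy()
        i = rng.integers(n)
        trial[i] = max(trial[i] + rng.normal(0, 0.01), 0.0)
        trial /= trial.sum()           # keep it a probability distribution
        dc = cost(trial) - cost(x)
        if dc < 0 or rng.random() < np.exp(-dc / T):
            x = trial
            if cost(x) < best_c:
                best, best_c = x.copy(), cost(x)
        T *= 0.9995                    # geometric cooling

    print(best_c)  # `best` should now approximate `true`
    ```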

  8. A Hold-out method to correct PCA variance inflation

    DEFF Research Database (Denmark)

    Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Hansen, Lars Kai

    2012-01-01

    In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article a correction method based on a Leave-One-Out (LOO) procedure...

  9. Evaluation of the Analytical Anisotropic Algorithm (AAA) in dose calculation for fields with non-uniform fluences considering heterogeneity correction; Avaliacao do Algoritmo Analitico Anisotropico (AAA) no calculo de dose para campos com fluencia nao uniforme considerando correcao de heterogeneidade

    Energy Technology Data Exchange (ETDEWEB)

    Bornatto, P.; Funchal, M.; Bruning, F.; Toledo, H.; Lyra, J.; Fernandes, T.; Toledo, F.; Marciao, C., E-mail: pricila_bornatto@yahoo.com.br [Hospital Erasto Gaertner (LPCC), Curitiba, PR (Brazil). Departamento de Radioterapia

    2014-08-15

    The purpose of this study is to evaluate the dose distributions calculated by the AAA (Varian Medical Systems) for fields with non-uniform fluences, considering heterogeneity correction. Five phantoms built from materials of different densities were used. These phantoms were scanned in the BrightSpeed CT (©GE Healthcare) on top of the MAPCHECK2™ detector array (Sun Nuclear Corporation) and irradiated on a 600 CD linear accelerator (Varian Medical Systems) at 6 MV and a dose rate of 400 MU/min with an isocentric setup. The fluences used were exported from IMRT plans calculated by the ECLIPSE™ planning system (Varian Medical Systems), plus a 10x10 cm² field to assess the heterogeneity correction for a uniform fluence. The measured dose distribution was compared to the calculated one by gamma analysis with approval criteria of 3%/3 mm and a 10% threshold. The evaluation was performed using the SNCPatient software (Sun Nuclear Corporation), considering absolute dose normalized at the maximum. The best-performing phantoms were those with low-density materials, with an average of 99.2% of points approved. Phantoms with plates of higher-density material, by contrast, presented several fluences with fewer than 95% of points approved; their average reached 94.3%. A dependency between fluence and the percentage of approved points was observed, whereas for the same fluence, 100% of the points were approved in all phantoms. Since the approval criterion for IMRT plans recommended in most centers is 3%/3 mm with at least 95% of points approved, it can be concluded that, under these conditions, IMRT plans with heterogeneity correction can be delivered; however, quality control must be careful because of the difficulty the system has in accurately predicting the dose distribution in certain situations. (author)
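
    A deliberately brute-force sketch of the 3%/3 mm gamma analysis used as the evaluation tool above (global dose normalization; grid spacing and array names are assumptions).

    ```python
    import numpy as np

    def gamma_pass_rate(measured, calculated, spacing_mm=1.0,
                        dose_tol=0.03, dist_tol_mm=3.0, threshold=0.10):
        d_max = measured.max()
        search = int(np.ceil(dist_tol_mm / spacing_mm))
        ny, nx = measured.shape
        passed, evaluated = 0, 0
        for iy in range(ny):
            for ix in range(nx):
                dm = measured[iy, ix]
                if dm < threshold * d_max:
                    continue  # below the low-dose threshold
                evaluated += 1
                best = np.inf
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        jy, jx = iy + dy, ix + dx
                        if not (0 <= jy < ny and 0 <= jx < nx):
                            continue
                        dd = (calculated[jy, jx] - dm) / (dose_tol * d_max)
                        rr = spacing_mm * np.hypot(dy, dx) / dist_tol_mm
                        best = min(best, dd * dd + rr * rr)
                passed += best <= 1.0  # gamma^2 <= 1 means the point passes
        return 100.0 * passed / max(evaluated, 1)
    ```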

  10. Validation of a non-uniform meshing algorithm for the 3D-FDTD method by means of a two-wire crosstalk experimental set-up

    Directory of Open Access Journals (Sweden)

    Raúl Esteban Jiménez-Mejía

    2015-06-01

    Full Text Available This paper presents an algorithm used to automatically mesh a 3D computational domain in order to solve electromagnetic interaction scenarios by means of the Finite-Difference Time-Domain (FDTD) method. The proposed algorithm is formulated in a general mathematical form, where convenient spacing functions can be defined for the discretization of the problem space, allowing the inclusion of small-sized objects in the FDTD method and the calculation of detailed variations of the electromagnetic field in specified regions of the computational domain. The results obtained using the FDTD method with the proposed algorithm have been contrasted not only with a typical uniform mesh algorithm, but also with experimental measurements for a two-wire crosstalk set-up, leading to excellent agreement between theoretical and experimental waveforms. A discussion of the advantages of the non-uniform mesh over the uniform one is also presented.
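
    A sketch of the kind of spacing function such a mesher relies on: cells grow geometrically away from a fine region around a thin wire. All sizes and the 20% grading limit are made-up illustration values, not the paper's.

    ```python
    import numpy as np

    def graded_axis(x_wire, dx_min, dx_max, ratio, x_end):
        """1D node positions: dx_min at x_wire, growing by `ratio` up to dx_max."""
        xs, dx = [x_wire], dx_min
        while xs[-1] < x_end:
            xs.append(xs[-1] + dx)
            dx = min(dx * ratio, dx_max)   # keep neighbor cells within `ratio`
        return np.array(xs)

    # Fine 0.5 mm cells at the wire, growing by at most 20% per cell to 5 mm.
    x_nodes = graded_axis(x_wire=0.0, dx_min=5e-4, dx_max=5e-3, ratio=1.2, x_end=0.3)
    dx = np.diff(x_nodes)
    assert np.all(dx[1:] / dx[:-1] <= 1.2 + 1e-9)  # FDTD-friendly smooth grading
    ```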

  11. Method for decoupling error correction from privacy amplification

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Hoi-Kwong [Department of Electrical and Computer Engineering and Department of Physics, University of Toronto, 10 King' s College Road, Toronto, Ontario, Canada, M5S 3G4 (Canada)

    2003-04-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof.
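
    A toy illustration of the decoupling trick described above: the error syndrome exchanged during error correction is encrypted with one-time-pad bits drawn from a pre-shared secret string. Cascade itself is not implemented here; only the one-time-pad step is shown.

    ```python
    import secrets

    def xor_bits(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    syndrome = bytes([0b10110010])            # parity bits Alice would announce
    pad = secrets.token_bytes(len(syndrome))  # pre-shared one-time-pad material
    announced = xor_bits(syndrome, pad)       # what actually crosses the channel
    recovered = xor_bits(announced, pad)      # Bob, holding the same pad
    assert recovered == syndrome
    ```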

  12. Correcting saturation of detectors for particle/droplet imaging methods

    International Nuclear Information System (INIS)

    Kalt, Peter A M

    2010-01-01

    Laser-based diagnostic methods are being applied to more and more flows of theoretical and practical interest and are revealing interesting new flow features. Imaging particles or droplets in nephelometry and laser sheet dropsizing methods requires trading off a maximized signal-to-noise ratio against over-saturating the detector. Droplet and particle imaging results in a lognormal distribution of pixel intensities, and a derived lognormal distribution can be fitted to the histogram of measured pixel intensities. If pixel intensities are clipped at a saturated value, it is possible to estimate the presumed probability density function (pdf) shape, free of the effects of saturation, from the lognormal fit to the unsaturated part of the histogram. Information about the presumed shape of the pixel intensity pdf is used to generate corrections that can be applied to data to account for saturation. The effects of even slight saturation are shown to be a significant source of error in the derived average, and the influence of saturation on the derived root mean square (rms) is even more pronounced. It is found that errors in the determined average exceed 5% when the number of saturated samples exceeds 3% of the total; errors in the rms are 20% for a similar saturation level. This study also attempts to delineate the limits within which detector saturation can be accurately corrected. It is demonstrated that a simple method for reshaping the clipped part of the pixel intensity histogram makes accurate corrections to account for saturated pixels. These outcomes can be used to correct a saturated signal, quantify the effect of saturation on a derived average, and correct the derived average in the case of slight to moderate saturation of pixels.
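
    A sketch of the histogram-repair idea: fit a lognormal to the unsaturated pixels only, treating each saturated pixel as right-censored at the clip level, then use the fitted model to correct the mean. The parameterization and optimizer below are assumptions, not the paper's exact procedure.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import lognorm

    rng = np.random.default_rng(4)
    sigma_t, scale_t, clip = 0.5, 100.0, 200.0
    pixels = lognorm.rvs(sigma_t, scale=scale_t, size=50000, random_state=rng)
    observed = np.minimum(pixels, clip)              # detector clips at `clip`
    unsat = observed[observed < clip]
    n_sat = observed.size - unsat.size

    def neg_loglik(p):
        s, scale = np.exp(p)                         # keep parameters positive
        # Censored likelihood: density for unsaturated pixels,
        # survival mass at the clip level for the saturated ones.
        ll = lognorm.logpdf(unsat, s, scale=scale).sum()
        ll += n_sat * lognorm.logsf(clip, s, scale=scale)
        return -ll

    res = minimize(neg_loglik, x0=np.log([0.3, np.median(unsat)]),
                   method="Nelder-Mead")
    s_hat, scale_hat = np.exp(res.x)
    corrected_mean = lognorm.mean(s_hat, scale=scale_hat)
    print(observed.mean(), corrected_mean, pixels.mean())  # clipped vs corrected vs true
    ```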

  14. An efficient dose-compensation method for proximity effect correction

    International Nuclear Information System (INIS)

    Wang Ying; Han Weihua; Yang Xiang; Zhang Yang; Yang Fuhua; Zhang Renping

    2010-01-01

    A novel, simple dose-compensation method is developed for proximity effect correction in electron-beam lithography. The sizes of exposed patterns depend on dose factors while the other exposure parameters (including accelerating voltage, resist thickness, exposure step size, substrate material, and so on) remain constant. The method is based on two reasonable assumptions in the evaluation of the compensated dose factor: one is that the relation between dose factors and circle diameters is linear in the range under consideration; the other is that, for simplicity, the compensated dose factor is only affected by the nearest neighbors. Four-layer-hexagon photonic crystal structures were fabricated as test patterns to demonstrate this method. Compared to the uncorrected structures, the homogeneity of the corrected hole size in the photonic crystal structures was clearly improved. (semiconductor technology)
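
    A toy rendering of the two stated assumptions; the linear coefficients, coupling constant, geometry, and target values are invented for illustration. Diameter responds linearly to local effective dose, and proximity exposure comes only from nearest neighbors, which yields a small linear system for the compensated doses.

    ```python
    import numpy as np

    a, b = 20.0, 80.0      # assumed linear model: diameter = a + b * effective_dose
    eps = 0.15             # assumed nearest-neighbor exposure fraction
    target_d = 120.0       # desired hole diameter (arbitrary units)

    def compensated_doses(neighbor_lists):
        """Solve a + b*(D_i + eps * sum_j D_j) = target_d for per-hole doses D."""
        n = len(neighbor_lists)
        A = np.eye(n)
        for i, nbrs in enumerate(neighbor_lists):
            for j in nbrs:
                A[i, j] = eps
        rhs = np.full(n, (target_d - a) / b)
        return np.linalg.solve(A, rhs)

    # 3 holes in a row: the middle hole has two neighbors, the edge holes one.
    doses = compensated_doses([[1], [0, 2], [1]])
    print(doses)  # the middle hole gets less dose than the edge holes
    ```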

  15. A rigid motion correction method for helical computed tomography (CT)

    International Nuclear Information System (INIS)

    Kim, J-H; Kyme, A; Fulton, R; Nuyts, J; Kuncic, Z

    2015-01-01

    We propose a method to compensate for six degree-of-freedom rigid motion in helical CT of the head. The method is demonstrated in simulations and in helical scans performed on a 16-slice CT scanner. Scans of a Hoffman brain phantom were acquired while an optical motion tracking system recorded the motion of the bed and the phantom. Motion correction was performed by restoring projection consistency using data from the motion tracking system, and reconstructing with an iterative fully 3D algorithm. Motion correction accuracy was evaluated by comparing reconstructed images with a stationary reference scan. We also investigated the effects on accuracy of tracker sampling rate, measurement jitter, interpolation of tracker measurements, and the synchronization of motion data and CT projections. After optimization of these aspects, motion corrected images corresponded remarkably closely to images of the stationary phantom with correlation and similarity coefficients both above 0.9. We performed a simulation study using volunteer head motion and found similarly that our method is capable of compensating effectively for realistic human head movements. To the best of our knowledge, this is the first practical demonstration of generalized rigid motion correction in helical CT. Its clinical value, which we have yet to explore, may be significant. For example it could reduce the necessity for repeat scans and resource-intensive anesthetic and sedation procedures in patient groups prone to motion, such as young children. It is not only applicable to dedicated CT imaging, but also to hybrid PET/CT and SPECT/CT, where it could also ensure an accurate CT image for lesion localization and attenuation correction of the functional image data. (paper)

  16. Dead time corrections using the backward extrapolation method

    Energy Technology Data Exchange (ETDEWEB)

    Gilad, E., E-mail: gilade@bgu.ac.il [The Unit of Nuclear Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Dubi, C. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel); Geslot, B.; Blaise, P. [DEN/CAD/DER/SPEx/LPE, CEA Cadarache, Saint-Paul-les-Durance 13108 (France); Kolin, A. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel)

    2017-05-11

    Dead time losses in neutron detection, caused by both detector and electronics dead time, are a highly nonlinear effect, known to create strong bias in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing) and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled counts per second (CPS), based on backward extrapolation to zero of the losses created by artificially imposing increasingly long dead times on the data. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero-power reactor, demonstrating high accuracy (1-2%) in restoring the corrected count rate. - Highlights: • A new method for dead time corrections is introduced and experimentally validated. • The method does not depend on any prior calibration nor assumes any specific model. • Different dead times are imposed on the signal and the losses are extrapolated to zero. • The method is implemented and validated using neutron measurements from the MINERVE. • Results show very good correspondence to empirical results.
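
    A sketch of the backward-extrapolation mechanics on synthetic timestamps: impose growing artificial (non-paralyzing) dead times on the recorded event times, measure the surviving count rate, fit the trend, and extrapolate back to zero imposed dead time. The quadratic fit is an assumption for illustration, and the loop is deliberately simple rather than fast.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    true_rate, T = 5e4, 10.0                       # cps, seconds
    times = np.sort(rng.uniform(0, T, rng.poisson(true_rate * T)))

    def apply_dead_time(ts, tau):
        """Keep events at least tau after the last *accepted* event (non-paralyzing)."""
        kept, last = 0, -np.inf
        for t in ts:
            if t - last >= tau:
                kept += 1
                last = t
        return kept / T                            # surviving count rate

    taus = np.linspace(2e-6, 2e-5, 10)             # imposed dead times (s)
    rates = np.array([apply_dead_time(times, tau) for tau in taus])

    coef = np.polyfit(taus, rates, deg=2)          # smooth trend in tau
    rate_at_zero = np.polyval(coef, 0.0)           # extrapolated, corrected cps
    print(rate_at_zero, true_rate)
    ```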

  17. Allowance for influence of gravity field nonuniformity

    Science.gov (United States)

    Tsysar, A. P.

    1987-03-01

    The constants of a quartz-metal pendulum used in higher-order gravimetric networks have been determined, and a formula has been derived for the total correction for gravity field nonuniformity in measurements made with the pendulum. Nomograms constructed on the basis of these formulas are used to introduce corrections into pendulum measurements. A table was prepared giving the components of the correction for some values of the derivatives of the gravity potential from surrounding masses. Errors can be caused by building walls, the pedestal on which the instrument sits, and other factors, and these must be taken into account since they increase the normal gravity gradient. After introducing these correction components for the nonuniform gravity field, the gravity field at the measurement point is related to the instrument point coinciding with the middle of the pendulum knife blade.

  18. Correction of Misclassifications Using a Proximity-Based Estimation Method

    Directory of Open Access Journals (Sweden)

    Shmulevich Ilya

    2004-01-01

    Full Text Available An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies and the degree of improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.
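
    A minimal sketch of the sliding-window idea with a purely nominal label alphabet: each output label maximizes the summed proximity to the labels in the window. The proximity matrix here is invented for illustration; in the paper it comes from perception studies or is learned with genetic algorithms.

    ```python
    import numpy as np

    P = np.array([[1.0, 0.6, 0.1],     # proximity between classes 0, 1, 2
                  [0.6, 1.0, 0.2],     # (symmetric, 1 on the diagonal)
                  [0.1, 0.2, 1.0]])

    def correct_labels(labels, half_width=2):
        labels = np.asarray(labels)
        out = labels.copy()
        for i in range(len(labels)):
            lo, hi = max(0, i - half_width), min(len(labels), i + half_width + 1)
            window = labels[lo:hi]
            scores = P[:, window].sum(axis=1)   # support for each candidate class
            out[i] = np.argmax(scores)
        return out

    noisy = [0, 0, 2, 0, 0, 1, 1, 1, 0, 1, 1]   # the isolated 2 and 0 look like errors
    print(correct_labels(noisy))
    ```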

  19. GPU accelerated manifold correction method for spinning compact binaries

    Science.gov (United States)

    Ran, Chong-xi; Liu, Song; Zhong, Shuang-ying

    2018-04-01

    The graphics processing unit (GPU) acceleration of the manifold correction algorithm, based on the compute unified device architecture (CUDA) technology, is designed to simulate the dynamic evolution of the post-Newtonian (PN) Hamiltonian formulation of spinning compact binaries. The feasibility and efficiency of parallel computation on the GPU have been confirmed by various numerical experiments. The numerical comparisons show that the accuracy of the manifold correction method executed on the GPU agrees well with that of the same codes executed on the CPU alone. The acceleration achieved on the GPU can be increased enormously through the use of shared memory and register optimization techniques without additional hardware costs: the speedup is nearly 13 times compared with the codes executed on the CPU for a phase-space scan (including 314 × 314 orbits). In addition, the GPU-accelerated manifold correction method is used to numerically study how the dynamics are affected by the spin-induced quadrupole-monopole interaction for black hole binary systems.

  20. Equation-Method for correcting clipping errors in OFDM signals.

    Science.gov (United States)

    Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry

    2016-01-01

    Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations, resulting in high peak-to-average power ratios, which can make an OFDM transmitter susceptible to non-linear distortion produced by its high-power amplifiers (HPAs). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA, but this causes in-band distortion and introduces bit errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the fast Fourier transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can correct all clipping errors over a wide range of clipping thresholds. We show that numerical instability can be avoided, and that new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.

  1. Statistical evaluation of unobserved nonuniform corrosion in A216 steel

    International Nuclear Information System (INIS)

    Pulsipher, B.A.

    1988-07-01

    Tests designed to promote nonuniform corrosion have been conducted at PNL on A216 steel. In all of the tests performed to date, there have been no manifestations of significant nonuniform corrosion. Although this may suggest that nonuniform corrosion in A216 steel may not be a significant problem in the nuclear waste repository, a question arises as to whether enough tests have been conducted for a sufficient length of time to rule out nonuniform corrosion of A216 steel. In this report, a method for determining the required number of tests is examined for two of the mechanisms of nonuniform corrosion: pitting and crevice corrosion
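
    The report's statistical details are not given in the abstract; one standard zero-failure (binomial) argument of the following form is a way such a required test count can be computed, shown here purely as a hedged illustration.

    ```python
    import math

    # If each independent test would reveal pitting with probability p when
    # pitting is real, observing zero occurrences in n tests rules out p at
    # confidence 1 - alpha once (1 - p)^n <= alpha.

    def tests_required(p, alpha=0.05):
        return math.ceil(math.log(alpha) / math.log(1.0 - p))

    print(tests_required(0.10))  # 29 clean tests -> p < 0.10 at 95% confidence
    ```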

  2. Attenuation correction for SPECT

    International Nuclear Information System (INIS)

    Hosoba, Minoru

    1986-01-01

    Attenuation correction is required for the reconstruction of quantitative SPECT images. A new method for detecting body contours, which are important for the correction of tissue attenuation, is presented, and the effect of body contours detected by the newly developed method on the reconstructed images was evaluated using various techniques for attenuation correction. The count rates in the specified region of interest in the phantom image obtained by the Radial Post Correction (RPC) method, the Weighted Back Projection (WBP) method, and Chang's method were strongly affected by the accuracy of the contours, as compared to those obtained by Sorenson's method. To evaluate the effect of non-uniform attenuators on cardiac SPECT, computer simulation experiments were performed using two types of models: the uniform attenuator model (UAM) and the non-uniform attenuator model (NUAM). The RPC method showed the lowest relative percent error (%ERROR) in the UAM (11%). However, a 20 to 30 percent increase in %ERROR was observed for the NUAM reconstructed with the RPC, WBP, and Chang's methods. Introducing an average attenuation coefficient (0.12/cm for Tc-99m and 0.14/cm for Tl-201) in the RPC method decreased the %ERROR to the levels of the UAM. Finally, a comparison between images obtained by 180° and 360° scans and reconstructed with the RPC method showed that the distortion of the contour of the simulated ventricles in the 180° scan was 15% higher than in the 360° scan. (Namekawa, K.)
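
    A sketch of first-order Chang attenuation correction, one of the methods compared above: each reconstructed pixel is multiplied by the inverse of its attenuation factor averaged over projection angles. The geometry, ray-marching, and mu value are illustrative, and the nested loops are deliberately simple rather than fast; the body contour defines where attenuation accumulates.

    ```python
    import numpy as np

    def chang_correction(recon, body_mask, mu=0.12, pixel_cm=0.4, n_angles=32):
        ny, nx = recon.shape
        factors = np.zeros_like(recon, dtype=float)
        for theta in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
            dy, dx = np.sin(theta), np.cos(theta)
            att = np.zeros_like(factors)
            for iy in range(ny):
                for ix in range(nx):
                    if not body_mask[iy, ix]:
                        continue
                    # march from the pixel to the body edge, summing mu * path
                    y, x, depth = float(iy), float(ix), 0.0
                    while 0 <= y < ny and 0 <= x < nx and body_mask[int(y), int(x)]:
                        depth += pixel_cm
                        y += dy; x += dx
                    att[iy, ix] = np.exp(-mu * depth)
            factors += att
        factors /= n_angles
        return np.where(body_mask, recon / np.maximum(factors, 1e-6), recon)
    ```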

  3. Method for measuring multiple scattering corrections between liquid scintillators

    Energy Technology Data Exchange (ETDEWEB)

    Verbeke, J.M., E-mail: verbeke2@llnl.gov; Glenn, A.M., E-mail: glenn22@llnl.gov; Keefer, G.J., E-mail: keefer1@llnl.gov; Wurtz, R.E., E-mail: wurtz1@llnl.gov

    2016-07-21

    A time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source at different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons that scatter multiple times. With the help of a correction to Feynman's point-model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.

  4. A Method To Modify/Correct The Performance Of Amplifiers

    Directory of Open Access Journals (Sweden)

    Rohith Krishnan R

    2015-01-01

    Full Text Available Abstract The actual response of an amplifier may vary with the replacement of aged or damaged components, and this method compensates for that problem. Here we use the op-amp fixator as the design tool. The tool helps us to isolate the selected circuit component from the rest of the circuit, adjust its operating point to correct the performance deviations, and modify the circuit without changing its other parts. A method to modify/correct the performance of amplifiers by properly redesigning the circuit is presented in this paper.

  5. New method in obtaining correction factor of power confirming

    International Nuclear Information System (INIS)

    Deng Yongjun; Li Rundong; Liu Yongkang; Zhou Wei

    2010-01-01

    Westcott theory is the most widely used method in reactor power calibration and is particularly suited to research reactors. However, the method is laborious because it needs many correction parameters that rely on empirical formulas specific to the reactor type. Here, the incidence coefficient between foil activity and reactor power was obtained by Monte Carlo calculation, carried out with a precise description of the reactor core and the foil arrangement positions in the MCNP input card. The reactor power is thus determined from the core neutron fluence profile and the activity of the foil placed at the position used for normalization. This new method is simpler, more flexible, and more accurate than Westcott theory. In this paper, the theoretical results for SPRR-300 obtained with the new method are compared with experimental results, which verifies the feasibility of the new method. (authors)

  6. Evaluation of six scatter correction methods based on spectral analysis in 99m Tc SPECT imaging using SIMIND Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    Mahsa Noori Asl

    2013-01-01

    Full Text Available Compton-scattered photons included within the photopeak pulse-height window degrade SPECT images both qualitatively and quantitatively. The purpose of this study is to evaluate and compare six scatter correction methods based on setting energy windows in the 99mTc spectrum. The SIMIND Monte Carlo simulation is used to generate projection images of a cold-sphere, hot-background phantom. For the evaluation of the different scatter correction methods, three assessment criteria are considered: image contrast, signal-to-noise ratio (SNR), and relative noise of the background (RNB). Except for the dual-photopeak window (DPW) method, the image contrast of the five cold spheres is improved in the range of 2.7-26%. Among the methods considered, two show a nonuniform correction performance. The RNB for the scatter correction methods ranges from a minimum of 0.03 for the DPW method to a maximum of 0.0727 for the three-energy-window (TEW) method using the trapezoidal approximation. The TEW method using the triangular approximation, because of its ease of implementation, good improvement of the image contrast and SNR for the five cold spheres, and low noise level, is proposed as the most appropriate correction method.
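
    The standard TEW estimate, shown as a minimal sketch: counts in two narrow windows flanking the photopeak approximate the scatter inside it, and the triangular variant favored above simply drops the upper window. The window widths below are typical choices, not necessarily the paper's.

    ```python
    def tew_scatter(c_left, c_right, w_sub=2.0, w_peak=20.0):
        """Scatter counts in the photopeak window (trapezoidal rule); the
        triangular approximation sets c_right = 0."""
        return (c_left / w_sub + c_right / w_sub) * w_peak / 2.0

    peak, left, right = 10000.0, 300.0, 50.0    # counts; widths in keV
    primary_trap = peak - tew_scatter(left, right)  # trapezoidal correction
    primary_tri = peak - tew_scatter(left, 0.0)     # triangular correction
    ```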

  7. Static Scene Statistical Non-Uniformity Correction

    Science.gov (United States)

    2015-03-01

  8. Auto correct method of AD converters precision based on ethernet

    Directory of Open Access Journals (Sweden)

    NI Jifeng

    2013-10-01

    Full Text Available Ideal AD conversion would be a straight line through zero in a Cartesian coordinate system. In practical engineering, however, the signal-processing circuit, chip performance, and other factors affect the accuracy of the conversion, so a linear fitting method is adopted to improve it. An automatic correction of AD converter precision based on Ethernet, implemented in software and hardware, is presented. With a mouse click, the linearity correction of all AD converter channels is completed automatically, and the error, SNR, and ENOB (effective number of bits) are calculated. The coefficients of the linear correction are then loaded into the EEPROM of the onboard AD converter card. Compared with traditional methods, this method is more convenient, accurate, and efficient, and has broad application prospects.
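
    A sketch of the per-channel linear correction: fit code versus known input by least squares, then store gain and offset. The EEPROM write is represented by a plain dict, and the ADC model is synthetic.

    ```python
    import numpy as np

    def calibrate_channel(v_ref, codes):
        """Least-squares gain/offset so that gain*code + offset ≈ true voltage."""
        gain, offset = np.polyfit(codes, v_ref, deg=1)
        residual = v_ref - (gain * codes + offset)
        return gain, offset, residual.std()

    rng = np.random.default_rng(6)
    v_ref = np.linspace(0, 5, 50)                       # known test voltages
    codes = (v_ref / 5 * 4095) * 1.01 + 7 + rng.normal(0, 0.5, 50)  # imperfect ADC

    gain, offset, err = calibrate_channel(v_ref, codes)
    eeprom = {"ch0": {"gain": gain, "offset": offset}}  # stand-in for EEPROM store
    corrected_volts = gain * codes + offset
    ```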

  9. Correction

    CERN Multimedia

    2002-01-01

    Tile Calorimeter modules stored at CERN. The larger modules belong to the Barrel, whereas the smaller ones are for the two Extended Barrels. (The article was about the completion of the 64 modules for one of the latter.) The photo on the first page of the Bulletin n°26/2002, from 24 July 2002, illustrating the article «The ATLAS Tile Calorimeter gets into shape» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.

  10. Filtering of SPECT reconstructions made using Bellini's attenuation correction method

    International Nuclear Information System (INIS)

    Glick, S.J.; Penney, B.C.; King, M.A.

    1991-01-01

    This paper evaluates a three-dimensional (3D) Wiener filter which is used to restore SPECT reconstructions made using Bellini's method of attenuation correction. Its performance is compared to that of several pre-reconstruction filters: the one-dimensional (1D) Butterworth, the two-dimensional (2D) Butterworth, and a 2D Wiener filter. A simulation study is used to compare the four filtering methods. An approximation to a clinical liver-spleen study was used as the source distribution, and an algorithm which accounts for the depth- and distance-dependent blurring in SPECT was used to compute noise-free projections. To study the effect of filtering method on tumor detection accuracy, a 2 cm diameter, cool spherical tumor (40% contrast) was placed at a known but random location within the liver. Projection sets for ten tumor locations were computed, and five noise realizations of each set were obtained by introducing Poisson noise. The simulated projections were either filtered with the 1D or 2D Butterworth or the 2D Wiener and then reconstructed using Bellini's intrinsic attenuation correction, or reconstructed first and then filtered with the 3D Wiener. The criteria used for comparison were normalized mean square error (NMSE), cold-spot contrast, and accuracy of tumor detection with an automated numerical method. Results indicate that restorations obtained with 3D Wiener filtering yielded significantly higher lesion contrast and lower NMSE values compared to the other methods of processing. The Wiener restoration filters and the 2D Butterworth all provided similar measures of detectability, noticeably higher than that obtained with 1D Butterworth smoothing.
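
    A frequency-domain Wiener restoration of the general kind discussed above, as a minimal 2D sketch: W = conj(H) / (|H|² + N/S) applied per frequency. The Gaussian PSF and the noise-to-signal ratio are synthetic placeholders, not the paper's SPECT model.

    ```python
    import numpy as np

    def wiener_restore(blurred, psf, nsr=0.01):
        H = np.fft.fftn(np.fft.ifftshift(psf), s=blurred.shape)
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)
        return np.real(np.fft.ifftn(np.fft.fftn(blurred) * W))

    # Gaussian PSF standing in for the system blur.
    n = 64
    y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
    psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2)); psf /= psf.sum()

    truth = np.zeros((n, n)); truth[20:40, 25:45] = 1.0        # toy source
    blurred = np.real(np.fft.ifftn(np.fft.fftn(truth) *
                                   np.fft.fftn(np.fft.ifftshift(psf))))
    restored = wiener_restore(blurred, psf)
    ```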

  11. Empirical method for matrix effects correction in liquid samples

    International Nuclear Information System (INIS)

    Vigoda de Leyt, Dora; Vazquez, Cristina

    1987-01-01

    A simple method for the determination of Cr, Ni, and Mo in stainless steels is presented. In order to minimize matrix effects, a liquid system for dissolving stainless steel chips has been developed. Pure-element solutions were used as standards, avoiding both the preparation of synthetic solutions containing all the elements of the steel and mathematical corrections. The result is a simple chemical operation which simplifies the method of analysis. The analysis of variance of the results obtained with steel samples shows that the three elements may be determined by comparison with the analytical curves obtained with the pure elements if the same parameters are used in the calibration curves. The accuracy and precision were checked against other techniques using the British Chemical Standards of the Bureau of Analysed Samples Ltd. (England). (M.E.L.) [es

  12. Correction

    Directory of Open Access Journals (Sweden)

    2012-01-01

    Full Text Available Regarding Gorelik, G., & Shackelford, T. K. (2011). Human sexual conflict from molecules to culture. Evolutionary Psychology, 9, 564–587: The authors wish to correct an omission in citation to the existing literature. In the final paragraph on p. 570, we neglected to cite Burch and Gallup (2006) [Burch, R. L., & Gallup, G. G., Jr. (2006). The psychobiology of human semen. In S. M. Platek & T. K. Shackelford (Eds.), Female infidelity and paternal uncertainty (pp. 141–172). New York: Cambridge University Press.]. Burch and Gallup (2006) reviewed the relevant literature on FSH and LH discussed in this paragraph, and should have been cited accordingly. In addition, Burch and Gallup (2006) should have been cited as the originators of the hypothesis regarding the role of FSH and LH in the semen of rapists. The authors apologize for this oversight.

  13. Correction

    CERN Multimedia

    2002-01-01

    The photo on the second page of the Bulletin n°48/2002, from 25 November 2002, illustrating the article «Spanish Visit to CERN» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.   The Spanish delegation, accompanied by Spanish scientists at CERN, also visited the LHC superconducting magnet test hall (photo). From left to right: Felix Rodriguez Mateos of CERN LHC Division, Josep Piqué i Camps, Spanish Minister of Science and Technology, César Dopazo, Director-General of CIEMAT (Spanish Research Centre for Energy, Environment and Technology), Juan Antonio Rubio, ETT Division Leader at CERN, Manuel Aguilar-Benitez, Spanish Delegate to Council, Manuel Delfino, IT Division Leader at CERN, and Gonzalo León, Secretary-General of Scientific Policy to the Minister.

  14. Correction

    Directory of Open Access Journals (Sweden)

    2014-01-01

    Full Text Available Regarding Tagler, M. J., and Jeffers, H. M. (2013). Sex differences in attitudes toward partner infidelity. Evolutionary Psychology, 11, 821–832: The authors wish to correct values in the originally published manuscript. Specifically, incorrect 95% confidence intervals around the Cohen's d values were reported on page 826 of the manuscript, where we reported the within-sex simple effects for the significant Participant Sex × Infidelity Type interaction (first paragraph), and for attitudes toward partner infidelity (second paragraph). Corrected values are presented in bold below. The authors would like to thank Dr. Bernard Beins at Ithaca College for bringing these errors to our attention. Men rated sexual infidelity significantly more distressing (M = 4.69, SD = 0.74) than they rated emotional infidelity (M = 4.32, SD = 0.92), F(1, 322) = 23.96, p < .001, d = 0.44, 95% CI [0.23, 0.65], but there was little difference between women's ratings of sexual (M = 4.80, SD = 0.48) and emotional infidelity (M = 4.76, SD = 0.57), F(1, 322) = 0.48, p = .29, d = 0.08, 95% CI [−0.10, 0.26]. As expected, men rated sexual infidelity (M = 1.44, SD = 0.70) more negatively than they rated emotional infidelity (M = 2.66, SD = 1.37), F(1, 322) = 120.00, p < .001, d = 1.12, 95% CI [0.85, 1.39]. Although women also rated sexual infidelity (M = 1.40, SD = 0.62) more negatively than they rated emotional infidelity (M = 2.09, SD = 1.10), this difference was not as large and thus was in the evolutionary-theory-supportive direction, F(1, 322) = 72.03, p < .001, d = 0.77, 95% CI [0.60, 0.94].

  15. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting.

    Directory of Open Access Journals (Sweden)

    Van Than Dung

    Full Text Available B-spline functions are widely used in many industrial applications such as computer graphics representation, computer-aided design, computer-aided manufacturing, and computer numerical control. Recently, demands have arisen, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps, or turning points in the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the lowest possible computational cost. This paper presents a new strategy for fitting any form of curve by B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data are split using a bisecting method with a predetermined allowable error to obtain coarse knots. In the second, the knots are optimized, for both location and continuity level, by employing a non-linear least squares technique. The B-spline function is therefore obtained by solving an ordinary least squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and by deterministic parametric functions. The paper also benchmarks the proposed method against existing methods in the literature. The proposed method is shown to reconstruct B-spline functions from sampled data within acceptable tolerance and to apply to curves ranging from smooth to discontinuous. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
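
    A sketch of the coarse first step only (the paper's second, knot-optimization step is omitted): recursively bisect the data until a low-degree fit on each segment meets the error budget, then fit one global least-squares cubic spline with the collected interior knots. Tolerances and the test curve are illustrative.

    ```python
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    def coarse_knots(x, y, lo, hi, tol, knots):
        seg_x, seg_y = x[lo:hi], y[lo:hi]
        c = np.polyfit(seg_x, seg_y, deg=min(3, len(seg_x) - 1))
        err = np.abs(np.polyval(c, seg_x) - seg_y).max()
        if err > tol and hi - lo > 8:
            mid = (lo + hi) // 2
            coarse_knots(x, y, lo, mid, tol, knots)
            knots.append(x[mid])
            coarse_knots(x, y, mid, hi, tol, knots)

    x = np.linspace(0, 1, 400)
    y = np.where(x < 0.5, np.sin(6 * x), np.sin(6 * x) + 0.5)   # jump at x = 0.5
    knots = []
    coarse_knots(x, y, 0, len(x), tol=0.05, knots=knots)
    spline = LSQUnivariateSpline(x, y, sorted(knots), k=3)      # global LSQ fit
    ```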

  16. Gynecomastia: the horizontal ellipse method for its correction.

    Science.gov (United States)

    Gheita, Alaa

    2008-09-01

    Gynecomastia is an extremely disturbing deformity for males, especially when it occurs in young subjects. Such subjects generally have no hormonal anomalies, and thus either liposuction or surgical intervention, depending on the type and consistency of the breast, is required for treatment. If there is slight hypertrophy alone with no ptosis, subcutaneous mastectomy is usually sufficient. However, when hypertrophy and/or ptosis are present, corrective surgery on the skin and breast is mandatory to obtain a good cosmetic result. Most procedures suggested for reduction of the male breast are derived from reduction mammaplasty methods used for females. They have some disadvantages, mainly the multiple scars, which remain apparent in males, an unusual shape, and a lack of symmetry in the size of the breasts and/or the nipple position. The author presents a new, simple method that has proven superior to previous methods. It consists of a horizontal excision ellipse of the breast's redundant skin and deep excess tissue, and a superior pedicle flap carrying the areola-nipple complex to its new site on the chest wall. The method yields excellent shape, symmetry, and minimal scars. This new method for treating gynecomastia is described in detail, its early and late operative results are shown, and its advantages are discussed.

  17. Intelligent error correction method applied on an active pixel sensor based star tracker

    Science.gov (United States)

    Schmidt, Uwe

    2005-10-01

    Star trackers are opto-electronic sensors used on board satellites for autonomous inertial attitude determination. In recent years star trackers have become more and more important among the attitude and orbit control system (AOCS) sensors. High-performance star trackers have to date been based on charge-coupled-device (CCD) optical camera heads. The active pixel sensor (APS) technology, introduced in the early 1990s, now allows the beneficial replacement of CCD detectors by APS detectors with respect to performance, reliability, power, mass, and cost. The company's star tracker heritage began in the early 1980s with the launch of the world's first fully autonomous star tracker system, ASTRO1, to the Russian MIR space station. Jena-Optronik recently developed an active-pixel-sensor-based autonomous star tracker, "ASTRO APS", as successor to the CCD-based star tracker product series ASTRO1, ASTRO5, ASTRO10, and ASTRO15. Key features of the APS detector technology are true x-y random pixel access, multiple-window readout, and on-chip signal processing including analogue-to-digital conversion. These features enable robust star tracking at high slew rates and under adverse conditions such as stray light and solar-flare-induced single-event upsets. A special algorithm has been developed to manage the typical APS detector error contributors, such as fixed pattern noise (FPN), dark signal non-uniformity (DSNU), and white spots. The algorithm works fully autonomously and adapts automatically to, e.g., increasing DSNU and newly appearing white spots, without ground maintenance or re-calibration. In contrast to conventional correction methods, the algorithm does not need calibration-data memory such as full-image-sized calibration data sets. This management of the typical APS detector error contributors is a key element in the design of star trackers for long-term satellite applications.
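
    A generic sketch (not Jena-Optronik's algorithm) of calibration-free FPN/DSNU management: maintain a running per-pixel dark-level estimate from scene-free frames, flag outliers as white spots, and patch them with a local value. All parameters and the class name are invented.

    ```python
    import numpy as np

    class ApsCorrector:
        def __init__(self, shape, alpha=0.02, spot_sigma=6.0):
            self.dark = np.zeros(shape)      # running DSNU/offset estimate
            self.alpha = alpha               # adaptation rate
            self.spot_sigma = spot_sigma     # white-spot detection threshold

        def update_dark(self, dark_frame):
            """Call with frames taken while no stars are in the window."""
            self.dark += self.alpha * (dark_frame - self.dark)

        def correct(self, frame):
            out = frame - self.dark                       # remove fixed pattern
            spots = self.dark > (self.dark.mean() +
                                 self.spot_sigma * self.dark.std())
            if spots.any():                               # patch white spots
                out[spots] = np.median(out)
            return out

    rng = np.random.default_rng(7)
    cam = ApsCorrector((64, 64))
    for _ in range(100):                    # slowly drifting DSNU learned online
        cam.update_dark(10 + rng.normal(0, 1, (64, 64)))
    clean = cam.correct(10 + rng.normal(0, 1, (64, 64)))
    ```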

  18. Biogeosystem Technique as a method to correct the climate

    Science.gov (United States)

    Kalinitchenko, Valery; Batukaev, Abdulmalik; Batukaev, Magomed; Minkina, Tatiana

    2017-04-01

    can be produced; the less energy is consumed for climate correction, the better. The proposed algorithm was never discussed before because most of its ingredients were unenforceable. The possibility to execute the algorithm now exists in the framework of our new scientific-technical branch, Biogeosystem Technique (BGT*). BGT* is a transcendental (non-imitating natural processes) approach to soil processing, the regulation of energy, matter and water fluxes, and the biological productivity of the biosphere: intra-soil machining to provide a new, highly productive dispersed soil system; intra-soil pulse continuous-discrete watering of plants to reduce the transpiration rate and the water consumption of plants by a factor of 5-20; and intra-soil, environmentally safe return of matter during intra-soil milling processing and/or intra-soil pulse continuous-discrete watering of plants with nutrition. The following become possible: waste management; reducing the flow of nutrients to water systems; transformation of carbon and other organic and mineral substances in the soil into plant nutrition elements; less degradation of biological matter to greenhouse gases; increased biological sequestration of carbon dioxide in terrestrial photosynthesis; oxidation of methane and hydrogen sulfide by fresh, photosynthetically ionized, biologically active oxygen; and expansion of the active terrestrial site of the biosphere. A high biological product output of the biosphere will be gained. BGT* robotic systems are of low cost, energy and material consumption. By BGT* methods the uncertainties of climate and biosphere will be reduced. Key words: Biogeosystem Technique, method to correct, climate

  19. Diagnostics and correction of disregulation states by physical methods

    OpenAIRE

    Gorsha, O. V.; Gorsha, V. I.

    2017-01-01

    Nicolaus Copernicus University, Toruń, Poland; Ukrainian Research Institute for Medicine of Transport, Odesa, Ukraine. Gorsha O. V., Gorsha V. I. Diagnostics and correction of disregulation states by physical methods. Toruń and Odesa, 2017.

  20. Evaluation of a scattering correction method for high energy tomography

    Science.gov (United States)

    Tisseur, David; Bhatia, Navnina; Estre, Nicolas; Berge, Léonie; Eck, Daniel; Payan, Emmanuel

    2018-01-01

    One of the main drawbacks of cone beam computed tomography (CBCT) is the contribution of photons scattered by the object and by the detector. Scattered photons are deflected from their original path after their interaction with the object, and their additional contribution results in increased measured intensities, since the scattered intensity simply adds to the transmitted intensity. This is seen as an overestimation of the measured intensity, and thus an underestimation of absorption, producing artifacts such as cupping, shading, and streaks in the reconstructed images. Moreover, the scattered radiation biases quantitative tomographic reconstruction (for example, atomic number and volumetric mass measurement with the dual-energy technique). The effect can be significant and difficult to manage in the MeV energy range with large objects, owing to the higher scatter-to-primary ratio (SPR). Additionally, incident high-energy photons scattered by the Compton effect are more forward-directed and hence more likely to reach the detector, and at MeV energies the contribution of photons produced by pair production and bremsstrahlung also becomes important. We propose an evaluation of a scatter correction technique based on the method named Scatter Kernel Superposition (SKS). The algorithm uses continuously thickness-adapted kernels: analytical parameterizations of the scatter kernels are derived in terms of material thickness, forming continuously thickness-adapted kernel maps used to correct the projections. This approach has proved efficient in producing better sampling of the kernels with respect to the object thickness. The technique is applicable over a wide range of imaging conditions and, since no extra hardware is required, offers a major advantage in such applications.
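
    A highly simplified scatter-kernel-superposition sketch: each projection pixel spreads scatter with a Gaussian kernel whose amplitude and width vary with the local object thickness. The parameterization is invented; real SKS kernels come from Monte Carlo fits.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def sks_scatter(primary, thickness_cm, pixel_cm=0.1):
        scatter = np.zeros_like(primary)
        # group pixels into thickness bins and convolve bin by bin
        for t in np.unique(np.round(thickness_cm)):
            mask = np.round(thickness_cm) == t
            amp = 0.02 * t                    # assumed: more material, more scatter
            sig = max((1.0 + 0.3 * t) / pixel_cm, 1.0)
            k = np.arange(-50, 51)
            kx, ky = np.meshgrid(k, k)
            kern = np.exp(-(kx**2 + ky**2) / (2 * sig**2))
            kern *= amp / kern.sum()
            scatter += fftconvolve(primary * mask, kern, mode="same")
        return scatter

    primary = np.full((128, 128), 1000.0)
    thickness = np.zeros((128, 128)); thickness[32:96, 32:96] = 10.0  # 10 cm block
    corrected = primary - sks_scatter(primary, thickness)
    ```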

  1. Method and system of doppler correction for mobile communications systems

    Science.gov (United States)

    Georghiades, Costas N. (Inventor); Spasojevic, Predrag (Inventor)

    1999-01-01

    Doppler correction system and method comprising receiving a Doppler effected signal comprising a preamble signal (32). A delayed preamble signal (48) may be generated based on the preamble signal (32). The preamble signal (32) may be multiplied by the delayed preamble signal (48) to generate an in-phase preamble signal (60). The in-phase preamble signal (60) may be filtered to generate a substantially constant in-phase preamble signal (62). A plurality of samples of the substantially constant in-phase preamble signal (62) may be accumulated. A phase-shifted signal (76) may also be generated based on the preamble signal (32). The phase-shifted signal (76) may be multiplied by the delayed preamble signal (48) to generate an out-of-phase preamble signal (80). The out-of-phase preamble signal (80) may be filtered to generate a substantially constant out-of-phase preamble signal (82). A plurality of samples of the substantially constant out-of-phase signal (82) may be accumulated. A sum of the in-phase preamble samples and a sum of the out-of-phase preamble samples may be normalized relative to each other to generate an in-phase Doppler estimator (92) and an out-of-phase Doppler estimator (94).
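
    A complex-baseband rendering of the delay-and-correlate estimator the patent describes: multiply the preamble by a delayed copy (in-phase branch) and by a 90°-shifted delayed copy (quadrature branch), accumulate both, and take the angle. Sample rate, delay, and offset values are illustrative.

    ```python
    import numpy as np

    fs, f_off, D = 48_000.0, 123.0, 64          # sample rate, Doppler shift, delay
    n = np.arange(4096)
    preamble = np.exp(2j * np.pi * f_off * n / fs)   # received tone-like preamble

    prod = preamble[D:] * np.conj(preamble[:-D])     # delayed-conjugate product
    i_acc = prod.real.sum()                          # in-phase accumulator
    q_acc = prod.imag.sum()                          # quadrature accumulator
    f_hat = np.arctan2(q_acc, i_acc) * fs / (2 * np.pi * D)
    print(f_hat)   # ≈ 123 Hz (unambiguous while |f_off| < fs / (2 * D))
    ```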

  2. Casimir energy of a nonuniform string

    Science.gov (United States)

    Hadasz, L.; Lambiase, G.; Nesterenko, V. V.

    2000-07-01

    The Casimir energy of a nonuniform string built up from two pieces with different speeds of sound is calculated. A standard procedure of subtracting the energy of an infinite uniform string is applied, the subtraction being interpreted as the renormalization of the string tension. It is shown that in the case of a homogeneous string this method is completely equivalent to zeta renormalization.

  3. Non-uniform tube representation of proteins

    DEFF Research Database (Denmark)

    Hansen, Mikael Sonne

    Treating the full protein structure is often neither computationally nor physically feasible. Instead one is forced to consider various reduced models capturing the properties of interest. Previous work has used tubular neighborhoods of the C-alpha backbone. However, assigning a unique radius might not correctly capture volume exclusion, which is of crucial importance when trying to understand a protein's 3D structure. We propose a new reduced model treating the protein as a non-uniform tube with a radius reflecting the positions of atoms. The tube representation is well suited given X-ray crystallographic resolutions of ~3 Å, while a varying radius accounts for the different sizes of side chains. Such a non-uniform tube better captures the protein geometry and has numerous applications in structural/computational biology, from the classification of protein structures to sequence-structure prediction.

  4. Gamma camera correction system and method for using the same

    International Nuclear Information System (INIS)

    Inbar, D.; Gafni, G.; Grimberg, E.; Bialick, K.; Koren, J.

    1986-01-01

    A gamma camera is described which consists of: (a) a detector head that includes photodetectors for producing output signals in response to radiation stimuli which are emitted by a radiation field and which interact with the detector head and produce an event; (b) signal processing circuitry responsive to the output signals of the photodetectors for producing a sum signal that is a measure of the total energy of the event; (c) an energy discriminator having a relatively wide window for comparison with the sum signal; (d) the signal processing circuitry including coordinate computation circuitry for operating on the output signals, and calculating an X,Y coordinate of an event when the sum signal lies within the window of the energy discriminator; (e) an energy correction table containing spatially dependent energy windows for producing a validation signal if the total energy of an event lies within the window associated with the X,Y coordinates of the event; (f) the signal processing circuitry including a dislocation correction table containing spatially dependent correction factors for converting the X,Y coordinates of an event to relocated coordinates in accordance with correction factors determined by the X,Y coordinates; (g) a digital memory for storing a map of the radiation field; and (h) means for recording an event at its relocated coordinates in the memory if the energy correction table produces a validation signal
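
    A toy event pipeline following the claim's structure: spatially dependent energy windows validate an event at (x, y), and a dislocation table then relocates it before it is histogrammed into the field map. All tables and values are synthetic stand-ins for the patent's calibration data.

    ```python
    import numpy as np

    N = 64
    lo_win = np.full((N, N), 120.0)          # per-position lower energy bound (keV)
    hi_win = np.full((N, N), 160.0)          # per-position upper energy bound
    shift_x = np.zeros((N, N)); shift_y = np.zeros((N, N))   # dislocation tables
    image = np.zeros((N, N))                 # digital memory for the field map

    def record_event(x, y, energy):
        ix, iy = int(x), int(y)
        if not (lo_win[iy, ix] <= energy <= hi_win[iy, ix]):
            return False                      # energy-correction table veto
        rx = np.clip(ix + int(shift_x[iy, ix]), 0, N - 1)
        ry = np.clip(iy + int(shift_y[iy, ix]), 0, N - 1)
        image[ry, rx] += 1                    # record at relocated coordinates
        return True

    record_event(31.7, 40.2, 141.0)           # Tc-99m-like event, accepted
    record_event(31.7, 40.2, 90.0)            # scattered photon, rejected
    ```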

  5. Effect of methods of myopia correction on visual acuity, contrast sensitivity, and depth of focus

    NARCIS (Netherlands)

    Nio, YK; Jansonius, NM; Wijdh, RHJ; Beekhuis, WH; Worst, JGF; Noorby, S; Kooijman, AC

    Purpose: To psychophysically measure spherical and irregular aberrations in patients with various types of myopia correction. Setting: Laboratory of Experimental Ophthalmology, University of Groningen, Groningen, The Netherlands. Methods: Three groups of patients with low myopia correction

  6. Peculiarities of application the method of autogenic training in the correction of eating behavior

    OpenAIRE

    Shebanova, Vitaliya

    2014-01-01

    The article presents the peculiarities of applying the method of autogenic training to the correction of eating disorders. The stages of corrective work with maladaptive eating behavior are described. The author places emphasis on the rules for composing self-directed formulas of intention.

  7. Methods and apparatus for environmental correction of thermal neutron logs

    International Nuclear Information System (INIS)

    Preeg, W.E.; Scott, H.D.

    1983-01-01

    An on-line environmentally corrected measurement of the thermal neutron decay time (tau) of an earth formation traversed by a borehole is provided in a two-detector, pulsed neutron logging tool, by measuring tau at each detector and combining the two tau measurements in accordance with a previously established empirical relationship of the general form tau = tau_F + A(tau_F + tau_N·B) + C, where tau_F and tau_N are the tau measurements at the far-spaced and near-spaced detectors, respectively, A is a correction coefficient for borehole capture cross-section effects, B is a correction coefficient for neutron diffusion effects, and C is a constant related to parameters of the logging tool. Preferred numerical values of A, B and C are disclosed, together with a relationship for more accurately adapting the A term to specific borehole conditions. (author)

  8. Semi-exact solution of elastic non-uniform thickness and density rotating disks by homotopy perturbation and Adomian's decomposition methods. Part I: Elastic solution

    International Nuclear Information System (INIS)

    Hojjati, M.H.; Jafari, S.

    2008-01-01

    In this work, two powerful analytical methods, namely the homotopy perturbation method (HPM) and Adomian's decomposition method (ADM), are introduced to obtain distributions of stresses and displacements in rotating annular elastic disks with uniform and variable thicknesses and densities. The results obtained by these methods are then compared with the verified variational iteration method (VIM) solution. He's homotopy perturbation method, which does not require a 'small parameter', has been used, and a homotopy with an embedding parameter p ∈ [0,1] is constructed. The method takes full advantage of the traditional perturbation methods and the homotopy techniques and yields very rapid convergence of the solution. Adomian's decomposition method is an iterative method which provides analytical approximate solutions in the form of an infinite power series for nonlinear equations without linearization, perturbation or discretization. The variational iteration method, on the other hand, is based on the incorporation of a general Lagrange multiplier in the construction of a correction functional for the equation. This study demonstrates the ability of these methods to solve complicated rotating disk cases that have either no exact solution or one that is difficult to find, without the need for commercial finite element analysis software. The comparison among these methods shows that although the numerical results are almost the same, HPM is much easier, more convenient and more efficient than ADM and VIM.

  9. Nonuniform quantum turbulence in superfluids

    Science.gov (United States)

    Nemirovskii, Sergey K.

    2018-04-01

    The problem of quantum turbulence in a channel with an inhomogeneous counterflow of superfluid turbulent helium is studied. The counterflow velocity V_ns,x(y) along the channel is supposed to have a parabolic profile in the transverse direction y. This setting corresponds to the recent numerical simulation by Khomenko et al. [Phys. Rev. B 91, 180504 (2015), 10.1103/PhysRevB.91.180504]. The authors reported a sophisticated behavior of the vortex-line density (VLD) L(r,t), different from L ∝ V_ns,x(y)^2, which follows from the straightforward application of the conventional Vinen theory. It is clear that the Vinen theory should be refined by taking into account transverse effects, and the way this ought to be done is the subject of active discussion in the literature. In this work, we discuss several possible mechanisms for the transverse flux of the VLD L(r,t) which should be incorporated in the standard Vinen equation to describe inhomogeneous quantum turbulence adequately. It is shown that the most effective among these mechanisms is the one related to the phase-slippage phenomenon. The use of this flux in the modernized Vinen equation corrects the situation with the unusual distribution of the vortex-line density, and satisfactorily describes the behavior of L(r,t) in both stationary and nonstationary situations. The general problem of the phenomenological Vinen theory in the case of nonuniform and nonstationary quantum turbulence is thoroughly discussed.

  10. Haldane model under nonuniform strain

    Science.gov (United States)

    Ho, Yen-Hung; Castro, Eduardo V.; Cazalilla, Miguel A.

    2017-10-01

    We study the Haldane model under strain using a tight-binding approach, and compare the obtained results with the continuum-limit approximation. As in graphene, nonuniform strain leads to a time-reversal preserving pseudomagnetic field that induces (pseudo-)Landau levels. Unlike a real magnetic field, strain lifts the degeneracy of the zeroth pseudo-Landau levels at different valleys. Moreover, for the zigzag edge under uniaxial strain, strain removes the degeneracy within the pseudo-Landau levels by inducing a tilt in their energy dispersion. The latter arises from next-to-leading order corrections to the continuum-limit Hamiltonian, which are absent for a real magnetic field. We show that, for the lowest pseudo-Landau levels in the Haldane model, the dominant contribution to the tilt is different from graphene. In addition, although strain does not strongly modify the dispersion of the edge states, their interplay with the pseudo-Landau levels is different for the armchair and zigzag ribbons. Finally, we study the effect of strain in the band structure of the Haldane model at the critical point of the topological transition, thus shedding light on the interplay between nontrivial topology and strain in quantum anomalous Hall systems.

  11. Thermoelastic analysis of non-uniform pressurized functionally graded cylinder with variable thickness using first order shear deformation theory(FSDT) and perturbation method

    Science.gov (United States)

    Khoshgoftar, M. J.; Mirzaali, M. J.; Rahimi, G. H.

    2015-11-01

    Recently, the application of functionally graded materials (FGMs) has attracted a great deal of interest. These materials are composed of various constituents with different micro-structures which can vary spatially within the FGM. Such composites with varying thickness under non-uniform pressure can be used in aerospace engineering, so their analysis is of high importance in engineering problems. A thermoelastic analysis of a functionally graded cylinder with variable thickness under non-uniform pressure is considered. First-order shear deformation theory and the total potential energy approach are applied to obtain the governing equations of the non-homogeneous cylinder. Considering the inner and outer solutions, perturbation series are applied to solve the governing equations: the outer solution holds away from the boundaries, while the more sensitive inner solution holds at the boundaries. Combining the inner and outer solutions for points near and far from the boundaries leads to a highly accurate displacement field distribution. The main aim of this paper is to show the capability of the matched asymptotic solution for different non-homogeneous cylinders with different shapes and different non-uniform pressures. The results can be used to design the optimum thickness of the cylinder and also to exploit properties such as high-temperature resistance by applying non-homogeneous material.

  12. Practical method of breast attenuation correction for cardiac SPECT

    International Nuclear Information System (INIS)

    Oliveira, Anderson de; Nogueira, Tindyua; Gutterres, Ricardo Fraga; Megueriam, Berdj Aram; Santos, Goncalo Rodrigues dos

    2007-01-01

    The breast attenuation effects on SPECT (single photon emission computed tomography) myocardial perfusion procedures have lately been the subject of continuous inquiry. The required attenuation correction factors are usually obtained by transmission analysis, making the exposure of the patient to a standard external source a routine step. However, its high cost makes this methodology not fully available to most nuclear medicine services in Brazil and abroad. To overcome this problem, a new approach is presented in this work, implementing computational models to compensate for the breast attenuation effects on the left ventricle anterior wall during myocardial perfusion scintigraphy procedures with SPECT. A neural network was employed to provide the attenuation correction indexes, based upon the following individual biotype features of the patients: mass, age, height, chest and breast thicknesses, heart size, as well as the administered activity levels. (author)

  13. Practical method of breast attenuation correction for cardiac SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Anderson de; Nogueira, Tindyua; Gutterres, Ricardo Fraga [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil). Coordenacao Geral de Instalacoes Medicas e Industriais (CGMI)]. E-mails: anderson@cnen.gov.br; tnogueira@cnen.gov.br; rguterre@cnen.gov.br; Megueriam, Berdj Aram [Instituto Nacional do Cancer (INCA), Rio de Janeiro, RJ (Brazil)]. E-mail: megueriam@hotmail.com; Santos, Goncalo Rodrigues dos [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)]. E-mail: goncalo@cnen.gov.br

    2007-07-01

    The breast attenuation effects on SPECT (single photon emission computed tomography) myocardial perfusion procedures have lately been the subject of continuous inquiry. The required attenuation correction factors are usually obtained by transmission analysis, making the exposure of the patient to a standard external source a routine step. However, its high cost makes this methodology not fully available to most nuclear medicine services in Brazil and abroad. To overcome this problem, a new approach is presented in this work, implementing computational models to compensate for the breast attenuation effects on the left ventricle anterior wall during myocardial perfusion scintigraphy procedures with SPECT. A neural network was employed to provide the attenuation correction indexes, based upon the following individual biotype features of the patients: mass, age, height, chest and breast thicknesses, heart size, as well as the administered activity levels. (author)

  14. Use of digital computers for correction of gamma method and neutron-gamma method indications

    International Nuclear Information System (INIS)

    Lakhnyuk, V.M.

    1978-01-01

    A program for the NAIRI-S computer is described, intended to account for and eliminate the effects of secondary processes when interpreting gamma and neutron-gamma logging readings. With slight modifications, the program can also serve as a mathematical basis for standardizing logging diagrams by the method of multidimensional regression analysis and for estimating the reservoir properties of rocks

  15. Linear q-nonuniform difference equations

    International Nuclear Information System (INIS)

    Bangerezako, Gaspard

    2010-01-01

    We introduce basic concepts of q-nonuniform differentiation and integration and study linear q-nonuniform difference equations and systems, as well as their application in q-nonuniform difference linear control systems. (author)

  16. ASSESSMENT OF ATMOSPHERIC CORRECTION METHODS FOR OPTIMIZING HAZY SATELLITE IMAGERIES

    Directory of Open Access Journals (Sweden)

    Umara Firman Rizidansyah

    2015-04-01

    Full Text Available The purpose of this research is to examine the suitability of three haze correction methods with respect to the distinctness of surface objects in land cover. Considering how haze forms, the study area is divided into two regions: a rural region, assumed to be vegetation, and an urban region, assumed to be non-vegetation. Balaraja was selected as the rural region of interest and Penjaringan as the urban one. Haze was reduced using the techniques Dark Object Subtraction (DOS), Virtual Cloud Point (VCP) and Histogram Match (HM). Applying the Haze Optimized Transformation equation HOT = DN_blue·sin(θ) - DN_red·cos(θ), the main results of this research are: in the AVNIR-Rural case, VCP gives good results on band 1 while HM gives good results on bands 2, 3 and 4, so HM can be applied; in the AVNIR-Urban case, DOS gives good results on bands 1, 2 and 3 while HM gives good results on band 4, so DOS can be applied; in the Landsat-Rural case, DOS gives good results on bands 1, 2 and 6 while VCP gives good results on bands 4 and 5 and the smallest average HOT value (106.547) is obtained by VCP, so DOS and VCP can be applied; in the Landsat-Urban case, DOS gives good results on bands 1, 2 and 6 while VCP gives good results on bands 3, 4 and 5, so VCP can be applied. (From the Indonesian abstract:) The aim of this research is to test the suitability of three haze correction methods for the clarity of surface objects in vegetated and non-vegetated cover areas, with regard to removing haze from optical satellite imagery that has particular characteristics and whose haze-particle formation processes are presumed to differ. The study area is therefore divided into a rural region, assumed to be vegetated, and an urban region, assumed to be non-vegetated. The Balaraja district was selected as the rural area and the Penjaringan district as the urban one. Each location uses AVNIR-2 and Landsat
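
    As a rough illustration of the transformation named above, the HOT index can be computed per pixel as follows. This is a minimal NumPy sketch, not code from the study; the band arrays and the clear-line slope angle theta_deg are assumed inputs:

        import numpy as np

        def hot(dn_blue, dn_red, theta_deg):
            # Haze Optimized Transformation: signed distance of each pixel
            # from the clear-sky line, whose slope angle is theta_deg.
            theta = np.deg2rad(theta_deg)
            return dn_blue * np.sin(theta) - dn_red * np.cos(theta)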

  17. Correction of rotational distortion for catheter-based en face OCT and OCT angiography

    Science.gov (United States)

    Ahsen, Osman O.; Lee, Hsiang-Chieh; Giacomelli, Michael G.; Wang, Zhao; Liang, Kaicheng; Tsai, Tsung-Han; Potsaid, Benjamin; Mashimo, Hiroshi; Fujimoto, James G.

    2015-01-01

    We demonstrate a computationally efficient method for correcting the nonuniform rotational distortion (NURD) in catheter-based imaging systems to improve endoscopic en face optical coherence tomography (OCT) and OCT angiography. The method performs nonrigid registration using fiducial markers on the catheter to correct rotational speed variations. Algorithm performance is investigated with an ultrahigh-speed endoscopic OCT system and micromotor catheter. Scan nonuniformity is quantitatively characterized, and artifacts from rotational speed variations are significantly reduced. Furthermore, we present endoscopic en face OCT and OCT angiography images of human gastrointestinal tract in vivo to demonstrate the image quality improvement using the correction algorithm. PMID:25361133
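
    The registration idea can be pictured in a few lines: if the catheter carries fiducial markers whose true angular spacing is known, the angle actually swept at each A-line can be modeled piecewise-linearly between detected markers and the frame resampled onto a uniform angular grid. A minimal sketch under those assumptions (not the authors' implementation; marker_cols must be increasing):

        import numpy as np

        def correct_nurd(frame, marker_cols, marker_angles):
            # frame: rows = depth samples, columns = A-lines in acquisition order
            # marker_cols: A-line indices where fiducial markers were detected
            # marker_angles: the markers' known physical angles (radians)
            n_alines = frame.shape[1]
            cols = np.arange(n_alines)
            # estimated angle at each acquired A-line (piecewise-linear model)
            angle_of_col = np.interp(cols, marker_cols, marker_angles)
            uniform = np.linspace(angle_of_col[0], angle_of_col[-1], n_alines)
            corrected = np.zeros(frame.shape)
            for r in range(frame.shape[0]):
                corrected[r] = np.interp(uniform, angle_of_col, frame[r])
            return corrected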

  18. New methods for the correction of 31P NMR spectra in in vivo NMR spectroscopy

    International Nuclear Information System (INIS)

    Starcuk, Z.; Bartusek, K.; Starcuk, Z. jr.

    1994-01-01

    The new methods for the correction of 31 P NMR spectra in vivo NMR spectroscopy have been performed. A method for the baseline correction of the spectra which represents a combination of time-domain and frequency-domain has been discussed.The method is very fast and efficient for minimization of base line artifacts of biological tissues impact

  19. Scatter correction method with primary modulator for dual energy digital radiography: a preliminary study

    Science.gov (United States)

    Jo, Byung-Du; Lee, Young-Jin; Kim, Dae-Hong; Jeon, Pil-Hyun; Kim, Hee-Joung

    2014-03-01

    In conventional digital radiography (DR) using a dual energy subtraction technique, a significant fraction of the detected photons are scattered within the body, resulting in the scatter component. Scattered radiation can significantly deteriorate image quality in diagnostic X-ray imaging systems. Various methods of scatter correction, including both measurement and non-measurement-based methods have been proposed in the past. Both methods can reduce scatter artifacts in images. However, non-measurement-based methods require a homogeneous object and have insufficient scatter component correction. Therefore, we employed a measurement-based method to correct for the scatter component of inhomogeneous objects from dual energy DR (DEDR) images. We performed a simulation study using a Monte Carlo simulation with a primary modulator, which is a measurement-based method for the DEDR system. The primary modulator, which has a checkerboard pattern, was used to modulate primary radiation. Cylindrical phantoms of variable size were used to quantify imaging performance. For scatter estimation, we used Discrete Fourier Transform filtering. The primary modulation method was evaluated using a cylindrical phantom in the DEDR system. The scatter components were accurately removed using a primary modulator. When the results acquired with scatter correction and without correction were compared, the average contrast-to-noise ratio (CNR) with the correction was 1.35 times higher than that obtained without correction, and the average root mean square error (RMSE) with the correction was 38.00% better than that without correction. In the subtraction study, the average CNR with correction was 2.04 (aluminum subtraction) and 1.38 (polymethyl methacrylate (PMMA) subtraction) times higher than that obtained without the correction. The analysis demonstrated the accuracy of scatter correction and the improvement of image quality using a primary modulator and showed the feasibility of
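
    The separation step can be pictured as follows: the checkerboard modulator shifts part of the primary signal to high spatial frequencies, while scatter remains essentially low-frequency, so a low-pass filter isolates an estimate of the scatter-dominated component. The toy sketch below shows only that filtering step (the demodulation and calibration details of the actual method are omitted), and the cutoff frequency is an assumed parameter:

        import numpy as np

        def lowpass_component(proj, cutoff=0.05):
            # Keep only spatial frequencies below `cutoff` (cycles/pixel),
            # where the scatter is assumed to be concentrated.
            F = np.fft.fftshift(np.fft.fft2(proj))
            fy = np.fft.fftshift(np.fft.fftfreq(proj.shape[0]))
            fx = np.fft.fftshift(np.fft.fftfreq(proj.shape[1]))
            FX, FY = np.meshgrid(fx, fy)
            mask = (np.hypot(FX, FY) < cutoff).astype(float)
            return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))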

  20. Method and apparatus for optical phase error correction

    Science.gov (United States)

    DeRose, Christopher; Bender, Daniel A.

    2014-09-02

    The phase value of a phase-sensitive optical device, which includes an optical transport region, is modified by laser processing. At least a portion of the optical transport region is exposed to a laser beam such that the phase value is changed from a first phase value to a second phase value, where the second phase value is different from the first phase value. The portion of the optical transport region that is exposed to the laser beam can be a surface of the optical transport region or a portion of the volume of the optical transport region. In an embodiment of the invention, the phase value of the optical device is corrected by laser processing. At least a portion of the optical transport region is exposed to a laser beam until the phase value of the optical device is within a specified tolerance of a target phase value.

  1. Genomes correction and assembling: present methods and tools

    Science.gov (United States)

    Wojcieszek, Michał; Pawełkowicz, Magdalena; Nowak, Robert; Przybecki, Zbigniew

    2014-11-01

    The recent rapid development of next-generation sequencing (NGS) technologies has had a significant impact on the field of genomics, enabling many de novo sequencing projects for new species that were previously precluded by technological costs. Along with the advancement of NGS came the need to adapt assembly programs: new algorithms must cope with massive amounts of data within reasonable time limits, and processing power and hardware are also important factors. In this paper, we address the issue of the assembly pipeline for de novo genome assembly as provided by programs presently available to scientists, both commercial and open-source. The implementation of four different approaches - Greedy, Overlap-Layout-Consensus (OLC), De Bruijn and Integrated - and the resulting variation in performance is the main focus of our discussion, with additional insight into the issue of short- and long-read correction.

  2. Detector correction in large container inspection systems

    CERN Document Server

    Kang Ke Jun; Chen Zhi Qiang

    2002-01-01

    In large container inspection systems, the image is constructed by parallel scanning with a one-dimensional detector array, with a linac used as the X-ray source. The linear nonuniformity and nonlinearity of the multiple detectors and the nonuniform intensity distribution of the X-ray sector beam result in horizontal striations in the scan image. This greatly impairs the image quality, so the image needs to be corrected. The correction parameters are determined experimentally by scaling the detector responses at multiple points and interpolating the results logarithmically. The horizontal striations are eliminated by modifying the original image data with the correction parameters. This method has proven to be effective and applicable in large container inspection systems
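
    A minimal sketch of the correction idea, under stated assumptions (per-detector responses recorded at several known dose levels and monotonically increasing; this is an illustration, not the system's actual code):

        import numpy as np

        def build_correction(cal_doses, cal_readings):
            # cal_doses: shape (K,) calibration dose levels
            # cal_readings: shape (D, K) response of each of D detectors
            log_dose = np.log(cal_doses)
            def correct(raw_line):
                # Map each detector's raw reading to an equivalent dose by
                # interpolating in the log-dose domain, removing striations.
                out = np.empty(raw_line.shape, dtype=float)
                for d, r in enumerate(raw_line):
                    out[d] = np.exp(np.interp(r, cal_readings[d], log_dose))
                return out
            return correct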

  3. Texture analysis by the Schulz reflection method: Defocalization corrections for thin films

    International Nuclear Information System (INIS)

    Chateigner, D.; Germi, P.; Pernet, M.

    1992-01-01

    A new method is described for correcting experimental data obtained from the texture analysis of thin films. The analysis employed for correcting the data usually requires the experimental curves of defocalization for a randomly oriented specimen. In view of difficulties in finding non-oriented films, a theoretical method for these corrections is proposed which uses the defocalization evolution for a bulk sample, the film thickness and the penetration depth of the incident beam in the material. This correction method is applied to a film of YBa2Cu3O7-δ on an SrTiO3 single-crystal substrate. (orig.)

  4. Band extension in digital methods of transfer function determination – signal conditioners asymmetry error corrections

    Directory of Open Access Journals (Sweden)

    Zbigniew Staroszczyk

    2014-12-01

    Full Text Available In this paper, a calibration method for error correction in transfer function determination with the use of DSP is proposed. The correction limits or eliminates the influence of the input/output signal conditioners on the transfer functions estimated for the investigated object. The method exploits a frequency-domain descriptor of the conditioning paths found during a training observation made on a known reference object. Keywords: transfer function, band extension, error correction, phase errors

  5. The effects of non-uniform flow velocity on vibrations of single-walled carbon nanotube conveying fluid

    Energy Technology Data Exchange (ETDEWEB)

    Sadeghi-Goughari, Moslem [Shahid Bahonar University of Kerman, Kerman (Iran, Islamic Republic of); Hosseini, Mohammad [Sirjan University of Technology, Sirjan (Iran, Islamic Republic of)

    2015-02-15

    The vibrational behavior of a single-walled carbon nanotube (SWCNT) conveying viscous nanoflow was investigated. The nonuniformity of the flow velocity distribution caused by the viscosity of the fluid and by small-size effects on the flow field was considered. The Euler-Bernoulli beam model was used to investigate the flow-induced vibration of the nanotube, while the non-uniformity of the flow velocity and the small-size effects of the flow field were formulated through the Knudsen number (Kn) as a discriminant parameter. For laminar flow in a circular nanotube, the momentum correction factor was developed as a function of Kn. For Kn = 0 (continuum flow), the momentum correction factor was found to be 1.33; it decreases as Kn increases and may approach 1 in the transition flow regime. We observed that for the passage of viscous flow through a nanotube with a non-uniform flow velocity, the critical continuum flow velocity for divergence decreased considerably compared with that for a uniform flow velocity, while with increasing Kn the difference between the uniform and non-uniform flow models is reduced. In the solution part, the differential transformation method (DTM) was used to solve the governing differential equations of motion.

  6. The effects of non-uniform flow velocity on vibrations of single-walled carbon nanotube conveying fluid

    International Nuclear Information System (INIS)

    Sadeghi-Goughari, Moslem; Hosseini, Mohammad

    2015-01-01

    The vibrational behavior of a single-walled carbon nanotube (SWCNT) conveying viscous nanoflow was investigated. The nonuniformity of the flow velocity distribution caused by the viscosity of the fluid and by small-size effects on the flow field was considered. The Euler-Bernoulli beam model was used to investigate the flow-induced vibration of the nanotube, while the non-uniformity of the flow velocity and the small-size effects of the flow field were formulated through the Knudsen number (Kn) as a discriminant parameter. For laminar flow in a circular nanotube, the momentum correction factor was developed as a function of Kn. For Kn = 0 (continuum flow), the momentum correction factor was found to be 1.33; it decreases as Kn increases and may approach 1 in the transition flow regime. We observed that for the passage of viscous flow through a nanotube with a non-uniform flow velocity, the critical continuum flow velocity for divergence decreased considerably compared with that for a uniform flow velocity, while with increasing Kn the difference between the uniform and non-uniform flow models is reduced. In the solution part, the differential transformation method (DTM) was used to solve the governing differential equations of motion.

  7. Correction to the method of Talmadge and Fitch

    International Nuclear Information System (INIS)

    Sincero, A.P.

    2002-01-01

    The method of Talmadge and Fitch used for calculating thickener areas was published in 1955. Although in the United States this method has largely been superseded by the solids-flux method, other parts of the world still use it up to the present. The method, however, is erroneous, and this needs to be known to potential users. The error lies in the assumption that the underflow concentration, C_u, and the time of thickening, t_u, in a continuous-flow thickener can be obtained from data obtained in a single batch settling test. This paper will show that this assumption is incorrect. (author)

  8. A new digitized reverse correction method for hypoid gears based on a one-dimensional probe

    Science.gov (United States)

    Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo

    2017-12-01

    In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometrical deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles and the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretical designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction method of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of the method of numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed by the organic combination of numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.
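
    Numerically, one reverse-correction step of the kind described can be viewed as a damped least-squares update of the machine-tool settings. The sketch below assumes the sensitivity (Jacobian) matrix relating setting changes to tooth surface deviations is available; it illustrates the iterative idea only and is not the authors' system:

        import numpy as np

        def correct_settings(settings, deviations, jacobian, damping=1e-3):
            # jacobian[i, j] = d(deviation_i) / d(setting_j)
            # Solve (J^T J + damping*I) delta = -J^T deviations
            J = jacobian
            lhs = J.T @ J + damping * np.eye(J.shape[1])
            delta = np.linalg.solve(lhs, -J.T @ deviations)
            return settings + delta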

  9. Nonuniform sampling by quantiles

    Science.gov (United States)

    Craft, D. Levi; Sonstrom, Reilly E.; Rovnyak, Virginia G.; Rovnyak, David

    2018-03-01

    A flexible strategy for choosing samples nonuniformly from a Nyquist grid using the concept of statistical quantiles is presented for broad classes of NMR experimentation. Quantile-directed scheduling is intuitive and flexible for any weighting function, promotes reproducibility and seed independence, and is generalizable to multiple dimensions. In brief, weighting functions are divided into regions of equal probability, which define the samples to be acquired. Quantile scheduling therefore achieves close adherence to a probability distribution function, thereby minimizing gaps for any given degree of subsampling of the Nyquist grid. A characteristic of quantile scheduling is that one-dimensional, weighted NUS schedules are deterministic; higher-dimensional schedules, however, are similar to within a user-specified jittering parameter. To develop unweighted sampling, we investigated the minimum jitter needed to disrupt subharmonic tracts, and show that this criterion can be met in many cases by jittering within 25-50% of the subharmonic gap. For nD-NUS, three supplemental components to choosing samples by quantiles are proposed in this work: (i) forcing the corner samples to ensure sampling to specified maximum values in indirect evolution times, (ii) providing an option to triangular-backfill sampling schedules to promote dense/uniform tracts at the beginning of signal evolution periods, and (iii) providing an option to force the edges of nD-NUS schedules to be identical to the 1D quantiles. Quantile-directed scheduling meets the diverse needs of current NUS experimentation, but can also be used for future NUS implementations such as off-grid NUS. A computer program implementing these principles (a.k.a. QSched) for 1D- and 2D-NUS is available under the general public license.
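
    In one dimension the quantile idea reduces to a few lines: normalize the weighting function on the Nyquist grid, split its cumulative distribution into regions of equal probability, and take one grid point per region. A minimal sketch (not QSched itself; jittering, corner forcing and backfilling are omitted):

        import numpy as np

        def quantile_schedule(weights, n_samples):
            # weights: non-negative weighting function on the Nyquist grid
            cdf = np.cumsum(weights) / np.sum(weights)
            # midpoints of n_samples equal-probability regions
            targets = (np.arange(n_samples) + 0.5) / n_samples
            return np.searchsorted(cdf, targets)

    For aggressive weighting the same grid index can be selected twice; a jittering step of the kind described in the record would avoid such collisions.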

  10. Compensation for nonuniform attenuation in SPECT brain imaging

    International Nuclear Information System (INIS)

    Glick, S.J.; King, M.A.; Pan, T.S.; Soares, E.J.

    1996-01-01

    Accurate compensation for photon attenuation is needed to perform quantitative brain single-photon-emission computed tomographic (SPECT) imaging. Bellini's attenuation-compensation method has been used with a nonuniform attenuation map to account for the nonuniform attenuation properties of the head. Simulation studies using a three-dimensional (3-D) digitized anthropomorphic brain phantom were conducted to compare quantitative accuracy of reconstructions obtained with the nonuniform Bellini method to that obtained with the Chang method and to iterative reconstruction using maximum-likelihood expectation maximization (ML-EM). Using the Chang method and assuming the head to be a uniform attenuator gave reconstructions with an average bias of approximately 6-8%, whereas using the Bellini or the iterative ML-EM method with a nonuniform attenuation map gave an average bias of approximately 1%. The computation time required to implement nonuniform attenuation compensation with the Bellini algorithm is approximately equivalent to the time required to perform one iteration of ML-EM. Thus, using the Bellini method with a nonuniform attenuation map provides accurate compensation for photon attenuation within the head, and the method can be implemented in computation times suitable for routine clinical use
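
    For orientation, the first-order Chang compensation mentioned above divides each reconstructed voxel by its attenuation factor averaged over the projection angles. A schematic sketch, assuming a helper path_integral(mu_map, voxel, angle) that returns the line integral of the attenuation map from the voxel to the detector (everything here is illustrative, not the study's code):

        import numpy as np

        def chang_correction(recon, mu_map, angles, path_integral):
            corrected = np.zeros(recon.shape)
            for idx in np.ndindex(recon.shape):
                # average transmission factor over all acquisition angles
                att = np.mean([np.exp(-path_integral(mu_map, idx, a))
                               for a in angles])
                corrected[idx] = recon[idx] / att
            return corrected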

  11. Analysis and development of methods of correcting for heterogeneities to cobalt-60: computing application

    International Nuclear Information System (INIS)

    Kappas, K.

    1982-11-01

    The purpose of this work is to analyse the influence of inhomogeneities of the human body on the determination of the dose in cobalt-60 radiation therapy. The first part is dedicated to the physical characteristics of inhomogeneities and to the conventional methods of correction. New methods of correction are proposed based on the analysis of the scatter; this analysis makes it possible to take into account, with greater accuracy, their physical characteristics and the corresponding modifications of the dose: 'the differential TAR method' and 'the beam subtraction method'. The second part is dedicated to the computer implementation of the second method of correction for routine application in hospitals

  12. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield.

    Science.gov (United States)

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-06-16

    Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach, without taking into account the daily gauge data of the pixel to be corrected, but rather the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that the analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale.
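
    The core of empirical quantile mapping is compact: estimate matching quantiles of the satellite and gauge distributions within one hydroclimatic area, then map each satellite value through that quantile relation. A minimal sketch under those assumptions (not the study's implementation):

        import numpy as np

        def quantile_map(spp, spp_ref, gauge_ref, n_q=101):
            # spp_ref, gauge_ref: reference samples pooled over one area
            q = np.linspace(0.0, 1.0, n_q)
            spp_q = np.quantile(spp_ref, q)
            gauge_q = np.quantile(gauge_ref, q)
            # replace each satellite value by the gauge value at the same quantile
            return np.interp(spp, spp_q, gauge_q)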

  13. Efficient SPECT scatter calculation in non-uniform media using correlated Monte Carlo simulation

    International Nuclear Information System (INIS)

    Beekman, F.J.

    1999-01-01

    Accurate simulation of scatter in projection data of single photon emission computed tomography (SPECT) is computationally extremely demanding for activity distributions in non-uniform dense media. This paper suggests how the computation time and memory requirements can be significantly reduced. First the scatter projection of a uniform dense object (P_SDSE) is calculated using a previously developed accurate and fast method which includes all orders of scatter (slab-derived scatter estimation), and then P_SDSE is transformed towards the desired projection P which is based on the non-uniform object. The transform of P_SDSE is based on two first-order Compton scatter Monte Carlo (MC) simulated projections. One is based on the uniform object (P_u) and the other on the object with non-uniformities (P_v). P is estimated by P-tilde = P_SDSE · P_v / P_u. A tremendous decrease in noise in P-tilde is achieved by tracking photon paths for P_v identical to those which were tracked for the calculation of P_u and by using analytical rather than stochastic modelling of the collimator. The method was validated by comparing the results with standard MC-simulated scatter projections (P) of 99mTc and 201Tl point sources in a digital thorax phantom. After correction, excellent agreement was obtained between P-tilde and P. The total computation time required to calculate an accurate scatter projection of an extended distribution in a thorax phantom on a PC is only a few tens of seconds per projection, which makes the method attractive for application in accurate scatter correction in clinical SPECT. Furthermore, the method removes the need for the excessive computer memory involved in previously proposed 3D model-based scatter correction methods. (author)

  14. Evaluation and parameterization of ATCOR3 topographic correction method for forest cover mapping in mountain areas

    Science.gov (United States)

    Balthazar, Vincent; Vanacker, Veerle; Lambin, Eric F.

    2012-08-01

    A topographic correction of optical remote sensing data is necessary to improve the quality of quantitative forest cover change analyses in mountainous terrain. The implementation of semi-empirical correction methods requires the calibration of model parameters that are empirically defined. This study develops a method to improve the performance of topographic corrections for forest cover change detection in mountainous terrain through an iterative tuning method of model parameters based on a systematic evaluation of the performance of the correction. The latter was based on: (i) the general matching of reflectances between sunlit and shaded slopes and (ii) the occurrence of abnormal reflectance values, qualified as statistical outliers, in very low illuminated areas. The method was tested on Landsat ETM+ data for rough (Ecuadorian Andes) and very rough mountainous terrain (Bhutan Himalayas). Compared to a reference level (no topographic correction), the ATCOR3 semi-empirical correction method resulted in a considerable reduction of dissimilarities between reflectance values of forested sites in different topographic orientations. Our results indicate that optimal parameter combinations are depending on the site, sun elevation and azimuth and spectral conditions. We demonstrate that the results of relatively simple topographic correction methods can be greatly improved through a feedback loop between parameter tuning and evaluation of the performance of the correction model.

  15. Research on 3-D terrain correction methods of airborne gamma-ray spectrometry survey

    International Nuclear Information System (INIS)

    Liu Yanyang; Liu Qingcheng; Zhang Zhiyong

    2008-01-01

    The general method of height correction is not effective in complex terrain when interpreting airborne gamma-ray spectrometry data, and the 2-D terrain correction method researched in recent years is only applicable to the correction of the measured section. A new method of 3-D sector terrain correction is studied. In this method, the ground radiator is divided into many small sector radiators, the irradiation rate is calculated at a given survey distance, and the total value over all small radiating sources is regarded as the irradiation rate of the ground radiator at a given point of the aero-survey; the correction coefficients of every point are then calculated and applied to the airborne gamma-ray spectrometry data. By dividing the ground radiator into many small sectors, the method can achieve forward calculation, inversion calculation and terrain correction for airborne gamma-ray spectrometry surveys in complex topography. Other factors are also considered, such as the unsaturated degree of the measured scope and uneven radiator content on the ground. The results of a forward model and an example analysis show that the 3-D terrain correction method is appropriate and effective. (authors)

  16. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    International Nuclear Information System (INIS)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo; Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro; Kato, Rikio

    2005-01-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)
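
    The estimation step described above lends itself to a short sketch: blur the attenuation-corrected reconstruction with a scatter kernel and scale by a scatter-fraction map before subtracting. The Gaussian kernel and the fraction map below are placeholders, not the published scatter function:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def ibsc(recon_ac, sigma, scatter_fraction):
            # recon_ac: reconstruction with Chang attenuation correction
            # scatter estimate = (scatter fraction) x (blurred image)
            scatter = scatter_fraction * gaussian_filter(recon_ac, sigma)
            return recon_ac - scatter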

  17. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    Energy Technology Data Exchange (ETDEWEB)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo [National Center for Geriatrics and Gerontology Research Institute, Department of Brain Science and Molecular Imaging, Obu, Aichi (Japan); Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro [National Cardiovascular Center Research Institute, Department of Investigative Radiology, Suita (Japan); Kato, Rikio [National Center for Geriatrics and Gerontology, Department of Radiology, Obu (Japan)

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)

  18. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method.

    Science.gov (United States)

    Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.

  19. Design of Nonuniform Filter Bank Transceivers for Frequency Selective Channels

    Directory of Open Access Journals (Sweden)

    Yuan-Pei Lin

    2007-01-01

    Full Text Available In recent years, there has been considerable interest in the theory and design of filter bank transceivers due to their superior frequency response. In many applications, it is desired to have transceivers that can support multiple services with different incoming data rates and different quality-of-service requirements. To meet these requirements, we can either do resource allocation or design transceivers with a nonuniform bandwidth partition. In this paper, we propose a method for the design of nonuniform filter bank transceivers for frequency selective channels. Both frequency response and signal-to-interference ratio (SIR can be incorporated in the transceiver design. Moreover, the technique can be extended to the case of nonuniform filter bank transceivers with rational sampling factors. Simulation results show that nonuniform filter bank transceivers with good filter responses as well as high SIR can be obtained by the proposed design method.

  20. Autocalibration method for non-stationary CT bias correction.

    Science.gov (United States)

    Vegas-Sánchez-Ferrero, Gonzalo; Ledesma-Carbayo, Maria J; Washko, George R; Estépar, Raúl San José

    2018-02-01

    Computed tomography (CT) is a widely used imaging modality for screening and diagnosis. However, the deleterious effects of radiation exposure inherent in CT imaging require the development of image reconstruction methods which can reduce exposure levels. The development of iterative reconstruction techniques is now enabling the acquisition of low-dose CT images whose quality is comparable to that of CT images acquired with much higher radiation dosages. However, the characterization and calibration of the CT signal due to changes in dosage and reconstruction approaches is crucial to provide clinically relevant data. Although CT scanners are calibrated as part of the imaging workflow, the calibration is limited to select global reference values and does not consider other inherent factors of the acquisition that depend on the subject scanned (e.g. photon starvation, partial volume effect, beam hardening) and result in a non-stationary noise response. In this work, we analyze the effect of reconstruction biases caused by non-stationary noise and propose an autocalibration methodology to compensate it. Our contributions are: 1) the derivation of a functional relationship between observed bias and non-stationary noise, 2) a robust and accurate method to estimate the local variance, 3) an autocalibration methodology that does not necessarily rely on a calibration phantom, attenuates the bias caused by noise and removes the systematic bias observed in devices from different vendors. The validation of the proposed methodology was performed with a physical phantom and clinical CT scans acquired with different configurations (kernels, doses, algorithms including iterative reconstruction). The results confirmed the suitability of the proposed methods for removing the intra-device and inter-device reconstruction biases. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Simple method for correct enumeration of Staphylococcus aureus

    DEFF Research Database (Denmark)

    Haaber, J.; Cohn, M. T.; Petersen, A.

    2016-01-01

    ...culture. When grown in such liquid cultures, the human pathogen Staphylococcus aureus is characterized by its aggregation of single cells into clusters of variable size. Here, we show that aggregation during growth in the laboratory standard medium tryptic soy broth (TSB) is common among clinical and laboratory S. aureus isolates and that aggregation may introduce significant bias when applying standard enumeration methods to S. aureus growing in laboratory batch cultures. We provide a simple and efficient sonication procedure, which can be applied prior to optical density measurements to give...

  2. Comparison of classical methods for blade design and the influence of tip correction on rotor performance

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær; Okulov, Valery; Mikkelsen, Robert Flemming

    2016-01-01

    The classical blade-element/momentum (BE/M) method, which is used together with different types of corrections (e.g. the Prandtl or Glauert tip correction), is today the most basic tool in the design of wind turbine rotors. However, there are other classical techniques based on a combination...
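
    For reference, the Prandtl tip correction named above has a simple closed form; a small sketch of the standard textbook formula, with phi the local flow angle in radians:

        import numpy as np

        def prandtl_tip_loss(B, r, R, phi):
            # B blades, local radius r, rotor radius R
            f = B * (R - r) / (2.0 * r * np.sin(phi))
            return (2.0 / np.pi) * np.arccos(np.exp(-f))

    The factor multiplies the momentum-balance terms of the BE/M equations, tending to zero at the tip and to one inboard.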

  3. Application of pulse pile-up correction spectrum to the library least-squares method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang Hoon [Kyungpook National Univ., Daegu (Korea, Republic of)

    2006-12-15

    The Monte Carlo simulation code CEARPPU has been developed and updated to provide pulse pile-up correction spectra for high counting rate cases. For neutron activation analysis, CEARPPU correction spectra were used in the library least-squares method to give better isotopic activity results than conventional library least-squares fitting with uncorrected spectra.

  4. Resistivity Correction Factor for the Four-Probe Method: Experiment I

    Science.gov (United States)

    Yamashita, Masato; Yamaguchi, Shoji; Enjoji, Hideo

    1988-05-01

    Experimental verification of the theoretically derived resistivity correction factor (RCF) is presented. Resistivity and sheet resistance measurements by the four-probe method are made on three samples: isotropic graphite, ITO film and Au film. It is indicated that the RCF can correct the apparent variations of experimental data to yield reasonable resistivities and sheet resistances.
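
    For a collinear four-probe measurement on a thin, laterally extended sample, the ideal-case expressions are standard, with the RCF entering as a multiplicative factor; a small sketch of these textbook relations (not the paper's data processing):

        import numpy as np

        def sheet_resistance(V, I, rcf=1.0):
            # Ideal thin infinite sheet: R_s = (pi / ln 2) * V / I;
            # rcf corrects for finite size and probe placement (1 = ideal).
            return (np.pi / np.log(2.0)) * (V / I) * rcf

        def resistivity(V, I, thickness, rcf=1.0):
            # Bulk resistivity of a film of known thickness, same geometry.
            return sheet_resistance(V, I, rcf) * thickness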

  5. Subspace Correction Methods for Total Variation and $\\ell_1$-Minimization

    KAUST Repository

    Fornasier, Massimo

    2009-01-01

    This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a sequence of orthogonal subspaces. On each subspace an iterative proximity-map algorithm is implemented via oblique thresholding, which is the main new tool introduced in this work. We provide convergence conditions for the algorithm in order to compute minimizers of the target energy. Analogous results are derived for a parallel variant of the algorithm. Applications are presented in domain decomposition methods for degenerate elliptic PDEs arising in total variation minimization and in accelerated sparse recovery algorithms based on $\ell_1$-minimization. We include numerical examples which show efficient solutions to classical problems in signal and image processing. © 2009 Society for Industrial and Applied Mathematics.

  6. Evaluation of Fresnel's corrections to the eikonal approximation by the separabilization method

    International Nuclear Information System (INIS)

    Musakhanov, M.M.; Zubarev, A.L.

    1975-01-01

    A method of separabilization of the potential over approximate Schroedinger solutions, leading to Schwinger's variational principle for the scattering amplitude, is suggested. The results are applied to the calculation of the Fresnel corrections to the Glauber approximation

  7. A software-based x-ray scatter correction method for breast tomosynthesis

    OpenAIRE

    Jia Feng, Steve Si; Sechopoulos, Ioannis

    2011-01-01

    Purpose: To develop a software-based scatter correction method for digital breast tomosynthesis (DBT) imaging and investigate its impact on the image quality of tomosynthesis reconstructions of both phantoms and patients.

  8. Discussion on Boiler Efficiency Correction Method with Low Temperature Economizer-Air Heater System

    Science.gov (United States)

    Ke, Liu; Xing-sen, Yang; Fan-jun, Hou; Zhi-hong, Hu

    2017-05-01

    This paper points out that it is wrong to take the outlet flue gas temperature of the low temperature economizer as the exhaust gas temperature in boiler efficiency calculations based on GB10184-1988. Furthermore, it proposes a new correction method, which decomposes the low temperature economizer-air heater system into two hypothetical parts, an air preheater and a pre-condensed-water heater, and takes the equivalent outlet gas temperature of the air preheater as the exhaust gas temperature in the boiler efficiency calculation. This method makes the boiler efficiency calculation more concise, with no air heater correction, and provides a useful reference for dealing correctly with this kind of problem.

  9. Influence Of Nonuniformity On Infrared Focal Plane Array Performance

    Science.gov (United States)

    Milton, A. F.; Barone, F. R.; Kruer, M. R.

    1985-08-01

    It is well known that detector response nonuniformity results in pattern noise with staring sensors, which is a severe problem in the infrared due to the low intrinsic contrast of IR imagery. The pattern noise can be corrected by electronic processing; however, the ability to correct for pattern noise is limited by the interaction of interscene and intrascene variability with the dynamic range of the processor (number of bits) and, depending upon the algorithm used, by nonlinearities in the detector response. This paper quantifies these limitations and describes the interaction of detector gain nonuniformity and detector nonlinearities. Probabilistic models are developed to determine the maximum sensitivity that can be obtained using a two-point algorithm to correct a nonlinear response curve over a wide temperature range. Curves that permit a prediction of the noise equivalent differential temperature (NEΔT) under varying circumstances are presented. A piecewise-linear approach to dealing with severe detector response nonlinearities is presented and analyzed for its effectiveness.
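
    The two-point algorithm analyzed here has a simple linear core: per-pixel gain and offset are derived from flat-field exposures at two blackbody temperatures, and the paper's point is precisely that detector nonlinearity breaks this linear model between and beyond the calibration points. A minimal sketch of the linear correction itself (illustrative only):

        import numpy as np

        def two_point_nuc(raw, resp_cold, resp_hot, target_cold, target_hot):
            # resp_cold/resp_hot: per-pixel mean responses to uniform scenes
            # target_cold/target_hot: desired uniform outputs at those scenes
            gain = (target_hot - target_cold) / (resp_hot - resp_cold)
            offset = target_cold - gain * resp_cold
            return gain * raw + offset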

  10. Error analysis of motion correction method for laser scanning of moving objects

    Science.gov (United States)

    Goel, S.; Lohani, B.

    2014-05-01

    The limitation of conventional laser scanning methods is that the objects being scanned should be static. The need to scan moving objects has resulted in the development of new methods capable of generating the correct 3D geometry of moving objects. Limited literature is available, showing the development of only a few methods capable of addressing the problem of object motion during scanning. All the existing methods utilize their own models or sensors, and studies on error modelling or analysis of these motion correction methods are lacking in the literature. In this paper, we develop the error budget and present the analysis of one such 'motion correction' method. This method assumes the availability of position and orientation information for the moving object, which in general can be obtained by installing a POS system on board or by use of tracking devices. It then uses this information along with the laser scanner data to apply a correction to the laser data, thus resulting in correct geometry despite the object being mobile during scanning. The major application of this method lies in the shipping industry, to scan ships either moving or parked at sea, and to scan other objects such as hot air balloons or aerostats. It is to be noted that the other "motion correction" methods explained in the literature cannot be applied to scan the objects mentioned here, which makes the chosen method unique. This paper presents some interesting insights into the functioning of the "motion correction" method as well as a detailed account of the behavior and variation of the error due to different sensor components, alone and in combination with each other. The analysis can be used to obtain insights into the optimal utilization of available components for achieving the best results.
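
    The correction itself is a per-point rigid transform: each laser return is moved into the object's frame using the pose reported for its timestamp. A schematic sketch, assuming an interface pose_at(t) that returns the rotation R and translation T of the object at time t (the sensors and error models of the actual method are not reproduced here):

        import numpy as np

        def motion_correct(points, times, pose_at):
            # points: (N, 3) scanner-frame coordinates; times: (N,) timestamps
            out = np.zeros(points.shape)
            for i, (p, t) in enumerate(zip(points, times)):
                R, T = pose_at(t)  # object pose: p_world = R @ p_obj + T
                out[i] = R.T @ (p - T)  # undo the object's motion
            return out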

  11. Radiation heat transfer model using Monte Carlo ray tracing method on hierarchical ortho-Cartesian meshes and non-uniform rational basis spline surfaces for description of boundaries

    Directory of Open Access Journals (Sweden)

    Kuczyński Paweł

    2014-06-01

    Full Text Available The paper deals with the solution of radiation heat transfer problems in enclosures filled with a nonparticipating medium using ray tracing on hierarchical ortho-Cartesian meshes. The idea behind the approach is that radiative heat transfer problems can be solved on much coarser grids than their counterparts from computational fluid dynamics (CFD). The resulting code is designed as an add-on to OpenFOAM, an open-source CFD program. An ortho-Cartesian mesh involving boundary elements is created based upon the CFD mesh. Parametric non-uniform rational basis spline (NURBS) surfaces are used to define the boundaries of the enclosure, allowing domains of complex shape to be handled. An algorithm for determining random, uniformly distributed locations of rays leaving the NURBS surfaces is described. The paper presents results of test cases assuming gray diffusive walls. In the current version of the model the radiation is not absorbed within gases. However, the ultimate aim of the work is to extend the model to problems in absorbing, emitting and scattering media by iteratively projecting the results of the radiative analysis onto the CFD mesh and the CFD solution onto the radiative mesh.

  12. A promising hybrid approach to SPECT attenuation correction

    International Nuclear Information System (INIS)

    Lewis, N.H.; Faber, T.L.; Corbett, J.R.; Stokely, E.M.

    1984-01-01

    Most methods for attenuation compensation in SPECT either rely on the assumption of uniform attenuation, or use slow iteration to achieve accuracy. However, hybrid methods that combine iteration with simple multiplicative correction can accommodate nonuniform attenuation, and such methods converge faster than other iterative techniques. The authors evaluated two such methods, which differ in the use of a damping factor to control convergence. Both uniform and nonuniform attenuation were modeled, using simulated and phantom data for a rotating gamma camera. For simulations done with 360° data and the correct attenuation map, activity levels were reconstructed to within 5% of the correct values after one iteration. Using 180° data, reconstructed levels in regions representing lesion and background were within 5% of the correct values in three iterations; however, further iterations were needed to eliminate the characteristic streak artifacts. The damping factor had little effect on 360° reconstruction, but was needed for convergence with 180° data. For both cold- and hot-lesion models, image contrast was better from the hybrid methods than from the simpler geometric-mean corrector. Results from the hybrid methods were comparable to those obtained using the conjugate-gradient iterative method, but required 50-100% less reconstruction time. The relative speed of the hybrid methods, and their accuracy in reconstructing photon activity in the presence of nonuniform attenuation, make them promising tools for quantitative SPECT reconstruction.

  13. Characterization and Processing of Non-Uniformities in Back-Illuminated CCDs

    Science.gov (United States)

    Lemm, Alia D.; Della-Rose, Devin J.; Maddocks, Sally

    2018-01-01

    In astronomical photometry, Charge-Coupled Device (CCD) detectors are used to achieve high-precision photometry and must be properly calibrated to correct for noise and pixel non-uniformities. Uncalibrated images may contain bias offset, dark current, bias structure and uneven illumination. In addition, standard data reduction is often not sufficient to "normalize" imagery to single-digit millimagnitude (mmag) precision. We are investigating an apparent non-uniformity, or interference pattern, in a back-illuminated sensor, the Alta U-47, attached to a DFM Engineering 41-cm Ritchey-Chrétien f/8 telescope. Based on the amplitude of this effect, we estimate that instrument magnitude peak-to-valley deviations of 50 mmag or more may result. Our initial testing strongly suggests that reflected skylight from high-pressure sodium city lights may be the cause of this interference pattern. Our research goals are twofold: to fully characterize this non-uniformity and to determine the best method to remove this interference pattern from our reduced CCD images.

  14. A new correction method for determination on carbohydrates in lignocellulosic biomass.

    Science.gov (United States)

    Li, Hong-Qiang; Xu, Jian

    2013-06-01

    Accurate determination of the key components in lignocellulosic biomass is a prerequisite for pretreatment and bioconversion. Currently, the widely used 72% H2SO4 two-step hydrolysis quantitative saccharification (QS) procedure uses loss coefficients of monosaccharide standards to correct for monosaccharide loss in the secondary hydrolysis (SH) of QS, which may result in excessive correction. By studying the quantitative relationships between glucose and xylose losses under the specific hydrolysis conditions and the production of HMF and furfural, a simple correction for the monosaccharide loss from both the primary hydrolysis (PH) and SH was established, using HMF and furfural as calibrators. The method was applied to component determination of corn stover, Miscanthus and cotton stalk (raw and pretreated materials) and compared to the NREL method. It has been shown that this method can avoid excessive correction for samples with high carbohydrate contents. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Beam-Based Error Identification and Correction Methods for Particle Accelerators

    CERN Document Server

    AUTHOR|(SzGeCERN)692826; Tomas, Rogelio; Nilsson, Thomas

    2014-06-10

    Modern particle accelerators have tight tolerances on the acceptable deviation from their desired machine parameters. The control of these parameters is of crucial importance for safe machine operation and performance. This thesis focuses on beam-based methods and algorithms to identify and correct errors in particle accelerators. The optics measurements and corrections of the Large Hadron Collider (LHC), which resulted in an unprecedentedly low β-beat for a hadron collider, are described. The transverse coupling is another parameter which it is important to control. Improvements in the reconstruction of the coupling from turn-by-turn data have resulted in a significant decrease of the measurement uncertainty. An automatic coupling correction method, which is based on the injected beam oscillations, has been successfully used in normal operation of the LHC. Furthermore, a new method to measure and correct chromatic coupling that was applied to the LHC is described. It resulted in a decrease of the chromatic coupli...

  16. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-07

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long-term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. Experiment proves that the left and right masses' quadrature errors are different, so the quadrature correction systems should be arranged independently. The process leading to quadrature error is analyzed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, compared with 38 °/h before quadrature error correction. The CSC method proves to be the best method for quadrature correction, and it improves the Angle Random Walk (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  17. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    Directory of Open Access Journals (Sweden)

    Huiliang Cao

    2016-01-01

    Full Text Available This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long-term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. Experiment proves that the left and right masses' quadrature errors are different, so the quadrature correction systems should be arranged independently. The process leading to quadrature error is analyzed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, compared with 38 °/h before quadrature error correction. The CSC method proves to be the best method for quadrature correction, and it improves the Angle Random Walk (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  18. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-01

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long-term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. Experiment proves that the left and right masses' quadrature errors are different, so the quadrature correction systems should be arranged independently. The process leading to quadrature error is analyzed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, compared with 38 °/h before quadrature error correction. The CSC method proves to be the best method for quadrature correction, and it improves the Angle Random Walk (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability. PMID:26751455

  19. RELIC: a novel dye-bias correction method for Illumina Methylation BeadChip.

    Science.gov (United States)

    Xu, Zongli; Langie, Sabine A S; De Boever, Patrick; Taylor, Jack A; Niu, Liang

    2017-01-03

    The Illumina Infinium HumanMethylation450 BeadChip and its successor, the Infinium MethylationEPIC BeadChip, have been extensively utilized in epigenome-wide association studies. Both arrays use two fluorescent dyes (Cy3-green/Cy5-red) to measure methylation levels at CpG sites. However, performance differences between the dyes can result in biased estimates of methylation levels. Here we describe a novel method, called REgression on Logarithm of Internal Control probes (RELIC), to correct for dye bias across the whole array by utilizing the intensity values of paired internal control probes that monitor the two color channels. We evaluate the method on several datasets against other widely used dye-bias correction methods. Results on data quality improvement showed that RELIC correction statistically significantly outperforms alternative dye-bias correction methods. We incorporated the method into the R package ENmix, which is freely available from the Bioconductor website ( https://www.bioconductor.org/packages/release/bioc/html/ENmix.html ). RELIC is an efficient and robust method to correct for dye bias in Illumina Methylation BeadChip data. It outperforms other alternative methods and is conveniently implemented in the R package ENmix to facilitate DNA methylation studies.

  20. A novel 3D absorption correction method for quantitative EDX-STEM tomography

    International Nuclear Information System (INIS)

    Burdet, Pierre; Saghi, Z.; Filippin, A.N.; Borrás, A.; Midgley, P.A.

    2016-01-01

    This paper presents a novel 3D method to correct for absorption in energy dispersive X-ray (EDX) microanalysis of heterogeneous samples of unknown structure and composition. By using STEM-based tomography coupled with EDX, an initial 3D reconstruction is used to extract the location of generated X-rays as well as the X-ray path through the sample to the surface. The absorption correction needed to retrieve the generated X-ray intensity is then calculated voxel-by-voxel estimating the different compositions encountered by the X-ray. The method is applied to a core/shell nanowire containing carbon and oxygen, two elements generating highly absorbed low energy X-rays. Absorption is shown to cause major reconstruction artefacts, in the form of an incomplete recovery of the oxide and an erroneous presence of carbon in the shell. By applying the correction method, these artefacts are greatly reduced. The accuracy of the method is assessed using reference X-ray lines with low absorption. - Highlights: • A novel 3D absorption correction method is proposed for 3D EDX-STEM tomography. • The absorption of X-rays along the path to the surface is calculated voxel-by-voxel. • The method is applied on highly absorbed X-rays emitted from a core/shell nanowire. • Absorption is shown to cause major artefacts in the reconstruction. • Using the absorption correction method, the reconstruction artefacts are greatly reduced.
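
    The voxel-by-voxel correction rests on the Beer-Lambert law: an X-ray generated in a voxel reaches the detector attenuated by exp(-∫ μ ds) along its path to the sample surface. A simplified ray-marching sketch of that path integral (not the authors' code; the voxel grid, step size and units are assumptions):

        import numpy as np

        def transmission(mu, voxel, direction, step=0.5):
            """Accumulate attenuation from a voxel to the sample surface
            through a 3D map of linear absorption coefficients mu (1/voxel),
            returning exp(-integral of mu ds) along the straight path."""
            pos = np.asarray(voxel, dtype=float)
            d = np.asarray(direction, dtype=float)
            d /= np.linalg.norm(d)
            optical_depth = 0.0
            while all(0 <= p < s for p, s in zip(pos, mu.shape)):
                optical_depth += mu[tuple(pos.astype(int))] * step
                pos += d * step
            return np.exp(-optical_depth)

    Dividing the measured intensity attributed to each voxel by this factor gives an estimate of the generated intensity, which is what the reconstruction should use.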

  1. A novel 3D absorption correction method for quantitative EDX-STEM tomography

    Energy Technology Data Exchange (ETDEWEB)

    Burdet, Pierre, E-mail: pierre.burdet@a3.epfl.ch [Department of Materials Science and Metallurgy, University of Cambridge, Charles Babbage Road 27, Cambridge CB3 0FS, Cambridgeshire (United Kingdom); Saghi, Z. [Department of Materials Science and Metallurgy, University of Cambridge, Charles Babbage Road 27, Cambridge CB3 0FS, Cambridgeshire (United Kingdom); Filippin, A.N.; Borrás, A. [Nanotechnology on Surfaces Laboratory, Materials Science Institute of Seville (ICMS), CSIC-University of Seville, C/ Americo Vespucio 49, 41092 Seville (Spain); Midgley, P.A. [Department of Materials Science and Metallurgy, University of Cambridge, Charles Babbage Road 27, Cambridge CB3 0FS, Cambridgeshire (United Kingdom)

    2016-01-15

    This paper presents a novel 3D method to correct for absorption in energy dispersive X-ray (EDX) microanalysis of heterogeneous samples of unknown structure and composition. By using STEM-based tomography coupled with EDX, an initial 3D reconstruction is used to extract the location of generated X-rays as well as the X-ray path through the sample to the surface. The absorption correction needed to retrieve the generated X-ray intensity is then calculated voxel-by-voxel estimating the different compositions encountered by the X-ray. The method is applied to a core/shell nanowire containing carbon and oxygen, two elements generating highly absorbed low energy X-rays. Absorption is shown to cause major reconstruction artefacts, in the form of an incomplete recovery of the oxide and an erroneous presence of carbon in the shell. By applying the correction method, these artefacts are greatly reduced. The accuracy of the method is assessed using reference X-ray lines with low absorption. - Highlights: • A novel 3D absorption correction method is proposed for 3D EDX-STEM tomography. • The absorption of X-rays along the path to the surface is calculated voxel-by-voxel. • The method is applied on highly absorbed X-rays emitted from a core/shell nanowire. • Absorption is shown to cause major artefacts in the reconstruction. • Using the absorption correction method, the reconstruction artefacts are greatly reduced.

  2. Methods of posture correction in junior schoolchildren by means of physical exercises

    Directory of Open Access Journals (Sweden)

    Gagara V.F.

    2012-08-01

    Full Text Available The results of the influence of physical rehabilitation methods on children's organisms are presented. Sixteen primary school children with scoliotic changes of the thoracic spine took part in the research. The complex of physical rehabilitation methods included special corrective and general health-improving exercises, therapeutic gymnastics, and positional correction. Therapeutic gymnastics sessions of 30-45 minutes were conducted 3-4 times per week. An improvement in the spinal mobility and posture of the schoolchildren was observed, with the absolute indexes of posture and spinal flexibility approaching normal physiological values. A rehabilitation complex is recommended that includes elements of corrective gymnastics, therapeutic physical culture, positional correction, and massage of the trunk muscles. It is also necessary to adhere to a rational daily regime and diet, to provide working furniture of normative dimensions, and to maintain self-monitoring of posture.

  3. Efficient color correction method for smartphone camera-based health monitoring application.

    Science.gov (United States)

    Duc Dang; Chae Ho Cho; Daeik Kim; Oh Seok Kwon; Jo Woon Chong

    2017-07-01

    Smartphone health monitoring applications have recently been highlighted due to the rapid development of smartphone hardware and software performance. However, the color characteristics of images captured by different smartphone models differ from one another, and this difference may yield inconsistent health monitoring results when such applications derive physiological information from their embedded cameras. In this paper, we investigate the differences in the color properties of images captured by different smartphone models and apply a color correction method to adjust the dissimilar color values obtained from different smartphone cameras. Experimental results show that images color-corrected using this method exhibit much smaller color intensity errors than the uncorrected images. These results can be applied to enhance the consistency of smartphone camera-based health monitoring applications by reducing color intensity errors among the images obtained from different smartphones.
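
    The correction step can be sketched as fitting an affine color transform between corresponding color patches captured by two phones, a generic least-squares formulation rather than necessarily the authors' exact model; the patch data are assumed to come from a shared color chart:

        import numpy as np

        def fit_ccm(measured, reference):
            """Least-squares affine color correction (3x3 matrix plus offset)
            mapping one camera's RGB patch values onto reference values.
            measured, reference: (N, 3) arrays of corresponding patches."""
            X = np.hstack([measured, np.ones((len(measured), 1))])
            M, *_ = np.linalg.lstsq(X, reference, rcond=None)  # shape (4, 3)
            return M

        def apply_ccm(rgb, M):
            """Apply the fitted transform to an (H, W, 3) or (N, 3) image."""
            flat = rgb.reshape(-1, 3)
            X = np.hstack([flat, np.ones((len(flat), 1))])
            return (X @ M).reshape(rgb.shape)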

  4. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    International Nuclear Information System (INIS)

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees

    2015-01-01

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts
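
    Conceptually, the final correction step predicts HU in the corrupted region from coregistered MRI intensities, using the intensity-HU pairing learned on a nearby artifact-free slice. A much-simplified sketch that assumes the registration has already been done and substitutes a plain polynomial fit for the authors' comprehensive analysis:

        import numpy as np

        def predict_hu(mri_clean, hu_clean, mri_corrupt, degree=3):
            """Fit HU as a polynomial function of coregistered MRI intensity
            on an artifact-free slice, then predict HU for the corrupted
            region of the neighboring slice."""
            coeffs = np.polyfit(mri_clean.ravel(), hu_clean.ravel(), degree)
            return np.polyval(coeffs, mri_corrupt)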

  5. Ballistic deficit correction methods for large Ge detectors-high counting rate study

    International Nuclear Information System (INIS)

    Duchene, G.; Moszynski, M.

    1995-01-01

    This study compares different ballistic deficit correction methods versus input count rate (from 3 to 50 kcounts/s) using four large Ge detectors of about 70% relative efficiency. It turns out that the Tennelec TC245 linear amplifier in the BDC mode (Hinshaw method) is the best compromise for energy resolution throughout. All correction methods lead to narrow sum-peaks indistinguishable from single γ lines. The full energy peak throughput is found to be representative of the pile-up inspection dead time of the corrector circuits. This work also presents a new and simple representation, plotting energy resolution and throughput simultaneously versus input count rate. 12 refs., 11 figs.

  6. A non-uniform expansion mechanical safety model of the stent.

    Science.gov (United States)

    Yang, J; Huang, N; Du, Q

    2009-01-01

    Stents have a serially unstable structure that readily leads to non-uniform expansion, and non-uniform expansion in turn creates a stent safety problem. We explain how a stent may be simplified to a serially unstable structure, and present a method to calculate the non-uniform expansion of the stent on the basis of this structure. We propose a safety criterion based on the expansion displacement instead of the strain, and show that the parameter Rd, the ratio of the maximum displacement of the elements to the normal displacement, is meaningful for assessing the safety level of the stent. We also examine how laser cutting influences non-uniform expansion. The examples illustrate how to calculate the parameter Rd to assess non-uniform expansion of the stent, and demonstrate how the laser cutting offset and the strengthening coefficient of the material influence the stent expansion behaviour. The methods are valuable for assessing stent safety in the presence of non-uniform expansion.

  7. N3 Bias Field Correction Explained as a Bayesian Modeling Method

    DEFF Research Database (Denmark)

    Larsen, Christian Thode; Iglesias, Juan Eugenio; Van Leemput, Koen

    2014-01-01

    Although N3 is perhaps the most widely used method for MRI bias field correction, its underlying mechanism is in fact not well understood. Specifically, the method relies on a relatively heuristic recipe of alternating iterative steps that does not optimize any particular objective function. In t...

  8. Integrals of random fields treated by the model correction factor method

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  9. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  10. A Geometric Correction Method of Plane Image Based on OpenCV

    Directory of Open Access Journals (Sweden)

    Li Xiaopeng

    2014-02-01

    Full Text Available Using OpenCV, a geometric correction method for plane images, based on a single grid image taken from an unknown camera position, is presented. The method can remove perspective and lens distortions from an image. It is simple, easy to implement, and efficient. Experiments indicate that the method has high precision and can be used in domains such as plane measurement.
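
    A minimal OpenCV sketch of the perspective-removal part of such a method (removing lens distortion, which the paper also addresses, would additionally require calibrated camera parameters and cv2.undistort; the corner coordinates and file names below are illustrative):

        import cv2
        import numpy as np

        # Pixel coordinates of the four grid corners in the distorted image;
        # in practice these come from corner detection on the grid image
        src = np.float32([[112, 84], [508, 60], [540, 470], [90, 442]])
        dst = np.float32([[0, 0], [400, 0], [400, 400], [0, 400]])

        img = cv2.imread('grid.jpg')
        H = cv2.getPerspectiveTransform(src, dst)   # 3x3 homography
        corrected = cv2.warpPerspective(img, H, (400, 400))
        cv2.imwrite('corrected.jpg', corrected)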

  11. A New Online Calibration Method Based on Lord's Bias-Correction.

    Science.gov (United States)

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    Online calibration techniques have been widely employed to calibrate new items due to their advantages. Method A is the simplest online calibration method and has recently attracted much attention from researchers. However, a key assumption of Method A is that it treats the person-parameter estimates θ̂_s (obtained by maximum likelihood estimation [MLE]) as their true values θ_s; thus the deviation of the estimated θ̂_s from their true values might yield inaccurate item calibration when the deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂_s, which may adversely affect the item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI can make a significant improvement over the ML ability estimates, and MLE-LBCI-Method A did outperform Method A in almost all experimental conditions.

  12. An Investigation on the Efficiency Correction Method of the Turbocharger at Low Speed

    Directory of Open Access Journals (Sweden)

    Jin Eun Chung

    2018-01-01

    Full Text Available Heat transfer in the turbocharger occurs due to the temperature differences between the exhaust gas and the intake air, coolant, and oil. This heat transfer causes the measured efficiencies of the compressor and turbine to be distorted, an effect known to be exacerbated at low rotational speeds. Thus, this study proposes a method to mitigate the distortion of test data caused by heat transfer in the turbocharger. With this method, a representative compressor temperature is defined, and the heat transfer rate of the compressor is calculated by considering the effects of the oil and turbine inlet temperatures at low rotational speeds, with the cold and hot gas tests performed simultaneously. The correction of compressor efficiency depending on the turbine inlet temperature was performed through both hot and cold gas tests; the results showed a maximum error of 16% before correction and a maximum error of 3% after correction. In addition, it is shown that the efficiency distortion of the turbocharger caused by heat transfer can be corrected by converting to a combined turbine efficiency based on the corrected compressor efficiency.

  13. Evaluation of bias-correction methods for ensemble streamflow volume forecasts

    Directory of Open Access Journals (Sweden)

    T. Hashino

    2007-01-01

    Full Text Available Ensemble prediction systems are used operationally to make probabilistic streamflow forecasts for seasonal time scales. However, hydrological models used for ensemble streamflow prediction often have simulation biases that degrade forecast quality and limit the operational usefulness of the forecasts. This study evaluates three bias-correction methods for ensemble streamflow volume forecasts. All three adjust the ensemble traces using a transformation derived from simulated and observed flows in a historical simulation. The quality of probabilistic forecasts issued when using the three bias-correction methods is evaluated using a distributions-oriented verification approach. Comparisons are made of retrospective forecasts of monthly flow volumes for a north-central United States basin (Des Moines River, Iowa), issued sequentially for each month over a 48-year record. The results show that all three bias-correction methods significantly improve forecast quality by eliminating unconditional biases and enhancing the potential skill. Still, subtle differences in the attributes of the bias-corrected forecasts have important implications for their use in operational decision-making. Diagnostic verification distinguishes these attributes in a context meaningful for decision-making, providing criteria to choose among bias-correction methods with comparable skill.
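
    All three methods derive a transformation from paired simulated and observed historical flows; empirical quantile mapping is a representative example of such a transformation (a generic sketch, not necessarily one of the study's three variants):

        import numpy as np

        def quantile_map(forecast, sim_hist, obs_hist):
            """Empirical quantile mapping: assign each forecast value the
            observed flow at the same non-exceedance probability that the
            value holds in the historical simulation."""
            probs = np.searchsorted(np.sort(sim_hist), forecast) / len(sim_hist)
            return np.quantile(obs_hist, np.clip(probs, 0.0, 1.0))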

  14. A multilevel correction adaptive finite element method for Kohn-Sham equation

    Science.gov (United States)

    Hu, Guanghui; Xie, Hehu; Xu, Fei

    2018-02-01

    In this paper, an adaptive finite element method with a multilevel correction technique is proposed for solving the Kohn-Sham equation. In the method, the Kohn-Sham equation is solved on a fixed and appropriately coarse mesh with the finite element method, and the finite element space is successively improved by solving derived boundary value problems on a series of adaptively and successively refined meshes. A main feature of the method is that solving the large-scale Kohn-Sham system is avoided effectively, and solving the derived boundary value problems can be handled efficiently by classical methods such as the multigrid method. Hence, significant acceleration can be obtained in solving the Kohn-Sham equation with the proposed multilevel correction technique. The performance of the method is examined by a variety of numerical experiments.

  15. Analysis of slippery droplet on tilted plate by development of optical correction method

    Science.gov (United States)

    Ko, Han Seo; Gim, Yeonghyeon; Choi, Sung Ho; Jang, Dong Kyu; Sohn, Dong Kee

    2017-11-01

    Because of distortion effects at the surface of a sessile droplet, PIV (particle image velocimetry) measurements of the droplet's inner flow field have low reliability. To solve this problem, many researchers have studied and developed optical correction methods. However, these methods cannot be applied to various cases, such as tilted droplets or other asymmetrically shaped droplets, since most were developed only for axisymmetric droplets. For the optical correction of an asymmetrically shaped droplet, the surface function was calculated by three-dimensional reconstruction using an ellipse curve-fitting method, and the optical correction based on the surface function was verified by numerical simulation. The developed method was then applied to reconstruct the inner flow field of a droplet on a tilted plate. A colloidal water droplet on a tilted surface was used, and the distortion effect at the droplet surface was calculated. Using the obtained results and the PIV method, the corrected flow field for the inner and interface regions of the droplet was reconstructed. Consequently, the error in the velocity vectors near the apex of the droplet caused by the distortion effect was removed. National Research Foundation (NRF) of Korea (2016R1A2B4011087).

  16. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    Science.gov (United States)

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed analyte concentration to the observed urinary creatinine concentration (UCR). This ratio-based method is flawed since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should account for hydration as well as other factors, like age, gender, and race/ethnicity, that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of the ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group in the numerator of this ratio (for example, males), these ratios were higher for the model-based method. When estimated UCRs were lower for the group in the numerator of this ratio (for example, NHW), these ratios were higher for the ratio-based method. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
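
    The distinction between the two approaches is easy to state in code: the ratio-based method divides by UCR, implicitly forcing a coefficient of -1 on log(UCR), while the model-based method lets the data estimate that coefficient alongside covariates such as age and sex. A sketch with synthetic data (all variables and coefficients are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 500
        age = rng.uniform(20, 70, n)
        male = rng.integers(0, 2, n)
        # UCR depends on age and sex, not only on hydration
        ucr = np.exp(0.2 + 0.004 * age + 0.3 * male + rng.normal(0, 0.3, n))
        analyte = np.exp(1.0 + rng.normal(0, 0.5, n)) * ucr ** 0.8

        # Ratio-based correction: hydration assumed to be the only factor
        ratio_corrected = analyte / ucr

        # Model-based correction: log(UCR) enters the regression, so age and
        # sex effects on creatinine no longer masquerade as analyte effects
        X = np.column_stack([np.ones(n), np.log(ucr), age, male])
        beta, *_ = np.linalg.lstsq(X, np.log(analyte), rcond=None)
        print(beta)  # the log(UCR) coefficient is estimated, not fixed at -1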

  17. Illumination non-uniformity of spirally wobbling beam in heavy ion fusion

    International Nuclear Information System (INIS)

    Suzuki, T.; Noguchi, K.; Kurosaki, T.; Barada, D.; Kawata, S.; Ma, Y. Y.; Ogoyski, A.I.

    2016-01-01

    In inertial confinement fusion, driver beam illumination non-uniformity leads to a degradation of the fusion energy output. The allowable illumination non-uniformity in inertial fusion target implosion is less than a few percent. Heavy ion beam (HIB) accelerators provide the capability to oscillate the beam axis at a high frequency. The wobbling beams may therefore provide a new way to reduce or smooth the beam illumination non-uniformity. In this paper the HIB wobbling illumination scheme is optimized.

  18. Method and apparatus for reconstructing in-cylinder pressure and correcting for signal decay

    Science.gov (United States)

    Huang, Jian

    2013-03-12

    A method comprises steps for reconstructing in-cylinder pressure data from a vibration signal collected from a vibration sensor mounted on an engine component where it can generate a signal with a high signal-to-noise ratio, and correcting the vibration signal for errors introduced by vibration signal charge decay and sensor sensitivity. The correction factors are determined as a function of estimated motoring pressure and the measured vibration signal itself with each of these being associated with the same engine cycle. Accordingly, the method corrects for charge decay and changes in sensor sensitivity responsive to different engine conditions to allow greater accuracy in the reconstructed in-cylinder pressure data. An apparatus is also disclosed for practicing the disclosed method, comprising a vibration sensor, a data acquisition unit for receiving the vibration signal, a computer processing unit for processing the acquired signal and a controller for controlling the engine operation based on the reconstructed in-cylinder pressure.

  19. Correction method of slit modulation transfer function on digital medical imaging system

    International Nuclear Information System (INIS)

    Kim, Jung Min; Jung, Hoi Woun; Min, Jung Whan; Im, Eon Kyung

    2006-01-01

    By using CR image pixel data, we examined how to calculate the MTF and the digital characteristic curve. Pixel data produced by digital X-ray equipment can be converted to a text file (Excel). We describe how to calculate and correct the sharpness of digital images via the MTF, following the slit method of Fujita. The Excel program was used to perform the calculations from the slit radiograph. The digital characteristic curve, line spread function, discrete Fourier transform, and fast Fourier transform were obtained in regular sequence. A big advantage of this method is that it can be understood easily and results can be obtained without a costly program and without full knowledge of a computer language. Different correction methods yield noticeably different values; therefore, an appropriate correction method must be chosen, and many experiments are needed to obtain precise MTF figures.
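
    The computational chain described (slit image → line spread function → Fourier transform → MTF) fits in a few lines; a sketch assuming the LSF has already been extracted from the slit radiograph, with an assumed pixel pitch:

        import numpy as np

        def mtf_from_lsf(lsf, pixel_pitch_mm):
            """Normalize the line spread function to unit area, take its
            Fourier transform, and return spatial frequencies (cycles/mm)
            with the MTF normalized to 1 at zero frequency."""
            lsf = lsf / lsf.sum()
            mtf = np.abs(np.fft.rfft(lsf))
            freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)
            return freqs, mtf / mtf[0]

        # Example: a Gaussian LSF sampled at a 0.1 mm pixel pitch
        x = np.arange(-32, 32) * 0.1
        freqs, mtf = mtf_from_lsf(np.exp(-x**2 / (2 * 0.15**2)), 0.1)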

  20. A third-generation dispersion and third-generation hydrogen bonding corrected PM6 method

    DEFF Research Database (Denmark)

    Kromann, Jimmy Charnley; Christensen, Anders Steen; Svendsen, Casper Steinmann

    2014-01-01

    We present new dispersion and hydrogen bond corrections to the PM6 method, PM6-D3H+, and its implementation in the GAMESS program. The method combines the DFT-D3 dispersion correction by Grimme et al. with a modified version of the H+ hydrogen bond correction by Korth. Overall, the interaction...... in GAMESS, while the corresponding numbers for PM6-DH+ implemented in MOPAC are 54, 17, 15, and 2. The PM6-D3H+ method as implemented in GAMESS offers an attractive alternative to PM6-DH+ in MOPAC in cases where the LBFGS optimizer must be used and a vibrational analysis is needed, e.g., when computing...... vibrational free energies. While the GAMESS implementation is up to 10 times slower for geometry optimizations of proteins in bulk solvent, compared to MOPAC, it is sufficiently fast to make geometry optimizations of small proteins practically feasible....

  1. Correction to the count-rate detection limit and sample/blank time-allocation methods

    International Nuclear Information System (INIS)

    Alvarez, Joseph L.

    2013-01-01

    A common form of count-rate detection limits contains a propagation of uncertainty error. This error originated in methods to minimize uncertainty in the subtraction of the blank counts from the gross sample counts by allocation of blank and sample counting times. Correct uncertainty propagation showed that the time allocation equations have no solution. This publication presents the correct form of count-rate detection limits. -- Highlights: •The paper demonstrated a proper method of propagating uncertainty of count rate differences. •The standard count-rate detection limits were in error. •Count-time allocation methods for minimum uncertainty were in error. •The paper presented the correct form of the count-rate detection limit. •The paper discussed the confusion between count-rate uncertainty and count uncertainty
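
    The correct propagation treats the gross and blank counts as independent Poisson variables, so the net rate R = Ng/tg - Nb/tb has standard deviation sqrt(Ng/tg^2 + Nb/tb^2). A small sketch of this calculation (the numbers are illustrative):

        import numpy as np

        def net_rate(gross_counts, t_gross, blank_counts, t_blank):
            """Net count rate and its correctly propagated Poisson
            uncertainty: R = Ng/tg - Nb/tb,
            sigma_R = sqrt(Ng/tg**2 + Nb/tb**2)."""
            rate = gross_counts / t_gross - blank_counts / t_blank
            sigma = np.sqrt(gross_counts / t_gross**2
                            + blank_counts / t_blank**2)
            return rate, sigma

        print(net_rate(1200, 600.0, 800, 600.0))  # 10-minute counts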

  2. Characteristic of methods for prevention and correction of moral of alienation of students

    Directory of Open Access Journals (Sweden)

    Z. K. Malieva

    2014-01-01

    Full Text Available Moral alienation is a complex integrative phenomenon characterized by an individual's rejection of the universal spiritual and moral values of society. The last opportunity for a purposeful, competent solution to the problem of an individual's moral alienation lies in the space of professional education. The subject of this article is the identification of methods for the prevention and correction of moral alienation of students that can be used by teachers both in extracurricular activities and in classes in humanitarian disciplines. The purpose of the work is to study methods and techniques that enhance the effectiveness of the prevention and correction of moral alienation of students, and to identify their characteristics and application in teachers' educational activities. The paper concretizes a definition of methods to prevent and correct the moral alienation of students, which represent a system of interrelated actions of educator and students aimed at redefining negative values, rules, and norms of behavior, and at overcoming negative mental states, negative attitudes, interests, and aptitudes of the students. The article distinguishes and characterizes the most effective methods for prevention and correction of moral alienation of students: conviction; the Socratic method; understanding; semiotic analysis; suggestion; and the "explosion" method. It also presents the rules and necessary conditions for applying these methods in the educational process. It is ascertained that the choice of effective preventive and corrective methods and techniques is determined by the content of the intrapersonal, psychological sources of moral alienation, associated with the following: negative attitudes due to previous experience; orientation toward negative values; inadequate self-esteem, which negatively affects the development and functioning of the individual's psyche and behavior; and mental states. The conclusions of the

  3. Orbit Determination from Tracking Data of Artificial Satellite Using the Method of Differential Correction

    Directory of Open Access Journals (Sweden)

    Byoung-Sun Lee

    1988-06-01

    Full Text Available The differential correction process of determining osculating orbital elements, as accurate as possible at a given instant of time, from tracking data of an artificial satellite was accomplished. Preliminary orbital elements were used as initial values for the differential correction procedure and iterated until the residual between the real observations (O) and the computed observations (C) was minimized. The tracked satellite was NOAA-9 of the TIROS-N series. Two types of tracking data were used: prediction data precomputed from the mean orbital elements of TBUS, and real data obtained by tracking the 1.707 GHz HRPT signal of NOAA-9 using the 5 m auto-track antenna at the Radio Research Laboratory. Depending on the tracking data, either the Gauss method or the Herrick-Gibbs method was applied to preliminary orbit determination. In the differential correction stage, both Escobal's (1975) analytical method and numerical methods were used, and their results are nearly consistent. The differentially corrected orbits converged to the same values in spite of the differences between the preliminary orbits of each time span.
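
    The differential correction loop itself is generic nonlinear least squares: linearize the observation model about the current elements, solve for the correction that shrinks the O-C residuals, and iterate. A sketch with a numerical Jacobian; observe() is a placeholder the caller supplies, and this is not the paper's specific formulation:

        import numpy as np

        def differential_correction(elements, observe, observations, tol=1e-8):
            """Iteratively correct orbital elements so that the computed
            observations C = observe(x) approach the real observations O."""
            x = np.asarray(elements, dtype=float)
            for _ in range(20):
                resid = observations - observe(x)        # O - C residuals
                J = np.empty((len(resid), len(x)))       # numerical Jacobian
                for j in range(len(x)):
                    dx = np.zeros_like(x)
                    dx[j] = 1e-6 * max(1.0, abs(x[j]))
                    J[:, j] = (observe(x + dx) - observe(x)) / dx[j]
                step, *_ = np.linalg.lstsq(J, resid, rcond=None)
                x += step
                if np.linalg.norm(step) < tol:
                    break
            return x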

  4. Bias-correction of CORDEX-MENA projections using the Distribution Based Scaling method

    Science.gov (United States)

    Bosshard, Thomas; Yang, Wei; Sjökvist, Elin; Arheimer, Berit; Graham, L. Phil

    2014-05-01

    Within the Regional Initiative for the Assessment of the Impact of Climate Change on Water Resources and Socio-Economic Vulnerability in the Arab Region (RICCAR), led by UN ESCWA, CORDEX RCM projections for the Middle East North Africa (MENA) domain are used to drive hydrological impact models. Bias-correction of the newly available CORDEX-MENA projections is a central part of this project. In this study, the distribution-based scaling (DBS) method has been applied to 6 regional climate model projections driven by 2 RCP emission scenarios. The DBS method uses a quantile mapping approach and features a conditional temperature correction dependent on the wet/dry state in the climate model data. The CORDEX-MENA domain is particularly challenging for bias-correction as it spans very diverse climates with pronounced dry and wet seasons. Results show that the regional climate models simulate too low temperatures and often have a displaced rainfall band compared to the WATCH ERA-Interim forcing data in the reference period 1979-2008. DBS is able to correct the temperature biases as well as some aspects of the precipitation biases. Special focus is given to the analysis of the influence of the dry-frequency bias (i.e. climate models simulating too few rain days) on the bias-corrected projections and on the modification of the climate change signal by the DBS method.

  5. A new method to make gamma-ray self-absorption correction

    International Nuclear Information System (INIS)

    Tian Dongfeng; Xie Dong; Ho Yukun; Yang Fujia

    2001-01-01

    This paper discusses a new method to directly extract the information needed for the geometric self-absorption correction through measurement of the characteristic γ radiation emitted spontaneously from nuclear fissile material. Numerical simulation tests show that this method can extract the purely original information needed for nondestructive assay from the measured γ-ray spectra, even though the geometric shape of the sample and the materials between sample and detector are not known in advance.

  6. Corrected entropy of Friedmann-Robertson-Walker universe in tunneling method

    International Nuclear Information System (INIS)

    Zhu, Tao; Ren, Ji-Rong; Li, Ming-Fan

    2009-01-01

    In this paper, we study the thermodynamic quantities of the Friedmann-Robertson-Walker (FRW) universe by using the tunneling formalism beyond the semiclassical approximation developed by Banerjee and Majhi [25]. For this we first calculate the corrected Hawking-like temperature on the apparent horizon by considering both scalar particle and fermion tunneling. With this corrected Hawking-like temperature, the explicit expressions of the corrected entropy of the apparent horizon for various gravity theories, including Einstein gravity, Gauss-Bonnet gravity, Lovelock gravity, f(R) gravity and scalar-tensor gravity, are computed. Our results show that the corrected entropy formulas for the different gravity theories can be written as a general expression (4.39) of the same form. It is also shown that this expression remains valid for black holes. This might imply that the expression for the corrected entropy derived from the tunneling method is independent of the gravity theory, the spacetime and the dimension of the spacetime. Moreover, it is concluded that the basic thermodynamical property that the corrected entropy on the apparent horizon is a state function is satisfied by the FRW universe.

  7. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers

    OpenAIRE

    Dobie, Robert A; Wojcik, Nancy C

    2015-01-01

    Objectives The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved...

  8. A Novel Flood Forecasting Method Based on Initial State Variable Correction

    Directory of Open Access Journals (Sweden)

    Kuang Li

    2017-12-01

    Full Text Available The influence of initial state variables on flood forecasting accuracy with conceptual hydrological models is analyzed in this paper, and a novel flood forecasting method based on the correction of initial state variables is proposed. The new method is abbreviated as ISVC (Initial State Variable Correction). The ISVC takes the residual between the measured and forecasted flows during the initial period of the flood event as the objective function, and it uses a particle swarm optimization algorithm to correct the initial state variables, which are then used to drive the flood forecasting model. Historical flood events of 11 watersheds in south China are forecasted and verified, and important issues concerning ISVC application are then discussed. The study results show that the ISVC is effective and applicable in flood forecasting tasks, and that it can significantly improve flood forecasting accuracy in most cases.
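
    A compact particle swarm optimizer of the kind ISVC relies on: particles are candidate initial state vectors, and the objective is the residual between measured and forecasted flows over the initial period. The hydrological model call is abstracted into objective(); the constants are conventional PSO defaults, not the paper's settings:

        import numpy as np

        def pso_correct_states(objective, bounds, n_particles=30, iters=100):
            """Minimize objective(state) over box-constrained initial state
            variables; bounds is a list of (low, high) pairs per variable."""
            rng = np.random.default_rng(42)
            lo, hi = np.array(bounds, dtype=float).T
            x = rng.uniform(lo, hi, (n_particles, len(bounds)))
            v = np.zeros_like(x)
            pbest = x.copy()
            pval = np.array([objective(p) for p in x])
            gbest = pbest[pval.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([objective(p) for p in x])
                better = f < pval
                pbest[better], pval[better] = x[better], f[better]
                gbest = pbest[pval.argmin()].copy()
            return gbest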

  9. Method for the determination of spectroradiometric corrections of data from multichannel aerospatial spectrometers

    International Nuclear Information System (INIS)

    Bakalova, K.P.; Bakalov, D.D.

    1984-01-01

    Various factors in the aerospatial conditions of operation may lead to changes in the transmission characteristics of the electron-optical medium of spectrometers for remote sensing of the Earth. Consequently, the data obtained need spectroradiometric corrections. In this paper, a unified approach to determining these corrections is suggested. The method uses measurements of standard sources with a smooth emission spectrum much wider than the width of the channels, such as a lamp with an incandescent filament, the Sun, and other natural objects, without special spectral reference standards. The presence of additional information about the character of the changes occurring in the measurements may considerably simplify the determination of the corrections through setting appropriate values of a coefficient and the spectral shift. The method has been used with the Spectrum-15 and SMP-32 spectrometers on the Salyut-7 orbital station and the 'Meteor-Priroda' satellite of the Bulgaria-1300-II project.

  10. Precise method for correcting count-rate losses in scintillation cameras

    International Nuclear Information System (INIS)

    Madsen, M.T.; Nickles, R.J.

    1986-01-01

    Quantitative studies performed with scintillation detectors often require corrections for lost data because of the finite resolving time of the detector. Methods that monitor losses by means of a reference source or pulser have unacceptably large statistical fluctuations associated with their correction factors. Analytic methods that model the detector as a paralyzable system require an accurate estimate of the system resolving time. Because the apparent resolving time depends on many variables, including the window setting, source distribution, and the amount of scattering material, significant errors can be introduced by relying on a resolving time obtained from phantom measurements. These problems can be overcome by curve-fitting the data from a reference source to a paralyzable model in which the true total count rate in the selected window is estimated from the observed total rate. The resolving time becomes a free parameter in this method which is optimized to provide the best fit to the observed reference data. The fitted curve has the inherent accuracy of the reference source method with the precision associated with the observed total image count rate. Correction factors can be simply calculated from the ratio of the true reference source rate and the fitted curve. As a result, the statistical uncertainty of the data corrected by this method is not significantly increased
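
    The paralyzable model behind the method relates the observed rate m to the true rate n by m = n·exp(-n·τ); fitting the resolving time τ to reference-source data and forming correction factors looks roughly like this (synthetic data; the paper's method estimates the true rate from the observed total rate, which this sketch simplifies):

        import numpy as np
        from scipy.optimize import curve_fit

        def paralyzable(n, tau):
            """Observed rate m for true rate n in a paralyzable detector."""
            return n * np.exp(-n * tau)

        # Synthetic reference data: known true rates, noisy observed rates
        true_n = np.array([1e4, 5e4, 1e5, 2e5, 4e5])   # counts/s
        rng = np.random.default_rng(7)
        observed = paralyzable(true_n, 4e-6) * (1 + 0.01 * rng.normal(size=5))

        (tau,), _ = curve_fit(paralyzable, true_n, observed, p0=[1e-6])
        correction = true_n / paralyzable(true_n, tau)  # count-loss factors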

  11. Comparison of fluorescence rejection methods of baseline correction and shifted excitation Raman difference spectroscopy

    Science.gov (United States)

    Cai, Zhijian; Zou, Wenlong; Wu, Jianhong

    2017-10-01

    Raman spectroscopy has been extensively used in biochemical tests, explosive detection, food additive analysis and environmental pollutant monitoring. However, fluorescence interference poses a major problem for the application of portable Raman spectrometers. Currently, baseline correction and shifted-excitation Raman difference spectroscopy (SERDS) are the most prevalent fluorescence suppression methods. In this paper, we compared the performance of the baseline correction and SERDS methods, experimentally and by simulation. The comparison demonstrates that baseline correction can produce an acceptable fluorescence-removed Raman spectrum if the original Raman signal has a good signal-to-noise ratio, but it cannot recover small Raman signals from a large noise background. By using the SERDS method, Raman signals, even those very weak compared to the fluorescence intensity and noise level, can be clearly extracted, and the fluorescence background can be completely rejected. The Raman spectrum recovered by SERDS has a good signal-to-noise ratio. It is shown that baseline correction is more suitable for large bench-top Raman systems with better signal quality, while the SERDS method is more suitable for noisy devices, especially portable Raman spectrometers.
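
    The essence of SERDS in a few lines: the broadband fluorescence is nearly unchanged under the two slightly shifted excitations and cancels in the difference, while the Raman bands shift with the laser line; a simple background-free spectrum can then be recovered by integrating the difference (a common elementary reconstruction, not necessarily the one used in the paper):

        import numpy as np

        def serds(spec_shift_a, spec_shift_b):
            """Subtract the two shifted-excitation spectra (the fluorescence
            cancels), then integrate the derivative-like difference to
            approximate a background-free Raman spectrum."""
            diff = spec_shift_a - spec_shift_b
            return np.cumsum(diff - diff.mean())  # de-trend, then integrate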

  12. LDPC Code Design for Nonuniform Power-Line Channels

    Directory of Open Access Journals (Sweden)

    Sanaei Ali

    2007-01-01

    Full Text Available We investigate low-density parity-check code design for discrete multitone channels over power lines. Discrete multitone channels are well modeled as nonuniform channels, that is, different bits experience different channel parameters. We propose a coding system for discrete multitone channels that allows a single code to be used over a nonuniform channel. The number of code parameters for the proposed system is much greater than in a conventional channel; therefore, search-based optimization methods are impractical. We first formulate the problem of optimizing the rate of an irregular low-density parity-check code, with guaranteed convergence over a general nonuniform channel, as an iterative linear program, which is significantly more efficient than search-based methods. We then apply this technique to a typical power-line channel. The methodology of this paper is directly applicable to all decoding algorithms for which a density evolution analysis is possible.

  13. Development and evaluation of attenuation and scatter correction techniques for SPECT using the Monte Carlo method

    International Nuclear Information System (INIS)

    Ljungberg, M.

    1990-05-01

    Quantitative scintigraphic images, obtained with NaI(Tl) scintillation cameras, are limited by photon attenuation and by the contribution from scattered photons. A Monte Carlo program was developed in order to evaluate these effects. Simple source-phantom geometries and more complex nonhomogeneous cases can be simulated. Comparisons with experimental data for both homogeneous and nonhomogeneous regions and with published results have shown good agreement. The usefulness for simulation of parameters in scintillation camera systems, stationary as well as SPECT systems, has also been demonstrated. An attenuation correction method based on density maps and build-up functions has been developed. The maps were obtained from a transmission measurement using an external 57Co flood source, and the build-up was simulated by the Monte Carlo code. Two scatter correction methods, the dual-window method and the convolution-subtraction method, have been compared using the Monte Carlo method. The aim was to compare the estimated scatter with the true scatter in the photo-peak window. It was concluded that accurate depth-dependent scatter functions are essential for proper scatter correction. A new scatter and attenuation correction method has been developed based on scatter line-spread functions (SLSF) obtained for different depths and lateral positions in the phantom. An emission image is used to determine the source location in order to estimate the scatter in the photo-peak window. Simulation studies of a clinically realistic source in different positions in cylindrical water phantoms were made for three photon energies. The SLSF correction method was also evaluated by simulation studies for 1. a myocardial source, 2. a uniform source in the lungs and 3. a tumour located in the lungs in a realistic, nonhomogeneous computer phantom. The results showed that quantitative images could be obtained in nonhomogeneous regions. (67 refs.)

  14. Comparatively Studied Color Correction Methods for Color Calibration of Automated Microscopy Complex of Biomedical Specimens

    Directory of Open Access Journals (Sweden)

    T. A. Kravtsova

    2016-01-01

    Full Text Available The paper considers the task of generating the requirements and creating a calibration target for automated microscopy systems (AMS) of biomedical specimens to provide the invariance of algorithms and software to the hardware configuration. The required number of color fields of the calibration target and their color coordinates are mostly determined by the color correction method, for which the coefficients of the equations are estimated during the calibration process. The paper analyses existing color calibration techniques for digital imaging systems using an optical microscope and shows that there is a lack of published comparative studies demonstrating a particularly useful color correction method for microscopic images. A comparative study of ten image color correction methods in RGB space using polynomials and combinations of color coordinates of different orders was carried out. The method of conditioned least squares was applied to estimate the coefficients in the color correction equations using captured images of 217 color fields of the calibration target Kodak Q60-E3. The regularization parameter in this method was chosen experimentally. It was demonstrated that the best color correction quality characteristics are provided by the method that uses a combination of color coordinates of the 3rd order. The influence of the number and the set of color fields included in the calibration target on color correction quality for microscopic images was also studied. Six training sets containing 30, 35, 40, 50, 60 and 80 color fields, and a test set of 47 color fields not included in any of the training sets, were formed. It was found that the training set of 60 color fields minimizes the color correction error values for both operating modes of the digital camera: using "default" color settings and with automatic white balance. At the same time it was established that the use of color fields from the widely used Kodak Q60-E3 target does not...
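
    As a concrete illustration of a conditioned (regularized) least-squares fit of a polynomial color correction, the sketch below builds 3rd-order polynomial combinations of the RGB coordinates and fits a ridge-regularized correction matrix. The patch data, regularization value, and training-set size are stand-ins, not the paper's Kodak Q60-E3 measurements.

```python
import numpy as np

def poly_features(rgb, order=3):
    """All monomials r**i * g**j * b**k with 1 <= i+j+k <= order,
    plus a constant term. rgb: (N, 3) array."""
    r, g, b = rgb.T
    feats = [np.ones_like(r)]
    for i in range(order + 1):
        for j in range(order + 1 - i):
            for k in range(order + 1 - i - j):
                if 1 <= i + j + k:
                    feats.append(r**i * g**j * b**k)
    return np.stack(feats, axis=1)

def fit_color_correction(measured, reference, order=3, lam=1e-3):
    """Conditioned least squares (ridge regression): lam is the
    experimentally chosen regularization parameter."""
    X = poly_features(measured, order)
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ reference)

# Hypothetical stand-in for calibration patches and their target values.
rng = np.random.default_rng(1)
reference = rng.uniform(0, 1, (60, 3))                  # 60 target fields
measured = reference ** 1.1 + rng.normal(0, 0.01, (60, 3))
W = fit_color_correction(measured, reference)
corrected = poly_features(measured) @ W
print("RMS error:", np.sqrt(np.mean((corrected - reference) ** 2)))
```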

  15. Assessment of Atmospheric Correction Methods for Sentinel-2 MSI Images Applied to Amazon Floodplain Lakes

    Directory of Open Access Journals (Sweden)

    Vitor Souza Martins

    2017-03-01

    Full Text Available Satellite data provide the only viable means for extensive monitoring of remote and large freshwater systems, such as the Amazon floodplain lakes. However, an accurate atmospheric correction is required to retrieve water constituents based on surface water reflectance (R_W). In this paper, we assessed three atmospheric correction methods (Second Simulation of a Satellite Signal in the Solar Spectrum (6SV), ACOLITE and Sen2Cor) applied to an image acquired by the MultiSpectral Instrument (MSI) on board the European Space Agency’s Sentinel-2A platform, using concurrent in-situ measurements over four Amazon floodplain lakes in Brazil. In addition, we evaluated the correction of forest adjacency effects based on the linear spectral unmixing model, and performed a temporal evaluation of atmospheric constituents from Multi-Angle Implementation of Atmospheric Correction (MAIAC) products. The validation of MAIAC aerosol optical depth (AOD) indicated satisfactory retrievals over the Amazon region, with a correlation coefficient (R) of ~0.7 and 0.85 for Terra and Aqua products, respectively. The seasonal distribution of the cloud cover and AOD revealed a contrast between the first and second half of the year in the study area. Furthermore, simulation of top-of-atmosphere (TOA) reflectance showed a critical contribution of atmospheric effects (>50%) to all spectral bands, especially the deep blue (92%–96%) and blue (84%–92%) bands. The atmospheric correction results of the visible bands illustrate the limitation of the methods over dark lakes (R_W < 1%), and a better match of the R_W shape compared with in-situ measurements over turbid lakes, although the accuracy varied depending on the spectral bands and methods. Particularly above 705 nm, R_W was highly affected by Amazon forest adjacency, and the proposed adjacency effect correction minimized the spectral distortions in R_W (RMSE < 0.006). Finally, an extensive validation of the methods is required for...

  16. Asteroseismic modelling of solar-type stars: internal systematics from input physics and surface correction methods

    Science.gov (United States)

    Nsamba, B.; Campante, T. L.; Monteiro, M. J. P. F. G.; Cunha, M. S.; Rendle, B. M.; Reese, D. R.; Verma, K.

    2018-04-01

    Asteroseismic forward modelling techniques are being used to determine fundamental properties (e.g. mass, radius, and age) of solar-type stars. The need to take into account all possible sources of error is of paramount importance for a robust determination of stellar properties. We present a study of 34 solar-type stars for which high signal-to-noise asteroseismic data are available from multi-year Kepler photometry. We explore the internal systematics on the stellar properties, that is, those associated with the uncertainty in the input physics used to construct the stellar models. In particular, we explore the systematics arising from: (i) the inclusion of the diffusion of helium and heavy elements; and (ii) the uncertainty in the solar metallicity mixture. We also assess the systematics arising from (iii) different surface correction methods used in optimisation/fitting procedures. The systematics arising from comparing results of models with and without diffusion are found to be 0.5%, 0.8%, 2.1%, and 16% in mean density, radius, mass, and age, respectively. The internal systematics in age are significantly larger than the statistical uncertainties. We find the internal systematics resulting from the uncertainty in the solar metallicity mixture to be 0.7% in mean density, 0.5% in radius, 1.4% in mass, and 6.7% in age. The surface correction method by Sonoi et al. and Ball & Gizon's two-term correction produce the lowest internal systematics among the different correction methods, namely, ˜1%, ˜1%, ˜2%, and ˜8% in mean density, radius, mass, and age, respectively. Stellar masses obtained using the surface correction methods by Kjeldsen et al. and Ball & Gizon's one-term correction are systematically higher than those obtained using frequency ratios.
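
    For readers unfamiliar with the two-term correction, the published Ball & Gizon (2014) form is a linear least-squares fit of the model-minus-observed frequency differences to two basis terms scaled by the inverse mode inertia. The sketch below implements that fit; the frequency, inertia, and acoustic cut-off values are hypothetical numbers for illustration only.

```python
import numpy as np

def ball_gizon_two_term(nu_obs, nu_mod, inertia, nu_ac):
    """Fit d(nu) = [a * (nu/nu_ac)**-1 + b * (nu/nu_ac)**3] / I by least
    squares and return the surface-corrected model frequencies."""
    x = nu_mod / nu_ac
    A = np.column_stack([x**-1, x**3]) / inertia[:, None]
    (a, b), *_ = np.linalg.lstsq(A, nu_obs - nu_mod, rcond=None)
    return nu_mod + A @ np.array([a, b]), (a, b)

# Illustrative inputs (muHz); real applications use full mode sets.
nu_mod = np.array([1800.0, 1900.0, 2000.0, 2100.0])
nu_obs = nu_mod - np.array([2.0, 2.6, 3.3, 4.1])
inertia = np.array([1.0, 0.9, 0.8, 0.7])
nu_corr, (a, b) = ball_gizon_two_term(nu_obs, nu_mod, inertia, nu_ac=3100.0)
print(nu_corr, a, b)
```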

  17. A novel energy conversion based method for velocity correction in molecular dynamics simulations

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Hanhui [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China); Collaborative Innovation Center of Advanced Aero-Engine, Hangzhou 310027 (China); Liu, Ningning [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China); Ku, Xiaoke, E-mail: xiaokeku@zju.edu.cn [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China); Fan, Jianren [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China)

    2017-05-01

    Molecular dynamics (MD) simulation has become an important tool for studying micro- or nano-scale dynamics and the statistical properties of fluids and solids. In MD simulations, there are two main approaches: equilibrium and non-equilibrium molecular dynamics (EMD and NEMD). In this paper, a new energy conversion based correction (ECBC) method for MD is developed. Unlike traditional systematic corrections based on macroscopic parameters, the ECBC method is derived strictly from the physical interaction processes between pairs of molecules or atoms. The ECBC method can be applied to EMD and NEMD directly. With this method, the difference between EMD and NEMD is eliminated, and no macroscopic parameters such as externally imposed potentials or coefficients are needed. This method lifts many limits on the use of MD and greatly extends its scope of application.

  18. Semiempirical Quantum-Chemical Orthogonalization-Corrected Methods: Benchmarks for Ground-State Properties.

    Science.gov (United States)

    Dral, Pavlo O; Wu, Xin; Spörkel, Lasse; Koslowski, Axel; Thiel, Walter

    2016-03-08

    The semiempirical orthogonalization-corrected OMx methods (OM1, OM2, and OM3) go beyond the standard MNDO model by including additional interactions in the electronic structure calculation. When augmented with empirical dispersion corrections, the resulting OMx-Dn approaches offer a fast and robust treatment of noncovalent interactions. Here we evaluate the performance of the OMx and OMx-Dn methods for a variety of ground-state properties using a large and diverse collection of benchmark sets from the literature, with a total of 13035 original and derived reference data. Extensive comparisons are made with the results from established semiempirical methods (MNDO, AM1, PM3, PM6, and PM7) that also use the NDDO (neglect of diatomic differential overlap) integral approximation. Statistical evaluations show that the OMx and OMx-Dn methods outperform the other methods for most of the benchmark sets.

  19. A novel energy conversion based method for velocity correction in molecular dynamics simulations

    International Nuclear Information System (INIS)

    Jin, Hanhui; Liu, Ningning; Ku, Xiaoke; Fan, Jianren

    2017-01-01

    Molecular dynamics (MD) simulation has become an important tool for studying micro- or nano-scale dynamics and the statistical properties of fluids and solids. In MD simulations, there are two main approaches: equilibrium and non-equilibrium molecular dynamics (EMD and NEMD). In this paper, a new energy conversion based correction (ECBC) method for MD is developed. Unlike traditional systematic corrections based on macroscopic parameters, the ECBC method is derived strictly from the physical interaction processes between pairs of molecules or atoms. The ECBC method can be applied to EMD and NEMD directly. With this method, the difference between EMD and NEMD is eliminated, and no macroscopic parameters such as externally imposed potentials or coefficients are needed. This method lifts many limits on the use of MD and greatly extends its scope of application.

  20. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers.

    Science.gov (United States)

    Dobie, Robert A; Wojcik, Nancy C

    2015-07-13

    The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to account for the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Regression analysis was used to derive new age-correction values using audiometric data from the 1999-2006 US NHANES. Fitting the NHANES median better-ear thresholds to simple polynomial equations, new age-correction values were generated for both men and women for ages 20-75 years. The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20-75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61-75 years. Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers.
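
    The regression step can be sketched in a few lines: fit a simple polynomial to median thresholds versus age, then read the age-correction value off as the difference between the fitted curve at the current and baseline ages. The threshold values below are illustrative placeholders, not NHANES data.

```python
import numpy as np

# Illustrative median better-ear thresholds (dB HL) at one audiometric
# frequency, by age; a stand-in for the NHANES medians.
ages = np.array([20, 30, 40, 50, 60, 70, 75])
thresholds = np.array([3, 4, 6, 10, 16, 25, 30])

# Fit a simple polynomial, as the paper does for the NHANES medians.
H = np.poly1d(np.polyfit(ages, thresholds, deg=3))

def age_correction(age_baseline, age_current):
    """OSHA-style correction: expected ageing-related threshold change
    between the baseline audiogram and the current one."""
    return H(age_current) - H(age_baseline)

print(round(age_correction(25, 61), 1), "dB")
```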

  1. Regression dilution bias: tools for correction methods and sample size calculation.

    Science.gov (United States)

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
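
    A minimal sketch of the classical correction, assuming the standard additive measurement-error model the abstract describes: the naive slope is divided by a reliability (attenuation) factor estimated from paired replicate measurements in a reliability study. The simulated data are illustrative only.

```python
import numpy as np

def dilution_corrected_slope(x1, x2, y):
    """x1, x2: replicate measurements of the risk factor on the same
    individuals (reliability study); y: continuous outcome."""
    var_error = np.var(x1 - x2, ddof=1) / 2.0        # within-person variance
    lam = 1.0 - var_error / np.var(x1, ddof=1)       # reliability ratio
    slope_naive = np.polyfit(x1, y, 1)[0]            # attenuated toward zero
    return slope_naive / lam

rng = np.random.default_rng(1)
true_x = rng.normal(0, 1, 500)
x1 = true_x + rng.normal(0, 0.6, 500)
x2 = true_x + rng.normal(0, 0.6, 500)
y = 2.0 * true_x + rng.normal(0, 1, 500)
# Naive slope ~1.5 (diluted); corrected slope recovers ~2.0.
print(np.polyfit(x1, y, 1)[0], dilution_corrected_slope(x1, x2, y))
```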

  2. A New High-Precision Correction Method of Temperature Distribution in Model Stellar Atmospheres

    Directory of Open Access Journals (Sweden)

    Sapar A.

    2013-06-01

    Full Text Available The main features of the temperature correction methods suggested and used in the modeling of plane-parallel stellar atmospheres are discussed, and the main features of the new method are described. A derivation of the formulae for a version of the Unsöld-Lucy method, used by us in the SMART (Stellar Model Atmospheres and Radiative Transport) software for modeling stellar atmospheres, is presented. The method is based on correcting the model temperature distribution by minimizing the differences of the flux from its accepted constant value and by requiring the lack of a flux gradient, meaning that local source and sink terms of radiation must be equal. The final relative flux constancy obtainable by the method with the SMART code turned out to have a precision of the order of 0.5%. Some rapidly converging iteration steps can be useful before starting the high-precision model correction. Corrections of both the flux value and its gradient, as in the Unsöld-Lucy method, are unavoidably needed to obtain high-precision flux constancy. A new temperature correction method to obtain high-precision flux constancy for plane-parallel LTE model stellar atmospheres is proposed and studied. The non-linear optimization is carried out by least squares, in which the Levenberg-Marquardt correction method and thereafter an additional correction by a Broyden iteration loop were applied. Small finite differences of temperature (δT/T = 10⁻³) are used in the computations. A single Jacobian step appears to be mostly sufficient to get flux constancy of the order of 10⁻² %. The dual numbers and their generalization – the dual complex numbers (the duplex numbers) – make it possible to obtain the derivatives automatically in the nilpotent part of the dual numbers. A version of the SMART software is being refactored to dual and duplex numbers, which makes it possible to get rid of the finite differences as an additional source of lowered precision of the...
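
    The dual-number mechanism the abstract refers to can be shown in a minimal sketch: carrying a unit nilpotent part through ordinary arithmetic yields exact derivatives, replacing finite differences. The class and the sample expression below are illustrative, not the SMART implementation.

```python
class Dual:
    """Minimal dual number a + b*eps with eps**2 = 0: the nilpotent part
    carries the derivative through arithmetic automatically."""
    def __init__(self, re, eps=0.0):
        self.re, self.eps = re, eps
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o); return Dual(self.re + o.re, self.eps + o.eps)
    def __sub__(self, o):
        o = self._lift(o); return Dual(self.re - o.re, self.eps - o.eps)
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.re * o.re, self.re * o.eps + self.eps * o.re)
    def __pow__(self, n):                       # n: ordinary number
        return Dual(self.re**n, n * self.re**(n - 1) * self.eps)
    __radd__, __rmul__ = __add__, __mul__

T = Dual(5772.0, 1.0)        # seed dT/dT = 1 in the nilpotent part
B = 2.0 * T**4 - T           # any arithmetic expression in T
print(B.re, B.eps)           # value and exact derivative 8*T**3 - 1
```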

  3. High-order multi-implicit spectral deferred correction methods for problems of reactive flow

    International Nuclear Information System (INIS)

    Bourlioux, Anne; Layton, Anita T.; Minion, Michael L.

    2003-01-01

    Models for reacting flow are typically based on advection-diffusion-reaction (A-D-R) partial differential equations. Many practical cases correspond to situations where the relevant time scales associated with each of the three sub-processes can be widely different, leading to disparate time-step requirements for robust and accurate time-integration. In particular, interesting regimes in combustion correspond to systems in which diffusion and reaction are much faster processes than advection. The numerical strategy introduced in this paper is a general procedure to account for this time-scale disparity. The proposed methods are high-order multi-implicit generalizations of spectral deferred correction methods (MISDC methods), constructed for the temporal integration of A-D-R equations. Spectral deferred correction methods compute a high-order approximation to the solution of a differential equation by using a simple, low-order numerical method to solve a series of correction equations, each of which increases the order of accuracy of the approximation. The key feature of MISDC methods is their flexibility in handling several sub-processes implicitly but independently, while avoiding the splitting errors present in traditional operator-splitting methods and also allowing for different time steps for each process. The stability, accuracy, and efficiency of MISDC methods are first analyzed using a linear model problem and the results are compared to semi-implicit spectral deferred correction methods. Furthermore, numerical tests on simplified reacting flows demonstrate the expected convergence rates for MISDC methods of orders three, four, and five. The gain in efficiency by independently controlling the sub-process time steps is illustrated for nonlinear problems, where reaction and diffusion are much stiffer than advection. Although the paper focuses on this specific time-scale ordering, the generalization to any ordering combination is straightforward.
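
    To make the base mechanism concrete, here is a self-contained sketch of a plain explicit spectral deferred correction step for a scalar ODE y' = f(t, y): a low-order Euler pass gives a provisional solution, and each correction sweep adds the spectrally integrated residual, raising the formal order by one. The node and sweep counts are illustrative; the multi-implicit splitting of advection, diffusion, and reaction that defines MISDC is omitted.

```python
import numpy as np

def integration_matrix(nodes):
    """S[m, j] = integral over [nodes[m], nodes[m+1]] of the j-th Lagrange
    basis polynomial through the nodes (spectral quadrature)."""
    M = len(nodes)
    S = np.zeros((M - 1, M))
    for j in range(M):
        others = np.delete(nodes, j)
        pj = np.poly(others) / np.prod(nodes[j] - others)
        Pj = np.polyint(pj)
        for m in range(M - 1):
            S[m, j] = np.polyval(Pj, nodes[m + 1]) - np.polyval(Pj, nodes[m])
    return S

def sdc_step(f, t0, y0, dt, n_nodes=4, sweeps=3):
    """One explicit SDC step for y' = f(t, y) on [t0, t0 + dt]."""
    tau = 0.5 * (1 - np.cos(np.pi * np.arange(n_nodes) / (n_nodes - 1)))
    t = t0 + dt * tau                       # Chebyshev-extrema nodes
    S = integration_matrix(t)
    y = np.full(n_nodes, y0, dtype=float)
    for m in range(n_nodes - 1):            # provisional forward Euler
        y[m + 1] = y[m] + (t[m + 1] - t[m]) * f(t[m], y[m])
    for _ in range(sweeps):                 # correction sweeps
        fk = np.array([f(tm, ym) for tm, ym in zip(t, y)])
        ynew = y.copy()
        for m in range(n_nodes - 1):
            dtm = t[m + 1] - t[m]
            ynew[m + 1] = (ynew[m]
                           + dtm * (f(t[m], ynew[m]) - fk[m])
                           + S[m] @ fk)
        y = ynew
    return y[-1]

# y' = -y over [0, 1] in 4 SDC steps; compare against exp(-1).
y, t = 1.0, 0.0
for _ in range(4):
    y = sdc_step(lambda s, u: -u, t, y, 0.25); t += 0.25
print(y, np.exp(-1.0))
```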

  4. The unbiasedness of a generalized mirage boundary correction method for Monte Carlo integration estimators of volume

    Science.gov (United States)

    Thomas B. Lynch; Jeffrey H. Gove

    2014-01-01

    The typical "double counting" application of the mirage method of boundary correction cannot be applied to sampling systems such as critical height sampling (CHS) that are based on a Monte Carlo sample of a tree (or debris) attribute because the critical height (or other random attribute) sampled from a mirage point is generally not equal to the critical...

  5. A non-parametric method for correction of global radiation observations

    DEFF Research Database (Denmark)

    Bacher, Peder; Madsen, Henrik; Perers, Bengt

    2013-01-01

    ...in the observations are corrected. These are errors such as: tilt in the leveling of the sensor, shadowing from surrounding objects, clipping and saturation in the signal processing, and errors from dirt and wear. The method is based on a statistical non-parametric clear-sky model which is applied to both...

  6. An FFT-based Method for Attenuation Correction in Fluorescence Confocal Microscopy

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Bakker, M.

    1993-01-01

    A problem in three-dimensional imaging by a confocal scanning laser microscope (CSLM) in the (epi)fluorescence mode is the darkening of the deeper layers due to absorption and scattering of both the excitation and the fluorescence light. In this paper we propose a new method to correct for these...

  7. A brain MRI bias field correction method created in the Gaussian multi-scale space

    Science.gov (United States)

    Chen, Mingsheng; Qin, Mingxin

    2017-07-01

    A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by the convolution of the inhomogeneous MR image with a two-dimensional Gaussian function. In the multi-scale Gaussian space, the method retrieves the image details from the difference between the original image and the convolution image. Then, it obtains an image whose inhomogeneity is eliminated by the weighted sum of the image details in each layer of the space. Next, the bias field-corrected MR image is retrieved after a γ (gamma) correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 MRI and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
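
    A minimal sketch of the multi-scale idea, assuming uniform weights, illustrative Gaussian scales, and a simple gamma value (none of which are specified in the abstract): the details are the differences between the image and its Gaussian convolutions, their weighted sum discards the slowly varying bias field, and a gamma step restores contrast.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_bias_correct(img, sigmas=(2, 4, 8, 16, 32), gamma=0.8):
    """Bias-field suppression by a weighted sum of multi-scale details,
    followed by gamma correction. Parameters are illustrative."""
    img = img.astype(float)
    w = 1.0 / len(sigmas)
    # Detail layers: image minus its Gaussian convolution at each scale.
    details = [img - gaussian_filter(img, s) for s in sigmas]
    corrected = sum(w * d for d in details)
    corrected -= corrected.min()                 # shift to non-negative
    corrected /= corrected.max() + 1e-12         # normalize to [0, 1]
    return corrected ** gamma                    # gamma correction
```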

  8. QED radiative correction for the single-W production using a parton shower method

    International Nuclear Information System (INIS)

    Kurihara, Y.; Fujimoto, J.; Ishikawa, T.; Shimizu, Y.; Kato, K.; Tobimatsu, K.; Munehisa, T.

    2001-01-01

    A parton shower method for the photonic radiative correction is applied to single W-boson production processes. The energy scale for the evolution of the parton shower is determined so that the correct soft-photon emission is reproduced. Photon spectra radiated from the partons are compared with those from the exact matrix elements, and show good agreement. Possible errors due to an inappropriate energy-scale selection or due to the ambiguity of the energy-scale determination are also discussed, particularly for measurements of triple gauge couplings. (orig.)

  9. Correction method for the error of diamond tool's radius in ultra-precision cutting

    Science.gov (United States)

    Wang, Yi; Yu, Jing-chi

    2010-10-01

    Compensation for the error of the diamond tool's cutting edge is a bottleneck technology hindering the direct formation of high-accuracy aspheric surfaces by single-point diamond turning. Traditionally, compensation was done according to the measurement result from a profilometer, which required a long measurement time and resulted in low processing efficiency. A new compensation method is put forward in this article, in which the correction of the error of the diamond tool's cutting edge is done according to the measurement result from a digital interferometer. First, the detailed theoretical calculation related to the compensation method is deduced. Then, the effect after compensation is simulated by computer. Finally, a φ50 mm workpiece was diamond turned and then correction-turned on a Nanotech 250. The tested surface achieved a high shape accuracy of PV 0.137λ and RMS 0.011λ, which confirms that the new compensation method agrees with the predictive analysis and offers high accuracy and fast error convergence.

  10. A distortion correction method for image intensifier and electronic portal images used in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Ioannidis, G T; Geramani, K N; Zamboglou, N [Strahlenklinik, Stadtische Kliniken Offenbach, Offenbach (Germany); Uzunoglu, N [Department of Electrical and Computer Engineering, National Technical University of Athens, Athens (Greece)

    1999-12-31

    At most radiation therapy departments, a simulator and an 'on-line' verification system for the treated volume, in the form of an electronic portal imaging device (EPID), are available. Networking and digital handling (saving, archiving, etc.) of the image information is a necessity in image processing procedures in order to evaluate verification and simulation recordings on the computer screen. Distortion correction is, on the other hand, a prerequisite for quantitative comparison of both image modalities. Another limiting factor for making quantitative assertions is the fact that the irradiation fields in radiotherapy are usually bigger than the field of view of an image intensifier. Several segments of the irradiation field must therefore be acquired. Using pattern recognition techniques, these segments can be composed into a single image. In this paper a distortion correction method is presented. The method is based upon a well-defined grid which is embedded on the image during the registration process. The video signal from the image intensifier is acquired and processed. The grid is then recognised using image processing techniques. Ideally, if all grid points are recognised, various methods can be applied in order to correct the distortion. But in practice this is not the case. Overlapping structures (bones, etc.) mean that not all of the grid points can be recognised. Mathematical models from graph theory are applied in order to reconstruct the whole grid. The deviation of the grid point positions from their nominal values is then used to calculate correction coefficients. This method (well-defined grid, grid recognition, correction factors) can also be applied to verification images from the EPID or to other image modalities, and therefore a quantitative comparison in radiation treatment is possible. The distortion correction method and its application to simulator images are presented. (authors)

  11. Gas dynamic improvement of the axial compressor design for reduction of the flow non-uniformity level

    Science.gov (United States)

    Matveev, V. N.; Baturin, O. V.; Kolmakova, D. A.; Popov, G. M.

    2017-01-01

    Circumferential nonuniformity of the gas flow is one of the main problems in gas turbine engines. Usually, the circumferential flow nonuniformity appears near the annular frames located in the flow passage of the engine. The presence of circumferential nonuniformity leads to increased dynamic stresses in the blade rows and to blade damage. The goal of this research was to find ways of reducing the flow non-uniformity that would not require fundamental changes to the engine design. A new method for reducing the circumferential nonuniformity of the gas flow is proposed that allows the prediction of the peak pressure values on the rotor blades without computationally expensive CFD calculations.

  12. New component-based normalization method to correct PET system models

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki

    2011-01-01

    Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former suffers from the problem that a huge number of counts is required in the blank scan data. The latter methods have therefore been proposed to obtain normalization coefficients with high statistical accuracy from a small number of counts in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the obtained reconstructed images depends on the accuracy of the system modeling. Therefore, the normalization weighting approach, in which normalization coefficients are applied directly to the system matrix instead of to a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct the system model accuracy. In the proposed method, two components are defined and calculated iteratively in such a way as to minimize the errors of the system modeling. To compare the proposed method and the direct method, we applied both methods to our small OpenPET prototype system. We achieved acceptable statistical accuracy of the normalization coefficients while reducing the count number of the blank scan data to one-fortieth of that required by the direct method. (author)

  13. A Time-Walk Correction Method for PET Detectors Based on Leading Edge Discriminators.

    Science.gov (United States)

    Du, Junwei; Schmall, Jeffrey P; Judenhofer, Martin S; Di, Kun; Yang, Yongfeng; Cherry, Simon R

    2017-09-01

    The leading edge timing pick-off technique is the simplest timing extraction method for PET detectors. Due to the inherent time-walk of the leading edge technique, corrections should be made to improve timing resolution, especially for time-of-flight PET. Time-walk correction can be done by utilizing the relationship between the threshold crossing time and the event energy on an event-by-event basis. In this paper, a time-walk correction method is proposed and evaluated using timing information from two identical detectors both using leading edge discriminators. This differs from other techniques that use an external dedicated reference detector, such as a fast PMT-based detector using constant fraction techniques to pick off timing information. In our proposed method, one detector was used as a reference detector to correct the time-walk of the other detector. Time-walk in the reference detector was minimized by using events within a small energy window (508.5-513.5 keV). To validate this method, a coincidence detector pair was assembled using two SensL MicroFB SiPMs and two 2.5 mm × 2.5 mm × 20 mm polished LYSO crystals. Coincidence timing resolutions using different time pick-off techniques were obtained at a bias voltage of 27.5 V and a fixed temperature of 20 °C. The coincidence timing resolutions without time-walk correction were 389.0 ± 12.0 ps (425-650 keV energy window) and 670.2 ± 16.2 ps (250-750 keV energy window). The timing resolution with time-walk correction improved to 367.3 ± 0.5 ps (425-650 keV) and 413.7 ± 0.9 ps (250-750 keV). For comparison, timing resolutions were 442.8 ± 12.8 ps (425-650 keV) and 476.0 ± 13.0 ps (250-750 keV) using constant fraction techniques, and 367.3 ± 0.4 ps (425-650 keV) and 413.4 ± 0.9 ps (250-750 keV) using a reference detector based on the constant fraction technique. These results show that the proposed leading edge based time-walk correction method works well. Timing resolution obtained...
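
    The event-by-event correction can be sketched compactly: fit the measured time difference against event energy, then subtract the fitted walk from each event. The polynomial model, energy range, and simulated walk shape below are assumptions for illustration, not the paper's calibration.

```python
import numpy as np

def fit_time_walk(energy, dt, deg=3):
    """Fit the timing offset dt (vs. a reference detector) as a
    polynomial in pulse energy."""
    return np.poly1d(np.polyfit(energy, dt, deg))

# Hypothetical data: time differences grow at low energies (time-walk).
rng = np.random.default_rng(2)
E = rng.uniform(250, 750, 5000)               # keV
dt = 2000.0 / E + rng.normal(0, 150, E.size)  # ps, illustrative walk + jitter
model = fit_time_walk(E, dt)
corrected = dt - model(E)                     # event-by-event correction
print("spread before/after (ps):", dt.std().round(1), corrected.std().round(1))
```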

  14. Hydraulic correction method (HCM) to enhance the efficiency of SRTM DEM in flood modeling

    Science.gov (United States)

    Chen, Huili; Liang, Qiuhua; Liu, Yong; Xie, Shuguang

    2018-04-01

    The Digital Elevation Model (DEM) is one of the most important controlling factors determining the simulation accuracy of hydraulic models. However, the currently available global topographic data are subject to limitations for application in 2-D hydraulic modeling, mainly due to the existence of vegetation bias, random errors and insufficient spatial resolution. A hydraulic correction method (HCM) for the SRTM DEM is proposed in this study to improve modeling accuracy. Firstly, we employ the global vegetation-corrected DEM (i.e. Bare-Earth DEM), developed from the SRTM DEM using both vegetation height and the SRTM vegetation signal. Then, a newly released DEM, with both vegetation bias and random errors removed (i.e. Multi-Error Removed DEM), is employed to overcome the limitation of height errors. Lastly, an approach to correct the Multi-Error Removed DEM is presented to account for the insufficient spatial resolution, ensuring flow connectivity of the river networks. The approach involves: (a) extracting river networks from the Multi-Error Removed DEM using an automated algorithm in ArcGIS; (b) correcting the location and layout of extracted streams with the aid of the Google Earth platform and remote sensing imagery; and (c) removing the positive biases of raised segments in the river networks based on bed slope to generate the hydraulically corrected DEM. The proposed HCM utilizes easily available data and tools to improve the flow connectivity of river networks without manual adjustment. To demonstrate the advantages of HCM, an extreme flood event in the Huifa River Basin (China) is simulated on the original DEM, Bare-Earth DEM, Multi-Error Removed DEM, and hydraulically corrected DEM using an integrated hydrologic-hydraulic model. A comparative analysis is subsequently performed to assess the simulation accuracy and performance of the four different DEMs, and favorable results have been obtained on the corrected DEM.
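
    Step (c) can be illustrated with a deliberately simplified stand-in: along an extracted river profile ordered from upstream to downstream, enforcing a non-increasing bed elevation removes raised segments and restores flow connectivity. The actual HCM uses bed slope rather than this plain running minimum, so treat the sketch as a simplification.

```python
import numpy as np

def remove_positive_bias(profile):
    """Remove raised segments along a river profile (upstream -> downstream)
    by enforcing non-increasing bed elevation. Simplified stand-in for the
    slope-based removal in HCM step (c)."""
    return np.minimum.accumulate(np.asarray(profile, dtype=float))

z = [102.0, 101.5, 103.2, 101.0, 101.4, 100.2]   # raised segments at idx 2, 4
print(remove_positive_bias(z))   # [102. 101.5 101.5 101. 101. 100.2]
```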

  15. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    Science.gov (United States)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of either one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
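
    A toy GTT-style sketch makes the prediction/correction split concrete. For the simple time-varying objective f(x, t) = 0.5 ||x - b(t)||^2 (chosen for illustration; the Hessian is the identity and the cross derivative is -b'(t)), the prediction step advances the iterate along the optimizer's velocity and the correction step takes gradient steps at the new time.

```python
import numpy as np

def track(b, db, x0, h=0.1, steps=50, gamma=0.5, corr_iters=1):
    """Prediction-correction tracking for f(x, t) = 0.5 * ||x - b(t)||**2."""
    x, errs = np.asarray(x0, float), []
    for k in range(steps):
        t = k * h
        x = x + h * db(t)                   # prediction: -H^-1 grad_tx f * h
        for _ in range(corr_iters):         # correction at time t + h
            x = x - gamma * (x - b(t + h))  # gradient step on f(., t + h)
        errs.append(np.linalg.norm(x - b(t + h)))
    return x, errs

b = lambda t: np.array([np.cos(t), np.sin(t)])    # moving optimizer
db = lambda t: np.array([-np.sin(t), np.cos(t)])  # its known velocity
x, errs = track(b, db, x0=[1.0, 0.0])
print("final tracking error:", errs[-1])
```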

  16. A method of bias correction for maximal reliability with dichotomous measures.

    Science.gov (United States)

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  17. Method of correcting eddy current magnetic fields in particle accelerator vacuum chambers

    Science.gov (United States)

    Danby, Gordon T.; Jackson, John W.

    1991-01-01

    A method for correcting magnetic field aberrations produced by eddy currents induced in a particle accelerator vacuum chamber housing is provided wherein correction windings are attached to selected positions on the housing and the windings are energized by transformer action from secondary coils, which coils are inductively coupled to the poles of electro-magnets that are powered to confine the charged particle beam within a desired orbit as the charged particles are accelerated through the vacuum chamber by a particle-driving rf field. The power inductively coupled to the secondary coils varies as a function of variations in the power supplied by the particle-accelerating rf field to a beam of particles accelerated through the vacuum chamber, so the current in the energized correction coils is effective to cancel eddy current flux fields that would otherwise be induced in the vacuum chamber by power variations in the particle beam.

  18. Consistent calculation of the polarization electric dipole moment by the shell-correction method

    International Nuclear Information System (INIS)

    Denisov, V.Yu.

    1992-01-01

    Macroscopic calculations of the polarization electric dipole moment which arises in nuclei with an octupole deformation are discussed in detail. This dipole moment is shown to depend on the position of the center of gravity. The conditions of consistency of the radii of the proton and neutron potentials and the radii of the proton and neutron surfaces, respectively, are discussed. These conditions must be incorporated in a shell-correction calculation of this dipole moment. A correct calculation of this moment by the shell-correction method is carried out. Dipole transitions between (on the one hand) levels belonging to an octupole vibrational band and (on the other) the ground state in rare-earth nuclei with a large quadrupole deformation are studied. 19 refs., 3 figs

  19. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    Science.gov (United States)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

    The laser-induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. The overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting processes are performed to obtain a lower residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectrum of five different concentrations of CuSO₄·5H₂O solution were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. It can be seen that the error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and can be applied to the decomposition and correction of overlapping peaks in LIBS spectra.
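
    The abstract does not spell out the exact feedback rule, so the sketch below is one plausible reading: fit overlapping Gaussian peaks, add the residual back onto the data, and refit, repeating a few rounds to drive the residual down. The peak shapes, positions, and synthetic data are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, s):
    return a * np.exp(-0.5 * ((x - mu) / s) ** 2)

def two_peaks(x, a1, m1, s1, a2, m2, s2):
    return gauss(x, a1, m1, s1) + gauss(x, a2, m2, s2)

def fit_with_error_compensation(x, y, p0, n_rounds=3):
    """Residual-feedback curve fitting: after each fit, the residual is
    fed back into the data and the overlapping peaks are refitted."""
    data = y.copy()
    for _ in range(n_rounds):
        p, _ = curve_fit(two_peaks, x, data, p0=p0)
        data = y + (y - two_peaks(x, *p))    # feed residual back
        p0 = p
    return p

# Synthetic overlapping Cu/Fe-like doublet in the 321-327 nm window.
x = np.linspace(321, 327, 400)
rng = np.random.default_rng(3)
y = two_peaks(x, 1.0, 323.0, 0.3, 0.7, 323.5, 0.25) + rng.normal(0, 0.01, x.size)
print(fit_with_error_compensation(x, y, p0=[1, 323, 0.3, 0.7, 323.6, 0.3]))
```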

  20. Correcting for cryptic relatedness by a regression-based genomic control method

    Directory of Open Access Journals (Sweden)

    Yang Yaning

    2009-12-01

    Full Text Available Abstract Background The genomic control (GC) method is a useful tool to correct for cryptic relatedness in population-based association studies. It was originally proposed for correcting for the variance inflation of Cochran-Armitage's additive trend test by using information from unlinked null markers, and was later generalized to be applicable to other tests, with the additional requirement that the null markers be matched with the candidate marker in allele frequencies. However, matching allele frequencies limits the number of available null markers and thus limits the applicability of the GC method. On the other hand, errors in genotype/allele frequencies may cause further bias and variance inflation and thereby aggravate the effect of GC correction. Results In this paper, we propose a regression-based GC method using null markers that are not necessarily matched in allele frequencies with the candidate marker. Variation of the allele frequencies of the null markers is adjusted by a regression method. Conclusion The proposed method can be readily applied to Cochran-Armitage trend tests other than the additive trend test, the Pearson's chi-square test and other robust efficiency tests. Simulation results show that the proposed method is effective in controlling type I error in the presence of population substructure.

  1. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    Science.gov (United States)

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  2. Attenuation correction with region growing method used in the positron emission mammography imaging system

    Science.gov (United States)

    Gu, Xiao-Yue; Li, Lin; Yin, Peng-Fei; Yun, Ming-Kai; Chai, Pei; Huang, Xian-Chao; Sun, Xiao-Li; Wei, Long

    2015-10-01

    The Positron Emission Mammography imaging system (PEMi) provides a novel nuclear diagnosis method dedicated to breast imaging. With a better resolution than whole-body PET, PEMi can detect millimeter-sized breast tumors. To address the requirement of semi-quantitative analysis with a radiotracer concentration map of the breast, a new attenuation correction method based on a three-dimensional seeded region growing image segmentation (3DSRG-AC) method has been developed. The method gives a 3D connected region as the segmentation result instead of image slices. The continuity property of the segmentation result makes this new method free of the activity variation of breast tissues. Choosing the threshold value is the key step in the segmentation method. The first valley in the grey-level histogram of the reconstructed image is set as the lower threshold, which works well in clinical application. Results show that attenuation correction for PEMi improves the image quality and the quantitative accuracy of the radioactivity distribution determination. Attenuation correction also improves the probability of detecting small and early breast tumors. Supported by Knowledge Innovation Project of The Chinese Academy of Sciences (KJCX2-EW-N06)
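
    The threshold-plus-growing logic can be sketched in a few lines: take the first valley of the (lightly smoothed) grey-level histogram as the lower threshold, then keep the 3D connected component containing a seed placed inside the breast. The smoothing window and fallback below are assumptions, not details from the paper.

```python
import numpy as np
from scipy import ndimage

def first_valley_threshold(volume, bins=256):
    """Lower threshold = first valley of the smoothed grey-level histogram."""
    hist, edges = np.histogram(volume, bins=bins)
    hist = np.convolve(hist, np.ones(5) / 5, mode="same")  # light smoothing
    for i in range(1, len(hist) - 1):
        if hist[i] < hist[i - 1] and hist[i] <= hist[i + 1]:
            return edges[i]
    return edges[len(hist) // 2]          # fallback if no clear valley

def grow_breast_region(volume, seed):
    """3D seeded region growing: one connected region above the threshold,
    rather than slice-by-slice masks. The seed must lie inside the breast."""
    mask = volume >= first_valley_threshold(volume)
    labels, _ = ndimage.label(mask)       # 3D connectivity
    return labels == labels[tuple(seed)]
```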

  3. Scatter correction method for x-ray CT using primary modulation: Phantom studies

    International Nuclear Information System (INIS)

    Gao Hewei; Fahrig, Rebecca; Bennett, N. Robert; Sun Mingshan; Star-Lack, Josh; Zhu Lei

    2010-01-01

    Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: a Catphan 600 phantom, an anthropomorphic chest phantom, and the Catphan 600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan 600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast-to-noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study investigates the impact of object size on the efficiency of our method. The scatter-to-primary ratio estimation error on the Catphan 600 phantom without any annulus (20 cm in diameter) is at the level of 0.04; it rises to 0.07 and 0.1 on the phantom with an...

  4. A software-based x-ray scatter correction method for breast tomosynthesis

    International Nuclear Information System (INIS)

    Jia Feng, Steve Si; Sechopoulos, Ioannis

    2011-01-01

    Purpose: To develop a software-based scatter correction method for digital breast tomosynthesis (DBT) imaging and investigate its impact on the image quality of tomosynthesis reconstructions of both phantoms and patients. Methods: A Monte Carlo (MC) simulation of x-ray scatter, with geometry matching that of the cranio-caudal (CC) view of a DBT clinical prototype, was developed using the Geant4 toolkit and used to generate maps of the scatter-to-primary ratio (SPR) of a number of homogeneous standard-shaped breasts of varying sizes. Dimension-matched SPR maps were then deformed and registered to DBT acquisition projections, allowing for the estimation of the primary x-ray signal acquired by the imaging system. Noise filtering of the estimated projections was then performed to reduce the impact of the quantum noise of the x-ray scatter. Three dimensional (3D) reconstruction was then performed using the maximum likelihood-expectation maximization (MLEM) method. This process was tested on acquisitions of a heterogeneous 50/50 adipose/glandular tomosynthesis phantom with embedded masses, fibers, and microcalcifications and on acquisitions of patients. The image quality of the reconstructions of the scatter-corrected and uncorrected projections was analyzed by studying the signal-difference-to-noise ratio (SDNR), the integral of the signal in each mass lesion (integrated mass signal, IMS), and the modulation transfer function (MTF). Results: The reconstructions of the scatter-corrected projections demonstrated superior image quality. The SDNR of masses embedded in a 5 cm thick tomosynthesis phantom improved 60%-66%, while the SDNR of the smallest mass in an 8 cm thick phantom improved by 59% (p < 0.01). The IMS of the masses in the 5 cm thick phantom also improved by 15%-29%, while the IMS of the masses in the 8 cm thick phantom improved by 26%-62% (p < 0.01). Some embedded microcalcifications in the tomosynthesis phantoms were visible only in the scatter-corrected...
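
    The per-projection arithmetic behind an SPR-based correction is simple. The sketch below is one way to realize the estimate-then-filter step described above: the primary follows directly from the SPR definition (total = primary × (1 + SPR)), and low-pass filtering the implied scatter before subtraction suppresses its quantum noise. The Gaussian filter width is an assumption, not a value from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scatter_correct_projection(measured, spr, noise_sigma=4.0):
    """Scatter correction from a registered scatter-to-primary-ratio map.
    measured, spr: 2D arrays of identical shape."""
    primary_est = measured / (1.0 + spr)            # from SPR definition
    # Scatter varies slowly across the detector, so smooth its estimate
    # to reduce quantum noise before subtracting.
    scatter_est = gaussian_filter(measured - primary_est, noise_sigma)
    return measured - scatter_est
```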

  5. Evaluation of a method for correction of scatter radiation in thorax cone beam CT

    International Nuclear Information System (INIS)

    Rinkel, J.; Dinten, J.M.; Esteve, F.

    2004-01-01

    Purpose: Cone beam CT (CBCT) enables three-dimensional imaging with isotropic resolution. X-ray scatter estimation is a major challenge for quantitative CBCT imaging of the thorax: the scatter level is significantly higher on cone beam systems compared to collimated fan beam systems. The effects of this scattered radiation are cupping artefacts, streaks, and quantification inaccuracies. The conventional beam-stop approach to scatter estimation can be used for CBCT but leads to a significant increase in dose and acquisition time. At CEA-LETI, an original scatter management process has been developed that requires no supplementary acquisition. Methods and Materials: This Analytical Plus Indexing-based (API) method of scatter correction in CBCT is based on scatter calibration through offline acquisitions with beam stops on lucite plates, combined with an analytical transformation derived from physical equations. This approach has been applied successfully in bone densitometry and mammography. To evaluate this method in CBCT, acquisitions from a thorax phantom with and without beam stops were performed. To compare different scatter correction approaches, the Feldkamp algorithm was applied to raw data corrected for scatter by the API and beam-stop approaches. Results: The API method provides results in good agreement with the beam-stop array approach, suppressing the cupping artefact. The influence of the scatter correction method on the noise in the reconstructed images was also evaluated. Conclusion: The results indicate that the API method is effective for quantitative CBCT imaging of the thorax. Compared to a beam-stop array method it needs a lower x-ray dose and shortens the acquisition time. (authors)

  6. Conservative multi-implicit integral deferred correction methods with adaptive mesh refinement

    International Nuclear Information System (INIS)

    Layton, A.T.

    2004-01-01

    In most models of reacting gas dynamics, the characteristic time scales of chemical reactions are much shorter than the hydrodynamic and diffusive time scales, rendering the reaction part of the model equations stiff. Moreover, nonlinear forcings may introduce into the solutions sharp gradients or shocks, the robust behavior and correct propagation of which require the use of specialized spatial discretization procedures. This study presents high-order conservative methods for the temporal integration of model equations of reacting flows. By means of a method of lines discretization on the flux difference form of the equations, these methods compute approximations to the cell-averaged or finite-volume solution. The temporal discretization is based on a multi-implicit generalization of integral deferred correction methods. The advection term is integrated explicitly, and the diffusion and reaction terms are treated implicitly but independently, with the splitting errors present in traditional operator splitting methods reduced via the integral deferred correction procedure. To reduce computational cost, time steps used to integrate processes with widely-differing time scales may differ in size. (author)

  7. Scatter measurement and correction method for cone-beam CT based on single grating scan

    Science.gov (United States)

    Huang, Kuidong; Shi, Wenlong; Wang, Xinyu; Dong, Yin; Chang, Taoqi; Zhang, Hua; Zhang, Dinghua

    2017-06-01

    In cone-beam computed tomography (CBCT) systems based on flat-panel detector imaging, the presence of scatter significantly reduces the quality of slices. Based on the concept of collimation, this paper presents a scatter measurement and correction method based on a single grating scan. First, according to the characteristics of CBCT imaging, the scan method using a single grating and the design requirements of the grating are analyzed and worked out. Second, by analyzing the composition of object projection images and object-and-grating projection images, a processing method for the scatter image at a single projection angle is proposed. In addition, to avoid additional scans, this paper proposes an angle interpolation method for scatter images to reduce scan cost. Finally, the experimental results show that the scatter images obtained by this method are accurate and reliable, and the effect of scatter correction is obvious. When the additional object-and-grating projection images are collected and interpolated at intervals of 30 deg, the scatter correction error of slices can still be controlled within 3%.

  8. The study on the X-ray correction method of long fracture displacement

    International Nuclear Information System (INIS)

    Jia Bin; Huang Ailing; Chen Fuzhong; Men Chunyan; Sui Chengzong; Cui Yiming; Yang Yundong

    2010-01-01

    Objective: To explore the image correction of fracture displacement by conventional X-ray photography (anteroposterior and lateral views) and to test it by computed tomography (CT). Methods: The correction method for fracture displacement was designed according to the geometry of X-ray photography. One mid-humeral fracture specimen, designed with lateral shift and angular displacement, was selected and scanned in the anteroposterior and lateral positions, respectively, and also volume scanned using CT; the data obtained from the volume scan were processed using multiplanar reconstruction (MPR) and shaded surface display (SSD). The displacement data derived from the X-ray images, from CT with MPR and SSD processing, and from the actual design of the specimen were compared respectively. Results: The differences in direction and degree of displacement between the corrected data of the X-ray images, the data from MPR and SSD, and the actual design of the specimen were small: location difference <1.5 mm, angle difference <1.5 degrees. Conclusion: Fracture displacement assessment by conventional X-ray photography with coordinate correction is reliable, and it helps to markedly improve the diagnostic accuracy of the degree of fracture displacement. (authors)

  9. Modular correction method of bending elastic modulus based on sliding behavior of contact point

    International Nuclear Information System (INIS)

    Ma, Zhichao; Zhao, Hongwei; Zhang, Qixun; Liu, Changyi

    2015-01-01

    During the three-point bending test, the sliding behavior of the contact point between the specimen and supports was observed, the sliding behavior was verified to affect the measurements of both deflection and span length, which directly affect the calculation of the bending elastic modulus. Based on the Hertz formula to calculate the elastic contact deformation and the theoretical calculation of the sliding behavior of the contact point, a theoretical model to precisely describe the deflection and span length as a function of bending load was established. Moreover, a modular correction method of bending elastic modulus was proposed, via the comparison between the corrected elastic modulus of three materials (H63 copper–zinc alloy, AZ31B magnesium alloy and 2026 aluminum alloy) and the standard modulus obtained from standard uniaxial tensile tests, the universal feasibility of the proposed correction method was verified. Also, the ratio of corrected to raw elastic modulus presented a monotonically decreasing tendency as the raw elastic modulus of materials increased. (technical note)

  10. Experimental aspects of buoyancy correction in measuring reliable highpressure excess adsorption isotherms using the gravimetric method.

    Science.gov (United States)

    Nguyen, Huong Giang T; Horn, Jarod C; Thommes, Matthias; van Zee, Roger D; Espinal, Laura

    2017-12-01

    Addressing reproducibility issues in adsorption measurements is critical to accelerating the path to discovery of new industrial adsorbents and to understanding adsorption processes. A National Institute of Standards and Technology Reference Material, RM 8852 (ammonium ZSM-5 zeolite), and two gravimetric instruments with asymmetric two-beam balances were used to measure high-pressure adsorption isotherms. This work demonstrates how common approaches to buoyancy correction, a key factor in obtaining the mass change due to surface excess gas uptake from the apparent mass change, can impact the adsorption isotherm data. Three different approaches to buoyancy correction were investigated and applied to the subcritical CO 2 and supercritical N 2 adsorption isotherms at 293 K. It was observed that measuring a collective volume for all balance components for the buoyancy correction (helium method) introduces an inherent bias in temperature partition when there is a temperature gradient (i.e. analysis temperature is not equal to instrument air bath temperature). We demonstrate that a blank subtraction is effective in mitigating the biases associated with temperature partitioning, instrument calibration, and the determined volumes of the balance components. In general, the manual and subtraction methods allow for better treatment of the temperature gradient during buoyancy correction. From the study, best practices specific to asymmetric two-beam balances and more general recommendations for measuring isotherms far from critical temperatures using gravimetric instruments are offered.
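
    The arithmetic of the blank-subtraction approach favored above can be sketched directly. All numbers below are illustrative placeholders, not measurements from the study: the blank (empty-holder) run removes buoyancy and drift of the balance components, leaving only the sample's own displaced-gas term to add back.

```python
def surface_excess(delta_m_apparent, delta_m_blank, gas_density, v_sample):
    """Blank-subtraction buoyancy correction for a gravimetric isotherm
    point. Masses in g, gas density in g/cm^3, sample volume in cm^3."""
    # The apparent mass change understates the uptake by the weight of
    # gas displaced by the sample; balance-component terms cancel in the
    # blank subtraction.
    return (delta_m_apparent - delta_m_blank) + gas_density * v_sample

# Illustrative: 10 mg apparent loss, blank run shows 25 mg loss,
# CO2 at elevated pressure (~0.1 g/cm^3), 0.3 cm^3 of zeolite.
print(surface_excess(-0.010, -0.025, 0.10, 0.30))  # 0.045 g surface excess
```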

  11. A novel method to correct for pitch and yaw patient setup errors in helical tomotherapy

    International Nuclear Information System (INIS)

    Boswell, Sarah A.; Jeraj, Robert; Ruchala, Kenneth J.; Olivera, Gustavo H.; Jaradat, Hazim A.; James, Joshua A.; Gutierrez, Alonso; Pearson, Dave; Frank, Gary; Mackie, T. Rock

    2005-01-01

    An accurate means of determining and correcting for daily patient setup errors is important to the outcome of cancer radiotherapy. While many tools have been developed to detect setup errors, difficulty may arise in accurately adjusting the patient to account for the rotational error components. A novel, automated method to correct for rotational patient setup errors in helical tomotherapy is proposed for a treatment couch that is restricted to motion along translational axes. In tomotherapy, only a narrow superior/inferior section of the target receives a dose at any instant; thus rotations in the sagittal and coronal planes may be approximately corrected for by very slow continuous couch motion in a direction perpendicular to the scanning direction. Results from proof-of-principle tests indicate that the method improves the accuracy of treatment delivery, especially for long and narrow targets. Rotational corrections about an axis perpendicular to the transverse plane continue to be implemented easily in tomotherapy by adjustment of the initial gantry angle.
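    A small-angle sketch of the underlying geometric idea follows; the sign conventions and the synchronization with the delivery are machine-specific details not given in the abstract.

        import numpy as np

        def couch_compensation_speeds(v_long, pitch_deg, roll_deg):
            """Translational couch speeds approximating a pitch/roll correction.

            Because only a thin axial slice is treated at any instant, a
            sagittal-plane (pitch) or coronal-plane (roll) setup rotation can
            be approximated by slow vertical / lateral couch motion
            synchronized with the longitudinal scan speed v_long.
            Small-angle sketch; signs are assumptions.
            """
            v_vert = v_long * np.tan(np.radians(pitch_deg))   # compensates pitch
            v_lat = v_long * np.tan(np.radians(roll_deg))     # compensates roll
            return v_vert, v_lat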

  12. A Correction Method for UAV Helicopter Airborne Temperature and Humidity Sensor

    Directory of Open Access Journals (Sweden)

    Longqing Fan

    2017-01-01

    This paper presents a correction method for UAV helicopter airborne temperature and humidity sensors, comprising an error correction scheme and a bias-calibration scheme. As rotor downwash inevitably introduces measurement error into helicopter airborne sensors, the error correction scheme constructs a model between the rotor-induced velocity and temperature and humidity by building the heat balance equation for the platinum resistance temperature sensor and a pressure correction term for the humidity sensor. The induced velocity at a spatial point below the rotor disc plane can be calculated as the sum of the induced velocities excited by the center-line vortex, the rotor disk vortex, and the skewed cylinder vortex, based on generalized vortex theory. In order to minimize the systematic biases, the bias-calibration scheme adopts a multiple linear regression to achieve results systematically consistent with the tethered balloon profiles. Two temperature and humidity sensors were mounted on a "Z-5" UAV helicopter in the field experiment. Overall, the results of applying the calibration method show that the temperature and relative humidity obtained by the UAV helicopter align closely with the tethered balloon profiles in providing measurements of temperature and humidity profiles within the marine atmospheric boundary layer.
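    The bias-calibration step is a standard multiple linear regression against the tethered-balloon reference; a minimal numpy sketch is shown below, where the choice of predictors (raw temperature, humidity, altitude) is an assumption for illustration.

        import numpy as np

        def fit_bias_model(T_uav, RH_uav, z, T_ref):
            """Fit a linear map from UAV readings to the balloon reference."""
            X = np.column_stack([np.ones_like(T_uav), T_uav, RH_uav, z])
            beta, *_ = np.linalg.lstsq(X, T_ref, rcond=None)
            return beta

        def apply_bias_model(beta, T_uav, RH_uav, z):
            """Return the bias-calibrated temperature."""
            X = np.column_stack([np.ones_like(T_uav), T_uav, RH_uav, z])
            return X @ beta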

  13. Perturbation theory corrections to the two-particle reduced density matrix variational method.

    Science.gov (United States)

    Juhasz, Tamas; Mazziotti, David A

    2004-07-15

    In the variational 2-particle-reduced-density-matrix (2-RDM) method, the ground-state energy is minimized with respect to the 2-particle reduced density matrix, constrained by N-representability conditions. Consider the N-electron Hamiltonian H(lambda) as a function of the parameter lambda where we recover the Fock Hamiltonian at lambda=0 and we recover the fully correlated Hamiltonian at lambda=1. We explore using the accuracy of perturbation theory at small lambda to correct the 2-RDM variational energies at lambda=1 where the Hamiltonian represents correlated atoms and molecules. A key assumption in the correction is that the 2-RDM method will capture a fairly constant percentage of the correlation energy for lambda in (0,1] because the nonperturbative 2-RDM approach depends more significantly upon the nature rather than the strength of the two-body Hamiltonian interaction. For a variety of molecules we observe that this correction improves the 2-RDM energies in the equilibrium bonding region, while the 2-RDM energies at stretched or nearly dissociated geometries, already highly accurate, are not significantly changed. At equilibrium geometries the corrected 2-RDM energies are similar in accuracy to those from coupled-cluster singles and doubles (CCSD), but at nonequilibrium geometries the 2-RDM energies are often dramatically more accurate as shown in the bond stretching and dissociation data for water and nitrogen. (c) 2004 American Institute of Physics.

  14. Absolute parametric instability in a nonuniform plane plasma ...

    Indian Academy of Sciences (India)

    The paper reports an analysis of the effect of spatial plasma nonuniformity on absolute parametric instability (API) of electrostatic waves in magnetized plane waveguides subjected to an intense high-frequency (HF) electric field using the separation method. In this case the effect of strong static magnetic field is considered.

  15. Absolute parametric instability in a nonuniform plane plasma

    Indian Academy of Sciences (India)

    The paper reports an analysis of the effect of spatial plasma nonuniformity on absolute parametric instability (API) of electrostatic waves in magnetized plane waveguides subjected to an intense high-frequency (HF) electric field using the separation method. In this case the effect of strong static magnetic field is considered.

  16. Computer method to detect and correct cycle skipping on sonic logs

    International Nuclear Information System (INIS)

    Muller, D.C.

    1985-01-01

    A simple but effective computer method has been developed to detect cycle skipping on sonic logs and to replace cycle skips with estimates of correct traveltimes. The method can be used to correct observed traveltime pairs from the transmitter to both receivers. The basis of the method is the linearity of a plot of theoretical traveltime from the transmitter to the first receiver versus theoretical traveltime from the transmitter to the second receiver. Theoretical traveltime pairs are calculated assuming that the sonic logging tool is centered in the borehole, that the borehole diameter is constant, that the borehole fluid velocity is constant, and that the formation is homogeneous. The plot is linear for the full range of possible formation-rock velocity. Plots of observed traveltime pairs from a sonic logging tool are also linear but have a large degree of scatter due to borehole rugosity, sharp boundaries exhibiting large velocity contrasts, and system measurement uncertainties. However, this scatter can be reduced to a level that is less than the scatter due to cycle skipping, so that cycle skips may be detected and discarded or replaced with estimated values of traveltime. Advantages of the method are that it can be applied in real time, that it can be used with data collected by existing tools, that it only affects data that exhibit cycle skipping and leaves other data unchanged, and that a correction trace can be generated which shows where cycle skipping occurs and the amount of correction applied. The method has been successfully tested on sonic log data taken in two holes drilled at the Nevada Test Site, Nye County, Nevada.
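    A minimal numpy sketch of the detect-and-replace idea follows; the robust-outlier threshold and the median-based scale estimate are assumptions, since the paper's exact tolerance scheme is not given in the abstract.

        import numpy as np

        def fix_cycle_skips(t1, t2, threshold=3.0):
            """Flag and repair cycle skips in paired sonic-log traveltimes.

            Exploits the near-linear relation between transmitter-to-receiver-1
            and transmitter-to-receiver-2 traveltimes described above. Points
            far from a robust line fit are treated as cycle skips and replaced
            by the line prediction; 'threshold' is in robust standard deviations.
            """
            a, b = np.polyfit(t1, t2, 1)                 # initial fit t2 ~ a*t1 + b
            resid = t2 - (a * t1 + b)
            mad = 1.4826 * np.median(np.abs(resid - np.median(resid)))
            skip = np.abs(resid) > threshold * mad       # cycle skips are outliers
            a, b = np.polyfit(t1[~skip], t2[~skip], 1)   # refit on clean points
            t2_fixed = np.where(skip, a * t1 + b, t2)
            return t2_fixed, skip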

  17. Monte Carlo evaluation of scattering correction methods in 131I studies using pinhole collimator

    International Nuclear Information System (INIS)

    López Díaz, Adlin; San Pedro, Aley Palau; Martín Escuela, Juan Miguel; Rodríguez Pérez, Sunay; Díaz García, Angelina

    2017-01-01

    Scatter is quite important for image activity quantification. In order to study the scatter factors and the efficacy of three multiple-energy-window scatter correction methods in 131I thyroid studies with a pinhole collimator (5 mm hole), a Monte Carlo (MC) simulation was developed. The GAMOS MC code was used to model the gamma camera and the thyroid source geometry. First, to validate the MC gamma camera pinhole-source model, the sensitivities in air and water of the simulated and measured thyroid phantom geometries were compared. Next, simulations were performed to investigate the scatter and the results of the triple energy window (TEW), double energy window (DW) and reduced double energy window (RDW) correction methods for different thyroid sizes and depth thicknesses. The discrepancies relative to the MC true (unscattered) events were evaluated. Results: The accuracy of the GAMOS MC model was verified and validated. The scatter contribution to the image was significant, between 27 % and 40 %. The discrepancies between the three multiple-energy-window correction methods were significant (between 9 % and 86 %); the reduced double window method (15 %) gave discrepancies of 9-16 %. Conclusions: For the simulated thyroid geometry with a pinhole collimator, the RDW (15 %) method was the most effective. (author)
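    For reference, the standard triple-energy-window estimate (and the double-window variant it is compared against) can be written compactly; the calibration constant k in the double-window form is detector-specific and given here only as a typical value.

        import numpy as np

        def tew_primary(c_peak, c_low, c_high, w_peak, w_low, w_high):
            """Triple-energy-window (TEW) scatter correction (standard formula).

            c_* are counts in the photopeak and the two narrow side windows,
            w_* the corresponding window widths (keV). Scatter under the peak
            is estimated by trapezoidal interpolation between the side windows.
            """
            scatter = (c_low / w_low + c_high / w_high) * w_peak / 2.0
            return c_peak - scatter

        def dew_primary(c_peak, c_scatter_window, k=0.5):
            """Double-energy-window variant: scatter proportional to a lower
            window, with a detector-specific calibration constant k."""
            return c_peak - k * c_scatter_window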

  18. Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method

    International Nuclear Information System (INIS)

    Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin

    2015-01-01

    The powerful nondestructive character of computed tomography (CT) is attracting more and more research into its use for dimensional metrology, where it offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty severely limit the further utilization of CT for dimensional metrology, due to many factors, among which the beam hardening (BH) effect plays a vital role. This paper focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a penalty term is added to the cost function, enabling more accurate measurement results to be obtained with the simple global threshold method. The proposed method is efficient, and especially suited to cases where there is a large difference in gray value between material and background. Spheres with known diameters are used to verify the accuracy of dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is of general feasibility. (paper)
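    The abstract names the ingredients (a parametric exponential correction whose parameter is chosen by minimizing the gray entropy of the reconstruction) without giving the closed form; the sketch below illustrates that structure with an assumed exponential mapping and a user-supplied reconstruction callable, and omits the paper's penalty term.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def gray_entropy(values, bins=256):
            """Shannon entropy of the gray-value histogram of a volume."""
            hist, _ = np.histogram(values, bins=bins, density=True)
            p = hist[hist > 0]
            p = p / p.sum()
            return -np.sum(p * np.log(p))

        def fit_bh_parameter(proj, reconstruct):
            """Choose the parameter of a parametric BH correction by entropy
            minimization. 'reconstruct' is a user-supplied FBP/iterative
            reconstruction callable; the exponential form below is an
            illustrative stand-in for the paper's model."""
            def cost(beta):
                proj_corr = (np.exp(beta * proj) - 1.0) / beta   # assumed mapping
                return gray_entropy(reconstruct(proj_corr))
            res = minimize_scalar(cost, bounds=(1e-6, 2.0), method="bounded")
            return res.x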

  19. A method for correcting the depth-of-interaction blurring in PET cameras

    International Nuclear Information System (INIS)

    Rogers, J.G.

    1993-11-01

    A method is presented for the purpose of correcting PET images for the blurring caused by variations in the depth-of-interaction in position-sensitive gamma ray detectors. In the case of a fine-cut 50x50x30 mm BGO block detector, the method is shown to improve the detector resolution by about 25%, measured in the geometry corresponding to detection at the edge of the field-of-view. Strengths and weaknesses of the method are discussed and its potential usefulness for improving the images of future PET cameras is assessed. (author). 8 refs., 3 figs

  20. Hydrological modeling as an evaluation tool of EURO-CORDEX climate projections and bias correction methods

    Science.gov (United States)

    Hakala, Kirsti; Addor, Nans; Seibert, Jan

    2017-04-01

    Streamflow stemming from Switzerland's mountainous landscape will be influenced by climate change, which will pose significant challenges to the water management and policy sector. In climate change impact research, the determination of future streamflow is impeded by different sources of uncertainty, which propagate through the model chain. In this research, we explicitly considered the following sources of uncertainty: (1) climate models, (2) downscaling of the climate projections to the catchment scale, (3) bias correction method and (4) parameterization of the hydrological model. We utilize climate projections at 0.11° (about 12.5 km) resolution from the EURO-CORDEX project, which are the most recent climate projections for the European domain. EURO-CORDEX comprises regional climate model (RCM) simulations, which have been downscaled from global climate models (GCMs) from the CMIP5 archive, using both dynamical and statistical techniques. Uncertainties are explored by applying a modeling chain involving 14 GCM-RCMs to ten Swiss catchments. We utilize the rainfall-runoff model HBV Light, which has been widely used in operational hydrological forecasting. The Lindström measure, a combination of model efficiency and volume error, was used as an objective function to calibrate HBV Light. The ten best parameter sets are then obtained by calibrating with the genetic algorithm and Powell optimization (GAP) method. The GAP optimization method is based on the evolution of parameter sets, which works by selecting and recombining high-performing parameter sets with each other. Once HBV is calibrated, we then perform a quantitative comparison of the influence of biases inherited from climate model simulations with the biases stemming from the hydrological model. The evaluation is conducted over two time periods: i) 1980-2009 to characterize the simulation realism under the current climate and ii) 2070-2099 to identify the magnitude of the projected change of

  1. THE EFFECT OF DIFFERENT CORRECTIVE FEEDBACK METHODS ON THE OUTCOME AND SELF CONFIDENCE OF YOUNG ATHLETES

    Directory of Open Access Journals (Sweden)

    George Tzetzis

    2008-09-01

    This experiment investigated the effects of three corrective feedback methods, using different combinations of correction or error cues and positive feedback, on learning two badminton skills of different difficulty (forehand clear, low difficulty; backhand clear, high difficulty). Outcome and self-confidence scores were used as dependent variables. The 48 participants were randomly assigned to four groups. Group A received correction cues and positive feedback. Group B received cues on errors of execution. Group C received positive feedback, correction cues and error cues. Group D was the control group. Pre-, post- and retention tests were conducted. A three-way analysis of variance (ANOVA; 4 groups x 2 task difficulties x 3 measures) with repeated measures on the last factor revealed significant interactions for each dependent variable. All corrective feedback groups increased their outcome scores over time for the easy skill, but only groups A and C did so for the difficult skill. Groups A and B had significantly better outcome scores than group C and the control group for the easy skill on the retention test. However, for the difficult skill, group C was better than groups A, B and D. The self-confidence scores of groups A and C improved over time for the easy skill, but not those of groups B and D. Again, for the difficult skill, only group C improved over time. Finally, a regression analysis showed that the improvement in performance predicted a proportion of the improvement in self-confidence for both the easy and the difficult skill. It was concluded that when young athletes are taught skills of different difficulty, different types of instruction might be more appropriate for improving outcome and self-confidence. A more integrated approach to teaching will assist coaches and physical education teachers in being more efficient and effective.

  2. An improved correlated sampling method for calculating correction factor of detector

    International Nuclear Information System (INIS)

    Wu Zhen; Li Junli; Cheng Jianping

    2006-01-01

    In the case of a small detector lying inside a bulk medium, two problems arise in the calculation of detector correction factors. One is that the detector is too small for enough particles to reach it and collide within it; the other is that the ratio of two quantities is not accurate enough. The method discussed in this paper, which combines correlated sampling with modified particle-collision auto-importance sampling and has been implemented on the MCNP-4C platform, can solve both problems. In addition, three other variance reduction techniques are each combined with correlated sampling to calculate a simple model of detector correction factors. The results prove that, although all of the variance reduction techniques combined with correlated sampling improve the calculation efficiency, the combination of modified particle-collision auto-importance sampling with correlated sampling is the most efficient. (authors)
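    Correlated sampling itself is easy to demonstrate outside MCNP: using the same random samples in the numerator and denominator of a ratio makes their statistical errors largely cancel. A toy numpy demonstration (unrelated to the detector model) is given below.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 100_000
        x = rng.random(N)

        f = np.exp(-x)           # integrand of the "perturbed" problem
        g = np.exp(-1.2 * x)     # integrand of the "reference" problem

        # Correlated estimator: same samples in numerator and denominator.
        ratio_correlated = f.mean() / g.mean()

        # Independent estimator: fresh samples for the denominator.
        y = rng.random(N)
        ratio_independent = f.mean() / np.exp(-1.2 * y).mean()

        # Repeating both estimators over many seeds shows a much smaller
        # variance for the correlated ratio than for the independent one.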

  3. A method of measuring and correcting tilt of anti - vibration wind turbines based on screening algorithm

    Science.gov (United States)

    Xiao, Zhongxiu

    2018-04-01

    A method of measuring and correcting the tilt of anti-vibration wind turbines based on a screening algorithm is proposed in this paper. First, we design a device whose core is the ADXL203 acceleration sensor; the inclination is measured by installing the device on the tower and the nacelle of the wind turbine. Next, a Kalman filter is used to filter the signal effectively by establishing a state-space model for the signal and noise, and MATLAB is used for simulation. Considering the impact of tower and nacelle vibration on the collected data, the original data and the filtered data are classified and stored by the screening algorithm, and the filtered data are filtered again to make the output more accurate. Finally, installation errors are eliminated algorithmically to achieve the tilt correction. A device based on this method has the advantages of high precision, low cost and vibration resistance, and has a wide range of application and promotion value.
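    A scalar Kalman filter for a slowly varying inclination is sketched below as a minimal illustration of the filtering step; the random-walk state model and the noise variances are tuning assumptions, not values from the paper.

        import numpy as np

        def kalman_tilt(z, q=1e-5, r=1e-2):
            """Scalar Kalman filter for a slowly varying tilt signal.

            z: measured tilt samples (deg); q, r: assumed process and
            measurement noise variances (tuning values).
            """
            x, p = z[0], 1.0                  # state estimate and its variance
            out = np.empty(len(z))
            for k, zk in enumerate(z):
                p = p + q                     # predict (random-walk tilt model)
                K = p / (p + r)               # Kalman gain
                x = x + K * (zk - x)          # update with the measurement
                p = (1.0 - K) * p
                out[k] = x
            return out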

  4. Corrected direct force balance method for atomic force microscopy lateral force calibration

    International Nuclear Information System (INIS)

    Asay, David B.; Hsiao, Erik; Kim, Seong H.

    2009-01-01

    This paper reports corrections and improvements of the previously reported direct force balance method (DFBM) developed for lateral calibration of atomic force microscopy. The DFBM method employs the lateral force signal obtained during a force-distance measurement on a sloped surface and relates this signal to the applied load and the slope of the surface to determine the lateral calibration factor. In the original publication [Rev. Sci. Instrum. 77, 043903 (2006)], the tip-substrate contact was assumed to be pinned at the point of contact, i.e., no slip along the slope. In control experiments, the tip was found to slide along the slope during force-distance curve measurement. This paper presents the correct force balance for lateral force calibration.

  5. Calibration of an accountability tank by bubbling pressure method: correction factors to be taken into account

    International Nuclear Information System (INIS)

    Cauchetier, Ph.

    1993-01-01

    To obtain the needed precision in the calibration of an accountability tank by the bubbling pressure method, very slow bubbling must be used. The measured data (mass and pressure) must be transformed into the physical dimensions of the vessel (height and capacity). All corrections to be taken into account (buoyancy, calibration curve of the sensor, density of the liquid, weight of the gas column, bubbling overpressure, temperature, ...) are reviewed and evaluated. The equations used are given. (author). 3 figs., 1 tab., 2 refs
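    The core of the measurement is hydrostatic: the bubbling overpressure maps to liquid height through the liquid density. A minimal sketch with placeholders for some of the corrections listed above is given below; the parameter names are hypothetical.

        G = 9.80665  # m/s^2

        def liquid_height(p_bubble, p_head, rho_liquid,
                          dp_bubbling=0.0, rho_gas_col=0.0, h_line=0.0):
            """Liquid height above the dip-tube tip from bubbling pressure.

            p_bubble   : pressure measured on the bubbling line (Pa)
            p_head     : vessel head-space pressure (Pa)
            rho_liquid : liquid density at process temperature (kg/m^3)
            dp_bubbling: bubbling overpressure correction (Pa, assumed known)
            rho_gas_col: mean gas density in the line (kg/m^3), used for the
                         gas-column weight correction over line height h_line (m)
            """
            dp = (p_bubble - p_head) - dp_bubbling + rho_gas_col * G * h_line
            return dp / (rho_liquid * G)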

  6. The method of borderline anxiety-depressive disorder correction in patients with diabetes mellitus

    Directory of Open Access Journals (Sweden)

    A. Kozhanova

    2015-11-01

    The article presents the results of research on the effectiveness of the method developed by the authors for correcting borderline anxiety-depressive disorders in patients with type 2 diabetes through the use of magnetic therapy. Tags: anxiety-depressive disorder, hidden depression, diabetes, medical rehabilitation, singlet-oxygen therapy.

  7. BIOFEEDBACK: A NEW METHOD FOR CORRECTION OF MOTOR DISORDERS IN PATIENTS WITH MULTIPLE SCLEROSIS

    Directory of Open Access Journals (Sweden)

    Ya. S. Pekker

    2014-01-01

    Motor disorders are among the major disabling factors in multiple sclerosis (MS), and their rehabilitation is one of the most important medical and social problems. Currently, much attention is given to the development of correction methods for motor disorders that draw on the natural resources of the human body. One such method is adaptive control with biofeedback (BFB). The aim of our study was the correction of motor disorders in MS patients using biofeedback training. In the study, we developed training scenarios for a computer-based EMG biofeedback rehabilitation program aimed at the correction of motor disorders in patients with MS. The method was tested in the neurological clinic of SSMU. The study included 9 patients with a definite diagnosis of MS presenting a clinical picture of combined pyramidal and cerebellar symptoms. The effectiveness of the rehabilitation procedures with biofeedback training was assessed using specialized scales (the Kurtzke Functional Systems scale; the SF-36 quality-of-life questionnaire; the Sickness Impact Profile (SIP); and the Fatigue Severity Scale (FSS)). In the studied group of patients the fatigue score (FSS) decreased, while motor control (SIP2) and the physical and mental components of health (SF-36) improved. There was a tendency toward a reduction of the neurological deficit, reflected in lower scores for pyramidal dysfunction on the Kurtzke scale. Analysis of the course dynamics of the EMG biofeedback training shows an increase in the recorded EMG signal of the trained muscles from session to session, and a tendency toward increased strength and coordination of the trained muscles was demonstrated. The positive results of biofeedback therapy in patients with MS suggest that this method can be recommended as part of complex rehabilitation measures to correct motor and psycho-emotional disorders.

  8. A New Variational Method for Bias Correction and Its Applications to Rodent Brain Extraction.

    Science.gov (United States)

    Chang, Huibin; Huang, Weimin; Wu, Chunlin; Huang, Su; Guan, Cuntai; Sekar, Sakthivel; Bhakoo, Kishore Kumar; Duan, Yuping

    2017-03-01

    Brain extraction is an important preprocessing step for further analysis of brain MR images. Significant intensity inhomogeneity can be observed in rodent brain images due to the high-field MRI technique. Unlike most existing brain extraction methods, which require bias-corrected MRI, we present a high-order, L0-regularized variational model for simultaneous bias correction and brain extraction. The model is composed of a data fitting term, a piecewise-constant regularization and a smooth regularization, constructed on a 3-D formulation for medical images with anisotropic voxel sizes. We propose an efficient multi-resolution algorithm for fast computation. At each resolution layer, we solve an alternating direction scheme, all subproblems of which have closed-form solutions. The method is tested on three T2-weighted acquisition configurations comprising a total of 50 rodent brain volumes, acquired at field strengths of 4.7 T, 9.4 T and 17.6 T, respectively. On one hand, we compare the bias correction results with N3 and N4 in terms of the coefficient of variation on 20 different tissues of the rodent brain. On the other hand, the brain extraction results are compared against manually segmented gold standards, BET, BSE and 3-D PCNN, based on a number of metrics. With its high accuracy and efficiency, the proposed method can facilitate the automatic processing of large-scale brain studies.

  9. A method based on moving least squares for XRII image distortion correction

    International Nuclear Information System (INIS)

    Yan Shiju; Wang Chengtao; Ye Ming

    2007-01-01

    This paper presents a novel integrated method to correct geometric distortions of XRII (x-ray image intensifier) images. The method has been compared, in terms of mean-squared residual error measured at control and intermediate points, with two traditional local methods and a traditional global method. The proposed method is based on moving least squares (MLS) and polynomial fitting. Extensive experiments were performed on simulated and real XRII images. In simulation, the effects of pincushion distortion, sigmoidal distortion, local distortion, noise, and the number of control points were tested. The traditional local methods were sensitive to pincushion and sigmoidal distortion. The traditional global method was only sensitive to sigmoidal distortion. The proposed method was sensitive to neither pincushion nor sigmoidal distortion. The sensitivity of the proposed method to local distortion was lower than or comparable with that of the traditional global method. The sensitivity of the proposed method to noise was higher than that of all three traditional methods; nevertheless, provided the standard deviation of the noise was not greater than 0.1 pixels, the accuracy of the proposed method remained higher than that of the traditional methods. The sensitivity of the proposed method to the number of control points was much lower than that of the traditional methods. Provided that a proper cutoff radius is chosen, the accuracy of the proposed method is higher than that of the traditional methods. Experiments on real images, carried out using a 9 in. XRII, showed that the residual error of the proposed method (0.2544±0.2479 pixels) is lower than that of the traditional global method (0.4223±0.3879 pixels) and the local methods (0.4555±0.3518 and 0.3696±0.4019 pixels, respectively)
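    A minimal sketch of the moving-least-squares idea applied to distortion correction follows: each query point gets its own weighted polynomial (here affine) fit to the control-point pairs. The Gaussian weight and its width sigma are assumptions; the paper's exact weight function and polynomial order are not given in the abstract.

        import numpy as np

        def mls_correct(points, ctrl_src, ctrl_dst, sigma=50.0):
            """Map distorted 'points' using MLS over control pairs (sketch).

            ctrl_src -> ctrl_dst are matched control points (N, 2 arrays);
            a separate weighted affine fit is computed at every query point,
            so nearby control points dominate the local correction.
            """
            out = np.empty_like(points, dtype=float)
            for i, q in enumerate(points):
                w = np.exp(-np.sum((ctrl_src - q) ** 2, axis=1) / (2 * sigma**2))
                A = np.column_stack([np.ones(len(ctrl_src)), ctrl_src])  # [1, x, y]
                Aw = A * w[:, None]
                coef, *_ = np.linalg.lstsq(Aw.T @ A, Aw.T @ ctrl_dst, rcond=None)
                out[i] = np.array([1.0, q[0], q[1]]) @ coef
            return out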

  10. Homotopy Perturbation Method for Creeping Flow of Non-Newtonian Power-Law Nanofluid in a Nonuniform Inclined Channel with Peristalsis

    Science.gov (United States)

    Abou-zeid, Mohamed Y.; Mohamed, Mona A. A.

    2017-09-01

    This article is an analytic discussion of the motion of a power-law nanofluid with heat transfer under the effects of viscous dissipation, radiation, and internal heat generation. The governing equations are treated under the assumptions of long wavelength and low Reynolds number. The solutions for the temperature and nanoparticle profiles are obtained using the homotopy perturbation method. Results for the behaviours of the axial velocity, temperature and nanoparticle concentration, as well as the skin friction coefficient, reduced Nusselt number and Sherwood number, as functions of the other physical parameters are obtained graphically and analytically. It is found that as the power-law exponent increases, both the axial velocity and the temperature increase, whereas the nanoparticle concentration decreases. These results may be of practical importance for studies of nanofluid flow in channels of small diameter under different temperature distributions.

  11. Correction of 157-nm lens based on phase ring aberration extraction method

    Science.gov (United States)

    Meute, Jeff; Rich, Georgia K.; Conley, Will; Smith, Bruce W.; Zavyalova, Lena V.; Cashmore, Julian S.; Ashworth, Dominic; Webb, James E.; Rich, Lisa

    2004-05-01

    Early manufacture and use of 157nm high NA lenses has presented significant challenges, including intrinsic birefringence correction, control of optical surface contamination, and the use of relatively unproven materials, coatings, and metrology. Many of these issues were addressed during the manufacture and use of International SEMATECH's 0.85NA lens. Most significantly, we were the first to employ 157nm phase measurement interferometry (PMI) and birefringence modeling software for lens optimization. These efforts yielded significant wavefront improvement and produced one of the best wavefront-corrected 157nm lenses to date. After applying the best practices to the manufacture of the lens, we still had to overcome the difficulties of integrating the lens into the tool platform at International SEMATECH instead of at the supplier facility. After lens integration, alignment, and field optimization were complete, conventional lithography and phase ring aberration extraction techniques were used to characterize system performance. These techniques suggested a wavefront error of approximately 0.05 waves RMS, much larger than the 0.03 waves RMS predicted by 157nm PMI. In-situ wavefront correction was planned for in the early stages of this project to mitigate risks introduced by the use of development materials and techniques and field integration of the lens. In this publication, we document the development and use of a phase ring aberration extraction method for characterizing imaging performance and a technique for correcting aberrations with the addition of an optical compensation plate. Imaging results before and after the lens correction are presented, and differences between actual and predicted results are discussed.

  12. Use of regularization method in the determination of ring parameters and orbit correction

    International Nuclear Information System (INIS)

    Tang, Y.N.; Krinsky, S.

    1993-01-01

    We discuss applying the regularization method of Tikhonov to the solution of inverse problems arising in accelerator operations. This approach has been used successfully for orbit correction on the NSLS storage rings, and is presently being applied to the determination of betatron functions and phases from the measured response matrix. The inverse problem of a differential equation often leads to a set of integral equations of the first kind, which are ill-conditioned. The regularization method is used to combat this ill-posedness.
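    For a response-matrix orbit correction, Tikhonov regularization has a one-line closed form; a minimal numpy sketch is given below, where lam is the regularization weight to be chosen (for example by an L-curve or discrepancy criterion).

        import numpy as np

        def tikhonov_correct(R, orbit_error, lam):
            """Tikhonov-regularized corrector strengths (minimal sketch).

            Solves min ||R @ theta + orbit_error||^2 + lam * ||theta||^2,
            where R is the measured orbit response matrix (BPM readings per
            unit corrector kick). The regularization suppresses the large,
            noise-driven kicks that a plain least-squares inversion of an
            ill-conditioned R would produce.
            """
            n = R.shape[1]
            return np.linalg.solve(R.T @ R + lam * np.eye(n), -R.T @ orbit_error)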

  13. Application of the spectral correction method to reanalysis data in South Africa

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Kruger, Andries C.

    2014-01-01

    The aim of this study is to evaluate the applicability of the method to the relevant region. The impacts of the two aspects are investigated for interior and coastal locations. Measurements from five stations in South Africa are used to evaluate the results of the spectral model S(f) = a f^(-5/3) together with the hourly time series of the Climate Forecast System Reanalysis (CFSR) 10 m wind at 38 km resolution over South Africa. The results show that applying the spectral correction method to the CFSR wind data produces extreme wind atlases in acceptable agreement with the atlas made from limited measurements across

  14. Nonlinear effect of the structured light profilometry in the phase-shifting method and error correction

    International Nuclear Information System (INIS)

    Zhang Wan-Zhen; Chen Zhe-Bo; Xia Bin-Feng; Lin Bin; Cao Xiang-Qun

    2014-01-01

    Digital structured light (SL) profilometry is increasingly used in three-dimensional (3D) measurement technology. However, the nonlinearity of off-the-shelf projectors and cameras seriously reduces the measurement accuracy. In this paper we first review the nonlinear effects of the projector-camera system in the phase-shifting structured light depth measurement method, showing that high-order harmonic components lead to phase error in the phase-shifting method. A practical method based on frequency-domain filtering is then proposed for nonlinear error reduction. With this method, nonlinear calibration of the SL system is not required, and the nonlinear effects of both the projector and the camera can be effectively reduced. Simulations and experiments have verified our nonlinear correction method.
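    For context, the standard four-step phase-shifting recovery and one possible frequency-domain mitigation of fringe harmonics are sketched below; the band-pass form is an illustration in the spirit of the paper's filtering approach, not its exact algorithm.

        import numpy as np

        def four_step_phase(I0, I1, I2, I3):
            """Wrapped phase from 4-step phase-shifted fringes (standard).

            Assumes I_k = A + B cos(phi + k*pi/2), so
            phi = atan2(I3 - I1, I0 - I2).
            """
            return np.arctan2(I3 - I1, I0 - I2)

        def bandpass_fundamental(img, f0, bw):
            """Keep only the fundamental fringe band along the x axis.

            Projector/camera nonlinearity injects harmonics that appear as
            periodic ripple in the phase; suppressing everything outside the
            carrier band (center f0, half-width bw, in cycles/pixel) before
            phase computation is one simple mitigation (sketch).
            """
            F = np.fft.rfft(img, axis=1)
            f = np.fft.rfftfreq(img.shape[1])
            F[:, np.abs(f - f0) > bw] = 0.0
            return np.fft.irfft(F, n=img.shape[1], axis=1)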

  15. A Novel Bias Correction Method for Soil Moisture and Ocean Salinity (SMOS) Soil Moisture: Retrieval Ensembles

    Directory of Open Access Journals (Sweden)

    Ju Hyoung Lee

    2015-12-01

    Bias correction is a very important pre-processing step in satellite data assimilation analysis, as data assimilation itself cannot circumvent satellite biases. We introduce a retrieval-algorithm-specific and spatially heterogeneous Instantaneous Field of View (IFOV) bias correction method for Soil Moisture and Ocean Salinity (SMOS) soil moisture. To the best of our knowledge, this is the first paper to present a probabilistic representation of SMOS soil moisture using retrieval ensembles. We illustrate that retrieval ensembles effectively mitigated the overestimation problem of SMOS soil moisture arising from brightness temperature errors over West Africa in a computationally efficient way (ensemble size: 12, no time-integration). In contrast, the existing method of Cumulative Distribution Function (CDF) matching considerably increased the SMOS biases, due to the limitation of relying on imperfect reference data. From the validation at two semi-arid sites, Benin (a moderately wet and vegetated area) and Niger (dry, sandy bare soils), it was shown that the SMOS errors arising from rain and vegetation attenuation were appropriately corrected by the ensemble approach. In Benin, the root mean square errors (RMSEs) decreased from 0.1248 m3/m3 for CDF matching to 0.0678 m3/m3 for the proposed ensemble approach. In Niger, the RMSEs decreased from 0.14 m3/m3 for CDF matching to 0.045 m3/m3 for the ensemble approach.
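    The comparison method, CDF matching, is a standard quantile mapping; a minimal numpy sketch is given below. As the abstract notes, its weakness is its full reliance on the reference data being unbiased.

        import numpy as np

        def cdf_match(sat, ref):
            """Classic CDF (quantile) matching of satellite values to a reference.

            Each satellite soil-moisture value is mapped to the reference value
            with the same empirical non-exceedance probability.
            """
            ranks = np.argsort(np.argsort(sat))
            prob = (ranks + 0.5) / len(sat)                # empirical CDF of sat
            ref_sorted = np.sort(ref)
            ref_prob = (np.arange(len(ref)) + 0.5) / len(ref)
            return np.interp(prob, ref_prob, ref_sorted)   # reference quantiles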

  16. Determination of corrective factors for an ultrasonic flow measuring method in pipes accounting for perturbations

    International Nuclear Information System (INIS)

    Etter, S.

    1982-01-01

    Current ultrasonic flow measuring equipment (UFME) measures the mean velocity along one or two measuring paths. This path-mean velocity is not equal to the velocity averaged over the flow cross-section, from which the flow rate is calculated. The difference appears already for axially symmetric, fully developed velocity profiles and, to a larger extent, for disturbed profiles varying in the flow direction and for unsteady flow. Corrective factors are defined for steady and unsteady flows; these factors can be derived from the flow profiles within the UFME. By mathematical simulation of the entrainment effect, the influence of cross and swirl flows on various ultrasonic measuring methods is studied. The UFME applied here, with crossed measuring paths, is shown to be largely independent of cross and swirl flows. For the computer evaluation of velocity network measurements in circular cross-sections, the equations for interpolation and integration are derived. The results of the mathematical method are the isotach profile, the flow rate and, for fully developed flow, directly the corrective factor. In the experimental part, corrective factors are determined in unsteady flow in one measuring plane before and in four measuring planes behind a perturbation. (orig./RW)
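    The simplest corrective factor of this kind can be computed directly for a fully developed turbulent profile; the sketch below uses a 1/n power-law profile and a diametral path, both standard textbook assumptions rather than the paper's measured profiles.

        import numpy as np

        def power_law_meter_factor(n=7, m=200_000):
            """Corrective factor k = area-mean / path-mean velocity (sketch).

            Fully developed turbulent pipe flow approximated by the power-law
            profile u(r) = u_max * (1 - r/R)**(1/n). A diametral ultrasonic
            path measures the line average of u, while the flow rate needs
            the area average; both are evaluated numerically here.
            """
            r = (np.arange(m) + 0.5) / m          # r/R at interval midpoints
            u = (1.0 - r) ** (1.0 / n)
            path_mean = u.mean()                  # line average (0..R, by symmetry)
            area_mean = 2.0 * np.sum(u * r) / m   # 2 * integral of u(r) r dr
            return area_mean / path_mean          # about 0.93 for n = 7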

  17. X-ray scatter correction method for dedicated breast computed tomography: improvements and initial patient testing

    International Nuclear Information System (INIS)

    Ramamurthy, Senthil; D’Orsi, Carl J; Sechopoulos, Ioannis

    2016-01-01

    A previously proposed x-ray scatter correction method for dedicated breast computed tomography was further developed and implemented so as to allow for initial patient testing. The method involves the acquisition of a complete second set of breast CT projections covering 360° with a perforated tungsten plate in the path of the x-ray beam. To make patient testing feasible, a wirelessly controlled electronic positioner for the tungsten plate was designed and added to a breast CT system. Other improvements to the algorithm were implemented, including automated exclusion of non-valid primary estimate points and the use of a different approximation method to estimate the full scatter signal. To evaluate the effectiveness of the algorithm, the resulting image quality was assessed with a breast phantom and with nine patient image sets. The improvements in the algorithm avoided the introduction of artifacts, especially at the object borders, which had been an issue in some cases in the previous implementation. Both contrast, in terms of signal difference, and signal difference-to-noise ratio were improved with the proposed method, in contrast to the correction algorithm incorporated in the system, which does not recover contrast. Patient image evaluation also showed enhanced contrast, better cupping correction, and more consistent voxel values for the different tissues. The algorithm also reduces artifacts present in reconstructions of non-regularly shaped breasts. With the implemented hardware and software improvements, the proposed method can be reliably used during patient breast CT imaging, resulting in improved image quality, no introduction of artifacts, and in some cases reduction of artifacts already present. The impact of the algorithm on actual clinical performance for detection, diagnosis and other clinical tasks in breast imaging remains to be evaluated. (paper)

  18. Bias correction for estimated QTL effects using the penalized maximum likelihood method.

    Science.gov (United States)

    Zhang, J; Yue, C; Zhang, Y-M

    2012-04-01

    A penalized maximum likelihood method has been proposed as an important approach to the detection of epistatic quantitative trait loci (QTL). However, this approach is not optimal in two special situations: (1) closely linked QTL with effects in opposite directions and (2) small-effect QTL, because the method produces downwardly biased estimates of QTL effects. The present study aims to correct the bias by using correction coefficients and shifting from the use of a uniform prior on the variance parameter of a QTL effect to that of a scaled inverse chi-square prior. The results of Monte Carlo simulation experiments show that the improved method increases the power from 25 to 88% in the detection of two closely linked QTL of equal size in opposite directions and from 60 to 80% in the identification of QTL with small effects (0.5% of the total phenotypic variance). We used the improved method to detect QTL responsible for the barley kernel weight trait using 145 doubled haploid lines developed in the North American Barley Genome Mapping Project. Application of the proposed method to other shrinkage estimation of QTL effects is discussed.

  19. A new method of body habitus correction for total body potassium measurements

    International Nuclear Information System (INIS)

    O'Hehir, S; Green, S; Beddoe, A H

    2006-01-01

    This paper describes an accurate and time-efficient method for the determination of total body potassium via a combination of measurements in the Birmingham whole body counter and the use of the Monte Carlo n-particle (MCNP) simulation code. In developing this method, MCNP has also been used to derive values for some components of the total measurement uncertainty which are difficult to quantify experimentally. A method is proposed for MCNP-assessed body habitus corrections based on a simple generic anthropomorphic model, scaled for individual height and weight. The use of this model increases patient comfort by reducing the need for comprehensive anthropomorphic measurements. The analysis shows that the total uncertainty in potassium weight determination by this whole body counting methodology for water-filled phantoms with a known amount of potassium is 2.7% (SD). The uncertainty in the method of body habitus correction (applicable also to phantom-based methods) is 1.5% (SD). It is concluded that this new strategy provides a sufficiently accurate model for routine clinical use

  20. A new method of body habitus correction for total body potassium measurements

    Energy Technology Data Exchange (ETDEWEB)

    O'Hehir, S [University Hospital Birmingham Foundation NHS Trust, Birmingham (United Kingdom); Green, S [University Hospital Birmingham Foundation NHS Trust, Birmingham (United Kingdom); Beddoe, A H [University Hospital Birmingham Foundation NHS Trust, Birmingham (United Kingdom)

    2006-09-07

    This paper describes an accurate and time-efficient method for the determination of total body potassium via a combination of measurements in the Birmingham whole body counter and the use of the Monte Carlo n-particle (MCNP) simulation code. In developing this method, MCNP has also been used to derive values for some components of the total measurement uncertainty which are difficult to quantify experimentally. A method is proposed for MCNP-assessed body habitus corrections based on a simple generic anthropomorphic model, scaled for individual height and weight. The use of this model increases patient comfort by reducing the need for comprehensive anthropomorphic measurements. The analysis shows that the total uncertainty in potassium weight determination by this whole body counting methodology for water-filled phantoms with a known amount of potassium is 2.7% (SD). The uncertainty in the method of body habitus correction (applicable also to phantom-based methods) is 1.5% (SD). It is concluded that this new strategy provides a sufficiently accurate model for routine clinical use.

  1. Effects of projection and background correction method upon calculation of right ventricular ejection fraction using first-pass radionuclide angiography

    International Nuclear Information System (INIS)

    Caplin, J.L.; Flatman, W.D.; Dymond, D.S.

    1985-01-01

    There is no consensus as to the best projection or correction method for first-pass radionuclide studies of the right ventricle. We assessed the effects of two commonly used projections, 30 degrees right anterior oblique and anterior-posterior, on the calculation of right ventricular ejection fraction. In addition, two background correction methods were assessed: planar background correction to account for scatter, and right atrial correction to account for right atrio-ventricular overlap. Two first-pass radionuclide angiograms were performed in 19 subjects, one in each projection, using gold-195m (half-life 30.5 seconds), and each study was analysed using the two correction methods. Right ventricular ejection fraction was highest using the right anterior oblique projection with right atrial correction, 35.6 +/- 12.5% (mean +/- SD), and lowest using the anterior-posterior projection with planar background correction, 26.2 +/- 11% (p less than 0.001). The study design allowed the effects of correction method and projection to be assessed independently. Correction method appeared to have relatively little effect on right ventricular ejection fraction: using right atrial correction the correlation coefficient (r) between projections was 0.92, and for planar background correction r = 0.76, both p less than 0.001. However, right ventricular ejection fraction was far more dependent upon projection: with the anterior-posterior projection, the calculated right ventricular ejection fraction was much more dependent on correction method (r = 0.65, p = not significant) than with the right anterior oblique projection (r = 0.85, p less than 0.001).

  2. A gamma camera count rate saturation correction method for whole-body planar imaging

    Science.gov (United States)

    Hobbs, Robert F.; Baechler, Sébastien; Senthamizhchelvan, Srinivasan; Prideaux, Andrew R.; Esaias, Caroline E.; Reinhardt, Melvin; Frey, Eric C.; Loeb, David M.; Sgouros, George

    2010-02-01

    Whole-body (WB) planar imaging has long been one of the staple methods of dosimetry, and its quantification has been formalized by the MIRD Committee in Pamphlet No. 16. One issue not specifically addressed in the formalism occurs when the count rates reaching the detector are sufficiently high to result in camera count saturation. Camera dead-time effects have been extensively studied, but all of the developed correction methods assume static acquisitions. However, during WB planar (sweep) imaging, a variable amount of imaged activity exists in the detector's field of view as a function of time, and therefore the camera saturation is time dependent. A new time-dependent algorithm was developed to correct for dead-time effects during WB planar acquisitions that accounts for relative motion between the detector heads and the imaged object. Static camera dead-time parameters were acquired by imaging decaying activity in a phantom and obtaining a saturation curve. Using these parameters, an iterative algorithm akin to Newton's method was developed, which takes into account the variable count rate seen by the detector as a function of time. The algorithm was tested on simulated data as well as on a whole-body scan of high-activity Samarium-153 in an ellipsoid phantom. A complete set of parameters from unsaturated phantom data necessary for count-rate-to-activity conversion was also obtained, including build-up and attenuation coefficients, in order to convert corrected count rate values to activity. The algorithm proved successful in accounting for motion- and time-dependent saturation effects in both the simulated and measured data and converged to any desired degree of precision. The clearance half-life calculated from the ellipsoid phantom data was 45.1 h after dead-time correction and 51.4 h with no correction; the physical decay half-life of Samarium-153 is 46.3 h. Accurate WB planar dosimetry of high activities relies on successfully compensating
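    The static building block is the inversion of a dead-time model fitted to a measured saturation curve; a minimal sketch using the paralyzable model and a Newton iteration (as suggested by the abstract) is given below. The paralyzable form itself is an assumption here; applying the inversion per time frame is what makes the correction time dependent for sweep acquisitions.

        import numpy as np

        def true_rate(r_obs, tau, iters=20):
            """Invert the paralyzable dead-time model r_obs = r * exp(-r * tau).

            Newton iteration for the true count rate r given the observed rate
            and the dead-time constant tau from the static saturation-curve
            measurement. Valid on the low-rate branch (r * tau < 1).
            """
            r = np.asarray(r_obs, dtype=float).copy()      # starting guess
            for _ in range(iters):
                f = r * np.exp(-r * tau) - r_obs
                df = np.exp(-r * tau) * (1.0 - r * tau)
                r = r - f / df
            return r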

  3. Analysis of efficient preconditioned defect correction methods for nonlinear water waves

    DEFF Research Database (Denmark)

    Engsig-Karup, Allan Peter

    2014-01-01

    Robust computational procedures for the solution of non-hydrostatic, irrotational and inviscid free-surface water waves in three space dimensions can be based on iterative preconditioned defect correction (PDC) methods. Such methods can be made efficient and scalable, enabling prediction of free-surface wave transformation and accurate wave kinematics in both deep and shallow waters in large marine areas, or prediction of the outcome of experiments in large numerical wave tanks. We revisit the classical governing equations, which are the fully nonlinear and dispersive potential flow equations. We present a new detailed fundamental analysis using finite-amplitude wave solutions for iterative solvers. We demonstrate that the PDC method, in combination with a high-order discretization method, enables efficient and scalable solution of the linear system of equations arising in potential flow.
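    The defect-correction skeleton itself is compact; a generic numpy sketch follows, with the preconditioner left as a callable (in the wave-model setting this would typically be a low-order discretization of the same operator). This is a generic illustration, not the wave model itself.

        import numpy as np

        def pdc_solve(A, b, M_solve, tol=1e-10, maxit=200):
            """Preconditioned defect correction: x <- x + M^{-1}(b - A x)."""
            x = np.zeros_like(b)
            for _ in range(maxit):
                d = b - A @ x                 # defect (residual)
                if np.linalg.norm(d) < tol * np.linalg.norm(b):
                    break
                x = x + M_solve(d)            # preconditioned correction
            return x

        # Example preconditioner: Jacobi (diagonal) for diagonally dominant A.
        # M_solve = lambda r: r / np.diag(A); x = pdc_solve(A, b, M_solve)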

  4. A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections

    Science.gov (United States)

    Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.

    2014-01-01

    A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations for the confidence intervals on the model weight and center of gravity coordinates and two different error analyses of the model weight prediction are also discussed in the appendices of the paper.
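    A minimal sketch of the two-step wind-off fit is given below; the rigid-body gravity model and its sign conventions are illustrative assumptions, not the paper's explicit equations.

        import numpy as np

        def fit_weight_then_cg(angles, F_meas, M_meas):
            """Two-step wind-off fit: weight first, then CG (sketch).

            angles: (N, 2) pitch/roll in radians per wind-off point;
            F_meas: (N, 3) balance forces (gravity only);
            M_meas: (N, 3) balance moments. Body-axis conventions assumed.
            """
            th, ph = angles[:, 0], angles[:, 1]
            g_hat = np.column_stack([-np.sin(th),
                                     np.cos(th) * np.sin(ph),
                                     np.cos(th) * np.cos(ph)])
            # Step 1: scalar least squares for the weight W, from F ~ W * g_hat.
            W = np.sum(g_hat * F_meas) / np.sum(g_hat * g_hat)
            # Step 2: CG from moments, M ~ W * (r_cg x g_hat), linear in r_cg.
            A = np.vstack([np.array([[0.0, g[2], -g[1]],
                                     [-g[2], 0.0, g[0]],
                                     [g[1], -g[0], 0.0]]) for g in g_hat])
            r_cg, *_ = np.linalg.lstsq(W * A, M_meas.ravel(), rcond=None)
            return W, r_cg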

  5. Error-finding and error-correcting methods for the start-up of the SLC

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors, which affect the profile and trajectory of the beam respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist but in a more systematic fashion. The methods used in these procedures and some of the recent applications are described in this paper.

  6. Non perturbative method for radiative corrections applied to lepton-proton scattering

    International Nuclear Information System (INIS)

    Chahine, C.

    1979-01-01

    We present a new, nonperturbative method to effect radiative corrections in lepton (electron or muon)-nucleon scattering, useful for existing or planned experiments. This method relies on a spectral function derived in a previous paper, which takes into account both real soft photons and virtual ones and hence is free from infrared divergence. Hard effects are computed perturbatively and then included in the form of 'hard factors' in the nonperturbative soft formulas. Practical computations are effected using the Gauss-Jacobi integration method, which reduces the relevant integrals to a rapidly converging sequence. For the simple problem of the radiative quasi-elastic peak, we get an exponentiated form conjectured by Schwinger and found by Yennie, Frautschi and Suura. We also compare our results with the peaking approximation, which we derive independently, and with the exact one-photon emission formula of Mo and Tsai. Applications of our method to the continuous spectrum include the radiative tail of the Δ33 resonance in e + p scattering and radiative corrections to the Feynman scale-invariant F2 structure function for the kinematics of two recent high-energy muon experiments.
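    The Gauss-Jacobi rule mentioned above integrates against the weight (1-x)^alpha (1+x)^beta; a minimal scipy example follows, with a placeholder integrand.

        import numpy as np
        from scipy.special import roots_jacobi

        # Gauss-Jacobi quadrature: exact for polynomial f up to degree 2n-1
        # in integrals of f(x) * (1-x)^alpha * (1+x)^beta over [-1, 1].
        alpha, beta, n = 0.5, 0.0, 20
        x, w = roots_jacobi(n, alpha, beta)      # nodes and weights
        integral = np.sum(w * np.cos(x))         # approximates the weighted integral of cos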

  7. Fast pressure-correction method for incompressible Navier-Stokes equations in curvilinear coordinates

    Science.gov (United States)

    Aithal, Abhiram; Ferrante, Antonino

    2017-11-01

    In order to perform direct numerical simulations (DNS) of turbulent flows over curved surfaces and axisymmetric bodies, we have developed a numerical methodology to solve the incompressible Navier-Stokes (NS) equations in curvilinear coordinates for orthogonal meshes. The orthogonal meshes are generated by solving a coupled system of nonlinear Poisson equations. The NS equations in orthogonal curvilinear coordinates are discretized in space on a staggered mesh using a second-order central-difference scheme and are solved with an FFT-based pressure-correction method. The momentum equation is integrated in time using the second-order Adams-Bashforth scheme. The velocity field is advanced in time by applying the pressure correction to the approximate velocity such that it satisfies the divergence-free condition. The novelty of the method lies in solving the variable-coefficient Poisson equation for pressure with an FFT-based Poisson solver rather than the slower multigrid methods. We present the verification and validation results of the new numerical method and the DNS results of transitional flow over a curved axisymmetric body.
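    The FFT-based projection step can be sketched compactly for the constant-coefficient, periodic case; the paper's variable-coefficient curvilinear version is more involved, so the code below is a simplified illustration on a square periodic grid with collocated differencing.

        import numpy as np

        def pressure_correction(u_star, v_star, dx, dt, rho=1.0):
            """FFT-based pressure projection on a 2-D periodic grid (sketch).

            Solves lap(p) = rho/dt * div(u*) with FFTs, then projects the
            intermediate velocity (u*, v*) onto a divergence-free field.
            """
            n = u_star.shape[0]                     # square grid assumed
            k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
            kx, ky = np.meshgrid(k, k, indexing="ij")
            div = (np.gradient(u_star, dx, axis=0)
                   + np.gradient(v_star, dx, axis=1))
            rhs_hat = np.fft.fft2(rho / dt * div)
            k2 = kx**2 + ky**2
            k2[0, 0] = 1.0                          # avoid divide-by-zero
            p_hat = -rhs_hat / k2
            p_hat[0, 0] = 0.0                       # fix the pressure mean
            p = np.real(np.fft.ifft2(p_hat))
            u = u_star - dt / rho * np.gradient(p, dx, axis=0)
            v = v_star - dt / rho * np.gradient(p, dx, axis=1)
            return u, v, p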

  8. Covariate measurement error correction methods in mediation analysis with failure time data.

    Science.gov (United States)

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  9. Methods for the correction of vascular artifacts in PET O-15 water brain-mapping studies

    Science.gov (United States)

    Chen, Kewei; Reiman, E. M.; Lawson, M.; Yun, Lang-sheng; Bandy, D.; Palant, A.

    1996-12-01

    While positron emission tomographic (PET) measurements of regional cerebral blood flow (rCBF) can be used to map brain regions that are involved in normal and pathological human behaviors, measurements in the anteromedial temporal lobe can be confounded by the combined effects of radiotracer activity in neighboring arteries and partial-volume averaging. The authors now describe two simple methods to address this vascular artifact. One method utilizes the early frames of a dynamic PET study, while the other method utilizes a coregistered magnetic resonance image (MRI) to characterize the vascular region of interest (VROI). Both methods subsequently assign a common value to each pixel in the VROI for the control (baseline) scan and the activation scan. To study the vascular artifact and to demonstrate the ability of the proposed methods correcting the vascular artifact, four dynamic PET scans were performed in a single subject during the same behavioral state. For each of the four scans, a vascular scan containing vascular activity was computed as the summation of the images acquired 0-60 s after radiotracer administration, and a control scan containing minimal vascular activity was computed as the summation of the images acquired 20-80 s after radiotracer administration. t-score maps calculated from the four pairs of vascular and control scans were used to characterize regional blood flow differences related to vascular activity before and after the application of each vascular artifact correction method. Both methods eliminated the observed differences in vascular activity, as well as the vascular artifact observed in the anteromedial temporal lobes. Using PET data from a study of normal human emotion, these methods permitted the authors to identify rCBF increases in the anteromedial temporal lobe free from the potentially confounding, combined effects of vascular activity and partial-volume averaging.

  10. Methods for the correction of vascular artifacts in PET O-15 water brain-mapping studies

    International Nuclear Information System (INIS)

    Chen, K.; Reiman, E.M.; Good Samaritan Regional Medical Center, Phoenix, AZ; Lawson, M.; Yun, L.S.; Bandy, D.

    1996-01-01

    While positron emission tomographic (PET) measurements of regional cerebral blood flow (rCBF) can be used to map brain regions that are involved in normal and pathological human behaviors, measurements in the anteromedial temporal lobe can be confounded by the combined effects of radiotracer activity in neighboring arteries and partial-volume averaging. The authors now describe two simple methods to address this vascular artifact. One method utilizes the early frames of a dynamic PET study, while the other method utilizes a coregistered magnetic resonance image (MRI) to characterize the vascular region of interest (VROI). Both methods subsequently assign a common value to each pixel in the VROI for the control scan and the activation scan. To study the vascular artifact and to demonstrate the ability of the proposed methods to correct it, four dynamic PET scans were performed in a single subject during the same behavioral state. For each of the four scans, a vascular scan containing vascular activity was computed as the summation of the images acquired 0-60 s after radiotracer administration, and a control scan containing minimal vascular activity was computed as the summation of the images acquired 20-80 s after radiotracer administration. t-score maps calculated from the four pairs of vascular and control scans were used to characterize regional blood flow differences related to vascular activity before and after the application of each vascular artifact correction method. Both methods eliminated the observed differences in vascular activity, as well as the vascular artifact observed in the anteromedial temporal lobes. Using PET data from a study of normal human emotion, these methods permitted us to identify rCBF increases in the anteromedial temporal lobe free from the potentially confounding, combined effects of vascular activity and partial-volume averaging
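    Both variants share the final substitution step: every voxel in the vascular region of interest receives a common value in the control and activation scans. A minimal sketch follows; the fill rule is an assumption.

        import numpy as np

        def suppress_vascular_artifact(baseline, activation, vroi_mask):
            """Assign a common value to all VROI voxels in both scans (sketch).

            Once the VROI is defined (from early dynamic frames or from a
            coregistered MRI), every VROI voxel receives the same value in the
            control and activation scans, so the subtraction / t-map is free
            of vascular differences there. The pooled-mean fill is an assumption.
            """
            base, act = baseline.copy(), activation.copy()
            common = 0.5 * (base[vroi_mask].mean() + act[vroi_mask].mean())
            base[vroi_mask] = common
            act[vroi_mask] = common
            return base, act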

  11. Evaluation of the ICS and DEW scatter correction methods for low statistical content scans in 3D PET

    International Nuclear Information System (INIS)

    Sossi, V.; Oakes, T.R.; Ruth, T.J.

    1996-01-01

    The performance of the Integral Convolution and the Dual Energy Window scatter correction methods in 3D PET has been evaluated over a wide range of statistical content of acquired data (1M to 400M events). The order in which scatter correction and detector normalization should be applied has also been investigated. Phantom and human neuroreceptor studies were used with the following figures of merit: axial and radial uniformity, sinogram and image noise, contrast accuracy and contrast accuracy uniformity. Both scatter correction methods perform reliably in the range of number of events examined. Normalization applied after scatter correction yields better radial uniformity and fewer image artifacts.

  12. An improved bias correction method of daily rainfall data using a sliding window technique for climate change impact assessment

    Science.gov (United States)

    Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.

    2018-01-01

    Regional climate models (RCMs) are used to downscale the coarse resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from the observed climatological data, and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods, and is tested on six watersheds in different climatic zones of India for assessing the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data was used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed for evaluation of the different bias correction methods. The analysis suggested that the proposed method corrects the daily bias in rainfall more effectively than monthly factors. Methods such as local intensity scaling, modified power transformation and distribution mapping, which adjusted the wet day frequencies, performed better than the methods that did not consider adjustment of wet day frequencies. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of observed data with NSE values above 0.81 over most parts of India. Hydrological
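
    The key idea, replacing monthly statistics with a window centred on each calendar day, can be sketched as follows. This is a minimal illustration of the windowing step only, applied here to simple mean scaling; the study applies the same windowing to five different bias-correction methods, and the 15-day half-width is an assumption.

```python
import numpy as np

def daily_scaling_factors(obs, mod, doy, half_window=15):
    """For each calendar day d, pool all days whose day-of-year lies within
    +/- half_window of d (across all years) and derive a multiplicative
    factor mean(obs)/mean(mod). Corrected series: mod * factors[doy].

    obs, mod : 1-D daily rainfall arrays; doy : day-of-year (1..366).
    """
    obs, mod, doy = (np.asarray(a, dtype=float) for a in (obs, mod, doy))
    factors = np.ones(367)
    for d in range(1, 367):
        # circular distance in days so the window wraps around the year end
        dist = np.minimum(np.abs(doy - d), 366 - np.abs(doy - d))
        sel = dist <= half_window
        if sel.any() and mod[sel].mean() > 0:
            factors[d] = obs[sel].mean() / mod[sel].mean()
    return factors
```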

  13. Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods

    International Nuclear Information System (INIS)

    Narita, Y.; Eberl, S.; Nakamura, T.

    1996-01-01

    Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter and scatter plus primary) were simulated for 99m Tc and 201 Tl for numerical chest phantoms. Data were reconstructed with the ordered-subset ML-EM algorithm including attenuation correction using the transmission data. In the chest phantom simulation, TDCS provided better S/N than TEW, and better accuracy, i.e., 1.0% vs -7.2% in myocardium, and -3.7% vs -30.1% in the ventricular chamber for 99m Tc with TDCS and TEW, respectively. For 201 Tl, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
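
    For reference, the TEW estimate evaluated in this comparison is computed pixel by pixel from two narrow windows flanking the photopeak. A minimal sketch of the standard TEW formula (not code from the paper):

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Triple-energy-window scatter estimate for one projection pixel:
    trapezoidal interpolation between two narrow windows (widths w_lower,
    w_upper, in keV) flanking the photopeak window of width w_peak.
    """
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

# Scatter-corrected primary counts: max(c_peak - tew_scatter_estimate(...), 0)
```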

  14. Methods for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry

    Science.gov (United States)

    Chan, George C. Y. [Bloomington, IN; Hieftje, Gary M [Bloomington, IN

    2010-08-03

    A method for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry (ICP-AES). ICP-AES analysis is performed across a plurality of selected locations in the plasma on an unknown sample, collecting the light intensity at one or more selected wavelengths of one or more sought-for analytes, creating a first dataset. The first dataset is then calibrated with a calibration dataset, creating a calibrated first dataset curve. If the calibrated first dataset curve varies with location within the plasma for a selected wavelength, errors are present. Plasma-related errors are then corrected by diluting the unknown sample and performing the same ICP-AES analysis on the diluted unknown sample, creating a calibrated second dataset curve (accounting for the dilution) for the one or more sought-for analytes. The cross-over point of the calibrated dataset curves yields the corrected value (free from plasma-related errors) for each sought-for analyte.
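
    A sketch of the cross-over step under stated assumptions (sampled concentration-vs-location curves as numpy arrays, linear interpolation between locations); the patent text does not prescribe a particular root-finding scheme:

```python
import numpy as np

def crossover_value(conc_orig, conc_diluted, dilution_factor):
    """Locate the cross-over of the calibrated concentration-vs-location
    curves of the original sample and the dilution-corrected diluted sample,
    and return the (approximately plasma-error-free) concentration there.
    Uses the first sign change and linear interpolation between locations.
    """
    conc_orig = np.asarray(conc_orig, dtype=float)
    corrected = np.asarray(conc_diluted, dtype=float) * dilution_factor
    diff = conc_orig - corrected
    idx = np.where(np.sign(diff[:-1]) * np.sign(diff[1:]) < 0)[0]
    if idx.size == 0:
        raise ValueError("curves do not cross in the sampled range")
    i = idx[0]
    t = diff[i] / (diff[i] - diff[i + 1])  # fractional position of the root
    return conc_orig[i] + t * (conc_orig[i + 1] - conc_orig[i])
```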

  15. A neural network method to correct bidirectional effects in water-leaving radiance

    Science.gov (United States)

    Fan, Yongzhen; Li, Wei; Voss, Kenneth J.; Gatebe, Charles K.; Stamnes, Knut

    2017-02-01

    The standard method to convert the measured water-leaving radiances from the observation direction to the nadir direction developed by Morel and coworkers requires knowledge of the chlorophyll concentration (CHL). Also, the standard method was developed for open ocean water, which makes it unsuitable for turbid coastal waters. We introduce a neural network method to convert the water-leaving radiance (or the corresponding remote sensing reflectance) from the observation direction to the nadir direction. This method does not require any prior knowledge of the water constituents or the inherent optical properties (IOPs). This method is fast, accurate and can be easily adapted to different remote sensing instruments. Validation using NuRADS measurements in different types of water shows that this method is suitable for both open ocean and coastal waters. In open ocean or chlorophyll-dominated waters, our neural network method produces corrections similar to those of the standard method. In turbid coastal waters, especially sediment-dominated waters, a significant improvement was obtained compared to the standard method.

  16. Effects of Atmospheric Refraction on an Airborne Weather Radar Detection and Correction Method

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2015-01-01

    This study investigates the effect of atmospheric refraction, which is governed by temperature, atmospheric pressure, and humidity, on airborne weather radar beam paths. Using three types of typical atmospheric background sounding data, we established a simulation model for the actual transmission path and a fitted correction path of an airborne weather radar beam during airplane take-offs and landings, based on initial flight parameters and X-band airborne phased-array weather radar parameters. When atmospheric refraction is not considered, the errors of an ideal electromagnetic beam propagation path are much greater than those of the fitted path. The rates of change in the atmospheric refractive index differ with weather conditions, and the radar detection angles differ between airplane take-off and landing. Therefore, the airborne radar detection path must be revised in real time according to the specific sounding data and flight parameters. However, an error analysis indicates that a direct linear-fitting method produces significant errors in a negatively refractive atmosphere; a piecewise-fitting method can be adopted to revise the paths according to the actual atmospheric structure. This study provides researchers and practitioners in the aeronautics and astronautics field with updated information regarding the effect of atmospheric refraction on airborne weather radar detection and correction methods.

  17. METHOD OF RADIOMETRIC DISTORTION CORRECTION OF MULTISPECTRAL DATA FOR THE EARTH REMOTE SENSING

    Directory of Open Access Journals (Sweden)

    A. N. Grigoriev

    2015-07-01

    The paper deals with technologies of ground-based secondary processing of heterogeneous multispectral data. The factors behind the heterogeneity of the data include uneven illumination of objects on the Earth's surface caused by different properties of the relief. A procedure for the restoration of spectral-channel images by means of terrain distortion compensation is developed. The purpose of this paper is to improve the quality of the results during image restoration of areas with large and medium landforms. Methods. The research is based on elements of digital image processing theory, statistical processing of observation results and the theory of multidimensional arrays. Main Results. The author has introduced operations on multidimensional arrays: concatenation and elementwise division. An extended model description for input data about the area is given. The model contains all the data necessary for image restoration. A correction method for radiometric distortions of multispectral Earth remote sensing data has been developed. The method consists of two phases: construction of empirical dependences of spectral reflectance on the relief properties, and restoration of spectral images according to semiempirical data. Practical Relevance. The research novelty lies in the development of the applied theory of multidimensional arrays with respect to the processing of multispectral data, together with data on the topography and terrain objects. The results are usable for the development of radiometric data correction tools. Processing is performed on the basis of a digital terrain model, without ground work connected with measuring the objects' reflective properties.

  18. Non-uniform multivariate embedding to assess the information transfer in cardiovascular and cardiorespiratory variability series.

    Science.gov (United States)

    Faes, Luca; Nollo, Giandomenico; Porta, Alberto

    2012-03-01

    The complexity of the short-term cardiovascular control prompts the introduction of multivariate (MV) nonlinear time series analysis methods to assess directional interactions reflecting the underlying regulatory mechanisms. This study introduces a new approach for the detection of nonlinear Granger causality in MV time series, based on embedding the series by a sequential, non-uniform procedure, and on estimating the information flow from one series to another by means of the corrected conditional entropy. The approach is validated on short realizations of linear stochastic and nonlinear deterministic processes, and then evaluated on heart period, systolic arterial pressure and respiration variability series measured from healthy humans in the resting supine position and in the upright position after head-up tilt. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. The Pierce diode with an external circuit. I. Oscillations about nonuniform equilibria

    International Nuclear Information System (INIS)

    Lawson, W.S.

    1989-01-01

    The nonuniform (nonlinear) equilibria of the classical (short circuit) Pierce diode and the extended (series RLC external circuit) Pierce diode are described, and the spectrum of oscillations (stable and unstable) about these equilibria is worked out. It is found that only the external capacitance alters the equilibria, though all elements alter the spectrum. In particular, the introduction of an external capacitor destabilizes some equilibria that are marginally stable without the capacitor. Computer simulations are performed to test the theoretical predictions for the case of an external capacitor only. It is found that most equilibria are correctly predicted by theory, but that the continuous set of equilibria of the classical Pierce diode at Pierce parameters (α = ω_p L/v_0) that are multiples of 2π is not observed. This appears to be a failure of the simulation method under the rather singular conditions, rather than a failure of the theory.

  20. Flexural Free Vibrations of Multistep Nonuniform Beams

    Directory of Open Access Journals (Sweden)

    Guojin Tan

    2016-01-01

    This paper presents an exact approach to investigate the flexural free vibrations of multistep nonuniform beams. First, a one-step beam with moment of inertia and mass per unit length varying as I(x) = α1(1+βx)^(r+4) and m(x) = α2(1+βx)^r was studied. By using appropriate transformations, the differential equation for flexural free vibration of a one-step beam with variable cross section is reduced to a fourth-order differential equation with constant coefficients. According to the different types of roots of the characteristic equation of this fourth-order differential equation, two kinds of modal shape functions are obtained, and the general solutions for flexural free vibration of a one-step beam with variable cross section are presented. An exact approach to solve the natural frequencies and modal shapes of multistep beams with variable cross section is presented by using the transfer matrix method, the exact general solutions of the one-step beam, and an iterative method. Numerical examples reveal that the calculated frequencies and modal shapes are in good agreement with the finite element method (FEM), which demonstrates that the solutions of the present method are exact.

  1. Temperature effects on pitfall catches of epigeal arthropods: a model and method for bias correction.

    Science.gov (United States)

    Saska, Pavel; van der Werf, Wopke; Hemerik, Lia; Luff, Martin L; Hatten, Timothy D; Honek, Alois; Pocock, Michael

    2013-02-01

    Carabids and other epigeal arthropods make important contributions to biodiversity, food webs and biocontrol of invertebrate pests and weeds. Pitfall trapping is widely used for sampling carabid populations, but this technique yields biased estimates of abundance ('activity-density') because individual activity - which is affected by climatic factors - affects the rate of catch. To date, the impact of temperature on pitfall catches, while suspected to be large, has not been quantified, and no method is available to account for it. This lack of knowledge and the unavailability of a method for bias correction affect the confidence that can be placed on results of ecological field studies based on pitfall data. Here, we develop a simple model for the effect of temperature, assuming a constant proportional change in the rate of catch per °C change in temperature, r, consistent with an exponential Q10 response to temperature. We fit this model to 38 time series of pitfall catches and accompanying temperature records from the literature, using first differences and other detrending methods to account for seasonality. We use meta-analysis to assess consistency of the estimated parameter r among studies. The mean rate of increase in total catch across data sets was 0.0863 ± 0.0058 per °C of maximum temperature and 0.0497 ± 0.0107 per °C of minimum temperature. Multiple regression analyses of 19 data sets showed that temperature is the key climatic variable affecting total catch. Relationships between temperature and catch were also identified at species level. Correction for temperature bias had substantial effects on seasonal trends of carabid catches. Synthesis and Applications. The effect of temperature on pitfall catches is shown here to be substantial and worthy of consideration when interpreting results of pitfall trapping. The exponential model can be used both for effect estimation and for bias correction of observed data. Correcting for temperature
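
    Under the exponential model, bias correction amounts to dividing each catch by exp(r·(T − T_ref)). A minimal sketch, using the paper's mean estimate of r for maximum temperature; the 20 °C reference temperature is an arbitrary choice (assumption):

```python
import numpy as np

def temperature_corrected_catch(catch, t_max, r=0.0863, t_ref=20.0):
    """Rescale pitfall catches to a common reference temperature assuming
    the exponential activity model catch ~ exp(r * T). r defaults to the
    paper's mean estimate per degC of maximum temperature; the 20 degC
    reference is an arbitrary choice (assumption).
    """
    catch = np.asarray(catch, dtype=float)
    t_max = np.asarray(t_max, dtype=float)
    return catch / np.exp(r * (t_max - t_ref))
```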

  2. A method to correct sampling ghosts in historic near-infrared Fourier transform spectrometer (FTS) measurements

    Science.gov (United States)

    Dohe, S.; Sherlock, V.; Hase, F.; Gisi, M.; Robinson, J.; Sepúlveda, E.; Schneider, M.; Blumenstock, T.

    2013-08-01

    The Total Carbon Column Observing Network (TCCON) has been established to provide ground-based remote sensing measurements of the column-averaged dry air mole fractions (DMF) of key greenhouse gases. To ensure network-wide consistency, biases between Fourier transform spectrometers at different sites have to be well controlled. Errors in interferogram sampling can introduce significant biases in retrievals. In this study we investigate a two-step scheme to correct these errors. In the first step the laser sampling error (LSE) is estimated by determining the sampling shift which minimises the magnitude of the signal intensity in selected, fully absorbed regions of the solar spectrum. The LSE is estimated for every day with measurements which meet certain selection criteria to derive the site-specific time series of the LSEs. In the second step, this sequence of LSEs is used to resample all the interferograms acquired at the site, and hence correct the sampling errors. Measurements acquired at the Izaña and Lauder TCCON sites are used to demonstrate the method. At both sites the sampling error histories show changes in LSE due to instrument interventions (e.g. realignment). Estimated LSEs are in good agreement with sampling errors inferred from the ratio of primary and ghost spectral signatures in optically bandpass-limited tungsten lamp spectra acquired at Lauder. The original time series of Xair and XCO2 (XY: column-averaged DMF of the target gas Y) at both sites show discrepancies of 0.2-0.5% due to changes in the LSE associated with instrument interventions or changes in the measurement sample rate. After resampling, discrepancies are reduced to 0.1% or less at Lauder and 0.2% at Izaña. In the latter case, coincident changes in interferometer alignment may also have contributed to the residual difference. In the future the proposed method will be used to correct historical spectra at all TCCON sites.
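
    The first step of the scheme can be sketched as a one-dimensional search: shift the sampling grid, resample the interferogram, and score the residual spectral magnitude inside a fully absorbed region, where the true solar intensity is zero. The sketch below assumes a numpy interferogram, linear interpolation in place of proper sinc resampling, and an absorbed-band mask supplied by the caller; it is schematic, not TCCON processing code.

```python
import numpy as np

def estimate_lse(ifg, absorbed_band, shifts=np.linspace(-0.01, 0.01, 41)):
    """Grid-search the sampling shift (in fractions of a sample) that
    minimises the mean spectral magnitude inside a fully absorbed region,
    where the true solar intensity is zero. absorbed_band is a boolean mask
    over the rfft bins. Linear interpolation stands in for proper sinc
    resampling, and the shift grid is an assumption.
    """
    ifg = np.asarray(ifg, dtype=float)
    n = np.arange(ifg.size)
    best_shift, best_residual = 0.0, np.inf
    for eps in shifts:
        resampled = np.interp(n + eps, n, ifg)
        residual = np.abs(np.fft.rfft(resampled))[absorbed_band].mean()
        if residual < best_residual:
            best_shift, best_residual = eps, residual
    return best_shift
```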

  3. A method to correct sampling ghosts in historic near-infrared Fourier transform spectrometer (FTS) measurements

    Directory of Open Access Journals (Sweden)

    S. Dohe

    2013-08-01

    The Total Carbon Column Observing Network (TCCON) has been established to provide ground-based remote sensing measurements of the column-averaged dry air mole fractions (DMF) of key greenhouse gases. To ensure network-wide consistency, biases between Fourier transform spectrometers at different sites have to be well controlled. Errors in interferogram sampling can introduce significant biases in retrievals. In this study we investigate a two-step scheme to correct these errors. In the first step the laser sampling error (LSE) is estimated by determining the sampling shift which minimises the magnitude of the signal intensity in selected, fully absorbed regions of the solar spectrum. The LSE is estimated for every day with measurements which meet certain selection criteria to derive the site-specific time series of the LSEs. In the second step, this sequence of LSEs is used to resample all the interferograms acquired at the site, and hence correct the sampling errors. Measurements acquired at the Izaña and Lauder TCCON sites are used to demonstrate the method. At both sites the sampling error histories show changes in LSE due to instrument interventions (e.g. realignment). Estimated LSEs are in good agreement with sampling errors inferred from the ratio of primary and ghost spectral signatures in optically bandpass-limited tungsten lamp spectra acquired at Lauder. The original time series of Xair and XCO2 (XY: column-averaged DMF of the target gas Y) at both sites show discrepancies of 0.2–0.5% due to changes in the LSE associated with instrument interventions or changes in the measurement sample rate. After resampling, discrepancies are reduced to 0.1% or less at Lauder and 0.2% at Izaña. In the latter case, coincident changes in interferometer alignment may also have contributed to the residual difference. In the future the proposed method will be used to correct historical spectra at all TCCON sites.

  4. Optimal correction and design parameter search by modern methods of rigorous global optimization

    International Nuclear Information System (INIS)

    Makino, K.; Berz, M.

    2011-01-01

    Frequently the design of schemes for correction of aberrations, or the determination of possible operating ranges for beamlines and cells in synchrotrons, exhibits multitudes of possibilities for their correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, an abundance of optimization runs is frequently carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible ways to adjust nonlinear parameters to achieve correction of high order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been commonly believed that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and in using the underestimators to rigorously and iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle

  5. Non-uniformity measurements of PbWO4 crystals

    International Nuclear Information System (INIS)

    Depasse, P.; Ernenwein, J.P.; Ille, B.; Martin, F.; Rosset, C.; Zach, F.

    1998-11-01

    Two independent methods have been used to measure the longitudinal non-uniformity of the scintillation response of 3 different (23-cm long) PbWO 4 crystals. The first is the classical 60 Co source method. The source is collimated along the crystal, every 1.5 cm, and the scintillation signal is measured with a photomultiplier (a hybrid photomultiplier in our case). The second uses cosmic particles (minimum ionizing particles, MIPs). A cosmic bench allows the MIP tracks, and thus the energy deposits, to be reconstructed with the help of a full GEANT simulation of the setup. Variations of the deposited energy along the crystal, artificially divided into 1.5-cm sections, yield the non-uniformity. The conclusion is that both methods agree quite well. Furthermore, a good estimation of the crystal light yield can be obtained. (author)

  6. Attenuation correction for renal scintigraphy with 99mTc-DMSA: comparison between Raynaud and the geometric mean methods

    International Nuclear Information System (INIS)

    Argenta, J.; Brambilla, C.R.; Marques da Silva, A.M.

    2009-01-01

    The evaluation of the index of renal function (IF) requires soft-tissue attenuation correction. This paper investigates the impact on the IF when attenuation correction is applied using the Raynaud method and the geometric mean method in renal planar scintigraphy, using posterior and anterior views. The study was conducted with Monte Carlo simulated images of five GSF family voxel phantoms with different relative uptakes in each kidney, from normal (50%-50%) to pathological (10%-90%). The results showed that the Raynaud method corrects more efficiently the cases where the renal depth is close to the value of the standard phantom. The geometric mean method showed similar results to the Raynaud method for Baby, Child and Golem. For the Helga and Donna models, the errors were above 20%, increasing with relative uptake. Further studies should be conducted to assess the influence of the standard phantom on the attenuation correction methods. (author)
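
    Of the two approaches, the geometric mean method is easy to state compactly: conjugate anterior/posterior counts are combined as their geometric mean, with an optional body-thickness term. A minimal sketch (the μ value and the thickness term are standard textbook choices, not parameters from this paper):

```python
import numpy as np

def geometric_mean_counts(anterior, posterior, mu=0.12, thickness=None):
    """Combine conjugate anterior/posterior counts as sqrt(A * P); for a
    point source this is independent of source depth, which is why the
    method needs no renal-depth estimate. The optional exp(mu * T / 2)
    factor corrects for body thickness T (cm) with an effective linear
    attenuation coefficient mu (~0.12 /cm for 99mTc; textbook value, not
    a parameter from this paper).
    """
    gm = np.sqrt(np.asarray(anterior, float) * np.asarray(posterior, float))
    if thickness is not None:
        gm = gm * np.exp(mu * thickness / 2.0)
    return gm
```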

  7. Attenuation correction for renal scintigraphy with 99mTc-DMSA: analysis between Raynaud and the geometric mean methods

    International Nuclear Information System (INIS)

    Argenta, Jackson; Brambilla, Claudia R.; Silva, Ana Maria M. da

    2010-01-01

    The evaluation of the index of renal function (IF) requires soft-tissue attenuation correction. This paper investigates the impact on the IF when attenuation correction is applied using the Raynaud method and the geometric mean method in renal planar scintigraphy, using posterior and anterior views. The study was conducted with Monte Carlo simulated images of five GSF family voxel phantoms with different relative uptakes in each kidney, from normal (50%-50%) to pathological (10%-90%). The results showed that the Raynaud method corrects more efficiently the cases where the renal depth is close to the value of the standard phantom. The geometric mean method showed similar results to the Raynaud method for Baby, Child and Golem. For the Helga and Donna models, the errors were above 20%, increasing with relative uptake. Further studies should be conducted to assess the influence of the standard phantom on the attenuation correction methods. (author)

  8. Development of a new technic for breast attenuation correction in myocardial perfusion scintigraphy using computational methods

    International Nuclear Information System (INIS)

    Oliveira, Anderson de

    2015-01-01

    Introduction: One of the limitations of nuclear medicine studies is false-positive results, which lead to unnecessary exams and procedures associated with morbidity and costs to the individual and society. One of the most frequent causes of reduced specificity in myocardial perfusion imaging (MPI) is photon attenuation, especially by the breast in women. Objective: To develop a new technique to compensate for photon attenuation by the breast in myocardial perfusion imaging with 99m Tc-sestamibi, using computational methods. Materials and methods: A procedure was proposed which integrates Monte Carlo simulation, computational methods and experimental techniques. Initially, the chest attenuation correction percentages were obtained using a Jaszczak phantom, and the breast attenuation percentages were obtained by Monte Carlo simulation using the EGS4 program. The percentages of attenuation correction were linked to individual patients' characteristics by an artificial neural network and a multivariate analysis. A preliminary technical validation was done by comparing the results of MPI and catheterization (CAT) before and after applying the technique to 4 patients. The t test was used for parametric data, and the Wilcoxon, Mann-Whitney and χ2 tests for the others. Probability values less than 0.05 were considered statistically significant. Results: Each increment of 1 cm in breast thickness was associated with an average increment of 6% in photon attenuation, while the maximum increase related to breast composition was about 2%. The average chest attenuation percentage per unit was 2.9%. Both the artificial neural network and the linear regression showed an error of less than 3% as predictive models for the percentage of female attenuation. The anatomical-functional correlation between MPI and CAT was maintained after the use of the technique. Conclusion: Results suggest that the proposed technique is promising and could be a possible alternative to other conventional methods employed.

  9. 76 FR 53819 - Methods of Accounting Used by Corporations That Acquire the Assets of Other Corporations; Correction

    Science.gov (United States)

    2011-08-30

    ... of Accounting Used by Corporations That Acquire the Assets of Other Corporations; Correction AGENCY... describes corrections to final regulations (TD 9534) relating to the methods of accounting, including the... corporate reorganizations and tax-free liquidations. These regulations were published in the Federal...

  10. A numerical method for determining the radial wave motion correction in plane wave couplers

    DEFF Research Database (Denmark)

    Cutanda Henriquez, Vicente; Barrera Figueroa, Salvador; Torras Rosell, Antoni

    2016-01-01

    Microphones are used for realising the unit of sound pressure level, the pascal (Pa). Electro-acoustic reciprocity is the preferred method for the absolute determination of the sensitivity. This method can be applied in different sound fields: uniform pressure, free field or diffuse field. Pressure...... solution is an analytical expression that estimates the difference between the ideal plane wave sound field and a more complex lossless sound field created by a non-planar movement of the microphone’s membranes. Alternatively, a correction may be calculated numerically by introducing a full model...... of the microphone-coupler system in a Boundary Element formulation. In order to obtain a realistic representation of the sound field, viscous losses must be introduced in the model. This paper presents such a model, and the results of the simulations for different combinations of microphones and couplers...

  11. Evaluation of Machine Learning Methods for LHC Optics Measurements and Corrections Software

    CERN Document Server

    AUTHOR|(CDS)2206853; Henning, Peter

    The field of artificial intelligence is driven by the goal of providing machines with human-like intelligence. However, modern science is currently facing problems of such high complexity that they cannot be solved by humans on the same timescale as by machines. There is therefore a demand for the automation of complex tasks. Identifying the category of tasks which can be performed by machines in the domain of optics measurements and corrections on the Large Hadron Collider (LHC) is one of the central research subjects of this thesis. The application of machine learning methods and concepts of artificial intelligence can be found in various industrial and scientific branches. In High Energy Physics these concepts are mostly used in offline analysis of experimental data and to perform regression tasks. In Accelerator Physics the machine learning approach has not yet found wide application, so potential tasks for machine learning solutions can be specified in this domain. The appropriate methods and their suitability for...

  12. Correction method for critical extrapolation of control-rods-rising during physical start-up of reactor

    International Nuclear Information System (INIS)

    Zhang Fan; Chen Wenzhen; Yu Lei

    2008-01-01

    During the physical start-up of a nuclear reactor, the extrapolation curve obtained by lifting the control rods toward the critical state is often convex (protruding), which can lead to unexpected supercriticality. In this paper, the reason for this convex shape is analyzed. A correction method is introduced, and calculations were carried out with operating data from a nuclear power plant. The results show that the correction method removes the convexity of the extrapolation curve, and that the risk of inadvertently reaching supercriticality during the physical start-up of the reactor can be reduced by using the extrapolated curve obtained with the correction method. (authors)
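
    For context, the uncorrected baseline that the paper amends is the classic inverse-multiplication (1/M) extrapolation, sketched below under stated assumptions (a straight-line fit through the last few points); the paper's own correction for the convex curve is not reproduced here.

```python
import numpy as np

def extrapolated_critical_position(rod_pos, count_rate, fit_last=4):
    """Baseline 1/M extrapolation: the inverse multiplication C0/C falls
    toward zero as criticality is approached; a straight line fitted through
    the last fit_last points and extrapolated to 1/M = 0 predicts the
    critical rod position. A convex 1/M curve makes this prediction
    optimistic, which is the hazard the paper's correction addresses.
    """
    count_rate = np.asarray(count_rate, dtype=float)
    inv_m = count_rate[0] / count_rate
    x = np.asarray(rod_pos, dtype=float)[-fit_last:]
    slope, intercept = np.polyfit(x, inv_m[-fit_last:], 1)
    return -intercept / slope  # rod position where the fitted 1/M vanishes
```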

  13. Quasiparticles in non-uniformly magnetized plasma

    International Nuclear Information System (INIS)

    Sosenko, P.P.

    1994-01-01

    A quasiparticle concept is generalized to the case of non-uniformly magnetized plasma. Exact and reduced continuity equations for the microscopic density in the quasiparticle phase space are derived, and the nature of quasiparticles is analyzed. The theory is developed for the general case of relativistic particles in electromagnetic fields, in addition to non-uniform but stationary magnetic fields. Effects of non-stationary magnetic fields are also briefly investigated. 26 refs

  14. Attenuation correction for the NIH ATLAS small animal PET scanner

    CERN Document Server

    Yao, Rutao; Liow, JeihSan; Seidel, Jurgen

    2003-01-01

    We evaluated two methods of attenuation correction for the NIH ATLAS small animal PET scanner: 1) a CT-based method that derives 511 keV attenuation coefficients (μ) by extrapolation from spatially registered CT images; and 2) an analytic method based on the body outline of emission images and an empirical μ. A specially fabricated attenuation calibration phantom with cylindrical inserts that mimic different body tissues was used to derive the relationship to convert CT values to μ for PET. The methods were applied to three test data sets: 1) a uniform cylinder phantom, 2) the attenuation calibration phantom, and 3) a mouse injected with [18F]FDG. The CT-based attenuation correction factors were larger than those of the analytic method in non-uniform regions of the imaging subject, e.g. the mouse head. The two methods had similar correction factors for regions with uniform density and detectable emission source distributions.

  15. Spectrum correction algorithm for detectors in airborne radioactivity monitoring equipment NH-UAV based on a ratio processing method

    International Nuclear Information System (INIS)

    Cao, Ye; Tang, Xiao-Bin; Wang, Peng; Meng, Jia; Huang, Xi; Wen, Liang-Sheng; Chen, Da

    2015-01-01

    The unmanned aerial vehicle (UAV) radiation monitoring method plays an important role in nuclear accident emergencies. In this research, a spectrum correction algorithm for the UAV airborne radioactivity monitoring equipment NH-UAV was studied, to measure the radioactive nuclides within a small area in real time and in a fixed place. The simulation spectra of the high-purity germanium (HPGe) detector and the lanthanum bromide (LaBr 3 ) detector in the equipment were obtained using the Monte Carlo technique. Spectrum correction coefficients were calculated by applying ratio processing techniques to the net peak areas of the two detectors, referencing the detection spectrum of the LaBr 3 detector to the accuracy of the detection spectrum of the HPGe detector. The relationship between the spectrum correction coefficient and the size of the source term was also investigated. A good linear relation exists between the spectrum correction coefficient and the corresponding energy (R 2 =0.9765). The maximum relative deviation from the real condition was reduced from 1.65 to 0.035. The spectrum correction method was verified as feasible. - Highlights: • An airborne radioactivity monitoring equipment NH-UAV was developed to measure radionuclides after a nuclear accident. • A spectrum correction algorithm was proposed to obtain precise information on the detected radioactivity within a small area. • The spectrum correction method was verified as feasible. • The corresponding spectrum correction coefficients increase first and then stay constant.

  16. Spectrum correction algorithm for detectors in airborne radioactivity monitoring equipment NH-UAV based on a ratio processing method

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Ye [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Tang, Xiao-Bin, E-mail: tangxiaobin@nuaa.edu.cn [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Jiangsu Key Laboratory of Nuclear Energy Equipment Materials Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Wang, Peng; Meng, Jia; Huang, Xi; Wen, Liang-Sheng [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Chen, Da [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Jiangsu Key Laboratory of Nuclear Energy Equipment Materials Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China)

    2015-10-11

    The unmanned aerial vehicle (UAV) radiation monitoring method plays an important role in nuclear accident emergencies. In this research, a spectrum correction algorithm for the UAV airborne radioactivity monitoring equipment NH-UAV was studied, to measure the radioactive nuclides within a small area in real time and in a fixed place. The simulation spectra of the high-purity germanium (HPGe) detector and the lanthanum bromide (LaBr 3 ) detector in the equipment were obtained using the Monte Carlo technique. Spectrum correction coefficients were calculated by applying ratio processing techniques to the net peak areas of the two detectors, referencing the detection spectrum of the LaBr 3 detector to the accuracy of the detection spectrum of the HPGe detector. The relationship between the spectrum correction coefficient and the size of the source term was also investigated. A good linear relation exists between the spectrum correction coefficient and the corresponding energy (R 2 =0.9765). The maximum relative deviation from the real condition was reduced from 1.65 to 0.035. The spectrum correction method was verified as feasible. - Highlights: • An airborne radioactivity monitoring equipment NH-UAV was developed to measure radionuclides after a nuclear accident. • A spectrum correction algorithm was proposed to obtain precise information on the detected radioactivity within a small area. • The spectrum correction method was verified as feasible. • The corresponding spectrum correction coefficients increase first and then stay constant.

  17. Fission track dating of volcanic glass: experimental evidence for the validity of the Size-Correction Method

    International Nuclear Information System (INIS)

    Bernardes, C.; Hadler Neto, J.C.; Lattes, C.M.G.; Araya, A.M.O.; Bigazzi, G.; Cesar, M.F.

    1986-01-01

    Two techniques may be employed for correcting thermally lowered fission track ages of glass material: the so-called 'size-correction method' and the 'plateau method'. Several results from fission track dating of obsidian were analysed in order to compare the model underlying the size-correction method with experimental evidence. The results of this work can be summarized as follows: 1) The assumption that the mean sizes of spontaneous and induced etched tracks are equal in samples unaffected by partial fading is supported by experimental results. If reactor effects exist, such as an enhancement of the etching rate in the irradiated fraction due to radiation damage and/or to the fact that induced fission releases a quantity of energy slightly greater than spontaneous fission, their influence on the size-correction method is very small. 2) The above two correction techniques produce concordant results. 3) Several samples from the same obsidian, affected by 'instantaneous' as well as 'continuous' natural fading to different degrees, were analysed: the curve showing the decrease of the spontaneous track mean size vs. the fraction of spontaneous tracks lost by fading is in close agreement with the correction curve constructed for the same obsidian by imparting artificial thermal treatments to induced tracks. From the above points one can conclude that the assumptions on which the size-correction method is based are well supported, at least to first approximation. (Author) [pt

  18. Spectral-ratio radon background correction method in airborne γ-ray spectrometry based on Compton scattering deduction

    International Nuclear Information System (INIS)

    Gu Yi; Xiong Shengqing; Zhou Jianxin; Fan Zhengguo; Ge Liangquan

    2014-01-01

    γ-rays released by radon daughters have a severe impact on airborne γ-ray spectrometry. The spectral-ratio method is one of the best mathematical methods for radon background deduction in airborne γ-ray spectrometry. In this paper, an advanced spectral-ratio method is proposed which deducts the Compton scattering rays by means of the fast Fourier transform rather than stripping ratios. The relationship between survey height and the correction coefficient of the advanced spectral-ratio radon background correction method was studied, the mathematical model of the advanced spectral-ratio radon background correction was established, and a ground saturation model calibration technique for the correction coefficient was proposed. The applicability and correction efficiency of the advanced spectral-ratio method are improved, and the application cost is reduced. Furthermore, it preserves the physical meaning and avoids the possible errors caused by the matrix computation and the mathematical fitting based on spectrum shape which are applied in the traditional correction coefficient. (authors)

  19. A Novel Optimal Control Method for Impulsive-Correction Projectile Based on Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Ruisheng Sun

    2016-01-01

    This paper presents a new parametric optimization approach based on a modified particle swarm optimization (PSO) to design a class of impulsive-correction projectiles with discrete, flexible-time-interval, and finite-energy control. In terms of optimal control theory, the task is described as the formulation of a minimum working number of impulses and minimum control error, which involves reference model linearization, boundary conditions, and a discontinuous objective function. These result in difficulties in finding the global optimum solution by directly applying other optimization approaches, for example, the hp-adaptive pseudospectral method. Consequently, the PSO mechanism is employed for the optimal setting of the impulsive control by considering the time intervals between two neighboring lateral impulses as design variables, which keeps the optimization process brief. A modification of the basic PSO algorithm is developed to improve the convergence speed of this optimization by linearly decreasing the inertia weight. In addition, a suboptimal control and guidance law based on the PSO technique are put forward for real-time consideration of the online design in practice. Finally, a simulation case coupled with a nonlinear flight dynamic model is used to validate the modified PSO control algorithm. The results of the comparative study illustrate that the proposed optimal control algorithm performs well in obtaining the optimal control efficiently and accurately, and provides a reference approach to handling such impulsive-correction problems.
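
    The modification named in the abstract, a linearly decreasing inertia weight, is a small change to the standard PSO update. A generic, self-contained sketch (not the authors' impulsive-control formulation; bounds handling and the coefficient values are common defaults, stated as assumptions):

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, w0=0.9, w1=0.4,
                 c1=2.0, c2=2.0, seed=0):
    """Minimal PSO with a linearly decreasing inertia weight: w falls from
    w0 to w1 over the run, trading early exploration for late convergence.
    f maps an (n_dim,) array to a scalar; bounds is a list of (lo, hi)
    pairs, one per dimension.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for k in range(iters):
        w = w0 + (w1 - w0) * k / (iters - 1)   # linear decrease of inertia
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Usage: best, fmin = pso_minimize(lambda p: ((p - 0.3)**2).sum(), [(-1, 1)]*3)
```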

  20. Linearization of Nonautonomous Impulsive System with Nonuniform Exponential Dichotomy

    Directory of Open Access Journals (Sweden)

    Yongfei Gao

    2014-01-01

    This paper gives a version of the Hartman-Grobman theorem for impulsive differential equations. We assume that the linear impulsive system has a nonuniform exponential dichotomy. Under some suitable conditions, we prove that the nonlinear impulsive system is topologically conjugate to its linear system. Indeed, we construct the topologically equivalent function (the transformation) explicitly. Moreover, the method used to prove the topological conjugacy is quite different from those in previous works (e.g., see Barreira and Valls, 2006).

  1. Comparison between MRI-based attenuation correction methods for brain PET in dementia patients

    International Nuclear Information System (INIS)

    Cabello, Jorge; Lukas, Mathias; Pyka, Thomas; Nekolla, Stephan G.; Ziegler, Sibylle I.; Rota Kops, Elena; Shah, N. Jon; Ribeiro, Andre; Yakushev, Igor

    2016-01-01

    The combination of Positron Emission Tomography (PET) with magnetic resonance imaging (MRI) in hybrid PET/MRI scanners offers a number of advantages in investigating brain structure and function. A critical step of PET data reconstruction is attenuation correction (AC). Accounting for bone in attenuation maps (μ-map) was shown to be important in brain PET studies. While there are a number of MRI-based AC methods, no systematic comparison between them has been performed so far. The aim of this work was to study the performance of some of the recent methods presented in the literature. To perform such a comparison, we focused on [ 18 F]-Fluorodeoxyglucose PET/MRI of neurodegenerative dementing disorders, which are known to exhibit reduced levels of glucose metabolism in certain brain regions. Four novel methods were used to calculate μ-maps from MRI data of 15 patients with Alzheimer's dementia (AD). The methods cover two atlas-based methods, a segmentation method, and a hybrid template/segmentation method. Additionally, the Dixon-based and a UTE-based method, offered by a vendor, were included in the comparison. Performance was assessed at three levels: tissue identification accuracy in the μ-map, quantitative accuracy of reconstructed PET data in specific brain regions, and precision in diagnostic images at identifying hypometabolic areas. Quantitative regional errors of -20 to 10 % were obtained using the vendor's AC methods, whereas the novel methods produced errors within a margin of ±5 %. The precision obtained at identifying areas with abnormally low levels of glucose uptake, potentially regions affected by AD, was 62.9 and 79.5 % for the two vendor AC methods, the former ignoring bone and the latter including bone information. The precision increased to 87.5-93.3 % on average for the four new methods, which exhibited similar performances. We confirm that the AC methods based on the Dixon and UTE sequences provided by the vendor are inferior

  2. Comparison between MRI-based attenuation correction methods for brain PET in dementia patients

    Energy Technology Data Exchange (ETDEWEB)

    Cabello, Jorge; Lukas, Mathias; Pyka, Thomas; Nekolla, Stephan G.; Ziegler, Sibylle I. [Technische Universitaet Muenchen, Nuklearmedizinische Klinik und Poliklinik, Klinikum rechts der Isar, Munich (Germany); Rota Kops, Elena; Shah, N. Jon [Forschungszentrum Juelich GmbH, Institute of Neuroscience and Medicine 4, Medical Imaging Physics, Juelich (Germany); Ribeiro, Andre [Forschungszentrum Juelich GmbH, Institute of Neuroscience and Medicine 4, Medical Imaging Physics, Juelich (Germany); Institute of Biophysics and Biomedical Engineering, Lisbon (Portugal); Yakushev, Igor [Technische Universitaet Muenchen, Nuklearmedizinische Klinik und Poliklinik, Klinikum rechts der Isar, Munich (Germany); Institute TUM Neuroimaging Center (TUM-NIC), Munich (Germany)

    2016-11-15

    The combination of Positron Emission Tomography (PET) with magnetic resonance imaging (MRI) in hybrid PET/MRI scanners offers a number of advantages in investigating brain structure and function. A critical step of PET data reconstruction is attenuation correction (AC). Accounting for bone in attenuation maps (μ-map) was shown to be important in brain PET studies. While there are a number of MRI-based AC methods, no systematic comparison between them has been performed so far. The aim of this work was to study the performance of some of the recent methods presented in the literature. To perform such a comparison, we focused on [ 18 F]-Fluorodeoxyglucose PET/MRI of neurodegenerative dementing disorders, which are known to exhibit reduced levels of glucose metabolism in certain brain regions. Four novel methods were used to calculate μ-maps from MRI data of 15 patients with Alzheimer's dementia (AD). The methods cover two atlas-based methods, a segmentation method, and a hybrid template/segmentation method. Additionally, the Dixon-based and a UTE-based method, offered by a vendor, were included in the comparison. Performance was assessed at three levels: tissue identification accuracy in the μ-map, quantitative accuracy of reconstructed PET data in specific brain regions, and precision in diagnostic images at identifying hypometabolic areas. Quantitative regional errors of -20 to 10 % were obtained using the vendor's AC methods, whereas the novel methods produced errors within a margin of ±5 %. The precision obtained at identifying areas with abnormally low levels of glucose uptake, potentially regions affected by AD, was 62.9 and 79.5 % for the two vendor AC methods, the former ignoring bone and the latter including bone information. The precision increased to 87.5-93.3 % on average for the four new methods, which exhibited similar performances. We confirm that the AC methods based on the Dixon and UTE sequences provided by the vendor are

  3. An Improved Dynamical Downscaling Method with GCM Bias Corrections and Its Validation with 30 Years of Climate Simulations

    KAUST Repository

    Xu, Zhongfeng; Yang, Zong-Liang

    2012-01-01

    An improved dynamical downscaling method (IDD) with general circulation model (GCM) bias corrections is developed and assessed over North America. A set of regional climate simulations is performed with the Weather Research and Forecasting Model

  4. Going from microscopic to macroscopic on nonuniform growing domains.

    Science.gov (United States)

    Yates, Christian A; Baker, Ruth E; Erban, Radek; Maini, Philip K

    2012-08-01

    Throughout development, chemical cues are employed to guide the functional specification of underlying tissues, while the spatiotemporal distributions of such chemicals can be influenced by the growth of the tissue itself. These chemicals, termed morphogens, are often modeled using partial differential equations (PDEs). The connection between discrete stochastic and deterministic continuum models of particle migration on growing domains was elucidated by Baker, Yates, and Erban [Bull. Math. Biol. 72, 719 (2010)], in which the migration of individual particles was modeled as an on-lattice position-jump process. We build on this work by incorporating a more physically reasonable description of domain growth. Instead of allowing underlying lattice elements to instantaneously double in size and divide, we allow incremental element growth and splitting upon reaching a predefined threshold size. Such a description of domain growth necessitates a nonuniform partition of the domain. We first demonstrate that an individual-based stochastic model for particle diffusion on such a nonuniform domain partition is equivalent to a PDE model of the same phenomenon on a nongrowing domain, provided the transition rates (which we derive) are chosen correctly and we partition the domain in the correct manner. We extend this analysis to the case where the domain is allowed to change in size, altering the transition rates as necessary. Through application of the master equation formalism we derive a PDE for particle density on this growing domain and corroborate our findings with numerical simulations.
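
    On a non-uniform partition, the jump rates must depend on the local box widths for the mean-field limit to recover the diffusion equation. The sketch below uses a finite-volume-style choice of rates, stated here as an assumption standing in for the rates the paper derives:

```python
import numpy as np

def jump_rates(h, D):
    """Left/right jump rates for a position-jump process on a non-uniform
    1-D partition with box widths h[i]. This finite-volume-style choice,
    T_i(right) = 2D / (h_i * (h_i + h_{i+1})), recovers the diffusion
    equation in the mean-field limit; it is an assumption here, not the
    paper's derivation. Boundaries are reflecting (zero rate).
    """
    h = np.asarray(h, dtype=float)
    right = np.zeros(h.size)
    left = np.zeros(h.size)
    right[:-1] = 2.0 * D / (h[:-1] * (h[:-1] + h[1:]))
    left[1:] = 2.0 * D / (h[1:] * (h[1:] + h[:-1]))
    return right, left
```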

  5. Cell-centered particle weighting algorithm for PIC simulations in a non-uniform 2D axisymmetric mesh

    Science.gov (United States)

    Araki, Samuel J.; Wirz, Richard E.

    2014-09-01

    Standard area weighting methods for particle-in-cell simulations result in systematic errors in particle densities on a non-uniform mesh in cylindrical coordinates. These errors can be significantly reduced by using weighted cell volumes for density calculations. A detailed description of the corrected volume calculations and the cell-centered weighting algorithm on a non-uniform mesh is provided. The simple formulas for the corrected volume can be used for any type of quadrilateral and/or triangular mesh in cylindrical coordinates. Density errors arising from the cell-centered weighting algorithm are computed for uniform, linearly decreasing, and Bessel-function radial density profiles in an adaptive Cartesian mesh and an unstructured mesh. For all the density profiles, it is shown that the weighting algorithm provides a significant improvement in density calculations. However, relatively large density errors may persist in the outermost cells for monotonically decreasing density profiles. A further analysis has been performed to investigate the effect of the density errors on potential calculations, and it is shown that the error at the outermost cell does not propagate into the potential solution for the density profiles investigated.
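
    The essence of the correction is to divide deposited charge by the exact volume of each axisymmetric ring cell rather than a flat-mesh approximation. A minimal sketch for a structured mesh (the paper's formulas additionally cover general quadrilateral and triangular cells):

```python
import numpy as np

def ring_cell_volume(r_inner, r_outer, dz):
    """Exact volume of the axisymmetric ring cell [r_inner, r_outer] x dz.
    Using this instead of a flat-mesh approximation such as
    2*pi*r_node*dr*dz removes the systematic density bias near the axis."""
    return np.pi * (r_outer**2 - r_inner**2) * dz

def cell_centered_density(charge_in_cell, r_inner, r_outer, dz):
    """Cell-centered weighting: deposit each particle's full charge to the
    cell containing it, then divide by the exact ring volume."""
    return charge_in_cell / ring_cell_volume(r_inner, r_outer, dz)
```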

  6. A level set method for cupping artifact correction in cone-beam CT

    International Nuclear Information System (INIS)

    Xie, Shipeng; Li, Haibo; Ge, Qi; Li, Chunming

    2015-01-01

    Purpose: To reduce cupping artifacts and improve the contrast-to-noise ratio in cone-beam computed tomography (CBCT). Methods: A level set method is proposed to reduce cupping artifacts in the reconstructed image of CBCT. The authors derive a local intensity clustering property of the CBCT image and define a local clustering criterion function of the image intensities in a neighborhood of each point. This criterion function defines an energy in terms of the level set functions, which represent a segmentation result and the cupping artifacts. The cupping artifacts are estimated as a result of minimizing this energy. Results: The cupping artifacts in CBCT are reduced by an average of 90%. The results indicate that the level set-based algorithm is practical and effective for reducing the cupping artifacts and preserving the quality of the reconstructed image. Conclusions: The proposed method focuses on the reconstructed image without requiring any additional physical equipment, is easily implemented, and provides cupping correction through a single-scan acquisition. The experimental results demonstrate that the proposed method successfully reduces the cupping artifacts.

  7. Evaluation of three methods for retrospective correction of vignetting on medical microscopy images utilizing two open source software tools.

    Science.gov (United States)

    Babaloukas, Georgios; Tentolouris, Nicholas; Liatis, Stavros; Sklavounou, Alexandra; Perrea, Despoina

    2011-12-01

    Correction of vignetting on images obtained by a digital camera mounted on a microscope is essential before applying image analysis. The aim of this study is to evaluate three methods for retrospective correction of vignetting on medical microscopy images and compare them with a prospective correction method. One digital image from each of four different tissues was used, and a vignetting effect was applied to each of these images. The resulting vignetted image was replicated four times, and in each replica a different method for vignetting correction was applied with the Fiji and GIMP software tools. The highest peak signal-to-noise ratio in the comparison of each method to the original image was obtained with the prospective method in all tissues. The morphological filtering method provided the highest peak signal-to-noise ratio value amongst the retrospective methods. The prospective method is suggested as the method of choice for correction of vignetting, and if it is not applicable, then morphological filtering may be suggested as the retrospective alternative. © 2011 The Authors Journal of Microscopy © 2011 Royal Microscopical Society.
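
    The best-performing retrospective method, morphological filtering, can be approximated by estimating the smooth illumination field with a large grey-scale opening and dividing it out. A hedged sketch using SciPy (the structuring-element size and the extra smoothing step are assumptions; the study itself used Fiji and GIMP):

```python
import numpy as np
from scipy import ndimage

def correct_vignetting(img, size=101):
    """Estimate the smooth illumination field with a large grey-scale
    opening (the structuring element must exceed the largest foreground
    object; the size here is an assumption), divide it out, and restore
    the original intensity scale.
    """
    img = np.asarray(img, dtype=float)
    background = ndimage.grey_opening(img, size=(size, size))
    background = ndimage.gaussian_filter(background, sigma=size / 4.0)
    flat = img / np.maximum(background, 1e-6)
    return flat * background.mean()
```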

  8. SPECT quantification: a review of the different correction methods with compton scatter, attenuation and spatial deterioration effects

    International Nuclear Information System (INIS)

    Groiselle, C.; Rocchisani, J.M.; Moretti, J.L.; Dreuille, O. de; Gaillard, J.F.; Bendriem, B.

    1997-01-01

    The improvement of gamma-cameras and of acquisition and reconstruction software opens new perspectives in terms of image quantification in nuclear medicine. In order to meet the challenge, numerous works have been undertaken in recent years to correct for the different physical phenomena that prevent an exact estimation of the radioactivity distribution. The main phenomena that have to be taken into account are scatter, attenuation and resolution. In this work, the authors present the physical basis of each issue, its consequences on quantification and the main methods proposed to correct for it. (authors)

  9. A new image correction method for live cell atomic force microscopy

    International Nuclear Information System (INIS)

    Shen, Y; Sun, J L; Zhang, A; Hu, J; Xu, L X

    2007-01-01

    During live cell imaging via atomic force microscopy (AFM), the interactions between the AFM probe and the membrane yield distorted cell images. In this work, an image correction method was developed based on the force-distance curve and the modified Hertzian model. The normal loading and lateral forces exerted on the cell membrane by the AFM tip were both accounted for during the scanning. Two assumptions were made in modelling based on the experimental measurements: (1) the lateral force on the endothelial cells varied linearly with the height; (2) the cell membrane Young's modulus could be derived from the displacement measurement of a normal force curve. Results have shown that the model could be used to recover up to 30% of the actual cell height depending on the loading force. The accuracy of the model was also investigated with respect to the loading force and the mechanical properties of the cell membrane.
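
    As a rough illustration of the height-recovery step, the sketch below applies the classical Hertz contact model for a spherical tip; the paper's modified Hertzian model, which also accounts for the lateral force, is not reproduced here, and all numerical values are assumptions.

    ```python
    import numpy as np

    def hertz_indentation(force_N, youngs_modulus_Pa, tip_radius_m, poisson=0.5):
        """Indentation depth delta for a sphere on an elastic half-space:
        F = (4/3) * E / (1 - nu^2) * sqrt(R) * delta^(3/2)."""
        return (3.0 * force_N * (1.0 - poisson ** 2)
                / (4.0 * youngs_modulus_Pa * np.sqrt(tip_radius_m))) ** (2.0 / 3.0)

    # Hypothetical values: 1 nN load, 5 kPa membrane modulus, 20 nm tip radius
    measured_height = 2.0e-6                       # metres, from the AFM scan
    delta = hertz_indentation(1e-9, 5e3, 20e-9)    # tip indentation depth
    corrected_height = measured_height + delta     # indentation added back
    ```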

  10. A new image correction method for live cell atomic force microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Y; Sun, J L; Zhang, A; Hu, J; Xu, L X [College of Life Science and Biotechnology, Shanghai Jiao Tong University, Shanghai 200030 (China)

    2007-04-21

    During live cell imaging via atomic force microscopy (AFM), the interactions between the AFM probe and the membrane yield distorted cell images. In this work, an image correction method was developed based on the force-distance curve and the modified Hertzian model. The normal loading and lateral forces exerted on the cell membrane by the AFM tip were both accounted for during the scanning. Two assumptions were made in modelling based on the experimental measurements: (1) the lateral force on the endothelial cells varied linearly with the height; (2) the cell membrane Young's modulus could be derived from the displacement measurement of a normal force curve. Results have shown that the model could be used to recover up to 30% of the actual cell height depending on the loading force. The accuracy of the model was also investigated with respect to the loading force and the mechanical properties of the cell membrane.

  11. Method for determining correction factors induced by irradiation of ionization chamber cables in large radiation field

    International Nuclear Information System (INIS)

    Rodrigues, L.L.C.

    1988-01-01

    A simple method was developed to be suggested to hospital physicists, to be followed during large radiation field dosimetry, in order to evaluate the effects of irradiation of cables, connectors and extension cables and to determine correction factors for each system or geometry. All quality control tests were performed according to the International Electrotechnical Commission recommendations for three clinical dosimeters. Photon and electron irradiation effects for cables, connectors and extension cables were investigated under different experimental conditions by means of measurements of chamber sensitivity to a standard radiation source of 90Sr. The radiation-induced leakage current was also measured for cables, connectors and extension cables irradiated by photons and electrons. All measurements were performed at standard dosimetry conditions. Finally, measurements were performed in large fields. Cable factors and leakage factors were determined by the relation between chamber responses for irradiated and unirradiated cables. (author)

  12. Method and apparatus for producing a porosity log of a subsurface formation corrected for detector standoff

    International Nuclear Information System (INIS)

    Allen, L.S.; Mills, W.R.; Stromswold, D.C.

    1991-01-01

    This paper describes a method and apparatus for producing a porosity log of a subsurface formation corrected for detector standoff. It includes: lowering a logging tool having a neutron source and a neutron detector into the borehole, irradiating the subsurface formation with neutrons from the neutron source as the logging tool is traversed along the subsurface formation, recording die-away signals representing the die-away of nuclear radiation in the subsurface formation as detected by the neutron detector, producing intensity signals representing the variations in intensity of the die-away signals, producing a model of the die-away of nuclear radiation in the subsurface formation having terms varying exponentially in response to borehole, formation and background effects on the die-away of nuclear radiation as detected by the detector

  13. On the evaluation of the correction factor μ (rho', tau') for the periodic pulse method

    International Nuclear Information System (INIS)

    Mueller, J.W.

    1976-01-01

    The inconveniences associated with the purely numerical approach we have chosen to solve some of the problems which arise in connection with the source-pulser method are twofold. On the one hand, there is the trouble of calculating the tables for μ, requiring several nights of computer time. On the other hand, apart from some simple limiting values such as μ = 1 for tau' = 0 or 1, and μ = 1/(0.5 + |0.5 - tau'|) for rho' → 0 (and 0 < tau' < 1), no appropriate analytical form for the correction factor μ of sufficient precision is known for the moment. This drawback, we hope, is partly removed by a tabulation which should cover the whole region of practical interest. The computer programs for both the evaluation of μ and the Monte Carlo simulation are available upon request

  14. Can bias correction and statistical downscaling methods improve the skill of seasonal precipitation forecasts?

    Science.gov (United States)

    Manzanas, R.; Lucero, A.; Weisheimer, A.; Gutiérrez, J. M.

    2018-02-01

    Statistical downscaling methods are popular post-processing tools which are widely used in many sectors to adapt the coarse-resolution biased outputs from global climate simulations to the regional-to-local scale typically required by users. They range from simple and pragmatic Bias Correction (BC) methods, which directly adjust the model outputs of interest (e.g. precipitation) according to the available local observations, to more complex Perfect Prognosis (PP) ones, which indirectly derive local predictions (e.g. precipitation) from appropriate upper-air large-scale model variables (predictors). Statistical downscaling methods have been extensively used and critically assessed in climate change applications; however, their advantages and limitations in seasonal forecasting are not well understood yet. In particular, a key problem in this context is whether they serve to improve the forecast quality/skill of raw model outputs beyond the adjustment of their systematic biases. In this paper we analyze this issue by applying two state-of-the-art BC and two PP methods to downscale precipitation from a multimodel seasonal hindcast in a challenging tropical region, the Philippines. To properly assess the potential added value beyond the reduction of model biases, we consider two validation scores which are not sensitive to changes in the mean (correlation and reliability categories). Our results show that, whereas BC methods maintain or worsen the skill of the raw model forecasts, PP methods can yield significant skill improvement (worsening) in cases for which the large-scale predictor variables considered are better (worse) predicted by the model than precipitation. For instance, PP methods are found to increase (decrease) model reliability in nearly 40% of the stations considered in boreal summer (autumn). Therefore, the choice of a convenient downscaling approach (either BC or PP) depends on the region and the season.
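
    For orientation, the snippet below sketches the simplest member of the BC family, empirical quantile mapping: new model values are pushed through the model climatology's empirical CDF and read off against the observed quantiles. It is a generic illustration, not one of the specific state-of-the-art BC or PP methods assessed in the paper; by construction it adjusts the distribution but cannot add skill beyond the raw forecast, which is exactly the limitation the paper examines.

    ```python
    import numpy as np

    def quantile_map(model_hist, obs_hist, model_new):
        """Empirical quantile mapping of new model values onto observed quantiles."""
        model_sorted = np.sort(model_hist)
        obs_sorted = np.sort(obs_hist)
        # Non-exceedance probability of each new value in the model climatology
        probs = np.searchsorted(model_sorted, model_new) / len(model_sorted)
        probs = np.clip(probs, 0.0, 1.0)
        # Invert the observed empirical CDF at those probabilities
        return np.quantile(obs_sorted, probs)
    ```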

  15. Long GRBs sources population non-uniformity

    Science.gov (United States)

    Arkhangelskaja, Irene

    Long GRBs are observed over a very wide energy band. It is possible to separate two subsets of GRBs according to the presence of a high-energy component (E > 500 MeV). For events of the first type, the energy spectra in the low- and high-energy intervals are similar (as for GRB 021008) and are described by Band, power-law or broken power-law models, so they look like usual bursts without emission in the tens-of-MeV region; for example, the Band spectrum of GRB 080916C covers 6 orders of magnitude. Events of the second type contain an additional high-energy spectral component (for example, GRB 050525B and GRB 090902B). Both types of GRBs have been observed since the beginning of the CGRO mission. Low-energy precursors are typical for bursts of all types. The temporal profiles of the two types of bursts can be similar in the various energy regions during some events and different in others. According to preliminary data analysis, the absence of hard-to-soft evolution in the low-energy band and/or the presence of high-energy precursors for some events are special features of the second class of GRBs, and these facts suggest differences between the sources of the two GRB subsets. Also, analysis of the long-GRB redshift distribution has shown that its shape contradicts that expected for a uniform population of objects in our Metagalaxy, both for the total sample and for samples grouped by the redshift-determination method. This evidence allows a preliminary conclusion about the non-uniformity of the long-GRB source population.

  16. POSSOL, 2-D Poisson Equation Solver for Nonuniform Grid

    International Nuclear Information System (INIS)

    Orvis, W.J.

    1988-01-01

    1 - Description of program or function: POSSOL is a two-dimensional Poisson equation solver for problems with arbitrary non-uniform gridding in Cartesian coordinates. It is an adaptation of the uniform grid PWSCRT routine developed by Schwarztrauber and Sweet at the National Center for Atmospheric Research (NCAR). 2 - Method of solution: POSSOL will solve the Helmholtz equation on an arbitrary, non-uniform grid on a rectangular domain allowing only one type of boundary condition on any one side. It can also be used to handle more than one type of boundary condition on a side by means of a capacitance matrix technique. There are three types of boundary conditions that can be applied: fixed, derivative, or periodic
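
    The sketch below is not the POSSOL code itself (which is two-dimensional and uses a capacitance-matrix technique); it only illustrates, in one dimension, the non-uniform-grid finite-difference discretization on which such solvers rest.

    ```python
    import numpy as np

    def poisson_1d_nonuniform(x, f, u_left, u_right):
        """Solve u'' = f on an arbitrary grid x with Dirichlet end conditions."""
        n = len(x)
        A = np.zeros((n, n))
        b = np.asarray(f, dtype=float).copy()
        A[0, 0] = A[-1, -1] = 1.0
        b[0], b[-1] = u_left, u_right
        for i in range(1, n - 1):
            hm, hp = x[i] - x[i - 1], x[i + 1] - x[i]   # unequal spacings
            A[i, i - 1] = 2.0 / (hm * (hm + hp))
            A[i, i] = -2.0 / (hm * hp)
            A[i, i + 1] = 2.0 / (hp * (hm + hp))
        return np.linalg.solve(A, b)

    # Example: u'' = -1 on a graded grid with u(0) = u(1) = 0
    x = np.linspace(0.0, 1.0, 41) ** 1.5
    u = poisson_1d_nonuniform(x, -np.ones_like(x), 0.0, 0.0)
    ```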

  17. An improved level set method for brain MR images segmentation and bias correction.

    Science.gov (United States)

    Chen, Yunjie; Zhang, Jianwei; Macione, Jim

    2009-10-01

    Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on an observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term in the level set framework. Our method is able to capture bias of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.
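
    A compact sketch of the core alternation (cluster centres versus a smooth multiplicative bias field) is given below. It uses hard memberships and a Gaussian kernel for brevity, whereas the paper embeds soft memberships in a variational level-set energy; the class count and kernel width are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def estimate_bias(image, n_classes=3, sigma=8.0, n_iter=20):
        img = image.astype(float)
        bias = np.ones_like(img)
        centers = np.linspace(img.min(), img.max(), n_classes)
        for _ in range(n_iter):
            # Assign each pixel to the nearest bias-modulated cluster centre
            dist = np.stack([(img - bias * c) ** 2 for c in centers])
            label = dist.argmin(axis=0)
            # Update the centres given the current bias estimate
            for k in range(n_classes):
                mask = label == k
                if mask.any():
                    centers[k] = (img[mask] * bias[mask]).sum() / (bias[mask] ** 2).sum()
            # Smooth bias update from the local least-squares solution
            fit = centers[label]
            num = gaussian_filter(img * fit, sigma)
            den = gaussian_filter(fit ** 2, sigma) + 1e-12
            bias = num / den
        return bias, centers
    ```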

  18. A meshless scheme for incompressible fluid flow using a velocity-pressure correction method

    KAUST Repository

    Bourantas, Georgios

    2013-12-01

    A meshless point collocation method is proposed for the numerical solution of the steady state, incompressible Navier-Stokes (NS) equations in their primitive u-v-p formulation. The flow equations are solved in their strong form using either a collocated or a semi-staggered "grid" configuration. The developed numerical scheme approximates the unknown field functions using the Moving Least Squares approximation. A velocity correction, along with a pressure correction scheme, is applied in the context of the meshless point collocation method. The proposed meshless point collocation (MPC) scheme has the following characteristics: (i) it is a truly meshless method, (ii) there is no need for pressure boundary conditions since no pressure constitutive equation is solved, (iii) it incorporates simplicity and accuracy, (iv) results can be obtained using collocated or semi-staggered "grids", (v) there is no need for the usage of a curvilinear system of coordinates and (vi) it can solve steady and unsteady flows. The lid-driven cavity flow problem, for Reynolds numbers up to 5000, has been considered using both staggered and collocated grid configurations. Subsequently, the Backward-Facing Step (BFS) flow problem was considered for Reynolds numbers up to 800 using a staggered grid. As a final example, the case of a laminar flow in a two-dimensional tube with an obstacle was examined. © 2013 Elsevier Ltd.

  19. Research of beam hardening correction method for CL system based on SART algorithm

    International Nuclear Information System (INIS)

    Cao Daquan; Wang Yaxiao; Que Jiemin; Sun Cuili; Wei Cunfeng; Wei Long

    2014-01-01

    Computed laminography (CL) is a non-destructive testing technique for large objects, especially planar objects. Beam hardening artifacts are widely observed in CL systems and significantly reduce image quality. This study proposed a novel simultaneous algebraic reconstruction technique (SART) based beam hardening correction (BHC) method for the CL system, namely the SART-BHC algorithm in short. The SART-BHC algorithm takes the polychromatic attenuation process into account to formulate the iterative reconstruction update. A novel projection matrix calculation method, different from the conventional cone-beam or fan-beam geometry, was also studied for the CL system. The proposed method was evaluated with simulation data and experimental data, generated using the Monte Carlo simulation toolkit Geant4 and a bench-top CL system, respectively. All projection data were reconstructed with the SART-BHC algorithm and the standard filtered back projection (FBP) algorithm. The reconstructed images show that beam hardening artifacts are greatly reduced with the SART-BHC algorithm compared to the FBP algorithm. The SART-BHC algorithm does not need any prior knowledge about the object or the X-ray spectrum and it can also mitigate the interlayer aliasing. (authors)
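
    For reference, a plain SART update for a linear system A x = p is sketched below; the SART-BHC algorithm of the paper additionally models the polychromatic attenuation process inside this loop, which is not reproduced here.

    ```python
    import numpy as np

    def sart(A, p, n_iter=10, relax=0.5):
        """A: (rays x voxels) system matrix; p: measured projections."""
        x = np.zeros(A.shape[1])
        row_sum = A.sum(axis=1) + 1e-12   # per-ray normalization
        col_sum = A.sum(axis=0) + 1e-12   # per-voxel normalization
        for _ in range(n_iter):
            residual = (p - A @ x) / row_sum
            x += relax * (A.T @ residual) / col_sum
            x = np.clip(x, 0.0, None)     # enforce non-negativity
        return x
    ```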

  20. Phylogeny Reconstruction with Alignment-Free Method That Corrects for Horizontal Gene Transfer.

    Directory of Open Access Journals (Sweden)

    Raquel Bromberg

    2016-06-01

    Advances in sequencing have generated a large number of complete genomes. Traditionally, phylogenetic analysis relies on alignments of orthologs, but defining orthologs and separating them from paralogs is a complex task that may not always be suited to the large datasets of the future. An alternative to traditional, alignment-based approaches are whole-genome, alignment-free methods. These methods are scalable and require minimal manual intervention. We developed SlopeTree, a new alignment-free method that estimates evolutionary distances by measuring the decay of exact substring matches as a function of match length. SlopeTree corrects for horizontal gene transfer, for composition variation and low complexity sequences, and for branch-length nonlinearity caused by multiple mutations at the same site. We tested SlopeTree on 495 bacteria, 73 archaea, and 72 strains of Escherichia coli and Shigella. We compared our trees to the NCBI taxonomy, to trees based on concatenated alignments, and to trees produced by other alignment-free methods. The results were consistent with current knowledge about prokaryotic evolution. We assessed differences in tree topology over different methods and settings and found that the majority of bacteria and archaea have a core set of proteins that evolves by descent. In trees built from complete genomes rather than sets of core genes, we observed some grouping by phenotype rather than phylogeny, for instance with a cluster of sulfur-reducing thermophilic bacteria coming together irrespective of their phyla. The source-code for SlopeTree is available at: http://prodata.swmed.edu/download/pub/slopetree_v1/slopetree.tar.gz.
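
    The toy function below conveys only the central idea of measuring how exact-match counts decay with match length; the real SlopeTree adds the corrections for horizontal gene transfer, composition and branch-length nonlinearity described above, and the k-mer range here is an arbitrary choice.

    ```python
    import numpy as np

    def match_decay_slope(seq_a, seq_b, k_min=8, k_max=16):
        """Fit log(shared k-mer count) against k; a steeper decay
        suggests a larger evolutionary distance."""
        ks, counts = [], []
        for k in range(k_min, k_max + 1):
            kmers_a = {seq_a[i:i + k] for i in range(len(seq_a) - k + 1)}
            kmers_b = {seq_b[i:i + k] for i in range(len(seq_b) - k + 1)}
            shared = len(kmers_a & kmers_b)
            if shared > 0:
                ks.append(k)
                counts.append(shared)
        slope, _ = np.polyfit(ks, np.log(counts), 1)
        return -slope   # larger value = faster decay = more divergence
    ```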

  1. Phylogeny Reconstruction with Alignment-Free Method That Corrects for Horizontal Gene Transfer

    Science.gov (United States)

    Grishin, Nick V.; Otwinowski, Zbyszek

    2016-01-01

    Advances in sequencing have generated a large number of complete genomes. Traditionally, phylogenetic analysis relies on alignments of orthologs, but defining orthologs and separating them from paralogs is a complex task that may not always be suited to the large datasets of the future. An alternative to traditional, alignment-based approaches are whole-genome, alignment-free methods. These methods are scalable and require minimal manual intervention. We developed SlopeTree, a new alignment-free method that estimates evolutionary distances by measuring the decay of exact substring matches as a function of match length. SlopeTree corrects for horizontal gene transfer, for composition variation and low complexity sequences, and for branch-length nonlinearity caused by multiple mutations at the same site. We tested SlopeTree on 495 bacteria, 73 archaea, and 72 strains of Escherichia coli and Shigella. We compared our trees to the NCBI taxonomy, to trees based on concatenated alignments, and to trees produced by other alignment-free methods. The results were consistent with current knowledge about prokaryotic evolution. We assessed differences in tree topology over different methods and settings and found that the majority of bacteria and archaea have a core set of proteins that evolves by descent. In trees built from complete genomes rather than sets of core genes, we observed some grouping by phenotype rather than phylogeny, for instance with a cluster of sulfur-reducing thermophilic bacteria coming together irrespective of their phyla. The source-code for SlopeTree is available at: http://prodata.swmed.edu/download/pub/slopetree_v1/slopetree.tar.gz. PMID:27336403

  2. A direct ROI quantification method for inherent PVE correction: accuracy assessment in striatal SPECT measurements

    Energy Technology Data Exchange (ETDEWEB)

    Vanzi, Eleonora; De Cristofaro, Maria T.; Sotgia, Barbara; Mascalchi, Mario; Formiconi, Andreas R. [University of Florence, Clinical Pathophysiology, Florence (Italy); Ramat, Silvia [University of Florence, Neurological and Psychiatric Sciences, Florence (Italy)

    2007-09-15

    The clinical potential of striatal imaging with dopamine transporter (DAT) SPECT tracers is hampered by the limited capability to recover activity concentration ratios due to partial volume effects (PVE). We evaluated the accuracy of a least squares method that allows retrieval of activity in regions of interest directly from projections (LS-ROI). An Alderson striatal phantom was filled with striatal to background ratios of 6:1, 9:1 and 28:1; the striatal and background ROIs were drawn on a coregistered X-ray CT of the phantom. The activity ratios of these ROIs were derived both with the LS-ROI method and with conventional SPECT EM reconstruction (EM-SPECT). Moreover, the two methods were compared in seven patients with motor symptoms who were examined with N-3-fluoropropyl-2-β-carboxymethoxy-3-β-(4-iodophenyl) (FP-CIT) SPECT, calculating the binding potential (BP). In the phantom study, the activity ratios obtained with EM-SPECT were 3.5, 5.3 and 17.0, respectively, whereas the LS-ROI method resulted in ratios of 6.2, 9.0 and 27.3, respectively. With the LS-ROI method, the BP in the seven patients was approximately 60% higher than with EM-SPECT; a linear correlation between the LS-ROI and the EM estimates was found (r = 0.98, p = 0.03). The LS-ROI PVE correction capability is mainly due to the fact that the ill-conditioning of the LS-ROI approach is lower than that of the EM-SPECT one. The LS-ROI seems to be feasible and accurate in the examination of the dopaminergic system. This approach can be fruitful in monitoring of disease progression and in clinical trials of dopaminergic drugs. (orig.)

  3. The Pierce diode with an external circuit: II, Non-uniform equilibria

    International Nuclear Information System (INIS)

    Lawson, W.S.

    1987-01-01

    The non-uniform (non-linear) equilibria of the classical (short circuit) Pierce diode and the extended (series RLC external circuit) Pierce diode are described theoretically, and explored via computer simulation. It is found that most equilibria are correctly predicted by theory, but that the continuous set of equilibria of the classical Pierce diode at α = 2π are not observed. The stability characteristics of the non-uniform equilibria are also worked out, and are consistent with the simulations. 8 refs., 22 figs., 3 tabs

  4. Electronic Transport as a Driver for Self-Interaction-Corrected Methods

    KAUST Repository

    Pertsova, Anna; Canali, Carlo Maria; Pederson, Mark R.; Rungger, Ivan; Sanvito, Stefano

    2015-01-01

    © 2015 Elsevier Inc. While spintronics often investigates striking collective spin effects in large systems, a very important research direction deals with spin-dependent phenomena in nanostructures, reaching the extreme of a single spin confined in a quantum dot, in a molecule, or localized on an impurity or dopant. The issue considered in this chapter involves taking this extreme to the nanoscale and the quest to use first-principles methods to predict and control the behavior of a few "spins" (down to 1 spin) when they are placed in an interesting environment. Particular interest is on environments for which addressing these systems with external fields and/or electric or spin currents is possible. The realization of such systems, including those that consist of a core of a few transition-metal (TM) atoms carrying a spin, connected and exchanged-coupled through bridging oxo-ligands has been due to work by many experimental researchers at the interface of atomic, molecular and condensed matter physics. This chapter addresses computational problems associated with understanding the behaviors of nano- and molecular-scale spin systems and reports on how the computational complexity increases when such systems are used for elements of electron transport devices. Especially for cases where these elements are attached to substrates with electronegativities that are very different than the molecule, or for coulomb blockade systems, or for cases where the spin-ordering within the molecules is weakly antiferromagnetic, the delocalization error in DFT is particularly problematic and one which requires solutions, such as self-interaction corrections, to move forward. We highlight the intersecting fields of spin-ordered nanoscale molecular magnets, electron transport, and coulomb blockade and highlight cases where self-interaction corrected methodologies can improve our predictive power in this emerging field.

  5. Orbit Determination from Tracking Data of Artificial Satellite Using the Method of Differential Correction

    OpenAIRE

    Byoung-Sun Lee; Jung-Hyun Jo; Sang-Young Park; Kyu-Hong Choi; Chun-Hwey Kim

    1988-01-01

    The differential correction process of determining osculating orbital elements as accurately as possible at a given instant of time from tracking data of an artificial satellite was accomplished. Preliminary orbital elements were used as the initial value of the differential correction procedure, which was iterated until the residual between the real observations (O) and the computed observations (C) was minimized. The tracked satellite was of the NOAA-9 or TIROS-N series. Two types of tracking data were prediction data precomputed fro...
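
    A generic Gauss-Newton differential-correction loop on an arbitrary observation model is sketched below; the actual work iterates osculating elements against NOAA/TIROS-N tracking data with a full orbit propagator, which is beyond this illustration.

    ```python
    import numpy as np

    def differential_correction(model, x0, t_obs, y_obs, n_iter=10, eps=1e-6):
        """Iteratively minimize the O - C residuals by linearizing the model."""
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            computed = model(x, t_obs)
            residual = y_obs - computed            # observed minus computed (O - C)
            # Numerical Jacobian of the computed observations
            J = np.empty((len(t_obs), len(x)))
            for j in range(len(x)):
                dx = np.zeros_like(x)
                dx[j] = eps
                J[:, j] = (model(x + dx, t_obs) - computed) / eps
            x += np.linalg.lstsq(J, residual, rcond=None)[0]
        return x
    ```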

  6. Resistivity Correction Factor for the Four-Probe Method: Experiment II

    Science.gov (United States)

    Yamashita, Masato; Yamaguchi, Shoji; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo

    1989-05-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F can be applied to a system consisting of a disk sample and a four-probe array. Measurements are made on isotropic graphite disks and crystalline ITO films. Factor F can correct the apparent variations of the data and lead to reasonable resistivities and sheet resistances. Here factor F is compared to other correction factors, i.e. F_ASTM and F_JIS.
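
    For context, F plays the same role as the classical infinite-sheet factor pi/ln 2 in the standard in-line four-point-probe formula; the sketch below shows where a tabulated disk-geometry F would be substituted. The numerical example values are assumptions.

    ```python
    import math

    INFINITE_SHEET_F = math.pi / math.log(2)   # ~4.532, valid for an infinite thin sheet

    def sheet_resistance(voltage_V, current_A, F=INFINITE_SHEET_F):
        return F * voltage_V / current_A

    def resistivity(voltage_V, current_A, thickness_m, F=INFINITE_SHEET_F):
        return sheet_resistance(voltage_V, current_A, F) * thickness_m

    # Example: 1 mV drop at 1 mA through a 500 um thick sample; for a finite
    # disk, F would be replaced by the tabulated correction factor.
    rho = resistivity(1e-3, 1e-3, 500e-6)   # ohm-metres
    ```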

  7. Evaluation of scatter limitation correction: a new method of correcting photopenic artifacts caused by patient motion during whole-body PET/CT imaging.

    Science.gov (United States)

    Miwa, Kenta; Umeda, Takuro; Murata, Taisuke; Wagatsuma, Kei; Miyaji, Noriaki; Terauchi, Takashi; Koizumi, Mitsuru; Sasaki, Masayuki

    2016-02-01

    Overcorrection of scatter caused by patient motion during whole-body PET/computed tomography (CT) imaging can induce the appearance of photopenic artifacts in the PET images. The present study aimed to quantify the accuracy of scatter limitation correction (SLC) for eliminating photopenic artifacts. This study analyzed photopenic artifacts in (18)F-fluorodeoxyglucose ((18)F-FDG) PET/CT images acquired from 12 patients and from a National Electrical Manufacturers Association phantom with two peripheral plastic bottles that simulated the human body and arms, respectively. The phantom comprised a sphere (diameter, 10 or 37 mm) containing fluorine-18 solutions with target-to-background ratios of 2, 4, and 8. The plastic bottles were moved 10 cm posteriorly between CT and PET acquisitions. All PET data were reconstructed using model-based scatter correction (SC), no scatter correction (NSC), and SLC, and the presence or absence of artifacts on the PET images was visually evaluated. The SC and SLC images were also semiquantitatively evaluated using standardized uptake values (SUVs). Photopenic artifacts were not recognizable in any NSC and SLC image from all 12 patients in the clinical study. The SUVmax of mismatched SLC PET/CT images were almost equal to those of matched SC and SLC PET/CT images. Applying NSC and SLC substantially eliminated the photopenic artifacts on SC PET images in the phantom study. SLC improved the activity concentration of the sphere for all target-to-background ratios. The highest %errors of the 10 and 37-mm spheres were 93.3 and 58.3%, respectively, for mismatched SC, and 73.2 and 22.0%, respectively, for mismatched SLC. Photopenic artifacts caused by SC error induced by CT and PET image misalignment were corrected using SLC, indicating that this method is useful and practical for clinical qualitative and quantitative PET/CT assessment.

  8. Temperature effect correction for muon flux at the Earth surface: estimation of the accuracy of different methods

    International Nuclear Information System (INIS)

    Dmitrieva, A N; Astapov, I I; Kovylyaeva, A A; Pankova, D V

    2013-01-01

    Correction of the muon flux at the Earth's surface for the temperature effect with the help of two simple methods is considered. In the first method, it is assumed that the major part of muons is generated at some effective generation level, whose altitude depends on the temperature profile of the atmosphere. In the second method, the dependence of the muon flux on the mass-averaged atmosphere temperature is considered. The methods were tested with the data of the muon hodoscope URAGAN (Moscow, Russia). The difference between data corrected with the help of altitude-differential temperature coefficients and the simplified methods does not exceed 1-1.5%, so the latter may be used to introduce a fast preliminary correction.
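
    A one-line version of the second (mass-averaged temperature) method is sketched below; the temperature coefficient and reference temperature are detector-specific, and the values used here are assumptions.

    ```python
    def correct_muon_rate(rate, t_mass_avg_K, t_ref_K=220.0, alpha_per_K=-0.002):
        """Remove the temperature effect: N0 = N / (1 + alpha * (T - T_ref))."""
        return rate / (1.0 + alpha_per_K * (t_mass_avg_K - t_ref_K))
    ```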

  9. Comparative study of chance coincidence correction in measuring 223Ra and 224Ra by delay coincidence method

    International Nuclear Information System (INIS)

    Yan Yongjun; Huang Derong; Zhou Jianliang; Qiu Shoukang

    2013-01-01

    The delay coincidence measurement of 220Rn and 219Rn has been proved to be a valid indirect method for measuring 224Ra and 223Ra extracted from natural water, which can provide valuable information on estuarine/ocean mixing, submarine groundwater discharge, and water/soil interactions. In practical operation chance coincidence correction must be considered, mostly by Moore's correction method, but Moore's and Giffin's methods were incomplete in some ways. In this paper a modification of Moore's method (method 1) and a new chance coincidence correction formula (method 2) are provided. Experimental results are presented to demonstrate the conclusions. The results show that precision is improved when the counting rate is less than 70 min⁻¹. (authors)

  10. Iterative correction method for shift-variant blurring caused by collimator aperture in SPECT

    International Nuclear Information System (INIS)

    Ogawa, Koichi; Katsu, Haruto

    1996-01-01

    A collimation system in single photon emission computed tomography (SPECT) induces blurring in reconstructed images. The blurring varies with the collimator aperture, which is determined by the shape of the hole (its diameter and length), and with the distance between the collimator surface and the object; the blurring therefore has shift-variant properties. This paper presents a new iterative method for correcting the shift-variant blurring. The method estimates the ratio of the 'ideal projection value' to the 'measured projection value' at each sample point. The term 'ideal projection value' means the number of photons which enter the hole perpendicular to the collimator surface, and the term 'measured projection value' means the number of photons which enter the hole at acute angles to the collimator aperture axis. If the estimation is accurate, the ideal projection value can be obtained as the product of the measured projection value and the estimated ratio. The accuracy of the estimation is improved iteratively by comparing the measured projection value with a weighted summation of several estimated projection values. The simulation results showed that spatial resolution was improved without amplification of artifacts due to statistical noise. (author)
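
    A schematic version of the ratio-update idea is given below, with the shift-variant blur left as an abstract callable; the paper's collimator-aperture model and weighting scheme are not reproduced.

    ```python
    import numpy as np

    def ratio_deblur(measured, blur, n_iter=20):
        """measured: blurred projection (array); blur: callable applying the
        shift-variant blurring to an estimate of the ideal projection."""
        estimate = measured.astype(float).copy()
        for _ in range(n_iter):
            reblurred = blur(estimate) + 1e-12
            ratio = measured / reblurred      # estimated ideal-to-measured ratio
            estimate = np.clip(estimate * ratio, 0.0, None)
        return estimate
    ```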

  11. An inter-crystal scatter correction method for DOI PET image reconstruction

    International Nuclear Information System (INIS)

    Lam, Chih Fung; Hagiwara, Naoki; Obi, Takashi; Yamaguchi, Masahiro; Yamaya, Taiga; Murayama, Hideo

    2006-01-01

    New positron emission tomography (PET) scanners utilize depth-of-interaction (DOI) information to improve image resolution, particularly at the edge of the field-of-view, while maintaining high detector sensitivity. However, the inter-crystal scatter (ICS) effect cannot be neglected in DOI scanners due to the use of smaller crystals. ICS is the phenomenon wherein a single incident gamma photon produces multiple scintillations due to Compton scatter in the detecting crystals. In the case of ICS, only one scintillation position is approximated for detectors with Anger-type logic calculation. This causes an error in position detection, and ICS worsens the image contrast, particularly for smaller hotspots. In this study, we propose to model an ICS probability by using a Monte Carlo simulator. The probability is given as a statistical relationship between the gamma photon first interaction crystal pair and the detected crystal pair. It is then used to improve the system matrix of a statistical image reconstruction algorithm, such as maximum likelihood expectation maximization (ML-EM), in order to correct for the position error caused by ICS. We apply the proposed method to simulated data of the jPET-D4, which is a four-layer DOI PET being developed at the National Institute of Radiological Sciences. Our computer simulations show that image contrast is recovered successfully by the proposed method. (author)

  12. Single photon emission computed tomography using a regularizing iterative method for attenuation correction

    International Nuclear Information System (INIS)

    Soussaline, Francoise; Cao, A.; Lecoq, G.

    1981-06-01

    An analytically exact solution to the attenuated tomographic operator is proposed. This technique, called the Regularizing Iterative Method (RIM), belongs to the iterative class of procedures where a priori knowledge can be introduced on the evaluation of the size and shape of the activity domain to be reconstructed, and on the exact attenuation distribution. The relaxation factor used is chosen so that it leads to fast convergence and provides noise filtering after a small number of iterations. The effectiveness of such a method was tested in the Single Photon Emission Computed Tomography (SPECT) reconstruction problem, with the goal of precise correction for attenuation before quantitative study. Its implementation involves the use of a rotating scintillation camera based SPECT detector connected to a minicomputer system. Mathematical simulations of cylindrical uniformly attenuated phantoms indicate that in the range of the a priori calculated relaxation factor a fast converging solution can always be found, with a (contrast) accuracy of the order of 0.2 to 4% depending on whether numerical errors and noise are taken into account. The sensitivity of the RIM algorithm to errors in the size of the reconstructed object and in the value of the attenuation coefficient μ was studied using the same simulation data. Extreme variations of ±15% in these parameters will lead to errors of the order of ±20% in the quantitative results. Physical phantoms representing a variety of geometrical situations were also studied.

  13. A forward bias method for lag correction of an a-Si flat panel detector

    International Nuclear Information System (INIS)

    Starman, Jared; Tognina, Carlo; Partain, Larry; Fahrig, Rebecca

    2012-01-01

    Purpose: Digital a-Si flat panel (FP) x-ray detectors can exhibit detector lag, or residual signal, of several percent that can cause ghosting in projection images or severe shading artifacts, known as the radar artifact, in cone-beam computed tomography (CBCT) reconstructions. A major contributor to detector lag is believed to be defect states, or traps, in the a-Si layer of the FP. Software methods to characterize and correct for the detector lag exist, but they may make assumptions such as system linearity and time invariance, which may not be true. The purpose of this work is to investigate a new hardware based method to reduce lag in an a-Si FP and to evaluate its effectiveness at removing shading artifacts in CBCT reconstructions. The feasibility of a novel, partially hardware based solution is also examined. Methods: The proposed hardware solution for lag reduction requires only a minor change to the FP. For pulsed irradiation, the proposed method inserts a new operation step between the readout and data collection stages. During this new stage the photodiode is operated in a forward bias mode, which fills the defect states with charge. A Varian 4030CB panel was modified to allow for operation in the forward bias mode. The contrast of residual lag ghosts was measured for lag frames 2 and 100 after irradiation ceased for standard and forward bias modes. Detector step response, lag, SNR, modulation transfer function (MTF), and detective quantum efficiency (DQE) measurements were made with standard and forward bias firmware. CBCT data of pelvic and head phantoms were also collected. Results: Overall, the 2nd and 100th detector lag frame residual signals were reduced 70%-88% using the new method. SNR, MTF, and DQE measurements show a small decrease in collected signal and a small increase in noise. The forward bias hardware successfully reduced the radar artifact in the CBCT reconstruction of the pelvic and head phantoms by 48%-81%. Conclusions: Overall, the

  14. Accuracy of radiotherapy dose calculations based on cone-beam CT: comparison of deformable registration and image correction based methods

    Science.gov (United States)

    Marchant, T. E.; Joshi, K. D.; Moore, C. J.

    2018-03-01

    Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT, and direct correction of CBCT image values are two methods proposed to allow heterogeneity corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT and several relevant dose metrics for target and OAR volumes were calculated. Accuracy of CBCT based dose metrics was determined using an ‘override ratio’ method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, use of deformed CT leads to slightly larger standard deviation of dose metric error than shading corrected CBCT with more dose metric errors greater than 2% observed (7% versus 1%).

  15. Long-term results of forearm lengthening and deformity correction by the Ilizarov method.

    Science.gov (United States)

    Orzechowski, Wiktor; Morasiewicz, Leszek; Krawczyk, Artur; Dragan, Szymon; Czapiński, Jacek

    2002-06-30

    Background. Shortening and deformity of the forearm is most frequently caused by congenital disorders or posttraumatic injury. Given its complex anatomy and biomechanics, the forearm is clearly the most difficult segment for lengthening and deformity correction. Material and methods. We analyzed 16 patients with shortening and deformity of the forearm, treated surgically using the Ilizarov method in our Department from 1989 to 2001. In 9 cases one-stage surgery was sufficient, while the remaining 7 patients underwent 2-5 stages of treatment. A total of 31 surgical operations were performed. The extent of forearm shortening ranged from 1.5 to 14.5 cm (5-70%). We developed a new fixator based on Schanz half-pins. Results. The length of forearm lengthening per operative stage averaged 2.35 cm. The proportion of lengthening ranged from 6% to 48%, with an average of 18.3%. The mean lengthening index was 48.15 days/cm. The per-patient rate of complications was 88%, compared to 45% per stage of treatment; the most common complications were limited rotational mobility and abnormal consolidation of the regenerated bone. Conclusions. Despite the high complication rate, the Ilizarov method is the method of choice for patients with forearm shortenings and deformities. Treatment is particularly indicated in patients with shortening caused by disproportionate length of the ulnar and forearm bones. Treatment should be managed so as to cause the least possible damage to arm function, even at the cost of limited lengthening. Our new stabilizer based on Schanz half-pins makes it possible to preserve forearm rotation.

  16. Examination of attenuation correction method for cerebral blood flow SPECT using MR imaging

    International Nuclear Information System (INIS)

    Mizuno, Takashi; Takahashi, Masaaki

    2009-01-01

    The authors developed software for attenuation correction using MR imaging (MRAC) (Toshiba Med. System Engineer.) based on the idea that the precision of AC could be improved by using the head contour in MRI T2-weighted images (T2WI) obtained before 123I-iofetamine (IMP) single photon emission computed tomography (SPECT) for cerebral blood flow (CBF) measurement. In the present study, this MRAC was retrospectively evaluated by comparison with the previous standard AC methods derived from transmission CT (TCT) and X-ray CT, which overcome the problem of the sinogram-threshold Chang method but still have cost and patient-exposure issues. MRAC was essentially performed in the Toshiba GMS5500/PI processor, where 3D registration was conducted between the SPECT and MRI images of the same patient. The gamma camera for 123I-IMP SPECT and 99mTcO4- TCT was a Toshiba 3-detector GCA9300A equipped with the above processor for MRAC and with a low energy high resolution (LEHR) fan beam collimator. The machines for MRI and CT were a Siemens-Asahi Meditech MAGNETOM Symphony 1.5T and a SOMATOM plus4, respectively. MRAC was examined in 8 patients with T1WI, TCT and SPECT images, and in 18 with T2WI, CT and SPECT images. Evaluation was made by comparison of the attenuation coefficients (μ) given by the 4 methods. As a result, the present MRAC was found to be closer to AC by TCT and CT than the Chang method, since MRAC, owing to exact imaging of the head contour, was independent of the radiation count, and it was thought to be useful for improving the precision of CBF SPECT. (K.T.)

  17. Properties of multilayer nonuniform holographic structures

    International Nuclear Information System (INIS)

    Pen, E F; Rodionov, Mikhail Yu

    2010-01-01

    Experimental results and an analysis of the properties of multilayer nonuniform holographic structures formed in photopolymer materials are presented. The theoretical hypothesis is confirmed that the angular selectivity characteristics of the considered structures have a set of local maxima, whose number and width are determined by the thicknesses of the intermediate layers and deep holograms, and that the envelope of the maxima coincides with the selectivity contour of a single holographic array. It is also experimentally shown that hologram nonuniformities substantially distort the shapes of the selectivity characteristics: they become asymmetric, the local maxima differ in size, and the depths of the local minima are reduced. The modelling results are brought into agreement with the experimental data by an appropriate choice of the nonuniformity parameters. (imaging and image processing. holography)

  18. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    Science.gov (United States)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  19. ELECTRONIC CIRCUIT BOARDS NON-UNIFORM COOLING SYSTEM MODEL

    Directory of Open Access Journals (Sweden)

    D. V. Yevdulov

    2016-01-01

    The paper considers a mathematical model of non-uniform cooling of electronic circuit boards. The block diagram of the system implementing this approach, the method of calculating the electronic board temperature field, and the principle of optimizing its thermal performance are presented. In the considered scheme the main heat removal from the electronic board is produced by the radiator system, while additional cooling of the most temperature-sensitive components is produced by thermoelectric batteries. Two-dimensional temperature fields of the electronic board under uniform and non-uniform cooling are given and compared. As follows from the calculation results, when uniform overall cooling of the electronic unit is used, energy is wasted on cooling those parts of the board whose temperature is within the acceptable range even without the cooling system. This approach increases the required cooling capacity of the thermoelectric batteries beyond the desired values, which largely reduces the efficiency of the heat removal system. Using non-uniform local heat removal for electronic board cooling eliminates this disadvantage. The obtained dependences show that in this case the energy required to create a given temperature is smaller than when using common uniform cooling. In this approach the temperature field of the electronic board is more uniform and the cooling is more efficient.

  20. Nonuniformities in organic liquid ionization calorimeters

    International Nuclear Information System (INIS)

    Wenzel, W.A.

    1989-06-01

    Hermeticity and uniformity in SSC calorimeter designs are compromised by structure and modularity. Some of the consequences of the cryogenic needs of liquid argon calorimetry are relatively well known. If the active medium is an organic liquid (TMP, TMS, etc.), a large number of independent liquid volumes is needed for safety and for rapid liquid exchange to eliminate local contamination. Modular construction ordinarily simplifies fabrication, assembly, handling and preliminary testing at the price of additional walls, other dead regions and many nonuniformities. Here we examine ways of minimizing the impact of some generic nonuniformities on the quality of calorimeter performance. 6 refs., 7 figs

  1. Corrections in the gold foil activation method for determination of neutron beam density

    DEFF Research Database (Denmark)

    Als-Nielsen, Jens Aage

    1967-01-01

    A finite foil thickness and deviation in the cross section from the 1/v law imply corrections in the determination of neutron beam densities by means of foil activation. These corrections, which depend on the neutron velocity distribution, have been examined in general and are given in a specific...

  2. Effect of inter-crystal scatter on estimation methods for random coincidences and subsequent correction

    International Nuclear Information System (INIS)

    Torres-Espallardo, I; Spanoudaki, V; Ziegler, S I; Rafecas, M; McElroy, D P

    2008-01-01

    Random coincidences can contribute substantially to the background in positron emission tomography (PET). Several estimation methods are being used for correcting them. The goal of this study was to investigate the validity of techniques for random coincidence estimation, with various low-energy thresholds (LETs). Simulated singles list-mode data of the MADPET-II small animal PET scanner were used as input. The simulations have been performed using the GATE simulation toolkit. Several sources with different geometries have been employed. We evaluated the number of random events using three methods: delayed window (DW), singles rate (SR) and time histogram fitting (TH). Since the GATE simulations allow random and true coincidences to be distinguished, a comparison between the number of random coincidences estimated using the standard methods and the number obtained using GATE was performed. An overestimation in the number of random events was observed using the DW and SR methods. This overestimation decreases for LETs higher than 255 keV. It is additionally reduced when the single events which have undergone a Compton interaction in crystals before being detected are removed from the data. These two observations lead us to infer that the overestimation is due to inter-crystal scatter. The effect of this mismatch in the reconstructed images is important for quantification because it leads to an underestimation of activity. This was shown using a hot-cold-background source with 3.7 MBq total activity in the background region and a 1.59 MBq total activity in the hot region. For both 200 keV and 400 keV LET, an overestimation of random coincidences for the DW and SR methods was observed, resulting in approximately 1.5% or more (at 200 keV LET: 1.7% for DW and 7% for SR) and less than 1% (at 400 keV LET: both methods) underestimation of activity within the background region. In almost all cases, images obtained by compensating for random events in the reconstruction
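
    For reference, the singles-rate (SR) estimate compared above follows the standard relation R_ij = 2·tau·S_i·S_j for a coincidence window tau, sketched here with assumed example numbers.

    ```python
    def randoms_rate(singles_i_cps, singles_j_cps, coincidence_window_s):
        """Expected random coincidence rate: R_ij = 2 * tau * S_i * S_j."""
        return 2.0 * coincidence_window_s * singles_i_cps * singles_j_cps

    # Example: two detectors at 10 kcps each with a 4 ns window
    r = randoms_rate(1e4, 1e4, 4e-9)   # 0.8 random coincidences per second
    ```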

  3. A new method for evaluation and correction of thermal reactor power and present operational applications

    International Nuclear Information System (INIS)

    Langenstein, M.; Streit, S.; Laipple, B.; Eitschberger, H.

    2005-01-01

    The determination of the thermal reactor power is traditionally done by heat balance: 1) for a boiling water reactor (BWR), at the interface of the reactor control volume and the heat cycle; 2) for a pressurised-water reactor (PWR), at the interface of the steam generator control volume and the turbine island on the secondary side. The uncertainty of these traditional methods is not easy to determine and can be in the range of several percent. Technical and legal regulations (e.g. 10CFR50) cover an estimated instrumentation error of up to 2% by increasing the design thermal reactor power for emergency analysis to 102% of the licensed thermal reactor power. Basically the licensee has the duty to warrant at any time operation inside the analyzed region for thermal reactor power. This is normally done by keeping the indicated reactor power at the licensed 100% value. The better way is to use a method which allows a continuous warranty evaluation. The quantification of the level of fulfilment of this warranty is only achievable by a method which: 1) is independent of single measurement accuracies; 2) results in a certified quality of single process values and of the total heat cycle analysis; 3) leads to complete results including the 2-sigma deviation, especially for thermal reactor power. Here this method, which is called 'process data reconciliation based on VDI 2048 guideline', is presented [1, 2]. This method allows the true process parameters to be determined with a statistical probability of 95%, by considering closed material, mass and energy balances following the Gaussian correction principle. The amount of redundant process information and the complexity of the process improve the final results. This represents the most probable state of the process with minimized uncertainty according to VDI 2048. Hence, calibration and control of the thermal reactor power are possible with low effort but high accuracy and independent of single measurement accuracies. Furthermore, VDI 2048
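
    A minimal Gaussian reconciliation step in the spirit of VDI 2048 is sketched below: redundant measurements m with covariance V are corrected so that the linear balance constraints A x = 0 hold exactly. The three-flow example is hypothetical.

    ```python
    import numpy as np

    def reconcile(m, V, A):
        """Minimize (x - m)^T V^-1 (x - m) subject to A x = 0."""
        m = np.asarray(m, dtype=float)
        lam = np.linalg.solve(A @ V @ A.T, A @ m)                # Lagrange multipliers
        x = m - V @ A.T @ lam                                    # reconciled estimates
        Vx = V - V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ V)   # reduced covariance
        return x, Vx

    # Example: three flow meters around a junction, f1 + f2 - f3 = 0
    m = np.array([10.2, 5.1, 15.0])
    V = np.diag([0.04, 0.01, 0.09])
    A = np.array([[1.0, 1.0, -1.0]])
    x, Vx = reconcile(m, V, A)   # x[0] + x[1] == x[2] after reconciliation
    ```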

  4. Characterizing the marker-dye correction for Gafchromic(®) EBT2 film: a comparison of three analysis methods.

    Science.gov (United States)

    McCaw, Travis J; Micka, John A; Dewerd, Larry A

    2011-10-01

    Gafchromic(®) EBT2 film has a yellow marker dye incorporated into the active layer of the film that can be used to correct the film response for small variations in thickness. This work characterizes the effect of the marker-dye correction on the uniformity and uncertainty of dose measurements with EBT2 film. The effect of variations in time postexposure on the uniformity of EBT2 is also investigated. EBT2 films were used to measure the flatness of a (60)Co field to provide a high-spatial resolution evaluation of the film uniformity. As a reference, the flatness of the (60)Co field was also measured with Kodak EDR2 films. The EBT2 films were digitized with a flatbed document scanner 24, 48, and 72 h postexposure, and the images were analyzed using three methods: (1) the manufacturer-recommended marker-dye correction, (2) an in-house marker-dye correction, and (3) a net optical density (OD) measurement in the red color channel. The field flatness was calculated from orthogonal profiles through the center of the field using each analysis method, and the results were compared with the EDR2 measurements. Uncertainty was propagated through a dose calculation for each analysis method. The change in the measured field flatness for increasing times postexposure was also determined. Both marker-dye correction methods improved the field flatness measured with EBT2 film relative to the net OD method, with a maximum improvement of 1% using the manufacturer-recommended correction. However, the manufacturer-recommended correction also resulted in a dose uncertainty an order of magnitude greater than the other two methods. The in-house marker-dye correction lowered the dose uncertainty relative to the net OD method. The measured field flatness did not exhibit any unidirectional change with increasing time postexposure and showed a maximum change of 0.3%. The marker dye in EBT2 can be used to improve the response uniformity of the film. Depending on the film analysis method used
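
    For clarity, the baseline net-OD analysis mentioned above reduces to the following relation between scanner pixel values; the 16-bit example values are assumptions.

    ```python
    import numpy as np

    def net_od(pv_exposed, pv_unexposed):
        """Net optical density from red-channel pixel values:
        netOD = log10(PV_unexposed / PV_exposed)."""
        return np.log10(pv_unexposed / pv_exposed)

    od = net_od(32000.0, 48000.0)   # hypothetical 16-bit scanner values
    ```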

  5. Exploring field-of-view non-uniformities produced by a hand-held spectroradiometer

    Directory of Open Access Journals (Sweden)

    Tamir Caras

    2011-01-01

    The shape of a spectroradiometer’s field of view (FOV) affects the way spectral measurements are acquired. Knowing this property is a prerequisite for the correct use of the spectrometer. If the substrate is heterogeneous, the ability to accurately know what is being measured depends on knowing the FOV location, shape, spectral and spatial sensitivity. The GER1500 is a hand-held spectrometer with a fixed lens light entry slit and has a laser guide that allows control over the target by positioning the entire unit. In the current study, the FOV of the GER1500 was mapped and analysed. The spectral and spatial non-uniformities of the FOV were examined and were found to be spectrally independent. The relationship between the FOV and the built-in laser guide was tested and found to have a linear displacement dependent on the distance to the target. This allows an accurate prediction of the actual FOV position. A correction method to improve the agreement between the expected and measured reflectance over heterogeneous targets was developed and validated. The methods described are applicable and may be of use with other hand-held spectroradiometers.

  6. [Posttraumatic torsional deformities of the forearm: Methods of measurement and decision guidelines for correction].

    Science.gov (United States)

    Blossey, R D; Krettek, C; Liodakis, E

    2018-03-01

    Forearm fractures are common in all age groups. Even if the adjacent joints are not directly involved, these fractures have an intra-articular character. One of the most common complications of these injuries is a painful limitation of the range of motion, especially of pronation and supination. This is often due to an underdiagnosed torsional deformity; however, in recent years new methods have been developed to make these torsional differences visible and quantifiable through the use of sectional imaging. The principle of measurement corresponds to that of torsion measurement of the lower limbs. Computed tomography (CT) or magnetic resonance imaging (MRI) scans are created at defined heights. By searching for certain landmarks, torsional angles are measured in relation to a defined reference line. A new alternative is the use of 3D reformation models. The presence of a torsional deformity, especially of the radius, leads to an impairment of the pronation and supination of the forearm. In the presence of torsional deformities, radiological measurements can help to decide whether an operation is needed or not. Unlike the lower limbs, there are still no uniform cut-off values as to when a correction is indicated. Decisions must be made together with the patient by taking the clinical and radiological results into account.

  7. Method for the depth corrected detection of ionizing events from a co-planar grids sensor

    Science.gov (United States)

    De Geronimo, Gianluigi [Syosset, NY; Bolotnikov, Aleksey E [South Setauket, NY; Carini, Gabriella [Port Jefferson, NY

    2009-05-12

    A method for the detection of ionizing events utilizing a co-planar grids sensor comprising a semiconductor substrate, cathode electrode, collecting grid and non-collecting grid. The semiconductor substrate is sensitive to ionizing radiation. A voltage less than 0 Volts is applied to the cathode electrode. A voltage greater than the voltage applied to the cathode is applied to the non-collecting grid. A voltage greater than the voltage applied to the non-collecting grid is applied to the collecting grid. The signals from the collecting grid and the non-collecting grid are summed and subtracted, creating a sum and a difference, respectively. The difference and the sum are divided, creating a ratio. A gain coefficient factor for each depth (the distance between the ionizing event and the collecting grid) is determined, whereby the difference between the collecting electrode and the non-collecting electrode multiplied by the corresponding gain coefficient is the depth-corrected energy of an ionizing event. Therefore, the energy of each ionizing event is the difference between the collecting grid and the non-collecting grid multiplied by the corresponding gain coefficient. The depth of the ionizing event can also be determined from the ratio.
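
    The readout logic described above can be summarized in a few lines; the gain-versus-ratio table below is hypothetical, standing in for the per-detector calibration the patent describes.

    ```python
    import numpy as np

    ratio_grid = np.linspace(-1.0, 1.0, 11)        # sampled (diff / sum) values
    gain_table = 1.0 + 0.05 * (1.0 - ratio_grid)   # assumed calibration curve

    def corrected_energy(collecting, noncollecting):
        diff = collecting - noncollecting
        total = collecting + noncollecting
        ratio = diff / total                        # encodes the interaction depth
        gain = np.interp(ratio, ratio_grid, gain_table)
        return gain * diff                          # depth-corrected energy
    ```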

  8. Reliability Analysis of a Composite Wind Turbine Blade Section Using the Model Correction Factor Method: Numerical Study and Validation

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimirov; Friis-Hansen, Peter; Berggreen, Christian

    2013-01-01

    by the composite failure criteria. Each failure mode has been considered in a separate component reliability analysis, followed by a system analysis which gives the total probability of failure of the structure. The Model Correction Factor method used in connection with FORM (First-Order Reliability Method) proved...

  9. Statistical signal processing for gamma spectrometry: application for a pileup correction method

    International Nuclear Information System (INIS)

    Trigano, T.

    2005-12-01

The main objective of gamma spectrometry is to characterize the radioactive elements of an unknown source by studying the energy of the emitted photons. When a photon interacts with a detector, its energy is converted into an electrical pulse. The histogram obtained by collecting the energies can be used to identify radioactive elements and measure their activity. However, at high counting rates, perturbations which are due to the stochastic aspect of the temporal signal can cripple the identification of the radioactive elements. More specifically, since the detector has a finite resolution, close arrival times of photons, which can be modeled as a homogeneous Poisson process, cause pile-ups of individual pulses. This phenomenon distorts energy spectra by introducing multiple fake spikes and artificially prolonging the Compton continuum, which can mask spikes of low intensity. The objective of this thesis is to correct the distortion caused by the pile-up phenomenon in the energy spectra. Since the shape of photonic pulses depends on many physical parameters, we consider this problem in a nonparametric framework. By introducing an adapted model based on two marked point processes, we establish a nonlinear relation between the probability measure associated to the observations and the probability density function we wish to estimate. This relation is derived both for continuous and for discrete time signals, and therefore can be used on a large set of detectors and from an analog or digital point of view. It also provides a framework to this problem, which can be considered as a problem of nonlinear density deconvolution and nonparametric density estimation from indirect measurements. Using these considerations, we propose an estimator obtained by direct inversion. We show that this estimator is consistent and almost achieves the usual rate of convergence obtained in classical nonparametric density estimation in the L2 sense. We have applied our method to a set of

  10. SU-F-T-584: Investigating Correction Methods for Ion Recombination Effects in OCTAVIUS 1000 SRS Measurements

    International Nuclear Information System (INIS)

    Knill, C; Snyder, M; Rakowski, J; J, Burmeister; Zhuang, L; Matuszak, M

    2016-01-01

Purpose: PTW’s Octavius 1000 SRS array performs IMRT QA measurements with liquid filled ionization chambers (LICs). Collection efficiencies of LICs have been shown to change during IMRT delivery as a function of LINAC pulse frequency and pulse dose, which affects QA results. In this study, two methods were developed to correct changes in collection efficiencies during IMRT QA measurements, and the effects of these corrections on QA pass rates were compared. Methods: For the first correction, Matlab software was developed that calculates pulse frequency and pulse dose for each detector, using measurement and DICOM RT Plan files. Pulse information is converted to collection efficiency and measurements are corrected by multiplying detector dose by ratios of calibration to measured collection efficiencies. For the second correction, MU/min in daily 1000 SRS calibration was chosen to match average MU/min of the VMAT plan. The usefulness of the derived corrections was evaluated using 6MV and 10FFF SBRT RapidArc plans delivered to the OCTAVIUS 4D system using a TrueBeam equipped with an HD-MLC. Effects of the two corrections on QA results were examined by performing 3D gamma analysis comparing predicted to measured dose, with and without corrections. Results: After complex Matlab corrections, average 3D gamma pass rates improved by [0.07%,0.40%,1.17%] for 6MV and [0.29%,1.40%,4.57%] for 10FFF using [3%/3mm,2%/2mm,1%/1mm] criteria. Maximum changes in gamma pass rates were [0.43%,1.63%,3.05%] for 6MV and [1.00%,4.80%,11.2%] for 10FFF using [3%/3mm,2%/2mm,1%/1mm] criteria. On average, pass rates of simple daily calibration corrections were within 1% of complex Matlab corrections. Conclusion: Ion recombination effects can potentially be clinically significant for OCTAVIUS 1000 SRS measurements, especially for higher pulse dose unflattened beams when using tighter gamma tolerances. Matching daily 1000 SRS calibration MU/min to average planned MU/min is a simple correction that
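
    A minimal sketch of the first (pulse-information-based) correction, under an assumed placeholder model for LIC collection efficiency; real efficiencies are derived from measured pulse dose and pulse frequency, not this toy formula.

```python
# Correct each detector reading by the ratio of collection efficiencies at
# calibration and at measurement, as described above. The efficiency model
# and its constants are assumptions of this sketch.

def collection_efficiency(pulse_dose_mgy, pulse_freq_hz,
                          k_dose=0.02, k_freq=1e-5):
    # placeholder: efficiency falls with increasing pulse dose and frequency
    return 1.0 / (1.0 + k_dose * pulse_dose_mgy + k_freq * pulse_freq_hz)

def corrected_dose(measured_dose, meas_pulse_dose, meas_pulse_freq,
                   cal_pulse_dose, cal_pulse_freq):
    f_meas = collection_efficiency(meas_pulse_dose, meas_pulse_freq)
    f_cal = collection_efficiency(cal_pulse_dose, cal_pulse_freq)
    return measured_dose * f_cal / f_meas   # dose * (calibration/measured)
```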

  11. Influence of the partial volume correction method on (18)F-fluorodeoxyglucose brain kinetic modelling from dynamic PET images reconstructed with resolution model based OSEM.

    Science.gov (United States)

    Bowen, Spencer L; Byars, Larry G; Michel, Christian J; Chonde, Daniel B; Catana, Ciprian

    2013-10-21

    Kinetic parameters estimated from dynamic (18)F-fluorodeoxyglucose ((18)F-FDG) PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For ordered subsets expectation maximization (OSEM), image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting (18)F-FDG dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation geometric transfer matrix PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in cerebral metabolic rate of glucose estimates, although by less than 5% in

  12. [A new method to orthodontically correct dental occlusal plane canting: wave-shaped arch].

    Science.gov (United States)

    Zheng, X; Hu, X X; Ma, N; Chen, X H

    2017-02-18

    ; after treatment the angles were from -0.17° to 2.57° with a median of 1.87°, the decrease of the angles between AOP and BBP after treatment ranged from 1.08° to 4.15° with a median of 2.21°. Paired Wilcoxon test P was 0.000. The wave-shaped arch can be used independently or in combination with other treatment methods, which can take advantage of left and right interactive anchorage to correct AOPC effectively, so it has certain application value in clinical practice.

  13. Nonuniform code concatenation for universal fault-tolerant quantum computing

    Science.gov (United States)

    Nikahd, Eesa; Sedighi, Mehdi; Saheb Zamani, Morteza

    2017-09-01

    Using transversal gates is a straightforward and efficient technique for fault-tolerant quantum computing. Since transversal gates alone cannot be computationally universal, they must be combined with other approaches such as magic state distillation, code switching, or code concatenation to achieve universality. In this paper we propose an alternative approach for universal fault-tolerant quantum computing, mainly based on the code concatenation approach proposed in [T. Jochym-O'Connor and R. Laflamme, Phys. Rev. Lett. 112, 010505 (2014), 10.1103/PhysRevLett.112.010505], but in a nonuniform fashion. The proposed approach is described based on nonuniform concatenation of the 7-qubit Steane code with the 15-qubit Reed-Muller code, as well as the 5-qubit code with the 15-qubit Reed-Muller code, which lead to two 49-qubit and 47-qubit codes, respectively. These codes can correct any arbitrary single physical error with the ability to perform a universal set of fault-tolerant gates, without using magic state distillation.

  14. N3LO corrections to jet production in deep inelastic scattering using the Projection-to-Born method

    Science.gov (United States)

    Currie, J.; Gehrmann, T.; Glover, E. W. N.; Huss, A.; Niehues, J.; Vogt, A.

    2018-05-01

    Computations of higher-order QCD corrections for processes with exclusive final states require a subtraction method for real-radiation contributions. We present the first-ever generalisation of a subtraction method for third-order (N3LO) QCD corrections. The Projection-to-Born method is used to combine inclusive N3LO coefficient functions with an exclusive second-order (NNLO) calculation for a final state with an extra jet. The input requirements, advantages, and potential applications of the method are discussed, and validations at lower orders are performed. As a test case, we compute the N3LO corrections to kinematical distributions and production rates for single-jet production in deep inelastic scattering in the laboratory frame, and compare them with data from the ZEUS experiment at HERA. The corrections are small in the central rapidity region, where they stabilize the predictions to sub per-cent level. The corrections increase substantially towards forward rapidity where large logarithmic effects are expected, thereby yielding an improved description of the data in this region.

  15. Dissipative dynamics with the corrected propagator method. Numerical comparison between fully quantum and mixed quantum/classical simulations

    International Nuclear Information System (INIS)

    Gelman, David; Schwartz, Steven D.

    2010-01-01

    The recently developed quantum-classical method has been applied to the study of dissipative dynamics in multidimensional systems. The method is designed to treat many-body systems consisting of a low dimensional quantum part coupled to a classical bath. Assuming the approximate zeroth order evolution rule, the corrections to the quantum propagator are defined in terms of the total Hamiltonian and the zeroth order propagator. Then the corrections are taken to the classical limit by introducing the frozen Gaussian approximation for the bath degrees of freedom. The evolution of the primary part is governed by the corrected propagator yielding the exact quantum dynamics. The method has been tested on two model systems coupled to a harmonic bath: (i) an anharmonic (Morse) oscillator and (ii) a double-well potential. The simulations have been performed at zero temperature. The results have been compared to the exact quantum simulations using the surrogate Hamiltonian approach.

  16. Capacitated Vehicle Routing with Nonuniform Speeds

    DEFF Research Database (Denmark)

    Gørtz, Inge Li; Molinaro, Marco; Nagarajan, Viswanath

    2016-01-01

is the distance traveled divided by its speed. Our algorithm relies on a new approximate minimum spanning tree construction called Level-Prim, which is related to but different from Light Approximate Shortest-path Trees. We also extend the widely used tour-splitting technique to nonuniform speeds, using ideas from...

  17. Stone Stability in Non-uniform Flow

    NARCIS (Netherlands)

    Hoan, N.T.; Stive, M.J.F.; Booij, R.; Hofland, B.; Verhagen, H.J.

    2011-01-01

This paper presents the results of an experimental study on stone stability under nonuniform turbulent flow, in particular expanding flow. Detailed measurements of both flow and turbulence and the bed stability are described. Then various manners of quantifying the hydraulic loads exerted on the

  18. Stone Stability under Stationary Nonuniform Flows

    NARCIS (Netherlands)

    Steenstra, Remco; Hofland, B.; Paarlberg, Andries; Smale, Alfons; Huthoff, Fredrik; Uijttewaal, W.S.J.

    2016-01-01

    A stability parameter for rock in bed protections under nonuniform stationary flow is derived. The influence of the mean flow velocity, turbulence, and mean acceleration of the flow are included explicitly in the parameter. The relatively new notion of explicitly incorporating the mean acceleration

  19. Radar Doppler Processing with Nonuniform Sampling.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-07-01

Conventional signal processing to estimate radar Doppler frequency often assumes uniform pulse/sample spacing. This is for the convenience of the processing. More recent performance enhancements in processor capability allow optimally processing nonuniform pulse/sample spacing, thereby overcoming some of the baggage that attends uniform sampling, such as Doppler ambiguity and SNR losses due to sidelobe control measures.
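
    For context, a least-squares spectral estimate such as the Lomb-Scargle periodogram is one standard way to recover a Doppler line from nonuniformly spaced pulses; the sketch below is a generic illustration, not Sandia's processing chain.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1.0, 200))      # nonuniform pulse times (s)
f_true = 37.0                                # Doppler frequency (Hz)
x = np.cos(2 * np.pi * f_true * t) + 0.1 * rng.standard_normal(t.size)

freqs = np.linspace(1.0, 100.0, 2000)        # frequency search grid (Hz)
power = lombscargle(t, x - x.mean(), 2 * np.pi * freqs)
print("estimated Doppler:", freqs[np.argmax(power)], "Hz")
```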

  20. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data.

    Science.gov (United States)

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-06-15

With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product (CMORPH) in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution is aggregated to daily to match in-situ observations for the period 2003-2010. Study objectives are to assess bias of the satellite estimates, to identify optimum window size for application of bias correction and to test effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW's) of 3, 5, 7, 9, …, 31 days with the aim to assess error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed existence of bias in the CMORPH rainfall. It is found that the 7 days SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach.
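
    A minimal sketch of the multiplicative sequential-window (SW) correction that performed best here, with illustrative variable names; the real study interpolates station factors into a bias map before applying them.

```python
import numpy as np

def sw_bias_factors(gauge, cmorph, window=7):
    """One bias factor per non-overlapping 7-day window at a station:
    factor = gauge total / CMORPH total (1.0 where CMORPH saw no rain)."""
    n = len(gauge) // window * window
    g = np.asarray(gauge[:n], dtype=float).reshape(-1, window).sum(axis=1)
    c = np.asarray(cmorph[:n], dtype=float).reshape(-1, window).sum(axis=1)
    return np.where(c > 0, g / c, 1.0)

def apply_bias(cmorph, factors, window=7):
    out = np.asarray(cmorph, dtype=float).copy()
    for i, f in enumerate(factors):
        out[i * window:(i + 1) * window] *= f
    return out
```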

  2. Method and system for automatically correcting aberrations of a beam of charged particles

    International Nuclear Information System (INIS)

    1975-01-01

    The location of a beam of charged particles within a deflection field is determined by its orthogonal deflection voltages. With the location of the beam in the field, correction currents are supplied to a focus coil and to each of a pair of stigmator coils to correct for change of focal length and astigmatism due to the beam being deflected away from the center of its deflection field

  4. Modified Ponseti method of treatment for correction of neglected clubfoot in older children and adolescents--a preliminary report.

    Science.gov (United States)

    Bashi, Ramin Haj Zargar; Baghdadi, Taghi; Shirazi, Mehdi Ramezan; Abdi, Reza; Aslani, Hossein

    2016-03-01

    Congenital talipes equinovarus may be the most common congenital orthopedic condition requiring treatment. Nonoperative treatment including different methods is generally accepted as the first step in the deformity correction. Ignacio Ponseti introduced his nonsurgical approach to the treatment of clubfoot in the early 1940s. The method is reportedly successful in treating clubfoot in patients up to 9 years of age. However, whether age at the beginning of treatment affects the rate of effective correction and relapse is unknown. We have applied the Ponseti method successfully with some modifications for 11 patients with a mean age of 11.2 years (range, 6 to 19 years) with neglected and untreated clubbed feet. The mean follow-up was 15 months (12 to 36 months). Correction was achieved with a mean of nine casts (six to 13). Clinically, 17 out of 18 feet (94.4%) were considered to achieve a good result with no need for further surgery. The application of this method of treatment is very simple and also cheap in developing countries with limited financial and social resources for health service. To the best of the authors' knowledge, such a modified method as a correction method for clubfoot in older children and adolescents has not been applied previously for neglected clubfeet in older children in the literature.

  5. Software Design of Mobile Antenna for Auto Satellite Tracking Using Modem Correction and Elevation Azimuth Method

    Directory of Open Access Journals (Sweden)

    Djamhari Sirat

    2010-10-01

Full Text Available Pointing accuracy is important in satellite communication. Because the distance from the satellite to the earth's surface is so large, a pointing error of 1 degree will prevent the antenna from sending data to the satellite. To overcome this, an auto-tracking satellite controller was built. The system uses a microcontroller as the controller, a GPS receiver to indicate the antenna location, a digital compass to establish the initial antenna pointing direction, rotary encoders as azimuth and elevation sensors, and a modem to monitor the Eb/No signal. The microcontroller reads all inputs over serial links, so the programming focuses on the UART and the serial communication software. The controller uses two phases in the process of tracking satellites. The first phase is the elevation-azimuth method: with input from the GPS, the digital compass, and the satellite position (coordinates and height) stored in the microcontroller, the controller calculates the elevation and azimuth angles and then moves the antenna accordingly. The second phase is modem correction, in which the controller uses only the modem as input and the antenna movement is adjusted to obtain the largest Eb/No value. In operation, the controller raised the input level from -81.7 dB to -30.2 dB, with a final Eb/No value reaching 5.7 dB.
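
    The elevation-azimuth phase reduces to the standard geostationary look-angle geometry; the sketch below uses the textbook formulas (valid for a northern-hemisphere station) and is not necessarily the authors' firmware.

```python
import math

R_E = 6378.137    # Earth equatorial radius, km
H_GEO = 35786.0   # geostationary altitude, km

def look_angles(sta_lat_deg, sta_lon_deg, sat_lon_deg):
    """Azimuth (from true north) and elevation, in degrees, from a ground
    station to a geostationary satellite at longitude sat_lon_deg."""
    lat = math.radians(sta_lat_deg)
    dlon = math.radians(sat_lon_deg - sta_lon_deg)
    cos_b = math.cos(lat) * math.cos(dlon)   # cos of central angle
    sin_b = math.sqrt(1.0 - cos_b ** 2)
    r = R_E / (R_E + H_GEO)                  # ~0.151
    el = math.degrees(math.atan2(cos_b - r, sin_b))
    az = (math.degrees(math.atan2(math.tan(dlon), math.sin(lat))) + 180.0) % 360.0
    return az, el
```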

  6. Vacuum polarisation in some static nonuniform magnetic fields

    Energy Technology Data Exchange (ETDEWEB)

    Calucci, G. [Trieste Univ. (Italy). Dip. di Fisica Teorica]|[INFN, Trieste (Italy)

    1995-11-01

Vacuum polarisation in QED in the presence of some configurations of external magnetic fields is investigated. The configurations considered correspond to fields lying in a plane and without sources. The motion of a Dirac electron in this field configuration is studied and arguments are found to conclude that the lowest level gives the most important contribution. The result is that the main effect is not very different from the uniform case; the possibility of calculating the corrections due to the nonuniformity is explicitly shown. A typical effect of the nonuniformity of the field shows up in the refractivity of the vacuum.

  7. Decomposed Photo Response Non-Uniformity for Digital Forensic Analysis

    Science.gov (United States)

    Li, Yue; Li, Chang-Tsun

The last few years have seen the applications of Photo Response Non-Uniformity noise (PRNU) - a unique stochastic fingerprint of image sensors - to various types of digital forensic investigations such as source device identification and integrity verification. In this work we proposed a new way of extracting the PRNU noise pattern, called Decomposed PRNU (DPRNU), by exploiting the difference between the physical and artificial color components of the photos taken by digital cameras that use a Color Filter Array for interpolating artificial components from physical ones. Experimental results presented in this work have shown the superiority of the proposed DPRNU to the commonly used version. We also proposed a new performance metric, Corrected Positive Rate (CPR), to evaluate the performance of the common PRNU and the proposed DPRNU.

  9. Validation of phenol red versus gravimetric method for water reabsorption correction and study of gender differences in Doluisio's absorption technique.

    Science.gov (United States)

    Tuğcu-Demiröz, Fatmanur; Gonzalez-Alvarez, Isabel; Gonzalez-Alvarez, Marta; Bermejo, Marival

    2014-10-01

    The aim of the present study was to develop a method for water flux reabsorption measurement in Doluisio's Perfusion Technique based on the use of phenol red as a non-absorbable marker and to validate it by comparison with gravimetric procedure. The compounds selected for the study were metoprolol, atenolol, cimetidine and cefadroxil in order to include low, intermediate and high permeability drugs absorbed by passive diffusion and by carrier mediated mechanism. The intestinal permeabilities (Peff) of the drugs were obtained in male and female Wistar rats and calculated using both methods of water flux correction. The absorption rate coefficients of all the assayed compounds did not show statistically significant differences between male and female rats consequently all the individual values were combined to compare between reabsorption methods. The absorption rate coefficients and permeability values did not show statistically significant differences between the two strategies of concentration correction. The apparent zero order water absorption coefficients were also similar in both correction procedures. In conclusion gravimetric and phenol red method for water reabsorption correction are accurate and interchangeable for permeability estimation in closed loop perfusion method. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Implementing a generic method for bias correction in statistical models using random effects, with spatial and population dynamics examples

    DEFF Research Database (Denmark)

    Thorson, James T.; Kristensen, Kasper

    2016-01-01

    Statistical models play an important role in fisheries science when reconciling ecological theory with available data for wild populations or experimental studies. Ecological models increasingly include both fixed and random effects, and are often estimated using maximum likelihood techniques...... configurations of an age-structured population dynamics model. This simulation experiment shows that the epsilon-method and the existing bias-correction method perform equally well in data-rich contexts, but the epsilon-method is slightly less biased in data-poor contexts. We then apply the epsilon......-method to a spatial regression model when estimating an index of population abundance, and compare results with an alternative bias-correction algorithm that involves Markov-chain Monte Carlo sampling. This example shows that the epsilon-method leads to a biologically significant difference in estimates of average...

  11. Instruction sequence based non-uniform complexity classes

    NARCIS (Netherlands)

    Bergstra, J.A.; Middelburg, C.A.

    2013-01-01

    We present an approach to non-uniform complexity in which single-pass instruction sequences play a key part, and answer various questions that arise from this approach. We introduce several kinds of non-uniform complexity classes. One kind includes a counterpart of the well-known non-uniform

  12. 4SM: A Novel Self-Calibrated Algebraic Ratio Method for Satellite-Derived Bathymetry and Water Column Correction.

    Science.gov (United States)

    Morel, Yann G; Favoretto, Fabio

    2017-07-21

All empirical water column correction methods have consistently been reported to require existing depth sounding data for the purpose of calibrating a simple depth retrieval model; they yield poor results over very bright or very dark bottoms. In contrast, we set out to (i) use only the relative radiance data in the image, along with published data and several new assumptions, (ii) to specify and operate the simplified radiative transfer equation (RTE), and (iii) to retrieve both the satellite-derived bathymetry (SDB) and the water-column-corrected spectral reflectance over shallow seabeds. Sea truth regressions show that SDB depths retrieved by the method only need tide correction. Therefore it shall be demonstrated that, under such new assumptions, there is no need for (i) formal atmospheric correction; (ii) conversion of relative radiance into calibrated reflectance; or (iii) existing depth sounding data, to specify the simplified RTE and produce both SDB and spectral water column corrected radiance ready for bottom typing. Moreover, the use of the panchromatic band for that purpose is introduced. Altogether, we named this process the Self-Calibrated Supervised Spectral Shallow-sea Modeler (4SM). This approach requires a trained practitioner, though, to produce its results within hours of downloading the raw image. The ideal raw image should be a "near-nadir" view, exhibit homogeneous atmosphere and water column, include some coverage of optically deep waters and bare land, and lend itself to quality removal of haze, atmospheric adjacency effect, and sun/sky glint.
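
    For orientation, the empirical ratio methods that 4SM dispenses with typically follow the Stumpf et al. log-ratio form, whose two constants are normally fitted against sounding data; the sketch below shows that baseline, not 4SM itself.

```python
import numpy as np

def ratio_depth(blue, green, m1=20.0, m0=30.0, n=1000.0):
    """Stumpf-style band-ratio depth estimate over shallow water.
    blue, green: water-leaving radiance/reflectance arrays; m0, m1 are the
    empirical constants that conventional methods calibrate with soundings."""
    ratio = np.log(n * np.asarray(blue)) / np.log(n * np.asarray(green))
    return m1 * ratio - m0   # depth in metres once m0, m1 are calibrated
```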

  13. Automated 3-D method for the correction of axial artifacts in spectral-domain optical coherence tomography images

    Science.gov (United States)

    Antony, Bhavna; Abràmoff, Michael D.; Tang, Li; Ramdas, Wishal D.; Vingerling, Johannes R.; Jansonius, Nomdo M.; Lee, Kyungmoo; Kwon, Young H.; Sonka, Milan; Garvin, Mona K.

    2011-01-01

    The 3-D spectral-domain optical coherence tomography (SD-OCT) images of the retina often do not reflect the true shape of the retina and are distorted differently along the x and y axes. In this paper, we propose a novel technique that uses thin-plate splines in two stages to estimate and correct the distinct axial artifacts in SD-OCT images. The method was quantitatively validated using nine pairs of OCT scans obtained with orthogonal fast-scanning axes, where a segmented surface was compared after both datasets had been corrected. The mean unsigned difference computed between the locations of this artifact-corrected surface after the single-spline and dual-spline correction was 23.36 ± 4.04 μm and 5.94 ± 1.09 μm, respectively, and showed a significant difference (p < 0.001 from two-tailed paired t-test). The method was also validated using depth maps constructed from stereo fundus photographs of the optic nerve head, which were compared to the flattened top surface from the OCT datasets. Significant differences (p < 0.001) were noted between the artifact-corrected datasets and the original datasets, where the mean unsigned differences computed over 30 optic-nerve-head-centered scans (in normalized units) were 0.134 ± 0.035 and 0.302 ± 0.134, respectively. PMID:21833377
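
    A single-stage sketch of the underlying idea — fit a smooth thin-plate spline to a segmented reference surface and shift each A-scan so the fitted surface becomes flat — using SciPy's RBF interpolator; the paper's two-stage, axis-specific procedure is more involved.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def flatten_volume(volume, surface_z, subsample=50):
    """volume: (Z, Y, X) OCT cube; surface_z: (Y, X) segmented surface depth."""
    ny, nx = surface_z.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    pts = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
    tps = RBFInterpolator(pts[::subsample], surface_z.ravel()[::subsample],
                          kernel='thin_plate_spline', smoothing=1.0)
    fitted = tps(pts).reshape(ny, nx)
    shift = np.round(fitted - fitted.mean()).astype(int)
    out = np.empty_like(volume)
    for y in range(ny):
        for x in range(nx):
            # wrap-around at the volume edges is ignored in this sketch
            out[:, y, x] = np.roll(volume[:, y, x], -shift[y, x])
    return out
```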

  14. Effect of attenuation by the cranium on quantitative SPECT measurements of cerebral blood flow and a correction method

    International Nuclear Information System (INIS)

    Iwase, Mikio; Kurono, Kenji; Iida, Akihiko.

    1998-01-01

Attenuation correction for cerebral blood flow SPECT image reconstruction is usually performed by considering the head as a whole to be equivalent to water, and the effects of differences in attenuation between subjects produced by the cranium have not been taken into account. We determined the differences in attenuation between subjects and assessed a method of correcting quantitative cerebral blood flow values. Attenuation by the head on the right and left sides was measured before intravenous injection of 123I-IMP, and water-converted diameters of both sides (Ta) were calculated from the measurements obtained. After acquiring SPECT images, attenuation correction was conducted according to the method of Sorenson, and images were reconstructed. The diameters of the right and left sides in the same position as the Ta (Tt) were calculated from the contours determined by threshold values. Using Ts given by 2Ts = Ta - Tt, the correction factor λ = exp(μ1·Ts) was calculated and applied as a multiplicative factor when rCBF was determined. The results revealed significant differences between Tt and Ta. Although no gender differences were observed in Tt, they were seen in both Ta and Ts. Thus, interindividual differences in attenuation by the cranium were found to have an influence that cannot be ignored. Inter-subject correction is needed to obtain accurate quantitative values. (author)
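
    The per-subject factor is then a one-liner once Ta and Tt are known; in the sketch below μ1 is an assumed water-like attenuation coefficient for the 159 keV photons of 123I, not necessarily the paper's value.

```python
import math

MU1_PER_CM = 0.15   # assumed linear attenuation coefficient (cm^-1)

def cranium_correction(ta_cm, tt_cm, mu1=MU1_PER_CM):
    ts = 0.5 * (ta_cm - tt_cm)    # from 2*Ts = Ta - Tt
    return math.exp(mu1 * ts)     # lambda, multiplies the uncorrected rCBF

# usage: rcbf_corrected = cranium_correction(ta, tt) * rcbf_uncorrected
```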

  15. METHODS FOR CORRECTION OF RHINOPHONIA IN PATIENTS WITH ACQUIRED MAXILLARY DEFECTS

    Directory of Open Access Journals (Sweden)

    E. G. Matyakin

    2012-01-01

Full Text Available Speech recovery sessions were conducted in 63 patients with acquired maxillary defects. Assessment of speech quality in patients after maxillary resection without a prosthesis indicated 100% significant rhinolalia and indistinct articulation. Prosthetic defect replacement completely corrects speech dysfunction and creates conditions for forming correct speech stereotypes. Speech therapy sessions and testing are aimed at increasing the performance of the speech apparatus and at improving the automatization of speaking skills. The techniques to remove nasal emission include: – articulation exercises (activation of the muscles of the lips, cheeks, tongue, pharynx, neck, and larynx); – speech respiratory gymnastics; – phonopedic (vocal) exercises. The elements of rational psychotherapy have extensive applications during each session and include suggestion, an emotional exposure to correct personality disorders, as well as pedagogical elements.

  16. Method and apparatus for producing a porosity log of a subsurface formation corrected for detector standoff

    International Nuclear Information System (INIS)

    Allen, L.S.; Leland, F.P.; Lyle, W.D. Jr.; Stromswold, D.C.

    1993-01-01

    A borehole logging tool with a pulsed source of fast neutrons is lowered into a borehole traversing a subsurface formation, and a neutron detector measures the die-away of nuclear radiation in the formation. A model of the die-away is produced using exponential terms varying as the sum of borehole, formation and thermal neutron background components. Exponentially weighted moments of both the die-away measurements and a model are determined and equated. The formation decay constant is determined from the formation and thermal neutron background. An epithermal neutron lifetime is determined from the formation decay constant and is used with the amplitude ratio by a trained neural network to determine a lifetime correction. A standoff corrected lifetime is determined from the epithermal neutron lifetime and the lifetime correction. (author)
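
    The die-away model named here — borehole and formation exponentials plus a thermal-neutron background — can be fitted directly by nonlinear least squares; starting values below are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def die_away(t, a_bh, tau_bh, a_fm, tau_fm, bkg):
    # borehole + formation exponentials + thermal-neutron background
    return a_bh * np.exp(-t / tau_bh) + a_fm * np.exp(-t / tau_fm) + bkg

def fit_formation_decay(t_us, counts):
    p0 = (counts[0], 5.0, counts[0] / 2.0, 50.0, counts[-1])  # rough guesses
    popt, _ = curve_fit(die_away, t_us, counts, p0=p0, maxfev=10000)
    return popt[3]   # formation decay constant tau_fm (same units as t_us)
```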

  17. A comparison of different experimental methods for general recombination correction for liquid ionization chambers

    DEFF Research Database (Denmark)

    Andersson, Jonas; Kaiser, Franz-Joachim; Gomez, Faustino

    2012-01-01

    Radiation dosimetry of highly modulated dose distributions requires a detector with a high spatial resolution. Liquid filled ionization chambers (LICs) have the potential to become a valuable tool for the characterization of such radiation fields. However, the effect of an increased recombination...... of the charge carriers, as compared to using air as the sensitive medium has to be corrected for. Due to the presence of initial recombination in LICs, the correction for general recombination losses is more complicated than for air-filled ionization chambers. In the present work, recently published...

  18. The strategy of spectral shifts and the sets of correct methods for calculating eigenvalues of general tridiagonal matrices

    International Nuclear Information System (INIS)

    Emel'yanenko, G.A.; Sek, I.E.

    1988-01-01

Many correct, previously unknown methods for the eigenvalue calculation of general tridiagonal matrices with real elements are obtained, together with criteria for singular tridiagonal matrices, necessary and sufficient conditions for tridiagonal matrix degeneracy, and calculation processes with boundary conditions based on the minors of general upper and lower tridiagonal matrices. 6 refs.
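
    One classical correct procedure in this family is the three-term recurrence of leading principal minors, which evaluates the characteristic polynomial of a general tridiagonal matrix exactly; a minimal sketch:

```python
def charpoly_value(a, b, c, lam):
    """Evaluate det(A - lam*I) for a tridiagonal A via the minors recurrence
        p_k = (a_k - lam) * p_{k-1} - b_{k-1} * c_{k-1} * p_{k-2}.
    a: diagonal (n values); b: superdiagonal and c: subdiagonal (n-1 values).
    """
    p_prev, p = 1.0, a[0] - lam
    for k in range(1, len(a)):
        p_prev, p = p, (a[k] - lam) * p - b[k - 1] * c[k - 1] * p_prev
    return p
```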

  19. Reliability Analysis of Offshore Jacket Structures with Wave Load on Deck using the Model Correction Factor Method

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Friis-Hansen, P.; Nielsen, J.S.

    2006-01-01

    failure/collapse of jacket type platforms with wave in deck loads using the so-called Model Correction Factor Method (MCFM). A simple representative model for the RSR measure is developed and used in the MCFM technique. A realistic example is evaluated and it is seen that it is possible to perform...

  20. Nonuniform transformation field analysis of multiphase elasto viscoplastic materials: application to MOX fuels

    International Nuclear Information System (INIS)

    Roussette, S.

    2005-05-01

The description of the overall behavior of nonlinear materials with nonlinear dissipative phases requires an infinity of internal variables. An approximate model involving only a finite number of internal variables, Nonuniform Transformation Field Analysis, is obtained by considering a decomposition of these variables on a finite set of nonuniform transformation fields, called plastic modes. The method is initially developed for incompressible elasto viscoplastic materials. Karhunen-Loeve expansion is proposed to optimize the plastic modes. Then the method is extended to porous elasto viscoplastic materials. Finally the transformation field analysis, developed by Dvorak, is applied to MOX nuclear fuels. This method makes it possible to carry out sensitivity studies to determine the role of some microstructural parameters in the fuel behaviour. Moreover the adequacy of the nonuniform method for MOX fuels is shown, the final objective being to be able to apply the model to MOX in 3D. (author)

  1. Evaluation of a method for correction of scatter radiation in thorax cone beam CT; Evaluation d'une methode de correction du rayonnement diffuse en tomographie du thorax avec faisceau conique

    Energy Technology Data Exchange (ETDEWEB)

    Rinkel, J.; Dinten, J.M. [CEA Grenoble (DTBS/STD), Lab. d' Electronique et de Technologie de l' Informatique, LETI, 38 (France); Esteve, F. [European Synchrotron Radiation Facility (ESRF), 38 - Grenoble (France)

    2004-07-01

Purpose: Cone beam CT (CBCT) enables three-dimensional imaging with isotropic resolution. X-ray scatter estimation is a big challenge for quantitative CBCT imaging of the thorax: the scatter level is significantly higher on cone beam systems compared to collimated fan beam systems. The effects of this scattered radiation are cupping artefacts, streaks, and quantification inaccuracies. The conventional beam-stop scatter estimation approach can be used for CBCT but leads to a significant increase in dose and acquisition time. An original scatter management process requiring no supplementary acquisition has been developed at CEA-LETI. Methods and Materials: This Analytical Plus Indexing-based (API) method of scatter correction in CBCT is based on scatter calibration through offline acquisitions with beam stops on lucite plates, combined with an analytical transformation derived from physical equations. This approach has been applied with success in bone densitometry and mammography. To evaluate this method in CBCT, acquisitions from a thorax phantom with and without beam stops have been performed. To compare different scatter correction approaches, the Feldkamp algorithm has been applied to raw data corrected for scatter by the API and beam-stop approaches. Results: The API method provides results in good agreement with the beam-stop array approach, suppressing the cupping artefact. The influence of the scatter correction method on the noise in the reconstructed images has also been evaluated. Conclusion: The results indicate that the API method is effective for quantitative CBCT imaging of the thorax. Compared to a beam-stop array method it needs a lower x-ray dose and shortens acquisition time. (authors)
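
    For reference, the benchmark beam-stop-array correction works as follows: behind each stop the primary beam is blocked, so the detector reads scatter alone; a smooth scatter map is interpolated from those samples and subtracted. The sketch below shows that generic benchmark, not the API method.

```python
import numpy as np
from scipy.interpolate import griddata

def beam_stop_scatter_correction(projection, stop_rows, stop_cols):
    """projection: 2-D detector image with beam stops; stop_rows/stop_cols:
    pixel coordinates of the beam-stop centres (scatter-only samples)."""
    pts = np.column_stack([stop_rows, stop_cols])
    samples = projection[stop_rows, stop_cols]
    rr, cc = np.mgrid[0:projection.shape[0], 0:projection.shape[1]]
    scatter = griddata(pts, samples, (rr, cc), method='cubic')
    # outside the convex hull of the stops, fall back to nearest-neighbour
    nearest = griddata(pts, samples, (rr, cc), method='nearest')
    scatter = np.where(np.isnan(scatter), nearest, scatter)
    return projection - scatter
```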

  2. Effect of nonuniform fuel distribution

    International Nuclear Information System (INIS)

    Katakura, Jun-ichi

    1987-01-01

In order to ensure the subcriticality of nuclear fuel, two methods are used: controlling the mass, form or dimensions below limit values, and confirming subcriticality by calculation. In both cases it is often assumed that the concentration of fuel is constant within a fuel region, or that fuel rods are arranged at constant intervals. However, in the extraction process in fuel reprocessing or in fuel storage vessels, a concentration distribution may arise in fuel regions, even if only temporarily. Even if subcriticality is expected in a uniform system, criticality may occur when a concentration distribution arises and the system becomes uneven. Therefore, it is important to grasp the effect of uneven fuel distribution for ensuring safety against criticality. In this paper, the effect of uneven fuel distribution is discussed, centering on the critical mass. Examples from the literature and example calculations of uneven fuel distribution are shown. According to calculations at the Japan Atomic Energy Research Institute, in a highly enriched U-235-water system the critical mass decreased by about 7 % due to uneven distribution, which nearly agreed with Clark's result of about 6 %. For a low-enrichment system, no conspicuous decrease of the critical mass was observed. (Kako, I.)

  3. Proton dose distribution measurements using a MOSFET detector with a simple dose-weighted correction method for LET effects.

    Science.gov (United States)

    Kohno, Ryosuke; Hotta, Kenji; Matsuura, Taeko; Matsubara, Kana; Nishioka, Shie; Nishio, Teiji; Kawashima, Mitsuhiko; Ogino, Takashi

    2011-04-04

    We experimentally evaluated the proton beam dose reproducibility, sensitivity, angular dependence and depth-dose relationships for a new Metal Oxide Semiconductor Field Effect Transistor (MOSFET) detector. The detector was fabricated with a thinner oxide layer and was operated at high-bias voltages. In order to accurately measure dose distributions, we developed a practical method for correcting the MOSFET response to proton beams. The detector was tested by examining lateral dose profiles formed by protons passing through an L-shaped bolus. The dose reproducibility, angular dependence and depth-dose response were evaluated using a 190 MeV proton beam. Depth-output curves produced using the MOSFET detectors were compared with results obtained using an ionization chamber (IC). Since accurate measurements of proton dose distribution require correction for LET effects, we developed a simple dose-weighted correction method. The correction factors were determined as a function of proton penetration depth, or residual range. The residual proton range at each measurement point was calculated using the pencil beam algorithm. Lateral measurements in a phantom were obtained for pristine and SOBP beams. The reproducibility of the MOSFET detector was within 2%, and the angular dependence was less than 9%. The detector exhibited a good response at the Bragg peak (0.74 relative to the IC detector). For dose distributions resulting from protons passing through an L-shaped bolus, the corrected MOSFET dose agreed well with the IC results. Absolute proton dosimetry can be performed using MOSFET detectors to a precision of about 3% (1 sigma). A thinner oxide layer thickness improved the LET in proton dosimetry. By employing correction methods for LET dependence, it is possible to measure absolute proton dose using MOSFET detectors.
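
    Mechanically, the dose-weighted correction is a lookup of a factor against residual range; the calibration numbers below are invented solely to show the mechanics (factors exceed 1 near the Bragg peak, where the MOSFET under-responds by roughly the 0.74 quoted above).

```python
import numpy as np

# Hypothetical calibration: correction factor vs residual proton range (cm).
RES_RANGE_CM = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
FACTOR = np.array([1.35, 1.25, 1.15, 1.08, 1.03, 1.00])

def let_corrected_dose(raw_dose, residual_range_cm):
    """Multiply the raw MOSFET dose by the factor for its residual range
    (the residual range at each measurement point comes from the pencil
    beam algorithm, as described above)."""
    return raw_dose * np.interp(residual_range_cm, RES_RANGE_CM, FACTOR)
```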

  5. Analysis of an automated background correction method for cardiovascular MR phase contrast imaging in children and young adults

    Energy Technology Data Exchange (ETDEWEB)

    Rigsby, Cynthia K.; Hilpipre, Nicholas; Boylan, Emma E.; Popescu, Andrada R.; Deng, Jie [Ann and Robert H. Lurie Children' s Hospital of Chicago, Department of Medical Imaging, Chicago, IL (United States); McNeal, Gary R. [Siemens Medical Solutions USA Inc., Customer Solutions Group, Cardiovascular MR R and D, Chicago, IL (United States); Zhang, Gang [Ann and Robert H. Lurie Children' s Hospital of Chicago Research Center, Biostatistics Research Core, Chicago, IL (United States); Choi, Grace [Ann and Robert H. Lurie Children' s Hospital of Chicago, Department of Pediatrics, Chicago, IL (United States); Greiser, Andreas [Siemens AG Healthcare Sector, Erlangen (Germany)

    2014-03-15

    Phase contrast magnetic resonance imaging (MRI) is a powerful tool for evaluating vessel blood flow. Inherent errors in acquisition, such as phase offset, eddy currents and gradient field effects, can cause significant inaccuracies in flow parameters. These errors can be rectified with the use of background correction software. To evaluate the performance of an automated phase contrast MRI background phase correction method in children and young adults undergoing cardiac MR imaging. We conducted a retrospective review of patients undergoing routine clinical cardiac MRI including phase contrast MRI for flow quantification in the aorta (Ao) and main pulmonary artery (MPA). When phase contrast MRI of the right and left pulmonary arteries was also performed, these data were included. We excluded patients with known shunts and metallic implants causing visible MRI artifact and those with more than mild to moderate aortic or pulmonary stenosis. Phase contrast MRI of the Ao, mid MPA, proximal right pulmonary artery (RPA) and left pulmonary artery (LPA) using 2-D gradient echo Fast Low Angle SHot (FLASH) imaging was acquired during normal respiration with retrospective cardiac gating. Standard phase image reconstruction and the automatic spatially dependent background-phase-corrected reconstruction were performed on each phase contrast MRI dataset. Non-background-corrected and background-phase-corrected net flow, forward flow, regurgitant volume, regurgitant fraction, and vessel cardiac output were recorded for each vessel. We compared standard non-background-corrected and background-phase-corrected mean flow values for the Ao and MPA. The ratio of pulmonary to systemic blood flow (Qp:Qs) was calculated for the standard non-background and background-phase-corrected data and these values were compared to each other and for proximity to 1. In a subset of patients who also underwent phase contrast MRI of the MPA, RPA, and LPA a comparison was made between standard non-background-corrected
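
    The simplest member of this family of corrections fits a low-order surface to the phase of static tissue and subtracts it everywhere; the automated, spatially dependent method evaluated in this study is more sophisticated, so the sketch below only illustrates the principle.

```python
import numpy as np

def remove_background_phase(phase, static_mask):
    """phase: 2-D phase image; static_mask: boolean mask of static tissue.
    Fits a first-order plane to the static-tissue phase and subtracts it."""
    yy, xx = np.indices(phase.shape)
    A = np.column_stack([np.ones(static_mask.sum()),
                         xx[static_mask], yy[static_mask]])
    coef, *_ = np.linalg.lstsq(A, phase[static_mask], rcond=None)
    background = coef[0] + coef[1] * xx + coef[2] * yy
    return phase - background
```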

  6. Best Practices for Controlling Tuberculosis-Training in Correctional Facilities: A Mixed Methods Evaluation

    Science.gov (United States)

    Murray, Ellen R.

    2016-01-01

According to the literature, identifying and treating tuberculosis (TB) in correctional facilities have been problematic for the inmates and also for the communities into which inmates are released. Training those who can identify this disease early in incarceration is vital to halting transmission. Although some training has…

  7. Unilateral canine crossbite correction in adults using the Invisalign method: a case report.

    Science.gov (United States)

    Giancotti, Aldo; Mampieri, Gianluca

    2012-01-01

    The aim of this paper is to present and debate the treatment of a unilateral canine crossbite using clear aligners (Invisalign). The possibility of combining partial fixed appliances with removable elastics to optimize the final outcome is also described. The advantages of protected movement, due to the presence of the aligners, to jump the occlusion during crossbite correction is also highlighted.

  8. Vortices in nonuniform upper-hybrid field

    International Nuclear Information System (INIS)

    Davydova, T.A.; Vranjes, J.

    1992-01-01

The equations describing the interaction of an upper-hybrid pump wave with small low-frequency density perturbations are discussed under the assumption that the pump is spatially nonuniform. The conditions for the modulational instability are investigated. Instead of a dispersion relation describing the growth of perturbations, as in the case of a uniform pump, in our case of a nonuniform pump a differential equation is obtained, and the instability criteria are found from its eigenvalues. Taking into account the low-frequency self-interaction terms, some localized solutions similar to dipole vortices are found, but described by analytic functions in all space. It is shown that their characteristic size and speed are determined by the pump intensity and its spatial structure. (au)

  9. A Fixed-Pattern Noise Correction Method Based on Gray Value Compensation for TDI CMOS Image Sensor.

    Science.gov (United States)

    Liu, Zhenwang; Xu, Jiangtao; Wang, Xinlei; Nie, Kaiming; Jin, Weimin

    2015-09-16

In order to eliminate the fixed-pattern noise (FPN) in the output image of a time-delay-integration CMOS image sensor (TDI-CIS), an FPN correction method based on gray value compensation is proposed. One hundred images are first captured under uniform illumination. Then, row FPN (RFPN) and column FPN (CFPN) are estimated based on the row-mean vector and column-mean vector of all collected images, respectively. Finally, RFPN is corrected by adding the estimated RFPN gray value to the original gray values of pixels in the corresponding row, and CFPN is corrected by subtracting the estimated CFPN gray value from the original gray values of pixels in the corresponding column. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in the image captured under uniform illumination with the proposed method, the standard deviation of the row-mean vector decreases from 5.6798 to 0.4214 LSB, and the standard deviation of the column-mean vector decreases from 15.2080 to 13.4623 LSB. Both kinds of FPN in the real images captured by the TDI-CIS are eliminated effectively with the proposed method.
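
    A compact sketch of the gray-value-compensation scheme; defining the row/column offsets as deviations from the global mean (with signs chosen so that the add/subtract convention above flattens the image) is this sketch's assumption.

```python
import numpy as np

def estimate_fpn(flat_frames):
    """flat_frames: (N, H, W) stack captured under uniform illumination."""
    mean_img = flat_frames.mean(axis=0)
    g = mean_img.mean()
    rfpn = g - mean_img.mean(axis=1)   # per-row value to be *added*
    cfpn = mean_img.mean(axis=0) - g   # per-column value to be *subtracted*
    return rfpn, cfpn

def correct_frame(img, rfpn, cfpn):
    # add the RFPN estimate per row, subtract the CFPN estimate per column
    return img + rfpn[:, None] - cfpn[None, :]
```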

  10. Effect of scatter and attenuation correction in ROI analysis of brain perfusion scintigraphy. Phantom experiment and clinical study in patients with unilateral cerebrovascular disease

    Energy Technology Data Exchange (ETDEWEB)

    Bai, J. [Keio Univ., Tokyo (Japan). 21st Century Center of Excellence Program; Hashimoto, J.; Kubo, A. [Keio Univ., Tokyo (Japan). Dept. of Radiology; Ogawa, K. [Hosei Univ., Tokyo (Japan). Dept. of Electronic Informatics; Fukunaga, A.; Onozuka, S. [Keio Univ., Tokyo (Japan). Dept. of Neurosurgery

    2007-07-01

The aim of this study was to evaluate the effect of scatter and attenuation correction in region of interest (ROI) analysis of brain perfusion single-photon emission tomography (SPECT), and to assess the influence of selecting the reference area on the calculation of lesion-to-reference count ratios. Patients, methods: Data were collected from a brain phantom and ten patients with unilateral internal carotid artery stenosis. A simultaneous emission and transmission scan was performed after injecting 123I-iodoamphetamine. We reconstructed three SPECT images from common projection data: with scatter correction and nonuniform attenuation correction, with scatter correction and uniform attenuation correction, and with uniform attenuation correction applied to data without scatter correction. Regional count ratios were calculated by using four different reference areas (contralateral intact side, ipsilateral cerebellum, whole brain and hemisphere). Results: Scatter correction improved the accuracy of measuring the count ratios in the phantom experiment. It also yielded a marked difference in the count ratio in the clinical study when using the cerebellum, whole brain or hemisphere as the reference. The difference between nonuniform and uniform attenuation correction was not significant in the phantom and clinical studies except when the cerebellar reference was used. Calculation of the lesion-to-normal count ratios referring to the same site in the contralateral hemisphere was not dependent on the use of scatter correction or transmission scan-based attenuation correction. Conclusion: Scatter correction was indispensable for accurate measurement in most of the ROI analyses. Nonuniform attenuation correction is not necessary when using a reference area other than the cerebellum. (orig.)

  11. Surface magnetic canting in a nonuniform film

    International Nuclear Information System (INIS)

    Pini, M.G.; Rettori, A.; Pappas, D.P.; Anisimov, A.V.; Popov, A.P.

    2004-01-01

    The zero-temperature equilibrium configuration of a nonuniform system made of a ferromagnetic (FM) monolayer on top of a semi-infinite FM film is calculated using a nonlinear mapping formulation of mean-field theory, where the surface is taken into account via an appropriate boundary condition. The analytical criterion for the existence of surface magnetic canting, previously obtained by Popov and Pappas, is also recovered.

  12. On natural frequencies of non-uniform beams modulated by finite periodic cells

    International Nuclear Information System (INIS)

    Xu, Yanlong; Zhou, Xiaoling; Wang, Wei; Wang, Longqi; Peng, Fujun; Li, Bin

    2016-01-01

    It is well known that an infinite periodic beam can support flexural wave band gaps. However, in real applications, the number of periodic cells is always limited. If a uniform beam is replaced by a non-uniform beam with finite periodicity, the resulting changes in vibration behavior are significant but not well understood. This paper employs the transfer matrix method (TMM) to study the natural frequencies of non-uniform beams modulated by finite periodic cells. The effects of the number, cross-section ratios, and arrangement of the periodic cells on the natural frequencies are explored. The relationship between the natural frequencies of the non-uniform beams with finite periodicity and the band gap boundaries of the corresponding infinite periodic beam is also investigated. Numerical results and conclusions obtained here are useful for designing beams with good vibration control ability. - Highlights: • The transfer matrix method for the natural frequencies of finite periodic non-uniform beams is derived. • The transfer matrix method for the band gaps of infinite periodic non-uniform beams is derived. • The effects of the periodic cells on the natural frequencies are explored. • The relationships between the natural frequencies and band gap boundaries are investigated.

  13. On natural frequencies of non-uniform beams modulated by finite periodic cells

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Yanlong, E-mail: xuyanlong@nwpu.edu.cn [School of Aeronautics, Northwestern Polytechnical University, Xi' an 710072, Shaanxi (China); Zhou, Xiaoling [Shanghai Institute of Aerospace System Engineering, Shanghai 201109 (China); Wang, Wei [School of Aeronautics, Northwestern Polytechnical University, Xi' an 710072, Shaanxi (China); Wang, Longqi [School of Civil & Environmental Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798 (Singapore); Peng, Fujun [Shanghai Institute of Aerospace System Engineering, Shanghai 201109 (China); Li, Bin [School of Aeronautics, Northwestern Polytechnical University, Xi' an 710072, Shaanxi (China)

    2016-09-23

    It is well known that an infinite periodic beam can support flexural wave band gaps. However, in real applications, the number of periodic cells is always limited. If a uniform beam is replaced by a non-uniform beam with finite periodicity, the resulting changes in vibration behavior are significant but not well understood. This paper employs the transfer matrix method (TMM) to study the natural frequencies of non-uniform beams modulated by finite periodic cells. The effects of the number, cross-section ratios, and arrangement of the periodic cells on the natural frequencies are explored. The relationship between the natural frequencies of the non-uniform beams with finite periodicity and the band gap boundaries of the corresponding infinite periodic beam is also investigated. Numerical results and conclusions obtained here are useful for designing beams with good vibration control ability. - Highlights: • The transfer matrix method for the natural frequencies of finite periodic non-uniform beams is derived. • The transfer matrix method for the band gaps of infinite periodic non-uniform beams is derived. • The effects of the periodic cells on the natural frequencies are explored. • The relationships between the natural frequencies and band gap boundaries are investigated.
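
    The following is a minimal sketch of the general transfer-matrix technique for a piecewise-uniform Euler-Bernoulli cantilever, using Krylov-function segment matrices and hypothetical material values; it illustrates how natural frequencies are located as sign changes of the boundary determinant, and is not the authors' exact formulation.

        import numpy as np

        def segment_matrix(beta, l, EI):
            """Field transfer matrix of a uniform Euler-Bernoulli segment.

            State vector: [w, w', M, V] with M = EI*w'', V = EI*w'''.
            """
            u = beta * l
            A = (np.cosh(u) + np.cos(u)) / 2      # Krylov functions
            B = (np.sinh(u) + np.sin(u)) / 2
            C = (np.cosh(u) - np.cos(u)) / 2
            D = (np.sinh(u) - np.sin(u)) / 2
            return np.array([
                [A,                B / beta,         C / (EI * beta**2), D / (EI * beta**3)],
                [beta * D,         A,                B / (EI * beta),    C / (EI * beta**2)],
                [EI * beta**2 * C, EI * beta * D,    A,                  B / beta],
                [EI * beta**3 * B, EI * beta**2 * C, beta * D,           A],
            ])

        def cantilever_residual(omega, segments):
            """det of the free-end boundary submatrix; zero at a natural frequency."""
            T = np.eye(4)
            for (EI, rhoA, l) in segments:        # segments ordered from the clamped end
                beta = (rhoA * omega**2 / EI) ** 0.25
                T = segment_matrix(beta, l, EI) @ T
            # clamped at x=0 (w = w' = 0), free at x=L (M = V = 0)
            return np.linalg.det(T[2:4, 2:4])

        # hypothetical two-material periodic cell, repeated four times
        cell = [(2.0e4, 7.8, 0.1), (0.5e4, 3.9, 0.1)]   # (EI [N m^2], rho*A [kg/m], l [m])
        segments = cell * 4

        ws = np.linspace(1.0, 5000.0, 20000)
        r = [cantilever_residual(w, segments) for w in ws]
        roots = [0.5 * (a + b) for a, b, ra, rb in zip(ws, ws[1:], r, r[1:]) if ra * rb < 0]
        print("first natural frequencies [rad/s]:", roots[:4])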

  14. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples

    Science.gov (United States)

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-01

    Calibration transfer is essential for practical applications of near-infrared (NIR) spectroscopy because the spectra may be measured on different instruments and the difference between the instruments must be corrected. Most calibration transfer methods require standard samples to construct the transfer model from the spectra of the same samples measured on two instruments, referred to as the master and slave instruments. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. As a consequence, the coefficients of the linear models constructed from the spectra measured on different instruments are similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary, the method may be more useful in practical applications.
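
    A minimal sketch of the idea, assuming the transfer can be posed as a penalized least-squares problem (the paper's exact constrained optimization may differ): keep the slave coefficients close in profile to the master coefficients while fitting a few spectra measured on the slave instrument.

        import numpy as np

        def transfer_coefficients(b_master, X_slave, y_slave, lam=1.0):
            """Estimate slave-model coefficients from a few slave spectra.

            Minimizes ||X_s b - y_s||^2 + lam * ||b - b_m||^2, exploiting the
            assumption that master and slave coefficients are similar in profile.
            """
            n = b_master.size
            A = np.vstack([X_slave, np.sqrt(lam) * np.eye(n)])
            rhs = np.concatenate([y_slave, np.sqrt(lam) * b_master])
            b_slave, *_ = np.linalg.lstsq(A, rhs, rcond=None)
            return b_slave

        # usage: b_s = transfer_coefficients(b_m, X_slave_few, y_slave_few, lam=0.1)

    Larger lam pulls the solution toward the master profile; smaller lam trusts the few slave measurements more.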

  15. Evaluation of metal artifacts in MVCT systems using a model based correction method

    Energy Technology Data Exchange (ETDEWEB)

    Paudel, M. R.; Mackenzie, M.; Fallone, B. G.; Rathee, S. [Department of Medical Physics, Cross Cancer Institute, 11560 University Avenue, Edmonton, Alberta T6G 1Z2 (Canada); Department of Oncology, Medical Physics Division, University of Alberta, 11560 University Avenue, Edmonton, Alberta T6G 1Z2 (Canada); Department of Physics, University of Alberta, 11322-89 Avenue, Edmonton, Alberta T6G 2G7 (Canada)]

    2012-10-15

    Purpose: To evaluate the performance of a model-based image reconstruction method in reducing metal artifacts in megavoltage computed tomography (MVCT) images of a phantom representing bilateral hip prostheses, and to compare it with the filtered-backprojection (FBP) technique. Methods: An iterative maximum-likelihood polychromatic algorithm for CT (IMPACT) is used with an additional model for the pair/triplet-production process and the energy-dependent response of the detectors. The beam spectra for an in-house bench-top MVCT and the TomoTherapy™ MVCT are modeled for use in IMPACT. The empirical energy-dependent response of the detectors is calculated using a constrained optimization technique that predicts the measured attenuation of the beam by various thicknesses (0-24 cm) of solid water slabs. A cylindrical (19.1 cm diameter) plexiglass phantom containing various cylindrical inserts of relative electron densities 0.295-1.695, positioned between two steel rods (2.7 cm diameter), is scanned in the bench-top MVCT, which utilizes the bremsstrahlung radiation from a 6 MeV electron beam passed through 4 cm of solid water on the Varian Clinac 2300C, and in the imaging beam of the TomoTherapy™ MVCT. The FBP technique in the bench-top MVCT reconstructs images from the raw signal normalized to an air scan and corrected for beam hardening using a uniform plexiglass cylinder (20 cm diameter). IMPACT starts with an FBP-reconstructed seed image and reconstructs the final image in 150 iterations. Results: In both MVCTs, FBP produces visible dark shading in the image connecting the steel rods. In the IMPACT-reconstructed images this shading is nearly removed and the uniform background is restored. The average attenuation coefficients of the inserts and the background are very close to the corresponding values in the absence of the steel inserts. In the FBP images of the bench-top MVCT, the shading causes 4%-9.5% underestimation of electron density at the central inserts

  16. Examining the departure in response of non-point detectors due to non-uniform illumination and displacement of effective center

    Energy Technology Data Exchange (ETDEWEB)

    Khabaz, Rahim, E-mail: r.khabaz@gu.ac.ir

    2013-11-11

    A mathematical simulation approach based on the general-purpose Monte Carlo N-particle transport code MCNP was developed to calculate the departure of the neutron spectrometer reading from that expected according to the inverse square law. The calculations were performed to evaluate the effects of beam divergence on the response of a 10 in. spherical device equipped with a long BF{sub 3} counter irradiated by 11 mono-energetic neutron beams. The geometry correction factor required because of non-uniform illumination for the calibration of seven polyethylene spheres with several radionuclide neutron sources, i.e. Ra–Be, {sup 241}Am–Be, {sup 241}Am–B and Po–Be, was also determined. In all calculations, the displacement of the effective center from the geometric center of the moderating spheres, when used as an instrument for neutron fluence measurement, was quantified. -- Highlights: • The commonly applied method for measuring the energy spectrum of neutron fields is the BSS. • One of the problems of the BSS is the geometry correction factor. • This factor is related to the non-uniform illumination of the spectrometer. • At short distances, readings depart seriously from the inverse square law. • This study evaluates a Monte Carlo method to calculate this factor and related parameters.
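
    As an illustration of quantifying the effective-center displacement, the sketch below (with hypothetical readings) fits measurements to a shifted inverse-square law k/(d+δ)²; δ plays the role of the effective-center shift, and the ratio of measured to naive inverse-square readings gives a per-distance geometry correction factor.

        import numpy as np
        from scipy.optimize import curve_fit

        def reading(d, k, delta):
            # inverse-square law with the effective center shifted by delta
            return k / (d + delta) ** 2

        # hypothetical count rates of a moderating sphere vs. source distance [cm]
        d = np.array([30., 50., 75., 100., 150., 200.])
        m = np.array([98.0, 37.5, 17.2, 9.9, 4.5, 2.6])

        (k, delta), _ = curve_fit(reading, d, m, p0=(1e5, 0.0))
        geom = k / d**2              # reading predicted by the naive law
        corr = m / geom              # geometry correction factor per distance
        print(f"effective-center shift = {delta:.1f} cm")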

  17. Examining the departure in response of non-point detectors due to non-uniform illumination and displacement of effective center

    International Nuclear Information System (INIS)

    Khabaz, Rahim

    2013-01-01

    A mathematical simulation approach based on the general-purpose Monte Carlo N-particle transport code MCNP was developed to calculate the departure of the neutron spectrometer reading from that expected according to the inverse square law. The calculations were performed to evaluate the effects of beam divergence on the response of a 10 in. spherical device equipped with a long BF 3 counter irradiated by 11 mono-energetic neutron beams. The geometry correction factor required because of non-uniform illumination for the calibration of seven polyethylene spheres with several radionuclide neutron sources, i.e. Ra–Be, 241 Am–Be, 241 Am–B and Po–Be, was also determined. In all calculations, the displacement of the effective center from the geometric center of the moderating spheres, when used as an instrument for neutron fluence measurement, was quantified. -- Highlights: • The commonly applied method for measuring the energy spectrum of neutron fields is the BSS. • One of the problems of the BSS is the geometry correction factor. • This factor is related to the non-uniform illumination of the spectrometer. • At short distances, readings depart seriously from the inverse square law. • This study evaluates a Monte Carlo method to calculate this factor and related parameters.

  18. Bias Correction Methods Explain Much of the Variation Seen in Breast Cancer Risks of BRCA1/2 Mutation Carriers.

    Science.gov (United States)

    Vos, Janet R; Hsu, Li; Brohet, Richard M; Mourits, Marian J E; de Vries, Jakob; Malone, Kathleen E; Oosterwijk, Jan C; de Bock, Geertruida H

    2015-08-10

    Recommendations for treating patients who carry a BRCA1/2 mutation are mainly based on cumulative lifetime risks (CLTRs) of breast cancer determined from retrospective cohorts. These risks vary widely (27% to 88%), and it is important to understand why. We analyzed the effects of the methods of risk estimation and bias correction and of population factors on CLTRs in this retrospective clinical cohort of BRCA1/2 carriers. The following methods to estimate the breast cancer risk of BRCA1/2 carriers were identified from the literature: Kaplan-Meier, frailty, and modified segregation analyses, with bias correction consisting of including or excluding index patients combined with including or excluding first-degree relatives (FDRs), or different conditional likelihoods. These were applied to clinical data of BRCA1/2 families derived from our family cancer clinic, for whom a simulation was also performed to evaluate the methods. CLTRs and 95% CIs were estimated and compared with the reference CLTRs. CLTRs ranged from 35% to 83% for BRCA1 and 41% to 86% for BRCA2 carriers at age 70 years (width of 95% CIs: 10% to 35% and 13% to 46%, respectively). Relative bias varied from -38% to +16%. Bias correction with inclusion of index patients and untested FDRs gave the smallest bias: +2% (SD, 2%) in BRCA1 and +0.9% (SD, 3.6%) in BRCA2. Much of the variation in breast cancer CLTRs in retrospective clinical BRCA1/2 cohorts is due to the bias-correction method, whereas a smaller part is due to population differences. Kaplan-Meier analyses with bias correction that includes index patients and a proportion of untested FDRs provide suitable CLTRs for carriers counseled in the clinic. © 2015 by American Society of Clinical Oncology.

  19. Study protocol: the empirical investigation of methods to correct for measurement error in biobanks with dietary assessment

    Directory of Open Access Journals (Sweden)

    Masson Lindsey F

    2011-10-01

    Abstract. Background: The Public Population Project in Genomics (P3G) is an organisation that aims to promote collaboration between researchers in the field of population-based genomics. The main objectives of P3G are to encourage collaboration between researchers and biobankers, optimize study design, promote the harmonization of information use in biobanks, and facilitate transfer of knowledge between interested parties. The importance of calibration and harmonisation of methods for environmental exposure assessment, to allow pooling of data across studies in the evaluation of gene-environment interactions, has been recognised by P3G, which has set up a methodological group on calibration with the aims of: (1) reviewing the published methodological literature on measurement error correction methods, their assumptions and methods of implementation; (2) reviewing the evidence available from published nutritional epidemiological studies that have used a calibration approach; (3) disseminating information, in the form of a comparison chart, on approaches to perform calibration studies and how to obtain correction factors, in order to support research groups collaborating within the P3G network that are unfamiliar with the methods employed; and (4) with application to the field of nutritional epidemiology, including gene-diet interactions, ultimately developing an inventory of the typical correction factors for various nutrients. Methods/Design: Systematic review of (a) the methodological literature on methods to correct for measurement error in epidemiological studies; and (b) studies that have been designed primarily to investigate the association between diet and disease and have also corrected for measurement error in dietary intake. Discussion: The conduct of a systematic review of the methodological literature on calibration will facilitate the evaluation of methods to correct for measurement error and the design of calibration studies for the prospective pooling of

  20. INTESTINAL DYSBIOSIS IN CHILDREN WITH FOOD ALLERGY: PATHOGENETIC ASPECTS AND MODERN CORRECTION METHODS

    Directory of Open Access Journals (Sweden)

    S.G. Makarova

    2008-01-01

    Background: This paper analyses the role of intestinal microflora in the formation of immunity, the importance of intestinal microflora abnormalities during the development of allergic diseases (primarily food allergies), as well as the mechanisms by which dysbiosis affects allergic processes in a child's body. The study discusses the mechanisms of the treatment and prevention effects of probiotics in child allergic diseases. The work also specifies modern approaches to correcting dysbiotic abnormalities in children with food allergies, reviews the options for diet and medication treatment of food allergy, and suggests a new algorithm of stepwise treatment targeting correction of dysbiosis in this patient category. Key words: children, food allergy, dysbiosis, probiotics, prebiotics, diet therapy.

  1. Qualitative evaluation of Chang method of attenuation correction on heart SPECT by using custom made heart phantom

    International Nuclear Information System (INIS)

    Takavar, A.; Eftekhari, M.; Beiki, D.; Saghari, M.; Mostaghim, N.; Sohrabi, M.

    2003-01-01

    SPECT detects γ-rays emitted by an administered radiopharmaceutical within the patient's body. The γ-rays pass through different tissues before reaching the detectors and are attenuated. Attenuation can cause artifacts; therefore, different correction methods are used to minimize attenuation effects. In our study, the efficacy of the Chang method of attenuation correction was evaluated using a custom-made heart phantom. Because of the different tissues surrounding the heart, attenuation is not uniform; moreover, the activity distribution around the heart is also non-uniform. In the Chang method, the distribution of radioactivity and the attenuation due to the surrounding tissue are assumed to be uniform. Our phantom is a piece of plastic producing a SPECT image similar to that of the left ventricle. A dual-head ADAC system was used in our study. Images were acquired over 180° (limited-angle) and 360° (full-rotation) orbits and compared with and without attenuation correction. Our results indicate that the Chang attenuation correction method is not capable of eliminating attenuation artifacts completely, in particular attenuation effects caused by the breast.
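
    For reference, a minimal sketch of first-order Chang attenuation correction on a uniform circular attenuator (the μ value and geometry below are hypothetical): each pixel's correction factor is the reciprocal of its attenuation factor averaged over projection angles, which is exactly the uniform-μ assumption the study calls into question for the thorax.

        import numpy as np

        def chang_correction_map(shape, radius, mu, n_angles=64):
            """First-order Chang correction factors for a uniform circular attenuator.

            For each pixel inside the circle, average exp(-mu * path length to the
            boundary) over projection angles; the correction is the reciprocal.
            """
            ny, nx = shape
            y, x = np.mgrid[0:ny, 0:nx]
            x = x - nx / 2 + 0.5
            y = y - ny / 2 + 0.5
            angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
            acc = np.zeros(shape)
            for th in angles:
                # distance from (x, y) to the circular boundary along direction th
                proj = x * np.cos(th) + y * np.sin(th)
                perp2 = x**2 + y**2 - proj**2
                L = np.sqrt(np.maximum(radius**2 - perp2, 0.0)) - proj
                acc += np.exp(-mu * np.maximum(L, 0.0))
            mean_att = acc / n_angles
            inside = x**2 + y**2 <= radius**2
            corr = np.ones(shape)
            corr[inside] = 1.0 / mean_att[inside]
            return corr

        # mu expressed per pixel (e.g. 0.15 cm^-1 scaled by the pixel size)
        corr = chang_correction_map((128, 128), radius=50.0, mu=0.02)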

  2. Current estimate of functional vision in patients with bifocal pseudophakia after correction of residual defocus by different methods

    Directory of Open Access Journals (Sweden)

    Yuri V Takhtaev

    2016-03-01

    Full Text Available In this article we evaluated the influence of different surgical methods for correction of residual ametropia on contrast sensitivity at different light conditions and high-order aberrations in patients with bifocal pseudophakia. The study included 45 eyes (30 people after cataract surgery, which studied dependence between contrast sensitivity and aberrations level before and after surgical correction of residual ametropia by of three methods - LASIK, Sulcoflex IOL implantation or IOL exchange. Contrast sensitivity was measured by Optec 6500 and aberration using Pentacam «OCULUS». We processed the results using the Mann-Whitney U-test. This study shows correlation between each method and residual aberrations level and their influence on contrast sensitivity level.

  3. Overview of Akatsuki data products: definition of data levels, method and accuracy of geometric correction

    Science.gov (United States)

    Ogohara, Kazunori; Takagi, Masahiro; Murakami, Shin-ya; Horinouchi, Takeshi; Yamada, Manabu; Kouyama, Toru; Hashimoto, George L.; Imamura, Takeshi; Yamamoto, Yukio; Kashimura, Hiroki; Hirata, Naru; Sato, Naoki; Yamazaki, Atsushi; Satoh, Takehiko; Iwagami, Naomoto; Taguchi, Makoto; Watanabe, Shigeto; Sato, Takao M.; Ohtsuki, Shoko; Fukuhara, Tetsuya; Futaguchi, Masahiko; Sakanoi, Takeshi; Kameda, Shingo; Sugiyama, Ko-ichiro; Ando, Hiroki; Lee, Yeon Joo; Nakamura, Masato; Suzuki, Makoto; Hirose, Chikako; Ishii, Nobuaki; Abe, Takumi

    2017-12-01

    We provide an overview of data products from observations by the Japanese Venus Climate Orbiter, Akatsuki, and describe the definition and content of each data-processing level. Levels 1 and 2 consist of non-calibrated and calibrated radiance (or brightness temperature), respectively, as well as geometry information (e.g., illumination angles). Level 3 data are global-grid data in the regular longitude-latitude coordinate system, produced from the contents of Level 2. Non-negligible errors in navigational data and instrumental alignment can result in serious errors in the geometry calculations. Such errors cause mismapping of the data and lead to inconsistencies between radiances and illumination angles, along with errors in cloud-motion vectors. Thus, we carefully correct the boresight pointing of each camera by fitting an ellipse to the observed Venusian limb to provide improved longitude-latitude maps for Level 3 products, if possible. The accuracy of the pointing correction is also estimated statistically by simulating observed limb distributions. The results show that our algorithm successfully corrects instrumental pointing and will enable a variety of studies on the Venusian atmosphere using Akatsuki data.
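
    A minimal sketch of the limb-fitting idea, assuming limb points have already been detected: an algebraic least-squares conic fit whose center gives the pointing (boresight) offset. The limb geometry and the offset of (3.2, -1.7) pixels below are hypothetical, and this is an illustration of the technique rather than the Akatsuki pipeline itself.

        import numpy as np

        def fit_ellipse_center(x, y):
            """Algebraic least-squares conic fit; returns the ellipse center.

            Fits a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 to limb points (x, y).
            """
            A = np.column_stack([x**2, x * y, y**2, x, y])
            coef, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
            a, b, c, d, e = coef
            # center solves the gradient equations 2a*x + b*y + d = 0, b*x + 2c*y + e = 0
            M = np.array([[2 * a, b], [b, 2 * c]])
            return np.linalg.solve(M, np.array([-d, -e]))

        # hypothetical limb points with a pointing offset of (3.2, -1.7) pixels
        t = np.linspace(0, np.pi, 80)           # only the sunlit half-limb is visible
        x = 3.2 + 210 * np.cos(t)
        y = -1.7 + 195 * np.sin(t)
        print(fit_ellipse_center(x, y))         # -> approx. [3.2, -1.7]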

  4. Self-consistent EXAFS PDF Projection Method by Matched Correction of Fourier Filter Signal Distortion

    International Nuclear Information System (INIS)

    Lee, Jay Min; Yang, Dong-Seok

    2007-01-01

    An inverse-problem computation was performed to obtain the PDF (pair distribution function) from EXAFS data simulated with FEFF. For a realistic comparison with experimental data, we chose a model of the first sub-shell Mn-O pair showing the Jahn-Teller distortion in crystalline LaMnO3. To restore the Fourier-filtering signal distortion involved in isolating the first sub-shell information from higher-shell contents, the relevant distortion-matching function was computed initially from the proximity model, and iteratively from the prior guess during consecutive regularization computations. Adaptive computation of the EXAFS background correction remains an open algorithm-development issue, but our preliminary test was performed with a simulated background correction that perfectly excludes higher-shell interference. In our numerical results, the efficient convergence of the iterative solution indicates a self-consistent tendency: a true PDF solution is confirmed as the counterpart of the genuine chi-data, provided that the background correction function is iteratively solved using an extended algorithm of MEPP (Matched EXAFS PDF Projection) under development

  5. Measurement correction method for force sensor used in dynamic pressure calibration based on artificial neural network optimized by genetic algorithm

    Science.gov (United States)

    Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing

    2017-12-01

    We present an optimization algorithm to obtain low-uncertainty dynamic pressure measurements from a force-transducer-based device. In this paper, the advantages and disadvantages of the methods commonly used to measure propellant powder gas pressure, the applicable scope of dynamic pressure calibration devices, and the shortcomings of the traditional comparison calibration method based on the drop-weight device are first analysed in detail. Then, a dynamic calibration method for measuring pressure using a force sensor based on a drop-weight device is introduced. This method can effectively save time when many pressure sensors are calibrated simultaneously and extends the life of expensive reference sensors. However, the force sensor is installed between the drop-weight and the hammerhead by transition pieces through bolt fastening, which causes adverse effects such as additional pretightening and inertia forces. To address these effects, the influence mechanisms of the pretightening force, the inertia force and other factors on the force measurement are theoretically analysed. A measurement correction method for the force measurement is then proposed, based on an artificial neural network optimized by a genetic algorithm. The training and testing data sets are obtained from calibration tests, and the selection criteria for the key parameters of the correction model are discussed. The evaluation results for the test data show that the correction model can effectively improve the force measurement accuracy of the force sensor. Compared with the traditional high-accuracy comparison calibration method, the percentage difference of the impact-force-based measurement is less than 0.6% and the relative uncertainty of the corrected force value is 1.95%, which meets the requirements of engineering applications.

  6. Experimental aspects of buoyancy correction in measuring reliable high-pressure excess adsorption isotherms using the gravimetric method

    Science.gov (United States)

    Nguyen, Huong Giang T.; Horn, Jarod C.; Thommes, Matthias; van Zee, Roger D.; Espinal, Laura

    2017-12-01

    Addressing reproducibility issues in adsorption measurements is critical to accelerating the path to discovery of new industrial adsorbents and to understanding adsorption processes. A National Institute of Standards and Technology Reference Material, RM 8852 (ammonium ZSM-5 zeolite), and two gravimetric instruments with asymmetric two-beam balances were used to measure high-pressure adsorption isotherms. This work demonstrates how common approaches to buoyancy correction, a key factor in obtaining the mass change due to surface excess gas uptake from the apparent mass change, can impact the adsorption isotherm data. Three different approaches to buoyancy correction were investigated and applied to the subcritical CO2 and supercritical N2 adsorption isotherms at 293 K. It was observed that measuring a collective volume for all balance components for the buoyancy correction (helium method) introduces an inherent bias in temperature partition when there is a temperature gradient (i.e. analysis temperature is not equal to instrument air bath temperature). We demonstrate that a blank subtraction is effective in mitigating the biases associated with temperature partitioning, instrument calibration, and the determined volumes of the balance components. In general, the manual and subtraction methods allow for better treatment of the temperature gradient during buoyancy correction. From the study, best practices specific to asymmetric two-beam balances and more general recommendations for measuring isotherms far from critical temperatures using gravimetric instruments are offered.
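
    The underlying arithmetic of buoyancy correction with blank subtraction can be sketched as follows; this is a simplification (a real asymmetric two-beam instrument partitions volumes between temperature zones, which is exactly the bias the abstract discusses), with all parameter names chosen here for illustration.

        def excess_uptake(dm_apparent, rho_gas, v_sample, dm_blank=0.0):
            """Buoyancy-corrected surface excess uptake (grams), simplified sketch.

            dm_apparent : apparent mass change of the loaded balance at pressure p (g)
            dm_blank    : apparent mass change of the empty (blank) run at the same p;
                          subtracting it cancels holder/counterweight buoyancy terms
            rho_gas     : gas density at the analysis temperature and pressure (g/cm^3)
            v_sample    : skeletal volume of the sample itself (cm^3)
            """
            return (dm_apparent - dm_blank) + rho_gas * v_sample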

  7. Additive non-uniform random sampling in superimposed fiber Bragg grating strain gauge

    International Nuclear Information System (INIS)

    Ma, Y C; Liu, H Y; Yan, S B; Li, J M; Tang, J; Yang, Y H; Yang, M W

    2013-01-01

    This paper demonstrates an additive non-uniform random sampling and interrogation method for dynamic and/or static strain gauge using a reflection spectrum from two superimposed fiber Bragg gratings (FBGs). The superimposed FBGs are designed to generate non-equidistant space of a sensing pulse train in the time domain during dynamic strain gauge. By combining centroid finding with smooth filtering methods, both the interrogation speed and accuracy are improved. A 1.9 kHz dynamic strain is measured by generating an additive non-uniform randomly distributed 2 kHz optical sensing pulse train from a mean 500 Hz triangular periodically changing scanning frequency. (paper)

  8. Additive non-uniform random sampling in superimposed fiber Bragg grating strain gauge

    Science.gov (United States)

    Ma, Y. C.; Liu, H. Y.; Yan, S. B.; Yang, Y. H.; Yang, M. W.; Li, J. M.; Tang, J.

    2013-05-01

    This paper demonstrates an additive non-uniform random sampling and interrogation method for dynamic and/or static strain gauge using a reflection spectrum from two superimposed fiber Bragg gratings (FBGs). The superimposed FBGs are designed to generate non-equidistant space of a sensing pulse train in the time domain during dynamic strain gauge. By combining centroid finding with smooth filtering methods, both the interrogation speed and accuracy are improved. A 1.9 kHz dynamic strain is measured by generating an additive non-uniform randomly distributed 2 kHz optical sensing pulse train from a mean 500 Hz triangular periodically changing scanning frequency.

  9. Identification of the material properties in nonuniform nanostructures

    International Nuclear Information System (INIS)

    Bao, Gang; Xu, Xiang

    2015-01-01

    This paper is concerned with addressing two significant challenges arising from quantifying mechanical properties of nanomaterials, namely nonuniformity of the nanomaterial and the high noise level of measurements. For nonuniformity, an explicit solution is derived for the general Euler–Bernoulli equation in terms of the Green function for the Poisson equation. Then, by examining a stochastic source, the systematic error may be removed from measurements, which leads to more accurate estimation of mechanical properties. Based on Itô integral properties, three deterministic Fredholm integral equations can be deduced to extract the stiffness and the structure of the random source from measured data. To overcome ill-posedness and high nonlinearity in solving the Fredholm equations, a Tikhonov regularization method is developed with an a priori strategy of choosing the regularization parameter. Moreover, under a regularity assumption for the stiffness coefficient and structures of the random source, the convergence rate can be obtained in the sense of probability. Numerical examples are presented to illustrate the validity and effectiveness of the novel model and regularization method. (paper)
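
    A generic sketch of the Tikhonov step on a discretized Fredholm equation of the first kind, with a toy smoothing kernel and noisy data; the paper's a priori strategy for choosing the regularization parameter is not reproduced here.

        import numpy as np

        def tikhonov(K, b, lam):
            """Solve the discretized Fredholm equation K x = b with Tikhonov damping."""
            n = K.shape[1]
            A = np.vstack([K, lam * np.eye(n)])
            rhs = np.concatenate([b, np.zeros(n)])
            x, *_ = np.linalg.lstsq(A, rhs, rcond=None)
            return x

        # ill-posed toy problem: smoothing kernel, noisy data
        n = 200
        s = np.linspace(0, 1, n)
        K = np.exp(-((s[:, None] - s[None, :]) ** 2) / 0.002) / n   # smoothing kernel
        x_true = np.sin(2 * np.pi * s) ** 2
        b = K @ x_true + 1e-4 * np.random.default_rng(0).standard_normal(n)
        x_rec = tikhonov(K, b, lam=1e-3)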

  10. Experimental study on the location of energy windows for scatter correction by the TEW method in 201Tl imaging

    International Nuclear Information System (INIS)

    Kojima, Akihiro; Matsumoto, Masanori; Ohyama, Yoichi; Tomiguchi, Seiji; Kira, Mitsuko; Takahashi, Mutsumasa.

    1997-01-01

    To investigate the validity of scatter correction by the TEW method in 201 Tl imaging, we performed an experimental study using a gamma camera with the capability to perform the TEW method and a plate source with a defect. Images were acquired with the triple energy window recommended by the gamma camera manufacturer. The energy spectra showed that backscattered photons were included within the lower sub-energy window and the main energy window, and that the spectral shapes in the upper half of the photopeak region (70 keV) were not changed greatly by the source shape or the thickness of the scattering material. The scatter fractions calculated from the energy spectra, together with visual observation and the contrast values measured at the defect on planar images, also showed that substantial primary photons were included in the upper sub-energy window. In the TEW method for scatter correction, the two sub-energy windows are expected to be defined over the part of the energy region in which the total counts consist mainly of scattered photons. Therefore, it is necessary to investigate the use of the upper sub-energy window in scatter correction by the TEW method in 201 Tl imaging. (author)
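
    For reference, the standard TEW estimate subtracts a trapezoidal scatter estimate built from the two narrow sub-windows from the main-window counts; a pixelwise sketch (window widths in keV are inputs) follows. The study's point is precisely that, for 201 Tl, the upper sub-window may contain primary photons and so bias this estimate.

        import numpy as np

        def tew_primary(main, lower, upper, w_main, w_lower, w_upper):
            """Triple-energy-window scatter correction (pixelwise).

            Scatter in the main window is approximated by the trapezoid spanned by
            the two sub-windows; the result is clipped to keep counts non-negative.
            """
            scatter = (lower / w_lower + upper / w_upper) * w_main / 2.0
            return np.clip(main - scatter, 0.0, None)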

  11. A novel baseline correction method using convex optimization framework in laser-induced breakdown spectroscopy quantitative analysis

    Science.gov (United States)

    Yi, Cancan; Lv, Yong; Xiao, Han; Ke, Ke; Yu, Xun

    2017-12-01

    For the laser-induced breakdown spectroscopy (LIBS) quantitative analysis technique, baseline correction is an essential part of LIBS data preprocessing. Baseline drift, a widely encountered phenomenon generated by fluctuations of the laser energy, inhomogeneity of sample surfaces and background noise, has aroused the interest of many researchers. Most prevalent algorithms need some key parameters to be preset, such as a suitable spline function or the fitting order, and thus lack adaptability. Based on the characteristics of LIBS spectra, namely the sparsity of spectral peaks and the low-pass-filtered nature of the baseline, a novel baseline correction and spectral data denoising method is studied in this paper. The technique utilizes a convex optimization scheme to form a non-parametric baseline correction model. Meanwhile, an asymmetric penalty function is employed to enhance the signal-to-noise ratio (SNR) of the LIBS signal and improve reconstruction precision. Furthermore, an efficient iterative algorithm is applied to the optimization process, so as to ensure convergence. To validate the proposed method, the concentrations of chromium (Cr), manganese (Mn) and nickel (Ni) in 23 certified high-alloy steel samples are assessed using quantitative models with Partial Least Squares (PLS) and Support Vector Machine (SVM). Because no prior knowledge of sample composition or mathematical hypothesis is required, the proposed method achieves better accuracy in quantitative analysis than comparable methods and fully reflects its adaptive ability.
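
    The model described is closely related to asymmetric least-squares baseline estimation; the classic Eilers-style iteration below is a sketch of that family of methods, not the authors' exact convex program.

        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import spsolve

        def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
            """Asymmetric least-squares baseline (Eilers-style sketch).

            Minimizes sum(w_i * (y_i - z_i)^2) + lam * sum((D2 z)^2) with asymmetric
            weights: points above the baseline (peaks) get weight p, points below 1-p.
            """
            n = y.size
            D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
            w = np.ones(n)
            for _ in range(n_iter):
                W = sparse.diags(w)
                z = spsolve((W + lam * D.T @ D).tocsc(), w * y)
                w = np.where(y > z, p, 1.0 - p)   # asymmetric penalty weights
            return z

    Larger lam gives a stiffer baseline; p close to 0 treats almost everything above the baseline as peak signal.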

  12. Vibration of nonuniform carbon nanotube with attached mass via nonlocal Timoshenko beam theory

    International Nuclear Information System (INIS)

    Tang, Hai Li; Shen, Zhi Bin; Li, Dao Kui

    2014-01-01

    This paper studies the vibrational behavior of a nonuniform single-walled carbon nanotube (SWCNT) carrying a nanoparticle. A nonuniform cantilever beam with a concentrated mass at the free end is analyzed according to nonlocal Timoshenko beam theory, and a governing equation for the nonuniform SWCNT with attached mass is established. The transfer function method, in combination with the perturbation method, is utilized to obtain the resonant frequencies of the vibrating nonlocal cantilever-mass system. The effects of the nonlocal parameter, taper ratio and attached mass on the natural frequencies and frequency shifts are discussed. The obtained results indicate that the sensitivity of the frequency shifts to the attached mass increases when the length-to-diameter ratio decreases. A tapered SWCNT possesses higher fundamental frequencies if the taper ratio becomes larger.

  13. The usefulness and the problems of attenuation correction using simultaneous transmission and emission data acquisition method. Studies on normal volunteers and phantom

    International Nuclear Information System (INIS)

    Kijima, Tetsuji; Kumita, Shin-ichiro; Mizumura, Sunao; Cho, Keiichi; Ishihara, Makiko; Toba, Masahiro; Kumazaki, Tatsuo; Takahashi, Munehiro.

    1997-01-01

    Attenuation correction using a simultaneous transmission data (TCT) and emission data (ECT) acquisition method was applied to 201 Tl myocardial SPECT in ten normal adults and a phantom in order to validate the efficacy of attenuation correction using this method. The normal adult studies demonstrated improved 201 Tl accumulation in the septal wall and the posterior wall of the left ventricle and relatively decreased activities in the lateral wall with attenuation correction. High 201 Tl uptake organs such as the liver and the stomach pushed up the activities in the septal wall and the posterior wall. Cardiac dynamic phantom studies showed that the partial volume effect due to cardiac motion contributed to under-correction at the apex, which might be overcome using gated SPECT. Although simultaneous TCT and ECT acquisition was considered an advantageous method for attenuation correction, mis-correction of specific myocardial segments should be taken into account in the assessment of attenuation-corrected images. (author)

  14. Analysis of the Elastic Large Deflection Behavior for Metal Plates under Nonuniformly Distributed Lateral Pressure with In-Plane Loads

    Directory of Open Access Journals (Sweden)

    Jeom Kee Paik

    2012-01-01

    The Galerkin method is applied to analyze the elastic large-deflection behavior of metal plates subject to a combination of in-plane loads, such as biaxial loads, edge shear and biaxial in-plane bending moments, and uniformly or non-uniformly distributed lateral pressure loads. The present study was motivated by the fact that the metal plates of ships and ship-shaped offshore structures at sea are often subjected to non-uniformly distributed lateral pressure loads arising from cargo or water pressure, together with in-plane axial loads or in-plane bending moments, whereas the current practice of the maritime industry usually applies simplified design methods that assume the non-uniform pressure distribution in the plates can be replaced by an equivalent uniform pressure distribution. Applied examples are presented, demonstrating that the current plate design methods of the maritime industry may be inappropriate when the non-uniformity of the lateral pressure loads becomes significant.
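
    As a simplified illustration of the Galerkin projection (restricted to the linear small-deflection limit, not the paper's large-deflection analysis), the Navier-type solution below expands a non-uniform pressure in sine modes over a simply supported plate; all numerical values are hypothetical.

        import numpy as np

        def navier_deflection(p_func, a, b, D, n_modes=15, grid=50):
            """Small-deflection Galerkin (Navier) solution for a simply supported plate.

            Projects an arbitrary pressure p(x, y) onto sine modes, then sums
            w_mn = p_mn / (D * pi^4 * ((m/a)^2 + (n/b)^2)^2).
            """
            x = np.linspace(0, a, grid)
            y = np.linspace(0, b, grid)
            X, Y = np.meshgrid(x, y, indexing="ij")
            P = p_func(X, Y)
            w = np.zeros_like(P)
            for m in range(1, n_modes + 1):
                for n in range(1, n_modes + 1):
                    sm = np.sin(m * np.pi * X / a)
                    sn = np.sin(n * np.pi * Y / b)
                    pmn = 4.0 * np.mean(P * sm * sn)   # Galerkin projection of p(x, y)
                    w += pmn * sm * sn / (D * np.pi**4 * ((m / a) ** 2 + (n / b) ** 2) ** 2)
            return w

        # hydrostatic-type pressure increasing linearly across the plate
        w = navier_deflection(lambda X, Y: 1e4 * X / 2.0, a=2.0, b=1.0, D=2.0e4)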

  15. Initial evaluation of a practical PET respiratory motion correction method in clinical simultaneous PET/MRI

    International Nuclear Information System (INIS)

    Manber, Richard; Thielemans, Kris; Hutton, Brian; Barnes, Anna; Ourselin, Sebastien; Arridge, Simon; O’Meara, Celia; Atkinson, David

    2014-01-01

    Respiratory motion during PET acquisitions can cause image artefacts, with sharpness and tracer quantification adversely affected due to count 'smearing'. Motion correction by registration of PET gates becomes increasingly difficult with shorter scan times and lower counts. The advent of simultaneous PET/MRI scanners allows the use of high-spatial-resolution MRI to capture motion states during respiration [1, 2]. In this work, we use a respiratory signal derived from the PET list-mode data [3], with no requirement for an external device or MR sequence modifications.

  16. 4SM: A Novel Self-Calibrated Algebraic Ratio Method for Satellite-Derived Bathymetry and Water Column Correction

    Directory of Open Access Journals (Sweden)

    Yann G. Morel

    2017-07-01

    All empirical water column correction methods have consistently been reported to require existing depth sounding data for the purpose of calibrating a simple depth retrieval model, and they yield poor results over very bright or very dark bottoms. In contrast, we set out to (i) use only the relative radiance data in the image, along with published data and several new assumptions, (ii) in order to specify and operate the simplified radiative transfer equation (RTE), (iii) for the purpose of retrieving both the satellite-derived bathymetry (SDB) and the water-column-corrected spectral reflectance over shallow seabeds. Sea-truth regressions show that SDB depths retrieved by the method need only tide correction. It is therefore demonstrated that, under these new assumptions, there is no need for (i) formal atmospheric correction, (ii) conversion of relative radiance into calibrated reflectance, or (iii) existing depth sounding data, to specify the simplified RTE and produce both SDB and spectral water-column-corrected radiance ready for bottom typing. Moreover, the use of the panchromatic band for that purpose is introduced. Altogether, we named this process the Self-Calibrated Supervised Spectral Shallow-sea Modeler (4SM). This approach requires a trained practitioner, though, to produce its results within hours of downloading the raw image. The ideal raw image should be a "near-nadir" view, exhibit a homogeneous atmosphere and water column, include some coverage of optically deep waters and bare land, and lend itself to quality removal of haze, atmospheric adjacency effect, and sun/sky glint.
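
    For contrast with 4SM's self-calibrated approach, the classic empirical log-ratio retrieval (Stumpf-style) that does require sounding-based calibration can be sketched as follows; m1 and m0 are exactly the calibration constants that 4SM's assumptions are designed to avoid.

        import numpy as np

        def log_ratio_depth(blue, green, m1, m0, n=1000.0):
            """Classic Stumpf log-ratio satellite-derived bathymetry (not 4SM itself).

            blue, green : above-water radiances (or reflectances) of the two bands
            m1, m0      : slope/offset normally regressed against depth soundings
            """
            ratio = np.log(n * blue) / np.log(n * green)
            return m1 * ratio - m0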

  17. Correction of measured charged-particle spectra for energy losses in the target - A comparison of three methods

    CERN Document Server

    Soederberg, J; Alm-Carlsson, G; Olsson, N

    2002-01-01

    The experimental facility MEDLEY, at the The Svedberg Laboratory in Uppsala, has been constructed to measure neutron-induced charged-particle production cross-sections for (n, xp), (n, xd), (n, xt), (n, x3He) and (n, xα) reactions at neutron energies up to 100 MeV. Corrections for the energy loss of the charged particles in the target, as well as for the loss of particles, are needed in these measurements. Different approaches have been used in the literature to solve this problem. In this work, a stripping method is developed and compared with the methods developed by Rezentes et al. and Slypen et al. The results obtained using the three codes are similar, and all of them could be used for the correction of experimental charged-particle spectra. Statistical fluctuations in the measured spectra cause problems independent of the applied technique, but the way they are handled differs among the three codes.

  18. Monte Carlo calculation of correction factors for radionuclide neutron source emission rate measurement by manganese bath method

    International Nuclear Information System (INIS)

    Li Chunjuan; Liu Yi'na; Zhang Weihua; Wang Zhiqiang

    2014-01-01

    The manganese bath method for measuring the neutron emission rate of radionuclide sources requires corrections to be made for emitted neutrons which are not captured by manganese nuclei. The Monte Carlo particle transport code MCNP was used to simulate the manganese bath system of the standards for the measurement of neutron source intensity. The correction factors were calculated and the reliability of the model was demonstrated through the key comparison for the radionuclide neutron source emission rate measurements organized by BIPM. The uncertainties in the calculated values were evaluated by considering the sensitivities to the solution density, the density of the radioactive material, the positioning of the source, the radius of the bath, and the interaction cross-sections. A new method for the evaluation of the uncertainties in Monte Carlo calculation was given. (authors)

  19. Restoration of non-uniform exposure motion blurred image

    Science.gov (United States)

    Luo, Yuanhong; Xu, Tingfa; Wang, Ningming; Liu, Feng

    2014-11-01

    Restoring motion-blurred images is a key technology in opto-electronic detection systems. Imaging sensors such as CCDs and infrared imaging sensors mounted on moving platforms travel with the platforms at high speed, and as a result the images become blurred. This image degradation causes great trouble for subsequent tasks such as object detection, target recognition and tracking, so motion-blurred images must be restored before detecting moving targets in subsequent frames. Driven by the demands of real weapon tasks, and in order to deal with targets in complex backgrounds, this work applies recent theories from image processing and computer vision to motion deblurring and motion detection. The principal content is as follows: 1) When prior knowledge about the degradation function is unavailable, uniform motion-blurred images are restored. At first, the blur parameters, namely the extent and direction of the motion-blur PSF (point spread function), are estimated individually in the logarithmic frequency domain. The direction of the PSF is calculated by extracting the central light line of the spectrum, and the extent is computed by minimizing the correlation between the Fourier spectrum of the blurred image and a detecting function. Moreover, in order to remove striping in the deblurred image, a windowing technique is employed, which makes the deblurred image clear. 2) According to the principle of infrared image non-uniform exposure, a new restoration model for infrared blurred images is developed. The non-uniform exposure curve of the infrared image is fitted from experimental data, and the blurred images are restored using the fitted curve.
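
    A minimal sketch of restoring a uniform motion blur once its extent and direction have been estimated, using a linear-motion PSF and Wiener filtering; the constant k stands in for the noise-to-signal ratio, and all values are hypothetical (this illustrates the standard restoration step, not the paper's exact pipeline).

        import numpy as np

        def motion_psf(shape, length, angle_deg):
            """Linear motion-blur PSF of given extent and direction, centered."""
            psf = np.zeros(shape)
            cy, cx = shape[0] // 2, shape[1] // 2
            th = np.deg2rad(angle_deg)
            for t in np.linspace(-length / 2, length / 2, 4 * int(length) + 1):
                r = int(round(cy + t * np.sin(th)))
                c = int(round(cx + t * np.cos(th)))
                psf[r % shape[0], c % shape[1]] += 1.0
            return psf / psf.sum()

        def wiener_deblur(img, psf, k=1e-2):
            """Frequency-domain Wiener restoration with noise-to-signal ratio k."""
            H = np.fft.fft2(np.fft.ifftshift(psf))   # move PSF center to the origin
            G = np.fft.fft2(img)
            F = np.conj(H) / (np.abs(H) ** 2 + k) * G
            return np.real(np.fft.ifft2(F))

        # usage: psf = motion_psf(img.shape, length=15, angle_deg=30)
        #        restored = wiener_deblur(img, psf)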

  20. Karect: accurate correction of substitution, insertion and deletion errors for next-generation sequencing data

    KAUST Repository

    Allam, Amin

    2015-07-14

    Motivation: Next-generation sequencing generates large amounts of data affected by errors in the form of substitutions, insertions or deletions of bases. Error correction based on high-coverage information typically improves de novo assembly. Most existing tools can correct substitution errors only; some support insertions and deletions, but accuracy in many cases is low. Results: We present Karect, a novel error correction technique based on multiple alignment. Our approach supports substitution, insertion and deletion errors. It can handle non-uniform coverage as well as moderately covered areas of the sequenced genome. Experiments with data from Illumina, 454 FLX and Ion Torrent sequencing machines demonstrate that Karect is more accurate than previous methods, both in terms of correcting individual-base errors (up to 10% increase in accuracy gain) and post-de-novo-assembly quality (up to 10% increase in NGA50). We also introduce an improved framework for evaluating the quality of error correction.

  1. A Realization of Bias Correction Method in the GMAO Coupled System

    Science.gov (United States)

    Chang, Yehui; Koster, Randal; Wang, Hailan; Schubert, Siegfried; Suarez, Max

    2018-01-01

    Over the past several decades, a tremendous effort has been made to improve model performance in the simulation of the climate system. Cold or warm sea surface temperature (SST) biases in the tropics are still a problem common to most coupled ocean-atmosphere general circulation models (CGCMs). The precipitation biases in CGCMs are also accompanied by SST and surface wind biases. These deficiencies and biases over the equatorial oceans, through their influence on the Walker circulation, likely contribute to the precipitation biases over land surfaces. In this study, we introduce an approach to CGCM modeling that corrects model biases. The approach utilizes the history of the model's short-term forecasting errors and their seasonal dependence to modify the model's tendency term and minimize its climate drift. The study shows that this approach removes most of the model's climate biases. A number of other aspects of the simulation (e.g., extratropical transient activity) are also improved considerably due to the imposed, pre-processed initial 3-hour model drift corrections. Because many regional biases in the GEOS-5 CGCM are common among other current models, our approach and findings are applicable to those models as well.

  2. A modification to the standard ionospheric correction method used in GPS radio occultation

    Directory of Open Access Journals (Sweden)

    S. B. Healy

    2015-08-01

    A modification to the standard bending-angle correction used in GPS radio occultation (GPS-RO) is proposed. The modified approach should reduce systematic residual ionospheric errors in GPS radio occultation climatologies. A new second-order term is introduced in order to account for a known source of systematic error, which is generally neglected. The new term has the form κ(a) × (αL1(a) − αL2(a))², where a is the impact parameter and αL1, αL2 are the L1 and L2 bending angles, respectively. The variable κ is a weak function of the impact parameter, a, but it does depend on a priori ionospheric information. The theoretical basis of the new term is examined. The sensitivity of κ to the assumed ionospheric parameters is investigated in one-dimensional simulations, and it is shown that κ ≃ 10–20 rad−1. We note that the current implicit assumption is κ = 0, and this is probably adequate for numerical weather prediction applications. However, the uncertainty in κ should be included in the uncertainty estimates for the geophysical climatologies produced from GPS-RO measurements. The limitations of the new ionospheric correction when applied to CHAMP (Challenging Minisatellite Payload) measurements are noted. These arise because of the assumption, made when deriving bending angles from the Doppler shift values, that the refractive index is unity at the satellite.
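
    A sketch of the modified correction, assuming κ is taken as a constant in the reported 10–20 rad⁻¹ range (in practice it varies weakly with impact parameter and depends on a priori ionospheric information); setting kappa = 0 recovers the standard dual-frequency linear combination.

        import numpy as np

        F1, F2 = 1.57542e9, 1.22760e9        # GPS L1/L2 carrier frequencies [Hz]

        def corrected_bending_angle(alpha_l1, alpha_l2, kappa=15.0):
            """Dual-frequency ionospheric correction plus the proposed second-order term.

            alpha_l1, alpha_l2 : L1/L2 bending angles at the same impact parameter [rad]
            kappa              : second-order coefficient [1/rad], assumed constant here
            """
            linear = (F1**2 * alpha_l1 - F2**2 * alpha_l2) / (F1**2 - F2**2)
            return linear + kappa * (alpha_l1 - alpha_l2) ** 2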

  3. BLESS 2: accurate, memory-efficient and fast error correction method.

    Science.gov (United States)

    Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming

    2016-08-01

    The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Availability: https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. A third-generation dispersion and third-generation hydrogen bonding corrected PM6 method: PM6-D3H+

    Directory of Open Access Journals (Sweden)

    Jimmy C. Kromann

    2014-06-01

    We present new dispersion and hydrogen-bond corrections to the PM6 method, PM6-D3H+, and its implementation in the GAMESS program. The method combines the DFT-D3 dispersion correction by Grimme et al. with a modified version of the H+ hydrogen-bond correction by Korth. Overall, the interaction energies from PM6-D3H+ are very similar to those from PM6-DH2 and PM6-DH+, with RMSD and MAD values within 0.02 kcal/mol of one another. The main difference is that geometry optimizations of 88 complexes result in 82, 6, 0, and 0 geometries with 0, 1, 2, and 3 or more imaginary frequencies using PM6-D3H+ as implemented in GAMESS, while the corresponding numbers for PM6-DH+ as implemented in MOPAC are 54, 17, 15, and 2. The PM6-D3H+ method as implemented in GAMESS offers an attractive alternative to PM6-DH+ in MOPAC in cases where the LBFGS optimizer must be used and a vibrational analysis is needed, e.g., when computing vibrational free energies. While the GAMESS implementation is up to 10 times slower for geometry optimizations of proteins in bulk solvent compared to MOPAC, it is sufficiently fast to make geometry optimizations of small proteins practically feasible.

  5. Fixed-pattern noise correction method based on improved moment matching for a TDI CMOS image sensor.

    Science.gov (United States)

    Xu, Jiangtao; Nie, Huafeng; Nie, Kaiming; Jin, Weimin

    2017-09-01

    In this paper, an improved moment matching method based on a spatial correlation filter (SCF) and bilateral filter (BF) is proposed to correct the fixed-pattern noise (FPN) of a time-delay-integration CMOS image sensor (TDI-CIS). First, the values of row FPN (RFPN) and column FPN (CFPN) are estimated and added to the original image through the SCF and BF, respectively. Then the filtered image is processed by an improved moment matching method with a moving window. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in an image captured under uniform illumination, the standard deviation of the row-mean vector (SDRMV) decreases from 5.6761 LSB to 0.1948 LSB, while the standard deviation of the column-mean vector (SDCMV) decreases from 15.2005 LSB to 13.1949 LSB. In addition, for different images captured by different TDI-CISs, the average decreases of SDRMV and SDCMV are 5.4922 LSB and 2.0357 LSB, respectively. Comparative experimental results indicate that the proposed method can effectively correct the FPNs of different TDI-CISs while maintaining image details, without any auxiliary equipment.
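
    A minimal sketch of the moving-window moment-matching step (the SCF/BF prefiltering of the full method is omitted): each column is rescaled so that its mean and standard deviation match window-averaged references, which suppresses column-wise FPN while preserving local scene structure.

        import numpy as np

        def moment_match_columns(img, win=31):
            """Match each column's mean/std to moving-window references (sketch).

            Omits the spatial-correlation and bilateral prefilters of the full
            method; a small epsilon guards against flat (zero-variance) columns.
            """
            out = np.empty_like(img, dtype=float)
            cols = img.shape[1]
            half = win // 2
            mu = img.mean(axis=0)
            sd = img.std(axis=0) + 1e-6
            for j in range(cols):
                lo, hi = max(0, j - half), min(cols, j + half + 1)
                mu_ref = mu[lo:hi].mean()        # window-averaged statistics
                sd_ref = sd[lo:hi].mean()
                out[:, j] = (img[:, j] - mu[j]) * sd_ref / sd[j] + mu_ref
            return out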

  6. Correction for tissue attenuation in radionuclide gastric emptying studies: a comparison of a lateral image method and a geometric mean method

    Energy Technology Data Exchange (ETDEWEB)

    Collins, P.J.; Chatterton, B.E. (Royal Adelaide Hospital (Australia)); Horowitz, M.; Shearman, D.J.C. (Adelaide Univ. (Australia). Dept. of Medicine)

    1984-08-01

    Variation in depth of radionuclide within the stomach may result in significant errors in the measurement of gastric emptying if no attempt is made to correct for gamm