WorldWideScience

Sample records for scattering correction algorithm

  1. Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects

    Science.gov (United States)

    Gordon, Howard R.; Castano, Diego J.

    1987-01-01

    Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.

  2. An experimental study of the scatter correction by using a beam-stop-array algorithm with digital breast tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ye-Seul; Park, Hye-Suk; Kim, Hee-Joung [Yonsei University, Wonju (Korea, Republic of); Choi, Young-Wook; Choi, Jae-Gu [Korea Electrotechnology Research Institute, Ansan (Korea, Republic of)

    2014-12-15

    Digital breast tomosynthesis (DBT) is a technique that was developed to overcome the limitations of conventional digital mammography by reconstructing slices through the breast from projections acquired at different angles. In developing and optimizing DBT, x-ray scatter reduction remains a significant challenge due to projection geometry and radiation dose limitations. The most common approach to scatter reduction is a beam-stop-array (BSA) algorithm; however, this method raises concerns regarding the additional exposure involved in acquiring the scatter distribution. The compressed breast is roughly symmetric, so the scatter profiles from projections acquired at axially opposite angles are near mirror images of each other. The purpose of this study was to apply the BSA algorithm with only two beam-stop scans, estimating the scatter distribution with minimal additional exposure. The results of the scatter correction with angular interpolation were comparable to those of the scatter correction with all scatter distributions at each angle. The exposure increase was less than 13%. This study demonstrated the influence of the scatter correction obtained by using the BSA algorithm with minimum exposure, which indicates its potential for practical applications.
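
    The key idea is that two beam-stop measurements at opposite extreme angles can stand in for scatter maps at every projection angle. A minimal sketch of that interpolation step, assuming a simple linear blend between the two mirrored scatter maps (the paper's exact interpolation scheme may differ, and all names below are mine):

```python
import numpy as np

def estimate_scatter_maps(s_plus, s_minus, angles, theta):
    # s_plus / s_minus: scatter maps measured with the beam-stop array at
    # +theta / -theta; angles: all projection angles of the DBT sweep.
    maps = []
    for a in angles:
        # Linear blend weighted by angular distance; exploits the near
        # mirror symmetry of the compressed breast noted in the abstract.
        w = np.clip((a + theta) / (2.0 * theta), 0.0, 1.0)  # 0 at -theta, 1 at +theta
        maps.append((1.0 - w) * s_minus + w * s_plus)
    return maps

# Usage with dummy data: the -25 degree map is the mirrored +25 degree map.
s_plus = np.random.rand(64, 64)
s_minus = s_plus[:, ::-1]
maps = estimate_scatter_maps(s_plus, s_minus, np.linspace(-25, 25, 15), 25.0)
```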

  3. Quantitative Evaluation of 2 Scatter-Correction Techniques for 18F-FDG Brain PET/MRI in Regard to MR-Based Attenuation Correction.

    Science.gov (United States)

    Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika

    2017-10-01

    In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC on scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter-correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET

  4. Prior image constrained scatter correction in cone-beam computed tomography image-guided radiation therapy.

    Science.gov (United States)

    Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong

    2011-02-21

    X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high-quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images, generated by raising the raw cone-beam projection data to different powers and then reconstructing with the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
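
    Once the basis images are reconstructed, the weight-fitting step described above reduces to a linear least-squares problem. A minimal sketch under that reading (function and variable names are mine, not the authors'):

```python
import numpy as np

def fit_basis_weights(basis_images, prior_image):
    # Find weights w minimizing || sum_k w_k * basis_k - prior ||_2
    # via linear least squares over all voxels.
    A = np.stack([b.ravel() for b in basis_images], axis=1)   # (npix, K)
    w, *_ = np.linalg.lstsq(A, prior_image.ravel(), rcond=None)
    corrected = sum(wk * b for wk, b in zip(w, basis_images))
    return w, corrected

# Dummy example: two basis images, prior taken as their average.
b1, b2 = np.random.rand(32, 32), np.random.rand(32, 32)
w, corrected = fit_basis_weights([b1, b2], 0.5 * (b1 + b2))
```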

  5. Patient-specific scatter correction in clinical cone beam computed tomography imaging made possible by the combination of Monte Carlo simulations and a ray tracing algorithm

    International Nuclear Information System (INIS)

    Thing, Rune S.; Bernchou, Uffe; Brink, Carsten; Mainegra-Hing, Ernesto

    2013-01-01

    Purpose: Cone beam computed tomography (CBCT) image quality is limited by scattered photons. Monte Carlo (MC) simulations provide the ability to predict the patient-specific scatter contamination in clinical CBCT imaging. Lengthy simulations prevent MC-based scatter correction from being fully implemented in a clinical setting. This study investigates the combination of fast MC simulations to predict scatter distributions with a ray tracing algorithm to allow calibration between simulated and clinical CBCT images. Material and methods: An EGSnrc-based user code (egs_cbct) was used to perform MC simulations of an Elekta XVI CBCT imaging system. A 60 keV x-ray source was used, and air kerma was scored at the detector plane. Several variance reduction techniques (VRTs) were used to increase the scatter calculation efficiency. Three patient phantoms based on CT scans were simulated, namely a brain, a thorax and a pelvis scan. A ray tracing algorithm was used to calculate the detector signal due to primary photons. A total of 288 projections were simulated, one for each thread on the computer cluster used for the investigation. Results: Scatter distributions for the brain, thorax and pelvis scans were simulated within 2% statistical uncertainty in two hours per scan. Within the same time, the ray tracing algorithm provided the primary signal for each of the projections. Thus, all the data needed for MC-based scatter correction in clinical CBCT imaging were obtained within two hours per patient, using a full simulation of the clinical CBCT geometry. Conclusions: This study shows that MC-based scatter correction in CBCT imaging has great potential to improve CBCT image quality. By use of powerful VRTs to predict scatter distributions and a ray tracing algorithm to calculate the primary signal, it is possible to obtain the necessary data for patient-specific MC scatter correction within two hours per patient.
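
    The primary signal in this hybrid scheme is a Beer-Lambert line integral through the attenuation map. A toy sketch of that ray-tracing step, assuming 1 mm isotropic voxels and nearest-neighbor sampling (a real implementation would use an exact path-length algorithm such as Siddon's):

```python
import numpy as np

def primary_signal(mu, src, det, i0=1.0, n_steps=500):
    # mu: 3D attenuation map in 1/mm on an (assumed) 1 mm isotropic grid;
    # src, det: source and detector-pixel positions in voxel coordinates.
    src, det = np.asarray(src, float), np.asarray(det, float)
    ts = np.linspace(0.0, 1.0, n_steps)
    pts = src[None, :] + ts[:, None] * (det - src)[None, :]
    idx = np.clip(np.rint(pts).astype(int), 0, np.array(mu.shape) - 1)
    step = np.linalg.norm(det - src) / n_steps       # path length per sample
    line_integral = mu[idx[:, 0], idx[:, 1], idx[:, 2]].sum() * step
    return i0 * np.exp(-line_integral)               # Beer-Lambert attenuation

# Toy volume: a water-like block (mu ~ 0.02/mm) in air.
mu = np.zeros((64, 64, 64))
mu[24:40, 24:40, 24:40] = 0.02
print(primary_signal(mu, (32, 32, -100), (32, 32, 164)))
```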

  6. Research of scatter correction on industry computed tomography

    International Nuclear Information System (INIS)

    Sun Shaohua; Gao Wenhuan; Zhang Li; Chen Zhiqiang

    2002-01-01

    In industrial computed tomography scanning, scatter blurs the reconstructed image: the grey values of pixels in the reconstructed image deviate from their true values, and this effect needs to be corrected. With the conventional method of deconvolution, many iteration steps are needed and the computing time is unsatisfactory. The authors discuss a method combining the Ordered Subsets Convex algorithm with a scatter model to implement scatter correction, and promising results are obtained in both speed and image quality.

  7. Cross plane scattering correction

    International Nuclear Information System (INIS)

    Shao, L.; Karp, J.S.

    1990-01-01

    Most previous scattering correction techniques for PET are based on assumptions made for a single transaxial plane and are independent of axial variations. These techniques will incorrectly estimate the scattering fraction for volumetric PET imaging systems since they do not take cross-plane scattering into account. In this paper, the authors propose a new point source scattering deconvolution method (2-D). The cross-plane scattering is incorporated into the algorithm by modeling a scattering point source function. In the model, the scattering dependence on both the axial and transaxial directions is reflected in the exponential fitting parameters, and these parameters are estimated directly from a limited number of measured point response functions. The authors' results comparing the standard in-plane point source deconvolution with their cross-plane source deconvolution show that for a small source, the former technique overestimates the scatter fraction in the plane of the source and underestimates the scatter fraction in adjacent planes. In addition, the authors also propose a simple approximation technique for deconvolution.

  8. Interleaved segment correction achieves higher improvement factors in using genetic algorithm to optimize light focusing through scattering media

    Science.gov (United States)

    Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong

    2017-10-01

    Focusing and imaging through scattering media has been proven possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), thereby improving the focusing quality. The correction phase is often found by global searching algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually with the progression of optimization, causing the improvement factor of optimization to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor with the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method has proven useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality improves as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demands on the dynamic range of detection devices. The proposed method holds potential in applications such as high-resolution imaging in deep tissue.
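
    A compact sketch of the ISC loop described above, assuming a simulated feedback signal in place of a real camera and simplified GA operators (truncation selection, copy, Gaussian mutation); the authors' actual operators and parameters may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
N_SEG, N_GROUPS = 256, 4                 # SLM segments, interleaved groups
IDEAL = np.random.default_rng(1).uniform(0, 2 * np.pi, N_SEG)

def focus_metric(mask):
    # Stand-in for the measured focal intensity; a real system would read a
    # detector behind the scattering medium. Peaks when mask matches IDEAL.
    return np.cos(mask - IDEAL).sum()

def put(mask, idx, values):
    out = mask.copy()
    out[idx] = values
    return out

def ga_group(mask, idx, n_pop=16, n_gen=25):
    # Tiny GA over one interleaved group of segments; all other segments
    # stay fixed, as in the ISC scheme.
    pop = rng.uniform(0, 2 * np.pi, (n_pop, idx.size))
    for _ in range(n_gen):
        scores = np.array([focus_metric(put(mask, idx, c)) for c in pop])
        elite = pop[np.argsort(scores)[::-1][: n_pop // 2]]
        kids = elite[rng.integers(0, len(elite), n_pop - len(elite))]
        pop = np.vstack([elite, kids + rng.normal(0, 0.2, kids.shape)])
    scores = np.array([focus_metric(put(mask, idx, c)) for c in pop])
    return pop[np.argmax(scores)]

mask = np.zeros(N_SEG)
for g in range(N_GROUPS):                # optimize the groups sequentially
    idx = np.arange(g, N_SEG, N_GROUPS)  # every N_GROUPS-th segment
    mask[idx] = ga_group(mask, idx)
print("final metric:", focus_metric(mask))
```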

  9. Atmospheric scattering corrections to solar radiometry

    International Nuclear Information System (INIS)

    Box, M.A.; Deepak, A.

    1979-01-01

    Whenever a solar radiometer is used to measure direct solar radiation, some diffuse sky radiation invariably enters the detector's field of view along with the direct beam. Therefore, the atmospheric optical depth obtained by the use of Bouguer's transmission law (also called the Beer-Lambert law), which is valid only for direct radiation, needs to be corrected by taking account of the scattered radiation. In this paper we discuss the correction factors needed to account for the diffuse (i.e., singly and multiply scattered) radiation and the algorithms developed for retrieving aerosol size distributions from such measurements. For a radiometer with a small field of view and relatively clear skies (optical depths <0.4), it is shown that the total diffuse contribution represents approximately 1% of the total intensity. It is assumed here that the main contributions to the diffuse radiation within the detector's view cone are due to single scattering by molecules and aerosols and multiple scattering by molecules alone, aerosol multiple-scattering contributions being treated as negligibly small. The theory and the numerical results discussed in this paper will be helpful not only in making corrections to measured optical depth data but also in designing improved solar radiometers.
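
    A worked version of the correction implied above, under the stated assumption that diffuse light adds a small fraction ε to the direct beam:

```latex
% Measured signal = direct beam plus a diffuse fraction \epsilon:
\[
  V_{\mathrm{meas}} = V_0\, e^{-\tau m}\,(1+\epsilon)
  \quad\Longrightarrow\quad
  \tau_{\mathrm{app}} \equiv -\frac{1}{m}\ln\frac{V_{\mathrm{meas}}}{V_0}
  = \tau - \frac{\ln(1+\epsilon)}{m},
\]
% so the corrected optical depth, to first order in \epsilon, is
\[
  \tau = \tau_{\mathrm{app}} + \frac{\ln(1+\epsilon)}{m}
       \approx \tau_{\mathrm{app}} + \frac{\epsilon}{m},
\]
% i.e. the quoted ~1% diffuse contribution biases \tau low by roughly
% 0.01/m, where m is the air mass along the line of sight.
```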

  10. A software-based x-ray scatter correction method for breast tomosynthesis

    International Nuclear Information System (INIS)

    Jia Feng, Steve Si; Sechopoulos, Ioannis

    2011-01-01

    reconstructions. The visibility of the findings in two patient images was also improved by the application of the scatter correction algorithm. The MTF of the images did not change after application of the scatter correction algorithm, indicating that spatial resolution was not adversely affected. Conclusions: Our software-based scatter correction algorithm exhibits great potential in improving the image quality of DBT acquisitions of both phantoms and patients. The proposed algorithm does not require a time-consuming MC simulation for each specific case to be corrected, making it applicable in the clinical realm.

  11. Efficient scatter distribution estimation and correction in CBCT using concurrent Monte Carlo fitting

    Energy Technology Data Exchange (ETDEWEB)

    Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Verhaegen, F. [Department of Radiation Oncology - MAASTRO, GROW—School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4 (Canada); Jaffray, D. A. [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Ontario Cancer Institute, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)

    2015-01-15

    Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations at a subset of the projection angles in the set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S_MC. S_MC is fit to a function, S_F, and if the fit of S_F is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S_F, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were pelvis scans of a phantom and a patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as to determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the original uncorrected reconstruction, to a reconstruction corrected with a constant scatter estimate, and to a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson's correlation, r, proved to be a
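
    The fitting step lends itself to a short sketch: fit a frequency-limited Fourier series to the sparse MC scatter samples by least squares, then evaluate it at all projection angles. The sketch below fits in the angular direction only, per detector pixel; the paper's S_F is a richer multi-dimensional fit:

```python
import numpy as np

def fourier_design(angles, n_freq):
    # Frequency-limited sine/cosine basis over the gantry angle.
    cols = [np.ones_like(angles)]
    for k in range(1, n_freq + 1):
        cols += [np.cos(k * angles), np.sin(k * angles)]
    return np.stack(cols, axis=1)

def fit_scatter(sim_angles, s_mc, all_angles, n_freq=3):
    # s_mc: (n_sim, n_pix) scatter detector response from the MC subset.
    A = fourier_design(sim_angles, n_freq)             # (n_sim, 2*n_freq+1)
    coeffs, *_ = np.linalg.lstsq(A, s_mc, rcond=None)  # fit every pixel at once
    return fourier_design(all_angles, n_freq) @ coeffs

sim = np.linspace(0, 2 * np.pi, 12, endpoint=False)
s_mc = np.outer(1 + 0.3 * np.cos(sim), np.ones(100))   # smooth dummy scatter
s_all = fit_scatter(sim, s_mc, np.linspace(0, 2 * np.pi, 360))
```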

  12. A scatter-corrected list-mode reconstruction and a practical scatter/random approximation technique for dynamic PET imaging

    International Nuclear Information System (INIS)

    Cheng, J-C; Rahmim, Arman; Blinder, Stephan; Camborde, Marie-Laure; Raywood, Kelvin; Sossi, Vesna

    2007-01-01

    We describe an ordinary Poisson list-mode expectation maximization (OP-LMEM) algorithm with a sinogram-based scatter correction method based on the single scatter simulation (SSS) technique and a random correction method based on the variance-reduced delayed-coincidence technique. We also describe a practical approximate scatter and random-estimation approach for dynamic PET studies based on a time-averaged scatter and random estimate followed by scaling according to the global numbers of true coincidences and randoms for each temporal frame. The quantitative accuracy achieved using OP-LMEM was compared to that obtained using the histogram-mode 3D ordinary Poisson ordered subset expectation maximization (3D-OP) algorithm with similar scatter and random correction methods, and they showed excellent agreement. The accuracy of the approximated scatter and random estimates was tested by comparing time activity curves (TACs) as well as the spatial scatter distribution from dynamic non-human primate studies obtained from the conventional (frame-based) approach and those obtained from the approximate approach. An excellent agreement was found, and the time required for the calculation of scatter and random estimates in the dynamic studies became much less dependent on the number of frames (we achieved a nearly four times faster performance on the scatter and random estimates by applying the proposed method). The precision of the scatter fraction was also demonstrated for the conventional and the approximate approach using phantom studies.
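
    A minimal sketch of the frame-scaling approximation, as I read the abstract (the variable names and exact normalization are assumptions):

```python
import numpy as np

def per_frame_estimates(s_avg, r_avg, trues, randoms):
    # Scale a time-averaged scatter sinogram s_avg by each frame's share of
    # true coincidences, and the averaged randoms estimate r_avg by each
    # frame's share of randoms.
    t_mean, r_mean = np.mean(trues), np.mean(randoms)
    scatter_frames = [s_avg * (t / t_mean) for t in trues]
    random_frames = [r_avg * (r / r_mean) for r in randoms]
    return scatter_frames, random_frames

s_avg, r_avg = np.random.rand(100, 50), np.random.rand(100, 50)
sf, rf = per_frame_estimates(s_avg, r_avg, [1e6, 8e5], [2e5, 1.5e5])
```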

  13. Fully 3D iterative scatter-corrected OSEM for HRRT PET using a GPU

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyung Sang; Ye, Jong Chul, E-mail: kssigari@kaist.ac.kr, E-mail: jong.ye@kaist.ac.kr [Bio-Imaging and Signal Processing Lab., Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), 335 Gwahak-no, Yuseong-gu, Daejon 305-701 (Korea, Republic of)

    2011-08-07

    Accurate scatter correction is especially important for high-resolution 3D positron emission tomography (PET) systems such as the high-resolution research tomograph (HRRT) due to the large scatter fraction in the data. To address this problem, a fully 3D iterative scatter-corrected ordered subset expectation maximization (OSEM), in which a 3D single scatter simulation (SSS) is alternately performed with a 3D OSEM reconstruction, was recently proposed. However, due to the computational complexity of both the SSS and OSEM algorithms for a high-resolution 3D PET, it has not been widely used in practice. The main objective of this paper is, therefore, to accelerate the fully 3D iterative scatter-corrected OSEM using a graphics processing unit (GPU) and verify its performance for an HRRT. We show that to exploit the massive thread structures of the GPU, several algorithmic modifications are necessary. For the SSS implementation, a sinogram-driven approach is found to be more appropriate than a detector-driven approach, as fast linear interpolation can be performed in the sinogram domain through the use of texture memory. Furthermore, a pixel-driven backprojector and a ray-driven projector can be significantly accelerated by assigning threads to voxels and sinograms, respectively. Using Nvidia's GPU and the compute unified device architecture (CUDA), the execution time of an SSS is less than 6 s, a single iteration of OSEM with 16 subsets takes 16 s, and a single iteration of the fully 3D scatter-corrected OSEM composed of an SSS and six iterations of OSEM takes under 105 s for the HRRT geometry, which corresponds to acceleration factors of 125x and 141x for OSEM and SSS, respectively. The fully 3D iterative scatter-corrected OSEM algorithm is validated in simulations using Geant4 Application for Tomographic Emission and in actual experiments using an HRRT.

  14. The analysis and correction of neutron scattering effects in neutron imaging

    International Nuclear Information System (INIS)

    Raine, D.A.; Brenizer, J.S.

    1997-01-01

    A method of correcting for the scattering effects present in neutron radiographic and computed tomographic imaging has been developed. Prior work has shown that beam, object, and imaging system geometry factors, such as the L/D ratio and angular divergence, are the primary sources contributing to the degradation of neutron images. With objects smaller than 20-40 mm in width, a parallel beam approximation can be made where the effects from geometry are negligible. Factors which remain important in the image formation process are the pixel size of the imaging system, neutron scattering, the size of the object, the conversion material, and the beam energy spectrum. The Monte Carlo N-Particle transport code, version 4A (MCNP4A), was used to separate and evaluate the effect that each of these parameters has on neutron image data. The simulations were used to develop a correction algorithm which is easy to implement and requires no a priori knowledge of the object. The correction algorithm is based on determination of the object scatter function (OSF), using available data outside the object to estimate the shape and magnitude of the OSF based on a Gaussian functional form. For objects smaller than 1 mm (0.04 in.) in width, the correction function can be well approximated by a constant function. Errors in the determination and correction of the MCNP-simulated neutron scattering component were under 5%, and larger errors were noted only in objects at the extreme high end of the range of object sizes simulated. The Monte Carlo data also indicated that scattering does not play a significant role in the blurring of neutron radiographic and tomographic images. The effect of neutron scattering on computed tomography is shown to be minimal at best, with the most serious effect resulting when the basic backprojection method is used.
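
    The OSF estimation step lends itself to a short sketch: fit a Gaussian to the measured signal outside the object, where only scatter contributes, and evaluate it under the object. A 1D profile and assumed names for brevity:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_osf(profile, object_mask):
    # Fit a Gaussian to pixels *outside* the object (scatter only), then
    # evaluate it everywhere, including under the object.
    x = np.arange(profile.size)
    p0 = [profile[~object_mask].max(), profile.size / 2, profile.size / 4]
    popt, _ = curve_fit(gaussian, x[~object_mask], profile[~object_mask], p0=p0)
    return gaussian(x, *popt)                  # estimated scatter at all x

x = np.arange(200)
truth = gaussian(x, 50.0, 100.0, 60.0)
mask = (x > 60) & (x < 140)                    # object occupies the middle
osf = fit_osf(truth + np.random.normal(0, 0.5, 200), mask)
```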

  15. A fast and pragmatic approach for scatter correction in flat-detector CT using elliptic modeling and iterative optimization

    Science.gov (United States)

    Meyer, Michael; Kalender, Willi A.; Kyriakou, Yiannis

    2010-01-01

    Scattered radiation is a major source of artifacts in flat detector computed tomography (FDCT) due to the increased irradiated volumes. We propose a fast projection-based algorithm for correction of scatter artifacts. The presented algorithm combines a convolution method to determine the spatial distribution of the scatter intensity distribution with an object-size-dependent scaling of the scatter intensity distributions using a priori information generated by Monte Carlo simulations. A projection-based (PBSE) and an image-based (IBSE) strategy for size estimation of the scanned object are presented. Both strategies provide good correction and comparable results; the faster PBSE strategy is recommended. Even with such a fast and simple algorithm that in the PBSE variant does not rely on reconstructed volumes or scatter measurements, it is possible to provide a reasonable scatter correction even for truncated scans. For both simulations and measurements, scatter artifacts were significantly reduced and the algorithm showed stable behavior in the z-direction. For simulated voxelized head, hip and thorax phantoms, a figure of merit Q of 0.82, 0.76 and 0.77 was reached, respectively (Q = 0 for uncorrected, Q = 1 for ideal). For a water phantom with 15 cm diameter, for example, a cupping reduction from 10.8% down to 2.1% was achieved. The performance of the correction method has limitations in the case of measurements using non-ideal detectors, intensity calibration, etc. An iterative approach to overcome most of these limitations was proposed. This approach is based on root finding of a cupping metric and may be useful for other scatter correction methods as well. By this optimization, cupping of the measured water phantom was further reduced down to 0.9%. The algorithm was evaluated on a commercial system including truncated and non-homogeneous clinically relevant objects.
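
    The iterative refinement described in the last paragraph can be framed as one-dimensional root finding on a cupping metric. A sketch under assumed definitions (a scalar scaling of the scatter estimate as the free variable; identity "reconstruction" in the toy usage):

```python
import numpy as np
from scipy.optimize import brentq

def cupping_metric(image, center_roi, edge_roi):
    # Signed cupping: edge-minus-center mean in a homogeneous phantom
    # (assumed definition; the paper's exact metric may differ).
    return image[edge_roi].mean() - image[center_roi].mean()

def optimize_scale(projections, scatter_est, reconstruct, center_roi, edge_roi):
    # Find the scatter scaling that zeroes the cupping metric.
    def f(s):
        image = reconstruct(projections - s * scatter_est)
        return cupping_metric(image, center_roi, edge_roi)
    return brentq(f, 0.0, 2.0)

# Toy usage: the "reconstruction" is the identity on a 1D water profile.
true = np.full(100, 10.0)
t = np.linspace(0.0, 1.0, 100)
scatter = 2.0 + 4.0 * t * (1.0 - t)            # more scatter at the center
measured = true + scatter
center, edge = np.arange(45, 55), np.r_[0:5, 95:100]
print(optimize_scale(measured, scatter, lambda p: p, center, edge))  # ~1.0
```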

  16. The use of anatomical information for molecular image reconstruction algorithms: Attenuation/Scatter correction, motion compensation, and noise reduction

    Energy Technology Data Exchange (ETDEWEB)

    Chun, Se Young [School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan (Korea, Republic of)

    2016-03-15

    PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images such as CT and MRI from hybrid scanners are also important ingredients for improving the image quality of PET or SPECT further. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we will review works using anatomical information for molecular image reconstruction algorithms for better image quality by describing mathematical models, discussing sources of anatomical information for different cases, and showing some examples.

  17. Compton scatter and randoms corrections for origin ensembles 3D PET reconstructions

    Energy Technology Data Exchange (ETDEWEB)

    Sitek, Arkadiusz [Harvard Medical School, Boston, MA (United States). Dept. of Radiology; Brigham and Women's Hospital, Boston, MA (United States); Kadrmas, Dan J. [Utah Univ., Salt Lake City, UT (United States). Utah Center for Advanced Imaging Research (UCAIR)

    2011-07-01

    In this work we develop a novel approach to correction for scatter and randoms in reconstruction of data acquired by 3D positron emission tomography (PET), applicable to tomographic reconstruction done by the origin ensemble (OE) approach. Statistical image reconstruction using OE is based on calculation of the expectations of the numbers of emitted events per voxel based on the complete-data space. Since the OE estimation is fundamentally different from regular statistical estimators such as those based on maximum likelihood, the standard methods of implementing scatter and randoms corrections cannot be used. Based on prompt, scatter, and random rates, each detected event is graded in terms of the probability of being a true event. These grades are utilized by the Markov chain Monte Carlo (MCMC) algorithm used in the OE approach for calculation of the expectation, over the complete-data space, of the number of emitted events per voxel (the OE estimator). We show that the results obtained with OE are almost identical to results obtained by the maximum-likelihood expectation-maximization (ML-EM) algorithm for reconstruction of experimental phantom data acquired using a Siemens Biograph mCT 3D PET/CT scanner. The developed correction removes artifacts due to scatter and randoms in the investigated 3D PET datasets. (orig.)

  18. Multiple scattering corrections to the Beer-Lambert law. 1: Open detector.

    Science.gov (United States)

    Tam, W G; Zardecki, A

    1982-07-01

    Multiple scattering corrections to the Beer-Lambert law are analyzed by means of a rigorous small-angle solution to the radiative transfer equation. Transmission functions for predicting the received radiant power (a directly measured quantity, in contrast to the spectral radiance in the Beer-Lambert law) are derived. Numerical algorithms and results relating to the multiple scattering effects for laser propagation in fog, cloud, and rain are presented.

  19. Source distribution dependent scatter correction for PVI

    International Nuclear Information System (INIS)

    Barney, J.S.; Harrop, R.; Dykstra, C.J.

    1993-01-01

    Source distribution dependent scatter correction methods which incorporate different amounts of information about the source position and material distribution have been developed and tested. The techniques use image-to-projection integral transformation incorporating varying degrees of information on the distribution of scattering material, or convolution-subtraction methods, with some information about the scattering material included in one of the convolution methods. To test the techniques, the authors apply them to data generated by Monte Carlo simulations which use geometric shapes or a voxelized density map to model the scattering material. Source position and material distribution have been found to have some effect on scatter correction. An image-to-projection method which incorporates a density map produces accurate scatter correction but is computationally expensive. Simpler methods, both image-to-projection and convolution, can also provide effective scatter correction.

  20. Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods

    International Nuclear Information System (INIS)

    Narita, Y.; Eberl, S.; Nakamura, T.

    1996-01-01

    Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter and scatter plus primary) were simulated for 99mTc and 201Tl for numerical chest phantoms. Data were reconstructed with an ordered-subset ML-EM algorithm including attenuation correction using the transmission data. In the chest phantom simulation, TDCS provided better S/N than TEW, and better accuracy, i.e., 1.0% vs -7.2% in myocardium, and -3.7% vs -30.1% in the ventricular chamber for 99mTc with TDCS and TEW, respectively. For 201Tl, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological function with SPECT.
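
    For reference, the TEW estimate compared here is the standard trapezoidal approximation built from two narrow windows flanking the photopeak:

```python
def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Triple-energy-window (TEW) estimate of scatter counts in the
    photopeak window (standard trapezoidal approximation):

        S ~= (C_lower / W_lower + C_upper / W_upper) * W_peak / 2

    c_lower, c_upper : counts in the sub-windows flanking the peak
    w_*              : window widths in keV
    """
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

# Example: 99mTc with a 20% photopeak window (126-154 keV) and 3 keV
# sub-windows (counts are made-up numbers for illustration).
print(tew_scatter(c_lower=120.0, c_upper=15.0,
                  w_lower=3.0, w_upper=3.0, w_peak=28.0))
```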

  1. TU-F-18C-03: X-Ray Scatter Correction in Breast CT: Advances and Patient Testing

    International Nuclear Information System (INIS)

    Ramamurthy, S; Sechopoulos, I

    2014-01-01

    Purpose: To further develop and perform patient testing of an x-ray scatter correction algorithm for dedicated breast computed tomography (BCT). Methods: A previously proposed algorithm for x-ray scatter signal reduction in BCT imaging was modified and tested with a phantom and on patients. A wireless electronic positioner system was designed and added to the BCT system to move a tungsten plate in and out of the x-ray beam. The interpolation used by the algorithm was replaced with a radial basis function-based algorithm, with automated exclusion of non-valid sampled points due to patient motion or other factors. A 3D adaptive noise reduction filter was also introduced to reduce the impact of scatter quantum noise post-reconstruction. The impact of the improved algorithm on image quality was evaluated using a breast phantom and seven patient breasts, quantitatively using metrics such as signal difference (SD) and signal difference-to-noise ratio (SDNR) and qualitatively using image profiles. Results: The improvements in the algorithm resulted in a more robust interpolation step, with no introduction of image artifacts, especially at the imaged object boundaries, which was an issue in the previous implementation. Qualitative evaluation of the reconstructed slices and corresponding profiles shows excellent homogeneity of both the background and the higher density features throughout the whole imaged object, as well as increased accuracy in the Hounsfield unit (HU) values of the tissues. Profiles also demonstrate a substantial increase in both SD and SDNR between glandular and adipose regions compared to both the uncorrected and system-corrected images. Conclusion: The improved scatter correction algorithm can be reliably used during patient BCT acquisitions with no introduction of artifacts, resulting in substantial improvement in image quality. Its impact on actual clinical performance needs to be evaluated in the future. Research Agreement, Koning Corp., Hologic

  2. Ultrafast cone-beam CT scatter correction with GPU-based Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    Yuan Xu

    2014-03-01

    Full Text Available Purpose: Scatter artifacts severely degrade image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, within 30 seconds. Methods: The method consists of six steps: (1) FDK reconstruction using raw projection data; (2) rigid registration of the planning CT to the FDK results; (3) MC scatter calculation at sparse view angles using the planning CT; (4) interpolation of the calculated scatter signals to other angles; (5) removal of scatter from the raw projections; (6) FDK reconstruction using the scatter-corrected projections. In addition to using a GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC noise from the simulated scatter images caused by low photon numbers. The method is validated on one simulated head-and-neck case with 364 projection angles. Results: We have examined the variation of the scatter signal among projection angles using Fourier analysis. It is found that scatter images at 31 angles are sufficient to restore those at all angles with < 0.1% error. For the simulated patient case with a resolution of 512 × 512 × 100, we simulated 5 × 10^6 photons per angle. The total computation time is 20.52 seconds on a Nvidia GTX Titan GPU, and the time at each step is 2.53, 0.64, 14.78, 0.13, 0.19, and 2.25 seconds, respectively. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme has been developed that accomplishes the whole procedure of scatter correction and reconstruction within 30 seconds.

  3. Novel scatter compensation with energy and spatial dependent corrections in positron emission tomography

    International Nuclear Information System (INIS)

    Guerin, Bastien

    2010-01-01

    We developed and validated a fast Monte Carlo simulation of PET acquisitions based on the SimSET program, accurately modeling the propagation of gamma photons in the patient as well as in the block-based PET detector. Comparison of our simulation with another well-validated code, GATE, and with measurements on two GE Discovery ST PET scanners showed that it accurately models energy spectra (errors smaller than 4.6%), the spatial resolution of block-based PET scanners (6.1%), scatter fraction (3.5%), sensitivity (2.3%) and count rates (12.7%). Next, we developed a novel scatter correction incorporating the energy and position of photons detected in list mode. Our approach is based on a reformulation of the list-mode likelihood function containing the energy distribution of detected coincidences in addition to their spatial distribution, yielding an EM reconstruction algorithm containing spatial and energy dependent correction terms. We also proposed using the energy in addition to the position of gamma photons in the normalization of the scatter sinogram. Finally, we developed a method for estimating primary and scatter photon energy spectra from the total spectra detected in different sectors of the PET scanner. We evaluated the accuracy and precision of our new spatio-spectral scatter correction and of the standard spatial correction using realistic Monte Carlo simulations. These results showed that incorporating the energy in the scatter correction reduces bias in the estimation of the absolute activity level by ∼60% in the cold regions of the largest patients and yields quantification errors of less than 13% in all regions. (author)

  4. Evaluation of a method for correction of scatter radiation in thorax cone beam CT

    International Nuclear Information System (INIS)

    Rinkel, J.; Dinten, J.M.; Esteve, F.

    2004-01-01

    Purpose: Cone beam CT (CBCT) enables three-dimensional imaging with isotropic resolution. X-ray scatter estimation is a big challenge for quantitative CBCT imaging of the thorax: the scatter level is significantly higher on cone beam systems compared to collimated fan beam systems. The effects of this scattered radiation are cupping artefacts, streaks, and quantification inaccuracies. The conventional beam-stop scatter estimation approach can be used for CBCT but leads to a significant increase in dose and acquisition time. An original scatter management process requiring no supplementary acquisition has been developed at CEA-LETI. Methods and Materials: This Analytical Plus Indexing-based method (API) of scatter correction in CBCT is based on scatter calibration through offline acquisitions with beam stops on lucite plates, combined with an analytical transformation derived from physical equations. This approach has been applied with success in bone densitometry and mammography. To evaluate this method in CBCT, acquisitions from a thorax phantom with and without beam stops were performed. To compare different scatter correction approaches, the Feldkamp algorithm was applied to raw data corrected for scatter by the API and by the beam-stop approaches. Results: The API method provides results in good agreement with the beam-stop-array approach, suppressing the cupping artefact. The influence of the scatter correction method on the noise in the reconstructed images has also been evaluated. Conclusion: The results indicate that the API method is effective for quantitative CBCT imaging of the thorax. Compared to a beam-stop-array method it needs a lower x-ray dose and shortens acquisition time. (authors)

  5. Evaluation of a scattering correction method for high energy tomography

    Science.gov (United States)

    Tisseur, David; Bhatia, Navnina; Estre, Nicolas; Berge, Léonie; Eck, Daniel; Payan, Emmanuel

    2018-01-01

    One of the main drawbacks of Cone Beam Computed Tomography (CBCT) is the contribution of scattered photons due to the object and the detector. Scattered photons are deflected from their original path after their interaction with the object. This additional contribution of scattered photons results in increased measured intensities, since the scattered intensity simply adds to the transmitted intensity. The effect is seen as an overestimation of the measured intensity, and thus an underestimation of absorption. This results in artifacts like cupping, shading and streaks on the reconstructed images. Moreover, the scattered radiation biases quantitative tomographic reconstruction (for example, atomic number and mass density measurement with the dual-energy technique). The effect can be significant and difficult to manage in the MeV energy range with large objects, due to the higher Scatter to Primary Ratio (SPR). Additionally, incident high-energy photons scattered by the Compton effect are more forward directed and hence more likely to reach the detector. Moreover, in the MeV energy range, the contribution of photons produced by pair production and the Bremsstrahlung process also becomes important. We propose an evaluation of a scattering correction technique based on the method named Scatter Kernel Superposition (SKS). The algorithm uses a continuously thickness-adapted kernels method. The analytical parameterizations of the scatter kernels are derived in terms of material thickness, to form continuously thickness-adapted kernel maps in order to correct the projections. This approach has proved to be efficient in producing better sampling of the kernels with respect to the object thickness. The technique offers applicability over a wide range of imaging conditions and gives users an additional advantage. Moreover, since no extra hardware is required by this approach, it forms a major advantage especially in those cases where

  6. Scatter correction using a primary modulator on a clinical angiography C-arm CT system.

    Science.gov (United States)

    Bier, Bastian; Berger, Martin; Maier, Andreas; Kachelrieß, Marc; Ritschl, Ludwig; Müller, Kerstin; Choi, Jang-Hwan; Fahrig, Rebecca

    2017-09-01

    Cone beam computed tomography (CBCT) suffers from a large amount of scatter, resulting in severe scatter artifacts in the reconstructions. Recently, a new scatter correction approach, called improved primary modulator scatter estimation (iPMSE), was introduced. That approach utilizes a primary modulator that is inserted between the X-ray source and the object. This modulation enables estimation of the scatter in the projection domain by optimizing an objective function with respect to the scatter estimate. Up to now the approach has not been implemented on a clinical angiography C-arm CT system. In our work, the iPMSE method is transferred to a clinical C-arm CBCT. Additional processing steps are added in order to compensate for the C-arm scanner motion and the automatic X-ray tube current modulation. These challenges were overcome by establishing a reference modulator database and a block-matching algorithm. Experiments with phantom and experimental in vivo data were performed to evaluate the method. We show that scatter correction using primary modulation is possible on a clinical C-arm CBCT. Scatter artifacts in the reconstructions are reduced with the newly extended method. Compared to a scan with a narrow collimation, our approach showed superior results with an improvement of the contrast and the contrast-to-noise ratio for the phantom experiments. In vivo data are evaluated by comparing the results with a scan with a narrow collimation and with a constant scatter correction approach. Scatter correction using primary modulation is possible on a clinical CBCT by compensating for the scanner motion and the tube current modulation. Scatter artifacts could be reduced in the reconstructions of phantom scans and in experimental in vivo data. © 2017 American Association of Physicists in Medicine.

  7. How to simplify transmission-based scatter correction for clinical application

    International Nuclear Information System (INIS)

    Baccarne, V.; Hutton, B.F.

    1998-01-01

    Full text: The performance of ordered subsets (OS) EM reconstruction including attenuation, scatter and spatial resolution correction is evaluated using cardiac Monte Carlo data. We demonstrate how simplifications in the scatter model allow one to correct SPECT data for scatter, in terms of quantitation and quality, in a reasonable time. Initial reconstruction of the 20% window is performed including attenuation correction (broad-beam μ values) to estimate the activity quantitatively (accuracy 3%), but not spatially. A rough reconstruction with 2 iterations (subset size: 8) is sufficient for subsequent scatter correction. An estimate of the primary photons is obtained by projecting the previous distribution including attenuation (narrow-beam μ values). An estimate of the scatter is obtained by convolving the primary estimates with a depth-dependent scatter kernel and scaling the result by a factor calculated from the attenuation map. The correction can be accelerated by convolving several adjacent planes with the same kernel and using an average scaling factor. Simulating the effects of the collimator during the scatter correction was demonstrated to be unnecessary. Final reconstruction is performed using 6 iterations of OSEM, including attenuation (narrow-beam μ values) and spatial resolution correction. Scatter correction is implemented by incorporating the estimated scatter as a constant offset in the forward projection step. The total correction plus reconstruction (64 projections, 40x128 pixels) takes 38 minutes on a Sun Sparc 20. Quantitatively, the accuracy is 7% in a reconstructed slice. The SNR inside the whole myocardium (defined from the original object) is 2.1 and 2.3 in the corrected and the primary slices, respectively. The scatter correction preserves the myocardium-to-ventricle contrast (primary: 0.79, corrected: 0.82). These simplifications allow acceleration of the correction without influencing the quality of the result.
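
    The plane-grouping simplification described above is easy to sketch: adjacent planes share one depth-dependent kernel, and the convolved estimate is scaled by a factor derived from the attenuation map. Gaussian kernels and all names below are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scatter_estimate(primary, kernel_widths, scale, group=4):
    # primary: 3D estimate of primary counts, axis 0 = depth;
    # kernel_widths: one kernel width (pixels) per depth plane;
    # scale: scaling factor derived from the attenuation map (computation
    # omitted here).
    scatter = np.empty_like(primary)
    for z0 in range(0, primary.shape[0], group):
        zs = slice(z0, min(z0 + group, primary.shape[0]))
        width = np.mean(kernel_widths[zs])     # shared kernel for the group
        for z in range(zs.start, zs.stop):
            scatter[z] = gaussian_filter(primary[z], width)
    return scale * scatter

primary = np.random.rand(16, 64, 64)
s = scatter_estimate(primary, np.linspace(2, 6, 16), scale=0.3)
```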

  8. Corrections for the effects of accidental coincidences, Compton scatter, and object size in positron emission mammography (PEM) imaging

    Science.gov (United States)

    Raylman, R. R.; Majewski, S.; Wojcik, R.; Weisenberger, A. G.; Kross, B.; Popov, V.

    2001-06-01

    Positron emission mammography (PEM) has begun to show promise as an effective method for the detection of breast lesions. Due to its utilization of tumor-avid radiopharmaceuticals labeled with positron-emitting radionuclides, this technique may be especially useful in imaging of women with radiodense or fibrocystic breasts. While the use of these radiotracers affords PEM unique capabilities, it also introduces some limitations. Specifically, acceptance of accidental and Compton-scattered coincidence events can decrease lesion detectability. The authors studied the effect of accidental coincidence events on PEM images produced by the presence of 18F-fluorodeoxyglucose in the organs of a subject using an anthropomorphic phantom. A delayed-coincidence technique was tested as a method for correcting PEM images for the occurrence of accidental events. Also, a Compton scatter correction algorithm designed specifically for PEM was developed and tested using a compressed breast phantom. Finally, the effect of object size on image counts and a correction for this effect were explored. The imager used in this study consisted of two PEM detector heads mounted 20 cm apart on a Lorad biopsy apparatus. The results demonstrated that a majority of the accidental coincidence events (∼80%) detected by this system were produced by radiotracer uptake in the adipose and muscle tissue of the torso. The presence of accidental coincidence events was shown to reduce lesion detectability. Much of this effect was eliminated by correction of the images utilizing estimates of accidental-coincidence contamination acquired with delayed coincidence circuitry built into the PEM system. The Compton scatter fraction for this system was ∼14%. Utilization of a new scatter correction algorithm reduced the scatter fraction to ∼1.5%. Finally, the reduction of count recovery due to object size was measured and a correction applied to the data. Application of correction techniques

  9. SU-E-I-07: An Improved Technique for Scatter Correction in PET

    International Nuclear Information System (INIS)

    Lin, S; Wang, Y; Lue, K; Lin, H; Chuang, K

    2014-01-01

    Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to real scatter amounts by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of our proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For our technique and SSS, the normalized standard deviations of the reconstructed images were 0.053 and 0.182, respectively; the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using SF. This method can avoid the bias caused by the insufficient
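
    The proposed C-SSS scaling reduces to looking up a predicted scatter fraction from the empirical curve and rescaling the SSS sinogram so its total matches SF times the total counts. A sketch with an assumed calibration curve (the real one comes from the phantom studies):

```python
import numpy as np

def scale_sss(sss, total_counts, avg_mu, sf_curve):
    # sss: unscaled single-scatter-simulation sinogram;
    # total_counts: total (trues + scatter) counts in the emission data;
    # avg_mu: average attenuation coefficient of the object (1/cm);
    # sf_curve: (mu_samples, sf_samples) from the phantom calibration.
    sf = np.interp(avg_mu, *sf_curve)              # predicted scatter fraction
    return sss * (sf * total_counts / sss.sum())   # rescale to SF * counts

sf_curve = (np.array([0.08, 0.10, 0.12]), np.array([0.25, 0.33, 0.40]))
scaled = scale_sss(np.random.rand(100, 80), 1e7, 0.096, sf_curve)
```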

  11. Software correction of scatter coincidence in positron CT

    International Nuclear Information System (INIS)

    Endo, M.; Iinuma, T.A.

    1984-01-01

    This paper describes a software correction of scatter coincidence in positron CT which is based on an estimation of scatter projections from true projections by an integral transform. Kernels for the integral transform are projected distributions of scatter coincidences for a line source at different positions in a water phantom and are calculated by Klein-Nishina's formula. True projections of any composite object can be determined from measured projections by iterative applications of the integral transform. The correction method was tested in computer simulations and phantom experiments with Positologica. The results showed that effects of scatter coincidence are not negligible in the quantitation of images, but the correction reduces them significantly. (orig.)
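
    The iterative application of the integral transform can be sketched as a fixed-point iteration: if scatter is a linear transform K of the true projection, then true = measured − K·true, which converges when K is suitably small. The toy kernel below is a stand-in; the real kernels come from the Klein-Nishina line-source calculations:

```python
import numpy as np

def correct_projection(measured, kernel, n_iter=10):
    # Iterate true <- measured - K @ true, starting from the measured data.
    true = measured.copy()
    for _ in range(n_iter):
        true = measured - kernel @ true
    return true

n = 128
x = np.arange(n)
kernel = 0.02 * np.exp(-np.abs(x[:, None] - x[None, :]) / 15.0)  # toy K
p_true = np.exp(-0.5 * ((x - 64) / 10.0) ** 2)
p_meas = p_true + kernel @ p_true                 # "measured" = true + scatter
print(np.abs(correct_projection(p_meas, kernel) - p_true).max())
```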

  12. Proton dose calculation on scatter-corrected CBCT image: Feasibility study for adaptive proton therapy

    Energy Technology Data Exchange (ETDEWEB)

    Park, Yang-Kyun, E-mail: ykpark@mgh.harvard.edu; Sharp, Gregory C.; Phillips, Justin; Winey, Brian A. [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States)

    2015-08-15

    Purpose: To demonstrate the feasibility of proton dose calculation on scatter-corrected cone-beam computed tomographic (CBCT) images for the purpose of adaptive proton therapy. Methods: CBCT projection images were acquired from anthropomorphic phantoms and a prostate patient using the on-board imaging system of an Elekta Infinity linear accelerator. Two previously introduced techniques were used to correct the scattered x-rays in the raw projection images: uniform scatter correction (CBCT_us) and a priori CT-based scatter correction (CBCT_ap). CBCT images were reconstructed using a standard FDK algorithm and a GPU-based reconstruction toolkit. Soft-tissue ROI-based HU shifting was used to improve the HU accuracy of the uncorrected CBCT images and CBCT_us, while no HU change was applied to CBCT_ap. The degree of equivalence of the corrected CBCT images with respect to the reference CT image (CT_ref) was evaluated by using angular profiles of water equivalent path length (WEPL) and passively scattered proton treatment plans. CBCT_ap was further evaluated in more realistic scenarios such as rectal filling and weight loss to assess the effect of mismatched prior information on the corrected images. Results: The uncorrected CBCT and CBCT_us images demonstrated substantial WEPL discrepancies (7.3 ± 5.3 mm and 11.1 ± 6.6 mm, respectively) with respect to CT_ref, while the CBCT_ap images showed substantially reduced WEPL errors (2.4 ± 2.0 mm). Similarly, the CBCT_ap-based treatment plans demonstrated a high pass rate (96.0% ± 2.5% with 2 mm/2% criteria) in a 3D gamma analysis. Conclusions: The a priori CT-based scatter correction technique was shown to be promising for adaptive proton therapy, as it achieved proton dose distributions and water equivalent path lengths equivalent to those of a reference CT in a selection of anthropomorphic phantoms.

  13. Neural network scatter correction technique for digital radiography

    International Nuclear Information System (INIS)

    Boone, J.M.

    1990-01-01

    This paper presents a scatter correction technique based on artificial neural networks. The technique utilizes the acquisition of a conventional digital radiographic image, coupled with the acquisition of a multiple pencil beam (micro-aperture) digital image. Image subtraction results in a sparsely sampled estimate of the scatter component in the image. The neural network is trained to develop a causal relationship between image data on the low-pass filtered open field image and the sparsely sampled scatter image, and then the trained network is used to correct the entire image (pixel by pixel) in a manner which is operationally similar to but potentially more powerful than convolution. The technique is described and is illustrated using clinical primary component images combined with scatter component images that are realistically simulated using the results from previously reported Monte Carlo investigations. The results indicate that an accurate scatter correction can be realized using this technique
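
    The pipeline described above can be imitated in a few lines. The sketch below trains a small network to map (position, low-pass open-field value) to sparsely sampled scatter and then predicts a dense scatter map; the data, the 0.3 scatter scaling, and the network size are fabricated stand-ins, and a generic MLP replaces whatever network topology the original work used.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Fabricated stand-ins: an open-field image and sparse micro-aperture scatter samples.
image = rng.uniform(0.2, 1.0, (128, 128))
lowpass = gaussian_filter(image, sigma=8)           # low-pass filtered open-field image
rows_s, cols_s = np.mgrid[4:128:16, 4:128:16]       # sparse pencil-beam sampling grid
scatter_samples = 0.3 * lowpass[rows_s, cols_s]     # stand-in for measured scatter

# Train (row, col, low-pass value) -> scatter on the sparse samples.
X = np.column_stack([rows_s.ravel() / 128.0, cols_s.ravel() / 128.0,
                     lowpass[rows_s, cols_s].ravel()])
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X, scatter_samples.ravel())

# Predict a dense scatter map and correct the whole image pixel by pixel.
rows, cols = np.mgrid[0:128, 0:128]
X_all = np.column_stack([rows.ravel() / 128.0, cols.ravel() / 128.0, lowpass.ravel()])
corrected = image - net.predict(X_all).reshape(128, 128)
```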

  14. X-ray scatter correction method for dedicated breast computed tomography: improvements and initial patient testing

    International Nuclear Information System (INIS)

    Ramamurthy, Senthil; D’Orsi, Carl J; Sechopoulos, Ioannis

    2016-01-01

    A previously proposed x-ray scatter correction method for dedicated breast computed tomography was further developed and implemented so as to allow for initial patient testing. The method involves the acquisition of a complete second set of breast CT projections covering 360° with a perforated tungsten plate in the path of the x-ray beam. To make patient testing feasible, a wirelessly controlled electronic positioner for the tungsten plate was designed and added to a breast CT system. Other improvements to the algorithm were implemented, including automated exclusion of non-valid primary estimate points and the use of a different approximation method to estimate the full scatter signal. To evaluate the effectiveness of the algorithm, the resulting image quality was assessed with a breast phantom and with nine patient images. The algorithmic improvements avoided the introduction of artifacts, especially at the object borders, which had been an issue in some cases with the previous implementation. Both contrast, in terms of signal difference, and signal difference-to-noise ratio were improved with the proposed method, as opposed to the correction algorithm incorporated in the system, which does not recover contrast. Patient image evaluation also showed enhanced contrast, better cupping correction, and more consistent voxel values for the different tissues. The algorithm also reduces artifacts present in reconstructions of non-regularly shaped breasts. With the implemented hardware and software improvements, the proposed method can be reliably used during patient breast CT imaging, resulting in improved image quality, no introduction of artifacts, and in some cases reduction of artifacts already present. The impact of the algorithm on actual clinical performance for detection, diagnosis and other clinical tasks in breast imaging remains to be evaluated. (paper)

  15. Comparative evaluation of scatter correction techniques in 3D positron emission tomography

    CERN Document Server

    Zaidi, H

    2000-01-01

    Much research and development has been concentrated on the scatter compensation required for quantitative 3D PET. Increasingly sophisticated scatter correction procedures are under investigation, particularly those based on accurate scatter models, and iterative reconstruction-based scatter compensation approaches. The main difference among the correction methods is the way in which the scatter component in the selected energy window is estimated. Monte Carlo methods give further insight and might in themselves offer a possible correction procedure. Methods: Five scatter correction methods are compared in this paper where applicable: the dual-energy window (DEW) technique, the convolution-subtraction (CVS) method, two variants of the Monte Carlo-based scatter correction technique (MCBSC1 and MCBSC2), and our newly developed statistical reconstruction-based scatter correction (SRBSC) method. These scatter correction techniques are evaluated using Monte Carlo simulation studies, experimental phantom measurements...

  16. SU-D-206-07: CBCT Scatter Correction Based On Rotating Collimator

    International Nuclear Information System (INIS)

    Yu, G; Feng, Z; Yin, Y; Qiang, L; Li, B; Huang, P; Li, D

    2016-01-01

    Purpose: Scatter correction in cone-beam computed tomography (CBCT) has a marked effect on removing image noise and the cupping artifact and on increasing image contrast. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, mechanical inconvenience and a propensity for residual artifacts have limited further basic and clinical research. Here, we propose a rotating collimator-based approach, in conjunction with reconstruction based on a discrete Radon transform and Tchebichef moments algorithm, to correct scatter-induced artifacts. Methods: A rotating collimator, comprising round tungsten alloy strips, was mounted on a linear actuator. The rotating collimator is divided equally into six portions; the round strips are evenly spaced within each portion but staggered between portions. A step motor connected to the rotating collimator rotated the blocker around the x-ray source during CBCT acquisition. CBCT reconstruction based on a discrete Radon transform and Tchebichef moments algorithm was then performed. Experimental studies using a water phantom and the Catphan504 were carried out to evaluate the performance of the proposed scheme. Results: The proposed algorithm was tested on both Monte Carlo simulations and actual experiments with the Catphan504 phantom. In the simulation, the mean square error of the reconstruction decreased from 16% to 1.18%, the cupping (τcup) from 14.005% to 0.66%, and the peak signal-to-noise ratio increased from 16.9594 to 31.45. In the actual experiments, the induced visual artifacts were significantly reduced. Conclusion: We conducted an experiment on a CBCT imaging system with a rotating collimator to develop and optimize an x-ray scatter control and reduction technique. The proposed method is attractive in applications where high CBCT image quality is critical, for example, dose calculation in adaptive radiation therapy. We want to thank Dr. Lei

  17. SU-D-206-07: CBCT Scatter Correction Based On Rotating Collimator

    Energy Technology Data Exchange (ETDEWEB)

    Yu, G; Feng, Z [Shandong Normal University, Jinan, Shandong (China); Yin, Y [Shandong Cancer Hospital and Institute, China, Jinan, Shandong (China); Qiang, L [Zhang Jiagang STFK Medical Device Co, Zhangjiangkang, Suzhou (China); Li, B [Shandong Academy of Medical Sciences, Jinan, Shandong provice (China); Huang, P [Shandong Province Key Laboratory of Medical Physics and Image Processing Te, Ji’nan, Shandong province (China); Li, D [School of Physics and Electronics, Shandong Normal University, Jinan, Shandong (China)

    2016-06-15

    Purpose: Scatter correction in cone-beam computed tomography (CBCT) has a marked effect on removing image noise and the cupping artifact and on increasing image contrast. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, mechanical inconvenience and a propensity for residual artifacts have limited further basic and clinical research. Here, we propose a rotating collimator-based approach, in conjunction with reconstruction based on a discrete Radon transform and Tchebichef moments algorithm, to correct scatter-induced artifacts. Methods: A rotating collimator, comprising round tungsten alloy strips, was mounted on a linear actuator. The rotating collimator is divided equally into six portions; the round strips are evenly spaced within each portion but staggered between portions. A step motor connected to the rotating collimator rotated the blocker around the x-ray source during CBCT acquisition. CBCT reconstruction based on a discrete Radon transform and Tchebichef moments algorithm was then performed. Experimental studies using a water phantom and the Catphan504 were carried out to evaluate the performance of the proposed scheme. Results: The proposed algorithm was tested on both Monte Carlo simulations and actual experiments with the Catphan504 phantom. In the simulation, the mean square error of the reconstruction decreased from 16% to 1.18%, the cupping (τcup) from 14.005% to 0.66%, and the peak signal-to-noise ratio increased from 16.9594 to 31.45. In the actual experiments, the induced visual artifacts were significantly reduced. Conclusion: We conducted an experiment on a CBCT imaging system with a rotating collimator to develop and optimize an x-ray scatter control and reduction technique. The proposed method is attractive in applications where high CBCT image quality is critical, for example, dose calculation in adaptive radiation therapy. We want to thank Dr. Lei

  18. A general framework and review of scatter correction methods in cone beam CT. Part 2: Scatter estimation approaches

    International Nuclear Information System (INIS)

    Ruehrnschopf, Ernst-Peter; Klingenbeck, Klaus

    2011-01-01

    The main components of scatter correction procedures are scatter estimation and a scatter compensation algorithm. This paper completes a previous paper where a general framework for scatter compensation was presented under the prerequisite that a scatter estimation method is already available. In the current paper, the authors give a systematic review of the variety of scatter estimation approaches. Scatter estimation methods are based on measurements, mathematical-physical models, or combinations of both. For completeness they present an overview of measurement-based methods, but the main topic is the theoretically more demanding models, such as analytical, Monte Carlo, and hybrid models. Further classifications are 3D image-based and 2D projection-based approaches. The authors present a system-theoretic framework, which allows one to proceed top-down from a general 3D formulation, by successive approximations, to efficient 2D approaches. A widely useful method is the beam-scatter-kernel superposition approach. Together with the review of standard methods, the authors discuss their limitations and how to take into account the issues of object dependency, spatial variance, deformation of scatter kernels, and external and internal absorbers. Open questions for further investigations are indicated. Finally, the authors comment on some special issues and applications, such as the bow-tie filter, offset detector, truncated data, and dual-source CT.

  19. Scattering Correction For Image Reconstruction In Flash Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Liangzhi; Wang, Mengqi; Wu, Hongchun; Liu, Zhouyu; Cheng, Yuxiong; Zhang, Hongbo [Xi' an Jiaotong Univ., Xi' an (China)

    2013-08-15

    Scattered photons cause blurring and distortions in flash radiography, significantly reducing the accuracy of image reconstruction. The effect of the scattered photons is taken into account and an iterative subtraction of the scattered photons is proposed to correct the scattering effect for image restoration. In order to subtract the scattering contribution, the flux of scattered photons is estimated as the sum of two components. The single scattered component is calculated accurately together with the uncollided flux along the characteristic ray, while the multiple scattered component is evaluated using correction coefficients pre-obtained from Monte Carlo simulations. The arbitrary geometry pretreatment and ray tracing are carried out based on the customization of AutoCAD. With the above model, an Iterative Procedure for image restORation code, IPOR, is developed. Numerical results demonstrate that the IPOR code is much more accurate than the direct reconstruction solution without scattering correction and that it has a very high computational efficiency.

  20. Scattering Correction For Image Reconstruction In Flash Radiography

    International Nuclear Information System (INIS)

    Cao, Liangzhi; Wang, Mengqi; Wu, Hongchun; Liu, Zhouyu; Cheng, Yuxiong; Zhang, Hongbo

    2013-01-01

    Scattered photons cause blurring and distortions in flash radiography, significantly reducing the accuracy of image reconstruction. The effect of the scattered photons is taken into account and an iterative subtraction of the scattered photons is proposed to correct the scattering effect for image restoration. In order to subtract the scattering contribution, the flux of scattered photons is estimated as the sum of two components. The single scattered component is calculated accurately together with the uncollided flux along the characteristic ray, while the multiple scattered component is evaluated using correction coefficients pre-obtained from Monte Carlo simulations. The arbitrary geometry pretreatment and ray tracing are carried out based on the customization of AutoCAD. With the above model, an Iterative Procedure for image restORation code, IPOR, is developed. Numerical results demonstrate that the IPOR code is much more accurate than the direct reconstruction solution without scattering correction and that it has a very high computational efficiency

  1. TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Y [UT Southwestern Medical Center, Dallas, TX (United States); Southern Medical University, Guangzhou (China); Bai, T [UT Southwestern Medical Center, Dallas, TX (United States); Xi' an Jiaotong University, Xi' an (China); Yan, H; Ouyang, L; Wang, J; Pompos, A; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States); Zhou, L [Southern Medical University, Guangzhou (China)

    2014-06-15

    Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, automatically within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK results; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We studied the impacts of photon histories and volume down-sampling factors on the accuracy of scatter estimation. A Fourier analysis showed that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time was 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds. This study is supported in part by NIH (1R01CA154747-01), The Core Technology Research

  2. Energy-angle correlation correction algorithm for monochromatic computed tomography based on Thomson scattering X-ray source

    Science.gov (United States)

    Chi, Zhijun; Du, Yingchao; Huang, Wenhui; Tang, Chuanxiang

    2017-12-01

    The necessity for compact and relatively low-cost x-ray sources with monochromaticity, continuous tunability of x-ray energy, high spatial coherence, straightforward polarization control, and high brightness has led to the rapid development of Thomson scattering x-ray sources. To meet the requirement of in-situ monochromatic computed tomography (CT) for large-scale and/or high-attenuation materials based on this type of x-ray source, there is an increasing demand for effective algorithms to correct the energy-angle correlation. In this paper, we take advantage of the parametrization of the x-ray attenuation coefficient to resolve this problem. The linear attenuation coefficient of a material can be decomposed into a linear combination of the energy-dependent photoelectric and Compton cross-sections in the keV energy regime without K-edge discontinuities, and the line integrals of the decomposition coefficients of the above two parts can be determined by performing two spectrally different measurements. After that, the line integral of the linear attenuation coefficient of an imaging object at an energy of interest can be derived through the above parametrization formula, and a monochromatic CT can be reconstructed at this energy using traditional reconstruction methods, e.g., filtered back projection or the algebraic reconstruction technique. Not only can monochromatic CT be realized, but the distributions of the effective atomic number and electron density of the imaging object can also be retrieved at the expense of a dual-energy CT scan. Simulation results validate our proposal and are shown in this paper. Our results will further expand the scope of application for Thomson scattering x-ray sources.
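
    The decomposition underlying this approach can be illustrated as follows, under the simplifying assumption of quasi-monochromatic measurements at two energies (so each ray reduces to a 2×2 linear solve); the basis functions use the standard 1/E³ photoelectric approximation and the Klein-Nishina cross-section shape, and all names are illustrative.

```python
import numpy as np

def photoelectric(E):
    """Photoelectric energy dependence, standard 1/E^3 approximation (E in keV)."""
    return 1.0 / E**3

def klein_nishina(E):
    """Klein-Nishina total cross-section shape versus energy (E in keV)."""
    a = E / 511.0  # energy in units of the electron rest mass
    return ((1 + a) / a**2 * (2 * (1 + a) / (1 + 2 * a) - np.log(1 + 2 * a) / a)
            + np.log(1 + 2 * a) / (2 * a) - (1 + 3 * a) / (1 + 2 * a) ** 2)

def decompose(logp1, logp2, E1, E2):
    """Solve for the photoelectric/Compton line integrals from two spectrally
    different quasi-monochromatic measurements -ln(I/I0) along one ray."""
    M = np.array([[photoelectric(E1), klein_nishina(E1)],
                  [photoelectric(E2), klein_nishina(E2)]])
    return np.linalg.solve(M, np.array([logp1, logp2]))

def mono_line_integral(a_pe, a_kn, E):
    """Synthesize the monochromatic line integral of mu at any energy of interest."""
    return a_pe * photoelectric(E) + a_kn * klein_nishina(E)
```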

  3. Scatter factor corrections for elongated fields

    International Nuclear Information System (INIS)

    Higgins, P.D.; Sohn, W.H.; Sibata, C.H.; McCarthy, W.A.

    1989-01-01

    Measurements have been made to determine scatter factor corrections for elongated fields of Cobalt-60 and for nominal linear accelerator energies of 6 MV (Siemens Mevatron 67) and 18 MV (AECL Therac 20). It was found that for every energy the collimator scatter factor varies by 2% or more as the field length-to-width ratio increases beyond 3:1. The phantom scatter factor is independent of which collimator pair is elongated at these energies. For 18 MV photons it was found that the collimator scatter factor is complicated by field-size-dependent backscatter into the beam monitor

  4. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    Science.gov (United States)

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, x-ray cross scattering from multiple simultaneously activated x-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and was used to calculate x-ray scattering signals in both the forward direction and cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experimentation that were designed to emulate the image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images quickly converged toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions-of-interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-noise-ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.

  5. A simple, direct method for x-ray scatter estimation and correction in digital radiography and cone-beam CT

    International Nuclear Information System (INIS)

    Siewerdsen, J.H.; Daly, M.J.; Bakhtiar, B.

    2006-01-01

    X-ray scatter poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in contrast reduction, image artifacts, and lack of CT number accuracy. We report the performance of a simple scatter correction method in which scatter fluence is estimated directly in each projection from pixel values near the edge of the detector behind the collimator leaves. The algorithm operates on the simple assumption that signal in the collimator shadow is attributable to x-ray scatter, and the 2D scatter fluence is estimated by interpolating between pixel values measured along the top and bottom edges of the detector behind the collimator leaves. The resulting scatter fluence estimate is subtracted from each projection to yield an estimate of the primary-only images for CBCT reconstruction. Performance was investigated in phantom experiments on an experimental CBCT benchtop, and the effect on image quality was demonstrated in patient images (head, abdomen, and pelvis sites) obtained on a preclinical system for CBCT-guided radiation therapy. The algorithm provides significant reduction in scatter artifacts without compromise in contrast-to-noise ratio (CNR). For example, in a head phantom, cupping artifact was essentially eliminated, CT number accuracy was restored to within 3%, and CNR (breast-to-water) was improved by up to 50%. Similarly in a body phantom, cupping artifact was reduced by at least a factor of 2 without loss in CNR. Patient images demonstrate significantly increased uniformity, accuracy, and contrast, with an overall improvement in image quality in all sites investigated. Qualitative evaluation illustrates that soft-tissue structures that are otherwise undetectable are clearly delineated in scatter-corrected reconstructions. Since scatter is estimated directly in each projection, the algorithm is robust with respect to system geometry, patient size and heterogeneity, patient motion, etc. Operating without prior information, analytical modeling
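
    A minimal sketch of this edge-based estimate, assuming a few detector rows at the top and bottom lie in the collimator shadow and that linear interpolation down each column suffices; the row counts and names are illustrative.

```python
import numpy as np

def primary_from_shadow(proj, top_rows=4, bottom_rows=4):
    """Estimate the 2D scatter fluence by linearly interpolating, down each
    detector column, between the mean signals measured in the collimator
    shadow at the top and bottom edges; subtract to get a primary estimate."""
    n_rows = proj.shape[0]
    s_top = proj[:top_rows].mean(axis=0)        # scatter sampled behind top leaves
    s_bot = proj[-bottom_rows:].mean(axis=0)    # scatter sampled behind bottom leaves
    w = np.linspace(0.0, 1.0, n_rows)[:, None]  # linear weights down each column
    scatter = (1.0 - w) * s_top + w * s_bot
    return np.clip(proj - scatter, 0.0, None)   # primary-only projection estimate
```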

  6. Real-time scatter measurement and correction in film radiography

    International Nuclear Information System (INIS)

    Shaw, C.G.

    1987-01-01

    A technique for real-time scatter measurement and correction in scanning film radiography is described. With this technique, collimated x-ray fan beams are used to partially reject scattered radiation. Photodiodes are attached to the aft-collimator for sampled scatter measurement. Such measurement allows the scatter distribution to be reconstructed and subtracted from digitized film image data for accurate transmission measurement. In this presentation the authors discuss the physical and technical considerations of this scatter correction technique. Examples are shown that demonstrate the feasibility of the technique. Improved x-ray transmission measurement and dual-energy subtraction imaging are demonstrated with phantoms

  7. SU-D-206-04: Iterative CBCT Scatter Shading Correction Without Prior Information

    International Nuclear Information System (INIS)

    Bai, Y; Wu, P; Mao, T; Gong, S; Wang, J; Niu, T; Sheng, K; Xie, Y

    2016-01-01

    Purpose: To estimate and remove the scatter contamination in the acquired projections of cone-beam CT (CBCT), suppressing the shading artifacts and improving image quality without prior information. Methods: The uncorrected CBCT images containing shading artifacts are reconstructed by applying the standard FDK algorithm to the CBCT raw projections. The uncorrected image is then segmented to generate an initial template image. To estimate the scatter signal, the differences are calculated by subtracting the simulated projections of the template image from the raw projections. Since scatter signals are dominantly continuous and low-frequency in the projection domain, they are estimated by low-pass filtering the difference signals and subtracted from the raw CBCT projections to achieve the scatter correction. Finally, the corrected CBCT image is reconstructed from the corrected projection data. Since an accurate template image is not readily segmented from the uncorrected CBCT image, the proposed scheme is iterated until the produced template no longer changes. Results: The proposed scheme is evaluated on Catphan©600 phantom data and CBCT images acquired from a pelvis patient. The results show that shading artifacts are effectively suppressed by the proposed method. Using multi-detector CT (MDCT) images as reference, quantitative analysis was performed to measure the quality of the corrected images. Compared to images without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 50 HU and increases spatial uniformity. Conclusion: An iterative strategy that does not rely on prior information is proposed in this work to remove the shading artifacts due to scatter contamination in the projection domain. The method is evaluated in phantom and patient studies, and the results show that image quality is remarkably improved. The proposed method is efficient and practical for addressing the poor image quality of CBCT
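
    One plausible realization of this prior-free loop is sketched below; the reconstructor, segmentation, and forward projector are passed in as placeholder callables, and the Gaussian low-pass width is an illustrative choice rather than a value from the record.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def iterative_shading_correction(raw_proj, reconstruct, segment, forward_project,
                                 n_iter=3, sigma=20.0):
    """Prior-free loop: reconstruct, segment into a template, forward-project,
    low-pass filter the (raw - simulated) difference as the scatter estimate,
    subtract, and repeat.  reconstruct/segment/forward_project are placeholder
    callables (e.g. FDK, thresholding into tissue classes, a ray-driven projector)."""
    proj = raw_proj.copy()
    for _ in range(n_iter):
        template = segment(reconstruct(proj))
        diff = raw_proj - forward_project(template)   # scatter plus model error
        scatter = gaussian_filter(diff, sigma=sigma)  # keep only the low-frequency part
        proj = np.clip(raw_proj - scatter, 0.0, None)
    return reconstruct(proj)
```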

  8. SU-D-206-04: Iterative CBCT Scatter Shading Correction Without Prior Information

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Y; Wu, P; Mao, T; Gong, S; Wang, J; Niu, T [Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang (China); Sheng, K [Department of Radiation Oncology, University of California, Los Angeles, School of Medicine, Los Angeles, CA (United States); Xie, Y [Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong (China)

    2016-06-15

    Purpose: To estimate and remove the scatter contamination in the acquired projection of cone-beam CT (CBCT), to suppress the shading artifacts and improve the image quality without prior information. Methods: The uncorrected CBCT images containing shading artifacts are reconstructed by applying the standard FDK algorithm on CBCT raw projections. The uncorrected image is then segmented to generate an initial template image. To estimate scatter signal, the differences are calculated by subtracting the simulated projections of the template image from the raw projections. Since scatter signals are dominantly continuous and low-frequency in the projection domain, they are estimated by low-pass filtering the difference signals and subtracted from the raw CBCT projections to achieve the scatter correction. Finally, the corrected CBCT image is reconstructed from the corrected projection data. Since an accurate template image is not readily segmented from the uncorrected CBCT image, the proposed scheme is iterated until the produced template is not altered. Results: The proposed scheme is evaluated on the Catphan©600 phantom data and CBCT images acquired from a pelvis patient. The result shows that shading artifacts have been effectively suppressed by the proposed method. Using multi-detector CT (MDCT) images as reference, quantitative analysis is operated to measure the quality of corrected images. Compared to images without correction, the method proposed reduces the overall CT number error from over 200 HU to be less than 50 HU and can increase the spatial uniformity. Conclusion: An iterative strategy without relying on the prior information is proposed in this work to remove the shading artifacts due to scatter contamination in the projection domain. The method is evaluated in phantom and patient studies and the result shows that the image quality is remarkably improved. The proposed method is efficient and practical to address the poor image quality issue of CBCT

  9. An Uneven Illumination Correction Algorithm for Optical Remote Sensing Images Covered with Thin Clouds

    Directory of Open Access Journals (Sweden)

    Xiaole Shen

    2015-09-01

    The uneven illumination phenomenon caused by thin clouds reduces the quality of remote sensing images and adversely affects image interpretation. To remove the effect of thin clouds on images, an uneven illumination correction can be applied. In this paper, an effective uneven illumination correction algorithm is proposed to remove the effect of thin clouds and to restore the ground information of the optical remote sensing image. The imaging model of remote sensing images covered by thin clouds is analyzed. Due to transmission attenuation, reflection, and scattering, thin cloud cover usually increases region brightness and reduces the saturation and contrast of the image. Accordingly, a wavelet-domain enhancement is performed on the image in Hue-Saturation-Value (HSV) color space. We use images with thin clouds in the Wuhan area captured by the QuickBird and ZiYuan-3 (ZY-3) satellites for experiments. Three traditional uneven illumination correction algorithms, i.e., the multi-scale Retinex (MSR) algorithm, the homomorphic filtering (HF)-based algorithm, and the wavelet transform-based MASK (WT-MASK) algorithm, are performed for comparison. Five indicators, i.e., mean value, standard deviation, information entropy, average gradient, and hue deviation index (HDI), are used to analyze the effect of the algorithms. The experimental results show that the proposed algorithm can effectively eliminate the influence of thin clouds and restore the real color of ground objects under thin clouds.

  10. A technique of scatter and glare correction for videodensitometric studies in digital subtraction videoangiography

    International Nuclear Information System (INIS)

    Shaw, C.G.; Ergun, D.L.; Myerowitz, P.D.; Van Lysel, M.S.; Mistretta, C.A.; Zarnstorff, W.C.; Crummy, A.B.

    1982-01-01

    The logarithmic amplification of video signals and the availability of data in digital form make digital subtraction videoangiography a suitable tool for videodensitometric estimation of physiological quantities. A system for this purpose was implemented with a digital video image processor. However, it was found that the radiation scattering and veiling glare present in the image-intensified video must be removed to make meaningful quantitations. An algorithm to make such a correction was developed and is presented. With this correction, the videodensitometry system was calibrated with phantoms and used to measure the left ventricular ejection fraction of a canine heart

  11. WE-AB-207A-08: BEST IN PHYSICS (IMAGING): Advanced Scatter Correction and Iterative Reconstruction for Improved Cone-Beam CT Imaging On the TrueBeam Radiotherapy Machine

    Energy Technology Data Exchange (ETDEWEB)

    Wang, A; Paysan, P; Brehm, M; Maslowski, A; Lehmann, M; Messmer, P; Munro, P; Yoon, S; Star-Lack, J; Seghers, D [Varian Medical Systems, Palo Alto, CA (United States)

    2016-06-15

    Purpose: To improve CBCT image quality for image-guided radiotherapy by applying advanced reconstruction algorithms to overcome scatter, noise, and artifact limitations. Methods: CBCT is used extensively for patient setup in radiotherapy. However, image quality generally falls short of diagnostic CT, limiting soft-tissue based positioning and potential applications such as adaptive radiotherapy. The conventional TrueBeam CBCT reconstructor uses a basic scatter correction and FDK reconstruction, resulting in residual scatter artifacts, suboptimal image noise characteristics, and other artifacts such as cone-beam artifacts. We have developed an advanced scatter correction that uses a finite-element solver (AcurosCTS) to model the behavior of photons as they pass (and scatter) through the object. Furthermore, iterative reconstruction is applied to the scatter-corrected projections, enforcing data consistency with statistical weighting and applying an edge-preserving image regularizer to reduce image noise. The combined algorithms have been implemented on a GPU. CBCT projections from clinically operating TrueBeam systems have been used to compare image quality between the conventional and improved reconstruction methods. Planning CT images of the same patients have also been compared. Results: The advanced scatter correction removes shading and inhomogeneity artifacts, reducing the scatter artifact from 99.5 HU to 13.7 HU in a typical pelvis case. Iterative reconstruction provides further benefit by reducing image noise and eliminating streak artifacts, thereby improving soft-tissue visualization. In a clinical head and pelvis CBCT, the noise was reduced by 43% and 48%, respectively, with no change in spatial resolution (assessed visually). Additional benefits include reduction of cone-beam artifacts and reduction of metal artifacts due to intrinsic downweighting of corrupted rays. Conclusion: The combination of an advanced scatter correction with iterative reconstruction

  12. Non-eikonal corrections for the scattering of spin-one particles

    Energy Technology Data Exchange (ETDEWEB)

    Gaber, M.W.; Wilkin, C. [Department of Physics and Astronomy, University College London, WC1E 6BT, London (United Kingdom); Al-Khalili, J.S. [Department of Physics, University of Surrey, GU2 7XH, Guildford, Surrey (United Kingdom)

    2004-08-01

    The Wallace Fourier-Bessel expansion of the scattering amplitude is generalised to the case of the scattering of a spin-one particle from a potential with a single tensor coupling as well as central and spin-orbit terms. A generating function for the eikonal-phase (quantum) corrections is evaluated in closed form. For medium-energy deuteron-nucleus scattering, the first-order correction is dominant and is shown to be significant in the interpretation of analysing power measurements. This conclusion is supported by a numerical comparison of the eikonal observables, evaluated with and without corrections, with those obtained from a numerical resolution of the Schroedinger equation for d-58Ni scattering at incident deuteron energies of 400 and 700 MeV. (orig.)

  13. Evaluation of a method for correction of scatter radiation in thorax cone beam CT; Evaluation d'une methode de correction du rayonnement diffuse en tomographie du thorax avec faisceau conique

    Energy Technology Data Exchange (ETDEWEB)

    Rinkel, J.; Dinten, J.M. [CEA Grenoble (DTBS/STD), Lab. d' Electronique et de Technologie de l' Informatique, LETI, 38 (France); Esteve, F. [European Synchrotron Radiation Facility (ESRF), 38 - Grenoble (France)

    2004-07-01

    Purpose: Cone beam CT (CBCT) enables three-dimensional imaging with isotropic resolution. X-ray scatter estimation is a major challenge for quantitative CBCT imaging of the thorax: the scatter level is significantly higher on cone beam systems than on collimated fan beam systems. The effects of this scattered radiation are cupping artefacts, streaks, and quantification inaccuracies. The conventional beam-stop scatter estimation approach can be used for CBCT but leads to a significant increase in dose and acquisition time. An original scatter management process requiring no supplementary acquisition has been developed at CEA-LETI. Methods and Materials: This Analytical Plus Indexing-based (API) method of scatter correction in CBCT is based on scatter calibration through offline acquisitions with beam stops on lucite plates, combined with an analytical transformation derived from physical equations. This approach has been applied with success in bone densitometry and mammography. To evaluate this method in CBCT, acquisitions from a thorax phantom with and without beam stops were performed. To compare different scatter correction approaches, the Feldkamp algorithm was applied to raw data corrected for scatter by the API and beam-stop approaches. Results: The API method provides results in good agreement with the beam-stop array approach, suppressing the cupping artefact. The influence of the scatter correction method on noise in the reconstructed images has also been evaluated. Conclusion: The results indicate that the API method is effective for quantitative CBCT imaging of the thorax. Compared to a beam-stop array method, it needs a lower x-ray dose and shortens acquisition time. (authors)

  14. Multiple scattering and attenuation corrections in Deep Inelastic Neutron Scattering experiments

    International Nuclear Information System (INIS)

    Dawidowski, J; Blostein, J J; Granada, J R

    2006-01-01

    Multiple scattering and attenuation corrections in Deep Inelastic Neutron Scattering experiments are analyzed. The theoretical basis of the method is stated, and a Monte Carlo procedure to perform the calculation is presented. The results are compared with experimental data. The importance of accuracy in the description of the experimental parameters is tested, and the implications of the present results for the data analysis procedures are examined

  15. Angle Statistics Reconstruction: a robust reconstruction algorithm for Muon Scattering Tomography

    Science.gov (United States)

    Stapleton, M.; Burns, J.; Quillin, S.; Steer, C.

    2014-11-01

    Muon Scattering Tomography (MST) is a technique for using the scattering of cosmic ray muons to probe the contents of enclosed volumes. As a muon passes through material it undergoes multiple Coulomb scattering, where the amount of scattering is dependent on the density and atomic number of the material as well as the path length. Hence, MST has been proposed as a means of imaging dense materials, for instance to detect special nuclear material in cargo containers. Algorithms are required to generate an accurate reconstruction of the material density inside the volume from the muon scattering information and some have already been proposed, most notably the Point of Closest Approach (PoCA) and Maximum Likelihood/Expectation Maximisation (MLEM) algorithms. However, whilst PoCA-based algorithms are easy to implement, they perform rather poorly in practice. Conversely, MLEM is a complicated algorithm to implement and is computationally intensive, and there is currently no published, fast and easily-implementable algorithm that performs well in practice. In this paper, we first provide a detailed analysis of the source of inaccuracy in PoCA-based algorithms. We then motivate an alternative method, based on ideas first laid out by Morris et al., presenting and fully specifying an algorithm that performs well against simulations of realistic scenarios. We argue this new algorithm should be adopted by developers of Muon Scattering Tomography as an alternative to PoCA.
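
    For reference, the PoCA vertex estimate that the paper analyses can be written compactly; the sketch below takes each track as a point plus a direction and returns the midpoint of the common perpendicular between the incoming and outgoing tracks.

```python
import numpy as np

def poca(p_in, d_in, p_out, d_out):
    """Point of Closest Approach between the incoming and outgoing muon tracks
    (each given as a 3D point and a direction); returns the midpoint of the
    common perpendicular, the usual PoCA scattering-vertex estimate."""
    u = d_in / np.linalg.norm(d_in)
    v = d_out / np.linalg.norm(d_out)
    w0 = p_in - p_out
    b, d, e = u @ v, u @ w0, v @ w0
    denom = 1.0 - b * b                 # a = c = 1 for unit directions
    if denom < 1e-12:                   # near-parallel tracks: no unique vertex
        s, t = 0.0, e
    else:
        s, t = (b * e - d) / denom, (e - b * d) / denom
    return 0.5 * ((p_in + s * u) + (p_out + t * v))
```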

  16. Corrections to the large-angle scattering amplitude

    International Nuclear Information System (INIS)

    Goloskokov, S.V.; Kudinov, A.V.; Kuleshov, S.P.

    1979-01-01

    The high-energy behaviour of scattering amplitudes is considered within the framework of the Logunov-Tavchelidze quasipotential approach. A representation of the scattering amplitude of two scalar particles, convenient for studying its asymptotic properties, is given. Corrections to the leading value of the scattering amplitude of first and second order in 1/p are obtained, where p is the momentum of the colliding particles in the centre-of-mass system. An example of the use of the obtained formulas for a concrete quasipotential is given.

  17. An automated phase correction algorithm for retrieving permittivity and permeability of electromagnetic metamaterials

    Directory of Open Access Journals (Sweden)

    Z. X. Cao

    2014-06-01

    Retrieving the complex-valued effective permittivity and permeability of electromagnetic metamaterials (EMMs) based on resonant effects from scattering parameters unavoidably involves a complex logarithmic function. When complex values are expressed in terms of magnitude and phase, an infinite number of phase angles is permissible due to the multi-valued property of complex logarithmic functions. Special attention needs to be paid to ensure continuity of the effective permittivity and permeability of lossy metamaterials as the frequency sweeps. In this paper, an automated phase correction (APC) algorithm is proposed to properly trace and compensate the phase angles of the complex logarithmic function, which may experience abrupt phase jumps near the resonant frequency region of the concerned EMMs; the continuity of the effective optical properties of lossy metamaterials is thereby ensured. The algorithm is then verified by extracting effective optical properties from the simulated scattering parameters of four different types of metamaterial media: a cut-wire cell array, a split ring resonator (SRR) cell array, an electric-LC (E-LC) resonator cell array, and a combined SRR and wire cell array, respectively. The results demonstrate that the proposed algorithm is highly accurate and effective.
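
    The core of any such phase-tracing step can be illustrated on a frequency sweep of transmission data; the sketch below simply unwraps the phase before taking the logarithm, which is the standard remedy for the branch jumps the record describes (the APC algorithm itself is more elaborate, and the function name is illustrative).

```python
import numpy as np

def continuous_log(s_param):
    """Complex logarithm of a transmission coefficient with the phase traced
    continuously across the frequency sweep, so branch jumps near resonance
    do not break the retrieved permittivity/permeability."""
    phase = np.unwrap(np.angle(s_param))   # compensate 2*pi discontinuities
    return np.log(np.abs(s_param)) + 1j * phase
```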

  18. Investigating the effect and photon scattering correction in isotopic scanning with gamma and SPECT

    International Nuclear Information System (INIS)

    Movafeghi, Amir

    1997-01-01

    phantom was an elliptical Jaszczak (or Carlson) phantom. The effects of different system adjustments on scattering were considered. For qualitative comparison in each case, the line spread function and the modulation transfer function were measured and extracted, respectively. This comparison was done for both the laboratory and clinical systems in different experiments, with and without scatter correction. Image reconstruction with a filtered back-projection algorithm was also performed in both cases (with and without scatter correction) and the results were surveyed. The final result was that scattering plays a major role in degrading reconstructed images, and with scatter correction it is possible to compensate for this error and increase image quality

  19. Radiative corrections to neutrino deep inelastic scattering revisited

    International Nuclear Information System (INIS)

    Arbuzov, Andrej B.; Bardin, Dmitry Yu.; Kalinovskaya, Lidia V.

    2005-01-01

    Radiative corrections to neutrino deep inelastic scattering are revisited. One-loop electroweak corrections are re-calculated within the automatic SANC system. Terms with mass singularities are treated including higher-order leading logarithmic corrections. The scheme dependence of corrections due to weak interactions is investigated. The results are implemented into the data analysis of the NOMAD experiment. The present theoretical accuracy in the description of the process is discussed

  20. Binary moving-blocker-based scatter correction in cone-beam computed tomography with width-truncated projections: proof of concept

    Science.gov (United States)

    Lee, Ho; Fahimian, Benjamin P.; Xing, Lei

    2017-03-01

    This paper proposes a binary moving-blocker (BMB)-based technique for scatter correction in cone-beam computed tomography (CBCT). In concept, a beam blocker consisting of lead strips, mounted in front of the x-ray tube, moves rapidly in and out of the beam during a single gantry rotation. The projections are acquired in alternating phases of blocked and unblocked cone beams, where the blocked phase results in a stripe pattern in the width direction. To derive the scatter map from the blocked projections, 1D B-Spline interpolation/extrapolation is applied by using the detected information in the shaded regions. The scatter map of the unblocked projections is corrected by averaging two scatter maps that correspond to their adjacent blocked projections. The scatter-corrected projections are obtained by subtracting the corresponding scatter maps from the projection data and are utilized to generate the CBCT image by a compressed-sensing (CS)-based iterative reconstruction algorithm. Catphan504 and pelvis phantoms were used to evaluate the method’s performance. The proposed BMB-based technique provided an effective method to enhance the image quality by suppressing scatter-induced artifacts, such as ring artifacts around the bowtie area. Compared to CBCT without a blocker, the spatial nonuniformity was reduced from 9.1% to 3.1%. The root-mean-square error of the CT numbers in the regions of interest (ROIs) was reduced from 30.2 HU to 3.8 HU. In addition to high resolution, comparable to that of the benchmark image, the CS-based reconstruction also led to a better contrast-to-noise ratio in seven ROIs. The proposed technique enables complete scatter-corrected CBCT imaging with width-truncated projections and allows reducing the acquisition time to approximately half. This work may have significant implications for image-guided or adaptive radiation therapy, where CBCT is often used.
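
    The interpolation step can be sketched as follows, assuming the shadow centres are known, strictly increasing column indices; a row-by-row cubic B-spline is one plausible reading of the 1D interpolation/extrapolation described above, and the names are illustrative.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

def scatter_map_from_blocked(blocked_proj, strip_centers):
    """Build a scatter map from one blocked projection: sample the detected
    signal at the centres of the lead-strip shadows and B-spline
    interpolate/extrapolate along the width direction, row by row."""
    width = blocked_proj.shape[1]
    x_all = np.arange(width)
    scatter = np.empty_like(blocked_proj, dtype=float)
    for i, row in enumerate(blocked_proj):
        spline = make_interp_spline(strip_centers, row[strip_centers], k=3)
        scatter[i] = spline(x_all)     # extrapolates outside the sampled range
    return scatter
```

    For an unblocked projection, the scatter map would then be taken as the average of the maps from its two adjacent blocked projections, e.g. 0.5 * (scatter_prev + scatter_next), before subtraction and CS-based iterative reconstruction.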

  1. Scatter correction method with primary modulator for dual energy digital radiography: a preliminary study

    Science.gov (United States)

    Jo, Byung-Du; Lee, Young-Jin; Kim, Dae-Hong; Jeon, Pil-Hyun; Kim, Hee-Joung

    2014-03-01

    In conventional digital radiography (DR) using a dual energy subtraction technique, a significant fraction of the detected photons are scattered within the body, resulting in the scatter component. Scattered radiation can significantly deteriorate image quality in diagnostic X-ray imaging systems. Various methods of scatter correction, including both measurement and non-measurement-based methods have been proposed in the past. Both methods can reduce scatter artifacts in images. However, non-measurement-based methods require a homogeneous object and have insufficient scatter component correction. Therefore, we employed a measurement-based method to correct for the scatter component of inhomogeneous objects from dual energy DR (DEDR) images. We performed a simulation study using a Monte Carlo simulation with a primary modulator, which is a measurement-based method for the DEDR system. The primary modulator, which has a checkerboard pattern, was used to modulate primary radiation. Cylindrical phantoms of variable size were used to quantify imaging performance. For scatter estimation, we used Discrete Fourier Transform filtering. The primary modulation method was evaluated using a cylindrical phantom in the DEDR system. The scatter components were accurately removed using a primary modulator. When the results acquired with scatter correction and without correction were compared, the average contrast-to-noise ratio (CNR) with the correction was 1.35 times higher than that obtained without correction, and the average root mean square error (RMSE) with the correction was 38.00% better than that without correction. In the subtraction study, the average CNR with correction was 2.04 (aluminum subtraction) and 1.38 (polymethyl methacrylate (PMMA) subtraction) times higher than that obtained without the correction. The analysis demonstrated the accuracy of scatter correction and the improvement of image quality using a primary modulator and showed the feasibility of

  2. Improvement of brain perfusion SPET using iterative reconstruction with scatter and non-uniform attenuation correction

    Energy Technology Data Exchange (ETDEWEB)

    Kauppinen, T.; Vanninen, E.; Kuikka, J.T. [Kuopio Central Hospital (Finland). Dept. of Clinical Physiology; Koskinen, M.O. [Dept. of Clinical Physiology and Nuclear Medicine, Tampere Univ. Hospital, Tampere (Finland); Alenius, S. [Signal Processing Lab., Tampere Univ. of Technology, Tampere (Finland)

    2000-09-01

    Filtered back-projection (FBP) is generally used as the reconstruction method for single-photon emission tomography although it produces noisy images with apparent streak artefacts. It is possible to improve the image quality by using an algorithm with iterative correction steps. The iterative reconstruction technique also has an additional benefit in that computation of attenuation correction can be included in the process. A commonly used iterative method, maximum-likelihood expectation maximisation (ML-EM), can be accelerated using ordered subsets (OS-EM). We have applied to the OS-EM algorithm a Bayesian one-step late correction method utilising a median root prior (MRP). A methodological comparison was performed by means of measurements obtained with a brain perfusion phantom and using patient data. The aim of this work was to quantitate the accuracy of iterative reconstruction with scatter and non-uniform attenuation corrections and post-filtering in SPET brain perfusion imaging. SPET imaging was performed using a triple-head gamma camera with fan-beam collimators. Transmission and emission scans were acquired simultaneously. The brain phantom used was a high-resolution three-dimensional anthropomorphic JB003 phantom. Patient studies were performed in ten chronic pain syndrome patients. The images were reconstructed using conventional FBP and the iterative OS-EM and MRP techniques, including scatter and non-uniform attenuation corrections. Iterative reconstructions were individually post-filtered. The quantitative results obtained with the brain perfusion phantom were compared with the known actual contrast ratios. The calculated difference from the true values was largest with the FBP method; iteratively reconstructed images proved closer to reality. Similar findings were obtained in the patient studies. The plain OS-EM method improved the contrast, whereas with the MRP technique the improvement in contrast was not so evident after post-filtering. (orig.)
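
    A compact sketch of OS-EM with a one-step-late median root prior is given below, using a dense system matrix for clarity (clinical implementations use on-the-fly projectors); beta, the subset handling, and the 3×3 median window are illustrative choices, not values from the study.

```python
import numpy as np
from scipy.ndimage import median_filter

def osem_mrp(proj, A, subsets, im_shape, n_iter=4, beta=0.3):
    """OS-EM with a one-step-late median root prior.  A is a dense
    (n_bins, n_voxels) system matrix, subsets is a list of row-index arrays,
    and the MRP term pulls each voxel toward its local median."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for rows in subsets:
            As = A[rows]
            ratio = proj[rows] / np.maximum(As @ x, 1e-12)   # measured / modelled
            med = median_filter(x.reshape(im_shape), size=3).ravel()
            prior = 1.0 + beta * (x - med) / np.maximum(med, 1e-12)  # one-step late
            sens = As.sum(axis=0)                            # subset sensitivity image
            x = x * (As.T @ ratio) / np.maximum(sens * prior, 1e-12)
    return x.reshape(im_shape)
```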

  3. Improvement of brain perfusion SPET using iterative reconstruction with scatter and non-uniform attenuation correction

    International Nuclear Information System (INIS)

    Kauppinen, T.; Vanninen, E.; Kuikka, J.T.; Alenius, S.

    2000-01-01

    Filtered back-projection (FBP) is generally used as the reconstruction method for single-photon emission tomography although it produces noisy images with apparent streak artefacts. It is possible to improve the image quality by using an algorithm with iterative correction steps. The iterative reconstruction technique also has an additional benefit in that computation of attenuation correction can be included in the process. A commonly used iterative method, maximum-likelihood expectation maximisation (ML-EM), can be accelerated using ordered subsets (OS-EM). We have applied to the OS-EM algorithm a Bayesian one-step late correction method utilising a median root prior (MRP). A methodological comparison was performed by means of measurements obtained with a brain perfusion phantom and using patient data. The aim of this work was to quantitate the accuracy of iterative reconstruction with scatter and non-uniform attenuation corrections and post-filtering in SPET brain perfusion imaging. SPET imaging was performed using a triple-head gamma camera with fan-beam collimators. Transmission and emission scans were acquired simultaneously. The brain phantom used was a high-resolution three-dimensional anthropomorphic JB003 phantom. Patient studies were performed in ten chronic pain syndrome patients. The images were reconstructed using conventional FBP and the iterative OS-EM and MRP techniques, including scatter and non-uniform attenuation corrections. Iterative reconstructions were individually post-filtered. The quantitative results obtained with the brain perfusion phantom were compared with the known actual contrast ratios. The calculated difference from the true values was largest with the FBP method; iteratively reconstructed images proved closer to reality. Similar findings were obtained in the patient studies. The plain OS-EM method improved the contrast, whereas with the MRP technique the improvement in contrast was not so evident after post-filtering. (orig.)

  4. A Monte Carlo evaluation of analytical multiple scattering corrections for unpolarised neutron scattering and polarisation analysis data

    International Nuclear Information System (INIS)

    Mayers, J.; Cywinski, R.

    1985-03-01

    Some of the approximations commonly used for the analytical estimation of multiple scattering corrections to thermal neutron elastic scattering data from cylindrical and plane slab samples have been tested using a Monte Carlo program. It is shown that the approximations are accurate for a wide range of sample geometries and scattering cross-sections. Neutron polarisation analysis provides the most stringent test of multiple scattering calculations as multiply scattered neutrons may be redistributed not only geometrically but also between the spin flip and non spin flip scattering channels. A very simple analytical technique for correcting for multiple scattering in neutron polarisation analysis has been tested using the Monte Carlo program and has been shown to work remarkably well in most circumstances. (author)

  5. Optimization-based scatter estimation using primary modulation for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yi; Ma, Jingchen; Zhao, Jun, E-mail: junzhao@sjtu.edu.cn [School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China); Song, Ying [Department of Radiation Oncology, West China Hospital, Sichuan University, Chengdu 610041 (China)

    2016-08-15

    Purpose: Scatter reduces the image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method simultaneously obtains the primary and scatter in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In the concept of primary modulation, the primary is modulated, but the scatter remains smooth, by inserting a modulator between the x-ray source and the object. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of the penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. The simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT system, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy of scatter estimation and correction. This method is promising for scatter correction of various kinds of x-ray imaging modalities, such as x-ray radiography, cone beam CT, and the fourth-generation CT.
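
    A toy 1D analogue of the optimization can make the idea concrete: the primary is modulated while the scatter stays smooth, and the two are separated by penalized least squares with non-negativity bounds. The penalty weights and the generic L-BFGS-B solver are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_scatter_1d(meas, mod, lam_p=1.0, lam_s=100.0):
    """Toy 1D separation: meas = mod * primary + scatter, where mod is the
    known (nonzero) modulation pattern.  Minimize data fidelity plus
    smoothness penalties, with the scatter forced to be far smoother than
    the primary, under non-negativity bounds."""
    n = meas.size

    def cost(z):
        p, s = z[:n], z[n:]
        resid = mod * p + s - meas
        return ((resid**2).sum()
                + lam_p * (np.diff(p)**2).sum()        # locally smooth primary
                + lam_s * (np.diff(s, 2)**2).sum())    # very smooth scatter

    z0 = np.concatenate([meas / mod, np.zeros(n)])     # demodulated initial guess
    res = minimize(cost, z0, method="L-BFGS-B", bounds=[(0.0, None)] * (2 * n))
    return res.x[:n], res.x[n:]                        # primary, scatter estimates
```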

  6. Improved scatter correction with factor analysis for planar and SPECT imaging

    Science.gov (United States)

    Knoll, Peter; Rahmim, Arman; Gültekin, Selma; Šámal, Martin; Ljungberg, Michael; Mirzaei, Siroos; Segars, Paul; Szczupak, Boguslaw

    2017-09-01

    Quantitative nuclear medicine imaging is an increasingly important frontier. In order to achieve quantitative imaging, the various interactions of photons with matter have to be modeled and compensated. Although correction for photon attenuation has been accurately addressed by including x-ray CT scans, correction for Compton scatter remains an open issue. The inclusion of scattered photons within the energy window used for planar or SPECT data acquisition decreases the contrast of the image. While a number of methods for scatter correction have been proposed in the past, in this work we propose and assess a novel, user-independent framework applying factor analysis (FA). Extensive Monte Carlo simulations for planar and tomographic imaging were performed using the SIMIND software. Furthermore, planar acquisitions of two Petri dishes filled with 99mTc solutions and a Jaszczak phantom study (Data Spectrum Corporation, Durham, NC, USA) were performed using a dual-head gamma camera. In order to use FA for scatter correction, we subdivided the applied energy window into a number of sub-windows, serving as input data. FA results in two factor images (photo-peak, scatter) and two corresponding factor curves (energy spectra). Planar and tomographic Jaszczak phantom gamma camera measurements were recorded. The tomographic data (simulations and measurements) were processed for each angular position, resulting in a photo-peak and a scatter data set. The reconstructed transaxial slices of the Jaszczak phantom were quantified using an ImageJ plugin. The data obtained by FA showed good agreement with the energy spectra, photo-peak, and scatter images obtained in all Monte Carlo simulated data sets. For comparison, the standard dual-energy window (DEW) approach was additionally applied for scatter correction. FA, in comparison with the DEW method, results in significant improvements in image accuracy for both planar and tomographic data sets. FA can be used as a user
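
    The factor-analysis step can be approximated in a few lines; here a rank-2 non-negative matrix factorization stands in for FA (it shares the non-negativity of the physical factors but is not the same decomposition), and the sub-window matrix layout is assumed.

```python
import numpy as np
from sklearn.decomposition import NMF

def factor_split(subwindow_counts):
    """Split multi-energy-window data into photo-peak and scatter components.
    subwindow_counts is an (n_pixels, n_subwindows) matrix built from the
    sub-window images; FA is approximated by a rank-2 non-negative
    factorization yielding two factor images and two factor spectra."""
    model = NMF(n_components=2, init="nndsvda", max_iter=500)
    factor_images = model.fit_transform(subwindow_counts)   # (n_pixels, 2)
    factor_spectra = model.components_                      # (2, n_subwindows)
    return factor_images, factor_spectra
```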

  7. Channel Parameter Estimation for Scatter Cluster Model Using Modified MUSIC Algorithm

    Directory of Open Access Journals (Sweden)

    Jinsheng Yang

    2012-01-01

    Recently, scatter cluster models that precisely evaluate the performance of wireless communication systems have been proposed in the literature. However, the conventional SAGE algorithm does not work for these scatter cluster-based models because it performs poorly when the transmitted signals are highly correlated. In this paper, we estimate the time of arrival (TOA), the direction of arrival (DOA), and the Doppler frequency for the scatter cluster model by a modified multiple signal classification (MUSIC) algorithm. Using the space-time characteristics of the multiray channel, the proposed algorithm combines temporal filtering techniques and spatial smoothing techniques to isolate and estimate the incoming rays. The simulation results indicate that the proposed algorithm has lower complexity and is less time-consuming in dense multipath environments than the SAGE algorithm. Furthermore, estimation performance improves with the number of receive-array elements and the sample length. Thus, the channel parameter estimation problem for the scatter cluster model can be effectively addressed with the proposed modified MUSIC algorithm.
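    A compact sketch of the core subspace step is given below, assuming a 10-element uniform linear array and two fully coherent arrivals; forward spatial smoothing restores the covariance rank so that the MUSIC pseudospectrum resolves both directions. The paper's joint TOA/DOA/Doppler estimation and temporal filtering are omitted here.

```python
import numpy as np

# MUSIC with forward spatial smoothing for coherent sources (sketch).
M, sub, snapshots = 10, 6, 200          # elements, subarray size, samples
doas = np.deg2rad([20.0, -35.0])
rng = np.random.default_rng(2)
s = rng.standard_normal((1, snapshots))  # coherent: both rays share one waveform
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(doas)))
x = A @ np.vstack([s, s]) + 0.1 * (rng.standard_normal((M, snapshots))
                                   + 1j * rng.standard_normal((M, snapshots)))

# Forward spatial smoothing: average covariance over sliding subarrays.
R = np.zeros((sub, sub), dtype=complex)
for k in range(M - sub + 1):
    xk = x[k:k + sub]
    R += xk @ xk.conj().T / snapshots
R /= (M - sub + 1)

w, v = np.linalg.eigh(R)
En = v[:, :sub - 2]                      # noise subspace (2 sources assumed)
grid = np.deg2rad(np.linspace(-90, 90, 721))
a = np.exp(1j * np.pi * np.outer(np.arange(sub), np.sin(grid)))
p_music = 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2

# Simple local-maximum peak picking on the pseudospectrum.
peaks = np.where((p_music[1:-1] > p_music[:-2]) &
                 (p_music[1:-1] > p_music[2:]))[0] + 1
top = peaks[np.argsort(p_music[peaks])[-2:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[top])))
```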

  8. A moving blocker-based strategy for simultaneous megavoltage and kilovoltage scatter correction in cone-beam computed tomography image acquired during volumetric modulated arc therapy

    International Nuclear Information System (INIS)

    Ouyang, Luo; Lee, Huichen Pam; Wang, Jing

    2015-01-01

    Purpose: To evaluate a moving blocker-based approach in estimating and correcting megavoltage (MV) and kilovoltage (kV) scatter contamination in kV cone-beam computed tomography (CBCT) acquired during volumetric modulated arc therapy (VMAT). Methods and materials: During the concurrent CBCT/VMAT acquisition, a physical attenuator (i.e., “blocker”) consisting of equally spaced lead strips was mounted and moved constantly between the CBCT source and patient. Both kV and MV scatter signals were estimated from the blocked region of the imaging panel, and interpolated into the unblocked region. A scatter corrected CBCT was then reconstructed from the unblocked projections after scatter subtraction using an iterative image reconstruction algorithm based on constraint optimization. Experimental studies were performed on a Catphan® phantom and an anthropomorphic pelvis phantom to demonstrate the feasibility of using a moving blocker for kV–MV scatter correction. Results: Scatter induced cupping artifacts were substantially reduced in the moving blocker corrected CBCT images. Quantitatively, the root mean square error of Hounsfield units (HU) in seven density inserts of the Catphan phantom was reduced from 395 to 40. Conclusions: The proposed moving blocker strategy greatly improves the image quality of CBCT acquired with concurrent VMAT by reducing the kV–MV scatter induced HU inaccuracy and cupping artifacts
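    The estimation step reduces to interpolating the blocked-pixel signal into the unblocked region. A minimal 1D sketch along one detector row follows; the strip layout, cubic interpolation, and synthetic signals are assumptions, and in the actual study the estimate is built per projection and the reconstruction uses only unblocked data.

```python
import numpy as np
from scipy.interpolate import interp1d

# Blocker-based scatter estimate for one detector row (sketch): pixels
# behind the lead strips record (kV + MV) scatter only, which is then
# interpolated into the unblocked pixels and subtracted.
n = 512
u = np.arange(n)
blocked = (u // 32) % 2 == 0                        # equally spaced strips
scatter_true = 200.0 + 50.0 * np.sin(u / n * np.pi)  # smooth scatter (synthetic)
projection = 1000.0 + scatter_true                   # primary + scatter

f = interp1d(u[blocked], scatter_true[blocked], kind="cubic",
             fill_value="extrapolate")
scatter_est = f(u)
corrected = projection - scatter_est                 # keep unblocked pixels only
print("max error in unblocked region:",
      np.abs(scatter_est[~blocked] - scatter_true[~blocked]).max())
```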

  9. Binding and Pauli principle corrections in subthreshold pion-nucleus scattering

    International Nuclear Information System (INIS)

    Kam, J. de

    1981-01-01

    In this investigation I develop a three-body model for the single scattering optical potential in which the nucleon binding and the Pauli principle are accounted for. A unitarity pole approximation is used for the nucleon-core interaction. Calculations are presented for the π⁻-⁴He elastic scattering cross sections at energies below the inelastic threshold and for the real part of the π⁻-⁴He scattering length by solving the three-body equations. Off-shell kinematics and the Pauli principle are carefully taken into account. The binding correction and the Pauli principle correction each have an important effect on the differential cross sections and the scattering length. However, large cancellations occur between these two effects. I find an increase in the π⁻-⁴He scattering length by 100%, an increase in the cross sections by 20-30%, and a shift of the minimum in π⁻-⁴He scattering to forward angles by 10°. (orig.)

  10. Patient-specific scatter correction in clinical cone beam computed tomography imaging made possible by the combination of Monte Carlo simulations and a ray tracing algorithm

    DEFF Research Database (Denmark)

    Slot Thing, Rune; Bernchou, Uffe; Mainegra-Hing, Ernesto

    2013-01-01

    Purpose: Cone beam computed tomography (CBCT) image quality is limited by scattered photons. Monte Carlo (MC) simulations provide the ability to predict the patient-specific scatter contamination in clinical CBCT imaging. Lengthy simulations prevent MC-based scatter correction from

  11. A locally adaptive algorithm for shadow correction in color images

    Science.gov (United States)

    Karnaukhov, Victor; Kober, Vitaly

    2017-09-01

    The paper deals with the correction of color images distorted by spatially nonuniform illumination. A serious distortion occurs in real conditions when a part of the scene containing 3D objects close to a directed light source is illuminated much more brightly than the rest of the scene. A locally adaptive algorithm for the correction of shadow regions in color images is proposed. The algorithm consists of segmentation of shadow areas with rank-order statistics, followed by correction of nonuniform illumination with a human-visual-perception-based approach. The performance of the proposed algorithm is compared with that of common algorithms for the correction of color images containing shadow regions.

  12. Corrections for the effects of accidental coincidences, Compton scatter, and object size in positron emission mammography (PEM) imaging

    Energy Technology Data Exchange (ETDEWEB)

    Raymond Raylman; Stanislaw Majewski; Randolph Wojcik; Andrew Weisenberger; Brian Kross; Vladimir Popov

    2001-06-01

    Positron emission mammography (PEM) has begun to show promise as an effective method for the detection of breast lesions. Due to its utilization of tumor-avid radiopharmaceuticals labeled with positron-emitting radionuclides, this technique may be especially useful in imaging of women with radiodense or fibrocystic breasts. While the use of these radiotracers affords PEM unique capabilities, it also introduces some limitations. Specifically, acceptance of accidental and Compton-scattered coincidence events can decrease lesion detectability. The authors studied the effect of accidental coincidence events on PEM images produced by the presence of 18F-Fluorodeoxyglucose in the organs of a subject using an anthropomorphic phantom. A delayed-coincidence technique was tested as a method for correcting PEM images for the occurrence of accidental events. Also, a Compton scatter correction algorithm designed specifically for PEM was developed and tested using a compressed breast phantom.

  13. Corrections for the effects of accidental coincidences, Compton scatter, and object size in positron emission mammography (PEM) imaging

    International Nuclear Information System (INIS)

    Raymond Raylman; Stanislaw Majewski; Randolph Wojcik; Andrew Weisenberger; Brian Kross; Vladimir Popov

    2001-01-01

    Positron emission mammography (PEM) has begun to show promise as an effective method for the detection of breast lesions. Due to its utilization of tumor-avid radiopharmaceuticals labeled with positron-emitting radionuclides, this technique may be especially useful in imaging of women with radiodense or fibrocystic breasts. While the use of these radiotracers affords PEM unique capabilities, it also introduces some limitations. Specifically, acceptance of accidental and Compton-scattered coincidence events can decrease lesion detectability. The authors studied the effect of accidental coincidence events on PEM images produced by the presence of 18F-Fluorodeoxyglucose in the organs of a subject using an anthropomorphic phantom. A delayed-coincidence technique was tested as a method for correcting PEM images for the occurrence of accidental events. Also, a Compton scatter correction algorithm designed specifically for PEM was developed and tested using a compressed breast phantom

  14. NADH-fluorescence scattering correction for absolute concentration determination in a liquid tissue phantom using a novel multispectral magnetic-resonance-imaging-compatible needle probe

    Science.gov (United States)

    Braun, Frank; Schalk, Robert; Heintz, Annabell; Feike, Patrick; Firmowski, Sebastian; Beuermann, Thomas; Methner, Frank-Jürgen; Kränzlin, Bettina; Gretz, Norbert; Rädle, Matthias

    2017-07-01

    In this report, a quantitative nicotinamide adenine dinucleotide hydrate (NADH) fluorescence measurement algorithm in a liquid tissue phantom using a fiber-optic needle probe is presented. To determine the absolute concentrations of NADH in this phantom, the fluorescence emission spectra at 465 nm were corrected using diffuse reflectance spectroscopy between 600 nm and 940 nm. The patented autoclavable Nitinol needle probe enables the acquisition of multispectral backscattering measurements of ultraviolet, visible, near-infrared and fluorescence spectra. As a phantom, a suspension of calcium carbonate (Calcilit) and water with physiological NADH concentrations between 0 mmol l⁻¹ and 2.0 mmol l⁻¹ was used to mimic human tissue. The light scattering characteristics were adjusted to match the backscattering attributes of human skin by modifying the concentration of Calcilit. To correct the scattering effects caused by the matrices of the samples, an algorithm based on the backscattered remission spectrum was employed to compensate for the influence of multiple scattering on the optical path through the dispersed phase. The monitored backscattered visible light was used to correct the fluorescence spectra and thereby to determine the true NADH concentrations at unknown Calcilit concentrations. Despite the simplicity of the presented algorithm, the root-mean-square error of prediction (RMSEP) was 0.093 mmol l⁻¹.

  15. The modular small-angle X-ray scattering data correction sequence.

    Science.gov (United States)

    Pauw, B R; Smith, A J; Snow, T; Terrill, N J; Thünemann, A F

    2017-12-01

    Data correction is probably the least favourite activity amongst users experimenting with small-angle X-ray scattering: if it is not done sufficiently well, this may become evident only during the data analysis stage, necessitating the repetition of the data corrections from scratch. A recommended comprehensive sequence of elementary data correction steps is presented here to alleviate the difficulties associated with data correction, both in the laboratory and at the synchrotron. When applied in the proposed order to the raw signals, the resulting absolute scattering cross section will provide a high degree of accuracy for a very wide range of samples, with its values accompanied by uncertainty estimates. The method can be applied without modification to any pinhole-collimated instruments with photon-counting direct-detection area detectors.
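    A toy version of such an ordered correction chain is sketched below, with illustrative factor names and values; the recommended sequence in the paper is longer and also propagates uncertainties and handles detector-specific steps, none of which are reproduced here.

```python
import numpy as np

# Ordered elementary corrections applied to a raw 2D detector image
# (sketch): dark current, time/flux normalization, transmission,
# background subtraction, thickness, absolute calibration.
def correct(raw, dark, t_exposure, flux, transmission, thickness_m,
            background, calib_factor):
    i = raw - dark                        # dark-current correction
    i = i / (t_exposure * flux)           # time and incident-flux normalization
    i = i / transmission                  # sample transmission correction
    i = i - background                    # (already corrected) background
    i = i / thickness_m                   # sample thickness normalization
    return calib_factor * i               # scale to absolute units

raw = np.random.poisson(5000, size=(128, 128)).astype(float)
abs_intensity = correct(raw, dark=10.0, t_exposure=60.0, flux=1e8,
                        transmission=0.85, thickness_m=1e-3,
                        background=np.zeros((128, 128)), calib_factor=1.0)
```

    Applying the steps in a fixed order matters because several corrections (e.g., transmission) must act on flux-normalized data before backgrounds measured under different conditions can be subtracted consistently.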

  16. The Development of a Parameterized Scatter Removal Algorithm for Nuclear Materials Identification System Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Grogan, Brandon Robert [Univ. of Tennessee, Knoxville, TN (United States)

    2010-03-01

    This dissertation presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects non-intrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross-sections of features inside the object can be determined. The cross sections can then be used to identify the materials and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons which are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using
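    The parameterization step admits a short sketch: fit a Gaussian to a simulated point scatter function so the scatter kernel can be regenerated for new problems without rerunning the Monte Carlo simulations. The 'simulated' profile and kernel shape below are synthetic assumptions, not NMIS data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a Gaussian to a (synthetic) point scatter function, then use the
# parameterized kernel to estimate and remove scatter from a profile.
def gaussian(x, amp, sigma):
    return amp * np.exp(-x**2 / (2 * sigma**2))

x = np.linspace(-20, 20, 81)                       # detector positions (cm)
rng = np.random.default_rng(3)
pscf_sim = gaussian(x, 0.05, 6.0) + 0.002 * rng.random(81)  # stand-in PScF

(amp, sigma), _ = curve_fit(gaussian, x, pscf_sim, p0=(0.1, 5.0))

measured = np.ones_like(x)                         # attenuated-signal profile
scatter = np.convolve(measured, gaussian(x, amp, sigma), mode="same")
corrected = measured - scatter                     # approx. direct transmission
print(f"fitted kernel: amp={amp:.4f}, sigma={sigma:.2f} cm")
```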

  17. THE DEVELOPMENT OF A PARAMETERIZED SCATTER REMOVAL ALGORITHM FOR NUCLEAR MATERIALS IDENTIFICATION SYSTEM IMAGING

    Energy Technology Data Exchange (ETDEWEB)

    Grogan, Brandon R [ORNL

    2010-05-01

    This report presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects nonintrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross sections of features inside the object can be determined. The cross sections can then be used to identify the materials, and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons that are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized, and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements, and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using the

  18. Higher Order Heavy Quark Corrections to Deep-Inelastic Scattering

    Science.gov (United States)

    Blümlein, Johannes; DeFreitas, Abilio; Schneider, Carsten

    2015-04-01

    The 3-loop heavy flavor corrections to deep-inelastic scattering are essential for consistent next-to-next-to-leading order QCD analyses. We report on the present status of the calculation of these corrections at large virtualities Q2. We also describe a series of mathematical, computer-algebraic and combinatorial methods and special function spaces, needed to perform these calculations. Finally, we briefly discuss the status of measuring αs (MZ), the charm quark mass mc, and the parton distribution functions at next-to-next-to-leading order from the world precision data on deep-inelastic scattering.

  19. Higher order heavy quark corrections to deep-inelastic scattering

    International Nuclear Information System (INIS)

    Bluemlein, J.; Freitas, A. de; Johannes Kepler Univ., Linz; Schneider, C.

    2014-11-01

    The 3-loop heavy flavor corrections to deep-inelastic scattering are essential for consistent next-to-next-to-leading order QCD analyses. We report on the present status of the calculation of these corrections at large virtualities Q2. We also describe a series of mathematical, computer-algebraic and combinatorial methods and special function spaces needed to perform these calculations. Finally, we briefly discuss the status of measuring αs(MZ), the charm quark mass mc, and the parton distribution functions at next-to-next-to-leading order from the world precision data on deep-inelastic scattering.

  20. An empirical correction for moderate multiple scattering in super-heterodyne light scattering.

    Science.gov (United States)

    Botin, Denis; Mapa, Ludmila Marotta; Schweinfurth, Holger; Sieber, Bastian; Wittenberg, Christopher; Palberg, Thomas

    2017-05-28

    Frequency domain super-heterodyne laser light scattering is utilized in a low angle integral measurement configuration to determine flow and diffusion in charged sphere suspensions showing moderate to strong multiple scattering. We introduce an empirical correction to subtract the multiple scattering background and isolate the singly scattered light. We demonstrate the excellent feasibility of this simple approach for turbid suspensions of transmittance T ≥ 0.4. We study the particle concentration dependence of the electro-kinetic mobility in low salt aqueous suspension over an extended concentration regime and observe a maximum at intermediate concentrations. We further use our scheme for measurements of the self-diffusion coefficients in the fluid samples in the absence or presence of shear, as well as in polycrystalline samples during crystallization and coarsening. We discuss the scope and limits of our approach as well as possible future applications.

  1. Scatter correction using a primary modulator for dual energy digital radiography: A Monte Carlo simulation study

    Science.gov (United States)

    Jo, Byung-Du; Lee, Young-Jin; Kim, Dae-Hong; Kim, Hee-Joung

    2014-08-01

    In conventional digital radiography (DR) using a dual energy subtraction technique, a significant fraction of the detected photons are scattered within the body, making up the scatter component. Scattered radiation can significantly deteriorate image quality in diagnostic X-ray imaging systems. Various methods of scatter correction, including both measurement- and non-measurement-based methods, have been proposed in the past. Both methods can reduce scatter artifacts in images. However, non-measurement-based methods require a homogeneous object and have insufficient scatter component correction. Therefore, we employed a measurement-based method to correct for the scatter component of inhomogeneous objects from dual energy DR (DEDR) images. We performed a simulation study using a Monte Carlo simulation with a primary modulator, which is a measurement-based method for the DEDR system. The primary modulator, which has a checkerboard pattern, was used to modulate the primary radiation. Cylindrical phantoms of variable size were used to quantify the imaging performance. For scatter estimates, we used discrete Fourier transform filtering, e.g., a Gaussian low-high pass filter with a cut-off frequency. The primary modulation method was evaluated using a cylindrical phantom in the DEDR system. The scatter components were accurately removed using a primary modulator. When the results acquired with scatter correction and without scatter correction were compared, the average contrast-to-noise ratio (CNR) with the correction was 1.35 times higher than that obtained without the correction, and the average root mean square error (RMSE) with the correction was 38.00% better than that without the correction. In the subtraction study, the average CNR with the correction was 2.04 (aluminum subtraction) and 1.38 (polymethyl methacrylate (PMMA) subtraction) times higher than that obtained without the correction. The analysis demonstrated the accuracy of the scatter correction and the
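    The Fourier-domain separation behind the primary modulator can be shown in a few lines: the checkerboard pushes a copy of the primary to high frequencies, so low-pass filtering the projection keeps (mean-modulation × primary + scatter), and the high-pass residue demodulates back to the primary. The signals, modulation depth, and cut-off below are assumptions, not the study's values.

```python
import numpy as np

# 1D sketch of scatter/primary separation with a checkerboard modulator.
n = 256
u = np.arange(n)
primary = 1.0 + 0.3 * np.sin(2 * np.pi * u / n)   # slowly varying primary
scatter = 0.5 + 0.1 * np.cos(2 * np.pi * u / n)   # smooth scatter
modulation = 0.8 + 0.2 * np.cos(np.pi * u)        # alternating-pixel modulator
measured = primary * modulation + scatter

spec = np.fft.rfft(measured)
spec[n // 8:] = 0.0                                # hard cut-off; Gaussian windows also work
low = np.fft.irfft(spec, n)                        # ~ mean(modulation)*primary + scatter
high = measured - low                              # ~ modulated primary ripple

primary_est = high / (modulation - modulation.mean())  # demodulate the ripple
scatter_est = measured - primary_est * modulation
print("max scatter error:", np.abs(scatter_est - scatter).max())
```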

  2. Magnetic resonance imaging-guided attenuation and scatter corrections in three-dimensional brain positron emission tomography

    CERN Document Server

    Zaidi, H; Slosman, D O

    2003-01-01

    Reliable attenuation correction represents an essential component of the long chain of modules required for the reconstruction of artifact-free, quantitative brain positron emission tomography (PET) images. In this work we demonstrate the proof of principle of segmented magnetic resonance imaging (MRI)-guided attenuation and scatter corrections in 3D brain PET. We have developed a method for attenuation correction based on registered T1-weighted MRI, eliminating the need of an additional transmission (TX) scan. The MR images were realigned to preliminary reconstructions of PET data using an automatic algorithm and then segmented by means of a fuzzy clustering technique which identifies tissues of significantly different density and composition. The voxels belonging to different regions were classified into air, skull, brain tissue and nasal sinuses. These voxels were then assigned theoretical tissue-dependent attenuation coefficients as reported in the ICRU 44 report followed by Gaussian smoothing and additio...

  3. An Algorithm for Computing Screened Coulomb Scattering in Geant4

    OpenAIRE

    Mendenhall, Marcus H.; Weller, Robert A.

    2004-01-01

    An algorithm has been developed for the Geant4 Monte-Carlo package for the efficient computation of screened Coulomb interatomic scattering. It explicitly integrates the classical equations of motion for scattering events, resulting in precise tracking of both the projectile and the recoil target nucleus. The algorithm permits the user to plug in an arbitrary screening function, such as Lenz-Jensen screening, which is good for backscattering calculations, or Ziegler-Biersack-Littmark screenin...

  4. Radiative corrections to deep inelastic muon scattering

    International Nuclear Information System (INIS)

    Akhundov, A.A.; Bardin, D.Yu.; Lohman, W.

    1986-01-01

    A summary is given of the most recent results for the calculation of radiative corrections to deep inelastic muon-nucleon scattering. Contributions from leptonic electromagnetic processes up to order α4, vacuum polarization by leptons and hadrons, hadronic electromagnetic processes of order α3, and γZ interference have been taken into account. The dependence of the individual contributions on kinematical variables is studied. Contributions not considered in earlier calculations of radiative corrections reach several per cent in certain kinematical regions at energies above 100 GeV

  5. An algorithm to determine backscattering ratio and single scattering albedo

    Digital Repository Service at National Institute of Oceanography (India)

    Suresh, T.; Desa, E.; Matondkar, S.G.P.; Mascarenhas, A.A.M.Q.; Nayak, S.R.; Naik, P.

    Algorithms to determine the inherent optical properties of water, backscattering probability and single scattering albedo at 490 and 676 nm from the apparent optical property, remote sensing reflectance are presented here. The measured scattering...

  6. An algorithm for 3D target scatterer feature estimation from sparse SAR apertures

    Science.gov (United States)

    Jackson, Julie Ann; Moses, Randolph L.

    2009-05-01

    We present an algorithm for extracting 3D canonical scattering features from complex targets observed over sparse 3D SAR apertures. The algorithm begins with complex phase history data and ends with a set of geometrical features describing the scene. The algorithm provides a pragmatic approach to initialization of a nonlinear feature estimation scheme, using regularization methods to deconvolve the point spread function and obtain sparse 3D images. Regions of high energy are detected in the sparse images, providing location initializations for scattering center estimates. A single canonical scattering feature, corresponding to a geometric shape primitive, is fit to each region via nonlinear optimization of fit error between the regularized data and parametric canonical scattering models. Results of the algorithm are presented using 3D scattering prediction data of a simple scene for both a densely-sampled and a sparsely-sampled SAR measurement aperture.

  7. Evaluation of scatter correction using a single isotope for simultaneous emission and transmission data

    International Nuclear Information System (INIS)

    Yang, J.; Kuikka, J.T.; Vanninen, E.; Laensimies, E.; Kauppinen, T.; Patomaeki, L.

    1999-01-01

    Photon scatter is one of the most important factors degrading the quantitative accuracy of SPECT images. Many scatter correction methods have been proposed. The single isotope method was proposed by us. Aim: We evaluate the scatter correction method of improving the quality of images by acquiring emission and transmission data simultaneously with single isotope scan. Method: To evaluate the proposed scatter correction method, a contrast and linearity phantom was studied. Four female patients with fibromyalgia (FM) syndrome and four with chronic back pain (BP) were imaged. Grey-to-cerebellum (G/C) and grey-to-white matter (G/W) ratios were determined by one skilled operator for 12 regions of interest (ROIs) in each subject. Results: The linearity of activity response was improved after the scatter correction (r=0.999). The y-intercept value of the regression line was 0.036 (p<0.0001) after scatter correction and the slope was 0.954.

  8. Geometry correction Algorithm for UAV Remote Sensing Image Based on Improved Neural Network

    Science.gov (United States)

    Liu, Ruian; Liu, Nan; Zeng, Beibei; Chen, Tingting; Yin, Ninghao

    2018-03-01

    To address the shortcomings of current geometry correction algorithms for UAV remote sensing images, a new algorithm is proposed. An adaptive genetic algorithm (AGA) and an RBF neural network are introduced into this algorithm. Combined with the geometry correction principle for UAV remote sensing images, the algorithm and solution steps of AGA-RBF are presented in order to realize geometry correction for UAV remote sensing. Correction accuracy and operational efficiency are improved by optimizing the structure and connection weights of the RBF neural network with the AGA and LMS algorithms, respectively. Finally, experiments show that the AGA-RBF algorithm has the advantages of high correction accuracy, a high running rate and strong generalization ability.

  9. The Bouguer Correction Algorithm for Gravity with Limited Range

    Directory of Open Access Journals (Sweden)

    MA Jian

    2017-01-01

    The Bouguer correction is an important item in gravity reduction, but the traditional Bouguer correction, whether the plane or the spherical variant, suffers from an approximation error caused by far-zone virtual terrain. The error grows as the calculation point gets higher. Therefore, gravity reduction using the Bouguer correction with limited range, in accordance with the scope of the topographic correction, was researched in this paper. A simplified formula to calculate the Bouguer correction with limited range was then proposed. The algorithm, which is innovative and has some mathematical-theoretical value, is consistent with the equation derived from the strict integral algorithm for topographic correction. An interpolation experiment shows that gravity reduction based on the Bouguer correction with limited range is preferable to the unlimited-range correction when the calculation point is higher than 1000 m.

  10. WE-DE-207B-12: Scatter Correction for Dedicated Cone Beam Breast CT Based On a Forward Projection Model

    Energy Technology Data Exchange (ETDEWEB)

    Shi, L; Zhu, L [Georgia Institute of Technology, Atlanta, GA (Georgia); Vedantham, S; Karellas, A [University of Massachusetts Medical School, Worcester, MA (United States)

    2016-06-15

    Purpose: The image quality of dedicated cone-beam breast CT (CBBCT) is fundamentally limited by substantial x-ray scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose to suppress x-ray scatter in CBBCT images using a deterministic forward projection model. Method: We first use the 1st-pass FDK-reconstructed CBBCT images to segment fibroglandular and adipose tissue. Attenuation coefficients are assigned to the two tissues based on the x-ray spectrum used for image acquisition and are forward projected to simulate scatter-free primary projections. We estimate the scatter by subtracting the simulated primary projection from the measured projection, and the resultant scatter map is further refined by a Fourier-domain fitting algorithm after discarding untrusted scatter information. The final scatter estimate is subtracted from the measured projection for effective scatter correction. In our implementation, the proposed scatter correction takes 0.5 seconds for each projection. The method was evaluated using the overall image spatial non-uniformity (SNU) metric and the contrast-to-noise ratio (CNR) with 5 clinical datasets of BI-RADS 4/5 subjects. Results: For the 5 clinical datasets, our method reduced the SNU from 7.79% to 1.68% in the coronal view and from 6.71% to 3.20% in the sagittal view. The average CNR is improved by a factor of 1.38 in the coronal view and 1.26 in the sagittal view. Conclusion: The proposed scatter correction approach requires no additional scans or prior images and uses a deterministic model for efficient calculation. Evaluation with clinical datasets demonstrates the feasibility and stability of the method. These features are attractive for clinical CBBCT and make our method distinct from other approaches. Supported partly by NIH R21EB019597, R21CA134128
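    The core estimate is a subtraction followed by smoothing. Below is a sketch for a single projection, with a toy parallel-beam projector standing in for the cone-beam projector and Gaussian smoothing standing in for the Fourier-domain fitting; tissue values and geometry are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Forward-projection scatter estimate (sketch): simulate a scatter-free
# primary from the segmented first-pass volume, subtract it from the
# measurement, and keep only the smooth part of the residual.
def forward_project(mu_volume, voxel_cm=0.05):
    # toy parallel-beam line integrals along axis 0 (stand-in projector)
    return np.exp(-mu_volume.sum(axis=0) * voxel_cm)

mu = np.zeros((64, 128, 128))
mu[:, 40:90, 40:90] = 0.5                 # assumed tissue attenuation (1/cm)
primary_sim = forward_project(mu)

yy, xx = np.mgrid[:128, :128]
scatter_true = 0.10 + 0.05 * np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / 2e3)
measured = primary_sim + scatter_true

scatter_est = gaussian_filter(measured - primary_sim, sigma=16)  # smooth residual
corrected = measured - scatter_est
print("max scatter error:", np.abs(scatter_est - scatter_true).max())
```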

  11. Bridging Ground Validation and Algorithms: Using Scattering and Integral Tables to Incorporate Observed DSD Correlations into Satellite Algorithms

    Science.gov (United States)

    Williams, C. R.

    2012-12-01

    The NASA Global Precipitation Mission (GPM) raindrop size distribution (DSD) Working Group is composed of NASA PMM Science Team Members and is charged to "investigate the correlations between DSD parameters using Ground Validation (GV) data sets that support, or guide, the assumptions used in satellite retrieval algorithms." Correlations between DSD parameters can be used to constrain the unknowns and reduce the degrees-of-freedom in under-constrained satellite algorithms. Over the past two years, the GPM DSD Working Group has analyzed GV data and has found correlations between the mass-weighted mean raindrop diameter (Dm) and the mass distribution standard deviation (Sm) that follows a power-law relationship. This Dm-Sm power-law relationship appears to be robust and has been observed in surface disdrometer and vertically pointing radar observations. One benefit of a Dm-Sm power-law relationship is that a three parameter DSD can be modeled with just two parameters: Dm and Nw that determines the DSD amplitude. In order to incorporate observed DSD correlations into satellite algorithms, the GPM DSD Working Group is developing scattering and integral tables that can be used by satellite algorithms. Scattering tables describe the interaction of electromagnetic waves on individual particles to generate cross sections of backscattering, extinction, and scattering. Scattering tables are independent of the distribution of particles. Integral tables combine scattering table outputs with DSD parameters and DSD correlations to generate integrated normalized reflectivity, attenuation, scattering, emission, and asymmetry coefficients. Integral tables contain both frequency dependent scattering properties and cloud microphysics. The GPM DSD Working Group has developed scattering tables for raindrops at both Dual Precipitation Radar (DPR) frequencies and at all GMI radiometer frequencies less than 100 GHz. Scattering tables include Mie and T-matrix scattering with H- and V

  12. Improvement of quantitation in SPECT: Attenuation and scatter correction using non-uniform attenuation data

    International Nuclear Information System (INIS)

    Mukai, T.; Torizuka, K.; Douglass, K.H.; Wagner, H.N.

    1985-01-01

    Quantitative assessment of tracer distribution with single photon emission computed tomography (SPECT) is difficult because of attenuation and scattering of gamma rays within the object. A method considering the source geometry was developed, and the effects of attenuation and scatter on SPECT quantitation were studied using phantoms with non-uniform attenuation. The distribution of attenuation coefficients (μ) within the source was obtained by transmission CT. The attenuation correction was performed by an iterative reprojection technique. The scatter correction was done by convolution of the attenuation-corrected image with an appropriate filter derived from line source studies. The filter characteristics depended on μ and the SPECT measurement at each pixel. The SPECT images obtained by this method showed more reasonable results than images reconstructed by other methods. The scatter correction could completely compensate for a 28% scatter component from a long line source, and a 61% component from a thick, extended source. Consideration of source geometries was necessary for effective corrections. The present method is expected to be valuable for the quantitative assessment of regional tracer activity

  13. Method for measuring multiple scattering corrections between liquid scintillators

    Energy Technology Data Exchange (ETDEWEB)

    Verbeke, J.M., E-mail: verbeke2@llnl.gov; Glenn, A.M., E-mail: glenn22@llnl.gov; Keefer, G.J., E-mail: keefer1@llnl.gov; Wurtz, R.E., E-mail: wurtz1@llnl.gov

    2016-07-21

    A time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.

  14. Coulomb corrections to scattering length and effective radius

    International Nuclear Information System (INIS)

    Mur, V.D.; Kudryavtsev, A.E.; Popov, V.S.

    1983-01-01

    The problem considered is the extraction of the ''purely nuclear'' scattering length a_s (corresponding to the strong potential V_s with the Coulomb interaction switched off) from the Coulomb-nuclear scattering length a_cs, which is the object of experimental measurement. The difference between a_s and a_cs is especially large if the potential V_s has a level (real or virtual) with an energy close to zero. For this case formulae are obtained relating the scattering lengths a_s and a_cs, as well as the effective radii r_s and r_cs. The results are extended to states with arbitrary angular momenta l. It is shown that the Coulomb correction is especially large for the coefficient of k^(2l) in the expansion of the effective radius; in this case the correction contains a large logarithm ln(a_B/r_0). The Coulomb renormalization of the other terms in the effective-radius expansion is of order (r_0/a_B), where r_0 is the nuclear force radius and a_B is the Bohr radius. The formulae obtained are tried on a number of model potentials V_s used in nuclear physics
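    For orientation, the quantities being corrected sit in the standard effective-range expansion; the following is a sketch in conventional notation, not the paper's exact formulae.

```latex
% Effective-range expansion for partial wave l (conventional form).
% The Coulomb-nuclear parameters (a_cs, r_cs) enter the analogous
% Coulomb-modified expansion, and the paper relates them to (a_s, r_s).
\[
  k^{2l+1}\,\cot\delta_l(k) \;=\; -\frac{1}{a_l} \;+\; \frac{1}{2}\,r_l\,k^{2} \;+\; O\!\left(k^{4}\right)
\]
```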

  15. First order correction to quasiclassical scattering amplitude

    International Nuclear Information System (INIS)

    Kuz'menko, A.V.

    1978-01-01

    The first-order (with respect to ℏ) correction to the quasiclassical scattering amplitude in nonrelativistic quantum mechanics is considered. This correction is represented by two-loop diagrams and involves double integrals. With the aid of the classical equations of motion, the sum of the contributions of the two-loop diagrams is transformed into an expression which involves one-dimensional integrals only. A specific property of the expression obtained is that the integrand does not possess any singularities at the focal points of the classical trajectory. The general formula takes a much simpler form in the case of one-dimensional systems

  16. Mass corrections in deep-inelastic scattering

    International Nuclear Information System (INIS)

    Gross, D.J.; Treiman, S.B.; Wilczek, F.A.

    1977-01-01

    The moment sum rules for deep-inelastic lepton scattering are expected, for asymptotically free field theories, to display a characteristic pattern of logarithmic departures from scaling at large enough Q2. In the large-Q2 limit these patterns do not depend on hadron or quark masses m. For modest values of Q2 one expects corrections at the level of powers of m2/Q2. We discuss the question whether these mass effects are accessible in perturbation theory, as applied to the twist-2 Wilson coefficients and more generally. Our conclusion is that some part of the mass effects must arise from a nonperturbative origin. We also discuss the corrections which arise from higher orders in perturbation theory for very large Q2, where mass effects can perhaps be ignored. The emphasis here is on a characterization of the Q2, x domain where higher-order corrections are likely to be unimportant

  17. Inverse correction of Fourier transforms for one-dimensional strongly ...

    African Journals Online (AJOL)

    Hsin Ying-Fei

    2016-05-01

    As it is widely used in periodic lattice design theory and is particularly useful in aperiodic lattice design [12,13], the accuracy of the FT algorithm under strong scattering conditions is the focus of this paper. We propose an inverse correction approach for the inaccurate FT algorithm in strongly scattering ...

  18. Scatter correction method for x-ray CT using primary modulation: Phantom studies

    International Nuclear Information System (INIS)

    Gao Hewei; Fahrig, Rebecca; Bennett, N. Robert; Sun Mingshan; Star-Lack, Josh; Zhu Lei

    2010-01-01

    Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: a Catphan©600 phantom, an anthropomorphic chest phantom, and the Catphan©600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan©600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast to noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study is to investigate the impact of object size on the efficiency of our method. The scatter-to-primary ratio estimation error on the Catphan©600 phantom without any annulus (20 cm in diameter) is at the level of 0.04; it rises to 0.07 and 0.1 on the phantom with an

  19. A model of diffraction scattering with unitary corrections

    International Nuclear Information System (INIS)

    Etim, E.; Malecki, A.; Satta, L.

    1989-01-01

    The inability of the multiple scattering model of Glauber and similar geometrical-picture models to fit data at Collider energies, to fit low-energy data at large momentum transfers, and to explain the absence of multiple diffraction dips in the data is noted. It is argued and shown that a unitary correction to the multiple scattering amplitude gives rise to a better model and makes it possible to fit all available data on nucleon-nucleon and nucleus-nucleus collisions at all energies and all momentum transfers. There are no multiple diffraction dips

  20. Effects of scatter correction on regional distribution of cerebral blood flow using I-123-IMP and SPECT

    International Nuclear Information System (INIS)

    Ito, Hiroshi; Iida, Hidehiro; Kinoshita, Toshibumi; Hatazawa, Jun; Okudera, Toshio; Uemura, Kazuo

    1999-01-01

    The transmission-dependent convolution subtraction method, one of the methods for scatter correction in SPECT, was applied to the assessment of CBF using SPECT and I-123-IMP. The effects of scatter correction on the regional distribution of CBF were evaluated on a pixel-by-pixel basis by means of an anatomic standardization technique. SPECT scans were performed on six healthy men. Image reconstruction was carried out with and without the scatter correction. All reconstructed images were globally normalized for the radioactivity of each pixel, and transformed into a standard brain anatomy. After anatomic standardization, the average SPECT images were calculated for scatter-corrected and uncorrected groups, and these groups were compared on a pixel-by-pixel basis. In the scatter-uncorrected group, a significant overestimation of CBF was observed in the deep cerebral white matter, pons, thalamus, putamen, hippocampal region and cingulate gyrus as compared with the scatter-corrected group. A significant underestimation was observed in all neocortical regions, especially in the occipital and parietal lobes, and the cerebellar cortex. The regional distribution of CBF obtained by scatter-corrected SPECT was similar to that obtained by O-15 water PET. The scatter correction is needed for the assessment of CBF using SPECT. (author)

  1. SU-F-J-211: Scatter Correction for Clinical Cone-Beam CT System Using An Optimized Stationary Beam Blocker with a Single Scan

    International Nuclear Information System (INIS)

    Liang, X; Zhang, Z; Xie, Y; Gong, S; Niu, T; Zhou, Q

    2016-01-01

    Purpose: X-ray scatter photons result in significant image quality degradation of cone-beam CT (CBCT). Measurement based algorithms using beam blocker directly acquire the scatter samples and achieve significant improvement on the quality of CBCT image. Within existing algorithms, single-scan and stationary beam blocker proposed previously is promising due to its simplicity and practicability. Although demonstrated effectively on tabletop system, the blocker fails to estimate the scatter distribution on clinical CBCT system mainly due to the gantry wobble. In addition, the uniform distributed blocker strips in our previous design results in primary data loss in the CBCT system and leads to the image artifacts due to data insufficiency. Methods: We investigate the motion behavior of the beam blocker in each projection and design an optimized non-uniform blocker strip distribution which accounts for the data insufficiency issue. An accurate scatter estimation is then achieved from the wobble modeling. Blocker wobble curve is estimated using threshold-based segmentation algorithms in each projection. In the blocker design optimization, the quality of final image is quantified using the number of the primary data loss voxels and the mesh adaptive direct search algorithm is applied to minimize the objective function. Scatter-corrected CT images are obtained using the optimized blocker. Results: The proposed method is evaluated using Catphan©504 phantom and a head patient. On the Catphan©504, our approach reduces the average CT number error from 115 Hounsfield unit (HU) to 11 HU in the selected regions of interest, and improves the image contrast by a factor of 1.45 in the high-contrast regions. On the head patient, the CT number error is reduced from 97 HU to 6 HU in the soft tissue region and image spatial non-uniformity is decreased from 27% to 5% after correction. Conclusion: The proposed optimized blocker design is practical and attractive for CBCT guided radiation

  2. SU-F-J-211: Scatter Correction for Clinical Cone-Beam CT System Using An Optimized Stationary Beam Blocker with a Single Scan

    Energy Technology Data Exchange (ETDEWEB)

    Liang, X; Zhang, Z; Xie, Y [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, GuangDong (China); Gong, S; Niu, T [Department of Radiation Oncology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang (China); Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang (China); Zhou, Q [Department of Radiation Oncology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang (China)

    2016-06-15

    Purpose: X-ray scatter photons result in significant image quality degradation of cone-beam CT (CBCT). Measurement based algorithms using beam blocker directly acquire the scatter samples and achieve significant improvement on the quality of CBCT image. Within existing algorithms, single-scan and stationary beam blocker proposed previously is promising due to its simplicity and practicability. Although demonstrated effectively on tabletop system, the blocker fails to estimate the scatter distribution on clinical CBCT system mainly due to the gantry wobble. In addition, the uniform distributed blocker strips in our previous design results in primary data loss in the CBCT system and leads to the image artifacts due to data insufficiency. Methods: We investigate the motion behavior of the beam blocker in each projection and design an optimized non-uniform blocker strip distribution which accounts for the data insufficiency issue. An accurate scatter estimation is then achieved from the wobble modeling. Blocker wobble curve is estimated using threshold-based segmentation algorithms in each projection. In the blocker design optimization, the quality of final image is quantified using the number of the primary data loss voxels and the mesh adaptive direct search algorithm is applied to minimize the objective function. Scatter-corrected CT images are obtained using the optimized blocker. Results: The proposed method is evaluated using Catphan©504 phantom and a head patient. On the Catphan©504, our approach reduces the average CT number error from 115 Hounsfield unit (HU) to 11 HU in the selected regions of interest, and improves the image contrast by a factor of 1.45 in the high-contrast regions. On the head patient, the CT number error is reduced from 97 HU to 6 HU in the soft tissue region and image spatial non-uniformity is decreased from 27% to 5% after correction. Conclusion: The proposed optimized blocker design is practical and attractive for CBCT guided radiation

  3. A Hierarchical Volumetric Shadow Algorithm for Single Scattering

    OpenAIRE

    Baran, Ilya; Chen, Jiawen; Ragan-Kelley, Jonathan Millar; Durand, Fredo; Lehtinen, Jaakko

    2010-01-01

    Volumetric effects such as beams of light through participating media are an important component in the appearance of the natural world. Many such effects can be faithfully modeled by a single scattering medium. In the presence of shadows, rendering these effects can be prohibitively expensive: current algorithms are based on ray marching, i.e., integrating the illumination scattered towards the camera along each view ray, modulated by visibility to the light source at each sample. Visibility...

  4. Evaluation of scatter correction using a single isotope for simultaneous emission and transmission data

    Energy Technology Data Exchange (ETDEWEB)

    Yang, J.; Kuikka, J.T.; Vanninen, E.; Laensimies, E. [Kuopio Univ. Hospital (Finland). Dept. of Clinical Physiology and Nuclear Medicine; Kauppinen, T.; Patomaeki, L. [Kuopio Univ. (Finland). Dept. of Applied Physics

    1999-05-01

    Photon scatter is one of the most important factors degrading the quantitative accuracy of SPECT images. Many scatter correction methods have been proposed. The single isotope method was proposed by us. Aim: We evaluate the scatter correction method of improving the quality of images by acquiring emission and transmission data simultaneously with single isotope scan. Method: To evaluate the proposed scatter correction method, a contrast and linearity phantom was studied. Four female patients with fibromyalgia (FM) syndrome and four with chronic back pain (BP) were imaged. Grey-to-cerebellum (G/C) and grey-to-white matter (G/W) ratios were determined by one skilled operator for 12 regions of interest (ROIs) in each subject. Results: The linearity of activity response was improved after the scatter correction (r=0.999). The y-intercept value of the regression line was 0.036 (p<0.0001) after scatter correction and the slope was 0.954. Pairwise correlation indicated the agreement between nonscatter corrected and scatter corrected images. Reconstructed slices before and after scatter correction demonstrate a good correlation in the quantitative accuracy of radionuclide concentration. G/C values have significant correlation coefficients between original and corrected data. Conclusion: The transaxial images of human brain studies show that the scatter correction using single isotope in simultaneous transmission and emission tomography provides a good scatter compensation. The contrasts were increased on all 12 ROIs. The scatter compensation enhanced details of physiological lesions. (orig.)

  5. Influence of different contributions of scatter and attenuation on the threshold values in contrast-based algorithms for volume segmentation.

    Science.gov (United States)

    Matheoud, Roberta; Della Monica, Patrizia; Secco, Chiara; Loi, Gianfranco; Krengli, Marco; Inglese, Eugenio; Brambilla, Marco

    2011-01-01

    The aim of this work is to evaluate the role of different amounts of attenuation and scatter on FDG-PET image volume segmentation using a contrast-oriented method based on the target-to-background (TB) ratio and target dimensions. A phantom study was designed employing 3 phantom sets, which provided a clinical range of attenuation and scatter conditions, equipped with 6 spheres of different volumes (0.5-26.5 ml). The phantoms were: (1) the Hoffman 3-dimensional brain phantom, (2) a modified International Electrotechnical Commission (IEC) phantom with an annular ring of water bags of 3 cm thickness fit over the IEC phantom, and (3) a modified IEC phantom with an annular ring of water bags of 9 cm. The phantom cavities were filled with a solution of FDG at 5.4 kBq/ml activity concentration, and the spheres with activity concentration ratios of about 16, 8, and 4 times the background activity concentration. Images were acquired with a Biograph 16 HI-REZ PET/CT scanner. Thresholds (TS) were determined as a percentage of the maximum intensity in the cross-section area of the spheres. To reduce statistical fluctuations, a nominal maximum value was calculated as the mean of all voxels above 95% of the maximum. To find the TS value that yielded an area A best matching the true value, the cross sections were auto-contoured in the attenuation-corrected slices, varying TS in steps of 1%, until the area so determined differed by less than 10 mm² from its known physical value. Multiple regression methods were used to derive an adaptive thresholding algorithm and to test its dependence on different conditions of attenuation and scatter. The errors of scatter and attenuation correction increased with the increasing amount of attenuation and scatter in the phantoms. Despite these increasing inaccuracies, the PET threshold segmentation algorithm was not influenced by the different conditions of attenuation and scatter. The test of the hypothesis of coincident regression lines for the three phantoms used
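    The threshold search itself is a short loop. A sketch on a synthetic slice follows; the pixel size, sphere size, and TB ratio are assumptions, and the regression step that turns many such searches into an adaptive algorithm is omitted.

```python
import numpy as np

# Contrast-oriented threshold search (sketch): step the threshold TS down
# in 1% increments of a noise-robust maximum until the auto-contoured
# cross-section area matches the known physical area within 10 mm^2.
pixel_area_mm2 = 4.0
rng = np.random.default_rng(4)
img = rng.normal(1.0, 0.05, (64, 64))              # background activity
yy, xx = np.mgrid[:64, :64]
sphere = (yy - 32) ** 2 + (xx - 32) ** 2 < 8 ** 2
img[sphere] += 4.0                                 # hot sphere, TB ratio ~ 5
true_area = sphere.sum() * pixel_area_mm2

vmax = img[img > 0.95 * img.max()].mean()          # noise-robust "maximum"
for ts in range(100, 0, -1):                       # threshold in percent
    area = (img >= ts / 100.0 * vmax).sum() * pixel_area_mm2
    if abs(area - true_area) < 10.0:               # 10 mm^2 tolerance
        break
print(f"TS = {ts}% of max, area error = {area - true_area:.1f} mm^2")
```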

  6. Scatter measurement and correction method for cone-beam CT based on single grating scan

    Science.gov (United States)

    Huang, Kuidong; Shi, Wenlong; Wang, Xinyu; Dong, Yin; Chang, Taoqi; Zhang, Hua; Zhang, Dinghua

    2017-06-01

    In cone-beam computed tomography (CBCT) systems based on flat-panel detector imaging, the presence of scatter significantly reduces the quality of slices. Based on the concept of collimation, this paper presents a scatter measurement and correction method based on a single grating scan. First, according to the characteristics of CBCT imaging, the scan method using a single grating is worked out and the design requirements of the grating are analyzed. Second, by analyzing the composition of object projection images and object-and-grating projection images, a processing method for the scatter image at a single projection angle is proposed. In addition, to avoid an additional scan, this paper proposes an angle interpolation method for scatter images to reduce scan cost. Finally, the experimental results show that the scatter images obtained by this method are accurate and reliable, and the effect of scatter correction is obvious. When the additional object-and-grating projection images are collected and interpolated at intervals of 30 deg, the scatter correction error of slices can still be controlled within 3%.
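    The angle-interpolation step can be sketched directly: scatter images measured with the grating every 30 degrees are linearly interpolated to the remaining projection angles before subtraction. Angles, image shapes, and the flat synthetic scatter values below are assumptions.

```python
import numpy as np

# Linear interpolation of grating-measured scatter images across angles.
step = 30
angles_meas = np.arange(0, 360, step)
scatter_meas = {int(a): np.full((64, 64), 100.0 + a) for a in angles_meas}

def scatter_at(angle_deg):
    a0 = int(angle_deg // step) * step % 360       # lower measured angle
    a1 = (a0 + step) % 360                         # upper measured angle
    w = (angle_deg % step) / step                  # interpolation weight
    return (1.0 - w) * scatter_meas[a0] + w * scatter_meas[a1]

projection = np.full((64, 64), 500.0)              # object-only projection
corrected = projection - scatter_at(45.0)          # halfway between 30 and 60
```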

  7. [An automatic color correction algorithm for digital human body sections].

    Science.gov (United States)

    Zhuge, Bin; Zhou, He-qin; Tang, Lei; Lang, Wen-hui; Feng, Huan-qing

    2005-06-01

    To find a new approach to improve the uniformity of color parameters for image data of serial sections of the human body, an auto-color-correction algorithm in the RGB color space based on a standard CMYK color chart was proposed. The gray part of the color chart was auto-segmented from every original image, and fifteen gray values were obtained. The transformation function between the measured and standard gray values of the color chart, and the corresponding lookup table, were then computed. In RGB color space, the colors of the images were corrected according to the lookup table. The color of the original Chinese Digital Human Girl No. 1 (CDH-G1) database was corrected using the algorithm with Matlab 6.5, taking 13.475 s per picture on a personal computer. Using the algorithm, the color of the original database is corrected automatically and quickly, and the uniformity of color parameters in the corrected dataset is improved.
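    The chart-based correction amounts to fitting a transfer curve through the gray patches and applying it as a lookup table. A minimal sketch follows; the patch values are made up, whereas the paper segments them automatically from the CMYK chart in each image.

```python
import numpy as np

# Gray-chart lookup-table correction (sketch): map measured gray-patch
# values onto their standard values, build a 256-entry LUT, apply per channel.
measured_gray = np.array([12, 30, 49, 68, 90, 112, 133, 152,
                          171, 190, 207, 222, 236, 247, 253], float)
standard_gray = np.linspace(16, 240, 15)           # assumed standard values

lut = np.interp(np.arange(256), measured_gray, standard_gray)  # monotone map
lut = np.clip(np.round(lut), 0, 255).astype(np.uint8)

def correct_image(rgb):
    # rgb: (H, W, 3) uint8 array; LUT applied channel-wise by indexing
    return lut[rgb]

img = np.random.default_rng(5).integers(0, 256, (4, 4, 3), dtype=np.uint8)
out = correct_image(img)
```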

  8. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    International Nuclear Information System (INIS)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo; Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro; Kato, Rikio

    2005-01-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the image I_AC^μb reconstructed with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)
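    A minimal sketch of the IBSC idea under stated assumptions: a Gaussian stands in for the scatter function, and scatter_fraction (a scalar or per-voxel map) stands in for the paper's image-based scatter fraction function; neither is the authors' actual kernel:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def ibsc_correct(img_ac, sigma_vox, scatter_fraction):
        """IBSC-style correction: estimate the scatter component by blurring
        the attenuation-corrected image and scaling it by a scatter fraction,
        then subtract it from the attenuation-corrected image."""
        scatter = scatter_fraction * gaussian_filter(img_ac, sigma=sigma_vox)
        return np.clip(img_ac - scatter, 0.0, None)
    ```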

  9. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method.

    Science.gov (United States)

    Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the image I_AC^μb reconstructed with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.

  10. Library based x-ray scatter correction for dedicated cone beam breast CT

    International Nuclear Information System (INIS)

    Shi, Linxi; Zhu, Lei; Vedantham, Srinivasan; Karellas, Andrew

    2016-01-01

    Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast-loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models with different sizes using the GEANT4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of general population. Spatial nonuniformity (SNU) and contrast to signal deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors’ method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors’ method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal

  11. Library based x-ray scatter correction for dedicated cone beam breast CT

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Linxi; Zhu, Lei, E-mail: leizhu@gatech.edu [Nuclear and Radiological Engineering and Medical Physics Programs, The George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Vedantham, Srinivasan; Karellas, Andrew [Department of Radiology, University of Massachusetts Medical School, Worcester, Massachusetts 01655 (United States)

    2016-08-15

    Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast-loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models with different sizes using the GEANT4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of general population. Spatial nonuniformity (SNU) and contrast to signal deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors’ method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors’ method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal
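    A minimal sketch of the library lookup-and-subtract step, assuming the precomputed MC scatter stacks are keyed by breast-model diameter and that a simple integer shift aligns the library scatter with the clinical projections; all names are illustrative:

    ```python
    import numpy as np

    def select_scatter(library, breast_diameter_mm):
        """Pick the precomputed scatter distribution whose breast-model
        diameter is closest to the diameter estimated from a first-pass
        reconstruction. library: dict diameter_mm -> scatter projections."""
        key = min(library, key=lambda d: abs(d - breast_diameter_mm))
        return library[key]

    def correct_projections(measured, scatter, shift_px):
        """Translate the library scatter to match the clinical projection
        geometry, then subtract it from the measured projections."""
        aligned = np.roll(scatter, shift_px, axis=-1)
        return np.clip(measured - aligned, 0.0, None)
    ```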

  12. Transmission-less attenuation correction in time-of-flight PET: analysis of a discrete iterative algorithm

    International Nuclear Information System (INIS)

    Defrise, Michel; Rezaei, Ahmadreza; Nuyts, Johan

    2014-01-01

    The maximum likelihood attenuation correction factors (MLACF) algorithm has been developed to calculate the maximum-likelihood estimate of the activity image and the attenuation sinogram in time-of-flight (TOF) positron emission tomography, using only emission data without prior information on the attenuation. We consider the case of a Poisson model of the data, in the absence of scatter or random background. In this case the maximization with respect to the attenuation factors can be achieved in a closed form and the MLACF algorithm works by updating the activity. Despite promising numerical results, the convergence of this algorithm has not been analysed. In this paper we derive the algorithm and demonstrate that the MLACF algorithm monotonically increases the likelihood, is asymptotically regular, and that the limit points of the iteration are stationary points of the likelihood. Because the problem is not convex, however, the limit points might be saddle points or local maxima. To obtain some empirical insight into the latter question, we present data obtained by applying MLACF to 2D simulated TOF data, using a large number of iterations and different initializations. (paper)
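    The closed-form attenuation-factor step lends itself to a compact illustration. Below is a schematic toy alternation for the simplified model (Poisson data, no scatter or randoms, dense arrays); the array layout and names are assumptions for illustration, not the authors' implementation:

    ```python
    import numpy as np

    def mlacf(y, P, n_iter=20):
        """Toy MLACF-style alternation for TOF-PET data.
        y[i, t]   : counts in LOR i, TOF bin t
        P[i, t, j]: TOF forward-projection weight of voxel j into (i, t)
        Closed-form ML attenuation factor per LOR, a_i = sum_t y / sum_t ybar,
        alternated with an MLEM update of the activity lam."""
        n_lor, n_tof, n_vox = P.shape
        lam = np.ones(n_vox)
        eps = 1e-12
        for _ in range(n_iter):
            ybar = np.einsum('itj,j->it', P, lam)                  # unattenuated forward model
            a = y.sum(axis=1) / np.maximum(ybar.sum(axis=1), eps)  # closed-form ACF estimate
            model = a[:, None] * ybar                              # attenuated model
            num = np.einsum('it,itj->j', a[:, None] * (y / np.maximum(model, eps)), P)
            den = np.einsum('i,itj->j', a, P)
            lam *= num / np.maximum(den, eps)                      # MLEM activity update
        return lam, a
    ```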

  13. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    Energy Technology Data Exchange (ETDEWEB)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo [National Center for Geriatrics and Gerontology Research Institute, Department of Brain Science and Molecular Imaging, Obu, Aichi (Japan); Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro [National Cardiovascular Center Research Institute, Department of Investigative Radiology, Suita (Japan); Kato, Rikio [National Center for Geriatrics and Gerontology, Department of Radiology, Obu (Japan)

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with {sup 99m}Tc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I{sub AC}{sup {mu}}{sup b} with Chang's attenuation correction factor. The scatter component image is estimated by convolving I{sub AC}{sup {mu}}{sup b} with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and {sup 99m}Tc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)

  14. A library least-squares approach for scatter correction in gamma-ray tomography

    International Nuclear Information System (INIS)

    Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro

    2015-01-01

    Scattered radiation is known to lead to distortion in reconstructed images in Computed Tomography (CT). The effects of scattered radiation are especially pronounced in non-scanning, multiple-source systems, which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for it. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of transmission and scatter components in any given detector in the UoB gamma-ray tomography system. - Highlights: • A LLS approach is proposed for scatter correction in gamma-ray tomography. • The validity of the LLS approach is tested through experiments. • Gain shift and pulse pile-up affect the accuracy of the LLS approach. • The LLS approach successfully estimates scatter profiles
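    A minimal sketch of a library least-squares decomposition, assuming per-detector library spectra for the pure transmission component and one or more scatter components; it is a generic nonnegative least-squares fit, not the authors' code:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def lls_decompose(measured_spectrum, lib_transmission, lib_scatter_list):
        """Express a detector's measured energy spectrum as a nonnegative
        combination of library transmission and scatter spectra."""
        A = np.column_stack([lib_transmission] + list(lib_scatter_list))
        coeffs, resid = nnls(A, measured_spectrum)
        transmission = coeffs[0] * lib_transmission
        scatter = A[:, 1:] @ coeffs[1:]
        return transmission, scatter, resid
    ```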

  15. Compton scatter correction for planar scintigraphic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Vaan Steelandt, E; Dobbeleir, A; Vanregemorter, J [Algemeen Ziekenhuis Middelheim, Antwerp (Belgium). Dept. of Nuclear Medicine and Radiotherapy

    1995-12-01

    A major problem in nuclear medicine is the image degradation due to Compton scatter in the patient. Photons emitted by the radioactive tracer scatter in collisions with electrons of the surrounding tissue. Due to the resulting loss of energy and change in direction, the scattered photons induce an object-dependent background on the images, which degrades the contrast of warm and cold lesions. Although theoretically interesting, most of the techniques proposed in the literature, like the use of symmetrical photopeaks, cannot be implemented on the commonly used gamma camera due to the energy/linearity/sensitivity corrections applied in the detector. A method for a single-energy isotope, based on existing methods with adjustments towards daily practice and clinical situations, is proposed. It is assumed that the scatter image, recorded from photons collected within a scatter window adjacent to the photopeak, is a reasonably close approximation of the true scatter component of the image reconstructed from the photopeak window. A fraction `k` of the image using the scatter window is subtracted from the image recorded in the photopeak window to produce the compensated image. The crux of the method is the right value for the factor `k`, which is determined in a mathematical way and confirmed by experiments. To determine `k`, different kinds of scatter media are used and positioned in different ways in order to simulate a clinical situation. For a secondary energy window from 100 to 124 keV below a photopeak window from 126 to 154 keV, a value of 0.7 is found. This value has been verified using both an anthropomorphic thyroid phantom and the Rollo contrast phantom.
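    The subtraction step itself is one line; a minimal sketch with the window settings and k = 0.7 quoted above (the function name is illustrative):

    ```python
    import numpy as np

    # scatter window 100-124 keV, photopeak window 126-154 keV
    K = 0.7  # fraction of the scatter-window image subtracted from the photopeak image

    def compensate(photopeak_img, scatter_img, k=K):
        """Dual-window compensation: the scatter-window image, scaled by k,
        approximates the scatter component inside the photopeak window."""
        return np.clip(photopeak_img - k * scatter_img, 0.0, None)
    ```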

  16. Holographic corrections to meson scattering amplitudes

    Energy Technology Data Exchange (ETDEWEB)

    Armoni, Adi; Ireson, Edwin, E-mail: 746616@swansea.ac.uk

    2017-06-15

    We compute meson scattering amplitudes using the holographic duality between confining gauge theories and string theory, in order to consider holographic corrections to the Veneziano amplitude and associated higher-point functions. The generic nature of such computations is explained, thanks to the well-understood nature of confining string backgrounds, and two different examples of the calculation in given backgrounds are used to illustrate the details. The effect we discover, whilst only qualitative, is re-obtainable in many such examples, in four-point as well as higher-point amplitudes.

  17. Genetic algorithm for chromaticity correction in diffraction limited storage rings

    Directory of Open Access Journals (Sweden)

    M. P. Ehrlichman

    2016-04-01

    A multiobjective genetic algorithm is developed for optimizing nonlinearities in diffraction limited storage rings. This algorithm determines sextupole and octupole strengths for chromaticity correction that deliver optimized dynamic aperture and beam lifetime. The algorithm makes use of dominance constraints to breed desirable properties into the early generations. The momentum aperture is optimized indirectly by constraining the chromatic tune footprint and optimizing the off-energy dynamic aperture. The result is an effective and computationally efficient technique for correcting chromaticity in a storage ring while maintaining optimal dynamic aperture and beam lifetime.

  18. MO-FG-CAMPUS-JeP1-05: Water Equivalent Path Length Calculations Using Scatter-Corrected Head and Neck CBCT Images to Evaluate Patients for Adaptive Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, J; Park, Y; Sharp, G; Winey, B [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States)

    2016-06-15

    Purpose: To establish a method to evaluate the dosimetric impact of anatomic changes in head and neck patients during proton therapy by using scatter-corrected cone-beam CT (CBCT) images. Methods: The water equivalent path length (WEPL) was calculated to the distal edge of PTV contours by using tomographic images available for six head and neck patients who received photon therapy. The proton range variation was measured by calculating the difference between the distal WEPLs calculated with the planning CT and weekly treatment CBCT images. By performing an automatic rigid registration, a six degrees-of-freedom (DOF) correction was made to the CBCT images to account for the patient setup uncertainty. For accurate WEPL calculations, an existing CBCT scatter correction algorithm, whose performance was already proven for phantom images, was calibrated for head and neck patient images. Specifically, two different image similarity measures, mutual information (MI) and mean square error (MSE), were tested for the deformable image registration (DIR) in the CBCT scatter correction algorithm. Results: The impact of weight loss was reflected in the distal WEPL differences, with the automatic rigid registration reducing the influence of patient setup uncertainty on the WEPL calculation results. The WEPL difference averaged over the distal area was 2.9 ± 2.9 mm across all fractions of the six patients, and its maximum, mostly found at the last available fraction, was 6.2 ± 3.4 mm. The MSE-based DIR successfully registered each treatment CBCT image to the planning CT image. On the other hand, the MI-based DIR deformed the skin voxels in the planning CT image to the immobilization mask in the treatment CBCT image, most of which was cropped out of the planning CT image. Conclusion: The dosimetric impact of anatomic changes was evaluated by calculating the distal WEPL difference with the existing scatter-correction algorithm appropriately calibrated.
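    A minimal sketch of a WEPL calculation along one ray, assuming the scatter-corrected CBCT has already been converted to a relative-stopping-power (RSP) volume exposed through a callable; the names and the fixed step length are illustrative:

    ```python
    import numpy as np

    def wepl_along_ray(rsp_at, entry_mm, direction, depth_mm, step_mm=1.0):
        """Accumulate water equivalent path length, WEPL = integral of RSP ds,
        from the entry point to the distal edge along a straight ray.
        rsp_at: callable (x, y, z) -> relative stopping power."""
        pos = np.asarray(entry_mm, dtype=float)
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)
        wepl = 0.0
        for _ in range(int(depth_mm / step_mm)):
            wepl += rsp_at(*pos) * step_mm
            pos += d * step_mm
        return wepl

    # Range variation at one distal point: WEPL(planning CT) - WEPL(weekly CBCT)
    ```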

  19. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm.

    Science.gov (United States)

    Di Simone, Alessio

    2016-06-25

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions.

  20. Efficient Color-Dressed Calculation of Virtual Corrections

    CERN Document Server

    Giele, Walter; Winter, Jan

    2010-01-01

    With the advent of generalized unitarity and parametric integration techniques, the construction of a generic Next-to-Leading Order Monte Carlo becomes feasible. Such a generator will entail the treatment of QCD color in the amplitudes. We extend the concept of color dressing to one-loop amplitudes, resulting in the formulation of an explicit algorithmic solution for the calculation of arbitrary scattering processes at Next-to-Leading order. The resulting algorithm is of exponential complexity, that is the numerical evaluation time of the virtual corrections grows by a constant multiplicative factor as the number of external partons is increased. To study the properties of the method, we calculate the virtual corrections to $n$-gluon scattering.

  1. Evaluation of attenuation and scatter correction requirements in small animal PET and SPECT imaging

    Science.gov (United States)

    Konik, Arda Bekir

    Positron emission tomography (PET) and single photon emission computed tomography (SPECT) are two nuclear emission-imaging modalities that rely on the detection of high-energy photons emitted from radiotracers administered to the subject. The majority of these photons are attenuated (absorbed or scattered) in the body, resulting in count losses or deviations from true detection, which in turn degrade the accuracy of images. In clinical emission tomography, sophisticated correction methods are often required, employing additional x-ray CT or radionuclide transmission scans. Having proven their potential in both clinical and research areas, both PET and SPECT are being adapted for small animal imaging. However, despite the growing interest in small animal emission tomography, little scientific information exists about the accuracy of these correction methods on smaller-sized objects, or about what level of correction is required. The purpose of this work is to determine the role of attenuation and scatter corrections as a function of object size through simulations. The simulations were performed using Interactive Data Language (IDL) and a Monte Carlo based package, Geant4 application for emission tomography (GATE). In the IDL simulations, PET and SPECT data acquisition were modeled in the presence of attenuation. A mathematical emission and attenuation phantom approximating a thorax slice and slices from real PET/CT data were scaled to 5 different sizes (i.e., human, dog, rabbit, rat and mouse). The simulated emission data collected from these objects were reconstructed. The reconstructed images, with and without attenuation correction, were compared to the ideal (i.e., non-attenuated) reconstruction. Next, using GATE, scatter fraction values (the ratio of the scatter counts to the total counts) of PET and SPECT scanners were measured for various sizes of NEMA (cylindrical phantoms representing small animals and human), MOBY (realistic mouse/rat model) and XCAT (realistic human model

  2. ITERATIVE SCATTER CORRECTION FOR GRID-LESS BEDSIDE CHEST RADIOGRAPHY: PERFORMANCE FOR A CHEST PHANTOM.

    Science.gov (United States)

    Mentrup, Detlef; Jockel, Sascha; Menser, Bernd; Neitzel, Ulrich

    2016-06-01

    The aim of this work was to experimentally compare the contrast improvement factors (CIFs) of a newly developed software-based scatter correction to the CIFs achieved by an antiscatter grid. To this end, three aluminium discs were placed in the lung, the retrocardial and the abdominal areas of a thorax phantom, and digital radiographs of the phantom were acquired both with and without a stationary grid. The contrast generated by the discs was measured in both images, and the CIFs achieved by grid usage were determined for each disc. Additionally, the non-grid images were processed with a scatter correction software. The contrasts generated by the discs were determined in the scatter-corrected images, and the corresponding CIFs were calculated. The CIFs obtained with the grid and with the software were in good agreement. In conclusion, the experiment demonstrates quantitatively that software-based scatter correction allows restoring the image contrast of a non-grid image in a manner comparable with an antiscatter grid.
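    A minimal sketch of the CIF computation, assuming disc and local-background region masks are available; the contrast definition below is one common choice, not necessarily the paper's:

    ```python
    import numpy as np

    def local_contrast(img, disc_mask, bg_mask):
        """Contrast of an aluminium disc against its local background."""
        disc, bg = img[disc_mask].mean(), img[bg_mask].mean()
        return abs(bg - disc) / bg

    def contrast_improvement_factor(img_corrected, img_plain, disc_mask, bg_mask):
        """CIF: contrast with grid (or software correction) divided by the
        contrast of the plain non-grid image."""
        return (local_contrast(img_corrected, disc_mask, bg_mask) /
                local_contrast(img_plain, disc_mask, bg_mask))
    ```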

  3. Multiangle Implementation of Atmospheric Correction (MAIAC): 2. Aerosol Algorithm

    Science.gov (United States)

    Lyapustin, A.; Wang, Y.; Laszlo, I.; Kahn, R.; Korkin, S.; Remer, L.; Levy, R.; Reid, J. S.

    2011-01-01

    An aerosol component of a new multiangle implementation of atmospheric correction (MAIAC) algorithm is presented. MAIAC is a generic algorithm developed for the Moderate Resolution Imaging Spectroradiometer (MODIS), which performs aerosol retrievals and atmospheric correction over both dark vegetated surfaces and bright deserts based on a time series analysis and image-based processing. The MAIAC look-up tables explicitly include surface bidirectional reflectance. The aerosol algorithm derives the spectral regression coefficient (SRC) relating surface bidirectional reflectance in the blue (0.47 micron) and shortwave infrared (2.1 micron) bands; this quantity is prescribed in the MODIS operational Dark Target algorithm based on a parameterized formula. The MAIAC aerosol products include aerosol optical thickness and a fine-mode fraction at resolution of 1 km. This high resolution, required in many applications such as air quality, brings new information about aerosol sources and, potentially, their strength. AERONET validation shows that the MAIAC and MOD04 algorithms have similar accuracy over dark and vegetated surfaces and that MAIAC generally improves accuracy over brighter surfaces due to the SRC retrieval and explicit bidirectional reflectance factor characterization, as demonstrated for several U.S. West Coast AERONET sites. Due to its generic nature and developed angular correction, MAIAC performs aerosol retrievals over bright deserts, as demonstrated for the Solar Village Aerosol Robotic Network (AERONET) site in Saudi Arabia.

  4. Two-loop fermionic corrections to massive Bhabha scattering

    Energy Technology Data Exchange (ETDEWEB)

    Actis, S.; Riemann, T. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Czakon, M. [Wuerzburg Univ. (Germany). Inst. fuer Theoretische Physik und Astrophysik]|[Institute of Nuclear Physics, NSCR DEMOKRITOS, Athens (Greece); Gluza, J. [Silesia Univ., Katowice (Poland). Inst. of Physics

    2007-05-15

    We evaluate the two-loop corrections to Bhabha scattering from fermion loops in the context of pure Quantum Electrodynamics. The differential cross section is expressed by a small number of Master Integrals with exact dependence on the fermion masses m{sub e}, m{sub f} and the Mandelstam invariants s, t, u. We determine the limit of fixed scattering angle and high energy, assuming the hierarchy of scales m{sup 2}{sub e}<

  5. Temporal high-pass non-uniformity correction algorithm based on grayscale mapping and hardware implementation

    Science.gov (United States)

    Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo

    2015-08-01

    In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are: (1) detector fabrication inaccuracies; (2) non-linearity and variations in the read-out electronics and (3) optical path effects. The non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are often divided into calibration-based non-uniformity correction (CBNUC) algorithms and scene-based non-uniformity correction (SBNUC) algorithms. As non-uniformity drifts temporally, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not need, so SBNUC algorithms become an essential part of an infrared imaging system. The poor robustness of SBNUC algorithms often leads to two defects: artifacts and over-correction; meanwhile, due to the complicated calculation process and large storage consumption, hardware implementation of SBNUC algorithms is difficult, especially on a Field Programmable Gate Array (FPGA) platform. The THP and GM algorithm proposed in this paper can eliminate the non-uniformity without causing these defects. The hardware implementation of the algorithm, based only on an FPGA, has two advantages: (1) low resource consumption, and (2) small hardware delay of less than 20 lines. It can be transplanted to a variety of infrared detectors equipped with an FPGA image processing module, and it can reduce both the stripe non-uniformity and the ripple non-uniformity.
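    A minimal sketch of a generic temporal high-pass SBNUC loop under stated assumptions: a box filter stands in for the spatial low-pass, a first-order IIR for the temporal low-pass, and the paper's grayscale-mapping refinement is not reproduced:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    class TemporalHighPassNUC:
        """Per-pixel offset = temporal low-pass of (frame - spatial low-pass);
        corrected frame = frame - offset."""
        def __init__(self, shape, time_const_frames=100.0, spatial_size=15):
            self.offset = np.zeros(shape)
            self.alpha = 1.0 / time_const_frames
            self.size = spatial_size

        def correct(self, frame):
            residual = frame - uniform_filter(frame, self.size)   # fixed-pattern estimate
            self.offset += self.alpha * (residual - self.offset)  # temporal low-pass (IIR)
            return frame - self.offset
    ```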

  6. Correction of oral contrast artifacts in CT-based attenuation correction of PET images using an automated segmentation algorithm

    International Nuclear Information System (INIS)

    Ahmadian, Alireza; Ay, Mohammad R.; Sarkar, Saeed; Bidgoli, Javad H.; Zaidi, Habib

    2008-01-01

    Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis, as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium as high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μmap), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high-CT-number object segmentation using combined region- and boundary-based segmentation, and second, object classification into bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled, followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of generated μmaps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique in

  7. Investigation of Compton scattering correction methods in cardiac SPECT by Monte Carlo simulations

    International Nuclear Information System (INIS)

    Silva, A.M. Marques da; Furlan, A.M.; Robilotta, C.C.

    2001-01-01

    The goal of this work was the use of Monte Carlo simulations to investigate the effects of two scattering correction methods, dual energy window (DEW) and dual photopeak window (DPW), in quantitative cardiac SPECT reconstruction. The MCAT torso-cardiac phantom, with 99mTc activity and a non-uniform attenuation map, was simulated. Two different photopeak windows were evaluated in the DEW method: 15% and 20%. Two 10%-wide subwindows centered symmetrically within the photopeak were used in the DPW method. Iterative ML-EM reconstruction with a modified projector-backprojector for attenuation correction was applied. Results indicated that the choice of the scattering and photopeak windows determines the correction accuracy. For the 15% window, a fitted scatter fraction gives better results than k = 0.5. For the 20% window, DPW is the best method, but it requires parameter estimation using Monte Carlo simulations. (author)

  8. An inter-crystal scatter correction method for DOI PET image reconstruction

    International Nuclear Information System (INIS)

    Lam, Chih Fung; Hagiwara, Naoki; Obi, Takashi; Yamaguchi, Masahiro; Yamaya, Taiga; Murayama, Hideo

    2006-01-01

    New positron emission tomography (PET) scanners utilize depth-of-interaction (DOI) information to improve image resolution, particularly at the edge of the field-of-view, while maintaining high detector sensitivity. However, the inter-crystal scatter (ICS) effect cannot be neglected in DOI scanners due to the use of smaller crystals. ICS is the phenomenon wherein a single gamma photon produces multiple scintillations due to Compton scatter in the detecting crystals. In the case of ICS, only one scintillation position is estimated by detectors with Anger-type logic calculation. This causes an error in position detection, and ICS worsens the image contrast, particularly for smaller hotspots. In this study, we propose to model an ICS probability by using a Monte Carlo simulator. The probability is given as a statistical relationship between the gamma photon's first interaction crystal pair and the detected crystal pair. It is then used to improve the system matrix of a statistical image reconstruction algorithm, such as maximum likelihood expectation maximization (ML-EM), in order to correct for the position error caused by ICS. We apply the proposed method to simulated data of the jPET-D4, which is a four-layer DOI PET being developed at the National Institute of Radiological Sciences. Our computer simulations show that image contrast is recovered successfully by the proposed method. (author)

  9. An algorithm for computing screened Coulomb scattering in GEANT4

    Energy Technology Data Exchange (ETDEWEB)

    Mendenhall, Marcus H. [Vanderbilt University Free Electron Laser Center, P.O. Box 351816 Station B, Nashville, TN 37235-1816 (United States)]. E-mail: marcus.h.mendenhall@vanderbilt.edu; Weller, Robert A. [Department of Electrical Engineering and Computer Science, Vanderbilt University, P.O. Box 351821 Station B, Nashville, TN 37235-1821 (United States)]. E-mail: robert.a.weller@vanderbilt.edu

    2005-01-01

    An algorithm has been developed for the GEANT4 Monte-Carlo package for the efficient computation of screened Coulomb interatomic scattering. It explicitly integrates the classical equations of motion for scattering events, resulting in precise tracking of both the projectile and the recoil target nucleus. The algorithm permits the user to plug in an arbitrary screening function, such as Lenz-Jensen screening, which is good for backscattering calculations, or Ziegler-Biersack-Littmark screening, which is good for nuclear straggling and implantation problems. This will allow many of the applications of the TRIM and SRIM codes to be extended into the much more general GEANT4 framework where nuclear and other effects can be included.
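    An independent sketch of the underlying physics (not the GEANT4 code): the classical center-of-mass scattering angle for a screened Coulomb potential from the standard orbit integral, here with the Ziegler-Biersack-Littmark universal screening function; the units and constants are illustrative:

    ```python
    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq

    # ZBL universal screening function phi(x), x = r / a_U
    ZBL = [(0.18175, 3.19980), (0.50986, 0.94229), (0.28022, 0.40290), (0.02817, 0.20162)]

    def phi(x):
        return sum(c * np.exp(-d * x) for c, d in ZBL)

    def scattering_angle(E_cm_eV, b_nm, Z1, Z2):
        """theta(b) = pi - 2b * integral du / sqrt(1 - V(1/u)/E - b^2 u^2),
        integrated from u = 0 to the inverse distance of closest approach."""
        a_U = 0.8854 * 0.0529 / (Z1**0.23 + Z2**0.23)   # screening length, nm
        k = 1.44 * Z1 * Z2                              # e^2/(4 pi eps0), eV*nm
        F = lambda u: 1.0 - (k * u * phi(1.0 / (u * a_U))) / E_cm_eV - (b_nm * u)**2
        u0 = brentq(F, 1e-12, 2.0 / b_nm)               # F < 0 is guaranteed at u = 2/b
        g = lambda t: u0 * np.cos(t) / np.sqrt(max(F(u0 * np.sin(t)), 1e-15))
        integral, _ = quad(g, 0.0, np.pi / 2, limit=200)
        return np.pi - 2.0 * b_nm * integral
    ```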

  10. An algorithm for computing screened Coulomb scattering in GEANT4

    International Nuclear Information System (INIS)

    Mendenhall, Marcus H.; Weller, Robert A.

    2005-01-01

    An algorithm has been developed for the GEANT4 Monte-Carlo package for the efficient computation of screened Coulomb interatomic scattering. It explicitly integrates the classical equations of motion for scattering events, resulting in precise tracking of both the projectile and the recoil target nucleus. The algorithm permits the user to plug in an arbitrary screening function, such as Lenz-Jensen screening, which is good for backscattering calculations, or Ziegler-Biersack-Littmark screening, which is good for nuclear straggling and implantation problems. This will allow many of the applications of the TRIM and SRIM codes to be extended into the much more general GEANT4 framework where nuclear and other effects can be included

  11. A reconstruction algorithm for coherent scatter computed tomography based on filtered back-projection

    International Nuclear Information System (INIS)

    Stevendaal, U. van; Schlomka, J.-P.; Harding, A.; Grass, M.

    2003-01-01

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter form factor of the investigated object. Reconstruction from coherently scattered x-rays is commonly done using algebraic reconstruction techniques (ART). In this paper, we propose an alternative approach based on filtered back-projection. For the first time, a three-dimensional (3D) filtered back-projection technique using curved 3D back-projection lines is applied to two-dimensional coherent scatter projection data. The proposed algorithm is tested with simulated projection data as well as with projection data acquired with a demonstrator setup similar to a multi-line CT scanner geometry. While yielding comparable image quality as ART reconstruction, the modified 3D filtered back-projection algorithm is about two orders of magnitude faster. In contrast to iterative reconstruction schemes, it has the advantage that subfield-of-view reconstruction becomes feasible. This allows a selective reconstruction of the coherent-scatter form factor for a region of interest. The proposed modified 3D filtered back-projection algorithm is a powerful reconstruction technique to be implemented in a CSCT scanning system. This method gives coherent scatter CT the potential of becoming a competitive modality for medical imaging or nondestructive testing

  12. Modified automatic term selection v2: A faster algorithm to calculate inelastic scattering cross-sections

    Energy Technology Data Exchange (ETDEWEB)

    Rusz, Ján, E-mail: jan.rusz@fysik.uu.se

    2017-06-15

    Highlights: • New algorithm for calculating double differential scattering cross-sections. • Shown to have good convergence properties. • Outperforms the older MATS algorithm, particularly in zone axis calculations. - Abstract: We present a new algorithm for calculating inelastic scattering cross-sections for fast electrons. Compared to the previous Modified Automatic Term Selection (MATS) algorithm (Rusz et al. [18]), it has far better convergence properties in zone axis calculations and it allows the contributions of individual atoms to be identified. One can think of it as a blend of the MATS algorithm and a method described by Weickenmeier and Kohl [10].

  13. Two-photon exchange corrections in elastic lepton-proton scattering

    Energy Technology Data Exchange (ETDEWEB)

    Tomalak, Oleksandr; Vanderhaeghen, Marc [Johannes Gutenberg Universitaet Mainz (Germany)

    2015-07-01

    The measured value of the proton charge radius from the Lamb shift of energy levels in muonic hydrogen is in strong contradiction, by 7-8 standard deviations, with the value obtained from electronic hydrogen spectroscopy and the value extracted from unpolarized electron-proton scattering data. The dominant unaccounted higher order contribution in scattering experiments corresponds to the two photon exchange (TPE) diagram. The elastic contribution to the TPE correction was studied with the fixed momentum transfer dispersion relations and compared to the hadronic model with off-shell photon-nucleon vertices. A dispersion relation formalism with one subtraction was proposed. Theoretical predictions of the TPE elastic contribution to the unpolarized elastic electron-proton scattering and polarization transfer observables in the low momentum transfer region were made. The TPE formalism was generalized to the case of massive leptons and the elastic contribution was evaluated for the kinematics of upcoming muon-proton scattering experiment (MUSE).

  14. Improving quantitative dosimetry in (177)Lu-DOTATATE SPECT by energy window-based scatter corrections

    DEFF Research Database (Denmark)

    de Nijs, Robin; Lagerburg, Vera; Klausen, Thomas L

    2014-01-01

    and the activity, which depends on the collimator type, the utilized energy windows and the applied scatter correction techniques. In this study, energy window subtraction-based scatter correction methods are compared experimentally and quantitatively. MATERIALS AND METHODS: (177)Lu SPECT images of a phantom...... technique, the measured ratio was close to the real ratio, and the differences between spheres were small. CONCLUSION: For quantitative (177)Lu imaging MEGP collimators are advised. Both energy peaks can be utilized when the ESSE correction technique is applied. The difference between the calculated...

  15. Compton scatter correction in case of multiple crosstalks in SPECT imaging.

    Science.gov (United States)

    Sychra, J J; Blend, M J; Jobe, T H

    1996-02-01

    A strategy for Compton scatter correction in brain SPECT images was proposed recently. It assumes that two radioisotopes are used and that a significant portion of photons of one radioisotope (for example, Tc99m) spills over into the low energy acquisition window of the other radioisotope (for example, Tl201). We are extending this approach to cases of several radioisotopes with mutual, multiple and significant photon spillover. In the example above, one may correct not only the Tl201 image but also the Tc99m image corrupted by the Compton scatter originating from the small component of high energy Tl201 photons. The proposed extension is applicable to other anatomical domains (cardiac imaging).

  16. Monte Carlo evaluation of scattering correction methods in 131I studies using pinhole collimator

    International Nuclear Information System (INIS)

    López Díaz, Adlin; San Pedro, Aley Palau; Martín Escuela, Juan Miguel; Rodríguez Pérez, Sunay; Díaz García, Angelina

    2017-01-01

    Scattering is quite important for image activity quantification. In order to study the scattering factors and the efficacy of 3 multiple-energy-window scatter correction methods during 131I thyroid studies with a pinhole collimator (5 mm hole), a Monte Carlo (MC) simulation was developed. The GAMOS MC code was used to model the gamma camera and the thyroid source geometry. First, to validate the MC gamma camera pinhole-source model, the sensitivities in air and water of the simulated and measured thyroid phantom geometries were compared. Next, simulations were performed to investigate scattering and the results of the triple energy window (TEW), double window (DW) and reduced double window (RDW) correction methods for different thyroid sizes and depth thicknesses. The discrepancies relative to the MC true events were evaluated. Results: The accuracy of the GAMOS MC model was verified and validated. The scattering contribution to the image was significant, between 27% and 40%. The discrepancies between the results of the 3 multiple-energy-window correction methods were significant (between 9% and 86%). The reduced double window method (15%) provided discrepancies of 9-16%. Conclusions: For the simulated thyroid geometry with pinhole, the RDW (15%) was the most effective. (author)
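    For reference, a minimal sketch of the standard TEW estimate used as one of the compared methods: the scatter under the photopeak is approximated by a trapezoid spanned by two narrow flanking subwindows (the window widths in keV are illustrative defaults):

    ```python
    import numpy as np

    def tew_scatter(c_lower, c_upper, w_sub_keV, w_peak_keV):
        """Triple energy window estimate of scatter counts in the photopeak:
        trapezoid between the count densities of the two flanking subwindows."""
        return (c_lower / w_sub_keV + c_upper / w_sub_keV) * w_peak_keV / 2.0

    def tew_correct(c_peak, c_lower, c_upper, w_sub_keV=3.0, w_peak_keV=20.0):
        s = tew_scatter(c_lower, c_upper, w_sub_keV, w_peak_keV)
        return np.clip(c_peak - s, 0.0, None)
    ```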

  17. Development and evaluation of attenuation and scatter correction techniques for SPECT using the Monte Carlo method

    International Nuclear Information System (INIS)

    Ljungberg, M.

    1990-05-01

    Quantitative scintigraphic images, obtained by NaI(Tl) scintillation cameras, are limited by photon attenuation and the contribution from scattered photons. A Monte Carlo program was developed in order to evaluate these effects. Simple source-phantom geometries and more complex nonhomogeneous cases can be simulated. Comparisons with experimental data for both homogeneous and nonhomogeneous regions and with published results have shown good agreement. The usefulness for simulation of parameters in scintillation camera systems, stationary as well as in SPECT systems, has also been demonstrated. An attenuation correction method based on density maps and build-up functions has been developed. The maps were obtained from a transmission measurement using an external 57Co flood source, and the build-up was simulated by the Monte Carlo code. Two scatter correction methods, the dual-window method and the convolution-subtraction method, have been compared using the Monte Carlo method. The aim was to compare the estimated scatter with the true scatter in the photo-peak window. It was concluded that accurate depth-dependent scatter functions are essential for a proper scatter correction. A new scatter and attenuation correction method has been developed based on scatter line-spread functions (SLSF) obtained for different depths and lateral positions in the phantom. An emission image is used to determine the source location in order to estimate the scatter in the photo-peak window. Simulation studies of a clinically realistic source in different positions in cylindrical water phantoms were made for three photon energies. The SLSF correction method was also evaluated by simulation studies for (1) a myocardial source, (2) a uniform source in the lungs and (3) a tumour located in the lungs in a realistic, nonhomogeneous computer phantom. The results showed that quantitative images could be obtained in nonhomogeneous regions. (67 refs.)

  18. Evaluation of the ICS and DEW scatter correction methods for low statistical content scans in 3D PET

    International Nuclear Information System (INIS)

    Sossi, V.; Oakes, T.R.; Ruth, T.J.

    1996-01-01

    The performance of the Integral Convolution and the Dual Energy Window scatter correction methods in 3D PET has been evaluated over a wide range of statistical content of acquired data (1M to 400M events). The order in which scatter correction and detector normalization should be applied has also been investigated. Phantom and human neuroreceptor studies were used with the following figures of merit: axial and radial uniformity, sinogram and image noise, contrast accuracy and contrast accuracy uniformity. Both scatter correction methods perform reliably in the range of the number of events examined. Normalization applied after scatter correction yields better radial uniformity and fewer image artifacts

  19. Magnetic corrections to π -π scattering lengths in the linear sigma model

    Science.gov (United States)

    Loewe, M.; Monje, L.; Zamora, R.

    2018-03-01

    In this article, we consider the magnetic corrections to π-π scattering lengths in the frame of the linear sigma model. For this, we consider all the one-loop corrections in the s, t, and u channels, associated to the insertion of a Schwinger propagator for charged pions, working in the region of small values of the magnetic field. Our calculation relies on an appropriate expansion for the propagator. It turns out that the leading scattering length, l = 0 in the S channel, increases for an increasing value of the magnetic field, in the isospin I = 2 case, whereas the opposite effect is found for the I = 0 case. The isospin symmetry is valid because the insertion of the magnetic field occurs through the absolute value of the electric charges. The channel I = 1 does not receive any corrections. These results, for the channels I = 0 and I = 2, are opposite with respect to the thermal corrections found previously in the literature.

  20. Multiple-scattering corrections to the Beer-Lambert law

    International Nuclear Information System (INIS)

    Zardecki, A.

    1983-01-01

    The effect of multiple scattering on the validity of the Beer-Lambert law is discussed for a wide range of particle-size parameters and optical depths. To predict the amount of received radiant power, appropriate correction terms are introduced. For particles larger than or comparable to the wavelength of radiation, the small-angle approximation is adequate; whereas for small densely packed particles, the diffusion theory is advantageously employed. These two approaches are used in the context of the problem of laser-beam propagation in a dense aerosol medium. In addition, preliminary results obtained by using a two-dimensional finite-element discrete-ordinates transport code are described. Multiple-scattering effects for laser propagation in fog, cloud, rain, and aerosol cloud are modeled
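    A minimal numerical illustration of the corrected law, with P = P0·exp(-τ)·(1 + C(τ)) as a generic parameterization of the multiple-scattering enhancement; the functional form of C used below is purely illustrative, not the paper's correction terms:

    ```python
    import numpy as np

    def received_power(p0, tau, correction=None):
        """Beer-Lambert prediction with an optional multiple-scattering
        correction term C(tau) >= 0: P = p0 * exp(-tau) * (1 + C(tau))."""
        single = p0 * np.exp(-tau)
        return single if correction is None else single * (1.0 + correction(tau))

    # At optical depth 3, even a modest forward-scattering contribution
    # changes the received power appreciably:
    print(received_power(1.0, 3.0))                                # uncorrected
    print(received_power(1.0, 3.0, correction=lambda t: 0.2 * t))  # with C(tau) = 0.2*tau
    ```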

  1. Scatter correction, intermediate view estimation and dose characterization in megavoltage cone-beam CT imaging

    Science.gov (United States)

    Sramek, Benjamin Koerner

    The ability to deliver conformal dose distributions in radiation therapy through intensity modulation and the potential for tumor dose escalation to improve treatment outcome has necessitated an increase in localization accuracy of inter- and intra-fractional patient geometry. Megavoltage cone-beam CT imaging using the treatment beam and onboard electronic portal imaging device is one option currently being studied for implementation in image-guided radiation therapy. However, routine clinical use is predicated upon continued improvements in image quality and patient dose delivered during acquisition. The formal statement of hypothesis for this investigation was that the conformity of planned to delivered dose distributions in image-guided radiation therapy could be further enhanced through the application of kilovoltage scatter correction and intermediate view estimation techniques to megavoltage cone-beam CT imaging, and that normalized dose measurements could be acquired and inter-compared between multiple imaging geometries. The specific aims of this investigation were to: (1) incorporate the Feldkamp, Davis and Kress filtered backprojection algorithm into a program to reconstruct a voxelized linear attenuation coefficient dataset from a set of acquired megavoltage cone-beam CT projections, (2) characterize the effects on megavoltage cone-beam CT image quality resulting from the application of Intermediate View Interpolation and Intermediate View Reprojection techniques to limited-projection datasets, (3) incorporate the Scatter and Primary Estimation from Collimator Shadows (SPECS) algorithm into megavoltage cone-beam CT image reconstruction and determine the set of SPECS parameters which maximize image quality and quantitative accuracy, and (4) evaluate the normalized axial dose distributions received during megavoltage cone-beam CT image acquisition using radiochromic film and thermoluminescent dosimeter measurements in anthropomorphic pelvic and head and

  2. Attenuation correction for the HRRT PET-scanner using transmission scatter correction and total variation regularization

    DEFF Research Database (Denmark)

    Keller, Sune H; Svarer, Claus; Sibomana, Merence

    2013-01-01

    In the standard software for the Siemens high-resolution research tomograph (HRRT) positron emission tomography (PET) scanner, the most commonly used segmentation in the μ-map reconstruction for human brain scans is maximum a posteriori for transmission (MAP-TR). Bias in the lower cerebellum ... scatter correction in the μ-map reconstruction and total variation filtering to the transmission processing. Results: Comparing MAP-TR and the new TXTV with gold standard CT-based attenuation correction, we found that TXTV has less bias as compared to MAP-TR. We also compared images acquired at the HRRT ...

  3. Scatter and attenuation correction in SPECT

    International Nuclear Information System (INIS)

    Ljungberg, Michael

    2004-01-01

    The absorbed dose is related to the activity uptake in the organ and its temporal distribution. The count rate measured with scintillation cameras is related to activity through the system sensitivity, cps/MBq. By accounting for physical processes and imaging limitations we can measure the activity at different time points. Correction for physical factors, such as attenuation and scatter, is required for accurate quantitation. Both planar and SPECT imaging can be used to estimate activities for radiopharmaceutical dosimetry. Planar methods have been the most widely used, but planar imaging is a 2D technique. With accurate modelling of the imaging process in iterative reconstruction, SPECT methods will prove to be more accurate

  4. A software-based x-ray scatter correction method for breast tomosynthesis

    OpenAIRE

    Jia Feng, Steve Si; Sechopoulos, Ioannis

    2011-01-01

    Purpose: To develop a software-based scatter correction method for digital breast tomosynthesis (DBT) imaging and investigate its impact on the image quality of tomosynthesis reconstructions of both phantoms and patients.

  5. Hadron mass corrections in semi-inclusive deep inelastic scattering

    International Nuclear Information System (INIS)

    Accardi, A.; Hobbs, T.; Melnitchouk, W.

    2009-01-01

    We derive mass corrections for semi-inclusive deep inelastic scattering of leptons from nucleons using a collinear factorization framework which incorporates the initial state mass of the target nucleon and the final state mass of the produced hadron h. The hadron mass correction is made by introducing a generalized, finite-Q² scaling variable ζ_h for the hadron fragmentation function, which approaches the usual energy fraction z_h = E_h/ν in the Bjorken limit. We systematically examine the kinematic dependencies of the mass corrections to semi-inclusive cross sections, and find that these are even larger than for inclusive structure functions. The hadron mass corrections compete with the experimental uncertainties at kinematics typical of current facilities, Q² of a few GeV² and intermediate x_B > 0.3, and will be important to efforts at extracting parton distributions from semi-inclusive processes at intermediate energies.

  6. Complete $O(\\alpha)$ QED corrections to polarized Compton scattering

    CERN Document Server

    Denner, Ansgar

    1999-01-01

    The complete QED corrections of O(alpha) to polarized Compton scattering are calculated for finite electron mass and including the real corrections induced by the processes e^- gamma -> e^- gamma gamma and e^- gamma -> e^- e^- e^+. All relevant formulas are listed in a form that is well suited for direct implementation in computer codes. We present a detailed numerical discussion of the O(alpha)-corrected cross section and the left-right asymmetry in the energy range of present and future Compton polarimeters, which are used to determine the beam polarization of high-energy e^± beams. For photons with energies of a few eV and electrons with SLC energies or smaller, the corrections are of the order of a few per mille. In the energy range of future e^+e^- colliders, however, they reach 1-2% and cannot be neglected in a precision polarization measurement.

  7. Virtual two-loop corrections to Bhabha scattering

    International Nuclear Information System (INIS)

    Bjoerkevoll, K.S.

    1992-03-01

    The author has developed methods for the calculation of contributions from six ladder-like diagrams to Bhabha scattering. The leading terms both for separate diagrams and for the sum of the gauge-invariant set of all diagrams have been calculated. The study has been limited to contributions from Feynman diagrams without real photons, and all calculations have been done with s ≫ |t| ≫ m², where s is the center-of-mass energy squared, t is the square of the transferred four-momentum, and m is the electron mass. For the separate diagrams the results depend upon how λ² is related to s, |t| and m², whereas the leading term of the sum of the six diagrams is the same in the cases that have been considered. The methods described should be valuable for calculations of contributions from other Feynman diagrams, in particular QED corrections to Bhabha scattering or pair production at small angles. 23 refs., 5 figs., 5 tabs

  8. Investigation of electron-loss and photon scattering correction factors for FAC-IR-300 ionization chamber

    Science.gov (United States)

    Mohammadi, S. M.; Tavakoli-Anbaran, H.; Zeinali, H. Z.

    2017-02-01

    The parallel-plate free-air ionization chamber termed FAC-IR-300 was designed at the Atomic Energy Organization of Iran, AEOI. This chamber is used for low- and medium-energy X-ray dosimetry at the primary standard level. In order to evaluate the air kerma, some correction factors such as the electron-loss correction factor (ke) and the photon scattering correction factor (ksc) are needed. The ke factor corrects for the charge lost from the collecting volume and the ksc factor corrects for the scattering of photons into the collecting volume. In this work ke and ksc were estimated by Monte Carlo simulation. These correction factors are calculated for mono-energetic photons. Based on the simulation data, the ke and ksc values for the FAC-IR-300 ionization chamber are 1.0704 and 0.9982, respectively.

  9. Simulation tools for scattering corrections in spectrally resolved x-ray computed tomography using McXtrace

    Science.gov (United States)

    Busi, Matteo; Olsen, Ulrik L.; Knudsen, Erik B.; Frisvad, Jeppe R.; Kehres, Jan; Dreier, Erik S.; Khalil, Mohamad; Haldrup, Kristoffer

    2018-03-01

    Spectral computed tomography is an emerging imaging method that uses recently developed energy-discriminating photon-counting detectors (PCDs). This technique enables measurements in isolated high-energy ranges, in which the dominant interaction between the x-rays and the sample is incoherent scattering. The scattered radiation causes a loss of contrast in the results, and its correction has proven to be a complex problem due to its dependence on energy, material composition, and geometry. Monte Carlo simulations can use a physical model to estimate the scattering contribution to the signal, at the cost of high computational time. We present a fast Monte Carlo simulation tool, based on McXtrace, to predict the energy-resolved radiation being scattered and absorbed by objects of complex shapes. We validate the tool through measurements using a single CdTe PCD (Multix ME-100) and use it for scatter correction in a simulation of a spectral CT. We found the correction to account for up to 7% relative amplification in the reconstructed linear attenuation. It is a useful tool for x-ray CT to obtain more accurate material discrimination, especially in the high-energy range, where incoherent scattering interactions become dominant (>50 keV).

  10. A numerical study of super-resolution through fast 3D wideband algorithm for scattering in highly-heterogeneous media

    KAUST Repository

    Létourneau, Pierre-David

    2016-09-19

    We present a wideband fast algorithm capable of accurately computing the full numerical solution of the problem of acoustic scattering of waves by multiple finite-sized bodies, such as spherical scatterers, in three dimensions. By full solution, we mean that no assumption (e.g. Rayleigh scattering, geometrical optics, weak scattering, Born single scattering, etc.) is necessary regarding the properties of the scatterers, their distribution or the background medium. The algorithm is also fast in the sense that it scales linearly with the number of unknowns. We use this algorithm to study the phenomenon of super-resolution in time-reversal refocusing in highly-scattering media recently observed experimentally (Lemoult et al., 2011), and provide numerical evidence that such a phenomenon can be explained through a homogenization theory.

  11. An improved non-uniformity correction algorithm and its GPU parallel implementation

    Science.gov (United States)

    Cheng, Kuanhong; Zhou, Huixin; Qin, Hanlin; Zhao, Dong; Qian, Kun; Rong, Shenghui

    2018-05-01

    The performance of the SLP-THP (spatial low-pass and temporal high-pass) non-uniformity correction algorithm is seriously affected by the result of the SLP filter, which leads to image blurring and ghosting artifacts. To address this problem, an improved SLP-THP based non-uniformity correction method with a curvature constraint is proposed. We put forward a new way to estimate the spatial low-frequency component. First, the details and contours of the input image are obtained separately by minimizing the local Gaussian curvature and the mean curvature of the image surface. Then, a guided filter is used to combine these two parts into an estimate of the spatial low-frequency component. Finally, this SLP component is fed into the SLP-THP method to achieve non-uniformity correction. The performance of the proposed algorithm was verified on several real and simulated infrared image sequences. The experimental results indicate that the proposed algorithm reduces the non-uniformity without losing detail. A GPU-based parallel implementation that runs 150 times faster than the CPU version is also presented, showing that the proposed algorithm has great potential for real-time application.

  12. Use of scatter correction in quantitative I-123 MIBG scintigraphy for differentiating patients with Parkinsonism: Results from Phantom experiment and clinical study

    International Nuclear Information System (INIS)

    Bai, J.; Hashimoto, J.; Suzuki, T.; Nakahara, T.; Kubo, A.; Ohira, M.; Takao, M.; Ogawa, K.

    2007-01-01

    The aims of this study were to elucidate the feasibility of scatter correction for improving the quantitative accuracy of the heart-to-mediastinum (H/M) ratio in I-123 MIBG imaging, and to clarify whether the H/M ratio calculated from scatter-corrected images improves the accuracy of differentiating patients with Parkinsonism from those with other neurological disorders. The H/M ratio was calculated using the counts from planar images processed with and without scatter correction in a phantom and in patients. The triple-energy window (TEW) method was used for scatter correction. Fifty-five patients were enrolled in the clinical study. Receiver operating characteristic (ROC) curve analysis was used to evaluate diagnostic performance. The H/M ratio increased after scatter correction in the phantom simulating normal cardiac uptake, while no change was observed in the phantom simulating no uptake. Scatter correction stabilized the H/M ratio by eliminating the influence of scattered photons originating from the liver, especially in the condition of no cardiac uptake. Similarly, scatter correction increased the H/M ratio in conditions other than Parkinson's disease but produced no change in Parkinson's disease itself, thereby widening the difference in H/M ratios between the two groups. The overall power of the test did not show any significant improvement after scatter correction in differentiating patients with Parkinsonism. Based on the results of this study it is concluded that scatter correction improves the quantitative accuracy of the H/M ratio in MIBG imaging, but it does not offer significant incremental diagnostic value over conventional imaging (without scatter correction). Nevertheless, the scatter correction technique deserves consideration in order to make the test more robust and obtain stable H/M ratios. (author)

  13. The relative contributions of scatter and attenuation corrections toward improved brain SPECT quantification

    International Nuclear Information System (INIS)

    Stodilka, Robert Z.; Msaki, Peter; Prato, Frank S.; Nicholson, Richard L.; Kemp, B.J.

    1998-01-01

    Mounting evidence indicates that scatter and attenuation are major confounds to objective diagnosis of brain disease by quantitative SPECT. There is considerable debate, however, as to the relative importance of scatter correction (SC) and attenuation correction (AC), and how they should be implemented. The efficacy of SC and AC for 99mTc brain SPECT was evaluated using a two-compartment fully tissue-equivalent anthropomorphic head phantom. Four correction schemes were implemented: uniform broad-beam AC, non-uniform broad-beam AC, uniform SC+AC, and non-uniform SC+AC. SC was based on non-stationary deconvolution scatter subtraction, modified to incorporate a priori knowledge of either the head contour (uniform SC) or the transmission map (non-uniform SC). The quantitative accuracy of the correction schemes was evaluated in terms of contrast recovery, relative quantification (cortical:cerebellar activity), uniformity (coefficient of variation of 230 macro-voxels, x 100%), and bias (relative to a calibration scan). Our results were: uniform broad-beam (μ = 0.12 cm^-1) AC (the most popular correction): 71% contrast recovery, 112% relative quantification, 7.0% uniformity, +23% bias. Non-uniform broad-beam (soft tissue μ = 0.12 cm^-1) AC: 73%, 114%, 6.0%, +21%, respectively. Uniform SC+AC: 90%, 99%, 4.9%, +12%, respectively. Non-uniform SC+AC: 93%, 101%, 4.0%, +10%, respectively. SC and AC achieved the best quantification; however, non-uniform corrections produce only small improvements over their uniform counterparts. SC+AC was found to be superior to AC; this advantage is distinct and consistent across all four quantification indices. (author)

  14. A weighted least-squares lump correction algorithm for transmission-corrected gamma-ray nondestructive assay

    International Nuclear Information System (INIS)

    Prettyman, T.H.; Sprinkle, J.K. Jr.; Sheppard, G.A.

    1993-01-01

    With transmission-corrected gamma-ray nondestructive assay instruments such as the Segmented Gamma Scanner (SGS) and the Tomographic Gamma Scanner (TGS) currently under development at Los Alamos National Laboratory, the amount of gamma-ray emitting material can be underestimated for samples in which the emitting material consists of particles or lumps of highly attenuating material. This problem is encountered in the assay of uranium- and plutonium-bearing samples. To correct for this source of bias, we have developed a least-squares algorithm that uses transmission-corrected assay results for several emitted energies and a weighting function to account for statistical uncertainties in the assay results. The variation of effective lump size in the fitted model is parameterized; this allows the correction to be performed for a wide range of lump-size distributions. It may be possible to use the reduced chi-squared value obtained in the fit to identify samples in which assay assumptions have been violated. We found that the algorithm significantly reduced bias in simulated assays and improved SGS assay results for plutonium-bearing samples. Further testing will be conducted with the TGS, which is expected to be less susceptible than the SGS to systematic sources of bias
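
    The core idea can be sketched in a few lines: transmission-corrected assays at several gamma energies are fitted with a true mass times an energy-dependent lump self-attenuation factor, weighted by the counting uncertainties. The sketch below assumes a simple slab-lump attenuation model and uses made-up energies, attenuation coefficients, assay values and uncertainties purely for illustration; it is not the paper's exact parameterization.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical example values -- not from the paper.
    E = np.array([129.3, 203.5, 345.0, 413.7])     # keV, Pu-239 gamma lines
    mu = np.array([2.6, 0.9, 0.35, 0.25])          # cm^2/g, assumed attenuation at E
    assay = np.array([8.1, 9.0, 9.6, 9.8])         # g, transmission-corrected assays
    sigma = np.array([0.3, 0.25, 0.2, 0.2])        # 1-sigma counting uncertainties

    def lump_model(mu_E, m_true, rho_d):
        """Observed assay = true mass x slab-lump self-attenuation factor.
        rho_d is the effective areal density (g/cm^2) of the lumps."""
        x = mu_E * rho_d
        return m_true * (1.0 - np.exp(-x)) / x

    popt, pcov = curve_fit(lump_model, mu, assay, p0=(10.0, 0.1),
                           sigma=sigma, absolute_sigma=True)
    m_true, rho_d = popt
    # A large reduced chi-squared flags samples violating the assay assumptions.
    chi2_red = np.sum(((assay - lump_model(mu, *popt)) / sigma) ** 2) / (len(E) - 2)
    print(f"lump-corrected mass = {m_true:.2f} g, reduced chi^2 = {chi2_red:.2f}")
    ```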

  15. Distortion correction algorithm for UAV remote sensing image based on CUDA

    International Nuclear Information System (INIS)

    Wenhao, Zhang; Yingcheng, Li; Delong, Li; Changsheng, Teng; Jin, Liu

    2014-01-01

    In China, natural disasters are characterized by wide distribution, severe destruction and high impact range, and they cause significant property damage and casualties every year. Following a disaster, timely and accurate acquisition of geospatial information can provide an important basis for disaster assessment, emergency relief, and reconstruction. In recent years, Unmanned Aerial Vehicle (UAV) remote sensing systems have played an important role in major natural disasters, with UAVs becoming an important means of obtaining disaster information. UAVs are typically equipped with non-metric digital cameras whose lens distortion causes significant geometric deformation in the acquired images and affects the accuracy of subsequent processing. The slow speed of traditional CPU-based distortion correction algorithms cannot meet the requirements of disaster emergencies. Therefore, we propose a Compute Unified Device Architecture (CUDA)-based image distortion correction algorithm for UAV remote sensing, which takes advantage of the powerful parallel processing capability of the GPU, greatly improving the efficiency of distortion correction. Our experiments show that, compared with the traditional CPU algorithm and excluding image loading and saving times, the maximum acceleration ratio of the proposed algorithm reaches 58. Data processing time can thus be reduced by one to two hours, considerably improving disaster emergency response capability
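
    The reason the correction parallelizes so well is that each output pixel is computed independently from a closed-form camera model. Below is a minimal sketch of that per-pixel mapping using the common Brown lens model; the intrinsics and distortion coefficients are hypothetical stand-ins for values obtained from camera calibration, and the vectorized NumPy form corresponds directly to a one-thread-per-pixel CUDA kernel.

    ```python
    import numpy as np

    def undistort_coords(u, v, K, k1, k2, p1, p2):
        """Map corrected pixel coords to positions in the raw (distorted) image
        using the Brown model. K = (fx, fy, cx, cy) intrinsics; k1, k2 radial and
        p1, p2 tangential coefficients are assumed known from calibration."""
        fx, fy, cx, cy = K
        x = (u - cx) / fx                      # normalized image coordinates
        y = (v - cy) / fy
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2 * r2
        x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
        y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
        return x_d * fx + cx, y_d * fy + cy    # where to sample the raw image

    # Every output pixel is computed independently, which is why the correction
    # maps naturally onto one CUDA thread per pixel.
    H, W = 4000, 6000
    vv, uu = np.mgrid[0:H, 0:W].astype(np.float32)
    src_u, src_v = undistort_coords(uu, vv, (3800.0, 3800.0, W / 2, H / 2),
                                    -0.12, 0.03, 1e-4, -1e-4)  # hypothetical values
    ```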

  16. A First-order Prediction-Correction Algorithm for Time-varying (Constrained) Optimization: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Simonetto, Andrea [Universite catholique de Louvain]

    2017-07-25

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
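
    A toy sketch of the prediction-correction idea is shown below for an unconstrained time-varying quadratic, assuming nothing from the paper beyond the two-step structure: the prediction takes a few gradient steps on a local model of the next cost (using Hessian-vector products only, never the inverse), and the correction takes gradient steps on the newly revealed cost. All problem data and step counts are illustrative.

    ```python
    import numpy as np

    # Time-varying problem: f(x; t) = 0.5 x^T A x - b(t)^T x, optimizer A^{-1} b(t).
    rng = np.random.default_rng(0)
    M = rng.normal(size=(5, 5)); A = M @ M.T + 5 * np.eye(5)
    b = lambda t: np.array([np.sin(t), np.cos(t), np.sin(2*t), np.cos(3*t), 1.0])

    h, alpha, P, C = 0.05, 0.02, 3, 3          # sampling period, step size, #steps
    x = np.zeros(5)
    for k in range(200):
        t = k * h
        grad = A @ x - b(t)
        dtx = -(b(t) - b(t - h)) / h           # backward-difference proxy for d/dt grad
        # Prediction: gradient steps on the quadratic model of f(.; t+h);
        # only Hessian-vector products A @ d are needed, never A^{-1}.
        d = np.zeros_like(x)
        for _ in range(P):
            d -= alpha * (grad + A @ d + h * dtx)
        x = x + d
        # Correction: plain gradient steps on the newly revealed f(.; t+h).
        for _ in range(C):
            x -= alpha * (A @ x - b(t + h))
    print("tracking error:", np.linalg.norm(x - np.linalg.solve(A, b(200 * h))))
    ```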

  17. Investigation of electron-loss and photon scattering correction factors for FAC-IR-300 ionization chamber

    International Nuclear Information System (INIS)

    Mohammadi, S.M.; Tavakoli-Anbaran, H.; Zeinali, H.Z.

    2017-01-01

    The parallel-plate free-air ionization chamber termed FAC-IR-300 was designed at the Atomic Energy Organization of Iran, AEOI. This chamber is used for low- and medium-energy X-ray dosimetry at the primary standard level. In order to evaluate the air kerma, correction factors such as the electron-loss correction factor (k_e) and the photon scattering correction factor (k_sc) are needed. The k_e factor corrects for charge lost from the collecting volume and the k_sc factor corrects for photons scattered into the collecting volume. In this work k_e and k_sc were estimated by Monte Carlo simulation, calculated for mono-energetic photons. From the simulation data, the k_e and k_sc values for the FAC-IR-300 ionization chamber are 1.0704 and 0.9982, respectively.

  18. Quantum mean-field decoding algorithm for error-correcting codes

    International Nuclear Information System (INIS)

    Inoue, Jun-ichi; Saika, Yohei; Okada, Masato

    2009-01-01

    We numerically examine a quantum version of the TAP (Thouless-Anderson-Palmer)-like mean-field algorithm for the problem of error-correcting codes. For a class of the so-called Sourlas error-correcting codes, we check its usefulness for retrieving the original bit sequence (message) of finite length. The decoding dynamics is derived explicitly and we evaluate the average-case performance through the bit-error rate (BER).

  19. Use of x-ray scattering in absorption corrections for x-ray fluorescence analysis of aerosol loaded filters

    International Nuclear Information System (INIS)

    Nielson, K.K.; Garcia, S.R.

    1976-09-01

    Two methods are described for computing multielement x-ray absorption corrections for aerosol samples collected on IPC-1478 and Whatman 41 filters. The first relies on scatter peak intensities and scattering cross sections to estimate the mass of light elements (Z less than 14) in the sample. This mass is used with the measured heavy-element (Z greater than or equal to 14) masses to iteratively compute sample absorption corrections. The second method uses a linear function of ln(μ) vs ln(E) determined from the scatter peak ratios and estimates the sample mass from the scatter peak intensities. Both methods assume a homogeneous depth distribution of aerosol in a fraction of the front of the filter, and this assumption is evaluated with respect to an exponential aerosol depth distribution. Penetration depths for various real, synthetic and liquid aerosols were measured. Aerosol penetration appeared constant over a 1.1 mg/cm^2 range of sample loading for IPC filters, while absorption corrections for Si and S varied by a factor of two over the same loading range. Corrections computed by the two methods were compared with measured absorption corrections and with atomic absorption analyses of the same samples

  20. A comparative study of attenuation correction algorithms in single photon emission computed tomography (SPECT)

    International Nuclear Information System (INIS)

    Murase, Kenya; Itoh, Hisao; Mogami, Hiroshi; Ishine, Masashiro; Kawamura, Masashi; Iio, Atsushi; Hamamoto, Ken

    1987-01-01

    A computer-based simulation method was developed to assess the relative effectiveness and practicality of various attenuation compensation algorithms in single photon emission computed tomography (SPECT). The effects of non-uniformity of the attenuation coefficient distribution in the body, of errors in determining the body contour, and of statistical noise on reconstruction accuracy, as well as the computation times of the algorithms, were studied. The algorithms were classified into three groups: precorrection, postcorrection and iterative correction methods. Furthermore, a hybrid method was devised by combining several methods. This study is useful for understanding the characteristics, limitations and strengths of the algorithms and for finding a practical correction method for photon attenuation in SPECT. (orig.)

  1. GEO-LEO reflectance band inter-comparison with BRDF and atmospheric scattering corrections

    Science.gov (United States)

    Chang, Tiejun; Xiong, Xiaoxiong Jack; Keller, Graziela; Wu, Xiangqian

    2017-09-01

    The inter-comparison of the reflective solar bands between instruments onboard a geostationary-orbit satellite and a low-Earth-orbit satellite is very helpful for assessing their calibration consistency. GOES-R was launched on November 19, 2016, and Himawari 8 was launched on October 7, 2014. Unlike the previous GOES instruments, the Advanced Baseline Imager on GOES-16 (GOES-R was renamed GOES-16 on November 29, 2016, after reaching orbit) and the Advanced Himawari Imager (AHI) on Himawari 8 have onboard calibrators for the reflective solar bands. The assessment of calibration is important for enhancing their product quality. MODIS and VIIRS, with their stringent calibration requirements and excellent on-orbit calibration performance, provide good references. The simultaneous nadir overpass (SNO) and ray-matching methods are widely used for inter-comparison of reflective solar bands. In this work, the inter-comparisons are performed over a pseudo-invariant target. The use of a stable and uniform calibration site provides comparison at an appropriate reflectance level, accurate adjustment for band spectral-coverage differences, reduced impact from pixel mismatching, and consistency of the BRDF and atmospheric corrections. The site in this work is a desert site in Australia (latitude 29.0° S, longitude 139.8° E). Due to the differences in solar and view angles, two corrections are applied to obtain comparable measurements. The first is the atmospheric scattering correction. The satellite sensors measure top-of-atmosphere reflectance; the scattering, especially Rayleigh scattering, must be removed so that the ground reflectance can be derived. Secondly, the angle differences magnify the BRDF effect, so the ground reflectance must be corrected to obtain comparable measurements. The atmospheric correction is performed using a vector version of the Second Simulation of a Satellite Signal in the Solar Spectrum modeling and BRDF correction is performed using a semi

  2. Virtual hadronic and heavy-fermion O(α^2) corrections to Bhabha scattering

    Energy Technology Data Exchange (ETDEWEB)

    Actis, Stefano [Inst. fuer Theoretische Physik E, RWTH Aachen (Germany)]; Czakon, Michal [Wuerzburg Univ. (Germany). Inst. fuer Theoretische Physik und Astrophysik; Uniwersytet Slaski, Katowice (Poland). Inst. of Physics and Chemistry of Metals]; Gluza, Janusz [Uniwersytet Slaski, Katowice (Poland). Inst. of Physics and Chemistry of Metals]; Riemann, Tord [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]

    2008-07-15

    Effects of vacuum polarization by hadronic and heavy-fermion insertions were the last unknown two-loop QED corrections to high-energy Bhabha scattering. Here we describe the corrections in detail and explore their numerical influence. The hadronic contributions to the virtual O(α^2) QED corrections to the Bhabha-scattering cross-section are evaluated using dispersion relations and computing the convolution of hadronic data with perturbatively calculated kernel functions. The technique of dispersion integrals is also employed to derive the virtual O(α^2) corrections generated by muon, tau and top-quark loops in the small electron-mass limit for arbitrary values of the internal-fermion masses. At a meson factory with 1 GeV center-of-mass energy the complete effect of hadronic and heavy-fermion corrections amounts to less than 0.5 per mille and reaches, at 10 GeV, up to about 2 per mille. At the Z resonance it amounts to 2.3 per mille at 3 degrees; overall, hadronic corrections are less than 4 per mille. For ILC energies (500 GeV or above), the combined effect of hadrons and heavy fermions becomes 6 per mille at 3 degrees; hadrons contribute less than 20 per mille in the whole angular region. (orig.)

  3. Development of a 3D muon disappearance algorithm for muon scattering tomography

    Science.gov (United States)

    Blackwell, T. B.; Kudryavtsev, V. A.

    2015-05-01

    Upon passing through a material, muons lose energy, scatter off nuclei and atomic electrons, and can stop in the material. Muons will more readily lose energy in higher density materials. Therefore multiple muon disappearances within a localized volume may signal the presence of high-density materials. We have developed a new technique that improves the sensitivity of standard muon scattering tomography. This technique exploits these muon disappearances to perform non-destructive assay of an inspected volume. Muons that disappear have their track evaluated using a 3D line extrapolation algorithm, which is in turn used to construct a 3D tomographic image of the inspected volume. Results of Monte Carlo simulations that measure muon disappearance in different types of target materials are presented. The ability to differentiate between different density materials using the 3D line extrapolation algorithm is established. Finally the capability of this new muon disappearance technique to enhance muon scattering tomography techniques in detecting shielded HEU in cargo containers has been demonstrated.

  4. Inverse scattering and refraction corrected reflection for breast cancer imaging

    Science.gov (United States)

    Wiskin, J.; Borup, D.; Johnson, S.; Berggren, M.; Robinson, D.; Smith, J.; Chen, J.; Parisky, Y.; Klock, John

    2010-03-01

    Reflection ultrasound (US) has been used as an adjunct imaging modality for over 30 years. TechniScan, Inc. has developed unique transmission and concomitant reflection algorithms which are used to reconstruct images from data gathered during a tomographic breast scanning process called Warm Bath Ultrasound (WBU™). The transmission algorithm yields high-resolution, 3D attenuation and speed-of-sound (SOS) images. The reflection algorithm is based on canonical ray tracing with refraction correction via the SOS and attenuation reconstructions. The refraction-corrected reflection algorithm allows 360-degree compounding, resulting in the reflection image. The requisite data are collected by scanning the entire breast in a 33 °C water bath, on average in 8 minutes. This presentation explains how the data are collected and processed by the 3D transmission and reflection imaging mode algorithms. The processing is carried out using two NVIDIA® Tesla™ GPU processors, accessing data on a 4-TeraByte RAID. The WBU™ images are displayed in a DICOM viewer that allows registration of all three modalities. Several representative cases are presented to demonstrate potential diagnostic capability, including a cyst, a fibroadenoma, and a carcinoma. WBU™ images (SOS, attenuation, and reflection modalities) are shown along with their respective mammograms and standard ultrasound images. In addition, anatomical studies are shown comparing WBU™ images and MRI images of a cadaver breast. This innovative technology is designed to provide additional tools in the armamentarium for diagnosis of breast disease.

  5. Fully multidimensional flux-corrected transport algorithms for fluids

    International Nuclear Information System (INIS)

    Zalesak, S.T.

    1979-01-01

    The theory of flux-corrected transport (FCT) developed by Boris and Book is placed in a simple, generalized format, and a new algorithm for implementing the critical flux limiting stage in multidimensions without resort to time splitting is presented. The new flux limiting algorithm allows the use of FCT techniques in multidimensional fluid problems for which time splitting would produce unacceptable numerical results, such as those involving incompressible or nearly incompressible flow fields. The 'clipping' problem associated with the original one dimensional flux limiter is also eliminated or alleviated. Test results and applications to a two dimensional fluid plasma problem are presented
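
    The critical flux-limiting stage can be written compactly. Below is a minimal one-dimensional sketch of Zalesak-style limiting, assuming periodic boundaries, with A[i] holding the antidiffusive flux through face i+1/2; it caps each flux so that the corrected solution introduces no new extrema. This is an illustrative reduction, not the paper's full multidimensional formulation.

    ```python
    import numpy as np

    def fct_limiter_1d(u_td, u_old, A, dtdx):
        """Zalesak-style 1-D flux limiter: limit antidiffusive fluxes A
        (A[i] = flux through face i+1/2) so no new extrema appear."""
        up = np.maximum(u_td, u_old)               # local admissible bounds
        lo = np.minimum(u_td, u_old)
        u_max = np.maximum.reduce([np.roll(up, 1), up, np.roll(up, -1)])
        u_min = np.minimum.reduce([np.roll(lo, 1), lo, np.roll(lo, -1)])
        Ain = np.maximum(0, np.roll(A, 1)) - np.minimum(0, A)    # flux into cell i
        Aout = np.maximum(0, A) - np.minimum(0, np.roll(A, 1))   # flux out of cell i
        Rp = np.where(Ain > 0, np.minimum(1, (u_max - u_td) / (dtdx * Ain + 1e-30)), 0)
        Rm = np.where(Aout > 0, np.minimum(1, (u_td - u_min) / (dtdx * Aout + 1e-30)), 0)
        # Each face takes the most restrictive factor of its two neighbour cells:
        C = np.where(A >= 0, np.minimum(np.roll(Rp, -1), Rm),
                             np.minimum(Rp, np.roll(Rm, -1)))
        Alim = C * A
        return u_td - dtdx * (Alim - np.roll(Alim, 1))

    # Usage: given the low-order (diffusive) update u_td and the raw
    # antidiffusive fluxes A from a high-order scheme,
    # u_new = fct_limiter_1d(u_td, u_old, A, dt / dx)
    ```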

  6. Clinical value of scatter correction for interictal brain 99m Tc-HMPAO SPECT in mesial temporal lobe epilepsy

    International Nuclear Information System (INIS)

    Sanchez Catasus, C.; Morales, L.; Aguila, A.

    2002-01-01

    Aim: It is well known that some patients with temporal lobe epilepsy (TLE) show normal perfusion in interictal SPECT studies. The aim of this study was to evaluate whether scattered radiation has an influence on this kind of result. Materials and Methods: We studied 15 patients with TLE diagnosed clinically and by video-EEG monitoring with surface electrodes (11 left TLE, 4 right TLE), all of whom showed normal perfusion on interictal brain 99mTc-HMPAO SPECT. The SPECT data were reconstructed by filtered backprojection without scatter correction (A). The same SPECT data were reconstructed after the projections were corrected with the dual-energy-window method of scatter correction (B). Attenuation was corrected in all cases using the first-order Chang method. For image groups A and B, cerebellum perfusion ratios were calculated on irregular regions of interest (ROIs) drawn on the anterior (ATL), lateral (LTL), mesial (MTL) and whole temporal lobe (WTL). To evaluate the influence of scattered radiation, the cerebellum perfusion ratios of each subject were compared with a database of 10 normal subjects, with and without scatter correction, using z-score analysis. Results: In group A, the z-score was less than 2 in all cases. In group B, the z-score was more than 2 in 6 cases, 4 in the MTL (3 left, 1 right) and 2 in the left LTL, consistent with the EEG localization. All images of group B showed better contrast than images of group A. Conclusions: These results suggest that scatter correction could improve the sensitivity of interictal brain SPECT for identifying the epileptic focus in patients with TLE

  7. Corrections to the leading eikonal amplitude for high-energy scattering and quasipotential approach

    International Nuclear Information System (INIS)

    Nguyen Suan Hani; Nguyen Duy Hung

    2003-12-01

    The asymptotic behaviour of the scattering amplitude for two scalar particles at high energy and fixed momentum transfer is reconsidered in quantum field theory. In the framework of the quasipotential approach and the modified perturbation theory, a systematic scheme for finding the leading eikonal scattering amplitude and its corrections is developed and constructed. The connection between the solutions obtained by the quasipotential and functional approaches is also discussed. (author)

  8. MUSIC ALGORITHM FOR LOCATING POINT-LIKE SCATTERERS CONTAINED IN A SAMPLE ON FLAT SUBSTRATE

    Institute of Scientific and Technical Information of China (English)

    Dong Heping; Ma Fuming; Zhang Deyue

    2012-01-01

    In this paper, we consider a MUSIC algorithm for locating point-like scatterers contained in a sample on a flat substrate. Based on an asymptotic expansion of the scattering amplitude proposed by Ammari et al., the reconstruction problem can be reduced to the calculation of the Green function corresponding to the background medium. In addition, we use an explicit formulation of the Green function in the MUSIC algorithm to simplify the calculation when the cross-section of the sample is a half-disc. Numerical experiments are included to demonstrate the feasibility of this method.
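
    The MUSIC step itself is generic and can be sketched briefly: the multistatic response matrix is decomposed by SVD, and a pseudospectrum built from the projection of the background Green-function steering vector onto the noise subspace peaks at the scatterer locations. The sketch below uses a simple free-space far-field kernel as a stand-in for the paper's half-disc substrate Green function, and assumes the number of scatterers is known; all geometry is hypothetical.

    ```python
    import numpy as np

    k = 2 * np.pi                                 # wavenumber (wavelength = 1)
    rx = np.stack([np.linspace(-5, 5, 21), np.zeros(21)], axis=1)  # transducer array
    scat = np.array([[1.0, 3.0], [-2.0, 4.0]])    # true point-scatterer positions

    def green(p, q):                              # far-field proxy Green kernel
        r = np.linalg.norm(p - q, axis=-1)
        return np.exp(1j * k * r) / np.sqrt(r)

    G = np.array([[green(r_, s_) for s_ in scat] for r_ in rx])    # N x M
    K = G @ G.T                                   # Born-type multistatic matrix
    U, s, _ = np.linalg.svd(K)
    Us = U[:, : scat.shape[0]]                    # signal subspace (M known here)

    xs, ys = np.meshgrid(np.linspace(-5, 5, 200), np.linspace(0.5, 6, 200))
    P = np.empty(xs.shape)
    for i in np.ndindex(xs.shape):
        g = green(rx, np.array([xs[i], ys[i]]))   # steering vector at test point
        g = g / np.linalg.norm(g)
        P[i] = 1.0 / np.linalg.norm(g - Us @ (Us.conj().T @ g)) ** 2  # peaks at scatterers
    ```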

  9. A library least-squares approach for scatter correction in gamma-ray tomography

    Science.gov (United States)

    Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro

    2015-03-01

    Scattered radiation is known to lead to distortion of reconstructed images in computed tomography (CT). The effects of scattered radiation are especially pronounced in non-scanning, multiple-source systems, which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for it. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of transmission and scatter components in any given detector in the UoB gamma-ray tomography system.
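
    The library least-squares idea reduces to a small linear fit per detector: a measured spectrum is modelled as a non-negative mixture of pre-characterized library spectra, and the scatter component is then subtracted. A minimal sketch, with entirely hypothetical library shapes standing in for measured libraries:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    ch = np.arange(256)
    lib_trans = np.exp(-0.5 * ((ch - 170) / 6.0) ** 2)    # photopeak-like library
    lib_scat = np.exp(-ch / 60.0) * (ch > 20)             # broad downscatter library
    A = np.column_stack([lib_trans, lib_scat])

    rng = np.random.default_rng(1)
    true = 800 * lib_trans + 300 * lib_scat
    y = rng.poisson(true).astype(float)                   # "measured" spectrum

    coef, _ = nnls(A, y)                                  # non-negative LS fit
    y_corrected = y - coef[1] * lib_scat                  # scatter-corrected spectrum
    print(f"transmission weight {coef[0]:.0f}, scatter weight {coef[1]:.0f}")
    ```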

  10. TPC cross-talk correction: CERN-Dubna-Milano algorithm and results

    CERN Document Server

    De Min, A; Guskov, A; Krasnoperov, A; Nefedov, Y; Zhemchugov, A

    2003-01-01

    The CDM (CERN-Dubna-Milano) algorithm for TPC Xtalk correction is presented and discussed in detail. It is a data-driven, model-independent approach to the problem of Xtalk correction. It accounts for arbitrary amplitudes and pulse shapes of signals, and corrects (almost) all generations of Xtalk, with a view to handling (almost) correctly even complex multi-track events. Results on preamp amplification and preamp linearity from the analysis of test-charge injection data of all six TPC sectors are presented. The minimal expected error on the measurement of signal charges in the TPC is discussed. Results are given on the application of the CDM Xtalk correction to test-charge events and krypton events.

  11. QED corrections in deep-inelastic scattering from tensor polarized deuteron target

    CERN Document Server

    Gakh, G I

    2001-01-01

    QED corrections to deep-inelastic scattering from a tensor-polarized deuteron target are considered. The calculations are based on the covariant parametrization of the deuteron quadrupole polarization tensor. The Drell-Yan representation of electrodynamics is used to describe the radiation of real and virtual particles

  12. Coulomb corrections to nuclear scattering lengths and effective ranges for weakly bound systems

    International Nuclear Information System (INIS)

    Mur, V.D.; Popov, V.S.; Sergeev, A.V.

    1996-01-01

    A procedure is considered for extracting the purely nuclear scattering length a_s and effective range r_s (which correspond to a strong-interaction potential V_s with the Coulomb interaction disregarded) from the experimentally determined nuclear quantities a_cs and r_cs, which are modified by the Coulomb interaction. The Coulomb renormalization of a_s and r_s is especially strong if the system under study involves a level with energy close to zero (on the nuclear scale). This applies to the formulas that determine the Coulomb renormalization of the low-energy parameters of s-wave scattering (l = 0). Detailed numerical calculations are performed for the coefficients appearing in the equations that determine the Coulomb corrections for various models of the potential V_s(r). This makes it possible to draw qualitative conclusions about the dependence of the Coulomb corrections on the form of the strong-interaction potential and, in particular, on its small-distance behavior. A considerable enhancement of the Coulomb corrections to the effective range r_s is found for potentials with a barrier

  13. Fast sampling algorithm for the simulation of photon Compton scattering

    International Nuclear Information System (INIS)

    Brusa, D.; Salvat, F.

    1996-01-01

    A simple algorithm for the simulation of Compton interactions of unpolarized photons is described. The energy and direction of the scattered photon, as well as the active atomic electron shell, are sampled from the double-differential cross section obtained by Ribberfors from the relativistic impulse approximation. The algorithm consistently accounts for Doppler broadening and electron binding effects. Simplifications of Ribberfors' formula, required for efficient random sampling, are discussed. The algorithm involves a combination of inverse transform, composition and rejection methods. A parameterization of the Compton profile is proposed from which the simulation of Compton events can be performed analytically in terms of a few parameters that characterize the target atom, namely shell ionization energies, occupation numbers and maximum values of the one-electron Compton profiles. (orig.)
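
    The free-electron Klein-Nishina core of such a sampler fits in a few lines. The sketch below samples the scattered-energy fraction by plain rejection; it deliberately omits the Doppler broadening and binding effects that the paper's full algorithm folds in via Compton profiles, so it is an illustration of the sampling pattern, not the published algorithm.

    ```python
    import numpy as np

    def sample_compton(k, rng):
        """Sample eps = E'/E from the free-electron Klein-Nishina cross
        section by rejection. k is the incident photon energy in units of
        m_e c^2. (Doppler broadening via Compton profiles is omitted.)"""
        eps_min = 1.0 / (1.0 + 2.0 * k)
        gmax = eps_min + 1.0 / eps_min            # bound on (eps + 1/eps - sin^2)
        while True:
            eps = eps_min + (1.0 - eps_min) * rng.random()
            cos_t = 1.0 - (1.0 / eps - 1.0) / k   # Compton kinematics: eps <-> angle
            sin2 = 1.0 - cos_t * cos_t
            g = eps + 1.0 / eps - sin2            # proportional to dsigma/deps
            if rng.random() * gmax <= g:
                return eps, cos_t

    rng = np.random.default_rng(0)
    samples = [sample_compton(1.0, rng)[0] for _ in range(10000)]  # 511 keV photons
    print("mean scattered energy fraction:", np.mean(samples))
    ```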

  14. Aethalometer multiple scattering correction Cref for mineral dust aerosols

    Science.gov (United States)

    Di Biagio, Claudia; Formenti, Paola; Cazaunau, Mathieu; Pangui, Edouard; Marchand, Nicolas; Doussin, Jean-François

    2017-08-01

    In this study we provide a first estimate of the Aethalometer multiple scattering correction Cref for mineral dust aerosols. Cref is an empirical constant used to correct aerosol absorption coefficient measurements for the multiple scattering artefact of the Aethalometer; i.e. the filter fibres on which aerosols are deposited scatter light, and this is miscounted as absorption. The Cref at 450 and 660 nm was obtained from the direct comparison of Aethalometer data (Magee Sci. AE31) with (i) the absorption coefficient calculated as the difference between the extinction and scattering coefficients measured by a Cavity Attenuated Phase Shift Extinction analyser (CAPS PMex) and a nephelometer, respectively, at 450 nm and (ii) the absorption coefficient from a MAAP (Multi-Angle Absorption Photometer) at 660 nm. Measurements were performed on seven dust aerosol samples generated in the laboratory by the mechanical shaking of natural parent soils from different source regions worldwide. The single scattering albedo (SSA) at 450 and 660 nm and the size distribution of the aerosols were also measured. Cref for mineral dust varies between 1.81 and 2.56 for an SSA of 0.85-0.96 at 450 nm and between 1.75 and 2.28 for an SSA of 0.98-0.99 at 660 nm. The calculated mean for dust is 2.09 (±0.22) at 450 nm and 1.92 (±0.17) at 660 nm. With this new Cref the dust absorption coefficient from the Aethalometer is about 2 % (450 nm) and 11 % (660 nm) higher than that obtained using Cref = 2.14 at both 450 and 660 nm, as usually assumed in the literature. This difference induces a change of up to 3 % in the dust SSA at 660 nm. Cref seems to be independent of the fine and coarse particle size fractions, so the obtained Cref can be applied to dust both close to sources and following transport. Additional experiments performed with pure kaolinite minerals and polluted ambient aerosols indicate Cref of 2.49 (±0.02) and 2.32 (±0.01) at 450 and 660 nm (SSA = 0.96-0.97) for

  15. New resonance cross section calculational algorithms

    International Nuclear Information System (INIS)

    Mathews, D.R.

    1978-01-01

    Improved resonance cross section calculational algorithms were developed and tested for inclusion in a fast reactor version of the MICROX code. The resonance-energy portion of the MICROX code solves the neutron slowing-down equations for a two-region lattice cell on a very detailed energy grid (about 14,500 energies). In the MICROX algorithms, the exact P0 elastic scattering kernels are replaced by synthetic (approximate) elastic scattering kernels which permit an efficient and numerically stable recursion-relation solution of the slowing-down equation. In the work described here, the MICROX algorithms were modified as follows: an additional delta-function term was included in the P0 synthetic scattering kernel. The additional delta-function term allows one more moment of the exact elastic scattering kernel to be preserved without much extra computational effort. With the improved synthetic scattering kernel, the flux returns more closely to the exact flux below a resonance than with the original MICROX kernel. The slowing-down calculation was extended to a true B1 hyperfine-energy-grid calculation in each region by using P1 synthetic scattering kernels and transport-corrected P0 collision probabilities to couple the two regions. 1 figure, 6 tables

  16. Effect of scatter correction on quantification of myocardial SPECT and application to dual-energy acquisition using triple-energy window method

    International Nuclear Information System (INIS)

    Nakajima, Kenichi; Matsudaira, Masamichi; Yamada, Masato; Taki, Junichi; Tonami, Norihisa; Hisada, Kinichi

    1995-01-01

    The triple-energy window (TEW) method is a simple and practical approach for correcting Compton scatter in single-photon emission tracer studies. The scatter fraction, measured with a point source or a 30-ml syringe placed under the camera, was determined by the TEW method: 55% for 201Tl, 29% for 99mTc and 57% for 123I. Composite energy spectra were generated and separated by the TEW method. The combination of 99mTc and 201Tl was well separated, and 201Tl and 123I were separated within an error of 10%, whereas an asymmetric photopeak energy window was necessary for separating 123I and 99mTc. Applying this method to myocardial SPECT, the effect of scatter elimination was investigated in each myocardial wall by polar map and profile curve analysis. The effect of scatter was higher in the septum and the inferior wall. The count ratio relative to the anterior wall including scatter was 9% higher for 123I, 7-8% higher for 99mTc and 6% higher for 201Tl. The apparent count loss after scatter correction was 30% for 123I, 13% for 99mTc and 38% for 201Tl. Image contrast, defined as the myocardium-to-left-ventricular-cavity count ratio, improved with scatter correction. Since the influence of Compton scatter is significant in cardiac planar and SPECT studies, the degree of scatter fraction should be kept in mind both in quantification and in visual interpretation. (author)
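
    The TEW estimate itself is a one-line trapezoid rule per pixel or projection bin: counts in two narrow windows flanking the photopeak approximate the scatter spectrum under the peak. A minimal sketch, with window widths chosen as plausible (not source-specified) values for a 99mTc acquisition:

    ```python
    import numpy as np

    w_main, w_sub = 20.0, 3.0                 # keV: photopeak and sub-window widths

    def tew_primary(c_main, c_low, c_high):
        """Return scatter-corrected (primary) counts for one pixel/bin."""
        scatter = (c_low / w_sub + c_high / w_sub) * w_main / 2.0
        return np.maximum(c_main - scatter, 0.0)

    # e.g. 1000 counts in the photopeak, 45 and 15 in the flanking windows:
    print(tew_primary(1000.0, 45.0, 15.0))    # -> 800.0 primary counts
    ```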

  17. A motion correction algorithm for an image realignment programme useful for sequential radionuclide renography

    International Nuclear Information System (INIS)

    De Agostini, A.; Moretti, R.; Belletti, S.; Maira, G.; Magri, G.C.; Bestagno, M.

    1992-01-01

    The correction of organ movements in sequential radionuclide renography was performed using an iterative algorithm that, by means of a set of rectangular regions of interest (ROIs), does not require any anatomical marker or manual processing of frames. The realignment programme proposed here is largely independent of the spatial and temporal distribution of activity and analyses rotational movement in a simplified but reliable way. The position of the object inside a frame is evaluated by choosing the best ROI from a set of ROIs shifted 1 pixel around the central one. Statistical tests have to be fulfilled before the algorithm activates the realignment procedure. The algorithm was validated for different acquisition set-ups and organ movements. Results, summarized in Table 1, show that in about 90% of the simulated experiments the algorithm is able to correct the movements of the object with a maximum error of 1 pixel or less. The usefulness of the realignment programme was demonstrated with sequential radionuclide renography as a typical clinical application. The algorithm-corrected curves of a 1-year-old patient were completely different from those obtained without a motion correction procedure. The algorithm may also be applicable to other types of scintigraphic examination, besides functional imaging, in which realignment of the frames of a dynamic sequence is an intrinsic demand. (orig.)

  18. A DSP-based neural network non-uniformity correction algorithm for IRFPA

    Science.gov (United States)

    Liu, Chong-liang; Jin, Wei-qi; Cao, Yang; Liu, Xiu

    2009-07-01

    An effective neural-network non-uniformity correction (NUC) algorithm based on a DSP is proposed in this paper. The non-uniform response of infrared focal plane array (IRFPA) detectors produces corrupted images with fixed-pattern noise (FPN). We introduce and analyze the artificial neural network scene-based non-uniformity correction (SBNUC) algorithm. A DSP-based NUC development platform for IRFPA is described. The DSP hardware platform has low power consumption, with a 32-bit fixed-point DSP TMS320DM643 as the kernel processor. The dependability and extensibility of the software have been improved by the DSP/BIOS real-time operating system and Reference Framework 5. In order to achieve real-time performance, the calibration-parameter update is set at a lower task priority than video input and output in DSP/BIOS; in this way, updating the calibration parameters does not affect the video streams. The workflow of the system and the strategy for real-time operation are introduced. Experiments on real infrared imaging sequences demonstrate that this algorithm requires only a few frames to obtain high-quality corrections. It is computationally efficient and suitable for all kinds of non-uniformity.
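
    The classic neural-network SBNUC update (Scribner-style, used here as a hedged stand-in since the abstract does not spell out its exact variant) is a per-pixel LMS rule: each pixel's gain and offset are nudged so its corrected output tracks a local spatial average. A minimal sketch with illustrative sizes and learning rate:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def sbnuc_step(frame, gain, offset, eta=1e-5):
        """One LMS update; eta must satisfy the LMS stability bound."""
        corrected = gain * frame + offset
        desired = uniform_filter(corrected, size=3)   # 3x3 neighbourhood mean
        err = corrected - desired
        gain -= eta * err * frame                     # per-pixel weight updates
        offset -= eta * err
        return corrected, gain, offset

    H, W = 240, 320
    gain, offset = np.ones((H, W)), np.zeros((H, W))
    rng = np.random.default_rng(0)
    fpn_g = 1 + 0.1 * rng.normal(size=(H, W))         # simulated gain FPN
    fpn_o = 5 * rng.normal(size=(H, W))               # simulated offset FPN
    pattern = 100 + 50 * np.sin(np.linspace(0, 8, W))[None, :] * np.ones((H, 1))
    for t in range(200):                              # scene motion drives learning
        scene = np.roll(pattern, t, axis=1)
        corrected, gain, offset = sbnuc_step(fpn_g * scene + fpn_o, gain, offset)
    ```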

  19. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    Science.gov (United States)

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite-sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.

  20. In-medium effects in K+ scattering versus Glauber model with noneikonal corrections

    International Nuclear Information System (INIS)

    Eliseev, S.M.; Rihan, T.H.

    1996-01-01

    The discrepancy between the experimental and theoretical ratio R of the total cross sections, R = σ(K⁺-¹²C)/6σ(K⁺-d), at momenta up to 800 MeV/c is discussed in the framework of the Glauber multiple scattering approach. It is shown that various corrections, such as adopting relativistic K⁺-N amplitudes as well as noneikonal corrections, seem to fail in reproducing the experimental data, especially at higher momenta. 17 refs., 1 fig

  1. The generation algorithm of arbitrary polygon animation based on dynamic correction

    Directory of Open Access Journals (Sweden)

    Hou Ya Wei

    2016-01-01

    This paper proposes a method, based on key-frame polygon sequences, that uses dynamic correction to generate continuous animation. First, quadratic Bezier curves are used to interpolate the corresponding edge vectors between consecutive frames of the polygon sequence, achieving continuity of the animation. Then, according to the characteristics of Bezier curves, the interpolation parameters are adjusted dynamically to smooth the changes. Meanwhile, the Lagrange multiplier method is used to correct the interpolated polygon so that it closes. Finally, the concrete algorithm flow is provided and numerical experiment results are presented. The experimental results show that the algorithm achieves excellent visual results.
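
    A minimal sketch of the two key steps, under assumptions not taken from the paper (a simple choice of Bezier control edges, and closure via the constrained least-squares problem min Σ||d'_i - d_i||² s.t. Σ d'_i = 0, whose Lagrange-multiplier solution is simply to subtract the mean edge vector):

    ```python
    import numpy as np

    def edges(P):                      # P: (n, 2) vertices of a closed polygon
        return np.roll(P, -1, axis=0) - P

    def interpolate(P0, P1, t, ctrl=None):
        e0, e1 = edges(P0), edges(P1)
        if ctrl is None:
            ctrl = 0.5 * (e0 + e1)     # control edges; a free design choice
        # Quadratic Bezier blend of corresponding edge vectors:
        d = (1 - t) ** 2 * e0 + 2 * (1 - t) * t * ctrl + t ** 2 * e1
        d -= d.mean(axis=0)            # Lagrange-multiplier closure correction
        start = (1 - t) * P0[0] + t * P1[0]
        return start + np.vstack([np.zeros(2), np.cumsum(d, axis=0)[:-1]])

    square = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], float)
    diamond = np.array([[1, -1], [3, 1], [1, 3], [-1, 1]], float)
    mid = interpolate(square, diamond, 0.5)   # a closed in-between polygon
    ```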

  2. Quantification of myocardial perfusion SPECT for the assessment of coronary artery disease: should we apply scatter correction?

    International Nuclear Information System (INIS)

    Hambye, A.S.; Vervaet, A.; Dobbeleir, A.

    2002-01-01

    Compared with other non-invasive tests for CAD diagnosis, myocardial perfusion imaging (MPI) is considered a very sensitive method whose accuracy is, however, often diminished by a certain lack of specificity, especially in patients with a small heart. With gated SPECT MPI, use of end-diastolic instead of summed images has been presented as an interesting approach for increasing specificity. Since scatter correction is reported to improve image contrast, it might constitute another way to improve MPI accuracy. We aimed to compare the value of both approaches, separate or combined, for CAD diagnosis. Methods. One hundred patients referred for gated 99mTc-sestamibi SPECT MPI were prospectively included (Group A). Thirty-five had an end-systolic volume <30 ml by QGS analysis (Group B). All had a coronary angiogram within 3 months of the MPI. Four polar maps (non-corrected and scatter-corrected summed, and non-corrected and scatter-corrected end-diastolic) were created to quantify the extent (EXT) and severity (TDS) of any perfusion defects. ROC-curve analysis was applied to define the optimal thresholds of EXT and TDS separating non-CAD from CAD patients, using a 50% stenosis on the coronary angiogram as the cutoff for disease positivity. Results. Significant CAD was present in 86 patients (25 in Group B). In Group A, assessment of the EXT and TDS of perfusion defects on scatter-corrected summed images demonstrated the highest accuracy (76% for EXT; sens: 77%; spec: 71%, and 74% for TDS, sens: 73%, spec: 79%). The accuracy of EXT and TDS calculated from the other data sets was slightly but not significantly lower, mainly because of a lower sensitivity. As a comparison, visual analysis was 90% accurate for the diagnosis of CAD (sens: 94%, spec: 64%). In Group B, overall results were worse, mainly due to a decreased sensitivity, with accuracies ranging between 51 and 63%. Again, the scatter-corrected summed data were the most accurate (EXT: 60%, TDS: 63%, visual

  3. A systematic approach to robust preconditioning for gradient-based inverse scattering algorithms

    International Nuclear Information System (INIS)

    Nordebo, Sven; Fhager, Andreas; Persson, Mikael; Gustafsson, Mats

    2008-01-01

    This paper presents a systematic approach to robust preconditioning for gradient-based nonlinear inverse scattering algorithms. In particular, one- and two-dimensional inverse problems are considered where the permittivity and conductivity profiles are unknown and the input data consist of the scattered field over a certain bandwidth. A time-domain least-squares formulation is employed and the inversion algorithm is based on a conjugate gradient or quasi-Newton algorithm together with an FDTD electromagnetic solver. A Fisher information analysis is used to estimate the Hessian of the error functional. A robust preconditioner is then obtained by incorporating a parameter scaling such that the scaled Fisher information has a unit diagonal. By improving the conditioning of the Hessian, the convergence rate of the conjugate gradient or quasi-Newton methods is improved. The preconditioner is robust in the sense that the scaling, i.e. the diagonal Fisher information, is virtually invariant to the numerical resolution and the discretization model that is employed. Numerical examples of image reconstruction are included to illustrate the efficiency of the proposed technique
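
    The scaling step can be illustrated in a few lines: estimate the Fisher information F = J^T J from a sensitivity (Jacobian) matrix, then rescale each parameter by 1/sqrt(F_ii) so the scaled Fisher matrix has a unit diagonal. The sketch below uses a random matrix as a stand-in for the FDTD sensitivities, which is an assumption for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    J = rng.normal(size=(400, 50)) * np.logspace(0, -3, 50)  # badly scaled columns

    F = J.T @ J                                  # Fisher information estimate
    s = 1.0 / np.sqrt(np.diag(F))                # per-parameter scaling
    F_scaled = s[:, None] * F * s[None, :]       # unit diagonal by construction

    print(f"cond(F) = {np.linalg.cond(F):.2e}  -->  "
          f"cond(F_scaled) = {np.linalg.cond(F_scaled):.2e}")
    # The improved conditioning is what speeds up CG/quasi-Newton convergence.
    ```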

  4. Effect of scatter and attenuation correction in ROI analysis of brain perfusion scintigraphy. Phantom experiment and clinical study in patients with unilateral cerebrovascular disease

    Energy Technology Data Exchange (ETDEWEB)

    Bai, J. [Keio Univ., Tokyo (Japan). 21st Century Center of Excellence Program; Hashimoto, J.; Kubo, A. [Keio Univ., Tokyo (Japan). Dept. of Radiology; Ogawa, K. [Hosei Univ., Tokyo (Japan). Dept. of Electronic Informatics; Fukunaga, A.; Onozuka, S. [Keio Univ., Tokyo (Japan). Dept. of Neurosurgery

    2007-07-01

    The aim of this study was to evaluate the effect of scatter and attenuation correction in region-of-interest (ROI) analysis of brain perfusion single-photon emission tomography (SPECT), and to assess the influence of the choice of reference area on the calculation of lesion-to-reference count ratios. Patients, methods: Data were collected from a brain phantom and ten patients with unilateral internal carotid artery stenosis. A simultaneous emission and transmission scan was performed after injecting 123I-iodoamphetamine. We reconstructed three SPECT images from common projection data: with scatter correction and non-uniform attenuation correction, with scatter correction and uniform attenuation correction, and with uniform attenuation correction applied to data without scatter correction. Regional count ratios were calculated using four different reference areas (contralateral intact side, ipsilateral cerebellum, whole brain and hemisphere). Results: Scatter correction improved the accuracy of the count-ratio measurements in the phantom experiment. It also yielded marked differences in the count ratios in the clinical study when the cerebellum, whole brain or hemisphere was used as the reference. The difference between non-uniform and uniform attenuation correction was not significant in the phantom and clinical studies except when the cerebellar reference was used. Calculation of lesion-to-normal count ratios referenced to the same site in the contralateral hemisphere did not depend on the use of scatter correction or transmission-scan-based attenuation correction. Conclusion: Scatter correction was indispensable for accurate measurement in most of the ROI analyses. Non-uniform attenuation correction is not necessary when using a reference area other than the cerebellum. (orig.)

  5. Two dimensional spatial distortion correction algorithm for scintillation GAMMA cameras

    International Nuclear Information System (INIS)

    Chaney, R.; Gray, E.; Jih, F.; King, S.E.; Lim, C.B.

    1985-01-01

    Spatial distortion in an Anger gamma camera originates fundamentally from the discrete nature of scintillation-light sampling with an array of PMTs. Historically, digital distortion correction started with distortion measurement using a 1-D slit pattern and subsequent on-line bilinear approximation with 64 x 64 look-up tables for X and Y. However, the X, Y distortions are inherently two-dimensional, and the validity of this 1-D calibration method becomes questionable as distortion amplitudes increase with the effort to obtain better spatial and energy resolution. The authors have developed a new, accurate 2-D correction algorithm. This method involves the steps of data collection from a 2-D orthogonal hole pattern, 2-D distortion-vector measurement, 2-D Lagrangian polynomial interpolation, and transformation to the X, Y ADC frame. The impact of the numerical precision used in the correction and the accuracy of the bilinear approximation with varying look-up table size were carefully examined through computer simulation, using the measured single-PMT light response function together with the Anger positioning logic. The accuracy of different-order Lagrangian polynomial interpolations for expanding the correction tables from the hole centroids was also investigated. The detailed algorithm and computer simulations are presented along with camera test results
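
    The run-time stage of such a scheme is a look-up table of distortion vectors sampled on a coarse grid, bilinearly interpolated for each event. A minimal sketch with a synthetic distortion field standing in for calibration data:

    ```python
    import numpy as np

    N, SPAN = 64, 4096                        # table size, ADC range
    yy, xx = np.mgrid[0:N, 0:N] * (SPAN / (N - 1))
    lut_dx = 30 * np.sin(2 * np.pi * xx / SPAN)      # synthetic distortion field
    lut_dy = 30 * np.cos(2 * np.pi * yy / SPAN)

    def correct_event(x, y):
        """Correct one event's ADC position with bilinear LUT interpolation."""
        gx, gy = x / (SPAN / (N - 1)), y / (SPAN / (N - 1))
        i, j = int(min(gy, N - 2)), int(min(gx, N - 2))
        fy, fx = gy - i, gx - j               # fractional position in the cell
        def bilin(T):
            return ((1 - fy) * ((1 - fx) * T[i, j] + fx * T[i, j + 1])
                    + fy * ((1 - fx) * T[i + 1, j] + fx * T[i + 1, j + 1]))
        return x + bilin(lut_dx), y + bilin(lut_dy)

    print(correct_event(1234.0, 2345.0))      # corrected event position
    ```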

  6. A method and algorithm for correlating scattered light and suspended particles in polluted water

    International Nuclear Information System (INIS)

    Sami Gumaan Daraigan; Mohd Zubir Matjafri; Khiruddin Abdullah; Azlan Abdul Aziz; Abdul Aziz Tajuddin; Mohd Firdaus Othman

    2005-01-01

    An optical model has been developed for measuring total suspended solids (TSS) concentrations in water. The approach is based on the characteristics of light scattered from the suspended particles in water samples. An optical sensor system (an active spectrometer) has been developed to correlate pollutant (TSS) concentration with the scattered radiation. Scattered light was measured in terms of the output voltage of the phototransistor of the sensor system. The developed algorithm was used to estimate the concentrations of the polluted water samples. The proposed algorithm was calibrated using the observed readings. The results display a strong correlation between the radiation values and the total suspended solids concentrations. The proposed system yields a high degree of accuracy, with a correlation coefficient (R) of 0.99 and a root-mean-square error (RMS) of 63.57 mg/l. (Author)
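
    The calibration step amounts to a regression of sensor voltage against known concentrations, which is then inverted for unknown samples. A minimal sketch with hypothetical calibration data (the paper's actual readings are not reproduced here):

    ```python
    import numpy as np

    voltage = np.array([0.21, 0.45, 0.80, 1.32, 1.90, 2.60])   # sensor output, V
    tss = np.array([50.0, 150, 300, 520, 760, 1050])           # mg/l, known samples

    slope, intercept = np.polyfit(voltage, tss, 1)             # least-squares line
    pred = slope * voltage + intercept
    r = np.corrcoef(tss, pred)[0, 1]
    rmse = np.sqrt(np.mean((tss - pred) ** 2))
    print(f"TSS = {slope:.1f} * V + {intercept:.1f}  (R = {r:.3f}, RMSE = {rmse:.1f} mg/l)")
    print("unknown sample at 1.10 V ->", slope * 1.10 + intercept, "mg/l")
    ```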

  7. Evaluation of scatter limitation correction: a new method of correcting photopenic artifacts caused by patient motion during whole-body PET/CT imaging.

    Science.gov (United States)

    Miwa, Kenta; Umeda, Takuro; Murata, Taisuke; Wagatsuma, Kei; Miyaji, Noriaki; Terauchi, Takashi; Koizumi, Mitsuru; Sasaki, Masayuki

    2016-02-01

    Overcorrection of scatter caused by patient motion during whole-body PET/computed tomography (CT) imaging can induce the appearance of photopenic artifacts in the PET images. The present study aimed to quantify the accuracy of scatter limitation correction (SLC) for eliminating photopenic artifacts. This study analyzed photopenic artifacts in 18F-fluorodeoxyglucose (18F-FDG) PET/CT images acquired from 12 patients and from a National Electrical Manufacturers Association phantom with two peripheral plastic bottles that simulated the human body and arms, respectively. The phantom comprised a sphere (diameter, 10 or 37 mm) containing fluorine-18 solutions with target-to-background ratios of 2, 4, and 8. The plastic bottles were moved 10 cm posteriorly between the CT and PET acquisitions. All PET data were reconstructed using model-based scatter correction (SC), no scatter correction (NSC), and SLC, and the presence or absence of artifacts in the PET images was visually evaluated. The SC and SLC images were also semiquantitatively evaluated using standardized uptake values (SUVs). Photopenic artifacts were not recognizable in any NSC or SLC images from the 12 patients in the clinical study. The SUVmax of mismatched SLC PET/CT images were almost equal to those of matched SC and SLC PET/CT images. Applying NSC and SLC substantially eliminated the photopenic artifacts seen on SC PET images in the phantom study. SLC improved the activity concentration of the sphere for all target-to-background ratios. The highest % errors of the 10 and 37 mm spheres were 93.3 and 58.3%, respectively, for mismatched SC, and 73.2 and 22.0%, respectively, for mismatched SLC. Photopenic artifacts caused by SC errors induced by CT and PET image misalignment were corrected using SLC, indicating that this method is useful and practical for clinical qualitative and quantitative PET/CT assessment.

  8. Hybrid wavefront sensing and image correction algorithm for imaging through turbulent media

    Science.gov (United States)

    Wu, Chensheng; Robertson Rzasa, John; Ko, Jonathan; Davis, Christopher C.

    2017-09-01

    It is well known that passive image correction of turbulence distortions often involves geometry-dependent deconvolution algorithms. On the other hand, active imaging techniques using adaptive-optic correction should use the distorted wavefront information for guidance. Our work shows that a hybrid hardware-software approach makes it possible to obtain accurate and highly detailed images through turbulent media, and the processing algorithm requires far fewer iteration steps than conventional image processing algorithms. In our proposed approach, a plenoptic sensor is used as a wavefront sensor to guide post-stage image correction on a high-definition zoomable camera. Conversely, we show that given the ground truth of the highly detailed image and the plenoptic imaging result, we can generate an accurate prediction of the blurred image on a traditional zoomable camera. Similarly, the ground truth combined with the blurred image from the zoomable camera provides the wavefront conditions. In application, our hybrid approach can be used as an effective way to conduct object recognition in a turbulent environment where the target has been significantly distorted or is even unrecognizable.

  9. The integration of improved Monte Carlo compton scattering algorithms into the Integrated TIGER Series

    International Nuclear Information System (INIS)

    Quirk, Thomas J. IV

    2004-01-01

    The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross sections (which assume a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
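
    As a point of reference for the sampling step described above, the sketch below draws scattered-photon energies from the plain free-electron Klein-Nishina distribution by rejection; the function name and the rejection bound are ours, and the binding and Doppler effects that the impulse approximation adds are deliberately omitted.

```python
import numpy as np

def sample_compton(E_mev, rng=None):
    """Sample the scattered-photon energy fraction eps = E'/E from the
    free-electron Klein-Nishina distribution by rejection sampling
    (a minimal sketch; binding and Doppler effects are omitted)."""
    if rng is None:
        rng = np.random.default_rng()
    alpha = E_mev / 0.511                  # energy in electron rest-mass units
    eps_min = 1.0 / (1.0 + 2.0 * alpha)    # backscatter (180 degree) limit
    # dsigma/deps is proportional to f(eps) = eps + 1/eps - sin^2(theta),
    # with cos(theta) fixed by Compton kinematics for each eps.
    f_bound = 1.0 + 1.0 / eps_min          # upper bound of f on [eps_min, 1]
    while True:
        eps = rng.uniform(eps_min, 1.0)
        cos_t = 1.0 + 1.0 / alpha - 1.0 / (alpha * eps)
        f = eps + 1.0 / eps - (1.0 - cos_t ** 2)
        if rng.uniform(0.0, f_bound) <= f:
            return eps * E_mev, cos_t      # scattered energy (MeV) and angle

energies = [sample_compton(1.0)[0] for _ in range(5000)]
print(np.mean(energies))                   # mean scattered energy, ~0.6 MeV
```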

  10. Mathematical model of rhodium self-powered detectors and algorithms for correction of their time delay

    International Nuclear Information System (INIS)

    Bur'yan, V.I.; Kozlova, L.V.; Kuzhil', A.S.; Shikalov, V.F.

    2005-01-01

    The development of algorithms for correcting the inertia of self-powered neutron detectors (SPNDs) is motivated by the need to increase the response speed of in-core instrumentation systems (ICIS). A faster ICIS response will permit real-time monitoring of fast transient processes in the core and, in the longer term, the use of rhodium SPND signals for emergency protection functions based on local parameters. In this paper it is proposed to use a mathematical model, in integral form, of neutron flux measurement by SPNDs as the basis for the correction algorithms. This approach is particularly convenient for constructing recurrent flux-estimation algorithms. The results of a comparison of neutron flux and reactivity estimates from ionization chamber readings and from SPND signals corrected by the proposed algorithms are presented
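
    The record does not reproduce the authors' integral model or recurrent algorithm, so the following is only a minimal sketch of a recurrent lag inversion for a single delayed emission group; the time constant (roughly the Rh-104 mean life) and the prompt fraction are illustrative placeholders, not the paper's parameters.

```python
import numpy as np

def correct_spnd_lag(current, dt, tau=61.0, prompt_fraction=0.07):
    """Recurrent flux estimate for a rhodium SPND modelled as a prompt
    component plus ONE delayed emission group with mean life tau (s);
    both constants are illustrative only."""
    beta = np.exp(-dt / tau)
    p = prompt_fraction
    flux = np.empty_like(current, dtype=float)
    delayed = (1.0 - p) * current[0]       # assume steady state before t=0
    flux[0] = current[0]
    for k in range(1, len(current)):
        # Delayed component follows the flux through a first-order lag:
        #   d_k = beta*d_{k-1} + (1-beta)*(1-p)*phi_k
        # Invert the measurement model I_k = p*phi_k + d_k for phi_k:
        d_prev = beta * delayed
        flux[k] = (current[k] - d_prev) / (p + (1.0 - beta) * (1.0 - p))
        delayed = d_prev + (1.0 - beta) * (1.0 - p) * flux[k]
    return flux

# Simulate a sluggish current from a step change in flux, then invert it
true_flux = np.concatenate([np.ones(30), 2.0 * np.ones(60)])
beta, p = np.exp(-1.0 / 61.0), 0.07
d, current = (1 - p) * true_flux[0], []
for phi in true_flux:
    d = beta * d + (1 - beta) * (1 - p) * phi
    current.append(p * phi + d)
print(correct_spnd_lag(np.array(current), dt=1.0)[-1])   # ~2.0, lag removed
```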

  11. The whole space three-dimensional magnetotelluric inversion algorithm with static shift correction

    Science.gov (United States)

    Zhang, K.

    2016-12-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high-frequency range, and the correction is completed within the inversion. The method is an automatic computer processing technique at no additional cost, and it avoids extra field work and indoor processing while giving good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure to the algorithm, improved its computational efficiency, reduced its memory requirements and added topographic and marine factors, so the 3D inversion can run on a general PC with high efficiency and accuracy. All MT data from surface stations, seabed stations and underground stations can be used in the inversion algorithm. The verification and application example of the 3D inversion algorithm is shown in Figure 1. From the comparison in Figure 1, the inversion model reflects all the anomalous bodies and the terrain clearly regardless of the type of data (impedance/tipper/impedance and tipper), and the resolution of the bodies' boundaries can be improved by using tipper data. The algorithm is very effective for terrain inversion, so it is very useful for the study of the continental shelf with continuous exploration of land, marine and underground domains. The three-dimensional electrical model of the ore zone reflects the basic information of the strata, rocks and structure. Although it cannot indicate the ore body position directly, important clues are provided for prospecting work by the delineation of the diorite pluton uplift range. The test results show that the high quality of
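
    The NLCG machinery referred to above is generic; a bare-bones sketch of a Polak-Ribiere NLCG loop, with a fixed step standing in for the line search and preconditioning that a real MT inversion would use, is:

```python
import numpy as np

def nlcg_minimize(grad, x0, n_iter=50, step=0.1):
    """Minimal Polak-Ribiere nonlinear conjugate gradient loop of the
    kind used in MT inversion (fixed step length for brevity)."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(n_iter):
        x = x + step * d                   # model update along search direction
        g_new = grad(x)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ formula
        d = -g_new + beta * d
        g = g_new
    return x

# Toy quadratic misfit with minimum at (1, 2)
grad = lambda x: 2.0 * (x - np.array([1.0, 2.0]))
print(nlcg_minimize(grad, np.zeros(2)))    # converges to ~(1, 2)
```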

  12. Poster – 02: Positron Emission Tomography (PET) Imaging Reconstruction using higher order Scattered Photon Coincidences

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Hongwei; Pistorius, Stephen [Department of Physics and Astronomy, University of Manitoba, CancerCare, Manitoba (Canada)

    2016-08-15

    PET images are affected by the presence of scattered photons, and incorrect scatter correction may cause artifacts, particularly in 3D PET systems. Current scatter reconstruction methods do not distinguish between single and higher-order scattered photons. A dual-scattered reconstruction method (GDS-MLEM) that is independent of the number of Compton scattering interactions and less sensitive to the need for high-energy-resolution detectors is proposed. To avoid overcorrecting for scattered coincidences, the attenuation coefficient was calculated by integrating the differential Klein-Nishina cross-section over a restricted energy range, accounting only for scattered photons that were not detected. The optimum image can be selected by choosing an energy threshold, which is the upper energy limit for the calculation of the cross-section and the lower limit for scattered photons in the reconstruction. Data were simulated using the GATE platform. 500,000 multiple-scattered photon coincidences with perfect energy resolution were reconstructed using various methods. The GDS-MLEM algorithm had the highest confidence (98%) in locating the annihilation position and was capable of reconstructing the two largest hot regions. 100,000 photon coincidences, with a scatter fraction of 40%, were used to test the energy resolution dependence of the different algorithms. With a 350–650 keV energy window and the restricted attenuation correction model, the GDS-MLEM algorithm was able to improve contrast recovery and reduce the noise by 7.56%–13.24% and 12.4%–24.03%, respectively. This approach is less sensitive to the energy resolution and shows promise if detector energy resolutions of 12% can be achieved.
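
    A minimal sketch of the restricted cross-section idea, assuming free electrons and our own function names: integrate the differential Klein-Nishina cross section only over scattered energies that fall below the acquisition window and therefore go undetected.

```python
import numpy as np

R_E2 = (2.81794e-13) ** 2        # classical electron radius squared, cm^2

def kn_sigma_undetected(E_kev, window_low_kev, n=2000):
    """Klein-Nishina cross section per electron (cm^2) integrated only
    over scatter events whose outgoing photon falls below the lower
    window bound and therefore escapes detection."""
    alpha = E_kev / 511.0
    eps_min = 1.0 / (1.0 + 2.0 * alpha)           # maximum possible energy loss
    eps_cut = min(window_low_kev / E_kev, 1.0)    # undetected if E' < bound
    if eps_cut <= eps_min:
        return 0.0
    eps = np.linspace(eps_min, eps_cut, n)
    cos_t = 1.0 + 1.0 / alpha - 1.0 / (alpha * eps)
    f = eps + 1.0 / eps - (1.0 - cos_t ** 2)      # dsigma/deps up to a constant
    trapz = np.sum((f[1:] + f[:-1]) * 0.5 * np.diff(eps))
    return np.pi * R_E2 / alpha * trapz

# 511 keV photons with a 350-650 keV window: only scatters that fall
# below 350 keV are lost and contribute to the restricted attenuation.
print(kn_sigma_undetected(511.0, 350.0))
```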

  13. Surface roughness considerations for atmospheric correction of ocean color sensors. I - The Rayleigh-scattering component. II - Error in the retrieved water-leaving radiance

    Science.gov (United States)

    Gordon, Howard R.; Wang, Menghua

    1992-01-01

    The first step in the Coastal Zone Color Scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering (RS) contribution, L_r, to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm, L_r is computed by assuming that the ocean surface is flat. Calculations of the radiance leaving an RS atmosphere overlying a rough Fresnel-reflecting ocean are presented to evaluate the radiance error caused by the flat-ocean assumption. Simulations are carried out to evaluate the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct sun glitter, it is concluded that the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness. This suggests that, in refining algorithms for future sensors, more effort should be focused on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.

  14. A fast calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations

    Science.gov (United States)

    Fiorino, Steven T.; Elmore, Brannon; Schmidt, Jaclyn; Matchefts, Elizabeth; Burley, Jarred L.

    2016-05-01

    Properly accounting for multiple scattering effects can have important implications for remote sensing and possibly directed energy applications; for example, increasing path radiance can affect signal noise. This study describes the implementation of a fast-calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations into the Laser Environmental Effects Definition and Reference (LEEDR) atmospheric characterization and radiative transfer code. The multiple scattering algorithm fully solves for molecular, aerosol, cloud, and precipitation single-scatter layer effects with a Mie algorithm at every calculation point/layer rather than using an interpolated value from a pre-calculated look-up table. This top-down cumulative diffusivity method first considers the incident solar radiance contribution to a given layer, accounting for solid angle and elevation, and then measures the contribution of diffused energy from previous layers based on the transmission of the current level, producing a cumulative radiance that is reflected from a surface and measured at the observer's aperture. A unique set of asymmetry and backscattering phase function parameter calculations is then made, accounting for the radiance loss due to the molecular and aerosol constituent reflectivity within a level; this allows a more accurate characterization of diffuse layers that contribute to multiply scattered radiances in inhomogeneous atmospheres. The code logic is valid for spectral bands between 200 nm and radio wavelengths, and the accuracy is demonstrated by comparing the results from LEEDR to observed sky radiance data.

  15. Actuator Disc Model Using a Modified Rhie-Chow/SIMPLE Pressure Correction Algorithm

    DEFF Research Database (Denmark)

    Rethore, Pierre-Elouan; Sørensen, Niels

    2008-01-01

    An actuator disc model for the flow solver EllipSys (2D & 3D) is proposed. It is based on a correction of the Rhie-Chow algorithm that allows the use of discrete body forces in collocated-variable finite-volume CFD codes. It is compared with three cases for which an analytical solution is known.

  16. Aethalometer multiple scattering correction Cref for mineral dust aerosols

    Directory of Open Access Journals (Sweden)

    C. Di Biagio

    2017-08-01

    In this study we provide a first estimate of the Aethalometer multiple scattering correction Cref for mineral dust aerosols. Cref is an empirical constant used to correct the aerosol absorption coefficient measurements for the multiple scattering artefact of the Aethalometer; i.e. the filter fibres on which aerosols are deposited scatter light and this is miscounted as absorption. The Cref at 450 and 660 nm was obtained from the direct comparison of Aethalometer data (Magee Sci. AE31) with (i) the absorption coefficient calculated as the difference between the extinction and scattering coefficients measured by a Cavity Attenuated Phase Shift Extinction analyser (CAPS PMex) and a nephelometer, respectively, at 450 nm and (ii) the absorption coefficient from a MAAP (Multi-Angle Absorption Photometer) at 660 nm. Measurements were performed on seven dust aerosol samples generated in the laboratory by the mechanical shaking of natural parent soils issued from different source regions worldwide. The single scattering albedo (SSA) at 450 and 660 nm and the size distribution of the aerosols were also measured. Cref for mineral dust varies between 1.81 and 2.56 for a SSA of 0.85–0.96 at 450 nm and between 1.75 and 2.28 for a SSA of 0.98–0.99 at 660 nm. The calculated mean for dust is 2.09 (±0.22) at 450 nm and 1.92 (±0.17) at 660 nm. With this new Cref the dust absorption coefficient by the Aethalometer is about 2% (450 nm) and 11% (660 nm) higher than that obtained by using Cref = 2.14 at both 450 and 660 nm, as usually assumed in the literature. This difference induces a change of up to 3% in the dust SSA at 660 nm. The Cref seems to be independent of the fine and coarse particle size fractions, and so the obtained Cref can be applied to dust both close to sources and following transport. Additional experiments performed with pure kaolinite minerals and polluted ambient aerosols indicate Cref of 2.49 (±0.02) and 2

  17. Attenuation correction of myocardial SPECT by scatter-photopeak window method in normal subjects

    International Nuclear Information System (INIS)

    Okuda, Koichi; Nakajima, Kenichi; Matsuo, Shinro; Kinuya, Seigo; Motomura, Nobutoku; Kubota, Masahiro; Yamaki, Noriyasu; Maeda, Hisato

    2009-01-01

    The segmentation with scatter and photopeak window data using attenuation correction (SSPAC) method can provide a patient-specific non-uniform attenuation coefficient map using only photopeak and scatter images, without X-ray computed tomography (CT). The purpose of this study is to evaluate the performance of attenuation correction (AC) by the SSPAC method on a normal myocardial perfusion database. A total of 32 sets of exercise-rest myocardial images with Tc-99m-sestamibi were acquired in both photopeak (140 keV±10%) and scatter (7% on the lower side of the photopeak window) energy windows. Myocardial perfusion databases for the SSPAC method and for non-AC (NC) were created from 15 female and 17 male subjects with a low likelihood of cardiac disease using quantitative perfusion SPECT software. Segmental myocardial counts of a 17-segment model from these databases were compared on the basis of the paired t test. The AC average myocardial perfusion count was significantly higher than that of NC in the septal and inferior regions (P<0.02). Conversely, the AC average count was significantly lower in the anterolateral and apical regions (P<0.01). The coefficient of variation of the AC count in the mid, apical and apex regions was lower than that of NC. The SSPAC method can improve average myocardial perfusion uptake in the septal and inferior regions and provide a uniform distribution of myocardial perfusion. The SSPAC method could be a practical method of attenuation correction without X-ray CT. (author)

  18. Scatter and crosstalk corrections for 99mTc/123I dual-radionuclide imaging using a CZT SPECT system with pinhole collimators

    International Nuclear Information System (INIS)

    Fan, Peng; Hutton, Brian F.; Holstensson, Maria; Ljungberg, Michael; Hendrik Pretorius, P.; Prasad, Rameshwar; Liu, Chi; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Thorn, Stephanie L.; Stacy, Mitchel R.; Sinusas, Albert J.

    2015-01-01

    Purpose: The energy spectrum for a cadmium zinc telluride (CZT) detector has a low-energy tail due to incomplete charge collection and intercrystal scattering. Due to these solid-state detector effects, scatter would be overestimated if the conventional triple-energy window (TEW) method were used for scatter and crosstalk corrections in CZT-based imaging systems. The objective of this work is to develop a scatter and crosstalk correction method for 99m Tc/ 123 I dual-radionuclide imaging for a CZT-based dedicated cardiac SPECT system with pinhole collimators (GE Discovery NM 530c/570c). Methods: A tailing model was developed to account for the low-energy tail effects of the CZT detector. The parameters of the model were obtained using 99m Tc and 123 I point source measurements. A scatter model was defined to characterize the relationship between down-scatter and self-scatter projections. The parameters for this model were obtained from Monte Carlo simulation using SIMIND. The tailing and scatter models were further incorporated into a projection count model, and the primary and self-scatter projections of each radionuclide were determined with a maximum likelihood expectation maximization (MLEM) iterative estimation approach. The extracted scatter and crosstalk projections were then incorporated into MLEM image reconstruction as an additive term in forward projection to obtain scatter- and crosstalk-corrected images. The proposed method was validated using Monte Carlo simulation, a line source experiment, anthropomorphic torso phantom studies, and patient studies. The performance of the proposed method was also compared to that obtained with the conventional TEW method. Results: Monte Carlo simulations and the line source experiment demonstrated that the TEW method overestimated scatter, while the proposed method provided more accurate scatter estimation by considering the low-energy tail effect. In the phantom study, improved defect contrasts were observed with both
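
    For reference, the conventional TEW estimate that the authors compare against fits in a few lines; the window widths below are typical values for a 140 keV photopeak (a 20% main window and narrow sub-windows), not constants of the Discovery system.

```python
import numpy as np

def tew_correct(main, lower, upper, w_main=28.0, w_sub=3.5):
    """Conventional triple-energy-window scatter estimate: the scatter
    inside the photopeak window is approximated by the trapezoid spanned
    by the count densities of two narrow flanking sub-windows,
        S = (C_low/w_sub + C_up/w_sub) * w_main / 2,
    and subtracted from the photopeak counts (clipped at zero)."""
    scatter = (lower / w_sub + upper / w_sub) * w_main / 2.0
    primary = np.maximum(main - scatter, 0.0)
    return primary, scatter

# Per-pixel example: 1000 photopeak counts, 40 and 5 counts in the
# 3.5 keV sub-windows below and above a 28 keV main window
print(tew_correct(np.array([1000.0]), np.array([40.0]), np.array([5.0])))
```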

  19. On the radiative corrections to the neutrino deep inelastic scattering

    International Nuclear Information System (INIS)

    Bardin, D.Yu.; Dokuchaeva, V.A.

    1986-01-01

    A unique set of formulae is presented for the radiative corrections to the double differential cross section of deep inelastic neutrino scattering in the charged- and neutral-current channels, within a simple quark-parton model and an on-mass-shell renormalization scheme. It is shown that these cross sections, when integrated to one-dimensional distributions or to the total cross section, reproduce many results existing in the literature

  20. Research and implementation of the algorithm for unwrapped and distortion correction basing on CORDIC for panoramic image

    Science.gov (United States)

    Zhang, Zhenhai; Li, Kejie; Wu, Xiaobing; Zhang, Shujiang

    2008-03-01

    An unwrapping and distortion-correction algorithm based on the Coordinate Rotation Digital Computer (CORDIC) and bilinear interpolation is presented in this paper, with the purpose of processing dynamic panoramic annular images. An original annular panoramic image captured by a panoramic annular lens (PAL) can be unwrapped and corrected to a conventional rectangular image without distortion, which corresponds much more closely to human vision. The algorithm for panoramic image processing is modeled in VHDL and implemented in an FPGA. The experimental results show that the proposed unwrapping and distortion-correction algorithm has low computational complexity, and that the architecture for dynamic panoramic image processing has low hardware cost and power consumption, validating the proposed algorithm.
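
    The CORDIC core that such an architecture builds on computes rotations with only shifts and adds; a floating-point sketch of the rotation mode (ours, not the paper's VHDL) follows.

```python
import math

def cordic_rotate(x, y, angle_rad, n_iter=16):
    """Rotate (x, y) by angle_rad using only shifts and adds, as a
    hardware CORDIC stage would (floating point here for clarity;
    valid for |angle| up to ~99.9 degrees)."""
    # Precomputed elementary angles atan(2^-i) and the aggregate gain
    atans = [math.atan(2.0 ** -i) for i in range(n_iter)]
    gain = 1.0
    for i in range(n_iter):
        gain *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = angle_rad
    for i in range(n_iter):
        d = 1.0 if z >= 0.0 else -1.0      # steer by the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return x * gain, y * gain

# Rotate the unit vector (1, 0) by 30 degrees
print(cordic_rotate(1.0, 0.0, math.pi / 6))   # ~(0.866, 0.500)
```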

  1. CORRECTING FOR INTERPLANETARY SCATTERING IN VELOCITY DISPERSION ANALYSIS OF SOLAR ENERGETIC PARTICLES

    International Nuclear Information System (INIS)

    Laitinen, T.; Dalla, S.; Huttunen-Heikinmaa, K.; Valtonen, E.

    2015-01-01

    To understand the origin of Solar Energetic Particles (SEPs), we must study their injection time relative to other solar eruption manifestations. Traditionally the injection time is determined using Velocity Dispersion Analysis (VDA), where a linear fit of the observed event onset times at 1 AU to the inverse velocities of SEPs is used to derive the injection time and path length of the first-arriving particles. VDA does not, however, take into account that the particles that produce a statistically observable onset at 1 AU have scattered in interplanetary space. We use Monte Carlo test particle simulations of energetic protons to study the effect of particle scattering on the observable SEP event onset above the pre-event background, and consequently on VDA results. We find that the VDA results are sensitive to the properties of the pre-event and event particle spectra as well as SEP injection and scattering parameters. In particular, a VDA-obtained path length that is close to the nominal Parker spiral length does not imply that the VDA injection time is correct. We study the delay to the observed onset caused by scattering of the particles and derive a simple estimate for the delay time by using the rate of intensity increase at the SEP onset as a parameter. We apply the correction to the magnetically well-connected SEP event of 2000 June 10, and show it to improve both the path length and injection time estimates, while also increasing the error limits to better reflect the inherent uncertainties of VDA
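
    A hedged sketch of the classical VDA fit that the paper corrects: fit the observed onset times against inverse particle speeds; the function name and the proton-mass assumption are ours.

```python
import numpy as np

def vda_fit(onset_times_s, energies_mev, mass_mev=938.272):
    """Classic velocity dispersion analysis: straight-line fit of
    t_onset = t_inj + L * (1/v), ignoring the scattering-induced onset
    delay discussed above. Returns (t_inj in s, path length L in AU)."""
    c_au_per_s = 1.0 / 499.0                   # light travel time: 499 s/AU
    gamma = 1.0 + energies_mev / mass_mev      # from relativistic kinetic energy
    beta = np.sqrt(1.0 - 1.0 / gamma ** 2)
    inv_v = 1.0 / (beta * c_au_per_s)          # seconds per AU
    slope, intercept = np.polyfit(inv_v, onset_times_s, 1)
    return intercept, slope

# Synthetic check: injection at t=0 along a 1.2 AU path, 5-50 MeV protons
E = np.array([5.0, 10.0, 20.0, 50.0])
gamma = 1.0 + E / 938.272
t = 1.2 * 499.0 / np.sqrt(1.0 - 1.0 / gamma ** 2)
print(vda_fit(t, E))    # ~(0.0, 1.2)
```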

  2. SPECT quantification: a review of the different correction methods with Compton scatter, attenuation and spatial deterioration effects

    International Nuclear Information System (INIS)

    Groiselle, C.; Rocchisani, J.M.; Moretti, J.L.; Dreuille, O. de; Gaillard, J.F.; Bendriem, B.

    1997-01-01

    The improvement of gamma cameras and of acquisition and reconstruction software opens new perspectives for image quantification in nuclear medicine. To meet this challenge, numerous works have been undertaken in recent years to correct for the different physical phenomena that prevent an exact estimation of the radioactivity distribution. The main phenomena that have to be taken into account are scatter, attenuation and resolution. In this work, the authors present the physical basis of each issue, its consequences for quantification and the main methods proposed to correct them. (authors)

  3. Experimental study on the location of energy windows for scatter correction by the TEW method in 201Tl imaging

    International Nuclear Information System (INIS)

    Kojima, Akihiro; Matsumoto, Masanori; Ohyama, Yoichi; Tomiguchi, Seiji; Kira, Mitsuko; Takahashi, Mutsumasa.

    1997-01-01

    To investigate the validity of scatter correction by the TEW method in 201Tl imaging, we performed an experimental study using a gamma camera capable of performing the TEW method and a plate source with a defect. Images were acquired with the triple energy window recommended by the gamma camera manufacturer. The energy spectra showed that backscattered photons were included within the lower sub-energy window and the main energy window, and that the spectral shapes in the upper half region of the photopeak (70 keV) were not changed greatly by the source shape or the thickness of the scattering materials. The scatter fraction calculated from the energy spectra, together with visual observation and the contrast values measured at the defect on planar images, also showed that substantial primary photons were included in the upper sub-energy window. In the TEW method, the two sub-energy windows are expected to be placed in a part of the energy region where the total counts consist mainly of scattered photons. Therefore, it is necessary to investigate the use of the upper sub-energy window in scatter correction by the TEW method in 201Tl imaging. (author)

  4. Study of radiative corrections with application to the electron-neutrino scattering

    International Nuclear Information System (INIS)

    Oliveira, L.C.S. de.

    1977-01-01

    The radiative correction method of quantum field theory is studied for some weak interaction processes, e.g., beta decay and muon decay. The method is then applied to calculate the transition probability for electron-neutrino scattering, taking the V-A theory as a basis. The treatment of the infrared and ultraviolet divergences is also discussed. (L.C.)

  5. Next-to-soft corrections to high energy scattering in QCD and gravity

    Energy Technology Data Exchange (ETDEWEB)

    Luna, A.; Melville, S. [SUPA, School of Physics and Astronomy, University of Glasgow,Glasgow G12 8QQ, Scotland (United Kingdom); Naculich, S.G. [Department of Physics, Bowdoin College,Brunswick, ME 04011 (United States); White, C.D. [Centre for Research in String Theory, School of Physics and Astronomy,Queen Mary University of London,327 Mile End Road, London E1 4NS (United Kingdom)

    2017-01-12

    We examine the Regge (high energy) limit of 4-point scattering in both QCD and gravity, using recently developed techniques to systematically compute all corrections up to next-to-leading power in the exchanged momentum i.e. beyond the eikonal approximation. We consider the situation of two scalar particles of arbitrary mass, thus generalising previous calculations in the literature. In QCD, our calculation describes power-suppressed corrections to the Reggeisation of the gluon. In gravity, we confirm a previous conjecture that next-to-soft corrections correspond to two independent deflection angles for the incoming particles. Our calculations in QCD and gravity are consistent with the well-known double copy relating amplitudes in the two theories.

  6. Finite-Geometry and Polarized Multiple-Scattering Corrections of Experimental Fast- Neutron Polarization Data by Means of Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Aspelund, O; Gustafsson, B

    1967-05-15

    After an introductory discussion of various methods for the correction of experimental left-right ratios for polarized multiple-scattering and finite-geometry effects, necessary and sufficient formulas for consistent tracking of polarization effects in successive scattering orders are derived. The simplifying assumptions are then made that the scattering is purely elastic and nuclear, and that in the description of the kinematics of an arbitrary scattering μ only one parameter - the so-called spin rotation parameter β(μ) - is required. Based upon these formulas, a general discussion of the importance of the correct inclusion of polarization effects in any scattering order is presented. Special attention is then paid to the question of depolarization of an already polarized beam. Subsequently, the afore-mentioned formulas are incorporated in the comprehensive Monte Carlo program MULTPOL, which has been designed to correctly account for finite-geometry effects in the sense that both the scattering sample and the detectors (both having cylindrical shapes) are objects of finite dimensions located at finite distances from each other and from the source of polarized fast neutrons. A special feature of MULTPOL is the application of the method of correlated sampling for reduction of the standard deviations of the results of the simulated experiment. Typical performance data of MULTPOL have been obtained by the application of this program to the correction of experimental polarization data observed in n + 12C elastic scattering between 1 and 2 MeV. Finally, in the concluding remarks the possible modification of MULTPOL to other experimental geometries is briefly discussed.

  7. Research on the Phase Aberration Correction with a Deformable Mirror Controlled by a Genetic Algorithm

    International Nuclear Information System (INIS)

    Yang, P; Hu, S J; Chen, S Q; Yang, W; Xu, B; Jiang, W H

    2006-01-01

    In order to improve laser beam quality, a real-number-encoded genetic algorithm based on adaptive optics technology is presented. This algorithm was applied to control a 19-channel deformable mirror to correct phase aberration in a laser beam. When a traditional adaptive optics system is used to correct laser beam wave-front phase aberration, a precondition is to measure the phase aberration information in the laser beam. Using genetic algorithms, however, it is not necessary to know the phase aberration information beforehand; the only parameter needed is the light intensity behind the pinhole on the focal plane, which was used as the fitness function for the genetic algorithm. Simulation results show that the optimal shape of the 19-channel deformable mirror for correcting the phase aberration can be ascertained. The peak light intensity was improved by a factor of 21, and the encircled-energy Strehl ratio increased from 0.02 to 0.34 as the phase aberration was corrected with this technique
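
    A toy version of such a search, with our own stand-in fitness in place of the measured pinhole intensity, might look like the following real-encoded genetic algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def pinhole_intensity(voltages, optimum):
    """Stand-in fitness: in the real system this is the measured light
    intensity behind the pinhole; here, a smooth peak at 'optimum'."""
    return np.exp(-np.sum((voltages - optimum) ** 2))

def ga_optimize(n_actuators=19, pop=40, gens=100, sigma=0.1):
    optimum = rng.uniform(-1, 1, n_actuators)        # unknown ideal shape
    population = rng.uniform(-1, 1, (pop, n_actuators))
    for _ in range(gens):
        fitness = np.array([pinhole_intensity(v, optimum) for v in population])
        parents = population[np.argsort(fitness)[-pop // 2:]]  # keep best half
        # Uniform crossover between random parent pairs, then mutation
        pa = parents[rng.integers(len(parents), size=pop)]
        pb = parents[rng.integers(len(parents), size=pop)]
        mask = rng.random((pop, n_actuators)) < 0.5
        population = np.where(mask, pa, pb)
        population += rng.normal(0.0, sigma, population.shape)
    best = max(population, key=lambda v: pinhole_intensity(v, optimum))
    return pinhole_intensity(best, optimum)

print(ga_optimize())   # fitness rises toward 1.0 as the shape converges
```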

  8. [Application of an Adaptive Inertia Weight Particle Swarm Algorithm in the Magnetic Resonance Bias Field Correction].

    Science.gov (United States)

    Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao

    2016-06-01

    An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local optimum problem encountered by traditional particle swarm optimization when estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure that the particle swarm is optimized globally and to keep it from falling into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared to the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate. The corrected image was then segmented, and the segmentation accuracy was 10% higher than that obtained with the improved entropy minimum algorithm. This algorithm can be applied to the correction of the MR image bias field.
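
    The abstract does not give the indicator's formula, so the sketch below uses the normalized spread of particle fitness as a stand-in premature-convergence indicator and maps it to the inertia weight; everything else is a plain particle swarm loop.

```python
import numpy as np

rng = np.random.default_rng(1)

def adaptive_pso(cost, dim, n=30, iters=200, w_lo=0.4, w_hi=0.9):
    """PSO whose inertia weight reacts to a premature-convergence
    indicator (here: fitness spread), loosely following the abstract."""
    x = rng.uniform(-1, 1, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    for _ in range(iters):
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)]          # global best position
        # Small normalized spread suggests premature clustering, so the
        # inertia weight is raised to re-diversify the swarm.
        spread = np.std(f) / (abs(np.mean(f)) + 1e-12)
        w = w_hi - (w_hi - w_lo) * min(spread, 1.0)
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = x + v
    return pbest[np.argmin(pbest_f)]

print(adaptive_pso(lambda p: np.sum(p ** 2), dim=4))   # ~zeros
```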

  9. Intercomparison of attenuation correction algorithms for single-polarized X-band radars

    Science.gov (United States)

    Lengfeld, K.; Berenguer, M.; Sempere Torres, D.

    2018-03-01

    Attenuation due to liquid water is one of the largest uncertainties in radar observations. The effects of attenuation are generally inversely proportional to the wavelength, i.e. observations from X-band radars are more affected by attenuation than those from C- or S-band systems. On the other hand, X-band radars can measure precipitation fields at higher temporal and spatial resolution and are more mobile and easier to install due to their smaller antennas. A first algorithm for attenuation correction in single-polarized systems was proposed by Hitschfeld and Bordan (1954) (HB), but it becomes unstable in the case of small errors (e.g. in the radar calibration) and strong attenuation. Therefore, methods have been developed that constrain the attenuation correction to keep the algorithm stable, using e.g. surface echoes (for space-borne radars) and mountain returns (for ground radars) as a final value (FV), or adjustment of the radar constant (C) or the coefficient α. In the absence of mountain returns, measurements from C- or S-band radars can be used to constrain the correction. All these methods are based on the statistical relation between reflectivity and specific attenuation. Another way to correct for attenuation in X-band radar observations is to use additional information from less attenuated radar systems, e.g. the ratio between X-band and C- or S-band radar measurements. Lengfeld et al. (2016) proposed such a method based on isotonic regression of the ratio between X- and C-band radar observations along the radar beam. This study presents a comparison of the original HB algorithm and three algorithms based on the statistical relation between reflectivity and specific attenuation, as well as two methods implementing additional information from C-band radar measurements. Their performance in two precipitation events (one mainly convective and the other stratiform) shows that a restriction of the HB algorithm is necessary to avoid instabilities. A comparison with vertically pointing
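
    The analytic Hitschfeld-Bordan solution is compact enough to sketch; the k-Z power-law coefficients below are X-band-like placeholders, and the clipping of the denominator is a crude stand-in for the stabilisation strategies compared in the study.

```python
import numpy as np

def hitschfeld_bordan(z_att_dbz, gate_km, a=3e-4, b=0.78):
    """Analytic Hitschfeld-Bordan attenuation correction for one ray of
    a single-polarized radar, using the power law k = a*Z^b between
    specific attenuation k (dB/km) and linear reflectivity Z. Unstable
    when the bracketed term approaches zero, hence the constrained
    variants discussed above."""
    z_lin = 10.0 ** (z_att_dbz / 10.0)            # attenuated Z, mm^6/m^3
    # Cumulative two-way path integral of a*Z_att^b up to each gate
    integral = np.cumsum(a * z_lin ** b) * gate_km
    denom = 1.0 - 0.46 * b * integral             # 0.46 = 2*ln(10)/10
    denom = np.maximum(denom, 0.05)               # crude stabilisation
    z_corr = z_lin / denom ** (1.0 / b)
    return 10.0 * np.log10(z_corr)

ray = np.full(100, 30.0)        # 30 dBZ along 100 gates of 0.25 km
print(hitschfeld_bordan(ray, 0.25)[-1])   # ~35 dBZ at the far gate
```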

  10. Pile-up correction by Genetic Algorithm and Artificial Neural Network

    Science.gov (United States)

    Kafaee, M.; Saramad, S.

    2009-08-01

    Pile-up distortion is a common problem for high counting rate radiation spectroscopy in many fields such as industrial, nuclear and medical applications. It is possible to reduce pulse pile-up using hardware-based pile-up rejection. However, this phenomenon may not be eliminated completely by this approach, and the spectrum distortion caused by pile-up rejection can increase as well. In addition, inaccurate correction or rejection of pile-up artifacts in applications such as energy dispersive X-ray (EDX) spectrometers can lead to loss of counts, poor quantitative results and even false element identification. Therefore, it is highly desirable to use software-based models to predict and correct any recognized pile-up signals in data acquisition systems. The present paper describes two new intelligent approaches for pile-up correction: the Genetic Algorithm (GA) and Artificial Neural Networks (ANNs). The validation and testing results of these new methods have been compared, showing excellent agreement with data measured with a 60Co source and a NaI detector. Monte Carlo simulation of these new intelligent algorithms also shows their advantages over hardware-based pulse pile-up rejection methods.

  11. Corrections on energy spectrum and scattering for fast neutron radiography at NECTAR facility

    International Nuclear Information System (INIS)

    Liu Shuquan; Bücherl, Thomas; Li Hang; Zou Yubin; Lu Yuanrong; Guo Zhiyu

    2013-01-01

    Distortions caused by the neutron spectrum and scattered neutrons are major problems in fast neutron radiography and should be considered in order to improve image quality. This paper puts emphasis on the removal of these image distortions and deviations for fast neutron radiography performed at the NECTAR facility of the research reactor FRM-II at the Technische Universität München (TUM), Germany. The NECTAR energy spectrum is analyzed and established to correct the influence of the neutron spectrum, and the Point Scattered Function (PScF) simulated by the Monte-Carlo program MCNPX is used to evaluate scattering effects from the object and improve image quality. Good analysis results demonstrate the effectiveness of the two corrections. (authors)

  12. Corrections on energy spectrum and scatterings for fast neutron radiography at NECTAR facility

    Science.gov (United States)

    Liu, Shu-Quan; Bücherl, Thomas; Li, Hang; Zou, Yu-Bin; Lu, Yuan-Rong; Guo, Zhi-Yu

    2013-11-01

    Distortions caused by the neutron spectrum and scattered neutrons are major problems in fast neutron radiography and should be considered in order to improve image quality. This paper puts emphasis on the removal of these image distortions and deviations for fast neutron radiography performed at the NECTAR facility of the research reactor FRM-II at the Technische Universität München (TUM), Germany. The NECTAR energy spectrum is analyzed and established to correct the influence of the neutron spectrum, and the Point Scattered Function (PScF) simulated by the Monte-Carlo program MCNPX is used to evaluate scattering effects from the object and improve image quality. Good analysis results demonstrate the effectiveness of the two corrections.

  13. Quantum algorithms and quantum maps - implementation and error correction

    International Nuclear Information System (INIS)

    Alber, G.; Shepelyansky, D.

    2005-01-01

    We investigate the dynamics of the quantum tent map under the influence of errors and explore the possibilities of quantum error correcting methods for the purpose of stabilizing this quantum algorithm. It is known that static but uncontrollable inter-qubit couplings between the qubits of a quantum information processor lead to a rapid Gaussian decay of the fidelity of the quantum state. We present a new error correcting method which slows down this fidelity decay to a linear-in-time exponential one. One of its advantages is that it does not require redundancy, so that all physical qubits involved can be used for logical purposes. We also study the influence of decoherence due to spontaneous decay processes, which can be corrected by quantum jump-codes. It is demonstrated how universal encoding can be performed in these code spaces. For this purpose we discuss a new entanglement gate which can be used for lowest-level encoding in concatenated error-correcting architectures. (author)

  14. Calculation of the flux attenuation and multiple scattering correction factors in time of flight technique for double differential cross section measurements

    International Nuclear Information System (INIS)

    Martin, G.; Coca, M.; Capote, R.

    1996-01-01

    Using the Monte Carlo technique, a computer code was developed that simulates the time-of-flight experiment for measuring double differential cross sections. The correction factors for flux attenuation and multiple scattering, which distort the measured spectrum, were calculated. The energy dependence of the correction factor was determined and a comparison with other works is shown. Calculations were made for 56Fe at two different scattering angles. We also reproduce the experiment performed at the Nuclear Analysis Laboratory for 12C at 25 °C, and the calculated correction factor for the measured spectrum is shown. We found a linear relation between the scatterer size and the correction factor for flux attenuation

  15. Non perturbative method for radiative corrections applied to lepton-proton scattering

    International Nuclear Information System (INIS)

    Chahine, C.

    1979-01-01

    We present a new, non-perturbative method to effect radiative corrections in lepton (electron or muon)-nucleon scattering, useful for existing or planned experiments. This method relies on a spectral function derived in a previous paper, which takes into account both real soft photons and virtual ones and hence is free from infrared divergence. Hard effects are computed perturbatively and then included in the form of 'hard factors' in the non-perturbative soft formulas. Practical computations are effected using the Gauss-Jacobi integration method, which reduces the relevant integrals to a rapidly converging sequence. For the simple problem of the radiative quasi-elastic peak, we get an exponentiated form conjectured by Schwinger and found by Yennie, Frautschi and Suura. We also compare our results with the peaking approximation, which we derive independently, and with the exact one-photon emission formula of Mo and Tsai. Applications of our method to the continuous spectrum include the radiative tail of the Δ33 resonance in e + p scattering and radiative corrections to the Feynman scale-invariant F2 structure function for the kinematics of two recent high energy muon experiments

  16. Evaluation of systematic uncertainties caused by radiative corrections in experiments on deep inelastic νsub(l)N-scattering

    International Nuclear Information System (INIS)

    Bardin, D.Yu.

    1979-01-01

    Based on the simple quark-parton model of strong interactions and on the Weinberg-Salam theory, compact formulae are derived for the radiative correction to charged-current-induced deep inelastic scattering of neutrinos on nucleons. The radiative correction is found to be around 20-30%, i.e., a value typical for deep inelastic lN-scattering. The results obtained are rather different from the presently available estimates of the effect under consideration

  17. Scatter and cross-talk correction for one-day acquisition of 123I-BMIPP and 99mtc-tetrofosmin myocardial SPECT.

    Science.gov (United States)

    Kaneta, Tomohiro; Kurihara, Hideyuki; Hakamatsuka, Takashi; Ito, Hiroshi; Maruoka, Shin; Fukuda, Hiroshi; Takahashi, Shoki; Yamada, Shogo

    2004-12-01

    123I-15-(p-iodophenyl)-3-(R,S)-methylpentadecanoic acid (BMIPP) and 99mTc-tetrofosmin (TET) are widely used for evaluation of myocardial fatty acid metabolism and perfusion, respectively. ECG-gated TET SPECT is also used for evaluation of myocardial wall motion. These tests are often performed on the same day to minimize both the time required and the inconvenience to patients and medical staff. However, as 123I and 99mTc have similar emission energies (159 keV and 140 keV, respectively), it is necessary to consider not only scattered photons but also primary photons of each radionuclide detected in the wrong window (cross-talk). In this study, we developed and evaluated the effectiveness of a new scatter and cross-talk correction imaging protocol. Fourteen patients with ischemic heart disease or heart failure (8 men and 6 women; mean age 69.4 yr, range 45 to 94 yr) were enrolled in this study. In the routine one-day acquisition protocol, BMIPP SPECT was performed in the morning, with TET SPECT performed 4 h later. An additional SPECT was performed just before the injection of TET with the energy window for 99mTc; these data correspond to the scatter and cross-talk factor of the subsequent TET SPECT. The correction was performed by subtraction of the scatter and cross-talk factor from the TET SPECT. Data are presented as means +/- S.E. Statistical analyses were performed using Wilcoxon's matched-pairs signed-ranks test. The fraction removed from the total count by the correction was 26.0 +/- 5.3%. EDV and ESV after correction were significantly greater than those before correction (p = 0.019 and 0.016, respectively). After correction, EF was smaller than that before correction, but the difference was not significant. Perfusion scores (17 segments per heart) were significantly lower after correction than before. Comparison before and after correction revealed significant differences in EDV, ESV, and perfusion scores. These observations indicate that scatter and cross-talk correction is required for one

  18. Flux-corrected transport principles, algorithms, and applications

    CERN Document Server

    Kuzmin, Dmitri; Turek, Stefan

    2005-01-01

    Addressing students and researchers as well as CFD practitioners, this book describes the state of the art in the development of high-resolution schemes based on the Flux-Corrected Transport (FCT) paradigm. Intended for readers who have a solid background in Computational Fluid Dynamics, the book begins with historical notes by J.P. Boris and D.L. Book. Review articles that follow describe recent advances in the design of FCT algorithms as well as various algorithmic aspects. The topics addressed in the book and its main highlights include: the derivation and analysis of classical FCT schemes with special emphasis on the underlying physical and mathematical constraints; flux limiting for hyperbolic systems; generalization of FCT to implicit time-stepping and finite element discretizations on unstructured meshes and its role as a subgrid scale model for Monotonically Integrated Large Eddy Simulation (MILES) of turbulent flows. The proposed enhancements of the FCT methodology also comprise the prelimiting and '...

  19. Coulomb correction to the screening angle of the Moliere multiple scattering theory

    International Nuclear Information System (INIS)

    Kuraev, E.A.; Voskresenskaya, O.O.; Tarasov, A.V.

    2012-01-01

    Coulomb correction to the screening angular parameter of the Moliere multiple scattering theory is found. Numerical calculations are presented in the range of nuclear charge 4 ≤ Z ≤ 82. Comparison with the Moliere result for the screening angle reveals up to 30% deviation from it for sufficiently heavy elements of the target material

  20. Flux-corrected transport principles, algorithms, and applications

    CERN Document Server

    Löhner, Rainald; Turek, Stefan

    2012-01-01

    Many modern high-resolution schemes for Computational Fluid Dynamics trace their origins to the Flux-Corrected Transport (FCT) paradigm. FCT maintains monotonicity using a nonoscillatory low-order scheme to determine the bounds for a constrained high-order approximation. This book begins with historical notes by J.P. Boris and D.L. Book who invented FCT in the early 1970s. The chapters that follow describe the design of fully multidimensional FCT algorithms for structured and unstructured grids, limiting for systems of conservation laws, and the use of FCT as an implicit subgrid scale model. The second edition presents 200 pages of additional material. The main highlights of the three new chapters include: FCT-constrained interpolation for Arbitrary Lagrangian-Eulerian methods, an optimization-based approach to flux correction, and FCT simulations of high-speed flows on overset grids. Addressing students and researchers, as well as CFD practitioners, the book is focused on computational aspects and contains m...

  1. Scatter-Reducing Sounding Filtration Using a Genetic Algorithm and Mean Monthly Standard Deviation

    Science.gov (United States)

    Mandrake, Lukas

    2013-01-01

    Retrieval algorithms like the one used by the Orbiting Carbon Observatory (OCO)-2 mission generate massive quantities of data of varying quality and reliability. A computationally efficient, simple method of labeling problematic datapoints or predicting soundings that will fail is required for basic operation, given that only 6% of the retrieved data may be operationally processed. This method automatically obtains a filter designed to reduce scatter based on a small number of input features. Most machine-learning filter construction algorithms attempt to predict the error in the CO2 value. By using the surrogate goal of Mean Monthly Standard Deviation (MMS), the aim is to reduce the retrieved CO2 scatter rather than solving the harder problem of reducing CO2 error, which lends itself to improved interpretability and performance. This software reduces the scatter of retrieved CO2 values globally based on a minimum number of input features. It can be used as a prefilter to reduce the number of soundings requested, or as a post-filter to label data quality. The use of MMS provides a much cleaner, clearer filter than the standard ABS(CO2-truth) metrics previously employed by competing methods. The software's main strength lies in a clearer (i.e., fewer features required) filter that more efficiently reduces scatter in retrieved CO2 rather than focusing on the more complex (and easily removed) bias issues.

  2. Wall attenuation and scatter corrections for ion chambers: measurements versus calculations

    Energy Technology Data Exchange (ETDEWEB)

    Rogers, D W.O.; Bielajew, A F [National Research Council of Canada, Ottawa, ON (Canada). Div. of Physics

    1990-08-01

    In precision ion chamber dosimetry in air, wall attenuation and scatter are corrected for by the factor A_wall (K_att in IAEA terminology, K_w^-1 in standards laboratory terminology). Using the EGS4 system, the authors show that Monte Carlo calculated A_wall factors predict relative variations in detector response with wall thickness which agree with all available experimental data within a statistical uncertainty of less than 0.1%. The calculated correction factors for use in exposure and air kerma standards differ by up to 1% from those obtained by extrapolating these same measurements. Using calculated correction factors would imply increases of 0.7-1.0% in the exposure and air kerma standards based on spherical and large-diameter, large-length cylindrical chambers, and decreases of 0.3-0.5% for standards based on large-diameter pancake chambers. (author).

  3. Deterministic simulation of first-order scattering in virtual X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Freud, N. E-mail: nicolas.freud@insa-lyon.fr; Duvauchelle, P.; Pistrui-Maximean, S.A.; Letang, J.-M.; Babot, D

    2004-07-01

    A deterministic algorithm is proposed to compute the contribution of first-order Compton- and Rayleigh-scattered radiation in X-ray imaging. This algorithm has been implemented in a simulation code named virtual X-ray imaging. The physical models chosen to account for photon scattering are the well-known form factor and incoherent scattering function approximations, which are recalled in this paper and whose limits of validity are briefly discussed. The proposed algorithm, based on a voxel discretization of the inspected object, is presented in detail, as well as its results in simple configurations, which are shown to converge when the sampling steps are chosen sufficiently small. Simple criteria for choosing correct sampling steps (voxel and pixel size) are established. The computation time necessary to simulate first-order scattering images is of the order of hours on a PC architecture and can even be decreased to minutes if only a profile is computed (along a linear detector). Finally, the results obtained with the proposed algorithm are compared to those given by the Monte Carlo code Geant4 and found to be in excellent agreement, which constitutes a validation of our algorithm. The advantages and drawbacks of the proposed deterministic method versus the Monte Carlo method are briefly discussed.
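
    The single-scatter summation underlying such an algorithm can be sketched generically: for each voxel, attenuate the primary beam, scatter once toward the detector, and attenuate again. The function names, naive ray sampling and geometry handling below are ours, not those of the virtual X-ray imaging code.

```python
import numpy as np

def first_order_scatter(mu_map, voxel_cm, n_e, src, det, det_area_cm2, dcs):
    """Deterministic single-scatter estimate over a voxelised object.
    mu_map: 3-D attenuation map (1/cm); n_e: electron density (1/cm^3);
    dcs(cos_theta): per-electron differential cross section (cm^2/sr)."""
    def attenuation(p, q, n_samples=64):
        # Attenuation along segment p->q by uniform sampling of mu_map
        ts = (np.arange(n_samples) + 0.5) / n_samples
        pts = p + ts[:, None] * (q - p)
        idx = np.clip((pts / voxel_cm).astype(int), 0,
                      np.array(mu_map.shape) - 1)
        mean_mu = mu_map[idx[:, 0], idx[:, 1], idx[:, 2]].mean()
        return np.exp(-mean_mu * np.linalg.norm(q - p))

    total = 0.0
    for ijk in np.ndindex(mu_map.shape):
        if mu_map[ijk] == 0.0:
            continue                                  # skip empty voxels
        c = (np.array(ijk) + 0.5) * voxel_cm          # voxel centre
        u1, u2 = c - src, det - c
        cos_t = (u1 @ u2) / (np.linalg.norm(u1) * np.linalg.norm(u2))
        omega = det_area_cm2 / (u2 @ u2)              # detector solid angle
        total += (attenuation(src, c) * n_e * voxel_cm ** 3
                  * dcs(cos_t) * omega * attenuation(c, det))
    return total

# Tiny demo: uniform water-like cube, constant placeholder cross section
mu = np.full((10, 10, 10), 0.2)                       # 1/cm
print(first_order_scatter(mu, 0.5, 3.34e23, np.array([-20.0, 2.5, 2.5]),
                          np.array([30.0, 2.5, 2.5]), 1.0,
                          lambda c: 7.9e-26))         # ~r_e^2, cm^2/sr
```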

  4. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    Science.gov (United States)

    Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao

    2014-01-01

    We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813

  5. Prediction of e± elastic scattering cross-section ratio based on phenomenological two-photon exchange corrections

    Science.gov (United States)

    Qattan, I. A.

    2017-06-01

    I present a prediction of the e± elastic scattering cross-section ratio, R(e+/e-), as determined using a new parametrization of the two-photon exchange (TPE) corrections to the electron-proton elastic scattering cross section σR. The extracted ratio is compared to several previous phenomenological extractions, TPE hadronic calculations, and direct measurements from the comparison of electron and positron scattering. The TPE corrections and the ratio R(e+/e-) show a clear change of sign at low Q2, which is necessary to explain the high-Q2 form factor discrepancy while being consistent with the known Q2→0 limit. While my predictions are generally in good agreement with previous extractions, TPE hadronic calculations, and existing world data, including the recent two measurements from the CLAS and VEPP-3 Novosibirsk experiments, they are larger than the new OLYMPUS measurements at larger Q2 values.

  6. Clinical usefulness of scatter and attenuation correction for brain single photon emission computed tomography (SPECT) in pediatrics

    International Nuclear Information System (INIS)

    Adachi, Itaru; Doi, Kenji; Komori, Tsuyoshi; Hou, Nobuyoshi; Tabuchi, Koujirou; Matsui, Ritsuo; Sueyoshi, Kouzou; Utsunomiya, Keita; Narabayashi, Isamu

    1998-01-01

    This investigation was undertaken to study the clinical usefulness of scatter and attenuation correction (SAC) for brain SPECT in pediatric patients, in comparison with the standard reconstruction (STD). Brain SPECT was performed in 31 patients (19 epilepsy, 5 cerebrovascular disease, 2 brain tumor, 3 meningitis, 1 hydrocephalus and 1 psychosis; mean age 5.0±4.9 years). Many patients had to be given sedatives to restrain body motion after technetium-99m hexamethylpropylene amine oxime (99mTc-HMPAO) was injected at convulsion or at rest. Brain SPECT data were acquired with a triple-detector gamma camera (GCA-9300, Toshiba, Japan). The data were reconstructed by filtered backprojection after the raw data were corrected by the triple-energy-window method of scatter correction and the Chang method of attenuation correction; the same data were also reconstructed by filtered backprojection without these corrections. Both SAC and STD SPECT images were analyzed by visual interpretation. The uptake ratio of the cerebral basal nuclei was calculated as the counts of the thalamus or lenticular nuclei divided by those of the cortex. All images obtained with the SAC method were better than those obtained with the STD method. The thalamic uptake ratio with the SAC method was higher than that with the STD method (1.22±0.09 vs. 0.87±0.22, p<0.01, and vs. 1.02±0.16, p<0.01). Transmission scanning is the most suitable method of absorption correction, but it is not adequate for examinations of children, because the scan takes a long time and the infants are exposed to the line-source radioisotope. It was concluded that these scatter and absorption corrections are the most suitable approach for brain SPECT in pediatrics. (author)

  7. Evaluation of six scatter correction methods based on spectral analysis in 99m Tc SPECT imaging using SIMIND Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    Mahsa Noori Asl

    2013-01-01

    Compton-scattered photons included within the photopeak pulse-height window result in the degradation of SPECT images both qualitatively and quantitatively. The purpose of this study is to evaluate and compare six scatter correction methods based on setting energy windows in the 99mTc spectrum. SIMIND Monte Carlo simulation is used to generate the projection images from a cold-sphere hot-background phantom. For the evaluation of the different scatter correction methods, three assessment criteria are considered: image contrast, signal-to-noise ratio (SNR) and relative noise of the background (RNB). Except for the dual-photopeak window (DPW) method, the image contrast of the five cold spheres is improved in the range of 2.7-26%. Among the methods considered, two show a nonuniform correction performance. The RNB for all of the scatter correction methods ranges from a minimum of 0.03 for the DPW method to a maximum of 0.0727 for the triple-energy-window (TEW) method using the trapezoidal approximation. The TEW method using the triangular approximation, because of its ease of implementation, good improvement of the image contrast and the SNR for the five cold spheres, and low noise level, is proposed as the most appropriate correction method.

  8. A baseline correction algorithm for Raman spectroscopy by adaptive knots B-spline

    International Nuclear Information System (INIS)

    Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wang, Xiu-fen; He, Hao; Zuo, Yong

    2015-01-01

    The Raman spectroscopy technique is a powerful and non-invasive technique for molecular fingerprint detection and has been widely used in many areas, such as food safety, drug safety, and environmental testing. However, Raman signals can easily be corrupted by a fluorescent background; we therefore present a baseline correction algorithm to suppress the fluorescent background in this paper. In this algorithm, the background of the Raman signal is suppressed by fitting a curve called the baseline using a cyclic approximation method. Instead of traditional polynomial fitting, we use the B-spline as the fitting algorithm because of its advantages of low order and smoothness, which avoid under-fitting and over-fitting effectively. In addition, we present an automatic adaptive knot generation method to replace the traditional uniform knots. This algorithm achieves the desired performance for most Raman spectra with varying baselines without any user input or preprocessing step. In the simulation, three kinds of fluorescent background lines were introduced to test the effectiveness of the proposed method. We show that two real Raman spectra (parathion-methyl and colza oil) can be detected and their baselines corrected by the proposed method. (paper)
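
    A minimal sketch of the cyclic B-spline baseline idea (with uniform knots; the paper's adaptive knot generation is not reproduced): fit a low-order spline and clip the working signal to it, so that peaks are progressively excluded from the fit.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def spline_baseline(x, y, n_knots=8, n_iter=30):
    """Iterative B-spline baseline estimation: repeatedly fit a cubic
    spline and clamp the working signal to the fitted curve, so that
    positive peaks are progressively suppressed."""
    knots = np.linspace(x[0], x[-1], n_knots + 2)[1:-1]   # interior knots
    work = y.copy()
    for _ in range(n_iter):
        spline = LSQUnivariateSpline(x, work, knots, k=3)
        base = spline(x)
        work = np.minimum(work, base)       # drop points above the fit
    return base

# Synthetic Raman-like signal: two peaks on a curved fluorescent background
x = np.linspace(0.0, 1.0, 500)
y = (np.exp(-((x - 0.3) / 0.01) ** 2) + np.exp(-((x - 0.7) / 0.015) ** 2)
     + 0.5 * x ** 2 + 0.2 * x + 0.1)
corrected = y - spline_baseline(x, y)       # peaks survive, background removed
```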

  9. SIMSAS - a window based software package for simulation and analysis of multiple small-angle scattering data

    International Nuclear Information System (INIS)

    Jayaswal, B.; Mazumder, S.

    1998-09-01

    Small-angle scattering data from strongly scattering systems, e.g. porous materials, cannot be analysed by invoking the single-scattering approximation, because specimens thick enough to replicate the bulk matrix in its essential properties are too thick for the approximation to hold. The presence of multiple scattering is indicated by the breakdown of the functional invariance of the observed scattering profile under variation of sample thickness and/or wavelength of the probing radiation. This article delineates how failing to account for multiple scattering affects the results of analysis, and then how to correct the data for its effect. It presents an algorithm to extract the single-scattering profile from small-angle scattering data affected by multiple scattering. The algorithm can process the scattering data and deduce the single-scattering profile on an absolute scale. A software package, SIMSAS, is introduced for executing this inversion step. The package is useful both for simulating and for analysing multiple small-angle scattering data. (author)

  10. A fully automated algorithm of baseline correction based on wavelet feature points and segment interpolation

    Science.gov (United States)

    Qian, Fang; Wu, Yihui; Hao, Peng

    2017-11-01

    Baseline correction is a very important part of pre-processing. A baseline in the spectrum signal induces uneven amplitude shifts across different wavenumbers and leads to poor results, so these shifts should be compensated before further analysis. Many algorithms are used to remove baselines, but fully automated baseline correction is more convenient in practical applications. A fully automated algorithm based on wavelet feature points and segment interpolation (AWFPSI) is proposed. It finds feature points through the continuous wavelet transform and estimates the baseline through segment interpolation. AWFPSI is compared with three commonly used fully automated and semi-automated algorithms on a simulated spectrum signal, a visible spectrum signal and a Raman spectrum signal. The results show that AWFPSI gives better accuracy and has the advantage of easy use.
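
    A minimal version of the two-stage idea, with SciPy's wavelet peak finder standing in for the paper's feature-point detection and a fixed exclusion half-width (an arbitrary simplification) around each peak:

```python
import numpy as np
from scipy import signal

def awfpsi_like_baseline(y, widths=np.arange(1, 32), half_width=15):
    """Locate peaks with a continuous wavelet transform, then interpolate
    the baseline through the remaining (non-peak) samples."""
    x = np.arange(len(y))
    peaks = signal.find_peaks_cwt(y, widths)    # wavelet feature points
    keep = np.ones(len(y), dtype=bool)
    for p in peaks:
        keep[max(0, p - half_width):p + half_width] = False
    return np.interp(x, x[keep], y[keep])       # segment interpolation
```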

  11. Tunable output-frequency filter algorithm for imaging through scattering media under LED illumination

    Science.gov (United States)

    Zhou, Meiling; Singh, Alok Kumar; Pedrini, Giancarlo; Osten, Wolfgang; Min, Junwei; Yao, Baoli

    2018-03-01

    We present a tunable output-frequency filter (TOF) algorithm to reconstruct the object from noisy experimental data under low-power partially coherent illumination, such as an LED, when imaging through scattering media. In the iterative algorithm, we employ Gaussian functions with different filter windows at different stages of the iteration process to reduce corruption from experimental noise and to steer the search toward a global minimum in the reconstruction. In comparison with the conventional iterative phase retrieval algorithm, we demonstrate that the proposed TOF algorithm achieves consistent and reliable reconstruction in the presence of experimental noise. Moreover, spatial resolution and distinctive features are retained in the reconstruction, since the filter is applied only to the region outside the object. The feasibility of the proposed method is demonstrated by experimental results.

  12. Three-loop corrections to the soft anomalous dimension in multileg scattering

    CERN Document Server

    Almelid, Øyvind; Gardi, Einan

    2016-01-01

    We present the three-loop result for the soft anomalous dimension governing long-distance singularities of multi-leg gauge-theory scattering amplitudes of massless partons. We compute all contributing webs involving semi-infinite Wilson lines at three loops and obtain the complete three-loop correction to the dipole formula. We find that non-dipole corrections appear already for three coloured partons, where the correction is a constant without kinematic dependence. Kinematic dependence appears only through conformally-invariant cross ratios for four coloured partons or more, and the result can be expressed in terms of single-valued harmonic polylogarithms of weight five. While the non-dipole three-loop term does not vanish in two-particle collinear limits, its contribution to the splitting amplitude anomalous dimension reduces to a constant, and it only depends on the colour charges of the collinear pair, thereby preserving strict collinear factorization properties. Finally we verify that our result is consi...

  13. Scatter and crosstalk corrections for {sup 99m}Tc/{sup 123}I dual-radionuclide imaging using a CZT SPECT system with pinhole collimators

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Peng [Department of Diagnostic Radiology, Yale University, New Haven, Connecticut 06520 and Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Hutton, Brian F. [Institute of Nuclear Medicine, University College London, London WC1E 6BT, United Kingdom and Centre for Medical Radiation Physics, University of Wollongong, New South Wales 2522 (Australia); Holstensson, Maria [Department of Nuclear Medicine, Karolinska University Hospital, Stockholm 14186 (Sweden); Ljungberg, Michael [Department of Medical Radiation Physics, Lund University, Lund 222 41 (Sweden); Hendrik Pretorius, P. [Department of Radiology, University of Massachusetts Medical School, Worcester, Massachusetts 01655 (United States); Prasad, Rameshwar; Liu, Chi, E-mail: chi.liu@yale.edu [Department of Diagnostic Radiology, Yale University, New Haven, Connecticut 06520 (United States); Ma, Tianyu; Liu, Yaqiang; Wang, Shi [Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Thorn, Stephanie L.; Stacy, Mitchel R.; Sinusas, Albert J. [Department of Internal Medicine, Yale Translational Research Imaging Center, Yale University, New Haven, Connecticut 06520 (United States)

    2015-12-15

    Purpose: The energy spectrum for a cadmium zinc telluride (CZT) detector has a low energy tail due to incomplete charge collection and intercrystal scattering. Due to these solid-state detector effects, scatter would be overestimated if the conventional triple-energy window (TEW) method is used for scatter and crosstalk corrections in CZT-based imaging systems. The objective of this work is to develop a scatter and crosstalk correction method for {sup 99m}Tc/{sup 123}I dual-radionuclide imaging for a CZT-based dedicated cardiac SPECT system with pinhole collimators (GE Discovery NM 530c/570c). Methods: A tailing model was developed to account for the low energy tail effects of the CZT detector. The parameters of the model were obtained using {sup 99m}Tc and {sup 123}I point source measurements. A scatter model was defined to characterize the relationship between down-scatter and self-scatter projections. The parameters for this model were obtained from Monte Carlo simulation using SIMIND. The tailing and scatter models were further incorporated into a projection count model, and the primary and self-scatter projections of each radionuclide were determined with a maximum likelihood expectation maximization (MLEM) iterative estimation approach. The extracted scatter and crosstalk projections were then incorporated into MLEM image reconstruction as an additive term in forward projection to obtain scatter- and crosstalk-corrected images. The proposed method was validated using Monte Carlo simulation, line source experiment, anthropomorphic torso phantom studies, and patient studies. The performance of the proposed method was also compared to that obtained with the conventional TEW method. Results: Monte Carlo simulations and line source experiment demonstrated that the TEW method overestimated scatter while their proposed method provided more accurate scatter estimation by considering the low energy tail effect. In the phantom study, improved defect contrasts were

  14. Automatic Correction Algorithm of Hydrology Feature Attribute in National Geographic Census

    Science.gov (United States)

    Li, C.; Guo, P.; Liu, X.

    2017-09-01

    Some attributes of the hydrologic feature data in the national geographic census are not clear; the current solution to this problem is manual filling, which is inefficient and error-prone. This paper therefore proposes an automatic correction algorithm for hydrologic feature attributes. Based on an analysis of the structural characteristics and topological relations, we put forward three basic principles of correction: network proximity, structural robustness and topological ductility. Based on the WJ-III map workstation, we realize the automatic correction of hydrologic features. Finally, practical data are used to validate the method. The results show that the method is reasonable and efficient.

  15. Cell light scattering characteristic numerical simulation research based on FDTD algorithm

    Science.gov (United States)

    Lin, Xiaogang; Wan, Nan; Zhu, Hao; Weng, Lingdong

    2017-01-01

    In this study, the finite-difference time-domain (FDTD) algorithm was used to solve the cell light scattering problem. Before the simulation comparison, it is necessary to establish the changes or differences between normal cells and abnormal cells, which may be cancerous or maldeveloped. The preparation for simulation consists of building a simple cell model comprising organelles, nucleus and cytoplasm, and setting a suitable mesh precision; setting the total-field/scattered-field source as the excitation source and defining the far-field projection analysis group are also important. Each step needs to be justified by mathematical principles such as numerical dispersion, the perfectly matched layer boundary condition and near-to-far-field extrapolation. The simulation results indicated that a change in the position of the nucleus increases the backscattering intensity, and that significant differences in the peak scattering intensity may result from changes in the size of the cytoplasm. The study may help establish regularities from the simulation results that can be meaningful for the early diagnosis of cancers.

  16. Study of Six Energy-Window Settings for Scatter Correction in Quantitative 111In Imaging: Comparative analysis Using SIMIND

    International Nuclear Information System (INIS)

    Gomez Facenda, A.; Castillo Lopez, J. P.; Torres Aroche, L. A.; Coca Perez, M. A.

    2013-01-01

    Activity quantification in nuclear medicine imaging is highly desirable, particularly for dosimetry and biodistribution studies of radiopharmaceuticals. Quantitative 111 In imaging is increasingly important with the current interest in therapy using 90 Y-radiolabeled compounds. Photons scattered in the patient are one of the major problems in quantification and lead to degradation of image quality. The aim of this work was to assess the configuration of energy windows and the best weight factor for scatter correction in 111 In images. All images were obtained using the Monte Carlo simulation code SIMIND, configured to emulate the Nucline SPIRIT DH-V gamma camera. The simulations were validated by the agreement between experimental and simulated line spread functions (LSF) of 99m Tc. The sensitivity, the scatter-to-total ratio, the contrast and the spatial resolution were examined for scatter-compensated images obtained from six different multi-window scatter corrections. Taking the results into consideration, the best energy-window setting was two 20% windows centered at 171 and 245 keV, together with a 10% scatter window located between the photopeaks at 209 keV. (Author)
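
    The subtraction step behind such multi-window schemes can be sketched in a few lines; the function, the example numbers and the weight k below are illustrative assumptions (tuning that weight was part of the study), with window widths approximating 20% windows at 171/245 keV and a 10% window at 209 keV:

```python
def scatter_corrected_counts(c_peaks, c_scatter, w_peaks, w_scatter, k=0.5):
    """Width-scaled, weighted scatter subtraction: c_peaks are the summed
    counts of the two 111In photopeak windows, c_scatter the counts in the
    intermediate scatter window, k an empirical weight factor."""
    return c_peaks - k * c_scatter * (w_peaks / w_scatter)

# Approximate widths: (34 + 49) keV for the photopeaks, 21 keV for scatter.
primary = scatter_corrected_counts(c_peaks=50000, c_scatter=8000,
                                   w_peaks=83.0, w_scatter=21.0)
```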

  17. Meson exchange corrections in deep inelastic scattering on deuteron

    International Nuclear Information System (INIS)

    Kaptari, L.P.; Titov, A.I.

    1989-01-01

    Starting from the general equations of motion of nucleons interacting with mesons, the one-particle Schroedinger-like equation for the nucleon wave function and the deep inelastic scattering amplitude with meson-exchange currents are obtained. Effective pion, sigma and omega meson exchanges are considered. It is found that the mesonic corrections only partially (about 60%) restore the energy sum rule broken by the nucleon off-mass-shell effects in nuclei. This result contradicts the prediction based on a calculation of the energy sum rule limited to second order in the nucleon-meson vertex and the static approximation. 17 refs.; 3 figs

  18. Dispersion corrections to the forward Rayleigh scattering amplitudes of tantalum, mercury and lead derived using photon interaction cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Appaji Gowda, S.B. [Department of Studies in Physics, Manasagangothri, University of Mysore, Mysore 570006 (India); Umesh, T.K. [Department of Studies in Physics, Manasagangothri, University of Mysore, Mysore 570006 (India)]. E-mail: tku@physics.uni-mysore.ac.in

    2006-01-15

    Dispersion corrections to the forward Rayleigh scattering amplitudes of tantalum, mercury and lead in the photon energy range 24-136 keV have been determined by numerical evaluation of the dispersion integral that relates them, through the optical theorem, to the photoeffect cross sections. The photoeffect cross sections were extracted by subtracting the coherent and incoherent scattering contributions from the total attenuation cross section measured with a high-resolution high-purity germanium detector in a narrow-beam good-geometry setup. The real part of the dispersion correction, to which the relativistic corrections calculated by Kissel and Pratt (S-matrix approach) or Creagh and McAuley (multipole corrections) have been added, is in better agreement with the available theoretical values.

  19. A new algorithm for least-cost path analysis by correcting digital elevation models of natural landscapes

    Science.gov (United States)

    Baek, Jieun; Choi, Yosoon

    2017-04-01

    Most algorithms for least-cost path analysis calculate the slope gradient between the source cell and adjacent cells in order to reflect terrain-slope weights in the travel-cost calculation. However, these algorithms cannot analyze the least-cost path between two cells when obstacle cells with very high or low terrain elevation lie between the source cell and the target cell. This study presents a new algorithm for least-cost path analysis that corrects digital elevation models of natural landscapes to find feasible paths satisfying a constraint of maximum or minimum slope gradient. The new algorithm calculates the slope gradient between the center cell and non-adjacent cells using the concept of extended move-sets. If the algorithm finds feasible paths between the center cell and non-adjacent cells that satisfy the slope constraint, the terrain elevation of obstacle cells lying between the two cells is corrected in the digital elevation model. After calculating the cumulative travel costs to the destination, weighted by the difference between the original and corrected elevations, the algorithm derives the least-cost path. Applying the proposed algorithm to synthetic and real-world data sets shows that it provides more accurate least-cost paths than conventional algorithms implemented in commercial GIS software such as ArcGIS.

  20. SU-E-QI-03: Compartment Modeling of Dynamic Brain PET - The Effect of Scatter and Random Corrections On Parameter Errors

    International Nuclear Information System (INIS)

    Häggström, I; Karlsson, M; Larsson, A; Schmidtlein, C

    2014-01-01

    Purpose: To investigate the effects of corrections for random and scattered coincidences on kinetic parameters in brain tumors, using ten Monte Carlo (MC) simulated dynamic FLT-PET brain scans. Methods: The GATE MC software was used to simulate ten repetitions of a 1 h dynamic FLT-PET scan of a voxelized head phantom. The phantom comprised six normal head tissues, plus inserted regions for blood and tumor tissue. Different time-activity curves (TACs) for all eight tissue types were used in the simulation; they were generated in Matlab using a 2-tissue model with preset parameter values (K1, k2, k3, k4, Va, Ki). The PET data were reconstructed into 28 frames by both ordered-subsets expectation maximization (OSEM) and 3D filtered backprojection (3DFBP). Five image sets were reconstructed, all with normalization and different additional corrections C (A = attenuation, R = random, S = scatter): trues (AC), trues + randoms (ARC), trues + scatters (ASC), total counts (ARSC) and total counts (AC). Corrections for randoms and scatters were based on real random and scatter sinograms that were back-projected, blurred, then forward-projected and scaled to match the real counts. Weighted non-linear least-squares fitting of TACs from the blood and tumor regions was used to obtain parameter estimates. Results: The bias was not significantly different for trues (AC), trues + randoms (ARC), trues + scatters (ASC) and total counts (ARSC) for either 3DFBP or OSEM (p<0.05). Total counts with only AC stood out, however, with an up to 160% larger bias. In general, there was no difference in bias between 3DFBP and OSEM, except in the parameters Va and Ki. Conclusion: According to our results, the methodology of correcting the PET data for randoms and scatters performed well for the dynamic images, where frames have much lower counts than static images. Generally, no bias was introduced by the corrections, and their importance was emphasized since omitting them increased bias extensively
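
    For readers unfamiliar with the model behind the fitted parameters, a sketch of the reversible two-tissue-compartment TAC used in such fits follows (the standard analytic impulse response convolved with the plasma input; uniform time sampling is assumed, whole-blood and plasma activity are conflated for brevity, and this is not the authors' code):

```python
import numpy as np

def two_tissue_tac(t, cp, K1, k2, k3, k4, Va):
    """Model TAC of the reversible 2-tissue compartment model."""
    s = k2 + k3 + k4
    d = np.sqrt(s * s - 4.0 * k2 * k4)
    a1, a2 = (s - d) / 2.0, (s + d) / 2.0
    h = (K1 / d) * ((k3 + k4 - a1) * np.exp(-a1 * t)
                    + (a2 - k3 - k4) * np.exp(-a2 * t))
    ct = np.convolve(h, cp)[:len(t)] * (t[1] - t[0])  # tissue activity
    return (1.0 - Va) * ct + Va * cp                  # add blood volume

# Handing this model to scipy.optimize.curve_fit with frame-duration
# weights reproduces a weighted non-linear least-squares fit.
```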

  1. A numerical study of super-resolution through fast 3D wideband algorithm for scattering in highly-heterogeneous media

    KAUST Repository

    Létourneau, Pierre-David; Wu, Ying; Papanicolaou, George; Garnier, Josselin; Darve, Eric

    2016-01-01

    We present a wideband fast algorithm capable of accurately computing the full numerical solution of the problem of acoustic scattering of waves by multiple finite-sized bodies such as spherical scatterers in three dimensions. By full solution, we

  2. Multirobot FastSLAM Algorithm Based on Landmark Consistency Correction

    Directory of Open Access Journals (Sweden)

    Shi-Ming Chen

    2014-01-01

    Full Text Available Considering the influence of uncertain map information on the multirobot SLAM problem, a multirobot FastSLAM algorithm based on landmark consistency correction is proposed. Firstly, an electromagnetism-like mechanism is introduced into the resampling procedure of single-robot FastSLAM: each sampling particle is regarded as a charged electron, and the attraction-repulsion mechanism of the electromagnetic field is used to simulate interactive forces between the particles and improve their distribution. Secondly, when multiple robots observe the same landmarks, every robot is regarded as a node and a Kalman-Consensus Filter is used to update landmark information, which further improves the accuracy of localization and mapping. Finally, simulation results show that the algorithm is suitable and effective.

  3. Clinical usefulness of scatter and attenuation correction for brain single photon emission computed tomography (SPECT) in pediatrics

    Energy Technology Data Exchange (ETDEWEB)

    Adachi, Itaru; Doi, Kenji; Komori, Tsuyoshi; Hou, Nobuyoshi; Tabuchi, Koujirou; Matsui, Ritsuo; Sueyoshi, Kouzou; Utsunomiya, Keita; Narabayashi, Isamu [Osaka Medical Coll., Takatsuki (Japan)

    1998-01-01

    This investigation was undertaken to study the clinical usefulness of scatter and attenuation correction (SAC) of brain SPECT in infants, in comparison with the standard reconstruction (STD). Brain SPECT was performed in 31 patients: 19 with epilepsy, 5 with cerebrovascular disease, 2 with brain tumor, 3 with meningitis, 1 with hydrocephalus and 1 with psychosis (mean age 5.0{+-}4.9 years). Many patients had to be given sedatives to restrain body motion after Technetium-99m hexamethylpropylene amine oxime ({sup 99m}Tc-HMPAO) was injected during convulsion or at rest. Brain SPECT data were acquired with a triple-detector gamma camera (GCA-9300, Toshiba, Japan). These data were reconstructed by filtered backprojection after the raw data were corrected by the triple-energy-window method of scatter correction and the Chang method of attenuation correction. The same data were also reconstructed by filtered backprojection without these corrections. Both SAC and STD SPECT images were analyzed by visual interpretation. The uptake ratio of the cerebral basal nuclei was calculated as the counts of the thalamus or lenticular nuclei divided by those of the cortex. All images of the SAC method were better than those of the STD method. The thalamic uptake ratio with SAC was higher than that with STD (1.22{+-}0.09>0.87{+-}0.22, p<0.01), as was the lenticular nuclear uptake ratio (1.26{+-}0.15>1.02{+-}0.16, p<0.01). A transmission scan is the most suitable method of attenuation correction, but it is not adequate for examinations of children, because the scan takes a long time and exposes the infant to the line-source radioisotope. It was concluded that these scatter and attenuation corrections are the most suitable method for brain SPECT in pediatrics. (author)

  4. The Algorithm Theoretical Basis Document for Tidal Corrections

    Science.gov (United States)

    Fricker, Helen A.; Ridgway, Jeff R.; Minster, Jean-Bernard; Yi, Donghui; Bentley, Charles R.

    2012-01-01

    This Algorithm Theoretical Basis Document deals with the tidal corrections that need to be applied to range measurements made by the Geoscience Laser Altimeter System (GLAS). These corrections result from the action of ocean tides and Earth tides, which lead to deviations from an equilibrium surface. Since the effect of tides depends on the time of measurement, the instantaneous tide components must be removed when processing altimeter data, so that all measurements refer to the equilibrium surface. The three main tide components to consider are the ocean tide, the solid-earth tide and the ocean loading tide; there are also long-period ocean tides and the pole tide. The approximate magnitudes of these components are illustrated in Table 1, together with estimates of their uncertainties (i.e., the residual error after correction). All of these components are important for GLAS measurements over the ice sheets, since centimeter-level accuracy is required for surface elevation change detection. The effect of each tidal component is removed by approximating its magnitude using tidal prediction models. Conversely, assimilation of GLAS measurements into tidal models will help to improve them, especially at high latitudes.
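
    Operationally, the correction is a signed sum of model-predicted components removed from each elevation measurement; a trivial sketch with illustrative names:

```python
def tide_corrected_elevation(measured, ocean, load, solid_earth,
                             long_period=0.0, pole=0.0):
    """Reduce a surface elevation to the equilibrium surface by removing
    the instantaneous, model-predicted tide components (all in metres)."""
    return measured - (ocean + load + solid_earth + long_period + pole)
```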

  5. Fully iterative scatter corrected digital breast tomosynthesis using GPU-based fast Monte Carlo simulation and composition ratio update

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyungsang; Ye, Jong Chul, E-mail: jong.ye@kaist.ac.kr [Bio Imaging and Signal Processing Laboratory, Department of Bio and Brain Engineering, KAIST 291, Daehak-ro, Yuseong-gu, Daejeon 34141 (Korea, Republic of); Lee, Taewon; Cho, Seungryong [Medical Imaging and Radiotherapeutics Laboratory, Department of Nuclear and Quantum Engineering, KAIST 291, Daehak-ro, Yuseong-gu, Daejeon 34141 (Korea, Republic of); Seong, Younghun; Lee, Jongha; Jang, Kwang Eun [Samsung Advanced Institute of Technology, Samsung Electronics, 130, Samsung-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 443-803 (Korea, Republic of); Choi, Jaegu; Choi, Young Wook [Korea Electrotechnology Research Institute (KERI), 111, Hanggaul-ro, Sangnok-gu, Ansan-si, Gyeonggi-do, 426-170 (Korea, Republic of); Kim, Hak Hee; Shin, Hee Jung; Cha, Joo Hee [Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro, 43-gil, Songpa-gu, Seoul, 138-736 (Korea, Republic of)

    2015-09-15

    Purpose: In digital breast tomosynthesis (DBT), scatter correction is highly desirable, as it improves image quality at low doses. Because the DBT detector panel is typically stationary during the source rotation, antiscatter grids are not generally compatible with DBT; thus, a software-based scatter correction is required. This work proposes a fully iterative scatter correction method that uses a novel fast Monte Carlo simulation (MCS) with a tissue-composition ratio estimation technique for DBT imaging. Methods: To apply MCS to scatter estimation, the material composition in each voxel should be known. To overcome the lack of prior accurate knowledge of tissue composition for DBT, a tissue-composition ratio is estimated based on the observation that the breast tissues are principally composed of adipose and glandular tissues. Using this approximation, the composition ratio can be estimated from the reconstructed attenuation coefficients, and the scatter distribution can then be estimated by MCS using the composition ratio. The scatter estimation and image reconstruction procedures can be performed iteratively until an acceptable accuracy is achieved. For practical use, (i) the authors have implemented a fast MCS using a graphics processing unit (GPU), (ii) the MCS is simplified to transport only x-rays in the energy range of 10–50 keV, modeling Rayleigh and Compton scattering and the photoelectric effect using the tissue-composition ratio of adipose and glandular tissues, and (iii) downsampling is used because the scatter distribution varies rather smoothly. Results: The authors have demonstrated that the proposed method can accurately estimate the scatter distribution, and that the contrast-to-noise ratio of the final reconstructed image is significantly improved. The authors validated the performance of the MCS by changing the tissue thickness, composition ratio, and x-ray energy. The authors confirmed that the tissue-composition ratio estimation was quite
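
    The composition-ratio step can be illustrated in isolation: each reconstructed voxel attenuation value is treated as a linear mix of adipose and glandular reference coefficients (which depend on the effective beam energy and are inputs here, not values from the paper):

```python
def glandular_fraction(mu_voxel, mu_adipose, mu_glandular):
    """Tissue-composition ratio of a voxel under the two-tissue
    (adipose/glandular) mixing assumption."""
    f = (mu_voxel - mu_adipose) / (mu_glandular - mu_adipose)
    return min(max(f, 0.0), 1.0)   # clamp to a physical fraction
```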

  6. Baseline correction combined partial least squares algorithm and its application in on-line Fourier transform infrared quantitative analysis.

    Science.gov (United States)

    Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping

    2011-04-01

    In order to eliminate the lower order polynomial interferences, a new quantitative calibration algorithm "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into PLS weights selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirement of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of the moisture is found to be 0.53%, w/w (range 7-19%). The sugar content is predicted with a RMSECV of 2.04%, w/w (range 33-68%). Copyright © 2011 Elsevier B.V. All rights reserved.

  7. Effect of scatter correction on the compartmental measurement of striatal and extrastriatal dopamine D2 receptors using [123I]epidepride SPET

    International Nuclear Information System (INIS)

    Fujita, Masahiro; Seneca, Nicholas; Innis, Robert B.; Varrone, Andrea; Kim, Kyeong Min; Watabe, Hiroshi; Iida, Hidehiro; Zoghbi, Sami S.; Tipre, Dnyanesh; Seibyl, John P.

    2004-01-01

    Prior studies with anthropomorphic phantoms and single, static in vivo brain images have demonstrated that scatter correction significantly improves the accuracy of regional quantitation of single-photon emission tomography (SPET) brain images. Since the regional distribution of activity changes following a bolus injection of a typical neuroreceptor ligand, we examined the effect of scatter correction on the compartmental modeling of serial dynamic images of striatal and extrastriatal dopamine D 2 receptors using [ 123 I]epidepride. Eight healthy human subjects [age 30±8 (range 22-46) years] participated in a study with a bolus injection of 373±12 (354-389) MBq [ 123 I]epidepride and data acquisition over a period of 14 h. A transmission scan was obtained in each study for attenuation and scatter correction. Distribution volumes were calculated by means of compartmental nonlinear least-squares analysis using metabolite-corrected arterial input function and brain data processed with scatter correction using narrow-beam geometry μ (SC) and without scatter correction using broad-beam μ (NoSC). Effects of SC were markedly different among brain regions. SC increased activities in the putamen and thalamus after 1-1.5 h while it decreased activity during the entire experiment in the temporal cortex and cerebellum. Compared with NoSC, SC significantly increased specific distribution volume in the putamen (58%, P=0.0001) and thalamus (23%, P=0.0297). Compared with NoSC, SC made regional distribution of the specific distribution volume closer to that of [ 18 F]fallypride. It is concluded that SC is required for accurate quantification of distribution volumes of receptor ligands in SPET studies. (orig.)

  8. Evaluation of various energy windows at different radionuclides for scatter and attenuation correction in nuclear medicine.

    Science.gov (United States)

    Asgari, Afrouz; Ashoor, Mansour; Sohrabpour, Mostafa; Shokrani, Parvaneh; Rezaei, Ali

    2015-05-01

    Improving the signal-to-noise ratio (SNR) and image quality by various methods is very important for detecting abnormalities in body organs. Scatter and attenuation of photons by the organs lead to errors in radiopharmaceutical estimation as well as degradation of images. The choice of a suitable energy window and radionuclide has a key role in nuclear medicine: the window should show the lowest scatter fraction as well as a nearly constant linear attenuation coefficient as a function of phantom thickness. The symmetrical window (SW), asymmetric window (ASW), high window (WH) and low window (WL), using the Tc-99m and Sm-153 radionuclides with a solid water slab phantom (RW3) and a Teflon bone phantom, were compared; Matlab software and the Monte Carlo N-Particle (MCNP4C) code were adapted to simulate these methods and to obtain the FWHM and full width at tenth maximum (FWTM) from line spread functions (LSFs). The experimental data were obtained with the Orbiter Scintron gamma camera. Based on the results of the simulations as well as the experimental work, WH and ASW showed the lowest scatter fraction as well as a constant linear attenuation coefficient as a function of phantom thickness. WH and ASW were the optimal windows in nuclear medicine imaging for Tc-99m in the RW3 phantom and Sm-153 in the Teflon bone phantom. Attenuation correction was performed for the WH and ASW optimal windows and for these radionuclides using a filtered backprojection algorithm. The simulated and experimental results show very good agreement between experiment and simulation, as well as between theoretical values and simulation data, with differences nominally less than 7.07% for Tc-99m and less than 8.00% for Sm-153. Corrected counts were not affected by the thickness of scattering material. The simulated line spread function (LSF) results for Sm-153 and Tc-99m in phantom based on the four windows and the TEW method were

  9. Forward two-photon exchange in elastic lepton-proton scattering and hyperfine-splitting correction

    Energy Technology Data Exchange (ETDEWEB)

    Tomalak, Oleksandr [Johannes Gutenberg Universitaet, Institut fuer Kernphysik and PRISMA Cluster of Excellence, Mainz (Germany)

    2017-08-15

    We relate the forward two-photon exchange (TPE) amplitudes to integrals of the inclusive lepton-proton scattering cross sections. These relations yield an alternative way to evaluate the TPE correction to the hyperfine splitting (HFS) in hydrogen-like atoms, with a result equivalent to the standard approach (Iddings, Drell and Sullivan) provided the Burkhardt-Cottingham sum rule holds. For the evaluation of individual effects (e.g., the elastic contribution) our approach yields a distinct result. We compare both methods numerically on the examples of the elastic contribution and the full TPE correction to the HFS in electronic and muonic hydrogen. (orig.)

  10. Research on correction algorithm of laser positioning system based on four quadrant detector

    Science.gov (United States)

    Gao, Qingsong; Meng, Xiangyong; Qian, Weixian; Cai, Guixia

    2018-02-01

    This paper first introduces the basic principle of the four-quadrant detector, and a laser positioning experiment system based on the four-quadrant detector is built. In practical applications of a four-quadrant laser positioning system, not only do interference from background light and detector dark-current noise exist, but the influence of random noise, system stability and spot-equivalent error cannot be ignored, so system calibration and correction are very important. This paper analyzes the various factors contributing to the system positioning error and then proposes an algorithm for correcting it; the results of simulation and experiment show that the modified algorithm can reduce the effect of system error on positioning and improve the positioning accuracy.
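
    For context, the classic four-quadrant sum-difference position estimate that such calibration corrects is sketched below (quadrant labelling and the scale factor k are conventional assumptions; the paper's correction model is not reproduced):

```python
def quad_cell_position(qa, qb, qc, qd, k=1.0):
    """Spot position from quadrant signals (A upper-left, B upper-right,
    C lower-left, D lower-right); k is an empirical scale that depends on
    spot size and must be calibrated."""
    total = qa + qb + qc + qd
    x = k * ((qb + qd) - (qa + qc)) / total
    y = k * ((qa + qb) - (qc + qd)) / total
    return x, y
```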

  11. Implementation of pencil kernel and depth penetration algorithms for treatment planning of proton beams

    International Nuclear Information System (INIS)

    Russell, K.R.; Saxner, M.; Ahnesjoe, A.; Montelius, A.; Grusell, E.; Dahlgren, C.V.

    2000-01-01

    The implementation of two algorithms for calculating dose distributions for radiation therapy treatment planning of intermediate-energy proton beams is described. A pencil kernel algorithm and a depth penetration algorithm have been incorporated into a commercial three-dimensional treatment planning system (Helax-TMS, Helax AB, Sweden) to allow conformal planning techniques using irregularly shaped fields, proton range modulation, range modification and dose calculation for non-coplanar beams. The pencil kernel algorithm is developed from the Fermi-Eyges formalism and Moliere multiple-scattering theory with range-straggling corrections applied. The depth penetration algorithm is based on the energy loss in the continuous-slowing-down approximation, with simple correction factors applied to the beam penumbra region, and has been implemented for fast, interactive treatment planning. Modelling of the effects of air gaps and of range-modifying device thickness and position is implicit in both algorithms. Measured and calculated dose values are compared for a therapeutic proton beam in both homogeneous and heterogeneous phantoms of varying complexity. Both algorithms model the beam penumbra as a function of depth in a homogeneous phantom with acceptable accuracy. Results show that the pencil kernel algorithm is required for modelling the dose perturbation effects from scattering in heterogeneous media. (author)

  12. Histogram-driven cupping correction (HDCC) in CT

    Science.gov (United States)

    Kyriakou, Y.; Meyer, M.; Lapp, R.; Kalender, W. A.

    2010-04-01

    Typical cupping correction methods are pre-processing methods which require either pre-calibration measurements or simulations of standard objects to approximate and correct for beam hardening and scatter. Some of them require the knowledge of spectra, detector characteristics, etc. The aim of this work was to develop a practical histogram-driven cupping correction (HDCC) method to post-process the reconstructed images. We use a polynomial representation of the raw-data generated by forward projection of the reconstructed images; forward and backprojection are performed on graphics processing units (GPU). The coefficients of the polynomial are optimized using a simplex minimization of the joint entropy of the CT image and its gradient. The algorithm was evaluated using simulations and measurements of homogeneous and inhomogeneous phantoms. For the measurements a C-arm flat-detector CT (FD-CT) system with a 30×40 cm2 detector, a kilovoltage on board imager (radiation therapy simulator) and a micro-CT system were used. The algorithm reduced cupping artifacts both in simulations and measurements using a fourth-order polynomial and was in good agreement to the reference. The minimization algorithm required less than 70 iterations to adjust the coefficients only performing a linear combination of basis images, thus executing without time consuming operations. HDCC reduced cupping artifacts without the necessity of pre-calibration or other scan information enabling a retrospective improvement of CT image homogeneity. However, the method can work with other cupping correction algorithms or in a calibration manner, as well.
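
    A stripped-down sketch of the optimisation loop: correct the image with a linear combination of precomputed basis images and let a Nelder-Mead simplex adjust the coefficients to minimise an entropy cost (the real method uses the joint entropy of the CT image and its gradient, with GPU forward/backprojection; this toy version minimises plain image entropy):

```python
import numpy as np
from scipy.optimize import minimize

def image_entropy(img, bins=128):
    """Histogram entropy; relative values suffice for the optimiser."""
    p, _ = np.histogram(img, bins=bins, density=True)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def hdcc_like_correction(image, basis_images):
    """Find coefficients of the basis-image combination by simplex search."""
    def cost(c):
        return image_entropy(image + sum(ci * b for ci, b
                                         in zip(c, basis_images)))
    res = minimize(cost, x0=np.zeros(len(basis_images)),
                   method="Nelder-Mead")
    return image + sum(ci * b for ci, b in zip(res.x, basis_images))
```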

  13. O({alpha}{sub s}) heavy flavor corrections to charged current deep-inelastic scattering in Mellin space

    Energy Technology Data Exchange (ETDEWEB)

    Bluemlein, J.; Hasselhuhn, A.; Kovacikova, P.; Moch, S.

    2011-04-15

    We provide a fast and precise Mellin-space implementation of the O({alpha}{sub s}) heavy flavor Wilson coefficients for charged current deep inelastic scattering processes. They are of importance for the extraction of the strange quark distribution in neutrino-nucleon scattering and the QCD analyses of the HERA charged current data. Errors in the literature are corrected. We also discuss a series of more general parton parameterizations in Mellin space. (orig.)

  14. Focusing light through strongly scattering media using genetic algorithm with SBR discriminant

    Science.gov (United States)

    Zhang, Bin; Zhang, Zhenfeng; Feng, Qi; Liu, Zhipeng; Lin, Chengyou; Ding, Yingchun

    2018-02-01

    In this paper, we experimentally demonstrate light focusing through strongly scattering media by performing binary amplitude optimization with a genetic algorithm. In the experiments, we control the 160 000 mirrors of a digital micromirror device to modulate and optimize the light transmission paths in the strongly scattering media. We replace the universal target-position-intensity (TPI) discriminant with a signal-to-background ratio (SBR) discriminant in the genetic algorithm. With 400 incident segments, a relative enhancement of 17.5% with a ground glass diffuser is achieved, which is higher than the theoretical value of 1/(2π) ≈ 15.9% for binary amplitude optimization. According to our repeated experiments, we conclude that, for the same number of segments, the enhancement for the SBR discriminant is always higher than that for the TPI discriminant, which results from the background-weakening effect of the SBR discriminant. In addition, with the SBR discriminant, the diameter of the focus can be varied from 7 to 70 μm at arbitrary positions. Moreover, multiple foci with high enhancement are obtained. Our work provides a meaningful reference for the study of binary amplitude optimization in the wavefront shaping field.
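
    The SBR discriminant itself is a one-liner; a sketch of the fitness function used to rank candidate binary patterns (image and mask names are hypothetical):

```python
import numpy as np

def sbr_fitness(camera_image, focus_mask):
    """Mean intensity inside the intended focus over mean intensity of the
    rest of the frame, so maximising it also suppresses the background."""
    return camera_image[focus_mask].mean() / camera_image[~focus_mask].mean()
```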

  15. Improvement of transport-corrected scattering stability and performance using a Jacobi inscatter algorithm for 2D-MOC

    International Nuclear Information System (INIS)

    Stimpson, Shane; Collins, Benjamin; Kochunas, Brendan

    2017-01-01

    The MPACT code, being developed collaboratively by the University of Michigan and Oak Ridge National Laboratory, is the primary deterministic neutron transport solver deployed within the Virtual Environment for Reactor Applications (VERA) as part of the Consortium for Advanced Simulation of Light Water Reactors (CASL). In many applications of the MPACT code, transport-corrected scattering has proven to be an obstacle in terms of stability, and considerable effort has been made to resolve the convergence issues that arise from it. Most of the convergence problems seem related to the transport-corrected cross sections, particularly when used in the 2D method of characteristics (MOC) solver, which is the focus of this work. In this paper, the stability and performance of the 2D MOC solver in MPACT are evaluated for two iteration schemes: Gauss-Seidel and Jacobi. With the Gauss-Seidel approach, as the MOC solver loops over groups, it uses the flux solution from the previous group to construct the inscatter source for the next group. Alternatively, the Jacobi approach uses only the fluxes from the previous outer iteration to determine the inscatter source for each group. Consequently, for the Jacobi iteration the loop over groups can be moved from the outermost loop-as is the case with the Gauss-Seidel sweeper-to the innermost loop, allowing for a substantial increase in efficiency by minimizing the overhead of retrieving segment, region, and surface index information from the ray tracing data. Several test problems are assessed: (1) Babcock & Wilcox 1810 Core I, (2) Dimple S01A-Sq, (3) VERA Progression Problem 5a, and (4) VERA Problem 2a. The Jacobi iteration exhibits better stability than Gauss-Seidel, allowing converged solutions to be obtained over a much wider range of iteration control parameters. Additionally, the MOC solve time with the Jacobi approach is roughly 2.0-2.5× faster per sweep. While the performance and stability of the Jacobi
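
    The structural difference between the two inscatter schemes can be sketched compactly; `sweep` stands in for a single-group 2D-MOC transport sweep, and the data layout is an assumption, not MPACT's:

```python
def gauss_seidel_outer(flux, sigma_s, sweep):
    """One outer iteration, Gauss-Seidel inscatter: group g's source uses
    fluxes already updated within this same outer iteration."""
    G = len(flux)
    for g in range(G):
        src = sum(sigma_s[gp][g] * flux[gp] for gp in range(G))
        flux[g] = sweep(g, src)
    return flux

def jacobi_outer(flux, sigma_s, sweep):
    """One outer iteration, Jacobi inscatter: all sources come from the
    previous outer iteration, so groups can be swept back-to-back (which
    is what lets the group loop move innermost in the ray trace)."""
    G = len(flux)
    old = [f.copy() for f in flux]
    for g in range(G):
        src = sum(sigma_s[gp][g] * old[gp] for gp in range(G))
        flux[g] = sweep(g, src)
    return flux
```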

  16. The collapsed cone algorithm for (192)Ir dosimetry using phantom-size adaptive multiple-scatter point kernels.

    Science.gov (United States)

    Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-07

    The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm with the worst case being a source/implant located well within a patient

  17. Simulation of co-phase error correction of optical multi-aperture imaging system based on stochastic parallel gradient descent algorithm

    Science.gov (United States)

    He, Xiaojun; Ma, Haotong; Luo, Chuanxin

    2016-10-01

    The optical multi-aperture imaging system is an effective way to enlarge the aperture and increase the resolution of a telescope optical system; the difficulty lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent (SPGD) algorithm to correct the co-phase error. Compared with current methods, SPGD can avoid explicitly detecting the co-phase error. This paper analyzes the influence of piston error and tilt error on image quality for a double-aperture imaging system, introduces the basic principle of the SPGD algorithm, and discusses the influence of the algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error-control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves as the gain coefficient and disturbance amplitude increase, but the stability of the algorithm is reduced; an adaptive gain coefficient can solve this problem appropriately. These results can provide a theoretical reference for co-phase error correction in multi-aperture imaging systems.
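
    The core SPGD update is compact; one iteration with a two-sided random perturbation is sketched below (the metric callable, gain and amplitude are assumptions to be tuned per system, which is exactly the sensitivity the paper studies):

```python
import numpy as np

def spgd_step(u, metric, gain=0.5, amplitude=0.05,
              rng=np.random.default_rng()):
    """Perturb the actuator vector u by a random bipolar du, measure the
    two-sided change in the image-quality metric J, and step along du in
    proportion to that change."""
    du = amplitude * rng.choice([-1.0, 1.0], size=u.shape)
    dJ = metric(u + du) - metric(u - du)
    return u + gain * dJ * du
```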

  18. Effects of scatter and attenuation corrections on phantom and clinical brain SPECT

    International Nuclear Information System (INIS)

    Prando, S.; Robilotta, C.C.R.; Oliveira, M.A.; Alves, T.C.; Busatto Filho, G.

    2002-01-01

    Aim: The present work evaluated the effects of combinations of scatter and attenuation corrections on the analysis of brain SPECT. Materials and Methods: We studied images of the 3D Hoffman brain phantom and of a group of 20 depressive patients with confirmed cardiac insufficiency (CI) and 14 matched healthy controls (HC). Data were acquired with a Sophy-DST/SMV-GE dual-head camera after venous injection of 1110 MBq 99m Tc-HMPAO. Two energy windows, 15% on 140 keV and 30% centered on 108 keV of the Compton distribution, were used to obtain corresponding sets of 128x128x128 projections. Tomograms were reconstructed using OSEM (2 iterations, 8 subsets) with a Metz filter (order 8, 4 pixels FWHM psf) and using FBP with a Butterworth filter (order 10, cutoff 0.7 Nyquist). Ten combinations of the Jaszczak correction (factors 0.3, 0.4 and 0.5) and the 1st-order Chang correction (μ=0.12 cm -1 and 0.159 cm -1 ) were applied to the phantom data. In all the phantom images, the contrast and signal-to-noise ratio between 3 ROIs (ventricle, occipital and thalamus) and the cerebellum, as well as the ratio between activities in gray and white matter, were calculated and compared with the expected values. The patient images were corrected with k=0.5 and μ=0.159 cm -1 and reconstructed with OSEM and a Metz filter. The images were inspected visually, and blood-flow comparisons between the CI and HC groups were performed using Statistical Parametric Mapping (SPM). Results: The best results in the analysis of contrast and activity ratios were obtained with k=0.5 and μ=0.159 cm -1 . The activity ratios obtained with OSEM and the Metz filter are similar to those published by Laere et al. [J.Nucl.Med 2000;41:2051-2062]. The correction method using an effective attenuation coefficient produced results that were visually acceptable but inadequate for quantitative evaluation. The signal-to-noise results are better with OSEM than with FBP reconstruction. The corrections in the CI patient studies

  19. Computational study of scattering of a zero-order Bessel beam by large nonspherical homogeneous particles with the multilevel fast multipole algorithm

    Science.gov (United States)

    Yang, Minglin; Wu, Yueqian; Sheng, Xinqing; Ren, Kuan Fang

    2017-12-01

    Computation of the scattering of shaped beams by large nonspherical particles is a challenge in both the optics and electromagnetics domains, since it concerns many research fields. In this paper, we report new progress in the numerical computation of scattering diagrams. Our algorithm permits calculation of the scattering of a particle as large as 110 wavelengths, or 700 in size parameter. The particle can be transparent or absorbing, of arbitrary shape, smooth or with a sharp surface, such as Chebyshev particles or ice crystals. To illustrate the capacity of the algorithm, a zero-order Bessel beam is taken as the incident beam, and the scattering of ellipsoidal and Chebyshev particles is taken as an example. Some special phenomena are revealed and examined. The scattering problem is formulated with the combined tangential formulation and solved iteratively with the aid of the multilevel fast multipole algorithm, which is well parallelized with the message passing interface on a distributed-memory computer platform using a hybrid partitioning strategy. The numerical predictions are compared with the results of a rigorous method for a spherical particle to validate the accuracy of the approach. The scattering diagrams of large ellipsoidal particles with various parameters are examined. The effects of aspect ratio, as well as of the half-cone angle of the incident zero-order Bessel beam and the off-axis distance, on scattered intensity are studied. Scattering by an asymmetric Chebyshev particle with size parameter larger than 700 is also given to show the capability of the method for computing scattering by arbitrarily shaped particles.

  20. Fast analytical scatter estimation using graphics processing units.

    Science.gov (United States)

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first order scatter in cone-beam image reconstruction improves the contrast to noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and with further acceleration and a method to account for multiple scatter may be useful for practical scatter correction schemes.

  1. An EPID response calculation algorithm using spatial beam characteristics of primary, head scattered and MLC transmitted radiation

    International Nuclear Information System (INIS)

    Rosca, Florin; Zygmanski, Piotr

    2008-01-01

    We have developed an independent algorithm for the prediction of electronic portal imaging device (EPID) response. The algorithm uses a set of images [open beam, closed multileaf collimator (MLC), various fence and modified sweeping-gap patterns] to separately characterize the primary and head-scatter contributions to EPID response. It also characterizes the relevant dosimetric properties of the MLC: transmission, dosimetric gap, MLC scatter [P. Zygmanski et al., J. Appl. Clin. Med. Phys. 8(4) (2007)], inter-leaf leakage, and tongue and groove [F. Lorenz et al., Phys. Med. Biol. 52, 5985-5999 (2007)]. The primary radiation is modeled with a single Gaussian distribution defined at the target position, while the head-scatter radiation is modeled with a triple Gaussian distribution defined downstream of the target. The distances between the target and the head-scatter source, jaws, and MLC are model parameters. The scatter associated with the EPID is implicit in the model. Open-beam images are predicted to within 1% of the maximum value across the image. Other MLC test patterns and intensity-modulated radiation therapy fluences are predicted to within 1.5% of the maximum value. The presented method was applied to the Varian aS500 EPID but is designed to work with any planar detector with sufficient spatial resolution

  2. Validities of three multislice algorithms for quantitative low-energy transmission electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Ming, W.Q.; Chen, J.H., E-mail: jhchen123@hnu.edu.cn

    2013-11-15

    Three different types of multislice algorithms, namely the conventional multislice (CMS) algorithm, the propagator-corrected multislice (PCMS) algorithm and the fully-corrected multislice (FCMS) algorithm, have been evaluated comparatively with respect to accelerating voltage in transmission electron microscopy. Detailed numerical calculations have been performed to test their validities. The results show that the three algorithms are equivalent for accelerating voltages above 100 kV. However, below 100 kV, the CMS algorithm introduces significant errors, not only for higher-order Laue zone (HOLZ) reflections but also for zero-order Laue zone (ZOLZ) reflections. The differences between the PCMS and FCMS algorithms are negligible and appear mainly in HOLZ reflections. Nonetheless, when the accelerating voltage is lowered to 20 kV or below, the PCMS algorithm also yields results deviating from the FCMS results. The present study demonstrates that the propagation of the electron wave from one slice to the next is cross-correlated with the crystal potential in a complex manner, such that when the accelerating voltage is lowered to 10 kV the accuracy of the algorithms depends on the scattering power of the specimen. - Highlights: • Three multislice algorithms for low-energy transmission electron microscopy are evaluated. • The propagator-corrected algorithm is a good alternative for voltages down to 20 kV. • Below 20 kV, a fully-corrected algorithm has to be employed for quantitative simulations.
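
    For reference, one conventional-multislice step in its usual FFT form is sketched below with a paraxial Fresnel propagator (the kernel that the propagator-corrected variants improve upon); grid spacing, slice thickness and the transmission function are generic inputs, not the authors' code:

```python
import numpy as np

def cms_slice_step(psi, transmission, dz, wavelength, dx):
    """Multiply the wave by the slice transmission function, then
    Fresnel-propagate it a distance dz in reciprocal space."""
    psi = psi * transmission
    kx = np.fft.fftfreq(psi.shape[0], d=dx)
    ky = np.fft.fftfreq(psi.shape[1], d=dx)
    k2 = kx[:, None] ** 2 + ky[None, :] ** 2
    kernel = np.exp(-1j * np.pi * wavelength * dz * k2)  # paraxial kernel
    return np.fft.ifft2(np.fft.fft2(psi) * kernel)
```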

  3. Validities of three multislice algorithms for quantitative low-energy transmission electron microscopy

    International Nuclear Information System (INIS)

    Ming, W.Q.; Chen, J.H.

    2013-01-01

    Three different types of multislice algorithms, namely the conventional multislice (CMS) algorithm, the propagator-corrected multislice (PCMS) algorithm and the fully-corrected multislice (FCMS) algorithm, have been evaluated comparatively with respect to accelerating voltage in transmission electron microscopy. Detailed numerical calculations have been performed to test their validities. The results show that the three algorithms are equivalent for accelerating voltages above 100 kV. However, below 100 kV, the CMS algorithm introduces significant errors, not only for higher-order Laue zone (HOLZ) reflections but also for zero-order Laue zone (ZOLZ) reflections. The differences between the PCMS and FCMS algorithms are negligible and appear mainly in HOLZ reflections. Nonetheless, when the accelerating voltage is lowered to 20 kV or below, the PCMS algorithm also yields results deviating from the FCMS results. The present study demonstrates that the propagation of the electron wave from one slice to the next is cross-correlated with the crystal potential in a complex manner, such that when the accelerating voltage is lowered to 10 kV the accuracy of the algorithms depends on the scattering power of the specimen. - Highlights: • Three multislice algorithms for low-energy transmission electron microscopy are evaluated. • The propagator-corrected algorithm is a good alternative for voltages down to 20 kV. • Below 20 kV, a fully-corrected algorithm has to be employed for quantitative simulations.

  4. On the radiative corrections of deep inelastic scattering of muon neutrino on nucleon

    International Nuclear Information System (INIS)

    So Sang Guk

    1986-01-01

The radiative corrections to the deep inelastic scattering process νμp → μN are considered. A matrix element that takes the Feynman one-photon-exchange diagrams into account at high momentum transfer is used. Based on the calculation of this matrix element, the effective cross section taking one-photon exchange into account is obtained. (author)

  5. Attenuation correction for the HRRT PET-scanner using transmission scatter correction and total variation regularization.

    Science.gov (United States)

    Keller, Sune H; Svarer, Claus; Sibomana, Merence

    2013-09-01

In the standard software for the Siemens high-resolution research tomograph (HRRT) positron emission tomography (PET) scanner, the most commonly used segmentation in the μ-map reconstruction for human brain scans is maximum a posteriori for transmission (MAP-TR). Biases in the lower cerebellum and pons in HRRT brain images have been reported. The two main sources of the problem with MAP-TR are poor bone/soft-tissue segmentation below the brain and overestimation of bone mass in the skull. We developed the new transmission processing with total variation (TXTV) method, which introduces scatter correction in the μ-map reconstruction and total variation filtering in the transmission processing. Comparing MAP-TR and the new TXTV with gold-standard CT-based attenuation correction, we found that TXTV has less bias than MAP-TR. We also compared images acquired at the HRRT scanner using TXTV to GE Advance scanner images and found high quantitative correspondence. TXTV has been used to reconstruct more than 4000 HRRT scans at seven different sites with no reports of biases. TXTV-based reconstruction is recommended for human brain scans on the HRRT.

  6. A GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation

    International Nuclear Information System (INIS)

    Gu Xuejun; Jia Xun; Jiang, Steve B; Jelen, Urszula; Li Jinsheng

    2011-01-01

Targeting the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on a graphics processing unit (GPU). This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework (Gu et al 2009 Phys. Med. Biol. 54 6287-97). Dosimetric evaluations against Monte Carlo dose calculations are conducted on ten IMRT treatment plans (five head-and-neck cases and five lung cases). For all cases, there is improvement with the 3D-density correction over the conventional FSPB algorithm, and for most cases the improvement is significant. Regarding efficiency, because of the appropriate arrangement of memory access and the usage of GPU intrinsic functions, the dose calculation for an IMRT plan can be accomplished well within 1 s (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, this new algorithm, though slightly sacrificing computational efficiency (∼5-15% lower), has significantly improved the dose calculation accuracy, making it more suitable for online IMRT replanning.

  7. Monte Carlo simulation and scatter correction of the GE Advance PET scanner with SimSET and Geant4

    International Nuclear Information System (INIS)

    Barret, Olivier; Carpenter, T Adrian; Clark, John C; Ansorge, Richard E; Fryer, Tim D

    2005-01-01

For Monte Carlo simulations to be used as an alternative solution to perform scatter correction, accurate modelling of the scanner as well as speed is paramount. General-purpose Monte Carlo packages (Geant4, EGS, MCNP) allow a detailed description of the scanner but are not efficient at simulating voxel-based geometries (patient images). On the other hand, dedicated codes (SimSET, PETSIM) perform well for voxel-based objects but poorly when simulating complex geometries such as a PET scanner. The approach adopted in this work was to couple a dedicated code (SimSET) with a general-purpose package (Geant4) to combine the efficiency of the former with the capabilities of the latter. The combined SimSET+Geant4 code (SimG4) was assessed on the GE Advance PET scanner and compared to the use of SimSET only. A better description of the resolution and sensitivity of the scanner and of the scatter fraction was obtained with SimG4. The accuracy of scatter correction performed with SimG4 and SimSET was also assessed from data acquired with the 20 cm NEMA phantom. SimG4 was found to outperform SimSET and to give slightly better results than the GE scatter correction methods installed on the Advance scanner (curve fitting and scatter modelling for the 300-650 keV and 375-650 keV energy windows, respectively). In the presence of a hot source close to the edge of the field of view (as found in oxygen scans), the GE curve-fitting method was found to fail, whereas SimG4 maintained its performance.

  8. Relativistic corrections to the elastic electron scattering from 208Pb

    International Nuclear Information System (INIS)

    Chandra, H.; Sauer, G.

    1976-01-01

In the present work we have calculated the differential cross sections for elastic electron scattering from 208Pb using the charge distributions resulting from various corrections. The point proton and neutron mass distributions have been calculated from the spherical wave functions for 208Pb obtained by Kolb et al. The relativistic correction to the nuclear charge distribution coming from the electromagnetic structure of the nucleon has been incorporated by assuming a linear superposition of Gaussian shapes for the proton and neutron charge form factors. Results of this calculation are quite similar to an earlier calculation by Bertozzi et al., who used a different wave function for 208Pb and assumed exponential smearing for the proton corresponding to the dipole fit for the form factor. Also in the present work, the reason for the small spin-orbit contribution to the effective charge distribution is discussed in some detail. It is also shown that the use of a single Gaussian shape for the proton smearing usually underestimates the actual theoretical cross section.

  9. Three-dimensional ophthalmic optical coherence tomography with a refraction correction algorithm

    Science.gov (United States)

    Zawadzki, Robert J.; Leisser, Christoph; Leitgeb, Rainer; Pircher, Michael; Fercher, Adolf F.

    2003-10-01

We built an optical coherence tomography (OCT) system with a rapid scanning optical delay (RSOD) line, which allows probing the full axial eye length. The system produces three-dimensional (3D) data sets that are used to generate 3D tomograms of the model eye. The raw tomographic data were processed by an algorithm based on Snell's law to correct the interface positions. The Zernike polynomial representation of the interfaces allows quantitative wave aberration measurements. 3D images of our results are presented to illustrate the capabilities of the system and the algorithm performance. The system allows us to measure intra-ocular distances.
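
    The refraction correction rests on applying Snell's law at each detected interface. As a minimal Python illustration (our own sketch, not the authors' code), the vector form of Snell's law bends a unit ray direction d at a surface with unit normal n:

        import numpy as np

        def refract(d, n, n1, n2):
            """Vector form of Snell's law: refract unit direction d at a surface
            with unit normal n (oriented against d), going from index n1 to n2."""
            eta = n1 / n2
            cos_i = -np.dot(n, d)
            sin2_t = eta**2 * (1.0 - cos_i**2)
            if sin2_t > 1.0:
                raise ValueError("total internal reflection")
            return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

    Applied ray by ray along each A-scan, such a routine shifts the apparent positions of deeper interfaces before any surface fit is performed.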

10. Effect of scatter correction on the compartmental measurement of striatal and extrastriatal dopamine D2 receptors using [123I]epidepride SPET

    Energy Technology Data Exchange (ETDEWEB)

    Fujita, Masahiro; Seneca, Nicholas; Innis, Robert B. [Department of Psychiatry, Yale University School of Medicine and VA Connecticut Healthcare System, West Haven, CT (United States); Molecular Imaging Branch, National Institute of Mental Health, Bethesda, MD (United States); Varrone, Andrea [Department of Psychiatry, Yale University School of Medicine and VA Connecticut Healthcare System, West Haven, CT (United States); Biostructure and Bioimaging Institute, National Research Council, Napoli (Italy); Kim, Kyeong Min; Watabe, Hiroshi; Iida, Hidehiro [Department of Investigative Radiology, National Cardiovascular Center Research Institute, Osaka (Japan); Zoghbi, Sami S. [Department of Psychiatry, Yale University School of Medicine and VA Connecticut Healthcare System, West Haven, CT (United States); Molecular Imaging Branch, National Institute of Mental Health, Bethesda, MD (United States); Department of Radiology, Yale University School of Medicine and VA Connecticut Healthcare System, West Haven, CT (United States); Tipre, Dnyanesh [Molecular Imaging Branch, National Institute of Mental Health, Bethesda, MD (United States); Seibyl, John P. [Institute for Neurodegenerative Disorders, New Haven, CT (United States)

    2004-05-01

Prior studies with anthropomorphic phantoms and single, static in vivo brain images have demonstrated that scatter correction significantly improves the accuracy of regional quantitation of single-photon emission tomography (SPET) brain images. Since the regional distribution of activity changes following a bolus injection of a typical neuroreceptor ligand, we examined the effect of scatter correction on the compartmental modeling of serial dynamic images of striatal and extrastriatal dopamine D2 receptors using [123I]epidepride. Eight healthy human subjects [age 30±8 (range 22-46) years] participated in a study with a bolus injection of 373±12 (354-389) MBq [123I]epidepride and data acquisition over a period of 14 h. A transmission scan was obtained in each study for attenuation and scatter correction. Distribution volumes were calculated by means of compartmental nonlinear least-squares analysis using a metabolite-corrected arterial input function and brain data processed with scatter correction using narrow-beam geometry μ (SC) and without scatter correction using broad-beam μ (NoSC). The effects of SC were markedly different among brain regions. SC increased activities in the putamen and thalamus after 1-1.5 h, while it decreased activity during the entire experiment in the temporal cortex and cerebellum. Compared with NoSC, SC significantly increased the specific distribution volume in the putamen (58%, P=0.0001) and thalamus (23%, P=0.0297). Compared with NoSC, SC made the regional distribution of the specific distribution volume closer to that of [18F]fallypride. It is concluded that SC is required for accurate quantification of distribution volumes of receptor ligands in SPET studies. (orig.)

  11. Comparison of two heterogeneity correction algorithms in pituitary gland treatments with intensity-modulated radiation therapy

    International Nuclear Information System (INIS)

    Albino, Lucas D.; Santos, Gabriela R.; Ribeiro, Victor A.B.; Rodrigues, Laura N.; Weltman, Eduardo; Braga, Henrique F.

    2013-01-01

The dose accuracy calculated by a treatment planning system is directly related to the chosen algorithm. Nowadays, several dose calculation algorithms are commercially available; they differ in calculation time and accuracy, especially when individual tissue densities are taken into account. The aim of this study was to compare two different calculation algorithms from iPlan®, BrainLAB, in the treatment of pituitary gland tumors with intensity-modulated radiation therapy (IMRT). These tumors are located in a region with tissues of variable electronic density. The deviations from the plan with no heterogeneity correction were evaluated. For initial validation of the data entered into the planning system, an IMRT plan was simulated in an anthropomorphic phantom and the dose distribution was measured with a radiochromic film. Gamma analysis was performed on the film, comparing it with dose distributions calculated with the X-ray Voxel Monte Carlo (XVMC) algorithm and pencil beam convolution (PBC). Next, 33 patient plans, initially calculated by the PBC algorithm, were recalculated with the XVMC algorithm. The treatment volumes and organs-at-risk dose-volume histograms were compared. No relevant differences were found in dose-volume histograms between XVMC and PBC. However, differences were obtained when comparing each plan with the plan without heterogeneity correction. (author)

  12. An algorithm developed in Matlab for the automatic selection of cut-off frequencies, in the correction of strong motion data

    Science.gov (United States)

    Sakkas, Georgios; Sakellariou, Nikolaos

    2018-05-01

Strong motion recordings are the key input in many earthquake engineering applications and are also fundamental for seismic design. The present study focuses on the automated correction of accelerograms, analog and digital. The main feature of the proposed algorithm is the automatic selection of the cut-off frequencies based on a minimum spectral value in a predefined frequency bandwidth, instead of the typical signal-to-noise approach. The algorithm follows the basic steps of the correction procedure (instrument correction, baseline correction and appropriate filtering). Besides the corrected time histories, Peak Ground Acceleration, Peak Ground Velocity and Peak Ground Displacement values are calculated, as well as the corrected Fourier spectra and the response spectra. The algorithm is written in the Matlab environment, is fast, and can be used for batch processing or in real-time applications. In addition, the option of applying a signal-to-noise-ratio criterion is included, as is the choice of causal or acausal filtering. The algorithm has been tested on analog and digital accelerograms from significant earthquakes of the Greek territory (Kozani-Grevena 1995, Aigio 1995, Athens 1999, Lefkada 2003 and Kefalonia 2014).

  13. Nuclear corrections in neutrino deep inelastic scattering and the extraction of the strange quark distribution

    International Nuclear Information System (INIS)

    Boros, C.

    1999-01-01

Recent measurements of the structure function F2ν in neutrino deep inelastic scattering allow us to compare structure functions measured in neutrino and charged-lepton scattering for the first time with reasonable precision. The comparison between neutrino and muon structure functions made by the CCFR Collaboration indicates that there is a discrepancy between these structure functions at small Bjorken x values. In this talk I examine two effects which might account for this experimental discrepancy: nuclear shadowing corrections for neutrinos and contributions from strange and anti-strange quarks. Copyright (1999) World Scientific Publishing Co. Pte. Ltd.

  14. Nonuniform Illumination Correction Algorithm for Underwater Images Using Maximum Likelihood Estimation Method

    Directory of Open Access Journals (Sweden)

    Sonali Sachin Sankpal

    2016-01-01

Scattering and absorption of light are the main reasons for limited visibility in water; both are caused by the suspended particles and dissolved chemical compounds in the water. The limited visibility results in degradation of underwater images. Visibility can be increased by using an artificial light source in the underwater imaging system, but artificial light illuminates the scene nonuniformly, producing a bright spot at the center with a dark surrounding region. In some cases the imaging system itself creates dark regions in the image by casting shadows on the objects. The problem of nonuniform illumination is neglected in most underwater image enhancement techniques, and very few methods report results on color images. This paper suggests a method for nonuniform illumination correction of underwater images. The method assumes that natural underwater images are Rayleigh distributed and uses maximum likelihood estimation of the scale parameter to map the image distribution to a Rayleigh distribution. The method is compared with traditional nonuniform illumination correction methods using no-reference image quality metrics such as average luminance, average information entropy, normalized neighborhood function, average contrast, and a comprehensive assessment function.
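
    For a Rayleigh-distributed sample x_1..x_N, the maximum likelihood estimate of the scale parameter has the closed form σ̂² = Σ x_i² / (2N). A minimal Python sketch of that estimator (the function name and the per-block usage noted below are our assumptions about how such a scheme would be organized):

        import numpy as np

        def rayleigh_scale_mle(pixels):
            """MLE of the Rayleigh scale parameter: sigma^2 = sum(x^2) / (2N)."""
            x = np.asarray(pixels, dtype=float).ravel()
            return np.sqrt(np.mean(x**2) / 2.0)

    Estimating σ locally and rescaling each neighbourhood toward a common Rayleigh distribution is one way to flatten the bright-center/dark-edge illumination pattern described above.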

  15. Monte Carlo simulation of scatter in non-uniform symmetrical attenuating media for point and distributed sources

    International Nuclear Information System (INIS)

    Henry, L.J.; Rosenthal, M.S.

    1992-01-01

We report results of scatter simulations for both point and distributed sources of 99mTc in symmetrical non-uniform attenuating media. The simulations utilized Monte Carlo techniques and were tested against experimental phantoms. Both point and ring sources were used inside a 10.5 cm radius acrylic phantom. Attenuating media consisted of combinations of water, ground beef (to simulate muscle mass), air and bone meal (to simulate bone mass). We estimated/measured energy spectra, detector efficiencies and peak height ratios for all cases. In all cases, the simulated spectra agree with the experimentally measured spectra within 2 SD. Detector efficiencies and peak height ratios are also in agreement. The Monte Carlo code is able to properly model the non-uniform attenuating media used in this project. With verification of the simulations, it is possible to perform initial evaluation studies of scatter correction algorithms by evaluating the mechanisms of action of the correction algorithm on simulated spectra for which the magnitude and sources of scatter are known. (author)

  16. NNLO leptonic and hadronic corrections to Bhabha scattering and luminosity monitoring at meson factories

    Energy Technology Data Exchange (ETDEWEB)

    Carloni Calame, C. [Southampton Univ. (United Kingdom). School of Physics; Czyz, H.; Gluza, J.; Gunia, M. [Silesia Univ., Katowice (Poland). Dept. of Field Theory and Particle Physics; Montagna, G. [Pavia Univ. (Italy). Dipt. di Fisica Nucleare e Teorica; INFN, Sezione di Pavia (Italy); Nicrosini, O.; Piccinini, F. [INFN, Sezione di Pavia (Italy); Riemann, T. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Worek, M. [Wuppertal Univ. (Germany). Fachbereich C Physik

    2011-07-15

Virtual fermionic Nf=1 and Nf=2 contributions to Bhabha scattering are combined with realistic real corrections at next-to-next-to-leading order in QED. The virtual corrections are determined by the package BHANNLOHF, and the real corrections with the Monte Carlo generators BHAGEN-1PH, HELAC-PHEGAS and EKHARA. Numerical results are discussed at the energies of, and with realistic cuts used at, the φ factory DAΦNE, the B factories PEP-II and KEK, and the charm/τ factory BEPC II. We compare these complete calculations with the approximate ones realized in the generator BabaYaga@NLO, used at meson factories to evaluate their luminosities. For realistic reference event selections we find agreement for the NNLO leptonic and hadronic corrections within 0.07% or better and conclude that they are well accounted for in the generator, by comparison with the present experimental accuracy. (orig.)

  17. A scattering-based over-land rainfall retrieval algorithm for South Korea using GCOM-W1/AMSR-2 data

    Science.gov (United States)

    Kwon, Young-Joo; Shin, Hayan; Ban, Hyunju; Lee, Yang-Won; Park, Kyung-Ae; Cho, Jaeil; Park, No-Wook; Hong, Sungwook

    2017-08-01

Heavy summer rainfall is a primary natural disaster affecting lives and property on the Korean Peninsula. This study presents a satellite-based rainfall rate retrieval algorithm for South Korea combining polarization-corrected temperature (PCT) and scattering index (SI) data from the 36.5 and 89.0 GHz channels of the Advanced Microwave Scanning Radiometer 2 (AMSR-2) onboard the Global Change Observation Mission (GCOM)-W1 satellite. The coefficients for the algorithm were obtained from spatially and temporally collocated data from the AMSR-2 and ground-based automatic weather station rain gauges from 1 July to 30 August of the years 2012-2015. There were time delays of about 25 minutes between the AMSR-2 observations and the ground rain-gauge measurements. A new linearly combined rainfall retrieval algorithm focused on heavy rain, based on the PCT and SI, was validated using ground-based rainfall observations for South Korea from 1 July to 30 August 2016. The presented PCT and SI methods showed slightly improved results for rainfall > 5 mm h-1 compared to the current AMSR-2 level 2 data. The best bias and root mean square error (RMSE) for the PCT method at AMSR-2 36.5 GHz were 2.09 mm h-1 and 7.29 mm h-1, respectively, while the current official AMSR-2 rainfall rates show a larger bias and RMSE (4.80 mm h-1 and 9.35 mm h-1, respectively). This study provides a scattering-based over-land rainfall retrieval algorithm for South Korea, which is affected by stationary-front rain and typhoons, combining the advantages of the previous PCT and SI methods so as to be applicable to a variety of spaceborne passive microwave radiometers.
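
    For reference, the polarization-corrected temperature combines the vertically and horizontally polarized brightness temperatures to suppress surface emissivity differences. The Python sketch below uses the classic 85-GHz coefficient of Spencer et al. purely for illustration; the study derives its own regression coefficients for the AMSR-2 36.5 and 89.0 GHz channels.

        def pct(tb_v, tb_h, omega=0.818):
            """Polarization-corrected temperature:
            PCT = (1 + omega) * TbV - omega * TbH.
            omega = 0.818 is the classic 85-GHz value, assumed here for illustration."""
            return (1.0 + omega) * tb_v - omega * tb_h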

18. Spectral-ratio radon background correction method in airborne γ-ray spectrometry based on Compton scattering deduction

    International Nuclear Information System (INIS)

    Gu Yi; Xiong Shengqing; Zhou Jianxin; Fan Zhengguo; Ge Liangquan

    2014-01-01

γ-rays released by radon daughters have a severe impact on airborne γ-ray spectrometry. The spectral-ratio method is one of the best mathematical methods for radon background deduction in airborne γ-ray spectrometry. In this paper, an advanced spectral-ratio method is proposed which deducts the Compton-scattered rays by the fast Fourier transform rather than by stripping ratios. The relationship between survey height and the correction coefficient of the advanced spectral-ratio radon background correction method was studied, the corresponding mathematical model was established, and a ground saturation-model calibration technique for the correction coefficient was proposed. The advanced spectral-ratio method improves applicability and correction efficiency and reduces application cost. Furthermore, it avoids the loss of physical meaning and the possible errors caused by the matrix computation and the spectrum-shape-based mathematical fitting used to derive traditional correction coefficients. (authors)

  19. A Robust In-Situ Warp-Correction Algorithm For VISAR Streak Camera Data at the National Ignition Facility

    International Nuclear Information System (INIS)

    Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.

    2015-01-01

The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.

  20. A Robust In-Situ Warp-Correction Algorithm For VISAR Streak Camera Data at the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Labaria, George R. [Univ. of California, Santa Cruz, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Warrick, Abbie L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Celliers, Peter M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kalantar, Daniel H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-01-12

The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
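
    A thin-plate-spline warp of the kind described in these two records can be sketched with standard tools. The Python illustration below (hypothetical control points and helper names; not the NIF production code) fits a TPS mapping from corrected to raw coordinates using comb-pulse positions and resamples the streak image accordingly.

        import numpy as np
        from scipy.interpolate import RBFInterpolator
        from scipy.ndimage import map_coordinates

        # Hypothetical control points: where comb pulses should appear (dst)
        # and where they are actually observed in the raw streak image (src).
        dst = np.array([[10., 10.], [10., 120.], [10., 240.],
                        [200., 10.], [200., 120.], [200., 240.]])
        src = dst + np.array([[1.5, -0.8], [0.7, 0.2], [1.1, 0.9],
                              [-0.4, 1.3], [0.5, -0.6], [1.0, 0.4]])

        # TPS model mapping corrected (row, col) coordinates to raw coordinates.
        tps = RBFInterpolator(dst, src, kernel="thin_plate_spline")

        def unwarp(image):
            """Resample the raw image onto the corrected coordinate grid."""
            h, w = image.shape
            grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                            axis=-1).reshape(-1, 2).astype(float)
            coords = tps(grid).T.reshape(2, h, w)
            return map_coordinates(image, coords, order=1)

        corrected = unwarp(np.random.rand(256, 256))  # stand-in for a streak image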

  1. Four-Component Scattering Power Decomposition Algorithm with Rotation of Covariance Matrix Using ALOS-PALSAR Polarimetric Data

    Directory of Open Access Journals (Sweden)

    Yasuhiro Nakamura

    2012-07-01

The present study introduces the four-component scattering power decomposition (4-CSPD) algorithm with rotation of the covariance matrix, and presents an experimental proof of the equivalence between the 4-CSPD algorithms based on rotation of the covariance matrix and of the coherency matrix. From a theoretical point of view, the 4-CSPD algorithms with rotation of the two matrices are identical. Although this seems obvious, no experimental evidence has yet been presented. In this paper, using polarimetric synthetic aperture radar (POLSAR) data acquired by the Phased Array L-band SAR (PALSAR) on board the Advanced Land Observing Satellite (ALOS), an experimental proof is presented to show that both algorithms indeed produce identical results.

2. A method of measuring and correcting tilt of anti-vibration wind turbines based on a screening algorithm

    Science.gov (United States)

    Xiao, Zhongxiu

    2018-04-01

A method of measuring and correcting the tilt of anti-vibration wind turbines based on a screening algorithm is proposed in this paper. First of all, we design a device whose core is the ADXL203 acceleration sensor; the inclination is measured by installing the device on the tower and the nacelle of the wind turbine. Next, a Kalman filter algorithm is used to filter the signal effectively by establishing a state-space model for the signal and noise, and MATLAB is used for simulation. Considering the impact of tower and nacelle vibration on the collected data, the original data and the filtered data are classified and stored by the screening algorithm, and the filtered data are then filtered again to make the output data more accurate. Finally, installation errors are eliminated algorithmically to achieve the tilt correction. A device based on this method has the advantages of high precision, low cost and vibration resistance, and it has a wide range of applications and promotion value.
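
    The Kalman filtering step can be illustrated with a scalar filter. The Python sketch below (our own naming) assumes a random-walk state model for the slowly varying tilt angle and hypothetical process/measurement noise variances q and r:

        def kalman_1d(measurements, q=1e-5, r=1e-2, x0=0.0, p0=1.0):
            """Minimal scalar Kalman filter for a slowly varying tilt angle."""
            x, p = x0, p0
            filtered = []
            for z in measurements:
                p += q                    # predict: random-walk state model
                k = p / (p + r)           # Kalman gain
                x += k * (z - x)          # update with the accelerometer reading
                p *= 1.0 - k
                filtered.append(x)
            return filtered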

  3. Monte Carlo and experimental evaluation of accuracy and noise properties of two scatter correction methods for SPECT

    International Nuclear Information System (INIS)

    Narita, Y.; Eberl, S.; Bautovich, G.; Iida, H.; Hutton, B.F.; Braun, M.; Nakamura, T.

    1996-01-01

Scatter correction is a prerequisite for quantitative SPECT, but potentially increases noise. Monte Carlo simulations (EGS4) and physical phantom measurements were used to compare the accuracy and noise properties of two scatter correction techniques: the triple-energy window (TEW) and the transmission-dependent convolution subtraction (TDCS) techniques. Two scatter functions were investigated for TDCS: (i) the originally proposed mono-exponential function (TDCSmono) and (ii) an exponential plus Gaussian scatter function (TDCSGauss) demonstrated to be superior by our Monte Carlo simulations. Signal-to-noise ratio (S/N) and accuracy were investigated in cylindrical phantoms and a chest phantom. Results from each method were compared to the true primary counts (simulations) or known activity concentrations (phantom studies). 99mTc was used in all cases. The optimized TDCSGauss method performed best overall, with an accuracy of better than 4% for all simulations and physical phantom studies. Maximum errors for TEW and TDCSmono of -30% and -22%, respectively, were observed in the heart chamber of the simulated chest phantom. TEW had the worst S/N ratio of the three techniques. The S/N ratios of the two TDCS methods were similar and only slightly lower than those of simulated true primary data. Thus, accurate quantitation can be obtained with TDCSGauss, with a relatively small reduction in S/N ratio. (author)
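
    For comparison, the TEW estimate used as one of the two techniques here has a simple closed form: the scatter in the photopeak window is approximated by the trapezoid spanned by two narrow flanking windows (Ogawa's method). A minimal Python sketch, with window widths in keV (the defaults are typical 99mTc values, assumed for illustration):

        def tew_primary(c_peak, c_low, c_up, w_peak=28.0, w_low=3.0, w_up=3.0):
            """Triple-energy-window (TEW) correction for one projection bin:
            scatter ~ (Cl/Wl + Cu/Wu) * Wp / 2, primary = Cpeak - scatter."""
            scatter = (c_low / w_low + c_up / w_up) * w_peak / 2.0
            return max(c_peak - scatter, 0.0)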

4. NNLO massive corrections to Bhabha scattering and theoretical precision of BabaYaga@NLO

    International Nuclear Information System (INIS)

    Carloni Calame, C.M.; Nicrosini, O.; Piccinini, F.; Riemann, T.; Worek, M.

    2011-12-01

We provide an exact calculation of next-to-next-to-leading order (NNLO) massive corrections to Bhabha scattering in QED, relevant for precision luminosity monitoring at meson factories. Using realistic reference event selections, exact numerical results for leptonic and hadronic corrections are given and compared with the corresponding approximate predictions of the event generator BabaYaga@NLO. It is shown that the NNLO massive corrections are necessary for luminosity measurements with per mille precision. At the same time they are found to be well accounted for in the generator at an accuracy level below one per mille. An update of the total theoretical precision of BabaYaga@NLO is presented and possible directions for a further error reduction are sketched. (orig.)

  5. Atmospheric and Radiometric Correction Algorithms for the Multitemporal Assessment of Grasslands Productivity

    Directory of Open Access Journals (Sweden)

    Jesús A. Prieto-Amparan

    2018-02-01

A key step in the processing of satellite imagery is the radiometric correction of images to account for reflectance that water vapor, atmospheric dust, and other atmospheric elements add to the images, causing imprecisions in variables of interest estimated at the earth's surface level. That issue is important when performing spatiotemporal analyses to determine ecosystems' productivity. In this study, three correction methods were applied to satellite images for the period 2010-2014. These methods were Atmospheric Correction for Flat Terrain 2 (ATCOR2), Fast Line-of-Sight Atmospheric Analysis of Spectral Hypercubes (FLAASH), and Dark Object Subtraction 1 (DOS1). The images included 12 sub-scenes from the Landsat Thematic Mapper (TM) and Operational Land Imager (OLI) sensors. The images corresponded to three Permanent Monitoring Sites (PMS) of grasslands, 'Teseachi', 'Eden', and 'El Sitio', located in the state of Chihuahua, Mexico. After the corrections were applied to the images, they were evaluated in terms of their precision for biomass estimation. For that, biomass production was measured during the study period at the three PMS to calibrate production models developed with simple and multiple linear regression (SLR and MLR) techniques. When the estimations were made with MLR, DOS1 obtained an R2 of 0.97 (p < 0.05) for 2012 and values greater than 0.70 (p < 0.05) during 2013-2014. The rest of the algorithms did not show significant results, and DOS1, the simplest algorithm, proved the best biomass estimator. Thus, in the multitemporal analysis of grasslands based on spectral information, it is not necessary to apply complex correction procedures. The maps of biomass production, elaborated from images corrected with DOS1, can be used as a reference point for the assessment of grassland condition, as well as to determine the grazing capacity and thus the potential animal production in such ecosystems.
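
    DOS1, the simplest and here the best-performing method, amounts to estimating the atmospheric path radiance from the darkest pixels of each band and subtracting it. A minimal per-band sketch in Python (the percentile guard against dead pixels is our own assumption, not part of the original formulation):

        import numpy as np

        def dos1(band, percentile=0.01):
            """Dark Object Subtraction: treat the darkest observed pixels as pure
            atmospheric path radiance and subtract that value from the band."""
            dark = np.percentile(band[band > 0], percentile)
            return np.clip(band - dark, 0, None)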

  6. Solving large instances of the quadratic cost of partition problem on dense graphs by data correcting algorithms

    NARCIS (Netherlands)

    Goldengorin, Boris; Vink, Marius de

    1999-01-01

The Data-Correcting Algorithm (DCA) corrects the data of a hard problem instance in such a way that we obtain an instance of a well-solvable special case. For a given prescribed accuracy of the solution, the DCA uses a branch-and-bound scheme to make sure that the solution of the corrected instance stays within the prescribed accuracy of an optimal solution to the original instance.

  7. Assessment of the scatter correction procedures in single photon emission computed tomography imaging using simulation and clinical study

    Directory of Open Access Journals (Sweden)

    Mehravar Rafati

    2017-01-01

Conclusion: The simulation and clinical studies showed that the new approach can achieve better scatter-correction performance than the DEW and TEW methods, according to the contrast and SNR values.

  8. An improved cone-beam filtered backprojection reconstruction algorithm based on x-ray angular correction and multiresolution analysis

    International Nuclear Information System (INIS)

    Sun, Y.; Hou, Y.; Yan, Y.

    2004-01-01

With the extensive application of industrial computed tomography in the field of non-destructive testing, how to improve the quality of the reconstructed image is receiving more and more attention. It is well known that in existing cone-beam filtered backprojection reconstruction algorithms the cone angle is controlled within a narrow range. The reason for this limitation is the incompleteness of the projection data when the cone angle increases, which in turn limits the size of the tested workpiece. Considering the characteristics of the X-ray cone angle, an improved cone-beam filtered backprojection reconstruction algorithm taking account of angular correction is proposed in this paper. The aim of our algorithm is to correct the cone-angle effect resulting from the incompleteness of projection data in the conventional algorithm. The basis of the correction is the angular relationship among the X-ray source, the tested workpiece and the detector. Thus the cone angle is not strictly limited, and the algorithm may be used to inspect larger workpieces. Furthermore, an adaptive wavelet filter is used for multiresolution analysis, which can modify the wavelet decomposition series adaptively according to the resolution demanded of the local reconstructed area. Therefore the computation and the time of reconstruction can be reduced, and the quality of the reconstructed image can also be improved. (author)

  9. Detector normalization and scatter correction for the jPET-D4: A 4-layer depth-of-interaction PET scanner

    Energy Technology Data Exchange (ETDEWEB)

    Kitamura, Keishi [Shimadzu Corporation, 1 Nishinokyo-Kuwabaracho, Nakagyo-ku, Kyoto-shi, Kyoto 604-8511 (Japan)]. E-mail: kitam@shimadzu.co.jp; Ishikawa, Akihiro [Shimadzu Corporation, 1 Nishinokyo-Kuwabaracho, Nakagyo-ku, Kyoto-shi, Kyoto 604-8511 (Japan); Mizuta, Tetsuro [Shimadzu Corporation, 1 Nishinokyo-Kuwabaracho, Nakagyo-ku, Kyoto-shi, Kyoto 604-8511 (Japan); Yamaya, Taiga [National Institute of Radiological Sciences, 9-1 Anagawa-4, Inage-ku, Chiba-shi, Chiba 263-8555 (Japan); Yoshida, Eiji [National Institute of Radiological Sciences, 9-1 Anagawa-4, Inage-ku, Chiba-shi, Chiba 263-8555 (Japan); Murayama, Hideo [National Institute of Radiological Sciences, 9-1 Anagawa-4, Inage-ku, Chiba-shi, Chiba 263-8555 (Japan)

    2007-02-01

The jPET-D4 is a brain positron emission tomography (PET) scanner composed of 4-layer depth-of-interaction (DOI) detectors with a large number of GSO crystals, which achieves both high spatial resolution and high scanner sensitivity. Since the sensitivity of each crystal element is highly dependent on DOI layer depth and incident γ-ray energy, it is difficult to estimate normalization factors and scatter components with high statistical accuracy. In this work, we implemented a hybrid scatter correction method combined with component-based normalization, which estimates scatter components from a dual-energy acquisition using a convolution-subtraction method to estimate trues from an upper energy window. In order to reduce statistical noise in sinograms, the implemented scheme uses the DOI compression (DOIC) method, which combines deep pairs of DOI layers into the nearest shallow pairs of DOI layers with natural detector samplings. Since the compressed data preserve the block detector configuration, as if the data were acquired using 'virtual' detectors with high γ-ray stopping power, these correction methods can be applied directly to DOIC sinograms. The proposed method provides high-quality corrected images with low statistical noise, even for a multi-layer DOI-PET.

  10. Research of beam hardening correction method for CL system based on SART algorithm

    International Nuclear Information System (INIS)

    Cao Daquan; Wang Yaxiao; Que Jiemin; Sun Cuili; Wei Cunfeng; Wei Long

    2014-01-01

Computed laminography (CL) is a non-destructive testing technique for large objects, especially planar objects. Beam hardening artifacts are widely observed in CL systems and significantly reduce image quality. This study proposed a novel simultaneous algebraic reconstruction technique (SART) based beam hardening correction (BHC) method for the CL system, namely the SART-BHC algorithm in short. The SART-BHC algorithm takes the polychromatic attenuation process into account in formulating the iterative reconstruction update. A novel projection matrix calculation method, different from the conventional cone-beam or fan-beam geometry, was also studied for the CL system. The proposed method was evaluated with simulation data and experimental data, generated using the Monte Carlo simulation toolkit Geant4 and a bench-top CL system, respectively. All projection data were reconstructed with the SART-BHC algorithm and the standard filtered back projection (FBP) algorithm. The reconstructed images show that beam hardening artifacts are greatly reduced with the SART-BHC algorithm compared to the FBP algorithm. The SART-BHC algorithm does not need any prior knowledge about the object or the X-ray spectrum, and it can also mitigate the interlayer aliasing. (authors)
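
    The SART core that SART-BHC builds on can be written compactly for a dense system matrix A (rays × voxels). This Python sketch shows only the monochromatic SART update; the beam-hardening correction of the paper would replace the simple linear residual b - Ax with the residual of a polychromatic forward model.

        import numpy as np

        def sart(A, b, n_iter=20, relax=1.0):
            """Plain SART iterations for A x = b with row/column-sum normalization."""
            row_sum = A.sum(axis=1)
            row_sum[row_sum == 0] = 1.0
            col_sum = A.sum(axis=0)
            col_sum[col_sum == 0] = 1.0
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x += relax * (A.T @ ((b - A @ x) / row_sum)) / col_sum
            return x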

  11. Dosimetric evaluation of the impacts of different heterogeneity correction algorithms on target doses in stereotactic body radiation therapy for lung tumors

    International Nuclear Information System (INIS)

    Narabayashi, Masaru; Mizowaki, Takashi; Matsuo, Yukinori; Nakamura, Mitsuhiro; Takayama, Kenji; Norihisa, Yoshiki; Sakanaka, Katsuyuki; Hiraoka, Masahiro

    2012-01-01

    Heterogeneity correction algorithms can have a large impact on the dose distributions of stereotactic body radiation therapy (SBRT) for lung tumors. Treatment plans of 20 patients who underwent SBRT for lung tumors with the prescribed dose of 48 Gy in four fractions at the isocenter were reviewed retrospectively and recalculated with different heterogeneity correction algorithms: the pencil beam convolution algorithm with a Batho power-law correction (BPL) in Eclipse, the radiological path length algorithm (RPL), and the X-ray Voxel Monte Carlo algorithm (XVMC) in iPlan. The doses at the periphery (minimum dose and D95) of the planning target volume (PTV) were compared using the same monitor units among the three heterogeneity correction algorithms, and the monitor units were compared between two methods of dose prescription, that is, an isocenter dose prescription (IC prescription) and dose-volume based prescription (D95 prescription). Mean values of the dose at the periphery of the PTV were significantly lower with XVMC than with BPL using the same monitor units (P<0.001). In addition, under IC prescription using BPL, RPL and XVMC, the ratios of mean values of monitor units were 1, 0.959 and 0.986, respectively. Under D95 prescription, they were 1, 0.937 and 1.088, respectively. These observations indicated that the application of XVMC under D95 prescription results in an increase in the actually delivered dose by 8.8% on average compared with the application of BPL. The appropriateness of switching heterogeneity correction algorithms and dose prescription methods should be carefully validated from a clinical viewpoint. (author)

  12. Simulation of small-angle scattering patterns using a CPU-efficient algorithm

    Science.gov (United States)

    Anitas, E. M.

    2017-12-01

Small-angle scattering (of neutrons, x-rays or light; SAS) is a well-established experimental technique for structural analysis of disordered systems at the nano and micro scales. For complex systems, such as super-molecular assemblies or protein molecules, analytic solutions for the SAS intensity are generally not available. Thus, a frequent approach to simulate the corresponding patterns is to use a CPU-efficient version of the Debye formula. For this purpose, in this paper we implement the well-known DALAI algorithm in Mathematica software. We present calculations for a series of 2D Sierpinski gaskets and of pentaflakes obtained from chaos game representation.
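
    The Debye formula underlying such simulations sums pair contributions, I(q) = Σ_i Σ_j f_i f_j sin(q r_ij)/(q r_ij). A direct Python sketch for identical scatterers follows; its O(N²) cost per q value is exactly what CPU-efficient variants such as DALAI reduce.

        import numpy as np

        def debye_intensity(coords, q_values, f=1.0):
            """Direct Debye sum for N identical point scatterers at positions
            coords (N, 3): I(q) = f^2 * sum_ij sin(q r_ij) / (q r_ij)."""
            r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            # np.sinc(x) = sin(pi x)/(pi x), so sinc(q r / pi) = sin(q r)/(q r),
            # with the diagonal (r = 0) terms correctly equal to 1.
            return np.array([f**2 * np.sinc(q * r / np.pi).sum() for q in q_values])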

  13. Attenuation and scatter correction in SPECT

    International Nuclear Information System (INIS)

    Pant, G.S.; Pandey, A.K.

    2000-01-01

While passing through matter, photons undergo various types of interactions. In the process, some photons are completely absorbed, some are scattered in different directions with or without any change in their energy, and some pass through unattenuated. These unattenuated photons carry the information with them; however, the image data are corrupted by the attenuation and scatter processes. This paper deals with the effect of these two processes on nuclear medicine images and suggests methods to overcome them.

  14. Analytical multiple scattering correction to the Mie theory: Application to the analysis of the lidar signal

    Science.gov (United States)

    Flesia, C.; Schwendimann, P.

    1992-01-01

The contribution of multiple scattering to the lidar signal depends on the optical depth tau. Therefore, the lidar analysis, based on the assumption that multiple scattering can be neglected, is limited to cases characterized by low values of the optical depth (tau ≤ 0.1) and hence excludes scattering from most clouds. Moreover, all inversion methods relating the lidar signal to number densities and particle sizes must be modified, since multiple scattering affects the direct analysis. The essential requirements of a realistic model for lidar measurements which includes multiple scattering and which can be applied to practical situations are as follows. (1) Required is not only a correction term or a rough approximation describing the results of a particular experiment, but a general theory of multiple scattering tying together the relevant physical parameters we seek to measure. (2) Required is an analytical generalization of the lidar equation which can be applied in the case of a realistic aerosol. A purely analytical formulation is important in order to avoid the convergence and stability problems which, in a numerical approach, arise from the large number of events that must be taken into account in the presence of large optical depth and/or strong experimental noise.

  15. Bias correction of daily satellite precipitation data using genetic algorithm

    Science.gov (United States)

    Pratama, A. W.; Buono, A.; Hidayat, R.; Harsa, H.

    2018-05-01

Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) is produced by blending the satellite-only Climate Hazards Group InfraRed Precipitation (CHIRP) with station observation data. The blending process aims to reduce the bias of CHIRP. However, the biases of CHIRPS in statistical moments and quantile values remain high during the wet season over Java Island. This paper presents a bias correction scheme that adjusts the statistical moments of CHIRP using observed precipitation data. The scheme combines a genetic algorithm with a nonlinear power transformation, and the results were evaluated across seasons and elevation levels. The experimental results revealed that the scheme robustly reduced the bias in variance (around 100% reduction) and led to reductions of the first- and second-quantile biases. However, the bias in the third quantile was reduced only during dry months. Across elevation levels, the performance of the bias correction process differs significantly only for the skewness indicator.
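
    The scheme pairs a nonlinear power transformation, P' = a·P^b, with a genetic algorithm that tunes (a, b) so the corrected series matches the statistics of the gauge record. The Python sketch below is a deliberately small evolutionary loop (selection plus Gaussian mutation only, no crossover) with an assumed two-moment objective; the paper's actual operators and objective may differ.

        import numpy as np

        rng = np.random.default_rng(42)

        def moment_error(params, sat, obs):
            """Mismatch of mean and standard deviation after P' = a * P**b."""
            a, b = params
            corrected = a * np.power(sat, b)
            return (corrected.mean() - obs.mean())**2 + (corrected.std() - obs.std())**2

        def ga_fit(sat, obs, pop_size=40, generations=200,
                   lo=(0.1, 0.1), hi=(5.0, 3.0)):
            lo, hi = np.array(lo), np.array(hi)
            pop = rng.uniform(lo, hi, size=(pop_size, 2))
            for _ in range(generations):
                errors = np.array([moment_error(p, sat, obs) for p in pop])
                elite = pop[np.argsort(errors)[: pop_size // 2]]      # selection
                children = elite[rng.integers(0, len(elite), pop_size - len(elite))]
                children = children + rng.normal(0.0, 0.05, children.shape)  # mutation
                pop = np.clip(np.vstack([elite, children]), lo, hi)
            errors = np.array([moment_error(p, sat, obs) for p in pop])
            return pop[np.argmin(errors)]

        # Toy usage with synthetic daily rainfall (mm/day):
        sat = rng.gamma(0.8, 6.0, 1000)
        obs = 1.3 * np.power(sat, 0.9)
        a, b = ga_fit(sat, obs)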

  16. Dual-energy digital mammography for calcification imaging: Scatter and nonuniformity corrections

    International Nuclear Information System (INIS)

    Kappadath, S. Cheenu; Shaw, Chris C.

    2005-01-01

Mammographic images of small calcifications, which are often the earliest signs of breast cancer, can be obscured by overlapping fibroglandular tissue. We have developed and implemented a dual-energy digital mammography (DEDM) technique for calcification imaging under full-field imaging conditions using a commercially available aSi:H/CsI:Tl flat-panel based digital mammography system. The low- and high-energy images were combined using a nonlinear mapping function to cancel the tissue structures and generate the dual-energy (DE) calcification images. The total entrance-skin exposure and mean glandular dose from the low- and high-energy images were constrained to be similar to screening-examination levels. To evaluate the DE calcification image, we designed a phantom using calcium carbonate crystals to simulate calcifications of various sizes (212-425 μm), overlaid with breast-tissue-equivalent material 5 cm thick with a continuously varying glandular-tissue ratio from 0% to 100%. We report on the effects of scattered radiation and of nonuniformity in x-ray intensity and detector response on the DE calcification images. The nonuniformity was corrected by normalizing the low- and high-energy images with full-field reference images. Correction of scatter in the low- and high-energy images significantly reduced the background signal in the DE calcification image. Under the current implementation of DEDM, utilizing the mammography system and dose level tested, calcifications in the 300-355 μm size range were clearly visible in DE calcification images. Calcification threshold sizes decreased to the 250-280 μm size range when the visibility criteria were lowered to barely visible. Calcifications smaller than ∼250 μm were usually not visible. The visibility of calcifications with our DEDM imaging technique was limited by quantum noise, not system noise.

  17. The lowest order total electromagnetic correction to the deep inelastic scattering of polarized leptons on polarized nucleons

    International Nuclear Information System (INIS)

    Shumeiko, N.M.; Timoshin, S.I.

    1991-01-01

Compact formulae for the total one-loop electromagnetic corrections, including the contribution of electromagnetic hadron effects, to the deep inelastic scattering of polarized leptons on polarized nucleons have been obtained in the quark-parton model. The cases of longitudinal and transverse nucleon polarization are considered in detail. A thorough numerical calculation of corrections to cross sections and polarization asymmetries at muon (electron) energies over the range 200-2000 GeV (10-16 GeV) has been made. It has been established that the contribution of corrections to the hadron current considerably affects the behaviour of the longitudinal asymmetry. A satisfactory agreement is found between the model calculations of corrections to the lepton current and the phenomenological calculation results, which makes it possible to find the total one-loop correction within the framework of a common approach. (Author)

  18. Scattering at low energies by potentials containing power-law corrections to the Coulomb interaction

    International Nuclear Information System (INIS)

    Kuitsinskii, A.A.

    1986-01-01

The low-energy asymptotic behavior is found for the phase shifts and scattering amplitudes in the case of central potentials which decrease at infinity as n/r + a r^(-α), with α > 1. In problems of atomic and nuclear physics one is generally interested in collisions of clusters consisting of several charged particles. The effective interaction potential of such clusters contains long-range power-law corrections to the Coulomb interaction of the type considered here.

  19. Cardiac MRI in mice at 9.4 Tesla with a transmit-receive surface coil and a cardiac-tailored intensity-correction algorithm.

    Science.gov (United States)

    Sosnovik, David E; Dai, Guangping; Nahrendorf, Matthias; Rosen, Bruce R; Seethamraju, Ravi

    2007-08-01

To evaluate the use of a transmit-receive surface (TRS) coil and a cardiac-tailored intensity-correction algorithm for cardiac MRI in mice at 9.4 Tesla (9.4T). Fast low-angle shot (FLASH) cines, with and without delays alternating with nutations for tailored excitation (DANTE) tagging, were acquired in 13 mice. An intensity-correction algorithm was developed to compensate for the sensitivity profile of the surface coil and was tailored to account for the unique distribution of noise and flow artifacts in cardiac MR images. Image quality was extremely high and allowed fine structures such as trabeculations, valve cusps, and coronary arteries to be clearly visualized. The tag lines created with the surface coil were also sharp and clearly visible. Application of the intensity-correction algorithm improved signal intensity, tissue contrast, and image quality even further. Importantly, the cardiac-tailored properties of the correction algorithm prevented noise and flow artifacts from being significantly amplified. The feasibility and value of cardiac MRI in mice with a TRS coil have been demonstrated. In addition, a cardiac-tailored intensity-correction algorithm has been developed and shown to improve image quality even further. The use of these techniques could produce significant potential benefits over a broad range of scanners, coil configurations, and field strengths. (c) 2007 Wiley-Liss, Inc.

  20. A New Adaptive Gamma Correction Based Algorithm Using DWT-SVD for Non-Contrast CT Image Enhancement.

    Science.gov (United States)

    Kallel, Fathi; Ben Hamida, Ahmed

    2017-12-01

The performance of medical image processing techniques, in particular for CT scans, is usually affected by the poor contrast quality introduced by some medical imaging devices. This suggests the use of contrast enhancement methods as a solution to adjust the intensity distribution of the dark image. In this paper, an advanced, adaptive and simple algorithm for dark medical image enhancement is proposed. The approach is principally based on adaptive gamma correction using the discrete wavelet transform with singular-value decomposition (DWT-SVD). In a first step, the technique decomposes the input medical image into four frequency sub-bands using the DWT and then estimates the singular-value matrix of the low-low (LL) sub-band image. In a second step, an enhanced LL component is generated using an adequate correction factor and inverse singular value decomposition (SVD). In a third step, for additional improvement of the LL component, the LL sub-band image obtained from the SVD enhancement stage is classified into two main classes (low contrast and moderate contrast) based on its statistical information and then processed using an adaptive dynamic gamma correction function; an adaptive gamma correction factor is calculated for each image according to its class. Finally, the resulting LL sub-band image undergoes inverse DWT together with the unprocessed low-high (LH), high-low (HL), and high-high (HH) sub-bands to generate the enhanced image. Different types of non-contrast CT medical images are considered for performance evaluation of the proposed contrast enhancement algorithm based on adaptive gamma correction using DWT-SVD (DWT-SVD-AGC). Results show that our proposed algorithm performs better than other state-of-the-art techniques.
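
    The class-dependent gamma correction at the heart of this method can be illustrated with a toy rule in Python: choose γ from the image's own statistics so that dark images are brightened more aggressively. The threshold and the γ formulas below are our own stand-ins, not the paper's.

        import numpy as np

        def adaptive_gamma(ll, low_contrast_std=0.12):
            """Toy adaptive gamma correction for a normalized [0, 1] LL sub-band.
            gamma = log(0.5)/log(mean) maps the mean intensity to mid-gray; the
            moderate-contrast class is corrected more gently."""
            x = np.clip(ll, 1e-6, 1.0)
            gamma = np.log(0.5) / np.log(np.clip(x.mean(), 1e-6, 1.0 - 1e-6))
            if x.std() >= low_contrast_std:          # "moderate contrast" class
                gamma = 0.5 * (gamma + 1.0)          # pull gamma toward identity
            return np.power(x, gamma)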

  1. An electron tomography algorithm for reconstructing 3D morphology using surface tangents of projected scattering interfaces

    Science.gov (United States)

    Petersen, T. C.; Ringer, S. P.

    2010-03-01

Upon discerning the mere shape of an imaged object, as portrayed by projected perimeters, the full three-dimensional scattering density may not be of particular interest. In this situation considerable simplifications to the reconstruction problem are possible, allowing calculations based upon geometric principles. Here we describe and provide an algorithm which reconstructs the three-dimensional morphology of specimens from tilt series of images for application to electron tomography. Our algorithm uses a differential approach to infer the intersection of projected tangent lines with surfaces which define boundaries between regions of different scattering densities within and around the perimeters of specimens. Details of the algorithm implementation are given and explained using reconstruction calculations from simulations, which are built into the code. An experimental application of the algorithm to a nano-sized aluminium tip is also presented to demonstrate practical analysis for a real specimen.

    Program summary:
    Program title: STOMO version 1.0
    Catalogue identifier: AEFS_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFS_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 2988
    No. of bytes in distributed program, including test data, etc.: 191 605
    Distribution format: tar.gz
    Programming language: C/C++
    Computer: PC
    Operating system: Windows XP
    RAM: Depends upon the size of the experimental data given as input, ranging from 200 Mb to 1.5 Gb
    Supplementary material: Sample output files, for the test run provided, are available.
    Classification: 7.4, 14
    External routines: Dev-C++ (http://www.bloodshed.net/devcpp.html)
    Nature of problem: Electron tomography of specimens for which conventional back projection may fail and/or data for which there is a limited angular range.

2. The O(αs²) heavy quark corrections to charged current deep-inelastic scattering at large virtualities

    Energy Technology Data Exchange (ETDEWEB)

    Blümlein, Johannes, E-mail: Johannes.Bluemlein@desy.de [Deutsches Elektronen–Synchrotron, DESY, Platanenallee 6, D-15738 Zeuthen (Germany); Hasselhuhn, Alexander [Deutsches Elektronen–Synchrotron, DESY, Platanenallee 6, D-15738 Zeuthen (Germany); Research Institute for Symbolic Computation (RISC), Johannes Kepler University, Altenbergerstraße 69, A-4040 Linz (Austria); Pfoh, Torsten [Deutsches Elektronen–Synchrotron, DESY, Platanenallee 6, D-15738 Zeuthen (Germany)

    2014-04-15

We calculate the O(αs²) heavy flavor corrections to charged current deep-inelastic scattering at large scales Q² ≫ m². The contributing Wilson coefficients are given as convolutions between massive operator matrix elements and massless Wilson coefficients. Previous results in the literature are extended and corrected. Numerical results are presented for the kinematic region of the HERA data.

  3. Effects of defect pixel correction algorithms for x-ray detectors on image quality in planar projection and volumetric CT data sets

    International Nuclear Information System (INIS)

    Kuttig, Jan; Steiding, Christian; Hupfer, Martin; Karolczak, Marek; Kolditz, Daniel

    2015-01-01

    In this study we compared various defect pixel correction methods for reducing artifact appearance within projection images used for computed tomography (CT) reconstructions. Defect pixel correction algorithms were examined with respect to their artifact behaviour within planar projection images as well as in volumetric CT reconstructions. We investigated four algorithms: nearest neighbour, linear and adaptive linear interpolation, and a frequency-selective spectral-domain approach. To characterise the quality of each algorithm in planar image data, we inserted line defects of varying widths and orientations into images. The structure preservation of each algorithm was analysed by corrupting and correcting the image of a slit phantom pattern and by evaluating its line spread function (LSF). The noise preservation was assessed by interpolating corrupted flat images and estimating the noise power spectrum (NPS) of the interpolated region. For the volumetric investigations, we examined the structure and noise preservation within a structured aluminium foam, a mid-contrast cone-beam phantom and a homogeneous polyurethane (PUR) cylinder. The frequency-selective algorithm showed the best structure and noise preservation for planar data of the correction methods tested. For volumetric data it still showed the best noise preservation, whereas its structure preservation was outperformed by the linear interpolation. The frequency-selective spectral-domain approach in the correction of line defects is recommended for planar image data, but its abilities within high-contrast volumes are restricted. In that case, the application of a simple linear interpolation might be the better choice to correct line defects within projection images used for CT. (paper)
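
    For reference, the simplest of the four methods, linear interpolation across a line defect, can be sketched in a few lines of Python; the row-wise treatment of vertical defect columns is an illustrative choice, not the paper's implementation.

        import numpy as np

        def correct_line_defect(proj, defect_cols):
            # Replace defective detector columns by row-wise linear
            # interpolation from the nearest intact columns on either side.
            out = proj.astype(float).copy()
            good = np.setdiff1d(np.arange(proj.shape[1]), defect_cols)
            for r in range(proj.shape[0]):
                out[r, defect_cols] = np.interp(defect_cols, good, proj[r, good])
            return out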

  4. Evaluation of a scatter correlation technique for single photon transmission measurements in PET by means of Monte Carlo simulations

    International Nuclear Information System (INIS)

    Wegmann, K.; Brix, G.

    2000-01-01

    Purpose: Single photon transmission (SPT) measurements offer a new approach for the determination of attenuation correction factors (ACF) in PET. The aim of the present work was to evaluate a scatter correction algorithm proposed by C. Watson by means of Monte Carlo simulations. Methods: SPT measurements with a Cs-137 point source were simulated for a whole-body PET scanner (ECAT EXACT HR+) in both the 2D and 3D mode. To examine the scatter fraction (SF) in the transmission data, the detected photons were classified as unscattered or scattered. The simulated data were used to determine (i) the spatial distribution of the SFs, (ii) an ACF sinogram from all detected events (ACF_tot), (iii) an ACF sinogram from the unscattered events only (ACF_unscattered), and (iv) a sinogram ACF_cor = (ACF_tot)^(1+κ) corrected according to the Watson algorithm. In addition, density images were reconstructed in order to quantitatively evaluate linear attenuation coefficients. Results: A high correlation was found between the SF and the ACF_tot sinograms. For the cylinder and the EEC phantom, similar correction factors κ were estimated. The determined values resulted in an accurate scatter correction in both the 2D and 3D mode. (orig.)
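
    Step (iv) is compact enough to state directly in code; the following numpy sketch applies the power-law correction, with κ assumed to have been estimated beforehand from a calibration phantom (the value shown is purely illustrative).

        import numpy as np

        def watson_corrected_acf(acf_tot, kappa):
            # Step (iv): ACF_cor = (ACF_tot)**(1 + kappa), applied
            # element-wise to the measured transmission sinogram.
            return np.power(acf_tot, 1.0 + kappa)

        # acf_cor = watson_corrected_acf(acf_tot, kappa=0.1)  # kappa illustrative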

  5. Evaluation of a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography of scaphoid fixation screws.

    Science.gov (United States)

    Filli, Lukas; Marcon, Magda; Scholz, Bernhard; Calcagni, Maurizio; Finkenstädt, Tim; Andreisek, Gustav; Guggenberger, Roman

    2014-12-01

    The aim of this study was to evaluate a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography (FDCT) of scaphoid fixation screws. FDCT has gained interest in imaging small anatomic structures of the appendicular skeleton. Angiographic C-arm systems with flat detectors allow fluoroscopy and FDCT imaging in a one-stop procedure emphasizing their role as an ideal intraoperative imaging tool. However, FDCT imaging can be significantly impaired by artefacts induced by fixation screws. Following ethical board approval, commercially available scaphoid fixation screws were inserted into six cadaveric specimens in order to fix artificially induced scaphoid fractures. FDCT images corrected with the algorithm were compared to uncorrected images both quantitatively and qualitatively by two independent radiologists in terms of artefacts, screw contour, fracture line visibility, bone visibility, and soft tissue definition. Normal distribution of variables was evaluated using the Kolmogorov-Smirnov test. In case of normal distribution, quantitative variables were compared using paired Student's t tests. The Wilcoxon signed-rank test was used for quantitative variables without normal distribution and all qualitative variables. A p value of < 0.05 was considered to indicate statistically significant differences. Metal artefacts were significantly reduced by the correction algorithm (p < 0.001), and the fracture line was more clearly defined (p < 0.01). The inter-observer reliability was "almost perfect" (intra-class correlation coefficient 0.85, p < 0.001). The prototype correction algorithm in FDCT for metal artefacts induced by scaphoid fixation screws may facilitate intra- and postoperative follow-up imaging. Flat detector computed tomography (FDCT) is a helpful imaging tool for scaphoid fixation. The correction algorithm significantly reduces artefacts in FDCT induced by scaphoid fixation screws. This may facilitate intra

  6. SU-F-T-452: Influence of Dose Calculation Algorithm and Heterogeneity Correction On Risk Categorization of Patients with Cardiac Implanted Electronic Devices Undergoing Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Iwai, P; Lins, L Nadler [AC Camargo Cancer Center, Sao Paulo (Brazil)

    2016-06-15

    Purpose: There is a lack of studies with significant cohort data about patients using a pacemaker (PM), implanted cardioverter defibrillator (ICD) or cardiac resynchronization therapy (CRT) device undergoing radiotherapy. There is no literature comparing the cumulative doses delivered to these cardiac implanted electronic devices (CIED) as calculated by different algorithms, nor studies comparing doses computed with and without heterogeneity correction. The aim of this study was to evaluate the influence of the algorithms Pencil Beam Convolution (PBC), Analytical Anisotropic Algorithm (AAA) and Acuros XB (AXB), as well as heterogeneity correction, on risk categorization of patients. Methods: A retrospective analysis of 19 3DCRT or IMRT plans of 17 patients was conducted, calculating the dose delivered to the CIED using the three calculation algorithms. Doses were evaluated with and without heterogeneity correction for comparison. Risk categorization of the patients was based on their CIED dependency and the cumulative dose in the devices. Results: Total estimated doses at the CIED calculated by AAA or AXB were higher than those calculated by PBC in 56% of the cases. On average, the doses at the CIED calculated by AAA and AXB were higher than those calculated by PBC (by 29% and 4%, respectively). The maximum difference between doses calculated by each algorithm was about 1 Gy, whether heterogeneity correction was used or not. Values of maximum dose calculated with heterogeneity correction showed that the dose at the CIED was at least equal to or higher than the dose obtained without heterogeneity correction in 84% of the cases with PBC, 77% with AAA and 67% with AXB. Conclusion: The dose calculation algorithm and heterogeneity correction did not change the risk categorization. Since higher estimated doses delivered to the CIED do not compromise the treatment precautions to be taken, it is recommended that the most sophisticated algorithm available be used to predict the dose at the CIED, using heterogeneity correction.

  7. SU-F-T-452: Influence of Dose Calculation Algorithm and Heterogeneity Correction On Risk Categorization of Patients with Cardiac Implanted Electronic Devices Undergoing Radiotherapy

    International Nuclear Information System (INIS)

    Iwai, P; Lins, L Nadler

    2016-01-01

    Purpose: There is a lack of studies with significant cohort data about patients using a pacemaker (PM), implanted cardioverter defibrillator (ICD) or cardiac resynchronization therapy (CRT) device undergoing radiotherapy. There is no literature comparing the cumulative doses delivered to these cardiac implanted electronic devices (CIED) as calculated by different algorithms, nor studies comparing doses computed with and without heterogeneity correction. The aim of this study was to evaluate the influence of the algorithms Pencil Beam Convolution (PBC), Analytical Anisotropic Algorithm (AAA) and Acuros XB (AXB), as well as heterogeneity correction, on risk categorization of patients. Methods: A retrospective analysis of 19 3DCRT or IMRT plans of 17 patients was conducted, calculating the dose delivered to the CIED using the three calculation algorithms. Doses were evaluated with and without heterogeneity correction for comparison. Risk categorization of the patients was based on their CIED dependency and the cumulative dose in the devices. Results: Total estimated doses at the CIED calculated by AAA or AXB were higher than those calculated by PBC in 56% of the cases. On average, the doses at the CIED calculated by AAA and AXB were higher than those calculated by PBC (by 29% and 4%, respectively). The maximum difference between doses calculated by each algorithm was about 1 Gy, whether heterogeneity correction was used or not. Values of maximum dose calculated with heterogeneity correction showed that the dose at the CIED was at least equal to or higher than the dose obtained without heterogeneity correction in 84% of the cases with PBC, 77% with AAA and 67% with AXB. Conclusion: The dose calculation algorithm and heterogeneity correction did not change the risk categorization. Since higher estimated doses delivered to the CIED do not compromise the treatment precautions to be taken, it is recommended that the most sophisticated algorithm available be used to predict the dose at the CIED, using heterogeneity correction.

  8. The data correction algorithms in the ⁶⁰Co train inspection system

    CERN Document Server

    Yuan Ya Ding; Liu Xi Ming; Miao Ji Cheng

    2002-01-01

    Because of the physical characteristics of the ⁶⁰Co train inspection system and the use of a high-speed data collection system based on current integration, the original images are distorted to a certain degree. The authors investigate the causes of this distortion and accordingly present the data correction algorithms.

  9. Application of transmission scan-based attenuation compensation to scatter-corrected thallium-201 myocardial single-photon emission tomographic images

    International Nuclear Information System (INIS)

    Hashimoto, Jun; Kubo, Atsushi; Ogawa, Koichi; Ichihara, Takashi; Motomura, Nobutoku; Takayama, Takuzo; Iwanaga, Shiro; Mitamura, Hideo; Ogawa, Satoshi

    1998-01-01

    A practical method for scatter and attenuation compensation was employed in thallium-201 myocardial single-photon emission tomography (SPET or ECT) with the triple-energy-window (TEW) technique and an iterative attenuation correction method using a measured attenuation map. The map was reconstructed from technetium-99m transmission CT (TCT) data. A dual-headed SPET gamma camera system equipped with parallel-hole collimators was used for ECT/TCT data acquisition, and a new type of external source named the "sheet line source" was designed for TCT data acquisition. This sheet line source is composed of a narrow, long fluoroplastic tube embedded in a rectangular acrylic board. After injection of ⁹⁹ᵐTc solution into the tube by an automatic injector, the board was attached in front of the collimator surface of one of the two detectors. After acquiring emission and transmission data separately or simultaneously, we eliminated scattered photons in the transmission and emission data with the TEW method and reconstructed both images. Then, the effect of attenuation in the scatter-corrected ECT images was compensated with Chang's iterative method using the measured attenuation maps. Our method was validated by several phantom studies and clinical cardiac studies. The method offered improved homogeneity in the distribution of myocardial activity and accurate measurements of myocardial tracer uptake. We conclude that the above correction method is feasible because the new type of ⁹⁹ᵐTc external source may not produce truncation in TCT images and is cost-effective and easy to prepare in clinical situations. (orig.)
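
    The TEW estimate itself is a short formula: scatter in the main window is approximated from the counts in two narrow flanking windows. A minimal numpy sketch follows; the trapezoidal form is the standard TEW definition, which this abstract does not spell out, and the window widths (in keV) are passed as arguments.

        import numpy as np

        def tew_primary(c_main, c_low, c_up, w_main, w_low, w_up):
            # Standard TEW estimate: average the count densities (counts per
            # keV) of the two narrow flanking windows and scale by the main
            # window width to approximate the scatter in the main window.
            scatter = (c_low / w_low + c_up / w_up) * w_main / 2.0
            return np.clip(c_main - scatter, 0.0, None), scatter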

  10. Algorithms for solving atomic structures of nanodimensional clusters in single crystals based on X-ray and neutron diffuse scattering data

    International Nuclear Information System (INIS)

    Andrushevskii, N.M.; Shchedrin, B.M.; Simonov, V.I.

    2004-01-01

    New algorithms for solving the atomic structure of equivalent nanodimensional clusters of the same orientations randomly distributed over the initial single crystal (crystal matrix) have been suggested. A cluster is a compact group of substitutional, interstitial or other atoms displaced from their positions in the crystal matrix. The structure is solved based on X-ray or neutron diffuse scattering data obtained from such objects. The use of the mathematical apparatus of Fourier transformations of finite functions showed that the appropriate sampling of the intensities of continuous diffuse scattering allows one to synthesize multiperiodic difference Patterson functions that reveal the systems of the interatomic vectors of an individual cluster. The suggested algorithms are tested on a model one-dimensional structure

  11. Extending 3D near-cloud corrections from shorter to longer wavelengths

    International Nuclear Information System (INIS)

    Marshak, Alexander; Evans, K. Frank; Várnai, Tamás; Wen, Guoyong

    2014-01-01

    Satellite observations have shown a positive correlation between cloud amount and aerosol optical thickness (AOT) that can be explained by the humidification of aerosols near clouds, and/or by cloud contamination by sub-pixel size clouds and the cloud adjacency effect. The last effect may substantially increase reflected radiation in cloud-free columns, leading to overestimates in the retrieved AOT. For clear-sky areas near boundary layer clouds the main contribution to the enhancement of clear sky reflectance at shorter wavelengths comes from the radiation scattered into clear areas by clouds and then scattered to the sensor by air molecules. Because of the wavelength dependence of air molecule scattering, this process leads to a larger reflectance increase at shorter wavelengths, and can be corrected using a simple two-layer model [18]. However, correcting only for molecular scattering skews spectral properties of the retrieved AOT. Kassianov and Ovtchinnikov [9] proposed a technique that uses spectral reflectance ratios to retrieve AOT in the vicinity of clouds; they assumed that the cloud adjacency effect influences the spectral ratio between reflectances at two wavelengths less than it influences the reflectances themselves. This paper combines the two approaches: It assumes that the 3D correction for the shortest wavelength is known with some uncertainties, and then it estimates the 3D correction for longer wavelengths using a modified ratio method. The new approach is tested with 3D radiances simulated for 26 cumulus fields from Large-Eddy Simulations, supplemented with 40 aerosol profiles. The results showed that (i) for a variety of cumulus cloud scenes and aerosol profiles over ocean the 3D correction due to cloud adjacency effect can be extended from shorter to longer wavelengths and (ii) the 3D corrections for longer wavelengths are not very sensitive to unbiased random uncertainties in the 3D corrections at shorter wavelengths.

  12. TUnfold, an algorithm for correcting migration effects in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Schmitt, Stefan

    2012-07-15

    TUnfold is a tool for correcting migration and background effects in high energy physics for multi-dimensional distributions. It is based on a least square fit with Tikhonov regularisation and an optional area constraint. For determining the strength of the regularisation parameter, the L-curve method and scans of global correlation coefficients are implemented. The algorithm supports background subtraction and error propagation of statistical and systematic uncertainties, in particular those originating from limited knowledge of the response matrix. The program is interfaced to the ROOT analysis framework.
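
    The core of such an unfolding is a regularised least-squares solve. The sketch below shows the basic Tikhonov step in numpy with an identity regularisation matrix; TUnfold's derivative-based regularisation, area constraint, and L-curve scan for the regularisation strength τ are omitted here.

        import numpy as np

        def tikhonov_unfold(y, A, tau):
            # Least-squares unfolding of a measured spectrum y with response
            # matrix A and Tikhonov regularisation of strength tau:
            #   x_hat = argmin ||y - A x||^2 + tau^2 ||L x||^2
            # L = identity in this sketch; TUnfold also offers derivative-based
            # L, an area constraint and an L-curve scan for tau (all omitted).
            L = np.eye(A.shape[1])
            lhs = A.T @ A + tau**2 * (L.T @ L)
            return np.linalg.solve(lhs, A.T @ y)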

  13. Direct cone-beam cardiac reconstruction algorithm with cardiac banding artifact correction

    International Nuclear Information System (INIS)

    Taguchi, Katsuyuki; Chiang, Beshan S.; Hein, Ilmar A.

    2006-01-01

    Multislice helical computed tomography (CT) is a promising noninvasive technique for coronary artery imaging. Various factors can cause inconsistencies in cardiac CT data, which can result in degraded image quality. These inconsistencies may be the result of the patient physiology (e.g., heart rate variations), the nature of the data (e.g., cone-angle), or the reconstruction algorithm itself. An algorithm which provides the best temporal resolution for each slice, for example, often provides suboptimal image quality for the entire volume since the cardiac temporal resolution (TRc) changes from slice to slice. Such variations in TRc can generate strong banding artifacts in multi-planar reconstruction images or three-dimensional images. Discontinuous heart walls and coronary arteries may compromise the accuracy of the diagnosis. A β-blocker is often used to reduce and stabilize patients' heart rate but cannot eliminate the variation. In order to obtain robust and optimal image quality, a software solution that increases the temporal resolution and decreases the effect of heart rate is highly desirable. This paper proposes an ECG-correlated direct cone-beam reconstruction algorithm (TCOT-EGR) with cardiac banding artifact correction (CBC) and disconnected projections redundancy compensation technique (DIRECT). First the theory and analytical model of the cardiac temporal resolution is outlined. Next, the performance of the proposed algorithms is evaluated by using computer simulations as well as patient data. It will be shown that the proposed algorithms enhance the robustness of the image quality against inconsistencies by guaranteeing smooth transition of heart cycles used in reconstruction

  14. Errors and corrections in the separation of spin-flip and non-spin-flip thermal neutron scattering using the polarization analysis technique

    International Nuclear Information System (INIS)

    Williams, W.G.

    1975-01-01

    The use of the polarization analysis technique to separate spin-flip from non-spin-flip thermal neutron scattering is especially important in determining magnetic scattering cross-sections. In order to identify a spin-flip ratio in the scattering with a particular scattering process, it is necessary to correct the experimentally observed 'flipping-ratio' for the efficiencies of the vital instrument components (polarizers and spin-flippers), as well as for multiple scattering effects in the sample. Analytical expressions for these corrections are presented and their magnitudes estimated in typical cases. The errors in measurement depend strongly on the uncertainties in the calibration of the efficiencies of the polarizers and the spin-flipper. The final section is devoted to a discussion of polarization analysis instruments.

  15. A necessary condition for applying MUSIC algorithm in limited-view inverse scattering problem

    Science.gov (United States)

    Park, Taehoon; Park, Won-Kwang

    2015-09-01

    It is well known from numerical simulations in various settings that the MUltiple SIgnal Classification (MUSIC) algorithm can be applied in limited-view inverse scattering problems; however, the application has remained somewhat heuristic. In this contribution, we identify a necessary condition of MUSIC for the imaging of a collection of small, perfectly conducting cracks. This is based on the fact that the MUSIC imaging functional can be represented as an infinite series of Bessel functions of integer order of the first kind. Numerical experiments with noisy synthetic data support our investigation.
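
    The imaging functional mentioned here has a standard computational form: project a test vector onto the noise subspace of the multistatic response matrix and plot the reciprocal of the projection norm. A generic numpy sketch follows; the test-vector model g(x), which depends on the incident directions and wavenumber, is passed in as a function, and this is the textbook construction rather than the paper's specific variant.

        import numpy as np

        def music_image(K, grid_points, test_vector, n_signal):
            # Build the noise subspace from the SVD of the multistatic
            # response matrix K; n_signal is the number of significant
            # singular values (roughly one per point-like scatterer).
            U, s, Vh = np.linalg.svd(K)
            noise = U[:, n_signal:]
            img = np.empty(len(grid_points))
            for i, x in enumerate(grid_points):
                g = test_vector(x)                  # steering vector g(x)
                g = g / np.linalg.norm(g)
                # Peaks of 1/||P_noise g(x)|| indicate scatterer locations.
                img[i] = 1.0 / np.linalg.norm(noise.conj().T @ g)
            return img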

  16. Implications of changing scattering properties on Greenland ice sheet volume change from Cryosat-2 altimetry

    DEFF Research Database (Denmark)

    Simonsen, Sebastian Bjerregaard; Sørensen, Louise Sandberg

    2017-01-01

    Long-term observations of surface elevation change of the Greenland ice sheet (GrIS) are of utmost importance when assessing the state of the ice sheet. Satellite radar altimetry offers a long time series of data over the GrIS, starting with ERS-1 in 1991. ESA's Cryosat-2 mission, launched in 2010, … Waveform parameters are included in the elevation change algorithm to correct for temporal changes in the ratio between surface- and volume-scatter in Cryosat-2 observations. We present elevation and volume changes for the Greenland ice sheet in the period from 2010 until 2014. The waveform parameters considered here are the backscatter … waveform parameters to be applicable for correcting for changes in volume scattering. The best results in the Synthetic Aperture Radar Interferometric mode area of the GrIS are found when applying only the backscatter correction, whereas the best result in the Low Resolution Mode area is obtained by only …

  17. Evaluation of the global orbit correction algorithm for the APS real-time orbit feedback system

    International Nuclear Information System (INIS)

    Carwardine, J.; Evans, K. Jr.

    1997-01-01

    The APS real-time orbit feedback system uses 38 correctors per plane and has available up to 320 rf beam position monitors. Orbit correction is implemented using multiple digital signal processors. Singular value decomposition is used to generate a correction matrix from a linear response matrix model of the storage ring lattice. This paper evaluates the performance of the APS system in terms of its ability to correct localized and distributed sources of orbit motion. The impact of regulator gain and bandwidth, choice of beam position monitors, and corrector dynamics are discussed. The weighted least-squares algorithm is reviewed in the context of local feedback
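
    The SVD-based correction step described here can be written compactly: truncate the singular spectrum of the measured response matrix and solve for the corrector strengths that cancel the measured orbit. A minimal numpy sketch with unit BPM weights (the weighted least-squares version simply scales the rows of the response matrix):

        import numpy as np

        def svd_orbit_correction(R, orbit_error, n_sv):
            # R: response matrix (BPM reading change per unit corrector kick).
            # Keep only the n_sv largest singular values to avoid amplifying
            # BPM noise, then compute kicks that cancel the measured error.
            U, s, Vt = np.linalg.svd(R, full_matrices=False)
            s_inv = np.zeros_like(s)
            s_inv[:n_sv] = 1.0 / s[:n_sv]
            return -(Vt.T * s_inv) @ (U.T @ orbit_error)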

  18. Improved Global Ocean Color Using Polymer Algorithm

    Science.gov (United States)

    Steinmetz, Francois; Ramon, Didier; Deschamps, Pierre-Yves; Stum, Jacques

    2010-12-01

    A global ocean color product has been developed based on the use of the POLYMER algorithm to correct atmospheric scattering and sun glint and to process the data to a Level 2 ocean color product. Thanks to the use of this algorithm, the coverage and accuracy of the MERIS ocean color product have been significantly improved when compared to the standard product, therefore increasing its usefulness for global ocean monitoring applications like GLOBCOLOUR. We will present the latest developments of the algorithm, its first application to MODIS data and its validation against in-situ data from the MERMAID database. Examples will be shown of global NRT chlorophyll maps produced by CLS with POLYMER for operational applications like fishing or the oil and gas industry, as well as its use by Scripps for a NASA study of the Beaufort and Chukchi seas.

  19. Thermal diffuse scattering in angular-dispersive neutron diffraction

    International Nuclear Information System (INIS)

    Popa, N.C.; Willis, B.T.M.

    1998-01-01

    The theoretical treatment of one-phonon thermal diffuse scattering (TDS) in single-crystal neutron diffraction at fixed incident wavelength is reanalysed in the light of the analysis given by Popa and Willis [Acta Cryst. (1994), (1997)] for the time-of-flight method. Isotropic propagation of sound with different velocities for the longitudinal and transverse modes is assumed. As in time-of-flight diffraction, there exists, for certain scanning variables, a forbidden range in the one-phonon TDS of slower-than-sound neutrons, and this permits the determination of the sound velocity in the crystal. A fast algorithm is given for the TDS correction of neutron diffraction data collected at a fixed wavelength: this algorithm is similar to that reported earlier for the time-of-flight case. (orig.)

  20. Beam-centric algorithm for pretreatment patient position correction in external beam radiation therapy

    International Nuclear Information System (INIS)

    Bose, Supratik; Shukla, Himanshu; Maltz, Jonathan

    2010-01-01

    Purpose: In current image guided pretreatment patient position adjustment methods, image registration is used to determine the alignment parameters. Since most positioning hardware lacks the full six degrees of freedom (DOF), accuracy is compromised. The authors show that such compromises are often unnecessary when the planned treatment beams are modeled as part of the adjustment calculation. They present a flexible algorithm for determining optimal realizable adjustments for both step-and-shoot and arc delivery methods. Methods: The beam shape model is based on the polygonal intersection of each beam segment with the plane in the pretreatment image volume that passes through the machine isocenter perpendicular to the central axis of the beam. Under a virtual six-DOF correction, ideal positions of these polygon vertices are computed. The proposed method determines the couch, gantry, and collimator adjustments that minimize the total mismatch of all vertices over all segments with respect to their ideal positions. Using this geometric error metric as a function of the number of available DOF, the user may select the most desirable correction regime. Results: For a simulated treatment plan consisting of three equally weighted coplanar fixed beams, the authors achieve a 7% residual geometric error (with respect to the ideal correction, considered 0% error) by applying gantry rotation as well as translation and isocentric rotation of the couch. For a clinical head-and-neck intensity modulated radiotherapy plan with seven beams and five segments per beam, the corresponding error is 6%. Correction involving only couch translation (typical clinical practice) leads to a much larger 18% mismatch. Clinically significant consequences of more accurate adjustment are apparent in the dose volume histograms of target and critical structures. Conclusions: The algorithm achieves improvements in delivery accuracy using standard delivery hardware without significantly increasing
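
    The vertex-mismatch metric lends itself to a compact illustration: given the ideal vertex positions from a virtual six-DOF correction, the residual error of a restricted adjustment is the mean distance left after the best alignment that the restricted hardware can realize. The 2D translation/rotation sketch below is a simplified stand-in for the paper's couch/gantry/collimator search, using a Kabsch fit with the reflection check omitted:

        import numpy as np

        def residual_error(v_real, v_ideal, allow_rotation=True):
            # Mean vertex mismatch after the best realizable alignment;
            # centring both vertex sets removes the translational DOF.
            a = v_real - v_real.mean(axis=0)
            b = v_ideal - v_ideal.mean(axis=0)
            if allow_rotation:
                # 2D Kabsch fit for the optimal rotation (det/reflection
                # correction omitted for brevity).
                U, _, Vt = np.linalg.svd(a.T @ b)
                Rm = Vt.T @ U.T
                a = a @ Rm.T
            return np.linalg.norm(a - b, axis=1).mean()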

  1. Segmentation-free empirical beam hardening correction for CT

    Energy Technology Data Exchange (ETDEWEB)

    Schüller, Sören; Sawall, Stefan [German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, Heidelberg 69120 (Germany); Stannigel, Kai; Hülsbusch, Markus; Ulrici, Johannes; Hell, Erich [Sirona Dental Systems GmbH, Fabrikstraße 31, 64625 Bensheim (Germany); Kachelrieß, Marc, E-mail: marc.kachelriess@dkfz.de [German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg (Germany)

    2015-02-15

    Purpose: The polychromatic nature of x-ray beams and its effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for general cupping, methods like water precorrection exist; they correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other correction techniques. If only the information of a single energy scan is used, there are two types of corrections. The first is a physical approach, whereby artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the spectrum used, the detector response, and the physical attenuation and scatter properties of the intersected materials. The second is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physics-based technique both rely on a segmentation of the tissues present inside the patient. The difficulty is that beam hardening itself, scatter, and other effects that diminish image quality also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The method proposed herein works similarly to the empirical beam hardening correction but does not require a tissue segmentation, and therefore shows improvements on image data that are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed in a way that no additional calibration or parameter fitting is needed. Methods: To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. This step is essential for the

  2. Segmentation-free empirical beam hardening correction for CT.

    Science.gov (United States)

    Schüller, Sören; Sawall, Stefan; Stannigel, Kai; Hülsbusch, Markus; Ulrici, Johannes; Hell, Erich; Kachelrieß, Marc

    2015-02-01

    The polychromatic nature of x-ray beams and its effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for general cupping, methods like water precorrection exist; they correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other correction techniques. If only the information of a single energy scan is used, there are two types of corrections. The first is a physical approach, whereby artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the spectrum used, the detector response, and the physical attenuation and scatter properties of the intersected materials. The second is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physics-based technique both rely on a segmentation of the tissues present inside the patient. The difficulty is that beam hardening itself, scatter, and other effects that diminish image quality also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The method proposed herein works similarly to the empirical beam hardening correction but does not require a tissue segmentation, and therefore shows improvements on image data that are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed in a way that no additional calibration or parameter fitting is needed. To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. This step is essential for the proposed

  3. WE-AB-207A-09: Optimization of the Design of a Moving Blocker for Cone-Beam CT Scatter Correction: Experimental Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X; Ouyang, L; Jia, X; Zhang, Y; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States); Yan, H [Cyber Medical Corporation, Xi’an (China)

    2016-06-15

    Purpose: A moving blocker based strategy has shown promising results for scatter correction in cone-beam computed tomography (CBCT). Different geometry designs and moving speeds of the blocker affect its performance in image reconstruction accuracy. The goal of this work is to optimize the geometric design and moving speed of the moving blocker system through experimental evaluations. Methods: An Elekta Synergy XVI system and an anthropomorphic pelvis phantom CIRS 801-P were used for our experiment. A blocker consisting of lead strips was inserted between the x-ray source and the phantom, moving back and forth along the rotation axis to measure the scatter signal. According to our Monte Carlo simulation results, three blockers were used, all with the same lead strip width of 3.2 mm and with gaps between neighboring lead strips of 3.2, 6.4 and 9.6 mm. For each blocker, three moving speeds were evaluated: 10, 20 and 30 pixels per projection (on the detector plane). The scatter signal in the unblocked region was estimated by cubic B-spline based interpolation from the blocked region. The CBCT image was reconstructed by a total variation (TV) based algebraic iterative reconstruction (ART) algorithm from the partially blocked projection data. Reconstruction accuracy in each condition was quantified as the CT number error of regions of interest (ROIs) relative to a CBCT image reconstructed from analytically simulated unblocked and scatter-free projection data. Results: The highest reconstruction accuracy is achieved when the blocker strip width is 3.2 mm, the gap between neighboring lead strips is 9.6 mm and the moving speed is 20 pixels per projection. The RMSE of the CT number of the ROIs can be reduced from 436 to 27. Conclusions: Image reconstruction accuracy is greatly affected by the geometry design of the blocker. The moving speed does not have a very strong effect on the reconstruction result if it is over 20 pixels per projection.
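
    The interpolation step is straightforward to sketch: counts measured in the blocker shadow are, to good approximation, pure scatter, and a smooth spline through them estimates the scatter everywhere else. A per-row scipy sketch, with a plain cubic spline standing in for the cubic B-spline fit referenced above:

        import numpy as np
        from scipy.interpolate import CubicSpline

        def estimate_scatter(proj, blocked_cols):
            # Counts in the blocker shadow are taken as pure scatter; a cubic
            # spline through them (blocked_cols must be sorted, strictly
            # increasing) estimates the scatter across each detector row.
            cols = np.arange(proj.shape[1])
            scatter = np.empty(proj.shape, dtype=float)
            for r in range(proj.shape[0]):
                scatter[r] = CubicSpline(blocked_cols, proj[r, blocked_cols])(cols)
            return np.clip(scatter, 0.0, None)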

  4. A necessary condition for applying MUSIC algorithm in limited-view inverse scattering problem

    International Nuclear Information System (INIS)

    Park, Taehoon; Park, Won-Kwang

    2015-01-01

    It is well known from numerical simulations in various settings that the MUltiple SIgnal Classification (MUSIC) algorithm can be applied in limited-view inverse scattering problems; however, the application has remained somewhat heuristic. In this contribution, we identify a necessary condition of MUSIC for the imaging of a collection of small, perfectly conducting cracks. This is based on the fact that the MUSIC imaging functional can be represented as an infinite series of Bessel functions of integer order of the first kind. Numerical experiments with noisy synthetic data support our investigation. (paper)

  5. Septal penetration correction in I-131 imaging following thyroid cancer treatment

    Science.gov (United States)

    Barrack, Fiona; Scuffham, James; McQuaid, Sarah

    2018-04-01

    Whole body gamma camera images acquired after I-131 treatment for thyroid cancer can suffer from collimator septal penetration artefacts because of the high energy of the gamma photons. This results in the appearance of 'spoke' artefacts, emanating from regions of high activity concentration, caused by the non-isotropic attenuation of the collimator. Deconvolution has the potential to reduce such artefacts, by taking into account the non-Gaussian point-spread-function (PSF) of the system. A Richardson–Lucy deconvolution algorithm, with and without prior scatter-correction, was tested as a method of reducing septal penetration in planar gamma camera images. Phantom images (hot spheres within a warm background) were acquired and deconvolution using a measured PSF was applied. The results were evaluated through region-of-interest and line profile analysis to determine the success of artefact reduction and the optimal number of deconvolution iterations and damping parameter (λ). Without scatter-correction, the optimal results were obtained with 15 iterations and λ = 0.01, with the counts in the spokes reduced to 20% of the original value, indicating a substantial decrease in their prominence. When a triple-energy-window scatter-correction was applied prior to deconvolution, the optimal results were obtained with six iterations and λ = 0.02, which reduced the spoke counts to 3% of the original value. The prior application of scatter-correction therefore produced the best results, with a marked change in the appearance of the images. The optimal settings were then applied to six patient datasets, to demonstrate its utility in the clinical setting. In all datasets, spoke artefacts were substantially reduced after the application of scatter-correction and deconvolution, with the mean spoke count being reduced to 10% of the original value. This indicates that deconvolution is a promising technique for septal penetration artefact reduction that
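
    For orientation, a bare-bones Richardson–Lucy loop in numpy/scipy is shown below; the measured, non-Gaussian PSF enters through the two convolutions. The damping parameter λ discussed above modifies this multiplicative update to suppress noise amplification and is omitted here for brevity.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, n_iter=15):
            # Plain Richardson-Lucy deconvolution; psf is the measured
            # point-spread function, assumed normalised to unit sum.
            est = np.full(image.shape, image.mean(), dtype=float)
            psf_mirror = psf[::-1, ::-1]
            for _ in range(n_iter):
                blur = fftconvolve(est, psf, mode='same')
                ratio = image / np.maximum(blur, 1e-12)
                est *= fftconvolve(ratio, psf_mirror, mode='same')
            return est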

  6. Magnetic photon scattering

    International Nuclear Information System (INIS)

    Lovesey, S.W.

    1987-05-01

    The report reviews, at an introductory level, the theory of photon scattering from condensed matter. Magnetic scattering, which arises from first-order relativistic corrections to the Thomson scattering amplitude, is treated in detail and related to the corresponding interaction in the magnetic neutron diffraction amplitude. (author)

  7. MUSIC algorithm for imaging of a sound-hard arc in limited-view inverse scattering problem

    Science.gov (United States)

    Park, Won-Kwang

    2017-07-01

    The MUltiple SIgnal Classification (MUSIC) algorithm for non-iterative imaging of a sound-hard arc in the limited-view inverse scattering problem is considered. In order to uncover the mathematical structure of MUSIC, we derive a relationship between MUSIC and an infinite series of Bessel functions of integer order. This structure enables us to examine some properties of MUSIC in the limited-view problem. Numerical simulations are performed to support the identified structure of MUSIC.

  8. Spectrum correction algorithm for detectors in airborne radioactivity monitoring equipment NH-UAV based on a ratio processing method

    International Nuclear Information System (INIS)

    Cao, Ye; Tang, Xiao-Bin; Wang, Peng; Meng, Jia; Huang, Xi; Wen, Liang-Sheng; Chen, Da

    2015-01-01

    The unmanned aerial vehicle (UAV) radiation monitoring method plays an important role in nuclear accident emergencies. In this research, a spectrum correction algorithm for the UAV airborne radioactivity monitoring equipment NH-UAV was studied in order to measure the radioactive nuclides within a small area in real time and in a fixed place. The simulated spectra of the high-purity germanium (HPGe) detector and the lanthanum bromide (LaBr₃) detector in the equipment were obtained using the Monte Carlo technique. Spectrum correction coefficients were calculated by performing ratio processing on the net peak areas of the two detectors, using the detection spectrum of the HPGe detector as the accuracy reference for the LaBr₃ detector spectrum. The relationship between the spectrum correction coefficient and the size of the source term was also investigated. A good linear relation exists between the spectrum correction coefficient and the corresponding energy (R² = 0.9765). The maximum relative deviation from the real condition was reduced from 1.65 to 0.035. The spectrum correction method was verified as feasible. - Highlights: • An airborne radioactivity monitoring equipment NH-UAV was developed to measure radionuclides after a nuclear accident. • A spectrum correction algorithm was proposed to obtain precise information on the detected radioactivity within a small area. • The spectrum correction method was verified as feasible. • The corresponding spectrum correction coefficients increase first and then stay constant.

  9. Spectrum correction algorithm for detectors in airborne radioactivity monitoring equipment NH-UAV based on a ratio processing method

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Ye [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Tang, Xiao-Bin, E-mail: tangxiaobin@nuaa.edu.cn [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Jiangsu Key Laboratory of Nuclear Energy Equipment Materials Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Wang, Peng; Meng, Jia; Huang, Xi; Wen, Liang-Sheng [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Chen, Da [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Jiangsu Key Laboratory of Nuclear Energy Equipment Materials Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China)

    2015-10-11

    The unmanned aerial vehicle (UAV) radiation monitoring method plays an important role in nuclear accident emergencies. In this research, a spectrum correction algorithm for the UAV airborne radioactivity monitoring equipment NH-UAV was studied in order to measure the radioactive nuclides within a small area in real time and in a fixed place. The simulated spectra of the high-purity germanium (HPGe) detector and the lanthanum bromide (LaBr₃) detector in the equipment were obtained using the Monte Carlo technique. Spectrum correction coefficients were calculated by performing ratio processing on the net peak areas of the two detectors, using the detection spectrum of the HPGe detector as the accuracy reference for the LaBr₃ detector spectrum. The relationship between the spectrum correction coefficient and the size of the source term was also investigated. A good linear relation exists between the spectrum correction coefficient and the corresponding energy (R² = 0.9765). The maximum relative deviation from the real condition was reduced from 1.65 to 0.035. The spectrum correction method was verified as feasible. - Highlights: • An airborne radioactivity monitoring equipment NH-UAV was developed to measure radionuclides after a nuclear accident. • A spectrum correction algorithm was proposed to obtain precise information on the detected radioactivity within a small area. • The spectrum correction method was verified as feasible. • The corresponding spectrum correction coefficients increase first and then stay constant.

  10. Evaluation of attenuation correction, scatter correction and resolution recovery in myocardial Tc-99m MIBI SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Larcos, G.; Hutton, B.F.; Farlow, D.C.; Campbell-Rodgers, N.; Gruenewald, S.M.; Lau, Y.H. [Westmead Hospital, Westmead, Sydney, NSW (Australia). Departments of Nuclear Medicine and Ultrasound and Medical Physics]

    1998-06-01

    Full text: The introduction of transmission-based attenuation correction (AC) has increased the diagnostic accuracy of Tc-99m MIBI myocardial perfusion SPECT. The aim of this study is to evaluate recent developments, including scatter correction (SC) and resolution recovery (RR). We reviewed 13 patients who underwent Tc-99m MIBI SPECT (two-day protocol) and coronary angiography, and 4 manufacturer-supplied studies assigned a low pretest likelihood of coronary artery disease (CAD). Patients had a mean age of 59 years (range: 41-78). Data were reconstructed using filtered backprojection (FBP; method 1), maximum likelihood (ML) incorporating AC (method 2), ADAC software using sinogram-based SC+RR followed by ML with AC (method 3), and ordered-subset ML incorporating AC, SC and RR (method 4). Images were reported by two of three blinded, experienced physicians using a standard semiquantitative scoring scheme. Fixed or reversible perfusion defects were considered abnormal; CAD was considered present with stenoses > 50%. Patients had normal coronary anatomy (n=9), single (n=4) or two-vessel CAD (n=4) (four in each of LAD, RCA and LCX). There were no statistically significant differences for any combination. Normalcy rate = 100% for all methods. Physicians graded 3/17 (methods 2, 4) and 1/17 (method 3) images as fair or poor in quality. Thus, AC or AC+SC+RR produce good quality images in most patients; there is potential for improvement in sensitivity over standard FBP with no significant change in normalcy or specificity.

  11. N3LO corrections to jet production in deep inelastic scattering using the Projection-to-Born method

    Science.gov (United States)

    Currie, J.; Gehrmann, T.; Glover, E. W. N.; Huss, A.; Niehues, J.; Vogt, A.

    2018-05-01

    Computations of higher-order QCD corrections for processes with exclusive final states require a subtraction method for real-radiation contributions. We present the first-ever generalisation of a subtraction method for third-order (N3LO) QCD corrections. The Projection-to-Born method is used to combine inclusive N3LO coefficient functions with an exclusive second-order (NNLO) calculation for a final state with an extra jet. The input requirements, advantages, and potential applications of the method are discussed, and validations at lower orders are performed. As a test case, we compute the N3LO corrections to kinematical distributions and production rates for single-jet production in deep inelastic scattering in the laboratory frame, and compare them with data from the ZEUS experiment at HERA. The corrections are small in the central rapidity region, where they stabilize the predictions to the sub-per-cent level. The corrections increase substantially towards forward rapidity, where large logarithmic effects are expected, thereby yielding an improved description of the data in this region.

  12. Evaluation of a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography of scaphoid fixation screws

    Energy Technology Data Exchange (ETDEWEB)

    Filli, Lukas; Finkenstaedt, Tim; Andreisek, Gustav; Guggenberger, Roman [University Hospital of Zurich, Department of Diagnostic and Interventional Radiology, Zurich (Switzerland); Marcon, Magda [University Hospital of Zurich, Department of Diagnostic and Interventional Radiology, Zurich (Switzerland); University of Udine, Institute of Diagnostic Radiology, Department of Medical and Biological Sciences, Udine (Italy); Scholz, Bernhard [Imaging and Therapy Division, Siemens AG, Healthcare Sector, Forchheim (Germany); Calcagni, Maurizio [University Hospital of Zurich, Division of Plastic Surgery and Hand Surgery, Zurich (Switzerland)

    2014-12-15

    The aim of this study was to evaluate a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography (FDCT) of scaphoid fixation screws. FDCT has gained interest in imaging small anatomic structures of the appendicular skeleton. Angiographic C-arm systems with flat detectors allow fluoroscopy and FDCT imaging in a one-stop procedure emphasizing their role as an ideal intraoperative imaging tool. However, FDCT imaging can be significantly impaired by artefacts induced by fixation screws. Following ethical board approval, commercially available scaphoid fixation screws were inserted into six cadaveric specimens in order to fix artificially induced scaphoid fractures. FDCT images corrected with the algorithm were compared to uncorrected images both quantitatively and qualitatively by two independent radiologists in terms of artefacts, screw contour, fracture line visibility, bone visibility, and soft tissue definition. Normal distribution of variables was evaluated using the Kolmogorov-Smirnov test. In case of normal distribution, quantitative variables were compared using paired Student's t tests. The Wilcoxon signed-rank test was used for quantitative variables without normal distribution and all qualitative variables. A p value of < 0.05 was considered to indicate statistically significant differences. Metal artefacts were significantly reduced by the correction algorithm (p < 0.001), and the fracture line was more clearly defined (p < 0.01). The inter-observer reliability was "almost perfect" (intra-class correlation coefficient 0.85, p < 0.001). The prototype correction algorithm in FDCT for metal artefacts induced by scaphoid fixation screws may facilitate intra- and postoperative follow-up imaging. (orig.)

  13. Evaluation of a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography of scaphoid fixation screws

    International Nuclear Information System (INIS)

    Filli, Lukas; Finkenstaedt, Tim; Andreisek, Gustav; Guggenberger, Roman; Marcon, Magda; Scholz, Bernhard; Calcagni, Maurizio

    2014-01-01

    The aim of this study was to evaluate a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography (FDCT) of scaphoid fixation screws. FDCT has gained interest in imaging small anatomic structures of the appendicular skeleton. Angiographic C-arm systems with flat detectors allow fluoroscopy and FDCT imaging in a one-stop procedure emphasizing their role as an ideal intraoperative imaging tool. However, FDCT imaging can be significantly impaired by artefacts induced by fixation screws. Following ethical board approval, commercially available scaphoid fixation screws were inserted into six cadaveric specimens in order to fix artificially induced scaphoid fractures. FDCT images corrected with the algorithm were compared to uncorrected images both quantitatively and qualitatively by two independent radiologists in terms of artefacts, screw contour, fracture line visibility, bone visibility, and soft tissue definition. Normal distribution of variables was evaluated using the Kolmogorov-Smirnov test. In case of normal distribution, quantitative variables were compared using paired Student's t tests. The Wilcoxon signed-rank test was used for quantitative variables without normal distribution and all qualitative variables. A p value of < 0.05 was considered to indicate statistically significant differences. Metal artefacts were significantly reduced by the correction algorithm (p < 0.001), and the fracture line was more clearly defined (p < 0.01). The inter-observer reliability was "almost perfect" (intra-class correlation coefficient 0.85, p < 0.001). The prototype correction algorithm in FDCT for metal artefacts induced by scaphoid fixation screws may facilitate intra- and postoperative follow-up imaging. (orig.)

  14. Improved ocean-color remote sensing in the Arctic using the POLYMER algorithm

    Science.gov (United States)

    Frouin, Robert; Deschamps, Pierre-Yves; Ramon, Didier; Steinmetz, François

    2012-10-01

    Atmospheric correction of ocean-color imagery in the Arctic brings specific challenges that the standard atmospheric correction algorithm does not address, namely low solar elevation, high cloud frequency, multi-layered polar clouds, presence of ice in the field-of-view, and adjacency effects from highly reflecting surfaces covered by snow and ice and from clouds. These challenges may be addressed using a flexible atmospheric correction algorithm, referred to as POLYMER (Steinmetz et al., 2011). This algorithm does not use a specific aerosol model, but fits the atmospheric reflectance by a polynomial with a non-spectral term that accounts for any non-spectral scattering (clouds, coarse aerosol mode) or reflection (glitter, whitecaps, small ice surfaces within the instrument field of view), a spectral term varying as wavelength to the power -1 (fine aerosol mode), and a spectral term varying as wavelength to the power -4 (molecular scattering, adjacency effects from clouds and white surfaces). Tests are performed on selected MERIS imagery acquired over Arctic seas. The derived ocean properties, i.e., marine reflectance and chlorophyll concentration, are compared with those obtained with the standard MEGS algorithm. The POLYMER estimates are more realistic in regions affected by the ice environment, e.g., chlorophyll concentration is higher near the ice edge, and spatial coverage is substantially increased. Good retrievals are obtained in the presence of thin clouds, with ocean-color features exhibiting spatial continuity from clear to cloudy regions. The POLYMER estimates of marine reflectance agree better with in situ measurements than the MEGS estimates. Biases are 0.001 or less in magnitude, except at 412 and 443 nm, where they reach 0.005 and 0.002, respectively, and the root-mean-squared difference decreases from 0.006 at 412 nm to less than 0.001 at 620 and 665 nm. A first application to MODIS imagery is presented, revealing that the POLYMER algorithm is
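
    The spectral model described above is linear in its coefficients, so each fit reduces to ordinary least squares. A numpy sketch of the three-term fit (c0 + c1·λ⁻¹ + c2·λ⁻⁴), purely to illustrate the structure of the model rather than POLYMER's actual per-pixel optimisation:

        import numpy as np

        def fit_atmospheric_reflectance(wavelengths, rho_atm):
            # Least-squares fit of rho_atm(lambda) = c0 + c1*lambda**-1
            # + c2*lambda**-4; the model is linear in (c0, c1, c2).
            lam = np.asarray(wavelengths, dtype=float)
            M = np.column_stack([np.ones_like(lam), lam**-1, lam**-4])
            coeffs, *_ = np.linalg.lstsq(M, rho_atm, rcond=None)
            return coeffs, M @ coeffs   # coefficients and fitted spectrum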

  15. Dose Calculation Accuracy of the Monte Carlo Algorithm for CyberKnife Compared with Other Commercially Available Dose Calculation Algorithms

    International Nuclear Information System (INIS)

    Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny

    2011-01-01

    Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required.

  16. Electromagnetic corrections to ππ scattering lengths: some lessons for the construction of effective hadronic field theories

    International Nuclear Information System (INIS)

    Maltman, K.

    1998-01-01

    Using the framework of effective chiral Lagrangians, we show that, in order to correctly implement electromagnetism (EM), as generated from the Standard Model, into effective hadronic theories (such as meson-exchange models), it is insufficient to consider only graphs in the low-energy effective theory containing explicit photon lines. The Standard Model requires the presence of contact interactions in the effective theory which are electromagnetic in origin, but which involve no photons in the effective theory. We illustrate the problems which can result from a "standard" EM subtraction, i.e., from assuming that removing all contributions in the effective theory generated by graphs with explicit photon lines fully removes EM effects, by considering the case of the s-wave ππ scattering lengths. In this case it is shown that such a subtraction procedure would lead to the incorrect conclusion that the strong-interaction isospin-breaking contributions to these quantities were large when, in fact, they are known to vanish at leading order in m_d − m_u. The leading EM contact corrections for the channels employed in the extraction of the I=0,2 s-wave ππ scattering lengths from experiment are also evaluated. (orig.)

  17. QCD and power corrections to sum rules in deep-inelastic lepton-nucleon scattering

    International Nuclear Information System (INIS)

    Ravindran, V.; Neerven, W.L. van

    2001-01-01

    In this paper we study QCD and power corrections to sum rules which show up in deep-inelastic lepton-hadron scattering. Furthermore, we make a distinction between fundamental sum rules, which can be derived from quantum field theory, and those which are of a phenomenological origin. Using current-algebra techniques, the fundamental sum rules can be expressed as expectation values of (partially) conserved (axial-)vector currents sandwiched between hadronic states. These expectation values yield the quantum numbers of the corresponding hadron, which are determined by the underlying flavour group SU(n)_F. In this case one can show that there exists an intimate relation between the appearance of power and QCD corrections. The above features do not hold for the phenomenological sum rules, hereafter called non-fundamental. They have no foundation in quantum field theory and mostly depend on certain assumptions made for the structure functions, like super-convergence relations or the parton model. Therefore only the fundamental sum rules provide us with a stringent test of QCD

  18. Subroutine MLTGRD: a multigrid algorithm based on multiplicative correction and implicit non-stationary iteration

    International Nuclear Information System (INIS)

    Barry, J.M.; Pollard, J.P.

    1986-11-01

    A FORTRAN subroutine MLTGRD is provided to efficiently solve the large systems of linear equations arising from a five-point finite-difference discretisation of some elliptic partial differential equations. MLTGRD is a multigrid algorithm which applies a multiplicative correction to iterative solution estimates obtained from successively reduced systems of linear equations. It uses the method of implicit non-stationary iteration for all grid levels.
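    A minimal Python sketch of the multiplicative-correction idea on a toy 1-D analogue of the five-point system may help make the scheme concrete. The aggregation-based coarse grid, weighted-Jacobi smoother, and all parameter choices below are illustrative assumptions, not the published MLTGRD implementation: the coarse level solves for factors that multiply, rather than add to, the current fine-grid estimate.

```python
import numpy as np

def jacobi_smooth(A, b, u, sweeps=2, omega=0.8):
    """Weighted-Jacobi smoothing sweeps for A u = b."""
    D = A.diagonal()
    for _ in range(sweeps):
        u = u + omega * (b - A @ u) / D
    return u

def multiplicative_correction(A, b, u, agg_size=4):
    """One coarse-grid multiplicative correction step.

    Coarse unknowns are correction factors c on aggregates of `agg_size`
    fine cells; the fine estimate is updated as u <- u * (P @ c), where
    P is a piecewise-constant prolongation.
    """
    n = A.shape[0]
    nc = n // agg_size
    P = np.zeros((n, nc))                      # prolongation, restriction R = P.T
    for j in range(nc):
        P[j * agg_size:(j + 1) * agg_size, j] = 1.0
    R = P.T
    # Coarse operator acts on multiplicative factors: A_c c = b_c with
    # A_c = R A diag(u) P, so c = 1 reproduces the current residual balance.
    Ac = R @ A @ np.diag(u) @ P
    bc = R @ b
    c = np.linalg.solve(Ac, bc)
    return u * (P @ c)

# Demo on a 1-D Poisson-like 3-point system (toy stand-in for the 5-point case)
n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
u = np.ones(n)      # positive initial guess, as multiplicative schemes require
for _ in range(20):
    u = jacobi_smooth(A, b, u)
    u = multiplicative_correction(A, b, u)
print("residual:", np.linalg.norm(b - A @ u))
```

    Because the coarse operator is rebuilt around the current estimate, the exact solution is a fixed point of the update: c = 1 then satisfies the restricted balance equation.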

  19. Study on fitness functions of genetic algorithm for dynamically correcting nuclide atmospheric diffusion model

    International Nuclear Information System (INIS)

    Ji Zhilong; Ma Yuanwei; Wang Dezhong

    2014-01-01

    Background: In atmospheric diffusion models for radioactive nuclides, the empirical dispersion coefficients were deduced under certain experimental conditions, whose difference from nuclear accident conditions is a source of deviation. A better estimate of a radioactive nuclide's actual dispersion process can be obtained by correcting the dispersion coefficients with observation data, and the Genetic Algorithm (GA) is an appropriate method for this correction procedure. Purpose: This study analyzes the influence of the fitness function on the correction procedure and on the forecast ability of the diffusion model. Methods: GA, coupled with a Lagrangian dispersion model, was used in a numerical simulation to compare the impact of four fitness functions on the correction result. Results: In the numerical simulation, the fitness function that takes observation deviation into consideration stands out when significant deviation exists in the observed data. After performing the correction procedure on the Kincaid experiment data, a significant boost was observed in the diffusion model's forecast ability. Conclusion: As the results show, in order to improve dispersion models' forecast ability using GA, observation data should be given different weights in the fitness function corresponding to their errors. (authors)
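    A minimal Python sketch of the distinction the study draws: a plain least-squares fitness versus one that down-weights observations with large expected error. The arrays and error levels below are purely illustrative, not the Kincaid data.

```python
import numpy as np

def fitness_unweighted(obs, model):
    """Plain sum-of-squares fitness (higher is better)."""
    return -np.sum((obs - model) ** 2)

def fitness_weighted(obs, model, sigma_obs):
    """Chi-square-style fitness that down-weights observations
    with large expected error."""
    return -np.sum(((obs - model) / sigma_obs) ** 2)

# Toy usage: score two candidate dispersion-coefficient settings against
# noisy receptor observations (values are illustrative).
obs = np.array([1.0, 0.8, 0.5, 0.2])
sigma_obs = np.array([0.05, 0.05, 0.2, 0.2])   # larger error at far receptors
model_a = np.array([0.9, 0.8, 0.6, 0.1])
model_b = np.array([1.0, 0.7, 0.4, 0.3])
for name, m in [("a", model_a), ("b", model_b)]:
    print(name, fitness_unweighted(obs, m), fitness_weighted(obs, m, sigma_obs))
```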

  20. Parton-parton scattering at two-loops

    International Nuclear Information System (INIS)

    Tejeda Yeomans, M.E.

    2001-01-01

    We present an algorithm for the calculation of scalar and tensor one- and two-loop integrals that contribute to the virtual corrections of 2 → 2 partonic scattering. First, the tensor integrals are related to scalar integrals that contain an irreducible propagator-like structure in the numerator. Then, we use Integration by Parts and Lorentz Invariance recurrence relations to build a general system of equations that enables the reduction of any scalar integral (with and without structure in the numerator) to a basis set of master integrals. Their expansions in ε = 2 - D/2 have already been calculated, and we present a summary of the techniques that have been used to this end, as well as a compilation of the expansions we need in the different physical regions. We then apply this algorithm to the direct evaluation of the Feynman diagrams contributing to the O(α_s⁴) one- and two-loop matrix elements for massless like and unlike quark-quark, quark-gluon and gluon-gluon scattering. The analytic expressions we provide are regularised in Conventional Dimensional Regularisation and renormalised in the MS-bar scheme. Finally, we show that the structure of the infrared divergences agrees with that predicted by the application of Catani's formalism to the analysis of each partonic scattering process. The results presented in this thesis provide the complete calculation of the one- and two-loop matrix elements for 2 → 2 processes needed for the next-to-next-to-leading-order contribution to inclusive jet production at hadron colliders. (author)

  1. Deviation from Trajectory Detection in Vision based Robotic Navigation using SURF and Subsequent Restoration by Dynamic Auto Correction Algorithm

    Directory of Open Access Journals (Sweden)

    Ray Debraj

    2015-01-01

    Full Text Available Speeded Up Robust Feature (SURF) is used to position a robot with respect to an environment and aid in vision-based robotic navigation. During the course of navigation, irregularities in the terrain, especially in an outdoor environment, may cause a robot to deviate from its track. Another reason for deviation can be unequal speeds of the left and right robot wheels. Hence it is essential to detect such deviations and perform corrective operations to bring the robot back to the track. In this paper we propose a novel algorithm that uses image matching with SURF to detect deviation of a robot from its trajectory, and subsequent restoration by corrective operations. This algorithm is executed in parallel to positioning and navigation algorithms by distributing tasks among different CPU cores using the Open Multi-Processing (OpenMP) API.

  2. Incoherent-scatter computed tomography with monochromatic synchrotron x ray: feasibility of multi-CT imaging system for simultaneous measurement of fluorescent and incoherent scatter x rays

    Science.gov (United States)

    Yuasa, T.; Akiba, M.; Takeda, T.; Kazama, M.; Hoshino, A.; Watanabe, Y.; Hyodo, K.; Dilmanian, F. A.; Akatsuka, T.; Itai, Y.

    1997-10-01

    We describe a new system of incoherent scatter computed tomography (ISCT) using monochromatic synchrotron X rays, and we discuss its potential to be used in in vivo imaging for medical use. The system operates on the basis of computed tomography (CT) of the first generation. The reconstruction method for ISCT uses the least-squares method with singular value decomposition. The research was carried out at the BLNE-5A bending-magnet beam line of the Tristan Accumulation Ring in KEK, Japan. An acrylic cylindrical phantom of 20-mm diameter containing a cross-shaped channel was imaged. The channel was filled with a diluted iodine solution with a concentration of 200 µg I/ml. Spectra obtained with the system's high-purity germanium (HPGe) detector separated the incoherent X-ray line from the other notable peaks, i.e., the iodine Kα and Kβ1 X-ray fluorescent lines and the coherent scattering peak. CT images were reconstructed from projections generated by integrating the counts in the energy window centered around the incoherent scattering peak, whose width was approximately 2 keV. The reconstruction routine employed an X-ray attenuation correction algorithm. The resulting image showed more homogeneity than one without the attenuation correction.

  3. A graphics processing unit accelerated motion correction algorithm and modular system for real-time fMRI.

    Science.gov (United States)

    Scheinost, Dustin; Hampson, Michelle; Qiu, Maolin; Bhawnani, Jitendra; Constable, R Todd; Papademetris, Xenophon

    2013-07-01

    Real-time functional magnetic resonance imaging (rt-fMRI) has recently gained interest as a possible means to facilitate the learning of certain behaviors. However, rt-fMRI is limited by processing speed and available software, and continued development is needed for rt-fMRI to progress further and become feasible for clinical use. In this work, we present an open-source rt-fMRI system for biofeedback powered by a novel Graphics Processing Unit (GPU) accelerated motion correction strategy as part of the BioImage Suite project (www.bioimagesuite.org). Our system contributes to the development of rt-fMRI by presenting a motion correction algorithm that provides an estimate of motion with essentially no processing delay, as well as a modular rt-fMRI system design. Using empirical data from rt-fMRI scans, we assessed the quality of motion correction in this new system. The present algorithm performed comparably to standard (non-real-time) offline methods and outperformed other real-time methods based on zero-order interpolation of motion parameters. The modular approach to the rt-fMRI system allows the system to be flexible to the experiment and feedback design, a valuable feature for many applications. We illustrate the flexibility of the system by describing several of our ongoing studies. Our hope is that continuing development of open-source rt-fMRI algorithms and software will make this new technology more accessible and adaptable, and will thereby accelerate its application in the clinical and cognitive neurosciences.

  4. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set-Effect of Pasteurization.

    Science.gov (United States)

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-02-26

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.

  5. A study of the dosimetry of small field photon beams used in intensity modulated radiation therapy in inhomogeneous media: Monte Carlo simulations, and algorithm comparisons and corrections

    International Nuclear Information System (INIS)

    Jones, Andrew Osler

    2004-01-01

    There is an increasing interest in the use of inhomogeneity corrections for lung, air, and bone in radiotherapy treatment planning. Traditionally, corrections based on physical density have been used. Modern algorithms use the electron density derived from CT images. Small fields are used in both conformal radiotherapy and IMRT; however, their beam characteristics in inhomogeneous media have not been extensively studied. This work compares traditional and modern treatment planning algorithms to Monte Carlo simulations in and near low-density inhomogeneities. Field sizes ranging from 0.5 cm to 5 cm in diameter are projected onto a phantom containing inhomogeneities, and depth dose curves are compared. Comparisons of the Dose Perturbation Factors (DPF) are presented as functions of density and field size. Dose Correction Factors (DCF), which scale the algorithms to the Monte Carlo data, are compared for each algorithm. Physical scaling algorithms such as Batho and Equivalent Pathlength (EPL) predict an increase in dose for small fields passing through lung tissue, where Monte Carlo simulations show a sharp dose drop. The physical model-based collapsed cone convolution (CCC) algorithm correctly predicts the dose drop, but does not accurately predict the magnitude. Because the model-based algorithms do not correctly account for the change in backscatter, the dose drop predicted by CCC occurs farther downstream compared to that predicted by the Monte Carlo simulations. Beyond the tissue inhomogeneity all of the algorithms studied predict dose distributions in close agreement with Monte Carlo simulations. Dose-volume relationships are important in understanding the effects of radiation to the lung. The dose within the lung is affected by a complex function of beam energy, lung tissue density, and field size. Dose algorithms vary in their abilities to correctly predict the dose to the lung tissue. A thorough analysis of the effects of density and field size on dose to the

  6. WE-DE-207B-10: Library-Based X-Ray Scatter Correction for Dedicated Cone-Beam Breast CT: Clinical Validation

    Energy Technology Data Exchange (ETDEWEB)

    Shi, L; Zhu, L [Georgia Institute of Technology, Atlanta, GA (United States); Vedantham, S; Karellas, A [University of Massachusetts Medical School, Worcester, MA (United States)

    2016-06-15

    Purpose: Scatter contamination is detrimental to image quality in dedicated cone-beam breast CT (CBBCT), resulting in cupping artifacts and loss of contrast in reconstructed images. Such effects impede visualization of breast lesions and degrade quantitative accuracy. Previously, we proposed a library-based software approach to suppress scatter on CBBCT images. In this work, we quantify the efficacy and stability of this approach using datasets from 15 human subjects. Methods: A pre-computed scatter library is generated using Monte Carlo simulations for semi-ellipsoid breast models and a homogeneous fibroglandular/adipose tissue mixture encompassing the range reported in the literature. Projection datasets from 15 human subjects that cover the 95th percentile of breast dimensions and fibroglandular volume fraction were included in the analysis. Our investigations indicate that it is sufficient to consider the breast dimensions alone, and variation in fibroglandular fraction does not significantly affect the scatter-to-primary ratio. The breast diameter is measured from a first-pass reconstruction; the appropriate scatter distribution is selected from the library and deformed by considering the discrepancy in total projection intensity between the clinical dataset and the simulated semi-ellipsoidal breast. The deformed scatter distribution is subtracted from the measured projections for scatter correction. Spatial non-uniformity (SNU) and contrast-to-noise ratio (CNR) were used as quantitative metrics to evaluate the results. Results: On the 15 patient cases, our method reduced the overall image SNU from 7.14%±2.94% (mean ± standard deviation) to 2.47%±0.68% in the coronal view and from 10.14%±4.1% to 3.02%±1.26% in the sagittal view. The average CNR improved by a factor of 1.49±0.40 in the coronal view and by 2.12±1.54 in the sagittal view. Conclusion: We demonstrate the robustness and effectiveness of a library-based scatter correction method.
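    A minimal Python sketch of the correction loop described above, assuming a hypothetical library layout; the "deformation" step is reduced here to a global rescaling by the total-intensity discrepancy.

```python
import numpy as np

def scatter_correct_projection(measured, library, breast_diam_mm):
    """Library-based scatter subtraction, a minimal sketch.

    `library` is assumed to map semi-ellipsoid breast diameters (mm) to
    tuples (scatter_sim, total_sim) of Monte Carlo projections simulated
    for a homogeneous fibroglandular/adipose mixture (hypothetical layout).
    """
    # 1. Pick the library entry closest to the measured breast diameter
    #    (taken from a first-pass reconstruction).
    diam = min(library, key=lambda d: abs(d - breast_diam_mm))
    scatter_sim, total_sim = library[diam]
    # 2. "Deform" (here: globally rescale) the simulated scatter to match
    #    the total projection intensity of the clinical dataset.
    scale = measured.sum() / total_sim.sum()
    scatter_est = scale * scatter_sim
    # 3. Subtract the estimate, clipping to keep projections physical.
    return np.clip(measured - scatter_est, 0.0, None)
```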

  7. A review of neutron scattering correction for the calibration of neutron survey meters using the shadow cone method

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sang In; Kim, Bong Hwan; Kim, Jang Lyul; Lee, Jung Il [Health Physics Team, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-12-15

    The calibration methods of neutron-measuring devices such as the neutron survey meter have advantages and disadvantages. To compare the calibration factors obtained by the shadow cone method and semi-empirical method, 10 neutron survey meters of five different types were used in this study. This experiment was performed at the Korea Atomic Energy Research Institute (KAERI; Daejeon, South Korea), and the calibration neutron fields were constructed using a californium-252 (252Cf) neutron source, which was positioned in the center of the neutron irradiation room. The neutron spectra of the calibration neutron fields were measured by a europium-activated lithium iodide scintillator in combination with KAERI's Bonner sphere system. When the shadow cone method was used, 10 single moderator-based survey meters exhibited a smaller calibration factor by as much as 3.1 - 9.3% than that of the semi-empirical method. This finding indicates that neutron survey meters underestimated the scattered neutrons and attenuated neutrons (i.e., the total scatter corrections). This underestimation of the calibration factor was attributed to the fact that single moderator-based survey meters have an under-ambient dose equivalent response in the thermal or thermal-dominant neutron field. As a result, when the shadow cone method is used for a single moderator-based survey meter, an additional correction and the International Organization for Standardization standard 8529-2 for room-scattered neutrons should be considered.

  8. A review of neutron scattering correction for the calibration of neutron survey meters using the shadow cone method

    International Nuclear Information System (INIS)

    Kim, Sang In; Kim, Bong Hwan; Kim, Jang Lyul; Lee, Jung Il

    2015-01-01

    The calibration methods of neutron-measuring devices such as the neutron survey meter have advantages and disadvantages. To compare the calibration factors obtained by the shadow cone method and semi-empirical method, 10 neutron survey meters of five different types were used in this study. This experiment was performed at the Korea Atomic Energy Research Institute (KAERI; Daejeon, South Korea), and the calibration neutron fields were constructed using a californium-252 (252Cf) neutron source, which was positioned in the center of the neutron irradiation room. The neutron spectra of the calibration neutron fields were measured by a europium-activated lithium iodide scintillator in combination with KAERI's Bonner sphere system. When the shadow cone method was used, 10 single moderator-based survey meters exhibited a smaller calibration factor by as much as 3.1 - 9.3% than that of the semi-empirical method. This finding indicates that neutron survey meters underestimated the scattered neutrons and attenuated neutrons (i.e., the total scatter corrections). This underestimation of the calibration factor was attributed to the fact that single moderator-based survey meters have an under-ambient dose equivalent response in the thermal or thermal-dominant neutron field. As a result, when the shadow cone method is used for a single moderator-based survey meter, an additional correction and the International Organization for Standardization standard 8529-2 for room-scattered neutrons should be considered.

  9. Potassium-based algorithm allows correction for the hematocrit bias in quantitative analysis of caffeine and its major metabolite in dried blood spots.

    Science.gov (United States)

    De Kesel, Pieter M M; Capiau, Sara; Stove, Veronique V; Lambert, Willy E; Stove, Christophe P

    2014-10-01

    Although dried blood spot (DBS) sampling is increasingly receiving interest as a potential alternative to traditional blood sampling, the impact of hematocrit (Hct) on DBS results is limiting its final breakthrough in routine bioanalysis. To predict the Hct of a given DBS, potassium (K+) proved to be a reliable marker. The aim of this study was to evaluate whether application of an algorithm, based upon predicted Hct or K+ concentrations as such, allowed correction for the Hct bias. Using validated LC-MS/MS methods, caffeine, chosen as a model compound, was determined in whole blood and corresponding DBS samples with a broad Hct range (0.18-0.47). A reference subset (n = 50) was used to generate an algorithm based on K+ concentrations in DBS. Application of the developed algorithm on an independent test set (n = 50) alleviated the assay bias, especially at lower Hct values. Before correction, differences between DBS and whole blood concentrations ranged from -29.1 to 21.1%. The mean difference, as obtained by Bland-Altman comparison, was -6.6% (95% confidence interval (CI), -9.7 to -3.4%). After application of the algorithm, differences between corrected and whole blood concentrations lay between -19.9 and 13.9%, with a mean difference of -2.1% (95% CI, -4.5 to 0.3%). The same algorithm was applied to a separate compound, paraxanthine, which was determined in 103 samples (Hct range, 0.17-0.47), yielding similar results. In conclusion, a K+-based algorithm allows correction for the Hct bias in the quantitative analysis of caffeine and its metabolite paraxanthine.
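    A minimal Python sketch of how such a correction chain could look: predict the Hct from the measured K+ concentration, then rescale the DBS concentration by an Hct-dependent bias. All coefficients are hypothetical placeholders, not the values fitted on the paper's reference subset.

```python
# Hypothetical coefficients for illustration only.
A_HCT, B_HCT = 0.08, 0.002   # Hct predicted from K+ concentration (mmol/L)
SLOPE_BIAS = -0.9            # fractional bias per unit Hct offset
HCT_REF = 0.36               # Hct at which the DBS assay was calibrated

def predict_hct(k_plus):
    """Predict the hematocrit of a DBS from its measured K+ concentration."""
    return A_HCT + B_HCT * k_plus

def correct_concentration(c_dbs, k_plus):
    """Correct a DBS analyte concentration for the hematocrit bias."""
    bias = SLOPE_BIAS * (predict_hct(k_plus) - HCT_REF)  # fractional bias
    return c_dbs / (1.0 + bias)

print(correct_concentration(c_dbs=5.2, k_plus=90.0))
```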

  10. Numerical correction of anti-symmetric aberrations in single HRTEM images of weakly scattering 2D-objects

    International Nuclear Information System (INIS)

    Lehtinen, Ossi; Geiger, Dorin; Lee, Zhongbo; Whitwick, Michael Brian; Chen, Ming-Wei; Kis, Andras; Kaiser, Ute

    2015-01-01

    Here, we present a numerical post-processing method for removing the effect of anti-symmetric residual aberrations in high-resolution transmission electron microscopy (HRTEM) images of weakly scattering 2D-objects. The method is based on applying the same aberrations with the opposite phase to the Fourier transform of the recorded image intensity and subsequently inverting the Fourier transform. We present the theoretical justification of the method, and its verification based on simulated images in the case of low-order anti-symmetric aberrations. Ultimately the method is applied to experimental hardware aberration-corrected HRTEM images of single-layer graphene and MoSe2, resulting in images with strongly reduced residual low-order aberrations, and consequently improved interpretability. Alternatively, this method can be used to estimate by trial and error the residual anti-symmetric aberrations in HRTEM images of weakly scattering objects.
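    The core numerical step lends itself to a short sketch: multiply the Fourier transform of the recorded intensity by the conjugate aberration phase and invert. The coma-like trial phase and its strength below are illustrative assumptions; in practice the aberration phase would come from hardware diagnostics or the trial-and-error estimation mentioned above.

```python
import numpy as np

def remove_antisymmetric_aberrations(image, chi_anti):
    """Numerically compensate anti-symmetric residual aberrations (sketch).

    `image` is the recorded HRTEM intensity of a weakly scattering 2D
    object; `chi_anti` is the anti-symmetric part of the aberration phase
    chi(k), sampled on the same Fourier grid (assumed known). Following
    the abstract, the aberrations are applied with opposite phase to the
    Fourier transform of the image intensity, which is then inverted.
    """
    ft = np.fft.fft2(image)
    return np.real(np.fft.ifft2(ft * np.exp(-1j * chi_anti)))

# Toy chi for axial coma, an anti-symmetric (odd) aberration: chi ~ k^3 cos(phi)
n = 256
ky, kx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
k, phi = np.hypot(kx, ky), np.arctan2(ky, kx)
chi_coma = 2.0 * np.pi * 1e3 * k**3 * np.cos(phi)   # arbitrary strength
corrected = remove_antisymmetric_aberrations(np.random.rand(n, n), chi_coma)
```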

  11. Algorithms of CT value correction for reconstructing a radiotherapy simulation image through axial CT images

    International Nuclear Information System (INIS)

    Ogino, Takashi; Egawa, Sunao

    1991-01-01

    New algorithms of CT value correction for reconstructing a radiotherapy simulation image from axial CT images were developed. One, designated the plane weighting method, corrects the CT value in proportion to the position of the beam element passing through the voxel. The other, designated the solid weighting method, corrects the CT value in proportion to the length of the beam element passing through the voxel and the volume of the voxel. Phantom experiments showed fair spatial resolution in the transverse direction. In the longitudinal direction, however, spatial resolution finer than the slice thickness could not be obtained. Contrast resolution was equivalent for both methods. In patient studies, the reconstructed radiotherapy simulation image was almost similar in visual perception of density resolution to a simulation film taken by an X-ray simulator. (author)

  12. High-energy expansion for nuclear multiple scattering

    International Nuclear Information System (INIS)

    Wallace, S.J.

    1975-01-01

    The Watson multiple scattering series is expanded to develop the Glauber approximation plus systematic corrections arising from three sources: (1) deviations from eikonal propagation between scatterings, (2) Fermi motion of struck nucleons, and (3) the kinematic transformation which relates the many-body scattering operators of the Watson series to the physical two-body scattering amplitude. Operators which express effects ignored at the outset to obtain the Glauber approximation are subsequently reintroduced via perturbation expansions. Hence a particular set of approximations is developed which renders the sum of the Watson series to the Glauber form in the center-of-mass system, and an expansion is carried out to find leading-order corrections to that summation. Although their physical origins are quite distinct, the eikonal, Fermi motion, and kinematic corrections produce strikingly similar contributions to the scattering amplitude. It is shown that there is substantial cancellation between their effects, and hence the Glauber approximation is more accurate than the individual approximations used in its derivation. It is shown that the leading corrections produce effects of order (2kR_c)⁻¹ relative to the double scattering term in the uncorrected Glauber amplitude, ℏk being the momentum and R_c the nuclear charge radius. The leading-order corrections are found to be small enough to validate quantitative analyses of experimental data for many intermediate- to high-energy cases and for scattering angles not limited to the very forward region. In a Gaussian model, the leading corrections to the Glauber amplitude are given as convenient analytic expressions.

  13. A new method for x-ray scatter correction: first assessment on a cone-beam CT experimental setup

    International Nuclear Information System (INIS)

    Rinkel, J; Gerfault, L; Esteve, F; Dinten, J-M

    2007-01-01

    Cone-beam computed tomography (CBCT) enables three-dimensional imaging with isotropic resolution and a shorter acquisition time compared to a helical CT scanner. Because a larger object volume is exposed for each projection, scatter levels are much higher than in collimated fan-beam systems, resulting in cupping artifacts, streaks and quantification inaccuracies. In this paper, a general method to correct for scatter in CBCT, without supplementary on-line acquisition, is presented. This method is based on scatter calibration through off-line acquisition combined with on-line analytical transformation based on physical equations, to adapt calibration to the object observed. The method was tested on a PMMA phantom and on an anthropomorphic thorax phantom. The results were validated by comparison to simulation for the PMMA phantom and by comparison to scans obtained on a commercial multi-slice CT scanner for the thorax phantom. Finally, the improvements achieved with the new method were compared to those obtained using a standard beam-stop method. The new method provided results that closely agreed with the simulation and with the conventional CT scanner, eliminating cupping artifacts and significantly improving quantification. Compared to the beam-stop method, lower x-ray doses and shorter acquisition times were needed, both divided by a factor of 9 for the same scatter estimation accuracy

  14. Scatter Correction with Combined Single-Scatter Simulation and Monte Carlo Simulation Scaling Improved the Visual Artifacts and Quantification in 3-Dimensional Brain PET/CT Imaging with 15O-Gas Inhalation.

    Science.gov (United States)

    Magota, Keiichi; Shiga, Tohru; Asano, Yukari; Shinyama, Daiki; Ye, Jinghan; Perkins, Amy E; Maniawski, Piotr J; Toyonaga, Takuya; Kobayashi, Kentaro; Hirata, Kenji; Katoh, Chietsugu; Hattori, Naoya; Tamaki, Nagara

    2017-12-01

    In 3-dimensional PET/CT imaging of the brain with 15O-gas inhalation, high radioactivity in the face mask creates cold artifacts and affects the quantitative accuracy when scatter is corrected by conventional methods (e.g., single-scatter simulation [SSS] with tail-fitting scaling [TFS-SSS]). Here we examined the validity of a newly developed scatter-correction method that combines SSS with a scaling factor calculated by Monte Carlo simulation (MCS-SSS). Methods: We performed phantom experiments and patient studies. In the phantom experiments, a plastic bottle simulating a face mask was attached to a cylindric phantom simulating the brain. The cylindric phantom was filled with 18F-FDG solution (3.8-7.0 kBq/mL). The bottle was filled with nonradioactive air or various levels of 18F-FDG (0-170 kBq/mL). Images were corrected either by TFS-SSS or MCS-SSS using the CT data of the bottle filled with nonradioactive air. We compared the image activity concentration in the cylindric phantom with the true activity concentration. We also performed 15O-gas brain PET based on the steady-state method on patients with cerebrovascular disease to obtain quantitative images of cerebral blood flow and oxygen metabolism. Results: In the phantom experiments, a cold artifact was observed immediately next to the bottle on TFS-SSS images, where the image activity concentrations in the cylindric phantom were underestimated by 18%, 36%, and 70% at bottle radioactivity levels of 2.4, 5.1, and 9.7 kBq/mL, respectively. At higher bottle radioactivity, the image activity concentrations in the cylindric phantom were underestimated by more than 98%. For MCS-SSS, in contrast, the error was within 5% at each bottle radioactivity level, although the image showed slight high-activity artifacts around the bottle when the bottle contained significantly high radioactivity. In the patient imaging with 15O2 and C15O2 inhalation, cold artifacts were observed on TFS-SSS images, whereas MCS-SSS images were free of such artifacts.
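    A minimal sketch of the scaling step that distinguishes the two methods, assuming the SSS scatter shape and an object mask are already available; the Monte Carlo factor is taken as given rather than simulated here.

```python
import numpy as np

def scale_scatter_estimate(measured_sino, sss_sino, object_mask, mc_factor=None):
    """Scale a single-scatter-simulation (SSS) estimate, a minimal sketch.

    Tail fitting (TFS-SSS): the SSS shape is scaled so that it matches the
    measured sinogram in the tails outside the object, where only scatter
    (and randoms) should remain. MCS-SSS replaces this data-driven factor
    with one computed by Monte Carlo simulation (`mc_factor`), which stays
    valid when out-of-field activity (e.g., a face mask) contaminates
    the tails.
    """
    if mc_factor is not None:            # MCS-SSS branch
        return mc_factor * sss_sino
    tails = ~object_mask                 # bins outside the object's projection
    scale = measured_sino[tails].sum() / sss_sino[tails].sum()
    return scale * sss_sino
```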

  15. Measured attenuation correction methods

    International Nuclear Information System (INIS)

    Ostertag, H.; Kuebler, W.K.; Doll, J.; Lorenz, W.J.

    1989-01-01

    Accurate attenuation correction is a prerequisite for the determination of exact local radioactivity concentrations in positron emission tomography. Attenuation correction factors range from 4-5 in brain studies to 50-100 in whole-body measurements. This report gives an overview of the different methods of determining the attenuation correction factors by transmission measurements using an external positron-emitting source. The long-lived generator nuclide 68Ge/68Ga is commonly used for this purpose. The additional patient dose from the transmission source is usually a small fraction of the dose due to the subsequent emission measurement. Ring-shaped transmission sources as well as rotating point or line sources are employed in modern positron tomographs. By masking a rotating line or point source, random and scattered events in the transmission scans can be effectively suppressed. The problems of measured attenuation correction are discussed: transmission/emission mismatch, random and scattered event contamination, counting statistics, transmission/emission scatter compensation, and transmission scanning after administration of activity to the patient. By using a double-masking technique, simultaneous emission and transmission scans become feasible. (orig.)
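    The basic arithmetic of measured attenuation correction is simple enough to sketch: the correction factor for each line of response is the ratio of a blank scan to the transmission scan, applied multiplicatively to the emission data.

```python
import numpy as np

def attenuation_correction_factors(blank_sino, transmission_sino, eps=1.0):
    """Measured attenuation correction factors, a minimal sketch.

    ACFs are the ratio of a blank scan to a transmission scan acquired
    with an external positron-emitting source (e.g., 68Ge/68Ga).
    """
    return (blank_sino + eps) / (transmission_sino + eps)  # eps guards zeros

def correct_emission(emission_sino, acf):
    """Apply the ACFs multiplicatively to the emission sinogram."""
    return emission_sino * acf
```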

  16. Practical Atmospheric Correction Algorithms for a Multi-Spectral Sensor From the Visible Through the Thermal Spectral Regions

    Energy Technology Data Exchange (ETDEWEB)

    Borel, C.C.; Villeneuve, P.V.; Clodium, W.B.; Szymenski, J.J.; Davis, A.B.

    1999-04-04

    Deriving information about the Earth's surface requires atmospheric corrections of the measured top-of-the-atmosphere radiances. One possible path is to use atmospheric radiative transfer codes to predict how the radiance leaving the ground is affected by scattering and attenuation. In practice the atmosphere is usually not well known, and thus it is necessary to use more practical methods. The authors describe how to find dark surfaces, estimate the atmospheric optical depth, estimate path radiance, and identify thick clouds using thresholds on reflectance, NDVI, and columnar water vapor. The authors also describe a simple method to correct a visible channel contaminated by thin cirrus clouds.
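    A minimal Python sketch of the dark-surface step for one band, using illustrative thresholds (the paper's actual threshold values are not given in this abstract): dark, densely vegetated pixels are flagged with reflectance and NDVI tests, and the path radiance is then estimated from the darkest flagged pixels.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-12)

def find_dark_surfaces(red, nir, refl_thresh=0.05, ndvi_thresh=0.4):
    """Flag dark, densely vegetated pixels (thresholds are illustrative).

    Over a truly dark pixel, almost all sensor-reaching radiance is
    atmospheric scattering, so such pixels anchor the path-radiance
    estimate.
    """
    return (red < refl_thresh) & (ndvi(red, nir) > ndvi_thresh)

def dark_object_path_radiance(radiance, dark_mask):
    """Estimate the band's path radiance from the darkest flagged pixels."""
    return np.percentile(radiance[dark_mask], 1)
```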

  17. Experimental Validation of Advanced Dispersed Fringe Sensing (ADFS) Algorithm Using Advanced Wavefront Sensing and Correction Testbed (AWCT)

    Science.gov (United States)

    Wang, Xu; Shi, Fang; Sigrist, Norbert; Seo, Byoung-Joon; Tang, Hong; Bikkannavar, Siddarayappa; Basinger, Scott; Lay, Oliver

    2012-01-01

    Large-aperture telescopes commonly feature segmented mirrors, and a coarse phasing step is needed to bring these individual segments into the fine-phasing capture range. Dispersed Fringe Sensing (DFS) is a powerful coarse phasing technique, and a variation of it is currently being used for JWST. An Advanced Dispersed Fringe Sensing (ADFS) algorithm was recently developed to improve the performance and robustness of previous DFS algorithms, with better accuracy and a unique solution. The first part of the paper introduces the basic ideas and essential features of the ADFS algorithm and presents some algorithm sensitivity study results. The second part of the paper describes the full details of the algorithm validation process through the Advanced Wavefront Sensing and Correction Testbed (AWCT): first, the optimization of the DFS hardware of AWCT to ensure data accuracy and reliability is illustrated. Then, a few carefully designed algorithm validation experiments are implemented, and the corresponding data analysis results are shown. Finally the fiducial calibration using the Range-Gate-Metrology technique is carried out, and a <10 nm or <1% algorithm accuracy is demonstrated.

  18. Using phase for radar scatterer classification

    Science.gov (United States)

    Moore, Linda J.; Rigling, Brian D.; Penno, Robert P.; Zelnio, Edmund G.

    2017-04-01

    Traditional synthetic aperture radar (SAR) systems tend to discard phase information of formed complex radar imagery prior to automatic target recognition (ATR). This practice has historically been driven by available hardware storage, processing capabilities, and data link capacity. Recent advances in high performance computing (HPC) have enabled extremely dense storage and processing solutions. Therefore, previous motives for discarding radar phase information in ATR applications have been mitigated. First, we characterize the value of phase in one-dimensional (1-D) radar range profiles with respect to the ability to correctly estimate target features, which are currently employed in ATR algorithms for target discrimination. These features correspond to physical characteristics of targets through radio frequency (RF) scattering phenomenology. Physics-based electromagnetic scattering models developed from the geometrical theory of diffraction are utilized for the information analysis presented here. Information is quantified by the error of target parameter estimates from noisy radar signals when phase is either retained or discarded. Operating conditions (OCs) of signal-to-noise ratio (SNR) and bandwidth are considered. Second, we investigate the value of phase in 1-D radar returns with respect to the ability to correctly classify canonical targets. Classification performance is evaluated via logistic regression for three targets (sphere, plate, tophat). Phase information is demonstrated to improve radar target classification rates, particularly at low SNRs and low bandwidths.

  19. Multifocal multiphoton microscopy with adaptive optical correction

    Science.gov (United States)

    Coelho, Simao; Poland, Simon; Krstajic, Nikola; Li, David; Monypenny, James; Walker, Richard; Tyndall, David; Ng, Tony; Henderson, Robert; Ameer-Beg, Simon

    2013-02-01

    Fluorescence lifetime imaging microscopy (FLIM) is a well-established approach for measuring dynamic signalling events inside living cells, including detection of protein-protein interactions. The improvement in optical penetration of infrared light compared with linear excitation, due to reduced Rayleigh scattering and low absorption, has provided imaging depths of up to 1 mm in brain tissue, but significant image degradation occurs as samples distort (aberrate) the infrared excitation beam. Multiphoton time-correlated single photon counting (TCSPC) FLIM is a method for obtaining functional, high-resolution images of biological structures. In order to achieve good statistical accuracy, TCSPC typically requires long acquisition times. We report the development of a multifocal multiphoton microscope (MMM), titled MegaFLI. Beam parallelization, performed via a 3D Gerchberg-Saxton (GS) algorithm using a Spatial Light Modulator (SLM), increases the TCSPC count rate in proportion to the number of beamlets produced. A weighted 3D GS algorithm is employed to improve homogeneity. An added benefit is the implementation of flexible and adaptive optical correction. Adaptive optics performed by means of Zernike polynomials is used to correct for system-induced aberrations. Here we present results with significant improvement in throughput obtained using a novel complementary metal-oxide-semiconductor (CMOS) 1024-pixel single-photon avalanche diode (SPAD) array, opening the way to truly high-throughput FLIM.
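    A minimal Python sketch of a weighted Gerchberg-Saxton loop of the kind described, with the far field modeled by a single FFT; the weight-update rule and iteration count are common textbook choices, not necessarily those of MegaFLI.

```python
import numpy as np

def weighted_gerchberg_saxton(target, iters=30):
    """Weighted Gerchberg-Saxton phase retrieval for an SLM, minimal sketch.

    Computes a phase-only hologram whose far field (modeled here by a
    single FFT) approximates the `target` spot amplitudes; weights boost
    spots that come out too dim, improving homogeneity across beamlets.
    """
    spots = target > 0
    weights = np.ones_like(target)
    phase = 2 * np.pi * np.random.rand(*target.shape)   # random start phase
    for _ in range(iters):
        far = np.fft.fft2(np.exp(1j * phase))           # SLM -> focal plane
        achieved = np.abs(far)
        # Raise the weight of spots whose achieved amplitude lags target.
        mean_ratio = np.mean(achieved[spots] / target[spots])
        weights[spots] *= (target[spots] * mean_ratio) / (achieved[spots] + 1e-12)
        # Impose the weighted target amplitude, keep the far-field phase.
        constrained = weights * target * np.exp(1j * np.angle(far))
        constrained[~spots] = far[~spots]               # off-spot region free
        phase = np.angle(np.fft.ifft2(constrained))     # keep phase only at SLM
    return phase

# Example: three spots of equal target amplitude on a 128x128 grid
t = np.zeros((128, 128))
t[32, 32] = t[64, 80] = t[96, 40] = 1.0
hologram_phase = weighted_gerchberg_saxton(t)
```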

  20. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set—Effect of Pasteurization

    Science.gov (United States)

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-01-01

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified. PMID:26927169

  1. Comparison of different Aethalometer correction schemes and a reference multi-wavelength absorption technique for ambient aerosol data

    Science.gov (United States)

    Saturno, Jorge; Pöhlker, Christopher; Massabò, Dario; Brito, Joel; Carbone, Samara; Cheng, Yafang; Chi, Xuguang; Ditas, Florian; Hrabě de Angelis, Isabella; Morán-Zuloaga, Daniel; Pöhlker, Mira L.; Rizzo, Luciana V.; Walter, David; Wang, Qiaoqiao; Artaxo, Paulo; Prati, Paolo; Andreae, Meinrat O.

    2017-08-01

    Deriving absorption coefficients from Aethalometer attenuation data requires different corrections to compensate for artifacts related to filter-loading effects, scattering by filter fibers, and scattering by aerosol particles. In this study, two different correction schemes were applied to seven-wavelength Aethalometer data, using multi-angle absorption photometer (MAAP) data as a reference absorption measurement at 637 nm. The compensation algorithms were compared to five-wavelength offline absorption measurements obtained with a multi-wavelength absorbance analyzer (MWAA), which serves as a multiple-wavelength reference measurement. The online measurements took place in the Amazon rainforest, from the wet-to-dry transition season to the dry season (June-September 2014). The mean absorption coefficient (at 637 nm) during this period was 1.8 ± 2.1 Mm⁻¹, with a maximum of 15.9 Mm⁻¹. Under these conditions, the filter-loading compensation was negligible. One of the correction schemes was found to artificially increase the short-wavelength absorption coefficients. It was found that accounting for the aerosol optical properties in the scattering compensation significantly affects the absorption Ångström exponent (åABS) retrievals. Proper Aethalometer data compensation schemes are crucial to retrieve the correct åABS, which is commonly implemented in brown carbon contribution calculations. Additionally, we found that the wavelength dependence of uncompensated Aethalometer attenuation data significantly correlates with the åABS retrieved from offline MWAA measurements.

  2. Autofocus algorithm for synthetic aperture radar imaging with large curvilinear apertures

    Science.gov (United States)

    Bleszynski, E.; Bleszynski, M.; Jaroszewicz, T.

    2013-05-01

    An approach to autofocusing for large curved synthetic aperture radar (SAR) apertures is presented. Its essential feature is that phase corrections are extracted not directly from SAR images, but rather from reconstructed SAR phase-history data representing windowed patches of the scene, of sizes sufficiently small to allow the linearization of the forward- and back-projection formulae. The algorithm processes data associated with each patch independently and in two steps. The first step employs a phase-gradient-type method in which phase corrections compensating for (possibly rapid) trajectory perturbations are estimated from the reconstructed phase history for the dominant scattering point on the patch. The second step uses the phase-gradient-corrected data and extracts the absolute phase value, removing in this way phase ambiguities, reducing possible imperfections of the first stage, and providing the distances between the sensor and the scattering point with accuracy comparable to the wavelength. The features of the proposed autofocusing method are illustrated by its application to intentionally corrupted small-scene 2006 Gotcha data. The examples include the extraction of absolute phases (ranges) for selected prominent point targets, which are then used to focus the scene and determine relative target-target distances.

  3. Full correction of scattering effects by using the radiative transfer theory for improved quantitative analysis of absorbing species in suspensions.

    Science.gov (United States)

    Steponavičius, Raimundas; Thennadil, Suresh N

    2013-05-01

    Sample-to-sample photon path length variations that arise due to multiple scattering can be removed by decoupling absorption and scattering effects using the radiative transfer theory with a suitable set of measurements. For samples where particles both scatter and absorb light, the extracted bulk absorption spectrum is not completely free from nonlinear particle effects, since it is related to the absorption cross-section of particles, which changes nonlinearly with particle size and shape. For the quantitative analysis of absorbing-only (i.e., nonscattering) species present in a matrix that contains a particulate species that absorbs and scatters light, a method to eliminate particle effects completely is proposed here, which utilizes the particle size information contained in the bulk scattering coefficient extracted using the Mie theory to carry out an additional correction step to remove particle effects from bulk absorption spectra. This should result in spectra that are equivalent to spectra collected with only the liquid species in the mixture. Such an approach has the potential to significantly reduce the number of calibration samples as well as improve calibration performance. The proposed method was tested with both simulated and experimental data from a four-component model system.

  4. Scattered-field FDTD and PSTD algorithms with CPML absorbing boundary conditions for light scattering by aerosols

    International Nuclear Information System (INIS)

    Sun, Wenbo; Videen, Gorden; Fu, Qiang; Hu, Yongxiang

    2013-01-01

    As fundamental parameters for polarized-radiative-transfer calculations, the single-scattering phase matrix of irregularly shaped aerosol particles must be accurately modeled. In this study, a scattered-field finite-difference time-domain (FDTD) model and a scattered-field pseudo-spectral time-domain (PSTD) model are developed for light scattering by arbitrarily shaped dielectric aerosols. The convolutional perfectly matched layer (CPML) absorbing boundary condition (ABC) is used to truncate the computational domain. It is found that the PSTD method is generally more accurate than the FDTD in calculation of the single-scattering properties given similar spatial cell sizes. Since the PSTD can use a coarser grid for large particles, it can lower the memory requirement in the calculation. However, the Fourier transformations in the PSTD need significantly more CPU time than simple subtractions in the FDTD, and the fast Fourier transform requires a power of 2 elements in calculations, thus using the PSTD could not significantly reduce the CPU time required in the numerical modeling. Furthermore, because the scattered-field FDTD/PSTD equations include incident-wave source terms, the FDTD/PSTD model allows for the inclusion of an arbitrarily incident wave source, including a plane parallel wave or a Gaussian beam like those emitted by lasers usually used in laboratory particle characterizations, etc. The scattered-field FDTD and PSTD light-scattering models can be used to calculate single-scattering properties of arbitrarily shaped aerosol particles over broad size and wavelength ranges. -- Highlights: • Scattered-field FDTD and PSTD models are developed for light scattering by aerosols. • Convolutional perfectly matched layer absorbing boundary condition is used. • PSTD is generally more accurate than FDTD in calculating single-scattering properties. • Using same spatial resolution, PSTD requires much larger CPU time than FDTD

  5. Singular characteristic tracking algorithm for improved solution accuracy of the discrete ordinates method with isotropic scattering

    International Nuclear Information System (INIS)

    Duo, J. I.; Azmy, Y. Y.

    2007-01-01

    A new method, the Singular Characteristics Tracking algorithm, is developed to account for potential non-smoothness across the singular characteristics in the exact solution of the discrete ordinates approximation of the transport equation. Numerical results show improved rate of convergence of the solution to the discrete ordinates equations in two spatial dimensions with isotropic scattering using the proposed methodology. Unlike the standard Weighted Diamond Difference methods, the new algorithm achieves local convergence in the case of discontinuous angular flux along the singular characteristics. The method also significantly reduces the error for problems where the angular flux presents discontinuous spatial derivatives across these lines. For purposes of verifying the results, the Method of Manufactured Solutions is used to generate analytical reference solutions that permit estimating the local error in the numerical solution. (authors)

  6. Backscatter Correction Algorithm for TBI Treatment Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Nieto, B.; Sanchez-Doblado, F.; Arrans, R.; Terron, J.A. [Dpto. Fisiología Médica y Biofísica, Universidad de Sevilla, Avda. Sánchez Pizjuán, 4. E-41009, Sevilla (Spain); Errazquin, L. [Servicio Oncología Radioterápica, Hospital Univ.V. Macarena. Dr. Fedriani, s/n. E-41009, Sevilla (Spain)

    2015-01-15

    The accuracy requirement in target dose delivery is, according to the ICRU, ±5%. This applies not only in standard radiotherapy but also in total body irradiation (TBI). Physical dosimetry plays an important role in achieving this recommended level. The semi-infinite phantoms customarily used for dosimetry purposes give scatter conditions different from those of the finite thickness of the patient, so doses calculated at points in the patient close to the beam exit surface may be overestimated. It is then necessary to quantify the backscatter factor in order to decrease the uncertainty in this dose calculation. Backward scatter has been well studied at standard distances. The present work evaluates the backscatter phenomenon under our particular TBI treatment conditions. As a consequence of this study, a semi-empirical expression has been derived to calculate (within 0.3% uncertainty) the backscatter factor. This factor depends linearly on the depth and exponentially on the underlying tissue. Differences found in the qualitative behavior with respect to standard distances are due to scatter in the bunker wall close to the measurement point.
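    A sketch of the functional form the abstract describes (linear in depth, exponentially saturating in underlying tissue); the coefficients below are hypothetical placeholders, not the fitted values of the paper's semi-empirical expression.

```python
import numpy as np

# Hypothetical coefficients for illustration only.
A0, A1 = 0.02, 0.001   # linear depth dependence (per cm)
MU = 0.5               # exponential constant for underlying tissue (per cm)

def backscatter_factor(depth_cm, underlying_cm):
    """BSF(d, t) = 1 + (A0 + A1*d) * (1 - exp(-MU*t)), a toy parametrization."""
    return 1.0 + (A0 + A1 * depth_cm) * (1.0 - np.exp(-MU * underlying_cm))

# A point near the exit surface (little underlying tissue) gets less backscatter:
print(backscatter_factor(15.0, 1.0), backscatter_factor(15.0, 10.0))
```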

  7. Implementation of Cascade Gamma and Positron Range Corrections for I-124 Small Animal PET

    Science.gov (United States)

    Harzmann, S.; Braun, F.; Zakhnini, A.; Weber, W. A.; Pietrzyk, U.; Mix, M.

    2014-02-01

    Small animal Positron Emission Tomography (PET) should provide accurate quantification of regional radiotracer concentrations and high spatial resolution. This is challenging for non-pure positron emitters with high positron endpoint energies, such as I-124: on the one hand, the cascade gammas emitted by this isotope can produce coincidence events with the 511 keV annihilation photons, leading to quantification errors. On the other hand, the long range of the high-energy positron degrades spatial resolution. This paper presents the implementation of a comprehensive correction technique for both of these effects. The established corrections include a modified sinogram-based tail-fitting approach to correct for scatter, random, and cascade gamma coincidences, and a compensation for resolution degradation effects during the image reconstruction. Resolution losses were compensated for by an iterative algorithm which incorporates a convolution kernel derived from line-source measurements for the microPET Focus 120 system. The entire processing chain for these corrections was implemented, whereas previous work has only addressed parts of this process. Monte Carlo simulations with GATE and measurements of mice with the microPET Focus 120 show that the proposed method reduces absolute quantification errors on average to 2.6%, compared to 15.6% for the ordinary Maximum Likelihood Expectation Maximization algorithm. Furthermore, resolution was improved on the order of 11-29%, depending on the number of convolution iterations. In summary, a comprehensive, fast and robust algorithm for the correction of small animal PET studies with I-124 was developed which improves quantitative accuracy and spatial resolution.

  8. MUSIC algorithms for rebar detection

    International Nuclear Information System (INIS)

    Solimene, Raffaele; Leone, Giovanni; Dell’Aversano, Angela

    2013-01-01

    The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes, as strong scatterers tend to mask the weak ones. Consequently, the detection of the more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting the data is of a relatively high level. To overcome this drawback, a new technique is proposed here, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage, focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to improve detection performance drastically in realistic scenarios. (paper)
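    A minimal Python sketch of the MUSIC building block both stages rely on: project steering (Green's-function) vectors onto the noise subspace of the multistatic response matrix and peak the reciprocal. The two-stage logic enters through the choice of the assumed signal-subspace dimension.

```python
import numpy as np

def music_pseudospectrum(K, test_vectors, n_strong):
    """MUSIC pseudospectrum from a multistatic response matrix, a sketch.

    K: multistatic response matrix (n_antennas x n_antennas).
    test_vectors: array (n_grid, n_antennas) of Green's-function vectors
    g(r) for each trial location r (model-dependent; assumed given).
    n_strong: assumed signal-subspace dimension; in the first stage this
    is the number of strong scatterers.
    """
    U, s, Vh = np.linalg.svd(K)
    noise_subspace = U[:, n_strong:]        # orthogonal to the detected scatterers
    # The pseudospectrum peaks where g(r) is (nearly) orthogonal to the
    # noise subspace, i.e. where a scatterer is located.
    proj = noise_subspace.conj().T @ test_vectors.T   # (n - n_strong, n_grid)
    return 1.0 / (np.linalg.norm(proj, axis=0) ** 2 + 1e-15)
```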

  9. Leading quantum gravitational corrections to scalar QED

    International Nuclear Information System (INIS)

    Bjerrum-Bohr, N.E.J.

    2002-01-01

    We consider the leading post-Newtonian and quantum corrections to the non-relativistic scattering amplitude of charged scalars in the combined theory of general relativity and scalar QED. The combined theory is treated as an effective field theory. This allows for a consistent quantization of the gravitational field. The appropriate vertex rules are extracted from the action, and the non-analytic contributions to the 1-loop scattering matrix are calculated in the non-relativistic limit. The non-analytical parts of the scattering amplitude, which are known to give the long range, low energy, leading quantum corrections, are used to construct the leading post-Newtonian and quantum corrections to the two-particle non-relativistic scattering matrix potential for two charged scalars. The result is discussed in relation to experimental verifications

  10. Non-eikonal effects in high-energy scattering IV. Inelastic scattering

    International Nuclear Information System (INIS)

    Gurvitz, S.A.; Kok, L.P.; Rinat, A.S.

    1978-01-01

    Amplitudes of inelastically scattered high-energy projectiles were calculated. In the scattering on ¹²C (T_p = 1 GeV), sizeable non-eikonal corrections in diffraction extrema are demonstrated even for relatively small q². At least part of the anomaly in the 3⁻ distribution may be due to these non-eikonal effects. (B.G.)

  11. Quadratic Regression-based Non-uniform Response Correction for Radiochromic Film Scanners

    International Nuclear Information System (INIS)

    Jeong, Hae Sun; Kim, Chan Hyeong; Han, Young Yih; Kum, O Yeon

    2009-01-01

    In recent years, several types of radiochromic films have been extensively used for two-dimensional dose measurements such as dosimetry in radiotherapy as well as imaging and radiation protection applications. One of the critical aspects in radiochromic film dosimetry is the accurate readout of the scanner without dose distortion. However, most charge-coupled device (CCD) scanners used for the optical-density readout of the film employ a fluorescent lamp or a cold-cathode lamp as a light source, which leads to a significant amount of light scattering on the active layer of the film. Due to this light scattering, dose distortions with non-uniform responses are produced even when the dose is delivered uniformly to the film. In order to correct the distorted doses, a method based on correction factors (CF) has been reported and used. However, the prediction of the real incident doses is difficult when arbitrary doses are delivered to the film, since the CF-based correction can only be used when the incident doses are already known. In a previous study, therefore, a pixel-based algorithm with linear regression was developed to correct the dose distortion of a flatbed scanner and to estimate the initial doses. The results, however, were not very good in some cases, especially when the incident dose was under approximately 100 cGy. In the present study, the problem was addressed by replacing the linear regression with a quadratic regression. The corrected doses using this method were also compared with the results of other conventional methods.
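    A minimal Python sketch of a per-pixel quadratic calibration of the kind described, assuming a stack of uniformly irradiated calibration films at known doses; fitting each pixel separately absorbs the position-dependent scatter distortion into the calibration.

```python
import numpy as np

def fit_pixel_response(readings, delivered_doses):
    """Fit a per-pixel quadratic mapping scanner reading -> dose (sketch).

    readings: array (n_cal, ny, nx), one uniformly irradiated calibration
    film per known dose in `delivered_doses` (needs n_cal >= 3).
    """
    n_cal, ny, nx = readings.shape
    flat = readings.reshape(n_cal, -1)
    coeffs = np.empty((3, ny * nx))
    for p in range(ny * nx):
        # dose = a*r^2 + b*r + c, with r the pixel's scanner reading
        coeffs[:, p] = np.polyfit(flat[:, p], delivered_doses, 2)
    return coeffs.reshape(3, ny, nx)

def reading_to_dose(reading, coeffs):
    """Convert a scanned image to dose with the per-pixel coefficients."""
    a, b, c = coeffs
    return a * reading**2 + b * reading + c
```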

  12. Heavy flavour corrections to polarised and unpolarised deep-inelastic scattering at 3-loop order

    International Nuclear Information System (INIS)

    Ablinger, J.; Round, M.; Schneider, C.; Hasselhuhn, A.

    2016-11-01

    We report on progress in the calculation of 3-loop corrections to the deep-inelastic structure functions from massive quarks in the asymptotic region of large momentum transfer Q². Recently completed results allow us to obtain the O(a_s³) contributions to several heavy-flavour Wilson coefficients which enter both polarised and unpolarised structure functions for lepton-nucleon scattering. In particular, we obtain the non-singlet contributions to the unpolarised structure functions F_2(x,Q²) and xF_3(x,Q²) and the polarised structure function g_1(x,Q²). From these results we also obtain the heavy-flavour contributions to the Gross-Llewellyn-Smith and the Bjorken sum rules.

  13. A parallelizable compression scheme for Monte Carlo scatter system matrices in PET image reconstruction

    International Nuclear Information System (INIS)

    Rehfeld, Niklas; Alber, Markus

    2007-01-01

    Scatter correction techniques in iterative positron emission tomography (PET) reconstruction increasingly utilize Monte Carlo (MC) simulations, which are very well suited to model scatter in the inhomogeneous patient. Due to memory constraints, the results of these simulations are not stored in the system matrix, but are added or subtracted as a constant term or recalculated in the projector at each iteration. This implies that scatter is not considered in the back-projector. The presented scheme provides a method to store the simulated Monte Carlo scatter in a compressed scatter system matrix. The compression is based on parametrization and B-spline approximation and allows the formation of the scatter matrix from low-statistics simulations. The compression as well as the retrieval of the matrix elements are parallelizable. It is shown that the proposed compression scheme provides sufficient compression that in-memory storage of a scatter system matrix for a 3D scanner is feasible. Scatter matrices of two different 2D scanner geometries were compressed and used for reconstruction as a proof of concept. Compression ratios of 0.1% could be achieved, and scatter-induced artifacts in the images were successfully reduced by using the compressed matrices in the reconstruction algorithm.
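    A minimal sketch of the compression idea for a single scatter profile, using SciPy's least-squares spline fit; the knot count is an illustrative choice, and the paper's parametrization step is omitted.

```python
import numpy as np
from scipy.interpolate import splev, splrep

def compress_scatter_profile(bins, mc_counts, n_knots=12):
    """Fit one noisy MC scatter sinogram profile with a B-spline (sketch).

    Only the knots and coefficients (tck) are stored instead of every
    sinogram bin; the smoothing also suppresses Monte Carlo noise, which
    is what lets low-statistics simulations populate the matrix.
    """
    interior_knots = np.linspace(bins[1], bins[-2], n_knots)
    return splrep(bins, mc_counts, t=interior_knots)   # least-squares fit

def decompress(tck, bins):
    """Evaluate the compressed profile back onto the sinogram bins."""
    return splev(bins, tck)

# Toy usage: a smooth scatter hump plus noise
bins = np.linspace(-1.0, 1.0, 200)
noisy = np.exp(-4 * bins**2) + 0.05 * np.random.randn(bins.size)
tck = compress_scatter_profile(bins, noisy)
profile = decompress(tck, bins)
```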

  14. Drift-corrected Odin-OSIRIS ozone product: algorithm and updated stratospheric ozone trends

    Directory of Open Access Journals (Sweden)

    A. E. Bourassa

    2018-01-01

    A small long-term drift in the Optical Spectrograph and Infrared Imager System (OSIRIS) stratospheric ozone product, manifested mostly since 2012, is quantified and attributed to a changing bias in the limb pointing knowledge of the instrument. A correction to this pointing drift using a predictable shape in the measured limb radiance profile is implemented and applied within the OSIRIS retrieval algorithm. Thanks to the pointing correction, this new data product, version 5.10, displays substantially better long- and short-term agreement with Microwave Limb Sounder (MLS) ozone throughout the stratosphere. Previously reported stratospheric ozone trends over the time period 1984–2013, which were derived by merging the altitude–number density ozone profile measurements from the Stratospheric Aerosol and Gas Experiment (SAGE II) satellite instrument (1984–2005) and from OSIRIS (2002–2013), are recalculated using the new OSIRIS version 5.10 product and extended to 2017. These results still show statistically significant positive trends throughout the upper stratosphere since 1997, but at weaker levels that are more closely in line with estimates from other data records.

  15. A fingerprint key binding algorithm based on vector quantization and error correction

    Science.gov (United States)

    Li, Liang; Wang, Qian; Lv, Ke; He, Ning

    2012-04-01

    In recent years, research on seamlessly combining cryptosystems with biometric technologies, e.g. fingerprint recognition, has been conducted by many researchers. In this paper, we propose an algorithm for binding a fingerprint template to a cryptographic key, so that the key is protected and can be accessed only by fingerprint verification. To overcome the intrinsic fuzziness of variant fingerprints, vector quantization and error correction techniques are introduced to transform the fingerprint template before binding it with the key, after a process of fingerprint registration and extraction of the global ridge pattern of the fingerprint. The key itself is secure because only its hash value is stored, and it is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our ideas.
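
    A minimal fuzzy-commitment-style sketch of the binding step is given below; it uses a 3x repetition code as a stand-in for a real error-correcting code and a made-up binary template, so it illustrates only the hash-and-XOR structure, not the paper's actual vector-quantization scheme.

```python
import hashlib
import numpy as np

# Fuzzy-commitment-style sketch: only a hash of the key and an XOR "lock"
# are stored; a close-enough template releases the key. The 3x repetition
# code and the random binary template are illustrative assumptions.

def repeat_encode(bits):
    return np.repeat(bits, 3)                 # each key bit stored 3 times

def repeat_decode(bits):
    # majority vote over groups of 3 corrects single-bit errors per group
    return (bits.reshape(-1, 3).sum(axis=1) >= 2).astype(np.uint8)

rng = np.random.default_rng(0)
key = rng.integers(0, 2, 64, dtype=np.uint8)        # secret key bits
template = rng.integers(0, 2, 192, dtype=np.uint8)  # quantized fingerprint

# Enrollment: store only the key hash and the lock, never the key itself.
lock = repeat_encode(key) ^ template
key_hash = hashlib.sha256(key.tobytes()).hexdigest()

# Verification: a noisy but close template still releases the same key.
noisy = template.copy()
noisy[[0, 10, 20, 30, 40]] ^= 1                     # a few flipped bits
candidate = repeat_decode(lock ^ noisy)
assert hashlib.sha256(candidate.tobytes()).hexdigest() == key_hash
```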

  16. Electron-translation effects in heavy-ion scattering

    International Nuclear Information System (INIS)

    Heinz, U.; Greiner, W.; Mueller, B.

    1981-01-01

    The origin and importance of electron-translation effects within a molecular description of electronic excitations in heavy-ion collisions are investigated. First, a fully consistent quantum-mechanical description of the scattering process is developed; the electrons are described by relativistic molecular orbitals, while the nuclear motion is approximated nonrelativistically. Leaving the quantum-mechanical level by using the semiclassical approximation for the nuclear motion, a set of coupled differential equations for the occupation amplitudes of the molecular orbitals is derived. In these coupled-channel equations the spurious asymptotic dynamical couplings are corrected for by additional matrix elements stemming from the electron translation. Hence, a molecular description of electronic excitations in heavy-ion scattering is achieved which is free from the spurious asymptotic couplings of the conventional perturbed stationary-state approach. The importance of electron-translation effects for continuum electrons and positrons is investigated. To this end an algorithm for the description of continuum electrons is proposed, which for the first time should allow the calculation of angular distributions for delta electrons. Finally, the practical consequences of electron-translation effects are studied by calculating the corrected coupling matrix elements for the Pb-Cm system and comparing the corresponding K-vacancy probabilities with conventional calculations. We critically discuss conventional methods for cutting off the coupling matrix elements in coupled-channel calculations.

  17. HECTOR 1.00. A program for the calculation of QED, QCD and electroweak corrections to ep and l±N deep inelastic neutral and charged current scattering

    International Nuclear Information System (INIS)

    Arbuzov, A.; Kalinovskaya, L.; Bardin, D.; Deutsches Elektronen-Synchrotron; Bluemlein, J.; Riemann, T.

    1995-11-01

    A description of the Fortran program HECTOR for a variety of semi-analytical calculations of radiative QED, QCD, and electroweak corrections to the double-differential cross sections of NC and CC deep inelastic charged lepton proton (or lepton deuteron) scattering is presented. HECTOR originates from the substantially improved and extended earlier programs HELIOS and TERAD91. It is mainly intended for applications at HERA or LEP×LHC, but may also be used for μN scattering in fixed target experiments. The QED corrections may be calculated in different sets of variables: leptonic, hadronic, mixed, Jacquet-Blondel, double angle, etc. Besides the leading logarithmic approximation up to order O(α²), exact O(α) corrections and inclusive soft photon exponentiation are taken into account. The photoproduction region is also covered. (orig.)

  18. Investigation of radiative corrections in the scattering at 180 deg. of 240 MeV positrons on atomic electrons

    International Nuclear Information System (INIS)

    Poux, J.P.

    1972-06-01

    In this research thesis, after a review of the processes of elastic scattering of positrons on electrons (kinematics and cross section) and of the radiative corrections involved, the author describes the experimental installation (positron beam, ionization chamber, targets, spectrometer, electronic logic associated with the counter telescope) used to measure the differential cross section of recoil electrons, and the methods employed. In a third part, the author reports the calculation of the corrections and the obtained spectra. In the final part, the author interprets the results and compares them with the experiment performed by Browman, Grossetete and Yount. The author shows that the two experiments are complementary to each other and are in agreement with the calculation performed by Yennie, Hearn and Kuo.

  19. Thomson scattering measurements in atmospheric plasma jets

    International Nuclear Information System (INIS)

    Gregori, G.; Schein, J.; Schwendinger, P.; Kortshagen, U.; Heberlein, J.; Pfender, E.

    1999-01-01

    Electron temperature and electron density in a dc plasma jet at atmospheric pressure have been obtained using Thomson laser scattering. Measurements performed at various scattering angles have revealed effects that are not accounted for by the standard scattering theory. Differences between the predicted and experimental results suggest that higher order corrections to the theory may be required, and that corrections to the form of the spectral density function may play an important role.

  20. Safety, Efficacy, Predictability and Stability Indices of Photorefractive Keratectomy for Correction of Myopic Astigmatism with Plano-Scan and Tissue-Saving Algorithms

    Directory of Open Access Journals (Sweden)

    Mehrdad Mohammadpour

    2013-10-01

    Purpose: To assess the safety, efficacy and predictability of photorefractive keratectomy (PRK) with the Tissue-saving (TS) versus Plano-scan (PS) ablation algorithms of the Technolas 217z excimer laser for correction of myopic astigmatism. Methods: In this retrospective study, one hundred and seventy eyes of 85 patients (107 eyes (62.9%) with the PS and 63 eyes (37.1%) with the TS algorithm) were included. The TS algorithm was applied for those with central corneal thickness less than 500 µm or estimated residual stromal thickness less than 420 µm. Mitomycin C (MMC) was applied for 120 eyes (70.6%) in case of an ablation depth more than 60 μm and/or astigmatic correction more than one diopter (D). Mean sphere, cylinder, spherical equivalent (SE) refraction, uncorrected visual acuity (UCVA) and best corrected visual acuity (BCVA) were measured preoperatively and at 4, 12 and 24 weeks postoperatively. Results: One, three and six months postoperatively, 60%, 92.9% and 97.5% of eyes had UCVA of 20/20 or better, respectively. Mean preoperative and 1-, 3- and 6-month postoperative SE were -3.48±1.28 D (-1.00 to -8.75), -0.08±0.62 D, -0.02±0.57 D and -0.004±0.29 D, respectively. Also, 87.6%, 94.1% and 100% of eyes were within ±1.0 D of emmetropia, and 68.2%, 75.3% and 95% were within ±0.5 D. The safety and efficacy indices were 0.99 and 0.99 at 12 weeks and 1.009 and 0.99 at 24 weeks, respectively. There was no clinically or statistically significant difference between the outcomes of the PS and TS algorithms, or between those with or without MMC in either group, in terms of safety, efficacy, predictability or stability. Dividing the eyes into those with subjective SE≤4 D and SE≥4 D postoperatively, there was no significant difference between the predictability of the two groups. There were no intra- or postoperative complications. Conclusion: Outcomes of PRK for correction of myopic astigmatism showed great promise with both the PS and TS algorithms.

  1. Comparison of different Aethalometer correction schemes and a reference multi-wavelength absorption technique for ambient aerosol data

    Directory of Open Access Journals (Sweden)

    J. Saturno

    2017-08-01

    Deriving absorption coefficients from Aethalometer attenuation data requires different corrections to compensate for artifacts related to filter-loading effects, scattering by filter fibers, and scattering by aerosol particles. In this study, two different correction schemes were applied to seven-wavelength Aethalometer data, using multi-angle absorption photometer (MAAP) data as a reference absorption measurement at 637 nm. The compensation algorithms were compared to five-wavelength offline absorption measurements obtained with a multi-wavelength absorbance analyzer (MWAA), which serves as a multiple-wavelength reference measurement. The online measurements took place in the Amazon rainforest, from the wet-to-dry transition season to the dry season (June–September 2014). The mean absorption coefficient (at 637 nm) during this period was 1.8 ± 2.1 Mm−1, with a maximum of 15.9 Mm−1. Under these conditions, the filter-loading compensation was negligible. One of the correction schemes was found to artificially increase the short-wavelength absorption coefficients. It was found that accounting for the aerosol optical properties in the scattering compensation significantly affects the absorption Ångström exponent (åABS) retrievals. Proper Aethalometer data compensation schemes are crucial to retrieve the correct åABS, which is commonly implemented in brown carbon contribution calculations. Additionally, we found that the wavelength dependence of uncompensated Aethalometer attenuation data significantly correlates with the åABS retrieved from offline MWAA measurements.
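
    The åABS retrieval mentioned above reduces to a power-law fit over wavelength; a minimal sketch with illustrative numbers follows.

```python
import numpy as np

# Minimal sketch: retrieve the absorption Angstrom exponent (aABS) from
# multi-wavelength absorption coefficients via a log-log power-law fit.
# The wavelengths and coefficients below are illustrative numbers only.

wavelengths = np.array([370., 470., 520., 590., 660., 880., 950.])  # nm
b_abs = 1.8 * (wavelengths / 637.0) ** -1.2                         # Mm^-1

# b_abs(lambda) ~ lambda^(-aABS)  =>  slope of ln(b) vs ln(lambda) is -aABS
slope, _ = np.polyfit(np.log(wavelengths), np.log(b_abs), 1)
angstrom_exponent = -slope
print(f"aABS = {angstrom_exponent:.2f}")   # recovers 1.20 here
```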

  2. Evaluation of Radiometric and Atmospheric Correction Algorithms for Aboveground Forest Biomass Estimation Using Landsat 5 TM Data

    Directory of Open Access Journals (Sweden)

    Pablito M. López-Serrano

    2016-04-01

    Solar radiation is affected by absorption and emission phenomena during its downward trajectory from the Sun to the Earth's surface and during the upward trajectory detected by satellite sensors. This leads to distortion of the ground radiometric properties (reflectance) recorded by satellite images, used in this study to estimate aboveground forest biomass (AGB). Atmospherically corrected remote sensing data can be used to estimate AGB on a global scale and with moderate effort. The objective of this study was to evaluate four atmospheric correction algorithms for surface reflectance, ATCOR2 (Atmospheric Correction for Flat Terrain), COST (Cosine of the Sun Zenith Angle), FLAASH (Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes) and 6S (Second Simulation of Satellite Signal in the Solar Spectrum), and one radiometric correction algorithm for reflectance at the sensor, ToA (Apparent Reflectance at the Top of Atmosphere), to estimate AGB in temperate forest in the northeast of the state of Durango, Mexico. The AGB was estimated from Landsat 5 TM imagery and ancillary information from a digital elevation model (DEM) using the non-parametric multivariate adaptive regression splines (MARS) technique. Field reference data for the model training were collected by systematic sampling of 99 permanent forest growth and soil research sites (SPIFyS) established during the winter of 2011. The following predictor variables were identified in the MARS model: Band 7, Band 5, slope (β), Wetness Index (WI), NDVI and MSAVI2. After cross-validation, 6S was found to be the optimal model for estimating AGB (R² = 0.71 and RMSE = 33.5 Mg·ha−1; 37.61% of the average stand biomass). We conclude that atmospheric and radiometric correction of satellite images can be used along with non-parametric techniques to estimate AGB with acceptable accuracy.

  3. Study for correction of neutron scattering in the calibration of the albedo individual monitor from the Neutron Laboratory (LN), IRD/CNEN-RJ, Brazil

    International Nuclear Information System (INIS)

    Freitas, B.M.; Silva, A.X. da

    2014-01-01

    The Instituto de Radioprotecao e Dosimetria (IRD) runs a neutron individual monitoring service with albedo-type monitors and thermoluminescent detectors (TLD). Moreover, the largest group of workers exposed to neutrons in Brazil is exposed to 241Am-Be fields. Therefore, a study of the response of the albedo dosemeter to neutron scattering from a 241Am-Be source is important for a proper calibration. In this work, the influence of the scattering correction on the calibration of that albedo dosemeter for a 241Am-Be source was evaluated at two distances at the Low Scattering Laboratory of the Neutron Laboratory of the Brazilian national metrology laboratory for ionizing radiation (Lab. Nacional de Metrologia Brasileira de Radiacoes Ionizantes). (author)

  4. Radiative corrections to high-energy neutrino scattering

    International Nuclear Information System (INIS)

    Rujula, A. de; Petronzio, R.; Savoy-Navarro, A.

    1979-01-01

    Motivated by precise neutrino experiments, the electromagnetic radiative corrections to the data are reconsidered. The usefulness and simplicity of the 'leading log' approximation are demonstrated: the calculation to order α ln(Q/μ), α ln(Q/m_q). Here Q is an energy scale of the overall process, μ is the lepton mass and m_q is a hadronic mass, the effective quark mass in a parton model. The leading log radiative corrections to dσ/dy distributions and to suitably interpreted dσ/dx distributions are quark-mass independent. The authors improve upon the conventional leading log approximation and compute explicitly the largest terms that lie beyond the leading log level. In practice this means that the model-independent formulae, though approximate, are likely to be excellent estimates everywhere except at low energy or very large y. It is pointed out that radiative corrections to measurements of deviations from the Callan-Gross relation and to measurements of the 'sea' constituency of nucleons are gigantic. The QCD-inspired study of deviations from scaling is of particular interest. The authors compute, beyond the leading log level, the radiative corrections to the QCD predictions. (Auth.)

  5. Correction of CT artifacts and its influence on Monte Carlo dose calculations

    International Nuclear Information System (INIS)

    Bazalova, Magdalena; Beaulieu, Luc; Palefsky, Steven; Verhaegen, Frank

    2007-01-01

    Computed tomography (CT) images of patients having metallic implants or dental fillings exhibit severe streaking artifacts. These artifacts may prevent tumor and organ delineation and compromise dose calculation outcomes in radiotherapy. We used a sinogram interpolation metal streaking artifact correction algorithm on several phantoms of exactly known compositions and on a prostate patient with two hip prostheses, and compared original CT images and artifact-corrected images of both. To evaluate the effect of the artifact correction on dose calculations, we performed Monte Carlo dose calculations with the EGSnrc/DOSXYZnrc code. For the phantoms, we performed calculations in the exact geometry, in the original CT geometry and in the artifact-corrected geometry for photon and electron beams. The maximum errors in 6 MV photon beam dose calculation were found to exceed 25% in original CT images when the standard DOSXYZnrc/CTCREATE calibration is used, but to be less than 2% in artifact-corrected images when an extended calibration is used. The extended calibration includes an extra calibration point for a metal. The patient dose volume histograms of a hypothetical target irradiated by five 18 MV photon beams in a hypothetical treatment differ significantly in the original CT geometry and in the artifact-corrected geometry. This was found to be mostly due to misassignment of tissue voxels to air caused by metal artifacts. We also developed a simple Monte Carlo model of a CT scanner and simulated the contribution of scatter and beam hardening to metal streaking artifacts. We found that whereas beam hardening has a minor effect on metal artifacts, scatter is an important cause of these artifacts.
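
    A schematic sketch of the sinogram-interpolation idea: detector bins flagged as lying in the metal trace are replaced by values interpolated from their uncorrupted neighbors, view by view. The sinogram and the metal trace below are synthetic; a real implementation finds the trace by segmenting and forward-projecting the metal.

```python
import numpy as np

# Schematic sketch of sinogram-interpolation metal artifact reduction.
# The sinogram and metal trace are synthetic placeholders.

n_views, n_bins = 180, 256
sino = np.random.rand(n_views, n_bins)            # placeholder sinogram
metal_trace = np.zeros_like(sino, dtype=bool)
metal_trace[:, 120:136] = True                    # bins shadowed by metal

corrected = sino.copy()
for v in range(n_views):
    bad = metal_trace[v]
    if bad.any():
        good = ~bad
        # linearly interpolate the corrupted detector bins from neighbors
        corrected[v, bad] = np.interp(np.flatnonzero(bad),
                                      np.flatnonzero(good),
                                      sino[v, good])
```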

  6. Large-angle hadron scattering at high energies

    International Nuclear Information System (INIS)

    Goloskokov, S.V.; Kudinov, A.V.; Kuleshov, S.P.

    1981-01-01

    Based on the quasipotential Logunov-Tavkhelidze approach, corrections to the amplitude of high-energy large-angle meson-nucleon scattering are estimated. The estimates are compared with the available experimental data on pp- and π±p-scattering, so as to check the adequacy of the suggested scheme to account for the preasymptotic effects. The compared results are presented in the form of tables and graphs. The following conclusions are drawn: 1. accounting for the long-range interaction corrections to the main asymptotic term of the amplitude gives good agreement between the theoretical and experimental data; 2. in the case of π±p-scattering the corrections prove to be comparable with the main asymptotic term up to transferred momenta of about 50 GeV/c, which results in a noticeable deviation from the quark counting rules at such energies. Nevertheless, the preasymptotic formulae work well beginning with about 6 GeV/c. In the case of pp-scattering the corrections are mutually compensated to a considerable degree, and the deviation from the quark counting rules is negligible.

  7. Development of a golden beam data set for the commissioning of a proton double-scattering system in a pencil-beam dose calculation algorithm

    International Nuclear Information System (INIS)

    Slopsema, R. L.; Flampouri, S.; Yeung, D.; Li, Z.; Lin, L.; McDonough, J. E.; Palta, J.

    2014-01-01

    Purpose: The purpose of this investigation is to determine if a single set of beam data, described by a minimal set of equations and fitting variables, can be used to commission different installations of a proton double-scattering system in a commercial pencil-beam dose calculation algorithm. Methods: The beam model parameters required to commission the pencil-beam dose calculation algorithm (virtual and effective SAD, effective source size, and pristine-peak energy spread) are determined for a commercial double-scattering system. These parameters are measured in a first room and parameterized as a function of proton energy and nozzle settings by fitting four analytical equations to the measured data. The combination of these equations and fitting values constitutes the golden beam data (GBD). To determine the variation in dose delivery between installations, the same dosimetric properties are measured in two additional rooms at the same facility, as well as in a single room at another facility. The difference between the room-specific measurements and the GBD is evaluated against tolerances that guarantee the 3D dose distribution in each of the rooms matches the GBD-based dose distribution within clinically reasonable limits. The pencil-beam treatment-planning algorithm is commissioned with the GBD. The three-dimensional dose distribution in water is evaluated in the four treatment rooms and compared to the treatment-planning calculated dose distribution. Results: The virtual and effective SAD measurements fall between 226 and 257 cm. The effective source size varies between 2.4 and 6.2 cm for the large-field options, and 1.0 and 2.0 cm for the small-field options. The pristine-peak energy spread decreases from 1.05% at the lowest range to 0.6% at the highest. The virtual SAD as well as the effective source size can be accurately described by a linear relationship as a function of the inverse of the residual energy. An additional linear correction term as a function of

  9. A correction scheme for a simplified analytical random walk model algorithm of proton dose calculation in distal Bragg peak regions

    Science.gov (United States)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed the primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water, and by evaluating dose distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.
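
    The voxel-specific Gaussian idea can be sketched as follows: instead of one lateral Gaussian per beamlet, each lateral voxel is assigned its own width. The widths and geometry below are invented for illustration; a real model re-initializes them on the effective surface described above.

```python
import numpy as np

# Illustrative sketch of replacing a single lateral Gaussian per beamlet
# with a voxel-specific Gaussian. Sigmas and geometry are made-up values;
# a real model derives each sigma from scattering physics.

x = np.linspace(-30.0, 30.0, 121)            # lateral voxel centers (mm)

def gaussian_fluence(x, sigma):
    return np.exp(-0.5 * (x / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

# Conventional: one sigma for all lateral voxels at this depth.
fluence_single = gaussian_fluence(x, sigma=5.0)

# Correction scheme: each lateral voxel gets its own sigma, e.g. a larger
# width behind a low-density heterogeneity (here mimicked by a step).
sigma_voxel = np.where(x > 0, 7.5, 5.0)
fluence_voxel = np.array([gaussian_fluence(xi, s)
                          for xi, s in zip(x, sigma_voxel)])
```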

  10. 4D cone-beam computed tomography (CBCT) using a moving blocker for simultaneous radiation dose reduction and scatter correction

    Science.gov (United States)

    Zhao, Cong; Zhong, Yuncheng; Duan, Xinhui; Zhang, You; Huang, Xiaokun; Wang, Jing; Jin, Mingwu

    2018-06-01

    Four-dimensional (4D) x-ray cone-beam computed tomography (CBCT) is important for precise radiation therapy of lung cancer. Due to the repeated use and 4D acquisition over a course of radiotherapy, the radiation dose becomes a concern. Meanwhile, scatter contamination in CBCT deteriorates image quality for treatment tasks. In this work, we propose the use of a moving blocker (MB) during the 4D CBCT acquisition (‘4D MB’), combined with motion-compensated reconstruction, to address these two issues simultaneously. In 4D MB CBCT, the moving blocker reduces the x-ray flux passing through the patient and at the same time collects the scatter information in the blocked region. The scatter signal is estimated from the blocked region for correction. Even though the number of projection views and the projection data in each view are not complete for conventional reconstruction, 4D reconstruction with a total-variation (TV) constraint and a motion-compensated temporal constraint can utilize both spatial gradient sparsity and temporal correlations among different phases to overcome the missing-data problem. Feasibility simulation studies using the 4D NCAT phantom showed that 4D MB with motion-compensated reconstruction at 1/3 of the imaging dose could produce satisfactory images, achieving a 37% improvement in the structural similarity (SSIM) index and a 55% improvement in root mean square error (RMSE) compared to 4D reconstruction at the regular imaging dose without scatter correction. For the same 4D MB data, 4D reconstruction outperformed 3D TV reconstruction by 28% on SSIM and 34% on RMSE. A study of synthetic patient data also demonstrated the potential of 4D MB to reduce the radiation dose by 1/3 without compromising image quality. This work paves the way for more comprehensive studies to investigate the dose reduction limit offered by this novel 4D MB method using physical phantom experiments and real patient data based on clinically relevant metrics.

  11. Iterative CT shading correction with no prior information

    Science.gov (United States)

    Wu, Pengwei; Sun, Xiaonan; Hu, Hongjie; Mao, Tingyu; Zhao, Wei; Sheng, Ke; Cheung, Alice A.; Niu, Tianye

    2015-11-01

    Shading artifacts in CT images are caused by scatter contamination, the beam-hardening effect and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge that the CT number distribution within one tissue component is relatively uniform. The CT image is first segmented to construct a template image where each structure is filled with the same CT number of a specific tissue type. Then, by subtracting the ideal template from the CT image, the residual image arising from the various error sources is generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes shading artifacts. A compensation map is reconstructed from the filtered line-integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan©600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast objects are faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is assisted only by general anatomical information, without relying on prior knowledge. The proposed method is thus practical.
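
    A schematic sketch of the iteration loop follows. For brevity, the forward-projection, line-integral filtering, and FDK reconstruction round trip is replaced here by an image-domain low-pass, and the tissue CT numbers, smoothing width, and iteration count are illustrative assumptions, not the published settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Schematic sketch of iterative template-based shading correction. The
# projection/FDK round trip is stood in for by an image-domain low-pass;
# tissue CT numbers, sigma, and n_iter are illustrative assumptions.

TISSUES = np.array([-1000.0, 0.0, 300.0])     # air, soft tissue, bone (HU)

def make_template(img):
    """Segment by nearest tissue class and fill with its nominal CT number."""
    idx = np.abs(img[..., None] - TISSUES).argmin(axis=-1)
    return TISSUES[idx]

def correct_shading(img, n_iter=5, sigma=20.0):
    corrected = img.copy()
    for _ in range(n_iter):
        residual = make_template(corrected) - corrected
        # stand-in for: forward project, low-pass the line integrals, and
        # reconstruct a smooth compensation map with FDK
        compensation = gaussian_filter(residual, sigma)
        corrected = corrected + compensation  # add the map back
    return corrected

img = np.zeros((64, 64)) + np.linspace(0.0, 80.0, 64)  # shaded soft tissue
out = correct_shading(img)                              # ramp is flattened
```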

  12. Evaluation of dose calculation algorithms using the treatment planning system XiO with tissue heterogeneity correction turned on

    International Nuclear Information System (INIS)

    Fairbanks, Leandro R.; Barbi, Gustavo L.; Silva, Wiliam T.; Reis, Eduardo G.F.; Borges, Leandro F.; Bertucci, Edenyse C.; Maciel, Marina F.; Amaral, Leonardo L.

    2011-01-01

    Since the cross-section for various radiation interactions depends on the tissue material, the presence of heterogeneities affects the final dose delivered. This paper aims to analyze how different treatment planning algorithms (Fast Fourier Transform, Convolution, Superposition, Fast Superposition and Clarkson) perform when heterogeneity corrections are used. To that end, a Farmer-type ionization chamber was positioned reproducibly (during CT acquisition as well as during irradiation) inside several phantoms made of aluminum, bone, cork and solid water slabs. The percent difference between the dose measured and that calculated by the various algorithms was less than 5%. The Convolution method shows better results for high-density materials (difference ∼1%), whereas the Superposition algorithm is more accurate for low densities (around 1.1%). (author)

  13. Neutron Inelastic Scattering Study of Liquid Argon

    Energy Technology Data Exchange (ETDEWEB)

    Skoeld, K; Rowe, J M; Ostrowski, G [Solid State Science Div., Argonne National Laboratory, Argonne, Illinois (US); Randolph, P D [Nuclear Technology Div., Idaho Nuclear Corporation, Idaho Falls, Idaho (US)

    1972-02-15

    The inelastic scattering functions for liquid argon have been measured at 85.2 K. The coherent scattering function was obtained from a measurement on pure A-36, and the incoherent function was derived from the result obtained from the A-36 sample and the result obtained from a mixture of A-36 and A-40 for which the scattering is predominantly incoherent. The data, which are presented as smooth scattering functions at constant values of the wave vector transfer in the range 10-44 nm⁻¹, are corrected for multiple scattering contributions and for resolution effects. Such corrections are shown to be essential for deriving reliable scattering functions from neutron scattering data. The incoherent data are compared to recent molecular dynamics results, and the mean square displacement as a function of time is derived. The coherent data are compared to molecular dynamics results and also, briefly, to some recent theoretical models.

  14. Scattering of targets over layered half space using a semi-analytic method in conjunction with FDTD algorithm.

    Science.gov (United States)

    Cao, Le; Wei, Bing

    2014-08-25

    A finite-difference time-domain (FDTD) algorithm with a new method of plane wave excitation is used to investigate the RCS (radar cross section) characteristics of targets over a layered half space. Compared with the traditional plane wave excitation method, the memory and computation time requirements are greatly decreased. The FDTD calculation is performed with a plane wave incidence, and the far-field RCS is obtained by extrapolating the currently calculated data on the output boundary. However, the methods available for extrapolation have to evaluate the half-space Green function. In this paper, a new method which avoids using the complex and time-consuming half-space Green function is proposed. Numerical results show that this method is in good agreement with the classic algorithm and that it can be used for fast calculation of the scattering and radiation of targets over a layered half space.

  15. Effect of inter-crystal scatter on estimation methods for random coincidences and subsequent correction

    International Nuclear Information System (INIS)

    Torres-Espallardo, I; Spanoudaki, V; Ziegler, S I; Rafecas, M; McElroy, D P

    2008-01-01

    Random coincidences can contribute substantially to the background in positron emission tomography (PET). Several estimation methods are in use for correcting them. The goal of this study was to investigate the validity of techniques for random coincidence estimation with various low-energy thresholds (LETs). Simulated singles list-mode data of the MADPET-II small animal PET scanner were used as input. The simulations were performed using the GATE simulation toolkit. Several sources with different geometries were employed. We evaluated the number of random events using three methods: delayed window (DW), singles rate (SR) and time histogram fitting (TH). Since the GATE simulations allow random and true coincidences to be distinguished, a comparison between the number of random coincidences estimated using the standard methods and the number obtained using GATE was performed. An overestimation of the number of random events was observed using the DW and SR methods. This overestimation decreases for LETs higher than 255 keV. It is additionally reduced when the single events which have undergone a Compton interaction in a crystal before being detected are removed from the data. These two observations lead us to infer that the overestimation is due to inter-crystal scatter. The effect of this mismatch in the reconstructed images is important for quantification because it leads to an underestimation of activity. This was shown using a hot-cold-background source with 3.7 MBq total activity in the background region and 1.59 MBq total activity in the hot region. For both the 200 keV and 400 keV LETs, an overestimation of random coincidences for the DW and SR methods was observed, resulting in approximately 1.5% or more (at 200 keV LET: 1.7% for DW and 7% for SR) and less than 1% (at 400 keV LET: both methods) underestimation of activity within the background region. In almost all cases, images obtained by compensating for random events in the reconstruction
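
    For reference, the singles-rate estimate discussed above follows a simple closed form; a minimal sketch with illustrative numbers (the coincidence window τ and the singles rates are assumed values):

```python
# Minimal sketch of the singles-rate (SR) estimate of random coincidences
# for one detector pair: R_ij = 2 * tau * S_i * S_j, where tau is the
# coincidence window and S_i, S_j are the singles rates. All numbers are
# illustrative only.

tau = 4e-9                 # coincidence window (s), assumed value
s_i, s_j = 2.0e5, 1.8e5    # singles rates of the two detectors (counts/s)

randoms_rate = 2.0 * tau * s_i * s_j
print(f"estimated randoms rate: {randoms_rate:.1f} coincidences/s")
# The delayed-window (DW) method instead counts coincidences in a window
# delayed well beyond tau, so only uncorrelated (random) pairs survive.
```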

  16. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    Science.gov (United States)

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood.

  17. Topics in bound-state dynamical processes: semiclassical eigenvalues, reactive scattering kernels and gas-surface scattering models

    International Nuclear Information System (INIS)

    Adams, J.E.

    1979-05-01

    The difficulty of applying the WKB approximation to problems involving arbitrary potentials is confronted. Recent work has produced a convenient expression for the potential correction term. However, this approach does not yield a unique correction term and hence cannot be used to construct the proper modification. An attempt is made to overcome this non-uniqueness by imposing a criterion which permits identification of the correct modification. The sections of this work are: semiclassical eigenvalues for potentials defined on a finite interval; reactive scattering exchange kernels; a unified model for elastic and inelastic scattering from a solid surface; and selective adsorption on a solid surface.

  18. SU-F-T-143: Implementation of a Correction-Based Output Model for a Compact Passively Scattered Proton Therapy System

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, S; Ahmad, S; Chen, Y; Ferreira, C; Islam, M; Lau, A; Jin, H [University of Oklahoma Health Sciences Center, Oklahoma City, OK (United States); Keeling, V [Carti, Inc., Little Rock, AR (United States)

    2016-06-15

    Purpose: To commission and investigate the accuracy of an output (cGy/MU) prediction model for a compact passively scattered proton therapy system. Methods: A previously published output prediction model (Sahoo et al, Med Phys, 35, 5088-5097, 2008) was commissioned for our Mevion S250 proton therapy system. This is a correction-based model that multiplies correction factors (d/MU_wnc = ROF × SOBPF × RSF × SOBPOCF × OCR × FSF × ISF). These factors account for changes in output due to options (12 large, 5 deep, and 7 small), modulation width M, range R, off-center position, off-axis position, field size, and off-isocenter position. In this study, the model was modified to ROF × SOBPF × RSF × OCR × FSF × ISF-OCF × GACF by merging SOBPOCF and ISF for simplicity and introducing a gantry angle correction factor (GACF). To commission the model, over 1,000 output data points were taken at the time of system commissioning. The output was predicted by interpolation (1D for SOBPF, FSF, and GACF; 2D for RSF and OCR) combined with an inverse-square calculation (ISF-OCF). The outputs of 273 combinations of R and M covering all 24 options were measured to test the model. To minimize fluence perturbation, scattered dose from the range compensator and the patient was not considered. The percent differences between the predicted (P) and measured (M) outputs were calculated to test the prediction accuracy ([P-M]/M×100%). Results: A GACF was required because of up to 3.5% output variation dependence on the gantry angle. A 2D interpolation was required for OCR because the dose distribution was not radially symmetric, especially for the deep options. The average percent difference was -0.03±0.98% (mean±SD) and the differences for all the measurements fell within ±3%. Conclusion: The model can be used clinically for the compact passively scattered proton therapy system. However, great care should be taken when the field size is less than 5×5 cm², where a direct output measurement is required due to substantial
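
    The structure of such a correction-based output model can be sketched as a product of interpolated factor tables; all tables and numbers below are illustrative stand-ins, not the commissioned Mevion S250 data.

```python
import numpy as np

# Sketch of a correction-based output model: the output (cGy/MU) is a
# product of tabulated correction factors, each interpolated from
# commissioning measurements. All tables and values are illustrative.

ROF = 0.95                                       # relative output factor

# 1D factor tables, e.g. SOBP factor vs modulation width (cm)
mod_widths = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
sobpf_tab  = np.array([1.10, 1.05, 1.00, 0.96, 0.93])

gantry_angles = np.array([0.0, 90.0, 180.0, 270.0, 360.0])
gacf_tab      = np.array([1.000, 1.015, 1.035, 1.015, 1.000])

def predict_output(mod_width, gantry_angle, ssd, sad=227.0):
    sobpf = np.interp(mod_width, mod_widths, sobpf_tab)
    gacf = np.interp(gantry_angle, gantry_angles, gacf_tab)
    isf = (sad / ssd) ** 2      # inverse-square factor off isocenter
    return ROF * sobpf * gacf * isf

print(predict_output(mod_width=5.0, gantry_angle=45.0, ssd=230.0))
```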

  19. A simple algorithm for subregional striatal uptake analysis with partial volume correction in dopaminergic PET imaging

    International Nuclear Information System (INIS)

    Lue Kunhan; Lin Hsinhon; Chuang Kehshih; Kao Chihhao, K.; Hsieh Hungjen; Liu Shuhsin

    2014-01-01

    In positron emission tomography (PET) of the dopaminergic system, quantitative measurements of nigrostriatal dopamine function are useful for differential diagnosis. A subregional analysis of striatal uptake makes the diagnostic performance more powerful. However, the partial volume effect (PVE) induces an underestimation of the true radioactivity concentration in small structures. This work proposes a simple algorithm for subregional analysis of striatal uptake with partial volume correction (PVC) in dopaminergic PET imaging. The PVC algorithm analyzes the separate striatal subregions and takes the PVE into account based on the recovery coefficient (RC). The RC is defined as the ratio of the PVE-uncorrected to the PVE-corrected radioactivity concentration, and is derived from a combination of the traditional volume-of-interest (VOI) analysis and the large-VOI technique. Clinical studies, comprising 11 patients with Parkinson's disease (PD) and 6 healthy subjects, were used to assess the impact of PVC on the quantitative measurements. Simulations on a numerical phantom that mimicked realistic healthy and neurodegenerative situations were used to evaluate the performance of the proposed PVC algorithm. In both the clinical and the simulation studies, the striatal-to-occipital ratio (SOR) values for the entire striatum and its subregions were calculated with and without PVC. In the clinical studies, the SOR values in each structure (caudate, anterior putamen, posterior putamen, putamen, and striatum) were significantly higher with PVC than without. Among the PD patients, the SOR values in each structure and the quantitative disease severity ratings were significantly related only when PVC was used. For the simulation studies, the average absolute percentage error of the SOR estimates before and after PVC was 22.74% and 1.54%, respectively, in the healthy situation; those in the neurodegenerative situation were 20.69% and 2
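
    The RC-based correction itself is a simple division; the sketch below applies illustrative recovery coefficients to made-up VOI means and recomputes the SOR (taken here as a plain ratio):

```python
# Minimal sketch of recovery-coefficient (RC) style partial volume
# correction: the corrected concentration is the measured VOI mean divided
# by an RC derived beforehand (e.g. from phantom or template studies).
# All RCs, VOI means, and the plain-ratio SOR convention are illustrative.

def sor(striatal_mean, occipital_mean):
    """Striatal-to-occipital ratio used as the uptake measure."""
    return striatal_mean / occipital_mean

rc = {"caudate": 0.62, "anterior_putamen": 0.70, "posterior_putamen": 0.66}
measured = {"caudate": 14.2, "anterior_putamen": 15.8, "posterior_putamen": 12.1}
occipital = 6.0

for region, value in measured.items():
    corrected = value / rc[region]        # undo the partial volume loss
    print(region, round(sor(value, occipital), 2),
          "->", round(sor(corrected, occipital), 2))
```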

  20. Inelastic neutron scattering, Raman, vibrational analysis with anharmonic corrections, and scaled quantum mechanical force field for polycrystalline L-alanine

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Robert W. [Department of Biomedical Informatics, Uniformed Services University, 4301 Jones Bridge Road, Bethesda, MD 20815 (United States)], E-mail: bob@bob.usuhs.mil; Schluecker, Sebastian [Institute of Physical Chemistry, University of Wuerzburg, Wuerzburg (Germany); Hudson, Bruce S. [Department of Chemistry, Syracuse University, Syracuse, NY (United States)

    2008-01-22

    A scaled quantum mechanical harmonic force field (SQMFF) corrected for anharmonicity is obtained for the 23 K L-alanine crystal structure using van der Waals corrected periodic boundary condition density functional theory (DFT) calculations with the PBE functional. Scale factors are obtained with comparisons to inelastic neutron scattering (INS), Raman, and FT-IR spectra of polycrystalline L-alanine at 15-23 K. Calculated frequencies for all 153 normal modes differ from observed frequencies with a standard deviation of 6 wavenumbers. Non-bonded external k = 0 lattice modes are included, but assignments to these modes are presently ambiguous. The extension of SQMFF methodology to lattice modes is new, as are the procedures used here for providing corrections for anharmonicity and van der Waals interactions in DFT calculations on crystals. First principles Born-Oppenheimer molecular dynamics (BOMD) calculations are performed on the L-alanine crystal structure at a series of classical temperatures ranging from 23 K to 600 K. Corrections for zero-point energy (ZPE) are estimated by finding the classical temperature that reproduces the mean square displacements (MSDs) measured from the diffraction data at 23 K. External k = 0 lattice motions are weakly coupled to bonded internal modes.

  2. Heavy ion elastic scatterings

    International Nuclear Information System (INIS)

    Mermaz, M.C.

    1984-01-01

    Diffraction and refraction play an important role in particle elastic scattering. The optical model treats both phenomena correctly and simultaneously, but without disentangling them. Semi-classical discussions in terms of trajectories emphasize the refractive aspect due to the real part of the optical potential. The separation, due to R.C. Fuller, of the quantal cross section into two components coming from opposite sides of the target nucleus allows a better understanding of the refractive phenomenon and of the origin of the observed oscillations in the elastic scattering angular distributions. We shall see that the real part of the potential is responsible for a Coulomb and a nuclear rainbow, which allows a better determination of the nuclear potential in the interior region near the nuclear surface, since volume absorption eliminates any effect of the real part of the potential for the internal partial scattering waves. Resonance phenomena seen in heavy-ion scattering are discussed in terms of optical model potential and Regge pole analysis. Compound nucleus resonances or quasi-molecular states may indeed be the more correct and fundamental alternative.

  3. The Bouguer Correction Algorithm for Gravity with Limited Range

    OpenAIRE

    MA Jian; WEI Ziqing; WU Lili; YANG Zhenghui

    2017-01-01

    The Bouguer correction is an important item in gravity reduction, but the traditional Bouguer correction, whether the plane Bouguer correction or the spherical Bouguer correction, suffers from an approximation error caused by far-zone virtual terrain. The error grows as the calculation point gets higher. Therefore, gravity reduction using the Bouguer correction with limited range, in accordance with the scope of the topographic correction, was researched in this paper. After that, a simpli...

  4. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  5. A parallel wavelet-enhanced PWTD algorithm for analyzing transient scattering from electrically very large PEC targets

    KAUST Repository

    Liu, Yang

    2014-07-01

    The computational complexity and memory requirements of classically formulated marching-on-in-time (MOT)-based surface integral equation (SIE) solvers scale as O(N_t N_s²) and O(N_s²), respectively; here N_t and N_s denote the number of temporal and spatial degrees of freedom of the current density. The multilevel plane wave time domain (PWTD) algorithm, viz., the time domain counterpart of the multilevel fast multipole method, reduces these costs to O(N_t N_s log² N_s) and O(N_s^1.5) (Ergin et al., IEEE Trans. Antennas Mag., 41, 39-52, 1999). Previously, PWTD-accelerated MOT-SIE solvers have been used to analyze transient scattering from perfect electrically conducting (PEC) and homogeneous dielectric objects discretized in terms of a million spatial unknowns (Shanker et al., IEEE Trans. Antennas Propag., 51, 628-641, 2003). More recently, an efficient parallelized solver that employs an advanced hierarchical and provably scalable spatial, angular, and temporal load partitioning strategy has been developed to analyze transient scattering problems that involve ten million spatial unknowns (Liu et al., in URSI Digest, 2013).

  6. Influence on dose calculation by difference of dose calculation algorithms in stereotactic lung irradiation. Comparison of pencil beam convolution (inhomogeneity correction: batho power law) and analytical anisotropic algorithm

    International Nuclear Information System (INIS)

    Tachibana, Masayuki; Noguchi, Yoshitaka; Fukunaga, Jyunichi; Hirano, Naomi; Yoshidome, Satoshi; Hirose, Takaaki

    2009-01-01

    In this stereotactic lung irradiation study, the monitor units (MU) were calculated with pencil beam convolution using the Batho power law inhomogeneity correction [PBC (BPL)], the measurement-based dose calculation algorithm used in the past. The recalculation was done with the analytical anisotropic algorithm (AAA), a model-based dose calculation algorithm. The MU calculated by PBC (BPL) and by AAA were compared for each field. In the comparison of 1031 fields in 136 cases, the MU calculated by PBC (BPL) was about 2% smaller than that calculated by AAA. This depends on whether the calculation accounts for the lateral spread of secondary electrons. In particular, the difference in MU is influenced by the X-ray energy. For the same X-ray energy, the difference in MU increases when the irradiation field size is small, the lung path length is long, the lung path length percentage is large, and the CT value of the lung is low. (author)

  7. The effect of scatter correction on 123I-IMP brain perfusion SPET with the triple energy window method in normal subjects using SPM analysis

    Energy Technology Data Exchange (ETDEWEB)

    Shiga, Tohru; Takano, Akihiro; Tsukamoto, Eriko; Tamaki, Nagara [Department of Nuclear Medicine, Hokkaido University School of Medicine, Sapporo (Japan); Kubo, Naoki [Department of Radiological Technology, College of Medical Technology, Hokkaido University, Sapporo (Japan); Kobayashi, Junko; Takeda, Yoji; Nakamura, Fumihiro; Koyama, Tsukasa [Department of Psychiatry and Neurology, Hokkaido University School of Medicine, Sapporo (Japan); Katoh, Chietsugu [Department of Tracer Kinetics, Hokkaido University School of Medicine, Sapporo (Japan)

    2002-03-01

    Scatter correction (SC) using the triple energy window method (TEW) has recently been applied for brain perfusion single-photon emission tomography (SPET). The aim of this study was to investigate the effect of scatter correction using TEW on N-isopropyl-p-[123I]iodoamphetamine (123I-IMP) SPET in normal subjects. The study population consisted of 15 right-handed normal subjects. SPET data were acquired from 20 min to 40 min after the injection of 167 MBq of IMP, using a triple-head gamma camera. Images were reconstructed with and without SC. 3D T1-weighted magnetic resonance (MR) images were also obtained with a 1.5-Tesla scanner. First, IMP images with and without SC were co-registered to the 3D MRI. Second, the two co-registered IMP images were normalised using SPM96. A t statistic image for the contrast condition effect was constructed. We investigated areas using a voxel-level threshold of 0.001, with a corrected threshold of 0.05. Compared with results obtained without SC, the IMP distribution with SC was significantly decreased in the peripheral areas of the cerebellum, the cortex and the ventricle, and also in the lateral occipital cortex and the base of the temporal lobe. On the other hand, the IMP distribution with SC was significantly increased in the anterior and posterior cingulate cortex, the insular cortex and the medial part of the thalamus. It is concluded that differences in the IMP distribution with and without SC exist not only in the peripheral areas of the cerebellum, the cortex and the ventricle but also in the occipital lobe, the base of the temporal lobe, the insular cortex, the medial part of the thalamus, and the anterior and posterior cingulate cortex. This needs to be recognised for adequate interpretation of IMP brain perfusion SPET after scatter correction. (orig.)
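
    For reference, the TEW estimate approximates the scatter in the photopeak window by a trapezoid spanned by two narrow flanking windows; a minimal sketch with illustrative counts and window widths:

```python
# Minimal sketch of the triple-energy-window (TEW) scatter estimate:
# scatter inside the photopeak window is approximated by a trapezoid
# spanned by two narrow sub-windows on either side. All counts and
# window widths below are illustrative numbers.

c_main, w_main = 12000.0, 24.0   # photopeak counts, window width (keV)
c_low,  w_low  = 900.0,   3.0    # lower sub-window counts, width (keV)
c_high, w_high = 300.0,   3.0    # upper sub-window counts, width (keV)

scatter = (c_low / w_low + c_high / w_high) * w_main / 2.0
primary = c_main - scatter
print(f"scatter estimate: {scatter:.0f}, primary counts: {primary:.0f}")
```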

  8. Monte Carlo generator ELRADGEN 2.0 for simulation of radiative events in elastic ep-scattering of polarized particles

    Science.gov (United States)

    Akushevich, I.; Filoti, O. F.; Ilyichev, A.; Shumeiko, N.

    2012-07-01

    The structure and algorithms of the Monte Carlo generator ELRADGEN 2.0 designed to simulate radiative events in polarized ep-scattering are presented. The full set of analytical expressions for the QED radiative corrections is presented and discussed in detail. Algorithmic improvements implemented to provide faster simulation of hard real photon events are described. Numerical tests show high quality of generation of photonic variables and radiatively corrected cross section. The comparison of the elastic radiative tail simulated within the kinematical conditions of the BLAST experiment at MIT BATES shows a good agreement with experimental data. Catalogue identifier: AELO_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELO_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1299 No. of bytes in distributed program, including test data, etc.: 11 348 Distribution format: tar.gz Programming language: FORTRAN 77 Computer: All Operating system: Any RAM: 1 MB Classification: 11.2, 11.4 Nature of problem: Simulation of radiative events in polarized ep-scattering. Solution method: Monte Carlo simulation according to the distributions of the real photon kinematic variables that are calculated by the covariant method of QED radiative correction estimation. The approach provides rather fast and accurate generation. Running time: The simulation of 10^8 radiative events for itest:=1 takes up to 52 seconds on a Pentium(R) Dual-Core 2.00 GHz processor.

  9. Application of Deconvolution Algorithm of Point Spread Function in Improving Image Quality: An Observer Preference Study on Chest Radiography.

    Science.gov (United States)

    Chae, Kum Ju; Goo, Jin Mo; Ahn, Su Yeon; Yoo, Jin Young; Yoon, Soon Ho

    2018-01-01

    To evaluate observer preference for the image quality of chest radiographs processed with a deconvolution algorithm of the point spread function (PSF) (TRUVIEW ART algorithm, DRTECH Corp.) compared with original chest radiography for visualization of anatomic regions of the chest. Fifty prospectively enrolled pairs of posteroanterior chest radiographs, collected with the standard protocol and with the additional TRUVIEW ART algorithm, were compared by four chest radiologists. This algorithm corrects scattered signals generated by a scintillator. Readers independently evaluated the visibility of 10 anatomical regions and overall image quality with a 5-point preference scale. The significance of the differences in the readers' preference was tested with a Wilcoxon signed rank test. All four readers preferred the images with the algorithm to those without it for all 10 anatomical regions (mean, 3.6; range, 3.2-4.0; p < 0.05). The visibility of chest anatomical structures with the deconvolution algorithm of the PSF was superior to that of original chest radiography.

  10. Efficient sampling algorithms for Monte Carlo based treatment planning

    International Nuclear Information System (INIS)

    DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.

    1998-01-01

    Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six than the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed.
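
    A minimal sketch of the delta scattering (Woodcock tracking) idea in one dimension, assuming a known majorant attenuation coefficient; the geometry and coefficients are illustrative, not those of the study.

        import numpy as np

        rng = np.random.default_rng(0)

        def delta_track(x0, mu, mu_max, x_end):
            """Photon tracking by delta scattering (Woodcock tracking) in 1D.

            mu: attenuation coefficient as a function of position (1/cm);
            mu_max: a majorant with mu_max >= mu(x) everywhere (required).
            Returns the first real collision site, or None if the photon
            escapes past x_end, with no per-boundary checks needed.
            """
            x = x0
            while True:
                x += -np.log(rng.random()) / mu_max      # tentative free flight
                if x >= x_end:
                    return None                          # escaped the geometry
                if rng.random() < mu(x) / mu_max:
                    return x                             # real (not virtual) collision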

  11. Sensitivity of Depth-Integrated Satellite Lidar to Subaqueous Scattering

    Directory of Open Access Journals (Sweden)

    Michael F. Jasinski

    2011-07-01

    A method is presented for estimating subaqueous integrated backscatter using near-nadir viewing satellite lidar. The algorithm takes into account specular reflection of laser light, laser scattering by wind-generated foam as well as sun glint and solar scattering from foam. The formulation is insensitive to the estimate of wind speed but sensitive to the estimate of transmittance used in the atmospheric correction. As a case study, CALIOP data over Tampa Bay were compared to MODIS 645 nm remote sensing reflectance, which previously has been shown to be nearly linearly related to turbidity. The results indicate good correlation on nearly all CALIOP cloud-free dates during the period 2006 through 2007, particularly those with relatively high atmospheric transmittance. The correlation decreases when data are composited over all dates but is still statistically significant, a possible indication of variability in the biogeochemical composition in the water. Overall, the favorable results show promise for the application of satellite lidar integrated backscatter in providing information about subsurface backscatter properties, which can be extracted using appropriate models.

  12. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser, Shafi

    2012-01-01

    In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...

  13. Deconvolution of shift-variant broadening for Compton scatter imaging

    International Nuclear Information System (INIS)

    Evans, Brian L.; Martin, Jeffrey B.; Roggemann, Michael C.

    1999-01-01

    A technique is presented for deconvolving shift-variant Doppler broadening of singly Compton scattered gamma rays from their recorded energy distribution. Doppler broadening is important in Compton scatter imaging techniques employing gamma rays with energies below roughly 100 keV. The deconvolution unfolds an approximation to the angular distribution of scattered photons from their recorded energy distribution in the presence of statistical noise and background counts. Two unfolding methods are presented, one based on a least-squares algorithm and one based on a maximum likelihood algorithm. Angular distributions unfolded from measurements made on small scattering targets show less evidence of Compton broadening. This deconvolution is shown to improve the quality of filtered backprojection images in multiplexed Compton scatter tomography. Improved sharpness and contrast are evident in the images constructed from unfolded signals.
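
    As a generic illustration of the least-squares variant (the response matrix and binning are assumed inputs, not taken from the paper), a non-negative least-squares unfold can be sketched as:

        import numpy as np
        from scipy.optimize import nnls

        def unfold_angles(energy_counts, response):
            """Least-squares unfolding of a scattering-angle distribution.

            response[i, j]: probability that a photon scattered into angle
            bin j is recorded in energy bin i (the Doppler-broadened
            response, assumed precomputed); the non-negativity constraint
            keeps the unfolded angular distribution physical.
            """
            angles, _residual = nnls(response, energy_counts)
            return angles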

  14. A New Quaternion-Based Kalman Filter for Real-Time Attitude Estimation Using the Two-Step Geometrically-Intuitive Correction Algorithm.

    Science.gov (United States)

    Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun

    2017-09-19

    In order to reduce the computational complexity, and improve the pitch/roll estimation accuracy of the low-cost attitude heading reference system (AHRS) under conditions of magnetic-distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is the combination of two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimation of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable for attitude estimation under various dynamic conditions.

  15. Adaptive algorithms for a self-shielding wavelet-based Galerkin method

    International Nuclear Information System (INIS)

    Fournier, D.; Le Tellier, R.

    2009-01-01

    The treatment of the energy variable in deterministic neutron transport methods is based on a multigroup discretization, considering the flux and cross-sections to be constant within a group. In this case, a self-shielding calculation is mandatory to correct the cross-sections of resonant isotopes. In this paper, a different approach based on a finite element discretization on a wavelet basis is used. We propose adaptive algorithms constructed from error estimates. Such an approach is applied to within-group scattering source iterations. A first implementation is presented in the special case of the fine structure equation for an infinite homogeneous medium. Extension to spatially-dependent cases is discussed. (authors)

  16. Λ scattering equations

    Science.gov (United States)

    Gomez, Humberto

    2016-06-01

    The CHY representation of scattering amplitudes is based on integrals over the moduli space of a punctured sphere. We replace the punctured sphere by a double-cover version. The resulting scattering equations depend on a parameter Λ controlling the opening of a branch cut. The new representation of scattering amplitudes possesses an enhanced redundancy which can be used to fix, modulo branches, the location of four punctures while promoting Λ to a variable. Via residue theorems we show how CHY formulas break up into sums of products of smaller (off-shell) ones times a propagator. This leads to a powerful way of evaluating CHY integrals of generic rational functions, which we call the Λ algorithm.

  17. A simple algorithm for calculating the scattering angle in atomic collisions

    International Nuclear Information System (INIS)

    Belchior, J.C.; Braga, J.P.

    1996-01-01

    A geometric approach to calculating the classical atomic scattering angle is presented. The trajectory of the particle is divided into several straight-line sectors, and the change in direction from one sector to the next is used to calculate the scattering angle. In this model, calculation of the scattering angle involves neither the direct evaluation of integrals nor the classical turning points. (author)
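
    A minimal numerical sketch of the idea, assuming unit mass and a user-supplied force function; the step size and stopping radius are arbitrary choices, not the authors' scheme.

        import numpy as np

        def scattering_angle(b, v0, force, dt=1e-3, r_max=50.0, max_steps=10**7):
            """Classical scattering angle via piecewise straight-line sectors.

            b: impact parameter; v0: initial speed (unit mass assumed);
            force(pos) -> force vector derived from the potential.
            The deflection accumulates from the change of direction
            between successive sectors; no integrals or turning points.
            """
            pos = np.array([-r_max, b], dtype=float)
            vel = np.array([v0, 0.0])
            for _ in range(max_steps):
                if pos @ vel > 0 and np.linalg.norm(pos) > r_max:
                    break                          # outgoing and out of range
                vel = vel + force(pos) * dt        # direction of next sector
                pos = pos + vel * dt               # straight-line step
            return float(np.arccos(vel[0] / np.linalg.norm(vel)))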

  18. Adaptive testing with equated number-correct scoring

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1999-01-01

    A constrained CAT algorithm is presented that automatically equates the number-correct scores on adaptive tests. The algorithm can be used to equate number-correct scores across different administrations of the same adaptive test as well as to an external reference test. The constraints are derived

  19. Beam hardening correction algorithm in microtomography images

    International Nuclear Information System (INIS)

    Sales, Erika S.; Lima, Inaya C.B.; Lopes, Ricardo T.; Assis, Joaquim T. de

    2009-01-01

    Quantification of the mineral density of bone samples is directly related to the attenuation coefficient of bone. The X-rays used in microtomography are polychromatic and have a moderately broad energy spectrum, which causes the low-energy X-rays passing through a sample to be absorbed preferentially, lowering the measured attenuation coefficient and possibly producing artifacts. This decrease in the attenuation coefficient is due to a process called beam hardening. In this work the beam hardening in microtomography images of vertebrae of Wistar rats subjected to a study of hyperthyroidism was corrected by the method of linearization of the projections, discretized using an energy spectrum, also called the Herman spectrum. The results without beam hardening correction showed significant differences in bone volume, which could lead to a possible diagnosis of osteoporosis. The corrected data also showed a decrease in bone volume, but the decrease was not significant at a 95% confidence level. (author)
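
    A generic sketch of the linearization-of-projections idea; the calibration data, the polynomial order, and the target monochromatic coefficient are assumed inputs, not the paper's exact procedure.

        import numpy as np

        def linearization_correction(p_poly_cal, thickness_cal, mu_mono, deg=3):
            """Beam hardening correction by linearization of projections.

            p_poly_cal: measured -ln(I/I0) for known calibration thicknesses;
            thickness_cal: the corresponding path lengths (cm);
            mu_mono: target monochromatic attenuation coefficient (1/cm).
            Returns a function mapping measured projections onto the
            linear (monochromatic) projections mu_mono * thickness.
            """
            coeffs = np.polyfit(p_poly_cal, mu_mono * np.asarray(thickness_cal), deg)
            return lambda p: np.polyval(coeffs, p)

    A corrected sinogram is then obtained by applying the returned function to every measured projection value before reconstruction.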

  20. Beam hardening correction algorithm in microtomography images

    Energy Technology Data Exchange (ETDEWEB)

    Sales, Erika S.; Lima, Inaya C.B.; Lopes, Ricardo T., E-mail: esales@con.ufrj.b, E-mail: ricardo@lin.ufrj.b [Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Lab. de Instrumentacao Nuclear; Assis, Joaquim T. de, E-mail: joaquim@iprj.uerj.b [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Inst. Politecnico. Dept. de Engenharia Mecanica

    2009-07-01

    Quantification of the mineral density of bone samples is directly related to the attenuation coefficient of bone. The X-rays used in microtomography are polychromatic and have a moderately broad energy spectrum, which causes the low-energy X-rays passing through a sample to be absorbed preferentially, lowering the measured attenuation coefficient and possibly producing artifacts. This decrease in the attenuation coefficient is due to a process called beam hardening. In this work the beam hardening in microtomography images of vertebrae of Wistar rats subjected to a study of hyperthyroidism was corrected by the method of linearization of the projections, discretized using an energy spectrum, also called the Herman spectrum. The results without beam hardening correction showed significant differences in bone volume, which could lead to a possible diagnosis of osteoporosis. The corrected data also showed a decrease in bone volume, but the decrease was not significant at a 95% confidence level. (author)

  1. The Scatter Search Based Algorithm to Revenue Management Problem in Broadcasting Companies

    Science.gov (United States)

    Pishdad, Arezoo; Sharifyazdi, Mehdi; Karimpour, Reza

    2009-09-01

    The problem addressed in this paper, which is faced by broadcasting companies, is how to benefit from a limited advertising space. The problem arises from the stochastic behavior of customers (advertisers) in different fare classes. To address this issue we propose a constrained nonlinear multi-period mathematical model which incorporates cancellation and overbooking. The objective is to maximize the total expected revenue, and our numerical method does so by determining the sales limits for each class of customer, yielding the revenue management control policy. Scheduling the advertising spots in breaks is another area of concern, and we treat it as a constraint in our model. In this paper an algorithm based on scatter search is developed to obtain a good feasible solution. The method uses simulation of customer arrivals over a continuous finite time horizon [0, T]. Several sensitivity analyses are conducted in the computational results to show the effectiveness of the proposed method. They also provide insight into the better results of a revenue management control policy compared with a "no sales limit" policy, in which earlier demand is served first.

  2. Multiple scattering corrections to the Beer-Lambert law. 2: Detector with a variable field of view.

    Science.gov (United States)

    Zardecki, A; Tam, W G

    1982-07-01

    The multiple scattering corrections to the Beer-Lambert law in the case of a detector with a variable field of view are analyzed. We introduce transmission functions relating the received radiant power to reference power levels relevant to two different experimental situations. In the first case, the transmission function relates the received power to a reference power level appropriate to a nonattenuating medium. In the second case, the reference power level is established by bringing the receiver to the close-up position with respect to the source. To examine the effect of the variation of the detector field of view, the behavior of the gain factor is studied. Numerical results modeling laser beam propagation in fog, cloud, and rain are presented.

  3. Coherence and diffraction limited resolution in microscopic OCT by a unified approach for the correction of dispersion and aberrations

    Science.gov (United States)

    Schulz-Hildebrandt, H.; Münter, Michael; Ahrens, M.; Spahr, H.; Hillmann, D.; König, P.; Hüttmann, G.

    2018-03-01

    Optical coherence tomography (OCT) images scattering tissues with 5 to 15 μm resolution. This is usually not sufficient for a distinction of cellular and subcellular structures. Increasing axial and lateral resolution and compensating artifacts caused by dispersion and aberrations are required to achieve cellular and subcellular resolution. This includes defocus, which limits the usable depth of field at high lateral resolution. OCT gives access to the phase of the scattered light, and hence correction of dispersion and aberrations is possible by numerical algorithms. Here we present a unified dispersion/aberration correction which is based on a polynomial parameterization of the phase error and an optimization of the image quality using Shannon's entropy. For validation, a supercontinuum light source and a custom-made spectrometer with 400 nm bandwidth were combined with a high-NA microscope objective in a setup for tissue and small animal imaging. Using this setup and computational corrections, volumetric imaging at 1.5 μm resolution is possible. Cellular and near-cellular resolution is demonstrated in porcine cornea and the drosophila larva when computational correction of dispersion and aberrations is used. Due to the excellent correction of the microscope objective used, defocus was the main contribution to the aberrations. In addition, higher-order aberrations caused by the sample itself were successfully corrected. Dispersion and aberrations are closely related artifacts in microscopic OCT imaging, and hence they can be corrected in the same way by optimization of the image quality. This way microscopic resolution is easily achieved in OCT imaging of static biological tissues.
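
    A sketch of the optimization loop described here: a polynomial phase error is applied in the Fourier domain of the complex OCT field and its coefficients are chosen to minimize the Shannon entropy of the image. The basis construction and the optimizer choice are assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def image_entropy(img):
            """Shannon entropy of the image intensity distribution."""
            p = np.abs(img) ** 2
            p = p / p.sum()
            return -np.sum(p * np.log(p + 1e-12))

        def correct_phase(field, basis):
            """Fit polynomial phase-error coefficients by entropy minimization.

            field: complex OCT image (2D); basis: list of 2D phase
            polynomials (defocus, astigmatism, ...), assumed precomputed.
            """
            F = np.fft.fft2(field)

            def cost(c):
                phase = sum(ci * bi for ci, bi in zip(c, basis))
                return image_entropy(np.fft.ifft2(F * np.exp(-1j * phase)))

            return minimize(cost, np.zeros(len(basis)), method="Nelder-Mead").x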

  4. Executable Pseudocode for Graph Algorithms

    NARCIS (Netherlands)

    B. Ó Nualláin (Breanndán)

    2015-01-01

    Algorithms are written in pseudocode. However, the implementation of an algorithm in a conventional, imperative programming language can often be scattered over hundreds of lines of code, thus obscuring its essence. This can lead to difficulties in understanding or verifying the algorithm.

  5. A Label Correcting Algorithm for Partial Disassembly Sequences in the Production Planning for End-of-Life Products

    Directory of Open Access Journals (Sweden)

    Pei-Fang (Jennifer) Tsai

    2012-01-01

    Remanufacturing of used products has become a strategic issue for cost-sensitive businesses. Due to the uncertain supply of end-of-life (EoL) products, reverse logistics can only be sustainable with dynamic production planning for the disassembly process. This research investigates the sequencing of disassembly operations as a single-period partial disassembly optimization (SPPDO) problem to minimize total disassembly cost. AND/OR graph representation is used to include all disassembly sequences of a returned product. A label correcting algorithm is proposed to find an optimal partial disassembly plan if a specific reusable subpart is retrieved from the original return. Then, a heuristic procedure that utilizes this polynomial-time algorithm is presented to solve the SPPDO problem. Numerical examples are used to demonstrate the effectiveness of this solution procedure.
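
    The core label-correcting idea, sketched on a plain weighted digraph (the AND/OR disassembly structure of the paper is simplified away):

        from collections import deque

        def label_correcting(graph, source):
            """Label-correcting shortest paths on a weighted digraph.

            graph: dict mapping node -> list of (successor, cost) pairs.
            Labels are repeatedly corrected until no improvement remains.
            """
            dist = {source: 0.0}
            queue = deque([source])
            while queue:
                u = queue.popleft()
                for v, cost in graph.get(u, []):
                    if dist[u] + cost < dist.get(v, float("inf")):
                        dist[v] = dist[u] + cost   # correct the label of v
                        queue.append(v)            # re-examine v's successors
            return dist

    For instance, label_correcting({'s': [('a', 2.0), ('b', 5.0)], 'a': [('b', 1.0)]}, 's') returns {'s': 0.0, 'a': 2.0, 'b': 3.0}.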

  6. A novel image-domain-based cone-beam computed tomography enhancement algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Li Xiang; Li Tianfang; Yang Yong; Heron, Dwight E; Huq, M Saiful, E-mail: lix@upmc.edu [Department of Radiation Oncology, University of Pittsburgh Cancer Institute, Pittsburgh, PA 15232 (United States)

    2011-05-07

    Kilo-voltage (kV) cone-beam computed tomography (CBCT) plays an important role in image-guided radiotherapy. However, due to a large cone-beam angle, scatter effects significantly degrade the CBCT image quality and limit its clinical application. The goal of this study is to develop an image enhancement algorithm to reduce the low-frequency CBCT image artifacts, which are also called the bias field. The proposed algorithm is based on the hypothesis that image intensities of different types of materials in CBCT images are approximately globally uniform (in other words, a piecewise property). A maximum a posteriori probability framework was developed to estimate the bias field contribution from a given CBCT image. The performance of the proposed CBCT image enhancement method was tested using phantoms and clinical CBCT images. Compared to the original CBCT images, the corrected images using the proposed method achieved a more uniform intensity distribution within each tissue type and significantly reduced cupping and shading artifacts. In a head and a pelvic case, the proposed method reduced the Hounsfield unit (HU) errors within the region of interest from 300 HU to less than 60 HU. In a chest case, the HU errors were reduced from 460 HU to less than 110 HU. The proposed CBCT image enhancement algorithm demonstrated a promising result by the reduction of the scatter-induced low-frequency image artifacts commonly encountered in kV CBCT imaging.

  7. Charting taxonomic knowledge through ontologies and ranking algorithms

    Science.gov (United States)

    Huber, Robert; Klump, Jens

    2009-04-01

    Since the inception of geology as a modern science, paleontologists have described a large number of fossil species. This makes fossilized organisms an important tool in the study of stratigraphy and past environments. Since taxonomic classifications of organisms, and thereby their names, change frequently, the correct application of this tool requires taxonomic expertise in finding correct synonyms for a given species name. Much of this taxonomic information has already been published in journals and books where it is compiled in carefully prepared synonymy lists. Because this information is scattered throughout the paleontological literature, it is difficult to find and sometimes not accessible. Also, taxonomic information in the literature is often difficult to interpret for non-taxonomists looking for taxonomic synonymies as part of their research. The highly formalized structure makes Open Nomenclature synonymy lists ideally suited for computer aided identification of taxonomic synonyms. Because a synonymy list is a list of citations related to a taxon name, its bibliographic nature allows the application of bibliometric techniques to calculate the impact of synonymies and taxonomic concepts. TaxonRank is a ranking algorithm based on bibliometric analysis and Internet page ranking algorithms. TaxonRank uses published synonymy list data stored in TaxonConcept, a taxonomic information system. The basic ranking algorithm has been modified to include a measure of confidence on species identification based on the Open Nomenclature notation used in synonymy lists, as well as other synonymy specific criteria. The results of our experiments show that the output of the proposed ranking algorithm gives a good estimate of the impact a published taxonomic concept has on the taxonomic opinions in the geological community. Also, our results show that treating taxonomic synonymies as part of an ontology is a way to record and manage taxonomic knowledge, and thus contribute

  8. Impact on dose and image quality of a software-based scatter correction in mammography.

    Science.gov (United States)

    Monserrat, Teresa; Prieto, Elena; Barbés, Benigno; Pina, Luis; Elizalde, Arlette; Fernández, Belén

    2017-01-01

    Background In 2014, Siemens developed a new software-based scatter correction (Progressive Reconstruction Intelligently Minimizing Exposure [PRIME]), enabling grid-less digital mammography. Purpose To compare doses and image quality between PRIME (grid-less) and standard (with anti-scatter grid) modes. Material and Methods Contrast-to-noise ratio (CNR) was measured for various polymethylmethacrylate (PMMA) thicknesses, and the dose values provided by the mammograph were recorded. CDMAM phantom images were acquired for various PMMA thicknesses and the inverse Image Quality Figure (IQFinv) was calculated. Values of incident entrance surface air kerma (ESAK) and average glandular dose (AGD) were obtained from the DICOM header for a total of 1088 pairs of clinical cases. Two experienced radiologists subjectively compared the image quality of a total of 149 pairs of clinical cases. Results CNR values were higher and doses were lower in PRIME mode for all thicknesses. IQFinv values in PRIME mode were lower for all thicknesses except for 40 mm of PMMA equivalent, for which IQFinv was slightly greater in PRIME mode. A mean reduction of 10% in ESAK and 12% in AGD in PRIME mode with respect to standard mode was obtained. The clinical image quality in PRIME and standard acquisitions was similar in most of the cases (84% for the first radiologist and 67% for the second one). Conclusion The use of PRIME software reduces, on average, the radiation dose to the breast without affecting image quality. This reduction is greater for thinner and denser breasts.

  9. Maximum likelihood positioning algorithm for high-resolution PET scanners

    International Nuclear Information System (INIS)

    Gross-Weege, Nicolas; Schug, David; Hallen, Patrick; Schulz, Volkmar

    2016-01-01

    algorithm is less prone to missing channel information. A likelihood filter visually improved the image quality, i.e., the peak-to-valley increased up to a factor of 3 for 2-mm-diameter phantom rods by rejecting 87% of the coincidences. A relative improvement of the energy resolution of up to 12.8% was also measured rejecting 91% of the coincidences. Conclusions: The developed ML algorithm increases the sensitivity by correctly handling missing channel information without influencing energy resolution or image quality. Furthermore, the authors showed that energy resolution and image quality can be improved substantially by rejecting events that do not comply well with the single-gamma-interaction model, such as Compton-scattered events.

  10. Memory sparing, fast scattering formalism for rigorous diffraction modeling

    Science.gov (United States)

    Iff, W.; Kämpfe, T.; Jourlin, Y.; Tishchenko, A. V.

    2017-07-01

    The basics and algorithmic steps of a novel scattering formalism suited for memory sparing and fast electromagnetic calculations are presented. The formalism, called ‘S-vector algorithm’ (by analogy with the known scattering-matrix algorithm), allows the calculation of the collective scattering spectra of individual layered micro-structured scattering objects. A rigorous method of linear complexity is applied to model the scattering at individual layers; here the generalized source method (GSM) resorting to Fourier harmonics as basis functions is used as one possible method of linear complexity. The concatenation of the individual scattering events can be achieved sequentially or in parallel, both having pros and cons. The present development will largely concentrate on a consecutive approach based on the multiple reflection series. The latter will be reformulated into an implicit formalism which will be associated with an iterative solver, resulting in improved convergence. The examples will first refer to 1D grating diffraction for the sake of simplicity and intelligibility, with a final 2D application example.

  11. Riemann–Hilbert problem approach for two-dimensional flow inverse scattering

    Energy Technology Data Exchange (ETDEWEB)

    Agaltsov, A. D., E-mail: agalets@gmail.com [Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, 119991 Moscow (Russian Federation); Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr [CNRS (UMR 7641), Centre de Mathématiques Appliquées, Ecole Polytechnique, 91128 Palaiseau (France); IEPT RAS, 117997 Moscow (Russian Federation); Moscow Institute of Physics and Technology, Dolgoprudny (Russian Federation)

    2014-10-15

    We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.

  12. Riemann–Hilbert problem approach for two-dimensional flow inverse scattering

    International Nuclear Information System (INIS)

    Agaltsov, A. D.; Novikov, R. G.

    2014-01-01

    We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.

  13. Corrections in clinical Magnetic Resonance Spectroscopy and SPECT

    DEFF Research Database (Denmark)

    de Nijs, Robin

    infants. In Iodine-123 SPECT the problem of downscatter was addressed. This thesis is based on two papers. Paper I deals with the problem of motion in Single Voxel Spectroscopy. Two novel methods for the identification of outliers in the set of repeated measurements were implemented and compared... a detrimental effect of the extra-uterine environment on brain development. Paper II describes a method to correct for downscatter in low count Iodine-123 SPECT with a broad energy window above the normal imaging window. Both spatial dependency and weight factors were measured. As expected, the implicitly... be performed by the subtraction of an energy window, a method was developed to perform scatter and downscatter correction simultaneously. A phantom study has been performed, where the downscatter correction described in Paper II was extended with scatter correction. This new combined correction was compared...

  14. Development of rubber mixing process mathematical model and synthesis of control correction algorithm by process temperature mode using an artificial neural network

    Directory of Open Access Journals (Sweden)

    V. S. Kudryashov

    2016-01-01

    The article is devoted to the development of a correction control algorithm for the temperature mode of a periodic rubber mixing process for JSC "Voronezh tire plant". The algorithm is designed to run in the main controller of the rubber mixing section, a Siemens S7 CPU319F-3 PN/DP, which forms setpoints for the local temperature controllers HESCH HE086 and Jumo dTRON304 operating the tempering stations. To construct the algorithm, a systematic analysis of the rubber mixing process as an object of control was performed, and a mathematical model of the process was developed based on heat balance equations describing heat transfer through the walls of the technological devices, the change of coolant temperature, and the temperature of the rubber compound during mixing until discharge from the mixer chamber. Because of the complexity and nonlinearity of the control object (rubber mixers), and given the available methods and wide experience of controlling this device in an industrial environment, the correction algorithm is implemented on the basis of an artificial single-layer neural network; it corrects the setpoints of the local controllers for the cooling water temperature and the air temperature in the workshop, which may vary considerably depending on the season, during prolonged operation of the equipment, or during its downtime. The tempering stations are controlled by changing the flow of cold water from the cooler and by on/off control of the heating elements. Analysis of the model experiment results and of practical tests, with the main controller programmed in the STEP 7 environment at the enterprise, showed a decrease in the mixing time for different types of rubber by reducing the control error of the heat transfer process.

  15. Accurate 3D reconstruction by a new PDS-OSEM algorithm for HRRT

    International Nuclear Information System (INIS)

    Chen, Tai-Been; Horng-Shing Lu, Henry; Kim, Hang-Keun; Son, Young-Don; Cho, Zang-Hee

    2014-01-01

    State-of-the-art high resolution research tomography (HRRT) provides high resolution PET images with full 3D human brain scanning. However, a short time frame in a dynamic study causes many problems related to the low counts in the acquired data. The PDS-OSEM algorithm was proposed to reconstruct the HRRT image with a high signal-to-noise ratio that provides accurate information for dynamic data. The new algorithm was evaluated on simulated images, empirical phantoms, and real human brain data. Meanwhile, the time activity curve was adopted to validate the reconstruction performance on dynamic data between the PDS-OSEM and OP-OSEM algorithms. According to the simulated and empirical studies, the PDS-OSEM algorithm reconstructs images with higher quality, higher accuracy, less noise, and a smaller average sum of squared errors than OP-OSEM. The presented algorithm is useful for providing quality images under low count rates in dynamic studies with a short scan time. - Highlights: • PDS-OSEM reconstructs PET images by iteratively compensating for random and scatter corrections from the prompt sinogram. • PDS-OSEM can reconstruct PET images from low-count and contaminated data. • PDS-OSEM provides less noise and higher quality in reconstructed images than the OP-OSEM algorithm in a statistical sense.
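
    For orientation, the ordered-subsets EM update that both OP-OSEM and PDS-OSEM build on can be sketched as follows. The dense system matrix and the striding subset scheme are simplifying assumptions; the authors' prompt-sinogram random/scatter compensation is only indicated by a comment.

        import numpy as np

        def osem(sino, A, n_subsets=8, n_iter=4):
            """Basic ordered-subsets EM reconstruction.

            sino: measured sinogram, shape (n_bins,); A: dense system
            matrix, shape (n_bins, n_voxels) (a simplifying assumption;
            real scanners use matched projector/backprojector pairs).
            """
            x = np.ones(A.shape[1])
            for _ in range(n_iter):
                for s in range(n_subsets):
                    rows = np.arange(s, A.shape[0], n_subsets)
                    As = A[rows]
                    # PDS-OSEM-style variants add randoms/scatter estimates
                    # to the forward projection in this denominator.
                    ratio = sino[rows] / np.maximum(As @ x, 1e-12)
                    x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
            return x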

  16. A practical procedure to improve the accuracy of radiochromic film dosimetry: an integration of a uniformity correction method and a red/blue correction method

    International Nuclear Information System (INIS)

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-01-01

    It has been reported that light scattering can worsen the accuracy of dose distribution measurements using a radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and the light scattering was also evaluated, as was the efficacy of integrating this correction method with the red/blue correction method. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2 but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films, and a corrected dose distribution was subsequently created. The correction method showed more than 10% better pass ratios in the dose difference evaluation than when it was not applied. The red/blue correction method resulted in a 5% improvement compared with the standard procedure that employs the red channel only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and it has potential for routine clinical intensity modulated radiation therapy (IMRT) dose verification if the accuracy of EBT2 is required to be similar to that of EDR2. The use of the red/blue correction method may improve the accuracy, but we recommend applying it carefully and understanding the characteristics of EBT2 both for the red channel alone and for the red/blue correction method. (author)

  17. [A practical procedure to improve the accuracy of radiochromic film dosimetry: an integration of a uniformity correction method and a red/blue correction method].

    Science.gov (United States)

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-06-01

    It has been reported that light scattering can worsen the accuracy of dose distribution measurements using a radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and the light scattering was also evaluated, as was the efficacy of integrating this correction method with the red/blue correction method. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2 but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films, and a corrected dose distribution was subsequently created. The correction method showed more than 10% better pass ratios in the dose difference evaluation than when it was not applied. The red/blue correction method resulted in a 5% improvement compared with the standard procedure that employs the red channel only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and it has potential for routine clinical IMRT dose verification if the accuracy of EBT2 is required to be similar to that of EDR2. The use of the red/blue correction method may improve the accuracy, but we recommend applying it carefully and understanding the characteristics of EBT2 both for the red channel alone and for the red/blue correction method.

  18. The fortran programme for the calculation of the absorption and double scattering corrections in cross-section measurements with fast neutrons using the monte Carlo method (1963); Programme fortran pour le calcul des corrections d'absorption et de double diffusion dans les mesures de sections efficaces pour les neutrons rapides par la methode de monte-carlo (1963)

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez, B [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1963-07-01

    A calculation of the double scattering and absorption corrections in fast neutron scattering experiments using the Monte Carlo method is given. An application to a cylindrical target is presented in the FORTRAN symbolic language. (author)

  19. 2.5D Inversion Algorithm of Frequency-Domain Airborne Electromagnetics with Topography

    Directory of Open Access Journals (Sweden)

    Jianjun Xi

    2016-01-01

    We present a 2.5D inversion algorithm with topography for frequency-domain airborne electromagnetic data. The forward modeling is based on the edge finite element method and uses irregular hexahedra to conform to the topography. The electric and magnetic fields are split into a primary (background) and a secondary (scattered) field to eliminate the source singularity. For the multiple sources of the frequency-domain airborne electromagnetic method, we use the large-scale sparse-matrix parallel shared-memory direct solver PARDISO to solve the linear system of equations efficiently. The inversion algorithm is based on the Gauss-Newton method, which has an efficient convergence rate. The Jacobian matrix is calculated efficiently by "adjoint forward modelling". The synthetic inversion examples indicate that our proposed method is correct and effective. Furthermore, ignoring the topography effect can lead to incorrect results and interpretations.

  20. Stack emission monitoring using non-dispersive infrared spectroscopy with an optimized nonlinear absorption cross interference correction algorithm

    Directory of Open Access Journals (Sweden)

    Y. W. Sun

    2013-08-01

    In this paper, we present an optimized analysis algorithm for non-dispersive infrared (NDIR) in situ monitoring of stack emissions. The proposed algorithm simultaneously compensates for nonlinear absorption and cross interference among different gases. We present a mathematical derivation of the measurement error caused by variations in interference coefficients when nonlinear absorption occurs. The proposed algorithm is derived from a classical one and uses interference functions to quantify cross interference. The interference functions vary proportionally with the nonlinear absorption, so interference coefficients among different gases can be modeled by the interference functions whether the gases exhibit linear or nonlinear absorption. In this study, the simultaneous analysis of two components (CO2 and CO) serves as an example for the validation of the proposed algorithm. The interference functions in this case can be obtained by least-squares fitting with third-order polynomials. Experiments show that the results of cross interference correction are improved significantly by utilizing the fitted interference functions when nonlinear absorption occurs. The dynamic measurement ranges of CO2 and CO are improved by about a factor of 1.8 and 3.5, respectively. A commercial analyzer with high accuracy was used to validate the CO and CO2 measurements derived from the NDIR analyzer prototype in which the new algorithm was embedded. The comparison of the two analyzers shows that the prototype works well both within the linear and nonlinear ranges.
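
    A two-gas sketch of the interference-function idea with third-order polynomial fits; the calibration data and the iterative subtraction scheme are assumptions, not the paper's exact algorithm.

        import numpy as np

        def fit_interference(a_self, a_apparent_other):
            """Fit a third-order interference function k(a) such that the
            apparent absorbance induced on the other channel equals
            k(a_self) * a_self (calibration data assumed given)."""
            k = a_apparent_other / np.maximum(a_self, 1e-12)
            return np.poly1d(np.polyfit(a_self, k, 3))

        def correct_two_gas(a1_meas, a2_meas, k21, k12, n_iter=10):
            """Iteratively remove cross interference between two channels.

            k12: interference of gas 1 on channel 2 (a function of a1);
            k21: interference of gas 2 on channel 1 (a function of a2).
            """
            a1, a2 = a1_meas, a2_meas
            for _ in range(n_iter):
                a1 = a1_meas - k21(a2) * a2
                a2 = a2_meas - k12(a1) * a1
            return a1, a2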

  1. Investigation of scattered radiation in 3D whole-body positron emission tomography using Monte Carlo simulations

    International Nuclear Information System (INIS)

    Adam, L.-E.; Brix, G.

    1999-01-01

    The correction of scattered radiation is one of the most challenging tasks in 3D positron emission tomography (PET) and knowledge about the amount of scatter and its distribution is a prerequisite for performing an accurate correction. One concern in 3D PET in contrast to 2D PET is the scatter contribution from activity outside the field-of-view (FOV) and multiple scatter. Using Monte Carlo simulations, we examined the scatter distribution for various phantoms. The simulations were performed for a whole-body PET system (ECAT EXACT HR+, Siemens/CTI) with an axial FOV of 15.5 cm and a ring diameter of 82.7 cm. With (without) interplane septa, up to one (two) out of three detected events are scattered (for a centred point source in a water-filled cylinder that nearly fills out the patient port), whereby the relative scatter fraction varies significantly with the axial position. Our results show that for an accurate scatter correction, activity as well as scattering media outside the FOV have to be taken into account. Furthermore it could be shown that there is a considerable amount of multiple scatter which has a different spatial distribution from single scatter. This means that multiple scatter cannot be corrected by simply rescaling the single scatter component. (author)

  2. The Impact of Microstructure on an Accurate Snow Scattering Parameterization at Microwave Wavelengths

    Science.gov (United States)

    Honeyager, Ryan

    High frequency microwave instruments are increasingly used to observe ice clouds and snow. These instruments are significantly more sensitive than conventional precipitation radar. This is ideal for analyzing ice-bearing clouds, for ice particles are tenuously distributed and have effective densities that are far less than liquid water. However, at shorter wavelengths, the electromagnetic response of ice particles is no longer solely dependent on particle mass. The shape of the ice particles also plays a significant role. Thus, in order to understand the observations of high frequency microwave radars and radiometers, it is essential to model the scattering properties of snowflakes correctly. Several research groups have proposed detailed models of snow aggregation. These particle models are coupled with computer codes that determine the particles' electromagnetic properties. However, there is a discrepancy between the particle model outputs and the requirements of the electromagnetic models. Snowflakes have countless variations in structure, but we also know that physically similar snowflakes scatter light in much the same manner. Structurally exact electromagnetic models, such as the discrete dipole approximation (DDA), require a high degree of structural resolution. Such methods are slow, spending considerable time processing redundant (i.e. useless) information. Conversely, when using techniques that incorporate too little structural information, the resultant radiative properties are not physically realistic. Then, we ask the question, what features are most important in determining scattering? This dissertation develops a general technique that can quickly parameterize the important structural aspects that determine the scattering of many diverse snowflake morphologies. A Voronoi bounding neighbor algorithm is first employed to decompose aggregates into well-defined interior and surface regions. The sensitivity of scattering to interior randomization is then

  3. High order QED corrections in Z physics

    International Nuclear Information System (INIS)

    Marck, S.C. van der.

    1991-01-01

    In this thesis a number of calculations of higher order QED corrections are presented, all applying to the standard LEP/SLC processes e+e- → f-bar f, where f stands for any fermion. In cases where f ≠ e-, νe, the above process is only possible via annihilation of the incoming electron-positron pair. At LEP/SLC this mainly occurs via the production and subsequent decay of a Z boson, i.e. the cross section is heavily dominated by the Z resonance. These processes and the corrections to them, treated in a semi-analytical way, are discussed (ch. 2). In the case f = e- (Bhabha scattering) the process can also occur via the exchange of a virtual photon in the t-channel. Since the latter contribution is dominant at small scattering angles, one has to exclude these angles if one is interested in Z physics. Having excluded that region, one has to recalculate all QED corrections (ch. 3). The techniques introduced there enable the calculation of the difference between forward and backward scattering, the forward-backward asymmetry, for the cases f ≠ e-, νe (ch. 4). At small scattering angles, where Bhabha scattering is dominated by photon exchange in the t-channel, this process is used in experiments to determine the luminosity of the e+e- accelerator. Hence an accurate theoretical description of this process at small angles is of vital interest to the overall normalization of all measurements at LEP/SLC. Ch. 5 gives such a description in a semi-analytical way. The last two chapters discuss Monte Carlo techniques that are used for the cases f ≠ e-, νe. Ch. 6 describes the simulation of two-photon bremsstrahlung, which is a second order QED correction effect. The results are compared with results of the semi-analytical treatment in ch. 2. Finally ch. 7 reviews several techniques that have been used to simulate higher order QED corrections for the cases f ≠ e-, νe. (author). 132 refs.; 10 figs.; 16 tabs

  4. Calculation of radiative corrections to virtual compton scattering - absolute measurement of the energy of Jefferson Lab. electron beam (hall A) by a magnetic method: arc project

    International Nuclear Information System (INIS)

    Marchand, D.

    1998-11-01

    This thesis presents the radiative corrections to virtual Compton scattering and the magnetic method adopted in Hall A at Jefferson Laboratory to measure the electron beam energy with an accuracy of 10^-4. Virtual Compton scattering experiments give access to the generalised polarizabilities of the proton. The extraction of these polarizabilities is obtained by comparing the experimental and theoretical cross sections, which is why the systematic errors and the radiative effects of the experiments have to be controlled very seriously. In this scope, a complete calculation of the internal radiative corrections has been realised in the framework of quantum electrodynamics. The method of dimensional regularisation has been used for the treatment of the ultraviolet and infrared divergences. The absolute energy measurement method relies on the magnetic deviation (arc), made up of eight identical dipoles. The energy is determined from the calculated deviation angle of the beam and the measured magnetic field integral along the deviation.

  5. Dose calculations for irregular fields using three-dimensional first-scatter integration

    International Nuclear Information System (INIS)

    Boesecke, R.; Scharfenberg, H.; Schlegel, W.; Hartmann, G.H.

    1986-01-01

    This paper describes a method of dose calculations for irregular fields which requires only the mean energy of the incident photons, the geometrical properties of the irregular field and of the therapy unit, and the attenuation coefficient of tissue. The method goes back to an approach including spatial aspects of photon scattering for inhomogeneities for the calculation of dose reduction factors as proposed by Sontag and Cunningham (1978). It is based on the separation of dose into a primary component and a scattered component. The scattered component can generally be calculated for each field by integration over dose contributions from scattering in neighbouring volume elements. The quotient of this scattering contribution in the irregular field and the scattering contribution in the equivalent open field is then the correction factor for scattering in an irregular field. A correction factor for the primary component can be calculated if the attenuation of the photons in the shielding block is properly taken into account. The correction factor is simply given by the quotient of primary photons of the irregular field and the primary photons of the open field. (author)
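
    A coarse numerical sketch of the scatter correction factor as the ratio of first-scatter integrals over the irregular and the equivalent open field; the precomputed kernel is an assumed input, not the paper's exact formulation.

        import numpy as np

        def scatter_correction_factor(kernel, open_mask, irregular_mask):
            """Correction factor for the scattered dose component.

            kernel: 2D array of first-scatter contributions from each
            surface element to the calculation point (assumed to be
            precomputed for the beam energy and depth of interest);
            open_mask / irregular_mask: boolean arrays marking the
            irradiated area of the equivalent open and irregular fields.
            """
            return kernel[irregular_mask].sum() / kernel[open_mask].sum()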

  6. Dual matrix ordered subsets reconstruction for accelerated 3D scatter compensation in single-photon emission tomography

    International Nuclear Information System (INIS)

    Kamphuis, C.; Beekman, F.J.; Van Rijk, P.P.; Viergever, M.A.

    1998-01-01

    Three-dimensional (3D) iterative maximum likelihood expectation maximization (ML-EM) algorithms for single-photon emission tomography (SPET) are capable of correcting image-degrading effects of non-uniform attenuation, distance-dependent camera response and patient shape-dependent scatter. However, the resulting improvements in quantitation, resolution and signal-to-noise ratio (SNR) are obtained at the cost of a huge computational burden. This paper presents a new acceleration method for ML-EM: dual matrix ordered subsets (DM-OS). DM-OS combines two acceleration methods: (a) different matrices for projection and back-projection and (b) ordered subsets of projections. DM-OS was compared with ML-EM on simulated data and on physical thorax phantom data, for both 180° and 360° orbits. Contrast, normalized standard deviation and mean squared error were calculated for the digital phantom experiment. DM-OS resulted in similar image quality to ML-EM, even for speed-up factors of 200 compared to ML-EM in the case of 120 projections. The thorax phantom data could be reconstructed 50 times faster (60 projections) using DM-OS with preservation of image quality. ML-EM and DM-OS with scatter compensation showed significant improvement of SNR compared to ML-EM without scatter compensation. Furthermore, inclusion of complex image formation models in the computer code is simplified in the case of DM-OS. It is thus shown that DM-OS is a fast and relatively simple algorithm for 3D iterative scatter compensation, with similar results to conventional ML-EM, for both 180° and 360° acquired data. (orig.)

  7. Mosaic crystal algorithm for Monte Carlo simulations

    CERN Document Server

    Seeger, P A

    2002-01-01

    An algorithm is presented for calculating reflectivity, absorption, and scattering of mosaic crystals in Monte Carlo simulations of neutron instruments. The algorithm uses multi-step transport through the crystal with an exact solution of the Darwin equations at each step. It relies on the kinematical model for Bragg reflection (with parameters adjusted to reproduce experimental data). For computation of thermal effects (the Debye-Waller factor and coherent inelastic scattering), an expansion of the Debye integral as a rapidly converging series of exponential terms is also presented. Any crystal geometry and plane orientation may be treated. The algorithm has been incorporated into the neutron instrument simulation package NISP. (orig.)

  8. Correction of Magnetic Optics and Beam Trajectory Using LOCO Based Algorithm with Expanded Experimental Data Sets

    Energy Technology Data Exchange (ETDEWEB)

    Romanov, A.; Edstrom, D.; Emanov, F. A.; Koop, I. A.; Perevedentsev, E. A.; Rogovsky, Yu. A.; Shwartz, D. B.; Valishev, A.

    2017-03-28

    Precise beam based measurement and correction of magnetic optics is essential for the successful operation of accelerators. The LOCO algorithm is a proven and reliable tool, which in some situations can be improved by using a broader class of experimental data. The standard data sets for LOCO include the closed orbit responses to dipole corrector variation, dispersion, and betatron tunes. This paper discusses the benefits from augmenting the data with four additional classes of experimental data: the beam shape measured with beam profile monitors; responses of closed orbit bumps to focusing field variations; betatron tune responses to focusing field variations; BPM-to-BPM betatron phase advances and beta functions in BPMs from turn-by-turn coordinates of kicked beam. All of the described features were implemented in the Sixdsimulation software that was used to correct the optics of the VEPP-2000 collider, the VEPP-5 injector booster ring, and the FAST linac.

  9. Scattering calculation and image reconstruction using elevation-focused beams.

    Science.gov (United States)

    Duncan, David P; Astheimer, Jeffrey P; Waag, Robert C

    2009-05-01

    Pressure scattered by cylindrical and spherical objects with elevation-focused illumination and reception has been analytically calculated, and corresponding cross sections have been reconstructed with a two-dimensional algorithm. Elevation focusing was used to elucidate constraints on quantitative imaging of three-dimensional objects with two-dimensional algorithms. Focused illumination and reception are represented by angular spectra of plane waves that were efficiently computed using a Fourier interpolation method to maintain the same angles for all temporal frequencies. Reconstructions were formed using an eigenfunction method with multiple frequencies, phase compensation, and iteration. The results show that the scattered pressure reduces to a two-dimensional expression, and two-dimensional algorithms are applicable when the region of a three-dimensional object within an elevation-focused beam is approximately constant in elevation. The results also show that energy scattered out of the reception aperture by objects contained within the focused beam can result in the reconstructed values of attenuation slope being greater than true values at the boundary of the object. Reconstructed sound speed images, however, appear to be relatively unaffected by the loss in scattered energy. The broad conclusion that can be drawn from these results is that two-dimensional reconstructions require compensation to account for uncaptured three-dimensional scattering.

  10. Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration

    Science.gov (United States)

    Lovejoy, McKenna Roberts

    Infrared imaging is an expansive field with many applications. Advances in infrared technology have led to a greater demand from both commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity. This non-uniformity stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors such as exposure time, temperature, and amplifier choice affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration based techniques commonly use a linear model to approximate the nonlinear response. This often leaves unacceptable levels of residual non-uniformity. Calibration techniques often have to be repeated during use to continually correct the image. In this dissertation alternatives to linear NUC algorithms are investigated. The goal of this dissertation is to determine and compare nonlinear non-uniformity correction algorithms. Ideally the results will provide better NUC performance, resulting in less residual non-uniformity as well as a reduced need for recalibration. This dissertation will consider new approaches to nonlinear NUC such as higher order polynomials and exponentials. More specifically, a new gain equalization algorithm has been developed. The various nonlinear non-uniformity correction algorithms will be compared with common linear non-uniformity correction algorithms. Performance will be compared based on RMS errors, residual non-uniformity, and the impact quantization has on correction. Performance will be improved by identifying and replacing bad pixels prior to correction. Two bad pixel identification and replacement techniques will be investigated and compared. Performance will be presented in the form of simulation results as well as before and after images taken with short wave infrared cameras. The initial results show, using a third order
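
    A sketch of a per-pixel polynomial NUC of the kind the dissertation investigates; the third-order fit and the array-mean target are assumptions, not the dissertation's exact algorithm.

        import numpy as np

        def fit_nuc(calib_frames, deg=3):
            """Per-pixel polynomial non-uniformity correction coefficients.

            calib_frames: (n_levels, rows, cols), one frame per uniform
            illumination level; needs n_levels >= deg + 1. The target is
            the array-mean response at each level, so corrected pixels
            agree with the mean pixel.
            """
            n, rows, cols = calib_frames.shape
            target = calib_frames.mean(axis=(1, 2))
            coeffs = np.empty((deg + 1, rows, cols))
            for i in range(rows):
                for j in range(cols):
                    coeffs[:, i, j] = np.polyfit(calib_frames[:, i, j], target, deg)
            return coeffs

        def apply_nuc(frame, coeffs):
            """Evaluate the per-pixel polynomials (Horner's scheme)."""
            out = np.zeros_like(frame, dtype=float)
            for c in coeffs:
                out = out * frame + c
            return out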

  11. SU-F-SPS-06: Implementation of a Back-Projection Algorithm for 2D in Vivo Dosimetry with An EPID System

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez Reyes, B; Rodriguez Perez, E; Sosa Aquino, M [Universidad de Guanajuato, Leon, Guanajuato (Mexico)

    2016-06-15

    Purpose: To implement a back-projection algorithm for 2D dose reconstruction for in vivo dosimetry in radiation therapy using an Electronic Portal Imaging Device (EPID) based on amorphous silicon. Methods: An EPID system was used to determine the dose-response function, pixel sensitivity map, exponential scatter kernels, and beam hardening correction for the back-projection algorithm. All measurements were done with a 6 MV beam. A 2D dose reconstruction for an irradiated water phantom (30×30×30 cm^3) was done to verify the algorithm implementation. A gamma index evaluation between the 2D reconstructed dose and the dose calculated with a treatment planning system (TPS) was performed. Results: A linear fit was found for the dose-response function. The pixel sensitivity map has radial symmetry and was calculated from a profile of the pixel sensitivity variation. The parameters for the scatter kernels were determined only for a 6 MV beam. The primary dose was estimated by applying the scatter kernel within the EPID and the scatter kernel within the patient. The beam hardening coefficient is σBH = 3.788×10^-4 cm^2 and the effective linear attenuation coefficient is µAC = 0.06084 cm^-1. 95% of the evaluated points had γ values no greater than unity, with gamma criteria of ΔD = 3% and Δd = 3 mm, and within the 50% isodose surface. Conclusion: The use of EPID systems proved to be a fast tool for in vivo dosimetry, but the implementation is more complex than that elaborated for pre-treatment dose verification; therefore, a simpler method must be investigated. The accuracy of this method could be improved by modifying the algorithm in order to compare lower isodose curves.
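
    For reference, a brute-force 2D gamma evaluation with the stated ΔD = 3%, Δd = 3 mm criteria might look as follows; global normalization to the maximum reference dose is an assumption.

        import numpy as np

        def gamma_2d(ref, evl, spacing, dd=0.03, dta=3.0):
            """Brute-force 2D gamma index with global 3%/3 mm criteria.

            ref, evl: reference (TPS) and evaluated (EPID) dose grids of
            equal shape; spacing: pixel size in mm; dd is taken relative
            to the maximum reference dose (global normalization assumed).
            """
            ys, xs = np.indices(ref.shape)
            xs = xs * spacing
            ys = ys * spacing
            d_norm = dd * ref.max()
            gamma = np.empty(ref.shape)
            for i, j in np.ndindex(ref.shape):
                r2 = (xs - xs[i, j]) ** 2 + (ys - ys[i, j]) ** 2
                g2 = r2 / dta ** 2 + (evl - ref[i, j]) ** 2 / d_norm ** 2
                gamma[i, j] = np.sqrt(g2.min())
            return gamma  # pass rate: np.mean(gamma <= 1.0)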

  12. Direct and inverse scattering for viscoelastic media

    International Nuclear Information System (INIS)

    Ammicht, E.; Corones, J.P.; Krueger, R.J.

    1987-01-01

    A time domain approach to direct and inverse scattering problems for one-dimensional viscoelastic media is presented. Such media can be characterized as having a constitutive relation between stress and strain which involves the past history of the strain through a memory function, the relaxation modulus. In the approach of this article, the relaxation modulus of a material is shown to be related to the reflection properties of the material. This relation provides a constructive algorithm for direct and inverse scattering problems. A numerical implementation of this algorithm is tested on several problems involving realistic relaxation moduli.

  13. Unitarity corrections and high field strengths in high energy hard collisions

    International Nuclear Information System (INIS)

    Kovchegov, Y.V.; Mueller, A.H.

    1997-01-01

    Unitarity corrections to the BFKL description of high energy hard scattering are viewed in large N_c QCD in light-cone quantization. In a center-of-mass frame, unitarity corrections to high energy hard scattering are manifestly perturbatively calculable and unrelated to questions of parton saturation. In a frame where one of the hadrons is initially at rest, unitarity corrections are related to parton saturation effects and involve potential strengths A_µ ∝ 1/g. In such a frame we describe the high energy scattering in terms of the expectation value of a Wilson loop. The large potentials A_µ ∝ 1/g are shown to be pure gauge terms, allowing perturbation theory to again describe unitarity corrections and parton saturation effects. Genuine nonperturbative effects only come in at energies well beyond those where unitarity constraints first become important. (orig.)

  14. Technical Note: Modification of the standard gain correction algorithm to compensate for the number of used reference flat frames in detector performance studies

    International Nuclear Information System (INIS)

    Konstantinidis, Anastasios C.; Olivo, Alessandro; Speller, Robert D.

    2011-01-01

    Purpose: The x-ray performance evaluation of digital x-ray detectors is based on the calculation of the modulation transfer function (MTF), the noise power spectrum (NPS), and the resultant detective quantum efficiency (DQE). The flat images used for the extraction of the NPS should not contain any fixed pattern noise (FPN) to avoid contamination from nonstochastic processes. The "gold standard" method used for the reduction of the FPN (i.e., the different gain between pixels) in linear x-ray detectors is based on normalization with an average reference flat-field. However, the noise in the corrected image depends on the number of flat frames used for the average flat image. The aim of this study is to modify the standard gain correction algorithm to make it independent of the number of reference flat frames used. Methods: Many publications suggest the use of 10-16 reference flat frames, while other studies use higher numbers (e.g., 48 frames) to reduce the noise propagated from the average flat image. This study quantifies experimentally the effect of the number of reference flat frames on the NPS and DQE values and modifies the gain correction algorithm appropriately to compensate for this effect. Results: It is shown that, using the suggested gain correction algorithm, a minimal number of reference flat frames (down to a single frame) can be used to eliminate the FPN from the raw flat image. This saves computer memory and time during the x-ray performance evaluation. Conclusions: The authors show that the presented method (a) leads to the maximum DQE value that one would obtain with the conventional method and a very large number of frames, and (b) has been compared to an independent gain correction method based on the subtraction of flat-field images, leading to identical DQE values. They believe this provides robust validation of the proposed method.
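
    For illustration, a minimal sketch of the conventional gain correction, together with one plausible compensation for the extra noise contributed by averaging only N reference flat frames. The (1 + 1/N) factor is an assumption for uncorrelated quantum noise, not the paper's exact formula.

        import numpy as np

        def gain_correct(raw, flats):
            """Conventional gain correction: normalize by the average of N
            reference flat frames. The flat average carries residual noise
            ~ sigma^2/N that propagates into the corrected image and its NPS."""
            flat_avg = flats.mean(axis=0)
            return raw * flat_avg.mean() / flat_avg

        def compensate_nps(nps_measured, n_flats):
            """Assumed compensation: for uncorrelated quantum noise the flat
            average inflates the corrected-image NPS by roughly (1 + 1/N),
            so divide that factor out."""
            return nps_measured / (1.0 + 1.0 / n_flats)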

  15. Modifications Of Discrete Ordinate Method For Computations With High Scattering Anisotropy: Comparative Analysis

    Science.gov (United States)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2012-01-01

    A numerical accuracy analysis of the radiative transfer equation (RTE) solution based on separation of the diffuse light field into anisotropic and smooth parts is presented. The analysis uses three different algorithms based on the discrete ordinate method (DOM). Two methods, DOMAS and DOM2+, which do not use truncation of the phase function, are compared against the TMS method. DOMAS and DOM2+ use the Small-Angle Modification of the RTE and the single scattering term, respectively, as the anisotropic part. The TMS method uses the Delta-M method for truncation of the phase function along with the single scattering correction. For reference, a standard discrete ordinate method, DOM, is also included in the analysis. The results obtained for cases with high scattering anisotropy show that at a low number of streams (16, 32) only DOMAS provides an accurate solution in the aureole area. Outside of the aureole, the convergence and accuracy of DOMAS and TMS are found to be similar: DOMAS was more accurate for the coarse aerosol and liquid water cloud models, except at low optical depth, while TMS showed better results for the ice cloud model.

  16. An Improved Algorithm to Delineate Urban Targets with Model-Based Decomposition of PolSAR Data

    Directory of Open Access Journals (Sweden)

    Dingfeng Duan

    2017-10-01

    In model-based decomposition algorithms using polarimetric synthetic aperture radar (PolSAR) data, urban targets are typically identified based on the existence of strong double-bounce scattering. However, urban targets with large azimuth orientation angles (AOAs) produce strong volumetric scattering that appears similar to the scattering characteristics of tree canopies. Because of this scattering ambiguity, urban targets can be classified into the vegetation category if the usual classification scheme of model-based PolSAR decomposition algorithms is followed. To resolve the ambiguity and reduce the misclassification, we introduced a correlation coefficient that characterizes the scattering mechanisms of urban targets with variable AOAs. An existing volumetric scattering model was then modified, and a PolSAR decomposition algorithm developed. The validity and effectiveness of the algorithm were examined using four PolSAR datasets. The algorithm proved valid and effective for delineating urban targets with a wide range of AOAs, and applicable to a broad range of ground targets from urban areas as well as upland and flooded forest stands.

  17. Determining the water content in concrete by gamma scattering method

    International Nuclear Information System (INIS)

    Priyada, P.; Ramar, R.; Shivaramu

    2014-01-01

    Highlights: • Gamma scattering technique for estimation of water content in concrete is given. • The scattered intensity increases with the volumetric water content. • Attenuation correction is provided to the scattered intensities. -- Abstract: The measurements use a ¹³⁷Cs radioactive source and a high-resolution HPGe detector based energy dispersive gamma ray spectrometer. Concrete samples of uniform density ≈2.4 g/cm³ are chosen for the study, and the scattered intensities are found to vary with the amount of water present in the specimen. The scattered intensities are corrected for attenuation effects, and the results obtained with reference to a dry sample are compared with those obtained by gravimetric and gamma transmission methods. A good agreement is seen between the gamma scattering results and those obtained by gravimetric and transmission methods, within an accuracy of 6%, and a <2% change in water content can be detected.

  18. Iterative optimization of quantum error correcting codes

    International Nuclear Information System (INIS)

    Reimpell, M.; Werner, R.F.

    2005-01-01

    We introduce a convergent iterative algorithm for finding the optimal coding and decoding operations for an arbitrary noisy quantum channel. This algorithm does not require any error syndrome to be corrected completely, and hence also finds codes outside the usual Knill-Laflamme definition of error correcting codes. The iteration is shown to improve the figure of merit 'channel fidelity' in every step

  19. Discrete inverse scattering theory and the continuum limit

    International Nuclear Information System (INIS)

    Berryman, J.G.; Greene, R.R.

    1978-01-01

    The class of satisfactory difference approximations for the Schroedinger equation in discrete inverse scattering theory is shown to be smaller than previously supposed. A fast algorithm (analogous to the Levinson algorithm for Toeplitz matrices) is found for solving the discrete inverse problem. (Auth.)

  20. Phase correction of MR perfusion/diffusion images

    International Nuclear Information System (INIS)

    Chenevert, T.L.; Pipe, J.G.; Brunberg, J.A.; Yeung, H.N.

    1989-01-01

    Apparent diffusion coefficient (ADC) and perfusion MR sequences are exceptionally sensitive to minute motion and are therefore prone to bulk-motion errors that hamper ADC/perfusion quantification. The authors have developed a phase correction algorithm to substantially reduce this error. The algorithm uses a diffusion-insensitive data set to correct data that are diffusion sensitive but phase corrupt. An assumption of the algorithm is that bulk-motion phase shifts are uniform in one dimension, although they may be arbitrarily large and variable from acquisition to acquisition; this is facilitated by orthogonal section selection. The correction is applied after one Fourier transform of a two-dimensional Fourier transform reconstruction. Imaging experiments on rat and human brain demonstrate significant artifact reduction in ADC and perfusion measurements.
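
    A minimal sketch of the reference-based correction described above, assuming one diffusion-insensitive readout line is available per diffusion-sensitive line and applied after the first Fourier transform of the 2DFT reconstruction; names and data layout are illustrative.

        import numpy as np

        def phase_correct(line_dw, line_ref):
            """Remove the bulk-motion phase measured on a diffusion-insensitive
            reference line from the corresponding diffusion-weighted line
            (both complex 1D arrays after the first FT)."""
            phase = np.exp(1j * np.angle(line_ref))
            return line_dw * np.conj(phase)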

  1. Fast Neutron Elastic and Inelastic Scattering of Vanadium

    Energy Technology Data Exchange (ETDEWEB)

    Holmqvist, B; Johansson, S G; Lodin, G; Wiedling, T

    1969-11-15

    Fast neutron scattering interactions with vanadium were studied using time-of-flight techniques at several energies in the interval 1.5 to 8.1 MeV. The experimental differential elastic scattering cross sections have been fitted to optical model calculations and the inelastic scattering cross sections have been compared with Hauser-Feshbach calculations, corrected for the fluctuation of compound-nuclear level widths.

  2. Pion nucleus scattering lengths

    International Nuclear Information System (INIS)

    Huang, W.T.; Levinson, C.A.; Banerjee, M.K.

    1971-09-01

    Soft pion theory and the Fubini-Furlan mass dispersion relations have been used to analyze the pion nucleon scattering lengths and obtain a value for the sigma commutator term. With this value and using the same principles, scattering lengths have been predicted for nuclei with mass numbers ranging from 6 to 23. Agreement with experiment is very good. For those who believe in the Gell-Mann-Levy sigma model, the evaluation of the commutator yields the value 0.26(m_σ/m_π)² for the sigma nucleon coupling constant. The large dispersive corrections for the isosymmetric case imply that the basic idea behind many of the soft pion calculations, namely, slow variation of matrix elements from the soft pion limit to the physical pion mass, is not correct. 11 refs., 1 fig., 3 tabs.

  3. Modified Decoding Algorithm of LLR-SPA

    Directory of Open Access Journals (Sweden)

    Zhongxun Wang

    2014-09-01

    In wireless sensor networks, energy consumption occurs mainly in the information transmission stage. The Low Density Parity Check (LDPC) code can make full use of the channel information to save energy. Starting from the widely used decoding algorithm for LDPC codes, this paper proposes a new decoding algorithm based on the LLR-SPA (Sum-Product Algorithm in the Log-Likelihood domain) to improve decoding accuracy. In the modified algorithm, a piecewise linear function is used to approximate the complicated Jacobi correction term of the LLR-SPA decoding algorithm. Tangent lines are constructed at tangency points on the Jacobi correction term, based on a first-order Taylor series. In this way, the proposed piecewise linear approximation offers an almost perfect match to the Jacobi correction term. Meanwhile, the piecewise linear approximation avoids logarithmic operations, which makes it more suitable for practical application. The simulation results show that the proposed algorithm improves the decoding accuracy greatly without noticeable change in computational complexity.
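
    For illustration, a minimal Python sketch of the approximation described above: the Jacobi correction term f(x) = ln(1 + e^(-x)) of the box-plus operation is replaced by the maximum of a few first-order Taylor tangents. The tangency points below are illustrative, not the paper's.

        import math

        def jacobi_exact(x):
            # f(x) = ln(1 + exp(-x)) for x >= 0
            return math.log1p(math.exp(-x))

        def tangent_lines(points):
            # Tangent at x0: slope f'(x0) = -1/(1 + exp(x0)), intercept f(x0) - slope*x0
            lines = []
            for x0 in points:
                slope = -1.0 / (1.0 + math.exp(x0))
                lines.append((slope, jacobi_exact(x0) - slope * x0))
            return lines

        def jacobi_pwl(x, lines):
            # f is convex, so the maximum over its tangents approximates it from
            # below; clip at 0 because f(x) -> 0 as x grows.
            x = abs(x)
            return max(0.0, max(a * x + b for a, b in lines))

        LINES = tangent_lines([0.2, 0.8, 1.8, 3.5])   # illustrative tangency points

        def box_plus(a, b):
            core = math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))
            return core + jacobi_pwl(a + b, LINES) - jacobi_pwl(a - b, LINES)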

  4. Dose calculation in eye brachytherapy with Ir-192 threads using the Sievert integral corrected for attenuation and scattering with the Meisberger polynomials

    International Nuclear Information System (INIS)

    Vivanco, M.G. Bernui de; Cardenas R, A.

    2006-01-01

    Ocular brachytherapy, often the only alternative for preserving the visual organ in patients with ocular cancer, is carried out at the National Institute of Neoplastic Diseases (INEN) using Iridium-192 threads. These are placed radially on the interior surface of a spherical cap of 18 K gold; the cap remains in the eye until the dose prescribed by the physician is reached. The main objective of this work is to calculate, in a correct and practical way, how long the ocular brachytherapy treatment should last to reach the prescribed dose. To this end, the Sievert integral, corrected for attenuation and scattering effects (Meisberger polynomials), is evaluated by Simpson's method. The calculation by means of the Sievert integral does not take into account the scattering produced by the gold cap or the variation of the exposure rate constant with distance. The results obtained by means of the Sievert integral are compared with those obtained using the PENELOPE Monte Carlo simulation code, and are observed to agree at distances from the cap surface greater than or equal to 2 mm. (Author)
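
    As an illustration of the computation described above, here is a minimal Python sketch: the Sievert integral for a filtered line source evaluated by composite Simpson's rule, multiplied by a Meisberger-style third-order polynomial for tissue attenuation and scatter. The geometry helper and the polynomial coefficients are placeholders; the published Ir-192 coefficients and the actual thread geometry would be substituted in practice.

        import numpy as np

        def sievert_integral(theta1, theta2, mu_t, n=201):
            """Integrate exp(-mu_t / cos(theta)) over [theta1, theta2] (radians,
            |theta| < pi/2) by composite Simpson's rule; n must be odd.
            mu_t = filter attenuation coefficient times filter thickness."""
            theta = np.linspace(theta1, theta2, n)
            f = np.exp(-mu_t / np.cos(theta))
            h = (theta2 - theta1) / (n - 1)
            return (h / 3.0) * (f[0] + f[-1] + 4.0 * f[1:-1:2].sum() + 2.0 * f[2:-1:2].sum())

        def meisberger(r_cm, A, B, C, D):
            """Tissue attenuation-and-scatter correction as a third-order
            polynomial in distance r (cm); A..D are the source-specific
            published coefficients (placeholders here)."""
            return A + B * r_cm + C * r_cm**2 + D * r_cm**3

        def line_source_rate(gamma_const, activity, L, h, theta1, theta2, mu_t, coeffs):
            """Dose rate at perpendicular distance h from a line source of
            active length L: Sievert geometry factor times the Meisberger
            correction evaluated at h."""
            geom = sievert_integral(theta1, theta2, mu_t) / (L * h)
            return gamma_const * activity * geom * meisberger(h, *coeffs)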

  5. Adaptive handling of Rayleigh and Raman scatter of fluorescence data based on evaluation of the degree of spectral overlap

    Science.gov (United States)

    Hu, Yingtian; Liu, Chao; Wang, Xiaoping; Zhao, Dongdong

    2018-06-01

    At present, general scatter handling methods are unsatisfactory when scatter and fluorescence seriously overlap in the excitation-emission matrix. In this study, an adaptive method for scatter handling of fluorescence data is proposed. First, the Raman scatter was corrected by subtracting the baseline of deionized water, which was collected in each experiment to adapt to intensity fluctuations. Then, the degree of spectral overlap between Rayleigh scatter and fluorescence was classified into three categories based on the distance between the spectral peaks. The corresponding algorithms (setting to zero, or fitting on one or both sides) were implemented after evaluating the degree of overlap for each individual emission spectrum. The proposed method minimizes the number of fitting and interpolation operations, which reduces complexity, saves time, avoids overfitting, and, most importantly, preserves the authenticity of the data. Furthermore, the effectiveness of this procedure for subsequent PARAFAC analysis was assessed and compared to Delaunay interpolation in experiments with four typical organic chemicals and real water samples. Using this method, we conducted long-term monitoring of tap water and of river water near a dyeing and printing plant. This method can improve adaptability and accuracy in the scatter handling of fluorescence data.
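
    A minimal sketch of the two simplest branches described above (blank subtraction for Raman scatter, and the setting-to-zero treatment of the first-order Rayleigh band); the band half-width and array layout are illustrative assumptions.

        import numpy as np

        def correct_scatter(eem, blank, ex, em, rayleigh_halfwidth=10.0):
            """eem, blank: 2D arrays [n_ex, n_em]; ex, em: wavelength axes (nm).
            Subtract the deionized-water blank to remove Raman scatter, then
            zero a band around the first-order Rayleigh line (em ~ ex)."""
            out = eem - blank                           # Raman correction
            for i, x in enumerate(ex):
                mask = np.abs(em - x) < rayleigh_halfwidth
                out[i, mask] = 0.0                      # Rayleigh band set to zero
            return out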

  6. Verification of a table of phantom scatter factors for radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Arts, J.K.; Bailey, M.J.; Hill, R.

    2004-01-01

    Full text: Many commercially available treatment planning systems require the medical physicist to measure and enter significant quantities of data for the verification of physics-based algorithms. The CMS XiO (St. Louis, USA) treatment planning system requires, among other data, a table of phantom scatter factors. In a previous paper by Storchi et al, a table of phantom scatter factors is described. This table gives the phantom scatter factor as a function of field size and quality index, determined from a collection of measured data for the total scatter factor and the collimator scatter factor from 25 different beam qualities ranging from 4 MV up to 25 MV. These factors have been determined at a fixed reference depth of 10 cm for square fields of various sizes. This work investigates the claim that this table can be used as an alternative to a phantom scatter curve calculated from measured data of a particular treatment unit. By definition, it is difficult to measure the phantom scatter correction factor (S_p) directly. This problem can be solved using the relation S_cp(A) = S_c(A)·S_p(A), where S_cp(A) is the measured total scatter factor for a square field of side A and S_c(A) is the measured collimator scatter factor for the same field (Khan et al 1980, van Gasteren et al 1991). The total scatter correction factor (S_cp) was measured in a full phantom, and the collimator scatter factor (S_c) was measured using an ESTRO mini-phantom. These factors were measured on three Siemens linear accelerators (Concord, USA) with energies of 6 MV and 18 MV and square field sizes ranging from 4×4 cm to 40×40 cm. The Primus and KD Mevatron produced 6 and 18 MV X-rays, and the MXE Mevatron produced 6 MV X-rays only. The values for S_p were calculated by rearranging the relation above. Phantom scatter factors were calculated from the data provided by Storchi et al using the quality index of each beam. For comparison, a set of S_p values was
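
    For illustration, the rearrangement just described, S_p(A) = S_cp(A)/S_c(A), as a short Python sketch; the numerical values are purely illustrative placeholders, not measurements from this work.

        import numpy as np

        def phantom_scatter_factor(s_cp, s_c):
            """Derive phantom scatter factors from measured total (S_cp) and
            collimator (S_c) scatter factors: S_p = S_cp / S_c."""
            return np.asarray(s_cp) / np.asarray(s_c)

        # Hypothetical 6 MV measurements (illustrative numbers only):
        sizes = [4, 10, 20, 30, 40]                  # square field side (cm)
        s_cp  = [0.932, 1.000, 1.054, 1.082, 1.098]  # full-phantom measurements
        s_c   = [0.962, 1.000, 1.022, 1.031, 1.036]  # mini-phantom measurements
        s_p   = phantom_scatter_factor(s_cp, s_c)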

  7. Efficient Fixed-Offset GPR Scattering Analysis

    DEFF Research Database (Denmark)

    Meincke, Peter; Chen, Xianyao

    2004-01-01

    The electromagnetic scattering by buried three-dimensional penetrable objects, as involved in the analysis of ground penetrating radar systems, is calculated using the extended Born approximation. The involved scattering tensor is calculated using fast Fourier transforms (FFTs). We incorporate in the scattering calculation the correct radiation patterns of the ground penetrating radar antennas by using their plane-wave transmitting and receiving spectra. Finally, we derive an efficient FFT-based method to analyze a fixed-offset configuration in which the location of the transmitting antenna is different...

  8. Second Born approximation in elastic-electron scattering from nuclear static electro-magnetic multipoles

    International Nuclear Information System (INIS)

    Al-Khamiesi, I.M.; Kerimov, B.K.

    1988-01-01

    Second Born approximation corrections to electron scattering by nuclei with arbitrary spin are considered. Explicit integral expressions for the charge, magnetic dipole and interference differential cross sections are obtained. Magnetic and interference relative corrections are then investigated in the case of backward electron scattering using shell model form factors for the nuclear targets ⁹Be, ¹⁰B, and ¹⁴N. To understand the exponential growth of these corrections with the square of the electron energy K₀², the case of electron scattering by ⁶Li is considered using a monopole model charge form factor with power-law asymptotics. 11 refs., 2 figs. (author)

  9. Determination of the mass attenuation coefficients for X-ray fluorescence measurements correction by the Rayleigh to Compton scattering ratio

    Energy Technology Data Exchange (ETDEWEB)

    Conti, C.C., E-mail: ccconti@ird.gov.br [Institute for Radioprotection and Dosimetry – IRD/CNEN, Rio de Janeiro (Brazil); Physics Institute, State University of Rio de Janeiro – UERJ, Rio de Janeiro (Brazil); Anjos, M.J. [Physics Institute, State University of Rio de Janeiro – UERJ, Rio de Janeiro (Brazil); Salgado, C.M. [Nuclear Engineering Institute – IEN/CNEN, Rio de Janeiro (Brazil)

    2014-09-15

    Highlights: • This work describes a procedure for sample self-absorption correction. • The use of Monte Carlo simulation to calculate the mass attenuation coefficient curve was effective. • No need for a transmission measurement, saving time, financial resources and effort. • This article provides the curves for the 90° scattering angle. • Calculation available online at www.macx.net.br. -- Abstract: The X-ray fluorescence technique plays an important role in nondestructive analysis nowadays. The development of equipment, including portable instruments, enables a wide assortment of possibilities for analysis of stable elements, even in trace concentrations. Nevertheless, despite these advantages, one important drawback is radiation self-attenuation in the sample being measured, which needs to be considered for the proper determination of elemental concentration. The mass attenuation coefficient can be determined by a transmission measurement, but in this case the sample must be in slab-shaped geometry, and this demands two different setups and measurements. The Rayleigh to Compton scattering ratio, determined from the X-ray fluorescence spectrum, provides a link to the mass attenuation coefficient by means of a polynomial-type equation. This work presents a way to construct a Rayleigh to Compton scattering ratio versus mass attenuation coefficient curve by using the MCNP5 Monte Carlo computer code. The comparison between the calculated and literature values of the mass attenuation coefficient for some known samples showed agreement to within 15%. This calculation procedure is available online at www.macx.net.br.
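
    As an illustration of the polynomial-type link described above: a polynomial is fitted to (Rayleigh/Compton ratio, µ/ρ) pairs obtained for reference materials and then evaluated for unknown samples. All numbers below are hypothetical placeholders, not values from the paper.

        import numpy as np

        # Hypothetical calibration: Rayleigh-to-Compton ratios computed with a
        # Monte Carlo code for reference materials of known mass attenuation
        # coefficient mu/rho (cm^2/g) at the energy of interest.
        ratio_cal  = np.array([0.021, 0.035, 0.052, 0.078, 0.110])
        mu_rho_cal = np.array([0.182, 0.204, 0.231, 0.268, 0.310])

        coeffs = np.polyfit(ratio_cal, mu_rho_cal, 2)   # polynomial-type equation

        def mass_attenuation(ratio):
            """Estimate mu/rho of an unknown sample from its measured
            Rayleigh-to-Compton scattering ratio."""
            return np.polyval(coeffs, ratio)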

  10. Nuclear Compton scattering

    International Nuclear Information System (INIS)

    Christillin, P.

    1986-01-01

    The theory of nuclear Compton scattering is reformulated with explicit consideration of both virtual and real pionic degrees of freedom. The effects due to low-lying nuclear states, to seagull terms, to pion condensation and to the Δ dynamics in the nucleus and their interplay in the different energy regions are examined. It is shown that all corrections to the one-body terms, of diffractive behaviour determined by the nuclear form factor, have an effective two-body character. The possibility of using Compton scattering as a complementary source of information about nuclear dynamics is restressed. (author)

  11. Electron scattering in dense atomic and molecular gases: An empirical correlation of polarizability and electron scattering length

    International Nuclear Information System (INIS)

    Rupnik, K.; Asaf, U.; McGlynn, S.P.

    1990-01-01

    A linear correlation exists between the electron scattering length, as measured by a pressure shift method, and the polarizabilities of He, Ne, Ar, Kr, and Xe gases. The correlative algorithm has excellent predictive capability for the electron scattering lengths of mixtures of rare gases, simple molecular gases such as H₂ and N₂, and even complex molecular entities such as methane, CH₄.

  12. Efficient error correction for next-generation sequencing of viral amplicons.

    Science.gov (United States)

    Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury

    2012-06-25

    Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH) in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and the data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.
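
    To make the k-mer idea concrete, here is a heavily simplified sketch (not the published KEC implementation): k-mers rarer than a frequency threshold are treated as error-induced, and read positions covered by such rare k-mers are flagged for correction; parameter values are illustrative.

        from collections import Counter

        def kmer_error_positions(reads, k=15, min_count=3):
            """Count all k-mers across the reads; for each read, flag the
            positions covered by k-mers seen fewer than min_count times.
            Returns one sorted list of flagged positions per read."""
            counts = Counter()
            for r in reads:
                for i in range(len(r) - k + 1):
                    counts[r[i:i + k]] += 1
            flagged = []
            for r in reads:
                bad = set()
                for i in range(len(r) - k + 1):
                    if counts[r[i:i + k]] < min_count:
                        bad.update(range(i, i + k))
                flagged.append(sorted(bad))
            return flagged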

  13. Target mass effects in polarized deep-inelastic scattering

    International Nuclear Information System (INIS)

    Piccione, A.

    1998-01-01

    We present a computation of nucleon mass corrections to nucleon structure functions for polarized deep-inelastic scattering. We perform a fit to existing data including mass corrections at first order in m²/Q², and we study the effect of these corrections on physically interesting quantities. We conclude that mass corrections are generally small, and compatible with current estimates of higher twist uncertainties, when available. (orig.)

  14. Measurement correction method for force sensor used in dynamic pressure calibration based on artificial neural network optimized by genetic algorithm

    Science.gov (United States)

    Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing

    2017-12-01

    We present an optimization algorithm to obtain low-uncertainty dynamic pressure measurements from a force-transducer-based device. In this paper, the advantages and disadvantages of the methods commonly used to measure propellant powder gas pressure, the applicable scope of dynamic pressure calibration devices, and the shortcomings of the traditional comparison calibration method based on the drop-weight device are first analysed in detail. Then, a dynamic calibration method for measuring pressure using a force sensor based on a drop-weight device is introduced. This method can effectively save time when many pressure sensors are calibrated simultaneously and extend the life of expensive reference sensors. However, the force sensor is installed between the drop-weight and the hammerhead by transition pieces through bolt fastening, which causes adverse effects such as additional pretightening and inertia forces. To address these effects, the influence mechanisms of the pretightening force, the inertia force and other factors on the force measurement are theoretically analysed. A measurement correction method for the force measurement is then proposed, based on an artificial neural network optimized by a genetic algorithm. The training and testing data sets are obtained from calibration tests, and the selection criteria for the key parameters of the correction model are discussed. The evaluation results for the test data show that the correction model can effectively improve the force measurement accuracy of the force sensor. Compared with the traditional high-accuracy comparison calibration method, the percentage difference of the impact-force-based measurement is less than 0.6% and the relative uncertainty of the corrected force value is 1.95%, which can meet the requirements of engineering applications.
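
    A compact sketch of the general technique named above: a small feed-forward network whose weights are evolved by a genetic algorithm with tournament selection, uniform crossover and Gaussian mutation (fitness = negative mean-squared error). The network size, operators and hyperparameters are illustrative choices, not those of the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def mlp(w, x, n_hidden=8):
            """One-hidden-layer tanh network; w is a flat parameter vector."""
            n_in = x.shape[1]
            i = 0
            W1 = w[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
            b1 = w[i:i + n_hidden]; i += n_hidden
            W2 = w[i:i + n_hidden]; i += n_hidden
            b2 = w[i]
            return np.tanh(x @ W1 + b1) @ W2 + b2

        def ga_train(x, y, n_hidden=8, pop=60, gens=200, sigma=0.1):
            """Evolve network weights: fitness = -MSE on the calibration data."""
            dim = x.shape[1] * n_hidden + 2 * n_hidden + 1
            P = rng.normal(0.0, 0.5, (pop, dim))
            def fitness(w):
                e = mlp(w, x, n_hidden) - y
                return -np.mean(e * e)
            for _ in range(gens):
                fit = np.array([fitness(w) for w in P])
                children = [P[np.argmax(fit)].copy()]          # elitism
                while len(children) < pop:
                    a, b = rng.integers(0, pop, 2)
                    p1 = P[a] if fit[a] > fit[b] else P[b]     # tournament select
                    a, b = rng.integers(0, pop, 2)
                    p2 = P[a] if fit[a] > fit[b] else P[b]
                    mask = rng.random(dim) < 0.5               # uniform crossover
                    child = np.where(mask, p1, p2)
                    child = child + rng.normal(0.0, sigma, dim)  # Gaussian mutation
                    children.append(child)
                P = np.array(children)
            fit = np.array([fitness(w) for w in P])
            return P[np.argmax(fit)]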

  15. Neutron Brillouin scattering in dense fluids

    Energy Technology Data Exchange (ETDEWEB)

    Verkerk, P [Technische Univ. Delft (Netherlands); FINGO Collaboration

    1997-04-01

    Thermal neutron scattering is a typical microscopic probe for investigating dynamics and structure in condensed matter. In contrast, light (Brillouin) scattering with its three orders of magnitude larger wavelength is a typical macroscopic probe. In a series of experiments using the improved small-angle facility of IN5 a significant step forward is made towards reducing the gap between the two. For the first time the transition from the conventional single line in the neutron spectrum scattered by a fluid to the Rayleigh-Brillouin triplet known from light-scattering experiments is clearly and unambiguously observed in the raw neutron data without applying any corrections. Results of these experiments are presented. (author).

  16. Classical- and quantum mechanical Coulomb scattering

    International Nuclear Information System (INIS)

    Gratzl, W.

    1987-01-01

    Because quantum mechanical Coulomb scattering is either ignored or treated unsatisfactorily in textbooks, the present work attempts to give a physically plausible, mathematically correct but elementary treatment in a form that can be used in textbooks and lectures on quantum mechanics. Coulomb scattering is derived as a limiting case of a screened Coulomb potential (finite range) within a time dependent quantum scattering theory. The difference in the asymptotic conditions for potentials of finite versus infinite range leads back to the classical Coulomb scattering. In the classical framework many concepts of the quantum theory can be introduced and are useful for an intuitive understanding of the quantum theory. The differences between classical and quantum scattering theory are likewise useful for didactic purposes. (qui)

  17. Doppler distortion correction based on microphone array and matching pursuit algorithm for a wayside train bearing monitoring system

    International Nuclear Information System (INIS)

    Liu, Xingchen; Hu, Zhiyong; He, Qingbo; Zhang, Shangbin; Zhu, Jun

    2017-01-01

    Doppler distortion and background noise can reduce the effectiveness of wayside acoustic monitoring and fault diagnosis of train bearings. This paper proposes a method combining a microphone array and a matching pursuit algorithm to overcome these difficulties. First, a dictionary is constructed based on the characteristics and mechanism of a far-field assumption. Then, the angle of arrival of the train bearing is acquired by applying matching pursuit to the acoustic array signals. Finally, after the resampling time series is obtained, the Doppler distortion can be corrected, which is convenient for further diagnostic work. Compared with traditional single-microphone Doppler correction methods, the advantages of the presented array method are its robustness to background noise and the fact that it requires almost no pre-measured parameters. Simulation and experimental study show that the proposed method is effective in performing wayside acoustic bearing fault diagnosis. (paper)

  18. Doppler distortion correction based on microphone array and matching pursuit algorithm for a wayside train bearing monitoring system

    Science.gov (United States)

    Liu, Xingchen; Hu, Zhiyong; He, Qingbo; Zhang, Shangbin; Zhu, Jun

    2017-10-01

    Doppler distortion and background noise can reduce the effectiveness of wayside acoustic monitoring and fault diagnosis of train bearings. This paper proposes a method combining a microphone array and a matching pursuit algorithm to overcome these difficulties. First, a dictionary is constructed based on the characteristics and mechanism of a far-field assumption. Then, the angle of arrival of the train bearing is acquired by applying matching pursuit to the acoustic array signals. Finally, after the resampling time series is obtained, the Doppler distortion can be corrected, which is convenient for further diagnostic work. Compared with traditional single-microphone Doppler correction methods, the advantages of the presented array method are its robustness to background noise and the fact that it requires almost no pre-measured parameters. Simulation and experimental study show that the proposed method is effective in performing wayside acoustic bearing fault diagnosis.

  19. Scattering theory

    CERN Document Server

    Friedrich, Harald

    2016-01-01

    This corrected and updated second edition of "Scattering Theory" presents a concise and modern coverage of the subject. In the present treatment, special attention is given to the role played by the long-range behaviour of the projectile-target interaction, and a theory is developed, which is well suited to describe near-threshold bound and continuum states in realistic binary systems such as diatomic molecules or molecular ions. It is motivated by the fact that experimental advances have shifted and broadened the scope of applications where concepts from scattering theory are used, e.g. to the field of ultracold atoms and molecules, which has been experiencing enormous growth in recent years, largely triggered by the successful realization of Bose-Einstein condensates of dilute atomic gases in 1995. The book contains sections on special topics such as near-threshold quantization, quantum reflection, Feshbach resonances and the quantum description of scattering in two dimensions. The level of abstraction is k...

  20. Pion inelastic scattering and the pion-nucleus effective interaction

    International Nuclear Information System (INIS)

    Carr, J.A.

    1983-01-01

    This work examines pion inelastic scattering with the primary purpose of gaining a better understanding of the properties of the pion-nucleus interaction. The main conclusion of the work is that an effective interaction which incorporates the most obvious theoretical corrections to the impulse approximation does a good job of explaining pion elastic and inelastic scattering from zero to 200 MeV without significant adjustments to the strength parameters of the force. Watson's multiple scattering theory is used to develop a theoretical interaction starting from the free pion-nucleon interaction. Elastic scattering was used to calibrate the isoscalar central interaction. It was found that the impulse approximation did poorly at low energy, while the multiple scattering corrections gave good agreement with all of the data after a few minor adjustments in the force. The distorted wave approximation for the inelastic transition matrix elements is evaluated for both natural and unnatural parity excitations. The isoscalar natural parity transitions are used to test the reaction theory, and it is found that the effective interaction calibrated by elastic scattering produces good agreement with the inelastic data. Calculations are also shown for other inelastic and charge exchange reactions. It appears that the isovector central interaction is reasonable, but the importance of medium corrections cannot be determined. The unnatural parity transitions are also reasonably described by the theoretical estimate of the spin-orbit interaction, but not enough systematic data exist to reach a firm conclusion

  1. A multifrequency MUSIC algorithm for locating small inhomogeneities in inverse scattering

    International Nuclear Information System (INIS)

    Griesmaier, Roland; Schmiedecke, Christian

    2017-01-01

    We consider an inverse scattering problem for time-harmonic acoustic or electromagnetic waves with sparse multifrequency far-field data sets. The goal is to localize several small penetrable objects embedded inside an otherwise homogeneous background medium from observations of far fields of scattered waves corresponding to incident plane waves with one fixed incident direction but several different frequencies. We assume that the far field is measured at a few observation directions only. Taking advantage of the smallness of the scatterers with respect to the wavelength, we utilize an asymptotic representation formula for the far field to design and analyze a MUSIC-type reconstruction method for this setup. We establish lower bounds on the number of frequencies and receiver directions that are required to recover the number and positions of an ensemble of scatterers from the given measurements. Furthermore, we briefly sketch a possible application of the reconstruction method to the practically relevant case of multifrequency backscattering data. Numerical examples are presented to document the potential and limitations of this approach. (paper)
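
    For illustration, the textbook MUSIC construction under the usual assumptions (a data matrix of rank equal to the number of scatterers, and a test vector g(z) modelling the response of a point scatterer at trial location z); this is a generic sketch, not necessarily the authors' exact variant.

        import numpy as np

        def music_indicator(F, test_vecs, n_scatterers):
            """F: (n_receivers x n_frequencies) far-field data matrix.
            test_vecs: dict mapping trial location z -> model vector g(z) of
            length n_receivers. Large indicator values flag scatterer locations,
            where g(z) is (nearly) orthogonal to the noise subspace."""
            U, _s, _vh = np.linalg.svd(F)
            noise = U[:, n_scatterers:]            # left noise subspace
            indicator = {}
            for z, g in test_vecs.items():
                g = g / np.linalg.norm(g)
                indicator[z] = 1.0 / np.linalg.norm(noise.conj().T @ g)
            return indicator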

  2. Long-Wavelength Phonon Scattering in Nonpolar Semiconductors

    DEFF Research Database (Denmark)

    Lawætz, Peter

    1969-01-01

    The long-wavelength acoustic- and optical-phonon scattering of carriers in nonpolar semiconductors is considered from a general point of view. The deformation-potential approximation is defined and it is shown that long-range electrostatic forces give a nontrivial correction to the scattering ... of the very-short-range nature of interactions in a covalent semiconductor.

  3. Edge corrections to electromagnetic Casimir energies from general-purpose Mathieu-function routines

    Science.gov (United States)

    Blose, Elizabeth Noelle; Ghimire, Biswash; Graham, Noah; Stratton-Smith, Jeremy

    2015-01-01

    Scattering theory methods make it possible to calculate the Casimir energy of a perfectly conducting elliptic cylinder opposite a perfectly conducting plane in terms of Mathieu functions. In the limit of zero radius, the elliptic cylinder becomes a finite-width strip, which allows for the study of edge effects. However, existing packages for computing Mathieu functions are insufficient for this calculation because none can compute Mathieu functions of both the first and second kind for complex arguments. To address this shortcoming, we have written a general-purpose Mathieu-function package, based on algorithms developed by Alhargan. We use these routines to find edge corrections to the proximity force approximation for the Casimir energy of a perfectly conducting strip opposite a perfectly conducting plane.

  4. Characterization of the Photon Counting CHASE Jr., Chip Built in a 40-nm CMOS Process With a Charge Sharing Correction Algorithm Using a Collimated X-Ray Beam

    Energy Technology Data Exchange (ETDEWEB)

    Krzyżanowska, A. [AGH-UST, Cracow; Deptuch, G. W. [Fermilab; Maj, P. [AGH-UST, Cracow; Gryboś, P. [AGH-UST, Cracow; Szczygieł, R. [AGH-UST, Cracow

    2017-08-01

    This paper presents the detailed characterization of a single photon counting chip, named CHASE Jr., built in a CMOS 40-nm process and operating with synchrotron radiation. The chip utilizes an on-chip implementation of the C8P1 algorithm. The algorithm eliminates the charge sharing related uncertainties, namely, the dependence of the number of registered photons on the discriminator's threshold, set for monochromatic irradiation, and errors in the assignment of an event to a certain pixel. The article presents a short description of the algorithm as well as the architecture of the CHASE Jr. chip. The analog and digital functionalities allowing for proper operation of the C8P1 algorithm are described, namely, an offset correction for two discriminators independently, a two-stage gain correction, and different operation modes of the digital blocks. The results of tests of C8P1 operation are presented for the chip bump-bonded to a silicon sensor and exposed to a 3.5-µm-wide pencil beam of 8-keV synchrotron photons. The sensitivity of the algorithm's performance to the chip settings, as well as to the uniformity of the analog front-end parameters, was studied. The presented results prove that the C8P1 algorithm enables counting all photons hitting the detector between readout channels and retrieving the actual photon energy.

  5. Scattering amplitudes and static atomic correction factors for the composition-sensitive 002 reflection in sphalerite ternary III-V and II-VI semiconductors.

    Science.gov (United States)

    Schowalter, M; Müller, K; Rosenauer, A

    2012-01-01

    Modified atomic scattering amplitudes (MASAs), taking into account the redistribution of charge due to bonds, and the respective correction factors considering the effect of static atomic displacements were computed for the chemically sensitive 002 reflection for ternary III-V and II-VI semiconductors. MASAs were derived from computations within the density functional theory formalism. Binary eight-atom unit cells were strained according to each strain state s (thin, intermediate, thick and fully relaxed electron microscopic specimen) and each concentration (x = 0, …, 1 in 0.01 steps), where the lattice parameters for composition x in strain state s were calculated using continuum elasticity theory. The concentration dependence was derived by computing MASAs for each of these binary cells. Correction factors for static atomic displacements were computed from relaxed atom positions by generating 50 × 50 × 50 supercells using the lattice parameter of the eight-atom unit cells. Atoms were randomly distributed according to the required composition. Polynomials were fitted to the composition dependence of the MASAs and the correction factors for the different strain states. Fit parameters are given in the paper.

  6. Two-Loop Scattering Amplitudes from the Riemann Sphere

    CERN Document Server

    Geyer, Yvonne; Monteiro, Ricardo; Tourkine, Piotr

    2016-01-01

    The scattering equations give striking formulae for massless scattering amplitudes at tree level and, as shown recently, at one loop. The progress at loop level was based on ambitwistor string theory, which naturally yields the scattering equations. We proposed that, for ambitwistor strings, the standard loop expansion in terms of the genus of the worldsheet is equivalent to an expansion in terms of nodes of a Riemann sphere, with the nodes carrying the loop momenta. In this paper, we show how to obtain two-loop scattering equations with the correct factorization properties. We adapt genus-two integrands from the ambitwistor string to the nodal Riemann sphere and show that these yield correct answers, by matching standard results for the four-point two-loop amplitudes of maximal supergravity and super-Yang-Mills theory. In the Yang-Mills case, this requires the loop analogue of the Parke-Taylor factor carrying the colour dependence, which includes non-planar contributions.

  7. Scattering properties of electromagnetic waves from metal object in the lower terahertz region

    Science.gov (United States)

    Chen, Gang; Dang, H. X.; Hu, T. Y.; Su, Xiang; Lv, R. C.; Li, Hao; Tan, X. M.; Cui, T. J.

    2018-01-01

    An efficient hybrid algorithm is proposed to analyze the electromagnetic scattering properties of metal objects at lower terahertz (THz) frequencies. A metal object can be viewed as a perfectly electrically conducting object with a slightly rough surface in the lower THz region. Hence, the THz field scattered from a metal object can be divided into coherent and incoherent parts. The physical optics and truncated-wedge incremental-length diffraction coefficient methods are combined to compute the coherent part, while the small perturbation method is used for the incoherent part. Using the Monte Carlo method, the radar cross section of the rough metal surface is computed with both the multilevel fast multipole algorithm and the proposed hybrid algorithm. The numerical results show that the proposed algorithm has good accuracy and rapidly simulates the scattering properties in the lower THz region.

  8. An algebraic approach to the scattering equations

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Rijun; Rao, Junjie [Zhejiang Institute of Modern Physics, Zhejiang University,Hangzhou, 310027 (China); Feng, Bo [Zhejiang Institute of Modern Physics, Zhejiang University,Hangzhou, 310027 (China); Center of Mathematical Science, Zhejiang University,Hangzhou, 310027 (China); He, Yang-Hui [School of Physics, NanKai University,Tianjin, 300071 (China); Department of Mathematics, City University,London, EC1V 0HB (United Kingdom); Merton College, University of Oxford,Oxford, OX14JD (United Kingdom)

    2015-12-10

    We employ the so-called companion matrix method from computational algebraic geometry, tailored for zero-dimensional ideals, to study the scattering equations. The method renders the CHY-integrand of scattering amplitudes computable using simple linear algebra and is amenable to an algorithmic approach. Certain identities in the amplitudes as well as rationality of the final integrand become immediate in this formalism.

  9. Diffuse scattering from crystals with point defects

    International Nuclear Information System (INIS)

    Andrushevsky, N.M.; Shchedrin, B.M.; Simonov, V.I.; Malakhova, L.F.

    2002-01-01

    The analytical expressions for calculating the intensities of X-ray diffuse scattering from a crystal of finite dimensions and monatomic substitutional, interstitial, or vacancy-type point defects have been derived. The method for the determination of the three-dimensional structure by experimental diffuse-scattering data from crystals with point defects having various concentrations is discussed and corresponding numerical algorithms are suggested

  10. An algebraic approach to the scattering equations

    International Nuclear Information System (INIS)

    Huang, Rijun; Rao, Junjie; Feng, Bo; He, Yang-Hui

    2015-01-01

    We employ the so-called companion matrix method from computational algebraic geometry, tailored for zero-dimensional ideals, to study the scattering equations. The method renders the CHY-integrand of scattering amplitudes computable using simple linear algebra and is amenable to an algorithmic approach. Certain identities in the amplitudes as well as rationality of the final integrand become immediate in this formalism.

  11. Optimisation of positron emission tomography in heavy ion therapy on the basis of X-ray tomograms

    International Nuclear Information System (INIS)

    Poenisch, F.

    2003-05-01

    The main goal of the present study was to develop a correction method for the Compton scatter of annihilation quanta. This scatter impairs image quality as well as the imaging accuracy of the detector system, making it difficult to compare measured and predictively calculated activity distributions. The scatter correction methods known from nuclear medicine cannot be applied to PET in heavy ion therapy without limitations. For this reason, a scatter correction method adapted to the BASTEI system was developed and implemented in clinical practice. The selected method is a combination of the Monte Carlo method and a simple scatter algorithm. It supplies reliable results, in a manner adapted to the requirements of the therapy, when applied both to experimental and to simulated data. The present scatter correction method has been in routine use since 2002. This has been made possible by porting the program code to a current PC system and using a time-efficient reconstruction algorithm

  12. Scattering properties of ultrafast laser-induced refractive index shaping lenticular structures in hydrogels

    Science.gov (United States)

    Wozniak, Kaitlin T.; Germer, Thomas A.; Butler, Sam C.; Brooks, Daniel R.; Huxlin, Krystel R.; Ellis, Jonathan D.

    2018-02-01

    We present measurements of light scatter induced by a new ultrafast laser technique being developed for laser refractive correction in transparent ophthalmic materials such as cornea, contact lenses, and/or intraocular lenses. In this new technique, called intra-tissue refractive index shaping (IRIS), a 405 nm femtosecond laser is focused and scanned below the corneal surface, inducing a spatially-varying refractive index change that corrects vision errors. In contrast with traditional laser correction techniques, such as laser in-situ keratomileusis (LASIK) or photorefractive keratectomy (PRK), IRIS does not operate via photoablation, but rather changes the refractive index of transparent materials such as cornea and hydrogels. A concern with any laser eye correction technique is additional scatter induced by the process, which can adversely affect vision, especially at night. The goal of this investigation is to identify sources of scatter induced by IRIS and to mitigate possible effects on visual performance in ophthalmic applications. Preliminary light scattering measurements on patterns written into hydrogel showed four sources of scatter, differentiated by distinct behaviors: (1) scattering from scanned lines; (2) scattering from stitching errors, resulting from adjacent scanning fields not being aligned to one another; (3) diffraction from Fresnel zone discontinuities; and (4) long-period variations in the scans that created distinct diffraction peaks, likely due to inconsistent line spacing in the writing instrument. By knowing the nature of these different scattering errors, it will now be possible to modify and optimize the design of IRIS structures to mitigate potential deficits in visual performance in human clinical trials.

  13. Window selection for dual photopeak window scatter correction in Tc-99m imaging

    International Nuclear Information System (INIS)

    Vries, D.J. de; King, M.A.

    1994-01-01

    The width and placement of the windows for the dual photopeak window (DPW) scatter subtraction method for Tc-99m imaging are investigated, in order to obtain a method that is stable on a multihead detector system for single photon emission computed tomography (SPECT) and capable of providing a good scatter estimate for extended objects. For various window pairs, stability and noise were examined in experiments on a SPECT system, while Monte Carlo simulations were used to predict the accuracy of scatter estimates for a variety of objects and to guide the development of regression relations for the window pairs. The DPW method that resulted from this study was implemented with a symmetric 20% photopeak window composed of a 15% asymmetric photopeak window and a 5% lower window abutted 7 keV below the peak. A power function regression was used to relate the scatter-to-total ratio to the lower-window-to-total ratio at each pixel, from which an estimated scatter image was calculated. DPW demonstrated good stability, achieved by abutting the two windows away from the peak. Performance was assessed and compared with Compton window subtraction (CWS). For simulated extended objects, DPW generally produced a less biased scatter estimate than the commonly used CWS method with k = 0.5. In acquisitions of a clinical SPECT phantom, contrast recovery was comparable for both DPW and CWS; however, DPW showed greater visual contrast in clinical SPECT bone studies.
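
    A minimal sketch of the per-pixel estimate described above (scatter/total related to lower-window/total by a power function); the regression parameters below are hypothetical stand-ins for a calibration of the kind the paper derives by simulation.

        import numpy as np

        def dpw_scatter_estimate(lower, total, a=1.8, b=1.3, eps=1e-6):
            """Dual photopeak window estimate: per pixel,
            scatter/total = a * (lower/total)**b.
            lower, total: counts in the 5% lower window and the full 20%
            photopeak window; a, b: power-function regression parameters
            (hypothetical values here)."""
            ratio = lower / np.maximum(total, eps)
            scatter = total * a * ratio**b
            return np.clip(scatter, 0.0, total)     # keep the estimate physical

        # primary (scatter-subtracted) image:
        # primary = total - dpw_scatter_estimate(lower, total)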

  14. Dynamics of liquid N2 studied by neutron inelastic scattering

    DEFF Research Database (Denmark)

    Pedersen, Karen Schou; Carneiro, Kim; Hansen, Flemming Yssing

    1982-01-01

    Neutron inelastic-scattering data from liquid N₂ at wave-vector transfers κ between 0.18 and 2.1 Å⁻¹ and temperatures ranging from T = 65-77 K are presented. The data are corrected for the contributions from multiple scattering and incoherent scattering. The resulting dynamic structure factor S(κ,ω)...

  15. Delbrueck scattering of monoenergetic photons

    International Nuclear Information System (INIS)

    Kahane, S.

    1978-05-01

    The Delbrueck effect was experimentally investigated in high Z nuclei with monoenergetic photons in the range 6.8-11.4 MeV. Two different methods were used for measurements of the differential scattering cross-section, in the 25-140 deg range and in the forward direction (theta = 1.5 deg), respectively. The known Compton scattering cross-section was used in a new and unique way for the determination of the elastic scattering cross-section. Isolation of the contribution of the real Delbrueck amplitudes to the cross-section was carried out successfully. Experimental confirmation of the theoretical calculations of Papatzacos and Mork and measurement, for the first time, of Rayleigh scattering in the 10 MeV region are also reported. One of the most interesting findings is the presence of Coulomb corrections in Delbrueck scattering at these energies. More theoretical effort is needed in this direction. (author)

  16. Cold moderator scattering kernels

    International Nuclear Information System (INIS)

    MacFarlane, R.E.

    1989-01-01

    New thermal-scattering-law files in ENDF format have been developed for solid methane, liquid methane, and liquid ortho- and para-hydrogen and ortho- and para-deuterium, using up-to-date models that include such effects as incoherent elastic scattering in the solid, diffusion and hindered vibrations and rotations in the liquids, and spin correlations for hydrogen and deuterium. These files were generated with the new LEAPR module of the NJOY Nuclear Data Processing System. Other modules of this system were used to produce cross sections for these moderators in the correct format for the continuous-energy Monte Carlo code (MCNP) being used for cold-moderator-design calculations at the Los Alamos Neutron Scattering Center (LANSCE). 20 refs., 14 figs

  17. Multigroup computation of the temperature-dependent Resonance Scattering Model (RSM) and its implementation

    Energy Technology Data Exchange (ETDEWEB)

    Ghrayeb, S. Z. [Dept. of Mechanical and Nuclear Engineering, Pennsylvania State Univ., 230 Reber Building, Univ. Park, PA 16802 (United States); Ouisloumen, M. [Westinghouse Electric Company, 1000 Westinghouse Drive, Cranberry Township, PA 16066 (United States); Ougouag, A. M. [Idaho National Laboratory, MS-3860, PO Box 1625, Idaho Falls, ID 83415 (United States); Ivanov, K. N.

    2012-07-01

    A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied. (authors)

  18. Axisymmetric charge-conservative electromagnetic particle simulation algorithm on unstructured grids: Application to microwave vacuum electronic devices

    Science.gov (United States)

    Na, Dong-Yeop; Omelchenko, Yuri A.; Moon, Haksu; Borges, Ben-Hur V.; Teixeira, Fernando L.

    2017-10-01

    We present a charge-conservative electromagnetic particle-in-cell (EM-PIC) algorithm optimized for the analysis of vacuum electronic devices (VEDs) with cylindrical symmetry (axisymmetry). We exploit the axisymmetry present in the device geometry, fields, and sources to reduce the dimensionality of the problem from 3D to 2D. Further, we employ 'transformation optics' principles to map the original problem in polar coordinates with metric tensor diag(1, ρ², 1) to an equivalent problem on a Cartesian metric tensor diag(1, 1, 1) with an effective (artificial) inhomogeneous medium introduced. The resulting problem in the meridian (ρz) plane is discretized using an unstructured 2D mesh considering TEϕ-polarized fields. Electromagnetic field and source (node-based charges and edge-based currents) variables are expressed as differential forms of various degrees, and discretized using Whitney forms. Using leapfrog time integration, we obtain a mixed E-B finite-element time-domain scheme for the fully discrete Maxwell's equations. We achieve a local and explicit time update for the field equations by employing the sparse approximate inverse (SPAI) algorithm. Interpolation of field values to particle positions for solving the Newton-Lorentz equations of motion is also done via Whitney forms. Particles are advanced using the Boris algorithm with relativistic correction. A recently introduced charge-conserving scatter scheme tailored for 2D unstructured grids is used in the scatter step. The algorithm is validated on cylindrical cavity and space-charge-limited cylindrical diode problems. We use the algorithm to investigate the physical performance of VEDs designed to harness particle bunching effects arising from coherent (resonance) Cerenkov electron beam interactions within micro-machined slow wave structures.

  19. Characterization of Scattered X-Ray Photons in Dental Cone-Beam Computed Tomography.

    Science.gov (United States)

    Yang, Ching-Ching

    2016-01-01

    Scatter is a major source of artifacts in dental cone-beam CT (CBCT) and strongly degrades the detectability of details within images. This work aimed to improve the image quality of dental CBCT through scatter correction. Scatter was estimated in the projection domain from the low-frequency component of the difference between the raw CBCT projection and the projection obtained by extrapolating a model fitted to raw projections acquired with two different sizes of axial field of view (FOV). The fitting function was optimized using Monte Carlo simulation. To validate the proposed method, an anthropomorphic phantom and a water-filled cylindrical phantom with rod inserts simulating different tissue materials were scanned at 120 kVp and 5 mA with a 9-second scanning time, covering axial FOVs of 4 cm and 13 cm. Detectability in the CT images was evaluated by calculating the contrast-to-noise ratio (CNR). Beam-hardening and cupping artifacts were observed in CBCT images without scatter correction, especially those acquired with the 13 cm FOV. These artifacts were reduced in CBCT images corrected by the proposed method, demonstrating its efficacy for scatter correction. After scatter correction, image quality improved in terms of target detectability, quantified as the CNR of the rod inserts in the cylindrical phantom. The calculations performed in this work may provide a route to a high level of diagnostic image quality for CBCT imaging of oral and maxillofacial structures while keeping patient dose as low as reasonably achievable, ultimately making CBCT a reliable and safe tool in clinical practice.
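
    A minimal sketch of this style of projection-domain correction appears below, assuming the raw projection and the model-extrapolated (scatter-free) projection are already available as 2D arrays. The function names, the Gaussian low-pass width, and the clipping choices are illustrative assumptions; the paper's Monte Carlo-optimized fitting function is not reproduced here.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def scatter_correct(raw_proj, fitted_proj, sigma_px=30.0):
            """Subtract a low-frequency scatter estimate from a raw projection.

            The scatter estimate is the low-frequency component (Gaussian
            low-pass of width sigma_px) of the difference between the raw
            projection and the projection extrapolated from the fitted model.
            """
            scatter = gaussian_filter(raw_proj - fitted_proj, sigma=sigma_px)
            scatter = np.clip(scatter, 0.0, None)        # scatter cannot be negative
            return np.clip(raw_proj - scatter, 0.0, None)

        def cnr(image, roi_mask, bg_mask):
            """Contrast-to-noise ratio of a rod insert against background."""
            return abs(image[roi_mask].mean() - image[bg_mask].mean()) / image[bg_mask].std()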
