WorldWideScience

Sample records for scatter correction methods

  1. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    International Nuclear Information System (INIS)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo; Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro; Kato, Rikio

    2005-01-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)
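
    The abstract describes the IBSC correction as a convolution of the attenuation-corrected image with a scatter function, scaled by an image-based scatter fraction and then subtracted. The sketch below illustrates only that structure; the Gaussian scatter function, its width, and the constant scatter fraction are placeholder assumptions, not the paper's calibrated functions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ibsc_like_correction(img_ac, scatter_sigma_px=8.0, scatter_fraction=0.3):
    """Sketch of an IBSC-style image-domain scatter correction.

    img_ac           : attenuation-corrected reconstructed slice (2D array)
    scatter_sigma_px : width of the scatter function (Gaussian stand-in), pixels
    scatter_fraction : scatter fraction (constant stand-in for the paper's
                       image-based scatter fraction function)
    """
    scatter_est = scatter_fraction * gaussian_filter(img_ac, scatter_sigma_px)
    corrected = np.clip(img_ac - scatter_est, 0.0, None)
    return corrected, scatter_est

# toy usage on a synthetic disk "slice"
yy, xx = np.mgrid[-64:64, -64:64]
slice_ac = (np.hypot(xx, yy) < 50).astype(float)
corrected, scatter = ibsc_like_correction(slice_ac)
print(corrected.shape, float(scatter.max()))
```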

  2. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method.

    Science.gov (United States)

    Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.

  3. Scatter correction method with primary modulator for dual energy digital radiography: a preliminary study

    Science.gov (United States)

    Jo, Byung-Du; Lee, Young-Jin; Kim, Dae-Hong; Jeon, Pil-Hyun; Kim, Hee-Joung

    2014-03-01

    In conventional digital radiography (DR) using a dual energy subtraction technique, a significant fraction of the detected photons are scattered within the body, resulting in the scatter component. Scattered radiation can significantly deteriorate image quality in diagnostic X-ray imaging systems. Various methods of scatter correction, including both measurement- and non-measurement-based methods, have been proposed in the past. Both methods can reduce scatter artifacts in images. However, non-measurement-based methods require a homogeneous object and have insufficient scatter component correction. Therefore, we employed a measurement-based method to correct for the scatter component of inhomogeneous objects in dual energy DR (DEDR) images. We performed a simulation study using a Monte Carlo simulation with a primary modulator, which is a measurement-based method for the DEDR system. The primary modulator, which has a checkerboard pattern, was used to modulate primary radiation. Cylindrical phantoms of variable size were used to quantify imaging performance. For scatter estimation, we used discrete Fourier transform filtering. The primary modulation method was evaluated using a cylindrical phantom in the DEDR system. The scatter components were accurately removed using a primary modulator. When the results acquired with scatter correction and without correction were compared, the average contrast-to-noise ratio (CNR) with the correction was 1.35 times higher than that obtained without correction, and the average root mean square error (RMSE) with the correction was 38.00% better than that without correction. In the subtraction study, the average CNR with correction was 2.04 (aluminum subtraction) and 1.38 (polymethyl methacrylate (PMMA) subtraction) times higher than that obtained without the correction. The analysis demonstrated the accuracy of scatter correction and the improvement of image quality using a primary modulator and showed the feasibility of
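
    The study quantifies the correction through average CNR and RMSE. A minimal sketch of these two figures of merit follows; the ROI masks and the scatter-free reference image are caller-supplied assumptions, since the abstract does not define them explicitly.

```python
import numpy as np

def cnr(image, signal_mask, background_mask):
    """Contrast-to-noise ratio between a signal ROI and a background ROI."""
    s = image[signal_mask].mean()
    b = image[background_mask].mean()
    return abs(s - b) / image[background_mask].std()

def rmse(image, reference):
    """Root mean square error against a reference (e.g. scatter-free) image."""
    return float(np.sqrt(np.mean((image - reference) ** 2)))

# usage idea: compare corrected vs. uncorrected images of the same phantom,
# e.g. cnr(corrected, roi, bkg) / cnr(uncorrected, roi, bkg) gives the CNR gain
```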

  4. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    Energy Technology Data Exchange (ETDEWEB)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo [National Center for Geriatrics and Gerontology Research Institute, Department of Brain Science and Molecular Imaging, Obu, Aichi (Japan); Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro [National Cardiovascular Center Research Institute, Department of Investigative Radiology, Suita (Japan); Kato, Rikio [National Center for Geriatrics and Gerontology, Department of Radiology, Obu (Japan)

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with {sup 99m}Tc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I{sub AC}{sup {mu}}{sup b} with Chang's attenuation correction factor. The scatter component image is estimated by convolving I{sub AC}{sup {mu}}{sup b} with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and {sup 99m}Tc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)

  5. Method for measuring multiple scattering corrections between liquid scintillators

    Energy Technology Data Exchange (ETDEWEB)

    Verbeke, J.M., E-mail: verbeke2@llnl.gov; Glenn, A.M., E-mail: glenn22@llnl.gov; Keefer, G.J., E-mail: keefer1@llnl.gov; Wurtz, R.E., E-mail: wurtz1@llnl.gov

    2016-07-21

    A time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of multiply scattered neutrons. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.

  6. A software-based x-ray scatter correction method for breast tomosynthesis

    International Nuclear Information System (INIS)

    Jia Feng, Steve Si; Sechopoulos, Ioannis

    2011-01-01

    Purpose: To develop a software-based scatter correction method for digital breast tomosynthesis (DBT) imaging and investigate its impact on the image quality of tomosynthesis reconstructions of both phantoms and patients. Methods: A Monte Carlo (MC) simulation of x-ray scatter, with geometry matching that of the cranio-caudal (CC) view of a DBT clinical prototype, was developed using the Geant4 toolkit and used to generate maps of the scatter-to-primary ratio (SPR) of a number of homogeneous standard-shaped breasts of varying sizes. Dimension-matched SPR maps were then deformed and registered to DBT acquisition projections, allowing for the estimation of the primary x-ray signal acquired by the imaging system. Noise filtering of the estimated projections was then performed to reduce the impact of the quantum noise of the x-ray scatter. Three dimensional (3D) reconstruction was then performed using the maximum likelihood-expectation maximization (MLEM) method. This process was tested on acquisitions of a heterogeneous 50/50 adipose/glandular tomosynthesis phantom with embedded masses, fibers, and microcalcifications and on acquisitions of patients. The image quality of the reconstructions of the scatter-corrected and uncorrected projections was analyzed by studying the signal-difference-to-noise ratio (SDNR), the integral of the signal in each mass lesion (integrated mass signal, IMS), and the modulation transfer function (MTF). Results: The reconstructions of the scatter-corrected projections demonstrated superior image quality. The SDNR of masses embedded in a 5 cm thick tomosynthesis phantom improved 60%-66%, while the SDNR of the smallest mass in an 8 cm thick phantom improved by 59% (p < 0.01). The IMS of the masses in the 5 cm thick phantom also improved by 15%-29%, while the IMS of the masses in the 8 cm thick phantom improved by 26%-62% (p < 0.01). Some embedded microcalcifications in the tomosynthesis phantoms were visible only in the scatter-corrected
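
    The correction recovers the primary signal from each measured projection using a registered scatter-to-primary ratio (SPR) map. Under the usual relation total = primary x (1 + SPR), a minimal sketch is:

```python
import numpy as np

def primary_from_spr(projection, spr_map):
    """Estimate the primary signal from a measured projection and a
    registered scatter-to-primary ratio (SPR) map.

    With total = primary + scatter and SPR = scatter / primary,
    primary = total / (1 + SPR).
    """
    return projection / (1.0 + np.asarray(spr_map))

# toy usage: a uniform SPR of 0.6 removes 37.5% of the measured signal
proj = np.full((64, 64), 1000.0)
print(primary_from_spr(proj, 0.6)[0, 0])   # 625.0
```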

  7. Scatter correction method for x-ray CT using primary modulation: Phantom studies

    International Nuclear Information System (INIS)

    Gao Hewei; Fahrig, Rebecca; Bennett, N. Robert; Sun Mingshan; Star-Lack, Josh; Zhu Lei

    2010-01-01

    Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: a Catphan 600 phantom, an anthropomorphic chest phantom, and the Catphan 600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan 600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast-to-noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study investigates the impact of object size on the efficiency of the method. The scatter-to-primary ratio estimation error on the Catphan 600 phantom without any annulus (20 cm in diameter) is at the level of 0.04; it rises to 0.07 and 0.1 on the phantom with an

  8. Evaluation of a method for correction of scatter radiation in thorax cone beam CT

    International Nuclear Information System (INIS)

    Rinkel, J.; Dinten, J.M.; Esteve, F.

    2004-01-01

    Purpose: Cone beam CT (CBCT) enables three-dimensional imaging with isotropic resolution. X-ray scatter estimation is a major challenge for quantitative CBCT imaging of the thorax: the scatter level is significantly higher on cone beam systems than on collimated fan beam systems. The effects of this scattered radiation are cupping artefacts, streaks, and quantification inaccuracies. The conventional beam-stop approach to scatter estimation can be used for CBCT but leads to a significant increase in dose and acquisition time. An original scatter management process requiring no supplementary acquisition has been developed at CEA-LETI. Methods and Materials: This Analytical Plus Indexing-based method (API) of scatter correction in CBCT is based on scatter calibration through offline acquisitions with beam stops on lucite plates, combined with an analytical transformation derived from physical equations. This approach has been applied successfully in bone densitometry and mammography. To evaluate this method in CBCT, acquisitions of a thorax phantom with and without beam stops were performed. To compare the different scatter correction approaches, the Feldkamp algorithm was applied to raw data corrected for scatter by the API and beam-stop approaches. Results: The API method provides results in good agreement with the beam-stop array approach, suppressing the cupping artefact. The influence of the scatter correction method on the noise in the reconstructed images was also evaluated. Conclusion: The results indicate that the API method is effective for quantitative CBCT imaging of the thorax. Compared to a beam-stop array method it requires a lower x-ray dose and shortens acquisition time. (authors)

  9. Source distribution dependent scatter correction for PVI

    International Nuclear Information System (INIS)

    Barney, J.S.; Harrop, R.; Dykstra, C.J.

    1993-01-01

    Source distribution dependent scatter correction methods which incorporate different amounts of information about the source position and material distribution have been developed and tested. The techniques use image-to-projection integral transformation incorporating varying degrees of information on the distribution of scattering material, or convolution-subtraction methods, with some information about the scattering material included in one of the convolution methods. To test the techniques, the authors apply them to data generated by Monte Carlo simulations which use geometric shapes or a voxelized density map to model the scattering material. Source position and material distribution have been found to have some effect on scatter correction. An image-to-projection method which incorporates a density map produces accurate scatter correction but is computationally expensive. Simpler methods, both image-to-projection and convolution, can also provide effective scatter correction.
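
    One of the convolution approaches mentioned above subtracts a blurred, scaled copy of the data as the scatter estimate. A minimal iterative convolution-subtraction sketch is shown below; the Gaussian kernel, its width, and the scatter fraction are illustrative placeholders rather than values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def convolution_subtraction(projection, kernel_sigma=12.0,
                            scatter_fraction=0.3, n_iter=3):
    """Iterative convolution-subtraction scatter correction (sketch).

    The scatter estimate is a blurred, scaled copy of the current primary
    estimate; each iteration refines the primary estimate.
    """
    primary = np.asarray(projection, dtype=float).copy()
    scatter = np.zeros_like(primary)
    for _ in range(n_iter):
        scatter = scatter_fraction * gaussian_filter(primary, kernel_sigma)
        primary = np.clip(projection - scatter, 0.0, None)
    return primary, scatter
```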

  10. Scatter measurement and correction method for cone-beam CT based on single grating scan

    Science.gov (United States)

    Huang, Kuidong; Shi, Wenlong; Wang, Xinyu; Dong, Yin; Chang, Taoqi; Zhang, Hua; Zhang, Dinghua

    2017-06-01

    In cone-beam computed tomography (CBCT) systems based on flat-panel detector imaging, the presence of scatter significantly reduces the quality of slices. Based on the concept of collimation, this paper presents a scatter measurement and correction method based on a single grating scan. First, according to the characteristics of CBCT imaging, the scan method using a single grating and the design requirements of the grating are analyzed and worked out. Second, by analyzing the composition of object projection images and object-and-grating projection images, a processing method for the scatter image at a single projection angle is proposed. In addition, to avoid additional scans, this paper proposes an angle interpolation method for scatter images to reduce scan cost. Finally, the experimental results show that the scatter images obtained by this method are accurate and reliable, and the effect of scatter correction is obvious. When the additional object-and-grating projection images are collected and interpolated at intervals of 30 deg, the scatter correction error of slices can still be controlled within 3%.

  11. Evaluation of a scattering correction method for high energy tomography

    Science.gov (United States)

    Tisseur, David; Bhatia, Navnina; Estre, Nicolas; Berge, Léonie; Eck, Daniel; Payan, Emmanuel

    2018-01-01

    One of the main drawbacks of Cone Beam Computed Tomography (CBCT) is the contribution of photons scattered by the object and the detector. Scattered photons are deflected from their original path after their interaction with the object. This additional contribution of scattered photons results in increased measured intensities, since the scattered intensity simply adds to the transmitted intensity. The effect is seen as an overestimation of the measured intensity, and thus an underestimation of absorption, which produces artifacts such as cupping, shading and streaks in the reconstructed images. Moreover, the scattered radiation introduces a bias into quantitative tomographic reconstruction (for example, atomic number and mass density measurement with a dual-energy technique). The effect can be significant, and difficult to correct, in the MeV energy range for large objects due to the higher Scatter to Primary Ratio (SPR). Additionally, incident high-energy photons scattered by the Compton effect are more forward directed and hence more likely to reach the detector. Moreover, in the MeV energy range the contribution of photons produced by pair production and the bremsstrahlung process also becomes important. We propose an evaluation of a scattering correction technique based on the method named Scatter Kernel Superposition (SKS). The algorithm uses a continuously thickness-adapted kernel method. The analytical parameterizations of the scatter kernels are derived in terms of material thickness, to form continuously thickness-adapted kernel maps in order to correct the projections. This approach has proved to be efficient in producing better sampling of the kernels with respect to the object thickness. The technique offers applicability over a wide range of imaging conditions and gives users an additional advantage. Moreover, since no extra hardware is required by this approach, it forms a major advantage especially in those cases where
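
    A heavily reduced sketch of the kernel-superposition idea follows: pixels are grouped by estimated object thickness and each group contributes scatter through a thickness-dependent kernel. The Gaussian kernels, bin edges, and amplitudes are placeholder assumptions; the paper uses analytically parameterized, continuously thickness-adapted kernels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sks_scatter_estimate(projection, thickness_map, thickness_bins,
                         kernel_sigma, kernel_amplitude):
    """Scatter estimate by superposition of thickness-binned kernels (sketch).

    projection       : measured projection (2D array)
    thickness_map    : estimated object thickness per pixel (same shape)
    thickness_bins   : bin edges used to group pixels by thickness
    kernel_sigma     : per-bin kernel width (one value per bin)
    kernel_amplitude : per-bin scatter amplitude (one value per bin)
    """
    scatter = np.zeros_like(projection, dtype=float)
    bin_idx = np.clip(np.digitize(thickness_map, thickness_bins) - 1,
                      0, len(kernel_sigma) - 1)
    for b in range(len(kernel_sigma)):
        sources = np.where(bin_idx == b, projection, 0.0)
        scatter += kernel_amplitude[b] * gaussian_filter(sources, kernel_sigma[b])
    return scatter

# corrected projection = projection - sks_scatter_estimate(...)
```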

  12. Investigation of Compton scattering correction methods in cardiac SPECT by Monte Carlo simulations

    International Nuclear Information System (INIS)

    Silva, A.M. Marques da; Furlan, A.M.; Robilotta, C.C.

    2001-01-01

    The goal of this work was to use Monte Carlo simulations to investigate the effects of two scattering correction methods, the dual energy window (DEW) and the dual photopeak window (DPW), in quantitative cardiac SPECT reconstruction. The MCAT torso-cardiac phantom, with 99mTc and a non-uniform attenuation map, was simulated. Two different photopeak windows were evaluated in the DEW method: 15% and 20%. Two 10% wide subwindows centered symmetrically within the photopeak were used in the DPW method. Iterative ML-EM reconstruction with a modified projector-backprojector for attenuation correction was applied. Results indicated that the choice of the scattering and photopeak windows determines the correction accuracy. For the 15% window, a fitted scatter fraction gives better results than k = 0.5. For the 20% window, DPW is the best method, but it requires parameter estimation using Monte Carlo simulations. (author)
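
    The DEW method mentioned above estimates the scatter inside the photopeak window as a fixed multiple of the counts acquired in a lower Compton-scatter window. A minimal sketch, with k = 0.5 as the classical default that the study compares against a fitted scatter fraction:

```python
import numpy as np

def dew_correction(photopeak_img, scatter_window_img, k=0.5):
    """Dual-energy-window scatter correction (sketch).

    The scatter within the photopeak window is approximated as k times the
    counts in a lower scatter window and subtracted.
    """
    corrected = photopeak_img - k * scatter_window_img
    return np.clip(corrected, 0.0, None)
```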

  13. Monte Carlo evaluation of scattering correction methods in 131I studies using pinhole collimator

    International Nuclear Information System (INIS)

    López Díaz, Adlin; San Pedro, Aley Palau; Martín Escuela, Juan Miguel; Rodríguez Pérez, Sunay; Díaz García, Angelina

    2017-01-01

    Scattering is quite important for image activity quantification. In order to study the scattering factors and the efficacy of three multiple-energy-window scatter correction methods in 131I thyroid studies with a pinhole collimator (5 mm hole), a Monte Carlo (MC) simulation was developed. The GAMOS MC code was used to model the gamma camera and the thyroid source geometry. First, to validate the MC gamma camera pinhole-source model, the sensitivity in air and water of the simulated and measured thyroid phantom geometries was compared. Next, simulations were performed to investigate scattering and the results of the triple energy window (TEW), double energy window (DW) and reduced double window (RDW) correction methods for different thyroid sizes and depth thicknesses. The relative discrepancies with respect to the MC true events were evaluated. Results: The accuracy of the GAMOS MC model was verified and validated. The scattering contribution to the image was significant, between 27 and 40%. The discrepancies between the results of the three multiple-energy-window correction methods were significant (between 9 and 86%). The reduced double window method (15%) gave discrepancies of 9-16%. Conclusions: For the simulated thyroid geometry with a pinhole, the RDW (15%) method was the most effective. (author)

  14. Development and evaluation of attenuation and scatter correction techniques for SPECT using the Monte Carlo method

    International Nuclear Information System (INIS)

    Ljungberg, M.

    1990-05-01

    Quantitative scintigraphic images, obtained by NaI(Tl) scintillation cameras, are limited by photon attenuation and the contribution from scattered photons. A Monte Carlo program was developed in order to evaluate these effects. Simple source-phantom geometries and more complex nonhomogeneous cases can be simulated. Comparisons with experimental data for both homogeneous and nonhomogeneous regions and with published results have shown good agreement. The usefulness of the code for simulating parameters of scintillation camera systems, stationary as well as SPECT systems, has also been demonstrated. An attenuation correction method based on density maps and build-up functions has been developed. The maps were obtained from a transmission measurement using an external 57Co flood source and the build-up was simulated by the Monte Carlo code. Two scatter correction methods, the dual-window method and the convolution-subtraction method, have been compared using the Monte Carlo method. The aim was to compare the estimated scatter with the true scatter in the photo-peak window. It was concluded that accurate depth-dependent scatter functions are essential for a proper scatter correction. A new scatter and attenuation correction method has been developed based on scatter line-spread functions (SLSF) obtained for different depths and lateral positions in the phantom. An emission image is used to determine the source location in order to estimate the scatter in the photo-peak window. Simulation studies of a clinically realistic source in different positions in cylindrical water phantoms were made for three photon energies. The SLSF correction method was also evaluated by simulation studies for (1) a myocardial source, (2) a uniform source in the lungs and (3) a tumour located in the lungs in a realistic, nonhomogeneous computer phantom. The results showed that quantitative images could be obtained in nonhomogeneous regions. (67 refs.)

  15. A software-based x-ray scatter correction method for breast tomosynthesis

    OpenAIRE

    Jia Feng, Steve Si; Sechopoulos, Ioannis

    2011-01-01

    Purpose: To develop a software-based scatter correction method for digital breast tomosynthesis (DBT) imaging and investigate its impact on the image quality of tomosynthesis reconstructions of both phantoms and patients.

  16. Comparative evaluation of scatter correction techniques in 3D positron emission tomography

    CERN Document Server

    Zaidi, H

    2000-01-01

    Much research and development has been concentrated on the scatter compensation required for quantitative 3D PET. Increasingly sophisticated scatter correction procedures are under investigation, particularly those based on accurate scatter models and iterative reconstruction-based scatter compensation approaches. The main difference among the correction methods is the way in which the scatter component in the selected energy window is estimated. Monte Carlo methods give further insight and might in themselves offer a possible correction procedure. Methods: Five scatter correction methods are compared in this paper where applicable: the dual-energy window (DEW) technique, the convolution-subtraction (CVS) method, two variants of the Monte Carlo-based scatter correction technique (MCBSC1 and MCBSC2) and our newly developed statistical reconstruction-based scatter correction (SRBSC) method. These scatter correction techniques are evaluated using Monte Carlo simulation studies, experimental phantom measurements...

  17. Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods

    International Nuclear Information System (INIS)

    Narita, Y.; Eberl, S.; Nakamura, T.

    1996-01-01

    Two independent scatter correction techniques, transmission-dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter and scatter plus primary) were simulated for 99mTc and 201Tl for numerical chest phantoms. Data were reconstructed with an ordered-subset ML-EM algorithm including attenuation correction using the transmission data. In the chest phantom simulation, TDCS provided better S/N than TEW, and better accuracy, i.e., 1.0% vs -7.2% in the myocardium, and -3.7% vs -30.1% in the ventricular chamber for 99mTc with TDCS and TEW, respectively. For 201Tl, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
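
    For reference, the TEW estimate used as the comparison method above is computed from the counts in two narrow sub-windows flanking the photopeak (the standard trapezoidal approximation). A minimal sketch, with counts and window widths supplied by the caller:

```python
import numpy as np

def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Triple-energy-window scatter estimate (trapezoidal approximation):
    c_lower, c_upper are counts in the lower/upper sub-windows of widths
    w_lower, w_upper (keV); w_peak is the main photopeak window width."""
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

def tew_correction(c_peak, c_lower, c_upper, w_lower, w_upper, w_peak):
    """Subtract the TEW scatter estimate from the photopeak counts."""
    scatter = tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak)
    return np.clip(c_peak - scatter, 0.0, None)
```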

  18. Evaluation of the ICS and DEW scatter correction methods for low statistical content scans in 3D PET

    International Nuclear Information System (INIS)

    Sossi, V.; Oakes, T.R.; Ruth, T.J.

    1996-01-01

    The performance of the Integral Convolution and the Dual Energy Window scatter correction methods in 3D PET has been evaluated over a wide range of statistical content of acquired data (1M to 400M events). The order in which scatter correction and detector normalization should be applied has also been investigated. Phantom and human neuroreceptor studies were used with the following figures of merit: axial and radial uniformity, sinogram and image noise, contrast accuracy and contrast accuracy uniformity. Both scatter correction methods perform reliably in the range of numbers of events examined. Normalization applied after scatter correction yields better radial uniformity and fewer image artifacts.

  19. Evaluation of a method for correction of scatter radiation in thorax cone beam CT; Evaluation d'une methode de correction du rayonnement diffuse en tomographie du thorax avec faisceau conique

    Energy Technology Data Exchange (ETDEWEB)

    Rinkel, J.; Dinten, J.M. [CEA Grenoble (DTBS/STD), Lab. d'Electronique et de Technologie de l'Informatique, LETI, 38 (France); Esteve, F. [European Synchrotron Radiation Facility (ESRF), 38 - Grenoble (France)

    2004-07-01

    Purpose: Cone beam CT (CBCT) enables three-dimensional imaging with isotropic resolution. X-ray scatter estimation is a major challenge for quantitative CBCT imaging of the thorax: the scatter level is significantly higher on cone beam systems than on collimated fan beam systems. The effects of this scattered radiation are cupping artefacts, streaks, and quantification inaccuracies. The conventional beam-stop approach to scatter estimation can be used for CBCT but leads to a significant increase in dose and acquisition time. An original scatter management process requiring no supplementary acquisition has been developed at CEA-LETI. Methods and Materials: This Analytical Plus Indexing-based method (API) of scatter correction in CBCT is based on scatter calibration through offline acquisitions with beam stops on lucite plates, combined with an analytical transformation derived from physical equations. This approach has been applied successfully in bone densitometry and mammography. To evaluate this method in CBCT, acquisitions of a thorax phantom with and without beam stops were performed. To compare the different scatter correction approaches, the Feldkamp algorithm was applied to raw data corrected for scatter by the API and beam-stop approaches. Results: The API method provides results in good agreement with the beam-stop array approach, suppressing the cupping artefact. The influence of the scatter correction method on the noise in the reconstructed images was also evaluated. Conclusion: The results indicate that the API method is effective for quantitative CBCT imaging of the thorax. Compared to a beam-stop array method it requires a lower x-ray dose and shortens acquisition time. (authors)

  20. A general framework and review of scatter correction methods in cone beam CT. Part 2: Scatter estimation approaches

    International Nuclear Information System (INIS)

    Ruehrnschopf, Ernst-Peter; Klingenbeck, Klaus

    2011-01-01

    The main components of scatter correction procedures are scatter estimation and a scatter compensation algorithm. This paper completes a previous paper in which a general framework for scatter compensation was presented under the prerequisite that a scatter estimation method is already available. In the current paper, the authors give a systematic review of the variety of scatter estimation approaches. Scatter estimation methods are based on measurements, mathematical-physical models, or combinations of both. For completeness they present an overview of measurement-based methods, but the main topic is the theoretically more demanding models, such as analytical, Monte Carlo, and hybrid models. Further classifications are 3D image-based and 2D projection-based approaches. The authors present a system-theoretic framework, which allows one to proceed top-down from a general 3D formulation, by successive approximations, to efficient 2D approaches. A widely useful method is the beam-scatter-kernel superposition approach. Together with the review of standard methods, the authors discuss their limitations and how to take into account the issues of object dependency, spatial variance, deformation of scatter kernels, and external and internal absorbers. Open questions for further investigation are indicated. Finally, the authors comment on some special issues and applications, such as the bow-tie filter, offset detectors, truncated data, and dual-source CT.

  1. Attenuation correction of myocardial SPECT by scatter-photopeak window method in normal subjects

    International Nuclear Information System (INIS)

    Okuda, Koichi; Nakajima, Kenichi; Matsuo, Shinro; Kinuya, Seigo; Motomura, Nobutoku; Kubota, Masahiro; Yamaki, Noriyasu; Maeda, Hisato

    2009-01-01

    The segmentation with scatter and photopeak window data using attenuation correction (SSPAC) method can provide a patient-specific non-uniform attenuation coefficient map using only photopeak and scatter images, without X-ray computed tomography (CT). The purpose of this study was to evaluate the performance of attenuation correction (AC) by the SSPAC method on a normal myocardial perfusion database. A total of 32 sets of exercise-rest myocardial images with Tc-99m-sestamibi were acquired in both photopeak (140 keV±10%) and scatter (7% window on the lower side of the photopeak) energy windows. Myocardial perfusion databases for the SSPAC method and non-AC (NC) were created from 15 female and 17 male subjects with a low likelihood of cardiac disease using quantitative perfusion SPECT software. Segmental myocardial counts of a 17-segment model from these databases were compared on the basis of the paired t test. The AC average myocardial perfusion count was significantly higher than that of NC in the septal and inferior regions (P<0.02). On the contrary, the AC average count was significantly lower in the anterolateral and apical regions (P<0.01). The coefficient of variation of the AC count in the mid, apical and apex regions was lower than that of NC. The SSPAC method can improve average myocardial perfusion uptake in the septal and inferior regions and provide a uniform distribution of myocardial perfusion. The SSPAC method could be a practical method of attenuation correction without X-ray CT. (author)

  2. Non perturbative method for radiative corrections applied to lepton-proton scattering

    International Nuclear Information System (INIS)

    Chahine, C.

    1979-01-01

    We present a new, non-perturbative method to effect radiative corrections in lepton (electron or muon)-nucleon scattering, useful for existing or planned experiments. This method relies on a spectral function derived in a previous paper, which takes into account both real soft photons and virtual ones and hence is free from infrared divergence. Hard effects are computed perturbatively and then included in the form of 'hard factors' in the non-perturbative soft formulas. Practical computations are carried out using the Gauss-Jacobi integration method, which reduces the relevant integrals to a rapidly converging sequence. For the simple problem of the radiative quasi-elastic peak, we get an exponentiated form conjectured by Schwinger and found by Yennie, Frautschi and Suura. We also compare our results with the peaking approximation, which we derive independently, and with the exact one-photon emission formula of Mo and Tsai. Applications of our method to the continuous spectrum include the radiative tail of the Δ33 resonance in e + p scattering and radiative corrections to the Feynman scale-invariant F2 structure function for the kinematics of two recent high-energy muon experiments.
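
    The abstract relies on Gauss-Jacobi quadrature to evaluate the radiative-correction integrals. The snippet below only illustrates that quadrature rule on a toy integrand (the weight exponents and the cosine are illustrative, not the paper's kernels), showing the rapid convergence the authors refer to.

```python
import numpy as np
from scipy.special import roots_jacobi
from scipy.integrate import quad

# Gauss-Jacobi quadrature: integral over [-1, 1] of f(x) (1-x)^a (1+x)^b dx
a, b = 0.5, 0.0          # illustrative weight exponents
f = np.cos               # illustrative smooth integrand

for n in (2, 4, 8):
    x, w = roots_jacobi(n, a, b)        # nodes and weights
    print(n, float(np.sum(w * f(x))))   # converges quickly with n

reference, _ = quad(lambda t: f(t) * (1 - t)**a * (1 + t)**b, -1, 1)
print("reference:", reference)
```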

  3. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    Science.gov (United States)

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and was used to calculate X-ray scattering signals in both the forward and cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments that were designed to emulate image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images quickly converged toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.

  4. Research of scatter correction on industry computed tomography

    International Nuclear Information System (INIS)

    Sun Shaohua; Gao Wenhuan; Zhang Li; Chen Zhiqiang

    2002-01-01

    In industrial computed tomography scanning, scatter blurs the reconstructed image. The grey values of pixels in the reconstructed image deviate from their true values, and this effect needs to be corrected. With the conventional deconvolution method, many iterations are needed and the computing time is unsatisfactory. The authors discuss a method combining the ordered subsets convex algorithm with a scatter model to implement scatter correction; promising results are obtained in both speed and image quality.

  5. Cross plane scattering correction

    International Nuclear Information System (INIS)

    Shao, L.; Karp, J.S.

    1990-01-01

    Most previous scattering correction techniques for PET are based on assumptions made for a single transaxial plane and are independent of axial variations. These techniques will incorrectly estimate the scattering fraction for volumetric PET imaging systems since they do not take cross-plane scattering into account. In this paper, the authors propose a new point source scattering deconvolution method (2-D). The cross-plane scattering is incorporated into the algorithm by modeling a scattering point source function. In the model, the dependence of scattering on both the axial and transaxial directions is reflected in the exponential fitting parameters, and these parameters are directly estimated from a limited number of measured point response functions. The authors' results comparing the standard in-plane point source deconvolution to their cross-plane source deconvolution show that for a small source, the former technique overestimates the scatter fraction in the plane of the source and underestimates the scatter fraction in adjacent planes. In addition, the authors propose a simple approximation technique for deconvolution.

  6. Software correction of scatter coincidence in positron CT

    International Nuclear Information System (INIS)

    Endo, M.; Iinuma, T.A.

    1984-01-01

    This paper describes a software correction of scatter coincidence in positron CT which is based on an estimation of scatter projections from true projections by an integral transform. Kernels for the integral transform are projected distributions of scatter coincidences for a line source at different positions in a water phantom and are calculated by Klein-Nishina's formula. True projections of any composite object can be determined from measured projections by iterative applications of the integral transform. The correction method was tested in computer simulations and phantom experiments with Positologica. The results showed that effects of scatter coincidence are not negligible in the quantitation of images, but the correction reduces them significantly. (orig.)
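
    The scatter kernels above are computed from the Klein-Nishina formula. For reference, a small sketch of the Klein-Nishina differential cross section itself (the full kernel construction, i.e. the projected line-source scatter distributions, is not reproduced here):

```python
import numpy as np

R_E = 2.8179403262e-15    # classical electron radius, m
MEC2_KEV = 511.0          # electron rest energy, keV

def klein_nishina(theta, energy_kev):
    """Klein-Nishina differential cross section dsigma/dOmega (m^2/sr) for a
    photon of the given energy scattering through angle theta (radians)."""
    k = energy_kev / MEC2_KEV
    ratio = 1.0 / (1.0 + k * (1.0 - np.cos(theta)))      # E'/E
    return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - np.sin(theta)**2)

# e.g. forward vs. 90-degree scattering of 511 keV annihilation photons
print(klein_nishina(np.array([0.0, np.pi / 2]), 511.0))
```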

  7. Effect of scatter correction on quantification of myocardial SPECT and application to dual-energy acquisition using triple-energy window method

    International Nuclear Information System (INIS)

    Nakajima, Kenichi; Matsudaira, Masamichi; Yamada, Masato; Taki, Junichi; Tonami, Norihisa; Hisada, Kinichi

    1995-01-01

    The triple-energy window (TEW) method is a simple and practical approach for correcting Compton scatter in single-photon emission tracer studies. The scatter fraction, with a point source or a 30 ml syringe placed under the camera, was measured by the TEW method. The scatter fraction was 55% for 201Tl, 29% for 99mTc and 57% for 123I. Composite energy spectra were generated and separated by the TEW method. The combination of 99mTc and 201Tl was well separated, and 201Tl and 123I were separated within an error of 10%, whereas an asymmetric photopeak energy window was necessary for separating 123I and 99mTc. By applying this method to myocardial SPECT studies, the effect of scatter elimination was investigated in each myocardial wall by polar map and profile curve analysis. The effect of scatter was higher in the septum and the inferior wall. The count ratio relative to the anterior wall including scatter was 9% higher in 123I, 7-8% higher in 99mTc and 6% higher in 201Tl. The apparent count loss after scatter correction was 30% for 123I, 13% for 99mTc and 38% for 201Tl. Image contrast, defined as the myocardium-to-left ventricular cavity count ratio, improved with scatter correction. Since the influence of Compton scatter was significant in cardiac planar and SPECT studies, the degree of the scatter fraction should be kept in mind in both quantification and visual interpretation. (author)

  8. Experimental study on the location of energy windows for scatter correction by the TEW method in 201Tl imaging

    International Nuclear Information System (INIS)

    Kojima, Akihiro; Matsumoto, Masanori; Ohyama, Yoichi; Tomiguchi, Seiji; Kira, Mitsuko; Takahashi, Mutsumasa.

    1997-01-01

    To investigate the validity of scatter correction by the TEW method in 201Tl imaging, we performed an experimental study using a gamma camera capable of performing the TEW method and a plate source with a defect. Images were acquired with the triple energy window recommended by the gamma camera manufacturer. The energy spectra showed that backscattered photons were included within the lower sub-energy window and the main energy window, and that the spectral shapes in the upper half of the photopeak (70 keV) were not changed greatly by the source shape or the thickness of the scattering material. The scatter fraction calculated using the energy spectra, together with visual observation and the contrast values measured at the defect on planar images, also showed that substantial primary photons were included in the upper sub-energy window. In the TEW method, the two sub-energy windows should be placed in a part of the energy region in which the total counts consist mainly of scattered photons. Therefore, it is necessary to investigate the use of the upper sub-energy window for scatter correction by the TEW method in 201Tl imaging. (author)

  9. Evaluation of six scatter correction methods based on spectral analysis in 99m Tc SPECT imaging using SIMIND Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    Mahsa Noori Asl

    2013-01-01

    Compton-scattered photons included within the photopeak pulse-height window result in the degradation of SPECT images both qualitatively and quantitatively. The purpose of this study is to evaluate and compare six scatter correction methods based on setting energy windows in the 99mTc spectrum. SIMIND Monte Carlo simulation is used to generate the projection images from a cold-sphere hot-background phantom. For evaluation of the different scatter correction methods, three assessment criteria are considered: image contrast, signal-to-noise ratio (SNR) and relative noise of the background (RNB). Except for the dual-photopeak window (DPW) method, the image contrast of the five cold spheres is improved in the range of 2.7-26%. Among the methods considered, two show a nonuniform correction performance. The RNB for the scatter correction methods ranges from a minimum of 0.03 for the DPW method to a maximum of 0.0727 for the three-energy-window (TEW) method using the trapezoidal approximation. The TEW method using the triangular approximation is proposed as the most appropriate correction method because of its ease of implementation, good improvement of the image contrast and SNR for the five cold spheres, and low noise level.
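
    A minimal sketch of the three assessment criteria for a cold-sphere phantom follows. The abstract does not spell out the exact formulas, so the definitions below are the conventional ones (cold-sphere contrast and SNR relative to the hot background, RNB as background std/mean), and the ROI masks are assumed inputs.

```python
import numpy as np

def cold_sphere_metrics(img, sphere_mask, background_mask):
    """Image contrast, SNR and relative noise of the background (RNB) for a
    cold sphere in a hot background (conventional definitions)."""
    c_sphere = img[sphere_mask].mean()
    c_bkg = img[background_mask].mean()
    sd_bkg = img[background_mask].std()
    return {
        "contrast": (c_bkg - c_sphere) / c_bkg,
        "snr": (c_bkg - c_sphere) / sd_bkg,
        "rnb": sd_bkg / c_bkg,
    }
```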

  10. SPECT quantification: a review of the different correction methods with compton scatter, attenuation and spatial deterioration effects

    International Nuclear Information System (INIS)

    Groiselle, C.; Rocchisani, J.M.; Moretti, J.L.; Dreuille, O. de; Gaillard, J.F.; Bendriem, B.

    1997-01-01

    SPECT quantification: a review of the different correction methods for Compton scatter, attenuation and spatial deterioration effects. The improvement of gamma cameras and of acquisition and reconstruction software opens new perspectives in terms of image quantification in nuclear medicine. In order to meet the challenge, numerous works have been undertaken in recent years to correct for the different physical phenomena that prevent an exact estimation of the radioactivity distribution. The main phenomena that have to be taken into account are scatter, attenuation and resolution. In this work, the authors present the physical basis of each issue, its consequences for quantification and the main methods proposed to correct for them. (authors)

  11. Multiple scattering and attenuation corrections in Deep Inelastic Neutron Scattering experiments

    International Nuclear Information System (INIS)

    Dawidowski, J; Blostein, J J; Granada, J R

    2006-01-01

    Multiple scattering and attenuation corrections in Deep Inelastic Neutron Scattering experiments are analyzed. The theoretical basis of the method is stated, and a Monte Carlo procedure to perform the calculation is presented. The results are compared with experimental data. The importance of accuracy in the description of the experimental parameters is tested, and the implications of the present results for the data analysis procedures are examined.

  12. Spectral-ratio radon background correction method in airborne γ-ray spectrometry based on compton scattering deduction

    International Nuclear Information System (INIS)

    Gu Yi; Xiong Shengqing; Zhou Jianxin; Fan Zhengguo; Ge Liangquan

    2014-01-01

    γ-rays released by radon daughters have a severe impact on airborne γ-ray spectrometry. The spectral-ratio method is one of the best mathematical methods for radon background deduction in airborne γ-ray spectrometry. In this paper, an advanced spectral-ratio method is proposed which deducts the Compton-scattered rays by means of the fast Fourier transform rather than stripping ratios. The relationship between survey height and the correction coefficient of the advanced spectral-ratio radon background correction method was studied, the mathematical model of the advanced spectral-ratio radon background correction was established, and a ground saturation model calibration technique for the correction coefficient was proposed. The advanced spectral-ratio radon background correction method has improved applicability and correction efficiency, and reduces the application cost. Furthermore, it prevents the loss of physical meaning and avoids the possible errors caused by the matrix computation and the spectrum-shape-based mathematical fitting applied in traditional correction coefficients. (authors)

  13. A scatter-corrected list-mode reconstruction and a practical scatter/random approximation technique for dynamic PET imaging

    International Nuclear Information System (INIS)

    Cheng, J-C; Rahmim, Arman; Blinder, Stephan; Camborde, Marie-Laure; Raywood, Kelvin; Sossi, Vesna

    2007-01-01

    We describe an ordinary Poisson list-mode expectation maximization (OP-LMEM) algorithm with a sinogram-based scatter correction method based on the single scatter simulation (SSS) technique and a random correction method based on the variance-reduced delayed-coincidence technique. We also describe a practical approximate scatter and random-estimation approach for dynamic PET studies based on a time-averaged scatter and random estimate followed by scaling according to the global numbers of true coincidences and randoms for each temporal frame. The quantitative accuracy achieved using OP-LMEM was compared to that obtained using the histogram-mode 3D ordinary Poisson ordered subset expectation maximization (3D-OP) algorithm with similar scatter and random correction methods, and they showed excellent agreement. The accuracy of the approximated scatter and random estimates was tested by comparing time activity curves (TACs) as well as the spatial scatter distribution from dynamic non-human primate studies obtained from the conventional (frame-based) approach and those obtained from the approximate approach. An excellent agreement was found, and the time required for the calculation of scatter and random estimates in the dynamic studies became much less dependent on the number of frames (we achieved a nearly four times faster performance on the scatter and random estimates by applying the proposed method). The precision of the scatter fraction was also demonstrated for the conventional and the approximate approach using phantom studies
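
    The practical approximation described above scales a time-averaged scatter and random estimate to each temporal frame by the frame's global trues and randoms. A minimal sketch (array shapes and exact normalization are assumptions; the abstract gives only the scaling idea):

```python
import numpy as np

def frame_scatter_randoms(scatter_avg, randoms_avg,
                          frame_trues, frame_randoms,
                          total_trues, total_randoms):
    """Approximate per-frame scatter and random estimates by scaling the
    time-averaged sinograms with each frame's global counts."""
    scatter_f = scatter_avg * (frame_trues / total_trues)
    randoms_f = randoms_avg * (frame_randoms / total_randoms)
    return scatter_f, randoms_f
```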

  14. Finite-Geometry and Polarized Multiple-Scattering Corrections of Experimental Fast- Neutron Polarization Data by Means of Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Aspelund, O; Gustafsson, B

    1967-05-15

    After an introductory discussion of various methods for the correction of experimental left-right ratios for polarized multiple-scattering and finite-geometry effects, necessary and sufficient formulas for the consistent tracking of polarization effects in successive scattering orders are derived. The simplifying assumptions are then made that the scattering is purely elastic and nuclear, and that in the description of the kinematics of an arbitrary scattering μ only one triple parameter - the so-called spin rotation parameter β(μ) - is required. Based upon these formulas, a general discussion of the importance of the correct inclusion of polarization effects in any scattering order is presented. Special attention is then paid to the question of depolarization of an already polarized beam. Subsequently, the aforementioned formulas are incorporated in the comprehensive Monte Carlo program MULTPOL, which has been designed so as to correctly account for finite-geometry effects in the sense that both the scattering sample and the detectors (both having cylindrical shapes) are objects of finite dimensions located at finite distances from each other and from the source of polarized fast neutrons. A special feature of MULTPOL is the application of the method of correlated sampling for the reduction of the standard deviations of the results of the simulated experiment. Typical performance data for MULTPOL have been obtained by applying the program to the correction of experimental polarization data observed in n + 12C elastic scattering between 1 and 2 MeV. Finally, in the concluding remarks the possible modification of MULTPOL to other experimental geometries is briefly discussed.

  15. Evaluation of scatter correction using a single isotope for simultaneous emission and transmission data

    International Nuclear Information System (INIS)

    Yang, J.; Kuikka, J.T.; Vanninen, E.; Laensimies, E.; Kauppinen, T.; Patomaeki, L.

    1999-01-01

    Photon scatter is one of the most important factors degrading the quantitative accuracy of SPECT images, and many scatter correction methods have been proposed. We previously proposed a single-isotope method. Aim: We evaluate this scatter correction method, which improves image quality by acquiring emission and transmission data simultaneously in a single-isotope scan. Method: To evaluate the proposed scatter correction method, a contrast and linearity phantom was studied. Four female patients with fibromyalgia (FM) syndrome and four with chronic back pain (BP) were imaged. Grey-to-cerebellum (G/C) and grey-to-white matter (G/W) ratios were determined by one skilled operator for 12 regions of interest (ROIs) in each subject. Results: The linearity of the activity response was improved after the scatter correction (r=0.999). The y-intercept value of the regression line was 0.036 (p

  16. Scatter correction using a primary modulator for dual energy digital radiography: A Monte Carlo simulation study

    Science.gov (United States)

    Jo, Byung-Du; Lee, Young-Jin; Kim, Dae-Hong; Kim, Hee-Joung

    2014-08-01

    In conventional digital radiography (DR) using a dual energy subtraction technique, a significant fraction of the detected photons are scattered within the body, making up the scatter component. Scattered radiation can significantly deteriorate image quality in diagnostic X-ray imaging systems. Various methods of scatter correction, including both measurement- and non-measurement-based methods, have been proposed in the past. Both methods can reduce scatter artifacts in images. However, non-measurement-based methods require a homogeneous object and have insufficient scatter component correction. Therefore, we employed a measurement-based method to correct for the scatter component of inhomogeneous objects from dual energy DR (DEDR) images. We performed a simulation study using a Monte Carlo simulation with a primary modulator, which is a measurement-based method for the DEDR system. The primary modulator, which has a checkerboard pattern, was used to modulate the primary radiation. Cylindrical phantoms of variable size were used to quantify the imaging performance. For scatter estimates, we used discrete Fourier transform filtering, e.g., a Gaussian low-high pass filter with a cut-off frequency. The primary modulation method was evaluated using a cylindrical phantom in the DEDR system. The scatter components were accurately removed using a primary modulator. When the results acquired with scatter correction and without scatter correction were compared, the average contrast-to-noise ratio (CNR) with the correction was 1.35 times higher than that obtained without the correction, and the average root mean square error (RMSE) with the correction was 38.00% better than that without the correction. In the subtraction study, the average CNR with the correction was 2.04 (aluminum subtraction) and 1.38 (polymethyl methacrylate (PMMA) subtraction) times higher than that obtained without the correction. The analysis demonstrated the accuracy of the scatter correction and the

  17. Ultrafast cone-beam CT scatter correction with GPU-based Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    Yuan Xu

    2014-03-01

    Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, automatically within 30 seconds. Methods: The method consists of six steps: (1) FDK reconstruction using the raw projection data; (2) rigid registration of the planning CT to the FDK result; (3) MC scatter calculation at sparse view angles using the planning CT; (4) interpolation of the calculated scatter signals to the other angles; (5) removal of scatter from the raw projections; (6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC noise from the simulated scatter images caused by the low photon numbers. The method is validated on a simulated head-and-neck case with 364 projection angles. Results: We have examined the variation of the scatter signal among projection angles using Fourier analysis. It is found that scatter images at 31 angles are sufficient to restore those at all angles with < 0.1% error. For the simulated patient case with a resolution of 512 × 512 × 100, we simulated 5 × 10⁶ photons per angle. The total computation time is 20.52 seconds on an Nvidia GTX Titan GPU, and the time at each step is 2.53, 0.64, 14.78, 0.13, 0.19, and 2.25 seconds, respectively. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region of interest is reduced from 75.9 to 19.0 HU. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme has been developed. It accomplishes the whole procedure of scatter correction and reconstruction within 30 seconds.
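
    Step (4) above fills in the scatter signal at view angles where no MC run was done. A minimal sketch of that angular interpolation (linear interpolation along the angle axis is an assumption; the abstract does not state the interpolation scheme):

```python
import numpy as np
from scipy.interpolate import interp1d

def interpolate_scatter(sparse_angles_deg, sparse_scatter, all_angles_deg):
    """Interpolate MC scatter images computed at sparse view angles to every
    projection angle.  sparse_scatter has shape (n_sparse, rows, cols)."""
    f = interp1d(sparse_angles_deg, sparse_scatter, axis=0,
                 kind="linear", fill_value="extrapolate")
    return f(all_angles_deg)

# toy usage: 31 sparse MC angles -> 364 projection angles
sparse_angles = np.linspace(0.0, 360.0, 31)
scatter_sparse = np.random.rand(31, 64, 64)
scatter_all = interpolate_scatter(sparse_angles, scatter_sparse,
                                  np.linspace(0.0, 360.0, 364))
print(scatter_all.shape)   # (364, 64, 64)
```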

  18. Improvement of quantitation in SPECT: Attenuation and scatter correction using non-uniform attenuation data

    International Nuclear Information System (INIS)

    Mukai, T.; Torizuka, K.; Douglass, K.H.; Wagner, H.N.

    1985-01-01

    Quantitative assessment of tracer distribution with single photon emission computed tomography (SPECT) is difficult because of attenuation and scattering of gamma rays within the object. A method considering the source geometry was developed, and the effects of attenuation and scatter on SPECT quantitation were studied using phantoms with non-uniform attenuation. The distribution of attenuation coefficients (μ) within the source was obtained by transmission CT. Attenuation correction was performed by an iterative reprojection technique. Scatter correction was done by convolving the attenuation-corrected image with an appropriate filter derived from line source studies; the filter characteristics depended on μ and the SPECT measurement at each pixel. The SPECT images obtained by this method showed more reasonable results than images reconstructed by the other methods. The scatter correction could completely compensate for a 28% scatter component from a long line source, and for a 61% component from a thick, extended source. Consideration of source geometries was necessary for effective corrections. The present method is expected to be valuable for the quantitative assessment of regional tracer activity.
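
    A minimal sketch of a single-pass convolution-subtraction step of the kind described above is given below; the Gaussian kernel width and scatter fraction are placeholder values, not those derived from the line source studies.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def convolution_subtraction(ac_image, scatter_fraction=0.3, kernel_sigma_px=8.0):
    """One-pass convolution-subtraction: estimate scatter as a blurred, scaled
    copy of the attenuation-corrected image and subtract it."""
    scatter_estimate = scatter_fraction * gaussian_filter(ac_image, kernel_sigma_px)
    return np.clip(ac_image - scatter_estimate, 0.0, None)
```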

  19. Improved scatter correction with factor analysis for planar and SPECT imaging

    Science.gov (United States)

    Knoll, Peter; Rahmim, Arman; Gültekin, Selma; Šámal, Martin; Ljungberg, Michael; Mirzaei, Siroos; Segars, Paul; Szczupak, Boguslaw

    2017-09-01

    Quantitative nuclear medicine imaging is an increasingly important frontier. In order to achieve quantitative imaging, the various interactions of photons with matter have to be modeled and compensated. Although correction for photon attenuation has been addressed accurately by including x-ray CT scans, correction for Compton scatter remains an open issue. The inclusion of scattered photons within the energy window used for planar or SPECT data acquisition decreases image contrast. While a number of methods for scatter correction have been proposed in the past, in this work we propose and assess a novel, user-independent framework applying factor analysis (FA). Extensive Monte Carlo simulations for planar and tomographic imaging were performed using the SIMIND software. Furthermore, planar acquisitions of two Petri dishes filled with 99mTc solutions and a Jaszczak phantom study (Data Spectrum Corporation, Durham, NC, USA) were performed using a dual-head gamma camera. In order to use FA for scatter correction, we subdivided the applied energy window into a number of sub-windows, serving as input data. FA results in two factor images (photo-peak, scatter) and two corresponding factor curves (energy spectra). Planar and tomographic Jaszczak phantom gamma camera measurements were recorded. The tomographic data (simulations and measurements) were processed for each angular position, resulting in a photo-peak and a scatter data set. The reconstructed transaxial slices of the Jaszczak phantom were quantified using an ImageJ plugin. The data obtained by FA showed good agreement with the energy spectra, photo-peak, and scatter images obtained in all Monte Carlo simulated data sets. For comparison, the standard dual-energy window (DEW) approach was additionally applied for scatter correction. FA, in comparison with the DEW method, results in significant improvements in image accuracy for both planar and tomographic data sets. FA can therefore be used as a user-independent approach to scatter correction.

  20. Monte Carlo and experimental evaluation of accuracy and noise properties of two scatter correction methods for SPECT

    International Nuclear Information System (INIS)

    Narita, Y.; Eberl, S.; Bautovich, G.; Iida, H.; Hutton, B.F.; Braun, M.; Nakamura, T.

    1996-01-01

    Scatter correction is a prerequisite for quantitative SPECT, but it potentially increases noise. Monte Carlo simulations (EGS4) and physical phantom measurements were used to compare the accuracy and noise properties of two scatter correction techniques: the triple-energy window (TEW) and the transmission-dependent convolution subtraction (TDCS) techniques. Two scatter functions were investigated for TDCS: (i) the originally proposed mono-exponential function (TDCS-mono) and (ii) an exponential plus Gaussian scatter function (TDCS-Gauss), shown to be superior in our Monte Carlo simulations. Signal-to-noise ratio (S/N) and accuracy were investigated in cylindrical phantoms and a chest phantom. Results from each method were compared to the true primary counts (simulations) or known activity concentrations (phantom studies). 99mTc was used in all cases. The optimized TDCS-Gauss method performed best overall, with an accuracy of better than 4% for all simulations and physical phantom studies. Maximum errors for TEW and TDCS-mono of -30% and -22%, respectively, were observed in the heart chamber of the simulated chest phantom. TEW had the worst S/N ratio of the three techniques. The S/N ratios of the two TDCS methods were similar and only slightly lower than those of the simulated true primary data. Thus, accurate quantitation can be obtained with TDCS-Gauss, with a relatively small reduction in S/N ratio. (author)
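
    For reference, the standard TEW estimate used as the comparator in this study can be written as a trapezoid under the photopeak spanned by the two flanking narrow windows; the sketch below assumes typical window widths in keV and illustrative array names.

```python
import numpy as np

def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Triple-energy-window scatter estimate: area of the trapezoid spanned by
    the two narrow windows flanking the photopeak window."""
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

def tew_correct(c_peak, c_lower, c_upper, w_lower=3.0, w_upper=3.0, w_peak=20.0):
    """Subtract the TEW estimate from the photopeak counts (widths in keV)."""
    return np.clip(c_peak - tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak),
                   0.0, None)
```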

  1. Evaluation of scatter correction using a single isotope for simultaneous emission and transmission data

    Energy Technology Data Exchange (ETDEWEB)

    Yang, J.; Kuikka, J.T.; Vanninen, E.; Laensimies, E. [Kuopio Univ. Hospital (Finland). Dept. of Clinical Physiology and Nuclear Medicine; Kauppinen, T.; Patomaeki, L. [Kuopio Univ. (Finland). Dept. of Applied Physics

    1999-05-01

    Photon scatter is one of the most important factors degrading the quantitative accuracy of SPECT images, and many scatter correction methods have been proposed; we previously proposed a single-isotope method. Aim: We evaluated this scatter correction method, which improves image quality by acquiring emission and transmission data simultaneously in a single-isotope scan. Method: To evaluate the proposed scatter correction method, a contrast and linearity phantom was studied. Four female patients with fibromyalgia (FM) syndrome and four with chronic back pain (BP) were imaged. Grey-to-cerebellum (G/C) and grey-to-white-matter (G/W) ratios were determined by one skilled operator for 12 regions of interest (ROIs) in each subject. Results: The linearity of the activity response was improved after scatter correction (r=0.999); the y-intercept of the regression line was 0.036 (p<0.0001) after scatter correction and the slope was 0.954. Pairwise correlation indicated agreement between non-scatter-corrected and scatter-corrected images, and reconstructed slices before and after scatter correction demonstrated a good correlation in the quantitative accuracy of the radionuclide concentration. G/C values showed significant correlation coefficients between the original and corrected data. Conclusion: The transaxial images from the human brain studies show that scatter correction using a single isotope in simultaneous transmission and emission tomography provides good scatter compensation. Contrast increased in all 12 ROIs, and the scatter compensation enhanced details of physiological lesions. (orig.)

  2. Measured attenuation correction methods

    International Nuclear Information System (INIS)

    Ostertag, H.; Kuebler, W.K.; Doll, J.; Lorenz, W.J.

    1989-01-01

    Accurate attenuation correction is a prerequisite for the determination of exact local radioactivity concentrations in positron emission tomography. Attenuation correction factors range from 4-5 in brain studies to 50-100 in whole body measurements. This report gives an overview of the different methods of determining the attenuation correction factors by transmission measurements using an external positron emitting source. The long-lived generator nuclide 68Ge/68Ga is commonly used for this purpose. The additional patient dose from the transmission source is usually a small fraction of the dose due to the subsequent emission measurement. Ring-shaped transmission sources as well as rotating point or line sources are employed in modern positron tomographs. By masking a rotating line or point source, random and scattered events in the transmission scans can be effectively suppressed. The problems of measured attenuation correction are discussed: transmission/emission mismatch, random and scattered event contamination, counting statistics, transmission/emission scatter compensation, transmission scan after administration of activity to the patient. By using a double masking technique simultaneous emission and transmission scans become feasible. (orig.)
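
    A minimal sketch of how measured attenuation correction factors are formed and applied is given below, assuming blank and transmission sinograms acquired with the external source; names are illustrative.

```python
import numpy as np

def attenuation_correction_factors(blank_sinogram, transmission_sinogram):
    """ACF per line of response: blank (no patient) over transmission counts;
    typical values range from about 4-5 (brain) to 50-100 (whole body)."""
    return blank_sinogram / np.maximum(transmission_sinogram, 1e-9)

def correct_emission(emission_sinogram, acf):
    """Apply the measured ACFs to the emission data before reconstruction."""
    return emission_sinogram * acf
```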

  3. Quantitative Evaluation of 2 Scatter-Correction Techniques for 18F-FDG Brain PET/MRI in Regard to MR-Based Attenuation Correction.

    Science.gov (United States)

    Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika

    2017-10-01

    In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC on scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other, with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET.

  4. Evaluation of scatter limitation correction: a new method of correcting photopenic artifacts caused by patient motion during whole-body PET/CT imaging.

    Science.gov (United States)

    Miwa, Kenta; Umeda, Takuro; Murata, Taisuke; Wagatsuma, Kei; Miyaji, Noriaki; Terauchi, Takashi; Koizumi, Mitsuru; Sasaki, Masayuki

    2016-02-01

    Overcorrection of scatter caused by patient motion during whole-body PET/computed tomography (CT) imaging can induce the appearance of photopenic artifacts in the PET images. The present study aimed to quantify the accuracy of scatter limitation correction (SLC) for eliminating photopenic artifacts. This study analyzed photopenic artifacts in (18)F-fluorodeoxyglucose ((18)F-FDG) PET/CT images acquired from 12 patients and from a National Electrical Manufacturers Association phantom with two peripheral plastic bottles that simulated the human body and arms, respectively. The phantom comprised a sphere (diameter, 10 or 37 mm) containing fluorine-18 solutions with target-to-background ratios of 2, 4, and 8. The plastic bottles were moved 10 cm posteriorly between the CT and PET acquisitions. All PET data were reconstructed using model-based scatter correction (SC), no scatter correction (NSC), and SLC, and the presence or absence of artifacts on the PET images was visually evaluated. The SC and SLC images were also semiquantitatively evaluated using standardized uptake values (SUVs). Photopenic artifacts were not recognizable in any NSC or SLC image from any of the 12 patients in the clinical study. The SUVmax values of the mismatched SLC PET/CT images were almost equal to those of the matched SC and SLC PET/CT images. Applying NSC and SLC substantially eliminated the photopenic artifacts seen on the SC PET images in the phantom study. SLC improved the activity concentration of the sphere for all target-to-background ratios. The highest %errors of the 10- and 37-mm spheres were 93.3% and 58.3%, respectively, for mismatched SC, and 73.2% and 22.0%, respectively, for mismatched SLC. Photopenic artifacts caused by SC errors induced by CT and PET image misalignment were corrected using SLC, indicating that this method is useful and practical for clinical qualitative and quantitative PET/CT assessment.

  5. Higher Order Heavy Quark Corrections to Deep-Inelastic Scattering

    Science.gov (United States)

    Blümlein, Johannes; DeFreitas, Abilio; Schneider, Carsten

    2015-04-01

    The 3-loop heavy flavor corrections to deep-inelastic scattering are essential for consistent next-to-next-to-leading order QCD analyses. We report on the present status of the calculation of these corrections at large virtualities Q^2. We also describe a series of mathematical, computer-algebraic and combinatorial methods and special function spaces, needed to perform these calculations. Finally, we briefly discuss the status of measuring α_s(M_Z), the charm quark mass m_c, and the parton distribution functions at next-to-next-to-leading order from the world precision data on deep-inelastic scattering.

  6. Higher order heavy quark corrections to deep-inelastic scattering

    International Nuclear Information System (INIS)

    Bluemlein, J.; Freitas, A. de; Johannes Kepler Univ., Linz; Schneider, C.

    2014-11-01

    The 3-loop heavy flavor corrections to deep-inelastic scattering are essential for consistent next-to-next-to-leading order QCD analyses. We report on the present status of the calculation of these corrections at large virtualities Q^2. We also describe a series of mathematical, computer-algebraic and combinatorial methods and special function spaces, needed to perform these calculations. Finally, we briefly discuss the status of measuring α_s(M_Z), the charm quark mass m_c, and the parton distribution functions at next-to-next-to-leading order from the world precision data on deep-inelastic scattering.

  7. How to simplify transmission-based scatter correction for clinical application

    International Nuclear Information System (INIS)

    Baccarne, V.; Hutton, B.F.

    1998-01-01

    The performance of ordered-subsets (OS) EM reconstruction including attenuation, scatter and spatial resolution correction is evaluated using cardiac Monte Carlo data. We demonstrate how simplifications in the scatter model allow SPECT data to be corrected for scatter, in terms of both quantitation and image quality, in a reasonable time. Initial reconstruction of the 20% window is performed including attenuation correction (broad-beam μ values) to estimate the activity quantitatively (accuracy 3%), but not spatially. A rough reconstruction with 2 iterations (subset size: 8) is sufficient for subsequent scatter correction. An estimate of the primary photons is obtained by projecting the previous distribution including attenuation (narrow-beam μ values). An estimate of the scatter is obtained by convolving the primary estimates with a depth-dependent scatter kernel and scaling the result by a factor calculated from the attenuation map. The correction can be accelerated by convolving several adjacent planes with the same kernel and using an average scaling factor. Simulating the effects of the collimator during the scatter correction was demonstrated to be unnecessary. The final reconstruction is performed using 6 iterations of OSEM, including attenuation (narrow-beam μ values) and spatial resolution correction. Scatter correction is implemented by incorporating the estimated scatter as a constant offset in the forward projection step. The total correction plus reconstruction (64 projections, 40x128 pixels) takes 38 minutes on a Sun Sparc 20. Quantitatively, the accuracy is 7% in a reconstructed slice. The SNR inside the whole myocardium (defined from the original object) is 2.1 and 2.3 in the corrected and the primary slices, respectively. The scatter correction preserves the myocardium-to-ventricle contrast (primary: 0.79, corrected: 0.82). These simplifications allow the correction to be accelerated without influencing the quality of the result.
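
    The idea of incorporating the scatter estimate as a constant offset in the forward projection step can be sketched with a toy MLEM loop using a dense system matrix; this is a conceptual illustration under assumed array shapes, not the authors' OSEM implementation.

```python
import numpy as np

def mlem_with_scatter(A, y, scatter, n_iter=20):
    """MLEM with the scatter estimate as a constant additive offset in the
    forward model: expected counts = A @ x + scatter.
    A: (n_bins, n_voxels) system matrix, y: measured projection data,
    scatter: scatter estimate in projection space (same shape as y)."""
    x = np.ones(A.shape[1])
    sensitivity = A.sum(axis=0)                      # A^T * 1
    for _ in range(n_iter):
        expected = A @ x + scatter
        ratio = y / np.maximum(expected, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return x
```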

  8. Library based x-ray scatter correction for dedicated cone beam breast CT

    International Nuclear Information System (INIS)

    Shi, Linxi; Zhu, Lei; Vedantham, Srinivasan; Karellas, Andrew

    2016-01-01

    Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models with different sizes using the GEANT4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with a homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from the simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of the general population. Spatial nonuniformity (SNU) and contrast-to-signal deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors’ method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors’ method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal views.
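
    The library lookup and subtraction described above reduces, conceptually, to selecting the precomputed scatter distribution closest in breast diameter and subtracting it from the measured projection; the sketch below omits the spatial translation step and uses assumed data structures.

```python
import numpy as np

def library_scatter_correct(projection, breast_diameter_cm, scatter_library):
    """scatter_library maps breast diameter (cm) to a precomputed scatter
    distribution of the same shape as `projection`. Pick the closest model
    and subtract its scatter estimate from the measured projection."""
    diameters = np.array(sorted(scatter_library))
    nearest = float(diameters[np.argmin(np.abs(diameters - breast_diameter_cm))])
    return np.clip(projection - scatter_library[nearest], 0.0, None)
```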

  9. Library based x-ray scatter correction for dedicated cone beam breast CT

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Linxi; Zhu, Lei, E-mail: leizhu@gatech.edu [Nuclear and Radiological Engineering and Medical Physics Programs, The George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Vedantham, Srinivasan; Karellas, Andrew [Department of Radiology, University of Massachusetts Medical School, Worcester, Massachusetts 01655 (United States)

    2016-08-15

    Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models with different sizes using the GEANT4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with a homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from the simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of the general population. Spatial nonuniformity (SNU) and contrast-to-signal deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors’ method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors’ method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal views.

  10. The modular small-angle X-ray scattering data correction sequence.

    Science.gov (United States)

    Pauw, B R; Smith, A J; Snow, T; Terrill, N J; Thünemann, A F

    2017-12-01

    Data correction is probably the least favourite activity amongst users experimenting with small-angle X-ray scattering: if it is not done sufficiently well, this may become evident only during the data analysis stage, necessitating the repetition of the data corrections from scratch. A recommended comprehensive sequence of elementary data correction steps is presented here to alleviate the difficulties associated with data correction, both in the laboratory and at the synchrotron. When applied in the proposed order to the raw signals, the resulting absolute scattering cross section will provide a high degree of accuracy for a very wide range of samples, with its values accompanied by uncertainty estimates. The method can be applied without modification to any pinhole-collimated instruments with photon-counting direct-detection area detectors.

  11. Atmospheric scattering corrections to solar radiometry

    International Nuclear Information System (INIS)

    Box, M.A.; Deepak, A.

    1979-01-01

    Whenever a solar radiometer is used to measure direct solar radiation, some diffuse sky radiation invariably enters the detector's field of view along with the direct beam. Therefore, the atmospheric optical depth obtained by the use of Bouguer's transmission law (also called the Beer-Lambert law), which is valid only for direct radiation, needs to be corrected to take account of the scattered radiation. In this paper we discuss the correction factors needed to account for the diffuse (i.e., singly and multiply scattered) radiation and the algorithms developed for retrieving the aerosol size distribution from such measurements. For a radiometer with a small field of view (small half-cone angle) and relatively clear skies (optical depths <0.4), it is shown that the total diffuse contribution represents approximately 1% of the total intensity. It is assumed here that the main contributions to the diffuse radiation within the detector's view cone are due to single scattering by molecules and aerosols and multiple scattering by molecules alone, aerosol multiple-scattering contributions being treated as negligibly small. The theory and the numerical results discussed in this paper will be helpful not only in making corrections to measured optical depth data but also in designing improved solar radiometers.
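
    In code, the Bouguer (Beer-Lambert) retrieval with a small diffuse-light correction might look like the sketch below; the 1% diffuse fraction is taken from the estimate quoted above and all names are assumptions.

```python
import numpy as np

def optical_depth(v_measured, v_extraterrestrial, airmass, diffuse_fraction=0.01):
    """Bouguer (Beer-Lambert) retrieval of total optical depth, after removing
    a small diffuse-light contribution from the measured signal."""
    v_direct = v_measured * (1.0 - diffuse_fraction)
    return -np.log(v_direct / v_extraterrestrial) / airmass
```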

  12. An inter-crystal scatter correction method for DOI PET image reconstruction

    International Nuclear Information System (INIS)

    Lam, Chih Fung; Hagiwara, Naoki; Obi, Takashi; Yamaguchi, Masahiro; Yamaya, Taiga; Murayama, Hideo

    2006-01-01

    New positron emission tomography (PET) scanners utilize depth-of-interaction (DOI) information to improve image resolution, particularly at the edge of the field of view, while maintaining high detector sensitivity. However, the inter-crystal scatter (ICS) effect cannot be neglected in DOI scanners due to the use of smaller crystals. ICS is the phenomenon in which a single incident gamma photon produces multiple scintillations because of Compton scatter in the detecting crystals. In the case of ICS, only one scintillation position is approximated for detectors with Anger-type logic calculation. This causes an error in position detection, and ICS worsens the image contrast, particularly for smaller hotspots. In this study, we propose to model an ICS probability by using a Monte Carlo simulator. The probability is given as a statistical relationship between the crystal pair of the gamma photons' first interactions and the detected crystal pair. It is then used to improve the system matrix of a statistical image reconstruction algorithm, such as maximum likelihood expectation maximization (ML-EM), in order to correct for the position error caused by ICS. We apply the proposed method to simulated data of the jPET-D4, a four-layer DOI PET scanner being developed at the National Institute of Radiological Sciences. Our computer simulations show that image contrast is successfully recovered by the proposed method. (author)

  13. N3LO corrections to jet production in deep inelastic scattering using the Projection-to-Born method

    Science.gov (United States)

    Currie, J.; Gehrmann, T.; Glover, E. W. N.; Huss, A.; Niehues, J.; Vogt, A.

    2018-05-01

    Computations of higher-order QCD corrections for processes with exclusive final states require a subtraction method for real-radiation contributions. We present the first-ever generalisation of a subtraction method for third-order (N3LO) QCD corrections. The Projection-to-Born method is used to combine inclusive N3LO coefficient functions with an exclusive second-order (NNLO) calculation for a final state with an extra jet. The input requirements, advantages, and potential applications of the method are discussed, and validations at lower orders are performed. As a test case, we compute the N3LO corrections to kinematical distributions and production rates for single-jet production in deep inelastic scattering in the laboratory frame, and compare them with data from the ZEUS experiment at HERA. The corrections are small in the central rapidity region, where they stabilize the predictions to the sub-per-cent level. The corrections increase substantially towards forward rapidity, where large logarithmic effects are expected, thereby yielding an improved description of the data in this region.

  14. The analysis and correction of neutron scattering effects in neutron imaging

    International Nuclear Information System (INIS)

    Raine, D.A.; Brenizer, J.S.

    1997-01-01

    A method of correcting for the scattering effects present in neutron radiographic and computed tomographic imaging has been developed. Prior work has shown that beam, object, and imaging system geometry factors, such as the L/D ratio and angular divergence, are the primary sources contributing to the degradation of neutron images. With objects smaller than 20-40 mm in width, a parallel-beam approximation can be made in which the effects of geometry are negligible. Factors which remain important in the image formation process are the pixel size of the imaging system, neutron scattering, the size of the object, the conversion material, and the beam energy spectrum. The Monte Carlo N-Particle transport code, version 4A (MCNP4A), was used to separate and evaluate the effect that each of these parameters has on neutron image data. The simulations were used to develop a correction algorithm which is easy to implement and requires no a priori knowledge of the object. The correction algorithm is based on the determination of the object scatter function (OSF), using available data outside the object to estimate the shape and magnitude of the OSF based on a Gaussian functional form. For objects smaller than 1 mm (0.04 in.) in width, the correction function can be well approximated by a constant function. Errors in the determination and correction of the MCNP-simulated neutron scattering component were under 5%, and larger errors were noted only in objects at the extreme high end of the range of simulated object sizes. The Monte Carlo data also indicated that scattering does not play a significant role in the blurring of neutron radiographic and tomographic images. The effect of neutron scattering on computed tomography is shown to be minimal, with the most serious effect arising when the basic backprojection method is used.
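
    A minimal sketch of fitting a Gaussian object scatter function to data outside the object and subtracting it is shown below; it assumes a one-dimensional profile in which the open-beam signal has already been removed, so pixels outside the object contain scatter only, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, sigma):
    return amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2)

def subtract_osf(scatter_profile, outside_object):
    """Fit a Gaussian object scatter function to pixels outside the object
    (boolean mask) and subtract the fitted OSF over the whole profile."""
    x = np.arange(scatter_profile.size, dtype=float)
    p0 = (scatter_profile[outside_object].max(),
          scatter_profile.size / 2.0, scatter_profile.size / 4.0)
    params, _ = curve_fit(gaussian, x[outside_object],
                          scatter_profile[outside_object], p0=p0)
    return scatter_profile - gaussian(x, *params)
```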

  15. X-ray scatter correction method for dedicated breast computed tomography: improvements and initial patient testing

    International Nuclear Information System (INIS)

    Ramamurthy, Senthil; D’Orsi, Carl J; Sechopoulos, Ioannis

    2016-01-01

    A previously proposed x-ray scatter correction method for dedicated breast computed tomography was further developed and implemented so as to allow for initial patient testing. The method involves the acquisition of a complete second set of breast CT projections covering 360° with a perforated tungsten plate in the path of the x-ray beam. To make patient testing feasible, a wirelessly controlled electronic positioner for the tungsten plate was designed and added to a breast CT system. Other improvements to the algorithm were implemented, including automated exclusion of non-valid primary estimate points and the use of a different approximation method to estimate the full scatter signal. To evaluate the effectiveness of the algorithm, the resulting image quality was assessed with a breast phantom and with nine patient images. The improvements in the algorithm avoided the introduction of artifacts, especially at the object borders, which had been an issue with the previous implementation in some cases. Both contrast, in terms of signal difference, and signal difference-to-noise ratio were improved with the proposed method, in contrast to the correction algorithm incorporated in the system, which does not recover contrast. Patient image evaluation also showed enhanced contrast, better cupping correction, and more consistent voxel values for the different tissues. The algorithm also reduces artifacts present in reconstructions of non-regularly shaped breasts. With the implemented hardware and software improvements, the proposed method can be reliably used during patient breast CT imaging, resulting in improvement of image quality, no introduction of artifacts, and in some cases reduction of artifacts already present. The impact of the algorithm on actual clinical performance for detection, diagnosis and other clinical tasks in breast imaging remains to be evaluated. (paper)

  16. A new method for x-ray scatter correction: first assessment on a cone-beam CT experimental setup

    International Nuclear Information System (INIS)

    Rinkel, J; Gerfault, L; Esteve, F; Dinten, J-M

    2007-01-01

    Cone-beam computed tomography (CBCT) enables three-dimensional imaging with isotropic resolution and a shorter acquisition time compared to a helical CT scanner. Because a larger object volume is exposed for each projection, scatter levels are much higher than in collimated fan-beam systems, resulting in cupping artifacts, streaks and quantification inaccuracies. In this paper, a general method to correct for scatter in CBCT, without supplementary on-line acquisition, is presented. This method is based on scatter calibration through off-line acquisition combined with on-line analytical transformation based on physical equations, to adapt calibration to the object observed. The method was tested on a PMMA phantom and on an anthropomorphic thorax phantom. The results were validated by comparison to simulation for the PMMA phantom and by comparison to scans obtained on a commercial multi-slice CT scanner for the thorax phantom. Finally, the improvements achieved with the new method were compared to those obtained using a standard beam-stop method. The new method provided results that closely agreed with the simulation and with the conventional CT scanner, eliminating cupping artifacts and significantly improving quantification. Compared to the beam-stop method, lower x-ray doses and shorter acquisition times were needed, both divided by a factor of 9 for the same scatter estimation accuracy

  17. Efficient scatter distribution estimation and correction in CBCT using concurrent Monte Carlo fitting

    Energy Technology Data Exchange (ETDEWEB)

    Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Verhaegen, F. [Department of Radiation Oncology - MAASTRO, GROW—School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4 (Canada); Jaffray, D. A. [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Ontario Cancer Institute, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)

    2015-01-15

    Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations at a subset of the projection angles of the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S_MC. S_MC is fit to a function, S_F, and if the fit of S_F is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S_F, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested, using a frequency-limited sum of sines and cosines as the fitting function, on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were pelvis scans of a phantom and a patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as to determine a suitable fitness value, and also to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the original uncorrected reconstruction, to a reconstruction corrected with a constant scatter estimate, and to a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson's correlation, r, proved to be a suitable goodness-of-fit metric.
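
    The frequency-limited fitting function can be illustrated with an ordinary least-squares Fourier fit in one dimension; the sketch below is a simplified stand-in for the concurrent fitting described in the record, with assumed variable names.

```python
import numpy as np

def fourier_design_matrix(t, n_harmonics, period):
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2.0 * np.pi * k * t / period))
        cols.append(np.sin(2.0 * np.pi * k * t / period))
    return np.column_stack(cols)

def fit_and_interpolate(t_sparse, s_sparse, t_all, n_harmonics=3, period=360.0):
    """Least-squares fit of sparse scatter samples to a frequency-limited sum
    of sines and cosines, evaluated at every requested position."""
    t_sparse = np.asarray(t_sparse, dtype=float)
    A = fourier_design_matrix(t_sparse, n_harmonics, period)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(s_sparse, dtype=float), rcond=None)
    return fourier_design_matrix(np.asarray(t_all, dtype=float),
                                 n_harmonics, period) @ coeffs
```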

  18. A review of neutron scattering correction for the calibration of neutron survey meters using the shadow cone method

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sang In; Kim, Bong Hwan; Kim, Jang Lyul; Lee, Jung Il [Health Physics Team, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-12-15

    The calibration methods of neutron-measuring devices such as the neutron survey meter have advantages and disadvantages. To compare the calibration factors obtained by the shadow cone method and the semi-empirical method, 10 neutron survey meters of five different types were used in this study. The experiment was performed at the Korea Atomic Energy Research Institute (KAERI; Daejeon, South Korea), and the calibration neutron fields were constructed using a californium-252 (252Cf) neutron source, which was positioned in the center of the neutron irradiation room. The neutron spectra of the calibration neutron fields were measured by a europium-activated lithium iodide scintillator in combination with KAERI's Bonner sphere system. When the shadow cone method was used, the 10 single-moderator-based survey meters exhibited a calibration factor smaller by as much as 3.1-9.3% than that of the semi-empirical method. This finding indicates that the neutron survey meters underestimated the scattered and attenuated neutrons (i.e., the total scatter corrections). This underestimation of the calibration factor was attributed to the fact that single-moderator-based survey meters have an under-ambient dose equivalent response in a thermal or thermal-dominant neutron field. As a result, when the shadow cone method is used for a single-moderator-based survey meter, an additional correction and the International Organization for Standardization standard 8529-2 for room-scattered neutrons should be considered.
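
    The arithmetic of the shadow cone method itself is simple: the shadowed reading isolates the scattered component, and the difference from the total reading gives the direct-beam response used for calibration. A hedged sketch, with assumed variable names and units, follows.

```python
def shadow_cone_calibration_factor(reading_total, reading_shadow_cone,
                                   conventional_dose_rate):
    """Shadow-cone method: the cone blocks the direct beam, so the shadowed
    reading contains only room- and air-scattered neutrons. Subtracting it
    leaves the direct-beam response, which is divided into the conventionally
    true dose-equivalent rate to give the calibration factor."""
    direct_response = reading_total - reading_shadow_cone
    return conventional_dose_rate / direct_response
```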

  19. A review of neutron scattering correction for the calibration of neutron survey meters using the shadow cone method

    International Nuclear Information System (INIS)

    Kim, Sang In; Kim, Bong Hwan; Kim, Jang Lyul; Lee, Jung Il

    2015-01-01

    The calibration methods of neutron-measuring devices such as the neutron survey meter have advantages and disadvantages. To compare the calibration factors obtained by the shadow cone method and the semi-empirical method, 10 neutron survey meters of five different types were used in this study. The experiment was performed at the Korea Atomic Energy Research Institute (KAERI; Daejeon, South Korea), and the calibration neutron fields were constructed using a californium-252 (252Cf) neutron source, which was positioned in the center of the neutron irradiation room. The neutron spectra of the calibration neutron fields were measured by a europium-activated lithium iodide scintillator in combination with KAERI's Bonner sphere system. When the shadow cone method was used, the 10 single-moderator-based survey meters exhibited a calibration factor smaller by as much as 3.1-9.3% than that of the semi-empirical method. This finding indicates that the neutron survey meters underestimated the scattered and attenuated neutrons (i.e., the total scatter corrections). This underestimation of the calibration factor was attributed to the fact that single-moderator-based survey meters have an under-ambient dose equivalent response in a thermal or thermal-dominant neutron field. As a result, when the shadow cone method is used for a single-moderator-based survey meter, an additional correction and the International Organization for Standardization standard 8529-2 for room-scattered neutrons should be considered.

  20. Effects of scatter correction on regional distribution of cerebral blood flow using I-123-IMP and SPECT

    International Nuclear Information System (INIS)

    Ito, Hiroshi; Iida, Hidehiro; Kinoshita, Toshibumi; Hatazawa, Jun; Okudera, Toshio; Uemura, Kazuo

    1999-01-01

    The transmission-dependent convolution subtraction method, one of the methods for scatter correction in SPECT, was applied to the assessment of CBF using SPECT and I-123-IMP. The effects of scatter correction on the regional distribution of CBF were evaluated on a pixel-by-pixel basis by means of an anatomic standardization technique. SPECT scans were performed on six healthy men, and image reconstruction was carried out with and without scatter correction. All reconstructed images were globally normalized for the radioactivity of each pixel and transformed into a standard brain anatomy. After anatomic standardization, average SPECT images were calculated for the scatter-corrected and uncorrected groups, and these groups were compared on a pixel-by-pixel basis. In the scatter-uncorrected group, a significant overestimation of CBF was observed in the deep cerebral white matter, pons, thalamus, putamen, hippocampal region and cingulate gyrus compared with the scatter-corrected group. A significant underestimation was observed in all neocortical regions, especially the occipital and parietal lobes, and in the cerebellar cortex. The regional distribution of CBF obtained by scatter-corrected SPECT was similar to that obtained by O-15 water PET. Scatter correction is needed for the assessment of CBF using SPECT. (author)

  1. Neural network scatter correction technique for digital radiography

    International Nuclear Information System (INIS)

    Boone, J.M.

    1990-01-01

    This paper presents a scatter correction technique based on artificial neural networks. The technique utilizes the acquisition of a conventional digital radiographic image, coupled with the acquisition of a multiple pencil beam (micro-aperture) digital image. Image subtraction results in a sparsely sampled estimate of the scatter component in the image. The neural network is trained to develop a causal relationship between image data in the low-pass-filtered open-field image and the sparsely sampled scatter image, and the trained network is then used to correct the entire image (pixel by pixel) in a manner which is operationally similar to, but potentially more powerful than, convolution. The technique is described and illustrated using clinical primary-component images combined with scatter-component images that are realistically simulated using results from previously reported Monte Carlo investigations. The results indicate that an accurate scatter correction can be realized using this technique.
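
    A rough stand-in for the trained network can be put together with scikit-learn: a small regressor mapping the low-pass-filtered open-field value at each pixel to the micro-aperture scatter samples, then predicting scatter everywhere. The feature choice, architecture and names below are assumptions, not the network described in the record.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.neural_network import MLPRegressor

def estimate_scatter(open_field_image, sparse_scatter, sample_mask, sigma_px=10.0):
    """Train a small regression network on the micro-aperture samples
    (pixels where sample_mask is True), mapping the low-pass-filtered
    open-field value to scatter, then predict scatter for every pixel."""
    lowpass = gaussian_filter(open_field_image, sigma_px)
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(lowpass[sample_mask].reshape(-1, 1), sparse_scatter[sample_mask])
    prediction = net.predict(lowpass.reshape(-1, 1))
    return prediction.reshape(open_field_image.shape)

# usage: corrected = conventional_image - estimate_scatter(conventional_image,
#                                                          sparse_scatter, mask)
```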

  2. Scatter correction using a primary modulator on a clinical angiography C-arm CT system.

    Science.gov (United States)

    Bier, Bastian; Berger, Martin; Maier, Andreas; Kachelrieß, Marc; Ritschl, Ludwig; Müller, Kerstin; Choi, Jang-Hwan; Fahrig, Rebecca

    2017-09-01

    Cone beam computed tomography (CBCT) suffers from a large amount of scatter, resulting in severe scatter artifacts in the reconstructions. Recently, a new scatter correction approach, called improved primary modulator scatter estimation (iPMSE), was introduced. That approach utilizes a primary modulator that is inserted between the X-ray source and the object. This modulation enables estimation of the scatter in the projection domain by optimizing an objective function with respect to the scatter estimate. Up to now the approach has not been implemented on a clinical angiography C-arm CT system. In our work, the iPMSE method is transferred to a clinical C-arm CBCT. Additional processing steps are added in order to compensate for the C-arm scanner motion and the automatic X-ray tube current modulation. These challenges were overcome by establishing a reference modulator database and a block-matching algorithm. Experiments with phantom and experimental in vivo data were performed to evaluate the method. We show that scatter correction using primary modulation is possible on a clinical C-arm CBCT. Scatter artifacts in the reconstructions are reduced with the newly extended method. Compared to a scan with a narrow collimation, our approach showed superior results with an improvement of the contrast and the contrast-to-noise ratio for the phantom experiments. In vivo data are evaluated by comparing the results with a scan with a narrow collimation and with a constant scatter correction approach. Scatter correction using primary modulation is possible on a clinical CBCT by compensating for the scanner motion and the tube current modulation. Scatter artifacts could be reduced in the reconstructions of phantom scans and in experimental in vivo data. © 2017 American Association of Physicists in Medicine.

  3. SU-E-I-07: An Improved Technique for Scatter Correction in PET

    International Nuclear Information System (INIS)

    Lin, S; Wang, Y; Lue, K; Lin, H; Chuang, K

    2014-01-01

    Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to the real scatter amount by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of our proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For our technique and SSS, the normalized standard deviations of the reconstructed images were 0.053 and 0.182, respectively, and the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using the SF. This method can avoid the bias caused by an insufficient scatter-only projection tail.
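
    The scaling step of the proposed calibration can be sketched as follows, with a placeholder empirical SF(μ) curve standing in for the function fitted to the phantom series; all coefficients and names are assumptions.

```python
import numpy as np

def predicted_scatter_fraction(mu_average, a=0.5, b=2.0):
    """Placeholder empirical transformation SF(mu); in practice this curve is
    fitted to a series of phantom studies of different sizes and materials."""
    return a * (1.0 - np.exp(-b * mu_average))

def scale_sss(sss_shape, prompts_sinogram, mu_average):
    """Scale the (shape-only) SSS estimate so that its total equals the
    predicted scatter fraction times the total measured counts."""
    target_scatter = predicted_scatter_fraction(mu_average) * prompts_sinogram.sum()
    return sss_shape * (target_scatter / sss_shape.sum())
```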

  4. SU-E-I-07: An Improved Technique for Scatter Correction in PET

    Energy Technology Data Exchange (ETDEWEB)

    Lin, S; Wang, Y; Lue, K; Lin, H; Chuang, K [Chuang, National Tsing Hua University, Hsichu, Taiwan (China)

    2014-06-01

    Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to the real scatter amount by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of our proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For our technique and SSS, the normalized standard deviations of the reconstructed images were 0.053 and 0.182, respectively, and the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using the SF. This method can avoid the bias caused by an insufficient scatter-only projection tail.

  5. A library least-squares approach for scatter correction in gamma-ray tomography

    International Nuclear Information System (INIS)

    Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro

    2015-01-01

    Scattered radiation is known to lead to distortion in reconstructed images in Computed Tomography (CT). The effects of scattered radiation are especially pronounced in non-scanning, multiple-source systems, which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for it. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of transmission and scatter components in any given detector in the UoB gamma-ray tomography system. - Highlights: • An LLS approach is proposed for scatter correction in gamma-ray tomography. • The validity of the LLS approach is tested through experiments. • Gain shift and pulse pile-up affect the accuracy of the LLS approach. • The LLS approach successfully estimates scatter profiles.
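
    Conceptually, the LLS step solves a small non-negative least-squares problem per detector, decomposing the measured spectrum into library transmission and scatter spectra; the sketch below uses SciPy's NNLS solver and assumed library names.

```python
import numpy as np
from scipy.optimize import nnls

def lls_decompose(measured_spectrum, transmission_ref, scatter_ref):
    """Solve measured ≈ a*transmission_ref + b*scatter_ref with a, b >= 0 and
    return the estimated transmission and scatter counts in the detector."""
    library = np.column_stack([transmission_ref, scatter_ref])
    (a, b), _residual = nnls(library, measured_spectrum)
    return a * transmission_ref.sum(), b * scatter_ref.sum()
```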

  6. Compton scatter correction for planar scintigraphic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Vaan Steelandt, E; Dobbeleir, A; Vanregemorter, J [Algemeen Ziekenhuis Middelheim, Antwerp (Belgium). Dept. of Nuclear Medicine and Radiotherapy

    1995-12-01

    A major problem in nuclear medicine is image degradation due to Compton scatter in the patient. Photons emitted by the radioactive tracer scatter in collisions with electrons of the surrounding tissue. Owing to the resulting loss of energy and change in direction, the scattered photons induce an object-dependent background in the images, degrading the contrast of warm and cold lesions. Although theoretically interesting, most of the techniques proposed in the literature, such as the use of symmetrical photopeaks, cannot be implemented on commonly used gamma cameras because of the energy/linearity/sensitivity corrections applied in the detector. A method for a single-energy isotope, based on existing methods with adjustments for daily practice and clinical situations, is proposed. It is assumed that the scatter image, recorded from photons collected within a scatter window adjacent to the photopeak, is a reasonably close approximation of the true scatter component of the image reconstructed from the photopeak window. A fraction k of the image acquired in the scatter window is subtracted from the image recorded in the photopeak window to produce the compensated image. The key issue is the right value of the factor k, which was determined mathematically and confirmed by experiments. To determine k, different kinds of scatter media were used and positioned in different ways to simulate clinical situations. For a secondary energy window from 100 to 124 keV below a photopeak window from 126 to 154 keV, a value of 0.7 was found. This value has been verified using both an anthropomorphic thyroid phantom and the Rollo contrast phantom.
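
    The window subtraction itself is a one-liner; the sketch below applies the k = 0.7 factor quoted above to illustrative photopeak- and scatter-window images.

```python
import numpy as np

def dual_window_correct(photopeak_image, scatter_window_image, k=0.7):
    """Subtract a fraction k of the lower scatter-window image (100-124 keV)
    from the photopeak-window image (126-154 keV)."""
    return np.clip(photopeak_image - k * scatter_window_image, 0.0, None)
```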

  7. Improving quantitative dosimetry in (177)Lu-DOTATATE SPECT by energy window-based scatter corrections

    DEFF Research Database (Denmark)

    de Nijs, Robin; Lagerburg, Vera; Klausen, Thomas L

    2014-01-01

    and the activity, which depends on the collimator type, the utilized energy windows and the applied scatter correction techniques. In this study, energy window subtraction-based scatter correction methods are compared experimentally and quantitatively. MATERIALS AND METHODS: (177)Lu SPECT images of a phantom...... technique, the measured ratio was close to the real ratio, and the differences between spheres were small. CONCLUSION: For quantitative (177)Lu imaging MEGP collimators are advised. Both energy peaks can be utilized when the ESSE correction technique is applied. The difference between the calculated...

  8. Scattering Correction For Image Reconstruction In Flash Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Liangzhi; Wang, Mengqi; Wu, Hongchun; Liu, Zhouyu; Cheng, Yuxiong; Zhang, Hongbo [Xi' an Jiaotong Univ., Xi' an (China)

    2013-08-15

    Scattered photons cause blurring and distortions in flash radiography, reducing the accuracy of image reconstruction significantly. The effect of the scattered photons is taken into account and an iterative deduction of the scattered photons is proposed to amend the scattering effect for image restoration. In order to deduct the scattering contribution, the flux of scattered photons is estimated as the sum of two components. The single scattered component is calculated accurately together with the uncollided flux along the characteristic ray, while the multiple scattered component is evaluated using correction coefficients pre-obtained from Monte Carlo simulations. The arbitrary geometry pretreatment and ray tracing are carried out based on the customization of AutoCAD. With the above model, an Iterative Procedure for image restORation code, IPOR, is developed. Numerical results demonstrate that the IPOR code is much more accurate than the direct reconstruction solution without scattering correction and it has a very high computational efficiency.

  9. Scattering Correction For Image Reconstruction In Flash Radiography

    International Nuclear Information System (INIS)

    Cao, Liangzhi; Wang, Mengqi; Wu, Hongchun; Liu, Zhouyu; Cheng, Yuxiong; Zhang, Hongbo

    2013-01-01

    Scattered photons cause blurring and distortions in flash radiography, reducing the accuracy of image reconstruction significantly. The effect of the scattered photons is taken into account and an iterative deduction of the scattered photons is proposed to amend the scattering effect for image restoration. In order to deduct the scattering contribution, the flux of scattered photons is estimated as the sum of two components. The single scattered component is calculated accurately together with the uncollided flux along the characteristic ray, while the multiple scattered component is evaluated using correction coefficients pre-obtained from Monte Carlo simulations. The arbitrary geometry pretreatment and ray tracing are carried out based on the customization of AutoCAD. With the above model, an Iterative Procedure for image restORation code, IPOR, is developed. Numerical results demonstrate that the IPOR code is much more accurate than the direct reconstruction solution without scattering correction and it has a very high computational efficiency.

  10. A practical procedure to improve the accuracy of radiochromic film dosimetry. An integration of a uniformity correction method and a red/blue correction method

    International Nuclear Information System (INIS)

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-01-01

    It has been reported that light scattering can worsen the accuracy of dose distribution measurements using radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and by light scattering was also evaluated, as was the efficacy of integrating this correction with the red/blue correction method. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2, but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films, and corrected dose distribution data were subsequently created. The correction method improved the pass ratios in the dose-difference evaluation by more than 10% compared with no correction. The red/blue correction method resulted in a 5% improvement compared with the standard procedure that employs the red color only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and has potential for routine clinical intensity-modulated radiation therapy (IMRT) dose verification if the accuracy of EBT2 is required to be similar to that of EDR2. The use of the red/blue correction method may improve the accuracy, but we recommend using it carefully and understanding the characteristics of EBT2 both for the red color only and for the red/blue correction method. (author)
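
    As a rough illustration of the red/blue idea (a simplified sketch, not the authors' formulation): if a thickness non-uniformity scales the red and blue net optical densities by a common factor, a red-to-blue ratio metric cancels that factor before a calibration curve is applied. All calibration coefficients below are made-up placeholders:

```python
import numpy as np

def dose_red_only(od_red, a=10.0, b=35.0):
    """Toy red-channel calibration curve: dose = a*OD + b*OD^2 (placeholder coefficients)."""
    return a * od_red + b * od_red ** 2

def dose_red_blue(od_red, od_blue, a=1.0, b=0.2):
    """Toy red/blue correction: the red-to-blue net-OD ratio cancels a common
    multiplicative thickness factor before the (placeholder) calibration curve is applied."""
    ratio = od_red / np.clip(od_blue, 1e-6, None)
    return a * ratio + b * ratio ** 2

# A +5% active-layer thickness non-uniformity scales both net ODs by ~1.05.
od_red = np.array([0.300, 0.300 * 1.05])
od_blue = np.array([0.080, 0.080 * 1.05])
print(dose_red_only(od_red))           # the thicker pixel reads hot
print(dose_red_blue(od_red, od_blue))  # the ratio-based value stays uniform
```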

  11. [A practical procedure to improve the accuracy of radiochromic film dosimetry: an integration with a uniformity correction method and a red/blue correction method].

    Science.gov (United States)

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-06-01

    It has been reported that light scattering can worsen the accuracy of dose distribution measurements using radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and by light scattering was also evaluated, as was the efficacy of integrating this correction with the red/blue correction method. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2, but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films, and corrected dose distribution data were subsequently created. The correction method improved the pass ratios in the dose-difference evaluation by more than 10% compared with no correction. The red/blue correction method resulted in a 5% improvement compared with the standard procedure that employs the red color only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and has potential for routine clinical IMRT dose verification if the accuracy of EBT2 is required to be similar to that of EDR2. The use of the red/blue correction method may improve the accuracy, but we recommend using it carefully and understanding the characteristics of EBT2 both for the red color only and for the red/blue correction method.

  12. Scatter factor corrections for elongated fields

    International Nuclear Information System (INIS)

    Higgins, P.D.; Sohn, W.H.; Sibata, C.H.; McCarthy, W.A.

    1989-01-01

    Measurements have been made to determine scatter factor corrections for elongated fields of Cobalt-60 and for nominal linear accelerator energies of 6 MV (Siemens Mevatron 67) and 18 MV (AECL Therac 20). It was found that for every energy the collimator scatter factor varies by 2% or more as the field length-to-width ratio increases beyond 3:1. The phantom scatter factor is independent of which collimator pair is elongated at these energies. For 18 MV photons it was found that the collimator scatter factor is complicated by field-size-dependent backscatter into the beam monitor

  13. Real-time scatter measurement and correction in film radiography

    International Nuclear Information System (INIS)

    Shaw, C.G.

    1987-01-01

    A technique for real-time scatter measurement and correction in scanning film radiography is described. With this technique, collimated x-ray fan beams are used to partially reject scattered radiation. Photodiodes are attached to the aft-collimator for sampled scatter measurement. Such measurement allows the scatter distribution to be reconstructed and subtracted from digitized film image data for accurate transmission measurement. In this presentation the authors discuss the physical and technical considerations of this scatter correction technique. Examples are shown that demonstrate the feasibility of the technique. Improved x-ray transmission measurement and dual-energy subtraction imaging are demonstrated with phantoms

  14. SU-D-206-04: Iterative CBCT Scatter Shading Correction Without Prior Information

    International Nuclear Information System (INIS)

    Bai, Y; Wu, P; Mao, T; Gong, S; Wang, J; Niu, T; Sheng, K; Xie, Y

    2016-01-01

    Purpose: To estimate and remove the scatter contamination in the acquired projections of cone-beam CT (CBCT), in order to suppress shading artifacts and improve image quality without prior information. Methods: The uncorrected CBCT images containing shading artifacts are reconstructed by applying the standard FDK algorithm to the raw CBCT projections. The uncorrected image is then segmented to generate an initial template image. To estimate the scatter signal, difference signals are calculated by subtracting the simulated projections of the template image from the raw projections. Since scatter signals are dominantly continuous and low-frequency in the projection domain, they are estimated by low-pass filtering the difference signals and subtracted from the raw CBCT projections to achieve the scatter correction. Finally, the corrected CBCT image is reconstructed from the corrected projection data. Since an accurate template image is not readily segmented from the uncorrected CBCT image, the proposed scheme is iterated until the produced template no longer changes. Results: The proposed scheme is evaluated on Catphan©600 phantom data and on CBCT images acquired from a pelvis patient. The results show that shading artifacts are effectively suppressed by the proposed method. Using multi-detector CT (MDCT) images as reference, quantitative analysis was performed to measure the quality of the corrected images. Compared to images without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 50 HU and increases the spatial uniformity. Conclusion: An iterative strategy that does not rely on prior information is proposed in this work to remove the shading artifacts due to scatter contamination in the projection domain. The method is evaluated in phantom and patient studies and the results show that the image quality is remarkably improved. The proposed method is efficient and practical for addressing the poor image quality of CBCT
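
    The projection-domain loop described above can be sketched compactly. The toy below replaces FDK reconstruction and forward projection with identity maps on 1D profiles and reduces the segmentation to a two-class threshold, so it only illustrates the structure of the iteration (reconstruct, segment a template, forward project, low-pass filter the difference, subtract, repeat); all numbers are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def estimate_scatter(raw_proj, template_proj, sigma=20.0):
    """Low-frequency part of (raw - simulated template projection) is taken as scatter."""
    return gaussian_filter1d(raw_proj - template_proj, sigma)

def iterative_shading_correction(raw_proj, reconstruct, forward_project, segment, n_iter=3):
    """Structure of the iterative correction: reconstruct -> segment a template ->
    forward project -> estimate scatter -> subtract -> reconstruct again.
    reconstruct / forward_project / segment are caller-supplied placeholders."""
    corrected = raw_proj.copy()
    for _ in range(n_iter):
        image = reconstruct(corrected)
        template = segment(image)
        scatter = estimate_scatter(raw_proj, forward_project(template))
        corrected = raw_proj - scatter
    return reconstruct(corrected), scatter

# Toy 1D demo: "reconstruction" and "forward projection" are identity maps and the
# segmentation snaps values to two classes, mimicking a piecewise-uniform object.
x = np.linspace(-1, 1, 400)
primary = np.where(np.abs(x) < 0.6, 2.0, 0.2)
raw = primary + 0.5 * np.exp(-x**2 / 0.5)            # smooth scatter contamination
segment = lambda img: np.where(img > 1.0, 2.0, 0.2)  # two-class template
identity = lambda p: p
img, scatter = iterative_shading_correction(raw, identity, identity, segment)
print(float(np.abs(img - primary).mean()))           # small residual shading after correction
```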

  15. SU-D-206-04: Iterative CBCT Scatter Shading Correction Without Prior Information

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Y; Wu, P; Mao, T; Gong, S; Wang, J; Niu, T [Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang (China); Sheng, K [Department of Radiation Oncology, University of California, Los Angeles, School of Medicine, Los Angeles, CA (United States); Xie, Y [Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong (China)

    2016-06-15

    Purpose: To estimate and remove the scatter contamination in the acquired projections of cone-beam CT (CBCT), in order to suppress shading artifacts and improve image quality without prior information. Methods: The uncorrected CBCT images containing shading artifacts are reconstructed by applying the standard FDK algorithm to the raw CBCT projections. The uncorrected image is then segmented to generate an initial template image. To estimate the scatter signal, difference signals are calculated by subtracting the simulated projections of the template image from the raw projections. Since scatter signals are dominantly continuous and low-frequency in the projection domain, they are estimated by low-pass filtering the difference signals and subtracted from the raw CBCT projections to achieve the scatter correction. Finally, the corrected CBCT image is reconstructed from the corrected projection data. Since an accurate template image is not readily segmented from the uncorrected CBCT image, the proposed scheme is iterated until the produced template no longer changes. Results: The proposed scheme is evaluated on Catphan©600 phantom data and on CBCT images acquired from a pelvis patient. The results show that shading artifacts are effectively suppressed by the proposed method. Using multi-detector CT (MDCT) images as reference, quantitative analysis was performed to measure the quality of the corrected images. Compared to images without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 50 HU and increases the spatial uniformity. Conclusion: An iterative strategy that does not rely on prior information is proposed in this work to remove the shading artifacts due to scatter contamination in the projection domain. The method is evaluated in phantom and patient studies and the results show that the image quality is remarkably improved. The proposed method is efficient and practical for addressing the poor image quality of CBCT

  16. Scatter and attenuation correction in SPECT

    International Nuclear Information System (INIS)

    Ljungberg, Michael

    2004-01-01

    The absorbed dose is related to the activity uptake in the organ and its temporal distribution. The count rate measured with scintillation cameras is related to the activity through the system sensitivity, cps/MBq. By accounting for physical processes and imaging limitations we can measure the activity at different time points. Correction for physical factors, such as attenuation and scatter, is required for accurate quantitation. Both planar and SPECT imaging can be used to estimate activities for radiopharmaceutical dosimetry. Planar methods have been the most widely used but are 2D techniques. With accurate modelling of the imaging process in iterative reconstruction, SPECT methods will prove to be more accurate

  17. Use of x-ray scattering in absorption corrections for x-ray fluorescence analysis of aerosol loaded filters

    International Nuclear Information System (INIS)

    Nielson, K.K.; Garcia, S.R.

    1976-09-01

    Two methods are described for computing multielement x-ray absorption corrections for aerosol samples collected on IPC-1478 and Whatman 41 filters. The first relies on scatter peak intensities and scattering cross sections to estimate the mass of light elements (Z less than 14) in the sample. This mass is used with the measured heavy element (Z greater than or equal to 14) masses to iteratively compute sample absorption corrections. The second method utilizes a linear function of ln(μ) vs ln(E) determined from the scatter peak ratios and estimates the sample mass from the scatter peak intensities. Both methods assume a homogeneous depth distribution of aerosol in a fraction of the front of the filters, and this assumption is evaluated with respect to an exponential aerosol depth distribution. Penetration depths for various real, synthetic and liquid aerosols were measured. Aerosol penetration appeared constant over a 1.1 mg/cm² range of sample loading for IPC filters, while absorption corrections for Si and S varied by a factor of two over the same loading range. Corrections computed by the two methods were compared with measured absorption corrections and with atomic absorption analyses of the same samples

  18. A simple, direct method for x-ray scatter estimation and correction in digital radiography and cone-beam CT

    International Nuclear Information System (INIS)

    Siewerdsen, J.H.; Daly, M.J.; Bakhtiar, B.

    2006-01-01

    X-ray scatter poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in contrast reduction, image artifacts, and lack of CT number accuracy. We report the performance of a simple scatter correction method in which scatter fluence is estimated directly in each projection from pixel values near the edge of the detector behind the collimator leaves. The algorithm operates on the simple assumption that signal in the collimator shadow is attributable to x-ray scatter, and the 2D scatter fluence is estimated by interpolating between pixel values measured along the top and bottom edges of the detector behind the collimator leaves. The resulting scatter fluence estimate is subtracted from each projection to yield an estimate of the primary-only images for CBCT reconstruction. Performance was investigated in phantom experiments on an experimental CBCT benchtop, and the effect on image quality was demonstrated in patient images (head, abdomen, and pelvis sites) obtained on a preclinical system for CBCT-guided radiation therapy. The algorithm provides significant reduction in scatter artifacts without compromise in contrast-to-noise ratio (CNR). For example, in a head phantom, cupping artifact was essentially eliminated, CT number accuracy was restored to within 3%, and CNR (breast-to-water) was improved by up to 50%. Similarly in a body phantom, cupping artifact was reduced by at least a factor of 2 without loss in CNR. Patient images demonstrate significantly increased uniformity, accuracy, and contrast, with an overall improvement in image quality in all sites investigated. Qualitative evaluation illustrates that soft-tissue structures that are otherwise undetectable are clearly delineated in scatter-corrected reconstructions. Since scatter is estimated directly in each projection, the algorithm is robust with respect to system geometry, patient size and heterogeneity, patient motion, etc. Operating without prior information, analytical modeling
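
    The edge-interpolation idea lends itself to a very short sketch: pixels in the collimator shadow at the top and bottom of each projection are taken as scatter samples, and the 2D scatter fluence is obtained by interpolating between them along the detector rows. The shadow-row indices and toy projection below are assumptions, not the authors' geometry:

```python
import numpy as np

def scatter_from_collimator_shadow(proj, shadow_rows_top, shadow_rows_bottom):
    """Estimate the scatter fluence in one projection from detector rows that lie in the
    collimator shadow (where any signal is attributed to scatter), then interpolate
    linearly between the top and bottom estimates for every detector row."""
    top = proj[shadow_rows_top, :].mean(axis=0)        # scatter sampled at the top edge
    bottom = proj[shadow_rows_bottom, :].mean(axis=0)  # scatter sampled at the bottom edge
    n_rows = proj.shape[0]
    w = np.linspace(0.0, 1.0, n_rows)[:, None]         # interpolation weight per row
    return (1.0 - w) * top[None, :] + w * bottom[None, :]

def correct_projection(proj, shadow_rows_top, shadow_rows_bottom):
    scatter = scatter_from_collimator_shadow(proj, shadow_rows_top, shadow_rows_bottom)
    return np.clip(proj - scatter, 0.0, None), scatter

# Toy projection: primary signal in the open field plus a flat scatter background
proj = np.full((64, 80), 0.1)                 # scatter-only level everywhere
proj[8:56, :] += 1.0                          # rows 8..55 are the open (unblocked) field
primary, scatter = correct_projection(proj, shadow_rows_top=slice(0, 4),
                                      shadow_rows_bottom=slice(60, 64))
print(scatter.mean(), primary[8:56].mean())   # ~0.1 scatter estimate, ~1.0 corrected primary
```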

  19. Investigating the effect and photon scattering correction in isotopic scanning with gamma and SPECT

    International Nuclear Information System (INIS)

    Movafeghi, Amir

    1997-01-01

    Nowadays, medical imaging systems have become very important tools in medicine, both in diagnosis and treatment. With the fast improvement of computer science over the last three decades, three-dimensional (tomographic) imaging systems have been developed for daily applications. Among the different methods, X-ray computed tomography, magnetic resonance imaging, single photon emission computed tomography (SPECT) and positron emission tomography (PET) have found many clinical applications. SPECT and PET imaging systems work by detecting photons emitted from special radioisotopes. In these two systems, the image is reconstructed from the distribution of the radioisotope in the organs of the human body. In SPECT, the accuracy of data quantification for image reconstruction is influenced by photon attenuation, photon scattering, statistical noise and the variation of detector response with distance. Except for scattering, the other three factors can be modelled and compensated with relatively simple models. Photon scattering is a complex process, and usually semi-empirical methods are used for its modelling. The effect of scattered photons on images was considered in both laboratory and clinical cases. The radioisotopes used were 192Ir and 99mTc. 192Ir is a solid source with a half-life of 73 days and is used in industrial radiography applications. At the beginning, models and methods were established with the help of 192Ir; at the final stage, they were extended for use with 99mTc. There are different methods for correcting the error due to scattered photons. A method from the 'window subtraction' group was developed for the laboratory cases; generally, in this type of method, scattered photons are subtracted from the original counts using a window adjacent to the photopeak window. A Monte Carlo simulation was used for better evaluation of the results. In the clinical section, a dual-head SPECT system was used (ADAC system of Shariati Hospital in Tehran). The

  20. Virtual two-loop corrections to Bhabha scattering

    International Nuclear Information System (INIS)

    Bjoerkevoll, K.S.

    1992-03-01

    The author has developed methods for the calculation of contributions from six ladder-like diagrams to Bhabha scattering. The leading terms both for separate diagrams and for the sum of the gauge-invariant set of all diagrams have been calculated. The study has been limited to contributions from Feynman diagrams without real photons, and all calculations have been done with s ≫ |t| ≫ m², where s is the center-of-mass energy squared, t is the square of the transferred four-momentum, and m is the electron mass. For the separate diagrams the results depend upon how λ² is related to s, |t| and m², whereas the leading term of the sum of the six diagrams is the same in the cases that have been considered. The methods described should be valuable for calculations of contributions from other Feynman diagrams, in particular QED corrections to Bhabha scattering or pair production at small angles. 23 refs., 5 figs., 5 tabs

  1. TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Y [UT Southwestern Medical Center, Dallas, TX (United States); Southern Medical University, Guangzhou (China); Bai, T [UT Southwestern Medical Center, Dallas, TX (United States); Xi' an Jiaotong University, Xi' an (China); Yan, H; Ouyang, L; Wang, J; Pompos, A; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States); Zhou, L [Southern Medical University, Guangzhou (China)

    2014-06-15

    Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, automatically within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using the raw projection data; 2) rigid registration of the planning CT to the FDK result; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to the other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using a GPU to accelerate the MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce the computation time. A novel denoising algorithm is used to eliminate the MC scatter noise caused by the low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We studied the impact of the number of photon histories and the volume down-sampling factor on the accuracy of the scatter estimation. Fourier analysis showed that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time was 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds. This study is supported in part by NIH (1R01CA154747-01), The Core Technology Research
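
    Step 4 of the workflow above (interpolating sparse-angle Monte Carlo scatter maps to all projection angles) reduces to a per-pixel interpolation in angle, as sketched below; the Gaussian smoothing stands in for the paper's denoising step and all sizes and angles are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def interpolate_scatter(sparse_angles_deg, sparse_scatter, all_angles_deg, denoise_sigma=2.0):
    """sparse_scatter: array (n_sparse, H, W) of MC scatter maps at sparse_angles_deg.
    Returns scatter maps at all_angles_deg via per-pixel linear interpolation in angle."""
    denoised = np.stack([gaussian_filter(s, denoise_sigma) for s in sparse_scatter])
    n_all = len(all_angles_deg)
    h, w = sparse_scatter.shape[1:]
    flat = denoised.reshape(len(sparse_angles_deg), -1)   # (n_sparse, H*W)
    out_flat = np.empty((n_all, flat.shape[1]))
    for j in range(flat.shape[1]):                        # interpolate each pixel over angle
        out_flat[:, j] = np.interp(all_angles_deg, sparse_angles_deg, flat[:, j])
    return out_flat.reshape(n_all, h, w)

# Toy example: scatter simulated at 31 angles, interpolated to 360 projection angles
sparse_angles = np.linspace(0, 360, 31)
sparse_maps = 0.2 + 0.05 * np.sin(np.deg2rad(sparse_angles))[:, None, None] * np.ones((31, 16, 16))
all_angles = np.arange(0, 360, 1.0)
full_maps = interpolate_scatter(sparse_angles, sparse_maps, all_angles)
print(full_maps.shape)  # (360, 16, 16)
```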

  2. Novel scatter compensation with energy and spatial dependent corrections in positron emission tomography

    International Nuclear Information System (INIS)

    Guerin, Bastien

    2010-01-01

    We developed and validated a fast Monte Carlo simulation of PET acquisitions based on the SimSET program, accurately modeling the propagation of gamma photons in the patient as well as in the block-based PET detector. Comparison of our simulation with another well-validated code, GATE, and with measurements on two GE Discovery ST PET scanners showed that it accurately models energy spectra (errors smaller than 4.6%), the spatial resolution of block-based PET scanners (6.1%), scatter fraction (3.5%), sensitivity (2.3%) and count rates (12.7%). Next, we developed a novel scatter correction incorporating the energy and position of photons detected in list-mode. Our approach is based on a reformulation of the list-mode likelihood function that contains the energy distribution of detected coincidences in addition to their spatial distribution, yielding an EM reconstruction algorithm containing spatially and energy dependent correction terms. We also proposed using the energy in addition to the position of gamma photons in the normalization of the scatter sinogram. Finally, we developed a method for estimating the primary and scatter photon energy spectra from the total spectra detected in different sectors of the PET scanner. We evaluated the accuracy and precision of our new spatio-spectral scatter correction and that of the standard spatial correction using realistic Monte Carlo simulations. The results showed that incorporating the energy in the scatter correction reduces bias in the estimation of the absolute activity level by ∼60% in the cold regions of the largest patients and yields quantification errors of less than 13% in all regions. (author)

  3. Scatter and crosstalk corrections for 99mTc/123I dual-radionuclide imaging using a CZT SPECT system with pinhole collimators

    International Nuclear Information System (INIS)

    Fan, Peng; Hutton, Brian F.; Holstensson, Maria; Ljungberg, Michael; Hendrik Pretorius, P.; Prasad, Rameshwar; Liu, Chi; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Thorn, Stephanie L.; Stacy, Mitchel R.; Sinusas, Albert J.

    2015-01-01

    Purpose: The energy spectrum for a cadmium zinc telluride (CZT) detector has a low energy tail due to incomplete charge collection and intercrystal scattering. Due to these solid-state detector effects, scatter would be overestimated if the conventional triple-energy window (TEW) method is used for scatter and crosstalk corrections in CZT-based imaging systems. The objective of this work is to develop a scatter and crosstalk correction method for 99m Tc/ 123 I dual-radionuclide imaging for a CZT-based dedicated cardiac SPECT system with pinhole collimators (GE Discovery NM 530c/570c). Methods: A tailing model was developed to account for the low energy tail effects of the CZT detector. The parameters of the model were obtained using 99m Tc and 123 I point source measurements. A scatter model was defined to characterize the relationship between down-scatter and self-scatter projections. The parameters for this model were obtained from Monte Carlo simulation using SIMIND. The tailing and scatter models were further incorporated into a projection count model, and the primary and self-scatter projections of each radionuclide were determined with a maximum likelihood expectation maximization (MLEM) iterative estimation approach. The extracted scatter and crosstalk projections were then incorporated into MLEM image reconstruction as an additive term in forward projection to obtain scatter- and crosstalk-corrected images. The proposed method was validated using Monte Carlo simulation, line source experiment, anthropomorphic torso phantom studies, and patient studies. The performance of the proposed method was also compared to that obtained with the conventional TEW method. Results: Monte Carlo simulations and line source experiment demonstrated that the TEW method overestimated scatter while their proposed method provided more accurate scatter estimation by considering the low energy tail effect. In the phantom study, improved defect contrasts were observed with both

  4. Non-eikonal corrections for the scattering of spin-one particles

    Energy Technology Data Exchange (ETDEWEB)

    Gaber, M.W.; Wilkin, C. [Department of Physics and Astronomy, University College London, WC1E 6BT, London (United Kingdom); Al-Khalili, J.S. [Department of Physics, University of Surrey, GU2 7XH, Guildford, Surrey (United Kingdom)

    2004-08-01

    The Wallace Fourier-Bessel expansion of the scattering amplitude is generalised to the case of the scattering of a spin-one particle from a potential with a single tensor coupling as well as central and spin-orbit terms. A generating function for the eikonal-phase (quantum) corrections is evaluated in closed form. For medium-energy deuteron-nucleus scattering, the first-order correction is dominant and is shown to be significant in the interpretation of analysing power measurements. This conclusion is supported by a numerical comparison of the eikonal observables, evaluated with and without corrections, with those obtained from a numerical resolution of the Schroedinger equation for d-{sup 58}Ni scattering at incident deuteron energies of 400 and 700 MeV. (orig.)

  5. Corrections to the large-angle scattering amplitude

    International Nuclear Information System (INIS)

    Goloskokov, S.V.; Kudinov, A.V.; Kuleshov, S.P.

    1979-01-01

    The high-energy behaviour of scattering amplitudes is considered within the framework of the Logunov-Tavchelidze quasipotential approach. A representation of the scattering amplitude of two scalar particles, convenient for the study of its asymptotic properties, is given. Corrections to the leading term of the scattering amplitude of first and second order in 1/p are obtained, where p is the momentum of the colliding particles in the centre-of-mass system. An example of the use of the obtained formulas for a concrete quasipotential is given

  6. Radiative corrections to neutrino deep inelastic scattering revisited

    International Nuclear Information System (INIS)

    Arbuzov, Andrej B.; Bardin, Dmitry Yu.; Kalinovskaya, Lidia V.

    2005-01-01

    Radiative corrections to neutrino deep inelastic scattering are revisited. One-loop electroweak corrections are re-calculated within the automatic SANC system. Terms with mass singularities are treated including higher order leading logarithmic corrections. Scheme dependence of corrections due to weak interactions is investigated. The results are implemented into the data analysis of the NOMAD experiment. The present theoretical accuracy in description of the process is discussed

  7. Determining the water content in concrete by gamma scattering method

    International Nuclear Information System (INIS)

    Priyada, P.; Ramar, R.; Shivaramu

    2014-01-01

    Highlights: • Gamma scattering technique for estimation of water content in concrete is given. • The scattered intensity increases with the volumetric water content. • Attenuation correction is provided to the scattered intensities. • Volumetric water content of … The measurements employ a 137Cs radioactive source and a high-resolution HPGe detector based energy-dispersive gamma-ray spectrometer. Concrete samples of uniform density ≈2.4 g/cm³ were chosen for the study, and the scattered intensities were found to vary with the amount of water present in the specimen. The scattered intensities are corrected for attenuation effects, and the results obtained with reference to a dry sample are compared with those obtained by gravimetric and gamma transmission methods. Good agreement is seen between the gamma scattering results and those obtained by the gravimetric and transmission methods within an accuracy of 6%, and changes in water content of <2% can be detected

  8. Evaluation of attenuation and scatter correction requirements in small animal PET and SPECT imaging

    Science.gov (United States)

    Konik, Arda Bekir

    Positron emission tomography (PET) and single photon emission tomography (SPECT) are two nuclear emission-imaging modalities that rely on the detection of high-energy photons emitted from radiotracers administered to the subject. The majority of these photons are attenuated (absorbed or scattered) in the body, resulting in count losses or deviations from true detection, which in turn degrades the accuracy of images. In clinical emission tomography, sophisticated correction methods are often required employing additional x-ray CT or radionuclide transmission scans. Having proven their potential in both clinical and research areas, both PET and SPECT are being adapted for small animal imaging. However, despite the growing interest in small animal emission tomography, little scientific information exists about the accuracy of these correction methods on smaller size objects, and what level of correction is required. The purpose of this work is to determine the role of attenuation and scatter corrections as a function of object size through simulations. The simulations were performed using Interactive Data Language (IDL) and a Monte Carlo based package, Geant4 application for emission tomography (GATE). In IDL simulations, PET and SPECT data acquisition were modeled in the presence of attenuation. A mathematical emission and attenuation phantom approximating a thorax slice and slices from real PET/CT data were scaled to 5 different sizes (i.e., human, dog, rabbit, rat and mouse). The simulated emission data collected from these objects were reconstructed. The reconstructed images, with and without attenuation correction, were compared to the ideal (i.e., non-attenuated) reconstruction. Next, using GATE, scatter fraction values (the ratio of the scatter counts to the total counts) of PET and SPECT scanners were measured for various sizes of NEMA (cylindrical phantoms representing small animals and human), MOBY (realistic mouse/rat model) and XCAT (realistic human model

  9. A Monte Carlo evaluation of analytical multiple scattering corrections for unpolarised neutron scattering and polarisation analysis data

    International Nuclear Information System (INIS)

    Mayers, J.; Cywinski, R.

    1985-03-01

    Some of the approximations commonly used for the analytical estimation of multiple scattering corrections to thermal neutron elastic scattering data from cylindrical and plane slab samples have been tested using a Monte Carlo program. It is shown that the approximations are accurate for a wide range of sample geometries and scattering cross-sections. Neutron polarisation analysis provides the most stringent test of multiple scattering calculations as multiply scattered neutrons may be redistributed not only geometrically but also between the spin flip and non spin flip scattering channels. A very simple analytical technique for correcting for multiple scattering in neutron polarisation analysis has been tested using the Monte Carlo program and has been shown to work remarkably well in most circumstances. (author)

  10. A library least-squares approach for scatter correction in gamma-ray tomography

    Science.gov (United States)

    Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro

    2015-03-01

    Scattered radiation is known to lead to distortion in reconstructed images in Computed Tomography (CT). The effects of scattered radiation are especially more pronounced in non-scanning, multiple source systems which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for this. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of transmission and scatter components in any given detector in the UoB gamma-ray tomography system.
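
    A library least-squares decomposition of this kind can be sketched with non-negative least squares: the measured detector spectrum is modeled as a non-negative combination of pre-recorded transmission and scatter library spectra, and the fitted coefficients give the two contributions. The spectra below are synthetic placeholders, not measured libraries:

```python
import numpy as np
from scipy.optimize import nnls

def lls_decompose(measured_spectrum, library):
    """library: dict name -> reference spectrum (same binning as the measurement).
    Returns the non-negative coefficient fitted for each library component."""
    names = list(library)
    A = np.column_stack([library[n] for n in names])   # (n_bins, n_components)
    coeffs, residual = nnls(A, measured_spectrum)
    return dict(zip(names, coeffs)), residual

# Synthetic 1-keV-binned library spectra (placeholders, not measured data)
e = np.arange(0, 700.0)
transmission = np.exp(-0.5 * ((e - 662.0) / 15.0) ** 2)   # full-energy (transmission) peak
scatter = np.exp(-e / 250.0) * (e < 600)                  # broad down-scattered continuum
measured = 0.7 * transmission + 0.3 * scatter + 0.01 * np.random.default_rng(0).random(e.size)

coeffs, _ = lls_decompose(measured, {"transmission": transmission, "scatter": scatter})
print(coeffs)   # roughly recovers the 0.7 / 0.3 mixture
```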

  11. Compton scatter and randoms corrections for origin ensembles 3D PET reconstructions

    Energy Technology Data Exchange (ETDEWEB)

    Sitek, Arkadiusz [Harvard Medical School, Boston, MA (United States). Dept. of Radiology; Brigham and Women' s Hospital, Boston, MA (United States); Kadrmas, Dan J. [Utah Univ., Salt Lake City, UT (United States). Utah Center for Advanced Imaging Research (UCAIR)

    2011-07-01

    In this work we develop a novel approach to the correction for scatter and randoms in the reconstruction of data acquired by 3D positron emission tomography (PET), applicable to tomographic reconstruction performed with the origin ensemble (OE) approach. Statistical image reconstruction using OE is based on the calculation of the expectation of the number of emitted events per voxel over the complete-data space. Since OE estimation is fundamentally different from regular statistical estimators, such as those based on maximum likelihood, the standard implementations of scatter and randoms corrections cannot be used. Based on the prompt, scatter, and random rates, each detected event is graded in terms of its probability of being a true event. These grades are utilized by the Markov chain Monte Carlo (MCMC) algorithm used in the OE approach for the calculation of the expectation, over the complete-data space, of the number of emitted events per voxel (the OE estimator). We show that the results obtained with OE are almost identical to results obtained with the maximum likelihood-expectation maximization (ML-EM) algorithm for the reconstruction of experimental phantom data acquired using a Siemens Biograph mCT 3D PET/CT scanner. The developed correction removes artifacts due to scatter and randoms in the investigated 3D PET datasets. (orig.)
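
    The event-grading step can be illustrated schematically: for each line of response, the probability that a detected prompt is a true coincidence is approximated from the prompt, random and scatter rates for that LOR. This is only an illustration of the grading idea, not the authors' MCMC algorithm:

```python
import numpy as np

def true_event_probability(prompts, randoms, scatters):
    """Per-LOR probability that a detected prompt is a true coincidence,
    p_true = max(prompts - randoms - scatters, 0) / prompts."""
    prompts = np.asarray(prompts, dtype=float)
    trues = np.clip(prompts - randoms - scatters, 0.0, None)
    return np.divide(trues, prompts, out=np.zeros_like(prompts), where=prompts > 0)

# Example: three LORs with different contamination levels
prompts = np.array([100.0, 40.0, 5.0])
randoms = np.array([10.0, 20.0, 2.0])
scatters = np.array([15.0, 15.0, 4.0])
print(true_event_probability(prompts, randoms, scatters))  # [0.75, 0.125, 0.0]

# An OE-style sampler could then treat an event on LOR i as a "true" emission
# with probability p[i] when proposing moves in the complete-data (origin) space.
```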

  12. Scatter and crosstalk corrections for {sup 99m}Tc/{sup 123}I dual-radionuclide imaging using a CZT SPECT system with pinhole collimators

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Peng [Department of Diagnostic Radiology, Yale University, New Haven, Connecticut 06520 and Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Hutton, Brian F. [Institute of Nuclear Medicine, University College London, London WC1E 6BT, United Kingdom and Centre for Medical Radiation Physics, University of Wollongong, New South Wales 2522 (Australia); Holstensson, Maria [Department of Nuclear Medicine, Karolinska University Hospital, Stockholm 14186 (Sweden); Ljungberg, Michael [Department of Medical Radiation Physics, Lund University, Lund 222 41 (Sweden); Hendrik Pretorius, P. [Department of Radiology, University of Massachusetts Medical School, Worcester, Massachusetts 01655 (United States); Prasad, Rameshwar; Liu, Chi, E-mail: chi.liu@yale.edu [Department of Diagnostic Radiology, Yale University, New Haven, Connecticut 06520 (United States); Ma, Tianyu; Liu, Yaqiang; Wang, Shi [Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Thorn, Stephanie L.; Stacy, Mitchel R.; Sinusas, Albert J. [Department of Internal Medicine, Yale Translational Research Imaging Center, Yale University, New Haven, Connecticut 06520 (United States)

    2015-12-15

    Purpose: The energy spectrum for a cadmium zinc telluride (CZT) detector has a low energy tail due to incomplete charge collection and intercrystal scattering. Due to these solid-state detector effects, scatter would be overestimated if the conventional triple-energy window (TEW) method is used for scatter and crosstalk corrections in CZT-based imaging systems. The objective of this work is to develop a scatter and crosstalk correction method for {sup 99m}Tc/{sup 123}I dual-radionuclide imaging for a CZT-based dedicated cardiac SPECT system with pinhole collimators (GE Discovery NM 530c/570c). Methods: A tailing model was developed to account for the low energy tail effects of the CZT detector. The parameters of the model were obtained using {sup 99m}Tc and {sup 123}I point source measurements. A scatter model was defined to characterize the relationship between down-scatter and self-scatter projections. The parameters for this model were obtained from Monte Carlo simulation using SIMIND. The tailing and scatter models were further incorporated into a projection count model, and the primary and self-scatter projections of each radionuclide were determined with a maximum likelihood expectation maximization (MLEM) iterative estimation approach. The extracted scatter and crosstalk projections were then incorporated into MLEM image reconstruction as an additive term in forward projection to obtain scatter- and crosstalk-corrected images. The proposed method was validated using Monte Carlo simulation, line source experiment, anthropomorphic torso phantom studies, and patient studies. The performance of the proposed method was also compared to that obtained with the conventional TEW method. Results: Monte Carlo simulations and line source experiment demonstrated that the TEW method overestimated scatter while their proposed method provided more accurate scatter estimation by considering the low energy tail effect. In the phantom study, improved defect contrasts were

  13. Clinical value of scatter correction for interictal brain 99m Tc-HMPAO SPECT in mesial temporal lobe epilepsy

    International Nuclear Information System (INIS)

    Sanchez Catasus, C.; Morales, L.; Aguila, A.

    2002-01-01

    Aim: It is well known that some patients with temporal lobe epilepsy (TLE) show normal perfusion in interictal SPECT studies. The aim of this research was to evaluate whether scattered radiation has some influence on this kind of result. Materials and Methods: We studied 15 patients with TLE established by clinical diagnosis and by video-EEG monitoring with surface electrodes (11 left TLE, 4 right TLE), all of whom showed normal perfusion on interictal brain 99mTc-HMPAO SPECT. The SPECT data were reconstructed by filtered backprojection without scatter correction (A). The same SPECT data were reconstructed after the projections were corrected with the dual-energy-window method of scatter correction (B). Attenuation was corrected in all cases using the first-order Chang method. For the A and B image groups, cerebellum perfusion ratios were calculated from irregular regions of interest (ROIs) drawn on the anterior (ATL), lateral (LTL), mesial (MTL) and whole temporal lobe (WTL). To evaluate the influence of scattered radiation, the cerebellum perfusion ratios of each subject were compared with a database of 10 normal subjects, with and without scatter correction, using z-score analysis. Results: In group A, the z-score was less than 2 in all cases. In group B, the z-score was more than 2 in 6 cases, 4 in the MTL (3 left, 1 right) and 2 in the left LTL, which were coincident with the EEG localization. All images in group B showed better contrast than the images in group A. Conclusions: These results suggest that scatter correction could improve the sensitivity of interictal brain SPECT for identifying the epileptic focus in patients with TLE
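
    The dual-energy-window correction used for image set B has a simple generic form: counts in a secondary window below the photopeak are scaled by a factor k and subtracted from the photopeak window, pixel by pixel, before reconstruction. The sketch below uses k = 0.5, a commonly quoted value; in practice k must be calibrated for the particular camera and window settings:

```python
import numpy as np

def dew_scatter_correction(photopeak_proj, lower_window_proj, k=0.5):
    """Dual-energy-window correction: primary ≈ photopeak − k · lower-window counts.
    k = 0.5 is a commonly quoted default; it should be calibrated for a given system."""
    primary = photopeak_proj - k * lower_window_proj
    return np.clip(primary, 0.0, None)

# Toy projection pixels: the lower window records roughly twice the scatter that
# falls inside the photopeak window (hence k ≈ 0.5 in this illustration).
photopeak = np.array([1000.0, 400.0, 150.0])
lower_window = np.array([600.0, 240.0, 90.0])
print(dew_scatter_correction(photopeak, lower_window))  # [700. 280. 105.]
```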

  14. Study of radiative corrections with application to the electron-neutrino scattering

    International Nuclear Information System (INIS)

    Oliveira, L.C.S. de.

    1977-01-01

    The radiative correction method that appears in quantum field theory is studied for some weak interaction processes, e.g., beta decay and muon decay. The method is then applied to calculate the transition probability for electron-neutrino scattering, using the V-A theory as a basis. The calculation of infrared and ultraviolet divergences is also discussed. (L.C.) [pt

  15. A fast and pragmatic approach for scatter correction in flat-detector CT using elliptic modeling and iterative optimization

    Science.gov (United States)

    Meyer, Michael; Kalender, Willi A.; Kyriakou, Yiannis

    2010-01-01

    Scattered radiation is a major source of artifacts in flat detector computed tomography (FDCT) due to the increased irradiated volumes. We propose a fast projection-based algorithm for correction of scatter artifacts. The presented algorithm combines a convolution method to determine the spatial distribution of the scatter intensity distribution with an object-size-dependent scaling of the scatter intensity distributions using a priori information generated by Monte Carlo simulations. A projection-based (PBSE) and an image-based (IBSE) strategy for size estimation of the scanned object are presented. Both strategies provide good correction and comparable results; the faster PBSE strategy is recommended. Even with such a fast and simple algorithm that in the PBSE variant does not rely on reconstructed volumes or scatter measurements, it is possible to provide a reasonable scatter correction even for truncated scans. For both simulations and measurements, scatter artifacts were significantly reduced and the algorithm showed stable behavior in the z-direction. For simulated voxelized head, hip and thorax phantoms, a figure of merit Q of 0.82, 0.76 and 0.77 was reached, respectively (Q = 0 for uncorrected, Q = 1 for ideal). For a water phantom with 15 cm diameter, for example, a cupping reduction from 10.8% down to 2.1% was achieved. The performance of the correction method has limitations in the case of measurements using non-ideal detectors, intensity calibration, etc. An iterative approach to overcome most of these limitations was proposed. This approach is based on root finding of a cupping metric and may be useful for other scatter correction methods as well. By this optimization, cupping of the measured water phantom was further reduced down to 0.9%. The algorithm was evaluated on a commercial system including truncated and non-homogeneous clinically relevant objects.
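
    A minimal sketch of the convolution-plus-scaling idea: convolve the projection with a broad kernel to obtain the shape of the scatter distribution, then scale it with an object-size-dependent factor before subtraction. The kernel width and the size-to-scatter lookup below are made-up placeholders, not the published parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def scatter_estimate(projection, object_size_cm, kernel_sigma_px=25.0):
    """Convolve the projection with a broad Gaussian to model the scatter shape and
    scale it with an object-size-dependent factor (placeholder lookup table)."""
    size_pts = np.array([5.0, 15.0, 30.0, 45.0])   # object diameters (cm), assumed
    spr_pts = np.array([0.05, 0.25, 0.60, 1.00])   # assumed scatter-to-primary ratios
    spr = np.interp(object_size_cm, size_pts, spr_pts)
    shape = gaussian_filter1d(projection, kernel_sigma_px)
    return spr * shape

def correct(projection, object_size_cm):
    return np.clip(projection - scatter_estimate(projection, object_size_cm), 0.0, None)

# Toy 1D projection of a 15 cm object
x = np.linspace(-1, 1, 512)
primary = np.where(np.abs(x) < 0.5, 0.8, 1.0)           # attenuated centre, open field edges
measured = primary + scatter_estimate(primary, 15.0)    # add the modelled scatter back in
print(np.abs(correct(measured, 15.0) - primary).max())  # residual is on the order of spr**2
```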

  16. Clinical usefulness of scatter and attenuation correction for brain single photon emission computed tomography (SPECT) in pediatrics

    International Nuclear Information System (INIS)

    Adachi, Itaru; Doi, Kenji; Komori, Tsuyoshi; Hou, Nobuyoshi; Tabuchi, Koujirou; Matsui, Ritsuo; Sueyoshi, Kouzou; Utsunomiya, Keita; Narabayashi, Isamu

    1998-01-01

    This investigation was undertaken to study the clinical usefulness of scatter and attenuation correction (SAC) for brain SPECT in infants, compared with the standard reconstruction (STD). Brain SPECT was performed in 31 patients (19 epilepsy, 5 cerebrovascular disease, 2 brain tumor, 3 meningitis, 1 hydrocephalus and 1 psychosis; mean age 5.0±4.9 years). Many patients needed to be given sedatives to restrain body motion after technetium-99m hexamethylpropylene amine oxime (99mTc-HMPAO) was injected at the time of convulsion or at rest. Brain SPECT data were acquired with a triple-detector gamma camera (GCA-9300, Toshiba, Japan). The data were reconstructed by filtered backprojection after the raw data were corrected with the triple-energy-window method of scatter correction and the Chang method of attenuation correction. The same data were also reconstructed by filtered backprojection without these corrections. Both the SAC and STD SPECT images were analyzed by visual interpretation. The uptake ratio of the cerebral basal nuclei was calculated as the counts of the thalamus or lenticular nuclei divided by those of the cortex. All images obtained with the SAC method were better than those obtained with the STD method. The thalamic uptake ratio with the SAC method was higher than that with the STD method (1.22±0.09 > 0.87±0.22, p<0.01), as was the lenticular nuclear uptake ratio (1.26±0.15 > 1.02±0.16, p<0.01). A transmission scan is the most suitable method of absorption correction, but it is not adequate for examinations of children, because the scan takes a long time and the infants are exposed to the line-source radioisotope. It was concluded that these scatter and absorption corrections are the most suitable methods for brain SPECT in pediatrics. (author)
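
    The triple-energy-window correction mentioned above has a standard closed form: the scatter inside the photopeak window is estimated by trapezoidal interpolation from two narrow sub-windows on either side of the peak and subtracted pixel by pixel. A minimal sketch (the window widths are typical choices, not values taken from this study):

```python
import numpy as np

def tew_scatter_correction(c_peak, c_left, c_right, w_peak=28.0, w_sub=3.5):
    """Triple-energy-window estimate of scatter inside the photopeak window:
    scatter = (c_left / w_sub + c_right / w_sub) * w_peak / 2, subtracted from c_peak."""
    scatter = (c_left / w_sub + c_right / w_sub) * w_peak / 2.0
    return np.clip(c_peak - scatter, 0.0, None), scatter

# Example pixel: a 28 keV wide photopeak window with two 3.5 keV sub-windows
c_peak = np.array([1200.0])
c_left, c_right = np.array([35.0]), np.array([10.0])
primary, scatter = tew_scatter_correction(c_peak, c_left, c_right)
print(primary, scatter)   # ~180 scatter counts removed, leaving ~1020 primary counts
```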

  17. Proton dose calculation on scatter-corrected CBCT image: Feasibility study for adaptive proton therapy

    Energy Technology Data Exchange (ETDEWEB)

    Park, Yang-Kyun, E-mail: ykpark@mgh.harvard.edu; Sharp, Gregory C.; Phillips, Justin; Winey, Brian A. [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States)

    2015-08-15

    Purpose: To demonstrate the feasibility of proton dose calculation on scatter-corrected cone-beam computed tomographic (CBCT) images for the purpose of adaptive proton therapy. Methods: CBCT projection images were acquired from anthropomorphic phantoms and a prostate patient using an on-board imaging system of an Elekta infinity linear accelerator. Two previously introduced techniques were used to correct the scattered x-rays in the raw projection images: uniform scatter correction (CBCT{sub us}) and a priori CT-based scatter correction (CBCT{sub ap}). CBCT images were reconstructed using a standard FDK algorithm and GPU-based reconstruction toolkit. Soft tissue ROI-based HU shifting was used to improve HU accuracy of the uncorrected CBCT images and CBCT{sub us}, while no HU change was applied to the CBCT{sub ap}. The degree of equivalence of the corrected CBCT images with respect to the reference CT image (CT{sub ref}) was evaluated by using angular profiles of water equivalent path length (WEPL) and passively scattered proton treatment plans. The CBCT{sub ap} was further evaluated in more realistic scenarios such as rectal filling and weight loss to assess the effect of mismatched prior information on the corrected images. Results: The uncorrected CBCT and CBCT{sub us} images demonstrated substantial WEPL discrepancies (7.3 ± 5.3 mm and 11.1 ± 6.6 mm, respectively) with respect to the CT{sub ref}, while the CBCT{sub ap} images showed substantially reduced WEPL errors (2.4 ± 2.0 mm). Similarly, the CBCT{sub ap}-based treatment plans demonstrated a high pass rate (96.0% ± 2.5% in 2 mm/2% criteria) in a 3D gamma analysis. Conclusions: A priori CT-based scatter correction technique was shown to be promising for adaptive proton therapy, as it achieved equivalent proton dose distributions and water equivalent path lengths compared to those of a reference CT in a selection of anthropomorphic phantoms.

  18. Clinical usefulness of scatter and attenuation correction for brain single photon emission computed tomography (SPECT) in pediatrics

    Energy Technology Data Exchange (ETDEWEB)

    Adachi, Itaru; Doi, Kenji; Komori, Tsuyoshi; Hou, Nobuyoshi; Tabuchi, Koujirou; Matsui, Ritsuo; Sueyoshi, Kouzou; Utsunomiya, Keita; Narabayashi, Isamu [Osaka Medical Coll., Takatsuki (Japan)

    1998-01-01

    This investigation was undertaken to study the clinical usefulness of scatter and attenuation correction (SAC) for brain SPECT in infants, compared with the standard reconstruction (STD). Brain SPECT was performed in 31 patients (19 epilepsy, 5 cerebrovascular disease, 2 brain tumor, 3 meningitis, 1 hydrocephalus and 1 psychosis; mean age 5.0{+-}4.9 years). Many patients needed to be given sedatives to restrain body motion after technetium-99m hexamethylpropylene amine oxime ({sup 99m}Tc-HMPAO) was injected at the time of convulsion or at rest. Brain SPECT data were acquired with a triple-detector gamma camera (GCA-9300, Toshiba, Japan). The data were reconstructed by filtered backprojection after the raw data were corrected with the triple-energy-window method of scatter correction and the Chang method of attenuation correction. The same data were also reconstructed by filtered backprojection without these corrections. Both the SAC and STD SPECT images were analyzed by visual interpretation. The uptake ratio of the cerebral basal nuclei was calculated as the counts of the thalamus or lenticular nuclei divided by those of the cortex. All images obtained with the SAC method were better than those obtained with the STD method. The thalamic uptake ratio with the SAC method was higher than that with the STD method (1.22{+-}0.09 > 0.87{+-}0.22, p<0.01), as was the lenticular nuclear uptake ratio (1.26{+-}0.15 > 1.02{+-}0.16, p<0.01). A transmission scan is the most suitable method of absorption correction, but it is not adequate for examinations of children, because the scan takes a long time and the infants are exposed to the line-source radioisotope. It was concluded that these scatter and absorption corrections are the most suitable methods for brain SPECT in pediatrics. (author)

  19. Binding and Pauli principle corrections in subthreshold pion-nucleus scattering

    International Nuclear Information System (INIS)

    Kam, J. de

    1981-01-01

    In this investigation I develop a three-body model for the single-scattering optical potential in which the nucleon binding and the Pauli principle are accounted for. A unitarity pole approximation is used for the nucleon-core interaction. Calculations are presented for the π-4He elastic scattering cross sections at energies below the inelastic threshold and for the real part of the π-4He scattering length by solving the three-body equations. Off-shell kinematics and the Pauli principle are carefully taken into account. The binding correction and the Pauli principle correction each have an important effect on the differential cross sections and the scattering length. However, large cancellations occur between these two effects. I find an increase in the π-4He scattering length by 100%, an increase in the cross sections by 20-30%, and a shift of the minimum in π⁻-4He scattering to forward angles by 10°. (orig.)

  20. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    Science.gov (United States)

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  1. Simulation tools for scattering corrections in spectrally resolved x-ray computed tomography using McXtrace

    Science.gov (United States)

    Busi, Matteo; Olsen, Ulrik L.; Knudsen, Erik B.; Frisvad, Jeppe R.; Kehres, Jan; Dreier, Erik S.; Khalil, Mohamad; Haldrup, Kristoffer

    2018-03-01

    Spectral computed tomography is an emerging imaging method that makes use of recently developed energy-discriminating photon-counting detectors (PCDs). This technique enables measurements in isolated high-energy ranges, in which the dominant interaction between the x-rays and the sample is incoherent scattering. The scattered radiation causes a loss of contrast in the results, and its correction has proven to be a complex problem due to its dependence on energy, material composition, and geometry. Monte Carlo simulations can utilize a physical model to estimate the scattering contribution to the signal, at the cost of high computational time. We present a fast Monte Carlo simulation tool, based on McXtrace, to predict the energy-resolved radiation being scattered and absorbed by objects of complex shapes. We validate the tool through measurements using a CdTe single PCD (Multix ME-100) and use it for scattering correction in a simulation of a spectral CT. We found the correction to account for up to 7% relative amplification in the reconstructed linear attenuation. It is a useful tool for x-ray CT to obtain a more accurate material discrimination, especially in the high-energy range, where incoherent scattering interactions become prevalent (>50 keV).

  2. Variational methods in electron-atom scattering theory

    CERN Document Server

    Nesbet, Robert K

    1980-01-01

    The investigation of scattering phenomena is a major theme of modern physics. A scattered particle provides a dynamical probe of the target system. The practical problem of interest here is the scattering of a low-energy electron by an N-electron atom. It has been difficult in this area of study to achieve theoretical results that are even qualitatively correct, yet quantitative accuracy is often needed as an adjunct to experiment. The present book describes a quantitative theoretical method, or class of methods, that has been applied effectively to this problem. Quantum mechanical theory relevant to the scattering of an electron by an N-electron atom, which may gain or lose energy in the process, is summarized in Chapter 1. The variational theory itself is presented in Chapter 2, both as currently used and in forms that may facilitate future applications. The theory of multichannel resonance and threshold effects, which provide a rich structure to observed electron-atom scattering data, is presented in Cha...

  3. An experimental study of the scatter correction by using a beam-stop-array algorithm with digital breast tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ye-Seul; Park, Hye-Suk; Kim, Hee-Joung [Yonsei University, Wonju (Korea, Republic of); Choi, Young-Wook; Choi, Jae-Gu [Korea Electrotechnology Research Institute, Ansan (Korea, Republic of)

    2014-12-15

    Digital breast tomosynthesis (DBT) is a technique that was developed to overcome the limitations of conventional digital mammography by reconstructing slices through the breast from projections acquired at different angles. In developing and optimizing DBT, x-ray scatter reduction remains a significant challenge due to projection geometry and radiation dose limitations. The most common approach to scatter reduction is a beam-stop-array (BSA) algorithm; however, this method raises concerns regarding the additional exposure involved in acquiring the scatter distribution. The compressed breast is roughly symmetric, and the scatter profiles from projections acquired at axially opposite angles are similar to mirror images. The purpose of this study was therefore to apply the BSA algorithm using only two scans with the beam-stop array, estimating the scatter distribution with minimal additional exposure. The results of the scatter correction with angular interpolation were comparable to those of the scatter correction using the scatter distributions measured at every angle, while the exposure increase was less than 13%. This study demonstrated the effect of the scatter correction obtained with the BSA algorithm at minimal exposure, which indicates its potential for practical applications.
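
    The two-scan idea exploits the near mirror symmetry of the compressed breast: a scatter distribution measured with the beam-stop array at one angle is mirrored to serve as the estimate at the opposite angle, and distributions at intermediate angles are interpolated between the two. The sketch below is schematic, with made-up angles and scatter values:

```python
import numpy as np

def scatter_at_angles(theta_meas_deg, scatter_map, proj_angles_deg):
    """scatter_map: 2D scatter distribution measured (via BSA) at +theta_meas_deg.
    The map at -theta is approximated by a left-right mirror of the measured map;
    maps at intermediate angles are linear blends of the two."""
    mirrored = scatter_map[:, ::-1]                      # estimate for -theta_meas_deg
    out = {}
    for ang in proj_angles_deg:
        t = np.clip((ang + theta_meas_deg) / (2.0 * theta_meas_deg), 0.0, 1.0)
        out[ang] = (1.0 - t) * mirrored + t * scatter_map
    return out

# Toy scatter map measured at +20 degrees, interpolated over a sweep of -20..+20 degrees
measured = np.tile(np.linspace(0.2, 0.4, 64), (64, 1))  # left-right scatter gradient
maps = scatter_at_angles(20.0, measured, proj_angles_deg=np.linspace(-20, 20, 9))
print(maps[0.0].mean(), maps[-20.0][0, 0], maps[20.0][0, 0])
```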

  4. An empirical correction for moderate multiple scattering in super-heterodyne light scattering.

    Science.gov (United States)

    Botin, Denis; Mapa, Ludmila Marotta; Schweinfurth, Holger; Sieber, Bastian; Wittenberg, Christopher; Palberg, Thomas

    2017-05-28

    Frequency domain super-heterodyne laser light scattering is utilized in a low angle integral measurement configuration to determine flow and diffusion in charged sphere suspensions showing moderate to strong multiple scattering. We introduce an empirical correction to subtract the multiple scattering background and isolate the singly scattered light. We demonstrate the excellent feasibility of this simple approach for turbid suspensions of transmittance T ≥ 0.4. We study the particle concentration dependence of the electro-kinetic mobility in low salt aqueous suspension over an extended concentration regime and observe a maximum at intermediate concentrations. We further use our scheme for measurements of the self-diffusion coefficients in the fluid samples in the absence or presence of shear, as well as in polycrystalline samples during crystallization and coarsening. We discuss the scope and limits of our approach as well as possible future applications.

  5. Prior image constrained scatter correction in cone-beam computed tomography image-guided radiation therapy.

    Science.gov (United States)

    Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong

    2011-02-21

    X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers and then reconstructing with the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
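
    A minimal Python sketch of the weight estimation step, assuming the basis images and the low-scatter prior are already reconstructed on a common grid (names are hypothetical; the paper's exact optimization may differ):

        import numpy as np

        def fit_basis_weights(basis_images, prior_image):
            """Least-squares weights so that sum_k w_k * basis_k best matches the prior.

            basis_images : list of arrays reconstructed from projection data
                           raised to different powers.
            prior_image  : low-scatter prior reconstruction on the same grid.
            """
            A = np.stack([b.ravel() for b in basis_images], axis=1)  # (voxels, K)
            y = prior_image.ravel()
            w, *_ = np.linalg.lstsq(A, y, rcond=None)
            return w

        # Corrected image as the weighted sum of the basis images:
        # corrected = sum(w_k * b_k for w_k, b_k in zip(w, basis_images))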

  6. Use of scatter correction in quantitative I-123 MIBG scintigraphy for differentiating patients with Parkinsonism: Results from Phantom experiment and clinical study

    International Nuclear Information System (INIS)

    Bai, J.; Hashimoto, J.; Suzuki, T.; Nakahara, T.; Kubo, A.; Ohira, M.; Takao, M.; Ogawa, K.

    2007-01-01

    The aims of this study were to elucidate the feasibility of scatter correction in improving the quantitative accuracy of the heart-to-mediastinum (H/M) ratio in I-123 MIBG imaging and to clarify whether the H/M ratio calculated from the scatter-corrected image improves the accuracy of differentiating patients with Parkinsonism from those with other neurological disorders. The H/M ratio was calculated using the counts from planar images processed with and without scatter correction in the phantom and in patients. The triple energy window (TEW) method was used for scatter correction. Fifty-five patients were enrolled in the clinical study. Receiver operating characteristic (ROC) curve analysis was used to evaluate diagnostic performance. The H/M ratio was found to be increased after scatter correction in the phantom simulating normal cardiac uptake, while no changes were observed in the phantom simulating no uptake. It was observed that scatter correction stabilized the H/M ratio by eliminating the influence of scatter photons originating from the liver, especially in the condition of no cardiac uptake. Similarly, scatter correction increased the H/M ratio in conditions other than Parkinson's disease but did not show any change in Parkinson's disease itself, thereby widening the differences in the H/M ratios between the two groups. The overall power of the test did not show any significant improvement after scatter correction in differentiating patients with Parkinsonism. Based on the results of this study, it has been concluded that scatter correction improves the quantitative accuracy of the H/M ratio in MIBG imaging, but it does not offer any significant incremental diagnostic value over conventional imaging (without scatter correction). Nevertheless, the scatter correction technique deserves special consideration in order to make the test more robust and obtain stable H/M ratios. (author)
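
    For reference, the TEW scatter estimate mentioned above is commonly computed per projection bin from two narrow sub-windows flanking the photopeak; a minimal Python sketch under that standard formulation (window widths and counts are placeholders):

        def tew_primary(c_main, c_left, c_right, w_main, w_left, w_right):
            """Triple-energy-window (TEW) scatter estimate for one projection bin.

            c_* are counts in the main photopeak window and the two narrow
            sub-windows; w_* are the corresponding window widths (keV).
            """
            # Trapezoidal estimate of scatter under the photopeak.
            scatter = (c_left / w_left + c_right / w_right) * w_main / 2.0
            # Primary (unscattered) counts, clipped at zero.
            return max(c_main - scatter, 0.0)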

  7. Radiative corrections to deep inelastic muon scattering

    International Nuclear Information System (INIS)

    Akhundov, A.A.; Bardin, D.Yu.; Lohman, W.

    1986-01-01

    A summary is given of the most recent results for the calculation of radiative corrections to deep inelastic muon-nucleon scattering. Contributions from leptonic electromagnetic processes up to order α⁴, vacuum polarization by leptons and hadrons, hadronic electromagnetic processes of order ~α³, and γZ interference have been taken into account. The dependence of the individual contributions on kinematical variables is studied. Contributions not considered in earlier calculations of radiative corrections reach several per cent in certain kinematical regions at energies above 100 GeV

  8. Corrections for the effects of accidental coincidences, Compton scatter, and object size in positron emission mammography (PEM) imaging

    Science.gov (United States)

    Raylman, R. R.; Majewski, S.; Wojcik, R.; Weisenberger, A. G.; Kross, B.; Popov, V.

    2001-06-01

    Positron emission mammography (PEM) has begun to show promise as an effective method for the detection of breast lesions. Due to its utilization of tumor-avid radiopharmaceuticals labeled with positron-emitting radionuclides, this technique may be especially useful in imaging of women with radiodense or fibrocystic breasts. While the use of these radiotracers affords PEM unique capabilities, it also introduces some limitations. Specifically, acceptance of accidental and Compton-scattered coincidence events can decrease lesion detectability. The authors studied the effect of accidental coincidence events on PEM images produced by the presence of ¹⁸F-fluorodeoxyglucose in the organs of a subject using an anthropomorphic phantom. A delayed-coincidence technique was tested as a method for correcting PEM images for the occurrence of accidental events. Also, a Compton scatter correction algorithm designed specifically for PEM was developed and tested using a compressed breast phantom. Finally, the effect of object size on image counts and a correction for this effect were explored. The imager used in this study consisted of two PEM detector heads mounted 20 cm apart on a Lorad biopsy apparatus. The results demonstrated that a majority of the accidental coincidence events (~80%) detected by this system were produced by radiotracer uptake in the adipose and muscle tissue of the torso. The presence of accidental coincidence events was shown to reduce lesion detectability. Much of this effect was eliminated by correction of the images utilizing estimates of accidental-coincidence contamination acquired with delayed coincidence circuitry built into the PEM system. The Compton scatter fraction for this system was ~14%. Utilization of a new scatter correction algorithm reduced the scatter fraction to ~1.5%. Finally, reduction of count recovery due to object size was measured and a correction to the data applied. Application of correction techniques

  9. Fully iterative scatter corrected digital breast tomosynthesis using GPU-based fast Monte Carlo simulation and composition ratio update

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyungsang; Ye, Jong Chul, E-mail: jong.ye@kaist.ac.kr [Bio Imaging and Signal Processing Laboratory, Department of Bio and Brain Engineering, KAIST 291, Daehak-ro, Yuseong-gu, Daejeon 34141 (Korea, Republic of); Lee, Taewon; Cho, Seungryong [Medical Imaging and Radiotherapeutics Laboratory, Department of Nuclear and Quantum Engineering, KAIST 291, Daehak-ro, Yuseong-gu, Daejeon 34141 (Korea, Republic of); Seong, Younghun; Lee, Jongha; Jang, Kwang Eun [Samsung Advanced Institute of Technology, Samsung Electronics, 130, Samsung-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 443-803 (Korea, Republic of); Choi, Jaegu; Choi, Young Wook [Korea Electrotechnology Research Institute (KERI), 111, Hanggaul-ro, Sangnok-gu, Ansan-si, Gyeonggi-do, 426-170 (Korea, Republic of); Kim, Hak Hee; Shin, Hee Jung; Cha, Joo Hee [Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro, 43-gil, Songpa-gu, Seoul, 138-736 (Korea, Republic of)

    2015-09-15

    Purpose: In digital breast tomosynthesis (DBT), scatter correction is highly desirable, as it improves image quality at low doses. Because the DBT detector panel is typically stationary during the source rotation, antiscatter grids are not generally compatible with DBT; thus, a software-based scatter correction is required. This work proposes a fully iterative scatter correction method that uses a novel fast Monte Carlo simulation (MCS) with a tissue-composition ratio estimation technique for DBT imaging. Methods: To apply MCS to scatter estimation, the material composition in each voxel should be known. To overcome the lack of prior accurate knowledge of tissue composition for DBT, a tissue-composition ratio is estimated based on the observation that the breast tissues are principally composed of adipose and glandular tissues. Using this approximation, the composition ratio can be estimated from the reconstructed attenuation coefficients, and the scatter distribution can then be estimated by MCS using the composition ratio. The scatter estimation and image reconstruction procedures can be performed iteratively until an acceptable accuracy is achieved. For practical use, (i) the authors have implemented a fast MCS using a graphics processing unit (GPU), (ii) the MCS is simplified to transport only x-rays in the energy range of 10–50 keV, modeling Rayleigh and Compton scattering and the photoelectric effect using the tissue-composition ratio of adipose and glandular tissues, and (iii) downsampling is used because the scatter distribution varies rather smoothly. Results: The authors have demonstrated that the proposed method can accurately estimate the scatter distribution, and that the contrast-to-noise ratio of the final reconstructed image is significantly improved. The authors validated the performance of the MCS by changing the tissue thickness, composition ratio, and x-ray energy. The authors confirmed that the tissue-composition ratio estimation was quite
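
    A minimal Python sketch of the two-component (adipose/glandular) ratio estimation described above, assuming reference attenuation coefficients at the effective energy are supplied by the user (values are not taken from the paper):

        import numpy as np

        def glandular_fraction(mu_recon, mu_adipose, mu_glandular):
            """Two-component (adipose/glandular) ratio from reconstructed attenuation.

            mu_recon is the reconstructed linear attenuation image; mu_adipose and
            mu_glandular are reference coefficients at the effective energy
            (spectrum-dependent placeholder values).
            """
            frac = (mu_recon - mu_adipose) / (mu_glandular - mu_adipose)
            # Clip to the physically meaningful range before feeding the MCS.
            return np.clip(frac, 0.0, 1.0)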

  10. The fortran programme for the calculation of the absorption and double scattering corrections in cross-section measurements with fast neutrons using the monte Carlo method (1963); Programme fortran pour le calcul des corrections d'absorption et de double diffusion dans les mesures de sections efficaces pour les neutrons rapides par la methode de monte-carlo (1963)

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez, B [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1963-07-01

    A calculation of the double-scattering and absorption corrections in fast neutron scattering experiments using the Monte Carlo method is presented. The application to a cylindrical target is given in FORTRAN symbolic language. (author)

  11. WE-DE-207B-12: Scatter Correction for Dedicated Cone Beam Breast CT Based On a Forward Projection Model

    Energy Technology Data Exchange (ETDEWEB)

    Shi, L; Zhu, L [Georgia Institute of Technology, Atlanta, GA (Georgia); Vedantham, S; Karellas, A [University of Massachusetts Medical School, Worcester, MA (United States)

    2016-06-15

    Purpose: The image quality of dedicated cone-beam breast CT (CBBCT) is fundamentally limited by substantial x-ray scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose to suppress x-ray scatter in CBBCT images using a deterministic forward projection model. Method: We first use the 1st-pass FDK-reconstructed CBBCT images to segment fibroglandular and adipose tissue. Attenuation coefficients are assigned to the two tissues based on the x-ray spectrum used for image acquisition, and the segmented volume is forward projected to simulate scatter-free primary projections. We estimate the scatter by subtracting the simulated primary projection from the measured projection, and then the resultant scatter map is further refined by a Fourier-domain fitting algorithm after discarding untrusted scatter information. The final scatter estimate is subtracted from the measured projection for effective scatter correction. In our implementation, the proposed scatter correction takes 0.5 seconds for each projection. The method was evaluated using the overall image spatial non-uniformity (SNU) metric and the contrast-to-noise ratio (CNR) with 5 clinical datasets of BI-RADS 4/5 subjects. Results: For the 5 clinical datasets, our method reduced the SNU from 7.79% to 1.68% in the coronal view and from 6.71% to 3.20% in the sagittal view. The average CNR is improved by a factor of 1.38 in the coronal view and 1.26 in the sagittal view. Conclusion: The proposed scatter correction approach requires no additional scans or prior images and uses a deterministic model for efficient calculation. Evaluation with clinical datasets demonstrates the feasibility and stability of the method. These features are attractive for clinical CBBCT and make our method distinct from other approaches. Supported partly by NIH R21EB019597, R21CA134128
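
    A minimal Python sketch of the scatter estimation step described above, with the paper's Fourier-domain fitting replaced by a simple Gaussian smoothing as a stand-in for the low-frequency refinement (names and the smoothing width are assumptions):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def estimate_scatter(measured_proj, simulated_primary, sigma=20.0):
            """Scatter map as the smooth residual between the measured projection
            and a forward-projected, scatter-free primary estimate."""
            raw_scatter = measured_proj - simulated_primary
            # Scatter is assumed to vary slowly across the detector, so keep only
            # the low-frequency part and forbid negative values.
            return np.clip(gaussian_filter(raw_scatter, sigma), 0.0, None)

        # corrected = measured_proj - estimate_scatter(measured_proj, simulated_primary)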

  12. Iteration of ultrasound aberration correction methods

    Science.gov (United States)

    Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond

    2004-05-01

    Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimation of the TDA filter, and performing correction on transmit and receive, has proven difficult. It has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimate until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal. The other method use eigenvalue decomposition of the receive cross-spectrum matrix, based upon a receive energy-maximizing criterion. Simulations of iterating aberration correction with a TDA filter have been investigated to study its convergence properties. A weak and strong human-body wall model generated aberration. Both emulated the human abdominal wall. Results after iteration improve aberration correction substantially, and both estimation methods converge, even for the case of strong aberration.

  13. Coulomb corrections to scattering length and effective radius

    International Nuclear Information System (INIS)

    Mur, V.D.; Kudryavtsev, A.E.; Popov, V.S.

    1983-01-01

    The problem considered is the extraction of the ''purely nuclear'' scattering length a_s (corresponding to the strong potential V_s with the Coulomb interaction switched off) from the Coulomb-nuclear scattering length a_cs, which is the object of experimental measurement. The difference between a_s and a_cs is especially large if the potential V_s has a level (real or virtual) with an energy close to zero. For this case formulae are obtained relating the scattering lengths a_s and a_cs, as well as the effective radii r_s and r_cs. The results are extended to states with arbitrary angular momenta l. It is shown that the Coulomb correction is especially large for the coefficient of k^(2l) in the expansion of the effective radius; in this case the correction contains a large logarithm ln(a_B/r_0). The Coulomb renormalization of other terms in the effective-radius expansion is of order (r_0/a_B), where r_0 is the nuclear force radius and a_B is the Bohr radius. The formulae obtained are tested on a number of model potentials V_s used in nuclear physics
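
    For orientation, the quantities a_s and r_s referred to above are defined through the standard effective-range expansion of the s-wave phase shift (a textbook relation, not a formula from the paper):

        \[
          k \cot\delta_0(k) \;=\; -\frac{1}{a_s} + \tfrac{1}{2}\, r_s\, k^2 + O(k^4)
        \]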

  14. Quantification of myocardial perfusion SPECT for the assessment of coronary artery disease: should we apply scatter correction?

    International Nuclear Information System (INIS)

    Hambye, A.S.; Vervaet, A.; Dobbeleir, A.

    2002-01-01

    Compared to other non-invasive tests for CAD diagnosis, myocardial perfusion imaging (MPI) is considered a very sensitive method whose accuracy is, however, often limited by a certain lack of specificity, especially in patients with a small heart. With gated SPECT MPI, use of end-diastolic instead of summed images has been presented as an interesting approach for increasing specificity. Since scatter correction is reported to improve image contrast, it might potentially constitute another way to improve MPI accuracy. We aimed at comparing the value of both approaches, either separate or combined, for CAD diagnosis. Methods. One hundred patients referred for gated 99m-Tc sestamibi SPECT MPI were prospectively included (Group A). Thirty-five had an end-systolic volume <30 ml by QGS analysis (Group B). All had a coronary angiogram within 3 months of the MPI. Four polar maps (non-corrected and scatter-corrected summed, and non-corrected and scatter-corrected end-diastolic) were created to quantify the extent (EXT) and severity (TDS) of the perfusion defects, if any. ROC-curve analysis was applied to define the optimal thresholds of EXT and TDS separating non-CAD from CAD patients, using a 50% stenosis on the coronary angiogram as the cutoff for disease positivity. Results. Significant CAD was present in 86 patients (25 in Group B). In Group A, assessment of EXT and TDS of perfusion defects on scatter-corrected summed images demonstrated the highest accuracy (76% for EXT; sens: 77%; spec: 71%, and 74% for TDS, sens: 73%, spec: 79%). Accuracy of EXT and TDS calculated from the other data sets was slightly but not significantly lower, especially because of a lower sensitivity. As a comparison, visual analysis was 90% accurate for the diagnosis of CAD (sens: 94%, spec: 64%). In Group B, overall results were worse, mainly due to a decreased sensitivity, with accuracies ranging between 51 and 63%. Again scatter-corrected summed data were the most accurate (EXT: 60%, TDS: 63%, visual

  15. Calculation of radiative corrections to virtual Compton scattering - absolute measurement of the energy of the Jefferson Lab electron beam (Hall A) by a magnetic method: arc project

    International Nuclear Information System (INIS)

    Marchand, D.

    1998-11-01

    This thesis presents the radiative corrections to virtual Compton scattering and the magnetic method adopted in Hall A at Jefferson Laboratory to measure the electron beam energy with a relative accuracy of 10^-4. Virtual Compton scattering experiments give access to the generalized polarizabilities of the proton. These polarizabilities are extracted by comparing experimental and theoretical cross sections, which is why the systematic errors and radiative effects of the experiment have to be controlled very carefully. To this end, a full calculation of the internal radiative corrections has been carried out in the framework of quantum electrodynamics. Dimensional regularization was used to treat the ultraviolet and infrared divergences. The absolute energy measurement is based on a magnetic deflection line made up of eight identical dipoles. The energy is determined from the calculated deflection angle of the beam and the measurement of the magnetic field integral along the deflection

  16. First order correction to quasiclassical scattering amplitude

    International Nuclear Information System (INIS)

    Kuz'menko, A.V.

    1978-01-01

    The first-order (in ħ) correction to the quasiclassical scattering amplitude in nonrelativistic quantum mechanics is considered. This correction is represented by two-loop diagrams and involves double integrals. With the aid of the classical equations of motion, the sum of the contributions of the two-loop diagrams is transformed into an expression that involves one-dimensional integrals only. A specific property of the expression obtained is that the integrand does not possess any singularities at the focal points of the classical trajectory. The general formula takes a much simpler form in the case of one-dimensional systems

  17. Mass corrections in deep-inelastic scattering

    International Nuclear Information System (INIS)

    Gross, D.J.; Treiman, S.B.; Wilczek, F.A.

    1977-01-01

    The moment sum rules for deep-inelastic lepton scattering are expected for asymptotically free field theories to display a characteristic pattern of logarithmic departures from scaling at large enough Q². In the large-Q² limit these patterns do not depend on hadron or quark masses m. For modest values of Q² one expects corrections at the level of powers of m²/Q². We discuss the question whether these mass effects are accessible in perturbation theory, as applied to the twist-2 Wilson coefficients and more generally. Our conclusion is that some part of the mass effects must arise from a nonperturbative origin. We also discuss the corrections which arise from higher orders in perturbation theory for very large Q², where mass effects can perhaps be ignored. The emphasis here is on a characterization of the Q², x domain where higher-order corrections are likely to be unimportant

  18. Calculation of the flux attenuation and multiple scattering correction factors in time of flight technique for double differential cross section measurements

    International Nuclear Information System (INIS)

    Martin, G.; Coca, M.; Capote, R.

    1996-01-01

    Using the Monte Carlo method, a computer code was developed that simulates a time-of-flight experiment for measuring double differential cross sections. The correction factors for flux attenuation and multiple scattering, which distort the measured spectrum, were calculated. The energy dependence of the correction factor was determined and a comparison with other works is shown. Calculations for ⁵⁶Fe at two different scattering angles were made. We also reproduce the experiment performed at the Nuclear Analysis Laboratory for ¹²C at 25°, and the calculated correction factor for the measured spectrum is shown. We found a linear relation between the scatterer size and the correction factor for flux attenuation
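
    A crude Python sketch of a Monte Carlo estimate of the flux-attenuation correction for a cylindrical sample, in the spirit of the correction factors discussed above (geometry, sampling and the neglect of multiple scattering are simplifying assumptions; no detail is taken from the original code):

        import numpy as np

        rng = np.random.default_rng(0)

        def attenuation_correction_factor(radius_cm, sigma_t_cm1, n=100_000):
            """Monte Carlo estimate of the flux-attenuation correction for a
            cylindrical sample viewed side-on (beam along +x, axis along z).

            Samples first-interaction points uniformly over the cross-section and
            returns 1 / <exp(-Sigma_t * path_in)>; multiple scattering is ignored.
            """
            # Uniform points inside the circular cross-section.
            r = radius_cm * np.sqrt(rng.random(n))
            phi = 2.0 * np.pi * rng.random(n)
            x, y = r * np.cos(phi), r * np.sin(phi)
            # Path travelled inside the cylinder before reaching (x, y).
            path_in = x + np.sqrt(radius_cm**2 - y**2)
            return 1.0 / np.mean(np.exp(-sigma_t_cm1 * path_in))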

  19. GEO-LEO reflectance band inter-comparison with BRDF and atmospheric scattering corrections

    Science.gov (United States)

    Chang, Tiejun; Xiong, Xiaoxiong Jack; Keller, Graziela; Wu, Xiangqian

    2017-09-01

    The inter-comparison of the reflective solar bands between instruments onboard a geostationary orbit satellite and onboard a low Earth orbit satellite is very helpful for assessing their calibration consistency. GOES-R was launched on November 19, 2016 and Himawari 8 was launched on October 7, 2014. Unlike the previous GOES instruments, the Advanced Baseline Imager on GOES-16 (GOES-R became GOES-16 after November 29 when it reached orbit) and the Advanced Himawari Imager (AHI) on Himawari 8 have onboard calibrators for the reflective solar bands. The assessment of calibration is important for their product quality enhancement. MODIS and VIIRS, with their stringent calibration requirements and excellent on-orbit calibration performance, provide good references. The simultaneous nadir overpass (SNO) and ray-matching are widely used inter-comparison methods for reflective solar bands. In this work, the inter-comparisons are performed over a pseudo-invariant target. The use of stable and uniform calibration sites provides comparisons at an appropriate reflectance level, accurate adjustment for band spectral coverage differences, reduced impact from pixel mismatching, and consistent BRDF and atmospheric corrections. The site in this work is a desert site in Australia (latitude 29.0° South, longitude 139.8° East). Due to the differences in solar and view angles, two corrections are applied to obtain comparable measurements. The first is the atmospheric scattering correction. The satellite sensor measurements are top-of-atmosphere reflectances. The scattering, especially Rayleigh scattering, should be removed, allowing the ground reflectance to be derived. Secondly, the angle differences magnify the BRDF effect. The ground reflectance should be corrected to obtain comparable measurements. The atmospheric correction is performed using a vector version of the Second Simulation of a Satellite Signal in the Solar Spectrum modeling and BRDF correction is performed using a semi

  20. A model of diffraction scattering with unitary corrections

    International Nuclear Information System (INIS)

    Etim, E.; Malecki, A.; Satta, L.

    1989-01-01

    The inability of the multiple scattering model of Glauber and similar geometrical-picture models to fit data at Collider energies, to fit low-energy data at large momentum transfers, and to explain the absence of multiple diffraction dips in the data is noted. It is argued and shown that a unitary correction to the multiple scattering amplitude gives rise to a better model and makes it possible to fit all available data on nucleon-nucleon and nucleus-nucleus collisions at all energies and all momentum transfers. There are no multiple diffraction dips

  1. Interleaved segment correction achieves higher improvement factors in using genetic algorithm to optimize light focusing through scattering media

    Science.gov (United States)

    Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong

    2017-10-01

    Focusing and imaging through scattering media have been proved possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), and thereby the focusing quality can be improved. The correction phase is often found by global searching algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually with the progression of optimization, causing the improvement factor of the optimization to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor with the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method has proved significantly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality is improved as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demand on the dynamic range of detection devices. The proposed method holds potential in applications such as high-resolution imaging in deep tissue.
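
    A minimal Python sketch of the interleaved grouping and sequential optimization loop described above, with the genetic algorithm and the intensity feedback left as placeholder callables (all names are hypothetical):

        import numpy as np

        def interleaved_groups(n_segments, n_groups):
            """Assign SLM phase segments to interleaved groups: segment i goes to
            group i mod n_groups, so each group samples the whole aperture."""
            return [np.arange(g, n_segments, n_groups) for g in range(n_groups)]

        def isc_optimize(n_segments, n_groups, run_ga_on, feedback):
            """Sketch of interleaved segment correction.

            run_ga_on(indices, phase, feedback) stands in for a genetic algorithm
            that optimizes only the listed segments and returns the updated full
            phase mask; feedback() returns the measured focus intensity.
            """
            phase = np.zeros(n_segments)
            for idx in interleaved_groups(n_segments, n_groups):
                phase = run_ga_on(idx, phase, feedback)  # one group at a time
            return phase  # final correction mask applied to the SLM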

  2. Fully 3D iterative scatter-corrected OSEM for HRRT PET using a GPU

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyung Sang; Ye, Jong Chul, E-mail: kssigari@kaist.ac.kr, E-mail: jong.ye@kaist.ac.kr [Bio-Imaging and Signal Processing Lab., Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), 335 Gwahak-no, Yuseong-gu, Daejon 305-701 (Korea, Republic of)

    2011-08-07

    Accurate scatter correction is especially important for high-resolution 3D positron emission tomography (PET) systems such as the high-resolution research tomograph (HRRT) due to the large scatter fraction in the data. To address this problem, a fully 3D iterative scatter-corrected ordered subset expectation maximization (OSEM), in which a 3D single scatter simulation (SSS) is alternately performed with a 3D OSEM reconstruction, was recently proposed. However, due to the computational complexity of both the SSS and OSEM algorithms for a high-resolution 3D PET, it has not been widely used in practice. The main objective of this paper is, therefore, to accelerate the fully 3D iterative scatter-corrected OSEM using a graphics processing unit (GPU) and verify its performance for an HRRT. We show that to exploit the massive thread structures of the GPU, several algorithmic modifications are necessary. For the SSS implementation, a sinogram-driven approach is found to be more appropriate than a detector-driven approach, as fast linear interpolation can be performed in the sinogram domain through the use of texture memory. Furthermore, a pixel-driven backprojector and a ray-driven projector can be significantly accelerated by assigning threads to voxels and sinograms, respectively. Using Nvidia's GPU and the compute unified device architecture (CUDA), the execution time of an SSS is less than 6 s, a single iteration of OSEM with 16 subsets takes 16 s, and a single iteration of the fully 3D scatter-corrected OSEM composed of an SSS and six iterations of OSEM takes under 105 s for the HRRT geometry, which corresponds to acceleration factors of 125x and 141x for OSEM and SSS, respectively. The fully 3D iterative scatter-corrected OSEM algorithm is validated in simulations using the Geant4 application for tomographic emission and in actual experiments using an HRRT.
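
    For orientation, the ordered-subset update into which a scatter estimate is commonly folded can be written as below (standard OSEM notation, not taken from the paper): lambda_j is the image value in voxel j, a_ij the system matrix, y_i the measured counts in sinogram bin i, S_b the b-th subset, s_i the single-scatter estimate and r_i the randoms estimate.

        \[
          \lambda_j^{(n,b+1)} \;=\; \frac{\lambda_j^{(n,b)}}{\sum_{i \in S_b} a_{ij}}
          \sum_{i \in S_b} a_{ij}\,
          \frac{y_i}{\sum_k a_{ik}\,\lambda_k^{(n,b)} + s_i + r_i}
        \]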

  3. Effect of scatter and attenuation correction in ROI analysis of brain perfusion scintigraphy. Phantom experiment and clinical study in patients with unilateral cerebrovascular disease

    Energy Technology Data Exchange (ETDEWEB)

    Bai, J. [Keio Univ., Tokyo (Japan). 21st Century Center of Excellence Program; Hashimoto, J.; Kubo, A. [Keio Univ., Tokyo (Japan). Dept. of Radiology; Ogawa, K. [Hosei Univ., Tokyo (Japan). Dept. of Electronic Informatics; Fukunaga, A.; Onozuka, S. [Keio Univ., Tokyo (Japan). Dept. of Neurosurgery

    2007-07-01

    The aim of this study was to evaluate the effect of scatter and attenuation correction in region of interest (ROI) analysis of brain perfusion single-photon emission tomography (SPECT), and to assess the influence of selecting the reference area on the calculation of lesion-to-reference count ratios. Patients, methods: Data were collected from a brain phantom and ten patients with unilateral internal carotid artery stenosis. A simultaneous emission and transmission scan was performed after injecting ¹²³I-iodoamphetamine. We reconstructed three SPECT images from common projection data: with scatter correction and nonuniform attenuation correction, with scatter correction and uniform attenuation correction, and with uniform attenuation correction applied to data without scatter correction. Regional count ratios were calculated by using four different reference areas (contralateral intact side, ipsilateral cerebellum, whole brain and hemisphere). Results: Scatter correction improved the accuracy of measuring the count ratios in the phantom experiment. It also yielded a marked difference in the count ratio in the clinical study when using the cerebellum, whole brain or hemisphere as the reference. The difference between nonuniform and uniform attenuation correction was not significant in the phantom and clinical studies except when the cerebellar reference was used. Calculation of the lesion-to-normal count ratios referred to the same site in the contralateral hemisphere was not dependent on the use of scatter correction or transmission scan-based attenuation correction. Conclusion: Scatter correction was indispensable for accurate measurement in most of the ROI analyses. Nonuniform attenuation correction is not necessary when using a reference area other than the cerebellum. (orig.)

  4. Detector normalization and scatter correction for the jPET-D4: A 4-layer depth-of-interaction PET scanner

    Energy Technology Data Exchange (ETDEWEB)

    Kitamura, Keishi [Shimadzu Corporation, 1 Nishinokyo-Kuwabaracho, Nakagyo-ku, Kyoto-shi, Kyoto 604-8511 (Japan)]. E-mail: kitam@shimadzu.co.jp; Ishikawa, Akihiro [Shimadzu Corporation, 1 Nishinokyo-Kuwabaracho, Nakagyo-ku, Kyoto-shi, Kyoto 604-8511 (Japan); Mizuta, Tetsuro [Shimadzu Corporation, 1 Nishinokyo-Kuwabaracho, Nakagyo-ku, Kyoto-shi, Kyoto 604-8511 (Japan); Yamaya, Taiga [National Institute of Radiological Sciences, 9-1 Anagawa-4, Inage-ku, Chiba-shi, Chiba 263-8555 (Japan); Yoshida, Eiji [National Institute of Radiological Sciences, 9-1 Anagawa-4, Inage-ku, Chiba-shi, Chiba 263-8555 (Japan); Murayama, Hideo [National Institute of Radiological Sciences, 9-1 Anagawa-4, Inage-ku, Chiba-shi, Chiba 263-8555 (Japan)

    2007-02-01

    The jPET-D4 is a brain positron emission tomography (PET) scanner composed of 4-layer depth-of-interaction (DOI) detectors with a large number of GSO crystals, which achieves both high spatial resolution and high scanner sensitivity. Since the sensitivity of each crystal element is highly dependent on the DOI layer depth and the incident γ-ray energy, it is difficult to estimate normalization factors and scatter components with high statistical accuracy. In this work, we implemented a hybrid scatter correction method combined with component-based normalization, which estimates scatter components from a dual-energy acquisition using a convolution subtraction method, with trues estimated from an upper energy window. In order to reduce statistical noise in the sinograms, the implemented scheme uses the DOI compression (DOIC) method, which combines deep pairs of DOI layers into the nearest shallow pairs of DOI layers with natural detector samplings. Since the compressed data preserve the block detector configuration, as if the data were acquired using 'virtual' detectors with high γ-ray stopping power, these correction methods can be applied directly to DOIC sinograms. The proposed method provides high-quality corrected images with low statistical noise, even for a multi-layer DOI-PET.

  5. Computer method to detect and correct cycle skipping on sonic logs

    International Nuclear Information System (INIS)

    Muller, D.C.

    1985-01-01

    A simple but effective computer method has been developed to detect cycle skipping on sonic logs and to replace cycle skips with estimates of correct traveltimes. The method can be used to correct observed traveltime pairs from the transmitter to both receivers. The basis of the method is the linearity of a plot of theoretical traveltime from the transmitter to the first receiver versus theoretical traveltime from the transmitter to the second receiver. Theoretical traveltime pairs are calculated assuming that the sonic logging tool is centered in the borehole, that the borehole diameter is constant, that the borehole fluid velocity is constant, and that the formation is homogeneous. The plot is linear for the full range of possible formation-rock velocity. Plots of observed traveltime pairs from a sonic logging tool are also linear but have a large degree of scatter due to borehole rugosity, sharp boundaries exhibiting large velocity contrasts, and system measurement uncertainties. However, this scatter can be reduced to a level that is less than scatter due to cycle skipping, so that cycle skips may be detected and discarded or replaced with estimated values of traveltime. Advantages of the method are that it can be applied in real time, that it can be used with data collected by existing tools, that it only affects data that exhibit cycle skipping and leaves other data unchanged, and that a correction trace can be generated which shows where cycle skipping occurs and the amount of correction applied. The method has been successfully tested on sonic log data taken in two holes drilled at the Nevada Test Site, Nye County, Nevada
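
    A minimal Python sketch of the detect-and-replace idea described above, assuming the observed traveltime pairs are nearly linear and that cycle skips appear as large residuals on the second receiver (the threshold and the ordinary least-squares fit are assumptions; a robust fit would be preferable in practice):

        import numpy as np

        def correct_cycle_skips(t1, t2, max_dev_us=10.0):
            """Flag and repair cycle skips using the near-linear relation between
            transmitter-to-receiver-1 and transmitter-to-receiver-2 traveltimes.

            t1, t2     : observed traveltime arrays (microseconds) per depth sample.
            max_dev_us : deviation from the fitted line beyond which a pair is
                         treated as a cycle skip (hypothetical threshold).
            """
            slope, intercept = np.polyfit(t1, t2, 1)
            residual = t2 - (slope * t1 + intercept)
            skipped = np.abs(residual) > max_dev_us
            t2_fixed = t2.copy()
            # Replace flagged traveltimes with the estimate from the fitted line.
            t2_fixed[skipped] = slope * t1[skipped] + intercept
            return t2_fixed, skipped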

  6. Holographic corrections to meson scattering amplitudes

    Energy Technology Data Exchange (ETDEWEB)

    Armoni, Adi; Ireson, Edwin, E-mail: 746616@swansea.ac.uk

    2017-06-15

    We compute meson scattering amplitudes using the holographic duality between confining gauge theories and string theory, in order to consider holographic corrections to the Veneziano amplitude and associated higher-point functions. The generic nature of such computations is explained, thanks to the well-understood nature of confining string backgrounds, and two different examples of the calculation in given backgrounds are used to illustrate the details. The effect we discover, whilst only qualitative, is re-obtainable in many such examples, in four-point but also higher point amplitudes.

  7. SU-D-206-07: CBCT Scatter Correction Based On Rotating Collimator

    International Nuclear Information System (INIS)

    Yu, G; Feng, Z; Yin, Y; Qiang, L; Li, B; Huang, P; Li, D

    2016-01-01

    Purpose: Scatter correction in cone-beam computed tomography (CBCT) has a marked effect on the removal of image noise and the cup artifact and on the increase of image contrast. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, mechanical inconvenience and a propensity for residual artifacts have limited further basic and clinical research. Here, we propose a rotating-collimator-based approach, in conjunction with reconstruction based on a discrete Radon transform and Tchebichef moments algorithm, to correct scatter-induced artifacts. Methods: A rotating collimator, comprising round tungsten alloy strips, was mounted on a linear actuator. The rotating collimator is divided equally into six portions. The round strips are evenly spaced within each portion but staggered between portions. A step motor connected to the rotating collimator drove the blocker around the x-ray source during the CBCT acquisition. CBCT reconstruction based on a discrete Radon transform and Tchebichef moments algorithm is then performed. Experimental studies using a water phantom and the Catphan504 phantom were carried out to evaluate the performance of the proposed scheme. Results: The proposed algorithm was tested on both Monte Carlo simulations and actual experiments with the Catphan504 phantom. In the simulation, the mean square error of the reconstruction decreases from 16% to 1.18%, the cupping (τcup) from 14.005% to 0.66%, and the peak signal-to-noise ratio increases from 16.9594 to 31.45. In the actual experiments, the induced visual artifacts are significantly reduced. Conclusion: We conducted an experiment on a CBCT imaging system with a rotating collimator to develop and optimize an x-ray scatter control and reduction technique. The proposed method is attractive in applications where high CBCT image quality is critical, for example, dose calculation in adaptive radiation therapy. We want to thank Dr. Lei

  8. SU-D-206-07: CBCT Scatter Correction Based On Rotating Collimator

    Energy Technology Data Exchange (ETDEWEB)

    Yu, G; Feng, Z [Shandong Normal University, Jinan, Shandong (China); Yin, Y [Shandong Cancer Hospital and Institute, China, Jinan, Shandong (China); Qiang, L [Zhang Jiagang STFK Medical Device Co, Zhangjiangkang, Suzhou (China); Li, B [Shandong Academy of Medical Sciences, Jinan, Shandong provice (China); Huang, P [Shandong Province Key Laboratory of Medical Physics and Image Processing Te, Ji’nan, Shandong province (China); Li, D [School of Physics and Electronics, Shandong Normal University, Jinan, Shandong (China)

    2016-06-15

    Purpose: Scatter correction in cone-beam computed tomography (CBCT) has a marked effect on the removal of image noise and the cup artifact and on the increase of image contrast. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, mechanical inconvenience and a propensity for residual artifacts have limited further basic and clinical research. Here, we propose a rotating-collimator-based approach, in conjunction with reconstruction based on a discrete Radon transform and Tchebichef moments algorithm, to correct scatter-induced artifacts. Methods: A rotating collimator, comprising round tungsten alloy strips, was mounted on a linear actuator. The rotating collimator is divided equally into six portions. The round strips are evenly spaced within each portion but staggered between portions. A step motor connected to the rotating collimator drove the blocker around the x-ray source during the CBCT acquisition. CBCT reconstruction based on a discrete Radon transform and Tchebichef moments algorithm is then performed. Experimental studies using a water phantom and the Catphan504 phantom were carried out to evaluate the performance of the proposed scheme. Results: The proposed algorithm was tested on both Monte Carlo simulations and actual experiments with the Catphan504 phantom. In the simulation, the mean square error of the reconstruction decreases from 16% to 1.18%, the cupping (τcup) from 14.005% to 0.66%, and the peak signal-to-noise ratio increases from 16.9594 to 31.45. In the actual experiments, the induced visual artifacts are significantly reduced. Conclusion: We conducted an experiment on a CBCT imaging system with a rotating collimator to develop and optimize an x-ray scatter control and reduction technique. The proposed method is attractive in applications where high CBCT image quality is critical, for example, dose calculation in adaptive radiation therapy. We want to thank Dr. Lei

  9. Improvement of brain perfusion SPET using iterative reconstruction with scatter and non-uniform attenuation correction

    Energy Technology Data Exchange (ETDEWEB)

    Kauppinen, T.; Vanninen, E.; Kuikka, J.T. [Kuopio Central Hospital (Finland). Dept. of Clinical Physiology; Koskinen, M.O. [Dept. of Clinical Physiology and Nuclear Medicine, Tampere Univ. Hospital, Tampere (Finland); Alenius, S. [Signal Processing Lab., Tampere Univ. of Technology, Tampere (Finland)

    2000-09-01

    Filtered back-projection (FBP) is generally used as the reconstruction method for single-photon emission tomography although it produces noisy images with apparent streak artefacts. It is possible to improve the image quality by using an algorithm with iterative correction steps. The iterative reconstruction technique also has an additional benefit in that computation of attenuation correction can be included in the process. A commonly used iterative method, maximum-likelihood expectation maximisation (ML-EM), can be accelerated using ordered subsets (OS-EM). We have applied to the OS-EM algorithm a Bayesian one-step late correction method utilising median root prior (MRP). Methodological comparison was performed by means of measurements obtained with a brain perfusion phantom and using patient data. The aim of this work was to quantitate the accuracy of iterative reconstruction with scatter and non-uniform attenuation corrections and post-filtering in SPET brain perfusion imaging. SPET imaging was performed using a triple-head gamma camera with fan-beam collimators. Transmission and emission scans were acquired simultaneously. The brain phantom used was a high-resolution three-dimensional anthropomorphic JB003 phantom. Patient studies were performed in ten chronic pain syndrome patients. The images were reconstructed using conventional FBP and iterative OS-EM and MRP techniques including scatter and nonuniform attenuation corrections. Iterative reconstructions were individually post-filtered. The quantitative results obtained with the brain perfusion phantom were compared with the known actual contrast ratios. The calculated difference from the true values was largest with the FBP method; iteratively reconstructed images proved closer to the reality. Similar findings were obtained in the patient studies. The plain OS-EM method improved the contrast whereas in the case of the MRP technique the improvement in contrast was not so evident with post-filtering. (orig.)

  10. Improvement of brain perfusion SPET using iterative reconstruction with scatter and non-uniform attenuation correction

    International Nuclear Information System (INIS)

    Kauppinen, T.; Vanninen, E.; Kuikka, J.T.; Alenius, S.

    2000-01-01

    Filtered back-projection (FBP) is generally used as the reconstruction method for single-photon emission tomography although it produces noisy images with apparent streak artefacts. It is possible to improve the image quality by using an algorithm with iterative correction steps. The iterative reconstruction technique also has an additional benefit in that computation of attenuation correction can be included in the process. A commonly used iterative method, maximum-likelihood expectation maximisation (ML-EM), can be accelerated using ordered subsets (OS-EM). We have applied to the OS-EM algorithm a Bayesian one-step late correction method utilising median root prior (MRP). Methodological comparison was performed by means of measurements obtained with a brain perfusion phantom and using patient data. The aim of this work was to quantitate the accuracy of iterative reconstruction with scatter and non-uniform attenuation corrections and post-filtering in SPET brain perfusion imaging. SPET imaging was performed using a triple-head gamma camera with fan-beam collimators. Transmission and emission scans were acquired simultaneously. The brain phantom used was a high-resolution three-dimensional anthropomorphic JB003 phantom. Patient studies were performed in ten chronic pain syndrome patients. The images were reconstructed using conventional FBP and iterative OS-EM and MRP techniques including scatter and nonuniform attenuation corrections. Iterative reconstructions were individually post-filtered. The quantitative results obtained with the brain perfusion phantom were compared with the known actual contrast ratios. The calculated difference from the true values was largest with the FBP method; iteratively reconstructed images proved closer to the reality. Similar findings were obtained in the patient studies. The plain OS-EM method improved the contrast whereas in the case of the MRP technique the improvement in contrast was not so evident with post-filtering. (orig.)

  11. Binary moving-blocker-based scatter correction in cone-beam computed tomography with width-truncated projections: proof of concept

    Science.gov (United States)

    Lee, Ho; Fahimian, Benjamin P.; Xing, Lei

    2017-03-01

    This paper proposes a binary moving-blocker (BMB)-based technique for scatter correction in cone-beam computed tomography (CBCT). In concept, a beam blocker consisting of lead strips, mounted in front of the x-ray tube, moves rapidly in and out of the beam during a single gantry rotation. The projections are acquired in alternating phases of blocked and unblocked cone beams, where the blocked phase results in a stripe pattern in the width direction. To derive the scatter map from the blocked projections, 1D B-Spline interpolation/extrapolation is applied by using the detected information in the shaded regions. The scatter map of the unblocked projections is corrected by averaging two scatter maps that correspond to their adjacent blocked projections. The scatter-corrected projections are obtained by subtracting the corresponding scatter maps from the projection data and are utilized to generate the CBCT image by a compressed-sensing (CS)-based iterative reconstruction algorithm. Catphan504 and pelvis phantoms were used to evaluate the method’s performance. The proposed BMB-based technique provided an effective method to enhance the image quality by suppressing scatter-induced artifacts, such as ring artifacts around the bowtie area. Compared to CBCT without a blocker, the spatial nonuniformity was reduced from 9.1% to 3.1%. The root-mean-square error of the CT numbers in the regions of interest (ROIs) was reduced from 30.2 HU to 3.8 HU. In addition to high resolution, comparable to that of the benchmark image, the CS-based reconstruction also led to a better contrast-to-noise ratio in seven ROIs. The proposed technique enables complete scatter-corrected CBCT imaging with width-truncated projections and allows reducing the acquisition time to approximately half. This work may have significant implications for image-guided or adaptive radiation therapy, where CBCT is often used.
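
    A minimal Python sketch of the per-row spline interpolation of scatter from the shaded detector columns, assuming the blocked column indices are known (names, the smoothing parameter and the row/column orientation are assumptions):

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        def scatter_from_blocked(proj, shaded_cols, smooth=None):
            """Estimate a full-width scatter profile for each detector row of a
            blocked projection by spline interpolation/extrapolation of the signal
            measured in the shaded (beam-stopped) columns."""
            n_rows, n_cols = proj.shape
            cols = np.arange(n_cols)
            scatter = np.empty_like(proj, dtype=float)
            for r in range(n_rows):
                # shaded_cols must be sorted in increasing order.
                spl = UnivariateSpline(shaded_cols, proj[r, shaded_cols], k=3, s=smooth)
                scatter[r] = np.clip(spl(cols), 0.0, None)
            return scatter

        # Scatter for an unblocked projection: average of the two adjacent blocked maps.
        # corrected = unblocked_proj - 0.5 * (scatter_prev + scatter_next)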

  12. Forward two-photon exchange in elastic lepton-proton scattering and hyperfine-splitting correction

    Energy Technology Data Exchange (ETDEWEB)

    Tomalak, Oleksandr [Johannes Gutenberg Universitaet, Institut fuer Kernphysik and PRISMA Cluster of Excellence, Mainz (Germany)

    2017-08-15

    We relate the forward two-photon exchange (TPE) amplitudes to integrals of the inclusive lepton-proton scattering cross sections. These relations yield an alternative way to evaluate the TPE correction to the hyperfine splitting (HFS) in hydrogen-like atoms, with a result equivalent to the standard approach (Iddings, Drell and Sullivan) when the Burkhardt-Cottingham sum rule is imposed. For the evaluation of individual effects (e.g., the elastic contribution) our approach yields a distinct result. We compare both methods numerically on the examples of the elastic contribution and the full TPE correction to the HFS in electronic and muonic hydrogen. (orig.)

  13. Iterative scatter correction for grid-less bedside chest radiography: performance for a chest phantom.

    Science.gov (United States)

    Mentrup, Detlef; Jockel, Sascha; Menser, Bernd; Neitzel, Ulrich

    2016-06-01

    The aim of this work was to experimentally compare the contrast improvement factors (CIFs) of a newly developed software-based scatter correction to the CIFs achieved by an antiscatter grid. To this end, three aluminium discs were placed in the lung, the retrocardial and the abdominal areas of a thorax phantom, and digital radiographs of the phantom were acquired both with and without a stationary grid. The contrast generated by the discs was measured in both images, and the CIFs achieved by grid usage were determined for each disc. Additionally, the non-grid images were processed with a scatter correction software. The contrasts generated by the discs were determined in the scatter-corrected images, and the corresponding CIFs were calculated. The CIFs obtained with the grid and with the software were in good agreement. In conclusion, the experiment demonstrates quantitatively that software-based scatter correction allows restoring the image contrast of a non-grid image in a manner comparable with an antiscatter grid.
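
    A minimal Python sketch of the contrast and CIF computation, assuming a simple local-contrast definition for the aluminium discs (the exact contrast definition used in the study is not specified here):

        def contrast(disc_roi_mean, background_roi_mean):
            """Simple local contrast of a test disc against its surroundings."""
            return abs(background_roi_mean - disc_roi_mean) / background_roi_mean

        def contrast_improvement_factor(c_corrected, c_uncorrected):
            """CIF: ratio of the disc contrast with the correction (grid or
            software) to the contrast in the plain non-grid image."""
            return c_corrected / c_uncorrected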

  14. Two-loop fermionic corrections to massive Bhabha scattering

    Energy Technology Data Exchange (ETDEWEB)

    Actis, S.; Riemann, T. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Czakon, M. [Wuerzburg Univ. (Germany). Inst. fuer Theoretische Physik und Astrophysik; Institute of Nuclear Physics, NSCR DEMOKRITOS, Athens (Greece)]; Gluza, J. [Silesia Univ., Katowice (Poland). Inst. of Physics

    2007-05-15

    We evaluate the two-loop corrections to Bhabha scattering from fermion loops in the context of pure Quantum Electrodynamics. The differential cross section is expressed by a small number of Master Integrals with exact dependence on the fermion masses m_e, m_f and the Mandelstam invariants s, t, u. We determine the limit of fixed scattering angle and high energy, assuming the hierarchy of scales m_e^2 <

  15. Application of transmission scan-based attenuation compensation to scatter-corrected thallium-201 myocardial single-photon emission tomographic images

    International Nuclear Information System (INIS)

    Hashimoto, Jun; Kubo, Atsushi; Ogawa, Koichi; Ichihara, Takashi; Motomura, Nobutoku; Takayama, Takuzo; Iwanaga, Shiro; Mitamura, Hideo; Ogawa, Satoshi

    1998-01-01

    A practical method for scatter and attenuation compensation was employed in thallium-201 myocardial single-photon emission tomography (SPET or ECT) with the triple-energy-window (TEW) technique and an iterative attenuation correction method by using a measured attenuation map. The map was reconstructed from technetium-99m transmission CT (TCT) data. A dual-headed SPET gamma camera system equipped with parallel-hole collimators was used for ECT/TCT data acquisition and a new type of external source named 'sheet line source' was designed for TCT data acquisition. This sheet line source was composed of a narrow long fluoroplastic tube embedded in a rectangular acrylic board. After injection of 99mTc solution into the tube by an automatic injector, the board was attached in front of the collimator surface of one of the two detectors. After acquiring emission and transmission data separately or simultaneously, we eliminated scattered photons in the transmission and emission data with the TEW method, and reconstructed both images. Then, the effect of attenuation in the scatter-corrected ECT images was compensated with Chang's iterative method by using measured attenuation maps. Our method was validated by several phantom studies and clinical cardiac studies. The method offered improved homogeneity in distribution of myocardial activity and accurate measurements of myocardial tracer uptake. We conclude that the above correction method is feasible because a new type of 99mTc external source may not produce truncation in TCT images and is cost-effective and easy to prepare in clinical situations. (orig.)

  16. Assessment of the scatter correction procedures in single photon emission computed tomography imaging using simulation and clinical study

    Directory of Open Access Journals (Sweden)

    Mehravar Rafati

    2017-01-01

    Conclusion: The simulation and clinical studies showed that the new approach can perform better than the DEW and TEW methods for scatter correction, in terms of contrast and SNR values.

  17. Two-photon exchange corrections in elastic lepton-proton scattering

    Energy Technology Data Exchange (ETDEWEB)

    Tomalak, Oleksandr; Vanderhaeghen, Marc [Johannes Gutenberg Universitaet Mainz (Germany)

    2015-07-01

    The measured value of the proton charge radius from the Lamb shift of energy levels in muonic hydrogen is in strong contradiction, by 7-8 standard deviations, with the value obtained from electronic hydrogen spectroscopy and the value extracted from unpolarized electron-proton scattering data. The dominant unaccounted higher-order contribution in scattering experiments corresponds to the two-photon exchange (TPE) diagram. The elastic contribution to the TPE correction was studied with fixed momentum transfer dispersion relations and compared to the hadronic model with off-shell photon-nucleon vertices. A dispersion relation formalism with one subtraction was proposed. Theoretical predictions of the TPE elastic contribution to the unpolarized elastic electron-proton scattering and polarization transfer observables in the low momentum transfer region were made. The TPE formalism was generalized to the case of massive leptons, and the elastic contribution was evaluated for the kinematics of the upcoming muon-proton scattering experiment (MUSE).

  18. Compton scatter correction in case of multiple crosstalks in SPECT imaging.

    Science.gov (United States)

    Sychra, J J; Blend, M J; Jobe, T H

    1996-02-01

    A strategy for Compton scatter correction in brain SPECT images was proposed recently. It assumes that two radioisotopes are used and that a significant portion of photons of one radioisotope (for example, Tc99m) spills over into the low energy acquisition window of the other radioisotope (for example, Tl201). We are extending this approach to cases of several radioisotopes with mutual, multiple and significant photon spillover. In the example above, one may correct not only the Tl201 image but also the Tc99m image corrupted by the Compton scatter originating from the small component of high energy Tl201 photons. The proposed extension is applicable to other anatomical domains (cardiac imaging).

  19. Effects of scatter and attenuation corrections on phantom and clinical brain SPECT

    International Nuclear Information System (INIS)

    Prando, S.; Robilotta, C.C.R.; Oliveira, M.A.; Alves, T.C.; Busatto Filho, G.

    2002-01-01

    Aim: The present work evaluated the effects of combinations of scatter and attenuation corrections on the analysis of brain SPECT. Materials and Methods: We studied images of the 3D Hoffman brain phantom and from a group of 20 depressive patients with confirmed cardiac insufficiency (CI) and 14 matched healthy controls (HC). Data were acquired with a Sophy-DST/SMV-GE dual-head camera after venous injection of 1110 MBq 99mTc-HMPAO. Two energy windows, 15% on 140 keV and 30% centered on 108 keV of the Compton distribution, were used to obtain corresponding sets of 128x128x128 projections. Tomograms were reconstructed using OSEM (2 iterations, 8 sub-sets) and Metz filter (order 8, 4 pixels FWHM psf) and FBP with Butterworth filter (order 10, frequency 0.7 Nyquist). Ten combinations of Jaszczak correction (factors 0.3, 0.4 and 0.5) and the 1st order Chang correction (μ=0.12 cm⁻¹ and 0.159 cm⁻¹) were applied on the phantom data. In all the phantom images, contrast and signal-to-noise ratio between 3 ROIs (ventricle, occipital and thalamus) and cerebellum, as well as the ratio between activities in gray and white matter, were calculated and compared with the expected values. The patient images were corrected with k=0.5 and μ=0.159 cm⁻¹ and reconstructed with OSEM and Metz filter. The images were inspected visually and blood flow comparisons between the CI and the HC groups were performed using Statistical Parametric Mapping (SPM). Results: The best results in the analysis of the contrast and activity ratios were obtained with k=0.5 and μ=0.159 cm⁻¹. The activity ratios obtained with OSEM and Metz filter are similar to those published by Laere et al. [J Nucl Med 2000;41:2051-2062]. The correction method using an effective attenuation coefficient produced visually acceptable results, but was inadequate for quantitative evaluation. The signal-to-noise ratio results are better with OSEM than with the FBP reconstruction method. The corrections in the CI patients studies
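    For reference, the first-order Chang correction applied above multiplies each reconstructed pixel by the inverse of its attenuation factor averaged over projection angles, assuming a uniform attenuation coefficient inside the body contour. The Python sketch below is a minimal, unoptimized illustration of that idea under the assumption that the attenuation coefficient is given per pixel; it is not the code used in the study.

```python
import numpy as np

def chang_first_order(image, mu_per_pixel, body_mask, n_angles=32):
    """First-order Chang attenuation correction: multiply each pixel inside the
    body contour by the inverse of its attenuation factor averaged over angles.
    mu_per_pixel is a uniform attenuation coefficient expressed per pixel."""
    rows, cols = image.shape
    corr = np.ones_like(image, dtype=float)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for y, x in zip(*np.nonzero(body_mask)):
        factors = []
        for a in angles:
            # march from the pixel to the body contour to get the path length d
            d, yy, xx = 0.0, float(y), float(x)
            while (0 <= round(yy) < rows and 0 <= round(xx) < cols
                   and body_mask[round(yy), round(xx)]):
                yy += np.sin(a)
                xx += np.cos(a)
                d += 1.0
            factors.append(np.exp(-mu_per_pixel * d))
        corr[y, x] = 1.0 / np.mean(factors)
    return image * corr

# Toy example: a disc-shaped "body" with uniform activity
yy, xx = np.mgrid[:32, :32]
mask = (yy - 16) ** 2 + (xx - 16) ** 2 < 14 ** 2
corrected = chang_first_order(np.where(mask, 1.0, 0.0), mu_per_pixel=0.05, body_mask=mask)
```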

  20. Evaluation of attenuation correction, scatter correction and resolution recovery in myocardial Tc-99m MIBI SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Larcos, G.; Hutton, B.F.; Farlow, D.C.; Campbell-Rodgers, N.; Gruenewald, S.M.; Lau, Y.H. [Westmead Hospital, Westmead, Sydney, NSW (Australia). Departments of Nuclear Medicine and Ultrasound and Medical Physics

    1998-06-01

    Full text: The introduction of transmission based attenuation correction (AC) has increased the diagnostic accuracy of Tc-99m MIBI myocardial perfusion SPECT. The aim of this study is to evaluate recent developments, including scatter correction (SC) and resolution recovery (RR). We reviewed 13 patients who underwent Tc-99m MIBI SPECT (two day protocol) and coronary angiography and 4 manufacturer supplied studies assigned a low pretest likelihood of coronary artery disease (CAD). Patients had a mean age of 59 years (range: 41-78). Data were reconstructed using filtered backprojection (FBP; method 1), maximum likelihood (ML) incorporating AC (method 2), ADAC software using sinogram based SC+RR followed by ML with AC (method 3) and ordered subset ML incorporating AC,SC and RR (method 4). Images were reported by two of three blinded experienced physicians using a standard semiquantitative scoring scheme. Fixed or reversible perfusion defects were considered abnormal; CAD was considered present with stenoses > 50%. Patients had normal coronary anatomy (n=9), single (n=4) or two vessel CAD (n=4) (four in each of LAD, RCA and LCX). There were no statistically significant differences for any combination. Normalcy rate = 100% for all methods. Physicians graded 3/17 (methods 2,4) and 1/17 (method 3) images as fair or poor in quality. Thus, AC or AC+SC+RR produce good quality images in most patients; there is potential for improvement in sensitivity over standard FBP with no significant change in normalcy or specificity

  1. Monte Carlo simulation and scatter correction of the GE Advance PET scanner with SimSET and Geant4

    International Nuclear Information System (INIS)

    Barret, Olivier; Carpenter, T Adrian; Clark, John C; Ansorge, Richard E; Fryer, Tim D

    2005-01-01

    For Monte Carlo simulations to be used as an alternative solution to perform scatter correction, accurate modelling of the scanner as well as speed is paramount. General-purpose Monte Carlo packages (Geant4, EGS, MCNP) allow a detailed description of the scanner but are not efficient at simulating voxel-based geometries (patient images). On the other hand, dedicated codes (SimSET, PETSIM) will perform well for voxel-based objects but will be poor in their capacity of simulating complex geometries such as a PET scanner. The approach adopted in this work was to couple a dedicated code (SimSET) with a general-purpose package (Geant4) to have the efficiency of the former and the capabilities of the latter. The combined SimSET+Geant4 code (SimG4) was assessed on the GE Advance PET scanner and compared to the use of SimSET only. A better description of the resolution and sensitivity of the scanner and of the scatter fraction was obtained with SimG4. The accuracy of scatter correction performed with SimG4 and SimSET was also assessed from data acquired with the 20 cm NEMA phantom. SimG4 was found to outperform SimSET and to give slightly better results than the GE scatter correction methods installed on the Advance scanner (curve fitting and scatter modelling for the 300-650 keV and 375-650 keV energy windows, respectively). In the presence of a hot source close to the edge of the field of view (as found in oxygen scans), the GE curve-fitting method was found to fail whereas SimG4 maintained its performance

  2. Magnetic corrections to π -π scattering lengths in the linear sigma model

    Science.gov (United States)

    Loewe, M.; Monje, L.; Zamora, R.

    2018-03-01

    In this article, we consider the magnetic corrections to π -π scattering lengths in the frame of the linear sigma model. For this, we consider all the one-loop corrections in the s , t , and u channels, associated to the insertion of a Schwinger propagator for charged pions, working in the region of small values of the magnetic field. Our calculation relies on an appropriate expansion for the propagator. It turns out that the leading scattering length, l =0 in the S channel, increases for an increasing value of the magnetic field, in the isospin I =2 case, whereas the opposite effect is found for the I =0 case. The isospin symmetry is valid because the insertion of the magnetic field occurs through the absolute value of the electric charges. The channel I =1 does not receive any corrections. These results, for the channels I =0 and I =2 , are opposite with respect to the thermal corrections found previously in the literature.

  3. Multiple-scattering corrections to the Beer-Lambert law

    International Nuclear Information System (INIS)

    Zardecki, A.

    1983-01-01

    The effect of multiple scattering on the validity of the Beer-Lambert law is discussed for a wide range of particle-size parameters and optical depths. To predict the amount of received radiant power, appropriate correction terms are introduced. For particles larger than or comparable to the wavelength of radiation, the small-angle approximation is adequate; whereas for small densely packed particles, the diffusion theory is advantageously employed. These two approaches are used in the context of the problem of laser-beam propagation in a dense aerosol medium. In addition, preliminary results obtained by using a two-dimensional finite-element discrete-ordinates transport code are described. Multiple-scattering effects for laser propagation in fog, cloud, rain, and aerosol cloud are modeled

  4. The effect of scatter correction on 123I-IMP brain perfusion SPET with the triple energy window method in normal subjects using SPM analysis

    Energy Technology Data Exchange (ETDEWEB)

    Shiga, Tohru; Takano, Akihiro; Tsukamoto, Eriko; Tamaki, Nagara [Department of Nuclear Medicine, Hokkaido University School of Medicine, Sapporo (Japan); Kubo, Naoki [Department of Radiological Technology, College of Medical Technology, Hokkaido University, Sapporo (Japan); Kobayashi, Junko; Takeda, Yoji; Nakamura, Fumihiro; Koyama, Tsukasa [Department of Psychiatry and Neurology, Hokkaido University School of Medicine, Sapporo (Japan); Katoh, Chietsugu [Department of Tracer Kinetics, Hokkaido University School of Medicine, Sapporo (Japan)

    2002-03-01

    Scatter correction (SC) using the triple energy window method (TEW) has recently been applied for brain perfusion single-photon emission tomography (SPET). The aim of this study was to investigate the effect of scatter correction using TEW on N-isopropyl-p-[123I]iodoamphetamine (123I-IMP) SPET in normal subjects. The study population consisted of 15 right-handed normal subjects. SPET data were acquired from 20 min to 40 min after the injection of 167 MBq of IMP, using a triple-head gamma camera. Images were reconstructed with and without SC. 3D T1-weighted magnetic resonance (MR) images were also obtained with a 1.5-Tesla scanner. First, IMP images with and without SC were co-registered to the 3D MRI. Second, the two co-registered IMP images were normalised using SPM96. A t statistic image for the contrast condition effect was constructed. We investigated areas using a voxel-level threshold of 0.001, with a corrected threshold of 0.05. Compared with results obtained without SC, the IMP distribution with SC was significantly decreased in the peripheral areas of the cerebellum, the cortex and the ventricle, and also in the lateral occipital cortex and the base of the temporal lobe. On the other hand, the IMP distribution with SC was significantly increased in the anterior and posterior cingulate cortex, the insular cortex and the medial part of the thalamus. It is concluded that differences in the IMP distribution with and without SC exist not only in the peripheral areas of the cerebellum, the cortex and the ventricle but also in the occipital lobe, the base of the temporal lobe, the insular cortex, the medial part of the thalamus, and the anterior and posterior cingulate cortex. This needs to be recognised for adequate interpretation of IMP brain perfusion SPET after scatter correction. (orig.)

  5. Evaluation of Fresnel's corrections to the eikonal approximation by the separabilization method

    International Nuclear Information System (INIS)

    Musakhanov, M.M.; Zubarev, A.L.

    1975-01-01

    A method of separabilization of the potential over approximate Schroedinger solutions, leading to Schwinger's variational principle for the scattering amplitude, is suggested. The results are applied to the calculation of the Fresnel corrections to the Glauber approximation

  6. Attenuation correction for the HRRT PET-scanner using transmission scatter correction and total variation regularization

    DEFF Research Database (Denmark)

    Keller, Sune H; Svarer, Claus; Sibomana, Merence

    2013-01-01

    In the standard software for the Siemens high-resolution research tomograph (HRRT) positron emission tomography (PET) scanner the most commonly used segmentation in the μ-map reconstruction for human brain scans is maximum a posteriori for transmission (MAP-TR). Bias in the lower cerebellum and pons in HRRT brain images have been reported. We developed the new transmission processing with total variation (TXTV) method that introduces scatter correction in the μ-map reconstruction and total variation filtering to the transmission processing. Results: Comparing MAP-TR and the new TXTV with gold standard CT-based attenuation correction, we found that TXTV has less bias as compared to MAP-TR. We also compared images acquired at the HRRT scanner using TXTV to the GE Advance scanner images and found high quantitative correspondence.

  7. Multiple scattering corrections to the Beer-Lambert law. 1: Open detector.

    Science.gov (United States)

    Tam, W G; Zardecki, A

    1982-07-01

    Multiple scattering corrections to the Beer-Lambert law are analyzed by means of a rigorous small-angle solution to the radiative transfer equation. Transmission functions for predicting the received radiant power-a directly measured quantity in contrast to the spectral radiance in the Beer-Lambert law-are derived. Numerical algorithms and results relating to the multiple scattering effects for laser propagation in fog, cloud, and rain are presented.
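    Schematically, the corrections discussed in these two records modify the received power predicted by the Beer-Lambert law by a factor that accounts for multiply scattered light reaching the detector. The sketch below only illustrates this structure; the correction factor used is a made-up placeholder, not one of the transmission functions derived in the paper.

```python
import numpy as np

def received_power(p0, tau, correction=lambda t: 1.0):
    """Beer-Lambert transmission with an optional multiplicative multiple-scattering
    correction factor C(tau); the default C = 1 recovers the uncorrected law."""
    return p0 * np.exp(-tau) * correction(tau)

# Placeholder correction factor growing with optical depth (illustration only)
extra = lambda tau: 1.0 + 0.5 * tau ** 2 * np.exp(-0.3 * tau)
print(received_power(1.0, np.linspace(0.1, 5.0, 5), extra))
```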

  8. Corrections for the effects of accidental coincidences, Compton scatter, and object size in positron emission mammography (PEM) imaging

    Energy Technology Data Exchange (ETDEWEB)

    Raymond Raylman; Stanislaw Majewski; Randolph Wojcik; Andrew Weisenberger; Brian Kross; Vladimir Popov

    2001-06-01

    Positron emission mammography (PEM) has begun to show promise as an effective method for the detection of breast lesions. Due to its utilization of tumor-avid radiopharmaceuticals labeled with positron-emitting radionuclides, this technique may be especially useful in imaging of women with radiodense or fibrocystic breasts. While the use of these radiotracers affords PEM unique capabilities, it also introduces some limitations. Specifically, acceptance of accidental and Compton-scattered coincidence events can decrease lesion detectability. The authors studied the effect of accidental coincidence events on PEM images produced by the presence of 18F-Fluorodeoxyglucose in the organs of a subject using an anthropomorphic phantom. A delayed-coincidence technique was tested as a method for correcting PEM images for the occurrence of accidental events. Also, a Compton scatter correction algorithm designed specifically for PEM was developed and tested using a compressed breast phantom.

  9. Corrections for the effects of accidental coincidences, Compton scatter, and object size in positron emission mammography (PEM) imaging

    International Nuclear Information System (INIS)

    Raymond Raylman; Stanislaw Majewski; Randolph Wojcik; Andrew Weisenberger; Brian Kross; Vladimir Popov

    2001-01-01

    Positron emission mammography (PEM) has begun to show promise as an effective method for the detection of breast lesions. Due to its utilization of tumor-avid radiopharmaceuticals labeled with positron-emitting radionuclides, this technique may be especially useful in imaging of women with radiodense or fibrocystic breasts. While the use of these radiotracers affords PEM unique capabilities, it also introduces some limitations. Specifically, acceptance of accidental and Compton-scattered coincidence events can decrease lesion detectability. The authors studied the effect of accidental coincidence events on PEM images produced by the presence of 18F-Fluorodeoxyglucose in the organs of a subject using an anthropomorphic phantom. A delayed-coincidence technique was tested as a method for correcting PEM images for the occurrence of accidental events. Also, a Compton scatter correction algorithm designed specifically for PEM was developed and tested using a compressed breast phantom

  10. Hadron mass corrections in semi-inclusive deep inelastic scattering

    International Nuclear Information System (INIS)

    Accardi, A.; Hobbs, T.; Melnitchouk, W.

    2009-01-01

    We derive mass corrections for semi-inclusive deep inelastic scattering of leptons from nucleons using a collinear factorization framework which incorporates the initial state mass of the target nucleon and the final state mass of the produced hadron h. The hadron mass correction is made by introducing a generalized, finite-Q² scaling variable ζ_h for the hadron fragmentation function, which approaches the usual energy fraction z_h = E_h/ν in the Bjorken limit. We systematically examine the kinematic dependencies of the mass corrections to semi-inclusive cross sections, and find that these are even larger than for inclusive structure functions. The hadron mass corrections compete with the experimental uncertainties at kinematics typical of current facilities, Q² ∼ few GeV² and intermediate x_B > 0.3, and will be important to efforts at extracting parton distributions from semi-inclusive processes at intermediate energies.

  11. Complete $O(\\alpha)$ QED corrections to polarized Compton scattering

    CERN Document Server

    Denner, Ansgar

    1999-01-01

    The complete QED corrections of O(alpha) to polarized Compton scattering are calculated for finite electron mass and including the real corrections induced by the processes e^- gamma -> e^- gamma gamma and e^- gamma -> e^- e^- e^+. All relevant formulas are listed in a form that is well suited for a direct implementation in computer codes. We present a detailed numerical discussion of the O(alpha)-corrected cross section and the left-right asymmetry in the energy range of present and future Compton polarimeters, which are used to determine the beam polarization of high-energy e^± beams. For photons with energies of a few eV and electrons with SLC energies or smaller, the corrections are of the order of a few per mille. In the energy range of future e^+e^- colliders, however, they reach 1-2% and cannot be neglected in a precision polarization measurement.

  12. A moving blocker-based strategy for simultaneous megavoltage and kilovoltage scatter correction in cone-beam computed tomography image acquired during volumetric modulated arc therapy

    International Nuclear Information System (INIS)

    Ouyang, Luo; Lee, Huichen Pam; Wang, Jing

    2015-01-01

    Purpose: To evaluate a moving blocker-based approach in estimating and correcting megavoltage (MV) and kilovoltage (kV) scatter contamination in kV cone-beam computed tomography (CBCT) acquired during volumetric modulated arc therapy (VMAT). Methods and materials: During the concurrent CBCT/VMAT acquisition, a physical attenuator (i.e., “blocker”) consisting of equally spaced lead strips was mounted and moved constantly between the CBCT source and patient. Both kV and MV scatter signals were estimated from the blocked region of the imaging panel, and interpolated into the unblocked region. A scatter corrected CBCT was then reconstructed from the unblocked projections after scatter subtraction using an iterative image reconstruction algorithm based on constraint optimization. Experimental studies were performed on a Catphan® phantom and an anthropomorphic pelvis phantom to demonstrate the feasibility of using a moving blocker for kV–MV scatter correction. Results: Scatter induced cupping artifacts were substantially reduced in the moving blocker corrected CBCT images. Quantitatively, the root mean square error of Hounsfield units (HU) in seven density inserts of the Catphan phantom was reduced from 395 to 40. Conclusions: The proposed moving blocker strategy greatly improves the image quality of CBCT acquired with concurrent VMAT by reducing the kV–MV scatter induced HU inaccuracy and cupping artifacts
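    A minimal sketch of the blocker-based scatter estimation step described above: the signal measured behind the lead strips (where only scatter reaches the detector) is interpolated across the unblocked detector region and subtracted from the projection before reconstruction. Linear 1D interpolation is used here as a stand-in for the interpolation actually employed, and the strip layout is an assumption.

```python
import numpy as np

def estimate_scatter_from_blocked(projection, blocked_mask):
    """Estimate the scatter signal across the detector by interpolating, row by
    row, the signal measured behind the lead strips (where only scatter arrives).
    Assumes vertical strips, so every detector row contains blocked columns."""
    rows, cols = projection.shape
    scatter = np.empty_like(projection, dtype=float)
    x = np.arange(cols)
    for r in range(rows):
        known = blocked_mask[r]
        scatter[r] = np.interp(x, x[known], projection[r, known])
    return scatter

# primary-only projection used for reconstruction after scatter subtraction:
# primary = projection - estimate_scatter_from_blocked(projection, blocked_mask)
```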

  13. Investigation of electron-loss and photon scattering correction factors for FAC-IR-300 ionization chamber

    Science.gov (United States)

    Mohammadi, S. M.; Tavakoli-Anbaran, H.; Zeinali, H. Z.

    2017-02-01

    The parallel-plate free-air ionization chamber termed FAC-IR-300 was designed at the Atomic Energy Organization of Iran, AEOI. This chamber is used for low and medium X-ray dosimetry on the primary standard level. In order to evaluate the air kerma, some correction factors such as the electron-loss correction factor (ke) and the photon scattering correction factor (ksc) are needed. The ke factor corrects for the charge loss from the collecting volume and the ksc factor corrects for the scattering of photons into the collecting volume. In this work ke and ksc were estimated by Monte Carlo simulation. These correction factors are calculated for mono-energetic photons. As a result of the simulation data, the ke and ksc values for the FAC-IR-300 ionization chamber are 1.0704 and 0.9982, respectively.
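    These two factors enter the free-air-chamber air-kerma evaluation as multiplicative corrections. The snippet below simply applies the two quoted values to an uncorrected air-kerma reading; the remaining correction factors and conversion constants of the full formalism are left out, so this is only an illustration of how ke and ksc are used.

```python
def corrected_air_kerma(k_uncorrected, k_e=1.0704, k_sc=0.9982):
    """Apply the electron-loss (k_e) and photon-scattering (k_sc) correction
    factors reported for the FAC-IR-300 chamber as multiplicative corrections.
    All other factors of the air-kerma formalism are omitted here."""
    return k_uncorrected * k_e * k_sc

print(corrected_air_kerma(1.000))   # combined correction is about +6.8%
```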

  14. Numerical correction of anti-symmetric aberrations in single HRTEM images of weakly scattering 2D-objects

    International Nuclear Information System (INIS)

    Lehtinen, Ossi; Geiger, Dorin; Lee, Zhongbo; Whitwick, Michael Brian; Chen, Ming-Wei; Kis, Andras; Kaiser, Ute

    2015-01-01

    Here, we present a numerical post-processing method for removing the effect of anti-symmetric residual aberrations in high-resolution transmission electron microscopy (HRTEM) images of weakly scattering 2D-objects. The method is based on applying the same aberrations with the opposite phase to the Fourier transform of the recorded image intensity and subsequently inverting the Fourier transform. We present the theoretical justification of the method, and its verification based on simulated images in the case of low-order anti-symmetric aberrations. Ultimately the method is applied to experimental hardware aberration-corrected HRTEM images of single-layer graphene and MoSe 2 resulting in images with strongly reduced residual low-order aberrations, and consequently improved interpretability. Alternatively, this method can be used to estimate by trial and error the residual anti-symmetric aberrations in HRTEM images of weakly scattering objects
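    The correction described above amounts to multiplying the Fourier transform of the recorded intensity by the anti-symmetric aberration phase with opposite sign and transforming back. A minimal numpy sketch of that operation is given below; the sampled aberration phase chi_odd is assumed to be provided by the user (its construction from measured aberration coefficients is not shown).

```python
import numpy as np

def remove_antisymmetric_aberrations(image, chi_odd):
    """Apply the opposite-phase anti-symmetric aberration function to the Fourier
    transform of the recorded HRTEM intensity and invert the transform.

    image   : 2D recorded intensity
    chi_odd : anti-symmetric part of the aberration phase chi(k), sampled on the
              same grid with the zero frequency at the array centre (assumption)
    """
    spectrum = np.fft.fft2(image)
    corrected = spectrum * np.exp(-1j * np.fft.ifftshift(chi_odd))
    return np.real(np.fft.ifft2(corrected))

# Example with a random test image and a zero (i.e. no-op) aberration phase
img = np.random.default_rng(1).normal(size=(256, 256))
restored = remove_antisymmetric_aberrations(img, np.zeros((256, 256)))
```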

  15. The relative contributions of scatter and attenuation corrections toward improved brain SPECT quantification

    International Nuclear Information System (INIS)

    Stodilka, Robert Z.; Msaki, Peter; Prato, Frank S.; Nicholson, Richard L.; Kemp, B.J.

    1998-01-01

    Mounting evidence indicates that scatter and attenuation are major confounds to objective diagnosis of brain disease by quantitative SPECT. There is considerable debate, however, as to the relative importance of scatter correction (SC) and attenuation correction (AC), and how they should be implemented. The efficacy of SC and AC for 99m Tc brain SPECT was evaluated using a two-compartment fully tissue-equivalent anthropomorphic head phantom. Four correction schemes were implemented: uniform broad-beam AC, non-uniform broad-beam AC, uniform SC+AC, and non-uniform SC+AC. SC was based on non-stationary deconvolution scatter subtraction, modified to incorporate a priori knowledge of either the head contour (uniform SC) or transmission map (non-uniform SC). The quantitative accuracy of the correction schemes was evaluated in terms of contrast recovery, relative quantification (cortical:cerebellar activity), uniformity ((coefficient of variation of 230 macro-voxels) x100%), and bias (relative to a calibration scan). Our results were: uniform broad-beam (μ=0.12cm -1 ) AC (the most popular correction): 71% contrast recovery, 112% relative quantification, 7.0% uniformity, +23% bias. Non-uniform broad-beam (soft tissue μ=0.12cm -1 ) AC: 73%, 114%, 6.0%, +21%, respectively. Uniform SC+AC: 90%, 99%, 4.9%, +12%, respectively. Non-uniform SC+AC: 93%, 101%, 4.0%, +10%, respectively. SC and AC achieved the best quantification; however, non-uniform corrections produce only small improvements over their uniform counterparts. SC+AC was found to be superior to AC; this advantage is distinct and consistent across all four quantification indices. (author)

  16. Effect of inter-crystal scatter on estimation methods for random coincidences and subsequent correction

    International Nuclear Information System (INIS)

    Torres-Espallardo, I; Spanoudaki, V; Ziegler, S I; Rafecas, M; McElroy, D P

    2008-01-01

    Random coincidences can contribute substantially to the background in positron emission tomography (PET). Several estimation methods are being used for correcting them. The goal of this study was to investigate the validity of techniques for random coincidence estimation, with various low-energy thresholds (LETs). Simulated singles list-mode data of the MADPET-II small animal PET scanner were used as input. The simulations have been performed using the GATE simulation toolkit. Several sources with different geometries have been employed. We evaluated the number of random events using three methods: delayed window (DW), singles rate (SR) and time histogram fitting (TH). Since the GATE simulations allow random and true coincidences to be distinguished, a comparison between the number of random coincidences estimated using the standard methods and the number obtained using GATE was performed. An overestimation in the number of random events was observed using the DW and SR methods. This overestimation decreases for LETs higher than 255 keV. It is additionally reduced when the single events which have undergone a Compton interaction in crystals before being detected are removed from the data. These two observations lead us to infer that the overestimation is due to inter-crystal scatter. The effect of this mismatch in the reconstructed images is important for quantification because it leads to an underestimation of activity. This was shown using a hot-cold-background source with 3.7 MBq total activity in the background region and a 1.59 MBq total activity in the hot region. For both 200 keV and 400 keV LET, an overestimation of random coincidences for the DW and SR methods was observed, resulting in approximately 1.5% or more (at 200 keV LET: 1.7% for DW and 7% for SR) and less than 1% (at 400 keV LET: both methods) underestimation of activity within the background region. In almost all cases, images obtained by compensating for random events in the reconstruction
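    For reference, the singles-rate (SR) estimate discussed above has a simple closed form for a detector pair, R = 2·τ·S1·S2, while the delayed-window (DW) estimate is obtained by counting coincidences in a delayed timing window. The snippet below shows the SR formula with purely illustrative numbers.

```python
def randoms_singles_rate(s1, s2, tau):
    """Singles-rate (SR) estimate of the random-coincidence rate for a detector
    pair with singles rates s1 and s2 and coincidence time window tau."""
    return 2.0 * tau * s1 * s2

# e.g. two crystals at 20 kcps singles each and a 10 ns coincidence time window
print(randoms_singles_rate(2.0e4, 2.0e4, 10e-9))   # -> 8 random coincidences/s
```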

  17. Attenuation and scatter correction in SPECT

    International Nuclear Information System (INIS)

    Pant, G.S.; Pandey, A.K.

    2000-01-01

    While passing through matter, photons undergo various types of interactions. In the process, some photons are completely absorbed, some are scattered in different directions with or without any change in their energy and some pass through unattenuated. These unattenuated photons carry the information with them. However, the image data gets corrupted with attenuation and scatter processes. This paper deals with the effect of these two processes in nuclear medicine images and suggests the methods to overcome them

  18. SU-E-QI-03: Compartment Modeling of Dynamic Brain PET - The Effect of Scatter and Random Corrections On Parameter Errors

    International Nuclear Information System (INIS)

    Häggström, I; Karlsson, M; Larsson, A; Schmidtlein, C

    2014-01-01

    Purpose: To investigate the effects of corrections for random and scattered coincidences on kinetic parameters in brain tumors, by using ten Monte Carlo (MC) simulated dynamic FLT-PET brain scans. Methods: The GATE MC software was used to simulate ten repetitions of a 1 hour dynamic FLT-PET scan of a voxelized head phantom. The phantom comprised six normal head tissues, plus inserted regions for blood and tumor tissue. Different time-activity-curves (TACs) for all eight tissue types were used in the simulation and were generated in Matlab using a 2-tissue model with preset parameter values (K1,k2,k3,k4,Va,Ki). The PET data were reconstructed into 28 frames by both ordered-subset expectation maximization (OSEM) and 3D filtered back-projection (3DFBP). Five image sets were reconstructed, all with normalization and different additional corrections C (A=attenuation, R=random, S=scatter): Trues (AC), trues+randoms (ARC), trues+scatters (ASC), total counts (ARSC) and total counts (AC). Corrections for randoms and scatters were based on real random and scatter sinograms that were back-projected, blurred and then forward projected and scaled to match the real counts. Weighted non-linear least-squares fitting of TACs from the blood and tumor regions was used to obtain parameter estimates. Results: The bias was not significantly different for trues (AC), trues+randoms (ARC), trues+scatters (ASC) and total counts (ARSC) for either 3DFBP or OSEM (p<0.05). Total counts with only AC stood out, however, with an up to 160% larger bias. In general, there was no difference in bias found between 3DFBP and OSEM, except in parameters Va and Ki. Conclusion: According to our results, the methodology of correcting the PET data for randoms and scatters performed well for the dynamic images where frames have much lower counts compared to static images. Generally, no bias was introduced by the corrections and their importance was emphasized since omitting them increased bias extensively
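    The time-activity curves mentioned above come from a standard 2-tissue compartment model. Purely as an illustration of how such a TAC is generated, the sketch below convolves a toy plasma input with the model impulse response and adds a blood-volume term Va; the parameter values and input function are placeholders, not those used in the study.

```python
import numpy as np

def two_tissue_tac(t, cp, K1, k2, k3, k4, Va):
    """PET time-activity curve from a standard 2-tissue compartment model.

    t  : uniformly spaced time grid
    cp : plasma input function sampled on t
    Returns the modelled signal (1 - Va) * C_T(t) + Va * cp(t).
    """
    s = k2 + k3 + k4
    d = np.sqrt(s ** 2 - 4.0 * k2 * k4)
    a1, a2 = (s - d) / 2.0, (s + d) / 2.0
    # impulse response of the two tissue compartments
    h = (K1 / (a2 - a1)) * ((k3 + k4 - a1) * np.exp(-a1 * t)
                            + (a2 - k3 - k4) * np.exp(-a2 * t))
    dt = t[1] - t[0]
    ct = np.convolve(cp, h)[: len(t)] * dt          # discrete convolution
    return (1.0 - Va) * ct + Va * cp

t = np.arange(0.0, 60.0, 0.1)                        # minutes
cp = 10.0 * t * np.exp(-t / 2.0)                     # toy plasma input
tac = two_tissue_tac(t, cp, K1=0.1, k2=0.15, k3=0.05, k4=0.01, Va=0.05)
```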

  19. Investigation of electron-loss and photon scattering correction factors for FAC-IR-300 ionization chamber

    International Nuclear Information System (INIS)

    Mohammadi, S.M.; Tavakoli-Anbaran, H.; Zeinali, H.Z.

    2017-01-01

    The parallel-plate free-air ionization chamber termed FAC-IR-300 was designed at the Atomic Energy Organization of Iran, AEOI. This chamber is used for low and medium X-ray dosimetry on the primary standard level. In order to evaluate the air-kerma, some correction factors such as electron-loss correction factor (k e ) and photon scattering correction factor (k sc ) are needed. k e factor corrects the charge loss from the collecting volume and k sc factor corrects the scattering of photons into collecting volume. In this work k e and k sc were estimated by Monte Carlo simulation. These correction factors are calculated for mono-energy photon. As a result of the simulation data, the k e and k sc values for FAC-IR-300 ionization chamber are 1.0704 and 0.9982, respectively.

  20. Analysis and development of methods of correcting for heterogeneities to cobalt-60: computing application

    International Nuclear Information System (INIS)

    Kappas, K.

    1982-11-01

    The purpose of this work is the analysis of the influence of inhomogeneities of the human body on the determination of the dose in Cobalt-60 radiation therapy. The first part is dedicated to the physical characteristics of inhomogeneities and to the conventional methods of correction. New methods of correction are proposed based on the analysis of the scatter. This analysis makes it possible to take their physical characteristics, and the corresponding modifications of the dose, into account with greater accuracy: "the differential TAR method" and "the Beam Subtraction Method". The second part is dedicated to the computer implementation of the second correction method for routine application in hospital [fr

  1. Attenuation correction for the HRRT PET-scanner using transmission scatter correction and total variation regularization.

    Science.gov (United States)

    Keller, Sune H; Svarer, Claus; Sibomana, Merence

    2013-09-01

    In the standard software for the Siemens high-resolution research tomograph (HRRT) positron emission tomography (PET) scanner the most commonly used segmentation in the μ -map reconstruction for human brain scans is maximum a posteriori for transmission (MAP-TR). Bias in the lower cerebellum and pons in HRRT brain images have been reported. The two main sources of the problem with MAP-TR are poor bone/soft tissue segmentation below the brain and overestimation of bone mass in the skull. We developed the new transmission processing with total variation (TXTV) method that introduces scatter correction in the μ-map reconstruction and total variation filtering to the transmission processing. Comparing MAP-TR and the new TXTV with gold standard CT-based attenuation correction, we found that TXTV has less bias as compared to MAP-TR. We also compared images acquired at the HRRT scanner using TXTV to the GE Advance scanner images and found high quantitative correspondence. TXTV has been used to reconstruct more than 4000 HRRT scans at seven different sites with no reports of biases. TXTV-based reconstruction is recommended for human brain scans on the HRRT.

  2. Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects

    Science.gov (United States)

    Gordon, Howard R.; Castano, Diego J.

    1987-01-01

    Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.

  3. An FFT-based Method for Attenuation Correction in Fluorescence Confocal Microscopy

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Bakker, M.

    1993-01-01

    A problem in three-dimensional imaging by a confocal scanning laser microscope (CSLM) in the (epi)fluorescence mode is the darkening of the deeper layers due to absorption and scattering of both the excitation and the fluorescence light. In this paper we propose a new method to correct for these

  4. Application of the equivalent radiator method for radiative corrections to the spectra of elastic electron scattering by nuclei

    Directory of Open Access Journals (Sweden)

    I. S. Timchenko

    2015-07-01

    For calculating the radiative tails in the spectra of inelastic electron scattering by nuclei, an approximation, namely the equivalent radiator method (ERM), is used. However, the applicability of this method for evaluating the radiative tail from the elastic scattering peak has been little investigated, and it has therefore become the subject of the present study for the case of light nuclei. As a result, spectral regions were found where a significant discrepancy between the ERM calculation and the exact-formula calculation was observed. A link was established between this phenomenon and the diffraction minimum of the squared form factor of the nuclear ground state. A variety of calculations were carried out for different kinematics of electron scattering by nuclei. The analysis of the calculation results has shown the conditions under which the equivalent radiator method can be applied for adequately evaluating the radiative tail of the elastic scattering peak.

  5. Virtual hadronic and heavy-fermion O(α²) corrections to Bhabha scattering

    Energy Technology Data Exchange (ETDEWEB)

    Actis, Stefano [Inst. fuer Theoretische Physik E, RWTH Aachen (Germany); Czakon, Michal [Wuerzburg Univ. (Germany). Inst. fuer Theoretische Physik und Astrophysik; Uniwersytet Slaski, Katowice (Poland). Inst. of Physics and Chemistry of Metals; Gluza, Janusz [Uniwersytet Slaski, Katowice (Poland). Inst. of Physics and Chemistry of Metals; Riemann, Tord [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2008-07-15

    Effects of vacuum polarization by hadronic and heavy-fermion insertions were the last unknown two-loop QED corrections to high-energy Bhabha scattering. Here we describe the corrections in detail and explore their numerical influence. The hadronic contributions to the virtual O(α²) QED corrections to the Bhabha-scattering cross-section are evaluated using dispersion relations and computing the convolution of hadronic data with perturbatively calculated kernel functions. The technique of dispersion integrals is also employed to derive the virtual O(α²) corrections generated by muon-, tau- and top-quark loops in the small electron-mass limit for arbitrary values of the internal-fermion masses. At a meson factory with 1 GeV center-of-mass energy the complete effect of hadronic and heavy-fermion corrections amounts to less than 0.5 per mille and reaches, at 10 GeV, up to about 2 per mille. At the Z resonance it amounts to 2.3 per mille at 3 degrees; overall, hadronic corrections are less than 4 per mille. For ILC energies (500 GeV or above), the combined effect of hadrons and heavy fermions becomes 6 per mille at 3 degrees; hadrons contribute less than 20 per mille in the whole angular region. (orig.)

  6. Methods of correcting Anger camera deadtime losses

    International Nuclear Information System (INIS)

    Sorenson, J.A.

    1976-01-01

    Three different methods of correcting for Anger camera deadtime loss were investigated. These included analytic methods (mathematical modeling), the marker-source method, and a new method based on counting "pileup" events appearing in a pulse-height analyzer window positioned above the photopeak of interest. The studies were done with 99mTc on a Searle Radiographics camera with a measured deadtime of about 6 μsec. Analytic methods were found to be unreliable because of unpredictable changes in deadtime with changes in radiation scattering conditions. Both the marker-source method and the pileup-counting method were found to be accurate to within a few percent for true counting rates of up to about 200 kcps, with the pileup-counting method giving better results. This finding applied to sources at depths ranging up to 10 cm of pressed wood. The relative merits of the two methods are discussed
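    The paralyzable detector model underlying the analytic approach above relates the observed rate m to the true rate n via m = n·exp(-n·τ). The sketch below inverts this relation by fixed-point iteration to obtain a dead-time correction factor; it illustrates only the model-based approach (not the marker-source or pileup-counting methods), and the numbers are examples, not measurements from the paper.

```python
import numpy as np

def true_rate_paralyzable(observed, tau, n_iter=200):
    """Invert the paralyzable dead-time model m = n * exp(-n * tau) for the true
    rate n by fixed-point iteration (converges on the low-rate branch, i.e. for
    observed rates below the maximum observable rate 1 / (e * tau))."""
    n = float(observed)
    for _ in range(n_iter):
        n = observed * np.exp(n * tau)
    return n

observed, tau = 45e3, 6e-6          # 45 kcps observed, 6 microsecond dead time
n_true = true_rate_paralyzable(observed, tau)
print(n_true, n_true / observed)    # true rate and dead-time correction factor
```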

  7. Application of the method of continued fractions for electron scattering by linear molecules

    International Nuclear Information System (INIS)

    Lee, M.-T.; Iga, I.; Fujimoto, M.M.; Lara, O.; Brasilia Univ., DF

    1995-01-01

    The method of continued fractions (MCF) of Horacek and Sasakawa is adapted for the first time to study low-energy electron scattering by linear molecules. In particular, we have calculated the reactance K-matrices for an electron scattered by the hydrogen molecule and the hydrogen molecular ion, as well as by a polar LiH molecule, at the static-exchange level. For all the applications studied herein, the calculated physical quantities converge rapidly, even for a strongly polar molecule such as LiH, to the correct values, and in most cases the convergence is monotonic. Our study suggests that the MCF could be an efficient method for studying electron-molecule scattering and also photoionization of molecules. (Author)

  8. TU-F-18C-03: X-Ray Scatter Correction in Breast CT: Advances and Patient Testing

    International Nuclear Information System (INIS)

    Ramamurthy, S; Sechopoulos, I

    2014-01-01

    Purpose: To further develop and perform patient testing of an x-ray scatter correction algorithm for dedicated breast computed tomography (BCT). Methods: A previously proposed algorithm for x-ray scatter signal reduction in BCT imaging was modified and tested with a phantom and on patients. A wireless electronic positioner system was designed and added to the BCT system that positions a tungsten plate in and out of the x-ray beam. The interpolation used by the algorithm was replaced with a radial basis function-based algorithm, with automated exclusion of non-valid sampled points due to patient motion or other factors. A 3D adaptive noise reduction filter was also introduced to reduce the impact of scatter quantum noise post-reconstruction. The impact on image quality of the improved algorithm was evaluated using a breast phantom and seven patient breasts, using quantitative metrics such signal difference (SD) and signal difference-to-noise ratios (SDNR) and qualitatively using image profiles. Results: The improvements in the algorithm resulted in a more robust interpolation step, with no introduction of image artifacts, especially at the imaged object boundaries, which was an issue in the previous implementation. Qualitative evaluation of the reconstructed slices and corresponding profiles show excellent homogeneity of both the background and the higher density features throughout the whole imaged object, as well as increased accuracy in the Hounsfield Units (HU) values of the tissues. Profiles also demonstrate substantial increase in both SD and SDNR between glandular and adipose regions compared to both the uncorrected and system-corrected images. Conclusion: The improved scatter correction algorithm can be reliably used during patient BCT acquisitions with no introduction of artifacts, resulting in substantial improvement in image quality. Its impact on actual clinical performance needs to be evaluated in the future. Research Agreement, Koning Corp., Hologic

  9. Corrections to the leading eikonal amplitude for high-energy scattering and quasipotential approach

    International Nuclear Information System (INIS)

    Nguyen Suan Hani; Nguyen Duy Hung

    2003-12-01

    Asymptotic behaviour of the scattering amplitude for two scalar particles at high energy and fixed momentum transfer is reconsidered in quantum field theory. In the framework of the quasipotential approach and the modified perturbation theory, a systematic scheme for finding the leading eikonal scattering amplitudes and their corrections is developed and constructed. The connection between the solutions obtained by the quasipotential and functional approaches is also discussed. (author)

  10. QED corrections in deep-inelastic scattering from tensor polarized deuteron target

    CERN Document Server

    Gakh, G I

    2001-01-01

    The QED corrections in deep-inelastic scattering from a tensor-polarized deuteron target are considered. The calculations are based on the covariant parametrization of the deuteron quadrupole polarization tensor. The Drell-Yan representations in electrodynamics are used for describing the radiation of real and virtual particles

  11. WE-AB-207A-08: BEST IN PHYSICS (IMAGING): Advanced Scatter Correction and Iterative Reconstruction for Improved Cone-Beam CT Imaging On the TrueBeam Radiotherapy Machine

    Energy Technology Data Exchange (ETDEWEB)

    Wang, A; Paysan, P; Brehm, M; Maslowski, A; Lehmann, M; Messmer, P; Munro, P; Yoon, S; Star-Lack, J; Seghers, D [Varian Medical Systems, Palo Alto, CA (United States)

    2016-06-15

    Purpose: To improve CBCT image quality for image-guided radiotherapy by applying advanced reconstruction algorithms to overcome scatter, noise, and artifact limitations Methods: CBCT is used extensively for patient setup in radiotherapy. However, image quality generally falls short of diagnostic CT, limiting soft-tissue based positioning and potential applications such as adaptive radiotherapy. The conventional TrueBeam CBCT reconstructor uses a basic scatter correction and FDK reconstruction, resulting in residual scatter artifacts, suboptimal image noise characteristics, and other artifacts like cone-beam artifacts. We have developed an advanced scatter correction that uses a finite-element solver (AcurosCTS) to model the behavior of photons as they pass (and scatter) through the object. Furthermore, iterative reconstruction is applied to the scatter-corrected projections, enforcing data consistency with statistical weighting and applying an edge-preserving image regularizer to reduce image noise. The combined algorithms have been implemented on a GPU. CBCT projections from clinically operating TrueBeam systems have been used to compare image quality between the conventional and improved reconstruction methods. Planning CT images of the same patients have also been compared. Results: The advanced scatter correction removes shading and inhomogeneity artifacts, reducing the scatter artifact from 99.5 HU to 13.7 HU in a typical pelvis case. Iterative reconstruction provides further benefit by reducing image noise and eliminating streak artifacts, thereby improving soft-tissue visualization. In a clinical head and pelvis CBCT, the noise was reduced by 43% and 48%, respectively, with no change in spatial resolution (assessed visually). Additional benefits include reduction of cone-beam artifacts and reduction of metal artifacts due to intrinsic downweighting of corrupted rays. Conclusion: The combination of an advanced scatter correction with iterative reconstruction

  12. Coulomb corrections to nuclear scattering lengths and effective ranges for weakly bound systems

    International Nuclear Information System (INIS)

    Mur, V.D.; Popov, V.S.; Sergeev, A.V.

    1996-01-01

    A procedure is considered for extracting the purely nuclear scattering length a_s and effective range r_s (which correspond to a strong-interaction potential V_s with disregarded Coulomb interaction) from the experimentally determined nuclear quantities a_cs and r_cs, which are modified by Coulomb interaction. The Coulomb renormalization of a_s and r_s is especially strong if the system under study involves a level with energy close to zero (on the nuclear scale). This applies to the formulas that determine the Coulomb renormalization of the low-energy parameters of s-wave scattering (l=0). Detailed numerical calculations are performed for coefficients appearing in the equations that determine Coulomb corrections for various models of the potential V_s(r). This makes it possible to draw qualitative conclusions about the dependence of Coulomb corrections on the form of the strong-interaction potential and, in particular, on its small-distance behavior. A considerable enhancement of Coulomb corrections to the effective range r_s is found for potentials with a barrier

  13. Aethalometer multiple scattering correction Cref for mineral dust aerosols

    Science.gov (United States)

    Di Biagio, Claudia; Formenti, Paola; Cazaunau, Mathieu; Pangui, Edouard; Marchand, Nicolas; Doussin, Jean-François

    2017-08-01

    In this study we provide a first estimate of the Aethalometer multiple scattering correction Cref for mineral dust aerosols. Cref is an empirical constant used to correct the aerosol absorption coefficient measurements for the multiple scattering artefact of the Aethalometer; i.e. the filter fibres on which aerosols are deposited scatter light and this is miscounted as absorption. The Cref at 450 and 660 nm was obtained from the direct comparison of Aethalometer data (Magee Sci. AE31) with (i) the absorption coefficient calculated as the difference between the extinction and scattering coefficients measured by a Cavity Attenuated Phase Shift Extinction analyser (CAPS PMex) and a nephelometer respectively at 450 nm and (ii) the absorption coefficient from a MAAP (Multi-Angle Absorption Photometer) at 660 nm. Measurements were performed on seven dust aerosol samples generated in the laboratory by the mechanical shaking of natural parent soils issued from different source regions worldwide. The single scattering albedo (SSA) at 450 and 660 nm and the size distribution of the aerosols were also measured. Cref for mineral dust varies between 1.81 and 2.56 for a SSA of 0.85-0.96 at 450 nm and between 1.75 and 2.28 for a SSA of 0.98-0.99 at 660 nm. The calculated mean for dust is 2.09 (±0.22) at 450 nm and 1.92 (±0.17) at 660 nm. With this new Cref the dust absorption coefficient by the Aethalometer is about 2 % (450 nm) and 11 % (660 nm) higher than that obtained by using Cref = 2.14 at both 450 and 660 nm, as usually assumed in the literature. This difference induces a change of up to 3 % in the dust SSA at 660 nm. The Cref seems to be independent of the fine and coarse particle size fractions, and so the obtained Cref can be applied to dust both close to sources and following transport. Additional experiments performed with pure kaolinite minerals and polluted ambient aerosols indicate Cref of 2.49 (±0.02) and 2.32 (±0.01) at 450 and 660 nm (SSA = 0.96-0.97) for
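    In practice the correction above is applied by dividing the Aethalometer attenuation coefficient by Cref. The snippet below reproduces the roughly 2 % change at 450 nm quoted above when the dust-specific value replaces the literature default of 2.14; the attenuation coefficient is a placeholder value, and filter-loading corrections are ignored.

```python
def aethalometer_absorption(b_atn, c_ref):
    """Convert the Aethalometer attenuation coefficient b_ATN into an aerosol
    absorption coefficient by dividing out the multiple-scattering constant Cref
    (filter-loading corrections are omitted in this simplified sketch)."""
    return b_atn / c_ref

b_atn_450 = 12.0   # hypothetical attenuation coefficient at 450 nm, in Mm^-1
ratio = aethalometer_absorption(b_atn_450, 2.09) / aethalometer_absorption(b_atn_450, 2.14)
print(ratio)       # about 1.02, i.e. ~2 % higher absorption with Cref = 2.09
```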

  14. Scatter Correction with Combined Single-Scatter Simulation and Monte Carlo Simulation Scaling Improved the Visual Artifacts and Quantification in 3-Dimensional Brain PET/CT Imaging with 15O-Gas Inhalation.

    Science.gov (United States)

    Magota, Keiichi; Shiga, Tohru; Asano, Yukari; Shinyama, Daiki; Ye, Jinghan; Perkins, Amy E; Maniawski, Piotr J; Toyonaga, Takuya; Kobayashi, Kentaro; Hirata, Kenji; Katoh, Chietsugu; Hattori, Naoya; Tamaki, Nagara

    2017-12-01

    In 3-dimensional PET/CT imaging of the brain with 15 O-gas inhalation, high radioactivity in the face mask creates cold artifacts and affects the quantitative accuracy when scatter is corrected by conventional methods (e.g., single-scatter simulation [SSS] with tail-fitting scaling [TFS-SSS]). Here we examined the validity of a newly developed scatter-correction method that combines SSS with a scaling factor calculated by Monte Carlo simulation (MCS-SSS). Methods: We performed phantom experiments and patient studies. In the phantom experiments, a plastic bottle simulating a face mask was attached to a cylindric phantom simulating the brain. The cylindric phantom was filled with 18 F-FDG solution (3.8-7.0 kBq/mL). The bottle was filled with nonradioactive air or various levels of 18 F-FDG (0-170 kBq/mL). Images were corrected either by TFS-SSS or MCS-SSS using the CT data of the bottle filled with nonradioactive air. We compared the image activity concentration in the cylindric phantom with the true activity concentration. We also performed 15 O-gas brain PET based on the steady-state method on patients with cerebrovascular disease to obtain quantitative images of cerebral blood flow and oxygen metabolism. Results: In the phantom experiments, a cold artifact was observed immediately next to the bottle on TFS-SSS images, where the image activity concentrations in the cylindric phantom were underestimated by 18%, 36%, and 70% at the bottle radioactivity levels of 2.4, 5.1, and 9.7 kBq/mL, respectively. At higher bottle radioactivity, the image activity concentrations in the cylindric phantom were greater than 98% underestimated. For the MCS-SSS, in contrast, the error was within 5% at each bottle radioactivity level, although the image generated slight high-activity artifacts around the bottle when the bottle contained significantly high radioactivity. In the patient imaging with 15 O 2 and C 15 O 2 inhalation, cold artifacts were observed on TFS-SSS images, whereas

  15. In-medium effects in K+ scattering versus Glauber model with noneikonal corrections

    International Nuclear Information System (INIS)

    Eliseev, S.M.; Rihan, T.H.

    1996-01-01

    The discrepancy between the experimental and the theoretical ratio R of the total cross sections, R = σ(K⁺-¹²C)/6σ(K⁺-d), at momenta up to 800 MeV/c is discussed in the framework of the Glauber multiple scattering approach. It is shown that various corrections, such as adopting relativistic K⁺-N amplitudes as well as noneikonal corrections, seem to fail in reproducing the experimental data, especially at higher momenta. 17 refs., 1 fig

  16. Nonuniform Illumination Correction Algorithm for Underwater Images Using Maximum Likelihood Estimation Method

    Directory of Open Access Journals (Sweden)

    Sonali Sachin Sankpal

    2016-01-01

    Scattering and absorption of light are the main reasons for limited visibility in water. The suspended particles and dissolved chemical compounds in water are also responsible for scattering and absorption of light in water. The limited visibility in water results in degradation of underwater images. The visibility can be increased by using an artificial light source in the underwater imaging system, but the artificial light illuminates the scene in a nonuniform fashion, producing a bright spot at the center with dark regions in the surroundings. In some cases the imaging system itself creates dark regions in the image by casting shadows on the objects. The problem of nonuniform illumination is neglected in most image enhancement techniques for underwater images, and very few methods show results on color images. This paper suggests a method for nonuniform illumination correction for underwater images. The method assumes that natural underwater images are Rayleigh distributed. This paper used maximum likelihood estimation of the scale parameter to map the image distribution to a Rayleigh distribution. The method is compared with traditional methods for nonuniform illumination correction using no-reference image quality metrics like average luminance, average information entropy, normalized neighborhood function, average contrast, and comprehensive assessment function.
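    The scale-parameter estimate at the heart of the method above has a closed form for Rayleigh-distributed data, sigma_hat = sqrt(sum(x²)/(2N)). The sketch below computes that estimate and uses it for a crude block-wise illumination normalization; the block-wise mapping is an illustrative stand-in for the paper's actual mapping, not a reproduction of it.

```python
import numpy as np

def rayleigh_scale_mle(x):
    """Maximum-likelihood estimate of the Rayleigh scale parameter:
    sigma_hat = sqrt(sum(x^2) / (2 * N))."""
    x = np.asarray(x, dtype=float).ravel()
    return np.sqrt(np.sum(x ** 2) / (2.0 * x.size))

def illumination_normalize(channel, block=32):
    """Crude block-wise normalization of one image channel: each block is rescaled
    so that its local Rayleigh scale matches the global scale. This is only an
    illustrative stand-in for the paper's mapping, not a reproduction of it."""
    out = channel.astype(float)
    global_scale = rayleigh_scale_mle(out)
    rows, cols = out.shape
    for i in range(0, rows, block):
        for j in range(0, cols, block):
            patch = out[i:i + block, j:j + block]
            local_scale = rayleigh_scale_mle(patch) + 1e-9
            out[i:i + block, j:j + block] = patch * (global_scale / local_scale)
    return np.clip(out, 0.0, 255.0)

# Toy example: Rayleigh-distributed image with synthetic nonuniform illumination
rng = np.random.default_rng(0)
img = rng.rayleigh(scale=30.0, size=(128, 128))
img *= 0.4 + 0.6 * np.exp(-((np.arange(128) - 64) ** 2) / 2000.0)   # darker edges
flat = illumination_normalize(img, block=32)
```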

  17. Precise method for correcting count-rate losses in scintillation cameras

    International Nuclear Information System (INIS)

    Madsen, M.T.; Nickles, R.J.

    1986-01-01

    Quantitative studies performed with scintillation detectors often require corrections for lost data because of the finite resolving time of the detector. Methods that monitor losses by means of a reference source or pulser have unacceptably large statistical fluctuations associated with their correction factors. Analytic methods that model the detector as a paralyzable system require an accurate estimate of the system resolving time. Because the apparent resolving time depends on many variables, including the window setting, source distribution, and the amount of scattering material, significant errors can be introduced by relying on a resolving time obtained from phantom measurements. These problems can be overcome by curve-fitting the data from a reference source to a paralyzable model in which the true total count rate in the selected window is estimated from the observed total rate. The resolving time becomes a free parameter in this method which is optimized to provide the best fit to the observed reference data. The fitted curve has the inherent accuracy of the reference source method with the precision associated with the observed total image count rate. Correction factors can be simply calculated from the ratio of the true reference source rate and the fitted curve. As a result, the statistical uncertainty of the data corrected by this method is not significantly increased

  18. Patient-specific scatter correction in clinical cone beam computed tomography imaging made possible by the combination of Monte Carlo simulations and a ray tracing algorithm

    International Nuclear Information System (INIS)

    Thing, Rune S.; Bernchou, Uffe; Brink, Carsten; Mainegra-Hing, Ernesto

    2013-01-01

    Purpose: Cone beam computed tomography (CBCT) image quality is limited by scattered photons. Monte Carlo (MC) simulations provide the ability to predict the patient-specific scatter contamination in clinical CBCT imaging. Lengthy simulations prevent MC-based scatter correction from being fully implemented in a clinical setting. This study investigates the combination of using fast MC simulations to predict scatter distributions with a ray tracing algorithm to allow calibration between simulated and clinical CBCT images. Material and methods: An EGSnrc-based user code (egs_cbct) was used to perform MC simulations of an Elekta XVI CBCT imaging system. A 60 keV x-ray source was used, and air kerma scored at the detector plane. Several variance reduction techniques (VRTs) were used to increase the scatter calculation efficiency. Three patient phantoms based on CT scans were simulated, namely a brain, a thorax and a pelvis scan. A ray tracing algorithm was used to calculate the detector signal due to primary photons. A total of 288 projections were simulated, one for each thread on the computer cluster used for the investigation. Results: Scatter distributions for the brain, thorax and pelvis scan were simulated within 2% statistical uncertainty in two hours per scan. Within the same time, the ray tracing algorithm provided the primary signal for each of the projections. Thus, all the data needed for MC-based scatter correction in clinical CBCT imaging was obtained within two hours per patient, using a full simulation of the clinical CBCT geometry. Conclusions: This study shows that use of MC-based scatter corrections in CBCT imaging has a great potential to improve CBCT image quality. By use of powerful VRTs to predict scatter distributions and a ray tracing algorithm to calculate the primary signal, it is possible to obtain the necessary data for patient-specific MC scatter correction within two hours per patient

  19. Analytical multiple scattering correction to the Mie theory: Application to the analysis of the lidar signal

    Science.gov (United States)

    Flesia, C.; Schwendimann, P.

    1992-01-01

    The contribution of the multiple scattering to the lidar signal is dependent on the optical depth tau. Therefore, the lidar analysis, based on the assumption that the multiple scattering can be neglected, is limited to cases characterized by low values of the optical depth (tau less than or equal to 0.1) and hence excludes scattering from most clouds. Moreover, all inversion methods relating the lidar signal to number densities and particle size must be modified since the multiple scattering affects the direct analysis. The essential requirements of a realistic model for lidar measurements which includes the multiple scattering and which can be applied to practical situations are as follows. (1) What is required is not only a correction term or a rough approximation describing the results of a certain experiment, but a general theory of multiple scattering tying together the relevant physical parameters we seek to measure. (2) An analytical generalization of the lidar equation which can be applied in the case of a realistic aerosol is required.

  20. Magnetic resonance imaging-guided attenuation and scatter corrections in three-dimensional brain positron emission tomography

    CERN Document Server

    Zaidi, H; Slosman, D O

    2003-01-01

    Reliable attenuation correction represents an essential component of the long chain of modules required for the reconstruction of artifact-free, quantitative brain positron emission tomography (PET) images. In this work we demonstrate the proof of principle of segmented magnetic resonance imaging (MRI)-guided attenuation and scatter corrections in 3D brain PET. We have developed a method for attenuation correction based on registered T1-weighted MRI, eliminating the need of an additional transmission (TX) scan. The MR images were realigned to preliminary reconstructions of PET data using an automatic algorithm and then segmented by means of a fuzzy clustering technique which identifies tissues of significantly different density and composition. The voxels belonging to different regions were classified into air, skull, brain tissue and nasal sinuses. These voxels were then assigned theoretical tissue-dependent attenuation coefficients as reported in the ICRU 44 report followed by Gaussian smoothing and additio...
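
    A rough sketch of the segmentation-to-attenuation-map step described above, assuming a hard four-class segmentation in place of the paper's fuzzy clustering; the attenuation coefficients are generic placeholder values, not those taken from the ICRU 44 report.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Placeholder linear attenuation coefficients (cm^-1); illustrative only.
MU = {"air": 0.0, "brain": 0.096, "skull": 0.151, "sinus": 0.054}

def mri_to_mumap(labels, sigma_vox=2.0):
    """Build a smoothed attenuation map from a segmented MR volume.

    labels : integer volume with 0=air, 1=brain tissue, 2=skull, 3=nasal sinus
             (a simple hard segmentation standing in for the fuzzy clustering
             described in the record above).
    """
    lut = np.array([MU["air"], MU["brain"], MU["skull"], MU["sinus"]])
    mumap = lut[labels]                      # assign tissue-dependent mu
    return gaussian_filter(mumap, sigma=sigma_vox)   # final smoothing step

# Toy segmented volume: a cube of "brain" with a slab of "skull".
labels = np.zeros((64, 64, 64), dtype=int)
labels[16:48, 16:48, 16:48] = 1
labels[14:50, 14:50, 14:16] = 2
mumap = mri_to_mumap(labels)
```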

  1. Modification of the method of polarized orbitals for electron--alkali-metal scattering: Application to e-Li

    International Nuclear Information System (INIS)

    Bhatia, A.K.; Temkin, A.; Silver, A.; Sullivan, E.C.

    1978-01-01

    The method of polarized orbitals is modified to treat low-energy scattering of electrons from highly polarizable systems, specifically alkali-metal atoms. The modification is carried out in the particular context of the e-Li system, but the procedure is general; it consists of modifying the polarized orbital, so that when used in the otherwise orthodox form of the method, it gives (i) the correct electron affinity of the negative ion (in this case Li^-), (ii) the proper (i.e., Levinson-Swan) number of nodes of the associated zero-energy scattering orbital, and (iii) the correct polarizability. A procedure is devised whereby the scattering length can be calculated from the (known) electron affinity without solving the bound-state equation. Using this procedure we adduce a ^1S scattering length of 8.69 a_0. (The ^3S scattering length is -9.22 a_0.) The above modifications can also be carried out in the (lesser) exchange adiabatic approximation. However, they lead to qualitatively incorrect ^3S phase shifts. The modified polarized-orbital phase shifts are qualitatively similar to close-coupling and elaborate variational calculations. Quantitative differences from the latter calculations, however, remain; they are manifested most noticeably in the very-low-energy total and differential spin-flip cross sections.

  2. Aethalometer multiple scattering correction Cref for mineral dust aerosols

    Directory of Open Access Journals (Sweden)

    C. Di Biagio

    2017-08-01

    In this study we provide a first estimate of the Aethalometer multiple scattering correction Cref for mineral dust aerosols. Cref is an empirical constant used to correct the aerosol absorption coefficient measurements for the multiple scattering artefact of the Aethalometer; i.e. the filter fibres on which aerosols are deposited scatter light and this is miscounted as absorption. The Cref at 450 and 660 nm was obtained from the direct comparison of Aethalometer data (Magee Sci. AE31) with (i) the absorption coefficient calculated as the difference between the extinction and scattering coefficients measured by a Cavity Attenuated Phase Shift Extinction analyser (CAPS PMex) and a nephelometer, respectively, at 450 nm and (ii) the absorption coefficient from a MAAP (Multi-Angle Absorption Photometer) at 660 nm. Measurements were performed on seven dust aerosol samples generated in the laboratory by the mechanical shaking of natural parent soils issued from different source regions worldwide. The single scattering albedo (SSA) at 450 and 660 nm and the size distribution of the aerosols were also measured. Cref for mineral dust varies between 1.81 and 2.56 for a SSA of 0.85–0.96 at 450 nm and between 1.75 and 2.28 for a SSA of 0.98–0.99 at 660 nm. The calculated mean for dust is 2.09 (±0.22) at 450 nm and 1.92 (±0.17) at 660 nm. With this new Cref the dust absorption coefficient by the Aethalometer is about 2% (450 nm) and 11% (660 nm) higher than that obtained by using Cref = 2.14 at both 450 and 660 nm, as usually assumed in the literature. This difference induces a change of up to 3% in the dust SSA at 660 nm. The Cref seems to be independent of the fine and coarse particle size fractions, and so the obtained Cref can be applied to dust both close to sources and following transport. Additional experiments performed with pure kaolinite minerals and polluted ambient aerosols indicate Cref of 2.49 (±0.02) and 2
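
    Operationally, Cref is the ratio of the Aethalometer attenuation coefficient to a reference absorption coefficient. A minimal sketch, with made-up numbers standing in for one dust sample, might look like this:

```python
import numpy as np

def aethalometer_cref(b_atn, b_abs_ref):
    """Multiple scattering correction Cref = b_ATN / b_abs,ref.

    b_atn     : attenuation coefficient reported by the Aethalometer (Mm^-1)
    b_abs_ref : reference absorption coefficient, e.g. extinction minus
                scattering (450 nm) or MAAP (660 nm), in Mm^-1
    """
    return np.asarray(b_atn, float) / np.asarray(b_abs_ref, float)

# Toy time series for one sample (invented values).
b_atn = np.array([210.0, 195.0, 220.0])
b_abs = np.array([100.0, 95.0, 104.0])
cref = aethalometer_cref(b_atn, b_abs)
print(cref.mean())          # sample-mean Cref
```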

  3. On the radiative corrections to the neutrino deep inelastic scattering

    International Nuclear Information System (INIS)

    Bardin, D.Yu.; Dokuchaeva, V.A.

    1986-01-01

    A unique set of formulae is presented for the radiative corrections to the double differential cross section of deep inelastic neutrino scattering in the charged and neutral current channels, within a simple quark-parton model and in the on-mass-shell renormalization scheme. It is shown that these cross sections, when integrated to one-dimensional distributions or to the total cross section, reproduce many results existing in the literature.

  4. Optimization-based scatter estimation using primary modulation for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yi; Ma, Jingchen; Zhao, Jun, E-mail: junzhao@sjtu.edu.cn [School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China); Song, Ying [Department of Radiation Oncology, West China Hospital, Sichuan University, Chengdu 610041 (China)

    2016-08-15

    Purpose: Scatter reduces the image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method simultaneously obtains the primary and scatter in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In the concept of primary modulation, the primary is modulated, but the scatter remains smooth by inserting a modulator between the x-ray source and the object. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. Simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of the CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy in scatter estimation and correction. This method is promising for scatter correction of various kinds of x-ray imaging modalities, such as x-ray radiography, cone beam CT, and the fourth-generation CT.
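
    To illustrate the flavour of such an optimization-based separation (not the paper's OSE objective), the sketch below splits a 1D modulated signal into a primary and a smooth scatter term using positivity and smoothness penalties; the penalty weights and the toy modulator pattern are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def separate_primary_scatter(m, T, lam_p=0.1, lam_s=10.0):
    """Toy 1D separation of primary p and smooth scatter s from m = T*p + s,
    where T is the known modulator transmission pattern.

    The penalty terms mirror the priors listed above (positivity, locally
    smooth primary, smooth scatter); the weights are illustrative choices.
    """
    n = m.size

    def objective(z):
        p, s = z[:n], z[n:]
        resid = T * p + s - m
        dp, ds = np.diff(p), np.diff(s)
        return resid @ resid + lam_p * (dp @ dp) + lam_s * (ds @ ds)

    z0 = np.concatenate([m / T, np.zeros(n)])        # crude initial guess
    bounds = [(0.0, None)] * (2 * n)                 # positivity priors
    res = minimize(objective, z0, method="L-BFGS-B", bounds=bounds)
    return res.x[:n], res.x[n:]

# Synthetic test: smooth primary and scatter, alternating modulator pattern.
x = np.linspace(0.0, 1.0, 100)
p_true = 1.0 + 0.5 * np.exp(-((x - 0.5) / 0.15) ** 2)
s_true = 0.4 + 0.2 * x
T = np.where(np.arange(x.size) % 2 == 0, 1.0, 0.7)
m = T * p_true + s_true
p_est, s_est = separate_primary_scatter(m, T)
```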

  5. CORRECTING FOR INTERPLANETARY SCATTERING IN VELOCITY DISPERSION ANALYSIS OF SOLAR ENERGETIC PARTICLES

    International Nuclear Information System (INIS)

    Laitinen, T.; Dalla, S.; Huttunen-Heikinmaa, K.; Valtonen, E.

    2015-01-01

    To understand the origin of Solar Energetic Particles (SEPs), we must study their injection time relative to other solar eruption manifestations. Traditionally the injection time is determined using the Velocity Dispersion Analysis (VDA) where a linear fit of the observed event onset times at 1 AU to the inverse velocities of SEPs is used to derive the injection time and path length of the first-arriving particles. VDA does not, however, take into account that the particles that produce a statistically observable onset at 1 AU have scattered in the interplanetary space. We use Monte Carlo test particle simulations of energetic protons to study the effect of particle scattering on the observable SEP event onset above pre-event background, and consequently on VDA results. We find that the VDA results are sensitive to the properties of the pre-event and event particle spectra as well as SEP injection and scattering parameters. In particular, a VDA-obtained path length that is close to the nominal Parker spiral length does not imply that the VDA injection time is correct. We study the delay to the observed onset caused by scattering of the particles and derive a simple estimate for the delay time by using the rate of intensity increase at the SEP onset as a parameter. We apply the correction to a magnetically well-connected SEP event of 2000 June 10, and show it to improve both the path length and injection time estimates, while also increasing the error limits to better reflect the inherent uncertainties of VDA
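
    The uncorrected VDA step that the study starts from is a straight-line fit of onset time against inverse speed; a minimal sketch (with synthetic, scatter-free onsets) is shown below. Variable names and units are assumptions for illustration.

```python
import numpy as np

AU_KM = 1.495978707e8          # km per AU
C_KM_S = 2.99792458e5          # speed of light, km/s

def vda_fit(onset_times_s, speeds_km_s):
    """Classic VDA: fit t_onset = t_inj + L / v.

    onset_times_s : observed onset times at 1 AU (s since some epoch)
    speeds_km_s   : particle speeds for each energy channel (km/s)

    Returns the apparent injection time (s) and path length (AU).
    Interplanetary scattering, as discussed above, biases both quantities.
    """
    inv_v = 1.0 / np.asarray(speeds_km_s, dtype=float)           # s/km
    slope, intercept = np.polyfit(inv_v, np.asarray(onset_times_s, float), 1)
    return intercept, slope / AU_KM

# Synthetic example: injection at t=0, 1.2 AU path, protons at 0.3c-0.7c.
speeds = np.array([0.3, 0.4, 0.5, 0.6, 0.7]) * C_KM_S
onsets = 1.2 * AU_KM / speeds            # ideal, scatter-free onsets
t_inj, L = vda_fit(onsets, speeds)       # -> t_inj ~ 0, L ~ 1.2 AU
```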

  6. Next-to-soft corrections to high energy scattering in QCD and gravity

    Energy Technology Data Exchange (ETDEWEB)

    Luna, A.; Melville, S. [SUPA, School of Physics and Astronomy, University of Glasgow,Glasgow G12 8QQ, Scotland (United Kingdom); Naculich, S.G. [Department of Physics, Bowdoin College,Brunswick, ME 04011 (United States); White, C.D. [Centre for Research in String Theory, School of Physics and Astronomy,Queen Mary University of London,327 Mile End Road, London E1 4NS (United Kingdom)

    2017-01-12

    We examine the Regge (high energy) limit of 4-point scattering in both QCD and gravity, using recently developed techniques to systematically compute all corrections up to next-to-leading power in the exchanged momentum i.e. beyond the eikonal approximation. We consider the situation of two scalar particles of arbitrary mass, thus generalising previous calculations in the literature. In QCD, our calculation describes power-suppressed corrections to the Reggeisation of the gluon. In gravity, we confirm a previous conjecture that next-to-soft corrections correspond to two independent deflection angles for the incoming particles. Our calculations in QCD and gravity are consistent with the well-known double copy relating amplitudes in the two theories.

  7. WE-DE-207B-10: Library-Based X-Ray Scatter Correction for Dedicated Cone-Beam Breast CT: Clinical Validation

    Energy Technology Data Exchange (ETDEWEB)

    Shi, L; Zhu, L [Georgia Institute of Technology, Atlanta, GA (United States); Vedantham, S; Karellas, A [University of Massachusetts Medical School, Worcester, MA (United States)

    2016-06-15

    Purpose: Scatter contamination is detrimental to image quality in dedicated cone-beam breast CT (CBBCT), resulting in cupping artifacts and loss of contrast in reconstructed images. Such effects impede visualization of breast lesions and the quantitative accuracy. Previously, we proposed a library-based software approach to suppress scatter on CBBCT images. In this work, we quantify the efficacy and stability of this approach using datasets from 15 human subjects. Methods: A pre-computed scatter library is generated using Monte Carlo simulations for semi-ellipsoid breast models and a homogeneous fibroglandular/adipose tissue mixture encompassing the range reported in the literature. Projection datasets from 15 human subjects that cover the 95th percentile of breast dimensions and fibroglandular volume fraction were included in the analysis. Our investigations indicate that it is sufficient to consider the breast dimensions alone and that variation in fibroglandular fraction does not significantly affect the scatter-to-primary ratio. The breast diameter is measured from a first-pass reconstruction; the appropriate scatter distribution is selected from the library and deformed by considering the discrepancy in total projection intensity between the clinical dataset and the simulated semi-ellipsoidal breast. The deformed scatter distribution is subtracted from the measured projections for scatter correction. Spatial non-uniformity (SNU) and contrast-to-noise ratio (CNR) were used as quantitative metrics to evaluate the results. Results: On the 15 patient cases, our method reduced the overall image spatial non-uniformity (SNU) from 7.14%±2.94% (mean ± standard deviation) to 2.47%±0.68% in the coronal view and from 10.14%±4.1% to 3.02%±1.26% in the sagittal view. The average contrast-to-noise ratio (CNR) improved by a factor of 1.49±0.40 in the coronal view and by 2.12±1.54 in the sagittal view. Conclusion: We demonstrate the robustness and effectiveness of a library-based scatter correction
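
    A simplified sketch of the library-selection and subtraction logic described above; the library layout, the closest-diameter selection and the global intensity rescaling (standing in for the deformation step) are illustrative assumptions.

```python
import numpy as np

def library_scatter_correct(projection, library, breast_diameter_cm):
    """Sketch of library-based scatter subtraction for one CBBCT projection.

    library: list of (diameter_cm, scatter_map, simulated_total) tuples,
             pre-computed with Monte Carlo for semi-ellipsoid breast models.
    The scatter map is rescaled by the ratio of total projection intensity
    between the clinical data and the simulated breast, a simplified stand-in
    for the deformation step described above.
    """
    diameters = np.array([d for d, _, _ in library])
    idx = int(np.argmin(np.abs(diameters - breast_diameter_cm)))
    _, scatter_map, simulated_total = library[idx]
    scale = projection.sum() / simulated_total
    corrected = projection - scale * scatter_map
    return np.clip(corrected, 0.0, None)

# Toy library with two breast diameters and flat scatter maps.
lib = [(10.0, np.full((128, 128), 0.08), 1.3e6),
       (14.0, np.full((128, 128), 0.12), 1.9e6)]
proj = np.full((128, 128), 0.9)
corrected = library_scatter_correct(proj, lib, breast_diameter_cm=13.2)
```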

  8. Corrections on energy spectrum and scattering for fast neutron radiography at NECTAR facility

    International Nuclear Information System (INIS)

    Liu Shuquan; Thomas, Boucherl; Li Hang; Zou Yubin; Lu Yuanrong; Guo Zhiyu

    2013-01-01

    Distortions caused by the neutron spectrum and scattered neutrons are major problems in fast neutron radiography and should be considered for improving the image quality. This paper puts emphasis on the removal of these image distortions and deviations for fast neutron radiography performed at the NECTAR facility of the research reactor FRM II in Technische Universität München (TUM), Germany. The NECTAR energy spectrum is analyzed and established to modify the influence caused by the neutron spectrum, and the Point Scattered Function (PScF) simulated by the Monte-Carlo program MCNPX is used to evaluate scattering effects from the object and improve image quality. Good analysis results prove the sound effects of the above two corrections. (authors)

  9. Corrections on energy spectrum and scatterings for fast neutron radiography at NECTAR facility

    Science.gov (United States)

    Liu, Shu-Quan; Bücherl, Thomas; Li, Hang; Zou, Yu-Bin; Lu, Yuan-Rong; Guo, Zhi-Yu

    2013-11-01

    Distortions caused by the neutron spectrum and scattered neutrons are major problems in fast neutron radiography and should be considered for improving the image quality. This paper puts emphasis on the removal of these image distortions and deviations for fast neutron radiography performed at the NECTAR facility of the research reactor FRM II in Technische Universität München (TUM), Germany. The NECTAR energy spectrum is analyzed and established to modify the influence caused by the neutron spectrum, and the Point Scattered Function (PScF) simulated by the Monte-Carlo program MCNPX is used to evaluate scattering effects from the object and improve image quality. Good analysis results prove the sound effects of the above two corrections.

  10. A study on basic theory for CDCC method for three-body model of deuteron scattering

    International Nuclear Information System (INIS)

    Kawai, Mitsuji

    1988-01-01

    Recent studies have revealed that the CDCC method is valid for treating the decomposition process involved in deuteron scattering on the basis of a three-body model. However, theoretical support has not been developed for this method. The present study is aimed at determining whether a solution by the CDCC method can be obtained 'correctly' from a 'realistic' model Hamiltonian for deuteron scattering. Some researchers have recently pointed out that there are some problems with the conventional CDCC calculation procedure in view of the general scattering theory. These problems are associated with asymptotic forms of the wave functions, convergence of calculations, and boundary conditions. Considerations show that the problem with asymptotic forms of the wave function is not a fatal defect, though some compromise is necessary. The problem with the convergence of calculations is not very serious either. Discussions are made of the handling of boundary conditions. Thus, the present study indicates that the CDCC method can be applied satisfactorily to actual deuteron scattering, and that the model wave function for the CDCC method is consistent with the model Hamiltonian. (Nogami, K.)

  11. Experimental validation of a multi-energy x-ray adapted scatter separation method

    Science.gov (United States)

    Sossin, A.; Rebuffel, V.; Tabary, J.; Létang, J. M.; Freud, N.; Verger, L.

    2016-12-01

    Both in radiography and computed tomography (CT), recently emerged energy-resolved x-ray photon counting detectors enable the identification and quantification of individual materials comprising the inspected object. However, the approaches used for these operations require highly accurate x-ray images. The accuracy of the images is severely compromised by the presence of scattered radiation, which leads to a loss of spatial contrast and, more importantly, a bias in radiographic material imaging and artefacts in CT. The aim of the present study was to experimentally evaluate a recently introduced partial attenuation spectral scatter separation approach (PASSSA) adapted for multi-energy imaging. For this purpose, a prototype x-ray system was used. Several radiographic acquisitions of an anthropomorphic thorax phantom were performed. Reference primary images were obtained via the beam-stop (BS) approach. The attenuation images acquired from PASSSA-corrected data showed a substantial increase in local contrast and internal structure contour visibility when compared to uncorrected images. A substantial reduction of scatter induced bias was also achieved. Quantitatively, the developed method proved to be in relatively good agreement with the BS data. The application of the proposed scatter correction technique lowered the initial normalized root-mean-square error (NRMSE) of 45% between the uncorrected total and the reference primary spectral images by a factor of 9, thus reducing it to around 5%.
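
    The NRMSE figure quoted above can be reproduced conceptually with a few lines; the normalization by the reference mean is one common convention and may differ from the paper's exact definition.

```python
import numpy as np

def nrmse(estimate, reference):
    """Normalized root-mean-square error between an image and a reference,
    normalized here by the reference mean (one common convention)."""
    estimate = np.asarray(estimate, dtype=float)
    reference = np.asarray(reference, dtype=float)
    rmse = np.sqrt(np.mean((estimate - reference) ** 2))
    return rmse / reference.mean()

# Toy check mirroring the comparison above: an uncorrected image with a large
# scatter bias versus a corrected image close to the beam-stop reference.
reference = np.full((64, 64), 100.0)
uncorrected = reference + 45.0            # heavily scatter-biased
corrected = reference + 5.0               # after correction
print(nrmse(uncorrected, reference), nrmse(corrected, reference))  # ~0.45, ~0.05
```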

  12. Evaluation of systematic uncertainties caused by radiative corrections in experiments on deep inelastic ν_lN-scattering

    International Nuclear Information System (INIS)

    Bardin, D.Yu.

    1979-01-01

    Based on the simple quark-parton model of the strong interaction and on the Weinberg-Salam theory, compact formulae are derived for the radiative correction to the charged-current-induced deep inelastic scattering of neutrinos on nucleons. The radiative correction is found to be around 20-30%, i.e., the value typical for deep inelastic lN-scattering. The results obtained are rather different from the presently available estimations of the effect under consideration.

  13. Studies of coherent/Compton scattering method for bone mineral content measurement

    International Nuclear Information System (INIS)

    Sakurai, Kiyoko; Iwanami, Shigeru; Nakazawa, Keiji; Matsubayashi, Takashi; Imamura, Keiko.

    1980-01-01

    A measurement of bone mineral content by a coherent/Compton scattering method was described. A bone sample was irradiated by a collimated narrow beam of 59.6 keV gamma-rays emitted from a 300 mCi 241Am source, and the scattered radiations were detected using a collimated pure germanium detector placed at 90° to the incident beam. The ratio of the coherent to Compton peaks in a spectrum of the scattered radiations depends on the bone mineral content of the bone sample. The advantage of this method is that the bone mineral content of a small region in a bone can be accurately measured. Assuming that bone consists of two components, protein and bone mineral, and that the mass absorption coefficient for Compton scattering is independent of material, the coherent to Compton scattering ratio is linearly related to the percentage by weight of bone mineral. A calibration curve was obtained by measuring standard samples which were mixtures of Ca3(PO4)2 and H2O. The error due to the assumption about the mass absorption coefficient for Compton scattering and to the difference between true bone and the standard samples was estimated to be less than 3% within the range from 10 to 60% by weight of bone mineral. The fat in bone affects an estimated value by only 1.5% when it is 20% by weight. For the clinical application of this method, the location to be analyzed should be selected before the measurement with two X-ray images viewed from the source and the detector. These views would also be used to correct the difference in absorption between the coherent and Compton scattered radiations, whose energies are slightly different from each other. The absorbed dose to the analyzed region was approximately 150 mrad. The time required for one measurement in this study was about 10 minutes. (author)
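
    The linear calibration between the coherent/Compton peak ratio and the bone-mineral weight fraction can be sketched as follows; the standard-sample numbers are invented for illustration, not the measured calibration data.

```python
import numpy as np

# Made-up calibration standards (Ca3(PO4)2 / water mixtures): weight % mineral
# and the measured coherent-to-Compton peak ratio. Values are illustrative only.
w_std = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
r_std = np.array([0.021, 0.038, 0.055, 0.071, 0.089, 0.105])

# Linear calibration R = a*w + b, as described above.
a, b = np.polyfit(w_std, r_std, 1)

def mineral_weight_percent(ratio):
    """Estimate bone-mineral weight % from a measured coherent/Compton ratio."""
    return (ratio - b) / a

print(mineral_weight_percent(0.062))   # ~mid-30s percent for these toy numbers
```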

  14. Scatter and cross-talk correction for one-day acquisition of 123I-BMIPP and 99mTc-tetrofosmin myocardial SPECT.

    Science.gov (United States)

    Kaneta, Tomohiro; Kurihara, Hideyuki; Hakamatsuka, Takashi; Ito, Hiroshi; Maruoka, Shin; Fukuda, Hiroshi; Takahashi, Shoki; Yamada, Shogo

    2004-12-01

    123I-15-(p-iodophenyl)-3-(R,S)-methylpentadecanoic acid (BMIPP) and 99mTc-tetrofosmin (TET) are widely used for evaluation of myocardial fatty acid metabolism and perfusion, respectively. ECG-gated TET SPECT is also used for evaluation of myocardial wall motion. These tests are often performed on the same day to minimize both the time required and inconvenience to patients and medical staff. However, as 123I and 99mTc have similar emission energies (159 keV and 140 keV, respectively), it is necessary to consider not only scattered photons, but also primary photons of each radionuclide detected in the wrong window (cross-talk). In this study, we developed and evaluated the effectiveness of a new scatter and cross-talk correction imaging protocol. Fourteen patients with ischemic heart disease or heart failure (8 men and 6 women with a mean age of 69.4 yr, ranging from 45 to 94 yr) were enrolled in this study. In the routine one-day acquisition protocol, BMIPP SPECT was performed in the morning, with TET SPECT performed 4 h later. An additional SPECT was performed just before injection of TET with the energy window for 99mTc. These data correspond to the scatter and cross-talk factor of the next TET SPECT. The correction was performed by subtraction of the scatter and cross-talk factor from TET SPECT. Data are presented as means +/- S.E. Statistical analyses were performed using Wilcoxon's matched-pairs signed-ranks test, and p corrected total count was 26.0 +/- 5.3%. EDV and ESV after correction were significantly greater than those before correction (p = 0.019 and 0.016, respectively). After correction, EF was smaller than that before correction, but the difference was not significant. Perfusion scores (17 segments per heart) were significantly lower after as compared with those before correction (p correction revealed significant differences in EDV, ESV, and perfusion scores. These observations indicate that scatter and cross-talk correction is required for one
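
    The correction itself is a projection-by-projection subtraction of the pre-injection (cross-talk) acquisition from the subsequent TET acquisition; a minimal sketch is below. Ignoring 123I decay between the two acquisitions is a simplifying assumption made here for illustration.

```python
import numpy as np

def crosstalk_correct(tet_counts, crosstalk_counts):
    """Subtract the 123I scatter/cross-talk contribution measured in the 99mTc
    window just before tetrofosmin injection from the subsequent TET SPECT.

    A direct subtraction is shown; a decay factor for 123I between the two
    acquisitions could additionally be applied to crosstalk_counts.
    """
    corrected = np.asarray(tet_counts, float) - np.asarray(crosstalk_counts, float)
    return np.clip(corrected, 0.0, None)

# Toy projections (64x64, arbitrary counts); the cross-talk level is chosen
# to be roughly a quarter of the total, in line with the figure quoted above.
rng = np.random.default_rng(1)
tet = rng.poisson(50.0, (64, 64)).astype(float)
xtalk = rng.poisson(13.0, (64, 64)).astype(float)
tet_corr = crosstalk_correct(tet, xtalk)
```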

  15. Full correction of scattering effects by using the radiative transfer theory for improved quantitative analysis of absorbing species in suspensions.

    Science.gov (United States)

    Steponavičius, Raimundas; Thennadil, Suresh N

    2013-05-01

    Sample-to-sample photon path length variations that arise due to multiple scattering can be removed by decoupling absorption and scattering effects by using the radiative transfer theory, with a suitable set of measurements. For samples where particles both scatter and absorb light, the extracted bulk absorption spectrum is not completely free from nonlinear particle effects, since it is related to the absorption cross-section of particles that changes nonlinearly with particle size and shape. For the quantitative analysis of absorbing-only (i.e., nonscattering) species present in a matrix that contains a particulate species that absorbs and scatters light, a method to eliminate particle effects completely is proposed here, which utilizes the particle size information contained in the bulk scattering coefficient extracted by using the Mie theory to carry out an additional correction step to remove particle effects from bulk absorption spectra. This should result in spectra that are equivalent to spectra collected with only the liquid species in the mixture. Such an approach has the potential to significantly reduce the number of calibration samples as well as improve calibration performance. The proposed method was tested with both simulated and experimental data from a four-component model system.

  16. Methods for Motion Correction Evaluation Using 18F-FDG Human Brain Scans on a High-Resolution PET Scanner

    DEFF Research Database (Denmark)

    Keller, Sune H.; Sibomana, Merence; Olesen, Oline Vinter

    2012-01-01

    Many authors have reported the importance of motion correction (MC) for PET. Patient motion during scanning disturbs kinetic analysis and degrades resolution. In addition, using misaligned transmission for attenuation and scatter correction may produce regional quantification bias in the reconstructed emission images. The purpose of this work was the development of quality control (QC) methods for MC procedures based on external motion tracking (EMT) for human scanning, using an optical motion tracking system. Methods: Two scans with minor motion and 5 with major motion (as reported ...) ... AIR (automated image registration) software. The following 3 QC methods were used to evaluate the EMT and AIR MC: a method using the ratio between 2 regions of interest with gray matter voxels (GM) and white matter voxels (WM), called GM/WM; mutual information; and cross correlation. Results: The results...
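
    Two of the three QC metrics (the GM/WM ROI ratio and cross correlation) are straightforward to sketch; the masks, volumes and the expectation that better motion correction raises the GM/WM ratio are assumptions drawn from the description above.

```python
import numpy as np

def gm_wm_ratio(volume, gm_mask, wm_mask):
    """GM/WM quality-control measure: ratio of mean uptake in a gray-matter
    ROI to mean uptake in a white-matter ROI; sharper (better motion-corrected)
    FDG images are expected to give a higher ratio."""
    return volume[gm_mask].mean() / volume[wm_mask].mean()

def cross_correlation(a, b):
    """Normalized cross-correlation between two image volumes (QC metric 3)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Toy volume and ROI masks.
rng = np.random.default_rng(2)
vol = rng.uniform(0.5, 1.0, (32, 32, 32))
gm = np.zeros_like(vol, dtype=bool); gm[8:24, 8:24, 20:24] = True
wm = np.zeros_like(vol, dtype=bool); wm[12:20, 12:20, 8:12] = True
print(gm_wm_ratio(vol, gm, wm), cross_correlation(vol, vol))
```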

  17. Coulomb correction to the screening angle of the Moliere multiple scattering theory

    International Nuclear Information System (INIS)

    Kuraev, E.A.; Voskresenskaya, O.O.; Tarasov, A.V.

    2012-01-01

    Coulomb correction to the screening angular parameter of the Moliere multiple scattering theory is found. Numerical calculations are presented in the range of nuclear charge 4 ≤ Z ≤ 82. Comparison with the Moliere result for the screening angle reveals up to 30% deviation from it for sufficiently heavy elements of the target material

  18. Modifications Of Discrete Ordinate Method For Computations With High Scattering Anisotropy: Comparative Analysis

    Science.gov (United States)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2012-01-01

    A numerical accuracy analysis of the radiative transfer equation (RTE) solution based on separation of the diffuse light field into anisotropic and smooth parts is presented. The analysis uses three different algorithms based on the discrete ordinate method (DOM). Two methods, DOMAS and DOM2+, that do not use the truncation of the phase function, are compared against the TMS-method. DOMAS and DOM2+ use the Small-Angle Modification of RTE and the single scattering term, respectively, as an anisotropic part. The TMS method uses Delta-M method for truncation of the phase function along with the single scattering correction. For reference, a standard discrete ordinate method, DOM, is also included in analysis. The obtained results for cases with high scattering anisotropy show that at low number of streams (16, 32) only DOMAS provides an accurate solution in the aureole area. Outside of the aureole, the convergence and accuracy of DOMAS, and TMS is found to be approximately similar: DOMAS was found more accurate in cases with coarse aerosol and liquid water cloud models, except low optical depth, while the TMS showed better results in case of ice cloud.

  19. Wall attenuation and scatter corrections for ion chambers: measurements versus calculations

    Energy Technology Data Exchange (ETDEWEB)

    Rogers, D W.O.; Bielajew, A F [National Research Council of Canada, Ottawa, ON (Canada). Div. of Physics

    1990-08-01

    In precision ion chamber dosimetry in air, wall attenuation and scatter are corrected for by A_wall (K_att in IAEA terminology, K_w^-1 in standards laboratory terminology). Using the EGS4 system the authors show that Monte Carlo calculated A_wall factors predict relative variations in detector response with wall thickness which agree with all available experimental data within a statistical uncertainty of less than 0.1%. The calculated correction factors for use in exposure and air kerma standards differ by up to 1% from those obtained by extrapolating these same measurements. Using calculated correction factors would imply increases of 0.7-1.0% in the exposure and air kerma standards based on spherical and large-diameter, large-length cylindrical chambers and decreases of 0.3-0.5% for standards based on large-diameter pancake chambers. (author).

  20. Prediction of e± elastic scattering cross-section ratio based on phenomenological two-photon exchange corrections

    Science.gov (United States)

    Qattan, I. A.

    2017-06-01

    I present a prediction of the e± elastic scattering cross-section ratio, Re+e-, as determined using a new parametrization of the two-photon exchange (TPE) corrections to electron-proton elastic scattering cross section σR. The extracted ratio is compared to several previous phenomenological extractions, TPE hadronic calculations, and direct measurements from the comparison of electron and positron scattering. The TPE corrections and the ratio Re+e- show a clear change of sign at low Q2, which is necessary to explain the high-Q2 form factors discrepancy while being consistent with the known Q2→0 limit. While my predictions are in generally good agreement with previous extractions, TPE hadronic calculations, and existing world data including the recent two measurements from the CLAS and VEPP-3 Novosibirsk experiments, they are larger than the new OLYMPUS measurements at larger Q2 values.

  1. Three-loop corrections to the soft anomalous dimension in multileg scattering

    CERN Document Server

    Almelid, Øyvind; Gardi, Einan

    2016-01-01

    We present the three-loop result for the soft anomalous dimension governing long-distance singularities of multi-leg gauge-theory scattering amplitudes of massless partons. We compute all contributing webs involving semi-infinite Wilson lines at three loops and obtain the complete three-loop correction to the dipole formula. We find that non-dipole corrections appear already for three coloured partons, where the correction is a constant without kinematic dependence. Kinematic dependence appears only through conformally-invariant cross ratios for four coloured partons or more, and the result can be expressed in terms of single-valued harmonic polylogarithms of weight five. While the non-dipole three-loop term does not vanish in two-particle collinear limits, its contribution to the splitting amplitude anomalous dimension reduces to a constant, and it only depends on the colour charges of the collinear pair, thereby preserving strict collinear factorization properties. Finally we verify that our result is consi...

  2. Study of Six Energy-Window Settings for Scatter Correction in Quantitative 111In Imaging: Comparative analysis Using SIMIND

    International Nuclear Information System (INIS)

    Gomez Facenda, A.; Castillo Lopez, J. P.; Torres Aroche, L. A.; Coca Perez, M. A.

    2013-01-01

    Activity quantification in nuclear medicine imaging is highly desirable, particularly for dosimetry and biodistribution studies of radiopharmaceuticals. Quantitative 111In imaging is increasingly important with the current interest in therapy using 90Y-radiolabeled compounds. Photons scattered in the patient are one of the major problems in quantification, leading to degradation of image quality. The aim of this work was to assess the configuration of energy windows and the best weight factor for scatter correction in 111In images. All images were obtained using the Monte Carlo simulation code SIMIND, configured to emulate the Nucline SPIRIT DH-V gamma camera. Simulations were validated by the good agreement between experimental and simulated line-spread functions (LSFs) of 99mTc. The sensitivity, the scatter-to-total ratio, the contrast and the spatial resolution were examined for scatter-compensated images obtained from six different multi-window scatter corrections. Taking the results into consideration, the best energy-window setting was two 20% windows centered at 171 and 245 keV, together with a 10% scatter window located between the photopeaks at 209 keV. (Author)
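
    A generic window-subtraction correction of the kind compared in the study can be sketched as follows; the weight factor k and the window-width scaling are illustrative assumptions, not the optimal values found by the authors.

```python
import numpy as np

def window_scatter_correct(c_peak, c_scatter, w_peak_kev, w_scatter_kev, k=0.5):
    """Estimate the scatter in a photopeak window from counts in an adjacent
    scatter window, scaled by the ratio of window widths and an empirical
    weight factor k (the quantity optimised in the study above).

    primary ~ c_peak - k * c_scatter * (w_peak_kev / w_scatter_kev)
    """
    scatter_est = k * c_scatter * (w_peak_kev / w_scatter_kev)
    return np.clip(c_peak - scatter_est, 0.0, None)

# 111In window geometry from the abstract: 20% windows at 171 and 245 keV and
# a 10% window at 209 keV; counts and the weight factor are placeholders.
c171, c245, c209 = 1.0e5, 1.2e5, 3.0e4
p171 = window_scatter_correct(c171, c209, w_peak_kev=0.20 * 171,
                              w_scatter_kev=0.10 * 209, k=0.5)
p245 = window_scatter_correct(c245, c209, w_peak_kev=0.20 * 245,
                              w_scatter_kev=0.10 * 209, k=0.5)
```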

  3. Meson exchange corrections in deep inelastic scattering on deuteron

    International Nuclear Information System (INIS)

    Kaptari, L.P.; Titov, A.I.

    1989-01-01

    Starting with the general equations of motion of the nucleons interacting with the mesons, the one-particle Schroedinger-like equation for the nucleon wave function and the deep inelastic scattering amplitude with the meson-exchange currents are obtained. Effective pion-, sigma- and omega-meson exchanges are considered. It is found that the mesonic corrections only partially (about 60%) restore the energy sum rule breaking caused by the nucleon off-mass-shell effects in nuclei. This result contradicts the prediction based on a calculation of the energy sum rule limited to the second order of the nucleon-meson vertex and the static approximation. 17 refs.; 3 figs

  4. Dispersion corrections to the forward Rayleigh scattering amplitudes of tantalum, mercury and lead derived using photon interaction cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Appaji Gowda, S.B. [Department of Studies in Physics, Manasagangothri, University of Mysore, Mysore 570006 (India); Umesh, T.K. [Department of Studies in Physics, Manasagangothri, University of Mysore, Mysore 570006 (India)]. E-mail: tku@physics.uni-mysore.ac.in

    2006-01-15

    Dispersion corrections to the forward Rayleigh scattering amplitudes of tantalum, mercury and lead in the photon energy range 24-136 keV have been determined by a numerical evaluation of the dispersion integral that relates them through the optical theorem to the photoeffect cross sections. The photoeffect cross sections have been extracted by subtracting the coherent and incoherent scattering contributions from the measured total attenuation cross section, using a high-resolution high-purity germanium detector in a narrow-beam good-geometry setup. The real part of the dispersion correction, to which the relativistic corrections calculated by Kissel and Pratt (S-matrix approach) or by Creagh and McAuley (multipole corrections) have been added, is in better agreement with the available theoretical values.

  5. Lectures on the inverse scattering method

    International Nuclear Information System (INIS)

    Zakharov, V.E.

    1983-06-01

    In a series of six lectures an elementary introduction to the theory of inverse scattering is given. The first four lectures contain a detailed theory of solitons in the framework of the KdV equation, together with the inverse scattering theory of the one-dimensional Schroedinger equation. In the fifth lecture the dressing method is described, while the sixth lecture gives a brief review of the equations soluble by the inverse scattering method. (author)

  6. New method for solving multidimensional scattering problem

    International Nuclear Information System (INIS)

    Melezhik, V.S.

    1991-01-01

    A new method is developed for solving the quantum mechanical problem of scattering of a particle with internal structure. The multichannel scattering problem is formulated as a system of nonlinear functional equations for the wave function and reaction matrix. The method is successfully tested for the scattering from a nonspherical potential well and a long-range nonspherical scatterer. The method is also applicable to solving the multidimensional Schroedinger equation with a discrete spectrum. As an example the known problem of a hydrogen atom in a homogeneous magnetic field is analyzed

  7. Discrete ordinates transport methods for problems with highly forward-peaked scattering

    International Nuclear Information System (INIS)

    Pautz, S.D.

    1998-04-01

    The author examines the solutions of the discrete ordinates (S_N) method for problems with highly forward-peaked scattering kernels. He derives conditions necessary to obtain reasonable solutions in a certain forward-peaked limit, the Fokker-Planck (FP) limit. He also analyzes the acceleration of the iterative solution of such problems and offers improvements to it. He extends the analytic Fokker-Planck limit analysis to the S_N equations. This analysis shows that in this asymptotic limit the S_N solution satisfies a pseudospectral discretization of the FP equation, provided that the scattering term is handled in a certain way (which he describes) and that the analytic transport solution satisfies an analytic FP equation. Similar analyses of various spatially discretized S_N equations reveal that they too produce solutions that satisfy discrete FP equations, given the same provisions. Numerical results agree with these theoretical predictions. He defines a multidimensional angular multigrid (ANMG) method to accelerate the iterative solution of highly forward-peaked problems. The analyses show that a straightforward application of this scheme is subject to high-frequency instabilities. However, by applying a diffusive filter to the ANMG corrections he is able to stabilize this method. Fourier analyses of model problems show that the resulting method is effective at accelerating the convergence rate when the scattering is forward-peaked. The numerical results demonstrate that these analyses are good predictors of the actual performance of the ANMG method.

  8. Scatter kernel estimation with an edge-spread function method for cone-beam computed tomography imaging

    International Nuclear Information System (INIS)

    Li Heng; Mohan, Radhe; Zhu, X Ronald

    2008-01-01

    The clinical applications of kilovoltage x-ray cone-beam computed tomography (CBCT) have been compromised by the limited quality of CBCT images, which typically is due to a substantial scatter component in the projection data. In this paper, we describe an experimental method of deriving the scatter kernel of a CBCT imaging system. The estimated scatter kernel can be used to remove the scatter component from the CBCT projection images, thus improving the quality of the reconstructed image. The scattered radiation was approximated as depth-dependent, pencil-beam kernels, which were derived using an edge-spread function (ESF) method. The ESF geometry was achieved with a half-beam block created by a 3 mm thick lead sheet placed on a stack of slab solid-water phantoms. Measurements for ten water-equivalent thicknesses (WET) ranging from 0 cm to 41 cm were taken with (half-blocked) and without (unblocked) the lead sheet, and corresponding pencil-beam scatter kernels or point-spread functions (PSFs) were then derived without assuming any empirical trial function. The derived scatter kernels were verified with phantom studies. Scatter correction was then incorporated into the reconstruction process to improve image quality. For a 32 cm diameter cylinder phantom, the flatness of the reconstructed image was improved from 22% to 5%. When the method was applied to CBCT images for patients undergoing image-guided therapy of the pelvis and lung, the variation in selected regions of interest (ROIs) was reduced from >300 HU to <100 HU. We conclude that the scatter reduction technique utilizing the scatter kernel effectively suppresses the artifact caused by scatter in CBCT.
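
    A common way to apply such a measured pencil-beam kernel is iterative convolution-subtraction; the sketch below assumes a single WET-matched kernel (the paper derives a depth-dependent family) and a toy Gaussian kernel shape.

```python
import numpy as np
from scipy.signal import fftconvolve

def kernel_scatter_correct(projection, kernel, n_iter=3):
    """Convolution-subtraction scatter correction with a pencil-beam scatter
    kernel (e.g. one derived from edge-spread measurements as above).

    The primary is estimated iteratively: p <- projection - kernel (*) p,
    where (*) is a 2D convolution.
    """
    primary = projection.copy()
    for _ in range(n_iter):
        scatter = fftconvolve(primary, kernel, mode="same")
        primary = np.clip(projection - scatter, 0.0, None)
    return primary

# Toy example: a broad, low-amplitude Gaussian kernel and a flat projection.
y, x = np.mgrid[-32:33, -32:33]
kernel = np.exp(-(x**2 + y**2) / (2 * 15.0**2))
kernel *= 0.3 / kernel.sum()               # scatter-to-primary ratio ~0.3
proj = np.full((128, 128), 1.0)
primary = kernel_scatter_correct(proj, kernel)
```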

  9. Effect of scatter correction on the compartmental measurement of striatal and extrastriatal dopamine D2 receptors using [123I]epidepride SPET

    International Nuclear Information System (INIS)

    Fujita, Masahiro; Seneca, Nicholas; Innis, Robert B.; Varrone, Andrea; Kim, Kyeong Min; Watabe, Hiroshi; Iida, Hidehiro; Zoghbi, Sami S.; Tipre, Dnyanesh; Seibyl, John P.

    2004-01-01

    Prior studies with anthropomorphic phantoms and single, static in vivo brain images have demonstrated that scatter correction significantly improves the accuracy of regional quantitation of single-photon emission tomography (SPET) brain images. Since the regional distribution of activity changes following a bolus injection of a typical neuroreceptor ligand, we examined the effect of scatter correction on the compartmental modeling of serial dynamic images of striatal and extrastriatal dopamine D2 receptors using [123I]epidepride. Eight healthy human subjects [age 30±8 (range 22-46) years] participated in a study with a bolus injection of 373±12 (354-389) MBq [123I]epidepride and data acquisition over a period of 14 h. A transmission scan was obtained in each study for attenuation and scatter correction. Distribution volumes were calculated by means of compartmental nonlinear least-squares analysis using metabolite-corrected arterial input function and brain data processed with scatter correction using narrow-beam geometry μ (SC) and without scatter correction using broad-beam μ (NoSC). Effects of SC were markedly different among brain regions. SC increased activities in the putamen and thalamus after 1-1.5 h while it decreased activity during the entire experiment in the temporal cortex and cerebellum. Compared with NoSC, SC significantly increased specific distribution volume in the putamen (58%, P=0.0001) and thalamus (23%, P=0.0297). Compared with NoSC, SC made regional distribution of the specific distribution volume closer to that of [18F]fallypride. It is concluded that SC is required for accurate quantification of distribution volumes of receptor ligands in SPET studies. (orig.)

  10. Evaluation of various energy windows at different radionuclides for scatter and attenuation correction in nuclear medicine.

    Science.gov (United States)

    Asgari, Afrouz; Ashoor, Mansour; Sohrabpour, Mostafa; Shokrani, Parvaneh; Rezaei, Ali

    2015-05-01

    Improving the signal-to-noise ratio (SNR) and image quality by various methods is very important for detecting abnormalities in body organs. Scatter and attenuation of photons by the organs lead to errors in radiopharmaceutical estimation as well as degradation of images. The choice of a suitable energy window and radionuclide plays a key role in nuclear medicine, yielding the lowest scatter fraction as well as a nearly constant linear attenuation coefficient as a function of phantom thickness. The symmetrical (SW), asymmetric (ASW), high (WH) and low (WL) energy windows were compared for the Tc-99m and Sm-153 radionuclides with a solid water slab phantom (RW3) and a Teflon bone phantom, and Matlab software and the Monte Carlo N-Particle (MCNP4C) code were modified to simulate these methods and to obtain the FWHM and full width at tenth maximum (FWTM) from line spread functions (LSFs). The experimental data were obtained from the Orbiter Scintron gamma camera. Based on the results of the simulations as well as the experimental work, WH and ASW showed the lowest scatter fraction as well as a constant linear attenuation coefficient as a function of phantom thickness. WH and ASW were the optimal windows in nuclear medicine imaging for Tc-99m in the RW3 phantom and Sm-153 in the Teflon bone phantom. Attenuation correction was performed for the WH and ASW optimal windows and for these radionuclides using a filtered back projection algorithm. The results of the simulations and experiments show very good agreement between the experimental and simulated data, as well as between the theoretical values and the simulated data, with deviations nominally less than 7.07% for Tc-99m and less than 8.00% for Sm-153. Corrected counts were not affected by the thickness of the scattering material. The simulated line spread function (LSF) results for Sm-153 and Tc-99m in phantom, based on the four windows and the TEW method, were
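
    The FWHM and FWTM figures of merit used above can be extracted from a sampled LSF with a short helper; the linear interpolation between samples and the Gaussian test profile are illustrative choices.

```python
import numpy as np

def width_at_fraction(x, lsf, fraction):
    """Width of a line spread function at a given fraction of its maximum
    (fraction=0.5 gives the FWHM, fraction=0.1 the FWTM), with linear
    interpolation between samples. Assumes the LSF drops below the level
    within the sampled range on both sides of the peak."""
    lsf = np.asarray(lsf, dtype=float)
    level = fraction * lsf.max()
    above = np.where(lsf >= level)[0]
    i0, i1 = above[0], above[-1]
    left = np.interp(level, [lsf[i0 - 1], lsf[i0]], [x[i0 - 1], x[i0]])
    right = np.interp(level, [lsf[i1 + 1], lsf[i1]], [x[i1 + 1], x[i1]])
    return right - left

# Gaussian LSF with sigma = 4 mm: FWHM ~9.42 mm, FWTM ~17.2 mm.
x = np.linspace(-30, 30, 601)
lsf = np.exp(-x**2 / (2 * 4.0**2))
print(width_at_fraction(x, lsf, 0.5), width_at_fraction(x, lsf, 0.1))
```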

  11. On the kinematic reconstruction of deep inelastic scattering at HERA: the Σ method

    International Nuclear Information System (INIS)

    Bassler, U.; Bernardi, G.

    1994-12-01

    We review and compare the reconstruction methods of the inclusive deep inelastic scattering variables used at HERA. We introduce a new prescription, the Sigma (Σ) method, which allows one to measure the structure function of the proton, F_2(x, Q^2), in a large kinematic domain, and in particular in the low-x, low-Q^2 region, with small systematic errors and small radiative corrections. A detailed comparison between the Σ method and the other methods is shown. Extensions of the Σ method are presented. The effect of QED radiation on the kinematic reconstruction and on the structure function measurement is discussed. (orig.)
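
    For reference, the Σ-method reconstruction formulas as usually written in the HERA literature (y from the hadronic E - p_z sum and the scattered electron, then Q^2 and x) can be sketched as below; the toy event numbers are invented.

```python
import math

def sigma_method(E_hadrons, pz_hadrons, E_eprime, theta_e, s):
    """Sigma-method reconstruction of the DIS kinematics (x, y, Q^2).

    E_hadrons, pz_hadrons : energies and longitudinal momenta of the
                            hadronic-final-state objects (GeV)
    E_eprime, theta_e     : scattered-electron energy (GeV) and polar angle
                            (rad, measured from the proton beam direction)
    s                     : squared ep centre-of-mass energy (GeV^2)
    """
    sigma = sum(E - pz for E, pz in zip(E_hadrons, pz_hadrons))
    y = sigma / (sigma + E_eprime * (1.0 - math.cos(theta_e)))
    q2 = (E_eprime * math.sin(theta_e)) ** 2 / (1.0 - y)
    x = q2 / (s * y)
    return x, y, q2

# Toy event at HERA-like energies (E_e = 27.5 GeV, E_p = 820 GeV).
s = 4 * 27.5 * 820.0
x, y, q2 = sigma_method([18.0, 7.0], [15.0, 5.0],
                        E_eprime=25.0, theta_e=2.5, s=s)
```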

  12. Corrections in clinical Magnetic Resonance Spectroscopy and SPECT

    DEFF Research Database (Denmark)

    de Nijs, Robin

    ... infants. In Iodine-123 SPECT the problem of downscatter was addressed. This thesis is based on two papers. Paper I deals with the problem of motion in Single Voxel Spectroscopy. Two novel methods for the identification of outliers in the set of repeated measurements were implemented and compared ... a detrimental effect of the extra-uterine environment on brain development. Paper II describes a method to correct for downscatter in low-count Iodine-123 SPECT with a broad energy window above the normal imaging window. Both the spatial dependency and the weight factors were measured. As expected, the implicitly ... be performed by the subtraction of an energy window, a method was developed to perform scatter and downscatter correction simultaneously. A phantom study was performed, in which the downscatter correction described in Paper II was extended with scatter correction. This new combined correction was compared ...

  13. Investigation of radiative corrections in the scattering at 180 deg. of 240 MeV positrons on atomic electrons

    International Nuclear Information System (INIS)

    Poux, J.P.

    1972-06-01

    In this research thesis, after recalling the processes of elastic scattering of positrons on electrons (kinematics and cross section) and the radiative corrections involved, the author describes the experimental installation (positron beam, ionization chamber, targets, spectrometer, electronic logic associated with the counter telescope) which was used to measure the differential cross section of recoil electrons, and the methods which were used. In the third part, the author reports the calculation of the corrections and the spectra obtained. In the next part, the author reports the interpretation of the results and their comparison with the experiment performed by Browman, Grossetete and Yount. The author shows that the two experiments are complementary and are in agreement with the calculation performed by Yennie, Hearn and Kuo.

  14. O(α_s) heavy flavor corrections to charged current deep-inelastic scattering in Mellin space

    Energy Technology Data Exchange (ETDEWEB)

    Bluemlein, J.; Hasselhuhn, A.; Kovacikova, P.; Moch, S.

    2011-04-15

    We provide a fast and precise Mellin-space implementation of the O(α_s) heavy flavor Wilson coefficients for charged current deep inelastic scattering processes. They are of importance for the extraction of the strange quark distribution in neutrino-nucleon scattering and the QCD analyses of the HERA charged current data. Errors in the literature are corrected. We also discuss a series of more general parton parameterizations in Mellin space. (orig.)

  15. On the radiative corrections of deep inelastic scattering of muon neutrino on nucleon

    International Nuclear Information System (INIS)

    So Sang Guk

    1986-01-01

    The radiative corrections to the deep inelastic scattering process νμp → μN are considered. A matrix element which takes Feynman one-photon-exchange diagrams into account at high momentum transfer is used. Based on the calculation of this matrix element, one can obtain the matrix element for the given process. It is shown that the effective cross section which takes one-photon exchange into account is obtained. (author)

  16. A technique of scatter and glare correction for videodensitometric studies in digital subtraction videoangiography

    International Nuclear Information System (INIS)

    Shaw, C.G.; Ergun, D.L.; Myerowitz, P.D.; Van Lysel, M.S.; Mistretta, C.A.; Zarnstorff, W.C.; Crummy, A.B.

    1982-01-01

    The logarithmic amplification of video signals and the availability of data in digital form make digital subtraction videoangiography a suitable tool for videodensitometric estimation of physiological quantities. A system for this purpose was implemented with a digital video image processor. However, it was found that the radiation scattering and veiling glare present in the image-intensified video must be removed to make meaningful quantitations. An algorithm to make such a correction was developed and is presented. With this correction, the videodensitometry system was calibrated with phantoms and used to measure the left ventricular ejection fraction of a canine heart

  17. Relativistic corrections to the elastic electron scattering from 208Pb

    International Nuclear Information System (INIS)

    Chandra, H.; Sauer, G.

    1976-01-01

    In the present work we have calculated the differential cross sections for elastic electron scattering from 208Pb using the charge distributions resulting from various corrections. The point proton and neutron mass distributions have been calculated from the spherical wave functions for 208Pb obtained by Kolb et al. The relativistic correction to the nuclear charge distribution coming from the electromagnetic structure of the nucleon has been accomplished by assuming a linear superposition of Gaussian shapes for the proton and neutron charge form factors. The results of this calculation are quite similar to an earlier calculation by Bertozzi et al., who used a different wave function for 208Pb and assumed exponential smearing for the proton corresponding to the dipole fit for the form factor. Also in the present work, the reason for the small spin-orbit contribution to the effective charge distribution is discussed in some detail. It is also shown that the use of a single Gaussian shape for the proton smearing usually underestimates the actual theoretical cross section.

  18. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, considering a computationally time consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit ... of the model correction factor method, is that in its simpler form, not using gradient information on the original limit state function or only using this information once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations of the reliability. Methods...

  19. Uniqueness and numerical methods in inverse obstacle scattering

    International Nuclear Information System (INIS)

    Kress, Rainer

    2007-01-01

    The inverse problem we consider in this tutorial is to determine the shape of an obstacle from the knowledge of the far field pattern for scattering of time-harmonic plane waves. In the first part we will concentrate on the issue of uniqueness, i.e., we will investigate under what conditions an obstacle and its boundary condition can be identified from a knowledge of its far field pattern for incident plane waves. We will review some classical and some recent results and draw attention to open problems. In the second part we will survey on numerical methods for solving inverse obstacle scattering problems. Roughly speaking, these methods can be classified into three groups. Iterative methods interpret the inverse obstacle scattering problem as a nonlinear ill-posed operator equation and apply iterative schemes such as regularized Newton methods, Landweber iterations or conjugate gradient methods for its solution. Decomposition methods, in principle, separate the inverse scattering problem into an ill-posed linear problem to reconstruct the scattered wave from its far field and the subsequent determination of the boundary of the scatterer from the boundary condition. Finally, the third group consists of the more recently developed sampling methods. These are based on the numerical evaluation of criteria in terms of indicator functions that decide whether a point lies inside or outside the scatterer. The tutorial will give a survey by describing one or two representatives of each group including a discussion on the various advantages and disadvantages

  20. Effect of scatter correction on the compartmental measurement of striatal and extrastriatal dopamine D2 receptors using [123I]epidepride SPET

    Energy Technology Data Exchange (ETDEWEB)

    Fujita, Masahiro; Seneca, Nicholas; Innis, Robert B. [Department of Psychiatry, Yale University School of Medicine and VA Connecticut Healthcare System, West Haven, CT (United States); Molecular Imaging Branch, National Institute of Mental Health, Bethesda, MD (United States); Varrone, Andrea [Department of Psychiatry, Yale University School of Medicine and VA Connecticut Healthcare System, West Haven, CT (United States); Biostructure and Bioimaging Institute, National Research Council, Napoli (Italy); Kim, Kyeong Min; Watabe, Hiroshi; Iida, Hidehiro [Department of Investigative Radiology, National Cardiovascular Center Research Institute, Osaka (Japan); Zoghbi, Sami S. [Department of Psychiatry, Yale University School of Medicine and VA Connecticut Healthcare System, West Haven, CT (United States); Molecular Imaging Branch, National Institute of Mental Health, Bethesda, MD (United States); Department of Radiology, Yale University School of Medicine and VA Connecticut Healthcare System, West Haven, CT (United States); Tipre, Dnyanesh [Molecular Imaging Branch, National Institute of Mental Health, Bethesda, MD (United States); Seibyl, John P. [Institute for Neurodegenerative Disorders, New Haven, CT (United States)

    2004-05-01

    Prior studies with anthropomorphic phantoms and single, static in vivo brain images have demonstrated that scatter correction significantly improves the accuracy of regional quantitation of single-photon emission tomography (SPET) brain images. Since the regional distribution of activity changes following a bolus injection of a typical neuroreceptor ligand, we examined the effect of scatter correction on the compartmental modeling of serial dynamic images of striatal and extrastriatal dopamine D2 receptors using [123I]epidepride. Eight healthy human subjects [age 30±8 (range 22-46) years] participated in a study with a bolus injection of 373±12 (354-389) MBq [123I]epidepride and data acquisition over a period of 14 h. A transmission scan was obtained in each study for attenuation and scatter correction. Distribution volumes were calculated by means of compartmental nonlinear least-squares analysis using metabolite-corrected arterial input function and brain data processed with scatter correction using narrow-beam geometry μ (SC) and without scatter correction using broad-beam μ (NoSC). Effects of SC were markedly different among brain regions. SC increased activities in the putamen and thalamus after 1-1.5 h while it decreased activity during the entire experiment in the temporal cortex and cerebellum. Compared with NoSC, SC significantly increased specific distribution volume in the putamen (58%, P=0.0001) and thalamus (23%, P=0.0297). Compared with NoSC, SC made regional distribution of the specific distribution volume closer to that of [18F]fallypride. It is concluded that SC is required for accurate quantification of distribution volumes of receptor ligands in SPET studies. (orig.)

  1. Nuclear corrections in neutrino deep inelastic scattering and the extraction of the strange quark distribution

    International Nuclear Information System (INIS)

    Boros, C.

    1999-01-01

    Recent measurement of the structure function F 2 ν in neutrino deep inelastic scattering allows us to compare structure functions measured in neutrino and charged lepton scattering for the first time with reasonable precision. The comparison between neutrino and muon structure functions made by the CCFR Collaboration indicates that there is a discrepancy between these structure functions at small Bjorken x values. In this talk I examine two effects which might account for this experimental discrepancy: nuclear shadowing corrections for neutrinos and contributions from strange and anti-strange quarks. Copyright (1999) World Scientific Publishing Co. Pte. Ltd.

  2. Parametrisation of the collimator scatter correction factors of square and rectangular photon beams

    International Nuclear Information System (INIS)

    Jager, H.N.; Heukelom, S.; Kleffens, H.J. van; Gasteren, J.J.M. van; Laarse, R. van der; Venselaar, J.L.M.; Westermann, C.F.

    1995-01-01

    Collimator scatter correction factors S c have been measured with a cylindrical mini-phantom for five types of dual photon energy accelerators with energies between 6 and 25 MV. Using these S c data, three methods to parametrize S c of square fields have been compared, including a third-order polynomial of the natural logarithm of the fieldsize normalised by the fieldsize of 10 cm 2. Also six methods to calculate S c of rectangular fields have been compared, including a new one which determines the equivalent fieldsize by extending Sterling's method. The deviations between measured and calculated S c for every accelerator, energy and method were determined, resulting in the maximum and average deviation per method. Applied to square fields, the maximum and average deviations were for the method of Chen 0.64% and 0.15%, of Szymczyk 0.98% and 0.21%, and of this work 0.41% and 0.10%. For the rectangular fields the deviations were for the method of Sterling 1.89% and 0.50%, of Vadash 1.60% and 0.28%, of Szymczyk et al. 1.21% and 0.25%, of Chen 1.84% and 0.31% and of this work 0.79% and 0.20%. Finally, a recommendation is given on how to limit the number of fields at which S c should be measured.
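    As a rough illustration of the equivalent-fieldsize idea used above for rectangular fields, the sketch below computes Sterling's equivalent square, side = 2XY/(X+Y), and interpolates S c from tabulated square-field values on a logarithmic fieldsize axis. The table values, the interpolation variable and the function names are illustrative assumptions, not the parametrisation fitted in this record.

```python
import numpy as np

def sterling_equivalent_square(x_cm, y_cm):
    """Equivalent square side of an X x Y rectangular field (Sterling: 2XY/(X+Y))."""
    return 2.0 * x_cm * y_cm / (x_cm + y_cm)

def sc_rectangular(x_cm, y_cm, side_table, sc_table):
    """Estimate Sc of a rectangular field from tabulated square-field Sc values.

    Interpolation is done in ln(side/10), mirroring the normalised-fieldsize
    variable mentioned in the abstract; the table passed in is assumed measured.
    """
    side = sterling_equivalent_square(x_cm, y_cm)
    return np.interp(np.log(side / 10.0),
                     np.log(np.asarray(side_table) / 10.0), sc_table)

# Hypothetical square-field data: side length in cm, Sc normalised to 1 at 10 cm.
sides = [4.0, 6.0, 8.0, 10.0, 15.0, 20.0, 30.0]
sc = [0.965, 0.978, 0.990, 1.000, 1.017, 1.028, 1.040]

print(sc_rectangular(5.0, 20.0, sides, sc))  # Sc estimate for a 5 cm x 20 cm field
```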

  3. NNLO leptonic and hadronic corrections to Bhabha scattering and luminosity monitoring at meson factories

    Energy Technology Data Exchange (ETDEWEB)

    Carloni Calame, C. [Southampton Univ. (United Kingdom). School of Physics; Czyz, H.; Gluza, J.; Gunia, M. [Silesia Univ., Katowice (Poland). Dept. of Field Theory and Particle Physics; Montagna, G. [Pavia Univ. (Italy). Dipt. di Fisica Nucleare e Teorica; INFN, Sezione di Pavia (Italy); Nicrosini, O.; Piccinini, F. [INFN, Sezione di Pavia (Italy); Riemann, T. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Worek, M. [Wuppertal Univ. (Germany). Fachbereich C Physik

    2011-07-15

    Virtual fermionic N{sub f}=1 and N{sub f}=2 contributions to Bhabha scattering are combined with realistic real corrections at next-to-next-to-leading order in QED. The virtual corrections are determined by the package BHANNLOHF, and real corrections with the Monte Carlo generators BHAGEN-1PH, HELAC-PHEGAS and EKHARA. Numerical results are discussed at the energies of, and with realistic cuts used at, the {phi} factory DA{phi}NE, the B factories PEP-II and KEK, and the charm/{tau} factory BEPC II. We compare these complete calculations with the approximate ones realized in the generator BABAYAGA@NLO, used at meson factories to evaluate their luminosities. For realistic reference event selections we find agreement for the NNLO leptonic and hadronic corrections within 0.07% or better and conclude that they are well accounted for in the generator by comparison with the present experimental accuracy. (orig.)

  4. Modal Ring Method for the Scattering of Electromagnetic Waves

    Science.gov (United States)

    Baumeister, Kenneth J.; Kreider, Kevin L.

    1993-01-01

    The modal ring method for electromagnetic scattering from perfectly electric conducting (PEC) symmetrical bodies is presented. The scattering body is represented by a line of finite elements (triangular) on its outer surface. The infinite computational region surrounding the body is represented analytically by an eigenfunction expansion. The modal ring method effectively reduces the two-dimensional scattering problem to a one-dimensional problem similar to the method of moments. The modal element method is capable of handling very high frequency scattering because it has a highly banded solution matrix.

  5. NNLO massive corrections to Bhabha scattering and theoretical precision of BabaYaga@NLO

    International Nuclear Information System (INIS)

    Carloni Calame, C.M.; Nicrosini, O.; Piccinini, F.; Riemann, T.; Worek, M.

    2011-12-01

    We provide an exact calculation of next-to-next-to-leading order (NNLO) massive corrections to Bhabha scattering in QED, relevant for precision luminosity monitoring at meson factories. Using realistic reference event selections, exact numerical results for leptonic and hadronic corrections are given and compared with the corresponding approximate predictions of the event generator BabaYaga@NLO. It is shown that the NNLO massive corrections are necessary for luminosity measurements with per mille precision. At the same time they are found to be well accounted for in the generator at an accuracy level below the one per mille. An update of the total theoretical precision of BabaYaga@NLO is presented and possible directions for a further error reduction are sketched. (orig.)

  6. Window selection for dual photopeak window scatter correction in Tc-99m imaging

    International Nuclear Information System (INIS)

    Vries, D.J. de; King, M.A.

    1994-01-01

    The width and placement of the windows for the dual photopeak window (DPW) scatter subtraction method for Tc-99m imaging are investigated in order to obtain a method that is stable on a multihead detector system for single photon emission computed tomography (SPECT) and is capable of providing a good scatter estimate for extended objects. For various window pairs, stability and noise were examined with experiments using a SPECT system, while Monte Carlo simulations were used to predict the accuracy of scatter estimates for a variety of objects and to guide the development of regression relations for various window pairs. The DPW method that resulted from this study was implemented with a symmetric 20% photopeak window composed of a 15% asymmetric photopeak window and a 5% lower window abutted at 7 keV below the peak. A power function regression was used to relate the scatter-to-total ratio to the lower window-to-total ratio at each pixel, from which an estimated scatter image was calculated. DPW demonstrated good stability, achieved by abutting the two windows away from the peak. Performance was assessed and compared with Compton window subtraction (CWS). For simulated extended objects, DPW generally produced a less biased scatter estimate than the commonly used CWS method with k = 0.5. In acquisitions of a clinical SPECT phantom, contrast recovery was comparable for both DPW and CWS; however, DPW showed greater visual contrast in clinical SPECT bone studies.
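    A minimal sketch of the pixel-by-pixel dual photopeak window estimate described above, assuming a power-function relation between the scatter-to-total and lower-window-to-total ratios; the coefficients a, b and c are placeholders that would come from the Monte Carlo derived regression, not values taken from this record.

```python
import numpy as np

def dpw_scatter_estimate(lower, total, a=1.0, b=2.0, c=0.0):
    """Dual photopeak window scatter estimate per pixel.

    lower, total : count arrays from the 5% lower window and the full 20% window.
    The scatter-to-total ratio is modelled as S/T = a * (L/T)**b + c.
    """
    lower = np.asarray(lower, dtype=float)
    total = np.asarray(total, dtype=float)
    ratio = np.divide(lower, total, out=np.zeros_like(total), where=total > 0)
    scatter_fraction = np.clip(a * ratio**b + c, 0.0, 1.0)
    return scatter_fraction * total

# Toy 64 x 64 projections with Poisson noise (arbitrary counts).
rng = np.random.default_rng(0)
total = rng.poisson(100, size=(64, 64)).astype(float)
lower = rng.poisson(20, size=(64, 64)).astype(float)
primary = total - dpw_scatter_estimate(lower, total)
```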

  7. Dose calculations for irregular fields using three-dimensional first-scatter integration

    International Nuclear Information System (INIS)

    Boesecke, R.; Scharfenberg, H.; Schlegel, W.; Hartmann, G.H.

    1986-01-01

    This paper describes a method of dose calculations for irregular fields which requires only the mean energy of the incident photons, the geometrical properties of the irregular field and of the therapy unit, and the attenuation coefficient of tissue. The method goes back to an approach including spatial aspects of photon scattering for inhomogeneities for the calculation of dose reduction factors as proposed by Sontag and Cunningham (1978). It is based on the separation of dose into a primary component and a scattered component. The scattered component can generally be calculated for each field by integration over dose contributions from scattering in neighbouring volume elements. The quotient of this scattering contribution in the irregular field and the scattering contribution in the equivalent open field is then the correction factor for scattering in an irregular field. A correction factor for the primary component can be calculated if the attenuation of the photons in the shielding block is properly taken into account. The correction factor is simply given by the quotient of primary photons of the irregular field and the primary photons of the open field. (author)
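    The correction factor for scattering in an irregular field is the ratio of two scatter integrals, one over the irregular aperture and one over the equivalent open field. The sketch below illustrates that ratio with a crude, purely illustrative radial kernel exp(-mu*r)/r; it is not the first-scatter kernel or geometry used by the authors.

```python
import numpy as np

def scatter_integral(field_mask, pixel_cm, mu_eff=0.05):
    """Integrate a toy first-scatter kernel over the open part of an aperture mask."""
    ny, nx = field_mask.shape
    y, x = np.mgrid[0:ny, 0:nx]
    r = np.hypot((x - nx / 2) * pixel_cm, (y - ny / 2) * pixel_cm)
    r = np.maximum(r, pixel_cm / 2)                 # avoid the singularity at r = 0
    kernel = np.exp(-mu_eff * r) / r                # illustrative kernel only
    return float(np.sum(kernel * field_mask)) * pixel_cm**2

# Irregular field: 10 x 10 cm open field with a 4 x 4 cm corner block (0.25 cm pixels).
pixel = 0.25
open_field = np.ones((40, 40))
irregular = open_field.copy()
irregular[:16, :16] = 0.0

csf = scatter_integral(irregular, pixel) / scatter_integral(open_field, pixel)
print(f"scatter correction factor for the irregular field: {csf:.3f}")
```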

  8. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi; Jin, Bangti; Zou, Jun

    2013-01-01

    We present a novel numerical method to the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages, one pruning step of detecting the scatterer

  9. A method of precise profile analysis of diffuse scattering for the KENS pulsed neutrons

    International Nuclear Information System (INIS)

    Todate, Y.; Fukumura, T.; Fukazawa, H.

    2001-01-01

    An outline of our profile analysis method, which is now of practical use for the asymmetric KENS pulsed thermal neutrons, is presented. The analysis of the diffuse scattering from a single crystal of D 2 O is shown as an example. The pulse shape function is based on the Ikeda-Carpenter function adjusted for the KENS neutron pulses. The convoluted intensity is calculated by a Monte-Carlo method and the precision of the calculation is controlled. Fitting parameters in the model cross section can be determined by the built-in nonlinear least-squares fitting procedure. Because this method is the natural extension of the procedure conventionally used for the triple-axis data, it is easy to apply with generality and versatility. Most importantly, furthermore, this method has the capability of precisely correcting the time shift of the observed peak position, which is inevitably caused in the case of highly asymmetric pulses and broad scattering functions. It will be pointed out that the accurate determination of true time-of-flight is important especially in the single crystal inelastic experiments. (author)

  10. Effects of projection and background correction method upon calculation of right ventricular ejection fraction using first-pass radionuclide angiography

    International Nuclear Information System (INIS)

    Caplin, J.L.; Flatman, W.D.; Dymond, D.S.

    1985-01-01

    There is no consensus as to the best projection or correction method for first-pass radionuclide studies of the right ventricle. We assessed the effects of two commonly used projections, 30 degrees right anterior oblique and anterior-posterior, on the calculation of right ventricular ejection fraction. In addition, two background correction methods, planar background correction to account for scatter and right atrial correction to account for right atrio-ventricular overlap, were assessed. Two first-pass radionuclide angiograms were performed in 19 subjects, one in each projection, using gold-195m (half-life 30.5 seconds), and each study was analysed using the two methods of correction. Right ventricular ejection fraction was highest using the right anterior oblique projection with right atrial correction, 35.6 +/- 12.5% (mean +/- SD), and lowest when using the anterior-posterior projection with planar background correction, 26.2 +/- 11% (p less than 0.001). The study design allowed assessment of the effects of correction method and projection independently. Correction method appeared to have relatively little effect on right ventricular ejection fraction. Using right atrial correction, the correlation coefficient (r) between projections was 0.92, and for planar background correction r = 0.76, both p less than 0.001. However, right ventricular ejection fraction was far more dependent upon projection. When the anterior-posterior projection was used, calculated right ventricular ejection fraction was much more dependent on correction method (r = 0.65, p = not significant) than when using the right anterior oblique projection (r = 0.85, p less than 0.001).
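    For reference, an ejection fraction computed from background-corrected end-diastolic and end-systolic counts can be sketched as below; the count values are hypothetical and the background term stands for either the planar-scatter or the right-atrial estimate discussed above.

```python
def ejection_fraction(ed_counts, es_counts, ed_background, es_background=None):
    """Ejection fraction from background-corrected counts: EF = (ED - ES) / ED."""
    if es_background is None:
        es_background = ed_background
    ed = ed_counts - ed_background
    es = es_counts - es_background
    return (ed - es) / ed

# Hypothetical first-pass counts (arbitrary units).
print(ejection_fraction(ed_counts=12500, es_counts=9100, ed_background=3000))
```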

  11. MO-FG-CAMPUS-JeP1-05: Water Equivalent Path Length Calculations Using Scatter-Corrected Head and Neck CBCT Images to Evaluate Patients for Adaptive Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, J; Park, Y; Sharp, G; Winey, B [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States)

    2016-06-15

    Purpose: To establish a method to evaluate the dosimetric impact of anatomic changes in head and neck patients during proton therapy by using scatter-corrected cone-beam CT (CBCT) images. Methods: The water equivalent path length (WEPL) was calculated to the distal edge of PTV contours by using tomographic images available for six head and neck patients who received photon therapy. The proton range variation was measured by calculating the difference between the distal WEPLs calculated with the planning CT and weekly treatment CBCT images. By performing an automatic rigid registration, six degrees-of-freedom (DOF) correction was made to the CBCT images to account for the patient setup uncertainty. For accurate WEPL calculations, an existing CBCT scatter correction algorithm, whose performance was already proven for phantom images, was calibrated for head and neck patient images. Specifically, two different image similarity measures, mutual information (MI) and mean square error (MSE), were tested for the deformable image registration (DIR) in the CBCT scatter correction algorithm. Results: The impact of weight loss was reflected in the distal WEPL differences with the aid of the automatic rigid registration, which reduced the influence of patient setup uncertainty on the WEPL calculation results. The WEPL difference averaged over the distal area was 2.9 ± 2.9 (mm) across all fractions of six patients and its maximum, mostly found at the last available fraction, was 6.2 ± 3.4 (mm). The MSE-based DIR successfully registered each treatment CBCT image to the planning CT image. On the other hand, the MI-based DIR deformed the skin voxels in the planning CT image to the immobilization mask in the treatment CBCT image, most of which was cropped out of the planning CT image. Conclusion: The dosimetric impact of anatomic changes was evaluated by calculating the distal WEPL difference with the existing scatter-correction algorithm appropriately calibrated.
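    A minimal sketch of a WEPL calculation along a single ray through a volume of relative stopping powers, assuming the CT numbers have already been converted to stopping-power ratios; the nearest-voxel ray marching and the toy water cube are simplifications and do not reproduce the registration and contour handling of this abstract.

```python
import numpy as np

def wepl_along_ray(rsp_volume, start_mm, direction, step_mm=1.0, voxel_mm=(1.0, 1.0, 1.0)):
    """Water equivalent path length along a straight ray through an RSP volume."""
    rsp = np.asarray(rsp_volume, dtype=float)
    pos = np.asarray(start_mm, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    voxel = np.asarray(voxel_mm, dtype=float)
    wepl = 0.0
    while True:
        idx = np.floor(pos / voxel).astype(int)
        if np.any(idx < 0) or np.any(idx >= rsp.shape):
            break                                   # ray has left the volume
        wepl += rsp[tuple(idx)] * step_mm           # accumulate RSP * step length
        pos = pos + d * step_mm
    return wepl

# Toy example: a 100 mm cube of water-equivalent tissue (RSP = 1) gives ~100 mm WEPL.
volume = np.ones((100, 100, 100))
print(wepl_along_ray(volume, start_mm=(0.5, 50.0, 50.0), direction=(1, 0, 0)))
```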

  12. Patient-specific scatter correction in clinical cone beam computed tomography imaging made possible by the combination of Monte Carlo simulations and a ray tracing algorithm

    DEFF Research Database (Denmark)

    Slot Thing, Rune; Bernchou, Uffe; Mainegra-Hing, Ernesto

    2013-01-01

    Abstract Purpose. Cone beam computed tomography (CBCT) image quality is limited by scattered photons. Monte Carlo (MC) simulations provide the ability of predicting the patient-specific scatter contamination in clinical CBCT imaging. Lengthy simulations prevent MC-based scatter correction from...

  13. Regularization method for solving the inverse scattering problem

    International Nuclear Information System (INIS)

    Denisov, A.M.; Krylov, A.S.

    1985-01-01

    The inverse scattering problem for the radial Schroedinger equation, consisting in determining the potential from the scattering phase, is considered. The problem of potential restoration from a phase specified with fixed error in a finite range is solved by the regularization method based on minimization of Tikhonov's smoothing functional. The regularization method is used for solving the problem of neutron-proton potential restoration from the scattering phases. The determined potentials are given in the table.
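    The record minimises Tikhonov's smoothing functional for the nonlinear phase-to-potential problem; the sketch below shows only the generic linear zeroth-order Tikhonov step, x = (A^T A + alpha*I)^(-1) A^T b, on a toy ill-posed problem, as a reminder of how the regularization parameter enters. The forward operator and noise level are invented.

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Minimise ||A x - b||^2 + alpha * ||x||^2 (zeroth-order Tikhonov)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Toy ill-posed problem: a smoothing (convolution-like) forward operator.
rng = np.random.default_rng(1)
n = 80
x_true = np.exp(-0.5 * ((np.arange(n) - 40) / 6.0) ** 2)
A = np.array([[np.exp(-0.05 * (i - j) ** 2) for j in range(n)] for i in range(n)])
b = A @ x_true + 1e-3 * rng.standard_normal(n)

for alpha in (1e-6, 1e-3, 1e-1):
    print(alpha, np.linalg.norm(tikhonov_solve(A, b, alpha) - x_true))
```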

  14. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi

    2013-03-01

    We present a novel numerical method to the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages, one pruning step of detecting the scatterer support, and one resolution enhancing step with nonsmooth mixed regularization. The first step is strictly direct and of sampling type, and it faithfully detects the scatterer support. The second step is an innovative application of nonsmooth mixed regularization, and it accurately resolves the scatterer size as well as intensities. The nonsmooth model can be efficiently solved by a semi-smooth Newton-type method. Numerical results for two- and three-dimensional examples indicate that the new approach is accurate, computationally efficient, and robust with respect to data noise. © 2012 Elsevier Inc.

  15. Atmospheric monitoring in MAGIC and data corrections

    Directory of Open Access Journals (Sweden)

    Fruck Christian

    2015-01-01

    Full Text Available A method for analyzing returns of a custom-made “micro”-LIDAR system, operated alongside the two MAGIC telescopes, is presented. This method allows for calculating the transmission through the atmospheric boundary layer as well as thin cloud layers. This is achieved by applying exponential fits to regions of the back-scattering signal that are dominated by Rayleigh scattering. Making this real-time transmission information available for the MAGIC data stream allows atmospheric corrections to be applied later in the analysis. Such corrections allow for extending the effective observation time of MAGIC by including data taken under adverse atmospheric conditions. In the future they will help reduce the systematic uncertainties of energy and flux.
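    A hedged sketch of the fit-and-extrapolate idea: straight lines are fitted to the logarithm of the range-corrected return in two Rayleigh-dominated intervals, and the offset between the extrapolated lower fit and the upper fit at the top of a cloud layer gives the two-way transmission of that layer. The synthetic profile and interval choices are invented and do not reproduce the MAGIC micro-LIDAR analysis chain.

```python
import numpy as np

def layer_transmission(r_km, signal, clear_below, clear_above):
    """Two-way transmission of a layer from a LIDAR return via exponential fits."""
    s = np.log(signal * r_km**2)                    # range-corrected signal, log domain
    fits = []
    for lo, hi in (clear_below, clear_above):       # Rayleigh-dominated intervals
        m = (r_km >= lo) & (r_km <= hi)
        fits.append(np.polyfit(r_km[m], s[m], 1))
    r_top = clear_above[0]
    drop = np.polyval(fits[0], r_top) - np.polyval(fits[1], r_top)
    return np.exp(-drop)                            # two-way layer transmission

# Synthetic return: molecular profile with a cloud near 4.5 km removing 30% of the signal.
r = np.linspace(0.5, 10.0, 200)
profile = np.exp(-0.1 * r) / r**2
profile[r > 4.5] *= 0.7
print(layer_transmission(r, profile, clear_below=(1.0, 4.0), clear_above=(5.0, 9.0)))
```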

  16. On quasiclassical approximation in the inverse scattering method

    International Nuclear Information System (INIS)

    Geogdzhaev, V.V.

    1985-01-01

    Using as an example quasiclassical limits of the Korteweg-de Vries equation and nonlinear Schroedinger equation, the quasiclassical limiting variant of the inverse scattering problem method is presented. In quasiclassical approximation the inverse scattering problem for the Schroedinger equation is reduced to the classical inverse scattering problem

  17. Dual-energy digital mammography for calcification imaging: Scatter and nonuniformity corrections

    International Nuclear Information System (INIS)

    Kappadath, S. Cheenu; Shaw, Chris C.

    2005-01-01

    Mammographic images of small calcifications, which are often the earliest signs of breast cancer, can be obscured by overlapping fibroglandular tissue. We have developed and implemented a dual-energy digital mammography (DEDM) technique for calcification imaging under full-field imaging conditions using a commercially available aSi:H/CsI:Tl flat-panel based digital mammography system. The low- and high-energy images were combined using a nonlinear mapping function to cancel the tissue structures and generate the dual-energy (DE) calcification images. The total entrance-skin exposure and mean-glandular dose from the low- and high-energy images were constrained so that they were similar to screening-examination levels. To evaluate the DE calcification image, we designed a phantom using calcium carbonate crystals to simulate calcifications of various sizes (212-425 μm) overlaid with breast-tissue-equivalent material 5 cm thick with a continuously varying glandular-tissue ratio from 0% to 100%. We report on the effects of scatter radiation and nonuniformity in x-ray intensity and detector response on the DE calcification images. The nonuniformity was corrected by normalizing the low- and high-energy images with full-field reference images. Correction of scatter in the low- and high-energy images significantly reduced the background signal in the DE calcification image. Under the current implementation of DEDM, utilizing the mammography system and dose level tested, calcifications in the 300-355 μm size range were clearly visible in DE calcification images. Calcification threshold sizes decreased to the 250-280 μm size range when the visibility criteria were lowered to barely visible. Calcifications smaller than ∼250 μm were usually not visible in most cases. The visibility of calcifications with our DEDM imaging technique was limited by quantum noise, not system noise
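    A simplified sketch of the two corrections discussed above: flat-field normalisation of the low- and high-energy images with full-field reference acquisitions, followed by a weighted log subtraction standing in for the calibrated nonlinear mapping used to cancel the tissue structures; the weight w and all image values below are invented.

```python
import numpy as np

def flat_field_correct(image, reference):
    """Normalise an image by a full-field reference acquisition (gain map)."""
    ref = np.asarray(reference, dtype=float)
    return np.asarray(image, dtype=float) * ref.mean() / np.maximum(ref, 1e-6)

def dual_energy_calcification(low, high, w):
    """Weighted log subtraction: DE = ln(high) - w * ln(low), w chosen to cancel tissue."""
    low = np.maximum(np.asarray(low, dtype=float), 1.0)
    high = np.maximum(np.asarray(high, dtype=float), 1.0)
    return np.log(high) - w * np.log(low)

# Toy example with a hypothetical detector gain map.
rng = np.random.default_rng(2)
gain = 1.0 + 0.1 * rng.random((128, 128))
low_img = flat_field_correct(rng.poisson(800, (128, 128)) * gain, 800 * gain)
high_img = flat_field_correct(rng.poisson(1200, (128, 128)) * gain, 1200 * gain)
de_image = dual_energy_calcification(low_img, high_img, w=0.6)
```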

  18. The lowest order total electromagnetic correction to the deep inelastic scattering of polarized leptons on polarized nucleons

    International Nuclear Information System (INIS)

    Shumeiko, N.M.; Timoshin, S.I.

    1991-01-01

    Compact formulae for the total 1-loop electromagnetic corrections, including the contribution of electromagnetic hadron effects, to the deep inelastic scattering of polarized leptons on polarized nucleons in the quark-parton model have been obtained. The cases of longitudinal and transverse nucleon polarization are considered in detail. A thorough numerical calculation of corrections to cross sections and polarization asymmetries at muon (electron) energies over the range of 200-2000 GeV (10-16 GeV) has been made. It has been established that the contribution of corrections to the hadron current considerably affects the behaviour of the longitudinal asymmetry. A satisfactory agreement is found between the model calculations of corrections to the lepton current and the phenomenological calculation results, which makes it possible to find the total 1-loop correction within the framework of a common approach. (Author)

  19. Scattering at low energies by potentials containing power-law corrections to the Coulomb interaction

    International Nuclear Information System (INIS)

    Kuitsinskii, A.A.

    1986-01-01

    The low-energy asymptotic behavior is found for the phase shifts and scattering amplitudes in the case of central potentials which decrease at infinity as n/r + ar^(-a), a > 1. In problems of atomic and nuclear physics one is generally interested in collisions of clusters consisting of several charged particles. The effective interaction potential of such clusters contains long-range power-law corrections to the Coulomb interaction of the type presented here.

  20. Kernel integration scatter model for parallel beam gamma camera and SPECT point source response

    International Nuclear Information System (INIS)

    Marinkovic, P.M.

    2001-01-01

    Scatter correction is a prerequisite for quantitative single photon emission computed tomography (SPECT). In this paper a kernel integration scatter model for a parallel beam gamma camera and SPECT point source response, based on the Klein-Nishina formula, is proposed. This method models the primary photon distribution as well as first-order Compton scattering. It also includes a correction for multiple scattering by applying a point isotropic single-medium buildup factor for the path segment between the point of scatter and the point of detection. Gamma-ray attenuation in the object of imaging, based on a known μ-map distribution, is considered too. The intrinsic spatial resolution of the camera is approximated by a simple Gaussian function. The collimator is modeled simply using acceptance angles derived from its physical dimensions; any gamma rays satisfying this angle were passed through the collimator to the crystal. Septal penetration and scatter in the collimator were not included in the model. The method was validated by comparison with a Monte Carlo MCNP-4a numerical phantom simulation and excellent results were obtained. Physical phantom experiments to confirm this method are planned. (author)
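    The first-scatter weighting in such a kernel rests on the Klein-Nishina differential cross-section, which can be evaluated directly as below; the constants are standard values, but the full kernel integration of the record (buildup factor, attenuation, collimator acceptance) is not reproduced here.

```python
import numpy as np

R_E = 2.8179403262e-13    # classical electron radius in cm
MEC2 = 0.51099895         # electron rest energy in MeV

def klein_nishina(theta_rad, energy_mev):
    """Klein-Nishina differential cross-section dSigma/dOmega (cm^2/sr per electron)."""
    k = energy_mev / MEC2
    ratio = 1.0 / (1.0 + k * (1.0 - np.cos(theta_rad)))    # E'/E after scattering
    return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - np.sin(theta_rad) ** 2)

# First-scatter weight for a 140 keV photon scattered through 30 degrees.
print(klein_nishina(np.deg2rad(30.0), 0.140))
```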

  1. The O(α{sub s}{sup 2}) heavy quark corrections to charged current deep-inelastic scattering at large virtualities

    Energy Technology Data Exchange (ETDEWEB)

    Blümlein, Johannes, E-mail: Johannes.Bluemlein@desy.de [Deutsches Elektronen–Synchrotron, DESY, Platanenallee 6, D-15738 Zeuthen (Germany); Hasselhuhn, Alexander [Deutsches Elektronen–Synchrotron, DESY, Platanenallee 6, D-15738 Zeuthen (Germany); Research Institute for Symbolic Computation (RISC), Johannes Kepler University, Altenbergerstraße 69, A-4040 Linz (Austria); Pfoh, Torsten [Deutsches Elektronen–Synchrotron, DESY, Platanenallee 6, D-15738 Zeuthen (Germany)

    2014-04-15

    We calculate the O(α{sub s}{sup 2}) heavy flavor corrections to charged current deep-inelastic scattering at large scales Q{sup 2}≫m{sup 2}. The contributing Wilson coefficients are given as convolutions between massive operator matrix elements and massless Wilson coefficients. Foregoing results in the literature are extended and corrected. Numerical results are presented for the kinematic region of the HERA data.

  2. On iteration-separable method on the multichannel scattering theory

    International Nuclear Information System (INIS)

    Zubarev, A.L.; Ivlieva, I.N.; Podkopaev, A.P.

    1975-01-01

    The iteration-separable method for solving equations of the Lippmann-Schwinger type is suggested. Exponential convergence of the method is proven. Numerical convergence is illustrated for e + H scattering. Application of the method to the theory of multichannel scattering is formulated.

  3. 4D cone-beam computed tomography (CBCT) using a moving blocker for simultaneous radiation dose reduction and scatter correction

    Science.gov (United States)

    Zhao, Cong; Zhong, Yuncheng; Duan, Xinhui; Zhang, You; Huang, Xiaokun; Wang, Jing; Jin, Mingwu

    2018-06-01

    Four-dimensional (4D) x-ray cone-beam computed tomography (CBCT) is important for a precise radiation therapy for lung cancer. Due to the repeated use and 4D acquisition over a course of radiotherapy, the radiation dose becomes a concern. Meanwhile, the scatter contamination in CBCT deteriorates image quality for treatment tasks. In this work, we propose the use of a moving blocker (MB) during the 4D CBCT acquisition (‘4D MB’) and to combine motion-compensated reconstruction to address these two issues simultaneously. In 4D MB CBCT, the moving blocker reduces the x-ray flux passing through the patient and collects the scatter information in the blocked region at the same time. The scatter signal is estimated from the blocked region for correction. Even though the number of projection views and projection data in each view are not complete for conventional reconstruction, 4D reconstruction with a total-variation (TV) constraint and a motion-compensated temporal constraint can utilize both spatial gradient sparsity and temporal correlations among different phases to overcome the missing data problem. The feasibility simulation studies using the 4D NCAT phantom showed that 4D MB with motion-compensated reconstruction with 1/3 imaging dose reduction could produce satisfactory images and achieve 37% improvement on structural similarity (SSIM) index and 55% improvement on root mean square error (RMSE), compared to 4D reconstruction at the regular imaging dose without scatter correction. For the same 4D MB data, 4D reconstruction outperformed 3D TV reconstruction by 28% on SSIM and 34% on RMSE. A study of synthetic patient data also demonstrated the potential of 4D MB to reduce the radiation dose by 1/3 without compromising the image quality. This work paves the way for more comprehensive studies to investigate the dose reduction limit offered by this novel 4D MB method using physical phantom experiments and real patient data based on clinical relevant metrics.

  4. SU-F-J-211: Scatter Correction for Clinical Cone-Beam CT System Using An Optimized Stationary Beam Blocker with a Single Scan

    International Nuclear Information System (INIS)

    Liang, X; Zhang, Z; Xie, Y; Gong, S; Niu, T; Zhou, Q

    2016-01-01

    Purpose: X-ray scatter photons result in significant image quality degradation of cone-beam CT (CBCT). Measurement based algorithms using beam blocker directly acquire the scatter samples and achieve significant improvement on the quality of CBCT image. Within existing algorithms, single-scan and stationary beam blocker proposed previously is promising due to its simplicity and practicability. Although demonstrated effectively on tabletop system, the blocker fails to estimate the scatter distribution on clinical CBCT system mainly due to the gantry wobble. In addition, the uniform distributed blocker strips in our previous design results in primary data loss in the CBCT system and leads to the image artifacts due to data insufficiency. Methods: We investigate the motion behavior of the beam blocker in each projection and design an optimized non-uniform blocker strip distribution which accounts for the data insufficiency issue. An accurate scatter estimation is then achieved from the wobble modeling. Blocker wobble curve is estimated using threshold-based segmentation algorithms in each projection. In the blocker design optimization, the quality of final image is quantified using the number of the primary data loss voxels and the mesh adaptive direct search algorithm is applied to minimize the objective function. Scatter-corrected CT images are obtained using the optimized blocker. Results: The proposed method is evaluated using the Catphan 504 phantom and a head patient. On the Catphan 504, our approach reduces the average CT number error from 115 Hounsfield unit (HU) to 11 HU in the selected regions of interest, and improves the image contrast by a factor of 1.45 in the high-contrast regions. On the head patient, the CT number error is reduced from 97 HU to 6 HU in the soft tissue region and image spatial non-uniformity is decreased from 27% to 5% after correction. Conclusion: The proposed optimized blocker design is practical and attractive for CBCT guided radiation

  5. SU-F-J-211: Scatter Correction for Clinical Cone-Beam CT System Using An Optimized Stationary Beam Blocker with a Single Scan

    Energy Technology Data Exchange (ETDEWEB)

    Liang, X; Zhang, Z; Xie, Y [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, GuangDong (China); Gong, S; Niu, T [Department of Radiation Oncology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang (China); Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang (China); Zhou, Q [Department of Radiation Oncology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang (China)

    2016-06-15

    Purpose: X-ray scatter photons result in significant image quality degradation of cone-beam CT (CBCT). Measurement based algorithms using beam blocker directly acquire the scatter samples and achieve significant improvement on the quality of CBCT image. Within existing algorithms, single-scan and stationary beam blocker proposed previously is promising due to its simplicity and practicability. Although demonstrated effectively on tabletop system, the blocker fails to estimate the scatter distribution on clinical CBCT system mainly due to the gantry wobble. In addition, the uniform distributed blocker strips in our previous design results in primary data loss in the CBCT system and leads to the image artifacts due to data insufficiency. Methods: We investigate the motion behavior of the beam blocker in each projection and design an optimized non-uniform blocker strip distribution which accounts for the data insufficiency issue. An accurate scatter estimation is then achieved from the wobble modeling. Blocker wobble curve is estimated using threshold-based segmentation algorithms in each projection. In the blocker design optimization, the quality of final image is quantified using the number of the primary data loss voxels and the mesh adaptive direct search algorithm is applied to minimize the objective function. Scatter-corrected CT images are obtained using the optimized blocker. Results: The proposed method is evaluated using the Catphan 504 phantom and a head patient. On the Catphan 504, our approach reduces the average CT number error from 115 Hounsfield unit (HU) to 11 HU in the selected regions of interest, and improves the image contrast by a factor of 1.45 in the high-contrast regions. On the head patient, the CT number error is reduced from 97 HU to 6 HU in the soft tissue region and image spatial non-uniformity is decreased from 27% to 5% after correction. Conclusion: The proposed optimized blocker design is practical and attractive for CBCT guided radiation

  6. Off-Angle Iris Correction Methods

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Thompson, Joseph T [ORNL; Karakaya, Mahmut [ORNL; Boehnen, Chris Bensing [ORNL

    2016-01-01

    In many real world iris recognition systems obtaining consistent frontal images is problematic due to inexperienced or uncooperative users, untrained operators, or distracting environments. As a result many collected images are unusable by modern iris matchers. In this chapter we present four methods for correcting off-angle iris images to appear frontal, which makes them compatible with existing iris matchers. The methods include an affine correction, a retraced model of the human eye, measured displacements, and a genetic algorithm optimized correction. The affine correction represents a simple way to create an iris image that appears frontal but it does not account for refractive distortions of the cornea. The other methods account for refraction. The retraced model simulates the optical properties of the cornea. The other two methods are data driven. The first uses optical flow to measure the displacements of the iris texture when compared to frontal images of the same subject. The second uses a genetic algorithm to learn a mapping that optimizes the Hamming Distance scores between off-angle and frontal images. In this paper we hypothesize that the biological model presented in our earlier work does not adequately account for all variations in eye anatomy and therefore the two data-driven approaches should yield better performance. Results obtained using the commercial VeriEye matcher show that the genetic algorithm method clearly improves over prior work and makes iris recognition possible up to 50 degrees off-angle.

  7. A direct sampling method for inverse electromagnetic medium scattering

    KAUST Repository

    Ito, Kazufumi; Jin, Bangti; Zou, Jun

    2013-01-01

    In this paper, we study the inverse electromagnetic medium scattering problem of estimating the support and shape of medium scatterers from scattered electric/magnetic near-field data. We shall develop a novel direct sampling method based

  8. Efficient Fixed-Offset GPR Scattering Analysis

    DEFF Research Database (Denmark)

    Meincke, Peter; Chen, Xianyao

    2004-01-01

    The electromagnetic scattering by buried three-dimensional penetrable objects, as involved in the analysis of ground penetrating radar systems, is calculated using the extended Born approximation. The involved scattering tensor is calculated using fast Fourier transforms (FFT's). We incorporate ... in the scattering calculation the correct radiation patterns of the ground penetrating radar antennas by using their plane-wave transmitting and receiving spectra. Finally, we derive an efficient FFT-based method to analyze a fixed-offset configuration in which the location of the transmitting antenna is different...

  9. Correlation expansion: a powerful alternative multiple scattering calculation method

    International Nuclear Information System (INIS)

    Zhao Haifeng; Wu Ziyu; Sebilleau, Didier

    2008-01-01

    We introduce a powerful alternative expansion method to perform multiple scattering calculations. In contrast to standard MS series expansion, where the scattering contributions are grouped in terms of scattering order and may diverge in the low energy region, this expansion, called correlation expansion, partitions the scattering process into contributions from different small atom groups and converges at all energies. It converges faster than MS series expansion when the latter is convergent. Furthermore, it takes less memory than the full MS method so it can be used in the near edge region without any divergence problem, even for large clusters. The correlation expansion framework we derive here is very general and can serve to calculate all the elements of the scattering path operator matrix. Photoelectron diffraction calculations in a cluster containing 23 atoms are presented to test the method and compare it to full MS and standard MS series expansion

  10. Rietveld analysis using powder diffraction data with anomalous scattering effect obtained by focused beam flat sample method

    International Nuclear Information System (INIS)

    Tanaka, Masahiko; Katsuya, Yoshio; Sakata, Osami

    2016-01-01

    The focused-beam flat-sample method (FFM) is a new approach to synchrotron powder diffraction, combining beam focusing optics, a flat powder sample and area detectors. The method has advantages for X-ray diffraction experiments applying the anomalous scattering effect (anomalous diffraction) because of (1) absorption correction without approximation, (2) high-intensity X-rays from the focused incident beams and a high signal-to-noise ratio of the diffracted X-rays, and (3) rapid data collection with area detectors. We applied the FFM to anomalous diffraction experiments and collected synchrotron X-ray powder diffraction data of CoFe_2O_4 (inverse spinel structure) using X-rays near the Fe K absorption edge, which can distinguish Co and Fe by the anomalous scattering effect. We conducted Rietveld analyses with the obtained powder diffraction data and successfully determined the distribution of Co and Fe ions in the CoFe_2O_4 crystal structure.

  11. Acoustic scattering by multiple elliptical cylinders using collocation multipole method

    International Nuclear Information System (INIS)

    Lee, Wei-Ming

    2012-01-01

    This paper presents the collocation multipole method for the acoustic scattering induced by multiple elliptical cylinders subjected to an incident plane sound wave. To satisfy the Helmholtz equation in the elliptical coordinate system, the scattered acoustic field is formulated in terms of angular and radial Mathieu functions which also satisfy the radiation condition at infinity. The sound-soft or sound-hard boundary condition is satisfied by uniformly collocating points on the boundaries. For the sound-hard or Neumann conditions, the normal derivative of the acoustic pressure is determined by using the appropriate directional derivative without requiring the addition theorem of Mathieu functions. By truncating the multipole expansion, a finite linear algebraic system is derived and the scattered field can then be determined according to the given incident acoustic wave. Once the total field is calculated as the sum of the incident field and the scattered field, the near field acoustic pressure along the scatterers and the far field scattering pattern can be determined. For the acoustic scattering of one elliptical cylinder, the proposed results match well with the analytical solutions. The proposed scattered fields induced by two and three elliptical–cylindrical scatterers are critically compared with those provided by the boundary element method to validate the present method. Finally, the effects of the convexity of an elliptical scatterer, the separation between scatterers and the incident wave number and angle on the acoustic scattering are investigated.

  12. Errors and corrections in the separation of spin-flip and non-spin-flip thermal neutron scattering using the polarization analysis technique

    International Nuclear Information System (INIS)

    Williams, W.G.

    1975-01-01

    The use of the polarization analysis technique to separate spin-flip from non-spin-flip thermal neutron scattering is especially important in determining magnetic scattering cross-sections. In order to identify a spin-flip ratio in the scattering with a particular scattering process, it is necessary to correct the experimentally observed 'flipping-ratio' to allow for the efficiencies of the vital instrument components (polarizers and spin-flippers), as well as multiple scattering effects in the sample. Analytical expressions for these corrections are presented and their magnitudes in typical cases estimated. The errors in measurement depend strongly on the uncertainties in the calibration of the efficiencies of the polarizers and the spin-flipper. The final section is devoted to a discussion of polarization analysis instruments.

  13. Attenuation correction method for single photon emission CT

    Energy Technology Data Exchange (ETDEWEB)

    Morozumi, Tatsuru; Nakajima, Masato [Keio Univ., Yokohama (Japan). Faculty of Science and Technology; Ogawa, Koichi; Yuta, Shinichi

    1983-10-01

    A correction method (Modified Correction Matrix method) is proposed to implement iterative correction by exactly measuring the attenuation constant distribution in a test body, calculating a correction factor for every picture element, and then multiplying the image by these factors. Computer simulation showed that the proposed method is more effective than the conventional correction matrix method, particularly when applied to a test body in which the attenuation constant changes strongly. Since actual measurement data always contain quantum noise, the noise was taken into account in the simulation; the correction effect remained large even in the presence of noise. To verify its clinical effectiveness, an experiment using an acrylic phantom was also carried out. As a result, the recovery of image quality in the parts with a small attenuation constant was remarkable compared with the conventional method.
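    A generic first-order sketch of a per-pixel multiplicative attenuation correction computed from a measured mu-map: each pixel's factor is the inverse of its survival probability averaged over projection directions. The record's modified correction matrix method iterates a step of this kind with re-estimated activity; the grid, step size and mu values below are toy assumptions and the loop is deliberately unoptimised.

```python
import numpy as np

def attenuation_factor_map(mu_map, pixel_cm, n_angles=32, step=0.5):
    """First-order per-pixel attenuation correction factors from a mu-map (1/cm)."""
    ny, nx = mu_map.shape
    factors = np.ones((ny, nx))
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for iy in range(ny):
        for ix in range(nx):
            survival = 0.0
            for a in angles:
                d = np.array([np.cos(a), np.sin(a)])
                pos = np.array([ix + 0.5, iy + 0.5])
                path = 0.0
                while 0.0 <= pos[0] < nx and 0.0 <= pos[1] < ny:
                    path += mu_map[int(pos[1]), int(pos[0])] * step * pixel_cm
                    pos = pos + d * step
                survival += np.exp(-path)           # attenuation towards the boundary
            factors[iy, ix] = n_angles / survival   # inverse of the mean survival
    return factors

# Toy example: a uniform attenuating disc (mu = 0.15 /cm) on a 32 x 32 grid, 0.5 cm pixels.
yy, xx = np.mgrid[0:32, 0:32]
mu = np.where((xx - 16) ** 2 + (yy - 16) ** 2 < 14 ** 2, 0.15, 0.0)
correction = attenuation_factor_map(mu, pixel_cm=0.5)
```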

  14. SU-F-T-143: Implementation of a Correction-Based Output Model for a Compact Passively Scattered Proton Therapy System

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, S; Ahmad, S; Chen, Y; Ferreira, C; Islam, M; Lau, A; Jin, H [University of Oklahoma Health Sciences Center, Oklahoma City, OK (United States); Keeling, V [Carti, Inc., Little Rock, AR (United States)

    2016-06-15

    Purpose: To commission and investigate the accuracy of an output (cGy/MU) prediction model for a compact passively scattered proton therapy system. Methods: A previously published output prediction model (Sahoo et al, Med Phys, 35, 5088–5097, 2008) was commissioned for our Mevion S250 proton therapy system. This model is a correction-based model that multiplies correction factors (d/MU{sub wnc}=ROFxSOBPFxRSFxSOBPOCFxOCRxFSFxISF). These factors accounted for changes in output due to options (12 large, 5 deep, and 7 small), modulation width M, range R, off-center, off-axis, field-size, and off-isocenter. In this study, the model was modified to ROFxSOBPFxRSFxOCRxFSFxISF-OCFxGACF by merging SOBPOCF and ISF for simplicity and introducing a gantry angle correction factor (GACF). To commission the model, outputs over 1,000 data points were taken at the time of the system commissioning. The output was predicted by interpolation (1D for SOBPF, FSF, and GACF; 2D for RSF and OCR) with inverse-square calculation (ISF-OCF). The outputs of 273 combinations of R and M covering all 24 options were measured to test the model. To minimize fluence perturbation, scattered dose from range compensator and patient was not considered. The percent differences between the predicted (P) and measured (M) outputs were calculated to test the prediction accuracy ([P-M]/Mx100%). Results: GACF was required because of up to 3.5% output variation dependence on the gantry angle. A 2D interpolation was required for OCR because the dose distribution was not radially symmetric especially for the deep options. The average percent differences were −0.03±0.98% (mean±SD) and the differences of all the measurements fell within ±3%. Conclusion: It is concluded that the model can be clinically used for the compact passively scattered proton therapy system. However, great care should be taken when the field-size is less than 5×5 cm{sup 2} where a direct output measurement is required due to substantial
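    A hedged sketch of the correction-based output model as a product of interpolated factors multiplied by an inverse-square term; the commissioning tables below are invented numbers, and the 2D interpolation for RSF/OCR and the off-center factor mentioned in the abstract are omitted.

```python
import numpy as np

def predicted_output(rof, sobpf_table, rsf_table, fsf_table, gacf_table,
                     modulation, range_cm, field_size, gantry_deg,
                     ssd_cm, ref_ssd_cm=200.0):
    """Output (cGy/MU) as a product of 1D-interpolated correction factors."""
    sobpf = np.interp(modulation, *sobpf_table)
    rsf = np.interp(range_cm, *rsf_table)
    fsf = np.interp(field_size, *fsf_table)
    gacf = np.interp(gantry_deg, *gacf_table)
    isf = (ref_ssd_cm / ssd_cm) ** 2                # inverse-square factor
    return rof * sobpf * rsf * fsf * gacf * isf

# Hypothetical commissioning tables: (x values, factor values).
sobpf_t = ([2, 5, 10, 15], [1.05, 1.00, 0.93, 0.88])        # vs modulation width (cm)
rsf_t = ([5, 10, 15, 20], [1.02, 1.00, 0.97, 0.95])         # vs range (cm)
fsf_t = ([5, 10, 15, 25], [0.98, 1.00, 1.01, 1.02])         # vs field size (cm)
gacf_t = ([0, 90, 180, 270], [1.000, 1.015, 0.995, 1.010])  # vs gantry angle (deg)

print(predicted_output(1.0, sobpf_t, rsf_t, fsf_t, gacf_t,
                       modulation=10, range_cm=12, field_size=10,
                       gantry_deg=45, ssd_cm=200))
```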

  15. Efficient SPECT scatter calculation in non-uniform media using correlated Monte Carlo simulation

    International Nuclear Information System (INIS)

    Beekman, F.J.

    1999-01-01

    Accurate simulation of scatter in projection data of single photon emission computed tomography (SPECT) is computationally extremely demanding for activity distribution in non-uniform dense media. This paper suggests how the computation time and memory requirements can be significantly reduced. First the scatter projection of a uniform dense object (P SDSE ) is calculated using a previously developed accurate and fast method which includes all orders of scatter (slab-derived scatter estimation), and then P SDSE is transformed towards the desired projection P which is based on the non-uniform object. The transform of P SDSE is based on two first-order Compton scatter Monte Carlo (MC) simulated projections. One is based on the uniform object (P u ) and the other on the object with non-uniformities (P ν ). P is estimated by P-tilde=P SDSE P ν /P u . A tremendous decrease in noise in P-tilde is achieved by tracking photon paths for P ν identical to those which were tracked for the calculation of P u and by using analytical rather than stochastic modelling of the collimator. The method was validated by comparing the results with standard MC-simulated scatter projections (P) of 99m Tc and 201 Tl point sources in a digital thorax phantom. After correction, excellent agreement was obtained between P-tilde and P. The total computation time required to calculate an accurate scatter projection of an extended distribution in a thorax phantom on a PC is a only few tens of seconds per projection, which makes the method attractive for application in accurate scatter correction in clinical SPECT. Furthermore, the method removes the need of excessive computer memory involved with previously proposed 3D model-based scatter correction methods. (author)
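    The transform at the heart of the method, P-tilde = P SDSE x P ν / P u, is a simple element-wise operation once the three projections are available; the toy arrays below only illustrate that scaling and say nothing about how the correlated Monte Carlo projections themselves are generated.

```python
import numpy as np

def transform_scatter_projection(p_sdse, p_nonuniform, p_uniform, eps=1e-9):
    """Scale a uniform-object scatter projection to a non-uniform object:
    P_tilde = P_SDSE * P_nu / P_u (element-wise, with correlated photon paths)."""
    p_u = np.maximum(np.asarray(p_uniform, dtype=float), eps)
    return np.asarray(p_sdse, dtype=float) * np.asarray(p_nonuniform, dtype=float) / p_u

# Toy 64 x 64 projections (hypothetical values).
rng = np.random.default_rng(3)
p_sdse = 1.0 + rng.random((64, 64))
p_nu = 0.8 * p_sdse * (1.0 + 0.05 * rng.standard_normal((64, 64)))
p_u = p_sdse.copy()
p_corrected = transform_scatter_projection(p_sdse, p_nu, p_u)
```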

  16. Fast analytical scatter estimation using graphics processing units.

    Science.gov (United States)

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first order scatter in cone-beam image reconstruction improves the contrast to noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and with further acceleration and a method to account for multiple scatter may be useful for practical scatter correction schemes.

  17. A method to correct coordinate distortion in EBSD maps

    International Nuclear Information System (INIS)

    Zhang, Y.B.; Elbrønd, A.; Lin, F.X.

    2014-01-01

    Drift during electron backscatter diffraction mapping leads to coordinate distortions in resulting orientation maps, which affects, in some cases significantly, the accuracy of analysis. A method, thin plate spline, is introduced and tested to correct such coordinate distortions in the maps after the electron backscatter diffraction measurements. The accuracy of the correction as well as theoretical and practical aspects of using the thin plate spline method is discussed in detail. By comparing with other correction methods, it is shown that the thin plate spline method is most efficient to correct different local distortions in the electron backscatter diffraction maps. - Highlights: • A new method is suggested to correct nonlinear spatial distortion in EBSD maps. • The method corrects EBSD maps more precisely than presently available methods. • Errors less than 1–2 pixels are typically obtained. • Direct quantitative analysis of dynamic data are available after this correction
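    A minimal sketch of a thin plate spline coordinate correction using SciPy's RBFInterpolator (an assumption: SciPy >= 1.7 with the 'thin_plate_spline' kernel): control points located both in the drifted map and in an undistorted reference define the mapping, which is then applied to all map coordinates. The drift field in the example is synthetic.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def correct_coordinates(distorted_pts, reference_pts, map_coords):
    """Thin plate spline mapping from distorted to reference coordinates.

    distorted_pts, reference_pts : (N, 2) matching control points.
    map_coords : (M, 2) coordinates of the map to be corrected.
    """
    tps = RBFInterpolator(distorted_pts, reference_pts, kernel='thin_plate_spline')
    return tps(map_coords)

# Toy example: a smooth synthetic drift applied to random control points.
rng = np.random.default_rng(4)
ref = rng.uniform(0.0, 100.0, size=(20, 2))
drifted = ref + np.column_stack([0.02 * ref[:, 1], 0.03 * ref[:, 0]])
grid = np.stack(np.meshgrid(np.arange(0, 100, 10.0),
                            np.arange(0, 100, 10.0)), axis=-1).reshape(-1, 2)
corrected = correct_coordinates(drifted, ref, grid)
```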

  18. Application of focused-beam flat-sample method to synchrotron powder X-ray diffraction with anomalous scattering effect

    International Nuclear Information System (INIS)

    Tanaka, M; Katsuya, Y; Matsushita, Y

    2013-01-01

    The focused-beam flat-sample method (FFM), which is a method for high-resolution and rapid synchrotron X-ray powder diffraction measurements by combination of beam focusing optics, a flat shape sample and an area detector, was applied for diffraction experiments with anomalous scattering effect. The advantages of FFM for anomalous diffraction were absorption correction without approximation, rapid data collection by an area detector and good signal-to-noise ratio data by focusing optics. In the X-ray diffraction experiments of CoFe 2 O 4 and Fe 3 O 4 (by FFM) using X-rays near the Fe K absorption edge, the anomalous scattering effect between Fe/Co or Fe 2+ /Fe 3+ can be clearly detected, due to the change of diffraction intensity. The change of the observed diffraction intensity with the incident X-ray energy was consistent with the calculation. The FFM is expected to be a method for anomalous powder diffraction.

  19. Rietveld analysis using powder diffraction data with anomalous scattering effect obtained by focused beam flat sample method

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, Masahiko, E-mail: masahiko@spring8.or.jp; Katsuya, Yoshio, E-mail: katsuya@spring8.or.jp; Sakata, Osami, E-mail: SAKATA.Osami@nims.go.jp [Synchrotron X-ray Station at SPring-8, National Institute for Materials Science 1-1-1 Kouto, Sayo, Hyogo 679-5198 (Japan)

    2016-07-27

    The focused-beam flat-sample method (FFM) is a new approach to synchrotron powder diffraction, combining beam focusing optics, a flat powder sample and area detectors. The method has advantages for X-ray diffraction experiments applying the anomalous scattering effect (anomalous diffraction) because of (1) absorption correction without approximation, (2) high-intensity X-rays from the focused incident beams and a high signal-to-noise ratio of the diffracted X-rays, and (3) rapid data collection with area detectors. We applied the FFM to anomalous diffraction experiments and collected synchrotron X-ray powder diffraction data of CoFe{sub 2}O{sub 4} (inverse spinel structure) using X-rays near the Fe K absorption edge, which can distinguish Co and Fe by the anomalous scattering effect. We conducted Rietveld analyses with the obtained powder diffraction data and successfully determined the distribution of Co and Fe ions in the CoFe{sub 2}O{sub 4} crystal structure.

  20. Magnetic photon scattering

    International Nuclear Information System (INIS)

    Lovesey, S.W.

    1987-05-01

    The report reviews, at an introductory level, the theory of photon scattering from condensed matter. Magnetic scattering, which arises from first-order relativistic corrections to the Thomson scattering amplitude, is treated in detail and related to the corresponding interaction in the magnetic neutron diffraction amplitude. (author)

  1. A novel scatter separation method for multi-energy x-ray imaging

    Science.gov (United States)

    Sossin, A.; Rebuffel, V.; Tabary, J.; Létang, J. M.; Freud, N.; Verger, L.

    2016-06-01

    X-ray imaging coupled with recently emerged energy-resolved photon counting detectors provides the ability to differentiate material components and to estimate their respective thicknesses. However, such techniques require highly accurate images. The presence of scattered radiation leads to a loss of spatial contrast and, more importantly, a bias in radiographic material imaging and artefacts in computed tomography (CT). The aim of the present study was to introduce and evaluate a partial attenuation spectral scatter separation approach (PASSSA) adapted for multi-energy imaging. This evaluation was carried out with the aid of numerical simulations provided by an internal simulation tool, Sindbad-SFFD. A simplified numerical thorax phantom placed in a CT geometry was used. The attenuation images and CT slices obtained from corrected data showed a remarkable increase in local contrast and internal structure detectability when compared to uncorrected images. Scatter induced bias was also substantially decreased. In terms of quantitative performance, the developed approach proved to be quite accurate as well. The average normalized root-mean-square error between the uncorrected projections and the reference primary projections was around 23%. The application of PASSSA reduced this error to around 5%. Finally, in terms of voxel value accuracy, an increase by a factor  >10 was observed for most inspected volumes-of-interest, when comparing the corrected and uncorrected total volumes.

  2. Characterization of Scattered X-Ray Photons in Dental Cone-Beam Computed Tomography.

    Science.gov (United States)

    Yang, Ching-Ching

    2016-01-01

    Scatter is a very important artifact-causing factor in dental cone-beam CT (CBCT), which has a major influence on the detectability of details within images. This work aimed to improve the image quality of dental CBCT through scatter correction. Scatter was estimated in the projection domain from the low-frequency component of the difference between the raw CBCT projection and the projection obtained by extrapolating the model fitted to the raw projections acquired with 2 different sizes of axial field-of-view (FOV). The function for curve fitting was optimized by using Monte Carlo simulation. To validate the proposed method, an anthropomorphic phantom and a water-filled cylindrical phantom with rod inserts simulating different tissue materials were scanned using 120 kVp, 5 mA and 9-second scanning time covering an axial FOV of 4 cm and 13 cm. The detectability of the CT image was evaluated by calculating the contrast-to-noise ratio (CNR). Beam hardening and cupping artifacts were observed in CBCT images without scatter correction, especially in those acquired with 13 cm FOV. These artifacts were reduced in CBCT images corrected by the proposed method, demonstrating its efficacy on scatter correction. After scatter correction, the image quality of CBCT was improved in terms of target detectability, which was quantified as the CNR for rod inserts in the cylindrical phantom. Hopefully the calculations performed in this work can provide a route to reach a high level of diagnostic image quality for CBCT imaging used in oral and maxillofacial structures whilst keeping patient dose as low as reasonably achievable, which may ultimately make CBCT scanning a reliable and safe tool in clinical practice.
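    For reference, the contrast-to-noise ratio used above can be computed as the absolute difference of the ROI and background means divided by the background standard deviation; the exact ROI definitions in the record may differ, and the image below is synthetic.

```python
import numpy as np

def contrast_to_noise_ratio(image, roi_mask, background_mask):
    """CNR = |mean(ROI) - mean(background)| / std(background)."""
    img = np.asarray(image, dtype=float)
    roi, bkg = img[roi_mask], img[background_mask]
    return abs(roi.mean() - bkg.mean()) / bkg.std()

# Toy example: a brighter rod insert in a noisy uniform background.
rng = np.random.default_rng(5)
img = rng.normal(100.0, 5.0, size=(256, 256))
img[100:140, 100:140] += 20.0
roi = np.zeros(img.shape, dtype=bool); roi[105:135, 105:135] = True
bkg = np.zeros(img.shape, dtype=bool); bkg[20:60, 20:60] = True
print(contrast_to_noise_ratio(img, roi, bkg))
```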

  3. Characterization of Scattered X-Ray Photons in Dental Cone-Beam Computed Tomography.

    Directory of Open Access Journals (Sweden)

    Ching-Ching Yang

    Full Text Available Scatter is an important artifact-causing factor in dental cone-beam CT (CBCT) and has a major influence on the detectability of details within images. This work aimed to improve the image quality of dental CBCT through scatter correction. Scatter was estimated in the projection domain from the low-frequency component of the difference between the raw CBCT projection and the projection obtained by extrapolating the model fitted to the raw projections acquired with two different sizes of axial field-of-view (FOV). The function used for curve fitting was optimized using Monte Carlo simulation. To validate the proposed method, an anthropomorphic phantom and a water-filled cylindrical phantom with rod inserts simulating different tissue materials were scanned at 120 kVp and 5 mA with a 9-second scanning time, covering axial FOVs of 4 cm and 13 cm. The detectability of the CT image was evaluated by calculating the contrast-to-noise ratio (CNR). Beam-hardening and cupping artifacts were observed in CBCT images without scatter correction, especially in those acquired with the 13 cm FOV. These artifacts were reduced in CBCT images corrected by the proposed method, demonstrating its efficacy for scatter correction. After scatter correction, the image quality of CBCT was improved in terms of target detectability, which was quantified as the CNR for the rod inserts in the cylindrical phantom. The calculations performed in this work can hopefully provide a route to a high level of diagnostic image quality for CBCT imaging of oral and maxillofacial structures while keeping patient dose as low as reasonably achievable, which may ultimately make CBCT scanning a reliable and safe tool in clinical practice.

  4. Direct sampling methods for inverse elastic scattering problems

    Science.gov (United States)

    Ji, Xia; Liu, Xiaodong; Xi, Yingxia

    2018-03-01

    We consider the inverse elastic scattering of incident plane compressional and shear waves from the knowledge of the far field patterns. Specifically, three direct sampling methods for location and shape reconstruction are proposed using the different components of the far field patterns. Only inner products are involved in the computation, so the novel sampling methods are very simple and fast to implement. With the help of the factorization of the far field operator, we give a lower bound of the proposed indicator functionals for sampling points inside the scatterers. For sampling points outside the scatterers, we show that the indicator functionals decay like Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functionals depend continuously on the far field patterns, which further implies that the novel sampling methods are extremely stable with respect to data error. For the case when the observation directions are restricted to a limited aperture, we first introduce some data retrieval techniques to obtain the data that cannot be measured directly and then use the proposed direct sampling methods for location and shape reconstruction. Finally, some numerical simulations in two dimensions are conducted with noisy data, and the results further verify the effectiveness and robustness of the proposed sampling methods, even for multiple multiscale cases and limited-aperture problems.

  5. Comparison of matrix methods for elastic wave scattering problems

    International Nuclear Information System (INIS)

    Tsao, S.J.; Varadan, V.K.; Varadan, V.V.

    1983-01-01

    This article briefly describes the T-matrix method and the MOOT (method of optimal truncation) for elastic wave scattering as they apply to 2-D SH-wave problems as well as 3-D elastic wave problems. The two methods are compared for scattering by elliptical cylinders as well as oblate spheroids of various eccentricities as a function of frequency. Convergence and symmetry of the scattering cross section are also compared for ellipses and spheroidal cavities of different aspect ratios. Both the T-matrix approach and the MOOT were programmed on an Amdahl 470 computer using double precision arithmetic. Although the T-matrix method and MOOT are not always in agreement, it is in no way implied that any of the published results using MOOT are in error.

  6. Multiple and dependent scattering by densely packed discrete spheres: Comparison of radiative transfer and Maxwell theory

    International Nuclear Information System (INIS)

    Ma, L.X.; Tan, J.Y.; Zhao, J.M.; Wang, F.Q.; Wang, C.A.

    2017-01-01

    The radiative transfer equation (RTE) has been widely used to deal with multiple scattering of light by sparsely and randomly distributed discrete particles. However, for densely packed particles, the RTE becomes questionable due to strong dependent scattering effects. This paper examines the accuracy of the RTE by comparison with exact electromagnetic theory. For an imaginary spherical volume filled with randomly distributed, densely packed spheres, the RTE is solved by the Monte Carlo method combined with the Percus–Yevick hard sphere model to account for the dependent scattering effect, while the electromagnetic calculation is based on the multi-sphere superposition T-matrix method. The Mueller matrix elements of the system with different size parameters and volume fractions of spheres are obtained using both methods. The results verify that the RTE fails for systems with a high volume fraction due to dependent scattering effects. Apart from the effects of forward interference scattering and coherent backscattering, the Percus–Yevick hard sphere model shows good accuracy in accounting for the far-field interference effects for medium or smaller size parameters (up to 6.964 in this study). For densely packed discrete spheres with a large size parameter (13.928 in this study), the improvement from the dependent scattering correction tends to deteriorate. These observations indicate that caution must be taken when using the RTE to deal with radiative transfer in dense discrete random media even when the dependent scattering correction is applied. - Highlights: • The Mueller matrix of randomly distributed, densely packed spheres is investigated. • The effects of multiple scattering and dependent scattering are analyzed. • The accuracy of radiative transfer theory for densely packed spheres is discussed. • Dependent scattering correction takes effect at medium size parameters or smaller. • Performance of dependent scattering correction

  7. Methods for assessing forward and backward light scatter in patients with cataract.

    Science.gov (United States)

    Crnej, Alja; Hirnschall, Nino; Petsoglou, Con; Findl, Oliver

    2017-08-01

    To compare objective methods for assessing backward and forward light scatter and psychophysical tests in patients with cataract. Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom. Prospective case series. This study included patients scheduled for cataract surgery. Lens opacities were grouped into predominantly nuclear sclerotic, cortical, posterior subcapsular, and mixed cataracts. Backward light scatter was assessed using a rotating Scheimpflug imaging technique (Pentacam HR), forward light scatter using a straylight meter (C-Quant), and straylight using the double-pass method (Optical Quality Analysis System, point-spread function [PSF] meter). The results were correlated with visual acuity under photopic conditions as well as photopic and mesopic contrast sensitivity. The study comprised 56 eyes of 56 patients. The mean age of the 23 men and 33 women was 71 years (range 48 to 84 years). Two patients were excluded. Of the remaining patients, 15 had predominantly nuclear sclerotic cataracts, 13 had cortical cataracts, 11 had posterior subcapsular cataracts, and 15 had mixed cataracts. Correlations between devices were low. The highest correlation was between PSF meter measurements and Scheimpflug measurements (r = 0.32). The best correlation with corrected distance visual acuity was obtained with the PSF meter (r = 0.45). Forward and backward light-scatter measurements cannot be used interchangeably. Scatter, as an aspect of quality of vision, was independent of acuity. Measuring forward light scatter with the straylight meter can be a useful additional tool in preoperative decision-making. Copyright © 2017 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  8. Impact on dose and image quality of a software-based scatter correction in mammography.

    Science.gov (United States)

    Monserrat, Teresa; Prieto, Elena; Barbés, Benigno; Pina, Luis; Elizalde, Arlette; Fernández, Belén

    2017-01-01

    Background In 2014, Siemens developed a new software-based scatter correction (Progressive Reconstruction Intelligently Minimizing Exposure [PRIME]), enabling grid-less digital mammography. Purpose To compare doses and image quality between PRIME (grid-less) and standard (with anti-scatter grid) modes. Material and Methods Contrast-to-noise ratio (CNR) was measured for various polymethylmethacrylate (PMMA) thicknesses and the dose values provided by the mammography unit were recorded. CDMAM phantom images were acquired for various PMMA thicknesses and the inverse image quality figure (IQF_inv) was calculated. Values of incident entrance surface air kerma (ESAK) and average glandular dose (AGD) were obtained from the DICOM header for a total of 1088 pairs of clinical cases. Two experienced radiologists subjectively compared the image quality of a total of 149 pairs of clinical cases. Results CNR values were higher and doses were lower in PRIME mode for all thicknesses. IQF_inv values in PRIME mode were lower for all thicknesses except for 40 mm of PMMA equivalent, for which IQF_inv was slightly greater in PRIME mode. A mean reduction of 10% in ESAK and 12% in AGD was obtained in PRIME mode with respect to standard mode. The clinical image quality of PRIME and standard acquisitions was rated as similar in most of the cases (84% for the first radiologist and 67% for the second). Conclusion The use of the PRIME software reduces, on average, the radiation dose to the breast without affecting image quality. This reduction is greater for thinner and denser breasts.
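
    For reference, the contrast-to-noise ratio used as the detectability metric here is conventionally computed from a target region and a background region; a minimal sketch (the mask-based region selection is an assumption, not the paper's analysis code):

```python
import numpy as np

def contrast_to_noise_ratio(image, target_mask, background_mask):
    """CNR = (mean(target) - mean(background)) / std(background)."""
    target = image[target_mask]
    background = image[background_mask]
    return (target.mean() - background.mean()) / background.std()
```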

  9. Method for more accurate transmittance measurements of low-angle scattering samples using an integrating sphere with an entry port beam diffuser

    International Nuclear Information System (INIS)

    Nilsson, Annica M.; Jonsson, Andreas; Jonsson, Jacob C.; Roos, Arne

    2011-01-01

    For most integrating sphere measurements, the difference in light distribution between a specular reference beam and a diffused sample beam can result in significant errors. The problem becomes especially pronounced in integrating spheres that include a port for reflectance or diffuse transmittance measurements. The port is included in many standard spectrophotometers to facilitate a multipurpose instrument, however, absorption around the port edge can result in a detected signal that is too low. The absorption effect is especially apparent for low-angle scattering samples, because a significant portion of the light is scattered directly onto that edge. In this paper, a method for more accurate transmittance measurements of low-angle light-scattering samples is presented. The method uses a standard integrating sphere spectrophotometer, and the problem with increased absorption around the port edge is addressed by introducing a diffuser between the sample and the integrating sphere during both reference and sample scan. This reduces the discrepancy between the two scans and spreads the scattered light over a greater portion of the sphere wall. The problem with multiple reflections between the sample and diffuser is successfully addressed using a correction factor. The method is tested for two patterned glass samples with low-angle scattering and in both cases the transmittance accuracy is significantly improved.

  10. Efficient Color-Dressed Calculation of Virtual Corrections

    CERN Document Server

    Giele, Walter; Winter, Jan

    2010-01-01

    With the advent of generalized unitarity and parametric integration techniques, the construction of a generic next-to-leading order Monte Carlo becomes feasible. Such a generator will entail the treatment of QCD color in the amplitudes. We extend the concept of color dressing to one-loop amplitudes, resulting in the formulation of an explicit algorithmic solution for the calculation of arbitrary scattering processes at next-to-leading order. The resulting algorithm is of exponential complexity, that is, the numerical evaluation time of the virtual corrections grows by a constant multiplicative factor as the number of external partons is increased. To study the properties of the method, we calculate the virtual corrections to $n$-gluon scattering.

  11. Point kernels and superposition methods for scatter dose calculations in brachytherapy

    International Nuclear Information System (INIS)

    Carlsson, A.K.

    2000-01-01

    Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method where the point kernels are approximated as isotropic and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized to biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)
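
    The biexponential parametrization mentioned at the end can be sketched as follows (illustrative only: the assumed form k(r) = A·exp(-a·r) + B·exp(-b·r), the radial grid, and the synthetic kernel values stand in for the tabulated Monte Carlo kernels):

```python
import numpy as np
from scipy.optimize import curve_fit

def biexponential(r, A, a, B, b):
    """Assumed biexponential form for a scatter dose point kernel."""
    return A * np.exp(-a * r) + B * np.exp(-b * r)

# Synthetic kernel values on a radial grid (placeholders for Monte Carlo data).
rng = np.random.default_rng(0)
r = np.linspace(0.5, 10.0, 40)                        # cm
kernel = 0.8 * np.exp(-0.9 * r) + 0.2 * np.exp(-0.2 * r)
kernel *= 1.0 + 0.01 * rng.standard_normal(r.size)    # add a little noise

params, _ = curve_fit(biexponential, r, kernel, p0=(1.0, 1.0, 0.1, 0.1))
print("Fitted (A, a, B, b):", params)
```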

  12. Electromagnetic corrections to ππ scattering lengths: some lessons for the construction of effective hadronic field theories

    International Nuclear Information System (INIS)

    Maltman, K.

    1998-01-01

    Using the framework of effective chiral Lagrangians, we show that, in order to correctly implement electromagnetism (EM), as generated from the Standard Model, into effective hadronic theories (such as meson-exchange models), it is insufficient to consider only graphs in the low-energy effective theory containing explicit photon lines. The Standard Model requires the presence of contact interactions in the effective theory which are electromagnetic in origin, but which involve no photons in the effective theory. We illustrate the problems which can result from a "standard" EM subtraction, i.e., from assuming that removing all contributions in the effective theory generated by graphs with explicit photon lines fully removes EM effects, by considering the case of the s-wave ππ scattering lengths. In this case it is shown that such a subtraction procedure would lead to the incorrect conclusion that the strong interaction isospin-breaking contributions to these quantities were large when, in fact, they are known to vanish at leading order in m_d − m_u. The leading EM contact corrections for the channels employed in the extraction of the I=0,2 s-wave ππ scattering lengths from experiment are also evaluated. (orig.)

  13. QCD and power corrections to sum rules in deep-inelastic lepton-nucleon scattering

    International Nuclear Information System (INIS)

    Ravindran, V.; Neerven, W.L. van

    2001-01-01

    In this paper we study QCD and power corrections to sum rules which show up in deep-inelastic lepton-hadron scattering. Furthermore, we make a distinction between fundamental sum rules, which can be derived from quantum field theory, and those which are of phenomenological origin. Using current algebra techniques, the fundamental sum rules can be expressed as expectation values of (partially) conserved (axial-)vector currents sandwiched between hadronic states. These expectation values yield the quantum numbers of the corresponding hadron, which are determined by the underlying flavour group SU(n)_F. In this case one can show that there exists an intimate relation between the appearance of power and QCD corrections. The above features do not hold for the phenomenological sum rules, hereafter called non-fundamental. They have no foundation in quantum field theory and mostly depend on certain assumptions made for the structure functions, such as super-convergence relations or the parton model. Therefore only the fundamental sum rules provide us with a stringent test of QCD.

  14. Delbrueck scattering of monoenergetic photons

    International Nuclear Information System (INIS)

    Kahane, S.

    1978-05-01

    The Delbrueck effect was experimentally investigated in high Z nuclei with monoenergetic photons in the range 6.8-11.4 MeV. Two different methods were used for measurements of the differential scattering cross-section, in the 25-140 deg range and in the forward direction (theta = 1.5 deg), respectively. The known Compton scattering cross-section was used in a new and unique way for the determination of the elastic scattering cross-section. Isolation of the contribution of the real Delbrueck amplitudes to the cross-section was carried out successfully. Experimental confirmation of the theoretical calculations of Papatzacos and Mork and measurement, for the first time, of Rayleigh scattering in the 10 MeV region are also reported. One of the most interesting findings is the presence of Coulomb corrections in Delbrueck scattering at these energies. More theoretical effort is needed in this last direction. (author)

  15. A direct sampling method for inverse electromagnetic medium scattering

    KAUST Repository

    Ito, Kazufumi

    2013-09-01

    In this paper, we study the inverse electromagnetic medium scattering problem of estimating the support and shape of medium scatterers from scattered electric/magnetic near-field data. We shall develop a novel direct sampling method based on an analysis of electromagnetic scattering and the behavior of the fundamental solution. It is applicable to a few incident fields and needs only to compute inner products of the measured scattered field with the fundamental solutions located at sampling points. Hence, it is strictly direct, computationally very efficient and highly robust to the presence of data noise. Two- and three-dimensional numerical experiments indicate that it can provide reliable support estimates for multiple scatterers in the case of both exact and highly noisy data. © 2013 IOP Publishing Ltd.

  16. Correction factors for the NMi free-air ionization chamber for medium-energy x-rays calculated with the Monte Carlo method

    International Nuclear Information System (INIS)

    Grimbergen, T.W.M.; Dijk, E. van; Vries, W. de

    1998-01-01

    A new method is described for the determination of x-ray quality dependent correction factors for free-air ionization chambers. The method is based on weighting correction factors for mono-energetic photons, which are calculated using the Monte Carlo method, with measured air kerma spectra. With this method, correction factors for electron loss, scatter inside the chamber and transmission through the diaphragm and front wall have been calculated for the NMi free-air chamber for medium-energy x-rays for a wide range of x-ray qualities in use at NMi. The newly obtained correction factors were compared with the values in use at present, which are based on interpolation of experimental data for a specific set of x-ray qualities. For x-ray qualities which are similar to this specific set, the agreement between the correction factors determined with the new method and those based on the experimental data is better than 0.1%, except for heavily filtered x-rays generated at 250 kV. For x-ray qualities dissimilar to the specific set, differences up to 0.4% exist, which can be explained by uncertainties in the interpolation procedure of the experimental data. Since the new method does not depend on experimental data for a specific set of x-ray qualities, the new method allows for a more flexible use of the free-air chamber as a primary standard for air kerma for any x-ray quality in the medium-energy x-ray range. (author)

  17. Dose calculation in eye brachytherapy with Ir-192 threads using the Sievert integral corrected for attenuation and scattering with the Meisberger polynomials

    International Nuclear Information System (INIS)

    Vivanco, M.G. Bernui de; Cardenas R, A.

    2006-01-01

    Ocular brachytherapy, often the only alternative for preserving the eye in patients with ocular cancer, is carried out at the National Institute of Neoplastic Diseases (INEN) using Iridium-192 threads. The threads are placed radially on the inner surface of a spherical cap of 18 K gold, and the cap remains on the eye until the dose prescribed by the physician is reached. The main objective of this work is to calculate, in a correct and practical way, how long the ocular brachytherapy treatment should last in order to deliver the prescribed dose. To this end, the Sievert integral, corrected for attenuation and scattering effects with the Meisberger polynomials, is evaluated numerically using Simpson's method. The calculation based on the Sievert integral does not take into account the scattering produced by the gold cap or the variation of the exposure rate constant with distance. The results obtained with the Sievert integral are compared with those obtained using the Monte Carlo code PENELOPE, and they are observed to agree at distances from the cap surface greater than or equal to 2 mm. (Author)
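
    A minimal numerical sketch of this type of calculation is given below (not the INEN implementation; the source geometry, the filter parameters mu and t, and the default polynomial coefficients are placeholders, and the published Meisberger coefficients for Ir-192 should be substituted):

```python
import numpy as np
from scipy.integrate import simpson

def sievert_line_source(h, theta1, theta2, mu, t, n=201):
    """Sievert integral for a filtered line source, evaluated with composite
    Simpson's rule: integral of exp(-mu*t/cos(theta)) over the angle subtended
    by the source, divided by the perpendicular distance h (cm)."""
    theta = np.linspace(theta1, theta2, n)
    integrand = np.exp(-mu * t / np.cos(theta))
    return simpson(integrand, x=theta) / h

def meisberger_correction(r, coeffs=(1.0, 0.0, 0.0, 0.0)):
    """Third-order polynomial tissue attenuation/scatter correction
    T(r) = A + B*r + C*r**2 + D*r**3 (default coefficients are placeholders)."""
    A, B, C, D = coeffs
    return A + B * r + C * r ** 2 + D * r ** 3

# Relative dose rate 0.5 cm from a thread subtending +/- 30 degrees.
h = 0.5
rate = sievert_line_source(h, -np.pi / 6, np.pi / 6, mu=0.3, t=0.01)
print(rate * meisberger_correction(h))
```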

  18. Retrieval method of aerosol extinction coefficient profile based on backscattering, side-scattering and Raman-scattering lidar

    Science.gov (United States)

    Shan, Huihui; Zhang, Hui; Liu, Junjian; Tao, Zongming; Wang, Shenhao; Ma, Xiaomin; Zhou, Pucheng; Yao, Ling; Liu, Dong; Xie, Chenbo; Wang, Yingjian

    2018-03-01

    The aerosol extinction coefficient profile is an essential parameter for atmospheric radiation models. It is difficult to achieve a high signal-to-noise ratio (SNR) with backscattering lidar from the ground to the tropopause, especially in the near range. This SNR problem can be solved by combining side-scattering and backscattering lidar. Using Raman-scattering lidar, the aerosol extinction-to-backscatter ratio (lidar ratio) can be obtained. Based on a combined side-scattering, backscattering and Raman-scattering lidar system, the aerosol extinction coefficient is retrieved precisely from the earth's surface to the tropopause. Case studies show that this method is reasonable and feasible.

  19. New decoding methods of interleaved burst error-correcting codes

    Science.gov (United States)

    Nakano, Y.; Kasahara, M.; Namekawa, T.

    1983-04-01

    A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to achieve a high burst error correction capability with less decoding delay. By generalizing this method it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of the subcodes, which are interleaved m-fold burst error detecting codes, this second method corrects erasure errors in each subcode and thereby m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
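
    The reason interleaving helps, namely that a single burst is spread over the subcodes so each subcode sees only a few, separately correctable errors, can be shown with a small sketch (the interleaving depth m and the burst length are arbitrary illustrative choices):

```python
def deinterleave(symbols, m):
    """Split the transmitted symbol stream into its m subcode words."""
    return [symbols[i::m] for i in range(m)]

def burst_hits_per_subcode(m, burst_start, burst_len):
    """Count how many symbols of a contiguous burst land in each subcode."""
    counts = [0] * m
    for pos in range(burst_start, burst_start + burst_len):
        counts[pos % m] += 1
    return counts

# A burst of length 6 over a depth-4 interleaver hits each subcode at most
# ceil(6/4) = 2 times, so double-error-correcting subcodes suffice.
print(burst_hits_per_subcode(m=4, burst_start=10, burst_len=6))
```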

  20. The Bateman method for multichannel scattering theory

    International Nuclear Information System (INIS)

    Kim, Y. E.; Kim, Y. J.; Zubarev, A. L.

    1997-01-01

    Accuracy and convergence of the Bateman method are investigated for calculating the transition amplitude in multichannel scattering theory. This approximation method is applied to the calculation of the elastic amplitude. The calculated results are remarkably accurate compared with those of an exactly solvable multichannel model.

  1. A method to correct coordinate distortion in EBSD maps

    DEFF Research Database (Denmark)

    Zhang, Yubin; Elbrønd, Andreas Benjamin; Lin, Fengxiang

    2014-01-01

    Drift during electron backscatter diffraction mapping leads to coordinate distortions in the resulting orientation maps, which affects, in some cases significantly, the accuracy of analysis. A method based on the thin plate spline is introduced and tested to correct such coordinate distortions in the maps after the electron backscatter diffraction measurements. The accuracy of the correction as well as theoretical and practical aspects of using the thin plate spline method are discussed in detail. By comparing with other correction methods, it is shown that the thin plate spline method is the most efficient to correct...

  2. Inverse Scattering Method and Soliton Solution Family for String Effective Action

    International Nuclear Information System (INIS)

    Ya-Jun, Gao

    2009-01-01

    A modified Hauser–Ernst-type linear system is established and used to develop an inverse scattering method for solving the motion equations of the string effective action describing the coupled gravity, dilaton and Kalb–Ramond fields. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the proposed inverse scattering method straightforward to apply and effective. As an application, a concrete family of soliton solutions for the considered theory is obtained.

  3. Elastic electron scattering from the DNA bases: cytosine and thymine

    International Nuclear Information System (INIS)

    Colyer, C J; Bellm, S M; Lohmanny, B; Blanco, F; Garcia, G

    2012-01-01

    Relative differential cross sections for elastic electron scattering from cytosine and thymine have been measured using the crossed-beam method. The experimental data are compared with theoretical cross sections calculated by the screening-corrected additivity rule method.

  4. Local defect correction for boundary integral equation methods

    NARCIS (Netherlands)

    Kakuba, G.; Anthonissen, M.J.H.

    2014-01-01

    The aim of this paper is to develop a new local defect correction approach to gridding for problems with localised regions of high activity in the boundary element method. The technique of local defect correction has been studied for other methods, such as finite difference methods and finite volume methods...

  5. Numerical tables of anomalous scattering factors calculated by the Cromer and Liberman's method

    International Nuclear Information System (INIS)

    Sasaki, Satoshi.

    1989-02-01

    Anomalous scattering factors f' and f'' have been calculated for the atoms Li through Bi, plus U, using the relativistic treatment described by Cromer and Liberman. The final f' value does not include Jensen's correction term for magnetic scattering. The tables are presented with the f' and f'' values (i) at 0.01 Å intervals in the wavelength range from 0.1 to 2.89 Å and (ii) at 0.0001 Å intervals in the neighborhood of the K, L1, L2, and L3 absorption edges. (author)

  6. Analysis of scattered radiation in an irradiated body by means of the monte carlo simulation

    International Nuclear Information System (INIS)

    Kato, Hideki; Nakamura, Masaru; Tsuiki, Saeko; Shimizu, Ikuo; Higashi, Naoki; Kamada, Takao

    1992-01-01

    Isodose charts for oblique incidence are usually obtained from normal-incidence isodose data by correction methods such as the tissue-air ratio (TAR) method, the effective source-skin distance (SSD) method, etc. In these correction methods, however, the depth dose data on the beam axis remain the normal depth dose data, which were measured in the geometry of perpendicular incidence. In this paper, the primary and scattered doses on the beam axis for 60Co gamma-ray oblique incidence were calculated by means of Monte Carlo simulation, and the variation of the percentage depth dose (PDD) and the scatter factor was evaluated for different oblique incidence angles. The scattered dose distribution changed with the oblique incidence angle. Also, with increasing angle, the PDD decreased and the scatter factor increased. If the depth dose for oblique incidence is calculated using the normal PDD data and normal scatter factors, the result is an underestimation in the shallow region up to a depth of several centimetres and an overestimation in the deep region. (author)

  7. Method for calculating anisotropic neutron transport using scattering kernel without polynomial expansion

    International Nuclear Information System (INIS)

    Takahashi, Akito; Yamamoto, Junji; Ebisuya, Mituo; Sumita, Kenji

    1979-01-01

    A new method for calculating anisotropic neutron transport is proposed for the angular spectral analysis of D-T fusion reactor neutronics. The method is based on the transport equation with a new type of anisotropic scattering kernel formulated by a single function I_i(μ', μ) instead of a polynomial expansion, for instance in Legendre polynomials. In the calculation of angular flux spectra using scattering kernels with the Legendre polynomial expansion, we often observe oscillations with negative flux, but in principle these oscillations disappear with the new method. In this work, we discussed anisotropic scattering kernels for elastic scattering and for the inelastic scatterings which excite discrete energy levels. The other scatterings were included in isotropic scattering kernels. An approximation method, making use of the first collision source written with the I_i(μ', μ) function, was introduced to attenuate the "oscillations" when we are obliged to use scattering kernels with the Legendre polynomial expansion. Calculated results with this approximation showed remarkable improvement for the analysis of the angular flux spectra in a slab system of lithium metal with a D-T neutron source. (author)

  8. The continuous cut-off method and the relativistic scattering of spin-1/2 particles

    International Nuclear Information System (INIS)

    Dolinszky, T.

    1979-07-01

    A high energy formula, obtained in the framework of the continuous cut-off approach, is shown to improve the correctness of the standard phase shift expression for Dirac scattering by two orders of magnitude in energy. (author)

  9. High-energy expansion for nuclear multiple scattering

    International Nuclear Information System (INIS)

    Wallace, S.J.

    1975-01-01

    The Watson multiple scattering series is expanded to develop the Glauber approximation plus systematic corrections arising from three sources: (1) deviations from eikonal propagation between scatterings, (2) the Fermi motion of struck nucleons, and (3) the kinematic transformation which relates the many-body scattering operators of the Watson series to the physical two-body scattering amplitude. Operators which express effects ignored at the outset to obtain the Glauber approximation are subsequently reintroduced via perturbation expansions. Hence a particular set of approximations is developed which renders the sum of the Watson series to the Glauber form in the center of mass system, and an expansion is carried out to find the leading order corrections to that summation. Although their physical origins are quite distinct, the eikonal, Fermi motion, and kinematic corrections produce strikingly similar contributions to the scattering amplitude. It is shown that there is substantial cancellation between their effects and hence the Glauber approximation is more accurate than the individual approximations used in its derivation. It is shown that the leading corrections produce effects of order (2kR_c)^(-1) relative to the double-scattering term in the uncorrected Glauber amplitude, ħk being the momentum and R_c the nuclear charge radius. The leading order corrections are found to be small enough to validate quantitative analyses of experimental data for many intermediate to high energy cases and for scattering angles not limited to the very forward region. In a Gaussian model, the leading corrections to the Glauber amplitude are given as convenient analytic expressions.

  10. Simplified solutions of the Cox-Thompson inverse scattering method at fixed energy

    International Nuclear Information System (INIS)

    Palmai, Tamas; Apagyi, Barnabas; Horvath, Miklos

    2008-01-01

    Simplified solutions of the Cox-Thompson inverse quantum scattering method at fixed energy are derived if a finite number of partial waves with only even or odd angular momenta contribute to the scattering process. Based on the new formulae, various approximate methods are introduced which also prove applicable to generic scattering events.

  11. An Efficient Method for Electron-Atom Scattering Using Ab-initio Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Yuan; Yang, Yonggang; Xiao, Liantuan; Jia, Suotang [Shanxi University, Taiyuan (China)

    2017-02-15

    We present an efficient method based on ab-initio calculations to investigate electron-atom scatterings. Those calculations profit from methods implemented in standard quantum chemistry programs. The new approach is applied to electron-helium scattering. The results are compared with experimental and other theoretical references to demonstrate the efficiency of our method.

  12. A spectrum correction method for fuel assembly rehomogenization

    International Nuclear Information System (INIS)

    Lee, Kyung Taek; Cho, Nam Zin

    2004-01-01

    To overcome the limitation of existing homogenization methods based on the single assembly calculation with zero-current boundary conditions, we propose a new rehomogenization method, named the spectrum correction method (SCM), consisting of a multigroup energy spectrum approximation by spectrum correction and condensed two-group heterogeneous single assembly calculations with non-zero current boundary conditions. In SCM, the spectrum shifting caused by the current across assembly interfaces is first taken into account by the spectrum correction at the group condensation stage. Then, heterogeneous single assembly calculations with two-group cross sections, condensed using the corrected multigroup energy spectrum, are performed to obtain the rehomogenized nodal diffusion parameters, i.e., assembly-wise homogenized cross sections and discontinuity factors. To evaluate the performance of SCM, it was applied to the analytic function expansion nodal (AFEN) method and several test problems were solved. The results show that SCM can significantly reduce the errors both in multiplication factors and in assembly-averaged power distributions.
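
    The condensation step can be written generically: multigroup cross sections are collapsed to two groups with the (corrected) spectrum as the weighting function. A minimal sketch (the 7-group structure, the fast/thermal boundary, and the numerical values are assumptions):

```python
import numpy as np

def condense_two_group(sigma_g, phi_g, n_fast):
    """Collapse multigroup cross sections to two groups, weighting by the
    spectrum phi_g: sigma_G = sum_g(sigma_g*phi_g) / sum_g(phi_g)."""
    fast, thermal = slice(0, n_fast), slice(n_fast, None)
    sig_fast = np.sum(sigma_g[fast] * phi_g[fast]) / np.sum(phi_g[fast])
    sig_thermal = np.sum(sigma_g[thermal] * phi_g[thermal]) / np.sum(phi_g[thermal])
    return sig_fast, sig_thermal

sigma = np.array([0.10, 0.12, 0.15, 0.20, 0.60, 0.90, 1.40])      # 7-group data
phi_corrected = np.array([0.9, 1.2, 1.4, 1.0, 0.5, 0.3, 0.1])     # corrected spectrum
print(condense_two_group(sigma, phi_corrected, n_fast=4))
```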

  13. A direct sampling method to an inverse medium scattering problem

    KAUST Repository

    Ito, Kazufumi

    2012-01-10

    In this work we present a novel sampling method for time harmonic inverse medium scattering problems. It provides a simple tool to directly estimate the shape of the unknown scatterers (inhomogeneous media), and it is applicable even when the measured data are only available for one or two incident directions. A mathematical derivation is provided for its validation. Two- and three-dimensional numerical simulations are presented, which show that the method is accurate even with a few sets of scattered field data, computationally efficient, and very robust with respect to noises in the data. © 2012 IOP Publishing Ltd.

  14. Improved quantitative 90 Y bremsstrahlung SPECT/CT reconstruction with Monte Carlo scatter modeling.

    Science.gov (United States)

    Dewaraja, Yuni K; Chun, Se Young; Srinivasa, Ravi N; Kaza, Ravi K; Cuneo, Kyle C; Majdalany, Bill S; Novelli, Paula M; Ljungberg, Michael; Fessler, Jeffrey A

    2017-12-01

    In 90Y microsphere radioembolization (RE), accurate post-therapy imaging-based dosimetry is important for establishing absorbed dose versus outcome relationships for developing future treatment planning strategies. Additionally, accurately assessing microsphere distributions is important because of concerns about unexpected activity deposition outside the liver. Quantitative 90Y imaging by either SPECT or PET is challenging. In 90Y SPECT, model-based methods are necessary for scatter correction because energy window-based methods are not feasible with the continuous bremsstrahlung energy spectrum. The objective of this work was to implement and evaluate a scatter estimation method for accurate 90Y bremsstrahlung SPECT/CT imaging. Since a fully Monte Carlo (MC) approach to 90Y SPECT reconstruction is computationally very demanding, in the present study the scatter estimate generated by an MC simulator was combined with an analytical projector in the 3D OS-EM reconstruction model. A single window (105-195 keV) was used for both the acquisition and the projector modeling. A liver/lung torso phantom with intrahepatic lesions and low-uptake extrahepatic objects was imaged to evaluate SPECT/CT reconstruction without and with scatter correction. Clinical application was demonstrated by applying the reconstruction approach to five patients treated with RE to determine lesion and normal liver activity concentrations using a (liver) relative calibration. There was convergence of the scatter estimate after just two updates, greatly reducing the computational requirements. In the phantom study, compared with reconstruction without scatter correction, with MC scatter modeling there was substantial improvement in activity recovery in intrahepatic lesions (from > 55% to > 86%), normal liver (from 113% to 104%), and lungs (from 227% to 104%), with only a small degradation in noise (13% vs. 17%). Similarly, with scatter modeling contrast improved substantially both visually and in
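
    The way a precomputed scatter estimate enters such a reconstruction can be sketched with a generic ML-EM update that carries an additive scatter term in the forward model (the toy system matrix below stands in for the analytical SPECT projector; it is not the authors' code):

```python
import numpy as np

def mlem_with_scatter(A, y, scatter, n_iter=50):
    """ML-EM with an additive scatter estimate s in the forward model:
    x <- x / (A^T 1) * A^T ( y / (A x + s) )."""
    x = np.ones(A.shape[1])
    sensitivity = A.T @ np.ones(A.shape[0])
    for _ in range(n_iter):
        expected = A @ x + scatter                  # forward projection + scatter
        ratio = y / np.maximum(expected, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return x

# Toy example: 6 detector bins, 4 image voxels.
rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(6, 4))
x_true = np.array([2.0, 0.5, 1.0, 3.0])
scatter = 0.2 * np.ones(6)                          # e.g. from a Monte Carlo estimate
y = rng.poisson(A @ x_true + scatter).astype(float)
print(mlem_with_scatter(A, y, scatter))
```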

  15. A hybrid numerical method for orbit correction

    International Nuclear Information System (INIS)

    White, G.; Himel, T.; Shoaee, H.

    1997-09-01

    The authors describe a simple hybrid numerical method for beam orbit correction in particle accelerators. The method both overcomes degeneracy in the linear system being solved and respects bounds on the solution. It uses the Singular Value Decomposition (SVD) to find and remove the null space of the system, followed by a bounded Linear Least Squares analysis of the remaining, recast problem. It was developed for correcting orbit and dispersion in the B-factory rings.
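
    A generic sketch of the two-step idea, removing the (near-)null space of the response matrix with the SVD and then solving a bounded linear least-squares problem, is shown below (the response matrix, orbit data and corrector limits are synthetic placeholders, and scipy's lsq_linear stands in for whatever bounded solver the authors used):

```python
import numpy as np
from scipy.optimize import lsq_linear

def corrector_kicks(R, orbit, bound, rel_tol=1e-10):
    """Solve R @ theta ~ -orbit for corrector kicks with |theta| <= bound,
    after projecting out the (near-)null space of R found by SVD."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    keep = s > rel_tol * s[0]                       # drop degenerate directions
    R_reduced = U[:, keep] @ np.diag(s[keep]) @ Vt[keep, :]
    return lsq_linear(R_reduced, -orbit, bounds=(-bound, bound)).x

# Synthetic 10-monitor / 6-corrector response matrix with one redundant corrector.
rng = np.random.default_rng(0)
R = rng.normal(size=(10, 6))
R[:, 5] = R[:, 4]                                   # introduces degeneracy
orbit = rng.normal(scale=0.5, size=10)              # measured orbit distortion
print(corrector_kicks(R, orbit, bound=1.0))
```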

  16. Characterization of Diesel Soot Aggregates by Scattering and Extinction Methods

    Science.gov (United States)

    Kamimoto, Takeyuki

    2006-07-01

    Characteristics of diesel soot particles sampled from the exhaust of a common-rail turbo-charged diesel engine are quantified by scattering and extinction diagnostics using two newly built laser-based instruments. The radius of gyration representing the aggregate size is measured from the angular distribution of the scattering intensity, while the soot mass concentration is measured by a two-wavelength extinction method. An approach to estimate the refractive index of diesel soot by an analysis of the extinction and scattering data using an aggregate scattering theory is proposed.

  17. Characterization of Diesel Soot Aggregates by Scattering and Extinction Methods

    International Nuclear Information System (INIS)

    Kamimoto, Takeyuki

    2006-01-01

    Characteristics of diesel soot particles sampled from the exhaust of a common-rail turbo-charged diesel engine are quantified by scattering and extinction diagnostics using two newly built laser-based instruments. The radius of gyration representing the aggregate size is measured from the angular distribution of the scattering intensity, while the soot mass concentration is measured by a two-wavelength extinction method. An approach to estimate the refractive index of diesel soot by an analysis of the extinction and scattering data using an aggregate scattering theory is proposed.

  18. A multi-dimensional sampling method for locating small scatterers

    International Nuclear Information System (INIS)

    Song, Rencheng; Zhong, Yu; Chen, Xudong

    2012-01-01

    A multiple signal classification (MUSIC)-like multi-dimensional sampling method (MDSM) is introduced to locate small three-dimensional scatterers using electromagnetic waves. The indicator is built with the most stable part of the signal subspace of the multi-static response matrix on a set of combinatorial sampling nodes inside the domain of interest. It has two main advantages compared to conventional MUSIC methods. First, the MDSM is more robust against noise. Second, it can work with a single incidence even for multiple scatterers. Numerical simulations are presented to show the good performance of the proposed method. (paper)
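
    The indicator construction can be illustrated with a scalar free-space analogue of the MUSIC-type test, scoring each sampling point by how close its Green's-function vector lies to the signal subspace of the multi-static response matrix (the array geometry, wavenumber, Born-type point-scatterer model and sampling points below are all assumptions made for the sake of a runnable example):

```python
import numpy as np

def greens_vector(receivers, z, k):
    """3-D free-space Green's function sampled at the receiver positions."""
    r = np.linalg.norm(receivers - z, axis=1)
    return np.exp(1j * k * r) / (4.0 * np.pi * r)

def music_indicator(K, receivers, points, k, n_signal):
    """Large where the (normalized) Green's vector of a sampling point lies
    almost entirely in the signal subspace of the response matrix K."""
    U, _, _ = np.linalg.svd(K)
    noise = U[:, n_signal:]                         # noise subspace
    values = []
    for z in points:
        g = greens_vector(receivers, z, k)
        g = g / np.linalg.norm(g)
        values.append(1.0 / np.linalg.norm(noise.conj().T @ g))
    return np.array(values)

# Synthetic example: 20 receivers on a circle, two point scatterers (Born model).
k = 2.0 * np.pi
angles = np.linspace(0.0, 2.0 * np.pi, 20, endpoint=False)
receivers = np.column_stack([5.0 * np.cos(angles), 5.0 * np.sin(angles), np.zeros(20)])
scatterers = np.array([[0.5, 0.0, 0.0], [-0.7, 0.6, 0.0]])
G = np.column_stack([greens_vector(receivers, z, k) for z in scatterers])
K = G @ G.T                                         # multi-static response matrix
test_points = np.array([[0.5, 0.0, 0.0], [2.0, 2.0, 0.0]])
print(music_indicator(K, receivers, test_points, k, n_signal=2))
```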

  19. Analytical Method and Semianalytical Method for Analysis of Scattering by Anisotropic Sphere: A Review

    Directory of Open Access Journals (Sweden)

    Chao Wan

    2012-01-01

    Full Text Available The history of methods for electromagnetic scattering by an anisotropic sphere is reviewed. Two main methods, the angular expansion method and the T-matrix method, which are widely used for the anisotropic sphere, are first expressed in Cartesian coordinates. A comparison of the two methods and a further exploration of the scattered field are presented afterwards. Based on the most general form obtained by the variable separation method, the coupled electric and magnetic fields of a radially anisotropic sphere can be derived. By simplifying the conditions, the simpler case of uniaxial anisotropic media is expressed with confirmed coefficients for the internal and external fields. Details of significant phenomena are presented.

  20. Application of the 2-D discrete-ordinates method to multiple scattering of laser radiation

    International Nuclear Information System (INIS)

    Zardecki, A.; Gerstl, S.A.W.; Embury, J.F.

    1983-01-01

    The discrete-ordinates finite-element radiation transport code TWOTRAN is applied to describe the multiple scattering of a laser beam from a reflecting target. For a model scenario involving a 99% relative humidity rural aerosol, we compute the average intensity of the scattered radiation and correction factors to the Beer-Lambert law arising from multiple scattering. As our results indicate, 2-D x-y and r-z geometry modeling can reliably describe a realistic 3-D scenario. Specific results are presented for the two visual ranges of 1.52 and 0.76 km, which show that, for sufficiently high aerosol concentrations (e.g., equivalent to V = 0.76 km), the target signature in a distant detector becomes dominated by multiply scattered radiation from interactions of the laser light with the aerosol environment. The merits of the scaling group and the delta-M approximation for the transfer equation are also explored.
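
    The Beer-Lambert baseline against which such multiple-scattering correction factors are defined can be stated in a couple of lines (the extinction coefficient derived from the visual range via the Koschmieder-type relation kappa ≈ 3.912/V is only a rough illustrative conversion):

```python
import numpy as np

def beer_lambert_transmission(kappa, path_length):
    """Unscattered (direct) transmission T = exp(-kappa * L)."""
    return np.exp(-kappa * path_length)

def multiple_scatter_correction(detected_total, kappa, path_length, source_power=1.0):
    """Ratio of the total detected signal to the Beer-Lambert prediction;
    values above 1 quantify the multiply scattered contribution."""
    direct = source_power * beer_lambert_transmission(kappa, path_length)
    return detected_total / direct

kappa = 3.912 / 0.76                   # 1/km, rough conversion from visual range
print(beer_lambert_transmission(kappa, path_length=1.0))
```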

  1. NNLO QCD corrections to jet production at hadron colliders from gluon scattering

    International Nuclear Information System (INIS)

    Currie, James; Ridder, Aude Gehrmann-De; Glover, E.W.N.; Pires, João

    2014-01-01

    We present the next-to-next-to-leading order (NNLO) QCD corrections to dijet production in the purely gluonic channel, retaining the full dependence on the number of colours. The sub-leading colour contribution in this channel first appears at NNLO; it increases the NNLO correction by around 10% and exhibits a p_T dependence, rising from 8% at low p_T to 15% at high p_T. The present calculation demonstrates the utility of the antenna subtraction method for computing the full colour NNLO corrections to dijet production at the Large Hadron Collider.

  2. A direct sampling method to an inverse medium scattering problem

    KAUST Repository

    Ito, Kazufumi; Jin, Bangti; Zou, Jun

    2012-01-01

    In this work we present a novel sampling method for time harmonic inverse medium scattering problems. It provides a simple tool to directly estimate the shape of the unknown scatterers (inhomogeneous media), and it is applicable even when

  3. An attenuation correction method for PET/CT images

    International Nuclear Information System (INIS)

    Ue, Hidenori; Yamazaki, Tomohiro; Haneishi, Hideaki

    2006-01-01

    In PET/CT systems, accurate attenuation correction can be achieved by creating an attenuation map from an X-ray CT image. On the other hand, respiratory-gated PET acquisition is an effective method for avoiding motion blurring of the thoracic and abdominal organs caused by respiratory motion. In PET/CT systems employing respiratory-gated PET, using an X-ray CT image acquired during breath-holding for attenuation correction may have a large effect on the voxel values, especially in regions with substantial respiratory motion. In this report, we propose an attenuation correction method in which, as the first step, a set of respiratory-gated PET images is reconstructed without attenuation correction; as the second step, the motion of each phase PET image relative to the PET image in the same phase as the CT acquisition timing is estimated by the previously proposed method; as the third step, the CT image corresponding to each respiratory phase is generated from the original CT image by deformation according to the motion vector maps; and as the final step, attenuation correction using these CT images and reconstruction are performed. The effectiveness of the proposed method was evaluated using 4D-NCAT phantoms, and good stability of the voxel values near the diaphragm was observed. (author)
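
    The third step, deforming the reference CT with the estimated motion vector maps to obtain one attenuation map per respiratory phase, can be sketched with a generic backward-warping routine (a scipy-based illustration; the displacement-field layout in voxel units is an assumption, not the authors' data format):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_ct(ct, displacement, order=1):
    """Backward-warp a 3-D CT volume with a per-voxel displacement field of
    shape (3, nz, ny, nx), in voxel units, to build a phase-matched
    attenuation volume."""
    grid = np.indices(ct.shape).astype(float)       # identity coordinates
    coords = grid + displacement                    # displaced sampling positions
    return map_coordinates(ct, coords, order=order, mode="nearest")

# Toy example: shift a small volume by one voxel along the first axis.
ct = np.random.default_rng(0).uniform(size=(8, 8, 8))
field = np.zeros((3, 8, 8, 8))
field[0] = 1.0
ct_phase = warp_ct(ct, field)
```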

  4. Radiation scatter apparatus and method

    International Nuclear Information System (INIS)

    Molbert, J. L.; Riddle, E. R.

    1985-01-01

    A radiation scatter gauge includes multiple detector locations for developing separate and independent sets of data from which multiple physical characteristics of a thin material and underlying substrate may be determined. In an illustrated embodiment, the apparatus and method of the invention are directed to determining characteristics of resurfaced pavement by nondestructive testing. More particularly, the density and thickness of a thin asphalt overlay and the density of the underlying pavement may be determined

  5. Leading quantum gravitational corrections to scalar QED

    International Nuclear Information System (INIS)

    Bjerrum-Bohr, N.E.J.

    2002-01-01

    We consider the leading post-Newtonian and quantum corrections to the non-relativistic scattering amplitude of charged scalars in the combined theory of general relativity and scalar QED. The combined theory is treated as an effective field theory. This allows for a consistent quantization of the gravitational field. The appropriate vertex rules are extracted from the action, and the non-analytic contributions to the 1-loop scattering matrix are calculated in the non-relativistic limit. The non-analytical parts of the scattering amplitude, which are known to give the long range, low energy, leading quantum corrections, are used to construct the leading post-Newtonian and quantum corrections to the two-particle non-relativistic scattering matrix potential for two charged scalars. The result is discussed in relation to experimental verifications

  6. Scattering theory in quantum mechanics. Physical principles and mathematical methods

    International Nuclear Information System (INIS)

    Amrein, W.O.; Jauch, J.M.; Sinha, K.B.

    1977-01-01

    A contemporary approach is given to the classical topics of physics. The purpose is to explain the basic physical concepts of quantum scattering theory, to develop the necessary mathematical tools for their description, to display the interrelation between the three methods (the Schroedinger equation solutions, stationary scattering theory, and time dependence), and to derive the properties of various quantities of physical interest with mathematically rigorous methods

  7. Non-eikonal effects in high-energy scattering IV. Inelastic scattering

    International Nuclear Information System (INIS)

    Gurvitz, S.A.; Kok, L.P.; Rinat, A.S.

    1978-01-01

    Amplitudes for inelastically scattered high-energy projectiles were calculated. In the scattering on 12C (T_p = 1 GeV), sizeable non-eikonal corrections in the diffraction extrema are demonstrated even for relatively small q². At least part of the anomaly in the 3⁻ distribution may be due to these non-eikonal effects. (B.G.)

  8. Heavy flavour corrections to polarised and unpolarised deep-inelastic scattering at 3-loop order

    International Nuclear Information System (INIS)

    Ablinger, J.; Round, M.; Schneider, C.; Hasselhuhn, A.

    2016-11-01

    We report on progress in the calculation of 3-loop corrections to the deep-inelastic structure functions from massive quarks in the asymptotic region of large momentum transfer Q². Recently completed results allow us to obtain the O(a_s³) contributions to several heavy flavour Wilson coefficients which enter both polarised and unpolarised structure functions for lepton-nucleon scattering. In particular, we obtain the non-singlet contributions to the unpolarised structure functions F_2(x,Q²) and xF_3(x,Q²) and the polarised structure function g_1(x,Q²). From these results we also obtain the heavy flavour contributions to the Gross-Llewellyn-Smith and the Bjorken sum rules.

  9. Algebraic collapsing acceleration of the characteristics method with anisotropic scattering

    International Nuclear Information System (INIS)

    Le Tellier, R.; Hebert, A.; Roy, R.

    2004-01-01

    In this paper, the characteristics solvers implemented in the lattice code Dragon are extended to allow a complete anisotropic treatment of the collision operator. An efficient synthetic acceleration method, called Algebraic Collapsing Acceleration (ACA), is presented. Tests show that this method can substantially speed up the convergence of scattering source iterations. The effect of boundary conditions, either specular or white reflections, on anisotropic scattering lattice-cell problems is also considered. (author)

  10. HECTOR 1.00. A program for the calculation of QED, QCD and electroweak corrections to ep and l±N deep inelastic neutral and charged current scattering

    International Nuclear Information System (INIS)

    Arbuzov, A.; Kalinovskaya, L.; Bardin, D.; Deutsches Elektronen-Synchrotron; Bluemlein, J.; Riemann, T.

    1995-11-01

    A description of the Fortran program HECTOR for a variety of semi-analytical calculations of radiative QED, QCD, and electroweak corrections to the double-differential cross sections of NC and CC deep inelastic charged lepton proton (or lepton deuteron) scattering is presented. HECTOR originates from the substantially improved and extended earlier programs HELIOS and TERAD91. It is mainly intended for applications at HERA or LEP×LHC, but may be used also for μN scattering in fixed target experiments. The QED corrections may be calculated in different sets of variables: leptonic, hadronic, mixed, Jaquet-Blondel, double angle etc. Besides the leading logarithmic approximation up to order O(α²), exact O(α) corrections and inclusive soft photon exponentiation are taken into account. The photoproduction region is also covered. (orig.)

  11. Synthetic acceleration methods for linear transport problems with highly anisotropic scattering

    International Nuclear Information System (INIS)

    Khattab, K.M.; Larsen, E.W.

    1992-01-01

    The diffusion synthetic acceleration (DSA) algorithm effectively accelerates the iterative solution of transport problems with isotropic or mildly anisotropic scattering. However, DSA loses its effectiveness for transport problems that have strongly anisotropic scattering. Two generalizations of DSA are proposed, which, for highly anisotropic scattering problems, converge at least an order of magnitude (clock time) faster than the DSA method. These two methods are developed, the results of Fourier analysis that theoretically predict their efficiency are described, and numerical results that verify the theoretical predictions are presented. (author). 10 refs., 7 figs., 5 tabs

  12. Synthetic acceleration methods for linear transport problems with highly anisotropic scattering

    International Nuclear Information System (INIS)

    Khattab, K.M.; Larsen, E.W.

    1991-01-01

    This paper reports on the diffusion synthetic acceleration (DSA) algorithm that effectively accelerates the iterative solution of transport problems with isotropic or mildly anisotropic scattering. However, DSA loses its effectiveness for transport problems that have strongly anisotropic scattering. Two generalizations of DSA are proposed, which, for highly anisotropic scattering problems, converge at least an order of magnitude (clock time) faster than the DSA method. These two methods are developed, the results of Fourier analyses that theoretically predict their efficiency are described, and numerical results that verify the theoretical predictions are presented

  13. Poster – 02: Positron Emission Tomography (PET) Imaging Reconstruction using higher order Scattered Photon Coincidences

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Hongwei; Pistorius, Stephen [Department of Physics and Astronomy, University of Manitoba, CancerCare, Manitoba (Canada)

    2016-08-15

    PET images are affected by the presence of scattered photons. Incorrect scatter correction may cause artifacts, particularly in 3D PET systems. Current scatter reconstruction methods do not distinguish between single and higher-order scattered photons. A dual-scattered reconstruction method (GDS-MLEM), which is independent of the number of Compton scattering interactions and less sensitive to the need for high energy resolution detectors, is proposed. To avoid overcorrecting for scattered coincidences, the attenuation coefficient was calculated by integrating the differential Klein-Nishina cross-section over a restricted energy range, accounting only for scattered photons that were not detected. The optimum image can be selected by choosing an energy threshold, which is the upper energy limit for the calculation of the cross-section and the lower limit for scattered photons in the reconstruction. Data were simulated using the GATE platform. 500,000 multiple-scattered photon coincidences with perfect energy resolution were reconstructed using various methods. The GDS-MLEM algorithm had the highest confidence (98%) in locating the annihilation position and was capable of reconstructing the two largest hot regions. 100,000 photon coincidences, with a scatter fraction of 40%, were used to test the energy resolution dependence of the different algorithms. With a 350–650 keV energy window and the restricted attenuation correction model, the GDS-MLEM algorithm was able to improve contrast recovery and reduce noise by 7.56%–13.24% and 12.4%–24.03%, respectively. This approach is less sensitive to the energy resolution and shows promise if detector energy resolutions of 12% can be achieved.
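
    The restricted cross-section idea, integrating the Klein-Nishina differential cross-section only over scattering angles whose scattered-photon energy falls below the acceptance threshold (photons that scatter and are then not detected), can be sketched as follows (the 511 keV line, the 350 keV threshold and the angular grid are illustrative; multiplying by the electron density would give the corresponding restricted attenuation coefficient):

```python
import numpy as np
from scipy.integrate import simpson

R_E = 2.8179403262e-13        # classical electron radius (cm)
ME_C2 = 511.0                 # electron rest energy (keV)

def klein_nishina(E_keV, theta):
    """Klein-Nishina differential cross-section dsigma/dOmega (cm^2/sr)."""
    P = 1.0 / (1.0 + (E_keV / ME_C2) * (1.0 - np.cos(theta)))   # E'/E
    return 0.5 * R_E ** 2 * P ** 2 * (P + 1.0 / P - np.sin(theta) ** 2)

def restricted_cross_section(E_keV, E_threshold_keV, n=2000):
    """Integrate dsigma/dOmega only over angles whose scattered-photon energy
    is below the acceptance threshold, i.e. photons scattered out of the
    energy window and therefore not detected."""
    theta = np.linspace(0.0, np.pi, n)
    E_scattered = E_keV / (1.0 + (E_keV / ME_C2) * (1.0 - np.cos(theta)))
    integrand = np.where(E_scattered < E_threshold_keV,
                         klein_nishina(E_keV, theta) * 2.0 * np.pi * np.sin(theta),
                         0.0)
    return simpson(integrand, x=theta)

# Fraction of the Compton cross-section that removes 511 keV photons
# from a 350-650 keV acceptance window.
total = restricted_cross_section(511.0, np.inf)
print(restricted_cross_section(511.0, 350.0) / total)
```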

  14. Efficient orbit integration by manifold correction methods.

    Science.gov (United States)

    Fukushima, Toshio

    2005-12-01

    Triggered by a desire to investigate, numerically, the planetary precession through a long-term numerical integration of the solar system, we developed a new formulation of numerical integration of orbital motion named manifold correction methods. The main trick is to rigorously retain the consistency of physical relations, such as the orbital energy, the orbital angular momentum, or the Laplace integral, of a binary subsystem. This maintenance is done by applying a correction to the integrated variables at each integration step. Typical methods of correction are certain geometric transformations, such as spatial scaling and spatial rotation, which are commonly used in the comparison of reference frames, or mathematically reasonable operations, such as modularization of angle variables into the standard domain [-π, π). The final form of the manifold correction methods is the orbital longitude methods, which enable us to conduct an extremely precise integration of orbital motions. In unperturbed orbits, the integration errors are suppressed at the machine epsilon level for an indefinitely long period. In perturbed cases, on the other hand, the errors initially grow in proportion to the square root of time and then increase more rapidly, the onset of which depends on the type and magnitude of the perturbations. This feature is also realized for highly eccentric orbits by applying the same idea as used in KS-regularization. In particular, the introduction of time elements greatly enhances the performance of numerical integration of KS-regularized orbits, whether the scaling is applied or not.
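
    The underlying idea, re-imposing a conserved quantity on the integrated variables at every step, can be illustrated with a simple energy-rescaling variant for the Kepler problem (a generic projection onto the energy manifold, not Fukushima's orbital longitude formulation; the integrator, step size and initial conditions are arbitrary):

```python
import numpy as np

def accel(r, mu=1.0):
    return -mu * r / np.linalg.norm(r) ** 3

def energy(r, v, mu=1.0):
    return 0.5 * v @ v - mu / np.linalg.norm(r)

def leapfrog_step(r, v, dt, mu=1.0):
    v_half = v + 0.5 * dt * accel(r, mu)
    r_new = r + dt * v_half
    return r_new, v_half + 0.5 * dt * accel(r_new, mu)

def manifold_correct(r, v, E_ref, mu=1.0):
    """Rescale the velocity so that the integrated state lies exactly on the
    energy manifold E(r, v) = E_ref (a simple projection-type correction)."""
    kinetic_target = E_ref + mu / np.linalg.norm(r)
    if kinetic_target <= 0.0:
        return v                                    # cannot correct by scaling v alone
    return v * np.sqrt(2.0 * kinetic_target / (v @ v))

r, v = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.1, 0.0])
E0 = energy(r, v)
for _ in range(10000):
    r, v = leapfrog_step(r, v, dt=1e-2)
    v = manifold_correct(r, v, E0)
print(energy(r, v) - E0)                            # held at the round-off level
```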

  15. Correction method and software for image distortion and nonuniform response in charge-coupled device-based x-ray detectors utilizing x-ray image intensifier

    International Nuclear Information System (INIS)

    Ito, Kazuki; Kamikubo, Hironari; Yagi, Naoto; Amemiya, Yoshiyuki

    2005-01-01

    An on-site method of correcting the image distortion and nonuniform response of a charge-coupled device (CCD)-based X-ray detector was developed using the response of the imaging plate as a reference. The CCD-based X-ray detector consists of a beryllium-windowed X-ray image intensifier (Be-XRII) and a CCD as the image sensor. An image distortion of 29% was improved to less than 1% after the correction. In the correction of nonuniform response due to image distortion, subpixel approximation was performed for the redistribution of pixel values. The optimal number of subpixels was also discussed. In an experiment with polystyrene (PS) latex, it was verified that the correction of both image distortion and nonuniform response worked properly. The correction for the 'contrast reduction' problem was also demonstrated for an isotropic X-ray scattering pattern from the PS latex. (author)

  16. A versatile atomic number correction for electron-probe microanalysis

    International Nuclear Information System (INIS)

    Love, G.; Cox, M.G.; Scott, V.D.

    1978-01-01

    A new atomic number correction is proposed for quantitative electron-probe microanalysis. Analytical expressions for the stopping power S and back-scatter R factors are derived which take into account atomic number of the target, incident electron energy and overvoltage; the latter expression is established using Monte Carlo calculations. The correct procedures for evaluating S and R for multi-element specimens are described. The new method, which overcomes some limitations inherent in earlier atomic number corrections, may readily be used where specimens are inclined to the electron beam. (author)

  17. Thomson scattering measurements in atmospheric plasma jets

    International Nuclear Information System (INIS)

    Gregori, G.; Schein, J.; Schwendinger, P.; Kortshagen, U.; Heberlein, J.; Pfender, E.

    1999-01-01

    Electron temperature and electron density in a dc plasma jet at atmospheric pressure have been obtained using Thomson laser scattering. Measurements performed at various scattering angles have revealed effects that are not accounted for by the standard scattering theory. Differences between the predicted and experimental results suggest that higher order corrections to the theory may be required, and that corrections to the form of the spectral density function may play an important role. copyright 1999 The American Physical Society

  18. Wigner representation in scattering problems

    International Nuclear Information System (INIS)

    Remler, E.A.

    1975-01-01

    The basic equations of quantum scattering are translated into the Wigner representation. This puts quantum mechanics in the form of a stochastic process in phase space. Instead of complex valued wavefunctions and transition matrices, one now works with real-valued probability distributions and source functions, objects more responsive to physical intuition. Aside from writing out certain necessary basic expressions, the main purpose is to develop and stress the interpretive picture associated with this representation and to derive results used in applications published elsewhere. The quasiclassical guise assumed by the formalism lends itself particularly to approximations of complex multiparticle scattering problems; the foundation is laid for a systematic application of statistical approximations to such problems. The form of the integral equation for scattering as well as its multiple scattering expansion in this representation are derived. Since this formalism remains unchanged upon taking the classical limit, these results also constitute a general treatment of classical multiparticle collision theory. Quantum corrections to classical propagators are discussed briefly. The basic approximation used in the Monte Carlo method is derived in a fashion that allows for future refinement and includes bound state production. The close connection that must exist between inclusive production of a bound state and of its constituents is brought out in an especially graphic way by this formalism. In particular one can see how comparisons between such cross sections yield direct physical insight into relevant production mechanisms. A simple illustration of scattering by a bound two-body system is treated. Simple expressions for single- and double-scattering contributions to total and differential cross sections, as well as for all necessary shadow corrections thereto, are obtained and compared to previous results of Glauber and Goldberger

  19. Simulating elastic light scattering using high performance computing methods

    NARCIS (Netherlands)

    Hoekstra, A.G.; Sloot, P.M.A.; Verbraeck, A.; Kerckhoffs, E.J.H.

    1993-01-01

    The Coupled Dipole method, as originally formulated by Purcell and Pennypacker, is a very powerful method to simulate the Elastic Light Scattering from arbitrary particles. This method, which is a particle simulation model for Computational Electromagnetics, has one major drawback: if the size of the

  20. Inelastic scattering in condensed matter with high intensity moessbauer radiation

    International Nuclear Information System (INIS)

    Yelon, W.B.; Schupp, G.

    1991-05-01

    We give a progress report for the work which has been carried out in the last three years with DOE support. A facility for high-intensity Moessbauer scattering is now fully operational at the University of Missouri Research Reactor (MURR) as well as a facility at Purdue, using special isotopes produced at MURR. High precision, fundamental Moessbauer effect studies have been carried out using Bragg scattering filters to suppress unwanted radiation. These have led to a Fourier transform method for describing Moessbauer effect (ME) lineshape and a direct method of fitting ME data to the convolution integral. These methods allow complete correction for source resonance self absorption and the accurate representation of interference effects that add an asymmetric component to the ME lines. We have begun applying these techniques to attenuated ME sources whose central peak has been attenuated by stationary resonant absorbers, to make a novel independent determination of interference parameters and line-shape behavior in the resonance asymptotic region. This analysis is important to both fundamental ME studies and to scattering studies for which a deconvolution is essential for extracting the correct recoilless fractions and interference parameters. A number of scattering studies have been successfully carried out including a study of the thermal diffuse scattering in Si, which led to an analysis of the resolution function for gamma-ray scattering. Also studied was the anharmonic motion in Na metal and the charge density wave satellite reflection Debye-Waller factor in TaS2, which indicate phason rather than phonon behavior. Using a specially constructed sample cell which enables us to vary temperatures from -10 C to 110 C, we have begun quasielastic diffusion studies in viscous liquids and current results are summarized. Included are the temperature and Q dependence of the scattering in pentadecane and diffusion in glycerol

  1. A vibration correction method for free-fall absolute gravimeters

    Science.gov (United States)

    Qian, J.; Wang, G.; Wu, K.; Wang, L. J.

    2018-02-01

    An accurate determination of gravitational acceleration, usually approximated as 9.8 m s-2, has been playing an important role in the areas of metrology, geophysics, and geodetics. Absolute gravimetry has been experiencing rapid developments in recent years. Most absolute gravimeters today employ a free-fall method to measure gravitational acceleration. Noise from ground vibration has become one of the most serious factors limiting measurement precision. Compared to vibration isolators, the vibration correction method is a simple and feasible way to reduce the influence of ground vibrations. A modified vibration correction method is proposed and demonstrated. A two-dimensional golden section search algorithm is used to search for the best parameters of the hypothetical transfer function. Experiments using a T-1 absolute gravimeter are performed. It is verified that for an identical group of drop data, the modified method proposed in this paper can achieve better correction effects with much less computation than previous methods. Compared to vibration isolators, the correction method applies to more hostile environments and even dynamic platforms, and is expected to be used in a wider range of applications.
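
    The following sketch illustrates one plausible reading of the two-dimensional golden section search (not the authors' code): two parameters of a hypothetical transfer function are found by alternating one-dimensional golden-section sweeps, and the cost function is a synthetic stand-in for the residual of the corrected drop data.

        import numpy as np

        GOLDEN = (np.sqrt(5.0) - 1.0) / 2.0   # ~0.618

        def golden_section_1d(f, lo, hi, tol=1e-6):
            """Standard 1-D golden-section minimization of f on [lo, hi]."""
            a, b = lo, hi
            c = b - GOLDEN * (b - a)
            d = a + GOLDEN * (b - a)
            while abs(b - a) > tol:
                if f(c) < f(d):
                    b = d
                else:
                    a = c
                c = b - GOLDEN * (b - a)
                d = a + GOLDEN * (b - a)
            return 0.5 * (a + b)

        def golden_section_2d(cost, x_bounds, y_bounds, n_sweeps=20):
            """Two-parameter search by alternating 1-D golden-section sweeps over
            each parameter of the hypothetical transfer function."""
            x = 0.5 * sum(x_bounds)
            y = 0.5 * sum(y_bounds)
            for _ in range(n_sweeps):
                x = golden_section_1d(lambda u: cost(u, y), *x_bounds)
                y = golden_section_1d(lambda v: cost(x, v), *y_bounds)
            return x, y

        # Illustrative cost: stand-in for the correction residual, minimum at (0.3, -1.2)
        cost = lambda gain, delay: (gain - 0.3) ** 2 + 2.0 * (delay + 1.2) ** 2
        print(golden_section_2d(cost, (-2.0, 2.0), (-3.0, 3.0)))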

  2. Calculation of the Scattered Radiation Profile in 64 Slice CT Scanners Using Experimental Measurement

    Directory of Open Access Journals (Sweden)

    Afshin Akbarzadeh

    2009-06-01

    Full Text Available Introduction: One of the most important parameters in x-ray CT imaging is the noise induced by detected scattered radiation. The detected scattered radiation is completely dependent on the scanner geometry as well as size, shape and material of the scanned object. The magnitude and spatial distribution of the scattered radiation in x-ray CT should be quantified for development of robust scatter correction techniques. Empirical methods based on blocking the primary photons in a small region are not able to extract scatter in all elements of the detector array while the scatter profile is required for a scatter correction procedure. In this study, we measured scatter profiles in 64 slice CT scanners using a new experimental measurement. Material and Methods: To measure the scatter profile, a lead block array was inserted under the collimator and the phantom was exposed at the isocenter. The raw data file, which contained detector array readouts, was transferred to a PC and was read using a dedicated GUI running under MatLab 7.5. The scatter profile was extracted by interpolating the shadowed area. Results: The scatter and SPR profiles were measured. Increasing the tube voltage from 80 to 140 kVp resulted in an 80% fall off in SPR for a water phantom (d = 210 mm) and 86% for a polypropylene phantom (d = 350 mm). Increasing the air gap to 20.9 cm caused a 30% decrease in SPR. Conclusion: In this study, we presented a novel approach for measurement of scattered radiation distribution and SPR in a CT scanner with 64-slice capability using a lead block array. The method can also be used on other multi-slice CT scanners. The proposed technique can accurately estimate scatter profiles. It is relatively straightforward, easy to use, and can be used for any related measurement.
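
    A hedged sketch of the profile-extraction step described above: readings in the channels shadowed by the lead block array are treated as scatter-only samples and interpolated across the whole detector row; the detector geometry and readout values are synthetic assumptions.

        import numpy as np

        def scatter_profile(readout, blocked):
            """Estimate the scatter profile of one detector row.

            readout : total signal per detector channel (primary + scatter)
            blocked : indices of channels shadowed by the lead block array,
                      where only scattered photons can be detected
            Returns (scatter, spr) interpolated over all channels."""
            channels = np.arange(readout.size)
            # shadowed channels hold scatter-only samples; interpolate them
            scatter = np.interp(channels, blocked, readout[blocked])
            primary = np.clip(readout - scatter, 1e-12, None)
            spr = scatter / primary          # scatter-to-primary ratio
            return scatter, spr

        # Synthetic example (illustrative numbers only)
        channels = np.arange(672)
        primary = 1e4 * np.exp(-((channels - 336) / 200.0) ** 2)
        true_scatter = 400.0 + 100.0 * np.cos(channels / 107.0)
        readout = primary + true_scatter
        blocked = np.arange(16, 672, 32)
        readout[blocked] = true_scatter[blocked]       # block removes the primary
        scatter, spr = scatter_profile(readout, blocked)
        print(spr[340])                                # ~0.03 near the central channel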

  3. Radiative heat transfer in strongly forward scattering media using the discrete ordinates method

    Science.gov (United States)

    Granate, Pedro; Coelho, Pedro J.; Roger, Maxime

    2016-03-01

    The discrete ordinates method (DOM) is widely used to solve the radiative transfer equation, often yielding satisfactory results. However, in the presence of strongly forward scattering media, this method does not generally conserve the scattering energy and the phase function asymmetry factor. Because of this, the normalization of the phase function has been proposed to guarantee that the scattering energy and the asymmetry factor are conserved. Various authors have used different normalization techniques. Three of these are compared in the present work, along with two other methods, one based on the finite volume method (FVM) and another one based on the spherical harmonics discrete ordinates method (SHDOM). In addition, the approximation of the Henyey-Greenstein phase function by a different one is investigated as an alternative to the phase function normalization. The approximate phase function is given by the sum of a Dirac delta function, which accounts for the forward scattering peak, and a smoother scaled phase function. In this study, these techniques are applied to three scalar radiative transfer test cases, namely a three-dimensional cubic domain with a purely scattering medium, an axisymmetric cylindrical enclosure containing an emitting-absorbing-scattering medium, and a three-dimensional transient problem with collimated irradiation. The present results show that accurate predictions are achieved for strongly forward scattering media when the phase function is normalized in such a way that both the scattered energy and the phase function asymmetry factor are conserved. The normalization of the phase function may be avoided using the FVM or the SHDOM to evaluate the in-scattering term of the radiative transfer equation. Both methods yield results whose accuracy is similar to that obtained using the DOM along with normalization of the phase function. Very satisfactory predictions were also achieved using the delta-M phase function, while the delta
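
    A small sketch of the energy-conservation normalization discussed above, assuming a Gauss-Legendre quadrature over the cosine of the scattering angle as a stand-in for the ordinate set; the asymmetry-factor constraint used by some of the compared techniques is not reproduced here.

        import numpy as np

        def henyey_greenstein(mu, g):
            """Henyey-Greenstein phase function of the scattering-angle cosine mu."""
            return (1.0 - g ** 2) / (1.0 + g ** 2 - 2.0 * g * mu) ** 1.5

        # Gauss-Legendre quadrature over mu = cos(theta) stands in for the ordinates
        mu, w = np.polynomial.legendre.leggauss(16)
        g = 0.95                           # strongly forward-scattering medium
        p = henyey_greenstein(mu, g)

        # the discrete scattered energy should satisfy (1/2) * sum_i w_i p_i = 1
        energy = 0.5 * np.sum(w * p)
        p_norm = p / energy                # simple energy-conserving normalization

        print(energy)                            # noticeably != 1 for large g
        print(0.5 * np.sum(w * p_norm))          # == 1 after normalization
        print(0.5 * np.sum(w * mu * p_norm))     # asymmetry factor, still biased vs g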

  4. Study for correction of neutron scattering in the calibration of the albedo individual monitor from the Neutron Laboratory (LN), IRD/CNEN-RJ, Brazil

    International Nuclear Information System (INIS)

    Freitas, B.M.; Silva, A.X. da

    2014-01-01

    The Instituto de Radioprotecao e Dosimetria (IRD) runs a neutron individual monitoring service with an albedo-type monitor and thermoluminescent detectors (TLD). Moreover, most workers exposed to neutrons in Brazil are exposed to 241Am-Be fields. Therefore, a study of the response of the albedo dosemeter to neutrons scattered from a 241Am-Be source is important for a proper calibration. In this work, the influence of the scattering correction on the calibration of that albedo dosemeter for a 241Am-Be source was evaluated at two distances in the Low Scattering Laboratory of the Neutron Laboratory of the Brazilian National Laboratory (Lab. Nacional de Metrologia Brasileira de Radiacoes Ionizantes). (author)

  5. Fermion-boson scattering in ladder approximation

    International Nuclear Information System (INIS)

    Jafarov, R.G.; Hadjiev, S.A.

    1992-10-01

    A method of calculation of the forward scattering amplitude for fermions and scalar bosons with exchange of a scalar particle is suggested. The Bethe-Salpeter ladder equation for the imaginary part of the amplitude is constructed, a solution in Regge asymptotic form is found, and the corrections to the amplitude due to going off the mass shell are calculated. (author). 8 refs

  6. Radiative corrections to high-energy neutrino scattering

    International Nuclear Information System (INIS)

    Rujula, A. de; Petronzio, R.; Savoy-Navarro, A.

    1979-01-01

    Motivated by precise neutrino experiments, the electromagnetic radiative corrections to the data are reconsidered. The usefulness and simplicity of the 'leading log' approximation are demonstrated: the calculation to order α ln(Q/μ), α ln(Q/m_q). Here Q is an energy scale of the overall process, μ is the lepton mass and m_q is a hadronic mass, the effective quark mass in a parton model. The leading log radiative corrections to dσ/dy distributions and to suitably interpreted dσ/dx distributions are quark-mass independent. The authors improve upon the conventional leading log approximation and compute explicitly the largest terms that lie beyond the leading log level. In practice this means that the model-independent formulae, though approximate, are likely to be excellent estimates everywhere except at low energy or very large y. It is pointed out that radiative corrections to measurements of deviations from the Callan-Gross relation and to measurements of the 'sea' constituency of nucleons are gigantic. The QCD-inspired study of deviations from scaling is of particular interest. The authors compute, beyond the leading log level, the radiative corrections to the QCD predictions. (Auth.)

  7. Rayleigh scattering for a magnetized cold plasma sphere

    International Nuclear Information System (INIS)

    Li Yingle; Wang Mingjun; Tang Gaofeng; Li Jin

    2010-01-01

    The transformation of parameter tensors for an anisotropic medium between different coordinate systems is derived. The electric field for a magnetized cold plasma sphere and the general expression of the scattering field from an anisotropic target are obtained. The functional relations of the differential scattering cross section and the radar cross section for the magnetized plasma sphere are presented. Simulation results agree with those in the literature, which shows that the method used is correct; the results may therefore provide a theoretical basis for anisotropic target identification. (authors)

  8. Simulation of inverse Compton scattering and its implications on the scattered linewidth

    Science.gov (United States)

    Ranjan, N.; Terzić, B.; Krafft, G. A.; Petrillo, V.; Drebot, I.; Serafini, L.

    2018-03-01

    Rising interest in inverse Compton sources has increased the need for efficient models that properly quantify the behavior of scattered radiation given a set of interaction parameters. The current state-of-the-art simulations rely on Monte Carlo-based methods, which, while properly expressing scattering behavior in high-probability regions of the produced spectra, may not correctly simulate such behavior in low-probability regions (e.g. tails of spectra). Moreover, sampling may take an inordinate amount of time for the desired accuracy to be achieved. In this paper, we present an analytic derivation of the expression describing the scattered radiation linewidth and propose a model to describe the effects of horizontal and vertical emittance on the properties of the scattered radiation. We also present an improved version of the code initially reported in Krafft et al. [Phys. Rev. Accel. Beams 19, 121302 (2016), 10.1103/PhysRevAccelBeams.19.121302], that can perform the same simulations as those present in cain and give accurate results in low-probability regions by integrating over the emissions of the electrons. Finally, we use these codes to carry out simulations that closely verify the behavior predicted by the analytically derived scaling law.

  9. X-ray coherent scattering tomography of textured material (Conference Presentation)

    Science.gov (United States)

    Zhu, Zheyuan; Pang, Shuo

    2017-05-01

    Small-angle X-ray scattering (SAXS) measures the signature of angular-dependent coherently scattered X-rays, which contains richer information on material composition and structure compared to conventional absorption-based computed tomography. The SAXS image reconstruction method of a 2- or 3-dimensional object based on computed tomography, termed coherent scattering computed tomography (CSCT), enables the detection of a spatially-resolved, material-specific isotropic scattering signature inside an extended object, and provides improved contrast for medical diagnosis, security screening, and material characterization applications. However, traditional CSCT methods assume that materials are fine powders or amorphous and possess isotropic scattering profiles, which is not true for all materials. Anisotropic scatter cannot be captured using the conventional CSCT method and results in reconstruction errors. To obtain correct information from the sample, we designed a new imaging strategy that incorporates an extra degree of detector motion into X-ray scattering tomography for the detection of anisotropically scattered photons from a series of two-dimensional intensity measurements. Using a table-top, narrow-band X-ray source and a panel detector, we demonstrate the anisotropic scattering profile captured from an extended object and the reconstruction of a three-dimensional object. For materials possessing a well-organized crystalline structure with certain symmetry, the scatter texture is more predictable. We will also discuss the compressive schemes and implementation of data acquisition to improve the collection efficiency and accelerate the imaging process.

  10. Metric-based method of software requirements correctness improvement

    Directory of Open Access Journals (Sweden)

    Yaremchuk Svitlana

    2017-01-01

    Full Text Available The work highlights the most important principles of software reliability management (SRM). The SRM concept constitutes a basis for developing a method of requirements correctness improvement. The method assumes that complicated requirements contain more actual and potential design faults/defects. The method applies a new metric to evaluate requirements complexity and a double-sorting technique that evaluates the priority and complexity of a particular requirement. The method enables requirements correctness to be improved by identifying a higher number of defects with restricted resources. Practical application of the proposed method in the course of requirements review yielded a tangible technical and economic effect.
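
    A minimal sketch of the double-sorting step, under the assumption that each requirement carries a numeric priority and a complexity metric and that larger values mean higher priority and higher complexity (field names are illustrative):

        # Requirements are reviewed in order of descending priority and, within
        # equal priority, descending complexity metric (illustrative fields).
        requirements = [
            {"id": "R1", "priority": 2, "complexity": 14.0},
            {"id": "R2", "priority": 1, "complexity": 35.5},
            {"id": "R3", "priority": 1, "complexity": 18.2},
        ]
        review_order = sorted(requirements,
                              key=lambda r: (-r["priority"], -r["complexity"]))
        print([r["id"] for r in review_order])   # ['R1', 'R2', 'R3']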

  11. Large-angle hadron scattering at high energies

    International Nuclear Information System (INIS)

    Goloskokov, S.V.; Kudinov, A.V.; Kuleshov, S.P.

    1981-01-01

    Based on the quasipotential Logunov-Tavkhelidze approach, corrections to the amplitude of high-energy large-angle meson-nucleon scattering are estimated. The estimates are compared with the available experimental data on pp- and π±p-scattering, so as to check the adequacy of the suggested scheme to account for the preasymptotic effects. The compared results are presented in the form of tables and graphs. The following conclusions are drawn: 1. accounting for the corrections to the amplitude due to the long-range interaction gives good agreement between the theoretical and experimental data. 2. in the case of π±p-scattering the corrections prove to be comparable with the main asymptotic term up to transferred momenta p_(λc) = 50 GeV/c, which results in a noticeable deviation from the quark counting rules at such energies. Nevertheless, the preasymptotic formulae do well, beginning with p_(λc) of approximately 6 GeV/c. In the case of pp-scattering the corrections are mutually compensated to a considerable degree, and the deviation from the quark counting rules is negligible

  12. Generalized Hartree-Fock method for electron-atom scattering

    International Nuclear Information System (INIS)

    Rosenberg, L.

    1997-01-01

    In the widely used Hartree-Fock procedure for atomic structure calculations, trial functions in the form of linear combinations of Slater determinants are constructed and the Rayleigh-Ritz minimum principle is applied to determine the best in that class. A generalization of this approach, applicable to low-energy electron-atom scattering, is developed here. The method is based on a unique decomposition of the scattering wave function into open- and closed-channel components, so chosen that an approximation to the closed-channel component may be obtained by adopting it as a trial function in a minimum principle, whose rigor can be maintained even when the target wave functions are imprecisely known. Given a closed-channel trial function, the full scattering function may be determined from the solution of an effective one-body Schroedinger equation. Alternatively, in a generalized Hartree-Fock approach, the minimum principle leads to coupled integrodifferential equations to be satisfied by the basis functions appearing in a Slater-determinant representation of the closed-channel wave function; it also provides a procedure for optimizing the choice of nonlinear parameters in a variational determination of these basis functions. Inclusion of additional Slater determinants in the closed-channel trial function allows for systematic improvement of that function, as well as the calculated scattering parameters, with the possibility of spurious singularities avoided. Electron-electron correlations can be important in accounting for long-range forces and resonances. These correlation effects can be included explicitly by suitable choice of one component of the closed-channel wave function; the remaining component may then be determined by the generalized Hartree-Fock procedure. As a simple test, the method is applied to s-wave scattering of positrons by hydrogen. copyright 1997 The American Physical Society

  13. Automated general temperature correction method for dielectric soil moisture sensors

    Science.gov (United States)

    Kapilaratne, R. G. C. Jeewantinie; Lu, Minjiao

    2017-08-01

    An effective temperature correction method for dielectric sensors is important to ensure the accuracy of soil water content (SWC) measurements of local to regional-scale soil moisture monitoring networks. These networks extensively use highly temperature-sensitive dielectric sensors due to their low cost, ease of use and low power consumption. Yet there is no general temperature correction method for dielectric sensors; instead, sensor- or site-dependent correction algorithms are employed. Such methods become ineffective at soil moisture monitoring networks with different sensor setups and those that cover diverse climatic conditions and soil types. This study attempted to develop a general temperature correction method for dielectric sensors which can be commonly used regardless of the differences in sensor type, climatic conditions and soil type without rainfall data. In this work an automated general temperature correction method was developed by adopting previously developed temperature correction algorithms using time domain reflectometry (TDR) measurements to ThetaProbe ML2X, Stevens Hydra probe II and Decagon Devices EC-TM sensor measurements. The rainy day effects removal procedure from SWC data was automated by incorporating a statistical inference technique with temperature correction algorithms. The temperature correction method was evaluated using 34 stations from the International Soil Moisture Monitoring Network and another nine stations from a local soil moisture monitoring network in Mongolia. Soil moisture monitoring networks used in this study cover four major climates and six major soil types. Results indicated that the automated temperature correction algorithms developed in this study can eliminate temperature effects from dielectric sensor measurements successfully even without on-site rainfall data. Furthermore, it has been found that the actual daily average of SWC has been changed due to temperature effects of dielectric sensors with a

  14. A New Class of Scaling Correction Methods

    International Nuclear Information System (INIS)

    Mei Li-Jie; Wu Xin; Liu Fu-Yao

    2012-01-01

    When conventional integrators like Runge-Kutta-type algorithms are used, numerical errors can make an orbit deviate from a hypersurface determined by many constraints, which leads to unreliable numerical solutions. Scaling correction methods are a powerful tool to avoid this. We focus on their applications, and also develop a family of new velocity multiple scaling correction methods where scale factors only act on the related components of the integrated momenta. They can preserve exactly some first integrals of motion in discrete or continuous dynamical systems, so that rapid growth of roundoff or truncation errors is suppressed significantly. (general)

  15. Scattering theory on the lattice and with a Monte Carlo method

    International Nuclear Information System (INIS)

    Kroeger, H.; Moriarty, K.J.M.; Potvin, J.

    1990-01-01

    We present an alternative time-dependent method of calculating the S matrix in quantum systems governed by a Hamiltonian. In the first step one constructs a new Hamiltonian that describes the physics of scattering at energy E with a reduced number of degrees of freedom. Its matrix elements are computed with a Monte Carlo projector method. In the second step the scattering matrix is computed algebraically via diagonalization and exponentiation of the new Hamiltonian. Although we have in mind applications in many-body systems and quantum field theory, the method should be applicable and useful in such diverse areas as atomic and molecular physics, nuclear physics, high-energy physics and solid-state physics. As an illustration of the method, we compute s-wave scattering of two nucleons in a nonrelativistic potential model (Yamaguchi potential), for which the S matrix is known exactly

  16. Energy-angle correlation correction algorithm for monochromatic computed tomography based on Thomson scattering X-ray source

    Science.gov (United States)

    Chi, Zhijun; Du, Yingchao; Huang, Wenhui; Tang, Chuanxiang

    2017-12-01

    The necessity for compact and relatively low cost x-ray sources with monochromaticity, continuous tunability of x-ray energy, high spatial coherence, straightforward polarization control, and high brightness has led to the rapid development of Thomson scattering x-ray sources. To meet the requirement of in-situ monochromatic computed tomography (CT) for large-scale and/or high-attenuation materials based on this type of x-ray source, there is an increasing demand for effective algorithms to correct the energy-angle correlation. In this paper, we take advantage of the parametrization of the x-ray attenuation coefficient to resolve this problem. The linear attenuation coefficient of a material can be decomposed into a linear combination of the energy-dependent photoelectric and Compton cross-sections in the keV energy regime without K-edge discontinuities, and the line integrals of the decomposition coefficients of the above two parts can be determined by performing two spectrally different measurements. After that, the line integral of the linear attenuation coefficient of an imaging object at a certain interested energy can be derived through the above parametrization formula, and monochromatic CT can be reconstructed at this energy using traditional reconstruction methods, e.g., filtered back projection or algebraic reconstruction technique. Not only can monochromatic CT be realized, but also the distributions of the effective atomic number and electron density of the imaging object can be retrieved at the expense of dual-energy CT scan. Simulation results validate our proposal and will be shown in this paper. Our results will further expand the scope of application for Thomson scattering x-ray sources.
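
    A hedged numerical sketch of the two-basis decomposition described above (not the authors' algorithm): projections measured at two quasi-monochromatic energies are decomposed into photoelectric-like and Klein-Nishina components, from which a projection at an intermediate energy can be resynthesized; the basis functions, energies and projection values are illustrative assumptions.

        import numpy as np

        MEC2 = 511.0  # electron rest energy (keV)

        def photoelectric_basis(e):
            """Approximate energy dependence of the photoelectric cross-section."""
            return e ** -3.0

        def klein_nishina_basis(e):
            """Total Klein-Nishina cross-section shape (Compton energy dependence)."""
            a = e / MEC2
            return ((1 + a) / a ** 2 * (2 * (1 + a) / (1 + 2 * a) - np.log(1 + 2 * a) / a)
                    + np.log(1 + 2 * a) / (2 * a) - (1 + 3 * a) / (1 + 2 * a) ** 2)

        def monochromatic_projection(p_low, p_high, e_low, e_high, e_target):
            """Given two spectrally different line integrals p_low and p_high at
            (quasi-)monochromatic energies e_low and e_high, solve for the line
            integrals of the photoelectric and Compton decomposition coefficients
            and resynthesize the projection at e_target."""
            basis = np.array([[photoelectric_basis(e_low), klein_nishina_basis(e_low)],
                              [photoelectric_basis(e_high), klein_nishina_basis(e_high)]])
            a1, a2 = np.linalg.solve(basis, np.array([p_low, p_high]))
            return a1 * photoelectric_basis(e_target) + a2 * klein_nishina_basis(e_target)

        # Illustrative numbers: projections at 40 keV and 80 keV, resynthesized at 60 keV
        print(monochromatic_projection(p_low=4.2, p_high=1.9,
                                       e_low=40.0, e_high=80.0, e_target=60.0))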

  17. Faster method for the calculation of the scatter signal at the CT detector by the Monte Carlo method

    International Nuclear Information System (INIS)

    Schmidt, B.; Kalender, W.A.

    2003-01-01

    Multislice spiral CT scanners allow multiple slices to be acquired simultaneously. With increasing numbers of slices, not only the total extent of slice collimation increases, but also the contribution of scatter radiation to the detector signal. A fast method for calculating the scatter signal would offer the possibility to correct the measured detector signal. Monte Carlo methods allow the paths of photons through a 3D volume to be simulated, both in a patient- and scanner-specific fashion. If a scatter photon leaves the volume, its path can be followed and its interaction with an element of the detector can be checked. This conventional way of calculating the scatter signal is time-consuming. In order to reduce the calculation time, a more efficient method was developed (Method of Weights). Every time an interaction occurs inside the 3D volume, the probability of a detector hit due to photon scattering is calculated for each detector channel. The respective value is added to the scatter signal per detector with the corresponding weight. Simulated values of scatter-to-primary-signal ratios were confirmed by data available in the literature. Both the conventional and fast methods for the calculation of scatter signals yielded identical values within the range of statistical accuracy. Assuming the same computing time, the standard deviation for the conventional method was 5 times higher than for the fast one. The presented method allows the computation time to be reduced significantly. It may therefore provide a basis for "real time" methods to correct for the scatter signal, especially in the case of increasing numbers of slices. (orig.) [de]
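
    A strongly simplified sketch of the scoring idea (Method of Weights): instead of waiting for a scattered photon to reach the detector, every interaction contributes to every channel a weight equal to an approximate detection probability. The Klein-Nishina angular weighting and the attenuation along the exit path are omitted here, and all geometry and numbers are illustrative assumptions.

        import numpy as np

        def scatter_signal_by_weights(interactions, det_centers, det_area):
            """Deterministically score the expected scatter signal per detector channel.

            interactions : array (N, 3) of (x, y, statistical weight) for every
                           simulated interaction point inside the scanned object
            det_centers  : array (M, 2) of detector-element centre positions (mm)
            det_area     : effective area of one detector element (mm^2)

            Every interaction adds to every channel the subtended solid-angle
            fraction as a crude detection probability (angular weighting and
            attenuation along the exit path are left out of this sketch)."""
            signal = np.zeros(len(det_centers))
            for x, y, w in interactions:
                d2 = (det_centers[:, 0] - x) ** 2 + (det_centers[:, 1] - y) ** 2
                signal += w * det_area / (4.0 * np.pi * d2)
            return signal

        # Tiny synthetic example: interaction points inside a 200 mm phantom,
        # detector arc 950 mm from the isocentre (all numbers illustrative)
        rng = np.random.default_rng(0)
        interactions = np.column_stack([rng.uniform(-100, 100, 5000),
                                        rng.uniform(-100, 100, 5000),
                                        np.ones(5000)])
        angles = np.linspace(-0.45, 0.45, 672)
        det_centers = 950.0 * np.column_stack([np.sin(angles), -np.cos(angles)])
        profile = scatter_signal_by_weights(interactions, det_centers, det_area=1.0)
        print(profile.sum())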

  18. Practical methods to define scattering coefficients in a room acoustics computer model

    DEFF Research Database (Denmark)

    Zeng, Xiangyang; Christensen, Claus Lynge; Rindel, Jens Holger

    2006-01-01

    of obtaining the data becomes quite time consuming, thus increasing the cost of design. In this paper, practical methods to define scattering coefficients, based on an approach of modeling surface scattering and scattering caused by the limited size of surfaces as well as edge diffraction, are presented...

  19. [Study on phase correction method of spatial heterodyne spectrometer].

    Science.gov (United States)

    Wang, Xin-Qiang; Ye, Song; Zhang, Li-Juan; Xiong, Wei

    2013-05-01

    Phase distortion exists in the collected interferograms for a variety of measurement reasons when spatial heterodyne spectrometers are used in practice, so an improved phase correction method is presented. The phase curve of the interferogram was obtained by extracting the single-sided transform spectrum through an inverse Fourier transform; on this basis, the phase distortion was obtained by fitting the phase slope, the phase correction function was derived from it, and the transform spectrum was convolved with the phase correction function to implement the spectral phase correction. The method was applied to the phase correction of an actually measured monochromatic spectrum and a simulated water vapor spectrum. Experimental results show that the low-frequency false signals in the monochromatic spectrum fringes are effectively eliminated, increasing the periodicity and symmetry of the interferogram; in addition, when the phase error imposed on the continuous spectrum was corrected, the standard deviation between it and the original spectrum was reduced from 0.47 to 0.20, thus improving the spectral accuracy.
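
    A hedged sketch of a linear phase-slope correction (a generic variant, not necessarily the exact algorithm of the paper): the single-sided transform spectrum of the interferogram is computed, the unwrapped phase is fitted with a straight line over the signal band, and the fitted phase is removed; the synthetic Gaussian spectral line and the sample shift are assumptions.

        import numpy as np

        def correct_linear_phase(interferogram):
            """Fit and remove a linear phase distortion using the single-sided
            transform spectrum of a (real-valued) interferogram."""
            spectrum = np.fft.rfft(interferogram)            # single-sided spectrum
            k = np.arange(spectrum.size)
            # fit the unwrapped phase only where the spectrum carries signal
            band = np.abs(spectrum) > 0.05 * np.abs(spectrum).max()
            phase = np.unwrap(np.angle(spectrum[band]))
            slope, intercept = np.polyfit(k[band], phase, 1)
            return spectrum * np.exp(-1j * (slope * k + intercept))

        # Synthetic test: a Gaussian spectral line turned into an interferogram and
        # then shifted by a few samples to mimic a fringe-position (phase) error
        n, k0 = 1024, np.arange(513)
        line = np.exp(-0.5 * ((k0 - 120) / 15.0) ** 2)
        interferogram = np.fft.irfft(line, n)
        distorted = np.roll(interferogram, 4)
        corrected = correct_linear_phase(distorted)
        print(abs(np.angle(corrected[120])))   # ~0 rad once the phase slope is removed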

  20. Effective exchange potentials for electronically inelastic scattering

    International Nuclear Information System (INIS)

    Schwenke, D.W.; Staszewska, G.; Truhlar, D.G.

    1983-01-01

    We propose new methods for solving the electron scattering close coupling equations employing equivalent local exchange potentials in place of the continuum-multiconfiguration-Hartree-Fock-type exchange kernels. The local exchange potentials are Hermitian. They have the correct symmetry for any symmetries of excited electronic states included in the close coupling expansion, and they have the same limit at very high energy as previously employed exchange potentials. Comparison of numerical calculations employing the new exchange potentials with the results obtained with the standard nonlocal exchange kernels shows that the new exchange potentials are more accurate than the local exchange approximations previously available for electronically inelastic scattering. We anticipate that the new approximations will be most useful for intermediate-energy electronically inelastic electron-molecule scattering

  1. Inelastic neutron scattering method in hard coal quality monitoring

    International Nuclear Information System (INIS)

    Cywicka-Jakiel, T.; Loskiewicz, J.; Tracz, G.

    1994-07-01

    Nuclear methods in the mining industry and power generation plants are nowadays very important, especially because of the need for optimization of combustion processes and reduction of environmental pollution. On-line analysis of coal quality not only brings economic benefits but also contributes to environmental protection. Neutron methods, especially inelastic scattering and PGNAA, are very useful for analysis of coal quality, where calorific value, ash and moisture content are the most important. Using Pu-Be or Am-Be isotopic sources and measuring carbon 4.43 MeV γ-rays from neutron inelastic scattering, 12C(n,n'γ)12C, the calorific value of hard coals can be evaluated with better precision than with the PGNAA method. This is mainly because of the large cross-section for inelastic scattering and the strong correlation between carbon content and calorific value shown in the paper for different coal basins. The influence of moisture on the 4.43 MeV carbon γ-rays is considered in the paper in theoretical and experimental aspects and an appropriate formula is introduced. The possibilities of determining ash, moisture, Cl, Na and Si in coal are also shown. (author). 11 refs, 15 figs

  2. Inelastic neutron scattering method in hard coal quality monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Cywicka-Jakiel, T.; Loskiewicz, J.; Tracz, G. [Institute of Nuclear Physics, Cracow (Poland)

    1994-07-01

    Nuclear methods in the mining industry and power generation plants are nowadays very important, especially because of the need for optimization of combustion processes and reduction of environmental pollution. On-line analysis of coal quality not only brings economic benefits but also contributes to environmental protection. Neutron methods, especially inelastic scattering and PGNAA, are very useful for analysis of coal quality, where calorific value, ash and moisture content are the most important. Using Pu-Be or Am-Be isotopic sources and measuring carbon 4.43 MeV γ-rays from neutron inelastic scattering, 12C(n,n'γ)12C, the calorific value of hard coals can be evaluated with better precision than with the PGNAA method. This is mainly because of the large cross-section for inelastic scattering and the strong correlation between carbon content and calorific value shown in the paper for different coal basins. The influence of moisture on the 4.43 MeV carbon γ-rays is considered in the paper in theoretical and experimental aspects and an appropriate formula is introduced. The possibilities of determining ash, moisture, Cl, Na and Si in coal are also shown. (author). 11 refs, 15 figs.

  3. An Automated Baseline Correction Method Based on Iterative Morphological Operations.

    Science.gov (United States)

    Chen, Yunliang; Dai, Liankui

    2018-05-01

    Raman spectra usually suffer from baseline drift caused by fluorescence or other reasons. Therefore, baseline correction is a necessary and crucial step that must be performed before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method can adaptively determine the structuring element first and then gradually remove the spectral peaks during iteration to get an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible for handling different kinds of baselines in various practical situations. The comparison of the proposed method with some state-of-the-art baseline correction methods demonstrates its advantages over the existing methods in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method can hopefully also be used for the baseline correction of other analytical instrumental signals, such as IR spectra and chromatograms.
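
    A minimal sketch of baseline estimation by iterated greyscale openings, assuming a window wider than the Raman peaks; this is one plausible realization of the idea, not the adaptive algorithm of the paper, and the window size and synthetic spectrum are assumptions.

        import numpy as np
        from scipy.ndimage import grey_opening, uniform_filter1d

        def morphological_baseline(y, window=101, n_iter=10):
            """Estimate a slowly varying baseline by repeatedly applying a greyscale
            opening (erosion followed by dilation) whose window is wider than the
            peaks, smoothing the result, and letting the estimate move only
            downwards between iterations."""
            baseline = np.asarray(y, dtype=float).copy()
            for _ in range(n_iter):
                opened = grey_opening(baseline, size=window)    # removes narrow peaks
                opened = uniform_filter1d(opened, size=window)  # soften flat-top artifacts
                baseline = np.minimum(baseline, opened)
            return baseline

        # Synthetic Raman-like spectrum: two peaks on a curved, drifting baseline
        x = np.linspace(0, 1, 2000)
        true_baseline = 200 + 150 * x + 80 * np.sin(3 * x)
        peaks = (500 * np.exp(-0.5 * ((x - 0.35) / 0.01) ** 2)
                 + 300 * np.exp(-0.5 * ((x - 0.7) / 0.008) ** 2))
        spectrum = true_baseline + peaks
        estimate = morphological_baseline(spectrum, window=201)
        print(np.max(np.abs(estimate - true_baseline)))   # small compared to the peaks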

  4. Method for decoupling error correction from privacy amplification

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Hoi-Kwong [Department of Electrical and Computer Engineering and Department of Physics, University of Toronto, 10 King's College Road, Toronto, Ontario, Canada, M5S 3G4 (Canada)

    2003-04-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof.

  5. Method for decoupling error correction from privacy amplification

    International Nuclear Information System (INIS)

    Lo, Hoi-Kwong

    2003-01-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof

  6. Simulating water hammer with corrective smoothed particle method

    NARCIS (Netherlands)

    Hou, Q.; Kruisbrink, A.C.H.; Tijsseling, A.S.; Keramat, A.

    2012-01-01

    The corrective smoothed particle method (CSPM) is used to simulate water hammer. The spatial derivatives in the water-hammer equations are approximated by a corrective kernel estimate. For the temporal derivatives, the Euler-forward time integration algorithm is employed. The CSPM results are in

  7. Evaluation of ion chamber dependent correction factors for ionisation chamber dosimetry in proton beams using a Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Palmans, H [Ghent Univ. (Belgium). Dept. of Biomedical Physics; Verhaegen, F

    1995-12-01

    In the last decade, several clinical proton beam therapy facilities have been developed. To satisfy the demand for uniformity in clinical (routine) proton beam dosimetry two dosimetry protocols (ECHED and AAPM) have been published. Both protocols neglect the influence of ion chamber dependent parameters on dose determination in proton beams because of the scatter properties of these beams, although the problem has not been studied thoroughly yet. A comparison between water calorimetry and ionisation chamber dosimetry showed a discrepancy of 2.6% between the former method and ionometry following the ECHED protocol. Possibly, a small part of this difference can be attributed to chamber dependent correction factors. Indications for this possibility are found in ionometry measurements. To allow the simulation of complex geometries with different media necessary for the study of those corrections, an existing proton Monte Carlo code (PTRAN, Berger) has been modified. The original code, which applies Molière's multiple scattering theory and Vavilov's energy straggling theory, calculates depth dose profiles, energy distributions and radial distributions for pencil beams in water. Comparisons with measurements and calculations reported in the literature are done to test the program's accuracy. Preliminary results of the influence of chamber design and chamber materials on dose to water determination are presented.

  8. Evaluation of ion chamber dependent correction factors for ionisation chamber dosimetry in proton beams using a Monte Carlo method

    International Nuclear Information System (INIS)

    Palmans, H.; Verhaegen, F.

    1995-01-01

    In the last decade, several clinical proton beam therapy facilities have been developed. To satisfy the demand for uniformity in clinical (routine) proton beam dosimetry two dosimetry protocols (ECHED and AAPM) have been published. Both protocols neglect the influence of ion chamber dependent parameters on dose determination in proton beams because of the scatter properties of these beams, although the problem has not been studied thoroughly yet. A comparison between water calorimetry and ionisation chamber dosimetry showed a discrepancy of 2.6% between the former method and ionometry following the ECHED protocol. Possibly, a small part of this difference can be attributed to chamber dependent correction factors. Indications for this possibility are found in ionometry measurements. To allow the simulation of complex geometries with different media necessary for the study of those corrections, an existing proton Monte Carlo code (PTRAN, Berger) has been modified. The original code, which applies Molière's multiple scattering theory and Vavilov's energy straggling theory, calculates depth dose profiles, energy distributions and radial distributions for pencil beams in water. Comparisons with measurements and calculations reported in the literature are done to test the program's accuracy. Preliminary results of the influence of chamber design and chamber materials on dose to water determination are presented

  9. Elastic scattering of positronium: Application of the confined variational method

    KAUST Repository

    Zhang, Junyi

    2012-08-01

    We demonstrate for the first time that the phase shift in elastic positronium-atom scattering can be precisely determined by the confined variational method, in spite of the fact that the Hamiltonian includes an unphysical confining potential acting on the center of mass of the positron and one of the atomic electrons. As an example, we study the S-wave elastic scattering for the positronium-hydrogen scattering system, where the existing 4% discrepancy between the Kohn variational calculation and the R-matrix calculation is resolved. © Copyright EPLA, 2012.

  10. Elastic scattering of positronium: Application of the confined variational method

    KAUST Repository

    Zhang, Junyi; Yan, Zong-Chao; Schwingenschlögl, Udo

    2012-01-01

    We demonstrate for the first time that the phase shift in elastic positronium-atom scattering can be precisely determined by the confined variational method, in spite of the fact that the Hamiltonian includes an unphysical confining potential acting on the center of mass of the positron and one of the atomic electrons. As an example, we study the S-wave elastic scattering for the positronium-hydrogen scattering system, where the existing 4% discrepancy between the Kohn variational calculation and the R-matrix calculation is resolved. © Copyright EPLA, 2012.

  11. Transient radiative transfer in a scattering slab considering polarization.

    Science.gov (United States)

    Yi, Hongliang; Ben, Xun; Tan, Heping

    2013-11-04

    Both the transient behavior and the polarization must be considered for a complete and correct description of short-pulse laser transfer in a scattering medium. A Monte Carlo (MC) method combined with a time shift and superposition principle is developed to simulate transient vector (polarized) radiative transfer in a scattering medium. The transient vector radiative transfer matrix (TVRTM) is defined to describe the transient polarization behavior of a short-pulse laser propagating in the scattering medium. According to the definition of reflectivity, a new criterion for reflection at a Fresnel surface is presented. In order to improve the computational efficiency and accuracy, a time shift and superposition principle is applied to the MC model for transient vector radiative transfer. The results for transient scalar radiative transfer and steady-state vector radiative transfer are compared with those in the published literature, respectively, and excellent agreement between them is observed, which validates the correctness of the present model. Finally, transient radiative transfer is simulated considering the polarization effect of a short-pulse laser in a scattering medium, and the distributions of the Stokes vector in angular and temporal space are presented.

  12. Estimation of scattering object characteristics for image reconstruction using a nonzero background.

    Science.gov (United States)

    Jin, Jing; Astheimer, Jeffrey; Waag, Robert

    2010-06-01

    Two methods are described to estimate the boundary of a 2-D penetrable object and the average sound speed in the object. One method is for circular objects centered in the coordinate system of the scattering observation. This method uses an orthogonal function expansion for the scattering. The other method is for noncircular, essentially convex objects. This method uses cross correlation to obtain time differences that determine a family of parabolas whose envelope is the boundary of the object. A curve-fitting method and a phase-based method are described to estimate and correct the offset of an uncentered radial or elliptical object. A method based on the extinction theorem is described to estimate absorption in the object. The methods are applied to calculated scattering from a circular object with an offset and to measured scattering from an offset noncircular object. The results show that the estimated boundaries, sound speeds, and absorption slopes agree very well with independently measured or true values when the assumptions of the methods are reasonably satisfied.
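
    A small sketch of the time-difference step that underlies the parabola construction mentioned above: the delay between two received waveforms is estimated from the peak of their cross-correlation; the sampling rate and synthetic pulses are assumptions.

        import numpy as np

        def time_difference(sig_a, sig_b, dt):
            """Estimate the delay of sig_b relative to sig_a (in seconds) from the
            peak of their full cross-correlation."""
            xc = np.correlate(sig_b, sig_a, mode="full")
            lag = np.argmax(xc) - (len(sig_a) - 1)      # lag in samples
            return lag * dt

        # Synthetic example: a short pulse received with a 0.8 us arrival-time offset
        dt = 1.0 / 40e6                                  # 40 MHz sampling (assumption)
        t = np.arange(4096) * dt
        pulse = lambda t0: (np.exp(-((t - t0) / 0.2e-6) ** 2)
                            * np.sin(2 * np.pi * 2e6 * (t - t0)))
        print(time_difference(pulse(20e-6), pulse(20.8e-6), dt))   # ~0.8e-6 s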

  13. A bias-corrected CMIP5 dataset for Africa using the CDF-t method - a contribution to agricultural impact studies

    Science.gov (United States)

    Moise Famien, Adjoua; Janicot, Serge; Delfin Ochou, Abe; Vrac, Mathieu; Defrance, Dimitri; Sultan, Benjamin; Noël, Thomas

    2018-03-01

    The objective of this paper is to present a new dataset of bias-corrected CMIP5 global climate model (GCM) daily data over Africa. This dataset was obtained using the cumulative distribution function transform (CDF-t) method, a method that has been applied to several regions and contexts but never to Africa. Here CDF-t has been applied over the period 1950-2099 combining Historical runs and climate change scenarios for six variables: precipitation, mean near-surface air temperature, near-surface maximum air temperature, near-surface minimum air temperature, surface downwelling shortwave radiation, and wind speed, which are critical variables for agricultural purposes. WFDEI has been used as the reference dataset to correct the GCMs. Evaluation of the results over West Africa has been carried out on a list of priority user-based metrics that were discussed and selected with stakeholders. It includes simulated yield using a crop model simulating maize growth. These bias-corrected GCM data have been compared with another available dataset of bias-corrected GCMs using WATCH Forcing Data as the reference dataset. The impact of WFD, WFDEI, and also EWEMBI reference datasets has been also examined in detail. It is shown that CDF-t is very effective at removing the biases and reducing the high inter-GCM scattering. Differences with other bias-corrected GCM data are mainly due to the differences among the reference datasets. This is particularly true for surface downwelling shortwave radiation, which has a significant impact in terms of simulated maize yields. Projections of future yields over West Africa are quite different, depending on the bias-correction method used. However all these projections show a similar relative decreasing trend over the 21st century.
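
    For orientation only, a minimal empirical quantile-mapping sketch in the spirit of CDF-based corrections; it is not the CDF-t implementation used to build the dataset described above (CDF-t additionally transfers the historical-to-future evolution of the model CDF onto the observed CDF). All series are synthetic.

        import numpy as np

        def quantile_map(model_hist, obs_hist, model_future):
            """Empirical quantile mapping: replace each future model value by the
            observed value having the same quantile in the historical period."""
            model_sorted = np.sort(model_hist)
            obs_sorted = np.sort(obs_hist)
            quantiles = np.linspace(0.0, 1.0, model_sorted.size)
            # quantile of each future value within the historical model distribution
            q = np.interp(model_future, model_sorted, quantiles)
            # map those quantiles onto the observed historical distribution
            return np.interp(q, np.linspace(0.0, 1.0, obs_sorted.size), obs_sorted)

        # Illustrative daily temperature series (degrees C, synthetic numbers)
        rng = np.random.default_rng(1)
        obs_hist = rng.normal(27.0, 2.0, 10000)          # reference, e.g. WFDEI-like
        model_hist = rng.normal(25.0, 3.0, 10000)        # biased GCM, historical run
        model_future = rng.normal(26.5, 3.0, 10000)      # same GCM, scenario run
        corrected = quantile_map(model_hist, obs_hist, model_future)
        print(corrected.mean(), corrected.std())         # bias largely removed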

  14. Neutron Inelastic Scattering Study of Liquid Argon

    Energy Technology Data Exchange (ETDEWEB)

    Skoeld, K; Rowe, J M; Ostrowski, G [Solid State Science Div., Argonne National Laboratory, Argonne, Illinois (US); Randolph, P D [Nuclear Technology Div., Idaho Nuclear Corporation, Idaho Falls, Idaho (US)

    1972-02-15

    The inelastic scattering functions for liquid argon have been measured at 85.2 K. The coherent scattering function was obtained from a measurement on pure A-36 and the incoherent function was derived from the result obtained from the A-36 sample and the result obtained from a mixture of A-36 and A-40 for which the scattering is predominantly incoherent. The data, which are presented as smooth scattering functions at constant values of the wave vector transfer in the range 10-44 nm⁻¹, are corrected for multiple scattering contributions and for resolution effects. Such corrections are shown to be essential in the derivation of reliable scattering functions from neutron scattering data. The incoherent data are compared to recent molecular dynamics results and the mean square displacement as a function of time is derived. The coherent data are compared to molecular dynamics results and also, briefly, to some recent theoretical models

  15. Inelastic scattering in condensed matter with high intensity Moessbauer radiation

    International Nuclear Information System (INIS)

    Yelon, W.B.; Schupp, G.

    1990-10-01

    We give a progress report for the work which has been carried out in the last three years with DOE support. A facility for high-intensity Moessbauer scattering is now fully operational at the University of Missouri Research Reactor (MURR) as well as a facility at Purdue, using special isotopes produced at MURR. High precision, fundamental Moessbauer effect studies have been carried out using scattering to filter the unwanted radiation. These have led to a new Fourier transform method for describing Moessbauer effect (ME) lineshape and a direct method of fitting ME data to the convolution integral. These methods allow complete correction for source resonance self absorption (SRSA) and the accurate representation of interference effects that add an asymmetric component to the ME lines. We have begun applying these techniques to attenuated ME sources whose central peak has been attenuated by stationary resonant absorbers, to more precisely determine interference parameters and line-shape behavior in the resonance asymptotic region. This analysis is important to both the fundamental ME studies and to scattering studies for which a deconvolution is essential for extracting the correct recoilless fractions and interference parameters. A number of scattering studies have been successfully carried out including a study of the thermal diffuse scattering in Si, which led to an analysis of the resolution function for gamma-ray scattering. Also studied was the anharmonic motion in Na and the satellite reflection Debye-Waller factor in TaS2, which indicate phason rather than phonon behavior. We have begun quasielastic diffusion studies in viscous liquids and current results are summarized. These advances, coupled to our improvements in Microfoil Conversion Electron spectroscopy, lay the foundation for the proposed research outlined in this request for a three-year renewal of DOE support

  16. Gamma scattering in condensed matter with high intensity Moessbauer radiation

    International Nuclear Information System (INIS)

    1990-01-01

    We give a progress report for the work which has been carried out in the last three years with DOE support. A facility for high-intensity Moessbauer scattering is now fully operational at the University of Missouri Research Reactor (MURR), as well as a facility at Purdue, using special isotopes produced at MURR. High precision, fundamental Moessbauer effect studies have been carried out using scattering to filter the unwanted radiation. These have led to a new Fourier transform method for describing Moessbauer effect (ME) lineshape and a direct method of fitting ME data to the convolution integral. These methods allow complete correction for source resonance self absorption (SRSA) and the accurate representation of interference effects that add an asymmetric component to the ME lines. We have begun applying these techniques to attenuated ME sources whose central peak has been attenuated by stationary resonant absorbers, to more precisely determine interference parameters and line-shape behavior in the resonance asymptotic region. This analysis is important both to the fundamental ME studies and to scattering studies for which a deconvolution is essential for extracting the correct recoilless fractions and interference parameters. A number of scattering studies have been successfully carried out, including a study of the thermal diffuse scattering in Si, which led to an analysis of the resolution function for gamma-ray scattering. Also studied were the anharmonic motion in Na and the satellite reflection Debye-Waller factor in TaS₂, which indicate phason rather than phonon behavior. We have begun quasielastic diffusion studies in viscous liquids, and current results are summarized. These advances, coupled to our improvements in Microfoil Conversion Electron spectroscopy, lay the foundation for the proposed research outlined in this request for a three-year renewal of DOE support.

  17. Quadratic Regression-based Non-uniform Response Correction for Radiochromic Film Scanners

    International Nuclear Information System (INIS)

    Jeong, Hae Sun; Kim, Chan Hyeong; Han, Young Yih; Kum, O Yeon

    2009-01-01

    In recent years, several types of radiochromic films have been extensively used for two-dimensional dose measurements, such as dosimetry in radiotherapy as well as imaging and radiation protection applications. One of the critical aspects in radiochromic film dosimetry is accurate readout by the scanner without dose distortion. However, most charge-coupled device (CCD) scanners used for the optical density readout of the film employ a fluorescent lamp or a cold-cathode lamp as a light source, which leads to a significant amount of light scattering in the active layer of the film. Because of this light scattering, the scanner response becomes non-uniform and the measured dose is distorted even when the film is irradiated uniformly. To correct the distorted doses, a method based on correction factors (CF) has been reported and used. However, the CF-based method can only be applied when the incident doses are already known, so predicting the true incident dose is difficult when an arbitrary, unknown dose has been delivered to the film. In a previous study, therefore, a pixel-based algorithm with linear regression was developed to correct the dose distortion of a flatbed scanner and to estimate the initial doses. The results, however, were not very good in some cases, especially when the incident dose was below approximately 100 cGy. In the present study, this problem was addressed by replacing the linear regression with a quadratic regression. The doses corrected with this method were also compared with the results of other conventional methods.
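
    The abstract does not spell out the algorithm, so the following is only a minimal sketch of the general idea under stated assumptions: films exposed to known uniform doses are scanned, a quadratic relation between scanner readout and delivered dose is fitted for every pixel column, and the fitted polynomials are then used to map a measured scan back to incident dose. All names and numbers below are illustrative, not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical calibration scans: films irradiated with known uniform doses (cGy),
      # read out by a scanner whose response varies with pixel position (light scatter).
      known_doses = np.array([50.0, 100.0, 200.0, 300.0, 400.0])
      response = np.linspace(0.9, 1.1, 256)                     # position-dependent sensitivity
      readouts = known_doses[:, None] * response + rng.normal(0.0, 2.0, (5, 256))

      # Per-pixel quadratic regression: dose = a*r**2 + b*r + c for each pixel column.
      coeffs = np.empty((3, 256))
      for j in range(256):
          coeffs[:, j] = np.polyfit(readouts[:, j], known_doses, deg=2)

      def correct(scan_row):
          """Map a measured scan row back to an estimated incident dose per pixel."""
          a, b, c = coeffs
          return a * scan_row**2 + b * scan_row + c

      # A uniform 150 cGy exposure, distorted by the position-dependent response
      estimate = correct(150.0 * response)
      print(round(float(estimate.mean()), 1))   # close to the true 150 cGy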

  18. On the solution of a few problems of multiple scattering by Monte Carlo method

    International Nuclear Information System (INIS)

    Bluet, J.C.

    1966-02-01

    Three problems of multiple scattering arising from neutron cross-section experiments are reported here. The common hypotheses are: elastic scattering is the only possible process; angular distributions are isotropic; losses of particle energy in successive collisions are negligible. In the three cases practical results corresponding to actual experiments are given. Moreover, the results are presented in a more general way, using dimensionless variables such as the ratio of the geometrical dimensions to the neutron mean free path. The FORTRAN codes are given together with the corresponding flow charts and lexicons of symbols. First problem: measurement of the sodium capture cross-section. A sodium sample of given geometry is subjected to a neutron flux. The induced activity is then measured by means of a sodium iodide crystal. The distribution of active nuclei in the sample and the counter efficiency are calculated by the Monte Carlo method, taking multiple scattering into account. Second problem: absolute measurement of a neutron flux using a glass scintillator. The scintillator is a lithium-6 loaded glass, subjected to a neutron flux perpendicular to its plane faces. If the glass thickness is not negligible compared with the scattering mean free path λ, the mean path e' of neutrons in the glass differs from the thickness e. Monte Carlo calculations are made to compute this path and the relative correction to the efficiency, equal to (e' - e)/e. Third problem: study of a neutron collimator. A neutron detector is placed at the bottom of a cylinder surrounded by water. A neutron source is placed on the cylinder axis, in front of the water shield. The numbers of neutron tracks going directly and indirectly through the water from the source to the detector are counted. (author) [fr
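
    For the second problem, a minimal Monte Carlo sketch of the same idea (not the original FORTRAN code) follows: neutrons enter the slab normally, travel exponentially distributed free flights with mean free path λ, scatter isotropically with no absorption or energy loss, and the mean path e' actually travelled in the glass and the relative correction (e' - e)/e are tallied. The thickness and mean-free-path values are illustrative.

      import numpy as np

      def mean_path_in_slab(thickness, mfp, n_hist=100_000, seed=2):
          """Monte Carlo estimate of the mean path travelled in a slab by neutrons
          entering normally, assuming isotropic elastic scattering only, with no
          absorption and no energy loss (the simplifying hypotheses above)."""
          rng = np.random.default_rng(seed)
          total_path = 0.0
          for _ in range(n_hist):
              z, mu = 0.0, 1.0                  # depth and direction cosine w.r.t. the slab normal
              while True:
                  step = rng.exponential(mfp)   # free flight to the next collision
                  z_new = z + mu * step
                  if z_new < 0.0:               # escapes through the front face
                      total_path += z / abs(mu)
                      break
                  if z_new > thickness:         # escapes through the back face
                      total_path += (thickness - z) / mu
                      break
                  total_path += step            # collision inside: add the full step, scatter isotropically
                  z = z_new
                  mu = rng.uniform(-1.0, 1.0)
          return total_path / n_hist

      e = 0.5     # glass thickness (same units as the mean free path)
      lam = 1.0   # scattering mean free path
      e_prime = mean_path_in_slab(e, lam)
      print(f"e' = {e_prime:.3f}, relative efficiency correction (e'-e)/e = {(e_prime - e) / e:.3f}")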

  19. Synthetic acceleration methods for linear transport problems with highly anisotropic scattering

    International Nuclear Information System (INIS)

    Khattab, K.M.

    1989-01-01

    One of the iterative methods used to solve the discretized transport equation is called the Source Iteration (SI) method. The SI method converges very slowly for problems with optically thick regions and scattering ratios (σ_s/σ_t) near unity. The Diffusion-Synthetic Acceleration (DSA) method is one of the methods which has been devised to improve the convergence rate of the SI method. The DSA method is a good tool to accelerate the SI method if the particle being dealt with is a neutron, because the scattering process for neutrons is not severely anisotropic. However, if the particle is a charged particle (electron), DSA becomes ineffective as an acceleration device, because here the scattering process is severely anisotropic. To improve the DSA algorithm for electron transport, the author approaches the problem in two different ways in this thesis. He develops the first approach by accelerating more angular moments (φ_0, φ_1, φ_2, φ_3, ...) than is done in DSA; he calls this approach the Modified P_N Synthetic Acceleration (MPSA) method. In the second approach he modifies the definition of the transport sweep, using the physics of the scattering; he calls this approach the Modified Diffusion Synthetic Acceleration (MDSA) method. In general, he has developed, analyzed, and implemented the MPSA and MDSA methods in this thesis and has shown that for a high order quadrature set and mesh widths of about 1.0 cm, they are each about 34 times faster (clock time) than the DSA method. He has also found that the MDSA spectral radius decreases as the mesh size increases. This makes the MDSA method a better choice for large spatial meshes.
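
    The c-dependence that MPSA and MDSA are designed to overcome can be illustrated with a toy model: in an infinite homogeneous medium, unaccelerated source iteration reduces to φ_(k+1) = c·φ_k + q, so the error shrinks only by a factor c = σ_s/σ_t per sweep. The short Python sketch below shows the resulting iteration counts; it is purely illustrative and is not the MPSA/MDSA scheme of the thesis.

      def si_iterations(c, q=1.0, tol=1e-6, max_it=100_000):
          """Count unaccelerated source iterations needed to converge the
          infinite-medium model phi = c*phi + q to a relative tolerance tol."""
          phi_exact = q / (1.0 - c)          # fixed point of phi = c*phi + q
          phi, n = 0.0, 0
          while abs(phi - phi_exact) > tol * phi_exact and n < max_it:
              phi = c * phi + q              # one unaccelerated source iteration
              n += 1
          return n

      for c in (0.5, 0.9, 0.99, 0.999):
          print(f"c = {c:5}: {si_iterations(c):6d} iterations to reach 1e-6")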

  20. Topics in bound-state dynamical processes: semiclassical eigenvalues, reactive scattering kernels and gas-surface scattering models

    International Nuclear Information System (INIS)

    Adams, J.E.

    1979-05-01

    The difficulty of applying the WKB approximation to problems involving arbitrary potentials has been confronted. Recent work has produced a convenient expression for the potential correction term. However, this approach does not yield a unique correction term and hence cannot be used to construct the proper modification. An attempt is made to overcome the uniqueness difficulties by imposing a criterion which permits identification of the correct modification. Sections of this work are: semiclassical eigenvalues for potentials defined on a finite interval; reactive scattering exchange kernels; a unified model for elastic and inelastic scattering from a solid surface; and selective absorption on a solid surface

  1. Nanoscale array structures suitable for surface enhanced Raman scattering and methods related thereto

    Science.gov (United States)

    Bond, Tiziana C; Miles, Robin; Davidson, James; Liu, Gang Logan

    2015-11-03

    Methods for fabricating nanoscale array structures suitable for surface enhanced Raman scattering, structures thus obtained, and methods to characterize the nanoscale array structures suitable for surface enhanced Raman scattering. Nanoscale array structures may comprise nanotrees, nanorecesses and tapered nanopillars.

  2. Nanoscale array structures suitable for surface enhanced Raman scattering and methods related thereto

    Science.gov (United States)

    Bond, Tiziana C.; Miles, Robin; Davidson, James C.; Liu, Gang Logan

    2014-07-22

    Methods for fabricating nanoscale array structures suitable for surface enhanced Raman scattering, structures thus obtained, and methods to characterize the nanoscale array structures suitable for surface enhanced Raman scattering. Nanoscale array structures may comprise nanotrees, nanorecesses and tapered nanopillars.

  3. Nanoscale array structures suitable for surface enhanced Raman scattering and methods related thereto

    Science.gov (United States)

    Bond, Tiziana C.; Miles, Robin; Davidson, James C.; Liu, Gang Logan

    2015-07-14

    Methods for fabricating nanoscale array structures suitable for surface enhanced Raman scattering, structures thus obtained, and methods to characterize the nanoscale array structures suitable for surface enhanced Raman scattering. Nanoscale array structures may comprise nanotrees, nanorecesses and tapered nanopillars.

  4. Methods of contrast variation by nuclear polarisation in small-angle neutron scattering: Observation of domains of nuclear polarisation by neutron scattering

    International Nuclear Information System (INIS)

    Leymarie, E.

    2002-11-01

    In this thesis we study the theoretical and experimental aspects of Contrast Variation by Nuclear Polarization (CVNP) applied to small-angle neutron scattering. The basic theory of neutron scattering is developed, highlighting the origin of the CVNP method: the strong spin dependence of thermal neutron scattering, especially on protons. We also present the principles of NMR, with special attention to the method of dynamic nuclear polarization by the solid effect, which makes it possible to control the proton polarization and therefore the contrast for neutron scattering. We present a theoretical study of the so-called static CVNP method, which assumes that the nuclear polarization is homogeneous in the sample and constant during the experiment. We show that it allows one to obtain partial structure functions of systems with multiple components by carrying out several acquisitions with different polarizations on a single sample. For this purpose, we tested a simple device to stabilize the nuclear polarization. Finally, we describe a new application of the CVNP method, called dynamic. In a solution of deuterated glycerol-water containing a small concentration of paramagnetic centres, we showed the existence of domains of polarized protons at the onset of dynamic polarization. This considerably reinforces the coherent scattering of the paramagnetic centres. We describe the theoretical reasons explaining the appearance of these polarization domains, as well as the various techniques used to observe them by neutron scattering. (author)

  5. J-matrix method of scattering in one dimension: The nonrelativistic theory

    International Nuclear Information System (INIS)

    Alhaidari, A.D.; Bahlouli, H.; Abdelmonem, M.S.

    2009-01-01

    We formulate a theory of nonrelativistic scattering in one dimension based on the J-matrix method. The scattering potential is assumed to have a finite range such that it is well represented by its matrix elements in a finite subset of a basis that supports a tridiagonal matrix representation for the reference wave operator. Contrary to our expectation, the 1D formulation reveals a rich and highly nontrivial structure compared to the 3D formulation. Examples are given to demonstrate the utility and accuracy of the method. It is hoped that this formulation constitutes a viable alternative to the classical treatment of the 1D scattering problem and that it will help unveil new and interesting applications.

  6. Scattering in an intense radiation field: Time-independent methods

    International Nuclear Information System (INIS)

    Rosenberg, L.

    1977-01-01

    The standard time-independent formulation of nonrelativistic scattering theory is here extended to take into account the presence of an intense external radiation field. In the case of scattering by a static potential the extension is accomplished by the introduction of asymptotic states and intermediate-state propagators which account for the absorption and induced emission of photons by the projectile as it propagates through the field. Self-energy contributions to the propagator are included by a systematic summation of forward-scattering terms. The self-energy analysis is summarized in the form of a modified perturbation expansion of the type introduced by Watson some time ago in the context of nuclear-scattering theory. This expansion, which has a simple continued-fraction structure in the case of a single-mode field, provides a generally applicable successive approximation procedure for the propagator and the asymptotic states. The problem of scattering by a composite target is formulated using the effective-potential method. The modified perturbation expansion which accounts for self-energy effects is applicable here as well. A discussion of a coupled two-state model is included to summarize and clarify the calculational procedures

  7. A method to measure the antikaon-nucleon scattering length in lattice QCD

    International Nuclear Information System (INIS)

    Lage, Michael; Meissner, Ulf-G.; Rusetsky, Akaki

    2009-01-01

    We propose a method to determine the isoscalar K-bar N scattering length on the lattice. Our method represents the generalization of Luescher's approach in the presence of inelastic channels (complex scattering length). In addition, the proposed approach allows one to find the position of the S-matrix pole corresponding to the Λ(1405) resonance.

  8. Nowcasting Surface Meteorological Parameters Using Successive Correction Method

    National Research Council Canada - National Science Library

    Henmi, Teizi

    2002-01-01

    The successive correction method was examined and evaluated statistically as a nowcasting method for surface meteorological parameters including temperature, dew point temperature, and horizontal wind vector components...

  9. Inelastic neutron scattering, Raman, vibrational analysis with anharmonic corrections, and scaled quantum mechanical force field for polycrystalline L-alanine

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Robert W. [Department of Biomedical Informatics, Uniformed Services University, 4301 Jones Bridge Road, Bethesda, MD 20815 (United States)], E-mail: bob@bob.usuhs.mil; Schluecker, Sebastian [Institute of Physical Chemistry, University of Wuerzburg, Wuerzburg (Germany); Hudson, Bruce S. [Department of Chemistry, Syracuse University, Syracuse, NY (United States)

    2008-01-22

    A scaled quantum mechanical harmonic force field (SQMFF) corrected for anharmonicity is obtained for the 23 K L-alanine crystal structure using van der Waals corrected periodic boundary condition density functional theory (DFT) calculations with the PBE functional. Scale factors are obtained with comparisons to inelastic neutron scattering (INS), Raman, and FT-IR spectra of polycrystalline L-alanine at 15-23 K. Calculated frequencies for all 153 normal modes differ from observed frequencies with a standard deviation of 6 wavenumbers. Non-bonded external k = 0 lattice modes are included, but assignments to these modes are presently ambiguous. The extension of SQMFF methodology to lattice modes is new, as are the procedures used here for providing corrections for anharmonicity and van der Waals interactions in DFT calculations on crystals. First principles Born-Oppenheimer molecular dynamics (BOMD) calculations are performed on the L-alanine crystal structure at a series of classical temperatures ranging from 23 K to 600 K. Corrections for zero-point energy (ZPE) are estimated by finding the classical temperature that reproduces the mean square displacements (MSDs) measured from the diffraction data at 23 K. External k = 0 lattice motions are weakly coupled to bonded internal modes.

  10. Inelastic neutron scattering, Raman, vibrational analysis with anharmonic corrections, and scaled quantum mechanical force field for polycrystalline L-alanine

    International Nuclear Information System (INIS)

    Williams, Robert W.; Schluecker, Sebastian; Hudson, Bruce S.

    2008-01-01

    A scaled quantum mechanical harmonic force field (SQMFF) corrected for anharmonicity is obtained for the 23 K L-alanine crystal structure using van der Waals corrected periodic boundary condition density functional theory (DFT) calculations with the PBE functional. Scale factors are obtained with comparisons to inelastic neutron scattering (INS), Raman, and FT-IR spectra of polycrystalline L-alanine at 15-23 K. Calculated frequencies for all 153 normal modes differ from observed frequencies with a standard deviation of 6 wavenumbers. Non-bonded external k = 0 lattice modes are included, but assignments to these modes are presently ambiguous. The extension of SQMFF methodology to lattice modes is new, as are the procedures used here for providing corrections for anharmonicity and van der Waals interactions in DFT calculations on crystals. First principles Born-Oppenheimer molecular dynamics (BOMD) calculations are performed on the L-alanine crystal structure at a series of classical temperatures ranging from 23 K to 600 K. Corrections for zero-point energy (ZPE) are estimated by finding the classical temperature that reproduces the mean square displacements (MSDs) measured from the diffraction data at 23 K. External k = 0 lattice motions are weakly coupled to bonded internal modes

  11. Heavy ion elastic scatterings

    International Nuclear Information System (INIS)

    Mermaz, M.C.

    1984-01-01

    Diffraction and refraction play an important role in particle elastic scattering. The optical model treats both phenomena correctly and simultaneously, but without disentangling them. Semi-classical discussions in terms of trajectories emphasize the refractive aspect due to the real part of the optical potential. The separation, due to R.C. Fuller, of the quantal cross section into two components coming from opposite sides of the target nucleus allows a better understanding of the refractive phenomenon and of the origin of the oscillations observed in elastic scattering angular distributions. We shall see that the real part of the potential is responsible for a Coulomb and a nuclear rainbow, which allows a better determination of the nuclear potential in the interior region near the nuclear surface, since volume absorption eliminates any effect of the real part of the potential on the internal partial scattering waves. Resonance phenomena seen in heavy ion scattering will be discussed in terms of the optical model potential and Regge pole analysis. Compound nucleus resonances or quasi-molecular states may indeed be the more correct and fundamental alternative.

  12. Comparison of the auxiliary function method and the discrete-ordinate method for solving the radiative transfer equation for light scattering.

    Science.gov (United States)

    da Silva, Anabela; Elias, Mady; Andraud, Christine; Lafait, Jacques

    2003-12-01

    Two methods for solving the radiative transfer equation are compared with the aim of computing the angular distribution of the light scattered by a heterogeneous scattering medium composed of a single flat layer or a multilayer. The first method [auxiliary function method (AFM)], recently developed, uses an auxiliary function and leads to an exact solution; the second [discrete-ordinate method (DOM)] is based on the channel concept and needs an angular discretization. The comparison is applied to two different media presenting two typical and extreme scattering behaviors: Rayleigh and Mie scattering with smooth or very anisotropic phase functions, respectively. A very good agreement between the predictions of the two methods is observed in both cases. The larger the number of channels used in the DOM, the better the agreement. The principal advantages and limitations of each method are also listed.

  13. Another method of dead time correction

    International Nuclear Information System (INIS)

    Sabol, J.

    1988-01-01

    A new method is presented for the correction of counting losses caused by a non-extended (non-paralyzable) dead time of pulse detection systems. The approach is based on the distribution of time intervals between pulses at the output of the system. The method was verified both experimentally and by Monte Carlo simulations. The results show that the suggested technique is more reliable and accurate than other methods based on a separate measurement of the dead time. (author) 5 refs
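
    The interval-distribution technique itself is not spelled out in the abstract; for reference, the conventional correction that such methods are compared against, the non-paralyzable (non-extended) dead-time formula m = n/(1 - nτ), can be sketched as follows (illustrative values only).

      def nonparalyzable_true_rate(measured_rate, dead_time):
          """Conventional correction for a non-extended (non-paralyzable) dead time:
          recover the true rate m from the measured rate n via m = n / (1 - n*tau).
          This is the standard textbook formula, not the interval-distribution
          method proposed in the abstract."""
          loss = measured_rate * dead_time
          if loss >= 1.0:
              raise ValueError("measured rate inconsistent with the stated dead time")
          return measured_rate / (1.0 - loss)

      # Example: 9.5e4 counts/s measured with a 1 microsecond dead time
      print(f"{nonparalyzable_true_rate(9.5e4, 1e-6):.0f} counts/s")   # about 1.05e5 counts/s true rate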

  14. Study on the interaction of palladium(II)-Linezolid chelate with eosin by resonance Rayleigh scattering, second order of scattering and frequency doubling scattering methods using Taguchi orthogonal array design

    Science.gov (United States)

    Thakkar, Disha; Gevriya, Bhavesh; Mashru, R. C.

    2014-03-01

    Linezolid reacts with palladium to form a 1:1 binary cationic chelate, which further reacts with eosin dye to form a 1:1 ternary ion-association complex at pH 4 (Walpole's acetate buffer) in the presence of methyl cellulose. As a result, not only are the absorption spectra changed, but the Resonance Rayleigh Scattering (RRS), Second-order Scattering (SOS) and Frequency Doubling Scattering (FDS) intensities are greatly enhanced. The analytical wavelengths (λex/λem) of the ternary complex for RRS, SOS and FDS were located at 538 nm/538 nm, 240 nm/480 nm and 660 nm/330 nm, respectively. The linearity ranges for the RRS, SOS and FDS methods were 0.01-0.5 μg mL-1, 0.1-2 μg mL-1 and 0.2-1.8 μg mL-1, respectively. The sensitivity of the three methods followed the order RRS > SOS > FDS. The accuracy of all methods was determined by recovery studies, with recoveries between 98% and 102%. Intra-day and inter-day precision were checked for all methods, and the %RSD was found to be less than 2 in all cases. The effect of foreign substances was tested on the RRS method and showed that the method has good selectivity. For optimization of the process parameters, a Taguchi orthogonal array design L8(2^4) was used, and ANOVA was adopted to determine the statistically significant control factors that affect the scattering intensities. The reaction mechanism, the composition of the ternary ion-association complex and the reasons for the scattering intensity enhancement are discussed in this work.

  15. NADH-fluorescence scattering correction for absolute concentration determination in a liquid tissue phantom using a novel multispectral magnetic-resonance-imaging-compatible needle probe

    Science.gov (United States)

    Braun, Frank; Schalk, Robert; Heintz, Annabell; Feike, Patrick; Firmowski, Sebastian; Beuermann, Thomas; Methner, Frank-Jürgen; Kränzlin, Bettina; Gretz, Norbert; Rädle, Matthias

    2017-07-01

    In this report, a quantitative nicotinamide adenine dinucleotide hydrate (NADH) fluorescence measurement algorithm in a liquid tissue phantom using a fiber-optic needle probe is presented. To determine the absolute concentrations of NADH in this phantom, the fluorescence emission spectra at 465 nm were corrected using diffuse reflectance spectroscopy between 600 nm and 940 nm. The patented autoclavable Nitinol needle probe enables the acquisition of multispectral backscattering measurements of ultraviolet, visible, near-infrared and fluorescence spectra. As a phantom, a suspension of calcium carbonate (Calcilit) in water with physiological NADH concentrations between 0 mmol l-1 and 2.0 mmol l-1 was used to mimic human tissue. The light scattering characteristics were adjusted to match the backscattering attributes of human skin by modifying the concentration of Calcilit. To correct for the scattering effects caused by the sample matrices, an algorithm based on the backscattered remission spectrum was employed to compensate for the influence of multiple scattering on the optical path through the dispersed phase. The monitored backscattered visible light was used to correct the fluorescence spectra and thereby to determine the true NADH concentrations at unknown Calcilit concentrations. Despite the simplicity of the presented algorithm, the root-mean-square error of prediction (RMSEP) was 0.093 mmol l-1.

  16. A discontinuous galerkin time domain-boundary integral method for analyzing transient electromagnetic scattering

    KAUST Repository

    Li, Ping

    2014-07-01

    This paper presents an algorithm hybridizing the discontinuous Galerkin time domain (DGTD) method and a time domain boundary integral (BI) algorithm for 3-D open-region electromagnetic scattering analysis. The computational domain of DGTD is rigorously truncated by analytically evaluating the incoming numerical flux from the outside of the truncation boundary through the BI method, based on Huygens' principle. The advantages of the proposed method are that it allows the truncation boundary to be conformal to arbitrary (convex/concave) scattering objects, and that well-separated scatterers can be truncated by their local meshes without losing the physics (such as coupling/multiple scattering) of the problem, thus reducing the total number of mesh elements. Furthermore, low frequency waves can be efficiently absorbed, and the field outside the truncation domain can be conveniently calculated using the same BI formulation. Numerical examples are benchmarked to demonstrate the accuracy and versatility of the proposed method.

  17. Taking account of sample finite dimensions in processing measurements of double differential cross sections of slow neutron scattering

    International Nuclear Information System (INIS)

    Lisichkin, Yu.V.; Dovbenko, A.G.; Efimenko, B.A.; Novikov, A.G.; Smirenkina, L.D.; Tikhonova, S.I.

    1979-01-01

    A method is described for taking account of finite sample dimensions when processing measured double differential cross sections (DDCS) of slow neutron scattering. The need for a corrective approach to the effect of finite sample dimensions is shown, and in particular the need for preliminary processing of the DDCS that takes into account the attenuation coefficients of singly scattered neutrons (SSN) for measurements on the sample with its container and on the container alone. The correction for multiple scattering (MS), calculated on the basis of a dynamic model, should be obtained with resolution effects taken into account. To minimize the influence of the dynamic model used in the calculations, it is preferable to make absolute measurements of the DDCS and to use the subtraction method. The above method was implemented in a set of programs for the BESM-5 computer. The FISC program computes the SSN attenuation coefficients and the correction for MS. The DDS program computes a model DDCS averaged over the resolution function of the instrument. The SCATL program prepares the initial information needed by the FISC program and permits computation of the scattering law for all materials. Results are presented of applying this method to the processing of experimental data on the DDCS of water measured with the DIN-1M spectrometer.

  18. An assessment of the DORT method on simple scatterers using boundary element modelling.

    Science.gov (United States)

    Gélat, P; Ter Haar, G; Saffari, N

    2015-05-07

    The ability to focus through ribs overcomes an important limitation of a high-intensity focused ultrasound (HIFU) system for the treatment of liver tumours. Whilst it is important to generate high enough acoustic pressures at the treatment location for tissue lesioning, it is also paramount to ensure that the resulting ultrasonic dose on the ribs remains below a specified threshold, since ribs both strongly absorb and reflect ultrasound. The DORT (décomposition de l'opérateur de retournement temporel) method has the ability to focus on and through scatterers immersed in an acoustic medium selectively without requiring prior knowledge of their location or geometry. The method requires a multi-element transducer and is implemented via a singular value decomposition of the measured matrix of inter-element transfer functions. The efficacy of a method of focusing through scatterers is often assessed by comparing the specific absorption rate (SAR) at the surface of the scatterer, and at the focal region. The SAR can be obtained from a knowledge of the acoustic pressure magnitude and the acoustic properties of the medium and scatterer. It is well known that measuring acoustic pressures with a calibrated hydrophone at or near a hard surface presents experimental challenges, potentially resulting in increased measurement uncertainties. Hence, the DORT method is usually assessed experimentally by measuring the SAR at locations on the surface of the scatterer after the latter has been removed from the acoustic medium. This is also likely to generate uncertainties in the acoustic pressure measurement. There is therefore a strong case for assessing the efficacy of the DORT method through a validated theoretical model. The boundary element method (BEM) applied to exterior acoustic scattering problems is well-suited for such an assessment. In this study, BEM was used to implement the DORT method theoretically on locally reacting spherical scatterers, and to assess its focusing

  19. Halo-independent methods for inelastic dark matter scattering

    International Nuclear Information System (INIS)

    Bozorgnia, Nassim; Schwetz, Thomas; Herrero-Garcia, Juan; Zupan, Jure

    2013-01-01

    We present halo-independent methods to analyze the results of dark matter direct detection experiments assuming inelastic scattering. We focus on the annual modulation signal reported by DAMA/LIBRA and present three different halo-independent tests. First, we compare it to the upper limit on the unmodulated rate from XENON100 using (a) the trivial requirement that the amplitude of the annual modulation has to be smaller than the bound on the unmodulated rate, and (b) a bound on the annual modulation amplitude based on an expansion in the Earth's velocity. The third test uses the special predictions of the signal shape for inelastic scattering and allows for an internal consistency check of the data without referring to any astrophysics. We conclude that a strong conflict between DAMA/LIBRA and XENON100 in the framework of spin-independent inelastic scattering can be established independently of the local properties of the dark matter halo

  20. Computational method for an axisymmetric laser beam scattered by a body of revolution

    International Nuclear Information System (INIS)

    Combis, P.; Robiche, J.

    2005-01-01

    An original hybrid computational method to solve the 2-D problem of the scattering of an axisymmetric laser beam by an arbitrarily shaped inhomogeneous body of revolution is presented. This method relies on a domain decomposition of the scattering zone into concentric, spherical, radially homogeneous sub-domains and on an expansion of the angular dependence of the fields on the Legendre polynomials. Numerical results for the fields obtained for various scatterer geometries are presented and analyzed. (authors)

  1. Spin and orbital magnetisation densities determined by Compton scattering of photons

    International Nuclear Information System (INIS)

    Collins, S.P.; Laundy, D.; Cooper, M.J.; Lovesey, S.W.; Uppsala Univ.

    1990-03-01

    Compton scattering of a circularly polarized photon beam is shown to provide direct information on orbital and spin magnetisation densities. Experiments are reported which demonstrate the feasibility of the method by correctly predicting the ratio of spin and orbital magnetisation components in iron and cobalt. A partially polarised beam of 45 keV photons from the Daresbury Synchrotron Radiation Source produces charge-magnetic interference scattering which is measured by a field-difference method. Theory shows that the interference cross section contains the Compton profile of polarised electrons modulated by a structure factor which is a weighted sum of spin and orbital magnetisations. In particular, the scattering geometry for which the structure factor vanishes yields a unique value for the ratio of the magnetisation densities. Compton scattering, being an incoherent process, provides data on total unit cell magnetisations which can be directly compared with bulk data. In this respect, Compton scattering complements magnetic neutron and photon Bragg diffraction. (author)

  2. A hybrid time-domain discontinuous galerkin-boundary integral method for electromagnetic scattering analysis

    KAUST Repository

    Li, Ping

    2014-05-01

    A scheme hybridizing discontinuous Galerkin time-domain (DGTD) and time-domain boundary integral (TDBI) methods for accurately analyzing transient electromagnetic scattering is proposed. The radiation condition is enforced using the numerical flux on the truncation boundary. The fields required by the flux are computed using the TDBI from equivalent currents introduced on a Huygens' surface enclosing the scatterer. The hybrid DGTD-BI ensures that the radiation condition is mathematically exact and that the resulting computation domain is as small as possible, since the truncation boundary conforms to the scatterer's shape and is located very close to its surface. Locally truncated domains can also be defined around each disconnected scatterer, additionally reducing the size of the overall computation domain. Numerical examples demonstrating the accuracy and versatility of the proposed method are presented. © 2014 IEEE.

  3. CT energy weighting in the presence of scatter and limited energy resolution

    International Nuclear Information System (INIS)

    Schmidt, Taly Gilat

    2010-01-01

    . Scatter reduced the CNR for all energy-weighting methods; however, the effect was greater for optimal energy weighting. For example, optimal energy weighting improved the CNR of iodine and water compared to energy-integrating weighting by a factor of ∼1.45 in the absence of scatter and by a factor of ∼1.1 in the presence of scatter (8.9 deg. cone angle, SPR 0.5). Without scatter correction, the difference in CNR resulting from photon-counting and optimal energy weighting was negligible ( 0.3). Optimal weights combined with deterministic scatter correction provided a 1.3 and 1.1 improvement in CNR compared to energy-integrating and photon-counting weighting, respectively, for the 8.9 deg. cone angle simulation. In the absence of spectrum tailing, image-based weighting demonstrated reduced cupping artifact compared to projection-based weighting; however, both weighting methods exhibited similar cupping artifacts when spectrum tailing was simulated. There were no statistically significant differences in the CNR resulting from projection and image-based weighting for any of the simulated conditions. Conclusions: Optimal linear energy weighting introduces artifacts and CT number inaccuracies due to spectrum tailing. While optimal energy weighting has the potential to improve CNR compared to conventional weighting methods, the benefits are reduced as scatter increases. Efficient methods for reducing scatter and correcting spectrum tailing effects are required to obtain the highest benefit from optimal energy weighting.

  4. Depth distribution of multiple order X-ray scatter

    International Nuclear Information System (INIS)

    Yao Weiguang; Leszczynski, Konrad

    2008-01-01

    Scatter can significantly affect quality of projectional X-ray radiographs and tomographic reconstructions. With this in mind, we examined some of the physical properties of multiple orders of scatter of X-ray photons traversing through a layer of scattering media such as water. Using Monte Carlo techniques, we investigated depth distributions of interactions between incident X-ray photons and water before the resulting scattered photons reach the detector plane. Effects of factors such as radiation field size, air gap, thickness of the layer of scattering medium and X-ray energy, on the scatter were included in the scope of this study. The following scatter characteristics were observed: (1) for a layer of scattering material corresponding to the typical subject thickness in medical imaging, frequency distribution of locations of the last scattering interaction increases approximately exponentially with depth, and the higher the order of scatter or the energy of the incident photon, the narrower is the distribution; (2) for the second order scatter, the distribution of locations of the first interaction is more uniform than that of the last interaction and is dependent on the energy of the primary photons. Theoretical proofs for some of these properties are given. These properties are important to better understanding of effects of scatter on the radiographic and tomographic imaging process and to developing effective methods for scatter correction

  5. A novel sampling method for multiple multiscale targets from scattering amplitudes at a fixed frequency

    Science.gov (United States)

    Liu, Xiaodong

    2017-08-01

    A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, so the novel sampling method is very easy and simple to implement. With the help of the factorization of the far field operator, we establish an inf-criterion for the characterization of the underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers, while for sampling points outside the scatterers we show that the indicator functional decays like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functional depends continuously on the scattering amplitude, which further implies that the novel sampling method is extremely stable with respect to errors in the data. Different from the classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view the novel indicator takes its maximum near the boundary of the underlying target and decays like the Bessel functions as the sampling points move away from the boundary. The numerical simulations also show that the proposed sampling method can deal with the multiple multiscale case, even when the different components are close to each other.
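
    The paper's inf-criterion indicator is not reproduced here; as an illustration of the general "SVD of the far-field matrix plus matrix multiplication" character of such sampling methods, the classical factorization-method (Picard-series) indicator can be sketched as below. The far-field matrix F, the wavenumber k and the unit measurement directions are assumed to be supplied by the experiment or simulation; all names are illustrative.

      import numpy as np

      def picard_indicator(F, directions, grid_points, k):
          """Classical factorization-method indicator built from the far-field
          matrix F (n_directions x n_directions).  Large values flag sampling
          points inside the scatterer.  This is the standard Picard-series test,
          not the novel indicator proposed in the paper."""
          def abs_herm(A):
              # |A| = V |w| V^H for a Hermitian matrix A
              w, V = np.linalg.eigh(0.5 * (A + A.conj().T))
              return (V * np.abs(w)) @ V.conj().T
          # F# = |Re F| + |Im F|, the positive operator used by the factorization method
          F_sharp = abs_herm(0.5 * (F + F.conj().T)) + abs_herm(0.5j * (F.conj().T - F))
          w, V = np.linalg.eigh(F_sharp)
          w = np.maximum(w, 1e-14)                       # guard against round-off
          indicators = []
          for z in grid_points:
              phi_z = np.exp(-1j * k * directions @ z)   # far field of a point source at z
              coef = V.conj().T @ phi_z
              indicators.append(1.0 / np.sum(np.abs(coef) ** 2 / w))
          return np.array(indicators)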

  6. Extending 3D near-cloud corrections from shorter to longer wavelengths

    International Nuclear Information System (INIS)

    Marshak, Alexander; Evans, K. Frank; Várnai, Tamás; Wen, Guoyong

    2014-01-01

    Satellite observations have shown a positive correlation between cloud amount and aerosol optical thickness (AOT) that can be explained by the humidification of aerosols near clouds, and/or by cloud contamination by sub-pixel size clouds and the cloud adjacency effect. The last effect may substantially increase reflected radiation in cloud-free columns, leading to overestimates in the retrieved AOT. For clear-sky areas near boundary layer clouds the main contribution to the enhancement of clear sky reflectance at shorter wavelengths comes from the radiation scattered into clear areas by clouds and then scattered to the sensor by air molecules. Because of the wavelength dependence of air molecule scattering, this process leads to a larger reflectance increase at shorter wavelengths, and can be corrected using a simple two-layer model [18]. However, correcting only for molecular scattering skews spectral properties of the retrieved AOT. Kassianov and Ovtchinnikov [9] proposed a technique that uses spectral reflectance ratios to retrieve AOT in the vicinity of clouds; they assumed that the cloud adjacency effect influences the spectral ratio between reflectances at two wavelengths less than it influences the reflectances themselves. This paper combines the two approaches: It assumes that the 3D correction for the shortest wavelength is known with some uncertainties, and then it estimates the 3D correction for longer wavelengths using a modified ratio method. The new approach is tested with 3D radiances simulated for 26 cumulus fields from Large-Eddy Simulations, supplemented with 40 aerosol profiles. The results showed that (i) for a variety of cumulus cloud scenes and aerosol profiles over ocean the 3D correction due to cloud adjacency effect can be extended from shorter to longer wavelengths and (ii) the 3D corrections for longer wavelengths are not very sensitive to unbiased random uncertainties in the 3D corrections at shorter wavelengths.

  7. Extraction of chemical information of suspensions using radiative transfer theory to remove multiple scattering effects: application to a model multicomponent system.

    Science.gov (United States)

    Steponavičius, Raimundas; Thennadil, Suresh N

    2011-03-15

    The effectiveness of a scatter correction approach based on decoupling absorption and scattering effects through the use of the radiative transfer theory to invert a suitable set of measurements is studied by considering a model multicomponent suspension. The method was used in conjunction with partial least-squares regression to build calibration models for estimating the concentration of two types of analytes: an absorbing (nonscattering) species and a particulate (absorbing and scattering) species. The performances of the models built by this approach were compared with those obtained by applying empirical scatter correction approaches to diffuse reflectance, diffuse transmittance, and collimated transmittance measurements. It was found that the method provided appreciable improvement in model performance for the prediction of both types of analytes. The study indicates that, as long as the bulk absorption spectra are accurately extracted, no further empirical preprocessing to remove light scattering effects is required.

  8. Error and corrections with scintigraphic measurement of gastric emptying of solid foods

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, J.H.; Van Deventer, G.; Graham, L.S.; Thomson, J.; Thomasson, D.

    1983-03-01

    Previous methods for correction of depth used geometric means of simultaneously obtained anterior and posterior counts. The present study compares this method with a new one that uses computations of depth based on peak-to-scatter (P:S) ratios. Six normal volunteers were fed a meal of beef stew, water, and chicken liver that had been labeled in vivo with both In-113m and Tc-99m. Gastric emptying was followed at short intervals with anterior counts of peak and scattered radiation for each nuclide, as well as posteriorly collected peak counts from the gastric ROI. Depth of the nuclides was estimated by the P:S method as well as the older method. Both gave similar results. Errors from septal penetration or scatter proved to be a significantly larger problem than errors from changes in depth.
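
    The geometric-mean depth correction used as the older method rests on a simple identity: with anterior counts proportional to A·exp(-μd) and posterior counts proportional to A·exp(-μ(T - d)), their geometric mean depends only on the total thickness T and not on the source depth d. A minimal sketch, with illustrative values of μ and T that are not taken from the paper:

      import numpy as np

      def geometric_mean_activity(anterior, posterior, mu, body_thickness):
          """Depth-independent activity estimate from conjugate views.
          Anterior counts ~ A*exp(-mu*d) and posterior counts ~ A*exp(-mu*(T-d)),
          so sqrt(anterior*posterior) ~ A*exp(-mu*T/2) no longer depends on the
          depth d.  mu (1/cm) and the body thickness T (cm) are assumed known;
          all values here are illustrative, not from the paper."""
          return np.sqrt(anterior * posterior) * np.exp(0.5 * mu * body_thickness)

      # The same activity at two different depths yields the same estimate
      A, mu, T = 1.0e5, 0.12, 25.0
      for depth in (5.0, 15.0):
          ant = A * np.exp(-mu * depth)
          post = A * np.exp(-mu * (T - depth))
          print(round(float(geometric_mean_activity(ant, post, mu, T))))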

  9. Low-energy scattering on the lattice

    International Nuclear Information System (INIS)

    Bour Bour, Shahin

    2014-01-01

    In this thesis we present precision benchmark calculations for two-component fermions in the unitarity limit using an ab initio method, namely the Hamiltonian lattice formalism. We calculate the ground state energy of four unpolarized particles (Fermi gas) in a periodic cube as a fraction of the ground state energy of the non-interacting system, for two independent representations of the lattice Hamiltonians. We obtain the values 0.211(2) and 0.210(2). These results are in full agreement with Euclidean lattice and fixed-node diffusion Monte Carlo calculations. We also give an expression for the energy corrections to the binding energy of a bound state in a moving frame. These corrections contain information about the mass and number of the constituents, are topological in origin, and will have broad applications to lattice calculations of nucleons, nuclei, hadronic molecules and cold atoms. As one of its applications, we use this expression to determine the low-energy parameters for fermion-dimer elastic scattering in the shallow binding limit. For our lattice calculations we use Luescher's finite volume method. From the lattice calculations we find κa_fd = 1.174(9) and κr_fd = -0.029(13), where κ denotes the binding momentum of the dimer and a_fd (r_fd) denotes the fermion-dimer scattering length (effective range). These results are confirmed by continuum calculations using the Skorniakov-Ter-Martirosian integral equation, which gives 1.17907(1) and -0.0383(3) for the scattering length and effective range, respectively.

  10. Rayleigh-wave scattering by shallow cracks using the indirect boundary element method

    International Nuclear Information System (INIS)

    Ávila-Carrera, R; Rodríguez-Castellanos, A; Ortiz-Alemán, C; Sánchez-Sesma, F J

    2009-01-01

    The scattering and diffraction of Rayleigh waves by shallow cracks using the indirect boundary element method (IBEM) are investigated. The detection of cracks is of interest because their presence may compromise structural elements, put technological devices at risk or represent economical potential in reservoir engineering. Shallow cracks may give rise to scattered body and surface waves. These waves are sensitive to the crack's geometry, size and orientation. Under certain conditions, amplitude spectra clearly show conspicuous resonances that are associated with trapped waves. Several applications based on the scattering of surface waves (e.g. Rayleigh and Stoneley waves), such as non-destructive testing or oil well exploration, have shown that the scattered fields may provide useful information to detect cracks and other heterogeneities. The subject is not new and several analytical and numerical techniques have been applied for the last 50 years to understand the basis of multiple scattering phenomena. In this work, we use the IBEM to calculate the scattered fields produced by single or multiple cracks near a free surface. This method is based upon an integral representation of the scattered displacement fields, which is derived from Somigliana's identity. Results are given in both frequency and time domains. The analyses of the displacement field using synthetic seismograms and snapshots reveal some important effects from various configurations of cracks. The study of these simple cases may provide an archetype to geoscientists and engineers to understand the fundamental aspects of multiple scattering and diffraction by cracks

  11. Evaluation of bias-correction methods for ensemble streamflow volume forecasts

    Directory of Open Access Journals (Sweden)

    T. Hashino

    2007-01-01

    Ensemble prediction systems are used operationally to make probabilistic streamflow forecasts for seasonal time scales. However, hydrological models used for ensemble streamflow prediction often have simulation biases that degrade forecast quality and limit the operational usefulness of the forecasts. This study evaluates three bias-correction methods for ensemble streamflow volume forecasts. All three adjust the ensemble traces using a transformation derived with simulated and observed flows from a historical simulation. The quality of probabilistic forecasts issued when using the three bias-correction methods is evaluated using a distributions-oriented verification approach. Comparisons are made of retrospective forecasts of monthly flow volumes for a north-central United States basin (Des Moines River, Iowa), issued sequentially for each month over a 48-year record. The results show that all three bias-correction methods significantly improve forecast quality by eliminating unconditional biases and enhancing the potential skill. Still, subtle differences in the attributes of the bias-corrected forecasts have important implications for their use in operational decision-making. Diagnostic verification distinguishes these attributes in a context meaningful for decision-making, providing criteria to choose among bias-correction methods with comparable skill.

  12. Fast estimation of first-order scattering in a medical x-ray computed tomography scanner using a ray-tracing technique.

    Science.gov (United States)

    Liu, Xin

    2014-01-01

    This study describes a deterministic method for simulating the first-order scattering in a medical computed tomography scanner. The method was developed based on a physics model of x-ray photon interactions with matter and a ray tracing technique. The results from simulated scattering were compared to the ones from an actual scattering measurement. Two phantoms with homogeneous and heterogeneous material distributions were used in the scattering simulation and measurement. It was found that the simulated scatter profile was in agreement with the measurement result, with an average difference of 25% or less. Finally, tomographic images with artifacts caused by scatter were corrected based on the simulated scatter profiles. The image quality improved significantly.

  13. Three-loop mixed QCD-electroweak corrections to Higgs boson gluon fusion

    Science.gov (United States)

    Bonetti, Marco; Melnikov, Kirill; Tancredi, Lorenzo

    2018-02-01

    We compute the contribution of the three-loop mixed QCD-electroweak corrections of order α_S^2 α^2 to the gg → H scattering amplitude. We employ the method of differential equations to compute the relevant integrals and express them in terms of Goncharov polylogarithms.

  14. Post-PRK corneal scatter measurements with a scanning confocal slit photon counter

    Science.gov (United States)

    Taboada, John; Gaines, David; Perez, Mary A.; Waller, Steve G.; Ivan, Douglas J.; Baldwin, J. Bruce; LoRusso, Frank; Tutt, Ronald C.; Perez, Jose; Tredici, Thomas; Johnson, Dan A.

    2000-06-01

    Increased corneal light scatter or 'haze' has been associated with excimer laser photorefractive surgery of the cornea. The increased scatter can affect visual performance; however, topical steroid treatment after surgery substantially reduces the post-PRK scatter. For the treatment and monitoring of the scattering characteristics of the cornea, various methods have been developed to objectively measure the magnitude of the scatter. These methods can generally measure scatter associated with clinically observable levels of haze. For patients with moderate to low PRK corrections receiving steroid treatment, measurement becomes fairly difficult as the clinical haze rating is not observable. The goal of this development was to realize an objective, non-invasive physical measurement that could produce a significant reading at any level, including the background present in a normal cornea. As back-scatter is the only readily accessible observable, the instrument is based on this measurement. Achieving this end required the use of a confocal method to bias out the background light that would normally confound conventional methods. A number of subjects with nominal refractive errors in an Air Force study have undergone PRK surgery. A measurable increase in corneal scatter has been observed in these subjects, whereas clinical ratings of the haze were noted as level zero. Other favorable aspects of this back-scatter-based instrument include an optical capability to perform what is equivalent to an optical A-scan of the anterior chamber. Lens scatter can also be measured.

  15. Study of the multiple scattering effect in TEBENE using the Monte Carlo method

    International Nuclear Information System (INIS)

    Singkarat, Somsorn.

    1990-01-01

    The neutron time-of-flight and energy spectra, from the TEBENE set-up, have been calculated by a computer program using the Monte Carlo method. The neutron multiple scattering within the polyethylene scatterer ring is closely investigated. The results show that multiple scattering has a significant effect on the detected neutron yield. They also indicate that the thickness of the scatterer ring has to be carefully chosen. (author)

  16. The algebraic method of the scattering inverse problem solution under untraditional statements

    CERN Document Server

    Popushnoj, M N

    2001-01-01

    The algebraic method for solving the inverse scattering problem under non-traditional statements is presented consistently in this review, and within this framework several problems of the quantum theory of scattering of charged particles are subsequently investigated. The inverse problem of the scattering theory of charged particles on the complex plane of the Coulomb coupling constant (CCC) is considered. A procedure for restoring the interaction potential is established for the case when the energy, the square of the orbital momentum and the CCC are linearly dependent. The relation between one-parametric problems of the potential scattering of charged particles is investigated.

  17. Septal penetration correction in I-131 imaging following thyroid cancer treatment

    Science.gov (United States)

    Barrack, Fiona; Scuffham, James; McQuaid, Sarah

    2018-04-01

    Whole body gamma camera images acquired after I-131 treatment for thyroid cancer can suffer from collimator septal penetration artefacts because of the high energy of the gamma photons. This results in the appearance of ‘spoke’ artefacts, emanating from regions of high activity concentration, caused by the non-isotropic attenuation of the collimator. Deconvolution has the potential to reduce such artefacts, by taking into account the non-Gaussian point-spread-function (PSF) of the system. A Richardson–Lucy deconvolution algorithm, with and without prior scatter-correction was tested as a method of reducing septal penetration in planar gamma camera images. Phantom images (hot spheres within a warm background) were acquired and deconvolution using a measured PSF was applied. The results were evaluated through region-of-interest and line profile analysis to determine the success of artefact reduction and the optimal number of deconvolution iterations and damping parameter (λ). Without scatter-correction, the optimal results were obtained with 15 iterations and λ  =  0.01, with the counts in the spokes reduced to 20% of the original value, indicating a substantial decrease in their prominence. When a triple-energy-window scatter-correction was applied prior to deconvolution, the optimal results were obtained with six iterations and λ  =  0.02, which reduced the spoke counts to 3% of the original value. The prior application of scatter-correction therefore produced the best results, with a marked change in the appearance of the images. The optimal settings were then applied to six patient datasets, to demonstrate its utility in the clinical setting. In all datasets, spoke artefacts were substantially reduced after the application of scatter-correction and deconvolution, with the mean spoke count being reduced to 10% of the original value. This indicates that deconvolution is a promising technique for septal penetration artefact reduction that
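
    As an illustration of the basic Richardson-Lucy update referred to above (the damped variant and the measured two-dimensional PSF are scanner-specific and not reproduced), a minimal one-dimensional sketch with a synthetic PSF follows; all numbers are illustrative.

      import numpy as np

      def richardson_lucy(measured, psf, n_iter=15):
          """Plain (undamped) Richardson-Lucy deconvolution in one dimension.
          measured: observed counts profile; psf: point-spread function.
          The damped variant used in the paper adds a damping parameter lambda;
          this sketch shows only the basic multiplicative update."""
          psf = psf / psf.sum()
          psf_flipped = psf[::-1]
          estimate = np.full_like(measured, measured.mean(), dtype=float)
          for _ in range(n_iter):
              blurred = np.convolve(estimate, psf, mode="same")
              ratio = measured / np.maximum(blurred, 1e-12)
              estimate *= np.convolve(ratio, psf_flipped, mode="same")
          return estimate

      # Synthetic example: a point source blurred by a PSF with a narrow core and long tails
      x = np.arange(-50, 51)
      psf = np.exp(-0.5 * (x / 3.0) ** 2) + 0.02 * np.exp(-np.abs(x) / 20.0)
      truth = np.zeros(101)
      truth[50] = 1000.0
      measured = np.convolve(truth, psf / psf.sum(), mode="same")
      restored = richardson_lucy(measured, psf)
      print(round(float(measured[50])), round(float(restored[50])))  # the restored peak is substantially higher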

  18. A simple method for solving the inverse scattering problem

    International Nuclear Information System (INIS)

    Melnikov, V.N.; Rudyak, B.V.; Zakhariev, V.N.

    1977-01-01

    A new method is proposed for approximate reconstruction of a potential as a step function from scattering data using the completeness relation of solutions of the Schroedinger equation. The suggested method allows one to take into account exactly the additional centrifugal barrier for partial waves with angular momentum l>0, and also the Coulomb potential. The method admits different generalizations. Numerical calculations for checking the method have been performed

  19. A method for determination mass absorption coefficient of gamma rays by Compton scattering

    International Nuclear Information System (INIS)

    El Abd, A.

    2014-01-01

    A method was proposed for determining the mass absorption coefficient of gamma rays for compounds, alloys and mixtures. It is based on simulating the interaction processes of gamma rays with target elements having atomic numbers from Z=1 to Z=92 using the MCSHAPE software. Intensities of Compton-scattered gamma rays at saturation thicknesses and at a scattering angle of 90° were calculated for incident gamma rays of different energies. The obtained results showed that the intensity of Compton-scattered gamma rays at saturation and the mass absorption coefficients can be described by mathematical formulas. These were used to determine mass absorption coefficients for compounds, alloys and mixtures from knowledge of their Compton-scattered intensities. The method was tested by calculating mass absorption coefficients for some compounds, alloys and mixtures. There is good agreement between the obtained results and those calculated using the WinXCom software. The advantages and limitations of the method were discussed. - Highlights: • Compton scattering of γ-rays was used for determining the mass absorption coefficient. • Scattered intensities were determined by the MCSHAPE software. • Mass absorption coefficients were determined for some compounds, mixtures and alloys. • Mass absorption coefficients were calculated with the WinXCom software. • Good agreement was found between determined and calculated results
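    The benchmark values for such comparisons are conventionally obtained from the mixture rule, (μ/ρ)_mixture = Σ_i w_i (μ/ρ)_i, where w_i are the elemental weight fractions. The short Python sketch below illustrates only this standard rule; the element symbols, weight fractions and coefficient values are rough illustrative assumptions, not data from the record.

    # Standard mixture rule used as the benchmark (sketch; values are illustrative only).
    def mass_attenuation_mixture(weight_fractions, elemental_mu_rho):
        """Both arguments are dicts keyed by element symbol; mu/rho in cm^2/g."""
        assert abs(sum(weight_fractions.values()) - 1.0) < 1e-6, "weight fractions must sum to 1"
        return sum(w * elemental_mu_rho[el] for el, w in weight_fractions.items())

    # Approximate coefficients near 662 keV, for demonstration only:
    mu_rho = {"H": 0.154, "O": 0.078}
    weights = {"H": 0.112, "O": 0.888}                 # water by weight
    print(mass_attenuation_mixture(weights, mu_rho))   # ~0.086 cm^2/g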

  20. Hybrid simulation of scatter intensity in industrial cone-beam computed tomography

    International Nuclear Information System (INIS)

    Thierry, R.; Miceli, A.; Hofmann, J.; Flisch, A.; Sennhauser, U.

    2009-01-01

    A cone-beam computed tomography (CT) system using a 450 kV X-ray tube has been developed to address the three-dimensional imaging of automotive parts within short acquisition times. Because the probability of detecting scattered photons is high for this energy range and detection area, a scattering correction becomes mandatory for generating reliable images with enhanced contrast detectability. In this paper, we present a hybrid simulator for the fast and accurate calculation of the scattering intensity distribution. The full acquisition chain, from the generation of a polyenergetic photon beam, through its interaction with the scanned object, to the energy deposit in the detector, is simulated. Object phantoms can be spatially described in the form of voxels, mathematical primitives or CAD models. Uncollided radiation is treated with a ray-tracing method and scattered radiation is split into single and multiple scattering. The single scattering is calculated with a deterministic approach accelerated with a forced detection method. The residual noisy signal is subsequently deconvoluted with the iterative Richardson-Lucy method. Finally the multiple scattering is addressed with a coarse Monte Carlo (MC) simulation. The proposed hybrid method has been validated on aluminium phantoms with varying size and object-to-detector distance, and found to be in good agreement with the MC code Geant4. The acceleration achieved by the hybrid method over the standard MC on a single projection is approximately three orders of magnitude.

  1. Determining Complex Structures using Docking Method with Single Particle Scattering Data

    Directory of Open Access Journals (Sweden)

    Haiguang Liu

    2017-04-01

    Full Text Available Protein complexes are critical for many molecular functions. Due to the intrinsic flexibility and dynamics of complexes, their structures are more difficult to determine using conventional experimental methods, in contrast to individual subunits. One of the major challenges is the crystallization of protein complexes. Using X-ray free electron lasers (XFELs, it is possible to collect scattering signals from non-crystalline protein complexes, but data interpretation is more difficult because of unknown orientations. Here, we propose a hybrid approach to determine protein complex structures by combining XFEL single particle scattering data with computational docking methods. Using simulated data, we demonstrate that a small set of single particle scattering data collected at random orientations can be used to distinguish the native complex structure from the decoys generated using docking algorithms. The results also indicate that a small set of single particle scattering data is superior to a spherically averaged intensity profile in distinguishing complex structures. Given that XFEL experimental data are difficult to acquire and of low abundance, this hybrid approach should find wide applications in data interpretation.

  2. Scatter correction, intermediate view estimation and dose characterization in megavoltage cone-beam CT imaging

    Science.gov (United States)

    Sramek, Benjamin Koerner

    The ability to deliver conformal dose distributions in radiation therapy through intensity modulation and the potential for tumor dose escalation to improve treatment outcome has necessitated an increase in localization accuracy of inter- and intra-fractional patient geometry. Megavoltage cone-beam CT imaging using the treatment beam and onboard electronic portal imaging device is one option currently being studied for implementation in image-guided radiation therapy. However, routine clinical use is predicated upon continued improvements in image quality and patient dose delivered during acquisition. The formal statement of hypothesis for this investigation was that the conformity of planned to delivered dose distributions in image-guided radiation therapy could be further enhanced through the application of kilovoltage scatter correction and intermediate view estimation techniques to megavoltage cone-beam CT imaging, and that normalized dose measurements could be acquired and inter-compared between multiple imaging geometries. The specific aims of this investigation were to: (1) incorporate the Feldkamp, Davis and Kress filtered backprojection algorithm into a program to reconstruct a voxelized linear attenuation coefficient dataset from a set of acquired megavoltage cone-beam CT projections, (2) characterize the effects on megavoltage cone-beam CT image quality resulting from the application of Intermediate View Interpolation and Intermediate View Reprojection techniques to limited-projection datasets, (3) incorporate the Scatter and Primary Estimation from Collimator Shadows (SPECS) algorithm into megavoltage cone-beam CT image reconstruction and determine the set of SPECS parameters which maximize image quality and quantitative accuracy, and (4) evaluate the normalized axial dose distributions received during megavoltage cone-beam CT image acquisition using radiochromic film and thermoluminescent dosimeter measurements in anthropomorphic pelvic and head and
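    The Feldkamp, Davis and Kress algorithm referenced above is the cone-beam extension of filtered backprojection. The sketch below shows only the simpler 2-D parallel-beam version of the filter-then-backproject structure, to make the reconstruction step concrete; the FDK cone-beam weighting and the SPECS scatter estimation are not reproduced, and all array names are assumptions.

    # Minimal 2-D parallel-beam filtered backprojection (sketch; not the FDK cone-beam code).
    import numpy as np

    def fbp_parallel(sinogram, angles_deg):
        """sinogram: (n_angles, n_det) array of line integrals; returns an n_det x n_det image."""
        n_angles, n_det = sinogram.shape
        ramp = np.abs(np.fft.fftfreq(n_det))                       # |f| ramp filter
        filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
        xs = np.arange(n_det) - n_det / 2
        X, Y = np.meshgrid(xs, xs)
        recon = np.zeros((n_det, n_det))
        for row, ang in zip(filtered, np.deg2rad(angles_deg)):
            t = X * np.cos(ang) + Y * np.sin(ang)                  # detector coordinate per pixel
            idx = np.clip(np.round(t + n_det / 2).astype(int), 0, n_det - 1)
            recon += row[idx]                                      # nearest-neighbour backprojection
        return recon * np.pi / (2 * n_angles)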

  3. An evaluation of diverse methods of obtaining effective Schroedinger interaction potentials for elastic scattering

    International Nuclear Information System (INIS)

    Amos, K.; Allen, L.J.; Steward, C.; Hodgson, P.E.; Sofianos, S.A.

    1995-01-01

    Direct solution of the Schroedinger equation and inversion methods of analysis of elastic scattering data are considered to evaluate the information that they can provide about the physical interaction between colliding nuclear particles. It was found that both optical model and inversion methods based upon inverse scattering theories are subject to ambiguities. Therefore, it is essential that elastic scattering data analyses are consistent with microscopic calculations of the potential. 25 refs

  4. An evaluation of diverse methods of obtaining effective Schroedinger interaction potentials for elastic scattering

    Energy Technology Data Exchange (ETDEWEB)

    Amos, K.; Allen, L.J.; Steward, C. [Melbourne Univ., Parkville, VIC (Australia). School of Physics; Hodgson, P.E. [Oxford Univ. (United Kingdom). Dept. of Physics; Sofianos, S.A. [University of South Africa (UNISA), Pretoria (South Africa). Dept. of Physics

    1995-10-01

    Direct solution of the Schroedinger equation and inversion methods of analysis of elastic scattering data are considered to evaluate the information that they can provide about the physical interaction between colliding nuclear particles. It was found that both optical model and inversion methods based upon inverse scattering theories are subject to ambiguities. Therefore, it is essential that elastic scattering data analyses are consistent with microscopic calculations of the potential. 25 refs.

  5. A New Method to Extract CSP Gather of Topography for Scattered Wave Imaging

    Directory of Open Access Journals (Sweden)

    Zhao Pan

    2017-01-01

    Full Text Available The seismic method is one of the major geophysical tools to study the structure of the earth. The extraction of the common scatter point (CSP) gather is a critical step in accomplishing seismic imaging with a scattered wave. Conventionally, the CSP gather is obtained with the assumption that the earth surface is horizontal. However, errors are introduced into the final imaging result if seismic traces obtained at a rugged surface are processed using the conventional method. Hence, we propose a method for extracting the CSP gather from seismic data collected on a rugged surface. The proposed method is validated by two numerical examples and is expected to reduce the effect of topography on scattered wave imaging.

  6. Evaluation of room-scattered neutrons at the JNC Tokai neutron reference field

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Tadayoshi; Tsujimura, Norio [Japan Nuclear Cycle Development Inst., Tokai, Ibaraki (Japan). Tokai Works; Oyanagi, Katsumi [Japan Radiation Engineering Co., Ltd., Hitachi, Ibaraki (Japan)

    2002-09-01

    Neutron reference fields for calibrating neutron-measuring devices in JNC Tokai Works are produced by using radionuclide neutron sources, ²⁴¹Am-Be and ²⁵²Cf sources. The reference field for calibration includes scattered neutrons from the material surrounding sources, wall, floor and ceiling of the irradiation room. It is, therefore, necessary to evaluate the scattered neutrons contribution and their energy spectra at reference points. Spectral measurements were performed with a set of Bonner multi-sphere spectrometers and the reference fields were characterized in terms of spectral composition and the fractions of room-scattered neutrons. In addition, two techniques stated in ISO 10647, the shadow-cone method and the polynomial fit method, for correcting the contributions from the room-scattered neutrons to the readings of neutron survey instruments were compared. It was found that the two methods gave an equivalent result within a deviation of 3.3% at source-to-detector distances from 50 cm to 500 cm. (author)

  7. Evaluation of room-scattered neutrons at the JNC Tokai neutron reference field

    International Nuclear Information System (INIS)

    Yoshida, Tadayoshi; Tsujimura, Norio

    2002-01-01

    Neutron reference fields for calibrating neutron-measuring devices in JNC Tokai Works are produced by using radionuclide neutron sources, ²⁴¹Am-Be and ²⁵²Cf sources. The reference field for calibration includes scattered neutrons from the material surrounding sources, wall, floor and ceiling of the irradiation room. It is, therefore, necessary to evaluate the scattered neutrons contribution and their energy spectra at reference points. Spectral measurements were performed with a set of Bonner multi-sphere spectrometers and the reference fields were characterized in terms of spectral composition and the fractions of room-scattered neutrons. In addition, two techniques stated in ISO 10647, the shadow-cone method and the polynomial fit method, for correcting the contributions from the room-scattered neutrons to the readings of neutron survey instruments were compared. It was found that the two methods gave an equivalent result within a deviation of 3.3% at source-to-detector distances from 50 cm to 500 cm. (author)
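    Of the two ISO 10647 techniques compared above, the shadow-cone method is the simpler to state: with a cone interposed between source and detector, the detector is assumed to see only room-scattered neutrons, so the direct calibration component is the difference between the cone-out and cone-in readings. The sketch below is a minimal illustration under that assumption; the readings are hypothetical, not values from the report.

    # Shadow-cone room-scatter subtraction (illustrative sketch, hypothetical readings).
    def direct_component(reading_total, reading_with_cone):
        """Cone-out and cone-in readings taken at the same source-to-detector distance."""
        return reading_total - reading_with_cone

    def scatter_fraction(reading_total, reading_with_cone):
        return reading_with_cone / reading_total

    m_total, m_cone = 100.0, 18.0        # arbitrary survey-meter readings at one distance
    print(direct_component(m_total, m_cone), scatter_fraction(m_total, m_cone))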

  8. Slot technique - an alternative method of scatter reduction in radiography

    International Nuclear Information System (INIS)

    Panzer, W.; Widenmann, L.

    1983-01-01

    The most common method of scatter reduction in radiography is the use of an antiscatter grid. Its disadvantages are the absorption of a certain percentage of primary radiation in the lead strips of the grid and the fact that, owing to the limited thickness of the lead strips, their scatter absorption is also limited. A possibility for avoiding this disadvantage is offered by the so-called slot technique, i.e., the successive exposure of the subject with a narrow fan beam provided by slots in rather thick lead plates. The results of a comparison between the grid and slot techniques regarding dose to the patient, scatter reduction, image quality and the effect of automatic exposure control are reported. (author)

  9. Exact scattering solutions in an energy sudden (ES) representation

    International Nuclear Information System (INIS)

    Chang, B.; Eno, L.; Rabitz, H.

    1983-01-01

    In this paper, we lay down the theoretical foundations for computing exact scattering wave functions in a reference frame which moves in unison with the system internal coordinates. In this frame the (internal) coordinates appear to be fixed and its adoption leads very naturally (in zeroth order) to the energy sudden (ES) approximation [and the related infinite order sudden (IOS) method]. For this reason we call the new representation for describing the exact dynamics of a many channel scattering problem, the ES representation. Exact scattering solutions are derived in both time dependent and time independent frameworks for the representation and many interesting results in these frames are established. It is shown, e.g., that in a time dependent frame the usual Schroedinger propagator factorizes into internal Hamiltonian, ES, and energy correcting propagators. We also show that in a time independent frame the full Green's functions can be similarly factorized. Another important feature of the new representation is that it forms a firm foundation for seeking corrections to the ES approximation. Thus, for example, the singularity which arises in conventional perturbative expansions of the full Green's functions (with the ES Green's function as the zeroth order solution) is avoided in the ES representation. Finally, a number of both time independent and time dependent ES correction schemes are suggested

  10. Local defect correction for boundary integral equation methods

    NARCIS (Netherlands)

    Kakuba, G.; Anthonissen, M.J.H.

    2013-01-01

    This paper presents a new approach to gridding for problems with localised regions of high activity. The technique of local defect correction has been studied for other methods such as finite difference methods and finite volume methods. In this paper we develop the technique for the boundary element

  11. Rearrangement and convergence improvement of the Born series in scattering theory on the basis of orthogonal projections

    International Nuclear Information System (INIS)

    Kukulin, V.I.; Pomerantsev, V.N.

    1976-01-01

    A method for rearranging the Born series in scattering theory is proposed which uses the recently proposed orthogonal projecting pseudopotentials (OPP). It is proved rigorously that the rearranged Born series will converge for all negative and small positive energy values even in the presence of bound states. A method for the correct introduction of scattering operators in orthogonal subspaces is presented. A comparison of the OPP method with the projection technique developed by Feshbach is given. Physical applications of the formulated method are discussed

  12. A novel technique for determining luminosity in electron-scattering/positron-scattering experiments from multi-interaction events

    Science.gov (United States)

    Schmidt, A.; O'Connor, C.; Bernauer, J. C.; Milner, R.

    2018-01-01

    The OLYMPUS experiment measured the cross-section ratio of positron-proton elastic scattering relative to electron-proton elastic scattering to look for evidence of hard two-photon exchange. To make this measurement, the experiment alternated between electron beam and positron beam running modes, with the relative integrated luminosities of the two running modes providing the crucial normalization. For this reason, OLYMPUS had several redundant luminosity monitoring systems, including a pair of electromagnetic calorimeters positioned downstream from the target to detect symmetric Møller and Bhabha scattering from atomic electrons in the hydrogen gas target. Though this system was designed to monitor the rate of events with single Møller/Bhabha interactions, we found that a more accurate determination of relative luminosity could be made by additionally considering the rate of events with both a Møller/Bhabha interaction and a concurrent elastic ep interaction. This method was improved by small corrections for the variance of the current within bunches in the storage ring and for the probability of three interactions occurring within a bunch. After accounting for systematic effects, we estimate that the method is accurate in determining the relative luminosity to within 0.36%. This precise technique can be employed in future electron-proton and positron-proton scattering experiments to monitor relative luminosity between different running modes.

  13. Variational, projection methods and Pade approximants in scattering theory

    International Nuclear Information System (INIS)

    Turchetti, G.

    1980-12-01

    Several aspects of scattering theory are discussed in a perturbative scheme. The Pade approximant method plays an important role in such a scheme. Soliton solutions are also discussed within the same scheme. (L.C.) [pt

  14. Benchmarking the inelastic neutron scattering soil carbon method

    Science.gov (United States)

    The herein described inelastic neutron scattering (INS) method of measuring soil carbon was based on a new procedure for extracting the net carbon signal (NCS) from the measured gamma spectra and determination of the average carbon weight percent (AvgCw%) in the upper soil layer (~8 cm). The NCS ext...

  15. Incoherent-scatter computed tomography with monochromatic synchrotron x ray: feasibility of multi-CT imaging system for simultaneous measurement-of fluorescent and incoherent scatter x rays

    Science.gov (United States)

    Yuasa, T.; Akiba, M.; Takeda, T.; Kazama, M.; Hoshino, A.; Watanabe, Y.; Hyodo, K.; Dilmanian, F. A.; Akatsuka, T.; Itai, Y.

    1997-10-01

    We describe a new system of incoherent scatter computed tomography (ISCT) using monochromatic synchrotron X rays, and we discuss its potential to be used in in vivo imaging for medical use. The system operates on the basis of computed tomography (CT) of the first generation. The reconstruction method for ISCT uses the least squares method with singular value decomposition. The research was carried out at the BLNE-5A bending magnet beam line of the Tristan Accumulation Ring in KEK, Japan. An acrylic cylindrical phantom of 20-mm diameter containing a cross-shaped channel was imaged. The channel was filled with a diluted iodine solution with a concentration of 200 μg I/ml. Spectra obtained with the system's high purity germanium (HPGe) detector separated the incoherent X-ray line from the other notable peaks, i.e., the iodine Kα and Kβ1 X-ray fluorescence lines and the coherent scattering peak. CT images were reconstructed from projections generated by integrating the counts in an energy window centred on the incoherent scattering peak and approximately 2 keV wide. The reconstruction routine employed an X-ray attenuation correction algorithm. The resulting image showed more homogeneity than one without the attenuation correction.
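    The record states only that the ISCT reconstruction uses least squares with singular value decomposition. The sketch below shows a generic truncated-SVD least-squares solver of that kind; the system matrix, data vector and truncation threshold are assumptions for illustration, not the authors' implementation.

    # Generic truncated-SVD least-squares reconstruction (sketch, not the authors' code).
    import numpy as np

    def svd_least_squares(A, b, rcond=1e-3):
        """Solve min ||A x - b||, truncating small singular values for stability."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        s_inv = np.where(s > rcond * s.max(), 1.0 / s, 0.0)    # drop ill-conditioned modes
        return Vt.T @ (s_inv * (U.T @ b))

    rng = np.random.default_rng(0)
    A = rng.random((60, 25))            # hypothetical projection matrix: 60 rays, 25 voxels
    x_true = rng.random(25)
    b = A @ x_true + rng.normal(0.0, 0.01, 60)
    x_reconstructed = svd_least_squares(A, b)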

  16. WE-AB-207A-09: Optimization of the Design of a Moving Blocker for Cone-Beam CT Scatter Correction: Experimental Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X; Ouyang, L; Jia, X; Zhang, Y; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States); Yan, H [Cyber Medical Corporation, Xi’an (China)

    2016-06-15

    Purpose: A moving-blocker-based strategy has shown promising results for scatter correction in cone-beam computed tomography (CBCT). Different geometry designs and moving speeds of the blocker affect its performance in image reconstruction accuracy. The goal of this work is to optimize the geometric design and moving speed of the moving blocker system through experimental evaluations. Methods: An Elekta Synergy XVI system and an anthropomorphic pelvis phantom CIRS 801-P were used for our experiment. A blocker consisting of lead strips was inserted between the x-ray source and the phantom, moving back and forth along the rotation axis, to measure the scatter signal. According to our Monte Carlo simulation results, three blockers were used, which have the same lead strip width of 3.2 mm and different gaps between neighboring lead strips of 3.2, 6.4 and 9.6 mm. For each blocker, three moving speeds were evaluated: 10, 20 and 30 pixels per projection (on the detector plane). The scatter signal in the unblocked region was estimated by cubic B-spline based interpolation from the blocked region. The CBCT image was reconstructed by a total variation (TV) based algebraic iterative reconstruction (ART) algorithm from the partially blocked projection data. Reconstruction accuracy in each condition is quantified as the CT number error of regions of interest (ROIs) by comparison with a CBCT image reconstructed from analytically simulated unblocked and scatter-free projection data. Results: The highest reconstruction accuracy is achieved when the blocker width is 3.2 mm, the gap between neighboring lead strips is 9.6 mm and the moving speed is 20 pixels per projection. The RMSE of the CT number of the ROIs can be reduced from 436 to 27. Conclusions: Image reconstruction accuracy is greatly affected by the geometry design of the blocker. The moving speed does not have a very strong effect on the reconstruction result if it is over 20 pixels per projection.
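    The interpolation step above can be made concrete with a short sketch: the signal detected under the lead strips is treated as pure scatter, and a spline through those samples estimates the scatter under the unblocked columns. A plain cubic spline is used here in place of the cubic B-spline of the record, and all numbers are hypothetical.

    # Scatter estimation by spline interpolation across blocked detector columns (sketch).
    import numpy as np
    from scipy.interpolate import CubicSpline

    def interpolate_scatter(blocked_columns, scatter_samples, n_columns):
        """Fit a cubic spline to scatter measured under the lead strips and evaluate everywhere."""
        spline = CubicSpline(blocked_columns, scatter_samples)
        return spline(np.arange(n_columns))

    blocked_columns = np.arange(0, 400, 40)                     # hypothetical strip positions
    scatter_samples = 50 + 5 * np.sin(blocked_columns / 80.0)   # made-up smooth scatter profile
    scatter_estimate = interpolate_scatter(blocked_columns, scatter_samples, 400)
    # primary_estimate = measured_row - scatter_estimate  (in the unblocked region)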

  17. A method for determination mass absorption coefficient of gamma rays by Compton scattering.

    Science.gov (United States)

    El Abd, A

    2014-12-01

    A method was proposed for determining the mass absorption coefficient of gamma rays for compounds, alloys and mixtures. It is based on simulating the interaction processes of gamma rays with target elements having atomic numbers from Z=1 to Z=92 using the MCSHAPE software. Intensities of Compton-scattered gamma rays at saturation thicknesses and at a scattering angle of 90° were calculated for incident gamma rays of different energies. The obtained results showed that the intensity of Compton-scattered gamma rays at saturation and the mass absorption coefficients can be described by mathematical formulas. These were used to determine mass absorption coefficients for compounds, alloys and mixtures from knowledge of their Compton-scattered intensities. The method was tested by calculating mass absorption coefficients for some compounds, alloys and mixtures. There is good agreement between the obtained results and those calculated using the WinXCom software. The advantages and limitations of the method were discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Light-like scattering in quantum gravity

    International Nuclear Information System (INIS)

    Bjerrum-Bohr, N.E.J.; Donoghue, John F.; Holstein, Barry R.; Planté, Ludovic; Vanhove, Pierre

    2016-01-01

    We consider scattering in quantum gravity and derive long-range classical and quantum contributions to the scattering of light-like bosons and fermions (spin-0, spin-(1/2), spin-1) from an external massive scalar field, such as the Sun or a black hole. This is achieved by treating general relativity as an effective field theory and identifying the non-analytic pieces of the one-loop gravitational scattering amplitude. It is emphasized throughout the paper how modern amplitude techniques, involving spinor-helicity variables, unitarity, and squaring relations in gravity enable much simplified computations. We directly verify, as predicted by general relativity, that all classical effects in our computation are universal (in the context of matter type and statistics). Using an eikonal procedure we confirm the post-Newtonian general relativity correction for light-like bending around large stellar objects. We also comment on treating effects from quantum ℏ dependent terms using the same eikonal method.

  19. Light-like scattering in quantum gravity

    Energy Technology Data Exchange (ETDEWEB)

    Bjerrum-Bohr, N.E.J. [Niels Bohr International Academy & Discovery Center, Niels Bohr Institute,University of Copenhagen, Blegdamsvej 17, Copenhagen Ø, DK-2100 (Denmark); Donoghue, John F. [Department of Physics-LGRT, University of Massachusetts,Amherst, MA, 01003 (United States); Holstein, Barry R. [Department of Physics-LGRT, University of Massachusetts,Amherst, MA, 01003 (United States); Kavli Institute for Theoretical Physics, University of California,Santa Barbara, CA, 93016 (United States); Planté, Ludovic; Vanhove, Pierre [CEA, DSM, Institut de Physique Théorique, IPhT, CNRS MPPU, URA2306,Saclay, Gif-sur-Yvette, F-91191 (France)

    2016-11-21

    We consider scattering in quantum gravity and derive long-range classical and quantum contributions to the scattering of light-like bosons and fermions (spin-0, spin-(1/2), spin-1) from an external massive scalar field, such as the Sun or a black hole. This is achieved by treating general relativity as an effective field theory and identifying the non-analytic pieces of the one-loop gravitational scattering amplitude. It is emphasized throughout the paper how modern amplitude techniques, involving spinor-helicity variables, unitarity, and squaring relations in gravity enable much simplified computations. We directly verify, as predicted by general relativity, that all classical effects in our computation are universal (in the context of matter type and statistics). Using an eikonal procedure we confirm the post-Newtonian general relativity correction for light-like bending around large stellar objects. We also comment on treating effects from quantum ℏ dependent terms using the same eikonal method.

  20. A New Online Calibration Method Based on Lord's Bias-Correction.

    Science.gov (United States)

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    The online calibration technique has been widely employed to calibrate new items due to its advantages. Method A is the simplest online calibration method and has recently attracted much attention from researchers. However, a key assumption of Method A is that it treats person-parameter estimates θ̂_s (obtained by maximum likelihood estimation [MLE]) as their true values θ_s; thus the deviation of the estimated θ̂_s from their true values might yield inaccurate item calibration when the deviation is non-ignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂_s, which may adversely affect the item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and MLE-LBCI-Method A did outperform Method A in almost all experimental conditions.

  1. Different partial volume correction methods lead to different conclusions

    DEFF Research Database (Denmark)

    Greve, Douglas N; Salat, David H; Bowen, Spencer L

    2016-01-01

    A cross-sectional group study of the effects of aging on brain metabolism as measured with (18)F-FDG-PET was performed using several different partial volume correction (PVC) methods: no correction (NoPVC), Meltzer (MZ), Müller-Gärtner (MG), and the symmetric geometric transfer matrix (SGTM) usin...

  2. Hybrid Monte Carlo-Diffusion Method For Light Propagation in Tissue With a Low-Scattering Region

    Science.gov (United States)

    Hayashi, Toshiyuki; Kashio, Yoshihiko; Okada, Eiji

    2003-06-01

    The heterogeneity of the tissues in a head, especially the low-scattering cerebrospinal fluid (CSF) layer surrounding the brain has previously been shown to strongly affect light propagation in the brain. The radiosity-diffusion method, in which the light propagation in the CSF layer is assumed to obey the radiosity theory, has been employed to predict the light propagation in head models. Although the CSF layer is assumed to be a nonscattering region in the radiosity-diffusion method, fine arachnoid trabeculae cause faint scattering in the CSF layer in real heads. A novel approach, the hybrid Monte Carlo-diffusion method, is proposed to calculate the head models, including the low-scattering region in which the light propagation does not obey neither the diffusion approximation nor the radiosity theory. The light propagation in the high-scattering region is calculated by means of the diffusion approximation solved by the finite-element method and that in the low-scattering region is predicted by the Monte Carlo method. The intensity and mean time of flight of the detected light for the head model with a low-scattering CSF layer calculated by the hybrid method agreed well with those by the Monte Carlo method, whereas the results calculated by means of the diffusion approximation included considerable error caused by the effect of the CSF layer. In the hybrid method, the time-consuming Monte Carlo calculation is employed only for the thin CSF layer, and hence, the computation time of the hybrid method is dramatically shorter than that of the Monte Carlo method.

  3. Beam-Based Error Identification and Correction Methods for Particle Accelerators

    CERN Document Server

    AUTHOR|(SzGeCERN)692826; Tomas, Rogelio; Nilsson, Thomas

    2014-06-10

    Modern particle accelerators have tight tolerances on the acceptable deviation from their desired machine parameters. The control of the parameters is of crucial importance for safe machine operation and performance. This thesis focuses on beam-based methods and algorithms to identify and correct errors in particle accelerators. The optics measurements and corrections of the Large Hadron Collider (LHC), which resulted in an unprecedented low β-beat for a hadron collider is described. The transverse coupling is another parameter which is of importance to control. Improvement in the reconstruction of the coupling from turn-by-turn data has resulted in a significant decrease of the measurement uncertainty. An automatic coupling correction method, which is based on the injected beam oscillations, has been successfully used in normal operation of the LHC. Furthermore, a new method to measure and correct chromatic coupling that was applied to the LHC, is described. It resulted in a decrease of the chromatic coupli...

  4. Research on 3-D terrain correction methods of airborne gamma-ray spectrometry survey

    International Nuclear Information System (INIS)

    Liu Yanyang; Liu Qingcheng; Zhang Zhiyong

    2008-01-01

    The general height-correction method is not effective in complex terrain when interpreting airborne gamma-ray spectrometry data, and the 2-D terrain correction method developed in recent years is only applicable to the measured section. A new method of 3-D sector terrain correction is studied. In this method the ground radiator is divided into many small sector radiators, the irradiation rate at a given survey distance is calculated, the sum over all small radiators is taken as the irradiation rate of the ground radiator at a given point of the aerial survey, and the correction coefficients of every point are calculated and then applied to correct the airborne gamma-ray spectrometry data. The method can achieve the forward calculation, inversion calculation and terrain correction for airborne gamma-ray spectrometry surveys in complex topography by dividing the ground radiator into many small sectors. Other factors are also considered, such as the unsaturated degree of the measured scope and uneven radiator content on the ground. The results of the forward model and an example analysis show that the 3-D terrain correction method is appropriate and effective. (authors)
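    A minimal sketch of the sector-summation idea follows, assuming an unscattered point-kernel response exp(-μr)/(4πr²) per ground patch and ignoring detector angular response and scatter build-up; the kernel, the air attenuation value and all variable names are illustrative assumptions, not the authors' formulation.

    # Sector-summation terrain correction coefficient (illustrative sketch only).
    import numpy as np

    MU_AIR = 0.009   # rough air attenuation coefficient in 1/m for ~0.7 MeV gammas (assumed)

    def patch_response(detector_xyz, patch_xyz, patch_area):
        """Unscattered-flux contribution of one small ground patch to the detector."""
        r = np.linalg.norm(np.asarray(detector_xyz) - np.asarray(patch_xyz))
        return patch_area * np.exp(-MU_AIR * r) / (4.0 * np.pi * r**2)

    def terrain_correction_coefficient(detector_xyz, terrain_patches, flat_patches):
        """Each patch list holds (centre_xyz, area) tuples for the real and the flat terrain."""
        resp_terrain = sum(patch_response(detector_xyz, p, a) for p, a in terrain_patches)
        resp_flat = sum(patch_response(detector_xyz, p, a) for p, a in flat_patches)
        return resp_flat / resp_terrain    # multiply the measured rate by this coefficient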

  5. Transmittance and scattering during wound healing after refractive surgery

    Science.gov (United States)

    Mar, Santiago; Martinez-Garcia, C.; Blanco, J. T.; Torres, R. M.; Gonzalez, V. R.; Najera, S.; Rodriguez, G.; Merayo, J. M.

    2004-10-01

    Photorefractive keratectomy (PRK) and laser in situ keratomileusis (LASIK) are techniques frequently performed to correct ametropia. The two methods have been compared with respect to their mode of healing, but there has been no comparison of transmittance and light scattering during this process. Scattering in corneal wound healing is due to three parameters: cellular size, cellular density, and the size of the scar. An increase in the scattering angular width implies a decrease in contrast sensitivity. During wound healing, keratocyte activation is induced and these cells differentiate into fibroblasts and myofibroblasts. Hens were operated on using the PRK and LASIK techniques. The animals used in this experiment were euthanized, and immediately afterwards their corneas were removed and placed carefully into a cornea camera support. All optical measurements were made with a scatterometer constructed in our laboratory. Scattering measurements are correlated with the transmittance: the smaller the transmittance, the larger the scattering. The aim of this work is to provide experimental data on corneal transparency and scattering, in order to supply data that allow a more complete model of corneal transparency to be generated.

  6. Development of new methods for studying nanostructures using neutron scattering

    International Nuclear Information System (INIS)

    Pynn, Roger

    2016-01-01

    The goal of this project was to develop improved instrumentation for studying the microscopic structures of materials using neutron scattering. Neutron scattering has a number of advantages for studying material structure but suffers from the well-known disadvantage that neutrons' ability to resolve structural details is usually limited by the strength of available neutron sources. We aimed to overcome this disadvantage using a new experimental technique, called Spin Echo Scattering Angle Encoding (SESAME) that makes use of the neutron's magnetism. Our goal was to show that this innovation will allow the country to make better use of the significant investment it has recently made in a new neutron source at Oak Ridge National Laboratory (ORNL) and will lead to increases in scientific knowledge that contribute to the Nation's technological infrastructure and ability to develop advanced materials and technologies. We were successful in demonstrating the technical effectiveness of the new method and established a baseline of knowledge that has allowed ORNL to start a project to implement the method on one of its neutron beam lines.

  7. Development of new methods for studying nanostructures using neutron scattering

    Energy Technology Data Exchange (ETDEWEB)

    Pynn, Roger [Indiana Univ., Bloomington, IN (United States)

    2016-03-18

    The goal of this project was to develop improved instrumentation for studying the microscopic structures of materials using neutron scattering. Neutron scattering has a number of advantages for studying material structure but suffers from the well-known disadvantage that neutrons’ ability to resolve structural details is usually limited by the strength of available neutron sources. We aimed to overcome this disadvantage using a new experimental technique, called Spin Echo Scattering Angle Encoding (SESAME) that makes use of the neutron’s magnetism. Our goal was to show that this innovation will allow the country to make better use of the significant investment it has recently made in a new neutron source at Oak Ridge National Laboratory (ORNL) and will lead to increases in scientific knowledge that contribute to the Nation’s technological infrastructure and ability to develop advanced materials and technologies. We were successful in demonstrating the technical effectiveness of the new method and established a baseline of knowledge that has allowed ORNL to start a project to implement the method on one of its neutron beam lines.

  8. Multiple scattering corrections to the Beer-Lambert law. 2: Detector with a variable field of view.

    Science.gov (United States)

    Zardecki, A; Tam, W G

    1982-07-01

    The multiple scattering corrections to the Beer-Lambert law in the case of a detector with a variable field of view are analyzed. We introduce transmission functions relating the received radiant power to reference power levels relevant to two different experimental situations. In the first case, the transmission function relates the received power to a reference power level appropriate to a nonattenuating medium. In the second case, the reference power level is established by bringing the receiver to the close-up position with respect to the source. To examine the effect of the variation of the detector field of view the behavior of the gain factor is studied. Numerical results modeling the laser beam propagation in fog, cloud, and rain are presented.
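    In the notation commonly used for such analyses (assumed here, not quoted from the paper), the transmission function generalizes the single-scattering Beer-Lambert law by a multiple-scattering gain factor that grows with optical depth and with the receiver field of view:

    \[
      T(\tau,\Omega) \;=\; \frac{P(\tau,\Omega)}{P_0} \;=\; e^{-\tau}\, G(\tau,\Omega),
      \qquad G(\tau,\Omega) \ge 1,
    \]

    where $\tau$ is the optical depth along the path, $\Omega$ the receiver field of view, $P_0$ the reference power level, and $G$ the gain factor whose behaviour is studied in the paper.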

  9. Methods for reduction of scattered x-ray in measuring MTF with the square chart

    International Nuclear Information System (INIS)

    Hatagawa, Masakatsu; Yoshida, Rie

    1982-01-01

    A square-wave chart has been used to measure the MTF of a screen-film system. The problem is that scattered X-rays from the chart may give rise to measurement errors. In this paper, the authors proposed two methods to reduce the scattered X-rays: the first is the use of a Pb mask and the second is to provide an air gap between the chart and the screen-film system. With these methods, the scattered X-rays from the chart were reduced. MTFs were measured by both of the new methods and by the conventional method; the MTF values of the new methods were in good agreement with each other, while that of the conventional method was not. It was concluded that these new methods are able to reduce errors in the measurement of MTF. (author)

  10. A hybrid time-domain discontinuous galerkin-boundary integral method for electromagnetic scattering analysis

    KAUST Repository

    Li, Ping; Shi, Yifei; Jiang, Lijun; Bagci, Hakan

    2014-01-01

    A scheme hybridizing discontinuous Galerkin time-domain (DGTD) and time-domain boundary integral (TDBI) methods for accurately analyzing transient electromagnetic scattering is proposed. Radiation condition is enforced using the numerical flux on the truncation boundary. The fields required by the flux are computed using the TDBI from equivalent currents introduced on a Huygens' surface enclosing the scatterer. The hybrid DGTDBI ensures that the radiation condition is mathematically exact and the resulting computation domain is as small as possible since the truncation boundary conforms to scatterer's shape and is located very close to its surface. Locally truncated domains can also be defined around each disconnected scatterer additionally reducing the size of the overall computation domain. Numerical examples demonstrating the accuracy and versatility of the proposed method are presented. © 2014 IEEE.

  11. Derivation of Batho's correction factor for heterogeneities

    International Nuclear Information System (INIS)

    Lulu, B.A.; Bjaerngard, B.E.

    1982-01-01

    Batho's correction factor for dose in a heterogeneous, layered medium is derived from the tissue--air ratio method (TARM). The reason why the Batho factor is superior to the TARM factor at low energy is ascribed to the fact that it accounts for the distribution of the scatter-generating matter along the centerline. The poor behavior of the Batho factor at high energies is explained as a consequence of the lack of electron equilibrium at appreciable depth below the surface. Key words: Batho factor, heterogeneity, inhomogeneity, tissue--air ratio method

  12. GPU accelerated manifold correction method for spinning compact binaries

    Science.gov (United States)

    Ran, Chong-xi; Liu, Song; Zhong, Shuang-ying

    2018-04-01

    The graphics processing unit (GPU) acceleration of the manifold correction algorithm based on the compute unified device architecture (CUDA) technology is designed to simulate the dynamic evolution of the Post-Newtonian (PN) Hamiltonian formulation of spinning compact binaries. The feasibility and efficiency of parallel computation on the GPU have been confirmed by various numerical experiments. The numerical comparisons show that the accuracy of the manifold correction method executed on the GPU agrees well with that of the codes executed on the central processing unit (CPU) alone. The acceleration achievable when the codes are implemented on the GPU can be increased enormously through the use of shared memory and register optimization techniques without additional hardware costs; the speedup is nearly 13 times that of the codes executed on the CPU for a phase-space scan (including 314 × 314 orbits). In addition, the GPU-accelerated manifold correction method is used to numerically study how the dynamics are affected by the spin-induced quadrupole-monopole interaction for a black hole binary system.

  13. A New Dyslexia Reading Method and Visual Correction Position Method.

    Science.gov (United States)

    Manilla, George T; de Braga, Joe

    2017-01-01

    Pediatricians and educators may interact daily with several dyslexic patients or students. One dyslexic author accidentally developed a personal, effective, corrective reading method. Its effectiveness was evaluated in 3 schools. One school utilized 8 demonstration special education students. Over 3 months, one student grew one third of a year, 3 grew 1 year, and 4 grew 2 years. In another school, 6 sixth-, seventh-, and eighth-grade classroom teachers followed 45 treated dyslexic students. They all excelled and progressed beyond their classroom peers in 4 months. Using cyclovergence upper gaze, dyslexic reading problems disappeared at one of the Positional Reading Arc positions of 30°, 60°, 90°, 120°, or 150° for 10 dyslexics. Positional Reading Arc testing of 112 students in the second through eighth grades showed that words read per minute, reading errors, and comprehension improved. Dyslexia was visually corrected by use of a new reading method and Positional Reading Arc positions.

  14. Evaluation of a scatter correction technique for single photon transmission measurements in PET by means of Monte Carlo simulations

    International Nuclear Information System (INIS)

    Wegmann, K.; Brix, G.

    2000-01-01

    Purpose: Single photon transmission (SPT) measurements offer a new approach for the determination of attenuation correction factors (ACF) in PET. It was the aim of the present work to evaluate a scatter correction algorithm proposed by C. Watson by means of Monte Carlo simulations. Methods: SPT measurements with a Cs-137 point source were simulated for a whole-body PET scanner (ECAT EXACT HR+) in both the 2D and 3D mode. To examine the scatter fraction (SF) in the transmission data, the detected photons were classified as unscattered or scattered. The simulated data were used to determine (i) the spatial distribution of the SFs, (ii) an ACF sinogram from all detected events (ACF_tot), (iii) an ACF sinogram from the unscattered events only (ACF_unscattered), and (iv) an ACF_cor = (ACF_tot)^(1+κ) sinogram corrected according to the Watson algorithm. In addition, density images were reconstructed in order to quantitatively evaluate linear attenuation coefficients. Results: A high correlation was found between the SF and the ACF_tot sinograms. For the cylinder and the EEC phantom, similar correction factors κ were estimated. The determined values resulted in an accurate scatter correction in both the 2D and 3D mode. (orig.) [de
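    The quoted correction is a simple element-wise rescaling of the attenuation correction factors. The sketch below applies ACF_cor = (ACF_tot)^(1+κ) to a sinogram; the κ value and the sinogram entries are placeholders, since the record does not give the estimated correction factors.

    # Element-wise application of the quoted scaling ACF_cor = ACF_tot ** (1 + kappa) (sketch).
    import numpy as np

    def correct_acf(acf_tot, kappa):
        return np.asarray(acf_tot, dtype=float) ** (1.0 + kappa)

    acf_sinogram = np.array([[1.8, 2.4], [3.1, 1.2]])       # hypothetical ACF values (>= 1)
    acf_corrected = correct_acf(acf_sinogram, kappa=0.15)   # kappa is a placeholder value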

  15. Dead time corrections using the backward extrapolation method

    Energy Technology Data Exchange (ETDEWEB)

    Gilad, E., E-mail: gilade@bgu.ac.il [The Unit of Nuclear Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Dubi, C. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel); Geslot, B.; Blaise, P. [DEN/CAD/DER/SPEx/LPE, CEA Cadarache, Saint-Paul-les-Durance 13108 (France); Kolin, A. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel)

    2017-05-11

    Dead time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to create strong biasing in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing) and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled counts per second (CPS), based on backward extrapolation, to zero, of the losses created by artificially imposing increasingly longer dead times on the data. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (of 1–2%) in restoring the corrected count rate. - Highlights: • A new method for dead time corrections is introduced and experimentally validated. • The method does not depend on any prior calibration nor assumes any specific model. • Different dead times are imposed on the signal and the losses are extrapolated to zero. • The method is implemented and validated using neutron measurements from the MINERVE. • Results show very good correspondence to empirical results.
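    A toy version of the backward-extrapolation idea is sketched below: growing artificial (non-paralyzing) dead times are imposed on a recorded pulse train, the measured rate is fitted as a function of the total dead time, and the dead-time-free rate is read off at zero. The non-paralyzing model, the synthetic pulse train and all parameter values are assumptions for illustration; the authors' method itself is model-free and is not reproduced here.

    # Toy backward-extrapolation sketch (model-based here for brevity; the paper's method is model-free).
    import numpy as np
    from scipy.optimize import curve_fit

    def apply_nonparalyzing(timestamps, dead_time):
        """Count events, discarding any arriving within `dead_time` of the last accepted one."""
        kept, t_last = 0, -np.inf
        for t in timestamps:
            if t - t_last >= dead_time:
                kept += 1
                t_last = t
        return kept

    duration = 10.0                                                      # seconds (synthetic)
    timestamps = np.sort(np.random.default_rng(1).uniform(0, duration, 50_000))
    deltas = np.array([0.0, 5e-6, 10e-6, 20e-6, 40e-6])                  # imposed dead times [s]
    rates = np.array([apply_nonparalyzing(timestamps, d) / duration for d in deltas])

    # Fit m(delta) = n / (1 + n * (tau0 + delta)) and extrapolate back to zero total dead time.
    def model(d, n, tau0):
        return n / (1.0 + n * (tau0 + d))

    (n_corrected, tau0_fit), _ = curve_fit(model, deltas, rates, p0=[rates[0], 1e-6])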

  16. Incoherent quasielastic neutron scattering from plastic crystals

    International Nuclear Information System (INIS)

    Bee, M.; Amoureux, J.P.

    1980-01-01

    The aim of this paper is to present some applications of a method indicated by Sears in order to correct for multiple scattering. The calculations were performed in the particular case of slow neutron incoherent quasielastic scattering from organic plastic crystals. First, an exact calculation (up to second scattering) is compared with the results of a Monte Carlo simulation technique. Then, an approximation is developed on the basis of a rotational jump model which allows a further analytical treatment. The multiple scattering is expressed in terms of generalized structure factors (which can be regarded as self convolutions of first order structure factors taking into account the instrumental geometry) and lorentzian functions the widths of which are linear combinations of the jump rates. Three examples are given. Two of them correspond to powder samples while in the third we are concerned with the case of a single crystalline slab. In every case, this approximation is shown to be a good approach to the multiple scattering evaluation, its main advantage being the possibility of applying it without any preliminary knowledge of the correlation times for rotational jumps. (author)

  17. Error analysis of motion correction method for laser scanning of moving objects

    Science.gov (United States)

    Goel, S.; Lohani, B.

    2014-05-01

    The limitation of conventional laser scanning methods is that the objects being scanned must be static. The need to scan moving objects has resulted in the development of new methods capable of generating the correct 3D geometry of moving objects. Only limited literature is available, describing very few methods capable of addressing the problem of object motion during scanning. All the existing methods utilize their own models or sensors. Studies on error modelling or analysis of any of the motion correction methods are lacking in the literature. In this paper, we develop the error budget and present the analysis of one such `motion correction' method. This method assumes the availability of position and orientation information for the moving object, which in general can be obtained by installing a POS system on board or by the use of tracking devices. It then uses this information along with the laser scanner data to apply a correction to the laser data, thus resulting in correct geometry despite the object being mobile during scanning. The major applications of this method lie in the shipping industry, to scan ships either moving or parked at sea, and in scanning other objects such as hot air balloons or aerostats. It is to be noted that the other methods of "motion correction" explained in the literature cannot be applied to scan the objects mentioned here, making the chosen method quite unique. This paper presents some interesting insights into the functioning of the "motion correction" method as well as a detailed account of the behavior and variation of the error due to different sensor components, alone and in combination with each other. The analysis can be used to obtain insights into the optimal utilization of available components for achieving the best results.
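    The correction analysed in the paper can be sketched as a per-point rigid transform: each laser return measured at time t in the scanner frame is mapped into a fixed world frame using the object's interpolated pose from the POS or tracking device, p_world = R(t) p_scan + T(t). The pose interpolation, the z-y-x Euler convention and all names below are illustrative assumptions, not the paper's formulation.

    # Per-point motion correction using interpolated pose data (illustrative sketch).
    import numpy as np

    def euler_zyx_to_matrix(yaw, pitch, roll):
        cz, sz = np.cos(yaw), np.sin(yaw)
        cy, sy = np.cos(pitch), np.sin(pitch)
        cx, sx = np.cos(roll), np.sin(roll)
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        return Rz @ Ry @ Rx

    def correct_point(p_scan, t, pose_times, poses):
        """poses: (N, 6) array of (x, y, z, yaw, pitch, roll) POS samples, linearly interpolated."""
        pose = np.array([np.interp(t, pose_times, poses[:, i]) for i in range(6)])
        R = euler_zyx_to_matrix(*pose[3:])
        return R @ np.asarray(p_scan) + pose[:3]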

  18. A Workshop on Methods for Neutron Scattering Instrument Design. Introduction and Summary

    International Nuclear Information System (INIS)

    Hjelm, Rex P.

    1996-09-01

    The future of neutron and x-ray scattering instrument development and international cooperation was the focus of the workshop "Methods for Neutron Scattering Instrument Design", held September 23-25 at the E.O. Lawrence Berkeley National Laboratory. These proceedings are a collection of a portion of the invited and contributed presentations

  19. A Horizontal Tilt Correction Method for Ship License Numbers Recognition

    Science.gov (United States)

    Liu, Baolong; Zhang, Sanyuan; Hong, Zhenjie; Ye, Xiuzi

    2018-02-01

    An automatic ship license number (SLN) recognition system plays a significant role in intelligent waterway transportation systems, since it can be used to identify ships by recognizing the characters in SLNs. Tilt occurs frequently in many SLNs because the monitors and the ships usually have large vertical or horizontal angles, which significantly decreases the accuracy and robustness of an SLN recognition system. In this paper, we present a horizontal tilt correction method for SLNs. For an input tilted SLN image, the proposed method accomplishes the correction task through three main steps. First, an MSER-based characters' center-points computation algorithm is designed to compute the accurate center-points of the characters contained in the input SLN image. Second, an L1-L2 distance-based straight line is fitted to the computed center-points using the M-estimator algorithm; the tilt angle is estimated at this stage. Finally, based on the computed tilt angle, an affine rotation is applied to rotate and correct the input SLN horizontally. The proposed method is tested on 200 tilted SLN images and is shown to be effective, with a tilt correction rate of 80.5%.
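    A minimal sketch of the same pipeline follows: fit a straight line to the character center-points (ordinary least squares here in place of the paper's L1-L2 M-estimator), take the tilt angle from the slope, and rotate the plate image by that angle. The function names, the use of scipy.ndimage.rotate and the sign convention are assumptions.

    # Horizontal tilt correction from character center-points (sketch, not the paper's code).
    import numpy as np
    from scipy.ndimage import rotate

    def estimate_tilt_deg(center_points):
        """center_points: array of (x, y) character centers in image coordinates."""
        x, y = np.asarray(center_points, dtype=float).T
        slope, _ = np.polyfit(x, y, 1)
        return np.degrees(np.arctan(slope))

    def correct_tilt(image, center_points):
        angle = estimate_tilt_deg(center_points)
        # The sign of the applied angle may need flipping depending on the coordinate convention.
        return rotate(image, angle, reshape=False, order=1)

    # Usage: corrected = correct_tilt(sln_image, mser_centers)   # both inputs are assumptions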

  20. Texture analysis by the Schulz reflection method: Defocalization corrections for thin films

    International Nuclear Information System (INIS)

    Chateigner, D.; Germi, P.; Pernet, M.

    1992-01-01

    A new method is described for correcting experimental data obtained from the texture analysis of thin films. The analysis employed for correcting the data usually requires the experimental curves of defocalization for a randomly oriented specimen. In view of difficulties in finding non-oriented films, a theoretical method for these corrections is proposed which uses the defocalization evolution for a bulk sample, the film thickness and the penetration depth of the incident beam in the material. This correction method is applied to a film of YBa₂Cu₃O₇₋δ on an SrTiO₃ single-crystal substrate. (orig.)

  1. AXMIX, ANISN Cross-Sections Mixing, Transport Corrections, Data Library Management

    International Nuclear Information System (INIS)

    2002-01-01

    1 - Nature of physical problem solved: Mixing, changing table length, adjoining, making scattering order adjustments (PN delta function subtraction), and transport corrections of ANISN-type cross sections, and management of cross section data sets and libraries. 2 - Method of solution: The number of energy groups which will fit into the core allocated is determined first. If all groups will fit, the solution is straightforward. If not, then the maximum number of groups which will fit is processed repeatedly using direct access I/O and storage disks. 3 - Restrictions on the complexity of the problem: Some flexibility in applying AXMIX is lost when cross sections to be processed contain up-scatter. A special section on up-scatter is therefore included in the report

  2. Inelastic scattering in condensed matter with high intensity Mossbauer radiation: Progress report, March 1, 1985-October 31, 1987

    International Nuclear Information System (INIS)

    Yelon, W.B.; Schupp, G.

    1987-10-01

    A facility for high intensity Moessbauer scattering has been commissioned at the University of Missouri Research Reactor (MURR), as well as a facility at Purdue University using special isotopes produced at MURR. A number of scattering studies have been successfully carried out, including a study of the thermal diffuse scattering in Si, which led to an analysis of the resolution function for gamma-ray scattering. Also studied were the anharmonic motion in Na and the satellite reflection Debye-Waller factor in TaS₂, which indicates phason rather than phonon behavior. High precision, fundamental Moessbauer effect studies have also been carried out using scattering to filter unwanted radiation. These have led to a new Fourier transform method for describing the Moessbauer effect (ME) lineshape. This method allows complete correction for source resonance self-absorption (SRSA) and the accurate representation of interference effects that add an asymmetric component to the ME lines. This analysis is important both to the fundamental ME studies and to scattering studies for which a deconvolution is essential for extracting the correct elastic fractions and lineshape parameters. These advances, coupled to our improvements in MIcrofoil Conversion Electron (MICE) spectroscopy, lay the foundation for the proposed research outlined in this request for a three-year renewal of DOE support

  3. Introduction to the determination of transport numbers in electrolytic solutions. Effect of the activity coefficient in the coupled scattering and self-scattering processes. Electric mobility of the Na+ ion in water-THF mixture - Measurements of transport numbers by means of radio-tracers

    International Nuclear Information System (INIS)

    M'Malla

    1976-01-01

    Within the framework of a study of preferential ion solvation in hydro-organic media, the author reports measurements of the ionic conductivity of the Na+ ion in mixtures of water and THF (tetrahydrofuran) in different proportions, and more specifically the use of a recently developed method of transport number measurement. The author explains the general definition of the transport number, recalls the usual measurement methods (Hittorf method, moving boundary method), describes the principle of the method and the measurement process, reports the assessment of corrective terms in the calculation of the transport number, and presents and comments on the obtained results. A second part addresses the influence of the activity coefficient gradient on the coupled diffusion and self-diffusion phenomena: self-diffusion measurement with a tracer, theoretical aspects of coupled diffusion, experimental results and discussion

  4. Two loop integrals and QCD scattering

    International Nuclear Information System (INIS)

    Anastasiou, C.

    2001-04-01

    We present the techniques for the calculation of one- and two-loop integrals contributing to the virtual corrections to 2→2 scattering of massless particles. First, tensor integrals are related to scalar integrals with extra powers of propagators and higher dimension using the Schwinger representation. Integration By Parts and Lorentz Invariance recurrence relations reduce the number of independent scalar integrals to a set of master integrals for which their expansion in ε = 2 - D/2 is calculated using a combination of Feynman parameters, the Negative Dimension Integration Method, the Differential Equations Method, and Mellin-Barnes integral representations. The two-loop matrix-elements for light-quark scattering are calculated in Conventional Dimensional Regularisation by direct evaluation of the Feynman diagrams. The ultraviolet divergences are removed by renormalising with the MS-bar scheme. Finally, the infrared singular behavior is shown to be in agreement with the one anticipated by the application of Catani's formalism for the infrared divergences of generic QCD two-loop amplitudes. (author)

  5. Correcting saturation of detectors for particle/droplet imaging methods

    International Nuclear Information System (INIS)

    Kalt, Peter A M

    2010-01-01

    Laser-based diagnostic methods are being applied to more and more flows of theoretical and practical interest and are revealing interesting new flow features. Imaging particles or droplets in nephelometry and laser sheet dropsizing methods requires a trade-off that maximizes the signal-to-noise ratio without over-saturating the detector. Droplet and particle imaging results in a lognormal distribution of pixel intensities. It is possible to fit a derived lognormal distribution to the histogram of measured pixel intensities. If pixel intensities are clipped at a saturated value, it is possible to estimate the presumed probability density function (pdf) shape without the effects of saturation from the lognormal fit to the unsaturated histogram. Information about the presumed shape of the pixel intensity pdf is used to generate corrections that can be applied to data to account for saturation. The effects of even slight saturation are shown to be a significant source of error on the derived average. The influence of saturation on the derived root mean square (rms) is even more pronounced. It is found that errors on the determined average exceed 5% when the number of saturated samples exceeds 3% of the total. Errors on the rms are 20% for a similar saturation level. This study also attempts to delineate the limits within which detector saturation can be accurately corrected. It is demonstrated that a simple method for reshaping the clipped part of the pixel intensity histogram makes accurate corrections to account for saturated pixels. These outcomes can be used to correct a saturated signal, quantify the effect of saturation on a derived average and offer a method to correct the derived average in the case of slight to moderate saturation of pixels
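
    As a rough illustration of the correction idea described above, the sketch below fits a lognormal distribution to the unsaturated part of a clipped pixel-intensity sample via a censored maximum-likelihood fit and recovers the true mean; the distribution parameters, saturation level and fitting choice are illustrative assumptions, not the authors' procedure.

      import numpy as np
      from scipy import stats, optimize

      rng = np.random.default_rng(0)
      true_mu, true_sigma, saturation = 4.0, 0.6, 120.0

      pixels = rng.lognormal(true_mu, true_sigma, 200_000)
      clipped = np.minimum(pixels, saturation)              # detector saturates here
      n_saturated = int(np.sum(clipped >= saturation))
      unsaturated = clipped[clipped < saturation]

      def neg_censored_loglike(params):
          mu, sigma = params
          if sigma <= 0:
              return np.inf
          # density for unsaturated pixels, survival probability for saturated ones
          ll = stats.lognorm.logpdf(unsaturated, s=sigma, scale=np.exp(mu)).sum()
          ll += n_saturated * stats.lognorm.logsf(saturation, s=sigma, scale=np.exp(mu))
          return -ll

      mu_hat, sigma_hat = optimize.minimize(
          neg_censored_loglike, x0=[np.log(clipped.mean()), 0.5],
          method="Nelder-Mead").x

      naive_mean = clipped.mean()                           # biased low by saturation
      corrected_mean = np.exp(mu_hat + 0.5 * sigma_hat**2)  # mean of the fitted lognormal
      print(f"naive {naive_mean:.1f}  corrected {corrected_mean:.1f}  true {pixels.mean():.1f}")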

  6. RELIC: a novel dye-bias correction method for Illumina Methylation BeadChip.

    Science.gov (United States)

    Xu, Zongli; Langie, Sabine A S; De Boever, Patrick; Taylor, Jack A; Niu, Liang

    2017-01-03

    The Illumina Infinium HumanMethylation450 BeadChip and its successor, Infinium MethylationEPIC BeadChip, have been extensively utilized in epigenome-wide association studies. Both arrays use two fluorescent dyes (Cy3-green/Cy5-red) to measure methylation level at CpG sites. However, performance differences between the dyes can result in biased estimates of methylation levels. Here we describe a novel method, called REgression on Logarithm of Internal Control probes (RELIC), to correct for dye bias on the whole array by utilizing the intensity values of paired internal control probes that monitor the two color channels. We evaluate the method in several datasets against other widely used dye-bias correction methods. Results on data quality improvement showed that RELIC correction statistically significantly outperforms alternative dye-bias correction methods. We incorporated the method into the R package ENmix, which is freely available from the Bioconductor website (https://www.bioconductor.org/packages/release/bioc/html/ENmix.html). RELIC is an efficient and robust method to correct for dye bias in Illumina Methylation BeadChip data. It outperforms alternative methods and is conveniently implemented in the R package ENmix to facilitate DNA methylation studies.
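
    A schematic Python re-implementation of the regression idea (not the ENmix/RELIC R code): regress the log intensities of one channel's internal control probes on the paired probes of the other channel and use the fitted relation to map red-channel intensities onto the green-channel scale. The simulated control-probe intensities and the dye-bias model below are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(11)
      # paired internal control probes monitoring the two color channels
      green_ctrl = rng.uniform(500.0, 20000.0, 85)
      red_ctrl = 1.3 * green_ctrl**0.95 * rng.lognormal(0.0, 0.05, 85)   # dye bias

      # linear regression in log space: log(green) ~ intercept + slope * log(red)
      slope, intercept = np.polyfit(np.log(red_ctrl), np.log(green_ctrl), 1)

      def correct_red(red_intensities):
          """Map red-channel intensities onto the green-channel scale."""
          return np.exp(intercept + slope * np.log(red_intensities))

      red_probes = np.array([800.0, 5000.0, 15000.0])
      print("corrected intensities:", correct_red(red_probes))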

  7. Improved pion pion scattering amplitude from dispersion relation formalism

    International Nuclear Information System (INIS)

    Cavalcante, I.P.; Coutinho, Y.A.; Borges, J. Sa

    2005-01-01

    The pion-pion scattering amplitude is obtained from Chiral Perturbation Theory at the one- and two-loop approximations. The dispersion relation formalism provides a more economical method, which was proved to reproduce the analytical structure of that amplitude at both approximation levels. This work extends the use of the formalism in order to compute further unitarity corrections to partial waves, including the D-wave amplitude. (author)

  8. Phase retrieval with the reverse projection method in the presence of object's scattering

    International Nuclear Information System (INIS)

    Wang, Zhili; Gao, Kun; Wang, Dajiang

    2017-01-01

    X-ray grating interferometry can provide substantially increased contrast over traditional attenuation-based techniques in biomedical applications, and therefore novel and complementary information. Recently, special attention has been paid to quantitative phase retrieval in X-ray grating interferometry, which is mandatory to perform phase tomography, to achieve material identification, etc. An innovative approach, dubbed “Reverse Projection” (RP), has been developed for quantitative phase retrieval. The RP method abandons grating scanning completely, and is thus advantageous in terms of higher efficiency and reduced radiation damage. Therefore, it is expected that this novel method will find its potential in preclinical and clinical implementations. Strictly speaking, the reverse projection method is applicable to objects exhibiting only absorption and refraction. In this contribution, we discuss phase retrieval with the reverse projection method for general objects with absorption, refraction and scattering simultaneously. In particular, we investigate the influence of the object's scattering on the retrieved refraction signal. Both theoretical analysis and numerical experiments are performed. The results show that, for small signals, the retrieved refraction signal is the product of the object's refraction and scattering signals. In the case of strong scattering, the reverse projection method cannot provide reliable phase retrieval. The presented results will guide the use of the reverse projection method in future practical applications, and help to explain some possible artifacts in the retrieved images and/or reconstructed slices. - Highlights: • Accurate phase retrieval by the reverse projection method without object's scattering. • Retrieved refraction signal contaminated by the object's scattering. • Refraction signal underestimated by the reverse projection method. • Guide the use of the reverse projection method for

  9. Accurate single-scattering simulation of ice cloud using the invariant-imbedding T-matrix method and the physical-geometric optics method

    Science.gov (United States)

    Sun, B.; Yang, P.; Kattawar, G. W.; Zhang, X.

    2017-12-01

    The single-scattering properties of ice clouds can be accurately simulated using the invariant-imbedding T-matrix method (IITM) and the physical-geometric optics method (PGOM). The IITM has been parallelized using the Message Passing Interface (MPI) method to remove the memory limitation, so that the IITM can be used to obtain the single-scattering properties of ice clouds for sizes in the geometric optics regime. Furthermore, the results associated with random orientations can be obtained analytically once the T-matrix is given. The PGOM is also parallelized in conjunction with random orientations. The single-scattering properties of a hexagonal prism with height 400 (in units of λ/2π, where λ is the incident wavelength) and an aspect ratio of 1 (defined as the height divided by twice the bottom side length) are computed using the parallelized IITM and compared to the counterparts obtained using the parallelized PGOM. The two results are in close agreement. Furthermore, the integrated single-scattering properties, including the asymmetry factor, the extinction cross-section, and the scattering cross-section, are given over the complete size range. The present results show a smooth transition from the exact IITM solution to the approximate PGOM result. Because the IITM calculation has reached the geometric regime, the IITM and the PGOM can be efficiently employed to accurately compute the single-scattering properties of ice clouds over a wide spectral range.

  10. A data reduction program for the linac total-scattering amorphous materials spectrometer (LINDA)

    International Nuclear Information System (INIS)

    Clarke, J.H.

    1976-01-01

    A computer program has been written to reduce the data collected on the A.E.R.E. Harwell linac total-scattering spectrometer (TSS) to the differential scattering cross-section. This instrument, used for studying the structure of amorphous materials such as liquids and glasses, has been described in detail. Time-of-flight spectra are recorded by several arrays of detectors at different angles using a pulsed incident neutron beam with a continuous distribution of wavelengths. The program performs all necessary background and container subtractions and also absorption corrections using the method of Paalman and Pings. The incident neutron energy distribution is obtained from the intensity recorded from a standard vanadium sample, enabling the observed differential scattering cross-section dσ/dΩ(θ, λ) and the structure factor S(Q) to be obtained. Various sample and vanadium geometries can be analysed by the program and facilities exist for the summation of data sets, smoothing of data, application of Placzek corrections and the output of processed data onto magnetic tape or punched cards. A set of example data is provided and some structure factors are shown with absorption corrections. (author)
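
    The vanadium-normalization step at the heart of this reduction can be sketched as follows; the spectra, the single effective attenuation factors standing in for the full Paalman and Pings corrections, and the cross-section value are all illustrative assumptions rather than the program's actual treatment.

      import numpy as np

      rng = np.random.default_rng(5)
      # time-of-flight spectra for one detector bank (counts per channel)
      sample_run    = rng.poisson(1200, 512).astype(float)   # sample in container
      container_run = rng.poisson(300, 512).astype(float)    # empty container
      vanadium_run  = rng.poisson(900, 512).astype(float)    # vanadium standard
      background    = rng.poisson(150, 512).astype(float)    # empty instrument

      # single effective attenuation factors (the full Paalman-Pings treatment
      # applies separate factors to the sample-in-container and container terms)
      A_sample, A_vanadium = 0.85, 0.88
      sigma_v = 5.08 / (4.0 * np.pi)     # vanadium scattering cross section, barn/sr

      sample_net   = (sample_run - background) - (container_run - background)
      vanadium_net = vanadium_run - background

      # incident spectrum and detector efficiency cancel in the ratio
      dsigma_domega = sigma_v * (A_vanadium / A_sample) * sample_net / vanadium_net
      print(dsigma_domega[:5])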

  11. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    Directory of Open Access Journals (Sweden)

    Huiliang Cao

    2016-01-01

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  12. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-07

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, increasing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  13. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-01

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, increasing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability. PMID:26751455

  14. A method of detector correction for cosmic ray muon radiography

    International Nuclear Information System (INIS)

    Liu Yuanyuan; Zhao Ziran; Chen Zhiqiang; Zhang Li; Wang Zhentian

    2008-01-01

    Cosmic ray muon radiography, which has good penetrability and sensitivity to high-Z materials, is an effective way of detecting shielded nuclear materials. The problem of data correction is one of the key points of the muon radiography technique. Because of the influence of the environmental background, environmental noise and detector errors, the raw data cannot be used directly. If the raw data were used for reconstruction without any correction, severe artifacts would appear. Based on the characteristics of the muon radiography system and aimed at the detector errors, this paper proposes a method of detector correction. The simulation experiments demonstrate that this method can effectively correct the errors produced by the detectors. It therefore brings the technique of cosmic ray muon radiography a step closer to practical use. (authors)

  15. Simulating the influence of scatter and beam hardening in dimensional computed tomography

    Science.gov (United States)

    Lifton, J. J.; Carmignato, S.

    2017-10-01

    Cone-beam x-ray computed tomography (XCT) is a radiographic scanning technique that allows the non-destructive dimensional measurement of an object’s internal and external features. XCT measurements are influenced by a number of different factors that are poorly understood. This work investigates how non-linear x-ray attenuation caused by beam hardening and scatter influences XCT-based dimensional measurements through the use of simulated data. For the measurement task considered, both scatter and beam hardening are found to influence dimensional measurements when evaluated using the ISO50 surface determination method. On the other hand, only beam hardening is found to influence dimensional measurements when evaluated using an advanced surface determination method. Based on the results presented, recommendations on the use of beam hardening and scatter correction for dimensional XCT are given.
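
    For reference, the ISO50 rule mentioned above places the surface threshold halfway between the background and material grey-value peaks of the CT histogram. A minimal sketch, using a synthetic two-mode grey-value distribution and a crude fixed split between the modes as illustrative assumptions:

      import numpy as np

      rng = np.random.default_rng(7)
      air = rng.normal(100.0, 15.0, 200_000)        # background (air) grey values
      metal = rng.normal(900.0, 40.0, 50_000)       # material grey values
      volume = np.concatenate([air, metal])

      hist, edges = np.histogram(volume, bins=256)
      centres = 0.5 * (edges[:-1] + edges[1:])

      split = centres.searchsorted(500.0)           # crude split between the two modes
      peak_air = centres[:split][hist[:split].argmax()]
      peak_material = centres[split:][hist[split:].argmax()]

      iso50 = 0.5 * (peak_air + peak_material)      # ISO50 surface threshold
      surface_mask = volume >= iso50
      print(f"ISO50 threshold: {iso50:.1f}")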

  16. Investigation of scattered radiation in 3D whole-body positron emission tomography using Monte Carlo simulations

    International Nuclear Information System (INIS)

    Adam, L.-E.; Brix, G.

    1999-01-01

    The correction of scattered radiation is one of the most challenging tasks in 3D positron emission tomography (PET) and knowledge about the amount of scatter and its distribution is a prerequisite for performing an accurate correction. One concern in 3D PET in contrast to 2D PET is the scatter contribution from activity outside the field-of-view (FOV) and multiple scatter. Using Monte Carlo simulations, we examined the scatter distribution for various phantoms. The simulations were performed for a whole-body PET system (ECAT EXACT HR+, Siemens/CTI) with an axial FOV of 15.5 cm and a ring diameter of 82.7 cm. With (without) interplane septa, up to one (two) out of three detected events are scattered (for a centred point source in a water-filled cylinder that nearly fills out the patient port), whereby the relative scatter fraction varies significantly with the axial position. Our results show that for an accurate scatter correction, activity as well as scattering media outside the FOV have to be taken into account. Furthermore it could be shown that there is a considerable amount of multiple scatter which has a different spatial distribution from single scatter. This means that multiple scatter cannot be corrected by simply rescaling the single scatter component. (author)

  17. Scattering Amplitudes via Algebraic Geometry Methods

    DEFF Research Database (Denmark)

    Søgaard, Mads

    Feynman diagrams. The study of multiloop scattering amplitudes is crucial for the new era of precision phenomenology at the Large Hadron Collider (LHC) at CERN. Loop-level scattering amplitudes can be reduced to a basis of linearly independent integrals whose coefficients are extracted from generalized...

  18. High order QED corrections in Z physics

    International Nuclear Information System (INIS)

    Marck, S.C. van der.

    1991-01-01

    In this thesis a number of calculations of higher order QED corrections are presented, all applying to the standard LEP/SLC processes e⁺e⁻ → f-bar f, where f stands for any fermion. In cases where f ≠ e⁻, ν_e, the above process is only possible via annihilation of the incoming electron-positron pair. At LEP/SLC this mainly occurs via the production and the subsequent decay of a Z boson, i.e. the cross section is heavily dominated by the Z resonance. These processes and the corrections to them, treated in a semi-analytical way, are discussed (ch. 2). In the case f = e⁻ (Bhabha scattering) the process can also occur via the exchange of a virtual photon in the t-channel. Since the latter contribution is dominant at small scattering angles, one has to exclude these angles if one is interested in Z physics. Having excluded that region, one has to recalculate all QED corrections (ch. 3). The techniques introduced there enable the calculation of the difference between forward and backward scattering, the forward-backward asymmetry, for the cases f ≠ e⁻, ν_e (ch. 4). At small scattering angles, where Bhabha scattering is dominated by photon exchange in the t-channel, this process is used in experiments to determine the luminosity of the e⁺e⁻ accelerator. Hence an accurate theoretical description of this process at small angles is of vital interest to the overall normalization of all measurements at LEP/SLC. Ch. 5 gives such a description in a semi-analytical way. The last two chapters discuss Monte Carlo techniques that are used for the cases f ≠ e⁻, ν_e. Ch. 6 describes the simulation of two-photon bremsstrahlung, which is a second order QED correction effect. The results are compared with results of the semi-analytical treatment in ch. 2. Finally ch. 7 reviews several techniques that have been used to simulate higher order QED corrections for the cases f ≠ e⁻, ν_e. (author). 132 refs.; 10 figs.; 16 tabs

  19. A semiclassical method in the theory of light scattering by semiconductor quantum dots

    International Nuclear Information System (INIS)

    Lang, I. G.; Korovin, L. I.; Pavlov, S. T.

    2008-01-01

    A semiclassical method is proposed for the theoretical description of elastic light scattering by arbitrary semiconductor quantum dots under conditions of size quantization. This method involves retarded potentials and allows one to dispense with boundary conditions for electric and magnetic fields. Exact results for the Umov-Poynting vector at large distances from quantum dots in the case of monochromatic and pulsed irradiation and formulas for differential scattering cross sections are obtained

  20. Equation-Method for correcting clipping errors in OFDM signals.

    Science.gov (United States)

    Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry

    2016-01-01

    Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations resulting in high peak to average power ratios which can make an OFDM transmitter susceptible to non-linear distortion produced by its high power amplifiers (HPA). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA, but this causes in-band distortion and introduces bit-errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the Fast Fourier Transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can correct all clipping errors over a wide range of clipping thresholds. We show that numerical instability can be avoided and that new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.
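
    A toy sketch of the general principle of recovering clipped samples from frequency-domain linear equations is given below; it uses null (unused) subcarriers as the known constraints and assumes the receiver knows which time samples were clipped, which is a simplification rather than the authors' exact Equation-Method.

      import numpy as np

      rng = np.random.default_rng(1)
      N = 64
      used = np.arange(8, 56)                    # data subcarriers; the rest are null
      X = np.zeros(N, complex)
      X[used] = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], used.size)   # QPSK

      x = np.fft.ifft(X)                          # time-domain OFDM symbol
      A = 0.8 * np.abs(x).max()                   # clipping threshold
      clipped_idx = np.where(np.abs(x) > A)[0]
      y = x * np.minimum(1.0, A / np.abs(x))      # clipped signal fed to the HPA

      # Receiver: null subcarriers of the received spectrum contain only the
      # clipping error, giving linear equations for the unknown error samples.
      Y = np.fft.fft(y)
      null = np.setdiff1d(np.arange(N), used)
      F = np.fft.fft(np.eye(N))                   # DFT matrix
      A_mat = F[np.ix_(null, clipped_idx)]        # maps error samples to null bins
      e_hat, *_ = np.linalg.lstsq(A_mat, Y[null], rcond=None)

      x_rec = y.copy()
      x_rec[clipped_idx] -= e_hat                 # undo the clipping error
      print("max reconstruction error:", np.abs(x_rec - x).max())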

  1. Scattering Amplitudes via Algebraic Geometry Methods

    CERN Document Server

    Søgaard, Mads; Damgaard, Poul Henrik

    This thesis describes recent progress in the understanding of the mathematical structure of scattering amplitudes in quantum field theory. The primary purpose is to develop an enhanced analytic framework for computing multiloop scattering amplitudes in generic gauge theories including QCD without Feynman diagrams. The study of multiloop scattering amplitudes is crucial for the new era of precision phenomenology at the Large Hadron Collider (LHC) at CERN. Loop-level scattering amplitudes can be reduced to a basis of linearly independent integrals whose coefficients are extracted from generalized unitarity cuts. We take advantage of principles from algebraic geometry in order to extend the notion of maximal cuts to a large class of two- and three-loop integrals. This allows us to derive unique and surprisingly compact formulae for the coefficients of the basis integrals. Our results are expressed in terms of certain linear combinations of multivariate residues and elliptic integrals computed from products of ...

  2. Studies of oxide-based thin-layered heterostructures by X-ray scattering methods

    Energy Technology Data Exchange (ETDEWEB)

    Durand, O. [Thales Research and Technology France, Route Departementale 128, F-91767 Palaiseau Cedex (France)]. E-mail: olivier.durand@thalesgroup.com; Rogers, D. [Nanovation SARL, 103 bis rue de Versailles 91400 Orsay (France); Universite de Technologie de Troyes, 10-12 rue Marie Curie, 10010 (France); Teherani, F. Hosseini [Nanovation SARL, 103 bis rue de Versailles 91400 Orsay (France); Andrieux, M. [LEMHE, ICMMOCNRS-UMR 8182, Universite d' Orsay, Batiment 410, 91410 Orsay (France); Modreanu, M. [Tyndall National Institute, Lee Maltings, Prospect Row, Cork (Ireland)

    2007-06-04

    Some X-ray scattering methods (X-ray reflectometry and diffractometry) dedicated to the study of thin-layered heterostructures are presented, with a particular focus, for practical purposes, on the description of fast, accurate and robust techniques. The use of X-ray scattering metrology as a routine non-destructive testing method, particularly through procedures that simplify the data evaluation, is emphasized. The model-independent Fourier-inversion method applied to a reflectivity curve allows a fast determination of the individual layer thicknesses. We demonstrate the capability of this method by reporting an X-ray reflectometry study of multilayered oxide structures, even when the number of layers constituting the stack is not known a priori. A fast Fourier transform-based procedure has also been employed successfully on high resolution X-ray diffraction profiles. A study of the reliability of the integral-breadth methods in diffraction line-broadening analysis applied to thin layers, in order to determine coherent domain sizes, is also reported. Examples from studies of oxide-based thin-layer heterostructures illustrate these methods. In particular, X-ray scattering studies performed on high-k HfO2 and SrZrO3 thin layers, a (GaAs/AlOx) waveguide, and a ZnO thin layer are reported.
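
    The Fourier-inversion idea for reflectometry can be sketched as follows: the Kiessig fringes oscillate in q with a period of 2π/d, so the Fourier transform of the Fresnel-normalized reflectivity peaks at the layer thickness d. The single-layer model, normalization and numbers below are illustrative assumptions, not the procedure used in the paper.

      import numpy as np

      d_true = 48.0                                   # nm, single layer on a substrate
      q = np.linspace(0.2, 3.0, 2048)                 # 1/nm, above the critical edge
      fresnel = q**-4.0
      reflectivity = fresnel * (1.0 + 0.4 * np.cos(q * d_true)) * np.exp(-0.05 * q)

      signal = reflectivity / fresnel                 # remove the q^-4 Fresnel decay
      signal -= signal.mean()

      spectrum = np.abs(np.fft.rfft(signal * np.hanning(q.size)))
      thickness_axis = 2.0 * np.pi * np.fft.rfftfreq(q.size, d=q[1] - q[0])
      print("estimated thickness [nm]:", thickness_axis[spectrum.argmax()])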

  3. Comparison of the spatial landmark scatter of various 3D digitalization methods.

    Science.gov (United States)

    Boldt, Florian; Weinzierl, Christian; Hertrich, Klaus; Hirschfelder, Ursula

    2009-05-01

    The aim of this study was to compare four different three-dimensional digitalization methods on the basis of the complex anatomical surface of a cleft lip and palate plaster cast, and to ascertain their accuracy when positioning 3D landmarks. A cleft lip and palate plaster cast was digitalized with the SCAN3D photo-optical scanner, the OPTIX 400S laser-optical scanner, the Somatom Sensation 64 computed tomography system and the MicroScribe MLX 3-axis articulated-arm digitizer. First, four examiners appraised by individual visual inspection the surface detail reproduction of the three non-tactile digitalization methods in comparison to the reference plaster cast. The four examiners then localized the landmarks five times at intervals of 2 weeks. This involved simply copying, or spatially tracing, the landmarks from a reference plaster cast to each model digitally reproduced by each digitalization method. Statistical analysis of the landmark distribution specific to each method was performed based on the 3D coordinates of the positioned landmarks. Visual evaluation of surface detail conformity assigned the photo-optical digitalization method an average score of 1.5, the highest subjectively-determined conformity (surpassing the computer tomographic and laser-optical methods). The tactile scanning method revealed the lowest degree of 3D landmark scatter, 0.12 mm, and at 1.01 mm the lowest maximum 3D landmark scatter; this was followed by the computer tomographic, photo-optical and laser-optical methods (in that order). This study demonstrates that the landmarks' precision and reproducibility are determined by the complexity of the reference-model surface as well as the digital surface quality and the individual ability of each evaluator to capture 3D spatial relationships. The 3D-landmark scatter values and the lowest maximum 3D-landmark scatter differed only slightly between the best and the worst methods. The measurement results in this study reveal that it

  4. Multi-frequency direct sampling method in inverse scattering problem

    Science.gov (United States)

    Kang, Sangwoo; Lambert, Marc; Park, Won-Kwang

    2017-10-01

    We consider the direct sampling method (DSM) for the two-dimensional inverse scattering problem. Although DSM is fast, stable, and effective, some phenomena remain unexplained by the existing results. We show that the imaging function of the direct sampling method can be expressed by a Bessel function of order zero. We also clarify the previously unexplained imaging phenomena and suggest a multi-frequency DSM to overcome the limitations of the traditional DSM. Our method is evaluated in simulation studies using both single and multiple frequencies.

  5. Neutron scattering on molten transition metals and on Fe-C melts

    International Nuclear Information System (INIS)

    Weber, M.

    1978-01-01

    In order to find out whether short-range order phenomena can be detected in iron-carbon melts, neutron scattering experiments were carried out on molten iron-carbon alloys. The method of isotope substitution, in which the natural iron of the alloy was substituted by a 57Fe-enriched isotope mixture, helped to increase the ratio between the scattering length of the carbon atoms and that of the iron atoms. The mean coherent scattering length of the isotope mixture, which is required for further evaluation of the measurements, was determined experimentally by measuring the limiting angle for total reflection of neutrons on evaporated films. From this determination of the scattering length, a value for the hitherto unknown scattering length of the 58Fe isotope was obtained. The small angle scattering in the corrected intensity curves of molten Fe-C alloys was investigated in detail. Scattering experiments on unalloyed Fe, Co, and Ni in the range of small scattering vectors proved that this small-angle scattering effect, which was observed here for the first time, is of magnetic origin. It is caused by short-range spin correlations fluctuating in space and time.

  6. A new correction method for determination on carbohydrates in lignocellulosic biomass.

    Science.gov (United States)

    Li, Hong-Qiang; Xu, Jian

    2013-06-01

    The accurate determination of the key components in lignocellulosic biomass is a prerequisite for pretreatment and bioconversion. Currently, the widely used 72% H2SO4 two-step hydrolysis quantitative saccharification (QS) procedure uses the loss coefficients of monosaccharide standards to correct for monosaccharide loss in the secondary hydrolysis (SH) of QS and may result in excessive correction. By studying the quantitative relationships between the glucose and xylose losses under the specific hydrolysis conditions and the HMF and furfural production, a simple correction for the monosaccharide loss from both the primary hydrolysis (PH) and the SH was established using HMF and furfural as the calibrators. This method was applied to the component determination of corn stover, Miscanthus and cotton stalk (raw and pretreated materials) and compared to the NREL method. It was shown that this method can avoid excessive correction for samples with high carbohydrate contents. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Multiple-scattering theory. New developments and applications

    Energy Technology Data Exchange (ETDEWEB)

    Ernst, Arthur

    2007-12-04

    Multiple-scattering theory (MST) is a very efficient technique for calculating the electronic properties of an assembly of atoms. It provides explicitly the Green function, which can be used in many applications such as magnetism, transport and spectroscopy. This work gives an overview of recent developments in multiple-scattering theory. One of the important innovations is the multiple-scattering implementation of the self-interaction correction approach, which enables realistic electronic structure calculations of systems with localized electrons. Combined with the coherent potential approximation (CPA), this method can be applied to study the electronic structure of alloys as well as pseudo-alloys representing charge and spin disorder. This formalism is extended to finite temperatures, which allows one to investigate phase transitions and thermal fluctuations in correlated materials. Another novel development is the implementation of the self-consistent non-local CPA approach, which takes into account charge correlations around the CPA average and chemical short range order. This formalism is generalized to the relativistic treatment of magnetically ordered systems. Furthermore, several improvements are implemented to optimize the computational performance and to increase the accuracy of the KKR Green function method. The versatility of the approach is illustrated in numerous applications. (orig.)

  8. Multiple-scattering theory. New developments and applications

    International Nuclear Information System (INIS)

    Ernst, Arthur

    2007-01-01

    Multiple-scattering theory (MST) is a very efficient technique for calculating the electronic properties of an assembly of atoms. It provides explicitly the Green function, which can be used in many applications such as magnetism, transport and spectroscopy. This work gives an overview of recent developments in multiple-scattering theory. One of the important innovations is the multiple-scattering implementation of the self-interaction correction approach, which enables realistic electronic structure calculations of systems with localized electrons. Combined with the coherent potential approximation (CPA), this method can be applied to study the electronic structure of alloys as well as pseudo-alloys representing charge and spin disorder. This formalism is extended to finite temperatures, which allows one to investigate phase transitions and thermal fluctuations in correlated materials. Another novel development is the implementation of the self-consistent non-local CPA approach, which takes into account charge correlations around the CPA average and chemical short range order. This formalism is generalized to the relativistic treatment of magnetically ordered systems. Furthermore, several improvements are implemented to optimize the computational performance and to increase the accuracy of the KKR Green function method. The versatility of the approach is illustrated in numerous applications. (orig.)

  9. Unitarity corrections and high field strengths in high energy hard collisions

    International Nuclear Information System (INIS)

    Kovchegov, Y.V.; Mueller, A.H.

    1997-01-01

    Unitarity corrections to the BFKL description of high energy hard scattering are viewed in large-N_c QCD in light-cone quantization. In a center of mass frame unitarity corrections to high energy hard scattering are manifestly perturbatively calculable and unrelated to questions of parton saturation. In a frame where one of the hadrons is initially at rest unitarity corrections are related to parton saturation effects and involve potential strengths A_μ ∝ 1/g. In such a frame we describe the high energy scattering in terms of the expectation value of a Wilson loop. The large potentials A_μ ∝ 1/g are shown to be pure gauge terms allowing perturbation theory to again describe unitarity corrections and parton saturation effects. Genuine nonperturbative effects only come in at energies well beyond those energies where unitarity constraints first become important. (orig.)

  10. Simulation electromagnetic scattering on bodies through integral equation and neural networks methods

    Science.gov (United States)

    Lvovich, I. Ya; Preobrazhenskiy, A. P.; Choporov, O. N.

    2018-05-01

    The paper deals with the issue of electromagnetic scattering by a perfectly conducting diffractive body of complex shape. The scattering performance of the body is calculated through the integral equation method. A Fredholm equation of the second kind was used for calculating the electric current density. While solving the integral equation through the method of moments, the authors have properly treated the kernel singularity. Piecewise constant functions were chosen as the basis functions, and the resulting equation was solved through the method of moments. Within the Kirchhoff integral approach it is possible to obtain the scattered electromagnetic field, which is directly related to the obtained electric currents. The sector of observation angles lies in the front hemisphere of the diffractive body. To improve the characteristics of the diffractive body, the authors used a neural network. All the neurons contained a logsigmoid activation function and weighted sums as discriminant functions. The paper presents the matrix of weighting factors of the connectionist model, as well as the results for the optimized dimensions of the diffractive body. The paper also presents the basic steps of the calculation technique for diffractive bodies, based on the combination of the integral equation and neural network methods.
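
    A generic moment-method solution of a Fredholm equation of the second kind with pulse (piecewise constant) basis functions and point matching is sketched below; the smooth kernel, right-hand side and coupling constant are illustrative stand-ins for the singular electromagnetic kernel treated in the paper.

      import numpy as np

      # phi(x) = f(x) + lam * integral_0^1 K(x, y) phi(y) dy
      n = 200
      x = (np.arange(n) + 0.5) / n              # match points at segment centres
      h = 1.0 / n                                # width of each pulse basis function
      lam = 0.5
      K = np.exp(-np.abs(x[:, None] - x[None, :]))   # smooth illustrative kernel
      f = np.sin(np.pi * x)

      A = np.eye(n) - lam * h * K                # discretized operator (I - lam*K)
      phi = np.linalg.solve(A, f)

      residual = phi - (f + lam * h * K @ phi)   # residual check at the match points
      print("max residual:", np.abs(residual).max())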

  11. The generalized PN synthetic acceleration method for linear transport problems with highly anisotropic scattering

    International Nuclear Information System (INIS)

    Khattab, K.M.

    1997-01-01

    The diffusion synthetic acceleration (DSA) method has been known to be an effective tool for accelerating the iterative solution of transport equations with isotropic or mildly anisotropic scattering. However, the DSA method is not effective for transport equations that have strongly anisotropic scattering. A generalization of the modified DSA (MDSA) method is proposed that converges faster (in clock time) than the MDSA method. This method is developed, the results of a Fourier analysis that theoretically predicts its efficiency are described, and numerical results that verify the theoretical prediction are presented

  12. Determination of several trace elements in silicate rocks by an XRF method with background and matrix corrections

    International Nuclear Information System (INIS)

    Pascual, J.

    1987-01-01

    An X-ray fluorescence method for determining trace elements in silicate rock samples was studied. The procedure focused on the application of the pertinent matrix corrections. Either the Compton peak or the reciprocal of the mass absorption coefficient of the sample was used as internal standard for this purpose. X-ray tubes with W or Cr anodes were employed, and the W Lβ and Cr Kα Compton intensities scattered by the sample were measured. The mass absorption coefficients at both sides of the absorption edge for Fe (1.658 and 1.936 Å) were calculated. The elements Zr, Y, Rb, Zn, Ni, Cr and V were determined in 15 international reference rocks covering wide ranges of concentration. Relative mean errors were in many cases less than 10%. (author)

  13. Advanced methods for scattering amplitudes in gauge theories

    International Nuclear Information System (INIS)

    Peraro, Tiziano

    2014-01-01

    We present new techniques for the evaluation of multi-loop scattering amplitudes and their application to gauge theories, with relevance to the Standard Model phenomenology. We define a mathematical framework for the multi-loop integrand reduction of arbitrary diagrams, and elaborate algebraic approaches, such as the Laurent expansion method, implemented in the software Ninja, and the multivariate polynomial division technique by means of Groebner bases.

  14. Advanced methods for scattering amplitudes in gauge theories

    Energy Technology Data Exchange (ETDEWEB)

    Peraro, Tiziano

    2014-09-24

    We present new techniques for the evaluation of multi-loop scattering amplitudes and their application to gauge theories, with relevance to the Standard Model phenomenology. We define a mathematical framework for the multi-loop integrand reduction of arbitrary diagrams, and elaborate algebraic approaches, such as the Laurent expansion method, implemented in the software Ninja, and the multivariate polynomial division technique by means of Groebner bases.

  15. Histogram-driven cupping correction (HDCC) in CT

    Science.gov (United States)

    Kyriakou, Y.; Meyer, M.; Lapp, R.; Kalender, W. A.

    2010-04-01

    Typical cupping correction methods are pre-processing methods which require either pre-calibration measurements or simulations of standard objects to approximate and correct for beam hardening and scatter. Some of them require knowledge of the spectra, detector characteristics, etc. The aim of this work was to develop a practical histogram-driven cupping correction (HDCC) method to post-process the reconstructed images. We use a polynomial representation of the raw data generated by forward projection of the reconstructed images; forward and backprojection are performed on graphics processing units (GPU). The coefficients of the polynomial are optimized using a simplex minimization of the joint entropy of the CT image and its gradient. The algorithm was evaluated using simulations and measurements of homogeneous and inhomogeneous phantoms. For the measurements, a C-arm flat-detector CT (FD-CT) system with a 30×40 cm² detector, a kilovoltage on-board imager (radiation therapy simulator) and a micro-CT system were used. The algorithm reduced cupping artifacts both in simulations and measurements using a fourth-order polynomial and was in good agreement with the reference. The minimization algorithm required fewer than 70 iterations to adjust the coefficients, performing only a linear combination of basis images and thus executing without time-consuming operations. HDCC reduced cupping artifacts without the necessity of pre-calibration or other scan information, enabling a retrospective improvement of CT image homogeneity. The method can also work together with other cupping correction algorithms or in a calibration manner.
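
    A minimal sketch of the HDCC idea follows: the corrected raw data are written as a polynomial of the measured projections, one basis image per polynomial order is reconstructed (reconstruction is linear), and a simplex search picks the combination that minimizes the joint entropy of the image and its gradient. The phantom, the beam-hardening-like distortion and all parameters are illustrative assumptions, not the authors' setup.

      import numpy as np
      from scipy.optimize import minimize
      from skimage.data import shepp_logan_phantom
      from skimage.transform import radon, iradon, rescale

      img = rescale(shepp_logan_phantom(), 0.25)          # small phantom for speed
      theta = np.linspace(0.0, 180.0, 120, endpoint=False)
      p_ideal = radon(img, theta=theta)
      p_meas = p_ideal - 0.004 * p_ideal**2               # beam-hardening-like cupping

      # one basis image per polynomial order (c1 fixed to 1; optimize c2, c3)
      basis = [iradon(p_meas**k, theta=theta) for k in (1, 2, 3)]

      def joint_entropy(recon):
          gx, gy = np.gradient(recon)
          hist, _, _ = np.histogram2d(recon.ravel(), np.hypot(gx, gy).ravel(), bins=64)
          p = hist / hist.sum()
          p = p[p > 0]
          return -np.sum(p * np.log(p))

      def cost(c):
          return joint_entropy(basis[0] + c[0] * basis[1] + c[1] * basis[2])

      c_opt = minimize(cost, x0=[0.0, 0.0], method="Nelder-Mead").x
      corrected = basis[0] + c_opt[0] * basis[1] + c_opt[1] * basis[2]
      print("optimized polynomial coefficients:", c_opt)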

  16. Fast Neutron Elastic and Inelastic Scattering of Vanadium

    Energy Technology Data Exchange (ETDEWEB)

    Holmqvist, B; Johansson, S G; Lodin, G; Wiedling, T

    1969-11-15

    Fast neutron scattering interactions with vanadium were studied using time-of-flight techniques at several energies in the interval 1.5 to 8.1 MeV. The experimental differential elastic scattering cross sections have been fitted to optical model calculations and the inelastic scattering cross sections have been compared with Hauser-Feshbach calculations, corrected for the fluctuation of compound-nuclear level widths.

  17. The Development of a Parameterized Scatter Removal Algorithm for Nuclear Materials Identification System Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Grogan, Brandon Robert [Univ. of Tennessee, Knoxville, TN (United States)

    2010-03-01

    This dissertation presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects non-intrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross-sections of features inside the object can be determined. The cross sections can then be used to identify the materials and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons which are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using
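
    The parameterization step described above, fitting a Gaussian to a Monte Carlo point scatter function so that only a few numbers need to be stored and reused, can be sketched as follows; the synthetic PScF shape, noise level and units are illustrative assumptions, not NMIS simulation output.

      import numpy as np
      from scipy.optimize import curve_fit

      # pretend Monte Carlo estimate of a point scatter function (PScF): noisy samples
      rng = np.random.default_rng(3)
      x = np.linspace(-20.0, 20.0, 401)                    # detector coordinate, cm
      pscf_mc = 0.05 * np.exp(-x**2 / (2 * 6.0**2)) + rng.normal(0.0, 0.002, x.size)

      def gaussian(x, amplitude, sigma):
          return amplitude * np.exp(-x**2 / (2.0 * sigma**2))

      (amplitude, sigma), _ = curve_fit(gaussian, x, pscf_mc, p0=[0.1, 5.0])
      print(f"parameterized PScF: amplitude={amplitude:.3f}, sigma={sigma:.2f} cm")
      # the fitted (amplitude, sigma) pair is what gets stored, so the Monte Carlo
      # simulation does not have to be rerun for each new measurement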

  18. THE DEVELOPMENT OF A PARAMETERIZED SCATTER REMOVAL ALGORITHM FOR NUCLEAR MATERIALS IDENTIFICATION SYSTEM IMAGING

    Energy Technology Data Exchange (ETDEWEB)

    Grogan, Brandon R [ORNL

    2010-05-01

    This report presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects nonintrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross sections of features inside the object can be determined. The cross sections can then be used to identify the materials, and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons that are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized, and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements, and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using the

  19. Segmentation-free empirical beam hardening correction for CT

    Energy Technology Data Exchange (ETDEWEB)

    Schüller, Sören; Sawall, Stefan [German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, Heidelberg 69120 (Germany); Stannigel, Kai; Hülsbusch, Markus; Ulrici, Johannes; Hell, Erich [Sirona Dental Systems GmbH, Fabrikstraße 31, 64625 Bensheim (Germany); Kachelrieß, Marc, E-mail: marc.kachelriess@dkfz.de [German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg (Germany)

    2015-02-15

    Purpose: The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for a general cupping, methods like water precorrection exist. They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other correction techniques. If only the information of a single energy scan is used, there are two types of corrections. The first one is a physical approach, whereby artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the spectrum used, the detector response, and the physical attenuation and scatter properties of the intersected materials. A second method is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physics-based technique both rely on a segmentation of the tissues present inside the patient. The difficulty is that beam hardening itself, scatter, and other effects that diminish the image quality also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The method proposed herein works similarly to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data that are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed in such a way that no additional calibration or parameter fitting is needed. Methods: To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. This step is essential for the

  20. The factorization method for inverse acoustic scattering in a layered medium

    International Nuclear Information System (INIS)

    Bondarenko, Oleksandr; Kirsch, Andreas; Liu, Xiaodong

    2013-01-01

    In this paper, we consider a problem of inverse acoustic scattering by an impenetrable obstacle embedded in a layered medium. We will show that the factorization method can be applied to recover the embedded obstacle; that is, the equation F̃g = φ_z is solvable if and only if the sampling point z is in the interior of the unknown obstacle. Here, F̃ is a self-adjoint operator related to the far field operator and φ_z is the far field pattern of the Green function with respect to the problem of scattering by the background medium for the point z. The validity of the factorization method is proven with the help of a mixed reciprocity principle and an application of the scattering operator. Due to the established mixed reciprocity principle, knowledge of the Green function for the background medium is no longer required, which makes the method attractive from the computational point of view. The paper is only concerned with sound-soft obstacles, but the analysis can be easily extended to sound-hard obstacles, or obstacles with separated sound-soft and sound-hard parts. Finally, we provide an explicit example for a radially symmetric case and present some numerical examples. (paper)

  1. Pion nucleus scattering lengths

    International Nuclear Information System (INIS)

    Huang, W.T.; Levinson, C.A.; Banerjee, M.K.

    1971-09-01

    Soft pion theory and the Fubini-Furlan mass dispersion relations have been used to analyze the pion nucleon scattering lengths and obtain a value for the sigma commutator term. With this value, and using the same principles, scattering lengths have been predicted for nuclei with mass numbers ranging from 6 to 23. Agreement with experiment is very good. For those who believe in the Gell-Mann-Levy sigma model, the evaluation of the commutator yields the value 0.26(m_σ/m_π)² for the sigma-nucleon coupling constant. The large dispersive corrections for the isosymmetric case imply that the basic idea behind many of the soft pion calculations, namely, slow variation of matrix elements from the soft pion limit to the physical pion mass, is not correct. 11 refs., 1 fig., 3 tabs

  2. Direct determination of scattering time delays using the R-matrix propagation method

    International Nuclear Information System (INIS)

    Walker, R.B.; Hayes, E.F.

    1989-01-01

    A direct method for determining time delays for scattering processes is developed using the R-matrix propagation method. The procedure involves the simultaneous generation of the global R matrix and its energy derivative. The necessary expressions to obtain the energy derivative of the S matrix are relatively simple and involve many of the same matrix elements required for the R-matrix propagation method. This method is applied to a simple model for a chemical reaction that displays sharp resonance features. The test results of the direct method are shown to be in excellent agreement with the traditional numerical differentiation method for scattering energies near the resonance energy. However, for sharp resonances the numerical differentiation method requires calculation of the S-matrix elements at many closely spaced energies. Since the direct method presented here involves calculations at only a single energy, one is able to generate accurate energy derivatives and time delays much more efficiently and reliably
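
    Once the S matrix and its energy derivative are available at a single energy, the Wigner-Smith time-delay matrix Q = -iħ S† dS/dE follows directly, without numerical differentiation over an energy grid. The sketch below uses an analytic one-channel Breit-Wigner S matrix as an illustrative stand-in for the R-matrix propagation output.

      import numpy as np

      hbar = 1.0                                  # illustrative units
      E_res, gamma = 1.0, 0.02                    # resonance position and width

      def s_matrix_and_derivative(E):
          """Analytic 1x1 Breit-Wigner S matrix and its energy derivative."""
          denom = E - E_res + 1j * gamma / 2
          S = np.array([[(E - E_res - 1j * gamma / 2) / denom]])
          dS = np.array([[1j * gamma / denom**2]])
          return S, dS

      S, dS = s_matrix_and_derivative(1.001)
      Q = -1j * hbar * S.conj().T @ dS            # Wigner-Smith time-delay matrix
      print("time delay:", Q[0, 0].real)          # approaches 4*hbar/gamma on resonance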

  3. Study of material science by neutron scattering

    International Nuclear Information System (INIS)

    Kim, H.J.; Yoon, B.K.; Cheon, B.C.; Lee, C.Y.; Kim, C.S.

    1980-01-01

    To develop accurate methods of texture measurement in metallic materials by neutron diffraction, the (100), (200), (111) and (310) pole figures have been measured for an oriented silicon steel sheet, and a study of correction methods for neutron absorption and extinction effects is currently in progress. For the quantitative analysis of the texture of polycrystalline material with a cubic structure, software has been developed to calculate inverse pole figures for an arbitrary direction specified in the specimen, as well as pole figures for arbitrarily chosen crystallographic planes, from three experimental pole figures. This work is to be extended to the calculation of the three-dimensional orientation distribution function and to the evaluation of errors in the quantitative analysis of texture. Work is also in progress on the study of the N-H...O hydrogen bond in amino acids by observing molecular motions using neutron inelastic scattering. Measurement of the neutron inelastic scattering spectrum of L-Serine has been completed at 100 K over the energy transfer range of 20-150 meV. (KAERI INIS Section)

  4. Relativistic convergent close-coupling method applied to electron scattering from mercury

    International Nuclear Information System (INIS)

    Bostock, Christopher J.; Fursa, Dmitry V.; Bray, Igor

    2010-01-01

    We report on the extension of the recently formulated relativistic convergent close-coupling (RCCC) method to accommodate two-electron and quasi-two-electron targets. We apply the theory to electron scattering from mercury and obtain differential and integrated cross sections for elastic and inelastic scattering. We compare with previous nonrelativistic convergent close-coupling (CCC) calculations and, for a number of transitions, obtain significantly better agreement with the experiment. The RCCC method is able to resolve structure in the integrated cross sections for the energy regime in the vicinity of the excitation thresholds for the (6s6p) ³P₀,₁,₂ states. These cross sections are associated with the formation of negative-ion (Hg⁻) resonances that could not be resolved with the nonrelativistic CCC method. The RCCC results are compared with the experiment and other relativistic theories.

  5. The various correction methods to the high precision aeromagnetic data

    International Nuclear Information System (INIS)

    Xu Guocang; Zhu Lin; Ning Yuanli; Meng Xiangbao; Zhang Hongjian

    2014-01-01

    In an airborne geophysical survey, the quality of the results depends first on the measurement precision of the instrument, the choice of measurement conditions and the reliability of data collection, and then on the correctness of the data processing methods and the soundness of the data interpretation. Geophysical data processing is therefore an important step in the comprehensive interpretation of the measurement results, and whether the processing methods are correct directly affects the quality of the final results. In recent years, in the course of production work and scientific research, we have developed a set of personal-computer software for processing aeromagnetic and radiometric survey data and have successfully applied it to production. The processing methods and flowcharts for high-precision aeromagnetic data are briefly introduced in this paper. The mathematical techniques of the various correction programs, namely the IGRF correction, the flying-height correction and the magnetic diurnal-variation correction, are discussed in detail, and their effectiveness is illustrated with an example. (authors)
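
    As an illustration of how such corrections are typically chained, the following Python sketch applies IGRF, diurnal-variation and flying-height corrections to a single total-field reading. The function name, the simple linear height gradient and all numerical values are illustrative assumptions and are not taken from the software described above.

        # Illustrative correction chain for a single aeromagnetic reading.
        # The constant vertical gradient stands in for a proper upward/downward
        # continuation; all numbers here are placeholders, not survey values.
        def correct_total_field(total_nT, igrf_nT, diurnal_nT,
                                flight_height_m, reference_height_m,
                                vertical_gradient_nT_per_m=-0.03):
            anomaly = total_nT - igrf_nT          # remove the IGRF main field
            anomaly -= diurnal_nT                 # remove base-station diurnal variation
            # reduce to a common reference height with the assumed gradient
            anomaly += vertical_gradient_nT_per_m * (reference_height_m - flight_height_m)
            return anomaly

        print(correct_total_field(48512.3, 48450.0, 12.4, 120.0, 80.0))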

  6. Comparison of approximate methods for multiple scattering in high-energy collisions. II

    International Nuclear Information System (INIS)

    Nolan, A.M.; Tobocman, W.; Werby, M.F.

    1976-01-01

    The scattering in one dimension of a particle by a target of N like particles in a bound state has been studied. The exact result for the transmission probability has been compared with the predictions of the Glauber theory, the Watson optical potential model, and the adiabatic (or fixed-scatterer) approximation. Among the approximate methods, the optical potential model is found to be second best. The Watson method is found to work better when the kinematics suggested by Foldy and Walecka are used rather than those suggested by Watson, that is to say, when the two-body amplitude is evaluated with the nucleon-nucleon reduced mass

  7. Decay correction methods in dynamic PET studies

    International Nuclear Information System (INIS)

    Chen, K.; Reiman, E.; Lawson, M.

    1995-01-01

    In order to reconstruct positron emission tomography (PET) images in quantitative dynamic studies, the data must be corrected for radioactive decay. One of the two commonly used methods ignores physiological processes, including blood flow, that occur at the same time as radioactive decay; the other makes incorrect use of time-accumulated PET counts. In simulated dynamic PET studies using ¹¹C-acetate and ¹⁸F-fluorodeoxyglucose (FDG), these methods are shown to result in biased estimates of the time-activity curve (TAC) and model parameters. New methods described in this article provide significantly improved parameter estimates in dynamic PET studies
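
    For reference, the widely used frame-integrated decay-correction factor, which accounts for decay during the frame rather than only at its midpoint, can be computed as in the Python sketch below. This is a textbook expression, not the specific estimator developed in the article; the half-lives are standard physical constants.

        import math

        # Half-lives in seconds (standard physical constants).
        HALF_LIFE_S = {"C-11": 20.4 * 60.0, "F-18": 109.8 * 60.0}

        def frame_decay_correction(t_start_s, t_end_s, isotope="F-18"):
            """Decay-correction factor for a frame acquired over [t_start, t_end],
            referenced to t = 0 (textbook frame-integrated form, not the
            article's specific method)."""
            lam = math.log(2.0) / HALF_LIFE_S[isotope]
            duration = t_end_s - t_start_s
            # Measured counts are proportional to the integral of exp(-lam*t)
            # over the frame; dividing by it (per unit time) undoes the decay.
            return lam * duration / (math.exp(-lam * t_start_s) - math.exp(-lam * t_end_s))

        # Example: a 5-minute F-18 frame starting 30 minutes after injection.
        print(frame_decay_correction(30 * 60, 35 * 60))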

  8. Method of absorbance correction in a spectroscopic heating value sensor

    Science.gov (United States)

    Saveliev, Alexei; Jangale, Vilas Vyankatrao; Zelepouga, Sergeui; Pratapas, John

    2013-09-17

    A method and apparatus for absorbance correction in a spectroscopic heating value sensor in which a reference light intensity measurement is made on a non-absorbing reference fluid, a light intensity measurement is made on a sample fluid, and a measured light absorbance of the sample fluid is determined. A corrective light intensity measurement at a non-absorbing wavelength of the sample fluid is made on the sample fluid from which an absorbance correction factor is determined. The absorbance correction factor is then applied to the measured light absorbance of the sample fluid to arrive at a true or accurate absorbance for the sample fluid.
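
    One plausible reading of this scheme, sketched in Python below, treats the apparent absorbance at the non-absorbing wavelength as a baseline (scattering, window fouling, source drift) and subtracts it from the measured absorbance. The variable names and the subtractive form are assumptions made for illustration, since the abstract does not fix them.

        import math

        def corrected_absorbance(i_sample, i_reference,
                                 i_sample_nonabs, i_reference_nonabs):
            # Illustrative sketch; the subtractive baseline form is an assumption.
            # Absorbance at the analytical wavelength (reference = non-absorbing fluid).
            measured = -math.log10(i_sample / i_reference)
            # Apparent absorbance at a wavelength where the sample does not absorb,
            # attributed to scattering, fouling or drift rather than true absorption.
            baseline = -math.log10(i_sample_nonabs / i_reference_nonabs)
            return measured - baseline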

  9. A model-based radiography restoration method based on simple scatter-degradation scheme for improving image visibility

    Science.gov (United States)

    Kim, K.; Kang, S.; Cho, H.; Kang, W.; Seo, C.; Park, C.; Lee, D.; Lim, H.; Lee, H.; Kim, G.; Park, S.; Park, J.; Kim, W.; Jeon, D.; Woo, T.; Oh, J.

    2018-02-01

    In conventional planar radiography, image visibility is often limited, mainly due to the superimposition of the object structure under investigation and the artifacts caused by scattered x-rays and noise. Several methods, including computed tomography (CT) as a multiplanar imaging modality, air-gap and grid techniques for the reduction of scatter, phase-contrast imaging as another image-contrast modality, etc., have been extensively investigated in an attempt to overcome these difficulties. However, those methods typically require higher x-ray doses or special equipment. In this work, as another approach, we propose a new model-based radiography restoration method based on a simple scatter-degradation scheme in which the intensity of scattered x-rays and the transmission function of a given object are estimated from a single x-ray image to restore the degraded image. We implemented the proposed algorithm and performed an experiment to demonstrate its viability. Our results indicate that the degradation of image characteristics by scattered x-rays and noise was effectively recovered by the proposed method, which considerably improves image visibility in radiography.
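
    A minimal Python sketch of such a scatter-degradation model is given below: the scatter field is taken as a heavily blurred, scaled copy of the detected image and removed before the transmission is recovered. The Gaussian kernel, the fixed scatter fraction and the function name are illustrative assumptions, not the estimation procedure of the paper.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def descatter(image, open_field, scatter_fraction=0.3, kernel_sigma=25.0):
            """Toy restoration: detected = primary + scatter, with the scatter
            modelled as a blurred, scaled copy of the detected image (an assumed
            model, not the paper's estimator)."""
            scatter = scatter_fraction * gaussian_filter(image, kernel_sigma)
            primary = np.clip(image - scatter, 1e-6, None)
            # Dividing by the open-field (unattenuated) intensity gives an
            # estimate of the object's transmission function.
            return primary / np.maximum(open_field, 1e-6)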

  10. An efficient dose-compensation method for proximity effect correction

    International Nuclear Information System (INIS)

    Wang Ying; Han Weihua; Yang Xiang; Zhang Yang; Yang Fuhua; Zhang Renping

    2010-01-01

    A novel, simple dose-compensation method is developed for proximity effect correction in electron-beam lithography. The sizes of the exposed patterns depend on the dose factors while the other exposure parameters (including accelerating voltage, resist thickness, exposure step size, substrate material, and so on) remain constant. The method is based on two reasonable assumptions in the evaluation of the compensated dose factor: one is that the relation between dose factors and circle diameters is linear in the range under consideration; the other is that, for simplicity, the compensated dose factor is only affected by the nearest neighbors. Four-layer-hexagon photonic crystal structures were fabricated as test patterns to demonstrate the method. Compared to the uncorrected structures, the homogeneity of the corrected hole size in the photonic crystal structures was clearly improved. (semiconductor technology)
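
    Under the two stated assumptions, the compensation reduces to scaling each feature's dose by the number of nearest neighbours that expose it, as in this Python sketch. The 4% per-neighbour contribution is a made-up illustrative value, not one taken from the paper.

        def compensated_dose_factor(base_dose, n_nearest_neighbors,
                                    neighbor_contribution=0.04):
            # If the printed size is linear in the total received dose and each
            # nearest neighbour adds a fixed fraction of the primary dose, keeping
            # base_dose * (1 + c * n) constant keeps the printed size constant.
            # neighbor_contribution = 0.04 is a made-up illustrative value.
            return base_dose / (1.0 + neighbor_contribution * n_nearest_neighbors)

        # A hole in the interior of a hexagonal lattice has six nearest neighbours.
        print(compensated_dose_factor(1.0, 6))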

  11. A method for the generation of random multiple Coulomb scattering angles

    International Nuclear Information System (INIS)

    Campbell, J.R.

    1995-06-01

    A method for the random generation of spatial angles drawn from non-Gaussian multiple Coulomb scattering distributions is presented. The method employs direct numerical inversion of cumulative probability distributions computed from the universal non-Gaussian angular distributions of Marion and Zimmerman. (author). 12 refs., 3 figs
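
    The same general strategy, direct numerical inversion of a tabulated cumulative distribution, can be sketched in a few lines of Python. The angular distribution itself must be supplied by the caller; the Marion-Zimmerman universal distributions are not reproduced here.

        import numpy as np

        def sample_scattering_angles(theta_grid, pdf_values, n_samples, rng=None):
            """Draw angles from a tabulated distribution by inverting its CDF
            numerically (illustrative sketch of the general technique)."""
            rng = np.random.default_rng() if rng is None else rng
            cdf = np.cumsum(pdf_values)
            cdf = cdf / cdf[-1]                   # normalise to [0, 1]
            u = rng.random(n_samples)             # uniform deviates
            return np.interp(u, cdf, theta_grid)  # numerical inversion by interpolation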

  12. The generalized PN synthetic acceleration method for linear transport problems with highly anisotropic scattering

    International Nuclear Information System (INIS)

    Khattab, K.M.

    1998-01-01

    The diffusion synthetic acceleration (DSA) method has been known to be an effective tool for accelerating the iterative solution of transport equations with isotropic or mildly anisotropic scattering. However, the DSA method is not effective for transport equations that have strongly anisotropic scattering. A generalization of the modified DSA (MDSA) method is proposed. This method converges faster (in clock time) than the MDSA method. The method is developed, the results of a Fourier analysis that theoretically predicts its efficiency are described, and numerical results that verify the theoretical prediction are presented. (author). 9 refs., 2 tabs., 5 figs

  13. Second Born approximation in elastic-electron scattering from nuclear static electro-magnetic multipoles

    International Nuclear Information System (INIS)

    Al-Khamiesi, I.M.; Kerimov, B.K.

    1988-01-01

    Second Born approximation corrections to electron scattering by nuclei with arbitrary spin are considered. Explicit integral expressions for the charge, magnetic dipole and interference differential cross sections are obtained. The magnetic and interference relative corrections are then investigated in the case of backward electron scattering, using shell-model form factors for the nuclear targets ⁹Be, ¹⁰B, and ¹⁴N. To understand the exponential growth of these corrections with the square of the electron energy, K₀², the case of electron scattering by ⁶Li is considered using a monopole-model charge form factor with power-law asymptotics. 11 refs., 2 figs. (author)

  14. Determination of the mass attenuation coefficients for X-ray fluorescence measurements correction by the Rayleigh to Compton scattering ratio

    Energy Technology Data Exchange (ETDEWEB)

    Conti, C.C., E-mail: ccconti@ird.gov.br [Institute for Radioprotection and Dosimetry – IRD/CNEN, Rio de Janeiro (Brazil); Physics Institute, State University of Rio de Janeiro – UERJ, Rio de Janeiro (Brazil); Anjos, M.J. [Physics Institute, State University of Rio de Janeiro – UERJ, Rio de Janeiro (Brazil); Salgado, C.M. [Nuclear Engineering Institute – IEN/CNEN, Rio de Janeiro (Brazil)

    2014-09-15

    Highlights: • This work describes a procedure for sample self-absorption correction. • The use of Monte Carlo simulation to calculate the mass attenuation coefficient curve was effective. • No need for a transmission measurement, saving time, financial resources and effort. • This article provides the curves for the 90° scattering angle. • Calculation available on-line at (www.macx.net.br). -- Abstract: The X-ray fluorescence technique plays an important role in nondestructive analysis nowadays. The development of equipment, including portable instruments, enables a wide assortment of possibilities for the analysis of stable elements, even in trace concentrations. Nevertheless, despite these advantages, one important drawback is radiation self-attenuation in the sample being measured, which needs to be considered in the calculation for the proper determination of elemental concentrations. The mass attenuation coefficient can be determined by a transmission measurement, but in this case the sample must be in slab-shaped geometry and two different setups and measurements are required. The Rayleigh to Compton scattering ratio, determined from the X-ray fluorescence spectrum, provides a link to the mass attenuation coefficient by means of a polynomial-type equation. This work presents a way to construct a Rayleigh to Compton scattering ratio versus mass attenuation coefficient curve using the MCNP5 Monte Carlo computer code. The comparison between the calculated and literature values of the mass attenuation coefficient for some known samples showed agreement to within 15%. This calculation procedure is available on-line at (www.macx.net.br)
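
    In practice the final step is just the evaluation of a calibration polynomial, as in the Python sketch below. The coefficients must come from a Monte Carlo calibration (e.g. with MCNP5) for the actual measurement geometry; the values shown are placeholders, not the published curve.

        import numpy as np

        def mass_attenuation_from_rc_ratio(rc_ratio, poly_coeffs):
            """Map a Rayleigh-to-Compton scattering ratio to a mass attenuation
            coefficient (cm^2/g) via a calibration polynomial (sketch only)."""
            return np.polyval(poly_coeffs, rc_ratio)

        # Placeholder coefficients (highest power first), purely for illustration.
        coeffs = [0.8, 0.05, 0.02]
        print(mass_attenuation_from_rc_ratio(0.35, coeffs))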

  15. Nuclear Compton scattering

    International Nuclear Information System (INIS)

    Christillin, P.

    1986-01-01

    The theory of nuclear Compton scattering is reformulated with explicit consideration of both virtual and real pionic degrees of freedom. The effects due to low-lying nuclear states, to seagull terms, to pion condensation and to the Δ dynamics in the nucleus, and their interplay in the different energy regions, are examined. It is shown that all corrections to the one-body terms, whose diffractive behaviour is determined by the nuclear form factor, have an effective two-body character. The possibility of using Compton scattering as a complementary source of information about nuclear dynamics is stressed once more. (author)

  16. Improvement of spatial discretization error on the semi-analytic nodal method using the scattered source subtraction method

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Tatsumi, Masahiro

    2006-01-01

    In this paper, the scattered source subtraction (SSS) method is newly proposed to improve the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. In the SSS method, the scattered source is subtracted from both sides of the diffusion or transport equation to make the spatial variation of the source term small. The same neutron balance equation is still used in the SSS method. Since the SSS method just modifies the coefficients of the node coupling equations (those used in evaluating the response of partial currents), its implementation is easy. The validity of the present method is verified through test calculations carried out in PWR multi-assembly configurations. The calculation results show that the SSS method can significantly improve the spatial discretization error. Since the SSS method has no negative impact on execution time, convergence behavior or memory requirements, it will be useful for reducing the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. (author)
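
    Schematically, in within-group diffusion form (a simplified illustration; the paper's actual node-coupling equations are not reproduced here), the subtraction amounts to

        % Schematic diffusion-form illustration of scattered source subtraction;
        % Sigma_s is the within-group scattering cross section, Q all other sources.
        -D\nabla^{2}\phi + \Sigma_{t}\,\phi = \Sigma_{s}\,\phi + Q
        \quad\Longrightarrow\quad
        -D\nabla^{2}\phi + \bigl(\Sigma_{t} - \Sigma_{s}\bigr)\phi = Q ,

    so the source that the flat-source approximation must represent within a node no longer contains the strongly space-dependent within-group scattering term.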

  17. A novel 3D absorption correction method for quantitative EDX-STEM tomography

    International Nuclear Information System (INIS)

    Burdet, Pierre; Saghi, Z.; Filippin, A.N.; Borrás, A.; Midgley, P.A.

    2016-01-01

    This paper presents a novel 3D method to correct for absorption in energy dispersive X-ray (EDX) microanalysis of heterogeneous samples of unknown structure and composition. By using STEM-based tomography coupled with EDX, an initial 3D reconstruction is used to extract the location of generated X-rays as well as the X-ray path through the sample to the surface. The absorption correction needed to retrieve the generated X-ray intensity is then calculated voxel-by-voxel estimating the different compositions encountered by the X-ray. The method is applied to a core/shell nanowire containing carbon and oxygen, two elements generating highly absorbed low energy X-rays. Absorption is shown to cause major reconstruction artefacts, in the form of an incomplete recovery of the oxide and an erroneous presence of carbon in the shell. By applying the correction method, these artefacts are greatly reduced. The accuracy of the method is assessed using reference X-ray lines with low absorption. - Highlights: • A novel 3D absorption correction method is proposed for 3D EDX-STEM tomography. • The absorption of X-rays along the path to the surface is calculated voxel-by-voxel. • The method is applied on highly absorbed X-rays emitted from a core/shell nanowire. • Absorption is shown to cause major artefacts in the reconstruction. • Using the absorption correction method, the reconstruction artefacts are greatly reduced.

  18. A novel 3D absorption correction method for quantitative EDX-STEM tomography

    Energy Technology Data Exchange (ETDEWEB)

    Burdet, Pierre, E-mail: pierre.burdet@a3.epfl.ch [Department of Materials Science and Metallurgy, University of Cambridge, Charles Babbage Road 27, Cambridge CB3 0FS, Cambridgeshire (United Kingdom); Saghi, Z. [Department of Materials Science and Metallurgy, University of Cambridge, Charles Babbage Road 27, Cambridge CB3 0FS, Cambridgeshire (United Kingdom); Filippin, A.N.; Borrás, A. [Nanotechnology on Surfaces Laboratory, Materials Science Institute of Seville (ICMS), CSIC-University of Seville, C/ Americo Vespucio 49, 41092 Seville (Spain); Midgley, P.A. [Department of Materials Science and Metallurgy, University of Cambridge, Charles Babbage Road 27, Cambridge CB3 0FS, Cambridgeshire (United Kingdom)

    2016-01-15

    This paper presents a novel 3D method to correct for absorption in energy dispersive X-ray (EDX) microanalysis of heterogeneous samples of unknown structure and composition. By using STEM-based tomography coupled with EDX, an initial 3D reconstruction is used to extract the location of generated X-rays as well as the X-ray path through the sample to the surface. The absorption correction needed to retrieve the generated X-ray intensity is then calculated voxel-by-voxel estimating the different compositions encountered by the X-ray. The method is applied to a core/shell nanowire containing carbon and oxygen, two elements generating highly absorbed low energy X-rays. Absorption is shown to cause major reconstruction artefacts, in the form of an incomplete recovery of the oxide and an erroneous presence of carbon in the shell. By applying the correction method, these artefacts are greatly reduced. The accuracy of the method is assessed using reference X-ray lines with low absorption. - Highlights: • A novel 3D absorption correction method is proposed for 3D EDX-STEM tomography. • The absorption of X-rays along the path to the surface is calculated voxel-by-voxel. • The method is applied on highly absorbed X-rays emitted from a core/shell nanowire. • Absorption is shown to cause major artefacts in the reconstruction. • Using the absorption correction method, the reconstruction artefacts are greatly reduced.
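
    A much simplified Python sketch of the voxel-by-voxel idea is shown below for the special case where every X-ray leaves the reconstructed volume along one axis towards the detector. Real data require ray tracing along the actual voxel-to-detector directions; the axis-aligned geometry, function name and voxel size are assumptions made for illustration, not the paper's implementation.

        import numpy as np

        def absorption_correction_factors(mu_volume, axis=0, voxel_size_cm=1e-6):
            """Per-voxel correction factors exp(+integral of mu along the exit path).

            Sketch only: mu_volume holds the linear absorption coefficient (1/cm)
            of each voxel at the X-ray line of interest, and the detector is
            assumed to lie beyond the high-index face of `axis`. Multiplying each
            voxel's measured intensity by its factor estimates the generated intensity.
            """
            flipped = np.flip(mu_volume, axis=axis)
            cumulative = np.cumsum(flipped, axis=axis)          # mu summed towards the exit face
            path_integral = np.flip(cumulative, axis=axis) * voxel_size_cm
            return np.exp(path_integral)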

  19. Target mass effects in polarized deep-inelastic scattering

    International Nuclear Information System (INIS)

    Piccione, A.

    1998-01-01

    We present a computation of nucleon mass corrections to nucleon structure functions for polarized deep-inelastic scattering. We perform a fit to existing data including mass corrections at first order in m²/Q², and we study the effect of these corrections on physically interesting quantities. We conclude that mass corrections are generally small and compatible with current estimates of higher twist uncertainties, when available. (orig.)
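
    For orientation, target mass corrections of this kind are conventionally organised in powers of m²/Q² through the Nachtmann variable, a standard ingredient of the formalism rather than something specific to this paper:

        % Standard Nachtmann scaling variable (m = nucleon mass, x = Bjorken x).
        \xi = \frac{2x}{1 + \sqrt{1 + 4x^{2}m^{2}/Q^{2}}} ,

    which reduces to the Bjorken variable x when the m²/Q² corrections are negligible.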

  20. In situ surface roughness measurement using a laser scattering method

    Science.gov (United States)

    Tay, C. J.; Wang, S. H.; Quan, C.; Shang, H. M.

    2003-03-01

    In this paper, the design and development of an optical probe for the in situ measurement of surface roughness are discussed. Based on the light scattering principle, the probe, which consists of a laser diode, a measuring lens and a linear photodiode array, is designed to capture the light scattered from a test surface over a relatively large scattering angle ϕ (=28°). This capability increases the measuring range and enhances the repeatability of the results. A coaxial arrangement that incorporates a dual laser beam and a constant compressed-air stream renders the proposed system insensitive to movement or vibration of the test surface as well as to surface conditions. Tests were conducted on workpieces mounted on a turning machine operated at different cutting speeds. Test specimens that had undergone different machining processes and had different surface finishes were also studied. The results obtained demonstrate the feasibility of surface roughness measurement using the proposed method.