Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects
Gordon, Howard R.; Castano, Diego J.
1987-01-01
Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.
Energy Technology Data Exchange (ETDEWEB)
Kim, Ye-Seul; Park, Hye-Suk; Kim, Hee-Joung [Yonsei University, Wonju (Korea, Republic of); Choi, Young-Wook; Choi, Jae-Gu [Korea Electrotechnology Research Institute, Ansan (Korea, Republic of)
2014-12-15
Digital breast tomosynthesis (DBT) is a technique that was developed to overcome the limitations of conventional digital mammography by reconstructing slices through the breast from projections acquired at different angles. In developing and optimizing DBT, the x-ray scatter reduction technique remains a significant challenge due to projection geometry and radiation dose limitations. The most common approach to scatter reduction is the beam-stop-array (BSA) algorithm; however, this method raises concerns regarding the additional exposure involved in acquiring the scatter distribution. The compressed breast is roughly symmetric, and the scatter profiles from projections acquired at axially opposite angles are similar to mirror images. The purpose of this study was to apply the BSA algorithm with only two scans with a beam stop array, which estimates the scatter distribution with minimal additional exposure. The results of the scatter correction with angular interpolation were comparable to those of the scatter correction with all scatter distributions at each angle. The exposure increase was less than 13%. This study demonstrated the influence of the scatter correction obtained by using the BSA algorithm with minimum exposure, which indicates its potential for practical applications.
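The angular-interpolation step described above can be sketched as follows; the linear weighting scheme and the function name are assumptions for illustration, not the authors' exact implementation:

```python
import numpy as np

def estimate_scatter_profiles(scatter_pos, scatter_neg, angles, angle_max):
    """Estimate per-angle scatter distributions from the two BSA scans
    acquired at +angle_max and -angle_max.

    For a roughly symmetric compressed breast, scatter profiles at
    axially opposite angles are near mirror images, so intermediate
    angles can be filled in by interpolating between the two measured
    scans (linear interpolation assumed here).
    """
    profiles = []
    for theta in angles:
        w = (theta + angle_max) / (2.0 * angle_max)  # 0 at -max, 1 at +max
        profiles.append(w * scatter_pos + (1.0 - w) * scatter_neg)
    return profiles
```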
Cross plane scattering correction
International Nuclear Information System (INIS)
Shao, L.; Karp, J.S.
1990-01-01
Most previous scattering correction techniques for PET are based on assumptions made for a single transaxial plane and are independent of axial variations. These techniques will incorrectly estimate the scattering fraction for volumetric PET imaging systems since they do not take cross-plane scattering into account. In this paper, the authors propose a new point source scattering deconvolution method (2-D). The cross-plane scattering is incorporated into the algorithm by modeling a scattering point source function. In the model, the scattering dependence on both the axial and transaxial directions is reflected in the exponential fitting parameters, and these parameters are directly estimated from a limited number of measured point response functions. The authors' results comparing the standard in-plane point source deconvolution to their cross-plane source deconvolution show that for a small source, the former technique overestimates the scatter fraction in the plane of the source and underestimates the scatter fraction in adjacent planes. In addition, the authors also propose a simple approximation technique for deconvolution
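A minimal sketch of a scattering point-source function with exponential dependence in both directions, in the spirit of the model described (the functional form and parameter names are illustrative assumptions):

```python
import numpy as np

def scatter_psf(r_trans, z_axial, amp, b_trans, b_axial):
    """Scattering point-source response with separate exponential falloff
    in the transaxial (r) and axial (z) directions; amp, b_trans and
    b_axial play the role of the fitting parameters that would be
    estimated from measured point response functions."""
    return amp * np.exp(-b_trans * np.abs(r_trans) - b_axial * np.abs(z_axial))
```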
Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong
2017-10-01
Focusing and imaging through scattering media has been proven possible with high resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), thereby improving the focusing quality. The correction phase is often found by global searching algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually with the progression of optimization, causing the improvement factor of optimization to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor with the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method has proven significantly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality is improved as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demand on the dynamic range of detection devices. The proposed method holds potential in applications such as high-resolution imaging in deep tissue.
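The interleaved grouping at the heart of the ISC method can be sketched as below; the GA itself is abstracted into a user-supplied per-group optimizer, so this is only the bookkeeping layer, not the authors' full procedure:

```python
import numpy as np

def interleaved_groups(n_segments, n_groups):
    """Split SLM phase segments into interleaved groups: segment i goes
    to group i % n_groups, so each group samples the whole aperture
    rather than one contiguous patch."""
    return [list(range(g, n_segments, n_groups)) for g in range(n_groups)]

def isc_optimize(n_segments, n_groups, optimize_group):
    """Run a (user-supplied) optimizer sequentially on each interleaved
    group and merge the per-group phases into one correction mask."""
    mask = np.zeros(n_segments)
    for group in interleaved_groups(n_segments, n_groups):
        mask[group] = optimize_group(group)  # e.g. a GA search over this group only
    return mask
```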
Energy Technology Data Exchange (ETDEWEB)
Chun, Se Young [School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan (Korea, Republic of)
2016-03-15
PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms have enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images such as CT and MRI from hybrid scanners are also important ingredients for further improving the image quality of PET or SPECT. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we review works using anatomical information in molecular image reconstruction algorithms for better image quality by describing mathematical models, discussing sources of anatomical information for different cases, and showing some examples.
International Nuclear Information System (INIS)
Thing, Rune S.; Bernchou, Uffe; Brink, Carsten; Mainegra-Hing, Ernesto
2013-01-01
Purpose: Cone beam computed tomography (CBCT) image quality is limited by scattered photons. Monte Carlo (MC) simulations provide the ability of predicting the patient-specific scatter contamination in clinical CBCT imaging. Lengthy simulations prevent MC-based scatter correction from being fully implemented in a clinical setting. This study investigates the combination of fast MC simulations to predict scatter distributions with a ray tracing algorithm to allow calibration between simulated and clinical CBCT images. Material and methods: An EGSnrc-based user code (egs_cbct) was used to perform MC simulations of an Elekta XVI CBCT imaging system. A 60 keV x-ray source was used, and air kerma was scored at the detector plane. Several variance reduction techniques (VRTs) were used to increase the scatter calculation efficiency. Three patient phantoms based on CT scans were simulated, namely a brain, a thorax and a pelvis scan. A ray tracing algorithm was used to calculate the detector signal due to primary photons. A total of 288 projections were simulated, one for each thread on the computer cluster used for the investigation. Results: Scatter distributions for the brain, thorax and pelvis scans were simulated within 2% statistical uncertainty in two hours per scan. Within the same time, the ray tracing algorithm provided the primary signal for each of the projections. Thus, all the data needed for MC-based scatter correction in clinical CBCT imaging were obtained within two hours per patient, using a full simulation of the clinical CBCT geometry. Conclusions: This study shows that the use of MC-based scatter corrections in CBCT imaging has great potential to improve CBCT image quality. By use of powerful VRTs to predict scatter distributions and a ray tracing algorithm to calculate the primary signal, it is possible to obtain the necessary data for patient-specific MC scatter correction within two hours per patient.
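A sketch of how the simulated scatter and the ray-traced primary might be combined to correct a measured projection; the sum-matching calibration used here is an assumption for illustration, and the paper's actual calibration between simulated and clinical images may differ:

```python
import numpy as np

def scatter_correct_projection(measured, mc_scatter, rt_primary):
    """Subtract the MC-predicted scatter from a measured CBCT projection.

    The MC scatter estimate is scaled so that scaled scatter plus the
    ray-traced primary matches the measured signal in total, which
    calibrates simulated intensities against clinical ones (a simple,
    assumed calibration scheme).
    """
    c = (measured.sum() - rt_primary.sum()) / mc_scatter.sum()
    return measured - c * mc_scatter
```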
Chi, Zhijun; Du, Yingchao; Huang, Wenhui; Tang, Chuanxiang
2017-12-01
The necessity for compact and relatively low cost x-ray sources with monochromaticity, continuous tunability of x-ray energy, high spatial coherence, straightforward polarization control, and high brightness has led to the rapid development of Thomson scattering x-ray sources. To meet the requirement of in-situ monochromatic computed tomography (CT) for large-scale and/or high-attenuation materials based on this type of x-ray source, there is an increasing demand for effective algorithms to correct the energy-angle correlation. In this paper, we take advantage of the parametrization of the x-ray attenuation coefficient to resolve this problem. The linear attenuation coefficient of a material can be decomposed into a linear combination of the energy-dependent photoelectric and Compton cross-sections in the keV energy regime without K-edge discontinuities, and the line integrals of the decomposition coefficients of the above two parts can be determined by performing two spectrally different measurements. After that, the line integral of the linear attenuation coefficient of an imaging object at any energy of interest can be derived through the above parametrization formula, and a monochromatic CT can be reconstructed at this energy using traditional reconstruction methods, e.g., filtered back projection or the algebraic reconstruction technique. Not only can monochromatic CT be realized, but the distributions of the effective atomic number and electron density of the imaging object can also be retrieved at the expense of a dual-energy CT scan. Simulation results validate our proposal and are shown in this paper. Our results will further expand the scope of application for Thomson scattering x-ray sources.
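The two-measurement decomposition can be illustrated as follows; the energy dependences used for the photoelectric and Compton parts are crude placeholders (E^-3 and 1/E), not the actual cross-section models:

```python
import numpy as np

def monochromatic_line_integral(p1, p2, E1, E2, E_target,
                                f_pe=lambda E: E**-3.0,
                                f_kn=lambda E: 1.0 / E):
    """Solve the two-measurement decomposition and synthesize the line
    integral at an arbitrary energy.

    p1, p2 : measured line integrals at (effective) energies E1, E2.
    f_pe, f_kn : assumed energy dependences of the photoelectric and
    Compton cross-sections (simplified stand-ins here).
    """
    A = np.array([[f_pe(E1), f_kn(E1)],
                  [f_pe(E2), f_kn(E2)]])
    a_pe, a_c = np.linalg.solve(A, np.array([p1, p2]))
    return a_pe * f_pe(E_target) + a_c * f_kn(E_target)
```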
International Nuclear Information System (INIS)
Stimpson, Shane; Collins, Benjamin; Kochunas, Brendan
2017-01-01
The MPACT code, being developed collaboratively by the University of Michigan and Oak Ridge National Laboratory, is the primary deterministic neutron transport solver being deployed within the Virtual Environment for Reactor Applications (VERA) as part of the Consortium for Advanced Simulation of Light Water Reactors (CASL). In many applications of the MPACT code, transport-corrected scattering has proven to be an obstacle in terms of stability, and considerable effort has been made to try to resolve the convergence issues that arise from it. Most of the convergence problems seem related to the transport-corrected cross sections, particularly when used in the 2-D method of characteristics (MOC) solver, which is the focus of this work. In this paper, the stability and performance of the 2-D MOC solver in MPACT are evaluated for two iteration schemes: Gauss-Seidel and Jacobi. With the Gauss-Seidel approach, as the MOC solver loops over groups, it uses the flux solution from the previous group to construct the inscatter source for the next group. Alternatively, the Jacobi approach uses only the fluxes from the previous outer iteration to determine the inscatter source for each group. Consequently, for the Jacobi iteration, the loop over groups can be moved from the outermost loop, as is the case with the Gauss-Seidel sweeper, to the innermost loop, allowing for a substantial increase in efficiency by minimizing the overhead of retrieving segment, region, and surface index information from the ray tracing data. Several test problems are assessed: (1) Babcock & Wilcox 1810 Core I, (2) Dimple S01A-Sq, (3) VERA Progression Problem 5a, and (4) VERA Problem 2a. The Jacobi iteration exhibits better stability than Gauss-Seidel, allowing for converged solutions to be obtained over a much wider range of iteration control parameters. Additionally, the MOC solve time with the Jacobi approach is roughly 2.0-2.5× faster per sweep. While the performance and stability of the Jacobi
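The difference between the two iteration schemes comes down to which fluxes feed the inscatter source; a minimal sketch with scalar per-group fluxes (the data layout is illustrative, not MPACT's):

```python
def inscatter_source(flux_new, flux_old, sigma_s, g, scheme):
    """Build the group-g inscatter source for a multigroup sweep.

    Gauss-Seidel uses fluxes already updated in this outer iteration for
    groups g' < g; Jacobi uses only last-iteration fluxes, which is what
    lets the group loop move innermost.
    """
    G = len(flux_old)
    src = 0.0
    for gp in range(G):
        if gp == g:
            continue
        use_new = (scheme == "gauss-seidel" and gp < g)
        phi = flux_new[gp] if use_new else flux_old[gp]
        src += sigma_s[gp][g] * phi
    return src
```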
Atmospheric scattering corrections to solar radiometry
International Nuclear Information System (INIS)
Box, M.A.; Deepak, A.
1979-01-01
Whenever a solar radiometer is used to measure direct solar radiation, some diffuse sky radiation invariably enters the detector's field of view along with the direct beam. Therefore, the atmospheric optical depth obtained by the use of Bouguer's transmission law (also called the Beer-Lambert law), which is valid only for direct radiation, needs to be corrected by taking account of the scattered radiation. In this paper we discuss the correction factors needed to account for the diffuse (i.e., singly and multiply scattered) radiation and the algorithms developed for retrieving the aerosol size distribution from such measurements. For a radiometer with a small field of view (half-cone angle 0 ) and relatively clear skies (optical depths <0.4), it is shown that the total diffuse contribution represents approximately 1% of the total intensity. It is assumed here that the main contributions to the diffuse radiation within the detector's view cone are due to single scattering by molecules and aerosols and multiple scattering by molecules alone, aerosol multiple scattering contributions being treated as negligibly small. The theory and the numerical results discussed in this paper will be helpful not only in making corrections to the measured optical depth data but also in designing improved solar radiometers.
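The correction described above can be illustrated by removing an assumed diffuse fraction before applying Bouguer's law; the ~1% figure comes from the abstract, and the function below is a simplified illustration, not the paper's retrieval algorithm:

```python
import numpy as np

def corrected_optical_depth(I_measured, I0, airmass, diffuse_fraction=0.01):
    """Optical depth from Bouguer's law after removing the diffuse
    contribution in the radiometer's view cone.

    diffuse_fraction is the scattered share of the measured signal
    (roughly 1% for a narrow field of view and optical depth < 0.4);
    the exact value is instrument- and sky-dependent.
    """
    I_direct = I_measured * (1.0 - diffuse_fraction)
    return -np.log(I_direct / I0) / airmass
```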
Source distribution dependent scatter correction for PVI
International Nuclear Information System (INIS)
Barney, J.S.; Harrop, R.; Dykstra, C.J.
1993-01-01
Source distribution dependent scatter correction methods which incorporate different amounts of information about the source position and material distribution have been developed and tested. The techniques use image-to-projection integral transformation incorporating varying degrees of information on the distribution of scattering material, or convolution subtraction methods, with some information about the scattering material included in one of the convolution methods. To test the techniques, the authors apply them to data generated by Monte Carlo simulations which use geometric shapes or a voxelized density map to model the scattering material. Source position and material distribution have been found to have some effect on scatter correction. An image-to-projection method which incorporates a density map produces accurate scatter correction but is computationally expensive. Simpler methods, both image-to-projection and convolution, can also provide effective scatter correction
Research of scatter correction on industry computed tomography
International Nuclear Information System (INIS)
Sun Shaohua; Gao Wenhuan; Zhang Li; Chen Zhiqiang
2002-01-01
In the scanning process of industrial computed tomography, scatter blurs the reconstructed image. The grey values of pixels in the reconstructed image deviate from the true values, and this effect needs to be corrected. With the conventional deconvolution method, many iteration steps are needed and the computing time is unsatisfactory. The author discusses a method combining the Ordered Subsets Convex algorithm with a scatter model to implement scatter correction; promising results are obtained in both speed and image quality
Scatter factor corrections for elongated fields
International Nuclear Information System (INIS)
Higgins, P.D.; Sohn, W.H.; Sibata, C.H.; McCarthy, W.A.
1989-01-01
Measurements have been made to determine scatter factor corrections for elongated fields of Cobalt-60 and for nominal linear accelerator energies of 6 MV (Siemens Mevatron 67) and 18 MV (AECL Therac 20). It was found that for every energy the collimator scatter factor varies by 2% or more as the field length-to-width ratio increases beyond 3:1. The phantom scatter factor is independent of which collimator pair is elongated at these energies. For 18 MV photons it was found that the collimator scatter factor is complicated by field-size-dependent backscatter into the beam monitor
Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong
2011-02-21
X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers and then reconstructing with the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
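The weighted-sum-of-basis-images idea can be sketched with a linear least-squares fit of the weights against the prior image (the authors' actual minimization may be constrained or regularized differently):

```python
import numpy as np

def scatter_correct_with_prior(basis_images, prior):
    """Find weights so the weighted sum of basis images best matches the
    low-scatter prior image (linear least squares), then return the
    corrected image.

    basis_images: images reconstructed from the raw projection data
    raised to different powers.
    """
    A = np.stack([b.ravel() for b in basis_images], axis=1)
    w, *_ = np.linalg.lstsq(A, prior.ravel(), rcond=None)
    corrected = sum(wi * b for wi, b in zip(w, basis_images))
    return corrected, w
```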
Compton scatter correction for planar scintigraphic imaging
Energy Technology Data Exchange (ETDEWEB)
Vaan Steelandt, E; Dobbeleir, A; Vanregemorter, J [Algemeen Ziekenhuis Middelheim, Antwerp (Belgium). Dept. of Nuclear Medicine and Radiotherapy
1995-12-01
A major problem in nuclear medicine is the image degradation due to Compton scatter in the patient. Photons emitted by the radioactive tracer scatter in collisions with electrons of the surrounding tissue. Due to the resulting loss of energy and change in direction, the scattered photons induce an object-dependent background on the images. This results in a degradation of the contrast of warm and cold lesions. Although theoretically interesting, most of the techniques proposed in the literature, like the use of symmetrical photopeaks, cannot be implemented on the commonly used gamma camera due to the energy/linearity/sensitivity corrections applied in the detector. A method for a single energy isotope based on existing methods, with adjustments towards daily practice and clinical situations, is proposed. It is assumed that the scatter image, recorded from photons collected within a scatter window adjacent to the photopeak, is a reasonably close approximation of the true scatter component of the image reconstructed from the photopeak window. A fraction 'k' of the image recorded in the scatter window is subtracted from the image recorded in the photopeak window to produce the compensated image. The crucial element of the method is the correct value of the factor 'k', which is determined mathematically and confirmed by experiments. To determine 'k', different kinds of scatter media are used and positioned in different ways in order to simulate a clinical situation. For a secondary energy window from 100 to 124 keV below a photopeak window from 126 to 154 keV, a value of 0.7 is found. This value has been verified using both an anthropomorphic thyroid phantom and the Rollo contrast phantom.
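The compensation itself is a one-line subtraction; a sketch using the k = 0.7 value reported above (the clipping of negative pixels is an added assumption):

```python
import numpy as np

def compton_correct(photopeak_img, scatter_img, k=0.7):
    """Dual-energy-window scatter compensation: subtract a fraction k of
    the scatter-window image from the photopeak image (k = 0.7 for a
    100-124 keV scatter window under a 126-154 keV photopeak, per the
    abstract). Negative pixels are clipped to zero here."""
    return np.clip(photopeak_img - k * scatter_img, 0.0, None)
```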
First order correction to quasiclassical scattering amplitude
International Nuclear Information System (INIS)
Kuz'menko, A.V.
1978-01-01
The first order (with respect to h) correction to the quasiclassical scattering amplitude in nonrelativistic quantum mechanics is considered. This correction is represented by two-loop diagrams and involves double integrals. With the aid of the classical equations of motion, the sum of the contributions of the two-loop diagrams is transformed into an expression which includes one-dimensional integrals only. A specific property of the expression obtained is that the integrand does not possess any singularities at the focal points of the classical trajectory. The general formula takes a much simpler form in the case of one-dimensional systems
Radiative corrections to deep inelastic muon scattering
International Nuclear Information System (INIS)
Akhundov, A.A.; Bardin, D.Yu.; Lohman, W.
1986-01-01
A summary is given of the most recent results of the calculation of radiative corrections to deep inelastic muon-nucleon scattering. Contributions from leptonic electromagnetic processes up to order α⁴, vacuum polarization by leptons and hadrons, hadronic electromagnetic processes of order α³, and γZ interference have been taken into account. The dependence of the individual contributions on the kinematical variables is studied. Contributions not considered in earlier calculations of radiative corrections reach several per cent in certain kinematical regions at energies above 100 GeV
Evaluation of a scattering correction method for high energy tomography
Tisseur, David; Bhatia, Navnina; Estre, Nicolas; Berge, Léonie; Eck, Daniel; Payan, Emmanuel
2018-01-01
One of the main drawbacks of Cone Beam Computed Tomography (CBCT) is the contribution of the photons scattered by the object and the detector. Scattered photons are deflected from their original path after their interaction with the object. This additional contribution of the scattered photons results in increased measured intensities, since the scattered intensity simply adds to the transmitted intensity. This effect is seen as an overestimation of the measured intensity and thus an underestimation of absorption, which produces artifacts like cupping, shading and streaks in the reconstructed images. Moreover, the scattered radiation biases quantitative tomographic reconstruction (for example, atomic number and mass density measurement with the dual-energy technique). The effect can be significant and difficult to correct in the MeV energy range for large objects due to the higher scatter-to-primary ratio (SPR). Additionally, incident high energy photons scattered by the Compton effect are more forward directed and hence more likely to reach the detector. Moreover, in the MeV energy range, the contribution of the photons produced by pair production and bremsstrahlung also becomes important. We propose an evaluation of a scattering correction technique based on the method named Scatter Kernel Superposition (SKS). The algorithm uses a continuously thickness-adapted kernel method. The analytical parameterizations of the scatter kernels are derived in terms of material thickness to form continuously thickness-adapted kernel maps used to correct the projections. This approach has proved to be efficient in producing better sampling of the kernels with respect to the object thickness. This technique offers applicability over a wide range of imaging conditions and gives users an additional advantage. Moreover, since no extra hardware is required by this approach, it forms a major advantage especially in those cases where
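A toy 1-D version of the thickness-adapted kernel superposition: each pixel's signal is spread with the kernel chosen for the local thickness, and the contributions are summed (nearest-thickness lookup and 1-D kernels are simplifications of the continuously adapted 2-D kernel maps):

```python
import numpy as np

def sks_scatter_estimate(projection, thickness_map, kernels, thicknesses):
    """Scatter Kernel Superposition sketch: for each detector pixel, pick
    the scatter kernel parameterized by the local material thickness and
    superpose its contribution onto the scatter estimate."""
    scatter = np.zeros_like(projection)
    for i, (p, t) in enumerate(zip(projection, thickness_map)):
        k = kernels[np.argmin(np.abs(np.asarray(thicknesses) - t))]
        half = len(k) // 2
        for j, kv in enumerate(k):
            idx = i + j - half
            if 0 <= idx < len(scatter):
                scatter[idx] += p * kv
    return scatter
```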
Mass corrections in deep-inelastic scattering
International Nuclear Information System (INIS)
Gross, D.J.; Treiman, S.B.; Wilczek, F.A.
1977-01-01
The moment sum rules for deep-inelastic lepton scattering are expected for asymptotically free field theories to display a characteristic pattern of logarithmic departures from scaling at large enough Q². In the large-Q² limit these patterns do not depend on hadron or quark masses m. For modest values of Q² one expects corrections at the level of powers of m²/Q². We discuss the question whether these mass effects are accessible in perturbation theory, as applied to the twist-2 Wilson coefficients and more generally. Our conclusion is that some part of the mass effects must arise from a nonperturbative origin. We also discuss the corrections which arise from higher orders in perturbation theory for very large Q², where mass effects can perhaps be ignored. The emphasis here is on a characterization of the Q², x domain where higher-order corrections are likely to be unimportant
Holographic corrections to meson scattering amplitudes
Energy Technology Data Exchange (ETDEWEB)
Armoni, Adi; Ireson, Edwin, E-mail: 746616@swansea.ac.uk
2017-06-15
We compute meson scattering amplitudes using the holographic duality between confining gauge theories and string theory, in order to consider holographic corrections to the Veneziano amplitude and associated higher-point functions. The generic nature of such computations is explained, thanks to the well-understood nature of confining string backgrounds, and two different examples of the calculation in given backgrounds are used to illustrate the details. The effect we discover, whilst only qualitative, is re-obtainable in many such examples, in four-point but also higher point amplitudes.
Scatter and attenuation correction in SPECT
International Nuclear Information System (INIS)
Ljungberg, Michael
2004-01-01
The absorbed dose is related to the activity uptake in the organ and its temporal distribution. The count rate measured with scintillation cameras is related to activity through the system sensitivity, cps/MBq. By accounting for physical processes and imaging limitations we can measure the activity at different time points. Correction for physical factors, such as attenuation and scatter, is required for accurate quantitation. Both planar and SPECT imaging can be used to estimate activities for radiopharmaceutical dosimetry. Planar methods have been the most widely used, but planar imaging is a 2D technique. With accurate modelling of the imaging process in iterative reconstruction, SPECT methods will prove to be more accurate
International Nuclear Information System (INIS)
Cheng, J-C; Rahmim, Arman; Blinder, Stephan; Camborde, Marie-Laure; Raywood, Kelvin; Sossi, Vesna
2007-01-01
We describe an ordinary Poisson list-mode expectation maximization (OP-LMEM) algorithm with a sinogram-based scatter correction method based on the single scatter simulation (SSS) technique and a random correction method based on the variance-reduced delayed-coincidence technique. We also describe a practical approximate scatter and random-estimation approach for dynamic PET studies based on a time-averaged scatter and random estimate followed by scaling according to the global numbers of true coincidences and randoms for each temporal frame. The quantitative accuracy achieved using OP-LMEM was compared to that obtained using the histogram-mode 3D ordinary Poisson ordered subset expectation maximization (3D-OP) algorithm with similar scatter and random correction methods, and they showed excellent agreement. The accuracy of the approximated scatter and random estimates was tested by comparing time activity curves (TACs) as well as the spatial scatter distribution from dynamic non-human primate studies obtained from the conventional (frame-based) approach and those obtained from the approximate approach. An excellent agreement was found, and the time required for the calculation of scatter and random estimates in the dynamic studies became much less dependent on the number of frames (we achieved a nearly four times faster performance on the scatter and random estimates by applying the proposed method). The precision of the scatter fraction was also demonstrated for the conventional and the approximate approach using phantom studies
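The frame-scaling approximation described above can be sketched as scaling the time-averaged estimate by each frame's share of true coincidences; this is a simplified reading of the abstract, and the same form would apply to the randoms:

```python
def frame_scatter_estimate(avg_scatter, frame_trues, total_trues, n_frames):
    """Scale the time-averaged scatter estimate to one temporal frame by
    the ratio of the frame's true coincidences to the mean trues per
    frame (an assumed form of the global-count scaling described)."""
    mean_trues_per_frame = total_trues / n_frames
    return avg_scatter * (frame_trues / mean_trues_per_frame)
```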
Scattering Correction For Image Reconstruction In Flash Radiography
International Nuclear Information System (INIS)
Cao, Liangzhi; Wang, Mengqi; Wu, Hongchun; Liu, Zhouyu; Cheng, Yuxiong; Zhang, Hongbo
2013-01-01
Scattered photons cause blurring and distortions in flash radiography, reducing the accuracy of image reconstruction significantly. The effect of the scattered photons is taken into account, and an iterative subtraction of the scattered photons is proposed to compensate for the scattering effect in image restoration. In order to subtract the scattering contribution, the flux of scattered photons is estimated as the sum of two components. The single-scattered component is calculated accurately together with the uncollided flux along the characteristic ray, while the multiple-scattered component is evaluated using correction coefficients pre-obtained from Monte Carlo simulations. The arbitrary geometry pretreatment and ray tracing are carried out based on the customization of AutoCAD. With the above model, an Iterative Procedure for image restORation code, IPOR, is developed. Numerical results demonstrate that the IPOR code is much more accurate than the direct reconstruction solution without scattering correction and that it has a very high computational efficiency
Software correction of scatter coincidence in positron CT
International Nuclear Information System (INIS)
Endo, M.; Iinuma, T.A.
1984-01-01
This paper describes a software correction of scatter coincidence in positron CT which is based on an estimation of scatter projections from true projections by an integral transform. Kernels for the integral transform are projected distributions of scatter coincidences for a line source at different positions in a water phantom and are calculated by Klein-Nishina's formula. True projections of any composite object can be determined from measured projections by iterative applications of the integral transform. The correction method was tested in computer simulations and phantom experiments with Positologica. The results showed that effects of scatter coincidence are not negligible in the quantitation of images, but the correction reduces them significantly. (orig.)
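The iterative application of the integral transform can be sketched in 1-D: the scatter implied by the current estimate of the true projection is convolved out and subtracted from the measurement (the kernel and convolution form are simplified stand-ins for the Klein-Nishina-derived kernels):

```python
import numpy as np

def correct_scatter(measured, kernel, n_iter=5):
    """Iteratively estimate true projections: scatter is modeled as a
    convolution of the true projection with a point-source scatter
    kernel, so we repeatedly subtract the scatter implied by the current
    estimate of the true projection."""
    true_est = measured.copy()
    for _ in range(n_iter):
        scatter = np.convolve(true_est, kernel, mode="same")
        true_est = measured - scatter
    return true_est
```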
Real-time scatter measurement and correction in film radiography
International Nuclear Information System (INIS)
Shaw, C.G.
1987-01-01
A technique for real-time scatter measurement and correction in scanning film radiography is described. With this technique, collimated x-ray fan beams are used to partially reject scattered radiation. Photodiodes attached to the aft-collimator provide sampled scatter measurements, which allow the scatter distribution to be reconstructed and subtracted from the digitized film image data for accurate transmission measurement. The authors discuss the physical and technical considerations of this scatter correction technique and show examples that demonstrate its feasibility. Improved x-ray transmission measurement and dual-energy subtraction imaging are demonstrated with phantoms.
Attenuation and scatter correction in SPECT
International Nuclear Information System (INIS)
Pant, G.S.; Pandey, A.K.
2000-01-01
While passing through matter, photons undergo various types of interactions: some are completely absorbed, some are scattered in different directions with or without a change in their energy, and some pass through unattenuated. These unattenuated photons carry the information with them; however, the image data are corrupted by the attenuation and scatter processes. This paper deals with the effect of these two processes on nuclear medicine images and suggests methods to overcome them.
An algorithm to determine backscattering ratio and single scattering albedo
Digital Repository Service at National Institute of Oceanography (India)
Suresh, T.; Desa, E.; Matondkar, S.G.P.; Mascarenhas, A.A.M.Q.; Nayak, S.R.; Naik, P.
Algorithms to determine the inherent optical properties of water, backscattering probability and single scattering albedo at 490 and 676 nm from the apparent optical property, remote sensing reflectance are presented here. The measured scattering...
Multiple scattering corrections to the Beer-Lambert law. 1: Open detector.
Tam, W G; Zardecki, A
1982-07-01
Multiple scattering corrections to the Beer-Lambert law are analyzed by means of a rigorous small-angle solution to the radiative transfer equation. Transmission functions for predicting the received radiant power (a directly measured quantity, in contrast to the spectral radiance in the Beer-Lambert law) are derived. Numerical algorithms and results relating to the multiple scattering effects for laser propagation in fog, cloud, and rain are presented.
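A minimal numerical sketch of the effect being corrected: with multiple scattering, an open detector receives more power than the Beer-Lambert law predicts, because part of the forward-scattered light is still collected. The exponential-reduction form below is a common first-order approximation under an assumed forward-scatter collection fraction, not the paper's rigorous small-angle solution:

```python
import math

def beer_lambert(tau):
    """Unscattered (single-path) Beer-Lambert transmission for optical depth tau."""
    return math.exp(-tau)

def open_detector_transmission(tau, forward_fraction):
    """Illustrative multiple-scattering correction for an open detector:
    a fraction of the scattered light is still collected, which acts
    like a reduced effective optical depth. First-order approximation
    only; `forward_fraction` is an assumed parameter."""
    return math.exp(-tau * (1.0 - forward_fraction))

tau = 3.0
print(beer_lambert(tau))                     # ≈ 0.0498
print(open_detector_transmission(tau, 0.4))  # ≈ 0.165, higher received power
```

The gap between the two values is exactly the multiple-scattering excess that makes naive Beer-Lambert inversion of transmissometer data overestimate visibility in fog or rain.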
Neural network scatter correction technique for digital radiography
International Nuclear Information System (INIS)
Boone, J.M.
1990-01-01
This paper presents a scatter correction technique based on artificial neural networks. The technique combines the acquisition of a conventional digital radiographic image with the acquisition of a multiple pencil beam (micro-aperture) digital image. Image subtraction yields a sparsely sampled estimate of the scatter component in the image. The neural network is trained to develop a causal relationship between the low-pass filtered open-field image data and the sparsely sampled scatter image, and the trained network is then used to correct the entire image (pixel by pixel) in a manner that is operationally similar to, but potentially more powerful than, convolution. The technique is described and illustrated using clinical primary component images combined with scatter component images realistically simulated from the results of previously reported Monte Carlo investigations. The results indicate that an accurate scatter correction can be realized using this technique.
Radiative corrections to neutrino deep inelastic scattering revisited
International Nuclear Information System (INIS)
Arbuzov, Andrej B.; Bardin, Dmitry Yu.; Kalinovskaya, Lidia V.
2005-01-01
Radiative corrections to neutrino deep inelastic scattering are revisited. One-loop electroweak corrections are re-calculated within the automatic SANC system. Terms with mass singularities are treated, including higher-order leading logarithmic corrections. The scheme dependence of the corrections due to weak interactions is investigated. The results are implemented into the data analysis of the NOMAD experiment. The present theoretical accuracy in the description of the process is discussed.
A software-based x-ray scatter correction method for breast tomosynthesis
International Nuclear Information System (INIS)
Jia Feng, Steve Si; Sechopoulos, Ioannis
2011-01-01
reconstructions. The visibility of the findings in two patient images was also improved by the application of the scatter correction algorithm. The MTF of the images did not change after application of the scatter correction algorithm, indicating that spatial resolution was not adversely affected. Conclusions: Our software-based scatter correction algorithm exhibits great potential in improving the image quality of DBT acquisitions of both phantoms and patients. The proposed algorithm does not require a time-consuming MC simulation for each specific case to be corrected, making it applicable in the clinical realm.
Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika
2017-10-01
In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC on scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other, with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET.
International Nuclear Information System (INIS)
Ruehrnschopf, Ernst-Peter; Klingenbeck, Klaus
2011-01-01
The main components of scatter correction procedures are scatter estimation and a scatter compensation algorithm. This paper completes a previous paper in which a general framework for scatter compensation was presented under the prerequisite that a scatter estimation method is already available. In the current paper, the authors give a systematic review of the variety of scatter estimation approaches. Scatter estimation methods are based on measurements, mathematical-physical models, or combinations of both. For completeness, they present an overview of measurement-based methods, but the main topic is the theoretically more demanding models, such as analytical, Monte Carlo, and hybrid models. Further classifications are 3D image-based and 2D projection-based approaches. The authors present a system-theoretic framework that allows one to proceed top-down from a general 3D formulation, by successive approximations, to efficient 2D approaches. A widely useful method is the beam-scatter-kernel superposition approach. Together with the review of standard methods, the authors discuss their limitations and how to take into account the issues of object dependency, spatial variance, deformation of scatter kernels, and external and internal absorbers. Open questions for further investigation are indicated. Finally, the authors comment on some special issues and applications, such as the bow-tie filter, offset detectors, truncated data, and dual-source CT.
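The beam-scatter-kernel superposition approach mentioned in the review can be sketched as a convolution of the primary signal with a scatter kernel (the simplest, spatially invariant case). The Gaussian kernel below is illustrative only; object dependency and kernel deformation would require position-dependent kernels:

```python
import numpy as np

def scatter_kernel_superposition(primary, kernel):
    """Spatially invariant beam-scatter-kernel superposition: every
    detector pixel's primary signal spawns a scatter kernel, and the
    total scatter is their superposition, i.e. a convolution (done
    here circularly via FFT)."""
    P = np.fft.rfft2(primary)
    K = np.fft.rfft2(np.fft.ifftshift(kernel))  # kernel centered mid-array
    return np.fft.irfft2(P * K, s=primary.shape)

# Illustrative Gaussian kernel integrating to a 10% scatter-to-primary ratio.
n = 64
y, x = np.mgrid[0:n, 0:n]
kernel = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2 * 6.0 ** 2))
kernel *= 0.1 / kernel.sum()
primary = np.zeros((n, n))
primary[n // 2, n // 2] = 1.0          # point source (a single pencil beam)
scatter = scatter_kernel_superposition(primary, kernel)
print(scatter.sum())                   # total scatter ≈ 0.1
```

For a point source the superposition simply reproduces the kernel, which is why such kernels are measured or simulated with pencil beams in the first place.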
Fully 3D iterative scatter-corrected OSEM for HRRT PET using a GPU
Energy Technology Data Exchange (ETDEWEB)
Kim, Kyung Sang; Ye, Jong Chul, E-mail: kssigari@kaist.ac.kr, E-mail: jong.ye@kaist.ac.kr [Bio-Imaging and Signal Processing Lab., Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), 335 Gwahak-no, Yuseong-gu, Daejon 305-701 (Korea, Republic of)
2011-08-07
Accurate scatter correction is especially important for high-resolution 3D positron emission tomography (PET) systems such as the high-resolution research tomograph (HRRT), due to the large scatter fraction in the data. To address this problem, a fully 3D iterative scatter-corrected ordered subset expectation maximization (OSEM), in which a 3D single scatter simulation (SSS) is performed alternately with a 3D OSEM reconstruction, was recently proposed. However, due to the computational complexity of both the SSS and OSEM algorithms for a high-resolution 3D PET, it has not been widely used in practice. The main objective of this paper is, therefore, to accelerate the fully 3D iterative scatter-corrected OSEM using a graphics processing unit (GPU) and verify its performance for an HRRT. We show that to exploit the massive thread structure of the GPU, several algorithmic modifications are necessary. For the SSS implementation, a sinogram-driven approach is found to be more appropriate than a detector-driven approach, as fast linear interpolation can be performed in the sinogram domain through the use of texture memory. Furthermore, a pixel-driven backprojector and a ray-driven projector can be significantly accelerated by assigning threads to voxels and sinogram bins, respectively. Using Nvidia's GPU and the compute unified device architecture (CUDA), the execution time of an SSS is less than 6 s, a single iteration of OSEM with 16 subsets takes 16 s, and a single iteration of the fully 3D scatter-corrected OSEM, composed of one SSS and six iterations of OSEM, takes under 105 s for the HRRT geometry, corresponding to acceleration factors of 125x and 141x for OSEM and SSS, respectively. The fully 3D iterative scatter-corrected OSEM algorithm is validated in simulations using the Geant4 Application for Tomographic Emission (GATE) and in actual experiments using an HRRT.
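The scatter-corrected OSEM update itself (the scatter estimate entering the forward model additively) can be sketched with dense matrices standing in for the projector; the GPU thread-assignment strategy described above is abstracted away, and the toy system below is an assumption for illustration:

```python
import numpy as np

def osem(y, A, scatter, n_iter=50, n_subsets=4):
    """OSEM with an additive scatter estimate in the forward model:
    expected counts = A @ x + scatter. Dense matrix products stand in
    for the ray-driven projector / pixel-driven backprojector pair."""
    n_bins, n_vox = A.shape
    x = np.ones(n_vox)                   # nonnegative initial image
    subsets = np.array_split(np.arange(n_bins), n_subsets)
    for _ in range(n_iter):
        for idx in subsets:
            Asub = A[idx]
            expected = Asub @ x + scatter[idx]
            ratio = y[idx] / np.maximum(expected, 1e-12)
            x = x * (Asub.T @ ratio) / np.maximum(Asub.T @ np.ones(len(idx)), 1e-12)
    return x

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, (64, 16))      # toy system matrix
x_true = rng.uniform(1.0, 5.0, 16)
scatter = np.full(64, 2.0)               # known additive scatter
y = A @ x_true + scatter                 # noiseless data for the sketch
x_hat = osem(y, A, scatter)
print(np.max(np.abs(x_hat - x_true) / x_true))  # small relative error
```

Because the scatter term is added to the forward projection rather than subtracted from the data, the update keeps the image nonnegative and the Poisson statistics of the raw counts intact.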
SU-E-I-07: An Improved Technique for Scatter Correction in PET
International Nuclear Information System (INIS)
Lin, S; Wang, Y; Lue, K; Lin, H; Chuang, K
2014-01-01
Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to real scatter amounts using the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique that accurately estimates the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of the proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For our technique and SSS, the normalized standard deviations of the reconstructed images were 0.053 and 0.182, respectively; the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using SF. This method can avoid the bias caused by the insufficient
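The proposed scaling step (scale the SSS shape so its total equals a predicted scatter fraction of the prompts) can be sketched as follows; the linear SF-vs-attenuation model here is a hypothetical stand-in for the paper's empirically constructed transformation:

```python
import numpy as np

def calibrate_sss(sss_estimate, prompts, mu_avg, sf_of_mu):
    """Scale an (unnormalized) SSS scatter shape to absolute counts using
    a pre-determined scatter-fraction model instead of tail fitting.
    `sf_of_mu` maps the average attenuation coefficient to a scatter
    fraction (the paper builds this empirically from phantom studies)."""
    sf = sf_of_mu(mu_avg)
    target_scatter_counts = sf * prompts.sum()
    return sss_estimate * (target_scatter_counts / sss_estimate.sum())

# Hypothetical linear SF model, for illustration only.
sf_model = lambda mu: 0.25 + 1.5 * (mu - 0.096)

prompts = np.full(100, 50.0)                         # toy prompt sinogram row
sss_shape = np.exp(-0.5 * ((np.arange(100) - 50) / 30.0) ** 2)  # unscaled SSS
scaled = calibrate_sss(sss_shape, prompts, mu_avg=0.096, sf_of_mu=sf_model)
print(scaled.sum() / prompts.sum())                  # equals the predicted SF, 0.25
```

The shape of the scatter estimate is untouched; only its absolute level is set, which is exactly where the tail-fitting step would otherwise introduce bias.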
Energy Technology Data Exchange (ETDEWEB)
Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Verhaegen, F. [Department of Radiation Oncology - MAASTRO, GROW—School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4 (Canada); Jaffray, D. A. [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Ontario Cancer Institute, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)
2015-01-15
Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations at a subset of the projection angles in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm that computes the scatter detector response, S_MC. S_MC is fit to a function, S_F, and if the fit of S_F is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S_F, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were pelvis scans of a phantom and a patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as to determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the original uncorrected reconstruction, a reconstruction corrected with a constant scatter estimate, and a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson's correlation, r, proved to be a
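The core fitting step (a least-squares fit of sparse scatter samples to a frequency-limited sum of sines and cosines, then interpolation everywhere else) can be sketched in 1D; variable names and the number of harmonics are illustrative:

```python
import numpy as np

def fit_low_frequency(u, samples, n_harmonics=3):
    """Least-squares fit of samples (at positions/angles u, rescaled to
    [0, 2*pi)) with a frequency-limited sum of sines and cosines;
    returns a callable that interpolates the fit anywhere."""
    def design(uq):
        cols = [np.ones_like(uq)]
        for k in range(1, n_harmonics + 1):
            cols += [np.cos(k * uq), np.sin(k * uq)]
        return np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(design(u), samples, rcond=None)
    return lambda uq: design(uq) @ coef

# Sparse, noiseless low-frequency "scatter" samples.
rng = np.random.default_rng(1)
u = np.sort(rng.uniform(0.0, 2.0 * np.pi, 20))
truth = lambda t: 2.0 + np.cos(t) + 0.5 * np.sin(2.0 * t)
fit = fit_low_frequency(u, truth(u))
uq = np.linspace(0.0, 2.0 * np.pi, 200)
err = np.max(np.abs(fit(uq) - truth(uq)))
print(err)  # essentially zero: the truth lies in the fitting basis
```

Because scatter varies slowly across the detector and with gantry angle, a handful of harmonics suffices, which is what lets CMCF terminate the MC simulations early once the fit's goodness-of-fit threshold is met.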
Scatter correction using a primary modulator on a clinical angiography C-arm CT system.
Bier, Bastian; Berger, Martin; Maier, Andreas; Kachelrieß, Marc; Ritschl, Ludwig; Müller, Kerstin; Choi, Jang-Hwan; Fahrig, Rebecca
2017-09-01
Cone beam computed tomography (CBCT) suffers from a large amount of scatter, resulting in severe scatter artifacts in the reconstructions. Recently, a new scatter correction approach, called improved primary modulator scatter estimation (iPMSE), was introduced. That approach utilizes a primary modulator that is inserted between the X-ray source and the object. This modulation enables estimation of the scatter in the projection domain by optimizing an objective function with respect to the scatter estimate. Up to now the approach has not been implemented on a clinical angiography C-arm CT system. In our work, the iPMSE method is transferred to a clinical C-arm CBCT. Additional processing steps are added in order to compensate for the C-arm scanner motion and the automatic X-ray tube current modulation. These challenges were overcome by establishing a reference modulator database and a block-matching algorithm. Experiments with phantom and experimental in vivo data were performed to evaluate the method. We show that scatter correction using primary modulation is possible on a clinical C-arm CBCT. Scatter artifacts in the reconstructions are reduced with the newly extended method. Compared to a scan with a narrow collimation, our approach showed superior results with an improvement of the contrast and the contrast-to-noise ratio for the phantom experiments. In vivo data are evaluated by comparing the results with a scan with a narrow collimation and with a constant scatter correction approach. Scatter correction using primary modulation is possible on a clinical CBCT by compensating for the scanner motion and the tube current modulation. Scatter artifacts could be reduced in the reconstructions of phantom scans and in experimental in vivo data. © 2017 American Association of Physicists in Medicine.
A locally adaptive algorithm for shadow correction in color images
Karnaukhov, Victor; Kober, Vitaly
2017-09-01
The paper deals with the correction of color images distorted by spatially nonuniform illumination. A serious distortion occurs in real conditions when a part of the scene containing 3D objects close to a directed light source is illuminated much more brightly than the rest of the scene. A locally adaptive algorithm for the correction of shadow regions in color images is proposed. The algorithm consists of segmentation of shadow areas with rank-order statistics, followed by correction of the nonuniform illumination using a human visual perception approach. The performance of the proposed algorithm is compared to that of common algorithms for the correction of color images containing shadow regions.
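The two-stage structure described above (rank-order segmentation of shadow areas, then illumination correction) can be illustrated with a deliberately simplified grayscale toy; the paper's perceptual color correction is far more elaborate than this gain adjustment:

```python
import numpy as np

def correct_shadows(gray, percentile=50, gain_cap=4.0):
    """Toy locally adaptive shadow correction: segment shadow pixels
    with a rank-order (percentile) threshold, then scale them toward
    the mean level of the lit region. `gain_cap` limits amplification
    of very dark regions."""
    thresh = np.percentile(gray, percentile)
    shadow = gray < thresh
    if not shadow.any() or shadow.all():
        return gray.astype(float), shadow   # nothing to correct
    gain = min(gain_cap, gray[~shadow].mean() / max(gray[shadow].mean(), 1e-6))
    out = gray.astype(float).copy()
    out[shadow] = np.clip(out[shadow] * gain, 0.0, 1.0)
    return out, shadow

img = np.full((8, 8), 0.8)
img[:, :4] = 0.2                  # left half in shadow
out, shadow = correct_shadows(img)
print(out[0, 0], out[0, 7])       # shadow pixels lifted to the lit level
```

A single global gain is the crudest possible second stage; the "locally adaptive" part of the actual algorithm would vary the correction per region.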
Ultrafast cone-beam CT scatter correction with GPU-based Monte Carlo simulation
Directory of Open Access Journals (Sweden)
Yuan Xu
2014-03-01
Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to complete the whole process of scatter correction and reconstruction automatically within 30 seconds. Methods: The method consists of six steps: (1) FDK reconstruction using the raw projection data; (2) rigid registration of the planning CT to the FDK result; (3) MC scatter calculation at sparse view angles using the planning CT; (4) interpolation of the calculated scatter signals to the other angles; (5) removal of scatter from the raw projections; (6) FDK reconstruction using the scatter-corrected projections. In addition to using a GPU to accelerate the MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm eliminates the MC noise in the simulated scatter images caused by the low photon numbers. The method is validated on a simulated head-and-neck case with 364 projection angles. Results: We examined the variation of the scatter signal among projection angles using Fourier analysis and found that scatter images at 31 angles are sufficient to restore those at all angles with < 0.1% error. For the simulated patient case with a resolution of 512 × 512 × 100, we simulated 5 × 10⁶ photons per angle. The total computation time was 20.52 seconds on an Nvidia GTX Titan GPU, and the time for each step was 2.53, 0.64, 14.78, 0.13, 0.19, and 2.25 seconds, respectively. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed, accomplishing the whole procedure of scatter correction and reconstruction within 30 seconds.
Evaluation of a method for correction of scatter radiation in thorax cone beam CT
International Nuclear Information System (INIS)
Rinkel, J.; Dinten, J.M.; Esteve, F.
2004-01-01
Purpose: Cone beam CT (CBCT) enables three-dimensional imaging with isotropic resolution. X-ray scatter estimation is a major challenge for quantitative CBCT imaging of the thorax: the scatter level is significantly higher on cone beam systems than on collimated fan beam systems. The effects of this scattered radiation are cupping artefacts, streaks, and quantification inaccuracies. The conventional beam-stop scatter estimation approach can be used for CBCT but leads to a significant increase in dose and acquisition time. An original scatter management process requiring no supplementary acquisition has been developed at CEA-LETI. Methods and Materials: This Analytical Plus Indexing-based (API) method of scatter correction in CBCT is based on scatter calibration through offline acquisitions with beam stops on Lucite plates, combined with an analytical transformation derived from physical equations. This approach has been applied successfully in bone densitometry and mammography. To evaluate the method in CBCT, acquisitions of a thorax phantom with and without beam stops were performed. To compare the different scatter correction approaches, the Feldkamp algorithm was applied to raw data corrected for scatter by the API and beam-stop approaches. Results: The API method provides results in good agreement with the beam-stop array approach, suppressing the cupping artefact. The influence of the scatter correction method on the noise in the reconstructed images was also evaluated. Conclusion: The results indicate that the API method is effective for quantitative CBCT imaging of the thorax. Compared to a beam-stop array method, it requires a lower x-ray dose and shortens acquisition time. (authors)
Higher Order Heavy Quark Corrections to Deep-Inelastic Scattering
Blümlein, Johannes; DeFreitas, Abilio; Schneider, Carsten
2015-04-01
The 3-loop heavy flavor corrections to deep-inelastic scattering are essential for consistent next-to-next-to-leading order QCD analyses. We report on the present status of the calculation of these corrections at large virtualities Q². We also describe a series of mathematical, computer-algebraic and combinatorial methods and special function spaces, needed to perform these calculations. Finally, we briefly discuss the status of measuring αs(MZ), the charm quark mass mc, and the parton distribution functions at next-to-next-to-leading order from the world precision data on deep-inelastic scattering.
Higher order heavy quark corrections to deep-inelastic scattering
International Nuclear Information System (INIS)
Bluemlein, J.; Freitas, A. de; Johannes Kepler Univ., Linz; Schneider, C.
2014-11-01
The 3-loop heavy flavor corrections to deep-inelastic scattering are essential for consistent next-to-next-to-leading order QCD analyses. We report on the present status of the calculation of these corrections at large virtualities Q². We also describe a series of mathematical, computer-algebraic and combinatorial methods and special function spaces, needed to perform these calculations. Finally, we briefly discuss the status of measuring αs(MZ), the charm quark mass mc, and the parton distribution functions at next-to-next-to-leading order from the world precision data on deep-inelastic scattering.
Method for measuring multiple scattering corrections between liquid scintillators
Energy Technology Data Exchange (ETDEWEB)
Verbeke, J.M., E-mail: verbeke2@llnl.gov; Glenn, A.M., E-mail: glenn22@llnl.gov; Keefer, G.J., E-mail: keefer1@llnl.gov; Wurtz, R.E., E-mail: wurtz1@llnl.gov
2016-07-21
A time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
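The time-of-flight idea can be sketched by computing the arrival-time window for a neutron scattering from one detector into a neighbor; pulse pairs with a time difference inside this window are crosstalk candidates. The energies and distance below are illustrative, not the paper's experimental values:

```python
import math

def crosstalk_time_window(distance_m, e_min_mev, e_max_mev):
    """Arrival-time window for a neutron scattering from one scintillator
    into another `distance_m` away. Neutrons with kinetic energies in
    [e_min_mev, e_max_mev] (treated non-relativistically, which is fine
    for fission energies) arrive within the returned (t_min, t_max)."""
    m_n = 939.565   # neutron rest energy, MeV
    c = 2.998e8     # speed of light, m/s
    def tof(e_mev):
        v = c * math.sqrt(2.0 * e_mev / m_n)   # non-relativistic speed
        return distance_m / v
    return tof(e_max_mev), tof(e_min_mev)      # fastest neutrons arrive first

t_min, t_max = crosstalk_time_window(0.5, 0.5, 5.0)
print(t_min * 1e9, t_max * 1e9)  # window in nanoseconds
```

Raising the energy threshold narrows this window and suppresses the slowest (most-scattered) neutrons, which is why the crosstalk fractions in the paper are characterized as a function of threshold.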
Corrections to the large-angle scattering amplitude
International Nuclear Information System (INIS)
Goloskokov, S.V.; Kudinov, A.V.; Kuleshov, S.P.
1979-01-01
The high-energy behaviour of scattering amplitudes is considered within the framework of the Logunov-Tavchelidze quasipotential approach. A representation of the scattering amplitude of two scalar particles, convenient for the study of its asymptotic properties, is given. Corrections to the leading term of the scattering amplitude are obtained at first and second order in 1/p, where p is the momentum of the colliding particles in the centre-of-mass frame. An example of the use of the obtained formulas for a concrete quasipotential is given.
The analysis and correction of neutron scattering effects in neutron imaging
International Nuclear Information System (INIS)
Raine, D.A.; Brenizer, J.S.
1997-01-01
A method of correcting for the scattering effects present in neutron radiographic and computed tomographic imaging has been developed. Prior work has shown that beam, object, and imaging system geometry factors, such as the L/D ratio and angular divergence, are the primary sources contributing to the degradation of neutron images. With objects smaller than 20-40 mm in width, a parallel beam approximation can be made where the effects from geometry are negligible. Factors which remain important in the image formation process are the pixel size of the imaging system, neutron scattering, the size of the object, the conversion material, and the beam energy spectrum. The Monte Carlo N-Particle transport code, version 4A (MCNP4A), was used to separate and evaluate the effect that each of these parameters has on neutron image data. The simulations were used to develop a correction algorithm which is easy to implement and requires no a priori knowledge of the object. The correction algorithm is based on the determination of the object scatter function (OSF), using available data outside the object to estimate the shape and magnitude of the OSF based on a Gaussian functional form. For objects smaller than 1 mm (0.04 in.) in width, the correction function can be well approximated by a constant function. Errors in the determination and correction of the MCNP-simulated neutron scattering component were under 5%, and larger errors were only noted in objects at the extreme high end of the range of object sizes simulated. The Monte Carlo data also indicated that scattering does not play a significant role in the blurring of neutron radiographic and tomographic images. The effect of neutron scattering on computed tomography is shown to be minimal at best, with the most serious effect resulting when the basic backprojection method is used.
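The correction idea (fit a Gaussian object scatter function to the data outside the object, then subtract it everywhere) can be sketched in 1D; a log-domain quadratic fit recovers the Gaussian parameters exactly for noiseless data, so no a priori knowledge of the object is needed:

```python
import numpy as np

def gaussian_osf_from_tails(profile, object_mask):
    """Estimate the object scatter function (OSF) by fitting a Gaussian
    to the signal outside the object, where only scatter is present;
    a log-domain quadratic fit yields the Gaussian parameters, which
    are then extrapolated across the whole profile."""
    x = np.arange(profile.size, dtype=float)
    xt, yt = x[~object_mask], profile[~object_mask]
    keep = yt > 0
    c2, c1, c0 = np.polyfit(xt[keep], np.log(yt[keep]), 2)
    return np.exp(c2 * x ** 2 + c1 * x + c0)

x = np.arange(200, dtype=float)
osf_true = 0.3 * np.exp(-(((x - 100.0) / 60.0) ** 2))  # toy Gaussian OSF
mask = (x > 60) & (x < 140)              # object occupies the center
primary = np.where(mask, 1.0, 0.0)
measured = primary + osf_true
osf_est = gaussian_osf_from_tails(measured, mask)
corrected = measured - osf_est
print(np.max(np.abs(osf_est - osf_true)))  # tiny: noiseless Gaussian tails
```

With real, noisy data the tail fit would be regularized or averaged; for very narrow objects the paper notes the OSF degenerates to a constant, i.e. c2 and c1 become negligible.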
The Bouguer Correction Algorithm for Gravity with Limited Range
Directory of Open Access Journals (Sweden)
MA Jian
2017-01-01
The Bouguer correction is an important step in gravity reduction, but the traditional Bouguer correction, whether the plane or the spherical variant, carries an approximation error due to far-zone virtual terrain, and the error grows as the computation point gets higher. Gravity reduction using a Bouguer correction with limited range, consistent with the scope of the topographic correction, is therefore investigated in this paper, and a simplified formula for the limited-range Bouguer correction is proposed. The algorithm, which is innovative and of some mathematical-theoretical value, is consistent with the equation derived from the strict integral algorithm for the topographic correction. An interpolation experiment shows that gravity reduction based on the Bouguer correction with limited range is superior to the unlimited-range correction when the computation point is higher than 1000 m.
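For reference, the classical plane (infinite-slab) Bouguer correction that the limited-range formulation refines is B = 2πGρh; a quick computation shows the magnitude involved at the 1000 m height discussed above. The limited-range formula itself is not reproduced here:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def bouguer_plate_correction(height_m, density=2670.0):
    """Classical infinite-slab (plane) Bouguer correction in mGal:
        B = 2 * pi * G * rho * h
    with the conventional crustal density 2670 kg/m^3 as default.
    This is the unlimited-range plate the paper's limited-range
    formulation replaces; shown here only as the reference quantity."""
    return 2.0 * math.pi * G * density * height_m * 1e5  # m/s^2 -> mGal

print(bouguer_plate_correction(1000.0))  # ≈ 112 mGal at h = 1000 m
```

At roughly 0.112 mGal per metre of elevation, even a small relative error in the plate term is large against modern gravimeter precision, which is the motivation for restricting the correction to the topographic-correction range.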
An Algorithm for Computing Screened Coulomb Scattering in Geant4
Mendenhall, Marcus H.; Weller, Robert A.
2004-01-01
An algorithm has been developed for the Geant4 Monte Carlo package for the efficient computation of screened Coulomb interatomic scattering. It explicitly integrates the classical equations of motion for scattering events, resulting in precise tracking of both the projectile and the recoil target nucleus. The algorithm permits the user to plug in an arbitrary screening function, such as Lens-Jensen screening, which is good for backscattering calculations, or Ziegler-Biersack-Littmark screening.
Compton scatter and randoms corrections for origin ensembles 3D PET reconstructions
Energy Technology Data Exchange (ETDEWEB)
Sitek, Arkadiusz [Harvard Medical School, Boston, MA (United States). Dept. of Radiology; Brigham and Women's Hospital, Boston, MA (United States); Kadrmas, Dan J. [Utah Univ., Salt Lake City, UT (United States). Utah Center for Advanced Imaging Research (UCAIR)
2011-07-01
In this work we develop a novel approach to correction for scatter and randoms in the reconstruction of data acquired by 3D positron emission tomography (PET), applicable to tomographic reconstruction performed with the origin ensemble (OE) approach. Statistical image reconstruction using OE is based on calculating expectations of the numbers of emitted events per voxel over the complete-data space. Since OE estimation is fundamentally different from regular statistical estimators, such as those based on maximum likelihood, the standard implementations of scatter and randoms corrections cannot be used. Based on prompt, scatter, and random rates, each detected event is graded in terms of its probability of being a true event. These grades are utilized by the Markov chain Monte Carlo (MCMC) algorithm used in the OE approach for calculating the expectation, over the complete-data space, of the number of emitted events per voxel (the OE estimator). We show that the results obtained with OE are almost identical to those obtained with the maximum-likelihood expectation-maximization (ML-EM) algorithm for reconstruction of experimental phantom data acquired on a Siemens Biograph mCT 3D PET/CT scanner. The developed correction removes artifacts due to scatter and randoms in the investigated 3D PET datasets. (orig.)
Backscatter Correction Algorithm for TBI Treatment Conditions
Energy Technology Data Exchange (ETDEWEB)
Sanchez-Nieto, B.; Sanchez-Doblado, F.; Arrans, R.; Terron, J.A. [Dpto. Fisiología Médica y Biofísica, Universidad de Sevilla, Avda. Sánchez Pizjuán, 4. E-41009, Sevilla (Spain); Errazquin, L. [Servicio Oncología Radioterápica, Hospital Univ.V. Macarena. Dr. Fedriani, s/n. E-41009, Sevilla (Spain)
2015-01-15
The accuracy requirement in target dose delivery is, according to the ICRU, ±5%. This holds not only in standard radiotherapy but also in total body irradiation (TBI). Physical dosimetry plays an important role in achieving this recommended level. The semi-infinite phantoms customarily used for dosimetry give scatter conditions different from those of the finite thickness of the patient, so the dose calculated at points in the patient close to the beam exit surface may be overestimated. It is therefore necessary to quantify the backscatter factor in order to reduce the uncertainty in this dose calculation. Backward scatter has been well studied at standard distances. The present work evaluates the backscatter phenomenon under our particular TBI treatment conditions. As a consequence of this study, a semi-empirical expression has been derived to calculate the backscatter factor (within 0.3% uncertainty). This factor depends linearly on the depth and exponentially on the underlying tissue. Differences found in the qualitative behavior with respect to standard distances are due to scatter from the bunker wall close to the measurement point.
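The abstract gives only the qualitative shape of the semi-empirical backscatter factor (linear in depth, exponential in the underlying tissue). A hypothetical parameterization consistent with that description might look as follows; the functional form and all coefficients are invented for illustration and are not the authors' fitted expression.

```python
import math

def backscatter_factor(depth_cm, underlying_cm, a=0.01, b=0.002, c=0.5):
    """Hypothetical backscatter factor: grows linearly with depth and
    saturates exponentially with underlying tissue thickness.
    Coefficients a, b, c are illustrative, NOT fitted values."""
    return 1.0 + (a + b * depth_cm) * (1.0 - math.exp(-c * underlying_cm))
```

The form reduces to unity when there is no underlying tissue (no backscattering material behind the point), and increases monotonically with both depth and underlying thickness, as the abstract describes.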
A model of diffraction scattering with unitary corrections
International Nuclear Information System (INIS)
Etim, E.; Malecki, A.; Satta, L.
1989-01-01
The inability of the multiple scattering model of Glauber and similar geometrical-picture models to fit data at Collider energies, to fit low-energy data at large momentum transfers, and to explain the absence of multiple diffraction dips in the data is noted. It is argued and shown that a unitary correction to the multiple scattering amplitude gives rise to a better model that fits all available data on nucleon-nucleon and nucleus-nucleus collisions at all energies and momentum transfers. There are no multiple diffraction dips.
Meyer, Michael; Kalender, Willi A.; Kyriakou, Yiannis
2010-01-01
Scattered radiation is a major source of artifacts in flat detector computed tomography (FDCT) due to the increased irradiated volumes. We propose a fast projection-based algorithm for the correction of scatter artifacts. The algorithm combines a convolution method for determining the spatial distribution of scatter intensity with an object-size-dependent scaling of the scatter intensity distributions, using a priori information generated by Monte Carlo simulations. A projection-based (PBSE) and an image-based (IBSE) strategy for size estimation of the scanned object are presented. Both strategies provide good correction and comparable results; the faster PBSE strategy is recommended. Even with such a fast and simple algorithm, which in the PBSE variant does not rely on reconstructed volumes or scatter measurements, it is possible to provide a reasonable scatter correction even for truncated scans. For both simulations and measurements, scatter artifacts were significantly reduced and the algorithm showed stable behavior in the z-direction. For simulated voxelized head, hip and thorax phantoms, figures of merit Q of 0.82, 0.76 and 0.77 were reached, respectively (Q = 0 for uncorrected, Q = 1 for ideal). For a water phantom with 15 cm diameter, for example, cupping was reduced from 10.8% down to 2.1%. The performance of the correction method has limitations in the case of measurements using non-ideal detectors, intensity calibration, etc. An iterative approach to overcome most of these limitations is proposed, based on root finding of a cupping metric, and may be useful for other scatter correction methods as well. With this optimization, cupping of the measured water phantom was further reduced to 0.9%. The algorithm was evaluated on a commercial system, including truncated and non-homogeneous clinically relevant objects.
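The convolution step of such a projection-based correction can be sketched in one dimension: the measured projection is convolved with a broad kernel and scaled to approximate the scatter distribution, which is then subtracted. The Gaussian kernel, the constant scale factor standing in for the object-size-dependent scaling, and the function names are all illustrative assumptions.

```python
import numpy as np

def estimate_scatter(projection, kernel_width, scale):
    """Convolve the measured projection with a broad, normalized Gaussian
    kernel and scale the result to approximate the scatter intensity."""
    x = np.arange(-3 * kernel_width, 3 * kernel_width + 1)
    kernel = np.exp(-0.5 * (x / kernel_width) ** 2)
    kernel /= kernel.sum()
    # note: the kernel must be shorter than the projection for
    # mode="same" to preserve the projection length
    return scale * np.convolve(projection, kernel, mode="same")

def correct_projection(projection, kernel_width=5, scale=0.3):
    """Subtract the convolution-based scatter estimate, clamping at zero."""
    scatter = estimate_scatter(projection, kernel_width, scale)
    return np.clip(projection - scatter, 0.0, None)
```

In a real implementation the scale factor would come from the object-size estimate (PBSE or IBSE) and the Monte Carlo lookup, rather than being a constant.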
Multiple scattering and attenuation corrections in Deep Inelastic Neutron Scattering experiments
International Nuclear Information System (INIS)
Dawidowski, J; Blostein, J J; Granada, J R
2006-01-01
Multiple scattering and attenuation corrections in deep inelastic neutron scattering experiments are analyzed. The theoretical basis of the method is stated, and a Monte Carlo procedure to perform the calculation is presented. The results are compared with experimental data. The importance of accuracy in the description of the experimental parameters is tested, and the implications of the present results for the data analysis procedures are examined.
Hadron mass corrections in semi-inclusive deep inelastic scattering
International Nuclear Information System (INIS)
Accardi, A.; Hobbs, T.; Melnitchouk, W.
2009-01-01
We derive mass corrections for semi-inclusive deep inelastic scattering of leptons from nucleons using a collinear factorization framework which incorporates the initial-state mass of the target nucleon and the final-state mass of the produced hadron h. The hadron mass correction is made by introducing a generalized, finite-Q^2 scaling variable ζ_h for the hadron fragmentation function, which approaches the usual energy fraction z_h = E_h/ν in the Bjorken limit. We systematically examine the kinematic dependencies of the mass corrections to semi-inclusive cross sections, and find that these are even larger than for inclusive structure functions. The hadron mass corrections compete with the experimental uncertainties at kinematics typical of current facilities, with Q^2 of a few GeV^2 and intermediate x_B > 0.3, and will be important to efforts at extracting parton distributions from semi-inclusive processes at intermediate energies.
Complete $O(\\alpha)$ QED corrections to polarized Compton scattering
Denner, Ansgar
1999-01-01
The complete QED corrections of O(α) to polarized Compton scattering are calculated for finite electron mass, including the real corrections induced by the processes e^- γ → e^- γ γ and e^- γ → e^- e^- e^+. All relevant formulas are listed in a form that is well suited for direct implementation in computer codes. We present a detailed numerical discussion of the O(α)-corrected cross section and the left-right asymmetry in the energy range of present and future Compton polarimeters, which are used to determine the beam polarization of high-energy e^± beams. For photons with energies of a few eV and electrons with SLC energies or smaller, the corrections are of the order of a few per mille. In the energy range of future e^+e^- colliders, however, they reach 1-2% and cannot be neglected in a precision polarization measurement.
International Nuclear Information System (INIS)
Guerin, Bastien
2010-01-01
We developed and validated a fast Monte Carlo simulation of PET acquisitions based on the SimSET program, accurately modeling the propagation of gamma photons in the patient as well as in the block-based PET detector. Comparison of our simulation with another well-validated code, GATE, and with measurements on two GE Discovery ST PET scanners showed that it accurately models energy spectra (errors smaller than 4.6%), the spatial resolution of block-based PET scanners (6.1%), scatter fraction (3.5%), sensitivity (2.3%) and count rates (12.7%). Next, we developed a novel scatter correction incorporating the energy and position of photons detected in list mode. Our approach is based on a reformulation of the list-mode likelihood function that contains the energy distribution of detected coincidences in addition to their spatial distribution, yielding an EM reconstruction algorithm containing spatially and energy-dependent correction terms. We also proposed using the energy, in addition to the position, of gamma photons in the normalization of the scatter sinogram. Finally, we developed a method for estimating primary and scattered photon energy spectra from the total spectra detected in different sectors of the PET scanner. We evaluated the accuracy and precision of our new spatio-spectral scatter correction and that of the standard spatial correction using realistic Monte Carlo simulations. These results showed that incorporating the energy in the scatter correction reduces bias in the estimation of the absolute activity level by ∼60% in the cold regions of the largest patients and yields quantification errors of less than 13% in all regions. (author)
International Nuclear Information System (INIS)
Shaw, C.G.; Ergun, D.L.; Myerowitz, P.D.; Van Lysel, M.S.; Mistretta, C.A.; Zarnstorff, W.C.; Crummy, A.B.
1982-01-01
The logarithmic amplification of video signals and the availability of data in digital form make digital subtraction videoangiography a suitable tool for videodensitometric estimation of physiological quantities. A system for this purpose was implemented with a digital video image processor. However, it was found that the radiation scattering and veiling glare present in the image-intensified video must be removed to make meaningful quantitations. An algorithm to make such a correction was developed and is presented. With this correction, the videodensitometry system was calibrated with phantoms and used to measure the left ventricular ejection fraction of a canine heart.
Beam hardening correction algorithm in microtomography images
International Nuclear Information System (INIS)
Sales, Erika S.; Lima, Inaya C.B.; Lopes, Ricardo T.; Assis, Joaquim T. de
2009-01-01
Quantification of the mineral density of bone samples is directly related to the attenuation coefficient of bone. The X-rays used in microtomography are polychromatic, with a moderately broad energy spectrum, so the low-energy X-rays passing through a sample are preferentially absorbed, causing a decrease in the measured attenuation coefficient and possibly artifacts. This effect is called beam hardening. In this work, beam hardening in microtomography images of vertebrae of Wistar rats from a hyperthyroidism study was corrected by the method of linearization of the projections, discretized using an energy spectrum (the Herman spectrum). The results without beam hardening correction showed significant differences in bone volume, which could lead to a possible diagnosis of osteoporosis. The corrected data also showed a decrease in bone volume, but this decrease was not significant at a 95% confidence level. (author)
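Linearization of the projections can be sketched as a calibration that maps the measured polychromatic response onto the ideal response that is linear in thickness. The two-energy toy spectrum and the reference attenuation coefficient used in the example below are illustrative assumptions, not the Herman spectrum used in the paper.

```python
import numpy as np

def calibrate_linearization(thicknesses, measured, mu_ref, degree=3):
    """Fit a polynomial mapping measured polychromatic log-attenuation
    onto the ideal monochromatic response mu_ref * thickness."""
    return np.polyfit(measured, mu_ref * np.asarray(thicknesses), degree)

def linearize(projection, coeffs):
    """Apply the calibration polynomial to measured projection data."""
    return np.polyval(coeffs, projection)
```

In practice the calibration would be made on a step wedge (or simulated with the known spectrum), and the resulting polynomial applied to every projection before reconstruction.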
On the radiative corrections to the neutrino deep inelastic scattering
International Nuclear Information System (INIS)
Bardin, D.Yu.; Dokuchaeva, V.A.
1986-01-01
A unique set of formulae is presented for the radiative corrections to the double differential cross section of deep inelastic neutrino scattering in the charged- and neutral-current channels, within a simple quark-parton model and the on-mass-shell renormalization scheme. It is shown that these cross sections, when integrated to one-dimensional distributions or to the total cross section, reproduce many results existing in the literature.
Investigating the effect of photon scattering and its correction in isotope scanning with gamma camera and SPECT
International Nuclear Information System (INIS)
Movafeghi, Amir
1997-01-01
The phantom was an elliptical Jaszczak (Carlson) phantom. The effects of different system settings on scattering were considered. For qualitative comparison, the line spread function and modulation transfer function were measured and extracted in each case. This comparison was done for both laboratory and clinical systems in different experiments, with and without scatter correction. Image reconstruction with the filtered backprojection algorithm was also performed in both cases (with and without scatter correction) and the results were surveyed. The final result: scattering plays a major role in the degradation of reconstructed images, and with scatter correction it is possible to compensate for this error and increase image quality.
Genetic algorithm for chromaticity correction in diffraction limited storage rings
Directory of Open Access Journals (Sweden)
M. P. Ehrlichman
2016-04-01
A multiobjective genetic algorithm is developed for optimizing nonlinearities in diffraction limited storage rings. This algorithm determines sextupole and octupole strengths for chromaticity correction that deliver optimized dynamic aperture and beam lifetime. The algorithm makes use of dominance constraints to breed desirable properties into the early generations. The momentum aperture is optimized indirectly by constraining the chromatic tune footprint and optimizing the off-energy dynamic aperture. The result is an effective and computationally efficient technique for correcting chromaticity in a storage ring while maintaining optimal dynamic aperture and beam lifetime.
Comparative evaluation of scatter correction techniques in 3D positron emission tomography
Zaidi, H
2000-01-01
Much research and development has been concentrated on the scatter compensation required for quantitative 3D PET. Increasingly sophisticated scatter correction procedures are under investigation, particularly those based on accurate scatter models and iterative reconstruction-based scatter compensation approaches. The main difference among the correction methods is the way in which the scatter component in the selected energy window is estimated. Monte Carlo methods give further insight and might in themselves offer a possible correction procedure. Methods: Five scatter correction methods are compared in this paper, where applicable: the dual-energy window (DEW) technique, the convolution-subtraction (CVS) method, two variants of the Monte Carlo-based scatter correction technique (MCBSC1 and MCBSC2), and our newly developed statistical reconstruction-based scatter correction (SRBSC) method. These scatter correction techniques are evaluated using Monte Carlo simulation studies and experimental phantom measurements...
A Hierarchical Volumetric Shadow Algorithm for Single Scattering
Baran, Ilya; Chen, Jiawen; Ragan-Kelley, Jonathan Millar; Durand, Fredo; Lehtinen, Jaakko
2010-01-01
Volumetric effects such as beams of light through participating media are an important component in the appearance of the natural world. Many such effects can be faithfully modeled by a single scattering medium. In the presence of shadows, rendering these effects can be prohibitively expensive: current algorithms are based on ray marching, i.e., integrating the illumination scattered towards the camera along each view ray, modulated by visibility to the light source at each sample. Visibility...
Inverse scattering and refraction corrected reflection for breast cancer imaging
Wiskin, J.; Borup, D.; Johnson, S.; Berggren, M.; Robinson, D.; Smith, J.; Chen, J.; Parisky, Y.; Klock, John
2010-03-01
Reflection ultrasound (US) has been utilized as an adjunct imaging modality for over 30 years. TechniScan, Inc. has developed unique transmission and concomitant reflection algorithms which are used to reconstruct images from data gathered during a tomographic breast scanning process called Warm Bath Ultrasound (WBU™). The transmission algorithm yields high-resolution, 3D attenuation and speed of sound (SOS) images. The reflection algorithm is based on canonical ray tracing utilizing refraction correction via the SOS and attenuation reconstructions. The refraction-corrected reflection algorithm allows 360° compounding, resulting in the reflection image. The requisite data are collected by scanning the entire breast in a 33 °C water bath, in 8 minutes on average. This presentation explains how the data are collected and processed by the 3D transmission and reflection imaging mode algorithms. The processing is carried out using two NVIDIA® Tesla™ GPU processors, accessing data on a 4-TB RAID. The WBU™ images are displayed in a DICOM viewer that allows registration of all three modalities. Several representative cases are presented to demonstrate potential diagnostic capability, including a cyst, a fibroadenoma, and a carcinoma. WBU™ images (SOS, attenuation, and reflection modalities) are shown along with their respective mammograms and standard ultrasound images. In addition, anatomical studies are shown comparing WBU™ images and MRI images of a cadaver breast. This innovative technology is designed to provide additional tools in the armamentarium for the diagnosis of breast disease.
An empirical correction for moderate multiple scattering in super-heterodyne light scattering.
Botin, Denis; Mapa, Ludmila Marotta; Schweinfurth, Holger; Sieber, Bastian; Wittenberg, Christopher; Palberg, Thomas
2017-05-28
Frequency domain super-heterodyne laser light scattering is utilized in a low angle integral measurement configuration to determine flow and diffusion in charged sphere suspensions showing moderate to strong multiple scattering. We introduce an empirical correction to subtract the multiple scattering background and isolate the singly scattered light. We demonstrate the excellent feasibility of this simple approach for turbid suspensions of transmittance T ≥ 0.4. We study the particle concentration dependence of the electro-kinetic mobility in low salt aqueous suspension over an extended concentration regime and observe a maximum at intermediate concentrations. We further use our scheme for measurements of the self-diffusion coefficients in the fluid samples in the absence or presence of shear, as well as in polycrystalline samples during crystallization and coarsening. We discuss the scope and limits of our approach as well as possible future applications.
Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods
International Nuclear Information System (INIS)
Narita, Y.; Eberl, S.; Nakamura, T.
1996-01-01
Two independent scatter correction techniques, transmission-dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter, and scatter plus primary) were simulated for 99mTc and 201Tl in numerical chest phantoms. Data were reconstructed with an ordered-subset ML-EM algorithm including attenuation correction using the transmission data. In the chest phantom simulation, TDCS provided better S/N than TEW, and better accuracy, i.e., 1.0% vs. -7.2% in the myocardium and -3.7% vs. -30.1% in the ventricular chamber for 99mTc with TDCS and TEW, respectively. For 201Tl, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological function with SPECT.
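The TEW estimate compared above has a simple standard form: the scatter counts inside the photopeak window are approximated by a trapezoid spanned by two narrow flanking windows. The window widths below (in keV) are illustrative defaults, not the values used in the study.

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_main):
    """Trapezoidal triple-energy-window estimate of the scatter counts
    inside the main (photopeak) window, from the counts and widths of
    two narrow windows flanking it."""
    return (c_lower / w_lower + c_upper / w_upper) * w_main / 2.0

def tew_correct(c_main, c_lower, c_upper, w_lower=3.0, w_upper=3.0, w_main=20.0):
    """Subtract the TEW scatter estimate, clamping at zero counts."""
    return max(c_main - tew_scatter_estimate(c_lower, c_upper,
                                             w_lower, w_upper, w_main), 0.0)
```

The correction is applied pixel by pixel to the projections before reconstruction; its noise penalty, visible in the comparison above, comes from the small counts in the narrow flanking windows.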
International Nuclear Information System (INIS)
Ramamurthy, Senthil; D’Orsi, Carl J; Sechopoulos, Ioannis
2016-01-01
A previously proposed x-ray scatter correction method for dedicated breast computed tomography was further developed and implemented so as to allow for initial patient testing. The method involves the acquisition of a complete second set of breast CT projections covering 360° with a perforated tungsten plate in the path of the x-ray beam. To make patient testing feasible, a wirelessly controlled electronic positioner for the tungsten plate was designed and added to a breast CT system. Other improvements to the algorithm were implemented, including automated exclusion of non-valid primary estimate points and the use of a different approximation method to estimate the full scatter signal. To evaluate the effectiveness of the algorithm, the resulting image quality was assessed with a breast phantom and with nine patient images. The improvements in the algorithm avoided the introduction of artifacts, especially at object borders, which had been an issue in the previous implementation in some cases. Both contrast, in terms of signal difference, and signal difference-to-noise ratio were improved with the proposed method, as opposed to the correction algorithm incorporated in the system, which does not recover contrast. Patient image evaluation also showed enhanced contrast, better cupping correction, and more consistent voxel values for the different tissues. The algorithm also reduces artifacts present in reconstructions of non-regularly shaped breasts. With the implemented hardware and software improvements, the proposed method can be reliably used during patient breast CT imaging, resulting in improved image quality with no introduction of artifacts and, in some cases, reduction of artifacts already present. The impact of the algorithm on actual clinical performance for detection, diagnosis, and other clinical tasks in breast imaging remains to be evaluated. (paper)
Coulomb corrections to scattering length and effective radius
International Nuclear Information System (INIS)
Mur, V.D.; Kudryavtsev, A.E.; Popov, V.S.
1983-01-01
The problem considered is the extraction of the "purely nuclear" scattering length a_s (corresponding to the strong potential V_s with the Coulomb interaction switched off) from the Coulomb-nuclear scattering length a_cs, which is the object of experimental measurement. The difference between a_s and a_cs is especially large if the potential V_s has a level (real or virtual) with an energy close to zero. For this case formulae are obtained relating the scattering lengths a_s and a_cs, as well as the effective radii r_s and r_cs. The results are extended to states with arbitrary angular momenta l. It is shown that the Coulomb correction is especially large for the coefficient of k^(2l) in the expansion of the effective radius; in this case the correction contains a large logarithm ln(a_B/r_0). The Coulomb renormalization of the other terms in the effective radius expansion is of order (r_0/a_B), where r_0 is the nuclear force radius and a_B is the Bohr radius. The formulae obtained are tested on a number of model potentials V_s used in nuclear physics.
Two-loop fermionic corrections to massive Bhabha scattering
Energy Technology Data Exchange (ETDEWEB)
Actis, S.; Riemann, T. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Czakon, M. [Wuerzburg Univ. (Germany). Inst. fuer Theoretische Physik und Astrophysik]|[Institute of Nuclear Physics, NSCR DEMOKRITOS, Athens (Greece); Gluza, J. [Silesia Univ., Katowice (Poland). Inst. of Physics
2007-05-15
We evaluate the two-loop corrections to Bhabha scattering from fermion loops in the context of pure Quantum Electrodynamics. The differential cross section is expressed by a small number of Master Integrals with exact dependence on the fermion masses m_e, m_f and the Mandelstam invariants s, t, u. We determine the limit of fixed scattering angle and high energy, assuming the hierarchy of scales m_e^2 <
Multiple-scattering corrections to the Beer-Lambert law
International Nuclear Information System (INIS)
Zardecki, A.
1983-01-01
The effect of multiple scattering on the validity of the Beer-Lambert law is discussed for a wide range of particle-size parameters and optical depths. To predict the amount of received radiant power, appropriate correction terms are introduced. For particles larger than or comparable to the wavelength of radiation, the small-angle approximation is adequate; whereas for small densely packed particles, the diffusion theory is advantageously employed. These two approaches are used in the context of the problem of laser-beam propagation in a dense aerosol medium. In addition, preliminary results obtained by using a two-dimensional finite-element discrete-ordinates transport code are described. Multiple-scattering effects for laser propagation in fog, cloud, rain, and aerosol cloud are modeled
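The departure from the Beer-Lambert law can be caricatured with an effective optical depth: if a fraction of the forward-scattered light still reaches the receiver, the apparent attenuation is reduced accordingly. This single-parameter form is illustrative only; the small-angle and diffusion treatments referred to above are far more detailed.

```python
import math

def beer_lambert(p0, tau):
    """Unscattered Beer-Lambert transmitted power for optical depth tau."""
    return p0 * math.exp(-tau)

def corrected_transmission(p0, tau, single_scatter_albedo, forward_fraction):
    """Received power when a fraction of the forward-scattered light
    still reaches the detector (illustrative effective-tau form)."""
    effective_tau = tau * (1.0 - forward_fraction * single_scatter_albedo)
    return p0 * math.exp(-effective_tau)
```

The received power is always at least the Beer-Lambert value, reducing to it when no scattered light reaches the detector, which mirrors the sign of the correction terms discussed in the abstract.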
Directory of Open Access Journals (Sweden)
Xiaole Shen
2015-09-01
The uneven illumination caused by thin clouds reduces the quality of remote sensing images and hinders image interpretation. To remove the effect of thin clouds on images, an uneven illumination correction can be applied. In this paper, an effective uneven illumination correction algorithm is proposed to remove the effect of thin clouds and to restore the ground information of the optical remote sensing image. The imaging model of remote sensing images covered by thin clouds is analyzed. Due to transmission attenuation, reflection, and scattering, thin cloud cover usually increases region brightness and reduces the saturation and contrast of the image. Accordingly, a wavelet-domain enhancement is performed on the image in Hue-Saturation-Value (HSV) color space. We use images with thin clouds over the Wuhan area captured by the QuickBird and ZiYuan-3 (ZY-3) satellites for experiments. Three traditional uneven illumination correction algorithms, i.e., the multi-scale Retinex (MSR) algorithm, the homomorphic filtering (HF)-based algorithm, and the wavelet transform-based MASK (WT-MASK) algorithm, are used for comparison. Five indicators, i.e., mean value, standard deviation, information entropy, average gradient, and hue deviation index (HDI), are used to analyze the effect of the algorithms. The experimental results show that the proposed algorithm can effectively eliminate the influence of thin clouds and restore the real color of ground objects under thin clouds.
Meson exchange corrections in deep inelastic scattering on deuteron
International Nuclear Information System (INIS)
Kaptari, L.P.; Titov, A.I.
1989-01-01
Starting from the general equations of motion of nucleons interacting with mesons, the one-particle Schroedinger-like equation for the nucleon wave function and the deep inelastic scattering amplitude with meson-exchange currents are obtained. Effective pion, sigma, and omega meson exchanges are considered. It is found that the mesonic corrections only partially (about 60%) restore the energy sum rule breaking caused by the nucleon off-mass-shell effects in nuclei. This result contradicts the prediction based on a calculation of the energy sum rule limited to second order in the nucleon-meson vertex and the static approximation. 17 refs.; 3 figs.
DEFF Research Database (Denmark)
Keller, Sune H; Svarer, Claus; Sibomana, Merence
2013-01-01
In the standard software for the Siemens high-resolution research tomograph (HRRT) positron emission tomography (PET) scanner, the most commonly used segmentation in the μ-map reconstruction for human brain scans is maximum a posteriori for transmission (MAP-TR). Bias in the lower cerebellum ... scatter correction in the μ-map reconstruction and total variation filtering to the transmission processing. Results: Comparing MAP-TR and the new TXTV with gold-standard CT-based attenuation correction, we found that TXTV has less bias compared to MAP-TR. We also compared images acquired at the HRRT ...
Energy Technology Data Exchange (ETDEWEB)
Park, Yang-Kyun, E-mail: ykpark@mgh.harvard.edu; Sharp, Gregory C.; Phillips, Justin; Winey, Brian A. [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States)
2015-08-15
Purpose: To demonstrate the feasibility of proton dose calculation on scatter-corrected cone-beam computed tomographic (CBCT) images for the purpose of adaptive proton therapy. Methods: CBCT projection images were acquired from anthropomorphic phantoms and a prostate patient using the on-board imaging system of an Elekta Infinity linear accelerator. Two previously introduced techniques were used to correct the scattered x-rays in the raw projection images: uniform scatter correction (CBCT_us) and a priori CT-based scatter correction (CBCT_ap). CBCT images were reconstructed using a standard FDK algorithm and a GPU-based reconstruction toolkit. Soft-tissue ROI-based HU shifting was used to improve the HU accuracy of the uncorrected CBCT images and CBCT_us, while no HU change was applied to the CBCT_ap. The degree of equivalence of the corrected CBCT images with respect to the reference CT image (CT_ref) was evaluated by using angular profiles of water equivalent path length (WEPL) and passively scattered proton treatment plans. The CBCT_ap was further evaluated in more realistic scenarios such as rectal filling and weight loss to assess the effect of mismatched prior information on the corrected images. Results: The uncorrected CBCT and CBCT_us images demonstrated substantial WEPL discrepancies (7.3 ± 5.3 mm and 11.1 ± 6.6 mm, respectively) with respect to the CT_ref, while the CBCT_ap images showed substantially reduced WEPL errors (2.4 ± 2.0 mm). Similarly, the CBCT_ap-based treatment plans demonstrated a high pass rate (96.0% ± 2.5% with 2 mm/2% criteria) in a 3D gamma analysis. Conclusions: The a priori CT-based scatter correction technique was shown to be promising for adaptive proton therapy, as it achieved proton dose distributions and water equivalent path lengths equivalent to those of a reference CT in a selection of anthropomorphic phantoms.
Aethalometer multiple scattering correction Cref for mineral dust aerosols
Di Biagio, Claudia; Formenti, Paola; Cazaunau, Mathieu; Pangui, Edouard; Marchand, Nicolas; Doussin, Jean-François
2017-08-01
In this study we provide a first estimate of the Aethalometer multiple scattering correction Cref for mineral dust aerosols. Cref is an empirical constant used to correct the aerosol absorption coefficient measurements for the multiple scattering artefact of the Aethalometer; i.e. the filter fibres on which aerosols are deposited scatter light and this is miscounted as absorption. The Cref at 450 and 660 nm was obtained from the direct comparison of Aethalometer data (Magee Sci. AE31) with (i) the absorption coefficient calculated as the difference between the extinction and scattering coefficients measured by a Cavity Attenuated Phase Shift Extinction analyser (CAPS PMex) and a nephelometer respectively at 450 nm and (ii) the absorption coefficient from a MAAP (Multi-Angle Absorption Photometer) at 660 nm. Measurements were performed on seven dust aerosol samples generated in the laboratory by the mechanical shaking of natural parent soils issued from different source regions worldwide. The single scattering albedo (SSA) at 450 and 660 nm and the size distribution of the aerosols were also measured. Cref for mineral dust varies between 1.81 and 2.56 for a SSA of 0.85-0.96 at 450 nm and between 1.75 and 2.28 for a SSA of 0.98-0.99 at 660 nm. The calculated mean for dust is 2.09 (±0.22) at 450 nm and 1.92 (±0.17) at 660 nm. With this new Cref the dust absorption coefficient by the Aethalometer is about 2 % (450 nm) and 11 % (660 nm) higher than that obtained by using Cref = 2.14 at both 450 and 660 nm, as usually assumed in the literature. This difference induces a change of up to 3 % in the dust SSA at 660 nm. The Cref seems to be independent of the fine and coarse particle size fractions, and so the obtained Cref can be applied to dust both close to sources and following transport. Additional experiments performed with pure kaolinite minerals and polluted ambient aerosols indicate Cref of 2.49 (±0.02) and 2.32 (±0.01) at 450 and 660 nm (SSA = 0.96-0.97) for
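The Cref derivation described above reduces to a ratio of collocated measurements; a minimal sketch follows, with all numeric values invented for illustration rather than taken from the study:

```python
# Cref is the ratio of the Aethalometer attenuation coefficient to a
# reference absorption coefficient, obtained here as extinction (CAPS PMex)
# minus scattering (nephelometer). All numbers are illustrative only.
def reference_absorption(b_ext, b_scat):
    """Reference absorption coefficient (Mm^-1) as extinction minus scattering."""
    return b_ext - b_scat

def c_ref(b_atn, b_abs):
    """Empirical multiple scattering correction factor."""
    return b_atn / b_abs

b_abs = reference_absorption(120.0, 105.0)   # 15.0 Mm^-1
print(round(c_ref(31.4, b_abs), 2))          # -> 2.09
```

At 660 nm the same ratio would be formed with the MAAP absorption coefficient directly as the reference, per the comparison described in the abstract.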
Multiangle Implementation of Atmospheric Correction (MAIAC): 2. Aerosol Algorithm
Lyapustin, A.; Wang, Y.; Laszlo, I.; Kahn, R.; Korkin, S.; Remer, L.; Levy, R.; Reid, J. S.
2011-01-01
An aerosol component of a new multiangle implementation of atmospheric correction (MAIAC) algorithm is presented. MAIAC is a generic algorithm developed for the Moderate Resolution Imaging Spectroradiometer (MODIS), which performs aerosol retrievals and atmospheric correction over both dark vegetated surfaces and bright deserts based on a time series analysis and image-based processing. The MAIAC look-up tables explicitly include surface bidirectional reflectance. The aerosol algorithm derives the spectral regression coefficient (SRC) relating surface bidirectional reflectance in the blue (0.47 micron) and shortwave infrared (2.1 micron) bands; this quantity is prescribed in the MODIS operational Dark Target algorithm based on a parameterized formula. The MAIAC aerosol products include aerosol optical thickness and a fine-mode fraction at a resolution of 1 km. This high resolution, required in many applications such as air quality, brings new information about aerosol sources and, potentially, their strength. AERONET validation shows that the MAIAC and MOD04 algorithms have similar accuracy over dark and vegetated surfaces and that MAIAC generally improves accuracy over brighter surfaces due to the SRC retrieval and explicit bidirectional reflectance factor characterization, as demonstrated for several U.S. West Coast AERONET sites. Due to its generic nature and developed angular correction, MAIAC performs aerosol retrievals over bright deserts, as demonstrated for the Solar Village Aerosol Robotic Network (AERONET) site in Saudi Arabia.
[An automatic color correction algorithm for digital human body sections].
Zhuge, Bin; Zhou, He-qin; Tang, Lei; Lang, Wen-hui; Feng, Huan-qing
2005-06-01
To find a new approach to improving the uniformity of color parameters in the image data of serial sections of the human body, an automatic color correction algorithm in the RGB color space based on a standard CMYK color chart was proposed. The gray part of the color chart was automatically segmented from every original image, and fifteen gray values were obtained. The transformation function between the measured gray values and the standard gray values of the color chart, and the corresponding lookup table, were then computed. In RGB color space, the colors of the images were corrected according to the lookup table. The color of the original Chinese Digital Human Girl No. 1 (CDH-G1) database was corrected using the algorithm with Matlab 6.5, taking 13.475 s per picture on a personal computer. Using the algorithm, the color of the original database is corrected automatically and quickly, and the uniformity of color parameters in the corrected dataset is improved.
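A rough sketch of the chart-based correction in Python; the cubic polynomial fit is an assumed form, since the abstract specifies only a transformation function and a lookup table, and the patch values are hypothetical:

```python
import numpy as np

# Fit a transfer function from the fifteen measured gray-patch values to the
# chart's standard grays, expand it into a 256-entry lookup table, and apply
# the table to every RGB channel. The cubic fit is an illustrative choice.
def build_lut(measured_grays, standard_grays, degree=3):
    coeffs = np.polyfit(measured_grays, standard_grays, degree)
    lut = np.clip(np.polyval(coeffs, np.arange(256)), 0, 255)
    return lut.astype(np.uint8)

def correct_image(img, lut):
    return lut[img]                          # same LUT applied per channel

measured = np.linspace(20.0, 230.0, 15)      # hypothetical measured patches
standard = np.linspace(0.0, 255.0, 15)       # nominal chart grays
lut = build_lut(measured, standard)
img = np.full((4, 4, 3), 128, dtype=np.uint8)
print(correct_image(img, lut).shape)         # -> (4, 4, 3)
```

Clipping keeps extrapolated values inside the valid 8-bit range, which matters because the measured grays rarely span the full 0-255 interval.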
International Nuclear Information System (INIS)
Siewerdsen, J.H.; Daly, M.J.; Bakhtiar, B.
2006-01-01
X-ray scatter poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in contrast reduction, image artifacts, and lack of CT number accuracy. We report the performance of a simple scatter correction method in which scatter fluence is estimated directly in each projection from pixel values near the edge of the detector behind the collimator leaves. The algorithm operates on the simple assumption that signal in the collimator shadow is attributable to x-ray scatter, and the 2D scatter fluence is estimated by interpolating between pixel values measured along the top and bottom edges of the detector behind the collimator leaves. The resulting scatter fluence estimate is subtracted from each projection to yield an estimate of the primary-only images for CBCT reconstruction. Performance was investigated in phantom experiments on an experimental CBCT benchtop, and the effect on image quality was demonstrated in patient images (head, abdomen, and pelvis sites) obtained on a preclinical system for CBCT-guided radiation therapy. The algorithm provides significant reduction in scatter artifacts without compromise in contrast-to-noise ratio (CNR). For example, in a head phantom, cupping artifact was essentially eliminated, CT number accuracy was restored to within 3%, and CNR (breast-to-water) was improved by up to 50%. Similarly in a body phantom, cupping artifact was reduced by at least a factor of 2 without loss in CNR. Patient images demonstrate significantly increased uniformity, accuracy, and contrast, with an overall improvement in image quality in all sites investigated. Qualitative evaluation illustrates that soft-tissue structures that are otherwise undetectable are clearly delineated in scatter-corrected reconstructions. Since scatter is estimated directly in each projection, the algorithm is robust with respect to system geometry, patient size and heterogeneity, patient motion, etc. Operating without prior information, analytical modeling
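The edge-interpolation idea can be sketched as follows; detector dimensions, edge-row counts, and signal levels are invented for illustration:

```python
import numpy as np

# Treat the mean signal in pixel rows behind the top and bottom collimator
# leaves as pure scatter and linearly interpolate between them down each
# detector column to obtain a 2D scatter fluence estimate.
def estimate_scatter(projection, edge_rows=4):
    top = projection[:edge_rows].mean(axis=0)      # scatter sampled at top edge
    bottom = projection[-edge_rows:].mean(axis=0)  # scatter sampled at bottom edge
    n = projection.shape[0]
    w = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - w) * top + w * bottom

def correct_projection(projection, edge_rows=4):
    return projection - estimate_scatter(projection, edge_rows)

proj = np.full((100, 8), 50.0)       # 50 = scatter level in the leaf shadow
proj[4:-4] += 200.0                  # primary signal in the open field
corrected = correct_projection(proj)
print(float(corrected[:4].mean()))   # -> 0.0 (shadow rows are scatter-free)
```

In this synthetic case the open-field signal is restored to the primary-only value of 200, illustrating why the method needs no prior information about the patient.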
Energy Technology Data Exchange (ETDEWEB)
Kauppinen, T.; Vanninen, E.; Kuikka, J.T. [Kuopio Central Hospital (Finland). Dept. of Clinical Physiology; Koskinen, M.O. [Dept. of Clinical Physiology and Nuclear Medicine, Tampere Univ. Hospital, Tampere (Finland); Alenius, S. [Signal Processing Lab., Tampere Univ. of Technology, Tampere (Finland)
2000-09-01
Filtered back-projection (FBP) is generally used as the reconstruction method for single-photon emission tomography although it produces noisy images with apparent streak artefacts. It is possible to improve the image quality by using an algorithm with iterative correction steps. The iterative reconstruction technique also has an additional benefit in that computation of attenuation correction can be included in the process. A commonly used iterative method, maximum-likelihood expectation maximisation (ML-EM), can be accelerated using ordered subsets (OS-EM). We have applied to the OS-EM algorithm a Bayesian one-step late correction method utilising median root prior (MRP). Methodological comparison was performed by means of measurements obtained with a brain perfusion phantom and using patient data. The aim of this work was to quantitate the accuracy of iterative reconstruction with scatter and non-uniform attenuation corrections and post-filtering in SPET brain perfusion imaging. SPET imaging was performed using a triple-head gamma camera with fan-beam collimators. Transmission and emission scans were acquired simultaneously. The brain phantom used was a high-resolution three-dimensional anthropomorphic JB003 phantom. Patient studies were performed in ten chronic pain syndrome patients. The images were reconstructed using conventional FBP and iterative OS-EM and MRP techniques including scatter and nonuniform attenuation corrections. Iterative reconstructions were individually post-filtered. The quantitative results obtained with the brain perfusion phantom were compared with the known actual contrast ratios. The calculated difference from the true values was largest with the FBP method; iteratively reconstructed images proved closer to the reality. Similar findings were obtained in the patient studies. The plain OS-EM method improved the contrast whereas in the case of the MRP technique the improvement in contrast was not so evident with post-filtering. (orig.)
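The one-step-late MRP correction to the EM update can be sketched in 1D as below; the value of beta and the three-pixel median window are generic choices for illustration, not parameters from the study:

```python
import numpy as np

# One-step-late median root prior (MRP) applied to an ML-EM update in 1D.
# 'system' is the projection matrix; beta weights the penalty, which pulls
# each pixel toward the median of its neighbourhood.
def median3(x):
    p = np.pad(x, 1, mode='edge')
    return np.median(np.stack([p[:-2], p[1:-1], p[2:]]), axis=0)

def em_mrp_step(lam, measured, system, beta=0.3):
    forward = system @ lam                            # forward projection
    ratio = system.T @ (measured / np.maximum(forward, 1e-12))
    sens = system.T @ np.ones_like(measured)          # sensitivity image
    med = np.maximum(median3(lam), 1e-12)
    penalty = 1.0 + beta * (lam - med) / med          # one-step-late MRP term
    return lam * ratio / np.maximum(sens * penalty, 1e-12)

# With an identity system and beta = 0 this reduces to plain ML-EM, which
# recovers the measured data in one step from a flat start.
measured = np.array([1.0, 2.0, 3.0, 4.0])
lam = em_mrp_step(np.ones(4), measured, np.eye(4), beta=0.0)
print(lam)   # -> [1. 2. 3. 4.]
```

In an OS-EM variant the same update is applied subset by subset, with `system` and `measured` restricted to the projections in the current subset.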
An algorithm for computing screened Coulomb scattering in GEANT4
Energy Technology Data Exchange (ETDEWEB)
Mendenhall, Marcus H. [Vanderbilt University Free Electron Laser Center, P.O. Box 351816 Station B, Nashville, TN 37235-1816 (United States)]. E-mail: marcus.h.mendenhall@vanderbilt.edu; Weller, Robert A. [Department of Electrical Engineering and Computer Science, Vanderbilt University, P.O. Box 351821 Station B, Nashville, TN 37235-1821 (United States)]. E-mail: robert.a.weller@vanderbilt.edu
2005-01-01
An algorithm has been developed for the GEANT4 Monte Carlo package for the efficient computation of screened Coulomb interatomic scattering. It explicitly integrates the classical equations of motion for scattering events, resulting in precise tracking of both the projectile and the recoil target nucleus. The algorithm permits the user to plug in an arbitrary screening function, such as Lenz-Jensen screening, which is well suited to backscattering calculations, or Ziegler-Biersack-Littmark screening, which is well suited to nuclear straggling and implantation problems. This will allow many of the applications of the TRIM and SRIM codes to be extended into the much more general GEANT4 framework, where nuclear and other effects can be included.
Virtual two-loop corrections to Bhabha scattering
International Nuclear Information System (INIS)
Bjoerkevoll, K.S.
1992-03-01
The author has developed methods for calculating the contributions from six ladder-like diagrams to Bhabha scattering. The leading terms, both for the separate diagrams and for the sum of the gauge-invariant set of all diagrams, have been calculated. The study has been limited to contributions from Feynman diagrams without real photons, and all calculations have been done with s ≫ |t| ≫ m², where s is the square of the centre-of-mass energy, t is the square of the transferred four-momentum, and m is the electron mass. For the separate diagrams the results depend upon how λ² is related to s, |t| and m², whereas the leading term of the sum of the six diagrams is the same in all cases considered. The methods described should be valuable for calculations of contributions from other Feynman diagrams, in particular QED corrections to Bhabha scattering or pair production at small angles. 23 refs., 5 figs., 5 tabs
TU-F-18C-03: X-Ray Scatter Correction in Breast CT: Advances and Patient Testing
International Nuclear Information System (INIS)
Ramamurthy, S; Sechopoulos, I
2014-01-01
Purpose: To further develop and perform patient testing of an x-ray scatter correction algorithm for dedicated breast computed tomography (BCT). Methods: A previously proposed algorithm for x-ray scatter signal reduction in BCT imaging was modified and tested with a phantom and on patients. A wireless electronic positioner system was designed and added to the BCT system to move a tungsten plate into and out of the x-ray beam. The interpolation used by the algorithm was replaced with a radial basis function-based algorithm, with automated exclusion of invalid sampled points due to patient motion or other factors. A 3D adaptive noise reduction filter was also introduced to reduce the impact of scatter quantum noise post-reconstruction. The impact of the improved algorithm on image quality was evaluated using a breast phantom and seven patient breasts, quantitatively with metrics such as signal difference (SD) and signal-difference-to-noise ratio (SDNR) and qualitatively using image profiles. Results: The improvements to the algorithm resulted in a more robust interpolation step, with no introduction of image artifacts, especially at the imaged object boundaries, which was an issue in the previous implementation. Qualitative evaluation of the reconstructed slices and corresponding profiles shows excellent homogeneity of both the background and the higher-density features throughout the imaged object, as well as increased accuracy in the Hounsfield unit (HU) values of the tissues. Profiles also demonstrate a substantial increase in both SD and SDNR between glandular and adipose regions compared to both the uncorrected and system-corrected images. Conclusion: The improved scatter correction algorithm can be reliably used during patient BCT acquisitions with no introduction of artifacts, resulting in substantial improvement in image quality. Its impact on actual clinical performance needs to be evaluated in the future. Research Agreement: Koning Corp., Hologic
Fully multidimensional flux-corrected transport algorithms for fluids
International Nuclear Information System (INIS)
Zalesak, S.T.
1979-01-01
The theory of flux-corrected transport (FCT) developed by Boris and Book is placed in a simple, generalized format, and a new algorithm for implementing the critical flux limiting stage in multidimensions without resort to time splitting is presented. The new flux limiting algorithm allows the use of FCT techniques in multidimensional fluid problems for which time splitting would produce unacceptable numerical results, such as those involving incompressible or nearly incompressible flow fields. The 'clipping' problem associated with the original one dimensional flux limiter is also eliminated or alleviated. Test results and applications to a two dimensional fluid plasma problem are presented
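A 1D sketch of the limiting stage, in the usual notation where w is the low-order transported solution and A[i] is the antidiffusive flux from cell i to cell i+1; boundary handling is simplified, and the multidimensional version of the abstract limits fluxes across all faces of a cell jointly:

```python
import numpy as np

# Zalesak-style flux limiter in 1D: scale each antidiffusive flux so the
# corrected solution cannot exceed the local extrema of the low-order
# solution w, avoiding the time-splitting of the original limiter.
def zalesak_limit(w, A):
    wp = np.pad(w, 1, mode='edge')
    wmax = np.max(np.stack([wp[:-2], wp[1:-1], wp[2:]]), axis=0)  # allowed max
    wmin = np.min(np.stack([wp[:-2], wp[1:-1], wp[2:]]), axis=0)  # allowed min
    Af = np.concatenate(([0.0], A, [0.0]))                 # faces incl. boundaries
    Pp = np.maximum(Af[:-1], 0) - np.minimum(Af[1:], 0)    # total influx per cell
    Pm = np.maximum(Af[1:], 0) - np.minimum(Af[:-1], 0)    # total outflux per cell
    Rp = np.where(Pp > 0, np.minimum(1.0, (wmax - w) / np.where(Pp > 0, Pp, 1.0)), 0.0)
    Rm = np.where(Pm > 0, np.minimum(1.0, (w - wmin) / np.where(Pm > 0, Pm, 1.0)), 0.0)
    # Face i is constrained by the receiving cell's R+ and the donating cell's R-.
    C = np.where(A >= 0, np.minimum(Rp[1:], Rm[:-1]), np.minimum(Rp[:-1], Rm[1:]))
    return C * A

# On a locally flat field every antidiffusive flux is fully cancelled,
# which is exactly the monotonicity the limiter is meant to enforce.
w = np.ones(5)
A = np.array([0.4, -0.2, 0.1, 0.3])
print(zalesak_limit(w, A))
```

The limited fluxes are then applied as a conservative update, w[i] += C*A[i-1] - C*A[i], so that limiting never destroys conservation.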
Relativistic corrections to the elastic electron scattering from 208Pb
International Nuclear Information System (INIS)
Chandra, H.; Sauer, G.
1976-01-01
In the present work we have calculated the differential cross sections for elastic electron scattering from 208Pb using the charge distributions resulting from various corrections. The point proton and neutron mass distributions have been calculated from the spherical wave functions for 208Pb obtained by Kolb et al. The relativistic correction to the nuclear charge distribution coming from the electromagnetic structure of the nucleon has been accomplished by assuming a linear superposition of Gaussian shapes for the proton and neutron charge form factors. The results of this calculation are quite similar to those of an earlier calculation by Bertozzi et al., who used a different wave function for 208Pb and assumed exponential smearing for the proton, corresponding to the dipole fit for the form factor. The reason for the small spin-orbit contribution to the effective charge distribution is also discussed in some detail. It is further shown that the use of a single Gaussian shape for the proton smearing usually underestimates the actual theoretical cross section
Jo, Byung-Du; Lee, Young-Jin; Kim, Dae-Hong; Jeon, Pil-Hyun; Kim, Hee-Joung
2014-03-01
In conventional digital radiography (DR) using a dual-energy subtraction technique, a significant fraction of the detected photons are scattered within the body, resulting in a scatter component. Scattered radiation can significantly deteriorate image quality in diagnostic X-ray imaging systems. Various methods of scatter correction, both measurement-based and non-measurement-based, have been proposed in the past. Both classes of methods can reduce scatter artifacts in images; however, non-measurement-based methods require a homogeneous object and correct the scatter component insufficiently. Therefore, we employed a measurement-based method to correct for the scatter component of inhomogeneous objects in dual-energy DR (DEDR) images. We performed a simulation study using a Monte Carlo simulation with a primary modulator, which is a measurement-based method, for the DEDR system. The primary modulator, which has a checkerboard pattern, was used to modulate the primary radiation. Cylindrical phantoms of variable size were used to quantify imaging performance. For scatter estimation, we used discrete Fourier transform filtering. The primary modulation method was evaluated using a cylindrical phantom in the DEDR system. The scatter components were accurately removed using the primary modulator. When the results acquired with and without scatter correction were compared, the average contrast-to-noise ratio (CNR) with correction was 1.35 times higher than that obtained without correction, and the average root mean square error (RMSE) with correction was 38.00% better than that without correction. In the subtraction study, the average CNR with correction was 2.04 (aluminum subtraction) and 1.38 (polymethyl methacrylate (PMMA) subtraction) times higher than that obtained without correction. The analysis demonstrated the accuracy of scatter correction and the improvement of image quality using a primary modulator and showed the feasibility of
Fast sampling algorithm for the simulation of photon Compton scattering
International Nuclear Information System (INIS)
Brusa, D.; Salvat, F.
1996-01-01
A simple algorithm for the simulation of Compton interactions of unpolarized photons is described. The energy and direction of the scattered photon, as well as the active atomic electron shell, are sampled from the double-differential cross section obtained by Ribberfors from the relativistic impulse approximation. The algorithm consistently accounts for Doppler broadening and electron binding effects. Simplifications of Ribberfors' formula, required for efficient random sampling, are discussed. The algorithm involves a combination of inverse transform, composition and rejection methods. A parameterization of the Compton profile is proposed from which the simulation of Compton events can be performed analytically in terms of a few parameters that characterize the target atom, namely shell ionization energies, occupation numbers and maximum values of the one-electron Compton profiles. (orig.)
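As a simplified illustration of the rejection step, the sketch below samples the free-electron (Klein-Nishina) limit only; the published algorithm additionally samples the active shell and the Doppler-broadened electron momentum from Ribberfors' double-differential cross section, which is omitted here:

```python
import random

# Rejection sampling of the Klein-Nishina energy fraction eps = k'/k for a
# photon of energy k (in units of the electron rest energy) scattering off a
# free electron at rest. Binding and Doppler broadening are deliberately
# ignored in this sketch.
def sample_compton(k, rng=random.random):
    eps_min = 1.0 / (1.0 + 2.0 * k)                  # backscatter limit
    norm = eps_min + 1.0 / eps_min                   # bound on the weight
    while True:
        eps = eps_min + (1.0 - eps_min) * rng()      # uniform candidate in eps
        cost = 1.0 - (1.0 / eps - 1.0) / k           # Compton kinematics
        sin2 = 1.0 - cost * cost
        if rng() < (eps + 1.0 / eps - sin2) / norm:  # rejection test
            return eps, cost

random.seed(1)
eps, cost = sample_compton(2.0)   # k = 2 electron rest energies (~1 MeV)
print(0.2 <= eps <= 1.0 and -1.0 <= cost <= 1.0)   # -> True
```

Since dσ/dε is proportional to ε + 1/ε − sin²θ, a uniform candidate in ε with this weight reproduces the Klein-Nishina distribution exactly; the full algorithm layers composition and inverse-transform steps on top for efficiency.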
International Nuclear Information System (INIS)
Mayers, J.; Cywinski, R.
1985-03-01
Some of the approximations commonly used for the analytical estimation of multiple scattering corrections to thermal neutron elastic scattering data from cylindrical and plane-slab samples have been tested using a Monte Carlo program. It is shown that the approximations are accurate for a wide range of sample geometries and scattering cross-sections. Neutron polarisation analysis provides the most stringent test of multiple scattering calculations, as multiply scattered neutrons may be redistributed not only geometrically but also between the spin-flip and non-spin-flip scattering channels. A very simple analytical technique for correcting for multiple scattering in neutron polarisation analysis has been tested using the Monte Carlo program and has been shown to work remarkably well in most circumstances. (author)
SU-D-206-04: Iterative CBCT Scatter Shading Correction Without Prior Information
International Nuclear Information System (INIS)
Bai, Y; Wu, P; Mao, T; Gong, S; Wang, J; Niu, T; Sheng, K; Xie, Y
2016-01-01
Purpose: To estimate and remove the scatter contamination in acquired cone-beam CT (CBCT) projections, in order to suppress shading artifacts and improve image quality without prior information. Methods: The uncorrected CBCT images containing shading artifacts are reconstructed by applying the standard FDK algorithm to the raw CBCT projections. The uncorrected image is then segmented to generate an initial template image. To estimate the scatter signal, differences are calculated by subtracting the simulated projections of the template image from the raw projections. Since scatter signals are dominantly continuous and low-frequency in the projection domain, they are estimated by low-pass filtering the difference signals and subtracted from the raw CBCT projections to achieve scatter correction. Finally, the corrected CBCT image is reconstructed from the corrected projection data. Since an accurate template image is not readily segmented from the uncorrected CBCT image, the proposed scheme is iterated until the produced template no longer changes. Results: The proposed scheme is evaluated on Catphan 600 phantom data and CBCT images acquired from a pelvis patient. The results show that shading artifacts are effectively suppressed by the proposed method. Using multi-detector CT (MDCT) images as reference, quantitative analysis was performed to measure the quality of the corrected images. Compared to images without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 50 HU and increases spatial uniformity. Conclusion: An iterative strategy that does not rely on prior information is proposed in this work to remove the shading artifacts due to scatter contamination in the projection domain. The method is evaluated in phantom and patient studies, and the results show that image quality is remarkably improved. The proposed method is efficient and practical for addressing the poor image quality of CBCT
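One iteration of the projection-domain estimate can be sketched as below; a moving-average kernel stands in for the unspecified low-pass filter, and the projections are synthetic:

```python
import numpy as np

# One iteration of the projection-domain correction: the scatter estimate is
# a low-pass filtered difference between the raw projection and the simulated
# (scatter-free) projection of the segmented template image.
def smooth2d(x, k=5):
    kernel = np.ones((k, k)) / (k * k)    # moving average as the low-pass filter
    pad = k // 2
    xp = np.pad(x, pad, mode='edge')
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + k, j:j + k] * kernel).sum()
    return out

def correct(raw_projection, template_projection):
    scatter = smooth2d(raw_projection - template_projection)  # low-frequency part
    return raw_projection - scatter

template = np.zeros((20, 20))
template[5:15, 5:15] = 100.0          # simulated projection of the template
raw = template + 30.0                 # raw data = primary + smooth scatter
corrected = correct(raw, template)
print(np.allclose(corrected, template))   # -> True
```

In the full scheme the corrected image is reconstructed, re-segmented into a new template, and the loop repeats until the template stops changing.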
Two dimensional spatial distortion correction algorithm for scintillation GAMMA cameras
International Nuclear Information System (INIS)
Chaney, R.; Gray, E.; Jih, F.; King, S.E.; Lim, C.B.
1985-01-01
Spatial distortion in an Anger gamma camera originates fundamentally from the discrete nature of scintillation light sampling with an array of PMTs. Historically, digital distortion correction started with a method based on distortion measurement using a 1-D slit pattern and subsequent on-line bilinear approximation with 64 x 64 look-up tables for X and Y. However, the X and Y distortions are inherently two-dimensional in nature, so the validity of this 1-D calibration method becomes questionable as distortion amplitudes grow in the effort to obtain better spatial and energy resolution. The authors have developed a new, accurate 2-D correction algorithm. The method involves the following steps: data collection from a 2-D orthogonal hole pattern, 2-D distortion vector measurement, 2-D Lagrangian polynomial interpolation, and transformation to the X, Y ADC frame. The impact of the numerical precision used in the correction and the accuracy of the bilinear approximation with varying look-up table size have been carefully examined through computer simulation, using a measured single-PMT light response function together with Anger positioning logic. The accuracy of different-order Lagrangian polynomial interpolations for expanding the correction table from the hole centroids was also investigated. The detailed algorithm and computer simulations are presented along with camera test results
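A generic on-line correction step consistent with the look-up-table approach described above might look like the following sketch; the 64 x 64 table size follows the abstract, while the ADC range and table contents are hypothetical:

```python
import numpy as np

# Shift a detected event position (x, y) by bilinear interpolation in the
# X and Y distortion-correction look-up tables. A real camera fills these
# tables from the 2-D orthogonal-hole calibration; here they are synthetic.
def correct_event(x, y, dx_table, dy_table, table_size=64, adc_range=4096):
    scale = (table_size - 1) / (adc_range - 1)
    fx, fy = x * scale, y * scale            # event position in table units
    i = min(int(fx), table_size - 2)
    j = min(int(fy), table_size - 2)
    wx, wy = fx - i, fy - j                  # bilinear weights

    def bilerp(t):
        return ((1 - wx) * (1 - wy) * t[j, i] + wx * (1 - wy) * t[j, i + 1]
                + (1 - wx) * wy * t[j + 1, i] + wx * wy * t[j + 1, i + 1])

    return x + bilerp(dx_table), y + bilerp(dy_table)

dx = np.zeros((64, 64))                      # zero tables -> identity correction
dy = np.zeros((64, 64))
cx, cy = correct_event(1000, 2000, dx, dy)
print(cx == 1000.0 and cy == 2000.0)         # -> True
```

The higher-order Lagrangian interpolation the abstract describes is used off-line to expand the sparse hole-centroid measurements into these dense tables; the per-event step stays cheap.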
TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation
Energy Technology Data Exchange (ETDEWEB)
Xu, Y [UT Southwestern Medical Center, Dallas, TX (United States); Southern Medical University, Guangzhou (China)]; Bai, T [UT Southwestern Medical Center, Dallas, TX (United States); Xi'an Jiaotong University, Xi'an (China)]; Yan, H; Ouyang, L; Wang, J; Pompos, A; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States)]; Zhou, L [Southern Medical University, Guangzhou (China)]
2014-06-15
Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework that uses GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, automatically within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK result; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to the other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using a GPU to accelerate the MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate the MC scatter noise caused by the low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We studied the impact of photon histories and volume down-sampling factors on the accuracy of the scatter estimation. A Fourier analysis showed that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time is 23.77 seconds on an NVIDIA GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds. This study is supported in part by NIH (1R01CA154747-01), The Core Technology Research
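Step 4 of the pipeline, interpolating MC scatter estimates from sparse view angles to all projection angles, might look like this minimal sketch (the array shapes and function name are assumptions, not the authors' code):

```python
import numpy as np

def interpolate_scatter(sparse_angles, sparse_scatter, all_angles):
    """Linearly interpolate per-pixel scatter signals computed at sparse
    angles (n_sparse, H, W) onto every projection angle (n_all, H, W)."""
    out = np.empty((len(all_angles),) + sparse_scatter.shape[1:])
    for idx in np.ndindex(sparse_scatter.shape[1:]):
        # interpolate this pixel's scatter value across the angular axis
        out[(slice(None),) + idx] = np.interp(
            all_angles, sparse_angles, sparse_scatter[(slice(None),) + idx])
    return out
```

The per-pixel loop is written for clarity; a production version would vectorize the angular interpolation.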
How to simplify transmission-based scatter correction for clinical application
International Nuclear Information System (INIS)
Baccarne, V.; Hutton, B.F.
1998-01-01
The performance of ordered-subsets (OS) EM reconstruction including attenuation, scatter, and spatial resolution correction is evaluated using cardiac Monte Carlo data. We demonstrate how simplifications in the scatter model allow one to correct SPECT data for scatter, in terms of both quantitation and quality, in a reasonable time. Initial reconstruction of the 20% window is performed including attenuation correction (broad-beam μ values) to estimate the activity quantitatively (accuracy 3%), but not spatially. A rough reconstruction with 2 iterations (subset size: 8) is sufficient for the subsequent scatter correction. An estimate of the primary photons is obtained by projecting the previous distribution including attenuation (narrow-beam μ values). An estimate of the scatter is obtained by convolving the primary estimate with a depth-dependent scatter kernel and scaling the result by a factor calculated from the attenuation map. The correction can be accelerated by convolving several adjacent planes with the same kernel and using an average scaling factor. Simulating the effects of the collimator during the scatter correction was demonstrated to be unnecessary. Final reconstruction is performed using 6 iterations of OSEM, including attenuation (narrow-beam μ values) and spatial resolution correction. The scatter correction is implemented by incorporating the estimated scatter as a constant offset in the forward projection step. The total correction + reconstruction (64 proj., 40x128 pixels) takes 38 minutes on a Sun Sparc 20. Quantitatively, the accuracy is 7% in a reconstructed slice. The SNR inside the whole myocardium (defined from the original object) is equal to 2.1 and 2.3 in the corrected and primary slices, respectively. The scatter correction preserves the myocardium-to-ventricle contrast (primary: 0.79, corrected: 0.82). These simplifications allow acceleration of the correction without influencing the quality of the result.
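The simplified scatter model above (convolve the primary estimate with a depth-dependent kernel, then scale by a factor derived from the attenuation map) reduces to a one-liner in this 1-D sketch; the kernel and scale factor are placeholders, not the paper's values:

```python
import numpy as np

def estimate_scatter(primary, kernel, scale):
    """Convolve a primary-photon projection with a (depth-averaged) scatter
    kernel and scale the result, as in the simplified convolution model."""
    return scale * np.convolve(primary, kernel, mode="same")
```

For a single unit impulse of primary counts, the scatter estimate is just the scaled kernel, which makes the model easy to verify.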
Geometry correction Algorithm for UAV Remote Sensing Image Based on Improved Neural Network
Liu, Ruian; Liu, Nan; Zeng, Beibei; Chen, Tingting; Yin, Ninghao
2018-03-01
Aiming at the disadvantages of current geometry correction algorithms for UAV remote sensing images, a new algorithm is proposed. An adaptive genetic algorithm (AGA) and an RBF neural network are introduced into this algorithm. Combined with the geometry correction principle for UAV remote sensing images, the AGA-RBF algorithm and its solving steps are presented in order to realize geometry correction for UAV remote sensing. The correction accuracy and operational efficiency are improved by optimizing the structure and connection weights of the RBF neural network with the AGA and the LMS algorithm, respectively. Finally, experiments show that the AGA-RBF algorithm has the advantages of high correction accuracy, fast running speed, and strong generalization ability.
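A Gaussian-RBF mapping from image coordinates to ground coordinates, fitted on control points, is a rough stand-in for the trained AGA-RBF network described above (the kernel width, regularization, and names are assumptions):

```python
import numpy as np

def rbf_correct(ctrl_img, ctrl_geo, query, sigma=1.0):
    """Fit Gaussian-RBF weights mapping image coords -> ground coords on
    control points, then evaluate the mapping at a query point."""
    # pairwise squared distances between control points
    d2 = ((ctrl_img[:, None, :] - ctrl_img[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-d2 / (2 * sigma ** 2))            # (n, n) design matrix
    # tiny ridge term keeps the solve well-posed
    w = np.linalg.solve(phi + 1e-9 * np.eye(len(phi)), ctrl_geo)
    dq = ((query[None, :] - ctrl_img) ** 2).sum(-1)
    return np.exp(-dq / (2 * sigma ** 2)) @ w
```

At a control point the fitted map reproduces the known ground coordinate, which is the minimal correctness check for an interpolating corrector.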
SU-D-206-07: CBCT Scatter Correction Based On Rotating Collimator
International Nuclear Information System (INIS)
Yu, G; Feng, Z; Yin, Y; Qiang, L; Li, B; Huang, P; Li, D
2016-01-01
Purpose: Scatter correction in cone-beam computed tomography (CBCT) has an obvious effect on removing image noise and the cupping artifact and on increasing image contrast. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, mechanical inconvenience and a propensity for residual artifacts have limited further basic and clinical research. Here, we propose a rotating-collimator approach, in conjunction with reconstruction based on a discrete Radon transform and a Tchebichef-moments algorithm, to correct scatter-induced artifacts. Methods: A rotating collimator, comprising round tungsten-alloy strips, was mounted on a linear actuator. The rotating collimator is divided into six equal portions. The strip spacing is even within each portion but staggered between portions. A step motor connected to the rotating collimator rotated the blocker around the x-ray source during CBCT acquisition. CBCT reconstruction based on a discrete Radon transform and a Tchebichef-moments algorithm is then performed. Experimental studies using a water phantom and the Catphan504 phantom were carried out to evaluate the performance of the proposed scheme. Results: The proposed algorithm was tested in both Monte Carlo simulations and actual experiments with the Catphan504 phantom. In the simulation, the mean square error of the reconstruction decreases from 16% to 1.18%, the cupping (τcup) from 14.005% to 0.66%, and the peak signal-to-noise ratio increases from 16.9594 to 31.45. In the actual experiments, the induced visual artifacts are significantly reduced. Conclusion: We conducted an experiment on a CBCT imaging system with a rotating collimator to develop and optimize an x-ray scatter control and reduction technique. The proposed method is attractive in applications where high CBCT image quality is critical, for example, dose calculation in adaptive radiation therapy. We want to thank Dr. Lei
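The principle behind blocker-based scatter estimation, namely that samples behind the strips contain only scatter and can be interpolated across the open regions, can be illustrated in 1-D (strip positions and values are hypothetical):

```python
import numpy as np

def scatter_from_blocked(projection, blocked_cols):
    """Interpolate the scatter-only samples measured behind blocker strips
    across all detector columns, then subtract to recover the primary.
    Note: the blocked columns themselves carry no primary signal."""
    cols = np.arange(projection.shape[-1])
    scatter = np.interp(cols, blocked_cols, projection[blocked_cols])
    return projection - scatter, scatter
```

With the rotating collimator, the blocked columns change from view to view, so every detector region is eventually sampled directly; this sketch shows only a single view.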
SU-D-206-07: CBCT Scatter Correction Based On Rotating Collimator
Energy Technology Data Exchange (ETDEWEB)
Yu, G; Feng, Z [Shandong Normal University, Jinan, Shandong (China); Yin, Y [Shandong Cancer Hospital and Institute, China, Jinan, Shandong (China); Qiang, L [Zhang Jiagang STFK Medical Device Co, Zhangjiangkang, Suzhou (China); Li, B [Shandong Academy of Medical Sciences, Jinan, Shandong Province (China); Huang, P [Shandong Province Key Laboratory of Medical Physics and Image Processing Te, Ji’nan, Shandong Province (China); Li, D [School of Physics and Electronics, Shandong Normal University, Jinan, Shandong (China)
2016-06-15
Purpose: Scatter correction in cone-beam computed tomography (CBCT) has an obvious effect on removing image noise and the cupping artifact and on increasing image contrast. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, mechanical inconvenience and a propensity for residual artifacts have limited further basic and clinical research. Here, we propose a rotating-collimator approach, in conjunction with reconstruction based on a discrete Radon transform and a Tchebichef-moments algorithm, to correct scatter-induced artifacts. Methods: A rotating collimator, comprising round tungsten-alloy strips, was mounted on a linear actuator. The rotating collimator is divided into six equal portions. The strip spacing is even within each portion but staggered between portions. A step motor connected to the rotating collimator rotated the blocker around the x-ray source during CBCT acquisition. CBCT reconstruction based on a discrete Radon transform and a Tchebichef-moments algorithm is then performed. Experimental studies using a water phantom and the Catphan504 phantom were carried out to evaluate the performance of the proposed scheme. Results: The proposed algorithm was tested in both Monte Carlo simulations and actual experiments with the Catphan504 phantom. In the simulation, the mean square error of the reconstruction decreases from 16% to 1.18%, the cupping (τcup) from 14.005% to 0.66%, and the peak signal-to-noise ratio increases from 16.9594 to 31.45. In the actual experiments, the induced visual artifacts are significantly reduced. Conclusion: We conducted an experiment on a CBCT imaging system with a rotating collimator to develop and optimize an x-ray scatter control and reduction technique. The proposed method is attractive in applications where high CBCT image quality is critical, for example, dose calculation in adaptive radiation therapy. We want to thank Dr. Lei
Flux-corrected transport principles, algorithms, and applications
Kuzmin, Dmitri; Turek, Stefan
2005-01-01
Addressing students and researchers as well as CFD practitioners, this book describes the state of the art in the development of high-resolution schemes based on the Flux-Corrected Transport (FCT) paradigm. Intended for readers who have a solid background in Computational Fluid Dynamics, the book begins with historical notes by J.P. Boris and D.L. Book. Review articles that follow describe recent advances in the design of FCT algorithms as well as various algorithmic aspects. The topics addressed in the book and its main highlights include: the derivation and analysis of classical FCT schemes with special emphasis on the underlying physical and mathematical constraints; flux limiting for hyperbolic systems; generalization of FCT to implicit time-stepping and finite element discretizations on unstructured meshes and its role as a subgrid scale model for Monotonically Integrated Large Eddy Simulation (MILES) of turbulent flows. The proposed enhancements of the FCT methodology also comprise the prelimiting and '...
Multirobot FastSLAM Algorithm Based on Landmark Consistency Correction
Directory of Open Access Journals (Sweden)
Shi-Ming Chen
2014-01-01
Considering the influence of uncertain map information on the multirobot SLAM problem, a multirobot FastSLAM algorithm based on landmark consistency correction is proposed. First, an electromagnetism-like mechanism is introduced into the resampling procedure of single-robot FastSLAM: each sampling particle is regarded as a charged electron, and the attraction-repulsion mechanism of an electromagnetic field is used to simulate the interactive forces between particles and improve their distribution. Second, when multiple robots observe the same landmarks, each robot is regarded as one node and a Kalman-Consensus Filter is used to update the landmark information, which further improves the accuracy of localization and mapping. Finally, simulation results show that the algorithm is suitable and effective.
Bias correction of daily satellite precipitation data using genetic algorithm
Pratama, A. W.; Buono, A.; Hidayat, R.; Harsa, H.
2018-05-01
Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) is produced by blending the satellite-only Climate Hazards Group InfraRed Precipitation (CHIRP) product with station observation data. The blending process aims to reduce the bias of CHIRP. However, the biases of CHIRPS in statistical moments and quantile values remain high during the wet season over Java Island. This paper presents a bias correction scheme that adjusts the statistical moments of CHIRP using observed precipitation data. The scheme combines a genetic algorithm with a nonlinear power transformation, and the results are evaluated across seasons and elevation levels. The experimental results reveal that the scheme robustly reduces the bias in variance (around 100% reduction) and leads to a reduction of the first- and second-quantile biases. However, the bias in the third quantile is reduced only during dry months. Across elevation levels, the performance of the bias correction process differs significantly only in the skewness indicators.
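A toy version of the scheme, fitting a nonlinear power transform a*x**b so the corrected satellite series matches the first two moments of the observations, is sketched below; a simple random search stands in for the paper's genetic algorithm, and all names and bounds are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params, sat, obs):
    """Penalize mismatch in the first two statistical moments."""
    a, b = params
    adj = a * sat ** b
    return (adj.mean() - obs.mean()) ** 2 + (adj.std() - obs.std()) ** 2

def correct_bias(sat, obs, n_iter=2000):
    """Random-search stand-in for the GA: sample (a, b) candidates and keep
    the best power transform, then return the corrected series."""
    best, best_f = (1.0, 1.0), fitness((1.0, 1.0), sat, obs)
    for _ in range(n_iter):
        cand = (rng.uniform(0.1, 5.0), rng.uniform(0.1, 3.0))
        f = fitness(cand, sat, obs)
        if f < best_f:
            best, best_f = cand, f
    a, b = best
    return a * sat ** b
```

A real GA adds crossover and mutation over a population, but the objective (moment matching through the power transform) is the same.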
Quantum algorithms and quantum maps - implementation and error correction
International Nuclear Information System (INIS)
Alber, G.; Shepelyansky, D.
2005-01-01
We investigate the dynamics of the quantum tent map under the influence of errors and explore the possibilities of quantum error correcting methods for stabilizing this quantum algorithm. It is known that static but uncontrollable inter-qubit couplings between the qubits of a quantum information processor lead to a rapid Gaussian decay of the fidelity of the quantum state. We present a new error correcting method which slows this fidelity decay to a linear-in-time exponential one. One of its advantages is that it does not require redundancy, so that all physical qubits involved can be used for logical purposes. We also study the influence of decoherence due to spontaneous decay processes, which can be corrected by quantum jump codes. It is demonstrated how universal encoding can be performed in these code spaces. For this purpose we discuss a new entanglement gate which can be used for lowest-level encoding in concatenated error-correcting architectures. (author)
Keller, Sune H; Svarer, Claus; Sibomana, Merence
2013-09-01
In the standard software for the Siemens high-resolution research tomograph (HRRT) positron emission tomography (PET) scanner, the most commonly used segmentation in the μ-map reconstruction for human brain scans is maximum a posteriori for transmission (MAP-TR). Bias in the lower cerebellum and pons in HRRT brain images has been reported. The two main sources of the problem with MAP-TR are poor bone/soft-tissue segmentation below the brain and overestimation of bone mass in the skull. We developed the new transmission processing with total variation (TXTV) method, which introduces scatter correction in the μ-map reconstruction and total variation filtering in the transmission processing. Comparing MAP-TR and the new TXTV with gold-standard CT-based attenuation correction, we found that TXTV has less bias than MAP-TR. We also compared images acquired on the HRRT scanner using TXTV to GE Advance scanner images and found high quantitative correspondence. TXTV has been used to reconstruct more than 4000 HRRT scans at seven different sites with no reports of bias. TXTV-based reconstruction is recommended for human brain scans on the HRRT.
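The total-variation filtering step can be illustrated with a tiny 1-D gradient-descent smoother; the actual TXTV filter and its parameters are not reproduced here, and all values are illustrative:

```python
import numpy as np

def tv_filter(x, lam=0.5, n_iter=200, step=0.1):
    """Minimize 0.5*||u - x||^2 + lam * TV(u) by gradient descent on a
    smoothed total-variation term (eps avoids division by zero)."""
    u = x.astype(float).copy()
    eps = 1e-8
    for _ in range(n_iter):
        grad_fid = u - x                      # data-fidelity gradient
        du = np.diff(u)
        s = du / np.sqrt(du ** 2 + eps)       # d/du of sum |u[i+1]-u[i]|
        tvg = np.zeros_like(u)
        tvg[:-1] -= s
        tvg[1:] += s
        u -= step * (grad_fid + lam * tvg)
    return u
```

A constant signal is a fixed point of the iteration, and noisy input comes out with smaller total variation, which is the behavior the μ-map processing relies on.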
Energy Technology Data Exchange (ETDEWEB)
Larcos, G.; Hutton, B.F.; Farlow, D.C.; Campbell-Rodgers, N.; Gruenewald, S.M.; Lau, Y.H. [Westmead Hospital, Westmead, Sydney, NSW (Australia). Departments of Nuclear Medicine and Ultrasound and Medical Physics
1998-06-01
The introduction of transmission-based attenuation correction (AC) has increased the diagnostic accuracy of Tc-99m MIBI myocardial perfusion SPECT. The aim of this study is to evaluate recent developments, including scatter correction (SC) and resolution recovery (RR). We reviewed 13 patients who underwent Tc-99m MIBI SPECT (two-day protocol) and coronary angiography, and 4 manufacturer-supplied studies assigned a low pretest likelihood of coronary artery disease (CAD). Patients had a mean age of 59 years (range: 41-78). Data were reconstructed using filtered backprojection (FBP; method 1), maximum likelihood (ML) incorporating AC (method 2), ADAC software using sinogram-based SC+RR followed by ML with AC (method 3), and ordered-subset ML incorporating AC, SC and RR (method 4). Images were reported by two of three blinded, experienced physicians using a standard semiquantitative scoring scheme. Fixed or reversible perfusion defects were considered abnormal; CAD was considered present with stenoses > 50%. Patients had normal coronary anatomy (n=9), single-vessel (n=4) or two-vessel CAD (n=4) (four in each of LAD, RCA and LCX). There were no statistically significant differences for any combination. The normalcy rate was 100% for all methods. Physicians graded 3/17 (methods 2, 4) and 1/17 (method 3) images as fair or poor in quality. Thus, AC or AC+SC+RR produces good-quality images in most patients; there is potential for improvement in sensitivity over standard FBP with no significant change in normalcy or specificity.
Energy Technology Data Exchange (ETDEWEB)
Grogan, Brandon Robert [Univ. of Tennessee, Knoxville, TN (United States)
2010-03-01
This dissertation presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects non-intrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross-sections of features inside the object can be determined. The cross sections can then be used to identify the materials and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons which are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using
Energy Technology Data Exchange (ETDEWEB)
Grogan, Brandon R [ORNL
2010-05-01
This report presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects nonintrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross sections of features inside the object can be determined. The cross sections can then be used to identify the materials, and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons that are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized, and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements, and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using the
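The Gaussian parameterization at the heart of the PSRA can be sketched with a moment-based fit to a simulated point scatter function; the grid and profile below are synthetic, not NMIS data:

```python
import numpy as np

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

def fit_gaussian(x, profile):
    """Recover (amplitude, mean, sigma) of a PScF-like profile from its
    zeroth, first, and second moments on a uniform grid."""
    dx = x[1] - x[0]
    area = profile.sum() * dx
    mu = (x * profile).sum() * dx / area
    sigma = np.sqrt(((x - mu) ** 2 * profile).sum() * dx / area)
    amp = area / (sigma * np.sqrt(2 * np.pi))
    return amp, mu, sigma
```

Once the PScFs are reduced to (amp, mu, sigma) triples, the scatter for a new configuration can be synthesized and subtracted without rerunning the Monte Carlo, which is the algorithm's main speed advantage.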
Regier, Michael D; Moodie, Erica E M
2016-05-01
We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm to a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite-sample properties of the proposed extension when there are missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
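As a reminder of the building block being chained into a sequence, a plain two-component Gaussian-mixture EM fits in a few lines (initialization and iteration count are arbitrary choices, and this is the textbook algorithm, not the paper's extension):

```python
import numpy as np

def em_gmm(x, n_iter=50):
    """Standard EM for a 1-D two-component Gaussian mixture:
    E-step computes responsibilities, M-step updates weighted moments."""
    mu = np.array([x.min(), x.max()], dtype=float)   # spread-out init
    sd = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        d = np.exp(-(x[:, None] - mu) ** 2 / (2 * sd ** 2)) / sd
        r = pi * d
        r /= r.sum(1, keepdims=True)
        # M-step: responsibility-weighted moment updates
        nk = r.sum(0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(0) / nk)
    return pi, mu, sd
```

Each such self-contained EM is simple to verify in isolation, which is the practical appeal of partitioning a complex estimation problem into a sequence of them.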
Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.
Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua
2018-02-01
Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross-scattering from multiple simultaneously activated X-ray imaging chains compromises image quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and was used to calculate X-ray scattering signals in both the forward direction and the cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments that were designed to emulate image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and 19.7%, respectively. The proposed physics-model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
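The iterative framework, re-estimating scatter from the current primary estimate and subtracting it from the measurement, can be reduced to a generic fixed-point loop; the `scatter_model` callable stands in for the paper's analytic physics model, which is not reproduced here:

```python
import numpy as np

def iterative_scatter_correction(measured, scatter_model, n_iter=3):
    """Fixed-point iteration: primary_{k+1} = measured - S(primary_k),
    where S is a model mapping a primary estimate to a scatter estimate."""
    primary = measured.copy()
    for _ in range(n_iter):
        primary = measured - scatter_model(primary)
    return primary
```

For a toy model where scatter is 20% of the primary, the loop converges toward measured/1.2 within a few iterations, mirroring the fast convergence reported above.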
Jo, Byung-Du; Lee, Young-Jin; Kim, Dae-Hong; Kim, Hee-Joung
2014-08-01
In conventional digital radiography (DR) using a dual-energy subtraction technique, a significant fraction of the detected photons are scattered within the body, making up the scatter component. Scattered radiation can significantly deteriorate image quality in diagnostic X-ray imaging systems. Various methods of scatter correction, including both measurement- and non-measurement-based methods, have been proposed in the past. Both can reduce scatter artifacts in images. However, non-measurement-based methods require a homogeneous object and provide insufficient correction of the scatter component. Therefore, we employed a measurement-based method to correct for the scatter component of inhomogeneous objects in dual-energy DR (DEDR) images. We performed a simulation study using a Monte Carlo simulation with a primary modulator, which is a measurement-based method, for the DEDR system. The primary modulator, which has a checkerboard pattern, was used to modulate the primary radiation. Cylindrical phantoms of variable size were used to quantify the imaging performance. For scatter estimation, we used discrete Fourier transform filtering, e.g., a Gaussian low-/high-pass filter with a cut-off frequency. The primary modulation method was evaluated using a cylindrical phantom in the DEDR system. The scatter components were accurately removed using the primary modulator. When the results acquired with and without scatter correction were compared, the average contrast-to-noise ratio (CNR) with the correction was 1.35 times higher than that obtained without the correction, and the average root mean square error (RMSE) with the correction was 38.00% better than that without the correction. In the subtraction study, the average CNR with the correction was 2.04 (aluminum subtraction) and 1.38 (polymethyl methacrylate (PMMA) subtraction) times higher than that obtained without the correction. The analysis demonstrated the accuracy of the scatter correction and the
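The Fourier-filtering idea, with scatter occupying low spatial frequencies while the checkerboard-modulated primary is pushed to high frequencies, can be sketched in 1-D; the cut-off fraction and the test signal are illustrative, not the study's settings:

```python
import numpy as np

def lowpass_scatter_estimate(proj, cutoff_frac=0.1):
    """Zero out high spatial frequencies of a modulated projection and keep
    the low-frequency residue as the scatter estimate."""
    f = np.fft.rfft(proj)
    k = np.arange(f.size)
    f[k > cutoff_frac * f.size] = 0     # crude low-pass, not a Gaussian
    return np.fft.irfft(f, n=proj.size)
```

In 2-D the same separation is done on the image spectrum, and the modulated primary is demodulated separately; this 1-D version only shows why the low-pass residue isolates the scatter.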
Directory of Open Access Journals (Sweden)
Z. X. Cao
2014-06-01
To retrieve the complex-valued effective permittivity and permeability of electromagnetic metamaterials (EMMs) based on resonant effects from scattering parameters, the use of a complex logarithmic function is unavoidable. When complex values are expressed in terms of magnitude and phase, an infinite number of phase angles is permissible due to the multi-valued property of complex logarithmic functions. Special attention needs to be paid to ensuring continuity of the effective permittivity and permeability of lossy metamaterials as the frequency sweeps. In this paper, an automated phase correction (APC) algorithm is proposed to properly trace and compensate the phase angles of the complex logarithmic function, which may experience abrupt phase jumps near the resonant frequency region of the EMMs concerned, and hence the continuity of the effective optical properties of lossy metamaterials is ensured. The algorithm is then verified by extracting effective optical properties from the simulated scattering parameters of four different types of metamaterial media: a cut-wire cell array, a split-ring resonator (SRR) cell array, an electric-LC (E-LC) resonator cell array, and a combined SRR and wire cell array. The results demonstrate that the proposed algorithm is highly accurate and effective.
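The branch-jump problem and its standard remedy can be shown with NumPy's phase unwrapping; this is a generic sketch of keeping the complex logarithm continuous over a frequency sweep, not the APC algorithm itself:

```python
import numpy as np

def continuous_log(s_param):
    """Complex log of a frequency sweep of S-parameters with the phase
    unwrapped, so log(|s|) + j*phase stays continuous across branch cuts."""
    mag = np.abs(s_param)
    phase = np.unwrap(np.angle(s_param))   # removes the +/- pi jumps
    return np.log(mag) + 1j * phase
```

`np.unwrap` assumes adjacent samples differ by less than pi in phase; near a sharp resonance the sweep must be dense enough for that to hold, which is exactly where a dedicated tracing algorithm like APC earns its keep.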
The Algorithm Theoretical Basis Document for Tidal Corrections
Fricker, Helen A.; Ridgway, Jeff R.; Minster, Jean-Bernard; Yi, Donghui; Bentley, Charles R.
2012-01-01
This Algorithm Theoretical Basis Document deals with the tidal corrections that need to be applied to range measurements made by the Geoscience Laser Altimeter System (GLAS). These corrections result from the action of ocean tides and Earth tides, which lead to deviations from an equilibrium surface. Since the effect of tides depends on the time of measurement, it is necessary to remove the instantaneous tide components when processing altimeter data, so that all measurements are referenced to the equilibrium surface. The three main tide components to consider are the ocean tide, the solid-earth tide, and the ocean loading tide. There are also long-period ocean tides and the pole tide. The approximate magnitudes of these components are illustrated in Table 1, together with estimates of their uncertainties (i.e., the residual error after correction). All of these components are important for GLAS measurements over the ice sheets, since centimeter-level accuracy for surface elevation change detection is required. The effect of each tidal component is removed by approximating its magnitude using tidal prediction models. Conversely, assimilation of GLAS measurements into tidal models will help to improve them, especially at high latitudes.
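The correction described above amounts to subtracting the summed, modeled tide components from each range-derived elevation; the numbers below are invented for illustration and do not come from Table 1:

```python
# Instantaneous tide components at the measurement epoch, in meters,
# as they would be supplied by tidal prediction models (values invented).
ocean_tide = 0.42
solid_earth_tide = 0.18
ocean_loading = -0.03

elevation_raw = 105.70  # m, hypothetical GLAS-derived surface elevation
elevation_corrected = elevation_raw - (ocean_tide + solid_earth_tide + ocean_loading)
print(round(elevation_corrected, 2))  # 105.13
```

Because the components are time-dependent, each shot gets its own model evaluation; only after this subtraction do repeat passes measure the same equilibrium surface.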
International Nuclear Information System (INIS)
Yang, J.; Kuikka, J.T.; Vanninen, E.; Laensimies, E.; Kauppinen, T.; Patomaeki, L.
1999-01-01
Photon scatter is one of the most important factors degrading the quantitative accuracy of SPECT images. Many scatter correction methods have been proposed, including a single-isotope method proposed by us. Aim: To evaluate this scatter correction method, which improves image quality by acquiring emission and transmission data simultaneously in a single-isotope scan. Method: To evaluate the proposed scatter correction method, a contrast and linearity phantom was studied. Four female patients with fibromyalgia (FM) syndrome and four with chronic back pain (BP) were imaged. Grey-to-cerebellum (G/C) and grey-to-white-matter (G/W) ratios were determined by one skilled operator for 12 regions of interest (ROIs) in each subject. Results: The linearity of the activity response was improved after the scatter correction (r=0.999). The y-intercept value of the regression line was 0.036 (p
Raylman, R. R.; Majewski, S.; Wojcik, R.; Weisenberger, A. G.; Kross, B.; Popov, V.
2001-06-01
Positron emission mammography (PEM) has begun to show promise as an effective method for the detection of breast lesions. Due to its utilization of tumor-avid radiopharmaceuticals labeled with positron-emitting radionuclides, this technique may be especially useful in imaging of women with radiodense or fibrocystic breasts. While the use of these radiotracers affords PEM unique capabilities, it also introduces some limitations. Specifically, acceptance of accidental and Compton-scattered coincidence events can decrease lesion detectability. The authors studied the effect of accidental coincidence events on PEM images produced by the presence of 18F-fluorodeoxyglucose in the organs of a subject using an anthropomorphic phantom. A delayed-coincidence technique was tested as a method for correcting PEM images for the occurrence of accidental events. Also, a Compton scatter correction algorithm designed specifically for PEM was developed and tested using a compressed breast phantom. Finally, the effect of object size on image counts and a correction for this effect were explored. The imager used in this study consisted of two PEM detector heads mounted 20 cm apart on a Lorad biopsy apparatus. The results demonstrated that a majority of the accidental coincidence events (~80%) detected by this system were produced by radiotracer uptake in the adipose and muscle tissue of the torso. The presence of accidental coincidence events was shown to reduce lesion detectability. Much of this effect was eliminated by correction of the images utilizing estimates of accidental-coincidence contamination acquired with delayed coincidence circuitry built into the PEM system. The Compton scatter fraction for this system was ~14%. Utilization of a new scatter correction algorithm reduced the scatter fraction to ~1.5%. Finally, reduction of count recovery due to object size was measured and a correction to the data applied. Application of correction techniques
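The delayed-coincidence correction amounts to subtracting the delayed-window estimate of accidentals from the prompt data; this is a deliberately minimal sketch with invented count values:

```python
import numpy as np

def correct_accidentals(prompt, delayed):
    """Subtract the delayed-window (accidentals) estimate from the prompt
    coincidence counts, clipping at zero to avoid negative count bins."""
    return np.clip(prompt - delayed, 0, None)
```

In practice the subtraction is done bin by bin in the sinogram or list-mode histogram, and the clip guards against statistical fluctuations in low-count bins.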
Energy Technology Data Exchange (ETDEWEB)
Raymond Raylman; Stanislaw Majewski; Randolph Wojcik; Andrew Weisenberger; Brian Kross; Vladimir Popov
2001-06-01
Positron emission mammography (PEM) has begun to show promise as an effective method for the detection of breast lesions. Due to its utilization of tumor-avid radiopharmaceuticals labeled with positron-emitting radionuclides, this technique may be especially useful in imaging of women with radiodense or fibrocystic breasts. While the use of these radiotracers affords PEM unique capabilities, it also introduces some limitations. Specifically, acceptance of accidental and Compton-scattered coincidence events can decrease lesion detectability. The authors studied the effect of accidental coincidence events on PEM images produced by the presence of 18F-Fluorodeoxyglucose in the organs of a subject using an anthropomorphic phantom. A delayed-coincidence technique was tested as a method for correcting PEM images for the occurrence of accidental events. Also, a Compton scatter correction algorithm designed specifically for PEM was developed and tested using a compressed breast phantom.
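The delayed-coincidence correction described above can be sketched numerically: counts recorded in a delayed coincidence window estimate the accidental (random) rate and are subtracted bin by bin from the prompt data. A minimal illustration with invented count rates (nothing here is from the paper):

```python
import numpy as np

def correct_randoms(prompts, delayed):
    """Subtract the delayed-window estimate of accidental coincidences
    from the prompt-window counts, clipping negatives to zero."""
    corrected = np.asarray(prompts, dtype=float) - np.asarray(delayed, dtype=float)
    return np.clip(corrected, 0.0, None)

# Toy data: 1000 detector bins, 100 true counts per bin plus a uniform
# accidental-coincidence floor of 20 counts (all rates invented).
rng = np.random.default_rng(0)
prompts = rng.poisson(100 + 20, size=1000)   # trues + randoms
delayed = rng.poisson(20, size=1000)         # randoms-only estimate
corrected = correct_randoms(prompts, delayed)
```

After subtraction the mean recovers the true rate; the price, as in the real system, is added noise from the delayed-window estimate.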
International Nuclear Information System (INIS)
Raymond Raylman; Stanislaw Majewski; Randolph Wojcik; Andrew Weisenberger; Brian Kross; Vladimir Popov
2001-01-01
Positron emission mammography (PEM) has begun to show promise as an effective method for the detection of breast lesions. Due to its utilization of tumor-avid radiopharmaceuticals labeled with positron-emitting radionuclides, this technique may be especially useful in imaging of women with radiodense or fibrocystic breasts. While the use of these radiotracers affords PEM unique capabilities, it also introduces some limitations. Specifically, acceptance of accidental and Compton-scattered coincidence events can decrease lesion detectability. The authors studied the effect of accidental coincidence events on PEM images produced by the presence of 18F-Fluorodeoxyglucose in the organs of a subject using an anthropomorphic phantom. A delayed-coincidence technique was tested as a method for correcting PEM images for the occurrence of accidental events. Also, a Compton scatter correction algorithm designed specifically for PEM was developed and tested using a compressed breast phantom
Flux-corrected transport principles, algorithms, and applications
Löhner, Rainald; Turek, Stefan
2012-01-01
Many modern high-resolution schemes for Computational Fluid Dynamics trace their origins to the Flux-Corrected Transport (FCT) paradigm. FCT maintains monotonicity using a nonoscillatory low-order scheme to determine the bounds for a constrained high-order approximation. This book begins with historical notes by J.P. Boris and D.L. Book who invented FCT in the early 1970s. The chapters that follow describe the design of fully multidimensional FCT algorithms for structured and unstructured grids, limiting for systems of conservation laws, and the use of FCT as an implicit subgrid scale model. The second edition presents 200 pages of additional material. The main highlights of the three new chapters include: FCT-constrained interpolation for Arbitrary Lagrangian-Eulerian methods, an optimization-based approach to flux correction, and FCT simulations of high-speed flows on overset grids. Addressing students and researchers, as well as CFD practitioners, the book is focused on computational aspects and contains m...
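The low-order/high-order flux pairing at the heart of FCT can be illustrated with a one-dimensional Boris-Book limiter for linear advection. This is a sketch, not code from the book; the grid size and Courant number are arbitrary:

```python
import numpy as np

def fct_advect(u, c, steps):
    """1D linear advection with Boris-Book flux-corrected transport:
    a donor-cell low-order flux bounds a Lax-Wendroff high-order flux
    so the constrained update stays monotone (periodic boundaries)."""
    u = u.astype(float).copy()
    for _ in range(steps):
        up = np.roll(u, -1)                                 # u_{i+1}
        fl = c * u                                          # low-order flux at i+1/2
        fh = c * (0.5 * (u + up) - 0.5 * c * (up - u))      # Lax-Wendroff flux
        utd = u - (fl - np.roll(fl, 1))                     # transported-diffused solution
        a = fh - fl                                         # antidiffusive flux
        d = np.roll(utd, -1) - utd                          # slope at i+1/2
        s = np.sign(a)
        ac = s * np.maximum(0.0, np.minimum(
            np.abs(a), np.minimum(s * np.roll(d, -1), s * np.roll(d, 1))))
        u = utd - (ac - np.roll(ac, 1))                     # limited correction
    return u

# Advect a square pulse once around a periodic grid at Courant number 0.5.
n, c = 100, 0.5
u0 = np.zeros(n)
u0[40:60] = 1.0
u1 = fct_advect(u0, c, int(n / c))
```

The limiter clips the antidiffusive flux against the neighbouring low-order slopes, so the pulse stays inside its initial bounds [0, 1] and total mass is conserved, while the high-order flux keeps the edges far sharper than pure upwinding would.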
An inter-crystal scatter correction method for DOI PET image reconstruction
International Nuclear Information System (INIS)
Lam, Chih Fung; Hagiwara, Naoki; Obi, Takashi; Yamaguchi, Masahiro; Yamaya, Taiga; Murayama, Hideo
2006-01-01
New positron emission tomography (PET) scanners utilize depth-of-interaction (DOI) information to improve image resolution, particularly at the edge of the field-of-view, while maintaining high detector sensitivity. However, the inter-crystal scatter (ICS) effect cannot be neglected in DOI scanners due to the use of smaller crystals. ICS is the phenomenon wherein there are multiple scintillations for irradiation of a gamma photon due to Compton scatter in detecting crystals. In the case of ICS, only one scintillation position is approximated for detectors with Anger-type logic calculation. This causes an error in position detection, and ICS worsens the image contrast, particularly for smaller hotspots. In this study, we propose to model an ICS probability by using a Monte Carlo simulator. The probability is given as a statistical relationship between the gamma photon first interaction crystal pair and the detected crystal pair. It is then used to improve the system matrix of a statistical image reconstruction algorithm, such as maximum likelihood expectation maximization (ML-EM), in order to correct for the position error caused by ICS. We apply the proposed method to simulated data of the jPET-D4, which is a four-layer DOI PET being developed at the National Institute of Radiological Sciences. Our computer simulations show that image contrast is recovered successfully by the proposed method. (author)
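The idea of folding an ICS mispositioning probability into the ML-EM system matrix can be sketched in one dimension. The blur kernel below is invented for illustration; only the structure (a detection-blur matrix composed into the system matrix, then standard ML-EM) mirrors the approach:

```python
import numpy as np

def mlem(y, P, iters=50):
    """ML-EM: x <- x * P^T(y / Px) / P^T 1 (elementwise)."""
    x = np.ones(P.shape[1])
    sens = P.sum(axis=0)                       # sensitivity, P^T 1
    for _ in range(iters):
        ratio = y / np.maximum(P @ x, 1e-12)
        x *= (P.T @ ratio) / sens
    return x

# Toy 1D "scanner": detection is ideal except for an ICS-like blur B that
# mispositions events into neighbouring crystals (kernel values invented).
n = 32
B = 0.1 * np.eye(n, k=-1) + 0.8 * np.eye(n) + 0.1 * np.eye(n, k=1)
B /= B.sum(axis=1, keepdims=True)
P_ideal = np.eye(n)
P_ics = B @ P_ideal                            # ICS-aware system matrix

x_true = np.zeros(n)
x_true[12], x_true[20] = 100.0, 60.0
y = P_ics @ x_true                             # noiseless blurred data

x_naive = mlem(y, P_ideal)   # ignores ICS: hotspots stay blurred
x_corr = mlem(y, P_ics)      # models ICS: hotspot amplitude recovered
```

With the ideal matrix, ML-EM simply reproduces the blurred data; with the ICS-aware matrix, the iterations deconvolve the mispositioning and the hotspot contrast recovers, which is the effect the abstract reports.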
A software-based x-ray scatter correction method for breast tomosynthesis
Jia Feng, Steve Si; Sechopoulos, Ioannis
2011-01-01
Purpose: To develop a software-based scatter correction method for digital breast tomosynthesis (DBT) imaging and investigate its impact on the image quality of tomosynthesis reconstructions of both phantoms and patients.
Binding and Pauli principle corrections in subthreshold pion-nucleus scattering
International Nuclear Information System (INIS)
Kam, J. de
1981-01-01
In this investigation I develop a three-body model for the single scattering optical potential in which the nucleon binding and the Pauli principle are accounted for. A unitarity pole approximation is used for the nucleon-core interaction. Calculations are presented for the π-⁴He elastic scattering cross sections at energies below the inelastic threshold and for the real part of the π-⁴He scattering length by solving the three-body equations. Off-shell kinematics and the Pauli principle are carefully taken into account. The binding correction and the Pauli principle correction each have an important effect on the differential cross sections and the scattering length. However, large cancellations occur between these two effects. I find an increase in the π-⁴He scattering length by 100%; an increase in the cross sections by 20-30%; and a shift of the minimum in π⁻-⁴He scattering to forward angles by 10°. (orig.)
Asgari, Afrouz; Ashoor, Mansour; Sohrabpour, Mostafa; Shokrani, Parvaneh; Rezaei, Ali
2015-05-01
Improving the signal-to-noise ratio (SNR) and image quality by various methods is very important for detecting abnormalities in body organs. Scatter and attenuation of photons by the organs lead to errors in radiopharmaceutical estimation as well as degradation of images. The choice of a suitable energy window and radionuclide plays a key role in nuclear medicine, the goal being the lowest scatter fraction together with a nearly constant linear attenuation coefficient as a function of phantom thickness. The symmetrical window (SW), asymmetric window (ASW), high window (WH) and low window (WL), using the Tc-99m and Sm-153 radionuclides with a solid water slab phantom (RW3) and a Teflon bone phantom, were compared; Matlab software and the Monte Carlo N-Particle (MCNP4C) code were used to simulate these methods and to obtain the FWHM and full width at tenth maximum (FWTM) from line spread functions (LSFs). The experimental data were obtained from an Orbiter Scintron gamma camera. Based on the results of the simulations as well as the experimental work, WH and ASW showed the lowest scatter fraction as well as a constant linear attenuation coefficient as a function of phantom thickness. WH and ASW were the optimal windows in nuclear medicine imaging for Tc-99m in the RW3 phantom and for Sm-153 in the Teflon bone phantom. Attenuation correction was performed for the WH and ASW optimal windows and for these radionuclides using a filtered back-projection algorithm. The results of simulation and experiment show very good agreement between the experimental and simulated data as well as between theoretical values and simulation data, with differences nominally less than 7.07% for Tc-99m and less than 8.00% for Sm-153. Corrected counts were not affected by the thickness of scattering material. The simulated results of the line spread function (LSF) for Sm-153 and Tc-99m in the phantom based on four windows and the TEW method were
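The triple-energy-window (TEW) method mentioned at the end of this abstract estimates the scatter under the photopeak as a trapezoid whose sides are the count densities in two narrow flanking sub-windows. A sketch with illustrative numbers (window widths and counts are invented, not taken from the paper):

```python
def tew_scatter(c_low, c_up, w_low, w_up, w_main):
    """Triple-energy-window estimate: trapezoid under the photopeak with
    sides given by count densities in two narrow flanking windows."""
    return (c_low / w_low + c_up / w_up) * w_main / 2.0

# Illustrative Tc-99m numbers: a 28 keV photopeak window (20% at 140 keV)
# flanked by 4 keV sub-windows recording 600 and 120 counts.
total = 10000.0                                  # counts in the main window
scatter_est = tew_scatter(600.0, 120.0, 4.0, 4.0, 28.0)
primary_est = total - scatter_est
```

The estimate is purely local per pixel, which is why the TEW method needs no system model, only the two extra acquisition sub-windows.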
Improved scatter correction with factor analysis for planar and SPECT imaging
Knoll, Peter; Rahmim, Arman; Gültekin, Selma; Šámal, Martin; Ljungberg, Michael; Mirzaei, Siroos; Segars, Paul; Szczupak, Boguslaw
2017-09-01
Quantitative nuclear medicine imaging is an increasingly important frontier. In order to achieve quantitative imaging, various interactions of photons with matter have to be modeled and compensated. Although correction for photon attenuation has been addressed by including x-ray CT scans (accurate), correction for Compton scatter remains an open issue. The inclusion of scattered photons within the energy window used for planar or SPECT data acquisition decreases the contrast of the image. While a number of methods for scatter correction have been proposed in the past, in this work, we propose and assess a novel, user-independent framework applying factor analysis (FA). Extensive Monte Carlo simulations for planar and tomographic imaging were performed using the SIMIND software. Furthermore, planar acquisition of two Petri dishes filled with 99mTc solutions and a Jaszczak phantom study (Data Spectrum Corporation, Durham, NC, USA) using a dual head gamma camera were performed. In order to use FA for scatter correction, we subdivided the applied energy window into a number of sub-windows, serving as input data. FA results in two factor images (photo-peak, scatter) and two corresponding factor curves (energy spectra). Planar and tomographic Jaszczak phantom gamma camera measurements were recorded. The tomographic data (simulations and measurements) were processed for each angular position resulting in a photo-peak and a scatter data set. The reconstructed transaxial slices of the Jaszczak phantom were quantified using an ImageJ plugin. The data obtained by FA showed good agreement with the energy spectra, photo-peak, and scatter images obtained in all Monte Carlo simulated data sets. For comparison, the standard dual-energy window (DEW) approach was additionally applied for scatter correction. FA in comparison with the DEW method results in significant improvements in image accuracy for both planar and tomographic data sets. FA can be used as a user
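The decomposition of energy sub-window images into a photopeak component and a scatter component can be imitated with a simple nonnegative matrix factorization (Lee-Seung multiplicative updates). This is a stand-in for the paper's factor analysis, not its actual algorithm, and all spectra and images below are synthetic:

```python
import numpy as np

def nmf(V, k=2, iters=1000, seed=0):
    """Lee-Seung multiplicative updates for V ~= W @ H under the
    Frobenius norm; W columns act as factor images, H rows as spectra."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + 0.1
    H = rng.random((k, V.shape[1])) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / np.maximum(W.T @ W @ H, 1e-12)
        W *= (V @ H.T) / np.maximum(W @ H @ H.T, 1e-12)
    return W, H

# Synthetic study: 50 pixels x 6 energy sub-windows built from two
# components, a photopeak-like source and a diffuse scatter background.
spec_peak = np.array([0.0, 0.1, 0.2, 0.9, 1.0, 0.3])
spec_scat = np.array([0.8, 1.0, 0.6, 0.3, 0.1, 0.0])
img_peak = np.zeros(50); img_peak[20:30] = 5.0
img_scat = np.full(50, 1.0)
V = np.outer(img_peak, spec_peak) + np.outer(img_scat, spec_scat)
W, H = nmf(V)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

On exact two-component data the factorization recovers the structure well; in the paper's setting the recovered "scatter" factor image is what gets discarded before reconstruction.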
Energy Technology Data Exchange (ETDEWEB)
Shi, L; Zhu, L [Georgia Institute of Technology, Atlanta, GA (United States); Vedantham, S; Karellas, A [University of Massachusetts Medical School, Worcester, MA (United States)
2016-06-15
Purpose: The image quality of dedicated cone-beam breast CT (CBBCT) is fundamentally limited by substantial x-ray scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose to suppress x-ray scatter in CBBCT images using a deterministic forward projection model. Method: We first use the 1st-pass FDK-reconstructed CBBCT images to segment fibroglandular and adipose tissue. Attenuation coefficients are assigned to the two tissues based on the x-ray spectrum used for image acquisition, and the segmented volume is forward projected to simulate scatter-free primary projections. We estimate the scatter by subtracting the simulated primary projection from the measured projection, and the resultant scatter map is further refined by a Fourier-domain fitting algorithm after discarding untrusted scatter information. The final scatter estimate is subtracted from the measured projection for effective scatter correction. In our implementation, the proposed scatter correction takes 0.5 seconds for each projection. The method was evaluated using the overall image spatial non-uniformity (SNU) metric and the contrast-to-noise ratio (CNR) with 5 clinical datasets of BI-RADS 4/5 subjects. Results: For the 5 clinical datasets, our method reduced the SNU from 7.79% to 1.68% in coronal view and from 6.71% to 3.20% in sagittal view. The average CNR is improved by a factor of 1.38 in coronal view and 1.26 in sagittal view. Conclusion: The proposed scatter correction approach requires no additional scans or prior images and uses a deterministic model for efficient calculation. Evaluation with clinical datasets demonstrates the feasibility and stability of the method. These features are attractive for clinical CBBCT and make our method distinct from other approaches. Supported partly by NIH R21EB019597, R21CA134128
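The core subtraction step (scatter = measured minus simulated primary, kept only as a low-frequency map) can be sketched in one dimension. Here the "simulated primary" is handed in directly rather than obtained by segmentation and forward projection, and the Fourier-domain fit is replaced by a simple moving-average smoother; profiles are invented:

```python
import numpy as np

def scatter_correct(measured, primary_sim, smooth=11):
    """Estimate scatter as (measured - simulated primary), keep only its
    slowly varying part via a moving average, then subtract it."""
    residual = measured - primary_sim
    kernel = np.ones(smooth) / smooth
    scatter_est = np.convolve(residual, kernel, mode="same")
    return measured - np.clip(scatter_est, 0.0, None)

# Toy projection: a Gaussian primary profile plus a broad scatter hump.
x = np.linspace(-1.0, 1.0, 201)
primary = 100.0 * np.exp(-8.0 * x**2)
scatter = 20.0 * (1.0 - 0.5 * x**2)
measured = primary + scatter
corrected = scatter_correct(measured, primary)
```

Because scatter is spatially smooth, restricting the estimate to low frequencies makes the method tolerant of errors in the simulated primary, which is the rationale for the Fourier-domain refinement in the paper.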
The Bouguer Correction Algorithm for Gravity with Limited Range
MA Jian; WEI Ziqing; WU Lili; YANG Zhenghui
2017-01-01
The Bouguer correction is an important item in gravity reduction, but the traditional Bouguer correction, whether the plane Bouguer correction or the spherical Bouguer correction, suffers from approximation error because of far-zone virtual terrain. The error grows as the calculation point gets higher. Therefore, gravity reduction using the Bouguer correction with limited range, which is in accordance with the scope of the topographic correction, was researched in this paper. After that, a simpli...
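For orientation, the classical (unlimited) plane Bouguer correction that this paper refines is the infinite-slab term 2πGρh. A sketch in SI units with the conventional crustal density (the limited-range version replaces the infinite slab with a cap matching the topographic-correction radius):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def bouguer_plate_mgal(height_m, density=2670.0):
    """Classical infinite-slab Bouguer correction 2*pi*G*rho*h,
    returned in mGal (1 Gal = 1 cm/s^2, so 1 m/s^2 = 1e5 mGal)."""
    return 2.0 * math.pi * G * density * height_m * 1e5

# The familiar rule of thumb: about 0.112 mGal per metre of elevation
# for the standard crustal density of 2670 kg/m^3.
per_metre = bouguer_plate_mgal(1.0)
```

The paper's point is that this slab implicitly includes mass far beyond the topographic-correction zone ("far-zone virtual terrain"), and the discrepancy grows with station height.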
Zaidi, H; Slosman, D O
2003-01-01
Reliable attenuation correction represents an essential component of the long chain of modules required for the reconstruction of artifact-free, quantitative brain positron emission tomography (PET) images. In this work we demonstrate the proof of principle of segmented magnetic resonance imaging (MRI)-guided attenuation and scatter corrections in 3D brain PET. We have developed a method for attenuation correction based on registered T1-weighted MRI, eliminating the need of an additional transmission (TX) scan. The MR images were realigned to preliminary reconstructions of PET data using an automatic algorithm and then segmented by means of a fuzzy clustering technique which identifies tissues of significantly different density and composition. The voxels belonging to different regions were classified into air, skull, brain tissue and nasal sinuses. These voxels were then assigned theoretical tissue-dependent attenuation coefficients as reported in the ICRU 44 report followed by Gaussian smoothing and additio...
Non-eikonal corrections for the scattering of spin-one particles
Energy Technology Data Exchange (ETDEWEB)
Gaber, M.W.; Wilkin, C. [Department of Physics and Astronomy, University College London, WC1E 6BT, London (United Kingdom); Al-Khalili, J.S. [Department of Physics, University of Surrey, GU2 7XH, Guildford, Surrey (United Kingdom)
2004-08-01
The Wallace Fourier-Bessel expansion of the scattering amplitude is generalised to the case of the scattering of a spin-one particle from a potential with a single tensor coupling as well as central and spin-orbit terms. A generating function for the eikonal-phase (quantum) corrections is evaluated in closed form. For medium-energy deuteron-nucleus scattering, the first-order correction is dominant and is shown to be significant in the interpretation of analysing power measurements. This conclusion is supported by a numerical comparison of the eikonal observables, evaluated with and without corrections, with those obtained from a numerical resolution of the Schroedinger equation for d-⁵⁸Ni scattering at incident deuteron energies of 400 and 700 MeV. (orig.)
Radiative corrections to high-energy neutrino scattering
International Nuclear Information System (INIS)
Rujula, A. de; Petronzio, R.; Savoy-Navarro, A.
1979-01-01
Motivated by precise neutrino experiments, the electromagnetic radiative corrections to the data are reconsidered. The usefulness is investigated and the simplicity demonstrated of the 'leading log' approximation: the calculation to order α ln(Q/μ), α ln(Q/m_q). Here Q is an energy scale of the overall process, μ is the lepton mass and m_q is a hadronic mass, the effective quark mass in a parton model. The leading log radiative corrections to dσ/dy distributions and to suitably interpreted dσ/dx distributions are quark-mass independent. The authors improve upon the conventional leading log approximation and compute explicitly the largest terms that lie beyond the leading log level. In practice this means that the model-independent formulae, though approximate, are likely to be excellent estimates everywhere except at low energy or very large y. It is pointed out that radiative corrections to measurements of deviations from the Callan-Gross relation and to measurements of the 'sea' constituency of nucleons are gigantic. The QCD-inspired study of deviations from scaling is of particular interest. The authors compute, beyond the leading log level, the radiative corrections to the QCD predictions. (Auth.)
International Nuclear Information System (INIS)
Ljungberg, M.
1990-05-01
Quantitative scintigraphic images, obtained by NaI(Tl) scintillation cameras, are limited by photon attenuation and the contribution from scattered photons. A Monte Carlo program was developed in order to evaluate these effects. Simple source-phantom geometries and more complex nonhomogeneous cases can be simulated. Comparisons with experimental data for both homogeneous and nonhomogeneous regions and with published results have shown good agreement. The usefulness for simulation of parameters in scintillation camera systems, stationary as well as in SPECT systems, has also been demonstrated. An attenuation correction method based on density maps and build-up functions has been developed. The maps were obtained from a transmission measurement using an external ⁵⁷Co flood source and the build-up was simulated by the Monte Carlo code. Two scatter correction methods, the dual-window method and the convolution-subtraction method, have been compared using the Monte Carlo method. The aim was to compare the estimated scatter with the true scatter in the photo-peak window. It was concluded that accurate depth-dependent scatter functions are essential for a proper scatter correction. A new scatter and attenuation correction method has been developed based on scatter line-spread functions (SLSF) obtained for different depths and lateral positions in the phantom. An emission image is used to determine the source location in order to estimate the scatter in the photo-peak window. Simulation studies of a clinically realistic source in different positions in cylindrical water phantoms were made for three photon energies. The SLSF-correction method was also evaluated by simulation studies for 1. a myocardial source, 2. uniform source in the lungs and 3. a tumour located in the lungs in a realistic, nonhomogeneous computer phantom. The results showed that quantitative images could be obtained in nonhomogeneous regions. (67 refs.)
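The dual-window method compared in this work approximates the scatter in the photopeak window as a fixed fraction k of the counts in a lower scatter window (k = 0.5 is the classic choice). A minimal sketch with invented counts:

```python
import numpy as np

def dew_correct(peak, lower, k=0.5):
    """Dual-energy-window correction: photopeak scatter is approximated
    as k times the lower-window image, then subtracted and clipped."""
    return np.clip(peak - k * np.asarray(lower, dtype=float), 0.0, None)

# Illustrative counts for three pixels (numbers invented).
peak = np.array([120.0, 300.0, 90.0])    # photopeak-window counts
lower = np.array([40.0, 100.0, 30.0])    # lower scatter-window counts
corrected = dew_correct(peak, lower)
```

The thesis's criticism applies directly to this sketch: a single global k ignores the depth dependence of the true scatter response, which is what the SLSF approach models instead.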
DEFF Research Database (Denmark)
de Nijs, Robin; Lagerburg, Vera; Klausen, Thomas L
2014-01-01
and the activity, which depends on the collimator type, the utilized energy windows and the applied scatter correction techniques. In this study, energy window subtraction-based scatter correction methods are compared experimentally and quantitatively. MATERIALS AND METHODS: (177)Lu SPECT images of a phantom...... technique, the measured ratio was close to the real ratio, and the differences between spheres were small. CONCLUSION: For quantitative (177)Lu imaging MEGP collimators are advised. Both energy peaks can be utilized when the ESSE correction technique is applied. The difference between the calculated...
International Nuclear Information System (INIS)
Ouyang, Luo; Lee, Huichen Pam; Wang, Jing
2015-01-01
Purpose: To evaluate a moving blocker-based approach in estimating and correcting megavoltage (MV) and kilovoltage (kV) scatter contamination in kV cone-beam computed tomography (CBCT) acquired during volumetric modulated arc therapy (VMAT). Methods and materials: During the concurrent CBCT/VMAT acquisition, a physical attenuator (i.e., “blocker”) consisting of equally spaced lead strips was mounted and moved constantly between the CBCT source and patient. Both kV and MV scatter signals were estimated from the blocked region of the imaging panel, and interpolated into the unblocked region. A scatter corrected CBCT was then reconstructed from the unblocked projections after scatter subtraction using an iterative image reconstruction algorithm based on constraint optimization. Experimental studies were performed on a Catphan® phantom and an anthropomorphic pelvis phantom to demonstrate the feasibility of using a moving blocker for kV–MV scatter correction. Results: Scatter induced cupping artifacts were substantially reduced in the moving blocker corrected CBCT images. Quantitatively, the root mean square error of Hounsfield units (HU) in seven density inserts of the Catphan phantom was reduced from 395 to 40. Conclusions: The proposed moving blocker strategy greatly improves the image quality of CBCT acquired with concurrent VMAT by reducing the kV–MV scatter induced HU inaccuracy and cupping artifacts
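The blocker idea can be sketched in one detector row: pixels in the lead-strip shadow record (almost) pure scatter, which is interpolated across the open pixels and subtracted. The geometry and magnitudes below are invented for illustration:

```python
import numpy as np

def blocker_scatter_correct(proj, blocked):
    """Sample scatter in the blocker shadow, interpolate it across the
    open detector columns, and subtract it from the open-column signal."""
    cols = np.arange(proj.size)
    scatter = np.interp(cols, blocked, proj[blocked])
    corrected = np.clip(proj - scatter, 0.0, None)
    corrected[blocked] = 0.0            # shadowed columns carry no primary
    return corrected, scatter

# One detector row, 61 columns: flat primary in the open region plus a
# slowly varying scatter background; lead strips every 10 columns.
n = 61
cols = np.arange(n)
true_scatter = 30.0 + 0.2 * cols
proj = 100.0 + true_scatter
blocked = np.arange(0, n, 10)
proj[blocked] = true_scatter[blocked]   # strips stop the primary beam
corrected, est = blocker_scatter_correct(proj, blocked)
open_cols = np.setdiff1d(cols, blocked)
```

Because scatter varies slowly across the panel, sparse shadow samples suffice; moving the blocker between projections (as in the paper) restores the primary data lost under the strips at each angle.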
Williams, C. R.
2012-12-01
The NASA Global Precipitation Mission (GPM) raindrop size distribution (DSD) Working Group is composed of NASA PMM Science Team Members and is charged to "investigate the correlations between DSD parameters using Ground Validation (GV) data sets that support, or guide, the assumptions used in satellite retrieval algorithms." Correlations between DSD parameters can be used to constrain the unknowns and reduce the degrees-of-freedom in under-constrained satellite algorithms. Over the past two years, the GPM DSD Working Group has analyzed GV data and has found correlations between the mass-weighted mean raindrop diameter (Dm) and the mass distribution standard deviation (Sm) that follow a power-law relationship. This Dm-Sm power-law relationship appears to be robust and has been observed in surface disdrometer and vertically pointing radar observations. One benefit of a Dm-Sm power-law relationship is that a three-parameter DSD can be modeled with just two parameters: Dm and Nw, which determines the DSD amplitude. In order to incorporate observed DSD correlations into satellite algorithms, the GPM DSD Working Group is developing scattering and integral tables that can be used by satellite algorithms. Scattering tables describe the interaction of electromagnetic waves on individual particles to generate cross sections of backscattering, extinction, and scattering. Scattering tables are independent of the distribution of particles. Integral tables combine scattering table outputs with DSD parameters and DSD correlations to generate integrated normalized reflectivity, attenuation, scattering, emission, and asymmetry coefficients. Integral tables contain both frequency dependent scattering properties and cloud microphysics. The GPM DSD Working Group has developed scattering tables for raindrops at both Dual-frequency Precipitation Radar (DPR) frequencies and at all GMI radiometer frequencies less than 100 GHz. Scattering tables include Mie and T-matrix scattering with H- and V
Coherent scattering and matrix correction in bone-lead measurements
International Nuclear Information System (INIS)
Todd, A.C.
2000-01-01
The technique of K-shell x-ray fluorescence of lead in bone has been used in many studies of the health effects of lead. This paper addresses one aspect of the technique, namely the coherent conversion factor (CCF) which converts between the matrix of the calibration standards and those of human bone. The CCF is conventionally considered a constant but is a function of scattering angle, energy and the elemental composition of the matrices. The aims of this study were to quantify the effect on the CCF of several assumptions which may not have been tested adequately and to compare the CCFs for plaster of Paris (the present matrix of calibration standards) and a synthetic apatite matrix. The CCF was calculated, using relativistic form factors, for published compositions of bone, both assumed and assessed compositions of plaster, and the synthetic apatite. The main findings of the study were, first, that impurities in plaster, lead in the plaster or bone matrices, coherent scatter from non-bone tissues and the individual subject's measurement geometry are all minor or negligible effects; and, second, that the synthetic apatite matrix is more representative of bone mineral than is plaster of Paris. (author)
Energy Technology Data Exchange (ETDEWEB)
Rusz, Ján, E-mail: jan.rusz@fysik.uu.se
2017-06-15
Highlights: • New algorithm for calculating the double differential scattering cross-section. • Shows good convergence properties. • Outperforms the older MATS algorithm, particularly in zone axis calculations. - Abstract: We present a new algorithm for calculating the inelastic scattering cross-section for fast electrons. Compared to the previous Modified Automatic Term Selection (MATS) algorithm (Rusz et al. [18]), it has far better convergence properties in zone axis calculations and it allows identification of the contributions of individual atoms. One can think of it as a blend of the MATS algorithm and the method described by Weickenmeier and Kohl [10].
An algorithm for 3D target scatterer feature estimation from sparse SAR apertures
Jackson, Julie Ann; Moses, Randolph L.
2009-05-01
We present an algorithm for extracting 3D canonical scattering features from complex targets observed over sparse 3D SAR apertures. The algorithm begins with complex phase history data and ends with a set of geometrical features describing the scene. The algorithm provides a pragmatic approach to initialization of a nonlinear feature estimation scheme, using regularization methods to deconvolve the point spread function and obtain sparse 3D images. Regions of high energy are detected in the sparse images, providing location initializations for scattering center estimates. A single canonical scattering feature, corresponding to a geometric shape primitive, is fit to each region via nonlinear optimization of fit error between the regularized data and parametric canonical scattering models. Results of the algorithm are presented using 3D scattering prediction data of a simple scene for both a densely-sampled and a sparsely-sampled SAR measurement aperture.
Scatter measurement and correction method for cone-beam CT based on single grating scan
Huang, Kuidong; Shi, Wenlong; Wang, Xinyu; Dong, Yin; Chang, Taoqi; Zhang, Hua; Zhang, Dinghua
2017-06-01
In cone-beam computed tomography (CBCT) systems based on flat-panel detector imaging, the presence of scatter significantly reduces the quality of slices. Based on the concept of collimation, this paper presents a scatter measurement and correction method based on a single grating scan. First, according to the characteristics of CBCT imaging, the scan method using a single grating and the design requirements of the grating are analyzed. Second, by analyzing the composition of object projection images and object-and-grating projection images, a processing method for the scatter image at a single projection angle is proposed. In addition, to avoid additional scans, this paper proposes an angle interpolation method for scatter images to reduce scan cost. Finally, the experimental results show that the scatter images obtained by this method are accurate and reliable, and the effect of scatter correction is obvious. When additional object-and-grating projection images are collected and interpolated at intervals of 30°, the scatter correction error of slices can still be controlled within 3%.
International Nuclear Information System (INIS)
Mukai, T.; Torizuka, K.; Douglass, K.H.; Wagner, H.N.
1985-01-01
Quantitative assessment of tracer distribution with single photon emission computed tomography (SPECT) is difficult because of attenuation and scattering of gamma rays within the object. A method considering the source geometry was developed, and the effects of attenuation and scatter on SPECT quantitation were studied using phantoms with non-uniform attenuation. The distribution of attenuation coefficients (μ) within the source was obtained by transmission CT. The attenuation correction was performed by an iterative reprojection technique. The scatter correction was done by convolution of the attenuation-corrected image with an appropriate filter derived from line source studies. The filter characteristics depended on μ and the SPECT measurement at each pixel. The SPECT obtained by this method showed more reasonable results than the images reconstructed by other methods. The scatter correction could compensate completely for a 28% scatter component from a long line source, and a 61% component for a thick and extended source. Consideration of source geometries was necessary for effective corrections. The present method is expected to be valuable for the quantitative assessment of regional tracer activity
Sramek, Benjamin Koerner
The ability to deliver conformal dose distributions in radiation therapy through intensity modulation and the potential for tumor dose escalation to improve treatment outcome has necessitated an increase in localization accuracy of inter- and intra-fractional patient geometry. Megavoltage cone-beam CT imaging using the treatment beam and onboard electronic portal imaging device is one option currently being studied for implementation in image-guided radiation therapy. However, routine clinical use is predicated upon continued improvements in image quality and patient dose delivered during acquisition. The formal statement of hypothesis for this investigation was that the conformity of planned to delivered dose distributions in image-guided radiation therapy could be further enhanced through the application of kilovoltage scatter correction and intermediate view estimation techniques to megavoltage cone-beam CT imaging, and that normalized dose measurements could be acquired and inter-compared between multiple imaging geometries. The specific aims of this investigation were to: (1) incorporate the Feldkamp, Davis and Kress filtered backprojection algorithm into a program to reconstruct a voxelized linear attenuation coefficient dataset from a set of acquired megavoltage cone-beam CT projections, (2) characterize the effects on megavoltage cone-beam CT image quality resulting from the application of Intermediate View Interpolation and Intermediate View Reprojection techniques to limited-projection datasets, (3) incorporate the Scatter and Primary Estimation from Collimator Shadows (SPECS) algorithm into megavoltage cone-beam CT image reconstruction and determine the set of SPECS parameters which maximize image quality and quantitative accuracy, and (4) evaluate the normalized axial dose distributions received during megavoltage cone-beam CT image acquisition using radiochromic film and thermoluminescent dosimeter measurements in anthropomorphic pelvic and head and
Angle Statistics Reconstruction: a robust reconstruction algorithm for Muon Scattering Tomography
Stapleton, M.; Burns, J.; Quillin, S.; Steer, C.
2014-11-01
Muon Scattering Tomography (MST) is a technique for using the scattering of cosmic ray muons to probe the contents of enclosed volumes. As a muon passes through material it undergoes multiple Coulomb scattering, where the amount of scattering is dependent on the density and atomic number of the material as well as the path length. Hence, MST has been proposed as a means of imaging dense materials, for instance to detect special nuclear material in cargo containers. Algorithms are required to generate an accurate reconstruction of the material density inside the volume from the muon scattering information and some have already been proposed, most notably the Point of Closest Approach (PoCA) and Maximum Likelihood/Expectation Maximisation (MLEM) algorithms. However, whilst PoCA-based algorithms are easy to implement, they perform rather poorly in practice. Conversely, MLEM is a complicated algorithm to implement and computationally intensive and there is currently no published, fast and easily-implementable algorithm that performs well in practice. In this paper, we first provide a detailed analysis of the source of inaccuracy in PoCA-based algorithms. We then motivate an alternative method, based on ideas first laid out by Morris et al, presenting and fully specifying an algorithm that performs well against simulations of realistic scenarios. We argue this new algorithm should be adopted by developers of Muon Scattering Tomography as an alternative to PoCA.
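For reference, the PoCA step this paper analyzes reduces each muon event to the point of closest approach between the incoming and outgoing tracks, and assigns the scattering there. A minimal sketch of that geometry:

```python
import numpy as np

def poca(p_in, d_in, p_out, d_out):
    """Point of Closest Approach between the incoming ray (p_in, d_in)
    and the outgoing ray (p_out, d_out); assumes the rays are not
    parallel and returns the midpoint of the shortest connecting segment."""
    d1 = d_in / np.linalg.norm(d_in)
    d2 = d_out / np.linalg.norm(d_out)
    w = p_in - p_out
    b = d1 @ d2
    d, e = d1 @ w, d2 @ w
    denom = 1.0 - b * b                 # for unit directions a = c = 1
    s = (b * e - d) / denom             # parameter along the incoming ray
    t = (e - b * d) / denom             # parameter along the outgoing ray
    return 0.5 * ((p_in + s * d1) + (p_out + t * d2))

# A muon travelling straight down scatters at (0, 0, -5) and leaves with
# a small transverse kick; PoCA recovers the scattering vertex.
p_in = np.array([0.0, 0.0, 10.0]); d_in = np.array([0.0, 0.0, -1.0])
p_out = np.array([0.0, 0.0, -5.0]); d_out = np.array([0.1, 0.0, -1.0])
vertex = poca(p_in, d_in, p_out, d_out)
```

The weakness the paper identifies is visible here: PoCA concentrates what is really a distributed sequence of multiple Coulomb scatters into a single point, which is exact only for one discrete deflection.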
Energy Technology Data Exchange (ETDEWEB)
Rinkel, J.; Dinten, J.M. [CEA Grenoble (DTBS/STD), Lab. d' Electronique et de Technologie de l' Informatique, LETI, 38 (France); Esteve, F. [European Synchrotron Radiation Facility (ESRF), 38 - Grenoble (France)
2004-07-01
Purpose: Cone beam CT (CBCT) enables three-dimensional imaging with isotropic resolution. X-ray scatter estimation is a major challenge for quantitative CBCT imaging of the thorax: the scatter level is significantly higher on cone beam systems than on collimated fan beam systems. The effects of this scattered radiation are cupping artefacts, streaks, and quantification inaccuracies. The conventional beam-stop approach to scatter estimation can be used for CBCT but leads to a significant increase in dose and acquisition time. An original scatter management process requiring no supplementary acquisition has been developed at CEA-LETI. Methods and Materials: This Analytical Plus Indexing-based (API) method of scatter correction in CBCT is based on scatter calibration through offline acquisitions with beam stops on lucite plates, combined with an analytical transformation derived from physical equations. This approach has been applied successfully in bone densitometry and mammography. To evaluate this method in CBCT, acquisitions of a thorax phantom with and without beam stops were performed. To compare the different scatter correction approaches, the Feldkamp algorithm was applied to raw data corrected for scatter by the API and beam-stop approaches. Results: The API method provides results in good agreement with the beam-stop array approach, suppressing the cupping artefact. The influence of the scatter correction method on noise in the reconstructed images was also evaluated. Conclusion: The results indicate that the API method is effective for quantitative CBCT imaging of the thorax. Compared to a beam-stop array method, it requires a lower x-ray dose and shortens acquisition time. (authors)
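The reference technique the API method is compared against works as follows: behind each opaque beam stop the primary beam is blocked, so the detector reads scatter only; those sparse samples are interpolated over the whole detector and subtracted. A minimal sketch under simplifying assumptions (inverse-distance-weighted interpolation; pixel-coordinate stop positions are illustrative):

```python
import numpy as np

def beam_stop_scatter_correction(proj_with_stops, proj_open, stop_centers, p=2.0):
    """Conventional beam-stop scatter estimation: signals measured behind
    the opaque stops are scatter-only samples, which are interpolated
    over the full detector (here by inverse-distance weighting) and
    subtracted from the open-field projection."""
    centers = np.asarray(stop_centers, dtype=float)              # (k, 2)
    samples = proj_with_stops[centers[:, 0].astype(int),
                              centers[:, 1].astype(int)]          # scatter-only
    rows, cols = np.indices(proj_open.shape)
    pts = np.stack([rows.ravel(), cols.ravel()], axis=1).astype(float)
    # squared distance from every pixel to every stop centre
    d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    w = 1.0 / np.maximum(d2, 1e-9) ** (p / 2)                    # IDW weights
    scatter = ((w @ samples) / w.sum(axis=1)).reshape(proj_open.shape)
    primary = np.clip(proj_open - scatter, 0.0, None)
    return primary, scatter
```

The dose/time penalty the abstract mentions comes from needing the extra `proj_with_stops` exposure per view; the API method avoids it by calibrating offline.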
Investigation of Compton scattering correction methods in cardiac SPECT by Monte Carlo simulations
International Nuclear Information System (INIS)
Silva, A.M. Marques da; Furlan, A.M.; Robilotta, C.C.
2001-01-01
The goal of this work was to use Monte Carlo simulations to investigate the effects of two scattering correction methods, the dual energy window (DEW) and the dual photopeak window (DPW), in quantitative cardiac SPECT reconstruction. An MCAT torso-cardiac phantom with 99mTc and a non-uniform attenuation map was simulated. Two different photopeak windows were evaluated in the DEW method: 15% and 20%. Two 10%-wide subwindows centered symmetrically within the photopeak were used in the DPW method. Iterative ML-EM reconstruction with a modified projector-backprojector for attenuation correction was applied. Results indicated that the choice of the scattering and photopeak windows determines the accuracy of the correction. For the 15% window, a fitted scatter fraction gives better results than k = 0.5. For the 20% window, DPW is the best method, but it requires parameter estimation using Monte Carlo simulations. (author)
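The DEW method evaluated here has a one-line core: the scatter in the photopeak window is estimated as a fixed fraction k of the counts in a lower Compton window and subtracted. A minimal sketch (k = 0.5 is the classic default; the abstract notes a fitted scatter fraction can outperform it for the 15% window):

```python
import numpy as np

def dew_correct(photopeak_proj, scatter_window_proj, k=0.5):
    """Dual-energy-window (DEW) scatter correction: scatter in the
    photopeak window is modelled as k times the counts recorded in a
    lower Compton-scatter window, then subtracted (clipped at zero to
    avoid negative counts)."""
    scatter_est = k * scatter_window_proj
    return np.clip(photopeak_proj - scatter_est, 0.0, None)
```

The DPW variant replaces the lower Compton window with two subwindows inside the photopeak itself, from which a per-pixel scatter fraction is regressed.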
Corrections to the leading eikonal amplitude for high-energy scattering and quasipotential approach
International Nuclear Information System (INIS)
Nguyen Suan Hani; Nguyen Duy Hung
2003-12-01
The asymptotic behaviour of the scattering amplitude for two scalar particles at high energy and fixed momentum transfer is reconsidered in quantum field theory. In the framework of the quasipotential approach and the modified perturbation theory, a systematic scheme for finding the leading eikonal scattering amplitudes and their corrections is developed and constructed. The connection between the solutions obtained by the quasipotential and functional approaches is also discussed. (author)
The modular small-angle X-ray scattering data correction sequence.
Pauw, B R; Smith, A J; Snow, T; Terrill, N J; Thünemann, A F
2017-12-01
Data correction is probably the least favourite activity amongst users experimenting with small-angle X-ray scattering: if it is not done sufficiently well, this may become evident only during the data analysis stage, necessitating the repetition of the data corrections from scratch. A recommended comprehensive sequence of elementary data correction steps is presented here to alleviate the difficulties associated with data correction, both in the laboratory and at the synchrotron. When applied in the proposed order to the raw signals, the resulting absolute scattering cross section will provide a high degree of accuracy for a very wide range of samples, with its values accompanied by uncertainty estimates. The method can be applied without modification to any pinhole-collimated instruments with photon-counting direct-detection area detectors.
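The abstract's key point is that elementary corrections must be applied in a fixed order to the raw signal. As a simplified, illustrative pipeline (the step names and order below are a common textbook subset, not the paper's full recommended sequence), with a crude Poisson uncertainty carried along:

```python
import numpy as np

def correct_saxs(raw, dark, flat, transmission, thickness_m, time_s,
                 background=None, calibration=1.0):
    """Simplified ordered SAXS data-correction sequence (illustrative):
    dark-current subtraction -> flat-field division -> exposure-time and
    transmission normalisation -> background subtraction -> thickness and
    absolute-intensity scaling. Returns (intensity, uncertainty)."""
    counts = np.asarray(raw, dtype=float)
    sigma = np.sqrt(np.maximum(counts, 1.0))       # Poisson uncertainty estimate
    i = (counts - dark) / flat                     # detector corrections
    i /= (time_s * transmission)                   # exposure + sample attenuation
    if background is not None:
        i = i - background                         # instrument/solvent background
    i *= calibration / thickness_m                 # to absolute units per volume
    sigma = sigma / flat / (time_s * transmission) * calibration / thickness_m
    return i, sigma
```

The paper's argument is precisely that getting such a sequence wrong, or incomplete, only becomes visible at the analysis stage, which is why a standardized order with propagated uncertainties matters.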
Magnetic corrections to π-π scattering lengths in the linear sigma model
Loewe, M.; Monje, L.; Zamora, R.
2018-03-01
In this article, we consider the magnetic corrections to π-π scattering lengths in the framework of the linear sigma model. For this, we consider all the one-loop corrections in the s, t, and u channels associated with the insertion of a Schwinger propagator for charged pions, working in the region of small values of the magnetic field. Our calculation relies on an appropriate expansion for the propagator. It turns out that the leading scattering length, l = 0 in the S channel, increases with an increasing value of the magnetic field in the isospin I = 2 case, whereas the opposite effect is found for the I = 0 case. The isospin symmetry is valid because the insertion of the magnetic field occurs through the absolute value of the electric charges. The channel I = 1 does not receive any corrections. These results, for the channels I = 0 and I = 2, are opposite to the thermal corrections found previously in the literature.
Scatter correction method for x-ray CT using primary modulation: Phantom studies
International Nuclear Information System (INIS)
Gao Hewei; Fahrig, Rebecca; Bennett, N. Robert; Sun Mingshan; Star-Lack, Josh; Zhu Lei
2010-01-01
Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: a Catphan©600 phantom, an anthropomorphic chest phantom, and the Catphan©600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan©600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast-to-noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study investigates the impact of object size on the efficiency of the method. The scatter-to-primary ratio estimation error on the Catphan©600 phantom without any annulus (20 cm in diameter) is at the level of 0.04; it rises to 0.07 and 0.1 on the phantom with an
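The principle behind primary modulation can be shown in one dimension. A modulator attenuates the *primary* on every other pixel by a known factor alpha, while scatter stays smooth; assuming primary and scatter are locally constant over a pixel pair, the two signals separate algebraically. This is only the core idea in sketch form; the actual method works in 2D and in the Fourier domain:

```python
import numpy as np

def primary_modulation_1d(measured, alpha):
    """1D sketch of primary modulation: a checkerboard modulator gives
        I_odd  = P + S   (unattenuated pixel)
        I_even = alpha * P + S   (modulated pixel)
    Solving pixel-pair by pixel-pair (P, S locally constant) yields the
    primary P and scatter S separately."""
    I1 = measured[0::2].astype(float)   # unattenuated pixels
    I2 = measured[1::2].astype(float)   # modulated pixels
    n = min(len(I1), len(I2))
    P = (I1[:n] - I2[:n]) / (1.0 - alpha)
    S = I1[:n] - P
    return P, S
```

The abstract's two modulator designs trade off exactly the quantities visible here: a stronger modulation (smaller alpha) makes the subtraction better conditioned, and a higher modulation frequency improves the separation of the smooth scatter from the modulated primary.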
In-medium effects in K+ scattering versus Glauber model with noneikonal corrections
International Nuclear Information System (INIS)
Eliseev, S.M.; Rihan, T.H.
1996-01-01
The discrepancy between the experimental and theoretical values of the ratio R of the total cross sections, R = σ(K⁺-¹²C)/6σ(K⁺-d), at momenta up to 800 MeV/c is discussed in the framework of the Glauber multiple scattering approach. It is shown that various corrections, such as adopting relativistic K⁺-N amplitudes as well as noneikonal corrections, seem to fail in reproducing the experimental data, especially at higher momenta. 17 refs., 1 fig.
Evaluation of the ICS and DEW scatter correction methods for low statistical content scans in 3D PET
International Nuclear Information System (INIS)
Sossi, V.; Oakes, T.R.; Ruth, T.J.
1996-01-01
The performance of the Integral Convolution and the Dual Energy Window scatter correction methods in 3D PET has been evaluated over a wide range of statistical content of acquired data (1M to 400M events). The order in which scatter correction and detector normalization should be applied has also been investigated. Phantom and human neuroreceptor studies were used with the following figures of merit: axial and radial uniformity, sinogram and image noise, contrast accuracy and contrast accuracy uniformity. Both scatter correction methods perform reliably in the range of number of events examined. Normalization applied after scatter correction yields better radial uniformity and fewer image artifacts.
Evaluation of attenuation and scatter correction requirements in small animal PET and SPECT imaging
Konik, Arda Bekir
Positron emission tomography (PET) and single photon emission tomography (SPECT) are two nuclear emission-imaging modalities that rely on the detection of high-energy photons emitted from radiotracers administered to the subject. The majority of these photons are attenuated (absorbed or scattered) in the body, resulting in count losses or deviations from true detection, which in turn degrades the accuracy of images. In clinical emission tomography, sophisticated correction methods are often required employing additional x-ray CT or radionuclide transmission scans. Having proven their potential in both clinical and research areas, both PET and SPECT are being adapted for small animal imaging. However, despite the growing interest in small animal emission tomography, little scientific information exists about the accuracy of these correction methods on smaller size objects, and what level of correction is required. The purpose of this work is to determine the role of attenuation and scatter corrections as a function of object size through simulations. The simulations were performed using Interactive Data Language (IDL) and a Monte Carlo based package, the Geant4 Application for Tomographic Emission (GATE). In IDL simulations, PET and SPECT data acquisition were modeled in the presence of attenuation. A mathematical emission and attenuation phantom approximating a thorax slice and slices from real PET/CT data were scaled to 5 different sizes (i.e., human, dog, rabbit, rat and mouse). The simulated emission data collected from these objects were reconstructed. The reconstructed images, with and without attenuation correction, were compared to the ideal (i.e., non-attenuated) reconstruction. Next, using GATE, scatter fraction values (the ratio of the scatter counts to the total counts) of PET and SPECT scanners were measured for various sizes of NEMA (cylindrical phantoms representing small animals and human), MOBY (realistic mouse/rat model) and XCAT (realistic human model
Energy Technology Data Exchange (ETDEWEB)
Yang, J.; Kuikka, J.T.; Vanninen, E.; Laensimies, E. [Kuopio Univ. Hospital (Finland). Dept. of Clinical Physiology and Nuclear Medicine; Kauppinen, T.; Patomaeki, L. [Kuopio Univ. (Finland). Dept. of Applied Physics
1999-05-01
Photon scatter is one of the most important factors degrading the quantitative accuracy of SPECT images, and many scatter correction methods have been proposed, including the single-isotope method proposed by us. Aim: We evaluated this scatter correction method, which improves image quality by acquiring emission and transmission data simultaneously in a single-isotope scan. Method: To evaluate the proposed scatter correction method, a contrast and linearity phantom was studied. Four female patients with fibromyalgia (FM) syndrome and four with chronic back pain (BP) were imaged. Grey-to-cerebellum (G/C) and grey-to-white matter (G/W) ratios were determined by one skilled operator for 12 regions of interest (ROIs) in each subject. Results: The linearity of the activity response was improved after the scatter correction (r=0.999). The y-intercept value of the regression line was 0.036 (p<0.0001) after scatter correction and the slope was 0.954. Pairwise correlation indicated agreement between non-scatter-corrected and scatter-corrected images. Reconstructed slices before and after scatter correction demonstrate a good correlation in the quantitative accuracy of radionuclide concentration. G/C values have significant correlation coefficients between original and corrected data. Conclusion: The transaxial images of human brain studies show that scatter correction using a single isotope in simultaneous transmission and emission tomography provides good scatter compensation. The contrasts were increased in all 12 ROIs. The scatter compensation enhanced details of physiological lesions. (orig.)
Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo
2015-08-01
In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are: (1) detector fabrication inaccuracies; (2) non-linearity and variations in the read-out electronics; and (3) optical path effects. Non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are commonly divided into calibration-based (CBNUC) and scene-based (SBNUC) algorithms. As non-uniformity drifts temporally, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not require, so SBNUC algorithms have become an essential part of infrared imaging systems. The poor robustness of SBNUC algorithms often leads to two defects, artifacts and over-correction; meanwhile, due to the complicated calculation process and large storage consumption, hardware implementation of SBNUC algorithms is difficult, especially on a Field Programmable Gate Array (FPGA) platform. The THP and GM algorithm proposed in this paper can eliminate the non-uniformity without causing these defects. The hardware implementation of the algorithm, based only on an FPGA, has two advantages: (1) low resource consumption and (2) small hardware delay (less than 20 lines). It can be transplanted to a variety of infrared detectors equipped with an FPGA image processing module, and it reduces both stripe non-uniformity and ripple non-uniformity.
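The temporal high-pass half of such an algorithm can be sketched in a few lines: each pixel's slowly varying offset (the fixed-pattern noise) is tracked with a recursive low-pass filter and subtracted, so only fast scene content passes. This is a generic THP sketch, not the paper's THP-and-GM algorithm; the grayscale-mapping half and the FPGA specifics are omitted.

```python
import numpy as np

class TemporalHighPassNUC:
    """Generic temporal high-pass non-uniformity correction (sketch):
    a per-pixel recursive IIR low-pass estimates the static fixed
    pattern, which is subtracted from each incoming frame."""

    def __init__(self, shape, time_constant=32.0):
        self.lowpass = np.zeros(shape)   # running fixed-pattern estimate
        self.m = float(time_constant)    # larger -> slower adaptation

    def process(self, frame):
        frame = frame.astype(float)
        # recursive low-pass: converges to the temporally static pattern
        self.lowpass += (frame - self.lowpass) / self.m
        corrected = frame - self.lowpass
        # restore the scene's mean level so the output stays displayable
        return corrected + frame.mean()
```

The robustness problems the abstract mentions are visible even here: a static scene is indistinguishable from fixed-pattern noise and gets absorbed into the low-pass estimate (over-correction, ghosting), which is what the grayscale-mapping component is meant to mitigate.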
Monte Carlo evaluation of scattering correction methods in 131I studies using pinhole collimator
International Nuclear Information System (INIS)
López Díaz, Adlin; San Pedro, Aley Palau; Martín Escuela, Juan Miguel; Rodríguez Pérez, Sunay; Díaz García, Angelina
2017-01-01
Scattering is quite important for image activity quantification. In order to study the scattering factors and the efficacy of three multiple-energy-window scatter correction methods during 131I thyroid studies with a pinhole collimator (5 mm hole), a Monte Carlo (MC) simulation was developed. The GAMOS MC code was used to model the gamma camera and the thyroid source geometry. First, to validate the MC gamma camera pinhole-source model, the sensitivities in air and water of the simulated and measured thyroid phantom geometries were compared. Next, simulations were performed to investigate scattering and the results of the triple energy window (TEW), double window (DW) and reduced double window (RDW) correction methods for different thyroid sizes and depth thicknesses. The discrepancies relative to the MC true events were evaluated. Results: The accuracy of the GAMOS MC model was verified and validated. The scattering contribution to the image was significant, between 27% and 40%. The discrepancies between the three multiple-energy-window correction methods were significant (between 9% and 86%). The RDW (15%) method provides discrepancies of 9-16%. Conclusions: For the simulated thyroid geometry with a pinhole, the RDW (15%) method was the most effective. (author)
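The TEW method compared here estimates the scatter under the photopeak from two narrow windows flanking it, approximating the scatter spectrum by a trapezoid. A minimal sketch (the DW/RDW variants drop or narrow the upper window; window widths in keV are illustrative):

```python
def tew_scatter_estimate(c_low, c_high, c_peak, w_low, w_high, w_peak):
    """Triple-energy-window (TEW) scatter estimate: counts per keV in the
    two flanking windows are averaged (trapezoid rule) and scaled by the
    photopeak window width to give the scatter counts under the peak."""
    scatter = (c_low / w_low + c_high / w_high) / 2.0 * w_peak
    primary = max(c_peak - scatter, 0.0)
    return primary, scatter
```

For example, with 20 counts in a 4 keV lower window, 12 counts in a 4 keV upper window, and a 28 keV photopeak window, the trapezoid gives 4 counts/keV on average and hence 112 scatter counts under the peak.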
QED corrections in deep-inelastic scattering from tensor polarized deuteron target
Gakh, G I
2001-01-01
The QED corrections in deep-inelastic scattering from a tensor-polarized deuteron target are considered. The calculations are based on the covariant parametrization of the deuteron quadrupole polarization tensor. The Drell-Yan representation in electrodynamics is used to describe the radiation of real and virtual particles.
Coulomb correction to the screening angle of the Moliere multiple scattering theory
International Nuclear Information System (INIS)
Kuraev, E.A.; Voskresenskaya, O.O.; Tarasov, A.V.
2012-01-01
The Coulomb correction to the screening angular parameter of the Moliere multiple scattering theory is found. Numerical calculations are presented in the range of nuclear charge 4 ≤ Z ≤ 82. Comparison with the Moliere result for the screening angle reveals deviations of up to 30% for sufficiently heavy target elements.
Mentrup, Detlef; Jockel, Sascha; Menser, Bernd; Neitzel, Ulrich
2016-06-01
The aim of this work was to experimentally compare the contrast improvement factors (CIFs) of a newly developed software-based scatter correction to the CIFs achieved by an antiscatter grid. To this end, three aluminium discs were placed in the lung, the retrocardial and the abdominal areas of a thorax phantom, and digital radiographs of the phantom were acquired both with and without a stationary grid. The contrast generated by the discs was measured in both images, and the CIFs achieved by grid usage were determined for each disc. Additionally, the non-grid images were processed with a scatter correction software. The contrasts generated by the discs were determined in the scatter-corrected images, and the corresponding CIFs were calculated. The CIFs obtained with the grid and with the software were in good agreement. In conclusion, the experiment demonstrates quantitatively that software-based scatter correction allows restoring the image contrast of a non-grid image in a manner comparable with an antiscatter grid.
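The figure of merit used above is easy to state explicitly. Assuming contrast is measured as the relative signal drop of a disc against its local background (a common convention; the paper's exact ROI definition is not given here), the CIF is just a ratio of contrasts:

```python
def local_contrast(roi_object_mean, roi_background_mean):
    """Relative contrast of a disc against its local background."""
    return (roi_background_mean - roi_object_mean) / roi_background_mean

def contrast_improvement_factor(c_with, c_without):
    """CIF: disc contrast with scatter reduction (grid or software)
    divided by its contrast in the unprocessed non-grid image."""
    return c_with / c_without
```

Agreement of grid-derived and software-derived CIFs at each disc position is what supports the paper's conclusion.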
Study of radiative corrections with application to the electron-neutrino scattering
International Nuclear Information System (INIS)
Oliveira, L.C.S. de.
1977-01-01
The radiative correction method that appears in quantum field theory is studied for some weak interaction processes, e.g., beta decay and muon decay. The method is then applied to calculate the transition probability for electron-neutrino scattering, using the V-A theory as a basis. The calculations of infrared and ultraviolet divergences are also discussed. (L.C.)
A library least-squares approach for scatter correction in gamma-ray tomography
International Nuclear Information System (INIS)
Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro
2015-01-01
Scattered radiation is known to lead to distortion in reconstructed images in Computed Tomography (CT). The effects of scattered radiation are especially more pronounced in non-scanning, multiple source systems which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for this. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of transmission and scatter components in any given detector in the UoB gamma-ray tomography system. - Highlights: • A LLS approach is proposed for scatter correction in gamma-ray tomography. • The validity of the LLS approach is tested through experiments. • Gain shift and pulse pile-up affect the accuracy of the LLS approach. • The LLS approach successfully estimates scatter profiles
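The numerical core of a library least-squares approach is a linear decomposition: the measured detector spectrum is modelled as a combination of pre-recorded library spectra (for instance a pure transmission spectrum and one or more scatter spectra), and the mixing coefficients are found by least squares. A minimal sketch with an illustrative two-component library:

```python
import numpy as np

def lls_decompose(measured_spectrum, library):
    """Library least-squares (LLS) decomposition: fit the measured
    spectrum as a linear combination of library spectra and return the
    component coefficients (ordinary least squares; the real system
    would use calibrated libraries per detector)."""
    A = np.column_stack(library)          # (n_channels, n_components)
    coeffs, *_ = np.linalg.lstsq(A, measured_spectrum, rcond=None)
    return coeffs
```

The highlights note that gain shift and pulse pile-up distort the measured spectrum relative to the libraries, which is exactly the regime where such a linear fit loses accuracy.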
Library based x-ray scatter correction for dedicated cone beam breast CT
International Nuclear Information System (INIS)
Shi, Linxi; Zhu, Lei; Vedantham, Srinivasan; Karellas, Andrew
2016-01-01
Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast-loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models with different sizes using the GEANT4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of general population. Spatial nonuniformity (SNU) and contrast to signal deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors’ method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors’ method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal
International Nuclear Information System (INIS)
Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo; Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro; Kato, Rikio
2005-01-01
An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)
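The IBSC pipeline described above can be outlined in a few lines: convolve the attenuation-corrected image with a scatter function, multiply by a scatter-fraction term, and subtract. In this sketch the kernel and the scatter fraction are generic placeholders for the calibrated quantities in the paper:

```python
import numpy as np

def ibsc_correct(img_ac, scatter_kernel, scatter_fraction):
    """Outline of image-based scatter correction (IBSC): the scatter
    component is estimated as (image * scatter kernel) x scatter
    fraction and subtracted from the attenuation-corrected image.
    FFT convolution with periodic boundaries is adequate for a sketch."""
    kernel = np.fft.ifftshift(scatter_kernel)   # centre the kernel
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img_ac) * np.fft.fft2(kernel)))
    scatter_component = scatter_fraction * blurred
    return np.clip(img_ac - scatter_component, 0.0, None)
```

Unlike TEW, no extra energy windows are acquired; everything is derived from the reconstructed image itself, which is what makes the method "image-based".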
International Nuclear Information System (INIS)
Ahmadian, Alireza; Ay, Mohammad R.; Sarkar, Saeed; Bidgoli, Javad H.; Zaidi, Habib
2008-01-01
Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis, as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium with high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μmap), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, segmentation of high-CT-number objects using combined region- and boundary-based segmentation; and second, classification of the objects into bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of generated μmaps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique in
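The final step of the pipeline above, the piecewise (bilinear) conversion of CT numbers to 511 keV linear attenuation coefficients, can be sketched directly. The breakpoints and slopes below are representative textbook-style values for illustration, not the calibration used in the study:

```python
import numpy as np

def hu_to_mu_511(hu):
    """Piecewise (bilinear) conversion of CT numbers (HU) to linear
    attenuation coefficients at 511 keV, as used in CT-based attenuation
    correction. Illustrative slopes: water-like scaling below 0 HU, a
    flatter bone segment above it."""
    hu = np.asarray(hu, dtype=float)
    mu_water = 0.096                      # cm^-1 at 511 keV (approximate)
    mu = np.where(
        hu <= 0,
        mu_water * (1.0 + hu / 1000.0),   # air (-1000 HU) -> 0, water -> mu_water
        mu_water + hu * 4.0e-5,           # reduced slope for bone-like voxels
    )
    return np.clip(mu, 0.0, None)
```

The SCC step matters precisely because this curve assigns bone-segment slopes to any high-HU voxel: unsubstituted contrast medium would be mapped as if it were bone, inflating μ and overcorrecting the PET image.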
A library least-squares approach for scatter correction in gamma-ray tomography
Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro
2015-03-01
Scattered radiation is known to lead to distortion in reconstructed images in Computed Tomography (CT). The effects of scattered radiation are especially more pronounced in non-scanning, multiple source systems which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for this. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of transmission and scatter components in any given detector in the UoB gamma-ray tomography system.
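The core of a library least-squares approach is to model each measured detector spectrum as a linear combination of pre-characterized library spectra and solve for the weights. A minimal sketch with two invented library components (one transmission, one scatter) follows; the spectral shapes are illustrative, not those of the UoB tomograph.

```python
import numpy as np

channels = np.arange(256)
# Assumed library spectra: a Gaussian photopeak (transmission component)
# and a decaying Compton continuum (scatter component).
lib_transmission = np.exp(-0.5 * ((channels - 180) / 8.0) ** 2)
lib_scatter = np.exp(-channels / 60.0)

# Synthetic measurement: 0.7 parts transmission + 0.3 parts scatter.
measured = 0.7 * lib_transmission + 0.3 * lib_scatter

# LLS step: solve measured ~= A @ coeffs in the least-squares sense.
A = np.column_stack([lib_transmission, lib_scatter])
coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
print(coeffs)  # recovers the transmission and scatter amounts, ~[0.7, 0.3]
```

The estimated scatter component (here `coeffs[1] * lib_scatter`) can then be subtracted from the measured spectrum before reconstruction.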
Two-photon exchange corrections in elastic lepton-proton scattering
Energy Technology Data Exchange (ETDEWEB)
Tomalak, Oleksandr; Vanderhaeghen, Marc [Johannes Gutenberg Universitaet Mainz (Germany)
2015-07-01
The measured value of the proton charge radius from the Lamb shift of energy levels in muonic hydrogen is in strong contradiction, by 7-8 standard deviations, with the value obtained from electronic hydrogen spectroscopy and the value extracted from unpolarized electron-proton scattering data. The dominant unaccounted higher order contribution in scattering experiments corresponds to the two photon exchange (TPE) diagram. The elastic contribution to the TPE correction was studied with the fixed momentum transfer dispersion relations and compared to the hadronic model with off-shell photon-nucleon vertices. A dispersion relation formalism with one subtraction was proposed. Theoretical predictions of the TPE elastic contribution to the unpolarized elastic electron-proton scattering and polarization transfer observables in the low momentum transfer region were made. The TPE formalism was generalized to the case of massive leptons and the elastic contribution was evaluated for the kinematics of upcoming muon-proton scattering experiment (MUSE).
Lee, Ho; Fahimian, Benjamin P.; Xing, Lei
2017-03-01
This paper proposes a binary moving-blocker (BMB)-based technique for scatter correction in cone-beam computed tomography (CBCT). In concept, a beam blocker consisting of lead strips, mounted in front of the x-ray tube, moves rapidly in and out of the beam during a single gantry rotation. The projections are acquired in alternating phases of blocked and unblocked cone beams, where the blocked phase results in a stripe pattern in the width direction. To derive the scatter map from the blocked projections, 1D B-Spline interpolation/extrapolation is applied by using the detected information in the shaded regions. The scatter map of the unblocked projections is corrected by averaging two scatter maps that correspond to their adjacent blocked projections. The scatter-corrected projections are obtained by subtracting the corresponding scatter maps from the projection data and are utilized to generate the CBCT image by a compressed-sensing (CS)-based iterative reconstruction algorithm. Catphan504 and pelvis phantoms were used to evaluate the method’s performance. The proposed BMB-based technique provided an effective method to enhance the image quality by suppressing scatter-induced artifacts, such as ring artifacts around the bowtie area. Compared to CBCT without a blocker, the spatial nonuniformity was reduced from 9.1% to 3.1%. The root-mean-square error of the CT numbers in the regions of interest (ROIs) was reduced from 30.2 HU to 3.8 HU. In addition to high resolution, comparable to that of the benchmark image, the CS-based reconstruction also led to a better contrast-to-noise ratio in seven ROIs. The proposed technique enables complete scatter-corrected CBCT imaging with width-truncated projections and allows reducing the acquisition time to approximately half. This work may have significant implications for image-guided or adaptive radiation therapy, where CBCT is often used.
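The scatter-map step, estimating scatter across the open regions from the signal detected behind the blocker strips, can be sketched in one dimension. Linear interpolation stands in for the paper's 1D B-spline interpolation/extrapolation, and the scatter field and strip positions are invented for illustration.

```python
import numpy as np

ncols = 512
cols = np.arange(ncols)
true_scatter = 40.0 + 0.05 * cols            # assumed slowly varying scatter field
strip_centers = np.arange(16, ncols, 48)     # assumed blocker strip positions

# Behind each strip, the detected signal is (mostly) scatter.
samples = true_scatter[strip_centers]

# Interpolate along the detector width to fill in the unblocked regions.
scatter_map = np.interp(cols, strip_centers, samples)

# The corrected projection is then: projection - scatter_map (per column).
```

Because scatter distributions are smooth, even a sparse set of strip samples recovers the interior of the map accurately; extrapolation beyond the outermost strips is where B-splines (or any model) carry the most uncertainty.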
Matheoud, Roberta; Della Monica, Patrizia; Secco, Chiara; Loi, Gianfranco; Krengli, Marco; Inglese, Eugenio; Brambilla, Marco
2011-01-01
The aim of this work is to evaluate the role of different amounts of attenuation and scatter on FDG-PET image volume segmentation using a contrast-oriented method based on the target-to-background (TB) ratio and target dimensions. A phantom study was designed employing 3 phantom sets, which provided a clinical range of attenuation and scatter conditions, equipped with 6 spheres of different volumes (0.5-26.5 ml). The phantoms were: (1) the Hoffman 3-dimensional brain phantom, (2) a modified International Electrotechnical Commission (IEC) phantom with an annular ring of water bags of 3 cm thickness fit over the IEC phantom, and (3) a modified IEC phantom with an annular ring of water bags of 9 cm. The phantom cavities were filled with a solution of FDG at 5.4 kBq/ml activity concentration, and the spheres with activity concentration ratios of about 16, 8, and 4 times the background activity concentration. Images were acquired with a Biograph 16 HI-REZ PET/CT scanner. Thresholds (TS) were determined as a percentage of the maximum intensity in the cross-section area of the spheres. To reduce statistical fluctuations, a nominal maximum value was calculated as the mean of all voxels > 95% of the maximum. To find the TS value that yielded an area A best matching the true value, the cross sections were auto-contoured in the attenuation-corrected slices, varying TS in steps of 1% until the area so determined differed by less than 10 mm² from its known physical value. Multiple regression methods were used to derive an adaptive thresholding algorithm and to test its dependence on different conditions of attenuation and scatter. The errors of scatter and attenuation correction increased with increasing amounts of attenuation and scatter in the phantoms. Despite these increasing inaccuracies, the PET threshold segmentation algorithm was not influenced by the different conditions of attenuation and scatter. The test of the hypothesis of coincident regression lines for the three phantoms used
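The threshold search described above, lowering TS (a percentage of the maximum voxel value) in 1% steps until the auto-contoured area matches the known physical area, can be sketched on a synthetic sphere cross-section. The radial intensity profile and the pixel tolerance are invented for illustration.

```python
import numpy as np

# Synthetic cross-section: intensity peaks at the center and falls off
# linearly, standing in for a blurred sphere slice.
yy, xx = np.mgrid[-64:64, -64:64]
r = np.hypot(xx, yy)
img = np.clip(1.0 - r / 20.0, 0.0, None)

# "Known physical value": the object occupies r < 10 pixels.
true_area = np.count_nonzero(r < 10.0)

ts = None
for percent in range(99, 0, -1):         # TS in steps of 1% of the maximum
    area = np.count_nonzero(img > percent / 100.0 * img.max())
    if abs(area - true_area) <= 10:      # tolerance in pixels (~10 mm^2)
        ts = percent
        break
print(ts)  # 50: the 50% threshold recovers the true cross-section area here
```

In the study this scalar search is repeated per sphere, TB ratio, and phantom, and the resulting TS values feed the multiple regression that defines the adaptive thresholding algorithm.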
Virtual hadronic and heavy-fermion O(α²) corrections to Bhabha scattering
Energy Technology Data Exchange (ETDEWEB)
Actis, Stefano [Inst. fuer Theoretische Physik E, RWTH Aachen (Germany); Czakon, Michal [Wuerzburg Univ. (Germany). Inst. fuer Theoretische Physik und Astrophysik; Uniwersytet Slaski, Katowice (Poland). Inst. of Physics and Chemistry of Metals]; Gluza, Janusz [Uniwersytet Slaski, Katowice (Poland). Inst. of Physics and Chemistry of Metals]; Riemann, Tord [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)
2008-07-15
Effects of vacuum polarization by hadronic and heavy-fermion insertions were the last unknown two-loop QED corrections to high-energy Bhabha scattering. Here we describe the corrections in detail and explore their numerical influence. The hadronic contributions to the virtual O(α²) QED corrections to the Bhabha-scattering cross-section are evaluated using dispersion relations and computing the convolution of hadronic data with perturbatively calculated kernel functions. The technique of dispersion integrals is also employed to derive the virtual O(α²) corrections generated by muon-, tau- and top-quark loops in the small electron-mass limit for arbitrary values of the internal-fermion masses. At a meson factory with 1 GeV center-of-mass energy the complete effect of hadronic and heavy-fermion corrections amounts to less than 0.5 per mille and reaches, at 10 GeV, up to about 2 per mille. At the Z resonance it amounts to 2.3 per mille at 3 degrees; overall, hadronic corrections are less than 4 per mille. For ILC energies (500 GeV or above), the combined effect of hadrons and heavy fermions becomes 6 per mille at 3 degrees; hadrons contribute less than 20 per mille in the whole angular region. (orig.)
International Nuclear Information System (INIS)
Boros, C.
1999-01-01
Recent measurement of the structure function F_2^ν in neutrino deep inelastic scattering allows us to compare structure functions measured in neutrino and charged-lepton scattering for the first time with reasonable precision. The comparison between neutrino and muon structure functions made by the CCFR Collaboration indicates that there is a discrepancy between these structure functions at small Bjorken x values. In this talk I examine two effects which might account for this experimental discrepancy: nuclear shadowing corrections for neutrinos and contributions from strange and anti-strange quarks. Copyright (1999) World Scientific Publishing Co. Pte. Ltd
SPAM-assisted partial volume correction algorithm for PET
International Nuclear Information System (INIS)
Cho, Sung Il; Kang, Keon Wook; Lee, Jae Sung; Lee, Dong Soo; Chung, June Key; Soh, Kwang Sup; Lee, Myung Chul
2000-01-01
A probabilistic atlas of the human brain (Statistical Probability Anatomical Maps: SPAM) was developed by the International Consortium for Brain Mapping (ICBM). It provides a good framework for calculating volumes of interest (VOI) that accounts for the statistical variability of the human brain in many fields of brain imaging. We show that more exact quantification of the counts in a VOI can be obtained by using SPAM in the correction of the partial volume effect for a simulated PET image. The MRI of a patient with dementia was segmented into gray matter and white matter, and these were then smoothed to PET resolution. A simulated PET image was made by adding one third of the smoothed white matter to the smoothed gray matter. The spillover effect and partial volume effect were corrected for this simulated PET image with the aid of the segmented and smoothed MR images. The images were spatially normalized to the average brain MRI atlas of ICBM, and were multiplied by the probabilities of 98 VOIs of the SPAM images of the Montreal Neurological Institute. After the correction of the partial volume effect, the counts of the frontal, parietal, temporal, and occipital lobes were increased by 38±6%, while those of the hippocampus and amygdala increased by 4±3%. By calculating the counts in a VOI using the product of the probability of the SPAM images and the counts in the simulated PET image, the counts increase and become closer to the true values. SPAM-assisted partial volume correction is useful for quantification of VOIs in PET images.
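The smoothing-based partial volume correction underlying this kind of study can be sketched in one dimension: the segmented gray-matter mask is smoothed to scanner resolution, and the observed counts are divided by the smoothed mask inside gray matter. This is a generic two-compartment sketch with invented numbers, not the paper's SPAM-weighted implementation.

```python
import numpy as np

def gaussian_kernel(sigma, radius=20):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

gm = np.zeros(200)
gm[80:120] = 1.0                      # gray-matter segment from "MRI"
true_uptake = 4.0                     # assumed uniform activity in gray matter

# Smooth the mask to PET resolution; the observed image is attenuated by
# spill-out at the segment edges.
gm_smoothed = np.convolve(gm, gaussian_kernel(3.0), mode="same")
observed = true_uptake * gm_smoothed

# Recover true uptake by dividing out the smoothed mask where it is reliable.
corrected = np.where(gm_smoothed > 0.1,
                     observed / np.maximum(gm_smoothed, 1e-12), 0.0)
print(corrected[100])  # 4.0: the true uptake is recovered inside gray matter
```

In the paper, counts are additionally weighted by the SPAM probability maps per VOI, which is what turns this voxelwise correction into VOI-level quantification.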
Flesia, C.; Schwendimann, P.
1992-01-01
The contribution of multiple scattering to the lidar signal depends on the optical depth tau. Therefore, lidar analysis based on the assumption that multiple scattering can be neglected is limited to cases characterized by low values of the optical depth (tau less than or equal to 0.1) and hence excludes scattering from most clouds. Moreover, all inversion methods relating the lidar signal to number densities and particle sizes must be modified, since multiple scattering affects the direct analysis. The essential requirements of a realistic model for lidar measurements which includes multiple scattering and which can be applied to practical situations are as follows. (1) What is required is not merely a correction term or a rough approximation describing the results of a particular experiment, but a general theory of multiple scattering tying together the relevant physical parameters we seek to measure. (2) An analytical generalization of the lidar equation which can be applied in the case of a realistic aerosol is required. A purely analytical formulation is important in order to avoid the convergence and stability problems which, in a numerical approach, arise from the large number of events that have to be taken into account in the presence of large optical depth and/or strong experimental noise.
Channel Parameter Estimation for Scatter Cluster Model Using Modified MUSIC Algorithm
Directory of Open Access Journals (Sweden)
Jinsheng Yang
2012-01-01
Recently, scatter cluster models which precisely evaluate the performance of wireless communication systems have been proposed in the literature. However, the conventional SAGE algorithm does not work for these scatter cluster-based models because it performs poorly when the transmitted signals are highly correlated. In this paper, we estimate the time of arrival (TOA), the direction of arrival (DOA), and the Doppler frequency for a scatter cluster model by a modified multiple signal classification (MUSIC) algorithm. Using the space-time characteristics of the multiray channel, the proposed algorithm combines temporal filtering techniques and spatial smoothing techniques to isolate and estimate the incoming rays. The simulation results indicate that the proposed algorithm has lower complexity and is less time-consuming in a dense multipath environment than the SAGE algorithm. Furthermore, estimation performance improves with the number of receive-array elements and the sample length. Thus, the problem of channel parameter estimation for the scatter cluster model can be effectively addressed with the proposed modified MUSIC algorithm.
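The key difficulty named above, MUSIC failing on highly correlated (coherent) rays, and the standard remedy, spatial smoothing, can be sketched for DOA estimation on a uniform linear array. Array geometry, angles, and noise level are invented; this is textbook forward spatial smoothing plus MUSIC, not the paper's full space-time estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
M, m, K, T = 10, 6, 2, 400          # sensors, subarray size, sources, snapshots
doas = np.deg2rad([-20.0, 30.0])    # assumed true directions

def steering(theta, n):
    # Half-wavelength-spaced uniform linear array: a_k = exp(-j*pi*k*sin(theta))
    return np.exp(-1j * np.pi * np.arange(n)[:, None] * np.sin(theta))

A = steering(np.asarray(doas), M)
s = rng.standard_normal(T) + 1j * rng.standard_normal(T)
S = np.vstack([s, 0.8 * s])         # fully coherent rays (same waveform)
X = A @ S + 0.01 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))

# Forward spatial smoothing: average covariances of M - m + 1 subarrays to
# restore the rank lost to coherence.
R = X @ X.conj().T / T
Rss = sum(R[k:k + m, k:k + m] for k in range(M - m + 1)) / (M - m + 1)

# MUSIC on the smoothed covariance: noise subspace = smallest eigenvectors.
w, V = np.linalg.eigh(Rss)
En = V[:, : m - K]
grid = np.deg2rad(np.arange(-90.0, 90.0, 0.1))
P = 1.0 / np.linalg.norm(En.conj().T @ steering(grid, m), axis=0) ** 2

# Pick the K largest local maxima of the pseudo-spectrum.
peaks = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
peaks = sorted(peaks, key=lambda i: P[i], reverse=True)[:K]
est = sorted(np.rad2deg(grid[peaks]))
print(est)  # close to [-20, 30] despite the coherent sources
```

Without the smoothing step (i.e., running MUSIC on `R` directly), the rank-one signal subspace would merge the two coherent rays into a single spurious peak.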
Létourneau, Pierre-David; Wu, Ying; Papanicolaou, George; Garnier, Josselin; Darve, Eric
2016-01-01
We present a wideband fast algorithm capable of accurately computing the full numerical solution of the problem of acoustic scattering of waves by multiple finite-sized bodies such as spherical scatterers in three dimensions. By full solution, we
International Nuclear Information System (INIS)
Ito, Hiroshi; Iida, Hidehiro; Kinoshita, Toshibumi; Hatazawa, Jun; Okudera, Toshio; Uemura, Kazuo
1999-01-01
The transmission-dependent convolution subtraction method, one of the methods for scatter correction in SPECT, was applied to the assessment of CBF using SPECT and I-123-IMP. The effects of scatter correction on the regional distribution of CBF were evaluated on a pixel-by-pixel basis by means of an anatomic standardization technique. SPECT scans were performed on six healthy men. Image reconstruction was carried out with and without scatter correction. All reconstructed images were globally normalized for the radioactivity of each pixel and transformed into a standard brain anatomy. After anatomic standardization, average SPECT images were calculated for the scatter-corrected and uncorrected groups, and the groups were compared on a pixel-by-pixel basis. In the scatter-uncorrected group, a significant overestimation of CBF was observed in the deep cerebral white matter, pons, thalamus, putamen, hippocampal region and cingulate gyrus as compared with the scatter-corrected group. A significant underestimation was observed in all neocortical regions, especially in the occipital and parietal lobes, and in the cerebellar cortex. The regional distribution of CBF obtained by scatter-corrected SPECT was similar to that obtained by O-15 water PET. Scatter correction is needed for the assessment of CBF using SPECT. (author)
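The convolution subtraction idea behind this method can be sketched in one dimension: scatter is estimated by convolving the observed projection with a scatter kernel and scaling by a scatter fraction, then subtracted. The mono-exponential kernel and the fraction k = 0.3 are illustrative assumptions (the transmission-dependent variant makes k vary with the measured transmission).

```python
import numpy as np

def exp_kernel(slope=0.1, radius=30):
    # Assumed mono-exponential scatter response, normalized to unit area.
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-slope * np.abs(x))
    return k / k.sum()

kernel = exp_kernel()
true_proj = np.zeros(128)
true_proj[50:78] = 100.0                  # ideal scatter-free projection
observed = true_proj + 0.3 * np.convolve(true_proj, kernel, mode="same")

# Correction: estimate scatter from the *observed* data and subtract.
scatter_est = 0.3 * np.convolve(observed, kernel, mode="same")
corrected = observed - scatter_est
```

Because the scatter estimate is built from the already-contaminated data, a small second-order residual of order k² remains; iterating the subtraction reduces it further.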
Energy Technology Data Exchange (ETDEWEB)
Carloni Calame, C. [Southampton Univ. (United Kingdom). School of Physics; Czyz, H.; Gluza, J.; Gunia, M. [Silesia Univ., Katowice (Poland). Dept. of Field Theory and Particle Physics; Montagna, G. [Pavia Univ. (Italy). Dipt. di Fisica Nucleare e Teorica; INFN, Sezione di Pavia (Italy); Nicrosini, O.; Piccinini, F. [INFN, Sezione di Pavia (Italy); Riemann, T. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Worek, M. [Wuppertal Univ. (Germany). Fachbereich C Physik
2011-07-15
Virtual fermionic N_f=1 and N_f=2 contributions to Bhabha scattering are combined with realistic real corrections at next-to-next-to-leading order in QED. The virtual corrections are determined by the package BHANNLOHF, and the real corrections with the Monte Carlo generators BHAGEN-1PH, HELAC-PHEGAS and EKHARA. Numerical results are discussed at the energies of, and with realistic cuts used at, the φ factory DAΦNE, the B factories PEP-II and KEK, and the charm/τ factory BEPC II. We compare these complete calculations with the approximate ones realized in the generator BabaYaga@NLO, used at meson factories to evaluate their luminosities. For realistic reference event selections we find agreement for the NNLO leptonic and hadronic corrections within 0.07% or better and conclude that they are well accounted for in the generator by comparison with the present experimental accuracy. (orig.)
International Nuclear Information System (INIS)
Stodilka, Robert Z.; Msaki, Peter; Prato, Frank S.; Nicholson, Richard L.; Kemp, B.J.
1998-01-01
Mounting evidence indicates that scatter and attenuation are major confounds to objective diagnosis of brain disease by quantitative SPECT. There is considerable debate, however, as to the relative importance of scatter correction (SC) and attenuation correction (AC), and how they should be implemented. The efficacy of SC and AC for 99mTc brain SPECT was evaluated using a two-compartment fully tissue-equivalent anthropomorphic head phantom. Four correction schemes were implemented: uniform broad-beam AC, non-uniform broad-beam AC, uniform SC+AC, and non-uniform SC+AC. SC was based on non-stationary deconvolution scatter subtraction, modified to incorporate a priori knowledge of either the head contour (uniform SC) or the transmission map (non-uniform SC). The quantitative accuracy of the correction schemes was evaluated in terms of contrast recovery, relative quantification (cortical:cerebellar activity), uniformity ((coefficient of variation of 230 macro-voxels) × 100%), and bias (relative to a calibration scan). Our results were: uniform broad-beam (μ = 0.12 cm⁻¹) AC (the most popular correction): 71% contrast recovery, 112% relative quantification, 7.0% uniformity, +23% bias. Non-uniform broad-beam (soft tissue μ = 0.12 cm⁻¹) AC: 73%, 114%, 6.0%, +21%, respectively. Uniform SC+AC: 90%, 99%, 4.9%, +12%, respectively. Non-uniform SC+AC: 93%, 101%, 4.0%, +10%, respectively. SC+AC achieved the best quantification; however, non-uniform corrections produced only small improvements over their uniform counterparts. SC+AC was found to be superior to AC alone; this advantage is distinct and consistent across all four quantification indices. (author)
Forward two-photon exchange in elastic lepton-proton scattering and hyperfine-splitting correction
Energy Technology Data Exchange (ETDEWEB)
Tomalak, Oleksandr [Johannes Gutenberg Universitaet, Institut fuer Kernphysik and PRISMA Cluster of Excellence, Mainz (Germany)
2017-08-15
We relate the forward two-photon exchange (TPE) amplitudes to integrals of the inclusive lepton-proton scattering cross sections. These relations yield an alternative way to evaluate the TPE correction to hyperfine splitting (HFS) in hydrogen-like atoms; assuming the Burkhardt-Cottingham sum rule, the result is equivalent to that of the standard approach (Iddings, Drell and Sullivan). For the evaluation of individual effects (e.g., the elastic contribution) our approach yields a distinct result. We compare both methods numerically on the examples of the elastic contribution and the full TPE correction to HFS in electronic and muonic hydrogen. (orig.)
International Nuclear Information System (INIS)
Defrise, Michel; Rezaei, Ahmadreza; Nuyts, Johan
2014-01-01
The maximum likelihood attenuation correction factors (MLACF) algorithm has been developed to calculate the maximum-likelihood estimate of the activity image and the attenuation sinogram in time-of-flight (TOF) positron emission tomography, using only emission data without prior information on the attenuation. We consider the case of a Poisson model of the data, in the absence of scatter or random background. In this case the maximization with respect to the attenuation factors can be achieved in a closed form and the MLACF algorithm works by updating the activity. Despite promising numerical results, the convergence of this algorithm has not been analysed. In this paper we derive the algorithm and demonstrate that the MLACF algorithm monotonically increases the likelihood, is asymptotically regular, and that the limit points of the iteration are stationary points of the likelihood. Because the problem is not convex, however, the limit points might be saddle points or local maxima. To obtain some empirical insight into the latter question, we present data obtained by applying MLACF to 2D simulated TOF data, using a large number of iterations and different initializations. (paper)
Compton scatter correction in case of multiple crosstalks in SPECT imaging.
Sychra, J J; Blend, M J; Jobe, T H
1996-02-01
A strategy for Compton scatter correction in brain SPECT images was proposed recently. It assumes that two radioisotopes are used and that a significant portion of photons of one radioisotope (for example, Tc99m) spills over into the low energy acquisition window of the other radioisotope (for example, Tl201). We are extending this approach to cases of several radioisotopes with mutual, multiple and significant photon spillover. In the example above, one may correct not only the Tl201 image but also the Tc99m image corrupted by the Compton scatter originating from the small component of high energy Tl201 photons. The proposed extension is applicable to other anatomical domains (cardiac imaging).
Corrections on energy spectrum and scattering for fast neutron radiography at NECTAR facility
International Nuclear Information System (INIS)
Liu, Shu-Quan; Bücherl, Thomas; Li, Hang; Zou, Yu-Bin; Lu, Yuan-Rong; Guo, Zhi-Yu
2013-01-01
Distortions caused by the neutron spectrum and scattered neutrons are major problems in fast neutron radiography and should be considered for improving the image quality. This paper puts emphasis on the removal of these image distortions and deviations for fast neutron radiography performed at the NECTAR facility of the research reactor FRM-II at the Technische Universität München (TUM), Germany. The NECTAR energy spectrum is analyzed and established to correct the influence of the neutron spectrum, and the Point Scattered Function (PScF) simulated with the Monte Carlo program MCNPX is used to evaluate scattering effects from the object and improve image quality. Good analysis results confirm the effectiveness of the above two corrections. (authors)
Scattering at low energies by potentials containing power-law corrections to the Coulomb interaction
International Nuclear Information System (INIS)
Kuitsinskii, A.A.
1986-01-01
The low-energy asymptotic behavior is found for the phase shifts and scattering amplitudes in the case of central potentials which decrease at infinity as n/r + ar^(-α), α > 1. In problems of atomic and nuclear physics one is generally interested in collisions of clusters consisting of several charged particles. The effective interaction potential of such clusters contains long-range power-law corrections to the Coulomb interaction of the type presented here.
On the radiative corrections of deep inelastic scattering of muon neutrino on nucleon
International Nuclear Information System (INIS)
So Sang Guk
1986-01-01
The radiative corrections to the deep inelastic scattering process ν_μ p → μ N are considered. A matrix element that takes the one-photon-exchange Feynman diagrams into account at high momentum transfer is used. Based on the calculation of this matrix element, one can obtain the matrix element for the given process. It is shown that the effective cross section taking one-photon exchange into account is obtained. (author)
International Nuclear Information System (INIS)
Bur'yan, V.I.; Kozlova, L.V.; Kuzhil', A.S.; Shikalov, V.F.
2005-01-01
The development of algorithms for correction of self-powered neutron detector (SPND) inertia is motivated by the need to increase the speed of response of in-core instrumentation systems (ICIS). A faster ICIS response will permit monitoring of fast transient processes in the core in real time and, in the longer term, the use of rhodium SPND signals for emergency protection functions based on local parameters. In this paper it is proposed to use a mathematical model of neutron flux measurement by SPNDs, in integral form, to construct the correction algorithms. This approach is, in this case, the most convenient for deriving recurrent algorithms for flux estimation. Results are presented comparing estimates of neutron flux and reactivity from ionization chamber readings with SPND signals corrected by the proposed algorithms.
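The inertia problem and its recurrent correction can be sketched with a toy model: the SPND is treated as a first-order lag with a time constant standing in for the Rh-104 beta-decay delay, and the flux is recovered by inverting the discrete model sample by sample. Real rhodium SPNDs also have a prompt signal component and two decay branches, so this one-pole model is an assumption for illustration only.

```python
import numpy as np

tau, dt = 42.0, 1.0                       # assumed time constant (s), sample period (s)
t = np.arange(0.0, 300.0, dt)
flux = np.where(t < 100.0, 1.0, 1.5)      # step transient in neutron flux

# Forward model (detector inertia): I[k+1] = I[k] + dt/tau * (flux[k] - I[k])
current = np.zeros_like(flux)
for k in range(len(t) - 1):
    current[k + 1] = current[k] + dt / tau * (flux[k] - current[k])

# Recurrent inverse: each flux sample follows from two consecutive current
# samples, so the corrected signal tracks the step without the ~42 s lag.
est = current[:-1] + tau * np.diff(current) / dt
```

In this noise-free model the inversion is exact; with measurement noise, the difference term amplifies high frequencies, which is why practical algorithms add filtering (e.g., a Kalman-type recurrent estimator).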
International Nuclear Information System (INIS)
Murase, Kenya; Itoh, Hisao; Mogami, Hiroshi; Ishine, Masashiro; Kawamura, Masashi; Iio, Atsushi; Hamamoto, Ken
1987-01-01
A computer-based simulation method was developed to assess the relative effectiveness and availability of various attenuation compensation algorithms in single photon emission computed tomography (SPECT). The effects of the nonuniformity of the attenuation coefficient distribution in the body, of errors in determining the body contour, and of statistical noise on reconstruction accuracy and computation time were studied for each algorithm. The algorithms were classified into three groups: precorrection, postcorrection, and iterative correction methods. Furthermore, a hybrid method was devised by combining several methods. This study will be useful for understanding the characteristics, limitations, and strengths of the algorithms and for finding a practical correction method for photon attenuation in SPECT. (orig.)
International Nuclear Information System (INIS)
Stevendaal, U. van; Schlomka, J.-P.; Harding, A.; Grass, M.
2003-01-01
Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter form factor of the investigated object. Reconstruction from coherently scattered x-rays is commonly done using algebraic reconstruction techniques (ART). In this paper, we propose an alternative approach based on filtered back-projection. For the first time, a three-dimensional (3D) filtered back-projection technique using curved 3D back-projection lines is applied to two-dimensional coherent scatter projection data. The proposed algorithm is tested with simulated projection data as well as with projection data acquired with a demonstrator setup similar to a multi-line CT scanner geometry. While yielding comparable image quality as ART reconstruction, the modified 3D filtered back-projection algorithm is about two orders of magnitude faster. In contrast to iterative reconstruction schemes, it has the advantage that subfield-of-view reconstruction becomes feasible. This allows a selective reconstruction of the coherent-scatter form factor for a region of interest. The proposed modified 3D filtered back-projection algorithm is a powerful reconstruction technique to be implemented in a CSCT scanning system. This method gives coherent scatter CT the potential of becoming a competitive modality for medical imaging or nondestructive testing
Attenuation correction of myocardial SPECT by scatter-photopeak window method in normal subjects
International Nuclear Information System (INIS)
Okuda, Koichi; Nakajima, Kenichi; Matsuo, Shinro; Kinuya, Seigo; Motomura, Nobutoku; Kubota, Masahiro; Yamaki, Noriyasu; Maeda, Hisato
2009-01-01
The segmentation with scatter and photopeak window data using attenuation correction (SSPAC) method can provide a patient-specific non-uniform attenuation coefficient map using only photopeak and scatter images, without X-ray computed tomography (CT). The purpose of this study was to evaluate the performance of attenuation correction (AC) by the SSPAC method on a normal myocardial perfusion database. A total of 32 sets of exercise-rest myocardial images with Tc-99m-sestamibi were acquired in both photopeak (140 keV±10%) and scatter (7% of the lower side of the photopeak window) energy windows. Myocardial perfusion databases for the SSPAC method and for non-AC (NC) were created from 15 female and 17 male subjects with a low likelihood of cardiac disease using quantitative perfusion SPECT software. Segmental myocardial counts of a 17-segment model from these databases were compared using a paired t test. The AC average myocardial perfusion count was significantly higher than that of NC in the septal and inferior regions (P<0.02). On the contrary, the AC average count was significantly lower in the anterolateral and apical regions (P<0.01). The coefficient of variation of the AC count in the mid, apical and apex regions was lower than that of NC. The SSPAC method can improve average myocardial perfusion uptake in the septal and inferior regions and provide uniform distribution of myocardial perfusion. The SSPAC method could be a practical method of attenuation correction without X-ray CT. (author)
MUSIC ALGORITHM FOR LOCATING POINT-LIKE SCATTERERS CONTAINED IN A SAMPLE ON FLAT SUBSTRATE
Institute of Scientific and Technical Information of China (English)
Dong Heping; Ma Fuming; Zhang Deyue
2012-01-01
In this paper, we consider a MUSIC algorithm for locating point-like scatterers contained in a sample on a flat substrate. Based on an asymptotic expansion of the scattering amplitude proposed by Ammari et al., the reconstruction problem can be reduced to the calculation of the Green function corresponding to the background medium. In addition, we use an explicit formulation of the Green function in the MUSIC algorithm to simplify the calculation when the cross-section of the sample is a half-disc. Numerical experiments are included to demonstrate the feasibility of this method.
International Nuclear Information System (INIS)
Prettyman, T.H.; Sprinkle, J.K. Jr.; Sheppard, G.A.
1993-01-01
With transmission-corrected gamma-ray nondestructive assay instruments such as the Segmented Gamma Scanner (SGS) and the Tomographic Gamma Scanner (TGS) that is currently under development at Los Alamos National Laboratory, the amount of gamma-ray emitting material can be underestimated for samples in which the emitting material consists of particles or lumps of highly attenuating material. This problem is encountered in the assay of uranium and plutonium-bearing samples. To correct for this source of bias, we have developed a least-squares algorithm that uses transmission-corrected assay results for several emitted energies and a weighting function to account for statistical uncertainties in the assay results. The variation of effective lump size in the fitted model is parameterized; this allows the correction to be performed for a wide range of lump-size distributions. It may be possible to use the reduced chi-squared value obtained in the fit to identify samples in which assay assumptions have been violated. We found that the algorithm significantly reduced bias in simulated assays and improved SGS assay results for plutonium-bearing samples. Further testing will be conducted with the TGS, which is expected to be less susceptible than the SGS to systematic sources of bias.
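The energy-dependent fit can be sketched as a weighted least-squares problem. The slab-like self-attenuation factor below is an illustrative lump model, not necessarily the authors' parameterization; the point is that the apparent mass falls with attenuation coefficient in a way that a single lump-size parameter can capture:

```python
import numpy as np

def lump_factor(mu, d):
    """Self-attenuation factor for a slab-like lump of thickness d (illustrative)."""
    x = np.maximum(mu * d, 1e-12)
    return (1.0 - np.exp(-x)) / x

def fit_lump(mu, assay, sigma, d_grid):
    """Weighted least-squares fit of observed assays m(E) = m0 * F(mu_E, d).

    For each trial lump size d the best m0 has a closed form; the (d, m0)
    pair minimizing the weighted chi-squared is returned."""
    w = 1.0 / sigma**2
    best = None
    for d in d_grid:
        F = lump_factor(mu, d)
        m0 = np.sum(w * F * assay) / np.sum(w * F * F)   # weighted linear solve
        chi2 = np.sum(w * (assay - m0 * F) ** 2)
        if best is None or chi2 < best[0]:
            best = (chi2, d, m0)
    return best[1], best[2]

# synthetic assay: true mass 10 g, lump size 0.3 cm, mu falling with energy
mu = np.array([2.0, 1.2, 0.7, 0.4, 0.2])       # cm^-1 at the emitted energies
obs = 10.0 * lump_factor(mu, 0.3)
sigma = 0.02 * obs
d_hat, m_hat = fit_lump(mu, obs, sigma, np.linspace(0.0, 1.0, 201))
print(d_hat, m_hat)
```

A large residual chi-squared after this fit would flag a sample where the lump model's assumptions fail, as the abstract suggests.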
NNLO massive corrections to Bhabha scattering and theoretical precision of BabaYaga@NLO
International Nuclear Information System (INIS)
Carloni Calame, C.M.; Nicrosini, O.; Piccinini, F.; Riemann, T.; Worek, M.
2011-12-01
We provide an exact calculation of next-to-next-to-leading order (NNLO) massive corrections to Bhabha scattering in QED, relevant for precision luminosity monitoring at meson factories. Using realistic reference event selections, exact numerical results for leptonic and hadronic corrections are given and compared with the corresponding approximate predictions of the event generator BabaYaga@NLO. It is shown that the NNLO massive corrections are necessary for luminosity measurements with per mille precision. At the same time they are found to be well accounted for in the generator at an accuracy level below the one per mille. An update of the total theoretical precision of BabaYaga@NLO is presented and possible directions for a further error reduction are sketched. (orig.)
Wall attenuation and scatter corrections for ion chambers: measurements versus calculations
Energy Technology Data Exchange (ETDEWEB)
Rogers, D W.O.; Bielajew, A F [National Research Council of Canada, Ottawa, ON (Canada). Div. of Physics
1990-08-01
In precision ion chamber dosimetry in air, wall attenuation and scatter are corrected for by A_wall (K_att in IAEA terminology, K_w^-1 in standards laboratory terminology). Using the EGS4 system, the authors show that Monte Carlo calculated A_wall factors predict relative variations in detector response with wall thickness which agree with all available experimental data within a statistical uncertainty of less than 0.1%. The calculated correction factors for use in exposure and air kerma standards differ by up to 1% from those obtained by extrapolating these same measurements. Using calculated correction factors would imply increases of 0.7-1.0% in the exposure and air kerma standards based on spherical and large-diameter, large-length cylindrical chambers, and decreases of 0.3-0.5% for standards based on large-diameter pancake chambers. (author).
Next-to-soft corrections to high energy scattering in QCD and gravity
Energy Technology Data Exchange (ETDEWEB)
Luna, A.; Melville, S. [SUPA, School of Physics and Astronomy, University of Glasgow,Glasgow G12 8QQ, Scotland (United Kingdom); Naculich, S.G. [Department of Physics, Bowdoin College,Brunswick, ME 04011 (United States); White, C.D. [Centre for Research in String Theory, School of Physics and Astronomy,Queen Mary University of London,327 Mile End Road, London E1 4NS (United Kingdom)
2017-01-12
We examine the Regge (high energy) limit of 4-point scattering in both QCD and gravity, using recently developed techniques to systematically compute all corrections up to next-to-leading power in the exchanged momentum, i.e., beyond the eikonal approximation. We consider the situation of two scalar particles of arbitrary mass, thus generalising previous calculations in the literature. In QCD, our calculation describes power-suppressed corrections to the Reggeisation of the gluon. In gravity, we confirm a previous conjecture that next-to-soft corrections correspond to two independent deflection angles for the incoming particles. Our calculations in QCD and gravity are consistent with the well-known double copy relating amplitudes in the two theories.
International Nuclear Information System (INIS)
De Agostini, A.; Moretti, R.; Belletti, S.; Maira, G.; Magri, G.C.; Bestagno, M.
1992-01-01
The correction of organ movements in sequential radionuclide renography was done using an iterative algorithm that, by means of a set of rectangular regions of interest (ROIs), did not require any anatomical marker or manual elaboration of frames. The realignment programme proposed here is quite independent of the spatial and temporal distribution of activity and analyses the rotational movement in a simplified but reliable way. The position of the object inside a frame is evaluated by choosing the best ROI in a set of ROIs shifted 1 pixel around the central one. Statistical tests have to be fulfilled for the algorithm to activate the realignment procedure. Validation of the algorithm was done for different acquisition set-ups and organ movements. Results, summarized in Table 1, show that in about 90% of the simulated experiments the algorithm is able to correct the movements of the object with a maximum error less than or equal to 1 pixel. The usefulness of the realignment programme was demonstrated with sequential radionuclide renography as a typical clinical application. The algorithm-corrected curves of a 1-year-old patient were completely different from those obtained without a motion correction procedure. The algorithm may also be applicable to other types of scintigraphic examinations besides functional imaging, in which the realignment of frames of the dynamic sequence is an intrinsic demand. (orig.)
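The shifted-ROI search can be sketched in a few lines. The matching criterion below (summed counts closest to a reference value) is an illustrative stand-in; the abstract only says the best ROI among the nine one-pixel shifts is chosen, subject to statistical tests:

```python
import numpy as np

def best_shift(frame, roi, ref_counts):
    """Evaluate the 3x3 set of ROIs shifted by one pixel around the central one
    and return the shift whose summed counts best match a reference value."""
    (r0, r1), (c0, c1) = roi
    best, best_err = (0, 0), float("inf")
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            counts = frame[r0 + dr:r1 + dr, c0 + dc:c1 + dc].sum()
            err = abs(counts - ref_counts)
            if err < best_err:
                best_err, best = err, (dr, dc)
    return best

# toy frame: a bright 4x4 "organ" that moved one pixel down and one right
frame = np.zeros((16, 16))
frame[6:10, 6:10] = 100.0
roi = ((5, 9), (5, 9))                  # ROI where the organ sat in the first frame
shift = best_shift(frame, roi, ref_counts=1600.0)
print(shift)                            # (1, 1): realign by shifting the ROI
```

Iterating this per frame, and accepting a shift only when it passes a significance test, realigns the dynamic sequence without anatomical markers.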
International Nuclear Information System (INIS)
Torres-Espallardo, I; Spanoudaki, V; Ziegler, S I; Rafecas, M; McElroy, D P
2008-01-01
Random coincidences can contribute substantially to the background in positron emission tomography (PET). Several estimation methods are being used for correcting them. The goal of this study was to investigate the validity of techniques for random coincidence estimation, with various low-energy thresholds (LETs). Simulated singles list-mode data of the MADPET-II small animal PET scanner were used as input. The simulations have been performed using the GATE simulation toolkit. Several sources with different geometries have been employed. We evaluated the number of random events using three methods: delayed window (DW), singles rate (SR) and time histogram fitting (TH). Since the GATE simulations allow random and true coincidences to be distinguished, a comparison between the number of random coincidences estimated using the standard methods and the number obtained using GATE was performed. An overestimation in the number of random events was observed using the DW and SR methods. This overestimation decreases for LETs higher than 255 keV. It is additionally reduced when the single events which have undergone a Compton interaction in crystals before being detected are removed from the data. These two observations lead us to infer that the overestimation is due to inter-crystal scatter. The effect of this mismatch in the reconstructed images is important for quantification because it leads to an underestimation of activity. This was shown using a hot-cold-background source with 3.7 MBq total activity in the background region and a 1.59 MBq total activity in the hot region. For both 200 keV and 400 keV LET, an overestimation of random coincidences for the DW and SR methods was observed, resulting in approximately 1.5% or more (at 200 keV LET: 1.7% for DW and 7% for SR) and less than 1% (at 400 keV LET: both methods) underestimation of activity within the background region. In almost all cases, images obtained by compensating for random events in the reconstruction
Fiorino, Steven T.; Elmore, Brannon; Schmidt, Jaclyn; Matchefts, Elizabeth; Burley, Jarred L.
2016-05-01
Properly accounting for multiple scattering effects can have important implications for remote sensing and possibly directed energy applications. For example, increasing path radiance can affect signal noise. This study describes the implementation of a fast-calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations into the Laser Environmental Effects Definition and Reference (LEEDR) atmospheric characterization and radiative transfer code. The multiple scattering algorithm fully solves for molecular, aerosol, cloud, and precipitation single-scatter layer effects with a Mie algorithm at every calculation point/layer rather than using an interpolated value from a pre-calculated look-up table. This top-down cumulative diffusivity method first considers the incident solar radiance contribution to a given layer, accounting for solid angle and elevation, and then measures the contribution of diffused energy from previous layers based on the transmission of the current level, producing a cumulative radiance that is reflected from a surface and measured at the observer's aperture. A unique set of asymmetry and backscattering phase function parameter calculations is then made, which accounts for the radiance loss due to the molecular and aerosol constituent reflectivity within a level and allows for a more accurate characterization of diffuse layers that contribute to multiply scattered radiances in inhomogeneous atmospheres. The code logic is valid for spectral bands between 200 nm and radio wavelengths, and the accuracy is demonstrated by comparing the results from LEEDR to observed sky radiance data.
Braun, Frank; Schalk, Robert; Heintz, Annabell; Feike, Patrick; Firmowski, Sebastian; Beuermann, Thomas; Methner, Frank-Jürgen; Kränzlin, Bettina; Gretz, Norbert; Rädle, Matthias
2017-07-01
In this report, a quantitative nicotinamide adenine dinucleotide hydrate (NADH) fluorescence measurement algorithm in a liquid tissue phantom using a fiber-optic needle probe is presented. To determine the absolute concentrations of NADH in this phantom, the fluorescence emission spectra at 465 nm were corrected using diffuse reflectance spectroscopy between 600 nm and 940 nm. The patented autoclavable Nitinol needle probe enables the acquisition of multispectral backscattering measurements of ultraviolet, visible, near-infrared and fluorescence spectra. As a phantom, a suspension of calcium carbonate (Calcilit) and water with physiological NADH concentrations between 0 mmol l-1 and 2.0 mmol l-1 was used to mimic human tissue. The light scattering characteristics were adjusted to match the backscattering attributes of human skin by modifying the concentration of Calcilit. To correct the scattering effects caused by the matrices of the samples, an algorithm based on the backscattered remission spectrum was employed to compensate for the influence of multiscattering on the optical pathway through the dispersed phase. The monitored backscattered visible light was used to correct the fluorescence spectra and thereby to determine the true NADH concentrations at unknown Calcilit concentrations. Despite the simplicity of the presented algorithm, the root-mean-square error of prediction (RMSEP) was 0.093 mmol l-1.
Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm.
Di Simone, Alessio
2016-06-25
Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions.
A method and algorithm for correlating scattered light and suspended particles in polluted water
International Nuclear Information System (INIS)
Sami Gumaan Daraigan; Mohd Zubir Matjafri; Khiruddin Abdullah; Azlan Abdul Aziz; Abdul Aziz Tajuddin; Mohd Firdaus Othman
2005-01-01
An optical model has been developed for measuring total suspended solids (TSS) concentrations in water. This approach is based on the characteristics of light scattered from the suspended particles in water samples. An optical sensor system (an active spectrometer) has been developed to correlate pollutant (TSS) concentration with the scattered radiation. Scattered light was measured in terms of the output voltage of the sensor's phototransistor. The developed algorithm was used to calculate and estimate the concentrations of the polluted water samples. The proposed algorithm was calibrated using the observed readings. The results display a strong correlation between the radiation values and the TSS concentrations. The proposed system yields a high degree of accuracy, with a correlation coefficient (R) of 0.99 and a root mean square error (RMS) of 63.57 mg/l. (Author)
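A calibration of this kind reduces to a regression of known TSS concentrations against sensor voltage, reporting R and the RMS error. A sketch with hypothetical readings (the paper's data are not reproduced here):

```python
import numpy as np

def calibrate(voltage, tss):
    """Least-squares line TSS = a*V + b, plus correlation coefficient R and RMSE."""
    a, b = np.polyfit(voltage, tss, 1)
    pred = a * voltage + b
    r = np.corrcoef(voltage, tss)[0, 1]
    rmse = np.sqrt(np.mean((tss - pred) ** 2))
    return a, b, r, rmse

# hypothetical calibration readings: phototransistor output (V) vs. known TSS (mg/l)
v = np.array([0.10, 0.25, 0.41, 0.62, 0.78, 0.95])
tss = np.array([50.0, 190.0, 330.0, 540.0, 700.0, 860.0])
a, b, r, rmse = calibrate(v, tss)
print(round(r, 3), round(rmse, 1))
```

Once `a` and `b` are fixed, an unknown sample's TSS is estimated directly from its measured voltage.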
International Nuclear Information System (INIS)
Quirk, Thomas J. IV
2004-01-01
The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross sections (which assume a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at a high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
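The free-electron baseline that the abstract starts from, the Klein-Nishina distribution, can be sampled by simple rejection. This sketch deliberately omits the binding and Doppler effects that the impulse-approximation upgrade addresses:

```python
import random

def klein_nishina_cos(alpha, rng=random.random):
    """Sample cos(theta) from the free-electron Klein-Nishina angular
    distribution by rejection; alpha = E_photon / (m_e c^2).
    Binding and Doppler-broadening effects are deliberately omitted."""
    while True:
        c = 2.0 * rng() - 1.0                   # candidate cos(theta)
        p = 1.0 / (1.0 + alpha * (1.0 - c))     # E'/E at this angle
        f = p * p * (p + 1.0 / p - (1.0 - c * c))
        if rng() * 2.0 <= f:                    # f <= 2 for all angles, alpha > 0
            return c

random.seed(1)
alpha = 1.0                                     # a 511 keV photon
samples = [klein_nishina_cos(alpha) for _ in range(20000)]
print(round(sum(samples) / len(samples), 2))    # forward-peaked: mean cos(theta) > 0
```

Given the sampled angle, the scattered photon energy follows from `p`, and two-body kinematics fixes the electron direction and energy, as the abstract describes.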
A First-order Prediction-Correction Algorithm for Time-varying (Constrained) Optimization: Preprint
Energy Technology Data Exchange (ETDEWEB)
Dall'Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Simonetto, Andrea [Universite catholique de Louvain
2017-07-25
This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function can be computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
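The prediction-correction idea can be illustrated on a scalar, unconstrained time-varying cost (the paper's algorithm handles constraints and uses the Hessian more carefully; this is only a sketch). Predict by compensating the estimated time drift of the gradient, then correct with a few gradient steps on the new cost:

```python
import math

def grad(x, t):
    """Gradient of the time-varying cost f_t(x) = 0.5 * (x - sin(t))**2."""
    return x - math.sin(t)

def track(x0, dt, steps, alpha=0.5, n_corr=3):
    """Prediction-correction tracking of the moving minimizer x*(t) = sin(t)."""
    x, t = x0, 0.0
    for _ in range(steps):
        # prediction: dx* = -H^{-1} (d/dt grad) dt; here the Hessian is 1
        drift = (grad(x, t + dt) - grad(x, t)) / dt
        x = x - dt * drift
        t += dt
        for _ in range(n_corr):           # correction: gradient descent on f_t
            x = x - alpha * grad(x, t)
    return x, t

x, t = track(x0=0.0, dt=0.05, steps=200)
print(abs(x - math.sin(t)))               # small tracking error
```

Without the prediction step the iterate lags the moving optimum by O(dt); the drift-compensating prediction removes most of that lag, which is the point of the prediction-correction template.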
Xiao, Zhongxiu
2018-04-01
A method for measuring and correcting the tilt of anti-vibration wind turbines based on a screening algorithm is proposed in this paper. First, we design a device built around the ADXL203 acceleration sensor; the inclination is measured by installing it on the tower of the wind turbine as well as in the nacelle. Next, a Kalman filter is applied by establishing a state-space model for the signal and noise, and the filter is simulated in MATLAB. Considering the impact of tower and nacelle vibration on the collected data, the original data and the filtered data are classified and stored by the screening algorithm, and the filtered data are filtered again to make the output more accurate. Finally, installation errors are eliminated algorithmically to achieve the tilt correction. A device based on this method has high precision, low cost and good vibration immunity, and it has a wide range of applications and promotion value.
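The Kalman filtering stage can be sketched for the simplest case: a constant tilt observed through vibration noise. The process and measurement variances below are illustrative tuning values, not those of the paper:

```python
import random

def kalman_1d(measurements, q=1e-5, r=0.04):
    """Scalar Kalman filter: constant tilt state observed through noisy readings.
    q = process variance, r = measurement variance (illustrative values)."""
    x, p = measurements[0], 1.0
    out = []
    for z in measurements:
        p = p + q                     # predict: state is (nearly) constant
        k = p / (p + r)               # Kalman gain
        x = x + k * (z - x)           # update with measurement z
        p = (1.0 - k) * p
        out.append(x)
    return out

random.seed(2)
true_tilt = 1.8                       # degrees
zs = [true_tilt + random.gauss(0.0, 0.2) for _ in range(500)]
est = kalman_1d(zs)
print(round(est[-1], 2))              # converges near the true tilt
```

In the full method, a state-space model covering both signal and vibration noise replaces this scalar model, and the screening step decides which samples feed the filter.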
Qian, Fang; Wu, Yihui; Hao, Peng
2017-11-01
Baseline correction is a very important part of pre-processing. A baseline in the spectrum signal can induce uneven amplitude shifts across different wavenumbers and lead to poor results; therefore, these amplitude shifts should be compensated for before further analysis. Many algorithms are used to remove baselines; however, fully automated baseline correction is more convenient in practical applications. A fully automated algorithm based on wavelet feature points and segment interpolation (AWFPSI) is proposed. This algorithm finds feature points through a continuous wavelet transform and estimates the baseline through segment interpolation. AWFPSI is compared with three commonly used fully automated and semi-automated algorithms, using a simulated spectrum signal, a visible spectrum signal and a Raman spectrum signal. The results show that AWFPSI gives better accuracy and has the advantage of easy use.
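The feature-point-plus-interpolation idea can be sketched as follows. Simple windowed local minima stand in for the paper's continuous-wavelet feature detection; the interpolation through those points then estimates the baseline:

```python
import numpy as np

def baseline_correct(y, window=25):
    """Baseline removal sketch: pick windowed local minima as feature points
    (standing in for wavelet-based detection) and interpolate between them."""
    n = len(y)
    idx = [0]
    for i in range(window, n - window, window):
        seg = y[i - window:i + window]
        idx.append(i - window + int(np.argmin(seg)))
    idx.append(n - 1)
    idx = sorted(set(idx))
    base = np.interp(np.arange(n), idx, y[idx])   # segment interpolation
    return y - base, base

x = np.linspace(0, 10, 500)
peaks = 5.0 * np.exp(-0.5 * ((x - 3.0) / 0.1) ** 2)   # narrow Raman-like band
baseline = 0.5 * x + 2.0                               # slowly varying baseline
corrected, base = baseline_correct(peaks + baseline)
print(round(float(np.abs(base - baseline).mean()), 2))
```

Because the minima fall on the slowly varying baseline rather than on the narrow peak, the subtraction leaves the peak amplitude essentially intact.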
Coulomb corrections to nuclear scattering lengths and effective ranges for weakly bound systems
International Nuclear Information System (INIS)
Mur, V.D.; Popov, V.S.; Sergeev, A.V.
1996-01-01
A procedure is considered for extracting the purely nuclear scattering length as and effective range rs (which correspond to a strong-interaction potential Vs with disregarded Coulomb interaction) from the experimentally determined nuclear quantities acs and rcs, which are modified by the Coulomb interaction. The Coulomb renormalization of as and rs is especially strong if the system under study involves a level with energy close to zero (on the nuclear scale). Formulas are given that determine the Coulomb renormalization of the low-energy parameters of s-wave scattering (l=0). Detailed numerical calculations are performed for the coefficients appearing in the equations that determine the Coulomb corrections for various models of the potential Vs(r). This makes it possible to draw qualitative conclusions about the dependence of Coulomb corrections on the form of the strong-interaction potential and, in particular, on its small-distance behavior. A considerable enhancement of Coulomb corrections to the effective range rs is found for potentials with a barrier.
Directory of Open Access Journals (Sweden)
Mehravar Rafati
2017-01-01
Conclusion: The simulation and clinical studies showed that the new approach can achieve better performance than the DEW and TEW methods, according to the contrast and SNR values for scatter correction.
Actuator Disc Model Using a Modified Rhie-Chow/SIMPLE Pressure Correction Algorithm
DEFF Research Database (Denmark)
Rethore, Pierre-Elouan; Sørensen, Niels
2008-01-01
An actuator disc model for the flow solver EllipSys (2D&3D) is proposed. It is based on a correction of the Rhie-Chow algorithm for using discrete body forces in a collocated-variable finite volume CFD code. It is compared with three cases where an analytical solution is known.
The generation algorithm of arbitrary polygon animation based on dynamic correction
Directory of Open Access Journals (Sweden)
Hou Ya Wei
2016-01-01
This paper proposes a method, based on a key-frame polygon sequence, that uses dynamic correction to develop continuous animation. First, we use a quadratic Bezier curve to interpolate the corresponding side vectors of consecutive frames of the polygon sequence and realize the continuity of the animation sequence. Then, according to the Bezier curve's characteristics, we dynamically adjust the interpolation parameters to smooth the changes. Meanwhile, we use the Lagrange multiplier method to correct the polygon and close it. Finally, we provide the concrete algorithm flow and present numerical experiment results. The experiment results show that the algorithm achieves an excellent effect.
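The edge-vector interpolation and closure steps can be sketched as follows. The midpoint control point and the equal spreading of the closure residual are illustrative simplifications (the paper chooses control points dynamically and closes the polygon via Lagrange multipliers):

```python
def quad_bezier(p0, p1, p2, t):
    """Point on a quadratic Bezier curve at parameter t."""
    s = 1.0 - t
    return s * s * p0 + 2.0 * s * t * p1 + t * t * p2

def interpolate_polygon(edges_a, edges_b, t):
    """Blend the edge vectors of two key-frame polygons with a quadratic Bezier
    (control point = midpoint, an illustrative choice), rebuild the polygon,
    and close it by spreading the residual over all edges, a simple stand-in
    for the Lagrange-multiplier correction."""
    n = len(edges_a)
    edges = []
    for (ax, ay), (bx, by) in zip(edges_a, edges_b):
        mx, my = 0.5 * (ax + bx), 0.5 * (ay + by)
        edges.append((quad_bezier(ax, mx, bx, t), quad_bezier(ay, my, by, t)))
    # closure defect: edge vectors of a closed polygon must sum to zero
    dx = sum(e[0] for e in edges) / n
    dy = sum(e[1] for e in edges) / n
    edges = [(ex - dx, ey - dy) for ex, ey in edges]
    pts, (x, y) = [(0.0, 0.0)], (0.0, 0.0)
    for ex, ey in edges[:-1]:
        x, y = x + ex, y + ey
        pts.append((x, y))
    return pts, edges

# key frames: unit square edges vs. a horizontally stretched square's edges
square = [(1, 0), (0, 1), (-1, 0), (0, -1)]
stretched = [(2, 0), (0, 1), (-2, 0), (0, -1)]
pts, edges = interpolate_polygon(square, stretched, t=0.5)
print(sum(e[0] for e in edges), sum(e[1] for e in edges))   # closes: sums are zero
```

Sweeping `t` from 0 to 1 yields the in-between frames, each guaranteed to be a closed polygon.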
Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao
2016-06-01
An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local-optimum problem of traditional particle swarm optimization in estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure the particle swarm is optimized globally and to avoid falling into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared to the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate in this study. The corrected image was then segmented, and the segmentation accuracy obtained in this research was 10% higher than that with the improved entropy minimum algorithm. This algorithm can be applied to the correction of MR image bias fields.
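The adaptive-inertia mechanism can be sketched on a toy objective. The premature-convergence indicator below (spread between mean and best fitness) is an illustrative choice, not the paper's exact indicator, and the objective stands in for the Legendre-polynomial bias-field fit:

```python
import random

def pso_adaptive(f, dim, n=30, iters=200, seed=0):
    """PSO whose inertia weight rises when the swarm looks prematurely
    converged (small fitness spread) to push exploration, and falls otherwise."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        vals = [f(p) for p in pos]
        spread = sum(vals) / n - min(vals)        # premature-convergence indicator
        w = 0.7 if spread < 1e-3 else 0.4 + 0.3 * min(1.0, 1.0 / (1.0 + spread))
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, pos[i][:]
                if v < gval:
                    gval, gbest = v, pos[i][:]
    return gbest, gval

best, val = pso_adaptive(lambda p: sum(x * x for x in p), dim=4)
print(val)
```

In the bias-field application, each particle would encode the Legendre polynomial coefficients and the fitness would score the corrected image.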
GEO-LEO reflectance band inter-comparison with BRDF and atmospheric scattering corrections
Chang, Tiejun; Xiong, Xiaoxiong Jack; Keller, Graziela; Wu, Xiangqian
2017-09-01
The inter-comparison of the reflective solar bands between the instruments onboard a geostationary orbit satellite and onboard a low Earth orbit satellite is very helpful to assess their calibration consistency. GOES-R was launched on November 19, 2016, and Himawari 8 was launched on October 7, 2014. Unlike the previous GOES instruments, the Advanced Baseline Imager on GOES-16 (GOES-R became GOES-16 after November 29 when it reached orbit) and the Advanced Himawari Imager (AHI) on Himawari 8 have onboard calibrators for the reflective solar bands. The assessment of calibration is important for their product quality enhancement. MODIS and VIIRS, with their stringent calibration requirements and excellent on-orbit calibration performance, provide good references. The simultaneous nadir overpass (SNO) and ray-matching are widely used inter-comparison methods for reflective solar bands. In this work, the inter-comparisons are performed over a pseudo-invariant target. The use of stable and uniform calibration sites provides comparison with an appropriate reflectance level, accurate adjustment for band spectral coverage differences, reduction of the impact of pixel mismatching, and consistency of BRDF and atmospheric correction. The site in this work is a desert site in Australia (latitude 29.0 South; longitude 139.8 East). Due to the difference in solar and view angles, two corrections are applied to obtain comparable measurements. The first is the atmospheric scattering correction. The satellite sensor measurements are top-of-atmosphere reflectances. The scattering, especially Rayleigh scattering, should be removed so that the ground reflectance can be derived. Secondly, the angle differences magnify the BRDF effect. The ground reflectance should be corrected to obtain comparable measurements. The atmospheric correction is performed using a vector version of the Second Simulation of a Satellite Signal in the Solar Spectrum model and BRDF correction is performed using a semi
Quantum mean-field decoding algorithm for error-correcting codes
International Nuclear Information System (INIS)
Inoue, Jun-ichi; Saika, Yohei; Okada, Masato
2009-01-01
We numerically examine a quantum version of the TAP (Thouless-Anderson-Palmer)-like mean-field algorithm for the problem of error-correcting codes. For a class of the so-called Sourlas error-correcting codes, we check its usefulness for retrieving the original bit sequence (message) of finite length. The decoding dynamics is derived explicitly and we evaluate the average-case performance through the bit-error rate (BER).
International Nuclear Information System (INIS)
Laitinen, T.; Dalla, S.; Huttunen-Heikinmaa, K.; Valtonen, E.
2015-01-01
To understand the origin of Solar Energetic Particles (SEPs), we must study their injection time relative to other solar eruption manifestations. Traditionally, the injection time is determined using the Velocity Dispersion Analysis (VDA), where a linear fit of the observed event onset times at 1 AU to the inverse velocities of SEPs is used to derive the injection time and path length of the first-arriving particles. VDA does not, however, take into account that the particles that produce a statistically observable onset at 1 AU have scattered in the interplanetary space. We use Monte Carlo test particle simulations of energetic protons to study the effect of particle scattering on the observable SEP event onset above pre-event background, and consequently on VDA results. We find that the VDA results are sensitive to the properties of the pre-event and event particle spectra as well as SEP injection and scattering parameters. In particular, a VDA-obtained path length that is close to the nominal Parker spiral length does not imply that the VDA injection time is correct. We study the delay to the observed onset caused by scattering of the particles and derive a simple estimate for the delay time by using the rate of intensity increase at the SEP onset as a parameter. We apply the correction to a magnetically well-connected SEP event of 2000 June 10, and show it to improve both the path length and injection time estimates, while also increasing the error limits to better reflect the inherent uncertainties of VDA.
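The VDA fit itself is a straight line of onset time against inverse speed: t_onset = t_inj + L/v, with slope L and intercept t_inj. A sketch on synthetic, scatter-free onsets (real onsets carry the scattering delays the abstract corrects for):

```python
import numpy as np

AU_KM = 1.4959787e8          # kilometres per astronomical unit
C_KM_S = 2.99792458e5        # speed of light, km/s

def vda(onset_times_s, speeds_km_s):
    """Velocity dispersion analysis: fit t_onset = t_inj + L / v.
    Returns injection time (s) and apparent path length (AU)."""
    inv_v = 1.0 / np.asarray(speeds_km_s)
    L_km, t_inj = np.polyfit(inv_v, np.asarray(onset_times_s), 1)
    return t_inj, L_km / AU_KM

# synthetic event: injection at t = 1000 s, path length 1.2 AU
betas = np.array([0.3, 0.4, 0.5, 0.6, 0.7])    # proton speeds as fractions of c
v = betas * C_KM_S
onsets = 1000.0 + 1.2 * AU_KM / v
t_inj, L = vda(onsets, v)
print(round(t_inj, 1), round(L, 3))
```

Scattering delays the observable onsets in a speed-dependent way, which is why the fitted `t_inj` and `L` can both be biased even when the fit looks clean.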
International Nuclear Information System (INIS)
Bardin, D.Yu.
1979-01-01
Based on the simple quark-parton model of strong interaction and on the Weinberg-Salam theory, compact formulae are derived for the radiative correction to the charged-current-induced deep inelastic scattering of neutrinos on nucleons. The radiative correction is found to be around 20-30%, i.e., the value typical for deep inelastic lN-scattering. The results obtained are rather different from the presently available estimations of the effect under consideration.
Létourneau, Pierre-David
2016-09-19
We present a wideband fast algorithm capable of accurately computing the full numerical solution of the problem of acoustic scattering of waves by multiple finite-sized bodies such as spherical scatterers in three dimensions. By full solution, we mean that no assumption (e.g. Rayleigh scattering, geometrical optics, weak scattering, Born single scattering, etc.) is necessary regarding the properties of the scatterers, their distribution or the background medium. The algorithm is also fast in the sense that it scales linearly with the number of unknowns. We use this algorithm to study the phenomenon of super-resolution in time-reversal refocusing in highly-scattering media recently observed experimentally (Lemoult et al., 2011), and provide numerical arguments towards the fact that such a phenomenon can be explained through a homogenization theory.
Automatic Correction Algorithm of Hydrology Feature Attribute in National Geographic Census
Li, C.; Guo, P.; Liu, X.
2017-09-01
A subset of the attributes of hydrologic feature data in the national geographic census is not clear; the current solution to this problem is manual filling, which is inefficient and liable to mistakes. This paper therefore proposes an automatic correction algorithm for hydrologic feature attributes. Based on an analysis of the structural characteristics and topological relations, we put forward three basic principles of correction: network proximity, structural robustness and topological ductility. Based on the WJ-III map workstation, we realize the automatic correction of hydrologic features. Finally, practical data are used to validate the method. The results show that our method is highly reasonable and efficient.
Evaluation of the global orbit correction algorithm for the APS real-time orbit feedback system
International Nuclear Information System (INIS)
Carwardine, J.; Evans, K. Jr.
1997-01-01
The APS real-time orbit feedback system uses 38 correctors per plane and has available up to 320 rf beam position monitors. Orbit correction is implemented using multiple digital signal processors. Singular value decomposition is used to generate a correction matrix from a linear response matrix model of the storage ring lattice. This paper evaluates the performance of the APS system in terms of its ability to correct localized and distributed sources of orbit motion. The impact of regulator gain and bandwidth, choice of beam position monitors, and corrector dynamics are discussed. The weighted least-squares algorithm is reviewed in the context of local feedback.
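The SVD-based correction matrix amounts to a (possibly truncated) pseudoinverse of the BPM-to-corrector response matrix. A sketch with toy dimensions (the real system has up to 320 BPMs and 38 correctors per plane):

```python
import numpy as np

def svd_correction(R, orbit, n_sv=None, cutoff=1e-6):
    """Corrector settings minimizing ||R @ theta + orbit|| via the SVD
    pseudoinverse of the response matrix R; n_sv truncates weak modes."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    if n_sv is not None:
        s, U, Vt = s[:n_sv], U[:, :n_sv], Vt[:n_sv]
    inv = np.where(s > cutoff * s[0], 1.0 / s, 0.0)   # guard tiny singular values
    return -(Vt.T * inv) @ (U.T @ orbit)

rng = np.random.default_rng(3)
R = rng.standard_normal((40, 10))         # 40 BPMs, 10 correctors (toy numbers)
theta_true = rng.standard_normal(10) * 0.1
orbit = R @ theta_true                    # orbit error produced by known kicks
theta = svd_correction(R, orbit)
residual = np.linalg.norm(R @ theta + orbit)
print(round(float(residual), 6))          # essentially zero for this ideal case
```

Truncating singular values (the `n_sv` argument) trades correction strength for robustness against model error and noise, which is the usual tuning knob in such feedback systems.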
TPC cross-talk correction: CERN-Dubna-Milano algorithm and results
De Min, A; Guskov, A; Krasnoperov, A; Nefedov, Y; Zhemchugov, A
2003-01-01
The CDM (CERN-Dubna-Milano) algorithm for TPC Xtalk correction is presented and discussed in detail. It is a data-driven, model-independent approach to the problem of Xtalk correction. It accounts for arbitrary amplitudes and pulse shapes of signals, and corrects (almost) all generations of Xtalk, with a view to handling (almost) correctly even complex multi-track events. Results on preamp amplification and preamp linearity from the analysis of test-charge injection data of all six TPC sectors are presented. The minimal expected error on the measurement of signal charges in the TPC is discussed. Results are given on the application of the CDM Xtalk correction to test-charge events and krypton events.
The data correction algorithms in the 60Co train inspection system
Yuan Ya Ding; Liu Xi Ming; Miao Ji Cheng
2002-01-01
Because of the physical characteristics of the 60Co train inspection system and its high-speed data collection system based on current integration, the original images are distorted to a certain degree. The authors investigate the causes of this distortion and accordingly present the data correction algorithms
International Nuclear Information System (INIS)
Barry, J.M.; Pollard, J.P.
1986-11-01
A FORTRAN subroutine MLTGRD is provided to solve efficiently the large systems of linear equations arising from a five-point finite difference discretisation of some elliptic partial differential equations. MLTGRD is a multigrid algorithm which provides multiplicative correction to iterative solution estimates from successively reduced systems of linear equations. It uses the method of implicit non-stationary iteration for all grid levels
International Nuclear Information System (INIS)
Yang, P; Hu, S J; Chen, S Q; Yang, W; Xu, B; Jiang, W H
2006-01-01
In order to improve laser beam quality, a real-number-encoded genetic algorithm based on adaptive optics technology is presented. The algorithm was applied to control a 19-channel deformable mirror to correct phase aberrations in a laser beam. When a traditional adaptive optics system is used to correct laser beam wavefront phase aberrations, a precondition is to measure the phase aberration in the beam. Using a genetic algorithm, however, there is no need to know the phase aberration beforehand; the only parameter needed is the light intensity behind a pinhole on the focal plane, which is used as the fitness function for the genetic algorithm. Simulation results show that the optimal shape of the 19-channel deformable mirror for correcting the phase aberration can be found. The peak light intensity was improved by a factor of 21, and the encircled-energy Strehl ratio increased from 0.02 to 0.34 as the phase aberration was corrected with this technique
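As a rough illustration of the approach, the sketch below runs a real-coded genetic algorithm against a toy 19-channel mirror model. The intensity model, population size, and mutation scale are invented for the example; in the real system the fitness would be the measured light intensity behind the pinhole, not a simulated residual.

```python
import random

# Toy model: intensity is maximal when each actuator voltage cancels the
# (unknown to the GA) aberration on its channel. Purely illustrative.
N_CHANNELS = 19
random.seed(0)
ABERRATION = [random.uniform(-1, 1) for _ in range(N_CHANNELS)]

def fitness(voltages):
    # Stand-in for the measured focal-plane intensity behind the pinhole:
    # higher when the residual aberration is smaller.
    residual = sum((v + a) ** 2 for v, a in zip(voltages, ABERRATION))
    return 1.0 / (1.0 + residual)

def evolve(pop_size=40, generations=200, mutation=0.1):
    pop = [[random.uniform(-1, 1) for _ in range(N_CHANNELS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_CHANNELS)
            child = a[:cut] + b[cut:]           # one-point crossover
            i = random.randrange(N_CHANNELS)
            child[i] += random.gauss(0, mutation)  # real-valued mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the individuals are real-valued actuator voltages, no binary encoding/decoding step is needed, which is the point of the real-number encoding mentioned in the abstract.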
Distortion correction algorithm for UAV remote sensing image based on CUDA
International Nuclear Information System (INIS)
Wenhao, Zhang; Yingcheng, Li; Delong, Li; Changsheng, Teng; Jin, Liu
2014-01-01
In China, natural disasters are characterized by wide distribution, severe destruction and high impact range, and they cause significant property damage and casualties every year. Following a disaster, timely and accurate acquisition of geospatial information can provide an important basis for disaster assessment, emergency relief, and reconstruction. In recent years, Unmanned Aerial Vehicle (UAV) remote sensing systems have played an important role in major natural disasters, with UAVs becoming an important means of obtaining disaster information. UAVs are equipped with non-metric digital cameras whose lens distortion produces large geometric deformations in the acquired images, affecting the accuracy of subsequent processing. The slow speed of the traditional CPU-based distortion correction algorithm cannot meet the requirements of disaster emergencies. Therefore, we propose a Compute Unified Device Architecture (CUDA)-based image distortion correction algorithm for UAV remote sensing, which takes advantage of the powerful parallel processing capability of the GPU, greatly improving the efficiency of distortion correction. Our experiments show that, compared with the traditional CPU algorithm and excluding image loading and saving times, the maximum acceleration ratio of the proposed algorithm reaches 58. Thus, data processing time can be reduced by one to two hours, considerably improving disaster emergency response capability
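The per-pixel arithmetic at the core of such a correction can be sketched with a simple radial (Brown) distortion model; the coefficients below are hypothetical, and the paper does not specify its camera model, so this is only an illustrative stand-in. A CUDA version would evaluate the same math with one thread per pixel, which is why the problem parallelizes so well.

```python
def distort(x, y, k1=-0.25, k2=0.05):
    # Forward Brown radial model in normalized image coordinates.
    # k1, k2 are hypothetical coefficients for the example.
    r2 = x * x + y * y
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * f, y * f

def undistort(xd, yd, k1=-0.25, k2=0.05, iters=20):
    # Fixed-point inversion: start from the distorted point and refine.
    # Each output pixel is independent, so this maps directly to GPU threads.
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / f, yd / f
    return x, y
```

For moderate distortion the fixed-point iteration is a contraction and converges in a few steps; a round trip through `distort` and `undistort` recovers the original point to high precision.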
Non perturbative method for radiative corrections applied to lepton-proton scattering
International Nuclear Information System (INIS)
Chahine, C.
1979-01-01
We present a new, non-perturbative method to effect radiative corrections in lepton (electron or muon)-nucleon scattering, useful for existing or planned experiments. This method relies on a spectral function derived in a previous paper, which takes into account both real soft photons and virtual ones and hence is free from infrared divergence. Hard effects are computed perturbatively and then included in the form of 'hard factors' in the non-perturbative soft formulas. Practical computations are effected using the Gauss-Jacobi integration method, which reduces the relevant integrals to a rapidly converging sequence. For the simple problem of the radiative quasi-elastic peak, we get an exponentiated form conjectured by Schwinger and found by Yennie, Frautschi and Suura. We also compare our results with the peaking approximation, which we derive independently, and with the exact one-photon emission formula of Mo and Tsai. Applications of our method to the continuous spectrum include the radiative tail of the Δ33 resonance in e + p scattering and radiative corrections to the Feynman scale-invariant F2 structure function for the kinematics of two recent high-energy muon experiments
Zhou, Meiling; Singh, Alok Kumar; Pedrini, Giancarlo; Osten, Wolfgang; Min, Junwei; Yao, Baoli
2018-03-01
We present a tunable output-frequency filter (TOF) algorithm to reconstruct the object from noisy experimental data under low-power partially coherent illumination, such as an LED, when imaging through scattering media. In the iterative algorithm, we employ Gaussian functions with different filter windows at different stages of the iteration process to reduce the corruption from experimental noise and search for a global minimum in the reconstruction. In comparison with the conventional iterative phase retrieval algorithm, we demonstrate that the proposed TOF algorithm achieves consistent and reliable reconstruction in the presence of experimental noise. Moreover, spatial resolution and distinctive features are retained in the reconstruction, since the filter is applied only to the region outside the object. The feasibility of the proposed method is proved by experimental results.
International Nuclear Information System (INIS)
Albino, Lucas D.; Santos, Gabriela R.; Ribeiro, Victor A.B.; Rodrigues, Laura N.; Weltman, Eduardo; Braga, Henrique F.
2013-01-01
The dose accuracy calculated by a treatment planning system is directly related to the chosen algorithm. Nowadays, several dose calculation algorithms are commercially available; they differ in calculation time and accuracy, especially when individual tissue densities are taken into account. The aim of this study was to compare two different calculation algorithms from iPlan®, BrainLAB, in the treatment of pituitary gland tumors with intensity-modulated radiation therapy (IMRT). These tumors are located in a region of tissues with variable electron density. The deviations from the plan with no heterogeneity correction were evaluated. For initial validation of the data entered into the planning system, an IMRT plan was simulated in an anthropomorphic phantom and the dose distribution was measured with a radiochromic film. Gamma analysis was performed on the film, comparing it with dose distributions calculated with the X-ray Voxel Monte Carlo (XVMC) algorithm and the pencil beam convolution (PBC) algorithm. Next, 33 patient plans, initially calculated with the PBC algorithm, were recalculated with the XVMC algorithm. The dose-volume histograms of the treatment volumes and organs at risk were compared. No relevant differences were found in the dose-volume histograms between XVMC and PBC. However, differences were obtained when comparing each plan with the plan without heterogeneity correction. (author)
Mohammadi, S. M.; Tavakoli-Anbaran, H.; Zeinali, H. Z.
2017-02-01
The parallel-plate free-air ionization chamber termed FAC-IR-300 was designed at the Atomic Energy Organization of Iran, AEOI. This chamber is used for low- and medium-energy X-ray dosimetry at the primary standard level. In order to evaluate the air kerma, some correction factors such as the electron-loss correction factor (ke) and the photon-scattering correction factor (ksc) are needed. The ke factor corrects for charge lost from the collecting volume, and the ksc factor corrects for photons scattered into the collecting volume. In this work, ke and ksc were estimated by Monte Carlo simulation. These correction factors are calculated for mono-energetic photons. From the simulation data, the ke and ksc values for the FAC-IR-300 ionization chamber are 1.0704 and 0.9982, respectively.
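To illustrate where such factors enter, a minimal free-air-chamber air-kerma evaluation might look like the sketch below. Only the ke and ksc values come from the abstract; the charge, air mass, W/e value, and the simplified formula (humidity and other standard corrections omitted) are assumptions for the example.

```python
W_OVER_E = 33.97      # J/C, mean energy per ion pair in dry air
K_E = 1.0704          # electron-loss correction factor (from the abstract)
K_SC = 0.9982         # photon-scattering correction factor (from the abstract)

def air_kerma(charge_C, air_mass_kg, g=0.0, extra_corrections=()):
    """Simplified free-air-chamber air kerma, Gy.

    K = (Q / m) * (W/e) / (1 - g) * k_e * k_sc * ...
    where g is the radiative-loss fraction (negligible at these energies)
    and extra_corrections collects any further multiplicative factors.
    """
    k = (charge_C / air_mass_kg) * W_OVER_E / (1.0 - g) * K_E * K_SC
    for c in extra_corrections:
        k *= c
    return k
```

Note that ke > 1 (it compensates charge that escapes the collecting volume) while ksc < 1 (it removes the in-scattered photon contribution), consistent with the simulated values.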
International Nuclear Information System (INIS)
Mohammadi, S.M.; Tavakoli-Anbaran, H.; Zeinali, H.Z.
2017-01-01
The parallel-plate free-air ionization chamber termed FAC-IR-300 was designed at the Atomic Energy Organization of Iran, AEOI. This chamber is used for low and medium X-ray dosimetry on the primary standard level. In order to evaluate the air-kerma, some correction factors such as electron-loss correction factor (ke) and photon scattering correction factor (ksc) are needed. The ke factor corrects the charge loss from the collecting volume and the ksc factor corrects the scattering of photons into the collecting volume. In this work ke and ksc were estimated by Monte Carlo simulation. These correction factors are calculated for mono-energetic photons. As a result of the simulation data, the ke and ksc values for the FAC-IR-300 ionization chamber are 1.0704 and 0.9982, respectively.
International Nuclear Information System (INIS)
Groiselle, C.; Rocchisani, J.M.; Moretti, J.L.; Dreuille, O. de; Gaillard, J.F.; Bendriem, B.
1997-01-01
SPECT quantification: a review of the different correction methods for Compton scatter, attenuation and spatial deterioration effects. The improvement of gamma cameras and of acquisition and reconstruction software opens new perspectives in terms of image quantification in nuclear medicine. To meet this challenge, numerous works have been undertaken in recent years to correct for the different physical phenomena that prevent an exact estimation of the radioactivity distribution. The main phenomena that have to be taken into account are scatter, attenuation and resolution. In this work, the authors present the physical basis of each issue, its consequences for quantification, and the main methods proposed to correct them. (authors)
Scatter-Reducing Sounding Filtration Using a Genetic Algorithm and Mean Monthly Standard Deviation
Mandrake, Lukas
2013-01-01
Retrieval algorithms like that used by the Orbiting Carbon Observatory (OCO)-2 mission generate massive quantities of data of varying quality and reliability. A computationally efficient, simple method of labeling problematic datapoints or predicting soundings that will fail is required for basic operation, given that only 6% of the retrieved data may be operationally processed. This method automatically obtains a filter designed to reduce scatter based on a small number of input features. Most machine-learning filter construction algorithms attempt to predict error in the CO2 value. By using a surrogate goal of Mean Monthly STDEV, the goal is to reduce the retrieved CO2 scatter rather than solving the harder problem of reducing CO2 error. This lends itself to improved interpretability and performance. This software reduces the scatter of retrieved CO2 values globally based on a minimum number of input features. It can be used as a prefilter to reduce the number of soundings requested, or as a post-filter to label data quality. The use of the MMS (Mean Monthly Standard deviation) provides a much cleaner, clearer filter than the standard ABS(CO2-truth) metrics previously employed by competitor methods. The software's main strength lies in a clearer (i.e., fewer features required) filter that more efficiently reduces scatter in retrieved CO2 rather than focusing on the more complex (and easily removed) bias issues.
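The surrogate metric itself is simple to compute; a minimal sketch (not the OCO-2 code, and with an invented record structure of month-key/value pairs) might look like:

```python
import math
from collections import defaultdict

def mean_monthly_stdev(records):
    """Mean monthly standard deviation (MMS) of retrieved values.

    records: iterable of (month_key, value) pairs, e.g. ("2015-06", 398.2).
    Groups retrievals by month, takes the sample standard deviation within
    each month, and averages those. A filter that lowers this value reduces
    scatter without needing per-sounding truth data.
    """
    by_month = defaultdict(list)
    for month, value in records:
        by_month[month].append(value)
    stdevs = []
    for values in by_month.values():
        if len(values) < 2:
            continue  # a single sounding has no scatter estimate
        m = sum(values) / len(values)
        var = sum((v - m) ** 2 for v in values) / (len(values) - 1)
        stdevs.append(math.sqrt(var))
    return sum(stdevs) / len(stdevs) if stdevs else float("nan")
```

A candidate filter is then scored by the MMS of the soundings it keeps, rather than by agreement with a truth value, which is the substitution the abstract describes.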
Three-loop corrections to the soft anomalous dimension in multileg scattering
Almelid, Øyvind; Gardi, Einan
2016-01-01
We present the three-loop result for the soft anomalous dimension governing long-distance singularities of multi-leg gauge-theory scattering amplitudes of massless partons. We compute all contributing webs involving semi-infinite Wilson lines at three loops and obtain the complete three-loop correction to the dipole formula. We find that non-dipole corrections appear already for three coloured partons, where the correction is a constant without kinematic dependence. Kinematic dependence appears only through conformally-invariant cross ratios for four coloured partons or more, and the result can be expressed in terms of single-valued harmonic polylogarithms of weight five. While the non-dipole three-loop term does not vanish in two-particle collinear limits, its contribution to the splitting amplitude anomalous dimension reduces to a constant, and it only depends on the colour charges of the collinear pair, thereby preserving strict collinear factorization properties. Finally we verify that our result is consi...
A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting
Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao
2014-01-01
We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813
An improved non-uniformity correction algorithm and its GPU parallel implementation
Cheng, Kuanhong; Zhou, Huixin; Qin, Hanlin; Zhao, Dong; Qian, Kun; Rong, Shenghui
2018-05-01
The performance of the SLP-THP based non-uniformity correction algorithm is seriously affected by the result of the SLP filter, which often leads to image blurring and ghosting artifacts. To address this problem, an improved SLP-THP based non-uniformity correction method with a curvature constraint is proposed. We put forward a new way to estimate the spatial low-frequency component. First, the details and contours of the input image are obtained, respectively, by minimizing the local Gaussian curvature and the mean curvature of the image surface. Then, a guided filter is utilized to combine these two parts to obtain the estimate of the spatial low-frequency component. Finally, this SLP component is fed into the SLP-THP method to achieve non-uniformity correction. The performance of the proposed algorithm was verified on several real and simulated infrared image sequences. The experimental results indicate that the proposed algorithm can reduce the non-uniformity without losing detail. A GPU-based parallel implementation that runs 150 times faster than the CPU version is also presented, which shows that the proposed algorithm has great potential for real-time application.
Effects of scatter and attenuation corrections on phantom and clinical brain SPECT
International Nuclear Information System (INIS)
Prando, S.; Robilotta, C.C.R.; Oliveira, M.A.; Alves, T.C.; Busatto Filho, G.
2002-01-01
Aim: The present work evaluated the effects of combinations of scatter and attenuation corrections on the analysis of brain SPECT. Materials and Methods: We studied images of the 3D Hoffman brain phantom and of a group of 20 depressive patients with confirmed cardiac insufficiency (CI) and 14 matched healthy controls (HC). Data were acquired with a Sophy-DST/SMV-GE dual-head camera after venous injection of 1110 MBq of 99mTc-HMPAO. Two energy windows, 15% on 140 keV and 30% centered on 108 keV of the Compton distribution, were used to obtain corresponding sets of 128x128x128 projections. Tomograms were reconstructed using OSEM (2 iterations, 8 subsets) with a Metz filter (order 8, 4 pixels FWHM PSF) and FBP with a Butterworth filter (order 10, frequency 0.7 Nyquist). Ten combinations of the Jaszczak correction (factors k=0.3, 0.4 and 0.5) and the 1st-order Chang correction (μ=0.12 cm⁻¹ and 0.159 cm⁻¹) were applied to the phantom data. In all the phantom images, the contrast and signal-to-noise ratio between 3 ROIs (ventricle, occipital and thalamus) and the cerebellum, as well as the ratio between activities in gray and white matter, were calculated and compared with the expected values. The patient images were corrected with k=0.5 and μ=0.159 cm⁻¹ and reconstructed with OSEM and the Metz filter. The images were inspected visually, and blood-flow comparisons between the CI and HC groups were performed using Statistical Parametric Mapping (SPM). Results: The best results in the analysis of contrast and activity ratios were obtained with k=0.5 and μ=0.159 cm⁻¹. The activity ratios obtained with OSEM and the Metz filter are similar to those published by Laere et al. [J. Nucl. Med. 2000;41:2051-2062]. The correction method using an effective attenuation coefficient produced visually acceptable results, but they were inadequate for quantitative evaluation. The signal-to-noise results are better with OSEM than with FBP reconstruction. The corrections in the CI patients studies
Research of beam hardening correction method for CL system based on SART algorithm
International Nuclear Information System (INIS)
Cao Daquan; Wang Yaxiao; Que Jiemin; Sun Cuili; Wei Cunfeng; Wei Long
2014-01-01
Computed laminography (CL) is a non-destructive testing technique for large objects, especially planar objects. Beam hardening artifacts are widely observed in CL systems and significantly reduce image quality. This study proposes a novel simultaneous algebraic reconstruction technique (SART) based beam hardening correction (BHC) method for the CL system, called the SART-BHC algorithm for short. The SART-BHC algorithm takes the polychromatic attenuation process into account in formulating the iterative reconstruction update. A novel projection matrix calculation method, different from the conventional cone-beam or fan-beam geometry, is also studied for the CL system. The proposed method was evaluated with simulation data and experimental data, generated using the Monte Carlo simulation toolkit Geant4 and a bench-top CL system, respectively. All projection data were reconstructed with the SART-BHC algorithm and the standard filtered back projection (FBP) algorithm. The reconstructed images show that beam hardening artifacts are greatly reduced with the SART-BHC algorithm compared to the FBP algorithm. The SART-BHC algorithm does not need any prior knowledge about the object or the X-ray spectrum, and it can also mitigate the interlayer aliasing. (authors)
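For orientation, a plain (monochromatic) SART update on a tiny dense system is sketched below; the beam-hardening modeling that distinguishes SART-BHC, and the CL-specific projection matrix, are omitted, so this is only the algebraic core the method builds on.

```python
def sart(A, b, iterations=50, lam=1.0):
    """Simultaneous Algebraic Reconstruction Technique on a dense system Ax=b.

    A: list of rows (one row per ray), b: measured projections.
    Each iteration back-projects the row-normalized residual, scaled by the
    per-column weight sums, simultaneously for all rays (unlike ART, which
    updates ray by ray). lam in (0, 2) is the relaxation factor.
    """
    rows, cols = len(A), len(A[0])
    row_sums = [sum(A[i]) or 1.0 for i in range(rows)]
    col_sums = [sum(A[i][j] for i in range(rows)) or 1.0 for j in range(cols)]
    x = [0.0] * cols
    for _ in range(iterations):
        # Residual of each ray, normalized by that ray's total weight.
        resid = [(b[i] - sum(A[i][j] * x[j] for j in range(cols))) / row_sums[i]
                 for i in range(rows)]
        for j in range(cols):
            backproj = sum(A[i][j] * resid[i] for i in range(rows))
            x[j] += lam * backproj / col_sums[j]
    return x
```

In SART-BHC the forward projection inside the residual would be replaced by a polychromatic attenuation model, which is what removes the beam-hardening bias.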
Three-dimensional ophthalmic optical coherence tomography with a refraction correction algorithm
Zawadzki, Robert J.; Leisser, Christoph; Leitgeb, Rainer; Pircher, Michael; Fercher, Adolf F.
2003-10-01
We built an optical coherence tomography (OCT) system with a rapid scanning optical delay (RSOD) line, which allows probing the full axial eye length. The system produces three-dimensional (3D) data sets that are used to generate 3D tomograms of the model eye. The raw tomographic data were processed by an algorithm based on Snell's law to correct the interface positions. The Zernike polynomial representation of the interfaces allows quantitative wave aberration measurements. 3D images of our results are presented to illustrate the capabilities of the system and the algorithm's performance. The system allows us to measure intra-ocular distances.
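The core of such a refraction correction is the vector form of Snell's law applied at each detected interface; a minimal sketch (not the authors' implementation, with illustrative refractive indices in the test) is:

```python
import math

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n (Snell's law).

    d and n are 3-tuples; n points against the incoming ray; n1 and n2 are
    the refractive indices before and after the interface. Returns the
    refracted unit direction, or None for total internal reflection.
    """
    dot = sum(a * b for a, b in zip(d, n))   # = -cos(theta_incident)
    eta = n1 / n2
    k = 1.0 - eta * eta * (1.0 - dot * dot)  # cos^2(theta_transmitted)
    if k < 0.0:
        return None                          # total internal reflection
    coef = eta * dot + math.sqrt(k)
    return tuple(eta * a - coef * b for a, b in zip(d, n))
```

In an OCT refraction correction, each A-scan ray would be bent with this rule at every segmented interface, and the optical path lengths rescaled by the local index, to recover the true interface positions.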
Heavy flavour corrections to polarised and unpolarised deep-inelastic scattering at 3-loop order
International Nuclear Information System (INIS)
Ablinger, J.; Round, M.; Schneider, C.; Hasselhuhn, A.
2016-11-01
We report on progress in the calculation of 3-loop corrections to the deep-inelastic structure functions from massive quarks in the asymptotic region of large momentum transfer Q^2. Recently completed results allow us to obtain the O(a_s^3) contributions to several heavy flavour Wilson coefficients which enter both polarised and unpolarised structure functions for lepton-nucleon scattering. In particular, we obtain the non-singlet contributions to the unpolarised structure functions F_2(x,Q^2) and xF_3(x,Q^2) and the polarised structure function g_1(x,Q^2). From these results we also obtain the heavy flavour contributions to the Gross-Llewellyn-Smith and the Bjorken sum rules.
Dual-energy digital mammography for calcification imaging: Scatter and nonuniformity corrections
International Nuclear Information System (INIS)
Kappadath, S. Cheenu; Shaw, Chris C.
2005-01-01
Mammographic images of small calcifications, which are often the earliest signs of breast cancer, can be obscured by overlapping fibroglandular tissue. We have developed and implemented a dual-energy digital mammography (DEDM) technique for calcification imaging under full-field imaging conditions using a commercially available aSi:H/CsI:Tl flat-panel based digital mammography system. The low- and high-energy images were combined using a nonlinear mapping function to cancel the tissue structures and generate the dual-energy (DE) calcification images. The total entrance-skin exposure and mean-glandular dose from the low- and high-energy images were constrained so that they were similar to screening-examination levels. To evaluate the DE calcification image, we designed a phantom using calcium carbonate crystals to simulate calcifications of various sizes (212-425 μm) overlaid with breast-tissue-equivalent material 5 cm thick with a continuously varying glandular-tissue ratio from 0% to 100%. We report on the effects of scatter radiation and nonuniformity in x-ray intensity and detector response on the DE calcification images. The nonuniformity was corrected by normalizing the low- and high-energy images with full-field reference images. Correction of scatter in the low- and high-energy images significantly reduced the background signal in the DE calcification image. Under the current implementation of DEDM, utilizing the mammography system and dose level tested, calcifications in the 300-355 μm size range were clearly visible in DE calcification images. Calcification threshold sizes decreased to the 250-280 μm size range when the visibility criteria were lowered to barely visible. Calcifications smaller than ∼250 μm were usually not visible in most cases. The visibility of calcifications with our DEDM imaging technique was limited by quantum noise, not system noise
Impact on dose and image quality of a software-based scatter correction in mammography.
Monserrat, Teresa; Prieto, Elena; Barbés, Benigno; Pina, Luis; Elizalde, Arlette; Fernández, Belén
2017-01-01
Background: In 2014, Siemens developed a new software-based scatter correction (Progressive Reconstruction Intelligently Minimizing Exposure [PRIME]), enabling grid-less digital mammography. Purpose: To compare doses and image quality between PRIME (grid-less) and standard (with anti-scatter grid) modes. Material and Methods: Contrast-to-noise ratio (CNR) was measured for various polymethylmethacrylate (PMMA) thicknesses and the dose values provided by the mammography unit were recorded. CDMAM phantom images were acquired for various PMMA thicknesses and the inverse Image Quality Figure (IQFinv) was calculated. Values of incident entrance surface air kerma (ESAK) and average glandular dose (AGD) were obtained from the DICOM header for a total of 1088 pairs of clinical cases. Two experienced radiologists subjectively compared the image quality of a total of 149 pairs of clinical cases. Results: CNR values were higher and doses were lower in PRIME mode for all thicknesses. IQFinv values in PRIME mode were lower for all thicknesses except 40 mm of PMMA equivalent, for which IQFinv was slightly greater in PRIME mode. A mean reduction of 10% in ESAK and 12% in AGD was obtained in PRIME mode with respect to standard mode. The clinical image quality of PRIME and standard acquisitions was similar in most of the cases (84% for the first radiologist and 67% for the second). Conclusion: The use of PRIME software reduces, on average, the radiation dose to the breast without affecting image quality. This reduction is greater for thinner and denser breasts.
Energy Technology Data Exchange (ETDEWEB)
Liang, X; Zhang, Z; Xie, Y [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, GuangDong (China); Gong, S; Niu, T [Department of Radiation Oncology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang (China); Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang (China); Zhou, Q [Department of Radiation Oncology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang (China)
2016-06-15
Purpose: X-ray scatter photons result in significant image quality degradation of cone-beam CT (CBCT). Measurement-based algorithms using a beam blocker directly acquire scatter samples and achieve significant improvement in the quality of the CBCT image. Among existing algorithms, the single-scan stationary beam blocker proposed previously is promising due to its simplicity and practicability. Although demonstrated to be effective on a tabletop system, the blocker fails to estimate the scatter distribution on a clinical CBCT system, mainly due to gantry wobble. In addition, the uniformly distributed blocker strips of our previous design cause primary data loss in the CBCT system and lead to image artifacts due to data insufficiency. Methods: We investigate the motion behavior of the beam blocker in each projection and design an optimized non-uniform blocker strip distribution which accounts for the data insufficiency issue. An accurate scatter estimation is then achieved from the wobble modeling. The blocker wobble curve is estimated using threshold-based segmentation algorithms in each projection. In the blocker design optimization, the quality of the final image is quantified using the number of primary-data-loss voxels, and the mesh adaptive direct search algorithm is applied to minimize the objective function. Scatter-corrected CT images are obtained using the optimized blocker. Results: The proposed method is evaluated using a Catphan 504 phantom and a head patient. On the Catphan 504, our approach reduces the average CT number error from 115 Hounsfield units (HU) to 11 HU in the selected regions of interest, and improves the image contrast by a factor of 1.45 in the high-contrast regions. On the head patient, the CT number error is reduced from 97 HU to 6 HU in the soft tissue region and image spatial non-uniformity is decreased from 27% to 5% after correction. Conclusion: The proposed optimized blocker design is practical and attractive for CBCT guided radiation
International Nuclear Information System (INIS)
Liang, X; Zhang, Z; Xie, Y; Gong, S; Niu, T; Zhou, Q
2016-01-01
Purpose: X-ray scatter photons result in significant image quality degradation of cone-beam CT (CBCT). Measurement-based algorithms using a beam blocker directly acquire scatter samples and achieve significant improvement in the quality of the CBCT image. Among existing algorithms, the single-scan stationary beam blocker proposed previously is promising due to its simplicity and practicability. Although demonstrated to be effective on a tabletop system, the blocker fails to estimate the scatter distribution on a clinical CBCT system, mainly due to gantry wobble. In addition, the uniformly distributed blocker strips of our previous design cause primary data loss in the CBCT system and lead to image artifacts due to data insufficiency. Methods: We investigate the motion behavior of the beam blocker in each projection and design an optimized non-uniform blocker strip distribution which accounts for the data insufficiency issue. An accurate scatter estimation is then achieved from the wobble modeling. The blocker wobble curve is estimated using threshold-based segmentation algorithms in each projection. In the blocker design optimization, the quality of the final image is quantified using the number of primary-data-loss voxels, and the mesh adaptive direct search algorithm is applied to minimize the objective function. Scatter-corrected CT images are obtained using the optimized blocker. Results: The proposed method is evaluated using a Catphan 504 phantom and a head patient. On the Catphan 504, our approach reduces the average CT number error from 115 Hounsfield units (HU) to 11 HU in the selected regions of interest, and improves the image contrast by a factor of 1.45 in the high-contrast regions. On the head patient, the CT number error is reduced from 97 HU to 6 HU in the soft tissue region and image spatial non-uniformity is decreased from 27% to 5% after correction. Conclusion: The proposed optimized blocker design is practical and attractive for CBCT guided radiation
Research on correction algorithm of laser positioning system based on four quadrant detector
Gao, Qingsong; Meng, Xiangyong; Qian, Weixian; Cai, Guixia
2018-02-01
This paper first introduces the basic principle of the four-quadrant detector, and a laser positioning experimental system is built around it. In practical use, a four-quadrant laser positioning system suffers not only from background light interference and detector dark-current noise, but also from random noise, system stability issues, and spot-equivalent error, none of which can be ignored, so system calibration and correction are very important. This paper analyzes the various contributions to the system positioning error and then proposes an algorithm for correcting it. Simulation and experimental results show that the correction algorithm reduces the effect of system error on positioning and improves the positioning accuracy.
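For context, the standard spot-position estimate from the four photocurrents, before any error correction is applied, can be sketched as follows; the quadrant labeling and the scale factor k (which depends on spot size and is calibrated experimentally) are illustrative assumptions.

```python
def quad_cell_position(qa, qb, qc, qd, k=1.0):
    """Spot displacement from the four quadrant photocurrents.

    Assumed quadrant layout (looking at the detector):  B | A
                                                        --+--
                                                        C | D
    x grows toward A/D, y toward A/B. Normalizing by the total current
    makes the estimate insensitive to overall laser power fluctuations.
    """
    total = qa + qb + qc + qd
    if total <= 0:
        raise ValueError("no light on detector")
    x = k * ((qa + qd) - (qb + qc)) / total
    y = k * ((qa + qb) - (qc + qd)) / total
    return x, y
```

The error sources listed in the abstract (background light, dark current, spot-equivalent error) all perturb the four currents, which is why the raw estimate above needs the calibration and correction the paper develops.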
Intercomparison of attenuation correction algorithms for single-polarized X-band radars
Lengfeld, K.; Berenguer, M.; Sempere Torres, D.
2018-03-01
Attenuation due to liquid water is one of the largest uncertainties in radar observations. The effects of attenuation are generally inversely proportional to the wavelength, i.e. observations from X-band radars are more affected by attenuation than those from C- or S-band systems. On the other hand, X-band radars can measure precipitation fields at higher temporal and spatial resolution and are more mobile and easier to install due to their smaller antennas. A first algorithm for attenuation correction in single-polarized systems was proposed by Hitschfeld and Bordan (1954) (HB), but it becomes unstable in the presence of small errors (e.g. in the radar calibration) and strong attenuation. Therefore, methods have been developed that constrain the attenuation correction to keep the algorithm stable, using e.g. surface echoes (for space-borne radars) and mountain returns (for ground radars) as a final value (FV), or adjustment of the radar constant (C) or the coefficient α. In the absence of mountain returns, measurements from C- or S-band radars can be used to constrain the correction. All these methods are based on the statistical relation between reflectivity and specific attenuation. Another way to correct for attenuation in X-band radar observations is to use additional information from less attenuated radar systems, e.g. the ratio between X-band and C- or S-band radar measurements. Lengfeld et al. (2016) proposed such a method based on isotonic regression of the ratio between X- and C-band radar observations along the radar beam. This study presents a comparison of the original HB algorithm, three algorithms based on the statistical relation between reflectivity and specific attenuation, and two methods incorporating additional information from C-band radar measurements. Their performance in two precipitation events (one mainly convective, the other stratiform) shows that a restriction of the HB algorithm is necessary to avoid instabilities. A comparison with vertically pointing
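A minimal gate-by-gate HB correction can be sketched as below; the k-Z power-law coefficients are illustrative values, not ones fitted in the study, and the sketch omits the stabilizing constraints (final value, radar-constant adjustment) that the compared methods add to prevent divergence.

```python
def hitschfeld_bordan(z_measured_dbz, gate_km=0.25, a=3e-4, b=0.78):
    """Gate-by-gate HB attenuation correction for a single-pol radar ray.

    z_measured_dbz: measured reflectivity (dBZ) along the beam, closest
    gate first. Uses a power law k = a * Z**b with k in dB/km and Z in
    linear units (mm^6/m^3); a and b are illustrative X-band values.
    Returns corrected dBZ. The scheme can diverge when the accumulated
    attenuation is large, which is why the restricted variants exist.
    """
    pia = 0.0        # two-way path-integrated attenuation so far, dB
    corrected = []
    for z_dbz in z_measured_dbz:
        z_corr = z_dbz + pia                      # restore attenuation lost so far
        corrected.append(z_corr)
        z_lin = 10.0 ** (z_corr / 10.0)           # dBZ -> linear Z
        k = a * z_lin ** b                        # specific attenuation, dB/km
        pia += 2.0 * k * gate_km                  # two-way loss over this gate
    return corrected
```

Because each gate's correction feeds the next gate's attenuation estimate, any calibration bias compounds along the ray, which is the instability the abstract refers to.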
Energy Technology Data Exchange (ETDEWEB)
Bluemlein, J.; Hasselhuhn, A.; Kovacikova, P.; Moch, S.
2011-04-15
We provide a fast and precise Mellin-space implementation of the O(α_s) heavy flavor Wilson coefficients for charged current deep inelastic scattering processes. They are of importance for the extraction of the strange quark distribution in neutrino-nucleon scattering and the QCD analyses of the HERA charged current data. Errors in the literature are corrected. We also discuss a series of more general parton parameterizations in Mellin space. (orig.)
Hybrid wavefront sensing and image correction algorithm for imaging through turbulent media
Wu, Chensheng; Robertson Rzasa, John; Ko, Jonathan; Davis, Christopher C.
2017-09-01
It is well known that passive image correction of turbulence distortions often involves using geometry-dependent deconvolution algorithms. On the other hand, active imaging techniques using adaptive optic correction should use the distorted wavefront information for guidance. Our work shows that a hybrid hardware-software approach is possible to obtain accurate and highly detailed images through turbulent media. The processing algorithm also requires far fewer iteration steps than conventional image processing algorithms. In our proposed approach, a plenoptic sensor is used as a wavefront sensor to guide post-stage image correction on a high-definition zoomable camera. Conversely, we show that given the ground truth of the highly detailed image and the plenoptic imaging result, we can generate an accurate prediction of the blurred image on a traditional zoomable camera. Similarly, the ground truth combined with the blurred image from the zoomable camera would provide the wavefront conditions. In application, our hybrid approach can be used as an effective way to conduct object recognition in a turbulent environment where the target has been significantly distorted or is even unrecognizable.
A DSP-based neural network non-uniformity correction algorithm for IRFPA
Liu, Chong-liang; Jin, Wei-qi; Cao, Yang; Liu, Xiu
2009-07-01
An effective neural network non-uniformity correction (NUC) algorithm based on a DSP is proposed in this paper. The non-uniform response of infrared focal plane array (IRFPA) detectors produces corrupted images with fixed-pattern noise (FPN). We introduce and analyze the artificial neural network scene-based non-uniformity correction (SBNUC) algorithm, and describe the design of a DSP-based NUC development platform for IRFPAs. The hardware platform has low power consumption, with the 32-bit fixed-point DSP TMS320DM643 as its core processor. The dependability and extensibility of the software are improved by the DSP/BIOS real-time operating system and Reference Framework 5. To achieve real-time performance, the calibration-parameter update runs at a lower task priority than video input and output in DSP/BIOS, so updating the calibration parameters does not affect the video streams. The work flow of the system and the strategy for real-time operation are introduced. Experiments on real infrared imaging sequences demonstrate that this algorithm requires only a few frames to obtain high quality corrections. It is computationally efficient and suitable for all kinds of non-uniformity.
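A single update step of a scene-based NUC of this family (a Scribner-style LMS sketch; the learning rate and the 4-neighbour average as the desired output are illustrative assumptions, not details taken from the paper) might look like:

```python
import numpy as np

def sbnuc_update(frame, gain, offset, lr=0.01):
    """One LMS step of a scene-based non-uniformity correction (sketch).

    Each pixel holds a gain and an offset; the 'neural network' target
    for each corrected pixel is the spatial average of its neighbours,
    and the parameters follow a steepest-descent (LMS) update."""
    corrected = gain * frame + offset
    # desired output: 4-neighbour spatial average of the corrected frame
    desired = (np.roll(corrected, 1, 0) + np.roll(corrected, -1, 0) +
               np.roll(corrected, 1, 1) + np.roll(corrected, -1, 1)) / 4.0
    err = corrected - desired
    gain -= lr * err * frame      # LMS updates toward the local average
    offset -= lr * err
    return corrected, gain, offset
```

On a perfectly uniform scene the error term vanishes and the parameters stay fixed, which is why such algorithms need a few frames of scene motion to converge.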
International Nuclear Information System (INIS)
Sanchez Catasus, C.; Morales, L.; Aguila, A.
2002-01-01
Aim: It is well known that some patients with temporal lobe epilepsy (TLE) show normal perfusion in interictal SPECT studies. The aim of this research was to evaluate whether scatter radiation has some influence on this kind of result. Materials and Methods: We studied 15 patients with TLE diagnosed clinically and by video-EEG monitoring with surface electrodes (11 left TLE, 4 right TLE), who showed normal perfusion on interictal brain 99mTc-HMPAO SPECT. The SPECT data were reconstructed by filtered backprojection without scatter correction (A). The same SPECT data were reconstructed after the projections were corrected by the dual energy window method of scatter correction (B). Attenuation was corrected in all cases using the first-order Chang method. For image groups A and B, cerebellum perfusion ratios were calculated on irregular regions of interest (ROI) drawn on the anterior (ATL), lateral (LTL), mesial (MTL) and whole temporal lobe (WTL). To evaluate the influence of scatter radiation, the cerebellum perfusion ratios of each subject were compared with a database of 10 normal subjects, with and without scatter correction, using z-score analysis. Results: In group A, the z-score was less than 2 in all cases. In group B, the z-score was more than 2 in 6 cases, 4 in the MTL (3 left, 1 right) and 2 in the left LTL, which were coincident with the EEG localization. All images of group B showed better contrast than images of group A. Conclusions: These results suggest that scatter correction could improve the sensitivity of interictal brain SPECT for identifying the epileptic focus in patients with TLE.
Development of a 3D muon disappearance algorithm for muon scattering tomography
Blackwell, T. B.; Kudryavtsev, V. A.
2015-05-01
Upon passing through a material, muons lose energy, scatter off nuclei and atomic electrons, and can stop in the material. Muons will more readily lose energy in higher density materials. Therefore multiple muon disappearances within a localized volume may signal the presence of high-density materials. We have developed a new technique that improves the sensitivity of standard muon scattering tomography. This technique exploits these muon disappearances to perform non-destructive assay of an inspected volume. Muons that disappear have their track evaluated using a 3D line extrapolation algorithm, which is in turn used to construct a 3D tomographic image of the inspected volume. Results of Monte Carlo simulations that measure muon disappearance in different types of target materials are presented. The ability to differentiate between different density materials using the 3D line extrapolation algorithm is established. Finally the capability of this new muon disappearance technique to enhance muon scattering tomography techniques in detecting shielded HEU in cargo containers has been demonstrated.
International Nuclear Information System (INIS)
Kojima, Akihiro; Matsumoto, Masanori; Ohyama, Yoichi; Tomiguchi, Seiji; Kira, Mitsuko; Takahashi, Mutsumasa.
1997-01-01
To investigate the validity of scatter correction by the TEW method in 201Tl imaging, we performed an experimental study using a gamma camera capable of performing the TEW method and a plate source with a defect. Images were acquired with the triple energy window recommended by the gamma camera manufacturer. The energy spectra showed that backscattered photons were included within the lower sub-energy window and the main energy window, and that the spectral shapes in the upper half of the photopeak region (70 keV) were not changed greatly by the source shape or the thickness of the scattering materials. The scatter fractions calculated from the energy spectra, together with visual observation and the contrast values measured at the defect on planar images, also showed that substantial primary photons were included in the upper sub-energy window. In the TEW method for scatter correction, the two sub-energy windows are expected to be placed in the part of the energy region where the total counts consist mainly of scattered photons. Therefore, it is necessary to investigate the use of the upper sub-energy window for scatter correction by the TEW method in 201Tl imaging. (author)
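The TEW estimate itself is a simple trapezoidal combination of the counts in the two narrow sub-windows flanking the photopeak; a sketch (the window widths and counts below are illustrative, not from the study):

```python
def tew_scatter(count_main, count_lower, count_upper,
                w_main, w_lower, w_upper):
    """Triple-energy-window (TEW) primary-count estimate (sketch).

    The scatter in the main window is approximated by the trapezoid
    spanned by the count rates per keV in the lower and upper
    sub-windows, scaled to the main window width."""
    scatter = (count_lower / w_lower + count_upper / w_upper) * w_main / 2.0
    return max(count_main - scatter, 0.0)   # clamp at zero counts
```

The study's point is visible in this formula: if the upper sub-window contains substantial primary photons (as observed for 201Tl), `count_upper` is inflated and the scatter term overestimates, so the placement of that window needs scrutiny.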
Busi, Matteo; Olsen, Ulrik L.; Knudsen, Erik B.; Frisvad, Jeppe R.; Kehres, Jan; Dreier, Erik S.; Khalil, Mohamad; Haldrup, Kristoffer
2018-03-01
Spectral computed tomography is an emerging imaging method that uses recently developed energy-discriminating photon-counting detectors (PCDs). This technique enables measurements in isolated high-energy ranges, in which the dominant interaction between the x-rays and the sample is incoherent scattering. The scattered radiation causes a loss of contrast in the results, and its correction has proven to be a complex problem due to its dependence on energy, material composition, and geometry. Monte Carlo simulations can utilize a physical model to estimate the scattering contribution to the signal, at the cost of high computational time. We present a fast Monte Carlo simulation tool, based on McXtrace, to predict the energy-resolved radiation scattered and absorbed by objects of complex shapes. We validate the tool through measurements using a CdTe single PCD (Multix ME-100) and use it for scattering correction in a simulation of a spectral CT. We found the correction to account for up to 7% relative amplification in the reconstructed linear attenuation. It is a useful tool for x-ray CT to obtain a more accurate material discrimination, especially in the high-energy range, where incoherent scattering interactions become prevalent (>50 keV).
QCD and power corrections to sum rules in deep-inelastic lepton-nucleon scattering
International Nuclear Information System (INIS)
Ravindran, V.; Neerven, W.L. van
2001-01-01
In this paper we study QCD and power corrections to sum rules which show up in deep-inelastic lepton-hadron scattering. Furthermore, we make a distinction between fundamental sum rules, which can be derived from quantum field theory, and those of phenomenological origin. Using current algebra techniques, the fundamental sum rules can be expressed as expectation values of (partially) conserved (axial-)vector currents sandwiched between hadronic states. These expectation values yield the quantum numbers of the corresponding hadron, which are determined by the underlying flavour group SU(n)_F. In this case one can show that there exists an intimate relation between the appearance of power and QCD corrections. The above features do not hold for the phenomenological sum rules, hereafter called non-fundamental. They have no foundation in quantum field theory and mostly depend on certain assumptions made for the structure functions, like super-convergence relations or the parton model. Therefore only the fundamental sum rules provide us with a stringent test of QCD.
A baseline correction algorithm for Raman spectroscopy by adaptive knots B-spline
International Nuclear Information System (INIS)
Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wang, Xiu-fen; He, Hao; Zuo, Yong
2015-01-01
The Raman spectroscopy technique is a powerful and non-invasive technique for molecular fingerprint detection and has been widely used in many areas, such as food safety, drug safety, and environmental testing. However, Raman signals can easily be corrupted by a fluorescent background; we therefore present a baseline correction algorithm to suppress the fluorescent background. In this algorithm, the background of the Raman signal is suppressed by fitting a curve, called a baseline, using a cyclic approximation method. Instead of traditional polynomial fitting, we use a B-spline as the fitting function because of its low order and smoothness, which avoid under-fitting and over-fitting effectively. In addition, we present an automatic adaptive knot generation method to replace traditional uniform knots. The algorithm achieves the desired performance for most Raman spectra with varying baselines without any user input or preprocessing step. In the simulation, three kinds of fluorescent background lines were introduced to test the effectiveness of the proposed method. We show that two real Raman spectra (parathion-methyl and colza oil) can be detected and their baselines corrected by the proposed method. (paper)
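The cyclic-approximation loop can be sketched as follows. Note the stand-in: a low-order polynomial replaces the paper's adaptive-knot B-spline as the smooth fit, and the iteration count is an illustrative assumption; only the clip-and-refit cycle is the point here:

```python
import numpy as np

def baseline_cyclic(x, y, fit=None, n_iter=20):
    """Cyclic-approximation baseline estimation (sketch).

    Each pass fits a smooth curve to the working signal, then clips the
    signal down to the fit so that peaks are progressively excluded and
    the fit relaxes onto the fluorescent background."""
    if fit is None:
        # stand-in smooth fit; the paper uses an adaptive-knot B-spline
        fit = lambda xs, ys: np.polyval(np.polyfit(xs, ys, 3), xs)
    work = np.asarray(y, float).copy()
    base = work
    for _ in range(n_iter):
        base = fit(x, work)
        work = np.minimum(work, base)   # keep points at or below the fit
    return base
```

Because Raman peaks only ever push the signal above the background, the one-sided clipping makes the fit converge from above onto the baseline.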
International Nuclear Information System (INIS)
Ogino, Takashi; Egawa, Sunao
1991-01-01
New algorithms of CT value correction for reconstructing a radiotherapy simulation image from axial CT images were developed. One, designated the plane weighting method, corrects the CT value in proportion to the position of the beam element passing through the voxel. The other, designated the solid weighting method, corrects the CT value in proportion to the length of the beam element passing through the voxel and the volume of the voxel. Phantom experiments showed fair spatial resolution in the transverse direction. In the longitudinal direction, however, spatial resolution finer than the slice thickness could not be obtained. Contrast resolution was equivalent for both methods. In patient studies, the reconstructed radiotherapy simulation image was visually comparable in density resolution to a simulation film taken with an X-ray simulator. (author)
Goldengorin, Boris; Vink, Marius de
1999-01-01
The Data-Correcting Algorithm (DCA) corrects the data of a hard problem instance in such a way that we obtain an instance of a well solvable special case. For a given prescribed accuracy of the solution, the DCA uses a branch and bound scheme to make sure that the solution of the corrected instance
International Nuclear Information System (INIS)
Nielson, K.K.; Garcia, S.R.
1976-09-01
Two methods are described for computing multielement x-ray absorption corrections for aerosol samples collected on IPC-1478 and Whatman 41 filters. The first relies on scatter peak intensities and scattering cross sections to estimate the mass of light elements (Z less than 14) in the sample. This mass is used with the measured heavy element (Z greater than or equal to 14) masses to iteratively compute sample absorption corrections. The second method utilizes a linear function of ln(μ) vs ln(E) determined from the scatter peak ratios and estimates the sample mass from the scatter peak intensities. Both methods assume a homogeneous depth distribution of aerosol in a fraction of the front of the filters, and this assumption is evaluated with respect to an exponential aerosol depth distribution. Penetration depths for various real, synthetic and liquid aerosols were measured. Aerosol penetration appeared constant over a 1.1 mg/cm2 range of sample loading for IPC filters, while absorption corrections for Si and S varied by a factor of two over the same loading range. Corrections computed by the two methods were compared with measured absorption corrections and with atomic absorption analyses of the same samples.
A systematic approach to robust preconditioning for gradient-based inverse scattering algorithms
International Nuclear Information System (INIS)
Nordebo, Sven; Fhager, Andreas; Persson, Mikael; Gustafsson, Mats
2008-01-01
This paper presents a systematic approach to robust preconditioning for gradient-based nonlinear inverse scattering algorithms. In particular, one- and two-dimensional inverse problems are considered where the permittivity and conductivity profiles are unknown and the input data consist of the scattered field over a certain bandwidth. A time-domain least-squares formulation is employed and the inversion algorithm is based on a conjugate gradient or quasi-Newton algorithm together with an FDTD electromagnetic solver. A Fisher information analysis is used to estimate the Hessian of the error functional. A robust preconditioner is then obtained by incorporating a parameter scaling such that the scaled Fisher information has a unit diagonal. By improving the conditioning of the Hessian, the convergence rate of the conjugate gradient or quasi-Newton methods is improved. The preconditioner is robust in the sense that the scaling, i.e. the diagonal Fisher information, is virtually invariant to the numerical resolution and the discretization model that is employed. Numerical examples of image reconstruction are included to illustrate the efficiency of the proposed technique.
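The unit-diagonal scaling described above can be sketched as follows, assuming a least-squares setting where the Fisher information is approximated by J^T J for a Jacobian J (the function and variable names are illustrative):

```python
import numpy as np

def fisher_preconditioner(jacobian):
    """Diagonal preconditioner from the Fisher information F = J^T J (sketch).

    Scaling parameter i by 1/sqrt(F_ii) yields a scaled Fisher matrix
    with a unit diagonal, which improves the conditioning of the Hessian
    seen by conjugate-gradient or quasi-Newton iterations."""
    F = jacobian.T @ jacobian
    d = np.sqrt(np.diag(F))
    scale = 1.0 / np.where(d > 0, d, 1.0)   # guard zero-sensitivity parameters
    F_scaled = F * np.outer(scale, scale)   # diag(scale) @ F @ diag(scale)
    return scale, F_scaled
```

Because only the diagonal of F is needed, the scaling is cheap to compute and, as the abstract notes, largely insensitive to the discretization used.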
Directory of Open Access Journals (Sweden)
Jesús A. Prieto-Amparan
2018-02-01
A key step in the processing of satellite imagery is the radiometric correction of images to account for reflectance that water vapor, atmospheric dust, and other atmospheric elements add to the images, causing imprecision in variables of interest estimated at the earth's surface level. That issue is important when performing spatiotemporal analyses to determine ecosystems' productivity. In this study, three correction methods were applied to satellite images for the period 2010–2014: Atmospheric Correction for Flat Terrain 2 (ATCOR2), Fast Line-of-Sight Atmospheric Analysis of Spectral Hypercubes (FLAASH), and Dark Object Subtraction 1 (DOS1). The images included 12 sub-scenes from the Landsat Thematic Mapper (TM) and the Operational Land Imager (OLI) sensors, corresponding to three Permanent Monitoring Sites (PMS) of grasslands, 'Teseachi', 'Eden', and 'El Sitio', located in the state of Chihuahua, Mexico. After the corrections were applied to the images, they were evaluated in terms of their precision for biomass estimation. For that, biomass production was measured during the study period at the three PMS to calibrate production models developed with simple and multiple linear regression (SLR and MLR) techniques. When the estimations were made with MLR, DOS1 obtained an R2 of 0.97 (p < 0.05) for 2012 and values greater than 0.70 (p < 0.05) during 2013–2014. The rest of the algorithms did not show significant results, and DOS1, which is the simplest algorithm, proved the best biomass estimator. Thus, in the multitemporal analysis of grassland based on spectral information, it is not necessary to apply complex correction procedures. The maps of biomass production, derived from images corrected with DOS1, can be used as a reference point for the assessment of grassland condition, as well as to determine the grazing capacity and thus the potential animal production in such ecosystems.
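The dark-object idea behind DOS1 can be sketched per band as follows. This is a deliberately minimal sketch: the percentile threshold and the 1%-reflectance offset are illustrative assumptions, and the full DOS1 procedure operates on calibrated radiance using sensor gains, offsets, and solar irradiance:

```python
import numpy as np

def dos1_correct(band, one_percent=0.01):
    """Dark Object Subtraction sketch for one image band.

    Estimate the additive path-radiance (haze) term from the darkest
    pixels, here the 1st-percentile value, assume true dark objects
    retain about 1% reflectance, and subtract the haze from the band."""
    dark_dn = np.percentile(band, 1)
    haze = dark_dn - one_percent * band.max()   # crude 1%-reflectance floor
    return np.clip(band - haze, 0, None)        # no negative values
```

The simplicity is the point of the study's conclusion: a per-band subtraction with no atmospheric modelling was enough for the multitemporal biomass analysis.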
Cell light scattering characteristic numerical simulation research based on FDTD algorithm
Lin, Xiaogang; Wan, Nan; Zhu, Hao; Weng, Lingdong
2017-01-01
In this study, the finite-difference time-domain (FDTD) algorithm was used to work out the cell light scattering problem. Before carrying out the simulation comparison, it is necessary to identify the changes or differences between normal cells and abnormal cells, which may be cancerous or maldeveloped. Preparation for the simulation consisted of building a simple cell model, comprising organelles, a nucleus, and cytoplasm, and choosing a suitable mesh precision. Setting up the total-field/scattered-field source as the excitation source and a far-field projection analysis group is also important. Each step is grounded in mathematical principles such as numerical dispersion, the perfectly matched layer boundary condition, and near-to-far-field extrapolation. The simulation results indicated that a change in the position of the nucleus increases the backscattering intensity, and that significant differences in the peak scattering intensity may result from changes in the size of the cytoplasm. The study may help identify regularities in the simulation results that could be meaningful for the early diagnosis of cancers.
The whole space three-dimensional magnetotelluric inversion algorithm with static shift correction
Zhang, K.
2016-12-01
Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high-frequency range, and the correction is completed within the inversion. The method is an automatic computer processing technique at no extra cost, and it avoids additional field work and indoor processing while giving good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure, improved the computational efficiency, reduced the memory requirements, and added topographic and marine factors, so the 3D inversion runs on an ordinary PC with high efficiency and accuracy. All MT data from surface stations, seabed stations, and underground stations can be used in the inversion algorithm. The verification and an application example of the 3D inversion algorithm are shown in Figure 1. The comparison in Figure 1 shows that the inversion model reflects all the anomalous bodies and the terrain clearly regardless of the type of data (impedance, tipper, or impedance and tipper), and the resolution of the bodies' boundaries can be improved by using tipper data. The algorithm is very effective for terrain inversion, and so it is very useful for studying the continental shelf with continuous exploration of land, marine, and underground settings. The three-dimensional electrical model of the ore zone reflects the basic information on strata, rock, and structure. Although it cannot indicate the ore body position directly, important clues for prospecting are provided by the delineation of the diorite pluton uplift range. The test results show that the high quality of
Energy Technology Data Exchange (ETDEWEB)
Appaji Gowda, S.B. [Department of Studies in Physics, Manasagangothri, University of Mysore, Mysore 570006 (India); Umesh, T.K. [Department of Studies in Physics, Manasagangothri, University of Mysore, Mysore 570006 (India)]. E-mail: tku@physics.uni-mysore.ac.in
2006-01-15
Dispersion corrections to the forward Rayleigh scattering amplitudes of tantalum, mercury and lead in the photon energy range 24-136 keV have been determined by numerical evaluation of the dispersion integral that relates them, through the optical theorem, to the photoeffect cross sections. The photoeffect cross sections were extracted by subtracting the coherent and incoherent scattering contributions from the total attenuation cross sections measured with a high-resolution high-purity germanium detector in a narrow-beam good-geometry setup. The real part of the dispersion correction, after including the relativistic corrections calculated by Kissel and Pratt (S-matrix approach) or by Creagh and McAuley (multipole corrections), is in better agreement with the available theoretical values.
A necessary condition for applying MUSIC algorithm in limited-view inverse scattering problem
International Nuclear Information System (INIS)
Park, Taehoon; Park, Won-Kwang
2015-01-01
Numerical simulations have shown that the MUltiple SIgnal Classification (MUSIC) algorithm can be applied to limited-view inverse scattering problems; however, the application is somewhat heuristic. In this contribution, we identify a necessary condition of MUSIC for imaging a collection of small, perfectly conducting cracks. This is based on the fact that the MUSIC imaging functional can be represented as an infinite series of Bessel functions of integer order of the first kind. Numerical experiments with noisy synthetic data support our investigation. (paper)
Simulation of small-angle scattering patterns using a CPU-efficient algorithm
Anitas, E. M.
2017-12-01
Small-angle scattering (of neutrons, x-rays or light; SAS) is a well-established experimental technique for structural analysis of disordered systems at nano and micro scales. For complex systems, such as super-molecular assemblies or protein molecules, analytic solutions of the SAS intensity are generally not available. Thus, a frequent approach to simulate the corresponding patterns is to use a CPU-efficient version of the Debye formula. For this purpose, in this paper we implement the well-known DALAI algorithm in Mathematica software. We present calculations for a series of 2D Sierpinski gaskets and pentaflakes, obtained from chaos game representation.
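The underlying Debye formula that such CPU-efficient schemes accelerate can be written down directly. Below is a brute-force O(N^2) reference for identical point scatterers, I(q) = sum_ij sin(q r_ij)/(q r_ij); this is a sketch of the formula itself, not the DALAI implementation:

```python
import numpy as np

def debye_intensity(points, q_values):
    """Debye scattering formula for identical point scatterers (sketch).

    I(q) = sum over all pairs (i, j) of sinc(q * r_ij), where r_ij is
    the pair distance; the i == j terms contribute sinc(0) = 1 each."""
    pts = np.asarray(points, float)
    diff = pts[:, None, :] - pts[None, :, :]
    r = np.sqrt((diff ** 2).sum(-1))            # pairwise distance matrix
    intensities = []
    for q in q_values:
        qr = q * r
        with np.errstate(invalid="ignore", divide="ignore"):
            s = np.where(qr > 0, np.sin(qr) / qr, 1.0)  # sinc, sinc(0) = 1
        intensities.append(s.sum())
    return np.array(intensities)
```

In the limit q -> 0 every pair term tends to 1, so I(0) = N^2, the usual forward-scattering normalization; efficient variants avoid the explicit double sum by binning the pair distances.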
International Nuclear Information System (INIS)
Ji Zhilong; Ma Yuanwei; Wang Dezhong
2014-01-01
Background: In atmospheric diffusion models for radioactive nuclides, the empirical dispersion coefficients were deduced under certain experimental conditions, and their difference from nuclear accident conditions is a source of deviation. A better estimate of a radioactive nuclide's actual dispersion process can be obtained by correcting the dispersion coefficients with observation data, and the Genetic Algorithm (GA) is an appropriate method for this correction procedure. Purpose: This study analyzes the influence of the fitness function on the correction procedure and on the forecast ability of the diffusion model. Methods: GA, coupled with a Lagrangian dispersion model, was used in a numerical simulation to compare the impact of four fitness functions on the correction result. Results: In the numerical simulation, the fitness function that takes observation deviation into consideration stands out when significant deviation exists in the observed data. After performing the correction procedure on the Kincaid experiment data, a significant boost was observed in the diffusion model's forecast ability. Conclusion: The results show that, in order to improve the forecast ability of dispersion models using GA, observation data should be given different weights in the fitness function corresponding to their errors. (authors)
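A fitness function of the kind the conclusion recommends, with each observation weighted by its measurement deviation, can be sketched as a generic weighted chi-square form (this is an illustrative construction, not the study's exact definition):

```python
def weighted_fitness(predicted, observed, sigma):
    """GA fitness with observation error taken into account (sketch).

    Each squared misfit term is down-weighted by the measurement
    deviation sigma of that observation, so noisy observations pull
    less on the corrected dispersion coefficients."""
    chi2 = sum(((p - o) / s) ** 2
               for p, o, s in zip(predicted, observed, sigma))
    return 1.0 / (1.0 + chi2)   # higher fitness means better match
```

With uniform sigma this reduces to an unweighted least-squares fitness, which is exactly the variant the study found fragile when the observed data contain significant deviations.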
International Nuclear Information System (INIS)
Duo, J. I.; Azmy, Y. Y.
2007-01-01
A new method, the Singular Characteristics Tracking algorithm, is developed to account for potential non-smoothness across the singular characteristics in the exact solution of the discrete ordinates approximation of the transport equation. Numerical results show improved rate of convergence of the solution to the discrete ordinates equations in two spatial dimensions with isotropic scattering using the proposed methodology. Unlike the standard Weighted Diamond Difference methods, the new algorithm achieves local convergence in the case of discontinuous angular flux along the singular characteristics. The method also significantly reduces the error for problems where the angular flux presents discontinuous spatial derivatives across these lines. For purposes of verifying the results, the Method of Manufactured Solutions is used to generate analytical reference solutions that permit estimating the local error in the numerical solution. (authors)
International Nuclear Information System (INIS)
Bai, J.; Hashimoto, J.; Suzuki, T.; Nakahara, T.; Kubo, A.; Ohira, M.; Takao, M.; Ogawa, K.
2007-01-01
The aims of this study were to elucidate the feasibility of scatter correction for improving the quantitative accuracy of the heart-to-mediastinum (H/M) ratio in I-123 MIBG imaging and to clarify whether the H/M ratio calculated from the scatter-corrected image improves the accuracy of differentiating patients with Parkinsonism from those with other neurological disorders. The H/M ratio was calculated using the counts from planar images processed with and without scatter correction in a phantom and in patients. The triple energy window (TEW) method was used for scatter correction. Fifty-five patients were enrolled in the clinical study. Receiver operating characteristic (ROC) curve analysis was used to evaluate diagnostic performance. The H/M ratio was found to increase after scatter correction in the phantom simulating normal cardiac uptake, while no changes were observed in the phantom simulating no uptake. Scatter correction stabilized the H/M ratio by eliminating the influence of scattered photons originating from the liver, especially in the condition of no cardiac uptake. Similarly, scatter correction increased the H/M ratio in conditions other than Parkinson's disease but produced no change in Parkinson's disease itself, thereby widening the differences in the H/M ratios between the two groups. The overall power of the test did not show any significant improvement after scatter correction in differentiating patients with Parkinsonism. Based on the results of this study, it is concluded that scatter correction improves the quantitative accuracy of the H/M ratio in MIBG imaging but does not offer any significant incremental diagnostic value over conventional imaging (without scatter correction). Nevertheless, the scatter correction technique deserves special consideration in order to make the test more robust and obtain stable H/M ratios. (author)
International Nuclear Information System (INIS)
Bose, Supratik; Shukla, Himanshu; Maltz, Jonathan
2010-01-01
Purpose: In current image guided pretreatment patient position adjustment methods, image registration is used to determine alignment parameters. Since most positioning hardware lacks the full six degrees of freedom (DOF), accuracy is compromised. The authors show that such compromises are often unnecessary when one models the planned treatment beams as part of the adjustment calculation process. The authors present a flexible algorithm for determining optimal realizable adjustments for both step-and-shoot and arc delivery methods. Methods: The beam shape model is based on the polygonal intersection of each beam segment with the plane in pretreatment image volume that passes through machine isocenter perpendicular to the central axis of the beam. Under a virtual six-DOF correction, ideal positions of these polygon vertices are computed. The proposed method determines the couch, gantry, and collimator adjustments that minimize the total mismatch of all vertices over all segments with respect to their ideal positions. Using this geometric error metric as a function of the number of available DOF, the user may select the most desirable correction regime. Results: For a simulated treatment plan consisting of three equally weighted coplanar fixed beams, the authors achieve a 7% residual geometric error (with respect to the ideal correction, considered 0% error) by applying gantry rotation as well as translation and isocentric rotation of the couch. For a clinical head-and-neck intensity modulated radiotherapy plan with seven beams and five segments per beam, the corresponding error is 6%. Correction involving only couch translation (typical clinical practice) leads to a much larger 18% mismatch. Clinically significant consequences of more accurate adjustment are apparent in the dose volume histograms of target and critical structures. Conclusions: The algorithm achieves improvements in delivery accuracy using standard delivery hardware without significantly increasing
Directory of Open Access Journals (Sweden)
Sonali Sachin Sankpal
2016-01-01
Scattering and absorption of light are the main reasons for limited visibility in water; suspended particles and dissolved chemical compounds in the water are responsible for them. The limited visibility results in degradation of underwater images. Visibility can be increased by using an artificial light source in the underwater imaging system, but artificial light illuminates the scene in a nonuniform fashion, producing a bright spot at the center with dark regions at the surroundings. In some cases the imaging system itself creates dark regions in the image by casting shadows on the objects. The problem of nonuniform illumination has been neglected in most image enhancement techniques for underwater images, and very few methods report results on color images. This paper suggests a method for nonuniform illumination correction for underwater images. The method assumes that natural underwater images are Rayleigh distributed and uses maximum likelihood estimation of the scale parameter to map the image distribution to a Rayleigh distribution. The method is compared with traditional methods for nonuniform illumination correction using no-reference image quality metrics such as average luminance, average information entropy, normalized neighborhood function, average contrast, and comprehensive assessment function.
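The maximum-likelihood estimate of the Rayleigh scale parameter has a closed form, sigma^2 = sum(x^2)/(2N); a sketch of the estimate and of a simple rescaling that matches the image's scale to a target distribution (the function names and the global rescaling are illustrative, not the paper's full mapping):

```python
import numpy as np

def rayleigh_mle_sigma(pixels):
    """Maximum-likelihood estimate of the Rayleigh scale parameter (sketch).

    For Rayleigh-distributed samples x_i, the MLE is
    sigma_hat = sqrt(sum(x_i^2) / (2 N))."""
    x = np.asarray(pixels, float).ravel()
    return np.sqrt((x ** 2).sum() / (2.0 * x.size))

def map_to_rayleigh(image, target_sigma):
    """Rescale intensities so their MLE scale matches target_sigma."""
    return image * (target_sigma / rayleigh_mle_sigma(image))
```

Applying the estimate on local windows rather than the whole image is what would correct a nonuniform bright-spot illumination pattern, since each region gets its own scale.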
Petersen, T. C.; Ringer, S. P.
2010-03-01
Upon discerning the mere shape of an imaged object, as portrayed by projected perimeters, the full three-dimensional scattering density may not be of particular interest. In this situation considerable simplifications to the reconstruction problem are possible, allowing calculations based upon geometric principles. Here we describe and provide an algorithm which reconstructs the three-dimensional morphology of specimens from tilt series of images for application to electron tomography. Our algorithm uses a differential approach to infer the intersection of projected tangent lines with surfaces which define boundaries between regions of different scattering densities within and around the perimeters of specimens. Details of the algorithm implementation are given and explained using reconstruction calculations from simulations, which are built into the code. An experimental application of the algorithm to a nano-sized Aluminium tip is also presented to demonstrate practical analysis for a real specimen.
Program summary
Program title: STOMO version 1.0
Catalogue identifier: AEFS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFS_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 2988
No. of bytes in distributed program, including test data, etc.: 191 605
Distribution format: tar.gz
Programming language: C/C++
Computer: PC
Operating system: Windows XP
RAM: Depends upon the size of experimental data as input, ranging from 200 MB to 1.5 GB
Supplementary material: Sample output files, for the test run provided, are available.
Classification: 7.4, 14
External routines: Dev-C++ (http://www.bloodshed.net/devcpp.html)
Nature of problem: Electron tomography of specimens for which conventional back projection may fail and/or data for which there is a limited angular
Pile-up correction by Genetic Algorithm and Artificial Neural Network
Kafaee, M.; Saramad, S.
2009-08-01
Pile-up distortion is a common problem in high-count-rate radiation spectroscopy in many fields, including industrial, nuclear, and medical applications. Pulse pile-up can be reduced using hardware-based pile-up rejection; however, this approach may not eliminate the phenomenon completely, and the spectrum distortion caused by pile-up rejection can increase as well. In addition, inaccurate correction or rejection of pile-up artifacts in applications such as energy-dispersive X-ray (EDX) spectrometry can lead to loss of counts, poor quantitative results, and even false element identification. It is therefore highly desirable to use software-based models to predict and correct any recognized pile-up signals in data acquisition systems. The present paper describes two new intelligent approaches to pile-up correction: the Genetic Algorithm (GA) and Artificial Neural Networks (ANNs). The validation and testing results of these new methods have been compared and show excellent agreement with data measured with a 60Co source and a NaI detector. Monte Carlo simulation of these new intelligent algorithms also shows their advantages over hardware-based pulse pile-up rejection methods.
International Nuclear Information System (INIS)
Gomez Facenda, A.; Castillo Lopez, J. P.; Torres Aroche, L. A.; Coca Perez, M. A.
2013-01-01
Activity quantification in nuclear medicine imaging is highly desirable, particularly for dosimetry and biodistribution studies of radiopharmaceuticals. Quantitative 111In imaging is increasingly important given the current interest in therapy with 90Y-radiolabeled compounds. Photons scattered in the patient are one of the major problems in quantification and lead to degradation of image quality. The aim of this work was to assess the configuration of energy windows and the best weight factor for scatter correction of 111In images. All images were obtained using the Monte Carlo simulation code Simind, configured to emulate the Nucline SPIRIT DH-V gamma camera. Simulations were validated by positive agreement between experimental and simulated line-spread functions (LSF) of 99mTc. The sensitivity, scatter-to-total ratio, contrast, and spatial resolution were examined for scatter-compensated images obtained from six different multi-window scatter corrections. Taking these results into consideration, the best energy-window setting was two 20% windows centered at 171 and 245 keV, together with a 10% scatter window located between the photopeaks, at 209 keV. (Author)
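As a rough illustration of window-based scatter subtraction of the kind optimised above: counts in a narrow scatter window are rescaled by the window-width ratio and an empirical weight factor, then subtracted from the photopeak window. The exact weighting scheme of the study is not reproduced here; the width scaling and the factor k are assumptions.

```python
import numpy as np

def scatter_corrected_counts(peak_counts, scatter_counts,
                             peak_width_keV, scatter_width_keV, k=1.0):
    """Subtract a window-based scatter estimate from photopeak counts.

    The scatter-window counts are rescaled by the ratio of window widths
    and by an empirical weight factor k (the quantity tuned in studies
    like the one above), then subtracted and clipped at zero.
    """
    scatter_estimate = (k * np.asarray(scatter_counts, float)
                        * peak_width_keV / scatter_width_keV)
    return np.clip(np.asarray(peak_counts, float) - scatter_estimate,
                   0.0, None)
```

For the setting found best above, the 20% photopeak windows at 171 and 245 keV are roughly 34.2 and 49 keV wide, and the 10% window at 209 keV is roughly 20.9 keV wide.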
Miwa, Kenta; Umeda, Takuro; Murata, Taisuke; Wagatsuma, Kei; Miyaji, Noriaki; Terauchi, Takashi; Koizumi, Mitsuru; Sasaki, Masayuki
2016-02-01
Overcorrection of scatter caused by patient motion during whole-body PET/computed tomography (CT) imaging can induce the appearance of photopenic artifacts in the PET images. The present study aimed to quantify the accuracy of scatter limitation correction (SLC) for eliminating photopenic artifacts. This study analyzed photopenic artifacts in (18)F-fluorodeoxyglucose ((18)F-FDG) PET/CT images acquired from 12 patients and from a National Electrical Manufacturers Association phantom with two peripheral plastic bottles that simulated the human body and arms, respectively. The phantom comprised a sphere (diameter, 10 or 37 mm) containing fluorine-18 solutions with target-to-background ratios of 2, 4, and 8. The plastic bottles were moved 10 cm posteriorly between the CT and PET acquisitions. All PET data were reconstructed using model-based scatter correction (SC), no scatter correction (NSC), and SLC, and the presence or absence of artifacts on the PET images was visually evaluated. The SC and SLC images were also semiquantitatively evaluated using standardized uptake values (SUVs). Photopenic artifacts were not recognizable in any NSC or SLC image from all 12 patients in the clinical study. The SUVmax values of mismatched SLC PET/CT images were almost equal to those of matched SC and SLC PET/CT images. Applying NSC and SLC substantially eliminated the photopenic artifacts seen on SC PET images in the phantom study. SLC improved the activity concentration of the sphere for all target-to-background ratios. The highest %errors of the 10- and 37-mm spheres were 93.3 and 58.3%, respectively, for mismatched SC, and 73.2 and 22.0%, respectively, for mismatched SLC. Photopenic artifacts caused by SC error induced by CT and PET image misalignment were corrected using SLC, indicating that this method is useful and practical for clinical qualitative and quantitative PET/CT assessment.
Qattan, I. A.
2017-06-01
I present a prediction of the e± elastic scattering cross-section ratio, Re+e-, as determined using a new parametrization of the two-photon exchange (TPE) corrections to the electron-proton elastic scattering cross section σR. The extracted ratio is compared to several previous phenomenological extractions, TPE hadronic calculations, and direct measurements from the comparison of electron and positron scattering. The TPE corrections and the ratio Re+e- show a clear change of sign at low Q2, which is necessary to explain the form-factor discrepancy at high Q2 while remaining consistent with the known Q2→0 limit. While my predictions are generally in good agreement with previous extractions, TPE hadronic calculations, and existing world data, including the recent measurements from the CLAS and VEPP-3 Novosibirsk experiments, they are larger than the new OLYMPUS measurements at larger Q2 values.
Energy Technology Data Exchange (ETDEWEB)
Kim, Kyungsang; Ye, Jong Chul, E-mail: jong.ye@kaist.ac.kr [Bio Imaging and Signal Processing Laboratory, Department of Bio and Brain Engineering, KAIST 291, Daehak-ro, Yuseong-gu, Daejeon 34141 (Korea, Republic of); Lee, Taewon; Cho, Seungryong [Medical Imaging and Radiotherapeutics Laboratory, Department of Nuclear and Quantum Engineering, KAIST 291, Daehak-ro, Yuseong-gu, Daejeon 34141 (Korea, Republic of); Seong, Younghun; Lee, Jongha; Jang, Kwang Eun [Samsung Advanced Institute of Technology, Samsung Electronics, 130, Samsung-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 443-803 (Korea, Republic of); Choi, Jaegu; Choi, Young Wook [Korea Electrotechnology Research Institute (KERI), 111, Hanggaul-ro, Sangnok-gu, Ansan-si, Gyeonggi-do, 426-170 (Korea, Republic of); Kim, Hak Hee; Shin, Hee Jung; Cha, Joo Hee [Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro, 43-gil, Songpa-gu, Seoul, 138-736 (Korea, Republic of)
2015-09-15
Purpose: In digital breast tomosynthesis (DBT), scatter correction is highly desirable, as it improves image quality at low doses. Because the DBT detector panel is typically stationary during the source rotation, antiscatter grids are not generally compatible with DBT; thus, a software-based scatter correction is required. This work proposes a fully iterative scatter correction method that uses a novel fast Monte Carlo simulation (MCS) with a tissue-composition ratio estimation technique for DBT imaging. Methods: To apply MCS to scatter estimation, the material composition in each voxel should be known. To overcome the lack of prior accurate knowledge of tissue composition for DBT, a tissue-composition ratio is estimated based on the observation that the breast tissues are principally composed of adipose and glandular tissues. Using this approximation, the composition ratio can be estimated from the reconstructed attenuation coefficients, and the scatter distribution can then be estimated by MCS using the composition ratio. The scatter estimation and image reconstruction procedures can be performed iteratively until an acceptable accuracy is achieved. For practical use, (i) the authors have implemented a fast MCS using a graphics processing unit (GPU), (ii) the MCS is simplified to transport only x-rays in the energy range of 10–50 keV, modeling Rayleigh and Compton scattering and the photoelectric effect using the tissue-composition ratio of adipose and glandular tissues, and (iii) downsampling is used because the scatter distribution varies rather smoothly. Results: The authors have demonstrated that the proposed method can accurately estimate the scatter distribution, and that the contrast-to-noise ratio of the final reconstructed image is significantly improved. The authors validated the performance of the MCS by changing the tissue thickness, composition ratio, and x-ray energy. The authors confirmed that the tissue-composition ratio estimation was quite
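The two-material assumption above can be written as mu = f·mu_glandular + (1 − f)·mu_adipose, so the composition ratio follows directly from the reconstructed attenuation coefficient. A minimal sketch, with the function name and the clamping to a physical fraction being our choices:

```python
def glandular_fraction(mu_voxel, mu_adipose, mu_glandular):
    """Estimate the glandular-tissue fraction in a voxel from its
    reconstructed linear attenuation coefficient, assuming a two-material
    (adipose/glandular) mixture:  mu = f*mu_g + (1 - f)*mu_a.
    """
    f = (mu_voxel - mu_adipose) / (mu_glandular - mu_adipose)
    return min(max(f, 0.0), 1.0)  # clamp to a physically valid fraction
```

In the iterative scheme described above, these per-voxel fractions would feed the Monte Carlo scatter estimate, whose output in turn refines the reconstruction.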
Energy Technology Data Exchange (ETDEWEB)
Wang, A; Paysan, P; Brehm, M; Maslowski, A; Lehmann, M; Messmer, P; Munro, P; Yoon, S; Star-Lack, J; Seghers, D [Varian Medical Systems, Palo Alto, CA (United States)
2016-06-15
Purpose: To improve CBCT image quality for image-guided radiotherapy by applying advanced reconstruction algorithms to overcome scatter, noise, and artifact limitations. Methods: CBCT is used extensively for patient setup in radiotherapy. However, image quality generally falls short of diagnostic CT, limiting soft-tissue based positioning and potential applications such as adaptive radiotherapy. The conventional TrueBeam CBCT reconstructor uses a basic scatter correction and FDK reconstruction, resulting in residual scatter artifacts, suboptimal image noise characteristics, and other artifacts such as cone-beam artifacts. We have developed an advanced scatter correction that uses a finite-element solver (AcurosCTS) to model the behavior of photons as they pass (and scatter) through the object. Furthermore, iterative reconstruction is applied to the scatter-corrected projections, enforcing data consistency with statistical weighting and applying an edge-preserving image regularizer to reduce image noise. The combined algorithms have been implemented on a GPU. CBCT projections from clinically operating TrueBeam systems have been used to compare image quality between the conventional and improved reconstruction methods. Planning CT images of the same patients have also been compared. Results: The advanced scatter correction removes shading and inhomogeneity artifacts, reducing the scatter artifact from 99.5 HU to 13.7 HU in a typical pelvis case. Iterative reconstruction provides further benefit by reducing image noise and eliminating streak artifacts, thereby improving soft-tissue visualization. In a clinical head and pelvis CBCT, the noise was reduced by 43% and 48%, respectively, with no change in spatial resolution (assessed visually). Additional benefits include reduction of cone-beam artifacts and reduction of metal artifacts due to intrinsic downweighting of corrupted rays. Conclusion: The combination of an advanced scatter correction with iterative reconstruction
DEFF Research Database (Denmark)
Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin
2016-01-01
[This corrects the article DOI: 10.1371/journal.ppat.1005000.] [This corrects the article DOI: 10.1371/journal.ppat.1005740.] [This corrects the article DOI: 10.1371/journal.ppat.1005679.]
International Nuclear Information System (INIS)
Hambye, A.S.; Vervaet, A.; Dobbeleir, A.
2002-01-01
Compared with other noninvasive tests for CAD diagnosis, myocardial perfusion imaging (MPI) is considered a very sensitive method whose accuracy is, however, often diminished by a certain lack of specificity, especially in patients with a small heart. With gated SPECT MPI, use of end-diastolic instead of summed images has been presented as an interesting approach for increasing specificity. Since scatter correction is reported to improve image contrast, it might potentially constitute another way to improve MPI accuracy. We aimed to compare the value of both approaches, separately and combined, for CAD diagnosis. Methods. One hundred patients referred for gated 99mTc-sestamibi SPECT MPI were prospectively included (Group A). Thirty-five had an end-systolic volume <30 ml by QGS analysis (Group B). All had a coronary angiogram within 3 months of the MPI. Four polar maps (non-corrected and scatter-corrected summed, and non-corrected and scatter-corrected end-diastolic) were created to quantify the extent (EXT) and severity (TDS) of any perfusion defects. ROC-curve analysis was applied to define the optimal thresholds of EXT and TDS separating non-CAD from CAD patients, using a 50% stenosis on coronary angiogram as the cutoff for disease positivity. Results. Significant CAD was present in 86 patients (25 in Group B). In Group A, assessment of EXT and TDS of perfusion defects on scatter-corrected summed images demonstrated the highest accuracy (76% for EXT: sens 77%, spec 71%; 74% for TDS: sens 73%, spec 79%). Accuracy of EXT and TDS calculated from the other data sets was slightly but not significantly lower, mainly because of a lower sensitivity. For comparison, visual analysis was 90% accurate for the diagnosis of CAD (sens 94%, spec 64%). In Group B, overall results were worse, mainly due to decreased sensitivity, with accuracies ranging between 51 and 63%. Again the scatter-corrected summed data were the most accurate (EXT: 60%, TDS: 63%, visual
Focusing light through strongly scattering media using genetic algorithm with SBR discriminant
Zhang, Bin; Zhang, Zhenfeng; Feng, Qi; Liu, Zhipeng; Lin, Chengyou; Ding, Yingchun
2018-02-01
In this paper, we experimentally demonstrate light focusing through strongly scattering media by performing binary amplitude optimization with a genetic algorithm. In the experiments, we control the 160 000 mirrors of a digital micromirror device to modulate and optimize the light transmission paths through the strongly scattering media. We replace the universal target-position-intensity (TPI) discriminant with a signal-to-background ratio (SBR) discriminant in the genetic algorithm. With 400 incident segments, a relative enhancement of 17.5% is achieved with a ground glass diffuser, which is higher than the theoretical value of 1/(2π) ≈ 15.9% for binary amplitude optimization. According to our repeated experiments, we conclude that, for the same number of segments, the enhancement with the SBR discriminant is always higher than with the TPI discriminant, owing to the background-weakening effect of the SBR discriminant. In addition, with the SBR discriminant, the diameter of the focus can be varied from 7 to 70 μm at arbitrary positions, and multiple foci with high enhancement are obtained. Our work provides a meaningful reference for the study of binary amplitude optimization in the wavefront shaping field.
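A toy version of a genetic algorithm with an SBR fitness can be sketched with a random complex transmission matrix standing in for the scattering medium. All sizes, rates, and the matrix model here are illustrative assumptions, not the experimental setup (which used a DMD and a ground glass diffuser):

```python
import numpy as np

rng = np.random.default_rng(42)

N_SEG = 64  # number of controllable segments (toy scale, not 400)

# Toy complex transmission matrix standing in for the scattering medium:
# the focus field is t_target @ mask, background speckle is T_bg @ mask.
t_target = rng.normal(size=N_SEG) + 1j * rng.normal(size=N_SEG)
T_bg = rng.normal(size=(32, N_SEG)) + 1j * rng.normal(size=(32, N_SEG))

def sbr(mask):
    """Signal-to-background ratio: focus intensity over mean background."""
    signal = np.abs(t_target @ mask) ** 2
    background = np.mean(np.abs(T_bg @ mask) ** 2) + 1e-12
    return float(signal / background)

def ga_optimize(pop_size=30, generations=120, mutation_rate=0.02):
    """Evolve binary amplitude masks, keeping the fitter half as parents."""
    pop = rng.integers(0, 2, size=(pop_size, N_SEG))
    for _ in range(generations):
        fitness = np.array([sbr(ind) for ind in pop])
        pop = pop[np.argsort(fitness)[::-1]]      # sort: best first
        parents = pop[: pop_size // 2]            # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cross = rng.integers(0, 2, size=N_SEG).astype(bool)
            child = np.where(cross, a, b)         # uniform crossover
            flip = rng.random(N_SEG) < mutation_rate
            child = np.where(flip, 1 - child, child)
            children.append(child)
        pop = np.vstack([parents, children])
    best = max(pop, key=sbr)
    return best, sbr(best)
```

Swapping `sbr` for a target-position-intensity fitness reproduces the TPI variant the paper compares against; the SBR form rewards focusing while penalising background speckle.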
Ning, Jing; Chen, Yong; Piao, Jin
2017-07-01
Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure performs well and avoids the non-convergence problem when maximizing the observed likelihood.
Parametrisation of the collimator scatter correction factors of square and rectangular photon beams
International Nuclear Information System (INIS)
Jager, H.N.; Heukelom, S.; Kleffens, H.J. van; Gasteren, J.J.M. van; Laarse, R. van der; Venselaar, J.L.M.; Westermann, C.F.
1995-01-01
Collimator scatter correction factors Sc have been measured with a cylindrical mini-phantom for five types of dual-photon-energy accelerators with energies between 6 and 25 MV. Using these Sc data, three methods of parametrizing Sc for square fields have been compared, including a third-order polynomial in the natural logarithm of the field size normalised to the 10 cm × 10 cm field. Six methods of calculating Sc for rectangular fields have also been compared, including a new one that determines the equivalent field size by extending Sterling's method. The deviations between measured and calculated Sc were determined for every accelerator, energy, and method, yielding the maximum and average deviation per method. For square fields, the maximum and average deviations were 0.64% and 0.15% for the method of Chen, 0.98% and 0.21% for that of Szymczyk, and 0.41% and 0.10% for that of this work. For rectangular fields, the deviations were 1.89% and 0.50% for the method of Sterling, 1.60% and 0.28% for that of Vadash, 1.21% and 0.25% for that of Szymczyk et al., 1.84% and 0.31% for that of Chen, and 0.79% and 0.20% for that of this work. Finally, a recommendation is given on how to limit the number of fields at which Sc should be measured.
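Sterling's equivalent-square rule (side = 4·Area/Perimeter = 2ab/(a+b)) and a third-order polynomial fit in ln(field size / 10 cm) can be sketched as follows. The helper names are ours, and the fit is a generic least-squares stand-in for the square-field parametrisation described above, not the authors' extended method:

```python
import numpy as np

def sterling_equivalent_square(a, b):
    """Equivalent square side for an a x b field via Sterling's
    4*Area/Perimeter rule: s = 4ab / (2a + 2b) = 2ab / (a + b)."""
    return 2.0 * a * b / (a + b)

def fit_sc_polynomial(field_sizes_cm, sc_values, ref_cm=10.0):
    """Third-order polynomial fit of Sc versus ln(field size / 10 cm)."""
    x = np.log(np.asarray(field_sizes_cm, float) / ref_cm)
    return np.polyfit(x, np.asarray(sc_values, float), 3)

def sc_from_fit(coeffs, field_size_cm, ref_cm=10.0):
    """Evaluate the fitted Sc parametrisation at a square field size."""
    return float(np.polyval(coeffs, np.log(field_size_cm / ref_cm)))
```

With such a fit, Sc for a rectangular field is obtained by evaluating the square-field curve at the equivalent square side.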
Energy Technology Data Exchange (ETDEWEB)
Romanov, A.; Edstrom, D.; Emanov, F. A.; Koop, I. A.; Perevedentsev, E. A.; Rogovsky, Yu. A.; Shwartz, D. B.; Valishev, A.
2017-03-28
Precise beam based measurement and correction of magnetic optics is essential for the successful operation of accelerators. The LOCO algorithm is a proven and reliable tool, which in some situations can be improved by using a broader class of experimental data. The standard data sets for LOCO include the closed orbit responses to dipole corrector variation, dispersion, and betatron tunes. This paper discusses the benefits from augmenting the data with four additional classes of experimental data: the beam shape measured with beam profile monitors; responses of closed orbit bumps to focusing field variations; betatron tune responses to focusing field variations; BPM-to-BPM betatron phase advances and beta functions in BPMs from turn-by-turn coordinates of kicked beam. All of the described features were implemented in the Sixdsimulation software that was used to correct the optics of the VEPP-2000 collider, the VEPP-5 injector booster ring, and the FAST linac.
A fingerprint key binding algorithm based on vector quantization and error correction
Li, Liang; Wang, Qian; Lv, Ke; He, Ning
2012-04-01
In recent years, research on seamlessly combining cryptosystems with biometric technologies, e.g. fingerprint recognition, has been conducted by many researchers. In this paper, we propose an algorithm for binding a fingerprint template to a cryptographic key, so that the key is protected and can be accessed only through fingerprint verification. To overcome the intrinsic fuzziness of varying fingerprints, vector quantization and error correction techniques are introduced to transform the fingerprint template before binding it with the key, after fingerprint registration and extraction of the global ridge pattern. The key itself is secure because only its hash value is stored, and the key is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our approach.
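A highly simplified sketch of the bind/release idea: quantize the feature vectors, lock the key by XOR with a digest of the quantized template, and store only a hash of the key for verification. Real schemes add error-correction coding so that small residual quantization differences still regenerate the key; that step is omitted here, and all names and parameters are illustrative assumptions.

```python
import hashlib
import numpy as np

def quantize(features, codebook):
    """Vector quantization: map each feature vector to the index of its
    nearest codeword (toy stand-in for the template transform)."""
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)

def enroll(features, codebook, key: bytes):
    """Bind a 32-byte key: store the key XORed with a digest of the
    quantized template, plus a hash of the key for verification."""
    idx = quantize(features, codebook)
    template = hashlib.sha256(idx.tobytes()).digest()
    locked = bytes(a ^ b for a, b in zip(template, key))
    return locked, hashlib.sha256(key).hexdigest()

def release(features, codebook, locked, key_hash):
    """Unlock: a close-enough fingerprint quantizes to the same indices,
    regenerating the template digest and hence the key."""
    idx = quantize(features, codebook)
    template = hashlib.sha256(idx.tobytes()).digest()
    key = bytes(a ^ b for a, b in zip(locked, template))
    return key if hashlib.sha256(key).hexdigest() == key_hash else None
```

Because only the locked value and the key hash are stored, neither the template nor the key is recoverable from storage alone.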
TUnfold, an algorithm for correcting migration effects in high energy physics
Energy Technology Data Exchange (ETDEWEB)
Schmitt, Stefan
2012-07-15
TUnfold is a tool for correcting migration and background effects in high energy physics for multi-dimensional distributions. It is based on a least square fit with Tikhonov regularisation and an optional area constraint. For determining the strength of the regularisation parameter, the L-curve method and scans of global correlation coefficients are implemented. The algorithm supports background subtraction and error propagation of statistical and systematic uncertainties, in particular those originating from limited knowledge of the response matrix. The program is interfaced to the ROOT analysis framework.
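TUnfold itself is a ROOT class; the regularised least-squares problem it solves can be illustrated generically. This is not TUnfold's API, and the second-difference (curvature) regularisation matrix is just one common choice:

```python
import numpy as np

def unfold_tikhonov(response, measured, tau, cov=None):
    """Least-squares unfolding with Tikhonov regularisation:
    minimise (y - A x)^T V^-1 (y - A x) + tau * ||L x||^2,
    where L is the discrete second-derivative (curvature) operator.
    """
    A = np.asarray(response, float)
    y = np.asarray(measured, float)
    Vinv = np.linalg.inv(cov) if cov is not None else np.eye(len(y))
    n = A.shape[1]
    # Second-difference matrix penalising curvature of the unfolded result
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    lhs = A.T @ Vinv @ A + tau * (L.T @ L)
    rhs = A.T @ Vinv @ y
    return np.linalg.solve(lhs, rhs)
```

Choosing tau is the delicate part; TUnfold automates this with the L-curve method or global-correlation scans, which this sketch leaves to the caller.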
International Nuclear Information System (INIS)
Shumeiko, N.M.; Timoshin, S.I.
1991-01-01
Compact formulae for the total 1-loop electromagnetic corrections, including the contribution of electromagnetic hadron effects, to deep inelastic scattering of polarized leptons on polarized nucleons in the quark-parton model have been obtained. The cases of longitudinal and transverse nucleon polarization are considered in detail. Thorough numerical calculations of the corrections to cross sections and polarization asymmetries at muon (electron) energies over the range 200-2000 GeV (10-16 GeV) have been made. It is established that the contribution of corrections to the hadron current considerably affects the behaviour of the longitudinal asymmetry. Satisfactory agreement is found between the model calculations of corrections to the lepton current and the phenomenological calculation results, which makes it possible to obtain the total 1-loop correction within a common framework. (Author)
Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping
2011-04-01
In order to eliminate lower-order polynomial interferences, a new quantitative calibration algorithm, Baseline Correction Combined Partial Least Squares (BCC-PLS), which combines baseline correction and conventional PLS, is proposed. By embedding baseline-correction constraints into the PLS weight selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirements of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. The BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra is 0.53% w/w (range 7-19%) for predicting moisture and 2.04% w/w (range 33-68%) for predicting sugar content.
Gordon, Howard R.; Wang, Menghua
1992-01-01
The first step in the Coastal Zone Color Scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering (RS) contribution, Lr, to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm, Lr is computed by assuming that the ocean surface is flat. Calculations of the radiance leaving an RS atmosphere overlying a rough Fresnel-reflecting ocean are presented to evaluate the radiance error caused by the flat-ocean assumption. Simulations are carried out to evaluate the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct sun glitter, it is concluded that the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness. This suggests that, in refining algorithms for future sensors, more effort should be focused on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.
Zhang, Zhenhai; Li, Kejie; Wu, Xiaobing; Zhang, Shujiang
2008-03-01
An unwrapping and correction algorithm based on the Coordinate Rotation Digital Computer (CORDIC) and bilinear interpolation is presented in this paper, for the purpose of processing dynamic panoramic annular images. An original annular panoramic image captured by a panoramic annular lens (PAL) can be unwrapped and corrected to a conventional rectangular image without distortion, which is much more consistent with human vision. The algorithm is modeled in VHDL and implemented on an FPGA. The experimental results show that the proposed algorithm for unwrapping and distortion correction has low computational complexity, and that the architecture for dynamic panoramic image processing has low hardware cost and power consumption. The experiments confirm the validity of the proposed algorithm.
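The polar-to-rectangular unwrapping with bilinear interpolation can be sketched in NumPy as follows; a hardware implementation like the one above would replace the floating-point trigonometry with CORDIC iterations, and the geometry parameters here are assumptions:

```python
import numpy as np

def unwrap_annular(img, center, r_in, r_out, out_h, out_w):
    """Unwrap an annular panoramic image into a rectangular panorama.

    Output pixel (v, u) maps back to polar coordinates
    theta = 2*pi*u/out_w, r = r_in + (r_out - r_in)*v/out_h,
    and the source value is fetched by bilinear interpolation.
    """
    cy, cx = center
    v, u = np.mgrid[0:out_h, 0:out_w].astype(float)
    theta = 2.0 * np.pi * u / out_w
    r = r_in + (r_out - r_in) * v / out_h
    ys = cy + r * np.sin(theta)
    xs = cx + r * np.cos(theta)
    # Bilinear interpolation between the four neighbouring source pixels
    y0 = np.clip(np.floor(ys).astype(int), 0, img.shape[0] - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, img.shape[1] - 2)
    dy, dx = ys - y0, xs - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0]
            + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0]
            + dy * dx * img[y0 + 1, x0 + 1])
```

Each output row corresponds to one radius on the annulus and each column to one viewing angle, which is what makes the unwrapped image match ordinary human vision.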
Direct cone-beam cardiac reconstruction algorithm with cardiac banding artifact correction
International Nuclear Information System (INIS)
Taguchi, Katsuyuki; Chiang, Beshan S.; Hein, Ilmar A.
2006-01-01
Multislice helical computed tomography (CT) is a promising noninvasive technique for coronary artery imaging. Various factors can cause inconsistencies in cardiac CT data, which can result in degraded image quality. These inconsistencies may be the result of the patient physiology (e.g., heart rate variations), the nature of the data (e.g., cone-angle), or the reconstruction algorithm itself. An algorithm which provides the best temporal resolution for each slice, for example, often provides suboptimal image quality for the entire volume since the cardiac temporal resolution (TRc) changes from slice to slice. Such variations in TRc can generate strong banding artifacts in multi-planar reconstruction images or three-dimensional images. Discontinuous heart walls and coronary arteries may compromise the accuracy of the diagnosis. A β-blocker is often used to reduce and stabilize patients' heart rate but cannot eliminate the variation. In order to obtain robust and optimal image quality, a software solution that increases the temporal resolution and decreases the effect of heart rate is highly desirable. This paper proposes an ECG-correlated direct cone-beam reconstruction algorithm (TCOT-EGR) with cardiac banding artifact correction (CBC) and disconnected projections redundancy compensation technique (DIRECT). First the theory and analytical model of the cardiac temporal resolution is outlined. Next, the performance of the proposed algorithms is evaluated by using computer simulations as well as patient data. It will be shown that the proposed algorithms enhance the robustness of the image quality against inconsistencies by guaranteeing smooth transition of heart cycles used in reconstruction
International Nuclear Information System (INIS)
Adachi, Itaru; Doi, Kenji; Komori, Tsuyoshi; Hou, Nobuyoshi; Tabuchi, Koujirou; Matsui, Ritsuo; Sueyoshi, Kouzou; Utsunomiya, Keita; Narabayashi, Isamu
1998-01-01
This investigation was undertaken to study the clinical usefulness of scatter and attenuation correction (SAC) of brain SPECT in infants, compared with the standard reconstruction (STD). Brain SPECT was performed in 31 patients: 19 with epilepsy, 5 with cerebrovascular disease, 2 with brain tumor, 3 with meningitis, 1 with hydrocephalus, and 1 with psychosis (mean age 5.0±4.9 years). Many patients required sedation to restrain body motion after technetium-99m hexamethylpropylene amine oxime (99mTc-HMPAO) was injected during convulsion or at rest. Brain SPECT data were acquired with a triple-detector gamma camera (GCA-9300, Toshiba, Japan). These data were reconstructed by filtered backprojection after the raw data were corrected by the triple-energy-window method of scatter correction and the Chang method of attenuation correction. The same data were also reconstructed by filtered backprojection without these corrections. Both SAC and STD SPECT images were analyzed by visual interpretation. The uptake ratio of the cerebral basal nuclei was calculated as the counts of the thalamus or lenticular nuclei divided by those of the cortex. All SAC images were superior to the STD images. The thalamic uptake ratio with SAC was higher than with STD (1.22±0.09 vs 0.87±0.22, p<0.01), as was the lenticular nuclear uptake ratio (1.26±0.15 vs 1.02±0.16, p<0.01). A transmission scan is the most suitable method of attenuation correction, but it is not adequate for examination of children, because the scan takes a long time and exposes infants to the line-source radioisotope. It is concluded that these scatter and attenuation corrections are the most suitable method for brain SPECT in pediatrics. (author)
Energy Technology Data Exchange (ETDEWEB)
Adachi, Itaru; Doi, Kenji; Komori, Tsuyoshi; Hou, Nobuyoshi; Tabuchi, Koujirou; Matsui, Ritsuo; Sueyoshi, Kouzou; Utsunomiya, Keita; Narabayashi, Isamu [Osaka Medical Coll., Takatsuki (Japan)
1998-01-01
This investigation was undertaken to study the clinical usefulness of scatter and attenuation correction (SAC) of brain SPECT in infants, compared with the standard reconstruction (STD). Brain SPECT was performed in 31 patients: 19 with epilepsy, 5 with cerebrovascular disease, 2 with brain tumor, 3 with meningitis, 1 with hydrocephalus, and 1 with psychosis (mean age 5.0±4.9 years). Many patients required sedation to restrain body motion after technetium-99m hexamethylpropylene amine oxime (99mTc-HMPAO) was injected during convulsion or at rest. Brain SPECT data were acquired with a triple-detector gamma camera (GCA-9300, Toshiba, Japan). These data were reconstructed by filtered backprojection after the raw data were corrected by the triple-energy-window method of scatter correction and the Chang method of attenuation correction. The same data were also reconstructed by filtered backprojection without these corrections. Both SAC and STD SPECT images were analyzed by visual interpretation. The uptake ratio of the cerebral basal nuclei was calculated as the counts of the thalamus or lenticular nuclei divided by those of the cortex. All SAC images were superior to the STD images. The thalamic uptake ratio with SAC was higher than with STD (1.22±0.09 vs 0.87±0.22, p<0.01). The lenticular nuclear uptake ratio with SAC was also higher than with STD (1.26±0.15 vs 1.02±0.16, p<0.01). A transmission scan is the most suitable method of attenuation correction, but it is not adequate for examination of children, because the scan takes a long time and exposes infants to the line-source radioisotope. It is concluded that these scatter and attenuation corrections are the most suitable method for brain SPECT in pediatrics. (author)
International Nuclear Information System (INIS)
Sun, Wenbo; Videen, Gorden; Fu, Qiang; Hu, Yongxiang
2013-01-01
As fundamental parameters in polarized radiative-transfer calculations, the single-scattering phase matrices of irregularly shaped aerosol particles must be accurately modeled. In this study, a scattered-field finite-difference time-domain (FDTD) model and a scattered-field pseudo-spectral time-domain (PSTD) model are developed for light scattering by arbitrarily shaped dielectric aerosols. The convolutional perfectly matched layer (CPML) absorbing boundary condition (ABC) is used to truncate the computational domain. It is found that the PSTD method is generally more accurate than the FDTD in calculating the single-scattering properties for similar spatial cell sizes. Since the PSTD can use a coarser grid for large particles, it can lower the memory requirement of the calculation. However, the Fourier transformations in the PSTD need significantly more CPU time than the simple subtractions in the FDTD, and the fast Fourier transform requires a power-of-2 number of elements, so using the PSTD does not significantly reduce the CPU time required in the numerical modeling. Furthermore, because the scattered-field FDTD/PSTD equations include incident-wave source terms, the models allow an arbitrary incident wave source, such as a plane-parallel wave or a Gaussian beam like those emitted by the lasers commonly used in laboratory particle characterization. The scattered-field FDTD and PSTD light-scattering models can be used to calculate the single-scattering properties of arbitrarily shaped aerosol particles over broad size and wavelength ranges. -- Highlights: • Scattered-field FDTD and PSTD models are developed for light scattering by aerosols. • The convolutional perfectly matched layer absorbing boundary condition is used. • PSTD is generally more accurate than FDTD in calculating single-scattering properties. • At the same spatial resolution, PSTD requires much more CPU time than FDTD
International Nuclear Information System (INIS)
Lue Kunhan; Lin Hsinhon; Chuang Kehshih; Kao Chihhao, K.; Hsieh Hungjen; Liu Shuhsin
2014-01-01
In positron emission tomography (PET) of the dopaminergic system, quantitative measurements of nigrostriatal dopamine function are useful for differential diagnosis, and a subregional analysis of striatal uptake makes the diagnosis more powerful. However, the partial volume effect (PVE) induces an underestimation of the true radioactivity concentration in small structures. This work proposes a simple algorithm for subregional analysis of striatal uptake with partial volume correction (PVC) in dopaminergic PET imaging. The PVC algorithm analyzes the separate striatal subregions and accounts for the PVE through the recovery coefficient (RC), defined as the ratio of the PVE-uncorrected to the PVE-corrected radioactivity concentration and derived from a combination of the traditional volume-of-interest (VOI) analysis and the large-VOI technique. Clinical studies, comprising 11 patients with Parkinson's disease (PD) and 6 healthy subjects, were used to assess the impact of PVC on the quantitative measurements. Simulations on a numerical phantom that mimicked realistic healthy and neurodegenerative situations were used to evaluate the performance of the proposed PVC algorithm. In both the clinical and the simulation studies, the striatal-to-occipital ratio (SOR) values for the entire striatum and its subregions were calculated with and without PVC. In the clinical studies, the SOR values in each structure (caudate, anterior putamen, posterior putamen, putamen, and striatum) were significantly higher with PVC than without. Among the PD patients, the SOR values in each structure and quantitative disease-severity ratings were significantly related only when PVC was used. In the simulation studies, the average absolute percentage errors of the SOR estimates before and after PVC were 22.74% and 1.54%, respectively, in the healthy situation; those in the neurodegenerative situation were 20.69% and 2
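The recovery-coefficient step of the PVC described above is simple arithmetic: since RC is defined as the ratio of PVE-uncorrected to PVE-corrected concentration, dividing a measured VOI value by its RC recovers the PVE-corrected estimate, which then feeds the SOR. A minimal sketch with hypothetical counts and an assumed RC value:

```python
def pvc_recover(measured, rc):
    """Partial-volume correction via the recovery coefficient:
    RC = (PVE-uncorrected)/(PVE-corrected), so divide it out."""
    return measured / rc

def sor(striatal, occipital):
    """Striatal-to-occipital ratio used as the uptake index."""
    return striatal / occipital

# Hypothetical counts: a small structure whose activity is smeared out
# by the PVE, so the measured value underestimates the true one.
occipital = 100.0
measured_putamen = 180.0
rc_putamen = 0.6  # assumed recovery coefficient for this VOI size
corrected_putamen = pvc_recover(measured_putamen, rc_putamen)
```

With an RC below one, the corrected concentration (and hence the SOR) rises, which is the direction of change the clinical results report.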
Baek, Jieun; Choi, Yosoon
2017-04-01
Most algorithms for least-cost path analysis calculate the slope gradient between the source cell and adjacent cells to reflect terrain-slope weights in the travel cost. However, these algorithms cannot analyze the least-cost path between two cells when obstacle cells with very high or low terrain elevation lie between the source cell and the target cell. This study presents a new algorithm for least-cost path analysis that corrects digital elevation models of natural landscapes to find feasible paths satisfying a constraint of maximum or minimum slope gradient. The new algorithm calculates the slope gradient between the center cell and non-adjacent cells using the concept of extended move-sets. If the algorithm finds possible paths between the center cell and non-adjacent cells that satisfy the slope constraint, the terrain elevation of obstacle cells lying between the two cells is corrected in the digital elevation model. After calculating the cumulative travel costs to the destination, weighted by the difference between the original and corrected elevations, the algorithm extracts the least-cost path. Applying the proposed algorithm to synthetic and real-world data sets shows that it provides more accurate least-cost paths than the conventional algorithms implemented in commercial GIS software such as ArcGIS.
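The slope-constrained search underlying such least-cost path analysis can be sketched with an ordinary Dijkstra traversal in which any move whose slope gradient exceeds the limit is treated as blocked. This is a simplified illustration only; the paper's extended move-sets and elevation-correction step are omitted, and the DEM below is hypothetical.

```python
import heapq
import math

def least_cost_path(dem, start, goal, max_slope=1.0, cell=1.0):
    """Dijkstra over a DEM grid where a move is allowed only if the slope
    gradient between the two cells stays within +/-max_slope (adjacent
    moves only; a simplified sketch of the slope-constrained search)."""
    rows, cols = len(dem), len(dem[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1),
             (1, 1), (1, -1), (-1, 1), (-1, -1)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), math.inf):
            continue  # stale queue entry
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            run = cell * math.hypot(dr, dc)
            rise = dem[nr][nc] - dem[r][c]
            if abs(rise / run) > max_slope:
                continue  # slope constraint violated: treat as obstacle
            nd = d + math.hypot(run, rise)  # 3-D travel distance as cost
            if nd < dist.get((nr, nc), math.inf):
                dist[(nr, nc)] = nd
                prev[(nr, nc)] = (r, c)
                heapq.heappush(pq, (nd, (nr, nc)))
    if goal not in dist:
        return None, math.inf
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# A flat 3x3 DEM with an impassably steep peak in the centre:
dem = [[0, 0, 0], [0, 10, 0], [0, 0, 0]]
path, cost = least_cost_path(dem, (0, 0), (2, 2), max_slope=1.0)
```

The path routes around the steep centre cell rather than over it, which is exactly the behaviour the slope constraint is meant to enforce.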
Monte Carlo simulation and scatter correction of the GE Advance PET scanner with SimSET and Geant4
International Nuclear Information System (INIS)
Barret, Olivier; Carpenter, T Adrian; Clark, John C; Ansorge, Richard E; Fryer, Tim D
2005-01-01
For Monte Carlo simulations to be used as an alternative means of performing scatter correction, accurate modelling of the scanner as well as speed is paramount. General-purpose Monte Carlo packages (Geant4, EGS, MCNP) allow a detailed description of the scanner but are not efficient at simulating voxel-based geometries (patient images). On the other hand, dedicated codes (SimSET, PETSIM) perform well for voxel-based objects but are poor at simulating complex geometries such as a PET scanner. The approach adopted in this work was to couple a dedicated code (SimSET) with a general-purpose package (Geant4) to obtain the efficiency of the former and the capabilities of the latter. The combined SimSET+Geant4 code (SimG4) was assessed on the GE Advance PET scanner and compared to the use of SimSET alone. A better description of the resolution, sensitivity and scatter fraction of the scanner was obtained with SimG4. The accuracy of scatter correction performed with SimG4 and SimSET was also assessed from data acquired with the 20 cm NEMA phantom. SimG4 was found to outperform SimSET and to give slightly better results than the GE scatter-correction methods installed on the Advance scanner (curve fitting and scatter modelling for the 300-650 keV and 375-650 keV energy windows, respectively). In the presence of a hot source close to the edge of the field of view (as found in oxygen scans), the GE curve-fitting method was found to fail, whereas SimG4 maintained its performance.
International Nuclear Information System (INIS)
Narita, Y.; Eberl, S.; Bautovich, G.; Iida, H.; Hutton, B.F.; Braun, M.; Nakamura, T.
1996-01-01
Scatter correction is a prerequisite for quantitative SPECT, but potentially increases noise. Monte Carlo simulations (EGS4) and physical phantom measurements were used to compare the accuracy and noise properties of two scatter-correction techniques: the triple-energy window (TEW) and the transmission-dependent convolution subtraction (TDCS) techniques. Two scatter functions were investigated for TDCS: (i) the originally proposed mono-exponential function (TDCS-mono) and (ii) an exponential plus Gaussian scatter function (TDCS-Gauss) demonstrated to be superior by our Monte Carlo simulations. Signal-to-noise ratio (S/N) and accuracy were investigated in cylindrical phantoms and a chest phantom. Results from each method were compared to the true primary counts (simulations) or known activity concentrations (phantom studies). 99mTc was used in all cases. The optimized TDCS-Gauss method performed best overall, with an accuracy of better than 4% for all simulations and physical phantom studies. Maximum errors for TEW and TDCS-mono of -30% and -22%, respectively, were observed in the heart chamber of the simulated chest phantom. TEW had the worst S/N ratio of the three techniques. The S/N ratios of the two TDCS methods were similar and only slightly lower than those of simulated true primary data. Thus, accurate quantitation can be obtained with TDCS-Gauss, with a relatively small reduction in S/N ratio. (author)
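The TDCS idea can be illustrated in a few lines: an estimate of the scatter is obtained by convolving the observed projection with a scatter kernel (mono-exponential here) and scaling it by a transmission-dependent scatter fraction before subtraction. The scatter-fraction model and every parameter value below are hypothetical stand-ins, not the fitted coefficients from this study:

```python
import numpy as np

def tdcs_correct(projection, transmission, k0=0.4, k1=0.3, alpha=0.25):
    """Illustrative transmission-dependent convolution subtraction:
    scatter = SF * (projection convolved with a mono-exponential kernel),
    where the scatter fraction SF grows as the transmission factor falls.
    The SF model and parameters are simple surrogates, not fitted values."""
    n = projection.size
    x = np.arange(n) - n // 2
    kernel = np.exp(-alpha * np.abs(x))
    kernel /= kernel.sum()                       # unit-area scatter kernel
    smoothed = np.convolve(projection, kernel, mode="same")
    sf = k0 + k1 * (1.0 - transmission)          # per-pixel scatter fraction
    scatter = sf * smoothed
    return projection - scatter, scatter

# Hypothetical 1-D projection and transmission profile through an object:
proj = np.array([0.0, 2.0, 10.0, 40.0, 10.0, 2.0, 0.0])
trans = np.array([1.0, 0.9, 0.6, 0.3, 0.6, 0.9, 1.0])
primary_est, scatter_est = tdcs_correct(proj, trans)
```

The subtraction removes proportionally more counts where the transmission is low, i.e. where more attenuating (and scattering) material lies in the beam.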
Energy Technology Data Exchange (ETDEWEB)
Bai, J. [Keio Univ., Tokyo (Japan). 21st Century Center of Excellence Program; Hashimoto, J.; Kubo, A. [Keio Univ., Tokyo (Japan). Dept. of Radiology; Ogawa, K. [Hosei Univ., Tokyo (Japan). Dept. of Electronic Informatics; Fukunaga, A.; Onozuka, S. [Keio Univ., Tokyo (Japan). Dept. of Neurosurgery
2007-07-01
The aim of this study was to evaluate the effect of scatter and attenuation correction in region-of-interest (ROI) analysis of brain perfusion single-photon emission tomography (SPECT), and to assess the influence of the choice of reference area on the calculation of lesion-to-reference count ratios. Patients, methods: Data were collected from a brain phantom and from ten patients with unilateral internal carotid artery stenosis. A simultaneous emission and transmission scan was performed after injecting {sup 123}I-iodoamphetamine. We reconstructed three SPECT images from common projection data: with scatter correction and nonuniform attenuation correction, with scatter correction and uniform attenuation correction, and with uniform attenuation correction applied to data without scatter correction. Regional count ratios were calculated using four different reference areas (contralateral intact side, ipsilateral cerebellum, whole brain and hemisphere). Results: Scatter correction improved the accuracy of the count ratios in the phantom experiment. It also yielded a marked difference in the count ratio in the clinical study when the cerebellum, whole brain or hemisphere was used as the reference. The difference between nonuniform and uniform attenuation correction was not significant in the phantom or clinical studies except when the cerebellar reference was used. Calculation of lesion-to-normal count ratios referring to the same site in the contralateral hemisphere did not depend on the use of scatter correction or transmission-scan-based attenuation correction. Conclusion: Scatter correction was indispensable for accurate measurement in most of the ROI analyses. Nonuniform attenuation correction is unnecessary when a reference area other than the cerebellum is used. (orig.)
International Nuclear Information System (INIS)
Gu Yi; Xiong Shengqing; Zhou Jianxin; Fan Zhengguo; Ge Liangquan
2014-01-01
γ-rays released by radon daughters have a severe impact on airborne γ-ray spectrometry. The spectral-ratio method is one of the best mathematical methods for radon-background deduction in airborne γ-ray spectrometry. In this paper, an advanced spectral-ratio method is proposed which removes the Compton-scattered rays by the fast Fourier transform rather than by stripping ratios. The relationship between survey height and the correction coefficient of the advanced spectral-ratio radon background correction method was studied, the mathematical model of the correction was established, and a ground saturation-model calibration technique for the correction coefficient was proposed. The advanced spectral-ratio method improves applicability and correction efficiency and reduces application cost. Furthermore, it prevents the loss of physical meaning and avoids the possible errors caused by the matrix computation and the mathematical fitting based on spectrum shape that are applied in traditional correction coefficients. (authors)
International Nuclear Information System (INIS)
Nakajima, Kenichi; Matsudaira, Masamichi; Yamada, Masato; Taki, Junichi; Tonami, Norihisa; Hisada, Kinichi
1995-01-01
The triple-energy window (TEW) method is a simple and practical approach for correcting Compton scatter in single-photon emission tracer studies. The scatter fraction, measured by the TEW method with a point source or a 30 ml syringe placed under the camera, was 55% for 201Tl, 29% for 99mTc and 57% for 123I. Composite energy spectra were generated and separated by the TEW method. The combination of 99mTc and 201Tl was well separated, and 201Tl and 123I were separated within an error of 10%, whereas an asymmetric photopeak energy window was necessary for separating 123I and 99mTc. By applying this method to myocardial SPECT, the effect of scatter elimination was investigated in each myocardial wall by polar-map and profile-curve analysis. The effect of scatter was greater in the septum and the inferior wall. The count ratio relative to the anterior wall including scatter was 9% higher for 123I, 7-8% higher for 99mTc and 6% higher for 201Tl. Apparent count loss after scatter correction was 30% for 123I, 13% for 99mTc and 38% for 201Tl. Image contrast, defined as the myocardium-to-left-ventricular-cavity count ratio, improved with scatter correction. Since the influence of Compton scatter is significant in cardiac planar and SPECT studies, the degree of scatter fraction should be kept in mind both in quantification and in visual interpretation. (author)
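The TEW estimate itself reduces to a trapezoid rule: counts in two narrow sub-windows abutting the photopeak window approximate the scatter spectrum under the peak. A minimal sketch, with typical (assumed) window widths in keV:

```python
def tew_scatter(c_lower, c_upper, w_sub=2.0, w_peak=20.0):
    """Trapezoidal TEW scatter estimate from counts in the two narrow
    sub-windows (width w_sub keV each) flanking the photopeak window
    (width w_peak keV)."""
    return (c_lower / w_sub + c_upper / w_sub) * w_peak / 2.0

def tew_correct(c_peak, c_lower, c_upper, w_sub=2.0, w_peak=20.0):
    """Scatter-corrected photopeak counts, clamped at zero."""
    return max(c_peak - tew_scatter(c_lower, c_upper, w_sub, w_peak), 0.0)
```

For example, with 1000 counts in the main window and 10 and 2 counts in the 2 keV sub-windows, the estimated scatter is 60 counts and the corrected photopeak count is 940.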
Steponavičius, Raimundas; Thennadil, Suresh N
2013-05-01
Sample-to-sample photon path length variations that arise due to multiple scattering can be removed by decoupling absorption and scattering effects by using the radiative transfer theory, with a suitable set of measurements. For samples where particles both scatter and absorb light, the extracted bulk absorption spectrum is not completely free from nonlinear particle effects, since it is related to the absorption cross-section of particles that changes nonlinearly with particle size and shape. For the quantitative analysis of absorbing-only (i.e., nonscattering) species present in a matrix that contains a particulate species that absorbs and scatters light, a method to eliminate particle effects completely is proposed here, which utilizes the particle size information contained in the bulk scattering coefficient extracted by using the Mie theory to carry out an additional correction step to remove particle effects from bulk absorption spectra. This should result in spectra that are equivalent to spectra collected with only the liquid species in the mixture. Such an approach has the potential to significantly reduce the number of calibration samples as well as improve calibration performance. The proposed method was tested with both simulated and experimental data from a four-component model system.
Energy Technology Data Exchange (ETDEWEB)
Chen, X; Ouyang, L; Jia, X; Zhang, Y; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States); Yan, H [Cyber Medical Corporation, Xi’an (China)
2016-06-15
Purpose: A moving-blocker-based strategy has shown promising results for scatter correction in cone-beam computed tomography (CBCT). The geometry design and moving speed of the blocker affect its image-reconstruction accuracy. The goal of this work is to optimize the geometric design and moving speed of the moving-blocker system through experimental evaluation. Methods: An Elekta Synergy XVI system and an anthropomorphic pelvis phantom (CIRS 801-P) were used for the experiment. A blocker consisting of lead strips, inserted between the x-ray source and the phantom and moving back and forth along the rotation axis, was used to measure the scatter signal. According to our Monte Carlo simulation results, three blockers were used, with the same lead-strip width of 3.2 mm and gaps between neighboring strips of 3.2, 6.4 and 9.6 mm. For each blocker, three moving speeds were evaluated: 10, 20 and 30 pixels per projection (on the detector plane). The scatter signal in the unblocked region was estimated by cubic B-spline interpolation from the blocked region. The CBCT image was reconstructed by a total variation (TV) based algebraic iterative reconstruction (ART) algorithm from the partially blocked projection data. Reconstruction accuracy for each condition was quantified as the CT-number error of regions of interest (ROIs) relative to a CBCT image reconstructed from analytically simulated unblocked and scatter-free projection data. Results: The highest reconstruction accuracy was achieved when the blocker strip width was 3.2 mm, the gap between neighboring lead strips 9.6 mm, and the moving speed 20 pixels per projection; the RMSE of the CT number of ROIs was reduced from 436 to 27. Conclusions: Image-reconstruction accuracy is strongly affected by the geometric design of the blocker, whereas the moving speed has little effect once it exceeds 20 pixels per projection.
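The interpolation step described in the Methods can be sketched directly: because scatter is low-frequency, samples taken behind the lead strips suffice to reconstruct it across the whole detector row. The profile and sampling interval below are synthetic stand-ins:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical 1-D detector row: scatter varies slowly, so samples
# measured behind the lead strips (blocked pixels) are enough to
# interpolate the scatter everywhere else.
x = np.arange(256)
scatter_true = 40.0 + 10.0 * np.sin(x / 40.0)   # smooth synthetic scatter
strip_centres = x[::15]                          # sampled (blocked) pixels
spline = CubicSpline(strip_centres, scatter_true[strip_centres])
scatter_est = spline(x)                          # estimate in unblocked region
max_err = float(np.max(np.abs(scatter_est - scatter_true)))
```

Because the synthetic scatter profile contains only low spatial frequencies, the cubic-spline estimate from sparse samples is accurate to a small fraction of a count, which is the premise that makes the blocked-region sampling viable.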
Drift-corrected Odin-OSIRIS ozone product: algorithm and updated stratospheric ozone trends
Directory of Open Access Journals (Sweden)
A. E. Bourassa
2018-01-01
A small long-term drift in the Optical Spectrograph and Infrared Imager System (OSIRIS) stratospheric ozone product, manifested mostly since 2012, is quantified and attributed to a changing bias in the limb-pointing knowledge of the instrument. A correction to this pointing drift using a predictable shape in the measured limb radiance profile is implemented and applied within the OSIRIS retrieval algorithm. Thanks to the pointing correction, this new data product, version 5.10, displays substantially better long- and short-term agreement with Microwave Limb Sounder (MLS) ozone throughout the stratosphere. Previously reported stratospheric ozone trends over the period 1984–2013, which were derived by merging the altitude–number density ozone profile measurements from the Stratospheric Aerosol and Gas Experiment (SAGE II) satellite instrument (1984–2005) with those from OSIRIS (2002–2013), are recalculated using the new OSIRIS version 5.10 product and extended to 2017. These results still show statistically significant positive trends throughout the upper stratosphere since 1997, but at weaker levels that are more closely in line with estimates from other data records.
Window selection for dual photopeak window scatter correction in Tc-99m imaging
International Nuclear Information System (INIS)
Vries, D.J. de; King, M.A.
1994-01-01
The width and placement of the windows for the dual photopeak window (DPW) scatter-subtraction method for Tc-99m imaging are investigated in order to obtain a method that is stable on a multihead detector system for single photon emission computed tomography (SPECT) and is capable of providing a good scatter estimate for extended objects. For various window pairs, stability and noise were examined in experiments using a SPECT system, while Monte Carlo simulations were used to predict the accuracy of scatter estimates for a variety of objects and to guide the development of regression relations for the window pairs. The DPW method that resulted from this study was implemented with a symmetric 20% photopeak window composed of a 15% asymmetric photopeak window and a 5% lower window abutted at 7 keV below the peak. A power-function regression was used to relate the scatter-to-total ratio to the lower-window-to-total ratio at each pixel, from which an estimated scatter image was calculated. DPW demonstrated good stability, achieved by abutting the two windows away from the peak. Performance was assessed and compared with Compton window subtraction (CWS). For simulated extended objects, DPW generally produced a less biased scatter estimate than the commonly used CWS method with k = 0.5. In acquisitions of a clinical SPECT phantom, contrast recovery was comparable for DPW and CWS; however, DPW showed greater visual contrast in clinical SPECT bone studies.
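The per-pixel regression at the heart of DPW can be sketched as follows; the power-law coefficients here are hypothetical placeholders for the fitted regression reported in the paper:

```python
import numpy as np

def dpw_scatter(peak, lower, a=1.2, b=1.5):
    """Per-pixel dual-photopeak-window scatter estimate: the
    scatter-to-total ratio is modelled as a power function of the
    lower-window-to-total ratio, S/T = a * (L/T)**b.  The regression
    coefficients a and b are hypothetical; in practice they are fitted
    to Monte Carlo or phantom data."""
    total = peak + lower
    ratio = np.divide(lower, total,
                      out=np.zeros_like(total, dtype=float),
                      where=total > 0)
    scatter = a * ratio**b * total
    return np.clip(scatter, 0.0, peak)  # scatter cannot exceed peak counts

# Hypothetical 2x2 photopeak and lower-window count images:
peak = np.array([[100.0, 50.0], [0.0, 200.0]])
lower = np.array([[20.0, 25.0], [0.0, 10.0]])
scat = dpw_scatter(peak, lower)
```

Pixels with a larger lower-window fraction are assigned proportionally more scatter, which is the qualitative behaviour the regression encodes.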
International Nuclear Information System (INIS)
Younes, R.B.; Mas, J.; Bidet, R.
1988-01-01
Contour detection is an important step in extracting information from nuclear medicine images. In order to perform accurate quantitative studies in single photon emission computed tomography (SPECT), a new procedure is described which can rapidly derive the best-fit contour of an attenuating medium. Several authors have evaluated the influence of the detected contour on images reconstructed with various attenuation-correction techniques; most of the methods are strongly affected by inaccurately detected contours. The present approach uses the Compton window to redetermine the convex contour and proves simpler and more practical in clinical SPECT studies. Its main advantages are the high speed of computation, the accuracy of the contour found and the programme's automation. Results obtained using computer-simulated and real phantoms and clinical studies demonstrate the reliability of the present algorithm. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Kim, Sang In; Kim, Bong Hwan; Kim, Jang Lyul; Lee, Jung Il [Health Physics Team, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-12-15
The calibration methods of neutron-measuring devices such as the neutron survey meter have advantages and disadvantages. To compare the calibration factors obtained by the shadow cone method and semi-empirical method, 10 neutron survey meters of five different types were used in this study. This experiment was performed at the Korea Atomic Energy Research Institute (KAERI; Daejeon, South Korea), and the calibration neutron fields were constructed using a {sup 252}Californium ({sup 252}Cf) neutron source, which was positioned in the center of the neutron irradiation room. The neutron spectra of the calibration neutron fields were measured by a europium-activated lithium iodide scintillator in combination with KAERI's Bonner sphere system. When the shadow cone method was used, 10 single moderator-based survey meters exhibited a smaller calibration factor by as much as 3.1 - 9.3% than that of the semi-empirical method. This finding indicates that neutron survey meters underestimated the scattered neutrons and attenuated neutrons (i.e., the total scatter corrections). This underestimation of the calibration factor was attributed to the fact that single moderator-based survey meters have an under-ambient dose equivalent response in the thermal or thermal-dominant neutron field. As a result, when the shadow cone method is used for a single moderator-based survey meter, an additional correction and the International Organization for Standardization standard 8529-2 for room-scattered neutrons should be considered.
International Nuclear Information System (INIS)
Cao, Ye; Tang, Xiao-Bin; Wang, Peng; Meng, Jia; Huang, Xi; Wen, Liang-Sheng; Chen, Da
2015-01-01
Unmanned aerial vehicle (UAV) radiation monitoring plays an important role in nuclear accident emergencies. In this research, a spectrum-correction algorithm for the UAV airborne radioactivity-monitoring equipment NH-UAV was studied in order to measure the radioactive nuclides within a small area in real time and at a fixed location. Simulated spectra of the high-purity germanium (HPGe) detector and the lanthanum bromide (LaBr₃) detector in the equipment were obtained using the Monte Carlo technique. Spectrum-correction coefficients were calculated by taking ratios of the net peak areas of the two detectors, using the accuracy of the HPGe detection spectrum to correct the LaBr₃ spectrum. The relationship between the spectrum-correction coefficient and the size of the source term was also investigated. A good linear relation exists between the spectrum-correction coefficient and the corresponding energy (R² = 0.9765). The maximum relative deviation from the real condition was reduced from 1.65 to 0.035. The spectrum-correction method was verified as feasible. - Highlights: • Airborne radioactivity-monitoring equipment NH-UAV was developed to measure radionuclides after a nuclear accident. • A spectrum-correction algorithm was proposed to obtain precise information on the detected radioactivity within a small area. • The spectrum-correction method was verified as feasible. • The spectrum-correction coefficients increase first and then stay constant.
International Nuclear Information System (INIS)
Broome, J.
1965-11-01
The programme SCATTER is a KDF9 programme in the Egtran dialect of Fortran to generate normalized angular distributions for elastically scattered neutrons from data input as the coefficients of a Legendre polynomial series, or from differential cross-section data. Also, differential cross-section data may be analysed to produce Legendre polynomial coefficients. Output on cards punched in the format of the U.K. A. E. A. Nuclear Data Library is optional. (author)
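In modern terms, the normalisation SCATTER performs can be sketched as follows: given Legendre moments f_l with f_0 = 1, the distribution p(mu) = sum_l (2l+1)/2 * f_l * P_l(mu) integrates to unity over mu in [-1, 1]. A short NumPy sketch with hypothetical moments:

```python
import numpy as np
from numpy.polynomial import legendre

def angular_distribution(f, mu):
    """Evaluate p(mu) = sum_l (2l+1)/2 * f_l * P_l(mu) from Legendre
    moments f (f[0] = 1 gives a distribution normalised to unity)."""
    scaled = [(2 * l + 1) / 2.0 * fl for l, fl in enumerate(f)]
    return legendre.legval(mu, scaled)

mu = np.linspace(-1.0, 1.0, 2001)
pdf = angular_distribution([1.0, 0.3, 0.1], mu)   # hypothetical moments
h = mu[1] - mu[0]
norm = float(np.sum((pdf[1:] + pdf[:-1]) * 0.5) * h)  # trapezoid rule
```

Because the integral of P_l over [-1, 1] vanishes for l >= 1, only the f_0 term contributes to the norm, which therefore comes out as 1 regardless of the higher moments.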
A new method for x-ray scatter correction: first assessment on a cone-beam CT experimental setup
International Nuclear Information System (INIS)
Rinkel, J; Gerfault, L; Esteve, F; Dinten, J-M
2007-01-01
Cone-beam computed tomography (CBCT) enables three-dimensional imaging with isotropic resolution and a shorter acquisition time compared to a helical CT scanner. Because a larger object volume is exposed for each projection, scatter levels are much higher than in collimated fan-beam systems, resulting in cupping artifacts, streaks and quantification inaccuracies. In this paper, a general method to correct for scatter in CBCT, without supplementary on-line acquisition, is presented. This method is based on scatter calibration through off-line acquisition combined with on-line analytical transformation based on physical equations, to adapt calibration to the object observed. The method was tested on a PMMA phantom and on an anthropomorphic thorax phantom. The results were validated by comparison to simulation for the PMMA phantom and by comparison to scans obtained on a commercial multi-slice CT scanner for the thorax phantom. Finally, the improvements achieved with the new method were compared to those obtained using a standard beam-stop method. The new method provided results that closely agreed with the simulation and with the conventional CT scanner, eliminating cupping artifacts and significantly improving quantification. Compared to the beam-stop method, lower x-ray doses and shorter acquisition times were needed, both divided by a factor of 9 for the same scatter estimation accuracy
International Nuclear Information System (INIS)
Lehtinen, Ossi; Geiger, Dorin; Lee, Zhongbo; Whitwick, Michael Brian; Chen, Ming-Wei; Kis, Andras; Kaiser, Ute
2015-01-01
Here, we present a numerical post-processing method for removing the effect of anti-symmetric residual aberrations in high-resolution transmission electron microscopy (HRTEM) images of weakly scattering 2D objects. The method is based on applying the same aberrations with the opposite phase to the Fourier transform of the recorded image intensity and subsequently inverting the Fourier transform. We present the theoretical justification of the method and its verification on simulated images in the case of low-order anti-symmetric aberrations. Finally, the method is applied to experimental hardware aberration-corrected HRTEM images of single-layer graphene and MoSe₂, resulting in images with strongly reduced residual low-order aberrations and consequently improved interpretability. Alternatively, the method can be used to estimate, by trial and error, the residual anti-symmetric aberrations in HRTEM images of weakly scattering objects.
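The core of the method is a single multiplication in Fourier space: applying the estimated anti-symmetric phase with the opposite sign and inverting the transform. The sketch below demonstrates this on a synthetic image with a hypothetical coma-like odd-order phase; for a phase that is exactly anti-symmetric on the FFT grid, the restoration is exact:

```python
import numpy as np

def remove_antisymmetric(img, phase):
    """Multiply the image spectrum by the opposite of the estimated
    anti-symmetric aberration phase and invert the Fourier transform.
    `phase` must satisfy phase(-k) = -phase(k) on the FFT grid."""
    return np.fft.ifft2(np.fft.fft2(img) * np.exp(-1j * phase)).real

# Demo: aberrate a random "weak object" with a hypothetical odd-order
# (coma-like) phase, then restore it.
n = 63                                  # odd size: every k has an exact -k partner
k = np.fft.fftfreq(n)
kx, ky = np.meshgrid(k, k)
phase = 8.0 * (kx**3 + ky**3)           # odd in k, hence anti-symmetric
rng = np.random.default_rng(0)
obj = rng.random((n, n))
aberrated = np.fft.ifft2(np.fft.fft2(obj) * np.exp(1j * phase)).real
restored = remove_antisymmetric(aberrated, phase)
```

For a real image and an anti-symmetric phase the aberrated spectrum stays Hermitian, so the aberrated image is real and the correction inverts the distortion exactly (to machine precision).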
N3LO corrections to jet production in deep inelastic scattering using the Projection-to-Born method
Currie, J.; Gehrmann, T.; Glover, E. W. N.; Huss, A.; Niehues, J.; Vogt, A.
2018-05-01
Computations of higher-order QCD corrections for processes with exclusive final states require a subtraction method for real-radiation contributions. We present the first-ever generalisation of a subtraction method for third-order (N3LO) QCD corrections. The Projection-to-Born method is used to combine inclusive N3LO coefficient functions with an exclusive second-order (NNLO) calculation for a final state with an extra jet. The input requirements, advantages, and potential applications of the method are discussed, and validations at lower orders are performed. As a test case, we compute the N3LO corrections to kinematical distributions and production rates for single-jet production in deep inelastic scattering in the laboratory frame, and compare them with data from the ZEUS experiment at HERA. The corrections are small in the central rapidity region, where they stabilize the predictions to sub per-cent level. The corrections increase substantially towards forward rapidity where large logarithmic effects are expected, thereby yielding an improved description of the data in this region.
The Scatter Search Based Algorithm to Revenue Management Problem in Broadcasting Companies
Pishdad, Arezoo; Sharifyazdi, Mehdi; Karimpour, Reza
2009-09-01
Broadcasting companies face the problem of how best to profit from a limited advertising space, given the stochastic behavior of customers (advertisers) in different fare classes. To address this issue we propose a constrained nonlinear multi-period mathematical model which incorporates cancellation and overbooking. The objective is to maximize total expected revenue; our numerical method does so by determining sales limits for each customer class, which constitute the revenue management control policy. Scheduling the advertising spots in breaks is another area of concern, which we treat as a constraint in the model. An algorithm based on scatter search is developed to obtain a good feasible solution; it uses simulation of customer arrivals over a continuous finite time horizon [0, T]. Several sensitivity analyses in the computational results demonstrate the effectiveness of the proposed method and show that the revenue management control policy outperforms a "no sales limit" policy in which earlier demand is served first.
International Nuclear Information System (INIS)
Rosca, Florin; Zygmanski, Piotr
2008-01-01
We have developed an independent algorithm for the prediction of electronic portal imaging device (EPID) response. The algorithm uses a set of images [open beam, closed multileaf collimator (MLC), various fence and modified sweeping-gap patterns] to separately characterize the primary and head-scatter contributions to EPID response. It also characterizes the relevant dosimetric properties of the MLC: transmission, dosimetric gap, MLC scatter [P. Zygmanski et al., J. Appl. Clin. Med. Phys. 8(4) (2007)], inter-leaf leakage, and tongue and groove [F. Lorenz et al., Phys. Med. Biol. 52, 5985-5999 (2007)]. The primary radiation is modeled with a single Gaussian distribution defined at the target position, while the head-scatter radiation is modeled with a triple Gaussian distribution defined downstream of the target. The distances between the target and the head-scatter source, jaws, and MLC are model parameters. The scatter associated with the EPID is implicit in the model. Open beam images are predicted to within 1% of the maximum value across the image. Other MLC test patterns and intensity-modulated radiation therapy fluences are predicted to within 1.5% of the maximum value. The presented method was applied to the Varian aS500 EPID but is designed to work with any planar detector with sufficient spatial resolution.
NNLO QCD corrections to jet production at hadron colliders from gluon scattering
International Nuclear Information System (INIS)
Currie, James; Ridder, Aude Gehrmann-De; Glover, E.W.N.; Pires, João
2014-01-01
We present the next-to-next-to-leading order (NNLO) QCD corrections to dijet production in the purely gluonic channel, retaining the full dependence on the number of colours. The sub-leading colour contribution in this channel first appears at NNLO and increases the NNLO correction by around 10%, exhibiting a p_T dependence that rises from 8% at low p_T to 15% at high p_T. The present calculation demonstrates the utility of the antenna subtraction method for computing the full colour NNLO corrections to dijet production at the Large Hadron Collider.
A simple algorithm for calculating the scattering angle in atomic collisions
International Nuclear Information System (INIS)
Belchior, J.C.; Braga, J.P.
1996-01-01
A geometric approach to calculating the classical atomic scattering angle is presented. The trajectory of the particle is divided into several straight-line segments, and the change in direction from one segment to the next is used to calculate the scattering angle. In this model, calculation of the scattering angle involves neither the direct evaluation of integrals nor classical turning points. (author)
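The segment-wise construction described in the abstract can be sketched numerically. The sketch below assumes a repulsive Coulomb potential V(r) = k/r (the abstract does not specify a potential), advances the trajectory in short straight segments, and reads off the deflection as the angle between the incoming and outgoing flight directions; all names and parameter values are illustrative.

```python
import math

def deflection_angle(b, E, k=1.0, m=1.0, dt=1e-3, r_far=50.0):
    """Classical scattering angle for a repulsive V(r) = k/r, obtained by
    following the trajectory as a chain of short straight segments and
    comparing the initial and final directions of flight -- no deflection
    integral and no turning-point search."""
    v0 = math.sqrt(2.0 * E / m)      # asymptotic speed from the energy
    x, y = -r_far, b                 # start far away, impact parameter b
    vx, vy = v0, 0.0                 # incoming direction is +x

    def accel(px, py):
        r = math.hypot(px, py)
        a = k / (m * r ** 3)         # central repulsive force k/r^2
        return a * px, a * py

    ax, ay = accel(x, y)
    # march until the particle is outside r_far and receding
    while math.hypot(x, y) < r_far or x * vx + y * vy < 0.0:
        x += vx * dt + 0.5 * ax * dt * dt
        y += vy * dt + 0.5 * ay * dt * dt
        ax_new, ay_new = accel(x, y)
        vx += 0.5 * (ax + ax_new) * dt
        vy += 0.5 * (ay + ay_new) * dt
        ax, ay = ax_new, ay_new
    return math.atan2(vy, vx)        # angle between in/out directions
```

For this potential the analytic result is θ = 2 arctan(k / (2Eb)), which the segment-wise construction reproduces up to the truncation error from starting and stopping the trajectory at a finite radius.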
International Nuclear Information System (INIS)
Poux, J.P.
1972-06-01
In this research thesis, after reviewing the processes of elastic scattering of positrons on electrons (kinematics and cross section) and the associated radiative corrections, the author describes the experimental installation (positron beam, ionization chamber, targets, spectrometer, electronic logic associated with the counter telescope) used to measure the differential cross section of recoil electrons, as well as the methods employed. In a third part, the author reports the calculation of corrections and the spectra obtained. In the final part, the author interprets the results and compares them with the experiment performed by Browman, Grossetete and Yount. The author shows that the two experiments are complementary and that both agree with the calculation performed by Yennie, Hearn and Kuo
International Nuclear Information System (INIS)
Gu Xuejun; Jia Xun; Jiang, Steve B; Jelen, Urszula; Li Jinsheng
2011-01-01
Aiming at the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on a graphics processing unit (GPU). This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework (Gu et al 2009 Phys. Med. Biol. 54 6287-97). Dosimetric evaluations against Monte Carlo dose calculations are conducted on ten IMRT treatment plans (five head-and-neck cases and five lung cases). For all cases there is improvement with the 3D-density correction over the conventional FSPB algorithm, and for most cases the improvement is significant. Regarding efficiency, thanks to the appropriate arrangement of memory access and the use of GPU intrinsic functions, the dose calculation for an IMRT plan can be accomplished well within 1 s (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, the new algorithm, though slightly sacrificing computational efficiency (∼5-15% lower), has significantly improved dose calculation accuracy, making it more suitable for online IMRT replanning.
International Nuclear Information System (INIS)
Sun, Y.; Hou, Y.; Yan, Y.
2004-01-01
With the extensive application of industrial computed tomography in the field of non-destructive testing, how to improve the quality of the reconstructed image is receiving increasing attention. It is well known that in existing cone-beam filtered backprojection reconstruction algorithms the cone angle is restricted to a narrow range, because the projection data become incomplete as the cone angle increases; this limits the size of the tested workpiece. Considering the characteristics of the X-ray cone angle, an improved cone-beam filtered backprojection reconstruction algorithm incorporating angular correction is proposed in this paper. The aim of the algorithm is to correct the cone-angle effect resulting from the incompleteness of projection data in the conventional algorithm, using the angular relationship among the X-ray source, the tested workpiece and the detector. The cone angle is thus not strictly limited, and the algorithm can be used to inspect larger workpieces. Furthermore, an adaptive wavelet filter is used for multiresolution analysis, which can adjust the wavelet decomposition level according to the required resolution of the local reconstructed area. The computation and reconstruction time can therefore be reduced, and the quality of the reconstructed image improved. (author)
Energy Technology Data Exchange (ETDEWEB)
Borel, C.C.; Villeneuve, P.V.; Clodium, W.B.; Szymenski, J.J.; Davis, A.B.
1999-04-04
Deriving information about the Earth's surface requires atmospheric correction of the measured top-of-the-atmosphere radiances. One possible path is to use atmospheric radiative transfer codes to predict how the radiance leaving the ground is affected by scattering and attenuation. In practice the atmosphere is usually not well known, and more practical methods are therefore necessary. The authors describe how to find dark surfaces, estimate the atmospheric optical depth, estimate path radiance, and identify thick clouds using thresholds on reflectance, NDVI and columnar water vapor. The authors also describe a simple method to correct a visible channel contaminated by thin cirrus clouds.
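The thresholding step mentioned here can be illustrated with a minimal per-pixel sketch. The band names and threshold values below are illustrative assumptions, not those used by the authors:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectances."""
    return (nir - red) / (nir + red)

def is_thick_cloud(reflectance, nir, red, refl_thresh=0.3, ndvi_thresh=0.1):
    """Flag a pixel as thick cloud when it is bright but not vegetated:
    high broadband reflectance combined with near-zero NDVI.
    The two thresholds are placeholders, not calibrated values."""
    return reflectance > refl_thresh and abs(ndvi(nir, red)) < ndvi_thresh
```

A vegetated pixel (NDVI well above the threshold) or a dark pixel (low reflectance) is left unflagged, which is the behaviour a dark-surface-based correction needs.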
Directory of Open Access Journals (Sweden)
Yasuhiro Nakamura
2012-07-01
The present study introduces the four-component scattering power decomposition (4-CSPD) algorithm with rotation of the covariance matrix, and presents an experimental proof of the equivalence between the 4-CSPD algorithms based on rotation of the covariance matrix and of the coherency matrix. From a theoretical point of view, the 4-CSPD algorithms with rotation of the two matrices are identical. Although this seems obvious, no experimental evidence had yet been presented. In this paper, using polarimetric synthetic aperture radar (POLSAR) data acquired by the Phased Array L-band SAR (PALSAR) on board the Advanced Land Observing Satellite (ALOS), an experimental proof is presented to show that both algorithms indeed produce identical results.
International Nuclear Information System (INIS)
Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.
2015-01-01
The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
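A minimal sketch of the thin-plate-spline machinery referred to above: given matched control points (e.g. detected comb positions and their ideal locations), a TPS warp is fitted by solving the standard linear system with kernel U(r) = r² log r. This is a generic TPS implementation under that textbook formulation, not the NIF production code; all names are illustrative.

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2D thin-plate spline mapping src control points onto dst.
    Returns the (n+3, 2) coefficient matrix of the standard TPS system."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    d2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d2 > 0.0, 0.5 * d2 * np.log(d2), 0.0)  # U(r) = r^2 log r
    P = np.hstack([np.ones((n, 1)), src])                    # affine part
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)

def tps_apply(coef, src, pts):
    """Warp arbitrary points with a spline fitted by tps_fit."""
    src, pts = np.asarray(src, float), np.asarray(pts, float)
    n = len(src)
    d2 = np.sum((pts[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        U = np.where(d2 > 0.0, 0.5 * d2 * np.log(d2), 0.0)
    return U @ coef[:n] + np.hstack([np.ones((len(pts), 1)), pts]) @ coef[n:]
```

By construction the fitted warp passes exactly through the control points, so a quick self-check is to apply it back to the source points and compare against the targets.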
Directory of Open Access Journals (Sweden)
Ray Debraj
2015-01-01
Speeded Up Robust Features (SURF) is used to position a robot with respect to its environment and to aid vision-based robotic navigation. During navigation, irregularities in the terrain, especially in an outdoor environment, may cause a robot to deviate from its track. Another cause of deviation can be unequal speeds of the left and right robot wheels. Hence it is essential to detect such deviations and perform corrective operations to bring the robot back on track. In this paper we propose a novel algorithm that uses image matching with SURF to detect deviation of a robot from its trajectory, followed by restoration through corrective operations. This algorithm is executed in parallel with positioning and navigation algorithms by distributing tasks among different CPU cores using the Open Multi-Processing (OpenMP) API.
Sayed, Sadeed Bin; Uysal, Ismail Enes; Bagci, Hakan; Ulku, H. Arda
2018-01-01
Quantum tunneling is observed between two nanostructures that are separated by a sub-nanometer gap: electrons "jumping" from one structure to the other create an additional current path. So that a classical electromagnetic solver can account for the effects of quantum tunneling, an auxiliary tunnel is introduced between the two structures. The dispersive permittivity of the tunnel is represented by a Drude model, whose parameters are obtained from the electron tunneling probability. The transient scattering from the connected nanostructures (i.e., nanostructures plus auxiliary tunnel) is analyzed using a time-domain volume integral equation solver. Numerical results demonstrating the effect of quantum tunneling on the scattered fields are provided.
Filli, Lukas; Marcon, Magda; Scholz, Bernhard; Calcagni, Maurizio; Finkenstädt, Tim; Andreisek, Gustav; Guggenberger, Roman
2014-12-01
The aim of this study was to evaluate a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography (FDCT) of scaphoid fixation screws. FDCT has gained interest in imaging small anatomic structures of the appendicular skeleton. Angiographic C-arm systems with flat detectors allow fluoroscopy and FDCT imaging in a one-stop procedure emphasizing their role as an ideal intraoperative imaging tool. However, FDCT imaging can be significantly impaired by artefacts induced by fixation screws. Following ethical board approval, commercially available scaphoid fixation screws were inserted into six cadaveric specimens in order to fix artificially induced scaphoid fractures. FDCT images corrected with the algorithm were compared to uncorrected images both quantitatively and qualitatively by two independent radiologists in terms of artefacts, screw contour, fracture line visibility, bone visibility, and soft tissue definition. Normal distribution of variables was evaluated using the Kolmogorov-Smirnov test. In case of normal distribution, quantitative variables were compared using paired Student's t tests. The Wilcoxon signed-rank test was used for quantitative variables without normal distribution and all qualitative variables. A p value of < 0.05 was considered to indicate statistically significant differences. Metal artefacts were significantly reduced by the correction algorithm (p < 0.001), and the fracture line was more clearly defined (p < 0.01). The inter-observer reliability was "almost perfect" (intra-class correlation coefficient 0.85, p < 0.001). The prototype correction algorithm in FDCT for metal artefacts induced by scaphoid fixation screws may facilitate intra- and postoperative follow-up imaging. Flat detector computed tomography (FDCT) is a helpful imaging tool for scaphoid fixation. The correction algorithm significantly reduces artefacts in FDCT induced by scaphoid fixation screws. This may facilitate intra
Energy Technology Data Exchange (ETDEWEB)
Williams, Robert W. [Department of Biomedical Informatics, Uniformed Services University, 4301 Jones Bridge Road, Bethesda, MD 20815 (United States)], E-mail: bob@bob.usuhs.mil; Schluecker, Sebastian [Institute of Physical Chemistry, University of Wuerzburg, Wuerzburg (Germany); Hudson, Bruce S. [Department of Chemistry, Syracuse University, Syracuse, NY (United States)
2008-01-22
A scaled quantum mechanical harmonic force field (SQMFF) corrected for anharmonicity is obtained for the 23 K L-alanine crystal structure using van der Waals corrected periodic boundary condition density functional theory (DFT) calculations with the PBE functional. Scale factors are obtained with comparisons to inelastic neutron scattering (INS), Raman, and FT-IR spectra of polycrystalline L-alanine at 15-23 K. Calculated frequencies for all 153 normal modes differ from observed frequencies with a standard deviation of 6 wavenumbers. Non-bonded external k = 0 lattice modes are included, but assignments to these modes are presently ambiguous. The extension of SQMFF methodology to lattice modes is new, as are the procedures used here for providing corrections for anharmonicity and van der Waals interactions in DFT calculations on crystals. First principles Born-Oppenheimer molecular dynamics (BOMD) calculations are performed on the L-alanine crystal structure at a series of classical temperatures ranging from 23 K to 600 K. Corrections for zero-point energy (ZPE) are estimated by finding the classical temperature that reproduces the mean square displacements (MSDs) measured from the diffraction data at 23 K. External k = 0 lattice motions are weakly coupled to bonded internal modes.
International Nuclear Information System (INIS)
Maltman, K.
1998-01-01
Using the framework of effective chiral Lagrangians, we show that, in order to correctly implement electromagnetism (EM), as generated from the Standard Model, into effective hadronic theories (such as meson-exchange models) it is insufficient to consider only graphs in the low-energy effective theory containing explicit photon lines. The Standard Model requires the presence of contact interactions in the effective theory which are electromagnetic in origin, but which involve no photons in the effective theory. We illustrate the problems which can result from a ''standard'' EM subtraction: i.e., from assuming that removing all contributions in the effective theory generated by graphs with explicit photon lines fully removes EM effects, by considering the case of the s-wave ππ scattering lengths. In this case it is shown that such a subtraction procedure would lead to the incorrect conclusion that the strong interaction isospin-breaking contributions to these quantities were large when, in fact, they are known to vanish at leading order in m d -m u . The leading EM contact corrections for the channels employed in the extraction of the I=0,2 s-wave ππ scattering lengths from experiment are also evaluated. (orig.)
MUSIC algorithm for imaging of a sound-hard arc in the limited-view inverse scattering problem
Park, Won-Kwang
2017-07-01
The MUltiple SIgnal Classification (MUSIC) algorithm for non-iterative imaging of a sound-hard arc in the limited-view inverse scattering problem is considered. In order to uncover the mathematical structure of MUSIC, we derive a relationship between MUSIC and an infinite series of Bessel functions of integer order. This structure enables us to examine some properties of MUSIC in the limited-view problem. Numerical simulations are performed to support the identified structure of MUSIC.
International Nuclear Information System (INIS)
Fan, Peng; Hutton, Brian F.; Holstensson, Maria; Ljungberg, Michael; Hendrik Pretorius, P.; Prasad, Rameshwar; Liu, Chi; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Thorn, Stephanie L.; Stacy, Mitchel R.; Sinusas, Albert J.
2015-01-01
Purpose: The energy spectrum for a cadmium zinc telluride (CZT) detector has a low-energy tail due to incomplete charge collection and intercrystal scattering. Due to these solid-state detector effects, scatter would be overestimated if the conventional triple-energy window (TEW) method were used for scatter and crosstalk corrections in CZT-based imaging systems. The objective of this work is to develop a scatter and crosstalk correction method for 99mTc/123I dual-radionuclide imaging for a CZT-based dedicated cardiac SPECT system with pinhole collimators (GE Discovery NM 530c/570c). Methods: A tailing model was developed to account for the low-energy tail effects of the CZT detector. The parameters of the model were obtained using 99mTc and 123I point source measurements. A scatter model was defined to characterize the relationship between down-scatter and self-scatter projections. The parameters for this model were obtained from Monte Carlo simulation using SIMIND. The tailing and scatter models were further incorporated into a projection count model, and the primary and self-scatter projections of each radionuclide were determined with a maximum likelihood expectation maximization (MLEM) iterative estimation approach. The extracted scatter and crosstalk projections were then incorporated into MLEM image reconstruction as an additive term in forward projection to obtain scatter- and crosstalk-corrected images. The proposed method was validated using Monte Carlo simulation, a line source experiment, anthropomorphic torso phantom studies, and patient studies. The performance of the proposed method was also compared to that obtained with the conventional TEW method. Results: Monte Carlo simulations and the line source experiment demonstrated that the TEW method overestimated scatter, while the proposed method provided more accurate scatter estimation by considering the low-energy tail effect. In the phantom study, improved defect contrasts were observed with both
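For reference, the conventional triple-energy-window estimate that the authors compare against takes the count densities in two narrow windows flanking the photopeak and interpolates a trapezoid under the peak window. A minimal sketch (window widths in keV; all values illustrative):

```python
def tew_scatter(counts_low, counts_high, w_low, w_high, w_peak):
    """Triple-energy-window (TEW) scatter estimate: scatter inside the
    photopeak window is approximated by the trapezoid spanned by the
    count densities of the two narrow flanking windows."""
    return (counts_low / w_low + counts_high / w_high) * w_peak / 2.0

def tew_primary(counts_peak, counts_low, counts_high, w_low, w_high, w_peak):
    """Scatter-corrected (primary) photopeak counts, clipped at zero."""
    return max(counts_peak - tew_scatter(counts_low, counts_high,
                                         w_low, w_high, w_peak), 0.0)
```

As the abstract notes, on a CZT detector the low-energy tail inflates the lower-window counts, so this estimate overstates the scatter; the authors' model-based approach is designed to avoid exactly that bias.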
International Nuclear Information System (INIS)
Blanco, F; Garcia, G
2009-01-01
A simplified form of the well-known screening-corrected additivity rule procedure for the calculation of electron-molecule cross sections is proposed for the treatment of some very large macro-molecules. While the comparison of the standard and simplified treatments for a DNA dodecamer reveals very similar results, the new treatment presents some important advantages for large molecules.
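The additivity-rule idea can be stated in a few lines: the molecular cross section is a sum of atomic cross sections, each damped by a screening coefficient that accounts for geometric overlap of the atoms. The sketch below shows only this final weighted sum; computing the screening coefficients from the molecular geometry (the substance of the screening-corrected procedure) is assumed to have been done elsewhere.

```python
def screened_additivity(atomic_cs, screening):
    """Molecular cross section as a screening-weighted sum of atomic cross
    sections: sigma_mol = sum_i s_i * sigma_i with 0 < s_i <= 1.
    Setting every s_i = 1 recovers the plain additivity rule."""
    if len(atomic_cs) != len(screening):
        raise ValueError("one screening coefficient per atom is required")
    return sum(s * cs for s, cs in zip(screening, atomic_cs))
```

For a large molecule the gain of the simplified treatment is precisely that the per-atom coefficients can be computed cheaply, while the final combination stays this simple.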
New results on the 3-loop heavy flavor corrections in deep-inelastic scattering
Energy Technology Data Exchange (ETDEWEB)
Behring, A.; Bluemlein, J.; Freitas, A. de [Deutsches Elektronen-Synchrotron, Zeuthen (Germany); and others
2013-12-15
We report on recent progress in the calculation of the 3-loop massive Wilson coefficients in deep-inelastic scattering at general values of N for neutral- and charged-current reactions in the asymptotic region Q² ≫ m². Four of the eight massive operator matrix elements and Wilson coefficients have recently been obtained. We also discuss recent results on Feynman graphs containing two massive fermion lines and present complete results for the bubble topologies for all processes.
Inverse correction of Fourier transforms for one-dimensional strongly ...
African Journals Online (AJOL)
Hsin Ying-Fei
2016-05-01
As it is widely used in periodic lattice design theory and is particularly useful in aperiodic lattice design [12,13], the accuracy of the FT algorithm under strong scattering conditions is the focus of this paper. We propose an inverse correction approach for the inaccurate FT algorithm in strongly scattering ...
2002-01-01
Tile Calorimeter modules stored at CERN. The larger modules belong to the Barrel, whereas the smaller ones are for the two Extended Barrels. (The article was about the completion of the 64 modules for one of the latter.) The photo on the first page of the Bulletin n°26/2002, from 24 July 2002, illustrating the article «The ATLAS Tile Calorimeter gets into shape» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.
International Nuclear Information System (INIS)
Martin, G.; Coca, M.; Capote, R.
1996-01-01
Using the Monte Carlo method, a computer code was developed which simulates the time-of-flight experiment used to measure double differential cross sections. The correction factors for flux attenuation and multiple scattering, which distort the measured spectrum, were calculated. The energy dependence of the correction factor was determined, and a comparison with other works is shown. Calculations were made for 56Fe at two different scattering angles. We also reproduce the experiment performed at the Nuclear Analysis Laboratory for 12C at 25 °C, and the calculated correction factor for the measured spectrum is shown. We found a linear relation between the scatterer size and the correction factor for flux attenuation.
Energy Technology Data Exchange (ETDEWEB)
Kim, J; Park, Y; Sharp, G; Winey, B [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States)
2016-06-15
Purpose: To establish a method to evaluate the dosimetric impact of anatomic changes in head and neck patients during proton therapy by using scatter-corrected cone-beam CT (CBCT) images. Methods: The water-equivalent path length (WEPL) was calculated to the distal edge of PTV contours by using tomographic images available for six head and neck patients who received photon therapy. The proton range variation was measured by calculating the difference between the distal WEPLs calculated with the planning CT and weekly treatment CBCT images. By performing an automatic rigid registration, a six degrees-of-freedom (DOF) correction was made to the CBCT images to account for the patient setup uncertainty. For accurate WEPL calculations, an existing CBCT scatter correction algorithm, whose performance was already proven for phantom images, was calibrated for head and neck patient images. Specifically, two different image similarity measures, mutual information (MI) and mean square error (MSE), were tested for the deformable image registration (DIR) in the CBCT scatter correction algorithm. Results: The impact of weight loss was reflected in the distal WEPL differences, with the automatic rigid registration reducing the influence of patient setup uncertainty on the WEPL calculation results. The WEPL difference averaged over the distal area was 2.9 ± 2.9 mm across all fractions of the six patients, and its maximum, mostly found at the last available fraction, was 6.2 ± 3.4 mm. The MSE-based DIR successfully registered each treatment CBCT image to the planning CT image. On the other hand, the MI-based DIR deformed the skin voxels in the planning CT image to the immobilization mask in the treatment CBCT image, most of which was cropped out of the planning CT image. Conclusion: The dosimetric impact of anatomic changes was evaluated by calculating the distal WEPL difference with the existing scatter-correction algorithm appropriately calibrated.
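The WEPL quantity used above is a straight-ray line integral of relative stopping power through a CT-derived volume. A minimal sampling sketch, with nearest-voxel lookup and uniform steps, follows; the function names, sampling scheme, and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def wepl(rsp, voxel_mm, origin, direction, step_mm=1.0, max_mm=500.0):
    """Water-equivalent path length along a straight ray: the sum of
    relative stopping power (RSP) samples times the step length, using
    nearest-voxel lookup, stopping when the ray leaves the volume."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)            # unit direction
    p = np.asarray(origin, float)        # current position in mm
    total = 0.0
    for _ in range(int(max_mm / step_mm)):
        idx = np.floor(p / voxel_mm).astype(int)
        if np.any(idx < 0) or np.any(idx >= rsp.shape):
            break                        # ray has exited the volume
        total += rsp[tuple(idx)] * step_mm
        p = p + d * step_mm
    return total
```

Differencing this quantity between the planning CT and a scatter-corrected CBCT, voxelwise along each beam ray, gives the range-variation map the abstract describes.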
Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc
2015-07-07
The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm with the worst case being a source/implant located well within a patient
Wang, Xu; Shi, Fang; Sigrist, Norbert; Seo, Byoung-Joon; Tang, Hong; Bikkannavar, Siddarayappa; Basinger, Scott; Lay, Oliver
2012-01-01
Large-aperture telescopes commonly feature segmented mirrors, and a coarse phasing step is needed to bring these individual segments into the fine-phasing capture range. Dispersed Fringe Sensing (DFS) is a powerful coarse phasing technique, and an alteration of it is currently being used for JWST. An Advanced Dispersed Fringe Sensing (ADFS) algorithm was recently developed to improve the performance and robustness of previous DFS algorithms, with better accuracy and a unique solution. The first part of the paper introduces the basic ideas and essential features of the ADFS algorithm and presents some algorithm sensitivity study results. The second part of the paper describes the full details of the algorithm validation process on the advanced wavefront sensing and correction testbed (AWCT): first, the optimization of the DFS hardware of AWCT to ensure data accuracy and reliability is illustrated. Then, a few carefully designed algorithm validation experiments are implemented, and the corresponding data analysis results are shown. Finally, the fiducial calibration using the Range-Gate-Metrology technique is carried out, and a <10 nm or <1% algorithm accuracy is demonstrated.
Kim, Kio; Habas, Piotr A; Rajagopalan, Vidya; Scott, Julia A; Corbett-Detig, James M; Rousseau, Francois; Barkovich, A James; Glenn, Orit A; Studholme, Colin
2011-09-01
A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multislice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types.
Directory of Open Access Journals (Sweden)
2012-01-01
Full Text Available Regarding Gorelik, G., & Shackelford, T. K. (2011). Human sexual conflict from molecules to culture. Evolutionary Psychology, 9, 564–587: The authors wish to correct an omission in citation to the existing literature. In the final paragraph on p. 570, we neglected to cite Burch and Gallup (2006) [Burch, R. L., & Gallup, G. G., Jr. (2006). The psychobiology of human semen. In S. M. Platek & T. K. Shackelford (Eds.), Female infidelity and paternal uncertainty (pp. 141–172). New York: Cambridge University Press.]. Burch and Gallup (2006) reviewed the relevant literature on FSH and LH discussed in this paragraph, and should have been cited accordingly. In addition, Burch and Gallup (2006) should have been cited as the originators of the hypothesis regarding the role of FSH and LH in the semen of rapists. The authors apologize for this oversight.
2002-01-01
The photo on the second page of the Bulletin n°48/2002, from 25 November 2002, illustrating the article «Spanish Visit to CERN» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption. The Spanish delegation, accompanied by Spanish scientists at CERN, also visited the LHC superconducting magnet test hall (photo). From left to right: Felix Rodriguez Mateos of CERN LHC Division, Josep Piqué i Camps, Spanish Minister of Science and Technology, César Dopazo, Director-General of CIEMAT (Spanish Research Centre for Energy, Environment and Technology), Juan Antonio Rubio, ETT Division Leader at CERN, Manuel Aguilar-Benitez, Spanish Delegate to Council, Manuel Delfino, IT Division Leader at CERN, and Gonzalo León, Secretary-General of Scientific Policy to the Minister.
Directory of Open Access Journals (Sweden)
2014-01-01
Full Text Available Regarding Tagler, M. J., and Jeffers, H. M. (2013). Sex differences in attitudes toward partner infidelity. Evolutionary Psychology, 11, 821–832: The authors wish to correct values in the originally published manuscript. Specifically, incorrect 95% confidence intervals around the Cohen's d values were reported on page 826 of the manuscript, where we reported the within-sex simple effects for the significant Participant Sex × Infidelity Type interaction (first paragraph) and for attitudes toward partner infidelity (second paragraph). Corrected values are presented in bold below. The authors would like to thank Dr. Bernard Beins at Ithaca College for bringing these errors to our attention. Men rated sexual infidelity significantly more distressing (M = 4.69, SD = 0.74) than they rated emotional infidelity (M = 4.32, SD = 0.92), F(1, 322) = 23.96, p < .001, d = 0.44, 95% CI [0.23, 0.65], but there was little difference between women's ratings of sexual (M = 4.80, SD = 0.48) and emotional infidelity (M = 4.76, SD = 0.57), F(1, 322) = 0.48, p = .29, d = 0.08, 95% CI [−0.10, 0.26]. As expected, men rated sexual infidelity (M = 1.44, SD = 0.70) more negatively than they rated emotional infidelity (M = 2.66, SD = 1.37), F(1, 322) = 120.00, p < .001, d = 1.12, 95% CI [0.85, 1.39]. Although women also rated sexual infidelity (M = 1.40, SD = 0.62) more negatively than they rated emotional infidelity (M = 2.09, SD = 1.10), this difference was not as large, and thus was in the direction supportive of evolutionary theory, F(1, 322) = 72.03, p < .001, d = 0.77, 95% CI [0.60, 0.94].
Zhao, Cong; Zhong, Yuncheng; Duan, Xinhui; Zhang, You; Huang, Xiaokun; Wang, Jing; Jin, Mingwu
2018-06-01
Four-dimensional (4D) x-ray cone-beam computed tomography (CBCT) is important for a precise radiation therapy for lung cancer. Due to the repeated use and 4D acquisition over a course of radiotherapy, the radiation dose becomes a concern. Meanwhile, the scatter contamination in CBCT deteriorates image quality for treatment tasks. In this work, we propose the use of a moving blocker (MB) during the 4D CBCT acquisition (‘4D MB’) and to combine motion-compensated reconstruction to address these two issues simultaneously. In 4D MB CBCT, the moving blocker reduces the x-ray flux passing through the patient and collects the scatter information in the blocked region at the same time. The scatter signal is estimated from the blocked region for correction. Even though the number of projection views and projection data in each view are not complete for conventional reconstruction, 4D reconstruction with a total-variation (TV) constraint and a motion-compensated temporal constraint can utilize both spatial gradient sparsity and temporal correlations among different phases to overcome the missing data problem. The feasibility simulation studies using the 4D NCAT phantom showed that 4D MB with motion-compensated reconstruction with 1/3 imaging dose reduction could produce satisfactory images and achieve 37% improvement on structural similarity (SSIM) index and 55% improvement on root mean square error (RMSE), compared to 4D reconstruction at the regular imaging dose without scatter correction. For the same 4D MB data, 4D reconstruction outperformed 3D TV reconstruction by 28% on SSIM and 34% on RMSE. A study of synthetic patient data also demonstrated the potential of 4D MB to reduce the radiation dose by 1/3 without compromising the image quality. This work paves the way for more comprehensive studies to investigate the dose reduction limit offered by this novel 4D MB method using physical phantom experiments and real patient data based on clinical relevant metrics.
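The scatter-estimation step of the moving-blocker scheme can be sketched as follows: pixels behind the blocker strips record (approximately) scatter only, and the full scatter field is recovered by interpolating those samples across the open regions. The blocker geometry, interpolation scheme, and all numbers below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def estimate_scatter(projection, blocked_mask):
    """Estimate the scatter field of one projection from blocked-strip samples.

    projection:   2D detector readout.
    blocked_mask: boolean mask, True where the blocker stops the primary beam.
    The scatter field is assumed smooth, so row-wise linear interpolation of
    the blocked-pixel values approximates it in the open regions.
    """
    cols = np.arange(projection.shape[1])
    scatter = np.empty_like(projection, dtype=float)
    for i, row in enumerate(projection):
        blocked = blocked_mask[i]
        scatter[i] = np.interp(cols, cols[blocked], row[blocked])
    return scatter

# Toy projection: constant scatter (5.0) everywhere, plus primary (10.0)
# reaching the detector only in the unblocked regions.
proj = np.full((4, 10), 15.0)
mask = np.zeros((4, 10), dtype=bool)
mask[:, 2:4] = True          # first blocker strip
mask[:, 7:9] = True          # second blocker strip
proj[mask] = 5.0             # blocked pixels see scatter only
corrected = proj - estimate_scatter(proj, mask)
```

In the toy case the interpolated field recovers the constant scatter exactly, leaving pure primary signal in the open regions; the blocked regions then feed the motion-compensated reconstruction as missing data.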
International Nuclear Information System (INIS)
Tang Qiulin; Zeng, Gengsheng L; Gullberg, Grant T
2005-01-01
In this paper, we developed an analytical fan-beam reconstruction algorithm that compensates for uniform attenuation in SPECT. The new fan-beam algorithm is in the form of backprojection first, then filtering, and is mathematically exact. The algorithm is based on three components. The first one is the established generalized central-slice theorem, which relates the 1D Fourier transform of a set of arbitrary data and the 2D Fourier transform of the backprojected image. The second one is the fact that the backprojection of the fan-beam measurements is identical to the backprojection of the parallel measurements of the same object with the same attenuator. The third one is the stable analytical reconstruction algorithm for uniformly attenuated Radon data, developed by Metz and Pan. The fan-beam algorithm is then extended into a cone-beam reconstruction algorithm, where the orbit of the focal point of the cone-beam imaging geometry is a circle. This orbit geometry does not satisfy Tuy's condition and the obtained cone-beam algorithm is an approximation. In the cone-beam algorithm, the cone-beam data are first backprojected into the 3D image volume; then a slice-by-slice filtering is performed. This slice-by-slice filtering procedure is identical to that of the fan-beam algorithm. Both the fan-beam and cone-beam algorithms are efficient, and computer simulations are presented. The new cone-beam algorithm is compared with Bronnikov's cone-beam algorithm, and it is shown to have better performance with noisy projections
Energy Technology Data Exchange (ETDEWEB)
Filli, Lukas; Finkenstaedt, Tim; Andreisek, Gustav; Guggenberger, Roman [University Hospital of Zurich, Department of Diagnostic and Interventional Radiology, Zurich (Switzerland); Marcon, Magda [University Hospital of Zurich, Department of Diagnostic and Interventional Radiology, Zurich (Switzerland); University of Udine, Institute of Diagnostic Radiology, Department of Medical and Biological Sciences, Udine (Italy); Scholz, Bernhard [Imaging and Therapy Division, Siemens AG, Healthcare Sector, Forchheim (Germany); Calcagni, Maurizio [University Hospital of Zurich, Division of Plastic Surgery and Hand Surgery, Zurich (Switzerland)
2014-12-15
The aim of this study was to evaluate a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography (FDCT) of scaphoid fixation screws. FDCT has gained interest in imaging small anatomic structures of the appendicular skeleton. Angiographic C-arm systems with flat detectors allow fluoroscopy and FDCT imaging in a one-stop procedure emphasizing their role as an ideal intraoperative imaging tool. However, FDCT imaging can be significantly impaired by artefacts induced by fixation screws. Following ethical board approval, commercially available scaphoid fixation screws were inserted into six cadaveric specimens in order to fix artificially induced scaphoid fractures. FDCT images corrected with the algorithm were compared to uncorrected images both quantitatively and qualitatively by two independent radiologists in terms of artefacts, screw contour, fracture line visibility, bone visibility, and soft tissue definition. Normal distribution of variables was evaluated using the Kolmogorov-Smirnov test. In case of normal distribution, quantitative variables were compared using paired Student's t tests. The Wilcoxon signed-rank test was used for quantitative variables without normal distribution and all qualitative variables. A p value of < 0.05 was considered to indicate statistically significant differences. Metal artefacts were significantly reduced by the correction algorithm (p < 0.001), and the fracture line was more clearly defined (p < 0.01). The inter-observer reliability was "almost perfect" (intra-class correlation coefficient 0.85, p < 0.001). The prototype correction algorithm in FDCT for metal artefacts induced by scaphoid fixation screws may facilitate intra- and postoperative follow-up imaging. (orig.)
Multiple scattering corrections to the Beer-Lambert law. 2: Detector with a variable field of view.
Zardecki, A; Tam, W G
1982-07-01
The multiple scattering corrections to the Beer-Lambert law in the case of a detector with a variable field of view are analyzed. We introduce transmission functions relating the received radiant power to reference power levels relevant to two different experimental situations. In the first case, the transmission function relates the received power to a reference power level appropriate to a nonattenuating medium. In the second case, the reference power level is established by bringing the receiver to the close-up position with respect to the source. To examine the effect of the variation of the detector field of view the behavior of the gain factor is studied. Numerical results modeling the laser beam propagation in fog, cloud, and rain are presented.
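Both transmission functions above compare the received power against a Beer-Lambert baseline; a minimal sketch, with the multiple-scatter gain factor treated as a given input rather than computed from the medium and field of view as the paper does:

```python
import math

def transmission_beer_lambert(tau):
    """Single-scatter (Beer-Lambert) transmission for optical depth tau."""
    return math.exp(-tau)

def transmission_received(tau, gain):
    """Received transmission including multiple scattering.

    The gain factor G >= 1 folds in forward-scattered power accepted by a
    finite detector field of view; its dependence on FOV and medium
    (fog, cloud, rain) is exactly what the paper models, so here it is
    simply an illustrative input.
    """
    return gain * math.exp(-tau)

tau = 2.0
t_single = transmission_beer_lambert(tau)
t_received = transmission_received(tau, gain=1.5)  # assumed gain factor
```

Studying how `gain` varies as the field of view widens is the behavior the paper quantifies numerically.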
The correctness of Newman’s typability algorithm and some of its extensions
Geuvers, J.H.; Krebbers, R.
2011-01-01
We study Newman’s typability algorithm (Newman, 1943) [14] for simple type theory. The algorithm originates from 1943, but was left unnoticed until (Newman, 1943) [14] was recently rediscovered by Hindley (2008) [10]. The remarkable thing is that it decides typability without computing a type. We
Upper Bounds on the Number of Errors Corrected by the Koetter–Vardy Algorithm
DEFF Research Database (Denmark)
Justesen, Jørn
2007-01-01
By introducing a few simplifying assumptions we derive a simple condition for successful decoding using the Koetter-Vardy algorithm for soft-decision decoding of Reed-Solomon codes. We show that the algorithm has a significant advantage over hard decision decoding when the code rate is low, when ...
Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.
2016-10-01
The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.
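The voxel-specific Gaussian idea can be sketched as follows, with the per-voxel sigmas supplied directly as inputs (in the paper they follow from re-initializing the fluence deviation on an effective surface where the beamlet energies match):

```python
import numpy as np

def lateral_fluence(x_mm, sigma_per_voxel_mm):
    """Lateral fluence of a beamlet evaluated with voxel-specific Gaussians.

    Instead of one sigma shared by all lateral positions, each voxel at
    lateral offset x[i] carries its own sigma s[i], so heterogeneities
    crossed on the way to that voxel can widen or narrow its Gaussian.
    """
    x = np.asarray(x_mm, dtype=float)
    s = np.asarray(sigma_per_voxel_mm, dtype=float)
    return np.exp(-0.5 * (x / s) ** 2) / (np.sqrt(2.0 * np.pi) * s)

x = np.array([-2.0, 0.0, 2.0])
# Illustrative sigmas: wider behind lung-like material, narrower behind bone
f = lateral_fluence(x, [3.0, 2.0, 1.0])
```

A single-Gaussian pencil beam would use one sigma for all three positions; the per-voxel variant is what improves the distal fall-off dose in heterogeneous media.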
A New Adaptive Gamma Correction Based Algorithm Using DWT-SVD for Non-Contrast CT Image Enhancement.
Kallel, Fathi; Ben Hamida, Ahmed
2017-12-01
The performances of medical image processing techniques, in particular CT scans, are usually affected by poor contrast quality introduced by some medical imaging devices. This suggests the use of contrast enhancement methods as a solution to adjust the intensity distribution of the dark image. In this paper, an advanced adaptive and simple algorithm for dark medical image enhancement is proposed. This approach is principally based on adaptive gamma correction using discrete wavelet transform with singular-value decomposition (DWT-SVD). In a first step, the technique decomposes the input medical image into four frequency sub-bands by using DWT and then estimates the singular-value matrix of the low-low (LL) sub-band image. In a second step, an enhanced LL component is generated using an adequate correction factor and inverse singular value decomposition (SVD). In a third step, for an additional improvement of LL component, obtained LL sub-band image from SVD enhancement stage is classified into two main classes (low contrast and moderate contrast classes) based on their statistical information and therefore processed using an adaptive dynamic gamma correction function. In fact, an adaptive gamma correction factor is calculated for each image according to its class. Finally, the obtained LL sub-band image undergoes inverse DWT together with the unprocessed low-high (LH), high-low (HL), and high-high (HH) sub-bands for enhanced image generation. Different types of non-contrast CT medical images are considered for performance evaluation of the proposed contrast enhancement algorithm based on adaptive gamma correction using DWT-SVD (DWT-SVD-AGC). Results show that our proposed algorithm performs better than other state-of-the-art techniques.
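The class-dependent gamma step can be sketched as below; the classification threshold and both gamma formulas are illustrative stand-ins for the paper's statistics-based rules, and the DWT and SVD stages are omitted for brevity:

```python
import numpy as np

def adaptive_gamma_correct(ll, std_threshold=0.18):
    """Apply a class-dependent gamma to a normalized LL sub-band image.

    The image is classified by its intensity spread: a small standard
    deviation marks the "low contrast" class, otherwise "moderate
    contrast". Each class gets its own gamma derived from the image's
    statistics (formulas here are hypothetical placeholders).
    """
    x = (ll - ll.min()) / (ll.max() - ll.min() + 1e-12)
    if x.std() < std_threshold:                  # low-contrast class
        gamma = -np.log2(x.std() + 1e-12) / 4.0
    else:                                        # moderate-contrast class
        gamma = np.exp((1.0 - (x.mean() + x.std())) / 2.0)
    return np.clip(x, 0.0, 1.0) ** gamma, gamma

img = np.linspace(10.0, 200.0, 64).reshape(8, 8)  # synthetic dark image
enhanced, g = adaptive_gamma_correct(img)
```

In the full pipeline this corrected LL component is recombined with the untouched LH, HL, and HH sub-bands through the inverse DWT.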
Energy Technology Data Exchange (ETDEWEB)
Conti, C.C., E-mail: ccconti@ird.gov.br [Institute for Radioprotection and Dosimetry – IRD/CNEN, Rio de Janeiro (Brazil); Physics Institute, State University of Rio de Janeiro – UERJ, Rio de Janeiro (Brazil); Anjos, M.J. [Physics Institute, State University of Rio de Janeiro – UERJ, Rio de Janeiro (Brazil); Salgado, C.M. [Nuclear Engineering Institute – IEN/CNEN, Rio de Janeiro (Brazil)
2014-09-15
Highlights: •This work describes a procedure for sample self-absorption correction. •The use of Monte Carlo simulation to calculate the mass attenuation coefficient curve was effective. •No need for transmission measurement, saving time, financial resources and effort. •This article provides the curves for the 90° scattering angle. •Calculation online at www.macx.net.br. -- Abstract: The X-ray fluorescence technique plays an important role in nondestructive analysis nowadays. The development of equipment, including portable devices, enables a wide assortment of possibilities for analysis of stable elements, even in trace concentrations. Nevertheless, despite these advantages, one important drawback is radiation self-attenuation in the sample being measured, which needs to be considered in the calculation for the proper determination of elemental concentration. The mass attenuation coefficient can be determined by transmission measurement, but in this case the sample must be in slab-shape geometry, and this demands two different setups and measurements. The Rayleigh to Compton scattering ratio, determined from the X-ray fluorescence spectrum, provides a link to the mass attenuation coefficient by means of a polynomial-type equation. This work presents a way to construct a Rayleigh to Compton scattering ratio versus mass attenuation coefficient curve by using the MCNP5 Monte Carlo computer code. The calculated mass attenuation coefficients for some known samples agreed with literature values to within 15%. This calculation procedure is available online at www.macx.net.br
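The polynomial link between the Rayleigh-to-Compton ratio and the mass attenuation coefficient can be sketched as follows; the coefficients are hypothetical placeholders, standing in for the curve the paper builds from MCNP5 simulations:

```python
import numpy as np

# Hypothetical polynomial coefficients (constant term first) linking the
# measured Rayleigh/Compton scattering ratio R to the mass attenuation
# coefficient mu; the real curve comes from MCNP5 simulation in the paper.
coeffs = [0.8, 2.5, -0.3]

def mass_attenuation(ratio, c=coeffs):
    # np.polyval expects the highest-order coefficient first
    return np.polyval(list(reversed(c)), ratio)

mu = mass_attenuation(0.2)   # R/C ratio extracted from the XRF spectrum
```

Once `mu` is known, the self-absorption correction for the sample follows without any separate transmission measurement, which is the time saving the highlights emphasize.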
A scattering-based over-land rainfall retrieval algorithm for South Korea using GCOM-W1/AMSR-2 data
Kwon, Young-Joo; Shin, Hayan; Ban, Hyunju; Lee, Yang-Won; Park, Kyung-Ae; Cho, Jaeil; Park, No-Wook; Hong, Sungwook
2017-08-01
Heavy summer rainfall is a primary natural disaster affecting lives and property on the Korean Peninsula. This study presents a satellite-based rainfall rate retrieval algorithm for South Korea combining polarization-corrected temperature (PCT) and scattering index (SI) data from the 36.5 and 89.0 GHz channels of the Advanced Microwave Scanning Radiometer 2 (AMSR-2) onboard the Global Change Observation Mission (GCOM)-W1 satellite. The coefficients for the algorithm were obtained from spatially and temporally collocated data from the AMSR-2 and ground-based automatic weather station rain gauges from 1 July - 30 August during the years 2012-2015. There were time delays of about 25 minutes between the AMSR-2 observations and the ground rain-gauge measurements. A new linearly combined rainfall retrieval algorithm for the PCT and SI, focused on heavy rain, was validated using ground-based rainfall observations for South Korea from 1 July - 30 August, 2016. The validation showed that the presented PCT and SI methods gave slightly improved results for rainfall > 5 mm h-1 compared to the current AMSR-2 level 2 data. The best bias and root mean square error (RMSE) for the PCT method at AMSR-2 36.5 GHz were 2.09 mm h-1 and 7.29 mm h-1, respectively, while the current official AMSR-2 rainfall rates show a larger bias and RMSE (4.80 mm h-1 and 9.35 mm h-1, respectively). This study provides a scattering-based over-land rainfall retrieval algorithm for South Korea, which is affected by stationary-front rain and typhoons, combining the advantages of the previous PCT and SI methods and applicable to a variety of spaceborne passive microwave radiometers.
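The PCT part of such a retrieval can be sketched as below; theta is the widely cited 85-GHz value from the older literature, while the rain-rate regression coefficients are hypothetical and not the paper's fitted AMSR-2 values:

```python
def pct(tb_v, tb_h, theta=0.818):
    """Polarization-corrected temperature from V/H brightness temperatures.

    theta = 0.818 is the classic 85-GHz literature value; the paper fits
    its own coefficients for the AMSR-2 36.5 and 89.0 GHz channels.
    """
    return (1.0 + theta) * tb_v - theta * tb_h

def rain_rate_from_pct(p, a=26.0, b=0.1):
    """Illustrative linear rain-rate regression (a, b are hypothetical)."""
    return max(0.0, a - b * p)

p_clear = pct(260.0, 260.0)   # unpolarized, scatter-free scene: PCT = Tb
p_rain = pct(230.0, 210.0)    # ice scattering depresses the PCT
```

Lower PCT means stronger ice scattering aloft and hence heavier surface rain, which is why the regression slope is negative.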
International Nuclear Information System (INIS)
Fairbanks, Leandro R.; Barbi, Gustavo L.; Silva, Wiliam T.; Reis, Eduardo G.F.; Borges, Leandro F.; Bertucci, Edenyse C.; Maciel, Marina F.; Amaral, Leonardo L.
2011-01-01
Since the cross-section for various radiation interactions is dependent upon tissue material, the presence of heterogeneities affects the final dose delivered. This paper aims to analyze how different treatment planning algorithms (Fast Fourier Transform, Convolution, Superposition, Fast Superposition and Clarkson) perform when heterogeneity corrections are used. To that end, a Farmer-type ionization chamber was positioned reproducibly (at the time of CT as well as of irradiation) inside several phantoms made of aluminum, bone, cork and solid water slabs. The percent difference between the dose measured and that calculated by the various algorithms was less than 5%. The Convolution method shows better results for high-density materials (difference ∼1%), whereas the Superposition algorithm is more accurate for low densities (around 1.1%). (author)
International Nuclear Information System (INIS)
Tachibana, Masayuki; Noguchi, Yoshitaka; Fukunaga, Jyunichi; Hirano, Naomi; Yoshidome, Satoshi; Hirose, Takaaki
2009-01-01
In a stereotactic lung irradiation study, the monitor unit (MU) was calculated by pencil beam convolution with the Batho power law inhomogeneity correction [PBC (BPL)], a dose calculation algorithm based on measurement. The recalculation was done with the analytical anisotropic algorithm (AAA), a dose calculation algorithm based on theoretical data. The MUs calculated by PBC (BPL) and AAA were compared for each field. In the comparison of 1031 fields in 136 cases, the MU calculated by PBC (BPL) was about 2% smaller than that calculated by AAA. This depends on whether the calculation accounts for the lateral spread of secondary electrons. In particular, the difference in MU is influenced by the X-ray energy. For the same X-ray energy, the difference in MU increases when the irradiation field size is small, the lung path length is long, the lung path length percentage is large, and the CT value of the lung is low. (author)
Effects of attenuation and scatter corrections in cat brain PET images using microPET R4 scanner
International Nuclear Information System (INIS)
Kim, Jin Su; Lee, Jae Sung; Lee, Jong Jin
2006-01-01
The aim of this study was to examine the effects of attenuation correction (AC) and scatter correction (SC) on the quantification of PET count rates. To assess the effects of AC and SC, 18F-FDG PET images of a phantom and cat brain were acquired using the microPET R4 scanner. Thirty-minute transmission images using a 68Ge source, and emission images after injection of FDG, were acquired. PET images were reconstructed using 2D OSEM, and AC and SC were applied. Regional count rates were measured using ROIs drawn on the cerebral cortex (including frontal, parietal, and lateral temporal lobes) and deep gray matter (including the head of the caudate nucleus, putamen and thalamus) for pre- and post-AC and SC images. The count rates were then normalized by the injected dose per body weight. To assess the effects of AC, the count ratio of 'deep gray matter/cerebral cortex' was calculated. To assess the effects of SC, ROIs were also drawn on the gray matter (GM) and white matter (WM), and the contrast between them ((GM-WM)/GM) was measured. After AC, the count ratio of 'deep gray matter/cerebral cortex' increased by 17±7%. After SC, the contrast also increased by 12±3%. The relative count of deep gray matter and the contrast between gray and white matter increased after AC and SC, suggesting that AC is critical for the quantitative analysis of cat brain PET data.
International Nuclear Information System (INIS)
Williams, W.G.
1975-01-01
The use of the polarization analysis technique to separate spin-flip from non-spin-flip thermal neutron scattering is especially important in determining magnetic scattering cross-sections. In order to associate a spin-flip ratio in the scattering with a particular scattering process, it is necessary to correct the experimentally observed 'flipping ratio' to allow for the efficiencies of the vital instrument components (polarizers and spin-flippers), as well as multiple scattering effects in the sample. Analytical expressions for these corrections are presented and their magnitudes in typical cases estimated. The errors in measurement depend strongly on the uncertainties in the calibration of the efficiencies of the polarizers and the spin-flipper. The final section is devoted to a discussion of polarization analysis instruments.
Dose-calculation algorithms in the context of inhomogeneity corrections for high energy photon beams
International Nuclear Information System (INIS)
Papanikolaou, Niko; Stathakis, Sotirios
2009-01-01
Radiation therapy has witnessed a plethora of innovations and developments in the past 15 years. Since the introduction of computed tomography for treatment planning there has been a steady introduction of new methods to refine treatment delivery. Imaging continues to be an integral part of the planning, but also the delivery, of modern radiotherapy. However, all the efforts of image guided radiotherapy, intensity-modulated planning and delivery, adaptive radiotherapy, and everything else that we pride ourselves in having in the armamentarium can fall short, unless there is an accurate dose-calculation algorithm. The agreement between the calculated and delivered doses is of great significance in radiation therapy since the accuracy of the absorbed dose as prescribed determines the clinical outcome. Dose-calculation algorithms have evolved greatly over the years in an effort to be more inclusive of the effects that govern the true radiation transport through the human body. In this Vision 20/20 paper, we look back to see how it all started and where things are now in terms of dose algorithms for photon beams and the inclusion of tissue heterogeneities. Convolution-superposition algorithms have dominated the treatment planning industry for the past few years. Monte Carlo techniques have an inherent accuracy that is superior to any other algorithm and as such will continue to be the gold standard, along with measurements, and maybe one day will be the algorithm of choice for all particle treatment planning in radiation therapy.
Indian Academy of Sciences (India)
polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.
Directory of Open Access Journals (Sweden)
Mahsa Noori Asl
2013-01-01
Compton-scattered photons included within the photopeak pulse-height window degrade SPECT images both qualitatively and quantitatively. The purpose of this study is to evaluate and compare six scatter correction methods based on setting energy windows in the 99mTc spectrum. SIMIND Monte Carlo simulation is used to generate the projection images from a cold-sphere hot-background phantom. Three assessment criteria are considered for evaluating the correction methods: image contrast, signal-to-noise ratio (SNR) and relative noise of the background (RNB). Except for the dual-photopeak window (DPW) method, the image contrast of the five cold spheres is improved in the range of 2.7-26%. Two of the methods considered show a nonuniform correction performance. The RNB for the scatter correction methods ranges from a minimum of 0.03 for the DPW method to a maximum of 0.0727 for the three-energy-window (TEW) method using the trapezoidal approximation. The TEW method using the triangular approximation is proposed as the most appropriate correction method because of its ease of implementation, its good improvement of the image contrast and SNR for the five cold spheres, and its low noise level.
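The TEW estimate referenced above can be sketched in a few lines: counts in two narrow sub-windows flanking the photopeak approximate the scatter inside the main window via an area rule. The window widths below are illustrative defaults, not values from the study.

```python
def tew_scatter(c_lower, c_upper, w_sub, w_main):
    """Trapezoidal TEW estimate of the scatter counts inside the main
    photopeak window; passing c_upper = 0 gives the triangular variant
    (the upper sub-window contribution is dropped)."""
    return (c_lower / w_sub + c_upper / w_sub) * w_main / 2.0

def tew_correct(c_peak, c_lower, c_upper, w_sub=3.0, w_main=20.0):
    """Scatter-corrected photopeak counts, clamped at zero."""
    return max(c_peak - tew_scatter(c_lower, c_upper, w_sub, w_main), 0.0)
```

For example, with 30 and 10 counts in 2 keV sub-windows around a 20 keV photopeak window, the trapezoid estimates 200 scatter counts to subtract from the photopeak total.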
Directory of Open Access Journals (Sweden)
Y. W. Sun
2013-08-01
In this paper, we present an optimized analysis algorithm for non-dispersive infrared (NDIR) in situ monitoring of stack emissions. The proposed algorithm simultaneously compensates for nonlinear absorption and cross interference among different gases. We present a mathematical derivation of the measurement error caused by variations in interference coefficients when nonlinear absorption occurs. The proposed algorithm is derived from a classical one and uses interference functions to quantify cross interference. The interference functions vary proportionally with the nonlinear absorption, so interference coefficients among different gases can be modeled by the interference functions whether the gases exhibit linear or nonlinear absorption. In this study, the simultaneous analysis of two components (CO2 and CO) serves as an example for validation of the proposed algorithm. The interference functions in this case can be obtained by least-squares fitting with third-order polynomials. Experiments show that the results of cross-interference correction are improved significantly by utilizing the fitted interference functions when nonlinear absorption occurs. The dynamic measurement ranges of CO2 and CO are improved by factors of about 1.8 and 3.5, respectively. A commercial analyzer with high accuracy was used to validate the CO and CO2 measurements derived from the NDIR analyzer prototype in which the new algorithm was embedded. The comparison of the two analyzers shows that the prototype works well in both the linear and nonlinear ranges.
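The third-order least-squares fit mentioned above can be sketched as follows. This is a minimal stand-in: the calibration data and the simple subtraction step are illustrative assumptions, since the paper's interference functions enter a more elaborate NDIR model.

```python
def polyfit3(xs, ys):
    """Cubic least-squares fit via the normal equations, solved with
    Gaussian elimination (partial pivoting). Returns [c0, c1, c2, c3]."""
    n = 4
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))) / A[r][r]
    return coef

def correct_reading(raw_signal, interfering_reading, interf_coef):
    """Subtract the fitted cross-interference contribution (a cubic in the
    interfering gas reading) from the raw signal of the target gas."""
    interference = sum(c * interfering_reading ** i
                       for i, c in enumerate(interf_coef))
    return raw_signal - interference
```

A cubic fitted to calibration pairs of (interfering-gas reading, observed interference) then corrects each raw measurement of the target gas.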
Cao, Le; Wei, Bing
2014-08-25
A finite-difference time-domain (FDTD) algorithm with a new method of plane wave excitation is used to investigate the radar cross section (RCS) characteristics of targets over a layered half space. Compared with the traditional plane wave excitation method, the memory and computation time requirements are greatly reduced. The FDTD calculation is performed with a plane wave incidence, and the far-field RCS is obtained by extrapolating the calculated data on the output boundary. However, available extrapolation methods must evaluate the half-space Green function. In this paper, a new method that avoids using the complex and time-consuming half-space Green function is proposed. Numerical results show that this method is in good agreement with the classic algorithm and can be used for fast calculation of scattering and radiation of targets over a layered half space.
Energy Technology Data Exchange (ETDEWEB)
Aspelund, O; Gustafsson, B
1967-05-15
After an introductory discussion of various methods for correcting experimental left-right ratios for polarized multiple-scattering and finite-geometry effects, necessary and sufficient formulas for consistent tracking of polarization effects in successive scattering orders are derived. The simplifying assumptions are then made that the scattering is purely elastic and nuclear, and that in the description of the kinematics of the arbitrary scattering {mu}, only one triple parameter - the so-called spin rotation parameter {beta}{sup ({mu})} - is required. Based upon these formulas, a general discussion of the importance of the correct inclusion of polarization effects in any scattering order is presented. Special attention is then paid to the question of depolarization of an already polarized beam. Subsequently, the aforementioned formulas are incorporated in the comprehensive Monte Carlo program MULTPOL, which has been designed to correctly account for finite-geometry effects in the sense that both the scattering sample and the detectors (both having cylindrical shapes) are objects of finite dimensions located at finite distances from each other and from the source of polarized fast neutrons. A special feature of MULTPOL is the application of the method of correlated sampling for reduction of the standard deviations of the results of the simulated experiment. Typical performance data for MULTPOL have been obtained by applying the program to the correction of experimental polarization data observed in n + {sup 12}C elastic scattering between 1 and 2 MeV. Finally, in the concluding remarks the possible modification of MULTPOL to other experimental geometries is briefly discussed.
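The correlated-sampling idea used in MULTPOL - evaluating two closely related simulations with the same random number stream so that the statistical error of their difference largely cancels - can be illustrated on a toy integral. The functions below are placeholders, not neutron physics:

```python
import random

def mc_difference(f, g, n, correlated=True, seed=1):
    """Monte Carlo estimate of E[f(X)] - E[g(X)] for X ~ U(0, 1).

    With correlated=True, f and g are evaluated at the SAME sample points,
    so the common fluctuations cancel in the difference; with
    correlated=False, two independent streams are used."""
    rng = random.Random(seed)
    if correlated:
        xs = [rng.random() for _ in range(n)]
        return sum(f(x) - g(x) for x in xs) / n
    rng2 = random.Random(seed + 1)
    mean_f = sum(f(rng.random()) for _ in range(n)) / n
    mean_g = sum(g(rng2.random()) for _ in range(n)) / n
    return mean_f - mean_g
```

For f(x) = x² and g(x) = x² + 0.1x the true difference is -0.05; the correlated estimator's spread scales with std(f - g) = 0.1 std(X), far smaller than the spread of the independent-streams version.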
Energy Technology Data Exchange (ETDEWEB)
Blümlein, Johannes, E-mail: Johannes.Bluemlein@desy.de [Deutsches Elektronen–Synchrotron, DESY, Platanenallee 6, D-15738 Zeuthen (Germany); Hasselhuhn, Alexander [Deutsches Elektronen–Synchrotron, DESY, Platanenallee 6, D-15738 Zeuthen (Germany); Research Institute for Symbolic Computation (RISC), Johannes Kepler University, Altenbergerstraße 69, A-4040 Linz (Austria); Pfoh, Torsten [Deutsches Elektronen–Synchrotron, DESY, Platanenallee 6, D-15738 Zeuthen (Germany)
2014-04-15
We calculate the O(α{sub s}{sup 2}) heavy flavor corrections to charged current deep-inelastic scattering at large scales Q{sup 2}≫m{sup 2}. The contributing Wilson coefficients are given as convolutions between massive operator matrix elements and massless Wilson coefficients. Foregoing results in the literature are extended and corrected. Numerical results are presented for the kinematic region of the HERA data.
Directory of Open Access Journals (Sweden)
Pablito M. López-Serrano
2016-04-01
Solar radiation is affected by absorption and emission phenomena during its downward trajectory from the Sun to the Earth’s surface and during the upward trajectory detected by satellite sensors. This leads to distortion of the ground radiometric properties (reflectance) recorded by satellite images, used in this study to estimate aboveground forest biomass (AGB). Atmospherically corrected remote sensing data can be used to estimate AGB on a global scale and with moderate effort. The objective of this study was to evaluate four atmospheric correction algorithms for surface reflectance, ATCOR2 (Atmospheric Correction for Flat Terrain), COST (Cosine of the Sun Zenith Angle), FLAASH (Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes) and 6S (Second Simulation of Satellite Signal in the Solar Spectrum), and one radiometric correction algorithm for reflectance at the sensor, ToA (Apparent Reflectance at the Top of Atmosphere), for estimating AGB in temperate forest in the northeast of the state of Durango, Mexico. The AGB was estimated from Landsat 5 TM imagery and ancillary information from a digital elevation model (DEM) using the non-parametric multivariate adaptive regression splines (MARS) technique. Field reference data for the model training were collected by systematic sampling of 99 permanent forest growth and soil research sites (SPIFyS) established during the winter of 2011. The following predictor variables were identified in the MARS model: Band 7, Band 5, slope (β), Wetness Index (WI), NDVI and MSAVI2. After cross-validation, 6S was found to be the optimal model for estimating AGB (R2 = 0.71 and RMSE = 33.5 Mg·ha−1, 37.61% of the average stand biomass). We conclude that atmospheric and radiometric correction of satellite images can be used along with non-parametric techniques to estimate AGB with acceptable accuracy.
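Two of the predictor variables named above, NDVI and MSAVI2, are standard spectral indices computed from red and near-infrared reflectance. A minimal sketch using the usual formulas (the reflectance values in the test are illustrative):

```python
import math

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def msavi2(nir, red):
    """Modified Soil-Adjusted Vegetation Index (the closed-form MSAVI2,
    which removes the need for an explicit soil-adjustment factor)."""
    return (2 * nir + 1 - math.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2
```

Both indices increase with vegetation density; MSAVI2 reduces the soil-background sensitivity that affects NDVI over sparse canopies.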
The variable refractive index correction algorithm based on a stereo light microscope
International Nuclear Information System (INIS)
Pei, W; Zhu, Y Y
2010-01-01
Refraction occurs at least twice, on the top and the bottom surfaces of the plastic plate covering the micro channel in a microfluidic chip. This refraction, together with the nonlinear model of a stereo light microscope (SLM), may severely affect measurement accuracy. In this paper, we study the correlation between the optical paths of the SLM and present an algorithm to correct for the refractive index based on the SLM. Our algorithm quantifies the influence of the cover plate and the double optical paths on measurement accuracy, and realizes non-destructive, non-contact and precise 3D measurement of a transparent, closed container.
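The double refraction through a plane cover plate follows Snell's law; a minimal sketch of the lateral ray shift and the paraxial apparent-depth error that such a correction algorithm must compensate (the plate thickness and refractive index in the test are illustrative):

```python
import math

def snell(theta_i, n1, n2):
    """Refraction angle from Snell's law: n1 sin(theta_i) = n2 sin(theta_t)."""
    return math.asin(n1 * math.sin(theta_i) / n2)

def lateral_shift(theta_i, thickness, n_plate, n_ambient=1.0):
    """Sideways displacement of a ray crossing a parallel plate."""
    theta_t = snell(theta_i, n_ambient, n_plate)
    return thickness * math.sin(theta_i - theta_t) / math.cos(theta_t)

def apparent_depth_error(thickness, n_plate):
    """Paraxial approximation: the plate lifts the image of a point below it
    by t * (1 - 1/n), which biases depth measurements if uncorrected."""
    return thickness * (1.0 - 1.0 / n_plate)
```

For a 1.5 mm plate with n = 1.5, a feature under the plate appears 0.5 mm closer than it is, so a stereo depth algorithm that ignores the plate systematically underestimates depth.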
International Nuclear Information System (INIS)
Fujita, Masahiro; Seneca, Nicholas; Innis, Robert B.; Varrone, Andrea; Kim, Kyeong Min; Watabe, Hiroshi; Iida, Hidehiro; Zoghbi, Sami S.; Tipre, Dnyanesh; Seibyl, John P.
2004-01-01
Prior studies with anthropomorphic phantoms and single, static in vivo brain images have demonstrated that scatter correction significantly improves the accuracy of regional quantitation of single-photon emission tomography (SPET) brain images. Since the regional distribution of activity changes following a bolus injection of a typical neuroreceptor ligand, we examined the effect of scatter correction on the compartmental modeling of serial dynamic images of striatal and extrastriatal dopamine D2 receptors using [123I]epidepride. Eight healthy human subjects [age 30±8 (range 22-46) years] participated in a study with a bolus injection of 373±12 (354-389) MBq [123I]epidepride and data acquisition over a period of 14 h. A transmission scan was obtained in each study for attenuation and scatter correction. Distribution volumes were calculated by means of compartmental nonlinear least-squares analysis using metabolite-corrected arterial input function and brain data processed with scatter correction using narrow-beam geometry μ (SC) and without scatter correction using broad-beam μ (NoSC). Effects of SC were markedly different among brain regions. SC increased activities in the putamen and thalamus after 1-1.5 h while it decreased activity during the entire experiment in the temporal cortex and cerebellum. Compared with NoSC, SC significantly increased specific distribution volume in the putamen (58%, P=0.0001) and thalamus (23%, P=0.0297). Compared with NoSC, SC made regional distribution of the specific distribution volume closer to that of [18F]fallypride. It is concluded that SC is required for accurate quantification of distribution volumes of receptor ligands in SPET studies. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Fujita, Masahiro; Seneca, Nicholas; Innis, Robert B. [Department of Psychiatry, Yale University School of Medicine and VA Connecticut Healthcare System, West Haven, CT (United States); Molecular Imaging Branch, National Institute of Mental Health, Bethesda, MD (United States); Varrone, Andrea [Department of Psychiatry, Yale University School of Medicine and VA Connecticut Healthcare System, West Haven, CT (United States); Biostructure and Bioimaging Institute, National Research Council, Napoli (Italy); Kim, Kyeong Min; Watabe, Hiroshi; Iida, Hidehiro [Department of Investigative Radiology, National Cardiovascular Center Research Institute, Osaka (Japan); Zoghbi, Sami S. [Department of Psychiatry, Yale University School of Medicine and VA Connecticut Healthcare System, West Haven, CT (United States); Molecular Imaging Branch, National Institute of Mental Health, Bethesda, MD (United States); Department of Radiology, Yale University School of Medicine and VA Connecticut Healthcare System, West Haven, CT (United States); Tipre, Dnyanesh [Molecular Imaging Branch, National Institute of Mental Health, Bethesda, MD (United States); Seibyl, John P. [Institute for Neurodegenerative Disorders, New Haven, CT (United States)
2004-05-01
Prior studies with anthropomorphic phantoms and single, static in vivo brain images have demonstrated that scatter correction significantly improves the accuracy of regional quantitation of single-photon emission tomography (SPET) brain images. Since the regional distribution of activity changes following a bolus injection of a typical neuroreceptor ligand, we examined the effect of scatter correction on the compartmental modeling of serial dynamic images of striatal and extrastriatal dopamine D{sub 2} receptors using [{sup 123}I]epidepride. Eight healthy human subjects [age 30{+-}8 (range 22-46) years] participated in a study with a bolus injection of 373{+-}12 (354-389) MBq [{sup 123}I]epidepride and data acquisition over a period of 14 h. A transmission scan was obtained in each study for attenuation and scatter correction. Distribution volumes were calculated by means of compartmental nonlinear least-squares analysis using metabolite-corrected arterial input function and brain data processed with scatter correction using narrow-beam geometry {mu} (SC) and without scatter correction using broad-beam {mu} (NoSC). Effects of SC were markedly different among brain regions. SC increased activities in the putamen and thalamus after 1-1.5 h while it decreased activity during the entire experiment in the temporal cortex and cerebellum. Compared with NoSC, SC significantly increased specific distribution volume in the putamen (58%, P=0.0001) and thalamus (23%, P=0.0297). Compared with NoSC, SC made regional distribution of the specific distribution volume closer to that of [{sup 18}F]fallypride. It is concluded that SC is required for accurate quantification of distribution volumes of receptor ligands in SPET studies. (orig.)
A multifrequency MUSIC algorithm for locating small inhomogeneities in inverse scattering
International Nuclear Information System (INIS)
Griesmaier, Roland; Schmiedecke, Christian
2017-01-01
We consider an inverse scattering problem for time-harmonic acoustic or electromagnetic waves with sparse multifrequency far field data sets. The goal is to localize several small penetrable objects embedded inside an otherwise homogeneous background medium from observations of far fields of scattered waves corresponding to incident plane waves with one fixed incident direction but several different frequencies. We assume that the far field is measured at a few observation directions only. Taking advantage of the smallness of the scatterers with respect to wavelength, we utilize an asymptotic representation formula for the far field to design and analyze a MUSIC-type reconstruction method for this setup. We establish lower bounds on the number of frequencies and receiver directions that are required to recover the number and the positions of an ensemble of scatterers from the given measurements. Furthermore, we briefly sketch a possible application of the reconstruction method to the practically relevant case of multifrequency backscattering data. Numerical examples are presented to document the potential and limitations of this approach. (paper)
Morillot, Olivier; Likforman-Sulem, Laurence; Grosicki, Emmanuèle
2013-04-01
Many preprocessing techniques have been proposed for isolated word recognition. Recently, however, recognition systems have dealt with text blocks and their component text lines. In this paper, we propose a new preprocessing approach to efficiently correct baseline skew and fluctuations. Our approach is based on a sliding window within which the vertical position of the baseline is estimated. Segmentation of text lines into subparts is thus avoided. Experiments conducted on a large publicly available database (Rimes), with a BLSTM (bidirectional long short-term memory) recurrent neural network recognition system, show that our baseline correction approach substantially improves performance.
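A sliding-window baseline estimate of the kind described can be sketched as a moving median over the lower-contour y-coordinates of a text line, subtracted from each point to flatten skew and local fluctuations. The window width and the median choice are assumptions for illustration:

```python
def moving_median(ys, w):
    """Median of a window of width w centered on each sample
    (the window is truncated at the sequence edges)."""
    half = w // 2
    out = []
    for i in range(len(ys)):
        lo, hi = max(0, i - half), min(len(ys), i + half + 1)
        window = sorted(ys[lo:hi])
        out.append(window[len(window) // 2])
    return out

def correct_baseline(ys, w=5):
    """Subtract the locally estimated baseline from each contour point,
    yielding residuals around zero for a well-corrected line."""
    baseline = moving_median(ys, w)
    return [y - b for y, b in zip(ys, baseline)]
```

A uniformly skewed line (linearly increasing y-coordinates) is flattened to zero residuals away from the edges, with no need to segment the line into subparts.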
de Priester, JA; den Boer, JA; Giele, ELW; Christiaans, MHL; Kessels, A; Hasman, A; van Engelshoven, JMA
We evaluated a mathematical algorithm for the generation of medullary signal from raw dynamic magnetic resonance (MR) data. Five healthy volunteers were studied. MR examination consisted of a run of 100 T1-weighted coronal scans (gradient echo: TR/TE 11/3.4 msec, flip angle 60 degrees; slice
Scheinost, Dustin; Hampson, Michelle; Qiu, Maolin; Bhawnani, Jitendra; Constable, R Todd; Papademetris, Xenophon
2013-07-01
Real-time functional magnetic resonance imaging (rt-fMRI) has recently gained interest as a possible means to facilitate the learning of certain behaviors. However, rt-fMRI is limited by processing speed and available software, and continued development is needed for rt-fMRI to progress further and become feasible for clinical use. In this work, we present an open-source rt-fMRI system for biofeedback powered by a novel Graphics Processing Unit (GPU) accelerated motion correction strategy as part of the BioImage Suite project (www.bioimagesuite.org). Our system contributes to the development of rt-fMRI by presenting a motion correction algorithm that provides an estimate of motion with essentially no processing delay as well as a modular rt-fMRI system design. Using empirical data from rt-fMRI scans, we assessed the quality of motion correction in this new system. The present algorithm performed comparably to standard (non real-time) offline methods and outperformed other real-time methods based on zero order interpolation of motion parameters. The modular approach to the rt-fMRI system allows the system to be flexible to the experiment and feedback design, a valuable feature for many applications. We illustrate the flexibility of the system by describing several of our ongoing studies. Our hope is that continuing development of open-source rt-fMRI algorithms and software will make this new technology more accessible and adaptable, and will thereby accelerate its application in the clinical and cognitive neurosciences.
Directory of Open Access Journals (Sweden)
Pei-Fang (Jennifer) Tsai
2012-01-01
Remanufacturing of used products has become a strategic issue for cost-sensitive businesses. Owing to the uncertain supply of end-of-life (EoL) products, reverse logistics can only be sustainable with dynamic production planning for the disassembly process. This research investigates the sequencing of disassembly operations as a single-period partial disassembly optimization (SPPDO) problem that minimizes total disassembly cost. AND/OR graph representation is used to include all disassembly sequences of a returned product. A label-correcting algorithm is proposed to find an optimal partial disassembly plan when a specific reusable subpart is to be retrieved from the original return. Then, a heuristic procedure that utilizes this polynomial-time algorithm is presented to solve the SPPDO problem. Numerical examples demonstrate the effectiveness of this solution procedure.
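A generic FIFO label-correcting pass, the family of algorithms the authors apply to their AND/OR graph, repeatedly relaxes arcs until no node label can be improved. The toy graph below stands in for disassembly states and transition costs; it is not the paper's actual model:

```python
from collections import deque

def label_correcting(graph, source):
    """Shortest-path labels from source in a directed graph.

    graph: {node: [(successor, arc_cost), ...]}.  A node re-enters the
    queue whenever its label improves, so arcs are re-relaxed until all
    labels are final (valid as long as there is no negative cycle)."""
    dist = {source: 0.0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v, cost in graph.get(u, []):
            candidate = dist[u] + cost
            if candidate < dist.get(v, float("inf")):
                dist[v] = candidate
                queue.append(v)
    return dist
```

In the disassembly setting, each label would be the minimum cost of reaching a partial-disassembly state; the cheapest state exposing the target subpart gives the optimal partial plan.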
International Nuclear Information System (INIS)
Liu, Xingchen; Hu, Zhiyong; He, Qingbo; Zhang, Shangbin; Zhu, Jun
2017-01-01
Doppler distortion and background noise can reduce the effectiveness of wayside acoustic train bearing monitoring and fault diagnosis. This paper proposes a method of combining a microphone array and a matching pursuit algorithm to overcome these difficulties. First, a dictionary is constructed based on the characteristics and mechanism of a far-field assumption. Then, the angle of arrival of the train bearing is acquired by applying matching pursuit to the acoustic array signals. Finally, after obtaining the resampling time series, the Doppler distortion can be corrected, which is convenient for further diagnostic work. Compared with traditional single-microphone Doppler correction methods, the advantages of the presented array method are its robustness to background noise and the fact that it requires almost no pre-measured parameters. Simulation and experimental study show that the proposed method is effective in performing wayside acoustic bearing fault diagnosis. (paper)
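Matching pursuit itself is a greedy decomposition: at each step the dictionary atom most correlated with the residual is selected, its contribution subtracted, and the process repeated. In the paper the atoms encode far-field, Doppler-shifted signatures parameterized by angle of arrival; the orthonormal toy dictionary below is purely illustrative:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matching_pursuit(signal, atoms, n_iter=3):
    """Greedy matching pursuit over unit-norm atoms.

    Returns the list of (atom_index, coefficient) picks and the final
    residual after n_iter greedy subtractions."""
    residual = list(signal)
    picks = []
    for _ in range(n_iter):
        best = max(range(len(atoms)),
                   key=lambda i: abs(dot(residual, atoms[i])))
        coef = dot(residual, atoms[best])
        residual = [r - coef * a for r, a in zip(residual, atoms[best])]
        picks.append((best, coef))
    return picks, residual
```

With an orthonormal dictionary, the picked coefficients are exactly the signal's coordinates and the residual vanishes; with the correlated Doppler atoms of the paper, the first pick's parameter gives the estimated angle of arrival.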
Liu, Xingchen; Hu, Zhiyong; He, Qingbo; Zhang, Shangbin; Zhu, Jun
2017-10-01
Doppler distortion and background noise can reduce the effectiveness of wayside acoustic train bearing monitoring and fault diagnosis. This paper proposes a method of combining a microphone array and a matching pursuit algorithm to overcome these difficulties. First, a dictionary is constructed based on the characteristics and mechanism of a far-field assumption. Then, the angle of arrival of the train bearing is acquired by applying matching pursuit to the acoustic array signals. Finally, after obtaining the resampling time series, the Doppler distortion can be corrected, which is convenient for further diagnostic work. Compared with traditional single-microphone Doppler correction methods, the advantages of the presented array method are its robustness to background noise and the fact that it requires almost no pre-measured parameters. Simulation and experimental study show that the proposed method is effective in performing wayside acoustic bearing fault diagnosis.
Kaneta, Tomohiro; Kurihara, Hideyuki; Hakamatsuka, Takashi; Ito, Hiroshi; Maruoka, Shin; Fukuda, Hiroshi; Takahashi, Shoki; Yamada, Shogo
2004-12-01
123I-15-(p-iodophenyl)-3-(R,S)-methylpentadecanoic acid (BMIPP) and 99mTc-tetrofosmin (TET) are widely used for evaluation of myocardial fatty acid metabolism and perfusion, respectively. ECG-gated TET SPECT is also used for evaluation of myocardial wall motion. These tests are often performed on the same day to minimize both the time required and inconvenience to patients and medical staff. However, as 123I and 99mTc have similar emission energies (159 keV and 140 keV, respectively), it is necessary to consider not only scattered photons, but also primary photons of each radionuclide detected in the wrong window (cross-talk). In this study, we developed and evaluated the effectiveness of a new scatter and cross-talk correction imaging protocol. Fourteen patients with ischemic heart disease or heart failure (8 men and 6 women with a mean age of 69.4 yr, ranging from 45 to 94 yr) were enrolled in this study. In the routine one-day acquisition protocol, BMIPP SPECT was performed in the morning, with TET SPECT performed 4 h later. An additional SPECT was performed just before injection of TET with the energy window for 99mTc. These data correspond to the scatter and cross-talk factor of the next TET SPECT. The correction was performed by subtraction of the scatter and cross-talk factor from TET SPECT. Data are presented as means +/- S.E. Statistical analyses were performed using Wilcoxon's matched-pairs signed-ranks test, and p corrected total count was 26.0 +/- 5.3%. EDV and ESV after correction were significantly greater than those before correction (p = 0.019 and 0.016, respectively). After correction, EF was smaller than that before correction, but the difference was not significant. Perfusion scores (17 segments per heart) were significantly lower after as compared with those before correction (p correction revealed significant differences in EDV, ESV, and perfusion scores. These observations indicate that scatter and cross-talk correction is required for one
Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing
2017-12-01
We present an optimization algorithm to obtain low-uncertainty dynamic pressure measurements from a force-transducer-based device. In this paper, the advantages and disadvantages of the methods commonly used to measure propellant powder gas pressure, the applicable scope of dynamic pressure calibration devices, and the shortcomings of the traditional comparison calibration method based on the drop-weight device are first analysed in detail. Then, a dynamic calibration method for measuring pressure using a force sensor based on a drop-weight device is introduced. This method can effectively save time when many pressure sensors are calibrated simultaneously and can extend the life of expensive reference sensors. However, the force sensor is installed between the drop-weight and the hammerhead by transition pieces through bolt fastening, which introduces adverse effects such as additional pretightening and inertia forces. To mitigate these effects, the influence mechanisms of the pretightening force, the inertia force and other factors on the force measurement are theoretically analysed. A correction method for the force measurement is then proposed, based on an artificial neural network optimized by a genetic algorithm. The training and testing data sets are obtained from calibration tests, and the selection criteria for the key parameters of the correction model are discussed. The evaluation results for the test data show that the correction model can effectively improve the force measurement accuracy of the force sensor. Compared with the traditional high-accuracy comparison calibration method, the percentage difference of the impact-force-based measurement is less than 0.6% and the relative uncertainty of the corrected force value is 1.95%, which meets the requirements of engineering applications.
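The genetic-algorithm optimization can be illustrated on a deliberately simplified stand-in: instead of tuning neural network weights, the toy GA below fits a two-parameter linear correction to synthetic calibration data. Population size, mutation scale and the linear model itself are all assumptions for illustration:

```python
import random

def fitness(individual, data):
    """Negative sum of squared errors of the correction y = a*x + b."""
    a, b = individual
    return -sum((a * x + b - y) ** 2 for x, y in data)

def genetic_fit(data, pop_size=40, generations=60, seed=7):
    """Elitist GA: keep the better half, breed children by averaging two
    elite parents and adding small Gaussian mutations."""
    rng = random.Random(seed)
    pop = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, data), reverse=True)
        elite = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = rng.sample(elite, 2)
            children.append(((p1[0] + p2[0]) / 2 + rng.gauss(0, 0.05),
                             (p1[1] + p2[1]) / 2 + rng.gauss(0, 0.05)))
        pop = elite + children
    return max(pop, key=lambda ind: fitness(ind, data))
```

In the paper's setting, the individuals would encode network weights and the fitness would score the corrected force readings against the reference sensor.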
Indian Academy of Sciences (India)
to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...
Energy Technology Data Exchange (ETDEWEB)
Iwai, P; Lins, L Nadler [AC Camargo Cancer Center, Sao Paulo (Brazil)
2016-06-15
Purpose: There is a lack of studies with significant cohort data about patients using a pacemaker (PM), implanted cardioverter defibrillator (ICD) or cardiac resynchronization therapy (CRT) device undergoing radiotherapy. There is no literature comparing the cumulative doses delivered to these cardiac implanted electronic devices (CIED) as calculated by different algorithms, nor studies comparing doses calculated with and without heterogeneity correction. The aim of this study was to evaluate the influence of the Pencil Beam Convolution (PBC), Analytical Anisotropic Algorithm (AAA) and Acuros XB (AXB) algorithms, as well as heterogeneity correction, on the risk categorization of patients. Methods: A retrospective analysis of 19 3DCRT or IMRT plans of 17 patients was conducted, calculating the dose delivered to the CIED using the three calculation algorithms. Doses were evaluated with and without heterogeneity correction for comparison. Risk categorization of the patients was based on their CIED dependency and the cumulative dose in the devices. Results: Total estimated doses at the CIED calculated by AAA or AXB were higher than those calculated by PBC in 56% of the cases. On average, the doses at the CIED calculated by AAA and AXB were 29% and 4% higher, respectively, than those calculated by PBC. The maximum difference between doses calculated by the algorithms was about 1 Gy, whether heterogeneity correction was used or not. Values of maximum dose calculated with heterogeneity correction showed that the dose at the CIED was at least equal or higher in 84% of the cases with PBC, 77% with AAA and 67% with AXB than the dose obtained with no heterogeneity correction. Conclusion: The dose calculation algorithm and heterogeneity correction did not change the risk categorization. Since higher estimated doses delivered to the CIED do not compromise the treatment precautions to be taken, it is recommended that the most sophisticated algorithm available be used to predict the dose at the CIED, with heterogeneity correction.
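Risk categorization from CIED dependency and cumulative device dose can be sketched as a simple decision rule. The 2 Gy and 5 Gy thresholds below are illustrative assumptions, not values reported in this study:

```python
def cied_risk(cumulative_dose_gy, pacing_dependent):
    """Toy risk category from cumulative CIED dose and pacing dependency.

    Thresholds (2 Gy, 5 Gy) are assumed for illustration; an actual
    protocol would take them from institutional or published guidance."""
    if pacing_dependent and cumulative_dose_gy >= 2.0:
        return "high"
    if cumulative_dose_gy >= 5.0:
        return "high"
    if cumulative_dose_gy >= 2.0:
        return "medium"
    return "low"
```

The study's conclusion can be read through this lens: if all three algorithms place a patient's device dose on the same side of every threshold, the category, and hence the clinical precautions, do not change.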
International Nuclear Information System (INIS)
Iwai, P; Lins, L Nadler
2016-01-01
Purpose: There is a lack of studies with significant cohort data about patients using a pacemaker (PM), implanted cardioverter defibrillator (ICD) or cardiac resynchronization therapy (CRT) device undergoing radiotherapy. There is no literature comparing the cumulative doses delivered to these cardiac implanted electronic devices (CIED) as calculated by different algorithms, nor studies comparing doses calculated with and without heterogeneity correction. The aim of this study was to evaluate the influence of the Pencil Beam Convolution (PBC), Analytical Anisotropic Algorithm (AAA) and Acuros XB (AXB) algorithms, as well as heterogeneity correction, on the risk categorization of patients. Methods: A retrospective analysis of 19 3DCRT or IMRT plans of 17 patients was conducted, calculating the dose delivered to the CIED using the three calculation algorithms. Doses were evaluated with and without heterogeneity correction for comparison. Risk categorization of the patients was based on their CIED dependency and the cumulative dose in the devices. Results: Total estimated doses at the CIED calculated by AAA or AXB were higher than those calculated by PBC in 56% of the cases. On average, the doses at the CIED calculated by AAA and AXB were 29% and 4% higher, respectively, than those calculated by PBC. The maximum difference between doses calculated by the algorithms was about 1 Gy, whether heterogeneity correction was used or not. Values of maximum dose calculated with heterogeneity correction showed that the dose at the CIED was at least equal or higher in 84% of the cases with PBC, 77% with AAA and 67% with AXB than the dose obtained with no heterogeneity correction. Conclusion: The dose calculation algorithm and heterogeneity correction did not change the risk categorization. Since higher estimated doses delivered to the CIED do not compromise the treatment precautions to be taken, it is recommended that the most sophisticated algorithm available be used to predict the dose at the CIED, with heterogeneity correction.
Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.
2009-02-01
Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.
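As a simplified stand-in for the ordered-subset iterative deconvolution described above, the classic Richardson-Lucy update in one dimension shows the core idea: reblur the current estimate with the known blur kernel, compare with the measured data, and apply the resulting multiplicative correction. The signal and PSF are illustrative; the paper's method additionally splits the data into ordered subsets and works with a motion-derived system matrix:

```python
def convolve(x, psf):
    """Same-length 1-D convolution with zero boundaries (centered PSF)."""
    half = len(psf) // 2
    out = []
    for i in range(len(x)):
        s = 0.0
        for j, p in enumerate(psf):
            k = i + j - half
            if 0 <= k < len(x):
                s += x[k] * p
        out.append(s)
    return out

def richardson_lucy(blurred, psf, n_iter=80):
    """Multiplicative Richardson-Lucy deconvolution: estimate <- estimate *
    (flipped-PSF convolution of measured/reblurred ratio)."""
    est = [1.0] * len(blurred)
    psf_flipped = psf[::-1]
    for _ in range(n_iter):
        reblurred = convolve(est, psf)
        ratio = [b / r if r > 1e-12 else 0.0
                 for b, r in zip(blurred, reblurred)]
        correction = convolve(ratio, psf_flipped)
        est = [e * c for e, c in zip(est, correction)]
    return est
```

On a noiseless two-peak signal blurred by a short kernel, the iterations concentrate the flat initial estimate back onto the original peak positions.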
International Nuclear Information System (INIS)
Raghunath, N; Faber, T L; Suryanarayanan, S; Votaw, J R
2009-01-01
Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.
Energy Technology Data Exchange (ETDEWEB)
Raghunath, N; Faber, T L; Suryanarayanan, S; Votaw, J R [Department of Radiology, Emory University Hospital, 1364 Clifton Road, N.E. Atlanta, GA 30322 (United States)], E-mail: John.Votaw@Emory.edu
2009-02-07
International Nuclear Information System (INIS)
Baba, Yuji; Murakami, Ryuji; Mizukami, Naohisa; Morishita, Shoji; Yamashita, Yasuyuki; Araki, Fujio; Moribe, Nobuyuki; Hirata, Yukinori
2004-01-01
The purpose of this study was to compare radiation doses of small lung nodules calculated with beam-scattering compensation in heterogeneous tissues and those calculated without it. Computed tomography (CT) data of 34 small lung nodules (1-2 cm: 12 nodules, 2-3 cm: 11 nodules, 3-4 cm: 11 nodules) were used in the radiation dose measurements. Radiation planning for the lung nodules was performed with a commercially available unit using two different radiation dose calculation methods: the superposition method (with scatter compensation in heterogeneous tissues) and the Clarkson method (without scatter compensation in heterogeneous tissues). The linac photon energies used in this study were 4 MV and 10 MV. Monitor units (MU) needed to deliver 10 Gy at the center of the radiation field (center of the nodule) calculated with the two methods were compared. In 1-2 cm nodules, the MU calculated by the Clarkson method (MUc) was 90.0±1.1% (4 MV photons) and 80.5±2.7% (10 MV photons) of the MU calculated by the superposition method (MUs); in 2-3 cm nodules, MUc was 92.9±1.1% (4 MV) and 86.6±2.8% (10 MV) of MUs; and in 3-4 cm nodules, MUc was 90.5±2.0% (4 MV) and 90.1±1.7% (10 MV) of MUs. In 1-2 cm nodules, the MU calculated without lung compensation (MUn) was 120.6±8.3% (4 MV) and 95.1±4.1% (10 MV) of MUs; in 2-3 cm nodules, MUn was 120.3±11.5% (4 MV) and 100.5±4.6% (10 MV) of MUs; and in 3-4 cm nodules, MUn was 105.3±9.0% (4 MV) and 103.4±4.9% (10 MV) of MUs. The MU calculated without lung compensation was not significantly different from the MU calculated by the superposition method in 2-3 cm nodules. We found that the conventional dose calculation algorithm without scatter compensation in heterogeneous tissues substantially overestimated the radiation dose of small nodules in the lung field. In the calculation of dose distribution of small
International Nuclear Information System (INIS)
Häggström, I; Karlsson, M; Larsson, A; Schmidtlein, C
2014-01-01
Purpose: To investigate the effects of corrections for random and scattered coincidences on kinetic parameters in brain tumors, using ten Monte Carlo (MC) simulated dynamic FLT-PET brain scans. Methods: The GATE MC software was used to simulate ten repetitions of a 1-hour dynamic FLT-PET scan of a voxelized head phantom. The phantom comprised six normal head tissues, plus inserted regions for blood and tumor tissue. Different time-activity curves (TACs) for all eight tissue types were used in the simulation and were generated in Matlab using a 2-tissue model with preset parameter values (K1, k2, k3, k4, Va, Ki). The PET data were reconstructed into 28 frames by both ordered-subset expectation maximization (OSEM) and 3D filtered back-projection (3DFBP). Five image sets were reconstructed, all with normalization and different additional corrections (A=attenuation, R=random, S=scatter): trues (AC), trues+randoms (ARC), trues+scatters (ASC), total counts (ARSC) and total counts (AC). Corrections for randoms and scatters were based on real random and scatter sinograms that were back-projected, blurred, and then forward-projected and scaled to match the real counts. Weighted non-linear least-squares fitting of TACs from the blood and tumor regions was used to obtain parameter estimates. Results: The bias was not significantly different for trues (AC), trues+randoms (ARC), trues+scatters (ASC) and total counts (ARSC) for either 3DFBP or OSEM (p<0.05). Total counts with only AC stood out, however, with an up to 160% larger bias. In general, there was no difference in bias found between 3DFBP and OSEM, except in the parameters Va and Ki. Conclusion: According to our results, the methodology of correcting the PET data for randoms and scatters performed well for the dynamic images, where frames have much lower counts compared to static images. Generally, no bias was introduced by the corrections, and their importance was emphasized since omitting them increased bias extensively
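The weighted least-squares TAC fitting step can be shown in miniature. The sketch below fits a simplified one-tissue model (not the paper's full 2-tissue model) by scanning k2 on a grid and solving for K1 in closed form, since the model is linear in K1; the input function, noise level, and frame times are all invented for illustration.

```python
import numpy as np

t = np.linspace(0.1, 60.0, 28)                 # 28 frame mid-times (min)
cp = 10.0 * t * np.exp(-t / 2.0)               # toy plasma input function

def one_tissue_tac(t, K1, k2):
    """Tissue TAC for a one-tissue model: C_t = K1*exp(-k2*t) convolved with Cp."""
    dt = t[1] - t[0]
    irf = K1 * np.exp(-k2 * t)
    return np.convolve(irf, cp)[: len(t)] * dt

def fit_K1_k2(t, y, w, k2_grid):
    """Weighted least squares: for each trial k2, the best K1 has a closed
    form (the model is linear in K1); keep the (K1, k2) pair with lowest RSS."""
    best = (np.inf, 0.0, 0.0)
    for k2 in k2_grid:
        basis = one_tissue_tac(t, 1.0, k2)
        K1 = np.sum(w * basis * y) / np.sum(w * basis ** 2)
        rss = np.sum(w * (y - K1 * basis) ** 2)
        if rss < best[0]:
            best = (rss, K1, k2)
    return best[1], best[2]

true_K1, true_k2 = 0.3, 0.15
tac = one_tissue_tac(t, true_K1, true_k2)
rng = np.random.default_rng(1)
noisy = tac + rng.normal(0.0, 0.02 * tac.max(), size=t.size)
w = np.ones_like(t)                            # uniform weights for the toy fit
K1_est, k2_est = fit_K1_k2(t, noisy, w, np.linspace(0.01, 0.5, 200))
```

In practice the weights would follow frame-duration and count statistics rather than being uniform.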
Yang, Minglin; Wu, Yueqian; Sheng, Xinqing; Ren, Kuan Fang
2017-12-01
Computation of scattering of shaped beams by large nonspherical particles is a challenge in both the optics and electromagnetics domains, since it concerns many research fields. In this paper, we report our new progress in the numerical computation of scattering diagrams. Our algorithm permits calculation of the scattering of a particle as large as 110 wavelengths, or 700 in size parameter. The particle can be transparent or absorbing, of arbitrary shape, smooth or with a sharp surface, such as the Chebyshev particles or ice crystals. To illustrate the capacity of the algorithm, a zero-order Bessel beam is taken as the incident beam, and the scattering of ellipsoidal and Chebyshev particles is taken as an example. Some special phenomena have been revealed and examined. The scattering problem is formulated with the combined tangential formulation and solved iteratively with the aid of the multilevel fast multipole algorithm, which is well parallelized with the message passing interface on a distributed-memory computer platform using a hybrid partitioning strategy. The numerical predictions are compared with the results of the rigorous method for a spherical particle to validate the accuracy of the approach. The scattering diagrams of large ellipsoidal particles with various parameters are examined. The effect of the aspect ratio, as well as the half-cone angle of the incident zero-order Bessel beam and the off-axis distance, on scattered intensity is studied. Scattering by an asymmetric Chebyshev particle with size parameter larger than 700 is also given to show the capability of the method for computing scattering by arbitrarily shaped particles.
Meta-heuristic cuckoo search algorithm for the correction of faulty array antenna
International Nuclear Information System (INIS)
Khan, S.U.; Qureshi, I.M.
2015-01-01
In this article, we introduce a Cuckoo Search Algorithm (CSA) for compensation of a faulty array antenna. It is assumed that the location of the faulty element is known. When a sensor fails, it disturbs the power pattern: the sidelobe level (SLL) rises and nulls are shifted from their required positions. In this approach, the CSA optimizes the weights of the remaining active elements to reduce the SLL and restore the nulls in the desired directions. The meta-heuristic CSA is thus used for control of the SLL and steering of nulls to their required positions. The CSA is based on the obligate brood parasitism of cuckoo species, in combination with Lévy-flight behavior. The fitness function minimizes the error between the desired and estimated patterns, subject to null constraints. Simulation results for various scenarios are given to demonstrate the validity and performance of the proposed method. (author)
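A minimal cuckoo search along the lines of the generic Yang-Deb scheme (not the authors' antenna-specific implementation) can be sketched as follows; the fitness function here is a toy squared-error surrogate for the pattern-error cost, and all parameter choices (nest count, abandonment fraction, step scale) are illustrative.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(size, rng, beta=1.5):
    """Heavy-tailed Levy-flight step via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(fitness, dim, n_nests=15, pa=0.25, alpha=0.05, iters=300):
    rng = np.random.default_rng(0)
    nests = rng.uniform(-1.0, 1.0, (n_nests, dim))
    scores = np.array([fitness(n) for n in nests])
    for _ in range(iters):
        best = nests[scores.argmin()].copy()
        # New solutions via Levy flights, biased toward the current best nest.
        for i in range(n_nests):
            cand = nests[i] + alpha * levy_step(dim, rng) * (nests[i] - best)
            s = fitness(cand)
            j = rng.integers(n_nests)           # compare with a random nest
            if s < scores[j]:
                nests[j], scores[j] = cand, s
        # A fraction pa of the worst nests is abandoned (egg discovered).
        n_abandon = int(pa * n_nests)
        worst = np.argsort(scores)[-n_abandon:]
        nests[worst] = rng.uniform(-1.0, 1.0, (n_abandon, dim))
        scores[worst] = np.array([fitness(n) for n in nests[worst]])
    i = scores.argmin()
    return nests[i], scores[i]

# Toy fitness: squared error against a target weight vector, a stand-in
# for the pattern-error-plus-null-constraint cost used in the paper.
target = np.array([0.2, -0.5, 0.8, 0.1])
best_w, best_f = cuckoo_search(lambda w: float(np.sum((w - target) ** 2)), dim=4)
```

The abandonment step maintains diversity while the Lévy flights around the best nest perform the local refinement.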
Liu, Yang
2014-07-01
The computational complexity and memory requirements of classically formulated marching-on-in-time (MOT)-based surface integral equation (SIE) solvers scale as O(Nt Ns^2) and O(Ns^2), respectively; here Nt and Ns denote the number of temporal and spatial degrees of freedom of the current density. The multilevel plane wave time domain (PWTD) algorithm, viz., the time domain counterpart of the multilevel fast multipole method, reduces these costs to O(Nt Ns log^2 Ns) and O(Ns^1.5) (Ergin et al., IEEE Trans. Antennas Mag., 41, 39-52, 1999). Previously, PWTD-accelerated MOT-SIE solvers have been used to analyze transient scattering from perfect electrically conducting (PEC) and homogeneous dielectric objects discretized in terms of a million spatial unknowns (Shanker et al., IEEE Trans. Antennas Propag., 51, 628-641, 2003). More recently, an efficient parallelized solver that employs an advanced hierarchical and provably scalable spatial, angular, and temporal load partitioning strategy has been developed to analyze transient scattering problems that involve ten million spatial unknowns (Liu et al., in URSI Digest, 2013).
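The quoted complexities imply a speedup that grows as Ns / log^2 Ns. A back-of-the-envelope comparison (constants ignored, so only the scaling is meaningful):

```python
import math

def classical_cost(Nt, Ns):
    """Classical MOT-SIE time complexity: O(Nt * Ns^2)."""
    return Nt * Ns ** 2

def pwtd_cost(Nt, Ns):
    """PWTD-accelerated complexity: O(Nt * Ns * log2(Ns)^2)."""
    return Nt * Ns * math.log2(Ns) ** 2

Ns = 10 ** 6                # a million spatial unknowns, as in Shanker et al.
speedup = classical_cost(1, Ns) / pwtd_cost(1, Ns)   # ~ Ns / log2(Ns)^2
```

At a million unknowns the asymptotic ratio is on the order of a few thousand, which is what makes such problem sizes tractable at all.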
Algorithms and computer codes for atomic and molecular quantum scattering theory. Volume I
Energy Technology Data Exchange (ETDEWEB)
Thomas, L. (ed.)
1979-01-01
The goals of this workshop are to identify which of the existing computer codes for solving the coupled equations of quantum molecular scattering theory perform most efficiently on a variety of test problems, and to make tested versions of those codes available to the chemistry community through the NRCC software library. To this end, many of the most active developers and users of these codes have been invited to discuss the methods and to solve a set of test problems using the LBL computers. The first volume of this workshop report is a collection of the manuscripts of the talks that were presented at the first meeting held at the Argonne National Laboratory, Argonne, Illinois June 25-27, 1979. It is hoped that this will serve as an up-to-date reference to the most popular methods with their latest refinements and implementations.
Algorithms and computer codes for atomic and molecular quantum scattering theory. Volume I
International Nuclear Information System (INIS)
Thomas, L.
1979-01-01
Wang, Chunpeng; Lou, Zhengzhao Johnny; Chen, Xiuhong; Zeng, Xiping; Tao, Wei-Kuo; Huang, Xianglei
2014-01-01
Cloud-top temperature (CTT) is an important parameter for convective clouds and is usually different from the 11-micrometer brightness temperature due to non-blackbody effects. This paper presents an algorithm for estimating convective CTT by using simultaneous passive [Moderate Resolution Imaging Spectroradiometer (MODIS)] and active [CloudSat and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO)] measurements of clouds to correct for the non-blackbody effect. To do this, a weighting function of the MODIS 11-micrometer band is explicitly calculated by feeding cloud hydrometeor profiles from CloudSat and CALIPSO retrievals and temperature and humidity profiles based on ECMWF analyses into a radiative transfer model. Among 16 837 tropical deep convective clouds observed by CloudSat in 2008, the averaged effective emission level (EEL) of the 11-micrometer channel is located at an optical depth of approximately 0.72, with a standard deviation of 0.3. The distance between the EEL and the cloud-top height determined by CloudSat is shown to be related to a parameter called cloud-top fuzziness (CTF), defined as the vertical separation between -30 and 10 dBZ of CloudSat radar reflectivity. On the basis of these findings, a relationship is then developed between the CTF and the difference between the MODIS 11-micrometer brightness temperature and the physical CTT, the latter being the non-blackbody correction of CTT. The correction of the non-blackbody effect of CTT is applied to analyze convective cloud-top buoyancy. With this correction, about 70% of the convective cores observed by CloudSat in the height range of 6-10 km have positive buoyancy near cloud top, meaning the clouds are still growing vertically, although their final fate cannot be determined from snapshot observations.
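The notion of an effective emission level at a fixed optical depth can be sketched numerically: given an extinction profile, integrate optical depth downward from cloud top and find where it reaches 0.72. The profile below is invented for illustration and is not derived from the paper's retrievals.

```python
import numpy as np

def effective_emission_height(z, extinction, tau_eel=0.72):
    """Height at which optical depth, integrated downward from cloud top,
    reaches tau_eel (0.72 is the mean EEL optical depth reported above).

    z          : altitudes in km, ascending
    extinction : volume extinction coefficient (1/km) at each altitude
    """
    dz = np.diff(z)
    # Trapezoidal per-layer optical depths, reordered top-down.
    layer_tau = (0.5 * (extinction[1:] + extinction[:-1]) * dz)[::-1]
    tau_cum = np.cumsum(layer_tau)
    z_mid = (0.5 * (z[1:] + z[:-1]))[::-1]
    if tau_cum[-1] < tau_eel:
        return float(z_mid[-1])     # optically thin: EEL at profile base
    return float(np.interp(tau_eel, tau_cum, z_mid))

# Toy cloud: uniform extinction of 1/km between 8 and 12 km, so the EEL
# should sit roughly 0.72 km below the 12 km cloud top.
z = np.linspace(8.0, 12.0, 41)
eel = effective_emission_height(z, np.ones_like(z))
```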
An Iterative Optimization Algorithm for Lens Distortion Correction Using Two-Parameter Models
Directory of Open Access Journals (Sweden)
Daniel Santana-Cedrés
2016-12-01
We present a method for the automatic estimation of two-parameter radial distortion models, considering polynomial as well as division models. The method first detects the longest distorted lines within the image by applying the Hough transform enriched with a radial distortion parameter. From these lines, the first distortion parameter is estimated; we then initialize the second distortion parameter to zero, and the two-parameter model is embedded into an iterative nonlinear optimization process to improve the estimation. This optimization aims at reducing the distance from the edge points to the lines, adjusting the two distortion parameters as well as the coordinates of the center of distortion. Furthermore, this allows more points belonging to the distorted lines to be detected, so the Hough transform is iteratively repeated to extract a better set of lines until no further improvement is achieved. We present some experiments on real images with significant distortion to show the ability of the proposed approach to automatically correct this type of distortion, as well as a comparison between the polynomial and division models.
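The two-parameter polynomial radial model referred to above has the form r_d = r(1 + k1 r^2 + k2 r^4). A minimal sketch of applying and inverting it (by fixed-point iteration, one common choice; the coefficient values are hypothetical):

```python
import numpy as np

def distort_radius(r, k1, k2):
    """Two-parameter polynomial radial model: r_d = r * (1 + k1*r^2 + k2*r^4)."""
    return r * (1.0 + k1 * r ** 2 + k2 * r ** 4)

def undistort_radius(r_d, k1, k2, iters=20):
    """Invert the model by fixed-point iteration r <- r_d / (1 + k1*r^2 + k2*r^4),
    which converges for the moderate coefficients typical of camera lenses."""
    r = np.asarray(r_d, dtype=float).copy()
    for _ in range(iters):
        r = r_d / (1.0 + k1 * r ** 2 + k2 * r ** 4)
    return r

k1, k2 = 0.08, 0.01          # hypothetical distortion coefficients
r = np.linspace(0.0, 1.0, 11)
r_back = undistort_radius(distort_radius(r, k1, k2), k1, k2)
```

The division model replaces the polynomial factor with a rational one; the same fixed-point inversion idea applies.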
Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno
2015-01-01
For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction, by introducing sparse-view CT data acquisition. Methods: We investigated sparse-view CT acquisition protocols resulting in ultra-low-dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose, using CT and PET simulations of a 3D NCAT phantom with lesions inserted into the liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 seconds. One standard dose and four ultra-low dose levels, namely 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical FDK algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse-view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube-pulsing mode and a continuous exposure mode for sparse-view data acquisition. Global PET ensemble root-mean-square error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. Results: With sparse-view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction, with little or no measurable effect on the PET image. For the four ultra-low dose levels
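The sinogram-interpolation step for sparse-view data can be sketched as simple per-detector linear interpolation across view angles. The toy sinogram below is synthetic; a real implementation would also handle the 360-degree wrap-around at the ends of the scan, which this sketch deliberately omits (hence the larger error near the last views).

```python
import numpy as np

def interpolate_sparse_sinogram(sparse_views, sparse_angles, full_angles):
    """Fill in missing views by per-detector linear interpolation in angle."""
    n_det = sparse_views.shape[1]
    full = np.empty((len(full_angles), n_det))
    for d in range(n_det):
        full[:, d] = np.interp(full_angles, sparse_angles, sparse_views[:, d])
    return full

# Toy sinogram of a smooth object: keep 41 of 984 views, then interpolate,
# echoing the 984-view and 41-view protocols simulated in the abstract.
full_angles = np.linspace(0.0, 360.0, 984, endpoint=False)
dets = np.linspace(-1.0, 1.0, 64)
truth = np.sin(np.deg2rad(full_angles))[:, None] * np.exp(-dets ** 2)[None, :]
keep = np.arange(0, 984, 24)                    # every 24th view -> 41 views
approx = interpolate_sparse_sinogram(truth[keep], full_angles[keep], full_angles)
err = np.max(np.abs(approx - truth))
```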
International Nuclear Information System (INIS)
Freitas, B.M.; Silva, A.X. da
2014-01-01
The Instituto de Radioprotecao e Dosimetria (IRD) runs a neutron individual monitoring service with albedo-type monitors and thermoluminescent detectors (TLD). Moreover, most workers exposed to neutrons in Brazil are exposed to 241Am-Be fields; a study of the response of the albedo dosemeter to neutron scattering from a 241Am-Be source is therefore important for proper calibration. In this work, the influence of the scattering correction at two distances was evaluated at the Low Scattering Laboratory of the Neutron Laboratory of the Brazilian National Metrology Laboratory for Ionizing Radiation (Lab. Nacional de Metrologia Brasileira de Radiacoes Ionizantes) in the calibration of that albedo dosemeter for a 241Am-Be source. (author)
International Nuclear Information System (INIS)
Slopsema, R. L.; Flampouri, S.; Yeung, D.; Li, Z.; Lin, L.; McDonough, J. E.; Palta, J.
2014-01-01
Purpose: The purpose of this investigation is to determine if a single set of beam data, described by a minimal set of equations and fitting variables, can be used to commission different installations of a proton double-scattering system in a commercial pencil-beam dose calculation algorithm. Methods: The beam model parameters required to commission the pencil-beam dose calculation algorithm (virtual and effective SAD, effective source size, and pristine-peak energy spread) are determined for a commercial double-scattering system. These parameters are measured in a first room and parameterized as a function of proton energy and nozzle settings by fitting four analytical equations to the measured data. The combination of these equations and fitting values constitutes the golden beam data (GBD). To determine the variation in dose delivery between installations, the same dosimetric properties are measured in two additional rooms at the same facility, as well as in a single room at another facility. The difference between the room-specific measurements and the GBD is evaluated against tolerances that guarantee the 3D dose distribution in each of the rooms matches the GBD-based dose distribution within clinically reasonable limits. The pencil-beam treatment-planning algorithm is commissioned with the GBD. The three-dimensional dose distribution in water is evaluated in the four treatment rooms and compared to the treatment-planning calculated dose distribution. Results: The virtual and effective SAD measurements fall between 226 and 257 cm. The effective source size varies between 2.4 and 6.2 cm for the large-field options, and 1.0 and 2.0 cm for the small-field options. The pristine-peak energy spread decreases from 1.05% at the lowest range to 0.6% at the highest. The virtual SAD as well as the effective source size can be accurately described by a linear relationship as a function of the inverse of the residual energy. An additional linear correction term as a function of
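The reported linear dependence of the virtual SAD on inverse residual energy suggests a simple least-squares fit. The sketch below uses invented measurements (chosen only to lie within the 226-257 cm range quoted in the abstract) to illustrate building one such golden-beam-data relationship.

```python
import numpy as np

# Hypothetical virtual-SAD measurements (cm) vs. residual energy (MeV),
# invented for illustration; not the paper's actual commissioning data.
residual_energy = np.array([100.0, 125.0, 150.0, 175.0, 200.0])
virtual_sad = np.array([256.0, 248.0, 243.0, 239.0, 236.5])

# Fit SAD(E) = a + b/E by ordinary least squares on x = 1/E.
x = 1.0 / residual_energy
b, a = np.polyfit(x, virtual_sad, 1)            # slope b, intercept a

def golden_sad(E):
    """Golden-beam-data style parameterization of the virtual SAD."""
    return a + b / E

rms_resid = np.sqrt(np.mean((golden_sad(residual_energy) - virtual_sad) ** 2))
```

Commissioning a new room then reduces to checking that room-specific measurements fall within tolerance of this parameterized curve.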
Energy Technology Data Exchange (ETDEWEB)
Slopsema, R. L., E-mail: rslopsema@floridaproton.org; Flampouri, S.; Yeung, D.; Li, Z. [University of Florida Proton Therapy Institute, 2015 North Jefferson Street, Jacksonville, Florida 32205 (United States); Lin, L.; McDonough, J. E. [Department of Radiation Oncology, University of Pennsylvania, 3400 Civic Boulevard, 2326W TRC, PCAM, Philadelphia, Pennsylvania 19104 (United States); Palta, J. [VCU Massey Cancer Center, Virginia Commonwealth University, 401 College Street, Richmond, Virginia 23298 (United States)
2014-09-15
International Nuclear Information System (INIS)
Jones, Andrew Osler
2004-01-01
There is an increasing interest in the use of inhomogeneity corrections for lung, air, and bone in radiotherapy treatment planning. Traditionally, corrections based on physical density have been used. Modern algorithms use the electron density derived from CT images. Small fields are used in both conformal radiotherapy and IMRT; however, their beam characteristics in inhomogeneous media have not been extensively studied. This work compares traditional and modern treatment planning algorithms to Monte Carlo simulations in and near low-density inhomogeneities. Field sizes ranging from 0.5 cm to 5 cm in diameter are projected onto a phantom containing inhomogeneities, and depth-dose curves are compared. Comparisons of the Dose Perturbation Factors (DPF) are presented as functions of density and field size. Dose Correction Factors (DCF), which scale the algorithms to the Monte Carlo data, are compared for each algorithm. Physical scaling algorithms such as Batho and Equivalent Pathlength (EPL) predict an increase in dose for small fields passing through lung tissue, where Monte Carlo simulations show a sharp dose drop. The physical model-based collapsed cone convolution (CCC) algorithm correctly predicts the dose drop, but does not accurately predict its magnitude. Because the model-based algorithms do not correctly account for the change in backscatter, the dose drop predicted by CCC occurs farther downstream compared to that predicted by the Monte Carlo simulations. Beyond the tissue inhomogeneity, all of the algorithms studied predict dose distributions in close agreement with Monte Carlo simulations. Dose-volume relationships are important in understanding the effects of radiation on the lung. The dose within the lung is affected by a complex function of beam energy, lung tissue density, and field size. Dose algorithms vary in their ability to correctly predict the dose to the lung tissue. A thorough analysis of the effects of density and field size on dose to the
Energy Technology Data Exchange (ETDEWEB)
Kitamura, Keishi [Shimadzu Corporation, 1 Nishinokyo-Kuwabaracho, Nakagyo-ku, Kyoto-shi, Kyoto 604-8511 (Japan)]. E-mail: kitam@shimadzu.co.jp; Ishikawa, Akihiro [Shimadzu Corporation, 1 Nishinokyo-Kuwabaracho, Nakagyo-ku, Kyoto-shi, Kyoto 604-8511 (Japan); Mizuta, Tetsuro [Shimadzu Corporation, 1 Nishinokyo-Kuwabaracho, Nakagyo-ku, Kyoto-shi, Kyoto 604-8511 (Japan); Yamaya, Taiga [National Institute of Radiological Sciences, 9-1 Anagawa-4, Inage-ku, Chiba-shi, Chiba 263-8555 (Japan); Yoshida, Eiji [National Institute of Radiological Sciences, 9-1 Anagawa-4, Inage-ku, Chiba-shi, Chiba 263-8555 (Japan); Murayama, Hideo [National Institute of Radiological Sciences, 9-1 Anagawa-4, Inage-ku, Chiba-shi, Chiba 263-8555 (Japan)
2007-02-01
The jPET-D4 is a brain positron emission tomography (PET) scanner composed of 4-layer depth-of-interaction (DOI) detectors with a large number of GSO crystals, which achieves both high spatial resolution and high scanner sensitivity. Since the sensitivity of each crystal element is highly dependent on DOI layer depth and incident γ-ray energy, it is difficult to estimate normalization factors and scatter components with high statistical accuracy. In this work, we implemented a hybrid scatter correction method combined with component-based normalization, which estimates scatter components from a dual-energy acquisition using a convolution-subtraction method to estimate trues from an upper energy window. In order to reduce statistical noise in sinograms, the implemented scheme uses the DOI compression (DOIC) method, which combines deep pairs of DOI layers into the nearest shallow pairs of DOI layers with natural detector samplings. Since the compressed data preserve the block detector configuration, as if the data were acquired using 'virtual' detectors with high γ-ray stopping power, these correction methods can be applied directly to DOIC sinograms. The proposed method provides high-quality corrected images with low statistical noise, even for a multi-layer DOI-PET.
Sosnovik, David E; Dai, Guangping; Nahrendorf, Matthias; Rosen, Bruce R; Seethamraju, Ravi
2007-08-01
To evaluate the use of a transmit-receive surface (TRS) coil and a cardiac-tailored intensity-correction algorithm for cardiac MRI in mice at 9.4 Tesla (9.4T). Fast low-angle shot (FLASH) cines, with and without delays alternating with nutations for tailored excitation (DANTE) tagging, were acquired in 13 mice. An intensity-correction algorithm was developed to compensate for the sensitivity profile of the surface coil, and was tailored to account for the unique distribution of noise and flow artifacts in cardiac MR images. Image quality was extremely high and allowed fine structures such as trabeculations, valve cusps, and coronary arteries to be clearly visualized. The tag lines created with the surface coil were also sharp and clearly visible. Application of the intensity-correction algorithm improved signal intensity, tissue contrast, and image quality even further. Importantly, the cardiac-tailored properties of the correction algorithm prevented noise and flow artifacts from being significantly amplified. The feasibility and value of cardiac MRI in mice with a TRS coil has been demonstrated. In addition, a cardiac-tailored intensity-correction algorithm has been developed and shown to improve image quality even further. The use of these techniques could produce significant potential benefits over a broad range of scanners, coil configurations, and field strengths. (c) 2007 Wiley-Liss, Inc.
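A common way to implement surface-coil intensity correction is to estimate the low-frequency coil sensitivity profile by heavy smoothing and divide it out. The sketch below shows that generic approach in pure NumPy; the signal floor is a loose stand-in for the cardiac-tailored noise handling described above, and the phantom and parameters are invented.

```python
import numpy as np

def gaussian_blur_2d(img, sigma):
    """Separable Gaussian blur in pure NumPy (edge-replicated padding)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    pad = np.pad(img, radius, mode='edge')
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, rows)

def intensity_correct(img, sigma=12.0, floor=0.05):
    """Divide by a heavily smoothed copy of the image, taken as a proxy for
    the coil sensitivity profile; the floor avoids amplifying near-zero
    (noise- or flow-dominated) regions."""
    bias = gaussian_blur_2d(img, sigma)
    bias = bias / bias.max()
    return img / np.maximum(bias, floor)

# Toy phantom: gently structured "tissue" times a coil-like falloff.
yy, xx = np.mgrid[0:64, 0:64]
tissue = 1.0 + 0.2 * np.sin(2 * np.pi * xx / 16.0) * np.sin(2 * np.pi * yy / 16.0)
sensitivity = np.exp(-yy / 40.0)        # brighter near the surface coil (top)
measured = tissue * sensitivity
corrected = intensity_correct(measured)
before = np.corrcoef(measured.ravel(), tissue.ravel())[0, 1]
after = np.corrcoef(corrected.ravel(), tissue.ravel())[0, 1]
```

After correction the image correlates far better with the underlying tissue pattern than the coil-shaded input does.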
Energy Technology Data Exchange (ETDEWEB)
Pontone, Gianluca; Bertella, Erika; Baggiano, Andrea; Mushtaq, Saima; Loguercio, Monica; Segurini, Chiara; Conte, Edoardo; Beltrama, Virginia; Annoni, Andrea; Formenti, Alberto; Petulla, Maria; Trabattoni, Daniela; Pepi, Mauro [Centro Cardiologico Monzino, IRCCS, Milan (Italy); Andreini, Daniele; Montorsi, Piero; Bartorelli, Antonio L. [Centro Cardiologico Monzino, IRCCS, Milan (Italy); University of Milan, Department of Cardiovascular Sciences and Community Health, Milan (Italy); Guaricci, Andrea I. [University of Foggia, Department of Cardiology, Foggia (Italy)
2016-01-15
The aim of this study was to evaluate the impact of a novel intra-cycle motion correction algorithm (MCA) on the overall evaluability and diagnostic accuracy of cardiac computed tomography coronary angiography (CCT). From a cohort of 900 consecutive patients referred for CCT for suspected coronary artery disease (CAD), we enrolled 160 (18%) patients (mean age 65.3 ± 11.7 years, 101 male) with at least one coronary segment classified as non-evaluable for motion artefacts. The CCT data sets were evaluated using a standard reconstruction algorithm (SRA) and the MCA, and compared in terms of subjective image quality, evaluability and diagnostic accuracy. The mean heart rate during the examination was 68.3 ± 9.4 bpm. The MCA showed a higher Likert score (3.1 ± 0.9 vs. 2.5 ± 1.1, p < 0.001) and evaluability (94% vs. 79%, p < 0.001) than the SRA. In a 45-patient subgroup studied by clinically indicated invasive coronary angiography, specificity, positive predictive value and accuracy were higher with the MCA than the SRA in segment-based and vessel-based models, respectively (87% vs. 73%, 50% vs. 34%, 85% vs. 73%, p < 0.001; and 62% vs. 28%, 66% vs. 51%, 75% vs. 57%, p < 0.001). In a patient-based model, the MCA showed higher accuracy than the SRA (93% vs. 76%, p < 0.05). The MCA can significantly improve subjective image quality, overall evaluability and diagnostic accuracy of CCT. (orig.)
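The accuracy figures quoted above derive from standard confusion-matrix metrics; a small helper makes the definitions explicit. The counts below are hypothetical, chosen only to illustrate the computation (the PPV of 50% merely echoes the segment-based MCA figure and is not the study's actual table).

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Per-segment diagnostic accuracy metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),              # positive predictive value
        "npv": tn / (tn + fn),              # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical segment-level counts for illustration only.
m = diagnostic_metrics(tp=45, fp=45, tn=610, fn=10)
```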
Sakkas, Georgios; Sakellariou, Nikolaos
2018-05-01
Strong-motion recordings are key in many earthquake engineering applications and are also fundamental for seismic design. The present study focuses on the automated correction of accelerograms, analog and digital. The main feature of the proposed algorithm is the automatic selection of the cut-off frequencies based on a minimum spectral value in a predefined frequency bandwidth, instead of the typical signal-to-noise approach. The algorithm follows the basic steps of the correction procedure (instrument correction, baseline correction and appropriate filtering). Besides the corrected time histories, peak ground acceleration, peak ground velocity and peak ground displacement values are calculated, along with the corrected Fourier spectra and the response spectra. The algorithm is written in the Matlab environment, is fast, and can be used for batch processing or in real-time applications. In addition, the option of applying a signal-to-noise-ratio criterion is provided, as well as causal or acausal filtering. The algorithm has been tested on significant earthquakes of the Greek territory (Kozani-Grevena 1995, Aigio 1995, Athens 1999, Lefkada 2003 and Kefalonia 2014) with analog and digital accelerograms.
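The correction pipeline, with a spectral-minimum cut-off selection in place of a signal-to-noise criterion, can be sketched in a frequency-domain (acausal) form. The brick-wall filter and the synthetic record below are simplifications for illustration; the paper's implementation is in Matlab and includes instrument correction, which is omitted here.

```python
import numpy as np

def correct_accelerogram(acc, dt, search_band=(0.05, 1.0), high_cut=25.0):
    """Baseline-correct and band-pass an accelerogram. The low cut-off is
    picked automatically at the minimum Fourier amplitude inside
    search_band, mimicking the spectral-minimum idea described above."""
    acc = np.asarray(acc, float) - np.mean(acc)   # remove offset (baseline)
    freqs = np.fft.rfftfreq(len(acc), dt)
    spec = np.fft.rfft(acc)
    band = (freqs >= search_band[0]) & (freqs <= search_band[1])
    low_cut = freqs[band][np.argmin(np.abs(spec[band]))]
    keep = (freqs >= low_cut) & (freqs <= high_cut)
    corrected = np.fft.irfft(spec * keep, len(acc))
    return corrected, low_cut

# Toy record: 2 Hz ground motion plus a 0.025 Hz instrumental drift
# (40 s at 100 samples/s, chosen so both tones fall on exact FFT bins).
dt = 0.01
t = np.arange(0.0, 40.0, dt)
record = np.sin(2 * np.pi * 2.0 * t) + 0.5 * np.sin(2 * np.pi * 0.025 * t)
corrected, low_cut = correct_accelerogram(record, dt)
pga = np.max(np.abs(corrected))
```

The drift is removed while the 2 Hz motion (and hence the PGA of the clean signal) is preserved.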
Indian Academy of Sciences (India)
ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...
Energy Technology Data Exchange (ETDEWEB)
Shi, L; Zhu, L [Georgia Institute of Technology, Atlanta, GA (Georgia); Vedantham, S; Karellas, A [University of Massachusetts Medical School, Worcester, MA (United States)
2016-06-15
Purpose: Scatter contamination is detrimental to image quality in dedicated cone-beam breast CT (CBBCT), resulting in cupping artifacts and loss of contrast in reconstructed images. Such effects impede visualization of breast lesions and degrade quantitative accuracy. Previously, we proposed a library-based software approach to suppress scatter in CBBCT images. In this work, we quantify the efficacy and stability of this approach using datasets from 15 human subjects. Methods: A pre-computed scatter library is generated using Monte Carlo simulations for semi-ellipsoid breast models and a homogeneous fibroglandular/adipose tissue mixture encompassing the range reported in the literature. Projection datasets from 15 human subjects that cover the 95th percentile of breast dimensions and fibroglandular volume fraction were included in the analysis. Our investigations indicate that it is sufficient to consider the breast dimensions alone, as variation in the fibroglandular fraction does not significantly affect the scatter-to-primary ratio. The breast diameter is measured from a first-pass reconstruction; the appropriate scatter distribution is selected from the library and deformed to account for the discrepancy in total projection intensity between the clinical dataset and the simulated semi-ellipsoidal breast. The deformed scatter distribution is subtracted from the measured projections for scatter correction. Spatial non-uniformity (SNU) and contrast-to-noise ratio (CNR) were used as quantitative metrics to evaluate the results. Results: On the 15 patient cases, our method reduced the overall image spatial non-uniformity (SNU) from 7.14% ± 2.94% (mean ± standard deviation) to 2.47% ± 0.68% in the coronal view and from 10.14% ± 4.1% to 3.02% ± 1.26% in the sagittal view. The average contrast-to-noise ratio (CNR) improved by a factor of 1.49 ± 0.40 in the coronal view and by 2.12 ± 1.54 in the sagittal view. Conclusion: We demonstrate the robustness and effectiveness of a library-based scatter correction
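The selection-and-subtraction step described in the Methods can be sketched as follows. This is a hedged illustration of the workflow only: the variable names, the nearest-diameter lookup, and the use of a single global scale factor in place of the paper's deformation are assumptions:

```python
import numpy as np

def library_scatter_correct(projection, lib_diameters, lib_scatter, lib_totals, diameter):
    """Pick the pre-computed Monte Carlo scatter map whose semi-ellipsoid
    diameter is closest to the first-pass estimate, rescale it by the
    total-intensity discrepancy, and subtract it from the projection."""
    idx = int(np.argmin(np.abs(np.asarray(lib_diameters) - diameter)))
    # scale by the ratio of measured to simulated total projection intensity
    scale = projection.sum() / lib_totals[idx]
    corrected = projection - scale * lib_scatter[idx]
    return np.clip(corrected, 0.0, None)  # keep the primary estimate non-negative
```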
Directory of Open Access Journals (Sweden)
I. S. Timchenko
2015-07-01
For calculating the radiative tails in the spectra of inelastic electron scattering by nuclei, an approximation known as the equivalent radiator method (ERM) is used. However, the applicability of this method for evaluating the radiative tail from the elastic scattering peak has been little investigated, and it is therefore the subject of the present study for the case of light nuclei. As a result, spectral regions were found where a significant discrepancy between the ERM calculation and the exact-formula calculation is observed. This phenomenon was linked to the diffraction minimum of the squared form factor of the nuclear ground state. Calculations were carried out for different kinematics of electron scattering by nuclei. The analysis of the results establishes the conditions under which the equivalent radiator method can be applied to adequately evaluate the radiative tail of the elastic scattering peak.
International Nuclear Information System (INIS)
Narabayashi, Masaru; Mizowaki, Takashi; Matsuo, Yukinori; Nakamura, Mitsuhiro; Takayama, Kenji; Norihisa, Yoshiki; Sakanaka, Katsuyuki; Hiraoka, Masahiro
2012-01-01
Heterogeneity correction algorithms can have a large impact on the dose distributions of stereotactic body radiation therapy (SBRT) for lung tumors. Treatment plans of 20 patients who underwent SBRT for lung tumors with the prescribed dose of 48 Gy in four fractions at the isocenter were reviewed retrospectively and recalculated with different heterogeneity correction algorithms: the pencil beam convolution algorithm with a Batho power-law correction (BPL) in Eclipse, the radiological path length algorithm (RPL), and the X-ray Voxel Monte Carlo algorithm (XVMC) in iPlan. The doses at the periphery (minimum dose and D95) of the planning target volume (PTV) were compared using the same monitor units among the three heterogeneity correction algorithms, and the monitor units were compared between two methods of dose prescription, that is, an isocenter dose prescription (IC prescription) and dose-volume based prescription (D95 prescription). Mean values of the dose at the periphery of the PTV were significantly lower with XVMC than with BPL using the same monitor units (P<0.001). In addition, under IC prescription using BPL, RPL and XVMC, the ratios of mean values of monitor units were 1, 0.959 and 0.986, respectively. Under D95 prescription, they were 1, 0.937 and 1.088, respectively. These observations indicated that the application of XVMC under D95 prescription results in an increase in the actually delivered dose by 8.8% on average compared with the application of BPL. The appropriateness of switching heterogeneity correction algorithms and dose prescription methods should be carefully validated from a clinical viewpoint. (author)
He, Xiaojun; Ma, Haotong; Luo, Chuanxin
2016-10-01
An optical multi-aperture imaging system is an effective way to enlarge the aperture and increase the resolution of a telescope optical system; the difficulty lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent algorithm (SPGD) to correct the co-phase error. Compared with current methods, the SPGD approach avoids measuring the co-phase error directly. This paper analyzes the influence of piston error and tilt error on image quality in a double-aperture imaging system, introduces the basic principle of the SPGD algorithm, and discusses the influence of the algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves as the gain coefficient and disturbance amplitude increase, but the stability of the algorithm is reduced; an adaptive gain coefficient can solve this problem appropriately. These results provide a theoretical reference for co-phase error correction in multi-aperture imaging systems.
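A minimal sketch of the SPGD loop discussed above, with the gain coefficient and disturbance amplitude as explicit parameters (the control vector `u` and the sharpness metric are placeholders; this is an illustration of the algorithm, not the paper's code):

```python
import numpy as np

def spgd_maximize(metric, u0, gain=0.5, amp=0.1, iters=500, seed=0):
    """Stochastic parallel gradient descent (ascent form): perturb all
    control channels simultaneously with random +/-amp disturbances,
    measure the two-sided metric change, and step along it."""
    rng = np.random.default_rng(seed)
    u = np.array(u0, dtype=float)
    for _ in range(iters):
        du = amp * rng.choice([-1.0, 1.0], size=u.shape)  # parallel perturbation
        dJ = metric(u + du) - metric(u - du)              # two-sided metric change
        u += gain * dJ * du                               # stochastic gradient step
    return u
```

Larger `gain` or `amp` speeds convergence but, past a point, destabilizes the loop, which is the trade-off the paper resolves with an adaptive gain coefficient.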
Energy Technology Data Exchange (ETDEWEB)
Shiga, Tohru; Takano, Akihiro; Tsukamoto, Eriko; Tamaki, Nagara [Department of Nuclear Medicine, Hokkaido University School of Medicine, Sapporo (Japan); Kubo, Naoki [Department of Radiological Technology, College of Medical Technology, Hokkaido University, Sapporo (Japan); Kobayashi, Junko; Takeda, Yoji; Nakamura, Fumihiro; Koyama, Tsukasa [Department of Psychiatry and Neurology, Hokkaido University School of Medicine, Sapporo (Japan); Katoh, Chietsugu [Department of Tracer Kinetics, Hokkaido University School of Medicine, Sapporo (Japan)
2002-03-01
Scatter correction (SC) using the triple energy window method (TEW) has recently been applied to brain perfusion single-photon emission tomography (SPET). The aim of this study was to investigate the effect of scatter correction using TEW on N-isopropyl-p-[123I]iodoamphetamine (123I-IMP) SPET in normal subjects. The study population consisted of 15 right-handed normal subjects. SPET data were acquired from 20 min to 40 min after the injection of 167 MBq of IMP, using a triple-head gamma camera. Images were reconstructed with and without SC. 3D T1-weighted magnetic resonance (MR) images were also obtained with a 1.5-Tesla scanner. First, IMP images with and without SC were co-registered to the 3D MRI. Second, the two co-registered IMP images were normalised using SPM96. A t statistic image for the contrast condition effect was constructed. We investigated areas using a voxel-level threshold of 0.001, with a corrected threshold of 0.05. Compared with results obtained without SC, the IMP distribution with SC was significantly decreased in the peripheral areas of the cerebellum, the cortex and the ventricle, and also in the lateral occipital cortex and the base of the temporal lobe. On the other hand, the IMP distribution with SC was significantly increased in the anterior and posterior cingulate cortex, the insular cortex and the medial part of the thalamus. It is concluded that differences in the IMP distribution with and without SC exist not only in the peripheral areas of the cerebellum, the cortex and the ventricle but also in the occipital lobe, the base of the temporal lobe, the insular cortex, the medial part of the thalamus, and the anterior and posterior cingulate cortex. This needs to be recognised for adequate interpretation of IMP brain perfusion SPET after scatter correction. (orig.)
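The TEW estimate used above follows a standard trapezoidal formula: counts in two narrow windows flanking the photopeak approximate the scatter spectrum under the main window. A sketch (the window widths are illustrative values, not those of this study):

```python
def tew_scatter_estimate(c_lower, c_upper, c_main, w_sub=7.0, w_main=20.0):
    """Triple-energy-window estimate: the scatter under the main window is the
    area of a trapezoid whose sides are the count densities (counts per keV)
    in the lower and upper sub-windows."""
    scatter = (c_lower / w_sub + c_upper / w_sub) * w_main / 2.0
    primary = max(c_main - scatter, 0.0)  # scatter-corrected photopeak counts
    return scatter, primary
```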
International Nuclear Information System (INIS)
Jayaswal, B.; Mazumder, S.
1998-09-01
Small-angle scattering data from strongly scattering systems, e.g. porous materials, cannot be analysed by invoking the single scattering approximation, as specimens thick enough to replicate the bulk matrix in its essential properties are too thick for the approximation to hold. The presence of multiple scattering is indicated by the breakdown of the functional invariance of the observed scattering profile under variation of sample thickness and/or wavelength of the probing radiation. This article delineates how failure to account for multiple scattering affects the results of analysis, and how to correct the data for its effect. It presents an algorithm to extract the single scattering profile from small-angle scattering data affected by multiple scattering; the algorithm can process the scattering data and deduce the single scattering profile on an absolute scale. A software package, SIMSAS, is introduced for executing this inversion step. The package is useful both for simulating and for analysing multiple small-angle scattering data. (author)
International Nuclear Information System (INIS)
Pickrell, M.M.; Rinard, P.M.
1992-01-01
The 252Cf shuffler assays fissile uranium and plutonium by active neutron interrogation, counting the induced delayed neutrons. Using the shuffler, we conducted over 1700 assays of 55-gal. drums with 28 different matrices and several different fissionable materials. We measured the drums to determine the matrix and position effects on 252Cf shuffler assays. We used several neutron flux monitors during irradiation and kept statistics on the count rates of individual detector banks. The intent of these measurements was to gauge the effect of the matrix independently of the uranium assay. Although shufflers have previously been equipped with neutron monitors, the functional relationship between the flux monitor signals and the matrix-induced perturbation has been unknown. There are several flux monitors, so the problem is multivariate, and the response is complicated. Conventional regression techniques cannot address complicated multivariate problems unless the underlying functional form and approximate parameter values are known in advance; neither was available in this case. To address this problem, we used a new technique called alternating conditional expectations (ACE), which requires neither the functional relationship nor initial parameters. The ACE algorithm develops the functional form and performs a numerical regression from the empirical data alone. We applied the ACE algorithm to the shuffler-assay and flux-monitor data and developed an analytic function for the matrix correction. This function was optimized using conventional multivariate techniques. We were able to reduce the matrix-induced bias error for homogeneous samples to 12.7%; the bias error for inhomogeneous samples was reduced to 13.5%. These results used only a few adjustable parameters compared to the number of available data points; the data were not ''over fit,'' but rather the results are general and robust.
Orlov, Yu. V.; Irgaziev, B. F.; Nabi, Jameel-Un
2017-08-01
A new algorithm for calculating asymptotic nuclear coefficients, which we call the Δ method, is proved and developed. The method was proposed by Ramírez Suárez and Sparenberg (arXiv:1602.04082) but no proof was given. We apply it to a bound state situated near the channel threshold when the Sommerfeld parameter is quite large within the experimental energy region. In this case the value of the conventional effective-range function K_l(k^2) is actually dominated by the Coulomb term. One resulting effect is a wrong description of the energy behavior of the elastic scattering phase shift δ_l reproduced from the fitted total effective-range function K_l(k^2), which leads to an improper value of the asymptotic normalization coefficient (ANC). No such problem arises if we fit only the nuclear term. The difference between the total effective-range function and the Coulomb part at real energies equals the nuclear term, so we can proceed with this Δ method to calculate the pole positions and the ANC. We apply it to the vertices 4He+12C ↔ 16O and 3He+4He ↔ 7Be. The calculated ANCs can be used to find the radiative capture reaction cross sections for transfers to the 16O bound final states as well as to the 7Be ones.
International Nuclear Information System (INIS)
Kuttig, Jan; Steiding, Christian; Hupfer, Martin; Karolczak, Marek; Kolditz, Daniel
2015-01-01
In this study we compared various defect pixel correction methods for reducing artifact appearance within projection images used for computed tomography (CT) reconstructions. Defect pixel correction algorithms were examined with respect to their artifact behaviour within planar projection images as well as in volumetric CT reconstructions. We investigated four algorithms: nearest neighbour, linear and adaptive linear interpolation, and a frequency-selective spectral-domain approach. To characterise the quality of each algorithm in planar image data, we inserted line defects of varying widths and orientations into images. The structure preservation of each algorithm was analysed by corrupting and correcting the image of a slit phantom pattern and by evaluating its line spread function (LSF). The noise preservation was assessed by interpolating corrupted flat images and estimating the noise power spectrum (NPS) of the interpolated region. For the volumetric investigations, we examined the structure and noise preservation within a structured aluminium foam, a mid-contrast cone-beam phantom and a homogeneous polyurethane (PUR) cylinder. The frequency-selective algorithm showed the best structure and noise preservation in planar data of the correction methods tested. For volumetric data it still showed the best noise preservation, whereas its structure preservation was outperformed by the linear interpolation. The frequency-selective spectral-domain approach is recommended for the correction of line defects in planar image data, but its abilities within high-contrast volumes are restricted; in that case, a simple linear interpolation may be the better choice to correct line defects within projection images used for CT. (paper)
Directory of Open Access Journals (Sweden)
Arran Schlosberg
2014-05-01
Improvements in speed and cost of genome sequencing are resulting in increasing numbers of novel non-synonymous single nucleotide polymorphisms (nsSNPs) in genes known to be associated with disease. The large number of nsSNPs makes laboratory-based classification infeasible and familial co-segregation with disease is not always possible. In-silico methods for classification or triage are thus utilised. A popular tool based on multiple-species sequence alignments (MSAs) and work by Grantham, Align-GVGD, has been shown to underestimate deleterious effects, particularly as sequence numbers increase. We utilised the DEFLATE compression algorithm to account for expected variation across a number of species. With the adjusted Grantham measure we derived a means of quantitatively clustering known neutral and deleterious nsSNPs from the same gene; this was then used to assign novel variants to the most appropriate cluster as a means of binary classification. Scaling of clusters allows for inter-gene comparison of variants through a single pathogenicity score. The approach improves upon the classification accuracy of Align-GVGD while correcting for sensitivity to large MSAs. Open-source code and a web server are made available at https://github.com/aschlosberg/CompressGV.
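The role of DEFLATE here can be illustrated with a toy column-variability score: a conserved alignment column compresses far better than a variable one, so compressed size can stand in for expected variation across species. This sketch only illustrates that idea; it is not the CompressGV scoring scheme:

```python
import zlib

def column_variability(column):
    """Ratio of DEFLATE-compressed size to raw size for one MSA column
    (a list of single-letter residues, one per species)."""
    data = "".join(column).encode()
    return len(zlib.compress(data, 9)) / len(data)

conserved = ["A"] * 60                        # identical residue in all species
variable = list("ACDEFGHIKLMNPQRSTVWY") * 3   # highly variable column
```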
Magota, Keiichi; Shiga, Tohru; Asano, Yukari; Shinyama, Daiki; Ye, Jinghan; Perkins, Amy E; Maniawski, Piotr J; Toyonaga, Takuya; Kobayashi, Kentaro; Hirata, Kenji; Katoh, Chietsugu; Hattori, Naoya; Tamaki, Nagara
2017-12-01
In 3-dimensional PET/CT imaging of the brain with 15O-gas inhalation, high radioactivity in the face mask creates cold artifacts and affects the quantitative accuracy when scatter is corrected by conventional methods (e.g., single-scatter simulation [SSS] with tail-fitting scaling [TFS-SSS]). Here we examined the validity of a newly developed scatter-correction method that combines SSS with a scaling factor calculated by Monte Carlo simulation (MCS-SSS). Methods: We performed phantom experiments and patient studies. In the phantom experiments, a plastic bottle simulating a face mask was attached to a cylindric phantom simulating the brain. The cylindric phantom was filled with 18F-FDG solution (3.8-7.0 kBq/mL). The bottle was filled with nonradioactive air or various levels of 18F-FDG (0-170 kBq/mL). Images were corrected either by TFS-SSS or MCS-SSS using the CT data of the bottle filled with nonradioactive air. We compared the image activity concentration in the cylindric phantom with the true activity concentration. We also performed 15O-gas brain PET based on the steady-state method on patients with cerebrovascular disease to obtain quantitative images of cerebral blood flow and oxygen metabolism. Results: In the phantom experiments, a cold artifact was observed immediately next to the bottle on TFS-SSS images, where the image activity concentrations in the cylindric phantom were underestimated by 18%, 36%, and 70% at bottle radioactivity levels of 2.4, 5.1, and 9.7 kBq/mL, respectively. At higher bottle radioactivity, the image activity concentrations in the cylindric phantom were underestimated by more than 98%. For MCS-SSS, in contrast, the error was within 5% at each bottle radioactivity level, although the image showed slight high-activity artifacts around the bottle when the bottle contained significantly high radioactivity. In patient imaging with 15O2 and C15O2 inhalation, cold artifacts were observed on TFS-SSS images, whereas
Minet, Olaf; Scheibe, Patrick; Beuthan, Jürgen; Zabarylo, Urszula
2010-02-01
State-of-the-art image processing methods offer new possibilities for diagnosing diseases using scattered light. The optical diagnosis of rheumatism is taken as an example to show that diagnostic sensitivity can be improved using overlaid pseudo-coloured images of different wavelengths, provided that multispectral images are recorded to compensate for any motion-related artefacts occurring during examination.
International Nuclear Information System (INIS)
Hashimoto, Jun; Kubo, Atsushi; Ogawa, Koichi; Ichihara, Takashi; Motomura, Nobutoku; Takayama, Takuzo; Iwanaga, Shiro; Mitamura, Hideo; Ogawa, Satoshi
1998-01-01
A practical method for scatter and attenuation compensation was employed in thallium-201 myocardial single-photon emission tomography (SPET or ECT) using the triple-energy-window (TEW) technique and an iterative attenuation correction method based on a measured attenuation map. The map was reconstructed from technetium-99m transmission CT (TCT) data. A dual-headed SPET gamma camera system equipped with parallel-hole collimators was used for ECT/TCT data acquisition, and a new type of external source named the "sheet line source" was designed for TCT data acquisition. This sheet line source consists of a narrow, long fluoroplastic tube embedded in a rectangular acrylic board. After injection of 99mTc solution into the tube by an automatic injector, the board was attached in front of the collimator surface of one of the two detectors. After acquiring emission and transmission data separately or simultaneously, we eliminated scattered photons in the transmission and emission data with the TEW method and reconstructed both images. The effect of attenuation in the scatter-corrected ECT images was then compensated with Chang's iterative method using the measured attenuation maps. Our method was validated by several phantom studies and clinical cardiac studies. The method offered improved homogeneity in the distribution of myocardial activity and accurate measurements of myocardial tracer uptake. We conclude that the above correction method is feasible because the new type of 99mTc external source does not produce truncation in TCT images and is cost-effective and easy to prepare in clinical situations. (orig.)
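Chang's method, mentioned above, starts from a first-order correction in which each reconstructed pixel is divided by the attenuation factor averaged over all projection angles. A minimal sketch for a uniform circular attenuator (a simplification: the study used measured attenuation maps, not a constant mu) might look like:

```python
import numpy as np

def chang_first_order(image, mu, angles=64):
    """First-order Chang correction: divide each pixel inside a circular,
    uniformly attenuating support by the attenuation factor averaged over
    all projection angles."""
    n = image.shape[0]
    c = np.arange(n) - (n - 1) / 2.0
    x, y = np.meshgrid(c, c)
    radius = n / 2.0
    mean_att = np.zeros_like(image, dtype=float)
    for th in np.linspace(0.0, 2.0 * np.pi, angles, endpoint=False):
        proj = x * np.cos(th) + y * np.sin(th)      # along-ray coordinate
        perp = -x * np.sin(th) + y * np.cos(th)     # across-ray coordinate
        half = np.sqrt(np.maximum(radius**2 - perp**2, 0.0))
        path = np.clip(half - proj, 0.0, None)      # path length to the boundary
        mean_att += np.exp(-mu * path)
    mean_att /= angles
    return image / mean_att
```

The iterative form of the method re-projects the corrected image and refines the correction; the first-order factor above is its starting point.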
De Kesel, Pieter M M; Capiau, Sara; Stove, Veronique V; Lambert, Willy E; Stove, Christophe P
2014-10-01
Although dried blood spot (DBS) sampling is increasingly receiving interest as a potential alternative to traditional blood sampling, the impact of hematocrit (Hct) on DBS results is limiting its final breakthrough in routine bioanalysis. To predict the Hct of a given DBS, potassium (K(+)) proved to be a reliable marker. The aim of this study was to evaluate whether application of an algorithm, based upon predicted Hct or K(+) concentrations as such, allowed correction for the Hct bias. Using validated LC-MS/MS methods, caffeine, chosen as a model compound, was determined in whole blood and corresponding DBS samples with a broad Hct range (0.18-0.47). A reference subset (n = 50) was used to generate an algorithm based on K(+) concentrations in DBS. Application of the developed algorithm on an independent test set (n = 50) alleviated the assay bias, especially at lower Hct values. Before correction, differences between DBS and whole blood concentrations ranged from -29.1 to 21.1%. The mean difference, as obtained by Bland-Altman comparison, was -6.6% (95% confidence interval (CI), -9.7 to -3.4%). After application of the algorithm, differences between corrected and whole blood concentrations lay between -19.9 and 13.9% with a mean difference of -2.1% (95% CI, -4.5 to 0.3%). The same algorithm was applied to a separate compound, paraxanthine, which was determined in 103 samples (Hct range, 0.17-0.47), yielding similar results. In conclusion, a K(+)-based algorithm allows correction for the Hct bias in the quantitative analysis of caffeine and its metabolite paraxanthine.
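The two-step logic of the study above (predict Hct from potassium, then undo the Hct-dependent bias) can be sketched as below. Every numeric coefficient is a hypothetical placeholder, not the published calibration, and the linear bias model is an assumption:

```python
def correct_dbs_concentration(c_dbs, k_mmol_l,
                              hct_slope=0.035, hct_intercept=-0.03,
                              bias_slope=-0.65, ref_hct=0.36):
    """Predict the hematocrit of a DBS from its K+ concentration, then
    rescale the measured analyte concentration by the assumed linear
    Hct-dependent assay bias (all coefficients are illustrative)."""
    hct_pred = hct_slope * k_mmol_l + hct_intercept   # step 1: Hct from K+
    bias = bias_slope * (hct_pred - ref_hct)          # step 2: relative bias
    return c_dbs / (1.0 + bias)
```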
International Nuclear Information System (INIS)
Andrushevskii, N.M.; Shchedrin, B.M.; Simonov, V.I.
2004-01-01
New algorithms for solving the atomic structure of equivalent nanodimensional clusters of the same orientation randomly distributed over an initial single crystal (crystal matrix) are suggested. A cluster is a compact group of substitutional, interstitial or other atoms displaced from their positions in the crystal matrix. The structure is solved based on X-ray or neutron diffuse scattering data obtained from such objects. Using the mathematical apparatus of Fourier transforms of finite functions, it is shown that appropriate sampling of the intensities of continuous diffuse scattering allows one to synthesize multiperiodic difference Patterson functions that reveal the systems of interatomic vectors of an individual cluster. The suggested algorithms are tested on a model one-dimensional structure.
Energy Technology Data Exchange (ETDEWEB)
Krzyżanowska, A. [AGH-UST, Cracow; Deptuch, G. W. [Fermilab; Maj, P. [AGH-UST, Cracow; Gryboś, P. [AGH-UST, Cracow; Szczygieł, R. [AGH-UST, Cracow
2017-08-01
This paper presents the detailed characterization of a single photon counting chip, named CHASE Jr., built in a CMOS 40-nm process, operating with synchrotron radiation. The chip utilizes an on-chip implementation of the C8P1 algorithm. The algorithm eliminates charge-sharing-related uncertainties, namely the dependence of the number of registered photons on the discriminator threshold set for monochromatic irradiation, and errors in the assignment of an event to a given pixel. The article presents a short description of the algorithm as well as the architecture of the CHASE Jr. chip. The analog and digital functionalities allowing proper operation of the C8P1 algorithm are described, namely an offset correction for two discriminators independently, a two-stage gain correction, and different operation modes of the digital blocks. Results of tests of C8P1 operation are presented for the chip bump-bonded to a silicon sensor and exposed to a 3.5-μm-wide pencil beam of 8-keV synchrotron photons. We studied how sensitive the algorithm performance is to the chip settings, as well as to the uniformity of parameters of the analog front-end blocks. The presented results show that the C8P1 algorithm enables counting all photons hitting the detector between readout channels and retrieving the actual photon energy.
Energy Technology Data Exchange (ETDEWEB)
Semchishen, A V; Seminogov, V N; Semchishen, V A [Institute of Laser and Information Technologies, Russian Academy of Sciences, Troitsk, Moscow Region (Russian Federation)
2012-04-30
Forward scattering of light passing through large-scale irregularities of the interface between two media having different refractive indices is considered. An analytical expression for the ratio of the intensities of the directional and diffuse components of scattered light in the far-field zone is derived. It is shown theoretically that the critical depth of possible interface relief irregularities, starting from which the intensity of the diffuse component in the transmitted light flux becomes comparable with the directional component responsible for image formation on the retina, is 3-4 μm, taking into account the increase in the refractive index in the postoperative zone. These profile depth values agree with the experimentally measured ones and may affect the contrast sensitivity of vision.
International Nuclear Information System (INIS)
Vivanco, M.G. Bernui de; Cardenas R, A.
2006-01-01
Ocular brachytherapy, often the only alternative for preserving the eye in patients with ocular cancer, is performed at the National Institute of Neoplastic Diseases (INEN) using iridium-192 wires placed radially on the inner surface of a spherical 18-karat gold cap; the cap remains on the eye until the dose prescribed by the physician is reached. The main objective of this work is to calculate, correctly and practically, how long an ocular brachytherapy treatment should last to reach the prescribed dose. To this end the Sievert integral, corrected for attenuation and scattering effects (Meisberger polynomials), is evaluated by Simpson's method. The Sievert integral calculation takes into account neither the scattering produced by the gold cap nor the variation of the exposure rate constant with distance. The results are compared with those obtained using the PENELOPE Monte Carlo simulation code, and they agree at distances from the cap surface greater than or equal to 2 mm. (Author)
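The numerical core of that calculation, a Sievert integral evaluated by composite Simpson's rule, can be sketched as follows. This is a generic illustration, not the author's code; the filtration thickness is expressed in mean free paths:

```python
import math

def sievert_integral(mu_t, theta1, theta2, n=200):
    """Composite Simpson evaluation of
        S = integral from theta1 to theta2 of exp(-mu_t / cos(theta)) d(theta),
    the attenuation-corrected angular integral for a filtered line source."""
    if n % 2:
        n += 1  # Simpson's rule needs an even number of intervals
    h = (theta2 - theta1) / n
    total = 0.0
    for i in range(n + 1):
        theta = theta1 + i * h
        weight = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += weight * math.exp(-mu_t / math.cos(theta))
    return total * h / 3.0
```

The treatment time then follows from dividing the prescribed dose by the dose rate built from this integral and the source strength.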
International Nuclear Information System (INIS)
Marchand, D.
1998-11-01
This thesis presents the radiative corrections to virtual Compton scattering and the magnetic method adopted in Hall A at Jefferson Laboratory to measure the electron beam energy with an accuracy of 10^-4. Virtual Compton scattering experiments give access to the generalized polarizabilities of the proton. These polarizabilities are extracted by comparing experimental and theoretical cross sections, which is why the systematic errors and radiative effects of the experiments must be controlled very carefully. To this end, a complete calculation of the internal radiative corrections was carried out in the framework of quantum electrodynamics. The method of dimensional regularization was used to treat the ultraviolet and infrared divergences. The absolute energy measurement method relies on a magnetic deviation made up of eight identical dipoles; the energy is determined from the calculated deviation angle of the beam and the measurement of the magnetic field integral along the deviation.
Exact fan-beam and 4π-acquisition cone-beam SPECT algorithms with uniform attenuation correction
International Nuclear Information System (INIS)
Tang Qiulin; Zeng, Gengsheng L.; Wu Jiansheng; Gullberg, Grant T.
2005-01-01
This paper presents analytical fan-beam and cone-beam reconstruction algorithms that compensate for uniform attenuation in single photon emission computed tomography. First, a fan-beam algorithm is developed by obtaining a relationship between the two-dimensional (2D) Fourier transform of parallel-beam projections and fan-beam projections. Using this relationship, 2D Fourier transforms of equivalent parallel-beam projection data are obtained from the fan-beam projection data. Then a quasioptimal analytical reconstruction algorithm for uniformly attenuated Radon data, developed by Metz and Pan, is used to reconstruct the image. A cone-beam algorithm is developed by extending the fan-beam algorithm to a 4π solid-angle geometry. The cone-beam algorithm is also exact.
O(α^2 L^2) radiative corrections to deep inelastic ep scattering for different kinematical variables
International Nuclear Information System (INIS)
Bluemlein, J.
1994-03-01
The QED radiative corrections are calculated in the leading log approximation up to O(α²) for different definitions of the kinematical variables using jet measurement, the 'mixed' variables, the double angle method, and a measurement based on θ_e and y_JB. Higher order contributions due to exponentiation of soft radiation are included. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Fernandez, B [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1963-07-01
A calculation of double-scattering and absorption corrections in fast-neutron scattering experiments using the Monte Carlo method is given. An application to a cylindrical target is presented in FORTRAN symbolic language. (author)
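The Monte Carlo approach can be illustrated in miniature. The sketch below is a hypothetical stand-in for the report's FORTRAN code: it estimates only the first-flight transmission of a parallel neutron beam through a cylindrical target by sampling chord lengths, leaving out the double-scattering term itself. The radius, macroscopic cross section and sample count are invented.

```python
import math
import random

def transmission(radius_cm, sigma_cm1, n_samples=100_000, seed=1):
    """Mean first-flight transmission of a broad parallel beam through
    a solid cylinder (beam perpendicular to the cylinder axis).
    The chord length at impact parameter b is 2*sqrt(R^2 - b^2), and
    each history is attenuated by exp(-Sigma * chord)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        b = rng.uniform(-radius_cm, radius_cm)        # impact parameter
        chord = 2.0 * math.sqrt(radius_cm**2 - b**2)  # path through target
        total += math.exp(-sigma_cm1 * chord)
    return total / n_samples

# Sigma = 0 means no attenuation, so the transmission is exactly 1
print(transmission(1.0, 0.0))   # -> 1.0
print(transmission(1.0, 0.5))   # below 1, Monte Carlo estimate
```

A full double-scattering correction would additionally sample a scattering point inside the target and transport the scattered neutron to the detector; the single-flight term above is just the simplest ingredient.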
International Nuclear Information System (INIS)
Pereira, Marcelo O.; Anjos, Marcelino J.; Lopes, Ricardo T.
2009-01-01
Non-destructive X-ray techniques, such as tomography, radiography and X-ray fluorescence, are sensitive to the attenuation coefficient and have a large field of applications in the medical as well as the industrial area. In the case of X-ray fluorescence analysis, knowledge of the photon X-ray attenuation coefficients provides important information for obtaining the elemental concentration. The mass attenuation coefficient values are usually determined by transmission methods; the use of X-ray scattering can be considered an alternative to them. This work proposes a new method for obtaining the X-ray absorption curve through the superposition of the Rayleigh and Compton scattering peaks of the Lα and Lβ lines of tungsten (the L lines of an X-ray tube with a W anode). The absorption curve was obtained using standard samples with effective atomic number in the range from 6 to 16. The method was applied to certified samples of bovine liver (NIST 1577B), milk powder and V-10. The experimental measurements were obtained using the portable EDXRF system of the Nuclear Instrumentation Laboratory (LIN-COPPE/UFRJ) with a tungsten (W) anode. (author)
Wuhrer, R.; Moran, K.
2014-03-01
Quantitative X-ray mapping with silicon drift detectors and multi-EDS detector systems has become an invaluable analysis technique and one of the most useful methods of X-ray microanalysis today. The time to perform an X-ray map has reduced considerably, and minor and trace elements can be mapped very accurately thanks to the larger detector area and higher count rate detectors. Live X-ray imaging can now be performed, with a significant amount of data collected in a matter of minutes. A great deal of information can be obtained from X-ray maps. This includes elemental relationship or scatter diagram creation, elemental ratio mapping, chemical phase mapping (CPM) and quantitative X-ray maps. In obtaining quantitative X-ray maps, we are able to easily generate atomic number (Z), absorption (A), fluorescence (F), theoretical back-scatter coefficient (η), and quantitative total maps from each pixel in the image. This allows us to generate an image corresponding to each factor (for each element present). These images allow users to predict and verify where they are likely to have problems in their images, and are especially helpful for looking at possible interface artefacts. The post-processing techniques to improve the quantitation of X-ray map data and the development of post-processing techniques for improved characterisation are covered in this paper.
Conti, C. C.; Anjos, M. J.; Salgado, C. M.
2014-09-01
The X-ray fluorescence technique plays an important role in nondestructive analysis nowadays. The development of equipment, including portable devices, enables a wide assortment of possibilities for the analysis of stable elements, even in trace concentrations. Despite these advantages, one important drawback is radiation self-attenuation in the sample being measured, which needs to be considered for the proper determination of elemental concentration. The mass attenuation coefficient can be determined by transmission measurement, but in that case the sample must be in slab geometry, and two different setups and measurements are required. The Rayleigh to Compton scattering ratio, determined from the X-ray fluorescence spectrum, provides a link to the mass attenuation coefficient by means of a polynomial-type equation. This work presents a way to construct a Rayleigh to Compton scattering ratio versus mass attenuation coefficient curve by using the MCNP5 Monte Carlo computer code. The calculated and literature values of the mass attenuation coefficient for some known samples agreed to within 15%. This calculation procedure is available on-line at www.macx.net.br.
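The polynomial link between the Rayleigh-to-Compton ratio and the mass attenuation coefficient amounts to a calibration-curve fit. The sketch below shows the idea only; the ratio and μ/ρ values are invented placeholders standing in for measurements on standards, not the paper's MCNP5-derived curve.

```python
import numpy as np

# Hypothetical calibration points: (Rayleigh/Compton ratio, mass
# attenuation coefficient in cm^2/g) for standards spanning a range of
# effective atomic numbers. Illustrative values, not the paper's data.
rc_ratio = np.array([0.05, 0.10, 0.18, 0.30, 0.45, 0.65])
mu_rho   = np.array([0.18, 0.22, 0.28, 0.38, 0.52, 0.75])

# Polynomial-type equation linking the scattering ratio to mu/rho
coeffs = np.polyfit(rc_ratio, mu_rho, deg=2)

def mu_from_ratio(r):
    """Estimate mass attenuation coefficient from an R/C ratio."""
    return float(np.polyval(coeffs, r))

# Evaluate for an 'unknown' sample inside the calibration range
print(round(mu_from_ratio(0.30), 3))
```

Once fitted, the same polynomial can be applied to any sample whose Rayleigh and Compton peak areas are extracted from its fluorescence spectrum, provided its ratio falls within the calibrated range.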
International Nuclear Information System (INIS)
Blanco, F.; Rosado, J.; Illana, A.; Garcia, G.
2010-01-01
The SCAR and EGAR procedures have been proposed to extend the applicability of the additivity rule for calculating electron-molecule total cross sections to lower energies. Both approximate treatments arise from considering geometrical screening corrections due to the partial overlapping of atoms in the molecule, as seen by the incident electrons. The main features, results and limitations of the two treatments are compared here through their application to several species of different sizes.
Schowalter, M; Müller, K; Rosenauer, A
2012-01-01
Modified atomic scattering amplitudes (MASAs), taking into account the redistribution of charge due to bonds, and the respective correction factors considering the effect of static atomic displacements were computed for the chemically sensitive 002 reflection for ternary III-V and II-VI semiconductors. MASAs were derived from computations within the density functional theory formalism. Binary eight-atom unit cells were strained according to each strain state s (thin, intermediate, thick and fully relaxed electron microscopic specimen) and each concentration (x = 0, …, 1 in 0.01 steps), where the lattice parameters for composition x in strain state s were calculated using continuum elasticity theory. The concentration dependence was derived by computing MASAs for each of these binary cells. Correction factors for static atomic displacements were computed from relaxed atom positions by generating 50 × 50 × 50 supercells using the lattice parameter of the eight-atom unit cells. Atoms were randomly distributed according to the required composition. Polynomials were fitted to the composition dependence of the MASAs and the correction factors for the different strain states. Fit parameters are given in the paper.
Optimization-based scatter estimation using primary modulation for computed tomography
Energy Technology Data Exchange (ETDEWEB)
Chen, Yi; Ma, Jingchen; Zhao, Jun, E-mail: junzhao@sjtu.edu.cn [School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China); Song, Ying [Department of Radiation Oncology, West China Hospital, Sichuan University, Chengdu 610041 (China)
2016-08-15
Purpose: Scatter reduces the image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method simultaneously obtains the primary and scatter in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In the concept of primary modulation, the primary is modulated, but the scatter remains smooth by inserting a modulator between the x-ray source and the object. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. Simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of the CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy in scatter estimation and correction. This method is promising for scatter correction of various kinds of x-ray imaging modalities, such as x-ray radiography, cone beam CT, and the fourth-generation CT.
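The underdetermined nature of the scatter-primary separation, and why priors make it tractable, can be seen in a 1-D toy version of primary modulation. This is an illustrative demodulation sketch, not the paper's OSE algorithm; all signals and the modulator pattern are invented.

```python
import numpy as np

# Toy 1-D primary-modulation model: the measurement is m = c*p + s,
# where c alternates between two transmission values (the modulator),
# p is the locally smooth primary and s is the smooth scatter.
n = 64
x = np.linspace(0, 1, n)
primary = 1.0 + 0.3 * np.sin(6 * x)            # locally smooth primary
scatter = 0.5 + 0.2 * x                        # smooth scatter background
c = np.where(np.arange(n) % 2 == 0, 1.0, 0.7)  # modulator pattern
measured = c * primary + scatter

# Assume p and s are locally constant over each pixel pair. Each pair
# (m1, m2) with modulator values (1.0, 0.7) then gives two equations:
#   m1 = 1.0*p + s,  m2 = 0.7*p + s  ->  p = (m1 - m2)/0.3, s = m1 - p
m1, m2 = measured[0::2], measured[1::2]
p_est = (m1 - m2) / 0.3
s_est = m1 - p_est

# Residual comes only from the locally-constant assumption
print(float(np.max(np.abs(s_est - scatter[0::2]))))
```

Without the smoothness assumptions, each pixel contributes one equation in two unknowns and the split is hopeless; the paper's optimization framework encodes the same kind of prior knowledge (positivity, smoothness, penumbra location) as penalty terms instead of the hard pairing used here.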
Energy Technology Data Exchange (ETDEWEB)
Ferguson, S; Ahmad, S; Chen, Y; Ferreira, C; Islam, M; Lau, A; Jin, H [University of Oklahoma Health Sciences Center, Oklahoma City, OK (United States); Keeling, V [Carti, Inc., Little Rock, AR (United States)
2016-06-15
Purpose: To commission and investigate the accuracy of an output (cGy/MU) prediction model for a compact passively scattered proton therapy system. Methods: A previously published output prediction model (Sahoo et al, Med Phys, 35, 5088–5097, 2008) was commissioned for our Mevion S250 proton therapy system. This is a correction-based model that multiplies correction factors (D/MU_wnc = ROF × SOBPF × RSF × SOBPOCF × OCR × FSF × ISF). These factors account for changes in output due to options (12 large, 5 deep, and 7 small), modulation width M, range R, off-center, off-axis, field-size, and off-isocenter effects. In this study, the model was modified to ROF × SOBPF × RSF × OCR × FSF × ISF-OCF × GACF by merging SOBPOCF and ISF for simplicity and introducing a gantry angle correction factor (GACF). To commission the model, over 1,000 output data points were taken at the time of system commissioning. The output was predicted by interpolation (1D for SOBPF, FSF, and GACF; 2D for RSF and OCR) with an inverse-square calculation (ISF-OCR). The outputs of 273 combinations of R and M covering all 24 options were measured to test the model. To minimize fluence perturbation, scattered dose from the range compensator and patient was not considered. The percent differences between the predicted (P) and measured (M) outputs were calculated to test the prediction accuracy ([P-M]/M×100%). Results: GACF was required because of up to 3.5% output variation dependence on the gantry angle. A 2D interpolation was required for OCR because the dose distribution was not radially symmetric, especially for the deep options. The average percent difference was −0.03±0.98% (mean±SD) and the differences of all the measurements fell within ±3%. Conclusion: The model can be clinically used for the compact passively scattered proton therapy system. However, great care should be taken when the field size is less than 5×5 cm², where a direct output measurement is required due to substantial
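The structure of such a correction-based model, a product of factors each looked up by interpolation in commissioning tables, can be sketched briefly. The factor names follow the abstract, but the table values below are invented placeholders, not clinical data.

```python
import numpy as np

# Commissioning lookup tables (invented numbers for illustration only)
sobp_widths  = np.array([2.0, 4.0, 6.0, 8.0, 10.0])    # modulation M (cm)
sobp_factors = np.array([1.10, 1.04, 1.00, 0.97, 0.95])

ranges        = np.array([5.0, 10.0, 15.0, 20.0, 25.0])  # range R (cm)
range_factors = np.array([1.05, 1.00, 0.96, 0.93, 0.91])

def predicted_output(rof, modulation_cm, range_cm, isf=1.0):
    """Output (cGy/MU) as a product of correction factors,
    D/MU = ROF * SOBPF(M) * RSF(R) * ISF, each factor obtained by
    1-D interpolation in its commissioning table."""
    sobpf = np.interp(modulation_cm, sobp_widths, sobp_factors)
    rsf = np.interp(range_cm, ranges, range_factors)
    return rof * sobpf * rsf * isf

# At table nodes where both factors are 1.00, the product is the ROF
print(round(predicted_output(1.0, 6.0, 10.0), 3))  # -> 1.0
```

The full model adds more factors of the same shape (OCR via 2-D interpolation, FSF, GACF, and an inverse-square term), but each enters the product in exactly this way.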
Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph
2016-02-26
Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.
International Nuclear Information System (INIS)
Konstantinidis, Anastasios C.; Olivo, Alessandro; Speller, Robert D.
2011-01-01
Purpose: The x-ray performance evaluation of digital x-ray detectors is based on the calculation of the modulation transfer function (MTF), the noise power spectrum (NPS), and the resultant detective quantum efficiency (DQE). The flat images used for the extraction of the NPS should not contain any fixed-pattern noise (FPN), to avoid contamination from nonstochastic processes. The "gold standard" method used for the reduction of FPN (i.e., the different gain between pixels) in linear x-ray detectors is based on normalization with an average reference flat-field. However, the noise in the corrected image depends on the number of flat frames used to form the average flat image. The aim of this study is to modify the standard gain correction algorithm to make it independent of the number of reference flat frames used. Methods: Many publications suggest the use of 10-16 reference flat frames, while other studies use higher numbers (e.g., 48 frames) to reduce the noise propagated from the average flat image. This study quantifies experimentally the effect of the number of reference flat frames on the NPS and DQE values and modifies the gain correction algorithm appropriately to compensate for this effect. Results: It is shown that, using the suggested gain correction algorithm, a minimum number of reference flat frames (down to a single frame) can be used to eliminate the FPN from the raw flat image. This saves computer memory and time during x-ray performance evaluation. Conclusions: The authors show that the method presented in the study (a) leads to the maximum DQE value that one would obtain with the conventional method and a very large number of frames, and (b) gives DQE values identical to those of an independent gain correction method based on the subtraction of flat-field images. They believe this provides robust validation of the proposed method.
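The conventional flat-field normalization that the study builds on can be sketched in a few lines. This shows only the standard step (divide by the pixelwise mean of N reference flats), not the authors' modified algorithm; the gain map, frame count and noise model are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed-pattern noise: each pixel has a slightly different gain
gain = 1.0 + 0.1 * rng.standard_normal((8, 8))

# N reference flat frames with Poisson counting noise on top of the gain
flats = gain * rng.poisson(1000, size=(16, 8, 8)) / 1000.0

# A noise-free flat exposure: the raw image is just the gain pattern
raw = gain * 1.0

# Standard gain correction: normalize by the average reference flat,
# rescaled to preserve the mean signal level
mean_flat = flats.mean(axis=0)
corrected = raw / mean_flat * mean_flat.mean()

# The fixed pattern is largely removed: pixel-to-pixel spread shrinks
print(raw.std() > corrected.std())
```

The residual spread in `corrected` is exactly the noise propagated from the 16-frame average flat, which is the quantity the paper's modified algorithm is designed to make irrelevant.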
Directory of Open Access Journals (Sweden)
V. S. Kudryashov
2016-01-01
The article is devoted to the development of a correction control algorithm for the temperature mode of a batch rubber-mixing process at JSC "Voronezh Tire Plant". The algorithm is designed to run on the main controller of the rubber-mixing section, a Siemens S7 CPU319F-3 PN/DP, which forms setpoints for the local temperature controllers (HESCH HE086 and Jumo dTRON304) operating the tempering stations. To develop the algorithm, a systematic analysis of the rubber-mixing process as a control object was performed, and a mathematical model of the process was built from heat-balance equations describing heat transfer through the walls of the process equipment, the change of coolant temperature, and the temperature of the rubber compound during mixing until discharge from the mixer chamber. Owing to the complexity and nonlinearity of the control object (the rubber mixer), and drawing on existing methods and extensive experience of controlling this device in an industrial environment, the correction algorithm is implemented on the basis of a single-layer artificial neural network; it corrects the setpoints of the local controllers for the cooling-water temperature and the air temperature in the workshop, which can vary considerably with the season of the year and during prolonged operation or downtime of the equipment. The tempering stations are controlled by changing the flow of cold water from the cooler and by on/off control of the heating elements. Analysis of the model experiment results and practical tests, with the main controller programmed in the STEP 7 environment at the enterprise, showed a reduction in mixing time for different types of rubber owing to a reduced control error in the heat-transfer process.
International Nuclear Information System (INIS)
Matsubara, Keisuke; Ibaraki, Masanobu; Nakamura, Kazuhiro; Yamaguchi, Hiroshi; Umetsu, Atsushi; Kinoshita, Fumiko; Kinoshita, Toshibumi
2013-01-01
Subject head motion during sequential ¹⁵O positron emission tomography (PET) scans can result in artifacts in cerebral blood flow (CBF) and oxygen metabolism maps. However, to our knowledge, there are no systematic studies examining this issue. Herein, we investigated the effect of head motion on quantification of CBF and oxygen metabolism, and proposed an image-based motion correction method dedicated to ¹⁵O PET study, correcting for transmission-emission mismatch and inter-scan mismatch of emission scans. We analyzed ¹⁵O PET data for patients with major arterial steno-occlusive disease (n=130) to determine the occurrence frequency of head motion during ¹⁵O PET examination. Image-based motion correction without and with realignment between transmission and emission scans, termed the simple and 2-step method, respectively, was applied to the cases that showed severe inter-scan motion. Severe inter-scan motion (>3 mm translation or >5° rotation) was observed in 27 of 520 adjacent scan pairs (5.2%). In these cases, unrealistic values of oxygen extraction fraction (OEF) or cerebrovascular reactivity (CVR) were observed without motion correction. Motion correction eliminated these artifacts. The volume-of-interest (VOI) analysis demonstrated that the motion correction changed the OEF in the middle cerebral artery territory by 17.3% at maximum. The inter-scan motion also affected cerebral blood volume (CBV), cerebral metabolic rate of oxygen (CMRO₂) and CBF, which were improved by the motion correction. A difference in VOI values between the simple and 2-step methods was also observed. These data suggest that image-based motion correction is useful for accurate measurement of CBF and oxygen metabolism by ¹⁵O PET. (author)
Two-loop master integrals for the mixed EW-QCD virtual corrections to Drell-Yan scattering
Energy Technology Data Exchange (ETDEWEB)
Bonciani, Roberto ['La Sapienza' Univ., Rome (Italy). Dipt. di Fisica; INFN Sezione di Roma (Italy)]; Di Vita, Stefano [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)]; Mastrolia, Pierpaolo [Max-Planck-Institut fuer Physik, Muenchen (Germany); Padova Univ. (Italy). Dipt. di Fisica e Astronomia; INFN Sezione di Padova (Italy)]; Schubert, Ulrich [Max-Planck-Institut fuer Physik, Muenchen (Germany)]
2016-04-15
We present the calculation of the master integrals needed for the two-loop QCD×EW corrections to q + q̄ → l⁻ + l⁺ and q + q̄′ → l⁻ + ν̄, for massless external particles. We treat the W and Z bosons as degenerate in mass. We identify three types of diagrams, according to the presence of massive internal lines: the no-mass type, the one-mass type, and the two-mass type, where all massive propagators, when occurring, contain the same mass value. We find a basis of 49 master integrals and evaluate them with the method of differential equations. The Magnus exponential is employed to choose a set of master integrals that obeys a canonical system of differential equations. Boundary conditions are found either by matching the solutions onto simpler integrals in special kinematic configurations, or by requiring the regularity of the solution at pseudo-thresholds. The canonical master integrals are finally given as Taylor series around d=4 space-time dimensions, up to order four, with coefficients given in terms of iterated integrals, respectively up to weight four.
Directory of Open Access Journals (Sweden)
Mehrdad Mohammadpour
2013-10-01
Purpose: To assess the safety, efficacy and predictability of photorefractive keratectomy (PRK) [tissue-saving (TS) versus plano-scan (PS) ablation algorithms] with the Technolas 217z excimer laser for correction of myopic astigmatism. Methods: In this retrospective study, 170 eyes of 85 patients were included: 107 eyes (62.9%) treated with the PS algorithm and 63 eyes (37.1%) with the TS algorithm. The TS algorithm was applied for eyes with central corneal thickness less than 500 µm or estimated residual stromal thickness less than 420 µm. Mitomycin C (MMC) was applied for 120 eyes (70.6%), in case of an ablation depth more than 60 µm and/or astigmatic correction more than one diopter (D). Mean sphere, cylinder, spherical equivalent (SE) refraction, uncorrected visual acuity (UCVA) and best corrected visual acuity (BCVA) were measured preoperatively and at 4, 12 and 24 weeks postoperatively. Results: One, three and six months postoperatively, 60%, 92.9% and 97.5% of eyes had UCVA of 20/20 or better, respectively. Mean preoperative and 1-, 3- and 6-month postoperative SE were -3.48±1.28 D (-1.00 to -8.75), -0.08±0.62 D, -0.02±0.57 D and -0.004±0.29 D, respectively. Also, 87.6%, 94.1% and 100% of eyes were within ±1.0 D of emmetropia, and 68.2%, 75.3% and 95% were within ±0.5 D. The safety and efficacy indices were 0.99 and 0.99 at 12 weeks and 1.009 and 0.99 at 24 weeks, respectively. There was no clinically or statistically significant difference between the outcomes of the PS and TS algorithms, or between eyes with or without MMC in either group, in terms of safety, efficacy, predictability or stability. Dividing the eyes into those with subjective SE≤4 D and SE≥4 D postoperatively, there was no significant difference in predictability between the two groups. There were no intra- or postoperative complications. Conclusion: Outcomes of PRK for correction of myopic astigmatism showed great promise with both the PS and TS algorithms.
Bucur, Doina
2017-01-01
A genetic algorithm with stochastic macro mutation operators which merge, split, move, reverse and align DNA contigs on a scaffold is shown to accurately and consistently assemble raw DNA reads from an accurately sequenced single-read library into a contiguous genome. A candidate solution is a
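The macro-mutation operators named above can be made concrete on a toy representation. The sketch below is hypothetical, not the paper's implementation: a scaffold is modelled as a list of contigs, each contig a list of read identifiers, and each operator rearranges that structure in place.

```python
# Illustrative macro-mutation operators for a genome-scaffolding GA.
# A scaffold is a list of contigs; a contig is a list of read ids.

def reverse_contig(scaffold, i):
    """Reverse the orientation of contig i."""
    scaffold[i] = scaffold[i][::-1]

def merge_contigs(scaffold, i, j):
    """Append contig j onto contig i and remove j (i < j assumed)."""
    scaffold[i] = scaffold[i] + scaffold[j]
    del scaffold[j]

def split_contig(scaffold, i, at):
    """Split contig i into two contigs at read index `at`."""
    left, right = scaffold[i][:at], scaffold[i][at:]
    scaffold[i] = left
    scaffold.insert(i + 1, right)

def move_contig(scaffold, i, j):
    """Move contig i to position j on the scaffold."""
    scaffold.insert(j, scaffold.pop(i))

s = [["r1", "r2"], ["r3"], ["r4", "r5"]]
merge_contigs(s, 0, 1)
reverse_contig(s, 1)
print(s)  # -> [['r1', 'r2', 'r3'], ['r5', 'r4']]
```

In a full GA these operators would be applied stochastically to candidate scaffolds, with a fitness function scoring read-overlap consistency; an `align` operator would additionally shift contigs against the read overlaps.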
Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M
2010-03-15
A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on the use of a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user interaction, thus considerably facilitating the application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks. Copyright (c) 2009 Elsevier B.V. All rights reserved.
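The point-to-point matching step can be sketched simply: for each sample spectrum, pick from the reference set the spectrum closest over a chosen spectral window, and subtract it as the background. The spectra, window and eluent fractions below are invented for illustration and do not reproduce the paper's data.

```python
import numpy as np

def pick_reference(sample, references, window):
    """Index of the reference spectrum closest to `sample` over
    `window`, by summed squared point-to-point differences."""
    diffs = [np.sum((sample[window] - ref[window]) ** 2)
             for ref in references]
    return int(np.argmin(diffs))

wavenumbers = np.linspace(900, 1900, 200)

# Reference spectra recorded at increasing (hypothetical) acetonitrile
# fractions: a composition-dependent offset plus a common band shape
references = [0.2 * f + 0.01 * np.sin(wavenumbers / 50)
              for f in (0.35, 0.55, 0.85)]

window = slice(0, 50)   # eluent-dominated region used for matching

# A sample eluting near the middle gradient composition, with a small
# analyte contribution on top of the eluent background
sample = references[1] + 0.002

idx = pick_reference(sample, references, window)
corrected = sample - references[idx]   # background-corrected spectrum
print(idx)  # -> 1
```

The matching window matters: it should cover a region dominated by the eluent, so that analyte bands do not bias the choice of reference spectrum.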
Stamnes, Knut; Tsay, S.-CHEE; Jayaweera, Kolf; Wiscombe, Warren
1988-01-01
The transfer of monochromatic radiation in a scattering, absorbing, and emitting plane-parallel medium with a specified bidirectional reflectivity at the lower boundary is considered. The equations and boundary conditions are summarized. The numerical implementation of the theory is discussed with attention given to the reliable and efficient computation of eigenvalues and eigenvectors. Ways of avoiding fatal overflows and ill-conditioning in the matrix inversion needed to determine the integration constants are also presented.
International Nuclear Information System (INIS)
Pirotta, M.; Aquilina, D.; Bhikha, T.; Georg, D.
2005-01-01
The ESTRO formalism for monitor unit (MU) calculations was evaluated and implemented to replace a previous methodology based on dosimetric data measured in a full-scatter phantom. This traditional method relies on data normalised at the depth of dose maximum (z_m), as well as on the BJR 25 table for the conversion of rectangular fields into equivalent square fields. The treatment planning system (TPS) was subsequently updated to reflect the new beam data normalised at a depth z_R of 10 cm. Comparisons were then carried out between the ESTRO formalism, the Clarkson-based dose calculation algorithm on the TPS (with beam data normalised at z_m and z_R), and the traditional "full-scatter" methodology. All methodologies except the "full-scatter" one separated head-scatter from phantom-scatter effects, and none of them except the ESTRO formalism utilised wedge depth-dose information for calculations. The accuracy of MU calculations was verified against measurements in a homogeneous phantom for square and rectangular open and wedged fields, as well as blocked open and wedged fields, at 5, 10, and 20 cm depths, under fixed-SSD and isocentric geometries, for 6 and 10 MV. Overall, the ESTRO formalism showed the most accurate performance, with the root mean square (RMS) error with respect to measurements remaining below 1% even for the most complex beam set-ups investigated. The RMS error for the TPS deteriorated with the introduction of a wedge, with a worse RMS error for beam data normalised at z_m (4% at 6 MV and 1.6% at 10 MV) than at z_R (1.9% at 6 MV and 1.1% at 10 MV). The further addition of blocking had only a marginal impact on the accuracy of this methodology. The "full-scatter" methodology lost accuracy for calculations involving either wedges or blocking, and performed worst for blocked wedged fields (RMS errors of 7.1% at 6 MV and 5% at 10 MV). The origins of these discrepancies were
Energy Technology Data Exchange (ETDEWEB)
Bilbao, Aivett; Gibbons, Bryson C.; Slysz, Gordon W.; Crowell, Kevin L.; Monroe, Matthew E.; Ibrahim, Yehia M.; Smith, Richard D.; Payne, Samuel H.; Baker, Erin S.
2018-04-01
The mass accuracy and peak intensity of ions detected by mass spectrometry (MS) measurements are essential to facilitate compound identification and quantitation. However, high concentration species can easily cause problems if their ion intensities reach beyond the limits of the detection system, leading to distorted and non-ideal detector response (e.g. saturation), and largely precluding the calculation of accurate m/z and intensity values. Here we present an open source computational method to correct peaks above a defined intensity (saturated) threshold determined by the MS instrumentation such as the analog-to-digital converters or time-to-digital converters used in conjunction with time-of-flight MS. In this method, the isotopic envelope for each observed ion above the saturation threshold is compared to its expected theoretical isotopic distribution. The most intense isotopic peak for which saturation does not occur is then utilized to re-calculate the precursor m/z and correct the intensity, resulting in both higher mass accuracy and greater dynamic range. The benefits of this approach were evaluated with proteomic and lipidomic datasets of varying complexities. After correcting the high concentration species, reduced mass errors and enhanced dynamic range were observed for both simple and complex omic samples. Specifically, the mass error dropped by more than 50% in most cases with highly saturated species and dynamic range increased by 1-2 orders of magnitude for peptides in a blood serum sample.
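The correction idea, re-estimating a clipped peak from the most intense unsaturated isotopic peak and the theoretical isotopic ratios, can be sketched in a few lines. The saturation threshold, intensities and abundances below are invented, and this is a simplified stand-in for the published open-source method.

```python
import numpy as np

SATURATION = 10_000.0   # hypothetical detector saturation threshold

def correct_saturated(observed, theoretical):
    """observed: measured isotopic-peak intensities (may be clipped).
    theoretical: expected relative abundances (same length).
    Returns intensities with saturated peaks re-estimated from the
    most intense unsaturated peak and the theoretical ratios."""
    observed = np.asarray(observed, dtype=float)
    theoretical = np.asarray(theoretical, dtype=float)
    ok = observed < SATURATION               # unsaturated peaks
    if ok.all():
        return observed
    # anchor on the most intense isotopic peak that is not saturated
    anchor = int(np.argmax(np.where(ok, observed, -np.inf)))
    scale = observed[anchor] / theoretical[anchor]
    corrected = observed.copy()
    corrected[~ok] = theoretical[~ok] * scale
    return corrected

obs = [10_000.0, 6_000.0, 1_500.0]   # monoisotopic peak clipped
theo = [1.00, 0.50, 0.125]           # expected relative abundances
print(correct_saturated(obs, theo))  # -> [12000.  6000.  1500.]
```

Here the clipped monoisotopic peak is restored to 12000 (twice the unsaturated A+1 peak, per the 1.00:0.50 theoretical ratio), which is exactly the kind of dynamic-range recovery the abstract describes; the published method additionally re-computes the precursor m/z from the anchor peak.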
Energy Technology Data Exchange (ETDEWEB)
Malhotra, M. [Stanford Univ., CA (United States)
1996-12-31
Finite-element discretizations of time-harmonic acoustic wave problems in exterior domains result in large sparse systems of linear equations with complex symmetric coefficient matrices. In many situations, these matrix problems need to be solved repeatedly for different right-hand sides, but with the same coefficient matrix. For instance, multiple right-hand sides arise in radiation problems due to multiple load cases, and also in scattering problems when multiple angles of incidence of an incoming plane wave need to be considered. In this talk, we discuss the iterative solution of multiple linear systems arising in radiation and scattering problems in structural acoustics by means of a complex symmetric variant of the BL-QMR method. First, we summarize the governing partial differential equations for time-harmonic structural acoustics, the finite-element discretization of these equations, and the resulting complex symmetric matrix problem. Next, we sketch the special version of BL-QMR method that exploits complex symmetry, and we describe the preconditioners we have used in conjunction with BL-QMR. Finally, we report some typical results of our extensive numerical tests to illustrate the typical convergence behavior of BL-QMR method for multiple radiation and scattering problems in structural acoustics, to identify appropriate preconditioners for these problems, and to demonstrate the importance of deflation in block Krylov-subspace methods. Our numerical results show that the multiple systems arising in structural acoustics can be solved very efficiently with the preconditioned BL-QMR method. In fact, for multiple systems with up to 40 and more different right-hand sides we get consistent and significant speed-ups over solving the systems individually.
Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun
2017-09-19
In order to reduce the computational complexity and improve the pitch/roll estimation accuracy of a low-cost attitude heading reference system (AHRS) under conditions of magnetic distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm combines two-step geometrically-intuitive correction (TGIC) with a Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimate of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of the measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable to attitude estimation under various dynamic conditions.
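The benefit of feeding the filter a pre-computed quaternion "measurement" is that the measurement matrix stays the identity, so the update is purely linear and needs no linearization. A minimal sketch with illustrative numbers, making no claim to match the paper's filter design:

```python
import numpy as np

def kf_update(x, P, z, R):
    """Linear Kalman measurement update with an identity measurement matrix:
    the state x is a quaternion and z is a quaternion 'measurement' computed
    outside the filter (as a TGIC-like step would provide), so H = I and the
    measurement equation needs no linearization."""
    S = P + R                        # innovation covariance (H = I)
    K = P @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - x)
    P_new = (np.eye(len(x)) - K) @ P
    # Re-normalize: a quaternion state must stay on the unit sphere.
    return x_new / np.linalg.norm(x_new), P_new

x = np.array([1.0, 0.0, 0.0, 0.0])         # prior attitude quaternion
P = 0.1 * np.eye(4)                         # prior covariance (illustrative)
z = np.array([0.999, 0.02, 0.01, 0.0])      # quaternion from the TGIC-like step
z /= np.linalg.norm(z)
x_post, P_post = kf_update(x, P, z, 0.05 * np.eye(4))
assert abs(np.linalg.norm(x_post) - 1.0) < 1e-12
assert np.trace(P_post) < np.trace(P)       # update reduces uncertainty
```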
International Nuclear Information System (INIS)
Arbuzov, A.; Kalinovskaya, L.; Bardin, D.; Deutsches Elektronen-Synchrotron; Bluemlein, J.; Riemann, T.
1995-11-01
A description of the Fortran program HECTOR for a variety of semi-analytical calculations of radiative QED, QCD, and electroweak corrections to the double-differential cross sections of NC and CC deep inelastic charged lepton-proton (or lepton-deuteron) scattering is presented. HECTOR originates from the substantially improved and extended earlier programs HELIOS and TERAD91. It is mainly intended for applications at HERA or LEP x LHC, but may also be used for μN scattering in fixed-target experiments. The QED corrections may be calculated in different sets of variables: leptonic, hadronic, mixed, Jacquet-Blondel, double angle, etc. Besides the leading logarithmic approximation up to order O(α²), exact O(α) corrections and inclusive soft photon exponentiation are taken into account. The photoproduction region is also covered. (orig.)
Directory of Open Access Journals (Sweden)
Min Liu
2018-03-01
Sidelobe reduction is a primary task for synthetic aperture radar (SAR) imaging. Various methods have been proposed for broadside SAR which suppress the sidelobes effectively while maintaining high image resolution. Meanwhile, squint SAR, especially highly squint SAR, has emerged as an important tool that provides more mobility and flexibility, and it has become a focus of recent research. One research challenge for squint SAR is how to resolve the severe range-azimuth coupling of the echo signals. Unlike in broadside SAR images, the range and azimuth sidelobes of squint SAR images generally no longer lie on the principal axes. Thus the spatially variant apodization (SVA) filters can hardly capture all of the sidelobe information, and the sidelobe reduction is therefore suboptimal. In this paper, we present an improved algorithm called double spatially variant apodization (D-SVA) for better sidelobe suppression. Satisfactory sidelobe reduction results are achieved with the proposed algorithm, as shown by comparing squint SAR images to broadside SAR images. Simulation results also demonstrate the reliability and efficiency of the proposed method.
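Standard (single-pass) SVA, which D-SVA builds on, can be sketched in one dimension: each output sample picks its own cosine-on-pedestal weight in [0, 0.5] so as to minimize the output magnitude, applied to I and Q independently. The signal below is a synthetic point-target response, not SAR data.

```python
import numpy as np

def sva_1d(g):
    """Minimal 1-D spatially variant apodization: for each sample, choose
    the weight w in [0, 0.5] minimizing |output|, where
    output[n] = g[n] + w * (g[n-1] + g[n+1]), separately for I and Q."""
    out = np.empty_like(g)
    for comp_in, comp_out in ((g.real, out.real), (g.imag, out.imag)):
        comp_out[:] = comp_in
        for n in range(1, len(g) - 1):
            s = comp_in[n - 1] + comp_in[n + 1]
            if s == 0.0:
                continue
            # Unconstrained minimizer is w = -g[n]/s; clip to the valid range.
            w = np.clip(-comp_in[n] / s, 0.0, 0.5)
            comp_out[n] = comp_in[n] + w * s
    return out

# Synthetic point-target response sampled off-center: a sinc with strong
# sidelobes. SVA nulls the sidelobes while leaving the mainlobe untouched.
g = np.sinc(np.arange(32) - 15.5).astype(complex)
y = sva_1d(g)
assert abs(abs(y[15]) - abs(g[15])) < 1e-12   # mainlobe preserved
assert np.abs(y[2:14]).max() < 1e-9           # sidelobes annihilated
```

For squint geometries the sidelobes are rotated off the principal axes, which is why a single pass along each axis misses energy and motivates the double (D-SVA) variant.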
Liebi, Marianne; Georgiadis, Marios; Kohlbrecher, Joachim; Holler, Mirko; Raabe, Jörg; Usov, Ivan; Menzel, Andreas; Schneider, Philipp; Bunk, Oliver; Guizar-Sicairos, Manuel
2018-01-01
Small-angle X-ray scattering tensor tomography, which allows reconstruction of the local three-dimensional reciprocal-space map within a three-dimensional sample as introduced by Liebi et al. [Nature (2015), 527, 349-352], is described in more detail with regard to the mathematical framework and the optimization algorithm. For the case of trabecular bone samples from vertebrae it is shown that the model of the three-dimensional reciprocal-space map using spherical harmonics can adequately describe the measured data. The method enables the determination of nanostructure orientation and degree of orientation as demonstrated previously in a single momentum transfer q range. This article presents a reconstruction of the complete reciprocal-space map for the case of bone over extended ranges of q. In addition, it is shown that uniform angular sampling and advanced regularization strategies help to reduce the amount of data required.
Scherer, Artur; Valiron, Benoît; Mau, Siun-Chuon; Alexander, Scott; van den Berg, Eric; Chapuran, Thomas E.
2017-03-01
We provide a detailed estimate for the logical resource requirements of the quantum linear-system algorithm (Harrow et al. in Phys Rev Lett 103:150502, 2009), including the recently described elaborations and application to computing the electromagnetic scattering cross section of a metallic target (Clader et al. in Phys Rev Lett 110:250504, 2013). Our resource estimates are based on the standard quantum-circuit model of quantum computation; they comprise circuit width (related to parallelism), circuit depth (total number of steps), the number of qubits and ancilla qubits employed, and the overall number of elementary quantum gate operations as well as more specific gate counts for each elementary fault-tolerant gate from the standard set {X, Y, Z, H, S, T, CNOT}. In order to perform these estimates, we used an approach that combines manual analysis with automated estimates generated via the Quipper quantum programming language and compiler. Our estimates pertain to the explicit example problem size N = 332,020,680, beyond which, according to a crude big-O complexity comparison, the quantum linear-system algorithm is expected to run faster than the best known classical linear-system solving algorithm. For this problem size, a desired calculation accuracy ε = 0.01 requires an approximate circuit width of 340 and circuit depth of order 10^25 if oracle costs are excluded, and a circuit width and circuit depth of order 10^8 and 10^29, respectively, if the resource requirements of oracles are included, indicating that the commonly ignored oracle resources are considerable. In addition to providing detailed logical resource estimates, it is also the purpose of this paper to demonstrate explicitly (using a fine-grained approach rather than relying on coarse big-O asymptotic approximations) how these impressively large numbers arise with an actual circuit implementation of a quantum algorithm. While our estimates may prove to be conservative as more efficient
Li, Yinlin; Kundu, Bijoy K.
2018-03-01
The three-compartment model with spillover (SP) and partial volume (PV) corrections has been widely used for noninvasive kinetic parameter studies of dynamic 2-[18F]fluoro-2-deoxy-D-glucose (FDG) positron emission tomography images of small animal hearts in vivo. However, the approach still suffers from estimation uncertainty or slow convergence caused by the commonly used optimization algorithms. The aim of this study was to develop an improved optimization algorithm with better estimation performance. Femoral artery blood samples, image-derived input functions from heart ventricles, and myocardial time-activity curves (TACs) were derived from data on 16 C57BL/6 mice obtained from the UCLA Mouse Quantitation Program. Parametric equations of the average myocardium and the blood pool TACs with SP and PV corrections in a three-compartment tracer kinetic model were formulated. A hybrid method integrating artificial immune-system and interior-reflective Newton methods was developed to solve the equations. Two penalty functions and one late-time-point tail vein blood sample were used to constrain the objective function. The estimation accuracy of the method was validated by comparing results with experimental values using the errors in the areas under curves (AUCs) of the model corrected input function (MCIF) and the 18F-FDG influx constant Ki. Moreover, the elapsed time was used to measure the convergence speed. The overall AUC error of MCIF for the 16 mice averaged -1.4 ± 8.2%, with a correlation coefficient of 0.9706. Similar results can be seen in the overall Ki error percentage, which was 0.4 ± 5.8% with a correlation coefficient of 0.9912. The t-test P value for both showed no significant difference. The mean and standard deviation of the MCIF AUC and Ki percentage errors have lower values compared to the previously published methods. The computation time of the hybrid method is also several times lower than using just a stochastic
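The hybrid global-plus-local strategy can be illustrated generically. The sketch below stands in a stratified random search for the artificial-immune-system stage and a bounded Newton iteration for the interior-reflective Newton stage, on a toy one-dimensional multimodal objective; it is not the authors' estimator or their kinetic model.

```python
import math
import random

def hybrid_minimize(f, df, d2f, bounds, n_seeds=80, seed=1):
    """Generic hybrid-optimizer sketch: a stochastic (stratified) global
    stage proposes starting points, and a bounded Newton iteration refines
    each one; the refined point is kept only if it improves the objective.
    Stage choices here are illustrative stand-ins, not the paper's method."""
    rng = random.Random(seed)
    lo, hi = bounds
    best_x, best_f = None, math.inf
    for i in range(n_seeds):
        x0 = lo + (hi - lo) * (i + rng.random()) / n_seeds   # stratified draw
        x = x0
        for _ in range(50):                    # local Newton refinement
            h = d2f(x)
            if abs(h) < 1e-12:
                break
            step = df(x) / h
            x = min(max(x - step, lo), hi)     # keep iterate inside bounds
            if abs(step) < 1e-12:
                break
        if f(x) > f(x0):                       # accept refinement only if it helps
            x = x0
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

# Multimodal test objective: Newton alone stalls in local minima; the
# global stage recovers the global minimum at x = 0.
f = lambda x: x * x + 2.0 * math.sin(3.0 * x) ** 2
df = lambda x: 2.0 * x + 6.0 * math.sin(6.0 * x)
d2f = lambda x: 2.0 + 36.0 * math.cos(6.0 * x)
x_star, f_star = hybrid_minimize(f, df, d2f, (-4.0, 4.0))
assert abs(x_star) < 1e-3 and f_star < 1e-6
```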
International Nuclear Information System (INIS)
Lovesey, S.W.
1987-05-01
The report reviews, at an introductory level, the theory of photon scattering from condensed matter. Magnetic scattering, which arises from first-order relativistic corrections to the Thomson scattering amplitude, is treated in detail and related to the corresponding interaction in the magnetic neutron diffraction amplitude. (author)
Morshed, Mohammad Sarwar; Kamal, Mostafa Mashnoon; Khan, Somaiya Islam
2016-07-01
Inventory has been a major concern in supply chains, and considerable research has lately been done on inventory control, bringing forth a number of methods that efficiently manage inventory and related overheads by reducing the cost of replenishment. This research aims to provide a better replenishment policy for multi-product, single-supplier situations for chemical raw materials of textile industries in Bangladesh. It is assumed that industries currently pursue an individual replenishment system. The purpose is to find the optimum ideal cycle time and the individual replenishment cycle time of each product that yield the lowest annual holding and ordering cost, and also to find the optimum ordering quantity. In this paper an indirect grouping strategy has been used; it is suggested that indirect grouping outperforms direct grouping when the major cost is high. An algorithm by Kaspi and Rosenblatt (1991) called RAND is exercised for its simplicity and ease of application. RAND provides an ideal cycle time (T) for replenishment and an integer multiplier (ki) for individual items; thus the replenishment cycle time for each product is found as T×ki. Firstly, based on data, a comparison between the currently prevailing (individual) process and RAND using actual demands shows a 49% improvement in the total cost of replenishment. Secondly, discrepancies in demand are corrected by using Holt's method. However, demands can only be forecast one or two months into the future because of the demand pattern of the industry under consideration. Evidently, application of RAND with corrected demand displays even greater improvement. The results of this study demonstrate that the cost of replenishment can be significantly reduced by applying the RAND algorithm and exponential smoothing models.
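The RAND iteration described above (an ideal cycle time T plus integer multipliers ki per item) can be sketched as follows. The cost data are hypothetical, and the integer rule is simplified to rounding the continuous multiplier, a common shortcut rather than RAND's exact bracketing rule.

```python
import math

def rand_policy(S, items, n_trials=10, iters=50):
    """RAND-style heuristic sketch for the joint replenishment problem.
    S: major ordering cost per family order; items: list of
    (annual demand D, unit holding cost h, minor ordering cost s).
    Returns (basic cycle time T, integer multipliers k, annual cost).
    An illustrative implementation of the published idea, not the paper's code."""
    def cost(T, k):
        order = (S + sum(s / ki for (_, _, s), ki in zip(items, k))) / T
        hold = 0.5 * T * sum(ki * D * h for (D, h, _), ki in zip(items, k))
        return order + hold
    # Upper bound on the basic cycle: one joint order for everything.
    T_max = math.sqrt(2 * (S + sum(s for _, _, s in items))
                      / sum(D * h for D, h, _ in items))
    T_min = T_max / (2 * n_trials)
    best = None
    for t in range(1, n_trials + 1):          # several initial cycle times
        T = T_min + (T_max - T_min) * t / n_trials
        for _ in range(iters):
            # Multiplier for fixed T: rounded continuous optimum
            # sqrt(2 s / (D h)) / T, floored at 1.
            k = [max(1, round(math.sqrt(2 * s / (D * h)) / T))
                 for D, h, s in items]
            # Optimal T for fixed multipliers k.
            T_new = math.sqrt(
                2 * (S + sum(s / ki for (_, _, s), ki in zip(items, k)))
                / sum(ki * D * h for (D, h, _), ki in zip(items, k)))
            if abs(T_new - T) < 1e-12:
                T = T_new
                break
            T = T_new
        c = cost(T, k)
        if best is None or c < best[2]:
            best = (T, k, c)
    return best

# Three hypothetical chemical raw materials: (annual demand, unit holding
# cost, minor ordering cost); S is the shared major cost per joint order.
items = [(1200.0, 2.0, 10.0), (400.0, 1.5, 25.0), (100.0, 4.0, 5.0)]
T, k, c = rand_policy(100.0, items)   # item i is replenished every T * k[i]
```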
Directory of Open Access Journals (Sweden)
Georgii N. Lebedev
2017-01-01
The improvement of airfield operation effectiveness largely depends on the quality of problem solving at the interaction boundaries of different technological sections. One such hotspot is the use of the same runway by inbound and outbound aircraft. At certain intensities of outbound and inbound air traffic a conflict of aircraft interests appears, in which it may be quite difficult to sort out priorities even for experienced controllers, and as a consequence mistakes in decision-making unavoidably appear. In this work the task of corrective adjustment of landing and takeoff times for aircraft using the same runway, under the “arrival – departure” conflict of interests at increased operating intensity, is formulated. The choice of the optimal solution is made taking mutual interests into account, without complete enumeration and evaluation of all solutions. Accordingly, a genetic algorithm is proposed, which offers a simple and effective approach to solving the optimal control problem while keeping flight safety at an acceptably high level. The estimate of additional aviation fuel consumption is used as the criterion for evaluating the optimal choice. The advantages of applying the genetic algorithm to decision-making, in comparison with today’s “team” resolution of the “departure – arrival” conflict in the airfield area, are shown.
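A toy genetic algorithm over runway sequences, using extra fuel burned while holding as the fitness, illustrates the approach. The operations, separation time, and GA settings are all invented for illustration and carry no relation to the study's data.

```python
import random

# Each operation (arrival or departure) has a ready time and a fuel-burn
# rate while holding; consecutive runway operations need a fixed separation.
# The GA searches over sequences to minimize extra fuel burned waiting.
OPS = [(0, 5.0), (1, 3.0), (2, 8.0), (2, 2.0), (4, 6.0), (5, 4.0)]  # (ready, rate)
SEP = 2  # minutes of runway separation between consecutive operations

def extra_fuel(order):
    t, total = 0, 0.0
    for i in order:
        ready, rate = OPS[i]
        t = max(t, ready)
        total += (t - ready) * rate    # fuel burned while holding
        t += SEP
    return total

def ga(pop_size=30, gens=60, seed=7):
    rng = random.Random(seed)
    n = len(OPS)
    # Seed the population with the first-come order plus random sequences.
    pop = [list(range(n))] + [rng.sample(range(n), n) for _ in range(pop_size - 1)]
    for _ in range(gens):
        pop.sort(key=extra_fuel)
        next_pop = pop[:4]                     # elitism: keep the best four
        while len(next_pop) < pop_size:
            p1, p2 = rng.sample(pop[:10], 2)   # select among the fittest
            cut = rng.randrange(1, n)          # one-point order crossover
            child = p1[:cut] + [g for g in p2 if g not in p1[:cut]]
            if rng.random() < 0.3:             # swap mutation
                a, b = rng.sample(range(n), 2)
                child[a], child[b] = child[b], child[a]
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=extra_fuel)

best = ga()
# Elitism guarantees the result is at least as good as first-come order.
assert extra_fuel(best) <= extra_fuel(list(range(len(OPS))))
```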
Lee, Ji Won; Kim, Chang Won; Lee, Geewon; Lee, Han Cheol; Kim, Sang-Pil; Choi, Bum Sung; Jeong, Yeon Joo
2018-02-01
Background Using the hybrid electrocardiogram (ECG)-gated computed tomography (CT) technique, assessment of the entire aorta, coronary arteries, and aortic valve is possible with single-bolus contrast administration within a single acquisition. Purpose To compare the image quality of hybrid ECG-gated and non-gated CT angiography of the aorta and evaluate the effect of a motion correction algorithm (MCA) on coronary artery image quality in a hybrid ECG-gated aorta CT group. Material and Methods In total, 104 patients (76 men; mean age = 65.8 years) prospectively randomized into two groups (Group 1 = hybrid ECG-gated CT; Group 2 = non-gated CT) underwent wide-detector array aorta CT. Image quality, assessed using a four-point scale, was compared between the groups. Coronary artery image quality was compared between the conventional reconstruction and motion correction reconstruction subgroups in Group 1. Results Group 1 showed significant advantages over Group 2 in aortic wall, cardiac chamber, aortic valve, coronary ostia, and main coronary artery image quality (all P values significant). Conclusion Hybrid ECG-gated CT significantly improved the heart and aortic wall image quality, and the MCA can further improve the image quality and interpretability of the coronary arteries.
New resonance cross section calculational algorithms
International Nuclear Information System (INIS)
Mathews, D.R.
1978-01-01
Improved resonance cross-section calculational algorithms were developed and tested for inclusion in a fast reactor version of the MICROX code. The resonance energy portion of the MICROX code solves the neutron slowing-down equations for a two-region lattice cell on a very detailed energy grid (about 14,500 energies). In the MICROX algorithms, the exact P0 elastic scattering kernels are replaced by synthetic (approximate) elastic scattering kernels which permit the use of an efficient and numerically stable recursion-relation solution of the slowing-down equation. In the work described here, the MICROX algorithms were modified as follows: an additional delta-function term was included in the P0 synthetic scattering kernel. The additional delta-function term allows one more moment of the exact elastic scattering kernel to be preserved without much extra computational effort. With the improved synthetic scattering kernel, the flux returns more closely to the exact flux below a resonance than with the original MICROX kernel. The slowing-down calculation was extended to a true B1 hyperfine energy grid calculation in each region by using P1 synthetic scattering kernels and transport-corrected P0 collision probabilities to couple the two regions. 1 figure, 6 tables
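The moment-preservation idea behind the synthetic kernels can be demonstrated directly: fix a few delta-function lethargy nodes and solve a small linear system for weights that reproduce the leading moments of the exact elastic kernel; each additional delta term preserves one more moment, as the text notes. The node positions below are illustrative, not the MICROX values.

```python
import math
import numpy as np

def exact_moments(A, n_max):
    """First n_max lethargy-gain moments of the exact P0 elastic kernel for
    scattering off mass A: p(u) = exp(-u)/(1 - alpha) on [0, ln(1/alpha)],
    alpha = ((A-1)/(A+1))**2. Uses the analytic recurrence
    I_n = n*I_(n-1) - alpha*umax**n for I_n = integral of u**n * exp(-u)."""
    alpha = ((A - 1.0) / (A + 1.0)) ** 2
    umax = math.log(1.0 / alpha)
    I = 1.0 - alpha                       # I_0 = 1 - exp(-umax) = 1 - alpha
    moments = [I / (1.0 - alpha)]         # M_0 = 1 (kernel is normalized)
    for n in range(1, n_max):
        I = n * I - alpha * umax ** n
        moments.append(I / (1.0 - alpha))
    return moments

# Synthetic kernel: weighted delta functions at fixed lethargy gains, with
# weights chosen so the leading moments of the exact kernel are preserved.
A = 16.0                                  # e.g. scattering off oxygen
M = exact_moments(A, 3)                   # M[1] is the mean lethargy gain xi
u_nodes = np.array([0.02, 0.06, 0.12])    # illustrative node positions
V = np.vander(u_nodes, 3, increasing=True).T   # rows: u^0, u^1, u^2
w = np.linalg.solve(V, np.array(M))            # weights matching moments 0..2
for n in range(3):
    assert abs(np.dot(w, u_nodes ** n) - M[n]) < 1e-8
```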
Zhang, Yuan; Yang, Bin; Liu, Xiaohui; Wang, Cuizhen
2017-05-01
Fast and accurate estimation of rice yield plays an important role in forecasting rice productivity for ensuring regional or national food security. Microwave synthetic aperture radar (SAR) data have proved to have great potential for rice monitoring and parameter retrieval. In this study, a rice canopy scattering model (RCSM) was revised and applied to simulate the backscatter of the rice canopy. The combination of RCSM and a genetic algorithm (GA) was proposed for retrieving two important rice parameters relating to grain yield, ear length and ear number density, from C-band, dual-polarization (HH and HV) Radarsat-2 SAR data. The stability of the GA inversion results was also evaluated by changing various parameter configurations. Results show that the RCSM can effectively simulate backscattering coefficients of the rice canopy at HH and HV modes with an error of <1 dB. Reasonable selection of the GA parameters is essential for the stability and efficiency of rice parameter retrieval. The two rice parameters are retrieved by the proposed RCSM-GA approach with good accuracy: the rice ear length is estimated with an error of <1.5 cm, and the ear number density with an error of <23 ears/m². Rice grain yields are effectively estimated and mapped from the retrieved ear length and number density via a simple yield regression equation. This study further illustrates the capability of C-band Radarsat-2 SAR data for retrieval of rice ear parameters and the practicability of radar remote sensing technology for operational yield estimation.
Directory of Open Access Journals (Sweden)
Stéfani Novoa
2017-01-01
The accurate measurement of suspended particulate matter (SPM) concentrations in coastal waters is of crucial importance for ecosystem studies, sediment transport monitoring, and assessment of anthropogenic impacts in the coastal ocean. Ocean color remote sensing is an efficient tool to monitor SPM spatio-temporal variability in coastal waters. However, near-shore satellite images are complex to correct for atmospheric effects due to the proximity of land and to the high level of reflectance caused by high SPM concentrations in the visible and near-infrared spectral regions. The water reflectance signal (ρw) tends to saturate at short visible wavelengths when the SPM concentration increases. Using a comprehensive dataset of high-resolution satellite imagery and in situ SPM and water reflectance data, this study presents (i) an assessment of existing atmospheric correction (AC) algorithms developed for turbid coastal waters; and (ii) a switching method that automatically selects the most sensitive SPM vs. ρw relationship, to avoid saturation effects when computing the SPM concentration. The approach is applied to satellite data acquired by three medium-high spatial resolution sensors (Landsat-8/Operational Land Imager, National Polar-Orbiting Partnership/Visible Infrared Imaging Radiometer Suite, and Aqua/Moderate Resolution Imaging Spectroradiometer) to map the SPM concentration in some of the most turbid areas of the European coastal ocean, namely the Gironde and Loire estuaries as well as Bourgneuf Bay on the French Atlantic coast. For all three sensors, AC methods based on the use of short-wave infrared (SWIR) spectral bands were tested, and the consistency of the retrieved water reflectance was examined along transects from low- to high-turbidity waters. For OLI data, we also compared a SWIR-based AC (ACOLITE) with a method based on multi-temporal analyses of atmospheric constituents (MACCS). For the selected scenes, the ACOLITE-MACCS difference was
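The switching idea can be sketched with a Nechad-type semi-analytical form SPM = A·ρw/(1 − ρw/C): a red-band relation is used until its reflectance approaches saturation, after which a near-infrared relation takes over. The coefficients, band choice, and threshold below are placeholders, not the calibrated values of the study.

```python
def spm_from_reflectance(rho_red, rho_nir,
                         red=(289.0, 0.1686), nir=(2971.0, 0.2115),
                         switch_threshold=0.05):
    """Switching sketch for SPM retrieval (g/m^3) from water reflectance.
    Each band uses a semi-analytical relation SPM = A*rho/(1 - rho/C);
    the red band is preferred while it remains sensitive, and the NIR band
    is used once red reflectance nears saturation. All (A, C) pairs and the
    threshold are illustrative placeholders."""
    if rho_red < switch_threshold:
        A, C = red
        rho = rho_red
    else:                        # red band saturating: switch to the NIR relation
        A, C = nir
        rho = rho_nir
    return A * rho / (1.0 - rho / C)

low_turbidity = spm_from_reflectance(0.01, 0.002)   # red band still sensitive
high_turbidity = spm_from_reflectance(0.08, 0.02)   # NIR relation selected
assert low_turbidity < high_turbidity
```

Applied per pixel, this keeps the retrieval on the steep (sensitive) part of the SPM-reflectance curve across the full turbidity range of an estuarine scene.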
Energy Technology Data Exchange (ETDEWEB)
Xing, Yan; Zhao, Yuan; Pan, Cun Xue; Azati, Gulina; Wang, Yan Wei; Liu, Wen Ya [Imaging Center, The First Affiliated Hospital of Xinjiang Medical University, Xinjiang (China); Guo, Ning [CT Imaging Research Center, GE Healthcare, Beijing (China)
2017-11-15
Using a pulsating coronary artery phantom at high heart rate settings, we investigated the efficacy of a motion correction algorithm (MCA) in improving image quality in dual-energy spectral coronary CT angiography (CCTA). Coronary flow phantoms were scanned at heart rates of 60–100 beats/min at 10-beats/min increments, using dual-energy spectral CT mode. Virtual monochromatic images were reconstructed from 50 to 90 keV at 10-keV increments. Two blinded observers assessed image quality using a 4-point Likert scale (1 = non-diagnostic, 4 = excellent) and the fraction of interpretable segments using the MCA versus the conventional algorithm (CA). Comparison of variables was performed with the Wilcoxon rank sum test and the McNemar test. At heart rates of 70, 80, 90, and 100 beats/min, images with the MCA received higher image-quality scores than those with the CA at monochromatic levels of 50, 60, and 70 keV (each p < 0.05). Meanwhile, at a heart rate of 90 beats/min, image interpretability was improved by the MCA at monochromatic levels of 60 keV (p < 0.05) and 70 keV (p < 0.05). At a heart rate of 100 beats/min, image interpretability was improved by the MCA at monochromatic levels of 50 keV (from 69.4% to 86.1%, p < 0.05), 60 keV (from 55.6% to 83.3%, p < 0.05), and 70 keV (from 33.3% to 69.3%, p < 0.05). Low-keV monochromatic images combined with the MCA improve image quality and interpretability in CCTA at high heart rates.
Lee, Suk-Jun; Yu, Seung-Man
2017-08-01
The purpose of this study was to evaluate the usefulness and clinical applications of MultiVaneXD, which applies an iterative motion-correction reconstruction algorithm, in T2-weighted images compared with MultiVane images taken with a 3T MRI. A total of 20 patients with suspected pathologies of the liver and pancreatic-biliary system based on clinical and laboratory findings underwent upper abdominal MRI, acquired using the MultiVane and MultiVaneXD techniques. Two reviewers analyzed the MultiVane and MultiVaneXD T2-weighted images qualitatively and quantitatively. Each reviewer evaluated motion artifacts, the sharpness of the upper abdominal organs, and the conspicuity of the portal and hepatic veins. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated by one reviewer for quantitative analysis. The interclass correlation coefficient was evaluated to measure inter-observer reliability. There were significant differences between MultiVane and MultiVaneXD in the motion artifact evaluation. Furthermore, MultiVane was given a better score than MultiVaneXD in abdominal organ sharpness and vessel conspicuity, but the difference was insignificant. The reliability coefficient values were over 0.8 in every evaluation. MultiVaneXD (2.12) showed a higher value than did MultiVane (1.98), but the difference was insignificant (p = 0.135). MultiVaneXD is a motion correction method that is more advanced than MultiVane, and it produced an increased SNR, resulting in a greater ability to detect focal abdominal lesions.
Botta, Francesca; Ferrari, Mahila; Chiesa, Carlo; Vitali, Sara; Guerriero, Francesco; Nile, Maria Chiara De; Mira, Marta; Lorenzon, Leda; Pacilio, Massimiliano; Cremonesi, Marta
2018-04-01
To investigate the clinical implication of performing pre-treatment dosimetry for 90Y-microsphere liver radioembolization on 99mTc-MAA SPECT images reconstructed without attenuation or scatter correction and quantified with the patient relative calibration methodology. Twenty-five patients treated with SIR-Spheres® at Istituto Europeo di Oncologia and 31 patients treated with TheraSphere® at Istituto Nazionale Tumori were considered. For each acquired 99mTc-MAA SPECT, four reconstructions were performed: with attenuation and scatter correction (AC_SC), only attenuation (AC_NoSC), only scatter (NoAC_SC), and without corrections (NoAC_NoSC). Absorbed dose maps were calculated from the activity maps, quantified by applying the patient relative calibration to the SPECT images. Whole Liver (WL) and Tumor (T) regions were drawn on CT images. The Injected Liver (IL) region was defined to include the voxels receiving absorbed dose >3.8 Gy/GBq. Whole Healthy Liver (WHL) and Healthy Injected Liver (HIL) regions were obtained as WHL = WL - T and HIL = IL - T. Average absorbed doses to WHL and HIL were calculated, and the injection activity was derived following each institute's procedure. The values obtained from AC_NoSC, NoAC_SC, and NoAC_NoSC images were compared to the reference value suggested by AC_SC images using Bland-Altman analysis and the Wilcoxon paired test (5% significance threshold). Absorbed-dose maps were compared to the reference map (AC_SC) in global terms using the Voxel Normalized Mean Square Error (%VNMSE), and at voxel level by calculating for each voxel the normalized difference with the reference value. The uncertainty affecting absorbed dose at voxel level was accounted for in the comparison; to this purpose, the voxel count fluctuation due to Poisson and reconstruction noise was estimated from SPECT images of a water phantom acquired and reconstructed as patient images. NoAC_SC images lead to activity prescriptions not significantly different from the
Fast analytical scatter estimation using graphics processing units.
Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris
2015-01-01
To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh