WorldWideScience

Sample records for scatter reduction algorithms

  1. The use of anatomical information for molecular image reconstruction algorithms: Attenuation/Scatter correction, motion compensation, and noise reduction

    Energy Technology Data Exchange (ETDEWEB)

    Chun, Se Young [School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan (Korea, Republic of)

    2016-03-15

    PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms have enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images such as CT and MRI from hybrid scanners are also important ingredients for further improving the image quality of PET or SPECT. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we review work that uses anatomical information in molecular image reconstruction algorithms to obtain better image quality, describing the mathematical models, discussing sources of anatomical information for different cases, and showing some examples.

  2. An experimental study of the scatter correction by using a beam-stop-array algorithm with digital breast tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ye-Seul; Park, Hye-Suk; Kim, Hee-Joung [Yonsei University, Wonju (Korea, Republic of); Choi, Young-Wook; Choi, Jae-Gu [Korea Electrotechnology Research Institute, Ansan (Korea, Republic of)

    2014-12-15

    Digital breast tomosynthesis (DBT) is a technique that was developed to overcome the limitations of conventional digital mammography by reconstructing slices through the breast from projections acquired at different angles. In developing and optimizing DBT, x-ray scatter reduction remains a significant challenge due to projection geometry and radiation dose limitations. The most common approach to scatter reduction is a beam-stop-array (BSA) algorithm; however, this method raises concerns regarding the additional exposure involved in acquiring the scatter distribution. The compressed breast is roughly symmetric, and the scatter profiles from projections acquired at axially opposite angles are similar to mirror images. The purpose of this study was to apply the BSA algorithm using only two scans with the beam stop array, which estimates the scatter distribution with minimal additional exposure. The results of the scatter correction with angular interpolation were comparable to those of the scatter correction with scatter distributions measured at each angle. The exposure increase was less than 13%. This study demonstrated the influence of the scatter correction obtained by using the BSA algorithm with minimal exposure, which indicates its potential for practical applications.
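    The record gives no implementation details, but the core of a BSA-style correction can be sketched: pixels behind the opaque stops record scatter only, and interpolating those samples across the detector yields a scatter map to subtract from the projection. A minimal Python sketch under that assumption (all names are hypothetical):

```python
# Hypothetical beam-stop-array scatter estimation: detector pixels behind
# the opaque stops see scatter only; interpolating those readings over the
# whole detector approximates the scatter map.
import numpy as np
from scipy.interpolate import griddata

def estimate_scatter(projection, stop_mask):
    """projection: 2D detector image; stop_mask: True behind beam stops."""
    ys, xs = np.nonzero(stop_mask)
    samples = projection[ys, xs]                     # scatter-only readings
    yy, xx = np.mgrid[0:projection.shape[0], 0:projection.shape[1]]
    scatter = griddata((ys, xs), samples, (yy, xx), method="cubic")
    nearest = griddata((ys, xs), samples, (yy, xx), method="nearest")
    return np.where(np.isnan(scatter), nearest, scatter)  # fill hull edges

def scatter_corrected(projection, stop_mask):
    return projection - estimate_scatter(projection, stop_mask)
```

    In the study's two-scan scheme, stop-sampled views would be acquired at only two opposite angles, with the scatter maps for the remaining projection angles obtained by angular interpolation between them.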

  3. Angle Statistics Reconstruction: a robust reconstruction algorithm for Muon Scattering Tomography

    Science.gov (United States)

    Stapleton, M.; Burns, J.; Quillin, S.; Steer, C.

    2014-11-01

    Muon Scattering Tomography (MST) is a technique for using the scattering of cosmic ray muons to probe the contents of enclosed volumes. As a muon passes through material it undergoes multiple Coulomb scattering, where the amount of scattering is dependent on the density and atomic number of the material as well as the path length. Hence, MST has been proposed as a means of imaging dense materials, for instance to detect special nuclear material in cargo containers. Algorithms are required to generate an accurate reconstruction of the material density inside the volume from the muon scattering information and some have already been proposed, most notably the Point of Closest Approach (PoCA) and Maximum Likelihood/Expectation Maximisation (MLEM) algorithms. However, whilst PoCA-based algorithms are easy to implement, they perform rather poorly in practice. Conversely, MLEM is a complicated algorithm to implement and computationally intensive, and there is currently no published, fast and easily-implementable algorithm that performs well in practice. In this paper, we first provide a detailed analysis of the source of inaccuracy in PoCA-based algorithms. We then motivate an alternative method, based on ideas first laid out by Morris et al., presenting and fully specifying an algorithm that performs well against simulations of realistic scenarios. We argue this new algorithm should be adopted by developers of Muon Scattering Tomography as an alternative to PoCA.
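    For concreteness, the PoCA step the paper analyses reduces to a closest-approach computation between two straight tracks. A minimal sketch assuming each track is given as a point and a direction:

```python
# Point of Closest Approach between the incoming and outgoing muon tracks,
# using the standard closest-points-between-two-lines solution.
import numpy as np

def poca(p1, d1, p2, d2):
    """p1, d1: point/direction of the incoming track; p2, d2: outgoing."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2      # a = c = 1 after normalisation
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                    # ~0 for (near-)parallel tracks
    if abs(denom) < 1e-12:
        return 0.5 * (p1 + p2)               # degenerate: no unique PoCA
    s = (b * e - c * d) / denom              # parameter on the incoming line
    t = (a * e - b * d) / denom              # parameter on the outgoing line
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))   # midpoint of closest gap
```

    The event's scattering angle follows from arccos(d1 · d2), and a PoCA image is formed by binning these angles at the computed points, which is exactly the step whose inaccuracy the paper dissects.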

  4. Channel Parameter Estimation for Scatter Cluster Model Using Modified MUSIC Algorithm

    Directory of Open Access Journals (Sweden)

    Jinsheng Yang

    2012-01-01

    Recently, scatter cluster models, which precisely evaluate the performance of wireless communication systems, have been proposed in the literature. However, the conventional SAGE algorithm does not work for these scatter cluster-based models because it performs poorly when the transmit signals are highly correlated. In this paper, we estimate the time of arrival (TOA), the direction of arrival (DOA), and the Doppler frequency for a scatter cluster model by the modified multiple signal classification (MUSIC) algorithm. Using the space-time characteristics of the multiray channel, the proposed algorithm combines temporal filtering techniques and spatial smoothing techniques to isolate and estimate the incoming rays. The simulation results indicate that the proposed algorithm has lower complexity and is less time-consuming in dense multipath environments than the SAGE algorithm. Furthermore, the estimation performance improves with the number of receive-array elements and the sample length. Thus, the problem of channel parameter estimation for the scatter cluster model can be effectively addressed with the proposed modified MUSIC algorithm.
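    The modified pipeline (temporal filtering, spatial smoothing, joint TOA/DOA/Doppler estimation) is beyond a snippet, but the MUSIC core it builds on is compact. A minimal DOA-only sketch for a uniform linear array, with synthetic data and none of the paper's extensions:

```python
# Classic MUSIC pseudospectrum for a uniform linear array: project candidate
# steering vectors onto the noise subspace of the sample covariance; peaks
# of the reciprocal projection mark the directions of arrival.
import numpy as np

def music_spectrum(X, n_sources, angles_deg, spacing=0.5):
    """X: (n_antennas, n_snapshots) complex baseband samples."""
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample covariance
    eigval, eigvec = np.linalg.eigh(R)           # ascending eigenvalues
    En = eigvec[:, : n - n_sources]              # noise subspace
    P = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * spacing * np.arange(n) * np.sin(theta))
        proj = En.conj().T @ a
        P.append(1.0 / np.real(proj.conj() @ proj))
    return np.asarray(P)

# Synthetic check: two uncorrelated sources at -20 and 30 degrees.
rng = np.random.default_rng(1)
n, snaps = 8, 200
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(n),
                                        np.sin(np.deg2rad([-20, 30]))))
S = rng.normal(size=(2, snaps)) + 1j * rng.normal(size=(2, snaps))
X = A @ S + 0.1 * (rng.normal(size=(n, snaps)) + 1j * rng.normal(size=(n, snaps)))
P = music_spectrum(X, 2, np.arange(-90, 91))
```

    Plain MUSIC of this kind degrades when the incoming rays are coherent; the spatial smoothing the paper adds is the standard remedy for exactly that failure mode.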

  5. An algorithm for reduct cardinality minimization

    KAUST Repository

    AbouEisha, Hassan M.

    2013-12-01

    This paper is devoted to a new algorithm for reduct cardinality minimization. The algorithm transforms the initial table into a decision table of a special kind, simplifies this table, and uses a dynamic programming algorithm to finish the construction of an optimal reduct. Results of computer experiments with decision tables from the UCI ML Repository are discussed. © 2013 IEEE.
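    The dynamic-programming construction itself is not given in the record; as an illustration of the problem being solved, a brute-force search for a minimum-cardinality reduct of a consistent decision table looks as follows:

```python
# Illustration only (not the paper's algorithm): a reduct is an attribute
# subset that preserves the decision discernibility of the full table; we
# try subsets in order of increasing size, so the first hit is minimal.
from itertools import combinations

def is_reduct(rows, decisions, attrs):
    seen = {}
    for row, dec in zip(rows, decisions):
        key = tuple(row[a] for a in attrs)
        if seen.setdefault(key, dec) != dec:   # same values, different decision
            return False
    return True

def minimal_reduct(rows, decisions):
    n_attrs = len(rows[0])
    for k in range(1, n_attrs + 1):            # smallest cardinality first
        for attrs in combinations(range(n_attrs), k):
            if is_reduct(rows, decisions, attrs):
                return attrs
    return tuple(range(n_attrs))

# Attribute 1 alone already determines the decision here.
rows = [(0, 0), (0, 1), (1, 0), (1, 1)]
decisions = [0, 1, 0, 1]
print(minimal_reduct(rows, decisions))         # -> (1,)
```

    The exponential cost of this enumeration is what motivates the table transformation and dynamic programming of the paper.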

  6. An algorithm for reduct cardinality minimization

    KAUST Repository

    AbouEisha, Hassan M.; Al Farhan, Mohammed; Chikalov, Igor; Moshkov, Mikhail

    2013-01-01

    This paper is devoted to a new algorithm for reduct cardinality minimization. The algorithm transforms the initial table into a decision table of a special kind, simplifies this table, and uses a dynamic programming algorithm to finish the construction of an optimal reduct. Results of computer experiments with decision tables from the UCI ML Repository are discussed. © 2013 IEEE.

  7. Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects

    Science.gov (United States)

    Gordon, Howard R.; Castano, Diego J.

    1987-01-01

    Errors due to multiple scattering, which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm, are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.

  8. Parallel Algorithms for Groebner-Basis Reduction

    Science.gov (United States)

    1987-09-25

    Technical report on parallel algorithms for Groebner-basis reduction, produced under the project "Productivity Engineering in the UNIX Environment". (Only report-form header fragments of this record survive.)

  9. An Algorithm for Computing Screened Coulomb Scattering in Geant4

    OpenAIRE

    Mendenhall, Marcus H.; Weller, Robert A.

    2004-01-01

    An algorithm has been developed for the Geant4 Monte-Carlo package for the efficient computation of screened Coulomb interatomic scattering. It explicitly integrates the classical equations of motion for scattering events, resulting in precise tracking of both the projectile and the recoil target nucleus. The algorithm permits the user to plug in an arbitrary screening function, such as Lens-Jensen screening, which is good for backscattering calculations, or Ziegler-Biersack-Littmark screening, which is good for nuclear straggling and implantation problems.

  10. An algorithm to determine backscattering ratio and single scattering albedo

    Digital Repository Service at National Institute of Oceanography (India)

    Suresh, T.; Desa, E.; Matondkar, S.G.P.; Mascarenhas, A.A.M.Q.; Nayak, S.R.; Naik, P.

    Algorithms to determine two inherent optical properties of water, the backscattering probability and the single scattering albedo at 490 and 676 nm, from the apparent optical property remote sensing reflectance are presented here. The measured scattering...

  11. An algorithm for 3D target scatterer feature estimation from sparse SAR apertures

    Science.gov (United States)

    Jackson, Julie Ann; Moses, Randolph L.

    2009-05-01

    We present an algorithm for extracting 3D canonical scattering features from complex targets observed over sparse 3D SAR apertures. The algorithm begins with complex phase history data and ends with a set of geometrical features describing the scene. The algorithm provides a pragmatic approach to initialization of a nonlinear feature estimation scheme, using regularization methods to deconvolve the point spread function and obtain sparse 3D images. Regions of high energy are detected in the sparse images, providing location initializations for scattering center estimates. A single canonical scattering feature, corresponding to a geometric shape primitive, is fit to each region via nonlinear optimization of fit error between the regularized data and parametric canonical scattering models. Results of the algorithm are presented using 3D scattering prediction data of a simple scene for both a densely-sampled and a sparsely-sampled SAR measurement aperture.

  12. Algorithm FIRE-Feynman Integral REduction

    International Nuclear Information System (INIS)

    Smirnov, A.V.

    2008-01-01

    The recently developed algorithm FIRE performs the reduction of Feynman integrals to master integrals. It is based on a number of strategies, such as applying the Laporta algorithm, the s-bases algorithm, region-bases and integrating explicitly over loop momenta when possible. Currently it is being used in complicated three-loop calculations.

  13. Bridging Ground Validation and Algorithms: Using Scattering and Integral Tables to Incorporate Observed DSD Correlations into Satellite Algorithms

    Science.gov (United States)

    Williams, C. R.

    2012-12-01

    The NASA Global Precipitation Mission (GPM) raindrop size distribution (DSD) Working Group is composed of NASA PMM Science Team Members and is charged to "investigate the correlations between DSD parameters using Ground Validation (GV) data sets that support, or guide, the assumptions used in satellite retrieval algorithms." Correlations between DSD parameters can be used to constrain the unknowns and reduce the degrees-of-freedom in under-constrained satellite algorithms. Over the past two years, the GPM DSD Working Group has analyzed GV data and has found correlations between the mass-weighted mean raindrop diameter (Dm) and the mass distribution standard deviation (Sm) that follow a power-law relationship. This Dm-Sm power-law relationship appears to be robust and has been observed in surface disdrometer and vertically pointing radar observations. One benefit of a Dm-Sm power-law relationship is that a three-parameter DSD can be modeled with just two parameters: Dm and Nw, which determines the DSD amplitude. In order to incorporate observed DSD correlations into satellite algorithms, the GPM DSD Working Group is developing scattering and integral tables that can be used by satellite algorithms. Scattering tables describe the interaction of electromagnetic waves with individual particles to generate cross sections of backscattering, extinction, and scattering. Scattering tables are independent of the distribution of particles. Integral tables combine scattering table outputs with DSD parameters and DSD correlations to generate integrated normalized reflectivity, attenuation, scattering, emission, and asymmetry coefficients. Integral tables contain both frequency-dependent scattering properties and cloud microphysics. The GPM DSD Working Group has developed scattering tables for raindrops at both Dual Precipitation Radar (DPR) frequencies and at all GMI radiometer frequencies less than 100 GHz. Scattering tables include Mie and T-matrix scattering with H- and V
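    To make the two-parameter closure concrete: for a normalized gamma DSD the width obeys Sm = Dm / sqrt(4 + mu), so a power law Sm = a·Dm^b fixes the shape parameter mu and leaves only Dm and Nw free. A sketch with placeholder coefficients (a and b below are illustrative, not the Working Group's fitted values):

```python
# Normalized gamma DSD closed by a hypothetical Dm-Sm power law: the
# constrained width Sm determines mu through Sm = Dm / sqrt(4 + mu).
import numpy as np
from scipy.special import gamma as gamma_fn

a, b = 0.3, 1.1                     # placeholder power-law coefficients

def dsd(D, Dm, Nw):
    Sm = a * Dm**b                  # constrained width
    mu = (Dm / Sm) ** 2 - 4.0       # invert Sm = Dm / sqrt(4 + mu)
    f = (6.0 / 4.0**4) * (4 + mu) ** (mu + 4) / gamma_fn(mu + 4)
    return Nw * f * (D / Dm) ** mu * np.exp(-(4 + mu) * D / Dm)

D = np.linspace(0.1, 6.0, 100)      # drop diameter grid [mm]
N = dsd(D, Dm=1.5, Nw=8000.0)       # number concentration spectrum
```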

  14. N-Dimensional LLL Reduction Algorithm with Pivoted Reflection

    Directory of Open Access Journals (Sweden)

    Zhongliang Deng

    2018-01-01

    The Lenstra-Lenstra-Lovász (LLL) lattice reduction algorithm and many of its variants have been widely used in cryptography, multiple-input multiple-output (MIMO) communication systems, and carrier phase positioning in global navigation satellite systems (GNSS) to solve the integer least squares (ILS) problem. In this paper, we propose an n-dimensional LLL reduction algorithm (n-LLL), expanding the Lovász condition in the LLL algorithm to n-dimensional space in order to obtain a further reduced basis. We also introduce pivoted Householder reflection into the algorithm to optimize the reduction time. For an m-order positive definite matrix, analysis shows that the n-LLL reduction algorithm will converge within finite steps and always produce better results than the original LLL reduction algorithm with n > 2. The simulations clearly prove that n-LLL is better than the original LLL in reducing the condition number of an ill-conditioned input matrix, with 39% improvement on average for typical cases, which can significantly reduce the search space for solving the ILS problem. The simulation results also show that the pivoted reflection significantly reduces the number of swaps in the algorithm, by 57%, making n-LLL a more practical reduction algorithm.
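    For readers unfamiliar with the baseline the paper extends, a textbook LLL reduction (delta = 0.75) fits in a short routine; the n-dimensional Lovász condition and the pivoted Householder reflections of n-LLL are not reproduced here:

```python
# Textbook LLL lattice reduction over the rows of an integer basis B:
# size-reduce against earlier vectors, then enforce the Lovász condition,
# swapping and backtracking when it fails.
import numpy as np

def lll(B, delta=0.75):
    B = np.array(B, dtype=float)
    n = B.shape[0]

    def gram_schmidt(B):
        Q = np.zeros_like(B)
        mu = np.zeros((n, n))
        for i in range(n):
            Q[i] = B[i]
            for j in range(i):
                mu[i, j] = (B[i] @ Q[j]) / (Q[j] @ Q[j])
                Q[i] -= mu[i, j] * Q[j]
        return Q, mu

    Q, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):                 # size reduction
            q = round(mu[k, j])
            if q != 0:
                B[k] -= q * B[j]
                Q, mu = gram_schmidt(B)
        if Q[k] @ Q[k] >= (delta - mu[k, k - 1] ** 2) * (Q[k - 1] @ Q[k - 1]):
            k += 1                                     # Lovász condition holds
        else:
            B[[k - 1, k]] = B[[k, k - 1]]              # swap and backtrack
            Q, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B.astype(int)

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```

    Recomputing the Gram-Schmidt data after every update keeps the sketch short at the cost of efficiency; production implementations update mu incrementally, and per the abstract the paper's pivoted reflections serve to cut the number of swaps.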

  15. A Hierarchical Volumetric Shadow Algorithm for Single Scattering

    OpenAIRE

    Baran, Ilya; Chen, Jiawen; Ragan-Kelley, Jonathan Millar; Durand, Fredo; Lehtinen, Jaakko

    2010-01-01

    Volumetric effects such as beams of light through participating media are an important component in the appearance of the natural world. Many such effects can be faithfully modeled by a single scattering medium. In the presence of shadows, rendering these effects can be prohibitively expensive: current algorithms are based on ray marching, i.e., integrating the illumination scattered towards the camera along each view ray, modulated by visibility to the light source at each sample. Visibility...

  16. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm.

    Science.gov (United States)

    Di Simone, Alessio

    2016-06-25

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions.

  17. Gain reduction measurements in transient stimulated Raman scattering

    NARCIS (Netherlands)

    Heeman, R.J.; Godfried, H.P.

    1995-01-01

    Threshold energy measurements of transient rotational stimulated Raman scattering are compared to Raman conversion calculations from semiclassical theories using a simple concept of a gain reduction factor, which expresses the reduction of the gain from its steady-state value due to transient effects.

  18. An algorithm for computing screened Coulomb scattering in GEANT4

    Energy Technology Data Exchange (ETDEWEB)

    Mendenhall, Marcus H. [Vanderbilt University Free Electron Laser Center, P.O. Box 351816 Station B, Nashville, TN 37235-1816 (United States)]. E-mail: marcus.h.mendenhall@vanderbilt.edu; Weller, Robert A. [Department of Electrical Engineering and Computer Science, Vanderbilt University, P.O. Box 351821 Station B, Nashville, TN 37235-1821 (United States)]. E-mail: robert.a.weller@vanderbilt.edu

    2005-01-01

    An algorithm has been developed for the GEANT4 Monte-Carlo package for the efficient computation of screened Coulomb interatomic scattering. It explicitly integrates the classical equations of motion for scattering events, resulting in precise tracking of both the projectile and the recoil target nucleus. The algorithm permits the user to plug in an arbitrary screening function, such as Lens-Jensen screening, which is good for backscattering calculations, or Ziegler-Biersack-Littmark screening, which is good for nuclear straggling and implantation problems. This will allow many of the applications of the TRIM and SRIM codes to be extended into the much more general GEANT4 framework where nuclear and other effects can be included.

  19. An algorithm for computing screened Coulomb scattering in GEANT4

    International Nuclear Information System (INIS)

    Mendenhall, Marcus H.; Weller, Robert A.

    2005-01-01

    An algorithm has been developed for the GEANT4 Monte-Carlo package for the efficient computation of screened Coulomb interatomic scattering. It explicitly integrates the classical equations of motion for scattering events, resulting in precise tracking of both the projectile and the recoil target nucleus. The algorithm permits the user to plug in an arbitrary screening function, such as Lens-Jensen screening, which is good for backscattering calculations, or Ziegler-Biersack-Littmark screening, which is good for nuclear straggling and implantation problems. This will allow many of the applications of the TRIM and SRIM codes to be extended into the much more general GEANT4 framework where nuclear and other effects can be included.

  20. A reconstruction algorithm for coherent scatter computed tomography based on filtered back-projection

    International Nuclear Information System (INIS)

    Stevendaal, U. van; Schlomka, J.-P.; Harding, A.; Grass, M.

    2003-01-01

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter form factor of the investigated object. Reconstruction from coherently scattered x-rays is commonly done using algebraic reconstruction techniques (ART). In this paper, we propose an alternative approach based on filtered back-projection. For the first time, a three-dimensional (3D) filtered back-projection technique using curved 3D back-projection lines is applied to two-dimensional coherent scatter projection data. The proposed algorithm is tested with simulated projection data as well as with projection data acquired with a demonstrator setup similar to a multi-line CT scanner geometry. While yielding image quality comparable to that of ART reconstruction, the modified 3D filtered back-projection algorithm is about two orders of magnitude faster. In contrast to iterative reconstruction schemes, it has the advantage that subfield-of-view reconstruction becomes feasible. This allows a selective reconstruction of the coherent-scatter form factor for a region of interest. The proposed modified 3D filtered back-projection algorithm is a powerful reconstruction technique to be implemented in a CSCT scanning system. This method gives coherent scatter CT the potential of becoming a competitive modality for medical imaging or nondestructive testing.

  1. Modified automatic term selection v2: A faster algorithm to calculate inelastic scattering cross-sections

    Energy Technology Data Exchange (ETDEWEB)

    Rusz, Ján, E-mail: jan.rusz@fysik.uu.se

    2017-06-15

    Highlights: • New algorithm for calculating the double differential scattering cross-section. • Good convergence properties are shown. • Outperforms the older MATS algorithm, particularly in zone-axis calculations. Abstract: We present a new algorithm for calculating the inelastic scattering cross-section for fast electrons. Compared to the previous Modified Automatic Term Selection (MATS) algorithm (Rusz et al. [18]), it has far better convergence properties in zone-axis calculations and it allows the contributions of individual atoms to be identified. One can think of it as a blend of the MATS algorithm and the method described by Weickenmeier and Kohl [10].

  2. Polarized X-ray excitation for scatter reduction in X-ray fluorescence computed tomography.

    Science.gov (United States)

    Vernekohl, Don; Tzoumas, Stratis; Zhao, Wei; Xing, Lei

    2018-05-25

    X-ray fluorescence computed tomography (XFCT) is a new molecular imaging modality which uses X-ray excitation to stimulate the emission of fluorescent photons in high-atomic-number contrast agents. Scatter contamination is one of the main challenges in XFCT imaging, limiting the molecular sensitivity. When polarized X-rays are used, it is possible to reduce the scatter contamination significantly by placing detectors perpendicular to the polarization direction. This study quantifies scatter contamination for polarized and unpolarized X-ray excitation and determines the advantages of scatter reduction. The amount of scatter in preclinical XFCT is quantified in Monte Carlo simulations. The fluorescent X-rays are emitted isotropically, while scattered X-rays propagate preferentially in the polarization direction. The magnitude of scatter contamination is studied in XFCT simulations of a mouse phantom. In this study, the contrast agent gold is examined as an example, but a scatter reduction from polarized excitation is also expected for other elements. The scatter reduction capability is examined for different polarization intensities with a monoenergetic X-ray excitation energy of 82 keV. The study evaluates two different geometrical shapes of CZT detectors, which are modeled with an energy resolution of 1 keV FWHM at an X-ray energy of 80 keV. Benefits of a detector placement perpendicular to the polarization direction are shown in iterative and analytic image reconstruction including scatter correction. The contrast-to-noise ratio (CNR) and the normalized mean square error (NMSE) are analyzed and compared for the reconstructed images. A substantial scatter reduction for common detector sizes was achieved for 100% and 80% linear polarization, while lower polarization intensities provide a decreased scatter reduction. By placing the detector perpendicular to the polarization direction, a scatter reduction by a factor of up to 5.5 can be achieved for common detector sizes. The image

  3. Modeling of detective quantum efficiency considering scatter-reduction devices

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ji Woong; Kim, Dong Woon; Kim, Ho Kyung [Pusan National University, Busan (Korea, Republic of)

    2016-05-15

    The reduction of signal-to-noise ratio (SNR) due to scattered photons cannot be restored and has thus become a severe issue in digital mammography [1]. Therefore, antiscatter grids are typically used in mammography. The scatter-cleanup performance of various scatter-reduction devices, such as air gaps [2], linear (1D) or cellular (2D) grids [3,4], and slot-scanning devices [5], has been extensively investigated by many research groups. At present, a digital mammography system with the slot-scanning geometry is also commercially available [6]. In this study, we theoretically investigate the effect of scattered photons on the detective quantum efficiency (DQE) performance of digital mammography detectors by using the cascaded-systems analysis (CSA) approach. We show a simple DQE formalism describing digital mammography detector systems equipped with scatter-reduction devices by regarding the scattered photons as additive noise sources. The low-frequency drop (LFD) increased with increasing PMMA thickness, and the amount of LFD indicated the corresponding scatter fraction (SF). The estimated SFs were 0.13, 0.21, and 0.29 for PMMA thicknesses of 10, 20, and 30 mm, respectively. While the solid line describing the measured MTF for PMMA of 0 mm was the result of a least-squares regression fit using Eq. (14), the other lines simply resulted from multiplying the fit result (for PMMA of 0 mm) by the (1-SF) estimated from the LFDs in the measured MTFs. Measured spectral noise-power densities over the entire frequency range did not change much with increasing scatter. On the other hand, the calculation results showed that the spectral noise-power densities increased with increasing scatter. This discrepancy may be explained by the fact that the model developed in this study does not account for the changes in x-ray interaction parameters for varying spectral shapes due to beam hardening with increasing PMMA thicknesses.

  4. Stray light reduction for Thomson scattering

    NARCIS (Netherlands)

    Bakker, L.P.; Kroesen, G.M.W.; Doebele, H.F.; Muraoka, K.

    1999-01-01

    In order to perform Thomson scattering in a gas discharge tube, the reduction of stray light is very important because of the very small Thomson cross-section. By introducing a sodium absorption cell as a notch filter, we can reduce the measured stray light considerably. Then we have to use a dye

  5. Patient-specific scatter correction in clinical cone beam computed tomography imaging made possible by the combination of Monte Carlo simulations and a ray tracing algorithm

    International Nuclear Information System (INIS)

    Thing, Rune S.; Bernchou, Uffe; Brink, Carsten; Mainegra-Hing, Ernesto

    2013-01-01

    Purpose: Cone beam computed tomography (CBCT) image quality is limited by scattered photons. Monte Carlo (MC) simulations provide the ability to predict the patient-specific scatter contamination in clinical CBCT imaging. Lengthy simulations prevent MC-based scatter correction from being fully implemented in a clinical setting. This study investigates the combination of fast MC simulations to predict scatter distributions with a ray tracing algorithm to allow calibration between simulated and clinical CBCT images. Material and methods: An EGSnrc-based user code (egs_cbct) was used to perform MC simulations of an Elekta XVI CBCT imaging system. A 60 keV x-ray source was used, and air kerma was scored at the detector plane. Several variance reduction techniques (VRTs) were used to increase the scatter calculation efficiency. Three patient phantoms based on CT scans were simulated, namely a brain, a thorax and a pelvis scan. A ray tracing algorithm was used to calculate the detector signal due to primary photons. A total of 288 projections were simulated, one for each thread on the computer cluster used for the investigation. Results: Scatter distributions for the brain, thorax and pelvis scans were simulated within 2% statistical uncertainty in two hours per scan. Within the same time, the ray tracing algorithm provided the primary signal for each of the projections. Thus, all the data needed for MC-based scatter correction in clinical CBCT imaging were obtained within two hours per patient, using a full simulation of the clinical CBCT geometry. Conclusions: This study shows that the use of MC-based scatter corrections in CBCT imaging has great potential to improve CBCT image quality. By use of powerful VRTs to predict scatter distributions and a ray tracing algorithm to calculate the primary signal, it is possible to obtain the necessary data for patient-specific MC scatter correction within two hours per patient.
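    The abstract states that the ray-traced primary enables calibration between simulated and clinical projections, but not how; the sketch below assumes the simplest possibility, a single global scale factor matching the simulated total (primary plus scatter) to the measured projection:

```python
# Hedged sketch: bring the MC scatter estimate into detector units via one
# global factor, then subtract it from the measured projection. The single
# scale factor is an assumption for illustration, not the paper's method.
import numpy as np

def scatter_correct(measured, mc_scatter, rt_primary):
    scale = measured.sum() / (rt_primary + mc_scatter).sum()
    return measured - scale * mc_scatter
```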

  6. Slot technique - an alternative method of scatter reduction in radiography

    International Nuclear Information System (INIS)

    Panzer, W.; Widenmann, L.

    1983-01-01

    The most common method of scatter reduction in radiography is the use of an antiscatter grid. Its disadvantage is the absorption of a certain percentage of primary radiation in the lead strips of the grid and the fact that, due to the limited thickness of the lead strips, their scatter absorption is also limited. A possibility for avoiding this disadvantage is offered by the so-called slot technique, i.e., the successive exposure of the subject with a narrow fan beam provided by slots in rather thick lead plates. The results of a comparison between the grid and slot techniques regarding dose to the patient, scatter reduction, image quality and the effect of automatic exposure control are reported. (author)

  7. A numerical study of super-resolution through fast 3D wideband algorithm for scattering in highly-heterogeneous media

    KAUST Repository

    Létourneau, Pierre-David

    2016-09-19

    We present a wideband fast algorithm capable of accurately computing the full numerical solution of the problem of acoustic scattering of waves by multiple finite-sized bodies such as spherical scatterers in three dimensions. By full solution, we mean that no assumption (e.g. Rayleigh scattering, geometrical optics, weak scattering, Born single scattering, etc.) is necessary regarding the properties of the scatterers, their distribution or the background medium. The algorithm is also fast in the sense that it scales linearly with the number of unknowns. We use this algorithm to study the phenomenon of super-resolution in time-reversal refocusing in highly-scattering media recently observed experimentally (Lemoult et al., 2011), and provide numerical arguments towards the fact that such a phenomenon can be explained through a homogenization theory.

  8. ANALYSIS OF PARAMETERIZATION VALUE REDUCTION OF SOFT SETS AND ITS ALGORITHM

    Directory of Open Access Journals (Sweden)

    Mohammed Adam Taheir Mohammed

    2016-02-01

    In this paper, the parameterization value reduction of soft sets and its algorithm in decision making are studied and described. It is based on the parameterization reduction of soft sets. The purpose of this study is to investigate the inherited disadvantages of parameterization reduction of soft sets and its algorithm. The algorithms presented in this study attempt to remove the least significant parameter values from the soft set. Through the analysis, two techniques have been described. Through this study, it is found that parameterization reduction of soft sets and its algorithm yield differing and inconsistent suboptimal results.

  9. On distribution reduction and algorithm implementation in inconsistent ordered information systems.

    Science.gov (United States)

    Zhang, Yanqin

    2014-01-01

    As one part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduction acquisition in dominance-relation based information systems. A matrix algorithm for distribution reduction acquisition is presented step by step, and a program is implemented from the algorithm. The approach provides an effective tool for theoretical research and for practical applications of ordered information systems. For more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, which shows the effectiveness of the algorithm in complicated information systems.

  10. Development of a 3D muon disappearance algorithm for muon scattering tomography

    Science.gov (United States)

    Blackwell, T. B.; Kudryavtsev, V. A.

    2015-05-01

    Upon passing through a material, muons lose energy, scatter off nuclei and atomic electrons, and can stop in the material. Muons will more readily lose energy in higher density materials. Therefore multiple muon disappearances within a localized volume may signal the presence of high-density materials. We have developed a new technique that improves the sensitivity of standard muon scattering tomography. This technique exploits these muon disappearances to perform non-destructive assay of an inspected volume. Muons that disappear have their track evaluated using a 3D line extrapolation algorithm, which is in turn used to construct a 3D tomographic image of the inspected volume. Results of Monte Carlo simulations that measure muon disappearance in different types of target materials are presented. The ability to differentiate between different density materials using the 3D line extrapolation algorithm is established. Finally the capability of this new muon disappearance technique to enhance muon scattering tomography techniques in detecting shielded HEU in cargo containers has been demonstrated.
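    The 3D line-extrapolation step can be pictured as marching each disappearing muon's incoming track through a voxelized volume and incrementing every voxel it crosses; voxels that accumulate many disappearances flag dense, stopping material. A minimal sketch (uniform stepping rather than exact voxel traversal):

```python
# Accumulate extrapolated tracks of disappearing muons into a voxel grid.
import numpy as np

def accumulate_track(counts, entry, direction, voxel_size=1.0, step=0.1):
    direction = direction / np.linalg.norm(direction)
    pos = entry.astype(float).copy()
    visited = set()
    while True:
        idx = tuple((pos // voxel_size).astype(int))
        if any(i < 0 or i >= s for i, s in zip(idx, counts.shape)):
            break                        # the track has left the volume
        if idx not in visited:           # count each voxel once per track
            counts[idx] += 1
            visited.add(idx)
        pos += step * direction

counts = np.zeros((20, 20, 20))
accumulate_track(counts,
                 entry=np.array([10.0, 10.0, 19.0]),
                 direction=np.array([0.05, -0.02, -1.0]))
```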

  11. Implementing peak load reduction algorithms for household electrical appliances

    International Nuclear Information System (INIS)

    Dlamini, Ndumiso G.; Cromieres, Fabien

    2012-01-01

    Considering household appliance automation for reduction of household peak power demand, this study explored aspects of the interaction between household automation technology and human behaviour. Given a programmable household appliance switching system, and user-reported appliance use times, we simulated the load reduction effectiveness of three types of algorithms, which were applied at both the single household level and across all 30 households. All three algorithms effected significant load reductions, while the least-to-highest potential user inconvenience ranking was: coordinating the timing of frequent intermittent loads (algorithm 2); moving period-of-day time-flexible loads to off-peak times (algorithm 1); and applying short-term time delays to avoid high peaks (algorithm 3) (least accommodating). Peak reduction was facilitated by load interruptibility, time of use flexibility and the willingness of users to forgo impulsive appliance use. We conclude that a general factor determining the ability to shift the load due to a particular appliance is the time-buffering between the service delivered and the power demand of an appliance. Time-buffering can be ‘technologically inherent’, due to human habits, or realised by managing user expectations. There are implications for the design of appliances and home automation systems. - Highlights: ► We explored the interaction between appliance automation and human behaviour. ► There is potential for considerable load shifting of household appliances. ► Load shifting for load reduction is eased with increased time buffering. ► Design, human habits and user expectations all influence time buffering. ► Certain automation and appliance design features can facilitate load shifting.

  12. MUSIC ALGORITHM FOR LOCATING POINT-LIKE SCATTERERS CONTAINED IN A SAMPLE ON FLAT SUBSTRATE

    Institute of Scientific and Technical Information of China (English)

    Dong Heping; Ma Fuming; Zhang Deyue

    2012-01-01

    In this paper, we consider a MUSIC algorithm for locating point-like scatterers contained in a sample on a flat substrate. Based on an asymptotic expansion of the scattering amplitude proposed by Ammari et al., the reconstruction problem can be reduced to the calculation of the Green function corresponding to the background medium. In addition, we use an explicit formulation of the Green function in the MUSIC algorithm to simplify the calculation when the cross-section of the sample is a half-disc. Numerical experiments are included to demonstrate the feasibility of this method.

  13. Data reduction for neutron scattering from plutonium samples. Final report

    International Nuclear Information System (INIS)

    Seeger, P.A.

    1997-01-01

    An experiment performed in August, 1993, on the Low-Q Diffractometer (LQD) at the Manuel Lujan Jr. Neutron Scattering Center (MLNSC) was designed to study the formation and annealing of He bubbles in aged 239 Pu metal. Significant complications arise in the reduction of the data because of the very high total neutron cross section of 239 Pu, and also because the samples are difficult to make uniform and to characterize. This report gives the details of the data and the data reduction procedures, presents the resulting scattering patterns in terms of macroscopic cross section as a function of momentum transfer, and suggests improvements for future experiments.

  14. Fast sampling algorithm for the simulation of photon Compton scattering

    International Nuclear Information System (INIS)

    Brusa, D.; Salvat, F.

    1996-01-01

    A simple algorithm for the simulation of Compton interactions of unpolarized photons is described. The energy and direction of the scattered photon, as well as the active atomic electron shell, are sampled from the double-differential cross section obtained by Ribberfors from the relativistic impulse approximation. The algorithm consistently accounts for Doppler broadening and electron binding effects. Simplifications of Ribberfors' formula, required for efficient random sampling, are discussed. The algorithm involves a combination of inverse transform, composition and rejection methods. A parameterization of the Compton profile is proposed from which the simulation of Compton events can be performed analytically in terms of a few parameters that characterize the target atom, namely shell ionization energies, occupation numbers and maximum values of the one-electron Compton profiles. (orig.)
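    The full Ribberfors-based sampler (Doppler broadening, binding effects, shell selection) is beyond a snippet, but the free-electron Klein-Nishina core it extends illustrates the rejection step the abstract mentions. A sketch:

```python
# Rejection sampling of the free-electron Klein-Nishina angular kernel.
# f(cos_t) = k^2 (k + 1/k - sin^2 t) with k = E'/E attains its maximum 2
# at cos_t = 1 (since k^3 + k <= 2 for k <= 1), so 2 is a valid envelope.
import numpy as np

def sample_costheta(alpha, rng=np.random.default_rng()):
    """alpha: photon energy in units of the electron rest energy (511 keV)."""
    while True:
        cos_t = rng.uniform(-1.0, 1.0)
        k = 1.0 / (1.0 + alpha * (1.0 - cos_t))          # Compton ratio E'/E
        f = k * k * (k + 1.0 / k - (1.0 - cos_t * cos_t))
        if rng.uniform(0.0, 2.0) <= f:
            return cos_t

cos_t = sample_costheta(alpha=662.0 / 511.0)             # e.g. a Cs-137 photon
e_ratio = 1.0 / (1.0 + (662.0 / 511.0) * (1.0 - cos_t))  # scattered/incident
```

    The paper's algorithm builds on top of this kernel: the scattered energy is additionally sampled using the parameterized one-electron Compton profiles, shell ionization energies and occupation numbers, so that Doppler broadening and binding effects are reproduced.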

  15. Uneven-Layered Coding Metamaterial Tile for Ultra-wideband RCS Reduction and Diffuse Scattering.

    Science.gov (United States)

    Su, Jianxun; He, Huan; Li, Zengrui; Yang, Yaoqing Lamar; Yin, Hongcheng; Wang, Junhong

    2018-05-25

    In this paper, a novel uneven-layered coding metamaterial tile is proposed for ultra-wideband radar cross section (RCS) reduction and diffuse scattering. The metamaterial tile is composed of two kinds of square-ring unit cells with different layer thicknesses. The reflection phase difference of 180° (±37°) between the two unit cells covers an ultra-wide frequency range. Due to the phase cancellation between the two unit cells, the metamaterial tile has a scattering pattern of four strong lobes deviating from the normal direction. The metamaterial tile and its 90-degree rotation can be encoded as the '0' and '1' elements to cover an object, and a diffuse scattering pattern can be realized by optimizing the phase distribution, leading to reduction of the monostatic and bistatic RCSs simultaneously. The metamaterial tile can achieve -10 dB RCS reduction from 6.2 GHz to 25.7 GHz with a ratio bandwidth of 4.15:1 at normal incidence. The measured and simulated results are in good agreement and validate that the proposed uneven-layered coding metamaterial tile can greatly expand the bandwidth for RCS reduction and diffuse scattering.

  16. A systematic approach to robust preconditioning for gradient-based inverse scattering algorithms

    International Nuclear Information System (INIS)

    Nordebo, Sven; Fhager, Andreas; Persson, Mikael; Gustafsson, Mats

    2008-01-01

    This paper presents a systematic approach to robust preconditioning for gradient-based nonlinear inverse scattering algorithms. In particular, one- and two-dimensional inverse problems are considered where the permittivity and conductivity profiles are unknown and the input data consist of the scattered field over a certain bandwidth. A time-domain least-squares formulation is employed and the inversion algorithm is based on a conjugate gradient or quasi-Newton algorithm together with an FDTD electromagnetic solver. A Fisher information analysis is used to estimate the Hessian of the error functional. A robust preconditioner is then obtained by incorporating a parameter scaling such that the scaled Fisher information has a unit diagonal. By improving the conditioning of the Hessian, the convergence rate of the conjugate gradient or quasi-Newton methods is improved. The preconditioner is robust in the sense that the scaling, i.e. the diagonal Fisher information, is virtually invariant to the numerical resolution and the discretization model that is employed. Numerical examples of image reconstruction are included to illustrate the efficiency of the proposed technique.
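    As a concrete reading of the scaling step: rescaling the unknowns so that the estimated Fisher information has a unit diagonal is equivalent, for a plain gradient step, to dividing each gradient component by the corresponding diagonal Fisher entry. A schematic sketch (the Fisher diagonal itself would come from the paper's analysis of the forward model and is assumed given):

```python
# One gradient step in Fisher-scaled variables, mapped back to the original
# parameters: with x = D s and D = diag(1/sqrt(F_ii)), the step -eta*grad_s
# becomes -eta * D^2 * grad_x, i.e. division by the Fisher diagonal.
import numpy as np

def preconditioned_step(grad_x, fisher_diag, eta=1.0):
    scale = 1.0 / np.sqrt(fisher_diag)       # unit-diagonal Fisher scaling
    return -eta * scale * (scale * grad_x)   # = -eta * grad_x / fisher_diag
```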

  17. A method and algorithm for correlating scattered light and suspended particles in polluted water

    International Nuclear Information System (INIS)

    Sami Gumaan Daraigan; Mohd Zubir Matjafri; Khiruddin Abdullah; Azlan Abdul Aziz; Abdul Aziz Tajuddin; Mohd Firdaus Othman

    2005-01-01

    An optical model has been developed for measuring total suspended solids (TSS) concentrations in water. This approach is based on the characteristics of light scattered from the suspended particles in water samples. An optical sensor system (an active spectrometer) has been developed to correlate the pollutant (TSS) concentration with the scattered radiation. Scattered light was measured in terms of the output voltage of the phototransistor of the sensor system. The developed algorithm was used to calculate and estimate the concentrations of the polluted water samples. The proposed algorithm was calibrated using the observed readings. The results display a strong correlation between the radiation values and the total suspended solids concentrations. The proposed system yields a high degree of accuracy, with a correlation coefficient (R) of 0.99 and a root mean square error (RMS) of 63.57 mg/l. (Author)

  18. Sound Scattering and Its Reduction by a Janus Sphere Type

    Directory of Open Access Journals (Sweden)

    Deliya Kim

    2014-01-01

    Sound scattering by a Janus sphere type is considered. The sphere has two surface zones: a soft surface of zero acoustic impedance and a hard surface of infinite acoustic impedance. The zones are arranged such that axisymmetry of the sound field is preserved. The equivalent source method is used to compute the sound field. It is shown that, by varying the sizes of the soft and hard zones on the sphere, a significant reduction can be achieved in the scattered acoustic power and upstream directivity when the sphere is near a free surface and its soft zone faces the incoming wave, and vice versa for a hard ground. In both cases the size of the sphere's hard zone is much larger than that of its soft zone. The boundary location between the two zones coincides with the location of a zero-pressure line of the incoming standing sound wave, thus masking the sphere within the sound field reflected by the free surface or the hard ground. The reduction in the scattered acoustic power diminishes when the sphere is placed in free space. Variations of the scattered acoustic power and directivity with the sound frequency are also given and discussed.

  19. A fast calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations

    Science.gov (United States)

    Fiorino, Steven T.; Elmore, Brannon; Schmidt, Jaclyn; Matchefts, Elizabeth; Burley, Jarred L.

    2016-05-01

    Properly accounting for multiple scattering effects can have important implications for remote sensing and possibly directed energy applications. For example, increasing path radiance can affect signal noise. This study describes the implementation of a fast-calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations into the Laser Environmental Effects Definition and Reference (LEEDR) atmospheric characterization and radiative transfer code. The multiple scattering algorithm fully solves for molecular, aerosol, cloud, and precipitation single-scatter layer effects with a Mie algorithm at every calculation point/layer rather than using an interpolated value from a pre-calculated look-up table. This top-down cumulative diffusivity method first considers the incident solar radiance contribution to a given layer accounting for solid angle and elevation, and it then measures the contribution of diffused energy from previous layers based on the transmission of the current level to produce a cumulative radiance that is reflected from a surface and measured at the aperture at the observer. Then a unique set of asymmetry and backscattering phase function parameter calculations is made, which accounts for the radiance loss due to the molecular and aerosol constituent reflectivity within a level and allows for a more accurate characterization of diffuse layers that contribute to multiple scattered radiances in inhomogeneous atmospheres. The code logic is valid for spectral bands between 200 nm and radio wavelengths, and the accuracy is demonstrated by comparing the results from LEEDR to observed sky radiance data.

  20. Scatter-Reducing Sounding Filtration Using a Genetic Algorithm and Mean Monthly Standard Deviation

    Science.gov (United States)

    Mandrake, Lukas

    2013-01-01

    Retrieval algorithms like that used by the Orbiting Carbon Observatory (OCO)-2 mission generate massive quantities of data of varying quality and reliability. A computationally efficient, simple method of labeling problematic datapoints or predicting soundings that will fail is required for basic operation, given that only 6% of the retrieved data may be operationally processed. This method automatically obtains a filter designed to reduce scatter based on a small number of input features. Most machine-learning filter construction algorithms attempt to predict error in the CO2 value. By using the surrogate goal of Mean Monthly Standard Deviation (MMS), the aim is to reduce the scatter of the retrieved CO2 values rather than solving the harder problem of reducing CO2 error. This lends itself to improved interpretability and performance. This software reduces the scatter of retrieved CO2 values globally based on a minimum number of input features. It can be used as a prefilter to reduce the number of soundings requested, or as a post-filter to label data quality. The use of MMS provides a much cleaner, clearer filter than the standard ABS(CO2-truth) metrics previously employed by competing methods. The software's main strength lies in a clearer (i.e., fewer features required) filter that more efficiently reduces scatter in retrieved CO2 rather than focusing on the more complex (and easily removed) bias issues.
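    A sketch of the surrogate objective, assuming MMS is the mean over calendar months of the standard deviation of the retained CO2 retrievals (column names are hypothetical):

```python
# Mean Monthly Standard deviation (MMS) of retained retrievals: the scatter
# a candidate filter leaves behind. A filter (e.g. a threshold on an input
# feature, as evolved by a genetic algorithm) is scored by this value.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "time": pd.date_range("2015-01-01", periods=1000, freq="6H"),
    "xco2": 400 + rng.normal(0, 1, 1000),        # retrieved CO2 [ppm]
    "albedo": rng.uniform(0, 1, 1000),           # example input feature
})

def mean_monthly_std(df, keep):
    kept = df[keep]
    return kept.groupby(kept["time"].dt.to_period("M"))["xco2"].std().mean()

baseline = mean_monthly_std(df, np.ones(len(df), dtype=bool))
candidate = mean_monthly_std(df, df["albedo"] < 0.8)   # one threshold filter
```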

  1. Development and performance analysis of a lossless data reduction algorithm for VoIP

    International Nuclear Information System (INIS)

    Misbahuddin, S.; Boulejfen, N.

    2014-01-01

    VoIP (Voice over IP) is becoming an alternative way of carrying voice communications over the Internet. To better utilize voice call bandwidth, standard compression algorithms are applied in VoIP systems. However, at high compression ratios these algorithms degrade the voice quality. This paper presents a lossless data reduction technique to improve the VoIP data transfer rate over the IP network. The proposed algorithm exploits the data redundancies in digitized voice frames (VFs) generated by VoIP systems. The performance of the proposed data reduction algorithm is presented in terms of compression ratio. The proposed algorithm helps retain voice quality along with improving VoIP data transfer rates. (author)
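    The record does not specify the coding scheme, so the sketch below shows one generic lossless reduction for digitized voice frames: byte-wise delta coding of adjacent PCM samples followed by run-length encoding, exploiting the kind of inter-sample redundancy the abstract refers to:

```python
# Generic lossless voice-frame reduction (illustrative, not the paper's
# scheme): delta-code 8-bit PCM samples (adjacent samples are highly
# correlated, so deltas are small and repetitive), then run-length encode.
# Decoding reverses the steps exactly (cumulative sum modulo 256).
import numpy as np

def encode(frame):
    """frame: 1-D np.uint8 array of PCM samples."""
    delta = np.diff(frame, prepend=frame[:1])    # wraps modulo 256, lossless
    out, run = [], 1
    for prev, cur in zip(delta, delta[1:]):
        if cur == prev and run < 255:
            run += 1
        else:
            out.append((int(prev), run))
            run = 1
    out.append((int(delta[-1]), run))
    return out                                   # (value, run-length) pairs

frame = (128 + 20 * np.sin(np.arange(160) / 5)).astype(np.uint8)
pairs = encode(frame)
ratio = frame.nbytes / (2 * len(pairs))          # 2 bytes per pair
```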

  2. Tunable output-frequency filter algorithm for imaging through scattering media under LED illumination

    Science.gov (United States)

    Zhou, Meiling; Singh, Alok Kumar; Pedrini, Giancarlo; Osten, Wolfgang; Min, Junwei; Yao, Baoli

    2018-03-01

    We present a tunable output-frequency filter (TOF) algorithm to reconstruct the object from noisy experimental data under low-power partially coherent illumination, such as an LED, when imaging through scattering media. In the iterative algorithm, we employ Gaussian functions with different filter windows at different stages of the iteration process to reduce corruption from experimental noise and to search for a global minimum in the reconstruction. In comparison with the conventional iterative phase retrieval algorithm, we demonstrate that the proposed TOF algorithm achieves consistent and reliable reconstruction in the presence of experimental noise. Moreover, the spatial resolution and distinctive features are retained in the reconstruction since the filter is applied only to the region outside the object. The feasibility of the proposed method is proved by experimental results.

  3. Cell light scattering characteristic numerical simulation research based on FDTD algorithm

    Science.gov (United States)

    Lin, Xiaogang; Wan, Nan; Zhu, Hao; Weng, Lingdong

    2017-01-01

    In this study, the finite-difference time-domain (FDTD) algorithm is used to work out the cell light scattering problem. Before beginning the simulation comparison, it is necessary to find out the changes or differences between normal cells and abnormal cells, which may be cancerous or maldeveloped. Preparation for the simulation consists of building a simple cell model comprising organelles, a nucleus and cytoplasm, and setting a suitable mesh precision. Setting up the total-field/scattered-field source as the excitation source and a far-field projection analysis group is also important. Every step needs to be justified by mathematical principles such as numerical dispersion, the perfectly matched layer boundary condition and near-to-far-field extrapolation. The simulation results indicate that a change in the position of the nucleus increases the backscattering intensity, and a significant difference in the peak value of the scattering intensity may result from changes in the size of the cytoplasm. The study may help identify regularities in the simulation results, which can be meaningful for the early diagnosis of cancers.

  4. A numerical study of super-resolution through fast 3D wideband algorithm for scattering in highly-heterogeneous media

    KAUST Repository

    Létourneau, Pierre-David; Wu, Ying; Papanicolaou, George; Garnier, Josselin; Darve, Eric

    2016-01-01

    We present a wideband fast algorithm capable of accurately computing the full numerical solution of the problem of acoustic scattering of waves by multiple finite-sized bodies such as spherical scatterers in three dimensions. By full solution, we mean that no assumption (e.g. Rayleigh scattering, geometrical optics, weak scattering, Born single scattering, etc.) is necessary regarding the properties of the scatterers, their distribution or the background medium.

  5. Environmental Optimization Using the WAste Reduction Algorithm (WAR)

    Science.gov (United States)

    Traditionally, chemical process designs were optimized using purely economic measures such as rate of return. EPA scientists developed the WAste Reduction algorithm (WAR) so that the environmental impacts of designs could easily be evaluated. The goal of WAR is to reduce the environmental impacts of process designs.

  6. Column Reduction of Polynomial Matrices; Some Remarks on the Algorithm of Wolovich

    NARCIS (Netherlands)

    Praagman, C.

    1996-01-01

    Recently an algorithm has been developed for column reduction of polynomial matrices. In a previous report the authors described a Fortran implementation of this algorithm. In this paper we compare the results of that implementation with an implementation of the algorithm originally developed by Wolovich.

  7. Focusing light through strongly scattering media using genetic algorithm with SBR discriminant

    Science.gov (United States)

    Zhang, Bin; Zhang, Zhenfeng; Feng, Qi; Liu, Zhipeng; Lin, Chengyou; Ding, Yingchun

    2018-02-01

    In this paper, we experimentally demonstrate light focusing through strongly scattering media by performing binary amplitude optimization with a genetic algorithm. In the experiments, we control the 160 000 mirrors of a digital micromirror device to modulate and optimize the light transmission paths in the strongly scattering media. We replace the universal target-position-intensity (TPI) discriminant with a signal-to-background ratio (SBR) discriminant in the genetic algorithm. With 400 incident segments, a relative enhancement value of 17.5% with a ground glass diffuser is achieved, which is higher than the theoretical value of 1/(2π) ≈ 15.9% for binary amplitude optimization. According to our repeated experiments, we conclude that, with the same segment number, the enhancement for the SBR discriminant is always higher than that for the TPI discriminant, which results from the background-weakening effect of the SBR discriminant. In addition, with the SBR discriminant, the diameter of the focus can be varied from 7 to 70 μm at arbitrary positions. Besides, multiple foci with high enhancement are obtained. Our work provides a meaningful reference for the study of binary amplitude optimization in the wavefront shaping field.
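    A toy version of the optimization loop conveys the idea: binary masks evolve under the SBR fitness, here evaluated against a random complex transmission matrix standing in for the diffuser, with all sizes scaled far below the experiment's 160 000 mirrors and 400 segments:

```python
# Toy genetic algorithm with an SBR fitness for binary amplitude masks.
# The random matrix T is a stand-in for the unknown scattering medium.
import numpy as np

rng = np.random.default_rng(0)
n_seg, n_out, target = 64, 256, 128            # segments, speckle pixels, focus
T = rng.normal(size=(n_out, n_seg)) + 1j * rng.normal(size=(n_out, n_seg))

def sbr(mask):
    I = np.abs(T @ mask) ** 2                  # binary amplitude modulation
    background = (I.sum() - I[target]) / (n_out - 1)
    return I[target] / background              # signal-to-background ratio

pop = rng.integers(0, 2, size=(30, n_seg))     # random initial masks
for gen in range(200):
    fitness = np.array([sbr(m) for m in pop])
    pop = pop[np.argsort(fitness)[::-1]]       # rank by SBR, best first
    children = []
    for _ in range(len(pop) // 2):
        a, b = pop[rng.integers(0, 10, size=2)]        # parents from elite
        cut = rng.integers(1, n_seg)
        child = np.concatenate([a[:cut], b[cut:]])     # one-point crossover
        child[rng.random(n_seg) < 0.02] ^= 1           # bit-flip mutation
        children.append(child)
    pop[len(pop) - len(children):] = children          # elitist replacement

print("best SBR:", sbr(pop[0]))
```

    Swapping the TPI objective (intensity at the target alone) for this ratio is the paper's change; penalizing the background directly is what the authors credit for the consistently higher enhancement.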

  8. Parameter-free Network Sparsification and Data Reduction by Minimal Algorithmic Information Loss

    KAUST Repository

    Zenil, Hector

    2018-02-16

    The study of large and complex datasets, or big data, organized as networks has emerged as one of the central challenges in most areas of science and technology. Cellular and molecular networks in biology are one of the prime examples. Hence, a number of techniques for data dimensionality reduction, especially in the context of networks, have been developed. Yet, current techniques require a predefined metric upon which to minimize the data size. Here we introduce a family of parameter-free algorithms based on (algorithmic) information theory that are designed to minimize the loss of any (enumerable computable) property contributing to the object's algorithmic content and thus important to preserve in a process of data dimension reduction when forcing the algorithm to delete the least important features first. Being independent of any particular criterion, they are universal in a fundamental mathematical sense. Using suboptimal approximations of efficient (polynomial) estimations, we demonstrate how to preserve network properties, outperforming other (leading) algorithms for network dimension reduction. Our method preserves all graph-theoretic indices measured, ranging from degree distribution and clustering coefficient to edge betweenness and degree and eigenvector centralities. We conclude and demonstrate numerically that our parameter-free Minimal Information Loss Sparsification (MILS) method is robust, has the potential to maximize the preservation of all recursively enumerable features in data and networks, and achieves results equal to or significantly better than other data reduction and network sparsification methods.

  9. The collapsed cone algorithm for (192)Ir dosimetry using phantom-size adaptive multiple-scatter point kernels.

    Science.gov (United States)

    Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-07

    The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose, initially derived in water phantoms of various dimensions, was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy-specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7%, depending on the distance from the source and the phantom dimensions). CC agrees well with MC in the high-dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to the breast dimensions. The investigated approximation in multiple-scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant, which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm, with the worst case being a source/implant located well within a patient.

  10. Computational study of scattering of a zero-order Bessel beam by large nonspherical homogeneous particles with the multilevel fast multipole algorithm

    Science.gov (United States)

    Yang, Minglin; Wu, Yueqian; Sheng, Xinqing; Ren, Kuan Fang

    2017-12-01

    Computation of the scattering of shaped beams by large nonspherical particles is a challenge in both the optics and electromagnetics domains, since it concerns many research fields. In this paper, we report new progress in the numerical computation of scattering diagrams. Our algorithm can calculate the scattering of a particle as large as 110 wavelengths, or 700 in size parameter. The particle can be transparent or absorbing, of arbitrary shape, and smooth or with a sharp surface, such as the Chebyshev particles or ice crystals. To illustrate the capability of the algorithm, a zero-order Bessel beam is taken as the incident beam, and the scattering of ellipsoidal and Chebyshev particles is taken as an example. Some special phenomena are revealed and examined. The scattering problem is formulated with the combined tangential formulation and solved iteratively with the aid of the multilevel fast multipole algorithm, which is well parallelized with the message passing interface on a distributed-memory computer platform using a hybrid partitioning strategy. The numerical predictions are compared with the results of a rigorous method for a spherical particle to validate the accuracy of the approach. The scattering diagrams of large ellipsoidal particles with various parameters are examined. The effects of the aspect ratio, as well as of the half-cone angle of the incident zero-order Bessel beam and the off-axis distance, on the scattered intensity are studied. Scattering by an asymmetric Chebyshev particle with a size parameter larger than 700 is also presented to show the capability of the method for computing scattering by arbitrarily shaped particles.
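
    For readers unfamiliar with the incident field, the following sketch evaluates the transverse profile of an ideal zero-order Bessel beam, E ∝ J0(k sin(α) ρ), with α the half-cone angle mentioned in the abstract. The wavelength and angle are placeholder assumptions; the paper's beam model (and any off-axis shift) may differ in detail.

```python
# Transverse profile of an ideal zero-order Bessel beam at z = 0.
import numpy as np
from scipy.special import j0

wavelength = 532e-9               # assumed wavelength, m
k = 2 * np.pi / wavelength        # wavenumber
alpha = np.deg2rad(5.0)           # half-cone angle (assumption)
rho = np.linspace(0, 20e-6, 500)  # radial coordinate, m

field = j0(k * np.sin(alpha) * rho)   # transverse amplitude
intensity = field**2                  # what a detector would see
print("central-lobe radius ~", 2.405 / (k * np.sin(alpha)), "m")
```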

  11. An EPID response calculation algorithm using spatial beam characteristics of primary, head scattered and MLC transmitted radiation

    International Nuclear Information System (INIS)

    Rosca, Florin; Zygmanski, Piotr

    2008-01-01

    We have developed an independent algorithm for the prediction of electronic portal imaging device (EPID) response. The algorithm uses a set of images [open beam, closed multileaf collimator (MLC), various fence and modified sweeping-gap patterns] to separately characterize the primary and head-scatter contributions to the EPID response. It also characterizes the relevant dosimetric properties of the MLC: transmission, dosimetric gap, MLC scatter [P. Zygmanski et al., J. Appl. Clin. Med. Phys. 8(4) (2007)], inter-leaf leakage, and tongue and groove [F. Lorenz et al., Phys. Med. Biol. 52, 5985-5999 (2007)]. The primary radiation is modeled with a single Gaussian distribution defined at the target position, while the head-scatter radiation is modeled with a triple Gaussian distribution defined downstream of the target. The distances between the target and the head-scatter source, jaws, and MLC are model parameters. The scatter associated with the EPID is implicit in the model. Open-beam images are predicted to within 1% of the maximum value across the image. Other MLC test patterns and intensity-modulated radiation therapy fluences are predicted to within 1.5% of the maximum value. The presented method was applied to the Varian aS500 EPID but is designed to work with any planar detector with sufficient spatial resolution.
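
    A minimal numerical sketch of the source model described above: a single Gaussian for the primary (focal) source plus a triple Gaussian for head scatter. All widths, weights and grid choices are illustrative assumptions, not the fitted parameters of the paper's model.

```python
# Primary + head-scatter fluence model: one focal Gaussian plus a triple
# Gaussian extra-focal source. Parameters are illustrative placeholders.
import numpy as np

def gaussian2d(x, y, sigma):
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

x, y = np.meshgrid(np.linspace(-10, 10, 256),
                   np.linspace(-10, 10, 256))          # cm at isocenter

primary = gaussian2d(x, y, sigma=0.1)                  # single Gaussian focal spot
head_scatter = sum(w * gaussian2d(x, y, s)             # triple Gaussian source
                   for w, s in [(0.05, 1.0), (0.02, 3.0), (0.01, 8.0)])

fluence = primary + head_scatter   # model response before MLC transmission terms
print("peak-normalized head-scatter fraction:", head_scatter.sum() / fluence.sum())
```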

  12. Image noise reduction algorithm for digital subtraction angiography: clinical results.

    Science.gov (United States)

    Söderman, Michael; Holmin, Staffan; Andersson, Tommy; Palmgren, Charlotta; Babic, Draženko; Hoornaert, Bart

    2013-11-01

    To test the hypothesis that an image noise reduction algorithm designed for digital subtraction angiography (DSA) in interventional neuroradiology enables a reduction in the patient entrance dose by a factor of 4 while maintaining image quality. This prospective clinical study was approved by the local ethics committee, and all 20 adult patients provided informed consent. DSA was performed with the default reference DSA program and with a quarter-dose DSA program that combined modified acquisition parameters (to reduce patient radiation exposure) with a real-time noise-reduction algorithm. Two consecutive biplane DSA data sets were acquired in each patient. The dose-area product (DAP) was calculated for each image and compared. A randomized, blinded, offline reading study was conducted to show noninferiority of the quarter-dose image sets. Overall, 40 samples per treatment group were necessary to achieve 80% power, calculated by using a one-sided α level of 2.5%. The mean DAP with the quarter-dose program was 25.3% ± 0.8% of that with the reference program. The median overall image quality scores with the reference program were 9, 13, and 12 for readers 1, 2, and 3, respectively. These scores increased slightly to 12, 15, and 12, respectively, with the quarter-dose program imaging chain. In DSA, a change in technique factors combined with a real-time noise-reduction algorithm will reduce the patient entrance dose by 75% without a loss of image quality. RSNA, 2013

  13. Comparison of order reduction algorithms for application to electrical networks

    Directory of Open Access Journals (Sweden)

    Lj. Radić-Weissenfeld

    2009-05-01

    This paper addresses issues related to the minimization of the computational burden, in terms of both memory and speed, during the simulation of electrical models. In order to achieve a simple and computationally fast model, the order reduction of its reducible part is proposed. In this paper an overview of order reduction algorithms and their application is given.

  14. The Support Reduction Algorithm for Computing Non-Parametric Function Estimates in Mixture Models

    OpenAIRE

    GROENEBOOM, PIET; JONGBLOED, GEURT; WELLNER, JON A.

    2008-01-01

    In this paper, we study an algorithm (which we call the support reduction algorithm) that can be used to compute non-parametric M-estimators in mixture models. The algorithm is compared with natural competitors in the context of convex regression and the ‘Aspect problem’ in quantum physics.

  15. Reduction rules-based search algorithm for opportunistic replacement strategy of multiple life-limited parts

    Directory of Open Access Journals (Sweden)

    Xuyun FU

    2018-01-01

    The opportunistic replacement of multiple Life-Limited Parts (LLPs) is a problem widely encountered in industry. The replacement strategy for LLPs has a great impact on the total maintenance cost of much equipment. This article focuses on finding a quick and effective algorithm for this problem. To improve algorithm efficiency, six reduction rules are suggested from the perspectives of solution feasibility, determination of the replacement of LLPs, determination of the maintenance occasion, and solution optimality. Based on these six reduction rules, a search algorithm is proposed that can identify one or several optimal solutions. A numerical experiment shows that the six reduction rules are effective and that the time consumed by the algorithm is less than 38 s if the total life of the equipment is shorter than 55,000 and the number of LLPs is less than 11. A specific case shows that the algorithm can obtain, within 10 s, optimal solutions much better than the result of the traditional method, and that it can provide support for determining the LLPs to be replaced when defining the maintenance workscope of an aircraft engine. The algorithm is therefore applicable to engineering applications concerning the opportunistic replacement of multiple LLPs in aircraft engines.

  16. Four-Component Scattering Power Decomposition Algorithm with Rotation of Covariance Matrix Using ALOS-PALSAR Polarimetric Data

    Directory of Open Access Journals (Sweden)

    Yasuhiro Nakamura

    2012-07-01

    The present study introduces the four-component scattering power decomposition (4-CSPD) algorithm with rotation of the covariance matrix, and presents an experimental proof of the equivalence between the 4-CSPD algorithms based on rotation of the covariance matrix and of the coherency matrix. From a theoretical point of view, the 4-CSPD algorithms with rotation of the two matrices are identical. Although this seems obvious, no experimental evidence had yet been presented. In this paper, using polarimetric synthetic aperture radar (POLSAR) data acquired by the Phased Array L-band SAR (PALSAR) on board the Advanced Land Observing Satellite (ALOS), an experimental proof is presented to show that both algorithms indeed produce identical results.

  17. A simple, direct method for x-ray scatter estimation and correction in digital radiography and cone-beam CT

    International Nuclear Information System (INIS)

    Siewerdsen, J.H.; Daly, M.J.; Bakhtiar, B.

    2006-01-01

    X-ray scatter poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in contrast reduction, image artifacts, and lack of CT number accuracy. We report the performance of a simple scatter correction method in which the scatter fluence is estimated directly in each projection from pixel values near the edge of the detector behind the collimator leaves. The algorithm operates on the simple assumption that signal in the collimator shadow is attributable to x-ray scatter, and the 2D scatter fluence is estimated by interpolating between pixel values measured along the top and bottom edges of the detector behind the collimator leaves. The resulting scatter fluence estimate is subtracted from each projection to yield an estimate of the primary-only images for CBCT reconstruction. Performance was investigated in phantom experiments on an experimental CBCT benchtop, and the effect on image quality was demonstrated in patient images (head, abdomen, and pelvis sites) obtained on a preclinical system for CBCT-guided radiation therapy. The algorithm provides a significant reduction in scatter artifacts without compromising the contrast-to-noise ratio (CNR). For example, in a head phantom, cupping artifact was essentially eliminated, CT number accuracy was restored to within 3%, and CNR (breast-to-water) was improved by up to 50%. Similarly, in a body phantom, cupping artifact was reduced by at least a factor of 2 without loss in CNR. Patient images demonstrate significantly increased uniformity, accuracy, and contrast, with an overall improvement in image quality in all sites investigated. Qualitative evaluation illustrates that soft-tissue structures that are otherwise undetectable are clearly delineated in scatter-corrected reconstructions. Since scatter is estimated directly in each projection, the algorithm is robust with respect to system geometry, patient size and heterogeneity, patient motion, etc., operating without prior information or analytical modeling.
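
    The interpolation-and-subtraction step lends itself to a very short sketch. The shadow-row indices below are assumptions for illustration; the actual rows depend on the collimator setting.

```python
# Collimator-shadow scatter estimate: pixels behind the top and bottom leaves
# are assumed to record scatter only; the 2D scatter fluence is linearly
# interpolated between them, column by column, and subtracted.
import numpy as np

def estimate_scatter(proj: np.ndarray, shadow_rows: int = 8) -> np.ndarray:
    """Column-wise linear interpolation between top and bottom shadow means."""
    rows = proj.shape[0]
    top = proj[:shadow_rows].mean(axis=0)        # scatter behind top leaves
    bottom = proj[-shadow_rows:].mean(axis=0)    # scatter behind bottom leaves
    t = np.linspace(0.0, 1.0, rows)[:, None]
    return (1 - t) * top[None, :] + t * bottom[None, :]

def scatter_correct(proj: np.ndarray) -> np.ndarray:
    """Subtract the scatter estimate to approximate primary-only fluence."""
    return np.clip(proj - estimate_scatter(proj), 0, None)
```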

  18. TU-F-18C-03: X-Ray Scatter Correction in Breast CT: Advances and Patient Testing

    International Nuclear Information System (INIS)

    Ramamurthy, S; Sechopoulos, I

    2014-01-01

    Purpose: To further develop and perform patient testing of an x-ray scatter correction algorithm for dedicated breast computed tomography (BCT). Methods: A previously proposed algorithm for x-ray scatter signal reduction in BCT imaging was modified and tested with a phantom and on patients. A wireless electronic positioner system was designed and added to the BCT system to move a tungsten plate in and out of the x-ray beam. The interpolation used by the algorithm was replaced with a radial basis function-based algorithm, with automated exclusion of non-valid sampled points due to patient motion or other factors. A 3D adaptive noise reduction filter was also introduced to reduce the impact of scatter quantum noise post-reconstruction. The impact of the improved algorithm on image quality was evaluated using a breast phantom and seven patient breasts, using quantitative metrics such as signal difference (SD) and signal difference-to-noise ratio (SDNR), and qualitatively using image profiles. Results: The improvements in the algorithm resulted in a more robust interpolation step, with no introduction of image artifacts, especially at the imaged object boundaries, which was an issue in the previous implementation. Qualitative evaluation of the reconstructed slices and corresponding profiles shows excellent homogeneity of both the background and the higher-density features throughout the whole imaged object, as well as increased accuracy of the Hounsfield Unit (HU) values of the tissues. Profiles also demonstrate a substantial increase in both SD and SDNR between glandular and adipose regions compared with both the uncorrected and system-corrected images. Conclusion: The improved scatter correction algorithm can be reliably used during patient BCT acquisitions with no introduction of artifacts, resulting in substantial improvement in image quality. Its impact on actual clinical performance needs to be evaluated in the future. Research Agreement, Koning Corp., Hologic
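
    A sketch of the modified interpolation step, assuming scatter samples measured at the tungsten-plate shadow locations; scipy's RBFInterpolator stands in for the paper's radial-basis-function implementation, and the sample data are synthetic.

```python
# Interpolate sparsely sampled scatter over the full detector with an RBF,
# after discarding samples flagged as invalid (e.g. corrupted by motion).
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
pts = rng.uniform(0, 256, size=(80, 2))          # sample positions (pixels)
vals = 100 + 20 * np.sin(pts[:, 0] / 40)         # synthetic scatter samples

valid = np.isfinite(vals)                        # exclusion step (all valid here)
rbf = RBFInterpolator(pts[valid], vals[valid], kernel="thin_plate_spline")

yy, xx = np.mgrid[0:256, 0:256]
scatter = rbf(np.column_stack([yy.ravel(), xx.ravel()])).reshape(256, 256)
print("scatter estimate range:", scatter.min(), scatter.max())
```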

  19. Simulation of small-angle scattering patterns using a CPU-efficient algorithm

    Science.gov (United States)

    Anitas, E. M.

    2017-12-01

    Small-angle scattering (of neutrons, x-rays or light; SAS) is a well-established experimental technique for the structural analysis of disordered systems at nano and micro scales. For complex systems, such as supramolecular assemblies or protein molecules, analytic solutions of the SAS intensity are generally not available. Thus, a frequent approach to simulating the corresponding patterns is to use a CPU-efficient version of the Debye formula. For this purpose, in this paper we implement the well-known DALAI algorithm in Mathematica software. We present calculations for a series of 2D Sierpinski gaskets and pentaflakes obtained from chaos game representation.
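
    The Debye formula that DALAI accelerates is simple to state and implement naively: I(q) = Σ_i Σ_j f_i f_j sin(q r_ij)/(q r_ij). The O(N²) reference version below sets all form factors to 1 and uses the three corner points of a triangle (the first Sierpinski iteration) purely as an illustration.

```python
# Naive Debye-formula scattering intensity for point scatterers.
import numpy as np

def debye_intensity(coords: np.ndarray, q: np.ndarray) -> np.ndarray:
    """I(q) = sum_ij sinc(q r_ij), with unit form factors."""
    diff = coords[:, None, :] - coords[None, :, :]
    r = np.sqrt((diff**2).sum(-1))                 # pairwise distances
    qr = q[:, None, None] * r[None, :, :]
    sinc = np.where(qr > 0, np.sin(qr) / np.where(qr > 0, qr, 1.0), 1.0)
    return sinc.sum(axis=(1, 2))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
q = np.linspace(0.1, 50, 200)
print(debye_intensity(pts, q)[:3])
```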

  20. High-performance bidiagonal reduction using tile algorithms on homogeneous multicore architectures

    KAUST Repository

    Ltaief, Hatem

    2013-04-01

    This article presents a new high-performance bidiagonal reduction (BRD) for homogeneous multicore architectures. This article is an extension of the high-performance tridiagonal reduction implemented by the same authors [Luszczek et al., IPDPS 2011] to the BRD case. The BRD is the first step toward computing the singular value decomposition of a matrix, which is one of the most important algorithms in numerical linear algebra due to its broad impact in computational science. The high performance of the BRD described in this article comes from the combination of four important features: (1) tile algorithms with tile data layout, which provide an efficient data representation in main memory; (2) a two-stage reduction approach that allows most of the computation during the first stage (reduction to band form) to be cast into calls to Level 3 BLAS and reduces memory traffic during the second stage (reduction from band to bidiagonal form) by using high-performance kernels optimized for cache reuse; (3) a data dependence translation layer that maps the general algorithm with column-major data layout onto the tile data layout; and (4) a dynamic runtime system that efficiently schedules the newly implemented kernels across the processing units and ensures that the data dependencies are not violated. A detailed analysis is provided to understand the critical impact of the tile size on the total execution time, which also corresponds to the matrix bandwidth size after the reduction of the first stage. The performance results show a significant improvement over currently established alternatives. The new high-performance BRD achieves up to a 30-fold speedup on a 16-core Intel Xeon machine with a 12000×12000 matrix against the state-of-the-art open-source and commercial numerical software packages, namely LAPACK compiled with optimized and multithreaded BLAS from MKL, as well as Intel MKL version 10.2. © 2013 ACM.

  1. Output Current Ripple Reduction Algorithms for Home Energy Storage Systems

    Directory of Open Access Journals (Sweden)

    Jin-Hyuk Park

    2013-10-01

    This paper proposes an output current ripple reduction algorithm using a proportional-integral (PI) controller for an energy storage system (ESS). In single-phase systems, the DC-link voltage of the DC/AC inverter pulsates at a second-order harmonic, twice the grid frequency, and this pulsation produces a ripple component in the output current of the DC/DC converter. The second-order harmonic adversely affects battery lifetime. The proposed algorithm has the advantage of reducing the second-order harmonic of the output current in a variable-frequency system. The proposed algorithm is verified through PSIM simulation and experiments with a 3 kW ESS model.

  2. An electron tomography algorithm for reconstructing 3D morphology using surface tangents of projected scattering interfaces

    Science.gov (United States)

    Petersen, T. C.; Ringer, S. P.

    2010-03-01

    Upon discerning the mere shape of an imaged object, as portrayed by projected perimeters, the full three-dimensional scattering density may not be of particular interest. In this situation considerable simplifications to the reconstruction problem are possible, allowing calculations based upon geometric principles. Here we describe and provide an algorithm which reconstructs the three-dimensional morphology of specimens from tilt series of images for application to electron tomography. Our algorithm uses a differential approach to infer the intersection of projected tangent lines with surfaces which define boundaries between regions of different scattering densities within and around the perimeters of specimens. Details of the algorithm implementation are given and explained using reconstruction calculations from simulations, which are built into the code. An experimental application of the algorithm to a nano-sized aluminium tip is also presented to demonstrate practical analysis for a real specimen. Program summary — Program title: STOMO version 1.0; Catalogue identifier: AEFS_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFS_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 2988; No. of bytes in distributed program, including test data, etc.: 191 605; Distribution format: tar.gz; Programming language: C/C++; Computer: PC; Operating system: Windows XP; RAM: depends upon the size of experimental data as input, ranging from 200 MB to 1.5 GB; Supplementary material: sample output files, for the test run provided, are available; Classification: 7.4, 14; External routines: Dev-C++ (http://www.bloodshed.net/devcpp.html); Nature of problem: electron tomography of specimens for which conventional back projection may fail and/or data for which there is a limited angular

  3. The Development of a Parameterized Scatter Removal Algorithm for Nuclear Materials Identification System Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Grogan, Brandon Robert [Univ. of Tennessee, Knoxville, TN (United States)

    2010-03-01

    This dissertation presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects non-intrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross-sections of features inside the object can be determined. The cross sections can then be used to identify the materials and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons which are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using

  4. THE DEVELOPMENT OF A PARAMETERIZED SCATTER REMOVAL ALGORITHM FOR NUCLEAR MATERIALS IDENTIFICATION SYSTEM IMAGING

    Energy Technology Data Exchange (ETDEWEB)

    Grogan, Brandon R [ORNL

    2010-05-01

    This report presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects nonintrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross sections of features inside the object can be determined. The cross sections can then be used to identify the materials, and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons that are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized, and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements, and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using the
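
    The parameterization step described in both records, fitting a Gaussian to a simulated point scatter function so the Monte Carlo simulations need not be rerun for each measurement, can be sketched with a standard least-squares fit. The PScF samples below are synthetic stand-ins for the simulation output.

```python
# Fit a Gaussian to a (synthetic) point scatter function, as in the PSRA
# parameterization step.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

x = np.linspace(-10, 10, 101)        # detector position (illustrative units)
pscf = gaussian(x, 0.05, 0.0, 2.5) \
       + 0.002 * np.random.default_rng(2).normal(size=x.size)

popt, _ = curve_fit(gaussian, x, pscf, p0=[0.04, 0.0, 2.0])
amp, mu, sigma = popt                # parameterized scatter kernel
print(f"fitted Gaussian: amp={amp:.3f}, mu={mu:.2f}, sigma={sigma:.2f}")
```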

  5. Effects of directional microphone and adaptive multichannel noise reduction algorithm on cochlear implant performance.

    Science.gov (United States)

    Chung, King; Zeng, Fan-Gang; Acker, Kyle N

    2006-10-01

    Although cochlear implant (CI) users have enjoyed good speech recognition in quiet, they still have difficulties understanding speech in noise. We conducted three experiments to determine whether a directional microphone and an adaptive multichannel noise reduction algorithm could enhance CI performance in noise and whether Speech Transmission Index (STI) can be used to predict CI performance in various acoustic and signal processing conditions. In Experiment I, CI users listened to speech in noise processed by 4 hearing aid settings: omni-directional microphone, omni-directional microphone plus noise reduction, directional microphone, and directional microphone plus noise reduction. The directional microphone significantly improved speech recognition in noise. Both directional microphone and noise reduction algorithm improved overall preference. In Experiment II, normal hearing individuals listened to the recorded speech produced by 4- or 8-channel CI simulations. The 8-channel simulation yielded similar speech recognition results as in Experiment I, whereas the 4-channel simulation produced no significant difference among the 4 settings. In Experiment III, we examined the relationship between STIs and speech recognition. The results suggested that STI could predict actual and simulated CI speech intelligibility with acoustic degradation and the directional microphone, but not the noise reduction algorithm. Implications for intelligibility enhancement are discussed.

  6. Reduction of Raman scattering and fluorescence from anvils in high pressure Raman scattering

    Science.gov (United States)

    Dierker, S. B.; Aronson, M. C.

    2018-05-01

    We describe a new design and use of a high pressure anvil cell that significantly reduces the Raman scattering and fluorescence from the anvils in high pressure Raman scattering experiments. The approach is particularly useful in Raman scattering studies of opaque, weakly scattering samples. The effectiveness of the technique is illustrated with measurements of two-magnon Raman scattering in La2CuO4.

  7. Algorithms for solving atomic structures of nanodimensional clusters in single crystals based on X-ray and neutron diffuse scattering data

    International Nuclear Information System (INIS)

    Andrushevskii, N.M.; Shchedrin, B.M.; Simonov, V.I.

    2004-01-01

    New algorithms for solving the atomic structure of equivalent nanodimensional clusters with the same orientation, randomly distributed over the initial single crystal (crystal matrix), have been suggested. A cluster is a compact group of substitutional, interstitial or other atoms displaced from their positions in the crystal matrix. The structure is solved based on X-ray or neutron diffuse scattering data obtained from such objects. Using the mathematical apparatus of Fourier transformations of finite functions, it is shown that appropriate sampling of the intensities of continuous diffuse scattering allows one to synthesize multiperiodic difference Patterson functions that reveal the systems of interatomic vectors of an individual cluster. The suggested algorithms are tested on a model one-dimensional structure.

  8. A necessary condition for applying MUSIC algorithm in limited-view inverse scattering problem

    Science.gov (United States)

    Park, Taehoon; Park, Won-Kwang

    2015-09-01

    Numerous numerical-simulation results have shown that the MUltiple SIgnal Classification (MUSIC) algorithm can be applied to limited-view inverse scattering problems; however, the application has been somewhat heuristic. In this contribution, we identify a necessary condition for MUSIC imaging of a collection of small, perfectly conducting cracks. This is based on the fact that the MUSIC imaging functional can be represented as an infinite series of Bessel functions of integer order of the first kind. Numerical experiments with noisy synthetic data support our investigation.
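
    For orientation, here is a generic MUSIC imaging sketch: decompose the multistatic response matrix by SVD and evaluate the inverse distance of a test vector to the noise subspace. The far-field steering vector and the rank-one toy data are illustrative assumptions, not the paper's imaging functional.

```python
# Generic MUSIC pseudospectrum: peaks where the test vector is (nearly)
# orthogonal to the noise subspace of the response matrix K.
import numpy as np

def music_image(K, steering, grid, n_signal):
    U, s, _ = np.linalg.svd(K)
    Un = U[:, n_signal:]                        # noise subspace
    out = np.empty(len(grid))
    for m, z in enumerate(grid):
        g = steering(z)
        g = g / np.linalg.norm(g)
        out[m] = 1.0 / np.linalg.norm(Un.conj().T @ g)
    return out

# Toy usage: N receivers on a line, one point scatterer at z0 = 0.3.
N, k = 16, 2 * np.pi
xs = np.linspace(-1, 1, N)
steer = lambda z: np.exp(1j * k * np.abs(xs - z))
K = np.outer(steer(0.3), steer(0.3))
grid = np.linspace(-1, 1, 201)
print(grid[np.argmax(music_image(K, steer, grid, n_signal=1))])  # ~0.3
```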

  9. Concrete resource analysis of the quantum linear-system algorithm used to compute the electromagnetic scattering cross section of a 2D target

    Science.gov (United States)

    Scherer, Artur; Valiron, Benoît; Mau, Siun-Chuon; Alexander, Scott; van den Berg, Eric; Chapuran, Thomas E.

    2017-03-01

    We provide a detailed estimate of the logical resource requirements of the quantum linear-system algorithm (Harrow et al. in Phys Rev Lett 103:150502, 2009), including the recently described elaborations and the application to computing the electromagnetic scattering cross section of a metallic target (Clader et al. in Phys Rev Lett 110:250504, 2013). Our resource estimates are based on the standard quantum-circuit model of quantum computation; they comprise circuit width (related to parallelism), circuit depth (total number of steps), the number of qubits and ancilla qubits employed, and the overall number of elementary quantum gate operations, as well as more specific gate counts for each elementary fault-tolerant gate from the standard set {X, Y, Z, H, S, T, CNOT}. In order to perform these estimates, we used an approach that combines manual analysis with automated estimates generated via the Quipper quantum programming language and compiler. Our estimates pertain to the explicit example problem size N = 332,020,680, beyond which, according to a crude big-O complexity comparison, the quantum linear-system algorithm is expected to run faster than the best known classical linear-system solving algorithm. For this problem size, a desired calculation accuracy ε = 0.01 requires an approximate circuit width of 340 and a circuit depth of order 10^25 if oracle costs are excluded, and a circuit width and circuit depth of order 10^8 and 10^29, respectively, if the resource requirements of oracles are included, indicating that the commonly ignored oracle resources are considerable. In addition to providing detailed logical resource estimates, it is also the purpose of this paper to demonstrate explicitly (using a fine-grained approach rather than relying on coarse big-O asymptotic approximations) how these impressively large numbers arise with an actual circuit implementation of a quantum algorithm. While our estimates may prove to be conservative as more efficient

  10. An MPCA/LDA Based Dimensionality Reduction Algorithm for Face Recognition

    Directory of Open Access Journals (Sweden)

    Jun Huang

    2014-01-01

    We propose a face recognition algorithm based on both multilinear principal component analysis (MPCA) and linear discriminant analysis (LDA). In contrast to existing face recognition methods, our approach treats face images as multidimensional tensors in order to find the optimal tensor subspace for accomplishing dimension reduction. LDA is used to project samples onto a new discriminant feature space, while the K nearest neighbor (KNN) classifier is adopted for sample set classification. The developed algorithm is validated on the face databases ORL, FERET, and YALE and compared with the PCA, MPCA, and PCA + LDA methods, demonstrating an improvement in face recognition accuracy.
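
    A pipeline in the spirit of the abstract can be assembled from scikit-learn parts, with ordinary PCA standing in for MPCA (scikit-learn ships no multilinear PCA); the Olivetti faces bundled with scikit-learn (downloaded on first use) replace ORL/FERET/YALE.

```python
# PCA (stand-in for MPCA) -> LDA -> 1-NN classification pipeline.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

faces = fetch_olivetti_faces()                     # downloads on first use
Xtr, Xte, ytr, yte = train_test_split(faces.data, faces.target,
                                      test_size=0.25, random_state=0,
                                      stratify=faces.target)

model = make_pipeline(PCA(n_components=60),        # dimension reduction
                      LinearDiscriminantAnalysis(),  # discriminant projection
                      KNeighborsClassifier(n_neighbors=1))
model.fit(Xtr, ytr)
print("accuracy:", model.score(Xte, yte))
```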

  11. Feature Reduction Based on Genetic Algorithm and Hybrid Model for Opinion Mining

    Directory of Open Access Journals (Sweden)

    P. Kalaivani

    2015-01-01

    With the rapid growth of websites and web forums, a large number of product reviews have become available online. An opinion mining system is needed to help people evaluate the emotions, opinions, attitudes, and behavior of others, and to make decisions based on user preferences. In this paper, we propose an optimized feature reduction that incorporates an ensemble of machine learning approaches, using information gain and a genetic algorithm as feature reduction techniques. We conducted comparative experiments on a multidomain review dataset and a movie review dataset for opinion mining. The effectiveness of the single classifiers Naïve Bayes, logistic regression, and support vector machine, and of the ensemble technique, is compared on five datasets. The proposed hybrid method is evaluated, and the experimental results show that information gain and the genetic algorithm combined with the ensemble technique perform better in terms of various measures for the multidomain and movie reviews. The classification algorithms are evaluated using McNemar's test to compare their levels of significance.

  12. Influence of different contributions of scatter and attenuation on the threshold values in contrast-based algorithms for volume segmentation.

    Science.gov (United States)

    Matheoud, Roberta; Della Monica, Patrizia; Secco, Chiara; Loi, Gianfranco; Krengli, Marco; Inglese, Eugenio; Brambilla, Marco

    2011-01-01

    The aim of this work is to evaluate the role of different amounts of attenuation and scatter in FDG-PET image volume segmentation using a contrast-oriented method based on the target-to-background (TB) ratio and target dimensions. A phantom study was designed employing 3 phantom sets, which provided a clinical range of attenuation and scatter conditions, equipped with 6 spheres of different volumes (0.5-26.5 ml). The phantoms were: (1) the Hoffman 3-dimensional brain phantom, (2) a modified International Electrotechnical Commission (IEC) phantom with an annular ring of water bags of 3 cm thickness fit over the IEC phantom, and (3) a modified IEC phantom with an annular ring of water bags of 9 cm. The phantom cavities were filled with a solution of FDG at 5.4 kBq/ml activity concentration, and the spheres with activity concentration ratios of about 16, 8, and 4 times the background activity concentration. Images were acquired with a Biograph 16 HI-REZ PET/CT scanner. Thresholds (TS) were determined as a percentage of the maximum intensity in the cross-section area of the spheres. To reduce statistical fluctuations, a nominal maximum value was calculated as the mean of all voxels above 95% of the maximum. To find the TS value that yielded an area A best matching the true value, the cross sections were auto-contoured in the attenuation-corrected slices, varying TS in steps of 1%, until the area so determined differed by less than 10 mm² from its known physical value. Multiple regression methods were used to derive an adaptive thresholding algorithm and to test its dependence on different conditions of attenuation and scatter. The errors of scatter and attenuation correction increased with increasing amounts of attenuation and scatter in the phantoms. Despite these increasing inaccuracies, the PET threshold segmentation algorithm was not influenced by the different conditions of attenuation and scatter. The test of the hypothesis of coincident regression lines for the three phantoms used
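
    The contrast-oriented threshold search translates directly into code. Pixel size and the way the slice is supplied are assumptions for illustration.

```python
# Step the threshold TS (percent of a noise-robust maximum) until the
# auto-contoured cross-section area matches the known area within 10 mm^2.
import numpy as np

def find_threshold(slice_img: np.ndarray, true_area_mm2: float,
                   pixel_area_mm2: float = 1.0) -> float:
    hot = slice_img > 0.95 * slice_img.max()
    nominal_max = slice_img[hot].mean()        # mean of voxels above 95% of max
    for ts in range(100, 0, -1):               # TS in 1% steps
        area = (slice_img >= ts / 100 * nominal_max).sum() * pixel_area_mm2
        if abs(area - true_area_mm2) < 10.0:
            return ts / 100
    raise ValueError("no threshold matches the known area")
```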

  13. A necessary condition for applying MUSIC algorithm in limited-view inverse scattering problem

    International Nuclear Information System (INIS)

    Park, Taehoon; Park, Won-Kwang

    2015-01-01

    Numerous numerical-simulation results have shown that the MUltiple SIgnal Classification (MUSIC) algorithm can be applied to limited-view inverse scattering problems; however, the application has been somewhat heuristic. In this contribution, we identify a necessary condition for MUSIC imaging of a collection of small, perfectly conducting cracks. This is based on the fact that the MUSIC imaging functional can be represented as an infinite series of Bessel functions of integer order of the first kind. Numerical experiments with noisy synthetic data support our investigation. (paper)

  14. MUSIC algorithm for imaging of a sound-hard arc in the limited-view inverse scattering problem

    Science.gov (United States)

    Park, Won-Kwang

    2017-07-01

    The MUltiple SIgnal Classification (MUSIC) algorithm for non-iterative imaging of a sound-hard arc in the limited-view inverse scattering problem is considered. In order to uncover the mathematical structure of MUSIC, we derive a relationship between MUSIC and an infinite series of Bessel functions of integer order. This structure enables us to examine some properties of MUSIC in the limited-view problem. Numerical simulations are performed to support the identified structure of MUSIC.

  15. Optimization Solutions for Improving the Performance of the Parallel Reduction Algorithm Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2012-01-01

    In this paper, we research, analyze and develop optimization solutions for the parallel reduction function using graphics processing units (GPUs) that implement the Compute Unified Device Architecture (CUDA), a modern and novel approach for improving the software performance of data processing applications and algorithms. Many of these applications and algorithms make use of the reduction function in their computational steps. After having designed the function and its algorithmic steps in CUDA, we progressively developed and implemented optimization solutions for the reduction function. In order to confirm, test and evaluate the solutions' efficiency, we developed a custom-tailored benchmark suite. We analyzed the obtained experimental results regarding: the comparison of execution time and bandwidth when using graphics processing units covering the main CUDA architectures (Tesla GT200, Fermi GF100, Kepler GK104) and a central processing unit; the influence of the data type; and the influence of the binary operator.
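
    The tree-based reduction pattern that such GPU implementations optimize can be shown language-agnostically. The NumPy version below reproduces the O(log n) sweep structure, with each pass corresponding roughly to one GPU kernel invocation; it is a sketch of the algorithmic idea, not the CUDA implementation itself.

```python
# Tree reduction: each pass combines element i with element i + stride,
# halving the active array until one value remains.
import numpy as np

def tree_reduce(x: np.ndarray, op=np.add):
    x = x.copy()
    n = x.size
    while n > 1:
        half = n // 2
        x[:half] = op(x[:half], x[n - half:n])   # pairwise combine (one pass)
        n = n - half                             # odd leftover carried forward
    return x[0]

data = np.random.default_rng(7).normal(size=1000)
assert np.isclose(tree_reduce(data), data.sum())
```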

  16. Development and evaluation of thermal model reduction algorithms for spacecraft

    Science.gov (United States)

    Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus

    2015-05-01

    This paper is concerned with the reduction of thermal models of spacecraft. The work presented here was conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming, manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model, for calculation of external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For the simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to the reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is a major concern and restricts the useful application of these methods, so additional model reduction methods that account for these constraints have been developed. The matrix reduction method approximates the differential equation to reference values exactly, except for numerical errors. The summation method enables a useful reduction of thermal models that can be applied in industry. In this work a framework for the reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.

  17. Dosimetric Evaluation of Metal Artefact Reduction using Metal Artefact Reduction (MAR) Algorithm and Dual-energy Computed Tomography (CT) Method

    Science.gov (United States)

    Laguda, Edcer Jerecho

    Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient's medical condition. In comparison to other imaging modalities such as Magnetic Resonance Imaging (MRI), CT is a fast-acquisition imaging device with higher spatial resolution and higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented through a gray scale of independent values in Hounsfield units (HU). High HU values represent higher density. High-density materials, such as metal, tend to erroneously increase the HU values around them due to reconstruction software limitations. This problem of increased HU values due to the presence of metal is referred to as metal artefacts. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of metal objects that are of clinical relevance. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts for better image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research study evaluates the dosimetric effect of metal artefact reduction algorithms on severe artefacts in CT images. This study uses the Gemstone Spectral Imaging (GSI)-based MAR algorithm, the projection-based Metal Artefact Reduction (MAR) algorithm, and the dual-energy method. Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), and the Dual-Energy Imaging Method was developed at Duke University. All three

  18. A fast and pragmatic approach for scatter correction in flat-detector CT using elliptic modeling and iterative optimization

    Science.gov (United States)

    Meyer, Michael; Kalender, Willi A.; Kyriakou, Yiannis

    2010-01-01

    Scattered radiation is a major source of artifacts in flat detector computed tomography (FDCT) due to the increased irradiated volumes. We propose a fast projection-based algorithm for correction of scatter artifacts. The presented algorithm combines a convolution method to determine the spatial distribution of the scatter intensity distribution with an object-size-dependent scaling of the scatter intensity distributions using a priori information generated by Monte Carlo simulations. A projection-based (PBSE) and an image-based (IBSE) strategy for size estimation of the scanned object are presented. Both strategies provide good correction and comparable results; the faster PBSE strategy is recommended. Even with such a fast and simple algorithm that in the PBSE variant does not rely on reconstructed volumes or scatter measurements, it is possible to provide a reasonable scatter correction even for truncated scans. For both simulations and measurements, scatter artifacts were significantly reduced and the algorithm showed stable behavior in the z-direction. For simulated voxelized head, hip and thorax phantoms, a figure of merit Q of 0.82, 0.76 and 0.77 was reached, respectively (Q = 0 for uncorrected, Q = 1 for ideal). For a water phantom with 15 cm diameter, for example, a cupping reduction from 10.8% down to 2.1% was achieved. The performance of the correction method has limitations in the case of measurements using non-ideal detectors, intensity calibration, etc. An iterative approach to overcome most of these limitations was proposed. This approach is based on root finding of a cupping metric and may be useful for other scatter correction methods as well. By this optimization, cupping of the measured water phantom was further reduced down to 0.9%. The algorithm was evaluated on a commercial system including truncated and non-homogeneous clinically relevant objects.
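
    The core of the projection-based estimate, convolution with a broad kernel plus object-size-dependent scaling, can be sketched as follows. The kernel shape and the size-to-scale mapping are placeholders for the paper's Monte Carlo derived a priori data.

```python
# Convolution-based scatter estimate with a size-dependent scaling factor.
import numpy as np
from scipy.signal import fftconvolve

def scatter_estimate(proj: np.ndarray, sigma_px: float, scale: float) -> np.ndarray:
    y, x = np.mgrid[-64:65, -64:65]
    kernel = np.exp(-(x**2 + y**2) / (2 * sigma_px**2))
    kernel /= kernel.sum()                     # normalized broad kernel
    return scale * fftconvolve(proj, kernel, mode="same")

def correct(proj: np.ndarray, object_diameter_cm: float) -> np.ndarray:
    scale = 0.02 * object_diameter_cm          # assumed size-to-scale mapping
    return np.clip(proj - scatter_estimate(proj, sigma_px=40.0, scale=scale),
                   0, None)
```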

  19. Electromagnetic scattering of large structures in layered earths using integral equations

    Science.gov (United States)

    Xiong, Zonghou; Tripp, Alan C.

    1995-07-01

    An electromagnetic scattering algorithm for large conductivity structures in stratified media has been developed, based on the method of system iteration and spatial symmetry reduction using volume electric integral equations. The method of system iteration divides a structure into many substructures and solves the resulting matrix equation using a block iterative method. The block submatrices usually need to be stored on disk in order to save computer core memory; however, this requires a large disk for large structures. If the body is discretized into equal-size cells, it is possible to use the spatial symmetry relations of the Green's functions to regenerate the scattering impedance matrix in each iteration, thus avoiding expensive disk storage. Numerical tests show that the system iteration converges much faster than the conventional point-wise Gauss-Seidel iterative method. The number of cells does not significantly affect the rate of convergence. Thus the algorithm effectively reduces the solution of the scattering problem to order O(N^2), instead of the O(N^3) of direct solvers.
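
    The "system iteration" amounts to block Gauss-Seidel over substructure blocks, as in this sketch; the dense test matrix is made diagonally dominant so the iteration converges.

```python
# Block Gauss-Seidel: sweep over substructure blocks, solving each diagonal
# block exactly while treating off-block couplings with the latest iterate.
import numpy as np

def block_gauss_seidel(A, b, nblocks, iters=50):
    n = A.shape[0]
    idx = np.array_split(np.arange(n), nblocks)    # substructure index sets
    x = np.zeros_like(b)
    for _ in range(iters):
        for I in idx:
            # b_I minus couplings from all other blocks (A_II x_I added back)
            r = b[I] - A[I] @ x + A[np.ix_(I, I)] @ x[I]
            x[I] = np.linalg.solve(A[np.ix_(I, I)], r)
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(120, 120)) + 120 * np.eye(120)  # diagonally dominant
b = rng.normal(size=120)
x = block_gauss_seidel(A, b, nblocks=6)
print("residual norm:", np.linalg.norm(A @ x - b))
```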

  20. THE WASTE REDUCTION (WAR) ALGORITHM: ENVIRONMENTAL IMPACTS, ENERGY CONSUMPTION, AND ENGINEERING ECONOMICS

    Science.gov (United States)

    A general theory known as the WAste Reduction (WAR) algorithm has been developed to describe the flow and the generation of potential environmental impact through a chemical process. This theory defines potential environmental impact indexes that characterize the generation and t...

  1. Automated evaluation of one-loop scattering amplitudes

    International Nuclear Information System (INIS)

    Deurzen, Hans van

    2015-01-01

    In this dissertation the developments toward fully automated evaluation of one-loop scattering amplitudes are presented, as implemented in the GoSam framework. The code Xsamurai, part of GoSam, is described; it implements the integrand reduction algorithm, including an extension to higher-rank capability. GoSam was used to compute several Higgs boson production channels at NLO QCD. An interface between GoSam and a Monte Carlo program was constructed, which enables the computation of any process at the NLO precision needed in the LHC era.

  2. PAPR Reduction in OFDM-based Visible Light Communication Systems Using a Combination of Novel Peak-value Feedback Algorithm and Genetic Algorithm

    Science.gov (United States)

    Deng, Honggui; Liu, Yan; Ren, Shuang; He, Hailang; Tang, Chengying

    2017-10-01

    We propose an enhanced partial transmit sequence technique based on a novel peak-value feedback algorithm and a genetic algorithm (GAPFA-PTS) to reduce the peak-to-average power ratio (PAPR) of orthogonal frequency division multiplexing (OFDM) signals in visible light communication (VLC) systems (VLC-OFDM). To demonstrate the advantages of our proposed algorithm, we analyze the flow of the proposed technique and compare its performance with other techniques through MATLAB simulation. The results show that the GAPFA-PTS technique achieves a significant improvement in PAPR reduction while maintaining a low bit error rate (BER) and low complexity in VLC-OFDM systems.
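
    To fix ideas, the sketch below computes the PAPR of an OFDM symbol and performs the exhaustive PTS phase search that GAPFA/GA-style methods approximate. The interleaved subblock partition and the ±1 phase alphabet are the usual simplifying assumptions, not the paper's exact configuration.

```python
# PAPR of an OFDM symbol and brute-force PTS search over +/-1 phase factors.
import itertools
import numpy as np

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

N, V = 64, 4                                     # subcarriers, subblocks
rng = np.random.default_rng(4)
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)   # QPSK subcarriers

blocks = np.zeros((V, N), complex)
for v in range(V):
    blocks[v, v::V] = X[v::V]                    # interleaved partition
time_blocks = np.fft.ifft(blocks, axis=1)        # per-subblock time signals

best = min(papr_db(np.asarray(p) @ time_blocks)
           for p in itertools.product([1, -1], repeat=V))
print(f"original {papr_db(np.fft.ifft(X)):.2f} dB -> best PTS {best:.2f} dB")
```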

  3. Advanced defect detection algorithm using clustering in ultrasonic NDE

    Science.gov (United States)

    Gongzhang, Rui; Gachagan, Anthony

    2016-02-01

    A range of materials used in industry exhibit scattering properties which limit ultrasonic NDE. Many algorithms have been proposed to enhance defect detection ability, such as the well-known Split Spectrum Processing (SSP) technique. Scattering noise usually cannot be fully removed, and the remaining noise can easily be confused with real feature signals, becoming artefacts during the image interpretation stage. This paper presents an advanced algorithm to further reduce the influence of artefacts remaining in A-scan data after processing with a conventional defect detection algorithm. The raw A-scan data can be acquired from either traditional single-transducer or phased array configurations. The proposed algorithm uses the concept of unsupervised machine learning to cluster segmental defect signals from pre-processed A-scans into different classes. The distinction and similarity between each class and an ensemble of randomly selected noise segments can be observed by applying a classification algorithm. Each class is then labelled as `legitimate reflector' or `artefact' based on this observation, and the expected probability of detection (PoD) and probability of false alarm (PFA) are determined. To facilitate data collection and validate the proposed algorithm, a 5 MHz linear array transducer was used to collect A-scans from both austenitic steel and Inconel samples. Each pulse-echo A-scan was pre-processed using SSP, and the subsequent application of the proposed clustering algorithm provided an additional reduction in PFA while maintaining PoD for both samples, compared with SSP alone.

  4. Data Reduction Algorithm Using Nonnegative Matrix Factorization with Nonlinear Constraints

    Science.gov (United States)

    Sembiring, Pasukat

    2017-12-01

    Processing of data with very large dimensions has been a hot topic in recent decades. Various techniques have been proposed to extract the desired information or structure. Non-negative Matrix Factorization (NMF), which operates on non-negative data, has become one of the popular methods for shrinking dimensions. The main strength of this method is that it models an object as a combination of basic non-negative parts, providing a physical interpretation of the object's construction. NMF is a dimension reduction method that has been used widely for numerous applications, including computer vision, text mining, pattern recognition, and bioinformatics. The mathematical formulation of NMF is not a convex optimization problem, and various types of algorithms have been proposed to solve it. The Alternating Nonnegative Least Squares (ANLS) framework is a block coordinate descent approach that has been proven theoretically reliable and empirically efficient. This paper proposes a new algorithm to solve the NMF problem based on the ANLS framework. The algorithm inherits the convergence property of the ANLS framework for NMF formulations with nonlinear constraints.
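
    A minimal ANLS-style NMF, alternating nonnegative least-squares solves for W and H, is sketched below. This is the classic framework the paper builds on, not the authors' nonlinear-constrained variant.

```python
# Alternating nonnegative least squares NMF: V ~ W H with W, H >= 0,
# solved column by column with scipy's NNLS.
import numpy as np
from scipy.optimize import nnls

def anls_nmf(V, rank, iters=30):
    m, n = V.shape
    rng = np.random.default_rng(5)
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        for j in range(n):                 # min ||W h_j - v_j||, h_j >= 0
            H[:, j], _ = nnls(W, V[:, j])
        for i in range(m):                 # min ||H^T w_i - v_i||, w_i >= 0
            W[i, :], _ = nnls(H.T, V[i, :])
    return W, H

V = np.abs(np.random.default_rng(6).normal(size=(20, 15)))
W, H = anls_nmf(V, rank=4)
print("residual:", np.linalg.norm(V - W @ H))
```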

  5. Reduction of metallic coil artefacts in computed tomography body imaging: effects of a new single-energy metal artefact reduction algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kidoh, Masafumi; Utsunomiya, Daisuke; Ikeda, Osamu; Tamura, Yoshitaka; Oda, Seitaro; Yuki, Hideaki; Nakaura, Takeshi; Hirai, Toshinori; Yamashita, Yasuyuki [Kumamoto University, Department of Diagnostic Radiology, Faculty of Life Sciences, Kumamoto (Japan); Funama, Yoshinori [Kumamoto University, Department of Medical Physics, Faculty of Life Sciences, Kumamoto (Japan); Kawano, Takayuki [Kumamoto University Graduate School, Department of Neurosurgery, Faculty of Life Sciences Research, Kumamoto (Japan)

    2016-05-15

    We evaluated the effect of a single-energy metal artefact reduction (SEMAR) algorithm for metallic coil artefact reduction in body imaging. Computed tomography angiography (CTA) was performed in 30 patients with metallic coils (10 men, 20 women; mean age, 67.9 ± 11 years). Non-SEMAR images were reconstructed with iterative reconstruction alone, and SEMAR images were reconstructed with the iterative reconstruction plus SEMAR algorithms. We compared image noise around metallic coils and the maximum diameters of artefacts from coils between the non-SEMAR and SEMAR images. Two radiologists visually evaluated the metallic coil artefacts utilizing a four-point scale: 1 = extensive; 2 = strong; 3 = mild; 4 = minimal artefacts. The image noise and maximum diameters of the artefacts of the SEMAR images were significantly lower than those of the non-SEMAR images (65.1 ± 33.0 HU vs. 29.7 ± 10.3 HU; 163.9 ± 54.8 mm vs. 10.3 ± 19.0 mm, respectively; P < 0.001). Better visual scores were obtained with the SEMAR technique (3.4 ± 0.6 vs. 1.0 ± 0.0, P < 0.001). The SEMAR algorithm significantly reduced artefacts caused by metallic coils compared with the non-SEMAR algorithm. This technique can potentially increase CT performance for the evaluation of post-coil embolization complications. (orig.)

  6. Reduction of metallic coil artefacts in computed tomography body imaging: effects of a new single-energy metal artefact reduction algorithm

    International Nuclear Information System (INIS)

    Kidoh, Masafumi; Utsunomiya, Daisuke; Ikeda, Osamu; Tamura, Yoshitaka; Oda, Seitaro; Yuki, Hideaki; Nakaura, Takeshi; Hirai, Toshinori; Yamashita, Yasuyuki; Funama, Yoshinori; Kawano, Takayuki

    2016-01-01

    We evaluated the effect of a single-energy metal artefact reduction (SEMAR) algorithm for metallic coil artefact reduction in body imaging. Computed tomography angiography (CTA) was performed in 30 patients with metallic coils (10 men, 20 women; mean age, 67.9 ± 11 years). Non-SEMAR images were reconstructed with iterative reconstruction alone, and SEMAR images were reconstructed with the iterative reconstruction plus SEMAR algorithms. We compared image noise around metallic coils and the maximum diameters of artefacts from coils between the non-SEMAR and SEMAR images. Two radiologists visually evaluated the metallic coil artefacts utilizing a four-point scale: 1 = extensive; 2 = strong; 3 = mild; 4 = minimal artefacts. The image noise and maximum diameters of the artefacts of the SEMAR images were significantly lower than those of the non-SEMAR images (65.1 ± 33.0 HU vs. 29.7 ± 10.3 HU; 163.9 ± 54.8 mm vs. 10.3 ± 19.0 mm, respectively; P < 0.001). Better visual scores were obtained with the SEMAR technique (3.4 ± 0.6 vs. 1.0 ± 0.0, P < 0.001). The SEMAR algorithm significantly reduced artefacts caused by metallic coils compared with the non-SEMAR algorithm. This technique can potentially increase CT performance for the evaluation of post-coil embolization complications. (orig.)

  7. Optimizing cone beam CT scatter estimation in egs_cbct for a clinical and virtual chest phantom

    DEFF Research Database (Denmark)

    Slot Thing, Rune; Mainegra-Hing, Ernesto

    2014-01-01

    improving techniques (EITs) implemented in egs_cbct were varied. Simulation efficiencies were compared to analog simulations performed without using any EITs. The resulting scatter distributions were confirmed to be unbiased against the analog simulations. RESULTS: The optimal EIT parameter selection depends... reduction techniques with a built-in denoising algorithm, efficiency improvements of 4 orders of magnitude were achieved. CONCLUSIONS: Using the built-in EITs in egs_cbct can improve scatter calculation efficiencies by more than 4 orders of magnitude. To achieve this, the user must optimize the input

  8. Partial Transmit Sequence Optimization Using Improved Harmony Search Algorithm for PAPR Reduction in OFDM

    Directory of Open Access Journals (Sweden)

    Mangal Singh

    2017-12-01

    This paper considers the use of the Partial Transmit Sequence (PTS) technique to reduce the peak-to-average power ratio (PAPR) of an orthogonal frequency division multiplexing signal in wireless communication systems. Search complexity is very high in the traditional PTS scheme because it involves an extensive random search over all combinations of allowed phase vectors, and it increases exponentially with the number of phase vectors. In this paper, a suboptimal metaheuristic algorithm for phase optimization based on an improved harmony search (IHS) is applied to explore the optimal combination of phase vectors, providing improved performance compared with existing evolutionary algorithms such as the harmony search algorithm and the firefly algorithm. IHS enhances the accuracy and convergence rate of the conventional algorithms with very few parameters to adjust. Simulation results show that the improved harmony search-based PTS algorithm can achieve a significant reduction in PAPR using a simple network structure, compared with conventional algorithms.

  9. Implementation of on-line data reduction algorithms in the CMS Endcap Preshower Data Concentrator Cards

    CERN Document Server

    Barney, D; Kokkas, P; Manthos, N; Sidiropoulos, G; Reynaud, S; Vichoudis, P

    2007-01-01

    The CMS Endcap Preshower (ES) sub-detector comprises 4288 silicon sensors, each containing 32 strips. The data are transferred from the detector to the counting room via 1208 optical fibres running at 800 Mbps. Each fibre carries data from two, three or four sensors. For the readout of the Preshower, a VME-based system, the Endcap Preshower Data Concentrator Card (ES-DCC), is currently under development. The main objective of each readout board is to acquire on-detector data from up to 36 optical links, perform on-line data reduction via zero suppression and pass the concentrated data to the CMS event builder. This document presents the conceptual design of the reduction algorithms as well as their implementation in the ES-DCC FPGAs. These algorithms, as implemented in the ES-DCC, result in a data-reduction factor of 20.

  10. Implementation of On-Line Data Reduction Algorithms in the CMS Endcap Preshower Data Concentrator Card

    CERN Document Server

    Barney, David; Kokkas, Panagiotis; Manthos, Nikolaos; Reynaud, Serge; Sidiropoulos, Georgios; Vichoudis, Paschalis

    2006-01-01

    The CMS Endcap Preshower (ES) sub-detector comprises 4288 silicon sensors, each containing 32 strips. The data are transferred from the detector to the counting room via 1208 optical fibres running at 800Mbps. Each fibre carries data from 2, 3 or 4 sensors. For the readout of the Preshower, a VME-based system - the Endcap Preshower Data Concentrator Card (ES-DCC) - is currently under development. The main objective of each readout board is to acquire on-detector data from up to 36 optical links, perform on-line data reduction (zero suppression) and pass the concentrated data to the CMS event builder. This document presents the conceptual design of the Reduction Algorithms as well as their implementation in the ES-DCC FPGAs. The algorithms implemented in the ES-DCC resulted in a reduction factor of ~20.

  11. INCORPORATING ENVIRONMENTAL AND ECONOMIC CONSIDERATIONS INTO PROCESS DESIGN: THE WASTE REDUCTION (WAR) ALGORITHM

    Science.gov (United States)

    A general theory known as the WAste Reduction (WAR) algorithm has been developed to describe the flow and the generation of potential environmental impact through a chemical process. This theory integrates environmental impact assessment into chemical process design. Potential en...

  12. A Problem-Reduction Evolutionary Algorithm for Solving the Capacitated Vehicle Routing Problem

    Directory of Open Access Journals (Sweden)

    Wanfeng Liu

    2015-01-01

    Full Text Available Assessment of the components of a solution helps provide useful information for an optimization problem. This paper presents a new population-based problem-reduction evolutionary algorithm (PREA) based on the assessment of solution components. An individual solution is regarded as being constructed from basic elements, and the concept of acceptability is introduced to evaluate them. The PREA consists of a searching phase and an evaluation phase. The acceptability of basic elements is calculated in the evaluation phase and passed to the searching phase. In the searching phase, for each individual solution, the original optimization problem is reduced to a new smaller-size problem. With the evolution of the algorithm, the number of common basic elements in the population increases until all individual solutions are exactly the same, which is taken to be the near-optimal solution of the optimization problem. The new algorithm is applied to a large variety of capacitated vehicle routing problems (CVRP) with up to nearly 500 customers. Experimental results show that the proposed algorithm has the advantages of fast convergence and robustness in solution quality over the comparative algorithms.

  13. Progress on Thomson scattering in the Pegasus Toroidal Experiment

    International Nuclear Information System (INIS)

    Schlossberg, D J; Bongard, M W; Fonck, R J; Schoenbeck, N L; Winz, G R

    2013-01-01

    A novel Thomson scattering system has been implemented on the Pegasus Toroidal Experiment, where typical densities of 10^19 m^-3 and electron temperatures of 10 to 500 eV are expected. The system leverages technological advances in high-energy pulsed lasers, volume phase holographic (VPH) diffraction gratings, and gated image-intensified (ICCD) cameras to provide a relatively low-maintenance, economical, robust diagnostic system. Scattering is induced by a frequency-doubled, Q-switched Nd:YAG laser (2 J at 532 nm, 7 ns FWHM pulse) directed to the plasma over a 7.7 m long beam path and focused into the measurement volume. The collection system employs VPH transmission gratings (eff. > 80%) and fast-gated ICCDs (gate > 2 ns, Gen III intensifier) with high-throughput (F/1.8), achromatic lensing. A stray light mitigation facility has been implemented, consisting of a multi-aperture optical baffle system and a simple beam dump. Successful stray light reduction has enabled detection of scattered signal, and Rayleigh scattering has been used to provide a relative calibration. Initial temperature measurements have been made and data analysis algorithms are under development.

  14. Progress on Thomson scattering in the Pegasus Toroidal Experiment

    Science.gov (United States)

    Schlossberg, D. J.; Bongard, M. W.; Fonck, R. J.; Schoenbeck, N. L.; Winz, G. R.

    2013-11-01

    A novel Thomson scattering system has been implemented on the Pegasus Toroidal Experiment, where typical densities of 10^19 m^-3 and electron temperatures of 10 to 500 eV are expected. The system leverages technological advances in high-energy pulsed lasers, volume phase holographic (VPH) diffraction gratings, and gated image intensified (ICCD) cameras to provide a relatively low-maintenance, economical, robust diagnostic system. Scattering is induced by a frequency-doubled, Q-switched Nd:YAG laser (2 J at 532 nm, 7 ns FWHM pulse) directed to the plasma over a 7.7 m long beam path and focused into the measurement volume. The collection system employs VPH transmission gratings (eff. > 80%) and fast-gated ICCDs (gate > 2 ns, Gen III intensifier) with high-throughput (F/1.8), achromatic lensing. A stray light mitigation facility has been implemented, consisting of a multi-aperture optical baffle system and a simple beam dump. Successful stray light reduction has enabled detection of scattered signal, and Rayleigh scattering has been used to provide a relative calibration. Initial temperature measurements have been made and data analysis algorithms are under development.

  15. Iterative metal artefact reduction (MAR) in postsurgical chest CT: comparison of three iMAR-algorithms.

    Science.gov (United States)

    Aissa, Joel; Boos, Johannes; Sawicki, Lino Morris; Heinzler, Niklas; Krzymyk, Karl; Sedlmair, Martin; Kröpil, Patric; Antoch, Gerald; Thomas, Christoph

    2017-11-01

    The purpose of this study was to evaluate the impact of three novel iterative metal artefact reduction (iMAR) algorithms on image quality and artefact degree in chest CT of patients with a variety of thoracic metallic implants. 27 postsurgical patients with thoracic implants who underwent clinical chest CT between March and May 2015 in clinical routine were retrospectively included. Images were retrospectively reconstructed with standard weighted filtered back projection (WFBP) and with three iMAR algorithms (iMAR-Algo1 = Cardiac algorithm, iMAR-Algo2 = Pacemaker algorithm and iMAR-Algo3 = ThoracicCoils algorithm). The subjective and objective image quality was assessed. Averaged over all artefacts, artefact degree was significantly lower for iMAR-Algo1 (58.9 ± 48.5 HU), iMAR-Algo2 (52.7 ± 46.8 HU) and iMAR-Algo3 (51.9 ± 46.1 HU) compared with WFBP (91.6 ± 81.6 HU; p < 0.05 for all algorithms). iMAR-Algo2 and iMAR-Algo3 reconstructions decreased mild and moderate artefacts compared with WFBP and iMAR-Algo1 (p < 0.05). All three iMAR algorithms led to a significant reduction of metal artefacts and an increase in overall image quality compared with WFBP in chest CT of patients with metallic implants, in both subjective and objective analysis. iMAR-Algo2 and iMAR-Algo3 were best for mild artefacts; iMAR-Algo1 was superior for severe artefacts. Advances in knowledge: Iterative MAR led to significant artefact reduction and increased image quality compared with WFBP in CT after implantation of thoracic devices. Adjusting iMAR algorithms to patients' metallic implants can help to improve image quality in CT.

  16. Study on the Noise Reduction of Vehicle Exhaust NOX Spectra Based on Adaptive EEMD Algorithm

    Directory of Open Access Journals (Sweden)

    Kai Zhang

    2017-01-01

    Full Text Available Measuring the concentration of vehicle exhaust components from transmission spectra has become a key technology. However, in conventional methods for noise reduction and baseline correction, such as the wavelet transform, derivatives, interpolation, and polynomial fitting, the basis functions, the number of decomposition layers, and the way the signal is reconstructed must be adjusted according to the characteristics of the different components in the transmission spectra. Because these parameter settings are not known a priori, it is difficult to achieve the best noise reduction for vehicle exhaust spectra, whose waveforms are sharp and change drastically. In this paper, an adaptive ensemble empirical mode decomposition (EEMD) denoising model based on a special normalized index optimization is proposed and applied to the spectral noise reduction of vehicle exhaust NOX. Experimental results show that the method effectively improves the accuracy of spectral noise reduction while simplifying the denoising process and reducing its operational difficulty.
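
    A minimal sketch of the EEMD denoising step, assuming the third-party PyEMD package (installed as EMD-signal); the paper's normalized-index optimization for choosing what to discard is not reproduced here, so the sketch simply drops the first, noisiest IMF.

```python
import numpy as np
from PyEMD import EEMD   # assumes the PyEMD package (pip install EMD-signal)

rng = np.random.default_rng(1)

# Synthetic "spectrum": two sharp absorption features plus noise
x = np.linspace(0, 1, 1024)
clean = np.exp(-((x - 0.3) / 0.01) ** 2) + 0.6 * np.exp(-((x - 0.7) / 0.02) ** 2)
noisy = clean + 0.05 * rng.standard_normal(x.size)

# Ensemble empirical mode decomposition
eemd = EEMD(trials=50, noise_width=0.05)
imfs = eemd.eemd(noisy)

# Simple denoising rule (stand-in for the paper's normalized-index
# optimization): discard the first IMF, which carries most of the noise
denoised = imfs[1:].sum(axis=0)

rmse = np.sqrt(np.mean((denoised - clean) ** 2))
print(f"RMSE after EEMD denoising: {rmse:.4f}")
```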

  17. Scattered-field FDTD and PSTD algorithms with CPML absorbing boundary conditions for light scattering by aerosols

    International Nuclear Information System (INIS)

    Sun, Wenbo; Videen, Gorden; Fu, Qiang; Hu, Yongxiang

    2013-01-01

    As fundamental parameters for polarized-radiative-transfer calculations, the single-scattering phase matrix of irregularly shaped aerosol particles must be accurately modeled. In this study, a scattered-field finite-difference time-domain (FDTD) model and a scattered-field pseudo-spectral time-domain (PSTD) model are developed for light scattering by arbitrarily shaped dielectric aerosols. The convolutional perfectly matched layer (CPML) absorbing boundary condition (ABC) is used to truncate the computational domain. It is found that the PSTD method is generally more accurate than the FDTD in calculation of the single-scattering properties given similar spatial cell sizes. Since the PSTD can use a coarser grid for large particles, it can lower the memory requirement in the calculation. However, the Fourier transformations in the PSTD need significantly more CPU time than simple subtractions in the FDTD, and the fast Fourier transform requires a power of 2 elements in calculations, thus using the PSTD could not significantly reduce the CPU time required in the numerical modeling. Furthermore, because the scattered-field FDTD/PSTD equations include incident-wave source terms, the FDTD/PSTD model allows for the inclusion of an arbitrarily incident wave source, including a plane parallel wave or a Gaussian beam like those emitted by lasers usually used in laboratory particle characterizations, etc. The scattered-field FDTD and PSTD light-scattering models can be used to calculate single-scattering properties of arbitrarily shaped aerosol particles over broad size and wavelength ranges. -- Highlights: • Scattered-field FDTD and PSTD models are developed for light scattering by aerosols. • Convolutional perfectly matched layer absorbing boundary condition is used. • PSTD is generally more accurate than FDTD in calculating single-scattering properties. • Using same spatial resolution, PSTD requires much larger CPU time than FDTD
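
    The scattered-field formulation lends itself to a one-dimensional illustration: the incident plane wave is known analytically, and a source term proportional to (eps_r - 1) dE_inc/dt drives the scattered field only inside the scatterer. The sketch below is a generic 1D toy, not the paper's 3D models; a crude first-order Mur condition stands in for the CPML boundaries, and all grid parameters are illustrative.

```python
import numpy as np

c0 = 299_792_458.0
eps0, mu0 = 8.854e-12, 4e-7 * np.pi
nx, nt, dx = 400, 900, 1e-3
dt = 0.5 * dx / c0                       # Courant number 0.5

eps_r = np.ones(nx)
eps_r[180:220] = 4.0                     # dielectric slab scatterer

t0, tau = 60 * dt, 15 * dt
def e_inc(i, n):
    """Analytic incident plane wave (Gaussian pulse) at grid index i, step n."""
    return np.exp(-(((n * dt) - i * dx / c0 - t0) / tau) ** 2)

Es = np.zeros(nx)                        # scattered E (integer grid points)
Hs = np.zeros(nx - 1)                    # scattered H (staggered half points)
coef = (c0 * dt - dx) / (c0 * dt + dx)   # 1st-order Mur coefficient
ii = np.arange(1, nx - 1)
probe = 0.0

for n in range(nt):
    Hs += dt / (mu0 * dx) * (Es[1:] - Es[:-1])
    e1_old, e2_old = Es[1], Es[-2]
    # Incident-wave source acts only where eps_r != 1, so the grid
    # carries the scattered field alone
    d_einc = e_inc(ii, n + 1) - e_inc(ii, n)
    Es[1:-1] += (dt / (eps0 * eps_r[1:-1] * dx) * (Hs[1:] - Hs[:-1])
                 - (1.0 - 1.0 / eps_r[1:-1]) * d_einc)
    Es[0] = e1_old + coef * (Es[1] - Es[0])     # Mur ABC (stands in
    Es[-1] = e2_old + coef * (Es[-2] - Es[-1])  # for the paper's CPML)
    probe = max(probe, abs(Es[10]))             # back-scattered field probe

print("peak back-scattered field at the probe:", probe)
```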

  18. Singular characteristic tracking algorithm for improved solution accuracy of the discrete ordinates method with isotropic scattering

    International Nuclear Information System (INIS)

    Duo, J. I.; Azmy, Y. Y.

    2007-01-01

    A new method, the Singular Characteristics Tracking algorithm, is developed to account for potential non-smoothness across the singular characteristics in the exact solution of the discrete ordinates approximation of the transport equation. Numerical results show improved rate of convergence of the solution to the discrete ordinates equations in two spatial dimensions with isotropic scattering using the proposed methodology. Unlike the standard Weighted Diamond Difference methods, the new algorithm achieves local convergence in the case of discontinuous angular flux along the singular characteristics. The method also significantly reduces the error for problems where the angular flux presents discontinuous spatial derivatives across these lines. For purposes of verifying the results, the Method of Manufactured Solutions is used to generate analytical reference solutions that permit estimating the local error in the numerical solution. (authors)

  19. Parton-parton scattering at two-loops

    International Nuclear Information System (INIS)

    Tejeda Yeomans, M.E.

    2001-01-01

    We present an algorithm for the calculation of scalar and tensor one- and two-loop integrals that contribute to the virtual corrections of 2 → 2 partonic scattering. First, the tensor integrals are related to scalar integrals that contain an irreducible propagator-like structure in the numerator. Then, we use Integration by Parts and Lorentz Invariance recurrence relations to build a general system of equations that enables the reduction of any scalar integral (with and without structure in the numerator) to a basis set of master integrals. Their expansions in ε = 2 - D/2 have already been calculated and we present a summary of the techniques that have been used to this end, as well as a compilation of the expansions we need in the different physical regions. We then apply this algorithm to the direct evaluation of the Feynman diagrams contributing to the O(α_s^4) one- and two-loop matrix elements for massless like and unlike quark-quark, quark-gluon and gluon-gluon scattering. The analytic expressions we provide are regularised in Conventional Dimensional Regularisation and renormalised in the MS-bar scheme. Finally, we show that the structure of the infrared divergences agrees with that predicted by the application of Catani's formalism to the analysis of each partonic scattering process. The results presented in this thesis provide the complete calculation of the one- and two-loop matrix elements for 2 → 2 processes needed for the next-to-next-to-leading-order contribution to inclusive jet production at hadron colliders. (author)

  20. Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States

    Science.gov (United States)

    Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.

    2017-01-01

    This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelasticity reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model with more than 12× reduction in the number of states relative to the original model is able to accurately predict system response among all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelasticity reduced order models exhibit smooth pole transition and continuously varying gains along a set of prescribed flight conditions, which verifies consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelasticity controller synthesis and novel vehicle design.
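
    The balanced-truncation building block of such frameworks can be sketched in a few lines. The square-root implementation below handles only a stable LTI system; the paper's GA-guided state selection, unstable-system treatment, and congruence transformation are beyond this illustration.

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable LTI system to order r."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian
    Lc = cholesky((Wc + Wc.T) / 2, lower=True)      # symmetrize for safety
    Lo = cholesky((Wo + Wo.T) / 2, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                       # Hankel singular values
    T = Lc @ Vt.T[:, :r] / np.sqrt(s[:r])           # balancing projections
    Ti = (U[:, :r] / np.sqrt(s[:r])).T @ Lo.T
    return Ti @ A @ T, Ti @ B, C @ T, s

# Example: a random stable 20-state SISO system reduced to 4 states
rng = np.random.default_rng(2)
n = 20
A = rng.standard_normal((n, n))
A -= (np.abs(np.linalg.eigvals(A)).max() + 1.0) * np.eye(n)   # force stability
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=4)
print("leading Hankel singular values:", np.round(hsv[:6], 4))
```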

  1. FPGA based algorithms for data reduction at Belle II

    Energy Technology Data Exchange (ETDEWEB)

    Muenchow, David; Gessler, Thomas; Kuehn, Wolfgang; Lange, Jens Soeren; Liu, Ming; Spruck, Bjoern [II. Physikalisches Institut, Universitaet Giessen (Germany)

    2011-07-01

    Belle II, the upgrade of the existing Belle experiment at Super-KEKB in Tsukuba, Japan, is an asymmetric e+e- collider with a design luminosity of 8 × 10^35 cm^-2 s^-1. At Belle II the estimated event rate is ≤ 30 kHz. The resulting data rate at the Pixel Detector (PXD) will be ≤ 7.2 GB/s. This data rate needs to be reduced to be able to process and store the data. The region-of-interest (ROI) selection is based upon two mechanisms: (a) a tracklet finder using the silicon strip detector, and (b) the HLT using all other Belle II subdetectors. These ROIs and the pixel data are forwarded to an FPGA-based Compute Node for processing, where a VHDL-based algorithm on the FPGA benefits from pipelining and parallelisation. For fast data handling we developed a dedicated memory management system for buffering and storing the data. The status of the implementation and performance tests of the memory manager and data reduction algorithm is presented.

  2. MUSIC algorithms for rebar detection

    International Nuclear Information System (INIS)

    Solimene, Raffaele; Leone, Giovanni; Dell’Aversano, Angela

    2013-01-01

    The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size as compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes, as strong scatterers tend to mask the weak ones. Consequently, the detection of more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting the data is at a relatively high level. To overcome this drawback, a new technique is proposed here, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage, focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to improve detection performance drastically in realistic scenarios. (paper)
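
    A generic single-stage MUSIC sketch conveys the subspace idea behind the two-stage scheme: eigendecompose a sample covariance, project steering vectors onto the noise subspace, and read locations off the pseudospectrum peaks. The uniform-line-array, two-source setup below (one strong, one weak scatterer) is an illustrative stand-in for the paper's scattering configuration.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(3)
M, d = 8, 0.5                           # sensors, spacing in wavelengths
true_dirs = np.deg2rad([-20.0, 35.0])   # one strong and one weak source
snapshots = 200

def steering(theta):
    """Plane-wave steering vector of a uniform linear array."""
    return np.exp(1j * 2 * np.pi * d * np.sin(theta) * np.arange(M))

A = np.column_stack([steering(t) for t in true_dirs])
S = np.diag([1.0, 0.2]) @ (rng.standard_normal((2, snapshots))
                           + 1j * rng.standard_normal((2, snapshots)))
noise = 0.1 * (rng.standard_normal((M, snapshots))
               + 1j * rng.standard_normal((M, snapshots)))
X = A @ S + noise

R = X @ X.conj().T / snapshots          # sample covariance matrix
_, V = np.linalg.eigh(R)                # eigenvalues in ascending order
En = V[:, :M - 2]                       # noise subspace (2 sources assumed)

grid = np.deg2rad(np.linspace(-90, 90, 721))
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
              for t in grid])           # MUSIC pseudospectrum
idx, _ = find_peaks(p)
top = idx[np.argsort(p[idx])[-2:]]      # two highest pseudospectrum peaks
print("estimated directions (deg):",
      np.sort(np.round(np.rad2deg(grid[top]), 1)))
```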

  3. Blooming Artifact Reduction in Coronary Artery Calcification by A New De-blooming Algorithm: Initial Study.

    Science.gov (United States)

    Li, Ping; Xu, Lei; Yang, Lin; Wang, Rui; Hsieh, Jiang; Sun, Zhonghua; Fan, Zhanming; Leipsic, Jonathon A

    2018-05-02

    The aim of this study was to investigate the use of a de-blooming algorithm in coronary CT angiography (CCTA) for optimal evaluation of calcified plaques. Calcified plaques were simulated on a coronary vessel phantom and a cardiac motion phantom. Two convolution kernels, standard (STND) and high-definition standard (HD STND), were used for image reconstruction. A dedicated de-blooming algorithm was used for image processing. We found a smaller bias towards measurement of stenosis using the de-blooming algorithm (STND: bias 24.6% vs 15.0%, range 10.2% to 39.0% vs 4.0% to 25.9%; HD STND: bias 17.9% vs 11.0%, range 8.9% to 30.6% vs 0.5% to 21.5%). With use of the de-blooming algorithm, specificity for diagnosing significant stenosis increased from 45.8% to 75.0% (STND) and from 62.5% to 83.3% (HD STND), while positive predictive value (PPV) increased from 69.8% to 83.3% (STND) and from 76.9% to 88.2% (HD STND). In the patient group, the reduction in calcification volume was 48.1 ± 10.3%, and the reduction in coronary diameter stenosis over calcified plaque was 52.4 ± 24.2%. Our results suggest that the novel de-blooming algorithm can effectively decrease the blooming artifacts caused by coronary calcified plaques, and consequently improve the diagnostic accuracy of CCTA in assessing coronary stenosis.

  4. Time-of-flight small-angle-neutron-scattering data reduction and analysis at LANSCE (Los Alamos Neutron Scattering Center) with program SMR

    International Nuclear Information System (INIS)

    Hjelm, R.P. Jr.; Seegar, P.A.

    1989-01-01

    A user-friendly, integrated system, SMR, for the display, reduction and analysis of data from time-of-flight small-angle neutron diffractometers is described. Its purpose is to provide facilities for data display and assessment and to provide these facilities in near real time. This allows the results of each scattering measurement to be available almost immediately, and enables the experimenter to use the results of a measurement as a basis for other measurements in the same instrument allocation. 8 refs., 11 figs

  5. TPSLVM: a dimensionality reduction algorithm based on thin plate splines.

    Science.gov (United States)

    Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming

    2014-10-01

    Dimensionality reduction (DR) has been considered as one of the most significant tools for data analysis. One type of DR algorithms is based on latent variable models (LVM). LVM-based models can handle the preimage problem easily. In this paper we propose a new LVM-based DR model, named thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), our proposed TPSLVM is more powerful especially when the dimensionality of the latent space is low. Also, TPSLVM is robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM) as well as their combination BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction compared to PCA, GPLVM, ISOMAP, etc.

  6. Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm

    Science.gov (United States)

    Gilat Schmidt, Taly; Barber, Rina F.; Sidky, Emil Y.

    2017-03-01

    Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained `One-Step' Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data is excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.

  7. Scatter radiation in digital tomosynthesis of the breast

    International Nuclear Information System (INIS)

    Sechopoulos, Ioannis; Suryanarayanan, Sankararaman; Vedantham, Srinivasan; D'Orsi, Carl J.; Karellas, Andrew

    2007-01-01

    Digital tomosynthesis of the breast is being investigated as one possible solution to the problem of tissue superposition present in planar mammography. This imaging technique presents various advantages that would make it a feasible replacement for planar mammography, among them similar, if not lower, radiation glandular dose to the breast; implementation on conventional digital mammography technology via relatively simple modifications; and fast acquisition time. One significant problem that tomosynthesis of the breast must overcome, however, is the reduction of x-ray scatter inclusion in the projection images. In tomosynthesis, due to the projection geometry and radiation dose considerations, the use of an antiscatter grid presents several challenges. Therefore, the use of postacquisition software-based scatter reduction algorithms seems well justified, requiring a comprehensive evaluation of x-ray scatter content in the tomosynthesis projections. This study aims to gain insight into the behavior of x-ray scatter in tomosynthesis by characterizing the scatter point spread functions (PSFs) and the scatter to primary ratio (SPR) maps found in tomosynthesis of the breast. This characterization was performed using Monte Carlo simulations, based on the Geant4 toolkit, that simulate the conditions present in a digital tomosynthesis system, including the simulation of the compressed breast in both the cranio-caudal (CC) and the medio-lateral oblique (MLO) views. The variation of the scatter PSF with varying tomosynthesis projection angle, as well as the effects of varying breast glandular fraction and x-ray spectrum, was analyzed. The behavior of the SPR for different projection angle, breast size, thickness, glandular fraction, and x-ray spectrum was also analyzed, and computer fit equations for the magnitude of the SPR at the center of mass for both the CC and the MLO views were found. Within mammographic energies, the x-ray spectrum was found to have no appreciable

  8. Prototype metal artefact reduction algorithm in flat panel computed tomography - evaluation in patients undergoing transarterial hepatic radioembolisation.

    Science.gov (United States)

    Hamie, Qeumars Mustafa; Kobe, Adrian Raoul; Mietzsch, Leif; Manhart, Michael; Puippe, Gilbert Dominique; Pfammatter, Thomas; Guggenberger, Roman

    2018-01-01

    To investigate the effect of an on-site prototype metal artefact reduction (MAR) algorithm in cone-beam CT catheter-arteriography (CBCT-CA) in patients undergoing transarterial radioembolisation (RE) of hepatic masses. Ethical-board-approved retrospective study of 29 patients (mean age 63.7 ± 13.7 years, 11 female), including 16 patients with arterial metallic coils, undergoing CBCT-CA (8 s scan, 200 degrees rotation, 397 projections). Image reconstructions with and without the prototype MAR algorithm were evaluated quantitatively (streak-artefact attenuation changes) and qualitatively (visibility of hepatic parenchyma and vessels) in the near field (<3 cm) and far field (>3 cm) of the artefact sources (metallic coils and catheters). Quantitative and qualitative measurements of uncorrected and MAR-corrected images and different artefact sources were compared. RESULTS: Quantitative evaluation showed significant reduction of near- and far-field streak artefacts with MAR for both artefact sources (p < 0.05). Inhomogeneities of attenuation values were significantly higher for metallic coils compared to catheters (p < 0.05). Key points: • The prototype MAR algorithm improves image quality in proximity of metallic coil and catheter artefacts. • Metal objects cause artefacts in cone-beam computed tomography (CBCT) imaging. • These artefacts can be corrected by metal artefact reduction (MAR) algorithms. • Corrected images show significantly better visibility of nearby hepatic vessels and tissue. • Better visibility may facilitate image interpretation, save time and radiation exposure.

  9. Fast algorithms for transport models. Final report

    International Nuclear Information System (INIS)

    Manteuffel, T.A.

    1994-01-01

    This project has developed a multigrid-in-space algorithm for the solution of the S_N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space, which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid-in-space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi type iteration was more efficient than the block red/black iteration. Given sufficient processors, all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state of the art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel-in-angle algorithm was developed. A parallel version of the multilevel-in-angle algorithm has also been developed. Upon first glance, the shifted transport sweep has limited parallelism. Once the right-hand side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial-value ODEs. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel-in-angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M))

  10. A customizable software for fast reduction and analysis of large X-ray scattering data sets: applications of the new DPDAK package to small-angle X-ray scattering and grazing-incidence small-angle X-ray scattering.

    Science.gov (United States)

    Benecke, Gunthard; Wagermaier, Wolfgang; Li, Chenghao; Schwartzkopf, Matthias; Flucke, Gero; Hoerth, Rebecca; Zizak, Ivo; Burghammer, Manfred; Metwalli, Ezzeldin; Müller-Buschbaum, Peter; Trebbin, Martin; Förster, Stephan; Paris, Oskar; Roth, Stephan V; Fratzl, Peter

    2014-10-01

    X-ray scattering experiments at synchrotron sources are characterized by large and constantly increasing amounts of data. The great number of files generated during a synchrotron experiment is often a limiting factor in the analysis of the data, since appropriate software is rarely available to perform fast and tailored data processing. Furthermore, it is often necessary to perform online data reduction and analysis during the experiment in order to interactively optimize experimental design. This article presents an open-source software package developed to process large amounts of data from synchrotron scattering experiments. These data reduction processes involve calibration and correction of raw data, one- or two-dimensional integration, as well as fitting and further analysis of the data, including the extraction of certain parameters. The software, DPDAK (directly programmable data analysis kit), is based on a plug-in structure and allows individual extension in accordance with the requirements of the user. The article demonstrates the use of DPDAK for on- and offline analysis of scanning small-angle X-ray scattering (SAXS) data on biological samples and microfluidic systems, as well as for a comprehensive analysis of grazing-incidence SAXS data. In addition to a comparison with existing software packages, the structure of DPDAK and the possibilities and limitations are discussed.

  11. Optimizing cone beam CT scatter estimation in egs_cbct for a clinical and virtual chest phantom

    International Nuclear Information System (INIS)

    Thing, Rune Slot; Mainegra-Hing, Ernesto

    2014-01-01

    Purpose: Cone beam computed tomography (CBCT) image quality suffers from contamination from scattered photons in the projection images. Monte Carlo simulations are a powerful tool to investigate the properties of scattered photons. egs_cbct, a recent EGSnrc user code, provides the ability of performing fast scatter calculations in CBCT projection images. This paper investigates how optimization of user inputs can provide the most efficient scatter calculations. Methods: Two simulation geometries with two different x-ray sources were simulated, while the user input parameters for the efficiency improving techniques (EITs) implemented in egs_cbct were varied. Simulation efficiencies were compared to analog simulations performed without using any EITs. Resulting scatter distributions were confirmed unbiased against the analog simulations. Results: The optimal EIT parameter selection depends on the simulation geometry and x-ray source. Forced detection improved the scatter calculation efficiency by 80%. Delta transport improved calculation efficiency by a further 34%, while particle splitting combined with Russian roulette improved the efficiency by a factor of 45 or more. Combining these variance reduction techniques with a built-in denoising algorithm, efficiency improvements of 4 orders of magnitude were achieved. Conclusions: Using the built-in EITs in egs_cbct can improve scatter calculation efficiencies by more than 4 orders of magnitude. To achieve this, the user must optimize the input parameters to the specific simulation geometry. Realizing the full potential of the denoising algorithm requires keeping the statistical uncertainty below a threshold value above which the efficiency drops exponentially

  12. Optimal design of minimum mean-square error noise reduction algorithms using the simulated annealing technique.

    Science.gov (United States)

    Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan

    2009-02-01

    The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to justify the statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
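
    The parameter-search idea can be sketched with SciPy's dual_annealing in place of a hand-rolled annealer. The toy suppressor below and its two smoothing parameters (alpha, beta) are illustrative stand-ins for the MMSE-TRA-NR recursion parameters, and the paper's regression-model objective is replaced by a direct MSE against a known clean signal.

```python
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(4)

# Toy "speech" signal plus noise
fs = 2000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 180 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
noisy = clean + 0.3 * rng.standard_normal(t.size)

def denoise(x, alpha, beta):
    """Toy suppressor: one-pole power tracking (alpha), gain smoothing (beta)."""
    noise_pow = np.var(x[:200])            # crude noise-floor estimate
    pow_est, gain = x[0] ** 2, 0.0
    y = np.empty_like(x)
    for i, v in enumerate(x):
        pow_est = alpha * pow_est + (1.0 - alpha) * v * v
        g = max(0.0, 1.0 - noise_pow / (pow_est + 1e-12))
        gain = beta * gain + (1.0 - beta) * g
        y[i] = gain * v
    return y

def objective(params):
    alpha, beta = params
    return np.mean((denoise(noisy, alpha, beta) - clean) ** 2)

res = dual_annealing(objective, bounds=[(0.5, 0.999), (0.0, 0.999)],
                     maxiter=50, seed=5)
print("optimal (alpha, beta):", np.round(res.x, 3), " MSE:", round(res.fun, 5))
```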

  13. Metal artifact reduction algorithm based on model images and spatial information

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Jay [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Shih, Cheng-Ting [Department of Biomedical Engineering and Environmental Sciences, National Tsing-Hua University, Hsinchu, Taiwan (China); Chang, Shu-Jun [Health Physics Division, Institute of Nuclear Energy Research, Taoyuan, Taiwan (China); Huang, Tzung-Chi [Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan (China); Sun, Jing-Yi [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Wu, Tung-Hsin, E-mail: tung@ym.edu.tw [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, No.155, Sec. 2, Linong Street, Taipei 112, Taiwan (China)

    2011-10-01

    Computed tomography (CT) has become one of the most favorable choices for the diagnosis of trauma. However, high-density metal implants can induce metal artifacts in CT images, compromising image quality. In this study, we proposed a model-based metal artifact reduction (MAR) algorithm. First, we built a model image using the k-means clustering technique with spatial information and calculated the difference between the original image and the model image. Then, the projection data of these two images were combined using an exponential weighting function. Finally, the corrected image was reconstructed using the filtered back-projection algorithm. Two metal-artifact contaminated images were studied. For the cylindrical water phantom image, the metal artifact was effectively removed. The mean CT number of water was improved from -28.95 ± 97.97 to -4.76 ± 4.28. For the clinical pelvic CT image, the dark band and the metal line were removed, and the continuity and uniformity of the soft tissue were recovered as well. These results indicate that the proposed MAR algorithm is useful for reducing metal artifacts and could improve the diagnostic value of metal-artifact contaminated CT images.
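
    The model-image step can be sketched as follows: k-means clustering on intensity plus scaled pixel coordinates, then replacement of every pixel by its cluster mean. The synthetic phantom, feature weights, and cluster count are illustrative, and the paper's subsequent sinogram blending with an exponential weighting function is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)

# Toy CT slice in HU: air background, soft-tissue disc, small bone insert
img = np.full((128, 128), -1000.0)
yy, xx = np.mgrid[:128, :128]
img[(yy - 64) ** 2 + (xx - 64) ** 2 < 50 ** 2] = 40.0
img[(yy - 64) ** 2 + (xx - 90) ** 2 < 10 ** 2] = 800.0
img += 20.0 * rng.standard_normal(img.shape)

# Cluster on intensity plus (scaled) pixel coordinates -- a simple form of
# "k-means with spatial information"; the 0.5 weights are illustrative
feats = np.column_stack([img.ravel(), 0.5 * yy.ravel(), 0.5 * xx.ravel()])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feats)

# Model image: every pixel replaced by the mean HU of its cluster
model = np.zeros(img.size)
for k in range(3):
    model[labels == k] = img.ravel()[labels == k].mean()
model = model.reshape(img.shape)
print("model image tissue classes (HU):", np.round(np.unique(model), 1))
```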

  14. Implementation of the U.S. Environmental Protection Agency's Waste Reduction (WAR) Algorithm in Cape-Open Based Process Simulators

    Science.gov (United States)

    The Sustainable Technology Division has recently completed an implementation of the U.S. EPA's Waste Reduction (WAR) Algorithm that can be directly accessed from a Cape-Open compliant process modeling environment. The WAR Algorithm add-in can be used in AmsterChem's COFE (Cape-Op...

  15. Optimization-based scatter estimation using primary modulation for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yi; Ma, Jingchen; Zhao, Jun, E-mail: junzhao@sjtu.edu.cn [School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China); Song, Ying [Department of Radiation Oncology, West China Hospital, Sichuan University, Chengdu 610041 (China)

    2016-08-15

    Purpose: Scatter reduces the image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method simultaneously obtains the primary and scatter in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In the concept of primary modulation, the primary is modulated, but the scatter remains smooth by inserting a modulator between the x-ray source and the object. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. Simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of the CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy in scatter estimation and correction. This method is promising for scatter correction of various kinds of x-ray imaging modalities, such as x-ray radiography, cone beam CT, and the fourth-generation CT.

  16. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    Science.gov (United States)

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
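
    A minimal error-reduction loop illustrates the reconstruction step: alternate between imposing the Fourier-transform magnitude and re-imposing the known pixels. To keep the sketch self-contained, the magnitude is taken from the ground-truth patch, whereas the paper estimates it from similar known patches weighted by their converged errors.

```python
import numpy as np

# Target patch with a missing region
u = np.linspace(0, 6 * np.pi, 32)
v = np.linspace(0, 4 * np.pi, 32)
patch = np.sin(u)[:, None] * np.cos(v)[None, :]
mask = np.ones((32, 32), dtype=bool)
mask[12:20, 12:20] = False                    # missing area
magnitude = np.abs(np.fft.fft2(patch))        # assumed/estimated magnitude

x = np.where(mask, patch, 0.0)                # initial guess
for _ in range(300):                          # error-reduction iterations
    F = np.fft.fft2(x)
    F = magnitude * np.exp(1j * np.angle(F))  # impose the Fourier magnitude
    x = np.real(np.fft.ifft2(F))
    x[mask] = patch[mask]                     # re-impose known intensities

err = np.abs(x[~mask] - patch[~mask]).mean()
print("mean reconstruction error in the missing area:", round(err, 4))
```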

  17. Power spectrum analysis of the x-ray scatter signal in mammography and breast tomosynthesis projections.

    Science.gov (United States)

    Sechopoulos, Ioannis; Bliznakova, Kristina; Fei, Baowei

    2013-10-01

    The power spectrum reflected a fast drop-off with increasing spatial frequency, with a reduction of four orders of magnitude by 0.1 lp/mm. The β values for the scatter signal were 6.14 and 6.39 for the 0° and 30° projections, respectively. Although the low-frequency characteristics of scatter in mammography and breast tomosynthesis were known, a quantitative analysis of the frequency-domain characteristics of this signal was needed in order to optimize previously proposed software-based x-ray scatter reduction algorithms for these imaging modalities.

  18. Scattering of targets over layered half space using a semi-analytic method in conjunction with FDTD algorithm.

    Science.gov (United States)

    Cao, Le; Wei, Bing

    2014-08-25

    A finite-difference time-domain (FDTD) algorithm with a new method of plane-wave excitation is used to investigate the RCS (radar cross section) characteristics of targets over a layered half space. Compared with the traditional plane-wave excitation method, the memory and computation time requirements are greatly decreased. The FDTD calculation is performed with a plane-wave incidence, and the far-field RCS is obtained by extrapolating the data calculated on the output boundary. However, available extrapolation methods have to evaluate the half-space Green function. In this paper, a new method which avoids using the complex and time-consuming half-space Green function is proposed. Numerical results show that this method is in good agreement with the classic algorithm and that it can be used for fast calculation of the scattering and radiation of targets over a layered half space.

  19. The fast multipole method and Fourier convolution for the solution of acoustic scattering on regular volumetric grids

    Science.gov (United States)

    Hesford, Andrew J.; Waag, Robert C.

    2010-10-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  20. WE-AB-207A-08: BEST IN PHYSICS (IMAGING): Advanced Scatter Correction and Iterative Reconstruction for Improved Cone-Beam CT Imaging On the TrueBeam Radiotherapy Machine

    Energy Technology Data Exchange (ETDEWEB)

    Wang, A; Paysan, P; Brehm, M; Maslowski, A; Lehmann, M; Messmer, P; Munro, P; Yoon, S; Star-Lack, J; Seghers, D [Varian Medical Systems, Palo Alto, CA (United States)

    2016-06-15

    Purpose: To improve CBCT image quality for image-guided radiotherapy by applying advanced reconstruction algorithms to overcome scatter, noise, and artifact limitations. Methods: CBCT is used extensively for patient setup in radiotherapy. However, image quality generally falls short of diagnostic CT, limiting soft-tissue based positioning and potential applications such as adaptive radiotherapy. The conventional TrueBeam CBCT reconstructor uses a basic scatter correction and FDK reconstruction, resulting in residual scatter artifacts, suboptimal image noise characteristics, and other artifacts like cone-beam artifacts. We have developed an advanced scatter correction that uses a finite-element solver (AcurosCTS) to model the behavior of photons as they pass (and scatter) through the object. Furthermore, iterative reconstruction is applied to the scatter-corrected projections, enforcing data consistency with statistical weighting and applying an edge-preserving image regularizer to reduce image noise. The combined algorithms have been implemented on a GPU. CBCT projections from clinically operating TrueBeam systems have been used to compare image quality between the conventional and improved reconstruction methods. Planning CT images of the same patients have also been compared. Results: The advanced scatter correction removes shading and inhomogeneity artifacts, reducing the scatter artifact from 99.5 HU to 13.7 HU in a typical pelvis case. Iterative reconstruction provides further benefit by reducing image noise and eliminating streak artifacts, thereby improving soft-tissue visualization. In a clinical head and pelvis CBCT, the noise was reduced by 43% and 48%, respectively, with no change in spatial resolution (assessed visually). Additional benefits include reduction of cone-beam artifacts and reduction of metal artifacts due to intrinsic downweighting of corrupted rays. Conclusion: The combination of an advanced scatter correction with iterative reconstruction

  1. New resonance cross section calculational algorithms

    International Nuclear Information System (INIS)

    Mathews, D.R.

    1978-01-01

    Improved resonance cross section calculational algorithms were developed and tested for inclusion in a fast reactor version of the MICROX code. The resonance energy portion of the MICROX code solves the neutron slowing-down equations for a two-region lattice cell on a very detailed energy grid (about 14,500 energies). In the MICROX algorithms, the exact P_0 elastic scattering kernels are replaced by synthetic (approximate) elastic scattering kernels which permit the use of an efficient and numerically stable recursion-relation solution of the slowing-down equation. In the work described here, the MICROX algorithms were modified as follows: an additional delta-function term was included in the P_0 synthetic scattering kernel. The additional delta-function term allows one more moment of the exact elastic scattering kernel to be preserved without much extra computational effort. With the improved synthetic scattering kernel, the flux returns more closely to the exact flux below a resonance than with the original MICROX kernel. The slowing-down calculation was extended to a true B_1 hyperfine energy grid calculation in each region by using P_1 synthetic scattering kernels and transport-corrected P_0 collision probabilities to couple the two regions. 1 figure, 6 tables

  2. An Algorithmic Comparison of the Hyper-Reduction and the Discrete Empirical Interpolation Method for a Nonlinear Thermal Problem

    Directory of Open Access Journals (Sweden)

    Felix Fritzen

    2018-02-01

    Full Text Available A novel algorithmic discussion of the methodological and numerical differences of competing parametric model reduction techniques for nonlinear problems is presented. First, the Galerkin reduced basis (RB) formulation is presented, which fails at providing significant gains with respect to the computational efficiency for nonlinear problems. Renowned methods for the reduction of the computing time of nonlinear reduced order models are the Hyper-Reduction and the (Discrete) Empirical Interpolation Method (EIM, DEIM). An algorithmic description and a methodological comparison of both methods are provided. The accuracy of the predictions of the hyper-reduced model and the (D)EIM in comparison to the Galerkin RB is investigated. All three approaches are applied to a simple uncertainty quantification of a planar nonlinear thermal conduction problem. The results are compared to computationally intense finite element simulations.
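
    The DEIM half of the comparison is compact enough to sketch: a greedy loop picks interpolation points from a POD basis so that a nonlinear term need only be evaluated at those points. The snapshot set below (an analytic function family) is purely illustrative.

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM interpolation-point selection for a basis matrix U (n x m)."""
    n, m = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        c = np.linalg.solve(U[p, :l], U[p, l])   # interpolate at chosen points
        r = U[:, l] - U[:, :l] @ c               # residual of that interpolation
        p.append(int(np.argmax(np.abs(r))))      # next point: largest residual
    return p

# POD basis from snapshots of a nonlinear term (analytic, for illustration)
x = np.linspace(0.0, 1.0, 200)
snaps = np.column_stack([np.exp(-x) * np.sin(k * np.pi * x)
                         for k in range(1, 11)])
U, _, _ = np.linalg.svd(snaps, full_matrices=False)
U = U[:, :5]

p = deim_indices(U)
PtU_inv = np.linalg.solve(U[p], np.eye(5))       # (P^T U)^{-1}
f = np.exp(-x) * np.sin(2.5 * np.pi * x)         # a new nonlinear evaluation
f_deim = U @ (PtU_inv @ f[p])                    # needs f only at DEIM points
print("DEIM points:", p,
      " max error:", round(float(np.abs(f_deim - f).max()), 4))
```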

  3. Coherency Identification of Generators Using a PAM Algorithm for Dynamic Reduction of Power Systems

    Directory of Open Access Journals (Sweden)

    Seung-Il Moon

    2012-11-01

    Full Text Available This paper presents a new coherency identification method for dynamic reduction of a power system. To achieve dynamic reduction, coherency-based equivalence techniques divide generators into groups according to coherency, and then aggregate them. In order to minimize the changes in the dynamic response of the reduced equivalent system, coherency identification of the generators should be clearly defined. The objective of the proposed coherency identification method is to determine the optimal coherent groups of generators with respect to the dynamic response, using the Partitioning Around Medoids (PAM algorithm. For this purpose, the coherency between generators is first evaluated from the dynamic simulation time response, and in the proposed method this result is then used to define a dissimilarity index. Based on the PAM algorithm, the coherent generator groups are then determined so that the sum of the index in each group is minimized. This approach ensures that the dynamic characteristics of the original system are preserved, by providing the optimized coherency identification. To validate the effectiveness of the technique, simulated cases with an IEEE 39-bus test system are evaluated using PSS/E. The proposed method is compared with an existing coherency identification method, which uses the K-means algorithm, and is found to provide a better estimate of the original system. 
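
    A basic PAM swap heuristic over a precomputed dissimilarity matrix illustrates the grouping step; the synthetic swing curves and Euclidean dissimilarity below are illustrative stand-ins for the paper's simulation-derived dissimilarity index and PSS/E data.

```python
import numpy as np

def pam(D, k, iters=100, seed=0):
    """Basic PAM: choose k medoids minimizing total within-cluster dissimilarity."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = list(rng.choice(n, k, replace=False))
    for _ in range(iters):
        improved = False
        labels = np.argmin(D[:, medoids], axis=1)
        cost = D[np.arange(n), np.array(medoids)[labels]].sum()
        for i in range(k):                        # try swapping each medoid
            for h in range(n):
                if h in medoids:
                    continue
                trial = medoids.copy()
                trial[i] = h
                lab = np.argmin(D[:, trial], axis=1)
                c = D[np.arange(n), np.array(trial)[lab]].sum()
                if c < cost:
                    medoids, cost, improved = trial, c, True
        if not improved:
            break
    return medoids, np.argmin(D[:, medoids], axis=1)

# Dissimilarity from synthetic generator "swing curves": generators with
# similar dynamic responses should land in the same coherent group
rng = np.random.default_rng(8)
t = np.linspace(0, 5, 200)
freqs = [1.0, 1.7, 2.6]                           # three coherent groups
curves = np.array([np.sin(2 * np.pi * f * t + 0.1 * rng.standard_normal())
                   for f in freqs for _ in range(4)])
D = np.linalg.norm(curves[:, None, :] - curves[None, :, :], axis=2)

medoids, labels = pam(D, k=3)
print("coherent groups:", labels)
```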

  4. A parallel wavelet-enhanced PWTD algorithm for analyzing transient scattering from electrically very large PEC targets

    KAUST Repository

    Liu, Yang

    2014-07-01

    The computational complexity and memory requirements of classically formulated marching-on-in-time (MOT)-based surface integral equation (SIE) solvers scale as O(N_t N_s^2) and O(N_s^2), respectively; here N_t and N_s denote the number of temporal and spatial degrees of freedom of the current density. The multilevel plane wave time domain (PWTD) algorithm, viz., the time domain counterpart of the multilevel fast multipole method, reduces these costs to O(N_t N_s log^2 N_s) and O(N_s^1.5) (Ergin et al., IEEE Trans. Antennas Mag., 41, 39-52, 1999). Previously, PWTD-accelerated MOT-SIE solvers have been used to analyze transient scattering from perfect electrically conducting (PEC) and homogeneous dielectric objects discretized in terms of a million spatial unknowns (Shanker et al., IEEE Trans. Antennas Propag., 51, 628-641, 2003). More recently, an efficient parallelized solver that employs an advanced hierarchical and provably scalable spatial, angular, and temporal load partitioning strategy has been developed to analyze transient scattering problems that involve ten million spatial unknowns (Liu et al., in URSI Digest, 2013).

  5. Adaptive iterative dose reduction algorithm in CT: Effect on image quality compared with filtered back projection in body phantoms of different sizes

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Milim; Lee, Jeong Min; Son, Hyo Shin; Han, Joon Koo; Choi, Byung Ihn [College of Medicine, Seoul National University, Seoul (Korea, Republic of); Yoon, Jeong Hee; Choi, Jin Woo [Dept. of Radiology, Seoul National University Hospital, Seoul (Korea, Republic of)

    2014-04-15

    To evaluate the impact of the adaptive iterative dose reduction (AIDR) three-dimensional (3D) algorithm in CT on noise reduction and image quality compared to the filtered back projection (FBP) algorithm, and to compare the effectiveness of AIDR 3D on noise reduction according to body habitus using phantoms of different sizes. Three different-sized phantoms with diameters of 24 cm, 30 cm, and 40 cm were built up using the American College of Radiology CT accreditation phantom and layers of pork belly fat. Each phantom was scanned eight times using different mAs. Images were reconstructed using FBP and three different strengths of AIDR 3D. The image noise, the contrast-to-noise ratio (CNR) and the signal-to-noise ratio (SNR) of the phantom were assessed. Two radiologists assessed the image quality of the 4 image sets in consensus. The effectiveness of AIDR 3D on noise reduction compared with FBP was also compared according to the phantom sizes. Adaptive iterative dose reduction 3D significantly reduced the image noise compared with FBP and enhanced the SNR and CNR (p < 0.05) with improved image quality (p < 0.05). When a stronger reconstruction algorithm was used, greater increases of SNR and CNR as well as noise reduction were achieved (p < 0.05). The noise reduction effect of AIDR 3D was significantly greater in the 40-cm phantom than in the 24-cm or 30-cm phantoms (p < 0.05). The AIDR 3D algorithm is effective in reducing image noise and improving image-quality parameters compared with the FBP algorithm, and its effectiveness may increase as the phantom size increases.

  6. Adaptive iterative dose reduction algorithm in CT: Effect on image quality compared with filtered back projection in body phantoms of different sizes

    International Nuclear Information System (INIS)

    Kim, Milim; Lee, Jeong Min; Son, Hyo Shin; Han, Joon Koo; Choi, Byung Ihn; Yoon, Jeong Hee; Choi, Jin Woo

    2014-01-01

    To evaluate the impact of the adaptive iterative dose reduction (AIDR) three-dimensional (3D) algorithm in CT on noise reduction and image quality compared to the filtered back projection (FBP) algorithm, and to compare the effectiveness of AIDR 3D on noise reduction according to body habitus using phantoms of different sizes. Three different-sized phantoms with diameters of 24 cm, 30 cm, and 40 cm were built up using the American College of Radiology CT accreditation phantom and layers of pork belly fat. Each phantom was scanned eight times using different mAs. Images were reconstructed using FBP and three different strengths of AIDR 3D. The image noise, the contrast-to-noise ratio (CNR) and the signal-to-noise ratio (SNR) of the phantom were assessed. Two radiologists assessed the image quality of the 4 image sets in consensus. The effectiveness of AIDR 3D on noise reduction compared with FBP was also compared according to the phantom sizes. Adaptive iterative dose reduction 3D significantly reduced the image noise compared with FBP and enhanced the SNR and CNR (p < 0.05) with improved image quality (p < 0.05). When a stronger reconstruction algorithm was used, greater increases of SNR and CNR as well as noise reduction were achieved (p < 0.05). The noise reduction effect of AIDR 3D was significantly greater in the 40-cm phantom than in the 24-cm or 30-cm phantoms (p < 0.05). The AIDR 3D algorithm is effective in reducing image noise and improving image-quality parameters compared with the FBP algorithm, and its effectiveness may increase as the phantom size increases.

  7. A scatter-corrected list-mode reconstruction and a practical scatter/random approximation technique for dynamic PET imaging

    International Nuclear Information System (INIS)

    Cheng, J-C; Rahmim, Arman; Blinder, Stephan; Camborde, Marie-Laure; Raywood, Kelvin; Sossi, Vesna

    2007-01-01

    We describe an ordinary Poisson list-mode expectation maximization (OP-LMEM) algorithm with a sinogram-based scatter correction method based on the single scatter simulation (SSS) technique and a random correction method based on the variance-reduced delayed-coincidence technique. We also describe a practical approximate scatter and random-estimation approach for dynamic PET studies based on a time-averaged scatter and random estimate followed by scaling according to the global numbers of true coincidences and randoms for each temporal frame. The quantitative accuracy achieved using OP-LMEM was compared to that obtained using the histogram-mode 3D ordinary Poisson ordered subset expectation maximization (3D-OP) algorithm with similar scatter and random correction methods, and they showed excellent agreement. The accuracy of the approximated scatter and random estimates was tested by comparing time activity curves (TACs) as well as the spatial scatter distribution from dynamic non-human primate studies obtained from the conventional (frame-based) approach and those obtained from the approximate approach. An excellent agreement was found, and the time required for the calculation of scatter and random estimates in the dynamic studies became much less dependent on the number of frames (we achieved a nearly four times faster performance on the scatter and random estimates by applying the proposed method). The precision of the scatter fraction was also demonstrated for the conventional and the approximate approach using phantom studies

  8. Design of the algorithm of photons migration in the multilayer skin structure

    Science.gov (United States)

    Bulykina, Anastasiia B.; Ryzhova, Victoria A.; Korotaev, Valery V.; Samokhin, Nikita Y.

    2017-06-01

    The design of approaches and methods for diagnosing oncological diseases is of special significance, since it allows tumors of any kind to be detected at early stages. The development of optical and laser technologies has increased the number of methods available for the diagnostic study of oncological diseases. A promising area of biomedical diagnostics is the development of automated nondestructive testing systems for studying the polarizing properties of skin based on the detection of backscattered radiation. Characterizing the polarizing properties of the examined tissue makes it possible to study structural changes caused by various pathologies. Consequently, the measurement and analysis of the polarizing properties of scattered optical radiation, for the development of methods for in vivo diagnosis and imaging of skin, is relevant. The purpose of this research is to design an algorithm for photon migration in the multilayer structure of skin. The algorithm is based on the Monte Carlo method, implemented as the tracking of photon paths through random discrete direction changes until the photons leave the analyzed area or their intensity falls to a negligible level. The modeling algorithm consists of generating the characteristics of the medium and the source; generating a photon with spatial coordinates and polar and azimuthal angles; calculating the reduction of the photon weight due to specular and diffuse reflection; determining the photon mean free path; determining the photon's direction of motion after random scattering with a Henyey-Greenstein phase function; and calculating the medium's absorption. Biological tissue is modeled as a homogeneous scattering sheet characterized by absorption, scattering, and anisotropy coefficients.
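
    The scattering step of such a photon-migration loop is usually implemented with the standard inversion formulas; a minimal sketch of the free-path and Henyey-Greenstein sampling (the textbook recipe, not the authors' code):

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_scatter(g, mu_t):
            """One Monte Carlo step: free path from the exponential law,
            deflection cosine from the Henyey-Greenstein inversion formula,
            uniform azimuth. g: anisotropy factor, mu_t: total attenuation."""
            step = -np.log(rng.random()) / mu_t          # mean free path sample
            xi = rng.random()
            if abs(g) < 1e-6:
                cos_t = 2.0 * xi - 1.0                   # isotropic limit
            else:
                s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
                cos_t = (1.0 + g * g - s * s) / (2.0 * g)
            phi = 2.0 * np.pi * rng.random()             # azimuthal angle
            return step, cos_t, phi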

  9. Robust frequency diversity based algorithm for clutter noise reduction of ultrasonic signals using multiple sub-spectrum phase coherence

    Energy Technology Data Exchange (ETDEWEB)

    Gongzhang, R.; Xiao, B.; Lardner, T.; Gachagan, A. [Centre for Ultrasonic Engineering, University of Strathclyde, Glasgow, G1 1XW (United Kingdom); Li, M. [School of Engineering, University of Glasgow, Glasgow, G12 8QQ (United Kingdom)

    2014-02-18

    This paper presents a robust frequency-diversity-based algorithm for clutter reduction in ultrasonic A-scan waveforms. The performance of conventional spectral-temporal techniques like Split Spectrum Processing (SSP) depends strongly on parameter selection, especially when the signal-to-noise ratio (SNR) is low. Although spatial beamforming offers noise reduction with less sensitivity to parameter variation, phased-array techniques are not always available. The proposed algorithm first selects an ascending series of frequency bands. A signal is then reconstructed from each selected band, and a defect is taken to be present where all frequency components share the same sign. Averaging all reconstructed signals gives a probability profile of potential defect positions. To facilitate data collection and validate the proposed algorithm, Full Matrix Capture is applied to austenitic steel and high nickel alloy (HNA) samples with 5 MHz transducer arrays. When processing A-scan signals with unrefined parameters, the proposed algorithm enhances SNR by 20 dB for both samples; consequently, defects are more visible in B-scan images created from the large number of A-scan traces. Importantly, the proposed algorithm is robust, whereas SSP is shown to fail on the austenitic steel data and achieves less SNR enhancement on the HNA data.
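
    One plausible reading of the 'uniform sign' criterion is a polarity test across band-passed copies of the A-scan; a hedged sketch (the filter order, band list, and use of scipy are illustrative assumptions, not the authors' implementation):

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def sign_coherence(ascan, fs, bands):
            """Keep a sample only where all band-passed versions of the
            A-scan agree in sign; bands is a list of (lo, hi) pairs in Hz."""
            sub = []
            for lo, hi in bands:
                sos = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
                sub.append(sosfiltfilt(sos, ascan))
            sub = np.array(sub)
            coherent = np.all(sub > 0, axis=0) | np.all(sub < 0, axis=0)
            return np.where(coherent, sub.mean(axis=0), 0.0)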

  10. Prior image constrained scatter correction in cone-beam computed tomography image-guided radiation therapy.

    Science.gov (United States)

    Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong

    2011-02-21

    X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers, and then, reconstructing using the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
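
    A sketch of the weight-calculation step under the assumption that the minimization is a plain least-squares fit of the basis images to the prior (the paper's exact objective may differ):

        import numpy as np

        def fit_basis_weights(basis_images, prior_image):
            """Find weights w minimizing ||prior - sum_k w_k B_k||_2, where
            B_k are FBP reconstructions of the raw projections raised to
            different powers, then form the corrected image."""
            A = np.stack([b.ravel() for b in basis_images], axis=1)
            w, *_ = np.linalg.lstsq(A, prior_image.ravel(), rcond=None)
            corrected = np.tensordot(np.asarray(basis_images), w, axes=(0, 0))
            return w, corrected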

  11. Cross plane scattering correction

    International Nuclear Information System (INIS)

    Shao, L.; Karp, J.S.

    1990-01-01

    Most previous scattering correction techniques for PET are based on assumptions made for a single transaxial plane and are independent of axial variations. These techniques will incorrectly estimate the scattering fraction for volumetric PET imaging systems since they do not take cross-plane scattering into account. In this paper, the authors propose a new point source scattering deconvolution method (2-D). Cross-plane scattering is incorporated into the algorithm by modeling a scattering point source function. In the model, the dependence of scattering on both the axial and transaxial directions is reflected in the exponential fitting parameters, and these parameters are estimated directly from a limited number of measured point response functions. The authors' results comparing the standard in-plane point source deconvolution to their cross-plane source deconvolution show that for a small source, the former technique overestimates the scatter fraction in the plane of the source and underestimates it in adjacent planes. In addition, the authors propose a simple approximation technique for deconvolution.

  12. Efficient sampling algorithms for Monte Carlo based treatment planning

    International Nuclear Information System (INIS)

    DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.

    1998-01-01

    Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude over conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary- and collision-checking algorithm. The delta scattering algorithm is faster by a factor of six compared with the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed.
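
    The delta-scattering idea replaces boundary and collision checking with 'virtual' collisions sampled against a majorant attenuation coefficient; a minimal Woodcock-tracking sketch, with mu_of a hypothetical user-supplied callable returning the local attenuation:

        import numpy as np

        rng = np.random.default_rng(1)

        def delta_track(pos, direction, mu_of, mu_max):
            """Advance a photon to its next real collision with delta
            (Woodcock) tracking: sample free paths with the majorant mu_max
            and accept a collision as real with probability mu(x)/mu_max."""
            p = np.asarray(pos, float)
            d = np.asarray(direction, float)
            while True:
                p = p - np.log(rng.random()) / mu_max * d   # tentative step
                if rng.random() < mu_of(p) / mu_max:        # real collision?
                    return p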

  13. Deconvolution of shift-variant broadening for Compton scatter imaging

    International Nuclear Information System (INIS)

    Evans, Brian L.; Martin, Jeffrey B.; Roggemann, Michael C.

    1999-01-01

    A technique is presented for deconvolving shift-variant Doppler broadening of singly Compton-scattered gamma rays from their recorded energy distribution. Doppler broadening is important in Compton scatter imaging techniques employing gamma rays with energies below roughly 100 keV. The deconvolution unfolds an approximation to the angular distribution of the scattered photons from their recorded energy distribution in the presence of statistical noise and background counts. Two unfolding methods are presented, one based on a least-squares algorithm and one based on a maximum-likelihood algorithm. Angular distributions unfolded from measurements made on small scattering targets show less evidence of Compton broadening. This deconvolution is shown to improve the quality of filtered backprojection images in multiplexed Compton scatter tomography. Improved sharpness and contrast are evident in the images constructed from the unfolded signals.

  14. Inverse electronic scattering by Green's functions and singular values decomposition

    International Nuclear Information System (INIS)

    Mayer, A.; Vigneron, J.-P.

    2000-01-01

    An inverse scattering technique is developed to enable sample reconstruction from the diffraction figures obtained by electronic projection microscopy. In its Green's-functions formulation, this technique takes account of all orders of diffraction by performing an iterative reconstruction of the wave function on the observation screen. This scattered wave function is then backpropagated to the sample to determine the potential-energy distribution, which is assumed to be real valued. The method relies on singular values decomposition techniques, thus providing the best least-squares solutions and enabling a reduction of noise. The technique is applied to the analysis of a two-dimensional nanometric sample observed in Fresnel conditions with an electron energy of 25 eV. The algorithm turns out to provide results with a mean relative error of the order of 5% and to be very stable against random noise.
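
    The role of the singular values decomposition in providing best least-squares solutions while reducing noise can be illustrated with a generic truncated-SVD solve (the threshold and notation are illustrative, not the authors'):

        import numpy as np

        def tsvd_solve(A, b, rel_tol=1e-2):
            """Best least-squares solution of A x = b with singular values
            below rel_tol * s_max discarded, suppressing noise amplification."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            inv_s = np.where(s > rel_tol * s[0], 1.0 / s, 0.0)
            return Vt.conj().T @ (inv_s * (U.conj().T @ b))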

  15. Metal artifact reduction in x-ray computed tomography by using analytical DBP-type algorithm

    Science.gov (United States)

    Wang, Zhen; Kudo, Hiroyuki

    2012-03-01

    This paper investigates the common problem of metal artifacts in X-ray computed tomography (CT). Such artifacts can render the reconstructed image non-diagnostic: beam-hardening correction is inaccurate near high-attenuation objects, and a satisfactory image cannot be reconstructed from projections with missing or distorted data. The traditional analytical metal artifact reduction (MAR) method first subtracts the metallic-object part of the projection data from the original projection, then completes the subtracted part by one of various interpolation methods, and finally reconstructs the image from the interpolated projection with the filtered back-projection (FBP) algorithm. The interpolation error introduced in the second step amounts to unrealistic assumptions about the missing data and leads to DC-shift artifacts in the reconstructed images. We propose a differentiated back-projection (DBP) type MAR method that replaces the FBP algorithm with the DBP algorithm in the third step. In FBP, the interpolated projection is filtered at each view angle before back-projection, so the interpolation error propagates across the whole projection. The DBP algorithm, by contrast, makes it possible to filter after back-projection along the Hilbert filtering direction, so the effect of the interpolation error is reduced and the quality of the reconstructed images is expected to improve. In other words, choosing the DBP algorithm instead of the FBP algorithm means that less interpolation-contaminated projection data are used in the reconstruction. A simulation study was performed to evaluate the proposed method using a given phantom.

  16. Λ scattering equations

    Science.gov (United States)

    Gomez, Humberto

    2016-06-01

    The CHY representation of scattering amplitudes is based on integrals over the moduli space of a punctured sphere. We replace the punctured sphere by a double-cover version. The resulting scattering equations depend on a parameter Λ controlling the opening of a branch cut. The new representation of scattering amplitudes possesses an enhanced redundancy which can be used to fix, modulo branches, the location of four punctures while promoting Λ to a variable. Via residue theorems we show how CHY formulas break up into sums of products of smaller (off-shell) ones times a propagator. This leads to a powerful way of evaluating CHY integrals of generic rational functions, which we call the Λ algorithm.

  17. A simple algorithm for calculating the scattering angle in atomic collisions

    International Nuclear Information System (INIS)

    Belchior, J.C.; Braga, J.P.

    1996-01-01

    A geometric approach to calculating the classical atomic scattering angle is presented. The trajectory of the particle is divided into several straight-line sectors, and the change in direction from one sector to the next is used to calculate the scattering angle. In this model, calculating the scattering angle involves neither the direct evaluation of integrals nor classical turning points. (author)
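
    A sketch of this geometric idea in two dimensions, accumulating the signed direction change between successive straight-line sectors (an illustration, not the authors' code):

        import numpy as np

        def scattering_angle(points):
            """Accumulated signed deflection along a 2-D polyline trajectory;
            points is an (N, 2) array of positions along the path."""
            seg = np.diff(np.asarray(points, float), axis=0)
            u = seg / np.linalg.norm(seg, axis=1, keepdims=True)
            cross = u[:-1, 0] * u[1:, 1] - u[:-1, 1] * u[1:, 0]
            dot = np.einsum('ij,ij->i', u[:-1], u[1:])
            return np.arctan2(cross, dot).sum()   # radians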

  18. The Scatter Search Based Algorithm to Revenue Management Problem in Broadcasting Companies

    Science.gov (United States)

    Pishdad, Arezoo; Sharifyazdi, Mehdi; Karimpour, Reza

    2009-09-01

    The problem addressed in this paper, which is faced by broadcasting companies, is how to benefit from a limited advertising space. The problem arises from the stochastic behavior of customers (advertisers) in different fare classes. To address this issue we propose a constrained nonlinear multi-period mathematical model which incorporates cancellation and overbooking. The objective function is to maximize the total expected revenue, and our numerical method does so by determining the sales limits for each class of customer, yielding the revenue management control policy. Scheduling the advertising spots in breaks is another area of concern, and we treat it as a constraint in our model. In this paper an algorithm based on Scatter Search is developed to obtain a good feasible solution. The method simulates customer arrivals over a continuous finite time horizon [0, T]. Several sensitivity analyses are conducted in the computational results to show the effectiveness of the proposed method. They also provide insight into the better results obtained by the revenue management control policy compared to a "no sales limit" policy, in which earlier demand is served first.

  19. A Fast and High-precision Orientation Algorithm for BeiDou Based on Dimensionality Reduction

    Directory of Open Access Journals (Sweden)

    ZHAO Jiaojiao

    2015-05-01

    Full Text Available A fast and high-precision orientation algorithm for BeiDou is proposed by analyzing in depth the constellation characteristics of BeiDou and the features of its GEO satellites. Taking advantage of the good east-west geometry, candidate baseline vectors are first solved from the GEO satellite observations combined with dimensionality reduction theory. The ambiguity function is then used to judge the candidates in order to obtain the optimal baseline vector and the wide-lane integer ambiguities. On this basis, the B1 ambiguities are solved. Finally, the high-precision orientation is estimated from the determined B1 ambiguities. This new algorithm not only improves the ill-conditioning of the traditional algorithm, but also greatly reduces the ambiguity search region, thus allowing the integer ambiguities to be calculated in a single epoch. The algorithm is simulated with the actual BeiDou ephemeris, and the result shows that the method is efficient and fast for orientation. It is capable of a very high single-epoch success rate (99.31%) and accurate attitude angles (the standard deviations of pitch and heading are 0.07° and 0.13°, respectively) in a real-time, dynamic environment.

  20. Executable Pseudocode for Graph Algorithms

    NARCIS (Netherlands)

    B. Ó Nualláin (Breanndán)

    2015-01-01

    Algorithms are written in pseudocode. However, the implementation of an algorithm in a conventional, imperative programming language can often be scattered over hundreds of lines of code, thus obscuring its essence. This can lead to difficulties in understanding or verifying the algorithm.

  1. A model-based radiography restoration method based on simple scatter-degradation scheme for improving image visibility

    Science.gov (United States)

    Kim, K.; Kang, S.; Cho, H.; Kang, W.; Seo, C.; Park, C.; Lee, D.; Lim, H.; Lee, H.; Kim, G.; Park, S.; Park, J.; Kim, W.; Jeon, D.; Woo, T.; Oh, J.

    2018-02-01

    In conventional planar radiography, image visibility is often limited mainly by the superimposition of structures in the object under investigation and by artifacts caused by scattered x-rays and noise. Several methods, including computed tomography (CT) as a multiplanar imaging modality, air-gap and grid techniques for scatter reduction, and phase-contrast imaging as another image-contrast modality, have been investigated extensively in an attempt to overcome these difficulties. However, those methods typically require higher x-ray doses or special equipment. In this work, as another approach, we propose a new model-based radiography restoration method based on a simple scatter-degradation scheme in which the intensity of scattered x-rays and the transmission function of a given object are estimated from a single x-ray image to restore the original degraded image. We implemented the proposed algorithm and performed an experiment to demonstrate its viability. Our results indicate that the degradation of image characteristics by scattered x-rays and noise was effectively recovered by the proposed method, which considerably improves image visibility in radiography.

  2. Memory sparing, fast scattering formalism for rigorous diffraction modeling

    Science.gov (United States)

    Iff, W.; Kämpfe, T.; Jourlin, Y.; Tishchenko, A. V.

    2017-07-01

    The basics and algorithmic steps of a novel scattering formalism suited for memory sparing and fast electromagnetic calculations are presented. The formalism, called ‘S-vector algorithm’ (by analogy with the known scattering-matrix algorithm), allows the calculation of the collective scattering spectra of individual layered micro-structured scattering objects. A rigorous method of linear complexity is applied to model the scattering at individual layers; here the generalized source method (GSM) resorting to Fourier harmonics as basis functions is used as one possible method of linear complexity. The concatenation of the individual scattering events can be achieved sequentially or in parallel, both having pros and cons. The present development will largely concentrate on a consecutive approach based on the multiple reflection series. The latter will be reformulated into an implicit formalism which will be associated with an iterative solver, resulting in improved convergence. The examples will first refer to 1D grating diffraction for the sake of simplicity and intelligibility, with a final 2D application example.

  3. Riemann–Hilbert problem approach for two-dimensional flow inverse scattering

    Energy Technology Data Exchange (ETDEWEB)

    Agaltsov, A. D., E-mail: agalets@gmail.com [Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, 119991 Moscow (Russian Federation); Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr [CNRS (UMR 7641), Centre de Mathématiques Appliquées, Ecole Polytechnique, 91128 Palaiseau (France); IEPT RAS, 117997 Moscow (Russian Federation); Moscow Institute of Physics and Technology, Dolgoprudny (Russian Federation)

    2014-10-15

    We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.

  4. Riemann–Hilbert problem approach for two-dimensional flow inverse scattering

    International Nuclear Information System (INIS)

    Agaltsov, A. D.; Novikov, R. G.

    2014-01-01

    We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given

  5. Fault Diagnosis of Supervision and Homogenization Distance Based on Local Linear Embedding Algorithm

    Directory of Open Access Journals (Sweden)

    Guangbin Wang

    2015-01-01

    Full Text Available In view of the uneven distribution of real fault samples and the fact that the dimension-reduction effect of the locally linear embedding (LLE) algorithm is easily affected by the choice of neighboring points, an improved local linear embedding algorithm based on homogenization distance (HLLE) is developed. The method makes the overall distribution of sample points tend toward homogeneity and reduces the influence of neighboring points by using the homogenization distance instead of the traditional Euclidean distance. This helps to choose effective neighboring points for constructing the weight matrix for dimension reduction. Because the improvement in fault recognition performance achieved by HLLE is limited and unstable, the paper further proposes a local linear embedding algorithm based on supervision and homogenization distance (SHLLE) by adding a supervised learning mechanism. On the basis of the homogenization distance, supervised learning adds category information to the sample points, so that sample points of the same category are gathered and those of different categories are scattered. This effectively improves the performance of fault diagnosis while maintaining stability. A comparison of the methods mentioned above was made by a simulation experiment on rotor-system fault diagnosis, and the results show that the SHLLE algorithm has superior fault recognition performance.

  6. Interleaved segment correction achieves higher improvement factors in using genetic algorithm to optimize light focusing through scattering media

    Science.gov (United States)

    Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong

    2017-10-01

    Focusing and imaging through scattering media has been proved possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), thereby improving the focusing quality. The correction phase is often found by global searching algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually as the optimization progresses, causing the improvement factor to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor for the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method has proved significantly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality improves as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demands on the dynamic range of the detection devices. The proposed method holds potential for applications such as high-resolution imaging in deep tissue.

  7. Scattering amplitudes over finite fields and multivariate functional reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Peraro, Tiziano [Higgs Centre for Theoretical Physics,School of Physics and Astronomy, The University of Edinburgh,James Clerk Maxwell Building, Peter Guthrie Tait Road, Edinburgh EH9 3FD (United Kingdom)

    2016-12-07

    Several problems in computer algebra can be efficiently solved by reducing them to calculations over finite fields. In this paper, we describe an algorithm for the reconstruction of multivariate polynomials and rational functions from their evaluation over finite fields. Calculations over finite fields can in turn be efficiently performed using machine-size integers in statically-typed languages. We then discuss the application of the algorithm to several techniques related to the computation of scattering amplitudes, such as the four- and six-dimensional spinor-helicity formalism, tree-level recursion relations, and multi-loop integrand reduction via generalized unitarity. The method has good efficiency and scales well with the number of variables and the complexity of the problem. As an example combining these techniques, we present the calculation of full analytic expressions for the two-loop five-point on-shell integrands of the maximal cuts of the planar penta-box and the non-planar double-pentagon topologies in Yang-Mills theory, for a complete set of independent helicity configurations.

  8. Scattering amplitudes over finite fields and multivariate functional reconstruction

    International Nuclear Information System (INIS)

    Peraro, Tiziano

    2016-01-01

    Several problems in computer algebra can be efficiently solved by reducing them to calculations over finite fields. In this paper, we describe an algorithm for the reconstruction of multivariate polynomials and rational functions from their evaluation over finite fields. Calculations over finite fields can in turn be efficiently performed using machine-size integers in statically-typed languages. We then discuss the application of the algorithm to several techniques related to the computation of scattering amplitudes, such as the four- and six-dimensional spinor-helicity formalism, tree-level recursion relations, and multi-loop integrand reduction via generalized unitarity. The method has good efficiency and scales well with the number of variables and the complexity of the problem. As an example combining these techniques, we present the calculation of full analytic expressions for the two-loop five-point on-shell integrands of the maximal cuts of the planar penta-box and the non-planar double-pentagon topologies in Yang-Mills theory, for a complete set of independent helicity configurations.

  9. Mosaic crystal algorithm for Monte Carlo simulations

    CERN Document Server

    Seeger, P A

    2002-01-01

    An algorithm is presented for calculating reflectivity, absorption, and scattering of mosaic crystals in Monte Carlo simulations of neutron instruments. The algorithm uses multi-step transport through the crystal with an exact solution of the Darwin equations at each step. It relies on the kinematical model for Bragg reflection (with parameters adjusted to reproduce experimental data). For computation of thermal effects (the Debye-Waller factor and coherent inelastic scattering), an expansion of the Debye integral as a rapidly converging series of exponential terms is also presented. Any crystal geometry and plane orientation may be treated. The algorithm has been incorporated into the neutron instrument simulation package NISP. (orig.)

  10. Time-of-flight small-angle neutron scattering data reduction and analysis at LANSCE with program SMR

    International Nuclear Information System (INIS)

    Hjelm, R.P. Jr.; Seeger, P.A.

    1988-01-01

    A user-friendly integrated system, SMR, for the display, reduction and analysis of data from time-of-flight small-angle neutron diffractometers is described. Its purpose is to provide facilities for data display and assessment, and to provide these facilities in near real time. This allows the results of each scattering measurement to be available almost immediately, and enables the user to use the results of a measurement as a basis for other measurements in the same time allocation of the instrument. 8 refs., 10 figs

  11. Scattering calculation and image reconstruction using elevation-focused beams.

    Science.gov (United States)

    Duncan, David P; Astheimer, Jeffrey P; Waag, Robert C

    2009-05-01

    Pressure scattered by cylindrical and spherical objects with elevation-focused illumination and reception has been analytically calculated, and corresponding cross sections have been reconstructed with a two-dimensional algorithm. Elevation focusing was used to elucidate constraints on quantitative imaging of three-dimensional objects with two-dimensional algorithms. Focused illumination and reception are represented by angular spectra of plane waves that were efficiently computed using a Fourier interpolation method to maintain the same angles for all temporal frequencies. Reconstructions were formed using an eigenfunction method with multiple frequencies, phase compensation, and iteration. The results show that the scattered pressure reduces to a two-dimensional expression, and two-dimensional algorithms are applicable when the region of a three-dimensional object within an elevation-focused beam is approximately constant in elevation. The results also show that energy scattered out of the reception aperture by objects contained within the focused beam can result in the reconstructed values of attenuation slope being greater than true values at the boundary of the object. Reconstructed sound speed images, however, appear to be relatively unaffected by the loss in scattered energy. The broad conclusion that can be drawn from these results is that two-dimensional reconstructions require compensation to account for uncaptured three-dimensional scattering.

  12. Deterministic simulation of first-order scattering in virtual X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Freud, N. E-mail: nicolas.freud@insa-lyon.fr; Duvauchelle, P.; Pistrui-Maximean, S.A.; Letang, J.-M.; Babot, D

    2004-07-01

    A deterministic algorithm is proposed to compute the contribution of first-order Compton- and Rayleigh-scattered radiation in X-ray imaging. This algorithm has been implemented in a simulation code named virtual X-ray imaging. The physical models chosen to account for photon scattering are the well-known form factor and incoherent scattering function approximations, which are recalled in this paper and whose limits of validity are briefly discussed. The proposed algorithm, based on a voxel discretization of the inspected object, is presented in detail, together with its results in simple configurations, which are shown to converge when the sampling steps are chosen sufficiently small. Simple criteria for choosing correct sampling steps (voxel and pixel size) are established. The computation time necessary to simulate first-order scattering images is of the order of hours on a PC architecture and can even be decreased to minutes if only a profile is computed (along a linear detector). Finally, the results obtained with the proposed algorithm are compared to those given by the Monte Carlo code Geant4 and found to be in excellent agreement, which constitutes a validation of our algorithm. The advantages and drawbacks of the proposed deterministic method versus the Monte Carlo method are briefly discussed.

  13. Direct and inverse scattering for viscoelastic media

    International Nuclear Information System (INIS)

    Ammicht, E.; Corones, J.P.; Krueger, R.J.

    1987-01-01

    A time domain approach to direct and inverse scattering problems for one-dimensional viscoelastic media is presented. Such media can be characterized as having a constitutive relation between stress and strain which involves the past history of the strain through a memory function, the relaxation modulus. In the approach in this article, the relaxation modulus of a material is shown to be related to the reflection properties of the material. This relation provides a constructive algorithm for direct and inverse scattering problems. A numerical implementation of this algorithm is tested on several problems involving realistic relaxation moduli

  14. An Improved Algorithm to Delineate Urban Targets with Model-Based Decomposition of PolSAR Data

    Directory of Open Access Journals (Sweden)

    Dingfeng Duan

    2017-10-01

    Full Text Available In model-based decomposition algorithms using polarimetric synthetic aperture radar (PolSAR) data, urban targets are typically identified based on the existence of strong double-bounced scattering. However, urban targets with large azimuth orientation angles (AOAs) produce strong volumetric scattering that appears similar to the scattering characteristics of tree canopies. Due to this scattering ambiguity, urban targets can be classified into the vegetation category if the usual classification scheme of model-based PolSAR decomposition algorithms is followed. To resolve the ambiguity, and ultimately to reduce misclassification, we introduced a correlation coefficient that characterizes the scattering mechanisms of urban targets with variable AOAs. An existing volumetric scattering model was then modified, and a PolSAR decomposition algorithm developed. The validity and effectiveness of the algorithm were examined using four PolSAR datasets. The algorithm was valid and effective in delineating urban targets with a wide range of AOAs, and applicable to a broad range of ground targets in urban areas as well as upland and flooded forest stands.

  15. Discrete inverse scattering theory and the continuum limit

    International Nuclear Information System (INIS)

    Berryman, J.G.; Greene, R.R.

    1978-01-01

    The class of satisfactory difference approximations for the Schroedinger equation in discrete inverse scattering theory is shown to be smaller than previously supposed. A fast algorithm (analogous to the Levinson algorithm for Toeplitz matrices) is found for solving the discrete inverse problem. (Auth.)

  16. Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization.

    Science.gov (United States)

    Glaser, Joshua I; Zamft, Bradley M; Church, George M; Kording, Konrad P

    2015-01-01

    Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, "puzzle imaging," that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples.

  17. Optimization of loss and gain multilayers for reducing the scattering of a perfect conducting cylinder

    Science.gov (United States)

    Zhen-Zhong, Yu; Guo-Shu, Zhao; Gang, Sun; Hai-Fei, Si; Zhong, Yang

    2016-07-01

    Reduction of electromagnetic scattering from a conducting cylinder can be achieved by covering it with optimized multilayers of normal dielectric and plasmonic material. The intrinsic losses of the plasmonic material can degrade the cloaking effect. Using a genetic algorithm, we present an optimized design of loss and gain multilayers for reducing the scattering from a perfect conducting cylinder. This multilayered structure is analyzed theoretically and numerically for plasmonic material with low loss and with high loss. We demonstrate by full-wave simulation that the optimized nonmagnetic gain-loss design can largely compensate the cloaking degradation caused by the lossy material, which facilitates the realization of practical electromagnetic cloaking, especially in the optical range. Project supported by the Research Foundation of Jinling Institute of Technology, China (Grant No. JIT-B-201426), the Jiangsu Modern Education and Technology Key Project, China (Grant No. 2014-R-31984), the Jiangsu 333 Project Funded Research Project, China (Grant No. BRA2010004), and the University Science Research Project of Jiangsu Province, China (Grant No. 15KJB520010).

  18. Test and data reduction algorithm for the evaluation of lead-acid battery packs

    Energy Technology Data Exchange (ETDEWEB)

    Nowak, D.

    1986-01-15

    Experience from the DOE Electric Vehicle Demonstration Project indicated severe battery problems associated with driving electric cars in temperature extremes. The vehicle batteries suffered from a high module failure rate, reduced capacity, and low efficiency. To assess the nature and the extent of the battery problems encountered at various operating temperatures, a test program was established at the University of Alabama in Huntsville (UAH). A test facility was built that is based on Propel cycling equipment, the Hewlett Packard 3497A Data Acquisition System, and the HP85F and HP87 computers. The objective was to establish a cost effective facility that could generate the engineering data base needed for the development of thermal management systems, destratification systems, central watering systems and proper charge algorithms. It was hoped that the development and implementation of these systems by EV manufacturers and fleet operators of EVs would eliminate the most pressing problems that occurred in the DOE EV Demonstration Project. The data reduction algorithm is described.

  19. Neutron scattering studies of crude oil viscosity reduction with electric field

    Science.gov (United States)

    Du, Enpeng

    topic. Dr. Tao and his group at Temple University, using his electro- or magnetorheological viscosity theory, have developed a new technology which utilizes electric or magnetic fields to change the rheology of complex fluids and reduce their viscosity while keeping the temperature unchanged. After we successfully reduced the viscosity of crude oil with a field and investigated the accompanying microstructure changes in various crude oil samples with SANS, we went on to reduce the viscosity of heavy crude oil, bunker diesel, ultra-low-sulfur diesel, bio-diesel, and crude oil at ultra-low temperature with electric field treatment. Our research group developed the electrorheological viscosity theory and investigated flow rates in laboratory and field pipelines, but had never visualized the underlying aggregation. The small-angle neutron scattering experiment has confirmed the theoretical prediction that a strong electric field induces the suspended nano-particles inside crude oil to aggregate into short chains along the field direction. This aggregation breaks the symmetry, making the viscosity anisotropic: along the field direction, the viscosity is significantly reduced. The experiment enables us to determine the induced chain size and shape, and verifies that the electric field works for all kinds of crude oils: paraffin-based, asphalt-based, and mixed-base. The basic physics of such field-induced viscosity reduction is applicable to all kinds of suspensions.

  20. A two-domain real-time algorithm for optimal data reduction: A case study on accelerator magnet measurements

    CERN Document Server

    Arpaia, P; Inglese, V

    2010-01-01

    A real-time data reduction algorithm, based on the combination of two lossy techniques specifically optimized for high-rate magnetic measurements in two domains (e.g. time and space), is proposed. The first technique exploits an adaptive sampling rule based on power estimation of the flux increments in order to optimize the information gathered for magnetic field analysis in real time. The tracking condition is defined by the target noise level in the Nyquist band required by the post-processing procedure of magnetic analysis. The second technique uses a data reduction algorithm to improve the compression ratio while preserving the consistency of the measured signal. The allowed loss is set equal to the random noise level in the signal in order to force the loss and the noise to cancel rather than add, thereby improving the signal-to-noise ratio. Numerical analysis and experimental results of on-field performance characterization and validation for two case studies of magnetic measurement syste...

  1. SU-D-206-07: CBCT Scatter Correction Based On Rotating Collimator

    International Nuclear Information System (INIS)

    Yu, G; Feng, Z; Yin, Y; Qiang, L; Li, B; Huang, P; Li, D

    2016-01-01

    Purpose: Scatter correction in cone-beam computed tomography (CBCT) has an obvious effect on the removal of image noise and the cupping artifact and on the increase of image contrast. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, mechanical inconvenience and a propensity for residual artifacts have limited further basic and clinical research. Here, we propose a rotating collimator-based approach, in conjunction with reconstruction based on a discrete Radon transform and Tchebichef moments algorithm, to correct scatter-induced artifacts. Methods: A rotating collimator, comprising round tungsten alloy strips, was mounted on a linear actuator. The rotating collimator is divided equally into six portions; the strip spacing is even within each portion but staggered between portions. A step motor connected to the rotating collimator drove the blocker around the x-ray source during the CBCT acquisition. CBCT reconstruction based on a discrete Radon transform and Tchebichef moments algorithm is then performed. Experimental studies using a water phantom and the Catphan504 phantom were carried out to evaluate the performance of the proposed scheme. Results: The proposed algorithm was tested on both Monte Carlo simulations and actual experiments with the Catphan504 phantom. In the simulation, the mean square error of the reconstruction decreases from 16% to 1.18%, the cupping (τcup) from 14.005% to 0.66%, and the peak signal-to-noise ratio increases from 16.9594 to 31.45. In the actual experiments, the induced visual artifacts are significantly reduced. Conclusion: We conducted an experiment on a CBCT imaging system with a rotating collimator to develop and optimize an x-ray scatter control and reduction technique. The proposed method is attractive in applications where high CBCT image quality is critical, for example, dose calculation in adaptive radiation therapy.

  2. SU-D-206-07: CBCT Scatter Correction Based On Rotating Collimator

    Energy Technology Data Exchange (ETDEWEB)

    Yu, G; Feng, Z [Shandong Normal University, Jinan, Shandong (China); Yin, Y [Shandong Cancer Hospital and Institute, China, Jinan, Shandong (China); Qiang, L [Zhang Jiagang STFK Medical Device Co, Zhangjiangkang, Suzhou (China); Li, B [Shandong Academy of Medical Sciences, Jinan, Shandong provice (China); Huang, P [Shandong Province Key Laboratory of Medical Physics and Image Processing Te, Ji’nan, Shandong province (China); Li, D [School of Physics and Electronics, Shandong Normal University, Jinan, Shandong (China)

    2016-06-15

    Purpose: Scatter correction in cone-beam computed tomography (CBCT) has an obvious effect on the removal of image noise and the cupping artifact and on the increase of image contrast. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, mechanical inconvenience and a propensity for residual artifacts have limited further basic and clinical research. Here, we propose a rotating collimator-based approach, in conjunction with reconstruction based on a discrete Radon transform and Tchebichef moments algorithm, to correct scatter-induced artifacts. Methods: A rotating collimator, comprising round tungsten alloy strips, was mounted on a linear actuator. The rotating collimator is divided equally into six portions; the strip spacing is even within each portion but staggered between portions. A step motor connected to the rotating collimator drove the blocker around the x-ray source during the CBCT acquisition. CBCT reconstruction based on a discrete Radon transform and Tchebichef moments algorithm is then performed. Experimental studies using a water phantom and the Catphan504 phantom were carried out to evaluate the performance of the proposed scheme. Results: The proposed algorithm was tested on both Monte Carlo simulations and actual experiments with the Catphan504 phantom. In the simulation, the mean square error of the reconstruction decreases from 16% to 1.18%, the cupping (τcup) from 14.005% to 0.66%, and the peak signal-to-noise ratio increases from 16.9594 to 31.45. In the actual experiments, the induced visual artifacts are significantly reduced. Conclusion: We conducted an experiment on a CBCT imaging system with a rotating collimator to develop and optimize an x-ray scatter control and reduction technique. The proposed method is attractive in applications where high CBCT image quality is critical, for example, dose calculation in adaptive radiation therapy.

  3. Parallel Landscape Driven Data Reduction & Spatial Interpolation Algorithm for Big LiDAR Data

    Directory of Open Access Journals (Sweden)

    Rahil Sharma

    2016-06-01

    Full Text Available Airborne Light Detection and Ranging (LiDAR) topographic data provide highly accurate digital terrain information, which is widely used in applications like creating flood insurance rate maps, forest and tree studies, coastal change mapping, soil and landscape classification, 3D urban modeling, river bank management, agricultural crop studies, etc. In this paper, we focus mainly on the use of LiDAR data in terrain modeling/Digital Elevation Model (DEM) generation. Technological advancements in LiDAR sensors have enabled highly accurate and highly dense LiDAR point clouds, which have made high-resolution modeling of terrain surfaces possible. However, high-density data result in massive data volumes, which pose computing issues: the computational time required for dissemination, processing, and storage of these data is directly proportional to the volume of the data. We describe a novel technique based on the slope map of the terrain which addresses the challenging problem, in the area of spatial data analysis, of reducing this dense LiDAR data without sacrificing accuracy. To the best of our knowledge, this is the first landscape-driven data reduction algorithm. We also perform an empirical study, which shows that there is no significant loss in accuracy for the DEM generated from a 52% reduced LiDAR dataset produced by our algorithm, compared to the DEM generated from the original, complete LiDAR dataset. For the accuracy of our statistical analysis, we compute the Root Mean Square Error (RMSE) over all grid points of the original DEM against the DEM generated from the reduced data, instead of comparing a few random control points. Moreover, our multi-core data reduction algorithm is highly scalable. We also describe a modified parallel Inverse Distance Weighted (IDW) spatial interpolation method and show that the DEMs it generates are time-efficient and have better accuracy than the ones generated by the traditional IDW method.
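
    For reference, the traditional (serial) IDW interpolation that the record modifies can be written in a few lines; a plain, unoptimized sketch with the conventional power of 2:

        import numpy as np

        def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
            """Inverse-distance-weighted interpolation of scattered points
            (e.g. LiDAR elevations) onto query locations.

            xy_known: (K, 2), z_known: (K,), xy_query: (Q, 2)."""
            d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :],
                               axis=2)
            w = 1.0 / np.maximum(d, eps) ** power   # eps guards exact hits
            return (w @ z_known) / w.sum(axis=1)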

  4. Electron scattering in dense atomic and molecular gases: An empirical correlation of polarizability and electron scattering length

    International Nuclear Information System (INIS)

    Rupnik, K.; Asaf, U.; McGlynn, S.P.

    1990-01-01

    A linear correlation exists between the electron scattering length, as measured by a pressure shift method, and the polarizabilities of He, Ne, Ar, Kr, and Xe gases. The correlative algorithm has excellent predictive capability for the electron scattering lengths of mixtures of rare gases, simple molecular gases such as H2 and N2, and even complex molecular entities such as methane, CH4.
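
    The correlative algorithm amounts to a one-dimensional linear fit; a sketch, with the measured data values left to the caller since none are tabulated in this record:

        import numpy as np

        def predict_scattering_length(alpha_known, a_known, alpha_query):
            """Fit A = c0 + c1 * alpha to measured (polarizability,
            scattering length) pairs, then predict A for a gas or mixture
            from its (mole-fraction-averaged) polarizability."""
            c1, c0 = np.polyfit(alpha_known, a_known, 1)
            return c0 + c1 * np.asarray(alpha_query)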

  5. SCIAMACHY WFM-DOAS XCO2: reduction of scattering related errors

    Directory of Open Access Journals (Sweden)

    R. Sussmann

    2012-10-01

    Full Text Available Global observations of column-averaged dry air mole fractions of carbon dioxide (CO2), denoted XCO2, retrieved from SCIAMACHY on-board ENVISAT can provide important and missing global information on the distribution and magnitude of regional CO2 surface fluxes. This application has challenging precision and accuracy requirements. In a previous publication (Heymann et al., 2012), it has been shown by analysing seven years of SCIAMACHY WFM-DOAS XCO2 (WFMDv2.1) that unaccounted thin cirrus clouds can result in significant errors. In order to enhance the quality of the SCIAMACHY XCO2 data product, we have developed a new version of the retrieval algorithm (WFMDv2.2), which is described in this manuscript. It is based on an improved cloud filtering and correction method using the 1.4 μm strong water vapour absorption and 0.76 μm O2-A bands. The new algorithm has been used to generate a SCIAMACHY XCO2 data set covering the years 2003–2009. The new XCO2 data set has been validated using ground-based observations from the Total Carbon Column Observing Network (TCCON). The validation shows a significant improvement of the new product (v2.2) in comparison to the previous product (v2.1). For example, the standard deviation of the difference to TCCON at Darwin, Australia, has been reduced from 4 ppm to 2 ppm. The monthly regional-scale scatter of the data (defined as the mean intra-monthly standard deviation of all quality-filtered XCO2 retrievals within a radius of 350 km around various locations) has also been reduced, typically by a factor of about 1.5. Overall, the validation of the new WFMDv2.2 XCO2 data product can be summarised by a single measurement precision of 3.8 ppm, an estimated regional-scale (radius of 500 km) precision of monthly averages of 1.6 ppm and an estimated regional-scale relative accuracy of 0.8 ppm. In addition to the comparison with the limited number of TCCON sites, we also present a comparison with NOAA's global CO2 modelling

  6. A novel image-domain-based cone-beam computed tomography enhancement algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Li Xiang; Li Tianfang; Yang Yong; Heron, Dwight E; Huq, M Saiful, E-mail: lix@upmc.edu [Department of Radiation Oncology, University of Pittsburgh Cancer Institute, Pittsburgh, PA 15232 (United States)

    2011-05-07

    Kilo-voltage (kV) cone-beam computed tomography (CBCT) plays an important role in image-guided radiotherapy. However, due to a large cone-beam angle, scatter effects significantly degrade the CBCT image quality and limit its clinical application. The goal of this study is to develop an image enhancement algorithm to reduce the low-frequency CBCT image artifacts, which are also called the bias field. The proposed algorithm is based on the hypothesis that image intensities of different types of materials in CBCT images are approximately globally uniform (in other words, a piecewise property). A maximum a posteriori probability framework was developed to estimate the bias field contribution from a given CBCT image. The performance of the proposed CBCT image enhancement method was tested using phantoms and clinical CBCT images. Compared to the original CBCT images, the corrected images using the proposed method achieved a more uniform intensity distribution within each tissue type and significantly reduced cupping and shading artifacts. In a head and a pelvic case, the proposed method reduced the Hounsfield unit (HU) errors within the region of interest from 300 HU to less than 60 HU. In a chest case, the HU errors were reduced from 460 HU to less than 110 HU. The proposed CBCT image enhancement algorithm demonstrated a promising result by the reduction of the scatter-induced low-frequency image artifacts commonly encountered in kV CBCT imaging.

  7. A multifrequency MUSIC algorithm for locating small inhomogeneities in inverse scattering

    International Nuclear Information System (INIS)

    Griesmaier, Roland; Schmiedecke, Christian

    2017-01-01

    We consider an inverse scattering problem for time-harmonic acoustic or electromagnetic waves with sparse multifrequency far field data-sets. The goal is to localize several small penetrable objects embedded inside an otherwise homogeneous background medium from observations of far fields of scattered waves corresponding to incident plane waves with one fixed incident direction but several different frequencies. We assume that the far field is measured at a few observation directions only. Taking advantage of the smallness of the scatterers with respect to wavelength we utilize an asymptotic representation formula for the far field to design and analyze a MUSIC-type reconstruction method for this setup. We establish lower bounds on the number of frequencies and receiver directions that are required to recover the number and the positions of an ensemble of scatterers from the given measurements. Furthermore we briefly sketch a possible application of the reconstruction method to the practically relevant case of multifrequency backscattering data. Numerical examples are presented to document the potentials and limitations of this approach. (paper)
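
    The core of a MUSIC-type method is a projection onto the noise subspace of the measurement matrix; a generic sketch (the measurement layout and the model test vectors g(z) are assumptions, since the paper's asymptotic far-field formula is not reproduced here):

        import numpy as np

        def music_pseudospectrum(F, test_vectors, n_scatterers):
            """F: measurement matrix (e.g. receiver directions/frequencies
            by illuminations); test_vectors: (n_candidates, n_rows) model
            patterns g(z) for each candidate location z, assumed precomputed.
            Peaks of the returned pseudospectrum mark likely scatterers."""
            U, _, _ = np.linalg.svd(F)
            noise = U[:, n_scatterers:]              # noise-subspace basis
            proj = test_vectors.conj() @ noise       # noise-subspace component
            return 1.0 / np.linalg.norm(proj, axis=1)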

  8. Difference structures from time-resolved small-angle and wide-angle x-ray scattering

    Science.gov (United States)

    Nepal, Prakash; Saldin, D. K.

    2018-05-01

    Time-resolved small-angle x-ray scattering/wide-angle x-ray scattering (SAXS/WAXS) is capable of recovering difference structures directly from difference SAXS/WAXS curves. It does so by means of the theory described here because the structural changes probed in a typical time-resolved pump-probe experiment are generally small enough to be confined to a single residue, or a group in close proximity, which is identified by a method akin to the difference Fourier method of time-resolved crystallography. If it is assumed, as is usual with time-resolved structures, that the moved atoms lie within the residue, the 100-fold reduction in the search space (assuming a typical protein has about 100 residues) allows the extraction of the structure by a simulated annealing algorithm with a huge reduction in computing time, and leads to greater resolution by varying the positions of atoms only within that residue. This reduction in the number of potentially moved atoms allows us to identify the actual motions of the individual atoms. In the case of a crystal, time-resolved calculations are normally performed using the difference Fourier method, which is, of course, not directly applicable to SAXS/WAXS. The method developed in this paper may be thought of as a substitute for that method which allows SAXS/WAXS (and hence disordered molecules) to also be used for time-resolved structural work.

  9. A Novel Sidelobe Reduction Algorithm Based on Two-Dimensional Sidelobe Correction Using D-SVA for Squint SAR Images

    Directory of Open Access Journals (Sweden)

    Min Liu

    2018-03-01

    Full Text Available Sidelobe reduction is a primary task for synthetic aperture radar (SAR) images. Various methods have been proposed for broadside SAR that suppress the sidelobes effectively while maintaining high image resolution. Meanwhile, squint SAR, especially highly squinted SAR, has emerged as an important tool that provides more mobility and flexibility and has become a focus of recent research. One of the research challenges for squint SAR is how to resolve the severe range-azimuth coupling of the echo signals. Unlike in broadside SAR images, the range and azimuth sidelobes of squint SAR images generally no longer lie on the principal axes. Thus conventional spatially variant apodization (SVA) filters capture only part of the sidelobe information, and the sidelobe reduction is suboptimal. In this paper, we present an improved algorithm called double spatially variant apodization (D-SVA) for better sidelobe suppression; a sketch of the standard one-dimensional SVA rule it builds on is given below. Satisfactory sidelobe reduction results are achieved with the proposed algorithm, with the corrected squint SAR images compared to broadside SAR images. Simulation results also demonstrate the reliability and efficiency of the proposed method.
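
    The one-dimensional SVA rule referenced above can be sketched as follows (Stankwitz-style SVA on a Nyquist-sampled complex signal, applied to the real and imaginary parts independently); the paper's D-SVA extension to off-axis sidelobes in squint geometry is not reproduced here.

        import numpy as np

        def sva_1d(x):
            """Standard 1D SVA on a Nyquist-sampled complex signal (sketch)."""
            def sva_part(u):
                y = u.copy()
                for n in range(1, len(u) - 1):
                    denom = u[n - 1] + u[n + 1]
                    if denom == 0.0:
                        continue
                    w = -u[n] / denom               # unconstrained minimizer
                    if w <= 0.0:
                        y[n] = u[n]                 # mainlobe-like sample: keep
                    elif w >= 0.5:
                        y[n] = u[n] + 0.5 * denom   # maximum allowed apodization
                    else:
                        y[n] = 0.0                  # pure sidelobe: nulled
                return y
            return sva_part(x.real.astype(float)) + 1j * sva_part(x.imag.astype(float))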

  10. 4D cone-beam computed tomography (CBCT) using a moving blocker for simultaneous radiation dose reduction and scatter correction

    Science.gov (United States)

    Zhao, Cong; Zhong, Yuncheng; Duan, Xinhui; Zhang, You; Huang, Xiaokun; Wang, Jing; Jin, Mingwu

    2018-06-01

    Four-dimensional (4D) x-ray cone-beam computed tomography (CBCT) is important for precise radiation therapy of lung cancer. Due to the repeated use and 4D acquisition over a course of radiotherapy, the radiation dose becomes a concern. Meanwhile, scatter contamination in CBCT deteriorates image quality for treatment tasks. In this work, we propose the use of a moving blocker (MB) during the 4D CBCT acquisition (‘4D MB’) combined with motion-compensated reconstruction to address these two issues simultaneously. In 4D MB CBCT, the moving blocker reduces the x-ray flux passing through the patient and at the same time collects the scatter information in the blocked region. The scatter signal is estimated from the blocked region for correction. Even though the number of projection views and the projection data in each view are incomplete for conventional reconstruction, 4D reconstruction with a total-variation (TV) constraint and a motion-compensated temporal constraint can exploit both spatial gradient sparsity and temporal correlations among different phases to overcome the missing-data problem. Feasibility simulation studies using the 4D NCAT phantom showed that 4D MB with motion-compensated reconstruction at 1/3 imaging dose reduction could produce satisfactory images, achieving 37% improvement in the structural similarity (SSIM) index and 55% improvement in root mean square error (RMSE) compared to 4D reconstruction at the regular imaging dose without scatter correction. For the same 4D MB data, 4D reconstruction outperformed 3D TV reconstruction by 28% on SSIM and 34% on RMSE. A study of synthetic patient data also demonstrated the potential of 4D MB to reduce the radiation dose by 1/3 without compromising image quality. This work paves the way for more comprehensive studies to investigate the dose reduction limit offered by this novel 4D MB method using physical phantom experiments and real patient data based on clinically relevant metrics.
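
    The blocker-based scatter estimation step can be illustrated with a minimal per-projection sketch, assuming a hypothetical layout in which blocker strips run along detector columns: the signal measured behind the blocker is treated as scatter only and smoothly interpolated across the unblocked columns. This is illustrative only, not the authors' implementation.

        import numpy as np
        from scipy.interpolate import CubicSpline

        def scatter_correct_projection(proj, blocked_cols):
            """Blocker-based scatter correction for one projection (sketch).

            proj         : projection image, shape (n_rows, n_cols)
            blocked_cols : sorted indices of detector columns behind the blocker,
                           where the measured signal is assumed to be scatter only
            """
            cols = np.arange(proj.shape[1])
            corrected = np.empty_like(proj, dtype=float)
            for r in range(proj.shape[0]):
                # Scatter is smooth, so a spline through the blocked samples
                # estimates it across the unblocked columns.
                spline = CubicSpline(blocked_cols, proj[r, blocked_cols])
                scatter = np.clip(spline(cols), 0.0, None)
                corrected[r] = np.clip(proj[r] - scatter, 0.0, None)
            return corrected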

  11. Finding optimal exact reducts

    KAUST Repository

    AbouEisha, Hassan M.

    2014-01-01

    The problem of attribute reduction is an important problem related to feature selection and knowledge discovery. The problem of finding reducts with minimum cardinality is NP-hard. This paper suggests a new algorithm for finding exact reducts with minimum cardinality. This algorithm transforms the initial table to a decision table of a special kind, applies a set of simplification steps to this table, and uses a dynamic programming algorithm to finish the construction of an optimal reduct. I present results of computer experiments for a collection of decision tables from the UCI ML Repository. For many of the tables considered, the simplification steps alone solved the problem.
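
    To fix ideas, an exhaustive minimum-cardinality reduct search for a small consistent decision table might look like the sketch below; it is exponential in the number of attributes and stands in for, rather than reproduces, the paper's simplification-plus-dynamic-programming algorithm.

        from itertools import combinations

        def is_exact_reduct(rows, decisions, attrs):
            """True if the attribute subset separates every pair of rows
            that carry different decisions (consistent table assumed)."""
            seen = {}
            for row, d in zip(rows, decisions):
                key = tuple(row[a] for a in attrs)
                if key in seen and seen[key] != d:
                    return False
                seen.setdefault(key, d)
            return True

        def minimum_reduct(rows, decisions):
            """Exhaustive minimum-cardinality reduct search (exponential)."""
            n_attrs = len(rows[0])
            for k in range(1, n_attrs + 1):
                for attrs in combinations(range(n_attrs), k):
                    if is_exact_reduct(rows, decisions, attrs):
                        return attrs
            return tuple(range(n_attrs))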

  12. Comparison of Algorithms for the Optimal Location of Control Valves for Leakage Reduction in WDNs

    Directory of Open Access Journals (Sweden)

    Enrico Creaco

    2018-04-01

    Full Text Available The paper presents the comparison of two different algorithms for the optimal location of control valves for leakage reduction in water distribution networks (WDNs). The former is based on the sequential addition (SA) of control valves. At the generic step Nval of SA, the search for the optimal combination of Nval valves is carried out while retaining the optimal combination of Nval − 1 valves found at the previous step. Therefore, only one new valve location is searched for at each step of SA, among all the remaining available locations. The latter algorithm consists of a multi-objective genetic algorithm (GA), in which valve locations are encoded inside individual genes. For the sake of consistency, the same embedded algorithm, based on iterated linear programming (LP), was used inside SA and GA to search for the optimal valve settings at various time slots in the day. The results of applications to two WDNs show that SA and GA yield identical results for small values of Nval. When this number grows, the limitations of SA, related to its reduced exploration of the search space, emerge. In fact, for higher values of Nval, SA tends to produce less beneficial valve locations in terms of leakage abatement. However, the smaller computation time of SA may make this algorithm preferable in the case of large WDNs, for which the application of GA would be overly burdensome.

  13. X-ray scatter correction method for dedicated breast computed tomography: improvements and initial patient testing

    International Nuclear Information System (INIS)

    Ramamurthy, Senthil; D’Orsi, Carl J; Sechopoulos, Ioannis

    2016-01-01

    A previously proposed x-ray scatter correction method for dedicated breast computed tomography was further developed and implemented so as to allow for initial patient testing. The method involves the acquisition of a complete second set of breast CT projections covering 360° with a perforated tungsten plate in the path of the x-ray beam. To make patient testing feasible, a wirelessly controlled electronic positioner for the tungsten plate was designed and added to a breast CT system. Other improvements to the algorithm were implemented, including automated exclusion of non-valid primary estimate points and the use of a different approximation method to estimate the full scatter signal. To evaluate the effectiveness of the algorithm, the resulting image quality was assessed with a breast phantom and with nine patient image sets. The improvements in the algorithm avoided the introduction of artifacts, especially at object borders, which was an issue in the previous implementation in some cases. Both contrast, in terms of signal difference, and signal difference-to-noise ratio were improved with the proposed method, as opposed to the correction algorithm incorporated in the system, which does not recover contrast. Patient image evaluation also showed enhanced contrast, better cupping correction, and more consistent voxel values for the different tissues. The algorithm also reduces artifacts present in reconstructions of non-regularly shaped breasts. With the implemented hardware and software improvements, the proposed method can be reliably used during patient breast CT imaging, resulting in improved image quality, no introduction of artifacts, and in some cases reduction of artifacts already present. The impact of the algorithm on actual clinical performance for detection, diagnosis and other clinical tasks in breast imaging remains to be evaluated. (paper)

  14. Scattering properties of electromagnetic waves from metal object in the lower terahertz region

    Science.gov (United States)

    Chen, Gang; Dang, H. X.; Hu, T. Y.; Su, Xiang; Lv, R. C.; Li, Hao; Tan, X. M.; Cui, T. J.

    2018-01-01

    An efficient hybrid algorithm is proposed to analyze the electromagnetic scattering properties of metal objects at lower terahertz (THz) frequencies. A metal object can be viewed as a perfectly electrically conducting object with a slightly rough surface in the lower THz region. Hence the THz field scattered from a metal object can be divided into coherent and incoherent parts. The physical optics and truncated-wedge incremental-length diffraction coefficients methods are combined to compute the coherent part, while the small perturbation method is used for the incoherent part. With the Monte Carlo method, the radar cross section of the rough metal surface is computed by the multilevel fast multipole algorithm and the proposed hybrid algorithm, respectively. The numerical results show that the proposed algorithm has good accuracy and simulates the scattering properties rapidly in the lower THz region.

  15. Prototype metal artefact reduction algorithm in flat panel computed tomography - evaluation in patients undergoing transarterial hepatic radioembolisation

    International Nuclear Information System (INIS)

    Hamie, Qeumars Mustafa; Kobe, Adrian Raoul; Mietzsch, Leif; Puippe, Gilbert Dominique; Pfammatter, Thomas; Guggenberger, Roman; Manhart, Michael

    2018-01-01

    To investigate the effect of an on-site prototype metal artefact reduction (MAR) algorithm in cone-beam CT-catheter-arteriography (CBCT-CA) in patients undergoing transarterial radioembolisation (RE) of hepatic masses. Ethical board approved retrospective study of 29 patients (mean 63.7±13.7 years, 11 female), including 16 patients with arterial metallic coils, undergoing CBCT-CA (8s scan, 200 degrees rotation, 397 projections). Image reconstructions with and without the prototype MAR algorithm were evaluated quantitatively (streak-artefact attenuation changes) and qualitatively (visibility of hepatic parenchyma and vessels) in the near-field (<1cm) and far-field (>3cm) of artefact sources (metallic coils and catheters). Quantitative and qualitative measurements of uncorrected and MAR-corrected images and different artefact sources were compared. Quantitative evaluation showed significant reduction of near- and far-field streak-artefacts with MAR for both artefact sources (p<0.001), while remaining stable for unaffected organs (all p>0.05). Inhomogeneities of attenuation values were significantly higher for metallic coils compared to catheters (p<0.001) and decreased significantly for both after MAR (p<0.001). Qualitative image scores were significantly improved after MAR (all p<0.003), with a trend toward higher artefact degrees for metallic coils compared to catheters. In patients undergoing CBCT-CA for transarterial RE, the prototype MAR algorithm improves image quality in the proximity of metallic coil and catheter artefacts. (orig.)

  16. Prototype metal artefact reduction algorithm in flat panel computed tomography - evaluation in patients undergoing transarterial hepatic radioembolisation

    Energy Technology Data Exchange (ETDEWEB)

    Hamie, Qeumars Mustafa; Kobe, Adrian Raoul; Mietzsch, Leif; Puippe, Gilbert Dominique; Pfammatter, Thomas; Guggenberger, Roman [University Hospital Zurich, Department of Radiology, Zurich (Switzerland); Manhart, Michael [Imaging Concepts, HC AT IN IMC, Siemens Healthcare GmbH, Advanced Therapies, Innovation, Forchheim (Germany)

    2018-01-15

    To investigate the effect of an on-site prototype metal artefact reduction (MAR) algorithm in cone-beam CT-catheter-arteriography (CBCT-CA) in patients undergoing transarterial radioembolisation (RE) of hepatic masses. Ethical board approved retrospective study of 29 patients (mean 63.7±13.7 years, 11 female), including 16 patients with arterial metallic coils, undergoing CBCT-CA (8s scan, 200 degrees rotation, 397 projections). Image reconstructions with and without the prototype MAR algorithm were evaluated quantitatively (streak-artefact attenuation changes) and qualitatively (visibility of hepatic parenchyma and vessels) in the near-field (<1cm) and far-field (>3cm) of artefact sources (metallic coils and catheters). Quantitative and qualitative measurements of uncorrected and MAR-corrected images and different artefact sources were compared. Quantitative evaluation showed significant reduction of near- and far-field streak-artefacts with MAR for both artefact sources (p<0.001), while remaining stable for unaffected organs (all p>0.05). Inhomogeneities of attenuation values were significantly higher for metallic coils compared to catheters (p<0.001) and decreased significantly for both after MAR (p<0.001). Qualitative image scores were significantly improved after MAR (all p<0.003), with a trend toward higher artefact degrees for metallic coils compared to catheters. In patients undergoing CBCT-CA for transarterial RE, the prototype MAR algorithm improves image quality in the proximity of metallic coil and catheter artefacts. (orig.)

  17. An algebraic approach to the scattering equations

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Rijun; Rao, Junjie [Zhejiang Institute of Modern Physics, Zhejiang University,Hangzhou, 310027 (China); Feng, Bo [Zhejiang Institute of Modern Physics, Zhejiang University,Hangzhou, 310027 (China); Center of Mathematical Science, Zhejiang University,Hangzhou, 310027 (China); He, Yang-Hui [School of Physics, NanKai University,Tianjin, 300071 (China); Department of Mathematics, City University,London, EC1V 0HB (United Kingdom); Merton College, University of Oxford,Oxford, OX14JD (United Kingdom)

    2015-12-10

    We employ the so-called companion matrix method from computational algebraic geometry, tailored for zero-dimensional ideals, to study the scattering equations. The method renders the CHY-integrand of scattering amplitudes computable using simple linear algebra and is amenable to an algorithmic approach. Certain identities in the amplitudes as well as rationality of the final integrand become immediate in this formalism.

  18. Diffuse scattering from crystals with point defects

    International Nuclear Information System (INIS)

    Andrushevsky, N.M.; Shchedrin, B.M.; Simonov, V.I.; Malakhova, L.F.

    2002-01-01

    The analytical expressions for calculating the intensities of X-ray diffuse scattering from a crystal of finite dimensions containing monatomic substitutional, interstitial, or vacancy-type point defects have been derived. The method for determining the three-dimensional structure from experimental diffuse-scattering data for crystals with point defects of various concentrations is discussed, and corresponding numerical algorithms are suggested.

  19. An algebraic approach to the scattering equations

    International Nuclear Information System (INIS)

    Huang, Rijun; Rao, Junjie; Feng, Bo; He, Yang-Hui

    2015-01-01

    We employ the so-called companion matrix method from computational algebraic geometry, tailored for zero-dimensional ideals, to study the scattering equations. The method renders the CHY-integrand of scattering amplitudes computable using simple linear algebra and is amenable to an algorithmic approach. Certain identities in the amplitudes as well as rationality of the final integrand become immediate in this formalism.

  20. A scattering-based over-land rainfall retrieval algorithm for South Korea using GCOM-W1/AMSR-2 data

    Science.gov (United States)

    Kwon, Young-Joo; Shin, Hayan; Ban, Hyunju; Lee, Yang-Won; Park, Kyung-Ae; Cho, Jaeil; Park, No-Wook; Hong, Sungwook

    2017-08-01

    Heavy summer rainfall is a primary natural disaster affecting lives and property in the Korean Peninsula. This study presents a satellite-based rainfall rate retrieval algorithm for South Korea combining polarization-corrected temperature (PCT) and scattering index (SI) data from the 36.5 and 89.0 GHz channels of the Advanced Microwave Scanning Radiometer 2 (AMSR-2) onboard the Global Change Observation Mission (GCOM)-W1 satellite. The coefficients for the algorithm were obtained from spatially and temporally collocated data from the AMSR-2 and ground-based automatic weather station rain gauges from 1 July to 30 August of the years 2012-2015. There were time delays of about 25 minutes between the AMSR-2 observations and the ground rain-gauge measurements. A new linearly combined rainfall retrieval algorithm focused on heavy rain, using the PCT and SI, was validated against ground-based rainfall observations for South Korea from 1 July to 30 August 2016. The validation showed that the presented PCT and SI methods gave slightly improved results for rainfall > 5 mm h-1 compared to the current AMSR-2 level 2 data. The best bias and root mean square error (RMSE) for the PCT method at AMSR-2 36.5 GHz were 2.09 mm h-1 and 7.29 mm h-1, respectively, while the current official AMSR-2 rainfall rates show a larger bias and RMSE (4.80 mm h-1 and 9.35 mm h-1, respectively). This study provides a scattering-based over-land rainfall retrieval algorithm for South Korea, affected by stationary-front rain and typhoons, that combines the advantages of the previous PCT and SI methods and can be applied to a variety of spaceborne passive microwave radiometers.

  1. Neutron scattering studies in the actinide region

    International Nuclear Information System (INIS)

    Kegel, G.H.R.; Egan, J.J.

    1993-09-01

    This report discusses the following topics: prompt fission neutron energy spectra for 235U and 239Pu; two-parameter measurement of nuclear lifetimes; the "black" neutron detector; data reduction techniques for neutron scattering experiments; inelastic neutron scattering studies in 197Au; elastic and inelastic scattering studies in 239Pu; and neutron-induced defects in silicon dioxide MOS structures.

  2. Genetic Algorithm-Guided, Adaptive Model Order Reduction of Flexible Aircrafts

    Science.gov (United States)

    Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter; Brenner, Martin J.

    2017-01-01

    This paper presents a methodology for automated model order reduction (MOR) of flexible aircraft to construct linear parameter-varying (LPV) reduced order models (ROMs) for aeroservoelasticity (ASE) analysis and control synthesis across a broad flight parameter space. The novelty includes utilization of genetic algorithms (GAs) to automatically determine the states for reduction while minimizing the trial-and-error process and the heuristics required to perform MOR; balanced truncation for unstable systems to achieve locally optimal realization of the full model; congruence transformation for "weak" fulfillment of state consistency across the entire flight parameter space; and ROM interpolation based on adaptive grid refinement to generate a globally functional LPV ASE ROM. The methodology is applied to the X-56A MUTT model currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the X-56A ROM, with less than one-seventh the number of states of the original model, is able to accurately predict system response among all input-output channels for pitch, roll, and ASE control at various flight conditions. The GA-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The adaptive refinement allows selective addition of grid points in the parameter space where the flight dynamics vary dramatically, to enhance interpolation accuracy without over-burdening controller synthesis and onboard memory downstream. The present MOR framework can be used by control engineers for robust ASE controller synthesis and novel vehicle design.

  3. Efficient scatter distribution estimation and correction in CBCT using concurrent Monte Carlo fitting

    Energy Technology Data Exchange (ETDEWEB)

    Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Verhaegen, F. [Department of Radiation Oncology - MAASTRO, GROW—School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4 (Canada); Jaffray, D. A. [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Ontario Cancer Institute, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)

    2015-01-15

    Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations at a subset of the projection angles used in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S_MC. S_MC is fit to a function, S_F, and if the fit of S_F is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S_F, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were pelvis scans of a phantom and a patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as to determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the original uncorrected reconstruction, a reconstruction corrected with a constant scatter estimate, and a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson's correlation, r, proved to be a
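
    The frequency-limited fitting step can be illustrated with a least-squares fit of a truncated sine/cosine series to sparse scatter samples along one detector axis (the CMCF method fits over detector position and projection angle jointly); the function name and parameters below are illustrative assumptions.

        import numpy as np

        def fit_fourier_scatter(u_samples, s_samples, n_harmonics, period):
            """Least-squares fit of a frequency-limited sine/cosine series to
            sparse scatter samples along one detector axis; returns a callable
            that evaluates the fitted scatter everywhere."""
            def design(u):
                cols = [np.ones_like(u)]
                for m in range(1, n_harmonics + 1):
                    w = 2.0 * np.pi * m / period
                    cols += [np.cos(w * u), np.sin(w * u)]
                return np.stack(cols, axis=-1)
            coef, *_ = np.linalg.lstsq(design(np.asarray(u_samples, float)),
                                       np.asarray(s_samples, float), rcond=None)
            return lambda u: design(np.asarray(u, float)) @ coef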

  4. Research on the Scattering Characteristics and the RCS Reduction of Circularly Polarized Microstrip Antenna

    Directory of Open Access Journals (Sweden)

    W. Jiang

    2013-01-01

    Full Text Available Based on a study of the radiation and scattering of circularly polarized (CP) antennas, a novel radar cross-section (RCS) reduction technique for CP antennas is proposed in this paper. Quasi-fractal slots are applied in the design of the antenna ground plane to reduce the RCS of the CP antenna. Both a prototype antenna and an array are designed, and their time-, frequency-, and space-domain characteristics are studied to validate the proposed technique. The simulated and measured results show that the RCS of the prototype antenna and array is reduced by up to 7.85 dB and 6.95 dB, respectively, in the band of 1 GHz–10 GHz. The proposed technique serves as a candidate for the design of low-RCS CP antennas and arrays.

  5. Image processing methods for noise reduction in the TJ-II Thomson Scattering diagnostic

    Energy Technology Data Exchange (ETDEWEB)

    Dormido-Canto, S., E-mail: sebas@dia.uned.es [Departamento de Informatica y Automatica, UNED, Madrid 28040 (Spain); Farias, G. [Pontificia Universidad Catolica de Valparaiso, Valparaiso (Chile); Vega, J.; Pastor, I. [Asociacion EURATOM/CIEMAT para Fusion, Madrid 28040 (Spain)

    2012-12-15

    Highlights: • We describe an approach to reduce or mitigate stray light in the images and show the results. • We analyze the parameters to take into account in the proposed process. • We report a simplified example to explain the proposed process. - Abstract: The Thomson Scattering diagnostic of the TJ-II stellarator provides temperature and density profiles. The CCD camera acquires images corrupted by noise that, in some cases, can produce unreliable profiles. The main source of noise is the so-called stray light. In this paper we describe an approach that allows mitigation of the effects that stray light has on the images: extraction of regions with connected components. In addition, the robustness and effectiveness of the noise reduction technique is validated in two ways: (1) supervised classification and (2) comparison of electron temperature profiles.

  6. A Numerical Method for Analyzing Electromagnetic Scattering Properties of a Moving Conducting Object

    Directory of Open Access Journals (Sweden)

    Lei Kuang

    2014-01-01

    Full Text Available A novel numerical approach is developed to analyze the electromagnetic scattering properties of a moving conducting object based on the finite-difference time-domain (FDTD) algorithm. Relativistic boundary conditions are implemented in the FDTD algorithm to calculate the electromagnetic field on the moving boundary. An improved technique is proposed for solving the scattered field in order to improve the computational efficiency and the stability of the solutions. The time-harmonic field scattered from a one-dimensional moving conducting surface is first simulated by the proposed approach. Numerical results show that the amplitude and frequency of the scattered field undergo a modulation and a shift. Then the transient scattered field is calculated, and the broadband electromagnetic scattering properties of the moving conducting surface are obtained by the fast Fourier transform (FFT). Finally, the field scattered from a two-dimensional moving square cylinder is analyzed. The numerical results demonstrate the Doppler effect of a moving conducting object. The simulated results agree well with analytical results.

  7. SIMSAS - a window based software package for simulation and analysis of multiple small-angle scattering data

    International Nuclear Information System (INIS)

    Jayaswal, B.; Mazumder, S.

    1998-09-01

    Small-angle scattering data from strongly scattering systems, e.g. porous materials, cannot be analysed by invoking the single-scattering approximation, as specimens needed to replicate the bulk matrix in its essential properties are too thick for the approximation to remain valid. The presence of multiple scattering is indicated by the breakdown of the functional invariance of the observed scattering profile under variation of sample thickness and/or wavelength of the probing radiation. This article delineates how failure to account for multiple scattering affects the results of analysis, and then how to correct the data for its effect. It deals with an algorithm to extract the single-scattering profile from small-angle scattering data affected by multiple scattering. The algorithm can process the scattering data and deduce the single-scattering profile on an absolute scale. A software package, SIMSAS, is introduced for executing this inversion step. This package is useful both for simulating and for analysing multiple small-angle scattering data. (author)

  8. Rough surface scattering simulations using graphics cards

    International Nuclear Information System (INIS)

    Klapetek, Petr; Valtr, Miroslav; Poruba, Ales; Necas, David; Ohlidal, Miloslav

    2010-01-01

    In this article we present results of rough-surface scattering calculations using a graphics processing unit implementation of the finite-difference time-domain algorithm. Numerical results are compared to real measurements, and the computational performance is compared to a central processing unit implementation of the same algorithm. Atomic force microscope measurements of surface morphology are used as the basis for the computations. It is shown that the capabilities of the graphics processing unit can be used to speed up the presented computationally demanding algorithms without loss of precision.

  9. Hybrid radiosity-SP3 equation based bioluminescence tomography reconstruction for turbid medium with low- and non-scattering regions

    Science.gov (United States)

    Chen, Xueli; Zhang, Qitan; Yang, Defu; Liang, Jimin

    2014-01-01

    To provide an ideal solution for a specific problem of gastric cancer detection in which low-scattering regions simultaneously existed with both the non- and high-scattering regions, a novel hybrid radiosity-SP3 equation based reconstruction algorithm for bioluminescence tomography was proposed in this paper. In the algorithm, the third-order simplified spherical harmonics approximation (SP3) was combined with the radiosity equation to describe the bioluminescent light propagation in tissues, which provided acceptable accuracy for the turbid medium with both low- and non-scattering regions. The performance of the algorithm was evaluated with digital mouse based simulations and a gastric cancer-bearing mouse based in situ experiment. Primary results demonstrated the feasibility and superiority of the proposed algorithm for the turbid medium with low- and non-scattering regions.

  10. Hybrid radiosity-SP3 equation based bioluminescence tomography reconstruction for turbid medium with low- and non-scattering regions

    International Nuclear Information System (INIS)

    Chen, Xueli; Zhang, Qitan; Yang, Defu; Liang, Jimin

    2014-01-01

    To provide an ideal solution for a specific problem of gastric cancer detection in which low-scattering regions simultaneously existed with both the non- and high-scattering regions, a novel hybrid radiosity-SP3 equation based reconstruction algorithm for bioluminescence tomography was proposed in this paper. In the algorithm, the third-order simplified spherical harmonics approximation (SP3) was combined with the radiosity equation to describe the bioluminescent light propagation in tissues, which provided acceptable accuracy for the turbid medium with both low- and non-scattering regions. The performance of the algorithm was evaluated with digital mouse based simulations and a gastric cancer-bearing mouse based in situ experiment. Primary results demonstrated the feasibility and superiority of the proposed algorithm for the turbid medium with low- and non-scattering regions.

  11. Comparison study of noise reduction algorithms in dual energy chest digital tomosynthesis

    Science.gov (United States)

    Lee, D.; Kim, Y.-S.; Choi, S.; Lee, H.; Choi, S.; Kim, H.-J.

    2018-04-01

    Dual energy chest digital tomosynthesis (CDT) is a recently developed medical technique that takes advantage of both tomosynthesis and dual energy X-ray imaging. However, quantum noise, which occurs in dual energy X-ray images, strongly interferes with diagnosis in various clinical situations. Therefore, noise reduction is necessary in dual energy CDT. In this study, noise-compensating algorithms, including simple smoothing of high-energy images (SSH) and anti-correlated noise reduction (ACNR), were evaluated in a CDT system. We used a newly developed prototype CDT system and an anthropomorphic chest phantom for the experimental studies. The resulting images demonstrated that dual energy CDT can selectively image anatomical structures, such as bone and soft tissue. Among the resulting images, those acquired with ACNR showed the best image quality. Both the coefficient of variation and the contrast-to-noise ratio (CNR) were the highest with ACNR among the three different dual energy techniques, and the CNR of bone was significantly improved compared to reconstructed images acquired at a single energy. This study demonstrated the clinical value of dual energy CDT and quantitatively showed that ACNR is the most suitable among the three implemented dual energy techniques, namely standard log subtraction, SSH, and ACNR.
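
    A rough sketch of ACNR-style processing follows: weighted log subtraction produces tissue-selective images whose high-frequency noise is anti-correlated, so adding a scaled high-pass of the complementary image can cancel noise. The weights w_soft, w_bone, c and sigma are illustrative placeholders, not the study's calibrated values, and the sign and size of c must be chosen so the noise cancels rather than adds.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def dual_energy_acnr(low, high, w_soft=0.4, w_bone=0.9, c=0.3, sigma=2.0):
            """Weighted log subtraction followed by ACNR (illustrative sketch)."""
            lo = np.log(np.clip(low.astype(float), 1e-6, None))
            hi = np.log(np.clip(high.astype(float), 1e-6, None))
            soft = hi - w_soft * lo          # soft-tissue-selective image
            bone = hi - w_bone * lo          # complementary bone-selective image
            # High-frequency noise in the two subtraction images is
            # anti-correlated, so a scaled high-pass of one can cancel
            # noise in the other; c must be tuned for the acquisition.
            bone_hp = bone - gaussian_filter(bone, sigma)
            return soft + c * bone_hp, bone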

  12. BioXTAS RAW, a software program for high-throughput automated small-angle X-ray scattering data reduction and preliminary analysis

    DEFF Research Database (Denmark)

    Nielsen, S.S.; Toft, K.N.; Snakenborg, Detlef

    2009-01-01

    A fully open source software program for automated two-dimensional and one-dimensional data reduction and preliminary analysis of isotropic small-angle X-ray scattering (SAXS) data is presented. The program is freely distributed, following the open-source philosophy, and does not rely on any commercial software packages. BioXTAS RAW is a fully automated program that, via an online feature, reads raw two-dimensional SAXS detector output files and processes and plots data as the data files are created during measurement sessions. The software handles all steps in the data reduction. This includes mask creation, radial averaging, error bar calculation, artifact removal, normalization and q calibration. Further data reduction such as background subtraction and absolute intensity scaling is fast and easy via the graphical user interface. BioXTAS RAW also provides preliminary analysis of one-dimensional data.

  13. Simultaneous optical image compression and encryption using error-reduction phase retrieval algorithm

    International Nuclear Information System (INIS)

    Liu, Wei; Liu, Shutian; Liu, Zhengjun

    2015-01-01

    We report a simultaneous image compression and encryption scheme based on solving a typical optical inverse problem. The secret images to be processed are multiplexed as the input intensities of a cascaded diffractive optical system. At the output plane, compressed complex-valued data with far fewer measurements can be obtained by utilizing an error-reduction phase retrieval algorithm. The magnitude of the output image can serve as the final ciphertext while its phase serves as the decryption key. Therefore the compression and encryption are completed simultaneously without additional encoding and filtering operations. The proposed strategy can be straightforwardly applied to existing optical security systems that involve diffraction and interference. Numerical simulations are performed to demonstrate the validity and security of the proposal. (paper)
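
    For background, the classic single-plane error-reduction (Gerchberg-Saxton-type) iteration is sketched below: it alternates between enforcing the measured Fourier magnitude and an object-domain support/non-negativity constraint. The cited scheme instead runs a cascaded diffractive system with multiplexed inputs; this generic sketch only conveys the error-reduction mechanism.

        import numpy as np

        def error_reduction(fourier_mag, support, n_iter=200, seed=0):
            """Single-plane error-reduction phase retrieval (generic sketch)."""
            rng = np.random.default_rng(seed)
            phase = np.exp(2j * np.pi * rng.random(fourier_mag.shape))
            g = np.fft.ifft2(fourier_mag * phase).real
            for _ in range(n_iter):
                G = np.fft.fft2(g)
                G = fourier_mag * np.exp(1j * np.angle(G))   # magnitude constraint
                g = np.fft.ifft2(G).real
                g = np.where(support & (g > 0), g, 0.0)      # object-domain constraint
            return g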

  14. Direct numerical reconstruction of conductivities in three dimensions using scattering transforms

    DEFF Research Database (Denmark)

    Bikowski, Jutta; Knudsen, Kim; Mueller, Jennifer L

    2011-01-01

    A direct three-dimensional EIT reconstruction algorithm based on complex geometrical optics solutions and a nonlinear scattering transform is presented and implemented for spherically symmetric conductivity distributions. The scattering transform is computed both with a Born approximation and from...

  15. An error reduction algorithm to improve lidar turbulence estimates for wind energy

    Directory of Open Access Journals (Sweden)

    J. F. Newman

    2017-02-01

    Full Text Available Remote-sensing devices such as lidars are currently being investigated as alternatives to cup anemometers on meteorological towers for the measurement of wind speed and direction. Although lidars can measure mean wind speeds at heights spanning an entire turbine rotor disk and can be easily moved from one location to another, they measure different values of turbulence than an instrument on a tower. Current methods for improving lidar turbulence estimates include the use of analytical turbulence models and expensive scanning lidars. While these methods provide accurate results in a research setting, they cannot be easily applied to smaller, vertically profiling lidars in locations where high-resolution sonic anemometer data are not available. Thus, there is clearly a need for a turbulence error reduction model that is simpler and more easily applicable to lidars that are used in the wind energy industry. In this work, a new turbulence error reduction algorithm for lidars is described. The Lidar Turbulence Error Reduction Algorithm, L-TERRA, can be applied using only data from a stand-alone vertically profiling lidar and requires minimal training with meteorological tower data. The basis of L-TERRA is a series of physics-based corrections that are applied to the lidar data to mitigate errors from instrument noise, volume averaging, and variance contamination. These corrections are applied in conjunction with a trained machine-learning model to improve turbulence estimates from a vertically profiling WINDCUBE v2 lidar. The lessons learned from creating the L-TERRA model for a WINDCUBE v2 lidar can also be applied to other lidar devices. L-TERRA was tested on data from two sites in the Southern Plains region of the United States. The physics-based corrections in L-TERRA brought regression line slopes much closer to 1 at both sites and significantly reduced the sensitivity of lidar turbulence errors to atmospheric stability. The accuracy of machine

  16. Research of scatter correction on industry computed tomography

    International Nuclear Information System (INIS)

    Sun Shaohua; Gao Wenhuan; Zhang Li; Chen Zhiqiang

    2002-01-01

    In the scanning process of industrial computed tomography, scatter blurs the reconstructed image. The grey values of pixels in the reconstructed image deviate from their true values, and this effect needs to be corrected. With the conventional deconvolution method, many iteration steps are needed and the computing time is unsatisfactory. The authors discuss a method combining the Ordered Subsets Convex algorithm with a scatter model to implement scatter correction; promising results are obtained in both speed and image quality.

  17. Identification and angle reconstruction of the scattered electron with the ZEUS calorimeter

    International Nuclear Information System (INIS)

    Doeker, T.

    1992-10-01

    For the analysis of deep-inelastic electron-proton events with the ZEUS detector, a key ingredient is the reliable and efficient identification of the scattered electron. To this end, an essential means is the information from the uranium-scintillator calorimeter. In this work an algorithm is presented which uses the segmentation properties of the ZEUS calorimeter to identify the scattered electron in neutral current events. For energy deposits in adjacent calorimeter cells, the algorithm determines the probability that these deposits result from an electromagnetic shower. Furthermore, several methods of measuring the scattering angle of the final-state electron are compared. An angular resolution of about 3 mrad is obtained. (orig.) [de

  18. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    Science.gov (United States)

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifacts in oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data as an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization (OS-EM) algorithm was examined; a generic sketch of the OS-EM update appears below. Also, small region of interest (ROI) setting and reverse processing were applied to improve performance. Both algorithms reduced artifacts, at the cost of slightly decreased gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriment. Sequential and reverse processing did not show apparent effects. The two alternative iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
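
    The OS-EM sketch referenced above, for a generic nonnegative system matrix and Poisson data, might read as follows; it shows only the multiplicative subset update, not the study's initialization or restoration loop.

        import numpy as np

        def os_em(A, y, n_subsets=4, n_iter=10):
            """Ordered-subsets EM for y ~ Poisson(A @ x) with rows split into
            angular subsets (generic sketch, not the authors' implementation).

            A : nonnegative system matrix, shape (n_rays, n_voxels)
            y : measured projection data, shape (n_rays,)
            """
            x = np.ones(A.shape[1])
            subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
            for _ in range(n_iter):
                for idx in subsets:
                    As = A[idx]
                    ratio = y[idx] / np.clip(As @ x, 1e-12, None)
                    # Multiplicative EM update restricted to one subset.
                    x *= (As.T @ ratio) / np.clip(As.sum(axis=0), 1e-12, None)
            return x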

  19. Option-4 algorithm for Florida pocket depth probe: reduction in the variance of site-specific probeable crevice depth measurements.

    Science.gov (United States)

    Breen, H J; Rogers, P; Johnson, N W; Slaney, R

    1999-08-01

    Clinical periodontal measurement is plagued by many sources of error which result in aberrant values (outliers). This study set out to compare probeable crevice depth (PCD) measurements selected by the option-4 algorithm against those recorded with a conventional double-pass method, and to quantify any reduction in site-specific PCD variances. A single clinician recorded full-mouth PCD at 1 visit in 32 subjects (mean age 45.5 years) with moderately advanced chronic adult periodontitis. PCD was recorded over 2 passes at 6 sites per tooth with the Florida Pocket Depth Probe, a 3rd generation probe. The option-4 algorithm compared the 1st pass site-specific PCD value (PCD1) to the 2nd pass site-specific PCD value (PCD2) and, if the difference between these values was >1.00 mm, allowed the recording of a maximum of 2 further measurements (3rd and 4th pass measurements, PCD3 and PCD4); 4 site-specific measurements were considered to be the maximum subject and tissue tolerance. The algorithm selected the first 2 measurements whose difference was ≤1.00 mm, yielding a reduction in the percentage difference Y (Y = [(A − B)/A] × 100) and a 75% reduction in the median site-specific variance of PCD1/PCD2.

  20. The integration of improved Monte Carlo Compton scattering algorithms into the Integrated TIGER Series

    International Nuclear Information System (INIS)

    Quirk, Thomas J. IV

    2004-01-01

    The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterparts, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross sections (which assume a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening that the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
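
    For contrast with the impulse-approximation sampling described above, the stock free-electron Klein-Nishina model can be sampled with Kahn's classic rejection method, sketched below; variable names are illustrative, and the bound-electron Ribberfors/Brusa sampling is considerably more involved.

        import numpy as np

        def sample_compton(alpha, rng):
            """Kahn's rejection method for the Klein-Nishina distribution
            (free, stationary electron). alpha is the photon energy in units
            of the electron rest energy; returns (x, cos_theta) with x = E/E'.
            Usage: x, mu = sample_compton(2.0, np.random.default_rng())."""
            while True:
                r1, r2, r3 = rng.random(3)
                if r1 <= (1.0 + 2.0 * alpha) / (9.0 + 2.0 * alpha):
                    x = 1.0 + 2.0 * alpha * r2                 # uniform in [1, 1+2a]
                    accept = r3 <= 4.0 * (1.0 / x - 1.0 / (x * x))
                else:
                    x = (1.0 + 2.0 * alpha) / (1.0 + 2.0 * alpha * r2)
                    cos_t = 1.0 - (x - 1.0) / alpha
                    accept = r3 <= 0.5 * (cos_t * cos_t + 1.0 / x)
                if accept:
                    return x, 1.0 - (x - 1.0) / alpha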

  1. A two-domain real-time algorithm for optimal data reduction: a case study on accelerator magnet measurements

    International Nuclear Information System (INIS)

    Arpaia, Pasquale; Buzio, Marco; Inglese, Vitaliano

    2010-01-01

    A real-time algorithm for data reduction, based on the combination of two lossy techniques specifically optimized for high-rate magnetic measurements in two domains (e.g. time and space), is proposed. The first technique exploits an adaptive sampling rule based on the power estimation of the flux increments in order to optimize the information to be gathered for magnetic field analysis in real time. The tracking condition is defined by the target noise level in the Nyquist band required by the post-processing procedure of magnetic analysis. The second technique uses a data reduction algorithm in order to improve the compression ratio while preserving the consistency of the measured signal. The allowed loss is set equal to the random noise level in the signal, in order to force the loss and the noise to cancel rather than add, thereby improving the signal-to-noise ratio. Numerical analysis and experimental results of on-field performance characterization and validation are reported for two case studies of magnetic measurement systems for testing magnets of the Large Hadron Collider at the European Organization for Nuclear Research (CERN)

  2. Observer Evaluation of a Metal Artifact Reduction Algorithm Applied to Head and Neck Cone Beam Computed Tomographic Images

    Energy Technology Data Exchange (ETDEWEB)

    Korpics, Mark; Surucu, Murat; Mescioglu, Ibrahim; Alite, Fiori; Block, Alec M.; Choi, Mehee; Emami, Bahman; Harkenrider, Matthew M.; Solanki, Abhishek A.; Roeske, John C., E-mail: jroeske@lumc.edu

    2016-11-15

    Purpose and Objectives: To quantify, through an observer study, the reduction in metal artifacts on cone beam computed tomographic (CBCT) images using a projection-interpolation algorithm, on images containing metal artifacts from dental fillings and implants in patients treated for head and neck (H&N) cancer. Methods and Materials: An interpolation-substitution algorithm was applied to H&N CBCT images containing metal artifacts from dental fillings and implants. Image quality with respect to metal artifacts was evaluated subjectively and objectively. First, 6 independent radiation oncologists were asked to rank randomly sorted blinded images (before and after metal artifact reduction) using a 5-point rating scale (1 = severe artifacts; 5 = no artifacts). Second, the standard deviation of different regions of interest (ROI) within each image was calculated and compared with the mean rating scores. Results: The interpolation-substitution technique successfully reduced metal artifacts in 70% of the cases. From a total of 60 images from 15 H&N cancer patients undergoing image guided radiation therapy, the mean rating score on the uncorrected images was 2.3 ± 1.1, versus 3.3 ± 1.0 for the corrected images. The mean difference in ranking score between uncorrected and corrected images was 1.0 (95% confidence interval: 0.9-1.2, P<.05). The standard deviation of each ROI significantly decreased after artifact reduction (P<.01). Moreover, a negative correlation between the mean rating score for each image and the standard deviation of the oral cavity and bilateral cheeks was observed. Conclusion: The interpolation-substitution algorithm is efficient and effective for reducing metal artifacts caused by dental fillings and implants on CBCT images, as demonstrated by the statistically significant increase in observer image quality ranking and by the decrease in ROI standard deviation between uncorrected and corrected images.

  3. Hybrid radiosity-SP3 equation based bioluminescence tomography reconstruction for turbid medium with low- and non-scattering regions

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xueli, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn; Zhang, Qitan; Yang, Defu; Liang, Jimin, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn [School of Life Science and Technology, Xidian University, Xi'an, Shaanxi 710071 (China)

    2014-01-14

    To provide an ideal solution for a specific problem of gastric cancer detection in which low-scattering regions simultaneously existed with both the non- and high-scattering regions, a novel hybrid radiosity-SP3 equation based reconstruction algorithm for bioluminescence tomography was proposed in this paper. In the algorithm, the third-order simplified spherical harmonics approximation (SP3) was combined with the radiosity equation to describe the bioluminescent light propagation in tissues, which provided acceptable accuracy for the turbid medium with both low- and non-scattering regions. The performance of the algorithm was evaluated with digital mouse based simulations and a gastric cancer-bearing mouse based in situ experiment. Primary results demonstrated the feasibility and superiority of the proposed algorithm for the turbid medium with low- and non-scattering regions.

  4. A Suboptimal PTS Algorithm Based on Particle Swarm Optimization Technique for PAPR Reduction in OFDM Systems

    Directory of Open Access Journals (Sweden)

    Ho-Lung Hung

    2008-08-01

    Full Text Available A suboptimal partial transmit sequence (PTS) technique based on a particle swarm optimization (PSO) algorithm is presented for low computational complexity and reduction of the peak-to-average power ratio (PAPR) of an orthogonal frequency division multiplexing (OFDM) system. In general, the PTS technique can improve the PAPR statistics of an OFDM system. However, it requires an exhaustive search over all combinations of allowed phase weighting factors, and the search complexity increases exponentially with the number of subblocks. In this paper, we work around this potential computational intractability; the proposed PSO scheme exploits heuristics to search for the optimal combination of phase factors with low complexity. Simulation results show that the new technique can effectively reduce both the computational complexity and the PAPR.
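
    A schematic stand-in for PSO-over-phase-factors might look like the sketch below, with continuous phases, illustrative PSO constants, and a plain Nyquist-rate PAPR cost; the paper's subblock partitioning, quantized phase sets and parameter choices are not reproduced.

        import numpy as np

        def papr_db(x):
            """Nyquist-rate PAPR in dB (oversampling omitted for brevity)."""
            p = np.abs(x) ** 2
            return 10.0 * np.log10(p.max() / p.mean())

        def pso_pts(X_sub, n_particles=20, n_iter=50, seed=0):
            """PSO search over continuous PTS phase factors.
            X_sub: frequency-domain subblocks, shape (V, N)."""
            rng = np.random.default_rng(seed)
            V = X_sub.shape[0]
            x_sub = np.fft.ifft(X_sub, axis=1)     # per-subblock time signals

            def cost(theta):
                return papr_db((np.exp(1j * theta)[:, None] * x_sub).sum(axis=0))

            pos = rng.uniform(0.0, 2.0 * np.pi, (n_particles, V))
            vel = np.zeros_like(pos)
            pbest = pos.copy()
            pbest_val = np.array([cost(p) for p in pos])
            gbest = pbest[pbest_val.argmin()].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random((2, n_particles, V))
                # Standard inertia + cognitive + social velocity update.
                vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
                pos = (pos + vel) % (2.0 * np.pi)
                vals = np.array([cost(p) for p in pos])
                better = vals < pbest_val
                pbest[better], pbest_val[better] = pos[better], vals[better]
                gbest = pbest[pbest_val.argmin()].copy()
            return gbest, cost(gbest)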

  5. A Suboptimal PTS Algorithm Based on Particle Swarm Optimization Technique for PAPR Reduction in OFDM Systems

    Directory of Open Access Journals (Sweden)

    Lee Shu-Hong

    2008-01-01

    Full Text Available A suboptimal partial transmit sequence (PTS) technique based on a particle swarm optimization (PSO) algorithm is presented for low computational complexity and reduction of the peak-to-average power ratio (PAPR) of an orthogonal frequency division multiplexing (OFDM) system. In general, the PTS technique can improve the PAPR statistics of an OFDM system. However, it requires an exhaustive search over all combinations of allowed phase weighting factors, and the search complexity increases exponentially with the number of subblocks. In this paper, we work around this potential computational intractability; the proposed PSO scheme exploits heuristics to search for the optimal combination of phase factors with low complexity. Simulation results show that the new technique can effectively reduce both the computational complexity and the PAPR.

  6. Poster – 02: Positron Emission Tomography (PET) Imaging Reconstruction using higher order Scattered Photon Coincidences

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Hongwei; Pistorius, Stephen [Department of Physics and Astronomy, University of Manitoba, CancerCare, Manitoba (Canada)

    2016-08-15

    PET images are affected by the presence of scattered photons. Incorrect scatter correction may cause artifacts, particularly in 3D PET systems. Current scatter reconstruction methods do not distinguish between single and higher-order scattered photons. A dual-scattered reconstruction method (GDS-MLEM) that is independent of the number of Compton scattering interactions and less sensitive to the need for high-energy-resolution detectors is proposed. To avoid overcorrecting for scattered coincidences, the attenuation coefficient was calculated by integrating the differential Klein-Nishina cross-section over a restricted energy range, accounting only for scattered photons that were not detected. The optimal image can be selected by choosing an energy threshold, which is the upper energy limit for the calculation of the cross-section and the lower limit for scattered photons in the reconstruction. Data were simulated using the GATE platform. 500,000 multiply scattered photon coincidences with perfect energy resolution were reconstructed using various methods. The GDS-MLEM algorithm had the highest confidence (98%) in locating the annihilation position and was capable of reconstructing the two largest hot regions. 100,000 photon coincidences, with a scatter fraction of 40%, were used to test the energy resolution dependence of the different algorithms. With a 350–650 keV energy window and the restricted attenuation correction model, the GDS-MLEM algorithm was able to improve contrast recovery and reduce noise by 7.56%–13.24% and 12.4%–24.03%, respectively. This approach is less sensitive to the energy resolution and shows promise if detector energy resolutions of 12% can be achieved.

  7. Reduction of the scattered radiation during X-ray examination with screen-film systems

    Energy Technology Data Exchange (ETDEWEB)

    Vasiliev, V N; Stavitsky, R V [Moscow Research Inst. for Roentgenology and Radiology, Moscow (Russian Federation); Oshomkov, Yu V [Mosroentgen, Moscow Region (Russian Federation)

    1993-01-01

    In diagnostic radiography, photons scattered in the patient's body during an X-ray examination are detected by the intensifying screen and decrease the image contrast. A conventional way to avoid this image degradation is to attenuate the scattered radiation with an antiscatter grid placed between the patient's body and the screen. The grid selectivity effect originates from the greater attenuation of scattered as opposed to primary radiation. Previous authors calculated the primary and scattered radiation transmission factors for photons with initial energy 30-120 keV for a number of typical grids. The primary radiation transmission factor varied from 0.34 to 0.67, and the secondary radiation factor ranged from 0.03 to 0.13. This effect results in a contrast improvement factor of 2 to 6, but the patient exposure increases by up to a factor of 10. In this work we studied the possibility of improving the image contrast by attenuating the scattered radiation with a secondary filter, made of an appropriate material, placed between the patient's body and the screen. A selectivity effect due to the secondary filter arises from two circumstances. First, the oblique incidence of the scattered radiation results in a longer path inside the filter than that of the primary radiation. Second, the average energy of the scattered radiation is less than that of the primary and, hence, its attenuation coefficient is greater. (author).

  8. A Spectral Algorithm for Envelope Reduction of Sparse Matrices

    Science.gov (United States)

    Barnard, Stephen T.; Pothen, Alex; Simon, Horst D.

    1993-01-01

    The problem of reordering a sparse symmetric matrix to reduce its envelope size is considered. A new spectral algorithm for computing an envelope-reducing reordering is obtained by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. This Laplacian eigenvector solves a continuous relaxation of a discrete problem related to envelope minimization called the minimum 2-sum problem. The permutation vector computed by the spectral algorithm is a closest permutation vector to the specified Laplacian eigenvector. Numerical results show that the new reordering algorithm usually computes smaller envelope sizes than those obtained from the current standard algorithms such as Gibbs-Poole-Stockmeyer (GPS) or SPARSPAK reverse Cuthill-McKee (RCM), in some cases reducing the envelope by more than a factor of two.
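
    The core step, sorting vertices by the Fiedler vector of the Laplacian of the matrix's sparsity graph, can be sketched as follows for a symmetric sparse matrix; the paper's algorithm includes refinements beyond this basic ordering, and the scipy-based routine below is only an illustration.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse import csgraph
        from scipy.sparse.linalg import eigsh

        def spectral_ordering(A):
            """Order vertices by the Fiedler vector of the Laplacian of the
            sparsity graph of a symmetric sparse matrix A (basic step only)."""
            G = sp.csr_matrix((A != 0), dtype=float)  # adjacency of sparsity pattern
            G.setdiag(0)
            G.eliminate_zeros()
            L = csgraph.laplacian(G)
            # Two smallest eigenpairs; the second eigenvector is the Fiedler vector.
            _, vecs = eigsh(L, k=2, which='SM')
            return np.argsort(vecs[:, 1])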

  9. SU-E-J-218: Evaluation of CT Images Created Using a New Metal Artifact Reduction Reconstruction Algorithm for Radiation Therapy Treatment Planning

    Energy Technology Data Exchange (ETDEWEB)

    Niemkiewicz, J; Palmiotti, A; Miner, M; Stunja, L; Bergene, J [Lehigh Valley Health Network, Allentown, PA (United States)

    2014-06-01

    Purpose: Metal in patients creates streak artifacts in CT images. When such images are used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affect radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs, using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine whether the MAR algorithm altered the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. Also, CT images of patients with internal metal objects, reconstructed using the standard and MAR algorithms, were compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. Also, the MAR reconstruction algorithm showed significant improvement in maintaining the HUs of non-metallic regions in the images taken of the phantom with metal. HU gamma analysis (2%, 2mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate, compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation

  10. SU-E-J-218: Evaluation of CT Images Created Using a New Metal Artifact Reduction Reconstruction Algorithm for Radiation Therapy Treatment Planning

    International Nuclear Information System (INIS)

    Niemkiewicz, J; Palmiotti, A; Miner, M; Stunja, L; Bergene, J

    2014-01-01

    Purpose: Metal in patients creates streak artifacts in CT images. When such images are used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affect radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine whether the MAR algorithm alters the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. CT images of patients with internal metal objects reconstructed using the standard and MAR algorithms were also compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. The MAR reconstruction algorithm also showed significant improvement in maintaining the HUs of non-metallic regions in the images of the phantom with metal: HU gamma analysis (2%, 2 mm) of metal vs. non-metal phantom imaging using the standard reconstruction resulted in an 84.8% pass rate, compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation therapy treatment planning.

  11. SU-E-T-802: Verification of Implanted Cardiac Pacemaker Doses in Intensity-Modulated Radiation Therapy: Dose Prediction Accuracy and Reduction Effect of a Lead Sheet

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J [Dept. of Radiation Oncology, Konkuk University Medical Center, Seoul (Korea, Republic of); Chung, J [Dept. of Radiation Oncology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of)

    2015-06-15

    Purpose: To verify delivered doses to an implanted cardiac pacemaker, predicted doses with and without a dose reduction method were verified using MOSFET detectors with respect to the beam delivery and dose calculation techniques used in intensity-modulated radiation therapy (IMRT). Methods: The pacemaker doses for a patient with tongue cancer were predicted according to the beam delivery method [step-and-shoot (SS) and sliding window (SW)], the intensity level used for dose optimization, and the dose calculation algorithm. Dosimetric effects on the pacemaker were calculated with three dose engines: pencil-beam convolution (PBC), the analytical anisotropic algorithm (AAA), and Acuros-XB. A lead shield of 2 mm thickness was designed to minimize the dose delivered to the pacemaker. Dose variations caused by the heterogeneous material properties of the pacemaker and the effectiveness of the lead shield were predicted with Acuros-XB. Dose prediction accuracy and the feasibility of the dose reduction strategy were verified against skin doses measured right above the pacemaker using the MOSFET detectors during treatment. Results: Acuros-XB underestimated the skin doses and overestimated the lead-shield effect, although the dose disagreement was small. Dose prediction improved with a higher intensity level of dose optimization in IMRT. The dedicated tertiary lead sheet achieved a reduction of the pacemaker dose of up to 60%. Conclusion: The current SS technique could deliver scattered doses below the recommended criteria; nevertheless, the lead sheet contributed to reducing the scattered dose. A thin lead plate can be a useful tertiary shield and did not cause malfunction or electrical damage of the implanted pacemaker in IMRT. More accurate estimation of the scattered dose to patients with medical devices is required to design a proper dose reduction strategy.

  12. SU-E-T-802: Verification of Implanted Cardiac Pacemaker Doses in Intensity-Modulated Radiation Therapy: Dose Prediction Accuracy and Reduction Effect of a Lead Sheet

    International Nuclear Information System (INIS)

    Lee, J; Chung, J

    2015-01-01

    Purpose: To verify delivered doses to an implanted cardiac pacemaker, predicted doses with and without a dose reduction method were verified using MOSFET detectors with respect to the beam delivery and dose calculation techniques used in intensity-modulated radiation therapy (IMRT). Methods: The pacemaker doses for a patient with tongue cancer were predicted according to the beam delivery method [step-and-shoot (SS) and sliding window (SW)], the intensity level used for dose optimization, and the dose calculation algorithm. Dosimetric effects on the pacemaker were calculated with three dose engines: pencil-beam convolution (PBC), the analytical anisotropic algorithm (AAA), and Acuros-XB. A lead shield of 2 mm thickness was designed to minimize the dose delivered to the pacemaker. Dose variations caused by the heterogeneous material properties of the pacemaker and the effectiveness of the lead shield were predicted with Acuros-XB. Dose prediction accuracy and the feasibility of the dose reduction strategy were verified against skin doses measured right above the pacemaker using the MOSFET detectors during treatment. Results: Acuros-XB underestimated the skin doses and overestimated the lead-shield effect, although the dose disagreement was small. Dose prediction improved with a higher intensity level of dose optimization in IMRT. The dedicated tertiary lead sheet achieved a reduction of the pacemaker dose of up to 60%. Conclusion: The current SS technique could deliver scattered doses below the recommended criteria; nevertheless, the lead sheet contributed to reducing the scattered dose. A thin lead plate can be a useful tertiary shield and did not cause malfunction or electrical damage of the implanted pacemaker in IMRT. More accurate estimation of the scattered dose to patients with medical devices is required to design a proper dose reduction strategy.

  13. A real-time artifact reduction algorithm based on precise threshold during short-separation optical probe insertion in neurosurgery

    Directory of Open Access Journals (Sweden)

    Weitao Li

    2017-01-01

    Full Text Available During neurosurgery, an optical probe is used to guide the micro-electrode that is inserted into the globus pallidus (GP) to create a lesion that can relieve the cardinal symptoms. Accurate target localization is the key factor affecting treatment. However, given the scattering nature of the tissue, the "look ahead distance" (LAD) of the optical probe blurs the boundary between different tissues and makes it difficult to distinguish; this effect is defined as artifact. Thus, it is highly desirable to reduce the artifact caused by the LAD. In this paper, a real-time algorithm based on a precise threshold is proposed to eliminate this artifact. The value of the threshold is determined automatically from the maximum error of the measurement system during the calibration procedure. The measured data are then processed sequentially, based only on the threshold and the preceding data. Moreover, a 100 μm double-fiber probe and two-layer and multi-layer phantom models were used to validate the precision of the algorithm. The error of the algorithm is one puncture step, as shown in theory and experiment. It is concluded that the present method can reduce the artifact caused by the LAD and make the real boundary sharper and less blurred in real time. It could potentially be used for neurosurgical navigation.

  14. A Scalable Parallel PWTD-Accelerated SIE Solver for Analyzing Transient Scattering from Electrically Large Objects

    KAUST Repository

    Liu, Yang

    2015-12-17

    A scalable parallel plane-wave time-domain (PWTD) algorithm for efficient and accurate analysis of transient scattering from electrically large objects is presented. The algorithm produces scalable communication patterns on very large numbers of processors by leveraging two mechanisms: (i) a hierarchical parallelization strategy to evenly distribute the computation and memory loads at all levels of the PWTD tree among processors, and (ii) a novel asynchronous communication scheme to reduce the cost and memory requirement of the communications between the processors. The efficiency and accuracy of the algorithm are demonstrated through its applications to the analysis of transient scattering from a perfect electrically conducting (PEC) sphere with a diameter of 70 wavelengths and a PEC square plate with a dimension of 160 wavelengths. Furthermore, the proposed algorithm is used to analyze transient fields scattered from realistic airplane and helicopter models under high frequency excitation.

  15. A high-power spatial filter for Thomson scattering stray light reduction

    Science.gov (United States)

    Levesque, J. P.; Litzner, K. D.; Mauel, M. E.; Maurer, D. A.; Navratil, G. A.; Pedersen, T. S.

    2011-03-01

    The Thomson scattering diagnostic on the High Beta Tokamak-Extended Pulse (HBT-EP) is routinely used to measure electron temperature and density during plasma discharges. Avalanche photodiodes in a five-channel interference filter polychromator measure scattered light from a 6 ns, 800 mJ, 1064 nm Nd:YAG laser pulse. A low cost, high-power spatial filter was designed, tested, and added to the laser beamline in order to reduce stray laser light to levels which are acceptable for accurate Rayleigh calibration. A detailed analysis of the spatial filter design and performance is given. The spatial filter can be easily implemented in an existing Thomson scattering system without the need to disturb the vacuum chamber or significantly change the beamline. Although apertures in the spatial filter suffer substantial damage from the focused beam, with proper design they can last long enough to permit absolute calibration.

  16. An Improved Phase Gradient Autofocus Algorithm Used in Real-time Processing

    Directory of Open Access Journals (Sweden)

    Qing Ji-ming

    2015-10-01

    Full Text Available The Phase Gradient Autofocus (PGA) algorithm can remove high-order phase errors effectively, which is of great significance for obtaining high-resolution images in real-time processing. However, PGA usually requires iteration, which entails long processing times, and its performance is not stable across different scenes. This severely constrains the application of PGA in real-time processing. Isolated scatterer selection and windowing are two important algorithmic steps of PGA. Therefore, this paper presents an isolated scatterer selection method based on the sample mean and a windowing method based on the pulse envelope. These two methods are highly adaptable to the data, giving the algorithm better stability and requiring fewer iterations. The adaptability of the improved PGA is demonstrated with experimental results on real radar data.
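
    For reference, the canonical PGA iteration (without the paper's improvements) can be sketched as follows; img is a complex azimuth-range image with azimuth along axis 0, and the fixed-fraction window here is a stand-in for the pulse-envelope window proposed above.

        import numpy as np

        def pga(img, iters=5, win_frac=0.5):
            """Classic Phase Gradient Autofocus sketch (azimuth along axis 0)."""
            n = img.shape[0]
            for _ in range(iters):
                g = np.empty_like(img)
                # Circularly shift the brightest scatterer of each range bin
                # to mid-azimuth.
                for r in range(img.shape[1]):
                    shift = n // 2 - int(np.argmax(np.abs(img[:, r])))
                    g[:, r] = np.roll(img[:, r], shift)
                # Window around mid-azimuth to isolate the dominant scatterers.
                half = int(win_frac * n) // 2
                g[: n // 2 - half] = 0.0
                g[n // 2 + half :] = 0.0
                # Phase-gradient estimate in the aperture (azimuth-FFT) domain,
                # accumulated over range bins, then integrated.
                G = np.fft.fft(g, axis=0)
                dphi = np.angle(np.sum(np.conj(G[:-1]) * G[1:], axis=1))
                phi = np.concatenate(([0.0], np.cumsum(dphi)))
                phi -= np.linspace(phi[0], phi[-1], n)  # drop the linear (shift) term
                # Remove the estimated phase error from the full image.
                img = np.fft.ifft(np.fft.fft(img, axis=0)
                                  * np.exp(-1j * phi)[:, None], axis=0)
            return img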

  17. Inverse Monte Carlo: a unified reconstruction algorithm for SPECT

    International Nuclear Information System (INIS)

    Floyd, C.E.; Coleman, R.E.; Jaszczak, R.J.

    1985-01-01

    Inverse Monte Carlo (IMOC) is presented as a unified reconstruction algorithm for Emission Computed Tomography (ECT) providing simultaneous compensation for scatter, attenuation, and the variation of collimator resolution with depth. The technique of inverse Monte Carlo is used to find an inverse solution to the photon transport equation (an integral equation for photon flux from a specified source) for a parameterized source and specific boundary conditions. The system of linear equations so formed is solved to yield the source activity distribution for a set of acquired projections. For the studies presented here, the equations are solved using the EM (Maximum Likelihood) algorithm, although other solution algorithms, such as Least Squares, could be employed. While the present results specifically consider the reconstruction of camera-based Single Photon Emission Computed Tomographic (SPECT) images, the technique is equally valid for Positron Emission Tomography (PET) if a Monte Carlo model of such a system is used. As a preliminary evaluation, experimentally acquired SPECT phantom studies for imaging Tc-99m (140 keV) are presented which demonstrate the quantitative compensation for scatter and attenuation for a two-dimensional (single-slice) reconstruction. The algorithm may be expanded in a straightforward manner to full three-dimensional reconstruction, including compensation for out-of-plane scatter.

  18. A general framework and review of scatter correction methods in cone beam CT. Part 2: Scatter estimation approaches

    International Nuclear Information System (INIS)

    Ruehrnschopf, Ernst-Peter; Klingenbeck, Klaus

    2011-01-01

    The main components of scatter correction procedures are scatter estimation and a scatter compensation algorithm. This paper completes a previous paper in which a general framework for scatter compensation was presented, under the prerequisite that a scatter estimation method is already available. In the current paper, the authors give a systematic review of the variety of scatter estimation approaches. Scatter estimation methods are based on measurements, mathematical-physical models, or combinations of both. For completeness, the authors present an overview of measurement-based methods, but the main topic is the theoretically more demanding models: analytical, Monte Carlo, and hybrid models. Further classifications are 3D image-based and 2D projection-based approaches. The authors present a system-theoretic framework which allows one to proceed top-down from a general 3D formulation, by successive approximations, to efficient 2D approaches. A widely useful method is the beam-scatter-kernel superposition approach. Together with the review of standard methods, the authors discuss their limitations and how to take into account the issues of object dependency, spatial variance, deformation of scatter kernels, and external and internal absorbers. Open questions for further investigation are indicated. Finally, the authors comment on some special issues and applications, such as the bow-tie filter, offset detectors, truncated data, and dual-source CT.

  19. Visualizing quantum scattering on the CM-2 supercomputer

    International Nuclear Information System (INIS)

    Richardson, J.L.

    1991-01-01

    We implement parallel algorithms for solving the time-dependent Schroedinger equation on the CM-2 supercomputer. These methods are unconditionally stable as well as unitary at each time step and have the advantage of being spatially local and explicit. We show how to visualize the dynamics of quantum scattering using techniques for visualizing complex wave functions. Several scattering problems are solved to demonstrate the use of these methods. (orig.)

  20. A software-based x-ray scatter correction method for breast tomosynthesis

    International Nuclear Information System (INIS)

    Jia Feng, Steve Si; Sechopoulos, Ioannis

    2011-01-01

    -corrected reconstructions. The visibility of the findings in two patient images was also improved by the application of the scatter correction algorithm. The MTF of the images did not change after application of the scatter correction algorithm, indicating that spatial resolution was not adversely affected. Conclusions: Our software-based scatter correction algorithm exhibits great potential in improving the image quality of DBT acquisitions of both phantoms and patients. The proposed algorithm does not require a time-consuming MC simulation for each specific case to be corrected, making it applicable in the clinical realm.

  1. A verified LLL algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

    The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as the LLL algorithm, is an algorithm that finds a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solving the shortest vector problem (SVP), which is an NP-hard problem.
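
    For concreteness, a textbook LLL sketch in exact rational arithmetic with the usual δ = 3/4; this illustrates the algorithm itself, not the verified Isabelle/HOL formalization the entry describes.

        from fractions import Fraction

        def lll(basis, delta=Fraction(3, 4)):
            """Textbook LLL reduction of integer row vectors (exact arithmetic)."""
            B = [[Fraction(x) for x in row] for row in basis]
            n = len(B)

            def dot(u, v):
                return sum(a * b for a, b in zip(u, v))

            def gram_schmidt():
                Bs = []
                mu = [[Fraction(0)] * n for _ in range(n)]
                for i in range(n):
                    v = B[i][:]
                    for j in range(i):
                        mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
                        v = [vi - mu[i][j] * bj for vi, bj in zip(v, Bs[j])]
                    Bs.append(v)
                return Bs, mu

            Bs, mu = gram_schmidt()
            k = 1
            while k < n:
                for j in range(k - 1, -1, -1):   # size reduction
                    q = round(mu[k][j])
                    if q:
                        B[k] = [bk - q * bj for bk, bj in zip(B[k], B[j])]
                        Bs, mu = gram_schmidt()  # recomputed for clarity, not speed
                # Lovász condition: advance k, or swap and backtrack.
                if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
                    k += 1
                else:
                    B[k], B[k - 1] = B[k - 1], B[k]
                    Bs, mu = gram_schmidt()
                    k = max(k - 1, 1)
            return [[int(x) for x in row] for row in B]

        print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))  # a classic 3-D example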

  2. Cooperative vehicles for robust traffic congestion reduction: An analysis based on algorithmic, environmental and agent behavioral factors.

    Directory of Open Access Journals (Sweden)

    Prajakta Desai

    Full Text Available Traffic congestion continues to be a persistent problem throughout the world. As vehicle-to-vehicle communication develops, there is an opportunity of using cooperation among close proximity vehicles to tackle the congestion problem. The intuition is that if vehicles could cooperate opportunistically when they come close enough to each other, they could, in effect, spread themselves out among alternative routes so that vehicles do not all jam up on the same roads. Our previous work proposed a decentralized multiagent based vehicular congestion management algorithm entitled Congestion Avoidance and Route Allocation using Virtual Agent Negotiation (CARAVAN), wherein the vehicles acting as intelligent agents perform cooperative route allocation using inter-vehicular communication. This paper focuses on evaluating the practical applicability of this approach by testing its robustness and performance (in terms of travel time reduction), across variations in: (a) environmental parameters such as road network topology and configuration; (b) algorithmic parameters such as vehicle agent preferences and route cost/preference multipliers; and (c) agent-related parameters such as equipped/non-equipped vehicles and compliant/non-compliant agents. Overall, the results demonstrate the adaptability and robustness of the decentralized cooperative vehicles approach to providing global travel time reduction using simple local coordination strategies.

  3. Cooperative vehicles for robust traffic congestion reduction: An analysis based on algorithmic, environmental and agent behavioral factors.

    Science.gov (United States)

    Desai, Prajakta; Loke, Seng W; Desai, Aniruddha

    2017-01-01

    Traffic congestion continues to be a persistent problem throughout the world. As vehicle-to-vehicle communication develops, there is an opportunity of using cooperation among close proximity vehicles to tackle the congestion problem. The intuition is that if vehicles could cooperate opportunistically when they come close enough to each other, they could, in effect, spread themselves out among alternative routes so that vehicles do not all jam up on the same roads. Our previous work proposed a decentralized multiagent based vehicular congestion management algorithm entitled Congestion Avoidance and Route Allocation using Virtual Agent Negotiation (CARAVAN), wherein the vehicles acting as intelligent agents perform cooperative route allocation using inter-vehicular communication. This paper focuses on evaluating the practical applicability of this approach by testing its robustness and performance (in terms of travel time reduction), across variations in: (a) environmental parameters such as road network topology and configuration; (b) algorithmic parameters such as vehicle agent preferences and route cost/preference multipliers; and (c) agent-related parameters such as equipped/non-equipped vehicles and compliant/non-compliant agents. Overall, the results demonstrate the adaptability and robustness of the decentralized cooperative vehicles approach to providing global travel time reduction using simple local coordination strategies.

  4. Quantitative Evaluation of 2 Scatter-Correction Techniques for 18F-FDG Brain PET/MRI in Regard to MR-Based Attenuation Correction.

    Science.gov (United States)

    Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika

    2017-10-01

    In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC on scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other, with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET.

  5. System automation for a bacterial colony detection and identification instrument via forward scattering

    International Nuclear Information System (INIS)

    Bae, Euiwon; Hirleman, E Daniel; Aroonnual, Amornrat; Bhunia, Arun K; Robinson, J Paul

    2009-01-01

    A system design and automation of a microbiological instrument that locates bacterial colonies and captures their forward-scattering signatures are presented. The proposed instrument integrates three major components: a colony locator, a forward scatterometer, and a motion controller. The colony locator utilizes an off-axis light source to illuminate a Petri dish and an IEEE 1394 camera to capture the diffusely scattered light, providing the number of bacterial colonies and the two-dimensional coordinates of the colonies with the help of a region-growing segmentation algorithm. The Petri dish is then automatically aligned with the respective centroid coordinates using a trajectory optimization method, such as a traveling-salesman algorithm. The forward scatterometer automatically computes the scattered laser beam from a monochromatic image sensor via quadrant intensity balancing and quantitatively determines the centeredness of the forward-scattering pattern. The final scattering signatures are stored and analyzed to provide rapid identification and classification of the bacterial samples.

  6. Robust parameterization of elastic and absorptive electron atomic scattering factors

    International Nuclear Information System (INIS)

    Peng, L.M.; Ren, G.; Dudarev, S.L.; Whelan, M.J.

    1996-01-01

    A robust algorithm and computer program have been developed for the parameterization of elastic and absorptive electron atomic scattering factors. The algorithm is based on a combined modified simulated-annealing and least-squares method, and the computer program works well for fitting both elastic and absorptive atomic scattering factors with five Gaussians. As an application of this program, the elastic electron atomic scattering factors have been parameterized for all neutral atoms and for s up to 6 Å⁻¹. Error analysis shows that the present results are considerably more accurate than the previous analytical fits in terms of the mean square value of the deviation between the numerical and fitted scattering factors. Parameterization for absorptive atomic scattering factors has been made for 17 important materials with the zinc blende structure over the temperature range 1 to 1000 K, where appropriate, and for temperature ranges for which accurate Debye-Waller factors are available. For other materials, the parameterization of the absorptive electron atomic scattering factors can be made using the program by supplying the atomic number of the element, the Debye-Waller factor and the acceleration voltage. For ions or when more accurate numerical results for neutral atoms are available, the program can read in the numerical values of the elastic scattering factors and return the parameters for both the elastic and absorptive scattering factors. The computer routines developed have been tested both on computer workstations and desktop PC computers, and will be made freely available via electronic mail or on floppy disk upon request. (orig.)
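
    The parameterization has the standard form f(s) = Σᵢ aᵢ exp(−bᵢ s²). A plain least-squares fit of that form can be sketched with SciPy as below; the paper's combined simulated-annealing stage, which provides the robustness, is omitted, and the "numerical" data here are synthetic placeholders.

        import numpy as np
        from scipy.optimize import curve_fit

        def f5(s, *p):
            # Five-Gaussian form: f(s) = sum_i a_i * exp(-b_i * s**2)
            a, b = np.array(p[0::2]), np.array(p[1::2])
            return np.sum(a[:, None] * np.exp(-b[:, None] * s**2), axis=0)

        # s grid up to 6 1/Angstrom; synthetic stand-in for tabulated factors.
        s = np.linspace(0.01, 6.0, 120)
        f_num = 2.2 * np.exp(-0.5 * s**2) + 0.8 * np.exp(-5.0 * s**2)

        p0 = [1, 0.1, 1, 0.5, 1, 1, 1, 5, 1, 20]   # spread of initial widths
        popt, _ = curve_fit(f5, s, f_num, p0=p0, maxfev=20000)
        rms = np.sqrt(np.mean((f5(s, *popt) - f_num) ** 2))
        print("RMS deviation of the fit:", rms)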

  7. Point kernels and superposition methods for scatter dose calculations in brachytherapy

    International Nuclear Information System (INIS)

    Carlsson, A.K.

    2000-01-01

    Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method where the point kernels are approximated as isotropic and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized to biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)

  8. MO-E-17A-05: Individualized Patient Dosimetry in CT Using the Patient Dose (PATDOSE) Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez, A; Boone, J [UC Davis Medical Center, Sacramento, CA (United States)

    2014-06-15

    Purpose: Radiation dose to the patient undergoing a CT examination has been the focus of many recent studies. While CTDIvol and SSDE-based methods are important tools for patient dose management, the CT image data provide important information with respect to CT dose and its distribution. Coupled with the known geometry and output factors (kV, mAs, pitch, etc.) of the CT scanner, the CT dataset can be used directly for computing absorbed dose. Methods: The HU numbers in a patient's CT data set can be converted to linear attenuation coefficients (LACs) with some assumptions. With this (PAT-DOSE) method, which is not Monte Carlo-based, the primary and scatter dose are computed separately. The primary dose is computed directly from the geometry of the scanner, the x-ray spectrum, and the known patient LACs. Once the primary dose has been computed for all voxels in the patient, the scatter dose algorithm redistributes a fraction of the absorbed primary dose (based on the HU number of each source voxel); the methods invoke both tissue attenuation and absorption and solid-angle geometry. The scatter dose algorithm can be run N times to include Nth-scatter redistribution. PAT-DOSE was deployed using simple PMMA phantoms to validate its performance against Monte Carlo-derived dose distributions. Results: Comparison between PAT-DOSE and MCNPX primary dose distributions showed excellent agreement for several scan lengths. The 1st-scatter dose distributions showed relatively higher-amplitude, long-range scatter tails for the PAT-DOSE algorithm than for the MCNPX simulations. Conclusion: The PAT-DOSE algorithm provides a fast, deterministic assessment of the 3-D dose distribution in CT, making use of the scanner geometry and the patient image data set. The preliminary implementation of the algorithm produces accurate primary dose distributions; achieving agreement in the scatter distribution is more challenging. Addressing the polyenergetic x-ray spectrum and spatially
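
    The HU-to-LAC conversion at the heart of the method can be sketched as follows, assuming the usual definition HU = 1000 (μ − μ_water)/μ_water at a single effective energy; μ_water below is an illustrative value, and a real implementation must handle the polyenergetic spectrum the abstract mentions.

        import numpy as np

        def hu_to_mu(hu, mu_water=0.195):
            # Invert HU = 1000 * (mu - mu_water) / mu_water at one effective
            # energy; mu_water ~ 0.195 /cm is illustrative (near 70 keV).
            return mu_water * (1.0 + np.asarray(hu, dtype=float) / 1000.0)

        def primary_dose_along_ray(hu_line, voxel_cm=0.1):
            # Primary-beam energy deposition along one ray through the volume,
            # using total attenuation as a crude stand-in for energy absorption.
            mu = hu_to_mu(hu_line)
            upstream = np.concatenate(([0.0], np.cumsum(mu[:-1]) * voxel_cm))
            fluence_in = np.exp(-upstream)           # fluence entering each voxel
            return fluence_in * (1.0 - np.exp(-mu * voxel_cm))

        hu_line = np.array([-1000, -50, 40, 1000, 40, -50])  # air, fat, tissue, bone...
        print(primary_dose_along_ray(hu_line))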

  9. Corrections for the effects of accidental coincidences, Compton scatter, and object size in positron emission mammography (PEM) imaging

    Science.gov (United States)

    Raylman, R. R.; Majewski, S.; Wojcik, R.; Weisenberger, A. G.; Kross, B.; Popov, V.

    2001-06-01

    Positron emission mammography (PEM) has begun to show promise as an effective method for the detection of breast lesions. Due to its utilization of tumor-avid radiopharmaceuticals labeled with positron-emitting radionuclides, this technique may be especially useful in imaging of women with radiodense or fibrocystic breasts. While the use of these radiotracers affords PEM unique capabilities, it also introduces some limitations. Specifically, acceptance of accidental and Compton-scattered coincidence events can decrease lesion detectability. The authors studied the effect of accidental coincidence events on PEM images produced by the presence of 18F-fluorodeoxyglucose in the organs of a subject using an anthropomorphic phantom. A delayed-coincidence technique was tested as a method for correcting PEM images for the occurrence of accidental events. Also, a Compton scatter correction algorithm designed specifically for PEM was developed and tested using a compressed breast phantom. Finally, the effect of object size on image counts and a correction for this effect were explored. The imager used in this study consisted of two PEM detector heads mounted 20 cm apart on a Lorad biopsy apparatus. The results demonstrated that a majority of the accidental coincidence events (~80%) detected by this system were produced by radiotracer uptake in the adipose and muscle tissue of the torso. The presence of accidental coincidence events was shown to reduce lesion detectability. Much of this effect was eliminated by correcting the images using estimates of accidental-coincidence contamination acquired with the delayed-coincidence circuitry built into the PEM system. The Compton scatter fraction for this system was ~14%. Utilization of a new scatter correction algorithm reduced the scatter fraction to ~1.5%. Finally, the reduction of count recovery due to object size was measured and a correction applied to the data. Application of correction techniques

  10. Brillouin scatter in laser-produced plasmas

    International Nuclear Information System (INIS)

    Phillion, D.W.; Kruer, W.L.; Rupert, V.C.

    1977-01-01

    The absorption of intense laser light is found to be reduced when targets are irradiated by 1.06 μm light with long pulse widths (150-400 psec) and large focal spots (100-250 μm). Estimates of Brillouin scatter which account for the finite heat capacity of the underdense plasma predict this reduction. Spectra of the back reflected light show red shifts indicative of Brillouin scattering

  11. Network reconfiguration for loss reduction in electrical distribution system using genetic algorithm

    International Nuclear Information System (INIS)

    Adail, A.S.A.A.

    2012-01-01

    The distribution system is a critical link between the utility and a nuclear installation. Power losses occur while feeding electricity to such an installation, and the quality of the network depends on the reduction of these losses. A distribution system that feeds a nuclear installation must provide high power quality. For example, at the Inshas site, electrical power is supplied from two incoming feeders (one from the new Abu-Zabal substation and the other from the old Abu-Zabal substation). Each feeder is designed to carry the full load, but the operator prefers to connect to the new Abu-Zabal substation, which has better power quality. Poor power quality directly affects the nuclear reactor and has a negative impact on the installed sensitive operating equipment. This thesis studies the electrical losses in a distribution system (their causes and the factors affecting them), feeder reconfiguration methods, and the application of genetic algorithms to an electric distribution power system. Finally, the study proposes an optimization technique based on genetic algorithms for distribution network reconfiguration to reduce the network losses to a minimum. The proposed method is applied to an IEEE test network containing 3 feeders and 16 nodes. The technique is applied to two groups: distribution with general loads and with nuclear loads. In each group, the technique is applied to seven cases covering normal operation, system fault conditions, and different load conditions. Simulated results are presented to show the accuracy of the technique.

  12. Energy-angle correlation correction algorithm for monochromatic computed tomography based on Thomson scattering X-ray source

    Science.gov (United States)

    Chi, Zhijun; Du, Yingchao; Huang, Wenhui; Tang, Chuanxiang

    2017-12-01

    The necessity for compact and relatively low cost x-ray sources with monochromaticity, continuous tunability of x-ray energy, high spatial coherence, straightforward polarization control, and high brightness has led to the rapid development of Thomson scattering x-ray sources. To meet the requirement of in-situ monochromatic computed tomography (CT) for large-scale and/or high-attenuation materials based on this type of x-ray source, there is an increasing demand for effective algorithms to correct the energy-angle correlation. In this paper, we take advantage of the parametrization of the x-ray attenuation coefficient to resolve this problem. The linear attenuation coefficient of a material can be decomposed into a linear combination of the energy-dependent photoelectric and Compton cross-sections in the keV energy regime without K-edge discontinuities, and the line integrals of the decomposition coefficients of the above two parts can be determined by performing two spectrally different measurements. After that, the line integral of the linear attenuation coefficient of an imaging object at a certain interested energy can be derived through the above parametrization formula, and monochromatic CT can be reconstructed at this energy using traditional reconstruction methods, e.g., filtered back projection or algebraic reconstruction technique. Not only can monochromatic CT be realized, but also the distributions of the effective atomic number and electron density of the imaging object can be retrieved at the expense of dual-energy CT scan. Simulation results validate our proposal and will be shown in this paper. Our results will further expand the scope of application for Thomson scattering x-ray sources.
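
    A sketch of the synthesis step for a single ray, assuming the two-part decomposition A(E) = a_p E⁻³ + a_c f_KN(E), with f_KN the Klein-Nishina function: two spectrally different measurements give two equations for (a_p, a_c), after which the line integral can be evaluated at any energy of interest. The energies and line-integral values below are illustrative.

        import numpy as np

        def f_kn(E_keV):
            # Klein-Nishina energy dependence of the Compton cross-section.
            a = E_keV / 511.0
            return ((1 + a) / a**2 * (2 * (1 + a) / (1 + 2 * a) - np.log(1 + 2 * a) / a)
                    + np.log(1 + 2 * a) / (2 * a) - (1 + 3 * a) / (1 + 2 * a) ** 2)

        def f_pe(E_keV):
            # Approximate photoelectric energy dependence (~ E^-3, no K-edges).
            return E_keV ** -3.0

        def mono_line_integral(L1, L2, E1, E2, E_out):
            # Solve the 2x2 system for the photoelectric/Compton coefficients,
            # then synthesize the monochromatic line integral at E_out.
            M = np.array([[f_pe(E1), f_kn(E1)],
                          [f_pe(E2), f_kn(E2)]])
            a_p, a_c = np.linalg.solve(M, np.array([L1, L2]))
            return a_p * f_pe(E_out) + a_c * f_kn(E_out)

        # Two measurements at effective energies of 40 and 80 keV, output at 60 keV.
        print(mono_line_integral(3.1, 1.4, 40.0, 80.0, 60.0))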

  13. Development of algorithms for real time track selection in the TOTEM experiment

    CERN Document Server

    Minafra, Nicola; Radicioni, E

    The TOTEM experiment at the LHC has been designed to measure the total proton-proton cross-section with a luminosity-independent method and to study elastic and diffractive scattering at energies up to 14 TeV in the centre of mass. Elastic interactions are detected by Roman Pot stations placed at 147 m and 220 m along the two outgoing beams. At present, data acquired by these detectors are stored on disk without any data reduction in the data acquisition chain. In this thesis, several tracking and selection algorithms, suitable for real-time implementation in the firmware of the back-end electronics, are proposed and tested using real data.

  14. The multilevel fast multipole algorithm (MLFMA) for solving large-scale computational electromagnetics problems

    CERN Document Server

    Ergul, Ozgur

    2014-01-01

    The Multilevel Fast Multipole Algorithm (MLFMA) for Solving Large-Scale Computational Electromagnetic Problems provides a detailed and instructional overview of implementing MLFMA. The book: presents a comprehensive treatment of the MLFMA algorithm, including basic linear algebra concepts, recent developments on parallel computation, and a number of application examples; covers solutions of electromagnetic problems involving dielectric objects and perfectly conducting objects; and discusses applications including scattering from airborne targets and scattering from red

  15. An algorithm for reduction of extracted power from photovoltaic strings in grid-tied photovoltaic power plants during voltage sags

    DEFF Research Database (Denmark)

    Tafti, Hossein Dehghani; Maswood, Ali Iftekhar; Pou, Josep

    2016-01-01

    Due to the high penetration of installed distributed generation units in the power system, the injection of reactive power is required for medium-scale and large-scale grid-connected photovoltaic power plants (PVPPs). Because of the current limitation of the grid-connected inverter, the power extracted from the photovoltaic (PV) strings should be reduced during voltage sags. In this paper, an algorithm is proposed for determining the reference voltage of the PV string which reduces the output power to a specified amount. The proposed algorithm calculates the reference voltage for the dc/dc converter controller based on the characteristics of the power-voltage curve of the PV string; therefore, no modification is required in the controller of the dc/dc converter. Simulation results on a 50-kW PV string verify the effectiveness of the proposed algorithm in reducing the power from PV strings under voltage sags.
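
    A sketch of the reference-voltage lookup under stated assumptions: the operating point is taken on the right-hand (higher-voltage) side of the maximum power point, where the power-voltage curve falls monotonically with rising voltage; the curve data below are illustrative rather than the paper's 50-kW case.

        import numpy as np

        def reference_voltage(v, p, p_ref):
            """Voltage on the right of the MPP at which the string produces p_ref."""
            i_mpp = int(np.argmax(p))
            if p_ref >= p[i_mpp]:
                return float(v[i_mpp])     # cannot curtail above the MPP power
            v_r, p_r = v[i_mpp:], p[i_mpp:]
            # np.interp needs increasing x, so flip the falling branch.
            return float(np.interp(p_ref, p_r[::-1], v_r[::-1]))

        # Illustrative P-V curve of a PV string.
        v = np.linspace(0.0, 800.0, 200)
        p = np.clip(v * 80.0 * (1.0 - np.exp((v - 790.0) / 40.0)), 0.0, None)
        print(reference_voltage(v, p, p_ref=0.5 * p.max()))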

  16. Finding optimal exact reducts

    KAUST Repository

    AbouEisha, Hassan M.

    2014-01-01

    The problem of attribute reduction is an important problem related to feature selection and knowledge discovery. The problem of finding reducts with minimum cardinality is NP-hard. This paper suggests a new algorithm for finding exact reducts.

  17. BioXTAS RAW: improvements to a free open-source program for small-angle X-ray scattering data reduction and analysis.

    Science.gov (United States)

    Hopkins, Jesse Bennett; Gillilan, Richard E; Skou, Soren

    2017-10-01

    BioXTAS RAW is a graphical-user-interface-based free open-source Python program for reduction and analysis of small-angle X-ray solution scattering (SAXS) data. The software is designed for biological SAXS data and enables creation and plotting of one-dimensional scattering profiles from two-dimensional detector images, standard data operations such as averaging and subtraction and analysis of radius of gyration and molecular weight, and advanced analysis such as calculation of inverse Fourier transforms and envelopes. It also allows easy processing of inline size-exclusion chromatography coupled SAXS data and data deconvolution using the evolving factor analysis method. It provides an alternative to closed-source programs such as Primus and ScÅtter for primary data analysis. Because it can calibrate, mask and integrate images it also provides an alternative to synchrotron beamline pipelines that scientists can install on their own computers and use both at home and at the beamline.
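
    One of the primary analyses mentioned, the Guinier estimate of the radius of gyration, follows from ln I(q) ≈ ln I(0) − q²Rg²/3 for qRg ≲ 1.3. A minimal sketch of that fit (not RAW's own code), run on a synthetic profile:

        import numpy as np

        def guinier_fit(q, I, q_rg_max=1.3):
            # Fit ln I(q) = ln I0 - (Rg^2 / 3) q^2 over the low-q region,
            # iteratively trimming so q * Rg stays below the validity limit.
            n = len(q)
            for _ in range(20):
                slope, intercept = np.polyfit(q[:n] ** 2, np.log(I[:n]), 1)
                rg = np.sqrt(-3.0 * slope)
                n_new = int(np.searchsorted(q, q_rg_max / rg))
                if n_new == n or n_new < 5:
                    break
                n = n_new
            return rg, np.exp(intercept)

        # Synthetic profile of a particle with Rg = 25 Angstrom, I(0) = 100.
        q = np.linspace(0.005, 0.08, 100)
        I = 100.0 * np.exp(-(q * 25.0) ** 2 / 3.0)
        print(guinier_fit(q, I))   # ~ (25.0, 100.0)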

  18. Multiphase flows of N immiscible incompressible fluids: A reduction-consistent and thermodynamically-consistent formulation and associated algorithm

    Science.gov (United States)

    Dong, S.

    2018-05-01

    We present a reduction-consistent and thermodynamically consistent formulation and an associated numerical algorithm for simulating the dynamics of an isothermal mixture consisting of N (N ⩾ 2) immiscible incompressible fluids with different physical properties (densities, viscosities, and pair-wise surface tensions). By reduction consistency we refer to the property that if only a set of M (1 ⩽ M ⩽ N - 1) fluids are present in the system then the N-phase governing equations and boundary conditions will exactly reduce to those for the corresponding M-phase system. By thermodynamic consistency we refer to the property that the formulation honors the thermodynamic principles. Our N-phase formulation is developed based on a more general method that allows for the systematic construction of reduction-consistent formulations, and the method suggests the existence of many possible forms of reduction-consistent and thermodynamically consistent N-phase formulations. Extensive numerical experiments have been presented for flow problems involving multiple fluid components and large density ratios and large viscosity ratios, and the simulation results are compared with the physical theories or the available physical solutions. The comparisons demonstrate that our method produces physically accurate results for this class of problems.

  19. Comparing the ISO-recommended and the cumulative data-reduction algorithms in S-on-1 laser damage test by a reverse approach method

    Science.gov (United States)

    Zorila, Alexandru; Stratan, Aurel; Nemes, George

    2018-01-01

    We compare the ISO-recommended (the standard) data-reduction algorithm used to determine the surface laser-induced damage threshold of optical materials by the S-on-1 test with two newly suggested algorithms, both named "cumulative" algorithms/methods, a regular one and a limit-case one, intended to perform in some respects better than the standard one. To avoid additional errors due to real experiments, a simulated test is performed, named the reverse approach. This approach simulates the real damage experiments, by generating artificial test-data of damaged and non-damaged sites, based on an assumed, known damage threshold fluence of the target and on a given probability distribution function to induce the damage. In this work, a database of 12 sets of test-data containing both damaged and non-damaged sites was generated by using four different reverse techniques and by assuming three specific damage probability distribution functions. The same value for the threshold fluence was assumed, and a Gaussian fluence distribution on each irradiated site was considered, as usual for the S-on-1 test. Each of the test-data was independently processed by the standard and by the two cumulative data-reduction algorithms, the resulting fitted probability distributions were compared with the initially assumed probability distribution functions, and the quantities used to compare these algorithms were determined. These quantities characterize the accuracy and the precision in determining the damage threshold and the goodness of fit of the damage probability curves. The results indicate that the accuracy in determining the absolute damage threshold is best for the ISO-recommended method, the precision is best for the limit-case of the cumulative method, and the goodness of fit estimator (adjusted R-squared) is almost the same for all three algorithms.

  20. A discontinuous galerkin time domain-boundary integral method for analyzing transient electromagnetic scattering

    KAUST Repository

    Li, Ping

    2014-07-01

    This paper presents an algorithm hybridizing the discontinuous Galerkin time domain (DGTD) method and a time domain boundary integral (BI) algorithm for 3-D open-region electromagnetic scattering analysis. The computational domain of DGTD is rigorously truncated by analytically evaluating the incoming numerical flux from outside the truncation boundary through the BI method, based on Huygens' principle. The advantages of the proposed method are that it allows the truncation boundary to be conformal to arbitrary (convex/concave) scattering objects, and that well-separated scatterers can be truncated by their local meshes without losing the physics (such as coupling/multiple scattering) of the problem, thus reducing the total number of mesh elements. Furthermore, low-frequency waves can be efficiently absorbed, and the field outside the truncation domain can be conveniently calculated using the same BI formulation. Numerical examples are benchmarked to demonstrate the accuracy and versatility of the proposed method.

  1. Decoding Interleaved Gabidulin Codes using Alekhnovich's Algorithm

    DEFF Research Database (Denmark)

    Puchinger, Sven; Müelich, Sven; Mödinger, David

    2017-01-01

    We prove that Alekhnovich's algorithm can be used for row reduction of skew polynomial matrices. This yields an O(ℓ^3 n^((ω+1)/2) log(n)) decoding algorithm for ℓ-Interleaved Gabidulin codes of length n, where ω is the matrix multiplication exponent.

  2. Robust inverse scattering full waveform seismic tomography for imaging complex structure

    International Nuclear Information System (INIS)

    Nurhandoko, Bagus Endar B.; Sukmana, Indriani; Wibowo, Satryo; Deny, Agus; Kurniadi, Rizal; Widowati, Sri; Mubarok, Syahrul; Susilowati; Kaswandhi

    2012-01-01

    Seismic tomography has recently become an important tool for imaging complex subsurface structure, and it is well known that imaging fault-rich zones is difficult. This paper presents the application of time-domain inverse scattering wave tomography to imaging a complex fault zone, in particular an efficient time-domain inverse scattering tomography and its implementation on a parallel computing cluster. The algorithm is based purely on scattering theory, solving the Lippmann-Schwinger integral equation using the Born approximation. The robustness of the algorithm is shown, especially in avoiding inversions trapped in local minima and in reaching the global minimum. Large data sets are handled by windowing and blocking of both memory and computation; the windowing parameters are based on the aperture of each shot gather, and this windowing significantly reduces memory use as well as computation. The parallel algorithm runs on a cluster of 120 processors across 20 AMD Phenom II nodes. The algorithm is benchmarked on the Marmousi model, which is representative of complex fault-rich areas. It is shown that the proposed method can clearly image the fault-rich, complex zone of the Marmousi model even when the initial model is quite far from the true model. The method can therefore serve as one solution for imaging very complex structures.

  3. Robust inverse scattering full waveform seismic tomography for imaging complex structure

    Energy Technology Data Exchange (ETDEWEB)

    Nurhandoko, Bagus Endar B.; Sukmana, Indriani; Wibowo, Satryo; Deny, Agus; Kurniadi, Rizal; Widowati, Sri; Mubarok, Syahrul; Susilowati; Kaswandhi [Wave Inversion and Subsurface Fluid Imaging Research (WISFIR) Lab., Complex System Research Division, Physics Department, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung. and Rock Fluid Imaging Lab., Rock Physics and Cluster C (Indonesia); Rock Fluid Imaging Lab., Rock Physics and Cluster Computing Center, Bandung (Indonesia); Physics Department of Institut Teknologi Bandung (Indonesia); Rock Fluid Imaging Lab., Rock Physics and Cluster Computing Center, Bandung, Indonesia and Institut Teknologi Telkom, Bandung (Indonesia); Rock Fluid Imaging Lab., Rock Physics and Cluster Computing Center, Bandung (Indonesia)

    2012-06-20

    Seismic tomography has recently become an important tool for imaging complex subsurface structure, and it is well known that imaging fault-rich zones is difficult. This paper presents the application of time-domain inverse scattering wave tomography to imaging a complex fault zone, in particular an efficient time-domain inverse scattering tomography and its implementation on a parallel computing cluster. The algorithm is based purely on scattering theory, solving the Lippmann-Schwinger integral equation using the Born approximation. The robustness of the algorithm is shown, especially in avoiding inversions trapped in local minima and in reaching the global minimum. Large data sets are handled by windowing and blocking of both memory and computation; the windowing parameters are based on the aperture of each shot gather, and this windowing significantly reduces memory use as well as computation. The parallel algorithm runs on a cluster of 120 processors across 20 AMD Phenom II nodes. The algorithm is benchmarked on the Marmousi model, which is representative of complex fault-rich areas. It is shown that the proposed method can clearly image the fault-rich, complex zone of the Marmousi model even when the initial model is quite far from the true model. The method can therefore serve as one solution for imaging very complex structures.

  4. Radioiodine therapy of hyperfunctioning thyroid nodules: usefulness of an implemented dose calculation algorithm allowing reduction of radioiodine amount.

    Science.gov (United States)

    Schiavo, M; Bagnara, M C; Pomposelli, E; Altrinetti, V; Calamia, I; Camerieri, L; Giusti, M; Pesce, G; Reitano, C; Bagnasco, M; Caputo, M

    2013-09-01

    Radioiodine is a common option for the treatment of hyperfunctioning thyroid nodules. Due to the expected selective radioiodine uptake by the adenoma, relatively high "fixed" activities are often used. Alternatively, the activity is calculated individually from a prescribed value of target absorbed dose. We evaluated the use of an algorithm for personalized radioiodine activity calculation, which as a rule allows the administration of lower radioiodine activities. Seventy-five patients with a single hyperfunctioning thyroid nodule eligible for 131I treatment were studied. The activities of 131I to be administered were estimated by the method described by Traino et al., developed for Graves' disease, assuming selective and homogeneous 131I uptake by the adenoma. The method takes into account the 131I uptake and its effective half-life, the target (adenoma) volume, and its expected volume reduction during treatment. A comparison with the activities calculated by other dosimetric protocols and by the "fixed" activity method was performed. 131I uptake was measured by external counting, thyroid nodule volume by ultrasonography, and thyroid hormones and TSH by ELISA. Remission of hyperthyroidism was observed in all but one patient; the volume reduction of the adenoma closely matched that assumed by our model. The effective half-life was highly variable between patients and critically affected the dose calculation. The administered activities were clearly lower than the "fixed" activities and those prescribed by other protocols. The proposed algorithm proved effective also for the treatment of a single hyperfunctioning thyroid nodule and allowed a significant reduction of the administered 131I activities without loss of clinical efficacy.
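
    A heavily simplified sketch of this kind of calculation, assuming a MIRD-style relation D ≈ k · A · U · T_eff / m (dose proportional to administered activity, fractional uptake, and effective half-life, and inversely proportional to target mass); the constant k and all inputs below are illustrative placeholders, not the coefficients of the Traino method.

        def activity_for_target_dose(d_target_gy, mass_g, uptake_frac, t_eff_days,
                                     k=3.8):
            # Inverts D = k * A * U * T_eff / m. The constant k, in
            # Gy*g/(MBq*day), lumps the locally absorbed energy per 131I
            # decay; 3.8 is a rough illustrative value, not a reference one.
            return d_target_gy * mass_g / (k * uptake_frac * t_eff_days)

        # Illustrative: 300 Gy to a 10 g nodule, 40% uptake, 5-day effective half-life.
        print(f"{activity_for_target_dose(300.0, 10.0, 0.40, 5.0):.0f} MBq")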

  5. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  6. Network Reduction Algorithm for Developing Distribution Feeders for Real-Time Simulators: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Nagarajan, Adarsh; Nelson, Austin; Prabakar, Kumaraguru; Hoke, Andy; Asano, Marc; Ueda, Reid; Nepal, Shaili

    2017-06-15

    As advanced grid-support functions (AGF) become more widely used in grid-connected photovoltaic (PV) inverters, utilities are increasingly interested in their impacts when implemented in the field. These effects can be understood by modeling feeders in real-time systems and testing PV inverters using power hardware-in-the-loop (PHIL) techniques. This paper presents a novel feeder model reduction algorithm using a Monte Carlo method that enables large feeders to be solved and operated on real-time computing platforms. Two Hawaiian Electric feeder models in Synergi Electric's load flow software were converted to reduced-order models in OpenDSS and subsequently implemented in the OPAL-RT real-time digital testing platform. Smart PV inverters were added to the real-time model with AGF responses modeled after characterizing commercially available hardware inverters. Finally, hardware inverters were tested in conjunction with the real-time model using PHIL techniques so that the effects of AGFs on the chosen feeders could be analyzed.

  7. Phase object retrieval through scattering medium

    Science.gov (United States)

    Zhao, Ming; Zhao, Meijing; Wu, Houde; Xu, Wenhai

    2018-05-01

    Optical imaging through a scattering medium has been an interesting and important research topic, especially in the field of biomedical imaging. However, it is still a challenging task due to strong scattering. This paper proposes to recover a phase object behind a scattering medium from a single-shot speckle intensity image using calibrated transmission matrices (TMs). We construct the forward model as a non-linear mapping, since the intensity image loses the phase information, and then employ a generalized phase retrieval algorithm to recover the hidden object. Moreover, we show that a phase object can be reconstructed from a small portion of the speckle image captured by the camera. Simulations are performed to demonstrate our scheme and test its performance. Finally, a real experiment is set up: we measure the TMs of the scattering medium and then use them to reconstruct the hidden object. We show that a phase object of size 32 × 32 is retrieved from 150 × 150 speckle grains, which is only 1/50 of the speckle area. We believe our proposed method can benefit the community of imaging through scattering media.

  8. A modified CoSaMP algorithm for electromagnetic imaging of two dimensional domains

    KAUST Repository

    Sandhu, Ali Imran; Bagci, Hakan

    2017-01-01

    The compressive sampling matching pursuit (CoSaMP) algorithm is used for solving the electromagnetic inverse scattering problem on two-dimensional sparse domains. Since the scattering matrix, which is computed by sampling the Green function, does
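
    The record is truncated, but the unmodified baseline CoSaMP iteration it builds on is short enough to sketch (the entry's domain-specific modification is not reproduced here):

        import numpy as np

        def cosamp(A, y, s, iters=30, tol=1e-6):
            """Baseline CoSaMP: recover an s-sparse x from y = A @ x (+ noise)."""
            m, n = A.shape
            x = np.zeros(n, dtype=A.dtype)
            r = y.copy()
            for _ in range(iters):
                proxy = A.conj().T @ r                       # signal proxy
                omega = np.argsort(np.abs(proxy))[-2 * s:]   # 2s largest correlations
                T = np.union1d(omega, np.flatnonzero(x))     # merge supports
                b = np.zeros(n, dtype=A.dtype)
                b[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]
                keep = np.argsort(np.abs(b))[-s:]            # prune to s largest
                x = np.zeros(n, dtype=A.dtype)
                x[keep] = b[keep]
                r = y - A @ x
                if np.linalg.norm(r) < tol * np.linalg.norm(y):
                    break
            return x

        # Illustrative recovery of a 5-sparse vector from 60 random measurements.
        rng = np.random.default_rng(1)
        A = rng.standard_normal((60, 200)) / np.sqrt(60)
        x_true = np.zeros(200)
        x_true[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
        print(np.linalg.norm(cosamp(A, A @ x_true, s=5) - x_true))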

  9. Broadband and Broad-angle Polarization-independent Metasurface for Radar Cross Section Reduction.

    Science.gov (United States)

    Sun, Hengyi; Gu, Changqing; Chen, Xinlei; Li, Zhuo; Liu, Liangliang; Xu, Bingzheng; Zhou, Zicheng

    2017-01-20

    In this work, a broadband and broad-angle polarization-independent random coding metasurface structure is proposed for radar cross section (RCS) reduction. An efficient genetic algorithm is utilized to obtain the optimal layout of the unit cells of the metasurface so as to get uniform backscattering under normal incidence. Excellent agreement between the simulation and experimental results shows that the proposed metasurface structure can significantly reduce the radar cross section, by more than 10 dB from 17 GHz to 42 GHz, when the angle of the incident waves varies from 10° to 50°. The proposed coding metasurface provides an efficient scheme to reduce the scattering of electromagnetic waves.
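
    The genetic-algorithm step in the record above amounts to searching over binary (0/π) coding matrices for the layout whose array factor is most uniform. The toy sketch below scores a candidate by its normalized peak backscatter, computed with a zero-padded FFT; the 16x16 lattice, population size, and mutation rate are illustrative assumptions rather than the authors' settings.

    ```python
    # Toy GA search for a coding-metasurface layout with a flat backscatter pattern.
    import numpy as np

    rng = np.random.default_rng(1)
    N, POP, GEN = 16, 40, 200            # lattice size, population, generations

    def fitness(code):
        # '0' and '1' unit cells differ by a pi reflection phase; a flatter
        # far-field pattern (lower normalized peak) means more uniform scattering.
        field = np.exp(1j * np.pi * code)
        pattern = np.abs(np.fft.fft2(field, s=(64, 64))) ** 2
        return pattern.max() / pattern.sum()

    pop = rng.integers(0, 2, size=(POP, N, N))
    for _ in range(GEN):
        scores = np.array([fitness(c) for c in pop])
        pop = pop[np.argsort(scores)]                # elitist sort: best codings first
        for i in range(POP // 2, POP):               # refill the worst half with offspring
            p1 = pop[rng.integers(POP // 2)]
            p2 = pop[rng.integers(POP // 2)]
            mask = rng.integers(0, 2, size=(N, N)).astype(bool)   # uniform crossover
            child = np.where(mask, p1, p2)
            pop[i] = child ^ (rng.random((N, N)) < 0.02)          # bit-flip mutation

    scores = np.array([fitness(c) for c in pop])
    print("normalized peak backscatter of best coding:", scores.min())
    ```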

  10. Coherent scattering noise reduction method with wavelength diversity detection for holographic data storage system

    Science.gov (United States)

    Nakamura, Yusuke; Hoshizawa, Taku; Takashima, Yuzuru

    2017-09-01

    A new method, wavelength diversity detection (WDD), for improving signal quality is proposed and its effectiveness is numerically confirmed. We consider that WDD is especially effective for high-capacity systems having low hologram diffraction efficiencies. In such systems, the signal quality is primarily limited by coherent scattering noise; thus, effective improvement of the signal quality under a scattering-limited system is of great interest. WDD utilizes a new degree of freedom, the spectrum width, and scattering by molecules to improve the signal quality of the system. We found that WDD improves the quality by counterbalancing the degradation of the quality due to Bragg mismatch. With WDD, a higher-scattering-coefficient medium can improve the quality. The result provides an interesting insight into the requirements for material characteristics, especially for a large-M/# material. In general, a larger-M/# material contains more molecules; thus, the system is subject to more scattering, which actually improves the quality with WDD. We propose a pathway for a future holographic data storage system (HDSS) using WDD, which can record a larger amount of data than a conventional HDSS.

  11. Fully 3D iterative scatter-corrected OSEM for HRRT PET using a GPU

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyung Sang; Ye, Jong Chul, E-mail: kssigari@kaist.ac.kr, E-mail: jong.ye@kaist.ac.kr [Bio-Imaging and Signal Processing Lab., Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), 335 Gwahak-no, Yuseong-gu, Daejon 305-701 (Korea, Republic of)

    2011-08-07

    Accurate scatter correction is especially important for high-resolution 3D positron emission tomography (PET) scanners such as the high-resolution research tomograph (HRRT) due to the large scatter fraction in the data. To address this problem, a fully 3D iterative scatter-corrected ordered-subset expectation maximization (OSEM) scheme, in which a 3D single scatter simulation (SSS) is performed alternately with a 3D OSEM reconstruction, was recently proposed. However, due to the computational complexity of both the SSS and OSEM algorithms for a high-resolution 3D PET, it has not been widely used in practice. The main objective of this paper is, therefore, to accelerate the fully 3D iterative scatter-corrected OSEM using a graphics processing unit (GPU) and verify its performance for an HRRT. We show that, to exploit the massive thread structures of the GPU, several algorithmic modifications are necessary. For the SSS implementation, a sinogram-driven approach is found to be more appropriate than a detector-driven approach, as fast linear interpolation can be performed in the sinogram domain through the use of texture memory. Furthermore, a pixel-driven backprojector and a ray-driven projector can be significantly accelerated by assigning threads to voxels and sinogram bins, respectively. Using Nvidia's GPU and the compute unified device architecture (CUDA), the execution time of an SSS is less than 6 s, a single iteration of OSEM with 16 subsets takes 16 s, and a single iteration of the fully 3D scatter-corrected OSEM composed of an SSS and six iterations of OSEM takes under 105 s for the HRRT geometry, which corresponds to acceleration factors of 125x and 141x for OSEM and SSS, respectively. The fully 3D iterative scatter-corrected OSEM algorithm is validated in simulations using the Geant4 application for tomographic emission and in actual experiments using an HRRT.

  12. Scattering Properties of Electromagnetic Waves from Randomly Oriented Rough Metal Plate in the Lower Terahertz Region

    Directory of Open Access Journals (Sweden)

    Chen Gang

    2018-02-01

    Full Text Available An efficient hybrid algorithm is proposed to analyze the electromagnetic scattering properties of an infinitely thin metal plate in the lower terahertz (THz) frequency region. In this region, the metal plate can be viewed as a perfectly electrically conducting object with a slightly rough surface. Hence, the THz field scattered from the metal plate can be divided into coherent and incoherent parts. The physical optics and truncated-wedge incremental-length diffraction coefficients methods are used to compute the coherent part, whereas the small perturbation method is used to compute the incoherent part. Then, the radar cross section of the rough metal plate is computed with both the multilevel fast multipole algorithm and the proposed hybrid algorithm. The numerical results show that the proposed algorithm has good accuracy while rapidly simulating the scattering properties in the lower THz region.

  13. Importance of scatter compensation algorithm in heterogeneous tissue for the radiation dose calculation of small lung nodules. A clinical study

    International Nuclear Information System (INIS)

    Baba, Yuji; Murakami, Ryuji; Mizukami, Naohisa; Morishita, Shoji; Yamashita, Yasuyuki; Araki, Fujio; Moribe, Nobuyuki; Hirata, Yukinori

    2004-01-01

    The purpose of this study was to compare radiation doses of small lung nodules calculated with beam-scattering compensation in heterogeneous tissues and those calculated without compensation. Computed tomography (CT) data of 34 small lung nodules (1-2 cm: 12 nodules, 2-3 cm: 11 nodules, 3-4 cm: 11 nodules) were used in the radiation dose measurements. Radiation planning for each lung nodule was performed with a commercially available unit using two different radiation dose calculation methods: the superposition method (with scatter compensation in heterogeneous tissues) and the Clarkson method (without scatter compensation in heterogeneous tissues). The linac photon energies used in this study were 4 MV and 10 MV. The monitor units (MU) needed to deliver 10 Gy at the center of the radiation field (the center of the nodule), calculated with the two methods, were compared. In 1-2 cm nodules, the MU calculated by the Clarkson method (MUc) was 90.0±1.1% (4 MV photons) and 80.5±2.7% (10 MV photons) of the MU calculated by the superposition method (MUs); in 2-3 cm nodules, MUc was 92.9±1.1% (4 MV) and 86.6±2.8% (10 MV) of MUs; and in 3-4 cm nodules, MUc was 90.5±2.0% (4 MV) and 90.1±1.7% (10 MV) of MUs. In 1-2 cm nodules, the MU calculated without lung compensation (MUn) was 120.6±8.3% (4 MV) and 95.1±4.1% (10 MV) of MUs; in 2-3 cm nodules, MUn was 120.3±11.5% (4 MV) and 100.5±4.6% (10 MV) of MUs; and in 3-4 cm nodules, MUn was 105.3±9.0% (4 MV) and 103.4±4.9% (10 MV) of MUs. The MU calculated without lung compensation was not significantly different from the MU calculated by the superposition method in 2-3 cm nodules. We found that the conventional dose calculation algorithm without scatter compensation in heterogeneous tissues substantially overestimated the radiation dose of small nodules in the lung field. In the calculation of dose distribution of small

  14. Radar Echo Scattering Modeling and Image Simulations of Full-scale Convex Rough Targets at Terahertz Frequencies

    Directory of Open Access Journals (Sweden)

    Gao Jingkun

    2018-02-01

    Full Text Available Echo simulation is a precondition for developing radar imaging systems, algorithms, and subsequent applications. Electromagnetic scattering modeling of the target is key to echo simulation. At terahertz (THz) frequencies, targets usually have an ultra-large electrical size, which makes applying classical electromagnetic calculation methods impractical. At the same time, the short wavelength makes the surface roughness of targets a factor that cannot be ignored, rendering traditional echo simulation methods based on the point-scattering hypothesis inapplicable. Modeling the scattering characteristics of targets and efficiently generating their radar echoes in THz bands has therefore become a problem that must be solved. In this paper, a hierarchical semi-deterministic modeling method is proposed. A full-wave algorithm for rough surfaces is used to calculate the scattered field of facets. Then, the scattered fields of all facets are transformed into the target coordinate system and coherently summed. Finally, the radar echo containing phase information can be obtained. Using small-scale rough models, our method is compared with a standard high-frequency numerical method, which verifies its effectiveness. Imaging results for a full-scale cone-shaped target are presented, and the scattering modeling and echo generation problem of full-scale convex targets with rough surfaces in THz bands is preliminarily solved; this lays the foundation for future research on imaging regimes and algorithms.

  15. Magnet sorting algorithms

    International Nuclear Information System (INIS)

    Dinev, D.

    1996-01-01

    Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)
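
    As a concrete illustration of this class of random-search sorting methods, the sketch below permutes magnets between slots by random pair swaps and accepts a swap when a goal function decreases. The goal used here, the magnitude of the phasor sum of field errors at the slot betatron phases, is a common proxy for orbit/smear distortion and stands in for the actual phase-space smear used in the paper.

    ```python
    # Random pair-swap magnet sorting against a toy phasor-sum goal function.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 32
    errors = rng.normal(size=n)                # measured relative field errors
    phases = 2 * np.pi * 0.27 * np.arange(n)   # betatron phase advance per slot (toy tune)

    def goal(order):
        # Proxy for distortion: coherent sum of errors at the slot phases.
        return abs(np.sum(errors[order] * np.exp(1j * phases)))

    order = np.arange(n)
    best = goal(order)
    for _ in range(20000):                     # random search over permutations
        i, j = rng.integers(n, size=2)
        order[i], order[j] = order[j], order[i]
        g = goal(order)
        if g < best:
            best = g                           # keep the improving swap
        else:
            order[i], order[j] = order[j], order[i]   # revert the swap

    print("goal function after sorting:", best)
    ```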

  16. A study of W W scattering at the LHC

    CERN Document Server

    Nauyock, Farahnaaz

    2004-01-01

    This thesis presents a study of WW scattering at the LHC, a proton-proton collider being built at CERN and due to start its first run in 2007. The case where no new particles are discovered before the start of the LHC is analysed. The elastic scattering of W bosons is considered and the semileptonic decay channels of the bosons are investigated. Signals and backgrounds are simulated using Atlfast, a fast simulation programme for the ATLAS experiment. This specific channel causes violation of unitarity at 1.2 TeV. Therefore, unitarisation is performed, and this leads to different resonance scenarios, five of which are investigated. The final signal-to-background ratio after applying various kinematic cuts is greater than one for all five scenarios. A comparison between the kT algorithm and the cone algorithm is also performed to find out which jet-finding analysis yields a better signal-to-background ratio. The kT algorithm proves very efficient in reducing the background by an approximate factor of 1.5 better than t...

  17. Remarks on the inverse scattering transform associated with toda equations

    Science.gov (United States)

    Ablowitz, Mark J.; Villarroel, J.

    The Inverse Scattering Transforms used to solve both the 2+1 Toda equation and a novel reduction, the Toda differential-delay equations, are outlined. There are a number of interesting features associated with these systems and the related scattering theory.

  18. Studies of Actinides Reduction on Iron Surfaces by Means of Resonant Inelastic X-ray Scattering

    International Nuclear Information System (INIS)

    Kvashnina, K.O.; Butorin, S.M.; Shuh, D.K.; Ollila, K.; Soroka, I.; Guo, J.-H.; Werme, L.; Nordgren, J.

    2006-01-01

    The interaction of actinides with corroded iron surfaces was studied using resonant inelastic x-ray scattering (RIXS) spectroscopy at actinide 5d edges. RIXS profiles corresponding to the f-f excitations are found to be very sensitive to the chemical states of actinides in different systems. Our results clearly indicate that U(VI) (as the soluble uranyl ion) was reduced to U(IV) in the form of relatively insoluble uranium species, showing that the presence of iron significantly affects the mobility of actinides by creating reducing conditions. Np(V) and Pu(VI) in the groundwater solution were likewise reduced by the iron surface to Np(IV) and Pu(IV), respectively. The reduction of actinide compounds is an important process controlling their environmental behavior. Using RIXS we have shown that actinide species formed by radiolysis of water in the disposal canister are likely to be reduced on the iron corrosion products, preventing their release from the canister.

  19. Comparison between beamforming and super resolution imaging algorithms for non-destructive evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Chengguang [College of Mechatronic Engineering and Automation, National University of Defense Technology, Changsha 410073, PR China and Department of Mechanical Engineering, University of Bristol, Queen's Building, University Walk, Bristol BS8 1TR (United Kingdom); Drinkwater, Bruce W. [Department of Mechanical Engineering, University of Bristol, Queen's Building, University Walk, Bristol BS8 1TR (United Kingdom)

    2014-02-18

    In this paper the performance of the total focusing method is compared with the widely used time-reversal MUSIC super-resolution technique. The algorithms are tested with simulated and experimental ultrasonic array data, each containing different noise levels. The simulated time-domain signals allow the effects of array geometry, frequency, scatterer location, scatterer size, scatterer separation and random noise to be carefully controlled. The performance of the imaging algorithms is evaluated in terms of resolution and sensitivity to random noise. It is shown that for the low-noise situation, time-reversal MUSIC provides enhanced lateral resolution when compared to the total focusing method. However, for higher noise levels, the total focusing method shows robustness, whilst the performance of time-reversal MUSIC is significantly degraded.

  20. Comparison between beamforming and super resolution imaging algorithms for non-destructive evaluation

    International Nuclear Information System (INIS)

    Fan, Chengguang; Drinkwater, Bruce W.

    2014-01-01

    In this paper the performance of the total focusing method is compared with the widely used time-reversal MUSIC super-resolution technique. The algorithms are tested with simulated and experimental ultrasonic array data, each containing different noise levels. The simulated time-domain signals allow the effects of array geometry, frequency, scatterer location, scatterer size, scatterer separation and random noise to be carefully controlled. The performance of the imaging algorithms is evaluated in terms of resolution and sensitivity to random noise. It is shown that for the low-noise situation, time-reversal MUSIC provides enhanced lateral resolution when compared to the total focusing method. However, for higher noise levels, the total focusing method shows robustness, whilst the performance of time-reversal MUSIC is significantly degraded.
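
    The total focusing method compared in the two records above is a delay-and-sum beamformer applied to full-matrix-capture (FMC) array data. The sketch below synthesizes FMC signals for a single point scatterer and focuses them at every image pixel; the array geometry, toneburst pulse, and image grid are illustrative assumptions.

    ```python
    # Minimal total focusing method (TFM) sketch on synthetic full-matrix-capture data.
    import numpy as np

    c, fs, f0 = 1500.0, 50e6, 5e6             # sound speed, sampling rate, centre frequency
    elems = np.stack([np.linspace(-8e-3, 8e-3, 16), np.zeros(16)], axis=1)
    target = np.array([2e-3, 12e-3])          # single point scatterer

    t = np.arange(2048) / fs
    def pulse(tau):                            # Gaussian-windowed toneburst echo at time tau
        return np.exp(-((t - tau) * f0 / 2) ** 2) * np.cos(2 * np.pi * f0 * (t - tau))

    d = np.linalg.norm(elems - target, axis=1)
    fmc = np.array([[pulse((d[i] + d[j]) / c) for j in range(16)] for i in range(16)])

    # TFM image: for each pixel, sum every tx/rx pair at the tx->pixel->rx delay.
    xs, zs = np.linspace(-10e-3, 10e-3, 81), np.linspace(5e-3, 20e-3, 61)
    img = np.zeros((len(zs), len(xs)))
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            dp = np.linalg.norm(elems - np.array([x, z]), axis=1)
            delays = (dp[:, None] + dp[None, :]) / c
            idx = np.clip((delays * fs).astype(int), 0, len(t) - 1)
            img[iz, ix] = abs(fmc[np.arange(16)[:, None], np.arange(16)[None, :], idx].sum())

    peak = np.unravel_index(img.argmax(), img.shape)
    print("image peak at x=%.1f mm, z=%.1f mm" % (xs[peak[1]] * 1e3, zs[peak[0]] * 1e3))
    ```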

  1. Compton scattering collision module for OSIRIS

    Science.gov (United States)

    Del Gaudio, Fabrizio; Grismayer, Thomas; Fonseca, Ricardo; Silva, Luís

    2017-10-01

    Compton scattering plays a fundamental role in a variety of different astrophysical environments, such as at the gaps of pulsars and the stagnation surface of black holes. In these scenarios, Compton scattering is coupled with self-consistent mechanisms such as pair cascades. We present the implementation of a novel module, embedded in the self-consistent framework of the PIC code OSIRIS 4.0, capable of simulating Compton scattering from first principles and fully integrated with the self-consistent plasma dynamics. The algorithm accounts for the stochastic nature of Compton scattering, reproducing without approximations the exchange of energy between photons and unbound charged species. We present benchmarks of the code against the analytical results of Blumenthal et al. and the numerical solution of the linear Kompaneets equation, and good agreement is found between the simulations and the theoretical models. This work is supported by the European Research Council Grant (ERC-2015-AdG 695088) and the Fundação para a Ciência e a Tecnologia (Bolsa de Investigação PD/BD/114323/2016).

  2. Two-dimensional Fast ESPRIT Algorithm for Linear Array SAR Imaging

    Directory of Open Access Journals (Sweden)

    Zhao Yi-chao

    2015-10-01

    Full Text Available The linear array Synthetic Aperture Radar (SAR) system is a popular research tool because it can realize three-dimensional imaging. However, owing to limitations of the aircraft platform and actual conditions, resolution improvement is difficult in the cross-track and along-track directions. In this study, a two-dimensional fast Estimation of Signal Parameters by Rotational Invariance Technique (ESPRIT) algorithm for linear array SAR imaging is proposed to overcome these limitations. This approach combines the Gerschgorin disks method and the ESPRIT algorithm to estimate the positions of scatterers in the cross-track and along-track directions. Moreover, the reflectivity of scatterers is obtained by a modified pairing method based on “region growing”, replacing the least-squares method. The simulation results demonstrate the applicability of the algorithm, with high resolution, fast calculation, and good real-time response.
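
    The rotational-invariance step at the core of ESPRIT is easy to show in one dimension. The sketch below estimates the spatial frequencies (positions) of a few point scatterers from a uniform linear sample sequence; the paper's two-dimensional extension, Gerschgorin-disk order estimation, and region-growing pairing are not reproduced here, and the frequencies and noise level are illustrative.

    ```python
    # One-dimensional ESPRIT sketch: scatterer frequencies from rotational invariance.
    import numpy as np

    rng = np.random.default_rng(3)
    M, K = 64, 3                                   # samples, number of scatterers
    freqs = np.array([0.11, 0.23, 0.37])           # normalized spatial frequencies
    amps = np.array([1.0, 0.8, 0.6])
    n = np.arange(M)
    x = (amps * np.exp(2j * np.pi * np.outer(n, freqs))).sum(axis=1)
    x += 0.05 * (rng.normal(size=M) + 1j * rng.normal(size=M))

    L = M // 2                                     # Hankel matrix for the subspace estimate
    H = np.array([x[i:i + L] for i in range(M - L + 1)]).T
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :K]                                  # signal subspace

    # Rotational invariance: shifted subspaces are related by Psi, whose
    # eigenvalues are exp(2j*pi*f_k).
    Psi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]
    est = np.sort(np.angle(np.linalg.eigvals(Psi)) / (2 * np.pi))
    print("estimated frequencies:", est)           # should be close to `freqs`
    ```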

  3. Multiple scattering processes: inverse and direct

    International Nuclear Information System (INIS)

    Kagiwada, H.H.; Kalaba, R.; Ueno, S.

    1975-01-01

    The purpose of the work is to formulate inverse problems in radiative transfer, to introduce the functions b and h as parameters of internal intensity in homogeneous slabs, and to derive initial value problems to replace the more traditional boundary value problems and integral equations of multiple scattering with high computational efficiency. The discussion covers multiple scattering processes in a one-dimensional medium; isotropic scattering in homogeneous slabs illuminated by parallel rays of radiation; the theory of functions b and h in homogeneous slabs illuminated by isotropic sources of radiation either at the top or at the bottom; inverse and direct problems of multiple scattering in slabs including internal sources; multiple scattering in inhomogeneous media, with particular reference to inverse problems for estimation of layers and total thickness of inhomogeneous slabs and to multiple scattering problems with Lambert's law and specular reflectors underlying slabs; and anisotropic scattering with reduction of the number of relevant arguments through axially symmetric fields and expansion in Legendre functions. Gaussian quadrature data for a seven point formula, a FORTRAN program for computing the functions b and h, and tables of these functions supplement the text

  4. Applications of time-dependent Raman scattering theory to the one-electron reduction of 4-cyano-n-methylpyridinium

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1992-01-01

    Activation barrier heights, and therefore rates, for molecule-based electron-transfer (ET) reactions are governed by redox thermodynamics and Franck-Condon effects. Quantitative assessment of the latter requires a detailed, quantitative knowledge of all internal and external normal-coordinate displacements, together with appropriate vibrational frequencies (v) or force constants (f). In favorable cases, the desired internal or vibrational displacement information can be satisfactorily estimated from redox-induced bond-length changes as provided, for example, by x-ray crystallography or extended x-ray absorption fine structure (EXAFS) measurements. Other potentially useful methods include Franck-Condon analysis of structured emission or absorption spectra, hole-burning techniques, and application of empirical structure/frequency relationships (e.g., Badger's rules). There are, however, a number of limitations. The most obvious limitations for crystallography are that measurements can be made only in a crystalline environment and that experiments cannot be done on short-lived electron-transfer excited states or on systems which suffer from chemical decomposition following oxidation or reduction. For EXAFS there are additional constraints in that only selected elements display useful scattering and only atoms in close proximity to the scattering center may be detected. This report contains the first successful applications of the Raman methodology to a much larger class of ET reactions, namely, outer-sphere reactions. The report also necessarily represents the first application to a monomeric redox system.

  5. Extracting quantum dynamics from genetic learning algorithms through principal control analysis

    International Nuclear Information System (INIS)

    White, J L; Pearson, B J; Bucksbaum, P H

    2004-01-01

    Genetic learning algorithms are widely used to control ultrafast optical pulse shapes for photo-induced quantum control of atoms and molecules. An unresolved issue is how to use the solutions found by these algorithms to learn about the system's quantum dynamics. We propose a simple method based on covariance analysis of the control space, which can reveal the degrees of freedom in the effective control Hamiltonian. We have applied this technique to stimulated Raman scattering in liquid methanol. A simple model of two-mode stimulated Raman scattering is consistent with the results. (letter to the editor)
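
    The covariance analysis proposed in the record reduces, in its simplest form, to diagonalizing the covariance matrix of the control parameters over the set of good solutions found by the search. The sketch below applies this to synthetic stand-in data with two planted effective directions; the leading eigenvectors then expose the few control degrees of freedom that matter.

    ```python
    # Principal-component analysis of a (synthetic) control space explored by a
    # learning algorithm: eigenvectors of the covariance reveal effective directions.
    import numpy as np

    rng = np.random.default_rng(4)
    n_sol, n_param = 200, 16                    # good solutions x pulse-shaper parameters

    # Synthetic population: variation concentrated along two hidden directions.
    basis = np.linalg.qr(rng.normal(size=(n_param, 2)))[0]
    pop = rng.normal(size=(n_sol, 2)) @ basis.T + 0.05 * rng.normal(size=(n_sol, n_param))

    cov = np.cov(pop, rowvar=False)             # covariance over the control space
    evals, evecs = np.linalg.eigh(cov)
    evals, evecs = evals[::-1], evecs[:, ::-1]  # sort eigenvalues in descending order

    explained = evals / evals.sum()
    print("variance explained by first 3 components:", np.round(explained[:3], 3))
    # The leading columns of `evecs` approximate the effective control degrees
    # of freedom probed by the search.
    ```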

  6. Admissible Crosstalk Limits in a Two Colour Interferometers for Plasma Density Diagnostics. A Reduction Algorithm

    International Nuclear Information System (INIS)

    Sanchez, M.; Esteban, L.; Kornejew, P.; Hirsch, M.

    2008-01-01

    Mid-infrared (10.6 μm CO2 laser line) interferometers used as a plasma density diagnostic must be two-colour systems, with superposed interferometer beams at different wavelengths, in order to cope with mechanical vibrations and drifts. They require a highly precise phase-difference measurement in which all sources of error must be reduced. One of these is the crosstalk between the signals, which creates nonlinear spurious periodic mixing products. The origin may be either optical or electrical crosstalk, both resulting in similar perturbations of the measurement. In the TJ-II interferometer a post-processing algorithm is used to reduce the crosstalk in the data. This post-processing procedure is not appropriate for very long pulses, as is the case for the new tokamak (ITER) and stellarator (W7-X) projects. In both cases an on-line reduction process is required or, even better, the unwanted signal components must be reduced in the system itself. CO2 laser interferometers that use the CO laser line (5.3 μm) as the second wavelength may employ a single common detector sensitive to both wavelengths and separate the corresponding IF signals by appropriate bandpass filters. This reduces the complexity of the optical arrangement and avoids a possible source of vibration-induced phase noise, as both signals share the same beam path. To avoid crosstalk in this arrangement, the filtering must be appropriate. In this paper we present calculations to define the limits of crosstalk for a desired plasma density precision. A crosstalk reduction algorithm has been developed and is applied to experimental results from TJ-II pulses. Results from a single-detector arrangement, as under investigation for the CO2/CO laser interferometer developed for W7-X, are presented.

  7. Motion tolerant iterative reconstruction algorithm for cone-beam helical CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu [Hitachi Medical Corporation, Chiba-ken (Japan). CT System Div.

    2011-07-01

    We have developed a new advanced iterative reconstruction algorithm for cone-beam helical CT. The features of this algorithm are: (a) it uses the separable paraboloidal surrogate (SPS) technique as a foundation for reconstruction to reduce noise and cone-beam artifact; (b) it uses a view weight in the back-projection process to reduce motion artifact. To confirm the improvement of our proposed algorithm over existing algorithms, such as the Feldkamp-Davis-Kress (FDK) and SPS algorithms, we compared the motion artifact reduction, image noise reduction (standard deviation of CT number), and cone-beam artifact reduction on simulated and clinical data sets. Our results demonstrate that the proposed algorithm dramatically reduces motion artifacts compared with the SPS algorithm, and decreases image noise compared with the FDK algorithm. In addition, the proposed algorithm potentially improves the time resolution of iterative reconstruction. (orig.)

  8. Reconstruction of Kinematic Surfaces from Scattered Data

    DEFF Research Database (Denmark)

    Randrup, Thomas; Pottmann, Helmut; Lee, I.-K.

    1998-01-01

    Given a surface in 3-space or scattered points from a surface, we present algorithms for fitting the data by a surface which can be generated by a one-parameter subgroup of the group of similarities. These surfaces are general cones and cylinders, surfaces of revolution, helical surfaces and spiral surfaces.

  9. An Interval Type-2 Fuzzy System with a Species-Based Hybrid Algorithm for Nonlinear System Control Design

    Directory of Open Access Journals (Sweden)

    Chung-Ta Li

    2014-01-01

    Full Text Available We propose a species-based hybrid of the electromagnetism-like mechanism (EM) and back-propagation algorithms (SEMBP) for the design of an interval type-2 fuzzy neural system with asymmetric membership functions (AIT2FNS). The interval type-2 asymmetric fuzzy membership functions (IT2 AFMFs) and the TSK-type consequent part are adopted to implement the network structure in the AIT2FNS. In addition, the type reduction procedure is integrated into an adaptive network structure to reduce computational complexity. Hence, the AIT2FNS can enhance the approximation accuracy effectively by using fewer fuzzy rules. The AIT2FNS is trained by the SEMBP algorithm, which contains the steps of uniform initialization, species determination, local search, total force calculation, movement, and evaluation. It combines the advantages of the EM and back-propagation (BP) algorithms to attain a faster convergence and a lower computational complexity. The proposed SEMBP algorithm adopts the uniform method (which evenly scatters solution agents over the feasible solution region) and the species technique to improve the algorithm's ability to find the global optimum. Finally, two illustrative examples of nonlinear system control are presented to demonstrate the performance and the effectiveness of the proposed AIT2FNS with the SEMBP algorithm.

  10. Synthetic acceleration methods for linear transport problems with highly anisotropic scattering

    International Nuclear Information System (INIS)

    Khattab, K.M.; Larsen, E.W.

    1992-01-01

    The diffusion synthetic acceleration (DSA) algorithm effectively accelerates the iterative solution of transport problems with isotropic or mildly anisotropic scattering. However, DSA loses its effectiveness for transport problems that have strongly anisotropic scattering. Two generalizations of DSA are proposed, which, for highly anisotropic scattering problems, converge at least an order of magnitude (clock time) faster than the DSA method. These two methods are developed, the results of Fourier analysis that theoretically predict their efficiency are described, and numerical results that verify the theoretical predictions are presented. (author). 10 refs., 7 figs., 5 tabs

  11. Synthetic acceleration methods for linear transport problems with highly anisotropic scattering

    International Nuclear Information System (INIS)

    Khattab, K.M.; Larsen, E.W.

    1991-01-01

    This paper reports on the diffusion synthetic acceleration (DSA) algorithm that effectively accelerates the iterative solution of transport problems with isotropic or mildly anisotropic scattering. However, DSA loses its effectiveness for transport problems that have strongly anisotropic scattering. Two generalizations of DSA are proposed, which, for highly anisotropic scattering problems, converge at least an order of magnitude (clock time) faster than the DSA method. These two methods are developed, the results of Fourier analyses that theoretically predict their efficiency are described, and numerical results that verify the theoretical predictions are presented

  12. Atmospheric scattering corrections to solar radiometry

    International Nuclear Information System (INIS)

    Box, M.A.; Deepak, A.

    1979-01-01

    Whenever a solar radiometer is used to measure direct solar radiation, some diffuse sky radiation invariably enters the detector's field of view along with the direct beam. Therefore, the atmospheric optical depth obtained by the use of Bouguer's transmission law (also called the Beer-Lambert law), which is valid only for direct radiation, needs to be corrected to take account of the scattered radiation. In this paper we discuss the correction factors needed to account for the diffuse (i.e., singly and multiply scattered) radiation and the algorithms developed for retrieving the aerosol size distribution from such measurements. For a radiometer with a small half-cone field of view and relatively clear skies (optical depths <0.4), it is shown that the total diffuse contribution represents approximately 1% of the total intensity. It is assumed here that the main contributions to the diffuse radiation within the detector's view cone are due to single scattering by molecules and aerosols and multiple scattering by molecules alone, aerosol multiple-scattering contributions being treated as negligibly small. The theory and the numerical results discussed in this paper will be helpful not only in making corrections to the measured optical depth data but also in designing improved solar radiometers.

  13. Reduction of artifacts caused by orthopedic hardware in the spine in spectral detector CT examinations using virtual monoenergetic image reconstructions and metal-artifact-reduction algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Grosse Hokamp, Nils; Neuhaus, V.; Abdullayev, N.; Laukamp, K.; Lennartz, S.; Mpotsaris, A.; Borggrefe, J. [University Hospital Cologne, Department of Diagnostic and Interventional Radiology, Cologne (Germany)

    2018-02-15

    The aim of this study was to assess the artifact reduction in patients with orthopedic hardware in the spine as provided by (1) metal-artifact-reduction algorithms (O-MAR) and (2) virtual monoenergetic images (MonoE) from spectral detector CT (SDCT), compared to conventional iterative reconstruction (CI). In all, 28 consecutive patients with orthopedic hardware in the spine who underwent SDCT examinations were included. CI, O-MAR and MonoE (40-200 keV) images were reconstructed. Attenuation (HU) and noise (SD) were measured in order to calculate the signal-to-noise ratio (SNR) of paravertebral muscle and spinal canal. Subjective image quality was assessed by two radiologists in terms of image quality and extent of artifact reduction. O-MAR and high-keV MonoE showed a significant decrease of hypodense artifacts in terms of higher attenuation as compared to CI (CI vs O-MAR, 200 keV MonoE: -396.5 HU vs. -115.2 HU, -48.1 HU; both p ≤ 0.001). Further, artifacts as depicted by noise were reduced in O-MAR and high-keV MonoE as compared to CI in (1) paravertebral muscle and (2) spinal canal (CI vs. O-MAR/200 keV: (1) 34.7 ± 19.0 HU vs. 26.4 ± 14.4 HU, p ≤ 0.05/27.4 ± 16.1 HU, n.s.; (2) 103.4 ± 61.3 HU vs. 72.6 ± 62.6 HU/60.9 ± 40.1 HU, both p ≤ 0.001). Subjectively, both O-MAR and high-keV images yielded an artifact reduction in up to 24/28 patients. Both O-MAR and high-keV MonoE reconstructions as provided by SDCT lead to objective and subjective artifact reduction; the combination of O-MAR and MonoE therefore seems promising for further reduction. (orig.)

  14. Markov chain solution of photon multiple scattering through turbid slabs.

    Science.gov (United States)

    Lin, Ying; Northrop, William F; Li, Xuesong

    2016-11-14

    This work introduces a Markov chain solution to model photon multiple scattering through turbid slabs via an anisotropic scattering process, i.e., Mie scattering. Results show that the proposed Markov chain model agrees with commonly used Monte Carlo simulations for various media, such as media with non-uniform phase functions and absorbing media. The proposed Markov chain solution successfully converts the complex multiple scattering problem with practical phase functions into a matrix form and solves transmitted/reflected photon angular distributions by matrix multiplications. Such characteristics would potentially allow practical inversions by matrix manipulation or stochastic algorithms where widely applied stochastic methods such as Monte Carlo simulations usually fail, and thus enable practical diagnostic reconstructions in areas such as medical diagnosis, spray analysis, and atmospheric science.
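
    A schematic version of the matrix formulation can be written in a few lines: discretize the photon direction into angle bins, let one scattering event be a stochastic transition matrix built from a phase function, and weight the matrix powers by a number-of-collisions model. The sketch below does this with a Henyey-Greenstein phase function and a Poisson collision count; it illustrates the matrix mechanics only and is not the paper's slab solution (azimuth is ignored and the geometry is not tracked).

    ```python
    # Schematic Markov-chain multiple scattering: matrix powers over angle bins.
    import numpy as np
    from math import exp, factorial

    g, tau, nbins = 0.8, 2.0, 180        # anisotropy, slab optical depth, angle bins
    theta = (np.arange(nbins) + 0.5) * np.pi / nbins

    def hg(cos_t):                       # Henyey-Greenstein phase function (unnormalized)
        return (1 - g**2) / (1 + g**2 - 2 * g * cos_t) ** 1.5

    # One scattering event as a column-stochastic transition between angle bins,
    # using the difference of bin centres as the scattering angle.
    P = hg(np.cos(theta[:, None] - theta[None, :])) * np.sin(theta)[:, None]
    P /= P.sum(axis=0, keepdims=True)

    p0 = np.zeros(nbins)
    p0[0] = 1.0                          # photons injected along theta ~ 0

    dist = np.zeros(nbins)
    for k in range(30):                  # sum over scattering orders
        w_k = exp(-tau) * tau**k / factorial(k)          # Poisson collision count
        dist += w_k * (np.linalg.matrix_power(P, k) @ p0)

    forward = dist[theta < np.pi / 2].sum() / dist.sum()
    print("fraction of photons exiting in the forward hemisphere:", round(float(forward), 3))
    ```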

  15. Development of a golden beam data set for the commissioning of a proton double-scattering system in a pencil-beam dose calculation algorithm

    International Nuclear Information System (INIS)

    Slopsema, R. L.; Flampouri, S.; Yeung, D.; Li, Z.; Lin, L.; McDonough, J. E.; Palta, J.

    2014-01-01

    Purpose: The purpose of this investigation is to determine if a single set of beam data, described by a minimal set of equations and fitting variables, can be used to commission different installations of a proton double-scattering system in a commercial pencil-beam dose calculation algorithm. Methods: The beam model parameters required to commission the pencil-beam dose calculation algorithm (virtual and effective SAD, effective source size, and pristine-peak energy spread) are determined for a commercial double-scattering system. These parameters are measured in a first room and parameterized as a function of proton energy and nozzle settings by fitting four analytical equations to the measured data. The combination of these equations and fitting values constitutes the golden beam data (GBD). To determine the variation in dose delivery between installations, the same dosimetric properties are measured in two additional rooms at the same facility, as well as in a single room at another facility. The difference between the room-specific measurements and the GBD is evaluated against tolerances that guarantee the 3D dose distribution in each of the rooms matches the GBD-based dose distribution within clinically reasonable limits. The pencil-beam treatment-planning algorithm is commissioned with the GBD. The three-dimensional dose distribution in water is evaluated in the four treatment rooms and compared to the treatment-planning calculated dose distribution. Results: The virtual and effective SAD measurements fall between 226 and 257 cm. The effective source size varies between 2.4 and 6.2 cm for the large-field options, and 1.0 and 2.0 cm for the small-field options. The pristine-peak energy spread decreases from 1.05% at the lowest range to 0.6% at the highest. The virtual SAD as well as the effective source size can be accurately described by a linear relationship as a function of the inverse of the residual energy. An additional linear correction term as a function of

  16. Development of a golden beam data set for the commissioning of a proton double-scattering system in a pencil-beam dose calculation algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Slopsema, R. L., E-mail: rslopsema@floridaproton.org; Flampouri, S.; Yeung, D.; Li, Z. [University of Florida Proton Therapy Institute, 2015 North Jefferson Street, Jacksonville, Florida 32205 (United States); Lin, L.; McDonough, J. E. [Department of Radiation Oncology, University of Pennsylvania, 3400 Civic Boulevard, 2326W TRC, PCAM, Philadelphia, Pennsylvania 19104 (United States); Palta, J. [VCU Massey Cancer Center, Virginia Commonwealth University, 401 College Street, Richmond, Virginia 23298 (United States)

    2014-09-15

    Purpose: The purpose of this investigation is to determine if a single set of beam data, described by a minimal set of equations and fitting variables, can be used to commission different installations of a proton double-scattering system in a commercial pencil-beam dose calculation algorithm. Methods: The beam model parameters required to commission the pencil-beam dose calculation algorithm (virtual and effective SAD, effective source size, and pristine-peak energy spread) are determined for a commercial double-scattering system. These parameters are measured in a first room and parameterized as a function of proton energy and nozzle settings by fitting four analytical equations to the measured data. The combination of these equations and fitting values constitutes the golden beam data (GBD). To determine the variation in dose delivery between installations, the same dosimetric properties are measured in two additional rooms at the same facility, as well as in a single room at another facility. The difference between the room-specific measurements and the GBD is evaluated against tolerances that guarantee the 3D dose distribution in each of the rooms matches the GBD-based dose distribution within clinically reasonable limits. The pencil-beam treatment-planning algorithm is commissioned with the GBD. The three-dimensional dose distribution in water is evaluated in the four treatment rooms and compared to the treatment-planning calculated dose distribution. Results: The virtual and effective SAD measurements fall between 226 and 257 cm. The effective source size varies between 2.4 and 6.2 cm for the large-field options, and 1.0 and 2.0 cm for the small-field options. The pristine-peak energy spread decreases from 1.05% at the lowest range to 0.6% at the highest. The virtual SAD as well as the effective source size can be accurately described by a linear relationship as a function of the inverse of the residual energy. An additional linear correction term as a function of

  17. Vector Boson Scattering at High Mass

    CERN Document Server

    The ATLAS collaboration

    2009-01-01

    In the absence of a light Higgs boson, the mechanism of electroweak symmetry breaking will be best studied in processes of vector boson scattering at high mass. Various models predict resonances in this channel. Here, we investigate $WW$ scalar and vector resonances, $WZ$ vector resonances and a $ZZ$ scalar resonance over a range of diboson centre-of-mass energies. Particular attention is paid to the application of forward jet tagging and to the reconstruction of dijet pairs with low opening angle resulting from the decay of highly boosted vector bosons. The performances of different jet algorithms are compared. We find that resonances in vector boson scattering can be discovered with a few tens of inverse femtobarns of integrated luminosity.

  18. Reduction of the scatter dose to the testicle outside the radiation treatment fields

    International Nuclear Information System (INIS)

    Kubo, H.; Shipley, W.U.

    1982-01-01

    A technique is described to reduce the dose to the contralateral testicle of patients with testis tumors during retroperitoneal therapy with 10 MV X-rays. When a conventional clam-shell shielding device was used, the dose to the testis from the photons scattered by the patient and the collimator jaws was found to be about 1.6% of the prescribed midplane dose. A more substantial gonadal shield, made of low-melting-point Ostalloy, which further reduced the dose from internally scattered X-rays, was therefore designed. A 10 cm thick lead scrotal block above the scrotum immediately outside the field is shown to reduce the externally scattered radiation to negligible levels. Using the shield and the block, it is possible to reduce the dose to the testicle to one-tenth of one percent of the prescribed midplane dose.

  19. Reduction of the scatter dose to the testicle outside the radiation treatment fields

    International Nuclear Information System (INIS)

    Kubo, H.; Shipley, W.U.

    1982-01-01

    A technique is described to reduce the dose to the contralateral testicle of patients with testis tumors during retroperitoneal therapy with 10 MV X-rays. When a conventional clam-shell shielding device was used, the dose to the testis from the photons scattered by the patient and the collimator jaws was found to be about 1.6% of the prescribed midplane dose. A more substantial gonadal shield, made of low-melting-point Ostalloy, which further reduced the dose from internally scattered X-rays, was therefore designed. A 10 cm thick lead scrotal block above the scrotum immediately outside the field is shown to reduce the externally scattered radiation to negligible levels. Using the shield and the block, it is possible to reduce the dose to the testicle to one-tenth of one percent of the prescribed midplane dose.

  20. THE OPTIMIZATION OF ELECTRODYNAMIC CONFIGURATION OBJECT WITH THE DESIRED CHARACTERISTICS OF SCATTERING.

    Directory of Open Access Journals (Sweden)

    A. P. Preobrazhensky

    2017-02-01

    Full Text Available This paper considers the problem of optimizing the scattering characteristics of electromagnetic waves on a periodic electrodynamic structure. The solution of the scattering problem is based on the method of integral equations; the optimization of the characteristics is based on a genetic algorithm. Recommendations on the parameters of the periodic structure for given incidence angles are given.

  1. A modified CoSaMP algorithm for electromagnetic imaging of two dimensional domains

    KAUST Repository

    Sandhu, Ali Imran

    2017-05-13

    The compressive sampling matching pursuit (CoSaMP) algorithm is used for solving the electromagnetic inverse scattering problem on two-dimensional sparse domains. Since the scattering matrix, which is computed by sampling the Green function, does not satisfy the restricted isometry property, a damping parameter is added to the diagonal entries of the matrix to make the CoSaMP work. The damping factor can be selected based on the level of noise in the measurements. Numerical experiments, which demonstrate the accuracy and applicability of the proposed algorithm, are presented.
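
    A compact sketch of the damped scheme is shown below: a standard CoSaMP loop (correlation proxy, support merge, least-squares, pruning) is run on a square matrix whose diagonal has been damped by lambda*I. The Gaussian matrix with an added rank-one correlation and the value of lambda are illustrative stand-ins, not the Green-function scattering matrix or tuning of the paper.

    ```python
    # Standard CoSaMP applied to a diagonally damped sensing matrix (sketch).
    import numpy as np

    def cosamp(A, y, s, iters=25):
        """CoSaMP: correlation proxy, support merge, least-squares, prune to s."""
        n = A.shape[1]
        x, r = np.zeros(n), y.copy()
        for _ in range(iters):
            proxy = A.T @ r
            omega = np.argsort(np.abs(proxy))[-2 * s:]       # 2s strongest correlations
            support = np.union1d(omega, np.flatnonzero(x)).astype(int)
            b = np.zeros(n)
            b[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            keep = np.argsort(np.abs(b))[-s:]                # prune to the s largest
            x = np.zeros(n)
            x[keep] = b[keep]
            r = y - A @ x                                    # update the residual
        return x

    rng = np.random.default_rng(5)
    n, s, lam = 128, 5, 0.5
    A = rng.normal(size=(n, n)) / np.sqrt(n)
    A += np.outer(rng.normal(size=n), np.ones(n)) / np.sqrt(n)   # raise column coherence
    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = rng.normal(size=s)
    y = A @ x_true + 0.01 * rng.normal(size=n)

    x_hat = cosamp(A + lam * np.eye(n), y, s)    # solve with the damped matrix
    print("recovered support:", sorted(np.flatnonzero(x_hat)))
    print("true support:     ", sorted(np.flatnonzero(x_true)))
    ```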

  2. A general algorithm for calculating jet cross sections in NLO QCD

    CERN Document Server

    Catani, Stefano; Seymour, Michael H.

    1997-01-01

    We present a new general algorithm for calculating arbitrary jet cross sections in arbitrary scattering processes to next-to-leading accuracy in perturbative QCD. The algorithm is based on the subtraction method. The key ingredients are new factorization formulae, called dipole formulae, which implement in a Lorentz covariant way both the usual soft and collinear approximations, smoothly interpolating the two. The corresponding dipole phase space obeys exact factorization, so that the dipole contributions to the cross section can be exactly integrated analytically over the whole of phase space. We obtain explicit analytic results for any jet observable in any scattering or fragmentation process in lepton, lepton-hadron or hadron-hadron collisions. All the analytical formulae necessary to construct a numerical program for next-to-leading order QCD calculations are provided. The algorithm is straightforwardly implementable in general purpose Monte Carlo programs.

  3. Multiple scattering corrections to the Beer-Lambert law. 1: Open detector.

    Science.gov (United States)

    Tam, W G; Zardecki, A

    1982-07-01

    Multiple scattering corrections to the Beer-Lambert law are analyzed by means of a rigorous small-angle solution to the radiative transfer equation. Transmission functions for predicting the received radiant power, a directly measured quantity in contrast to the spectral radiance in the Beer-Lambert law, are derived. Numerical algorithms and results relating to the multiple scattering effects for laser propagation in fog, cloud, and rain are presented.
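
    A small numerical illustration of why an open detector needs such a correction: part of the singly scattered light stays inside the receiver's field of view, so the apparent optical depth is smaller than the true one. The sketch below uses the common small-angle approximation tau_eff = tau*(1 - w0*f), with f the in-field-of-view fraction of a Henyey-Greenstein phase function, as a stand-in for the paper's rigorous transfer solution; the albedo and asymmetry values are illustrative fog-like numbers.

    ```python
    # Apparent vs. true Beer-Lambert transmission for an open detector (sketch).
    import numpy as np

    g, w0 = 0.9, 0.95                     # fog-like asymmetry and single-scatter albedo

    def in_fov_fraction(half_angle_rad, n=20000):
        """Fraction of Henyey-Greenstein single-scattered power inside a cone."""
        mu = np.linspace(np.cos(half_angle_rad), 1.0, n)
        phase = 0.5 * (1 - g**2) / (1 + g**2 - 2 * g * mu) ** 1.5
        return float(np.sum(phase) * (mu[1] - mu[0]))   # simple Riemann sum

    for tau in (0.5, 1.0, 2.0):
        for fov_deg in (0.5, 2.0, 5.0):
            f = in_fov_fraction(np.radians(fov_deg))
            t_true = np.exp(-tau)                       # Beer-Lambert, direct beam only
            t_apparent = np.exp(-tau * (1 - w0 * f))    # direct beam + in-FOV scatter
            print(f"tau={tau:3.1f}  fov={fov_deg:3.1f} deg  "
                  f"apparent/true transmission = {t_apparent / t_true:.3f}")
    ```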

  4. Thomson scattering in a low-pressure argon mercury positive column

    NARCIS (Netherlands)

    Bakker, L.P.; Kroesen, G.M.W.

    2000-01-01

    The electron density and the electron temperature in a low-pressure argon mercury positive column are determined using Thomson scattering. Special attention has been given to the stray light reduction in the Thomson scattering setup. The results are obtained in a discharge tube with a 26 mm diam, 5

  5. Thomson scattering in a low-pressure neon mercury positive column

    NARCIS (Netherlands)

    Bakker, L.P.; Kroesen, G.M.W.

    2001-01-01

    The electron density and the electron temperature in a low-pressure neon mercury positive column are determined using Thomson scattering. Special attention has been given to the stray light reduction in the Thomson scattering setup. The results are obtained in a discharge tube with a 26 mm diam, 10

  6. Numerical solution of the multichannel scattering problem

    International Nuclear Information System (INIS)

    Korobov, V.I.

    1992-01-01

    A numerical algorithm for solving the multichannel elastic and inelastic scattering problem is proposed. The starting point is the system of radial Schroedinger equations with linear boundary conditions imposed at some point R = R_m placed in the asymptotic region. It is discussed how the obtained linear equation can be split into a zero-order operator and its perturbative part. It is shown that the Lentini-Pereyra variable-order finite-difference method appears to be very suitable for solving this kind of problem. The derived procedure is applied to dμ+t→tμ+d inelastic scattering in the framework of the adiabatic multichannel approach. 19 refs.; 1 fig.; 1 tab.

  7. Assessing image quality and dose reduction of a new x-ray computed tomography iterative reconstruction algorithm using model observers

    International Nuclear Information System (INIS)

    Tseng, Hsin-Wu; Kupinski, Matthew A.; Fan, Jiahua; Sainath, Paavana; Hsieh, Jiang

    2014-01-01

    Purpose: A number of different techniques have been developed to reduce radiation dose in x-ray computed tomography (CT) imaging. In this paper, the authors compare task-based measures of image quality of CT images reconstructed by two algorithms: conventional filtered back projection (FBP) and a new iterative reconstruction algorithm (IR). Methods: To assess image quality, the authors used the performance of a channelized Hotelling observer acting on reconstructed image slices. The selected channels are dense difference-of-Gaussian channels (DDOG). A body phantom and a head phantom were imaged 50 times at different dose levels to obtain the data needed to assess image quality. The phantoms consisted of uniform backgrounds with low-contrast signals embedded at various locations. The tasks the observer model performed included (1) detection of a signal of known location and shape, and (2) detection and localization of a signal of known shape. The employed DDOG channels are based on the response of the human visual system. Performance was assessed using the areas under ROC curves and the areas under localization ROC curves. Results: For signal-known-exactly (SKE) and location-unknown/signal-shape-known tasks with circular signals of different sizes and contrasts, the authors' task-based measures showed that FBP-equivalent image quality can be achieved at lower dose levels using the IR algorithm. For the SKE case, the range of dose reduction is 50%–67% (head phantom) and 68%–82% (body phantom). For the location-unknown/signal-shape-known study, the dose reduction range is 67%–75% for the head phantom and 67%–77% for the body phantom. These results suggest that IR images at lower dose settings can reach the same image quality as full-dose conventional FBP images. Conclusions: The work presented provides an objective way to quantitatively assess the image quality of a newly introduced CT IR algorithm. The performance of the
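
    The channelized Hotelling observer used in the record can be sketched compactly: difference-of-Gaussian channels reduce each image to a few outputs, a Hotelling template is built from training samples, and detection performance is summarized by the AUC. The channel widths, white-noise backgrounds, and disc signal below are illustrative, not the authors' DDOG bank or CT data.

    ```python
    # Channelized Hotelling observer (CHO) sketch for an SKE detection task.
    import numpy as np

    rng = np.random.default_rng(6)
    npix, n_train, n_test = 64, 200, 200

    yy, xx = np.mgrid[:npix, :npix] - npix // 2
    r = np.hypot(xx, yy)
    signal = 0.4 * (r < 4)                          # low-contrast disc, known location

    sigmas = [2 * 1.67**k for k in range(5)]        # DOG channel widths (octave-like)
    chans = np.array([np.exp(-r**2 / (2 * (1.67 * s)**2)) - np.exp(-r**2 / (2 * s**2))
                      for s in sigmas]).reshape(5, -1).T     # shape (npix^2, 5)

    def sample(with_signal, n):
        imgs = rng.normal(size=(n, npix * npix))    # white-noise background (toy)
        if with_signal:
            imgs += signal.ravel()
        return imgs @ chans                         # channel outputs, shape (n, 5)

    v0, v1 = sample(False, n_train), sample(True, n_train)
    S = 0.5 * (np.cov(v0, rowvar=False) + np.cov(v1, rowvar=False))
    w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))          # Hotelling template

    t0, t1 = sample(False, n_test) @ w, sample(True, n_test) @ w
    auc = (t1[:, None] > t0[None, :]).mean()        # Mann-Whitney estimate of the AUC
    print("CHO AUC:", round(float(auc), 3))
    ```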

  8. Independent dosimetric calculation with inclusion of head scatter and MLC transmission for IMRT

    International Nuclear Information System (INIS)

    Yang, Y.; Xing, L.; Li, J.G.; Palta, J.; Chen, Y.; Luxton, Gary; Boyer, A.

    2003-01-01

    Independent verification of the MU settings and dose calculation of IMRT treatment plans is an important step in the IMRT quality assurance (QA) procedure. At present, the verification is mainly based on experimental measurements, which are time consuming and labor intensive. Although a few simplified algorithms have recently been proposed for the independent dose (or MU) calculation, head scatter has not been precisely taken into account in all these investigations and the dose validation has mainly been limited to the central axis. In this work we developed an effective computer algorithm for IMRT MU and dose validation. The technique is superior to the currently available computer-based MU check systems in that (1) it takes full consideration of the head scatter and leaf transmission effects; and (2) it allows a precise dose calculation at an arbitrary spatial point instead of merely a point on the central axis. In the algorithm the dose at an arbitrary spatial point is expressed as a summation of the contributions of primary and scatter radiation from all beamlets. Each beamlet is modulated by a dynamic modulation factor (DMF), which is determined by the MLC leaf trajectories, the head scatter, the jaw positions, and the MLC leaf transmission. A three-source model was used to calculate the head scatter distribution for irregular segments shaped by MLC and the scatter dose contributions were computed using a modified Clarkson method. The system reads in MLC leaf sequence files (or RTP files) generated by the Corvus (NOMOS Corporation, Sewickley, PA) inverse planning system and then computes the doses at the desired points. The algorithm was applied to study the dose distributions of several testing intensity modulated fields and two multifield Corvus plans and the results were compared with Corvus plans and experimental measurements. The final dose calculations at most spatial points agreed with the experimental measurements to within 3% for both the specially

  9. Fast analytical scatter estimation using graphics processing units.

    Science.gov (United States)

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    Purpose: To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. Methods: The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. Results: The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Conclusions: Accounting for first-order scatter in cone-beam image reconstruction improves the contrast-to-noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and, with further acceleration and a method to account for multiple scatter, may be useful for practical scatter correction schemes.

  10. New Search Space Reduction Algorithm for Vertical Reference Trajectory Optimization

    Directory of Open Access Journals (Sweden)

    Alejandro MURRIETA-MENDOZA

    2016-06-01

    Full Text Available Burning the fuel required to sustain a given flight releases pollutants such as carbon dioxide and nitrogen oxides, and the amount of fuel consumed is also a significant expense for airlines. It is desirable to reduce fuel consumption to reduce both pollution and flight costs. To increase fuel savings in a given flight, one option is to compute the most economical vertical reference trajectory (or flight plan). A deterministic algorithm was developed using a numerical aircraft performance model to determine the most economical vertical flight profile considering take-off weight, flight distance, step climb and weather conditions. This algorithm is based on linear interpolations of the performance model using the Lagrange interpolation method. The algorithm downloads the latest available forecast from Environment Canada according to the departure date and flight coordinates, and calculates the optimal trajectory taking into account the effects of wind and temperature. Techniques to avoid unnecessary calculations are implemented to reduce the computation time. The costs of the reference trajectories proposed by the algorithm are compared with the costs of the reference trajectories proposed by a commercial flight management system, using the fuel consumption estimated by the FlightSim® simulator made by Presagis®.
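
    The interpolation step mentioned above can be illustrated directly: given a performance table, the Lagrange polynomial through the grid points evaluates the quantity at an arbitrary operating point. The four-point fuel-flow table below is invented for illustration; the real performance database has more dimensions (altitude, speed, temperature).

    ```python
    # Lagrange interpolation of a (hypothetical) aircraft performance table.
    import numpy as np

    def lagrange_interp(xs, ys, x):
        """Evaluate the Lagrange polynomial through (xs, ys) at x."""
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total

    weights = np.array([50e3, 55e3, 60e3, 65e3])             # gross weight [kg]
    fuel_flow = np.array([2100.0, 2230.0, 2375.0, 2540.0])   # cruise fuel flow [kg/h]

    w = 57.5e3
    print("interpolated fuel flow at 57.5 t: %.1f kg/h"
          % lagrange_interp(weights, fuel_flow, w))
    ```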

  11. A Dynamic Enhancement With Background Reduction Algorithm: Overview and Application to Satellite-Based Dust Storm Detection

    Science.gov (United States)

    Miller, Steven D.; Bankert, Richard L.; Solbrig, Jeremy E.; Forsythe, John M.; Noh, Yoo-Jeong; Grasso, Lewis D.

    2017-12-01

    This paper describes a Dynamic Enhancement Background Reduction Algorithm (DEBRA) applicable to multispectral satellite imaging radiometers. DEBRA uses ancillary information about the clear-sky background to reduce false detections of atmospheric parameters in complex scenes. Applied here to the detection of lofted dust, DEBRA enlists a surface emissivity database coupled with a climatological database of surface temperature to approximate the clear-sky equivalent signal for selected infrared-based multispectral dust detection tests. This background allows for suppression of false alarms caused by land surface features while retaining some ability to detect dust above those problematic surfaces. The algorithm is applicable to both day and nighttime observations and enables weighted combinations of dust detection tests. The results are provided quantitatively, as a detection confidence factor [0, 1], but are also readily visualized as enhanced imagery. Utilizing the DEBRA confidence factor as a scaling factor in false color red/green/blue imagery enables depiction of the targeted parameter in the context of the local meteorology and topography. In this way, the method holds utility to both automated clients and human analysts alike. Examples of DEBRA performance from notable dust storms and comparisons against other detection methods and independent observations are presented.

  12. M4GB : Efficient Groebner Basis algorithm

    NARCIS (Netherlands)

    R.H. Makarim (Rusydi); M.M.J. Stevens (Marc)

    2017-01-01

    We introduce a new efficient algorithm for computing Groebner bases, named M4GB. Like Faugère's algorithm F4, it is an extension of Buchberger's algorithm that describes how to store already computed (tail-)reduced multiples of basis polynomials to prevent redundant work in the reduction

  13. A Coulomb collision algorithm for weighted particle simulations

    Science.gov (United States)

    Miller, Ronald H.; Combi, Michael R.

    1994-01-01

    A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing equal fractions of the total particle density). If applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect temperature, as compared to theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and produces the appropriate relaxation time scales as compared to theoretical predictions. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.

  14. Estimate of repulsive interatomic pair potentials by low-energy alkali-metal-ion scattering and computer simulation

    International Nuclear Information System (INIS)

    Ghrayeb, R.; Purushotham, M.; Hou, M.; Bauer, E.

    1987-01-01

    Low-energy ion scattering is used in combination with computer simulation to study the interaction potential between 600-eV potassium ions and atoms in metallic surfaces. A special algorithm is described which is used with the computer simulation code MARLOWE. This algorithm builds up impact areas on the simulated solid surface from which scattering cross sections can be estimated with an accuracy better than 1%. This can be done by calculating no more than a couple of thousand trajectories. The screening length in the Moliere approximation to the Thomas-Fermi potential is fitted in such a way that the ratio between the calculated cross sections for double and single scattering matches the scattering intensity ratio measured experimentally and associated with the same mechanisms. The consistency of the method is checked by repeating the procedure for different incidence conditions and also by predicting the intensities associated with other surface scattering mechanisms. The screening length estimates are found to be insensitive to thermal vibrations. The calculated ratios between scattering cross sections for different processes are suggested to be sensitive enough to the relative atomic positions to be useful in surface-structure characterization.

  15. Anatomic and energy variation of scatter compensation for digital chest radiography with Fourier deconvolution

    International Nuclear Information System (INIS)

    Floyd, C.E.; Beatty, P.T.; Ravin, C.E.

    1988-01-01

    The Fourier deconvolution algorithm for scatter compensation in digital chest radiography has been evaluated in four anatomically different regions at three energies. A shift-invariant scatter distribution shape, optimized for the lung region at 140 kVp, was applied at 90 kVp and 120 kVp in the lung, retrocardiac, subdiaphragmatic, and thoracic spine regions. Scatter estimates from the deconvolution were compared with measured values. While some regional variation is apparent, the use of a shift-invariant scatter distribution shape (optimized for a given energy) produces reasonable scatter compensation in the chest. A different set of deconvolution parameters was required at each of the different energies.

  16. Electrical Impedance Tomography: 3D Reconstructions using Scattering Transforms

    DEFF Research Database (Denmark)

    Delbary, Fabrice; Hansen, Per Christian; Knudsen, Kim

    2012-01-01

    In three dimensions the Calderon problem was addressed and solved in theory in the 1980s. The main ingredients in the solution of the problem are complex geometrical optics solutions to the conductivity equation and a (non-physical) scattering transform. The resulting reconstruction algorithm...

  17. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    Science.gov (United States)

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
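
    For orientation, the sketch below shows one standard EM iteration for a two-component Gaussian mixture (unit variances, synthetic data); the extension described above would decompose such an update into a sequence of smaller, self-contained EM loops over orthogonal parameter blocks. This is a generic illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

# Initial guesses for the mixing weights and means (variances fixed at 1).
pi, mu = np.array([0.5, 0.5]), np.array([-1.0, 1.0])
for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    dens = np.exp(-0.5 * (x[:, None] - mu) ** 2) * pi
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights and means from the responsibilities.
    pi = r.mean(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)

print(pi, mu)  # should approach [0.3, 0.7] and [-2, 3]
```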

  18. Kinematic aspects of pion-nucleus elastic scattering

    International Nuclear Information System (INIS)

    Weiss, D.L.; Ernst, D.J.

    1982-01-01

    The inclusion of relativistic kinematics in the theory of elastic scattering of pions from nuclei is examined. The investigation is performed in the context of the first-order impulse approximation, which incorporates the following features: (1) Relative momenta are defined according to relativistic theories consistent with time reversal invariance. (2) The two-nucleon interaction is a new, multichannel, separable potential model consistent with the most recent data, derived from a recent nonpotential model of Ernst and Johnson. (3) The recoil of the pion-nucleon interacting pair and its resultant nonlocality are included. (4) The Fermi integral is treated by an optimal factorization approximation. It is shown how a careful definition of an intrinsic target density leads to an unambiguous method for including the recoil of the target. The target recoil corrections are found to be large for elastic scattering from ⁴He and not negligible for scattering from ¹²C. Relativistic potential theory kinematics, kinematics which result from covariant reduction approaches, and kinematics which result from replacing masses by energies in nonrelativistic formulas are compared. The relativistic potential theory kinematics and covariant reduction kinematics are shown to produce different elastic scattering at all pion energies examined (T_π < 300 MeV). Simple extensions of nonrelativistic kinematics are found to be reasonable approximations to relativistic potential theory.

  19. Algorithm development for Maxwell's equations for computational electromagnetism

    Science.gov (United States)

    Goorjian, Peter M.

    1990-01-01

    A new algorithm has been developed for solving Maxwell's equations for the electromagnetic field. It solves the equations in the time domain with central, finite differences. The time advancement is performed implicitly, using an alternating direction implicit procedure. The space discretization is performed with finite volumes, using curvilinear coordinates with electromagnetic components along those directions. Sample calculations are presented of scattering from a metal pin, a square and a circle to demonstrate the capabilities of the new algorithm.

  20. Application of the exact solution for scattering by an infinite cylinder to the estimation of scattering by a finite cylinder.

    Science.gov (United States)

    Wang, R T; van de Hulst, H C

    1995-05-20

    A new algorithm for cylindrical Bessel functions that is similar to the one for spherical Bessel functions allows us to compute scattering functions for infinitely long cylinders covering sizes ka = 2πa/λ up to 8000 through the use of only an eight-digit single-precision machine computation. The scattering function and complex extinction coefficient of a finite cylinder that is seen near perpendicular incidence are derived from those of an infinitely long cylinder by the use of Huygens's principle. The result, which contains no arbitrary normalization factor, agrees quite well with analog microwave measurements of both extinction and scattering for such cylinders, even for an aspect ratio p = l/(2a) as low as 2. Rainbows produced by cylinders are similar to those for spherical drops but are brighter and have a lower contrast.

  1. Study of multiple scattering effects in heavy ion RBS

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Z.; O'Connor, D.J. [Newcastle Univ., NSW (Australia). Dept. of Physics]

    1996-12-31

    The multiple scattering effect is normally neglected in conventional Rutherford backscattering (RBS) analysis. The backscattered particle yield normally agrees well with theory based on the single scattering model. However, when heavy incident ions are used, as in heavy ion Rutherford backscattering (HIRBS), or when the incident ion energy is reduced, the multiple scattering effect starts to play a role in the analysis. In this paper, experimental data for 6 MeV C ions backscattered from a Au target are presented. In the measured time-of-flight spectrum, a small step in front of the Au high energy edge is observed. The high energy edge of the step is about 3.4 ns ahead of the Au signal, which corresponds to an energy approximately 300 keV higher than the 135 degree single scattering energy. This value coincides with the double scattering energy of a C ion undergoing two consecutive 67.5 degree scatterings. Efforts to investigate the origin of the observed high energy step led to a Monte Carlo simulation aimed at reproducing the experimental spectrum on computer. As a large angle scattering event is rare, two consecutive large angle scatterings are extremely hard to reproduce in a random simulation process. Thus, the simulation has not found a particle scattering into 130-140 deg with an energy higher than the single scattering energy. Clearly, faster algorithms and a better physical model are necessary for a successful simulation. 16 refs., 3 figs.

  2. Study of multiple scattering effects in heavy ion RBS

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Z; O'Connor, D J [Newcastle Univ., NSW (Australia). Dept. of Physics]

    1997-12-31

    The multiple scattering effect is normally neglected in conventional Rutherford backscattering (RBS) analysis. The backscattered particle yield normally agrees well with theory based on the single scattering model. However, when heavy incident ions are used, as in heavy ion Rutherford backscattering (HIRBS), or when the incident ion energy is reduced, the multiple scattering effect starts to play a role in the analysis. In this paper, experimental data for 6 MeV C ions backscattered from a Au target are presented. In the measured time-of-flight spectrum, a small step in front of the Au high energy edge is observed. The high energy edge of the step is about 3.4 ns ahead of the Au signal, which corresponds to an energy approximately 300 keV higher than the 135 degree single scattering energy. This value coincides with the double scattering energy of a C ion undergoing two consecutive 67.5 degree scatterings. Efforts to investigate the origin of the observed high energy step led to a Monte Carlo simulation aimed at reproducing the experimental spectrum on computer. As a large angle scattering event is rare, two consecutive large angle scatterings are extremely hard to reproduce in a random simulation process. Thus, the simulation has not found a particle scattering into 130-140 deg with an energy higher than the single scattering energy. Clearly, faster algorithms and a better physical model are necessary for a successful simulation. 16 refs., 3 figs.

  3. An analysis of radiation dose reduction in paediatric interventional cardiology by altering frame rate and use of the anti-scatter grid

    International Nuclear Information System (INIS)

    McFadden, S L; Hughes, C M; Winder, Robert J; Mooney, R B

    2013-01-01

    The purpose of this work is to investigate removal of the anti-scatter grid and alteration of the frame rate in paediatric interventional cardiology (IC), and to assess the impact on radiation dose and image quality. Phantom-based experimental studies were performed in a dedicated cardiac catheterisation suite to investigate variations in radiation dose and image quality with various changes in imaging parameters. These studies identified that radiation dose reductions of 28%–49% can be made to the patient with minimal loss of image quality in smaller patients. At present, there is no standard technique for carrying out paediatric IC in the UK or Ireland, resulting in the potential for a wide variation in radiation dose. Dose reductions to patients can be achieved with slight alterations to the imaging equipment with minimal compromise to the image quality. These simple modifications can be easily implemented in clinical practice in IC centres. (paper)

  4. Born amplitudes and seagull term in meson-soliton scattering

    International Nuclear Information System (INIS)

    Liang, Y.G.; Li, B.A.; Liu, K.F.; Su, R.K.

    1990-01-01

    The meson-soliton scattering for the φ⁴ theory in 1+1 dimensions is calculated. We show that when the seagull term from the equal-time commutator is included in addition to the Born amplitudes, the t-matrix from the reduction formula approach is identical to that of the potential scattering with small quantum fluctuations to leading order in weak coupling. The seagull term is equal to the Born term in the potential scattering. This confirms the speculation that the leading order Yukawa coupling is derivable from the classical soliton. (orig.)

  5. PROPOSAL OF ALGORITHM FOR ROUTE OPTIMIZATION

    OpenAIRE

    Robert Ramon de Carvalho Sousa; Abimael de Jesus Barros Costa; Eliezé Bulhões de Carvalho; Adriano de Carvalho Paranaíba; Daylyne Maerla Gomes Lima Sandoval

    2016-01-01

    This article uses the “Six Sigma” methodology to elaborate an algorithm for routing problems that can obtain more efficient results than Clarke and Wright's (CW) savings algorithm (1964) in situations where product delivery demands increase randomly and the service level cannot be raised. In some situations, the proposed algorithm obtained more efficient results than the CW algorithm. The key factor was a reduction in the number of mistakes (on...
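
    Since the Clarke and Wright savings algorithm is the baseline here, a compact sketch of it may help. The coordinates, demands, and vehicle capacity below are invented, and only one merge orientation is checked (a full implementation also considers reversed routes):

```python
import math

depot = (0.0, 0.0)
customers = {1: (2, 3), 2: (5, 1), 3: (6, 4), 4: (1, 6)}
demand = {1: 4, 2: 3, 3: 5, 4: 2}
capacity = 8

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

d0 = {i: dist(depot, p) for i, p in customers.items()}
# Savings of serving i and j on one route: s(i,j) = d(0,i)+d(0,j)-d(i,j).
savings = sorted(
    ((d0[i] + d0[j] - dist(customers[i], customers[j]), i, j)
     for i in customers for j in customers if i < j),
    reverse=True)

routes = {i: [i] for i in customers}   # every customer starts alone
for s, i, j in savings:
    ri, rj = routes[i], routes[j]
    # Merge only when i ends one route, j starts another, and the
    # combined demand fits one vehicle.
    if ri is not rj and ri[-1] == i and rj[0] == j and \
            sum(demand[k] for k in ri + rj) <= capacity:
        merged = ri + rj
        for k in merged:
            routes[k] = merged

unique = []
for r in routes.values():
    if r not in unique:
        unique.append(r)
print(unique)   # e.g. [[1, 4], [2, 3]]
```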

  6. Comparison of the ESTRO formalism for monitor unit calculation with a Clarkson based algorithm of a treatment planning system and a traditional ''full-scatter'' methodology

    International Nuclear Information System (INIS)

    Pirotta, M.; Aquilina, D.; Bhikha, T.; Georg, D.

    2005-01-01

    The ESTRO formalism for monitor unit (MU) calculations was evaluated and implemented to replace a previous methodology based on dosimetric data measured in a full-scatter phantom. This traditional method relies on data normalised at the depth of dose maximum (z_m), as well as on the utilisation of the BJR 25 table for the conversion of rectangular fields into equivalent square fields. The treatment planning system (TPS) was subsequently updated to reflect the new beam data normalised at a depth z_R of 10 cm. Comparisons were then carried out between the ESTRO formalism, the Clarkson-based dose calculation algorithm on the TPS (with beam data normalised at z_m and z_R), and the traditional "full-scatter" methodology. All methodologies except the "full-scatter" methodology separated head-scatter from phantom-scatter effects, and none of the methodologies, except the ESTRO formalism, utilised wedge depth dose information for calculations. The accuracy of MU calculations was verified against measurements in a homogeneous phantom for square and rectangular open and wedged fields, as well as blocked open and wedged fields, at 5, 10, and 20 cm depths, under fixed SSD and isocentric geometries for 6 and 10 MV. Overall, the ESTRO formalism showed the most accurate performance, with the root mean square (RMS) error with respect to measurements remaining below 1% even for the most complex beam set-ups investigated. The RMS error for the TPS deteriorated with the introduction of a wedge, with a worse RMS error for the beam data normalised at z_m (4% at 6 MV and 1.6% at 10 MV) than at z_R (1.9% at 6 MV and 1.1% at 10 MV). The further addition of blocking had only a marginal impact on the accuracy of this methodology. The "full-scatter" methodology showed a loss in accuracy for calculations involving either wedges or blocking, and performed worst for blocked wedged fields (RMS errors of 7.1% at 6 MV and 5% at 10 MV). The origins of these discrepancies were

  7. Reduction of metal artifacts: beam hardening and photon starvation effects

    Science.gov (United States)

    Yadava, Girijesh K.; Pal, Debashish; Hsieh, Jiang

    2014-03-01

    The presence of metal artifacts in CT imaging can obscure relevant anatomy and interfere with disease diagnosis. Metal artifacts are caused primarily by beam hardening, scatter, partial volume and photon starvation; however, the contribution of each depends on the type of hardware. A comparison of CT images obtained with different metallic hardware in various applications, along with acquisition and reconstruction parameters, helps in understanding methods for reducing or overcoming such artifacts. In this work, a metal beam hardening correction (BHC) algorithm and a projection-completion based metal artifact reduction (MAR) algorithm were developed and applied to phantom and clinical CT scans with various metallic implants. Stainless steel and titanium were used to model and correct for the metal beam hardening effect. In the MAR algorithm, the corrupted projection samples are replaced by a combination of the original projections and in-painted data obtained by forward projecting a prior image. The data included spine fixation screws, hip implants, dental fillings, and body extremity fixations, covering the range of clinically used metal implants. Comparison of BHC and MAR on different metallic implants was used to characterize the dominant source of the artifacts and conceivable methods to overcome them. Results of the study indicate that beam hardening could be a dominant source of artifact in many spine and extremity fixations, whereas dental and hip implants could be dominant sources of photon starvation. The BHC algorithm could significantly improve image quality in CT scans with metallic screws, whereas the MAR algorithm could alleviate artifacts in hip implants and dental fillings.

  8. Multichannel transfer function with dimensionality reduction

    KAUST Repository

    Kim, Han Suk

    2010-01-17

    The design of transfer functions for volume rendering is a difficult task. This is particularly true for multi-channel data sets, where multiple data values exist for each voxel. In this paper, we propose a new method for transfer function design. Our new method provides a framework to combine multiple approaches and pushes the boundary of gradient-based transfer functions to multiple channels, while still keeping the dimensionality of transfer functions to a manageable level, i.e., a maximum of three dimensions, which can be displayed visually in a straightforward way. Our approach utilizes channel intensity, gradient, curvature and texture properties of each voxel. The high-dimensional data of the domain is reduced by applying recently developed nonlinear dimensionality reduction algorithms. In this paper, we used Isomap as well as a traditional algorithm, Principal Component Analysis (PCA). Our results show that these dimensionality reduction algorithms significantly improve the transfer function design process without compromising visualization accuracy. In this publication we report on the impact of the dimensionality reduction algorithms on transfer function design for confocal microscopy data.
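
    The PCA half of that reduction step is easy to sketch: stack the per-voxel features into a matrix, center it, and project onto the top three principal axes. The random data below stands in for real voxel features:

```python
import numpy as np

rng = np.random.default_rng(1)
features = rng.normal(size=(10000, 8))   # 10k voxels x 8 features
# (intensity, gradient, curvature, texture measures, ... - all fake here)

# Center the data, then project it onto the top-3 principal axes via SVD.
centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords3d = centered @ vt[:3].T           # 3D transfer-function domain

print(coords3d.shape)                    # (10000, 3)
```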

  9. A comparison of two photon planning algorithms for 8 MV and 25 MV X-ray beams in lung

    International Nuclear Information System (INIS)

    Kan, M.W.K.; Young, E.C.M.; Yu, P.K.N.

    1995-01-01

    The results of a comparison of two photon planning algorithms, the Clarkson Scatter Integration algorithm and the Equivalent Tissue-Air Ratio (ETAR) algorithm, are reported, using a simple lung phantom for 8 MV and 25 MV X-ray beams of field sizes 5 cm x 5 cm and 10 cm x 10 cm. Central axis depth-dose distributions were measured with a thimble chamber or a Markus parallel-plate chamber. Dose profile distributions were measured with TLD rods and films. Measured dose distributions were then compared to predicted dose distributions. Both algorithms overestimate the dose at mid-lung as they do not account for the effect of electronic disequilibrium. The Clarkson algorithm consistently shows less accurate results than the ETAR algorithm. There is additional error in the case of the Clarkson algorithm because of the assumption of a unit density medium in calculating scatter, which overestimates the effective scatter-air ratios in lung. For a 5 cm x 5 cm field, the error of dose prediction for the 25 MV X-ray beam at mid-lung is 15.8% and 12.8% for the Clarkson and ETAR algorithms, respectively. At 8 MV the errors are 9.3% and 5.1%, respectively. In addition, both algorithms underestimate the penumbral width at mid-lung as they do not account for the penumbral flaring effect in a low density medium. 25 refs., 2 tabs., 5 figs

  10. Waste reduction algorithm used as the case study of simulated bitumen production process

    Directory of Open Access Journals (Sweden)

    Savić Marina A.

    2011-01-01

    Full Text Available The waste reduction algorithm (WAR) is a tool that helps process engineers assess environmental impact. The WAR algorithm is a methodology for determining the potential environmental impact (PEI) of a chemical process. In particular, the bitumen production process was analyzed in three stages: (a) the atmospheric distillation unit, (b) the vacuum distillation unit, and (c) the bitumen production unit. The study was developed for a middle-sized oil refinery with a capacity of 5,000,000 tonnes of crude oil per year. The results highlight the most vulnerable aspects of environmental pollution that arise during the manufacturing of bitumen. The overall rates of PEI leaving the system, I_out [PEI/h], are: (a) 2.14·10⁵, (b) 7.17·10⁴ and (c) 2.36·10³, respectively. The overall rates of PEI generated within the system, I_gen [PEI/h], are: (a) 7.75·10⁴, (b) −4.31·10⁴ and (c) −4.32·10², respectively. The atmospheric distillation unit has the highest overall rate of PEI, while the bitumen production unit has the lowest. Comparison of the I_out and I_gen values for the atmospheric distillation unit shows that the overall rate of PEI generated in the system is 36.21% of the overall rate of PEI leaving the system. In the cases of the vacuum distillation and bitumen production units, the overall rate of PEI generated in the system has negative values, i.e. the overall rate of PEI leaving the system is reduced by 60.11% (in the vacuum distillation unit) and by 18.30% (in the bitumen production unit). Analysis of the results for the overall rate of PEI, expressed per weight of product, confirms these conclusions.
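
    The WAR bookkeeping itself is simple: each stream's PEI rate is the sum over its components of mass flow times a specific impact score, I_out is the total over output streams, and I_gen is roughly I_out minus I_in. A toy sketch with invented streams and scores:

```python
# Specific PEI per kg of each component (invented scores).
psi = {"SO2": 22.0, "benzene": 5.8, "bitumen": 0.1}

# Streams as {component: mass flow in kg/h}; flows are invented.
inputs = [{"benzene": 40.0, "bitumen": 900.0}]
outputs = [{"SO2": 12.0, "benzene": 25.0, "bitumen": 880.0}]

def pei_rate(streams):
    """Total PEI rate [PEI/h] carried by a list of streams."""
    return sum(flow * psi[comp] for s in streams for comp, flow in s.items())

I_in, I_out = pei_rate(inputs), pei_rate(outputs)
I_gen = I_out - I_in   # negative when the process destroys potential impact
print(I_out, I_gen)
```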

  11. Microwave imaging for conducting scatterers by hybrid particle swarm optimization with simulated annealing

    International Nuclear Information System (INIS)

    Mhamdi, B.; Grayaa, K.; Aguili, T.

    2011-01-01

    In this paper, a microwave imaging technique for reconstructing the shape of two-dimensional perfectly conducting scatterers by means of a stochastic optimization approach is investigated. Based on the boundary condition and the measured scattered field derived by transverse magnetic illuminations, a set of nonlinear integral equations is obtained and the imaging problem is reformulated into an optimization problem. A hybrid approximation algorithm, called PSO-SA, is developed in this work to solve the inverse scattering problem. In the hybrid algorithm, particle swarm optimization (PSO) combines global search and local search to find optimal solutions in reasonable time, and simulated annealing (SA) accepts worse solutions with a certain probability to avoid being trapped in a local optimum. The hybrid approach elegantly combines the exploration ability of PSO with the exploitation ability of SA. Reconstruction results are compared with the exact shapes of some conducting cylinders, and good agreement with the original shapes is observed.
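
    A generic sketch of such a PSO-SA coupling (not the authors' exact scheme): a PSO swarm explores, and an SA-style Metropolis test decides whether a non-improving candidate may still replace the swarm attractor. The Rastrigin test function below merely stands in for the scattering misfit functional:

```python
import numpy as np

rng = np.random.default_rng(2)

def misfit(x):                            # stand-in for the scattering misfit
    return np.sum(x**2 - 10*np.cos(2*np.pi*x) + 10, axis=-1)

n, dim, w, c1, c2 = 30, 4, 0.7, 1.5, 1.5
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pcost = pos.copy(), misfit(pos)
guide = pbest[pcost.argmin()].copy()      # SA-controlled swarm attractor
gcost = pcost.min()
best = gcost                              # best cost ever seen
T = 1.0                                   # SA temperature
for it in range(300):
    r1, r2 = rng.random((2, n, dim))
    vel = w*vel + c1*r1*(pbest - pos) + c2*r2*(guide - pos)
    pos = pos + vel
    cost = misfit(pos)
    improved = cost < pcost
    pbest[improved], pcost[improved] = pos[improved], cost[improved]
    i = cost.argmin()
    delta = cost[i] - gcost
    # Metropolis test: a worse candidate may still become the swarm
    # attractor with probability exp(-delta/T), helping escape local minima.
    if delta < 0 or rng.random() < np.exp(-delta / T):
        guide, gcost = pos[i].copy(), cost[i]
        best = min(best, gcost)
    T *= 0.97                             # geometric cooling schedule
print(best)
```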

  12. SU-F-J-175: Evaluation of Metal Artifact Reduction Algorithms in Computed Tomography and Their Application to Radiation Therapy Treatment Planning

    International Nuclear Information System (INIS)

    Norris, H; Rangaraj, D; Kim, S

    2016-01-01

    Purpose: High-Z (metal) implants in CT scans cause significant streak-like artifacts in the reconstructed dataset. This results in both inaccurate CT Hounsfield units for the tissue and obscuration of the target and organs at risk (OARs) for radiation therapy planning. Herein we analyze two metal artifact reduction algorithms, GE’s Smart MAR and a Metal Deletion Technique (MDT), for geometric and Hounsfield Unit (HU) accuracy. Methods: A CT-to-electron density phantom, with multiple inserts of various densities and a custom Cerrobend insert (Zeff=76.8), is utilized in this continuing study. The phantom is scanned without metal (baseline) and again with the metal insert. Using one set of projection data, reconstructed CT volumes are created with filtered-back-projection (FBP) and with the MAR and MDT algorithms. Regions-of-Interest (ROIs) are evaluated for each insert for HU accuracy; the metal insert’s Full-Width-Half-Maximum (FWHM) is used to evaluate the geometric accuracy. Streak severity is quantified with an HU error metric over the phantom volume. Results: The original FBP reconstruction has a Root-Mean-Square-Error (RMSE) of 57.55 HU (STD=29.19, range=−145.8 to +79.2) compared to baseline. The MAR reconstruction has a RMSE of 20.98 HU (STD=13.92, range=−18.3 to +61.7). The MDT reconstruction has a RMSE of 10.05 HU (STD=10.5, range=−14.8 to +18.6). FWHM for baseline=162.05; FBP=161.84 (−0.13%); MAR=162.36 (+0.19%); MDT=162.99 (+0.58%). Streak severity metric for FBP=19.73 (22.659% bad pixels); MAR=8.743 (9.538% bad); MDT=4.899 (5.303% bad). Conclusion: Image quality, in terms of HU accuracy, in the presence of high-Z metal objects in CT scans is improved by metal artifact reduction reconstruction algorithms. The MDT algorithm had the highest HU value accuracy (RMSE=10.05 HU) and best streak severity metric, but scored the worst in terms of geometric accuracy. Qualitatively, the MAR and MDT algorithms increased detectability of inserts

  13. SU-F-J-175: Evaluation of Metal Artifact Reduction Algorithms in Computed Tomography and Their Application to Radiation Therapy Treatment Planning

    Energy Technology Data Exchange (ETDEWEB)

    Norris, H; Rangaraj, D; Kim, S [Baylor Scott & White Health, Temple, TX (United States)

    2016-06-15

    Purpose: High-Z (metal) implants in CT scans cause significant streak-like artifacts in the reconstructed dataset. This results in both inaccurate CT Hounsfield units for the tissue and obscuration of the target and organs at risk (OARs) for radiation therapy planning. Herein we analyze two metal artifact reduction algorithms, GE’s Smart MAR and a Metal Deletion Technique (MDT), for geometric and Hounsfield Unit (HU) accuracy. Methods: A CT-to-electron density phantom, with multiple inserts of various densities and a custom Cerrobend insert (Zeff=76.8), is utilized in this continuing study. The phantom is scanned without metal (baseline) and again with the metal insert. Using one set of projection data, reconstructed CT volumes are created with filtered-back-projection (FBP) and with the MAR and MDT algorithms. Regions-of-Interest (ROIs) are evaluated for each insert for HU accuracy; the metal insert’s Full-Width-Half-Maximum (FWHM) is used to evaluate the geometric accuracy. Streak severity is quantified with an HU error metric over the phantom volume. Results: The original FBP reconstruction has a Root-Mean-Square-Error (RMSE) of 57.55 HU (STD=29.19, range=−145.8 to +79.2) compared to baseline. The MAR reconstruction has a RMSE of 20.98 HU (STD=13.92, range=−18.3 to +61.7). The MDT reconstruction has a RMSE of 10.05 HU (STD=10.5, range=−14.8 to +18.6). FWHM for baseline=162.05; FBP=161.84 (−0.13%); MAR=162.36 (+0.19%); MDT=162.99 (+0.58%). Streak severity metric for FBP=19.73 (22.659% bad pixels); MAR=8.743 (9.538% bad); MDT=4.899 (5.303% bad). Conclusion: Image quality, in terms of HU accuracy, in the presence of high-Z metal objects in CT scans is improved by metal artifact reduction reconstruction algorithms. The MDT algorithm had the highest HU value accuracy (RMSE=10.05 HU) and best streak severity metric, but scored the worst in terms of geometric accuracy. Qualitatively, the MAR and MDT algorithms increased detectability of inserts

  14. Relationship between the Amplitude and Phase of a Signal Scattered by a Point-Like Acoustic Inhomogeneity

    Science.gov (United States)

    Burov, V. A.; Morozov, S. A.

    2001-11-01

    Wave scattering by a point-like inhomogeneity, i.e., a strong inhomogeneity with infinitesimal dimensions, is described. This type of inhomogeneity model is used in investigating the point-spread functions of different algorithms and systems. Two approaches are used to derive the rigorous relationship between the amplitude and phase of a signal scattered by a point-like acoustic inhomogeneity. The first approach is based on a Marchenko-type equation. The second approach uses the scattering by a scatterer whose size decreases simultaneously with an increase in its contrast. It is shown that the retarded and advanced waves are scattered differently despite the relationship between the phases of the corresponding scattered waves.

  15. Estimation of scattered photons using a neural network in SPECT

    International Nuclear Information System (INIS)

    Hasegawa, Wataru; Ogawa, Koichi

    1994-01-01

    In single photon emission CT (SPECT), measured projection data involve scattered photons. This causes degradation of spatial resolution and contrast in reconstructed images. The purpose of this study is to estimate the scattered photons, and eliminate them from measured data. To estimate the scattered photons, we used an artificial neural network which consists of five input units, five hidden units, and two output units. The inputs of the network are the ratios of the counts acquired by five narrow energy windows and their sum. The outputs are the ratios of the count of scattered photons and that of primary photons to the total count. The neural network was trained with a back-propagation algorithm using count data obtained by a Monte Carlo simulation. The results of simulation showed improvement of contrast and spatial resolution in reconstructed images. (author)
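
    The network described (five inputs, five hidden units, two outputs) is small enough to sketch end-to-end. Below is a generic batch back-propagation loop in that geometry; the training data are random placeholders for the simulated window-count ratios, not the authors' Monte Carlo data:

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# 5 inputs -> 5 hidden -> 2 outputs (scatter and primary fractions).
W1, b1 = rng.normal(0, 0.5, (5, 5)), np.zeros(5)
W2, b2 = rng.normal(0, 0.5, (5, 2)), np.zeros(2)

X = rng.random((1000, 5))                      # placeholder window ratios
Y = np.stack([X.mean(1)*0.4, 1 - X.mean(1)*0.4], 1)  # placeholder targets

lr = 0.5
for _ in range(2000):                          # plain batch back-propagation
    H = sigmoid(X @ W1 + b1)
    out = sigmoid(H @ W2 + b2)
    d_out = (out - Y) * out * (1 - out)        # squared-error gradient
    d_H = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_H / len(X); b1 -= lr * d_H.mean(0)

print(float(((out - Y) ** 2).mean()))          # training error
```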

  16. Noise Reduction with Microphone Arrays for Speaker Identification

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, Z

    2011-12-22

    Reducing acoustic noise in audio recordings is an ongoing problem that plagues many applications. This noise is hard to reduce because of interfering sources and the non-stationary behavior of the overall background noise. Many single channel noise reduction algorithms exist but are limited in that the more the noise is reduced, the more the signal of interest is distorted, because the signal and noise overlap in frequency. Specifically, acoustic background noise causes problems in the area of speaker identification. Recording a speaker in the presence of acoustic noise ultimately limits the performance and confidence of speaker identification algorithms. In situations where it is impossible to control the environment where the speech sample is taken, noise reduction filtering algorithms need to be developed to clean the recorded speech of background noise. Because single channel noise reduction algorithms would distort the speech signal, the overall challenge of this project was to see if spatial information provided by microphone arrays could be exploited to aid in speaker identification. The goals are: (1) Test the feasibility of using microphone arrays to reduce background noise in speech recordings; (2) Characterize and compare different multichannel noise reduction algorithms; (3) Provide recommendations for using these multichannel algorithms; and (4) Ultimately answer the question - Can the use of microphone arrays aid in speaker identification?
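
    A natural first multichannel baseline for such a study is delay-and-sum beamforming: time-align the channels on the talker and average, so coherent speech adds while spatially diffuse noise partially cancels. The sketch below uses an invented four-microphone geometry with integer-sample delays and a sine tone standing in for speech:

```python
import numpy as np

fs = 16000
rng = np.random.default_rng(4)
t = np.arange(fs) / fs
speech = np.sin(2*np.pi*440*t)          # stand-in for the target signal

# 4-mic array: target arrives with known integer-sample steering delays.
delays = [0, 3, 6, 9]
mics = [np.roll(speech, d) + rng.normal(0, 1.0, fs) for d in delays]

# Delay-and-sum: undo each steering delay, then average the channels.
aligned = [np.roll(m, -d) for m, d in zip(mics, delays)]
output = np.mean(aligned, axis=0)

def snr(sig, ref):
    noise = sig - ref
    return 10*np.log10(np.sum(ref**2) / np.sum(noise**2))

print(snr(mics[0], speech), snr(output, speech))  # ~6 dB gain for 4 mics
```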

  17. Implementation of pencil kernel and depth penetration algorithms for treatment planning of proton beams

    International Nuclear Information System (INIS)

    Russell, K.R.; Saxner, M.; Ahnesjoe, A.; Montelius, A.; Grusell, E.; Dahlgren, C.V.

    2000-01-01

    The implementation of two algorithms for calculating dose distributions for radiation therapy treatment planning of intermediate energy proton beams is described. A pencil kernel algorithm and a depth penetration algorithm have been incorporated into a commercial three-dimensional treatment planning system (Helax-TMS, Helax AB, Sweden) to allow conformal planning techniques using irregularly shaped fields, proton range modulation, range modification and dose calculation for non-coplanar beams. The pencil kernel algorithm is developed from the Fermi-Eyges formalism and Moliere multiple-scattering theory with range straggling corrections applied. The depth penetration algorithm is based on the energy loss in the continuous slowing down approximation with simple correction factors applied to the beam penumbra region and has been implemented for fast, interactive treatment planning. Modelling of the effects of air gaps and range modifying device thickness and position are implicit to both algorithms. Measured and calculated dose values are compared for a therapeutic proton beam in both homogeneous and heterogeneous phantoms of varying complexity. Both algorithms model the beam penumbra as a function of depth in a homogeneous phantom with acceptable accuracy. Results show that the pencil kernel algorithm is required for modelling the dose perturbation effects from scattering in heterogeneous media. (author)

  18. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
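
    Of the four methods named, cyclic reduction is the most compact to sketch: each pass eliminates the odd-indexed unknowns of a tridiagonal system, halving its size, and the eliminated unknowns are recovered by back-substitution. A reference (sequential, not parallel) sketch for n = 2^k − 1 unknowns:

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system (sub-diagonal a, diagonal b,
    super-diagonal c, right-hand side d) with n = 2**k - 1 unknowns
    by recursive cyclic reduction. a[0] and c[-1] are ignored."""
    a, b, c, d = (np.asarray(v, float).copy() for v in (a, b, c, d))
    a[0] = c[-1] = 0.0
    n = len(b)
    if n == 1:
        return d / b
    i = np.arange(1, n, 2)                      # unknowns kept this pass
    alpha, gamma = a[i] / b[i - 1], c[i] / b[i + 1]
    ar = -alpha * a[i - 1]
    br = b[i] - alpha * c[i - 1] - gamma * a[i + 1]
    cr = -gamma * c[i + 1]
    dr = d[i] - alpha * d[i - 1] - gamma * d[i + 1]
    x = np.empty(n)
    x[1::2] = cyclic_reduction(ar, br, cr, dr)  # half-size system
    xp = np.concatenate(([0.0], x, [0.0]))      # padding for boundaries
    j = np.arange(0, n, 2)                      # back-substitute the rest
    x[j] = (d[j] - a[j] * xp[j] - c[j] * xp[j + 2]) / b[j]
    return x

n = 2**5 - 1                                    # 1D Poisson test problem
a = np.full(n, -1.0); b = np.full(n, 2.0); c = np.full(n, -1.0)
d = np.ones(n)
x = cyclic_reduction(a, b, c, d)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(A @ x, d))                    # True
```

    In the parallel setting, all eliminations within one pass are independent, which is what makes the method attractive on a hypercube.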

  19. A maximum-likelihood reconstruction algorithm for tomographic gamma-ray nondestructive assay

    International Nuclear Information System (INIS)

    Prettyman, T.H.; Estep, R.J.; Cole, R.A.; Sheppard, G.A.

    1994-01-01

    A new tomographic reconstruction algorithm for nondestructive assay with high resolution gamma-ray spectroscopy (HRGS) is presented. The reconstruction problem is formulated using a maximum-likelihood approach in which the statistical structure of both the gross and continuum measurements used to determine the full-energy response in HRGS is precisely modeled. An accelerated expectation-maximization algorithm is used to determine the optimal solution. The algorithm is applied to safeguards and environmental assays of large samples (for example, 55-gal. drums) in which high continuum levels caused by Compton scattering are routinely encountered. Details of the implementation of the algorithm and a comparative study of the algorithm's performance are presented

  20. Effects of terraces, surface steps and 'over-specular' reflection due to inelastic energy losses on angular scattering spectra for glancing incidence scattering

    CERN Document Server

    Danailov, D; O'Connor, D J

    2002-01-01

    Recent experiments and our molecular-dynamics simulations indicate that the main signal of the angular scattering spectra for glancing incidence scattering is not affected by the thermal motion of surface atoms and can be explained by our row-model with averaged cylindrical potentials. At the ICACS-18 Conference [Nucl. Instr. and Meth. B 164-165 (2000) 583] we reported good agreement between experimental and calculated multimodal azimuthal angular scattering spectra for the glancing scattering of 10 and 15 keV [Nucl. Instr. and Meth. B 180 (2001) 265, Appl. Surf. Sci. 171 (2001) 113] He⁰ beams along the [1 0 0] direction on the Fe(1 0 0) face. Our simulations also predicted that, in contrast to the 2D angular scattering distribution, the 1D azimuthal angular distribution of scattered particles is very sensitive to the interaction potential used. Here, we report more detailed calculations incorporating the influence of terraces and surface steps on surface channeling, which show a reduction of the angular s...

  1. Compton scatter and randoms corrections for origin ensembles 3D PET reconstructions

    Energy Technology Data Exchange (ETDEWEB)

    Sitek, Arkadiusz [Harvard Medical School, Boston, MA (United States). Dept. of Radiology; Brigham and Women's Hospital, Boston, MA (United States); Kadrmas, Dan J. [Utah Univ., Salt Lake City, UT (United States). Utah Center for Advanced Imaging Research (UCAIR)

    2011-07-01

    In this work we develop a novel approach to correction for scatter and randoms in the reconstruction of data acquired by 3D positron emission tomography (PET), applicable to tomographic reconstruction by the origin ensemble (OE) approach. Statistical image reconstruction using OE is based on calculating expectations of the numbers of emitted events per voxel over the complete-data space. Since the OE estimator is fundamentally different from regular statistical estimators such as those based on maximum likelihood, the standard implementations of scatter and randoms corrections cannot be used. Based on the prompt, scatter, and random rates, each detected event is graded in terms of its probability of being a true event. These grades are utilized by the Markov Chain Monte Carlo (MCMC) algorithm used in the OE approach for calculating the expectation, over the complete-data space, of the number of emitted events per voxel (the OE estimator). We show that the results obtained with OE are almost identical to those obtained by the maximum likelihood-expectation maximization (ML-EM) algorithm for experimental phantom data acquired with a Siemens Biograph mCT 3D PET/CT scanner. The developed correction removes artifacts due to scatter and randoms in the investigated 3D PET datasets. (orig.)

  2. A New Block Processing Algorithm of LLL for Fast High-dimension Ambiguity Resolution

    Directory of Open Access Journals (Sweden)

    LIU Wanke

    2016-02-01

    Full Text Available Because of the high dimension and precision of the ambiguity vector under multi-frequency, multi-system GNSS observations, a major factor limiting the computational efficiency of ambiguity resolution is the long reduction time of the conventional LLL algorithm. To address this problem, a new block-processing LLL algorithm is proposed, based on an analysis of the relationship between the reduction time and the dimension and precision of the ambiguity. The new algorithm shortens the reduction time, and thereby improves the computational efficiency of ambiguity resolution, by processing the ambiguity variance-covariance matrix block-wise, which decreases the dimension of each individual reduction matrix. The new algorithm is validated with two groups of measured data. The results show that, when a reasonable number of blocks is chosen, the computational efficiency of the new algorithm increases by 65.2% and 60.2%, respectively, compared with that of the LLL algorithm.
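
    A rough sketch of the block idea: run lattice reduction on each diagonal block of the ambiguity covariance separately, so every reduction sees a much smaller matrix. The LLL routine below is a tiny, deliberately slow reference version that recomputes Gram-Schmidt after every basis change; the block sizes and covariance are invented, and this is not the paper's implementation:

```python
import numpy as np

def lll(B, delta=0.75):
    """Tiny reference LLL reduction of the rows of B (slow on purpose:
    the Gram-Schmidt data is recomputed after every basis change)."""
    B = B.astype(float).copy()
    n = len(B)

    def gso(B):
        Bs, mu = B.copy(), np.zeros((n, n))
        for i in range(n):
            for j in range(i):
                mu[i, j] = B[i] @ Bs[j] / (Bs[j] @ Bs[j])
                Bs[i] = Bs[i] - mu[i, j] * Bs[j]
        return Bs, mu

    k = 1
    while k < n:
        Bs, mu = gso(B)
        for j in range(k - 1, -1, -1):          # size reduction
            q = round(float(mu[k, j]))
            if q:
                B[k] -= q * B[j]
                Bs, mu = gso(B)
        if Bs[k] @ Bs[k] >= (delta - mu[k, k - 1]**2) * (Bs[k - 1] @ Bs[k - 1]):
            k += 1                              # Lovász condition holds
        else:
            B[[k - 1, k]] = B[[k, k - 1]]       # swap and step back
            k = max(k - 1, 1)
    return B

# Block idea: reduce each diagonal block of the ambiguity covariance Q
# separately, so each reduction works on a much smaller basis.
rng = np.random.default_rng(5)
A = rng.normal(size=(8, 8))
Q = A @ A.T + 8 * np.eye(8)                     # invented 8x8 covariance
blocks = [slice(0, 4), slice(4, 8)]
reduced = [lll(np.linalg.cholesky(Q[s, s]).T) for s in blocks]
print([r.shape for r in reduced])
```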

  3. Methods for reduction of scattered x-ray in measuring MTF with the square chart

    International Nuclear Information System (INIS)

    Hatagawa, Masakatsu; Yoshida, Rie

    1982-01-01

    A square wave chart has been used to measure the MTF of a screen-film system. The problem is that scattered X-rays from the chart may give rise to measurement errors. In this paper, the authors propose two methods to reduce the scattered X-rays: the first is the use of a Pb mask, and the second is to provide an air gap between the chart and the screen-film system. With these methods, the scattered X-rays from the chart were reduced. MTFs were measured by both of the new methods and by the conventional method; the MTF values from the two new methods were in good agreement with each other, while that of the conventional method was not. It was concluded that these new methods are able to reduce errors in the measurement of MTF. (author)

  4. Chlorophyll-a specific volume scattering function of phytoplankton.

    Science.gov (United States)

    Tan, Hiroyuki; Oishi, Tomohiko; Tanaka, Akihiko; Doerffer, Roland; Tan, Yasuhiro

    2017-06-12

    Chlorophyll-a specific volume scattering functions (VSFs) of cultured phytoplankton in the visible spectral range are presented. The chlorophyll-a specific VSFs were determined by the linear least squares method from VSFs measured at different chlorophyll-a concentrations. We found clear variability in the spectral and angular shapes of the VSF between cultures. It was also shown that chlorophyll-a specific scattering significantly affects the spectral variation of the remote sensing reflectance, depending on the spectral shape of b. This result is useful for developing advanced ocean color remote sensing algorithms and for a deeper understanding of light in the sea.
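
    The underlying model is linear: at each angle, the measured VSF is a chlorophyll-independent background plus the chlorophyll-a concentration times the specific VSF, so a per-angle least-squares fit across concentrations recovers the specific VSF. A sketch with entirely synthetic data:

```python
import numpy as np

rng = np.random.default_rng(6)
angles = np.linspace(10, 170, 33)           # scattering angles (degrees)
chl = np.array([0.5, 1.0, 2.0, 4.0])        # chlorophyll-a (mg m^-3)

beta_bg = 1e-4 * (1 + np.cos(np.radians(angles))**2)  # "background" VSF
beta_star = 2e-3 * np.exp(-angles / 60.0)             # true specific VSF

# Synthetic measurements: background + Chl * specific VSF + noise.
meas = beta_bg + chl[:, None] * beta_star
meas = meas + rng.normal(0, 1e-5, meas.shape)

# Per-angle linear least squares: beta(theta) = a(theta) + Chl * b(theta).
A = np.stack([np.ones_like(chl), chl], axis=1)
coef, *_ = np.linalg.lstsq(A, meas, rcond=None)
beta_star_hat = coef[1]                     # recovered specific VSF

print(float(np.abs(beta_star_hat - beta_star).max()))  # small residual
```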

  5. Positive Scattering Cross Sections using Constrained Least Squares

    International Nuclear Information System (INIS)

    Dahl, J.A.; Ganapol, B.D.; Morel, J.E.

    1999-01-01

    A method which creates a positive Legendre expansion from truncated Legendre cross section libraries is presented. The cross section moments of order two and greater are modified by a constrained least squares algorithm, subject to the constraints that the zeroth and first moments remain constant, and that the standard discrete ordinate scattering matrix is positive. A method using the maximum entropy representation of the cross section, which reduces the error of these modified moments, is also presented. These methods are implemented in PARTISN, and numerical results from a transport calculation using highly anisotropic scattering cross sections with the exponential discontinuous spatial scheme are presented
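
    A minimal sketch of that kind of constrained adjustment, using a generic SLSQP solver rather than whatever PARTISN uses internally: moments 0 and 1 are held fixed, moments 2 and above are moved as little as possible, and the expanded scattering kernel is constrained to be non-negative on a grid of scattering cosines. The moments below are invented:

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.optimize import minimize

L = 5
sig = np.array([1.0, 0.7, 0.5, 0.35, 0.2, 0.1])  # invented P_L moments
mu = np.linspace(-1.0, 1.0, 41)                  # scattering cosines

def kernel(s):
    """Expanded kernel f(mu) = sum_l (2l+1)/2 * s_l * P_l(mu)."""
    return legendre.legval(mu, (2 * np.arange(L + 1) + 1) / 2 * s)

print(kernel(sig).min())     # negative: truncated expansion dips below zero

# Adjust only moments 2..L, as little as possible, so the kernel becomes
# non-negative on the grid; moments 0 and 1 stay untouched.
res = minimize(
    lambda v: np.sum((v - sig[2:])**2), sig[2:],
    constraints=[{"type": "ineq",
                  "fun": lambda v: kernel(np.concatenate((sig[:2], v)))}],
    method="SLSQP")
sig_pos = np.concatenate((sig[:2], res.x))
print(kernel(sig_pos).min())  # ~0 or above: positivity restored
```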

  6. Relativistic effects in elastic scattering of electrons in TEM

    International Nuclear Information System (INIS)

    Rother, Axel; Scheerschmidt, Kurt

    2009-01-01

    Transmission electron microscopy typically works with highly accelerated, and thus relativistic, electrons. Consequently, the scattering process is described within a relativistic formalism. In the following, we examine three different relativistic formalisms for elastic electron scattering: Dirac, Klein-Gordon, and approximated Klein-Gordon (the standard approach). These correspond to different treatments of spin effects and different couplings to the electromagnetic potentials. A detailed comparison is conducted by means of explicit numerical calculations. For this purpose, two different formalisms have been applied to the approaches above: a numerical integration with predefined boundary conditions, and the multislice algorithm, a standard procedure for such simulations. The results show a negligibly small difference between the different relativistic equations in the vicinity of the electromagnetic potentials prevailing in the electron microscope. The differences between the two numerical approaches are found to be small for small-angle scattering but eventually grow large for large-angle scattering, as recorded for instance in high-angle annular dark field imaging.

  7. Dynamic light scattering optical coherence tomography.

    Science.gov (United States)

    Lee, Jonghwan; Wu, Weicheng; Jiang, James Y; Zhu, Bo; Boas, David A

    2012-09-24

    We introduce an integration of dynamic light scattering (DLS) and optical coherence tomography (OCT) for high-resolution 3D imaging of heterogeneous diffusion and flow. DLS analyzes fluctuations in light scattered by particles to measure diffusion or flow of the particles, and OCT uses coherence gating to collect light only scattered from a small volume for high-resolution structural imaging. Therefore, the integration of DLS and OCT enables high-resolution 3D imaging of diffusion and flow. We derived a theory under the assumption that static and moving particles are mixed within the OCT resolution volume and the moving particles can exhibit either diffusive or translational motion. Based on this theory, we developed a fitting algorithm to estimate dynamic parameters including the axial and transverse velocities and the diffusion coefficient. We validated DLS-OCT measurements of diffusion and flow through numerical simulations and phantom experiments. As an example application, we performed DLS-OCT imaging of the living animal brain, resulting in 3D maps of the absolute and axial velocities, the diffusion coefficient, and the coefficient of determination.
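
    As a toy illustration of the fitting step, consider the textbook DLS result that, for purely diffusing scatterers probed in backscattering (momentum transfer q = 2k), the field autocorrelation decays as g1(τ) = exp(−q²Dτ); fitting that decay recovers the diffusion coefficient. All numbers below are invented, and the full DLS-OCT model described above also includes velocity terms and static scatterers:

```python
import numpy as np
from scipy.optimize import curve_fit

lam = 1.3e-6                        # center wavelength (m), illustrative
q = 2 * (2 * np.pi / lam)           # backscattering momentum transfer

tau = np.linspace(0.0, 0.05, 200)   # lag times (s)
D_true = 1e-12                      # diffusion coefficient (m^2/s)
rng = np.random.default_rng(7)
g1 = np.exp(-q**2 * D_true * tau) + rng.normal(0, 0.01, tau.size)

model = lambda t, D: np.exp(-q**2 * D * t)
(D_fit,), _ = curve_fit(model, tau, g1, p0=[1e-13])
print(D_fit)                        # close to 1e-12
```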

  8. Optimization of virtual source parameters in neutron scattering instrumentation

    International Nuclear Information System (INIS)

    Habicht, K; Skoulatos, M

    2012-01-01

    We report on phase-space optimizations for neutron scattering instruments employing horizontal focussing crystal optics. Defining a figure of merit for a generic virtual source configuration, we identify a set of optimum instrumental parameters. In order to assess the quality of the instrumental configuration, we combine an evolutionary optimization algorithm with the analytical Popovici description using multidimensional Gaussian distributions. The optimum phase-space element which needs to be delivered to the virtual source by the preceding neutron optics may be obtained using the same algorithm, which is of general interest in instrument design.

  9. A Robust Computational Technique for Model Order Reduction of Two-Time-Scale Discrete Systems via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Othman M. K. Alsmadi

    2015-01-01

    Full Text Available A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA), with the advantages of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, and simulation results show the potential and advantages of the new approach.

  10. A flexibility-based method via the iterated improved reduction system and the cuckoo optimization algorithm for damage quantification with limited sensors

    International Nuclear Information System (INIS)

    Zare Hosseinzadeh, Ali; Ghodrati Amiri, Gholamreza; Bagheri, Abdollah; Koo, Ki-Young

    2014-01-01

    In this paper, a novel and effective damage diagnosis algorithm is proposed to localize and quantify structural damage using incomplete modal data, considering the limitations on the number of sensors attached to a structure. The damage detection problem is formulated as an optimization problem by computing static displacements in the reduced model of a structure subjected to a unique static load. The static responses are computed through the flexibility matrix of the damaged structure, obtained from the incomplete modal data of the structure. In the algorithm, an iterated improved reduction system method is applied to prepare an accurate reduced model of the structure. The optimization problem is solved via a new evolutionary optimization algorithm called the cuckoo optimization algorithm. The efficiency and robustness of the presented method are demonstrated through three numerical examples. Moreover, the efficiency of the method is verified by an experimental study of a five-story shear building structure on a shaking table using only two sensors. The damage identification results for the numerical and experimental studies show the suitable and stable performance of the proposed damage identification method for structures with limited sensors. (paper)

  11. Efficient algorithms of multidimensional γ-ray spectra compression

    International Nuclear Information System (INIS)

    Morhac, M.; Matousek, V.

    2006-01-01

    Efficient algorithms for compressing multidimensional γ-ray events are presented. Two alternative kinds of compression algorithms, based on the adaptive orthogonal and randomizing transforms respectively, are proposed. In both algorithms we exploit the reduction of data volume due to the symmetry of the γ-ray spectra

  12. New developments in analytical calculation of first order scattering for 3D complex objects

    International Nuclear Information System (INIS)

    Duvauchelle, Philippe; Berthier, Jerome

    2007-01-01

    The principle of the analytical calculation of first order scattering used in our simulation code, named VXI (Virtual X-ray Imaging), is based on a double ray-tracing. The first step consists in ray-tracing from the X-ray source point to each point of the object (an elementary volume in practice), including the attenuation effect in the primary beam. This calculation gives the number of photons, and their direction, arriving at each voxel. A voxel acts as a secondary source whose properties accord with the physics of X-ray scattering (Compton and Rayleigh). The second step of the ray-tracing is then done from each voxel of the object in the direction of each pixel of the detector, taking into account the attenuation along the scattering path. To simulate a complex 3D object, the first problem consists in realizing an automatic 3D sampling of the object. This is done by using an octree-based method optimized for deterministic scattering computation. The basic octree method consists in recursively dividing the volume of the object into voxels of decreasing size until each of them is completely included under the surface of the sample. The object volume is then always underestimated. This is a problem because the scattering phenomenon strongly depends on the real volume of the object. The second problem is that artefacts due to sampling effects can occur in the synthesized images. These two particular aspects are taken into account in our simulation code, and an optimized octree-based method has been specially developed for this application. To address the first problem, our 3D sampling algorithm may accept voxels on the surface of the sample under conditions defined by the user. The second problem is treated by generating a random sampling instead of a regular one. The algorithm developed for 3D sampling is easily configurable, fast (a few seconds at most), robust, and can be applied to all object shapes (thin, massive). The sampling time depends on the number of

  13. Analysis of Individual Preferences for Tuning Noise-Reduction Algorithms

    NARCIS (Netherlands)

    Houben, Rolph; Dijkstra, Tjeerd M. H.; Dreschler, Wouter A.

    2012-01-01

    There is little research on user preference for different settings of noise reduction, especially for individual users. We therefore measured individual preferences for pairs of audio streams differing in the trade-off between noise reduction and speech distortion. A logistic probability model was

  14. Contribution to the study of the transport-scattering equivalence

    International Nuclear Information System (INIS)

    Soldevila, Michel.

    1978-01-01

    The algorithm of the TERMINUS code, which analytically solves the equations of multigroup scattering in one-dimensional plane geometry, is described in this report. This code was written and used to test the mathematical methods of transport-scattering equivalence. Results of a comparison between the APOLLO, NEPTUNE and TERMINUS codes are then given. After the mathematical problem is formulated, the reasons which led to the choice among the alternative methods are explained, enabling the ANACREON and KALGAN programmes to be written. The results achieved with these programmes, both of which use TERMINUS as the scattering code, are presented. The problems raised by coupling the ANACREON and KALGAN codes to the NEPTUNE system are mentioned, and the results achieved with the equivalence module coupled to NEPTUNE are given [fr]

  15. Safe reduction rules for weighted treewidth

    NARCIS (Netherlands)

    Eijkhof, F. van den; Bodlaender, H.L.; Koster, A.M.C.A.

    2002-01-01

    Several sets of reduction rules are known for preprocessing a graph when computing its treewidth. In this paper, we give reduction rules for a weighted variant of treewidth, motivated by the analysis of algorithms for probabilistic networks. We present two general reduction rules that are safe for

  16. FastScat™: An Object-Oriented Program for Fast Scattering Computation

    Directory of Open Access Journals (Sweden)

    Lisa Hamilton

    1993-01-01

    Full Text Available FastScat is a state-of-the-art program for computing electromagnetic scattering and radiation. Its purpose is to support the study of recent algorithmic advancements, such as the fast multipole method, that promise speed-ups of several orders of magnitude over conventional algorithms. The complexity of these algorithms and their associated data structures led us to adopt an object-oriented methodology for FastScat. We discuss the program's design and several lessons learned from its C++ implementation including the appropriate level for object-orientedness in numeric software, maintainability benefits, interfacing to Fortran libraries such as LAPACK, and performance issues.

  17. Oversampling smoothness: an effective algorithm for phase retrieval of noisy diffraction intensities.

    Science.gov (United States)

    Rodriguez, Jose A; Xu, Rui; Chen, Chien-Chun; Zou, Yunfei; Miao, Jianwei

    2013-04-01

    Coherent diffraction imaging (CDI) is a high-resolution lensless microscopy technique that has been applied to image a wide range of specimens using synchrotron radiation, X-ray free-electron lasers, high harmonic generation, soft X-ray lasers and electrons. Despite recent rapid advances, it remains a challenge to reconstruct fine features in weakly scattering objects such as biological specimens from noisy data. Here an effective iterative algorithm, termed oversampling smoothness (OSS), for phase retrieval of noisy diffraction intensities is presented. OSS exploits the correlation information among the pixels or voxels in the region outside of a support in real space. By properly applying spatial frequency filters to the pixels or voxels outside the support at different stages of the iterative process (i.e. a smoothness constraint), OSS finds a balance between the hybrid input-output (HIO) and error reduction (ER) algorithms to search for a global minimum in solution space, while reducing the oscillations in the reconstruction. Both numerical simulations with Poisson noise and experimental data from a biological cell indicate that OSS consistently outperforms the HIO, ER-HIO and noise robust (NR)-HIO algorithms at all noise levels in terms of accuracy and consistency of the reconstructions. It is expected that OSS will find application in the rapidly growing CDI field, as well as other disciplines where phase retrieval from noisy Fourier magnitudes is needed. The MATLAB (The MathWorks Inc., Natick, MA, USA) source code of the OSS algorithm is freely available from http://www.physics.ucla.edu/research/imaging.
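
    A toy sketch of the idea, not the authors' MATLAB code: a standard HIO iteration whose outside-support region is additionally passed through a Gaussian low-pass filter that tightens over the course of the run. The object, support, and filter schedule are invented, and the noise handling, positivity constraint, and HIO/ER balancing of the full OSS algorithm are omitted:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 64
support = np.zeros((n, n), bool)
support[24:40, 24:40] = True                 # known, oversampled support
obj = np.where(support, rng.random((n, n)), 0.0)
mag = np.abs(np.fft.fft2(obj))               # noise-free Fourier magnitudes

kx = np.fft.fftfreq(n)[None, :]
ky = np.fft.fftfreq(n)[:, None]
k2 = kx**2 + ky**2

def lowpass(x, alpha):
    """Gaussian low-pass filter; smaller alpha = heavier smoothing."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.exp(-k2 / (2 * alpha**2))))

g, beta, iters = rng.random((n, n)), 0.9, 500
for it in range(iters):
    G = np.fft.fft2(g)
    gp = np.real(np.fft.ifft2(mag * np.exp(1j * np.angle(G))))  # data constraint
    hio = np.where(support, gp, g - beta * gp)                  # HIO feedback
    # OSS twist: smooth only the density outside the support, with a
    # filter that tightens as the iterations progress.
    alpha = 0.5 * (1 - it / iters) + 0.02
    g = np.where(support, hio, lowpass(hio, alpha))

# R-factor: how well the reconstruction matches the measured magnitudes.
r = np.abs(np.fft.fft2(g * support))
print(np.linalg.norm(r - mag) / np.linalg.norm(mag))
```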

  18. Decoding Hermitian Codes with Sudan's Algorithm

    DEFF Research Database (Denmark)

    Høholdt, Tom; Nielsen, Rasmus Refslund

    1999-01-01

    We present an efficient implementation of Sudan's algorithm for list decoding Hermitian codes beyond half the minimum distance. The main ingredients are an explicit method to calculate so-called increasing zero bases, an efficient interpolation algorithm for finding the Q-polynomial, and a reduct…

  19. Inverse Scattering Method and Soliton Solution Family for String Effective Action

    International Nuclear Information System (INIS)

    Ya-Jun, Gao

    2009-01-01

    A modified Hauser–Ernst-type linear system is established and used to develop an inverse scattering method for solving the motion equations of the string effective action describing the coupled gravity, dilaton and Kalb–Ramond fields. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the proposed method straightforward and effective to apply. As an application, a concrete family of soliton solutions for the considered theory is obtained.

  20. Measurements of computed tomography radiation scatter

    International Nuclear Information System (INIS)

    Van Every, B.; Petty, R.J.

    1992-01-01

    This paper describes the measurement of scattered radiation from a computed tomography (CT) scanner in a clinical situation and compares the results with those obtained from a CT performance phantom and with data obtained from CT manufacturers. The results are presented as iso-dose contours. There are significant differences between the data obtained and that supplied by manufacturers, both in the shape of the iso-dose contours and in the nominal values. The observed scatter in a clinical situation (for an abdominal scan) varied between 3% and 430% of the manufacturers' stated values, with a marked reduction in scatter noted at the head and feet of the patient. These differences appear to be due to the fact that manufacturers use CT phantoms to obtain scatter data, and these phantoms do not provide the same scatter absorption geometry as patients. CT scatter was observed to increase as scan field size and slice thickness increased, whilst there was little change in scatter with changes in gantry tilt and table slew. Using the iso-dose contours, the orientation of the CT scanner can be optimised with regard to the location and shielding requirements of doors and windows. Additionally, the positioning of staff who must remain in the room during scanning can be optimised to minimise their exposure. It is estimated that the data presented allow for realistic radiation protection assessments to be made. 13 refs., 5 tabs., 6 figs.

  1. FIR-laser scattering for JT-60

    International Nuclear Information System (INIS)

    Itagaki, Tokiyoshi; Matoba, Tohru; Funahashi, Akimasa; Suzuki, Yasuo

    1977-09-01

    An ion Thomson scattering method with a far infrared (FIR) laser has been studied for measuring the ion temperature in the large tokamak JT-60, to be completed in 1981. Ion Thomson scattering has the advantage of measuring the spatial variation of the ion temperature. Ion Thomson scattering in a medium tokamak (PLT) or a future large tokamak (JET) requires a FIR laser of several megawatts. Research and development of FIR high-power pulse lasers with power up to 0.6 MW have proceeded for ion Thomson scattering in future high-temperature tokamaks. The FIR laser power will reach the desired several megawatts in a few years, so JAERI plans to measure the ion temperature in JT-60 by ion Thomson scattering. A noise source for ion Thomson scattering with a 496 μm CH₃F laser is synchrotron radiation, whose power is similar to the NEP of the Schottky-barrier diode. However, the synchrotron radiation power is one order of magnitude smaller when the FIR laser is a 385 μm D₂O laser. The FIR laser power corresponding to a signal-to-noise ratio of 1 is about 4 MW for the CH₃F laser, and 0.4 MW for the D₂O laser if the NEP of the heterodyne mixer is one order of magnitude lower. A FIR laser scattering system for JT-60 should be realized with improvement of the FIR laser power and the NEP of the heterodyne mixer, and reduction of synchrotron radiation. (auth.)

  2. Cost reduction improvement for power generation system integrating WECS using harmony search algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ngonkham, S. [Khonkaen Univ., Amphur Muang (Thailand). Dept. of Electrical Engineering; Buasri, P. [Khonkaen Univ., Amphur Muang (Thailand). Embed System Research Group

    2009-03-11

    A harmony search (HS) algorithm was used to optimize economic dispatch (ED) in a wind energy conversion system (WECS) for power system integration. The HS algorithm is based on a stochastic random search method. System costs for the WECS were estimated in relation to average wind speeds. The HS algorithm was implemented to optimize the ED with a simple programming procedure. The study showed that the initial parameters must be carefully selected to ensure the accuracy of the HS algorithm. The algorithm demonstrated that the total costs of the WECS were higher than the costs associated with energy efficiency procedures that reduce the same amount of greenhouse gas (GHG) emissions. 7 refs., 10 tabs., 16 figs.
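
    As background, the generic harmony search loop that such a study builds on can be sketched as follows. This is the textbook form, not the authors' implementation; the function name and the parameter defaults (hmcr, par, bw) are illustrative.

```python
import numpy as np

def harmony_search(cost, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   n_iter=2000, seed=0):
    """Minimal harmony search sketch: minimize cost(x) over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    memory = rng.uniform(lo, hi, size=(hms, dim))      # harmony memory
    scores = np.array([cost(h) for h in memory])
    for _ in range(n_iter):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:                    # memory consideration
                new[d] = memory[rng.integers(hms), d]
                if rng.random() < par:                 # pitch adjustment
                    new[d] += bw * (hi[d] - lo[d]) * rng.uniform(-1, 1)
            else:                                      # random selection
                new[d] = rng.uniform(lo[d], hi[d])
        new = np.clip(new, lo, hi)
        s = cost(new)
        worst = np.argmax(scores)
        if s < scores[worst]:                          # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = np.argmin(scores)
    return memory[best], scores[best]
```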

  3. An Out-of-Core GPU based dimensionality reduction algorithm for Big Mass Spectrometry Data and its application in bottom-up Proteomics.

    Science.gov (United States)

    Awan, Muaaz Gul; Saeed, Fahad

    2017-08-01

    Modern high-resolution mass spectrometry instruments can generate millions of spectra in a single systems biology experiment. Each spectrum consists of thousands of peaks, but only a small number of peaks actively contribute to the deduction of peptides. Therefore, pre-processing of MS data to detect noisy and non-useful peaks is an active area of research. Most sequential noise-reducing algorithms are impractical to use as a pre-processing step due to high time-complexity. In this paper, we present a GPU-based dimensionality-reduction algorithm, called G-MSR, for MS2 spectra. Our proposed algorithm uses novel data structures which optimize the memory and computational operations inside the GPU. These novel data structures include Binary Spectra and Quantized Indexed Spectra (QIS). The former helps in communicating essential information between CPU and GPU using a minimum amount of data, while the latter enables us to store and process a complex 3-D data structure as a 1-D array while maintaining the integrity of the MS data. Our proposed algorithm also takes into account the limited memory of GPUs and switches between in-core and out-of-core modes based upon the size of the input data. G-MSR achieves a peak speed-up of 386x over its sequential counterpart and is shown to process over a million spectra in just 32 seconds. The code for this algorithm is available as GPL open-source on GitHub at the following link: https://github.com/pcdslab/G-MSR.
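
    To make the flattening idea concrete, a QIS-like layout can be sketched as below, assuming spectra arrive as variable-length (m/z, intensity) arrays. The function, field names and quantisation depth are hypothetical; the paper's actual GPU-resident layout is more involved.

```python
import numpy as np

def quantize_spectra(spectra, n_levels=16):
    """Hypothetical QIS-style flattening: variable-length spectra packed into
    flat 1-D arrays (GPU-transfer friendly) with per-spectrum offsets and
    intensities quantised to a small integer alphabet."""
    offsets = np.cumsum([0] + [len(mz) for mz, _ in spectra])
    mz_flat = np.concatenate([mz for mz, _ in spectra])
    inten = np.concatenate([i for _, i in spectra])
    q = np.empty(inten.shape, dtype=np.uint8)
    for k in range(len(spectra)):                  # assumes non-empty spectra
        s = slice(offsets[k], offsets[k + 1])
        top = inten[s].max() or 1.0                # avoid dividing by zero
        q[s] = np.minimum((inten[s] / top * n_levels).astype(np.uint8),
                          n_levels - 1)
    return mz_flat, q, offsets
```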

  4. Scattering cross section of unequal length dipole arrays

    CERN Document Server

    Singh, Hema; Jha, Rakesh Mohan

    2016-01-01

    This book presents a detailed and systematic analytical treatment of scattering by an arbitrary dipole array configuration with unequal-length dipoles, different inter-element spacings and load impedances. It provides a physical interpretation of the scattering phenomena within the phased array system. The antenna radar cross section (RCS) depends on the field scattered by the antenna towards the receiver. It has two components, viz. structural RCS and antenna mode RCS. The latter dominates the former, especially if the antenna is mounted on a low-observable platform. Reducing the scattering due to the presence of antennas on a surface is one of the concerns of stealth technology. In order to achieve this objective, a detailed and accurate analysis of antenna mode scattering is required. In a practical phased array, one cannot ignore the finite dimensions of the antenna elements, coupling effects and the role of the feed network while estimating the antenna RCS. This book presents the RCS estimati...

  5. Algorithm for calculations of asymptotic nuclear coefficients using phase-shift data for charged-particle scattering

    Science.gov (United States)

    Orlov, Yu. V.; Irgaziev, B. F.; Nabi, Jameel-Un

    2017-08-01

    A new algorithm for the calculation of asymptotic nuclear coefficients, which we call the Δ method, is proved and developed. This method was proposed by Ramírez Suárez and Sparenberg (arXiv:1602.04082) but no proof was given. We apply it to a bound state situated near the channel threshold when the Sommerfeld parameter is quite large within the experimental energy region. As a result, the value of the conventional effective-range function K_l(k²) is actually defined by the Coulomb term. One of the resulting effects is a wrong description of the energy behavior of the elastic scattering phase shift δ_l reproduced from the fitted total effective-range function K_l(k²). This leads to an improper value of the asymptotic normalization coefficient (ANC). No such problem arises if we fit only the nuclear term. The difference between the total effective-range function and the Coulomb part at real energies is the same as the nuclear term. We can then proceed using just this Δ method to calculate the pole positions and the ANC. We apply it to the vertices ⁴He + ¹²C ↔ ¹⁶O and ³He + ⁴He ↔ ⁷Be. The calculated ANCs can be used to find the radiative capture reaction cross sections of the transfers to the ¹⁶O bound final states as well as to the ⁷Be.
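
    For orientation, the effective-range function referred to above has, for neutral particles, the standard low-energy expansion below; this is textbook background, not taken from this record, and in the charged-particle case discussed here a Coulomb-modified K_l plays the same role.

```latex
% Low-energy expansion of the effective-range function (neutral-particle
% form; the charged case replaces K_l by a Coulomb-modified function):
\[
  K_l(k^2) \,=\, k^{2l+1}\cot\delta_l(k)
           \,=\, -\frac{1}{a_l} \,+\, \tfrac{1}{2}\,r_l\,k^2 \,+\, \mathcal{O}(k^4)
\]
```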

  6. A DFT-based genetic algorithm search for AuCu nanoalloy electrocatalysts for CO2 reduction

    DEFF Research Database (Denmark)

    Lysgaard, Steen; Mýrdal, Jón Steinar Garðarsson; Hansen, Heine Anton

    2015-01-01

    Using a DFT-based genetic algorithm (GA) approach, we have determined the most stable structure and stoichiometry of a 309-atom icosahedral AuCu nanoalloy, for potential use as an electrocatalyst for CO2 reduction. The identified core–shell nano-particle consists of a copper core interspersed… This shows that the mixed Cu135@Au174 core–shell nanoalloy has a similar adsorption energy, for the most favorable site, as a pure gold nano-particle. Cu, however, has the effect of stabilizing the icosahedral structure because Au particles are easily distorted when adding adsorbates. … that it is possible to use the LCAO mode to obtain a realistic estimate of the molecular chemisorption energy for systems where the computation in normal grid mode is not computationally feasible. These corrections are employed when calculating adsorption energies on the Cu, Au and most stable mixed particles…

  7. A data reduction program for the linac total-scattering amorphous materials spectrometer (LINDA)

    International Nuclear Information System (INIS)

    Clarke, J.H.

    1976-01-01

    A computer program has been written to reduce the data collected on the A.E.R.E. Harwell linac total-scattering spectrometer (TSS) to the differential scattering cross-section. This instrument, used for studying the structure of amorphous materials such as liquids and glasses, has been described in detail. Time-of-flight spectra are recorded by several arrays of detectors at different angles using a pulsed incident neutron beam with a continuous distribution of wavelengths. The program performs all necessary background and container subtractions and also absorption corrections using the method of Paalman and Pings. The incident neutron energy distribution is obtained from the intensity recorded from a standard vanadium sample, enabling the observed differential scattering cross-section dσ/dΩ(θ, λ) and the structure factor S(Q) to be obtained. Various sample and vanadium geometries can be analysed by the program and facilities exist for the summation of data sets, smoothing of data, application of Placzek corrections and the output of processed data onto magnetic tape or punched cards. A set of example data is provided and some structure factors are shown with absorption corrections. (author)
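
    The vanadium normalisation step at the heart of such a reduction chain can be sketched as follows. Absorption, multiple-scattering and Placzek corrections are omitted, and the numeric defaults are placeholders, so this is only a schematic of the idea rather than the program described above.

```python
import numpy as np

def differential_cross_section(sample, background, container, vanadium,
                               sigma_v=5.08, n_ratio=1.0):
    """Schematic TOF normalisation: sigma_v is a nominal vanadium scattering
    cross-section in barns and n_ratio the vanadium-to-sample scatterer
    number ratio; both values here are placeholders."""
    sample_net = sample - background - container        # counts per channel
    vanadium_net = np.clip(vanadium - background, 1e-9, None)
    # vanadium scatters almost isotropically and incoherently, so its
    # spectrum measures incident flux times detector efficiency
    return (sample_net / vanadium_net) * n_ratio * sigma_v / (4.0 * np.pi)
```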

  8. Peak reduction for commercial buildings using energy storage

    Science.gov (United States)

    Chua, K. H.; Lim, Y. S.; Morris, S.

    2017-11-01

    Battery-based energy storage has emerged as a cost-effective solution for peak reduction due to the declining price of batteries. In this study, a battery-based energy storage system is developed and implemented to achieve optimal peak reduction for commercial customers within the limited energy capacity of the storage. The energy storage system is formed by three bi-directional power converters rated at 5 kVA and a battery bank with a capacity of 64 kWh. Three control algorithms, namely fixed-threshold, adaptive-threshold, and fuzzy-based control algorithms, have been developed and implemented in the energy storage system in a campus building. The control algorithms are evaluated and compared under different load conditions. The overall experimental results show that the fuzzy-based controller is the most effective of the three in peak reduction. The fuzzy-based control algorithm is capable of incorporating a priori qualitative knowledge and expertise about the load characteristics of the buildings as well as the usable energy without over-discharging the batteries.
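
    The simplest of the three controllers, the fixed-threshold rule, can be sketched as below; the power and state-of-charge limits are illustrative values, not those of the campus installation.

```python
def battery_setpoint(load_kw, threshold_kw, soc,
                     soc_min=0.2, soc_max=0.9, p_max_kw=15.0):
    """Fixed-threshold peak-shaving sketch.

    Returns battery power in kW: positive = discharge, negative = charge.
    """
    if load_kw > threshold_kw and soc > soc_min:
        return min(load_kw - threshold_kw, p_max_kw)    # shave the peak
    if load_kw < threshold_kw and soc < soc_max:
        return -min(threshold_kw - load_kw, p_max_kw)   # recharge off-peak
    return 0.0
```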

  9. Dose Calculation Accuracy of the Monte Carlo Algorithm for CyberKnife Compared with Other Commercially Available Dose Calculation Algorithms

    International Nuclear Information System (INIS)

    Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny

    2011-01-01

    Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required.
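
    For reference, the 3%/3 mm gamma comparison used above can be sketched in one dimension as follows. Clinical tools operate on 2-D/3-D dose grids; this toy uses a global dose criterion and a low-dose cutoff, both of which are assumptions.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm,
                    dd=0.03, dta_mm=3.0, cutoff=0.10):
    """1-D global gamma analysis sketch (3%/3 mm by default)."""
    d_max = dose_ref.max()
    x = np.arange(len(dose_ref)) * spacing_mm
    gammas = []
    for xi, dr in zip(x, dose_ref):
        if dr < cutoff * d_max:        # skip the low-dose region
            continue
        # gamma(i) = min_j sqrt((dx / DTA)^2 + (dD / (DD * Dmax))^2)
        g2 = ((x - xi) / dta_mm) ** 2 + ((dose_eval - dr) / (dd * d_max)) ** 2
        gammas.append(np.sqrt(g2.min()))
    return 100.0 * (np.array(gammas) <= 1.0).mean()
```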

  10. Small-angle X-ray scattering tensor tomography: model of the three-dimensional reciprocal-space map, reconstruction algorithm and angular sampling requirements.

    Science.gov (United States)

    Liebi, Marianne; Georgiadis, Marios; Kohlbrecher, Joachim; Holler, Mirko; Raabe, Jörg; Usov, Ivan; Menzel, Andreas; Schneider, Philipp; Bunk, Oliver; Guizar-Sicairos, Manuel

    2018-01-01

    Small-angle X-ray scattering tensor tomography, which allows reconstruction of the local three-dimensional reciprocal-space map within a three-dimensional sample as introduced by Liebi et al. [Nature (2015), 527, 349-352], is described in more detail with regard to the mathematical framework and the optimization algorithm. For the case of trabecular bone samples from vertebrae it is shown that the model of the three-dimensional reciprocal-space map using spherical harmonics can adequately describe the measured data. The method enables the determination of nanostructure orientation and degree of orientation as demonstrated previously in a single momentum transfer q range. This article presents a reconstruction of the complete reciprocal-space map for the case of bone over extended ranges of q. In addition, it is shown that uniform angular sampling and advanced regularization strategies help to reduce the amount of data required.

  11. The analysis and correction of neutron scattering effects in neutron imaging

    International Nuclear Information System (INIS)

    Raine, D.A.; Brenizer, J.S.

    1997-01-01

    A method of correcting for the scattering effects present in neutron radiographic and computed tomographic imaging has been developed. Prior work has shown that beam, object, and imaging system geometry factors, such as the L/D ratio and angular divergence, are the primary sources contributing to the degradation of neutron images. With objects smaller than 20--40 mm in width, a parallel beam approximation can be made where the effects from geometry are negligible. Factors which remain important in the image formation process are the pixel size of the imaging system, neutron scattering, the size of the object, the conversion material, and the beam energy spectrum. The Monte Carlo N-Particle transport code, version 4A (MCNP4A), was used to separate and evaluate the effect that each of these parameters has on neutron image data. The simulations were used to develop a correction algorithm which is easy to implement and requires no a priori knowledge of the object. The correction algorithm is based on the determination of the object scatter function (OSF) using available data outside the object to estimate the shape and magnitude of the OSF based on a Gaussian functional form. For objects smaller than 1 mm (0.04 in.) in width, the correction function can be well approximated by a constant function. Errors in the determination and correction of the MCNP-simulated neutron scattering component were under 5%, and larger errors were only noted in objects at the extreme high end of the range of object sizes simulated. The Monte Carlo data also indicated that scattering does not play a significant role in the blurring of neutron radiographic and tomographic images. The effect of neutron scattering on computed tomography is shown to be minimal, with the most serious effect occurring when the basic backprojection method is used.
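
    A minimal reading of the Gaussian-OSF correction can be sketched on a 1-D profile as follows, assuming an open-beam reference is available. The method described above works from the image alone without such a reference, so treat this only as a schematic of the fit-and-subtract idea.

```python
import numpy as np
from scipy.optimize import curve_fit

def osf_correct(profile, open_beam, object_mask):
    """Sketch of an OSF-style scatter correction on a 1-D radiograph profile.

    profile     : measured intensities with the object in the beam
    open_beam   : the same line with no object present (assumed available)
    object_mask : True where the object shadows the beam
    """
    x = np.arange(profile.size, dtype=float)
    # outside the object, any excess over the open beam is scattered signal
    excess = profile - open_beam
    xo, yo = x[~object_mask], excess[~object_mask]
    gauss = lambda t, a, mu, sig: a * np.exp(-((t - mu) ** 2) / (2.0 * sig**2))
    p0 = (max(yo.max(), 1e-6), x[object_mask].mean(), float(object_mask.sum()))
    (a, mu, sig), _ = curve_fit(gauss, xo, yo, p0=p0, maxfev=5000)
    # subtract the fitted scatter function over the whole field
    return profile - gauss(x, a, mu, sig)
```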

  12. Characterization of adaptive statistical iterative reconstruction algorithm for dose reduction in CT: A pediatric oncology perspective

    International Nuclear Information System (INIS)

    Brady, S. L.; Yee, B. S.; Kaufman, R. A.

    2012-01-01

    Purpose: This study demonstrates a means of implementing an adaptive statistical iterative reconstruction (ASiR™) technique for dose reduction in computed tomography (CT) while maintaining similar noise levels in the reconstructed image. The effects of image quality and noise texture were assessed at all implementation levels of ASiR™. Empirically derived dose reduction limits were established for ASiR™ for imaging of the trunk for a pediatric oncology population ranging from 1 yr old through adolescence/adulthood. Methods: Image quality was assessed using metrics established by the American College of Radiology (ACR) CT accreditation program. Each image quality metric was tested using the ACR CT phantom with 0%–100% ASiR™ blended with filtered back projection (FBP) reconstructed images. Additionally, the noise power spectrum (NPS) was calculated for three common reconstruction filters of the trunk. The empirically derived limitations on ASiR™ implementation for dose reduction were assessed using (1, 5, 10) yr old and adolescent/adult anthropomorphic phantoms. To assess dose reduction limits, the phantoms were scanned in increments of increased noise index (decrementing mA using automatic tube current modulation) balanced with ASiR™ reconstruction to maintain noise equivalence of the 0% ASiR™ image. Results: The ASiR™ algorithm did not produce any unfavorable effects on image quality as assessed by ACR criteria. Conversely, low-contrast resolution was found to improve due to the reduction of noise in the reconstructed images. NPS calculations demonstrated that images with lower frequency noise had lower noise variance and coarser graininess at progressively higher percentages of ASiR™ reconstruction; and in spite of the similar magnitudes of noise, the image reconstructed with 50% or more ASiR™ presented a more smoothed appearance than the pre-ASiR™ 100% FBP image. Finally, relative to non-ASiR™ images with 100% of standard dose across the

  13. Cell motility dynamics: a novel segmentation algorithm to quantify multi-cellular bright field microscopy images.

    Directory of Open Access Journals (Sweden)

    Assaf Zaritsky

    Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image-segmentation of multi-cellular regions in bright field images demonstrating enhanced quantitative analyses and better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm to separate between multi-cellular and background regions for bright field images, which is based on classification of local patches within an image: a cascade of Support Vector Machines (SVMs) is applied using basic image features. Post processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as wound healing and scatter assay. It was applied to quantify the acceleration effect of Hepatocyte growth factor/scatter factor (HGF/SF) on healing rate in a time lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel fully automated, accurate, zero-parameters method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC) images on the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays. The proposed approach is generic and can be used alone or alongside traditional

  14. Cell motility dynamics: a novel segmentation algorithm to quantify multi-cellular bright field microscopy images.

    Science.gov (United States)

    Zaritsky, Assaf; Natan, Sari; Horev, Judith; Hecht, Inbal; Wolf, Lior; Ben-Jacob, Eshel; Tsarfaty, Ilan

    2011-01-01

    Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image-segmentation of multi-cellular regions in bright field images demonstrating enhanced quantitative analyses and better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm to separate between multi-cellular and background regions for bright field images, which is based on classification of local patches within an image: a cascade of Support Vector Machines (SVMs) is applied using basic image features. Post processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as wound healing and scatter assay. It was applied to quantify the acceleration effect of Hepatocyte growth factor/scatter factor (HGF/SF) on healing rate in a time lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel fully automated, accurate, zero-parameters method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC) images on the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays. The proposed approach is generic and can be used alone or alongside traditional fluorescence single

  15. Testing a Fourier Accelerated Hybrid Monte Carlo Algorithm

    OpenAIRE

    Catterall, S.; Karamov, S.

    2001-01-01

    We describe a Fourier Accelerated Hybrid Monte Carlo algorithm suitable for dynamical fermion simulations of non-gauge models. We test the algorithm in supersymmetric quantum mechanics viewed as a one-dimensional Euclidean lattice field theory. We find dramatic reductions in the autocorrelation time of the algorithm in comparison to standard HMC.

  16. PROPOSAL OF ALGORITHM FOR ROUTE OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Robert Ramon de Carvalho Sousa

    2016-06-01

    This article uses the “Six Sigma” methodology to elaborate an algorithm for routing problems that is able to obtain more efficient results than Clarke and Wright's (CW) algorithm (1964) in situations where product delivery demands increase randomly and the service level cannot be raised. In some situations, the proposed algorithm obtained more efficient results than the CW algorithm. The key factor was a reduction in the number of mistakes (one-way routes) and in the level of result variation.
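
    For context, the Clarke and Wright savings heuristic used as the benchmark can be sketched as follows; this is the classic parallel-savings form with vehicle capacity as the only constraint, and the depot is node 0.

```python
def clarke_wright(dist, demand, capacity):
    """Classic Clarke-Wright savings heuristic sketch (parallel version).

    dist     : symmetric distance matrix, depot at index 0
    demand   : per-customer demand, demand[0] unused
    capacity : vehicle capacity
    """
    n = len(dist)
    routes = {i: [i] for i in range(1, n)}            # one route per customer
    route_of = {i: i for i in range(1, n)}
    load = {i: demand[i] for i in range(1, n)}
    savings = sorted(((dist[0][i] + dist[0][j] - dist[i][j], i, j)
                      for i in range(1, n) for j in range(i + 1, n)),
                     reverse=True)
    for s, i, j in savings:
        ri, rj = route_of[i], route_of[j]
        if ri == rj or load[ri] + load[rj] > capacity:
            continue
        a, b = routes[ri], routes[rj]
        # merge only when i and j are endpoints of their routes
        if a[-1] == i and b[0] == j:
            merged = a + b
        elif b[-1] == j and a[0] == i:
            merged = b + a
        elif a[0] == i and b[0] == j:
            merged = a[::-1] + b
        elif a[-1] == i and b[-1] == j:
            merged = a + b[::-1]
        else:
            continue
        routes[ri], load[ri] = merged, load[ri] + load[rj]
        del routes[rj], load[rj]
        for node in merged:
            route_of[node] = ri
    return list(routes.values())
```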

  17. Equipment of Thomson scattering measurement on DIVA plasma

    International Nuclear Information System (INIS)

    Yamauchi, Toshihiko; Kumagai, Katsuaki; Funahashi, Akimasa; Matoba, Thoru; Sengoku, Seio

    1980-02-01

    The equipment for Thomson scattering measurement using ruby-laser light is explained. The DIVA device was shut down in September 1979; it gave numerous fruitful experimental results during its five years of operation. We measured the profiles of electron temperature and density with the Thomson scattering equipment, which played an important role in research on energy confinement and heating characteristics. In the Thomson scattering measurements on DIVA, studies and improvements were made on the reduction of stray light, the increase of measuring points and data processing. The profiles of electron temperature and density were thus measured successfully. This report gives an overall view of the Thomson scattering equipment together with the above improvements. As two representative examples, the measured electron temperature profiles of the DIVA plasma under divertor operation and low-q discharge, respectively, are described. (author)

  18. A theoretical derivation of the condensed history algorithm

    International Nuclear Information System (INIS)

    Larsen, E.W.

    1992-01-01

    Although the Condensed History Algorithm is a successful and widely-used Monte Carlo method for solving electron transport problems, it has been derived only by an ad-hoc process based on physical reasoning. In this paper we show that the Condensed History Algorithm can be justified as a Monte Carlo simulation of an operator-split procedure in which the streaming, angular scattering, and slowing-down operators are separated within each time step. Different versions of the operator-split procedure lead to O(Δs) and O(Δs²) versions of the method, where Δs is the path-length step. Our derivation also indicates that higher-order versions of the Condensed History Algorithm may be developed. (Author)

  19. Scattering analysis of periodic structures using finite-difference time-domain

    CERN Document Server

    ElMahgoub, Khaled; Elsherbeni, Atef Z

    2012-01-01

    Periodic structures are of great importance in electromagnetics due to their wide range of applications such as frequency selective surfaces (FSS), electromagnetic band gap (EBG) structures, periodic absorbers, meta-materials, and many others. The aim of this book is to develop efficient computational algorithms to analyze the scattering properties of various electromagnetic periodic structures using the finite-difference time-domain periodic boundary condition (FDTD/PBC) method. A new FDTD/PBC-based algorithm is introduced to analyze general skewed grid periodic structures while another algor

  20. The Impact of Microstructure on an Accurate Snow Scattering Parameterization at Microwave Wavelengths

    Science.gov (United States)

    Honeyager, Ryan

    High frequency microwave instruments are increasingly used to observe ice clouds and snow. These instruments are significantly more sensitive than conventional precipitation radar. This is ideal for analyzing ice-bearing clouds, for ice particles are tenuously distributed and have effective densities that are far less than liquid water. However, at shorter wavelengths, the electromagnetic response of ice particles is no longer solely dependent on particle mass. The shape of the ice particles also plays a significant role. Thus, in order to understand the observations of high frequency microwave radars and radiometers, it is essential to model the scattering properties of snowflakes correctly. Several research groups have proposed detailed models of snow aggregation. These particle models are coupled with computer codes that determine the particles' electromagnetic properties. However, there is a discrepancy between the particle model outputs and the requirements of the electromagnetic models. Snowflakes have countless variations in structure, but we also know that physically similar snowflakes scatter light in much the same manner. Structurally exact electromagnetic models, such as the discrete dipole approximation (DDA), require a high degree of structural resolution. Such methods are slow, spending considerable time processing redundant (i.e. useless) information. Conversely, when using techniques that incorporate too little structural information, the resultant radiative properties are not physically realistic. Then, we ask the question, what features are most important in determining scattering? This dissertation develops a general technique that can quickly parameterize the important structural aspects that determine the scattering of many diverse snowflake morphologies. A Voronoi bounding neighbor algorithm is first employed to decompose aggregates into well-defined interior and surface regions. The sensitivity of scattering to interior randomization is then

  1. SU-F-J-74: High Z Geometric Integrity and Beam Hardening Artifact Assessment Using a Retrospective Metal Artifact Reduction (MAR) Reconstruction Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Woods, K; DiCostanzo, D; Gupta, N [Ohio State University Columbus, OH (United States)

    2016-06-15

    Purpose: To test the efficacy of a retrospective metal artifact reduction (MAR) reconstruction algorithm for a commercial computed tomography (CT) scanner for radiation therapy purposes. Methods: High Z geometric integrity and artifact reduction analysis was performed with three phantoms using General Electric’s (GE) Discovery CT. The three phantoms included: a Computerized Imaging Reference Systems (CIRS) electron density phantom (Model 062) with a 6.5 mm diameter titanium rod insert, a custom spine phantom using Synthes Spine hardware submerged in water, and a dental phantom with various high Z fillings submerged in water. Each phantom was reconstructed using MAR and compared against the original scan. Furthermore, each scenario was tested using standard and extended Hounsfield Unit (HU) ranges. High Z geometric integrity was performed using the CIRS phantom, while the artifact reduction was performed using all three phantoms. Results: Geometric integrity of the 6.5 mm diameter rod was slightly overestimated for non-MAR scans for both standard and extended HU. With MAR reconstruction, the rod was underestimated for both standard and extended HU. For artifact reduction, the mean and standard deviation was compared in a volume of interest (VOI) in the surrounding material (water and water equivalent material, ∼0HU). Overall, the mean value of the VOI was closer to 0 HU for the MAR reconstruction compared to the non-MAR scan for most phantoms. Additionally, the standard deviations for all phantoms were greatly reduced using MAR reconstruction. Conclusion: GE’s MAR reconstruction algorithm improves image quality with the presence of high Z material with minimal degradation of its geometric integrity. High Z delineation can be carried out with proper contouring techniques. The effects of beam hardening artifacts are greatly reduced with MAR reconstruction. Tissue corrections due to these artifacts can be eliminated for simple high Z geometries and greatly

  2. SU-F-J-74: High Z Geometric Integrity and Beam Hardening Artifact Assessment Using a Retrospective Metal Artifact Reduction (MAR) Reconstruction Algorithm

    International Nuclear Information System (INIS)

    Woods, K; DiCostanzo, D; Gupta, N

    2016-01-01

    Purpose: To test the efficacy of a retrospective metal artifact reduction (MAR) reconstruction algorithm for a commercial computed tomography (CT) scanner for radiation therapy purposes. Methods: High Z geometric integrity and artifact reduction analysis was performed with three phantoms using General Electric’s (GE) Discovery CT. The three phantoms included: a Computerized Imaging Reference Systems (CIRS) electron density phantom (Model 062) with a 6.5 mm diameter titanium rod insert, a custom spine phantom using Synthes Spine hardware submerged in water, and a dental phantom with various high Z fillings submerged in water. Each phantom was reconstructed using MAR and compared against the original scan. Furthermore, each scenario was tested using standard and extended Hounsfield Unit (HU) ranges. High Z geometric integrity was performed using the CIRS phantom, while the artifact reduction was performed using all three phantoms. Results: Geometric integrity of the 6.5 mm diameter rod was slightly overestimated for non-MAR scans for both standard and extended HU. With MAR reconstruction, the rod was underestimated for both standard and extended HU. For artifact reduction, the mean and standard deviation was compared in a volume of interest (VOI) in the surrounding material (water and water equivalent material, ∼0HU). Overall, the mean value of the VOI was closer to 0 HU for the MAR reconstruction compared to the non-MAR scan for most phantoms. Additionally, the standard deviations for all phantoms were greatly reduced using MAR reconstruction. Conclusion: GE’s MAR reconstruction algorithm improves image quality with the presence of high Z material with minimal degradation of its geometric integrity. High Z delineation can be carried out with proper contouring techniques. The effects of beam hardening artifacts are greatly reduced with MAR reconstruction. Tissue corrections due to these artifacts can be eliminated for simple high Z geometries and greatly

  3. Extended Linear Embedding via Green's Operators for Analyzing Wave Scattering from Anisotropic Bodies

    Directory of Open Access Journals (Sweden)

    V. Lancellotti

    2014-01-01

    Linear embedding via Green’s operators (LEGO) is a domain decomposition method particularly well suited for the solution of scattering and radiation problems comprised of many objects. The latter are enclosed in simple-shaped subdomains (electromagnetic bricks), which are in turn described by means of scattering operators. In this paper we outline the extension of the LEGO approach to the case of penetrable objects with dyadic permittivity or permeability. Since a volume integral equation is only required to solve the scattering problem inside a brick, and the scattering operators are inherently surface operators, the LEGO procedure per se can afford a reduction of the number of unknowns in the numerical solution with the Method of Moments and subsectional basis functions. Further substantial reduction is achieved with the eigencurrents expansion method (EEM), which employs the eigenvectors of the scattering operator as local entire-domain basis functions over a brick’s surface. Through a few selected numerical examples we discuss the validation and the efficiency of the LEGO-EEM technique applied to clusters of anisotropic bodies.

  4. Realization of low-scattering metamaterial shell based on cylindrical wave expanding theory.

    Science.gov (United States)

    Wu, Xiaoyu; Hu, Chenggang; Wang, Min; Pu, Mingbo; Luo, Xiangang

    2015-04-20

    In this paper, we demonstrate the design of a low-scattering metamaterial shell with strong backward scattering reduction and a wide bandwidth at microwave frequencies. Low echo is achieved through cylindrical wave expanding theory, and such a shell contains only one metamaterial layer with simultaneously low permittivity and permeability. A cut-wire structure is selected to realize the low electromagnetic (EM) parameters and low loss on the brim of the resonance region. The full-model simulations show good agreement with the theoretical calculations, and illustrate that a reduction of nearly -20 dB is achieved and the -10 dB bandwidth can reach up to 0.6 GHz. Compared with cloaks based on transformation electromagnetics, the design has simpler requirements on the EM parameters and is much easier to implement when only the backward scattered field is of concern.

  5. Scattered radiation from applicators in clinical electron beams

    International Nuclear Information System (INIS)

    Battum, L J van; Zee, W van der; Huizenga, H

    2003-01-01

    In radiotherapy with high-energy (4-25 MeV) electron beams, scattered radiation from the electron applicator influences the dose distribution in the patient. In most currently available treatment planning systems for radiotherapy this component is not explicitly included and is handled only by a slight change of the intensity of the primary beam. The scattered radiation from an applicator changes with the field size and distance from the applicator. The amount of scattered radiation depends on the applicator design and on the formation of the electron beam in the treatment head. Electron applicators currently applied in most treatment machines are essentially a set of diaphragms, but still produce scattered radiation. This paper investigates the present level of scattered dose from electron applicators, and as such provides an extensive set of measured data. The data provided could for instance serve as example input data or benchmark data for advanced treatment planning algorithms which employ a parametrized initial phase space to characterize the clinical electron beam. Central axis depth dose curves of the electron beams have been measured with and without applicators in place, for various applicator sizes and energies, for a Siemens Primus, a Varian 2300 C/D and an Elekta SLi accelerator. Scattered radiation generated by the applicator has been found by subtraction of the central axis depth dose curves obtained with and without the applicator. Scattered radiation from Siemens, Varian and Elekta electron applicators is still significant and cannot be neglected in advanced treatment planning. Scattered radiation at the surface of a water phantom can be as high as 12%. Scattered radiation decreases almost linearly with depth. Scattered radiation from Varian applicators shows a clear dependence on beam energy. The Elekta applicators produce less scattered radiation than those of Varian and Siemens, but feature a higher effective angular variance. The scattered

  6. Identifying multiple influential spreaders by a heuristic clustering algorithm

    International Nuclear Information System (INIS)

    Bao, Zhong-Kui; Liu, Jian-Guo; Zhang, Hai-Feng

    2017-01-01

    The problem of influence maximization in social networks has attracted much attention. However, traditional centrality indices are suitable for the case where a single spreader is chosen as the spreading source. Often, the spreading process is instead initiated by simultaneously choosing multiple nodes as the spreading sources. In this situation, choosing the top-ranked nodes as the multiple spreaders is not an optimal strategy, since the chosen nodes are not sufficiently scattered in the network. The ideal situation for the multiple-spreaders case is that the spreaders are not only influential but also dispersively distributed in the network, yet it is difficult to meet these two conditions together. In this paper, we propose a heuristic clustering (HC) algorithm based on a similarity index to classify nodes into different clusters; the center nodes of the clusters are then chosen as the multiple spreaders. The HC algorithm not only ensures that the multiple spreaders are dispersively distributed in the network but also avoids selecting “negligible” nodes. Compared with traditional methods, our experimental results on synthetic and real networks indicate that the performance of the HC method on influence maximization is significantly better. - Highlights: • A heuristic clustering algorithm is proposed to identify multiple influential spreaders in complex networks. • The algorithm not only guarantees that the selected spreaders are sufficiently scattered but also avoids “insignificant” ones. • The performance of our algorithm is generally better than other methods, on both real and synthetic networks.
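
    A simplified sketch of the cluster-then-centre strategy is given below; it replaces the paper's similarity index with plain shortest-path distances and assumes a connected graph, so it illustrates the idea rather than reproducing the HC algorithm.

```python
import networkx as nx

def multiple_spreaders(G, k):
    """Pick k dispersed yet influential spreaders (simplified HC-style idea)."""
    deg = dict(G.degree())
    # greedy k-center seeding: start from the highest-degree node, then
    # repeatedly take the node farthest from all current seeds
    seeds = [max(deg, key=deg.get)]
    dist = nx.single_source_shortest_path_length(G, seeds[0])
    while len(seeds) < k:
        far = max(G.nodes, key=lambda v: dist.get(v, 0))
        seeds.append(far)
        d2 = nx.single_source_shortest_path_length(G, far)
        dist = {v: min(dist.get(v, 10**9), d2.get(v, 10**9)) for v in G.nodes}
    # assign each node to its nearest seed, then use each cluster's hub
    dmaps = {s: nx.single_source_shortest_path_length(G, s) for s in seeds}
    clusters = {s: [] for s in seeds}
    for v in G.nodes:
        clusters[min(seeds, key=lambda s: dmaps[s].get(v, 10**9))].append(v)
    return [max(c, key=lambda v: deg[v]) for c in clusters.values()]
```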

  7. Identifying multiple influential spreaders by a heuristic clustering algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Bao, Zhong-Kui [School of Mathematical Science, Anhui University, Hefei 230601 (China); Liu, Jian-Guo [Data Science and Cloud Service Research Center, Shanghai University of Finance and Economics, Shanghai, 200133 (China); Zhang, Hai-Feng, E-mail: haifengzhang1978@gmail.com [School of Mathematical Science, Anhui University, Hefei 230601 (China); Department of Communication Engineering, North University of China, Taiyuan, Shan' xi 030051 (China)

    2017-03-18

    The problem of influence maximization in social networks has attracted much attention. However, traditional centrality indices are suitable for the case where a single spreader is chosen as the spreading source. Often, the spreading process is instead initiated by simultaneously choosing multiple nodes as the spreading sources. In this situation, choosing the top-ranked nodes as the multiple spreaders is not an optimal strategy, since the chosen nodes are not sufficiently scattered in the network. The ideal situation for the multiple-spreaders case is that the spreaders are not only influential but also dispersively distributed in the network, yet it is difficult to meet these two conditions together. In this paper, we propose a heuristic clustering (HC) algorithm based on a similarity index to classify nodes into different clusters; the center nodes of the clusters are then chosen as the multiple spreaders. The HC algorithm not only ensures that the multiple spreaders are dispersively distributed in the network but also avoids selecting “negligible” nodes. Compared with traditional methods, our experimental results on synthetic and real networks indicate that the performance of the HC method on influence maximization is significantly better. - Highlights: • A heuristic clustering algorithm is proposed to identify multiple influential spreaders in complex networks. • The algorithm not only guarantees that the selected spreaders are sufficiently scattered but also avoids “insignificant” ones. • The performance of our algorithm is generally better than other methods, on both real and synthetic networks.

  8. Generalized phase retrieval algorithm based on information measures

    OpenAIRE

    Shioya, Hiroyuki; Gohara, Kazutoshi

    2006-01-01

    An iterative phase retrieval algorithm based on the maximum entropy method (MEM) is presented. Introducing a new generalized information measure, we derive a novel class of algorithms which includes the conventionally used error reduction algorithm and a MEM-type iterative algorithm which is presented for the first time. These different phase retrieval methods are unified on the basis of the framework of information measures used in information theory.

  9. Classification of Non-Small Cell Lung Cancer Using Significance Analysis of Microarray-Gene Set Reduction Algorithm

    Directory of Open Access Journals (Sweden)

    Lei Zhang

    2016-01-01

    Among non-small cell lung cancers (NSCLC), adenocarcinoma (AC) and squamous cell carcinoma (SCC) are the two major histology subtypes, accounting for roughly 40% and 30% of all lung cancer cases, respectively. Since AC and SCC differ in their cell of origin, location within the lung, and growth pattern, they are considered distinct diseases. Gene expression signatures have been demonstrated to be an effective tool for distinguishing AC and SCC. Gene set analysis is usually regarded as irrelevant to the identification of gene expression signatures. Nevertheless, we found that one specific gene set analysis method, significance analysis of microarray-gene set reduction (SAMGSR), can be adopted directly to select relevant features and to construct gene expression signatures. In this study, we applied SAMGSR to an NSCLC gene expression dataset. When compared with several novel feature selection algorithms, for example LASSO, SAMGSR has equivalent or better performance in terms of predictive ability and model parsimony. Therefore, SAMGSR is indeed a feature selection algorithm. Additionally, we applied SAMGSR to the AC and SCC subtypes separately to discriminate their respective stages, that is, stage II versus stage I. The few overlaps between the two resulting gene signatures illustrate that AC and SCC are technically distinct diseases. Therefore, stratified analyses on subtypes are recommended when diagnostic or prognostic signatures of these two NSCLC subtypes are constructed.

  10. Shrinkage-thresholding enhanced born iterative method for solving 2D inverse electromagnetic scattering problem

    KAUST Repository

    Desmal, Abdulla; Bagci, Hakan

    2014-01-01

    A numerical framework that incorporates recently developed iterative shrinkage thresholding (IST) algorithms within the Born iterative method (BIM) is proposed for solving the two-dimensional inverse electromagnetic scattering problem. IST

  11. Inversion Algorithms and PS Detection in SAR Tomography, Case Study of Bucharest City

    Directory of Open Access Journals (Sweden)

    C. Dănişor

    2016-06-01

    Synthetic Aperture Radar (SAR) tomography can reconstruct the elevation profile of each pixel based on a set of co-registered complex images of a scene. Its main advantage over classical interferometric methods consists in the capability to improve the detection of single persistent scatterers as well as to enable the detection of multiple scatterers interfering within the same pixel. In this paper, three tomographic algorithms are compared and applied to a dataset of 32 images to generate the elevation map of the dominant scatterers in a scene. Targets which present stable properties over time, persistent scatterers (PS), are then detected based on reflectivity functions reconstructed with Capon filtering.

  12. Some aspects of Trim-algorithm modernization for Monte-Carlo method

    International Nuclear Information System (INIS)

    Dovnar, S.V.; Grigor'ev, V.V.; Kamyshan, M.A.; Leont'ev, A.V.; Yanusko, S.V.

    2001-01-01

    Some aspects of modernizing the TRIM algorithm for the Monte Carlo method are discussed. This modification makes the program more universal in its handling of various ion-atom interaction potentials and improves the calculation precision for the scattering angle θ_c.

  13. A Robust Algorithm to Determine the Topology of Space from the Cosmic Microwave Background Radiation

    OpenAIRE

    Weeks, Jeffrey R.

    2001-01-01

    Satellite measurements of the cosmic microwave background radiation will soon provide an opportunity to test whether the universe is multiply connected. This paper presents a new algorithm for deducing the topology of the universe from the microwave background data. Unlike an older algorithm, the new algorithm gives the curvature of space and the radius of the last scattering surface as outputs, rather than requiring them as inputs. The new algorithm is also more tolerant of erro...

  14. From parallel to distributed computing for reactive scattering calculations

    International Nuclear Information System (INIS)

    Lagana, A.; Gervasi, O.; Baraglia, R.

    1994-01-01

    Some reactive scattering codes have been ported to different innovative computer architectures ranging from massively parallel machines to clustered workstations. The porting required a drastic restructuring of the codes to single out computationally decoupled CPU-intensive subsections. The suitability of different theoretical approaches for parallel and distributed restructuring is discussed, and the efficiency of the related algorithms evaluated.

  15. Retrieval of the projected potential by inversion from the scattering matrix in electron-crystal scattering

    International Nuclear Information System (INIS)

    Allen, L.J.; Spargo, A.E.C.; Leeb, H.

    1998-01-01

    The retrieval of a unique crystal potential from the scattering matrix S in high energy transmission electron diffraction is discussed. It is shown that, in general, data taken at a single orientation are not sufficient to determine all the elements of S. Additional measurements with tilted incident beam are required for the determination of the whole S-matrix. An algorithm for the extraction of the crystal potential from the S-matrix measured at a single energy and thickness is presented. The limiting case of thin crystals is discussed. Several examples with simulated data are considered

  16. Correlation between porosity and roughness as obtained by porous silicon nano surface scattering spectrum

    Directory of Open Access Journals (Sweden)

    R Dariani

    2015-01-01

    Reflection spectra of four porous silicon samples prepared under etching times of 2, 6, 10, and 14 min with a current density of 10 mA/cm² were measured. The reflection spectra of all samples showed the same behavior, but their intensities differed and decreased with increasing etching time. The similar behavior of the reflection spectra can be attributed to the electrolyte solution concentration, which was the same during fabrication, while the reduction of the reflection intensity is due to the reduction of particle size. The region of lowest intensity in the reflection spectra is related to the porous silicon energy gap, which shows a blue shift. The roughness of the porous silicon samples was studied by scattering spectra measurements, the Rayleigh criterion, and the Davis-Bennet equation. Scattering spectra of the samples were measured at 10, 15, and 20 degrees using a spectrophotometer. The reflected light intensity decreased with increasing scattering angle, except for normal scattering, in agreement with the Rayleigh criterion. Our results also showed that with increasing etching time, porosity (the size and number of pores) increases, and therefore light absorption increases and scattering from the surface is reduced. But since scattering varies with the observation scale (wavelength), the relationship between scattering and porosity differs with the observation scale (wavelength)

  17. The OMPS Limb Profiler Instrument: Two-Dimensional Retrieval Algorithm

    Science.gov (United States)

    Rault, Didier F.

    2010-01-01

    The upcoming Ozone Mapper and Profiler Suite (OMPS), which will be launched on the NPOESS Preparatory Project (NPP) platform in early 2011, will continue monitoring the global distribution of the Earth's middle atmosphere ozone and aerosol. OMPS is composed of three instruments, namely the Total Column Mapper (heritage: TOMS, OMI), the Nadir Profiler (heritage: SBUV) and the Limb Profiler (heritage: SOLSE/LORE, OSIRIS, SCIAMACHY, SAGE III). The ultimate goal of the mission is to better understand and quantify the rate of stratospheric ozone recovery. The focus of the paper will be on the Limb Profiler (LP) instrument. The LP instrument will measure the Earth's limb radiance (which is due to the scattering of solar photons by air molecules, aerosol and Earth surface) in the ultra-violet (UV), visible and near infrared, from 285 to 1000 nm. The LP simultaneously images the whole vertical extent of the Earth's limb through three vertical slits, each covering a vertical tangent height range of 100 km and each horizontally spaced by 250 km in the cross-track direction. Measurements are made every 19 seconds along the orbit track, which corresponds to a distance of about 150km. Several data analysis tools are presently being constructed and tested to retrieve ozone and aerosol vertical distribution from limb radiance measurements. The primary NASA algorithm is based on earlier algorithms developed for the SOLSE/LORE and SAGE III limb scatter missions. All the existing retrieval algorithms rely on a spherical symmetry assumption for the atmosphere structure. While this assumption is reasonable in most of the stratosphere, it is no longer valid in regions of prime scientific interest, such as polar vortex and UTLS regions. The paper will describe a two-dimensional retrieval algorithm whereby the ozone distribution is simultaneously retrieved vertically and horizontally for a whole orbit. The retrieval code relies on (1) a forward 2D Radiative Transfer code (to model limb

  18. Scattering-parameter extraction and calibration techniques for RF free-space material characterization

    DEFF Research Database (Denmark)

    Kaniecki, M.; Saenz, E.; Rolo, L.

    2014-01-01

    This paper demonstrates a method for material characterization (permittivity, permeability, loss tangent) based on the scattering parameters. The performance of the extraction algorithm will be shown for modelled and measured data. The measurements were carried out at the European Space Agency...

  19. Leakage Detection and Estimation Algorithm for Loss Reduction in Water Piping Networks

    Directory of Open Access Journals (Sweden)

    Kazeem B. Adedeji

    2017-10-01

    Water loss through leaking pipes constitutes a major challenge to the operational service of water utilities. In recent years, increasing concern about the financial loss and environmental pollution caused by leaking pipes has been driving the development of efficient algorithms for detecting leakage in water piping networks. Water distribution networks (WDNs) are disperse in nature, with a large number of nodes and branches. Consequently, identifying the segment(s) of the network, and the exact leaking pipelines connected to those segment(s), where higher background leakage outflow occurs is a challenging task. Background leakage concerns the outflow from small cracks or deteriorated joints. In addition, because they are diffuse flows, they are not characterised by a quick pressure drop and are not detectable by measuring instruments. Consequently, they go unreported for a long period of time, adding to the volume of water lost. Most of the existing research focuses on the detection and localisation of burst-type leakages, which are characterised by a sudden pressure drop. In this work, an algorithm for detecting and estimating background leakage in water distribution networks is presented. The algorithm integrates a leakage model into a classical WDN hydraulic model for solving the network leakage flows. The applicability of the developed algorithm is demonstrated on two different water networks. The results for the tested networks are discussed, and the solutions obtained show the benefits of the proposed algorithm. Notably, the algorithm permits the detection of critical segments or pipes of the network experiencing higher leakage outflow and indicates the probable pipes of the network where pressure control can be performed. However, the possible position of pressure control elements along such critical pipes will be addressed in future work.
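
    The background-leakage term that such an algorithm embeds in the hydraulic model is commonly written as an emitter-type law, q = βLp^α. The sketch below evaluates it for fixed nodal pressures and ranks candidate critical pipes; the β and α values are illustrative, and the paper couples this term with a full network solve rather than fixed pressures.

```python
def background_leakage(pipes, pressures, beta=1e-5, alpha=1.18):
    """Emitter-type background leakage per pipe: q = beta * L * p_mean**alpha.

    pipes     : list of (pipe_id, node_i, node_j, length_m)
    pressures : dict node -> pressure head in m (assumed already solved)
    Returns leakage flow per pipe and the pipe ids sorted worst-first.
    """
    q = {}
    for pid, ni, nj, length in pipes:
        p_mean = max(0.0, 0.5 * (pressures[ni] + pressures[nj]))
        q[pid] = beta * length * p_mean ** alpha
    ranked = sorted(q, key=q.get, reverse=True)   # candidate critical pipes
    return q, ranked
```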

  20. A Faster Algorithm for Computing Straight Skeletons

    KAUST Repository

    Mencel, Liam A.

    2014-01-01

    computation in O(n (log n) log r) time. It improves on the previously best known algorithm for this reduction, which is randomised, and runs in expected O(n √(h+1) log² n) time for a polygon with h holes. Using known motorcycle graph algorithms, our result

  1. A DE-Based Scatter Search for Global Optimization Problems

    Directory of Open Access Journals (Sweden)

    Kun Li

    2015-01-01

    Full Text Available This paper proposes a hybrid scatter search (SS) algorithm for continuous global optimization problems by incorporating the evolution mechanism of differential evolution (DE) into the reference set update procedure of SS, to act as the new solution generation method. This hybrid algorithm is called a DE-based SS (SSDE) algorithm. Since different kinds of DE mutation operators have been proposed in the literature, and they have shown different search abilities for different kinds of problems, four traditional mutation operators are adopted in the hybrid SSDE algorithm. To adaptively select the mutation operator that is most appropriate to the current problem, an adaptive mechanism for the candidate mutation operators is developed. In addition, to enhance the exploration ability of SSDE, a reinitialization method is adopted to create a new population and subsequently construct a new reference set whenever the search process of SSDE is trapped in a local optimum. Computational experiments on benchmark problems show that the proposed SSDE is competitive or superior to some state-of-the-art algorithms in the literature.
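    The adaptive-operator idea can be illustrated with a small sketch: four classic DE mutations whose selection probabilities are updated from their recent success rates. The operators are the standard DE/rand/1, DE/best/1, DE/rand/2 and DE/current-to-best/1; the learning-rate update and all parameter values are illustrative assumptions, not the paper's exact mechanism (population size must be at least 6 for the five distinct donors).

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate(pop, best, i, F, op):
    """Apply one of four classic DE mutation operators to individual i."""
    n = len(pop)
    r = rng.choice([j for j in range(n) if j != i], size=5, replace=False)
    a, b, c, d, e = pop[r]
    if op == 0:   # DE/rand/1
        return a + F * (b - c)
    if op == 1:   # DE/best/1
        return best + F * (a - b)
    if op == 2:   # DE/rand/2
        return a + F * (b - c) + F * (d - e)
    return pop[i] + F * (best - pop[i]) + F * (a - b)  # DE/current-to-best/1

def ssde_step(pop, fit, f, probs, F=0.5, lr=0.1):
    """One generation with adaptive, success-driven operator selection."""
    best = pop[np.argmin(fit)]
    succ = np.zeros(4)
    for i in range(len(pop)):
        op = rng.choice(4, p=probs)
        trial = mutate(pop, best, i, F, op)
        if f(trial) < fit[i]:
            pop[i], fit[i] = trial, f(trial)
            succ[op] += 1                      # operator produced an improvement
    probs = (1 - lr) * probs + lr * (succ + 1e-3) / (succ.sum() + 4e-3)
    return pop, fit, probs / probs.sum()
```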

  2. Numerical computations of interior transmission eigenvalues for scattering objects with cavities

    International Nuclear Information System (INIS)

    Peters, Stefan; Kleefeld, Andreas

    2016-01-01

    In this article we extend the inside-outside duality for acoustic transmission eigenvalue problems by allowing scattering objects that may contain cavities. In this context we provide the functional analytical framework necessary to transfer the techniques that have been used in Kirsch and Lechleiter (2013 Inverse Problems, 29 104011) to derive the inside-outside duality. Additionally, extensive numerical results are presented to show that we are able to successfully detect interior transmission eigenvalues with the inside-outside duality approach for a variety of obstacles with and without cavities in three dimensions. In this context, we also discuss the advantages and disadvantages of the inside-outside duality approach from a numerical point of view. Furthermore we derive the integral equations necessary to extend the algorithm in Kleefeld (2013 Inverse Problems, 29 104012) to compute highly accurate interior transmission eigenvalues for scattering objects with cavities, which we will then use as reference values to examine the accuracy of the inside-outside duality algorithm. (paper)

  3. Optimization of Selected Remote Sensing Algorithms for Embedded NVIDIA Kepler GPU Architecture

    Science.gov (United States)

    Riha, Lubomir; Le Moigne, Jacqueline; El-Ghazawi, Tarek

    2015-01-01

    This paper evaluates the potential of the embedded graphics processing unit in NVIDIA's Tegra K1 for onboard processing. The performance is compared to a general-purpose multi-core CPU and a full-fledged GPU accelerator. This study uses two algorithms: Wavelet Spectral Dimension Reduction of Hyperspectral Imagery and the Automated Cloud-Cover Assessment (ACCA) algorithm. The Tegra K1 achieved 51 for the ACCA algorithm and 20 for the dimension reduction algorithm, as compared to the performance of a high-end 8-core server Intel Xeon CPU with 13.5 times higher power consumption.

  4. Resonant X-ray Scattering of carbonyl sulfide at the sulfur K edge

    International Nuclear Information System (INIS)

    Journel, Loïc; Marchenko, Tatiana; Guillemin, Renaud; Kawerk, Elie; Simon, Marc; Kavčič, Matjaž; Žitnik, Matjaž; Bučar, Klemen; Bohinc, Rok

    2015-01-01

    New results on free OCS molecules have been obtained using Resonant Inelastic X-ray Scattering spectroscopy. A deconvolution algorithm has been applied to improve the energy resolution of the spectra, from which detailed information on nuclear dynamics in the system can be extracted. (paper)

  5. Resonant X-ray Scattering of carbonyl sulfide at the sulfur K edge

    OpenAIRE

    Journel, Loïc; Marchenko, Tatiana; Guillemin, Renaud; Kawerk, Elie; Kavčič, Matjaž; Žitnik, Matjaž; Bučar, Klemen; Bohinc, Rok; Simon, Marc

    2015-01-01

    International audience; New results on free OCS molecules have been obtained using Resonant Inelastic X-ray Scattering spectroscopy. A deconvolution algorithm has been applied to improve the energy resolution of the spectra, from which detailed information on nuclear dynamics in the system can be extracted.

  6. Subjet distributions in deep inelastic scattering at HERA

    Energy Technology Data Exchange (ETDEWEB)

    Chekanov, S.; Derrick, M.; Magill, S. [Argonne National Lab., Argonne, IL (US)] (and others)

    2008-12-15

    Subjet distributions were measured in neutral current deep inelastic ep scattering with the ZEUS detector at HERA using an integrated luminosity of 81.7 pb^-1. Jets were identified using the k_T cluster algorithm in the laboratory frame. Subjets were defined as jet-like substructures identified by a reapplication of the cluster algorithm at a smaller value of the resolution parameter y_cut. Measurements of subjet distributions for jets with exactly two subjets for y_cut = 0.05 are presented as functions of observables sensitive to the pattern of parton radiation and to the colour coherence between the initial and final states. Perturbative QCD predictions give an adequate description of the data. (orig.)

  7. Subjet distributions in deep inelastic scattering at HERA

    International Nuclear Information System (INIS)

    Chekanov, S.; Derrick, M.; Magill, S.

    2008-12-01

    Subjet distributions were measured in neutral current deep inelastic ep scattering with the ZEUS detector at HERA using an integrated luminosity of 81.7 pb^-1. Jets were identified using the k_T cluster algorithm in the laboratory frame. Subjets were defined as jet-like substructures identified by a reapplication of the cluster algorithm at a smaller value of the resolution parameter y_cut. Measurements of subjet distributions for jets with exactly two subjets for y_cut = 0.05 are presented as functions of observables sensitive to the pattern of parton radiation and to the colour coherence between the initial and final states. Perturbative QCD predictions give an adequate description of the data. (orig.)

  8. Impact of Noise Reduction Algorithm in Cochlear Implant Processing on Music Enjoyment.

    Science.gov (United States)

    Kohlberg, Gavriel D; Mancuso, Dean M; Griffin, Brianna M; Spitzer, Jaclyn B; Lalwani, Anil K

    2016-06-01

    Noise reduction algorithms (NRA) in speech processing strategies have a positive impact on speech perception among cochlear implant (CI) listeners. We sought to evaluate the effect of NRA on music enjoyment. Prospective analysis of music enjoyment. Academic medical center. Normal-hearing (NH) adults (N = 16) and CI listeners (N = 9). Subjective rating of music excerpts. NH and CI listeners evaluated a country music piece on three enjoyment modalities: pleasantness, musicality, and naturalness. Participants listened to the original version and 20 modified, less complex versions created by including subsets of musical instruments from the original song. NH participants listened to the segments through CI simulation, and CI listeners listened to the segments with their usual speech processing strategy, with and without NRA. Decreasing the number of instruments was significantly associated with an increase in pleasantness and naturalness in both NH and CI subjects (p < 0.05). NRA had no significant effect on the enjoyment ratings (p > 0.05): this was true for the original and the modified music segments with one to three instruments (p > 0.05). NRA does not affect music enjoyment in CI listeners or NH individuals with CI simulation. This suggests that strategies to enhance speech processing will not necessarily have a positive impact on music enjoyment. However, reducing the complexity of music shows promise in enhancing music enjoyment and should be further explored.

  9. Improved Global Ocean Color Using Polymer Algorithm

    Science.gov (United States)

    Steinmetz, Francois; Ramon, Didier; Deschamps, Pierre-Yves; Stum, Jacques

    2010-12-01

    A global ocean color product has been developed based on the use of the POLYMER algorithm to correct for atmospheric scattering and sun glint and to process the data to a Level 2 ocean color product. Thanks to the use of this algorithm, the coverage and accuracy of the MERIS ocean color product have been significantly improved when compared to the standard product, therefore increasing its usefulness for global ocean monitoring applications like GLOBCOLOUR. We will present the latest developments of the algorithm, its first application to MODIS data and its validation against in-situ data from the MERMAID database. Examples will be shown of global NRT chlorophyll maps produced by CLS with POLYMER for operational applications such as fishing or the oil and gas industry, as well as its use by Scripps for a NASA study of the Beaufort and Chukchi seas.

  10. Subspace-Based Noise Reduction for Speech Signals via Diagonal and Triangular Matrix Decompositions

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jensen, Søren Holdt

    2007-01-01

    We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using both diagonal (eigenvalue and singular value) decompositions and rank-revealing triangular decompositions (ULV, URV, VSV, ULLV and ULLIV). In addition we show how the subspace-based algorithms can be evaluated and compared by means of simple FIR filter interpretations. The algorithms are illustrated with working Matlab code and applications in speech processing.

  11. Invited Article: Acousto-optic finite-difference frequency-domain algorithm for first-principles simulations of on-chip acousto-optic devices

    Directory of Open Access Journals (Sweden)

    Yu Shi

    2017-02-01

    Full Text Available We introduce a finite-difference frequency-domain algorithm for coupled acousto-optic simulations. First-principles acousto-optic simulation in time domain has been challenging due to the fact that the acoustic and optical frequencies differ by many orders of magnitude. We bypass this difficulty by formulating the interactions between the optical and acoustic waves rigorously as a system of coupled nonlinear equations in frequency domain. This approach is particularly suited for on-chip devices that are based on a variety of acousto-optic interactions such as the stimulated Brillouin scattering. We validate our algorithm by simulating a stimulated Brillouin scattering process in a suspended waveguide structure and find excellent agreement with coupled-mode theory. We further provide an example of a simulation for a compact on-chip resonator device that greatly enhances the effect of stimulated Brillouin scattering. Our algorithm should facilitate the design of nanophotonic on-chip devices for the harnessing of photon-phonon interactions.

  12. LTREE - a lisp-based algorithm for cutset generation using Boolean reduction

    International Nuclear Information System (INIS)

    Finnicum, D.J.; Rzasa, P.W.

    1985-01-01

    Fault tree analysis is an important tool for evaluating the safety of nuclear power plants. The basic objective of fault tree analysis is to determine the probability that an undesired event or combination of events will occur. Fault tree analysis involves four main steps: (1) specifying the undesired event or events; (2) constructing the fault tree which represents the ways in which the postulated event(s) could occur; (3) qualitative evaluation of the logic model to identify the minimal cutsets; and (4) quantitative evaluation of the logic model to determine the probability that the postulated event(s) will occur given the probability of occurrence of each individual fault. This paper describes a LISP-based algorithm for the qualitative evaluation of fault trees. Development of this algorithm is the first step in a project to apply expert systems technology to the automation of the fault tree analysis process. The first section of this paper provides an overview of LISP and its capabilities, the second section describes the LTREE algorithm, and the third section discusses ongoing research areas.
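    Step (3), minimal cutset generation by Boolean reduction, can be sketched compactly. The sketch below is generic Python rather than LTREE's LISP, assuming a fault tree given as nested AND/OR gates over basic events: OR gates union their children's cutsets, AND gates take cross-products, and the absorption law removes non-minimal sets.

```python
from itertools import product

def cutsets(node):
    """Expand a gate into a list of cutsets (sets of basic events)."""
    if isinstance(node, str):                      # basic event
        return [{node}]
    op, *kids = node
    kid_sets = [cutsets(k) for k in kids]
    if op == 'OR':                                 # union of children's cutsets
        return [cs for sets in kid_sets for cs in sets]
    # AND: cross-product, merging one cutset from each child
    return [set().union(*combo) for combo in product(*kid_sets)]

def minimize(sets):
    """Remove non-minimal cutsets via the absorption law (A subset of B absorbs B)."""
    minimal = []
    for cs in sorted(sets, key=len):
        if not any(m <= cs for m in minimal):
            minimal.append(cs)
    return minimal

# Example: TOP = (A AND B) OR (A AND B AND C) OR D  ->  [{'D'}, {'A', 'B'}]
top = ('OR', ('AND', 'A', 'B'), ('AND', 'A', 'B', 'C'), 'D')
print(minimize(cutsets(top)))
```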

  13. Variance Reduction Techniques in Monte Carlo Methods

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.

    2010-01-01

    Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the

  14. Strong paramagnon scattering in single atom Pd contacts

    DEFF Research Database (Denmark)

    Schendel, V.; Barreteau, Cyrille; Brandbyge, Mads

    2017-01-01

    Pd contacts shows a reduction with increasing bias, which gives rise to a peculiar Lambda-shaped spectrum. Supported by theoretical calculations, we correlate this finding with the lifetime of hot quasiparticles in Pd, which is strongly influenced by paramagnon scattering. In contrast to this, Co...

  15. Algorithm simulating the atom displacement processes induced by the gamma rays on the base of Monte Carlo method

    International Nuclear Information System (INIS)

    Cruz, C. M.; Pinera, I; Abreu, Y.; Leyva, A.

    2007-01-01

    The present work concerns the implementation of a Monte Carlo based calculation algorithm describing the occurrence of atom displacements induced by gamma radiation interactions in a given target material. The atom displacement processes were considered only on the basis of single elastic scattering interactions of fast secondary electrons with matrix atoms, which are ejected from their crystalline sites at recoil energies higher than a given threshold energy. The secondary electron transport was described assuming typical approaches on this matter, where consecutive small-angle scattering and very low energy transfer events are treated as a continuous, quasi-classical evolution of the electron state along a given path length delimited by two discrete large-angle, high energy loss events happening in a random way. A limiting scattering angle was introduced and calculated according to the Moliere-Bethe-Goudsmit-Saunderson theory of electron multiple scattering, which allows single scattering processes of secondary electrons to be split from multiple ones, whereby a modified McKinley-Feshbach electron elastic scattering cross section arises. This distribution was statistically sampled and simulated in the framework of the Monte Carlo method to perform discrete single electron scattering processes, particularly those leading to atom displacement events. The possibility of adding this algorithm to existing open Monte Carlo code systems is analysed, in order to improve their capabilities. (Author)

  16. Maximum likelihood positioning algorithm for high-resolution PET scanners

    International Nuclear Information System (INIS)

    Gross-Weege, Nicolas; Schug, David; Hallen, Patrick; Schulz, Volkmar

    2016-01-01

    Purpose: In high-resolution positron emission tomography (PET), lightsharing elements are incorporated into typical detector stacks to read out scintillator arrays in which one scintillator element (crystal) is smaller than the size of the readout channel. In order to identify the hit crystal by means of the measured light distribution, a positioning algorithm is required. One commonly applied positioning algorithm uses the center of gravity (COG) of the measured light distribution. The COG algorithm is limited in spatial resolution by noise and intercrystal Compton scatter. The purpose of this work is to develop a positioning algorithm which overcomes this limitation. Methods: The authors present a maximum likelihood (ML) algorithm which compares a set of expected light distributions given by probability density functions (PDFs) with the measured light distribution. Instead of modeling the PDFs by using an analytical model, the PDFs of the proposed ML algorithm are generated assuming a single-gamma-interaction model from measured data. The algorithm was evaluated with a hot-rod phantom measurement acquired with the preclinical HYPERION II D PET scanner. In order to assess the performance with respect to sensitivity, energy resolution, and image quality, the ML algorithm was compared to a COG algorithm which calculates the COG from a restricted set of channels. The authors studied the energy resolution of the ML and the COG algorithm regarding incomplete light distributions (missing channel information caused by detector dead time). Furthermore, the authors investigated the effects of using a filter based on the likelihood values on sensitivity, energy resolution, and image quality. Results: A sensitivity gain of up to 19% was demonstrated in comparison to the COG algorithm for the selected operation parameters. Energy resolution and image quality were on a similar level for both algorithms. Additionally, the authors demonstrated that the performance of the ML
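    The contrast between the two positioning schemes can be sketched in a few lines. The sketch below assumes Gaussian per-channel light-distribution PDFs purely for illustration; in the paper the PDFs are generated from measured data under a single-gamma-interaction model, and the likelihood values additionally drive an event filter.

```python
import numpy as np

def cog_position(signals, channel_pos):
    """Center of gravity of the measured light distribution."""
    w = np.asarray(signals, float)
    return (w @ channel_pos) / w.sum()

def ml_crystal(signals, pdf_means, pdf_sigmas):
    """Pick the crystal whose expected light distribution best explains the
    measurement: Gaussian log-likelihood summed over readout channels.

    pdf_means[k, c]: expected signal in channel c for a hit in crystal k
    (assumed Gaussian stand-ins for the measured PDFs).
    """
    s = np.asarray(signals, float)
    loglik = -0.5 * np.sum(((s - pdf_means) / pdf_sigmas) ** 2
                           + np.log(2 * np.pi * pdf_sigmas ** 2), axis=1)
    return int(np.argmax(loglik)), loglik   # loglik can feed an event filter
```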

  17. Geometrical-optics approximation of forward scattering by coated particles.

    Science.gov (United States)

    Xu, Feng; Cai, Xiaoshu; Ren, Kuanfang

    2004-03-20

    By means of geometrical optics we present an approximation algorithm with which to accelerate the computation of scattering intensity distribution within a forward angular range (0 degrees-60 degrees) for coated particles illuminated by a collimated incident beam. Phases of emerging rays are exactly calculated to improve the approximation precision. This method proves effective for transparent and tiny absorbent particles with size parameters larger than 75 but fails to give good approximation results at scattering angles at which refractive rays are absent. When the absorption coefficient of a particle is greater than 0.01, the geometrical optics approximation is effective only for forward small angles, typically less than 10 degrees or so.

  18. A HYBRID HEURISTIC ALGORITHM FOR SOLVING THE RESOURCE CONSTRAINED PROJECT SCHEDULING PROBLEM (RCPSP

    Directory of Open Access Journals (Sweden)

    Juan Carlos Rivera

    Full Text Available The Resource Constrained Project Scheduling Problem (RCPSP) is a problem of great interest for the scientific community because it belongs to the class of NP-hard problems and no methods are known that can solve it exactly in polynomial processing times. For this reason heuristic methods are used to solve it in an efficient way, though there is no guarantee that an optimal solution can be obtained. This research presents a hybrid heuristic search algorithm to solve the RCPSP efficiently, combining elements of the Greedy Randomized Adaptive Search Procedure (GRASP) heuristic, Scatter Search, and Justification. The efficiency obtained is measured taking into account the presence of the new elements added to the GRASP algorithm taken as base: Justification and Scatter Search. The algorithms are evaluated using three databases of instances of the problem: 480 instances of 30 activities, 480 of 60, and 600 of 120 activities respectively, taken from the PSPLIB library available online. The solutions obtained by the developed algorithm for the instances of 30, 60 and 120 activities are compared with results obtained by other researchers at the international level, where a prominent place is obtained, according to Chen (2011).
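    The GRASP component follows the standard construct-then-improve loop. Below is a generic GRASP skeleton, not the authors' hybrid: `construct_step`, `local_search` and `cost` are hypothetical callables the user supplies (for the RCPSP, e.g., a priority rule, a justification pass and the makespan), and `alpha` controls the greediness of the restricted candidate list.

```python
import random

def grasp(candidates, cost, construct_step, local_search,
          iters=100, alpha=0.3, seed=1):
    """Generic GRASP: greedy randomized construction + local search."""
    rng = random.Random(seed)
    best, best_cost = None, float('inf')
    for _ in range(iters):
        solution, remaining = [], list(candidates)
        while remaining:
            scored = sorted(remaining, key=lambda c: construct_step(solution, c))
            # restricted candidate list: the best alpha-fraction of choices
            rcl = scored[:max(1, int(alpha * len(scored)))]
            pick = rng.choice(rcl)              # randomized greedy choice
            solution.append(pick)
            remaining.remove(pick)
        solution = local_search(solution)       # improvement phase
        c = cost(solution)
        if c < best_cost:
            best, best_cost = solution, c
    return best, best_cost
```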

  19. Improved image quality in abdominal CT in patients who underwent treatment for hepatocellular carcinoma with small metal implants using a raw data-based metal artifact reduction algorithm.

    Science.gov (United States)

    Sofue, Keitaro; Yoshikawa, Takeshi; Ohno, Yoshiharu; Negi, Noriyuki; Inokawa, Hiroyasu; Sugihara, Naoki; Sugimura, Kazuro

    2017-07-01

    To determine the value of a raw data-based metal artifact reduction (SEMAR) algorithm for image quality improvement in abdominal CT for patients with small metal implants. Fifty-eight patients with small metal implants (3-15 mm in size) who underwent treatment for hepatocellular carcinoma were imaged with CT. CT data were reconstructed by filtered back projection with and without the SEMAR algorithm in axial and coronal planes. To evaluate metal artefact reduction, mean CT number (HU and SD) and artefact index (AI) values within the liver were calculated. Two readers independently evaluated image quality of the liver and pancreas and visualization of vasculature using a 5-point visual score. HU and AI values and image quality on images with and without SEMAR were compared using the paired Student's t-test and Wilcoxon signed rank test. Interobserver agreement was evaluated using the linear-weighted κ test. Mean HU and AI values on images with SEMAR were significantly lower than those without SEMAR (P < 0.05), indicating improved image quality in patients with small metal implants through the reduction of metallic artefacts. • SEMAR algorithm significantly reduces metallic artefacts from small implants in abdominal CT. • SEMAR can improve image quality of the liver in dynamic CECT. • Confidence visualization of hepatic vascular anatomies can also be improved by SEMAR.

  20. Comparison of analyzer-based imaging computed tomography extraction algorithms and application to bone-cartilage imaging

    International Nuclear Information System (INIS)

    Diemoz, Paul C; Bravin, Alberto; Coan, Paola; Glaser, Christian

    2010-01-01

    In x-ray phase-contrast analyzer-based imaging, the contrast is provided by a combination of absorption, refraction and scattering effects. Several extraction algorithms, which attempt to separate and quantify these different physical contributions, have been proposed and applied. In a previous work, we presented a quantitative comparison of five of the most well-known extraction algorithms based on the geometrical optics approximation applied to planar images: diffraction-enhanced imaging (DEI), extended diffraction-enhanced imaging (E-DEI), generalized diffraction-enhanced imaging (G-DEI), multiple-image radiography (MIR) and Gaussian curve fitting (GCF). In this paper, we compare these algorithms in the case of the computed tomography (CT) modality. The extraction algorithms are applied to analyzer-based CT images of both plastic phantoms and biological samples (cartilage-on-bone cylinders). Absorption, refraction and scattering signals are derived. Results obtained with the different algorithms may vary greatly, especially in the case of large refraction angles. We show that ABI-CT extraction algorithms can provide an excellent tool to enhance the visualization of cartilage internal structures, which may find applications in a clinical context. In addition, using the refraction images, the refractive index decrements for both the cartilage matrix and the cartilage cells have been estimated.

  1. Low-dose multiple-information retrieval algorithm for X-ray grating-based imaging

    International Nuclear Information System (INIS)

    Wang Zhentian; Huang Zhifeng; Chen Zhiqiang; Zhang Li; Jiang Xiaolei; Kang Kejun; Yin Hongxia; Wang Zhenchang; Stampanoni, Marco

    2011-01-01

    The present work proposes a low-dose information retrieval algorithm for the X-ray grating-based multiple-information imaging (GB-MII) method, which can retrieve the attenuation, refraction and scattering information of samples from only three images. This algorithm aims at reducing the exposure time and the dose delivered to the sample. The multiple-information retrieval problem in GB-MII is solved by transforming a set of nonlinear equations into linear ones, exploiting the properties of the trigonometric functions. The proposed algorithm is validated by experiments on both a conventional X-ray source and a synchrotron X-ray source, and compared with the traditional multiple-image-based retrieval algorithm. The experimental results show that our algorithm is comparable with the traditional retrieval algorithm and especially suitable for systems with a high signal-to-noise ratio.
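    The three-image trick can be illustrated with the textbook phase-stepping identity (an illustrative assumption; the paper's exact linearization may differ). With the grating stepped to phases 0, 2π/3 and 4π/3, each pixel records I_k = a(1 + v cos(φ + 2πk/3)), and simple trigonometric sums recover a (attenuation), v (visibility, whose loss encodes scattering) and φ (refraction):

```python
import numpy as np

def retrieve(i0, i1, i2):
    """Closed-form retrieval from three phase steps (0, 2*pi/3, 4*pi/3)."""
    a = (i0 + i1 + i2) / 3.0                       # attenuation term a
    c1 = (2 * i0 - i1 - i2) / 3.0                  # equals a * v * cos(phi)
    s1 = (i2 - i1) / np.sqrt(3.0)                  # equals a * v * sin(phi)
    phi = np.arctan2(s1, c1)                       # refraction signal
    v = np.sqrt(c1**2 + s1**2) / a                 # scattering / visibility
    return a, v, phi
```

    Because the three unknowns appear only through a, a·v·cos(φ) and a·v·sin(φ), the nonlinear system becomes linear in these combinations, which is the essence of the trigonometric transformation described in the abstract.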

  2. Patterns of High energy Massive String Scatterings in the Regge Regime

    International Nuclear Information System (INIS)

    Lee Jen Chi

    2009-01-01

    We calculate high energy massive string scattering amplitudes of the open bosonic string in the Regge regime (RR). We find that the number of high energy amplitudes for each fixed mass level in the RR is much larger than in the Gross regime (GR) calculated previously. Moreover, we discover that the leading order amplitudes in the RR can be expressed in terms of the Kummer function of the second kind. In particular, based on a summation algorithm for Stirling number identities developed recently, we discover that the ratios calculated previously among scattering amplitudes in the GR can be extracted from this Kummer function in the RR. We conjecture and give evidence that the existence of these GR ratios in the RR persists to sub-leading orders in the Regge expansion of all string scattering amplitudes. Finally, we demonstrate the universal power-law behavior for all massive string scattering amplitudes in the RR. (author)

  3. Analysis on Vertical Scattering Signatures in Forestry with PolInSAR

    Science.gov (United States)

    Guo, Shenglong; Li, Yang; Zhang, Jingjing; Hong, Wen

    2014-11-01

    We apply an accurate topographic phase to the Freeman-Durden decomposition for polarimetric SAR interferometry (PolInSAR) data. The cross-correlation matrix obtained from PolInSAR observations can be decomposed into three scattering mechanism matrices accounting for the odd-bounce, double-bounce and volume scattering. We estimate the phase based on the Random Volume over Ground (RVoG) model and use it as the initial input parameter of the numerical method that solves for the decomposition parameters. In addition, the modified volume scattering model introduced by Y. Yamaguchi is applied to the PolInSAR target decomposition in forest areas, rather than the pure random volume scattering proposed by Freeman-Durden, to better fit the actual measured data. This method can accurately retrieve the magnitude associated with each mechanism and its location along the vertical dimension. We test the algorithms with L- and P-band simulated data.

  4. Non-negative Matrix Factorization for Self-calibration of Photometric Redshift Scatter in Weak-lensing Surveys

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Le; Yu, Yu; Zhang, Pengjie, E-mail: lezhang@sjtu.edu.cn [Department of Astronomy, Shanghai Jiao Tong University, Shanghai, 200240 (China)

    2017-10-10

    Photo-z error is one of the major sources of systematics degrading the accuracy of weak-lensing cosmological inferences. Zhang et al. proposed a self-calibration method combining galaxy–galaxy correlations and galaxy–shear correlations between different photo-z bins. Fisher matrix analysis shows that it can determine the rate of photo-z outliers at a level of 0.01%–1% using photometric data alone, without relying on any prior knowledge. In this paper, we develop a new algorithm to implement this method by solving a constrained nonlinear optimization problem arising in the self-calibration process. Based on the techniques of fixed-point iteration and non-negative matrix factorization, the proposed algorithm can efficiently and robustly reconstruct the scattering probabilities between the true-z and photo-z bins. The algorithm has been tested extensively by applying it to mock data from simulated stage IV weak-lensing projects. We find that the algorithm provides a successful recovery of the scatter rates at the level of 0.01%–1%, and the true mean redshifts of the photo-z bins at the level of 0.001, which may satisfy the requirements of future lensing surveys.
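    As a flavor of the non-negative fixed-point machinery, here is the classical Lee–Seung multiplicative-update NMF, used as a stand-in for the paper's constrained solver (the actual algorithm additionally enforces that each row of the scattering matrix is a probability distribution, which this sketch does not):

```python
import numpy as np

def nmf(D, rank, iters=500, eps=1e-12, seed=0):
    """Factor a non-negative data matrix D ~ W @ H by fixed-point iteration."""
    rng = np.random.default_rng(seed)
    m, n = D.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        H *= (W.T @ D) / (W.T @ W @ H + eps)   # multiplicative update for H
        W *= (D @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for W
    return W, H
```

    Because both update rules multiply by non-negative ratios, W and H stay non-negative at every iteration, which is the property the scattering-probability reconstruction relies on.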

  5. Characterization of adaptive statistical iterative reconstruction algorithm for dose reduction in CT: A pediatric oncology perspective

    Energy Technology Data Exchange (ETDEWEB)

    Brady, S. L.; Yee, B. S.; Kaufman, R. A. [Department of Radiological Sciences, St. Jude Children' s Research Hospital, Memphis, Tennessee 38105 (United States)

    2012-09-15

    Purpose: This study demonstrates a means of implementing an adaptive statistical iterative reconstruction (ASiR™) technique for dose reduction in computed tomography (CT) while maintaining similar noise levels in the reconstructed image. The effects of image quality and noise texture were assessed at all implementation levels of ASiR™. Empirically derived dose reduction limits were established for ASiR™ for imaging of the trunk for a pediatric oncology population ranging from 1 yr old through adolescence/adulthood. Methods: Image quality was assessed using metrics established by the American College of Radiology (ACR) CT accreditation program. Each image quality metric was tested using the ACR CT phantom with 0%-100% ASiR™ blended with filtered back projection (FBP) reconstructed images. Additionally, the noise power spectrum (NPS) was calculated for three common reconstruction filters of the trunk. The empirically derived limitations on ASiR™ implementation for dose reduction were assessed using (1, 5, 10) yr old and adolescent/adult anthropomorphic phantoms. To assess dose reduction limits, the phantoms were scanned in increments of increased noise index (decrementing mA using automatic tube current modulation) balanced with ASiR™ reconstruction to maintain noise equivalence of the 0% ASiR™ image. Results: The ASiR™ algorithm did not produce any unfavorable effects on image quality as assessed by ACR criteria. Conversely, low-contrast resolution was found to improve due to the reduction of noise in the reconstructed images. NPS calculations demonstrated that images with lower frequency noise had lower noise variance and coarser graininess at progressively higher percentages of ASiR™ reconstruction; and in spite of the similar magnitudes of noise, the image reconstructed with 50% or more ASiR™ presented a more

  6. A general rough-surface inversion algorithm: Theory and application to SAR data

    Science.gov (United States)

    Moghaddam, M.

    1993-01-01

    Rough-surface inversion has significant applications in the interpretation of SAR data obtained over bare soil surfaces and agricultural lands. Due to the sparsity of data and the large pixel size in SAR applications, it is not feasible to carry out inversions based on numerical scattering models. The alternative is to use parameter estimation techniques based on approximate analytical or empirical models. Hence, there are two issues to be addressed, namely, what model to choose and what estimation algorithm to apply. Here, a small perturbation model (SPM) is used to express the backscattering coefficients of the rough surface in terms of three surface parameters. The algorithm used to estimate these parameters is based on a nonlinear least-squares criterion. Least-squares optimization methods are widely used in estimation theory, but the distinguishing factor for SAR applications is incorporating the stochastic nature of both the unknown parameters and the data into the formulation, which will be discussed in detail. The algorithm is tested with synthetic data, and several Newton-type least-squares minimization methods are discussed to compare their convergence characteristics. Finally, the algorithm is applied to multifrequency polarimetric SAR data obtained over some bare soil and agricultural fields. Results will be shown and compared to ground-truth measurements obtained from these areas. The strength of this general approach to inversion of SAR data is that it can be easily modified for use with any scattering model without changing any of the inversion steps. Note also that, for the same reason, it is not limited to inversion of rough surfaces, and can be applied to any parameterized scattering process.

  7. Project Robust Scheduling Based on the Scattered Buffer Technology

    Directory of Open Access Journals (Sweden)

    Nansheng Pang

    2018-04-01

    Full Text Available The research object in this paper is the sub-network formed by the predecessors that affect the solution activity. The paper studies three types of influencing factors from the predecessors that lead to a delay of the starting time of the solution activity on the longest path, and analyses the degree to which each type of factor delays the solution activity's starting time. On this basis, through a comprehensive analysis of the various factors that influence the solution activity, the paper proposes a metric to evaluate the solution robustness of the project schedule, and this metric is taken as the optimization goal. An iterative process is used to design a scattered-buffer heuristic algorithm based on robust scheduling with time buffers. At the same time, the resource flow network is introduced in this algorithm, using the tabu search algorithm to solve the baseline schedule. For the generation of the resource flow network in the baseline schedule, the algorithm designs a resource allocation method that makes maximum use of the precedence relations. Finally, the algorithm proposed in this paper and other algorithms from the previous literature are compared in a simulation experiment; the experimental results show that the algorithm proposed in this paper is reasonable and feasible.

  8. MADR: metal artifact detection and reduction

    Science.gov (United States)

    Jaiswal, Sunil Prasad; Ha, Sungsoo; Mueller, Klaus

    2016-04-01

    Metal in CT-imaged objects drastically reduces the quality of these images due to the severe artifacts it can cause. Most metal artifact reduction (MAR) algorithms treat the metal-affected sinogram portions as corrupted data and replace them via sophisticated interpolation methods. While these schemes are successful in removing the metal artifacts, they fail to recover some of the edge information. To address these problems, the frequency-shift metal artifact reduction algorithm (FSMAR) was recently proposed. It exploits the information hidden in the uncorrected image and combines the high-frequency (edge) components of the uncorrected image with the low-frequency components of the corrected image. Although this can effectively transfer the edge information of the uncorrected image, it also introduces some unwanted artifacts. The essential problem of these algorithms is that they lack the capability of detecting the artifacts and as a result cannot discriminate between desired and undesired edges. We propose a scheme that does better in these respects. Our Metal Artifact Detection and Reduction (MADR) scheme constructs a weight map which stores whether a pixel in the uncorrected image belongs to an artifact region or a non-artifact region. This weight matrix is optimal in the linear minimum mean square error (LMMSE) sense. Our results demonstrate that MADR outperforms the existing algorithms and ensures that the anatomical structures close to metal implants are better preserved.

  9. Assessment of Polarization Effect on Efficiency of Levenberg-Marquardt Algorithm in Case of Thin Atmosphere over Black Surface

    Science.gov (United States)

    Korkin, S.; Lyapustin, A.

    2012-12-01

    The Levenberg-Marquardt algorithm [1, 2] provides a numerical iterative solution to the problem of minimization of a function over a space of its parameters. In our work, the Levenberg-Marquardt algorithm retrieves optical parameters of a thin (single scattering) plane-parallel atmosphere irradiated by a collimated, infinitely wide monochromatic beam of light. A black ground surface is assumed. Computational accuracy, sensitivity to the initial guess and the presence of noise in the signal, and other properties of the algorithm are investigated in scalar (using intensity only) and vector (including polarization) modes. We consider an atmosphere that contains a mixture of coarse and fine fractions. Following [3], the fractions are simulated using the Henyey-Greenstein model. Though not realistic, this assumption is very convenient for tests [4, p.354]. In our case it yields an analytical evaluation of the Jacobian matrix. Assuming the MISR geometry of observation [5] as an example, the average scattering cosines, the ratio of coarse and fine fractions, the atmosphere optical depth, and the single scattering albedo are the five parameters to be determined numerically. In our implementation of the algorithm, the system of five linear equations is solved using the fast Cramer's rule [6]. A simple subroutine developed by the authors makes the algorithm independent of external libraries. All Fortran 90/95 codes discussed in the presentation will be available immediately after the meeting from sergey.v.korkin@nasa.gov by request. [1]. Levenberg K, A method for the solution of certain non-linear problems in least squares, Quarterly of Applied Mathematics, 1944, V.2, P.164-168. [2]. Marquardt D, An algorithm for least-squares estimation of nonlinear parameters, Journal on Applied Mathematics, 1963, V.11, N.2, P.431-441. [3]. Hovenier JW, Multiple scattering of polarized light in planetary atmospheres. Astronomy and Astrophysics, 1971, V.13, P.7-29. [4]. Mishchenko MI, Travis LD
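    For reference, the core damped-least-squares update of [1, 2] can be sketched in a few lines. This is a generic sketch, not the authors' Fortran code: the `residual` and `jacobian` callables stand in for the single-scattering forward model and its analytical Jacobian, and numpy's linear solver replaces the fast Cramer's-rule solve of the 5x5 system [6].

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, lam=1e-3, iters=50, tol=1e-10):
    """Minimize 0.5 * ||residual(p)||^2 with adaptive damping lam."""
    p = np.asarray(p0, float)
    cost = 0.5 * np.sum(residual(p) ** 2)
    for _ in range(iters):
        r, J = residual(p), jacobian(p)
        JTJ = J.T @ J
        A = JTJ + lam * np.diag(np.diag(JTJ))     # damped normal matrix
        step = np.linalg.solve(A, -J.T @ r)       # 5x5 solve in the paper
        new_cost = 0.5 * np.sum(residual(p + step) ** 2)
        if new_cost < cost:                       # accept step, relax damping
            p, cost, lam = p + step, new_cost, lam * 0.3
        else:                                     # reject step, add damping
            lam *= 10.0
        if np.linalg.norm(step) < tol:
            break
    return p
```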

  10. Bound states embedded into continuous spectrum as 'gathered' (compactified) scattering waves

    International Nuclear Information System (INIS)

    Zakhar'ev, B.N.; Chabanov, V.M.

    1995-01-01

    It is shown that states of the continuous spectrum (the half-line case) can be considered as bound states normalized to unity but distributed on the infinite interval with vanishing density. The algorithms for shifting the range of primary localization of a chosen bound state in a potential well of finite width then appear to be applicable to scattering functions. Potential perturbations of the same type (but now on the half-axis) concentrate the scattering wave in the near vicinity of the origin, which leads to the creation of a bound state embedded in the continuous spectrum. (author). 8 refs., 7 figs

  11. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of such problems include: sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation; the topic really deserves its own detailed discussion, which Gabe Rockefeller would like to develop.
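    Two of the patterns named above are easy to sketch. The versions below are written serially in Python for clarity; the point is the dependency structure: every operation within a level (reduction) or sweep (scan) is independent, so a parallel runtime can execute each level concurrently.

```python
def tree_reduce(xs, op):
    """Pairwise (tree) reduction: O(log n) levels of independent ops."""
    xs = list(xs)
    while len(xs) > 1:
        level = [op(a, b) for a, b in zip(xs[::2], xs[1::2])]  # independent
        if len(xs) % 2:
            level.append(xs[-1])        # carry the odd element up a level
        xs = level
    return xs[0]

def inclusive_scan(xs, op):
    """Hillis-Steele inclusive prefix scan: log n sweeps of independent ops."""
    xs = list(xs)
    d = 1
    while d < len(xs):
        xs = [xs[i] if i < d else op(xs[i - d], xs[i]) for i in range(len(xs))]
        d *= 2
    return xs

print(tree_reduce(range(1, 9), lambda a, b: a + b))       # 36
print(inclusive_scan([1, 2, 3, 4], lambda a, b: a + b))   # [1, 3, 6, 10]
```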

  12. Freeman-Durden Decomposition with Oriented Dihedral Scattering

    Directory of Open Access Journals (Sweden)

    Yan Jian

    2014-10-01

    Full Text Available In this paper, when the azimuth direction of a polarimetric Synthetic Aperture Radar (SAR) differs from the planting direction of crops, the double bounce of the incident electromagnetic waves from the terrain surface to the growing crops is investigated and compared with the normal double bounce. An oriented dihedral scattering model is developed to explain the investigated double bounce and is introduced into the Freeman-Durden decomposition. The decomposition algorithm corresponding to the improved decomposition is then proposed. Airborne polarimetric SAR data for agricultural land covering two flight tracks are chosen to validate the algorithm; the decomposition results show that, for agricultural vegetated land, the improved Freeman-Durden decomposition has the advantage of increasing the decomposition coherency among the polarimetric SAR data along the different flight tracks.

  13. Thermal-neutron multiple scattering: critical double scattering

    International Nuclear Information System (INIS)

    Holm, W.A.

    1976-01-01

    A quantum mechanical formulation for multiple scattering of thermal neutrons from macroscopic targets is presented and applied to single and double scattering. Critical nuclear scattering from liquids and critical magnetic scattering from ferromagnets are treated in detail in the quasielastic approximation for target systems slightly above their critical points. Numerical estimates are made of the double scattering contribution to the critical magnetic cross section using relevant parameters from actual experiments performed on various ferromagnets. The effect is to alter the usual Lorentzian line shape dependence on neutron wave vector transfer. Comparison with corresponding deviations in line shape resulting from the use of Fisher's modified form of the Ornstein-Zernike spin correlations within the framework of single scattering theory leads to values for the critical exponent eta of the modified correlations which reproduce the effect of double scattering. In addition, it is shown that by restricting the range of applicability of the multiple scattering theory from the outset to critical scattering, Glauber's high energy approximation can be used to provide a much simpler and more powerful description of multiple scattering effects. When sufficiently close to the critical point, it provides a closed form expression for the differential cross section which includes all orders of scattering and has the same form as the single scattering cross section with a modified exponent for the wave vector transfer.

  14. Multiobjective scatter search approach with new combination scheme applied to solve environmental/economic dispatch problem

    International Nuclear Information System (INIS)

    Athayde Costa e Silva, Marsil de; Klein, Carlos Eduardo; Mariani, Viviana Cocco; Santos Coelho, Leandro dos

    2013-01-01

    The environmental/economic dispatch (EED) is an important daily optimization task in the operation of many power systems. It involves the simultaneous optimization of fuel cost and emission objectives, which are conflicting. The EED problem can be formulated as a large-scale, highly constrained, nonlinear multiobjective optimization problem. In recent years, many metaheuristic optimization approaches have been reported in the literature to solve the multiobjective EED. Among metaheuristics, scatter search approaches have recently been receiving increasing attention because of their potential to effectively explore a wide range of complex optimization problems. This paper proposes an improved scatter search (ISS) to deal with multiobjective EED problems based on the concepts of Pareto dominance and crowding distance, and a new scheme for the combination method. In this paper, we have considered the standard IEEE (Institute of Electrical and Electronics Engineers) 30-bus system with 6 generators, and the results obtained by the proposed ISS algorithm are compared with other recently reported results in the literature. Simulation results demonstrate that the proposed ISS algorithm is a capable candidate for solving multiobjective EED problems. - Highlights: ► Economic dispatch. ► We solve the environmental/economic power dispatch problem with scatter search. ► Multiobjective scatter search can effectively improve the global search ability

  15. Four-phonon scattering significantly reduces intrinsic thermal conductivity of solids

    Science.gov (United States)

    Feng, Tianli; Lindsay, Lucas; Ruan, Xiulin

    2017-10-01

    For decades, the three-phonon scattering process has been considered to govern thermal transport in solids, while the role of higher-order four-phonon scattering has been persistently unclear and so ignored. However, recent quantitative calculations of three-phonon scattering have often shown a significant overestimation of thermal conductivity as compared to experimental values. In this Rapid Communication we show that four-phonon scattering is generally important in solids and can remedy such discrepancies. For silicon and diamond, the predicted thermal conductivity is reduced by 30% at 1000 K after including four-phonon scattering, bringing predictions in excellent agreement with measurements. For the projected ultrahigh-thermal conductivity material, zinc-blende BAs, a competitor of diamond as a heat sink material, four-phonon scattering is found to be strikingly strong as three-phonon processes have an extremely limited phase space for scattering. The four-phonon scattering reduces the predicted thermal conductivity from 2200 to 1400 W/m K at room temperature. The reduction at 1000 K is 60%. We also find that optical phonon scattering rates are largely affected, being important in applications such as phonon bottlenecks in equilibrating electronic excitations. Recognizing that four-phonon scattering is expensive to calculate, in the end we provide some guidelines on how to quickly assess the significance of four-phonon scattering, based on energy surface anharmonicity and the scattering phase space. Our work clears the decades-long fundamental question of the significance of higher-order scattering, and points out ways to improve thermoelectrics, thermal barrier coatings, nuclear materials, and radiative heat transfer.

  16. The hybrid model for sampling multiple elastic scattering angular deflections based on Goudsmit-Saunderson theory

    Directory of Open Access Journals (Sweden)

    Wasaye Muhammad Abdul

    2017-01-01

    Full Text Available An algorithm for the Monte Carlo simulation of electron multiple elastic scattering based on the framework of SuperMC (Super Monte Carlo simulation program for nuclear and radiation processes) is presented. This paper describes efficient and accurate methods by which the multiple scattering angular deflections are sampled. The Goudsmit-Saunderson theory of multiple scattering has been used for sampling angular deflections. Differential cross sections of electrons and positrons by neutral atoms have been calculated using the Dirac partial-wave program ELSEPA. The Legendre coefficients are accurately computed using the Gauss-Legendre integration method. Finally, a novel hybrid method for sampling the angular distribution has been developed. The model uses an efficient rejection sampling method for low-energy electrons (<500 keV) and long path lengths (>500 mean free paths). For small path lengths, a simple, efficient and accurate analytical distribution function has been proposed, with adjustable parameters determined by fitting the Goudsmit-Saunderson angular distribution. A discussion of the sampling efficiency and accuracy of this newly developed algorithm is given. The efficiency of the rejection sampling algorithm is at least 50% for electron kinetic energies less than 500 keV and longer path lengths (>500 mean free paths). Monte Carlo simulation results are then compared with the measured angular distributions of Ross et al. The comparison shows that our results are in good agreement with experimental measurements.
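    The rejection step itself is generic and easy to sketch. Below, a uniform proposal over μ = cos θ is thinned against a target angular density; the screened-Rutherford-like shape used here is only an illustrative stand-in for the Goudsmit-Saunderson distribution, and the screening parameter eta is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_mu(pdf, n, pdf_max):
    """Draw n deflection cosines from pdf on [-1, 1] via rejection sampling."""
    out = np.empty(n)
    k = 0
    while k < n:
        mu = rng.uniform(-1.0, 1.0, size=n - k)      # proposal: uniform in mu
        u = rng.uniform(0.0, pdf_max, size=n - k)    # vertical coordinate
        acc = mu[u < pdf(mu)]                        # accept points under curve
        out[k:k + len(acc)] = acc
        k += len(acc)
    return out

# Stand-in forward-peaked distribution (normalized screened-Rutherford shape)
eta = 0.05
pdf = lambda mu: (2 * eta * (1 + eta)) / (1 + 2 * eta - mu) ** 2
mus = sample_mu(pdf, 10000, pdf(1.0))                # pdf peaks at mu = 1
```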

  17. A new modified fast fractal image compression algorithm

    DEFF Research Database (Denmark)

    Salarian, Mehdi; Nadernejad, Ehsan; MiarNaimi, Hossein

    2013-01-01

    In this paper, a new fractal image compression algorithm is proposed, in which the time of the encoding process is considerably reduced. The algorithm exploits a domain pool reduction approach, along with the use of innovative predefined values for contrast scaling factor, S, instead of searching...

  18. Two-dimensional analytic weighting functions for limb scattering

    Science.gov (United States)

    Zawada, D. J.; Bourassa, A. E.; Degenstein, D. A.

    2017-10-01

    Through the inversion of limb scatter measurements it is possible to obtain vertical profiles of trace species in the atmosphere. Many of these inversion methods require what is often referred to as weighting functions, or derivatives of the radiance with respect to concentrations of trace species in the atmosphere. Several radiative transfer models have implemented analytic methods to calculate weighting functions, alleviating the computational burden of traditional numerical perturbation methods. Here we describe the implementation of analytic two-dimensional weighting functions, where derivatives are calculated relative to atmospheric constituents in a two-dimensional grid of altitude and angle along the line of sight direction, in the SASKTRAN-HR radiative transfer model. Two-dimensional weighting functions are required for two-dimensional inversions of limb scatter measurements. Examples are presented where the analytic two-dimensional weighting functions are calculated with an underlying one-dimensional atmosphere. It is shown that the analytic weighting functions are more accurate than ones calculated with a single scatter approximation, and are orders of magnitude faster than a typical perturbation method. Evidence is presented that weighting functions for stratospheric aerosols calculated under a single scatter approximation may not be suitable for use in retrieval algorithms under solar backscatter conditions.
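    To see what the analytic machinery replaces, here is the brute-force perturbation estimate of a two-dimensional weighting function. The `forward_model` callable is a hypothetical stand-in for a radiative transfer engine such as SASKTRAN-HR; the sketch makes plain why this costs one radiative transfer call per (altitude, angle) cell, which is exactly what the analytic approach avoids.

```python
import numpy as np

def perturbation_weighting(forward_model, x, delta=1e-4):
    """w[i, j, k] = dI_k / dx_ij by one-sided finite differences.

    x: 2-D constituent grid over (altitude, angle along line of sight)
    forward_model: maps the grid to a vector of modelled radiances
    """
    base = forward_model(x)
    w = np.empty(x.shape + base.shape)
    for idx in np.ndindex(*x.shape):
        xp = x.copy()
        xp[idx] += delta                             # perturb one grid cell
        w[idx] = (forward_model(xp) - base) / delta  # one RT call per cell
    return w
```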

  19. Scattering amplitudes from multivariate polynomial division

    Energy Technology Data Exchange (ETDEWEB)

    Mastrolia, Pierpaolo, E-mail: pierpaolo.mastrolia@cern.ch [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Muenchen (Germany); Dipartimento di Fisica e Astronomia, Universita di Padova, Padova (Italy); INFN Sezione di Padova, via Marzolo 8, 35131 Padova (Italy); Mirabella, Edoardo, E-mail: mirabell@mppmu.mpg.de [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Muenchen (Germany); Ossola, Giovanni, E-mail: GOssola@citytech.cuny.edu [New York City College of Technology, City University of New York, 300 Jay Street, Brooklyn, NY 11201 (United States); Graduate School and University Center, City University of New York, 365 Fifth Avenue, New York, NY 10016 (United States); Peraro, Tiziano, E-mail: peraro@mppmu.mpg.de [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Muenchen (Germany)

    2012-11-15

    We show that the evaluation of scattering amplitudes can be formulated as a problem of multivariate polynomial division, with the components of the integration-momenta as indeterminates. We present a recurrence relation which, independently of the number of loops, leads to the multi-particle pole decomposition of the integrands of the scattering amplitudes. The recursive algorithm is based on the weak Nullstellensatz theorem and on the division modulo the Groebner basis associated to all possible multi-particle cuts. We apply it to dimensionally regulated one-loop amplitudes, recovering the well-known integrand-decomposition formula. Finally, we focus on the maximum-cut, defined as a system of on-shell conditions constraining the components of all the integration-momenta. By means of the Finiteness Theorem and of the Shape Lemma, we prove that the residue at the maximum-cut is parametrized by a number of coefficients equal to the number of solutions of the cut itself.

  20. Subspace-Based Noise Reduction for Speech Signals via Diagonal and Triangular Matrix Decompositions

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jensen, Søren Holdt

    We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using both diagonal (eigenvalue and singular value) decompositions and rank-revealing triangular decompositions (ULV, URV, VSV, ULLV and ULLIV). In addition we show how the subspace-based algorithms can be evaluated and compared by means of simple FIR filter interpretations. The algorithms are illustrated with working Matlab code and applications in speech processing.

  1. Inverse scattering problems with multi-frequencies

    International Nuclear Information System (INIS)

    Bao, Gang; Li, Peijun; Lin, Junshan; Triki, Faouzi

    2015-01-01

    This paper is concerned with computational approaches and mathematical analysis for solving inverse scattering problems in the frequency domain. The problems arise in a diverse set of scientific areas with significant industrial, medical, and military applications. In addition to nonlinearity, there are two common difficulties associated with the inverse problems: ill-posedness and limited resolution (diffraction limit). Due to the diffraction limit, for a given frequency, only a low spatial frequency part of the desired parameter can be observed from measurements in the far field. The main idea developed here is that if the reconstruction is restricted to only the observable part, then the inversion will become stable. The challenging task is how to design stable numerical methods for solving these inverse scattering problems inspired by the diffraction limit. Recently, novel recursive linearization based algorithms have been presented in an attempt to answer the above question. These methods require multi-frequency scattering data and proceed via a continuation procedure with respect to the frequency from low to high. The objective of this paper is to give a brief review of these methods, their error estimates, and the related mathematical analysis. More attention is paid to the inverse medium and inverse source problems. Numerical experiments are included to illustrate the effectiveness of these methods. (topical review)

  2. A Proposal for User-defined Reductions in OpenMP

    Energy Technology Data Exchange (ETDEWEB)

    Duran, A; Ferrer, R; Klemm, M; de Supinski, B R; Ayguade, E

    2010-03-22

    Reductions are commonly used in parallel programs to produce a global result from partial results computed in parallel. Currently, OpenMP only supports reductions for primitive data types and a limited set of base language operators. This is a significant limitation for those applications that employ user-defined data types (e. g., objects). Implementing manual reduction algorithms makes software development more complex and error-prone. Additionally, an OpenMP runtime system cannot optimize a manual reduction algorithm in ways typically applied to reductions on primitive types. In this paper, we propose new mechanisms to allow the use of most pre-existing binary functions on user-defined data types as User-Defined Reduction (UDR) operators. Our measurements show that our UDR prototype implementation provides consistently good performance across a range of thread counts without increasing general runtime overheads.

  3. MLFMA-accelerated Nyström method for ultrasonic scattering - Numerical results and experimental validation

    Science.gov (United States)

    Gurrala, Praveen; Downs, Andrew; Chen, Kun; Song, Jiming; Roberts, Ron

    2018-04-01

    Full wave scattering models for ultrasonic waves are necessary for the accurate prediction of voltage signals received from complex defects/flaws in practical nondestructive evaluation (NDE) measurements. We propose the high-order Nyström method accelerated by the multilevel fast multipole algorithm (MLFMA) as an improvement to the state-of-the-art full-wave scattering models that are based on boundary integral equations. We present numerical results demonstrating improvements in simulation time and memory requirement. Particularly, we demonstrate the need for higher order geometry and field approximation in modeling NDE measurements. Also, we illustrate the importance of full-wave scattering models using experimental pulse-echo data from a spherical inclusion in a solid, which cannot be modeled accurately by approximation-based scattering models such as the Kirchhoff approximation.

  4. SU-E-I-07: An Improved Technique for Scatter Correction in PET

    International Nuclear Information System (INIS)

    Lin, S; Wang, Y; Lue, K; Lin, H; Chuang, K

    2014-01-01

    Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to the real scatter amount by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SF was calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SF. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of the proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For our technique and SSS, the normalized standard deviations of the reconstructed images were 0.053 and 0.182, respectively; the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using the SF. This method can avoid the bias caused by the insufficient
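
    The scaling step at the heart of the proposed C-SSS calibration can be sketched as follows; the actual empirical SF-versus-attenuation function is not given in the abstract, so a hypothetical polynomial fit stands in for it, and all array values are toy data.

        # Minimal sketch of the C-SSS scaling step, assuming a hypothetical
        # empirical SF model sf_model (e.g. a polynomial fitted to phantom
        # studies); the abstract does not give its actual functional form.
        import numpy as np

        def predicted_sf(mu_avg, sf_model):
            """Map an average attenuation coefficient to a scatter fraction."""
            return np.polyval(sf_model, mu_avg)

        def calibrate_sss(sss, prompts, mu_avg, sf_model):
            """Scale an SSS shape estimate to absolute scatter counts so that
            scatter/total matches the predicted scatter fraction."""
            sf = predicted_sf(mu_avg, sf_model)
            target_scatter = sf * prompts.sum()           # desired total scatter
            return sss * (target_scatter / sss.sum())     # rescaled distribution

        # toy usage: sf_model fitted offline from phantom measurements
        sf_model = np.array([1.2, 0.05])                  # hypothetical fit
        sss = np.random.rand(64, 64)                      # SSS shape estimate
        prompts = np.random.poisson(50, (64, 64)).astype(float)
        scatter = calibrate_sss(sss, prompts, mu_avg=0.096, sf_model=sf_model)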

  5. SU-E-I-07: An Improved Technique for Scatter Correction in PET

    Energy Technology Data Exchange (ETDEWEB)

    Lin, S; Wang, Y; Lue, K; Lin, H; Chuang, K [Chuang, National Tsing Hua University, Hsichu, Taiwan (China)

    2014-06-01

    Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to the real scatter amount by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SF was calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SF. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of the proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For our technique and SSS, the normalized standard deviations of the reconstructed images were 0.053 and 0.182, respectively; the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using the SF. This method can avoid the bias caused by the insufficient

  6. Calculating the reduced scattering coefficient of turbid media from a single optical reflectance signal

    Science.gov (United States)

    Johns, Maureen; Liu, Hanli

    2003-07-01

    When light interacts with tissue, it can be absorbed, scattered or reflected. Such quantitative information can be used to characterize the optical properties of tissue, differentiate tissue types in vivo, and identify normal versus diseased tissue. The purpose of this research is to develop an algorithm that determines the reduced scattering coefficient (μs′) of tissues from a single optical reflectance spectrum with a small source-detector separation. The basic relationship between μs′ and optical reflectance was developed using Monte Carlo simulations. This produced an analytical equation containing μs′ as a function of reflectance. To experimentally validate this relationship, a 1.3-mm diameter fiber optic probe containing two 400-micron diameter fibers was used to deliver light to and collect light from Intralipid solutions of various concentrations. Simultaneous measurements from optical reflectance and an ISS oximeter were performed to validate the calculated μs′ values determined by the reflectance measurement against the 'gold standard' ISS readings. The calculated μs′ values deviate from the expected values by approximately ±5% for Intralipid concentrations between 0.5% and 2.5%. The scattering properties within this concentration range are similar to those of in vivo tissues. Additional calculations are performed to determine the scattering properties of rat brain tissues and to discuss the accuracy of the algorithm for measured samples with a broad range of the absorption coefficient (μa).
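
    The published analytical equation relating reflectance to μs′ is not reproduced in this record, so the sketch below assumes a generic power-law relation fitted to calibration data, purely to illustrate how a single reflectance reading would be inverted; all numbers are made up.

        # Illustrative sketch only: the abstract says Monte Carlo simulations
        # yielded an analytical relation between reflectance R and mu_s'; its
        # published form is not given here, so a generic power law
        # R = a * (mu_s')**b is assumed, with a, b fitted to calibration data.
        import numpy as np
        from scipy.optimize import curve_fit

        def reflectance_model(mus_prime, a, b):
            return a * mus_prime**b

        # calibration: Intralipid dilutions with known mu_s' (per mm)
        mus_known = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
        R_meas = np.array([0.021, 0.038, 0.052, 0.064, 0.075])  # made-up values
        (a, b), _ = curve_fit(reflectance_model, mus_known, R_meas, p0=(0.03, 0.8))

        def mus_from_reflectance(R):
            # invert the fitted relation: mu_s' = (R / a)**(1 / b)
            return (R / a) ** (1.0 / b)

        print(mus_from_reflectance(0.045))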

  7. Canonical transformations method in the potential scattering problem

    International Nuclear Information System (INIS)

    Pavlenko, Yu.G.

    1984-01-01

    The first-order canonical formalism is used in the present paper to solve the scattering problem and other problems of quantum mechanics. The theory of canonical transformations (CT), which underlies the Hamiltonian approach, permits the development of several integration methods that go beyond the scope of standard perturbation theory. Importantly for numerical computation, the theory yields an algorithm for constructing the higher approximations

  8. DESIGNING SUSTAINABLE PROCESSES WITH SIMULATION: THE WASTE REDUCTION (WAR) ALGORITHM

    Science.gov (United States)

    The WAR Algorithm, a methodology for determining the potential environmental impact (PEI) of a chemical process, is presented with modifications that account for the PEI of the energy consumed within that process. From this theory, four PEI indexes are used to evaluate the envir...

  9. Thermal diffuse scattering in angular-dispersive neutron diffraction

    International Nuclear Information System (INIS)

    Popa, N.C.; Willis, B.T.M.

    1998-01-01

    The theoretical treatment of one-phonon thermal diffuse scattering (TDS) in single-crystal neutron diffraction at fixed incident wavelength is reanalysed in the light of the analysis given by Popa and Willis [Acta Cryst. (1994), (1997)] for the time-of-flight method. Isotropic propagation of sound with different velocities for the longitudinal and transverse modes is assumed. As in time-of-flight diffraction, there exists, for certain scanning variables, a forbidden range in the one-phonon TDS of slower-than-sound neutrons, and this permits the determination of the sound velocity in the crystal. A fast algorithm is given for the TDS correction of neutron diffraction data collected at a fixed wavelength: this algorithm is similar to that reported earlier for the time-of-flight case. (orig.)

  10. High-order integral equation methods for problems of scattering by bumps and cavities on half-planes.

    Science.gov (United States)

    Pérez-Arancibia, Carlos; Bruno, Oscar P

    2014-08-01

    This paper presents high-order integral equation methods for the evaluation of electromagnetic wave scattering by dielectric bumps and dielectric cavities on perfectly conducting or dielectric half-planes. In detail, the algorithms introduced in this paper apply to eight classical scattering problems, namely, scattering by a dielectric bump on a perfectly conducting or a dielectric half-plane, and scattering by a filled, overfilled, or void dielectric cavity on a perfectly conducting or a dielectric half-plane. In all cases field representations based on single-layer potentials for appropriately chosen Green functions are used. The numerical far fields and near fields exhibit excellent convergence as discretizations are refined, even at and around points where singular fields and infinite currents exist.

  11. Same Element Conversion Reduction Algorithm Based on Discernibility Matrix and Discernibility Function

    Institute of Scientific and Technical Information of China (English)

    徐宁; 章云; 周如旗

    2013-01-01

    Aiming at the difficulty of obtaining reducts from large datasets through normal-form transformation, a same-element conversion reduction algorithm based on the discernibility matrix and discernibility function is put forward. The discernibility matrix retains all classification information of the data set, and the discernibility function casts that information into mathematical-logic form. The algorithm proceeds from a low-rank Conjunctive Normal Form (CNF) toward a Disjunctive Normal Form (DNF): according to the same-element conversion algorithm and the high-element absorption algorithm, if the higher ranks are fully absorbed the algorithm returns; otherwise it recurses into the next conversion cycle. Calculation results show that this algorithm greatly reduces the scale of any single transformation and makes flexible use of mature recursion, so that the computation is compact and effective.

  12. Coherent anti-Stokes Raman scattering and spontaneous Raman scattering diagnostics of nonequilibrium plasmas and flows

    Science.gov (United States)

    Lempert, Walter R.; Adamovich, Igor V.

    2014-10-01

    The paper provides an overview of the use of coherent anti-Stokes Raman scattering (CARS) and spontaneous Raman scattering for diagnostics of low-temperature nonequilibrium plasmas and nonequilibrium high-enthalpy flows. A brief review of the theoretical background of CARS, four-wave mixing and Raman scattering, as well as a discussion of experimental techniques and data reduction, are included. The experimental results reviewed include measurements of vibrational level populations, rotational/translational temperature, and electric fields in quasi-steady-state and transient molecular plasmas and afterglows, in nonequilibrium expansion flows, and behind strong shock waves. Insight into the kinetics of vibrational energy transfer, energy thermalization mechanisms and the dynamics of pulse discharge development, provided by these experiments, is discussed. The availability of short-pulse-duration, high-peak-power lasers, as well as broadband dye lasers, makes possible the use of these diagnostics at relatively low pressures, potentially with sub-nanosecond time resolution, as well as obtaining single-laser-shot, high signal-to-noise spectra at higher pressures. Possibilities for the development of single-shot 2D CARS imaging and spectroscopy, using picosecond and femtosecond lasers, as well as novel phase matching and detection techniques, are discussed.

  13. A wavelet-based PWTD algorithm-accelerated time domain surface integral equation solver

    KAUST Repository

    Liu, Yang; Yucel, Abdulkadir C.; Gilbert, Anna C.; Bagci, Hakan; Michielssen, Eric

    2015-01-01

    © 2015 IEEE. The multilevel plane-wave time-domain (PWTD) algorithm allows for fast and accurate analysis of transient scattering from, and radiation by, electrically large and complex structures. When used in tandem with marching-on-in-time (MOT

  14. Study of the effects of photoelectron statistics on Thomson scattering data

    International Nuclear Information System (INIS)

    Hart, G.W.; Levinton, F.M.; McNeill, D.H.

    1986-01-01

    A computer code has been developed which simulates a Thomson scattering measurement, from the counting statistics of the input channels through the mathematical analysis of the data. The scattered and background signals in each of the wavelength channels are assumed to obey Poisson statistics, and the spectral data are fitted to a Gaussian curve using a nonlinear least-squares fitting algorithm. This method goes beyond the usual calculation of the signal-to-noise ratio for the hardware and gives a quantitative measure of the effect of the noise on the final measurement. This method is applicable to Thomson scattering measurements in which the signal-to-noise ratio is low due to either low signal or high background. Thomson scattering data from the S-1 spheromak have been compared to this simulation, and they have been found to be in good agreement. This code has proven to be useful in assessing the effects of counting statistics relative to shot-to-shot variability in producing the observed spread in the data. It was also useful for designing improvements for the S-1 Thomson scattering system, and this method would be applicable to any measurement affected by counting statistics

  15. Adaptive Equalizer Using Selective Partial Update Algorithm and Selective Regressor Affine Projection Algorithm over Shallow Water Acoustic Channels

    Directory of Open Access Journals (Sweden)

    Masoumeh Soflaei

    2014-01-01

    Full Text Available One of the most important problems of reliable communication in shallow water channels is intersymbol interference (ISI), which is due to scattering from the surface and reflection from the bottom. Using adaptive equalizers in the receiver is one of the best suggested ways of overcoming this problem. In this paper, we apply the family of selective regressor affine projection algorithms (SR-APA) and the family of selective partial update APA (SPU-APA), which have low computational complexity, one of the important factors that influence adaptive equalizer performance. We apply experimental data from the Strait of Hormuz to examine the efficiency of the proposed methods over a shallow water channel. We observe that the values of the steady-state mean square error (MSE) of SR-APA and SPU-APA decrease by 5.8 dB and 5.5 dB, respectively, in comparison with the least mean square (LMS) algorithm. Also, the SPU-APA and SR-APA families have better convergence speed than the LMS-type algorithm.
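
    For orientation, a minimal normalized-LMS equalizer of the kind used here as the comparison baseline is sketched below; the selective-update bookkeeping that distinguishes SR-APA/SPU-APA is omitted, and the toy two-path channel merely stands in for shallow-water multipath.

        # Baseline sketch: a normalized LMS linear equalizer of the kind the
        # SR-APA/SPU-APA variants are compared against in the paper.
        import numpy as np

        def nlms_equalizer(x, d, n_taps=16, mu=0.5, eps=1e-8):
            """x: received samples, d: training (desired) symbols."""
            w = np.zeros(n_taps)
            e = np.zeros(len(d))
            for k in range(n_taps, len(d)):
                u = x[k - n_taps:k][::-1]          # tap-input vector
                e[k] = d[k] - w @ u
                w += mu * e[k] * u / (eps + u @ u) # normalized LMS update
            return w, e

        # toy ISI channel: one delayed echo, as in shallow-water multipath
        rng = np.random.default_rng(0)
        sym = rng.choice([-1.0, 1.0], size=5000)
        rx = sym + 0.5 * np.r_[np.zeros(3), sym[:-3]] \
             + 0.05 * rng.standard_normal(5000)
        w, e = nlms_equalizer(rx, sym)
        print("steady-state MSE (dB):", 10 * np.log10(np.mean(e[-1000:] ** 2)))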

  16. Migration of scattered teleseismic body waves

    Science.gov (United States)

    Bostock, M. G.; Rondenay, S.

    1999-06-01

    The retrieval of near-receiver mantle structure from scattered waves associated with teleseismic P and S and recorded on three-component, linear seismic arrays is considered in the context of inverse scattering theory. A Ray + Born formulation is proposed which admits linearization of the forward problem and economy in the computation of the elastic wave Green's function. The high-frequency approximation further simplifies the problem by enabling (1) the use of an earth-flattened, 1-D reference model, (2) a reduction in computations to 2-D through the assumption of 2.5-D experimental geometry, and (3) band-diagonalization of the Hessian matrix in the inverse formulation. The final expressions are in a form reminiscent of the classical diffraction stack of seismic migration. Implementation of this procedure demands an accurate estimate of the scattered wave contribution to the impulse response, and thus requires the removal of both the reference wavefield and the source time signature from the raw record sections. An approximate separation of direct and scattered waves is achieved through application of the inverse free-surface transfer operator to individual station records and a Karhunen-Loeve transform to the resulting record sections. This procedure takes the full displacement field to a wave vector space wherein the first principal component of the incident wave-type section is identified with the direct wave and is used as an estimate of the source time function. The scattered displacement field is reconstituted from the remaining principal components using the forward free-surface transfer operator, and may be reduced to a scattering impulse response upon deconvolution of the source estimate. An example employing pseudo-spectral synthetic seismograms demonstrates an application of the methodology.

  17. A Monte Carlo simulation of scattering reduction in spectral x-ray computed tomography

    DEFF Research Database (Denmark)

    Busi, Matteo; Olsen, Ulrik Lund; Bergbäck Knudsen, Erik

    2017-01-01

    In X-ray computed tomography (CT), scattered radiation plays an important role in the accurate reconstruction of the inspected object, leading to a loss of contrast between the different materials in the reconstruction volume and cupping artifacts in the images. We present a Monte Carlo simulation...

  18. The threshold anomaly for heavy-ion scattering

    Energy Technology Data Exchange (ETDEWEB)

    Satchler, G.R.

    1987-01-01

    The real parts of optical potentials deduced from heavy-ion scattering measurements become rapidly more attractive as the bombarding energy is reduced close to the top of the Coulomb barrier. This behavior is explained as a coupled-channels effect, and is related to the corresponding reduction in the absorptive potential through a dispersion relation which expresses the consequences of causality. Another manifestation of this ''anomaly'' is the striking enhancement observed for the near- and sub-barrier fusion of two heavy ions. The barrier penetration model of fusion is examined critically in this context. It is also stressed that similar anomalies could appear in the energy dependence of nonelastic scattering. 21 refs., 4 figs.

  19. A parallelizable compression scheme for Monte Carlo scatter system matrices in PET image reconstruction

    International Nuclear Information System (INIS)

    Rehfeld, Niklas; Alber, Markus

    2007-01-01

    Scatter correction techniques in iterative positron emission tomography (PET) reconstruction increasingly utilize Monte Carlo (MC) simulations which are very well suited to model scatter in the inhomogeneous patient. Due to memory constraints the results of these simulations are not stored in the system matrix, but added or subtracted as a constant term or recalculated in the projector at each iteration. This implies that scatter is not considered in the back-projector. The presented scheme provides a method to store the simulated Monte Carlo scatter in a compressed scatter system matrix. The compression is based on parametrization and B-spline approximation and allows the formation of the scatter matrix based on low statistics simulations. The compression as well as the retrieval of the matrix elements are parallelizable. It is shown that the proposed compression scheme provides sufficient compression so that the storage in memory of a scatter system matrix for a 3D scanner is feasible. Scatter matrices of two different 2D scanner geometries were compressed and used for reconstruction as a proof of concept. Compression ratios of 0.1% could be achieved and scatter induced artifacts in the images were successfully reduced by using the compressed matrices in the reconstruction algorithm
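
    The compression idea can be illustrated with a toy one-dimensional scatter profile: a smoothing B-spline stores the smooth, low-statistics Monte Carlo profile as a short vector of knots and coefficients that the projector can re-expand on demand. The smoothing parameter and profile shape below are assumed values, not the paper's parametrization.

        # Sketch of the compression idea under simplifying assumptions: a
        # (parametrized) row of the scatter system matrix is a smooth profile,
        # so it can be stored as a handful of B-spline coefficients instead of
        # thousands of samples, and re-expanded on demand in the projector.
        import numpy as np
        from scipy.interpolate import splrep, splev

        bins = np.linspace(-1.0, 1.0, 2048)                 # sinogram bin axis
        profile = np.exp(-3.0 * bins**2) + 0.01 * np.random.randn(bins.size)

        # fit: the smoothing spline absorbs low-statistics Monte Carlo noise
        tck = splrep(bins, profile, k=3, s=bins.size * 1e-4)

        # "matrix element retrieval": evaluate the spline where needed
        compressed_size = len(tck[0]) + len(tck[1])         # knots + coefficients
        reconstructed = splev(bins, tck)
        print(f"{bins.size} samples -> {compressed_size} spline numbers")
        print("max abs deviation:", np.abs(reconstructed - profile).max())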

  20. A hybrid approach to simulate multiple photon scattering in X-ray imaging

    International Nuclear Information System (INIS)

    Freud, N.; Letang, J.-M.; Babot, D.

    2005-01-01

    A hybrid simulation approach is proposed to compute the contribution of scattered radiation in X- or γ-ray imaging. This approach takes advantage of the complementarity between the deterministic and probabilistic simulation methods. The proposed hybrid method consists of two stages. Firstly, a set of scattering events occurring in the inspected object is determined by means of classical Monte Carlo simulation. Secondly, this set of scattering events is used as a starting point to compute the energy imparted to the detector, with a deterministic algorithm based on a 'forced detection' scheme. For each scattering event, the probability for the scattered photon to reach each pixel of the detector is calculated using well-known physical models (form factor and incoherent scattering function approximations, in the case of Rayleigh and Compton scattering respectively). The results of the proposed hybrid approach are compared to those obtained with the Monte Carlo method alone (Geant4 code) and found to be in excellent agreement. The convergence of the results when the number of scattering events increases is studied. The proposed hybrid approach makes it possible to simulate the contribution of each type (Compton or Rayleigh) and order of scattering, separately or together, with a single PC, within reasonable computation times (from minutes to hours, depending on the number of pixels of the detector). This constitutes a substantial benefit, compared to classical simulation methods (Monte Carlo or deterministic approaches), which usually require a parallel computing architecture to obtain comparable results
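
    A much-simplified sketch of the deterministic 'forced detection' stage is given below: for one Compton scattering event, each pixel's contribution is weighted by an (unnormalized) Klein-Nishina factor, Beer-Lambert attenuation along the exit ray, and the pixel solid angle. The geometry, photon energy and attenuation coefficient are assumed toy values, and the form-factor corrections of the paper are omitted.

        import numpy as np

        def klein_nishina(cos_theta, k=0.2):      # k = E/511 keV (assumed)
            r = 1.0 / (1.0 + k * (1.0 - cos_theta))
            return 0.5 * r**2 * (r + 1.0 / r - (1.0 - cos_theta**2))

        def forced_detection(event_pos, incident_dir, pixels, mu=0.02, dA=1.0):
            d = pixels - event_pos                         # event -> pixel rays
            dist = np.linalg.norm(d, axis=1)
            u = d / dist[:, None]
            cos_theta = u @ incident_dir                   # scattering angle
            solid_angle = dA / dist**2
            attenuation = np.exp(-mu * dist)               # exit-path Beer-Lambert
            return klein_nishina(cos_theta) * attenuation * solid_angle

        # 64x64 detector plane at z = 300 (toy units)
        pixels = np.stack(np.meshgrid(np.arange(64.), np.arange(64.), [300.]),
                          -1).reshape(-1, 3)
        p = forced_detection(np.array([32., 32., 100.]),
                             np.array([0., 0., 1.]), pixels)
        print(p.reshape(64, 64).max())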

  1. A hybrid approach to simulate multiple photon scattering in X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Freud, N. [CNDRI, Laboratory of Nondestructive Testing using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, avenue Albert Einstein, 69621 Villeurbanne Cedex (France)]. E-mail: nicolas.freud@insa-lyon.fr; Letang, J.-M. [CNDRI, Laboratory of Nondestructive Testing using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, avenue Albert Einstein, 69621 Villeurbanne Cedex (France); Babot, D. [CNDRI, Laboratory of Nondestructive Testing using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, avenue Albert Einstein, 69621 Villeurbanne Cedex (France)

    2005-01-01

    A hybrid simulation approach is proposed to compute the contribution of scattered radiation in X- or γ-ray imaging. This approach takes advantage of the complementarity between the deterministic and probabilistic simulation methods. The proposed hybrid method consists of two stages. Firstly, a set of scattering events occurring in the inspected object is determined by means of classical Monte Carlo simulation. Secondly, this set of scattering events is used as a starting point to compute the energy imparted to the detector, with a deterministic algorithm based on a 'forced detection' scheme. For each scattering event, the probability for the scattered photon to reach each pixel of the detector is calculated using well-known physical models (form factor and incoherent scattering function approximations, in the case of Rayleigh and Compton scattering respectively). The results of the proposed hybrid approach are compared to those obtained with the Monte Carlo method alone (Geant4 code) and found to be in excellent agreement. The convergence of the results when the number of scattering events increases is studied. The proposed hybrid approach makes it possible to simulate the contribution of each type (Compton or Rayleigh) and order of scattering, separately or together, with a single PC, within reasonable computation times (from minutes to hours, depending on the number of pixels of the detector). This constitutes a substantial benefit, compared to classical simulation methods (Monte Carlo or deterministic approaches), which usually require a parallel computing architecture to obtain comparable results.

  2. The single scattering properties of the aerosol particles as aggregated spheres

    International Nuclear Information System (INIS)

    Wu, Y.; Gu, X.; Cheng, T.; Xie, D.; Yu, T.; Chen, H.; Guo, J.

    2012-01-01

    The light scattering and absorption properties of anthropogenic aerosol particles such as soot aggregates are complicated in their temporal and spatial distribution, which introduces uncertainty in the radiative forcing on global climate change. In order to study the single scattering properties of anthropogenic aerosol particles, the structures of these aerosols, such as soot particles and soot-containing mixtures with sulfate or organic matter, are simulated using the parallel diffusion limited aggregation (DLA) algorithm based on transmission electron microscope (TEM) images. Then, the single scattering properties of randomly oriented aerosols, such as the scattering matrix, single scattering albedo (SSA), and asymmetry parameter (AP), are computed using the superposition T-matrix method. Comparisons of the single scattering properties of these specific types of clusters with different morphological and chemical factors, such as fractal parameters, aspect ratio, monomer radius, mixture mode and refractive index, indicate that each of these factors can generate significant influences on the single scattering properties of these aerosols. The results show that the aspect ratio of the circumscribed shape has a relatively small effect on the single scattering properties, as the differences in both SSA and AP are less than 0.1. However, mixture modes of soot clusters with larger sulfate particles have remarkably important effects on the scattering and absorption properties of aggregated spheres, and the SSA of soot-containing mixtures increases in proportion to the ratio of larger, weakly absorbing attachments. Therefore, these complex aerosols originating from man-made pollution cannot be neglected in aerosol retrievals. The study of the single scattering properties of these kinds of aggregated spheres is important and helpful for remote sensing observations and atmospheric radiation balance computations.

  3. Dependence of the forward light scattering on the refractive index of particles

    Science.gov (United States)

    Guo, Lufang; Shen, Jianqi

    2018-05-01

    In the particle sizing technique based on forward light scattering, the scattered light signal (SLS) is closely related to the relative refractive index (RRI) of the particles to the surrounding medium, especially when the particles are transparent (or weakly absorbing) and small in size. The interference between the diffraction (Diff) and the multiple internal reflections (MIR) of scattered light can lead to oscillation of the SLS with RRI and to abnormal intervals, especially for narrowly-distributed small particle systems. This makes the inverse problem more difficult. In order to improve the inverse results, a Tikhonov regularization algorithm with B-spline functions is proposed, in which each matrix element is calculated for a range of particle sizes instead of using the mean particle diameter of the size fraction. In this way, the influence of abnormal intervals on the inverse results can be eliminated. In addition, for measurements on narrowly distributed small particles, it is suggested to detect the SLS over a wider scattering angle to include more information.
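
    The inversion strategy can be sketched as follows, with a toy kernel standing in for the full Mie computation: each column of the system matrix is averaged over many diameters within its size bin, rather than evaluated at the bin's mean diameter, and the size distribution is recovered by Tikhonov-regularized least squares (the paper's B-spline basis is omitted for brevity; all values are illustrative).

        import numpy as np

        def toy_kernel(theta, d):
            # stand-in for the Mie forward-scattering kernel I(theta; d)
            x = np.pi * d * np.sin(theta) + 1e-12
            return (np.sin(x) / x) ** 2

        theta = np.linspace(1e-3, 0.2, 120)                  # detector angles
        edges = np.linspace(1.0, 20.0, 31)                   # size-bin edges (um)

        # column j = kernel averaged over many diameters inside bin j
        A = np.stack([
            np.mean([toy_kernel(theta, d) for d in np.linspace(lo, hi, 20)],
                    axis=0)
            for lo, hi in zip(edges[:-1], edges[1:])
        ], axis=1)

        mid = 0.5 * (edges[:-1] + edges[1:])
        f_true = np.exp(-0.5 * ((mid - 8.0) / 2.0) ** 2)     # true distribution
        g = A @ f_true + 1e-4 * np.random.randn(theta.size)  # noisy SLS

        lam = 1e-3                                           # Tikhonov parameter
        f_hat = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ g)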

  4. Double Bounce Component in Cross-Polarimetric SAR from a New Scattering Target Decomposition

    Science.gov (United States)

    Hong, Sang-Hoon; Wdowinski, Shimon

    2013-08-01

    Common vegetation scattering theories assume that the Synthetic Aperture Radar (SAR) cross-polarization (cross-pol) signal represents solely volume scattering. We found this assumption incorrect based on SAR phase measurements acquired over the south Florida Everglades wetlands, indicating that the cross-pol radar signal often samples the water surface beneath the vegetation. Based on these new observations, we propose that the cross-pol measurement consists of both volume scattering and double bounce components. The simplest multi-bounce scattering mechanism that generates a cross-pol signal occurs by rotated dihedrals. Thus, we use the rotated dihedral mechanism with a probability density function to revise some of the vegetation scattering theories and develop a three-component decomposition algorithm with single bounce, double bounce from both co-pol and cross-pol, and volume scattering components. We applied the new decomposition analysis to both urban and rural environments using Radarsat-2 quad-pol datasets. The decomposition of San Francisco's urban area shows higher double bounce scattering and reduced volume scattering compared to other common three-component decompositions. The decomposition of the rural Everglades area shows that the relation between volume and cross-pol double bounce scattering depends on the vegetation density. The new decomposition can be useful for better understanding vegetation scattering behavior over various surfaces and for the estimation of above-ground biomass using SAR observations.

  5. Evolutionary algorithms for the Vehicle Routing Problem with Time Windows

    NARCIS (Netherlands)

    Bräysy, Olli; Dullaert, Wout; Gendreau, Michel

    2004-01-01

    This paper surveys the research on evolutionary algorithms for the Vehicle Routing Problem with Time Windows (VRPTW). The VRPTW can be described as the problem of designing least cost routes from a single depot to a set of geographically scattered points. The routes must be designed in such a way

  6. SU-C-207B-02: Maximal Noise Reduction Filter with Anatomical Structures Preservation

    Energy Technology Data Exchange (ETDEWEB)

    Maitree, R; Guzman, G; Chundury, A; Roach, M; Yang, D [Washington University School of Medicine, St Louis, MO (United States)

    2016-06-15

    Purpose: All medical images contain noise, which can result in an undesirable appearance and can reduce the visibility of anatomical details. A variety of techniques are utilized to reduce noise, such as increasing the image acquisition time and using post-processing noise reduction algorithms. However, these techniques increase imaging time and cost, or reduce tissue contrast and effective spatial resolution, which carry useful diagnostic information. The three main aims of this study are: 1) to develop a novel approach that can adaptively and maximally reduce noise while preserving valuable details of anatomical structures, 2) to evaluate the effectiveness of available noise reduction algorithms in comparison to the proposed algorithm, and 3) to demonstrate that the proposed noise reduction approach can be used clinically. Methods: To achieve maximal noise reduction without destroying anatomical details, the proposed approach automatically estimated the local image noise strength and detected the anatomical structures, i.e. tissue boundaries. This information was used to adaptively adjust the strength of the noise reduction filter. The proposed algorithm was tested on 34 repeated swine head datasets and 54 patients' MRI and CT images. The performance was quantitatively evaluated by image quality metrics and manually validated for clinical usage by two radiation oncologists and one radiologist. Results: Qualitative measurements on the repeated swine head images demonstrated that the proposed algorithm efficiently removed noise while preserving structure and tissue boundaries. In comparisons, the proposed algorithm obtained competitive noise reduction performance and outperformed other filters in preserving anatomical structures. Assessments from the manual validation indicate that the proposed noise reduction algorithm is adequate for some clinical usages. Conclusion: According to both clinical evaluation (human expert ranking) and

  7. SU-C-207B-02: Maximal Noise Reduction Filter with Anatomical Structures Preservation

    International Nuclear Information System (INIS)

    Maitree, R; Guzman, G; Chundury, A; Roach, M; Yang, D

    2016-01-01

    Purpose: All medical images contain noise, which can result in an undesirable appearance and can reduce the visibility of anatomical details. A variety of techniques are utilized to reduce noise, such as increasing the image acquisition time and using post-processing noise reduction algorithms. However, these techniques increase imaging time and cost, or reduce tissue contrast and effective spatial resolution, which carry useful diagnostic information. The three main aims of this study are: 1) to develop a novel approach that can adaptively and maximally reduce noise while preserving valuable details of anatomical structures, 2) to evaluate the effectiveness of available noise reduction algorithms in comparison to the proposed algorithm, and 3) to demonstrate that the proposed noise reduction approach can be used clinically. Methods: To achieve maximal noise reduction without destroying anatomical details, the proposed approach automatically estimated the local image noise strength and detected the anatomical structures, i.e. tissue boundaries. This information was used to adaptively adjust the strength of the noise reduction filter. The proposed algorithm was tested on 34 repeated swine head datasets and 54 patients' MRI and CT images. The performance was quantitatively evaluated by image quality metrics and manually validated for clinical usage by two radiation oncologists and one radiologist. Results: Qualitative measurements on the repeated swine head images demonstrated that the proposed algorithm efficiently removed noise while preserving structure and tissue boundaries. In comparisons, the proposed algorithm obtained competitive noise reduction performance and outperformed other filters in preserving anatomical structures. Assessments from the manual validation indicate that the proposed noise reduction algorithm is adequate for some clinical usages. Conclusion: According to both clinical evaluation (human expert ranking) and

  8. Small-angle neutron scattering and cyclic voltammetry study on electrochemically oxidized and reduced pyrolytic carbon

    International Nuclear Information System (INIS)

    Braun, A.; Kohlbrecher, J.; Baertsch, M.; Schnyder, B.; Koetz, R.; Haas, O.; Wokaun, A.

    2004-01-01

    The electrochemical double layer capacitance and internal surface area of a pyrolytic carbon material after electrochemical oxidation and subsequent reduction were studied with cyclic voltammetry and small-angle neutron scattering. Oxidation yields an enhanced internal surface area (activation), and subsequent reduction causes a decrease of this internal surface area. The change of the Porod constant, as obtained from small-angle neutron scattering, reveals that the decrease in internal surface area is not caused merely by a closing or narrowing of the pores, but by a partial collapse of the pore network

  9. Fast decoding algorithms for coded aperture systems

    International Nuclear Information System (INIS)

    Byard, Kevin

    2014-01-01

    Fast decoding algorithms are described for a number of established coded aperture systems. The fast decoding algorithms for all these systems offer significant reductions in the number of calculations required when reconstructing images formed by a coded aperture system and hence require less computation time to produce the images. The algorithms may therefore be of use in applications that require fast image reconstruction, such as near real-time nuclear medicine and location of hazardous radioactive spillage. Experimental tests confirm the efficacy of the fast decoding techniques
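
    The generic correlation decoding that such fast algorithms accelerate can be sketched via FFTs (the paper's specific index arithmetic is not reproduced here); the mask, decoder and scene below are toy values.

        # Generic correlation decoding sketch (not the paper's specific fast
        # algorithms): a coded-aperture shadowgram is decoded by circular
        # cross-correlation with a decoding array, done in O(N log N) via FFT.
        import numpy as np

        rng = np.random.default_rng(1)
        mask = (rng.random((31, 31)) < 0.5).astype(float)   # toy aperture pattern
        decoder = 2.0 * mask - 1.0                          # balanced decoder

        scene = np.zeros((31, 31))
        scene[10, 20] = 100.0                               # point source

        # shadowgram = circular convolution of scene with mask (toy model)
        shadow = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(mask)))

        # decode: correlate the shadowgram with the decoding array via FFT
        recon = np.real(np.fft.ifft2(np.fft.fft2(shadow)
                                     * np.conj(np.fft.fft2(decoder))))
        print(np.unravel_index(np.argmax(recon), recon.shape))  # ~ (10, 20)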

  10. Streaming Reduction Circuit

    NARCIS (Netherlands)

    Gerards, Marco Egbertus Theodorus; Kuper, Jan; Kokkeler, Andre B.J.; Molenkamp, Egbert

    2009-01-01

    Reduction circuits are used to reduce rows of floating point values to single values. Binary floating point operators often have deep pipelines, which may cause hazards when many consecutive rows have to be reduced. We present an algorithm by which any number of consecutive rows of arbitrary lengths

  11. Evaluation of Underwater Image Enhancement Algorithms under Different Environmental Conditions

    Directory of Open Access Journals (Sweden)

    Marino Mangeruga

    2018-01-01

    Full Text Available Underwater images usually suffer from poor visibility, lack of contrast and colour casting, mainly due to light absorption and scattering. In literature, there are many algorithms aimed to enhance the quality of underwater images through different approaches. Our purpose was to identify an algorithm that performs well in different environmental conditions. We have selected some algorithms from the state of the art and we have employed them to enhance a dataset of images produced in various underwater sites, representing different environmental and illumination conditions. These enhanced images have been evaluated through some quantitative metrics. By analysing the results of these metrics, we tried to understand which of the selected algorithms performed better than the others. Another purpose of our research was to establish if a quantitative metric was enough to judge the behaviour of an underwater image enhancement algorithm. We aim to demonstrate that, even if the metrics can provide an indicative estimation of image quality, they could lead to inconsistent or erroneous evaluations.

  12. Impact of dose engine algorithm in pencil beam scanning proton therapy for breast cancer.

    Science.gov (United States)

    Tommasino, Francesco; Fellin, Francesco; Lorentini, Stefano; Farace, Paolo

    2018-06-01

    Proton therapy for the treatment of breast cancer is attracting increasing interest, due to the potential reduction of radiation-induced side effects such as cardiac and pulmonary toxicity. While several in silico studies have demonstrated the gain in plan quality offered by pencil beam scanning (PBS) compared to passive scattering techniques, the related dosimetric uncertainties have been poorly investigated so far. Five breast cancer patients were planned with the Raystation 6 analytical pencil beam (APB) and Monte Carlo (MC) dose calculation algorithms. Plans were optimized with APB, and then MC was used to recalculate the dose distribution. Movable snout and beam splitting techniques (i.e. using two sub-fields for the same beam entrance, one with and the other without a range shifter) were considered. PTV dose statistics were recorded. The same planning configurations were adopted for the experimental benchmark. Dose distributions were measured with a 2D array of ionization chambers and compared to the APB- and MC-calculated ones by means of a γ analysis (agreement criteria 3%, 3 mm). Our results indicate that, when using proton PBS for breast cancer treatment, the Raystation 6 APB algorithm does not provide sufficient accuracy, especially with large air gaps. On the contrary, the MC algorithm proved much more accurate in all beam configurations tested and is to be recommended. Centers where an MC algorithm is not yet available should consider a careful use of APB, possibly combined with a movable snout system or in any case with strategies aimed at minimizing air gaps. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  13. Microstructural effect on radiative scattering coefficient and asymmetry factor of anisotropic thermal barrier coatings

    Science.gov (United States)

    Chen, X. W.; Zhao, C. Y.; Wang, B. X.

    2018-05-01

    Thermal barrier coatings are common porous materials coated on the surface of devices operating under high temperatures and designed for heat insulation. This study presents a comprehensive investigation of the microstructural effect on the radiative scattering coefficient and asymmetry factor of anisotropic thermal barrier coatings. Based on the quartet structure generation set algorithm, the finite-difference time-domain method is applied to calculate the angular scattering intensity distribution of a complicated random microstructure, which takes the wave nature into account. Combining the Monte Carlo method with Particle Swarm Optimization, the asymmetry factor, scattering coefficient and absorption coefficient are retrieved simultaneously. The retrieved radiative properties are identified with the angular scattering intensity distribution under different pore shapes, which takes dependent scattering and anisotropic pore shape into account implicitly. It has been found that the microstructure significantly affects the radiative properties of thermal barrier coatings. Compared with a spherical shape, irregular anisotropic pore shapes reduce the forward scattering peak. The method used in this paper can also be applied to other porous media, providing a framework for further quantitative study of porous media.

  14. Evaluation of ultrasonic array imaging algorithms for inspection of a coarse grained material

    Science.gov (United States)

    Van Pamel, A.; Lowe, M. J. S.; Brett, C. R.

    2014-02-01

    Improving the ultrasound inspection capability for coarse grain metals remains of longstanding interest to industry and the NDE research community and is expected to become increasingly important for next generation power plants. A test sample of coarse grained Inconel 625, which is representative of future power plant components, has been manufactured to test the detectability of different inspection techniques. Conventional ultrasonic A-, B-, and C-scans showed the sample to be extraordinarily difficult to inspect due to its scattering behaviour. However, in recent years, array probes and Full Matrix Capture (FMC) imaging algorithms, which extract the maximum amount of information possible, have unlocked exciting possibilities for improvements. This article proposes a robust methodology to evaluate the detection performance of imaging algorithms, applying it to three FMC imaging algorithms: the Total Focusing Method (TFM), Phase Coherent Imaging (PCI), and Decomposition of the Time Reversal Operator with Multiple Scattering (DORT MSF). The methodology considers the statistics of detection, presenting the detection performance as Probability of Detection (POD) and Probability of False Alarm (PFA). The data is captured in pulse-echo mode using 64 element array probes at centre frequencies of 1 MHz and 5 MHz. All three algorithms are shown to perform very similarly when comparing their flaw detection capabilities on this particular case.
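
    Of the three algorithms, TFM is the most compact to state: delay-and-sum over all transmit-receive pairs of the FMC data set. A minimal sketch follows, with assumed array geometry, sampling rate and wave speed, and placeholder FMC data.

        import numpy as np

        def tfm(fmc, elem_x, fs, c, xs, zs):
            """Delay-and-sum TFM image on the grid (xs, zs).
            fmc[t, r, :] is the A-scan for transmitter t and receiver r."""
            n_el, _, n_t = fmc.shape
            img = np.zeros((zs.size, xs.size))
            X, Z = np.meshgrid(xs, zs)                     # image grid
            # distance from every element to every pixel
            d = np.sqrt((X[None] - elem_x[:, None, None]) ** 2 + Z[None] ** 2)
            for t in range(n_el):
                for r in range(n_el):
                    tof = (d[t] + d[r]) / c                # round-trip delay
                    idx = np.clip((tof * fs).astype(int), 0, n_t - 1)
                    img += fmc[t, r][idx]                  # coherent summation
            return np.abs(img)

        elem_x = np.arange(64) * 0.6e-3                    # 64 elements, 0.6 mm pitch
        fmc = np.random.randn(64, 64, 2000)                # placeholder FMC data
        image = tfm(fmc, elem_x, fs=50e6, c=5900.0,
                    xs=np.linspace(0.0, 0.04, 80), zs=np.linspace(0.005, 0.05, 80))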

  15. Adaptive Kernel in Meshsize Boosting Algorithm in KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of an adaptive kernel in a meshsize boosting algorithm for kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes, but uses an adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...

  16. Monte Carlo simulation of scatter in non-uniform symmetrical attenuating media for point and distributed sources

    International Nuclear Information System (INIS)

    Henry, L.J.; Rosenthal, M.S.

    1992-01-01

    We report results of scatter simulations for both point and distributed sources of 99mTc in symmetrical non-uniform attenuating media. The simulations utilized Monte Carlo techniques and were tested against experimental phantoms. Both point and ring sources were used inside a 10.5 cm radius acrylic phantom. Attenuating media consisted of combinations of water, ground beef (to simulate muscle mass), air and bone meal (to simulate bone mass). We estimated/measured energy spectra, detector efficiencies and peak height ratios for all cases. In all cases, the simulated spectra agree with the experimentally measured spectra within 2 SD. Detector efficiencies and peak height ratios also are in agreement. The Monte Carlo code is able to properly model the non-uniform attenuating media used in this project. With verification of the simulations, it is possible to perform initial evaluation studies of scatter correction algorithms by evaluating the mechanisms of action of the correction algorithm on the simulated spectra where the magnitude and sources of scatter are known. (author)
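
    The flavor of such a simulation can be conveyed by a deliberately simplified random walk: isotropic scattering with fixed interaction probabilities inside a uniform cylinder matching the 10.5 cm phantom radius. A real code samples energy-dependent cross sections and models the actual non-uniform media; all constants here are assumed.

        import numpy as np

        rng = np.random.default_rng(42)
        MU = 0.15          # total attenuation, 1/cm (roughly water at 140 keV)
        P_SCATTER = 0.7    # probability an interaction is a scatter (assumed)
        R = 10.5           # phantom radius, cm

        def track_photon():
            pos = np.zeros(3)
            direction = np.array([1.0, 0.0, 0.0])
            n_scat = 0
            while True:
                pos = pos + direction * rng.exponential(1.0 / MU)  # free path
                if pos[0] ** 2 + pos[1] ** 2 > R ** 2:             # escaped
                    return n_scat
                if rng.random() > P_SCATTER:                       # absorbed
                    return None
                u = rng.normal(size=3)                             # isotropic
                direction = u / np.linalg.norm(u)
                n_scat += 1

        orders = [track_photon() for _ in range(20000)]
        escaped = [n for n in orders if n is not None]
        print("escape fraction:", len(escaped) / 20000,
              "mean scatter order:", np.mean(escaped))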

  17. Inverse scattering problem for a magnetic field in the Glauber approximation

    International Nuclear Information System (INIS)

    Bogdanov, I.V.

    1985-01-01

    New results in the general theory of scattering are obtained. An inverse problem at fixed energy for an axisymmetric magnetic field is formulated and solved within the framework of the quantum-mechanical Glauber approximation. The solution is found in quadratures in the form of an explicit inversion algorithm that reconstructs the vector potential from the angular dependence of the scattering amplitude. Limiting transitions from the eikonal inversion method to the classical and Born ones are investigated. Integral and differential equations are derived for the eikonal amplitude that ensure a real-valued vector potential and its energy independence. Magnetoelectric analogies are established: there exist equivalent axisymmetric electric and magnetic fields that scatter charged particles in the same manner in both the Glauber and Born approximations. These analogies make it possible to model magnetic scattering by potential scattering, which is of practical interest. Three-dimensional (excentral) eikonal inverse problems for the electric and magnetic fields are discussed. The results of the paper can be used in electron optics

  18. Determining Complex Structures using Docking Method with Single Particle Scattering Data

    Directory of Open Access Journals (Sweden)

    Haiguang Liu

    2017-04-01

    Full Text Available Protein complexes are critical for many molecular functions. Due to the intrinsic flexibility and dynamics of complexes, their structures are more difficult to determine using conventional experimental methods than those of individual subunits. One of the major challenges is the crystallization of protein complexes. Using X-ray free electron lasers (XFELs), it is possible to collect scattering signals from non-crystalline protein complexes, but data interpretation is more difficult because of unknown orientations. Here, we propose a hybrid approach to determine protein complex structures by combining XFEL single particle scattering data with computational docking methods. Using simulated data, we demonstrate that a small set of single particle scattering data collected at random orientations can be used to distinguish the native complex structure from the decoys generated using docking algorithms. The results also indicate that a small set of single particle scattering data is superior to a spherically averaged intensity profile in distinguishing complex structures. Given that XFEL experimental data are difficult to acquire and of low abundance, this hybrid approach should find wide application in data interpretation.

  19. Search and optimization by metaheuristics techniques and algorithms inspired by nature

    CERN Document Server

    Du, Ke-Lin

    2016-01-01

    This textbook provides a comprehensive introduction to nature-inspired metaheuristic methods for search and optimization, including the latest trends in evolutionary algorithms and other forms of natural computing. Over 100 different types of these methods are discussed in detail. The authors emphasize non-standard optimization problems and utilize a natural approach to the topic, moving from basic notions to more complex ones. An introductory chapter covers the necessary biological and mathematical backgrounds for understanding the main material. Subsequent chapters then explore almost all of the major metaheuristics for search and optimization created based on natural phenomena, including simulated annealing, recurrent neural networks, genetic algorithms and genetic programming, differential evolution, memetic algorithms, particle swarm optimization, artificial immune systems, ant colony optimization, tabu search and scatter search, bee and bacteria foraging algorithms, harmony search, biomolecular computin...

  20. The hydrogen anomaly problem in neutron Compton scattering

    Science.gov (United States)

    Karlsson, Erik B.

    2018-03-01

    Neutron Compton scattering (also called ‘deep inelastic scattering of neutrons’, DINS) is a method used to study momentum distributions of light atoms in solids and liquids. It has been employed extensively since the start-up of intense pulsed neutron sources about 25 years ago. The information lies primarily in the width and shape of the Compton profile and not in the absolute intensity of the Compton peaks. It was therefore not immediately recognized that the relative intensities of Compton peaks arising from scattering on different isotopes did not always agree with values expected from standard neutron cross-section tables. The discrepancies were particularly large for scattering on protons, a phenomenon that became known as ‘the hydrogen anomaly problem’. The present paper is a review of the discovery, experimental tests to prove or disprove the existence of the hydrogen anomaly and discussions concerning its origin. It covers a twenty-year-long history of experimentation, theoretical treatments and discussions. The problem is of fundamental interest, since it involves quantum phenomena on the subfemtosecond time scale, which are not visible in conventional thermal neutron scattering but are important in Compton scattering where neutrons have two orders of magnitude times higher energy. Different H-containing systems show different cross-section deficiencies and when the scattering processes are followed on the femtosecond time scale the cross-section losses disappear on different characteristic time scales for each H-environment. The last section of this review reproduces results from published papers based on quantum interference in scattering on identical particles (proton or deuteron pairs or clusters), which have given a quantitative theoretical explanation both regarding the H-cross-section reduction and its time dependence. Some new explanations are added and the concluding chapter summarizes the conditions for observing the specific quantum

  1. Reduct Driven Pattern Extraction from Clusters

    Directory of Open Access Journals (Sweden)

    Shuchita Upadhyaya

    2009-03-01

    Full Text Available Clustering algorithms give a general description of clusters, listing the number of clusters and the member entities of each. However, they fall short of describing clusters in the form of patterns. From a data mining perspective, learning patterns from clusters is as important as finding the clusters themselves. In the proposed approach, the reduct derived from rough set theory is employed for pattern formulation. A reduct is the set of attributes that distinguishes the entities within a homogeneous cluster; hence these attributes can be removed outright from the cluster description. The remaining attributes are then ranked by their contribution to the cluster. The pattern is formulated as the conjunction of the most contributing attributes, such that the pattern distinctively describes the cluster with minimum error.
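
    One plausible reading of the procedure is sketched below: attributes whose values distinguish entities within the (homogeneous) cluster are dropped, the remainder are ranked by within-cluster purity, and the pattern is the conjunction of the top-ranked attribute-value pairs. The purity criterion and all data are assumptions of this sketch, not the paper's exact formulation.

        from collections import Counter

        cluster = [
            {"colour": "red", "size": "small", "weight": 1.0},
            {"colour": "red", "size": "small", "weight": 2.0},
            {"colour": "red", "size": "big",   "weight": 1.5},
        ]

        def cluster_pattern(entities, max_terms=2):
            scored = []
            for a in entities[0].keys():
                values = [e[a] for e in entities]
                top, freq = Counter(values).most_common(1)[0]
                purity = freq / len(entities)
                if len(set(values)) == len(values) and purity < 1.0:
                    continue           # reduct-like distinguishing attribute: drop
                scored.append((purity, a, top))
            scored.sort(reverse=True)  # rank remaining attributes by purity
            return {a: v for _, a, v in scored[:max_terms]}

        print(cluster_pattern(cluster))  # e.g. {'colour': 'red', 'size': 'small'}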

  2. Electron scattering in graphene with adsorbed NaCl nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Drabińska, Aneta, E-mail: Aneta.Drabinska@fuw.edu.pl; Kaźmierczak, Piotr; Bożek, Rafał; Karpierz, Ewelina; Wysmołek, Andrzej; Kamińska, Maria [Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw (Poland); Wołoś, Agnieszka [Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw (Poland); Institute of Physics, Polish Academy of Sciences, Al. Lotników 32/46, 02-668 Warsaw (Poland); Pasternak, Iwona; Strupiński, Włodek [Institute of Electronic Materials Technology, Wólczyńska 133, 01-919 Warsaw (Poland); Krajewska, Aleksandra [Institute of Electronic Materials Technology, Wólczyńska 133, 01-919 Warsaw (Poland); Institute of Optoelectronics, Military University of Technology, Kaliskiego 2, 00-908 Warsaw (Poland)

    2015-01-07

    In this work, the results of contactless magnetoconductance and Raman spectroscopy measurements performed for a graphene sample after its immersion in NaCl solution were presented. The properties of the immersed sample were compared with those of a non-immersed reference sample. Atomic force microscopy and electron spin resonance experiments confirmed the deposition of NaCl nanoparticles on the graphene surface. A weak localization signal observed using contactless magnetoconductance showed the reduction of the coherence length after NaCl treatment of graphene. Temperature dependence of the coherence length indicated a change from ballistic to diffusive regime in electron transport after NaCl treatment. The main inelastic scattering process was of the electron-electron type but the major reason for the reduction of the coherence length at low temperatures was additional, temperature independent, inelastic scattering. We associate it with spin flip scattering, caused by NaCl nanoparticles present on the graphene surface. Raman spectroscopy showed an increase in the D and D′ bands intensities for graphene after its immersion in NaCl solution. An analysis of the D, D′, and G bands intensities proved that this additional scattering is related to the decoration of vacancies and grain boundaries with NaCl nanoparticles, as well as generation of new on-site defects as a result of the decoration of the graphene surface with NaCl nanoparticles. The observed energy shifts of 2D and G bands indicated that NaCl deposition on the graphene surface did not change carrier concentration, but reduced compressive biaxial strain in the graphene layer.

  3. Electron scattering in graphene with adsorbed NaCl nanoparticles

    International Nuclear Information System (INIS)

    Drabińska, Aneta; Kaźmierczak, Piotr; Bożek, Rafał; Karpierz, Ewelina; Wysmołek, Andrzej; Kamińska, Maria; Wołoś, Agnieszka; Pasternak, Iwona; Strupiński, Włodek; Krajewska, Aleksandra

    2015-01-01

    In this work, the results of contactless magnetoconductance and Raman spectroscopy measurements performed for a graphene sample after its immersion in NaCl solution were presented. The properties of the immersed sample were compared with those of a non-immersed reference sample. Atomic force microscopy and electron spin resonance experiments confirmed the deposition of NaCl nanoparticles on the graphene surface. A weak localization signal observed using contactless magnetoconductance showed the reduction of the coherence length after NaCl treatment of graphene. Temperature dependence of the coherence length indicated a change from ballistic to diffusive regime in electron transport after NaCl treatment. The main inelastic scattering process was of the electron-electron type but the major reason for the reduction of the coherence length at low temperatures was additional, temperature independent, inelastic scattering. We associate it with spin flip scattering, caused by NaCl nanoparticles present on the graphene surface. Raman spectroscopy showed an increase in the D and D′ bands intensities for graphene after its immersion in NaCl solution. An analysis of the D, D′, and G bands intensities proved that this additional scattering is related to the decoration of vacancies and grain boundaries with NaCl nanoparticles, as well as generation of new on-site defects as a result of the decoration of the graphene surface with NaCl nanoparticles. The observed energy shifts of 2D and G bands indicated that NaCl deposition on the graphene surface did not change carrier concentration, but reduced compressive biaxial strain in the graphene layer

  4. Dose calculations for irregular fields using three-dimensional first-scatter integration

    International Nuclear Information System (INIS)

    Boesecke, R.; Scharfenberg, H.; Schlegel, W.; Hartmann, G.H.

    1986-01-01

    This paper describes a method of dose calculations for irregular fields which requires only the mean energy of the incident photons, the geometrical properties of the irregular field and of the therapy unit, and the attenuation coefficient of tissue. The method goes back to an approach including spatial aspects of photon scattering for inhomogeneities for the calculation of dose reduction factors as proposed by Sontag and Cunningham (1978). It is based on the separation of dose into a primary component and a scattered component. The scattered component can generally be calculated for each field by integration over dose contributions from scattering in neighbouring volume elements. The quotient of this scattering contribution in the irregular field and the scattering contribution in the equivalent open field is then the correction factor for scattering in an irregular field. A correction factor for the primary component can be calculated if the attenuation of the photons in the shielding block is properly taken into account. The correction factor is simply given by the quotient of primary photons of the irregular field and the primary photons of the open field. (author)

  5. Determination of Atmospheric Aerosol Characteristics from the Polarization of Scattered Radiation

    Science.gov (United States)

    Harris, F. S., Jr.; McCormick, M. P.

    1973-01-01

    Aerosols affect the polarization of radiation in scattering; hence measured polarization can be used to infer the nature of the particles. Size distribution, particle shape, and the real and absorption parts of the complex refractive index affect the scattering. From Lorenz-Mie calculations of the four Stokes parameters as a function of scattering angle for various wavelengths, the following polarization parameters were plotted: total intensity, intensity of polarization in the plane of observation, intensity perpendicular to the plane of observation, polarization ratio, polarization (using all four Stokes parameters), and the plane of the polarization ellipse and its ellipticity. A six-component log-Gaussian size distribution model was used to study the effects on the nature of the polarization due to variations in the size distribution and complex refractive index. Though a rigorous inversion from measurements of scattering to a detailed specification of aerosol characteristics is not possible, considerable information about the nature of the aerosols can be obtained. Only single scattering from aerosols was considered in this paper. Also, the background due to Rayleigh gas scattering, the reduction of effects as a result of multiple scattering, and polarization effects of possible ground background (airborne platforms) were not included.

  6. An analytical approach to estimate the number of small scatterers in 2D inverse scattering problems

    International Nuclear Information System (INIS)

    Fazli, Roohallah; Nakhkash, Mansor

    2012-01-01

    This paper presents an analytical method to estimate the location and number of actual small targets in 2D inverse scattering problems. The method is motivated by the exact maximum likelihood estimation of signal parameters in white Gaussian noise for the linear data model. In the first stage, the method uses the MUSIC algorithm to acquire all possible target locations, and in the next stage it employs an analytical formula that works as a spatial filter to determine which target locations are associated with the actual ones. The ability of the method is examined for both the Born and multiple scattering cases and for the cases of well-resolved and non-resolved targets. Many numerical simulations using both coincident and non-coincident arrays demonstrate that the proposed method can detect the number of actual targets even in the case of very noisy data and when the targets are closely located. Using experimental microwave data sets, we further show that this method is successful in specifying the number of small inclusions. (paper)
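
    Since the first stage of the method is the standard MUSIC algorithm, a compact sketch of that stage is easy to give: build the multistatic response matrix of point scatterers under the Born approximation, split signal and noise subspaces with an SVD, and scan a pseudospectrum. All geometry and the rank threshold below are illustrative; the paper's contribution, the analytical spatial filter that decides which peaks are actual targets, is not reproduced here.

        import numpy as np

        k = 2 * np.pi                                   # wavenumber (wavelength = 1)
        sensors = np.c_[np.linspace(-5, 5, 21), np.full(21, 10.0)]  # linear array
        targets = np.array([[0.0, 0.0], [1.5, 0.5]])    # true scatterer positions

        def green(p):
            """Asymptotic 2-D Green's function from point p to every sensor."""
            r = np.linalg.norm(sensors - np.asarray(p), axis=1)
            return np.exp(1j * k * r) / np.sqrt(r)

        # Multistatic response matrix under the Born approximation (unit reflectivity).
        K = sum(np.outer(green(t), green(t)) for t in targets)
        U, s, _ = np.linalg.svd(K)
        rank = int((s > 1e-8 * s[0]).sum())             # estimated signal-subspace size
        noise = U[:, rank:]                             # noise subspace

        def pseudospectrum(p):
            g = green(p)
            g = g / np.linalg.norm(g)
            return 1.0 / np.linalg.norm(noise.conj().T @ g) ** 2

        grid = [(x, y) for x in np.linspace(-3, 3, 61) for y in np.linspace(-2, 2, 41)]
        print("strongest MUSIC peak near:", max(grid, key=pseudospectrum))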

  7. Fast algorithm of track detection

    International Nuclear Information System (INIS)

    Nehrguj, B.

    1980-01-01

    A fast algorithm of variable-slope histograms is proposed, which allows a considerable reduction of computer memory size and is quite simple to carry out. Corresponding FORTRAN subprograms, giving a threefold speed gain, have been included in the spiral reader data-handling software

  8. Seismic scatterers in the mid-lower mantle beneath Tonga-Fiji

    Science.gov (United States)

    Kaneshima, Satoshi

    2018-01-01

    We analyze deep and intermediate-depth earthquakes in the Tonga-Fiji region in order to reveal the distribution of scattering objects in the mid-lower mantle. By array processing waveform data recorded at regional seismograph stations in the US, Alaska, and Japan, we investigate S-to-P scattering waves in the P coda, which arise from kilometer-scale chemically distinct objects in the mid-lower mantle beneath Tonga-Fiji. With ten scatterers previously reported by the author included, twenty-three mid-lower mantle scatterers have been detected below 900 km depth, while scatterers deeper than 1900 km have not been identified. Strong mid-lower mantle S-to-P scattering most frequently occurs at the scatterers located within a depth range between 1400 km and 1600 km. The number of scatterers decreases below 1600 km depth, and the deeper objects tend to be weaker. The scatterer distribution may reflect diminishing elastic anomalies of basaltic rocks with depth relative to the surrounding mantle rocks, which mineral physics has predicted to occur. The predominant occurrence of strong S-to-P scattering waves within a narrow depth range may reflect a significant reduction of rigidity due to the ferro-elastic transformation of stishovite in basaltic rocks. Very large signals associated with mid-mantle scatterers are observed only for a small portion of the entire earthquake-array pairs. Such infrequent observations of large scattering signals, combined with quite large event-to-event differences in the scattering intensity for each scatterer, suggest that the strong arrivals approximately represent ray-theoretical S-to-P converted waves at objects with a planar geometry. The planar portions of the strong scatterers may often dip steeply, with sizes exceeding 100 km. For a few strong scatterers, the range of receivers showing clear scattered waves varies substantially from earthquake-array pair to pair. Some of the scatterers are also observed at different arrays that have

  9. Simultaneous Reduction in Noise and Cross-Contamination Artifacts for Dual-Energy X-Ray CT

    Directory of Open Access Journals (Sweden)

    Baojun Li

    2013-01-01

    Purpose. Dual-energy CT imaging tends to suffer from a much lower signal-to-noise ratio than single-energy CT. In this paper, we propose an improved anticorrelated noise reduction (ACNR) method that does not cause cross-contamination artifacts. Methods. The proposed algorithm diffuses both basis material density images (e.g., water and iodine) at the same time using a novel correlated diffusion algorithm. The algorithm has been compared to the original ACNR algorithm in a contrast-enhanced, IRB-approved patient study. Material density accuracy and noise reduction are quantitatively evaluated by the percent density error and the percent noise reduction. Results. Both algorithms significantly reduced the noise of the basis material density images in all cases. The average percent noise reduction is 69.3% with the ACNR algorithm and 66.5% with the proposed algorithm. However, the ACNR algorithm alters the original material density by an average of 13% (or 2.18 mg/cc) with a maximum of 58.7% (or 8.97 mg/cc) in this study. This is evident in the water density images, as massive cross-contamination is seen in all five clinical cases. By contrast, the proposed algorithm changes the mean density by only 2.4% (or 0.69 mg/cc) with a maximum of 7.6% (or 1.31 mg/cc). The cross-contamination artifacts are significantly minimized or absent with the proposed algorithm. Conclusion. The proposed algorithm can significantly reduce the image noise present in basis material density images from dual-energy CT imaging, with minimal cross-contamination compared to the ACNR algorithm.
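
    The anticorrelated-noise idea that both algorithms build on can be sketched in a few lines: basis-material noise is anticorrelated, so one noise-free linear combination of the two images exists, and smoothing only the orthogonal, noise-carrying combination reduces noise without mixing the materials. This is a minimal stand-in with synthetic images and an assumed anticorrelation factor c; the paper's correlated-diffusion algorithm is more elaborate.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)
        water = np.full((128, 128), 1000.0)                 # mg/cc, synthetic image
        iodine = np.zeros((128, 128)); iodine[40:90, 40:90] = 5.0
        c = 0.15                                            # assumed anticorrelation factor
        n = rng.normal(0.0, 30.0, water.shape)
        water_n, iodine_n = water + n, iodine - c * n       # anticorrelated noise

        common = water_n + iodine_n / c                     # noise cancels here
        anti = water_n - iodine_n / c                       # noise doubles here
        anti_s = gaussian_filter(anti, sigma=2.0)           # smooth only this channel

        water_d = (common + anti_s) / 2.0                   # recombine the channels
        iodine_d = c * (common - anti_s) / 2.0
        print("water noise before/after:", water_n.std(), water_d.std())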

  10. Shaping the light for the investigation of depth-extended scattering media

    Science.gov (United States)

    Osten, W.; Frenner, K.; Pedrini, G.; Singh, A. K.; Schindler, J.; Takeda, M.

    2018-02-01

    Scattering media are an ongoing challenge for all kinds of imaging technologies, both coherent and incoherent. Inspired by new approaches of computational imaging and supported by the availability of powerful computers, spatial light modulators, light sources and detectors, a variety of new methods ranging from holography to time-of-flight imaging, phase conjugation, phase recovery using iterative algorithms and correlation techniques have been introduced and applied to different types of objects. However, considering the obvious progress in this field, several problems are still a matter of investigation and their solution could open new doors for the inspection and application of scattering media as well. In particular, these open questions include the possibility of extending the 2D approach to the inspection of depth-extended objects, the direct use of a scattering medium as a simple tool for imaging of complex objects, and the improvement of coherent inspection techniques for the dimensional characterization of incoherently radiating spots embedded in scattering media. In this paper we show our recent findings in coping with these challenges. First we describe how to explore depth-extended objects by means of a scattering medium. Afterwards, we extend this approach by implementing a new type of microscope that makes use of a simple scatter plate as a kind of flat and unconventional imaging lens. Finally, we introduce our shearing interferometer in combination with structured illumination for retrieving the axial position of fluorescent light-emitting spots embedded in scattering media.

  11. Joint importance sampling of low-order volumetric scattering

    DEFF Research Database (Denmark)

    Georgiev, Iliyan; Křivánek, Jaroslav; Hachisuka, Toshiya

    2013-01-01

    Central to all Monte Carlo-based rendering algorithms is the construction of light transport paths from the light sources to the eye. Existing rendering approaches sample path vertices incrementally when constructing these light transport paths. The resulting probability density is thus a product of the conditional densities of each local sampling step, constructed without explicit control over the form of the final joint distribution of the complete path. We analyze why current incremental construction schemes often lead to high variance in the presence of participating media, and reveal that such approaches are an unnecessary legacy inherited from traditional surface-based rendering algorithms. We devise joint importance sampling of path vertices in participating media to construct paths that explicitly account for the product of all scattering and geometry terms along a sequence of vertices instead...

  12. Reconstruction of surface morphology from coherent scattering of white x-ray radiation

    Energy Technology Data Exchange (ETDEWEB)

    Sant, Tushar; Pietsch, Ullrich [Solid State Physics Group, University of Siegen, 57068 Siegen (Germany)

    2009-07-01

    Static speckle experiments were performed using coherent white X-ray radiation from a bending magnet at BESSY II. Semiconductor and polymer surfaces were investigated under incidence angles smaller than the critical angle of total external reflection. The scattering pattern of the sample results from the illumination function modified by the surface roughness. The periodic oscillations are caused by the illumination function, whereas other irregular features are associated with the sample surface. The speckle map of reflection from a laterally periodic structure, such as a GaAs grating, is studied. Under coherent illumination the grating peaks split into speckles because of fluctuations on the sample surface. The surface morphology can be reconstructed using phase retrieval algorithms. In the case of a 1D problem, these algorithms rarely yield a unique and converging solution. The algorithm is therefore modified to contain an additional propagator term and the phase of the illumination function in the real-space constraint. The modified algorithm converges faster than conventional algorithms. Detailed surface profiles from real measurements of the sample are reconstructed using this algorithm.
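
    The basic error-reduction loop that the record modifies is short: alternate between imposing the measured Fourier modulus and the real-space support constraint. The sketch below is the plain 1D ER iteration with a non-negativity constraint on synthetic data; the propagator and illumination-phase terms that make the modified algorithm converge faster are not included.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 256
        support = np.zeros(n, bool); support[:64] = True
        true = np.zeros(n); true[:64] = rng.random(64)     # object confined to support
        modulus = np.abs(np.fft.fft(true))                 # measured Fourier modulus

        x = rng.random(n) * support                        # random start
        for _ in range(500):
            F = np.fft.fft(x)
            F = modulus * np.exp(1j * np.angle(F))         # impose measured modulus
            x = np.fft.ifft(F).real
            x[~support] = 0.0                              # impose support
            x[x < 0.0] = 0.0                               # impose non-negativity
        err = np.linalg.norm(np.abs(np.fft.fft(x)) - modulus) / np.linalg.norm(modulus)
        print(f"relative modulus error: {err:.3e}")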

  13. Quantitative anomalous small-angle X-ray scattering - The determination of chemical concentrations in nano-scale phases

    International Nuclear Information System (INIS)

    Goerigk, G.; Huber, K.; Mattern, N.; Williamson, D.L.

    2012-01-01

    In recent years, Anomalous Small-Angle X-ray Scattering (ASAXS) has become a precise quantitative method, resolving scattering contributions two to three orders of magnitude smaller than the overall small-angle scattering, which are related to the so-called pure-resonant scattering contribution. In addition to the structural information, precise quantitative information about the different constituents of multi-component systems, like the fraction of a chemical component incorporated into the material's nanostructures, is obtained from these scattering contributions. The application of the Gauss elimination algorithm to the vector equation established by ASAXS measurements at three X-ray energies is demonstrated for three examples from chemistry and solid state physics. All examples deal with the quantitative analysis of the Resonant Invariant (RI-analysis). From the integrals of the pure-resonant scattering contribution the chemical concentrations in nano-scaled phases are determined. In one example the correlated analysis of the Resonant Invariant and the Non-resonant Invariant (NI-analysis) is employed. (authors)
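
    The vector equation mentioned above is, at each q, a small linear system: measuring at three energies with known anomalous corrections f'(E) and f''(E) gives three equations for the non-resonant, cross and pure-resonant terms, solvable by Gauss elimination. The f', f'' values and intensities below are synthetic, chosen only to make the sketch self-contained.

        import numpy as np

        fp = np.array([-4.0, -6.0, -9.0])     # illustrative f'(E_j) near an edge
        fpp = np.array([0.5, 0.6, 3.0])       # illustrative f''(E_j)

        # I(q, E_j) = F_N^2 + 2 f'(E_j) F_N F_R + (f'^2 + f''^2) F_R^2
        A = np.column_stack([np.ones(3), 2.0 * fp, fp**2 + fpp**2])

        F_N2, F_NR, F_R2 = 100.0, 4.0, 0.25   # synthetic ground truth terms
        I = A @ np.array([F_N2, F_NR, F_R2])  # simulated three-energy measurements

        # Gauss elimination (numpy's solver) separates the three contributions;
        # the pure-resonant term carries the chemical concentration information.
        print(np.linalg.solve(A, I))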

  14. A combined finite element-boundary integral formulation for solution of two-dimensional scattering problems via CGFFT. [Conjugate Gradient Fast Fourier Transformation

    Science.gov (United States)

    Collins, Jeffery D.; Volakis, John L.; Jin, Jian-Ming

    1990-01-01

    A new technique is presented for computing the scattering by 2-D structures of arbitrary composition. The proposed solution approach combines the usual finite element method with the boundary-integral equation to formulate a discrete system. This is subsequently solved via the conjugate gradient (CG) algorithm. A particular characteristic of the method is the use of rectangular boundaries to enclose the scatterer. Several of the resulting boundary integrals are therefore convolutions and may be evaluated via the fast Fourier transform (FFT) in the implementation of the CG algorithm. The solution approach offers the principal advantage of having O(N) memory demand and employs a 1-D FFT versus a 2-D FFT as required with a traditional implementation of the CGFFT algorithm. The speed of the proposed solution method is compared with that of the traditional CGFFT algorithm, and results for rectangular bodies are given and shown to be in excellent agreement with the moment method.
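
    The key trick of the CGFFT idea is that the CG iteration never forms the matrix: the operator is applied as "identity plus convolution", with the convolution evaluated by FFT. A minimal 1D analogue with a symmetric circulant kernel (so that plain CG applies) might look as follows; the kernel and right-hand side are illustrative, not a discretized boundary integral.

        import numpy as np

        n = 512
        idx = np.arange(n)
        d = np.minimum(idx, n - idx)                   # circular distance
        g = 0.05 * np.exp(-0.1 * d)                    # symmetric convolution kernel
        G = np.fft.rfft(g)

        def apply_A(x):
            """Matrix-free operator A x = x + (g convolved with x) via the FFT."""
            return x + np.fft.irfft(np.fft.rfft(x) * G, n)

        b = np.sin(np.linspace(0.0, 4.0 * np.pi, n))   # illustrative excitation
        x = np.zeros(n)
        r = b - apply_A(x); p = r.copy(); rs = r @ r
        for _ in range(200):                           # conjugate gradient iteration
            Ap = apply_A(p)
            a = rs / (p @ Ap)
            x += a * p; r -= a * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < 1e-10:
                break
            p = r + (rs_new / rs) * p; rs = rs_new
        print("final residual:", np.linalg.norm(b - apply_A(x)))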

  15. The new 'BerSANS-PC' software for reduction and treatment of small angle neutron scattering data

    International Nuclear Information System (INIS)

    Keiderling, U.

    2002-01-01

    Measurements on small angle neutron scattering (SANS) instruments are typically characterized by a large number of samples, short measurement times for the individual samples, and a frequent change of visiting scientist groups. Besides this, recent advances in instrumentation have led to more frequent measurements of kinetic sequences and a growing interest in analyzing two-dimensional scattering data, these requiring special software tools that enable the users to extract physically relevant information from the scattering data with a minimum of effort. The new 'BerSANS-PC' data-processing software has been developed at the Hahn-Meitner-Institut (HMI) in Berlin, Germany, to meet these requirements and to support an efficiently working guest-user service. Comprising some basic functions of the 'BerSANS' program available at the HMI and other institutes in the past, BerSANS-PC is a completely new development for network-independent use on local PCs with a full-feature graphical interface. (orig.)

  16. Thermal invisibility based on scattering cancellation and mantle cloaking

    KAUST Repository

    Farhat, Mohamed; Chen, P.-Y.; Bagci, Hakan; Amra, C.; Guenneau, S.; Alù , A.

    2015-01-01

    We theoretically and numerically analyze thermal invisibility based on the concept of scattering cancellation and mantle cloaking. We show that a small object can be made completely invisible to heat diffusion waves, by tailoring the heat conductivity of the spherical shell enclosing the object. This means that the thermal scattering from the object is suppressed, and the heat flow outside the object and the cloak made of these spherical shells behaves as if the object is not present. Thermal invisibility may open new vistas in hiding hot spots in infrared thermography, military furtivity, and electronics heating reduction.

  17. Thermal invisibility based on scattering cancellation and mantle cloaking

    KAUST Repository

    Farhat, Mohamed

    2015-04-30

    We theoretically and numerically analyze thermal invisibility based on the concept of scattering cancellation and mantle cloaking. We show that a small object can be made completely invisible to heat diffusion waves, by tailoring the heat conductivity of the spherical shell enclosing the object. This means that the thermal scattering from the object is suppressed, and the heat flow outside the object and the cloak made of these spherical shells behaves as if the object is not present. Thermal invisibility may open new vistas in hiding hot spots in infrared thermography, military furtivity, and electronics heating reduction.
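
    In the quasi-static (diffusive) limit the cancellation condition has a closed form: the coated sphere is invisible when its effective conductivity equals that of the background. A sketch under that standard assumption, with illustrative conductivities and radius ratio rather than values from the paper:

        import numpy as np
        from scipy.optimize import brentq

        k0, k1 = 1.0, 10.0        # background and core conductivities (illustrative)
        gamma = 0.8 ** 3          # (core radius / shell radius)**3

        def k_eff(k2):
            """Quasi-static effective conductivity of a core-shell sphere."""
            num = k1 + 2.0 * k2 + 2.0 * gamma * (k1 - k2)
            den = k1 + 2.0 * k2 - gamma * (k1 - k2)
            return k2 * num / den

        # Scattering cancellation: choose the shell so that the coated sphere
        # is indistinguishable from the background for the dipole term.
        k2 = brentq(lambda k: k_eff(k) - k0, 1e-3, k0)
        print(f"required shell conductivity: {k2:.4f}")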

  18. Small angle neutron scattering

    Directory of Open Access Journals (Sweden)

    Cousin Fabrice

    2015-01-01

    Small Angle Neutron Scattering (SANS) is a technique that probes the 3-D structure of materials on a typical size range from ∼ 1 nm up to ∼ a few 100 nm, the information obtained being statistically averaged over a sample whose volume is ∼ 1 cm3. This very rich technique enables a full structural characterization of a given object of nanometric dimensions (radius of gyration, shape, volume or mass, fractal dimension, specific area…) through the determination of the form factor, as well as a determination of the way objects are organized within a continuous medium, and therefore a description of the interactions between them, through the determination of the structure factor. The specific properties of neutrons (the possibility of tuning the scattering intensity by isotopic substitution, sensitivity to magnetism, negligible absorption, low energy of the incident neutrons) make it particularly interesting in the fields of soft matter, biophysics, magnetic materials and metallurgy. In particular, the contrast variation methods allow one to extract information that cannot be obtained by any other experimental technique. This course is divided in two parts. The first one is devoted to the description of the principle of SANS: basics (formalism, coherent scattering/incoherent scattering, notion of elementary scatterer), form factor analysis (I(q→0), Guinier regime, intermediate regime, Porod regime, polydisperse systems), structure factor analysis (2nd virial coefficient, integral equations, characterization of aggregates), and contrast variation methods (how to create contrast in a homogeneous system, matching in ternary systems, extrapolation to zero concentration, Zero Averaged Contrast). It is illustrated by some representative examples. The second one describes the experimental aspects of SANS to guide users in future experiments: description of a SANS spectrometer, resolution of the spectrometer, optimization of
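
    As a small worked example of the form factor analysis listed above, the Guinier regime gives I(q) ≈ I(0) exp(-q²Rg²/3), so the radius of gyration follows from the slope of ln I versus q². A self-contained sketch on noise-free synthetic data:

        import numpy as np

        Rg_true, I0 = 3.0, 100.0                        # nm, arbitrary units
        q = np.linspace(0.01, 0.4, 50)                  # 1/nm
        I = I0 * np.exp(-(q * Rg_true) ** 2 / 3.0)      # ideal Guinier scattering

        mask = q * Rg_true < 1.3                        # stay inside the Guinier regime
        slope, _ = np.polyfit(q[mask] ** 2, np.log(I[mask]), 1)
        print("fitted Rg:", np.sqrt(-3.0 * slope))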

  19. Dijet production in diffractive deep inelastic scattering at HERA

    International Nuclear Information System (INIS)

    Chekanov, S.; Derrick, M.; Magill, S.

    2007-08-01

    The production of dijets in diffractive deep inelastic scattering has been measured with the ZEUS detector at HERA using an integrated luminosity of 61 pb⁻¹. The dijet cross section has been measured for virtualities of the exchanged virtual photon, 5 < Q² < 100 GeV², and γ*p centre-of-mass energies, 100 < W < 250 GeV. The jets, identified using the kT algorithm in the γ*p frame, were required to have a transverse energy E*T,jet > 4 GeV, and the jet with the highest transverse energy was required to have E*T,jet > 5 GeV. All jets were required to be in the pseudorapidity range -3.5 < η*jet < 0. The differential cross sections are compared to leading-order predictions and next-to-leading-order QCD calculations based on recent diffractive parton densities extracted from inclusive diffractive deep inelastic scattering data. (orig.)

  20. Applying Groebner bases to solve reduction problems for Feynman integrals

    International Nuclear Information System (INIS)

    Smirnov, Alexander V.; Smirnov, Vladimir A.

    2006-01-01

    We describe how Groebner bases can be used to solve the reduction problem for Feynman integrals, i.e. to construct an algorithm that provides the possibility to express a Feynman integral of a given family as a linear combination of some master integrals. Our approach is based on a generalized Buchberger algorithm for constructing Groebner-type bases associated with polynomials of shift operators. We illustrate it through various examples of reduction problems for families of one- and two-loop Feynman integrals. We also solve the reduction problem for a family of integrals contributing to the three-loop static quark potential

  1. Applying Groebner bases to solve reduction problems for Feynman integrals

    Energy Technology Data Exchange (ETDEWEB)

    Smirnov, Alexander V. [Mechanical and Mathematical Department and Scientific Research Computer Center of Moscow State University, Moscow 119992 (Russian Federation); Smirnov, Vladimir A. [Nuclear Physics Institute of Moscow State University, Moscow 119992 (Russian Federation)

    2006-01-15

    We describe how Groebner bases can be used to solve the reduction problem for Feynman integrals, i.e. to construct an algorithm that provides the possibility to express a Feynman integral of a given family as a linear combination of some master integrals. Our approach is based on a generalized Buchberger algorithm for constructing Groebner-type bases associated with polynomials of shift operators. We illustrate it through various examples of reduction problems for families of one- and two-loop Feynman integrals. We also solve the reduction problem for a family of integrals contributing to the three-loop static quark potential.
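
    The mechanics of "reduce modulo a Groebner basis" can be shown with an ordinary polynomial ideal standing in for the shift-operator relations among the integrals: the remainder of the division plays the role of the combination of master integrals. The ideal below is a toy, unrelated to any Feynman family; the sketch assumes sympy's groebner and GroebnerBasis.reduce interface.

        from sympy import groebner, symbols

        x, y = symbols('x y')
        # Toy ideal standing in for integration-by-parts-type relations.
        G = groebner([x**2 + y**2 - 1, x*y - 1], x, y, order='lex')

        # Dividing an element by the basis yields cofactors and a remainder;
        # the remainder is the analogue of the master-integral combination.
        coeffs, remainder = G.reduce(x**3 * y + y**2)
        print("remainder:", remainder)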

  2. Knowledge Reduction Based on Divide and Conquer Method in Rough Set Theory

    Directory of Open Access Journals (Sweden)

    Feng Hu

    2012-01-01

    The divide and conquer method is a typical granular computing method using multiple levels of abstraction and granulation. So far, although some achievements based on the divide and conquer method in rough set theory have been obtained, systematic methods for knowledge reduction based on divide and conquer are still absent. In this paper, knowledge reduction approaches based on the divide and conquer method, under the equivalence relation and under the tolerance relation, are presented, respectively. After that, a systematic approach, named the abstract process for knowledge reduction based on the divide and conquer method in rough set theory, is proposed. Based on the presented approach, two algorithms for knowledge reduction, including an algorithm for attribute reduction and an algorithm for attribute value reduction, are presented. Experimental evaluations are done to test the methods on UCI data sets and KDDCUP99 data sets. The experimental results illustrate that the proposed approaches are efficient in processing large data sets, with good recognition rates compared with KNN, SVM, C4.5, Naive Bayes, and CART.
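
    The core notion of attribute reduction is easy to state in code: a reduct is a minimal attribute subset that induces the same decision-consistent partition as the full set. The sketch below is a plain greedy reduct search on a toy decision table; the paper's divide and conquer acceleration is not reproduced.

        from itertools import groupby

        # Toy decision table: condition attributes 0-2, decision in the last column.
        data = [
            (1, 0, 1, 'yes'), (1, 1, 0, 'yes'), (0, 0, 1, 'no'),
            (0, 1, 0, 'no'), (1, 0, 0, 'yes'), (0, 0, 0, 'no'),
        ]

        def blocks(attrs):
            """Equivalence classes of objects induced by the attribute subset."""
            key = lambda i: tuple(data[i][a] for a in attrs)
            rows = sorted(range(len(data)), key=key)
            return [list(g) for _, g in groupby(rows, key=key)]

        def consistent(attrs):
            """True if every equivalence class carries a single decision."""
            return all(len({data[i][-1] for i in b}) == 1 for b in blocks(attrs))

        reduct = [0, 1, 2]
        for a in (0, 1, 2):                       # greedily drop redundant attributes
            trial = [b for b in reduct if b != a]
            if trial and consistent(trial):
                reduct = trial
        print("reduct:", reduct)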

  3. Bayesian approach to the analysis of neutron Brillouin scattering data on liquid metals

    Science.gov (United States)

    De Francesco, A.; Guarini, E.; Bafile, U.; Formisano, F.; Scaccia, L.

    2016-08-01

    When the dynamics of liquids and disordered systems at the mesoscopic level is investigated by means of inelastic scattering (e.g., neutron or X-ray), spectra are often characterized by a poor definition of the excitation lines and spectroscopic features in general, and one important issue is to establish how many of these lines need to be included in the modeling function and to estimate their parameters. Furthermore, when strongly damped excitations are present, commonly used and widespread fitting algorithms are particularly affected by the choice of initial values of the parameters. An inadequate choice may lead to an inefficient exploration of the parameter space, resulting in the algorithm getting stuck in a local minimum. In this paper, we present a Bayesian approach to the analysis of neutron Brillouin scattering data in which the number of excitation lines is treated as unknown and estimated along with the other model parameters. We propose a joint estimation procedure based on a reversible-jump Markov chain Monte Carlo algorithm, which efficiently explores the parameter space, producing a probabilistic measure to quantify the uncertainty on the number of excitation lines as well as reliable parameter estimates. The method proposed could be of great importance in extracting physical information from experimental data, especially when the detection of spectral features is complicated not only because of the properties of the sample, but also because of the limited instrumental resolution and count statistics. The approach is tested on a generated data set and then applied to real experimental spectra of neutron Brillouin scattering from a liquid metal, previously analyzed in a more traditional way.

  4. Detailed Monte Carlo simulation of electron elastic scattering

    International Nuclear Information System (INIS)

    Chakarova, R.

    1994-04-01

    A detailed Monte Carlo model is described which simulates the transport of electrons penetrating a medium without energy loss. The trajectory of each electron is constructed as a series of successive interaction events - elastic or inelastic scattering. Differential elastic scattering cross sections and elastic and inelastic mean free paths are used to describe the interaction process. It is presumed that the cross-section data are available; the Monte Carlo algorithm does not include their evaluation. Electrons suffering successive elastic collisions are followed until they escape from the medium or (if the absorption is negligible) their path length exceeds a certain value. The inelastic events are thus treated as absorption. The medium geometry is a layered infinite slab. The electron source can be an incident electron beam or electrons created inside the material. The objective is to obtain the angular distribution, the path length and depth distributions, and the collision number distribution of electrons emitted through the surface of the medium. The model is applied successfully to electrons with energy between 0.4 and 20 keV reflected from semi-infinite homogeneous materials with different scattering properties. 16 refs, 9 figs
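
    The simulation loop described above is compact enough to sketch: sample an exponential free path, decide elastic versus inelastic (the latter counted as absorption), and rotate the direction cosine with a screened-Rutherford polar angle. All parameters below are illustrative, not tabulated cross-section data.

        import numpy as np

        rng = np.random.default_rng(1)
        lam_el, lam_in = 1.0, 8.0                 # elastic/inelastic mean free paths
        lam_t = 1.0 / (1.0 / lam_el + 1.0 / lam_in)
        eta = 0.05                                # screening parameter (illustrative)

        n, reflected = 20000, 0
        for _ in range(n):
            z, mu = 0.0, 1.0                      # depth and direction cosine
            while True:
                z += mu * rng.exponential(lam_t)  # free flight to the next event
                if z < 0.0:                       # escaped back through the surface
                    reflected += 1
                    break
                if rng.random() < lam_t / lam_in: # inelastic event = absorption
                    break
                u = rng.random()                  # screened-Rutherford polar angle
                cos_t = 1.0 - 2.0 * eta * u / (1.0 + eta - u)
                sin_t = np.sqrt(max(0.0, 1.0 - cos_t**2))
                phi = 2.0 * np.pi * rng.random()
                mu = mu * cos_t + np.sqrt(max(0.0, 1.0 - mu**2)) * sin_t * np.cos(phi)
        print(f"backscattered fraction: {reflected / n:.3f}")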

  5. The operational methane retrieval algorithm for TROPOMI

    Directory of Open Access Journals (Sweden)

    H. Hu

    2016-11-01

    This work presents the operational methane retrieval algorithm for the Sentinel 5 Precursor (S5P) satellite and its performance tested on realistic ensembles of simulated measurements. The target product is the column-averaged dry air volume mixing ratio of methane (XCH4), which will be retrieved simultaneously with scattering properties of the atmosphere. The algorithm attempts to fit spectra observed by the shortwave and near-infrared channels of the TROPOspheric Monitoring Instrument (TROPOMI) spectrometer aboard S5P. The sensitivity of the retrieval performance to atmospheric scattering properties, atmospheric input data and instrument calibration errors is evaluated. In addition, we investigate the effect of inhomogeneous slit illumination on the instrument spectral response function. Finally, we discuss the cloud filters to be used operationally and as backup. We show that the required accuracy and precision of < 1 % for the XCH4 product are met for clear-sky measurements over land surfaces and after appropriate filtering of difficult scenes. The algorithm is very stable, having a convergence rate of 99 %. The forward model error is less than 1 % for about 95 % of the valid retrievals. Model errors in the input profile of water do not influence the retrieval outcome noticeably. The methane product is expected to meet the requirements if errors in input profiles of pressure and temperature remain below 0.3 % and 2 K, respectively. We further find that, of all instrument calibration errors investigated here, our retrievals are the most sensitive to an error in the instrument spectral response function of the shortwave infrared channel.

  6. Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods

    International Nuclear Information System (INIS)

    Narita, Y.; Eberl, S.; Nakamura, T.

    1996-01-01

    Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter and scatter plus primary) were simulated for 99mTc and 201Tl for numerical chest phantoms. Data were reconstructed with an ordered-subset ML-EM algorithm including attenuation correction using the transmission data. In the chest phantom simulation, TDCS provided better S/N than TEW, and better accuracy, i.e., 1.0% vs -7.2% in the myocardium, and -3.7% vs -30.1% in the ventricular chamber for 99mTc with TDCS and TEW, respectively. For 201Tl, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
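
    The TEW estimate used as the comparison method has a one-line core: counts in two narrow windows flanking the photopeak approximate the scatter under the peak by trapezoidal interpolation. The window widths and counts below are illustrative:

        def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_main):
            """Triple-energy-window scatter estimate for the main window."""
            return (c_lower / w_lower + c_upper / w_upper) * w_main / 2.0

        # Illustrative 99mTc setup: 20% main window (28 keV), 3-keV sub-windows.
        scatter = tew_scatter(c_lower=600, c_upper=90,
                              w_lower=3.0, w_upper=3.0, w_main=28.0)
        primary = 10000 - scatter                 # scatter-corrected counts
        print(primary)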

  7. High-performance bidiagonal reduction using tile algorithms on homogeneous multicore architectures

    KAUST Repository

    Ltaief, Hatem; Luszczek, Piotr R.; Dongarra, Jack

    2013-01-01

    dependence translation layer that maps the general algorithm with column-major data layout into the tile data layout; and (4) a dynamic runtime system that efficiently schedules the newly implemented kernels across the processing units and ensures

  8. Finite-difference time-domain analysis on radar cross section of conducting cube scatterer covered with plasmas

    International Nuclear Information System (INIS)

    Liu Shaobin; Zhang Guangfu; Yuan Naichang

    2004-01-01

    A PLJERC-FDTD algorithm is applied to the study of scattering by a perfectly conducting cube covered with homogeneous isotropic plasma. The effects of plasma thickness, density and collision frequency on the radar cross section (RCS) of the conducting cube scatterer have been obtained. The results illustrate that plasma cloaking can greatly reduce the RCS of radar targets: the RCS of the perfectly conducting cube decreases with increasing plasma thickness when the plasma frequency is much lower than the electromagnetic (EM) wave frequency; the RCS decreases with increasing plasma thickness and plasma collision frequency when the plasma frequency is about half the EM wave frequency; and the effects of plasma thickness and collision frequency on the RCS are small when the plasma frequency is close to the EM wave frequency.

  9. The MUSIC algorithm for sparse objects: a compressed sensing analysis

    International Nuclear Information System (INIS)

    Fannjiang, Albert C

    2011-01-01

    The multiple signal classification (MUSIC) algorithm, and its extension for imaging sparse extended objects, with noisy data is analyzed by compressed sensing (CS) techniques. A thresholding rule is developed to augment the standard MUSIC algorithm. The notion of restricted isometry property (RIP) and an upper bound on the restricted isometry constant (RIC) are employed to establish sufficient conditions for exact localization by MUSIC with or without noise. In the noiseless case, the sufficient condition gives an upper bound on the numbers of random sampling and incident directions necessary for exact localization. In the noisy case, the sufficient condition assumes additionally an upper bound for the noise-to-object ratio in terms of the RIC and the dynamic range of objects. This bound points to the super-resolution capability of the MUSIC algorithm. A rigorous comparison of performance between MUSIC and the CS minimization principle, basis pursuit denoising (BPDN), is given. In general, the MUSIC algorithm guarantees to recover, with high probability, s scatterers with n = O(s²) random sampling and incident directions and sufficiently high frequency. For the favorable imaging geometry where the scatterers are distributed on a transverse plane, MUSIC guarantees to recover, with high probability, s scatterers with a median frequency and n = O(s) random sampling/incident directions. Moreover, for the problems of spectral estimation and source localization both BPDN and MUSIC guarantee, with high probability, to identify exactly the frequencies of random signals with the number n = O(s) of sampling times. However, in the absence of abundant realizations of signals, BPDN is the preferred method for spectral estimation. Indeed, BPDN can identify the frequencies approximately with just one realization of signals, with the recovery error at worst linearly proportional to the noise level. Numerical results confirm that BPDN outperforms MUSIC in the well-resolved case while

  10. Electron scattering by native defects in III-V nitrides and their alloys

    International Nuclear Information System (INIS)

    Hsu, L.; Walukiewicz, W.

    1996-03-01

    We have calculated the electron mobilities in GaN and InN taking into consideration scattering by short-range potentials, in addition to all standard scattering mechanisms. These potentials are produced by the native defects which are responsible for the high electron concentrations in nominally undoped nitrides. Comparison of the calculated mobilities with experimental data shows that scattering by short-range potentials is the dominant mechanism limiting the electron mobilities in unintentionally doped nitrides with large electron concentrations. In the case of AlxGa1-xN alloys, the reduction in the electron concentration due to the upward shift of the conduction band relative to the native defect level can account for the experimentally measured mobilities. Resonant scattering is shown to be important when the defect and Fermi levels are close in energy.

  11. Real-time simulator for designing electron dual scattering foil systems.

    Science.gov (United States)

    Carver, Robert L; Hogstrom, Kenneth R; Price, Michael J; LeBlanc, Justin D; Pitcher, Garrett M

    2014-11-08

    The purpose of this work was to develop a user-friendly, accurate, real-time computer simulator to facilitate the design of dual foil scattering systems for electron beams on radiotherapy accelerators. The simulator allows for a relatively quick initial design that can be refined and verified with subsequent Monte Carlo (MC) calculations and measurements. The simulator also is a powerful educational tool. The simulator consists of an analytical algorithm for calculating electron fluence and X-ray dose and a graphical user interface (GUI) C++ program. The algorithm predicts electron fluence using Fermi-Eyges multiple Coulomb scattering theory with the reduced Gaussian formalism for scattering powers. The simulator also estimates central-axis and off-axis X-ray dose arising from the dual foil system. Once the geometry of the accelerator is specified, the simulator allows the user to continuously vary primary scattering foil material and thickness, secondary scattering foil material and Gaussian shape (thickness and sigma), and beam energy. The off-axis electron relative fluence or total dose profile and the central-axis X-ray dose contamination are computed and displayed in real time. The simulator was validated by comparison of off-axis electron relative fluence and X-ray percent dose profiles with those calculated using EGSnrc MC. Over the energy range 7-20 MeV, using present foils on an Elekta radiotherapy accelerator, the simulator was able to reproduce MC profiles to within 2% out to 20 cm from the central axis. The central-axis X-ray percent dose predictions matched measured data to within 0.5%. The calculation time was approximately 100 ms using a single Intel 2.93 GHz processor, which allows for real-time variation of foil geometrical parameters using slider bars. This work demonstrates how the user-friendly GUI and real-time nature of the simulator make it an effective educational tool for gaining a better understanding of the effects that various system
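
    The Fermi-Eyges core of such a simulator can be caricatured in one dimension: the primary foil turns the pencil beam into a Gaussian on the secondary foil, each foil element re-scatters with an angular variance proportional to the local thickness, and the flattened profile is the superposition of the resulting pencils. The geometry and all parameters below are invented for illustration and are not the accelerator model of the paper.

        import numpy as np

        z2, ssd = 10.0, 100.0                 # secondary foil and measurement plane (cm)
        sig_theta1 = 0.05                     # rms angle after the primary foil (rad)
        sig_foil, t0 = 1.2, 1.0               # Gaussian foil shape (cm), peak thickness
        sig_theta2 = 0.06                     # rms angle at full secondary thickness

        xs = np.linspace(-6.0, 6.0, 121)      # positions on the secondary foil
        xd = np.linspace(-25.0, 25.0, 251)    # positions on the measurement plane

        w = np.exp(-xs**2 / (2.0 * (sig_theta1 * z2) ** 2))   # fluence on the foil
        t = t0 * np.exp(-xs**2 / (2.0 * sig_foil**2))         # thickness profile
        s = sig_theta2 * np.sqrt(t / t0) * (ssd - z2)         # pencil sigma at the plane

        profile = np.zeros_like(xd)           # superpose the re-scattered pencils
        for x0, wi, si in zip(xs, w, s):
            profile += wi * np.exp(-(xd - x0 * ssd / z2) ** 2 / (2.0 * si**2)) / si
        profile /= profile.max()
        print("off-axis ratio at 10 cm:", profile[np.argmin(np.abs(xd - 10.0))])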

  12. Reconstruction of surface morphology from coherent scattering of "white" synchrotron radiation in hard X-ray regime

    Energy Technology Data Exchange (ETDEWEB)

    Sant, Tushar

    2009-07-01

    Energy Dispersive Reflectometry (EDR) beamline at BESSY II provides "white" X-rays in the useful energy range of 5 … scattering material of Pt with high atomic number, Z=78, and a patterned semiconducting surface like a GaAs surface grating which provides a certain periodicity in the measured scattering intensity. Finally I measured the surface speckles from a spatially confined Si wafer under the constraint that the size of the sample is smaller than the footprint of the incoming beam at the sample position. To reconstruct surface morphology from coherent reflectivity data is a typical inverse problem. Conventional phase retrieval algorithms like the Gerchberg-Saxton (GS) algorithm, the error reduction (ER) algorithm and the hybrid input-output (HIO) algorithm were used in earlier work by other authors. I modified the conventional GS and ER algorithms to take into account the additional Fresnel propagator term and also the illumination function at the sample position. I tested the modified algorithm successfully for a model surface in the form of a surface grating. I used the modified algorithm to reconstruct surface morphology from various static speckle measurements I performed at the EDR beamline. The surface profiles reconstructed for different samples from the data at different energies (below the critical energy for the material at a particular incident angle) show almost the same roughness behavior for the surface height, with a mean roughness of ∼1 nm. With the static speckle data I measured I could retrieve a one-dimensional picture of the sample surface with spatial

  13. Elastic scattering dynamics of cavity polaritons: Evidence for time-energy uncertainty and polariton localization

    DEFF Research Database (Denmark)

    Langbein, Wolfgang Werner; Hvam, Jørn Märcher

    2002-01-01

    The directional dynamics of the resonant Rayleigh scattering from a semiconductor microcavity is investigated. When optically exciting the lower polariton branch, the strong dispersion results in a directional emission on a ring. The coherent emission ring shows a reduction of its angular width for increasing time after excitation, giving direct evidence for the time-energy uncertainty in the dynamics of the scattering by disorder. The ring width converges with time to a finite value, a direct measure of an intrinsic momentum broadening of the polariton states localized by multiple disorder scattering.

  14. Solving conic optimization problems via self-dual embedding and facial reduction: A unified approach

    DEFF Research Database (Denmark)

    Permenter, Frank; Friberg, Henrik A.; Andersen, Erling D.

    2017-01-01

    it fails to return a primal-dual optimal solution or a certificate of infeasibility. Using this observation, we give an algorithm based on facial reduction for solving the primal problem that, in principle, always succeeds. (An analogous algorithm is easily stated for the dual problem.) This algorithm has the appealing property that it only performs facial reduction when it is required, not when it is possible; e.g., if a primal-dual optimal solution exists, it will be found in lieu of a facial reduction certificate even if Slater's condition fails. For the case of linear, second-order, and semidefinite

  15. Evaluation of a scattering correction method for high energy tomography

    Science.gov (United States)

    Tisseur, David; Bhatia, Navnina; Estre, Nicolas; Berge, Léonie; Eck, Daniel; Payan, Emmanuel

    2018-01-01

    One of the main drawbacks of Cone Beam Computed Tomography (CBCT) is the contribution of the scattered photons due to the object and the detector. Scattered photons are deflected from their original path after their interaction with the object. This additional contribution of the scattered photons results in increased measured intensities, since the scattered intensity simply adds to the transmitted intensity. The effect is seen as an overestimation of the measured intensity, and thus an underestimation of absorption. This results in artifacts like cupping, shading and streaks on the reconstructed images. Moreover, the scattered radiation introduces a bias in quantitative tomographic reconstruction (for example, atomic number and mass density measurement with the dual-energy technique). The effect can be significant and difficult to handle in the MeV energy range for large objects, due to the higher Scatter-to-Primary Ratio (SPR). Additionally, incident high-energy photons which are Compton-scattered are more forward directed and hence more likely to reach the detector. Moreover, in the MeV energy range, the contribution of photons produced by pair production and the Bremsstrahlung process also becomes important. We propose an evaluation of a scattering correction technique based on the method named Scatter Kernel Superposition (SKS). The algorithm uses a continuously thickness-adapted kernel method. Analytical parameterizations of the scatter kernels are derived in terms of material thickness, to form continuously thickness-adapted kernel maps in order to correct the projections. This approach has proved to be efficient in producing better sampling of the kernels with respect to the object thickness. The technique offers applicability over a wide range of imaging conditions and gives users an additional advantage. Moreover, since no extra hardware is required by this approach, it forms a major advantage especially in those cases where
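
    The superposition at the heart of SKS is straightforward: group the projection by object thickness, convolve the primaries of each group with that group's kernel, and sum. The 1D sketch below uses invented Gaussian kernels whose amplitude and width grow with thickness; real SKS kernels are parameterized from Monte Carlo simulations or measurements.

        import numpy as np

        n = 256
        x = np.arange(n)
        thickness = np.where((x > 80) & (x < 176), 20.0, 5.0)   # thickness map (cm)
        primary = np.exp(-0.05 * thickness)                      # attenuated primary

        def kernel(t):
            """Illustrative scatter kernel for thickness t."""
            u = np.arange(-n // 2, n // 2)
            sigma, amp = 4.0 + 0.8 * t, 0.01 * t
            return amp * np.exp(-u**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

        scatter = np.zeros(n)
        for t in np.unique(thickness):               # one kernel per thickness group
            masked = np.where(thickness == t, primary, 0.0)
            scatter += np.convolve(masked, kernel(t), mode='same')

        measured = primary + scatter                 # forward model of the projection
        corrected = measured - scatter               # correction subtracts the estimate
        print("peak scatter-to-primary ratio:", (scatter / primary).max())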

  16. An automated phase correction algorithm for retrieving permittivity and permeability of electromagnetic metamaterials

    Directory of Open Access Journals (Sweden)

    Z. X. Cao

    2014-06-01

    To retrieve the complex-valued effective permittivity and permeability of electromagnetic metamaterials (EMMs) based on resonant effects from scattering parameters, the use of a complex logarithmic function is unavoidable. When complex values are expressed in terms of magnitude and phase, an infinite number of phase angles is permissible due to the multi-valued property of complex logarithmic functions. Special attention needs to be paid to ensure continuity of the effective permittivity and permeability of lossy metamaterials as the frequency sweeps. In this paper, an automated phase correction (APC) algorithm is proposed to properly trace and compensate the phase angles of the complex logarithmic function, which may experience abrupt phase jumps near the resonant frequency region of the EMMs concerned, and hence the continuity of the effective optical properties of lossy metamaterials is ensured. The algorithm is then verified by extracting effective optical properties from the simulated scattering parameters of four different types of metamaterial media: a cut-wire cell array, a split ring resonator (SRR) cell array, an electric-LC (E-LC) resonator cell array, and a combined SRR and wire cell array, respectively. The results demonstrate that the proposed algorithm is highly accurate and effective.
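
    A generic version of the phase bookkeeping the APC algorithm addresses: take the complex log of the transmission coefficient with the phase tracked continuously along the frequency sweep instead of folded to the principal branch. The sketch uses numpy's unwrap on a toy resonant response; the paper's algorithm additionally handles the abrupt jumps near resonance that simple unwrapping can miss.

        import numpy as np

        def continuous_log(T):
            """Complex log with the phase branch chosen continuously in frequency."""
            return np.log(np.abs(T)) + 1j * np.unwrap(np.angle(T))

        # Toy resonant transmission whose principal-value phase wraps near f = 1.2.
        f = np.linspace(0.5, 2.0, 1001)
        T = 1.0 / (1.0 - (f / 1.2) ** 2 - 0.05j * f)

        logT = continuous_log(T)
        print("phase range (rad):", logT.imag.min(), logT.imag.max())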

  17. Light Scattering by Ice Crystals Containing Air Bubbles

    Science.gov (United States)

    Zhang, J.; Panetta, R. L.; Yang, P.; Bi, L.

    2014-12-01

    The radiative effects of ice clouds are often difficult to estimate accurately, but are very important for the interpretation of observations and for climate modeling. Our understanding of these effects is primarily based on scattering calculations, but due to the variability in ice habit it is computationally difficult to determine the required scattering and absorption properties, and the difficulties are only compounded by the need to include consideration of air and carbon inclusions of the sort frequently observed in collected samples. Much of the previous work on the effects of inclusions in ice particles on scattering properties has been conducted with variants of geometric optics methods. We report on simulations of scattering by ice crystals with enclosed air bubbles using the pseudo-spectral time domain method (PSTD) and the improved geometric optics method (IGOM). A Bouncing Ball Model (BBM) is proposed as a parametrization of air bubbles, and the results are compared with Monte Carlo radiative transfer calculations. Consistent with earlier studies, we find that air inclusions lead to a smoothing of variations in the phase function, weakening of halos, and a reduction of backscattering. We extend these studies by examining the effects of the particular arrangement of a fixed number of bubbles, as well as the effects of splitting a given number of bubbles into a greater number of smaller bubbles with the same total volume fraction. The results show that the phase function does not change much for stochastically distributed air bubbles. They also show that local maxima of the phase function in backward directions are smoothed out when bubbles are broken into smaller ones, and that a single big bubble favors forward scattering more than multiple small internal scatterers.

  18. Adaptive Kernel In The Bootstrap Boosting Algorithm In KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of adaptive kernel in a bootstrap boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...
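
    The adaptive-kernel ingredient of the scheme can be sketched with the standard Abramson construction: a fixed-bandwidth pilot density sets per-point bandwidths, narrow where the density is high and wide in the tails. The bootstrap-boosting loop around it is not reproduced; the sensitivity exponent alpha and the rule-of-thumb pilot bandwidth are conventional choices, not the paper's.

        import numpy as np

        def adaptive_kde(data, x, alpha=0.5):
            """Abramson-style adaptive-bandwidth Gaussian KDE evaluated on x."""
            n = data.size
            h0 = 1.06 * data.std() * n ** (-0.2)      # rule-of-thumb pilot bandwidth
            pilot = np.exp(-(data[:, None] - data) ** 2 / (2 * h0**2)).mean(1) \
                    / (h0 * np.sqrt(2 * np.pi))
            g = np.exp(np.log(pilot).mean())          # geometric mean of pilot values
            h = h0 * (pilot / g) ** (-alpha)          # per-point bandwidths
            k = np.exp(-(x[:, None] - data) ** 2 / (2 * h**2)) / (h * np.sqrt(2 * np.pi))
            return k.mean(1)

        rng = np.random.default_rng(0)
        sample = np.concatenate([rng.normal(-2, 0.3, 300), rng.normal(2, 1.0, 300)])
        grid = np.linspace(-5.0, 6.0, 200)
        density = adaptive_kde(sample, grid)
        print("integral ~", np.trapz(density, grid))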

  19. Fast algorithms for transport models. Final report, June 1, 1993--May 31, 1994

    International Nuclear Information System (INIS)

    Manteuffel, T.

    1994-12-01

    The focus of this project is the study of multigrid and multilevel algorithms for the numerical solution of Boltzmann models of the transport of neutral and charged particles. In previous work a fast multigrid algorithm was developed for the numerical solution of the Boltzmann model of neutral particle transport in slab geometry assuming isotropic scattering. The new algorithm is extremely fast in the thick diffusion limit; the multigrid V-cycle convergence factor approaches zero as the mean free path between collisions approaches zero, independent of the mesh. Also, a fast multilevel method was developed for the numerical solution of the Boltzmann model of charged particle transport in the thick Fokker-Planck limit for slab geometry. Parallel implementations were developed for both algorithms.

  20. Implementation of a parallel algorithm for spherical SN calculations on the IBM 3090

    International Nuclear Information System (INIS)

    Haghighat, A.; Lawrence, R.D.

    1989-01-01

    Parallel SN algorithms based on domain decomposition in angle are straightforward to develop in Cartesian geometry because the computation of the angular fluxes for a specific discrete ordinate can be performed independently of all other angles. This is not the case for curvilinear geometries, where the angular redistribution component of the discretized streaming operator results in coupling between angular fluxes along adjacent discrete ordinates. Previously, the authors developed a parallel algorithm for SN calculations in spherical geometry and examined its iterative convergence for criticality and detector problems with differing scattering/absorption ratios. In this paper, the authors describe the implementation of the algorithm on an IBM 3090 Model 400 (four processors) and present computational results illustrating the efficiency of the algorithm relative to serial execution.