Energy Technology Data Exchange (ETDEWEB)
Chun, Se Young [School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan (Korea, Republic of)]
2016-03-15
PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms have enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images such as CT and MRI from hybrid scanners are also important ingredients for further improving the image quality of PET or SPECT. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we review work on using anatomical information in molecular image reconstruction algorithms to improve image quality, describing mathematical models, discussing sources of anatomical information for different cases, and showing some examples.
An algorithm for reduct cardinality minimization
AbouEisha, Hassan M.
2013-12-01
This paper is devoted to a new algorithm for reduct cardinality minimization. The algorithm transforms the initial table into a decision table of a special kind, simplifies this table, and uses a dynamic programming algorithm to finish the construction of an optimal reduct. Results of computer experiments with decision tables from the UCI ML Repository are discussed. © 2013 IEEE.
An algorithm for reduct cardinality minimization
AbouEisha, Hassan M.; Al Farhan, Mohammed; Chikalov, Igor; Moshkov, Mikhail
2013-01-01
This paper is devoted to a new algorithm for reduct cardinality minimization. The algorithm transforms the initial table into a decision table of a special kind, simplifies this table, and uses a dynamic programming algorithm to finish the construction of an optimal reduct. Results of computer experiments with decision tables from the UCI ML Repository are discussed. © 2013 IEEE.
Stray light reduction for Thomson scattering
Bakker, L.P.; Kroesen, G.M.W.; Doebele, H.F.; Muraoka, K.
1999-01-01
In order to perform Thomson scattering in a gas discharge tube, the reduction of stray light is very important because of the very small Thomson cross-section. By introducing a sodium absorption cell as a notch filter, we can reduce the measured stray light considerably. Then we have to use a dye
Algorithm FIRE-Feynman Integral REduction
International Nuclear Information System (INIS)
Smirnov, A.V.
2008-01-01
The recently developed algorithm FIRE performs the reduction of Feynman integrals to master integrals. It is based on a number of strategies, such as applying the Laporta algorithm, the s-bases algorithm, region-bases and integrating explicitly over loop momenta when possible. Currently it is being used in complicated three-loop calculations.
Parallel Algorithms for Groebner-Basis Reduction
1987-09-25
Technical report (Productivity Engineering in the UNIX Environment): Parallel Algorithms for Groebner-Basis Reduction.
An algorithm to determine backscattering ratio and single scattering albedo
Digital Repository Service at National Institute of Oceanography (India)
Suresh, T.; Desa, E.; Matondkar, S.G.P.; Mascarenhas, A.A.M.Q.; Nayak, S.R.; Naik, P.
Algorithms to determine the inherent optical properties of water, backscattering probability and single scattering albedo at 490 and 676 nm from the apparent optical property, remote sensing reflectance are presented here. The measured scattering...
An Algorithm for Computing Screened Coulomb Scattering in Geant4
Mendenhall, Marcus H.; Weller, Robert A.
2004-01-01
An algorithm has been developed for the Geant4 Monte-Carlo package for the efficient computation of screened Coulomb interatomic scattering. It explicitly integrates the classical equations of motion for scattering events, resulting in precise tracking of both the projectile and the recoil target nucleus. The algorithm permits the user to plug in an arbitrary screening function, such as Lens-Jensen screening, which is good for backscattering calculations, or Ziegler-Biersack-Littmark screening...
Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects
Gordon, Howard R.; Castano, Diego J.
1987-01-01
Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.
A Hierarchical Volumetric Shadow Algorithm for Single Scattering
Baran, Ilya; Chen, Jiawen; Ragan-Kelley, Jonathan Millar; Durand, Fredo; Lehtinen, Jaakko
2010-01-01
Volumetric effects such as beams of light through participating media are an important component in the appearance of the natural world. Many such effects can be faithfully modeled by a single scattering medium. In the presence of shadows, rendering these effects can be prohibitively expensive: current algorithms are based on ray marching, i.e., integrating the illumination scattered towards the camera along each view ray, modulated by visibility to the light source at each sample. Visibility...
Gain reduction measurements in transient stimulated Raman scattering
Heeman, R.J.; Godfried, H.P.
1995-01-01
Threshold energy measurements of transient rotational stimulated Raman scattering are compared to Raman conversion calculations from semiclassical theories using a simple concept of a gain reduction factor which expresses the reduction of the gain from its steady-state value due to transient
An algorithm for computing screened Coulomb scattering in GEANT4
Energy Technology Data Exchange (ETDEWEB)
Mendenhall, Marcus H. [Vanderbilt University Free Electron Laser Center, P.O. Box 351816 Station B, Nashville, TN 37235-1816 (United States)]. E-mail: marcus.h.mendenhall@vanderbilt.edu; Weller, Robert A. [Department of Electrical Engineering and Computer Science, Vanderbilt University, P.O. Box 351821 Station B, Nashville, TN 37235-1821 (United States)]. E-mail: robert.a.weller@vanderbilt.edu
2005-01-01
An algorithm has been developed for the GEANT4 Monte-Carlo package for the efficient computation of screened Coulomb interatomic scattering. It explicitly integrates the classical equations of motion for scattering events, resulting in precise tracking of both the projectile and the recoil target nucleus. The algorithm permits the user to plug in an arbitrary screening function, such as Lens-Jensen screening, which is good for backscattering calculations, or Ziegler-Biersack-Littmark screening, which is good for nuclear straggling and implantation problems. This will allow many of the applications of the TRIM and SRIM codes to be extended into the much more general GEANT4 framework where nuclear and other effects can be included.
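The core of the method, integrating the classical equations of motion to obtain the scattering angle, can be sketched as follows. This is an illustrative toy in reduced units, with a Yukawa-type exponential screening function standing in for the Lens-Jensen or Ziegler-Biersack-Littmark forms named in the abstract, and it ignores target recoil; it is not the GEANT4 implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def deflection_angle(b, v0=2.0, k=1.0, a=20.0, r_start=200.0):
    """Scattering angle from direct integration of the classical equations of
    motion in a screened Coulomb potential V(r) = (k/r) exp(-r/a).
    Reduced units with projectile mass m = 1; the recoil of the target
    nucleus is ignored in this sketch (fixed scattering centre)."""
    def rhs(t, y):
        x, z, vx, vz = y
        r = np.hypot(x, z)
        # Repulsive radial force: F = -dV/dr = (k/r^2)(1 + r/a) exp(-r/a)
        fr = (k / r**2) * (1.0 + r / a) * np.exp(-r / a)
        return [vx, vz, fr * x / r, fr * z / r]

    y0 = [b, -r_start, 0.0, v0]            # approach along +z at impact parameter b
    t_end = 2.5 * r_start / v0
    sol = solve_ivp(rhs, (0.0, t_end), y0, rtol=1e-10, atol=1e-12)
    vx, vz = sol.y[2, -1], sol.y[3, -1]
    return np.arctan2(vx, vz)              # deflection from the incident direction

theta1 = deflection_angle(b=1.0)
theta2 = deflection_angle(b=2.0)
# Larger impact parameter gives a smaller deflection; with weak screening the
# result approaches the Rutherford relation tan(theta/2) = k / (m v0^2 b)
```

With the large screening length chosen here, `theta1` lands slightly below the unscreened Rutherford prediction of about 0.49 rad, as expected.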
An algorithm for computing screened Coulomb scattering in GEANT4
International Nuclear Information System (INIS)
Mendenhall, Marcus H.; Weller, Robert A.
2005-01-01
An algorithm has been developed for the GEANT4 Monte-Carlo package for the efficient computation of screened Coulomb interatomic scattering. It explicitly integrates the classical equations of motion for scattering events, resulting in precise tracking of both the projectile and the recoil target nucleus. The algorithm permits the user to plug in an arbitrary screening function, such as Lens-Jensen screening, which is good for backscattering calculations, or Ziegler-Biersack-Littmark screening, which is good for nuclear straggling and implantation problems. This will allow many of the applications of the TRIM and SRIM codes to be extended into the much more general GEANT4 framework where nuclear and other effects can be included.
Energy Technology Data Exchange (ETDEWEB)
Kim, Ye-Seul; Park, Hye-Suk; Kim, Hee-Joung [Yonsei University, Wonju (Korea, Republic of); Choi, Young-Wook; Choi, Jae-Gu [Korea Electrotechnology Research Institute, Ansan (Korea, Republic of)
2014-12-15
Digital breast tomosynthesis (DBT) is a technique that was developed to overcome the limitations of conventional digital mammography by reconstructing slices through the breast from projections acquired at different angles. In developing and optimizing DBT, the x-ray scatter reduction technique remains a significant challenge due to projection geometry and radiation dose limitations. The most common approach to scatter reduction is the beam-stop-array (BSA) algorithm; however, this method raises concerns regarding the additional exposure involved in acquiring the scatter distribution. The compressed breast is roughly symmetric, and the scatter profiles from projections acquired at axially opposite angles are similar to mirror images. The purpose of this study was to apply the BSA algorithm with only two scans with a beam stop array, which estimates the scatter distribution with minimal additional exposure. The results of the scatter correction with angular interpolation were comparable to those of the scatter correction with all scatter distributions measured at each angle. The exposure increase was less than 13%. This study demonstrated the influence of the scatter correction obtained by using the BSA algorithm with minimal exposure, which indicates its potential for practical applications.
Fast sampling algorithm for the simulation of photon Compton scattering
International Nuclear Information System (INIS)
Brusa, D.; Salvat, F.
1996-01-01
A simple algorithm for the simulation of Compton interactions of unpolarized photons is described. The energy and direction of the scattered photon, as well as the active atomic electron shell, are sampled from the double-differential cross section obtained by Ribberfors from the relativistic impulse approximation. The algorithm consistently accounts for Doppler broadening and electron binding effects. Simplifications of Ribberfors' formula, required for efficient random sampling, are discussed. The algorithm involves a combination of inverse transform, composition and rejection methods. A parameterization of the Compton profile is proposed from which the simulation of Compton events can be performed analytically in terms of a few parameters that characterize the target atom, namely shell ionization energies, occupation numbers and maximum values of the one-electron Compton profiles. (orig.)
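The rejection step at the heart of such a sampler can be sketched for the free-electron (Klein-Nishina) case. Doppler broadening, binding effects, and the Compton-profile parameterization described in the abstract are omitted, so this is a hedged illustration rather than the published algorithm.

```python
import numpy as np

def sample_compton_costheta(k, rng):
    """Sample cos(theta) for Compton scattering of a photon of energy k
    (in units of the electron rest energy) by rejection sampling on the
    free-electron Klein-Nishina cross section."""
    while True:
        cos_t = rng.uniform(-1.0, 1.0)
        eps = 1.0 / (1.0 + k * (1.0 - cos_t))    # E'/E from Compton kinematics
        sin2 = 1.0 - cos_t * cos_t
        # Klein-Nishina angular factor eps^3 + eps - eps^2 sin^2(theta),
        # which is bounded above by 2 for all angles and energies
        f = eps * eps * (eps + 1.0 / eps - sin2)
        if rng.uniform(0.0, 2.0) <= f:
            return cos_t

rng = np.random.default_rng(0)
samples = np.array([sample_compton_costheta(1.0, rng) for _ in range(20000)])
# Increasing k makes the sampled distribution increasingly forward-peaked
```

The paper's actual algorithm combines inverse transform, composition, and rejection methods for efficiency; plain rejection as above is the simplest correct baseline.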
Data reduction for neutron scattering from plutonium samples. Final report
International Nuclear Information System (INIS)
Seeger, P.A.
1997-01-01
An experiment performed in August 1993 on the Low-Q Diffractometer (LQD) at the Manuel Lujan Jr. Neutron Scattering Center (MLNSC) was designed to study the formation and annealing of He bubbles in aged 239Pu metal. Significant complications arise in the reduction of the data because of the very high total neutron cross section of 239Pu, and also because the samples are difficult to make uniform and to characterize. This report gives the details of the data and the data reduction procedures, presents the resulting scattering patterns in terms of macroscopic cross section as a function of momentum transfer, and suggests improvements for future experiments.
Slot technique - an alternative method of scatter reduction in radiography
International Nuclear Information System (INIS)
Panzer, W.; Widenmann, L.
1983-01-01
The most common method of scatter reduction in radiography is the use of an antiscatter grid. Its disadvantage is the absorption of a certain percentage of primary radiation in the lead strips of the grid, and the fact that, due to the limited thickness of the lead strips, their scatter absorption is also limited. A possibility for avoiding this disadvantage is offered by the so-called slot technique, i.e., the successive exposure of the subject with a narrow fan beam provided by slots in rather thick lead plates. The results of a comparison between the grid and slot techniques regarding dose to the patient, scatter reduction, image quality, and the effect of automatic exposure control are reported. (author)
Modeling of detective quantum efficiency considering scatter-reduction devices
Energy Technology Data Exchange (ETDEWEB)
Park, Ji Woong; Kim, Dong Woon; Kim, Ho Kyung [Pusan National University, Busan (Korea, Republic of)
2016-05-15
The reduction of the signal-to-noise ratio (SNR) due to scattered photons cannot be restored and has therefore become a severe issue in digital mammography.1 Antiscatter grids are therefore typically used in mammography. The scatter-cleanup performance of various scatter-reduction devices, such as air gaps,2 linear (1D) or cellular (2D) grids,3,4 and slot-scanning devices,5 has been extensively investigated by many research groups. At present, a digital mammography system with slot-scanning geometry is also commercially available.6 In this study, we theoretically investigate the effect of scattered photons on the detective quantum efficiency (DQE) performance of digital mammography detectors by using the cascaded-systems analysis (CSA) approach. We show a simple DQE formalism describing digital mammography detector systems equipped with scatter-reduction devices by regarding the scattered photons as additive noise sources. The LFD increased with increasing PMMA thickness, and the amounts of LFD indicated the corresponding SF. The estimated SFs were 0.13, 0.21, and 0.29 for PMMA thicknesses of 10, 20, and 30 mm, respectively. While the solid line describing the measured MTF for PMMA of 0 mm was the result of a least-squares regression fit using Eq. (14), the other lines simply resulted from multiplying the fit result (for PMMA of 0 mm) by the (1-SF) estimated from the LFDs in the measured MTFs. The measured spectral noise-power densities over the entire frequency range did not change much with increasing scatter. On the other hand, the calculation results showed that the spectral noise-power densities increased with increasing scatter. This discrepancy may be explained by the fact that the model developed in this study does not account for changes in x-ray interaction parameters for varying spectral shapes due to beam hardening with increasing PMMA thickness.
N-Dimensional LLL Reduction Algorithm with Pivoted Reflection
Directory of Open Access Journals (Sweden)
Zhongliang Deng
2018-01-01
Full Text Available The Lenstra-Lenstra-Lovász (LLL) lattice reduction algorithm and many of its variants have been widely used in cryptography, multiple-input multiple-output (MIMO) communication systems, and carrier phase positioning in global navigation satellite systems (GNSS) to solve the integer least squares (ILS) problem. In this paper, we propose an n-dimensional LLL reduction algorithm (n-LLL), expanding the Lovász condition of the LLL algorithm to n-dimensional space in order to obtain a further reduced basis. We also introduce pivoted Householder reflection into the algorithm to optimize the reduction time. For an m-order positive definite matrix, analysis shows that the n-LLL reduction algorithm will converge within finitely many steps and always produce better results than the original LLL reduction algorithm for n > 2. The simulations clearly show that n-LLL is better than the original LLL at reducing the condition number of an ill-conditioned input matrix, with a 39% improvement on average for typical cases, which can significantly reduce the search space for solving the ILS problem. The simulation results also show that the pivoted reflection significantly reduces the number of swaps in the algorithm, by 57%, making n-LLL a more practical reduction algorithm.
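For reference, the original LLL reduction that n-LLL extends can be sketched as follows. This is the textbook algorithm with delta = 0.75, not the paper's n-dimensional or pivoted-reflection variant, and the repeated Gram-Schmidt recomputation favors clarity over speed.

```python
import numpy as np

def lll_reduce(basis, delta=0.75):
    """Textbook LLL lattice basis reduction. `basis` rows are lattice
    vectors; returns a reduced basis spanning the same lattice."""
    b = np.array(basis, dtype=float)
    n = b.shape[0]

    def gram_schmidt(b):
        bstar = np.zeros_like(b)
        mu = np.zeros((n, n))
        for i in range(n):
            bstar[i] = b[i]
            for j in range(i):
                mu[i, j] = b[i] @ bstar[j] / (bstar[j] @ bstar[j])
                bstar[i] -= mu[i, j] * bstar[j]
        return bstar, mu

    bstar, mu = gram_schmidt(b)
    k = 1
    while k < n:
        # Size-reduce b_k against the earlier basis vectors
        for j in range(k - 1, -1, -1):
            q = round(mu[k, j])
            if q != 0:
                b[k] -= q * b[j]
                bstar, mu = gram_schmidt(b)
        # Lovász condition: accept b_k or swap with b_{k-1}
        if bstar[k] @ bstar[k] >= (delta - mu[k, k - 1] ** 2) * (bstar[k - 1] @ bstar[k - 1]):
            k += 1
        else:
            b[[k, k - 1]] = b[[k - 1, k]]
            bstar, mu = gram_schmidt(b)
            k = max(k - 1, 1)
    return b

B = np.array([[1.0, 1.0, 1.0], [-1.0, 0.0, 2.0], [3.0, 5.0, 6.0]])
R = lll_reduce(B)
# R spans the same lattice (|det| preserved) but with much shorter vectors
```

The n-LLL variant replaces the pairwise Lovász test above with an n-dimensional condition and uses pivoted Householder reflections to cut down the number of swaps.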
Sound Scattering and Its Reduction by a Janus Sphere Type
Directory of Open Access Journals (Sweden)
Deliya Kim
2014-01-01
Full Text Available Sound scattering by a Janus sphere type is considered. The sphere has two surface zones: a soft surface of zero acoustic impedance and a hard surface of infinite acoustic impedance. The zones are arranged such that axisymmetry of the sound field is preserved. The equivalent source method is used to compute the sound field. It is shown that, by varying the sizes of the soft and hard zones on the sphere, a significant reduction can be achieved in the scattered acoustic power and upstream directivity when the sphere is near a free surface and its soft zone faces the incoming wave and vice versa for a hard ground. In both cases the size of the sphere’s hard zone is much larger than that of its soft zone. The boundary location between the two zones coincides with the location of a zero pressure line of the incoming standing sound wave, thus masking the sphere within the sound field reflected by the free surface or the hard ground. The reduction in the scattered acoustic power diminishes when the sphere is placed in free space. Variations of the scattered acoustic power and directivity with the sound frequency are also given and discussed.
Comparison of order reduction algorithms for application to electrical networks
Directory of Open Access Journals (Sweden)
Lj. Radić-Weissenfeld
2009-05-01
Full Text Available This paper addresses issues related to minimizing the computational burden, in terms of both memory and speed, during the simulation of electrical models. In order to achieve a simple and computationally fast model, order reduction of its reducible part is proposed. This paper gives an overview of order reduction algorithms and discusses their application.
Williams, C. R.
2012-12-01
The NASA Global Precipitation Mission (GPM) raindrop size distribution (DSD) Working Group is composed of NASA PMM Science Team Members and is charged to "investigate the correlations between DSD parameters using Ground Validation (GV) data sets that support, or guide, the assumptions used in satellite retrieval algorithms." Correlations between DSD parameters can be used to constrain the unknowns and reduce the degrees-of-freedom in under-constrained satellite algorithms. Over the past two years, the GPM DSD Working Group has analyzed GV data and has found correlations between the mass-weighted mean raindrop diameter (Dm) and the mass distribution standard deviation (Sm) that follow a power-law relationship. This Dm-Sm power-law relationship appears to be robust and has been observed in surface disdrometer and vertically pointing radar observations. One benefit of a Dm-Sm power-law relationship is that a three-parameter DSD can be modeled with just two parameters: Dm and Nw, which determines the DSD amplitude. In order to incorporate observed DSD correlations into satellite algorithms, the GPM DSD Working Group is developing scattering and integral tables that can be used by satellite algorithms. Scattering tables describe the interaction of electromagnetic waves with individual particles to generate cross sections of backscattering, extinction, and scattering. Scattering tables are independent of the distribution of particles. Integral tables combine scattering table outputs with DSD parameters and DSD correlations to generate integrated normalized reflectivity, attenuation, scattering, emission, and asymmetry coefficients. Integral tables contain both frequency-dependent scattering properties and cloud microphysics. The GPM DSD Working Group has developed scattering tables for raindrops at both Dual-frequency Precipitation Radar (DPR) frequencies and at all GMI radiometer frequencies less than 100 GHz. Scattering tables include Mie and T-matrix scattering with H- and V
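The two DSD moments whose correlation the Working Group reports can be computed directly from a binned size distribution. A minimal sketch; the gamma-DSD shapes below are illustrative assumptions, not GV data.

```python
import numpy as np

def dm_sm(diam, nd):
    """Mass-weighted mean diameter Dm and mass-spectrum standard deviation Sm,
    computed from a binned drop size distribution (diam in mm, nd per bin).
    Raindrop mass scales as D^3, hence the D^3 weighting."""
    m = nd * diam**3                      # mass-weighted (D^3) spectrum
    dm = np.sum(diam * m) / np.sum(m)
    sm = np.sqrt(np.sum((diam - dm) ** 2 * m) / np.sum(m))
    return dm, sm

# Illustrative truncated gamma DSDs, N(D) = D^mu exp(-lam*D) with mu = 2 (assumed)
D = np.linspace(0.1, 6.0, 200)
results = [dm_sm(D, D**2 * np.exp(-lam * D)) for lam in (3.0, 2.0, 1.5)]
# For an untruncated gamma DSD, Sm = Dm / sqrt(4 + mu): a power law in Dm
```

For a gamma DSD the ratio Sm/Dm is fixed by the shape parameter mu, which is one way a tight Dm-Sm relation can arise.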
Energy Technology Data Exchange (ETDEWEB)
Rusz, Ján, E-mail: jan.rusz@fysik.uu.se
2017-06-15
Highlights: • New algorithm for calculating the double differential scattering cross-section. • Good convergence properties are shown. • Outperforms the older MATS algorithm, particularly in zone axis calculations. - Abstract: We present a new algorithm for calculating the inelastic scattering cross-section for fast electrons. Compared to the previous Modified Automatic Term Selection (MATS) algorithm (Rusz et al. [18]), it has far better convergence properties in zone axis calculations and it allows one to identify the contributions of individual atoms. One can think of it as a blend of the MATS algorithm and the method described by Weickenmeier and Kohl [10].
An algorithm for 3D target scatterer feature estimation from sparse SAR apertures
Jackson, Julie Ann; Moses, Randolph L.
2009-05-01
We present an algorithm for extracting 3D canonical scattering features from complex targets observed over sparse 3D SAR apertures. The algorithm begins with complex phase history data and ends with a set of geometrical features describing the scene. The algorithm provides a pragmatic approach to initialization of a nonlinear feature estimation scheme, using regularization methods to deconvolve the point spread function and obtain sparse 3D images. Regions of high energy are detected in the sparse images, providing location initializations for scattering center estimates. A single canonical scattering feature, corresponding to a geometric shape primitive, is fit to each region via nonlinear optimization of fit error between the regularized data and parametric canonical scattering models. Results of the algorithm are presented using 3D scattering prediction data of a simple scene for both a densely-sampled and a sparsely-sampled SAR measurement aperture.
Development and evaluation of thermal model reduction algorithms for spacecraft
Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus
2015-05-01
This paper is concerned with the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming and manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model, for calculation of external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For the simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to the reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is a major concern here and restricts the useful application of these methods. Additional model reduction methods have been developed which take these constraints into account. The Matrix Reduction method approximates the differential equation at reference values exactly, except for numerical errors. The summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for the model reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.
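As a concrete illustration of mathematical model reduction for a thermal network, a standard static (Guyan-type) condensation onto a set of retained nodes can be sketched. This is a generic method of the kind evaluated in the paper, not its specific Matrix Reduction or summation method, and the example network values are assumed.

```python
import numpy as np

def static_condensation(K, Q, keep):
    """Static (Guyan-type) condensation of a linear steady-state thermal
    network K T = Q onto the retained nodes `keep`. Exact in steady state:
    solving the reduced system reproduces the retained temperatures."""
    keep = np.asarray(keep)
    elim = np.setdiff1d(np.arange(K.shape[0]), keep)
    Kkk = K[np.ix_(keep, keep)]
    Kke = K[np.ix_(keep, elim)]
    Kek = K[np.ix_(elim, keep)]
    Kee = K[np.ix_(elim, elim)]
    # Eliminate the internal nodes: Te = Kee^{-1} (Qe - Kek Tk)
    Kr = Kkk - Kke @ np.linalg.solve(Kee, Kek)
    Qr = Q[keep] - Kke @ np.linalg.solve(Kee, Q[elim])
    return Kr, Qr

# 5-node conduction chain, node 0 coupled to the environment (assumed values)
K = np.array([[ 2.0, -1.0,  0.0,  0.0,  0.0],
              [-1.0,  2.0, -1.0,  0.0,  0.0],
              [ 0.0, -1.0,  2.0, -1.0,  0.0],
              [ 0.0,  0.0, -1.0,  2.0, -1.0],
              [ 0.0,  0.0,  0.0, -1.0,  1.0]])
Q = np.array([0.0, 1.0, 0.0, 0.0, 2.0])
Kr, Qr = static_condensation(K, Q, keep=[0, 2, 4])
# Solving Kr Tr = Qr reproduces the full-model temperatures at nodes 0, 2, 4
```

Transient reduction (the paper's main concern) additionally has to condense the heat capacity matrix, which is where the exactness of static condensation is lost.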
Environmental Optimization Using the WAste Reduction Algorithm (WAR)
Traditionally chemical process designs were optimized using purely economic measures such as rate of return. EPA scientists developed the WAste Reduction algorithm (WAR) so that environmental impacts of designs could easily be evaluated. The goal of WAR is to reduce environme...
Implementing peak load reduction algorithms for household electrical appliances
International Nuclear Information System (INIS)
Dlamini, Ndumiso G.; Cromieres, Fabien
2012-01-01
Considering household appliance automation for reduction of household peak power demand, this study explored aspects of the interaction between household automation technology and human behaviour. Given a programmable household appliance switching system, and user-reported appliance use times, we simulated the load reduction effectiveness of three types of algorithms, which were applied at both the single household level and across all 30 households. All three algorithms effected significant load reductions, while the least-to-highest potential user inconvenience ranking was: coordinating the timing of frequent intermittent loads (algorithm 2); moving period-of-day time-flexible loads to off-peak times (algorithm 1); and applying short-term time delays to avoid high peaks (algorithm 3) (least accommodating). Peak reduction was facilitated by load interruptibility, time of use flexibility and the willingness of users to forgo impulsive appliance use. We conclude that a general factor determining the ability to shift the load due to a particular appliance is the time-buffering between the service delivered and the power demand of an appliance. Time-buffering can be ‘technologically inherent’, due to human habits, or realised by managing user expectations. There are implications for the design of appliances and home automation systems. - Highlights: ► We explored the interaction between appliance automation and human behaviour. ► There is potential for considerable load shifting of household appliances. ► Load shifting for load reduction is eased with increased time buffering. ► Design, human habits and user expectations all influence time buffering. ► Certain automation and appliance design features can facilitate load shifting.
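An algorithm-1-style move of time-flexible loads to off-peak hours can be sketched greedily: each flexible load is placed at the allowed start time that minimizes the resulting peak. The data structures and the example profile are illustrative assumptions, not the study's household data.

```python
import numpy as np

def shift_flexible_loads(base_profile, flexible_loads):
    """Greedy peak-reduction sketch: each time-flexible load, given as
    (power_kw, duration_hours, allowed_start_hours), is scheduled at the
    start time that minimises the resulting peak of the profile."""
    profile = np.array(base_profile, dtype=float)
    for power, duration, allowed_starts in flexible_loads:
        best_start, best_peak = None, np.inf
        for s in allowed_starts:
            trial = profile.copy()
            trial[s:s + duration] += power
            if trial.max() < best_peak:
                best_peak, best_start = trial.max(), s
        profile[best_start:best_start + duration] += power
    return profile

# 24-hour base profile (kW per hour) with an evening peak, plus one 2 kW,
# 2-hour time-flexible load that may start any hour before 22:00
base = [0.5] * 17 + [2.0, 3.0, 3.0, 2.0] + [0.5] * 3
shifted = shift_flexible_loads(base, [(2.0, 2, range(0, 22))])
# The flexible load lands in off-peak hours, so the 3 kW evening peak is unchanged
```

The greedy placement is the simplest expression of the time-buffering idea: the more start times a load tolerates, the easier it is to keep it off the peak.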
Angle Statistics Reconstruction: a robust reconstruction algorithm for Muon Scattering Tomography
Stapleton, M.; Burns, J.; Quillin, S.; Steer, C.
2014-11-01
Muon Scattering Tomography (MST) is a technique for using the scattering of cosmic ray muons to probe the contents of enclosed volumes. As a muon passes through material it undergoes multiple Coulomb scattering, where the amount of scattering is dependent on the density and atomic number of the material as well as the path length. Hence, MST has been proposed as a means of imaging dense materials, for instance to detect special nuclear material in cargo containers. Algorithms are required to generate an accurate reconstruction of the material density inside the volume from the muon scattering information, and some have already been proposed, most notably the Point of Closest Approach (PoCA) and Maximum Likelihood/Expectation Maximisation (MLEM) algorithms. However, whilst PoCA-based algorithms are easy to implement, they perform rather poorly in practice. Conversely, MLEM is a complicated algorithm to implement and computationally intensive, and there is currently no published, fast and easily-implementable algorithm that performs well in practice. In this paper, we first provide a detailed analysis of the source of inaccuracy in PoCA-based algorithms. We then motivate an alternative method, based on ideas first laid out by Morris et al., presenting and fully specifying an algorithm that performs well against simulations of realistic scenarios. We argue this new algorithm should be adopted by developers of Muon Scattering Tomography as an alternative to PoCA.
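The PoCA construction the paper analyzes reduces to finding the point of closest approach between the fitted incoming and outgoing muon tracks, together with the scattering angle. A minimal sketch of that geometry, with illustrative variable names:

```python
import numpy as np

def poca(p1, d1, p2, d2):
    """Point of Closest Approach between the incoming track (p1 + s*d1) and
    the outgoing track (p2 + t*d2), plus the scattering angle between them.
    Standard line-to-line geometry, not code from the paper."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if denom < 1e-12:
        s, t = 0.0, 0.0          # near-parallel tracks: negligible scattering
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    closest1 = p1 + s * d1
    closest2 = p2 + t * d2
    angle = np.arccos(np.clip(d1 @ d2, -1.0, 1.0))
    return 0.5 * (closest1 + closest2), angle

# A muon entering along +z and scattered by 0.1 rad at the origin (assumed)
p_in, d_in = np.array([0.0, 0.0, -10.0]), np.array([0.0, 0.0, 1.0])
p_out = np.array([10 * np.sin(0.1), 0.0, 10 * np.cos(0.1)])
d_out = np.array([np.sin(0.1), 0.0, np.cos(0.1)])
point, angle = poca(p_in, d_in, p_out, d_out)   # vertex near the origin
```

A PoCA imaging algorithm then bins these vertices, weighted by scattering angle; the paper's argument is precisely about the inaccuracy this single-vertex assumption introduces.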
Image noise reduction algorithm for digital subtraction angiography: clinical results.
Söderman, Michael; Holmin, Staffan; Andersson, Tommy; Palmgren, Charlotta; Babic, Draženko; Hoornaert, Bart
2013-11-01
To test the hypothesis that an image noise reduction algorithm designed for digital subtraction angiography (DSA) in interventional neuroradiology enables a reduction in the patient entrance dose by a factor of 4 while maintaining image quality. This prospective clinical study was approved by the local ethics committee, and all 20 adult patients provided informed consent. DSA was performed with the default reference DSA program, a quarter-dose DSA program with modified acquisition parameters (to reduce patient radiation dose exposure), and a real-time noise-reduction algorithm. Two consecutive biplane DSA data sets were acquired in each patient. The dose-area product (DAP) was calculated for each image and compared. A randomized, blinded, offline reading study was conducted to show noninferiority of the quarter-dose image sets. Overall, 40 samples per treatment group were necessary to achieve 80% power, which was calculated by using a one-sided α level of 2.5%. The mean DAP with the quarter-dose program was 25.3% ± 0.8 of that with the reference program. The median overall image quality scores with the reference program were 9, 13, and 12 for readers 1, 2, and 3, respectively. These scores increased slightly to 12, 15, and 12, respectively, with the quarter-dose program imaging chain. In DSA, a change in technique factors combined with a real-time noise-reduction algorithm will reduce the patient entrance dose by 75% without a loss of image quality. RSNA, 2013
Output Current Ripple Reduction Algorithms for Home Energy Storage Systems
Directory of Open Access Journals (Sweden)
Jin-Hyuk Park
2013-10-01
Full Text Available This paper proposes an output current ripple reduction algorithm using a proportional-integral (PI) controller for an energy storage system (ESS). In single-phase systems, the DC-link voltage of the DC/AC inverter pulsates at a second-order harmonic, twice the grid frequency. The output current of the DC/DC converter therefore has a ripple component caused by the ripple of the DC-link voltage. The second-order harmonic adversely affects the battery lifetime. The proposed algorithm has the advantage of reducing the second-order harmonic of the output current in a variable-frequency system. The proposed algorithm is verified by PSIM simulation and by experiment with a 3 kW ESS model.
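The role of the PI controller in suppressing the twice-grid-frequency (here 120 Hz) ripple can be illustrated with a toy first-order plant. The gains, time constant, and disturbance amplitude are assumptions for illustration, not the paper's 3 kW ESS model.

```python
import numpy as np

def simulate(kp, ki, steps=20000, dt=1e-5):
    """Toy simulation: a PI current loop around a first-order plant with a
    120 Hz additive disturbance standing in for the second-order-harmonic
    ripple. Returns (mean, peak-to-peak ripple) of the settled output."""
    i_ref, i_out, integ = 10.0, 0.0, 0.0
    tau = 1e-3                                 # plant time constant [s], assumed
    tail = []
    for k in range(steps):
        t = k * dt
        err = i_ref - i_out
        integ += err * dt
        u = kp * err + ki * integ              # PI control effort
        ripple = np.sin(2 * np.pi * 120.0 * t) # second-order-harmonic disturbance
        i_out += dt * (u + ripple - i_out) / tau
        if k >= steps - 2000:                  # keep the last 20 ms, post-settling
            tail.append(i_out)
    tail = np.array(tail)
    return tail.mean(), tail.max() - tail.min()

mean_closed, ripple_closed = simulate(kp=20.0, ki=5000.0)
mean_open, ripple_open = simulate(kp=0.0, ki=0.0)
# The closed loop tracks the 10 A reference and strongly attenuates the ripple
```

The integral term removes the DC tracking error while the loop gain at 120 Hz sets the residual ripple; the paper's contribution is making this work across a variable-frequency system.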
Reduction of Raman scattering and fluorescence from anvils in high pressure Raman scattering
Dierker, S. B.; Aronson, M. C.
2018-05-01
We describe a new design and use of a high pressure anvil cell that significantly reduces the Raman scattering and fluorescence from the anvils in high pressure Raman scattering experiments. The approach is particularly useful in Raman scattering studies of opaque, weakly scattering samples. The effectiveness of the technique is illustrated with measurements of two-magnon Raman scattering in La2CuO4.
TPSLVM: a dimensionality reduction algorithm based on thin plate splines.
Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming
2014-10-01
Dimensionality reduction (DR) has been considered one of the most significant tools for data analysis. One class of DR algorithms is based on latent variable models (LVM). LVM-based models can handle the preimage problem easily. In this paper we propose a new LVM-based DR model, named the thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), the proposed TPSLVM is more powerful, especially when the dimensionality of the latent space is low. TPSLVM is also robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM), as well as their combination BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction than PCA, GPLVM, ISOMAP, etc.
Channel Parameter Estimation for Scatter Cluster Model Using Modified MUSIC Algorithm
Directory of Open Access Journals (Sweden)
Jinsheng Yang
2012-01-01
Recently, scatter cluster models, which precisely evaluate the performance of wireless communication systems, have been proposed in the literature. However, the conventional SAGE algorithm does not work for these scatter cluster-based models because it performs poorly when the transmitted signals are highly correlated. In this paper, we estimate the time of arrival (TOA), the direction of arrival (DOA), and the Doppler frequency for a scatter cluster model with a modified multiple signal classification (MUSIC) algorithm. Using the space-time characteristics of the multiray channel, the proposed algorithm combines temporal filtering and spatial smoothing techniques to isolate and estimate the incoming rays. The simulation results indicate that the proposed algorithm has lower complexity and is less time-consuming in dense multipath environments than the SAGE algorithm. Furthermore, estimation performance improves with the number of receive-array elements and the sample length. Thus, the channel parameter estimation problem for the scatter cluster model can be effectively addressed with the proposed modified MUSIC algorithm.
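The subspace step at the heart of MUSIC can be sketched as follows. This is plain narrowband MUSIC on a uniform linear array with synthetic data, without the temporal filtering and spatial smoothing modifications the paper adds; the array geometry and signal parameters are illustrative:

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """MUSIC pseudospectrum for a uniform linear array with element spacing
    d (in wavelengths). X: (n_elements, n_snapshots) complex baseband data."""
    n_el = X.shape[0]
    R = X @ X.conj().T / X.shape[1]           # sample covariance matrix
    _, eigvecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = eigvecs[:, : n_el - n_sources]       # noise-subspace eigenvectors
    P = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d * np.arange(n_el) * np.sin(theta))
        P.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.asarray(P)

# Synthetic example: two uncorrelated sources at -20 and +30 degrees.
rng = np.random.default_rng(0)
n_el, n_snap = 8, 400
doas = np.deg2rad([-20.0, 30.0])
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(n_el), np.sin(doas)))
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
noise = rng.standard_normal((n_el, n_snap)) + 1j * rng.standard_normal((n_el, n_snap))
X = A @ S + 0.1 * noise
grid = np.arange(-90.0, 90.0, 0.5)
P = music_spectrum(X, n_sources=2, angles_deg=grid)
# The two largest local maxima of the pseudospectrum give the DOA estimates.
peaks = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
est = np.sort(grid[sorted(peaks, key=lambda i: P[i])[-2:]])
```

For highly correlated rays, as in the scatter cluster case, the covariance matrix becomes rank-deficient, which is exactly why the paper's spatial smoothing step is needed before this subspace decomposition.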
Data Reduction Algorithm Using Nonnegative Matrix Factorization with Nonlinear Constraints
Sembiring, Pasukat
2017-12-01
Processing of data with very large dimensions has been a hot topic in recent decades, and various techniques have been proposed to extract the desired information or structure. Nonnegative Matrix Factorization (NMF), which operates on nonnegative data, has become one of the popular methods for reducing dimensionality. The main strength of this method is its nonnegativity: an object is modeled as a combination of nonnegative basic parts, which provides a physical interpretation of how the object is constructed. NMF has been used widely for numerous applications including computer vision, text mining, pattern recognition, and bioinformatics. The mathematical formulation of NMF is not a convex optimization problem, and various types of algorithms have been proposed to solve it. The Alternating Nonnegative Least Squares (ANLS) framework is a block-coordinate approach that has proven reliable theoretically and efficient empirically. This paper proposes a new algorithm to solve the NMF problem based on the ANLS framework; the algorithm inherits the convergence property of the ANLS framework for nonlinearly constrained NMF formulations.
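The ANLS framework the paper builds on alternates two convex nonnegative least-squares subproblems. A minimal sketch of standard ANLS (not the authors' nonlinearly constrained variant) might look like:

```python
import numpy as np
from scipy.optimize import nnls

def anls_nmf(V, r, n_iter=50, seed=0):
    """NMF via Alternating Nonnegative Least Squares (ANLS): alternately
    solve the convex subproblems for H (with W fixed) and for W (with H
    fixed), one small nonnegative least-squares problem per column/row."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    for _ in range(n_iter):
        H = np.column_stack([nnls(W, V[:, j])[0] for j in range(n)])
        W = np.column_stack([nnls(H.T, V[i, :])[0] for i in range(m)]).T
    return W, H

# Sanity check on an exactly low-rank nonnegative matrix.
rng = np.random.default_rng(1)
V = rng.random((10, 2)) @ rng.random((2, 8))
W, H = anls_nmf(V, r=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Each subproblem is convex even though the joint problem is not, which is what gives the ANLS framework its convergence guarantees.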
Létourneau, Pierre-David; Wu, Ying; Papanicolaou, George; Garnier, Josselin; Darve, Eric
2016-01-01
We present a wideband fast algorithm capable of accurately computing the full numerical solution of the problem of acoustic scattering of waves by multiple finite-sized bodies such as spherical scatterers in three dimensions. By full solution, we mean that no assumption (e.g. Rayleigh scattering, geometrical optics, weak scattering, Born single scattering, etc.) is necessary regarding the properties of the scatterers, their distribution or the background medium.
International Nuclear Information System (INIS)
Thing, Rune S.; Bernchou, Uffe; Brink, Carsten; Mainegra-Hing, Ernesto
2013-01-01
Purpose: Cone beam computed tomography (CBCT) image quality is limited by scattered photons. Monte Carlo (MC) simulations provide the ability to predict the patient-specific scatter contamination in clinical CBCT imaging, but lengthy simulations prevent MC-based scatter correction from being fully implemented in a clinical setting. This study investigates the combination of fast MC simulations to predict scatter distributions with a ray tracing algorithm to allow calibration between simulated and clinical CBCT images. Material and methods: An EGSnrc-based user code (egs_cbct) was used to perform MC simulations of an Elekta XVI CBCT imaging system. A 60 keV x-ray source was used, and air kerma was scored at the detector plane. Several variance reduction techniques (VRTs) were used to increase the scatter calculation efficiency. Three patient phantoms based on CT scans were simulated, namely a brain, a thorax and a pelvis scan. A ray tracing algorithm was used to calculate the detector signal due to primary photons. A total of 288 projections were simulated, one for each thread on the computer cluster used for the investigation. Results: Scatter distributions for the brain, thorax and pelvis scans were simulated within 2% statistical uncertainty in two hours per scan. Within the same time, the ray tracing algorithm provided the primary signal for each of the projections. Thus, all the data needed for MC-based scatter correction in clinical CBCT imaging were obtained within two hours per patient, using a full simulation of the clinical CBCT geometry. Conclusions: This study shows that the use of MC-based scatter corrections in CBCT imaging has great potential to improve CBCT image quality. By use of powerful VRTs to predict scatter distributions and a ray tracing algorithm to calculate the primary signal, it is possible to obtain the necessary data for patient-specific MC scatter correction within two hours per patient.
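The primary-signal calculation reduces to a line integral of attenuation along each source-detector ray. A simplified fixed-step sampler (an illustration, not the study's algorithm, which would typically use an exact voxel-traversal scheme such as Siddon's) can be sketched as:

```python
import numpy as np

def primary_signal(mu, src, det, step=0.25):
    """Primary (unscattered) detector signal exp(-integral of mu dl),
    estimated by fixed-step midpoint sampling of the attenuation map mu
    along the source-detector ray."""
    src, det = np.asarray(src, float), np.asarray(det, float)
    length = np.linalg.norm(det - src)
    n = max(int(length / step), 1)
    ts = (np.arange(n) + 0.5) / n                 # midpoints of n segments
    pts = src + np.outer(ts, det - src)
    idx = np.floor(pts).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(mu.shape)), axis=1)
    line_integral = mu[tuple(idx[inside].T)].sum() * (length / n)
    return np.exp(-line_integral)

# Uniform phantom: mu = 0.02 per voxel unit along a 50-voxel path,
# so the expected primary signal is exp(-1).
mu = np.full((50, 50), 0.02)
I = primary_signal(mu, src=(0.0, 25.0), det=(50.0, 25.0))
```

Subtracting the MC-predicted scatter from the measured projection and comparing against this primary estimate is the calibration idea described above.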
Polarized X-ray excitation for scatter reduction in X-ray fluorescence computed tomography.
Vernekohl, Don; Tzoumas, Stratis; Zhao, Wei; Xing, Lei
2018-05-25
X-ray fluorescence computed tomography (XFCT) is a new molecular imaging modality which uses X-ray excitation to stimulate the emission of fluorescent photons in high atomic number contrast agents. Scatter contamination is one of the main challenges in XFCT imaging and limits the molecular sensitivity. When polarized X-rays are used, it is possible to reduce the scatter contamination significantly by placing detectors perpendicular to the polarization direction. This study quantifies scatter contamination for polarized and unpolarized X-ray excitation and determines the advantages of scatter reduction. The amount of scatter in preclinical XFCT is quantified in Monte Carlo simulations. The fluorescent X-rays are emitted isotropically, while scattered X-rays propagate mainly in the polarization direction. The magnitude of scatter contamination is studied in XFCT simulations of a mouse phantom. In this study, the contrast agent gold is examined as an example, but a scatter reduction from polarized excitation is also expected for other elements. The scatter reduction capability is examined for different polarization intensities with a monoenergetic X-ray excitation energy of 82 keV. The study evaluates two different geometrical shapes of CZT detectors, which are modeled with an energy resolution of 1 keV FWHM at an X-ray energy of 80 keV. Benefits of a detector placement perpendicular to the polarization direction are shown in iterative and analytic image reconstruction including scatter correction. The contrast to noise ratio (CNR) and the normalized mean square error (NMSE) are analyzed and compared for the reconstructed images. A substantial scatter reduction for common detector sizes was achieved for 100% and 80% linear polarization, while lower polarization intensities provide a decreased scatter reduction. By placing the detector perpendicular to the polarization direction, a scatter reduction by a factor of up to 5.5 can be achieved for common detector sizes. The image
International Nuclear Information System (INIS)
Stevendaal, U. van; Schlomka, J.-P.; Harding, A.; Grass, M.
2003-01-01
Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter form factor of the investigated object. Reconstruction from coherently scattered x-rays is commonly done using algebraic reconstruction techniques (ART). In this paper, we propose an alternative approach based on filtered back-projection. For the first time, a three-dimensional (3D) filtered back-projection technique using curved 3D back-projection lines is applied to two-dimensional coherent scatter projection data. The proposed algorithm is tested with simulated projection data as well as with projection data acquired with a demonstrator setup similar to a multi-line CT scanner geometry. While yielding image quality comparable to ART reconstruction, the modified 3D filtered back-projection algorithm is about two orders of magnitude faster. In contrast to iterative reconstruction schemes, it has the advantage that subfield-of-view reconstruction becomes feasible, allowing a selective reconstruction of the coherent-scatter form factor for a region of interest. The proposed modified 3D filtered back-projection algorithm is a powerful reconstruction technique to be implemented in a CSCT scanning system. This method gives coherent scatter CT the potential of becoming a competitive modality for medical imaging or nondestructive testing.
MUSIC ALGORITHM FOR LOCATING POINT-LIKE SCATTERERS CONTAINED IN A SAMPLE ON FLAT SUBSTRATE
Institute of Scientific and Technical Information of China (English)
Dong Heping; Ma Fuming; Zhang Deyue
2012-01-01
In this paper, we consider a MUSIC algorithm for locating point-like scatterers contained in a sample on a flat substrate. Based on an asymptotic expansion of the scattering amplitude proposed by Ammari et al., the reconstruction problem can be reduced to a calculation of the Green function corresponding to the background medium. In addition, we use an explicit formulation of the Green function in the MUSIC algorithm to simplify the calculation when the cross-section of the sample is a half-disc. Numerical experiments are included to demonstrate the feasibility of this method.
FPGA based algorithms for data reduction at Belle II
Energy Technology Data Exchange (ETDEWEB)
Muenchow, David; Gessler, Thomas; Kuehn, Wolfgang; Lange, Jens Soeren; Liu, Ming; Spruck, Bjoern [II. Physikalisches Institut, Universitaet Giessen (Germany)
2011-07-01
Belle II, the upgrade of the existing Belle experiment at Super-KEKB in Tsukuba, Japan, is an asymmetric e{sup +}e{sup -} collider with a design luminosity of 8 x 10{sup 35} cm{sup -2}s{sup -1}. At Belle II the estimated event rate is {<=}30 kHz, and the resulting data rate at the Pixel Detector (PXD) will be {<=}7.2 GB/s. This data rate must be reduced to be able to process and store the data. The region of interest (ROI) selection is based upon two mechanisms: (a) a tracklet finder using the silicon strip detector, and (b) the high level trigger (HLT) using all other Belle II subdetectors. These ROIs and the pixel data are forwarded to an FPGA-based Compute Node for processing, where a VHDL-based algorithm will be implemented on the FPGA to benefit from pipelining and parallelisation. For fast data handling we developed a dedicated memory management system for buffering and storing the data. The status of the implementation and performance tests of the memory manager and the data reduction algorithm are presented.
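A software analogue of such a buffering memory manager with ROI selection can be sketched as follows; the class and its interface are hypothetical, illustrating only the bounded-buffer and ROI-filtering ideas, not the VHDL implementation:

```python
from collections import deque

class EventBuffer:
    """Bounded FIFO for staging pixel-detector sub-events before
    region-of-interest (ROI) filtering. Hypothetical software stand-in for
    the FPGA memory manager; an event is a list of (x, y) pixel hits."""

    def __init__(self, capacity):
        self.buf = deque()
        self.capacity = capacity
        self.dropped = 0          # events rejected because the buffer was full

    def push(self, event):
        if len(self.buf) >= self.capacity:
            self.dropped += 1     # back-pressure: count the overflow
            return False
        self.buf.append(event)
        return True

    def pop_if_in_roi(self, rois):
        """Pop the oldest event, keeping only pixels that fall inside any
        ROI rectangle (x0, y0, x1, y1)."""
        event = self.buf.popleft()
        return [(x, y) for (x, y) in event
                if any(x0 <= x <= x1 and y0 <= y <= y1
                       for (x0, y0, x1, y1) in rois)]
```

On the FPGA the same pattern is realized with pipelined comparators, so many pixels are tested against ROI windows in parallel rather than sequentially as here.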
Fiorino, Steven T.; Elmore, Brannon; Schmidt, Jaclyn; Matchefts, Elizabeth; Burley, Jarred L.
2016-05-01
Properly accounting for multiple scattering effects can have important implications for remote sensing and possibly for directed energy applications; for example, increasing path radiance can affect signal noise. This study describes the implementation of a fast-calculating, two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations into the Laser Environmental Effects Definition and Reference (LEEDR) atmospheric characterization and radiative transfer code. The algorithm fully solves for molecular, aerosol, cloud, and precipitation single-scatter layer effects with a Mie algorithm at every calculation point/layer rather than using an interpolated value from a pre-calculated look-up table. This top-down cumulative diffusivity method first considers the incident solar radiance contribution to a given layer, accounting for solid angle and elevation, and then measures the contribution of diffused energy from previous layers based on the transmission of the current level, producing a cumulative radiance that is reflected from a surface and measured at the observer's aperture. A unique set of asymmetry and backscattering phase function parameters is then calculated, accounting for the radiance lost to molecular and aerosol reflectivity within a level; this allows a more accurate characterization of diffuse layers that contribute to multiply scattered radiances in inhomogeneous atmospheres. The code logic is valid for spectral bands between 200 nm and radio wavelengths, and its accuracy is demonstrated by comparing LEEDR results to observed sky radiance data.
Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm.
Di Simone, Alessio
2016-06-25
Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions.
A method and algorithm for correlating scattered light and suspended particles in polluted water
International Nuclear Information System (INIS)
Sami Gumaan Daraigan; Mohd Zubir Matjafri; Khiruddin Abdullah; Azlan Abdul Aziz; Abdul Aziz Tajuddin; Mohd Firdaus Othman
2005-01-01
An optical model has been developed for measuring total suspended solids (TSS) concentrations in water. The approach is based on the characteristics of light scattered from the suspended particles in water samples. An optical sensor system (an active spectrometer) has been developed to correlate the pollutant (TSS) concentration with the scattered radiation. Scattered light was measured in terms of the output voltage of the sensor system's phototransistor. The developed algorithm was used to estimate the concentrations of the polluted water samples, and was calibrated using the observed readings. The results display a strong correlation between the radiation values and the TSS concentrations. The proposed system yields a high degree of accuracy, with a correlation coefficient (R) of 0.99 and a root mean square error (RMS) of 63.57 mg/l. (Author)
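A linear calibration of sensor voltage against known concentrations, with the quoted figures of merit (correlation coefficient and RMS error), can be sketched like this; the data points are invented for illustration:

```python
import numpy as np

# Hypothetical calibration data: phototransistor output voltage [V]
# versus known TSS concentration [mg/l] of prepared samples.
voltage = np.array([0.12, 0.25, 0.41, 0.55, 0.71, 0.88, 1.02, 1.20])
tss = np.array([50.0, 120.0, 210.0, 280.0, 360.0, 450.0, 520.0, 610.0])

# Least-squares linear calibration: TSS ~ a*V + b.
a, b = np.polyfit(voltage, tss, 1)
pred = a * voltage + b
r = np.corrcoef(voltage, tss)[0, 1]              # correlation coefficient R
rmse = np.sqrt(np.mean((pred - tss) ** 2))       # root-mean-square error
```

Once calibrated, the fitted coefficients convert any new voltage reading into an estimated TSS concentration.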
Létourneau, Pierre-David
2016-09-19
We present a wideband fast algorithm capable of accurately computing the full numerical solution of the problem of acoustic scattering of waves by multiple finite-sized bodies such as spherical scatterers in three dimensions. By full solution, we mean that no assumption (e.g. Rayleigh scattering, geometrical optics, weak scattering, Born single scattering, etc.) is necessary regarding the properties of the scatterers, their distribution or the background medium. The algorithm is also fast in the sense that it scales linearly with the number of unknowns. We use this algorithm to study the phenomenon of super-resolution in time-reversal refocusing in highly-scattering media recently observed experimentally (Lemoult et al., 2011), and provide numerical arguments towards the fact that such a phenomenon can be explained through a homogenization theory.
ANALYSIS OF PARAMETERIZATION VALUE REDUCTION OF SOFT SETS AND ITS ALGORITHM
Directory of Open Access Journals (Sweden)
Mohammed Adam Taheir Mohammed
2016-02-01
In this paper, the parameterization value reduction of soft sets and its algorithm for decision making are studied and described. The approach is based on the parameterization reduction of soft sets. The purpose of this study is to investigate the inherited disadvantages of parameterization reduction of soft sets and its algorithm. The algorithms presented here attempt to remove the least important parameter values from the soft set. Through the analysis, two techniques are described. It is found that parameterization reduction of soft sets and its algorithm can yield inconsistent and suboptimal results.
On distribution reduction and algorithm implementation in inconsistent ordered information systems.
Zhang, Yanqin
2014-01-01
As one part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduct acquisition in dominance-relation-based information systems. A matrix-based algorithm for obtaining distribution reductions is presented step by step, and a program implementing the algorithm is described. The approach provides an effective tool for theoretical research on, and practical applications of, ordered information systems. For more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, which shows the effectiveness of the algorithm in complicated information systems.
Zhou, Meiling; Singh, Alok Kumar; Pedrini, Giancarlo; Osten, Wolfgang; Min, Junwei; Yao, Baoli
2018-03-01
We present a tunable output-frequency filter (TOF) algorithm to reconstruct the object from noisy experimental data under low-power partially coherent illumination, such as an LED, when imaging through scattering media. In the iterative algorithm, we employ Gaussian functions with different filter windows at different stages of the iteration process to reduce corruption from experimental noise and to search for a global minimum in the reconstruction. In comparison with the conventional iterative phase retrieval algorithm, we demonstrate that the proposed TOF algorithm achieves consistent and reliable reconstruction in the presence of experimental noise. Moreover, spatial resolution and distinctive features are retained in the reconstruction since the filter is applied only to the region outside the object. The feasibility of the proposed method is demonstrated by experimental results.
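The stage-dependent Gaussian filtering idea can be illustrated in isolation: apply a Gaussian frequency window whose width grows over the iterations, so early stages suppress noise strongly and later stages restore resolution. This sketch shows only the filtering step with synthetic data, not the full iterative phase-retrieval loop:

```python
import numpy as np

def gaussian_lowpass(field, sigma):
    """Multiply the 2-D spectrum of `field` by a Gaussian window of width
    sigma (in frequency pixels) -- the per-stage filtering step of the
    tunable-window idea."""
    n = field.shape[0]
    f = np.fft.fftfreq(n) * n
    fx, fy = np.meshgrid(f, f)
    window = np.exp(-(fx ** 2 + fy ** 2) / (2.0 * sigma ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * window)

# Synthetic noisy measurement of a simple object.
rng = np.random.default_rng(0)
obj = np.zeros((64, 64))
obj[24:40, 24:40] = 1.0
noisy = obj + 0.3 * rng.standard_normal(obj.shape)
# Narrow windows early in the iterations suppress noise; widening sigma
# at later stages restores resolution.
for sigma in (4, 8, 16):
    est = gaussian_lowpass(noisy, sigma).real
```

In the actual TOF algorithm this window would be applied only outside the object support within each phase-retrieval iteration, which is how the method preserves the object's fine features.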
Scatter-Reducing Sounding Filtration Using a Genetic Algorithm and Mean Monthly Standard Deviation
Mandrake, Lukas
2013-01-01
Retrieval algorithms like that used by the Orbiting Carbon Observatory (OCO-2) mission generate massive quantities of data of varying quality and reliability. A computationally efficient, simple method of labeling problematic data points or predicting soundings that will fail is required for basic operation, given that only 6% of the retrieved data may be operationally processed. This method automatically obtains a filter designed to reduce scatter based on a small number of input features. Most machine-learning filter construction algorithms attempt to predict the error in the retrieved CO2 value; by using the surrogate goal of mean monthly standard deviation (MMS), the goal becomes reducing the scatter of retrieved CO2 rather than solving the harder problem of reducing CO2 error. This lends itself to improved interpretability and performance. The software reduces the scatter of retrieved CO2 values globally based on a minimum number of input features. It can be used as a prefilter to reduce the number of soundings requested, or as a post-filter to label data quality. The use of MMS provides a much cleaner, clearer filter than the standard ABS(CO2-truth) metrics previously employed by competing methods. The software's main strength lies in a clearer (i.e., fewer features required) filter that more efficiently reduces scatter in retrieved CO2 rather than focusing on the more complex (and easily removed) bias issues.
Column Reduction of Polynomial Matrices; Some Remarks on the Algorithm of Wolovich
Praagman, C.
1996-01-01
Recently an algorithm has been developed for column reduction of polynomial matrices. In a previous report the authors described a Fortran implementation of this algorithm. In this paper we compare the results of that implementation with an implementation of the algorithm originally developed by
The Support Reduction Algorithm for Computing Non-Parametric Function Estimates in Mixture Models
GROENEBOOM, PIET; JONGBLOED, GEURT; WELLNER, JON A.
2008-01-01
In this paper, we study an algorithm (which we call the support reduction algorithm) that can be used to compute non-parametric M-estimators in mixture models. The algorithm is compared with natural competitors in the context of convex regression and the ‘Aspect problem’ in quantum physics.
Energy Technology Data Exchange (ETDEWEB)
Grogan, Brandon Robert [Univ. of Tennessee, Knoxville, TN (United States)
2010-03-01
This dissertation presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects non-intrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross-sections of features inside the object can be determined. The cross sections can then be used to identify the materials and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons which are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using
Energy Technology Data Exchange (ETDEWEB)
Grogan, Brandon R [ORNL
2010-05-01
This report presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects nonintrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross sections of features inside the object can be determined. The cross sections can then be used to identify the materials, and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons that are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized, and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements, and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using the
A Spectral Algorithm for Envelope Reduction of Sparse Matrices
Barnard, Stephen T.; Pothen, Alex; Simon, Horst D.
1993-01-01
The problem of reordering a sparse symmetric matrix to reduce its envelope size is considered. A new spectral algorithm for computing an envelope-reducing reordering is obtained by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. This Laplacian eigenvector solves a continuous relaxation of a discrete problem related to envelope minimization called the minimum 2-sum problem. The permutation vector computed by the spectral algorithm is a closest permutation vector to the specified Laplacian eigenvector. Numerical results show that the new reordering algorithm usually computes smaller envelope sizes than those obtained from the current standard algorithms such as Gibbs-Poole-Stockmeyer (GPS) or SPARSPAK reverse Cuthill-McKee (RCM), in some cases reducing the envelope by more than a factor of two.
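The spectral reordering described above can be sketched directly: build the Laplacian of the matrix's adjacency structure, take the eigenvector for the second-smallest eigenvalue (the Fiedler vector), and sort its components. A dense toy version, not the production sparse implementation:

```python
import numpy as np

def spectral_ordering(A):
    """Envelope-reducing reordering: sort the components of the Fiedler
    vector (eigenvector of the second-smallest Laplacian eigenvalue) of
    the graph of the matrix's nonzero pattern."""
    n = A.shape[0]
    adj = (A != 0) & ~np.eye(n, dtype=bool)        # off-diagonal pattern
    L = np.diag(adj.sum(axis=1)) - adj.astype(float)
    _, vecs = np.linalg.eigh(L)                    # eigenvalues ascending
    return np.argsort(vecs[:, 1])                  # Fiedler-vector order

def envelope_size(A):
    """Sum over rows of the distance from the first nonzero to the diagonal."""
    return sum(i - np.flatnonzero(A[i, : i + 1])[0] for i in range(A.shape[0]))

# A path graph (tridiagonal pattern) scrambled by a random permutation;
# spectral ordering recovers a tridiagonal form with envelope n - 1.
n = 12
path = np.eye(n)
for i in range(n - 1):
    path[i, i + 1] = path[i + 1, i] = 1.0
perm = np.random.default_rng(3).permutation(n)
A = path[np.ix_(perm, perm)]
p = spectral_ordering(A)
B = A[np.ix_(p, p)]
```

For a path graph the Fiedler vector is monotone along the path, so sorting it recovers the natural numbering exactly; on general sparse matrices the same sort gives the envelope reductions reported above.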
Development of a 3D muon disappearance algorithm for muon scattering tomography
Blackwell, T. B.; Kudryavtsev, V. A.
2015-05-01
Upon passing through a material, muons lose energy, scatter off nuclei and atomic electrons, and can stop in the material. Muons will more readily lose energy in higher density materials. Therefore multiple muon disappearances within a localized volume may signal the presence of high-density materials. We have developed a new technique that improves the sensitivity of standard muon scattering tomography. This technique exploits these muon disappearances to perform non-destructive assay of an inspected volume. Muons that disappear have their track evaluated using a 3D line extrapolation algorithm, which is in turn used to construct a 3D tomographic image of the inspected volume. Results of Monte Carlo simulations that measure muon disappearance in different types of target materials are presented. The ability to differentiate between different density materials using the 3D line extrapolation algorithm is established. Finally the capability of this new muon disappearance technique to enhance muon scattering tomography techniques in detecting shielded HEU in cargo containers has been demonstrated.
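The 3D line extrapolation step can be sketched as a principal-component line fit to the recorded hit positions, extrapolated to a chosen depth; the hit data below are invented for illustration:

```python
import numpy as np

def fit_line_3d(points):
    """Least-squares 3-D line fit to hit positions via principal-component
    analysis; returns (centroid, unit direction vector)."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)   # first right-singular vector
    return centroid, vt[0]                     # is the dominant direction

def extrapolate_to_z(centroid, direction, z):
    """Point on the fitted line at depth z (assumes direction[2] != 0)."""
    t = (z - centroid[2]) / direction[2]
    return centroid + t * direction

# Hypothetical hits of a disappearing muon on successive detector planes.
hits = [(0.0, 0.0, 0.0), (1.0, 0.5, 2.0), (2.0, 1.0, 4.0), (3.0, 1.5, 6.0)]
c, d = fit_line_3d(hits)
entry = extrapolate_to_z(c, d, z=10.0)   # extrapolated track point at z = 10
```

Accumulating many such extrapolated endpoints inside the inspected volume builds up the tomographic disappearance image described above.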
Uneven-Layered Coding Metamaterial Tile for Ultra-wideband RCS Reduction and Diffuse Scattering.
Su, Jianxun; He, Huan; Li, Zengrui; Yang, Yaoqing Lamar; Yin, Hongcheng; Wang, Junhong
2018-05-25
In this paper, a novel uneven-layered coding metamaterial tile is proposed for ultra-wideband radar cross section (RCS) reduction and diffuse scattering. The metamaterial tile is composed of two kinds of square ring unit cells with different layer thicknesses. The reflection phase difference of 180° (±37°) between the two unit cells covers an ultra-wide frequency range. Due to the phase cancellation between the two unit cells, the metamaterial tile has a scattering pattern of four strong lobes deviating from the normal direction. The metamaterial tile and its 90-degree rotation can be encoded as the '0' and '1' elements to cover an object, and a diffuse scattering pattern can be realized by optimizing the phase distribution, leading to reductions of the monostatic and bi-static RCSs simultaneously. The metamaterial tile can achieve -10 dB RCS reduction from 6.2 GHz to 25.7 GHz, a ratio bandwidth of 4.15:1, at normal incidence. The measured and simulated results are in good agreement and validate that the proposed uneven-layered coding metamaterial tile can greatly expand the bandwidth for RCS reduction and diffuse scattering.
Analysis of Individual Preferences for Tuning Noise-Reduction Algorithms
Houben, Rolph; Dijkstra, Tjeerd M. H.; Dreschler, Wouter A.
2012-01-01
There is little research on user preference for different settings of noise reduction, especially for individual users. We therefore measured individual preferences for pairs of audio streams differing in the trade-off between noise reduction and speech distortion. A logistic probability model was
A systematic approach to robust preconditioning for gradient-based inverse scattering algorithms
International Nuclear Information System (INIS)
Nordebo, Sven; Fhager, Andreas; Persson, Mikael; Gustafsson, Mats
2008-01-01
This paper presents a systematic approach to robust preconditioning for gradient-based nonlinear inverse scattering algorithms. In particular, one- and two-dimensional inverse problems are considered where the permittivity and conductivity profiles are unknown and the input data consist of the scattered field over a certain bandwidth. A time-domain least-squares formulation is employed, and the inversion algorithm is based on a conjugate gradient or quasi-Newton algorithm together with an FDTD electromagnetic solver. A Fisher information analysis is used to estimate the Hessian of the error functional. A robust preconditioner is then obtained by incorporating a parameter scaling such that the scaled Fisher information has a unit diagonal. By improving the conditioning of the Hessian, the convergence rate of the conjugate gradient or quasi-Newton methods is improved. The preconditioner is robust in the sense that the scaling, i.e. the diagonal Fisher information, is virtually invariant to the numerical resolution and the discretization model that is employed. Numerical examples of image reconstruction are included to illustrate the efficiency of the proposed technique.
Cell light scattering characteristic numerical simulation research based on FDTD algorithm
Lin, Xiaogang; Wan, Nan; Zhu, Hao; Weng, Lingdong
2017-01-01
In this study, the finite-difference time-domain (FDTD) algorithm is used to solve the cell light-scattering problem. Before running comparative simulations, it is necessary to identify the differences between normal cells and abnormal cells, which may be cancerous or maldeveloped. Preparation for the simulation involves building a simple cell model consisting of organelles, a nucleus and cytoplasm, and setting a suitable mesh precision. A total-field/scattered-field source is set as the excitation, together with a far-field projection analysis group. Each step is grounded in numerical principles such as numerical dispersion, the perfectly matched layer boundary condition and near-to-far-field extrapolation. The simulation results indicate that a change in the position of the nucleus increases the backscattering intensity, and that significant differences in the peak scattering intensity can result from changes in the size of the cytoplasm. These regularities, drawn from the simulation results, may be meaningful for the early diagnosis of cancers.
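The core FDTD update can be sketched in one dimension with normalized units; this shows only the leapfrog field updates, omitting the 3-D cell model, the total-field/scattered-field source formulation and the PML boundaries used in the study:

```python
import numpy as np

# Minimal 1-D FDTD loop (Yee staggering, normalized units).
n_cells, n_steps = 200, 150
ez = np.zeros(n_cells)        # electric field at integer grid points
hy = np.zeros(n_cells - 1)    # magnetic field at half-grid points
c = 0.5                       # Courant number (<= 1 for stability in 1-D)
for t in range(n_steps):
    hy += c * np.diff(ez)               # update H from the curl of E
    ez[1:-1] += c * np.diff(hy)         # update E from the curl of H
    ez[50] += np.exp(-((t - 30) / 8.0) ** 2)   # soft Gaussian source
```

The same leapfrog pattern, extended to three dimensions with material maps for the nucleus, cytoplasm and organelles, yields the scattered fields whose far-field projection is analyzed in the study.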
DESIGNING SUSTAINABLE PROCESSES WITH SIMULATION: THE WASTE REDUCTION (WAR) ALGORITHM
The WAR Algorithm, a methodology for determining the potential environmental impact (PEI) of a chemical process, is presented with modifications that account for the PEI of the energy consumed within that process. From this theory, four PEI indexes are used to evaluate the envir...
New Search Space Reduction Algorithm for Vertical Reference Trajectory Optimization
Directory of Open Access Journals (Sweden)
Alejandro MURRIETA-MENDOZA
2016-06-01
Full Text Available Burning the fuel required to sustain a given flight releases pollution such as carbon dioxide and nitrogen oxides, and the amount of fuel consumed is also a significant expense for airlines. It is desirable to reduce fuel consumption to reduce both pollution and flight costs. To increase fuel savings in a given flight, one option is to compute the most economical vertical reference trajectory (or flight plan). A deterministic algorithm was developed using a numerical aircraft performance model to determine the most economical vertical flight profile considering take-off weight, flight distance, step climb and weather conditions. This algorithm is based on linear interpolations of the performance model using the Lagrange interpolation method. The algorithm downloads the latest available forecast from Environment Canada according to the departure date and flight coordinates, and calculates the optimal trajectory taking into account the effects of wind and temperature. Techniques to avoid unnecessary calculations are implemented to reduce the computation time. The costs of the reference trajectories proposed by the algorithm are compared with the costs of the reference trajectories proposed by a commercial flight management system, using the fuel consumption estimated by the FlightSim® simulator made by Presagis®.
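The interpolation step at the heart of such an algorithm is easy to illustrate. Below is a hedged Python sketch of Lagrange interpolation over a tabulated performance model; the weight/fuel-flow table is invented for illustration and is not the numerical model used by the authors.

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Hypothetical performance-table slice: fuel flow vs. gross weight.
weights = [50000.0, 60000.0, 70000.0]   # kg
fuel_flow = [2100.0, 2400.0, 2750.0]    # kg/h (invented values)
print(lagrange(weights, fuel_flow, 65000.0))   # 2568.75
```

The interpolant reproduces the table exactly at the nodes, which is what lets a trajectory optimizer query the performance model at arbitrary intermediate weights.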
A necessary condition for applying MUSIC algorithm in limited-view inverse scattering problem
International Nuclear Information System (INIS)
Park, Taehoon; Park, Won-Kwang
2015-01-01
Throughout various results of numerical simulations, it is well known that the MUltiple SIgnal Classification (MUSIC) algorithm can be applied to limited-view inverse scattering problems. However, the application is somewhat heuristic. In this contribution, we identify a necessary condition of MUSIC for the imaging of a collection of small, perfectly conducting cracks. This is based on the fact that the MUSIC imaging functional can be represented as an infinite series of Bessel functions of integer order of the first kind. Numerical experiments with noisy synthetic data support our investigation. (paper)
A necessary condition for applying MUSIC algorithm in limited-view inverse scattering problem
Park, Taehoon; Park, Won-Kwang
2015-09-01
Throughout various results of numerical simulations, it is well known that the MUltiple SIgnal Classification (MUSIC) algorithm can be applied to limited-view inverse scattering problems. However, the application is somewhat heuristic. In this contribution, we identify a necessary condition of MUSIC for the imaging of a collection of small, perfectly conducting cracks. This is based on the fact that the MUSIC imaging functional can be represented as an infinite series of Bessel functions of integer order of the first kind. Numerical experiments with noisy synthetic data support our investigation.
Simulation of small-angle scattering patterns using a CPU-efficient algorithm
Anitas, E. M.
2017-12-01
Small-angle scattering (of neutrons, x-rays or light; SAS) is a well-established experimental technique for structural analysis of disordered systems at nano and micro scales. For complex systems, such as super-molecular assemblies or protein molecules, analytic solutions of the SAS intensity are generally not available. Thus, a frequent approach to simulating the corresponding patterns is to use a CPU-efficient version of the Debye formula. For this purpose, in this paper we implement the well-known DALAI algorithm in Mathematica software. We present calculations for a series of 2D Sierpinski gaskets and, respectively, of pentaflakes obtained from chaos game representation.
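The Debye formula referred to above sums pairwise sinc terms over all scatterer positions. A minimal sketch, assuming identical point scatterers and using a toy point set rather than the fractal geometries of the paper:

```python
import numpy as np

def debye_intensity(points, q):
    """I(q) = sum_ij sin(q r_ij) / (q r_ij) for identical point scatterers."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    qr = q * d
    # sinc with the qr -> 0 limit handled explicitly (diagonal terms).
    sinc = np.where(qr == 0.0, 1.0, np.sin(qr) / np.where(qr == 0.0, 1.0, qr))
    return sinc.sum()

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(debye_intensity(pts, 0.0))   # N**2 = 9.0 in the forward direction
```

Direct evaluation is O(N²) in the number of scatterers; CPU-efficient variants such as DALAI first bin the pair distances into a histogram so the sinc sum runs over distance bins instead of all pairs.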
International Nuclear Information System (INIS)
Duo, J. I.; Azmy, Y. Y.
2007-01-01
A new method, the Singular Characteristics Tracking algorithm, is developed to account for potential non-smoothness across the singular characteristics in the exact solution of the discrete ordinates approximation of the transport equation. Numerical results show improved rate of convergence of the solution to the discrete ordinates equations in two spatial dimensions with isotropic scattering using the proposed methodology. Unlike the standard Weighted Diamond Difference methods, the new algorithm achieves local convergence in the case of discontinuous angular flux along the singular characteristics. The method also significantly reduces the error for problems where the angular flux presents discontinuous spatial derivatives across these lines. For purposes of verifying the results, the Method of Manufactured Solutions is used to generate analytical reference solutions that permit estimating the local error in the numerical solution. (authors)
Petersen, T. C.; Ringer, S. P.
2010-03-01
Upon discerning the mere shape of an imaged object, as portrayed by projected perimeters, the full three-dimensional scattering density may not be of particular interest. In this situation considerable simplifications to the reconstruction problem are possible, allowing calculations based upon geometric principles. Here we describe and provide an algorithm which reconstructs the three-dimensional morphology of specimens from tilt series of images for application to electron tomography. Our algorithm uses a differential approach to infer the intersection of projected tangent lines with surfaces which define boundaries between regions of different scattering densities within and around the perimeters of specimens. Details of the algorithm implementation are given and explained using reconstruction calculations from simulations, which are built into the code. An experimental application of the algorithm to a nano-sized Aluminium tip is also presented to demonstrate practical analysis for a real specimen.
Program summary
Program title: STOMO version 1.0
Catalogue identifier: AEFS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFS_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 2988
No. of bytes in distributed program, including test data, etc.: 191 605
Distribution format: tar.gz
Programming language: C/C++
Computer: PC
Operating system: Windows XP
RAM: Depends upon the size of experimental data as input, ranging from 200 Mb to 1.5 Gb
Supplementary material: Sample output files, for the test run provided, are available.
Classification: 7.4, 14
External routines: Dev-C++ (http://www.bloodshed.net/devcpp.html)
Nature of problem: Electron tomography of specimens for which conventional back projection may fail and/or data for which there is a limited angular
A general theory known as the WAste Reduction (WAR) algorithm has been developed to describe the flow and the generation of potential environmental impact through a chemical process. This theory integrates environmental impact assessment into chemical process design. Potential en...
Focusing light through strongly scattering media using genetic algorithm with SBR discriminant
Zhang, Bin; Zhang, Zhenfeng; Feng, Qi; Liu, Zhipeng; Lin, Chengyou; Ding, Yingchun
2018-02-01
In this paper, we have experimentally demonstrated light focusing through strongly scattering media by performing binary amplitude optimization with a genetic algorithm. In the experiments, we control the 160,000 mirrors of a digital micromirror device to modulate and optimize the light transmission paths in the strongly scattering media. We replace the universal target-position-intensity (TPI) discriminant with a signal-to-background ratio (SBR) discriminant in the genetic algorithm. With 400 incident segments, a relative enhancement value of 17.5% with a ground glass diffuser is achieved, which is higher than the theoretical value of 1/(2π) ≈ 15.9% for binary amplitude optimization. According to our repeated experiments, we conclude that, for the same segment number, the enhancement for the SBR discriminant is always higher than that for the TPI discriminant, which results from the background-weakening effect of the SBR discriminant. In addition, with the SBR discriminant, the diameter of the focus can be varied from 7 to 70 μm at arbitrary positions. Moreover, multiple foci with high enhancement are obtained. Our work provides a meaningful reference for the study of binary amplitude optimization in the wavefront shaping field.
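A toy version of binary-amplitude optimization with an SBR fitness can be sketched as follows. This is a hedged illustration: a random complex matrix stands in for the medium's transmission, and the segment counts and GA settings are invented, not those of the experiment (no DMD hardware or crossover operator is modeled).

```python
import numpy as np

rng = np.random.default_rng(1)
n_seg, n_out, target = 64, 32, 0

# Random complex "transmission matrix" standing in for the scattering medium.
T = rng.normal(size=(n_out, n_seg)) + 1j * rng.normal(size=(n_out, n_seg))

def sbr(mask):
    """Signal-to-background ratio: focus intensity over mean background intensity."""
    intensity = np.abs(T @ mask) ** 2
    background = np.delete(intensity, target).mean()
    return intensity[target] / background

pop = rng.integers(0, 2, size=(30, n_seg))      # population of binary masks
for _ in range(200):
    fit = np.array([sbr(m) for m in pop])
    pop = pop[np.argsort(fit)[::-1]]            # best first
    parents = pop[:10]                          # elitism: keep the 10 fittest
    children = parents[rng.integers(0, 10, size=20)].copy()
    flip = rng.random(children.shape) < 0.05    # bit-flip mutation
    children[flip] ^= 1
    pop = np.vstack([parents, children])

best = max(pop, key=sbr)
print(round(sbr(best), 2))
```

Because SBR divides the focus intensity by the mean background, the search is rewarded for darkening the background as well as brightening the focus, which mirrors the background-weakening effect the authors describe.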
Directory of Open Access Journals (Sweden)
Mangal Singh
2017-12-01
Full Text Available This paper considers the use of the Partial Transmit Sequence (PTS) technique to reduce the Peak-to-Average Power Ratio (PAPR) of an Orthogonal Frequency Division Multiplexing signal in wireless communication systems. Search complexity is very high in the traditional PTS scheme because it involves an extensive random search over all combinations of allowed phase vectors, and it increases exponentially with the number of phase vectors. In this paper, a suboptimal metaheuristic algorithm for phase optimization based on an improved harmony search (IHS) is applied to explore the optimal combination of phase vectors, providing improved performance compared with existing evolutionary algorithms such as the harmony search algorithm and the firefly algorithm. IHS enhances the accuracy and convergence rate of the conventional algorithms with very few parameters to adjust. Simulation results show that an improved harmony search-based PTS algorithm can achieve a significant reduction in PAPR using a simple network structure compared with conventional algorithms.
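The PTS mechanics can be sketched independently of the search heuristic: split the subcarriers into blocks, precompute each block's time-domain signal, and search phase factors for the lowest PAPR. This hedged Python sketch uses an exhaustive search over binary phases instead of the paper's IHS, with invented parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sc, n_blocks = 64, 4
X = rng.choice([1, -1, 1j, -1j], size=n_sc)   # QPSK subcarrier symbols

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Precompute the time-domain signal of each contiguous sub-block.
size = n_sc // n_blocks
parts = np.zeros((n_blocks, n_sc), dtype=complex)
for b in range(n_blocks):
    parts[b, b * size:(b + 1) * size] = X[b * size:(b + 1) * size]
time_parts = np.fft.ifft(parts, axis=1)

phases = np.array([1, -1])                    # binary phase factors
best = None
for bits in range(2 ** n_blocks):             # exhaustive search (IHS would replace this)
    w = phases[(bits >> np.arange(n_blocks)) & 1]
    cand = papr_db(w @ time_parts)
    best = cand if best is None else min(best, cand)

print(best, papr_db(np.fft.ifft(X)))          # optimized vs. original PAPR
```

The all-ones phase vector reproduces the original signal, so the optimized PAPR can never be worse; a metaheuristic such as IHS replaces the exhaustive loop when the number of phase combinations becomes too large to enumerate.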
Development and performance analysis of a lossless data reduction algorithm for VoIP
International Nuclear Information System (INIS)
Misbahuddin, S.; Boulejfen, N.
2014-01-01
VoIP (Voice Over IP) is becoming an alternative way of carrying voice communications over the Internet. To better utilize voice call bandwidth, some standard compression algorithms are applied in VoIP systems. However, these algorithms affect the voice quality at high compression ratios. This paper presents a lossless data reduction technique to improve the VoIP data transfer rate over the IP network. The proposed algorithm exploits the data redundancies in digitized VFs (Voice Frames) generated by VoIP systems. The performance of the proposed data reduction algorithm is presented in terms of compression ratio. The proposed algorithm helps retain the voice quality along with the improvement in VoIP data transfer rates. (author)
International Nuclear Information System (INIS)
Sun, Wenbo; Videen, Gorden; Fu, Qiang; Hu, Yongxiang
2013-01-01
As fundamental parameters for polarized-radiative-transfer calculations, the single-scattering phase matrix of irregularly shaped aerosol particles must be accurately modeled. In this study, a scattered-field finite-difference time-domain (FDTD) model and a scattered-field pseudo-spectral time-domain (PSTD) model are developed for light scattering by arbitrarily shaped dielectric aerosols. The convolutional perfectly matched layer (CPML) absorbing boundary condition (ABC) is used to truncate the computational domain. It is found that the PSTD method is generally more accurate than the FDTD in calculation of the single-scattering properties given similar spatial cell sizes. Since the PSTD can use a coarser grid for large particles, it can lower the memory requirement in the calculation. However, the Fourier transformations in the PSTD need significantly more CPU time than simple subtractions in the FDTD, and the fast Fourier transform requires a power of 2 elements in calculations, thus using the PSTD could not significantly reduce the CPU time required in the numerical modeling. Furthermore, because the scattered-field FDTD/PSTD equations include incident-wave source terms, the FDTD/PSTD model allows for the inclusion of an arbitrarily incident wave source, including a plane parallel wave or a Gaussian beam like those emitted by lasers usually used in laboratory particle characterizations, etc. The scattered-field FDTD and PSTD light-scattering models can be used to calculate single-scattering properties of arbitrarily shaped aerosol particles over broad size and wavelength ranges. -- Highlights: • Scattered-field FDTD and PSTD models are developed for light scattering by aerosols. • Convolutional perfectly matched layer absorbing boundary condition is used. • PSTD is generally more accurate than FDTD in calculating single-scattering properties. • Using same spatial resolution, PSTD requires much larger CPU time than FDTD
Reduction of product platform complexity by vectorial Euclidean algorithm
International Nuclear Information System (INIS)
Navarrete, Israel Aguilera; Guzman, Alejandro A. Lozano
2013-01-01
In traditional machine, equipment and device design, technical solutions are developed practically independently, thus increasing design cost and complexity. Overcoming this situation has typically relied on the designer's experience alone. In this work, a reduction of product platform complexity is presented based on a matrix representation of technical solutions versus product properties; this matrix represents the product platform. From this matrix, the Euclidean distances among technical solutions are obtained. Thus, the vectorial distances among technical solutions are collected in a new matrix whose order is the number of technical solutions identified. This new matrix can be reorganized into groups with a hierarchical structure, so that modular design of products becomes more tractable. As a result of this procedure, the minimum vector distances are found, making it possible to identify the best technical solutions for the design problem raised. The application of these concepts is shown with two examples.
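The matrix computation described above takes only a few lines of NumPy. In this hedged sketch, rows are technical solutions scored against product properties (invented numbers); the smallest off-diagonal Euclidean distance identifies the most similar pair of solutions, a natural seed for a shared module.

```python
import numpy as np

# Rows: technical solutions; columns: product-property scores (invented).
M = np.array([[1.0, 0.2, 3.0],
              [1.1, 0.1, 2.9],
              [4.0, 2.0, 0.5]])

# Pairwise Euclidean distance matrix among technical solutions.
D = np.linalg.norm(M[:, None, :] - M[None, :, :], axis=-1)

# The smallest off-diagonal distance picks the most similar pair.
off = D.copy()
np.fill_diagonal(off, np.inf)
i, j = np.unravel_index(np.argmin(off), off.shape)
print((i, j), D[i, j])
```

Feeding `D` to a hierarchical clustering routine would then produce the grouped, tree-structured view of the platform that the abstract describes.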
Directory of Open Access Journals (Sweden)
Xuyun FU
2018-01-01
Full Text Available The opportunistic replacement of multiple Life-Limited Parts (LLPs) is a problem widely existing in industry. The replacement strategy for LLPs has a great impact on the total maintenance cost of much equipment. This article focuses on finding a quick and effective algorithm for this problem. To improve algorithm efficiency, six reduction rules are suggested from the perspectives of solution feasibility, determination of the replacement of LLPs, determination of the maintenance occasion, and solution optimality. Based on these six reduction rules, a search algorithm is proposed. This search algorithm can identify one or several optimal solutions. A numerical experiment shows that these six reduction rules are effective, and the time consumed by the algorithm is less than 38 s if the total life of the equipment is shorter than 55000 and the number of LLPs is less than 11. A specific case shows that the algorithm can obtain, within 10 s, optimal solutions which are much better than the result of the traditional method, and it can provide support for determining to-be-replaced LLPs when determining the maintenance workscope of an aircraft engine. Therefore, the algorithm is applicable to engineering applications concerning opportunistic replacement of multiple LLPs in aircraft engines.
Directory of Open Access Journals (Sweden)
Ion LUNGU
2012-01-01
Full Text Available In this paper, we research, analyze and develop optimization solutions for the parallel reduction function using graphics processing units (GPUs) that implement the Compute Unified Device Architecture (CUDA), a modern and novel approach for improving the software performance of data processing applications and algorithms. Many of these applications and algorithms make use of the reduction function in their computational steps. After having designed the function and its algorithmic steps in CUDA, we have progressively developed and implemented optimization solutions for the reduction function. In order to confirm, test and evaluate the solutions' efficiency, we have developed a custom-tailored benchmark suite. We have analyzed the obtained experimental results regarding: the comparison of the execution time and bandwidth when using graphics processing units covering the main CUDA architectures (Tesla GT200, Fermi GF100, Kepler GK104) and a central processing unit; the data type influence; the binary operator's influence.
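The reduction optimized in such work is a log-depth combining tree. As a CPU-side NumPy illustration of the access pattern only (not CUDA code, and not the authors' implementation): each step combines elements a stride apart and doubles the stride, finishing in O(log n) steps.

```python
import numpy as np

def tree_reduce(values, op=np.add):
    """Pairwise tree reduction: combine elements stride apart, doubling the stride."""
    buf = np.array(values, dtype=float)
    n = len(buf)
    stride = 1
    while stride < n:
        # Indices that survive this step absorb their partner one stride away.
        idx = np.arange(0, n - stride, 2 * stride)
        buf[idx] = op(buf[idx], buf[idx + stride])
        stride *= 2
    return buf[0]

print(tree_reduce(range(10)))   # 45.0
```

On a GPU each `idx` position maps to a thread, and the optimizations the paper studies (shared memory, avoiding divergence and bank conflicts, warp unrolling) all concern how this same index pattern is scheduled.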
Parameter-free Network Sparsification and Data Reduction by Minimal Algorithmic Information Loss
Zenil, Hector
2018-02-16
The study of large and complex datasets, or big data, organized as networks has emerged as one of the central challenges in most areas of science and technology. Cellular and molecular networks in biology are among the prime examples. Hence, a number of techniques for data dimensionality reduction, especially in the context of networks, have been developed. Yet, current techniques require a predefined metric upon which to minimize the data size. Here we introduce a family of parameter-free algorithms based on (algorithmic) information theory that are designed to minimize the loss of any (enumerable computable) property contributing to the object's algorithmic content and thus important to preserve in a process of data dimension reduction when forcing the algorithm to delete first the least important features. Being independent of any particular criterion, they are universal in a fundamental mathematical sense. Using suboptimal approximations of efficient (polynomial) estimations we demonstrate how to preserve network properties, outperforming other (leading) algorithms for network dimension reduction. Our method preserves all graph-theoretic indices measured, ranging from degree distribution and clustering coefficient to edge betweenness and degree and eigenvector centralities. We conclude and demonstrate numerically that our parameter-free, Minimal Information Loss Sparsification (MILS) method is robust, has the potential to maximize the preservation of all recursively enumerable features in data and networks, and achieves results equal to or significantly better than those of other data reduction and network sparsification methods.
A general theory known as the WAste Reduction (WAR) algorithm has been developed to describe the flow and the generation of potential environmental impact through a chemical process. This theory defines potential environmental impact indexes that characterize the generation and t...
International Nuclear Information System (INIS)
Broome, J.
1965-11-01
The programme SCATTER is a KDF9 programme in the Egtran dialect of Fortran to generate normalized angular distributions for elastically scattered neutrons from data input as the coefficients of a Legendre polynomial series, or from differential cross-section data. Also, differential cross-section data may be analysed to produce Legendre polynomial coefficients. Output on cards punched in the format of the U.K. A. E. A. Nuclear Data Library is optional. (author)
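The reconstruction SCATTER performs can be sketched with modern tools. A hedged Python version, assuming the common nuclear-data convention f(μ) = Σ_l (2l+1)/2 · a_l · P_l(μ) with a_0 = 1; the higher coefficients are invented for illustration.

```python
import numpy as np
from numpy.polynomial import legendre

def angular_distribution(a, mu):
    """f(mu) = sum_l (2l+1)/2 * a_l * P_l(mu) from Legendre coefficients a_l."""
    c = [(2 * l + 1) / 2.0 * al for l, al in enumerate(a)]
    return legendre.legval(mu, c)

a = [1.0, 0.3, 0.1]               # a_0 = 1 gives unit normalization over [-1, 1]
mu = np.linspace(-1.0, 1.0, 2001)
f = angular_distribution(a, mu)

# Trapezoid-rule check of the normalization integral over mu in [-1, 1].
dm = mu[1] - mu[0]
integral = ((f[0] + f[-1]) / 2.0 + f[1:-1].sum()) * dm
print(integral)   # ≈ 1.0
```

Only the l = 0 term survives the integral over [-1, 1], so a_0 = 1 guarantees a normalized distribution regardless of the higher-order coefficients; the inverse task (fitting a_l to differential cross-section data) is a linear least-squares problem in the same basis.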
The Scatter Search Based Algorithm to Revenue Management Problem in Broadcasting Companies
Pishdad, Arezoo; Sharifyazdi, Mehdi; Karimpour, Reza
2009-09-01
The problem under question in this paper, which is faced by broadcasting companies, is how to benefit from a limited advertising space. This problem is due to the stochastic behavior of customers (advertisers) in different fare classes. To address this issue we propose a constrained nonlinear multi-period mathematical model which incorporates cancellation and overbooking. The objective is to maximize the total expected revenue, and our numerical method does so by determining the sales limits for each class of customer to present the revenue management control policy. Scheduling the advertising spots in breaks is another area of concern, and we consider it as a constraint in our model. In this paper an algorithm based on Scatter Search is developed to acquire a good feasible solution. This method uses simulation over customer arrivals in a continuous finite time horizon [0, T]. Several sensitivity analyses are conducted in the computational results to depict the effectiveness of the proposed method. They also provide insight into the better results of considering revenue management (control policy) compared to a "no sales limit" policy in which earlier demand is served first.
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme of Fourier transform magnitudes is presented in this brief. In our method, Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. In the second approach, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
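The ER iteration itself, alternating between a Fourier-magnitude constraint and the known samples, is compact. A hedged 1-D sketch, assuming the magnitude of the full patch is known (in the paper it is estimated from similar known patches):

```python
import numpy as np

rng = np.random.default_rng(3)
true = rng.normal(size=32)
mag = np.abs(np.fft.fft(true))           # assumed-known Fourier magnitude
known = np.ones(32, dtype=bool)
known[10:16] = False                     # the "missing texture" region

x = np.where(known, true, 0.0)           # initial guess: zeros in the gap
errs = []
for _ in range(200):
    X = np.fft.fft(x)
    X = mag * np.exp(1j * np.angle(X))   # enforce the magnitude constraint
    y = np.real(np.fft.ifft(X))
    errs.append(np.linalg.norm(y[known] - true[known]))
    x = np.where(known, true, y)         # enforce the known samples

print(errs[0], errs[-1])
```

The monitored error is the distance from the object-domain constraint after the magnitude projection, the quantity whose convergence the method above uses to select similar known patches.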
Chung, King; Zeng, Fan-Gang; Acker, Kyle N
2006-10-01
Although cochlear implant (CI) users have enjoyed good speech recognition in quiet, they still have difficulties understanding speech in noise. We conducted three experiments to determine whether a directional microphone and an adaptive multichannel noise reduction algorithm could enhance CI performance in noise and whether Speech Transmission Index (STI) can be used to predict CI performance in various acoustic and signal processing conditions. In Experiment I, CI users listened to speech in noise processed by 4 hearing aid settings: omni-directional microphone, omni-directional microphone plus noise reduction, directional microphone, and directional microphone plus noise reduction. The directional microphone significantly improved speech recognition in noise. Both directional microphone and noise reduction algorithm improved overall preference. In Experiment II, normal hearing individuals listened to the recorded speech produced by 4- or 8-channel CI simulations. The 8-channel simulation yielded similar speech recognition results as in Experiment I, whereas the 4-channel simulation produced no significant difference among the 4 settings. In Experiment III, we examined the relationship between STIs and speech recognition. The results suggested that STI could predict actual and simulated CI speech intelligibility with acoustic degradation and the directional microphone, but not the noise reduction algorithm. Implications for intelligibility enhancement are discussed.
International Nuclear Information System (INIS)
Rosca, Florin; Zygmanski, Piotr
2008-01-01
We have developed an independent algorithm for the prediction of electronic portal imaging device (EPID) response. The algorithm uses a set of images [open beam, closed multileaf collimator (MLC), various fence and modified sweeping gap patterns] to separately characterize the primary and head-scatter contributions to EPID response. It also characterizes the relevant dosimetric properties of the MLC: Transmission, dosimetric gap, MLC scatter [P. Zygmansky et al., J. Appl. Clin. Med. Phys. 8(4) (2007)], inter-leaf leakage, and tongue and groove [F. Lorenz et al., Phys. Med. Biol. 52, 5985-5999 (2007)]. The primary radiation is modeled with a single Gaussian distribution defined at the target position, while the head-scatter radiation is modeled with a triple Gaussian distribution defined downstream of the target. The distances between the target and the head-scatter source, jaws, and MLC are model parameters. The scatter associated with the EPID is implicit in the model. Open beam images are predicted to within 1% of the maximum value across the image. Other MLC test patterns and intensity-modulated radiation therapy fluences are predicted to within 1.5% of the maximum value. The presented method was applied to the Varian aS500 EPID but is designed to work with any planar detector with sufficient spatial resolution
Barney, D; Kokkas, P; Manthos, N; Sidiropoulos, G; Reynaud, S; Vichoudis, P
2007-01-01
The CMS Endcap Preshower (ES) sub-detector comprises 4288 silicon sensors, each containing 32 strips. The data are transferred from the detector to the counting room via 1208 optical fibres running at 800Mbps. Each fibre carries data from two, three or four sensors. For the readout of the Preshower, a VME-based system, the Endcap Preshower Data Concentrator Card (ES-DCC), is currently under development. The main objective of each readout board is to acquire on-detector data from up to 36 optical links, perform on-line data reduction via zero suppression and pass the concentrated data to the CMS event builder. This document presents the conceptual design of the Reduction Algorithms as well as their implementation in the ES-DCC FPGAs. These algorithms, as implemented in the ES-DCC, result in a data-reduction factor of 20.
Barney, David; Kokkas, Panagiotis; Manthos, Nikolaos; Reynaud, Serge; Sidiropoulos, Georgios; Vichoudis, Paschalis
2006-01-01
The CMS Endcap Preshower (ES) sub-detector comprises 4288 silicon sensors, each containing 32 strips. The data are transferred from the detector to the counting room via 1208 optical fibres running at 800Mbps. Each fibre carries data from 2, 3 or 4 sensors. For the readout of the Preshower, a VME-based system - the Endcap Preshower Data Concentrator Card (ES-DCC) - is currently under development. The main objective of each readout board is to acquire on-detector data from up to 36 optical links, perform on-line data reduction (zero suppression) and pass the concentrated data to the CMS event builder. This document presents the conceptual design of the Reduction Algorithms as well as their implementation into the ES-DCC FPGAs. The algorithms implemented into the ES-DCC resulted in a reduction factor of ~20.
Aissa, Joel; Boos, Johannes; Sawicki, Lino Morris; Heinzler, Niklas; Krzymyk, Karl; Sedlmair, Martin; Kröpil, Patric; Antoch, Gerald; Thomas, Christoph
2017-11-01
The purpose of this study was to evaluate the impact of three novel iterative metal artefact reduction (iMAR) algorithms on image quality and artefact degree in chest CT of patients with a variety of thoracic metallic implants. 27 postsurgical patients with thoracic implants who underwent clinical chest CT between March and May 2015 in clinical routine were retrospectively included. Images were retrospectively reconstructed with standard weighted filtered back projection (WFBP) and with three iMAR algorithms (iMAR-Algo1 = Cardiac algorithm, iMAR-Algo2 = Pacemaker algorithm and iMAR-Algo3 = ThoracicCoils algorithm). The subjective and objective image quality was assessed. Averaged over all artefacts, the artefact degree was significantly lower for iMAR-Algo1 (58.9 ± 48.5 HU), iMAR-Algo2 (52.7 ± 46.8 HU) and iMAR-Algo3 (51.9 ± 46.1 HU) compared with WFBP (91.6 ± 81.6 HU). iMAR-Algo2 and iMAR-Algo3 reconstructions decreased mild and moderate artefacts compared with WFBP and iMAR-Algo1. All iMAR algorithms led to a significant reduction of metal artefacts and an increase in overall image quality compared with WFBP in chest CT of patients with metallic implants, in both subjective and objective analysis. iMAR-Algo2 and iMAR-Algo3 were best for mild artefacts; iMAR-Algo1 was superior for severe artefacts. Advances in knowledge: Iterative MAR led to significant artefact reduction and increased image quality compared with WFBP in CT after implantation of thoracic devices. Adjusting iMAR algorithms to patients' metallic implants can help to improve image quality in CT.
Metal artifact reduction in x-ray computed tomography by using analytical DBP-type algorithm
Wang, Zhen; Kudo, Hiroyuki
2012-03-01
This paper investigates the common metal artifact problem in X-ray computed tomography (CT). Artifacts may render the reconstructed image non-diagnostic because of inaccurate beam-hardening correction from high-attenuation objects; a satisfactory image cannot be reconstructed from projections with missing or distorted data. In the traditional analytical metal artifact reduction (MAR) method, one first subtracts the metallic-object part of the projection data from the originally obtained projection, then completes the subtracted part of the projection using various interpolation methods, and finally reconstructs from the interpolated projection using the filtered back-projection (FBP) algorithm. The interpolation error introduced in the second step can make unrealistic assumptions about the missing data, leading to DC-shift artifacts in the reconstructed images. We propose a differentiated back-projection (DBP) type MAR method, replacing the FBP algorithm with the DBP algorithm in the third step. In the FBP algorithm, the interpolated projection is filtered at each projection view angle before back-projection, so the interpolation error is propagated to the whole projection. The DBP algorithm, however, allows filtering after back-projection along a Hilbert-filter direction, so the effect of the interpolation error is reduced and an improvement in the quality of the reconstructed images can be expected. In other words, choosing the DBP algorithm instead of the FBP algorithm means that less projection data contaminated by interpolation error is used in the reconstruction. A simulation study was performed to evaluate the proposed method using a given phantom.
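The interpolation step common to these MAR pipelines is easy to sketch. A hedged, minimal Python illustration of in-painting metal-corrupted detector bins in one projection view (the FBP/DBP reconstruction itself is not reproduced here):

```python
import numpy as np

def inpaint_view(view, metal_mask):
    """Replace metal-flagged bins by linear interpolation from clean neighbours."""
    bins = np.arange(len(view))
    clean = ~metal_mask
    out = view.copy()
    out[metal_mask] = np.interp(bins[metal_mask], bins[clean], view[clean])
    return out

# Toy projection view: the two spiked bins represent the metal trace.
view = np.array([1.0, 1.2, 9.0, 9.5, 1.6, 1.8])
mask = np.array([False, False, True, True, False, False])
print(inpaint_view(view, mask))
```

Whatever straight line the interpolation draws across the gap is a guess, and the residual mismatch with the true (unknown) data is exactly the interpolation error whose propagation the FBP-versus-DBP comparison above is about.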
Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan
2009-02-01
The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm that is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of subjective tests were processed by using analysis of variance to justify the statistic significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise difference between the NR algorithms.
A simple algorithm for calculating the scattering angle in atomic collisions
International Nuclear Information System (INIS)
Belchior, J.C.; Braga, J.P.
1996-01-01
A geometric approach to calculating the classical atomic scattering angle is presented. The trajectory of the particle is divided into several straight lines, and the change in direction from one sector to the next is used to calculate the scattering angle. In this model, calculation of the scattering angle involves neither the direct evaluation of integrals nor classical turning points. (author)
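The geometric idea can be sketched in a few lines: approximate the trajectory by straight segments and accumulate the direction change between consecutive segments. The sample trajectory below is synthetic, not computed from a real interatomic potential.

```python
import numpy as np

def deflection_angle(points):
    """Sum the heading changes between consecutive straight-line segments (2-D)."""
    p = np.asarray(points, dtype=float)
    seg = np.diff(p, axis=0)                      # segment vectors
    angles = np.arctan2(seg[:, 1], seg[:, 0])     # heading of each segment
    return np.sum(np.diff(angles))                # accumulated turn

# A particle deflected over three straight legs; the total turn equals the
# angle between the first and last headings.
path = [(0, 0), (1, 0), (2, 0.5), (2.5, 1.5)]
print(np.degrees(deflection_angle(path)))
```

With segments obtained from an actual numerically integrated trajectory, refining the sector size converges this sum to the classical deflection angle without evaluating the usual turning-point integral.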
Directory of Open Access Journals (Sweden)
Yasuhiro Nakamura
2012-07-01
Full Text Available The present study introduces the four-component scattering power decomposition (4-CSPD) algorithm with rotation of the covariance matrix, and presents an experimental proof of the equivalence between the 4-CSPD algorithms based on rotation of the covariance matrix and of the coherency matrix. From a theoretical point of view, the 4-CSPD algorithms with rotation of the two matrices are identical. Although it seems obvious, no experimental evidence has yet been presented. In this paper, using polarimetric synthetic aperture radar (POLSAR) data acquired by the Phased Array L-band SAR (PALSAR) on board the Advanced Land Observing Satellite (ALOS), an experimental proof is presented to show that both algorithms indeed produce identical results.
Image processing methods for noise reduction in the TJ-II Thomson Scattering diagnostic
Energy Technology Data Exchange (ETDEWEB)
Dormido-Canto, S., E-mail: sebas@dia.uned.es [Departamento de Informatica y Automatica, UNED, Madrid 28040 (Spain); Farias, G. [Pontificia Universidad Catolica de Valparaiso, Valparaiso (Chile); Vega, J.; Pastor, I. [Asociacion EURATOM/CIEMAT para Fusion, Madrid 28040 (Spain)
2012-12-15
Highlights: • We describe an approach to reduce or mitigate the stray-light on the images and show the exceptional results. • We analyze the parameters to take into account in the proposed process. • We report a simplified example to explain the proposed process. - Abstract: The Thomson Scattering diagnostic of the TJ-II stellarator provides temperature and density profiles. The CCD camera acquires images corrupted with noise that, in some cases, can produce unreliable profiles. The main source of noise is the so-called stray-light. In this paper we describe an approach that allows mitigation of the effects that stray-light has on the images: extraction of regions with connected components. In addition, the robustness and effectiveness of the noise reduction technique is validated in two ways: (1) supervised classification and (2) comparison of electron temperature profiles.
Studies of Actinides Reduction on Iron Surfaces by Means of Resonant Inelastic X-ray Scattering
International Nuclear Information System (INIS)
Kvashnina, K.O.; Butorin, S.M.; Shuh, D.K.; Ollila, K.; Soroka, I.; Guo, J.-H.; Werme, L.; Nordgren, J.
2006-01-01
The interaction of actinides with corroded iron surfaces was studied using resonant inelastic x-ray scattering (RIXS) spectroscopy at the actinide 5d edges. RIXS profiles corresponding to the f-f excitations are found to be very sensitive to the chemical states of actinides in different systems. Our results clearly indicate that U(VI) (as the soluble uranyl ion) was reduced to U(IV) in the form of relatively insoluble uranium species, indicating that the presence of iron significantly affects the mobility of actinides by creating reducing conditions. Np(V) and Pu(VI) in the ground water solution were likewise reduced by the iron surface to Np(IV) and Pu(IV), respectively. The reduction of actinide compounds is an important process controlling their environmental behavior. Using RIXS we have shown that actinides, formed by radiolysis of water in the disposal canister, are likely to be reduced on the corrosion products, preventing their release from the canister.
Directory of Open Access Journals (Sweden)
W. Jiang
2013-01-01
Based on the study of the radiation and scattering of the circularly polarized (CP) antenna, a novel radar cross-section (RCS) reduction technique is proposed for CP antennas in this paper. Quasi-fractal slots are applied in the design of the antenna ground plane to reduce the RCS of the CP antenna. Both a prototype antenna and an array are designed, and their time-, frequency-, and space-domain characteristics are studied to validate the proposed technique. The simulated and measured results show that the RCS of the prototype antenna and array is reduced by up to 7.85 dB and 6.95 dB, respectively, in the band of 1 GHz–10 GHz. The proposed technique serves as a candidate for the design of low-RCS CP antennas and arrays.
An MPCA/LDA Based Dimensionality Reduction Algorithm for Face Recognition
Directory of Open Access Journals (Sweden)
Jun Huang
2014-01-01
We propose a face recognition algorithm based on both multilinear principal component analysis (MPCA) and linear discriminant analysis (LDA). Compared with existing traditional face recognition methods, our approach treats face images as multidimensional tensors in order to find the optimal tensor subspace for accomplishing dimension reduction. The LDA is used to project samples to a new discriminant feature space, while the K-nearest-neighbor (KNN) classifier is adopted for sample set classification. The results of our study and the developed algorithm are validated on the ORL, FERET, and YALE face databases and compared with the PCA, MPCA, and PCA + LDA methods, demonstrating an improvement in face recognition accuracy.
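A minimal sketch of the dimension-reduction-plus-KNN pipeline is given below, using plain PCA via SVD and a nearest-neighbor vote on synthetic vectors in place of MPCA/LDA on real face databases; the data, dimensions, and class separation are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for vectorized face images: two well-separated classes
# of 64-dimensional samples.
n, d = 40, 64
X0 = rng.normal(0.0, 1.0, (n, d)) + 2.0      # class 0
X1 = rng.normal(0.0, 1.0, (n, d)) - 2.0      # class 1
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# PCA via SVD: project onto the leading k principal components.
k = 5
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
Z = (X - mu) @ Vt[:k].T                      # reduced features

def knn_predict(train_z, train_y, query, k_nn=3):
    dist = np.linalg.norm(train_z - query, axis=1)
    nearest = train_y[np.argsort(dist)[:k_nn]]
    return np.bincount(nearest).argmax()     # majority vote

# Leave-one-out check on the reduced features:
preds = [knn_predict(np.delete(Z, i, 0), np.delete(y, i), Z[i])
         for i in range(len(y))]
accuracy = np.mean(np.array(preds) == y)
```

MPCA differs in that it reduces each tensor mode separately instead of flattening images into vectors, and LDA would replace the unsupervised projection with a class-discriminant one; the classification stage is the same.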
Laguda, Edcer Jerecho
Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient's medical condition. In comparison to other imaging modalities such as Magnetic Resonance Imaging (MRI), CT is a fast-acquisition imaging device with higher spatial resolution and higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented through a gray scale of independent values in Hounsfield units (HU). Materials with high HU values have higher density. High-density materials, such as metal, tend to erroneously increase the HU values around them due to reconstruction software limitations. This problem of increased HU values due to the presence of metal is referred to as metal artefacts. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of metal objects that are of clinical relevance. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts for better image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research study evaluates the dosimetric effect of metal artefact reduction algorithms on severe artefacts on CT images. This study uses the Gemstone Spectral Imaging (GSI)-based MAR algorithm, the projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method. Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), and the Dual-Energy Imaging Method was developed at Duke University. All three
MUSIC algorithm for imaging of a sound-hard arc in limited-view inverse scattering problem
Park, Won-Kwang
2017-07-01
The MUltiple SIgnal Classification (MUSIC) algorithm for non-iterative imaging of a sound-hard arc in the limited-view inverse scattering problem is considered. In order to discover the mathematical structure of MUSIC, we derive a relationship between MUSIC and an infinite series of Bessel functions of integer order. This structure enables us to examine some properties of MUSIC in the limited-view problem. Numerical simulations are performed to support the identified structure of MUSIC.
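For intuition about how a MUSIC-type indicator works, the sketch below implements the classic narrowband direction-of-arrival variant (not the inverse-scattering imaging version of the paper): project test steering vectors onto the noise subspace of the sample covariance matrix and locate the pseudospectrum peak. All array parameters and the noise level are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

m = 8                       # sensors in a half-wavelength uniform linear array
true_deg = 20.0             # hypothetical source direction
snapshots = 200

def steering(deg):
    k = np.pi * np.sin(np.deg2rad(deg))      # element spacing d = lambda/2
    return np.exp(1j * k * np.arange(m))

# Simulated single-source array data with additive noise.
s = rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)
noise = 0.1 * (rng.normal(size=(m, snapshots)) + 1j * rng.normal(size=(m, snapshots)))
Xd = np.outer(steering(true_deg), s) + noise

R = Xd @ Xd.conj().T / snapshots             # sample covariance
eigval, eigvec = np.linalg.eigh(R)           # eigenvalues in ascending order
En = eigvec[:, :-1]                          # noise subspace (one source)

# MUSIC pseudospectrum: large where the steering vector is (nearly)
# orthogonal to the noise subspace.
grid = np.linspace(-90, 90, 1801)
pseudo = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(g)) ** 2
                   for g in grid])
est_deg = grid[np.argmax(pseudo)]
```

In the imaging setting of the paper the multistatic response matrix plays the role of the covariance and test vectors are evaluated at search points in space, but the noise-subspace projection is the same mechanism.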
High-performance bidiagonal reduction using tile algorithms on homogeneous multicore architectures
Ltaief, Hatem
2013-04-01
This article presents a new high-performance bidiagonal reduction (BRD) for homogeneous multicore architectures. This article is an extension of the high-performance tridiagonal reduction implemented by the same authors [Luszczek et al., IPDPS 2011] to the BRD case. The BRD is the first step toward computing the singular value decomposition of a matrix, which is one of the most important algorithms in numerical linear algebra due to its broad impact in computational science. The high performance of the BRD described in this article comes from the combination of four important features: (1) tile algorithms with tile data layout, which provide an efficient data representation in main memory; (2) a two-stage reduction approach that allows most of the computation during the first stage (reduction to band form) to be cast into calls to Level 3 BLAS and reduces the memory traffic during the second stage (reduction from band to bidiagonal form) by using high-performance kernels optimized for cache reuse; (3) a data dependence translation layer that maps the general algorithm with column-major data layout into the tile data layout; and (4) a dynamic runtime system that efficiently schedules the newly implemented kernels across the processing units and ensures that the data dependencies are not violated. A detailed analysis is provided to understand the critical impact of the tile size on the total execution time, which also corresponds to the matrix bandwidth size after the reduction of the first stage. The performance results show a significant improvement over currently established alternatives. The new high-performance BRD achieves up to a 30-fold speedup on a 16-core Intel Xeon machine with a 12000×12000 matrix size against the state-of-the-art open-source and commercial numerical software packages, namely LAPACK compiled with optimized and multithreaded BLAS from MKL, as well as Intel MKL version 10.2. © 2013 ACM.
Neutron scattering studies of crude oil viscosity reduction with electric field
Du, Enpeng
topic. Dr. Tao and his group at Temple University, using his electro- or magneto-rheological viscosity theory, have developed a new technology which utilizes electric or magnetic fields to change the rheology of complex fluids and reduce the viscosity, while keeping the temperature unchanged. After we successfully reduced the viscosity of crude oil with the field and investigated the microstructure changes in various crude oil samples with SANS, we continued to reduce the viscosity of heavy crude oil, bunker diesel, ultra-low-sulfur diesel, bio-diesel, and crude oil at ultra-low temperature with electric field treatment. Our research group developed the electrorheological viscosity theory and investigated flow rates in laboratory and field pipelines, but the predicted aggregation had never been visualized directly. The small-angle neutron scattering experiment has confirmed the theoretical prediction that a strong electric field induces the suspended nano-particles inside crude oil to aggregate into short chains along the field direction. This aggregation breaks the symmetry, making the viscosity anisotropic: along the field direction, the viscosity is significantly reduced. The experiment enables us to determine the induced chain size and shape, and verifies that the electric field works for all kinds of crude oils: paraffin-based, asphalt-based, and mixed-based. The basic physics of such field-induced viscosity reduction is applicable to all kinds of suspensions.
Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc
2015-07-07
The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm with the worst case being a source/implant located well within a patient
Matheoud, Roberta; Della Monica, Patrizia; Secco, Chiara; Loi, Gianfranco; Krengli, Marco; Inglese, Eugenio; Brambilla, Marco
2011-01-01
The aim of this work is to evaluate the role of different amounts of attenuation and scatter on FDG-PET image volume segmentation using a contrast-oriented method based on the target-to-background (TB) ratio and target dimensions. A phantom study was designed employing 3 phantom sets, which provided a clinical range of attenuation and scatter conditions, equipped with 6 spheres of different volumes (0.5-26.5 ml). The phantoms were: (1) the Hoffman 3-dimensional brain phantom, (2) a modified International Electrotechnical Commission (IEC) phantom with an annular ring of water bags of 3 cm thickness fit over the IEC phantom, and (3) a modified IEC phantom with an annular ring of water bags of 9 cm. The phantom cavities were filled with a solution of FDG at 5.4 kBq/ml activity concentration, and the spheres with activity concentration ratios of about 16, 8, and 4 times the background activity concentration. Images were acquired with a Biograph 16 HI-REZ PET/CT scanner. Thresholds (TS) were determined as a percentage of the maximum intensity in the cross-section area of the spheres. To reduce statistical fluctuations, a nominal maximum value was calculated as the mean of all voxels above 95% of the maximum. To find the TS value that yielded an area A best matching the true value, the cross sections were auto-contoured in the attenuation-corrected slices, varying TS in steps of 1%, until the area so determined differed by less than 10 mm² from its known physical value. Multiple regression methods were used to derive an adaptive thresholding algorithm and to test its dependence on different conditions of attenuation and scatter. The errors of scatter and attenuation correction increased with increasing amounts of attenuation and scatter in the phantoms. Despite these increasing inaccuracies, the PET threshold segmentation algorithm was not influenced by the different conditions of attenuation and scatter. The test of the hypothesis of coincident regression lines for the three phantoms used
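The threshold-scanning step can be sketched on a synthetic blurred disc standing in for a sphere cross-section; the blur kernel, background level, and TB ratio below are invented for illustration and are not the scanner's actual point-spread function.

```python
import numpy as np

# Synthetic "sphere cross-section": a disc of known area blurred to mimic
# finite scanner resolution, on top of background activity.
size, radius = 101, 15
yy, xx = np.mgrid[:size, :size]
disc = ((xx - 50) ** 2 + (yy - 50) ** 2 <= radius ** 2).astype(float)
true_area = disc.sum()                       # pixels inside the sphere

def blur(img, passes=8):
    # Cheap separable smoothing standing in for the PET point-spread function.
    out = img.copy()
    for _ in range(passes):
        out = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1) + out) / 5.0
    return out

img = 1.0 + 7.0 * blur(disc)                 # TB ratio of about 8

# Robust maximum: mean of all voxels above 95% of the peak, as in the text.
peak = img[img > 0.95 * img.max()].mean()

# Scan thresholds in 1% steps and keep the one whose contoured area
# best matches the known physical value.
best_ts, best_err = None, np.inf
for ts in range(1, 100):
    area = (img >= ts / 100.0 * peak).sum()
    err = abs(area - true_area)
    if err < best_err:
        best_ts, best_err = ts, err
```

The regression model of the paper then expresses this optimal percentage as a function of TB ratio and target size, which is what makes the thresholding adaptive.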
A Problem-Reduction Evolutionary Algorithm for Solving the Capacitated Vehicle Routing Problem
Directory of Open Access Journals (Sweden)
Wanfeng Liu
2015-01-01
Assessment of the components of a solution helps provide useful information for an optimization problem. This paper presents a new population-based problem-reduction evolutionary algorithm (PREA) based on the assessment of solution components. An individual solution is regarded as being constructed from basic elements, and the concept of acceptability is introduced to evaluate them. The PREA consists of a searching phase and an evaluation phase. The acceptability of basic elements is calculated in the evaluation phase and passed to the searching phase. In the searching phase, for each individual solution, the original optimization problem is reduced to a new smaller-size problem. As the algorithm evolves, the number of common basic elements in the population increases until all individual solutions are exactly the same, which is taken to be the near-optimal solution of the optimization problem. The new algorithm is applied to a large variety of capacitated vehicle routing problems (CVRP) with up to nearly 500 customers. Experimental results show that the proposed algorithm has the advantages of fast convergence and robustness in solution quality over the comparative algorithms.
Comparison of Algorithms for the Optimal Location of Control Valves for Leakage Reduction in WDNs
Directory of Open Access Journals (Sweden)
Enrico Creaco
2018-04-01
The paper presents a comparison of two different algorithms for the optimal location of control valves for leakage reduction in water distribution networks (WDNs). The former is based on the sequential addition (SA) of control valves. At the generic step Nval of SA, the search for the optimal combination of Nval valves is carried out while retaining the optimal combination of Nval − 1 valves found at the previous step. Therefore, only one new valve location is searched for at each step of SA, among all the remaining available locations. The latter algorithm is a multi-objective genetic algorithm (GA), in which valve locations are encoded inside individual genes. For the sake of consistency, the same embedded algorithm, based on iterated linear programming (LP), was used inside SA and GA to search for the optimal valve settings at the various time slots of the day. The results of applications to two WDNs show that SA and GA yield identical results for small values of Nval. When this number grows, the limitations of SA, related to its reduced exploration of the search space, emerge: for higher values of Nval, SA tends to produce less beneficial valve locations in terms of leakage abatement. However, the smaller computation time of SA may make this algorithm preferable in the case of large WDNs, for which the application of GA would be overly burdensome.
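The sequential-addition logic can be sketched with a toy objective: at each step, keep the valves already placed and add the single remaining location that maximizes a (hypothetical) leakage-reduction score, then compare with an exhaustive search. The benefit values and interaction penalty below are invented stand-ins for the hydraulic simulation.

```python
from itertools import combinations

# Toy "leakage reduction" objective over candidate valve locations 0..7.
# A penalty between adjacent locations mimics diminishing hydraulic returns.
benefit = [5.0, 4.0, 6.0, 3.0, 7.0, 2.0, 4.5, 3.5]
overlap = 0.8   # penalty per adjacent pair that is jointly selected

def leakage_reduction(valves):
    total = sum(benefit[v] for v in valves)
    total -= overlap * sum(1 for a in valves for b in valves if b == a + 1)
    return total

def sequential_addition(n_val, n_loc=8):
    chosen = []
    for _ in range(n_val):
        remaining = [v for v in range(n_loc) if v not in chosen]
        # Try each remaining location while retaining all previous choices.
        best = max(remaining, key=lambda v: leakage_reduction(chosen + [v]))
        chosen.append(best)
    return sorted(chosen)

def exhaustive(n_val, n_loc=8):
    # Stand-in for the GA: full enumeration of every combination.
    return sorted(max(combinations(range(n_loc), n_val), key=leakage_reduction))

sa2, ga2 = sequential_addition(2), exhaustive(2)   # agree for small n_val
```

Because SA never revisits earlier placements, it can drift from the global optimum as the valve count grows, which is exactly the limitation the paper observes for larger Nval.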
Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm
Gilat Schmidt, Taly; Barber, Rina F.; Sidky, Emil Y.
2017-03-01
Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained `One-Step' Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data is excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.
Study on the Noise Reduction of Vehicle Exhaust NOX Spectra Based on Adaptive EEMD Algorithm
Directory of Open Access Journals (Sweden)
Kai Zhang
2017-01-01
Measuring the concentration of vehicle exhaust components from transmission spectra has become a key technology. However, in conventional methods for noise reduction and baseline correction, such as wavelet transform, derivative, interpolation, polynomial fitting, and so forth, the basis functions of the algorithms, the number of decomposition layers, and the way the signal is reconstructed all have to be adjusted according to the characteristics of the different components in the transmission spectra. These parameter settings cannot be determined a priori, so it is difficult to achieve the best noise reduction for vehicle exhaust spectra, whose waveforms are sharp and rapidly varying. In this paper, an adaptive ensemble empirical mode decomposition (EEMD) denoising model based on a special normalized index optimization is proposed and used for the spectral noise reduction of vehicle exhaust NOX. Experimental results show that the method can effectively improve the accuracy of the spectral noise reduction and simplify the denoising process and its operation.
Li, Ping; Xu, Lei; Yang, Lin; Wang, Rui; Hsieh, Jiang; Sun, Zhonghua; Fan, Zhanming; Leipsic, Jonathon A
2018-05-02
The aim of this study was to investigate the use of a de-blooming algorithm in coronary CT angiography (CCTA) for optimal evaluation of calcified plaques. Calcified plaques were simulated on a coronary vessel phantom and a cardiac motion phantom. Two convolution kernels, standard (STND) and high-definition standard (HD STND), were used for image reconstruction. A dedicated de-blooming algorithm was used for image processing. We found a smaller bias in the measurement of stenosis using the de-blooming algorithm (STND: bias 24.6% vs 15.0%, range 10.2% to 39.0% vs 4.0% to 25.9%; HD STND: bias 17.9% vs 11.0%, range 8.9% to 30.6% vs 0.5% to 21.5%). With use of the de-blooming algorithm, specificity for diagnosing significant stenosis increased from 45.8% to 75.0% (STND) and from 62.5% to 83.3% (HD STND), while positive predictive value (PPV) increased from 69.8% to 83.3% (STND) and from 76.9% to 88.2% (HD STND). In the patient group, the reduction in calcification volume was 48.1 ± 10.3%, and the reduction in coronary diameter stenosis over calcified plaque was 52.4 ± 24.2%. Our results suggest that the novel de-blooming algorithm can effectively decrease the blooming artifacts caused by coronary calcified plaques, and consequently improve the diagnostic accuracy of CCTA in assessing coronary stenosis.
International Nuclear Information System (INIS)
Quirk, Thomas J. IV
2004-01-01
The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross section (which assumes a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening that the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be sampled accurately and at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
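For the uncorrected free-electron case described above, sampling the Klein-Nishina angular distribution can be sketched with simple rejection sampling; binding and Doppler corrections, the subject of the planned improvements, are deliberately omitted, and this is not the sampling scheme ITS itself uses.

```python
import math
import random

random.seed(42)

def sample_klein_nishina(k):
    """Sample the photon scattering cosine from the Klein-Nishina
    distribution by rejection; k is photon energy in units of the
    electron rest energy (511 keV). A free, stationary electron is assumed.
    """
    def shape(c):
        eps = 1.0 / (1.0 + k * (1.0 - c))    # E'/E for scattering cosine c
        return eps * eps * (eps + 1.0 / eps - (1.0 - c * c))
    fmax = shape(1.0)                         # forward scattering maximizes it
    while True:
        c = random.uniform(-1.0, 1.0)
        if random.random() * fmax <= shape(c):
            return c

# Forward peaking grows with energy: the mean scattering cosine rises.
mean_lo = sum(sample_klein_nishina(0.1) for _ in range(4000)) / 4000
mean_hi = sum(sample_klein_nishina(4.0) for _ in range(4000)) / 4000
```

Rejection against the forward-direction maximum is valid because the Klein-Nishina differential cross section peaks at zero scattering angle for all photon energies; more efficient composition methods (e.g. Kahn's) are used in production codes.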
Coherency Identification of Generators Using a PAM Algorithm for Dynamic Reduction of Power Systems
Directory of Open Access Journals (Sweden)
Seung-Il Moon
2012-11-01
This paper presents a new coherency identification method for dynamic reduction of a power system. To achieve dynamic reduction, coherency-based equivalence techniques divide generators into groups according to coherency and then aggregate them. In order to minimize the changes in the dynamic response of the reduced equivalent system, coherency identification of the generators should be clearly defined. The objective of the proposed coherency identification method is to determine the optimal coherent groups of generators with respect to the dynamic response, using the Partitioning Around Medoids (PAM) algorithm. For this purpose, the coherency between generators is first evaluated from the dynamic simulation time response, and in the proposed method this result is then used to define a dissimilarity index. Based on the PAM algorithm, the coherent generator groups are then determined so that the sum of the index in each group is minimized. This approach ensures that the dynamic characteristics of the original system are preserved, by providing the optimized coherency identification. To validate the effectiveness of the technique, simulated cases with an IEEE 39-bus test system are evaluated using PSS/E. The proposed method is compared with an existing coherency identification method, which uses the K-means algorithm, and is found to provide a better estimate of the original system.
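A simplified k-medoids iteration in the spirit of PAM (omitting the full BUILD/SWAP phases of the original algorithm) can be sketched on synthetic "generator responses"; the signals, noise level, and group structure below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy dynamic responses of 9 generators: three coherent groups whose
# members share the same underlying swing curve plus noise.
t = np.linspace(0, 1, 100)
base = [np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), np.sin(4 * np.pi * t)]
resp = np.array([base[i // 3] + 0.05 * rng.normal(size=t.size) for i in range(9)])

# Dissimilarity index: Euclidean distance between time responses.
D = np.linalg.norm(resp[:, None, :] - resp[None, :, :], axis=2)

def k_medoids(D, k, iters=20):
    medoids = [0, 4, 8]      # simple spread deterministic start (assumed)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)
        new = []
        for j in range(k):
            members = np.where(labels == j)[0]
            # New medoid minimizes the total dissimilarity within its group.
            new.append(members[np.argmin(D[np.ix_(members, members)].sum(axis=1))])
        if new == medoids:
            break
        medoids = new
    return np.argmin(D[:, medoids], axis=1)

labels = k_medoids(D, 3)
```

Because medoids are actual generators rather than averaged centroids (as in K-means), the representative of each coherent group is itself a physically meaningful response, which is one argument for PAM in this application.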
Feature Reduction Based on Genetic Algorithm and Hybrid Model for Opinion Mining
Directory of Open Access Journals (Sweden)
P. Kalaivani
2015-01-01
With the rapid growth of websites and web forums, a large number of product reviews are available online. An opinion mining system is needed to help people evaluate the emotions, opinions, attitudes, and behavior of others, and to make decisions based on user preferences. In this paper, we propose an optimized feature reduction that incorporates an ensemble of machine learning approaches, using information gain and a genetic algorithm as feature reduction techniques. We conducted comparative experiments on a multidomain review dataset and a movie review dataset in opinion mining. The effectiveness of the single classifiers naïve Bayes, logistic regression, and support vector machine, and of the ensemble technique, is compared on five datasets. The proposed hybrid method is evaluated, and the experimental results show that information gain and the genetic algorithm with the ensemble technique perform better in terms of various measures for the multidomain and movie reviews. The classification algorithms are evaluated using McNemar's test to compare their levels of significance.
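The information-gain ranking step can be sketched for binary term-occurrence features on a toy review set; the words, labels, and counts are invented, and the genetic-algorithm stage that would further search over feature subsets is omitted.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(Y; X) = H(Y) - H(Y | X) for a discrete feature X."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [y for x, y in zip(feature, labels) if x == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

# Toy review data: does the word occur (1) or not (0); label 1 = positive.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
features = {
    "has_great": [1, 1, 1, 0, 0, 0, 0, 0],   # strongly predictive term
    "has_movie": [1, 0, 1, 0, 1, 0, 1, 0],   # uninformative term
}

ranking = sorted(features,
                 key=lambda f: information_gain(features[f], labels),
                 reverse=True)
```

Features with near-zero gain (like the uninformative term) are dropped before training the classifiers, which is the "feature reduction" half of the hybrid method.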
A Fast and High-precision Orientation Algorithm for BeiDou Based on Dimensionality Reduction
Directory of Open Access Journals (Sweden)
ZHAO Jiaojiao
2015-05-01
A fast and high-precision orientation algorithm for BeiDou is proposed based on a deep analysis of the characteristics of the BeiDou constellation and its GEO satellites. Taking advantage of the good east-west geometry, the baseline vector candidate values are first solved from the GEO satellite observations combined with dimensionality reduction theory. Then, the ambiguity function is used to judge the candidate values in order to obtain the optimal baseline vector and the wide-lane integer ambiguities. On this basis, the B1 ambiguities are solved. Finally, the high-precision orientation is estimated from the determined B1 ambiguities. This new algorithm not only improves the ill-conditioning of the traditional algorithm, but also reduces the ambiguity search region to a great extent, thus allowing the integer ambiguities to be calculated in a single epoch. The algorithm is simulated with the actual BeiDou ephemeris, and the results show that the method is efficient and fast for orientation. It is capable of a very high single-epoch success rate (99.31%) and accurate attitude angles (the standard deviations of pitch and heading are 0.07° and 0.13°, respectively) in a real-time, dynamic environment.
SCIAMACHY WFM-DOAS XCO2: reduction of scattering related errors
Directory of Open Access Journals (Sweden)
R. Sussmann
2012-10-01
Global observations of column-averaged dry air mole fractions of carbon dioxide (CO2), denoted XCO2, retrieved from SCIAMACHY on board ENVISAT can provide important and missing global information on the distribution and magnitude of regional CO2 surface fluxes. This application has challenging precision and accuracy requirements. In a previous publication (Heymann et al., 2012), it has been shown by analysing seven years of SCIAMACHY WFM-DOAS XCO2 (WFMDv2.1) that unaccounted thin cirrus clouds can result in significant errors. In order to enhance the quality of the SCIAMACHY XCO2 data product, we have developed a new version of the retrieval algorithm (WFMDv2.2), which is described in this manuscript. It is based on an improved cloud filtering and correction method using the 1.4 μm strong water vapour absorption and 0.76 μm O2-A bands. The new algorithm has been used to generate a SCIAMACHY XCO2 data set covering the years 2003–2009. The new XCO2 data set has been validated using ground-based observations from the Total Carbon Column Observing Network (TCCON). The validation shows a significant improvement of the new product (v2.2) in comparison to the previous product (v2.1). For example, the standard deviation of the difference to TCCON at Darwin, Australia, has been reduced from 4 ppm to 2 ppm. The monthly regional-scale scatter of the data (defined as the mean intra-monthly standard deviation of all quality-filtered XCO2 retrievals within a radius of 350 km around various locations) has also been reduced, typically by a factor of about 1.5. Overall, the validation of the new WFMDv2.2 XCO2 data product can be summarised by a single-measurement precision of 3.8 ppm, an estimated regional-scale (radius of 500 km) precision of monthly averages of 1.6 ppm and an estimated regional-scale relative accuracy of 0.8 ppm. In addition to the comparison with the limited number of TCCON sites, we also present a comparison with NOAA's global CO2 modelling
Scherer, Artur; Valiron, Benoît; Mau, Siun-Chuon; Alexander, Scott; van den Berg, Eric; Chapuran, Thomas E.
2017-03-01
We provide a detailed estimate for the logical resource requirements of the quantum linear-system algorithm (Harrow et al. in Phys Rev Lett 103:150502, 2009) including the recently described elaborations and application to computing the electromagnetic scattering cross section of a metallic target (Clader et al. in Phys Rev Lett 110:250504, 2013). Our resource estimates are based on the standard quantum-circuit model of quantum computation; they comprise circuit width (related to parallelism), circuit depth (total number of steps), the number of qubits and ancilla qubits employed, and the overall number of elementary quantum gate operations as well as more specific gate counts for each elementary fault-tolerant gate from the standard set {X, Y, Z, H, S, T, CNOT}. In order to perform these estimates, we used an approach that combines manual analysis with automated estimates generated via the Quipper quantum programming language and compiler. Our estimates pertain to the explicit example problem size N = 332,020,680 beyond which, according to a crude big-O complexity comparison, the quantum linear-system algorithm is expected to run faster than the best known classical linear-system solving algorithm. For this problem size, a desired calculation accuracy ε = 0.01 requires an approximate circuit width of 340 and a circuit depth of order 10^25 if oracle costs are excluded, and a circuit width and circuit depth of order 10^8 and 10^29, respectively, if the resource requirements of oracles are included, indicating that the commonly ignored oracle resources are considerable. In addition to providing detailed logical resource estimates, it is also the purpose of this paper to demonstrate explicitly (using a fine-grained approach rather than relying on coarse big-O asymptotic approximations) how these impressively large numbers arise with an actual circuit implementation of a quantum algorithm. While our estimates may prove to be conservative as more efficient
Indian Academy of Sciences (India)
polynomial) division have been found in Vedic Mathematics, which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.
Directory of Open Access Journals (Sweden)
Ho-Lung Hung
2008-08-01
Full Text Available A suboptimal partial transmit sequence (PTS) technique based on the particle swarm optimization (PSO) algorithm is presented to lower the computational complexity and reduce the peak-to-average power ratio (PAPR) of an orthogonal frequency division multiplexing (OFDM) system. In general, the PTS technique can improve the PAPR statistics of an OFDM system. However, it comes with an exhaustive search over all combinations of allowed phase weighting factors, and the search complexity increases exponentially with the number of subblocks. In this paper, we work around this potential computational intractability; the proposed PSO scheme exploits heuristics to search for the optimal combination of phase factors with low complexity. Simulation results show that the new technique can effectively reduce both the computational complexity and the PAPR.
Directory of Open Access Journals (Sweden)
Lee Shu-Hong
2008-01-01
Full Text Available Abstract A suboptimal partial transmit sequence (PTS) technique based on the particle swarm optimization (PSO) algorithm is presented to lower the computational complexity and reduce the peak-to-average power ratio (PAPR) of an orthogonal frequency division multiplexing (OFDM) system. In general, the PTS technique can improve the PAPR statistics of an OFDM system. However, it comes with an exhaustive search over all combinations of allowed phase weighting factors, and the search complexity increases exponentially with the number of subblocks. In this paper, we work around this potential computational intractability; the proposed PSO scheme exploits heuristics to search for the optimal combination of phase factors with low complexity. Simulation results show that the new technique can effectively reduce both the computational complexity and the PAPR.
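A rough sketch of the PTS idea with a PSO search over the subblock phase rotations is below; the subcarrier count, adjacent subblock partition, PSO coefficients, and QPSK mapping are all illustrative assumptions, not the authors' simulation setup.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, L = 64, 4, 4          # subcarriers, subblocks, oversampling (illustrative)
X = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=N)  # QPSK symbols

# Adjacent partition into M subblocks; per-subblock time-domain signals
# (crude oversampling by zero-padding the spectrum tail).
parts = np.zeros((M, N), complex)
for m in range(M):
    parts[m, m*(N//M):(m+1)*(N//M)] = X[m*(N//M):(m+1)*(N//M)]
time_parts = np.fft.ifft(np.pad(parts, ((0, 0), (0, N*(L-1)))), axis=1)

def papr_db(phases):        # PAPR of the phase-rotated subblock sum
    s = (np.exp(1j*phases)[:, None] * time_parts).sum(axis=0)
    p = np.abs(s)**2
    return 10*np.log10(p.max() / p.mean())

# Minimal PSO over the M continuous phase rotations.
P, iters = 20, 30
pos = rng.uniform(0, 2*np.pi, (P, M))
pos[0] = 0.0                # keep the unrotated baseline in the swarm
vel = np.zeros((P, M))
pbest = pos.copy()
pval = np.array([papr_db(p) for p in pos])
g = pbest[pval.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((P, M)), rng.random((P, M))
    vel = 0.7*vel + 1.5*r1*(pbest - pos) + 1.5*r2*(g - pos)
    pos = pos + vel
    val = np.array([papr_db(p) for p in pos])
    better = val < pval
    pbest[better], pval[better] = pos[better], val[better]
    g = pbest[pval.argmin()].copy()

baseline_db = papr_db(np.zeros(M))
optimized_db = papr_db(g)
```

The swarm evaluates only P × iters candidate phase vectors instead of the exponential number required by an exhaustive PTS search.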
Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States
Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.
2017-01-01
This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelasticity reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model with more than 12× reduction in the number of states relative to the original model is able to accurately predict system response among all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelasticity reduced order models exhibit smooth pole transition and continuously varying gains along a set of prescribed flight conditions, which verifies consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelasticity controller synthesis and novel vehicle design.
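The GA-driven state-selection idea can be caricatured on a toy discrete-time system; the diagonal dynamics, fitness metric (impulse-response RMSE), and GA operators below are illustrative assumptions and omit the paper's balanced truncation and congruence transformation steps.

```python
import numpy as np

rng = np.random.default_rng(1)

n, k, T = 10, 4, 50        # full states, reduced states, horizon (toy sizes)
# Diagonal stable A with a few slow (dynamically important) modes.
A = np.diag(np.concatenate([rng.uniform(0.85, 0.95, 4), rng.uniform(0.0, 0.3, 6)]))
B = rng.normal(size=(n, 1))
C = rng.normal(size=(1, n))

def impulse(Ar, Br, Cr):   # discrete-time impulse response over T steps
    x = Br[:, 0].copy()
    y = []
    for _ in range(T):
        y.append(Cr @ x)
        x = Ar @ x
    return np.array(y).ravel()

y_full = impulse(A, B, C)

def fitness(idx):          # negative RMSE of the truncated model's response
    i = np.array(idx)
    e = y_full - impulse(A[np.ix_(i, i)], B[i], C[:, i])
    return -np.sqrt((e**2).mean())

def random_ind():
    return tuple(sorted(rng.choice(n, size=k, replace=False)))

pop = [random_ind() for _ in range(30)]
for _ in range(40):
    elite = sorted(pop, key=fitness, reverse=True)[:10]
    children = []
    while len(children) < 20:
        p1, p2 = rng.choice(10, 2)
        pool = list(set(elite[p1]) | set(elite[p2]))     # crossover: merge parents
        child = list(rng.choice(pool, size=min(k, len(pool)), replace=False))
        while len(child) < k:
            c = int(rng.integers(n))
            if c not in child:
                child.append(c)
        if rng.random() < 0.2:                           # mutation: swap one state
            child[int(rng.integers(k))] = int(rng.integers(n))
        child = set(child)
        children.append(tuple(sorted(child)) if len(child) == k else random_ind())
    pop = elite + children

best = max(pop, key=fitness)
```

On this toy problem the GA should converge to the slow modes, which dominate the impulse response; the paper applies the same selection idea to physically meaningful aeroservoelastic states.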
Cao, Le; Wei, Bing
2014-08-25
Finite-difference time-domain (FDTD) algorithm with a new method of plane wave excitation is used to investigate the RCS (radar cross section) characteristics of targets over a layered half space. Compared with the traditional plane wave excitation method, the memory and time requirements of the calculation are greatly decreased. The FDTD calculation is performed with a plane wave incidence, and the far-field RCS is obtained by extrapolating the calculated data on the output boundary. However, available extrapolation methods have to evaluate the half-space Green function. In this paper, a new method which avoids using the complex and time-consuming half-space Green function is proposed. Numerical results show that this method is in good agreement with the classic algorithm and can be used for fast calculation of scattering and radiation of targets over a layered half space.
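For readers unfamiliar with FDTD, a minimal one-dimensional vacuum leapfrog update (far simpler than the layered-half-space solver described above) looks like this; the grid sizes and source profile are arbitrary.

```python
import numpy as np

nz, nt, src = 200, 400, 50
E = np.zeros(nz)
H = np.zeros(nz - 1)
c = 0.5                                    # Courant number (stable for <= 1 in 1-D)
for t in range(nt):
    H += c * np.diff(E)                    # update H from the spatial derivative of E
    E[1:-1] += c * np.diff(H)              # update E from the spatial derivative of H
    E[src] += np.exp(-((t - 30) / 10.0) ** 2)   # soft Gaussian source
peak = np.abs(E).max()
```

The two staggered update lines are the core of any FDTD code; plane-wave excitation schemes such as the paper's differ in how the source term is injected.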
The Sustainable Technology Division has recently completed an implementation of the U.S. EPA's Waste Reduction (WAR) Algorithm that can be directly accessed from a Cape-Open compliant process modeling environment. The WAR Algorithm add-in can be used in AmsterChem's COFE (Cape-Op...
International Nuclear Information System (INIS)
Hjelm, R.P. Jr.; Seegar, P.A.
1989-01-01
A user-friendly, integrated system, SMR, for the display, reduction and analysis of data from time-of-flight small-angle neutron diffractometers is described. Its purpose is to provide facilities for data display and assessment and to provide these facilities in near real time. This allows the results of each scattering measurement to be available almost immediately, and enables the experimenter to use the results of a measurement as a basis for other measurements in the same instrument allocation. 8 refs., 11 figs
Metal artifact reduction algorithm based on model images and spatial information
Energy Technology Data Exchange (ETDEWEB)
Wu, Jay [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Shih, Cheng-Ting [Department of Biomedical Engineering and Environmental Sciences, National Tsing-Hua University, Hsinchu, Taiwan (China); Chang, Shu-Jun [Health Physics Division, Institute of Nuclear Energy Research, Taoyuan, Taiwan (China); Huang, Tzung-Chi [Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan (China); Sun, Jing-Yi [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Wu, Tung-Hsin, E-mail: tung@ym.edu.tw [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, No.155, Sec. 2, Linong Street, Taipei 112, Taiwan (China)
2011-10-01
Computed tomography (CT) has become one of the most favorable choices for diagnosis of trauma. However, high-density metal implants can induce metal artifacts in CT images, compromising image quality. In this study, we proposed a model-based metal artifact reduction (MAR) algorithm. First, we built a model image using the k-means clustering technique with spatial information and calculated the difference between the original image and the model image. Then, the projection data of these two images were combined using an exponential weighting function. Finally, the corrected image was reconstructed using the filtered back-projection algorithm. Two metal-artifact contaminated images were studied. For the cylindrical water phantom image, the metal artifact was effectively removed. The mean CT number of water was improved from -28.95±97.97 to -4.76±4.28. For the clinical pelvic CT image, the dark band and the metal line were removed, and the continuity and uniformity of the soft tissue were recovered as well. These results indicate that the proposed MAR algorithm is useful for reducing metal artifacts and could improve the diagnostic value of metal-artifact contaminated CT images.
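The first step of the method, building a model image by k-means clustering with spatial information, can be sketched as follows; the cluster count, spatial-feature weighting, and phantom are illustrative assumptions, and the projection-blending and filtered back-projection steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k, beta = 64, 3, 0.2        # image size, clusters, spatial weight (assumed)
yy, xx = np.mgrid[0:n, 0:n]
phantom = np.where((xx - n/2)**2 + (yy - n/2)**2 < (n/3)**2, 1.0, 0.0)
phantom[28:36, 28:36] = 3.0                        # high-density "metal-like" insert
noisy = phantom + 0.05 * rng.normal(size=(n, n))

# Features: pixel intensity plus down-weighted normalized coordinates.
feats = np.stack([noisy.ravel(),
                  beta * xx.ravel() / n,
                  beta * yy.ravel() / n], axis=1)
centers = feats[rng.choice(n*n, k, replace=False)]
for _ in range(15):                                # plain Lloyd iterations
    d = ((feats[:, None, :] - centers[None, :, :])**2).sum(axis=2)
    labels = d.argmin(axis=1)
    for j in range(k):
        if np.any(labels == j):
            centers[j] = feats[labels == j].mean(axis=0)

model = centers[labels, 0].reshape(n, n)           # piecewise-constant model image
```

The piecewise-constant model image is free of streak-like noise, which is what makes its projections useful for blending with the contaminated projections.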
A multifrequency MUSIC algorithm for locating small inhomogeneities in inverse scattering
International Nuclear Information System (INIS)
Griesmaier, Roland; Schmiedecke, Christian
2017-01-01
We consider an inverse scattering problem for time-harmonic acoustic or electromagnetic waves with sparse multifrequency far field data-sets. The goal is to localize several small penetrable objects embedded inside an otherwise homogeneous background medium from observations of far fields of scattered waves corresponding to incident plane waves with one fixed incident direction but several different frequencies. We assume that the far field is measured at a few observation directions only. Taking advantage of the smallness of the scatterers with respect to wavelength we utilize an asymptotic representation formula for the far field to design and analyze a MUSIC-type reconstruction method for this setup. We establish lower bounds on the number of frequencies and receiver directions that are required to recover the number and the positions of an ensemble of scatterers from the given measurements. Furthermore we briefly sketch a possible application of the reconstruction method to the practically relevant case of multifrequency backscattering data. Numerical examples are presented to document the potentials and limitations of this approach. (paper)
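A toy single-frequency, multistatic MUSIC reconstruction conveys the flavor of the method (the paper's sparse multifrequency, fixed-incident-direction setting differs); the scatterer positions, aperture, and Born-approximation data model below are assumptions.

```python
import numpy as np

k = 2 * np.pi                                  # wavenumber (wavelength 1)
true_pts = np.array([[0.3, 0.1], [-0.2, -0.4]])
angles = np.linspace(0, 2*np.pi, 32, endpoint=False)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)

def steering(z):                               # far-field pattern of a point at z
    return np.exp(1j * k * dirs @ z)

# Born-approximation multistatic response matrix of the small scatterers.
F = sum(np.outer(steering(z), steering(z)) for z in true_pts)

U, s, _ = np.linalg.svd(F)
noise = U[:, len(true_pts):]                   # noise subspace

def music(z):                                  # imaging functional, peaks at scatterers
    a = steering(z)
    a = a / np.linalg.norm(a)
    return 1.0 / np.linalg.norm(noise.conj().T @ a)

grid = np.linspace(-0.6, 0.6, 61)
img = np.array([[music(np.array([x, y])) for x in grid] for y in grid])
iy, ix = np.unravel_index(img.argmax(), img.shape)
peak = np.array([grid[ix], grid[iy]])
```

Because the steering vectors of the true scatterers lie in the signal subspace, the imaging functional blows up at their locations; the paper's contribution is to make this work with few receivers and several frequencies instead of a full multistatic aperture.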
Parallel Landscape Driven Data Reduction & Spatial Interpolation Algorithm for Big LiDAR Data
Directory of Open Access Journals (Sweden)
Rahil Sharma
2016-06-01
Full Text Available Airborne Light Detection and Ranging (LiDAR) topographic data provide highly accurate digital terrain information, which is used widely in applications like creating flood insurance rate maps, forest and tree studies, coastal change mapping, soil and landscape classification, 3D urban modeling, river bank management, agricultural crop studies, etc. In this paper, we focus mainly on the use of LiDAR data in terrain modeling/Digital Elevation Model (DEM) generation. Technological advancements in building LiDAR sensors have enabled highly accurate and highly dense LiDAR point clouds, which have made possible high resolution modeling of terrain surfaces. However, high density data result in massive data volumes, which pose computing issues. Computational time required for dissemination, processing and storage of these data is directly proportional to the volume of the data. We describe a novel technique based on the slope map of the terrain, which addresses the challenging problem, in the area of spatial data analysis, of reducing this dense LiDAR data without sacrificing its accuracy. To the best of our knowledge, this is the first ever landscape-driven data reduction algorithm. We also perform an empirical study, which shows that there is no significant loss in accuracy for the DEM generated from a 52% reduced LiDAR dataset produced by our algorithm, compared to the DEM generated from the original, complete LiDAR dataset. For the accuracy of our statistical analysis, we compute the Root Mean Square Error (RMSE) over all of the grid points of the original DEM against the DEM generated from the reduced data, instead of comparing a few random control points. Besides, our multi-core data reduction algorithm is highly scalable. We also describe a modified parallel Inverse Distance Weighted (IDW) spatial interpolation method and show that the DEMs it generates are time-efficient and have better accuracy than the ones generated by the traditional IDW method.
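The traditional IDW interpolation that the paper's parallel variant builds on can be sketched as follows; the synthetic terrain, sample count, and power parameter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

pts = rng.uniform(0, 1, (200, 2))                   # synthetic LiDAR ground points
z = np.sin(3 * pts[:, 0]) + np.cos(2 * pts[:, 1])   # synthetic terrain heights

def idw(query, pts, z, power=2.0, eps=1e-12):
    d2 = ((pts - query) ** 2).sum(axis=1)           # squared distances to samples
    w = 1.0 / (d2 ** (power / 2) + eps)             # inverse-distance weights
    return (w * z).sum() / w.sum()

gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
dem = np.array([idw(q, pts, z)
                for q in np.stack([gx.ravel(), gy.ravel()], axis=1)]).reshape(20, 20)

truth = np.sin(3 * gx) + np.cos(2 * gy)
rmse = np.sqrt(((dem - truth) ** 2).mean())
```

Each grid cell is independent of the others, which is why IDW parallelizes naturally across cores, as exploited in the paper.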
Deng, Honggui; Liu, Yan; Ren, Shuang; He, Hailang; Tang, Chengying
2017-10-01
We propose an enhanced partial transmit sequence technique based on a novel peak-value feedback algorithm and a genetic algorithm (GAPFA-PTS) to reduce the peak-to-average power ratio (PAPR) of orthogonal frequency division multiplexing (OFDM) signals in visible light communication (VLC) systems (VLC-OFDM). To demonstrate the advantages of the proposed algorithm, we analyze the flow of the proposed technique and compare its performance with other techniques through MATLAB simulation. The results show that the GAPFA-PTS technique achieves a significant improvement in PAPR reduction while maintaining a low bit error rate (BER) and low complexity in VLC-OFDM systems.
Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong
2017-10-01
Focusing and imaging through scattering media has been proved possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), and thereby the focusing quality can be improved. The correction phase is often found by global searching algorithms, among which the genetic algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually as the optimization progresses, causing the improvement factor of the optimization to eventually reach a plateau. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor within the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method has proved especially useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality improves as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demands on the dynamic range of detection devices. The proposed method holds potential for applications such as high-resolution imaging in deep tissue.
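The interleaved-grouping idea can be imitated on a toy transmission-matrix model; here a simple stochastic hill-climbing search stands in for the GA, and the segment count, group count, and proposal scale are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random complex "transmission vector" t maps SLM segment phases to the
# field at the target focus; intensity is the quantity being optimized.
nseg, ngroups = 64, 4
t = (rng.normal(size=nseg) + 1j * rng.normal(size=nseg)) / np.sqrt(2)

def intensity(phi):
    return np.abs(np.sum(t * np.exp(1j * phi))) ** 2

phi = np.zeros(nseg)
i0 = intensity(phi)
for g in range(ngroups):
    idx = np.arange(g, nseg, ngroups)          # interleaved group of segments
    for _ in range(300):                        # crude stochastic search per group
        cand = phi.copy()
        cand[idx] += rng.normal(scale=0.5, size=idx.size)
        if intensity(cand) > intensity(phi):
            phi = cand
i1 = intensity(phi)
```

Optimizing one interleaved group at a time shrinks the search space per stage while every segment still gets corrected, mirroring the ISC strategy described above.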
Reduction of the scattered radiation during X-ray examination with screen-film systems
Energy Technology Data Exchange (ETDEWEB)
Vasiliev, V N; Stavitsky, R V [Moscow Research Inst. for Roentgenology and Radiology, Moscow (Russian Federation); Oshomkov, Yu V [Mosroentgen, Moscow Region (Russian Federation)
1993-01-01
In diagnostic radiography, during X-ray examination, photons scattered in the patient's body are detected by the intensifying screen and decrease the image contrast. A conventional way to avoid this image degradation is to attenuate the scattered radiation by an antiscatter grid placed between the patient's body and the screen. A grid selectivity effect originates from the greater attenuation of scattered as opposed to primary radiation. Previous authors calculated the primary and scattered radiation transmission factors of photons with initial energy 30-120 keV for a number of typical grids. The primary radiation transmission factor varied from 0.34 to 0.67 and the secondary radiation factor ranged from 0.03 to 0.13. This effect results in a contrast improvement from 2 to 6, but the patient exposure increases up to a factor of 10. In this work we studied the possibility of improving the image contrast by attenuating the scattered radiation with a secondary filter placed between the patient's body and the screen and made of an appropriate material. A selectivity effect due to the secondary filter arises from two circumstances. First, oblique incidence of the scattered radiation results in a path inside the filter that is greater than that of the primary radiation. Second, the average energy of the scattered radiation is less than that of the primary and, hence, its attenuation coefficient is greater. (author).
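The selectivity argument can be made concrete with a back-of-the-envelope transmission calculation; the thickness, attenuation coefficients, and scatter angle below are assumed illustrative values, not measurements from this work.

```python
import math

# Transmission through a filter of thickness d is exp(-mu * d / cos(theta)).
d = 0.02                      # filter thickness, cm (assumed)
mu_primary = 5.0              # 1/cm at the (higher) primary energy (assumed)
mu_scatter = 8.0              # 1/cm at the (lower) scattered energy (assumed)
theta = math.radians(45)      # typical oblique incidence of scatter (assumed)

T_primary = math.exp(-mu_primary * d)                      # normal incidence
T_scatter = math.exp(-mu_scatter * d / math.cos(theta))    # longer oblique path
selectivity = T_primary / T_scatter
```

Both effects, the larger attenuation coefficient and the longer oblique path, push T_scatter below T_primary, which is exactly the selectivity the secondary filter exploits.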
Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization.
Glaser, Joshua I; Zamft, Bradley M; Church, George M; Kording, Konrad P
2015-01-01
Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, "puzzle imaging," that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples.
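Scenario (2), recovering physical structure from a connectivity matrix alone, can be illustrated with a spectral (Laplacian eigenmap) toy in one dimension; the distance-decaying connection model is an assumption, and the paper's actual algorithms may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100
pos = np.sort(rng.uniform(0, 10, n))                 # hidden 1-D positions
d = np.abs(pos[:, None] - pos[None, :])
W = (rng.random((n, n)) < np.exp(-d)).astype(float)  # distance-decaying connectivity
W = np.triu(W, 1)
W = W + W.T                                          # symmetric, no self-loops

L = np.diag(W.sum(axis=1)) - W                       # graph Laplacian
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]                                 # 2nd-smallest eigenvector

corr = abs(np.corrcoef(fiedler, pos)[0, 1])          # embedding vs. true positions
```

For a locally connected chain-like graph, the Fiedler vector orders the nodes almost exactly as they sit in space, which is the essence of recovering geometry from connectivity alone.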
International Nuclear Information System (INIS)
Liu, Wei; Liu, Shutian; Liu, Zhengjun
2015-01-01
We report a simultaneous image compression and encryption scheme based on solving a typical optical inverse problem. The secret images to be processed are multiplexed as the input intensities of a cascaded diffractive optical system. At the output plane, compressed complex-valued data with far fewer measurements can be obtained by utilizing an error-reduction phase retrieval algorithm. The magnitude of the output image can serve as the final ciphertext while its phase serves as the decryption key. Therefore the compression and encryption are completed simultaneously without additional encoding and filtering operations. The proposed strategy can be straightforwardly applied to existing optical security systems that involve diffraction and interference. Numerical simulations are performed to demonstrate the validity and security of the proposal. (paper)
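The error-reduction phase retrieval step can be sketched with a minimal two-plane Gerchberg-Saxton-style loop; the single-FFT propagation model and array sizes are illustrative assumptions, not the cascaded diffractive system of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Magnitudes are known in both planes (linked here by an FFT); the phase is
# recovered by alternating projections onto the two magnitude constraints.
n = 32
truth = np.exp(1j * rng.uniform(0, 2*np.pi, (n, n)))  # unknown unit-amplitude field
amp_in = np.abs(truth)                                # known input magnitude
amp_out = np.abs(np.fft.fft2(truth))                  # known output magnitude

g = amp_in * np.exp(1j * rng.uniform(0, 2*np.pi, (n, n)))  # random initial phase
errs = []
for _ in range(100):
    G = np.fft.fft2(g)
    errs.append(np.linalg.norm(np.abs(G) - amp_out))
    G = amp_out * np.exp(1j * np.angle(G))                # impose output magnitude
    g = amp_in * np.exp(1j * np.angle(np.fft.ifft2(G)))   # impose input magnitude
```

The residual error is non-increasing for error reduction, which is why the same loop can be trusted to converge toward a consistent magnitude/phase pair in the optical system.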
Indian Academy of Sciences (India)
to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...
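Two of the paradigms named above can be contrasted in a few lines: for coin change, a greedy strategy is not always optimal, while dynamic programming is (the coin system below is chosen to expose the difference).

```python
def greedy_change(coins, amount):
    # Always take the largest coin that fits; may miss the optimum.
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

def dp_change(coins, amount):
    # Dynamic programming: best[a] = fewest coins summing to a.
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        best[a] = min((best[a - c] + 1 for c in coins if c <= a), default=INF)
    return best[amount] if best[amount] < INF else None

coins = [1, 3, 4]
g6, d6 = greedy_change(coins, 6), dp_change(coins, 6)   # greedy: 4+1+1, DP: 3+3
```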
International Nuclear Information System (INIS)
Sanchez, M.; Esteban, L.; Kornejew, P.; Hirsch, M.
2008-01-01
Mid-infrared (10.6 μm CO2 laser line) interferometers used as a plasma density diagnostic must employ two-colour systems with superposed interferometer beams at different wavelengths in order to cope with mechanical vibrations and drifts. They require a highly precise phase difference measurement in which all sources of error must be reduced. One of these is the crosstalk between the signals, which creates nonlinear spurious periodic mixing products. The cause may be either optical or electrical crosstalk, both resulting in similar perturbations of the measurement. In the TJ-II interferometer a post-processing algorithm is used to reduce the crosstalk in the data. This post-processing procedure is not appropriate for very long pulses, as is the case in the new tokamak (ITER) or stellarator (W7-X) projects. In both cases an on-line reduction process is required or, even better, the unwanted signal components must be reduced in the system itself. CO2 laser interferometers which use the CO laser line (5.3 μm) as the second wavelength may apply a single common detector sensitive to both wavelengths and separate the corresponding IF signals by appropriate bandpass filters. This reduces the complexity of the optical arrangement and avoids a possible source of vibration-induced phase noise, as both signals share the same beam path. To avoid crosstalk in this arrangement the filtering must be appropriate. In this paper we present calculations to define the limits of crosstalk for a desired plasma density precision. A crosstalk reduction algorithm has been developed and is applied to experimental results from TJ-II pulses. Results from a single detector arrangement, as under investigation for the CO2/CO laser interferometer developed for W7-X, are presented
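The bandpass separation of the two IF signals on a common detector can be sketched as follows; the IF frequencies, crosstalk level, and ideal FFT-mask filter are illustrative assumptions rather than the W7-X hardware parameters.

```python
import numpy as np

fs, T = 1.0e6, 0.01                        # sample rate (Hz) and duration (s), assumed
t = np.arange(0, T, 1/fs)
f1, f2, leak = 40e3, 100e3, 0.05           # IF tones and 5% mutual crosstalk, assumed
s1 = np.cos(2*np.pi*f1*t)
s2 = np.cos(2*np.pi*f2*t)
m1 = s1 + leak * s2                        # detector channel contaminated by s2

def bandpass(x, f_lo, f_hi):               # ideal FFT-mask bandpass filter
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, 1/fs)
    X[(f < f_lo) | (f > f_hi)] = 0.0
    return np.fft.irfft(X, n=x.size)

r1 = bandpass(m1, 20e3, 60e3)              # keep only the band around f1
rms_before = np.sqrt(((m1 - s1) ** 2).mean())
rms_after = np.sqrt(((r1 - s1) ** 2).mean())
```

With well-separated IF bands the residual crosstalk after filtering is negligible; the calculations in the paper quantify how much residual crosstalk a given density precision can tolerate.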
Network reconfiguration for loss reduction in electrical distribution system using genetic algorithm
International Nuclear Information System (INIS)
Adail, A.S.A.A.
2012-01-01
The distribution system is a critical link between the utility and the nuclear installation. While feeding electricity to that installation there are power losses. The quality of the network depends on the reduction of these losses. A distribution system which feeds a nuclear installation must deliver power of high quality. For example, at the Inshas site, electrical power is supplied from two incoming feeders (one from the new Abu-Zabal substation and the other from the old Abu-Zabal substation). Each feeder is designed to carry the full load, while the operator prefers to connect to the new Abu-Zabal substation, which has good power quality. Bad power quality affects the nuclear reactor directly and has a negative impact on the installed sensitive equipment of the operation. This thesis studies the electrical losses in a distribution system (causes and affecting factors), feeder reconfiguration methods, and the application of genetic algorithms to an electric distribution power system. Finally, this study proposes an optimization technique based on genetic algorithms for distribution network reconfiguration to reduce the network losses to a minimum. The proposed method is applied to an IEEE test network that contains 3 feeders and 16 nodes. The technique is applied to two groups: distribution with general loads, and with nuclear loads. In each group the technique is applied to seven cases at the normal operation state, under system fault conditions, as well as under different load conditions. Simulated results are drawn to show the accuracy of the technique.
Test and data reduction algorithm for the evaluation of lead-acid battery packs
Energy Technology Data Exchange (ETDEWEB)
Nowak, D.
1986-01-15
Experience from the DOE Electric Vehicle Demonstration Project indicated severe battery problems associated with driving electric cars in temperature extremes. The vehicle batteries suffered from a high module failure rate, reduced capacity, and low efficiency. To assess the nature and the extent of the battery problems encountered at various operating temperatures, a test program was established at the University of Alabama in Huntsville (UAH). A test facility was built that is based on Propel cycling equipment, the Hewlett Packard 3497A Data Acquisition System, and the HP85F and HP87 computers. The objective was to establish a cost effective facility that could generate the engineering data base needed for the development of thermal management systems, destratification systems, central watering systems and proper charge algorithms. It was hoped that the development and implementation of these systems by EV manufacturers and fleet operators of EVs would eliminate the most pressing problems that occurred in the DOE EV Demonstration Project. The data reduction algorithm is described.
Nakamura, Yusuke; Hoshizawa, Taku; Takashima, Yuzuru
2017-09-01
A new method, wavelength diversity detection (WDD), for improving signal quality is proposed and its effectiveness is numerically confirmed. We consider WDD to be especially effective for high-capacity systems having low hologram diffraction efficiencies. In such systems, the signal quality is primarily limited by coherent scattering noise; thus, effective improvement of the signal quality in a scattering-limited system is of great interest. WDD utilizes a new degree of freedom, the spectrum width, together with scattering by molecules to improve the signal quality of the system. We found that WDD improves the quality by counterbalancing the degradation caused by Bragg mismatch. With WDD, a medium with a higher scattering coefficient can improve the quality. The result provides an interesting insight into the requirements for material characteristics, especially for a large-M/# material. In general, a larger-M/# material contains more molecules; thus, the system is subject to more scattering, which actually improves the quality with WDD. We propose a pathway toward a future holographic data storage system (HDSS) using WDD, which can record a larger amount of data than a conventional HDSS.
International Nuclear Information System (INIS)
Brady, S. L.; Yee, B. S.; Kaufman, R. A.
2012-01-01
Purpose: This study demonstrates a means of implementing an adaptive statistical iterative reconstruction (ASiR™) technique for dose reduction in computed tomography (CT) while maintaining similar noise levels in the reconstructed image. The effects of image quality and noise texture were assessed at all implementation levels of ASiR™. Empirically derived dose reduction limits were established for ASiR™ for imaging of the trunk for a pediatric oncology population ranging from 1 yr old through adolescence/adulthood. Methods: Image quality was assessed using metrics established by the American College of Radiology (ACR) CT accreditation program. Each image quality metric was tested using the ACR CT phantom with 0%–100% ASiR™ blended with filtered back projection (FBP) reconstructed images. Additionally, the noise power spectrum (NPS) was calculated for three common reconstruction filters of the trunk. The empirically derived limitations on ASiR™ implementation for dose reduction were assessed using (1, 5, 10) yr old and adolescent/adult anthropomorphic phantoms. To assess dose reduction limits, the phantoms were scanned in increments of increased noise index (decrementing mA using automatic tube current modulation) balanced with ASiR™ reconstruction to maintain noise equivalence of the 0% ASiR™ image. Results: The ASiR™ algorithm did not produce any unfavorable effects on image quality as assessed by ACR criteria. Conversely, low-contrast resolution was found to improve due to the reduction of noise in the reconstructed images. NPS calculations demonstrated that images with lower frequency noise had lower noise variance and coarser graininess at progressively higher percentages of ASiR™ reconstruction; and in spite of the similar magnitudes of noise, the image reconstructed with 50% or more ASiR™ presented a more smoothed appearance than the pre-ASiR™ 100% FBP image. Finally, relative to non-ASiR™ images with 100% of standard dose across the
Energy Technology Data Exchange (ETDEWEB)
Brady, S. L.; Yee, B. S.; Kaufman, R. A. [Department of Radiological Sciences, St. Jude Children's Research Hospital, Memphis, Tennessee 38105 (United States)
2012-09-15
Purpose: This study demonstrates a means of implementing an adaptive statistical iterative reconstruction (ASiR™) technique for dose reduction in computed tomography (CT) while maintaining similar noise levels in the reconstructed image. The effects of image quality and noise texture were assessed at all implementation levels of ASiR™. Empirically derived dose reduction limits were established for ASiR™ for imaging of the trunk for a pediatric oncology population ranging from 1 yr old through adolescence/adulthood. Methods: Image quality was assessed using metrics established by the American College of Radiology (ACR) CT accreditation program. Each image quality metric was tested using the ACR CT phantom with 0%-100% ASiR™ blended with filtered back projection (FBP) reconstructed images. Additionally, the noise power spectrum (NPS) was calculated for three common reconstruction filters of the trunk. The empirically derived limitations on ASiR™ implementation for dose reduction were assessed using (1, 5, 10) yr old and adolescent/adult anthropomorphic phantoms. To assess dose reduction limits, the phantoms were scanned in increments of increased noise index (decrementing mA using automatic tube current modulation) balanced with ASiR™ reconstruction to maintain noise equivalence of the 0% ASiR™ image. Results: The ASiR™ algorithm did not produce any unfavorable effects on image quality as assessed by ACR criteria. Conversely, low-contrast resolution was found to improve due to the reduction of noise in the reconstructed images. NPS calculations demonstrated that images with lower frequency noise had lower noise variance and coarser graininess at progressively higher percentages of ASiR™ reconstruction; and in spite of the similar magnitudes of noise, the image reconstructed with 50% or more ASiR™ presented a more
Methods for reduction of scattered x-ray in measuring MTF with the square chart
International Nuclear Information System (INIS)
Hatagawa, Masakatsu; Yoshida, Rie
1982-01-01
A square wave chart has been used to measure the MTF of a screen-film system. The problem is that the scattered X-rays from the chart may give rise to measurement errors. In this paper, the authors propose two methods to reduce the scattered X-rays: the first is the use of a Pb mask, and the second is to provide an air gap between the chart and the screen-film system. With these methods, the scattered X-rays from the chart were reduced. MTFs were measured by both of the new methods and by the conventional method; the MTF values of the two new methods were in good agreement with each other, while that of the conventional method was not. It was concluded that these new methods are able to reduce errors in the measurement of MTF. (author)
A high-power spatial filter for Thomson scattering stray light reduction
Levesque, J. P.; Litzner, K. D.; Mauel, M. E.; Maurer, D. A.; Navratil, G. A.; Pedersen, T. S.
2011-03-01
The Thomson scattering diagnostic on the High Beta Tokamak-Extended Pulse (HBT-EP) is routinely used to measure electron temperature and density during plasma discharges. Avalanche photodiodes in a five-channel interference filter polychromator measure scattered light from a 6 ns, 800 mJ, 1064 nm Nd:YAG laser pulse. A low cost, high-power spatial filter was designed, tested, and added to the laser beamline in order to reduce stray laser light to levels which are acceptable for accurate Rayleigh calibration. A detailed analysis of the spatial filter design and performance is given. The spatial filter can be easily implemented in an existing Thomson scattering system without the need to disturb the vacuum chamber or significantly change the beamline. Although apertures in the spatial filter suffer substantial damage from the focused beam, with proper design they can last long enough to permit absolute calibration.
Reduction of the scatter dose to the testicle outside the radiation treatment fields
International Nuclear Information System (INIS)
Kubo, H.; Shipley, W.U.
1982-01-01
A technique is described to reduce the dose to the contralateral testicle of patients with testis tumors during retroperitoneal therapy with 10 MV X-rays. When a conventional clam-shell shielding device was used, the dose to the testis from the photons scattered by the patient and the collimator jaws was found to be about 1.6% of the prescribed midplane dose. A more substantial gonadal shield made of low melting point Ostalloy, which further reduced the dose from internally scattered X-rays, was therefore designed. A 10 cm thick lead scrotal block above the scrotum immediately outside the field is shown to reduce the externally scattered radiation to negligible levels. Using the shield and the block, it is possible to reduce the dose to the testicle to one-tenth of one percent of the prescribed midplane dose
Reduction of the scatter dose to the testicle outside the radiation treatment fields
International Nuclear Information System (INIS)
Kubo, H.; Shipley, W.U.
1982-01-01
A technique is described to reduce the dose to the contralateral testicle of patients with testis tumors during retroperitoneal therapy with 10 MV X-rays. When a conventional clam-shell shielding device was used, the dose to the testis from the photons scattered by the patient and the collimator jaws was found to be about 1.6% of the prescribed midplane dose. A more substantial gonadal shield made of low melting point Ostalloy, which further reduced the dose from internally scattered X-rays, was therefore designed. A 10 cm thick lead scrotal block above the scrotum immediately outside the field is shown to reduce the externally scattered radiation to negligible levels. Using the shield and the block, it is possible to reduce the dose to the testicle to one-tenth of one percent of the prescribed midplane dose
Genetic Algorithm-Guided, Adaptive Model Order Reduction of Flexible Aircrafts
Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter; Brenner, Martin J.
2017-01-01
This paper presents a methodology for automated model order reduction (MOR) of flexible aircraft to construct linear parameter-varying (LPV) reduced-order models (ROM) for aeroservoelasticity (ASE) analysis and control synthesis in a broad flight parameter space. The novelty includes utilization of genetic algorithms (GAs) to automatically determine the states for reduction while minimizing the trial-and-error process and the need for heuristics in performing MOR; balanced truncation for unstable systems to achieve locally optimal realization of the full model; congruence transformation for "weak" fulfillment of state consistency across the entire flight parameter space; and ROM interpolation based on adaptive grid refinement to generate a globally functional LPV ASE ROM. The methodology is applied to the X-56A MUTT model currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the X-56A ROM, with less than one-seventh the number of states of the original model, is able to accurately predict the system response among all input-output channels for pitch, roll, and ASE control at various flight conditions. The GA-guided approach outperforms manual and empirical state selection in terms of efficiency and accuracy. The adaptive refinement allows selective addition of grid points in regions of the parameter space where the flight dynamics vary dramatically, enhancing interpolation accuracy without over-burdening controller synthesis and onboard memory downstream. The present MOR framework can be used by control engineers for robust ASE controller synthesis and novel vehicle design.
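The GA-guided state selection described above can be illustrated with a toy sketch: a simple genetic algorithm searches over subsets of states of a discrete-time linear model, scoring each subset by how well the truncated model reproduces the full model's impulse response. The GA operators, population sizes, and the fitness function here are illustrative assumptions, not the authors' implementation (which uses balanced truncation for unstable systems and LPV interpolation).

```python
import numpy as np

rng = np.random.default_rng(0)

def impulse_response(A, B, C, steps=50):
    """Markov parameters C A^k B of a discrete-time LTI model."""
    h, x = [], B.copy()
    for _ in range(steps):
        h.append(C @ x)
        x = A @ x
    return np.array(h).ravel()

def truncate(A, B, C, keep):
    """Keep only the states flagged in the boolean mask `keep`."""
    k = np.flatnonzero(keep)
    return A[np.ix_(k, k)], B[k], C[:, k]

def ga_state_selection(A, B, C, n_keep, pop=30, gens=40):
    """Toy GA over state subsets: fitness is the (negated) deviation of
    the truncated model's impulse response from the full model's."""
    n = A.shape[0]
    h_full = impulse_response(A, B, C)

    def fitness(mask):
        Ar, Br, Cr = truncate(A, B, C, mask)
        return -np.linalg.norm(impulse_response(Ar, Br, Cr) - h_full)

    def random_mask():
        m = np.zeros(n, dtype=bool)
        m[rng.choice(n, n_keep, replace=False)] = True
        return m

    population = [random_mask() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)     # elitist selection
        elite = population[: pop // 2]
        children = []
        for _ in range(pop - len(elite)):
            pa, pb = rng.choice(len(elite), 2, replace=False)
            genes = np.flatnonzero(elite[pa] | elite[pb])  # union crossover
            child = np.zeros(n, dtype=bool)
            child[rng.choice(genes, n_keep, replace=False)] = True
            children.append(child)
        population = elite + children
    return max(population, key=fitness)
```

On a system where a few states dominate the response, the GA quickly converges to keeping exactly those states, which is the intuition behind replacing manual state selection.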
Comparison study of noise reduction algorithms in dual energy chest digital tomosynthesis
Lee, D.; Kim, Y.-S.; Choi, S.; Lee, H.; Choi, S.; Kim, H.-J.
2018-04-01
Dual energy chest digital tomosynthesis (CDT) is a recently developed medical technique that takes advantage of both tomosynthesis and dual energy X-ray images. However, quantum noise, which occurs in dual energy X-ray images, strongly interferes with diagnosis in various clinical situations. Therefore, noise reduction is necessary in dual energy CDT. In this study, noise-compensating algorithms, including a simple smoothing of high-energy images (SSH) and anti-correlated noise reduction (ACNR), were evaluated in a CDT system. We used a newly developed prototype CDT system and anthropomorphic chest phantom for experimental studies. The resulting images demonstrated that dual energy CDT can selectively image anatomical structures, such as bone and soft tissue. Among the resulting images, those acquired with ACNR showed the best image quality. Both coefficient of variation and contrast to noise ratio (CNR) were the highest in ACNR among the three different dual energy techniques, and the CNR of bone was significantly improved compared to the reconstructed images acquired at a single energy. This study demonstrated the clinical value of dual energy CDT and quantitatively showed that ACNR is the most suitable among the three developed dual energy techniques, including standard log subtraction, SSH, and ACNR.
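The anti-correlated noise reduction idea evaluated above can be sketched in a few lines: because the quantum noise in the soft-tissue image and in the complementary (bone) image is anti-correlated, adding back a high-pass-filtered fraction of the complementary image cancels much of the noise. The weights `w`, `k` and the Gaussian filter width below are illustrative assumptions, not the parameters of the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def log_subtraction(high, low, w):
    """Standard log subtraction: cancels one tissue for a suitable weight w."""
    return np.log(high) - w * np.log(low)

def acnr(high, low, w, k, sigma=2.0):
    """Anti-correlated noise reduction (ACNR) sketch.

    Adds a high-pass-filtered fraction of the complementary (bone)
    image back to the soft-tissue image; the anti-correlated noise
    cancels while the low-frequency bone anatomy is left out."""
    soft = log_subtraction(high, low, w)
    bone = np.log(low) - (1.0 / w) * np.log(high)    # complementary image
    return soft + k * (bone - gaussian_filter(bone, sigma))
```

In this simple additive-noise model, choosing k near w cancels the high-frequency noise terms almost exactly.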
International Nuclear Information System (INIS)
Baba, Yuji; Murakami, Ryuji; Mizukami, Naohisa; Morishita, Shoji; Yamashita, Yasuyuki; Araki, Fujio; Moribe, Nobuyuki; Hirata, Yukinori
2004-01-01
The purpose of this study was to compare radiation doses of small lung nodules calculated with beam-scattering compensation in heterogeneous tissues against those calculated without it. Computed tomography (CT) data of 34 small lung nodules (1-2 cm: 12 nodules; 2-3 cm: 11 nodules; 3-4 cm: 11 nodules) were used in the radiation dose measurements. Radiation planning for each lung nodule was performed with a commercially available unit using two different dose calculation methods: the superposition method (with scatter compensation in heterogeneous tissues) and the Clarkson method (without scatter compensation in heterogeneous tissues). The linac photon energies used in this study were 4 MV and 10 MV. Monitor units (MU) required to deliver 10 Gy at the center of the radiation field (center of the nodule) calculated with the two methods were compared. In 1-2 cm nodules, the MU calculated by the Clarkson method (MUc) was 90.0±1.1% (4 MV photons) and 80.5±2.7% (10 MV photons) of the MU calculated by the superposition method (MUs); in 2-3 cm nodules, MUc was 92.9±1.1% (4 MV) and 86.6±2.8% (10 MV) of MUs; and in 3-4 cm nodules, MUc was 90.5±2.0% (4 MV) and 90.1±1.7% (10 MV) of MUs. In 1-2 cm nodules, the MU calculated without lung compensation (MUn) was 120.6±8.3% (4 MV) and 95.1±4.1% (10 MV) of MUs; in 2-3 cm nodules, MUn was 120.3±11.5% (4 MV) and 100.5±4.6% (10 MV) of MUs; and in 3-4 cm nodules, MUn was 105.3±9.0% (4 MV) and 103.4±4.9% (10 MV) of MUs. The MU calculated without lung compensation was not significantly different from the MU calculated by the superposition method in 2-3 cm nodules. We found that the conventional dose calculation algorithm without scatter compensation in heterogeneous tissues substantially overestimated the radiation dose of small nodules in the lung field. In the calculation of dose distribution of small
An error reduction algorithm to improve lidar turbulence estimates for wind energy
Directory of Open Access Journals (Sweden)
J. F. Newman
2017-02-01
Full Text Available Remote-sensing devices such as lidars are currently being investigated as alternatives to cup anemometers on meteorological towers for the measurement of wind speed and direction. Although lidars can measure mean wind speeds at heights spanning an entire turbine rotor disk and can be easily moved from one location to another, they measure different values of turbulence than an instrument on a tower. Current methods for improving lidar turbulence estimates include the use of analytical turbulence models and expensive scanning lidars. While these methods provide accurate results in a research setting, they cannot be easily applied to smaller, vertically profiling lidars in locations where high-resolution sonic anemometer data are not available. Thus, there is clearly a need for a turbulence error reduction model that is simpler and more easily applicable to lidars that are used in the wind energy industry. In this work, a new turbulence error reduction algorithm for lidars is described. The Lidar Turbulence Error Reduction Algorithm, L-TERRA, can be applied using only data from a stand-alone vertically profiling lidar and requires minimal training with meteorological tower data. The basis of L-TERRA is a series of physics-based corrections that are applied to the lidar data to mitigate errors from instrument noise, volume averaging, and variance contamination. These corrections are applied in conjunction with a trained machine-learning model to improve turbulence estimates from a vertically profiling WINDCUBE v2 lidar. The lessons learned from creating the L-TERRA model for a WINDCUBE v2 lidar can also be applied to other lidar devices. L-TERRA was tested on data from two sites in the Southern Plains region of the United States. The physics-based corrections in L-TERRA brought regression line slopes much closer to 1 at both sites and significantly reduced the sensitivity of lidar turbulence errors to atmospheric stability. The accuracy of machine
Yang, Minglin; Wu, Yueqian; Sheng, Xinqing; Ren, Kuan Fang
2017-12-01
Computation of the scattering of shaped beams by large nonspherical particles is a challenge in both the optics and electromagnetics domains, since it concerns many research fields. In this paper, we report new progress in the numerical computation of scattering diagrams. Our algorithm permits calculation of the scattering of a particle as large as 110 wavelengths, or 700 in size parameter. The particle can be transparent or absorbing, of arbitrary shape, smooth or with a sharp surface, such as Chebyshev particles or ice crystals. To illustrate the capacity of the algorithm, a zero-order Bessel beam is taken as the incident beam, and the scattering of ellipsoidal and Chebyshev particles is taken as an example. Some special phenomena have been revealed and examined. The scattering problem is formulated with the combined tangential formulation and solved iteratively with the aid of the multilevel fast multipole algorithm, which is well parallelized with the message passing interface on a distributed-memory computer platform using a hybrid partitioning strategy. The numerical predictions are compared with the results of the rigorous method for a spherical particle to validate the accuracy of the approach. The scattering diagrams of large ellipsoidal particles with various parameters are examined. The effect of the aspect ratio, as well as of the half-cone angle of the incident zero-order Bessel beam and the off-axis distance, on the scattered intensity is studied. Scattering by an asymmetric Chebyshev particle with size parameter larger than 700 is also given to show the capability of the method for computing scattering by arbitrarily shaped particles.
Liu, Yang
2014-07-01
The computational complexity and memory requirements of classically formulated marching-on-in-time (MOT)-based surface integral equation (SIE) solvers scale as O(Nt·Ns^2) and O(Ns^2), respectively; here Nt and Ns denote the number of temporal and spatial degrees of freedom of the current density. The multilevel plane wave time domain (PWTD) algorithm, viz., the time domain counterpart of the multilevel fast multipole method, reduces these costs to O(Nt·Ns·log^2 Ns) and O(Ns^1.5) (Ergin et al., IEEE Trans. Antennas Mag., 41, 39-52, 1999). Previously, PWTD-accelerated MOT-SIE solvers have been used to analyze transient scattering from perfect electrically conducting (PEC) and homogeneous dielectric objects discretized in terms of a million spatial unknowns (Shanker et al., IEEE Trans. Antennas Propag., 51, 628-641, 2003). More recently, an efficient parallelized solver that employs an advanced hierarchical and provably scalable spatial, angular, and temporal load partitioning strategy has been developed to analyze transient scattering problems that involve ten million spatial unknowns (Liu et al., in URSI Digest, 2013).
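The quoted scalings can be compared directly. A small sketch (constants dropped, so only ratios are meaningful) shows why PWTD acceleration matters at a million spatial unknowns:

```python
import math

def mot_cost(nt, ns):
    """Classical MOT-SIE: O(Nt*Ns^2) time, O(Ns^2) memory (constants dropped)."""
    return nt * ns**2, ns**2

def pwtd_cost(nt, ns):
    """PWTD-accelerated: O(Nt*Ns*log^2 Ns) time, O(Ns^1.5) memory."""
    return nt * ns * math.log2(ns) ** 2, ns**1.5

nt, ns = 10_000, 1_000_000
t_mot, m_mot = mot_cost(nt, ns)
t_pwtd, m_pwtd = pwtd_cost(nt, ns)
# The asymptotic speedup factor Ns / log2(Ns)^2 is roughly 2.5e3 here,
# independent of Nt.
speedup = t_mot / t_pwtd
```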
A Monte Carlo simulation of scattering reduction in spectral x-ray computed tomography
DEFF Research Database (Denmark)
Busi, Matteo; Olsen, Ulrik Lund; Bergbäck Knudsen, Erik
2017-01-01
In X-ray computed tomography (CT), scattered radiation plays an important role in the accurate reconstruction of the inspected object, leading to a loss of contrast between the different materials in the reconstruction volume and cupping artifacts in the images. We present a Monte Carlo simulation...
Algorithms and computer codes for atomic and molecular quantum scattering theory. Volume I
Energy Technology Data Exchange (ETDEWEB)
Thomas, L. (ed.)
1979-01-01
The goals of this workshop are to identify which of the existing computer codes for solving the coupled equations of quantum molecular scattering theory perform most efficiently on a variety of test problems, and to make tested versions of those codes available to the chemistry community through the NRCC software library. To this end, many of the most active developers and users of these codes have been invited to discuss the methods and to solve a set of test problems using the LBL computers. The first volume of this workshop report is a collection of the manuscripts of the talks that were presented at the first meeting held at the Argonne National Laboratory, Argonne, Illinois June 25-27, 1979. It is hoped that this will serve as an up-to-date reference to the most popular methods with their latest refinements and implementations.
Algorithms and computer codes for atomic and molecular quantum scattering theory. Volume I
International Nuclear Information System (INIS)
Thomas, L.
1979-01-01
The goals of this workshop are to identify which of the existing computer codes for solving the coupled equations of quantum molecular scattering theory perform most efficiently on a variety of test problems, and to make tested versions of those codes available to the chemistry community through the NRCC software library. To this end, many of the most active developers and users of these codes have been invited to discuss the methods and to solve a set of test problems using the LBL computers. The first volume of this workshop report is a collection of the manuscripts of the talks that were presented at the first meeting held at the Argonne National Laboratory, Argonne, Illinois June 25-27, 1979. It is hoped that this will serve as an up-to-date reference to the most popular methods with their latest refinements and implementations
Waste reduction algorithm used as the case study of simulated bitumen production process
Directory of Open Access Journals (Sweden)
Savić Marina A.
2011-01-01
Full Text Available The waste reduction algorithm (WAR) is a tool that helps process engineers perform environmental impact assessment. The WAR algorithm is a methodology for determining the potential environmental impact (PEI) of a chemical process. In particular, the bitumen production process was analyzed in three stages: (a) the atmospheric distillation unit, (b) the vacuum distillation unit, and (c) the bitumen production unit. The study was developed for a middle-sized oil refinery with a capacity of 5,000,000 tonnes of crude oil per year. The results highlight the most vulnerable aspects of environmental pollution that arise during the bitumen manufacturing process. The overall rates of PEI leaving the system, Iout (PEI/h), are: (a) 2.14×10^5, (b) 7.17×10^4 and (c) 2.36×10^3, respectively. The overall rates of PEI generated within the system, Igen (PEI/h), are: (a) 7.75×10^4, (b) -4.31×10^4 and (c) -4.32×10^2, respectively. The atmospheric distillation unit has the highest overall rate of PEI, while the bitumen production unit has the lowest. Comparison of the Iout and Igen values for the atmospheric distillation unit shows that the overall rate of PEI generated in the system is 36.21% of the overall rate of PEI leaving the system. In the cases of the vacuum distillation and bitumen production units, the overall rate of PEI generated in the system has negative values, i.e. the overall rate of PEI leaving the system is reduced by 60.11% (in the vacuum distillation unit) and by 18.30% (in the bitumen production unit). Analysis of the obtained results for the overall rate of PEI, expressed by weight of the product, confirms these conclusions.
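The percentages quoted above follow directly from the reported PEI rates; a short check (values transcribed from the abstract):

```python
# Overall PEI rates (PEI/h) reported for the three units:
# atmospheric distillation (adu), vacuum distillation (vdu),
# bitumen production (bpu).
i_out = {"adu": 2.14e5, "vdu": 7.17e4, "bpu": 2.36e3}    # leaving the system
i_gen = {"adu": 7.75e4, "vdu": -4.31e4, "bpu": -4.32e2}  # generated within

def gen_fraction(unit):
    """PEI generated inside the unit as a percentage of the PEI leaving
    it; negative values mean the unit reduces the outgoing PEI."""
    return 100.0 * i_gen[unit] / i_out[unit]

# gen_fraction("adu") ~ 36.21, gen_fraction("vdu") ~ -60.11,
# gen_fraction("bpu") ~ -18.31 (the abstract rounds this to 18.30).
```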
Time-of-flight small-angle neutron scattering data reduction and analysis at LANSCE with program SMR
International Nuclear Information System (INIS)
Hjelm, R.P. Jr.; Seeger, P.A.
1988-01-01
A user-friendly integrated system, SMR, for the display, reduction and analysis of data from time-of-flight small-angle neutron diffractometers is described. Its purpose is to provide facilities for data display and assessment, and to provide these facilities in near real time. This allows the results of each scattering measurement to be available almost immediately, and enables the user to use the results of a measurement as a basis for other measurements in the same time allocation of the instrument. 8 refs., 10 figs
International Nuclear Information System (INIS)
Younes, R.B.; Mas, J.; Bidet, R.
1988-01-01
Contour detection is an important step in information extraction from nuclear medicine images. In order to perform accurate quantitative studies in single photon emission computed tomography (SPECT), a new procedure is described which can rapidly derive the best-fit contour of an attenuated medium. Several authors have evaluated the influence of the detected contour on images reconstructed with various attenuation correction techniques; most of the methods are strongly affected by inaccurately detected contours. This approach uses the Compton window to redetermine the convex contour: it seems to be simpler and more practical in clinical SPECT studies. The main advantages of this procedure are the high speed of computation, the accuracy of the contour found and the programme's automation. Results obtained using computer-simulated and real phantoms or clinical studies demonstrate the reliability of the present algorithm. (orig.)
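The convex-contour idea can be sketched as follows: threshold the Compton-window image and take the convex hull of the retained pixels (here with Andrew's monotone-chain algorithm). The threshold fraction is an illustrative assumption, not the value used in the paper.

```python
import numpy as np

def convex_contour(compton_img, thresh_frac=0.1):
    """Convex body-contour estimate from a Compton-window image:
    threshold, then take the convex hull of the retained pixel
    coordinates (Andrew's monotone-chain algorithm)."""
    ys, xs = np.nonzero(compton_img > thresh_frac * compton_img.max())
    pts = sorted(zip(xs.tolist(), ys.tolist()))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means a clockwise turn
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]   # hull vertices, no repeats
```

The hull vertices can then be rasterized into a convex support mask for the attenuation correction step.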
A data reduction program for the linac total-scattering amorphous materials spectrometer (LINDA)
International Nuclear Information System (INIS)
Clarke, J.H.
1976-01-01
A computer program has been written to reduce the data collected on the A.E.R.E., Harwell linac total-scattering spectrometer (TSS) to the differential scattering cross-section. This instrument, used for studying the structure of amorphous materials such as liquids and glasses, has been described in detail. Time-of-flight spectra are recorded by several arrays of detectors at different angles using a pulsed incident neutron beam with a continuous distribution of wavelengths. The program performs all necessary background and container subtractions and also absorption corrections using the method of Paalman and Pings. The incident neutron energy distribution is obtained from the intensity recorded from a standard vanadium sample, enabling the observed differential scattering cross-section dσ/dΩ(θ, λ) and the structure factor S(Q) to be obtained. Various sample and vanadium geometries can be analysed by the program and facilities exist for the summation of data sets, smoothing of data, application of Placzek corrections and the output of processed data onto magnetic tape or punched cards. A set of example data is provided and some structure factors are shown with absorption corrections. (author)
Network Reduction Algorithm for Developing Distribution Feeders for Real-Time Simulators: Preprint
Energy Technology Data Exchange (ETDEWEB)
Nagarajan, Adarsh; Nelson, Austin; Prabakar, Kumaraguru; Hoke, Andy; Asano, Marc; Ueda, Reid; Nepal, Shaili
2017-06-15
As advanced grid-support functions (AGFs) become more widely used in grid-connected photovoltaic (PV) inverters, utilities are increasingly interested in their impacts when implemented in the field. These effects can be understood by modeling feeders in real-time systems and testing PV inverters using power hardware-in-the-loop (PHIL) techniques. This paper presents a novel feeder model reduction algorithm using a Monte Carlo method that enables large feeders to be solved and operated on real-time computing platforms. Two Hawaiian Electric feeder models in Synergi Electric's load flow software were converted to reduced-order models in OpenDSS, and subsequently implemented in the OPAL-RT real-time digital testing platform. Smart PV inverters were added to the real-time model, with AGF responses modeled after characterizing commercially available hardware inverters. Finally, hardware inverters were tested in conjunction with the real-time model using PHIL techniques so that the effects of AGFs on the chosen feeders could be analyzed.
Impact of Noise Reduction Algorithm in Cochlear Implant Processing on Music Enjoyment.
Kohlberg, Gavriel D; Mancuso, Dean M; Griffin, Brianna M; Spitzer, Jaclyn B; Lalwani, Anil K
2016-06-01
A noise reduction algorithm (NRA) in a speech processing strategy has a positive impact on speech perception among cochlear implant (CI) listeners. We sought to evaluate the effect of NRA on music enjoyment. Prospective analysis of music enjoyment. Academic medical center. Normal-hearing (NH) adults (N = 16) and CI listeners (N = 9). Subjective rating of music excerpts. NH and CI listeners evaluated a country music piece on three enjoyment modalities: pleasantness, musicality, and naturalness. Participants listened to the original version and 20 modified, less complex versions created by including subsets of musical instruments from the original song. NH participants listened to the segments through CI simulation, and CI listeners listened to the segments with their usual speech processing strategy, with and without NRA. Decreasing the number of instruments was significantly associated with an increase in pleasantness and naturalness in both NH and CI subjects (p < 0.05). Enjoyment ratings with and without NRA did not differ significantly; this was true for the original and the modified music segments with one to three instruments (p > 0.05). NRA does not affect music enjoyment in CI listeners or NH individuals with CI simulation. This suggests that strategies to enhance speech processing will not necessarily have a positive impact on music enjoyment. However, reducing the complexity of music shows promise in enhancing music enjoyment and should be further explored.
Chi, Zhijun; Du, Yingchao; Huang, Wenhui; Tang, Chuanxiang
2017-12-01
The necessity for compact and relatively low-cost x-ray sources with monochromaticity, continuous tunability of x-ray energy, high spatial coherence, straightforward polarization control, and high brightness has led to the rapid development of Thomson scattering x-ray sources. To meet the requirement of in-situ monochromatic computed tomography (CT) for large-scale and/or high-attenuation materials based on this type of x-ray source, there is an increasing demand for effective algorithms to correct the energy-angle correlation. In this paper, we take advantage of the parametrization of the x-ray attenuation coefficient to resolve this problem. The linear attenuation coefficient of a material can be decomposed into a linear combination of the energy-dependent photoelectric and Compton cross-sections in the keV energy regime without K-edge discontinuities, and the line integrals of the decomposition coefficients of the above two parts can be determined by performing two spectrally different measurements. After that, the line integral of the linear attenuation coefficient of an imaging object at any energy of interest can be derived through the above parametrization formula, and a monochromatic CT image can be reconstructed at this energy using traditional reconstruction methods, e.g., filtered back projection or the algebraic reconstruction technique. Not only can monochromatic CT be realized, but the distributions of the effective atomic number and electron density of the imaging object can also be retrieved at the expense of a dual-energy CT scan. Simulation results, shown in this paper, validate our proposal. Our results will further expand the scope of application of Thomson scattering x-ray sources.
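The parametrization step can be sketched numerically. Under the simplifying assumption that the two measurements behave like monoenergetic acquisitions at effective energies E1 and E2 (the paper works with full spectra and the energy-angle correlation), the decomposition reduces to a 2×2 solve per ray; the E^-3 photoelectric dependence is a textbook approximation:

```python
import numpy as np

def f_pe(E):
    """Approximate energy dependence of the photoelectric cross-section
    (~E^-3, valid away from K-edges); E in keV."""
    return E ** -3.0

def f_kn(E):
    """Klein-Nishina energy dependence of the Compton cross-section
    (in units of 2*pi*r_e^2); E in keV."""
    a = E / 511.0  # photon energy over electron rest energy
    t1 = (1+a)/a**2 * (2*(1+a)/(1+2*a) - np.log(1+2*a)/a)
    return t1 + np.log(1+2*a)/(2*a) - (1+3*a)/(1+2*a)**2

def monochromatic_projection(p1, p2, E1, E2, E_target):
    """Decompose two spectrally different line integrals p1, p2
    (effective energies E1, E2) into photoelectric and Compton
    coefficients, then synthesize the line integral at E_target."""
    A = np.array([[f_pe(E1), f_kn(E1)],
                  [f_pe(E2), f_kn(E2)]])
    a_pe, a_c = np.linalg.solve(A, np.array([p1, p2]))
    return a_pe * f_pe(E_target) + a_c * f_kn(E_target)
```

Applying this per ray yields a consistent monochromatic sinogram that a standard FBP or ART reconstruction can then process.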
Energy Technology Data Exchange (ETDEWEB)
Kidoh, Masafumi; Utsunomiya, Daisuke; Ikeda, Osamu; Tamura, Yoshitaka; Oda, Seitaro; Yuki, Hideaki; Nakaura, Takeshi; Hirai, Toshinori; Yamashita, Yasuyuki [Kumamoto University, Department of Diagnostic Radiology, Faculty of Life Sciences, Kumamoto (Japan); Funama, Yoshinori [Kumamoto University, Department of Medical Physics, Faculty of Life Sciences, Kumamoto (Japan); Kawano, Takayuki [Kumamoto University Graduate School, Department of Neurosurgery, Faculty of Life Sciences Research, Kumamoto (Japan)
2016-05-15
We evaluated the effect of a single-energy metal artefact reduction (SEMAR) algorithm for metallic coil artefact reduction in body imaging. Computed tomography angiography (CTA) was performed in 30 patients with metallic coils (10 men, 20 women; mean age, 67.9 ± 11 years). Non-SEMAR images were reconstructed with iterative reconstruction alone, and SEMAR images were reconstructed with the iterative reconstruction plus SEMAR algorithms. We compared image noise around metallic coils and the maximum diameters of artefacts from coils between the non-SEMAR and SEMAR images. Two radiologists visually evaluated the metallic coil artefacts utilizing a four-point scale: 1 = extensive; 2 = strong; 3 = mild; 4 = minimal artefacts. The image noise and maximum diameters of the artefacts of the SEMAR images were significantly lower than those of the non-SEMAR images (65.1 ± 33.0 HU vs. 29.7 ± 10.3 HU; 163.9 ± 54.8 mm vs. 10.3 ± 19.0 mm, respectively; P < 0.001). Better visual scores were obtained with the SEMAR technique (3.4 ± 0.6 vs. 1.0 ± 0.0, P < 0.001). The SEMAR algorithm significantly reduced artefacts caused by metallic coils compared with the non-SEMAR algorithm. This technique can potentially increase CT performance for the evaluation of post-coil embolization complications. (orig.)
International Nuclear Information System (INIS)
Kidoh, Masafumi; Utsunomiya, Daisuke; Ikeda, Osamu; Tamura, Yoshitaka; Oda, Seitaro; Yuki, Hideaki; Nakaura, Takeshi; Hirai, Toshinori; Yamashita, Yasuyuki; Funama, Yoshinori; Kawano, Takayuki
2016-01-01
We evaluated the effect of a single-energy metal artefact reduction (SEMAR) algorithm for metallic coil artefact reduction in body imaging. Computed tomography angiography (CTA) was performed in 30 patients with metallic coils (10 men, 20 women; mean age, 67.9 ± 11 years). Non-SEMAR images were reconstructed with iterative reconstruction alone, and SEMAR images were reconstructed with the iterative reconstruction plus SEMAR algorithms. We compared image noise around metallic coils and the maximum diameters of artefacts from coils between the non-SEMAR and SEMAR images. Two radiologists visually evaluated the metallic coil artefacts utilizing a four-point scale: 1 = extensive; 2 = strong; 3 = mild; 4 = minimal artefacts. The image noise and maximum diameters of the artefacts of the SEMAR images were significantly lower than those of the non-SEMAR images (65.1 ± 33.0 HU vs. 29.7 ± 10.3 HU; 163.9 ± 54.8 mm vs. 10.3 ± 19.0 mm, respectively; P < 0.001). Better visual scores were obtained with the SEMAR technique (3.4 ± 0.6 vs. 1.0 ± 0.0, P < 0.001). The SEMAR algorithm significantly reduced artefacts caused by metallic coils compared with the non-SEMAR algorithm. This technique can potentially increase CT performance for the evaluation of post-coil embolization complications. (orig.)
Directory of Open Access Journals (Sweden)
Felix Fritzen
2018-02-01
Full Text Available A novel algorithmic discussion of the methodological and numerical differences between competing parametric model reduction techniques for nonlinear problems is presented. First, the Galerkin reduced basis (RB) formulation is presented, which fails to provide significant gains in computational efficiency for nonlinear problems. Renowned methods for reducing the computing time of nonlinear reduced-order models are Hyper-Reduction and the (Discrete) Empirical Interpolation Method (EIM, DEIM). An algorithmic description and a methodological comparison of both methods are provided. The accuracy of the predictions of the hyper-reduced model and the (D)EIM in comparison to the Galerkin RB is investigated. All three approaches are applied to a simple uncertainty quantification of a planar nonlinear thermal conduction problem. The results are compared to computationally intensive finite element simulations.
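The (D)EIM point selection at the heart of the comparison can be written in a few lines. This sketch follows the standard greedy algorithm of Chaturantabut and Sorensen, not necessarily the exact formulation used in the paper:

```python
import numpy as np

def deim_indices(U):
    """DEIM greedy interpolation-point selection.

    U: (n, m) matrix of POD basis vectors of the nonlinear term.
    Returns m row indices; in the reduced model the nonlinear term
    then only needs to be evaluated at those rows."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # Residual of the next basis vector w.r.t. interpolation at idx
        c = np.linalg.solve(U[idx, :j], U[idx, j])
        r = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)
```

A defining property of the selection is that interpolation at the chosen indices reproduces any vector lying in the span of U exactly.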
SU-F-T-441: Dose Calculation Accuracy in CT Images Reconstructed with Artifact Reduction Algorithm
Energy Technology Data Exchange (ETDEWEB)
Ng, C; Chan, S; Lee, F; Ngan, R [Queen Elizabeth Hospital (Hong Kong); Lee, V [University of Hong Kong, Hong Kong, HK (Hong Kong)
2016-06-15
Purpose: The accuracy of radiotherapy dose calculation in patients with surgical implants is complicated by two factors: first, the accuracy of the CT numbers; second, the dose calculation accuracy. We compared measured doses with doses calculated on CT images reconstructed with FBP and with an artifact reduction algorithm (OMAR, Philips) for a phantom with high-density inserts. Dose calculations were done with Varian AAA and AcurosXB. Methods: A phantom was constructed from solid water in which two titanium or stainless steel rods could be inserted. The phantom was scanned with the Philips Brilliance Big Bore CT. Image reconstruction was done with FBP and OMAR. Two 6 MV single-field photon plans were constructed for each phantom. Radiochromic films were placed at different locations to measure the dose deposited. One plan had normal incidence on the titanium/steel rods. In the second plan, the beam was at almost glancing incidence on the metal rods. Measurements were then compared with doses calculated with AAA and AcurosXB. Results: The use of OMAR images slightly improved the dose calculation accuracy. The agreement between measured and calculated dose was best with AXB on images reconstructed with OMAR. Dose calculated on the titanium phantom had better agreement with measurement. Large discrepancies were seen at points directly above and below the high-density inserts. Both AAA and AXB underestimated the dose directly above the metal surface, while overestimating the dose below the metal surface. Doses measured downstream of the metal were all within 3% of calculated values. Conclusion: When doing treatment planning for patients with metal implants, care must be taken to acquire correct CT images to improve dose calculation accuracy. Moreover, great discrepancies between measured and calculated doses were observed at the metal/tissue interface. Care must be taken in estimating the dose in critical structures that come into contact with metals.
Reduction of ballistic spin scattering in a spin-FET using stray electric fields
International Nuclear Information System (INIS)
Nemnes, G A; Manolescu, A; Gudmundsson, V
2012-01-01
The quasi-bound states which appear as a consequence of the Rashba spin-orbit (SO) coupling, introduce a strongly irregular behavior of the spin-FET conductance at large Rashba parameter. Moreover, the presence of the bulk inversion asymmetry, i.e. the Dresselhaus SO coupling, may compromise the spin-valve effect even at small values of the Rashba parameter. However, by introducing stray electric fields in addition to the SO couplings, we show that the effect of the SO induced quasi-bound states can be tuned. The oscillations of the spin-resolved conductance become smoother and the control of the spin-FET characteristics becomes possible. For the calculations we employ a multi-channel scattering formalism, based on the R-matrix method extended to spin transport, in the presence of Rashba and Dresselhaus SO couplings.
Calculation of zero-norm states and reduction of stringy scattering amplitudes
International Nuclear Information System (INIS)
Lee Jen-Chi
2005-01-01
We give a simplified method to generate two types of zero-norm states in the old covariant first quantized (OCFQ) spectrum of the open bosonic string. Zero-norm states up to the fourth massive level and general formulas for some zero-norm tensor states at arbitrary mass levels are calculated. On-shell Ward identities generated by zero-norm states and the factorization property of stringy vertex operators can then be used to argue that the string-tree scattering amplitudes of the degenerate lower-spin propagating states are fixed by those of the higher-spin propagating states at each fixed mass level. This decoupling phenomenon is, in contrast to Gross's high-energy symmetries, valid at all energies. As examples, we explicitly demonstrate this stringy phenomenon up to the fourth massive level (spin-five), which justifies the calculations of two other previous approaches based on the massive worldsheet sigma-model and Witten's string field theory (WSFT). (author)
Zhao, Cong; Zhong, Yuncheng; Duan, Xinhui; Zhang, You; Huang, Xiaokun; Wang, Jing; Jin, Mingwu
2018-06-01
Four-dimensional (4D) x-ray cone-beam computed tomography (CBCT) is important for a precise radiation therapy for lung cancer. Due to the repeated use and 4D acquisition over a course of radiotherapy, the radiation dose becomes a concern. Meanwhile, the scatter contamination in CBCT deteriorates image quality for treatment tasks. In this work, we propose the use of a moving blocker (MB) during the 4D CBCT acquisition (‘4D MB’) and to combine motion-compensated reconstruction to address these two issues simultaneously. In 4D MB CBCT, the moving blocker reduces the x-ray flux passing through the patient and collects the scatter information in the blocked region at the same time. The scatter signal is estimated from the blocked region for correction. Even though the number of projection views and projection data in each view are not complete for conventional reconstruction, 4D reconstruction with a total-variation (TV) constraint and a motion-compensated temporal constraint can utilize both spatial gradient sparsity and temporal correlations among different phases to overcome the missing data problem. The feasibility simulation studies using the 4D NCAT phantom showed that 4D MB with motion-compensated reconstruction with 1/3 imaging dose reduction could produce satisfactory images and achieve 37% improvement on structural similarity (SSIM) index and 55% improvement on root mean square error (RMSE), compared to 4D reconstruction at the regular imaging dose without scatter correction. For the same 4D MB data, 4D reconstruction outperformed 3D TV reconstruction by 28% on SSIM and 34% on RMSE. A study of synthetic patient data also demonstrated the potential of 4D MB to reduce the radiation dose by 1/3 without compromising the image quality. This work paves the way for more comprehensive studies to investigate the dose reduction limit offered by this novel 4D MB method using physical phantom experiments and real patient data based on clinical relevant metrics.
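The scatter-estimation step of the moving-blocker scheme can be sketched for a single detector row: pixels behind the blocker strips record (ideally) scatter only, and those samples are interpolated across the open region and subtracted. The strip layout and the purely additive scatter model below are illustrative assumptions, not the paper's geometry.

```python
import numpy as np

def blocker_scatter_correct(row, blocked_cols):
    """Moving-blocker scatter correction sketch for one detector row.

    row          : 1D measured projection (primary + scatter in open
                   columns, scatter only behind the blocker strips)
    blocked_cols : indices of the shadowed columns
    Returns the scatter-corrected row and the open-column indices;
    the blocked columns carry no primary signal and are left for the
    (motion-compensated) reconstruction to treat as missing data."""
    cols = np.arange(row.size)
    # Scatter is slowly varying, so linear interpolation of the
    # blocked-region samples approximates it in the open region.
    scatter_est = np.interp(cols, blocked_cols, row[blocked_cols])
    open_cols = np.setdiff1d(cols, blocked_cols)
    corrected = row - scatter_est
    corrected[blocked_cols] = 0.0
    return corrected, open_cols
```

Because the blocker moves between projections, the missing columns differ from view to view, which is what the temporal and total-variation constraints in the 4D reconstruction exploit.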
Hamie, Qeumars Mustafa; Kobe, Adrian Raoul; Mietzsch, Leif; Manhart, Michael; Puippe, Gilbert Dominique; Pfammatter, Thomas; Guggenberger, Roman
2018-01-01
To investigate the effect of an on-site prototype metal artefact reduction (MAR) algorithm in cone-beam CT-catheter-arteriography (CBCT-CA) in patients undergoing transarterial radioembolisation (RE) of hepatic masses. Ethical board approved retrospective study of 29 patients (mean 63.7±13.7 years, 11 female), including 16 patients with arterial metallic coils, undergoing CBCT-CA (8s scan, 200 degrees rotation, 397 projections). Image reconstructions with and without prototype MAR algorithm were evaluated quantitatively (streak-artefact attenuation changes) and qualitatively (visibility of hepatic parenchyma and vessels) in near- (<1cm) and far-field (>3cm) of artefact sources (metallic coils and catheters). Quantitative and qualitative measurements of uncorrected and MAR-corrected images and different artefact sources were compared. RESULTS: Quantitative evaluation showed significant reduction of near- and far-field streak-artefacts with MAR for both artefact sources (p<0.001), while remaining stable for unaffected organs (all p>0.05). Inhomogeneities of attenuation values were significantly higher for metallic coils compared to catheters (p<0.001) and decreased significantly for both after MAR (p<0.001). Qualitative image scores were significantly improved after MAR (all p<0.003) with by trend higher artefact degrees for metallic coils compared to catheters. In patients undergoing CBCT-CA for transarterial RE, the prototype MAR algorithm improves image quality in proximity of metallic coil and catheter artefacts. • Metal objects cause artefacts in cone-beam computed tomography (CBCT) imaging. • These artefacts can be corrected by metal artefact reduction (MAR) algorithms. • Corrected images show significantly better visibility of nearby hepatic vessels and tissue. • Better visibility may facilitate image interpretation, save time and radiation exposure.
Energy Technology Data Exchange (ETDEWEB)
Gongzhang, R.; Xiao, B.; Lardner, T.; Gachagan, A. [Centre for Ultrasonic Engineering, University of Strathclyde, Glasgow, G1 1XW (United Kingdom); Li, M. [School of Engineering, University of Glasgow, Glasgow, G12 8QQ (United Kingdom)
2014-02-18
This paper presents a robust frequency-diversity-based algorithm for clutter reduction in ultrasonic A-scan waveforms. The performance of conventional spectral-temporal techniques such as Split Spectrum Processing (SSP) is highly dependent on parameter selection, especially when the signal-to-noise ratio (SNR) is low. Although spatial beamforming offers noise reduction with less sensitivity to parameter variation, phased array techniques are not always available. The proposed algorithm first selects an ascending series of frequency bands. A signal is reconstructed for each selected band, in which a defect is present where all frequency components are of uniform sign. Combining all reconstructed signals through averaging gives a probability profile of potential defect positions. To facilitate data collection and validate the proposed algorithm, Full Matrix Capture is applied to austenitic steel and high nickel alloy (HNA) samples with 5 MHz transducer arrays. When processing A-scan signals with unrefined parameters, the proposed algorithm enhances SNR by 20 dB for both samples and, consequently, defects are more visible in B-scan images created from the large number of A-scan traces. Importantly, the proposed algorithm is robust, whereas SSP is shown to fail on the austenitic steel data and achieves less SNR enhancement on the HNA data.
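The band-selection and sign-uniformity idea can be sketched as follows. This is a hedged reading of the abstract, not the authors' code: the sub-band split via FFT masking, the number of sub-bands, and the band list are all assumptions.

```python
import numpy as np

def clutter_reduce(ascan, fs, bands, n_sub=8):
    """Sketch of a frequency-diversity clutter-reduction scheme.

    For each (f_lo, f_hi) band in `bands`, the A-scan is split into
    n_sub sub-bands by FFT masking; a sample is retained only where all
    sub-band signals agree in sign (uniform polarity), which is taken to
    indicate a defect echo rather than grain noise.  Averaging the
    retained signals over all bands gives a profile of probable defect
    positions.
    """
    spec = np.fft.rfft(ascan)
    freqs = np.fft.rfftfreq(ascan.size, d=1.0 / fs)
    out = np.zeros(ascan.size, dtype=float)
    for f_lo, f_hi in bands:
        edges = np.linspace(f_lo, f_hi, n_sub + 1)
        subs = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (freqs >= lo) & (freqs < hi)
            subs.append(np.fft.irfft(spec * mask, n=ascan.size))
        subs = np.array(subs)
        uniform = np.all(subs > 0, axis=0) | np.all(subs < 0, axis=0)
        out += np.where(uniform, ascan, 0.0)  # keep only uniform-sign samples
    return out / len(bands)
```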
Orlov, Yu. V.; Irgaziev, B. F.; Nabi, Jameel-Un
2017-08-01
A new algorithm for calculating asymptotic nuclear coefficients, which we call the Δ method, is proved and developed. The method was proposed by Ramírez Suárez and Sparenberg (arXiv:1602.04082), but no proof was given. We apply it to bound states situated near the channel threshold, when the Sommerfeld parameter is quite large within the experimental energy region. In this case, the value of the conventional effective-range function K_l(k^2) is essentially determined by the Coulomb term. One resulting effect is an incorrect description of the energy behavior of the elastic scattering phase shift δ_l reproduced from the fitted total effective-range function K_l(k^2), which leads to an improper value of the asymptotic normalization coefficient (ANC). No such problem arises if we fit only the nuclear term. The difference between the total effective-range function and its Coulomb part at real energies equals the nuclear term, so we can proceed with this Δ method to calculate the pole positions and the ANC. We apply it to the vertices 4He+12C ↔ 16O and 3He+4He ↔ 7Be. The calculated ANCs can be used to find the cross sections of radiative capture reactions to the bound final states of 16O as well as of 7Be.
Indian Academy of Sciences (India)
algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).
Indian Academy of Sciences (India)
algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...
Energy Technology Data Exchange (ETDEWEB)
Ngonkham, S. [Khonkaen Univ., Amphur Muang (Thailand). Dept. of Electrical Engineering; Buasri, P. [Khonkaen Univ., Amphur Muang (Thailand). Embed System Research Group
2009-03-11
A harmony search (HS) algorithm was used to optimize economic dispatch (ED) in a wind energy conversion system (WECS) for power system integration. The HS algorithm is based on a stochastic random search method. System costs for the WECS were estimated in relation to average wind speeds. The HS algorithm was implemented to optimize the ED with a simple programming procedure. The study showed that the initial parameters must be carefully selected to ensure the accuracy of the HS algorithm. The algorithm demonstrated that the total costs of the WECS were higher than the costs associated with energy efficiency procedures that reduced the same amount of greenhouse gas (GHG) emissions. 7 refs., 10 tabs., 16 figs.
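Harmony search itself is a standard metaheuristic, so a minimal generic minimiser can be sketched. The paper's ED cost model and wind-speed statistics are not reproduced here; `cost` and `bounds` stand in for them, and the parameter values are typical defaults, not the paper's.

```python
import random

def harmony_search(cost, bounds, hms=20, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=1):
    """Generic harmony-search minimiser (illustrative sketch only).

    cost   : objective to minimise, maps a list of floats to a float
    bounds : [(lo, hi), ...] per decision variable
    hmcr   : harmony-memory considering rate
    par    : pitch-adjusting rate; bw: pitch bandwidth (fraction of range)
    """
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    fit = [cost(h) for h in hm]
    for _ in range(iters):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                 # draw from memory
                x = hm[rng.randrange(hms)][j]
                if rng.random() < par:              # pitch adjustment
                    x += (rng.random() * 2 - 1) * bw * (hi - lo)
            else:                                   # random re-sampling
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        f = cost(new)
        worst = max(range(hms), key=fit.__getitem__)
        if f < fit[worst]:                          # replace worst harmony
            hm[worst], fit[worst] = new, f
    best = min(range(hms), key=fit.__getitem__)
    return hm[best], fit[best]
```

This also illustrates the abstract's point that the initial parameters (hmcr, par, bw) must be chosen with care: poor values slow or prevent convergence.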
International Nuclear Information System (INIS)
Johnson, C.S.
1992-01-01
Activation barrier heights, and therefore rates, for molecule-based electron-transfer (ET) reactions are governed by redox thermodynamics and Franck-Condon effects. Quantitative assessment of the latter requires a detailed, quantitative knowledge of all internal and external normal-coordinate displacements, together with appropriate vibrational frequencies (v) or force constants (f). In favorable cases, the desired internal or vibrational displacement information can be satisfactorily estimated from redox-induced bond-length changes as provided, for example, by x-ray crystallography or extended x-ray absorption fine structure (EXAFS) measurements. Other potentially useful methods include Franck-Condon analysis of structured emission or absorption spectra, hole-burning techniques, and application of empirical structure/frequency relationships (e.g., Badger's rule). There are, however, a number of limitations. The most obvious limitations for crystallography are that measurements can be made only in a crystalline environment and that experiments cannot be done on short-lived electron-transfer excited states or on systems which suffer from chemical decomposition following oxidation or reduction. For EXAFS there are additional constraints in that only selected elements display useful scattering and only atoms in close proximity to the scattering center may be detected. This report contains the first successful applications of the Raman methodology to a much larger class of ET reactions, namely, outer-sphere reactions. The report also necessarily represents the first application to a monomeric redox system.
Indian Academy of Sciences (India)
will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to Busing C as auxiliary rod. • move _disk (A, C);. (No + l)th disk is moved from A to C directly ...
DEFF Research Database (Denmark)
Tafti, Hossein Dehghani; Maswood, Ali Iftekhar; Pou, Josep
2016-01-01
strings should be reduced during voltage sags. In this paper, an algorithm is proposed for determining the reference voltage of the PV string which results in a reduction of the output power to a certain amount. The proposed algorithm calculates the reference voltage for the dc/dc converter controller......, based on the characteristics of the power-voltage curve of the PV string and, therefore, no modification is required in the controller of the dc/dc converter. Simulation results on a 50-kW PV string verified the effectiveness of the proposed algorithm in reducing the power from PV strings under......Due to the high penetration of the installed distributed generation units in the power system, the injection of reactive power is required for the medium-scale and large-scale grid-connected photovoltaic power plants (PVPPs). Because of the current limitation of the grid-connected inverter...
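The reference-voltage idea can be sketched from a sampled power-voltage curve. Operating on the high-voltage side of the maximum power point is an assumption made here for a stable curtailment point; it is not stated in the snippet, and the function names are hypothetical.

```python
import numpy as np

def reference_voltage(v, p, p_ref):
    """Hypothetical sketch: pick the dc/dc reference voltage that limits
    a PV string to p_ref, using only its sampled P-V characteristic
    (so the dc/dc converter controller itself needs no modification).

    v, p  : sampled P-V curve of the string (v strictly ascending)
    p_ref : demanded (reduced) output power
    """
    i_mpp = int(np.argmax(p))
    if p_ref >= p[i_mpp]:
        return float(v[i_mpp])          # cannot exceed available power
    v_right, p_right = v[i_mpp:], p[i_mpp:]
    # p decreases to the right of the MPP, so flip for np.interp
    return float(np.interp(p_ref, p_right[::-1], v_right[::-1]))
```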
International Nuclear Information System (INIS)
Kidoh, M.; Nakaura, T.; Nakamura, S.; Tokuyasu, S.; Osakabe, H.; Harada, K.; Yamashita, Y.
2014-01-01
Aim: To evaluate the image quality of O-MAR (Metal Artifact Reduction for Orthopedic Implants) for dental metal artefact reduction. Materials and methods: This prospective study received institutional review board approval and written informed consent was obtained. Thirty patients who had dental implants or dental fillings were included in this study. Computed tomography (CT) images were obtained through the oral cavity and neck during the portal venous phase. The system reconstructed the O-MAR-processed images in addition to the uncorrected images. CT attenuation and image noise of the soft tissue of the oral cavity were compared between the O-MAR and the uncorrected images. Qualitative analysis was undertaken between the two image groups. Results: The image noise of the O-MAR images was significantly lower than that of the uncorrected images (p < 0.01). O-MAR offered plausible attenuations of soft tissue compared with non-O-MAR. Better qualitative scores were obtained for streak artefacts and the degree of depiction of the oral cavity with O-MAR compared with non-O-MAR. Conclusion: In qualitative image analysis, O-MAR enabled the depiction of structures in areas in which this was previously not possible due to dental metal artefacts. O-MAR images may have a supplementary role in addition to uncorrected images in oral diagnosis.
High-performance bidiagonal reduction using tile algorithms on homogeneous multicore architectures
Ltaief, Hatem; Luszczek, Piotr R.; Dongarra, Jack
2013-01-01
dependence translation layer that maps the general algorithm with column-major data layout into the tile data layout; and (4) a dynamic runtime system that efficiently schedules the newly implemented kernels across the processing units and ensures
International Nuclear Information System (INIS)
Andrushevskii, N.M.; Shchedrin, B.M.; Simonov, V.I.
2004-01-01
New algorithms for solving the atomic structure of equivalent nanodimensional clusters of the same orientations randomly distributed over the initial single crystal (crystal matrix) have been suggested. A cluster is a compact group of substitutional, interstitial or other atoms displaced from their positions in the crystal matrix. The structure is solved based on X-ray or neutron diffuse scattering data obtained from such objects. The use of the mathematical apparatus of Fourier transformations of finite functions showed that the appropriate sampling of the intensities of continuous diffuse scattering allows one to synthesize multiperiodic difference Patterson functions that reveal the systems of the interatomic vectors of an individual cluster. The suggested algorithms are tested on a model one-dimensional structure
Directory of Open Access Journals (Sweden)
Weitao Li
2017-01-01
Full Text Available During neurosurgery, an optical probe has been used to guide the micro-electrode, which is punctured into the globus pallidus (GP) to create a lesion that can relieve the cardinal symptoms. Accurate target localization is the key factor affecting the treatment. However, given the scattering nature of the tissue, the “look ahead distance” (LAD) of the optical probe makes the boundary between different tissues blurred and difficult to distinguish, which is defined as an artifact. Thus, it is highly desirable to reduce the artifact caused by the LAD. In this paper, a real-time algorithm based on a precise threshold was proposed to eliminate the artifact. The value of the threshold was determined automatically from the maximum error of the measurement system during the calibration process. The measured data were then processed sequentially, based only on the threshold and the preceding data. Moreover, a 100 μm double-fiber probe and two-layer and multi-layer phantom models were utilized to validate the precision of the algorithm. The error of the algorithm is one puncture step, which was proved in theory and experiment. It was concluded that the present method can reduce the artifact caused by the LAD and make the real boundary sharper and less blurred in real time. It might potentially be used for neurosurgery navigation.
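The sequential threshold rule lends itself to a very small sketch: each new reading is compared only with the previous one, and a jump larger than the calibrated threshold is declared a boundary. This is an illustrative reading of the abstract, not the authors' code.

```python
def detect_boundaries(samples, threshold):
    """Minimal sketch of the threshold-based real-time rule: a step
    between consecutive readings larger than the calibrated threshold
    (the measurement system's maximum error) marks a tissue boundary.

    Returns the indices (puncture steps) at which boundaries occur, so
    the localisation error is at most one puncture step.
    """
    boundaries = []
    for i in range(1, len(samples)):
        if abs(samples[i] - samples[i - 1]) > threshold:
            boundaries.append(i)
    return boundaries
```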
International Nuclear Information System (INIS)
Son, Sang Jun; Park, Jang Pil; Kim, Min Jeong; Yoo, Suk Hyun
2014-01-01
The purpose of this study is to evaluate the applicability of O-MAR (Metal Artifact Reduction for Orthopedic Implants) (ver. 3.6.0, Philips, Netherlands) in head and neck radiation treatment planning CT with metal artifacts created by dental implants. All CT images in this study were scanned by a Brilliance Big Bore CT (Philips, Netherlands) at 120 kVp with 2 mm slices, and metal artifacts were reduced by O-MAR. The original and reconstructed CT images were compared on an RTPS (Eclipse ver. 10.0.42, Varian, USA). To test the basic performance of the O-MAR, a phantom was made to create metal artifacts from dental implants, and other phantoms were used for artifact-free images. To measure HU differences between images with and without artifacts, homogeneous and inhomogeneous phantoms were used with Cerrobend rods, and HU differences in ROIs were compared between the image sets. In addition, for one patient case, the original CT, the O-MAR-applied CT, and a density-corrected CT were evaluated for dose distributions with SNC Patient (Sun Nuclear Co., USA). For the head and neck phantom, the dose distributions showed a 99.8% gamma passing rate (criteria 2 mm/2%) between the original and O-MAR-applied CT images, and 98.5% in the patient case among the original CT, O-MAR, and density-corrected CT. The difference in total dose distribution was less than 2% in both the phantom and patient studies. Although the dose deviations are small, their local concentration remains a matter for discussion. In this study, the quality of all O-MAR-applied images was improved. Unexpectedly, an increase in maximum HU was found in the air cavities of the O-MAR images compared with the original images, and incorrect corrections also appeared. In a restrained case where O-MAR was applied near the skin and in low-density areas, image distortion and artifact correction appeared simultaneously. In O-MAR CT, air cavity area
Leakage Detection and Estimation Algorithm for Loss Reduction in Water Piping Networks
Directory of Open Access Journals (Sweden)
Kazeem B. Adedeji
2017-10-01
Full Text Available Water loss through leaking pipes constitutes a major challenge to the operational service of water utilities. In recent years, increasing concern about the financial loss and environmental pollution caused by leaking pipes has been driving the development of efficient algorithms for detecting leakage in water piping networks. Water distribution networks (WDNs) are dispersed in nature, with a large number of nodes and branches. Consequently, identifying the segment(s) of the network, and the exact leaking pipelines connected to these segment(s), where higher background leakage outflow occurs is a challenging task. Background leakage concerns the outflow from small cracks or deteriorated joints. In addition, because background leaks are diffuse flows, they are not characterised by a quick pressure drop and are not detectable by measuring instruments. Consequently, they go unreported for long periods of time, adding to the volume of water lost. Most of the existing research focuses on the detection and localisation of burst-type leakages, which are characterised by a sudden pressure drop. In this work, an algorithm for detecting and estimating background leakage in water distribution networks is presented. The algorithm integrates a leakage model into a classical WDN hydraulic model for solving the network leakage flows. The applicability of the developed algorithm is demonstrated on two different water networks. The results of the tested networks are discussed, and the solutions obtained show the benefits of the proposed algorithm. Notably, the algorithm permits the detection of critical segments or pipes of the network experiencing higher leakage outflow and indicates the probable pipes of the network where pressure control can be performed. However, the possible position of pressure control elements along such critical pipes will be addressed in future work.
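A background-leakage model of the kind the algorithm integrates can be sketched with a pressure-driven emitter law. The form q = β·L·p^α, the coefficient, and the exponent below are assumptions of this sketch (a commonly used pressure-dependent leakage form), not the paper's equations; a full hydraulic solver is omitted.

```python
def background_leakage(pipes, pressures, alpha=1.18):
    """Illustrative sketch: each pipe's background leakage is modelled
    as q = beta * L * p_avg**alpha, where p_avg is the mean pressure
    head of its end nodes, beta an assumed per-unit-length emitter
    coefficient, and alpha a typical leakage exponent.  Ranking pipes
    by q flags the critical segments where pressure control may help.

    pipes     : list of (pipe_id, node_a, node_b, length_m, beta)
    pressures : dict node -> pressure head (m)
    Returns [(pipe_id, leakage_flow), ...], most critical first.
    """
    flows = {}
    for pid, a, b, length, beta in pipes:
        p_avg = max(0.5 * (pressures[a] + pressures[b]), 0.0)
        flows[pid] = beta * length * p_avg ** alpha
    return sorted(flows.items(), key=lambda kv: kv[1], reverse=True)
```

In the paper's setting these leakage flows would be fed back into the hydraulic model as pressure-dependent nodal outflows and the system re-solved until convergence.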
Leakage detection and estimation algorithm for loss reduction in water piping networks
CSIR Research Space (South Africa)
Adedeji, KB
2017-10-01
Full Text Available the development of efficient algorithms for detecting leakage in water piping networks. Water distribution networks (WDNs) are disperse in nature with numerous number of nodes and branches. Consequently, identifying the segment(s) of the network and the exact...
Shibli, Hussain J.
2013-06-01
Opportunistic schedulers rely on the feedback of all users in order to schedule a set of users with favorable channel conditions. While the downlink channels can be easily estimated at all user terminals via a single broadcast, several key challenges are faced during uplink transmission. First of all, the statistics of the noisy and fading feedback channels are unknown at the base station (BS) and channel training is usually required from all users. Secondly, the amount of network resources (air-time) required for feedback transmission grows linearly with the number of users. In this paper, we tackle the above challenges and propose a Bayesian based scheduling algorithm that 1) reduces the air-time required to identify the strong users, and 2) is agnostic to the statistics of the feedback channels and utilizes the a priori statistics of the additive noise to identify the strong users. Numerical results show that the proposed algorithm reduces the feedback air-time while improving detection in the presence of fading and noisy channels when compared to recent compressed sensing based algorithms. Furthermore, the proposed algorithm achieves a sum-rate throughput close to that obtained by noiseless dedicated feedback systems. © 2013 IEEE.
International Nuclear Information System (INIS)
Hamie, Qeumars Mustafa; Kobe, Adrian Raoul; Mietzsch, Leif; Puippe, Gilbert Dominique; Pfammatter, Thomas; Guggenberger, Roman; Manhart, Michael
2018-01-01
To investigate the effect of an on-site prototype metal artefact reduction (MAR) algorithm in cone-beam CT-catheter-arteriography (CBCT-CA) in patients undergoing transarterial radioembolisation (RE) of hepatic masses. Ethical board approved retrospective study of 29 patients (mean 63.7±13.7 years, 11 female), including 16 patients with arterial metallic coils, undergoing CBCT-CA (8s scan, 200 degrees rotation, 397 projections). Image reconstructions with and without prototype MAR algorithm were evaluated quantitatively (streak-artefact attenuation changes) and qualitatively (visibility of hepatic parenchyma and vessels) in near- (<1cm) and far-field (>3cm) of artefact sources (metallic coils and catheters). Quantitative and qualitative measurements of uncorrected and MAR-corrected images and different artefact sources were compared. Quantitative evaluation showed significant reduction of near- and far-field streak-artefacts with MAR for both artefact sources (p<0.001), while remaining stable for unaffected organs (all p>0.05). Inhomogeneities of attenuation values were significantly higher for metallic coils compared to catheters (p<0.001) and decreased significantly for both after MAR (p<0.001). Qualitative image scores were significantly improved after MAR (all p<0.003) with by trend higher artefact degrees for metallic coils compared to catheters. In patients undergoing CBCT-CA for transarterial RE, prototype MAR algorithm improves image quality in proximity of metallic coil and catheter artefacts. (orig.)
Breen, H J; Rogers, P; Johnson, N W; Slaney, R
1999-08-01
Clinical periodontal measurement is plagued by many sources of error which result in aberrant values (outliers). This study set out to compare probeable crevice depth (PCD) measurements selected by the option-4 algorithm against those recorded with a conventional double-pass method, and to quantify any reduction in site-specific PCD variances. A single clinician recorded full-mouth PCD at 1 visit in 32 subjects (mean age 45.5 years) with moderately advanced chronic adult periodontitis. PCD was recorded over 2 passes at 6 sites per tooth with the Florida Pocket Depth Probe, a 3rd-generation probe. The option-4 algorithm compared the 1st-pass site-specific PCD value (PCD1) to the 2nd-pass site-specific PCD value (PCD2) and, if the difference between these values was >1.00 mm, allowed the recording of a maximum of 2 further measurements (3rd- and 4th-pass measurements PCD3 and PCD4): 4 site-specific measurements were considered to be the maximum subject and tissue tolerance. The algorithm selected the first 2 measurements whose difference was ≤1.00 mm (percentage difference Y, where Y = [(A − B)/A] × 100), and achieved a 75% reduction in the median site-specific variance of PCD1/PCD2.
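The option-4 selection rule is concrete enough to sketch. The fallback when no pair agrees within tolerance after 4 passes is an assumption of this sketch, since the abstract does not specify that case.

```python
def option4(measure, tol=1.0, max_passes=4):
    """Sketch of the option-4 rule: take two probing passes; if they
    differ by more than `tol` (1.00 mm), allow up to two further passes
    and return the mean of the first pair of readings (in acquisition
    order) that agree to within `tol`.  `measure` performs one probing
    pass at the site and returns a depth in mm.
    """
    readings = [measure(), measure()]
    while True:
        for i in range(len(readings)):
            for j in range(i + 1, len(readings)):
                if abs(readings[i] - readings[j]) <= tol:
                    return 0.5 * (readings[i] + readings[j])
        if len(readings) >= max_passes:
            break
        readings.append(measure())
    # No agreeing pair after max_passes: fall back to the closest pair
    # (an assumption; the abstract does not cover this case).
    best = min(((a, b) for i, a in enumerate(readings)
                for b in readings[i + 1:]),
               key=lambda p: abs(p[0] - p[1]))
    return 0.5 * sum(best)
```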
Energy Technology Data Exchange (ETDEWEB)
Hamie, Qeumars Mustafa; Kobe, Adrian Raoul; Mietzsch, Leif; Puippe, Gilbert Dominique; Pfammatter, Thomas; Guggenberger, Roman [University Hospital Zurich, Department of Radiology, Zurich (Switzerland); Manhart, Michael [Imaging Concepts, HC AT IN IMC, Siemens Healthcare GmbH, Advanced Therapies, Innovation, Forchheim (Germany)
2018-01-15
To investigate the effect of an on-site prototype metal artefact reduction (MAR) algorithm in cone-beam CT-catheter-arteriography (CBCT-CA) in patients undergoing transarterial radioembolisation (RE) of hepatic masses. Ethical board approved retrospective study of 29 patients (mean 63.7±13.7 years, 11 female), including 16 patients with arterial metallic coils, undergoing CBCT-CA (8s scan, 200 degrees rotation, 397 projections). Image reconstructions with and without prototype MAR algorithm were evaluated quantitatively (streak-artefact attenuation changes) and qualitatively (visibility of hepatic parenchyma and vessels) in near- (<1cm) and far-field (>3cm) of artefact sources (metallic coils and catheters). Quantitative and qualitative measurements of uncorrected and MAR-corrected images and different artefact sources were compared. Quantitative evaluation showed significant reduction of near- and far-field streak-artefacts with MAR for both artefact sources (p<0.001), while remaining stable for unaffected organs (all p>0.05). Inhomogeneities of attenuation values were significantly higher for metallic coils compared to catheters (p<0.001) and decreased significantly for both after MAR (p<0.001). Qualitative image scores were significantly improved after MAR (all p<0.003) with by trend higher artefact degrees for metallic coils compared to catheters. In patients undergoing CBCT-CA for transarterial RE, prototype MAR algorithm improves image quality in proximity of metallic coil and catheter artefacts. (orig.)
Introduction of Bootstrap Current Reduction in the Stellarator Optimization Using the Algorithm DAB
International Nuclear Information System (INIS)
Castejón, F.; Gómez-Iglesias, A.; Velasco, J. L.
2015-01-01
This work is devoted to introducing a new optimization criterion in the DAB (Distributed Asynchronous Bees) code. With this new criterion, DAB now includes the equilibrium and Mercier stability criteria; the minimization of Bxgrad(B), which ensures the reduction of neoclassical transport and the improvement of the confinement of fast particles; and the reduction of bootstrap current. We started from a neoclassically optimised configuration of the helias type and imposed the reduction of bootstrap current. The obtained configuration presents only a modest reduction of total bootstrap current, but the local current density is reduced along the minor radius. Further investigations are under way to understand the reason for this modest improvement.
Directory of Open Access Journals (Sweden)
Min Liu
2018-03-01
Full Text Available Sidelobe reduction is a primary task for synthetic aperture radar (SAR) images. Various methods have been proposed for broadside SAR, which can suppress the sidelobes effectively while maintaining high image resolution. Alternatively, squint SAR, especially highly squint SAR, has emerged as an important tool that provides more mobility and flexibility, and it has become a focus of recent research. One of the research challenges for squint SAR is how to resolve the severe range-azimuth coupling of echo signals. Unlike broadside SAR images, the range and azimuth sidelobes of squint SAR images no longer lie on the principal axes with high probability. Thus the spatially variant apodization (SVA) filters can hardly capture all the sidelobe information, and hence the sidelobe reduction process is not optimal. In this paper, we present an improved algorithm called double spatially variant apodization (D-SVA) for better sidelobe suppression. Satisfactory sidelobe reduction results are achieved with the proposed algorithm, as shown by comparing the squint SAR images to the broadside SAR images. Simulation results also demonstrate the reliability and efficiency of the proposed method.
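For reference, classical one-dimensional SVA, the building block that D-SVA refines, can be sketched as follows. This is the textbook real-valued form, not the paper's D-SVA implementation; for complex SAR imagery the real and imaginary parts would be processed separately.

```python
import numpy as np

def sva_1d(x):
    """One-dimensional spatially variant apodization (SVA) sketch.

    For each sample, an apodization weight w in [0, 0.5] is chosen so
    that y[n] = x[n] + w * (x[n-1] + x[n+1]) has minimum magnitude:
    w = 0 keeps the uniform (sinc) response, w = 0.5 is the Hanning
    window, and intermediate w can null a sidelobe sample exactly.
    """
    y = x.astype(float).copy()
    for n in range(1, len(x) - 1):
        s = x[n - 1] + x[n + 1]
        if s == 0:
            continue                    # neighbours cancel: leave as-is
        w = -x[n] / s                   # unconstrained minimiser
        if w <= 0:
            y[n] = x[n]                 # uniform weighting is best
        elif w >= 0.5:
            y[n] = x[n] + 0.5 * s       # clip to Hanning weighting
        else:
            y[n] = 0.0                  # sidelobe can be nulled exactly
    return y
```

D-SVA, as the abstract describes it, applies this kind of filtering a second time to catch sidelobes that the squint geometry moves off the principal axes.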
Leakage Detection and Estimation Algorithm for Loss Reduction in Water Piping Networks
Kazeem B. Adedeji; Yskandar Hamam; Bolanle T. Abe; Adnan M. Abu-Mahfouz
2017-01-01
Water loss through leaking pipes constitutes a major challenge to the operational service of water utilities. In recent years, increasing concern about the financial loss and environmental pollution caused by leaking pipes has been driving the development of efficient algorithms for detecting leakage in water piping networks. Water distribution networks (WDNs) are disperse in nature with numerous number of nodes and branches. Consequently, identifying the segment(s) of the network and the exa...
Directory of Open Access Journals (Sweden)
Hadi Ghashochi-Bargh
Full Text Available In the current paper, power consumption and vertical displacement optimization of composite plates subject to a step load are carried out with piezoelectric patches using the modified multi-objective Elitist Artificial Bee Colony (E-ABC) algorithm. The motivation behind this concept is to balance the exploration and exploitation capabilities well, for better convergence to the optimum. In order to reduce the calculation time, an elitist strategy is also used in the Artificial Bee Colony algorithm. The voltages of the patches, plate length/width ratios, ply angles, plate thickness/length ratios, number of layers, and edge conditions are chosen as design variables. The formulation is based on the classical laminated plate theory (CLPT) and Hamilton's principle. The performance of the new ABC approach is compared with the PSO algorithm and shows the good efficiency of the new ABC approach. To check the validity, the transient responses of isotropic and orthotropic plates are compared with those available in the literature and show good agreement.
LTREE - a lisp-based algorithm for cutset generation using Boolean reduction
International Nuclear Information System (INIS)
Finnicum, D.J.; Rzasa, P.W.
1985-01-01
Fault tree analysis is an important tool for evaluating the safety of nuclear power plants. The basic objective of fault tree analysis is to determine the probability that an undesired event or combination of events will occur. Fault tree analysis involves four main steps: (1) specifying the undesired event or events; (2) constructing the fault tree which represents the ways in which the postulated event(s) could occur; (3) qualitative evaluation of the logic model to identify the minimal cutsets; and (4) quantitative evaluation of the logic model to determine the probability that the postulated event(s) will occur given the probability of occurrence for each individual fault. This paper describes a LISP-based algorithm for the qualitative evaluation of fault trees. Development of this algorithm is the first step in a project to apply expert systems technology to the automation of the fault tree analysis process. The first section of this paper provides an overview of LISP and its capabilities, the second section describes the LTREE algorithm and the third section discusses the on-going research areas
Energy Technology Data Exchange (ETDEWEB)
Korpics, Mark; Surucu, Murat; Mescioglu, Ibrahim; Alite, Fiori; Block, Alec M.; Choi, Mehee; Emami, Bahman; Harkenrider, Matthew M.; Solanki, Abhishek A.; Roeske, John C., E-mail: jroeske@lumc.edu
2016-11-15
Purpose and Objectives: To quantify, through an observer study, the reduction in metal artifacts on cone beam computed tomographic (CBCT) images using a projection-interpolation algorithm, on images containing metal artifacts from dental fillings and implants in patients treated for head and neck (H&N) cancer. Methods and Materials: An interpolation-substitution algorithm was applied to H&N CBCT images containing metal artifacts from dental fillings and implants. Image quality with respect to metal artifacts was evaluated subjectively and objectively. First, 6 independent radiation oncologists were asked to rank randomly sorted blinded images (before and after metal artifact reduction) using a 5-point rating scale (1 = severe artifacts; 5 = no artifacts). Second, the standard deviation of different regions of interest (ROI) within each image was calculated and compared with the mean rating scores. Results: The interpolation-substitution technique successfully reduced metal artifacts in 70% of the cases. From a total of 60 images from 15 H&N cancer patients undergoing image guided radiation therapy, the mean rating score on the uncorrected images was 2.3 ± 1.1, versus 3.3 ± 1.0 for the corrected images. The mean difference in ranking score between uncorrected and corrected images was 1.0 (95% confidence interval: 0.9-1.2, P<.05). The standard deviation of each ROI significantly decreased after artifact reduction (P<.01). Moreover, a negative correlation between the mean rating score for each image and the standard deviation of the oral cavity and bilateral cheeks was observed. Conclusion: The interpolation-substitution algorithm is efficient and effective for reducing metal artifacts caused by dental fillings and implants on CBCT images, as demonstrated by the statistically significant increase in observer image quality ranking and by the decrease in ROI standard deviation between uncorrected and corrected images.
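The interpolation-substitution step can be sketched for a single detector row: pixels shadowed by metal are replaced with values interpolated from the adjacent unshadowed pixels before reconstruction. The linear interpolation and mask handling below are a plausible reading of the method, not the authors' exact code.

```python
import numpy as np

def interpolate_metal_trace(row, metal_mask):
    """Sketch of the projection-domain interpolation-substitution idea.

    row        : 1-D array of detector readings for one projection row
    metal_mask : boolean array, True where metal shadows the detector

    Shadowed pixels are substituted with values linearly interpolated
    from the surrounding unshadowed pixels; doing this row by row in
    every projection removes the inconsistent metal trace before the
    CBCT volume is reconstructed.
    """
    cols = np.arange(row.size)
    fixed = row.copy()
    fixed[metal_mask] = np.interp(cols[metal_mask],
                                  cols[~metal_mask], row[~metal_mask])
    return fixed
```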
Algorithm for statistical noise reduction in three-dimensional ion implant simulations
International Nuclear Information System (INIS)
Hernandez-Mangas, J.M.; Arias, J.; Jaraiz, M.; Bailon, L.; Barbolla, J.
2001-01-01
As integrated circuit devices scale into the deep sub-micron regime, ion implantation will continue to be the primary means of introducing dopant atoms into silicon. Different types of impurity profiles, such as ultra-shallow and retrograde profiles, are necessary for deep sub-micron devices in order to realize the desired device performance. A new algorithm is presented that reduces the statistical noise in three-dimensional ion implant simulations in both the lateral and the shallow/deep regions of the profile. The computational effort of BCA Monte Carlo ion implant simulation is also reduced.
Dong, S.
2018-05-01
We present a reduction-consistent and thermodynamically consistent formulation and an associated numerical algorithm for simulating the dynamics of an isothermal mixture consisting of N (N ⩾ 2) immiscible incompressible fluids with different physical properties (densities, viscosities, and pair-wise surface tensions). By reduction consistency we refer to the property that if only a set of M (1 ⩽ M ⩽ N - 1) fluids are present in the system then the N-phase governing equations and boundary conditions will exactly reduce to those for the corresponding M-phase system. By thermodynamic consistency we refer to the property that the formulation honors the thermodynamic principles. Our N-phase formulation is developed based on a more general method that allows for the systematic construction of reduction-consistent formulations, and the method suggests the existence of many possible forms of reduction-consistent and thermodynamically consistent N-phase formulations. Extensive numerical experiments have been presented for flow problems involving multiple fluid components and large density ratios and large viscosity ratios, and the simulation results are compared with the physical theories or the available physical solutions. The comparisons demonstrate that our method produces physically accurate results for this class of problems.
Energy Technology Data Exchange (ETDEWEB)
Grosse Hokamp, Nils; Neuhaus, V.; Abdullayev, N.; Laukamp, K.; Lennartz, S.; Mpotsaris, A.; Borggrefe, J. [University Hospital Cologne, Department of Diagnostic and Interventional Radiology, Cologne (Germany)
2018-02-15
Aim of this study was to assess the artifact reduction in patients with orthopedic hardware in the spine as provided by (1) the metal artifact reduction algorithm (O-MAR) and (2) virtual monoenergetic images (MonoE) from spectral detector CT (SDCT), compared to conventional iterative reconstruction (CI). In all, 28 consecutive patients with orthopedic hardware in the spine who underwent SDCT examinations were included. CI, O-MAR and MonoE (40-200 keV) images were reconstructed. Attenuation (HU) and noise (SD) were measured in order to calculate the signal-to-noise ratio (SNR) of paravertebral muscle and spinal canal. Subjective image quality was assessed by two radiologists in terms of image quality and extent of artifact reduction. O-MAR and high-keV MonoE showed a significant decrease of hypodense artifacts in terms of higher attenuation as compared to CI (CI vs. O-MAR, 200 keV MonoE: -396.5 HU vs. -115.2 HU, -48.1 HU; both p ≤ 0.001). Further, artifacts as depicted by noise were reduced in O-MAR and high-keV MonoE as compared to CI in (1) paravertebral muscle and (2) spinal canal (CI vs. O-MAR/200 keV: (1) 34.7 ± 19.0 HU vs. 26.4 ± 14.4 HU, p ≤ 0.05/27.4 ± 16.1 HU, n.s.; (2) 103.4 ± 61.3 HU vs. 72.6 ± 62.6 HU/60.9 ± 40.1 HU, both p ≤ 0.001). Subjectively, both O-MAR and high-keV images yielded an artifact reduction in up to 24/28 patients. Both O-MAR and high-keV MonoE reconstructions as provided by SDCT lead to objective and subjective artifact reduction; the combination of O-MAR and MonoE therefore seems promising for further reduction. (orig.)
Simulation of modified hybrid noise reduction algorithm to enhance the speech quality
International Nuclear Information System (INIS)
Waqas, A.; Muhammad, T.; Jamal, H.
2013-01-01
Speech is the most essential means of human communication; mobile telephony, hearing aids and hands-free systems are specific applications in this respect. The performance of these communication devices can be degraded by the distortions that contaminate the speech. Two essential types of distortion can be recognized, namely convolutive and additive noise. These distortions corrupt the clean speech and make it unsatisfactory to human listeners, i.e., the perceptual quality and intelligibility of the speech signal diminish. The objective of speech enhancement systems is to improve the quality and intelligibility of speech to make it more acceptable to listeners. This paper proposes a modified hybrid approach for single-channel devices to process noisy signals, considering only the effect of background noise. It is a combination of a pre-processing relative spectral amplitude (RASTA) filter, approximated by a straightforward 4th-order band-pass filter, and the conventional minimum mean square error short-time spectral amplitude (MMSE STSA85) estimator. To analyze the performance of the algorithm, an objective measure called perceptual evaluation of speech quality (PESQ) is computed. The results show that the modified algorithm performs well in removing background noise. A SIMULINK implementation is also carried out, and its profile report has been generated to observe the execution time. (author)
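The two stages of the hybrid approach can be sketched as below. This is a hedged illustration: the band-pass corner frequencies are assumptions (the paper does not state them here), and a simplified Wiener-style gain stands in for the full MMSE STSA85 estimator.

```python
import numpy as np
from scipy.signal import butter, lfilter

def rasta_like_bandpass(x, fs, low=1.0, high=12.0, order=4):
    """4th-order Butterworth band-pass used as a stand-in for the RASTA
    pre-processing filter; the corner frequencies here are assumed values."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return lfilter(b, a, x)

def spectral_gain(noisy_mag, noise_mag, floor=0.1):
    """Simplified Wiener-style short-time spectral amplitude gain standing
    in for MMSE STSA85: attenuate bins dominated by the noise estimate,
    with a gain floor to limit musical noise."""
    snr = np.maximum(noisy_mag**2 - noise_mag**2, 0.0) / np.maximum(noise_mag**2, 1e-12)
    gain = snr / (1.0 + snr)
    return np.maximum(gain, floor) * noisy_mag
```

In a complete enhancer these would be applied per analysis frame (STFT magnitude for the gain), followed by overlap-add resynthesis.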
Directory of Open Access Journals (Sweden)
Peigang Ning
Full Text Available OBJECTIVE: This work aims to explore the effects of adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) algorithms in reducing computed tomography (CT) radiation dosages in abdominal imaging. METHODS: CT scans on a standard male phantom were performed at different tube currents. Images at the different tube currents were reconstructed with the filtered back-projection (FBP), 50% ASiR and MBIR algorithms and compared. The CT value, image noise and contrast-to-noise ratios (CNRs) of the reconstructed abdominal images were measured. Volumetric CT dose indexes (CTDIvol) were recorded. RESULTS: At different tube currents, 50% ASiR and MBIR significantly reduced image noise and increased the CNR when compared with FBP. The minimal tube current values required by FBP, 50% ASiR, and MBIR to achieve acceptable image quality using this phantom were 200, 140, and 80 mA, respectively. At the identical image quality, 50% ASiR and MBIR reduced the radiation dose by 35.9% and 59.9% respectively when compared with FBP. CONCLUSIONS: Advanced iterative reconstruction techniques are able to reduce image noise and increase image CNRs. Compared with FBP, 50% ASiR and MBIR reduced radiation doses by 35.9% and 59.9%, respectively.
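The two quantities such studies report can be computed directly. A minimal sketch, assuming CTDIvol scales linearly with tube current at fixed kVp and rotation time:

```python
import numpy as np

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio between a signal ROI and a background ROI."""
    contrast = abs(np.mean(roi_signal) - np.mean(roi_background))
    noise = np.std(roi_background)
    return contrast / noise

def dose_reduction_pct(mA_reference, mA_reduced):
    """Percent dose reduction, assuming dose scales linearly with tube
    current at fixed kVp and rotation time."""
    return 100.0 * (mA_reference - mA_reduced) / mA_reference
```

With the study's minimal tube currents, `dose_reduction_pct(200, 80)` gives 60%, close to the 59.9% reduction reported for MBIR versus FBP at matched image quality.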
Arpaia, P; Inglese, V
2010-01-01
A real-time algorithm of data reduction, based on the combination of two lossy techniques specifically optimized for high-rate magnetic measurements in two domains (e.g. time and space), is proposed. The first technique exploits an adaptive sampling rule based on the power estimation of the flux increments in order to optimize the information to be gathered for magnetic field analysis in real time. The tracking condition is defined by the target noise level in the Nyquist band required by post-processing procedure of magnetic analysis. The second technique uses a data reduction algorithm in order to improve the compression ratio while preserving the consistency of the measured signal. The allowed loss is set equal to the random noise level in the signal in order to force the loss and the noise to cancel rather than to add, by improving the signal-to-noise ratio. Numerical analysis and experimental results of on-field performance characterization and validation for two case studies of magnetic measurement syste...
International Nuclear Information System (INIS)
Arpaia, Pasquale; Buzio, Marco; Inglese, Vitaliano
2010-01-01
A real-time algorithm of data reduction, based on the combination of two lossy techniques specifically optimized for high-rate magnetic measurements in two domains (e.g. time and space), is proposed. The first technique exploits an adaptive sampling rule based on the power estimation of the flux increments in order to optimize the information to be gathered for magnetic field analysis in real time. The tracking condition is defined by the target noise level in the Nyquist band required by the post-processing procedure of magnetic analysis. The second technique uses a data reduction algorithm in order to improve the compression ratio while preserving the consistency of the measured signal. The allowed loss is set equal to the random noise level in the signal in order to force the loss and the noise to cancel rather than to add, by improving the signal-to-noise ratio. Numerical analysis and experimental results of on-field performance characterization and validation for two case studies of magnetic measurement systems for testing magnets of the Large Hadron Collider at the European Organization for Nuclear Research (CERN) are reported
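The second technique's core idea, setting the allowed loss equal to the random noise level, resembles deadband compression. A minimal sketch under that reading; the paper's actual adaptive sampling rule based on flux-increment power estimation is not reproduced here:

```python
def deadband_compress(samples, allowed_loss):
    """Lossy reduction: keep a sample only when it deviates from the last
    retained value by more than the allowed loss (set equal to the random
    noise level, so the discarded detail is indistinguishable from noise).
    Returns (indices, values) of the retained samples."""
    kept_idx, kept_val = [0], [samples[0]]
    for i, v in enumerate(samples[1:], start=1):
        if abs(v - kept_val[-1]) > allowed_loss:
            kept_idx.append(i)
            kept_val.append(v)
    return kept_idx, kept_val
```

Reconstruction then holds or interpolates between retained samples; the compression ratio grows as the signal varies more slowly relative to the noise floor.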
Directory of Open Access Journals (Sweden)
Prajakta Desai
Full Text Available Traffic congestion continues to be a persistent problem throughout the world. As vehicle-to-vehicle communication develops, there is an opportunity of using cooperation among close-proximity vehicles to tackle the congestion problem. The intuition is that if vehicles could cooperate opportunistically when they come close enough to each other, they could, in effect, spread themselves out among alternative routes so that vehicles do not all jam up on the same roads. Our previous work proposed a decentralized multiagent-based vehicular congestion management algorithm entitled Congestion Avoidance and Route Allocation using Virtual Agent Negotiation (CARAVAN), wherein the vehicles acting as intelligent agents perform cooperative route allocation using inter-vehicular communication. This paper focuses on evaluating the practical applicability of this approach by testing its robustness and performance (in terms of travel time reduction), across variations in: (a) environmental parameters such as road network topology and configuration; (b) algorithmic parameters such as vehicle agent preferences and route cost/preference multipliers; and (c) agent-related parameters such as equipped/non-equipped vehicles and compliant/non-compliant agents. Overall, the results demonstrate the adaptability and robustness of the decentralized cooperative vehicles approach to providing global travel time reduction using simple local coordination strategies.
Desai, Prajakta; Loke, Seng W; Desai, Aniruddha
2017-01-01
Traffic congestion continues to be a persistent problem throughout the world. As vehicle-to-vehicle communication develops, there is an opportunity of using cooperation among close proximity vehicles to tackle the congestion problem. The intuition is that if vehicles could cooperate opportunistically when they come close enough to each other, they could, in effect, spread themselves out among alternative routes so that vehicles do not all jam up on the same roads. Our previous work proposed a decentralized multiagent based vehicular congestion management algorithm entitled Congestion Avoidance and Route Allocation using Virtual Agent Negotiation (CARAVAN), wherein the vehicles acting as intelligent agents perform cooperative route allocation using inter-vehicular communication. This paper focuses on evaluating the practical applicability of this approach by testing its robustness and performance (in terms of travel time reduction), across variations in: (a) environmental parameters such as road network topology and configuration; (b) algorithmic parameters such as vehicle agent preferences and route cost/preference multipliers; and (c) agent-related parameters such as equipped/non-equipped vehicles and compliant/non-compliant agents. Overall, the results demonstrate the adaptability and robustness of the decentralized cooperative vehicles approach to providing global travel time reduction using simple local coordination strategies.
International Nuclear Information System (INIS)
Tseng, Hsin-Wu; Kupinski, Matthew A.; Fan, Jiahua; Sainath, Paavana; Hsieh, Jiang
2014-01-01
Purpose: A number of different techniques have been developed to reduce radiation dose in x-ray computed tomography (CT) imaging. In this paper, the authors compare task-based measures of image quality of CT images reconstructed by two algorithms: conventional filtered back projection (FBP) and a new iterative reconstruction algorithm (IR). Methods: To assess image quality, the authors used the performance of a channelized Hotelling observer acting on reconstructed image slices. The selected channels are dense difference-of-Gaussian channels (DDOG). A body phantom and a head phantom were imaged 50 times at different dose levels to obtain the data needed to assess image quality. The phantoms consisted of uniform backgrounds with low-contrast signals embedded at various locations. The tasks the observer model performed included (1) detection of a signal of known location and shape, and (2) detection and localization of a signal of known shape. The employed DDOG channels are based on the response of the human visual system. Performance was assessed using the areas under ROC curves and areas under localization ROC curves. Results: For signal-known-exactly (SKE) and location-unknown/signal-shape-known tasks with circular signals of different sizes and contrasts, the authors' task-based measures showed that FBP-equivalent image quality can be achieved at lower dose levels using the IR algorithm. For the SKE case, the range of dose reduction is 50%–67% (head phantom) and 68%–82% (body phantom). For the location-unknown/signal-shape-known study, the dose reduction range is 67%–75% for the head phantom and 67%–77% for the body phantom. These results suggest that IR images at lower dose settings can reach the same image quality as full-dose conventional FBP images. Conclusions: The work presented provides an objective way to quantitatively assess the image quality of a newly introduced CT IR algorithm. The performance of the
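A channelized Hotelling observer with an AUC figure of merit can be sketched in a few lines. This is a generic sketch, not the authors' code: the channel matrix (DDOG in the paper) is taken as given, and AUC is computed via the Wilcoxon–Mann–Whitney statistic.

```python
import numpy as np

def cho_test_statistics(imgs_signal, imgs_noise, channels):
    """Channelized Hotelling observer: project images onto channels, then
    apply the Hotelling template built from channel-output statistics.
    imgs_* : (n_images, n_pixels) arrays; channels : (n_pixels, n_channels)."""
    v_s = imgs_signal @ channels              # channel outputs, signal-present
    v_n = imgs_noise @ channels               # channel outputs, signal-absent
    S = 0.5 * (np.cov(v_s.T) + np.cov(v_n.T))  # pooled channel covariance
    w = np.linalg.solve(np.atleast_2d(S), v_s.mean(0) - v_n.mean(0))
    return v_s @ w, v_n @ w                   # observer ratings per class

def auc(t_signal, t_noise):
    """Area under the ROC curve via the Wilcoxon-Mann-Whitney statistic."""
    t_s = np.asarray(t_signal, dtype=float)[:, None]
    t_n = np.asarray(t_noise, dtype=float)[None, :]
    return (t_s > t_n).mean() + 0.5 * (t_s == t_n).mean()
```

In practice the template is trained on one half of the repeated scans and the ratings computed on the other half to avoid optimistic bias.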
Politi, Luigi; Biondi-Zoccai, Giuseppe; Nocetti, Luca; Costi, Tiziana; Monopoli, Daniel; Rossi, Rosario; Sgura, Fabio; Modena, Maria Grazia; Sangiorgi, Giuseppe M
2012-01-01
Occupational radiation exposure is a growing problem due to the increasing number and complexity of interventional procedures performed. Radial artery access has reduced the number of complications at the price of longer procedure duration. Radpad® scatter protection is a sterile, disposable bismuth-barium radiation shield drape intended to decrease the operator radiation dose during diagnostic and interventional procedures. Such a radiation shield had never been tested in a randomized study in humans. Sixty consecutive patients undergoing coronary angiography by the radial approach were randomized 1:1 to Radpad use versus no radiation shield protection. The sterile shield was placed around the area of right radial artery sheath insertion and extended medially to the patient trunk. All diagnostic procedures were performed by the same operator to reduce variability in radiation absorption. Radiation exposure was measured blindly using thermoluminescence dosimeters positioned at the operator's chest, left eye, left wrist, and thyroid. Despite similar fluoroscopy time (3.52 ± 2.71 min vs. 3.46 ± 2.77 min, P = 0.898) and total examination dose (50.5 ± 30.7 vs. 45.8 ± 18.0 Gycm(2), P = 0.231), the mean total radiation exposure to the operator was significantly lower when Radpad was utilized (282.8 ± 32.55 μSv vs. 367.8 ± 105.4 μSv), with reductions at all body locations ranging from 13% to 34%. This first-in-men randomized trial demonstrates that Radpad significantly reduces occupational radiation exposure during coronary angiography performed through right radial artery access. Copyright © 2011 Wiley Periodicals, Inc.
Schiavo, M; Bagnara, M C; Pomposelli, E; Altrinetti, V; Calamia, I; Camerieri, L; Giusti, M; Pesce, G; Reitano, C; Bagnasco, M; Caputo, M
2013-09-01
Radioiodine is a common option for treatment of hyperfunctioning thyroid nodules. Due to the expected selective radioiodine uptake by adenoma, relatively high "fixed" activities are often used. Alternatively, the activity is individually calculated upon the prescription of a fixed value of target absorbed dose. We evaluated the use of an algorithm for personalized radioiodine activity calculation, which allows as a rule the administration of lower radioiodine activities. Seventy-five patients with single hyperfunctioning thyroid nodule eligible for 131I treatment were studied. The activities of 131I to be administered were estimated by the method described by Traino et al. and developed for Graves' disease, assuming selective and homogeneous 131I uptake by adenoma. The method takes into account 131I uptake and its effective half-life, target (adenoma) volume and its expected volume reduction during treatment. A comparison with the activities calculated by other dosimetric protocols, and the "fixed" activity method was performed. 131I uptake was measured by external counting, thyroid nodule volume by ultrasonography, thyroid hormones and TSH by ELISA. Remission of hyperthyroidism was observed in all but one patient; volume reduction of adenoma was closely similar to that assumed by our model. Effective half-life was highly variable in different patients, and critically affected dose calculation. The administered activities were clearly lower with respect to "fixed" activities and other protocols' prescription. The proposed algorithm proved to be effective also for single hyperfunctioning thyroid nodule treatment and allowed a significant reduction of administered 131I activities, without loss of clinical efficacy.
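The general shape of a dose-based activity prescription can be sketched as below. This is a simplified Marinelli-style calculation, not the Traino et al. method itself (which additionally models the nodule's volume reduction during treatment); the beta dose constant is an assumed round value.

```python
import math

LN2 = math.log(2)
DELTA_BETA = 0.11   # Gy·g/(MBq·h): mean locally absorbed beta energy per
                    # unit cumulated activity of 131I; assumed round value

def administered_activity_mbq(target_dose_gy, nodule_mass_g,
                              uptake_fraction, t_eff_days):
    """Activity to administer so the nodule receives the prescribed
    absorbed dose, assuming selective and homogeneous 131I uptake by the
    adenoma. Cumulated activity in the nodule = A0 * uptake * T_eff / ln2."""
    t_eff_hours = 24.0 * t_eff_days
    return (target_dose_gy * nodule_mass_g * LN2 /
            (uptake_fraction * t_eff_hours * DELTA_BETA))
```

The formula makes visible why the abstract stresses the effective half-life: the prescribed activity is inversely proportional to it, so patient-to-patient variability in T_eff directly rescales the calculated activity.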
Directory of Open Access Journals (Sweden)
Shunfang Wang
2015-12-01
Full Text Available An effective representation of a protein sequence plays a crucial role in protein sub-nuclear localization. The existing representations, such as dipeptide composition (DipC), pseudo-amino acid composition (PseAAC) and position specific scoring matrix (PSSM), are insufficient to represent protein sequence due to their single perspectives. Thus, this paper proposes two fusion feature representations of DipPSSM and PseAAPSSM to integrate PSSM with DipC and PseAAC, respectively. When constructing each fusion representation, we introduce the balance factors to value the importance of its components. The optimal values of the balance factors are sought by genetic algorithm. Due to the high dimensionality of the proposed representations, linear discriminant analysis (LDA) is used to find its important low dimensional structure, which is essential for classification and location prediction. The numerical experiments on two public datasets with KNN classifier and cross-validation tests showed that in terms of the common indexes of sensitivity, specificity, accuracy and MCC, the proposed fusing representations outperform the traditional representations in protein sub-nuclear localization, and the representation treated by LDA outperforms the untreated one.
Reduction of image-based ADI-to-AEI overlay inconsistency with improved algorithm
Chen, Yen-Liang; Lin, Shu-Hong; Chen, Kai-Hsiung; Ke, Chih-Ming; Gau, Tsai-Sheng
2013-04-01
In image-based overlay (IBO) measurement, the measurement quality of various measurement spectra can be judged by quality indicators and also the ADI-to-AEI similarity to determine the optimum light spectrum. However, we found some IBO measured results showing erroneous indication of wafer expansion from the difference between the ADI and the AEI maps, even after their measurement spectra were optimized. To reduce this inconsistency, an improved image calculation algorithm is proposed in this paper. Different gray levels composed of inner- and outer-box contours are extracted to calculate their ADI overlay errors. The symmetry of intensity distribution at the thresholds dictated by a range of gray levels is used to determine the particular gray level that can minimize the ADI-to-AEI overlay inconsistency. After this improvement, the ADI is more similar to AEI with less expansion difference. The same wafer was also checked by the diffraction-based overlay (DBO) tool to verify that there is no physical wafer expansion. When there is actual wafer expansion induced by large internal stress, both the IBO and the DBO measurements indicate similar expansion results. The scanning white-light interference microscope was used to check the variation of wafer warpage during the ADI and AEI stages. It predicts a similar trend with the overlay difference map, confirming the internal stress.
Wang, Shunfang; Liu, Shuhui
2015-12-19
An effective representation of a protein sequence plays a crucial role in protein sub-nuclear localization. The existing representations, such as dipeptide composition (DipC), pseudo-amino acid composition (PseAAC) and position specific scoring matrix (PSSM), are insufficient to represent protein sequence due to their single perspectives. Thus, this paper proposes two fusion feature representations of DipPSSM and PseAAPSSM to integrate PSSM with DipC and PseAAC, respectively. When constructing each fusion representation, we introduce the balance factors to value the importance of its components. The optimal values of the balance factors are sought by genetic algorithm. Due to the high dimensionality of the proposed representations, linear discriminant analysis (LDA) is used to find its important low dimensional structure, which is essential for classification and location prediction. The numerical experiments on two public datasets with KNN classifier and cross-validation tests showed that in terms of the common indexes of sensitivity, specificity, accuracy and MCC, the proposed fusing representations outperform the traditional representations in protein sub-nuclear localization, and the representation treated by LDA outperforms the untreated one.
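The fusion-plus-LDA pipeline can be sketched generically. Assumptions: the balance factor multiplies the PSSM block (the quantity the genetic algorithm would tune), and a two-class Fisher discriminant with a small ridge term stands in for the LDA step.

```python
import numpy as np

def fuse_features(dipc, pssm, balance):
    """DipPSSM-style fusion: concatenate DipC features with PSSM features,
    with a balance factor weighting the PSSM block (the factor the paper
    tunes by genetic algorithm)."""
    return np.concatenate([dipc, balance * pssm], axis=-1)

def fisher_lda_direction(X, y):
    """Two-class Fisher LDA direction: maximizes between-class over
    within-class scatter. A small ridge keeps the scatter matrix
    invertible for high-dimensional fused features."""
    X0, X1 = X[y == 0], X[y == 1]
    Sw = np.cov(X0.T) + np.cov(X1.T) + 1e-6 * np.eye(X.shape[1])
    return np.linalg.solve(Sw, X1.mean(0) - X0.mean(0))
```

A multi-class LDA projecting to several dimensions would replace the single direction here; the fused-then-projected features would then feed the KNN classifier the paper uses.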
Dodge, Cristina T.; Tamm, Eric P.; Cody, Dianna D.; Liu, Xinming; Jensen, Corey T.; Wei, Wei; Kundra, Vikas
2016-01-01
The purpose of this study was to characterize image quality and dose performance with GE CT iterative reconstruction techniques, adaptive statistical iterative reconstruction (ASiR), and model‐based iterative reconstruction (MBIR), over a range of typical to low‐dose intervals using the Catphan 600 and the anthropomorphic Kyoto Kagaku abdomen phantoms. The scope of the project was to quantitatively describe the advantages and limitations of these approaches. The Catphan 600 phantom, supplemented with a fat‐equivalent oval ring, was scanned using a GE Discovery HD750 scanner at 120 kVp, 0.8 s rotation time, and pitch factors of 0.516, 0.984, and 1.375. The mA was selected for each pitch factor to achieve CTDIvol values of 24, 18, 12, 6, 3, 2, and 1 mGy. Images were reconstructed at 2.5 mm thickness with filtered back‐projection (FBP); 20%, 40%, and 70% ASiR; and MBIR. The potential for dose reduction and low‐contrast detectability were evaluated from noise and contrast‐to‐noise ratio (CNR) measurements in the CTP 404 module of the Catphan. Hounsfield units (HUs) of several materials were evaluated from the cylinder inserts in the CTP 404 module, and the modulation transfer function (MTF) was calculated from the air insert. The results were confirmed in the anthropomorphic Kyoto Kagaku abdomen phantom at 6, 3, 2, and 1 mGy. MBIR reduced noise levels five‐fold and increased CNR by a factor of five compared to FBP below 6 mGy CTDIvol, resulting in a substantial improvement in image quality. Compared to ASiR and FBP, HU in images reconstructed with MBIR were consistently lower, and this discrepancy was reversed by higher pitch factors in some materials. MBIR improved the conspicuity of the high‐contrast spatial resolution bar pattern, and MTF quantification confirmed the superior spatial resolution performance of MBIR versus FBP and ASiR at higher dose levels. While ASiR and FBP were relatively insensitive to changes in dose and pitch, the spatial
A DFT-based genetic algorithm search for AuCu nanoalloy electrocatalysts for CO2 reduction
DEFF Research Database (Denmark)
Lysgaard, Steen; Mýrdal, Jón Steinar Garðarsson; Hansen, Heine Anton
2015-01-01
Using a DFT-based genetic algorithm (GA) approach, we have determined the most stable structure and stoichiometry of a 309-atom icosahedral AuCu nanoalloy, for potential use as an electrocatalyst for CO2 reduction. The identified core–shell nano-particle consists of a copper core interspersed… This shows that the mixed Cu135@Au174 core–shell nanoalloy has a similar adsorption energy, for the most favorable site, as a pure gold nano-particle. Cu, however, has the effect of stabilizing the icosahedral structure because Au particles are easily distorted when adding adsorbates. …that it is possible to use the LCAO mode to obtain a realistic estimate of the molecular chemisorption energy for systems where the computation in normal grid mode is not computationally feasible. These corrections are employed when calculating adsorption energies on the Cu, Au and most stable mixed particles…
Miller, Steven D.; Bankert, Richard L.; Solbrig, Jeremy E.; Forsythe, John M.; Noh, Yoo-Jeong; Grasso, Lewis D.
2017-12-01
This paper describes a Dynamic Enhancement Background Reduction Algorithm (DEBRA) applicable to multispectral satellite imaging radiometers. DEBRA uses ancillary information about the clear-sky background to reduce false detections of atmospheric parameters in complex scenes. Applied here to the detection of lofted dust, DEBRA enlists a surface emissivity database coupled with a climatological database of surface temperature to approximate the clear-sky equivalent signal for selected infrared-based multispectral dust detection tests. This background allows for suppression of false alarms caused by land surface features while retaining some ability to detect dust above those problematic surfaces. The algorithm is applicable to both day and nighttime observations and enables weighted combinations of dust detection tests. The results are provided quantitatively, as a detection confidence factor [0, 1], but are also readily visualized as enhanced imagery. Utilizing the DEBRA confidence factor as a scaling factor in false color red/green/blue imagery enables depiction of the targeted parameter in the context of the local meteorology and topography. In this way, the method holds utility to both automated clients and human analysts alike. Examples of DEBRA performance from notable dust storms and comparisons against other detection methods and independent observations are presented.
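The use of the confidence factor as an imagery scaling can be sketched as a per-pixel blend. The linear blend toward a fixed dust tint is an assumption for illustration; DEBRA's actual enhancement recipe may differ.

```python
import numpy as np

def confidence_scaled_rgb(rgb, confidence, tint=(1.0, 0.8, 0.0)):
    """Blend false-color imagery toward a dust tint in proportion to the
    per-pixel detection confidence in [0, 1]: confident detections stand
    out while low-confidence pixels keep the background scene context.
    rgb : (..., 3) array in [0, 1]; confidence : matching (...) array."""
    f = np.asarray(confidence)[..., None]
    return (1.0 - f) * np.asarray(rgb) + f * np.asarray(tint)
```

This captures the idea stated above: the same confidence factor serves automated clients numerically and human analysts visually, since the enhanced pixels remain embedded in the local meteorology and topography.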
Directory of Open Access Journals (Sweden)
Lei Zhang
2016-01-01
Full Text Available Among non-small cell lung cancer (NSCLC), adenocarcinoma (AC) and squamous cell carcinoma (SCC) are two major histology subtypes, accounting for roughly 40% and 30% of all lung cancer cases, respectively. Since AC and SCC differ in their cell of origin, location within the lung, and growth pattern, they are considered as distinct diseases. Gene expression signatures have been demonstrated to be an effective tool for distinguishing AC and SCC. Gene set analysis is regarded as irrelevant to the identification of gene expression signatures. Nevertheless, we found that one specific gene set analysis method, significance analysis of microarray-gene set reduction (SAMGSR), can be adopted directly to select relevant features and to construct gene expression signatures. In this study, we applied SAMGSR to a NSCLC gene expression dataset. When compared with several novel feature selection algorithms, for example, LASSO, SAMGSR has equivalent or better performance in terms of predictive ability and model parsimony. Therefore, SAMGSR is a feature selection algorithm, indeed. Additionally, we applied SAMGSR to AC and SCC subtypes separately to discriminate their respective stages, that is, stage II versus stage I. Few overlaps between these two resulting gene signatures illustrate that AC and SCC are technically distinct diseases. Therefore, stratified analyses on subtypes are recommended when diagnostic or prognostic signatures of these two NSCLC subtypes are constructed.
Stamnes, Knut; Tsay, S.-CHEE; Jayaweera, Kolf; Wiscombe, Warren
1988-01-01
The transfer of monochromatic radiation in a scattering, absorbing, and emitting plane-parallel medium with a specified bidirectional reflectivity at the lower boundary is considered. The equations and boundary conditions are summarized. The numerical implementation of the theory is discussed with attention given to the reliable and efficient computation of eigenvalues and eigenvectors. Ways of avoiding fatal overflows and ill-conditioning in the matrix inversion needed to determine the integration constants are also presented.
International Nuclear Information System (INIS)
Pirotta, M.; Aquilina, D.; Bhikha, T.; Georg, D.
2005-01-01
The ESTRO formalism for monitor unit (MU) calculations was evaluated and implemented to replace a previous methodology based on dosimetric data measured in a full-scatter phantom. This traditional method relies on data normalised at the depth of dose maximum (z_m), as well as on the utilisation of the BJR 25 table for the conversion of rectangular fields into equivalent square fields. The treatment planning system (TPS) was subsequently updated to reflect the new beam data normalised at a depth z_R of 10 cm. Comparisons were then carried out between the ESTRO formalism, the Clarkson-based dose calculation algorithm on the TPS (with beam data normalised at z_m and z_R), and the traditional ''full-scatter'' methodology. All methodologies except for the ''full-scatter'' methodology separated head-scatter from phantom-scatter effects, and none of the methodologies except for the ESTRO formalism utilised wedge depth dose information for calculations. The accuracy of MU calculations was verified against measurements in a homogeneous phantom for square and rectangular open and wedged fields, as well as blocked open and wedged fields, at 5, 10, and 20 cm depths, under fixed SSD and isocentric geometries for 6 and 10 MV. Overall, the ESTRO formalism showed the most accurate performance, with the root mean square (RMS) error with respect to measurements remaining below 1% even for the most complex beam set-ups investigated. The RMS error for the TPS deteriorated with the introduction of a wedge, with a worse RMS error for the beam data normalised at z_m (4% at 6 MV and 1.6% at 10 MV) than at z_R (1.9% at 6 MV and 1.1% at 10 MV). The further addition of blocking had only a marginal impact on the accuracy of this methodology. The ''full-scatter'' methodology showed a loss in accuracy for calculations involving either wedges or blocking, and performed worst for blocked wedged fields (RMS errors of 7.1% at 6 MV and 5% at 10 MV). The origins of these discrepancies were
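A monitor-unit calculation in the spirit of a factor-based formalism can be sketched as below. The factor chain (head scatter, phantom scatter, tissue-phantom ratio, accessory factors, output normalised at 10 cm depth) loosely follows the ESTRO booklet's separation of effects; it is a sketch, not the evaluated implementation.

```python
def monitor_units(dose_gy, output_ref_gy_per_mu, s_c, s_p, tpr,
                  wedge=1.0, tray=1.0):
    """Monitor units for an isocentric set-up: prescribed dose divided by
    the per-MU output at the reference depth (e.g. z_R = 10 cm), corrected
    by head-scatter (s_c), phantom-scatter (s_p), tissue-phantom ratio
    (tpr) and accessory transmission factors."""
    return dose_gy / (output_ref_gy_per_mu * s_c * s_p * tpr * wedge * tray)
```

Separating s_c from s_p is exactly what lets collimator-defined and phantom-defined field sizes differ (e.g. under blocking), which is where the abstract reports the full-scatter methodology losing accuracy.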
Directory of Open Access Journals (Sweden)
Othman M. K. Alsmadi
2015-01-01
Full Text Available A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA), with the advantage of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, where simulation results show the potential and advantages of the new approach.
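The GA's fitness evaluation reduces to scoring the deviation between full- and reduced-order responses. A minimal sketch of such a fitness function; the exact deviation measure and scaling used in the paper are assumptions here.

```python
def mor_fitness(full_response, reduced_response):
    """Fitness for GA-based model order reduction: maximizing this value
    minimizes the accumulated squared deviation between the full-order
    and reduced-order model responses (e.g. sampled step responses)."""
    dev = sum((f - r) ** 2 for f, r in zip(full_response, reduced_response))
    return 1.0 / (1.0 + dev)
```

Each GA individual encodes candidate reduced-model parameters; simulating its response and scoring it with this function drives the population toward reduced models that preserve the dominant dynamics.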
Cho, Jae Heon; Lee, Jong Ho
2015-11-01
Manual calibration is common in rainfall-runoff model applications. However, rainfall-runoff models include several complicated parameters; thus, significant time and effort are required to manually calibrate the parameters individually and repeatedly. Automatic calibration has relative merit regarding time efficiency and objectivity but shortcomings regarding understanding indigenous processes in the basin. In this study, a watershed model calibration framework was developed using an influence coefficient algorithm and genetic algorithm (WMCIG) to automatically calibrate the distributed models. The optimization problem used to minimize the sum of squares of the normalized residuals of the observed and predicted values was solved using a genetic algorithm (GA). The final model parameters were determined from the iteration with the smallest sum of squares of the normalized residuals of all iterations. The WMCIG was applied to a Gomakwoncheon watershed located in an area that presents a total maximum daily load (TMDL) in Korea. The proportion of urbanized area in this watershed is low, and the diffuse pollution loads of nutrients such as phosphorus are greater than the point-source pollution loads because of the concentration of rainfall that occurs during the summer. The pollution discharges from the watershed were estimated for each land-use type, and the seasonal variations of the pollution loads were analyzed. Consecutive flow measurement gauges have not been installed in this area, and it is difficult to survey the flow and water quality in this area during the frequent heavy rainfall that occurs during the wet season. The Hydrological Simulation Program-Fortran (HSPF) model was used to calculate the runoff flow and water quality in this basin. Using the water quality results, a load duration curve was constructed for the basin, the exceedance frequency of the water quality standard was calculated for each hydrologic condition class, and the percent reduction
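The optimization step described above (a GA minimizing the sum of squares of the normalized residuals between observed and predicted values) can be sketched in miniature. The toy linear model, parameter ranges, and GA settings below are illustrative assumptions standing in for the actual HSPF model and WMCIG configuration:

```python
import random

def normalized_ssq(params, observed, model):
    # Objective: sum of squares of the normalized residuals,
    # sum(((obs - sim) / obs)^2), which the GA minimizes.
    sim = model(params)
    return sum(((o - s) / o) ** 2 for o, s in zip(observed, sim))

def toy_model(params):
    # Hypothetical 2-parameter stand-in for the distributed HSPF model
    a, b = params
    return [a * x + b for x in (1.0, 2.0, 3.0, 4.0)]

def calibrate(observed, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.0, 5.0), rng.uniform(0.0, 5.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: normalized_ssq(p, observed, toy_model))
        elite = pop[: pop_size // 2]        # keep the best half (elitism)
        children = []
        while len(elite) + len(children) < pop_size:
            pa, pb = rng.sample(elite, 2)   # crossover: average two parents
            children.append(tuple((x + y) / 2 + rng.gauss(0.0, 0.1)
                                  for x, y in zip(pa, pb)))
        pop = elite + children
    return min(pop, key=lambda p: normalized_ssq(p, observed, toy_model))

obs = [2.5 * x + 0.7 for x in (1.0, 2.0, 3.0, 4.0)]  # synthetic "observations"
best = calibrate(obs)
```

As in the framework, the final parameter set is the one with the smallest normalized residual sum of squares over all iterations; elitism guarantees the objective never worsens between generations.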
Energy Technology Data Exchange (ETDEWEB)
Willemink, Martin J.; Takx, Richard A.P.; Jong, Pim A. de; Budde, Ricardo P.J.; Schilham, Arnold M.R.; Leiner, Tim [Utrecht University Medical Center, Department of Radiology, Utrecht (Netherlands); Bleys, Ronald L.A.W. [Utrecht University Medical Center, Department of Anatomy, Utrecht (Netherlands); Das, Marco; Wildberger, Joachim E. [Maastricht University Medical Center, Department of Radiology, Maastricht (Netherlands); Prokop, Mathias [Radboud University Nijmegen Medical Center, Department of Radiology, Nijmegen (Netherlands); Buls, Nico; Mey, Johan de [UZ Brussel, Department of Radiology, Brussels (Belgium)
2014-09-15
To analyse the effects of radiation dose reduction and iterative reconstruction (IR) algorithms on coronary calcium scoring (CCS). Fifteen ex vivo human hearts were examined in an anthropomorphic chest phantom using computed tomography (CT) systems from four vendors and examined at four dose levels using unenhanced prospectively ECG-triggered protocols. Tube voltage was 120 kV and tube current differed between protocols. CT data were reconstructed with filtered back projection (FBP) and reduced dose CT data with IR. CCS was quantified with Agatston scores, calcification mass and calcification volume. Differences were analysed with the Friedman test. Fourteen hearts showed coronary calcifications. Dose reduction with FBP did not significantly change Agatston scores, calcification volumes and calcification masses (P > 0.05). Maximum differences in Agatston scores were 76, 26, 51 and 161 units, in calcification volume 97, 27, 42 and 162 mm{sup 3}, and in calcification mass 23, 23, 20 and 48 mg, respectively. IR resulted in a trend towards lower Agatston scores and calcification volumes with significant differences for one vendor (P < 0.05). Median relative differences between reference FBP and reduced dose IR for Agatston scores remained within 2.0-4.6 %, 1.0-5.3 %, 1.2-7.7 % and 2.6-4.5 %, for calcification volumes within 2.4-3.9 %, 1.0-5.6 %, 1.1-6.4 % and 3.7-4.7 %, for calcification masses within 1.9-4.1 %, 0.9-7.8 %, 2.9-4.7 % and 2.5-3.9 %, respectively. IR resulted in increased, decreased or similar calcification masses. CCS derived from standard FBP acquisitions was not affected by radiation dose reductions up to 80 %. IR resulted in a trend towards lower Agatston scores and calcification volumes. (orig.)
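Agatston scoring, one of the three CCS measures compared above, follows published rules: a 130 HU threshold and a per-lesion density weight determined by the lesion's peak attenuation. A minimal per-slice sketch, with connected-component lesion detection simplified to a given mask and the conventional ~1 mm² minimum-area rule omitted:

```python
import numpy as np

def agatston_weight(peak_hu):
    # Standard Agatston density weighting by the lesion's peak HU
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    return 1  # 130-199 HU

def agatston_slice_score(hu_slice, lesion_mask, pixel_area_mm2):
    # Score one lesion in one slice: area (mm^2) times density weight.
    # Pixels below the 130 HU calcification threshold are excluded.
    lesion_hu = hu_slice[lesion_mask & (hu_slice >= 130)]
    if lesion_hu.size == 0:
        return 0.0
    area = lesion_hu.size * pixel_area_mm2
    return area * agatston_weight(lesion_hu.max())

# Toy 2D slice with one calcification whose peak is 420 HU
hu = np.zeros((8, 8))
hu[2:4, 2:4] = [[150.0, 250.0], [350.0, 420.0]]
mask = hu > 0
score = agatston_slice_score(hu, mask, pixel_area_mm2=0.25)
```

The per-slice, per-lesion scores are summed over all slices to give the total Agatston score; calcification volume and mass are computed from the same segmented voxels.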
Energy Technology Data Exchange (ETDEWEB)
Malhotra, M. [Stanford Univ., CA (United States)
1996-12-31
Finite-element discretizations of time-harmonic acoustic wave problems in exterior domains result in large sparse systems of linear equations with complex symmetric coefficient matrices. In many situations, these matrix problems need to be solved repeatedly for different right-hand sides, but with the same coefficient matrix. For instance, multiple right-hand sides arise in radiation problems due to multiple load cases, and also in scattering problems when multiple angles of incidence of an incoming plane wave need to be considered. In this talk, we discuss the iterative solution of multiple linear systems arising in radiation and scattering problems in structural acoustics by means of a complex symmetric variant of the BL-QMR method. First, we summarize the governing partial differential equations for time-harmonic structural acoustics, the finite-element discretization of these equations, and the resulting complex symmetric matrix problem. Next, we sketch the special version of BL-QMR method that exploits complex symmetry, and we describe the preconditioners we have used in conjunction with BL-QMR. Finally, we report some typical results of our extensive numerical tests to illustrate the typical convergence behavior of BL-QMR method for multiple radiation and scattering problems in structural acoustics, to identify appropriate preconditioners for these problems, and to demonstrate the importance of deflation in block Krylov-subspace methods. Our numerical results show that the multiple systems arising in structural acoustics can be solved very efficiently with the preconditioned BL-QMR method. In fact, for multiple systems with up to 40 and more different right-hand sides we get consistent and significant speed-ups over solving the systems individually.
The new 'BerSANS-PC' software for reduction and treatment of small angle neutron scattering data
International Nuclear Information System (INIS)
Keiderling, U.
2002-01-01
Measurements on small angle neutron scattering (SANS) instruments are typically characterized by a large number of samples, short measurement times for the individual samples, and a frequent change of visiting scientist groups. Besides this, recent advances in instrumentation have led to more frequent measurements of kinetic sequences and a growing interest in analyzing two-dimensional scattering data, these requiring special software tools that enable the users to extract physically relevant information from the scattering data with a minimum of effort. The new 'BerSANS-PC' data-processing software has been developed at the Hahn-Meitner-Institut (HMI) in Berlin, Germany, to meet these requirements and to support an efficiently working guest-user service. Comprising some basic functions of the 'BerSANS' program available at the HMI and other institutes in the past, BerSANS-PC is a completely new development for network-independent use on local PCs with a full-feature graphical interface. (orig.)
International Nuclear Information System (INIS)
Maximov, A.V.; Ourdev, I.G.; Rozmus, W.; Capjack, C.E.; Mounaix, Ph.; Huller, S.; Pesme, D.; Tikhonchuk, V.T.; Divol, L.
2000-01-01
It is shown that plasma-induced angular spreading and spectral broadening of a spatially incoherent laser beam correspond to increased spatial and temporal incoherence of the laser light. The spatial incoherence is characterized by an effective beam f-number, decreasing in space along the direction of light propagation. Plasma-induced beam smoothing can influence laser-plasma interaction physics. In particular, decreasing the correlation time of the propagating laser light may dramatically reduce the levels of backward stimulated Brillouin and Raman scattering inside the plasma. Also, the decrease of the laser beam effective f-number reduces the reflectivity of backward stimulated Brillouin scattering. (authors)
DEFF Research Database (Denmark)
Nielsen, S.S.; Toft, K.N.; Snakenborg, Detlef
2009-01-01
A fully open source software program for automated two-dimensional and one-dimensional data reduction and preliminary analysis of isotropic small-angle X-ray scattering (SAXS) data is presented. The program is freely distributed, following the open-source philosophy, and does not rely on any commercial software packages. BioXTAS RAW is a fully automated program that, via an online feature, reads raw two-dimensional SAXS detector output files and processes and plots data as the data files are created during measurement sessions. The software handles all steps in the data reduction. This includes mask creation, radial averaging, error bar calculation, artifact removal, normalization and q calibration. Further data reduction such as background subtraction and absolute intensity scaling is fast and easy via the graphical user interface. BioXTAS RAW also provides preliminary analysis of one...
A scattering-based over-land rainfall retrieval algorithm for South Korea using GCOM-W1/AMSR-2 data
Kwon, Young-Joo; Shin, Hayan; Ban, Hyunju; Lee, Yang-Won; Park, Kyung-Ae; Cho, Jaeil; Park, No-Wook; Hong, Sungwook
2017-08-01
Heavy summer rainfall is a primary natural disaster affecting lives and properties in the Korean Peninsula. This study presents a satellite-based rainfall rate retrieval algorithm for South Korea combining polarization-corrected temperature (PCT) and scattering index (SI) data from the 36.5 and 89.0 GHz channels of the Advanced Microwave Scanning Radiometer 2 (AMSR-2) onboard the Global Change Observation Mission (GCOM)-W1 satellite. The coefficients for the algorithm were obtained from spatial and temporal collocation data from the AMSR-2 and ground-based automatic weather station rain gauges from 1 July to 30 August during the years 2012-2015. There were time delays of about 25 minutes between the AMSR-2 observations and the ground rain-gauge measurements. A new linearly combined rainfall retrieval algorithm focused on heavy rain for the PCT and SI was validated using ground-based rainfall observations for South Korea from 1 July to 30 August, 2016. The validation showed that the presented PCT and SI methods produced slightly improved results for rainfall > 5 mm h-1 compared to the current AMSR-2 level 2 data. The best bias and root mean square error (RMSE) for the PCT method at AMSR-2 36.5 GHz were 2.09 mm h-1 and 7.29 mm h-1, respectively, while the current official AMSR-2 rainfall rates show a larger bias and RMSE (4.80 mm h-1 and 9.35 mm h-1, respectively). This study provides a scattering-based over-land rainfall retrieval algorithm for South Korea, affected by stationary front rain and typhoons, with the advantages of the previous PCT and SI methods to be applied to a variety of spaceborne passive microwave radiometers.
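The linearly combined retrieval can be sketched as a regression on PCT and SI. The regression coefficients below are hypothetical placeholders, not the values fitted from the 2012-2015 collocation data; the polarization factor b ≈ 0.818 is a value commonly used for PCT near 85-89 GHz:

```python
def pct(tb_v, tb_h, b=0.818):
    # Polarization-corrected temperature: PCT = (1 + b)*Tv - b*Th.
    # b ~ 0.818 is a commonly used polarization factor near 85-89 GHz.
    return (1.0 + b) * tb_v - b * tb_h

def scattering_index(tb_clear, tb_obs):
    # SI: depression of the observed brightness temperature below a
    # clear-sky estimate, driven by ice scattering aloft.
    return tb_clear - tb_obs

# Hypothetical regression coefficients -- NOT the study's fitted values
A0, A_PCT, A_SI = 60.0, -0.22, 0.35

def rain_rate(pct_val, si_val):
    # Linear combination of PCT and SI; negative rates clamped to zero
    return max(0.0, A0 + A_PCT * pct_val + A_SI * si_val)

# Example: cold, strongly scattering scene -> non-zero rain rate
rr = rain_rate(pct(230.0, 210.0), scattering_index(285.0, 230.0))
```

Lower PCT and higher SI both indicate stronger ice scattering, so a heavy-rain-oriented fit gives PCT a negative coefficient and SI a positive one, as assumed here.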
International Nuclear Information System (INIS)
Kim, Milim; Lee, Jeong Min; Son, Hyo Shin; Han, Joon Koo; Choi, Byung Ihn; Yoon, Jeong Hee; Choi, Jin Woo
2014-01-01
To evaluate the impact of the adaptive iterative dose reduction (AIDR) three-dimensional (3D) algorithm in CT on noise reduction and the image quality compared to the filtered back projection (FBP) algorithm and to compare the effectiveness of AIDR 3D on noise reduction according to the body habitus using phantoms with different sizes. Three different-sized phantoms with diameters of 24 cm, 30 cm, and 40 cm were built up using the American College of Radiology CT accreditation phantom and layers of pork belly fat. Each phantom was scanned eight times using different mAs. Images were reconstructed using the FBP and three different strengths of the AIDR 3D. The image noise, the contrast-to-noise ratio (CNR) and the signal-to-noise ratio (SNR) of the phantom were assessed. Two radiologists assessed the image quality of the 4 image sets in consensus. The effectiveness of AIDR 3D on noise reduction relative to FBP was also compared across the phantom sizes. Adaptive iterative dose reduction 3D significantly reduced the image noise compared with FBP and enhanced the SNR and CNR (p < 0.05) with improved image quality (p < 0.05). When a stronger reconstruction algorithm was used, greater increase of SNR and CNR as well as noise reduction was achieved (p < 0.05). The noise reduction effect of AIDR 3D was significantly greater in the 40-cm phantom than in the 24-cm or 30-cm phantoms (p < 0.05). The AIDR 3D algorithm is effective in reducing the image noise as well as improving the image-quality parameters compared with the FBP algorithm, and its effectiveness may increase as the phantom size increases.
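SNR and CNR as assessed above are standard ROI statistics. A minimal sketch, where the synthetic ROI sampling and the choice of background noise as the CNR denominator are assumptions (studies differ on whether to use the background SD or a pooled SD):

```python
import numpy as np

def snr(roi):
    # Signal-to-noise ratio: mean ROI value over its standard deviation
    return roi.mean() / roi.std(ddof=1)

def cnr(roi_obj, roi_bkg):
    # Contrast-to-noise ratio, normalized here by the background noise
    return abs(roi_obj.mean() - roi_bkg.mean()) / roi_bkg.std(ddof=1)

# Synthetic HU samples standing in for pixel values inside two ROIs
rng = np.random.default_rng(0)
obj = rng.normal(100.0, 5.0, 1000)  # object ROI: mean 100 HU, SD 5 HU
bkg = rng.normal(40.0, 5.0, 1000)   # background ROI: mean 40 HU, SD 5 HU
```

With these parameters the expected SNR of the object ROI is about 100/5 = 20 and the expected CNR about (100 − 40)/5 = 12; iterative reconstruction raises both by shrinking the noise SD in the denominators.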
Energy Technology Data Exchange (ETDEWEB)
Kim, Milim; Lee, Jeong Min; Son, Hyo Shin; Han, Joon Koo; Choi, Byung Ihn [College of Medicine, Seoul National University, Seoul (Korea, Republic of); Yoon, Jeong Hee; Choi, Jin Woo [Dept. of Radiology, Seoul National University Hospital, Seoul (Korea, Republic of)
2014-04-15
To evaluate the impact of the adaptive iterative dose reduction (AIDR) three-dimensional (3D) algorithm in CT on noise reduction and the image quality compared to the filtered back projection (FBP) algorithm and to compare the effectiveness of AIDR 3D on noise reduction according to the body habitus using phantoms with different sizes. Three different-sized phantoms with diameters of 24 cm, 30 cm, and 40 cm were built up using the American College of Radiology CT accreditation phantom and layers of pork belly fat. Each phantom was scanned eight times using different mAs. Images were reconstructed using the FBP and three different strengths of the AIDR 3D. The image noise, the contrast-to-noise ratio (CNR) and the signal-to-noise ratio (SNR) of the phantom were assessed. Two radiologists assessed the image quality of the 4 image sets in consensus. The effectiveness of AIDR 3D on noise reduction relative to FBP was also compared across the phantom sizes. Adaptive iterative dose reduction 3D significantly reduced the image noise compared with FBP and enhanced the SNR and CNR (p < 0.05) with improved image quality (p < 0.05). When a stronger reconstruction algorithm was used, greater increase of SNR and CNR as well as noise reduction was achieved (p < 0.05). The noise reduction effect of AIDR 3D was significantly greater in the 40-cm phantom than in the 24-cm or 30-cm phantoms (p < 0.05). The AIDR 3D algorithm is effective in reducing the image noise as well as improving the image-quality parameters compared with the FBP algorithm, and its effectiveness may increase as the phantom size increases.
International Nuclear Information System (INIS)
Liu, Patrick T.; Pavlicek, William P.; Peter, Mary B.; Roberts, Catherine C.; Paden, Robert G.; Spangehl, Mark J.
2009-01-01
Despite recent advances in CT technology, metal orthopedic implants continue to cause significant artifacts on many CT exams, often obscuring diagnostic information. We performed this prospective study to evaluate the effectiveness of an experimental metal artifact reduction (MAR) image reconstruction program for CT. We examined image quality on CT exams performed in patients with hip arthroplasties as well as other types of implanted metal orthopedic devices. The exam raw data were reconstructed using two different methods, the standard filtered backprojection (FBP) program and the MAR program. Images were evaluated for quality of the metal-cement-bone interfaces, trabeculae ≤1 cm from the metal, trabeculae 5 cm away from the metal, streak artifact, and overall soft tissue detail. The Wilcoxon Rank Sum test was used to compare the image scores from the large and small prostheses. Interobserver agreement was calculated. When all patients were grouped together, the MAR images showed mild to moderate improvement over the FBP images. However, when the cases were divided by implant size, the MAR images consistently received higher image quality scores than the FBP images for large metal implants (total hip prostheses). For small metal implants (screws, plates, staples), conversely, the MAR images received lower image quality scores than the FBP images due to blurring artifact. The difference of image scores for the large and small implants was significant (p=0.002). Interobserver agreement was found to be high for all measures of image quality (k>0.9). The experimental MAR reconstruction algorithm significantly improved CT image quality for patients with large metal implants. However, the MAR algorithm introduced blurring artifact that reduced image quality with small metal implants. (orig.)
Liebi, Marianne; Georgiadis, Marios; Kohlbrecher, Joachim; Holler, Mirko; Raabe, Jörg; Usov, Ivan; Menzel, Andreas; Schneider, Philipp; Bunk, Oliver; Guizar-Sicairos, Manuel
2018-01-01
Small-angle X-ray scattering tensor tomography, which allows reconstruction of the local three-dimensional reciprocal-space map within a three-dimensional sample as introduced by Liebi et al. [Nature (2015), 527, 349-352], is described in more detail with regard to the mathematical framework and the optimization algorithm. For the case of trabecular bone samples from vertebrae it is shown that the model of the three-dimensional reciprocal-space map using spherical harmonics can adequately describe the measured data. The method enables the determination of nanostructure orientation and degree of orientation as demonstrated previously in a single momentum transfer q range. This article presents a reconstruction of the complete reciprocal-space map for the case of bone over extended ranges of q. In addition, it is shown that uniform angular sampling and advanced regularization strategies help to reduce the amount of data required.
Energy Technology Data Exchange (ETDEWEB)
Niemkiewicz, J; Palmiotti, A; Miner, M; Stunja, L; Bergene, J [Lehigh Valley Health Network, Allentown, PA (United States)
2014-06-01
Purpose: Metal in patients creates streak artifacts in CT images. When used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affects radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine if the MAR algorithm altered the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. Also, CT images of patients with internal metal objects using standard and MAR reconstruction algorithms were compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. Also, the MAR reconstruction algorithm showed significant improvement in maintaining HUs of non-metallic regions in the images taken of the phantom with metal. HU Gamma analysis (2%, 2mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation
International Nuclear Information System (INIS)
Niemkiewicz, J; Palmiotti, A; Miner, M; Stunja, L; Bergene, J
2014-01-01
Purpose: Metal in patients creates streak artifacts in CT images. When used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affects radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine if the MAR algorithm altered the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. Also, CT images of patients with internal metal objects using standard and MAR reconstruction algorithms were compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. Also, the MAR reconstruction algorithm showed significant improvement in maintaining HUs of non-metallic regions in the images taken of the phantom with metal. HU Gamma analysis (2%, 2mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation
Dlouhy, Brian J; Dahdaleh, Nader S; Menezes, Arnold H
2015-04-01
The craniovertebral junction (CVJ), or the craniocervical junction (CCJ) as it is otherwise known, houses the crossroads of the CNS and is composed of the occipital bone that surrounds the foramen magnum, the atlas vertebrae, the axis vertebrae, and their associated ligaments and musculature. The musculoskeletal organization of the CVJ is unique and complex, resulting in a wide range of congenital, developmental, and acquired pathology. The refinements of the transoral approach to the CVJ by the senior author (A.H.M.) in the late 1970s revolutionized the treatment of CVJ pathology. At the same time, a physiological approach to CVJ management was adopted at the University of Iowa Hospitals and Clinics in 1977 based on the stability and motion dynamics of the CVJ and the site of encroachment, incorporating the transoral approach for irreducible ventral CVJ pathology. Since then, approaches and techniques to treat ventral CVJ lesions have evolved. In the last 40 years at University of Iowa Hospitals and Clinics, multiple approaches to the CVJ have evolved and a better understanding of CVJ pathology has been established. In addition, new reduction strategies that have diminished the need to perform ventral decompressive approaches have been developed and implemented. In this era of surgical subspecialization, to properly treat complex CVJ pathology, the CVJ specialist must be trained in skull base transoral and endoscopic endonasal approaches, pediatric and adult CVJ spine surgery, and must understand and be able to treat the complex CSF dynamics present in CVJ pathology to provide the appropriate, optimal, and tailored treatment strategy for each individual patient, both child and adult. This is a comprehensive review of the history and evolution of the transoral approaches, extended transoral approaches, endoscopic-assisted transoral approaches, endoscopic endonasal approaches, and CVJ reduction strategies. Incorporating these advancements, the authors update the
Energy Technology Data Exchange (ETDEWEB)
Hu, Yi; Pan, Shinong; Zhao, Xudong; Guo, Wenli; He, Ming; Guo, Qiyong [Shengjing Hospital of China Medical University, Shenyang (China)
2017-06-15
To evaluate the orthopedic metal artifact reduction algorithm (O-MAR) for CT metal artifact reduction at different tube voltages, identify an appropriate low tube voltage for clinical practice, and investigate its clinical application. The institutional ethical committee approved all the animal procedures. A stainless-steel plate and four screws were implanted into the femurs of three Japanese white rabbits. Preoperative CT was performed at 120 kVp without O-MAR reconstruction, and postoperative CT was performed at 80–140 kVp with O-MAR. Muscular CT attenuation, artifact index (AI) and signal-to-noise ratio (SNR) were compared between preoperative and postoperative images (unpaired t test), between paired O-MAR and non-O-MAR images (paired Student t test) and among different kVp settings (repeated measures ANOVA). Artifacts' severity, muscular homogeneity, visibility of inter-muscular space and definition of bony structures were subjectively evaluated and compared (Wilcoxon rank-sum test). In the clinical study, 20 patients underwent CT scanning at low kVp with O-MAR, with informed consent. The diagnostic satisfaction of clinical images was subjectively assessed. Animal experiments showed that the use of O-MAR resulted in accurate CT attenuation, lower AI, better SNR, and higher subjective scores (p < 0.010) at all tube voltages. O-MAR images at 100 kVp had almost the same AI and SNR as non-O-MAR images at 140 kVp. All O-MAR images were scored ≥ 3. In addition, 95% of clinical CT images performed at 100 kVp were considered satisfactory. O-MAR can effectively reduce orthopedic metal artifacts at different tube voltages, and facilitates low-tube-voltage CT for patients with orthopedic metal implants.
International Nuclear Information System (INIS)
Hu, Yi; Pan, Shinong; Zhao, Xudong; Guo, Wenli; He, Ming; Guo, Qiyong
2017-01-01
To evaluate the orthopedic metal artifact reduction algorithm (O-MAR) for CT metal artifact reduction at different tube voltages, identify an appropriate low tube voltage for clinical practice, and investigate its clinical application. The institutional ethical committee approved all the animal procedures. A stainless-steel plate and four screws were implanted into the femurs of three Japanese white rabbits. Preoperative CT was performed at 120 kVp without O-MAR reconstruction, and postoperative CT was performed at 80–140 kVp with O-MAR. Muscular CT attenuation, artifact index (AI) and signal-to-noise ratio (SNR) were compared between preoperative and postoperative images (unpaired t test), between paired O-MAR and non-O-MAR images (paired Student t test) and among different kVp settings (repeated measures ANOVA). Artifacts' severity, muscular homogeneity, visibility of inter-muscular space and definition of bony structures were subjectively evaluated and compared (Wilcoxon rank-sum test). In the clinical study, 20 patients underwent CT scanning at low kVp with O-MAR, with informed consent. The diagnostic satisfaction of clinical images was subjectively assessed. Animal experiments showed that the use of O-MAR resulted in accurate CT attenuation, lower AI, better SNR, and higher subjective scores (p < 0.010) at all tube voltages. O-MAR images at 100 kVp had almost the same AI and SNR as non-O-MAR images at 140 kVp. All O-MAR images were scored ≥ 3. In addition, 95% of clinical CT images performed at 100 kVp were considered satisfactory. O-MAR can effectively reduce orthopedic metal artifacts at different tube voltages, and facilitates low-tube-voltage CT for patients with orthopedic metal implants.
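The artifact index (AI) compared across tube voltages above is commonly computed from ROI standard deviations. A sketch assuming the usual definition AI = sqrt(SD_artifact² − SD_reference²); the abstract does not spell out its formula, so this definition is an assumption:

```python
import math

def artifact_index(sd_artifact_roi, sd_reference_roi):
    # AI quantifies the excess noise in an artifact-affected ROI over a
    # reference ROI free of artifacts; clamped to 0 when there is no
    # excess noise (artifact ROI no noisier than the reference).
    excess = sd_artifact_roi ** 2 - sd_reference_roi ** 2
    return math.sqrt(excess) if excess > 0 else 0.0

# Example: streak-affected muscle ROI (SD 13 HU) vs clean muscle (SD 5 HU)
ai = artifact_index(13.0, 5.0)
```

A lower AI after O-MAR reconstruction therefore means the metal-induced noise component has been suppressed, independently of the underlying quantum noise level.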
International Nuclear Information System (INIS)
Brenner, David J; Elliston, Carl D; Hall, Eric J; Paganetti, Harald
2009-01-01
Proton radiotherapy represents a potential major advance in cancer therapy. Most current proton beams are spread out to cover the tumor using passive scattering and collimation, resulting in an extra whole-body high-energy neutron dose, primarily from proton interactions with the final collimator. There is considerable uncertainty as to the carcinogenic potential of low doses of high-energy neutrons, and thus we investigate whether this neutron dose can be significantly reduced without major modifications to passively scattered proton beam lines. Our goal is to optimize the design features of a patient-specific collimator or pre-collimator/collimator assembly. There are a number of often contradictory design features, in terms of geometry and material, involved in an optimal design. For example, plastic or hybrid plastic/metal collimators have a number of advantages. We quantify these design issues, and investigate the practical balances that can be achieved to significantly reduce the neutron dose without major alterations to the beamline design or function. Given that the majority of proton therapy treatments, at least for the next few years, will use passive scattering techniques, reducing the associated neutron-related risks by simple modifications of the collimator assembly design is a desirable goal.
International Nuclear Information System (INIS)
McFadden, S L; Hughes, C M; Winder, Robert J; Mooney, R B
2013-01-01
The purpose of this work is to investigate removal of the anti-scatter grid and alteration of the frame rate in paediatric interventional cardiology (IC) and assess the impact on radiation dose and image quality. Phantom based experimental studies were performed in a dedicated cardiac catheterisation suite to investigate variations in radiation dose and image quality, with various changes in imaging parameters. Phantom based experimental studies employing these variations in technique identified that radiation dose reductions of 28%–49% can be made to the patient with minimal loss of image quality in smaller sized patients. At present, there is no standard technique for carrying out paediatric IC in the UK or Ireland, resulting in the potential for a wide variation in radiation dose. Dose reductions to patients can be achieved with slight alterations to the imaging equipment with minimal compromise to the image quality. These simple modifications can be easily implemented in clinical practice in IC centres. (paper)
Energy Technology Data Exchange (ETDEWEB)
Weir, V [Baylor Scott and White Healthcare System, Dallas, TX (United States); Zhang, J [University of Kentucky, Lexington, KY (United States)
2016-06-15
Purpose: Iterative reconstruction (IR) algorithms have been adopted by medical centers in the past several years. IR has the potential to substantially reduce patient dose while maintaining or improving image quality. This study characterizes dose reductions in clinical settings for CT examinations using IR. Methods: We retrospectively analyzed dose information from patients who underwent abdomen/pelvis CT examinations with and without contrast media in multiple locations of our healthcare system. A total of 743 patients scanned with ASIR on 64-slice GE LightSpeed VCTs at three sites, and 30 patients scanned with SAFIRE on a Siemens 128-slice Definition Flash at one site, were retrieved. For comparison, patient data (n=291) from a GE scanner and patient data (n=61) from two Siemens scanners where filtered back-projection (FBP) was used were collected retrospectively. 30% and 10% ASIR, and SAFIRE Level 2, were used. CTDIvol, dose-length product (DLP), weight and height for all patients were recorded. Body mass index (BMI) was calculated accordingly. To convert CTDIvol to SSDE, AP and lateral dimensions at the mid-liver level were measured for each patient. Results: Compared with FBP, 30% ASIR reduced dose by 44.1% (SSDE: 12.19 mGy vs. 21.83 mGy), while 10% ASIR reduced dose by 20.6% (SSDE: 17.32 mGy vs. 21.83 mGy). Use of SAFIRE reduced dose by 61.4% (SSDE: 8.77 mGy vs. 22.7 mGy). The geometric mean for patients scanned with ASIR was larger than for patients scanned with FBP (297.48 mm vs. 284.76 mm). The same trend was observed for the Siemens scanner where SAFIRE was used (geometric mean: 316 mm with SAFIRE vs. 239 mm with FBP). Patient size differences suggest that further dose reduction is possible. Conclusion: Our data confirmed that in clinical practice IR can significantly reduce dose to patients who undergo CT examinations, while meeting diagnostic requirements for image quality.
International Nuclear Information System (INIS)
Stimpson, Shane; Collins, Benjamin; Kochunas, Brendan
2017-01-01
The MPACT code, being developed collaboratively by the University of Michigan and Oak Ridge National Laboratory, is the primary deterministic neutron transport solver deployed within the Virtual Environment for Reactor Applications (VERA) as part of the Consortium for Advanced Simulation of Light Water Reactors (CASL). In many applications of the MPACT code, transport-corrected scattering has proven to be an obstacle in terms of stability, and considerable effort has been made to resolve the convergence issues that arise from it. Most of the convergence problems seem related to the transport-corrected cross sections, particularly when used in the 2D method of characteristics (MOC) solver, which is the focus of this work. In this paper, the stability and performance of the 2D MOC solver in MPACT are evaluated for two iteration schemes: Gauss-Seidel and Jacobi. With the Gauss-Seidel approach, as the MOC solver loops over groups, it uses the flux solution from the previous group to construct the inscatter source for the next group. Alternatively, the Jacobi approach uses only the fluxes from the previous outer iteration to determine the inscatter source for each group. Consequently, for the Jacobi iteration, the loop over groups can be moved from the outermost loop, as is the case with the Gauss-Seidel sweeper, to the innermost loop, allowing for a substantial increase in efficiency by minimizing the overhead of retrieving segment, region, and surface index information from the ray tracing data. Several test problems are assessed: (1) Babcock & Wilcox 1810 Core I, (2) Dimple S01A-Sq, (3) VERA Progression Problem 5a, and (4) VERA Problem 2a. The Jacobi iteration exhibits better stability than Gauss-Seidel, allowing converged solutions to be obtained over a much wider range of iteration control parameters. Additionally, the MOC solve time with the Jacobi approach is roughly 2.0-2.5× faster per sweep. While the performance and stability of the Jacobi
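The difference between the two inscatter iteration schemes can be illustrated on a toy infinite-medium multigroup fixed-source problem, where the MOC sweep collapses to a scalar flux update per group; the cross sections below are invented for illustration and have nothing to do with the MPACT test problems.

```python
import numpy as np

def outer_iteration(sig_t, sig_s, q, scheme="gauss-seidel", n_outer=200):
    """Toy infinite-medium multigroup source iteration.

    phi_g = (q_g + inscatter_g) / sig_t_g, where the inscatter source uses
    either fluxes updated within the current outer (Gauss-Seidel over groups)
    or fluxes frozen from the previous outer iteration (Jacobi).
    """
    G = len(sig_t)
    phi = np.zeros(G)
    for _ in range(n_outer):
        phi_old = phi.copy()
        # Gauss-Seidel reads the live array; Jacobi reads the frozen copy.
        src_flux = phi if scheme == "gauss-seidel" else phi_old
        for g in range(G):
            inscatter = sum(sig_s[g, gp] * src_flux[gp] for gp in range(G))
            phi[g] = (q[g] + inscatter) / sig_t[g]
    return phi
```

Both schemes converge to the same fixed point; the Jacobi form simply decouples the group loop from the outer update, which is what permits the loop reordering described in the abstract.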
International Nuclear Information System (INIS)
Slopsema, R. L.; Flampouri, S.; Yeung, D.; Li, Z.; Lin, L.; McDonough, J. E.; Palta, J.
2014-01-01
Purpose: The purpose of this investigation is to determine if a single set of beam data, described by a minimal set of equations and fitting variables, can be used to commission different installations of a proton double-scattering system in a commercial pencil-beam dose calculation algorithm. Methods: The beam model parameters required to commission the pencil-beam dose calculation algorithm (virtual and effective SAD, effective source size, and pristine-peak energy spread) are determined for a commercial double-scattering system. These parameters are measured in a first room and parameterized as a function of proton energy and nozzle settings by fitting four analytical equations to the measured data. The combination of these equations and fitting values constitutes the golden beam data (GBD). To determine the variation in dose delivery between installations, the same dosimetric properties are measured in two additional rooms at the same facility, as well as in a single room at another facility. The difference between the room-specific measurements and the GBD is evaluated against tolerances that guarantee the 3D dose distribution in each of the rooms matches the GBD-based dose distribution within clinically reasonable limits. The pencil-beam treatment-planning algorithm is commissioned with the GBD. The three-dimensional dose distribution in water is evaluated in the four treatment rooms and compared to the treatment-planning calculated dose distribution. Results: The virtual and effective SAD measurements fall between 226 and 257 cm. The effective source size varies between 2.4 and 6.2 cm for the large-field options, and 1.0 and 2.0 cm for the small-field options. The pristine-peak energy spread decreases from 1.05% at the lowest range to 0.6% at the highest. The virtual SAD as well as the effective source size can be accurately described by a linear relationship as a function of the inverse of the residual energy. An additional linear correction term as a function of
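The kind of parameterization described, e.g. virtual SAD linear in the inverse of the residual energy, amounts to a simple least-squares fit; the numbers below are synthetic stand-ins, not the actual golden beam data.

```python
import numpy as np

# Hypothetical measured virtual SAD (cm) at several residual energies (MeV);
# the golden-beam-data style model is SAD(E) = a + b / E.
energy = np.array([100.0, 140.0, 180.0, 220.0])
sad = np.array([250.0, 244.0, 241.0, 239.0])

# Linear least squares in the variable x = 1/E.
x = 1.0 / energy
b, a = np.polyfit(x, sad, 1)  # polyfit returns [slope, intercept]

def sad_model(e):
    """Fitted golden-beam-data style parameterization."""
    return a + b / e
```

Once such fits are in hand, commissioning a new room reduces to checking its measurements against the fitted curves within tolerance, which is the workflow the abstract describes.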
International Nuclear Information System (INIS)
Liao, Gwo-Ching
2011-01-01
An optimization algorithm is proposed in this paper to solve the economic dispatch problem that includes a wind farm, using the Chaotic Quantum Genetic Algorithm (CQGA). In addition to detailed models of economic dispatch and their associated constraints, the effect of wind power is also included. The chaotic quantum genetic algorithm is used to solve the economic dispatch problem and is evaluated with real scenarios in simulation tests. After comparing the proposed algorithm with several other algorithms commonly used to solve optimization problems, the results show that the proposed algorithm is able to find the optimal solution quickly and accurately (i.e., to obtain the minimum cost for power generation in the shortest time). Finally, the impact on total cost savings of adding (or not adding) wind power generation is also discussed. The implementation results show that the proposed algorithm is economical, fast, and practical, and valuable for further research. -- Research highlights: → The Quantum Genetic Algorithm can effectively improve global search ability. → It can achieve the real objective of the global optimal solutions. → The CPU computation time is less than that of the other algorithms adopted in this paper.
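A minimal sketch of the idea for a hypothetical two-unit dispatch follows, with logistic-map (chaotic) population seeding standing in for the paper's chaotic quantum encoding; the cost coefficients, unit limits, and demand are invented for illustration.

```python
import random

# Two-unit economic dispatch: minimize quadratic fuel costs subject to
# meeting demand. All numbers below are hypothetical.
COST = [(0.004, 5.3, 500.0), (0.006, 5.5, 400.0)]  # (c, b, a) per unit
DEMAND = 600.0
LIMITS = [(100.0, 500.0), (100.0, 500.0)]

def cost(p):
    return sum(c * x * x + b * x + a for (c, b, a), x in zip(COST, p))

def repair(p1):
    """Fix unit 2's output so total generation meets demand (clamped)."""
    lo, hi = LIMITS[1]
    return [p1, min(max(DEMAND - p1, lo), hi)]

def chaotic_seed(n, x=0.31):
    """Logistic map x -> 4x(1-x): a simple 'chaotic' population initializer."""
    out = []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        lo, hi = LIMITS[0]
        out.append(lo + (hi - lo) * x)
    return out

def dispatch(generations=200, pop_size=20):
    pop = [repair(p1) for p1 in chaotic_seed(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]             # selection
        children = []
        for _ in range(pop_size - len(elite)):
            u, v = random.sample(elite, 2)
            p1 = 0.5 * (u[0] + v[0]) + random.gauss(0.0, 5.0)  # crossover + mutation
            lo, hi = LIMITS[0]
            children.append(repair(min(max(p1, lo), hi)))
        pop = elite + children
    return min(pop, key=cost)
```

For this convex two-unit case the equal-incremental-cost optimum is at about 370/230 MW, so the sketch can be checked against an analytic answer; the real CQGA adds quantum-bit encoding and chaotic rotation updates on top of this skeleton.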
Zhang, Yuan; Yang, Bin; Liu, Xiaohui; Wang, Cuizhen
2017-05-01
Fast and accurate estimation of rice yield plays a role in forecasting rice productivity and ensuring regional or national food security. Microwave synthetic aperture radar (SAR) data have proven to have great potential for rice monitoring and parameter retrieval. In this study, a rice canopy scattering model (RCSM) was revised and then applied to simulate the backscatter of the rice canopy. The combination of RCSM and a genetic algorithm (GA) was proposed for retrieving two important rice parameters related to grain yield, ear length and ear number density, from C-band, dual-polarization (HH and HV) Radarsat-2 SAR data. The stability of the GA inversion results was also evaluated under various parameter configurations. Results show that the RCSM can effectively simulate backscattering coefficients of the rice canopy in HH and HV modes with an error of <1 dB. Reasonable selection of the GA's parameters is essential for the stability and efficiency of rice parameter retrieval. The two rice parameters are retrieved by the proposed RCSM-GA technique with good accuracy: ear length is estimated with an error of <1.5 cm, and ear number density with an error of <23 #/m2. Rice grain yields are effectively estimated and mapped from the retrieved ear length and number density via a simple yield regression equation. This study further illustrates the capability of C-band Radarsat-2 SAR data for retrieval of rice ear parameters and the practicability of radar remote sensing for operational yield estimation.
International Nuclear Information System (INIS)
Masson-Laborde, P. E.; Depierreux, S.; Loiseau, P.; Hüller, S.; Pesme, D.; Labaune, Ch.; Bandulet, H.
2014-01-01
The origin of the low level of stimulated Brillouin scattering (SBS) observed in laser-plasma experiments carried out with a single laser speckle is investigated by means of three-dimensional simulations and modeling in the limit when the laser beam power P is well above the critical power for ponderomotive self-focusing. We find that the order of magnitude of the time-averaged reflectivities, together with the temporal and spatial SBS localization observed in our simulations, is correctly reproduced by our modeling. It is observed that, after a short transient stage, SBS reaches a significant level only (i) as long as the incident laser pulse is increasing in amplitude and (ii) in a single self-focused speckle located in the low-density front part of the plasma. In order to describe self-focusing in an inhomogeneous expanding plasma, we have derived a new Lagrangian density describing this process. Then, using a variational approach, our model reproduces the position and the peak intensity of the self-focusing hot spot in the front part of the plasma density profile as well as the local density depletion in this hot spot. The knowledge of these parameters then makes it possible to estimate the spatial amplification of SBS as a function of the laser beam power and consequently to explain the experimentally observed SBS reflectivity, which is considerably reduced with respect to standard theory in the regime of large laser beam power.
Hopkins, Jesse Bennett; Gillilan, Richard E; Skou, Soren
2017-10-01
BioXTAS RAW is a graphical-user-interface-based free open-source Python program for reduction and analysis of small-angle X-ray solution scattering (SAXS) data. The software is designed for biological SAXS data and enables creation and plotting of one-dimensional scattering profiles from two-dimensional detector images, standard data operations such as averaging and subtraction and analysis of radius of gyration and molecular weight, and advanced analysis such as calculation of inverse Fourier transforms and envelopes. It also allows easy processing of inline size-exclusion chromatography coupled SAXS data and data deconvolution using the evolving factor analysis method. It provides an alternative to closed-source programs such as Primus and ScÅtter for primary data analysis. Because it can calibrate, mask and integrate images it also provides an alternative to synchrotron beamline pipelines that scientists can install on their own computers and use both at home and at the beamline.
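The radius-of-gyration analysis such software performs is typically a Guinier fit, ln I(q) = ln I(0) - (Rg^2/3) q^2 over the low-q region; a self-contained sketch on synthetic data (not BioXTAS RAW's actual implementation):

```python
import numpy as np

def guinier_rg(q, intensity):
    """Estimate radius of gyration from the low-q Guinier region.

    ln I(q) = ln I(0) - (Rg^2 / 3) * q^2, fit by linear least squares
    of ln(I) against q^2.
    """
    slope, intercept = np.polyfit(q * q, np.log(intensity), 1)
    rg = np.sqrt(-3.0 * slope)
    i0 = np.exp(intercept)
    return rg, i0

# Synthetic noiseless profile with Rg = 25 Angstrom, restricted to the
# customary validity range q * Rg < 1.3.
rg_true = 25.0
q = np.linspace(0.005, 1.3 / rg_true, 40)
intensity = 100.0 * np.exp(-(q * rg_true) ** 2 / 3.0)
rg, i0 = guinier_rg(q, intensity)
```

Real data additionally require choosing the fit window and propagating errors, which is part of what a dedicated package automates.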
Zorila, Alexandru; Stratan, Aurel; Nemes, George
2018-01-01
We compare the ISO-recommended (standard) data-reduction algorithm used to determine the surface laser-induced damage threshold of optical materials by the S-on-1 test with two newly suggested algorithms, both named "cumulative" methods, a regular one and a limit-case one, intended to perform in some respects better than the standard one. To avoid additional errors due to real experiments, a simulated test is performed, named the reverse approach. This approach simulates real damage experiments by generating artificial test data of damaged and non-damaged sites, based on an assumed, known damage threshold fluence of the target and on a given probability distribution function to induce the damage. In this work, a database of 12 sets of test data containing both damaged and non-damaged sites was generated by using four different reverse techniques and by assuming three specific damage probability distribution functions. The same value for the threshold fluence was assumed, and a Gaussian fluence distribution on each irradiated site was considered, as usual for the S-on-1 test. Each set of test data was independently processed by the standard and by the two cumulative data-reduction algorithms, the resulting fitted probability distributions were compared with the initially assumed probability distribution functions, and the quantities used to compare these algorithms were determined. These quantities characterize the accuracy and the precision in determining the damage threshold and the goodness of fit of the damage probability curves. The results indicate that the accuracy in determining the absolute damage threshold is best for the ISO-recommended method, the precision is best for the limit-case of the cumulative method, and the goodness-of-fit estimator (adjusted R-squared) is almost the same for all three algorithms.
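The reverse approach and a standard linear-fit data reduction can be sketched as follows; the logistic damage-probability function, the assumed threshold, and all numbers are illustrative choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

F_TH = 5.0  # assumed "true" damage threshold fluence (J/cm^2), illustrative

def simulate_sites(fluences, width=0.8):
    """Reverse-approach site generation: each irradiated site is damaged
    with a probability rising smoothly around the assumed threshold
    (a logistic curve here, purely as an example distribution)."""
    p = 1.0 / (1.0 + np.exp(-(fluences - F_TH) / width))
    return rng.random(fluences.size) < p

# Irradiate many sites per fluence level and tabulate damage frequency.
levels = np.linspace(2.0, 8.0, 13)
prob = np.array([simulate_sites(np.full(500, f)).mean() for f in levels])

# Standard-style reduction: linear fit through the partial-damage region;
# the reported threshold is where the fitted line crosses zero probability.
mask = (prob > 0.05) & (prob < 0.95)
slope, intercept = np.polyfit(levels[mask], prob[mask], 1)
f_threshold = -intercept / slope
```

Because a smooth probability curve has no sharp onset, the extrapolated zero-damage fluence falls below the 50% point, which is exactly the kind of estimator behavior such comparisons quantify.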
DEFF Research Database (Denmark)
Slot Thing, Rune; Bernchou, Uffe; Mainegra-Hing, Ernesto
2013-01-01
Purpose. Cone beam computed tomography (CBCT) image quality is limited by scattered photons. Monte Carlo (MC) simulations provide the ability to predict the patient-specific scatter contamination in clinical CBCT imaging. Lengthy simulations prevent MC-based scatter correction from...
Benecke, Gunthard; Wagermaier, Wolfgang; Li, Chenghao; Schwartzkopf, Matthias; Flucke, Gero; Hoerth, Rebecca; Zizak, Ivo; Burghammer, Manfred; Metwalli, Ezzeldin; Müller-Buschbaum, Peter; Trebbin, Martin; Förster, Stephan; Paris, Oskar; Roth, Stephan V; Fratzl, Peter
2014-10-01
X-ray scattering experiments at synchrotron sources are characterized by large and constantly increasing amounts of data. The great number of files generated during a synchrotron experiment is often a limiting factor in the analysis of the data, since appropriate software is rarely available to perform fast and tailored data processing. Furthermore, it is often necessary to perform online data reduction and analysis during the experiment in order to interactively optimize experimental design. This article presents an open-source software package developed to process large amounts of data from synchrotron scattering experiments. These data reduction processes involve calibration and correction of raw data, one- or two-dimensional integration, as well as fitting and further analysis of the data, including the extraction of certain parameters. The software, DPDAK (directly programmable data analysis kit), is based on a plug-in structure and allows individual extension in accordance with the requirements of the user. The article demonstrates the use of DPDAK for on- and offline analysis of scanning small-angle X-ray scattering (SAXS) data on biological samples and microfluidic systems, as well as for a comprehensive analysis of grazing-incidence SAXS data. In addition to a comparison with existing software packages, the structure of DPDAK and the possibilities and limitations are discussed.
Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia
2013-02-01
The objective of this study was to reduce metal-induced streak artifacts in oral and maxillofacial X-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization (ML-EM) algorithm, the ordered subset-expectation maximization (OS-EM) algorithm was examined. Small region of interest (ROI) settings and reverse processing were also applied to improve performance. Both algorithms reduced artifacts while only slightly decreasing gray levels. The OS-EM algorithm and small ROI reduced the processing duration without apparent detriment. Sequential and reverse processing did not show apparent effects. The two alternative iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved performance. Copyright © 2012 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
K. Lenin
2014-04-01
This paper presents a hybrid biogeography algorithm for solving the multi-objective reactive power dispatch problem in a power system. Real power loss minimization and maximization of the voltage stability margin are taken as the objectives. Artificial bee colony optimization (ABC) is a quick and forceful algorithm for global optimization. Biogeography-based optimization (BBO) is a new biogeography-inspired algorithm that mainly utilizes the biogeography-based migration operator to share information among solutions. In this work, a hybrid algorithm combining BBO and ABC, named HBBABC (Hybrid Biogeography-Based Artificial Bee Colony Optimization), is proposed for general numerical optimization problems. HBBABC merges the searching behavior of ABC with that of BBO: the two algorithms have different search tendencies, ABC having good exploration and BBO good exploitation. HBBABC is used to solve the reactive power dispatch problem, and the proposed technique has been tested on the standard IEEE 30-bus test system.
International Nuclear Information System (INIS)
Ueki, T.; Larsen, E.W.
1998-01-01
The authors show that Monte Carlo simulations of neutral particle transport in planar-geometry anisotropically scattering media, using the exponential transform with angular biasing as a variance reduction device, are governed by a new Boltzmann Monte Carlo (BMC) equation, which includes particle weight as an extra independent variable. The weight moments of the solution of the BMC equation determine the moments of the score and the mean number of collisions per history in the nonanalog Monte Carlo simulations. Therefore, the solution of the BMC equation predicts the variance of the score and the figure of merit in the simulation. Also, by (1) using an angular biasing function that is closely related to the "asymptotic" solution of the linear Boltzmann equation and (2) requiring isotropic weight changes at collisions, they derive a new angular biasing scheme. Using the BMC equation, they propose a universal "safe" upper limit of the transform parameter, valid for any type of exponential transform. In numerical calculations, they demonstrate that the behavior of the Monte Carlo simulations and the performance predicted by deterministically solving the BMC equation agree well, and that the new angular biasing scheme is always advantageous.
Bai, Chen; Han, Dongjuan
2018-04-01
MUSIC is widely used for DOA estimation. The triangular grid is a common array arrangement, but the calculation of its steering vector is more complicated than that of a rectangular array. In this paper, a quaternion algorithm is used to reduce the dimension of the vectors and simplify the calculation.
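For reference, the classic (non-quaternion) MUSIC pseudo-spectrum for a uniform linear array can be sketched as follows; the quaternion dimension-reduction step proposed in the abstract is not reproduced here.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """Classic MUSIC pseudo-spectrum for a uniform linear array.

    X: (n_sensors, n_snapshots) complex baseband samples.
    d: element spacing in wavelengths.
    Peaks of the returned spectrum indicate directions of arrival.
    """
    R = X @ X.conj().T / X.shape[1]              # sample covariance
    _, vecs = np.linalg.eigh(R)                  # eigenvalues ascending
    En = vecs[:, : X.shape[0] - n_sources]       # noise subspace
    n = np.arange(X.shape[0])
    spec = []
    for th in np.deg2rad(angles):
        a = np.exp(2j * np.pi * d * n * np.sin(th))  # steering vector
        # Steering vectors of true sources are (near-)orthogonal to the
        # noise subspace, so the denominator collapses at source angles.
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return angles, np.asarray(spec)
```

A triangular array changes only the steering-vector expression; the subspace machinery is identical, which is where the quaternion reformulation aims to cut cost.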
Sofue, Keitaro; Yoshikawa, Takeshi; Ohno, Yoshiharu; Negi, Noriyuki; Inokawa, Hiroyasu; Sugihara, Naoki; Sugimura, Kazuro
2017-07-01
To determine the value of a raw data-based metal artifact reduction (SEMAR) algorithm for image quality improvement in abdominal CT for patients with small metal implants. Fifty-eight patients with small metal implants (3-15 mm in size) who underwent treatment for hepatocellular carcinoma were imaged with CT. CT data were reconstructed by filtered back projection with and without the SEMAR algorithm in axial and coronal planes. To evaluate metal artefact reduction, mean CT number (HU and SD) and artefact index (AI) values within the liver were calculated. Two readers independently evaluated image quality of the liver and pancreas and visualization of vasculature using a 5-point visual score. HU and AI values and image quality on images with and without SEMAR were compared using the paired Student's t-test and Wilcoxon signed rank test. Interobserver agreement was evaluated using the linear-weighted κ test. Mean HU and AI on images with SEMAR were significantly lower than those without SEMAR (P < 0.0001). Liver and pancreas image quality and visualization of vasculature were significantly improved on CT with SEMAR (P < 0.0001) with substantial or almost perfect agreement (0.62 ≤ κ ≤ 0.83). SEMAR can improve image quality in abdominal CT in patients with small metal implants by reducing metallic artefacts. • The SEMAR algorithm significantly reduces metallic artefacts from small implants in abdominal CT. • SEMAR can improve image quality of the liver in dynamic CECT. • Confident visualization of hepatic vascular anatomy can also be improved by SEMAR.
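The artefact index used in such studies is commonly defined as the excess image noise attributable to the artefact; assuming that convention (the record does not spell it out), a one-line helper:

```python
import math

def artefact_index(sd_roi: float, sd_reference: float) -> float:
    """Artefact index as commonly defined in CT metal-artefact studies:
    the noise component attributable to the artefact,
    AI = sqrt(SD_roi^2 - SD_ref^2), clamped at zero when the ROI is
    no noisier than the reference."""
    return math.sqrt(max(sd_roi ** 2 - sd_reference ** 2, 0.0))
```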
International Nuclear Information System (INIS)
Zare Hosseinzadeh, Ali; Ghodrati Amiri, Gholamreza; Bagheri, Abdollah; Koo, Ki-Young
2014-01-01
In this paper, a novel and effective damage diagnosis algorithm is proposed to localize and quantify structural damage using incomplete modal data, considering the existence of some limitations in the number of attached sensors on structures. The damage detection problem is formulated as an optimization problem by computing static displacements in the reduced model of a structure subjected to a unique static load. The static responses are computed through the flexibility matrix of the damaged structure obtained based on the incomplete modal data of the structure. In the algorithm, an iterated improved reduction system method is applied to prepare an accurate reduced model of a structure. The optimization problem is solved via a new evolutionary optimization algorithm called the cuckoo optimization algorithm. The efficiency and robustness of the presented method are demonstrated through three numerical examples. Moreover, the efficiency of the method is verified by an experimental study of a five-story shear building structure on a shaking table considering only two sensors. The obtained damage identification results for the numerical and experimental studies show the suitable and stable performance of the proposed damage identification method for structures with limited sensors. (paper)
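The flexibility matrix assembled from incomplete modal data is conventionally the truncated modal sum F ≈ sum_i phi_i phi_i^T / omega_i^2; a sketch under that standard convention (mass-normalized modes assumed), not the authors' exact formulation:

```python
import numpy as np

def modal_flexibility(phis, omegas):
    """Approximate flexibility matrix from a few measured modes.

    phis: (n_dof, n_modes) mass-normalized mode shapes (columns).
    omegas: natural frequencies (rad/s).
    F ≈ sum_i phi_i phi_i^T / omega_i^2; because of the 1/omega^2 weight,
    the low modes dominate, so a truncated measured set is often adequate.
    """
    F = np.zeros((phis.shape[0], phis.shape[0]))
    for phi, w in zip(phis.T, omegas):
        F += np.outer(phi, phi) / w ** 2
    return F
```

Static displacements under a chosen load p then follow as u = F @ p, which is the quantity the damage-detection objective function compares between the healthy and damaged models.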
Awan, Muaaz Gul; Saeed, Fahad
2017-08-01
Modern high-resolution mass spectrometry instruments can generate millions of spectra in a single systems biology experiment. Each spectrum consists of thousands of peaks, but only a small number of peaks actively contribute to deduction of peptides. Therefore, pre-processing of MS data to detect noisy and non-useful peaks is an active area of research. Most sequential noise-reducing algorithms are impractical to use as a pre-processing step due to high time complexity. In this paper, we present a GPU-based dimensionality-reduction algorithm, called G-MSR, for MS2 spectra. Our proposed algorithm uses novel data structures which optimize the memory and computational operations inside the GPU. These novel data structures include Binary Spectra and Quantized Indexed Spectra (QIS). The former helps in communicating essential information between CPU and GPU using a minimum amount of data, while the latter enables us to store and process a complex 3-D data structure as a 1-D array while maintaining the integrity of the MS data. Our proposed algorithm also takes into account the limited memory of GPUs and switches between in-core and out-of-core modes based upon the size of input data. G-MSR achieves a peak speed-up of 386x over its sequential counterpart and is shown to process over a million spectra in just 32 seconds. The code for this algorithm is available as GPL open-source on GitHub: https://github.com/pcdslab/G-MSR.
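The Binary Spectra and QIS structures are described only by name in the record; a plausible CPU-side sketch of the quantized-indexed idea (fixed-point m/z indices plus 8-bit intensities) might look like this, purely as an illustration and not the G-MSR on-GPU layout:

```python
import numpy as np

def quantize_spectrum(mz, intensity, mz_scale=100, levels=255):
    """Sketch of a 'quantized indexed spectrum': m/z values become integer
    indices (fixed-point, two decimal places here) and intensities are
    scaled to 8-bit levels, so a spectrum flattens into two compact integer
    arrays suitable for bulk transfer and 1-D addressing."""
    idx = np.round(np.asarray(mz) * mz_scale).astype(np.int64)
    q = np.round(np.asarray(intensity) / max(np.max(intensity), 1e-12) * levels)
    return idx, q.astype(np.uint8)
```

Compacting floats into small integers is what makes CPU-GPU transfers and coalesced GPU memory access cheap, the bottleneck such data structures target.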
DEFF Research Database (Denmark)
Martini, Enrica; Breinbjerg, Olav; Maci, Stefano
2008-01-01
A simple and effective procedure for the reduction of truncation errors in planar near-field measurements of aperture antennas is presented. The procedure relies on the consideration that, due to the scan plane truncation, the calculated plane wave spectrum of the field radiated by the antenna is...
International Nuclear Information System (INIS)
Woods, K; DiCostanzo, D; Gupta, N
2016-01-01
Purpose: To test the efficacy of a retrospective metal artifact reduction (MAR) reconstruction algorithm for a commercial computed tomography (CT) scanner for radiation therapy purposes. Methods: High Z geometric integrity and artifact reduction analyses were performed with three phantoms using General Electric's (GE) Discovery CT. The three phantoms included: a Computerized Imaging Reference Systems (CIRS) electron density phantom (Model 062) with a 6.5 mm diameter titanium rod insert, a custom spine phantom using Synthes Spine hardware submerged in water, and a dental phantom with various high Z fillings submerged in water. Each phantom was reconstructed using MAR and compared against the original scan. Furthermore, each scenario was tested using standard and extended Hounsfield unit (HU) ranges. The high Z geometric integrity analysis was performed using the CIRS phantom, while the artifact reduction analysis was performed using all three phantoms. Results: The 6.5 mm diameter rod was slightly overestimated on non-MAR scans for both standard and extended HU. With MAR reconstruction, the rod was underestimated for both standard and extended HU. For artifact reduction, the mean and standard deviation were compared in a volume of interest (VOI) in the surrounding material (water and water-equivalent material, ∼0 HU). Overall, the mean value of the VOI was closer to 0 HU for the MAR reconstruction than for the non-MAR scan for most phantoms. Additionally, the standard deviations for all phantoms were greatly reduced using MAR reconstruction. Conclusion: GE's MAR reconstruction algorithm improves image quality in the presence of high Z material with minimal degradation of its geometric integrity. High Z delineation can be carried out with proper contouring techniques. The effects of beam hardening artifacts are greatly reduced with MAR reconstruction. Tissue corrections due to these artifacts can be eliminated for simple high Z geometries and greatly
Energy Technology Data Exchange (ETDEWEB)
Sofue, Keitaro; Sugimura, Kazuro [Kobe University Graduate School of Medicine, Department of Radiology, Kobe, Hyogo (Japan); Yoshikawa, Takeshi; Ohno, Yoshiharu [Kobe University Graduate School of Medicine, Advanced Biomedical Imaging Research Center, Kobe, Hyogo (Japan); Kobe University Graduate School of Medicine, Division of Functional and Diagnostic Imaging Research, Department of Radiology, Kobe, Hyogo (Japan); Negi, Noriyuki [Kobe University Hospital, Division of Radiology, Kobe, Hyogo (Japan); Inokawa, Hiroyasu; Sugihara, Naoki [Toshiba Medical Systems Corporation, Otawara, Tochigi (Japan)
2017-07-15
To determine the value of a raw data-based metal artifact reduction (SEMAR) algorithm for image quality improvement in abdominal CT for patients with small metal implants. Fifty-eight patients with small metal implants (3-15 mm in size) who underwent treatment for hepatocellular carcinoma were imaged with CT. CT data were reconstructed by filtered back projection with and without the SEMAR algorithm in axial and coronal planes. To evaluate metal artefact reduction, mean CT number (HU and SD) and artefact index (AI) values within the liver were calculated. Two readers independently evaluated image quality of the liver and pancreas and visualization of vasculature using a 5-point visual score. HU and AI values and image quality on images with and without SEMAR were compared using the paired Student's t-test and Wilcoxon signed rank test. Interobserver agreement was evaluated using the linear-weighted κ test. Mean HU and AI on images with SEMAR were significantly lower than those without SEMAR (P < 0.0001). Liver and pancreas image quality and visualization of vasculature were significantly improved on CT with SEMAR (P < 0.0001) with substantial or almost perfect agreement (0.62 ≤ κ ≤ 0.83). SEMAR can improve image quality in abdominal CT in patients with small metal implants by reducing metallic artefacts. (orig.)
International Nuclear Information System (INIS)
Sofue, Keitaro; Sugimura, Kazuro; Yoshikawa, Takeshi; Ohno, Yoshiharu; Negi, Noriyuki; Inokawa, Hiroyasu; Sugihara, Naoki
2017-01-01
To determine the value of a raw data-based metal artifact reduction (SEMAR) algorithm for image quality improvement in abdominal CT for patients with small metal implants. Fifty-eight patients with small metal implants (3-15 mm in size) who underwent treatment for hepatocellular carcinoma were imaged with CT. CT data were reconstructed by filtered back projection with and without the SEMAR algorithm in axial and coronal planes. To evaluate metal artefact reduction, mean CT number (HU and SD) and artefact index (AI) values within the liver were calculated. Two readers independently evaluated image quality of the liver and pancreas and visualization of vasculature using a 5-point visual score. HU and AI values and image quality on images with and without SEMAR were compared using the paired Student's t-test and Wilcoxon signed rank test. Interobserver agreement was evaluated using the linear-weighted κ test. Mean HU and AI on images with SEMAR were significantly lower than those without SEMAR (P < 0.0001). Liver and pancreas image quality and visualization of vasculature were significantly improved on CT with SEMAR (P < 0.0001) with substantial or almost perfect agreement (0.62 ≤ κ ≤ 0.83). SEMAR can improve image quality in abdominal CT in patients with small metal implants by reducing metallic artefacts. (orig.)
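The artefact index (AI) used in the record above is commonly computed as the excess noise in a region near the metal relative to an artefact-free reference region. A minimal sketch assuming the common definition AI = √(SD_ROI² − SD_ref²) — the paper's exact formula is not given here, so this is an illustrative stand-in:

```python
import numpy as np

def artefact_index(roi, ref):
    """Artefact index: excess noise in a ROI affected by metal streaks
    relative to an artefact-free reference ROI (one common definition;
    the paper's exact formula may differ)."""
    excess = np.var(roi) - np.var(ref)
    return float(np.sqrt(max(excess, 0.0)))

rng = np.random.default_rng(0)
ref = rng.normal(60, 10, 5000)   # liver-like HU samples, quiet region
roi = rng.normal(60, 26, 5000)   # streak-corrupted region
print(artefact_index(roi, ref))  # ~ sqrt(26^2 - 10^2) ≈ 24
```

A lower AI after reconstruction with the artifact-reduction algorithm then quantifies the streak suppression reported in the abstract.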
International Nuclear Information System (INIS)
Norris, H; Rangaraj, D; Kim, S
2016-01-01
Purpose: High-Z (metal) implants in CT scans cause significant streak-like artifacts in the reconstructed dataset. This results in both inaccurate CT Hounsfield units for the tissue as well as obscuration of the target and organs at risk (OARs) for radiation therapy planning. Herein we analyze two metal artifact reduction algorithms: GE’s Smart MAR and a Metal Deletion Technique (MDT) for geometric and Hounsfield Unit (HU) accuracy. Methods: A CT-to-electron density phantom, with multiple inserts of various densities and a custom Cerrobend insert (Zeff=76.8), is utilized in this continuing study. The phantom is scanned without metal (baseline) and again with the metal insert. Using one set of projection data, reconstructed CT volumes are created with filtered-back-projection (FBP) and the MAR and the MDT algorithms. Regions-of-Interest (ROIs) are evaluated for each insert for HU accuracy; the metal insert’s Full-Width-Half-Maximum (FWHM) is used to evaluate the geometric accuracy. Streak severity is quantified with an HU error metric over the phantom volume. Results: The original FBP reconstruction has a Root-Mean-Square-Error (RMSE) of 57.55 HU (STD=29.19, range=−145.8 to +79.2) compared to baseline. The MAR reconstruction has a RMSE of 20.98 HU (STD=13.92, range=−18.3 to +61.7). The MDT reconstruction has a RMSE of 10.05 HU (STD=10.5, range=−14.8 to +18.6). FWHM for baseline=162.05; FBP=161.84 (−0.13%); MAR=162.36 (+0.19%); MDT=162.99 (+0.58%). Streak severity metric for FBP=19.73 (22.659% bad pixels); MAR=8.743 (9.538% bad); MDT=4.899 (5.303% bad). Conclusion: Image quality, in terms of HU accuracy, in the presence of high-Z metal objects in CT scans is improved by metal artifact reduction reconstruction algorithms. The MDT algorithm had the highest HU value accuracy (RMSE=10.05 HU) and best streak severity metric, but scored the worst in terms of geometric accuracy. Qualitatively, the MAR and MDT algorithms increased detectability of inserts
Energy Technology Data Exchange (ETDEWEB)
Norris, H; Rangaraj, D; Kim, S [Baylor Scott & White Health, Temple, TX (United States)
2016-06-15
Purpose: High-Z (metal) implants in CT scans cause significant streak-like artifacts in the reconstructed dataset. This results in both inaccurate CT Hounsfield units for the tissue as well as obscuration of the target and organs at risk (OARs) for radiation therapy planning. Herein we analyze two metal artifact reduction algorithms: GE’s Smart MAR and a Metal Deletion Technique (MDT) for geometric and Hounsfield Unit (HU) accuracy. Methods: A CT-to-electron density phantom, with multiple inserts of various densities and a custom Cerrobend insert (Zeff=76.8), is utilized in this continuing study. The phantom is scanned without metal (baseline) and again with the metal insert. Using one set of projection data, reconstructed CT volumes are created with filtered-back-projection (FBP) and the MAR and the MDT algorithms. Regions-of-Interest (ROIs) are evaluated for each insert for HU accuracy; the metal insert’s Full-Width-Half-Maximum (FWHM) is used to evaluate the geometric accuracy. Streak severity is quantified with an HU error metric over the phantom volume. Results: The original FBP reconstruction has a Root-Mean-Square-Error (RMSE) of 57.55 HU (STD=29.19, range=−145.8 to +79.2) compared to baseline. The MAR reconstruction has a RMSE of 20.98 HU (STD=13.92, range=−18.3 to +61.7). The MDT reconstruction has a RMSE of 10.05 HU (STD=10.5, range=−14.8 to +18.6). FWHM for baseline=162.05; FBP=161.84 (−0.13%); MAR=162.36 (+0.19%); MDT=162.99 (+0.58%). Streak severity metric for FBP=19.73 (22.659% bad pixels); MAR=8.743 (9.538% bad); MDT=4.899 (5.303% bad). Conclusion: Image quality, in terms of HU accuracy, in the presence of high-Z metal objects in CT scans is improved by metal artifact reduction reconstruction algorithms. The MDT algorithm had the highest HU value accuracy (RMSE=10.05 HU) and best streak severity metric, but scored the worst in terms of geometric accuracy. Qualitatively, the MAR and MDT algorithms increased detectability of inserts
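The RMSE and streak-severity figures in the record above can be sketched in form with a few lines of NumPy. The 40 HU "bad pixel" threshold and the synthetic noise levels below are assumptions for illustration, not the study's actual values:

```python
import numpy as np

def rmse_hu(recon, baseline):
    """Root-mean-square HU error of a reconstruction against the
    metal-free baseline scan."""
    return float(np.sqrt(np.mean((recon - baseline) ** 2)))

def streak_severity(recon, baseline, thresh=40.0):
    """Fraction of 'bad' voxels whose HU error exceeds a threshold
    (the 40 HU cutoff is an assumed value, not from the paper)."""
    return float(np.mean(np.abs(recon - baseline) > thresh))

baseline = np.zeros((64, 64))    # water-equivalent region, 0 HU
rng = np.random.default_rng(1)
fbp = baseline + rng.normal(0, 55, baseline.shape)  # heavy streak noise
mar = baseline + rng.normal(0, 20, baseline.shape)  # after artifact reduction
print(rmse_hu(fbp, baseline), rmse_hu(mar, baseline))
print(streak_severity(fbp, baseline), streak_severity(mar, baseline))
```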
Ellmann, Stephan; Kammerer, Ferdinand; Brand, Michael; Allmendinger, Thomas; May, Matthias S; Uder, Michael; Lell, Michael M; Kramer, Manuel
2016-05-01
The aim of this study was to determine the dose reduction potential of iterative reconstruction (IR) algorithms in computed tomography angiography (CTA) of the circle of Willis using a novel method of evaluating the quality of radiation dose-reduced images. This study relied on ReconCT, a proprietary reconstruction software that allows simulating CT scans acquired with reduced radiation dose based on the raw data of true scans. To evaluate the performance of ReconCT in this regard, a phantom study was performed to compare the image noise of true and simulated scans within simulated vessels of a head phantom. Following that, 10 patients scheduled for CTA of the circle of Willis were scanned according to our institute's standard protocol (100 kV, 145 reference mAs). Subsequently, CTA images of these patients were reconstructed as either a full-dose weighted filtered back projection or with radiation dose reductions down to 10% of the full-dose level and Sinogram-Affirmed Iterative Reconstruction (SAFIRE) with either strength 3 or 5. Images were marked with arrows pointing on vessels of different sizes, and image pairs were presented to observers. Five readers assessed image quality with 2-alternative forced choice comparisons. In the phantom study, no significant differences were observed between the noise levels of simulated and true scans in filtered back projection, SAFIRE 3, and SAFIRE 5 reconstructions. The dose reduction potential for patient scans showed a strong dependence on IR strength as well as on the size of the vessel of interest. Thus, the potential radiation dose reductions ranged from 84.4% for the evaluation of great vessels reconstructed with SAFIRE 5 to 40.9% for the evaluation of small vessels reconstructed with SAFIRE 3. This study provides a novel image quality evaluation method based on 2-alternative forced choice comparisons. In CTA of the circle of Willis, higher IR strengths and greater vessel sizes allowed higher degrees of radiation dose
SU-E-I-92: Is Photon Starvation Preventing Metal Artifact Reduction Algorithm From Working in KVCT?
Energy Technology Data Exchange (ETDEWEB)
Paudel, M [University of Alberta, Cross Cancer Institute, Edmonton, AB (Canada); currently at University of Toronto, Sunnybrook Health Sciences Center, Toronto, ON (Canada); MacKenzie, M; Fallone, B; Rathee, S [University of Alberta, Cross Cancer Institute, Edmonton, AB (Canada)
2014-06-01
Purpose: High density/high atomic number metallic objects create shading and streaking metal artifacts in the CT image that can cause inaccurate delineation of anatomical structures or inaccurate radiation dose calculation. A modified iterative maximum-likelihood polychromatic algorithm for CT (mIMPACT) that models the energy response of detectors, photon interaction processes and beam polychromaticity has successfully reduced metal artifacts in MVCT. Our extension of mIMPACT to kVCT did not significantly reduce metal artifacts for a high density metal like steel. We hypothesize that photon starvation may be present in the measured data of a commercial kVCT imaging beam. Methods: We measured attenuation of a range of steel plate thicknesses, sandwiched between two 12 cm thick solid water blocks, using a Philips Big Bore CT scanner in scout acquisition mode at 120 kVp and 200 mAs. The transmitted signal (y) was normalized to the air scan signal (y{sub 0}) to get attenuation [i.e., ln(y/y{sub 0})] data for a detector. Results: Below a steel plate thickness of 13.4 mm, the variations in measured attenuation as a function of view number are characterized by quantum noise and show increased attenuation with metal thickness. At or above this thickness the attenuation shows discrete levels in addition to the quantum noise, and some views have a saturated attenuation value. The histograms of the measured attenuation for up to 36.7 mm of steel show this trend. The detector signal is so small that the quantization levels of the analog-to-digital (A-to-D) converter are visible, a clear indication of photon starvation. Conclusion: Photons reaching the kVCT detector after passing through a thick metal plate are either so few in number that the measured signal has large quantum noise, or are completely absorbed inside the plate, creating photon starvation. Such data are uninterpretable by the mIMPACT algorithm, which therefore cannot reduce metal artifacts in kVCT for certain realistic thicknesses of steel
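The photon-starvation mechanism described above can be illustrated with a toy detector model: at large steel thicknesses the expected count drops to a few photons per view, so the quantized signal takes only a handful of discrete values, and zero-count views (complete absorption) saturate. All numbers here (the effective μ of steel, the air-scan count) are illustrative assumptions:

```python
import numpy as np

MU_STEEL = 0.25   # 1/mm, assumed effective attenuation coefficient
Y0 = 6.0e4        # air-scan detector signal in counts (assumed)

def measured_attenuation(thickness_mm, n_views=1000, seed=2):
    """Simulated per-view attenuation through a steel plate with Poisson
    (quantum) noise and integer count quantization."""
    rng = np.random.default_rng(seed)
    y_true = Y0 * np.exp(-MU_STEEL * thickness_mm)
    y = rng.poisson(y_true, n_views)   # quantum noise on integer counts
    y = np.clip(y, 1, None)            # zero-count views saturate
    return np.log(Y0 / y)              # per-view attenuation

thin = measured_attenuation(13.0)   # ~2300 counts: smooth quantum noise
thick = measured_attenuation(40.0)  # ~3 counts: few discrete levels
print(np.unique(thin).size, np.unique(thick).size)
```

With thick steel the attenuation clusters on a handful of discrete levels, mirroring the histograms described in the Results.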
International Nuclear Information System (INIS)
Daraji, A H; Hale, J M
2012-01-01
The optimal placement of sensors and actuators in active vibration control is limited by the number of candidates in the search space. The search space of a small structure discretized into one hundred elements, for optimising the location of ten actuators, contains 1.73 × 10¹³ possible solutions, one of which is the global optimum. In this work, a new quarter and half chromosome technique based on symmetry is developed, by which the search space for optimisation of sensor/actuator locations in active vibration control of flexible structures may be greatly reduced. The technique is applied to the optimisation of eight and ten actuators located on a 500 × 500 mm square plate, in which the search space is reduced by up to 99.99%. The technique also allows the genetic algorithm program to update the natural frequencies and mode shapes in each generation, finding the global optimal solution in a greatly reduced number of generations. An isotropic plate with piezoelectric sensor/actuator pairs bonded to its surface was investigated using the finite element method and Hamilton's principle based on first order shear deformation theory. The placement and feedback gain of ten and eight sensor/actuator pairs were optimised for a cantilever and a clamped-clamped plate to attenuate the first six modes of vibration, using minimisation of a linear quadratic index as the objective function.
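The quoted search-space size is simply the number of ways to choose 10 actuator sites from 100 elements, C(100, 10). A quick check, together with an idealized quarter-symmetry reduction to 25 distinct sites (an assumption for illustration, not the paper's exact encoding) showing how a ≥99.99% reduction can arise:

```python
from math import comb

# Placing 10 actuators on 100 candidate elements: C(100, 10) layouts.
full = comb(100, 10)
print(f"{full:.3g}")   # ≈ 1.73e13, matching the figure in the abstract

# With quarter-plate symmetry only ~25 distinct element positions need
# encoding (an idealized assumption), collapsing the space dramatically.
reduced = comb(25, 10)
print(f"remaining fraction of search space: {reduced / full:.2e}")
```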
Energy Technology Data Exchange (ETDEWEB)
Mennecke, Angelika; Svergun, Stanislav; Doerfler, Arnd; Struffert, Tobias [University of Erlangen-Nuremberg, Department of Neuroradiology, Erlangen (Germany); Scholz, Bernhard [Siemens Healthcare GmbH, Forchheim (Germany); Royalty, Kevin [Siemens Medical Solutions, USA, Inc., Hoffman Estates, IL (United States)
2017-01-15
Metal artefacts can impair accurate diagnosis of haemorrhage using flat detector CT (FD-CT), especially after aneurysm coiling. Within this work we evaluate a prototype metal artefact reduction algorithm by comparison of the artefact-reduced and the non-artefact-reduced FD-CT images to pre-treatment FD-CT and multi-slice CT images. Twenty-five patients with acute aneurysmal subarachnoid haemorrhage (SAH) were selected retrospectively. FD-CT and multi-slice CT before endovascular treatment as well as FD-CT data sets after treatment were available for all patients. The algorithm was applied to post-treatment FD-CT. The effect of the algorithm was evaluated utilizing the pre-post concordance of a modified Fisher score, a subjective image quality assessment, the range of the Hounsfield units within three ROIs, and the pre-post slice-wise Pearson correlation. The pre-post concordance of the modified Fisher score, the subjective image quality, and the pre-post correlation of the ranges of the Hounsfield units were significantly higher for artefact-reduced than for non-artefact-reduced images. Within the metal-affected slices, the pre-post slice-wise Pearson correlation coefficient was higher for artefact-reduced than for non-artefact-reduced images. The overall diagnostic quality of the artefact-reduced images was improved and reached the level of the pre-interventional FD-CT images. The metal-unaffected parts of the image were not modified. (orig.)
International Nuclear Information System (INIS)
Mennecke, Angelika; Svergun, Stanislav; Doerfler, Arnd; Struffert, Tobias; Scholz, Bernhard; Royalty, Kevin
2017-01-01
Metal artefacts can impair accurate diagnosis of haemorrhage using flat detector CT (FD-CT), especially after aneurysm coiling. Within this work we evaluate a prototype metal artefact reduction algorithm by comparison of the artefact-reduced and the non-artefact-reduced FD-CT images to pre-treatment FD-CT and multi-slice CT images. Twenty-five patients with acute aneurysmal subarachnoid haemorrhage (SAH) were selected retrospectively. FD-CT and multi-slice CT before endovascular treatment as well as FD-CT data sets after treatment were available for all patients. The algorithm was applied to post-treatment FD-CT. The effect of the algorithm was evaluated utilizing the pre-post concordance of a modified Fisher score, a subjective image quality assessment, the range of the Hounsfield units within three ROIs, and the pre-post slice-wise Pearson correlation. The pre-post concordance of the modified Fisher score, the subjective image quality, and the pre-post correlation of the ranges of the Hounsfield units were significantly higher for artefact-reduced than for non-artefact-reduced images. Within the metal-affected slices, the pre-post slice-wise Pearson correlation coefficient was higher for artefact-reduced than for non-artefact-reduced images. The overall diagnostic quality of the artefact-reduced images was improved and reached the level of the pre-interventional FD-CT images. The metal-unaffected parts of the image were not modified. (orig.)
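The pre-post slice-wise Pearson correlation used as an evaluation metric above is straightforward to compute. A minimal sketch on synthetic volumes; shapes and noise levels are illustrative assumptions:

```python
import numpy as np

def slicewise_pearson(pre, post):
    """Pearson r between corresponding axial slices (axis 0) of two
    volumes, as used to compare pre-treatment and post-treatment FD-CT."""
    return np.array([np.corrcoef(a.ravel(), b.ravel())[0, 1]
                     for a, b in zip(pre, post)])

rng = np.random.default_rng(3)
pre = rng.normal(40, 15, (8, 32, 32))         # pre-treatment volume
post = pre + rng.normal(0, 5, pre.shape)      # artefact-reduced: close to pre
streaky = pre + rng.normal(0, 40, pre.shape)  # coil-artefact-corrupted
print(slicewise_pearson(pre, post).mean())
print(slicewise_pearson(pre, streaky).mean())
```

A higher mean slice-wise r for the artefact-reduced volume corresponds to the improvement reported in the abstract.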
Cross plane scattering correction
International Nuclear Information System (INIS)
Shao, L.; Karp, J.S.
1990-01-01
Most previous scattering correction techniques for PET are based on assumptions made for a single transaxial plane and are independent of axial variations. These techniques will incorrectly estimate the scattering fraction for volumetric PET imaging systems since they do not take the cross-plane scattering into account. In this paper, the authors propose a new point source scattering deconvolution method (2-D). The cross-plane scattering is incorporated into the algorithm by modeling a scattering point source function. In the model, the scattering dependence both on axial and transaxial directions is reflected in the exponential fitting parameters and these parameters are directly estimated from a limited number of measured point response functions. The authors' results comparing the standard in-plane point source deconvolution to the authors' cross-plane source deconvolution show that for a small source, the former technique overestimates the scatter fraction in the plane of the source and underestimates the scatter fraction in adjacent planes. In addition, the authors also propose a simple approximation technique for deconvolution
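The exponential fitting parameters mentioned above can be estimated from a measured point-response profile by a log-linear fit of its scatter tails. A sketch under the assumption of a single exponential tail A·exp(−|x|/L) added to the primary peak (a deliberately simplified 1-D stand-in for the 2-D axial/transaxial model):

```python
import numpy as np

# Synthetic point-response: a primary delta plus an exponential scatter
# tail. A_true and L_true are illustrative assumptions.
x = np.arange(-50, 51, dtype=float)
A_true, L_true = 0.05, 12.0
prf = np.where(x == 0, 1.0, 0.0) + A_true * np.exp(-np.abs(x) / L_true)

# Fit log(tail) = log(A) - |x|/L on points away from the primary peak.
tail = np.abs(x) > 3
coef = np.polyfit(np.abs(x[tail]), np.log(prf[tail]), 1)
L_est, A_est = -1.0 / coef[0], np.exp(coef[1])
print(L_est, A_est)   # recovers ~12 and ~0.05
```

The fitted (A, L) pair would then parameterize the scatter kernel used in the deconvolution step.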
Bellez, Sami; Bourlier, Christophe; Kubické, Gildas
2015-03-01
This paper deals with the evaluation of electromagnetic scattering from a three-dimensional structure consisting of two nested homogeneous dielectric bodies with arbitrary shape. The scattering problem is formulated in terms of a set of Poggio-Miller-Chang-Harrington-Wu integral equations that are afterwards converted into a system of linear equations (impedance matrix equation) by applying the Galerkin method of moments (MoM) with Rao-Wilton-Glisson basis functions. The MoM matrix equation is then solved by deploying the iterative propagation-inside-layer expansion (PILE) method in order to obtain the unknown surface current densities, which are thereafter used to compute the radar cross-section (RCS) patterns. Some numerical results for various structures including canonical geometries are presented and compared with those of the FEKO software in order to validate the PILE-based approach as well as to show its efficiency in analyzing the fully polarized RCS patterns.
Fast algorithms for transport models. Final report
International Nuclear Information System (INIS)
Manteuffel, T.A.
1994-01-01
This project has developed a multigrid in space algorithm for the solution of the S_N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid in space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi type iteration was more efficient than the block red/black iteration. Given sufficient processors, all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state-of-the-art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel in angle algorithm was developed. A parallel version of the multilevel in angle algorithm has also been developed. At first glance, the shifted transport sweep has limited parallelism. Once the right-hand-side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial value ODEs. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel in angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M))
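Once the right-hand side is known, each angle's initial-value problem reduces (after discretization) to a first-order recurrence x[i+1] = a[i]·x[i] + b[i], and the cyclic-reduction idea can be emulated by a log-depth scan over the affine maps (a, b). A serial sketch of that O(log M) pattern; the actual MLD two-cell relaxation is not reproduced here:

```python
import numpy as np

def scan_recurrence(a, b, x0):
    """Solve x[i+1] = a[i]*x[i] + b[i] with an inclusive doubling scan
    (Hillis–Steele) over the affine maps (a, b). Each of the log2(M)
    sweeps is fully parallel across i; this is a serial emulation of
    the O(log M)-depth pattern behind cyclic reduction."""
    A, B = a.copy(), b.copy()
    n, step = len(a), 1
    while step < n:
        A2, B2 = A.copy(), B.copy()
        # compose each map with the composite map 'step' positions back
        A2[step:] = A[step:] * A[:-step]
        B2[step:] = A[step:] * B[:-step] + B[step:]
        A, B, step = A2, B2, step * 2
    # (A[i], B[i]) is now the composition of maps 0..i applied to x0
    return np.concatenate(([x0], A * x0 + B))

rng = np.random.default_rng(4)
a = rng.uniform(0.5, 1.0, 10)
b = rng.uniform(-1.0, 1.0, 10)
x = scan_recurrence(a, b, 1.0)
print(x[-1])
```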
AbouEisha, Hassan M.
2014-01-01
The problem of attribute reduction is an important problem related to feature selection and knowledge discovery. The problem of finding reducts with minimum cardinality is NP-hard. This paper suggests a new algorithm for finding exact reducts
Schädler, Marc R.; Warzybok, Anna; Kollmeier, Birger
2018-01-01
The simulation framework for auditory discrimination experiments (FADE) was adopted and validated to predict the individual speech-in-noise recognition performance of listeners with normal and impaired hearing with and without a given hearing-aid algorithm. FADE uses a simple automatic speech recognizer (ASR) to estimate the lowest achievable speech reception thresholds (SRTs) from simulated speech recognition experiments in an objective way, independent from any empirical reference data. Empirical data from the literature were used to evaluate the model in terms of predicted SRTs and benefits in SRT with the German matrix sentence recognition test when using eight single- and multichannel binaural noise-reduction algorithms. To allow individual predictions of SRTs in binaural conditions, the model was extended with a simple better ear approach and individualized by taking audiograms into account. In a realistic binaural cafeteria condition, FADE explained about 90% of the variance of the empirical SRTs for a group of normal-hearing listeners and predicted the corresponding benefits with a root-mean-square prediction error of 0.6 dB. This highlights the potential of the approach for the objective assessment of benefits in SRT without prior knowledge about the empirical data. The predictions for the group of listeners with impaired hearing explained 75% of the empirical variance, while the individual predictions explained less than 25%. Possibly, additional individual factors should be considered for more accurate predictions with impaired hearing. A competing talker condition clearly showed one limitation of current ASR technology, as the empirical performance with SRTs lower than −20 dB could not be predicted. PMID:29692200
Schädler, Marc R; Warzybok, Anna; Kollmeier, Birger
2018-01-01
The simulation framework for auditory discrimination experiments (FADE) was adopted and validated to predict the individual speech-in-noise recognition performance of listeners with normal and impaired hearing with and without a given hearing-aid algorithm. FADE uses a simple automatic speech recognizer (ASR) to estimate the lowest achievable speech reception thresholds (SRTs) from simulated speech recognition experiments in an objective way, independent from any empirical reference data. Empirical data from the literature were used to evaluate the model in terms of predicted SRTs and benefits in SRT with the German matrix sentence recognition test when using eight single- and multichannel binaural noise-reduction algorithms. To allow individual predictions of SRTs in binaural conditions, the model was extended with a simple better ear approach and individualized by taking audiograms into account. In a realistic binaural cafeteria condition, FADE explained about 90% of the variance of the empirical SRTs for a group of normal-hearing listeners and predicted the corresponding benefits with a root-mean-square prediction error of 0.6 dB. This highlights the potential of the approach for the objective assessment of benefits in SRT without prior knowledge about the empirical data. The predictions for the group of listeners with impaired hearing explained 75% of the empirical variance, while the individual predictions explained less than 25%. Possibly, additional individual factors should be considered for more accurate predictions with impaired hearing. A competing talker condition clearly showed one limitation of current ASR technology, as the empirical performance with SRTs lower than -20 dB could not be predicted.
International Nuclear Information System (INIS)
Shen, Z; Xia, P; Djemil, T; Klahr, P
2014-01-01
Purpose: To evaluate the impact of a commercial orthopedic metal artifact reduction (O-MAR) algorithm on CT image quality and dose calculation for patients with spinal prostheses near spinal tumors. Methods: A CT electron density phantom was scanned twice: with tissue-simulating inserts only, and with a titanium insert replacing solid water. A patient plan was mapped to the phantom images in two ways: with the titanium inside or outside of the spinal tumor. Pinnacle and Eclipse were used to evaluate the dosimetric effects of O-MAR on 12-bit and 16-bit CT data, respectively. CT images from five patients with spinal prostheses were reconstructed with and without O-MAR. Two observers assessed the image quality improvement from O-MAR. Both pencil beam and Monte Carlo dose calculation in iPlan were used for the patient study. The percentage differences between non-OMAR and O-MAR datasets were calculated for PTV-min, PTV-max, PTV-mean, PTV-V100, PTV-D90, OAR-V10Gy, OAR-max, and OAR-D0.1cc. Results: O-MAR improved image quality but did not significantly affect the dose distributions and DVHs for both 12-bit and 16-bit CT phantom data. All five patient cases demonstrated some degree of image quality improvement from O-MAR, ranging from small to large metal artifact reduction. For pencil beam, the largest discrepancy was observed for OAR-V10Gy at 5.4%, while the other seven parameters were ≤0.6%. For Monte Carlo, the differences between non-O-MAR and O-MAR datasets were ≤3.0%. Conclusion: Both phantom and patient studies indicated that O-MAR can substantially reduce metal artifacts on CT images, allowing better visualization of the anatomical structures and metal objects. The dosimetric impact of O-MAR was insignificant regardless of the metal location, image bit-depth, and dose calculation algorithm. O-MAR corrected images are recommended for radiation treatment planning on patients with spinal prostheses because of the improved image quality and no need to modify
International Nuclear Information System (INIS)
Siewerdsen, J.H.; Daly, M.J.; Bakhtiar, B.
2006-01-01
X-ray scatter poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in contrast reduction, image artifacts, and lack of CT number accuracy. We report the performance of a simple scatter correction method in which scatter fluence is estimated directly in each projection from pixel values near the edge of the detector behind the collimator leaves. The algorithm operates on the simple assumption that signal in the collimator shadow is attributable to x-ray scatter, and the 2D scatter fluence is estimated by interpolating between pixel values measured along the top and bottom edges of the detector behind the collimator leaves. The resulting scatter fluence estimate is subtracted from each projection to yield an estimate of the primary-only images for CBCT reconstruction. Performance was investigated in phantom experiments on an experimental CBCT benchtop, and the effect on image quality was demonstrated in patient images (head, abdomen, and pelvis sites) obtained on a preclinical system for CBCT-guided radiation therapy. The algorithm provides significant reduction in scatter artifacts without compromise in contrast-to-noise ratio (CNR). For example, in a head phantom, cupping artifact was essentially eliminated, CT number accuracy was restored to within 3%, and CNR (breast-to-water) was improved by up to 50%. Similarly in a body phantom, cupping artifact was reduced by at least a factor of 2 without loss in CNR. Patient images demonstrate significantly increased uniformity, accuracy, and contrast, with an overall improvement in image quality in all sites investigated. Qualitative evaluation illustrates that soft-tissue structures that are otherwise undetectable are clearly delineated in scatter-corrected reconstructions. Since scatter is estimated directly in each projection, the algorithm is robust with respect to system geometry, patient size and heterogeneity, patient motion, etc. Operating without prior information, analytical modeling
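The collimator-shadow correction described above is simple to sketch: average a few detector rows in shadow at the top and bottom of each projection, interpolate linearly down each column, and subtract the result as the scatter estimate. The shadow width and the synthetic projection below are illustrative assumptions:

```python
import numpy as np

def scatter_correct(proj, edge=4):
    """Estimate 2D scatter fluence from 'edge' rows behind the
    collimator leaves at the top and bottom of a projection,
    interpolate linearly down each column, and subtract.
    The shadow width is an assumed geometry parameter."""
    top = proj[:edge].mean(axis=0)      # per-column shadow signal (top)
    bot = proj[-edge:].mean(axis=0)     # per-column shadow signal (bottom)
    w = np.linspace(0.0, 1.0, proj.shape[0])[:, None]
    scatter = (1 - w) * top[None, :] + w * bot[None, :]
    return np.clip(proj - scatter, 0, None), scatter

# Synthetic projection: open-field primary plus a smooth scatter ramp.
rows, cols = 64, 48
primary = np.zeros((rows, cols)); primary[8:-8] = 100.0
true_scatter = 20.0 + 5.0 * np.linspace(0, 1, rows)[:, None]
proj = primary + true_scatter
corrected, est = scatter_correct(proj)
print(np.abs(est - true_scatter).max())
```

Because the estimate comes from each projection itself, no prior model of patient size or geometry is needed, matching the robustness argument in the abstract.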
DEFF Research Database (Denmark)
Farhi, E.; Y., Debab,; Willendrup, Peter Kjær
2014-01-01
and noisy problems. These optimizers can then be used to fit models onto data objects, and optimize McStas instrument simulations. As an application, we propose a methodology to analyse neutron scattering measurements in a pure Monte Carlo optimization procedure using McStas and iFit. As opposed...
MUSIC algorithms for rebar detection
International Nuclear Information System (INIS)
Solimene, Raffaele; Leone, Giovanni; Dell’Aversano, Angela
2013-01-01
The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size as compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes as strong scatterers tend to mask the weak ones. Consequently, the detection of more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting data is of a relatively high level. To overcome this drawback, here a new technique is proposed, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to improve drastically detection performance in realistic scenarios. (paper)
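A minimal single-stage MUSIC sketch for point scatterers (the two-stage strong/weak refinement proposed above is omitted): build a multistatic matrix from steering vectors, take the noise subspace from an SVD, and evaluate the pseudospectrum. Geometry, wavelength, and scatterer positions are illustrative assumptions:

```python
import numpy as np

k = 2 * np.pi   # wavenumber (wavelength = 1), an assumed value
# 21 sensors on a line at height y = 10
sensors = np.stack([np.linspace(-5, 5, 21), np.full(21, 10.0)], axis=1)

def steering(pt):
    """Free-space Green's-function steering vector for a trial point."""
    R = np.linalg.norm(sensors - pt, axis=1)
    return np.exp(1j * k * R) / R

# Idealized noiseless multistatic matrix under the Born approximation
scatterers = [np.array([-1.0, 0.0]), np.array([1.5, 0.5])]
K = sum(np.outer(steering(p), steering(p)) for p in scatterers)

U, s, _ = np.linalg.svd(K)
noise = U[:, len(scatterers):]          # noise subspace

def pseudospectrum(pt):
    """Large where the steering vector is orthogonal to the noise
    subspace, i.e. at scatterer locations."""
    g = steering(pt); g = g / np.linalg.norm(g)
    return 1.0 / (np.linalg.norm(noise.conj().T @ g) ** 2 + 1e-12)

print(pseudospectrum(scatterers[0]), pseudospectrum(np.array([3.0, -2.0])))
```

In the two-stage variant described above, peaks found in a first pass would be projected out before re-running MUSIC to recover the weak scatterers.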
Directory of Open Access Journals (Sweden)
Hassan Abdullah Kubba
2015-05-01
The paper presents a highly accurate power flow solution, reducing the possibility of ending at local minima, by using a Real-Coded Genetic Algorithm (RCGA) with system reduction and restoration. The proposed method (RCGA) is modified to reduce the total computing time by reducing the system in size to that of the generator buses, which, for any realistic system, will be smaller in number, while the load buses are eliminated. The power flow problem is then solved for the generator buses only by the real-coded GA to calculate the voltage phase angles, the voltage magnitudes being specified, which reduces the computation time of the solution. The system is then restored by calculating the voltages of the load buses in terms of the calculated voltages of the generator buses, after a derivation of equations for calculating the voltages of the load busbars. The proposed method was demonstrated on the 14-bus IEEE test system and the practical 362-busbar IRAQI NATIONAL GRID (ING). The proposed method has reliable convergence, a highly accurate solution and less computing time for on-line applications. The method can conveniently be applied for on-line analysis and planning studies of large power systems.
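A minimal real-coded GA loop of the kind described above (blend crossover, Gaussian mutation, elitism), applied to a stand-in quadratic objective rather than an actual load-flow mismatch; all parameters (population size, mutation width, generation count) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

def objective(theta):
    """Stand-in for a power-mismatch objective over voltage phase
    angles (NOT an actual load-flow calculation)."""
    return float(np.sum((theta - 0.3) ** 2))

def rcga(dim=5, pop=40, gens=60, sigma=0.05):
    P = rng.uniform(-np.pi, np.pi, (pop, dim))   # angles in radians
    for _ in range(gens):
        fit = np.array([objective(p) for p in P])
        parents = P[np.argsort(fit)[: pop // 2]]  # truncation selection
        # blend (arithmetic) crossover of random parent pairs
        i, j = rng.integers(0, len(parents), (2, pop))
        w = rng.random((pop, 1))
        P = w * parents[i] + (1 - w) * parents[j]
        P += rng.normal(0, sigma, P.shape)        # Gaussian mutation
        P[0] = parents[0]                         # elitism
    return min(P, key=objective)

best = rcga()
print(objective(best))
```

In the paper's scheme this search would run only over the reduced generator-bus system, with load-bus voltages restored afterwards.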
Nagihara, S.; Hedlund, M.; Zacny, K.; Taylor, P. T.
2013-01-01
The needle probe method (also known as the 'hot wire' or 'line heat source' method) is widely used for in-situ thermal conductivity measurements on soils and marine sediments on the earth. Variants of this method have also been used (or planned) for measuring regolith on the surfaces of extra-terrestrial bodies (e.g., the Moon, Mars, and comets). In the near-vacuum condition on the lunar and planetary surfaces, the measurement method used on the earth cannot be simply duplicated, because thermal conductivity of the regolith can be approximately 2 orders of magnitude lower. In addition, the planetary probes have much greater diameters, due to engineering requirements associated with the robotic deployment on extra-terrestrial bodies. All of these factors contribute to the planetary probes requiring much longer time of measurement, several tens of (if not over a hundred) hours, while a conventional terrestrial needle probe needs only 1 to 2 minutes. The long measurement time complicates the surface operation logistics of the lander. It also negatively affects accuracy of the thermal conductivity measurement, because the cumulative heat loss along the probe is no longer negligible. The present study improves the data reduction algorithm of the needle probe method so that the measurement time on planetary surfaces may be shortened by an order of magnitude. The main difference between the new scheme and the conventional one is that the former uses the exact mathematical solution to the thermal model on which the needle probe measurement theory is based, while the latter uses an approximate solution that is valid only for large times. The present study demonstrates the benefit of the new data reduction technique by applying it to data from a series of needle probe experiments carried out in a vacuum chamber on JSC-1A lunar regolith simulant. The use of the exact solution has some disadvantage, however, in requiring three additional parameters, but two of them (the diameter and the
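The conventional large-time reduction referred to above fits temperature against ln(t): T(t) ≈ (q/4πk)·ln(t) + C, so the conductivity follows from the slope as k = q/(4π·slope). A round-trip sketch on synthetic data; q, k, and the time window are illustrative assumptions:

```python
import numpy as np

q = 2.0        # W/m, assumed line-source power per unit length
k_true = 0.8   # W/(m·K), assumed conductivity to recover

# Synthetic late-time temperature record following the large-time
# approximation T(t) = (q / 4*pi*k) * ln(t) + C.
t = np.linspace(60.0, 120.0, 200)                 # seconds
T = q / (4 * np.pi * k_true) * np.log(t) + 20.0

slope = np.polyfit(np.log(t), T, 1)[0]            # dT / d(ln t)
k_est = q / (4 * np.pi * slope)
print(k_est)   # recovers 0.8
```

The paper's improvement replaces this large-time approximation with the exact solution, which is what allows fitting much earlier (shorter) portions of the record.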
International Nuclear Information System (INIS)
Samei, Ehsan; Richard, Samuel
2015-01-01
indicated a 46%–84% dose reduction potential, depending on task, without compromising the modeled detection performance. Conclusions: The presented methodology based on ACR phantom measurements extends current possibilities for the assessment of CT image quality under the complex resolution and noise characteristics exhibited by statistical and iterative reconstruction algorithms. The findings further suggest that MBIR can potentially make better use of the projection data to reduce CT dose by approximately a factor of 2. Alternatively, if the dose is held unchanged, it can improve image quality to different degrees for different tasks.
Energy Technology Data Exchange (ETDEWEB)
Samei, Ehsan, E-mail: samei@duke.edu [Carl E. Ravin Advanced Imaging Laboratories, Clinical Imaging Physics Group, Departments of Radiology, Physics, Biomedical Engineering, and Electrical and Computer Engineering, Medical Physics Graduate Program, Duke University, Durham, North Carolina 27710 (United States); Richard, Samuel [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University, Durham, North Carolina 27710 (United States)
2015-01-15
indicated a 46%–84% dose reduction potential, depending on task, without compromising the modeled detection performance. Conclusions: The presented methodology based on ACR phantom measurements extends current possibilities for the assessment of CT image quality under the complex resolution and noise characteristics exhibited by statistical and iterative reconstruction algorithms. The findings further suggest that MBIR can potentially make better use of the projection data to reduce CT dose by approximately a factor of 2. Alternatively, if the dose is held unchanged, it can improve image quality to different degrees for different tasks.
Hesford, Andrew J.; Waag, Robert C.
2010-10-01
The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.
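The core trick described above, evaluating a translation-invariant Green's-function interaction over a regular grid with FFTs instead of a dense double sum, can be sketched as follows. The kernel used here is an arbitrary stand-in, not the acoustic Green's function of the paper:

```python
import numpy as np

def fft_convolve_green(src, green_block):
    """Apply a translation-invariant Green's-function interaction to sources on
    an n x n grid in O(n^2 log n) via circulant-embedded, zero-padded FFTs.
    green_block[di + n - 1, dj + n - 1] holds g(di, dj) for displacements
    di, dj in [-(n-1), n-1]."""
    n = src.shape[0]
    m = 2 * n
    # embed the block-Toeplitz kernel into a circulant one of size 2n x 2n
    ker = np.zeros((m, m), dtype=complex)
    for di in range(-(n - 1), n):
        for dj in range(-(n - 1), n):
            ker[di % m, dj % m] = green_block[di + n - 1, dj + n - 1]
    pad = np.zeros((m, m), dtype=complex)
    pad[:n, :n] = src
    out = np.fft.ifft2(np.fft.fft2(ker) * np.fft.fft2(pad))
    return out[:n, :n]
```

The circulant embedding doubles the grid in each dimension so that the linear (non-periodic) convolution is recovered from the periodic FFT product, reducing an O(N^2) apply over N grid elements to O(N log N).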
Energy Technology Data Exchange (ETDEWEB)
Birchall, J. [Univ. of Manitoba, Winnipeg, Manitoba (Canada)
1987-12-15
An outline is given of an experiment planned at TRIUMF which will measure an angular distribution of the parity-violating analyzing power A{sub z} in proton-proton scattering at 230 MeV. Measurements will be made in six angle bins by a cylindrically symmetric planar ionization chamber. At the same time, a cross-check of the results will be provided by a low-noise ionization detector downstream of the target which will measure the angle-integrated A{sub z}. Emphasis is placed on the systematic errors that are expected to be present in this measurement and which are in some cases unlike systematic errors in previous measurements of parity violation in proton scattering. As in other measurements, the major origin of systematic error is the polarization of the beam not being entirely parallel to its momentum. A scanning polarimeter to determine the distribution of these polarization components throughout the beam is sketched. (author)
Raman scattering in La1-xSrxFeO3-δ thin films: annealing-induced reduction and phase transformation
Islam, Mohammad A.; Xie, Yujun; Scafetta, Mark D.; May, Steven J.; Spanier, Jonathan E.
2015-04-01
Raman scattering in thin film La0.2Sr0.8FeO3-δ on MgO(0 0 1) collected at 300 K after different stages of annealing at selected temperatures T (300 K < T < 543 K, to 10 h) and analysis reveal changes in spectral characteristics due to a loss of oxygen, onset of oxygen vacancy-induced disorder, and activation of Raman-inactive modes that are attributed to symmetry lowering. The interpretation is further supported by carrier transport measurements under identical conditions showing an orders-of-magnitude increase in the resistivity induced by oxygen loss. After prolonged annealing in air, evolution of the spectrum signals the appearance of a possible topotactic transformation of the crystal structure from that of the rhombohedral ABO3 perovskites to that of a Brownmillerite-like structure consisting of octahedrally and tetrahedrally coordinated Fe atoms.
Institute of Scientific and Technical Information of China (English)
Xu Ning; Zhang Yun; Zhou Ruqi
2013-01-01
To address the difficulty of obtaining reducts through normal-form transformation of the discernibility function on large datasets, a same-element conversion reduction algorithm based on the discernibility matrix and discernibility function is proposed. The discernibility matrix retains all of the classification information in the data set, and the discernibility function casts that information into mathematical logic form. The algorithm converts the Conjunctive Normal Form (CNF) into Disjunctive Normal Form (DNF) step by step, starting from the low-rank clauses; following the same-element conversion rule and the high-element absorption rule, the algorithm returns if the higher-rank clauses are fully absorbed, and otherwise calls itself recursively for the next round of conversion. Worked examples show that the algorithm reduces the scale of each individual transformation, makes flexible use of mature recursion, and is compact and effective.
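A minimal sketch of the pipeline described above — discernibility matrix to CNF, then stepwise CNF-to-DNF conversion with absorption applied after each expansion — using attribute indices in place of names; the paper's precise same-element and high-element bookkeeping is simplified here:

```python
from itertools import combinations

def discernibility_clauses(rows, decision):
    """One clause per pair of objects with different decisions: the set of
    attributes on which the two objects differ."""
    clauses = set()
    for (r1, d1), (r2, d2) in combinations(zip(rows, decision), 2):
        if d1 != d2:
            c = frozenset(a for a, (x, y) in enumerate(zip(r1, r2)) if x != y)
            if c:
                clauses.add(c)
    return clauses

def absorb(terms):
    """Absorption law A + AB = A: keep only minimal terms."""
    return {t for t in terms if not any(o < t for o in terms)}

def cnf_to_dnf(clauses):
    """Expand the conjunction of clauses into minimal DNF terms (the reducts),
    absorbing after every step, low-rank clauses first."""
    terms = {frozenset()}
    for clause in sorted(clauses, key=len):
        terms = absorb({t | {a} for t in terms for a in clause})
    return terms
```

For a toy table whose decision is determined by attribute 0, the minimal DNF terms (the reducts) come out as {0} and {1, 2}; applying absorption inside the loop is what keeps the intermediate DNF from blowing up.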
Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa
2018-01-01
The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as the LLL algorithm, finds a basis of an integer lattice consisting of short, nearly orthogonal vectors. It can thereby also be seen as an approximation algorithm for the shortest vector problem (SVP), which is NP-hard.
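A textbook floating-point sketch of the LLL reduction with δ = 3/4 follows (verified or production implementations, such as the formalization this entry concerns, use exact rational arithmetic and incremental Gram-Schmidt updates rather than the full recomputation done here):

```python
import numpy as np

def lll_reduce(basis, delta=0.75):
    """Textbook LLL reduction of an integer-lattice basis (rows of `basis`)."""
    b = np.array(basis, dtype=float)
    n = len(b)

    def gram_schmidt():
        bstar = b.copy()
        mu = np.zeros((n, n))
        for i in range(n):
            for j in range(i):
                mu[i, j] = b[i] @ bstar[j] / (bstar[j] @ bstar[j])
                bstar[i] = bstar[i] - mu[i, j] * bstar[j]
        return bstar, mu

    bstar, mu = gram_schmidt()
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):            # size reduction
            q = round(mu[k, j])
            if q != 0:
                b[k] -= q * b[j]
                bstar, mu = gram_schmidt()
        if bstar[k] @ bstar[k] >= (delta - mu[k, k - 1] ** 2) * (bstar[k - 1] @ bstar[k - 1]):
            k += 1                                # Lovász condition holds
        else:
            b[[k, k - 1]] = b[[k - 1, k]]         # swap and back up
            bstar, mu = gram_schmidt()
            k = max(k - 1, 1)
    return np.rint(b).astype(int)
```

For δ = 3/4 the first reduced vector b1 is guaranteed to satisfy ||b1||^2 <= 2^(n-1) λ1^2, where λ1 is the length of a shortest nonzero lattice vector, and all the row operations are unimodular, so the lattice (and |det|) is preserved.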
Raman Scattering in La0.2Sr0.8FeO3-δ thin film: annealing-induced reduction and phase transformation
Islam, Mohammad; Xie, Yujun; Scafetta, Mark; May, Steven; Spanier, Jonathan
2015-03-01
Raman scattering in thin film La0.2Sr0.8FeO3-δ on MgO(001) collected at 300 K following different stages of annealing at selected temperatures (300 K < T < 543 K, to 10 h) and analysis reveal changes in spectral characteristics due to a loss of oxygen, onset of oxygen vacancy-induced disorder, and activation of Raman-inactive modes that are attributed to symmetry lowering. After prolonged annealing in air, evolution of the spectrum signals the appearance of a possible topotactic transformation of the crystal structure from that of the rhombohedral ABO3 perovskites to that of a Brownmillerite-like structure consisting of octahedrally and tetrahedrally coordinated Fe atoms. We acknowledge the ONR (N00014-11-1-0664), the Drexel Centralized Research Facilities, the Army Research Office DURIP program, the Department of Education (GAANN-RETAIN, Award No. P200A100117), and Leszek Wielunski at Rutgers University.
Raman scattering in La1−xSrxFeO3−δ thin films: annealing-induced reduction and phase transformation
International Nuclear Information System (INIS)
Islam, Mohammad A; Xie, Yujun; Scafetta, Mark D; May, Steven J; Spanier, Jonathan E
2015-01-01
Raman scattering in thin film La 0.2 Sr 0.8 FeO 3−δ on MgO(0 0 1) collected at 300 K after different stages of annealing at selected temperatures T (300 K < T < 543 K, to 10 h) and analysis reveal changes in spectral characteristics due to a loss of oxygen, onset of oxygen vacancy-induced disorder, and activation of Raman-inactive modes that are attributed to symmetry lowering. The interpretation is further supported by carrier transport measurements under identical conditions showing orders of magnitude increase in the resistivity induced by oxygen loss. After prolonged annealing in air, evolution of the spectrum signals the appearance of a possible topotactic transformation of the crystal structure from that of the rhombohedral ABO 3 perovskites to that of Brownmillerite-like structure consisting of octahedrally and tetrahedrally coordinated Fe atoms. (paper)
Nghiem, S. V.; Brakenridge, G. R.; Nguyen, D. T.
2017-12-01
Hurricane Harvey inflicted historic, catastrophic flooding across extensive regions around Houston and southeast Texas after making landfall on 25 August 2017. The Federal Emergency Management Agency (FEMA) requested urgent support for flood mapping and monitoring in an emergency response to the extreme flood situation. An innovative satellite remote sensing method, called the Depolarization Reduction Algorithm for Global Observations of inundatioN (DRAGON), has been developed and implemented for use with Sentinel synthetic aperture radar (SAR) satellite data at a resolution of 10 meters to identify, map, and monitor inundation including pre-existing water bodies and newly flooded areas. Results from this new method are hydrologically consistent and have been verified with known surface waters (e.g., coastal ocean, rivers, lakes, reservoirs, etc.), with clear-sky high-resolution WorldView images (where waves can be seen on surface water in inundated areas within a small spatial coverage), and with other flood maps from the consortium of Global Flood Partnership derived from multiple satellite datasets (including clear-sky Landsat and MODIS at lower resolutions). Figure 1 is a high-resolution (4K UHD) image of a composite inundation map for the region around Rosharon (in Brazoria County, south of Houston, Texas). This composite inundation map reveals extensive flooding on 29 August 2017 (four days after Hurricane Harvey made landfall), and the inundation was still persistent in most of the west and south of Rosharon one week later (5 September 2017), while flooding was reduced in the east of Rosharon. Hurricane Irma brought flooding to a number of areas in Florida. As of 10 September 2017, Sentinel SAR flood maps reveal inundation in the Florida Panhandle and over lowland surfaces on several islands in the Florida Keys. However, Sentinel SAR results indicate that flooding along the Florida coast was not extreme even though Irma was a Category-5 hurricane that might
Gomez, Humberto
2016-06-01
The CHY representation of scattering amplitudes is based on integrals over the moduli space of a punctured sphere. We replace the punctured sphere by a double-cover version. The resulting scattering equations depend on a parameter Λ controlling the opening of a branch cut. The new representation of scattering amplitudes possesses an enhanced redundancy which can be used to fix, modulo branches, the location of four punctures while promoting Λ to a variable. Via residue theorems we show how CHY formulas break up into sums of products of smaller (off-shell) ones times a propagator. This leads to a powerful way of evaluating CHY integrals of generic rational functions, which we call the Λ algorithm.
International Nuclear Information System (INIS)
Peng Yingjing; Qiu Lihua; Pan Congtao; Wang Cancan; Shang Songmin; Yan Feng
2012-01-01
Water-dispersible polypyrrole nanotube/silver nanoparticle hybrids (PPyNT-COOAgNP) were synthesized via a cation-exchange method. The approach involves surface functionalization of PPyNTs with carboxylic acid groups (-COOH) and cation exchange with silver ions (Ag + ), followed by reduction of the metal ions. The morphology and optical properties of the produced PPyNT-COOAgNP nanohybrids were characterized by transmission electron microscopy (TEM), Fourier transform infrared (FT-IR) spectroscopy, and UV–vis spectroscopy. The as-prepared PPyNT-COOAgNP nanohybrids exhibited a well-defined response to the reduction of hydrogen peroxide and proved to be extremely suitable substrates for surface-enhanced Raman spectroscopy (SERS), with a high enhancement factor of 6.0 × 10 7 , enabling the detection of 10 −12 M Rhodamine 6G solution.
Meyer, Michael; Kalender, Willi A.; Kyriakou, Yiannis
2010-01-01
Scattered radiation is a major source of artifacts in flat detector computed tomography (FDCT) due to the increased irradiated volumes. We propose a fast projection-based algorithm for correction of scatter artifacts. The presented algorithm combines a convolution method to determine the spatial distribution of the scatter intensity with an object-size-dependent scaling of the scatter distributions using a priori information generated by Monte Carlo simulations. A projection-based (PBSE) and an image-based (IBSE) strategy for size estimation of the scanned object are presented. Both strategies provide good correction and comparable results; the faster PBSE strategy is recommended. Even with such a fast and simple algorithm that in the PBSE variant does not rely on reconstructed volumes or scatter measurements, it is possible to provide a reasonable scatter correction even for truncated scans. For both simulations and measurements, scatter artifacts were significantly reduced and the algorithm showed stable behavior in the z-direction. For simulated voxelized head, hip and thorax phantoms, a figure of merit Q of 0.82, 0.76 and 0.77 was reached, respectively (Q = 0 for uncorrected, Q = 1 for ideal). For a water phantom with 15 cm diameter, for example, a cupping reduction from 10.8% down to 2.1% was achieved. The performance of the correction method has limitations in the case of measurements using non-ideal detectors, intensity calibration, etc. An iterative approach to overcome most of these limitations was proposed. This approach is based on root finding of a cupping metric and may be useful for other scatter correction methods as well. By this optimization, cupping of the measured water phantom was further reduced down to 0.9%. The algorithm was evaluated on a commercial system including truncated and non-homogeneous clinically relevant objects.
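A one-dimensional caricature of such a projection-based correction: convolve the measured projection with a broad kernel to get the scatter shape, scale it, and subtract. The Gaussian kernel and the constant scale factor below stand in for the Monte-Carlo-derived kernels and object-size-dependent scaling of the actual algorithm:

```python
import numpy as np

def gaussian_kernel(halfwidth, sigma):
    """Normalized broad kernel standing in for the scatter spread function."""
    x = np.arange(-halfwidth, halfwidth + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def estimate_scatter(projection, kernel, scale):
    """Scatter estimate: object-size-dependent scale times a broad convolution."""
    return scale * np.convolve(projection, kernel, mode="same")

def correct_projection(measured, kernel, scale):
    """Subtract the scatter estimate, clipping to keep intensities physical."""
    return np.clip(measured - estimate_scatter(measured, kernel, scale), 0.0, None)
```

On a flat projection with a normalized kernel and scale 0.2, the corrected interior value drops to 0.8 of the measured one; real corrections estimate the scale from object size (PBSE/IBSE) rather than fixing it.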
International Nuclear Information System (INIS)
Hategan, Cornel; Comisel, Horia; Ionescu, Remus A.
2004-01-01
Quasiresonant scattering consists of a single-channel resonance coupled by direct interaction transitions to competing reaction channels. A description of quasiresonant scattering in terms of generalized reduced K-, R- and S-matrices is developed in this work. Due to the channel coupling, the quasiresonance's decay width is smaller than the width of the ancestral single-channel resonance (direct compression of the resonance). (author)
Donne, A. J. H.
1994-01-01
Thomson scattering is a very powerful diagnostic which is applied at nearly every magnetic confinement device. Depending on the experimental conditions different plasma parameters can be diagnosed. When the wave vector is much larger than the plasma Debye length, the total scattered power is
Neutron scattering studies in the actinide region
International Nuclear Information System (INIS)
Kegel, G.H.R.; Egan, J.J.
1993-09-01
This report discusses the following topics: prompt fission neutron energy spectra for 235 U and 239 Pu; two-parameter measurement of nuclear lifetimes; the "black" neutron detector; data reduction techniques for neutron scattering experiments; inelastic neutron scattering studies in 197 Au; elastic and inelastic scattering studies in 239 Pu; and neutron-induced defects in silicon dioxide MOS structures
International Nuclear Information System (INIS)
Sitenko, A.
1991-01-01
This book emerged out of graduate lectures given by the author at the University of Kiev and is intended as a graduate text. The fundamentals of non-relativistic quantum scattering theory are covered, including some topics, such as the phase-function formalism, separable potentials, and inverse scattering, which are not always covered in textbooks on scattering theory. Criticisms of the text are minor, but the reviewer feels that an inadequate index is provided and that the citing of references in the Russian language is a hindrance in a graduate text
AbouEisha, Hassan M.
2014-01-01
The problem of attribute reduction is an important problem related to feature selection and knowledge discovery. The problem of finding reducts with minimum cardinality is NP-hard. This paper suggests a new algorithm for finding exact reducts with minimum cardinality. This algorithm transforms the initial table to a decision table of a special kind, applies a set of simplification steps to this table, and uses a dynamic programming algorithm to finish the construction of an optimal reduct. I present results of computer experiments for a collection of decision tables from the UCI ML Repository. For many of the tables studied, the simplification steps alone solved the problem.
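The consistency test that defines a reduct, together with an exact minimum-cardinality search, can be sketched directly; the subset enumeration below is exponential in the worst case and stands in for the paper's transformation-plus-dynamic-programming approach:

```python
from itertools import combinations

def determines_decision(rows, decision, attrs):
    """True if projecting the objects onto `attrs` still determines the
    decision (no two objects agree on attrs but disagree on the decision)."""
    seen = {}
    for r, d in zip(rows, decision):
        key = tuple(r[a] for a in attrs)
        if seen.setdefault(key, d) != d:
            return False
    return True

def minimum_reduct(rows, decision):
    """Exact minimum-cardinality reduct, searching subsets of growing size so
    the first consistent subset found is guaranteed minimal."""
    m = len(rows[0])
    for size in range(m + 1):
        for attrs in combinations(range(m), size):
            if determines_decision(rows, decision, attrs):
                return set(attrs)
```

Searching subsets in order of increasing size guarantees the first hit is a minimum reduct; the point of the paper's simplification and dynamic programming is to avoid exactly this exponential enumeration.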
Automated evaluation of one-loop scattering amplitudes
International Nuclear Information System (INIS)
Deurzen, Hans van
2015-01-01
In this dissertation the developments toward fully automated evaluation of one-loop scattering amplitudes are presented, as implemented in the GoSam framework. The code Xsamurai, part of GoSam, is described; it implements the integrand reduction algorithm, including an extension to higher-rank capability. GoSam was used to compute several Higgs boson production channels at NLO QCD. An interface between GoSam and a Monte Carlo program was constructed, which enables computing any process at the NLO precision needed in the LHC era.
Scatter radiation in digital tomosynthesis of the breast
International Nuclear Information System (INIS)
Sechopoulos, Ioannis; Suryanarayanan, Sankararaman; Vedantham, Srinivasan; D'Orsi, Carl J.; Karellas, Andrew
2007-01-01
Digital tomosynthesis of the breast is being investigated as one possible solution to the problem of tissue superposition present in planar mammography. This imaging technique presents various advantages that would make it a feasible replacement for planar mammography, among them similar, if not lower, radiation glandular dose to the breast; implementation on conventional digital mammography technology via relatively simple modifications; and fast acquisition time. One significant problem that tomosynthesis of the breast must overcome, however, is the reduction of x-ray scatter inclusion in the projection images. In tomosynthesis, due to the projection geometry and radiation dose considerations, the use of an antiscatter grid presents several challenges. Therefore, the use of postacquisition software-based scatter reduction algorithms seems well justified, requiring a comprehensive evaluation of x-ray scatter content in the tomosynthesis projections. This study aims to gain insight into the behavior of x-ray scatter in tomosynthesis by characterizing the scatter point spread functions (PSFs) and the scatter to primary ratio (SPR) maps found in tomosynthesis of the breast. This characterization was performed using Monte Carlo simulations, based on the Geant4 toolkit, that simulate the conditions present in a digital tomosynthesis system, including the simulation of the compressed breast in both the cranio-caudal (CC) and the medio-lateral oblique (MLO) views. The variation of the scatter PSF with varying tomosynthesis projection angle, as well as the effects of varying breast glandular fraction and x-ray spectrum, was analyzed. The behavior of the SPR for different projection angle, breast size, thickness, glandular fraction, and x-ray spectrum was also analyzed, and computer fit equations for the magnitude of the SPR at the center of mass for both the CC and the MLO views were found. Within mammographic energies, the x-ray spectrum was found to have no appreciable
International Nuclear Information System (INIS)
Stirling, W.G.; Perry, S.C.
1996-01-01
We outline the theoretical and experimental background to neutron scattering studies of critical phenomena at magnetic and structural phase transitions. The displacive phase transition of SrTiO 3 is discussed, along with examples from recent work on magnetic materials from the rare-earth (Ho, Dy) and actinide (NpAs, NpSb, USb) classes. The impact of synchrotron X-ray scattering is discussed in conclusion. (author) 13 figs., 18 refs
Executable Pseudocode for Graph Algorithms
B. Ó Nualláin (Breanndán)
2015-01-01
Algorithms are written in pseudocode. However the implementation of an algorithm in a conventional, imperative programming language can often be scattered over hundreds of lines of code thus obscuring its essence. This can lead to difficulties in understanding or verifying the
Optimizing cone beam CT scatter estimation in egs-cbct for a clinical and virtual chest phantom
International Nuclear Information System (INIS)
Thing, Rune Slot; Mainegra-Hing, Ernesto
2014-01-01
Purpose: Cone beam computed tomography (CBCT) image quality suffers from contamination from scattered photons in the projection images. Monte Carlo simulations are a powerful tool to investigate the properties of scattered photons. egs-cbct, a recent EGSnrc user code, provides the ability to perform fast scatter calculations in CBCT projection images. This paper investigates how optimization of user inputs can provide the most efficient scatter calculations. Methods: Two simulation geometries with two different x-ray sources were simulated, while the user input parameters for the efficiency improving techniques (EITs) implemented in egs-cbct were varied. Simulation efficiencies were compared to analog simulations performed without using any EITs. Resulting scatter distributions were confirmed unbiased against the analog simulations. Results: The optimal EIT parameter selection depends on the simulation geometry and x-ray source. Forced detection improved the scatter calculation efficiency by 80%. Delta transport improved calculation efficiency by a further 34%, while particle splitting combined with Russian roulette improved the efficiency by a factor of 45 or more. Combining these variance reduction techniques with a built-in denoising algorithm, efficiency improvements of 4 orders of magnitude were achieved. Conclusions: Using the built-in EITs in egs-cbct can improve scatter calculation efficiencies by more than 4 orders of magnitude. To achieve this, the user must optimize the input parameters for the specific simulation geometry. Realizing the full potential of the denoising algorithm requires keeping the statistical uncertainty below a threshold value above which the efficiency drops exponentially.
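The particle splitting and Russian roulette cited in the results are the classic weight-window moves; a toy sketch (the window bounds below are illustrative, not egs-cbct parameters) shows why they are unbiased: both leave the expected transported weight unchanged.

```python
import random

def roulette_or_split(weight, w_low=0.25, w_high=4.0, rng=random):
    """Return a list of statistically equivalent particle weights.
    Splitting divides heavy particles; Russian roulette kills most light ones
    but boosts survivors to w_low, so the expected weight is unchanged."""
    if weight > w_high:
        n = int(weight / w_high) + 1
        return [weight / n] * n               # exact weight conservation
    if weight < w_low:
        if rng.random() < weight / w_low:     # survival probability
            return [w_low]
        return []                             # particle killed
    return [weight]
```

Splitting conserves weight exactly, while roulette conserves it only in expectation, trading a little variance for far fewer low-weight histories to track.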
Inverse electronic scattering by Green's functions and singular values decomposition
International Nuclear Information System (INIS)
Mayer, A.; Vigneron, J.-P.
2000-01-01
An inverse scattering technique is developed to enable a sample reconstruction from the diffraction figures obtained by electronic projection microscopy. In its Green's functions formulation, this technique takes account of all orders of diffraction by performing an iterative reconstruction of the wave function on the observation screen. This scattered wave function is then backpropagated to the sample to determine the potential-energy distribution, which is assumed real-valued. The method relies on the use of singular values decomposition techniques, thus providing the best least-squares solutions and enabling a reduction of noise. The technique is applied to the analysis of a two-dimensional nanometric sample that is observed in Fresnel conditions with an electronic energy of 25 eV. The algorithm turns out to provide results with a mean relative error on the order of 5% and to be very stable against random noise.
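The role of the singular values decomposition here — a least-squares inverse that discards small singular values so measurement noise is not amplified by 1/σ — reduces to a few lines for a generic linear model (the matrices below are illustrative, not the electron-propagation operator):

```python
import numpy as np

def tsvd_solve(A, b, rcond=1e-3):
    """Least-squares solve via truncated SVD: singular values below
    rcond * s_max are discarded so noise is not amplified by 1/s."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rcond * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])
```

Components with σ below the cutoff are simply dropped, which yields the best least-squares solution on the retained subspace and the noise reduction the abstract mentions.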
TU-F-18C-03: X-Ray Scatter Correction in Breast CT: Advances and Patient Testing
International Nuclear Information System (INIS)
Ramamurthy, S; Sechopoulos, I
2014-01-01
Purpose: To further develop and perform patient testing of an x-ray scatter correction algorithm for dedicated breast computed tomography (BCT). Methods: A previously proposed algorithm for x-ray scatter signal reduction in BCT imaging was modified and tested with a phantom and on patients. A wireless electronic positioner system was designed and added to the BCT system that positions a tungsten plate in and out of the x-ray beam. The interpolation used by the algorithm was replaced with a radial basis function-based algorithm, with automated exclusion of non-valid sampled points due to patient motion or other factors. A 3D adaptive noise reduction filter was also introduced to reduce the impact of scatter quantum noise post-reconstruction. The impact of the improved algorithm on image quality was evaluated using a breast phantom and seven patient breasts, quantitatively using metrics such as signal difference (SD) and signal difference-to-noise ratio (SDNR), and qualitatively using image profiles. Results: The improvements in the algorithm resulted in a more robust interpolation step, with no introduction of image artifacts, especially at the imaged object boundaries, which was an issue in the previous implementation. Qualitative evaluation of the reconstructed slices and corresponding profiles shows excellent homogeneity of both the background and the higher density features throughout the whole imaged object, as well as increased accuracy in the Hounsfield Unit (HU) values of the tissues. Profiles also demonstrate a substantial increase in both SD and SDNR between glandular and adipose regions compared to both the uncorrected and system-corrected images. Conclusion: The improved scatter correction algorithm can be reliably used during patient BCT acquisitions with no introduction of artifacts, resulting in substantial improvement in image quality. Its impact on actual clinical performance needs to be evaluated in the future. Research Agreement, Koning Corp., Hologic
An algebraic approach to the scattering equations
Energy Technology Data Exchange (ETDEWEB)
Huang, Rijun; Rao, Junjie [Zhejiang Institute of Modern Physics, Zhejiang University,Hangzhou, 310027 (China); Feng, Bo [Zhejiang Institute of Modern Physics, Zhejiang University,Hangzhou, 310027 (China); Center of Mathematical Science, Zhejiang University,Hangzhou, 310027 (China); He, Yang-Hui [School of Physics, NanKai University,Tianjin, 300071 (China); Department of Mathematics, City University,London, EC1V 0HB (United Kingdom); Merton College, University of Oxford,Oxford, OX14JD (United Kingdom)
2015-12-10
We employ the so-called companion matrix method from computational algebraic geometry, tailored for zero-dimensional ideals, to study the scattering equations. The method renders the CHY-integrand of scattering amplitudes computable using simple linear algebra and is amenable to an algorithmic approach. Certain identities in the amplitudes as well as rationality of the final integrand become immediate in this formalism.
An algebraic approach to the scattering equations
International Nuclear Information System (INIS)
Huang, Rijun; Rao, Junjie; Feng, Bo; He, Yang-Hui
2015-01-01
We employ the so-called companion matrix method from computational algebraic geometry, tailored for zero-dimensional ideals, to study the scattering equations. The method renders the CHY-integrand of scattering amplitudes computable using simple linear algebra and is amenable to an algorithmic approach. Certain identities in the amplitudes as well as rationality of the final integrand become immediate in this formalism.
Directory of Open Access Journals (Sweden)
Marco Pelizzone
2008-06-01
Users of cochlear implant systems, that is, of auditory aids which stimulate the auditory nerve at the cochlea electrically, often complain about poor speech understanding in noisy environments. Despite the proven advantages of multimicrophone directional noise reduction systems for conventional hearing aids, only one major manufacturer has so far implemented such a system in a product, presumably because of the added power consumption and size. We present a physically small (intermicrophone distance 7 mm) and computationally inexpensive adaptive noise reduction system suitable for behind-the-ear cochlear implant speech processors. Supporting algorithms, which allow the adjustment of the opening angle and the maximum noise suppression, are proposed and evaluated. A portable real-time device for tests in real acoustic environments is presented.
DEFF Research Database (Denmark)
Mahnke, Martina; Uprichard, Emma
2014-01-01
Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...
International Nuclear Information System (INIS)
Botto, D.J.; Pratt, R.H.
1979-05-01
The current status of Compton scattering, both experimental observations and the theoretical predictions, is examined. Classes of experiments are distinguished and the results obtained are summarized. The validity of the incoherent scattering function approximation and the impulse approximation is discussed. These simple theoretical approaches are compared with predictions of the nonrelativistic dipole formula of Gavrila and with the relativistic results of Whittingham. It is noted that the A⁻²-based approximations fail to predict resonances and an infrared divergence, both of which have been observed. It appears that at present the various available theoretical approaches differ significantly in their predictions and that further and more systematic work is required.
Energy Technology Data Exchange (ETDEWEB)
Botto, D.J.; Pratt, R.H.
1979-05-01
The current status of Compton scattering, both experimental observations and the theoretical predictions, is examined. Classes of experiments are distinguished and the results obtained are summarized. The validity of the incoherent scattering function approximation and the impulse approximation is discussed. These simple theoretical approaches are compared with predictions of the nonrelativistic dipole formula of Gavrila and with the relativistic results of Whittingham. It is noted that the A⁻²-based approximations fail to predict resonances and an infrared divergence, both of which have been observed. It appears that at present the various available theoretical approaches differ significantly in their predictions and that further and more systematic work is required.
Kompis, Martin; Bertram, Matthias; François, Jacques; Pelizzone, Marco
2008-01-01
Users of cochlear implant systems, that is, of auditory aids which stimulate the auditory nerve at the cochlea electrically, often complain about poor speech understanding in noisy environments. Despite the proven advantages of multimicrophone directional noise reduction systems for conventional hearing aids, only one major manufacturer has so far implemented such a system in a product, presumably because of the added power consumption and size. We present a physically small (intermicrophone ...
Sechopoulos, Ioannis; Bliznakova, Kristina; Fei, Baowei
2013-10-01
power spectrum reflected a fast drop-off with increasing spatial frequency, with a reduction of four orders of magnitude by 0.1 lp/mm. The β values for the scatter signal were 6.14 and 6.39 for the 0° and 30° projections, respectively. Although the low-frequency characteristics of scatter in mammography and breast tomosynthesis were known, a quantitative analysis of the frequency domain characteristics of this signal was needed in order to optimize previously proposed software-based x-ray scatter reduction algorithms for these imaging modalities.
Zhu, Peijuan; Ding, Wei; Tong, Wei; Ghosal, Anima; Alton, Kevin; Chowdhury, Swapan
2009-06-01
A retention-time-shift-tolerant background subtraction and noise reduction algorithm (BgS-NoRA) is implemented using the statistical programming language R to remove non-drug-related ion signals from accurate mass liquid chromatography/mass spectrometry (LC/MS) data. The background-subtraction part of the algorithm is similar to a previously published procedure (Zhang H and Yang Y. J. Mass Spectrom. 2008, 43: 1181-1190). The noise reduction algorithm (NoRA) is an add-on feature to help further clean up the residual matrix ion noises after background subtraction. It functions by removing ion signals that are not consistent across many adjacent scans. The effectiveness of BgS-NoRA was examined in biological matrices by spiking blank plasma extract, bile and urine with diclofenac and ibuprofen that have been pre-metabolized by microsomal incubation. Efficient removal of background ions permitted the detection of drug-related ions in in vivo samples (plasma, bile, urine and feces) obtained from rats orally dosed with (14)C-loratadine with minimal interference. Results from these experiments demonstrate that BgS-NoRA is more effective in removing analyte-unrelated ions than background subtraction alone. NoRA is shown to be particularly effective in the early retention region for urine samples and middle retention region for bile samples, where the matrix ion signals still dominate the total ion chromatograms (TICs) after background subtraction. In most cases, the TICs after BgS-NoRA are in excellent qualitative correlation to the radiochromatograms. BgS-NoRA will be a very useful tool in metabolite detection and identification work, especially in first-in-human (FIH) studies and multiple dose toxicology studies where non-radio-labeled drugs are administered. Data from these types of studies are critical to meet the latest FDA guidance on Metabolite in Safety Testing (MIST). Copyright (c) 2009 John Wiley & Sons, Ltd.
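The add-on noise reduction step (NoRA) keeps an ion signal only if it persists across enough adjacent scans. A sketch over a scans × m/z-bins intensity matrix, with illustrative window and hit-count settings rather than the published ones:

```python
import numpy as np

def noise_reduce(intensity, window=5, min_hits=3, threshold=0.0):
    """Zero out ion signals that are not persistent across adjacent scans.
    intensity: 2-D array indexed as [scan, m/z bin]."""
    present = intensity > threshold
    kernel = np.ones(window, dtype=int)
    # per m/z bin: number of scans within the window where the ion is present
    hits = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0,
        present.astype(int))
    out = intensity.copy()
    out[hits < min_hits] = 0.0
    return out
```

A persistent ion survives while a single-scan spike is zeroed; tuning window and min_hits trades sensitivity to short-eluting analytes against residual matrix-ion noise.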
Progress on Thomson scattering in the Pegasus Toroidal Experiment
International Nuclear Information System (INIS)
Schlossberg, D J; Bongard, M W; Fonck, R J; Schoenbeck, N L; Winz, G R
2013-01-01
A novel Thomson scattering system has been implemented on the Pegasus Toroidal Experiment where typical densities of 10¹⁹ m⁻³ and electron temperatures of 10 to 500 eV are expected. The system leverages technological advances in high-energy pulsed lasers, volume phase holographic (VPH) diffraction gratings, and gated image intensified (ICCD) cameras to provide a relatively low-maintenance, economical, robust diagnostic system. Scattering is induced by a frequency-doubled, Q-switched Nd:YAG laser (2 J at 532 nm, 7 ns FWHM pulse) directed to the plasma over a 7.7 m long beam path, and focused to VPH transmission gratings (eff. > 80%) and fast-gated ICCDs (gate > 2 ns, Gen III intensifier) with high-throughput (F/1.8), achromatic lensing. A stray light mitigation facility has been implemented, consisting of a multi-aperture optical baffle system and a simple beam dump. Successful stray light reduction has enabled detection of scattered signal, and Rayleigh scattering has been used to provide a relative calibration. Initial temperature measurements have been made and data analysis algorithms are under development.
Progress on Thomson scattering in the Pegasus Toroidal Experiment
Schlossberg, D. J.; Bongard, M. W.; Fonck, R. J.; Schoenbeck, N. L.; Winz, G. R.
2013-11-01
A novel Thomson scattering system has been implemented on the Pegasus Toroidal Experiment where typical densities of 10¹⁹ m⁻³ and electron temperatures of 10 to 500 eV are expected. The system leverages technological advances in high-energy pulsed lasers, volume phase holographic (VPH) diffraction gratings, and gated image intensified (ICCD) cameras to provide a relatively low-maintenance, economical, robust diagnostic system. Scattering is induced by a frequency-doubled, Q-switched Nd:YAG laser (2 J at 532 nm, 7 ns FWHM pulse) directed to the plasma over a 7.7 m long beam path, and focused to VPH transmission gratings (eff. > 80%) and fast-gated ICCDs (gate > 2 ns, Gen III intensifier) with high-throughput (F/1.8), achromatic lensing. A stray light mitigation facility has been implemented, consisting of a multi-aperture optical baffle system and a simple beam dump. Successful stray light reduction has enabled detection of scattered signal, and Rayleigh scattering has been used to provide a relative calibration. Initial temperature measurements have been made and data analysis algorithms are under development.
Optimizing cone beam CT scatter estimation in egs_cbct for a clinical and virtual chest phantom
DEFF Research Database (Denmark)
Slot Thing, Rune; Mainegra-Hing, Ernesto
2014-01-01
improving techniques (EITs) implemented in egs_cbct were varied. Simulation efficiencies were compared to analog simulations performed without using any EITs. Resulting scatter distributions were confirmed unbiased against the analog simulations. RESULTS: The optimal EIT parameter selection depends… reduction techniques with a built-in denoising algorithm, efficiency improvements of 4 orders of magnitude were achieved. CONCLUSIONS: Using the built-in EITs in egs_cbct can improve scatter calculation efficiencies by more than 4 orders of magnitude. To achieve this, the user must optimize the input…
Advanced defect detection algorithm using clustering in ultrasonic NDE
Gongzhang, Rui; Gachagan, Anthony
2016-02-01
A range of materials used in industry exhibit scattering properties which limit ultrasonic NDE. Many algorithms have been proposed to enhance defect detection ability, such as the well-known Split Spectrum Processing (SSP) technique. Scattering noise usually cannot be fully removed, and the remaining noise can easily be confused with real feature signals, becoming artefacts during the image interpretation stage. This paper presents an advanced algorithm to further reduce the influence of artefacts remaining in A-scan data after processing with a conventional defect detection algorithm. The raw A-scan data can be acquired from either traditional single transducer or phased array configurations. The proposed algorithm uses the concept of unsupervised machine learning to cluster segmental defect signals from pre-processed A-scans into different classes. The distinction and similarity between each class and the ensemble of randomly selected noise segments can be observed by applying a classification algorithm. Each class is then labelled as `legitimate reflector' or `artefacts' based on this observation, and the expected probability of detection (PoD) and probability of false alarm (PFA) determined. To facilitate data collection and validate the proposed algorithm, a 5 MHz linear array transducer is used to collect A-scans from both austenitic steel and Inconel samples. Each pulse-echo A-scan is pre-processed using SSP, and the subsequent application of the proposed clustering algorithm has provided an additional reduction to PFA while maintaining PoD for both samples compared with SSP results alone.
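The clustering-and-labelling step described above can be sketched as follows, assuming a single hypothetical 1-D feature per segment (e.g. segment energy) and a plain two-centre k-means; the paper's actual features and classifier are not specified here.

```python
import numpy as np

def cluster_segments(features, noise_features, iters=50):
    """Two-class 1-D k-means over segment features (illustrative sketch).

    Segments are clustered into two groups; the cluster whose centre lies
    closer to the mean of a reference noise ensemble is labelled
    'artefact', the other 'legitimate reflector'.
    """
    f = np.asarray(features, dtype=float)
    c = np.array([f.min(), f.max()])          # initial cluster centres
    for _ in range(iters):
        lab = np.argmin(np.abs(f[:, None] - c[None, :]), axis=1)
        for k in (0, 1):
            if np.any(lab == k):
                c[k] = f[lab == k].mean()
    noise_mean = float(np.mean(noise_features))
    artefact_cluster = int(np.argmin(np.abs(c - noise_mean)))
    return ['artefact' if l == artefact_cluster else 'legitimate reflector'
            for l in lab]
```

Segments whose feature statistics resemble the random-noise ensemble are flagged as artefacts, which is the mechanism by which PFA is reduced while PoD is preserved.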
International Nuclear Information System (INIS)
Leader, Elliot
1991-01-01
With very few unexplained results to challenge conventional ideas, physicists have to look hard for gaps in understanding. An area of physics which offers a lot more than meets the eye is elastic and diffractive scattering, where particles either 'bounce' off each other, emerging unscathed, or just graze past, emerging relatively unscathed. The 'Blois' workshops provide a regular focus for this unspectacular but compelling physics, attracting highly motivated devotees.
International Nuclear Information System (INIS)
1991-02-01
The annual report on hand gives an overview of the research work carried out at the Laboratory for Neutron Scattering (LNS) of the ETH Zuerich in 1990. Using the method of neutron scattering, it is possible to examine in detail the static and dynamic properties of condensed matter. In accordance with the multidisciplinary character of the method, the LNS has for years maintained intensive co-operation with numerous institutes in the areas of biology, chemistry, solid-state physics, crystallography and materials research. In 1990 over 100 scientists from more than 40 research groups at home and abroad took part in the experiments. It was again a pleasure to see the number of graduate students who, while studying for a doctorate, could be introduced to neutron scattering during their stay at the LNS and were thus in a position to address central questions of their dissertations using this modern experimental method of solid-state research. In addition to the numerous and interesting structural studies, the scientific programme nowadays increasingly includes topical studies connected with high-temperature superconductors and materials research.
Friedrich, Harald
2016-01-01
This corrected and updated second edition of "Scattering Theory" presents a concise and modern coverage of the subject. In the present treatment, special attention is given to the role played by the long-range behaviour of the projectile-target interaction, and a theory is developed, which is well suited to describe near-threshold bound and continuum states in realistic binary systems such as diatomic molecules or molecular ions. It is motivated by the fact that experimental advances have shifted and broadened the scope of applications where concepts from scattering theory are used, e.g. to the field of ultracold atoms and molecules, which has been experiencing enormous growth in recent years, largely triggered by the successful realization of Bose-Einstein condensates of dilute atomic gases in 1995. The book contains sections on special topics such as near-threshold quantization, quantum reflection, Feshbach resonances and the quantum description of scattering in two dimensions. The level of abstraction is k...
Rough surface scattering simulations using graphics cards
International Nuclear Information System (INIS)
Klapetek, Petr; Valtr, Miroslav; Poruba, Ales; Necas, David; Ohlidal, Miloslav
2010-01-01
In this article we present results of rough surface scattering calculations using a graphical processing unit implementation of the Finite Difference in Time Domain algorithm. Numerical results are compared to real measurements and computational performance is compared to computer processor implementation of the same algorithm. As a basis for computations, atomic force microscope measurements of surface morphology are used. It is shown that the graphical processing unit capabilities can be used to speedup presented computationally demanding algorithms without loss of precision.
De Götzen , Amalia; Mion , Luca; Tache , Olivier
2007-01-01
We call sound algorithms the categories of algorithms that deal with the digital sound signal. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
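The basic concepts named above (fitness-based selection, crossover, mutation) can be illustrated on the toy one-max problem, where fitness is simply the number of 1-bits; all parameter values below are arbitrary choices for the sketch.

```python
import random

def one_max_ga(n_bits=20, pop_size=30, generations=60, p_mut=0.02, seed=1):
    """Minimal genetic algorithm maximizing the number of 1-bits."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    fitness = sum                                    # one-max fitness
    for _ in range(generations):
        def pick():                                  # tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)           # single-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt                                    # "survival of the fittest"
    return max(pop, key=fitness)
```

Even this bare-bones loop reliably drives the population toward the all-ones optimum, which is the sense in which genetic algorithms are adaptive search procedures.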
Small angle neutron scattering
Directory of Open Access Journals (Sweden)
Cousin Fabrice
2015-01-01
Full Text Available Small Angle Neutron Scattering (SANS) is a technique that probes the 3-D structure of materials on a typical size range from ∼1 nm up to ∼a few 100 nm, the information obtained being statistically averaged over a sample whose volume is ∼1 cm³. This very rich technique permits a full structural characterization of a given object of nanometric dimensions (radius of gyration, shape, volume or mass, fractal dimension, specific area…) through the determination of the form factor, as well as a description of the way objects are organized within a continuous medium, and therefore of the interactions between them, through the determination of the structure factor. The specific properties of neutrons (the possibility of tuning the scattering intensity by isotopic substitution, sensitivity to magnetism, negligible absorption, low energy of the incident neutrons) make it particularly interesting in the fields of soft matter, biophysics, magnetic materials and metallurgy. In particular, the contrast variation methods allow one to extract information that cannot be obtained by any other experimental technique. This course is divided in two parts. The first is devoted to the principles of SANS: basics (formalism, coherent versus incoherent scattering, notion of elementary scatterer), form factor analysis (I(q→0), Guinier regime, intermediate regime, Porod regime, polydisperse systems), structure factor analysis (2nd virial coefficient, integral equations, characterization of aggregates), and contrast variation methods (how to create contrast in a homogeneous system, matching in ternary systems, extrapolation to zero concentration, Zero Averaged Contrast). It is illustrated by some representative examples. The second describes the experimental aspects of SANS to guide users in their future experiments: description of a SANS spectrometer, resolution of the spectrometer, optimization of
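As a concrete instance of the form-factor analysis mentioned above, the Guinier regime gives ln I(q) = ln I(0) - (Rg^2/3) q^2 for q·Rg below roughly 1.3, so the radius of gyration follows from a linear fit of ln I against q². The helper below is a generic sketch, not tied to any particular instrument or reduction software.

```python
import numpy as np

def guinier_fit(q, I):
    """Estimate I(0) and the radius of gyration Rg from Guinier-regime data,
    using the linearized form ln I(q) = ln I(0) - (Rg**2 / 3) * q**2."""
    slope, intercept = np.polyfit(np.asarray(q) ** 2, np.log(I), 1)
    Rg = np.sqrt(-3.0 * slope)      # slope is negative in the Guinier regime
    return np.exp(intercept), Rg
```

On synthetic Guinier data the fit recovers the generating I(0) and Rg exactly, up to floating-point error.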
Feng, Cui; Zhu, Di; Zou, Xianlun; Li, Anqin; Hu, Xuemei; Li, Zhen; Hu, Daoyu
2018-03-01
To investigate the subjective and quantitative image quality and radiation exposure of CT enterography (CTE) performed at low tube voltage and low contrast agent concentration with an adaptive statistical iterative reconstruction (ASIR) algorithm, compared with conventional CTE. One hundred thirty-seven patients with suspected or proven gastrointestinal diseases underwent contrast-enhanced CTE on a multidetector computed tomography (MDCT) scanner. All cases were assigned to 2 groups. Group A (n = 79) underwent CT with low tube voltage selected by patient body mass index (BMI) and a low-concentration contrast agent (270 mg I/mL); the images were reconstructed with both the standard filtered back projection (FBP) algorithm and the 50% ASIR algorithm. Group B (n = 58) underwent conventional CTE with 120 kVp and 350 mg I/mL contrast agent; the images were reconstructed with the FBP algorithm. The computed tomography dose index volume (CTDIvol), dose length product (DLP), effective dose (ED), and total iodine dosage were calculated and compared. The CT values, contrast-to-noise ratio (CNR), and signal-to-noise ratio (SNR) of the normal bowel wall, gastrointestinal lesions, and mesenteric vessels were assessed and compared. The subjective image quality was assessed independently and blindly by 2 radiologists using a 5-point Likert scale. CTDIvol was lower in group A than in group B (8.64 ± 2.72 vs 11.55 ± 3.95) … and all image quality scores were greater than or equal to 3 (moderate). Fifty percent ASIR-A group images provided lower image noise, but similar or higher quantitative image quality in comparison with FBP-B group images. Compared with the conventional protocol, CTE performed at low tube voltage and low contrast agent concentration with the 50% ASIR algorithm produced diagnostically acceptable image quality with a mean ED of 6.34 mSv and a total iodine dose reduction of 26.1%.
International Nuclear Information System (INIS)
Abdallh, A.; Crevecoeur, G.; Dupré, L.
2012-01-01
The magnetic characteristics of the electromagnetic devices' core materials can be recovered by solving an inverse problem, where sets of measurements need to be properly interpreted using a forward numerical model of the device. However, the uncertainties of the geometrical parameter values in the forward model lead to appreciable recovery errors in the recovered values of the material parameters. In this paper, we propose an effective inverse approach technique, in which the influences of the uncertainties in the geometrical model parameters are minimized. In this proposed approach, the cost function that needs to be minimized is adapted with respect to the uncertain geometrical model parameters. The proposed methodology is applied to the identification of the magnetizing B–H curve of the magnetic material of an EI core inductor. The numerical results show a significant reduction of the recovery errors in the identified magnetic material parameter values. Moreover, the proposed methodology is validated by solving an inverse problem starting from real magnetic measurements. - Highlights: ► A new method to minimize the influence of the uncertain parameters in inverse problems is proposed. ► The technique is based on adapting iteratively the objective function that needs to be minimized. ► The objective function is adapted by the model response sensitivity to the uncertain parameters. ► The proposed technique is applied for recovering the B–H curve of an EI core inductor material. ► The error in the inverse problem solution is dramatically reduced using the proposed methodology.
International Nuclear Information System (INIS)
Reboredo, F.A.; Hood, R.Q.; Kent, P.C.
2009-01-01
We develop a formalism and present an algorithm for optimization of the trial wave function used in fixed-node diffusion quantum Monte Carlo (DMC) methods. The formalism is based on the DMC mixed estimator of the ground state probability density. We take advantage of a basic property of the walker configuration distribution generated in a DMC calculation to (i) project out a multi-determinant expansion of the fixed-node ground state wave function and (ii) define a cost function that relates the interacting-ground-state-fixed-node and the non-interacting trial wave functions. We show that (a) locally smoothing out the kink of the fixed-node ground-state wave function at the node generates a new trial wave function with better nodal structure, and (b) we argue that the noise in the fixed-node wave function resulting from finite sampling plays a beneficial role, allowing the nodes to adjust towards those of the exact many-body ground state in a simulated-annealing-like process. Based on these principles, we propose a method to improve both single-determinant and multi-determinant expansions of the trial wave function. The method can be generalized to other wave function forms such as Pfaffians. We test the method in a model system where benchmark configuration interaction calculations can be performed and most components of the Hamiltonian are evaluated analytically. Comparing the DMC calculations with the exact solutions, we find that the trial wave function is systematically improved. The overlap of the optimized trial wave function and the exact ground state converges to 100% even starting from wave functions orthogonal to the exact ground state. Similarly, the DMC total energy and density converge to the exact solutions for the model. In the optimization process we find an optimal non-interacting nodal potential of density-functional-like form whose existence was predicted in a previous publication (Phys. Rev. B 77 245110 (2008)). Tests of the method are
Solomon, Justin; Mileto, Achille; Ramirez-Giraldo, Juan Carlos; Samei, Ehsan
2015-06-01
To assess the effect of radiation dose reduction on low-contrast detectability by using an advanced modeled iterative reconstruction (ADMIRE; Siemens Healthcare, Forchheim, Germany) algorithm in a contrast-detail phantom with a third-generation dual-source multidetector computed tomography (CT) scanner. A proprietary phantom with a range of low-contrast cylindrical objects, representing five contrast levels (range, 5-20 HU) and three sizes (range, 2-6 mm), was fabricated with a three-dimensional printer and imaged with a third-generation dual-source CT scanner at various radiation dose index levels (range, 0.74-5.8 mGy). Image data sets were reconstructed by using different section thicknesses (range, 0.6-5.0 mm) and reconstruction algorithms (filtered back projection [FBP] and ADMIRE with a strength range of three to five). Eleven independent readers blinded to technique and reconstruction method assessed all data sets in two reading sessions by measuring detection accuracy with a two-alternative forced choice approach (first session) and by scoring the total number of visible object groups (second session). Dose reduction potentials based on both reading sessions were estimated. Results between FBP and ADMIRE were compared by using both paired t tests and analysis of variance tests at the 95% significance level. During the first session, detection accuracy increased with increasing contrast, size, and dose index (diagnostic accuracy range, 50%-87%; interobserver variability, ±7%). When compared with FBP, ADMIRE improved detection accuracy by 5.2% on average across the investigated variables. … Online supplemental material is available for this article. RSNA, 2015
De Wolf, E.A.
2002-01-01
We discuss basic concepts and properties of diffractive phenomena in soft hadron collisions and in deep-inelastic scattering at low Bjorken-x. The paper is not a review of the rapidly developing field but presents an attempt to show in simple terms the close inter-relationship between the dynamics of high-energy hadronic and deep-inelastic diffraction. Using the saturation model of Golec-Biernat and Wusthoff as an example, a simple explanation of geometrical scaling is presented. The relation between the QCD anomalous multiplicity dimension and the Pomeron intercept is discussed.
International Nuclear Information System (INIS)
Wolf, E.A. de
2002-01-01
We discuss basic concepts and properties of diffractive phenomena in soft hadron collisions and in deep-inelastic scattering at low Bjorken-x. The paper is not a review of the rapidly developing field but presents an attempt to show in simple terms the close inter-relationship between the dynamics of high-energy hadronic and deep-inelastic diffraction. Using the saturation model of Golec-Biernat and Wuesthoff as an example, a simple explanation of geometrical scaling is presented. The relation between the QCD anomalous multiplicity dimension and the Pomeron intercept is discussed. (author)
Geometric approximation algorithms
Har-Peled, Sariel
2011-01-01
Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.
Totally parallel multilevel algorithms
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
International Nuclear Information System (INIS)
Dinev, D.
1996-01-01
Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)
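The random-search, combinatorial flavour of such magnet-sorting algorithms can be sketched as a swap-and-keep loop. The goal function below (peak accumulated field error around the ring) is a crude illustrative stand-in for the phase-space smear, and all parameters are assumptions of the sketch.

```python
import random
from itertools import accumulate

def sort_magnets(errors, goal, iters=2000, seed=0):
    """Random combinatorial search: swap two magnets in the ring ordering
    and keep the swap whenever the goal function decreases."""
    rng = random.Random(seed)
    order = list(errors)
    best = goal(order)
    for _ in range(iters):
        i, j = rng.randrange(len(order)), rng.randrange(len(order))
        order[i], order[j] = order[j], order[i]
        g = goal(order)
        if g < best:
            best = g
        else:                                    # revert non-improving swap
            order[i], order[j] = order[j], order[i]
    return order, best

def peak_accumulated_error(order):
    """Worst accumulated field error around the ring (smear stand-in)."""
    return max(abs(s) for s in accumulate(order))
```

Starting from a poor ordering (e.g. all positive errors grouped together), the search tends to interleave positive and negative errors so that their effects cancel locally.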
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
Tel, G.
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of
International Nuclear Information System (INIS)
Brueckel, Thomas; Heger, Gernot; Richter, Dieter; Roth, Georg; Zorn, Reiner
2012-01-01
The following topics are dealt with: Neutron scattering in contemporary research, neutron sources, symmetry of crystals, diffraction, nanostructures investigated by small-angle neutron scattering, the structure of macromolecules, spin dependent and magnetic scattering, structural analysis, neutron reflectometry, magnetic nanostructures, inelastic scattering, strongly correlated electrons, dynamics of macromolecules, applications of neutron scattering. (HSI)
Diffuse scattering from crystals with point defects
International Nuclear Information System (INIS)
Andrushevsky, N.M.; Shchedrin, B.M.; Simonov, V.I.; Malakhova, L.F.
2002-01-01
The analytical expressions for calculating the intensities of X-ray diffuse scattering from a crystal of finite dimensions and monatomic substitutional, interstitial, or vacancy-type point defects have been derived. The method for the determination of the three-dimensional structure from experimental diffuse-scattering data from crystals with point defects having various concentrations is discussed, and corresponding numerical algorithms are suggested.
Direct numerical reconstruction of conductivities in three dimensions using scattering transforms
DEFF Research Database (Denmark)
Bikowski, Jutta; Knudsen, Kim; Mueller, Jennifer L
2011-01-01
A direct three-dimensional EIT reconstruction algorithm based on complex geometrical optics solutions and a nonlinear scattering transform is presented and implemented for spherically symmetric conductivity distributions. The scattering transform is computed both with a Born approximation and from...
Direct and inverse scattering for viscoelastic media
International Nuclear Information System (INIS)
Ammicht, E.; Corones, J.P.; Krueger, R.J.
1987-01-01
A time domain approach to direct and inverse scattering problems for one-dimensional viscoelastic media is presented. Such media can be characterized as having a constitutive relation between stress and strain which involves the past history of the strain through a memory function, the relaxation modulus. In the approach in this article, the relaxation modulus of a material is shown to be related to the reflection properties of the material. This relation provides a constructive algorithm for direct and inverse scattering problems. A numerical implementation of this algorithm is tested on several problems involving realistic relaxation moduli.
Efficient waste reduction algorithms based on alternative ...
African Journals Online (AJOL)
Alternative heuristic functions are investigated and applied to the modified Wang … rectangles have types ri (i = 1,…,n), where each type has a demand constraint of…
Bidirectional optical scattering facility
Federal Laboratory Consortium — Goniometric optical scatter instrument (GOSI): The bidirectional reflectance distribution function (BRDF) quantifies the angular distribution of light scattered from a…
Mosaic crystal algorithm for Monte Carlo simulations
Seeger, P A
2002-01-01
An algorithm is presented for calculating reflectivity, absorption, and scattering of mosaic crystals in Monte Carlo simulations of neutron instruments. The algorithm uses multi-step transport through the crystal with an exact solution of the Darwin equations at each step. It relies on the kinematical model for Bragg reflection (with parameters adjusted to reproduce experimental data). For computation of thermal effects (the Debye-Waller factor and coherent inelastic scattering), an expansion of the Debye integral as a rapidly converging series of exponential terms is also presented. Any crystal geometry and plane orientation may be treated. The algorithm has been incorporated into the neutron instrument simulation package NISP. (orig.)
Parton-parton scattering at two-loops
International Nuclear Information System (INIS)
Tejeda Yeomans, M.E.
2001-01-01
We present an algorithm for the calculation of scalar and tensor one- and two-loop integrals that contribute to the virtual corrections of 2 → 2 partonic scattering. First, the tensor integrals are related to scalar integrals that contain an irreducible propagator-like structure in the numerator. Then, we use Integration by Parts and Lorentz Invariance recurrence relations to build a general system of equations that enables the reduction of any scalar integral (with and without structure in the numerator) to a basis set of master integrals. Their expansions in ε = 2 - D/2 have already been calculated and we present a summary of the techniques that have been used to this end, as well as a compilation of the expansions we need in the different physical regions. We then apply this algorithm to the direct evaluation of the Feynman diagrams contributing to the O(αs⁴) one- and two-loop matrix elements for massless like and unlike quark-quark, quark-gluon and gluon-gluon scattering. The analytic expressions we provide are regularised in Conventional Dimensional Regularisation and renormalised in the MS-bar scheme. Finally, we show that the structure of the infrared divergences agrees with that predicted by the application of Catani's formalism to the analysis of each partonic scattering process. The results presented in this thesis provide the complete calculation of the one- and two-loop matrix elements for 2 → 2 processes needed for the next-to-next-to-leading order contribution to inclusive jet production at hadron colliders. (author)
Parallel Algorithms and Patterns
Energy Technology Data Exchange (ETDEWEB)
Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
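One of the patterns named above, the prefix scan, can be sketched in serial Python using the Hillis-Steele doubling structure, in which each of the log2(n) sweeps could update all elements concurrently on parallel hardware; this is an illustrative sketch, not code from the presentation.

```python
def prefix_scan(values):
    """Inclusive prefix sum via the Hillis-Steele doubling pattern.

    Each sweep adds the element `step` positions to the left; on a
    parallel machine every element of a sweep updates concurrently,
    giving O(log n) sweeps instead of an O(n) sequential chain.
    """
    x = list(values)
    step = 1
    while step < len(x):
        x = [x[i] + (x[i - step] if i >= step else 0) for i in range(len(x))]
        step *= 2
    return x
```

The same doubling skeleton underlies parallel reductions, with the final element of the scan equal to the total sum.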
Energy Technology Data Exchange (ETDEWEB)
Brueckel, Thomas; Heger, Gernot; Richter, Dieter; Roth, Georg; Zorn, Reiner [eds.
2010-07-01
The following topics are dealt with: Neutron sources, symmetry of crystals, diffraction, nanostructures investigated by small-angle neutron scattering, the structure of macromolecules, spin dependent and magnetic scattering, structural analysis, neutron reflectometry, magnetic nanostructures, inelastic scattering, strongly correlated electrons, dynamics of macromolecules, applications of neutron scattering. (HSI)
International Nuclear Information System (INIS)
Brueckel, Thomas; Heger, Gernot; Richter, Dieter; Roth, Georg; Zorn, Reiner
2013-01-01
The following topics are dealt with: Neutron sources, symmetry of crystals, nanostructures investigated by small-angle neutron scattering, structure of macromolecules, spin dependent and magnetic scattering, structural analysis, neutron reflectometry, magnetic nanostructures, inelastic neutron scattering, strongly correlated electrons, polymer dynamics, applications of neutron scattering. (HSI)
International Nuclear Information System (INIS)
Brueckel, Thomas; Heger, Gernot; Richter, Dieter; Roth, Georg; Zorn, Reiner
2010-01-01
The following topics are dealt with: Neutron sources, symmetry of crystals, diffraction, nanostructures investigated by small-angle neutron scattering, the structure of macromolecules, spin dependent and magnetic scattering, structural analysis, neutron reflectometry, magnetic nanostructures, inelastic scattering, strongly correlated electrons, dynamics of macromolecules, applications of neutron scattering. (HSI)
International Nuclear Information System (INIS)
Jayaswal, B.; Mazumder, S.
1998-09-01
Small-angle scattering data from strong scattering systems, e.g. porous materials, cannot be analysed by invoking the single scattering approximation, as specimens thick enough to replicate the bulk matrix in its essential properties are too thick for the approximation to remain valid. The presence of multiple scattering is indicated by the breakdown of the functional invariance of the observed scattering profile under variation of sample thickness and/or wavelength of the probing radiation. This article delineates how failure to account for multiple scattering affects the results of analysis, and how to correct the data for its effect. It deals with an algorithm to extract the single scattering profile from small-angle scattering data affected by multiple scattering. The algorithm can process the scattering data and deduce the single scattering profile on an absolute scale. A software package, SIMSAS, is introduced for executing this inversion step. This package is useful both to simulate and to analyse multiple small-angle scattering data. (author)
Energy Technology Data Exchange (ETDEWEB)
Brueckel, Thomas; Heger, Gernot; Richter, Dieter; Roth, Georg; Zorn, Reiner (eds.)
2010-07-01
The following topics are dealt with: Neutron sources, neutron properties and elastic scattering, correlation functions measured by scattering experiments, symmetry of crystals, applications of neutron scattering, polarized-neutron scattering and polarization analysis, structural analysis, magnetic and lattice excitation studied by inelastic neutron scattering, macromolecules and self-assembly, dynamics of macromolecules, correlated electrons in complex transition-metal oxides, surfaces, interfaces, and thin films investigated by neutron reflectometry, nanomagnetism. (HSI)
International Nuclear Information System (INIS)
Brueckel, Thomas; Heger, Gernot; Richter, Dieter; Roth, Georg; Zorn, Reiner
2010-01-01
The following topics are dealt with: Neutron sources, neutron properties and elastic scattering, correlation functions measured by scattering experiments, symmetry of crystals, applications of neutron scattering, polarized-neutron scattering and polarization analysis, structural analysis, magnetic and lattice excitation studied by inelastic neutron scattering, macromolecules and self-assembly, dynamics of macromolecules, correlated electrons in complex transition-metal oxides, surfaces, interfaces, and thin films investigated by neutron reflectometry, nanomagnetism. (HSI)
Reconstruction of Kinematic Surfaces from Scattered Data
DEFF Research Database (Denmark)
Randrup, Thomas; Pottmann, Helmut; Lee, I.-K.
1998-01-01
Given a surface in 3-space or scattered points from a surface, we present algorithms for fitting the data by a surface which can be generated by a one--parameter subgroup of the group of similarities. These surfaces are general cones and cylinders, surfaces of revolution, helical surfaces and spi...
International Nuclear Information System (INIS)
Ramamurthy, Senthil; D’Orsi, Carl J; Sechopoulos, Ioannis
2016-01-01
A previously proposed x-ray scatter correction method for dedicated breast computed tomography was further developed and implemented so as to allow for initial patient testing. The method involves the acquisition of a complete second set of breast CT projections covering 360° with a perforated tungsten plate in the path of the x-ray beam. To make patient testing feasible, a wirelessly controlled electronic positioner for the tungsten plate was designed and added to a breast CT system. Other improvements to the algorithm were implemented, including automated exclusion of non-valid primary estimate points and the use of a different approximation method to estimate the full scatter signal. To evaluate the effectiveness of the algorithm, the resulting image quality was assessed with a breast phantom and with nine patient images. The improvements in the algorithm avoided the introduction of artifacts, especially at the object borders, which had been an issue with the previous implementation in some cases. Both contrast, in terms of signal difference, and signal difference-to-noise ratio were improved with the proposed method, as opposed to the correction algorithm incorporated in the system, which does not recover contrast. Patient image evaluation also showed enhanced contrast, better cupping correction, and more consistent voxel values for the different tissues. The algorithm also reduces artifacts present in reconstructions of non-regularly shaped breasts. With the implemented hardware and software improvements, the proposed method can be reliably used during patient breast CT imaging, resulting in improved image quality, no introduction of artifacts, and in some cases reduction of artifacts already present. The impact of the algorithm on actual clinical performance for detection, diagnosis and other clinical tasks in breast imaging remains to be evaluated. (paper)
A modified CoSaMP algorithm for electromagnetic imaging of two dimensional domains
Sandhu, Ali Imran; Bagci, Hakan
2017-01-01
The compressive sampling matching pursuit (CoSaMP) algorithm is used for solving the electromagnetic inverse scattering problem on two-dimensional sparse domains. Since the scattering matrix, which is computed by sampling the Green function, does
M4GB : Efficient Groebner Basis algorithm
R.H. Makarim (Rusydi); M.M.J. Stevens (Marc)
2017-01-01
We introduce a new efficient algorithm for computing Groebner bases named M4GB. Like Faugère's F4 algorithm, it is an extension of Buchberger's algorithm that describes how to store already computed (tail-)reduced multiples of basis polynomials to prevent redundant work in the reduction
Electromagnetic scattering of large structures in layered earths using integral equations
Xiong, Zonghou; Tripp, Alan C.
1995-07-01
An electromagnetic scattering algorithm for large conductivity structures in stratified media has been developed and is based on the method of system iteration and spatial symmetry reduction using volume electric integral equations. The method of system iteration divides a structure into many substructures and solves the resulting matrix equation using a block iterative method. The block submatrices usually need to be stored on disk in order to save computer core memory. However, this requires a large disk for large structures. If the body is discretized into equal-size cells it is possible to use the spatial symmetry relations of the Green's functions to regenerate the scattering impedance matrix in each iteration, thus avoiding expensive disk storage. Numerical tests show that the system iteration converges much faster than the conventional point-wise Gauss-Seidel iterative method. The number of cells does not significantly affect the rate of convergence. Thus the algorithm effectively reduces the solution of the scattering problem to an order of O(N^2), instead of O(N^3) as with direct solvers.
International Nuclear Information System (INIS)
Rupnik, K.; Asaf, U.; McGlynn, S.P.
1990-01-01
A linear correlation exists between the electron scattering length, as measured by a pressure shift method, and the polarizabilities for He, Ne, Ar, Kr, and Xe gases. The correlative algorithm has excellent predictive capability for the electron scattering lengths of mixtures of rare gases, simple molecular gases such as H₂ and N₂, and even complex molecular entities such as methane, CH₄.
Brillouin scatter in laser-produced plasmas
International Nuclear Information System (INIS)
Phillion, D.W.; Kruer, W.L.; Rupert, V.C.
1977-01-01
The absorption of intense laser light is found to be reduced when targets are irradiated by 1.06 μm light with long pulse widths (150-400 psec) and large focal spots (100-250 μm). Estimates of Brillouin scatter which account for the finite heat capacity of the underdense plasma predict this reduction. Spectra of the back reflected light show red shifts indicative of Brillouin scattering
Gerards, Marco Egbertus Theodorus; Kuper, Jan; Kokkeler, Andre B.J.; Molenkamp, Egbert
2009-01-01
Reduction circuits are used to reduce rows of floating point values to single values. Binary floating point operators often have deep pipelines, which may cause hazards when many consecutive rows have to be reduced. We present an algorithm by which any number of consecutive rows of arbitrary lengths
Scattering and multiple scattering in disordered materials
International Nuclear Information System (INIS)
Weaver, R.L.; Butler, W.H.
1992-01-01
The papers in this section were presented at a joint session of symposium V on Applications of Multiple Scattering Theory and of Symposium P on Disordered Systems. They show that the ideas of scattering theory can help us to understand a very broad class of phenomena
Discrete inverse scattering theory and the continuum limit
International Nuclear Information System (INIS)
Berryman, J.G.; Greene, R.R.
1978-01-01
The class of satisfactory difference approximations for the Schroedinger equation in discrete inverse scattering theory is shown smaller than previously supposed. A fast algorithm (analogous to the Levinson algorithm for Toeplitz matrices) is found for solving the discrete inverse problem. (Auth.)
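For readers unfamiliar with the Levinson recursion the abstract invokes by analogy, a generic O(n²) solver for symmetric Toeplitz systems can be sketched as follows (an illustrative textbook version, not the authors' discrete inverse-scattering variant):

```python
def levinson_symmetric(t, y):
    """Solve T x = y for a symmetric Toeplitz matrix T whose first
    column is t, in O(n^2) time instead of O(n^3) for a dense solver.
    Assumes all leading principal minors of T are nonsingular."""
    n = len(t)
    f = [1.0 / t[0]]            # forward vector: T_1 f = e1
    x = [y[0] / t[0]]
    for k in range(1, n):
        # mismatch produced when the current vectors gain one row
        eps = sum(t[k - j] * f[j] for j in range(k))
        denom = 1.0 - eps * eps
        f = [(fj - eps * bj) / denom
             for fj, bj in zip(f + [0.0], [0.0] + f[::-1])]
        b = f[::-1]             # backward vector is the reversed forward one
        e = sum(t[k - j] * x[j] for j in range(k))
        x = [xj + (y[k] - e) * bj for xj, bj in zip(x + [0.0], b)]
    return x
```

For example, with t = [4, 2, 1] and y = [1, 2, 3] the routine returns [0, 1/6, 2/3], which satisfies the corresponding 3×3 Toeplitz system.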
PROPOSAL OF ALGORITHM FOR ROUTE OPTIMIZATION
Robert Ramon de Carvalho Sousa; Abimael de Jesus Barros Costa; Eliezé Bulhões de Carvalho; Adriano de Carvalho Paranaíba; Daylyne Maerla Gomes Lima Sandoval
2016-01-01
This article uses the “Six Sigma” methodology to elaborate an algorithm for routing problems that is able to obtain more efficient results than those from Clarke and Wright's (CW) algorithm (1964) in situations of random increases in product delivery demands, given the inability to increase the service level. In some situations, the algorithm proposed obtained more efficient results than the CW algorithm. The key factor was a reduction in the number of mistakes (on...
Neutron scattering from fractals
DEFF Research Database (Denmark)
Kjems, Jørgen; Freltoft, T.; Richter, D.
1986-01-01
The scattering formalism for fractal structures is presented. Volume fractals are exemplified by silica particle clusters formed either from colloidal suspensions or by flame hydrolysis. The determination of the fractional dimensionality through scattering experiments is reviewed, and recent small...
Scatter from optical components
International Nuclear Information System (INIS)
Stover, J.C.
1989-01-01
This book is covered under the following topics: measurement and analysis techniques; BRDF standards, comparisons, and anomalies; scatter measurement of several materials; scatter from contaminations; and optical system contamination: effects, measurement, and control
International Nuclear Information System (INIS)
Cheng, J-C; Rahmim, Arman; Blinder, Stephan; Camborde, Marie-Laure; Raywood, Kelvin; Sossi, Vesna
2007-01-01
We describe an ordinary Poisson list-mode expectation maximization (OP-LMEM) algorithm with a sinogram-based scatter correction method based on the single scatter simulation (SSS) technique and a random correction method based on the variance-reduced delayed-coincidence technique. We also describe a practical approximate scatter and random-estimation approach for dynamic PET studies based on a time-averaged scatter and random estimate followed by scaling according to the global numbers of true coincidences and randoms for each temporal frame. The quantitative accuracy achieved using OP-LMEM was compared to that obtained using the histogram-mode 3D ordinary Poisson ordered subset expectation maximization (3D-OP) algorithm with similar scatter and random correction methods, and they showed excellent agreement. The accuracy of the approximated scatter and random estimates was tested by comparing time activity curves (TACs) as well as the spatial scatter distribution from dynamic non-human primate studies obtained from the conventional (frame-based) approach and those obtained from the approximate approach. An excellent agreement was found, and the time required for the calculation of scatter and random estimates in the dynamic studies became much less dependent on the number of frames (we achieved a nearly four times faster performance on the scatter and random estimates by applying the proposed method). The precision of the scatter fraction was also demonstrated for the conventional and the approximate approach using phantom studies
International Nuclear Information System (INIS)
Creutz, M.
1987-11-01
A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3)
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9
Efficient sampling algorithms for Monte Carlo based treatment planning
International Nuclear Information System (INIS)
DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.
1998-01-01
Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed
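The cutpoint method for generalized sampling of a discrete distribution mentioned above can be sketched as follows (an illustrative indexed-search version assuming a normalized probability table; not the code from the study):

```python
import bisect
import random

def build_cutpoints(probs, m):
    """Precompute the CDF and a table of m cutpoints: cut[j] is the
    first outcome whose cumulative probability exceeds j/m."""
    cdf, acc = [], 0.0
    for p in probs:
        acc += p
        cdf.append(acc)
    cdf[-1] = 1.0  # guard against floating-point undershoot
    cut = [bisect.bisect_right(cdf, j / m) for j in range(m)]
    return cdf, cut

def sample(cdf, cut, m, rng=random):
    """Draw one outcome: jump straight to the cutpoint, then run a
    short sequential search instead of scanning the whole CDF."""
    u = rng.random()
    k = cut[int(u * m)]
    while cdf[k] <= u:
        k += 1
    return k
```

The table jump replaces the O(n) sequential CDF scan with an O(1) lookup plus a search whose expected length shrinks as m grows, which is the source of the order-of-magnitude speedup the abstract reports for this class of method.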
Kim, K.; Kang, S.; Cho, H.; Kang, W.; Seo, C.; Park, C.; Lee, D.; Lim, H.; Lee, H.; Kim, G.; Park, S.; Park, J.; Kim, W.; Jeon, D.; Woo, T.; Oh, J.
2018-02-01
In conventional planar radiography, image visibility is often limited mainly due to the superimposition of the object structure under investigation and the artifacts caused by scattered x-rays and noise. Several methods, including computed tomography (CT) as a multiplanar imaging modality, air-gap and grid techniques for the reduction of scatter, phase-contrast imaging as another image-contrast modality, etc., have extensively been investigated in an attempt to overcome these difficulties. However, those methods typically require higher x-ray doses or special equipment. In this work, as another approach, we propose a new model-based radiography restoration method based on a simple scatter-degradation scheme where the intensity of scattered x-rays and the transmission function of a given object are estimated from a single x-ray image to restore the original degraded image. We implemented the proposed algorithm and performed an experiment to demonstrate its viability. Our results indicate that the degradation of image characteristics by scattered x-rays and noise was effectively recovered by using the proposed method, which improves the image visibility in radiography considerably.
Electron scattering from tetrahydrofuran
International Nuclear Information System (INIS)
Fuss, M C; Sanz, A G; García, G; Muñoz, A; Oller, J C; Blanco, F; Do, T P T; Brunger, M J; Almeida, D; Limão-Vieira, P
2012-01-01
Electron scattering from tetrahydrofuran (C₄H₈O) was investigated over a wide range of energies. Following a mixed experimental and theoretical approach, total scattering, elastic scattering and ionization cross sections as well as electron energy loss distributions were obtained.
International Nuclear Information System (INIS)
Doll, P.
1990-02-01
Neutron-proton scattering as fundamental interaction process below and above hundred MeV is discussed. Quark model inspired interactions and phenomenological potential models are described. The seminar also indicates the experimental improvements for achieving new precise scattering data. Concluding remarks indicate the relevance of nucleon-nucleon scattering results to finite nuclei. (orig.) [de
Neutron Scattering Software. A new portal for neutron scattering software has been established. Entries include KUPLOT (data plotting and fitting software) and ILL/TAS (Matlab programs for analyzing triple-axis data).
International Nuclear Information System (INIS)
Lovesey, S.W.
1987-05-01
The report reviews, at an introductory level, the theory of photon scattering from condensed matter. Magnetic scattering, which arises from first-order relativistic corrections to the Thomson scattering amplitude, is treated in detail and related to the corresponding interaction in the magnetic neutron diffraction amplitude. (author)
Roessli, B.; Böni, P.
2000-01-01
The technique of polarized neutron scattering is reviewed with emphasis on applications. Many examples of the usefulness of the method in various fields of physics are given like the determination of spin density maps, measurement of complex magnetic structures with spherical neutron polarimetry, inelastic neutron scattering and separation of coherent and incoherent scattering with help of the generalized XYZ method.
Deconvolution of shift-variant broadening for Compton scatter imaging
International Nuclear Information System (INIS)
Evans, Brian L.; Martin, Jeffrey B.; Roggemann, Michael C.
1999-01-01
A technique is presented for deconvolving shift-variant Doppler broadening of singly Compton scattered gamma rays from their recorded energy distribution. Doppler broadening is important in Compton scatter imaging techniques employing gamma rays with energies below roughly 100 keV. The deconvolution unfolds an approximation to the angular distribution of scattered photons from their recorded energy distribution in the presence of statistical noise and background counts. Two unfolding methods are presented, one based on a least-squares algorithm and one based on a maximum likelihood algorithm. Angular distributions unfolded from measurements made on small scattering targets show less evidence of Compton broadening. This deconvolution is shown to improve the quality of filtered backprojection images in multiplexed Compton scatter tomography. Improved sharpness and contrast are evident in the images constructed from unfolded signals
New resonance cross section calculational algorithms
International Nuclear Information System (INIS)
Mathews, D.R.
1978-01-01
Improved resonance cross section calculational algorithms were developed and tested for inclusion in a fast reactor version of the MICROX code. The resonance energy portion of the MICROX code solves the neutron slowing-down equations for a two-region lattice cell on a very detailed energy grid (about 14,500 energies). In the MICROX algorithms, the exact P0 elastic scattering kernels are replaced by synthetic (approximate) elastic scattering kernels which permit the use of an efficient and numerically stable recursion relation solution of the slowing-down equation. In the work described here, the MICROX algorithms were modified as follows: an additional delta function term was included in the P0 synthetic scattering kernel. The additional delta function term allows one more moment of the exact elastic scattering kernel to be preserved without much extra computational effort. With the improved synthetic scattering kernel, the flux returns more closely to the exact flux below a resonance than with the original MICROX kernel. The slowing-down calculation was extended to a true B1 hyperfine energy grid calculation in each region by using P1 synthetic scattering kernels and transport-corrected P0 collision probabilities to couple the two regions. 1 figure, 6 tables
Directory of Open Access Journals (Sweden)
Anna Bourmistrova
2011-02-01
The autodriver algorithm is an intelligent method to eliminate the need of steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on coinciding the actual vehicle center of rotation and road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed prior information for a given road, while the dynamic center of rotation is the output of dynamic equations of motion of the vehicle using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increase of forward speed the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
Scattering amplitudes over finite fields and multivariate functional reconstruction
International Nuclear Information System (INIS)
Peraro, Tiziano
2016-01-01
Several problems in computer algebra can be efficiently solved by reducing them to calculations over finite fields. In this paper, we describe an algorithm for the reconstruction of multivariate polynomials and rational functions from their evaluation over finite fields. Calculations over finite fields can in turn be efficiently performed using machine-size integers in statically-typed languages. We then discuss the application of the algorithm to several techniques related to the computation of scattering amplitudes, such as the four- and six-dimensional spinor-helicity formalism, tree-level recursion relations, and multi-loop integrand reduction via generalized unitarity. The method has good efficiency and scales well with the number of variables and the complexity of the problem. As an example combining these techniques, we present the calculation of full analytic expressions for the two-loop five-point on-shell integrands of the maximal cuts of the planar penta-box and the non-planar double-pentagon topologies in Yang-Mills theory, for a complete set of independent helicity configurations.
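The core evaluate-and-reconstruct idea can be shown in its simplest univariate form, Newton interpolation over a prime field (the prime, the API, and the coefficient ordering here are illustrative, not the paper's implementation):

```python
P = 2147483629  # an illustrative machine-size prime

def newton_interpolate_mod(xs, ys, p=P):
    """Reconstruct a polynomial's monomial coefficients (lowest degree
    first) from its values ys at distinct points xs, working entirely
    in the finite field F_p."""
    n = len(xs)
    dd = list(ys)                    # divided differences mod p, in place
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            inv = pow((xs[i] - xs[i - j]) % p, -1, p)  # modular inverse
            dd[i] = (dd[i] - dd[i - 1]) * inv % p
    coeffs = [dd[-1]]                # Horner-style expansion of Newton form
    for i in range(n - 2, -1, -1):
        shifted = [0] + coeffs                             # x * poly
        scaled = [(-xs[i] * c) % p for c in coeffs] + [0]  # -xs[i] * poly
        coeffs = [(a + b) % p for a, b in zip(shifted, scaled)]
        coeffs[0] = (coeffs[0] + dd[i]) % p
    return coeffs
```

For instance, evaluating 3x² + 2x + 5 at x = 0, 1, 2 gives 5, 10, 21, from which the routine recovers the coefficients [5, 2, 3]. The multivariate and rational-function reconstruction used in the paper builds on repeated univariate steps of this kind.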
Scattering amplitudes over finite fields and multivariate functional reconstruction
Energy Technology Data Exchange (ETDEWEB)
Peraro, Tiziano [Higgs Centre for Theoretical Physics, School of Physics and Astronomy, The University of Edinburgh, James Clerk Maxwell Building, Peter Guthrie Tait Road, Edinburgh EH9 3FD (United Kingdom)
2016-12-07
Several problems in computer algebra can be efficiently solved by reducing them to calculations over finite fields. In this paper, we describe an algorithm for the reconstruction of multivariate polynomials and rational functions from their evaluation over finite fields. Calculations over finite fields can in turn be efficiently performed using machine-size integers in statically-typed languages. We then discuss the application of the algorithm to several techniques related to the computation of scattering amplitudes, such as the four- and six-dimensional spinor-helicity formalism, tree-level recursion relations, and multi-loop integrand reduction via generalized unitarity. The method has good efficiency and scales well with the number of variables and the complexity of the problem. As an example combining these techniques, we present the calculation of full analytic expressions for the two-loop five-point on-shell integrands of the maximal cuts of the planar penta-box and the non-planar double-pentagon topologies in Yang-Mills theory, for a complete set of independent helicity configurations.
Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong
2011-02-21
X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers, and then, reconstructing using the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
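The weighted-basis-image idea described above can be sketched schematically (the `reconstruct` callable stands in for the filtered backprojection step, and the power set and function names are assumptions for illustration, not the authors' code):

```python
import numpy as np

def scatter_correct_with_prior(raw_proj, prior, reconstruct,
                               powers=(1.0, 0.5, 0.25)):
    """Form basis images by raising the raw projection data to different
    powers and reconstructing each, then choose weights that minimize
    the squared difference to the low-scatter prior image (linear
    least squares). Returns the weights and the corrected image."""
    basis = [reconstruct(np.power(raw_proj, a)) for a in powers]
    A = np.stack([b.ravel() for b in basis], axis=1)
    w, *_ = np.linalg.lstsq(A, prior.ravel(), rcond=None)
    corrected = sum(wi * bi for wi, bi in zip(w, basis))
    return w, corrected
```

Because the fit is linear in the weights, a single least-squares solve suffices; the prior only steers the weighting and is not copied into the target image, which is what lets the method correct shading while preserving the target's own anatomy.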
Design of the algorithm of photons migration in the multilayer skin structure
Bulykina, Anastasiia B.; Ryzhova, Victoria A.; Korotaev, Valery V.; Samokhin, Nikita Y.
2017-06-01
The design of approaches and methods for the diagnostics of oncological diseases has special significance, as it allows tumors of any kind to be detected at early stages. The development of optical and laser technologies has increased the number of methods available for diagnostic studies of oncological diseases. A promising area of biomedical diagnostics is the development of automated nondestructive testing systems for the study of the skin's polarizing properties based on backscattered radiation detection. Characterizing the polarizing properties of the examined tissue allows the study of structural changes induced by various pathologies. Consequently, measurement and analysis of the polarizing properties of the scattered optical radiation, for the development of methods for diagnosis and imaging of skin in vivo, appear relevant. The purpose of this research is to design an algorithm for photon migration in the multilayer skin structure. The designed algorithm is based on the Monte Carlo method, implemented as tracking of photon paths through random discrete direction changes until the photons leave the analyzed area or their intensity decreases to negligible levels. The modeling algorithm consists of generating the medium and source characteristics; generating a photon with spatial coordinates and polar and azimuthal angles; calculating the photon weight reduction due to specular and diffuse reflection; determining the photon mean free path; determining the photon direction angle after random scattering with a Henyey-Greenstein phase function; and calculating the medium's absorption. The biological tissue is modeled as a homogeneous scattering sheet characterized by absorption, scattering and anisotropy coefficients.
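The listed steps can be sketched in a highly simplified, depth-only Monte Carlo (illustrative only: azimuth, full 3-D geometry, interface reflection, and layer boundaries are omitted):

```python
import math
import random

def henyey_greenstein_cos(g, rng=random):
    """Sample the scattering-angle cosine from the Henyey-Greenstein
    phase function with anisotropy factor g; its mean equals g."""
    u = rng.random()
    if abs(g) < 1e-6:
        return 2.0 * u - 1.0                    # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - s * s) / (2.0 * g)

def photon_depth(mu_a, mu_s, g, rng=random, w_min=1e-4):
    """Track one photon's depth in a semi-infinite homogeneous medium:
    exponential free paths, weight reduced by the albedo at each
    scattering event, termination below a weight threshold.
    Returns the final depth, or None if the photon escapes (z < 0)."""
    mu_t = mu_a + mu_s
    z, cos_t, w = 0.0, 1.0, 1.0
    while w > w_min:
        step = -math.log(1.0 - rng.random()) / mu_t   # free path length
        z += step * cos_t
        if z < 0.0:
            return None                               # escaped the surface
        w *= mu_s / mu_t                              # absorption cuts weight
        cos_t = henyey_greenstein_cos(g, rng)         # new polar direction
    return z
```

A full implementation along the lines of the abstract would additionally rotate the direction vector by an azimuthal angle uniform in [0, 2π), handle specular reflection at the air-skin interface, and switch optical coefficients at layer boundaries.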
DEFF Research Database (Denmark)
Markham, Annette
This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.
Decoding Interleaved Gabidulin Codes using Alekhnovich's Algorithm
DEFF Research Database (Denmark)
Puchinger, Sven; Müelich, Sven; Mödinger, David
2017-01-01
We prove that Alekhnovich's algorithm can be used for row reduction of skew polynomial matrices. This yields an O(ℓ³ n^((ω+1)/2) log(n)) decoding algorithm for ℓ-Interleaved Gabidulin codes of length n, where ω is the matrix multiplication exponent.
Multiple scattering processes: inverse and direct
International Nuclear Information System (INIS)
Kagiwada, H.H.; Kalaba, R.; Ueno, S.
1975-01-01
The purpose of the work is to formulate inverse problems in radiative transfer, to introduce the functions b and h as parameters of internal intensity in homogeneous slabs, and to derive initial value problems to replace the more traditional boundary value problems and integral equations of multiple scattering with high computational efficiency. The discussion covers multiple scattering processes in a one-dimensional medium; isotropic scattering in homogeneous slabs illuminated by parallel rays of radiation; the theory of functions b and h in homogeneous slabs illuminated by isotropic sources of radiation either at the top or at the bottom; inverse and direct problems of multiple scattering in slabs including internal sources; multiple scattering in inhomogeneous media, with particular reference to inverse problems for estimation of layers and total thickness of inhomogeneous slabs and to multiple scattering problems with Lambert's law and specular reflectors underlying slabs; and anisotropic scattering with reduction of the number of relevant arguments through axially symmetric fields and expansion in Legendre functions. Gaussian quadrature data for a seven point formula, a FORTRAN program for computing the functions b and h, and tables of these functions supplement the text
Measurements of computed tomography radiation scatter
International Nuclear Information System (INIS)
Van Every, B.; Petty, R.J.
1992-01-01
This paper describes the measurement of scattered radiation from a computed tomography (CT) scanner in a clinical situation and compares the results with those obtained from a CT performance phantom and with data obtained from CT manufacturers. The results are presented as iso-dose contours. There are significant differences between the data obtained and those supplied by manufacturers, both in the shape of the iso-dose contours and in the nominal values. The observed scatter in a clinical situation (for an abdominal scan) varied between 3% and 430% of the manufacturers' stated values, with a marked reduction in scatter noted at the head and feet of the patient. These differences appear to be due to the fact that manufacturers use CT phantoms to obtain scatter data, and these phantoms do not provide the same scatter-absorption geometry as patients. CT scatter was observed to increase as scan field size and slice thickness increased, whilst there was little change in scatter with changes in gantry tilt and table slew. Using the iso-dose contours, the orientation of the CT scanner can be optimised with regard to the location and shielding requirements of doors and windows. Additionally, the positioning of staff who must remain in the room during scanning can be optimised to minimise their exposure. It is estimated that the data presented allow realistic radiation-protection assessments to be made. 13 refs., 5 tabs., 6 figs.
FIR-laser scattering for JT-60
International Nuclear Information System (INIS)
Itagaki, Tokiyoshi; Matoba, Tohru; Funahashi, Akimasa; Suzuki, Yasuo
1977-09-01
An ion Thomson scattering method with a far-infrared (FIR) laser has been studied for measuring the ion temperature in the large tokamak JT-60, to be completed in 1981. Ion Thomson scattering has the advantage of measuring the spatial variation of the ion temperature. Ion Thomson scattering in a medium tokamak (PLT) or a future large tokamak (JET) requires a FIR laser of several megawatts. Research and development of FIR high-power pulse lasers with powers up to 0.6 MW have proceeded for ion Thomson scattering in future high-temperature tokamaks. The FIR laser power should reach the desired several megawatts in a few years, so JAERI plans to measure the ion temperature in JT-60 by ion Thomson scattering. A noise source for ion Thomson scattering with a 496 μm CH₃F laser is synchrotron radiation, whose power is comparable to the NEP of the Schottky-barrier diode. The synchrotron radiation power is, however, one order of magnitude smaller when the FIR laser is a 385 μm D₂O laser. The FIR laser power corresponding to a signal-to-noise ratio of 1 is about 4 MW for the CH₃F laser, and 0.4 MW for the D₂O laser if the NEP of the heterodyne mixer is one order of magnitude lower. A FIR laser scattering system for JT-60 should be realized through improvements in FIR laser power and heterodyne-mixer NEP, and through reduction of the synchrotron radiation. (auth.)
Fault Diagnosis of Supervision and Homogenization Distance Based on Local Linear Embedding Algorithm
Directory of Open Access Journals (Sweden)
Guangbin Wang
2015-01-01
In view of the problems that real fault samples are unevenly distributed and that the dimension-reduction effect of the locally linear embedding (LLE) algorithm is easily affected by the choice of neighboring points, an improved local linear embedding algorithm based on homogenization distance (HLLE) is developed. The method makes the overall distribution of sample points tend toward homogenization and reduces the influence of neighboring points by using a homogenization distance instead of the traditional Euclidean distance, which helps to choose effective neighboring points for constructing the weight matrix for dimension reduction. Because the fault-recognition improvement of HLLE is limited and unstable, the paper further proposes a local linear embedding algorithm with supervision and homogenization distance (SHLLE) by adding a supervised learning mechanism. On the basis of the homogenization distance, supervised learning adds the category information of the sample points, so that sample points of the same category are gathered and those of different categories are scattered. This effectively improves fault-diagnosis performance while maintaining stability. The methods were compared in simulation experiments on rotor-system fault diagnosis, and the results show that the SHLLE algorithm has superior fault-recognition performance.
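The homogenization-plus-supervision idea described above can be sketched as follows. This is a minimal numpy illustration, not the authors' code: the per-point mean normalization standing in for "homogenization", the label penalty `alpha`, and both function names are assumptions.

```python
import numpy as np

def supervised_distance(X, y, alpha=0.3):
    """Pairwise distances: Euclidean, normalized by each point's mean
    distance (a stand-in for 'homogenization'), then shrunk for same-class
    pairs and inflated for different-class pairs (supervision)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    D = D / D.mean(axis=1, keepdims=True)           # homogenize local scale
    same = (y[:, None] == y[None, :])
    D = np.where(same, D * (1 - alpha), D * (1 + alpha))
    np.fill_diagonal(D, 0.0)
    return D

def lle_embed(X, D, n_neighbors=6, n_components=2, reg=1e-3):
    """Standard LLE, except neighbors are picked by the modified metric D."""
    n = len(X)
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D[i])[1:n_neighbors + 1]   # nearest by modified metric
        Z = X[idx] - X[i]
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(n_neighbors)  # regularize the Gram matrix
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, idx] = w / w.sum()                     # reconstruction weights
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]              # skip the constant eigenvector
```

Only the neighbor selection differs from plain LLE, which is why the supervision can pull same-class samples together without changing the reconstruction-weight machinery.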
Remarks on the inverse scattering transform associated with Toda equations
Ablowitz, Mark J.; Villarroel, J.
The inverse scattering transforms used to solve both the 2+1 Toda equation and a novel reduction, the Toda differential-delay equations, are outlined. There are a number of interesting features associated with these systems and the related scattering theory.
Scattering with polarized neutrons
International Nuclear Information System (INIS)
Schweizer, J.
2007-01-01
In the history of neutron scattering, it was shown very early that the use of polarized neutron beams brings much more information than scattering with unpolarized neutrons. We develop here the different scattering methods that involve polarized neutrons: 1) polarized beams without polarization analysis, the flipping-ratio method; 2) polarized beams with uniaxial polarization analysis; 3) polarized beams with spherical polarization analysis. For each of these scattering methods, we give examples of the physical problems which can be solved, particularly in the field of magnetism: investigation of complex magnetic structures, investigation of spin or magnetization densities in metals, insulators and molecular compounds, separation of magnetic and nuclear scattering, investigation of magnetic properties of liquids and amorphous materials and even, for non-magnetic materials, separation between coherent and incoherent scattering. (author)
Neutron scattering and magnetism
International Nuclear Information System (INIS)
Mackintosh, A.R.
1983-01-01
Those properties of the neutron which make it a unique tool for the study of magnetism are described. The scattering of neutrons by magnetic solids is briefly reviewed, with emphasis on the information on the magnetic structure and dynamics which is inherent in the scattering cross-section. The contribution of neutron scattering to our understanding of magnetic ordering, excitations and phase transitions is illustrated by experimental results on a variety of magnetic crystals. (author)
Stationary theory of scattering
International Nuclear Information System (INIS)
Kato, T.
1977-01-01
A variant of the stationary methods is described, and it is shown to be useful in a wide range of problems, including scattering by long-range potentials, two-space scattering, and multichannel scattering. The method is based on the notion of spectral forms. The paper is restricted to the simplest case of continuous spectral forms defined on a Banach space embedded in the basic Hilbert space. (P.D.)
Introduction to neutron scattering
Energy Technology Data Exchange (ETDEWEB)
Fischer, W E [Paul Scherrer Inst. (PSI), Villigen (Switzerland)
1996-11-01
We give here an introduction to the theoretical principles of neutron scattering. The relationship between scattering and correlation functions is particularly emphasized. Within the framework of linear response theory (justified by the weakness of the basic interaction), the relation between fluctuation and dissipation is discussed. This general framework explains the particular power of neutron scattering as an experimental method. (author) 4 figs., 4 refs.
Shapiro, Lawrence
2018-04-01
Putnam's criticisms of the identity theory attack a straw man. Fodor's criticisms of reduction attack a straw man. Properly interpreted, Nagel offered a conception of reduction that captures everything a physicalist could want. I update Nagel, introducing the idea of overlap, and show why multiple realization poses no challenge to reduction so construed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Casanova, Henri; Robert, Yves
2008-01-01
"…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi…"
DEFF Research Database (Denmark)
Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy
2007-01-01
We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper or lower triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel…
International Nuclear Information System (INIS)
Futterman, J.A.H.; Handler, F.A.; Matzner, R.A.
1987-01-01
This book provides a comprehensive treatment of the propagation of waves in the presence of black holes. While emphasizing intuitive physical thinking in their treatment of the techniques of analysis of scattering, the authors also include chapters on the rigorous mathematical development of the subject. Introducing the concepts of scattering by considering the simplest, scalar wave case of scattering by a spherical (Schwarzschild) black hole, the book then develops the formalism of spin weighted spheroidal harmonics and of plane wave representations for neutrino, electromagnetic, and gravitational scattering. Details and results of numerical computations are given. The techniques involved have important applications (references are given) in acoustical and radar imaging
Wu Ta You
1962-01-01
This volume addresses the broad formal aspects and applications of the quantum theory of scattering in atomic and nuclear collisions. An encyclopedic source of pioneering work, it serves as a text for students and a reference for professionals in the fields of chemistry, physics, and astrophysics. The self-contained treatment begins with the general theory of scattering of a particle by a central field. Subsequent chapters explore particle scattering by a non-central field, collisions between composite particles, the time-dependent theory of scattering, and nuclear reactions. An examinati
A novel image-domain-based cone-beam computed tomography enhancement algorithm
Energy Technology Data Exchange (ETDEWEB)
Li Xiang; Li Tianfang; Yang Yong; Heron, Dwight E; Huq, M Saiful, E-mail: lix@upmc.edu [Department of Radiation Oncology, University of Pittsburgh Cancer Institute, Pittsburgh, PA 15232 (United States)
2011-05-07
Kilo-voltage (kV) cone-beam computed tomography (CBCT) plays an important role in image-guided radiotherapy. However, due to a large cone-beam angle, scatter effects significantly degrade the CBCT image quality and limit its clinical application. The goal of this study is to develop an image enhancement algorithm to reduce the low-frequency CBCT image artifacts, which are also called the bias field. The proposed algorithm is based on the hypothesis that image intensities of different types of materials in CBCT images are approximately globally uniform (in other words, a piecewise property). A maximum a posteriori probability framework was developed to estimate the bias field contribution from a given CBCT image. The performance of the proposed CBCT image enhancement method was tested using phantoms and clinical CBCT images. Compared to the original CBCT images, the corrected images using the proposed method achieved a more uniform intensity distribution within each tissue type and significantly reduced cupping and shading artifacts. In a head and a pelvic case, the proposed method reduced the Hounsfield unit (HU) errors within the region of interest from 300 HU to less than 60 HU. In a chest case, the HU errors were reduced from 460 HU to less than 110 HU. The proposed CBCT image enhancement algorithm demonstrated a promising result by the reduction of the scatter-induced low-frequency image artifacts commonly encountered in kV CBCT imaging.
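The piecewise-uniformity hypothesis behind this algorithm can be illustrated with a much-simplified alternation between tissue classification and low-frequency residual estimation. This is a crude sketch, not the authors' MAP estimator: the FFT cutoff `frac`, the fixed class means, and both function names are assumptions.

```python
import numpy as np

def lowpass(img, frac=0.05):
    """Keep only the lowest spatial frequencies (the 'bias field' band)."""
    F = np.fft.fft2(img)
    ny, nx = img.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    F[np.sqrt(kx**2 + ky**2) > frac] = 0
    return np.fft.ifft2(F).real

def correct_bias(img, class_means, n_iter=5, frac=0.05):
    """Alternate between (1) assigning each pixel to the nearest tissue
    class mean (the piecewise-uniform model) and (2) taking the
    low-frequency part of the residual as the bias-field estimate."""
    bias = np.zeros_like(img)
    means = np.asarray(class_means, dtype=float)
    for _ in range(n_iter):
        ideal = means[np.abs((img - bias)[..., None] - means).argmin(-1)]
        bias = lowpass(img - ideal, frac)
    return img - bias, bias
```

On a synthetic two-tissue image with a smooth additive shading field, the corrected image ends up closer to the piecewise-constant ideal than the input was, which is the qualitative behavior the abstract reports for cupping and shading artifacts.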
Fast algorithm of track detection
International Nuclear Information System (INIS)
Nehrguj, B.
1980-01-01
A fast algorithm of variable-slope histograms is proposed, which allows a considerable reduction of computer memory size and is quite simple to implement. The corresponding FORTRAN subprograms, giving a threefold speed gain, have been included in the spiral reader data-handling software.
Study of multiple scattering effects in heavy ion RBS
Energy Technology Data Exchange (ETDEWEB)
Fang, Z.; O'Connor, D.J. [Newcastle Univ., NSW (Australia). Dept. of Physics]
1997-12-31
The multiple scattering effect is normally neglected in conventional Rutherford backscattering (RBS) analysis, and the backscattered particle yield normally agrees well with theory based on the single-scattering model. However, when heavy incident ions are used, as in heavy-ion Rutherford backscattering (HIRBS), or when the incident ion energy is reduced, multiple scattering starts to play a role in the analysis. In this paper, experimental data for 6 MeV C ions backscattered from a Au target are presented. In the measured time-of-flight spectrum a small step in front of the Au high-energy edge is observed. The high-energy edge of the step is about 3.4 ns ahead of the Au signal, which corresponds to an energy approximately 300 keV higher than the 135 degree single-scattering energy. This value coincides with the energy of a C ion undergoing two consecutive 67.5 degree scatterings. Efforts to investigate the origin of the observed high-energy step led to a Monte Carlo simulation aimed at reproducing the experimental spectrum. Since a large-angle scattering event is rare, two consecutive large-angle scatterings are extremely hard to reproduce in a random simulation process; thus the simulation did not find a particle scattered into 130-140 deg with an energy higher than the single-scattering energy. Faster algorithms and a better physical model are clearly necessary for a successful simulation. 16 refs., 3 figs.
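The kinematic claim above, that two consecutive 67.5 degree scatterings leave the ion roughly 300 keV above the 135 degree single-scattering energy, can be checked numerically with the standard elastic kinematic factor, assuming ¹²C on ¹⁹⁷Au (a worked check, not the authors' simulation):

```python
import math

def kinematic_factor(m1, m2, theta_deg):
    """Elastic-scattering kinematic factor K = E_out / E_in for a
    projectile of mass m1 scattered by angle theta off a target mass m2."""
    t = math.radians(theta_deg)
    root = math.sqrt(m2**2 - (m1 * math.sin(t))**2)
    return ((root + m1 * math.cos(t)) / (m1 + m2))**2

E0, m_C, m_Au = 6.0, 12.0, 197.0   # MeV, amu
single = E0 * kinematic_factor(m_C, m_Au, 135.0)      # one 135-deg event
double = E0 * kinematic_factor(m_C, m_Au, 67.5)**2    # two 67.5-deg events
print(f"single 135 deg:     {single:.3f} MeV")
print(f"double 2 x 67.5 deg: {double:.3f} MeV")
print(f"difference: {1000 * (double - single):.0f} keV")  # ≈ 290 keV
```

The ~290 keV difference is consistent with the ~300 keV step reported in the time-of-flight spectrum.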
Study of multiple scattering effects in heavy ion RBS
Energy Technology Data Exchange (ETDEWEB)
Fang, Z.; O'Connor, D.J. [Newcastle Univ., NSW (Australia). Dept. of Physics]
1996-12-31
The multiple scattering effect is normally neglected in conventional Rutherford backscattering (RBS) analysis, and the backscattered particle yield normally agrees well with theory based on the single-scattering model. However, when heavy incident ions are used, as in heavy-ion Rutherford backscattering (HIRBS), or when the incident ion energy is reduced, multiple scattering starts to play a role in the analysis. In this paper, experimental data for 6 MeV C ions backscattered from a Au target are presented. In the measured time-of-flight spectrum a small step in front of the Au high-energy edge is observed. The high-energy edge of the step is about 3.4 ns ahead of the Au signal, which corresponds to an energy approximately 300 keV higher than the 135 degree single-scattering energy. This value coincides with the energy of a C ion undergoing two consecutive 67.5 degree scatterings. Efforts to investigate the origin of the observed high-energy step led to a Monte Carlo simulation aimed at reproducing the experimental spectrum. Since a large-angle scattering event is rare, two consecutive large-angle scatterings are extremely hard to reproduce in a random simulation process; thus the simulation did not find a particle scattered into 130-140 deg with an energy higher than the single-scattering energy. Faster algorithms and a better physical model are clearly necessary for a successful simulation. 16 refs., 3 figs.
Energy Technology Data Exchange (ETDEWEB)
Castejón, F.; Gómez-Iglesias, A.; Velasco, J. L.
2015-07-01
This work is devoted to introducing a new optimization criterion in the DAB (Distributed Asynchronous Bees) code. With this new criterion, DAB now includes the equilibrium and Mercier stability criteria, the minimization of the B×∇B criterion, which ensures a reduction of neoclassical transport and an improvement of fast-particle confinement, and the reduction of the bootstrap current. We started from a neoclassically optimised configuration of the helias type and imposed the reduction of the bootstrap current. The resulting configuration presents only a modest reduction of the total bootstrap current, but the local current density is reduced along the minor radius. Further investigations are under way to understand the reason for this modest improvement.
International Nuclear Information System (INIS)
Kuehnelt, H.
1975-01-01
We discuss a few properties of scattering amplitudes proved within the framework of field theory and their significance for the derivation of quantitative statements. The status of the bounds on the scattering lengths is discussed in particular, as well as the question of how far various solutions can be excluded from phase-shift analyses. (orig./LH) [de]
Modelling Hyperboloid Sound Scattering
DEFF Research Database (Denmark)
Burry, Jane; Davis, Daniel; Peters, Brady
2011-01-01
The Responsive Acoustic Surfaces workshop project described here sought new understandings about the interaction between geometry and sound in the arena of sound scattering. This paper reports on the challenges associated with modelling, simulating, fabricating and measuring this phenomenon using both physical and digital models at three distinct scales. The results suggest that hyperboloid geometry, while difficult to fabricate, facilitates sound scattering.
Donne, A. J. H.
1996-01-01
Thomson scattering is a very powerful diagnostic which is applied at nearly every magnetic confinement device. Depending on the experimental conditions different plasma parameters can be diagnosed. When the wave vector is much larger than the plasma Debye length, the total scattered power is
Concentric layered Hermite scatterers
Astheimer, Jeffrey P.; Parker, Kevin J.
2018-05-01
The long-wavelength limit of scattering from spheres has a rich history in optics, electromagnetics, and acoustics. Recently it was shown that a common integral kernel pertains to formulations of weak spherical scatterers in both the acoustic and electromagnetic regimes. Furthermore, the relationship between backscattered amplitude and wavenumber k was shown to follow power laws higher than the Rayleigh-scattering k² power law when the inhomogeneity had a material composition that conformed to a Gaussian-weighted Hermite polynomial. Although this class of scatterers, called Hermite scatterers, is plausible, it may be simpler to manufacture scatterers with a core surrounded by one or more layers. In this case the inhomogeneous material property conforms to a piecewise-constant function. We demonstrate that the necessary and sufficient conditions for supra-Rayleigh scattering power laws in this case can be stated simply by considering moments of the inhomogeneity function and its spatial transform. This development opens an additional path for the construction and use of scatterers with unique power-law behavior.
Introductory theory of neutron scattering
International Nuclear Information System (INIS)
Gunn, J.M.F.
1986-12-01
The paper comprises a set of six lecture notes which were delivered to the summer school on 'Neutron Scattering at a pulsed source', Rutherford Laboratory, United Kingdom, 1986. The lectures concern the physical principles of neutron scattering. The topics of the lectures include: diffraction, incoherent inelastic scattering, connection with the Schroedinger equation, magnetic scattering, coherent inelastic scattering, and surfaces and neutron optics. (UK)
Diffuse scattering of neutrons
International Nuclear Information System (INIS)
Novion, C.H. de.
1981-02-01
The use of neutron scattering to study atomic disorder in metals and alloys is described. The diffuse elastic scattering of neutrons by a perfect crystal lattice leads to a diffraction spectrum with only Bragg spots. The existence of disorder in the crystal results in intensity and position modifications to these spots and, above all, in the appearance of a low-intensity scatter between Bragg peaks. Only the elastic scattering of neutrons is treated in this text, i.e. the number of scattered neutrons having the same energy as the incident neutrons is measured. Such measurements yield information on the static disorder in the crystal and on time-averaged fluctuations in composition and atomic displacements. [fr]
A Faster Algorithm for Computing Straight Skeletons
Mencel, Liam A.
2014-01-01
…computation in O(n (log n) log r) time. It improves on the previously best-known algorithm for this reduction, which is randomised and runs in expected O(n √(h+1) log² n) time for a polygon with h holes. Using known motorcycle graph algorithms, our result…
Inelastic Light Scattering Processes
Fouche, Daniel G.; Chang, Richard K.
1973-01-01
Five different inelastic light scattering processes will be denoted by: ordinary Raman scattering (ORS), resonance Raman scattering (RRS), off-resonance fluorescence (ORF), resonance fluorescence (RF), and broad fluorescence (BF). A distinction between fluorescence (including ORF and RF) and Raman scattering (including ORS and RRS) will be made in terms of the number of intermediate molecular states which contribute significantly to the scattered amplitude, and not in terms of excited-state lifetimes or virtual versus real processes. The theory of these processes will be reviewed, including the effects of pressure, laser wavelength, and laser spectral distribution on the scattered intensity. The application of these processes to the remote sensing of atmospheric pollutants will be discussed briefly. It will be pointed out that the poor sensitivity of the ORS technique cannot be increased by going toward resonance without also compromising the advantages it has over the RF technique. Experimental results on inelastic light scattering from I₂ vapor will be presented. As a single-longitudinal-mode 5145 Å argon-ion laser line was tuned away from an I₂ absorption line, the scattering was observed to change from RF to ORF. The basis of the distinction is the different pressure dependence of the scattered intensity. Nearly three orders of magnitude enhancement of the scattered intensity was measured in going from ORF to RF. Forty-seven overtones were observed and their relative intensities measured. The ORF cross section of I₂ compared to the ORS cross section of N₂ was found to be 3 × 10⁶, with I₂ at its room-temperature vapor pressure.
Visualizing quantum scattering on the CM-2 supercomputer
International Nuclear Information System (INIS)
Richardson, J.L.
1991-01-01
We implement parallel algorithms for solving the time-dependent Schroedinger equation on the CM-2 supercomputer. These methods are unconditionally stable as well as unitary at each time step and have the advantage of being spatially local and explicit. We show how to visualize the dynamics of quantum scattering using techniques for visualizing complex wave functions. Several scattering problems are solved to demonstrate the use of these methods. (orig.)
Directory of Open Access Journals (Sweden)
Andreea Koreanschi
2017-02-01
In this paper, an 'in-house' genetic algorithm is described and applied to an optimization problem for improving the aerodynamic performance of an aircraft wing tip through upper-surface morphing. The algorithm's performance was studied from the convergence point of view, in accordance with the design conditions. The algorithm was compared to two other optimization methods, namely an artificial bee colony and a gradient method, for two optimization objectives, and the results of the optimizations with each of the three methods were plotted on response surfaces obtained with the Monte Carlo method, to show that they were situated in the global optimum region. The optimization results for 16 wind tunnel test cases and 2 objective functions are presented. The 16 cases used for the optimizations were included in the experimental test plan for the morphing wing-tip demonstrator, and the results obtained using the displacements given by the optimizations were evaluated.
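The in-house genetic algorithm itself is not spelled out in the abstract; a generic real-coded GA of the kind commonly used for such problems might look as follows. Everything here is an assumption (tournament selection, elitism, blend crossover, Gaussian mutation, all parameter values), and the objective is a toy convex surrogate, not the aerodynamic one.

```python
import random

def genetic_minimize(f, bounds, pop=30, gens=60, pm=0.2, seed=1):
    """Minimal real-coded GA: tournament selection, two-elite survival,
    blend crossover, clamped Gaussian mutation. `f` maps a parameter
    list to a scalar cost to minimize."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        nxt = sorted(P, key=f)[:2]                    # elitism: keep best two
        while len(nxt) < pop:
            a = min(rng.sample(P, 3), key=f)          # tournament of three
            b = min(rng.sample(P, 3), key=f)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend crossover
            if rng.random() < pm:                     # mutate one coordinate
                i = rng.randrange(dim)
                lo, hi = bounds[i]
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            nxt.append(child)
        P = nxt
    return min(P, key=f)

# hypothetical smooth surrogate standing in for a drag-like objective
best = genetic_minimize(lambda v: (v[0] - 0.3)**2 + (v[1] + 0.5)**2,
                        [(-1, 1), (-1, 1)])
```

The same skeleton accepts any black-box cost function, which is why GAs pair naturally with wind-tunnel or CFD-derived objectives.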
Atmospheric scattering corrections to solar radiometry
International Nuclear Information System (INIS)
Box, M.A.; Deepak, A.
1979-01-01
Whenever a solar radiometer is used to measure direct solar radiation, some diffuse sky radiation invariably enters the detector's field of view along with the direct beam. Therefore, the atmospheric optical depth obtained by the use of Bouguer's transmission law (also called the Beer-Lambert law), which is valid only for direct radiation, needs to be corrected for the scattered radiation. In this paper we discuss the correction factors needed to account for the diffuse (i.e., singly and multiply scattered) radiation and the algorithms developed for retrieving aerosol size distributions from such measurements. For a radiometer with a small half-cone angle and relatively clear skies (optical depths <0.4), it is shown that the total diffuse contribution represents approximately 1% of the total intensity. It is assumed here that the main contributions to the diffuse radiation within the detector's view cone are due to single scattering by molecules and aerosols and to multiple scattering by molecules alone, aerosol multiple-scattering contributions being treated as negligibly small. The theory and the numerical results discussed in this paper will be helpful not only in making corrections to measured optical depth data but also in designing improved solar radiometers.
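The size of the correction can be illustrated directly from Bouguer's law: if roughly 1% of the measured signal is diffuse, the naive retrieval underestimates the optical depth by ln(1.01)/m. This is a toy numerical example with assumed instrument values, not the paper's correction algorithm.

```python
import math

def optical_depth(V, V0, airmass):
    """Naive Bouguer/Beer-Lambert retrieval from radiometer signals."""
    return -math.log(V / V0) / airmass

V0, m, tau_true = 1.0, 2.0, 0.3          # assumed calibration, airmass, truth
direct = V0 * math.exp(-m * tau_true)    # true direct-beam signal
measured = direct * 1.01                 # direct plus ~1% diffuse contamination

tau_naive = optical_depth(measured, V0, m)        # biased low by ln(1.01)/m
tau_corr = optical_depth(measured / 1.01, V0, m)  # diffuse removed first
```

With these numbers the naive retrieval is low by about 0.005 in optical depth, small but systematic, which is why the correction matters for aerosol retrievals.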
Phase object retrieval through scattering medium
Zhao, Ming; Zhao, Meijing; Wu, Houde; Xu, Wenhai
2018-05-01
Optical imaging through a scattering medium has been an interesting and important research topic, especially in the field of biomedical imaging. However, it is still a challenging task due to strong scattering. This paper proposes to recover the phase object behind the scattering medium from a single-shot speckle intensity image using calibrated transmission matrices (TMs). We construct the forward model as a non-linear mapping, since the intensity image loses the phase information, and then employ a generalized phase retrieval algorithm to recover the hidden object. Moreover, we show that a phase object can be reconstructed from a small portion of the speckle image captured by the camera. Simulations are performed to demonstrate the scheme and test its performance. Finally, in a real experiment, we measure the TMs of the scattering medium and then use them to reconstruct the hidden object. We show that a phase object of size 32 × 32 can be retrieved from 150 × 150 speckle grains, which is only 1/50 of the speckle area. We believe the proposed method can benefit the community of imaging through scattering media.
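The recovery step can be sketched as alternating projections between the measured speckle magnitudes and the phase-only object constraint. This is a toy sketch with a random simulated TM, not the authors' generalized phase retrieval algorithm; the sizes only loosely echo the oversampled speckle-to-object ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 16, 150                   # object pixels vs. speckle grains
T = rng.normal(size=(n_out, n_in)) + 1j * rng.normal(size=(n_out, n_in))
x_true = np.exp(1j * rng.uniform(0, 2 * np.pi, n_in))  # phase-only object
y = np.abs(T @ x_true)                  # measured speckle magnitudes (phase lost)

Tp = np.linalg.pinv(T)                  # pseudo-inverse of the calibrated TM
x = np.exp(1j * rng.uniform(0, 2 * np.pi, n_in))       # random initial guess
res0 = np.linalg.norm(np.abs(T @ x) - y)
for _ in range(500):
    z = T @ x
    x = Tp @ (y * z / np.abs(z))        # impose the measured magnitudes
    x = x / np.abs(x)                   # project onto phase-only objects
res = np.linalg.norm(np.abs(T @ x) - y)
```

With many more speckle grains than object pixels the problem is heavily oversampled, which is the regime in which such Gerchberg-Saxton-style iterations behave well; the final residual `res` should be far below the initial `res0`.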
Safe reduction rules for weighted treewidth
Eijkhof, F. van den; Bodlaender, H.L.; Koster, A.M.C.A.
2002-01-01
Several sets of reduction rules are known for preprocessing a graph when computing its treewidth. In this paper, we give reduction rules for a weighted variant of treewidth, motivated by the analysis of algorithms for probabilistic networks. We present two general reduction rules that are safe for…
Energy Technology Data Exchange (ETDEWEB)
Wang, A; Paysan, P; Brehm, M; Maslowski, A; Lehmann, M; Messmer, P; Munro, P; Yoon, S; Star-Lack, J; Seghers, D [Varian Medical Systems, Palo Alto, CA (United States)
2016-06-15
Purpose: To improve CBCT image quality for image-guided radiotherapy by applying advanced reconstruction algorithms to overcome scatter, noise, and artifact limitations. Methods: CBCT is used extensively for patient setup in radiotherapy. However, image quality generally falls short of diagnostic CT, limiting soft-tissue-based positioning and potential applications such as adaptive radiotherapy. The conventional TrueBeam CBCT reconstructor uses a basic scatter correction and FDK reconstruction, resulting in residual scatter artifacts, suboptimal image noise characteristics, and other artifacts such as cone-beam artifacts. We have developed an advanced scatter correction that uses a finite-element solver (AcurosCTS) to model the behavior of photons as they pass (and scatter) through the object. Furthermore, iterative reconstruction is applied to the scatter-corrected projections, enforcing data consistency with statistical weighting and applying an edge-preserving image regularizer to reduce image noise. The combined algorithms have been implemented on a GPU. CBCT projections from clinically operating TrueBeam systems have been used to compare image quality between the conventional and improved reconstruction methods. Planning CT images of the same patients have also been compared. Results: The advanced scatter correction removes shading and inhomogeneity artifacts, reducing the scatter artifact from 99.5 HU to 13.7 HU in a typical pelvis case. Iterative reconstruction provides further benefit by reducing image noise and eliminating streak artifacts, thereby improving soft-tissue visualization. In a clinical head and pelvis CBCT, the noise was reduced by 43% and 48%, respectively, with no change in spatial resolution (assessed visually). Additional benefits include reduction of cone-beam artifacts and reduction of metal artifacts due to intrinsic downweighting of corrupted rays. Conclusion: The combination of an advanced scatter correction with iterative reconstruction…
Light scattering studies at UNICAMP
International Nuclear Information System (INIS)
Luzzi, R.; Cerdeira, H.A.; Salzberg, J.; Vasconcellos, A.R.; Frota Pessoa, S.; Reis, F.G. dos; Ferrari, C.A.; Algarte, C.A.S.; Tenan, M.A.
1975-01-01
Current theoretical studies on light scattering spectroscopy at UNICAMP are briefly presented, such as: inelastic scattering of radiation from a solid-state plasma; resonant Raman scattering; high-excitation effects; saturated semiconductors and glasses.
Zhen-Zhong, Yu; Guo-Shu, Zhao; Gang, Sun; Hai-Fei, Si; Zhong, Yang
2016-07-01
Reduction of electromagnetic scattering from a conducting cylinder can be achieved by covering it with optimized multilayers of normal dielectric and plasmonic material. Intrinsic losses in the plasmonic material, however, degrade the cloaking effect. Using a genetic algorithm, we present an optimized design of loss and gain multilayers for reduction of the scattering from a perfectly conducting cylinder. This multilayered structure is analyzed theoretically and numerically for plasmonic material with low and with high loss. We demonstrate by full-wave simulation that the optimized nonmagnetic gain-loss design can largely compensate the cloaking degradation caused by the lossy material, which facilitates the realization of practical electromagnetic cloaking, especially in the optical range. Project supported by the Research Foundation of Jinling Institute of Technology, China (Grant No. JIT-B-201426), the Jiangsu Modern Education and Technology Key Project, China (Grant No. 2014-R-31984), the Jiangsu 333 Project Funded Research Project, China (Grant No. BRA2010004), and the University Science Research Project of Jiangsu Province, China (Grant No. 15KJB520010).
Research of scatter correction on industry computed tomography
International Nuclear Information System (INIS)
Sun Shaohua; Gao Wenhuan; Zhang Li; Chen Zhiqiang
2002-01-01
In the scanning process of industrial computed tomography, scatter blurs the reconstructed image: the grey values of pixels in the reconstructed image deviate from their true values, and this effect needs to be corrected. The conventional method of deconvolution requires many iteration steps, and the computing time is not satisfactory. The authors discuss a method combining the Ordered Subsets Convex algorithm with a scatter model to implement scatter correction; promising results are obtained in both speed and image quality.
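An ordered-subsets update of the general kind named above can be sketched as follows. This is a plain least-squares variant with the scatter model reduced to a pre-computed additive estimate, not the authors' Ordered Subsets Convex transmission algorithm:

```python
import numpy as np

def os_reconstruct(A, y, scatter, n_subsets=4, iters=2000):
    """Ordered-subsets sketch: subtract a scatter estimate from the data,
    then cycle through row subsets of the system, taking one gradient step
    per subset. Subsets give much faster early convergence than sweeping
    the full system once per update."""
    y_corr = y - scatter                       # scatter model applied to data
    m, n = A.shape
    x = np.zeros(n)
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    steps = [1.0 / (np.linalg.norm(A[idx], 2) ** 2 + 1e-12) for idx in subsets]
    for _ in range(iters):
        for idx, step in zip(subsets, steps):
            r = A[idx] @ x - y_corr[idx]       # residual on this subset only
            x -= step * (A[idx].T @ r)
        x = np.maximum(x, 0.0)                 # attenuation is non-negative
    return x
```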
Energy Technology Data Exchange (ETDEWEB)
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
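The interaction-by-composition loop of such a function gas can be sketched with ordinary first-class functions standing in for normal-forming lambda-expressions (an admitted simplification of Fontana's language):

```python
import random

random.seed(0)

# Primitive symbol-manipulating functions standing in for lambda-terms.
PRIMITIVES = [lambda x: x + 1, lambda x: 2 * x, lambda x: x % 7]

def interact(f, g):
    """Interaction by composition: two colliding 'molecules' produce f(g(.))."""
    return lambda x: f(g(x))

def turing_gas(pop_size=50, collisions=1000):
    """Fixed-size ensemble of randomly interacting functions: each collision
    produces a new function, which replaces a randomly chosen member so the
    ensemble size stays constant."""
    gas = [random.choice(PRIMITIVES) for _ in range(pop_size)]
    for _ in range(collisions):
        f, g = random.sample(gas, 2)
        product = interact(f, g)
        gas[random.randrange(pop_size)] = product
    return gas
```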
Virtual neutron scattering experiments
DEFF Research Database (Denmark)
Overgaard, Julie Hougaard; Bruun, Jesper; May, Michael
2017-01-01
We describe how virtual experiments can be utilized in a learning design that prepares students for hands-on experiments at large-scale facilities. We illustrate the design by showing how virtual experiments are used at the Niels Bohr Institute in a master-level course on neutron scattering. In the last week of the course, students travel to a large-scale neutron scattering facility to perform real neutron scattering experiments. Through student interviews and survey answers, we argue that the virtual training prepares the students to engage more fruitfully with experiments by letting them focus...
Scattering on magnetic monopoles
International Nuclear Information System (INIS)
Petry, H.R.
1980-01-01
The time-dependent scattering theory of charged particles on magnetic monopoles is investigated within a mathematical framework which duly pays attention to the fact that the wavefunctions of the scattered particles are sections in a non-trivial complex line-bundle. It is found that Moeller operators have to be defined in a way which takes into account the peculiar long-range behaviour of the monopole field. Formulas for the scattering matrix and the differential cross-section are derived, and, as a by-product, a momentum space picture for particles, which are described by sections in the underlying complex line-bundle, is presented. (orig.)
Deep inelastic neutron scattering
International Nuclear Information System (INIS)
Mayers, J.
1989-03-01
The report is based on an invited talk given at a conference on ''Neutron Scattering at ISIS: Recent Highlights in Condensed Matter Research'', which was held in Rome, 1988, and is intended as an introduction to the techniques of Deep Inelastic Neutron Scattering. The subject is discussed under the following topic headings: the impulse approximation I.A., scaling behaviour, kinematical consequences of energy and momentum conservation, examples of measurements, derivation of the I.A., the I.A. in a harmonic system, and validity of the I.A. in neutron scattering. (U.K.)
Algorithms for Electromagnetic Scattering Analysis of Electrically Large Structures
DEFF Research Database (Denmark)
Borries, Oscar Peter
Accurate analysis of electrically large antennas is often done using either Physical Optics (PO) or Method of Moments (MoM), where the former typically requires fewer computational resources but has a limited application regime. This study has focused on fast variants of these two methods, with t...
International Nuclear Information System (INIS)
Ruehrnschopf, Ernst-Peter; Klingenbeck, Klaus
2011-01-01
The main components of scatter correction procedures are scatter estimation and a scatter compensation algorithm. This paper completes a previous paper in which a general framework for scatter compensation was presented under the prerequisite that a scatter estimation method is already available. In the current paper, the authors give a systematic review of the variety of scatter estimation approaches. Scatter estimation methods are based on measurements, mathematical-physical models, or combinations of both. For completeness they present an overview of measurement-based methods, but the main topic is the theoretically more demanding models: analytical, Monte Carlo, and hybrid models. Further classifications are 3D image-based and 2D projection-based approaches. The authors present a system-theoretic framework, which allows one to proceed top-down from a general 3D formulation, by successive approximations, to efficient 2D approaches. A widely useful method is the beam-scatter-kernel superposition approach. Together with the review of standard methods, the authors discuss their limitations and how to take into account the issues of object dependency, spatial variance, deformation of scatter kernels, and external and internal absorbers. Open questions for further investigation are indicated. Finally, the authors comment on some special issues and applications, such as the bow-tie filter, offset detectors, truncated data, and dual-source CT.
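The beam-scatter-kernel superposition idea reduces, in its simplest shift-invariant 1D form, to convolving the primary signal with a broad kernel whose integral is the scatter fraction. The Gaussian kernel shape and parameters below are illustrative assumptions; real kernels are object-dependent and spatially variant:

```python
import numpy as np

def scatter_kernel_superposition(primary, amplitude=0.15, width=10.0):
    """Projection-based scatter estimate: each detector pixel spreads scatter
    proportional to its primary signal, via a broad Gaussian kernel whose
    integral equals the scatter-to-primary fraction `amplitude`."""
    n = primary.size
    u = np.arange(n) - n // 2
    kernel = np.exp(-0.5 * (u / width) ** 2)
    kernel *= amplitude / kernel.sum()   # kernel integrates to scatter fraction
    return np.convolve(primary, kernel, mode="same")
```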
Electron scattering from pyrimidine
International Nuclear Information System (INIS)
Colmenares, Rafael; Fuss, Martina C; García, Gustavo; Oller, Juan C; Muñoz, Antonio; Blanco, Francisco; Almeida, Diogo; Limão-Vieira, Paulo
2014-01-01
Electron scattering from pyrimidine (C4H4N2) was investigated over a wide range of energies. Following different experimental and theoretical approaches, total, elastic and ionization cross sections as well as electron energy-loss distributions were obtained.
Gravitational Bhabha scattering
International Nuclear Information System (INIS)
Santos, A F; Khanna, Faqir C
2017-01-01
Gravitoelectromagnetism (GEM) as a theory for gravity has been developed similar to the electromagnetic field theory. A weak field approximation of Einstein's theory of relativity is similar to GEM. This theory has been quantized. Traditional Bhabha scattering, electron–positron scattering, is based on quantized electrodynamics theory. Usually the amplitude is written in terms of a one-photon exchange process. With the development of quantized GEM theory, the scattering amplitude acquires an additional component based on the exchange of one graviton at the lowest order of perturbation theory. An analysis provides the relative importance of the two amplitudes for Bhabha scattering as the energy of the exchanged particles increases. (paper)
Applied electromagnetic scattering theory
Osipov, Andrey A
2017-01-01
Besides classical applications (radar and stealth, antennas, microwave engineering), scattering and diffraction are enabling phenomena for some emerging research fields (artificial electromagnetic materials or metamaterials, terahertz technologies, electromagnetic aspects of nano-science). This book is a tutorial for advanced students who need to study diffraction theory. The textbook gives fundamental knowledge about scattering and diffraction of electromagnetic waves and provides some working examples of solutions for practical high-frequency scattering and diffraction problems. The book focuses on the most important diffraction effects and mechanisms influencing the scattering process and describes efficient and physically justified simulation methods - physical optics (PO) and the physical theory of diffraction (PTD) - applicable in typical remote sensing scenarios. The material is presented in a comprehensible and logical form, which relates the presented results to the basic principles of electromag...
International Nuclear Information System (INIS)
Tezuka, Hirokazu.
1984-10-01
Scattering of a particle by bound nucleons is discussed. Effects of nucleons that are bound in a nucleus are taken into account as a structure function, and a method for calculating the structure function is given. (author)
International Nuclear Information System (INIS)
1991-07-01
This collection contains 21 papers on the application and development of LIDAR (Light Detection and Ranging) Thomson scattering techniques for the determination of spatially resolved electron temperature and density in magnetic confinement experiments, particularly tokamaks. Refs, figs and tabs
International Nuclear Information System (INIS)
Peterson, G.A.
1989-01-01
We briefly review some of the motivations, early results, and techniques of magnetic elastic and inelastic electron-nucleus scattering. We then discuss recent results, especially those acquired at high momentum transfers. 50 refs., 19 figs
Deep inelastic lepton scattering
International Nuclear Information System (INIS)
Nachtmann, O.
1977-01-01
Deep inelastic electron (muon) nucleon and neutrino nucleon scattering as well as electron positron annihilation into hadrons are reviewed from a theoretical point of view. The emphasis is placed on comparisons of quantum chromodynamics with the data. (orig.)
Small angle neutron scattering
International Nuclear Information System (INIS)
Bernardini, G.; Cherubini, G.; Fioravanti, A.; Olivi, A.
1976-09-01
A method for the analysis of data from neutron small-angle scattering measurements has been developed for the case of homogeneous particles, starting from the basic theory without making any assumption on the form of the particle size distribution function. The experimental scattering curves are interpreted with the aid of a computer by means of a dedicated routine. The parameters obtained are compared with the corresponding ones derived from observations with the transmission electron microscope.
International Nuclear Information System (INIS)
Aprile-Giboni, E.; Cantale, G.; Hausammann, R.
1983-01-01
Using the PM1 polarized proton beam at SIN and a polarized target, elastic pp scattering as well as the inelastic channel pp → π⁺d have been studied between 400 and 600 MeV. For the elastic reaction, a sufficient number of spin-dependent parameters has been measured in order to do a direct reconstruction of the scattering matrix between 38° and 90° in the centre-of-mass frame. 10 references, 6 figures
Fast analytical scatter estimation using graphics processing units.
Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris
2015-01-01
To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first-order scatter in cone-beam image reconstruction improves the contrast-to-noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and, with further acceleration and a method to account for multiple scatter, may be useful for practical scatter correction schemes.
Optimization-based scatter estimation using primary modulation for computed tomography
Energy Technology Data Exchange (ETDEWEB)
Chen, Yi; Ma, Jingchen; Zhao, Jun, E-mail: junzhao@sjtu.edu.cn [School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China); Song, Ying [Department of Radiation Oncology, West China Hospital, Sichuan University, Chengdu 610041 (China)
2016-08-15
Purpose: Scatter reduces the image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method simultaneously obtains the primary and scatter in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In primary modulation, a modulator inserted between the x-ray source and the object modulates the primary while the scatter remains smooth. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. The simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy of scatter estimation and correction. This method is promising for scatter correction of various kinds of x-ray imaging modalities, such as x-ray radiography, cone beam CT, and fourth-generation CT.
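Why modulation makes the separation possible can be illustrated with a toy 1D demodulation: if alternating detector pixels see transmissions 1 and c while the scatter stays smooth, neighboring samples can be solved pairwise for primary and scatter. This shows only the underlying principle; the paper's OSE replaces this naive pairwise inversion with a regularized optimization:

```python
import numpy as np

def separate_by_modulation(measured, c):
    """Toy primary/scatter separation for a checkerboard modulator with
    transmissions 1 and c on alternating pixels. Assumes primary and
    scatter are locally smooth, so neighboring pixels share one (p, s):
        d_even = p + s,   d_odd = c*p + s
    which is solved pairwise for p and s."""
    d_even, d_odd = measured[0::2], measured[1::2]
    m = min(d_even.size, d_odd.size)
    p_half = (d_even[:m] - d_odd[:m]) / (1.0 - c)
    s_half = d_even[:m] - p_half
    # upsample back to full resolution by repetition
    primary = np.repeat(p_half, 2)[: measured.size]
    scatter = np.repeat(s_half, 2)[: measured.size]
    return primary, scatter
```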
International Nuclear Information System (INIS)
Floyd, C.E.; Beatty, P.T.; Ravin, C.E.
1988-01-01
The Fourier deconvolution algorithm for scatter compensation in digital chest radiography has been evaluated in four anatomically different regions at three energies. A shift-invariant scatter distribution shape, optimized for the lung region at 140 kVp, was applied at 90 kVp and 120 kVp in the lung, retrocardiac, subdiaphragmatic, and thoracic spine regions. Scatter estimates from the deconvolution were compared with measured values. While some regional variation is apparent, the use of a shift-invariant scatter distribution shape (optimized for a given energy) produces reasonable scatter compensation in the chest. A different set of deconvolution parameters was required at each energy.
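A shift-invariant Fourier deconvolution of this type can be sketched in 1D. The measurement is modeled as primary plus primary convolved with a broad kernel, so the primary is recovered by dividing out (1 + K) in the frequency domain; the Gaussian kernel shape and its parameters are assumptions for illustration:

```python
import numpy as np

def fourier_scatter_compensation(measured, spf, width):
    """Shift-invariant deconvolution: model the measurement as primary plus
    primary convolved with a broad kernel of integral `spf` (scatter-to-
    primary fraction), then invert in Fourier space:
        M = P * (1 + K)   =>   P = IFFT[ FFT(m) / (1 + K) ]."""
    n = measured.size
    u = np.fft.fftfreq(n)
    # Fourier transform of a unit-integral Gaussian of std `width`, scaled
    # by the scatter fraction
    K = spf * np.exp(-2.0 * (np.pi * u * width) ** 2)
    return np.fft.ifft(np.fft.fft(measured) / (1.0 + K)).real
```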
Decoding Hermitian Codes with Sudan's Algorithm
DEFF Research Database (Denmark)
Høholdt, Tom; Nielsen, Rasmus Refslund
1999-01-01
We present an efficient implementation of Sudan's algorithm for list decoding Hermitian codes beyond half the minimum distance. The main ingredients are an explicit method to calculate so-called increasing zero bases, an efficient interpolation algorithm for finding the Q-polynomial, and a reduct...
PROPOSAL OF ALGORITHM FOR ROUTE OPTIMIZATION
Directory of Open Access Journals (Sweden)
Robert Ramon de Carvalho Sousa
2016-06-01
This article uses the Six Sigma methodology to elaborate an algorithm for routing problems that is able to obtain more efficient results than Clarke and Wright's (CW) algorithm (1964) in situations of random increases in product delivery demand, where the service level cannot be raised. In some situations, the proposed algorithm obtained more efficient results than the CW algorithm. The key factor was a reduction in the number of mistakes (one-way routes) and in the level of result variation.
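For reference, the Clarke and Wright savings heuristic that serves as the baseline can be sketched as follows (Euclidean coordinates and a single vehicle-capacity constraint assumed):

```python
import math

def clarke_wright(coords, demand, capacity):
    """Clarke and Wright (1964) savings heuristic. coords[0] is the depot,
    customers are 1..n. Start with one out-and-back route per customer,
    then merge routes at their endpoints in order of decreasing savings
    s(i, j) = d(0, i) + d(0, j) - d(i, j), respecting vehicle capacity."""
    n = len(coords) - 1
    def dist(i, j):
        return math.hypot(coords[i][0] - coords[j][0],
                          coords[i][1] - coords[j][1])
    routes = [[i] for i in range(1, n + 1)]
    where = {i: r for r in routes for i in r}     # customer -> its route
    savings = sorted(((dist(0, i) + dist(0, j) - dist(i, j), i, j)
                      for i in range(1, n + 1) for j in range(i + 1, n + 1)),
                     reverse=True)
    for s, i, j in savings:
        ri, rj = where[i], where[j]
        if ri is rj or s <= 0:
            continue
        if sum(demand[k] for k in ri + rj) > capacity:
            continue
        if ri[-1] != i:                            # i must end its route
            if ri[0] == i:
                ri.reverse()
            else:
                continue
        if rj[0] != j:                             # j must start its route
            if rj[-1] == j:
                rj.reverse()
            else:
                continue
        ri.extend(rj)                              # merge the two routes
        for k in rj:
            where[k] = ri
        routes = [r for r in routes if r is not rj]
    return routes
```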
Difference structures from time-resolved small-angle and wide-angle x-ray scattering
Nepal, Prakash; Saldin, D. K.
2018-05-01
Time-resolved small-angle x-ray scattering/wide-angle x-ray scattering (SAXS/WAXS) is capable of recovering difference structures directly from difference SAXS/WAXS curves. It does so by means of the theory described here because the structural changes in pump-probe detection in a typical time-resolved experiment are generally small enough to be confined to a single residue or group in close proximity, which is identified by a method akin to the difference Fourier method of time-resolved crystallography. If it is assumed, as is usual with time-resolved structures, that the moved atoms lie within the residue, the 100-fold reduction in the search space (assuming a typical protein has about 100 residues) allows the extraction of the structure by a simulated annealing algorithm with a huge reduction in computing time, and leads to greater resolution by varying the positions of atoms only within that residue. This reduction in the number of potential moved atoms allows us to identify the actual motions of the individual atoms. In the case of a crystal, time-resolved calculations are normally performed using the difference Fourier method, which is, of course, not directly applicable to SAXS/WAXS. The method developed in this paper may be thought of as a substitute for that method which allows SAXS/WAXS (and hence disordered molecules) to also be used for time-resolved structural work.
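A simulated annealing search over a reduced parameter space, of the generic kind invoked here, can be sketched as follows. The cost function, move generator, and cooling schedule below are placeholders, not the authors' structure-refinement code:

```python
import math, random

random.seed(0)

def simulated_annealing(cost, start, neighbor, t0=1.0, t_min=1e-4, cooling=0.95):
    """Generic simulated annealing: accept worse moves with probability
    exp(-dE/T) while the temperature T is annealed toward zero."""
    x, e = start, cost(start)
    best, best_e = x, e
    t = t0
    while t > t_min:
        for _ in range(50):                       # moves per temperature step
            y = neighbor(x)
            ey = cost(y)
            if ey < e or random.random() < math.exp((e - ey) / t):
                x, e = y, ey                      # accept the move
                if e < best_e:
                    best, best_e = x, e           # track the best state seen
        t *= cooling                              # geometric cooling schedule
    return best, best_e
```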
SU-D-206-07: CBCT Scatter Correction Based On Rotating Collimator
International Nuclear Information System (INIS)
Yu, G; Feng, Z; Yin, Y; Qiang, L; Li, B; Huang, P; Li, D
2016-01-01
Purpose: Scatter correction in cone-beam computed tomography (CBCT) has an obvious effect on the removal of image noise, the cup artifact, and the increase of image contrast. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, mechanical inconvenience and a propensity to residual artifacts have limited further basic and clinical research. Here, we propose a rotating-collimator-based approach, in conjunction with reconstruction based on a discrete Radon transform and Tchebichef moments algorithm, to correct scatter-induced artifacts. Methods: A rotating collimator, comprising round tungsten alloy strips, was mounted on a linear actuator. The rotating collimator is divided into 6 equal portions. The round strips are evenly spaced on each portion but staggered between different portions. A step motor connected to the rotating collimator drove the blocker around the x-ray source during the CBCT acquisition. CBCT reconstruction based on a discrete Radon transform and Tchebichef moments algorithm is then performed. Experimental studies using a water phantom and the Catphan504 phantom were carried out to evaluate the performance of the proposed scheme. Results: The proposed algorithm was tested in both Monte Carlo simulation and actual experiments with the Catphan504 phantom. In simulation, the mean square reconstruction error decreases from 16% to 1.18%, the cupping (τcup) from 14.005% to 0.66%, and the peak signal-to-noise ratio increases from 16.9594 to 31.45. In the actual experiments, the induced visual artifacts are significantly reduced. Conclusion: We conducted an experiment on a CBCT imaging system with a rotating collimator to develop and optimize x-ray scatter control and reduction techniques. The proposed method is attractive in applications where high CBCT image quality is critical, for example, dose calculation in adaptive radiation therapy.
SU-D-206-07: CBCT Scatter Correction Based On Rotating Collimator
Energy Technology Data Exchange (ETDEWEB)
Yu, G; Feng, Z [Shandong Normal University, Jinan, Shandong (China); Yin, Y [Shandong Cancer Hospital and Institute, China, Jinan, Shandong (China); Qiang, L [Zhang Jiagang STFK Medical Device Co, Zhangjiangkang, Suzhou (China); Li, B [Shandong Academy of Medical Sciences, Jinan, Shandong provice (China); Huang, P [Shandong Province Key Laboratory of Medical Physics and Image Processing Te, Ji’nan, Shandong province (China); Li, D [School of Physics and Electronics, Shandong Normal University, Jinan, Shandong (China)
2016-06-15
Purpose: Scatter correction in cone-beam computed tomography (CBCT) has an obvious effect on the removal of image noise, the cup artifact, and the increase of image contrast. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, mechanical inconvenience and a propensity to residual artifacts have limited further basic and clinical research. Here, we propose a rotating-collimator-based approach, in conjunction with reconstruction based on a discrete Radon transform and Tchebichef moments algorithm, to correct scatter-induced artifacts. Methods: A rotating collimator, comprising round tungsten alloy strips, was mounted on a linear actuator. The rotating collimator is divided into 6 equal portions. The round strips are evenly spaced on each portion but staggered between different portions. A step motor connected to the rotating collimator drove the blocker around the x-ray source during the CBCT acquisition. CBCT reconstruction based on a discrete Radon transform and Tchebichef moments algorithm is then performed. Experimental studies using a water phantom and the Catphan504 phantom were carried out to evaluate the performance of the proposed scheme. Results: The proposed algorithm was tested in both Monte Carlo simulation and actual experiments with the Catphan504 phantom. In simulation, the mean square reconstruction error decreases from 16% to 1.18%, the cupping (τcup) from 14.005% to 0.66%, and the peak signal-to-noise ratio increases from 16.9594 to 31.45. In the actual experiments, the induced visual artifacts are significantly reduced. Conclusion: We conducted an experiment on a CBCT imaging system with a rotating collimator to develop and optimize x-ray scatter control and reduction techniques. The proposed method is attractive in applications where high CBCT image quality is critical, for example, dose calculation in adaptive radiation therapy.
Desmal, Abdulla; Bagci, Hakan
2014-01-01
A numerical framework that incorporates recently developed iterative shrinkage thresholding (IST) algorithms within the Born iterative method (BIM) is proposed for solving the two-dimensional inverse electromagnetic scattering problem. IST
Basis reduction for layered lattices
E.L. Torreão Dassen (Erwin)
2011-01-01
We develop the theory of layered Euclidean spaces and layered lattices. With this new theory, certain problems that are usually solved by using classical lattices with a "weighting" gain a new, more natural form. Using the layered lattice basis reduction algorithms introduced here, these
Testing a Fourier Accelerated Hybrid Monte Carlo Algorithm
Catterall, S.; Karamov, S.
2001-01-01
We describe a Fourier Accelerated Hybrid Monte Carlo algorithm suitable for dynamical fermion simulations of non-gauge models. We test the algorithm in supersymmetric quantum mechanics viewed as a one-dimensional Euclidean lattice field theory. We find dramatic reductions in the autocorrelation time of the algorithm in comparison to standard HMC.
Efficient algorithms of multidimensional γ-ray spectra compression
International Nuclear Information System (INIS)
Morhac, M.; Matousek, V.
2006-01-01
The efficient algorithms to compress multidimensional γ-ray events are presented. Two alternative kinds of compression algorithms based on both the adaptive orthogonal and randomizing transforms are proposed. In both algorithms we employ the reduction of data volume due to the symmetry of the γ-ray spectra
Compton scattering collision module for OSIRIS
Del Gaudio, Fabrizio; Grismayer, Thomas; Fonseca, Ricardo; Silva, Luís
2017-10-01
Compton scattering plays a fundamental role in a variety of different astrophysical environments, such as at the gaps of pulsars and the stagnation surface of black holes. In these scenarios, Compton scattering is coupled with self-consistent mechanisms such as pair cascades. We present the implementation of a novel module, embedded in the self-consistent framework of the PIC code OSIRIS 4.0, capable of simulating Compton scattering from first principles and that is fully integrated with the self-consistent plasma dynamics. The algorithm accounts for the stochastic nature of Compton scattering, reproducing without approximations the exchange of energy between photons and unbound charged species. We present benchmarks of the code against the analytical results of Blumenthal et al. and the numerical solution of the linear Kompaneets equation, and good agreement is found between the simulations and the theoretical models. This work is supported by the European Research Council Grant (ERC-2015-AdG 695088) and the Fundação para a Ciência e a Tecnologia (Bolsa de Investigação PD/BD/114323/2016).
Riemann–Hilbert problem approach for two-dimensional flow inverse scattering
Energy Technology Data Exchange (ETDEWEB)
Agaltsov, A. D., E-mail: agalets@gmail.com [Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, 119991 Moscow (Russian Federation); Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr [CNRS (UMR 7641), Centre de Mathématiques Appliquées, Ecole Polytechnique, 91128 Palaiseau (France); IEPT RAS, 117997 Moscow (Russian Federation); Moscow Institute of Physics and Technology, Dolgoprudny (Russian Federation)
2014-10-15
We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.
Riemann–Hilbert problem approach for two-dimensional flow inverse scattering
International Nuclear Information System (INIS)
Agaltsov, A. D.; Novikov, R. G.
2014-01-01
We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given
Scattering calculation and image reconstruction using elevation-focused beams.
Duncan, David P; Astheimer, Jeffrey P; Waag, Robert C
2009-05-01
Pressure scattered by cylindrical and spherical objects with elevation-focused illumination and reception has been analytically calculated, and corresponding cross sections have been reconstructed with a two-dimensional algorithm. Elevation focusing was used to elucidate constraints on quantitative imaging of three-dimensional objects with two-dimensional algorithms. Focused illumination and reception are represented by angular spectra of plane waves that were efficiently computed using a Fourier interpolation method to maintain the same angles for all temporal frequencies. Reconstructions were formed using an eigenfunction method with multiple frequencies, phase compensation, and iteration. The results show that the scattered pressure reduces to a two-dimensional expression, and two-dimensional algorithms are applicable when the region of a three-dimensional object within an elevation-focused beam is approximately constant in elevation. The results also show that energy scattered out of the reception aperture by objects contained within the focused beam can result in the reconstructed values of attenuation slope being greater than true values at the boundary of the object. Reconstructed sound speed images, however, appear to be relatively unaffected by the loss in scattered energy. The broad conclusion that can be drawn from these results is that two-dimensional reconstructions require compensation to account for uncaptured three-dimensional scattering.
Reduction of metal artifacts: beam hardening and photon starvation effects
Yadava, Girijesh K.; Pal, Debashish; Hsieh, Jiang
2014-03-01
The presence of metal artifacts in CT imaging can obscure relevant anatomy and interfere with disease diagnosis. Metal artifacts arise primarily from beam hardening, scatter, partial volume, and photon starvation; the contribution of each depends on the type of hardware. A comparison of CT images obtained with different metallic hardware in various applications, along with acquisition and reconstruction parameters, helps in understanding methods for reducing or overcoming such artifacts. In this work, a metal beam-hardening correction (BHC) algorithm and a projection-completion-based metal artifact reduction (MAR) algorithm were developed and applied to phantom and clinical CT scans with various metallic implants. Stainless steel and titanium were used to model and correct for the metal beam-hardening effect. In the MAR algorithm, the corrupted projection samples are replaced by a combination of the original projections and in-painted data obtained by forward-projecting a prior image. The data included spine fixation screws, hip implants, dental fillings, and body extremity fixations, covering the range of clinically used metal implants. Comparison of BHC and MAR on different metallic implants was used to characterize the dominant source of the artifacts and conceivable methods to overcome them. Results of the study indicate that beam hardening can be a dominant source of artifacts in many spine and extremity fixations, whereas dental and hip implants can be dominant sources of photon starvation. The BHC algorithm significantly improved image quality in CT scans with metallic screws, whereas the MAR algorithm alleviated artifacts in hip implants and dental fillings.
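The projection-completion step can be illustrated in its simplest form: corrupted sinogram samples inside the metal trace are replaced by values interpolated from their uncorrupted neighbors. The paper blends in forward-projections of a prior image instead; plain linear interpolation is the minimal stand-in:

```python
import numpy as np

def inpaint_metal_trace(projection, metal_mask):
    """Projection-completion sketch: replace detector samples corrupted by
    metal with linear interpolation from the nearest uncorrupted samples."""
    x = np.arange(projection.size)
    good = ~metal_mask
    return np.where(metal_mask,
                    np.interp(x, x[good], projection[good]),
                    projection)
```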
Pseudo-deterministic Algorithms
Goldwasser , Shafi
2012-01-01
In this talk we describe a new type of probabilistic algorithm which we call Bellagio algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial-time observer with black-box access to the algorithm. We show a necessary an...
From parallel to distributed computing for reactive scattering calculations
International Nuclear Information System (INIS)
Lagana, A.; Gervasi, O.; Baraglia, R.
1994-01-01
Some reactive scattering codes have been ported to different innovative computer architectures ranging from massively parallel machines to clustered workstations. The porting has required a drastic restructuring of the codes to single out computationally decoupled, CPU-intensive subsections. The suitability of different theoretical approaches for parallel and distributed computing restructuring is discussed and the efficiency of the related algorithms evaluated.
Electrical Impedance Tomography: 3D Reconstructions using Scattering Transforms
DEFF Research Database (Denmark)
Delbary, Fabrice; Hansen, Per Christian; Knudsen, Kim
2012-01-01
In three dimensions the Calderon problem was addressed and solved in theory in the 1980s. The main ingredients in the solution of the problem are complex geometrical optics solutions to the conductivity equation and a (non-physical) scattering transform. The resulting reconstruction algorithm...
Variance Reduction Techniques in Monte Carlo Methods
Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.
2010-01-01
Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the
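The abstract breaks off, but one of the simplest VRTs it surveys, antithetic variates, can be illustrated concretely. This is our minimal Python sketch for estimating E[exp(U)] with U uniform on (0, 1); the function names are ours, not from the paper.

```python
import math
import random
import statistics

def plain_mc(n, rng):
    # crude Monte Carlo estimate of E[exp(U)], U ~ Uniform(0, 1)
    return statistics.mean(math.exp(rng.random()) for _ in range(n))

def antithetic_mc(n, rng):
    # antithetic variates: pair each draw U with 1 - U; because exp is
    # monotone, the paired estimates are negatively correlated, which
    # lowers the variance of their average at the same sample budget
    return statistics.mean((math.exp(u) + math.exp(1.0 - u)) / 2.0
                           for u in (rng.random() for _ in range(n // 2)))
```

The exact value is e - 1 ≈ 1.7183; over repeated runs the antithetic estimator scatters far less around it than the plain estimator with the same number of function evaluations.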
Numerical solution of the multichannel scattering problem
International Nuclear Information System (INIS)
Korobov, V.I.
1992-01-01
A numerical algorithm for solving the multichannel elastic and inelastic scattering problem is proposed. The starting point is the system of radial Schroedinger equations with linear boundary conditions imposed at some point R = R_m placed somewhere in the asymptotic region. It is discussed how the obtained linear equation can be split into a zero-order operator and its perturbative part. It is shown that the Lentini-Pereyra variable-order finite-difference method appears to be very suitable for solving this kind of problem. The derived procedure is applied to dμ+t→tμ+d inelastic scattering in the framework of the adiabatic multichannel approach. 19 refs.; 1 fig.; 1 tab
Vector Boson Scattering at High Mass
The ATLAS collaboration
2009-01-01
In the absence of a light Higgs boson, the mechanism of electroweak symmetry breaking will be best studied in processes of vector boson scattering at high mass. Various models predict resonances in this channel. Here, we investigate $WW$ scalar and vector resonances, $WZ$ vector resonances and a $ZZ$ scalar resonance over a range of diboson centre-of-mass energies. Particular attention is paid to the application of forward jet tagging and to the reconstruction of dijet pairs with low opening angle resulting from the decay of highly boosted vector bosons. The performances of different jet algorithms are compared. We find that resonances in vector boson scattering can be discovered with a few tens of inverse femtobarns of integrated luminosity.
Directory of Open Access Journals (Sweden)
Chung-Ta Li
2014-01-01
We propose a species-based hybrid of the electromagnetism-like mechanism (EM) and back-propagation algorithms (SEMBP) for an interval type-2 fuzzy neural system with asymmetric membership functions (AIT2FNS) design. The interval type-2 asymmetric fuzzy membership functions (IT2 AFMFs) and the TSK-type consequent part are adopted to implement the network structure in AIT2FNS. In addition, the type reduction procedure is integrated into an adaptive network structure to reduce computational complexity. Hence, the AIT2FNS can enhance the approximation accuracy effectively by using fewer fuzzy rules. The AIT2FNS is trained by the SEMBP algorithm, which contains the steps of uniform initialization, species determination, local search, total force calculation, movement, and evaluation. It combines the advantages of EM and back-propagation (BP) algorithms to attain a faster convergence and a lower computational complexity. The proposed SEMBP algorithm adopts the uniform method (which evenly scatters solution agents over the feasible solution region) and the species technique to improve the algorithm’s ability to find the global optimum. Finally, two illustrative examples of nonlinear systems control are presented to demonstrate the performance and the effectiveness of the proposed AIT2FNS with the SEMBP algorithm.
Directory of Open Access Journals (Sweden)
A. P. Preobrazhensky
2017-02-01
This paper considers the problem of optimization of the characteristics of scattering of electromagnetic waves on a periodic electrodynamic structure. The solution of the scattering problem is based on the method of integral equations; the optimization of the characteristics is based on the genetic algorithm. Recommendations on the parameters of the periodic structure for given incidence angles are provided.
Hakky, Tariq S; Martinez, Daniel; Yang, Christopher; Carrion, Rafael E
2015-01-01
Here we present the first video demonstration of reduction corporoplasty in the management of phallic disfigurement in a 17-year-old man with a history of sickle cell disease and priapism. Surgical management of aneurysmal dilation of the corpora has yet to be defined in the literature. We performed bilateral elliptical incisions over the lateral corpora as management of aneurysmal dilation of the corpora to correct phallic disfigurement. The patient tolerated the procedure well and has had resolution of his corporal disfigurement. Reduction corporoplasty using bilateral lateral elliptical incisions in the management of aneurysmal dilation of the corpora is a safe and feasible operation in the management of phallic disfigurement.
Inclusion of Scatter in HADES: Final Report
International Nuclear Information System (INIS)
Aufderheide, M.B.
2010-01-01
Covert nuclear attack is one of the foremost threats facing the United States and is a primary focus of the War on Terror. The Domestic Nuclear Detection Office (DNDO), within the Department of Homeland Security (DHS), is chartered to develop and improve domestic systems to detect and interdict smuggling of a nuclear explosive device, fissile material or radiological material intended for illicit use. The CAARS (Cargo Advanced Automated Radiography System) program is a major part of the DHS effort to enhance US security by harnessing cutting-edge technologies to detect radiological and nuclear threats at points of entry to the United States. DNDO has selected vendors to develop complete radiographic systems. It is crucial that the initial design and testing concepts for the systems be validated and compared prior to the substantial efforts to build and deploy prototypes and subsequent large-scale production. An important aspect of these systems is the scatter which interferes with imaging. Monte Carlo codes, such as MCNP (X-5 Monte Carlo Team, 2005 Revision), allow scatter to be calculated, but these calculations are very time consuming. It would be useful to have a fast scatter estimation algorithm in a fast ray tracing code. We have been extending the HADES ray-tracing radiographic simulation code to model vendor systems in a flexible and quick fashion and to use this tool to study a variety of questions involving system performance and the comparative value of surrogates. To enable this work, HADES has been linked to the BRL-CAD library (BRL-CAD Open Source Project, 2010) in order to enable the inclusion of complex CAD geometries in simulations, scanner geometries have been implemented in HADES, and the novel detector responses have been included in HADES. A major extension of HADES which has been required by this effort is the inclusion of scatter in these radiographic simulations. Ray tracing codes generally do not easily allow the inclusion of scatter, because
Memory sparing, fast scattering formalism for rigorous diffraction modeling
Iff, W.; Kämpfe, T.; Jourlin, Y.; Tishchenko, A. V.
2017-07-01
The basics and algorithmic steps of a novel scattering formalism suited for memory sparing and fast electromagnetic calculations are presented. The formalism, called ‘S-vector algorithm’ (by analogy with the known scattering-matrix algorithm), allows the calculation of the collective scattering spectra of individual layered micro-structured scattering objects. A rigorous method of linear complexity is applied to model the scattering at individual layers; here the generalized source method (GSM) resorting to Fourier harmonics as basis functions is used as one possible method of linear complexity. The concatenation of the individual scattering events can be achieved sequentially or in parallel, both having pros and cons. The present development will largely concentrate on a consecutive approach based on the multiple reflection series. The latter will be reformulated into an implicit formalism which will be associated with an iterative solver, resulting in improved convergence. The examples will first refer to 1D grating diffraction for the sake of simplicity and intelligibility, with a final 2D application example.
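The reformulation of the multiple reflection series into an implicit form solved iteratively can be illustrated generically. The sketch below is our illustration of the underlying Neumann-series idea (x = b + Ab + A²b + ... summed by the fixed-point iteration x ← b + Ax), not the paper's actual S-vector code.

```python
import numpy as np

def multiple_reflection_series(A, b, tol=1e-12, max_iter=1000):
    """Sum the multiple-reflection (Neumann) series x = b + A b + A^2 b + ...
    by the fixed-point iteration x <- b + A x, i.e. an implicit reformulation
    handled by an iterative solver. Converges when the spectral radius of A
    is below 1."""
    x = b.copy()
    for _ in range(max_iter):
        x_new = b + A @ x
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

The fixed point satisfies (I - A) x = b, so the iterative sum agrees with the direct solve; replacing the plain iteration with a Krylov-type solver is the kind of convergence improvement the abstract alludes to.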
Virtual neutron scattering experiments
DEFF Research Database (Denmark)
Overgaard, Julie Hougaard; Bruun, Jesper; May, Michael
2016-01-01
We describe how virtual experiments can be utilized in a learning design that prepares students for hands-on experiments at large-scale facilities. We illustrate the design by showing how virtual experiments are used at the Niels Bohr Institute in a master-level course on neutron scattering. In the last week of the course, students travel to a large-scale neutron scattering facility to perform real neutron scattering experiments. Through student interviews and survey answers, we argue that the virtual training prepares the students to engage more fruitfully with experiments by letting them focus on physics and data rather than the overwhelming instrumentation. We argue that this is because they can transfer their virtual experimental experience to the real-life situation. However, we also find that learning is still situated in the sense that only knowledge of particular experiments is transferred...
Electron scattering off nuclei
International Nuclear Information System (INIS)
Gattone, A.O.
1989-01-01
Two recently developed aspects related to the scattering of electrons off nuclei are presented. On the one hand, a model is introduced which emphasizes the relativistic aspects of the problem in the impulse approximation, by demanding strict maintenance of the algebra of the Poincare group. On the other hand, the second model aims at a more sophisticated description of the nuclear response in the case of collective excitations. Basically, it utilizes the RPA formalism with a new development which enables a more careful treatment of the states in the continuum as is the case for the giant resonances. Applications of both models to the description of elastic scattering, inelastic scattering to discrete levels, giant resonances and the quasi-elastic region are discussed. (Author) [es
Cold moderator scattering kernels
International Nuclear Information System (INIS)
MacFarlane, R.E.
1989-01-01
New thermal-scattering-law files in ENDF format have been developed for solid methane, liquid methane, liquid ortho- and para-hydrogen, and liquid ortho- and para-deuterium using up-to-date models that include such effects as incoherent elastic scattering in the solid, diffusion and hindered vibrations and rotations in the liquids, and spin correlations for the hydrogen and deuterium. These files were generated with the new LEAPR module of the NJOY Nuclear Data Processing System. Other modules of this system were used to produce cross sections for these moderators in the correct format for the continuous-energy Monte Carlo code (MCNP) being used for cold-moderator-design calculations at the Los Alamos Neutron Scattering Center (LANSCE). 20 refs., 14 figs
Quantum Optical Multiple Scattering
DEFF Research Database (Denmark)
Ott, Johan Raunkjær
In the first part we use a scattering-matrix formalism combined with results from random-matrix theory to investigate the interference of quantum optical states on a multiple scattering medium. We investigate a single realization of a scattering medium, thereby showing that it is possible to create entangled states by interference of squeezed beams. Mixing photon states on the single realization also shows that quantum interference naturally arises by interfering quantum states. We further investigate the ensemble-averaged transmission properties of the quantized light and see that the induced quantum interference survives even after disorder averaging. The quantum interference manifests itself through increased photon correlations. Furthermore, the theoretical description of a measurement procedure is presented. In this work we relate the noise power spectrum of the total transmitted or reflected light...
Energy Technology Data Exchange (ETDEWEB)
ZALIZNYAK,I.A.; LEE,S.H.
2004-07-30
Much of our understanding of the atomic-scale magnetic structure and the dynamical properties of solids and liquids was gained from neutron-scattering studies. Elastic and inelastic neutron spectroscopy provided physicists with an unprecedented, detailed access to spin structures, magnetic-excitation spectra, soft modes and critical dynamics at magnetic-phase transitions, which is unrivaled by other experimental techniques. Because the neutron has no electric charge, it is an ideal weakly interacting and highly penetrating probe of matter's inner structure and dynamics. Unlike techniques using photon electric fields or charged particles (e.g., electrons, muons) that significantly modify the local electronic environment, neutron spectroscopy allows determination of a material's intrinsic, unperturbed physical properties. The method is not sensitive to extraneous charges, electric fields, and the imperfection of surface layers. Because the neutron is a highly penetrating and non-destructive probe, neutron spectroscopy can probe the microscopic properties of bulk materials (not just their surface layers) and study samples embedded in complex environments, such as cryostats, magnets, and pressure cells, which are essential for understanding the physical origins of magnetic phenomena. Neutron scattering is arguably the most powerful and versatile experimental tool for studying the microscopic properties of magnetic materials. The magnitude of the cross-section of the neutron magnetic scattering is similar to the cross-section of nuclear scattering by short-range nuclear forces, and is large enough to provide measurable scattering by the ordered magnetic structures and electron spin fluctuations. In the half-a-century or so that has passed since neutron beams with sufficient intensity for scattering applications became available with the advent of the nuclear reactors, they have become indispensable tools for studying a variety of important areas of modern
Electromagnetic scattering theory
Bird, J. F.; Farrell, R. A.
1986-01-01
Electromagnetic scattering theory is discussed with emphasis on the general stochastic variational principle (SVP) and its applications. The stochastic version of the Schwinger-type variational principle is presented, and explicit expressions for its integrals are considered. Results are summarized for scalar wave scattering from a classic rough-surface model and for vector wave scattering from a random dielectric-body model. Also considered are the selection of trial functions and the variational improvement of the Kirchhoff short-wave approximation appropriate to large size-parameters. Other applications of vector field theory discussed include a general vision theory and the analysis of hydromagnetism induced by ocean motion across the geomagnetic field. Levitational force-torque in the magnetic suspension of the disturbance compensation system (DISCOS), now deployed in NOVA satellites, is also analyzed using the developed theory.
Scattering cross section of unequal length dipole arrays
Singh, Hema; Jha, Rakesh Mohan
2016-01-01
This book presents a detailed and systematic analytical treatment of scattering by an arbitrary dipole array configuration with unequal-length dipoles, different inter-element spacing and load impedance. It provides a physical interpretation of the scattering phenomena within the phased array system. The antenna radar cross section (RCS) depends on the field scattered by the antenna towards the receiver. It has two components, viz. structural RCS and antenna mode RCS. The latter component dominates the former, especially if the antenna is mounted on a low observable platform. The reduction in the scattering due to the presence of antennas on the surface is one of the concerns towards stealth technology. In order to achieve this objective, a detailed and accurate analysis of antenna mode scattering is required. In practical phased array, one cannot ignore the finite dimensions of antenna elements, coupling effect and the role of feed network while estimating the antenna RCS. This book presents the RCS estimati...
Neutron scattering. Experiment manuals
Energy Technology Data Exchange (ETDEWEB)
Brueckel, Thomas; Heger, Gernot; Richter, Dieter; Roth, Georg; Zorn, Reiner (eds.)
2010-07-01
The following topics are dealt with: the thermal triple-axis spectrometer PUMA, the high-resolution powder diffractometer SPODI, the hot single-crystal diffractometer HEiDi for structure analysis with neutrons, the backscattering spectrometer SPHERES, neutron polarization analysis with the time-of-flight spectrometer DNS, the neutron spin-echo spectrometer J-NSE, small-angle neutron scattering with the KWS-1 and KWS-2 diffractometers, the very-small-angle neutron scattering diffractometer with focusing mirror KWS-3, the resonance spin-echo spectrometer RESEDA, the reflectometer TREFF, and the time-of-flight spectrometer TOFTOF. (HSI)
International Nuclear Information System (INIS)
Christillin, P.
1986-01-01
The theory of nuclear Compton scattering is reformulated with explicit consideration of both virtual and real pionic degrees of freedom. The effects due to low-lying nuclear states, to seagull terms, to pion condensation and to the Δ dynamics in the nucleus, and their interplay in the different energy regions, are examined. It is shown that all corrections to the one-body terms, of diffractive behaviour determined by the nuclear form factor, have an effective two-body character. The possibility of using Compton scattering as a complementary source of information about nuclear dynamics is stressed once more. (author)
Diffraction in nuclear scattering
International Nuclear Information System (INIS)
Wojciechowski, H.
1986-01-01
The elastic scattering amplitudes for charged and neutral particles have been decomposed into diffractive and refractive parts by splitting the nuclear elastic scattering matrix elements into components responsible for these effects. It has been shown that the pure geometrical diffractive effect, which carries no information about the nuclear interaction, is always predominant at forward angles of elastic angular distributions. This fact suggests that for strongly absorbed particles only the elastic cross section at backward angles, i.e. the refractive cross section, can give us basic information about the central nuclear potential. 12 refs., 4 figs., 1 tab. (author)
Proton nuclear scattering radiography
International Nuclear Information System (INIS)
Saudinos, J.
1982-04-01
Nuclear scattering of protons makes it possible to radiograph objects with specific properties: three-dimensional radiography, information different from that of the X-ray technique, and hydrogen radiography. Furthermore, nuclear scattering radiography (NSR) is well adapted to gating techniques, allowing the radiography of fast periodically moving objects. Results obtained on phantoms, a formalin-fixed head and a moving object are shown and discussed. The dose delivered is compatible with clinical use but, at the moment, the irradiation time, between 1 and 4 hours, is too long. Perspectives for making the radiography faster and obtaining a practical method are discussed
Slow neutron scattering experiments
International Nuclear Information System (INIS)
Moon, R.M.
1985-01-01
Neutron scattering is a versatile technique that has been successfully applied to condensed-matter physics, biology, polymer science, chemistry, and materials science. The United States lost its leadership role in this field to Western Europe about 10 years ago. Recently, a modest investment in the United States in new facilities and a positive attitude on the part of the national laboratories toward outside users have resulted in a dramatic increase in the number of US scientists involved in neutron scattering research. Plans are being made for investments in new and improved facilities that could return the leadership role to the United States. 23 references, 4 figures, 3 tables
Neutron scattering. Experiment manuals
International Nuclear Information System (INIS)
Brueckel, Thomas; Heger, Gernot; Richter, Dieter; Roth, Georg; Zorn, Reiner
2014-01-01
The following topics are dealt with: The thermal triple-axis spectrometer PUMA, the high-resolution powder diffractometer SPODI, the hot-single-crystal diffractometer HEiDi, the three-axis spectrometer PANDA, the backscattering spectrometer SPHERES, the DNS neutron-polarization analysis, the neutron spin-echo spectrometer J-NSE, small-angle neutron scattering at KWS-1 and KWS-2, a very-small-angle neutron scattering diffractometer with focusing mirror, the reflectometer TREFF, the time-of-flight spectrometer TOFTOF. (HSI)
International Nuclear Information System (INIS)
McCarthy, I.E.
1991-07-01
The coupled-channels-optical method has been implemented using two different approximations to the optical potential. The half-on-shell optical potential involves drastic approximations for numerical feasibility but still gives a good semiquantitative description of the effect of uncoupled channels on electron scattering from hydrogen, helium and sodium. The distorted-wave optical potential makes no approximations other than the weak coupling approximation for uncoupled channels. In applications to hydrogen and sodium it shows promise of describing scattering phenomena excellently at all energies. 27 refs., 5 figs
Environmental vibration reduction utilizing an array of mass scatterers
DEFF Research Database (Denmark)
Peplow, Andrew; Andersen, Lars Vabbersgaard; Bucinskas, Paulius
2017-01-01
Ground vibration generated by rail and road traffic is a major source of environmental noise and vibration pollution in the low-frequency range. A promising and cost-effective mitigation method can be the use of heavy masses placed as a periodic array on the ground surface near the road or track (e.g. concrete or stone blocks, specially designed brick walls, etc.). The natural frequencies of vibration for such blocks depend on the local ground stiffness and on the mass of the blocks, which can be chosen to provide resonance at specified frequencies. This work concerns the effectiveness of such “blocking...
Nagayama, Y; Nakaura, T; Oda, S; Tsuji, A; Urata, J; Furusawa, M; Tanoue, S; Utsunomiya, D; Yamashita, Y
2018-02-01
To perform an intra-individual investigation of the usefulness of a contrast medium (CM) and radiation dose-reduction protocol using single-source computed tomography (CT) combined with 100 kVp and sinogram-affirmed iterative reconstruction (SAFIRE) for whole-body CT (WBCT; chest-abdomen-pelvis CT) in oncology patients. Forty-three oncology patients who had undergone WBCT under both 120 and 100 kVp protocols at different time points (mean interscan interval: 98 days) were included retrospectively. The CM doses for the 120 and 100 kVp protocols were 600 and 480 mg iodine/kg, respectively; 120 kVp images were reconstructed with filtered back-projection (FBP), whereas 100 kVp images were reconstructed with both FBP (100 kVp-F) and SAFIRE (100 kVp-S). The size-specific dose estimate (SSDE), iodine load and image quality of each protocol were compared. The SSDE and iodine load of the 100 kVp protocol were 34% and 21% lower, respectively, than those of the 120 kVp protocol (SSDE: 10.6±1.1 versus 16.1±1.8 mGy; iodine load: 24.8±4 versus 31.5±5.5 g iodine, p<0.01). Contrast enhancement, objective image noise, contrast-to-noise ratio, and visual score of 100 kVp-S were similar to or better than those of the 120 kVp protocol. Compared with the 120 kVp protocol, the combined use of 100 kVp and SAFIRE in WBCT for oncology assessment with a single-source CT facilitated a substantial reduction in the CM and radiation dose while maintaining image quality. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Diffuse Scattering Model of Indoor Wideband Propagation
DEFF Research Database (Denmark)
Franek, Ondrej; Andersen, Jørgen Bach; Pedersen, Gert Frølund
2011-01-01
This paper presents a discrete-time numerical algorithm for computing field distribution in an indoor environment by diffuse scattering from walls. Calculations are performed for a rectangular room with semi-reflective walls. The walls are divided into 0.5 x 0.5 m segments, resulting in 2272 wall segments in total and approximately 2 min running time on an average computer. Frequency-independent power levels at the walls around the circumference of the room and at four receiver locations in the middle of the room are observed. It is demonstrated that after a finite period of initial excitation the field ... radio coverage predictions.
Born amplitudes and seagull term in meson-soliton scattering
International Nuclear Information System (INIS)
Liang, Y.G.; Li, B.A.; Liu, K.F.; Su, R.K.
1990-01-01
The meson-soliton scattering for the φ^4 theory in 1+1 dimensions is calculated. We show that when the seagull term from the equal time commutator is included in addition to the Born amplitudes, the t-matrix from the reduction formula approach is identical to that of the potential scattering with small quantum fluctuations to leading order in weak coupling. The seagull term is equal to the Born term in the potential scattering. This confirms the speculation that the leading order Yukawa coupling is derivable from the classical soliton. (orig.)
Generalized phase retrieval algorithm based on information measures
Shioya, Hiroyuki; Gohara, Kazutoshi
2006-01-01
An iterative phase retrieval algorithm based on the maximum entropy method (MEM) is presented. Introducing a new generalized information measure, we derive a novel class of algorithms which includes the conventionally used error reduction algorithm and a MEM-type iterative algorithm which is presented for the first time. These different phase retrieval methods are unified on the basis of the framework of information measures used in information theory.
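The conventionally used error-reduction algorithm mentioned in this abstract alternates between enforcing the measured Fourier magnitude and an object-domain constraint. Below is a minimal NumPy sketch of that classic scheme (our illustration under an assumed support-plus-positivity constraint; variable names are ours, and this is not the paper's MEM-type generalization).

```python
import numpy as np

def error_reduction(mag, support, n_iter=300, seed=0):
    """Classic error-reduction phase retrieval: alternate between imposing
    the measured Fourier magnitude `mag` and the object-domain constraint
    (zero outside `support`, non-negative inside)."""
    rng = np.random.default_rng(seed)
    g = rng.random(mag.shape) * support          # random start on the support
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = mag * np.exp(1j * np.angle(G))       # keep phase, impose magnitude
        g = np.fft.ifft2(G).real
        g = np.where((g > 0) & support, g, 0.0)  # support + positivity
    return g
```

A well-known property of error reduction is that the Fourier-magnitude error is non-increasing from iteration to iteration, which the test below exploits.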
Migration of scattered teleseismic body waves
Bostock, M. G.; Rondenay, S.
1999-06-01
The retrieval of near-receiver mantle structure from scattered waves associated with teleseismic P and S and recorded on three-component, linear seismic arrays is considered in the context of inverse scattering theory. A Ray + Born formulation is proposed which admits linearization of the forward problem and economy in the computation of the elastic wave Green's function. The high-frequency approximation further simplifies the problem by enabling (1) the use of an earth-flattened, 1-D reference model, (2) a reduction in computations to 2-D through the assumption of 2.5-D experimental geometry, and (3) band-diagonalization of the Hessian matrix in the inverse formulation. The final expressions are in a form reminiscent of the classical diffraction stack of seismic migration. Implementation of this procedure demands an accurate estimate of the scattered wave contribution to the impulse response, and thus requires the removal of both the reference wavefield and the source time signature from the raw record sections. An approximate separation of direct and scattered waves is achieved through application of the inverse free-surface transfer operator to individual station records and a Karhunen-Loeve transform to the resulting record sections. This procedure takes the full displacement field to a wave vector space wherein the first principal component of the incident wave-type section is identified with the direct wave and is used as an estimate of the source time function. The scattered displacement field is reconstituted from the remaining principal components using the forward free-surface transfer operator, and may be reduced to a scattering impulse response upon deconvolution of the source estimate. An example employing pseudo-spectral synthetic seismograms demonstrates an application of the methodology.
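The Karhunen-Loeve step above, identifying the direct wave with the first principal component of the record section, can be sketched with an SVD. This is our simplified illustration only: the real procedure first applies the inverse free-surface transfer operator, and the names here are invented.

```python
import numpy as np

def separate_direct_wave(records):
    """Estimate the direct-wave/source signature as the first principal
    component of a (stations x samples) record section and return
    (source_estimate, scattered_residual)."""
    U, s, Vt = np.linalg.svd(records, full_matrices=False)
    direct = s[0] * np.outer(U[:, 0], Vt[0])   # rank-1 direct-wave part
    return Vt[0], records - direct
```

For a purely coherent direct arrival (a rank-1 section), the residual vanishes and the first right singular vector recovers the source wavelet up to scale and sign.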
Relativistic effects in elastic scattering of electrons in TEM
International Nuclear Information System (INIS)
Rother, Axel; Scheerschmidt, Kurt
2009-01-01
Transmission electron microscopy typically works with highly accelerated, and thus relativistic, electrons. Consequently, the scattering process is described within a relativistic formalism. In the following, we examine three different relativistic formalisms for elastic electron scattering: Dirac, Klein-Gordon and approximated Klein-Gordon (the standard approach). These correspond to different considerations of spin effects and different couplings to the electromagnetic potentials. A detailed comparison is conducted by means of explicit numerical calculations. For this purpose, two different formalisms have been applied to the approaches above: a numerical integration with predefined boundary conditions, and the multislice algorithm, a standard procedure for such simulations. The results show a negligibly small difference between the different relativistic equations in the vicinity of the electromagnetic potentials prevailing in the electron microscope. The differences between the two numerical approaches are found to be small for small-angle scattering but eventually grow large for large-angle scattering, recorded for instance in high-angle annular dark field.
Estimation of scattered photons using a neural network in SPECT
International Nuclear Information System (INIS)
Hasegawa, Wataru; Ogawa, Koichi
1994-01-01
In single photon emission CT (SPECT), measured projection data involve scattered photons. This causes degradation of spatial resolution and contrast in reconstructed images. The purpose of this study is to estimate the scattered photons, and eliminate them from measured data. To estimate the scattered photons, we used an artificial neural network which consists of five input units, five hidden units, and two output units. The inputs of the network are the ratios of the counts acquired by five narrow energy windows and their sum. The outputs are the ratios of the count of scattered photons and that of primary photons to the total count. The neural network was trained with a back-propagation algorithm using count data obtained by a Monte Carlo simulation. The results of simulation showed improvement of contrast and spatial resolution in reconstructed images. (author)
Markov chain solution of photon multiple scattering through turbid slabs.
Lin, Ying; Northrop, William F; Li, Xuesong
2016-11-14
This work introduces a Markov chain solution to model photon multiple scattering through turbid slabs via an anisotropic scattering process, i.e., Mie scattering. Results show that the proposed Markov chain model agrees with commonly used Monte Carlo simulations for various media, such as media with non-uniform phase functions and absorbing media. The proposed Markov chain method converts the complex multiple-scattering problem with practical phase functions into matrix form and solves for the transmitted/reflected photon angular distributions by matrix multiplications. These characteristics would potentially allow practical inversions by matrix manipulation or stochastic algorithms where widely applied stochastic methods such as Monte Carlo simulations usually fail, and thus enable practical diagnostic reconstructions in fields such as medical diagnosis, spray analysis, and atmospheric science.
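The matrix formulation lends itself to a compact sketch. The toy below is an assumption-laden illustration, not the paper's model: a single angular coordinate, a Henyey-Greenstein phase function standing in for a Mie phase function, and a fixed number of scattering events applied as matrix powers:

```python
import numpy as np

n = 72
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)   # angular bins

def hg_phase(cos_t, g=0.8):
    # Henyey-Greenstein phase function: a standard anisotropic (Mie-like) model
    return (1.0 - g * g) / (1.0 + g * g - 2.0 * g * cos_t) ** 1.5

# Transition matrix: the probability of scattering from bin i to bin j depends
# only on the angle between the two directions; rows are normalized so that T
# is stochastic and each right-multiplication is one scattering event.
diff = theta[None, :] - theta[:, None]
T = hg_phase(np.cos(diff))
T /= T.sum(axis=1, keepdims=True)

p0 = np.zeros(n); p0[0] = 1.0                  # collimated incident beam
p1 = p0 @ T                                    # angular distribution, 1 scatter
p3 = p0 @ np.linalg.matrix_power(T, 3)         # after 3 scatters (broader peak)
```

Absorption could be added by scaling each row of `T` by a single-scattering albedo below one, which is where the matrix form stays convenient while Monte Carlo sampling becomes expensive.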
International Nuclear Information System (INIS)
Wagner, P.
1976-04-01
Effects on graphite thermal conductivities due to controlled alterations of the graphite structure by impurity addition, porosity, and neutron irradiation are shown to be consistent with the phonon-scattering formulation 1/l = Σ_{i=1}^{n} 1/l_i. Observed temperature effects on these doped and irradiated graphites are also explained by this mechanism.
International Nuclear Information System (INIS)
Johnson, R.C.
1980-01-01
High-energy, small-momentum-transfer 2 → 2 hadronic scattering processes are described in the physical framework of particle exchange. Particle production in high-energy collisions is considered with emphasis on the features of inclusive reactions, though with some remarks on exclusive processes. (U.K.)
Critical scattering by bubbles
International Nuclear Information System (INIS)
Fiedler-Ferrari, N.; Nussenzveig, H.M.
1986-11-01
We apply the complex angular momentum theory to the problem of the critical scattering of light by spherical cavities in the high-frequency limit (permittivity of the external medium greater than that of the cavity, e.g., an air bubble in water). (M.W.O.) [pt]
Radiation scattering techniques
International Nuclear Information System (INIS)
Edmonds, E.A.
1986-01-01
Radiation backscattering techniques are useful when access to an item to be inspected is restricted to one side. These techniques are very sensitive to geometrical effects. Scattering processes and their application to the determination of voids, thickness measurement, well-logging, and the use of X-ray fluorescence techniques are discussed. (U.K.)
Energy Technology Data Exchange (ETDEWEB)
Friedrich, Harald [Technische Univ. Muenchen, Garching (Germany). Physik-Department
2016-07-01
This corrected and updated second edition of "Scattering Theory" presents a concise and modern coverage of the subject. In the present treatment, special attention is given to the role played by the long-range behaviour of the projectile-target interaction, and a theory is developed which is well suited to describe near-threshold bound and continuum states in realistic binary systems such as diatomic molecules or molecular ions. It is motivated by the fact that experimental advances have shifted and broadened the scope of applications where concepts from scattering theory are used, e.g. to the field of ultracold atoms and molecules, which has been experiencing enormous growth in recent years, largely triggered by the successful realization of Bose-Einstein condensates of dilute atomic gases in 1995. The book contains sections on special topics such as near-threshold quantization, quantum reflection, Feshbach resonances and the quantum description of scattering in two dimensions. The level of abstraction is kept as low as possible, and deeper questions related to the mathematical foundations of scattering theory are passed over. It should be understandable to anyone with a basic knowledge of nonrelativistic quantum mechanics. The book is intended for advanced students and researchers, and it is hoped that it will be useful for theorists and experimentalists alike.
International Nuclear Information System (INIS)
Windmolders, R.
1989-01-01
In this paper the following topics are reviewed: 1. the structure functions measured in deep inelastic e-N, μ-N and ν-N scattering; 2. nuclear effects on the structure functions; 3. nuclear effects on the fragmentation functions; 4. the spin dependent structure functions and their interpretation in terms of nucleon constituents. (orig./HSI)
Deeply Virtual Neutrino Scattering
International Nuclear Information System (INIS)
Ales Psaker
2007-01-01
We investigate the extension of the deeply virtual Compton scattering process into the weak interaction sector. Standard electromagnetic Compton scattering provides a unique tool for studying hadrons, which is one of the most fascinating frontiers of modern science. In this process the relevant Compton scattering amplitude probes the hadron structure by means of two quark electromagnetic currents. We argue that replacing one of the currents with the weak interaction current can promise a new insight. The paper is organized as follows. In Sec. II we briefly discuss the features of the handbag factorization scheme. We introduce a new set of phenomenological functions, known as generalized parton distributions (GPDs) [1-6], and discuss some of their basic properties in Sec. III. An application of the GPD formalism to the neutrino-induced deeply virtual Compton scattering in the kinematics relevant to future high-intensity neutrino experiments is given in Sec. IV. The cross section results are presented in Sec. V. Finally, in Sec. VI we draw some conclusions and discuss future prospects. Some of the formal results in this paper have appeared in preliminary reports in Refs. [7] and [8], whereas a comprehensive analysis of the weak neutral and weak charged current DVCS reactions in collaboration with W. Melnitchouk and A. Radyushkin has been presented in Ref. [9].
Symposium on neutron scattering
International Nuclear Information System (INIS)
Lehmann, M.S.; Saenger, W.; Hildebrandt, G.; Dachs, H.
1984-01-01
Extended abstracts of the named symposium are presented. The first part of this report contains the abstracts of the lectures, the second those of the posters. Topics discussed at the symposium include neutron diffraction and neutron scattering studies in magnetism, solid-state chemistry and physics, and materials research. Some papers discussing instruments and methods are also included. (GSCH)
Inversion assuming weak scattering
DEFF Research Database (Denmark)
Xenaki, Angeliki; Gerstoft, Peter; Mosegaard, Klaus
2013-01-01
due to the complex nature of the field. A method based on linear inversion is employed to infer information about the statistical properties of the scattering field from the obtained cross-spectral matrix. A synthetic example based on an active high-frequency sonar demonstrates that the proposed...
International Nuclear Information System (INIS)
Santoso, B.
1976-01-01
Operator representations of the Green and Lippmann-Schwinger functions, the derivation of the perturbation method using Green functions, and electron-atom scattering are discussed. It is concluded that, by using complex coordinates, the places where resonances occur can be accurately identified. The resonances can be processed further for practical purposes, for example for the separation of atoms. (RUW)
Electron Scattering on deuterium
International Nuclear Information System (INIS)
Platchkov, S.
1987-01-01
Selected electron scattering experiments on the deuteron system are discussed. The main advantages of the electromagnetic probe are recalled. The deuteron A(q²) structure function is analyzed and found to be very sensitive to the neutron electric form factor. Electrodisintegration of the deuteron near threshold is presented as evidence for the importance of meson exchange currents in nuclei. [fr]
Parity violating electron scattering
International Nuclear Information System (INIS)
McKeown, R.D.
1990-01-01
Previous measurements of parity violation in electron scattering are reviewed with particular emphasis on experimental techniques. Significant progress in the attainment of higher precision is evident in these efforts. These pioneering experiments provide a basis for consideration of a future program of such measurements. In this paper some future plans and possibilities in this field are discussed
International Nuclear Information System (INIS)
Mermaz, M.C.
1984-01-01
Diffraction and refraction play an important role in particle elastic scattering. The optical model treats both phenomena correctly and simultaneously, but without disentangling them. Semi-classical discussions in terms of trajectories emphasize the refractive aspect due to the real part of the optical potential. The separation, due to R.C. Fuller, of the quantal cross section into two components coming from opposite sides of the target nucleus allows a better understanding of the refractive phenomenon and of the origin of the observed oscillations in the elastic scattering angular distributions. We shall see that the real part of the potential is responsible for a Coulomb and a nuclear rainbow, which allows the nuclear potential to be determined better in the interior region near the nuclear surface, since the volume absorption eliminates any effect of the real part of the potential for the internal partial scattering waves. Resonance phenomena seen in heavy-ion scattering are discussed in terms of optical model potentials and Regge pole analysis. Compound nucleus resonances or quasi-molecular states may indeed be the more correct and fundamental alternative.
Multienergy anomalous diffuse scattering
Czech Academy of Sciences Publication Activity Database
Kopecký, Miloš; Fábry, Jan; Kub, Jiří; Lausi, A.; Busetto, E.
2008-01-01
Roč. 100, č. 19 (2008), 195504/1-195504/4 ISSN 0031-9007 R&D Projects: GA AV ČR IAA100100529 Institutional research plan: CEZ:AV0Z10100523 Keywords : diffuse scattering * x-rays * structure determination Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 7.180, year: 2008
Correlation in atomic scattering
International Nuclear Information System (INIS)
McGuire, J.H.
1987-01-01
Correlation due to the Coulomb interactions between electrons in many-electron targets colliding with charged particles is formulated, and various approximate probability amplitudes are evaluated. In the limit that the electron-electron correlation interactions, 1/r_ij, are ignored or approximated by central potentials, the independent-electron approximation is obtained. Two types of correlations, or corrections to the independent-electron approximation due to the 1/r_ij terms, are identified: namely, static and scattering correlation. Static correlation is that contained in the asymptotic, e.g., bound-state, wave functions. Scattering correlation, arising from correlation in the scattering operator, is new and is considered in some detail. Expressions for a scattering correlation amplitude, a static correlation or rearrangement amplitude, and an independent-electron or direct amplitude are derived at high collision velocity and compared. At high velocities the direct and rearrangement amplitudes dominate. At very high velocities v, the rearrangement amplitude falls off less rapidly with v than the direct amplitude, which, however, is dominant as electron-electron correlation tends to zero. Comparisons with experimental observations are discussed.
Superradiative scattering magnons
International Nuclear Information System (INIS)
Shrivastava, K.N.
1980-01-01
A magnon-photon interaction for the magnetic vector of the electromagnetic wave perpendicular to the direction of magnetization in a ferromagnet is constructed. The magnon part of the interaction is reduced with the use of a Bogoliubov transformation. The resulting magnon-photon interaction is found to contain several interesting new radiation effects. The self-energy of the magnon is calculated and the lifetimes arising from the radiation scattering are predicted. The magnon frequency shift due to the radiation field is found. One of the terms arising from the one-magnon one-photon scattering gives a line width in reasonable agreement with the experimentally measured value of the ferromagnetic resonance line width in yttrium iron garnet. Surface magnon scattering is indicated and the contribution of this type of scattering to the radiative line width is discussed. The problem of magnetic superradiance is indicated and it is shown that in anisotropic ferromagnets the emission is proportional to the square of the number of magnons and the divergence is considerably minimized. Accordingly, the magnetic superradiance emerges as a hyperradiance with much more radiation intensity than in the case of disordered atomic superradiance. (author)
Directory of Open Access Journals (Sweden)
Robert de Mello Koch
2017-05-01
Full Text Available We study the worldsheet S-matrix of a string attached to a D-brane in AdS5×S5. The D-brane is either a giant graviton or a dual giant graviton. In the gauge theory, the operators we consider belong to the su(2|3) sector of the theory. Magnon excitations of open strings can exhibit both elastic (when magnons in the bulk of the string scatter) and inelastic (when magnons at the endpoint of an open string participate) scattering. Both of these S-matrices are determined (up to an overall phase) by the su(2|2)² global symmetry of the theory. In this note we study the S-matrix for inelastic scattering. We show that it exhibits poles corresponding to bound states of bulk and boundary magnons. A crossing equation is derived for the overall phase. It reproduces the crossing equation for maximal giant gravitons in the appropriate limit. Finally, scattering in the su(2) sector is computed to two loops. This two-loop result, which determines the overall phase to two loops, will be useful when a unique solution to the crossing equation is to be selected.
Hamiltonian Algorithm Sound Synthesis
大矢, 健一
2013-01-01
The Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.
Progressive geometric algorithms
Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.
2015-01-01
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms
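As a concrete instance of the idea (invented for illustration, not taken from the paper): a progressive farthest-pair computation that emits a nondecreasing sequence of lower bounds on the diameter of a planar point set, each batch refining the previous intermediate solution:

```python
import numpy as np

def progressive_diameter(points, batch=50):
    """Yield improving lower bounds on the diameter of a 2-D point set.

    Points are consumed in batches; after each batch, the farthest pair seen
    so far is reported, so every intermediate value approximates the final
    answer and the sequence never decreases. (Brute force per step for
    clarity; a real progressive algorithm would maintain the bound more
    efficiently.)
    """
    best = 0.0
    seen = np.empty((0, 2))
    for i in range(0, len(points), batch):
        seen = np.vstack([seen, points[i:i + batch]])
        dists = np.linalg.norm(seen[:, None, :] - seen[None, :, :], axis=-1)
        best = max(best, float(dists.max()))
        yield best                             # intermediate solution

rng = np.random.default_rng(2)
pts = rng.normal(size=(200, 2))
bounds = list(progressive_diameter(pts))       # four improving estimates
```

A caller can stop consuming the generator as soon as the current approximation is good enough, which is the point of the progressive model.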
Progressive geometric algorithms
Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.
2014-01-01
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms
Nagayama, Yasunori; Nakaura, Takeshi; Tsuji, Akinori; Urata, Joji; Furusawa, Mitsuhiro; Yuki, Hideaki; Hirarta, Kenichiro; Kidoh, Masafumi; Oda, Seitaro; Utsunomiya, Daisuke; Yamashita, Yasuyuki
2017-07-01
To retrospectively evaluate the image quality and radiation dose of 100-kVp scans with sinogram-affirmed iterative reconstruction (IR) for unenhanced head CT in adolescents. Sixty-nine patients aged 12-17 years underwent head CT under 120-kVp (n = 34) or 100-kVp (n = 35) protocols. The 120-kVp images were reconstructed with filtered back-projection (FBP), and the 100-kVp images with FBP (100-kVp-F) and sinogram-affirmed IR (100-kVp-S). We compared the effective dose (ED), grey-white matter (GM-WM) contrast, image noise, and contrast-to-noise ratio (CNR) between protocols in the supratentorial (ST) region and posterior fossa (PS). We also assessed GM-WM contrast, image noise, sharpness, artifacts, and overall image quality on a four-point scale. ED was 46% lower with 100-kVp than with 120-kVp (p < 0.001). GM-WM contrast was higher, and image noise lower, on 100-kVp-S than on 120-kVp at ST (p < 0.001). The CNR of 100-kVp-S was higher than that of 120-kVp (p < 0.001). GM-WM contrast of 100-kVp-S was subjectively rated as better than that of 120-kVp (p < 0.001). There were no significant differences in the other criteria between 100-kVp-S and 120-kVp (p = 0.072-0.966). The 100-kVp protocol with sinogram-affirmed IR facilitated a dramatic radiation dose reduction and better GM-WM contrast without increased image noise in adolescent head CT. • 100-kVp head CT provides a 46% radiation dose reduction compared with 120-kVp. • 100-kVp scanning improves subjective and objective GM-WM contrast. • Sinogram-affirmed IR decreases head CT image noise, especially in the supratentorial region. • The 100-kVp protocol with sinogram-affirmed IR is suited for adolescent head CT.
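The contrast and CNR figures compared above reduce to simple ROI arithmetic. A sketch with invented HU values (not the study's measurements); ROI placement and the exact noise definition vary between studies:

```python
def contrast(roi_a_mean, roi_b_mean):
    """Grey-white matter contrast as the absolute mean ROI difference (HU)."""
    return abs(roi_a_mean - roi_b_mean)

def cnr(roi_a_mean, roi_b_mean, noise_sd):
    """Contrast-to-noise ratio: ROI contrast divided by image noise (SD)."""
    return contrast(roi_a_mean, roi_b_mean) / noise_sd

# Illustrative values only: lowering the tube voltage raises GM-WM contrast,
# and iterative reconstruction lowers the noise, so CNR improves twice over.
cnr_120_fbp = cnr(38.0, 30.0, 4.0)   # 120 kVp, FBP
cnr_100_ir = cnr(40.0, 30.0, 3.5)    # 100 kVp, sinogram-affirmed IR
```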
DEFF Research Database (Denmark)
Bucher, Taina
2017-01-01
the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself...... of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops...
Energy Technology Data Exchange (ETDEWEB)
Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
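The BR algorithm itself is not available in common numerical libraries, but the problem it addresses is easy to set up. A minimal sketch: build an upper Hessenberg matrix and obtain its spectrum with NumPy's QR-based LAPACK solver, the reference method the abstract compares against; the trace and determinant identities are the usual sanity checks when comparing eigensolvers:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# Upper Hessenberg: zero everything below the first subdiagonal.
H = np.triu(rng.normal(size=(n, n)), k=-1)

# QR-based reference eigenvalues (LAPACK via NumPy). A BR implementation
# would be validated against this and then timed on the large, narrowband,
# nearly tridiagonal matrices produced by the look-ahead Lanczos process.
eigs = np.linalg.eigvals(H)
```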
Directory of Open Access Journals (Sweden)
Tariq S. Hakky
2015-04-01
Full Text Available Objective: Here we present the first video demonstration of reduction corporoplasty in the management of phallic disfigurement in a 17-year-old man with a history of sickle cell disease and priapism. Introduction: Surgical management of aneurysmal dilation of the corpora has yet to be defined in the literature. Materials and Methods: We performed bilateral elliptical incisions over the lateral corpora as management of aneurysmal dilation of the corpora to correct phallic disfigurement. Results: The patient tolerated the procedure well and had resolution of his corporal disfigurement. Conclusions: Reduction corporoplasty using bilateral lateral elliptical incisions in the management of aneurysmal dilation of the corpora is a safe and feasible operation in the management of phallic disfigurement.
Raylman, R. R.; Majewski, S.; Wojcik, R.; Weisenberger, A. G.; Kross, B.; Popov, V.
2001-06-01
Positron emission mammography (PEM) has begun to show promise as an effective method for the detection of breast lesions. Due to its utilization of tumor-avid radiopharmaceuticals labeled with positron-emitting radionuclides, this technique may be especially useful in imaging of women with radiodense or fibrocystic breasts. While the use of these radiotracers affords PEM unique capabilities, it also introduces some limitations. Specifically, acceptance of accidental and Compton-scattered coincidence events can decrease lesion detectability. The authors studied the effect of accidental coincidence events on PEM images produced by the presence of ¹⁸F-fluorodeoxyglucose in the organs of a subject using an anthropomorphic phantom. A delayed-coincidence technique was tested as a method for correcting PEM images for the occurrence of accidental events. Also, a Compton scatter correction algorithm designed specifically for PEM was developed and tested using a compressed breast phantom. Finally, the effect of object size on image counts and a correction for this effect were explored. The imager used in this study consisted of two PEM detector heads mounted 20 cm apart on a Lorad biopsy apparatus. The results demonstrated that a majority of the accidental coincidence events (~80%) detected by this system were produced by radiotracer uptake in the adipose and muscle tissue of the torso. The presence of accidental coincidence events was shown to reduce lesion detectability. Much of this effect was eliminated by correction of the images utilizing estimates of accidental-coincidence contamination acquired with delayed-coincidence circuitry built into the PEM system. The Compton scatter fraction for this system was ~14%. Utilization of a new scatter correction algorithm reduced the scatter fraction to ~1.5%. Finally, reduction of count recovery due to object size was measured and a correction to the data applied. Application of correction techniques
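The two corrections described (randoms removed via the delayed-coincidence estimate, then scatter removed as a fraction of the remaining counts) are, at their core, per-bin arithmetic. A toy sketch with invented counts; the real PEM algorithm estimates the scatter distribution from the data rather than assuming a uniform fraction:

```python
import numpy as np

prompts = np.array([120.0, 150.0, 180.0, 140.0])  # prompt coincidences per bin
delayed = np.array([25.0, 30.0, 33.0, 28.0])      # delayed-window randoms estimate

# Accidental-coincidence correction: the delayed window contains no true
# coincidences, so subtracting it removes the randoms on average.
trues_plus_scatter = prompts - delayed

# Crude uniform-scatter removal using the ~14% fraction quoted above.
scatter_fraction = 0.14
corrected = trues_plus_scatter * (1.0 - scatter_fraction)
```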
Light scattering reviews 8 radiative transfer and light scattering
Kokhanovsky, Alexander A
2013-01-01
Light Scattering Reviews (vol. 8) is aimed at the presentation of recent advances in radiative transfer and light scattering optics. The topics covered include: scattering of light by irregularly shaped particles suspended in the atmosphere (dust, ice crystals), light scattering by particles much larger than the wavelength of the incident radiation, atmospheric radiative forcing, astrophysical radiative transfer, radiative transfer and optical imaging in biological media, radiative transfer of polarized light, and numerical aspects of radiative transfer.