Inverse Compton Light Source: A Compact Design Proposal
Energy Technology Data Exchange (ETDEWEB)
Deitrick, Kirsten Elizabeth [Old Dominion Univ., Norfolk, VA (United States); Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)
2017-05-01
In the last decade, there has been increasing demand for a compact Inverse Compton Light Source (ICLS) capable of producing high-quality X-rays by colliding an electron beam with a high-quality laser. Only in recent years have both SRF and laser technology advanced enough that compact sources can approach the quality found at large installations such as the Advanced Photon Source at Argonne National Laboratory. Previously, X-ray sources offered either high flux and brilliance at a large facility or many orders of magnitude less when produced by a bremsstrahlung source. A recent compact source was constructed by Lyncean Technologies using a storage ring to produce the electron beam that scatters the incident laser beam. By instead using a linear accelerator system for the electron beam, a significant increase in X-ray beam quality is possible, though even subsequent designs also featuring a storage ring offer improvement. Preceding the linear accelerator with an SRF reentrant gun allows for an extremely small transverse emittance, increasing the brilliance of the resulting X-ray source. To achieve sufficiently small emittances, both the geometry of the gun and the initial electron bunch distribution produced off the cathode were optimized. Building the linear accelerator from double-spoke SRF cavities allows an electron beam of reasonable size to be focused at the interaction point while preserving the low emittance generated by the gun. An aggressive final focusing section following the electron beam's exit from the accelerator produces the small spot size at the interaction point, which results in an X-ray beam of high flux and brilliance. Taking all of these advancements together, a world-class compact X-ray source has been designed. It is anticipated that this source would far outperform conventional bremsstrahlung sources and many other compact ICLSs, while coming closer to performing at the levels of the large installations.
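As a rough cross-check of the flux claims made for linac-based ICLS designs, the photon yield per second can be estimated from the single-collision luminosity of two round Gaussian beams. All numbers below (bunch charge, laser pulse energy, repetition rate, spot sizes) are illustrative assumptions, not the design values of this thesis:

```python
import math

SIGMA_T = 6.652e-25  # Thomson cross section [cm^2]

def xray_flux(n_e, n_laser, rep_rate_hz, sigma_e_um, sigma_laser_um):
    """Scattered photons/s for head-on round Gaussian beams:
    N = sigma_T * N_e * N_laser * f / (2*pi*(sigma_e^2 + sigma_laser^2))."""
    spot_cm2 = 2.0 * math.pi * ((sigma_e_um * 1e-4)**2 + (sigma_laser_um * 1e-4)**2)
    return SIGMA_T * n_e * n_laser * rep_rate_hz / spot_cm2

# Assumed: 10 pC bunches (6.2e7 electrons), 1 mJ of 1.2 eV photons (5.2e15),
# 100 MHz collision rate, 3 um rms spots at the interaction point.
print(xray_flux(6.2e7, 5.2e15, 100e6, 3.0, 3.0))  # ~2e13 photons/s
```

The strong inverse dependence on the spot area is why the abstract's "aggressive final focusing" matters: halving both spot sizes quadruples the flux.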
Inverse comptonization vs. thermal synchrotron
International Nuclear Information System (INIS)
Fenimore, E.E.; Klebesadel, R.W.; Laros, J.G.
1983-01-01
There are currently two radiation mechanisms being considered for gamma-ray bursts: thermal synchrotron and inverse comptonization. They are mutually exclusive, since thermal synchrotron requires a magnetic field of approx. 10^12 Gauss, whereas inverse comptonization cannot produce a monotonic spectrum if the field is larger than 10^11 Gauss and is too inefficient relative to thermal synchrotron unless the field is less than 10^9 Gauss. Neither mechanism can explain completely the observed characteristics of gamma-ray bursts. However, we conclude that thermal synchrotron is more consistent with the observations if the sources are approx. 40 kpc away, whereas inverse comptonization is more consistent if they are approx. 300 pc away. Unfortunately, the source distance is still not known and, thus, the radiation mechanism is still uncertain.
Analytical description of photon beam phase spaces in inverse Compton scattering sources
Directory of Open Access Journals (Sweden)
C. Curatolo
2017-08-01
We revisit the description of inverse Compton scattering sources and the photon beams generated therein, emphasizing the behavior of their phase space density distributions and how they depend upon those of the two colliding beams of electrons and photons. The main objective is to provide practical formulas for bandwidth, spectral density and brilliance which are valid in general for any value of the recoil factor, i.e. both in the Thomson regime of negligible electron recoil and in the deep recoil-dominated Compton region, which is of interest for gamma-gamma colliders and Compton sources for the production of multi-GeV photon beams. We adopt a description based on the center of mass reference system of the electron-photon collision, in order to underline the role of the electron recoil and how it controls the relativistic Doppler/boost effect in various regimes. Using the center of mass reference frame greatly simplifies the treatment, allowing us to derive simple formulas expressed in terms of the rms momenta of the two colliding beams (emittance, energy spread, etc.) and the collimation angle in the laboratory system. Comparisons with Monte Carlo simulations of inverse Compton scattering in various scenarios are presented, showing very good agreement with the analytical formulas: in particular, we find that the bandwidth dependence on the electron beam emittance, of paramount importance in the Thomson regime, where it limits the amount of focusing imparted to the electron beam, becomes much less sensitive in the deep Compton regime, allowing stronger focusing of the electron beam to enhance luminosity without loss of mono-chromaticity. A similar effect occurs concerning the bandwidth dependence on the frequency spread of the incident photons: in the deep recoil regime the bandwidth turns out to be much less dependent on the frequency spread. The set of formulas derived here is very helpful in designing inverse Compton sources in diverse regimes.
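The recoil factor that separates the Thomson and deep-Compton regimes discussed above can be estimated directly. A minimal sketch; the beam and laser parameters are assumed for illustration, not taken from the paper:

```python
MEC2_EV = 0.511e6  # electron rest energy [eV]

def recoil_X(gamma, e_laser_ev):
    """Recoil parameter X = 4*gamma*E_L/(m_e c^2): X << 1 is the Thomson
    regime, X ~ 1 the deep-recoil Compton regime."""
    return 4.0 * gamma * e_laser_ev / MEC2_EV

def e_max_ev(gamma, e_laser_ev):
    """Maximum (head-on, on-axis) scattered photon energy with recoil:
    E_max = 4*gamma^2*E_L / (1 + X)."""
    return 4.0 * gamma**2 * e_laser_ev / (1.0 + recoil_X(gamma, e_laser_ev))

# Thomson regime: a 50 MeV beam against a 1.2 eV laser photon
print(recoil_X(50e6 / MEC2_EV, 1.2))   # ~1e-3, recoil negligible
# Approaching deep recoil: a 45 GeV beam against the same laser
print(recoil_X(45e9 / MEC2_EV, 1.2))   # ~0.8, recoil matters
```

In the second case the 1/(1+X) factor visibly pulls the scattered energy below the naive 4γ²E_L value, which is the regime where the paper's center-of-mass formulas become essential.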
Intense inverse Compton γ-ray source from Duke storage ring FEL
Energy Technology Data Exchange (ETDEWEB)
Litvinenko, V.N.; Madey, J.M.J. [Duke Univ., Durham, NC (United States)
1995-12-31
We suggest using FEL intracavity power in the Duke storage ring for γ-ray production via Inverse Compton Backscattering (ICB). The OK-4 FEL driven by the Duke storage ring will provide tens of watts of average lasing power in the UV/VUV range. The average intracavity power will be in the kilowatt range and can be used to pump the ICB source. γ-rays with maximum energies from 40 MeV to 200 MeV and intensities of 0.1-5×10^10 γ per second can be generated. In this paper we present the expected parameters of the γ-ray beam, including its intensity and distribution. We discuss the influence of e-beam parameters on the collimated γ-ray spectrum and the optimization of the photon-electron interaction point.
Development and characterization of a tunable ultrafast X-ray source via inverse-Compton-scattering
International Nuclear Information System (INIS)
Jochmann, Axel
2014-01-01
Ultrashort, nearly monochromatic hard X-ray pulses enrich the understanding of the dynamics and function of matter, e.g., the motion of atomic structures associated with ultrafast phase transitions, structural dynamics and (bio)chemical reactions. Inverse Compton backscattering of intense laser pulses from relativistic electrons not only allows the generation of bright X-ray pulses for pump-probe experiments, but also the investigation of the electron beam dynamics at the interaction point. The focus of this PhD work is the detailed understanding of the kinematics of the interaction between the relativistic electron bunch and the laser pulse, in order to quantify the influence of various experimental parameters on the emitted X-ray radiation. The experiment was conducted at the ELBE center for high power radiation sources using the ELBE superconducting linear accelerator and the DRACO Ti:sapphire laser system. The combination of these two state-of-the-art apparatuses guaranteed the control and stability of the interacting beam parameters throughout the measurement. The emitted X-ray spectra were detected with a pixelated detector of 1024 by 256 elements (each 26 μm by 26 μm) to achieve unprecedented spatial and energy resolution for a full characterization of the emitted spectrum, revealing parameter influences and correlations of both interacting beams. In this work the influence of the electron beam energy, electron beam emittance, laser bandwidth and energy-angle correlation on the spectra of the backscattered X-rays is quantified. A rigorous statistical analysis comparing experimental data to ab-initio 3D simulations enabled, e.g., the extraction of the angular distribution of electrons with 1.5% accuracy and, in total, provides predictive capability for the future high-brightness hard X-ray source PHOENIX (Photon electron collider for Narrow bandwidth Intense X-rays) and potential all-optical gamma-ray sources.
Compact FEL-driven inverse Compton scattering gamma-ray source
Energy Technology Data Exchange (ETDEWEB)
Placidi, M. [Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Di Mitri, S., E-mail: simone.dimitri@elettra.eu [Elettra - Sincrotrone Trieste S.C.p.A., 34149 Basovizza, Trieste (Italy); Pellegrini, C. [SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); University of California, Los Angeles, CA 90095 (United States); Penn, G. [Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)
2017-05-21
Many research and application areas require photon sources capable of producing gamma-ray beams in the multi-MeV energy range with reasonably high fluxes and compact footprints. Besides industrial, nuclear physics and security applications, considerable interest comes from the possibility of assessing the state of conservation of cultural assets such as statues and columns via visualization and analysis techniques using high-energy photon beams. Computed Tomography scans, widely adopted in medicine at lower photon energies, presently provide high-quality three-dimensional imaging in industry and museums. We explore the feasibility of a compact source of quasi-monochromatic, multi-MeV gamma rays based on Inverse Compton Scattering (ICS) from a high-intensity ultra-violet (UV) beam generated in a free-electron laser by the electron beam itself. This scheme introduces a stronger relationship between the energy of the scattered photons and that of the electron beam, resulting in a device much more compact than a classic ICS source for a given scattered energy. The same electron beam is used to produce gamma rays in the 10-20 MeV range and UV radiation in the 10-15 eV range, in a ~4×22 m^2 footprint system.
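The compactness argument rests on a γ⁴ scaling: the FEL resonance condition makes the UV photon energy grow as γ², and Compton backscattering off the same beam multiplies by another 4γ². A numerical sketch, where the ~300 MeV beam energy, undulator period and strength parameter are illustrative assumptions (not taken from the paper):

```python
MEC2_EV = 0.511e6
HC_EV_NM = 1239.84  # photon energy-wavelength product [eV nm]

def fel_photon_ev(gamma, lambda_u_cm=5.0, K=1.0):
    """FEL resonance: lambda_r = lambda_u/(2*gamma^2) * (1 + K^2/2).
    lambda_u and K here are assumed values for illustration."""
    lam_nm = (lambda_u_cm * 1e7) / (2.0 * gamma**2) * (1.0 + K**2 / 2.0)
    return HC_EV_NM / lam_nm

def gamma_ray_ev(beam_energy_ev):
    """ICS off the FEL's own UV pulse: overall E_gamma ~ gamma^4."""
    g = beam_energy_ev / MEC2_EV
    return 4.0 * g**2 * fel_photon_ev(g)

print(fel_photon_ev(300e6 / MEC2_EV))   # eV, in the quoted 10-15 eV range
print(gamma_ray_ev(300e6) / 1e6)        # MeV, in the quoted 10-20 MeV range
```

The γ⁴ dependence is what lets a few-hundred-MeV linac reach photon energies that a classic ICS source would need a multi-GeV beam to produce.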
DESIGN OF A GAMMA-RAY SOURCE BASED ON INVERSE COMPTON SCATTERING AT THE FAST SUPERCONDUCTING LINAC
Energy Technology Data Exchange (ETDEWEB)
Mihalcea, D. [NICADD, DeKalb; Jacobson, B. [RadiaBeam Tech.; Murokh, A. [Fermilab; Piot, P. [Fermilab; Ruan, J. [Fermilab
2016-10-10
A watt-level average-power gamma-ray source is currently under development at the Fermilab Accelerator Science & Technology (FAST) facility. The source is based on the Inverse Compton Scattering of a high-brightness 300-MeV beam against a high-power laser beam circulating in an optical cavity. The back-scattered gamma rays are expected to have photon energies up to 1.5 MeV. This paper discusses the optimization of the source, its performance, and the main challenges ahead.
Energy Technology Data Exchange (ETDEWEB)
Mihalcea, D.; Murokh, A.; Piot, P.; Ruan, J.
2017-07-01
A high-brilliance (~10^{22} photons s^{-1} mm^{-2} mrad^{-2}/0.1% BW) gamma-ray source experiment is currently being planned at Fermilab (E_{γ}≃1.1 MeV). The source implements high-repetition-rate inverse Compton scattering by colliding electron bunches formed in a ~300-MeV superconducting linac with a high-intensity laser pulse. This paper describes the design rationale along with some of the technical challenges associated with producing high-repetition-rate collisions. The expected performance of the gamma-ray source is also presented.
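The quoted photon energy can be roughly reproduced from the head-on Compton upshift formula. The ~1.55 μm laser wavelength below is an assumption not stated in the abstract; with the quoted ~300 MeV beam it lands near E_γ ≈ 1.1 MeV:

```python
MEC2_EV = 0.511e6
HC_EV_NM = 1239.84

def egamma_ev(beam_energy_ev, laser_wavelength_nm):
    """Head-on inverse Compton line: E ~ 4*gamma^2*E_laser
    (electron recoil is negligible at these energies)."""
    gamma = beam_energy_ev / MEC2_EV
    return 4.0 * gamma**2 * (HC_EV_NM / laser_wavelength_nm)

print(egamma_ev(300e6, 1550.0) / 1e6)  # MeV
```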
Conceptual design of magnetic spectrometer for inverse-Compton X-ray source in MeV region
Directory of Open Access Journals (Sweden)
Xinjian Tan
2017-10-01
A novel magnetic spectrometer for an inverse Compton X-ray source is proposed. Compton recoil electrons are generated by a lithium converter, then confined by a complex collimator and spectrally resolved by a sector-shaped double-focusing magnet. A method of optimizing the converter is investigated, and the dependence of the best energy resolution on converting efficiency is quantitatively revealed. The configuration of the magnet is specially designed to cover a wide range of electron energies and to achieve a large collecting solid angle. According to Monte Carlo simulations using Geant4, the efficiency and relative energy resolution of the designed spectrometer are 10^-4 e/p and about 5%, respectively, for 3 MeV photons.
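The magnet's size is governed by the relativistic bending radius of the recoil electrons. A minimal estimate for a ~3 MeV electron; the 0.1 T field value is an assumption for illustration, not the paper's design field:

```python
import math

MEC2_MEV = 0.511

def bend_radius_m(kinetic_mev, b_tesla):
    """Bending radius of an electron in a dipole field:
    p[MeV/c] = sqrt(T^2 + 2*T*m_e c^2), rho = p / (299.79 * B[T])."""
    p = math.sqrt(kinetic_mev**2 + 2.0 * kinetic_mev * MEC2_MEV)
    return p / (299.792 * b_tesla)

print(bend_radius_m(3.0, 0.1))  # ~0.12 m: a tabletop-scale sector magnet
```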
Two-colour X-gamma ray inverse Compton back-scattering source
Drebot, I.; Petrillo, V.; Serafini, L.
2017-10-01
We present a simple new scheme for producing two-colour Thomson/Compton radiation, with the possibility of controlling separately the polarization of the two different colours, based on the interaction of a single electron beam with two light pulses that can come from the same laser setup or from two different lasers and that collide with the electrons at different angles. One of the most interesting cases for medical applications is to provide two X-ray pulses across the iodine K-edge at 33.2 keV. Iodine is used as a contrast medium in various imaging techniques, and the availability of two spectral lines across the K-edge allows one to produce subtraction images with a great increase in accuracy.
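To see how one electron beam and two collision geometries can bracket the 33.2 keV iodine K-edge, consider the on-axis line energy as a function of collision angle. The laser photon energy, the 2% offset and the 20° angle below are illustrative assumptions, not the paper's parameters:

```python
import math

MEC2_EV = 0.511e6
E_LASER_EV = 1.2     # ~1 um laser photon, assumed
K_EDGE_EV = 33.2e3   # iodine K-edge

# Choose the electron energy so the head-on line sits ~2% above the edge
gamma = math.sqrt(1.02 * K_EDGE_EV / (4.0 * E_LASER_EV))

def line_ev(collision_angle_rad):
    """On-axis scattered energy vs collision angle (0 = head-on),
    Thomson regime: E = 2*gamma^2*E_L*(1 + cos(angle))."""
    return 2.0 * gamma**2 * E_LASER_EV * (1.0 + math.cos(collision_angle_rad))

print(line_ev(0.0) / 1e3)               # keV, just above the edge
print(line_ev(math.radians(20)) / 1e3)  # keV, just below the edge
```

Since both lines come from the same electron bunch, the two pulses are intrinsically synchronized, which is the point of the scheme for subtraction imaging.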
Energy Technology Data Exchange (ETDEWEB)
Chaleil, A.; Le Flanchec, V.; Binet, A.; Nègre, J.P.; Devaux, J.F.; Jacob, V.; Millerioux, M.; Bayle, A.; Balleyguier, P. [CEA DAM DIF, F-91297 Arpajon (France); Prazeres, R. [CLIO/LCP, Bâtiment 201, Université Paris-Sud, F-91450 Orsay (France)
2016-12-21
An inverse Compton scattering source is under development at the ELSA linac of CEA, Bruyères-le-Châtel. Ultra-short X-ray pulses are produced by inverse Compton scattering of 30 ps laser pulses off relativistic electron bunches. The source will be able to operate in single-shot mode as well as in recurrent mode with 72.2 MHz pulse trains. Within this framework, an optical multipass system that multiplies the number of emitted X-ray photons in both regimes was designed in 2014, then implemented and tested at the ELSA facility in the course of 2015. The device is described from both geometrical and timing viewpoints. It is based on the idea of folding the laser optical path to pile up laser pulses at the interaction point, thus increasing the interaction probability. The X-ray output gain measurements obtained using this system are presented and compared with calculated expectations.
Inverse Compton Gamma Rays from Dark Matter Annihilation in the ...
Indian Academy of Sciences (India)
As the e⁺e⁻ energy spectra have an exponential cutoff at high energies, this may allow some dark matter scenarios to be discriminated from other astrophysical sources. Finally, some more detailed study of the effect of inverse Compton scattering may help constrain the dark matter signature in the dSph galaxies.
Compton Sources of Electromagnetic Radiation
Energy Technology Data Exchange (ETDEWEB)
Geoffrey Krafft, Gerd Priebe
2011-01-01
When a relativistic electron beam interacts with a high-field laser beam, intense and highly collimated electromagnetic radiation will be generated through Compton scattering. Through relativistic upshifting and the relativistic Doppler effect, highly energetic polarized photons are radiated along the electron beam motion when the electrons interact with the laser light. For example, X-ray radiation can be obtained when optical lasers are scattered from electrons of tens-of-MeV beam energy. Because of the desirable properties of the radiation produced, many groups around the world have been designing, building, and utilizing Compton sources for a wide variety of purposes. In this review article, we discuss the generation and properties of the scattered radiation, the types of Compton source devices that have been constructed to date, and the prospects of radiation sources of this general type. Due to the possibilities of producing hard electromagnetic radiation in a device that is small compared to the alternative storage ring sources, it is foreseen that large numbers of such sources may be constructed in the future.
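One property worth making concrete from the review's description of relativistic upshifting is the angular energy spread that forces collimation in all such sources: the scattered energy falls off axis as 1/(1 + γ²θ²). A sketch with assumed beam and laser parameters:

```python
MEC2_EV = 0.511e6

def scattered_ev(theta_rad, beam_energy_ev=50e6, e_laser_ev=1.2):
    """Scattered photon energy vs observation angle:
    E(theta) = 4*gamma^2*E_L / (1 + (gamma*theta)^2); parameters assumed."""
    g = beam_energy_ev / MEC2_EV
    return 4.0 * g**2 * e_laser_ev / (1.0 + (g * theta_rad)**2)

g = 50e6 / MEC2_EV
# At theta = 1/gamma the photon energy has already dropped to half the
# on-axis value, which is why an aperture (collimator) sets the bandwidth.
print(scattered_ev(1.0 / g) / scattered_ev(0.0))  # 0.5
```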
Time-independent inverse compton spectrum for photons from a ...
African Journals Online (AJOL)
The general theoretical aspects of inverse Compton scattering was investigated and an equation for the timeindependent inverse Compton spectrum for photons from a plasma cloud of finite extent was derived. This was done by convolving the Kompaneets equation used for describing the evolution of the photon spectrum ...
Multipass optical cavity for inverse Compton interactions
Energy Technology Data Exchange (ETDEWEB)
Rollason, A.J. E-mail: a.j.rollason@keele.ac.uk; Fang, X.; Dugdale, D.E.
2004-07-01
The recycling of laser beams in the focal region of non-resonant multipass optical cavities has been investigated as a means of providing a high intensity of photons for weak interaction experiments. Ray-tracing simulations and measurements with an Ar-ion laser have been carried out to examine the intensity profiles of the laser field in different 2-mirror geometries. In particular, the use of such cavities in the generation of X-rays by inverse Compton scattering is considered. X-ray yields are calculated for electron beams of 0.05, 0.1, 0.2 and 0.5 mm diameter, yielding enhancement factors of 10-200 compared to a free-space laser interaction.
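The reported enhancement factors can be understood as a geometric series: each pass is attenuated by the mirror reflectivity R, so N passes contribute the sum of R^n. The reflectivity and pass count below are illustrative, not the paper's values:

```python
def multipass_gain(reflectivity, n_passes):
    """Summed interaction intensity (relative to a single pass) for a
    non-resonant multipass cell with per-pass loss 1 - R."""
    return sum(reflectivity**n for n in range(n_passes))

# With R = 0.995, the infinite-pass limit is 1/(1-R) = 200;
# 200 physical passes recover roughly 63% of that limit.
print(multipass_gain(0.995, 200))
```

Unlike a resonant cavity, this gain requires no interferometric length stabilization, which is the practical appeal of the non-resonant approach studied here.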
Khizhanok, Andrei
Development of a compact source of gamma rays with high spectral brilliance and high pulse-repetition frequency has been within the scope of Fermi National Accelerator Laboratory for quite some time. The main goal of the project is to develop a setup to support gamma-ray detection tests and gamma-ray spectroscopy. Potential applications include, but are not limited to, nuclear astrophysics, nuclear medicine, and oncology ('gamma knife'). The present work covers multiple interconnected stages of development of the interaction region to ensure high levels of structural strength and vibrational resistance. Inverse Compton scattering is a complex phenomenon in which a charged particle transfers part of its energy to a photon. It requires extreme precision, as the interaction point is estimated to be 20 μm across. The slightest deflection of the mirrors will reduce the conversion effectiveness by orders of magnitude. For acceptable conversion efficiency the laser cavity must also have a finesse value >1000, which requires a trade-off between the size, mechanical stability, complexity, and price of the setup. This work focuses on the advantages and weak points of different interaction-region designs, as well as an in-depth description of the analyses performed, including laser cavity amplification and finesse estimates, natural frequency mapping, and harmonic analysis. Structural analysis is required because the interaction must occur under high vacuum conditions.
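The >1000 finesse requirement quoted above maps directly onto a mirror-reflectivity budget. A quick scan using the standard symmetric two-mirror formula (an idealized assumption: equal mirrors, no other cavity losses):

```python
import math

def finesse(R):
    """Finesse of a symmetric two-mirror cavity with intensity
    reflectivity R per mirror: F = pi*sqrt(R)/(1 - R)."""
    return math.pi * math.sqrt(R) / (1.0 - R)

# Scan upward for the reflectivity needed to reach a finesse of 1000
R = 0.99
while finesse(R) < 1000.0:
    R += 1e-5
print(R)  # ~0.997: total per-mirror losses must stay near the 3e-3 level
```

This is why the abstract frames the cavity as a trade-off: pushing losses into the 10^-3 range drives up mirror cost and tightens the mechanical stability tolerances at the same time.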
Inverse Compton X-ray signature of AGN feedback
Bourne, Martin A.; Nayakshin, Sergei
2013-12-01
Bright AGN frequently show ultrafast outflows (UFOs) with outflow velocities v_out ~ 0.1c. These outflows may be the source of AGN feedback on their host galaxies sought by galaxy formation modellers. The exact effect of the outflows on the ambient galaxy gas strongly depends on whether the shocked UFOs cool rapidly or not. This in turn depends on whether the shocked electrons share the same temperature as ions (one-temperature regime, 1T) or decouple (2T), as has been recently suggested. Here we calculate the inverse Compton spectrum emitted by such shocks, finding a broad feature potentially detectable either in mid-to-high energy X-rays (1T case) or only in the soft X-rays (2T). We argue that current observations of AGN do not seem to show evidence for the 1T component. The limits on the 2T emission are far weaker, and in fact it is possible that the observed soft X-ray excess of AGN is partially or fully due to the 2T shock emission. This suggests that UFOs are in the energy-driven regime outside the central few pc, and must pump considerable amounts of not only momentum but also energy into the ambient gas. We encourage X-ray observers to look for the inverse Compton components calculated here in order to constrain AGN feedback models further.
Constraint on Parameters of Inverse Compton Scattering Model for ...
Indian Academy of Sciences (India)
J. Astrophys. Astr. (2011) 32, 299-300. © Indian Academy of Sciences. H. G. Wang & M. Lv, Center for Astrophysics, Guangzhou University, Guangzhou, China. E-mail: cosmic008@yahoo.com.cn. Abstract: Using the multifrequency radio ...
High-Energy Compton Scattering Light Sources
Hartemann, Fred V; Barty, C; Crane, John; Gibson, David J; Hartouni, E P; Tremaine, Aaron M
2005-01-01
No monochromatic, high-brightness, tunable light sources currently exist above 100 keV. Important applications that would benefit from such new hard X-ray sources include nuclear resonance fluorescence spectroscopy, time-resolved positron annihilation spectroscopy, and MeV flash radiography. The peak brightness of Compton scattering light sources is derived for head-on collisions and found to scale with the electron beam brightness and the drive laser pulse energy.
A likely inverse-Compton emission from the Type IIb SN 2013df
Li, K. L.; Kong, A. K. H.
2016-08-01
The inverse-Compton X-ray emission model for supernovae has been well established to explain the X-ray properties of many supernovae for over 30 years. However, no observational case has yet been found to connect the X-rays with the optical light as they should be connected. Here, we report the discovery of a hard X-ray source that is associated with a Type IIb supernova. Simultaneous emission enhancements have been found in both the X-ray and optical light curves twenty days after the supernova explosion. While the enhanced X-rays are likely dominated by inverse-Compton scattering of the supernova's light from the Type IIb secondary peak, we propose a scenario of a high-speed supernova ejecta colliding with a low-density pre-supernova stellar wind that produces an optically thin, high-temperature electron gas for the Comptonization. The inferred stellar-wind mass-loss rate is consistent with that of the supernova progenitor candidate, a yellow supergiant detected by the Hubble Space Telescope, providing independent proof for the progenitor. This is also new evidence of inverse-Compton emission during the early phase of a supernova.
Resonant Inverse Compton Scattering Spectra from Highly Magnetized Neutron Stars
Wadiasingh, Zorawar; Baring, Matthew G.; Gonthier, Peter L.; Harding, Alice K.
2018-02-01
Hard, nonthermal, persistent pulsed X-ray emission extending between 10 and ∼150 keV has been observed in nearly 10 magnetars. For inner-magnetospheric models of such emission, resonant inverse Compton scattering of soft thermal photons by ultrarelativistic charges is the most efficient production mechanism. We present angle-dependent upscattering spectra and pulsed intensity maps for uncooled, relativistic electrons injected in inner regions of magnetar magnetospheres, calculated using collisional integrals over field loops. Our computations employ a new formulation of the QED Compton scattering cross section in strong magnetic fields that is physically correct for treating important spin-dependent effects in the cyclotron resonance, thereby producing correct photon spectra. The spectral cutoff energies are sensitive to the choices of observer viewing geometry, electron Lorentz factor, and scattering kinematics. We find that electrons with energies ≲15 MeV will emit most of their radiation below 250 keV, consistent with inferred turnovers for magnetar hard X-ray tails. More energetic electrons still emit mostly below 1 MeV, except for viewing perspectives sampling field-line tangents. Pulse profiles may be singly or doubly peaked dependent on viewing geometry, emission locale, and observed energy band. Magnetic pair production and photon splitting will attenuate spectra to hard X-ray energies, suppressing signals in the Fermi-LAT band. The resonant Compton spectra are strongly polarized, suggesting that hard X-ray polarimetry instruments such as X-Calibur, or a future Compton telescope, can prove central to constraining model geometry and physics.
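The resonance at the heart of this mechanism sits at the Landau-level spacing, which for magnetar fields approaches or exceeds m_ec², the reason a QED cross section is required. A short calculation with the standard relativistic spacing formula; the field values are illustrative:

```python
import math

MEC2_KEV = 511.0
B_CR = 4.414e13  # QED critical (Schwinger) field [Gauss]

def landau_gap_kev(b_gauss):
    """Ground-to-first Landau level spacing:
    E = m_e c^2 * (sqrt(1 + 2B/B_cr) - 1)."""
    return MEC2_KEV * (math.sqrt(1.0 + 2.0 * b_gauss / B_CR) - 1.0)

print(landau_gap_kev(1e12))  # ~11.5 keV: the familiar weak-field cyclotron value
print(landau_gap_kev(1e15))  # ~3 MeV at magnetar field strengths
```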
Advanced Source Deconvolution Methods for Compton Telescopes
Zoglauer, Andreas
The next generation of space telescopes utilizing Compton scattering for astrophysical observations is destined to one day unravel the mysteries behind Galactic nucleosynthesis, to determine the origin of the positron annihilation excess near the Galactic center, and to uncover the hidden emission mechanisms behind gamma-ray bursts. Besides astrophysics, Compton telescopes are establishing themselves in heliophysics, planetary sciences, medical imaging, accelerator physics, and environmental monitoring. Since the COMPTEL days, great advances in the achievable energy and position resolution have been possible, creating an extremely vast, but also extremely sparsely sampled, data space. Unfortunately, the optimum way to analyze the data from the next generation of Compton telescopes, one that retrieves all source parameters (location, spectrum, polarization, flux) and achieves the best possible resolution and sensitivity at the same time, has not yet been found. This is especially important for all science objectives looking at the inner Galaxy: the large number of expected sources, the high background (internal and Galactic diffuse emission), and the limited angular resolution make it the most taxing case for data analysis. In general, two key challenges exist: First, what are the best data space representations to answer the specific science questions? Second, what is the best way to deconvolve the data to fully retrieve the source parameters? For modern Compton telescopes, the existing data space representations can either correctly reconstruct the absolute flux (binned mode) or achieve the best possible resolution (list-mode); both together were not possible up to now. Here we propose to develop a two-stage hybrid reconstruction method which combines the best aspects of both. Using a proof-of-concept implementation we can for the first time show that it is possible to alternate during each deconvolution step between a binned-mode approach to get the flux right and a list-mode approach to achieve the best possible resolution.
BOW TIES IN THE SKY. I. THE ANGULAR STRUCTURE OF INVERSE COMPTON GAMMA-RAY HALOS IN THE FERMI SKY
Energy Technology Data Exchange (ETDEWEB)
Broderick, Avery E.; Shalaby, Mohamad [Department of Physics and Astronomy, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1 (Canada); Tiede, Paul [Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON, N2L 2Y5 (Canada); Pfrommer, Christoph [Heidelberg Institute for Theoretical Studies, Schloss-Wolfsbrunnenweg 35, D-69118 Heidelberg (Germany); Puchwein, Ewald [Institute of Astronomy and Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge, CB3 0HA (United Kingdom); Chang, Philip [Department of Physics, University of Wisconsin-Milwaukee, 1900 E. Kenwood Boulevard, Milwaukee, WI 53211 (United States); Lamberts, Astrid [Theoretical Astrophysics, California Institute of Technology, Pasadena, CA 91125 (United States)
2016-12-01
Extended inverse Compton halos are generally anticipated around extragalactic sources of gamma rays with energies above 100 GeV. These result from cosmic microwave background photons inverse Compton scattered by a population of high-energy electron/positron pairs produced by the annihilation of the high-energy gamma rays on the infrared background. Despite the observed attenuation of the high-energy gamma rays, the halo emission has yet to be directly detected. Here, we demonstrate that in most cases these halos are expected to be highly anisotropic, distributing the upscattered gamma rays along axes defined either by the radio jets of the sources or oriented perpendicular to a global magnetic field. We present a pedagogical derivation of the angular structure of the inverse Compton halo and provide an analytic formalism that facilitates the generation of mock images. We discuss exploiting this fact for the purpose of detecting gamma-ray halos in a set of companion papers.
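The characteristic halo photon energy follows from the two-step chain described above. A rough estimate, taking the mean CMB photon energy at z = 0 and ignoring redshift factors:

```python
MEC2_EV = 0.511e6
E_CMB_EV = 6.34e-4  # mean CMB photon energy at z = 0

def halo_photon_gev(primary_ev):
    """The primary gamma ray pair-produces on the infrared background
    (each lepton gets gamma ~ E/(2*m_e c^2)); the leptons then upscatter
    CMB photons to ~ (4/3)*gamma^2*eps_CMB."""
    g = primary_ev / (2.0 * MEC2_EV)
    return (4.0 / 3.0) * g**2 * E_CMB_EV / 1e9

print(halo_photon_gev(1e12))  # GeV: a 1 TeV primary reappears as ~GeV halo photons
```

This is why the halo search described here targets the Fermi-LAT band: the reprocessed emission from >100 GeV primaries lands squarely in the GeV range.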
Beam dynamics in Compton ring gamma sources
Directory of Open Access Journals (Sweden)
Eugene Bulyak
2006-09-01
Electron storage rings of GeV energy with laser pulse stacking cavities are promising intense sources of polarized hard photons which, via pair production, can be used to generate polarized positron beams. In this paper, the dynamics of electron bunches circulating in a storage ring and interacting with high-power laser pulses is studied both analytically and by simulation. Both the common features and the differences in the behavior of bunches interacting with an extremely high power laser pulse and with a moderate pulse are discussed. Considerations on particular lattice designs for Compton gamma rings are also presented.
Observation of redshifting and harmonic radiation in inverse Compton scattering
Sakai, Y.; Pogorelsky, I.; Williams, O.; O'Shea, F.; Barber, S.; Gadjev, I.; Duris, J.; Musumeci, P.; Fedurin, M.; Korostyshevsky, A.; Malone, B.; Swinson, C.; Stenby, G.; Kusche, K.; Babzien, M.; Montemagno, M.; Jacob, P.; Zhong, Z.; Polyanskiy, M.; Yakimenko, V.; Rosenzweig, J.
2015-06-01
Inverse Compton scattering of laser photons by an ultrarelativistic electron beam provides polarized x- to γ-ray pulses due to the Doppler blueshifting. Nonlinear electrodynamics in the relativistically intense, linearly polarized laser field changes the radiation kinetics established during the Compton interaction. These are due to the induced figure-8 motion, which introduces an overall redshift in the radiation spectrum, with the concomitant emission of higher-order harmonics. To experimentally analyze the strong-field physics associated with the nonlinear electron-laser interaction, clear modifications to the angular and wavelength distributions of x rays are observed. The relativistic photon wave field is provided by the ps CO2 laser with a peak normalized vector potential of 0.5 [M. Babzien et al., Phys. Rev. Lett. 96, 054802 (2006)]. The angular spectral characteristics are revealed using K-, L-edge, and high-energy attenuation filters. The observation indicates the existence of the electrons' longitudinal motion through frequency redshifting, understood as the mass-shift effect. Further, 3rd-harmonic radiation has been observed containing an on-axis x-ray component that is directly associated with the induced figure-8 motion. These results are further supported by initial evidence of off-axis 2nd-harmonic radiation produced in a circularly polarized laser wave field. The total x-ray photon number per pulse, scattered by a 65 MeV, 0.3 nC electron beam at the interaction point, is measured to be approximately 10^9.
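The "mass shift" redshift observed here is controlled entirely by the normalized vector potential. For linear polarization the on-axis fundamental shifts as:

```python
def redshift_factor(a0):
    """Fractional wavelength increase of the on-axis Compton fundamental
    due to the laser-induced figure-8 motion (linear polarization):
    lambda -> lambda * (1 + a0^2/2)."""
    return 1.0 + a0**2 / 2.0

# At the experiment's a0 = 0.5 the line redshifts by 12.5%
print(redshift_factor(0.5))  # 1.125
```

The quadratic dependence on a0 means the effect is negligible for weakly focused lasers but becomes a dominant spectral feature as the interaction approaches the relativistic intensity regime probed in this experiment.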
Nuclear photon science with inverse compton photon beam
International Nuclear Information System (INIS)
Fujiwara, Mamoru
2007-01-01
Recent developments of synchrotron radiation facilities and intense lasers are now guiding us to a new research frontier with probes of a high energy GeV photon beam and an intense, short pulse MeV γ-ray beam. New directions of science development with photo-nuclear reactions are discussed. The inverse Compton γ-ray beam has two key advantages for exploring the microscopic quantum world: 1) good emittance and 2) high linear and circular polarizations. With these advantages, photon beams in the energy range from MeV to GeV are used for studying hadron structure, nuclear structure, astrophysics, and materials science, as well as for applications in medical science. (author)
Observation of redshifting and harmonic radiation in inverse Compton scattering
Directory of Open Access Journals (Sweden)
Y. Sakai
2015-06-01
Full Text Available Inverse Compton scattering of laser photons by an ultrarelativistic electron beam provides polarized x- to γ-ray pulses due to Doppler blueshifting. Nonlinear electrodynamics in the relativistically intense, linearly polarized laser field changes the radiation kinetics established during the Compton interaction. These changes are due to the induced figure-8 motion, which introduces an overall redshift in the radiation spectrum, with the concomitant emission of higher order harmonics. To experimentally analyze the strong field physics associated with the nonlinear electron-laser interaction, clear modifications to the angular and wavelength distributions of x rays are observed. The relativistic photon wave field is provided by the ps CO_{2} laser of peak normalized vector potential of 0.5
Production of X-rays by inverse Compton effect
International Nuclear Information System (INIS)
Mainardi, R.T.
2005-01-01
X-rays and gamma rays of high energies can be produced by the scattering of low energy photons off high energy electrons, this being a process governed by Compton scattering. If a laser beam is used, the x-ray beam inherits the intensity, monochromaticity and collimation of the laser. In this work we analyze the generation of intense x-ray beams with energies between 10 and 100 keV for use in a wide range of applications where high intensity and high degrees of monochromaticity and polarization are important properties to improve images, reduce doses and improve radiation treatments. To this purpose we evaluated, using relativistic kinematics, the scattered beam properties in terms of the scattering angle. This arrangement is being considered in several laboratories worldwide as an alternative to synchrotron radiation and is referred to as 'table top synchrotron radiation', since its installation cost is orders of magnitude smaller than that of a synchrotron radiation source. The radiation beam might exhibit non-linear properties in its interaction with matter, similarly to a laser beam, and we will investigate how to calibrate and evaluate TLD dosimeter properties, in both low and high intensity fields, either mono- or polyenergetic, over wide spectral energy ranges. (Author)
High-energy scaling of Compton scattering light sources
Directory of Open Access Journals (Sweden)
F. V. Hartemann
2005-10-01
Full Text Available No monochromatic (narrow Δω_{x}/ω_{x}), high peak brightness [>10^{20} photons/(mm^{2}×mrad^{2}×s×0.1% bandwidth)], tunable light sources currently exist above 100 keV. Important applications that would benefit from such new hard x-ray and γ-ray sources include the following: nuclear resonance fluorescence spectroscopy and isotopic imaging, time-resolved positron annihilation spectroscopy, and MeV flash radiography. In this paper, the peak brightness of Compton scattering light sources is derived for head-on collisions and found to scale quadratically with the normalized energy, γ; inversely with the electron beam duration, Δτ, and the square of its normalized emittance, ε; and linearly with the bunch charge, eN_{e}, and the number of photons in the laser pulse, N_{γ}: B̂_{x}∝γ^{2}N_{e}N_{γ}/ε^{2}Δτ. This γ^{2} scaling shows that for low normalized emittance electron beams (1 nC, 1 mm·mrad, 100 MeV) and tabletop laser systems (1–10 J, 5 ps) the x-ray peak brightness can exceed 10^{23} photons/(mm^{2}×mrad^{2}×s×0.1% bandwidth) near ℏω_{x}=1 MeV; this is confirmed by three-dimensional codes that have been benchmarked against Compton scattering experiments performed at Lawrence Livermore National Laboratory. The interaction geometry under consideration is head-on collisions, where the x-ray flash duration is shown to be equal to that of the electron bunch, and which produce the highest peak brightness for compressed electron beams. Important nonlinear effects, including spectral broadening, are also taken into account in our analysis; they show that there is an optimum laser pulse duration in this geometry, of the order of a few picoseconds, in sharp contrast with the initial approach to laser-driven Compton scattering sources, where femtosecond laser systems were thought to be mandatory. The analytical expression for the peak on-axis brightness derived here is a powerful tool to
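The quoted scaling B̂_x ∝ γ²N_eN_γ/(ε²Δτ) lends itself to quick parameter trades. A minimal sketch (the absolute normalization is deliberately omitted, so only ratios between calls are meaningful; the input numbers are illustrative, not from the paper):

```python
def relative_brightness(gamma, n_e, n_gamma, emit_norm_mm_mrad, dtau_ps):
    """Relative peak brightness per the head-on Compton scaling
    B ~ gamma^2 * N_e * N_gamma / (emittance^2 * bunch_duration).
    Arbitrary units: only ratios between calls are meaningful."""
    return gamma**2 * n_e * n_gamma / (emit_norm_mm_mrad**2 * dtau_ps)

# Doubling the beam energy at fixed charge, laser pulse energy, emittance and
# bunch duration should quadruple the brightness under this gamma^2 scaling:
b_100mev = relative_brightness(100 / 0.511, 6.2e9, 5e19, 1.0, 5.0)
b_200mev = relative_brightness(200 / 0.511, 6.2e9, 5e19, 1.0, 5.0)
print(f"{b_200mev / b_100mev:.1f}")  # -> 4.0
```

The same one-liner shows why emittance is the most sensitive knob: halving ε also quadruples the brightness, without touching the beam energy.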
Optimization of Compton Source Performance through Electron Beam Shaping
Energy Technology Data Exchange (ETDEWEB)
Malyzhenkov, Alexander [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Yampolsky, Nikolai [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-09-26
We investigate a novel scheme for significantly increasing the brightness of x-ray light sources based on inverse Compton scattering (ICS) - scattering laser pulses off relativistic electron beams. The brightness of ICS sources is limited by the electron beam quality, since electrons traveling at different angles, and/or having different energies, produce photons with different energies. Therefore, the spectral brightness of the source is defined by the 6d electron phase space shape and size, as well as by the laser beam parameters. The peak brightness of the ICS source can then be maximized if the electron phase space is transformed so that all electrons scatter x-ray photons of the same frequency in the same direction, arriving at the observer at the same time. We describe the x-ray photon beam quality through the Wigner function (6d photon phase space distribution) and derive it for the ICS source when the electron and laser rms matrices are arbitrary.
Beam dynamics simulation in the X-ray Compton source
Energy Technology Data Exchange (ETDEWEB)
Gladkikh, P.; Karnaukhov, I.; Telegin, Yu.; Shcherbakov, A. E-mail: shcherbakov@kipt.kharkov.ua; Zelinsky, A
2002-05-01
At the National Science Center 'Kharkov Institute of Physics and Technology', an X-ray source based on Compton scattering has been developed. A Monte Carlo-based computer code for the simulation of electron beam dynamics, taking the Compton scattering effect into account, is described in this report. The first results of computer simulations of beam dynamics with electron-photon interaction, and the parameters of the electron and photon beams, are presented. Calculations were carried out with the lattice of the SRS-800 synchrotron light source of the Ukrainian Synchrotron Center.
Beam dynamics simulation in the X-ray Compton source
International Nuclear Information System (INIS)
Gladkikh, P.; Karnaukhov, I.; Telegin, Yu.; Shcherbakov, A.; Zelinsky, A.
2002-01-01
At the National Science Center 'Kharkov Institute of Physics and Technology', an X-ray source based on Compton scattering has been developed. A Monte Carlo-based computer code for the simulation of electron beam dynamics, taking the Compton scattering effect into account, is described in this report. The first results of computer simulations of beam dynamics with electron-photon interaction, and the parameters of the electron and photon beams, are presented. Calculations were carried out with the lattice of the SRS-800 synchrotron light source of the Ukrainian Synchrotron Center.
Beam dynamics simulation in the X-ray Compton source
Gladkikh, P; Telegin, Yu P; Shcherbakov, A; Zelinsky, A
2002-01-01
At the National Science Center 'Kharkov Institute of Physics and Technology', an X-ray source based on Compton scattering has been developed. A Monte Carlo-based computer code for the simulation of electron beam dynamics, taking the Compton scattering effect into account, is described in this report. The first results of computer simulations of beam dynamics with electron-photon interaction, and the parameters of the electron and photon beams, are presented. Calculations were carried out with the lattice of the SRS-800 synchrotron light source of the Ukrainian Synchrotron Center.
Directional Unfolded Source Term (DUST) for Compton Cameras.
Energy Technology Data Exchange (ETDEWEB)
Mitchell, Dean J.; Horne, Steven M.; O'Brien, Sean; Thoreson, Gregory G.
2018-03-01
A Directional Unfolded Source Term (DUST) algorithm was developed to enable improved spectral analysis capabilities using data collected by Compton cameras. Achieving this objective required modification of the detector response function in the Gamma Detector Response and Analysis Software (GADRAS). Experimental data that were collected in support of this work include measurements of calibration sources at a range of separation distances and cylindrical depleted uranium castings.
Directory of Open Access Journals (Sweden)
Y. Sakai
2017-06-01
Full Text Available Inverse Compton scattering (ICS) is a unique mechanism for producing fast pulses—picosecond and below—of bright photons, ranging from x to γ rays. These nominally narrow spectral bandwidth electromagnetic radiation pulses are efficiently produced in the interaction between intense, well-focused electron and laser beams. The spectral characteristics of such sources are affected by many experimental parameters, with intense laser effects often dominant. A laser field capable of inducing relativistic oscillatory motion may give rise to harmonic generation and, importantly for the present work, nonlinear redshifting, both of which dilute the spectral brightness of the radiation. As the applications enabled by this source often depend sensitively on its spectra, it is critical to resolve the details of the wavelength and angular distribution obtained from ICS collisions. With this motivation, we present an experimental study that greatly improves on previous spectral measurement methods based on x-ray K-edge filters, by implementing a multilayer bent-crystal x-ray spectrometer. In tandem with a collimating slit, this method reveals a projection of the double differential angular-wavelength spectrum of the ICS radiation in a single shot. The measurements enabled by this diagnostic illustrate the combined off-axis and nonlinear-field-induced redshifting in the ICS emission process. The spectra obtained illustrate in detail the strength of the normalized laser vector potential, and provide a nondestructive measure of the temporal and spatial electron-laser beam overlap.
Simulation of inverse Compton scattering and its implications on the scattered linewidth
Ranjan, N.; Terzić, B.; Krafft, G. A.; Petrillo, V.; Drebot, I.; Serafini, L.
2018-03-01
Rising interest in inverse Compton sources has increased the need for efficient models that properly quantify the behavior of scattered radiation given a set of interaction parameters. The current state-of-the-art simulations rely on Monte Carlo-based methods, which, while properly expressing scattering behavior in high-probability regions of the produced spectra, may not correctly simulate such behavior in low-probability regions (e.g. tails of spectra). Moreover, sampling may take an inordinate amount of time for the desired accuracy to be achieved. In this paper, we present an analytic derivation of the expression describing the scattered radiation linewidth and propose a model to describe the effects of horizontal and vertical emittance on the properties of the scattered radiation. We also present an improved version of the code initially reported in Krafft et al. [Phys. Rev. Accel. Beams 19, 121302 (2016), 10.1103/PhysRevAccelBeams.19.121302], which can perform the same simulations as those present in CAIN and give accurate results in low-probability regions by integrating over the emissions of the electrons. Finally, we use these codes to carry out simulations that closely verify the behavior predicted by the analytically derived scaling law.
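The linewidth budget such models quantify can be sketched with the usual back-of-envelope approach of adding independent broadening contributions in quadrature. The formula choices below are common textbook approximations, not the paper's exact derivation, and all input numbers are hypothetical:

```python
import math

def ics_relative_linewidth(sigma_e_over_e, gamma_theta_a, eps_n_m, sigma_x_m):
    """Rough fractional ICS linewidth from independent broadening terms,
    added in quadrature (a common back-of-envelope model, not the exact
    treatment of the paper):
      - beam energy spread:     2 * sigma_E/E
      - collection aperture:    (gamma*theta_a)^2 / (1 + (gamma*theta_a)^2)
      - normalized emittance:   (eps_n / sigma_x)^2  (angular spread at the IP)
    """
    d_energy = 2.0 * sigma_e_over_e
    d_aperture = gamma_theta_a**2 / (1.0 + gamma_theta_a**2)
    d_emittance = (eps_n_m / sigma_x_m)**2
    return math.sqrt(d_energy**2 + d_aperture**2 + d_emittance**2)

# Hypothetical compact-source numbers: 0.1% energy spread, gamma*theta_a = 0.1,
# 0.1 mm-mrad normalized emittance focused to a 3 micron rms spot:
print(f"{ics_relative_linewidth(1e-3, 0.1, 0.1e-6, 3e-6):.4f}")
```

With these numbers the aperture term dominates, which is why collimating slits (as in the spectrometer study above) are so effective at narrowing the observed bandwidth.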
Sources of the X-rays Based on Compton Scattering
International Nuclear Information System (INIS)
Androsov, V.; Bulyak, E.; Gladkikh, P.; Karnaukhov, I.; Mytsykov, A.; Telegin, Yu.; Shcherbakov, A.; Zelinsky, A.
2007-01-01
The principles of intense X-ray generation by laser beam scattering on a relativistic electron beam are described, and facilities designed to produce Compton-scattering-based X-rays are presented. The possibilities of various types of such facilities are estimated and discussed. An X-ray source based on a storage ring with low beam energy is described in detail, and the advantages of sources of this type are discussed. The results of calculations and numerical simulations carried out for the laser-electron storage ring NESTOR, under development at NSC KIPT, show the wide prospects of such an accelerator facility.
Production of X-rays by inverse Compton effect; Produccion de rayos X por efecto Compton inverso
Energy Technology Data Exchange (ETDEWEB)
Mainardi, R.T. [Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, 5000 Cordoba (Argentina)
2005-07-01
X-rays and gamma rays of high energies can be produced by the scattering of low energy photons off high energy electrons, this being a process governed by Compton scattering. If a laser beam is used, the x-ray beam inherits the intensity, monochromaticity and collimation of the laser. In this work we analyze the generation of intense x-ray beams with energies between 10 and 100 keV for use in a wide range of applications where high intensity and high degrees of monochromaticity and polarization are important properties to improve images, reduce doses and improve radiation treatments. To this purpose we evaluated, using relativistic kinematics, the scattered beam properties in terms of the scattering angle. This arrangement is being considered in several laboratories worldwide as an alternative to synchrotron radiation and is referred to as 'table top synchrotron radiation', since its installation cost is orders of magnitude smaller than that of a synchrotron radiation source. The radiation beam might exhibit non-linear properties in its interaction with matter, similarly to a laser beam, and we will investigate how to calibrate and evaluate TLD dosimeter properties, in both low and high intensity fields, either mono- or polyenergetic, over wide spectral energy ranges. (Author)
Testing earthquake source inversion methodologies
Page, Morgan T.
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays, earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, the assumed fault geometry and velocity structure, and the chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.
Inverse Compton gamma-rays from galactic dark matter annihilation. Anisotropy signatures
International Nuclear Information System (INIS)
Zhang, Le; Sigl, Guenter; Miniati, Francesco
2010-08-01
High energy electrons and positrons from annihilating dark matter can imprint unique angular anisotropies on the diffuse gamma-ray flux by inverse Compton scattering off the interstellar radiation field. We develop a numerical tool to compute gamma-ray emission from such electrons and positrons diffusing in the smooth host halo and in substructure halos with masses down to 10^{-6} M_sun. We show that, unlike the total gamma-ray angular power spectrum observed by Fermi-LAT, the angular power spectrum from inverse Compton scattering is exponentially suppressed below an angular scale determined by the diffusion length of electrons and positrons. For TeV scale dark matter with a canonical thermal freeze-out cross section of 3 × 10^{-26} cm^{3}/s, this feature may be detectable by Fermi-LAT in the energy range 100-300 GeV after more sophisticated foreground subtraction. We also find that the total flux and the shape of the angular power spectrum depend sensitively on the spatial distribution of subhalos in the Milky Way. Finally, the contribution from the smooth host halo component to the gamma-ray mean intensity is negligibly small compared to that of subhalos. (orig.)
Inverse Compton gamma-rays from galactic dark matter annihilation. Anisotropy signatures
Energy Technology Data Exchange (ETDEWEB)
Zhang, Le; Sigl, Guenter [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Miniati, Francesco [ETH Zuerich (Switzerland). Physics Dept.
2010-08-15
High energy electrons and positrons from annihilating dark matter can imprint unique angular anisotropies on the diffuse gamma-ray flux by inverse Compton scattering off the interstellar radiation field. We develop a numerical tool to compute gamma-ray emission from such electrons and positrons diffusing in the smooth host halo and in substructure halos with masses down to 10^{-6} M_sun. We show that, unlike the total gamma-ray angular power spectrum observed by Fermi-LAT, the angular power spectrum from inverse Compton scattering is exponentially suppressed below an angular scale determined by the diffusion length of electrons and positrons. For TeV scale dark matter with a canonical thermal freeze-out cross section of 3 × 10^{-26} cm^{3}/s, this feature may be detectable by Fermi-LAT in the energy range 100-300 GeV after more sophisticated foreground subtraction. We also find that the total flux and the shape of the angular power spectrum depend sensitively on the spatial distribution of subhalos in the Milky Way. Finally, the contribution from the smooth host halo component to the gamma-ray mean intensity is negligibly small compared to that of subhalos. (orig.)
International Nuclear Information System (INIS)
Masakazu Washio; Kazuyuki Sakaue; Yoshimasa Hama; Yoshio Kamiya; Tomoko Gowa; Akihiko Masuda; Aki Murata; Ryo Moriyama; Shigeru Kashiwagi; Junji Urakawa
2007-01-01
The high quality beam generation project based on the High-Tech Research Center Project, approved by the Ministry of Education, Culture, Sports, Science and Technology in 1999, has been conducted by the Advanced Research Institute for Science and Engineering, Waseda University. In the project, a laser photo-cathode RF gun was selected as the high quality electron beam source. RF cavities with low dark current, made by a diamond turning technique, have been successfully manufactured. A low emittance electron beam was realized by choosing a modified laser injection technique. The obtained normalized emittance was about 3 mm·mrad at 100 pC of electron charge. Soft x-ray beam generation at an energy of 370 eV, which is in the energy region of the so-called water window, by inverse Compton scattering has been performed via the collision between an IR laser and the low emittance electron beam. (Author)
X-band RF Photoinjector for Laser Compton X-ray and Gamma-ray Sources
Energy Technology Data Exchange (ETDEWEB)
Marsh, R. A. [Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States); Anderson, G. G. [Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States); Anderson, S. G. [Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States); Gibson, D. J. [Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States); Barty, C. J. [Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States)
2015-05-06
Extremely bright narrow bandwidth gamma-ray sources are expanding the application of accelerator technology and light sources in new directions. An X-band test station has been commissioned at LLNL to develop multi-bunch electron beams. This multi-bunch mode will have stringent requirements for the electron bunch properties, including low emittance and energy spread, maintained across multiple bunches. The test station is a unique facility featuring a 200 MV/m 5.59 cell X-band photogun powered by a SLAC XL4 klystron driven by a Scandinova solid-state modulator. This paper focuses on its current status, including the generation and initial characterization of the first electron beam. The design and installation of the inverse-Compton scattering interaction region and upgrade paths will be discussed along with future applications.
Scaling laws in high-energy inverse compton scattering. II. Effect of bulk motions
International Nuclear Information System (INIS)
Nozawa, Satoshi; Kohyama, Yasuharu; Itoh, Naoki
2010-01-01
We study the inverse Compton scattering of CMB photons off high-energy nonthermal electrons. We extend the formalism obtained in the previous paper to the case where the electrons have nonzero bulk motions with respect to the CMB frame. Assuming a power-law electron distribution, we find the same scaling law for the probability distribution function P_{1,K}(s) as for P_{1}(s), which corresponds to zero bulk motion, where the peak height and peak position depend only on the power-index parameter. We solved the rate equation analytically. It is found that the spectral intensity function also obeys the same scaling law. The effect of the bulk motions on the spectral intensity function is found to be small. The present study will be applicable to the analysis of x-ray and gamma-ray emission models of various astrophysical objects with nonzero bulk motions, such as radio galaxies and astrophysical jets.
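The power-law scaling behind such results can be made concrete with the standard delta-function approximation E_ph ∝ γ²E_seed (a textbook argument, not this paper's full formalism): an electron distribution N(γ) ∝ γ^{-p} maps onto a photon number spectrum N(E) ∝ E^{-(p+1)/2}. A minimal numeric check:

```python
import math

def ic_photon_number_index(p):
    """Photon number spectral index for inverse Compton scattering of a
    power-law electron distribution N(gamma) ~ gamma^-p, in the
    delta-function approximation E_ph ~ gamma^2 * E_seed:
        N(E) ~ E^-(p+1)/2."""
    return (p + 1) / 2

# Numerical check of the change-of-variables argument: map a gamma power law
# through E = gamma^2 and measure the log-log slope of the photon spectrum.
p = 3.0
g1, g2 = 10.0, 20.0                 # two electron Lorentz factors
n_ratio = (g2 / g1) ** (-p)         # electron number ratio per d(gamma)
e_ratio = (g2 / g1) ** 2            # corresponding photon energy ratio
jacobian = (g2 / g1) ** -1          # dgamma/dE ~ E^-1/2, i.e. ~ gamma^-1
# N(E) dE = N(gamma) dgamma  =>  slope = log(N_E2/N_E1) / log(E2/E1)
slope = math.log(n_ratio * jacobian) / math.log(e_ratio)
print(round(slope, 6))  # -> -2.0, i.e. -(p+1)/2 for p = 3
```

This is why the peak position and height in such scaling laws depend only on the power-index parameter: the mapping from electron to photon power laws is fixed by p alone.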
Brilliant GeV gamma-ray flash from inverse Compton scattering in the QED regime
Gong, Z.; Hu, R. H.; Lu, H. Y.; Yu, J. Q.; Wang, D. H.; Fu, E. G.; Chen, C. E.; He, X. T.; Yan, X. Q.
2018-04-01
An all-optical scheme is proposed for studying laser plasma based incoherent photon emission from inverse Compton scattering in the quantum electrodynamic regime. A theoretical model is presented to explain the coupling effects among radiation reaction trapping, the self-generated magnetic field and the spiral attractor in phase space, which guarantees the transfer of energy and angular momentum from electromagnetic fields to particles. Taking advantage of a prospective ~10^{23} W cm^{-2} laser facility, 3D particle-in-cell simulations show a gamma-ray flash with unprecedented multi-petawatt power and brightness of 1.7 × 10^{23} photons s^{-1} mm^{-2} mrad^{-2}/0.1% bandwidth (at 1 GeV). These results bode well for new research directions in particle physics and laboratory astrophysics exploring laser plasma interactions.
Inverse source problems in elastodynamics
Bao, Gang; Hu, Guanghui; Kian, Yavar; Yin, Tao
2018-04-01
We are concerned with time-dependent inverse source problems in elastodynamics. The source term is supposed to be the product of a spatial function and a temporal function with compact support. We present frequency-domain and time-domain approaches to show uniqueness in determining the spatial function from wave fields on a large sphere over a finite time interval. The stability estimate of the temporal function from the data of one receiver and the uniqueness result using partial boundary data are proved. Our arguments rely heavily on the use of the Fourier transform, which motivates inversion schemes that can be easily implemented. A Landweber iterative algorithm for recovering the spatial function and a non-iterative inversion scheme based on the uniqueness proof for recovering the temporal function are proposed. Numerical examples are demonstrated in both two and three dimensions.
A compact X-ray source based on Compton scattering
Energy Technology Data Exchange (ETDEWEB)
Bulyak, E.; Gladkikh, P.; Grigor'ev, Yu.; Guk, I.; Karnaukhov, I.; Khodyachikh, A.; Kononenko, S.; Mocheshnikov, N.; Mytsykov, A.; Shcherbakov, A. E-mail: shcherbakov@kipt.kharkov.ua; Tarasenko, A.; Telegin, Yu.; Zelinsky, A
2001-07-21
The main parameters of the Kharkov electron storage ring N-100, with a beam energy range from 70 to 150 MeV, are presented. The main results obtained in experimental studies are briefly described. Plans for upgrading N-100 into an X-ray generator based on Compton back-scattering are presented. The electron beam energy range will be extended up to 250 MeV, and the circumference of the storage ring will be 13.72 m. The lattice, the parameters of the electron beam, and the Compton back-scattered photon flux are described.
A compact X-ray source based on Compton scattering
International Nuclear Information System (INIS)
Bulyak, E.; Gladkikh, P.; Grigor'ev, Yu.; Guk, I.; Karnaukhov, I.; Khodyachikh, A.; Kononenko, S.; Mocheshnikov, N.; Mytsykov, A.; Shcherbakov, A.; Tarasenko, A.; Telegin, Yu.; Zelinsky, A.
2001-01-01
The main parameters of the Kharkov electron storage ring N-100, with a beam energy range from 70 to 150 MeV, are presented. The main results obtained in experimental studies are briefly described. Plans for upgrading N-100 into an X-ray generator based on Compton back-scattering are presented. The electron beam energy range will be extended up to 250 MeV, and the circumference of the storage ring will be 13.72 m. The lattice, the parameters of the electron beam, and the Compton back-scattered photon flux are described.
Critchley, A. D. J.
2003-10-01
The main emphasis of the diode research project at the Atomic Weapons Establishment (AWE) UK is to produce small diameter radiographic spot sizes at high dose to improve the resolution of the transmission radiographs taken during hydrodynamic experiments. Experimental measurements of conditions within the diodes of Pulsed Power driven flash x-ray machines are vital to provide a benchmark for electromagnetic PIC codes such as LSP which are used to develop new diode designs. The potential use of inverse Compton scattering (ICS) as a diagnostic technique in the determination of electron energies within the diode has been investigated. The interaction of a laser beam with a beam of high-energy electrons will create an ICS spectrum of photons. Theoretically, one should be able to glean information on the energies and positions of the electrons from the energy spectrum and differential cross section of the scattered photons. The feasibility of fielding this technique on various diode designs has been explored, and an experimental setup with the greatest likelihood of success is proposed.
X-ray generation by inverse Compton scattering at the superconducting RF test facility
Energy Technology Data Exchange (ETDEWEB)
Shimizu, Hirotaka, E-mail: hirotaka@post.kek.jp [KEK, 1-1 Oho, Tsukuba 305-0801, Ibaraki (Japan); Akemoto, Mitsuo; Arai, Yasuo; Araki, Sakae; Aryshev, Alexander; Fukuda, Masafumi; Fukuda, Shigeki; Haba, Junji; Hara, Kazufumi; Hayano, Hitoshi; Higashi, Yasuo; Honda, Yosuke; Honma, Teruya; Kako, Eiji; Kojima, Yuji; Kondo, Yoshinari; Lekomtsev, Konstantin; Matsumoto, Toshihiro; Michizono, Shinichiro; Miyoshi, Toshinobu [KEK, 1-1 Oho, Tsukuba 305-0801, Ibaraki (Japan); and others
2015-02-01
Quasi-monochromatic X-rays with high brightness have a broad range of applications in fields such as the life sciences, biological and medical applications, and microlithography. One method for generating such X-rays is via inverse Compton scattering (ICS). X-ray generation experiments using ICS were carried out at the superconducting RF test facility (STF) accelerator at KEK. A new beam line, a newly developed four-mirror optical cavity system, and a new X-ray detector system were prepared in the downstream section of the STF electron accelerator. Amplified pulsed photons were accumulated in the four-mirror optical cavity and collided with an incoming 40 MeV electron beam. The generated X-rays were detected using a microchannel plate (MCP) detector for X-ray yield measurements and a new silicon-on-insulator (SOI) detector system for energy measurements. The X-ray yield detected by the MCP detector was 1756.8±272.2 photons/(244 electron bunches). Extrapolating this result to a 1 ms train length under 5 Hz operation, 4.60×10^{5} photons/1% bandwidth were obtained. The peak X-ray energy, confirmed by the SOI detector, was 29 keV, which is consistent with ICS X-rays.
NuSTAR observations of the bullet cluster: constraints on inverse compton emission
DEFF Research Database (Denmark)
Wik, Daniel R.; Hornstrup, Allan; Molendi, S.
2014-01-01
The search for diffuse non-thermal inverse Compton (IC) emission from galaxy clusters at hard X-ray energies has been undertaken with many instruments, with most detections being either of low significance or controversial. Because all prior telescopes sensitive at E > 10 keV do not focus light and have degree-scale fields of view, their backgrounds are both high and difficult to characterize. The associated uncertainties result in lower sensitivity to IC emission and a greater chance of false detection. In this work, we present 266 ks NuSTAR observations of the Bullet cluster, which is detected ... component to describe the spectrum of the Bullet cluster, and instead argue that it is dominated at all energies by emission from purely thermal gas. The conservatively derived 90% upper limit on the IC flux of 1.1 × 10^{-12} erg s^{-1} cm^{-2} (50-100 keV), implying a lower limit on B ≳ 0.2 μG, is barely consistent ...
Inverse free electron laser accelerator for advanced light sources
Directory of Open Access Journals (Sweden)
J. P. Duris
2012-06-01
Full Text Available We discuss the inverse free electron laser (IFEL) scheme as a compact high gradient accelerator solution for driving advanced light sources such as a soft x-ray free electron laser amplifier or an inverse Compton scattering based gamma-ray source. In particular, we present a series of new developments aimed at improving the design of future IFEL accelerators. These include a new procedure to optimize the choice of the undulator tapering, a new concept for prebunching which greatly improves the fraction of trapped particles and the final energy spread, and a self-consistent study of beam loading effects which leads to energy-efficient, high laser-to-beam power conversion.
High intensity compact Compton X-ray sources: Challenges and potential of applications
Energy Technology Data Exchange (ETDEWEB)
Jacquet, M., E-mail: mjacquet@lal.in2p3.fr
2014-07-15
Thanks to the exceptional development of high power femtosecond lasers over the last 15 years, Compton based X-ray sources have been in full development worldwide in recent years. Compact Compton sources are able to combine the compactness of the instrument with a beam of high intensity and high quality that is tunable in energy. In various fields of application, such as biomedical science, cultural heritage preservation and materials science research, these sources should provide an easy working environment, and the methods currently used at synchrotrons could be largely deployed in lab-size environments such as hospitals, labs, or museums.
Gamma ray burst source locations with the Ulysses/Compton/PVO Network
International Nuclear Information System (INIS)
Cline, T.L.; Hurley, K.C.; Boer, M.; Sommer, M.; Niel, M.; Fishman, G.J.; Kouveliotou, C.; Meegan, C.A.; Paciesas, W.S.; Wilson, R.B.; Laros, J.G.; Klebesadel, R.W.
1991-01-01
The new interplanetary gamma-ray burst network will determine source fields with unprecedented accuracy. The baseline of the Ulysses mission and the locations of Pioneer Venus Orbiter and of Mars Observer will ensure precision to a few tens of arc seconds. Combined with the event phenomenologies of the Burst and Transient Source Experiment on the Compton Observatory, the source locations to be achieved with this network may provide a basic new understanding of the puzzle of gamma-ray bursts.
Modulated method for efficient, narrow-bandwidth, laser Compton X-ray and gamma-ray sources
Barty, Christopher P. J.
2017-07-11
A method of x-ray and gamma-ray generation via laser Compton scattering uses the interaction of a specially formatted, highly modulated, long-duration laser pulse with a high-frequency train of high-brightness electron bunches both to create narrow-bandwidth x-ray and gamma-ray sources and to significantly increase the laser-to-Compton-photon conversion efficiency.
International Nuclear Information System (INIS)
Washio, M.; Sakaue, K.; Hama, Y.; Kamiya, Y.; Moriyama, R.; Hezume, K.; Saito, T.; Kuroda, R.; Kashiwagi, S.; Ushida, K.; Hayano, H.; Urakawa, J.
2006-01-01
The high-quality beam generation project based on the High-Tech Research Center Project, approved by the Ministry of Education, Culture, Sports, Science and Technology in 1999, has been conducted by the Advanced Research Institute for Science and Engineering, Waseda University. In the project, a laser photocathode RF gun was selected as the high-quality electron beam source. RF cavities with low dark current, manufactured by a diamond turning technique, have been successfully produced. A low-emittance electron beam was realized by choosing a modified laser injection technique; the obtained normalized emittance was about 3 mm·mrad at an electron charge of 100 pC. Soft X-ray beam generation at an energy of 370 eV, which lies in the so-called 'water window', has been performed by inverse Compton scattering in collisions between an IR laser and the low-emittance electron beams. (authors)
Polarized γ source based on Compton backscattering in a laser cavity
Directory of Open Access Journals (Sweden)
V. Yakimenko
2006-09-01
Full Text Available We propose a novel gamma source suitable for generating a polarized positron beam for the next generation of electron-positron colliders, such as the International Linear Collider (ILC) and the Compact Linear Collider (CLIC). This 30-MeV polarized gamma source is based on Compton scattering, inside a picosecond CO_{2} laser cavity, of electron bunches produced by a 4-GeV linac. We identified and experimentally verified the optimum conditions for obtaining at least one gamma photon per electron. After multiplication at several consecutive interaction points, the circularly polarized gamma rays are stopped on a target, thereby creating copious numbers of polarized positrons. We address the practicality of having an intracavity Compton-polarized positron source as the injector for these new colliders.
Compton backscattered annihilation line emission: A new diagnostic of accreting compact sources
Lingenfelter, Richard E.; Hua, Xin-Min
1992-01-01
It is shown that Compton scattering of 511 keV electron-positron annihilation radiation produces a line-like feature at approximately 170 keV from backscattered photons. Assuming a simple model of an accretion disk around a compact source, the spectrum of Compton-scattered annihilation line emission is explored for a range of conditions. It is further shown that such Compton backscattering of annihilation line emission from the inner edge of an accretion disk could account for the previously unidentified 170 keV line emission and high-energy continuum observed from a variable, compact source, or sources, of annihilation radiation near the Galactic Center. Identification of the observed 170 keV line as an annihilation line reflection feature provides strong new evidence that the source of the emission is an accreting compact object. Further study of these features in existing spectra and in forthcoming GRO observations of these and other sources can provide unique new diagnostics of the innermost regions of accretion disks around compact objects.
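The ~170 keV backscatter feature quoted above follows directly from single-scatter Compton kinematics; as a quick check (standard formula, nothing beyond the abstract assumed):

```python
# Compton-scattered photon energy: E' = E / (1 + (E/me_c2) * (1 - cos(theta)))
import math

ME_C2_KEV = 511.0  # electron rest energy in keV

def compton_scattered_energy(e_kev, theta_rad):
    """Energy of a photon of initial energy e_kev after scattering by theta_rad."""
    return e_kev / (1.0 + (e_kev / ME_C2_KEV) * (1.0 - math.cos(theta_rad)))

# A 511 keV annihilation photon backscattered (theta = 180 deg):
e_back = compton_scattered_energy(511.0, math.pi)
print(round(e_back, 1))  # 170.3 -- the "line-like feature at ~170 keV"
```

Note that the backscatter energy of a 511 keV photon is exactly 511/3 keV, since E/me_c2 = 1 for annihilation radiation.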
Full traveltime inversion in source domain
Liu, Lu
2017-06-01
This paper presents a new method of source-domain full traveltime inversion (FTI). The objective of this study is to automatically build near-surface velocity using the early arrivals of seismic data. The method generates an inverted velocity that kinematically best matches the reconstructed plane-wave source of early arrivals with the true source in the source domain. It does not require picking first arrivals for tomography, which is one of the most challenging aspects of ray-based tomographic inversion. Moreover, this method does not need to estimate the source wavelet, which is a necessity for receiver-domain wave-equation velocity inversion. We applied our method to a synthetic dataset; the results show that it can generate a reasonable background velocity even when shingled first arrivals exist, and can provide a good initial velocity for conventional full waveform inversion (FWI).
Advanced Laser-Compton Gamma-Ray Sources for Nuclear Materials Detection, Assay and Imaging
Barty, C. P. J.
2015-10-01
Highly-collimated, polarized, mono-energetic beams of tunable gamma-rays may be created via the optimized Compton scattering of pulsed lasers off of ultra-bright, relativistic electron beams. Above 2 MeV, the peak brilliance of such sources can exceed that of the world's largest synchrotrons by more than 15 orders of magnitude and can enable for the first time the efficient pursuit of nuclear science and applications with photon beams, i.e. Nuclear Photonics. Potential applications are numerous and include isotope-specific nuclear materials management, element-specific medical radiography and radiology, non-destructive, isotope-specific, material assay and imaging, precision spectroscopy of nuclear resonances and photon-induced fission. This review covers activities at the Lawrence Livermore National Laboratory related to the design and optimization of mono-energetic, laser-Compton gamma-ray systems and introduces isotope-specific nuclear materials detection and assay applications enabled by them.
Development of a High-Average-Power Compton Gamma Source for Lepton Colliders
Pogorelsky, Igor; Polyanskiy, Mikhail N.; Yakimenko, Vitaliy; Platonenko, Viktor T.
2009-01-01
Gamma-ray (γ-ray) beams of high average power and peak brightness are in demand for a number of applications in high-energy physics, material processing, medicine, etc. One such example is gamma conversion into polarized positrons and muons, which is under consideration for proposed lepton colliders. A γ-source based on Compton backscattering from a relativistic electron beam is a promising candidate for this application. Our approach to a high-repetition-rate γ-source places the Compton interaction point inside a CO2 laser cavity. A laser pulse interacts with periodic electron bunches on each round trip inside the laser cavity, producing a corresponding train of γ-pulses. The round-trip optical losses can be compensated by amplification in the active laser medium. The major challenge for this approach is maintaining a stable amplification rate for a picosecond CO2 laser pulse over multiple resonator round trips without significant deterioration of its temporal and transverse profiles. To address this task, we developed a computer code that identifies the directions and priorities in the development of such a multi-pass picosecond CO2 laser. Proof-of-principle experiments help to verify the model and show the viability of the concept. In these tests we demonstrated extended trains of picosecond CO2 laser pulses circulating inside a cavity that incorporates the Compton interaction point.
Compact X-ray source based on Compton backscattering
Bulyak, E V; Zelinsky, A; Karnaukhov, I; Kononenko, S; Lapshin, V G; Mytsykov, A; Telegin, Yu P; Khodyachikh, A; Shcherbakov, A; Molodkin, V; Nemoshkalenko, V; Shpak, A
2002-01-01
A feasibility study of an intense X-ray source based on the interaction between the electron beam in a compact storage ring and a laser pulse accumulated in an optical resonator is carried out. We propose to reconstruct the 160 MeV electron storage ring N-100, which was shut down several years ago. A new magnetic lattice will provide a transverse electron beam size of ~35 μm at the point of electron-laser beam interaction. The proposed facility is to generate X-ray beams of intensity ~2.6×10^14 s^-1 and spectral brightness ~10^12 phot/0.1%bw/s/mm^2/mrad^2 in the energy range from 10 keV up to 0.5 MeV. These X-ray beam parameters meet the requirements of most technological and scientific applications. In addition, we plan to use the new facility for studying the laser cooling effect.
Compact X-ray source based on Compton backscattering
Energy Technology Data Exchange (ETDEWEB)
Bulyak, E.; Gladkikh, P.; Zelinsky, A. E-mail: zelinsky@kipt.kharkov.ua; Karnaukhov, I.; Kononenko, S.; Lapshin, V.; Mytsykov, A.; Telegin, Yu.; Khodyachikh, A.; Shcherbakov, A.; Molodkin, V.; Nemoshkalenko, V.; Shpak, A
2002-07-21
A feasibility study of an intense X-ray source based on the interaction between the electron beam in a compact storage ring and a laser pulse accumulated in an optical resonator is carried out. We propose to reconstruct the 160 MeV electron storage ring N-100, which was shut down several years ago. A new magnetic lattice will provide a transverse electron beam size of ~35 μm at the point of electron-laser beam interaction. The proposed facility is to generate X-ray beams of intensity ~2.6×10^14 s^-1 and spectral brightness ~10^12 phot/0.1%bw/s/mm^2/mrad^2 in the energy range from 10 keV up to 0.5 MeV. These X-ray beam parameters meet the requirements of most technological and scientific applications. In addition, we plan to use the new facility for studying the laser cooling effect.
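The quoted 0.5 MeV upper end of the spectrum can be cross-checked against the usual head-on inverse Compton estimate E_x ≈ 4γ²E_laser. A minimal sketch, where the 1064 nm (Nd:YAG-class) laser photon energy is an assumption not stated in the abstract:

```python
# Head-on inverse Compton estimate for the N-100 ring parameters in the abstract.
E_ELECTRON_MEV = 160.0    # ring energy from the abstract
ME_MEV = 0.511            # electron rest energy
E_LASER_EV = 1.165        # 1064 nm photon energy (assumed laser wavelength)

gamma = E_ELECTRON_MEV / ME_MEV              # Lorentz factor, ~313
e_x_kev = 4.0 * gamma**2 * E_LASER_EV / 1e3  # scattered photon energy in keV

print(round(e_x_kev))  # ~457 keV, consistent with the quoted "up to 0.5 MeV"
```

Lower photon energies in the 10 keV to 0.5 MeV band then follow from scattering angles away from direct backscatter.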
Geoacoustic inversion using combustive sound source signals.
Potty, Gopu R; Miller, James H; Wilson, Preston S; Lynch, James F; Newhall, Arthur
2008-09-01
Combustive sound source (CSS) data collected on single hydrophone receiving units, in water depths ranging from 65 to 110 m, during the Shallow Water 2006 experiment clearly show modal dispersion effects and are suitable for modal geoacoustic inversions. CSS shots were set off at 26 m depth in 100 m of water. The inversions performed are based on an iterative scheme using dispersion-based short time Fourier transform in which each time-frequency tiling is adaptively rotated in the time-frequency plane, depending on the local wave dispersion. Results of the inversions are found to compare favorably to local core data.
International Nuclear Information System (INIS)
Chaleil, Annaig
2016-01-01
X-ray sources based on the inverse Compton scattering process produce tunable, near-monochromatic, and highly directive X-rays. Recent advances in laser and accelerator technologies make the development of such very compact hard X-ray sources possible. These sources are particularly attractive for several applications, such as medical imaging, cancer therapy, or cultural heritage study, currently performed in size-limited infrastructures. The main objective of this thesis is the development of an inverse Compton scattering source on the ELSA linac of CEA at Bruyeres-le-Chatel as a calibration tool for ultra-fast detectors. A non-resonant cavity was designed to multiply the number of emitted X-ray photons: the laser optical path is folded to pile up laser pulses at the interaction point, thus increasing the interaction probability. Another way of optimizing the X-ray yield consists in increasing the electron bunch density at the interaction point, which is strongly dependent on the electron energy. A facility upgrade was performed to increase the electron energy up to 30 MeV. The X-ray output gain obtained with this system was measured and compared with calculated expectations and 3D PIC simulations. (author) [fr]
Source of X-ray radiation based on back compton scattering
Bulyak, E V; Karnaukhov, I M; Kononenko, S G; Lapshin, V G; Mytsykov, A O; Telegin, Yu P; Shcherbakov, A A; Zelinsky, Andrey Yurij
2000-01-01
The applicability of generating powerful X-ray beams by backward Compton scattering of a laser photon beam on a cooled electron beam was studied, and preliminary estimates were made. A few-MeV electron beam circulating in a compact storage ring can be cooled by interaction with powerful laser radiation of micrometer wavelength to achieve a normalized emittance of 10^-7 m. A tunable X-ray source of photons with energies ranging from a few keV up to a hundred keV could result from the interaction of the laser beam with a dense electron beam.
Source of X-ray radiation based on back compton scattering
Energy Technology Data Exchange (ETDEWEB)
Bulyak, E.V.; Gladkikh, P.I.; Karnaukhov, I.M.; Kononenko, S.G.; Lapshin, V.I.; Mytsykov, A.O.; Telegin, Yu.N.; Shcherbakov, A.A. E-mail: shcherbakov@kipt.kharkov.ua; Zelinsky, A.Yu
2000-06-21
The applicability of generating powerful X-ray beams by backward Compton scattering of a laser photon beam on a cooled electron beam was studied, and preliminary estimates were made. A few-MeV electron beam circulating in a compact storage ring can be cooled by interaction with powerful laser radiation of micrometer wavelength to achieve a normalized emittance of 10^-7 m. A tunable X-ray source of photons with energies ranging from a few keV up to a hundred keV could result from the interaction of the laser beam with a dense electron beam.
Source of X-ray radiation based on back compton scattering
International Nuclear Information System (INIS)
Bulyak, E.V.; Gladkikh, P.I.; Karnaukhov, I.M.; Kononenko, S.G.; Lapshin, V.I.; Mytsykov, A.O.; Telegin, Yu.N.; Shcherbakov, A.A.; Zelinsky, A.Yu.
2000-01-01
The applicability of generating powerful X-ray beams by backward Compton scattering of a laser photon beam on a cooled electron beam was studied, and preliminary estimates were made. A few-MeV electron beam circulating in a compact storage ring can be cooled by interaction with powerful laser radiation of micrometer wavelength to achieve a normalized emittance of 10^-7 m. A tunable X-ray source of photons with energies ranging from a few keV up to a hundred keV could result from the interaction of the laser beam with a dense electron beam.
THE γ-RAY SPECTRUM OF GEMINGA AND THE INVERSE COMPTON MODEL OF PULSAR HIGH-ENERGY EMISSION
International Nuclear Information System (INIS)
Lyutikov, Maxim
2012-01-01
We reanalyze the Fermi spectra of the Geminga and Vela pulsars. We find that the spectrum of Geminga above the break is well approximated by a simple power law without an exponential cutoff, making Geminga's spectrum similar to that of the Crab. Vela's broadband γ-ray spectrum is equally well fit by both the exponential-cutoff and the double power-law shapes. In the broadband double power-law fits, for a typical Fermi spectrum of a bright γ-ray pulsar, most of the errors accumulate due to the arbitrary parameterization of the spectral roll-off. In addition, a power law with an exponential cutoff gives an acceptable fit to an underlying double power-law spectrum for a very broad range of parameters, making such fitting procedures insensitive to the underlying Fermi photon spectrum. Our results have important implications for the mechanism of pulsar high-energy emission. A number of observed properties of γ-ray pulsars (broken power-law spectra without exponential cutoffs, stretching in the case of the Crab beyond the maximal curvature limit; spectral breaks close to or exceeding the maximal breaks due to curvature emission; patterns of the relative intensities of the leading and trailing pulses in the Crab repeated in the X-ray and γ-ray regions; the presence of profile peaks at lower energies aligned with γ-ray peaks) all point to an inverse Compton origin of the high-energy emission from the majority of pulsars.
Laser-Wakefield driven compact Compton scattering gamma-ray source
Energy Technology Data Exchange (ETDEWEB)
Albert, F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Froula, D. H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hartemann, F. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Joshi, C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2010-04-13
We propose to demonstrate a novel x-ray and gamma-ray light source based on laser-plasma electron acceleration and Compton scattering at the Jupiter Laser Facility at LLNL. This will provide a new, versatile, and compact light source capability at the laboratory, with very broad scientific applications of interest to many disciplines. The source's synchronization with the seed laser system at a femtosecond time scale (i.e., the time scale at which chemical reactions occur) will allow scientists to perform pump-probe experiments with x-ray and gamma-ray beams. Across the laboratory, this will be a new tool for nuclear science, high energy density physics, chemistry, biology, and weapons studies.
Mesoscale inversion of carbon sources and sinks
International Nuclear Information System (INIS)
Lauvaux, T.
2008-01-01
Inverse methods at large scales are used to infer the spatial variability of carbon sources and sinks over the continents, but their uncertainties remain large. Atmospheric concentrations integrate the surface flux variability, but atmospheric transport models at low resolution are not able to properly simulate the local atmospheric dynamics at the measurement sites. Nevertheless, the inverse estimates are more representative of the large spatial heterogeneity of the ecosystems than direct flux measurements. Top-down and bottom-up methods that aim at quantifying the carbon exchanges between the surface and the atmosphere correspond to different scales and are not easily comparable. During this PhD, a mesoscale inverse system was developed to correct carbon fluxes at 8 km resolution. The high-resolution transport model MesoNH was used to simulate accurately the variability of the atmospheric concentrations, which allowed us to reduce the uncertainty of the retrieved fluxes. All the measurements used here were observed during the intensive regional campaign CERES of May and June 2005, during which several instrumented towers measured CO2 concentrations and fluxes in the southwest of France. Airborne measurements allowed us to observe concentrations at high altitude but also CO2 surface fluxes over large parts of the domain. First, the capacity of the inverse system to correct the CO2 fluxes was estimated using pseudo-data experiments. The largest fraction of the concentration variability was attributed to regional surface fluxes over an area of about 300 km around the site locations, depending on the meteorological conditions. Second, an ensemble of simulations allowed us to define the spatial and temporal structures of the transport errors. Finally, the inverse fluxes at 8 km resolution were compared to direct flux measurements; the inverse system was validated in space and time and showed an improvement of the first-guess fluxes from a vegetation model.
Sensitivity analysis of distributed volcanic source inversion
Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José
2016-04-01
A recently proposed algorithm (Camacho et al., 2011) claims to rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm takes advantage of the fast calculation afforded by analytical models and adds the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressurized and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure, and slip. These source bodies are described as aggregations of elemental point sources for pressure, density, and slip, and they fit the whole dataset (subject to some 3D regularity conditions). Although some examples and applications have already been presented to demonstrate the ability of the algorithm to reconstruct a magma pressure source (e.g., Camacho et al., 2011; Cannavò et al., 2015), a systematic analysis of the sensitivity and reliability of the algorithm is still lacking. In this explorative work we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in inversions. In particular, besides the source parameters, we focus on the ground deformation network topology and the noise in the measurements. The proposed analysis can be used for a better interpretation of the algorithm's results in real-case applications. Camacho, A. G., González, P. J., Fernández, J. & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò, F., Camacho, A. G., González, P. J., Mattia, M., Puglisi, G., Fernández, J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises, Scientific Reports, 5 (10970) doi:10.1038/srep
Source Estimation by Full Wave Form Inversion
Energy Technology Data Exchange (ETDEWEB)
Sjögreen, Björn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Petersson, N. Anders [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing
2013-08-07
Given time-dependent ground motion recordings at a number of receiver stations, we solve the inverse problem of estimating the parameters of the seismic source. The source is modeled as a point moment tensor source, characterized by its location, moment tensor components, start time, and the frequency parameter (rise time) of its source time function. In total, there are 11 unknown parameters. We use a non-linear conjugate gradient algorithm to minimize the full waveform misfit between observed and computed ground motions at the receiver stations. An important underlying assumption of the minimization problem is that the wave propagation is accurately described by the elastic wave equation in a heterogeneous isotropic material. We use a fourth-order accurate finite difference method, developed in [12], to evolve the waves forward in time. The adjoint wave equation corresponding to the discretized elastic wave equation is used to compute the gradient of the misfit, which is needed by the non-linear conjugate gradient minimization algorithm. A new point moment tensor source discretization is derived that guarantees that the Hessian of the misfit is a continuous function of the source location. An efficient approach for calculating the Hessian is also presented. We show how the Hessian can be used to scale the problem to improve the convergence of the non-linear conjugate gradient algorithm. Numerical experiments are presented for estimating the source parameters from synthetic data in a layer-over-halfspace problem (LOH.1), illustrating rapid convergence of the proposed approach.
Taira, Y.; Adachi, M.; Zen, H.; Tanikawa, T.; Yamamoto, N.; Hosaka, M.; Takashima, Y.; Soda, K.; Katoh, M.
2011-10-01
Inverse Compton-scattered gamma rays of tunable energy were generated by changing the collision angle between a laser and an electron beam of fixed energy at the electron storage ring UVSOR-II. Analytic expressions were derived for the energy and intensity of the gamma rays. The measured energy and intensity of the gamma rays agreed with the theoretical values, and the pulse width was calculated to be a few ps under the experimental conditions. It was shown that ultra-short gamma-ray pulses with a pulse width of 150 fs can be generated by optimizing the size of the laser spot.
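The collision-angle tuning described above is captured by the standard (recoil-free) inverse Compton kinematics, E_γ ≈ 2γ²E_L(1 − cos θ_c), which reduces to 4γ²E_L for a head-on collision. A sketch; the electron energy and laser wavelength below are illustrative assumptions, not values from the abstract:

```python
import math

ME_MEV = 0.511  # electron rest energy

def peak_gamma_energy_mev(e_electron_mev, e_laser_ev, theta_c_rad):
    """Peak inverse-Compton photon energy (recoil neglected) for collision angle theta_c.

    theta_c = pi is a head-on collision; smaller collision angles give lower
    photon energy, which is the tuning knob described in the abstract.
    """
    gamma = e_electron_mev / ME_MEV
    return 2.0 * gamma**2 * e_laser_ev * (1.0 - math.cos(theta_c_rad)) / 1e6

# Illustrative numbers only (a 750 MeV storage ring and an 800 nm laser assumed):
head_on = peak_gamma_energy_mev(750.0, 1.55, math.pi)
right_angle = peak_gamma_energy_mev(750.0, 1.55, math.pi / 2)
print(round(head_on, 1), round(right_angle, 1))  # a right-angle collision halves the energy
```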
Radiation Source Mapping with Bayesian Inverse Methods
Hykes, Joshua Michael
We present a method to map the spectral and spatial distributions of radioactive sources using a small number of detectors. Locating and identifying radioactive materials is important for border monitoring, accounting for special nuclear material in processing facilities, and clean-up operations. Most methods for analyzing these problems make restrictive assumptions about the distribution of the source. In contrast, the source-mapping method presented here allows an arbitrary three-dimensional distribution in space and a flexible group and gamma-peak distribution in energy. To apply the method, the system's geometry and materials must be known. A probabilistic Bayesian approach is used to solve the resulting inverse problem (IP), since the system of equations is ill-posed; the probabilistic approach also provides estimates of the confidence in the final source map prediction. A set of adjoint-flux discrete ordinates solutions, obtained in this work with the Denovo code, is required to efficiently compute detector responses from a candidate source distribution. These adjoint fluxes are then used to form the linear model mapping the state space to the response space. The test for the method is simultaneously locating a set of 137Cs and 60Co gamma sources in an empty room. This test problem is solved using synthetic measurements generated by a Monte Carlo (MCNP) model and using experimental measurements collected for this purpose. With the synthetic data, the predicted source distributions identified the locations of the sources to within tens of centimeters, in a room with an approximately four-by-four meter floor plan. Most of the predicted source intensities were within a factor of ten of their true values. The chi-square value of the predicted source was within a factor of five of the expected value based on the number of measurements employed. With a favorable uniform initial guess, the predicted source map was nearly identical to the true distribution.
X-band RF gun and linac for medical Compton scattering X-ray source
Dobashi, Katsuhito; Uesaka, Mitsuru; Fukasawa, Atsushi; Sakamoto, Fumito; Ebina, Futaro; Ogino, Haruyuki; Urakawa, Junji; Higo, Toshiyasu; Akemoto, Mitsuo; Hayano, Hitoshi; Nakagawa, Keiichi
2004-12-01
A Compton scattering hard X-ray source for 10-80 keV is under construction using an X-band (11.424 GHz) electron linear accelerator and a YAG laser at the Nuclear Engineering Research Laboratory, University of Tokyo. This work is part of the national project on the development of advanced compact medical accelerators in Japan; the National Institute for Radiological Science is the host institute, and U. Tokyo and KEK are working on the X-ray source. The main advantage is the production of tunable, monochromatic, hard (10-80 keV) X-rays with intensities of 10^8-10^10 photons/s (at several stages) in a table-top size. A second important aspect is the reduction of noise radiation at the beam dump by decelerating the electrons after Compton scattering. This realizes one beamline of a 3rd-generation SR source at small facilities without heavy shielding. The final goal is to install the linac and laser on a moving gantry. We have designed an X-band (11.424 GHz) traveling-wave linac for this purpose. Numerical study with the CAIN code and luminosity calculations were performed to estimate the X-ray yield. An X-band thermionic-cathode RF gun and an RDS (Round Detuned Structure)-type X-band accelerating structure are applied to generate a 50 MeV electron beam with 10^4 microbunches of 20 pC in a 1 microsecond RF macro-pulse. The X-ray yield from this electron beam and a Q-switched Nd:YAG laser of 2 J/10 ns is 10^7 photons/RF-pulse (10^8 photons/s at 10 pps). We plan to adopt a laser circulation technique to increase the X-ray yield up to 10^9 photons/pulse (10^10 photons/s). A 50 MW X-band klystron and a compact modulator have been constructed and are now under tuning. The construction of the whole system has started; X-ray generation and medical application will be performed early next year.
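As a rough consistency check on the quoted 10-80 keV band, the head-on inverse Compton estimate for the 50 MeV beam and the Nd:YAG laser mentioned in the abstract (electron recoil neglected) gives:

```python
# Rough check of the X-ray energy reachable with the 50 MeV beam quoted above.
# Head-on inverse Compton (recoil neglected): E_x ~ 4 * gamma^2 * E_laser.
ME_MEV = 0.511            # electron rest energy
E_LASER_EV = 1.165        # 1064 nm Nd:YAG photon energy

gamma = 50.0 / ME_MEV     # Lorentz factor of a 50 MeV electron
e_x_kev = 4.0 * gamma**2 * E_LASER_EV / 1e3

print(round(e_x_kev))  # ~45 keV, inside the 10-80 keV band targeted by the source
```

Tuning across the band then comes from varying the electron energy and the observation angle.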
Recent Measurements And Plans for the SLAC Compton X-Ray Source
Energy Technology Data Exchange (ETDEWEB)
Vlieks, A.E.; Akre, R.; Caryotakis, G.; DeStefano, C.; Frederick, W.J.; Heritage, J.P.; Luhmann, N.C.; Martin, D.; Pellegrini, C.; /SLAC /UC, Davis /UCLA
2006-02-14
A compact source of monoenergetic X-rays, generated via Compton backscattering, has been developed in a collaboration between U.C. Davis and SLAC. The source consists of a 5.5-cell X-band photoinjector, a 1.05 m long high-gradient accelerator structure, and an interaction chamber where a high-power (TW), short-pulse (sub-ps) infrared laser beam is brought into a nearly head-on collision with a high-quality focused electron beam. Successful completion of this project will result in the capability of generating a monoenergetic X-ray beam continuously tunable from 20 to 85 keV. We have completed a series of measurements leading up to the generation of monoenergetic X-rays. Measurements of essential electron beam parameters and the techniques used in establishing electron/photon collisions will be presented. We discuss the design of an improved interaction chamber, future electro-optic experiments using this chamber, and plans for expanding the overall program to the generation of terahertz radiation.
Source inversion in the full-wave tomography; Full wave tomography ni okeru source inversion
Energy Technology Data Exchange (ETDEWEB)
Tsuchiya, T. [DIA Consultants Co. Ltd., Tokyo (Japan)
1997-10-22
In order to consider the effects of vibration source characteristics in full-wave tomography (FWT), a study was performed on a method to invert vibration source parameters together with the V(p)/V(s) distribution. The study expanded an analysis method based on the gradient method of Tarantola and the subspace method of Sambridge, and conducted numerical experiments. Experiment 1 performed inversion of only the vibration source parameters, and Experiment 2 executed simultaneous inversion of the V(p)/V(s) distribution and the vibration source parameters. The discussions revealed that an effective analytical procedure would be as follows: to predict maximum stress, the average vibration source parameters and the property parameters are first inverted simultaneously; to estimate each vibration source parameter with high accuracy, the property parameters are fixed and each vibration source parameter is inverted individually; then the derived vibration source parameters are fixed and the property parameters are again inverted from the initial values. 5 figs., 2 tabs.
Source-independent elastic waveform inversion using a logarithmic wavefield
Choi, Yun Seok
2012-01-01
The logarithmic waveform inversion has been widely developed and applied to synthetic and real data. In most logarithmic waveform inversion algorithms, the subsurface velocities are updated along with the source estimation. To avoid estimating the source wavelet in the logarithmic waveform inversion, we developed a source-independent logarithmic waveform inversion algorithm. In this algorithm, we first normalize the wavefields by a reference wavefield to remove the source wavelet, and then take the logarithm of the normalized wavefields. Based on the properties of the logarithm, we define three types of misfit functions: a combination of amplitude and phase, amplitude-only, and phase-only. In the inversion, the gradient is computed using the back-propagation formula without directly calculating the Jacobian matrix. We apply our algorithm to noise-free and noise-added synthetic data generated for a modified version of the elastic Marmousi2 model, and compare the results with those of source-estimation logarithmic waveform inversion. For noise-free data, the source-independent algorithms yield velocity models close to the true velocity models. For random-noise data, the source-estimation logarithmic waveform inversion yields better results than the source-independent method, whereas for coherent-noise data the results are reversed. Numerical results show that the source-independent and source-estimation logarithmic waveform inversion methods have their own merits for random- and coherent-noise data. © 2011.
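The wavelet-removal step described above (normalize by a reference wavefield, then take the logarithm) can be sketched in a one-line frequency-domain toy example; the random complex "Green's functions" below are stand-ins, and the point is only that the normalized wavefield no longer depends on the source wavelet:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frequency-domain model: recorded trace = source wavelet * medium Green's function.
n_freq = 64
wavelet = rng.normal(size=n_freq) + 1j * rng.normal(size=n_freq)    # unknown source
green = rng.normal(size=n_freq) + 1j * rng.normal(size=n_freq)      # medium response
green_ref = rng.normal(size=n_freq) + 1j * rng.normal(size=n_freq)  # reference trace's response

d = wavelet * green
d_ref = wavelet * green_ref

# Normalizing by the reference trace cancels the wavelet ...
normalized = d / d_ref
# ... so the complex logarithm splits into amplitude and phase terms that
# depend only on the medium, not on the source:
log_wavefield = np.log(normalized)

print(np.allclose(normalized, green / green_ref))  # True: source-independent
```

The amplitude-only and phase-only misfits of the abstract then correspond to the real and imaginary parts of `log_wavefield`.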
Design of a Polarised Positron Source Based on Laser Compton Scattering
Araki, S; Honda, Y; Kurihara, Y; Kuriki, M; Okugi, T; Omori, T; Taniguchi, T; Terunuma, N; Urakawa, J; Artru, X; Chevallier, M; Strakhovenko, V M; Bulyak, E; Gladkikh, P; Mönig, K; Chehab, R; Variola, A; Zomer, F; Guiducci, S; Raimondi, Pantaleo; Zimmermann, Frank; Sakaue, K; Hirose, T; Washio, M; Sasao, N; Yokoyama, H; Fukuda, M; Hirano, K; Takano, M; Takahashi, T; Sato, H; Tsunemi, A; Gao, J; Soskov, V
2005-01-01
We describe a scheme for producing polarised positrons at the ILC from polarised X-rays created by Compton scattering of a few-GeV electron beam off a CO2 or YAG laser. This scheme is very energy-efficient, using high-finesse laser cavities in conjunction with an electron storage ring.
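The kinematics behind such a Compton source can be illustrated with the standard back-scattering formula for the peak photon energy. This is a generic estimate, not a number from the paper; the beam and laser energies used below are purely illustrative.

```python
def compton_max_energy_ev(e_beam_gev, e_laser_ev):
    """Peak (head-on, back-scattered) Compton photon energy in eV.

    Standard inverse-Compton kinematics:
        E_max = 4 * g**2 * E_L / (1 + 4 * g * E_L / me),
    where g is the electron Lorentz factor and me = 0.511 MeV.
    """
    me_ev = 0.511e6
    g = e_beam_gev * 1e9 / me_ev
    x = 4 * g * e_laser_ev / me_ev   # electron recoil parameter
    return 4 * g**2 * e_laser_ev / (1 + x)
```

For a 4 GeV beam scattering a CO2 laser photon (about 0.117 eV), this gives photons in the few-tens-of-MeV range, consistent with the gamma energies such schemes target for pair production.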
International Nuclear Information System (INIS)
Carvalho Campos, J.S. de.
1984-01-01
The design and construction of a Compton current detector with cylindrical geometry, using Teflon as the dielectric material, for electromagnetic radiation in the energy range between 10 keV and 2 MeV, are described. The Compton current in Teflon was measured using an electrometer. The Compton current was produced by photon fluxes from X-ray sources (MG 150 Muller device) and from the gamma rays of 60 Co. The theory developed to explain the experimental results is presented. Calibration curves of the accumulated charge and the detector current as a function of exposure rate were obtained. (M.C.K.) [pt
Rapid probabilistic source inversion using pattern recognition
Käufl, Paul J.
2015-01-01
Numerous problems in the field of seismology require the determination of parameters of a physical model that are compatible with a set of observations and prior assumptions. This type of problem is generally termed an inverse problem. While, in many cases, we are able to predict observations, given a
International Nuclear Information System (INIS)
Antoniassi, M.; Conceição, A.L.C.; Poletti, M.E.
2012-01-01
In this work we measured X-ray scatter spectra from normal and neoplastic breast tissues using a photon energy of 17.44 keV and a scattering angle of 90°, in order to study the shape (FWHM) of the Compton peaks. The obtained results for FWHM were discussed in terms of the composition and histological characteristics of each tissue type. The statistical analysis shows that the distribution of FWHM of normal adipose breast tissue clearly differs from all other investigated tissues. Comparison between experimental values of FWHM and effective atomic number revealed a strong correlation between them, showing that the FWHM values can be used to provide information about the elemental composition of the tissues. - Highlights: ► X-ray scatter spectra from normal and neoplastic breast tissues were measured. ► The shape (FWHM) of the Compton peak was related to the elemental composition and characteristics of each tissue type. ► A statistical hypothesis test showed clear differences between normal and neoplastic breast tissues. ► There is a strong correlation between experimental values of FWHM and effective atomic number. ► The shape (FWHM) of the Compton peak can be used to provide information about the elemental composition of the tissues.
The Earthquake‐Source Inversion Validation (SIV) Project
Mai, Paul Martin
2016-04-27
Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, multiple source models for the same earthquake, obtained by different research teams, often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversion methods and to understand strengths and weaknesses of the various approaches used, the Source Inversion Validation (SIV) project conducts a set of forward-modeling exercises and inversion benchmarks. In this article, we describe the SIV strategy, the initial benchmarks, and current SIV results. Furthermore, we apply statistical tools for quantitative waveform comparison and for investigating source-model (dis)similarities that enable us to rank the solutions, and to identify particularly promising source inversion approaches. All SIV exercises (with related data and descriptions) and statistical comparison tools are available via an online collaboration platform, and we encourage source modelers to use the SIV benchmarks for developing and testing new methods. We envision that the SIV efforts will lead to new developments for tackling the earthquake-source imaging problem.
Lee, S. H.; Yang, B. X.; Collins, J. T.; Ramanathan, M.
2017-02-01
Accurate and stable x-ray beam position monitors (XBPMs) are key elements in obtaining the desired user beam stability in the Advanced Photon Source Upgrade. In the next-generation XBPMs for the canted-undulator front ends, where two undulator beams are separated by 1.0 mrad, the lower beam power […]. To predict changes through the interface via thermal simulations, the thermal contact resistance (TCR) of thermal interface materials (TIMs) at an interface between two solid materials under even contact pressure must be known. This paper addresses the TCR measurements of several TIMs, including gold, silver, pyrolytic graphite sheet, and 3D graphene foam. In addition, a prototype of a Compton-scattering XBPM with diamond blades was installed at APS Beamline 24-ID-A in May 2015 and has been tested. This paper presents the design of the Compton-scattering XBPM, and compares thermal simulation results obtained for the diamond blade of this XBPM by the finite element method with in situ empirical measurements obtained using reliable infrared technology.
Acoustic source inversion to estimate volume flux from volcanic explosions
Kim, Keehoon; Fee, David; Yokoo, Akihiko; Lees, Jonathan M.
2015-07-01
We present an acoustic waveform inversion technique for infrasound data to estimate volume fluxes from volcanic eruptions. Previous inversion techniques have been limited by the use of a 1-D Green's function in a free space or half space, which depends only on the source-receiver distance and neglects volcanic topography. Our method exploits full 3-D Green's functions computed by a numerical method that takes into account realistic topographic scattering. We apply this method to vulcanian eruptions at Sakurajima Volcano, Japan. Our inversion results produce excellent waveform fits to field observations and demonstrate that full 3-D Green's functions are necessary for accurate volume flux inversion. Conventional inversions without consideration of topographic propagation effects may lead to large errors in the source parameter estimate. The presented inversion technique will substantially improve the accuracy of eruption source parameter estimation (cf. mass eruption rate) during volcanic eruptions and provide critical constraints for volcanic eruption dynamics and ash dispersal forecasting for aviation safety. Application of this approach to chemical and nuclear explosions will also provide valuable source information (e.g., the amount of energy released) previously unavailable.
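At its core, this kind of acoustic inversion is a deconvolution of the recorded pressure by a Green's function. A minimal single-receiver, water-level-regularised sketch follows (the actual method inverts full 3-D numerically computed Green's functions over many stations; all names and the regularisation choice here are illustrative):

```python
import numpy as np

def volume_flux(p, g, eps=1e-3):
    """Water-level frequency-domain deconvolution sketch.

    Recovers the source time series q(t) from pressure p(t) = (g * q)(t),
    where g is the (numerically computed, topography-aware) Green's
    function. eps sets the water level relative to the peak of |G|^2.
    """
    n = len(p)
    P, G = np.fft.rfft(p, n), np.fft.rfft(g, n)
    wl = eps * np.max(np.abs(G)) ** 2
    Q = P * np.conj(G) / np.maximum(np.abs(G) ** 2, wl)
    return np.fft.irfft(Q, n)
```

With a poor Green's function (e.g. a free-space one that ignores topography), the same deconvolution maps propagation errors directly into the recovered flux, which is the abstract's argument for 3-D Green's functions.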
Ai, Shunke; Gao, He
2018-01-01
The recent observations of GW170817 and its electromagnetic (EM) counterparts show that double neutron star mergers could lead to rich and bright EM emissions. Recent numerical simulations suggest that neutron star and neutron star/black hole (NS–NS/BH) mergers would leave behind a central remnant surrounded by a mildly isotropic ejecta. The central remnant could launch a collimated jet and when the jet propagates through the ejecta, a mildly relativistic cocoon would be formed and the interaction between the cocoon and the ambient medium would accelerate electrons via external shock in a wide angle, so that the merger-nova photons (i.e., thermal emission from the ejecta) would be scattered into higher frequency via an inverse Compton (IC) process when they propagate through the cocoon shocked region. We find that the IC scattered component peaks at the X-ray band and it will reach its peak luminosity on the order of days (simultaneously with the merger-nova emission). With current X-ray detectors, such a late X-ray component could be detected out to 200 Mpc, depending on the merger remnant properties. It could serve as an important electromagnetic counterpart of gravitational-wave signals from NS–NS/BH mergers. Nevertheless, simultaneous detection of such a late X-ray signal and the merger-nova signal could shed light on the cocoon properties and the concrete structure of the jet.
Point sources and multipoles in inverse scattering theory
Potthast, Roland
2001-01-01
Over the last twenty years, the growing availability of computing power has had an enormous impact on the classical fields of direct and inverse scattering. The study of inverse scattering, in particular, has developed rapidly with the ability to perform computational simulations of scattering processes and has led to remarkable advances in a range of applications, from medical imaging and radar to remote sensing and seismic exploration. Point Sources and Multipoles in Inverse Scattering Theory provides a survey of recent developments in inverse acoustic and electromagnetic scattering theory. Focusing on methods developed over the last six years by Colton, Kirsch, and the author, this treatment uses point sources combined with several far-reaching techniques to obtain qualitative reconstruction methods. The author addresses questions of uniqueness, stability, and reconstructions for both two- and three-dimensional problems. With interest in extracting information about an object through scattered waves at an all-ti...
Fully probabilistic seismic source inversion – Part 1: Efficient parameterisation
Directory of Open Access Journals (Sweden)
S. C. Stähler
2014-11-01
Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters themselves but also estimates of their uncertainties are of great practical importance. Probabilistic source inversion (Bayesian inference) is well suited to this challenge, provided that the parameter space can be chosen small enough to make Bayesian sampling computationally feasible. We propose a framework for PRobabilistic Inference of Seismic source Mechanisms (PRISM) that parameterises and samples earthquake depth, moment tensor, and source time function efficiently by using information from previous non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible.
Strategies for source space limitation in tomographic inverse procedures
Energy Technology Data Exchange (ETDEWEB)
George, J.S.; Lewis, P.S.; Schlitt, H.A.; Kaplan, L.; Gorodnitsky, I.; Wood, C.C.
1994-02-01
The use of magnetic recordings for localization of neural activity requires the solution of an ill-posed inverse problem: i.e. the determination of the spatial configuration, orientation, and timecourse of the currents that give rise to a particular observed field distribution. In its general form, this inverse problem has no unique solution; due to superposition and the existence of silent source configurations, a particular magnetic field distribution at the head surface could be produced by any number of possible source configurations. However, by making assumptions concerning the number and properties of neural sources, it is possible to use numerical minimization techniques to determine the source model parameters that best account for the experimental observations while satisfying numerical or physical criteria. In this paper the authors describe progress on the development and validation of inverse procedures that produce distributed estimates of neuronal currents. The goal is to produce a temporal sequence of 3-D tomographic reconstructions of the spatial patterns of neural activation. Such approaches have a number of advantages, in principle. Because they do not require estimates of model order and parameter values (beyond specification of the source space), they minimize the influence of investigator decisions and are suitable for automated analyses. These techniques also allow localization of sources that are not point-like; experimental studies of cognitive processes and of spontaneous brain activity are likely to require distributed source models.
Strategies for source space limitation in tomographic inverse procedures
International Nuclear Information System (INIS)
George, J.S.; Lewis, P.S.; Schlitt, H.A.; Kaplan, L.; Gorodnitsky, I.; Wood, C.C.
1994-01-01
The use of magnetic recordings for localization of neural activity requires the solution of an ill-posed inverse problem: i.e. the determination of the spatial configuration, orientation, and timecourse of the currents that give rise to a particular observed field distribution. In its general form, this inverse problem has no unique solution; due to superposition and the existence of silent source configurations, a particular magnetic field distribution at the head surface could be produced by any number of possible source configurations. However, by making assumptions concerning the number and properties of neural sources, it is possible to use numerical minimization techniques to determine the source model parameters that best account for the experimental observations while satisfying numerical or physical criteria. In this paper the authors describe progress on the development and validation of inverse procedures that produce distributed estimates of neuronal currents. The goal is to produce a temporal sequence of 3-D tomographic reconstructions of the spatial patterns of neural activation. Such approaches have a number of advantages, in principle. Because they do not require estimates of model order and parameter values (beyond specification of the source space), they minimize the influence of investigator decisions and are suitable for automated analyses. These techniques also allow localization of sources that are not point-like; experimental studies of cognitive processes and of spontaneous brain activity are likely to require distributed source models
Compton backscattering axial spectrometer
International Nuclear Information System (INIS)
Rad'ko, V.E.; Mokrushin, A.D.; Razumovskaya, I.V.
1981-01-01
A Compton gamma backscattering axial spectrometer of new design, with an aperture 200 times larger than that of known spectrometers at equal angular resolution (at E=159 keV), is described. The collimator unit, radiation source, and gamma detector are located in the central part of the spectrometer. The investigated specimen (of cylindrical form) and the so-called ''black body'', used to absorb photons that have passed through the specimen, are placed in the peripheral part. Both parts share an imaginary symmetry axis, which is why the spectrometer is called axial. 57Co is used as the gamma source. The 122 keV spectral line, which corresponds to an 83 keV backscattered photon, serves as the working line. A germanium disk detector of 10 mm diameter and 4 mm height has an energy resolution not worse than 900 eV. Analysis of test measurements of the Compton profile of water, and their comparison with data obtained earlier, shows that only the finite detector resolution can essentially affect the form of the Compton profile. It is concluded that the suggested variant of the spectrometer would be useful for determining Compton profiles of chemical compounds of heavy elements [ru
A finite-difference contrast source inversion method
International Nuclear Information System (INIS)
Abubakar, A; Hu, W; Habashy, T M; Van den Berg, P M
2008-01-01
We present a contrast source inversion (CSI) algorithm using a finite-difference (FD) approach as its backbone for reconstructing the unknown material properties of inhomogeneous objects embedded in a known inhomogeneous background medium. Unlike the CSI method using the integral equation (IE) approach, the FD-CSI method can readily employ an arbitrary inhomogeneous medium as its background. The ability to use an inhomogeneous background medium makes this algorithm very suitable for through-wall imaging and time-lapse inversion applications. Similar to the IE-CSI algorithm, the unknown contrast sources and contrast function are updated alternately to reconstruct the unknown objects without requiring the solution of the full forward problem at each iteration step in the optimization process. The FD solver is formulated in the frequency domain and is equipped with a perfectly matched layer (PML) absorbing boundary condition. The FD operator used in the FD-CSI method depends only on the background medium and the frequency of operation, and thus does not change throughout the inversion process. Therefore, at least for two-dimensional (2D) configurations, where the size of the stiffness matrix is manageable, the FD stiffness matrix can be inverted using a non-iterative approach such as Gauss elimination for sparse matrices. In this case, an LU decomposition needs to be done only once and can then be reused for multiple source positions and in successive iterations of the inversion. Numerical experiments show that the FD-CSI algorithm performs excellently in inverting inhomogeneous objects embedded in an inhomogeneous background medium.
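The computational point above (the FD stiffness matrix depends only on the background, so one LU decomposition serves all source positions and iterations) can be sketched with a toy 1-D Helmholtz operator. This is illustrative only: the actual FD-CSI system, PML included, is far larger, and all sizes and constants below are assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy 1-D Helmholtz stiffness matrix for a fixed background medium:
# it does not change during the inversion, so it is factorized once.
n, h, k = 200, 1.0, 0.3
main = (-2.0 / h**2 + k**2) * np.ones(n)
off = (1.0 / h**2) * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1]).tocsc()

lu = spla.splu(A)   # single sparse LU decomposition (Gauss elimination)

# Reuse the factorization for many sources / inversion iterations.
sources = np.zeros((n, 3))
sources[[20, 100, 180], [0, 1, 2]] = 1.0
fields = np.column_stack([lu.solve(sources[:, i]) for i in range(3)])
```

Each `lu.solve` is cheap relative to the one-off factorization, which is what makes the reuse across sources and iterations pay off.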
The NuSTAR Extragalactic Surveys: Source Catalog and the Compton-thick Fraction in the UDS Field
Masini, A.; Civano, F.; Comastri, A.; Fornasini, F.; Ballantyne, D. R.; Lansbury, G. B.; Treister, E.; Alexander, D. M.; Boorman, P. G.; Brandt, W. N.; Farrah, D.; Gandhi, P.; Harrison, F. A.; Hickox, R. C.; Kocevski, D. D.; Lanz, L.; Marchesi, S.; Puccetti, S.; Ricci, C.; Saez, C.; Stern, D.; Zappacosta, L.
2018-03-01
We present the results and the source catalog of the NuSTAR survey in the UKIDSS Ultra Deep Survey (UDS) field, bridging the gap in depth and area between NuSTAR's ECDFS and COSMOS surveys. The survey covers a ∼0.6 deg^2 area of the field for a total observing time of ∼1.75 Ms, to a half-area depth of ∼155 ks corrected for vignetting at 3–24 keV, and reaching sensitivity limits at half-area in the full (3–24 keV), soft (3–8 keV), and hard (8–24 keV) bands of 2.2 × 10^-14 erg cm^-2 s^-1, 1.0 × 10^-14 erg cm^-2 s^-1, and 2.7 × 10^-14 erg cm^-2 s^-1, respectively. A total of 67 sources are detected in at least one of the three bands, 56 of which have a robust optical redshift with a median of ∼1.1. Through a broadband (0.5–24 keV) spectral analysis of the whole sample combined with the NuSTAR hardness ratios, we compute the observed Compton-thick (CT; N_H > 10^24 cm^-2) fraction. Taking into account the uncertainties on each N_H measurement, the final number of CT sources is 6.8 ± 1.2. This corresponds to an observed CT fraction of 11.5% ± 2.0%, providing a robust lower limit to the intrinsic fraction of CT active galactic nuclei and placing constraints on cosmic X-ray background synthesis models.
A Kinematic Source Inversion Scheme With New Parametrisation
Burjanek, J.
2007-12-01
We present a kinematic finite-extent source inversion scheme, introducing an innovative parametrization of the problem. In particular, we assume a spatial slip distribution composed of overlapping 2D Gaussian functions on a regular grid. The temporal evolution of slip is described by a prescribed slip velocity function, with free rupture velocity and rise time parameters. Fixing the values of rupture velocity and rise time for the whole fault makes the problem linear in static slip. The inversion algorithm works as follows. First we fix rise time and rupture velocity and calculate seismograms for each Gaussian function separately. Then a linear inversion is performed to get the optimal weights (amplitudes) of these Gaussian functions. This procedure is repeated for a number of rupture velocity and rise-time distributions, whose optimal values are obtained by the neighborhood algorithm. The L2 norm is used as an objective function. The linear inversion for static slip is done with a positivity constraint, so quadratic programming was applied to solve the problem. Our method benefits from the simplicity of linear problems and the favourable spectral properties of the Gaussian function; the latter avoid the need for artificial smoothing operators. Thus a direct insight into the spectral properties of the static slip distribution of real earthquakes is obtained. The method has been applied to the 2000 M6.6 Western Tottori, Japan, earthquake.
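The linear step, finding optimal non-negative weights for the Gaussian basis, reduces to a non-negative least-squares problem. A toy 1-D version follows (the Gaussians here stand in for the precomputed seismograms of each basis function; all sizes and the true weights are illustrative):

```python
import numpy as np
from scipy.optimize import nnls

# Toy version of the linear step: slip is a weighted sum of overlapping
# Gaussians on a grid, and the weights are found by least squares with a
# positivity constraint (the quadratic program reduces to NNLS here).
x = np.linspace(0, 10, 120)
centres = np.linspace(1, 9, 9)
G = np.exp(-((x[:, None] - centres[None, :]) / 1.2) ** 2)  # basis "seismograms"

true_w = np.array([0, 0.5, 1.2, 2.0, 1.5, 0.7, 0.2, 0, 0])
data = G @ true_w

w, resid = nnls(G, data)   # optimal non-negative weights
```

Because the Gaussian basis is smooth by construction, the recovered slip needs no extra smoothing operator, which is the point the abstract makes about its spectral properties.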
Fast Bayesian optimal experimental design for seismic source inversion
Long, Quan
2015-07-01
We develop a fast method for optimally designing experiments in the context of statistical seismic source inversion. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by elastodynamic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the "true" parameters, and we can employ Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several magnitudes, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem. © 2015 Elsevier B.V.
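A linear-Gaussian toy makes the scaling argument above concrete: for a linear forward model the Laplace approximation is exact, the Gauss-Newton Hessian of the misfit grows with the number of receivers, and the expected information gain is half the log-determinant of the posterior-to-prior precision ratio. Everything below (4 source parameters, unit prior, random sensitivities) is an illustrative assumption, not the paper's elastodynamic setup.

```python
import numpy as np

def expected_info_gain(n_receivers, sigma=0.1, prior_var=1.0):
    """Linear-Gaussian sketch: EIG = 0.5 * logdet(I + H_prior^-1 H_like).

    For a linear model d = G m + noise, the Gauss-Newton Hessian of the
    weighted L2 misfit is G^T G / sigma^2; adding receivers adds rows to
    G, so the Hessian (and the information gain) grows with them.
    """
    rng = np.random.default_rng(0)
    G = rng.standard_normal((n_receivers, 4))   # 4 source parameters (toy)
    H_like = G.T @ G / sigma**2
    H_prior = np.eye(4) / prior_var
    _, logdet = np.linalg.slogdet(np.eye(4) + np.linalg.solve(H_prior, H_like))
    return 0.5 * logdet
```

Comparing this criterion across candidate receiver layouts is the essence of the optimal-design loop; the Laplace shortcut avoids nested Monte Carlo sampling of the posterior.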
Fast Bayesian Optimal Experimental Design for Seismic Source Inversion
Long, Quan
2016-01-06
We develop a fast method for optimally designing experiments [1] in the context of statistical seismic source inversion [2]. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by the elastic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the true parameters, and we can employ Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several magnitudes, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem.
Energy Technology Data Exchange (ETDEWEB)
Botto, D.J.; Pratt, R.H.
1979-05-01
The current status of Compton scattering, both experimental observations and theoretical predictions, is examined. Classes of experiments are distinguished and the results obtained are summarized. The validity of the incoherent scattering function approximation and the impulse approximation is discussed. These simple theoretical approaches are compared with predictions of the nonrelativistic dipole formula of Gavrila and with the relativistic results of Whittingham. It is noted that the A^-2-based approximations fail to predict resonances and an infrared divergence, both of which have been observed. It appears that at present the various available theoretical approaches differ significantly in their predictions and that further and more systematic work is required.
Review on solving the inverse problem in EEG source analysis
Directory of Open Access Journals (Sweden)
Fabri Simon G
2008-11-01
In this primer, we give a review of the inverse problem for EEG source localization. This is intended for researchers new to the field, to give insight into the state-of-the-art techniques used to find approximate solutions for the brain sources giving rise to a scalp potential recording. Furthermore, a review of the performance results of the different techniques is provided to compare these different inverse solutions. The authors also include the results of a Monte-Carlo analysis which they performed to compare four non-parametric algorithms and hence contribute to what is presently recorded in the literature. An extensive list of references to the work of other researchers is also provided. This paper starts off with a mathematical description of the inverse problem and proceeds to discuss the two main categories of methods developed to solve the EEG inverse problem, namely the non-parametric and parametric methods. The main difference between the two is whether a fixed number of dipoles is assumed a priori or not. Various techniques falling within these categories are described, including minimum norm estimates and their generalizations, LORETA, sLORETA, VARETA, S-MAP, ST-MAP, Backus-Gilbert, LAURA, Shrinking LORETA FOCUSS (SLF), SSLOFO and ALF for non-parametric methods, and beamforming techniques, BESA, subspace techniques such as MUSIC and methods derived from it, FINES, simulated annealing and computational intelligence algorithms for parametric methods. From a review of the performance of these techniques as documented in the literature, one could conclude that in most cases the LORETA solution gives satisfactory results. In situations involving clusters of dipoles, higher-resolution algorithms such as MUSIC or FINES are however preferred. Imposing reliable biophysical and psychological constraints, as done by LAURA, has given superior results. The Monte-Carlo analysis performed, comparing WMN, LORETA, sLORETA and SLF
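Of the non-parametric methods listed, the minimum norm estimate is the easiest to write down. A generic sketch follows (the leadfield and regularisation here are assumptions; real EEG pipelines add noise whitening and depth weighting):

```python
import numpy as np

def minimum_norm_estimate(leadfield, d, lam=1e-2):
    """Classic MNE solution to the EEG inverse problem (sketch).

    leadfield: gain matrix (n_sensors x n_sources), d: scalp data vector.
    J = L^T (L L^T + lam*I)^-1 d is the smallest-norm current distribution
    reproducing the data in the regularised sense; lam trades data fit
    against solution norm.
    """
    L = leadfield
    n = L.shape[0]
    return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n), d)
```

Because there are far more candidate sources than sensors, infinitely many current distributions fit the data; the minimum-norm choice is exactly the kind of extra constraint the primer describes for resolving that non-uniqueness.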
Liu, Lu
2017-08-17
This paper presents a workflow for automatic near-surface velocity estimation using the early arrivals of seismic data. This workflow comprises two methods: source-domain full traveltime inversion (FTI) and early-arrival waveform inversion. Source-domain FTI is capable of automatically generating a background velocity that can kinematically match the reconstructed plane-wave sources of early arrivals with true plane-wave sources. This method does not require picking first arrivals for inversion, which is one of the most challenging aspects of ray-based first-arrival tomographic inversion. Moreover, compared with conventional Born-based methods, source-domain FTI can distinguish between slower or faster initial model errors by providing the correct sign of the model gradient. In addition, this method does not need estimation of the source wavelet, which is a requirement for receiver-domain wave-equation velocity inversion. The model derived from source-domain FTI is then used as input to early-arrival waveform inversion to obtain the short-wavelength velocity components. We have tested the workflow on synthetic and field seismic data sets. The results show that source-domain FTI can generate reasonable background velocities for early-arrival waveform inversion even when subsurface velocity reversals are present, and that the workflow can produce a high-resolution near-surface velocity model.
Inverse kinetics for subcritical systems with external neutron source
International Nuclear Information System (INIS)
Carvalho Gonçalves, Wemerson de; Martinez, Aquilino Senra; Carvalho da Silva, Fernando
2017-01-01
Highlights: • A formalism for reactivity calculation was developed. • The importance function is related to the system subcriticality. • The importance function is also related to the value of the external source. • The equations were analyzed for seven different levels of subcriticality. • The results are physically consistent with other formalisms discussed in the paper. - Abstract: Nuclear reactor reactivity is one of the most important properties, since it is directly related to reactor control during power operation. This reactivity is influenced by the neutron behavior in the reactor core. The time-dependent behavior of neutrons in response to any change in material composition is important for reactor operation safety. Transient changes may occur during reactor startup or shutdown and due to accidental disturbances of reactor operation. Therefore, it is very important to predict the time-dependent behavior of the neutron population induced by changes in neutron multiplication. Reactivity determination in subcritical systems driven by an external neutron source can be obtained through the solution of the inverse kinetics equation for subcritical nuclear reactors. The main purpose of this paper is to derive the inverse kinetics equations for subcritical systems, based on a previous paper published by the authors (Gonçalves et al., 2015) and by Gandini and Salvatores (2002) and Dulla et al. (2006), and to obtain their solutions. The formulations presented in this paper were tested for seven different values of k_eff, with an external neutron source constant in time and for a power ratio varying exponentially over time.
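A one-delayed-group toy version of inverse kinetics with an external source illustrates the idea (the paper's formalism uses importance weighting and six delayed groups and is considerably more careful; the constants below are typical but arbitrary):

```python
import numpy as np

def inverse_kinetics(t, n, S, beta=0.0065, lam=0.08, Lam=1e-4):
    """Inverse point kinetics with one delayed group and an external source.

    Integrates dC/dt = beta/Lam * n - lam*C for the precursors, then
    inverts dn/dt = (rho - beta)/Lam * n + lam*C + S for reactivity rho(t).
    Toy sketch only.
    """
    dt = t[1] - t[0]
    C = np.empty_like(n)
    C[0] = beta * n[0] / (lam * Lam)   # equilibrium precursors at t = 0
    for i in range(1, len(t)):         # simple implicit Euler update
        C[i] = (C[i - 1] + dt * beta / Lam * n[i]) / (1 + dt * lam)
    dndt = np.gradient(n, dt)
    return beta + Lam * (dndt - lam * C - S) / n
```

In a steady subcritical state this recovers rho = -Lam*S/n: the measured power level and the known source strength together fix the degree of subcriticality.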
Fully probabilistic earthquake source inversion on teleseismic scales
Stähler, Simon; Sigloch, Karin
2017-04-01
Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters but also estimates of their uncertainties are of great practical importance. We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing as prior knowledge solutions from a large number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source
Regional Moment Tensor Inversion for Source Type Identification
Dreger, D. S.; Ford, S. R.; Walter, W. R.
2008-12-01
With Green's functions from calibrated seismic velocity models it is possible to use regional-distance moment tensor inversion for source-type identification. The deviatoric and isotropic source components for 17 explosions at the Nevada Test Site, as well as 12 earthquakes and 3 collapses in the surrounding region of the western US, are calculated using a regional time-domain full waveform inversion for the complete moment tensor. The events separate into specific populations according to their deviation from a pure double-couple and the ratio of isotropic to deviatoric energy. The separation allows for anomalous event identification and discrimination between explosions, earthquakes, and collapses. Confidence regions of the model parameters are estimated from the data misfit by assuming normally distributed parameter values. We investigate the sensitivity of the resolved parameters of an explosion to imperfect Earth models, inaccurate event depths, and data with a low signal-to-noise ratio (SNR), assuming a reasonable azimuthal distribution of stations. In the band of interest (0.02-0.10 Hz), the source type calculated from complete moment tensor inversion is insensitive to velocity-model perturbations that cause less than a half-cycle shift. The explosion source type is insensitive to an incorrect depth assumption (for a true depth of 1 km), and the goodness-of-fit of the inversion result cannot be used to resolve the true depth of the explosion. Noise degrades the explosive character of the result; a good fit and accurate result are obtained when the SNR is greater than 5. We assess the depth and frequency dependence of the resolved explosive moment. As the depth decreases from 1 km to 200 m, the isotropic moment is no longer accurately resolved and is in error by 50-200%. However, even at the most shallow depth the resultant moment tensor is dominated by the explosive component when the data have a good SNR. Finally, the sensitivity
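The isotropic/deviatoric separation used above for source-type identification can be sketched as follows (a minimal illustration; the scalar measures chosen here are one of several conventions, not necessarily those of the paper):

```python
import numpy as np

def decompose_moment_tensor(M):
    """Split a 3x3 symmetric moment tensor into isotropic and deviatoric
    parts and return (M_iso, M_dev, isotropic fraction)."""
    tr = np.trace(M)
    M_iso = (tr / 3.0) * np.eye(3)
    M_dev = M - M_iso
    m_iso = abs(tr / 3.0)                    # scalar isotropic measure
    m_dev = np.max(np.abs(np.linalg.eigvalsh(M_dev)))
    return M_iso, M_dev, m_iso / (m_iso + m_dev)

# A pure explosion is fully isotropic; a double-couple is fully deviatoric.
_, _, f_explosion = decompose_moment_tensor(np.eye(3))
_, _, f_dc = decompose_moment_tensor(np.array([[0.0, 1.0, 0.0],
                                               [1.0, 0.0, 0.0],
                                               [0.0, 0.0, 0.0]]))
print(f_explosion, f_dc)  # 1.0 0.0
```

Events then plot between these two extremes, which is the basis of the population separation the abstract describes.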
Source rupture process inversion of the 2013 Lushan earthquake, China
Directory of Open Access Journals (Sweden)
Zhang Lifen
2013-05-01
Full Text Available The spatial and temporal slip distribution of the Lushan earthquake was estimated using teleseismic body-wave data. To perform a stable inversion, we applied smoothing constraints and determined their optimal relative weights on the observed data using an optimized Akaike's Bayesian Information Criterion (ABIC). The inversion generated the source parameters: strike, dip, and rake were 218°, 39°, and 100.8°, respectively. The seismic moment (M0) was 2.1 × 10^20 Nm, with a moment magnitude (Mw) of 6.8, and the source duration was approximately 30 seconds. The rupture propagated along the dip direction, and the maximum slip occurred at the hypocenter. The maximum slip was approximately 2.1 m, although this earthquake did not cause an apparent surface rupture. The energy was mainly released within 10 seconds. In addition, the Lushan earthquake was apparently related to the 2008 Wenchuan earthquake. However, the question of whether it was an aftershock of the Wenchuan earthquake requires further study.
Saha, Lab; Bhattacharjee, Pijushpani
2015-03-01
Origin of the TeV gamma-ray emission from MGRO J2019+37 discovered by the Milagro experiment is investigated within the pulsar wind nebula (PWN) scenario using multiwavelength information on sources suggested to be associated with this object. We find that the synchrotron self-Compton (SSC) mechanism of origin of the observed TeV gamma rays within the PWN scenario is severely constrained by the upper limit on the radio flux from the region around MGRO J2019+37 given by the Giant Metrewave Radio Telescope (GMRT), as well as by the X-ray flux upper limit from Swift/XRT. Specifically, for the SSC mechanism to explain the observed TeV flux from MGRO J2019+37 without violating the GMRT and/or Swift/XRT flux upper limits in the radio and X-ray regions, respectively, the emission region must be extremely compact, with its characteristic size restricted to ≲ O(10^-4 pc) for an assumed distance of ~ a few kpc to the source. This is at least four orders of magnitude less than the characteristic size of the emission region typically invoked in explaining the TeV emission through the SSC mechanism within the PWN scenario. On the other hand, inverse Compton (IC) scattering of the nebular high-energy electrons on the cosmic microwave background (CMB) photons can, for reasonable ranges of values of various parameters, explain the observed TeV flux without violating the GMRT and/or Swift/XRT flux bounds.
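The IC-on-CMB scenario lends itself to a back-of-envelope check. The sketch below uses the Thomson-regime approximation E_IC ≈ (4/3)γ²⟨E_CMB⟩, which neglects Klein-Nishina corrections that begin to matter at these energies; the numbers are generic illustrations, not the paper's model:

```python
import math

# Thomson-regime estimate: E_IC ≈ (4/3) γ² <E_CMB>.
E_CMB_eV = 6.34e-4   # mean CMB photon energy ≈ 2.70 k_B T at T = 2.725 K
MEC2_eV = 5.11e5     # electron rest energy

def ic_energy_eV(gamma):
    return (4.0 / 3.0) * gamma**2 * E_CMB_eV

# Lorentz factor needed to upscatter CMB photons to ~10 TeV:
gamma = math.sqrt(1.0e13 / ((4.0 / 3.0) * E_CMB_eV))
electron_TeV = gamma * MEC2_eV / 1.0e12
print(f"gamma ~ {gamma:.2e}, electron energy ~ {electron_TeV:.0f} TeV")
```

The required electrons are in the tens-of-TeV range, i.e. the nebular high-energy electron population the abstract invokes.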
Source localization in electromyography using the inverse potential problem
van den Doel, Kees; Ascher, Uri M.; Pai, Dinesh K.
2011-02-01
We describe an efficient method for reconstructing the activity in human muscles from an array of voltage sensors on the skin surface. MRI is used to obtain morphometric data which are segmented into muscle tissue, fat, bone and skin, from which a finite element model for volume conduction is constructed. The inverse problem of finding the current sources in the muscles is solved using a careful regularization technique which adds a priori information, yielding physically reasonable solutions from among those that satisfy the basic potential problem. Several regularization functionals are considered and numerical experiments on a 2D test model are performed to determine which performs best. The resulting scheme leads to numerical difficulties when applied to large-scale 3D problems. We clarify the nature of these difficulties and provide a method to overcome them, which is shown to perform well in the large-scale problem setting.
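Adding a priori information via a regularization functional, as described above, is often done with Tikhonov-style penalties. A minimal sketch (generic Tikhonov least squares on a toy random matrix, not the authors' finite element model):

```python
import numpy as np

def tikhonov_solve(A, b, L, lam):
    """Solve min ||A x - b||^2 + lam * ||L x||^2 via the normal equations.
    A: forward (volume-conduction) matrix, L: regularization operator."""
    lhs = A.T @ A + lam * (L.T @ L)
    return np.linalg.solve(lhs, A.T @ b)

# Tiny synthetic test: underdetermined A; the prior picks a physically
# "reasonable" (small-norm) solution among the many that fit the data.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 20))      # 5 surface sensors, 20 source unknowns
x_true = np.zeros(20)
x_true[7] = 1.0
b = A @ x_true
x = tikhonov_solve(A, b, np.eye(20), 1e-3)
print(np.linalg.norm(A @ x - b))      # small data residual
```

In the paper the functional L encodes anatomical prior information rather than the plain identity used here, and the large-scale 3D case needs the special treatment the abstract mentions.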
Energy Technology Data Exchange (ETDEWEB)
Antoniassi, M.; Conceicao, A.L.C. [Departamento de Fisica-Faculdade de Filosofia Ciencias e Letras de Ribeirao Preto, Universidade de Sao Paulo, Ribeirao Preto, 14040-901 Sao Paulo (Brazil); Poletti, M.E., E-mail: poletti@ffclrp.usp.br [Departamento de Fisica-Faculdade de Filosofia Ciencias e Letras de Ribeirao Preto, Universidade de Sao Paulo, Ribeirao Preto, 14040-901 Sao Paulo (Brazil)
2012-07-15
In this work we measured X-ray scatter spectra from normal and neoplastic breast tissues using a photon energy of 17.44 keV and a scattering angle of 90°, in order to study the shape (FWHM) of the Compton peaks. The obtained results for FWHM were discussed in terms of composition and histological characteristics of each tissue type. The statistical analysis shows that the distribution of FWHM of normal adipose breast tissue clearly differs from all other investigated tissues. Comparison between experimental values of FWHM and effective atomic number revealed a strong correlation between them, showing that the FWHM values can be used to provide information about elemental composition of the tissues. - Highlights: • X-ray scatter spectra from normal and neoplastic breast tissues were measured. • The shape (FWHM) of the Compton peak was related to the elemental composition and characteristics of each tissue type. • A statistical hypothesis test showed clear differences between normal and neoplastic breast tissues. • There is a strong correlation between experimental values of FWHM and effective atomic number. • The shape (FWHM) of the Compton peak can be used to provide information about elemental composition of the tissues.
Dynamic Source Inversion of Intermediate Depth Earthquakes in Mexico
Mirwald, Aron; Cruz-Atienza, Victor Manuel; Singh, Shri Krishna
2017-04-01
The source mechanisms of earthquakes at intermediate depth (50-300 km) are still under debate. Due to the high confining pressure at depths below 50 km, rocks ought to deform by ductile flow rather than brittle failure, which is the mechanism originating most earthquakes. Several source mechanisms have been proposed, but no conclusive evidence has been found for any of them. One of two viable mechanisms is dehydration embrittlement, where liberation of water lowers the effective pressure and enables brittle fracture. The other is thermal runaway, a highly localized ductile deformation (Prieto et al., Tecto., 2012). In the Mexican subduction zone, intermediate depth earthquakes represent a real hazard in central Mexico due to their proximity to highly populated areas and the large accelerations induced on ground motion (Iglesias et al., BSSA, 2002). To improve our understanding of these rupture processes, we use a recently introduced inversion method (Diaz-Mojica et al., JGR, 2014) to analyze several intermediate depth earthquakes in Mexico. The method inverts strong motion seismograms to determine the dynamic source parameters based on a genetic algorithm. It has been successfully used for the M6.5 Zumpango earthquake, which occurred at a depth of 62 km in the state of Guerrero, Mexico. For this event, high radiated energy, low radiation efficiency, and low rupture velocity were determined. This indicates a highly dissipative rupture process, suggesting that thermal runaway may be the dominant source process. In this work we improved the inversion method by introducing a theoretical consideration for the nucleation process that minimizes the effects of rupture initiation and guarantees self-sustained rupture propagation (Galis et al., GJInt., 2014). Preliminary results indicate that intermediate depth earthquakes in central Mexico may vary in their rupture process. For instance, for a M5.9 normal-faulting earthquake at 55 km depth that produced very
Connecting Compton and Gravitational Compton Scattering
Directory of Open Access Journals (Sweden)
Holstein Barry R.
2017-01-01
Full Text Available The study of Compton scattering—S + γ → S + γ—at MAMI and elsewhere has led to a relatively successful understanding of proton structure via its polarizabilities. The recent observation of gravitational radiation observed by LIGO has raised the need for a parallel understanding of gravitational Compton scattering—S + g → S + g—and we show here how it can be obtained from ordinary Compton scattering by use of the double copy theorem.
Four dimensional variational inversion of atmospheric chemical sources in WRFDA
Guerrette, J. J.
Atmospheric aerosols are known to affect health, weather, and climate, but their impacts on regional scales are uncertain due to heterogeneous source, transport, and transformation mechanisms. The Weather Research and Forecasting model with chemistry (WRF-Chem) can account for aerosol-meteorology feedbacks as it simultaneously integrates equations of dynamical and chemical processes. Here we develop and apply incremental four-dimensional variational (4D-Var) data assimilation (DA) capabilities in WRF-Chem to constrain chemical emissions (WRFDA-Chem). We develop adjoint (ADM) and tangent linear (TLM) model descriptions of boundary layer mixing, emission, aging, dry deposition, and advection of black carbon (BC) aerosol. ADM and TLM performance is verified against finite difference derivative approximations. A second-order checkpointing scheme is used to reduce memory costs and enable simulations longer than six hours. We apply WRFDA-Chem to constraining anthropogenic and biomass burning sources of BC throughout California during the 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) field campaign. Manual corrections to the prior emissions and subsequent inverse modeling reduce the spread in total emitted BC mass between two biomass burning inventories from a factor of 10 to only a factor of 2 across three days of measurements. We quantify posterior emission variance using an eigendecomposition of the cost function Hessian matrix. We also address the limited scalability of 4D-Var, which traditionally uses a sequential optimization algorithm (e.g., conjugate gradient) to approximate these Hessian eigenmodes. The Randomized Incremental Optimal Technique (RIOT) uses an ensemble of TLM and ADM instances to perform a Hessian singular value decomposition. While RIOT requires more ensemble members than Lanczos requires iterations to converge to a comparable posterior control vector, the wall-time of RIOT is roughly 10 times shorter since the
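Verifying ADM and TLM code against finite-difference derivative approximations, as mentioned above, usually combines a dot-product (adjoint) test with a finite-difference check of the tangent linear model. A toy-matrix sketch (illustrative only, standing in for one model step):

```python
import numpy as np

# Dot-product test: for a linear operator T (tangent linear model) and its
# adjoint T*, <T dx, dy> must equal <dx, T* dy> to machine precision.
rng = np.random.default_rng(1)
T = rng.standard_normal((6, 4))        # stand-in for the TLM of one step

dx = rng.standard_normal(4)
dy = rng.standard_normal(6)
lhs = np.dot(T @ dx, dy)
rhs = np.dot(dx, T.T @ dy)
print(abs(lhs - rhs))                  # ~1e-16: adjoint is consistent

# Finite-difference check of the TLM against the (here linear) forward model:
def forward(x):
    return T @ x

eps = 1e-6
x0 = rng.standard_normal(4)
fd = (forward(x0 + eps * dx) - forward(x0)) / eps
print(np.linalg.norm(fd - T @ dx))     # ~0 for a linear model
```

For a nonlinear forward model, the finite-difference error shrinks linearly in eps rather than vanishing, but the dot-product identity must still hold exactly.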
Direct and inverse source problems for a space fractional advection dispersion equation
Aldoghaither, Abeer
2016-05-15
In this paper, direct and inverse problems for a space fractional advection dispersion equation on a finite domain are studied. The inverse problem consists of determining the source term from final observations. We first derive the analytic solution to the direct problem, which we use to prove the uniqueness and the instability of the inverse source problem using final measurements. Finally, we illustrate the results with a numerical example.
A new optimization approach for source-encoding full-waveform inversion
Moghaddam, P.P.; Keers, H.; Herrmann, F.J.; Mulder, W.A.
2013-01-01
Waveform inversion is the method of choice for determining a highly heterogeneous subsurface structure. However, conventional waveform inversion requires that the wavefield for each source is computed separately. This makes it very expensive for realistic 3D seismic surveys. Source-encoding waveform
DEFF Research Database (Denmark)
Annuar, A.; Gandhi, P.; Alexander, D. M.
2015-01-01
We present two Nuclear Spectroscopic Telescope Array (NuSTAR) observations of the local Seyfert 2 active galactic nucleus (AGN) and an ultraluminous X-ray source (ULX) candidate in NGC 5643. Together with archival data from Chandra, XMM-Newton, and Swift-BAT, we perform a high-quality broadband ... spectral analysis of the AGN over two decades in energy (~0.5-100 keV). Previous X-ray observations suggested that the AGN is obscured by a Compton-thick (CT) column of obscuring gas along our line of sight. However, the lack of high-quality ≳10 keV observations, together ... X-ray luminosity variations in the 3-8 keV band from 2003 to 2014, our results further strengthen the ULX classification of NGC 5643 X-1.
Choi, Yun Seok
2011-09-01
Full waveform inversion requires a good estimation of the source wavelet to improve our chances of a successful inversion. This is especially true for an encoded multisource time-domain implementation, which, conventionally, requires separate-source modeling as well as the Fourier transform of wavefields. As an alternative, we have developed a source-independent time-domain waveform inversion using convolved wavefields. Specifically, the misfit function consists of the convolution of the observed wavefields with a reference trace from the modeled wavefield, plus the convolution of the modeled wavefields with a reference trace from the observed wavefield. In this case, the source wavelets of the observed and modeled wavefields are equally convolved with both terms in the misfit function, and thus, the effects of the source wavelets are eliminated. Furthermore, because the modeled wavefields play the role of low-pass filtering the observed wavefields in the misfit function, the frequency-selection strategy from low to high can be easily adopted just by setting the maximum frequency of the source wavelet of the modeled wavefields; thus, no filtering is required. The gradient of the misfit function is computed by back-propagating the new residual seismograms and applying the imaging condition, similar to reverse-time migration. In the synthetic data evaluations, our waveform inversion yields inverted models that are close to the true model, but demonstrates, as predicted, some limitations when random noise is added to the synthetic data. We also realized that an average of traces is a better choice for the reference trace than using a single trace. © 2011 Society of Exploration Geophysicists.
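The convolved-wavefield misfit has a simple algebraic core: convolution commutes, so an unknown wavelet entering both terms cancels. A minimal sketch (1D traces with hypothetical numbers; not the authors' implementation):

```python
import numpy as np

def convolved_residual(d_obs, d_mod, ref_obs, ref_mod):
    """Residual of the source-independent misfit: the source wavelet is
    convolved equally into both terms, so its effect cancels.
    d_*: data traces; ref_*: reference traces (e.g. an average trace)."""
    return (np.convolve(d_obs, ref_mod, mode="full")
            - np.convolve(d_mod, ref_obs, mode="full"))

# If modeled and observed data differ only by the (unknown) source wavelet,
# the residual vanishes: d_obs = w * g, d_mod = g (a Green's function).
g = np.array([0.0, 1.0, -0.5, 0.25, 0.0])
w = np.array([1.0, 0.3])                     # unknown wavelet
d_obs = np.convolve(g, w)
r = convolved_residual(d_obs, g, d_obs, g)   # each trace is its own reference
print(np.max(np.abs(r)))                     # 0.0 up to round-off
```

In practice the reference trace is one (or an average) of the recorded traces, and any mismatch between modeled and observed Green's functions produces a nonzero residual to back-propagate.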
Sound source reconstruction using inverse boundary element calculations
DEFF Research Database (Denmark)
Schuhmacher, Andreas; Hald, Jørgen; Rasmussen, Karsten Bo
2003-01-01
Whereas standard boundary element calculations focus on the forward problem of computing the radiated acoustic field from a vibrating structure, the aim in this work is to reverse the process, i.e., to determine vibration from acoustic field data. This inverse problem is brought on a form suited ...
Sound source reconstruction using inverse boundary element calculations
DEFF Research Database (Denmark)
Schuhmacher, Andreas; Hald, Jørgen; Rasmussen, Karsten Bo
2001-01-01
Whereas standard boundary element calculations focus on the forward problem of computing the radiated acoustic field from a vibrating structure, the aim of the present work is to reverse the process, i.e., to determine vibration from acoustic field data. This inverse problem is brought on a form ...
Kitaki, Takaaki; Mineshige, Shin; Ohsuga, Ken; Kawashima, Tomohisa
2017-12-01
X-ray continuum spectra of super-Eddington accretion flow are studied by means of Monte Carlo radiative transfer simulations based on radiation hydrodynamic simulation data, in which both thermal and bulk Compton scattering are taken into account. We compare the calculated spectra of accretion flow around black holes with masses of MBH = 10, 10^2, 10^3, and 10^4 M⊙ for a fixed mass injection rate (from the computational boundary at 10^3 rs) of 10^3 LEdd/c^2 (with rs, LEdd, and c being the Schwarzschild radius, the Eddington luminosity, and the speed of light, respectively). The soft X-ray spectra exhibit mass dependence in accordance with the standard-disk relation; the maximum surface temperature scales as T ∝ MBH^(-1/4). The spectra in the hard X-ray band, by contrast, look quite similar among different models if we normalize the radiation luminosity by MBH. This reflects that the hard component is created by thermal and bulk Compton scattering of soft photons originating from an accretion flow in the overheated and/or funnel regions, the temperatures of which have no dependence on mass. The hard X-ray spectra can be reproduced by a Wien spectrum with a temperature of T ~ 3 keV accompanied by a hard excess at photon energies above several keV. The excess spectrum can be fitted well with a power law with a photon index of Γ ~ 3. This feature is in good agreement with that of recent NuSTAR observations of ULXs (ultra-luminous X-ray sources).
Rapid kinematic finite source inversion for Tsunami Early Warning using high-rate GNSS data
Chen, K.; Liu, Z.; Song, Y. T.
2017-12-01
Recently, Global Navigation Satellite System (GNSS) data have been used for rapid earthquake source inversion towards tsunami early warning. In practice, two approaches, i.e., static finite source inversion based on permanent co-seismic offsets and kinematic finite source inversion using high-rate (>= 1 Hz) co-seismic displacement waveforms, are often employed to fulfill the task. The static inversion is relatively easy to implement and does not require additional constraints on rupture velocity, duration, and temporal variation. However, since most GNSS receivers are deployed onshore on one side of the subduction fault, static finite source inversion using GNSS has very limited resolution on near-trench fault slip. On the other hand, the high-rate GNSS displacement waveforms, which contain the timing information of earthquake rupture explicitly and static offsets implicitly, have the potential to improve near-trench resolution by reconciling with the depth-dependent megathrust rupture behaviors. In this contribution, we assess the performance of rapid kinematic finite source inversion using high-rate GNSS for three selected historical tsunamigenic cases: the 2010 Mentawai, 2011 Tohoku, and 2015 Illapel events. The 2010 Mentawai case is a typical tsunami earthquake, with most slip concentrated near the trench. The static inversion has little resolution there and incorrectly puts slip at greater depth (>10 km). In contrast, although the recorded GNSS displacement waveforms are deficient in high-frequency energy, the kinematic source inversion recovers a shallow slip patch (depth less than 6 km) and tsunami runups are predicted quite reasonably. For the other two events, slip from the kinematic and static inversions shows similar characteristics and comparable tsunami scenarios, which may be related to the dense GNSS networks and the behavior of the rupture. Acknowledging the complexity of kinematic source inversion in real-time, we adopt the back
Application of the unwrapped phase inversion to land data without source estimation
Choi, Yun Seok
2015-08-19
Unwrapped phase inversion with strong damping was developed to solve the phase wrapping problem in frequency-domain waveform inversion. In this study, we apply the unwrapped phase inversion to band-limited real land data, for which the available minimum frequency is quite high. An important issue with these data is a strong ambiguity in the source-ignition time (or source shift) seen in a seismogram. A source-estimation approach does not fully address the issue of source shift, since the velocity model and the source wavelet are updated simultaneously and interact with each other. We suggest a source-independent unwrapped phase inversion approach instead of relying on source estimation for these land data. In the source-independent approach, the phase of the modeled data converges not to the exact phase value of the observed data but to the relative phase value (or the trend of phases); thus it has the potential to resolve the ambiguity of source-ignition time in a seismogram and work better than the source-estimation approach. Numerical examples demonstrate the validity of the source-independent unwrapped phase inversion, especially for land field data having an ambiguity in the source-ignition time.
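The phase-wrapping problem that motivates this approach is easy to demonstrate: the phase of a time-shifted trace is linear in frequency, but the principal value wraps into (-π, π], hiding the trend. A minimal sketch (synthetic numbers; illustrative only):

```python
import numpy as np

# Phase of a delayed trace grows linearly with frequency; the principal
# value wraps into (-pi, pi], hiding that trend. Unwrapping restores it.
n, dt, delay = 512, 0.004, 0.060            # samples, sample interval, shift
freqs = np.fft.rfftfreq(n, dt)
true_phase = -2.0 * np.pi * freqs * delay   # linear in frequency
wrapped = np.angle(np.exp(1j * true_phase)) # what the spectrum gives us
unwrapped = np.unwrap(wrapped)
print(np.max(np.abs(unwrapped - true_phase)))  # ~0: trend recovered
```

Unwrapping succeeds here because adjacent frequency samples differ by less than π; with band-limited data missing low frequencies, that condition fails, which is why the strong damping and the relative-phase (source-independent) formulation are needed.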
Compton profile of calcium fluoride
International Nuclear Information System (INIS)
Vijayakumar, R.; Rajasekaran, L.; Ramamurthy, N.; Shivaramu
2003-01-01
Full text: The Compton profile of polycrystalline calcium fluoride is measured using 661.6 keV γ-radiation from a 137Cs source. The experimental data are compared with an HF-LCAO model calculation computed using the CRYSTAL98 program, with Hartree-Fock free-atom theoretical values, and with the other available experimental data. Experimental results are found to be in good agreement with the HF-LCAO model calculations and in qualitative agreement with the Hartree-Fock free-atom theoretical values.
On the source inversion of fugitive surface layer releases. Part II. Complex sources
Sanfélix, V.; Escrig, A.; López-Lilao, A.; Celades, I.; Monfort, E.
2017-06-01
The experimental measurement of fugitive emissions of particulate matter entails inherent complexity because they are usually discontinuous, of short duration, may be mobile, and are affected by weather conditions. Owing to this complexity, instead of experimental measurements, emission factors are used to inventory such emissions. Unfortunately, emission factor datasets are still very limited at present and are insufficient to identify problematic operations and appropriately select control measures. To extend these datasets, a source inversion methodology (described in Part I of this work) was applied to field campaigns in which operation-specific fugitive particulate matter emission factors were determined for several complex fugitive sources, some of which were mobile. Mobile sources were treated as a superposition of instantaneous sources. The experimental campaigns were conducted at ports (bulk solids terminals), aggregate quarries, and cement factories, encompassing powder handling operations and vehicle circulation on paved and unpaved roads. Emission factors were derived for the operations and materials involved in these scenarios and compared with those available in the emission factor compilations. Significant differences were observed between the emission factors obtained in the studied handling operations. These differences call into question the use of generic emission factors and highlight the need for more detailed studies in this field.
Source reconstruction accuracy of MEG and EEG Bayesian inversion approaches.
Directory of Open Access Journals (Sweden)
Paolo Belardinelli
Full Text Available Electro- and magnetoencephalography allow for non-invasive investigation of human brain activation and corresponding networks with high temporal resolution. Still, no correct network detection is possible without reliable source localization. In this paper, we examine four different source localization schemes under a common Variational Bayesian framework. A Bayesian approach to the Minimum Norm Model (MNM), an Empirical Bayesian Beamformer (EBB), and two iterative Bayesian schemes, Automatic Relevance Determination (ARD) and Greedy Search (GS), are quantitatively compared. While EBB and MNM each use a single empirical prior, ARD and GS employ a library of anatomical priors that define possible source configurations. The localization performance was investigated as a function of (i) the number of sources (one vs. two vs. three), (ii) the signal-to-noise ratio (SNR; 5 levels), and (iii) the temporal correlation of source time courses (for the cases of two or three sources). We also tested whether the use of additional bilateral priors specifying source covariance for the ARD and GS algorithms improved performance. Our results show that MNM proves effective only with single-source configurations. EBB shows a spatial accuracy of a few millimeters with high SNRs and low correlation between sources. In contrast, ARD and GS are more robust to noise and less affected by temporal correlations between sources. However, the spatial accuracy of ARD and GS is generally limited to the order of one centimeter. We found that the use of correlated covariance priors made no difference to ARD/GS performance.
Testing special relativity theory using Compton scattering
International Nuclear Information System (INIS)
Contreras S, H.; Hernandez A, L.; Baltazar R, A.; Escareno J, E.; Mares E, C. A.; Hernandez V, C.; Vega C, H. R.
2010-10-01
The validity of the special relativity theory has been tested using Compton scattering. Since 1905, several experiments have been carried out to show that time, mass, and length change with velocity; in this work, Compton scattering has been utilized as a simple way to demonstrate the validity of relativity. The work was carried out through Monte Carlo calculations and experiments with different gamma-ray sources and a gamma-ray spectrometer with a 3 x 3 NaI(Tl) detector. The pulse-height spectra were collected and the Compton edge was observed. This information was utilized to determine the relationship between the electron's mass and energy using the position of the Compton edge; the obtained results were contrasted with two models of the collision between photon and electron, one built using classical physics and another using the special relativity theory. It was found that the calculated and experimental results fit the collision model based on special relativity. (Author)
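The Compton edge observed in such spectra follows directly from relativistic scattering kinematics. A minimal sketch (standard textbook formula, not the authors' Monte Carlo):

```python
# Compton edge: maximum energy transferred to the electron in a single
# scatter, reached when the photon backscatters (theta = 180 degrees).
MEC2 = 511.0  # electron rest energy, keV

def compton_edge_keV(e_gamma_keV):
    return 2.0 * e_gamma_keV**2 / (MEC2 + 2.0 * e_gamma_keV)

# 137Cs, 661.6 keV line -> edge near 477 keV, as seen in NaI(Tl) spectra
print(round(compton_edge_keV(661.6), 1))  # -> 477.3
```

Measuring this edge position for several gamma-ray lines is exactly what constrains the electron mass-energy relationship and discriminates between the classical and relativistic collision models.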
Linearized versus non-linear inverse methods for seismic localization of underground sources
DEFF Research Database (Denmark)
Oh, Geok Lian; Jacobsen, Finn
2013-01-01
In this paper, the accuracy and performance of a linear beamformer and nonlinear inverse methods to localize an underground seismic source are checked and compared using computer-generated synthetic experimental data. The two approaches considered are the linearized beamformer method and the Bayes nonlinear inversion method. The travel times used in the beamformer are derived from solving the Eikonal equation. In the linearized inversion method, we assume that the elastic waves are predominantly acoustic waves, and the acoustic approximation is applied. For the nonlinear inverse method, we apply the Bayesian framework, where the misfit function is the posterior probability distribution of the model space. The model parameters are the location of the seismic source that we are interested in estimating. The forward problem solver applied for the nonlinear inverse method is a Finite Difference elastic wave-field numerical method. © 2013 Acoustical Society of America.
A New Wave Equation Based Source Location Method with Full-waveform Inversion
Wu, Zedong
2017-05-26
Locating the source of a passively recorded seismic event is still a challenging problem, especially when the velocity is unknown. Many imaging approaches to focus the image do not address the velocity issue and result in images plagued with illumination artifacts. We develop a waveform inversion approach with an additional penalty term in the objective function to reward the focusing of the source image. This penalty term is relaxed in the early stages to allow for data fitting and to avoid cycle skipping, using an extended source. At the later stages, the focusing of the image dominates the inversion, allowing for high-resolution source and velocity inversion. We also compute the source location explicitly, and numerical tests show that we obtain good estimates of the source locations with this approach.
Upper bound of errors in solving the inverse problem of identifying a voice source
Leonov, A. S.; Sorokin, V. N.
2017-09-01
The paper considers the inverse problem of finding the shape of a voice-source pulse from a specified segment of a speech signal using a special mathematical model that relates these quantities. A variational method for solving the formulated inverse problem for two new parametric classes of sources is proposed: a piecewise-linear source and an A-source. The error in the obtained approximate solutions of the inverse problem is considered, and a technique to numerically estimate this error is proposed, which is based on the theory of a posteriori estimates of the accuracy in solving ill-posed problems. A computer study of the adequacy of the proposed models of sources, and a study of the a posteriori estimates of the accuracy in solving inverse problems for such sources were performed using various types of voice signals. Numerical experiments for speech signals showed satisfactory properties of such a posteriori estimates, which represent the upper bounds of possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes for the investigated speech material is on average 7%. It is noted that the a posteriori accuracy estimates can be used as a criterion for the quality of determining the voice-source pulse shape in the speaker-identification problem.
An inverse source location algorithm for radiation portal monitor applications
International Nuclear Information System (INIS)
Miller, Karen A.; Charlton, William S.
2010-01-01
Radiation portal monitors are being deployed at border crossings throughout the world to prevent the smuggling of nuclear and radiological materials; however, a tension exists between security and the free-flow of commerce. Delays at ports-of-entry have major economic implications, so it is imperative to minimize portal monitor screening time. We have developed an algorithm to locate a radioactive source using a distributed array of detectors, specifically for use at border crossings. To locate the source, we formulated an optimization problem where the objective function describes the least-squares difference between the actual and predicted detector measurements. The predicted measurements are calculated by solving the 3-D deterministic neutron transport equation given an estimated source position. The source position is updated using the steepest descent method, where the gradient of the objective function with respect to the source position is calculated using adjoint transport calculations. If the objective function is smaller than the convergence criterion, then the source position has been identified. This paper presents the derivation of the underlying equations in the algorithm as well as several computational test cases used to characterize its accuracy.
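The steepest-descent search described above can be sketched in a toy 2-D setting. The real algorithm obtains predicted counts by solving the 3-D deterministic transport equation and gradients from adjoint transport runs; here a hypothetical inverse-square point-kernel forward model and a finite-difference gradient stand in for both, so this is only an illustration of the update loop, not the paper's solver. The detector layout and source strength are invented for the example.

```python
import numpy as np

# Hypothetical portal-array detector positions (not from the paper).
DETECTORS = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
STRENGTH = 1000.0  # assumed source strength: counts at unit distance

def predict(src):
    """Toy point-kernel stand-in for the transport solve: counts ~ S / r^2."""
    r2 = np.sum((DETECTORS - src) ** 2, axis=1)
    return STRENGTH / np.maximum(r2, 1e-9)

def objective(src, measured):
    """Least-squares difference between predicted and measured counts."""
    return 0.5 * np.sum((predict(src) - measured) ** 2)

def locate(measured, guess, iters=500):
    """Steepest descent with backtracking; the gradient is taken by central
    differences here (the paper uses adjoint transport calculations)."""
    src = np.asarray(guess, dtype=float)
    h = 1e-6
    for _ in range(iters):
        g = np.array([
            (objective(src + [h, 0], measured) - objective(src - [h, 0], measured)) / (2 * h),
            (objective(src + [0, h], measured) - objective(src - [0, h], measured)) / (2 * h),
        ])
        step, f0 = 1.0, objective(src, measured)
        # halve the step until the objective decreases
        while objective(src - step * g, measured) >= f0 and step > 1e-12:
            step *= 0.5
        src = src - step * g
    return src
```

Iterating until the objective falls below a convergence criterion, as in the abstract, recovers the hidden source position from the detector counts alone.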
Inverse random source scattering for the Helmholtz equation in inhomogeneous media
Li, Ming; Chen, Chuchu; Li, Peijun
2018-01-01
This paper is concerned with an inverse random source scattering problem in an inhomogeneous background medium. The wave propagation is modeled by the stochastic Helmholtz equation with the source driven by additive white noise. The goal is to reconstruct the statistical properties of the random source such as the mean and variance from the boundary measurement of the radiated random wave field at multiple frequencies. Both the direct and inverse problems are considered. We show that the direct problem has a unique mild solution by a constructive proof. For the inverse problem, we derive Fredholm integral equations, which connect the boundary measurement of the radiated wave field with the unknown source function. A regularized block Kaczmarz method is developed to solve the ill-posed integral equations. Numerical experiments are included to demonstrate the effectiveness of the proposed method.
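The row-action solver at the core of the reconstruction can be illustrated with a plain cyclic Kaczmarz sweep on a small linear system. The paper's method is a regularized block variant applied to discretized Fredholm integral equations; the sketch below is the unblocked textbook iteration, with the relaxation parameter `relax` standing in for the stabilizing role of the regularization (an assumption, not the authors' exact scheme).

```python
import numpy as np

def kaczmarz(A, b, sweeps=200, relax=1.0, x0=None):
    """Cyclic Kaczmarz: project the iterate onto each row's hyperplane
    a_i . x = b_i in turn; relax < 1 damps the updates for noisy data."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(sweeps):
        for i in range(m):
            a = A[i]
            x += relax * (b[i] - a @ x) / (a @ a) * a
    return x

# Consistent toy system: the iterates converge to the exact solution.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
x_true = np.array([1.0, 2.0])
x_hat = kaczmarz(A, A @ x_true)
```

For ill-posed problems like the integral equations in the paper, early stopping of the sweeps acts as an additional regularizer.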
International Nuclear Information System (INIS)
Aghasi, Alireza; Mendoza-Sanchez, Itza; Miller, Eric L; Ramsburg, C Andrew; Abriola, Linda M
2013-01-01
This paper presents a new joint inversion approach to shape-based inverse problems. Given two sets of data from distinct physical models, the main objective is to obtain a unified characterization of inclusions within the spatial domain of the physical properties to be reconstructed. Although our proposed method generally applies to many types of inverse problems, the main motivation here is to characterize subsurface contaminant source zones by processing down-gradient hydrological data and cross-gradient electrical resistance tomography observations. Inspired by Newton's method for multi-objective optimization, we present an iterative inversion scheme in which descent steps are chosen to simultaneously reduce both data-model misfit terms. Such an approach, however, requires solving a non-smooth convex problem at every iteration, which is computationally expensive for a pixel-based inversion over the whole domain. Instead, we employ a parametric level set technique that substantially reduces the number of underlying parameters, making the inversion computationally tractable. The performance of the technique is examined and discussed through the reconstruction of source zone architectures that are representative of dense non-aqueous phase liquid (DNAPL) contaminant release in a statistically homogenous sandy aquifer. In these examples, the geometric configuration of the DNAPL mass is considered along with additional information about its spatial variability within the contaminated zone, such as the identification of low and high saturation regions. Comparison of the reconstructions with the true DNAPL architectures highlights the superior performance of the model-based technique and joint inversion scheme. (paper)
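The dimensionality reduction at the heart of the parametric level set idea can be sketched as follows: the inclusion is the region where a smooth function, built from a handful of weighted bumps, exceeds a level, so only the bump weights and centers are inverted for instead of every pixel. The Gaussian basis and the level value below are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def phi(points, weights, centers, width=1.0):
    """Parametric level-set function: a weighted sum of Gaussian bumps."""
    d2 = np.sum((points[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    return (weights * np.exp(-d2 / (2.0 * width ** 2))).sum(axis=1)

def inclusion(points, weights, centers, level=0.5, width=1.0):
    """The reconstructed shape: the set where phi exceeds the level."""
    return phi(points, weights, centers, width) > level

# Two bumps describe a two-lobed source zone with just six numbers
# (2 weights + 2 centers), instead of one unknown per grid cell.
centers = np.array([[0.0, 0.0], [4.0, 0.0]])
weights = np.array([1.0, 1.0])
pts = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 0.0], [10.0, 10.0]])
mask = inclusion(pts, weights, centers)
```

In the joint inversion, descent steps would update `weights` and `centers` so that the implied shape fits both data sets simultaneously.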
On rational approximation methods for inverse source problems
Rundell, William
2011-02-01
The basis of most imaging methods is to detect hidden obstacles or inclusions within a body when one can only make measurements on an exterior surface. Such is the ubiquity of these problems that the underlying model can lead to a partial differential equation of any of the major types, but here we focus on the case of steady-state electrostatic or thermal imaging and consider boundary value problems for Laplace's equation. Our inclusions are interior forces with compact support and our data consists of a single measurement of (say) voltage/current or temperature/heat flux on the external boundary. We propose an algorithm that under certain assumptions allows for the determination of the support set of these forces by solving a simpler "equivalent point source" problem, and which uses a Newton scheme to improve the corresponding initial approximation. © 2011 American Institute of Mathematical Sciences.
Wave-equation migration velocity inversion using passive seismic sources
Witten, B.; Shragge, J. C.
2015-12-01
Seismic monitoring at injection sites (e.g., CO2 sequestration, waste water disposal, hydraulic fracturing) has become an increasingly important tool for hazard identification and avoidance. The information obtained from this data is often limited to seismic event properties (e.g., location, approximate time, moment tensor), the accuracy of which greatly depends on the estimated elastic velocity models. However, creating accurate velocity models from passive array data remains a challenging problem. Common techniques rely on picking arrivals or matching waveforms, requiring high signal-to-noise data that is often not available for the low-magnitude earthquakes observed over injection sites. We present a new method for obtaining elastic velocity information from earthquakes through full-wavefield wave-equation imaging and adjoint-state tomography. The technique exploits the fact that the P- and S-wave arrivals originate at the same time and location in the subsurface. We generate image volumes by back-propagating P- and S-wave data through initial Earth models and then applying a correlation-based extended-imaging condition. Energy focusing away from zero lag in the extended image volume is used as a (penalized) residual in an adjoint-state tomography scheme to update the P- and S-wave velocity models. We use an acousto-elastic approximation to greatly reduce the computational cost. Because the method requires neither an initial source location or origin time estimate nor picking of arrivals, it is suitable for low signal-to-noise datasets, such as microseismic data. Synthetic results show that with a realistic distribution of microseismic sources, P- and S-velocity perturbations can be recovered. Although demonstrated at an oil and gas reservoir scale, the technique can be applied to problems of all scales from geologic core samples to global seismology.
Choi, Yun Seok
2012-05-02
Conventional multi-source waveform inversion using an objective function based on the least-squares misfit cannot be applied to marine streamer acquisition data because of inconsistent acquisition geometries between observed and modelled data. To apply multi-source waveform inversion to marine streamer data, we use the global correlation between observed and modelled data as an alternative objective function. The new residual seismogram derived from the global correlation norm attenuates modelled data not supported by the configuration of the observed data and can thus be applied to multi-source waveform inversion of marine streamer data. We also show that the global correlation norm is theoretically the same as the least-squares norm of the normalized wavefield. To efficiently calculate the gradient, our method employs a back-propagation algorithm similar to reverse-time migration based on the adjoint state of the wave equation. In numerical examples, the multi-source waveform inversion using the global correlation norm produces better inversion results for marine streamer acquisition data than the conventional approach. © 2012 European Association of Geoscientists & Engineers.
An inverse source problem of the Poisson equation with Cauchy data
Directory of Open Access Journals (Sweden)
Ji-Chuan Liu
2017-05-01
In this article, we study an inverse source problem of the Poisson equation with Cauchy data. We want to find iterative algorithms to detect the hidden source within a body from measurements on the boundary. Our goal is to reconstruct the location, the size and the shape of the hidden source. This problem is ill-posed, so regularization techniques must be employed to obtain a regularized solution. Numerical examples show that our proposed algorithms are valid and effective.
Micro-seismic imaging using a source function independent full waveform inversion method
Wang, Hanchen; Alkhalifah, Tariq
2018-03-01
At the heart of micro-seismic event measurements is the task of estimating the locations of micro-seismic event sources, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. Conventional micro-seismic source-location methods, however, often require manual picking of traveltime arrivals, which not only demands human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces strong nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent full waveform inversion of micro-seismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet in the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Angle gathers are also calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity for the synthetic examples used here, such as those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
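The wavelet-cancellation idea behind the convolved objective can be verified in a few lines. If observed and modeled traces share the same Green's functions but were generated with different (unknown) source wavelets, then convolving each observed trace with a modeled reference trace, and vice versa, yields identical signals, so the misfit between them vanishes regardless of the wavelet. The random traces below are stand-ins for illustration, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)
g_trace, g_ref = rng.standard_normal(50), rng.standard_normal(50)   # Green's functions
w_true, w_guess = rng.standard_normal(20), rng.standard_normal(20)  # unknown wavelets

# Observed data carries the true wavelet, modeled data a guessed one.
obs = np.convolve(w_true, g_trace)
obs_ref = np.convolve(w_true, g_ref)
mod = np.convolve(w_guess, g_trace)
mod_ref = np.convolve(w_guess, g_ref)

# Convolved misfit: obs * mod_ref vs. mod * obs_ref. By commutativity both
# equal w_true * w_guess * g_trace * g_ref, so the residual is zero whenever
# the Green's functions (velocity model and source image) are correct,
# independent of the unknown ignition time and wavelet shape.
residual = np.convolve(obs, mod_ref) - np.convolve(mod, obs_ref)
```

When the velocity model or source image is wrong, the Green's functions differ and the residual becomes nonzero, which is what drives the adjoint-state updates.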
Venugopal, M.; Roy, D.; Rajendran, K.; Guillas, S.; Dias, F.
2017-01-01
Numerical inversions for earthquake source parameters from tsunami wave data usually incorporate subjective elements to stabilize the search. In addition, noisy and possibly insufficient data result in instability and non-uniqueness in most deterministic inversions, which are barely acknowledged. Here, we employ the satellite altimetry data for the 2004 Sumatra–Andaman tsunami event to invert the source parameters. We also include kinematic parameters that improve the description of tsunami generation and propagation, especially near the source. Using a finite fault model that represents the extent of rupture and the geometry of the trench, we perform a new type of nonlinear joint inversion of the slips, rupture velocities and rise times with minimal a priori constraints. Despite persistently good waveform fits, large uncertainties in the joint parameter distribution constitute a remarkable feature of the inversion. These uncertainties suggest that objective inversion strategies should incorporate more sophisticated physical models of seabed deformation in order to significantly improve the performance of early warning systems. PMID:28989311
Multi-source waveform inversion of marine streamer data using the normalized wavefield
Choi, Yun Seok
2012-01-01
Even though the encoded multi-source approach dramatically reduces the computational cost of waveform inversion, it is generally not applicable to marine streamer data. This is because the simultaneous-sources modeled data cannot be muted to comply with the configuration of the marine streamer data, which causes differences in the number of stacked traces, or energy levels, between the modeled and observed data. Since the conventional L2 norm does not account for the difference in energy levels, multi-source inversion based on the conventional L2 norm does not work for marine streamer data. In this study, we propose the L2, approximated L2, and L1 norms using the normalized wavefields for the multi-source waveform inversion of marine streamer data. Since the normalized wavefields mitigate the different energy levels between the observed and modeled wavefields, the multi-source waveform inversion using the normalized wavefields can be applied to marine streamer data. We obtain the gradient of the objective functions using the back-propagation algorithm. We also show that the gradient of the L2 norm using the normalized wavefields is exactly the same as that of the global correlation norm. In the numerical examples, the new objective functions using the normalized wavefields generate successful results, whereas the conventional L2 norm does not.
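The stated equivalence between the L2 norm of normalized wavefields and the global correlation norm follows from a one-line identity: for unit-norm traces d̂ and p̂, 0.5·||d̂ − p̂||² = 1 − d̂·p̂, so minimizing one is the same as maximizing the other. A quick numerical check on random stand-in traces:

```python
import numpy as np

def unit(w):
    """Trace normalized to unit energy."""
    return w / np.linalg.norm(w)

def l2_of_normalized(obs, mod):
    """L2 misfit of the normalized wavefields."""
    return 0.5 * np.sum((unit(obs) - unit(mod)) ** 2)

def global_correlation(obs, mod):
    """Zero-lag correlation of the normalized wavefields."""
    return float(unit(obs) @ unit(mod))

rng = np.random.default_rng(0)
obs, mod = rng.standard_normal(300), rng.standard_normal(300)
lhs = l2_of_normalized(obs, mod)          # 0.5 * ||d_hat - p_hat||^2
rhs = 1.0 - global_correlation(obs, mod)  # 1 - <d_hat, p_hat>
```

Because both traces are normalized before comparison, overall amplitude (energy-level) differences between observed and modeled data drop out, which is exactly why the objective survives the mismatched streamer geometry.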
Microseismic imaging using a source-independent full-waveform inversion method
Wang, Hanchen
2016-09-06
Using full waveform inversion (FWI) to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, waveform inversion of microseismic events faces strong nonlinearity due to the unknown source location (space) and function (time). We develop a source independent FWI of microseismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet in the z axis is extracted to check the accuracy of the inverted source image and velocity model. The angle gather is also calculated to assess whether the velocity model is correct. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity for part of the SEG overthrust model.
Narrowband Compton Scattering Yield Enhancement
Rykovanov, Sergey; Seipt, Daniel; Kharin, Vasily
2017-10-01
Compton Scattering (CS) of laser light off high-energy electrons is a well-established source of X- and gamma-rays for applications in medicine, biology, nuclear and material sciences. The main advantage of CS photon sources is the possibility of generating narrow spectra, as opposed to the broad continuum obtained when utilizing bremsstrahlung. However, due to the low cross-section of the linear process, the total photon yield is quite low. The most straightforward way to increase the number of photon-electron beam scattering events is to increase the laser pulse intensity at the interaction point by harder focusing. This, however, has an unfortunate consequence: increasing the laser pulse normalized amplitude a0 leads to additional ponderomotive broadening of the scattered spectrum. The ponderomotive broadening is caused by the v × B force, which slows the electron down near the peak of the laser pulse where the intensity is high, and can be neglected near the wings of the pulse, where the intensity is low. We show that laser pulse chirping, both nonlinear (laser pulse frequency "following" the envelope of the pulse) and linear, leads to compensation of the ponderomotive broadening and considerably enhances the yield of nonlinear Compton sources. Work supported by the Helmholtz Association via Helmholtz Young Investigators Grant (VH-NG-1037).
The Compton generator revisited
Siboni, S.
2014-09-01
The Compton generator, introduced in 1913 by the US physicist A H Compton as a relatively simple device to detect the Earth's rotation with respect to the distant stars, is analyzed and discussed in a general perspective. The paper introduces a generalized definition of the generator, emphasizing the special features of the original apparatus, and provides a suggestive interpretation of the way the device works. To this end, an intriguing electromagnetic analogy is developed, which turns out to be particularly useful in simplifying the calculations. Besides the more extensive description of the Compton generator in itself, the combined use of concepts and methods coming from different fields of physics, such as particle dynamics in moving references frames, continuum mechanics and electromagnetism, may be of interest to both teachers and graduate students.
Inverse modeling of methane sources and sinks using the adjoint of a global transport model
Houweling, S; Kaminski, T; Dentener, F; Lelieveld, J; Heimann, M
1999-01-01
An inverse modeling method is presented to evaluate the sources and sinks of atmospheric methane. An adjoint version of a global transport model has been used to estimate these fluxes at a relatively high spatial and temporal resolution. Measurements from 34 monitoring stations and 11 locations
Evangeliou, Nikolaos; Hamburger, Thomas; Cozic, Anne; Balkanski, Yves; Stohl, Andreas
2017-07-01
This paper describes the results of an inverse modeling study for the determination of the source term of the radionuclides 134Cs, 137Cs and 131I released after the Chernobyl accident. The accident occurred on 26 April 1986 in the Former Soviet Union and released about 1019 Bq of radioactive materials that were transported as far away as the USA and Japan. Thereafter, several attempts to assess the magnitude of the emissions were made that were based on the knowledge of the core inventory and the levels of the spent fuel. More recently, when modeling tools were further developed, inverse modeling techniques were applied to the Chernobyl case for source term quantification. However, because radioactivity is a sensitive topic for the public and attracts a lot of attention, high-quality measurements, which are essential for inverse modeling, were not made available except for a few sparse activity concentration measurements far from the source and far from the main direction of the radioactive fallout. For the first time, we apply Bayesian inversion of the Chernobyl source term using not only activity concentrations but also deposition measurements from the most recent public data set. These observations refer to a data rescue attempt that started more than 10 years ago, with a final goal to provide available measurements to anyone interested. In regards to our inverse modeling results, emissions of 134Cs were estimated to be 80 PBq or 30-50 % higher than what was previously published. From the released amount of 134Cs, about 70 PBq were deposited all over Europe. Similar to 134Cs, emissions of 137Cs were estimated as 86 PBq, on the same order as previously reported results. Finally, 131I emissions of 1365 PBq were found, which are about 10 % less than the prior total releases. The inversion pushes the injection heights of the three radionuclides to higher altitudes (up to about 3 km) than previously assumed (≈ 2.2 km) in order to better match both concentration
Energy Technology Data Exchange (ETDEWEB)
Annuar, A.; Gandhi, P.; Alexander, D. M.; Lansbury, G. B.; Moro, A. Del [Centre for Extragalactic Astronomy, Department of Physics, University of Durham, South Road, Durham, DH1 3LE (United Kingdom); Arévalo, P. [Instituto de Física y Astronomía, Facultad de Ciencias, Universidad de Valparaíso, Gran Bretana N 1111, Playa Ancha, Valparaíso (Chile); Ballantyne, D. R. [Center for Relativistic Astrophysics, School of Physics, Georgia Institute of Technology, Atlanta, GA 30332 (United States); Baloković, M.; Brightman, M.; Harrison, F. A. [Cahill Center for Astronomy and Astrophysics, California Institute of Technology, Pasadena, CA 91125 (United States); Bauer, F. E. [EMBIGGEN Anillo, Concepción (Chile); Boggs, S. E.; Craig, W. W. [Space Sciences Laboratory, University of California, Berkeley, CA 94720 (United States); Brandt, W. N. [Department of Astronomy and Astrophysics, The Pennsylvania State University, 525 Davey Lab, University Park, PA 16802 (United States); Christensen, F. E. [DTU Space, National Space Institute, Technical University of Denmark, Elektrovej 327, DK-2800 Lyngby (Denmark); Hailey, C. J. [Columbia Astrophysics Laboratory, Columbia University, New York, NY 10027 (United States); Hickox, R. C. [Department of Physics and Astronomy, Dartmouth College, 6127 Wilder Laboratory, Hanover, NH 03755 (United States); Matt, G. [Dipartimento di Matematica e Fisica, Universitá degli Studi Roma Tre, via della Vasca Navale 84, I-00146 Roma (Italy); Puccetti, S. [ASI Science Data Center, via Galileo Galilei, I-00044 Frascati (Italy); Ricci, C. [Department of Astronomy, Kyoto University, Kitashirakawa-Oiwake-cho, Sakyo-ku, Kyoto 606-8502 (Japan); and others
2015-12-10
We present two Nuclear Spectroscopic Telescope Array (NuSTAR) observations of the local Seyfert 2 active galactic nucleus (AGN) and an ultraluminous X-ray source (ULX) candidate in NGC 5643. Together with archival data from Chandra, XMM-Newton, and Swift-BAT, we perform a high-quality broadband spectral analysis of the AGN over two decades in energy (∼0.5–100 keV). Previous X-ray observations suggested that the AGN is obscured by a Compton-thick (CT) column of obscuring gas along our line of sight. However, the lack of high-quality ≳10 keV observations, together with the presence of a nearby X-ray luminous source, NGC 5643 X–1, have left significant uncertainties in the characterization of the nuclear spectrum. NuSTAR now enables the AGN and NGC 5643 X–1 to be separately resolved above 10 keV for the first time and allows a direct measurement of the absorbing column density toward the nucleus. The new data show that the nucleus is indeed obscured by a CT column of N_H ≳ 5 × 10^24 cm^−2. The range of 2–10 keV absorption-corrected luminosity inferred from the best-fitting models is L_(2–10,int) = (0.8–1.7) × 10^42 erg s^−1, consistent with that predicted from multiwavelength intrinsic luminosity indicators. In addition, we also study the NuSTAR data for NGC 5643 X–1 and show that it exhibits evidence of a spectral cutoff at energy E ∼ 10 keV, similar to that seen in other ULXs observed by NuSTAR. Along with the evidence for significant X-ray luminosity variations in the 3–8 keV band from 2003 to 2014, our results further strengthen the ULX classification of NGC 5643 X–1.
Zhang, Y.; Dalguer, L. A.; Song, S.; Clinton, J. F.
2013-12-01
Detailed source imaging of the spatial and temporal slip distribution of earthquakes is a main research goal for seismology. In this study we investigate how the number and geometrical distribution of seismic stations affect finite kinematic source inversion results by inverting ground motions derived from a known synthetic dynamic earthquake rupture model, which is governed by the slip weakening friction law with heterogeneous stress distribution. Our target dynamic rupture model is a buried strike-slip event (Mw 6.5) in a layered half space (Dalguer & Mai, 2011) with broadband synthetic ground motions created at 168 near-field stations. In the inversion, we modeled low frequency (under 1Hz) waveforms using a genetic algorithm in a Bayesian framework (Moneli et al. 2008) to retrieve peak slip velocity, rupture time, and rise time of the source. The dynamic consistent regularized Yoffe function (Tinti et al. 2005) was applied as a single window slip velocity function. Tikhonov regularization was used to smooth final slip. We tested three station network geometry cases: (a) single station, in which we inverted 3 component waveforms from a single station varying azimuth and epicentral distance; (b) multi-station configurations with similar numbers of stations all at similar distances from, but regularly spaced around the fault; (c) irregular multi-station configurations using different numbers of stations. For analysis, waveform misfits are calculated using all 168 stations. Our results show: 1) single station tests suggest that it may be possible to obtain a relatively good source model even using one station, with a waveform misfit comparable to that obtained with the best source model. The best single station performance occurs with stations in which amplitude ratios between the three components are not large, indicating that P & S waves are all present. We infer that both body wave radiation pattern and distance play an important role in selection of optimal
Singh, S. K.; Kumar, P.; Turbelin, G.; Issartel, J. P.; Feiz, A. A.; Ngae, P.; Bekka, N.
2016-12-01
In accidental release scenarios, a reliable prediction of the origin and strength of unknown releases is essential for emergency response authorities in order to protect human health and the environment. The accidental scenarios might involve one or more simultaneous releases emitting the same contaminant. In this case, the fields of the plumes may overlap significantly and the sampled concentrations may become a mixture of the concentrations originating from all the releases. The study addresses an inverse modelling procedure for identifying the origin and strength of a known number of simultaneous releases from the sampled mixture of concentrations. A two-step inversion algorithm is developed in conjunction with an adjoint representation of the source-receptor relationship. The computational efficiency is increased by deriving the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise free and exactly described by the dispersion model. The inversion algorithm is evaluated using real data from the Fusion Field Trials, involving multiple (two, three and four sources) release experiments emitting propylene, conducted in September 2007 at Dugway Proving Ground, Utah, USA. The release locations are retrieved, on average, to within 45 m of the true sources. The analysis of posterior uncertainties shows that the variations in location error and retrieved strength are within 10 m and 0.07%, respectively. Further, the inverse modelling is tested using 4-16 measurements in the retrieval of four releases and found to work reasonably well (within 146±79 m). The sensitivity studies highlight that the covariance statistics, model representativeness errors, source-receptor distance, distance between localized sources, monitoring design and number of measurements play an important role in multiple source estimation.
Sparse contrast-source inversion using linear-shrinkage-enhanced inexact Newton method
Desmal, Abdulla
2014-07-01
A contrast-source inversion scheme is proposed for microwave imaging of domains with sparse content. The scheme uses inexact Newton and linear shrinkage methods to account for the nonlinearity and ill-posedness of the electromagnetic inverse scattering problem, respectively. Thresholded shrinkage iterations are accelerated using a preconditioning technique. Additionally, during Newton iterations, the weight of the penalty term is reduced consistently with the quadratic convergence of the Newton method to increase accuracy and efficiency. Numerical results demonstrate the applicability of the proposed method.
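The shrinkage operation the scheme relies on is the soft-threshold map, applied after each linearized step to promote sparse-domain solutions. The standalone sketch below uses it in a plain iterative shrinkage-thresholding (ISTA) loop on a linear toy problem; this is an assumption standing in for the paper's inexact-Newton inner solves and preconditioned acceleration, which operate on the nonlinear scattering equations.

```python
import numpy as np

def soft(x, t):
    """Soft-threshold (shrinkage) operator: zeroes small entries."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by shrinkage iterations."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = squared spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # gradient step on the data misfit, then shrinkage on the result
        x = soft(x - step * (A.T @ (A @ x - b)), step * lam)
    return x

# Sparse toy recovery: two nonzero unknowns out of fifteen.
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 15))
x_true = np.zeros(15)
x_true[[2, 9]] = [1.5, -2.0]
x_hat = ista(A, A @ x_true, lam=1e-3)
```

In the paper's setting, the weight of the penalty term (here `lam`) is reduced across the outer Newton iterations to recover accuracy as the iterates converge.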
Fast sampling algorithm for the simulation of photon Compton scattering
International Nuclear Information System (INIS)
Brusa, D.; Salvat, F.
1996-01-01
A simple algorithm for the simulation of Compton interactions of unpolarized photons is described. The energy and direction of the scattered photon, as well as the active atomic electron shell, are sampled from the double-differential cross section obtained by Ribberfors from the relativistic impulse approximation. The algorithm consistently accounts for Doppler broadening and electron binding effects. Simplifications of Ribberfors' formula, required for efficient random sampling, are discussed. The algorithm involves a combination of inverse transform, composition and rejection methods. A parameterization of the Compton profile is proposed from which the simulation of Compton events can be performed analytically in terms of a few parameters that characterize the target atom, namely shell ionization energies, occupation numbers and maximum values of the one-electron Compton profiles. (orig.)
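The combination of sampling methods mentioned above can be illustrated in miniature. The sketch below samples the scattered-photon energy fraction from the Klein-Nishina cross section for a free electron at rest by plain rejection; this omits the Doppler broadening, binding effects and the more efficient composition scheme of the actual algorithm, and is an assumption-laden simplification rather than the published method:

```python
import random

def sample_compton(k, rng=random.random):
    """Sample eps = E'/E for Compton scattering of a photon with
    k = E / (m_e c^2) off a free electron at rest, by rejection from the
    Klein-Nishina cross section dsigma/deps ~ eps + 1/eps - sin^2(theta)."""
    eps_min = 1.0 / (1.0 + 2.0 * k)         # backscatter (minimum energy) limit
    f_max = eps_min + 1.0 / eps_min          # bound: sin^2(theta) >= 0
    while True:
        eps = eps_min + (1.0 - eps_min) * rng()        # trial energy fraction
        cos_t = 1.0 + 1.0 / k - 1.0 / (k * eps)        # Compton kinematics
        sin2 = 1.0 - cos_t * cos_t
        if rng() * f_max <= eps + 1.0 / eps - sin2:    # rejection test
            return eps, cos_t
```

The rejection envelope is deliberately crude (uniform in eps), which is exactly the inefficiency that composition methods such as the one in the paper are designed to remove.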
Indian Academy of Sciences (India)
The Compton profile of tantalum (Ta) has been measured using an IGP-type coaxial photon detector. The target atoms were excited by means of 59.54 keV γ-rays from Am-241. The measurements were carried out on a high-purity thin elemental foil. The data were recorded in a 4 K multichannel analyzer. These data duly ...
New Insights on the Uncertainties in Finite-Fault Earthquake Source Inversion
Razafindrakoto, Hoby
2015-04-01
Earthquake source inversion is a non-linear problem that leads to non-unique solutions. The aim of this dissertation is to understand the uncertainty and reliability in earthquake source inversion, as well as to quantify variability in earthquake rupture models. The source inversion is performed using Bayesian inference. This technique augments optimization approaches through its ability to image the entire solution space consistent with the data and prior information. In this study, the uncertainty related to the choice of source-time function and crustal structure is investigated. Three predefined analytical source-time functions are analyzed: an isosceles triangle and Yoffe functions with acceleration times of 0.1 and 0.3 s. The use of the isosceles triangle as source-time function is found to bias the finite-fault source inversion results: it drives the rupture to propagate faster than with the Yoffe function. Moreover, it generates an artificial linear correlation between parameters that does not exist for the Yoffe source-time functions. The effect of inadequate knowledge of Earth's crustal structure on earthquake rupture models is subsequently investigated. The results show that one-dimensional structure variability leads to changes in parameter resolution, with a broadening of the posterior PDFs and shifts in the peak location. These changes in the PDFs of kinematic parameters are associated with the blurring effect of using an incorrect Earth structure. As an application to a real earthquake, finite-fault source models for the 2009 L'Aquila earthquake are examined using one- and three-dimensional crustal structures. One-dimensional structure is found to degrade the data fitting. However, there is no significant effect on the rupture parameters aside from differences in the spatial slip extension. Stable features are maintained for both
Coherent source imaging and dynamic support tracking for inverse scattering using compressive MUSIC
Lee, Okkyun; Kim, Jong Min; Yoo, Jaejoon; Jin, Kyunghwan; Ye, Jong Chul
2011-09-01
The goal of this paper is to develop novel algorithms for inverse scattering problems such as EEG/MEG, microwave imaging, and diffuse optical tomography. One of the main contributions of this paper is a class of novel non-iterative exact nonlinear inverse scattering theories for coherent source imaging and moving targets. Specifically, the new algorithms guarantee exact recovery under a very relaxed constraint on the number of sources and receivers, under which the conventional methods fail. Such a breakthrough was possible thanks to the recent theory of compressive MUSIC and its extension using a support correction criterion, in which partial supports are estimated using conventional compressed sensing approaches and the remaining supports are then estimated using a novel generalized MUSIC criterion. Numerical results using coherent sources in EEG/MEG and dynamic targets confirm that the new algorithms outperform the conventional ones.
Yagi, Y.; Fukahata, Y.
2007-12-01
As computer technology has advanced, it has become possible to observe seismic waves at higher sampling rates and to perform inversions for larger data sets. In general, to obtain a finer image of seismic source processes, waveform data with a higher sampling rate are needed. This raises the question of whether there is any limit to the usable sampling rate in waveform inversion. In traditional seismic source inversion, covariance components of sampled waveform data have commonly been neglected. In fact, however, observed waveform data are not completely independent of each other, at least in the time domain, because they are always affected by anelastic attenuation as seismic waves propagate through the Earth. In this study, we have developed a method of seismic source inversion that takes the data covariance into account, and applied it to teleseismic P-wave data of the 2003 Boumerdes-Zemmouri, Algeria earthquake. From a comparison of final slip distributions inverted with the new and traditional formulations, we found that the effect of covariance components is crucial for data sets with higher sampling rates (≥ 5 Hz). For higher sampling rates, the slip distributions from the new formulation are stable, whereas those from the traditional formulation tend to concentrate into small patches owing to overestimation of the information content of the observed data. Our result indicates that the anelastic attenuation of the Earth sets a limit on the resolution of inverted seismic source models. It has been pointed out that seismic source models obtained from waveform data analyses are quite different from one another. One possible reason for the discrepancy is the neglect of covariance components. The new formulation should be useful for obtaining a standard seismic source model.
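Taking the data covariance into account amounts to replacing ordinary least squares with a generalized least-squares fit weighted by the inverse covariance matrix. The sketch below shows the core linear-algebra step for a linear forward operator G; it is an illustrative toy, not the authors' slip-inversion code:

```python
import numpy as np

def gls_inversion(G, d, C):
    """Generalized least squares: minimize (d - G m)^T C^{-1} (d - G m).
    C is the data covariance matrix; with C = I this reduces to the
    traditional formulation that neglects covariance between samples."""
    Ci = np.linalg.inv(C)
    # Normal equations weighted by the inverse data covariance
    return np.linalg.solve(G.T @ Ci @ G, G.T @ Ci @ d)
```

For correlated waveform samples, C might for instance be an AR(1)-type matrix `C[i, j] = r ** |i - j|`; off-diagonal terms then down-weight the redundant information that the traditional (diagonal) formulation overcounts at high sampling rates.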
Efficient Inversion of Multi-frequency and Multi-Source Electromagnetic Data
Energy Technology Data Exchange (ETDEWEB)
Gary D. Egbert
2007-03-22
The project covered by this report focused on the development of efficient but robust non-linear inversion algorithms for electromagnetic induction data, in particular for data collected with multiple receivers and multiple transmitters, a situation extremely common in geophysical EM subsurface imaging methods. A key observation is that for such multi-transmitter problems, each step in commonly used linearized iterative limited-memory search schemes such as conjugate gradients (CG) requires solution of forward and adjoint EM problems for each of the N frequencies or sources, essentially generating data sensitivities for an N-dimensional data subspace. These multiple sensitivities allow a good approximation to the full Jacobian of the data mapping to be built up in many fewer search steps than would be required by application of textbook optimization methods, which take no account of the multiplicity of forward problems that must be solved for each search step. We have applied this idea to develop a hybrid inversion scheme that combines features of the iterative limited-memory methods with a Newton-type approach using a partial calculation of the Jacobian. Initial tests on 2D problems show that the new approach produces results essentially identical to a Newton-type Occam minimum-structure inversion, while running more rapidly than an iterative (fixed regularization parameter) CG-style inversion. Memory requirements, while greater than for something like CG, are modest enough that the scheme should allow even 3D inverse problems to be solved on a common desktop PC, at least for modest (~ 100 sites, 15-20 frequencies) data sets. A secondary focus of the research has been the development of a modular system for EM inversion, using an object-oriented approach. This system has proven useful for rapid prototyping of inversion algorithms, in particular allowing initial development and testing to be conducted with two-dimensional example problems, before
pyGIMLi: An open-source library for modelling and inversion in geophysics
Rücker, Carsten; Günther, Thomas; Wagner, Florian M.
2017-12-01
Many tasks in applied geosciences cannot be solved by single measurements, but require the integration of geophysical, geotechnical and hydrological methods. Numerical simulation techniques are essential both for planning and interpretation, as well as for the process understanding of modern geophysical methods. These trends encourage open, simple, and modern software architectures aiming at a uniform interface for interdisciplinary and flexible modelling and inversion approaches. We present pyGIMLi (Python Library for Inversion and Modelling in Geophysics), an open-source framework that provides tools for modelling and inversion of various geophysical but also hydrological methods. The modelling component supplies discretization management and the numerical basis for finite-element and finite-volume solvers in 1D, 2D and 3D on arbitrarily structured meshes. The generalized inversion framework solves the minimization problem with a Gauss-Newton algorithm for any physical forward operator and provides opportunities for uncertainty and resolution analyses. More general requirements, such as flexible regularization strategies, time-lapse processing and different ways of coupling individual methods, are provided independently of the actual methods used. The usage of pyGIMLi is first demonstrated by solving the steady-state heat equation, followed by a demonstration of more complex capabilities for the combination of different geophysical data sets. A fully coupled hydrogeophysical inversion of electrical resistivity tomography (ERT) data of a simulated tracer experiment is presented that allows direct reconstruction of the underlying hydraulic conductivity distribution of the aquifer. Another example demonstrates the improvement of jointly inverting ERT and ultrasonic data with respect to saturation by a new approach that incorporates petrophysical relations in the inversion. Potential applications of the presented framework are manifold and include time
International Nuclear Information System (INIS)
Gladkikh, P.I.; Telegin, Yu.N.; Karnaukhov, I.M.
2002-01-01
The feasibility of the development of intense X-ray sources based on Compton scattering in laser-electron storage rings is discussed. The results of the electron beam dynamics simulation involving Compton and intrabeam scattering are presented
CMT Source Inversions for Massive Data Assimilation in Global Adjoint Tomography
Lei, W.; Ruan, Y.; Bozdag, E.; Lefebvre, M. P.; Smith, J. A.; Modrak, R. T.; Komatitsch, D.; Song, X.; Liu, Q.; Tromp, J.; Peter, D. B.
2015-12-01
Full Waveform Inversion (FWI) is a vital tool for probing the Earth's interior and enhancing our knowledge of the underlying dynamical processes [e.g., Liu et al., 2012]. Using the adjoint tomography method, we have successfully obtained a first-generation global FWI model named M15 [Bozdag et al., 2015]. To achieve higher resolution of the emerging new structural features and to accommodate azimuthal anisotropy and anelasticity in the next-generation model, we expanded our database from 256 to 4,224 earthquakes. Previous studies have shown that ray-theory-based Centroid Moment Tensor (CMT) inversion algorithms can produce systematic biases in earthquake source parameters due to tradeoffs with 3D crustal and mantle heterogeneity [e.g., Hjorleifsdottir et al., 2010]. To reduce these well-known tradeoffs, we performed CMT inversions in our current 3D global model before resuming the structural inversion with the expanded database. Initial source parameters are selected from the global CMT database [Ekstrom et al., 2012], with moment magnitudes ranging from 5.5 to 7.0 and occurring between 1994 and 2015. Data from global and regional networks were retrieved from the IRIS DMC. Synthetic seismograms were generated based on the spectral-element-based seismic wave propagation solver (SPECFEM3D GLOBE) in model M15. We used a source inversion algorithm based on a waveform misfit function while allowing time shifts between data and synthetics to accommodate additional unmodeled 3D heterogeneity [Liu et al., 2004]. To accommodate the large number of earthquakes and time series (more than 10,000,000 records), we implemented a source inversion workflow based on the newly developed Adaptive Seismic Data Format (ASDF) [Krischer, Smith, et al., 2015] and ObsPy [Krischer et al., 2015]. In ASDF, each earthquake is associated with a single file, thereby eliminating I/O bottlenecks in the workflow and facilitating fast parallel processing. Our preliminary results indicate that errors
Padois, Thomas; Berry, Alain
2015-12-01
Microphone arrays and beamforming have become a standard method to localize aeroacoustic sources. Deconvolution techniques have been developed to improve spatial resolution of beamforming maps. The deconvolution approach for the mapping of acoustic sources (DAMAS) is a standard deconvolution technique, which has been enhanced via a sparsity approach called sparsity constrained deconvolution approach for the mapping of acoustic sources (SC-DAMAS). In this paper, the DAMAS inverse problem is solved using the orthogonal matching pursuit (OMP) and compared with beamforming and SC-DAMAS. The resulting noise source maps show that OMP-DAMAS is an efficient source localization technique in the case of uncorrelated or correlated acoustic sources. Moreover, the computation time is clearly reduced as compared to SC-DAMAS.
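Orthogonal matching pursuit, as applied above to the DAMAS system, greedily selects one source-grid column at a time and re-fits the amplitudes by least squares on the selected support. A generic OMP sketch (not the authors' implementation; matrix and sparsity-level names are illustrative):

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Orthogonal matching pursuit: at each step, pick the column of A most
    correlated with the current residual, then re-solve a least-squares
    problem restricted to the selected columns."""
    support, residual = [], y.copy()
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(A.T @ residual))))   # greedy pick
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # re-fit support
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

Because each iteration solves only a small least-squares problem, this kind of greedy solver is typically much cheaper than a full sparsity-constrained optimization, consistent with the reduced computation time reported relative to SC-DAMAS.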
Regional W-Phase Source Inversion for Moderate to Large Earthquakes in China and Neighboring Areas
Zhao, Xu; Duputel, Zacharie; Yao, Zhenxing
2017-12-01
Earthquake source characterization has been significantly speeded up in the last decade with the development of rapid inversion techniques in seismology. Among these techniques, the W-phase source inversion method quickly provides point source parameters of large earthquakes using very long period seismic waves recorded at teleseismic distances. Although the W-phase method was initially developed to work at global scale (within 20 to 30 min after the origin time), faster results can be obtained when seismological data are available at regional distances (i.e., Δ ≤ 12°). In this study, we assess the use and reliability of regional W-phase source estimates in China and neighboring areas. Our implementation uses broadband records from the Chinese network supplemented by global seismological stations installed in the region. Using this data set and minor modifications to the W-phase algorithm, we show that reliable solutions can be retrieved automatically within 4 to 7 min after the earthquake origin time. Moreover, the method yields stable results down to Mw = 5.0 events, which is well below the size of earthquakes that are rapidly characterized using W-phase inversions at teleseismic distances.
Real-time Inversion of Tsunami Source from GNSS Ground Deformation Observations and Tide Gauges.
Arcas, D.; Wei, Y.
2017-12-01
Over the last decade, the NOAA Center for Tsunami Research (NCTR) has developed an inversion technique to constrain tsunami sources based on the use of Green's functions in combination with data reported by NOAA's Deep-ocean Assessment and Reporting of Tsunamis (DART®) systems. The system has consistently proven effective in providing highly accurate tsunami forecasts of wave amplitude throughout an entire basin. However, improvement is necessary in two critical areas: reduction of data latency for near-field tsunami predictions and reduction of the maintenance cost of the network. Two types of sensors have been proposed to supplement the existing network of DART® systems: Global Navigation Satellite System (GNSS) stations and coastal tide gauges. The use of GNSS stations to provide autonomous geo-spatial positioning at specific sites during an earthquake has been proposed in recent years to supplement the DART® array in tsunami source inversion. GNSS technology has the potential to provide substantial contributions in the two critical areas of DART® technology where improvement is most necessary. The present study uses GNSS ground displacement observations of the 2011 Tohoku-Oki earthquake, in combination with the NCTR operational database of Green's functions, to produce a rapid estimate of the tsunami source based on GNSS observations alone. The solution is then compared with that obtained via DART® data inversion, and the difficulties in obtaining an accurate GNSS-based solution are highlighted. The study also identifies the set of conditions required for source inversion from coastal tide gauges, using the degree of nonlinearity of the signal as the primary criterion. We then proceed to identify the conditions and scenarios under which a particular gauge could be used to invert a tsunami source.
von Ballmoos, Peter; Boggs, Steven E.; Jean, Pierre; Zoglauer, Andreas
2014-07-01
The All-Sky Compton Imager (ASCI) is a mission concept for MeV gamma-ray astronomy. It consists of a compact array of cross-strip germanium detectors, shielded only by a plastic anticoincidence, and weighing less than 100 kg. Situated on a deployable structure at a distance of 10 m from the spacecraft, orbiting at L2 or in a HEO, the ASCI not only avoids albedo- and spacecraft-induced background, but also benefits from continuous all-sky exposure. The modest effective area is more than compensated by the 4π field of view. Despite its small size, ASCI's γ-ray line sensitivity after its nominal lifetime of 3 years is ~ 10^-6 ph cm^-2 s^-1 at 1 MeV for every γ-ray source in the sky. With its high spectral and 3-D spatial resolution, the ASCI will perform sensitive γ-ray spectroscopy and polarimetry in the energy band 100 keV-10 MeV. The All-Sky Compton Imager is particularly well suited to the task of measuring the Cosmic Gamma-Ray Background, while simultaneously covering the wide range of science topics in gamma-ray astronomy.
Gehrmann, Romina A. S.; Schwalenberg, Katrin; Riedel, Michael; Spence, George D.; Spieß, Volkhard; Dosso, Stan E.
2016-01-01
This paper applies nonlinear Bayesian inversion to marine controlled source electromagnetic (CSEM) data collected near two sites of the Integrated Ocean Drilling Program (IODP) Expedition 311 on the northern Cascadia Margin to investigate subseafloor resistivity structure related to gas hydrate deposits and cold vents. The Cascadia margin, off the west coast of Vancouver Island, Canada, has a large accretionary prism where sediments are under pressure due to convergent plate boundary tectonics. Gas hydrate deposits and cold vent structures have previously been investigated by various geophysical methods and seabed drilling. Here, we invert time-domain CSEM data collected at Sites U1328 and U1329 of IODP Expedition 311 using Bayesian methods to derive subsurface resistivity model parameters and uncertainties. The Bayesian information criterion is applied to determine the amount of structure (number of layers in a depth-dependent model) that can be resolved by the data. The parameter space is sampled with the Metropolis-Hastings algorithm in principal-component space, utilizing parallel tempering to ensure wider and efficient sampling and convergence. Nonlinear inversion allows analysis of uncertain acquisition parameters such as time delays between receiver and transmitter clocks as well as input electrical current amplitude. Marginalizing over these instrument parameters in the inversion accounts for their contribution to the geophysical model uncertainties. One-dimensional inversion of time-domain CSEM data collected at measurement sites along a survey line allows interpretation of the subsurface resistivity structure. The data sets can be generally explained by models with 1 to 3 layers. Inversion results at U1329, at the landward edge of the gas hydrate stability zone, indicate a sediment unconformity as well as potential cold vents which were previously unknown. The resistivities generally increase upslope due to sediment erosion along the slope. Inversion
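The Metropolis-Hastings sampling mentioned above can be reduced to a few lines. The sketch below is a minimal 1-D random-walk Metropolis sampler showing only the core accept/reject rule; the paper's sampler additionally works in principal-component space and uses parallel tempering, neither of which is attempted here:

```python
import math
import random

def metropolis_hastings(log_post, x0, step, n_samples, rng=None):
    """Minimal 1-D random-walk Metropolis-Hastings sampler.
    log_post: unnormalized log posterior density; step: proposal std dev."""
    rng = rng or random.Random(0)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_samples):
        x_prop = x + step * rng.gauss(0.0, 1.0)      # symmetric proposal
        lp_prop = log_post(x_prop)
        if math.log(rng.random()) < lp_prop - lp:    # accept w.p. min(1, ratio)
            x, lp = x_prop, lp_prop
        chain.append(x)                              # rejected moves repeat x
    return chain
```

Marginalizing over nuisance instrument parameters, as done in the paper, corresponds to sampling them jointly with the geophysical parameters and then examining only the marginal histograms of interest.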
An Adaptive Observer-Based Algorithm for Solving Inverse Source Problem for the Wave Equation
Asiri, Sharefa M.
2015-08-31
Observers are well known in control theory. Originally designed to estimate the hidden states of dynamical systems given some measurements, the scope of observers has recently been extended to the estimation of other unknowns in systems governed by partial differential equations. In this paper, observers are used to solve an inverse source problem for a one-dimensional wave equation. An adaptive observer is designed to estimate the state and source components for a fully discretized system. The effectiveness of the algorithm is demonstrated in noise-free and noisy cases, and an insight into the impact of the measurements' size and location is provided.
Stritzel, J; Melchert, O; Wollweber, M; Roth, B
2017-09-01
The direct problem of optoacoustic signal generation in biological media consists of solving an inhomogeneous three-dimensional (3D) wave equation for an initial acoustic stress profile. In contrast, the more challenging inverse problem requires the reconstruction of the initial stress profile from a proper set of observed signals. In this article, we consider an effectively 1D approach, based on the assumption of a Gaussian transverse irradiation source profile and plane acoustic waves, in which the effects of acoustic diffraction are described in terms of a linear integral equation. The respective inverse problem along the beam axis can be cast into a Volterra integral equation of the second kind, for which we explore efficient numerical schemes in order to reconstruct initial stress profiles from observed signals, constituting methodological progress on the computational aspects of optoacoustics. In this regard, we explore the validity as well as the limits of the inversion scheme via numerical experiments, with parameters geared toward actual optoacoustic problem instances. The considered inversion input consists of synthetic data, obtained in terms of the effectively 1D approach, and, more generally, a solution of the 3D optoacoustic wave equation. Finally, we also analyze the effect of noise and different detector-to-sample distances on the optoacoustic signal and the reconstructed pressure profiles.
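A Volterra integral equation of the second kind, u(t) = f(t) + ∫₀ᵗ K(t,s) u(s) ds, admits a simple marching solution because each u(tₙ) depends only on earlier values. The sketch below uses the standard trapezoidal rule on a uniform grid; it is an illustrative textbook scheme, not the reconstruction code of the paper:

```python
import numpy as np

def volterra2(f, K, t):
    """Solve u(t) = f(t) + int_0^t K(t, s) u(s) ds on a uniform grid t
    by the trapezoidal rule, marching forward in time."""
    h = t[1] - t[0]
    u = np.empty_like(t)
    u[0] = f(t[0])                                   # integral vanishes at t=0
    for n in range(1, len(t)):
        # trapezoidal sum over already-computed values u[0..n-1]
        acc = 0.5 * K(t[n], t[0]) * u[0] + sum(K(t[n], t[j]) * u[j] for j in range(1, n))
        # solve for u[n], which appears on both sides with weight h/2
        u[n] = (f(t[n]) + h * acc) / (1.0 - 0.5 * h * K(t[n], t[n]))
    return u
```

For K ≡ 1 and f ≡ 1 the exact solution is u(t) = eᵗ, which makes a convenient correctness check; the scheme is second-order accurate in the grid spacing.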
New reconstruction method for the advanced Compton camera
International Nuclear Information System (INIS)
Kurihara, Takashi; Ogawa, Koichi
2007-01-01
Conventional gamma cameras employ a mechanical collimator, which reduces the number of photons detected by such cameras. To address this issue, a Compton camera has been proposed to improve the efficiency of data acquisition by employing electronic collimation. Among Compton cameras, the advanced Compton camera (ACC) proposed by Tanimori et al. can restrict the source locations with the help of the recoil electrons emitted in the process of Compton scattering. However, the reconstruction methods employed in conventional Compton cameras are inefficient in reconstructing images from data acquired with the ACC. In this paper, we propose a new reconstruction method designed specifically for the ACC. This method, an improved version of the source space tree algorithm (SSTA), permits the source distribution to be reconstructed accurately and efficiently. The SSTA is one of the reconstruction methods for conventional Compton cameras proposed by Rohe et al. Our proposed algorithm employs a set of lines defined at equiangular intervals in the reconstruction region and specified voxels of interest that include search points located on the predefined lines at equally spaced intervals. The validity of our method is demonstrated by simulations involving the reconstruction of a point source and a disk source. (author)
Source-jerk analysis using a semi-explicit inverse kinetic technique
International Nuclear Information System (INIS)
Spriggs, G.D.; Pederson, R.A.
1985-01-01
A method is proposed for measuring the effective reproduction factor, k, in subcritical systems. The method uses the transient response of a subcritical system to the sudden removal of an extraneous neutron source (i.e., a source jerk). The response is analyzed using an inverse kinetic technique that least-squares fits the exact analytical solution corresponding to a source-jerk transient as derived from the point-reactor model. It has been found that the technique can provide an accurate means of measuring k in systems that are close to critical (i.e., 0.95 < k < 1.0). As a system becomes more subcritical (i.e., k << 1.0) spatial effects can introduce significant biases depending on the source and detector positions. However, methods are available that can correct for these biases and, hence, can allow measuring subcriticality in systems with k as low as 0.5. 12 refs., 3 figs
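The point-reactor physics behind the source-jerk technique can be condensed into a back-of-the-envelope estimate. In the prompt-jump approximation, the neutron level drops from its equilibrium value n0 to n1 = β·n0/(β − ρ) immediately after the source is removed, so the two plateau levels determine the reactivity. The sketch below uses only this two-level estimate, not the full least-squares fit of the analytical transient described in the abstract, and the delayed-neutron fraction is an illustrative assumed value:

```python
def source_jerk_k(n0, n1, beta=0.0065):
    """Estimate reactivity rho and multiplication factor k from the prompt
    drop after a source jerk, via the prompt-jump approximation of the
    point-reactor model: n1 / n0 = beta / (beta - rho).
    beta: assumed effective delayed-neutron fraction (illustrative value)."""
    rho = beta * (1.0 - n0 / n1)   # rho < 0 for a subcritical system (n1 < n0)
    k = 1.0 / (1.0 - rho)          # from rho = (k - 1) / k
    return rho, k
```

As the abstract notes, such point-kinetics estimates become biased by spatial effects for deeply subcritical systems, which is why detector-position corrections are needed for k well below 1.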
Energy Technology Data Exchange (ETDEWEB)
Fanelli, Cristiano V. [Sapienza Univ. of Rome (Italy)
2015-03-01
In this thesis work, results of the analysis of the polarization transfers measured in real Compton scattering (RCS) by Collaboration E07-002 at the Jefferson Lab Hall C are presented. The data were collected at large scattering angle (θ_cm = 70°) with a polarized incident photon beam at an average energy of 3.8 GeV. This kind of experiment allows one to understand more deeply the reaction mechanism involving a real photon, by extracting both Compton form factors and Generalized Parton Distributions (GPDs) (also relevant for possibly shedding light on the total angular momentum of the nucleon). The obtained results for the longitudinal and transverse polarization transfers, K_LL and K_LT, are of crucial importance, since they unambiguously confirm the disagreement between experimental data and the pQCD prediction, as found in the E99-114 experiment, and favor the handbag mechanism. The E99-114 and E07-002 results can help attract new interest in Compton scattering by a nucleon target, as demonstrated by the recent approval of an experimental proposal submitted to the Jefferson Lab PAC 42 for a wide-angle Compton scattering experiment at 8 and 10 GeV photon energies. The new experiments approved to run with the upgraded 12 GeV electron beam at JLab are characterized by much higher luminosities, and a new GEM tracker is under development to tackle the challenging backgrounds. Within this context, we present a new multistep tracking algorithm based on (i) a neural network (NN) designed for fast and efficient association of the hits measured by the GEM detector, which allows track identification, and (ii) the application of both a Kalman filter and a Rauch-Tung-Striebel smoother to further improve the track reconstruction. The full procedure, i.e. NN and filtering, appears very promising, with high performance in terms of both association efficiency and reconstruction accuracy, and these preliminary results will
International Nuclear Information System (INIS)
Winiarek, Victor
2014-01-01
Uncontrolled releases of pollutants in the atmosphere may result from various situations: accidents, for instance leaks or explosions in an industrial plant, or terrorist attacks such as biological bombs, especially in urban areas. In such situations, the authorities' objectives are various: predicting the contaminated zones in order to apply first countermeasures such as evacuation of the concerned population; determining the source location; and assessing the long-term polluted areas, for instance by deposition of persistent pollutants in the soil. To achieve these objectives, numerical models can be used to model the atmospheric dispersion of pollutants. We first present the different processes that govern the transport of pollutants in the atmosphere, then the different numerical models commonly used in this context. The choice between these models mainly depends on the scale and the level of detail one seeks to take into account. We then present several inverse modelling methods to estimate the emission, as well as statistical methods to estimate the prior errors, to which the inversion is very sensitive. Several case studies are presented, using synthetic data as well as real data, such as the estimation of source terms from the Fukushima accident in March 2011. From our results, we estimate the cesium-137 emission to be between 12 and 19 PBq, with a standard deviation between 15 and 65%, and the iodine-131 emission to be between 190 and 380 PBq, with a standard deviation between 5 and 10%. Concerning the localization of an unknown source of pollutant, two strategies can be considered. On the one hand, parametric methods use a limited number of parameters to characterize the source term to be reconstructed; to do so, strong assumptions are made on the nature of the source, and the inverse problem is then to estimate these parameters. On the other hand, nonparametric methods attempt to reconstruct a full emission field. Several parametric and nonparametric methods are
Skeletonized inversion of surface wave: Active source versus controlled noise comparison
Li, Jing
2016-07-14
We have developed a skeletonized inversion method that inverts for the S-wave velocity distribution from surface-wave dispersion curves. Instead of attempting to fit every wiggle in the surface waves with predicted data, it inverts only the picked dispersion curve, thereby mitigating the problem of getting stuck in a local minimum. We have applied this method to a synthetic model and to seismic field data from the Qademah fault, located on the western side of Saudi Arabia. For comparison, we performed dispersion analysis for active-source and controlled-noise seismic data that had some receivers in common with the passive array. The active and passive data show good agreement in their dispersive characteristics. Our results demonstrate that skeletonized inversion can obtain reliable 1D and 2D S-wave velocity models for our geologic setting. A limitation is that we need to build a layered initial model to calculate the Jacobian matrix, which is time consuming.
Noise source localization on tyres using an inverse boundary element method
DEFF Research Database (Denmark)
Schuhmacher, Andreas; Saemann, E-U; Hald, J
1998-01-01
A dominating part of tyre noise is radiated from a region close to the tyre/road contact patch, where it is very difficult to measure both the tyre vibration and the acoustic near field. The approach taken in the present paper is to model the tyre and road surfaces with a Boundary Element Model (BEM), with unknown node vibration data on the tyre surface. The BEM model is used to calculate a set of transfer functions from the node vibrations to the sound pressure at a set of microphone positions around the tyre. By approximate inversion of the matrix of transfer functions, the surface vibration data can then be estimated from a set of measured sound pressure data. The paper describes the different elements of this so-called Inverse Boundary Element Method (IBEM), including the measurement system, and it gives results from a verification measurement on a loudspeaker sound source. Results...
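The "approximate inversion of the matrix of transfer functions" described above can be sketched as a Tikhonov-stabilized pseudo-inverse. The matrix sizes, noise level and regularization parameter below are invented for illustration; the paper's actual BEM transfer functions are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transfer matrix H mapping N surface-node vibration amplitudes
# to M microphone pressures (M > N, complex-valued as in acoustics).
M, N = 40, 10
H = rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))
v_true = rng.normal(size=N) + 1j * rng.normal(size=N)

# "Measured" sound pressures with a little additive noise.
p = H @ v_true + 0.01 * rng.normal(size=M)

# Approximate inversion: v = (H^H H + beta I)^-1 H^H p,
# a regularized pseudo-inverse that tolerates measurement noise.
beta = 1e-3
v_est = np.linalg.solve(H.conj().T @ H + beta * np.eye(N), H.conj().T @ p)

rel_err = np.linalg.norm(v_est - v_true) / np.linalg.norm(v_true)
print(f"relative reconstruction error: {rel_err:.3e}")
```

With an overdetermined, well-conditioned transfer matrix the vibration estimate is recovered to within the noise level; in practice the regularization parameter must be tuned to the measurement noise.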
On an inverse source problem for enhanced oil recovery by wave motion maximization in reservoirs
Karve, Pranav M.
2014-12-28
We discuss an optimization methodology for focusing wave energy to subterranean formations using strong motion actuators placed on the ground surface. The motivation stems from the desire to increase the mobility of otherwise entrapped oil. The goal is to arrive at the spatial and temporal description of surface sources that are capable of maximizing mobility in the target reservoir. The focusing problem is posed as an inverse source problem. The underlying wave propagation problems are abstracted in two spatial dimensions, and the semi-infinite extent of the physical domain is negotiated by a buffer of perfectly-matched-layers (PMLs) placed at the domain’s truncation boundary. We discuss two possible numerical implementations; their utility for deciding the tempo-spatial characteristics of optimal wave sources is shown via numerical experiments. Overall, the simulations demonstrate the inverse source method’s ability to simultaneously optimize load locations and time signals, leading to the maximization of energy delivery to a target formation.
The feasibility of an inverse geometry CT system with stationary source arrays.
Hsieh, Scott S; Heanue, Joseph A; Funk, Tobias; Hinshaw, Waldo S; Wilfley, Brian P; Solomon, Edward G; Pelc, Norbert J
2013-03-01
Inverse geometry computed tomography (IGCT) has been proposed as a new system architecture that combines a small detector with a large, distributed source. This geometry can suppress cone-beam artifacts, reduce scatter, and increase dose efficiency. However, the temporal resolution of IGCT is still limited by the gantry rotation time. Large reductions in rotation time are in turn difficult due to the large source array and associated power electronics. We examine the feasibility of using stationary source arrays for IGCT in order to achieve better temporal resolution. We anticipate that multiple source arrays are necessary, with each source array physically separated from adjacent ones. Key feasibility issues include spatial resolution, artifacts, flux, noise, collimation, and system timing clashes. The separation between the different source arrays leads to missing views, complicating reconstruction. For the special case of three source arrays, a two-stage reconstruction algorithm is used to estimate the missing views. Collimation is achieved using a rotating collimator with a small number of holes. A set of equally spaced source spots is designated on the source arrays, and a source spot is energized when a collimator hole is aligned with it. System timing clashes occur when multiple source spots are scheduled to be energized simultaneously. We examine flux considerations to evaluate whether sufficient flux is available for clinical applications. The two-stage reconstruction algorithm suppresses cone-beam artifacts while maintaining resolution and noise characteristics comparable to standard third generation systems. The residual artifacts are much smaller in magnitude than the eliminated cone-beam artifacts. A mathematical condition is given relating collimator hole locations and the number of virtual source spots for which system timing clashes are avoided. With optimization, sufficient flux may be achieved for many clinical applications. IGCT with stationary
Directory of Open Access Journals (Sweden)
J. S. de Villiers
2014-10-01
This research focuses on the inversion of geomagnetic variation field measurements to obtain source currents in the ionosphere. During a geomagnetic disturbance, ionospheric currents create magnetic field variations that induce geoelectric fields, which drive geomagnetically induced currents (GICs) in power systems. These GICs may disturb the operation of power systems and cause damage to grounded power transformers. The geoelectric fields at any location of interest can be determined from the source currents in the ionosphere through a solution of the forward problem. Line currents running east–west at a given surface position are postulated to exist at a certain height above the Earth's surface. This physical arrangement results in the fields on the ground having magnetic north and down components and an electric east component. Ionospheric currents are modelled by inverting Fourier integrals (over the wavenumber) of elementary geomagnetic fields using the Levenberg–Marquardt technique. The output parameters of the inversion model are the current strength, height and surface position of the ionospheric current system. A ground conductivity structure with five layers from Quebec, Canada, based on the layered-Earth model, is used to obtain the complex skin depth at a given angular frequency. This paper presents preliminary inversion results based on these structures and simulated geomagnetic fields. The results show some interesting features in the frequency domain. Model parameters obtained through inversion are within 2% of the simulated values. This technique has applications for modelling the currents of electrojets at the equator and in the auroral regions, as well as currents in the magnetosphere.
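A Levenberg–Marquardt fit for the three output parameters named above (current strength, height, surface position) can be sketched with SciPy. The field expressions are the textbook result for an infinite line current, matching the geometry described (north and down magnetic components on the ground); the 1 MA / 110 km values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

MU0_OVER_2PI = 2e-7  # T*m/A

def ground_fields(params, x):
    """North (horizontal) and down magnetic components at ground positions
    x (m) for an infinite east-west line current of strength I (A) at
    height h (m) above surface position x0 (m)."""
    current, height, x0 = params
    d = x - x0
    r2 = d ** 2 + height ** 2
    b_north = MU0_OVER_2PI * current * height / r2
    b_down = MU0_OVER_2PI * current * d / r2
    return np.concatenate([b_north, b_down])

# Synthetic "observed" variation field from a 1 MA electrojet at 110 km.
x = np.linspace(-400e3, 400e3, 81)
true_params = np.array([1.0e6, 110e3, 30e3])
obs = ground_fields(true_params, x)

# Levenberg-Marquardt inversion for current strength, height and position;
# x_scale evens out the very different parameter magnitudes.
fit = least_squares(lambda p: ground_fields(p, x) - obs,
                    x0=[5.0e5, 90e3, 0.0], method="lm",
                    x_scale=[1e6, 1e5, 1e5])
print(fit.x)  # ≈ [1.0e6, 1.1e5, 3.0e4]
```

With noiseless synthetic data and a reasonable starting point, the fit recovers the three parameters essentially exactly, which is consistent with the sub-2% recovery reported in the abstract.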
Micro-seismic Imaging Using a Source Independent Waveform Inversion Method
Wang, Hanchen
2016-04-18
Micro-seismology is attracting more and more attention in the exploration seismology community. The main goal of micro-seismic imaging is to find the source location and the ignition time in order to track fracture expansion, which helps engineers monitor reservoirs. Conventional imaging methods work fine in this field, but they have many limitations, such as manual picking, incorrect migration velocity and low signal-to-noise ratio (S/N). In traditional surface-survey imaging, full waveform inversion (FWI) is widely used. The FWI method updates the velocity model by minimizing the misfit between the observed data and the predicted data. Using FWI to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield, overcoming the difficulties of manual picking and of an incorrect migration velocity model. However, waveform inversion of micro-seismic events faces its own problems: there is significant nonlinearity due to the unknown source location (in space) and source function (in time). We have developed a source-independent FWI of micro-seismic events to simultaneously invert for the source image, source function and velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. To examine the accuracy of the inverted source image and velocity model, the extended image for the source wavelet along the z-axis is extracted. The angle gather is also calculated to check the applicability of the migration velocity. By inverting for the source image, source wavelet and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and background velocity in synthetic experiments with both parts of the Marmousi and the SEG
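The source-independence trick of convolving reference traces with the observed and modeled data rests on an exact identity: if observed and modeled traces share the same Green's functions but differ in an unknown wavelet, convolving each observed trace with a modeled reference trace (and vice versa) produces identical quantities, so the wavelet cancels from the misfit. A minimal numerical check, with randomly generated stand-ins for the Green's functions and wavelets:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Green's functions for a reference trace and another trace,
# plus two different (true vs. estimated) source wavelets.
g_ref, g_i = rng.normal(size=50), rng.normal(size=50)
w_true, w_est = rng.normal(size=20), rng.normal(size=20)

conv = np.convolve
obs_i, obs_ref = conv(w_true, g_i), conv(w_true, g_ref)  # "observed" data
mod_i, mod_ref = conv(w_est, g_i), conv(w_est, g_ref)    # "modeled" data

# Source-independent combination: both sides equal
# w_true * w_est * g_i * g_ref, so the unknown wavelet drops out.
lhs = conv(obs_i, mod_ref)
rhs = conv(mod_i, obs_ref)
print(np.max(np.abs(lhs - rhs)))  # differences at floating-point round-off
```

Because convolution is commutative and associative, the two combinations agree to machine precision regardless of how wrong the estimated wavelet is, which is what makes a misfit built from them insensitive to the unknown ignition time and source function.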
Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves
Chen, W.; Ni, S.; Wang, Z.
2011-12-01
In the classical source parameter inversion algorithm CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithm has proven effective for resolving source parameters (focal mechanism, depth and moment) for earthquakes well recorded on relatively dense seismic networks. However, for regions covered by sparse stations, it is challenging to achieve precise source parameters. In this case, a moderate earthquake of ~M6 is usually recorded on only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, combining teleseismic and local body wave data helps constrain source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and explore the efficiency with bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike is considerably improved.
Dettmer, J.; Benavente, R. F.; Cummins, P. R.
2017-12-01
This work considers probabilistic, non-linear centroid moment tensor inversion of data from earthquakes at teleseismic distances. The moment tensor is treated as deviatoric, and the centroid location is parametrized with fully unknown latitude, longitude, depth and time delay. The inverse problem is treated as fully non-linear in a Bayesian framework, and the posterior density is estimated with interacting Markov chain Monte Carlo methods, which are implemented in parallel and allow for chain interaction. The source mechanism and location, including uncertainties, are fully described by the posterior probability density, and complex trade-offs between various metrics are studied. These include the percentage of double-couple component as well as fault orientation, and the probabilistic results are compared to results from earthquake catalogs. Additional focus is on the analysis of complex events, which are commonly not well described by a single point source. These events are studied by jointly inverting for multiple centroid moment tensor solutions. The optimal number of sources is estimated by the Bayesian information criterion to ensure parsimonious solutions. [Supported by NSERC.]
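The core of such a sampler can be illustrated with a plain Metropolis chain on a toy two-parameter posterior. The Gaussian target, step size and chain length below are all invented for illustration; the paper uses interacting parallel chains on a genuine waveform-misfit posterior, which this single chain only hints at.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_posterior(m):
    """Toy 2-D posterior: a Gaussian centred at (1.0, -0.5) with
    standard deviation 0.2 stands in for the waveform-misfit likelihood."""
    return -0.5 * np.sum((m - np.array([1.0, -0.5])) ** 2 / 0.04)

# Plain Metropolis sampling with a Gaussian random-walk proposal.
m = np.zeros(2)
lp = log_posterior(m)
samples = []
for _ in range(20000):
    prop = m + 0.1 * rng.normal(size=2)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        m, lp = prop, lp_prop
    samples.append(m)
samples = np.array(samples[5000:])  # discard burn-in

print(samples.mean(axis=0))  # ≈ [1.0, -0.5]
```

The retained samples approximate the posterior density, so means, credible intervals and trade-offs between parameters can all be read off the sample cloud rather than from a single best-fit point.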
Wang, Dafang; Kirby, Robert M.; MacLeod, Rob S.; Johnson, Chris R.
2013-01-01
With the goal of non-invasively localizing cardiac ischemic disease using body-surface potential recordings, we attempted to reconstruct the transmembrane potential (TMP) throughout the myocardium with the bidomain heart model. The task is an inverse source problem governed by partial differential equations (PDEs). Our main contribution is solving the inverse problem within a PDE-constrained optimization framework that enables various physically-based constraints in both equality and inequality forms. We formulated the optimality conditions rigorously in the continuum before deriving the finite element discretization, thereby making the optimization independent of the discretization choice. Such a formulation was derived for both L2-norm Tikhonov regularization and total variation minimization. The subsequent numerical optimization was fulfilled by a primal-dual interior-point method tailored to our problem’s specific structure. Our simulations used realistic, fiber-included heart models consisting of up to 18,000 nodes, much finer than any inverse models previously reported. With synthetic ischemia data, we localized ischemic regions with roughly a 10% false-negative rate or a 20% false-positive rate under up to 5% input noise. With ischemia data measured from animal experiments, we reconstructed TMPs with roughly 0.9 correlation with the ground truth. While precisely estimating the TMP in general cases remains an open problem, our study shows the feasibility of reconstructing the TMP during the ST interval as a means of ischemia localization. PMID: 23913980
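The L2-norm Tikhonov regularization mentioned above has a compact linear-algebra core: minimize ||Ax - b||² + λ||x||², whose normal equations are (AᵀA + λI)x = Aᵀb. A minimal sketch on an invented, severely ill-conditioned operator (a Hilbert matrix standing in for a discretized source-to-measurement map, nothing like the paper's bidomain model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical ill-conditioned forward operator and noisy data.
n = 12
i, j = np.indices((n, n))
A = 1.0 / (i + j + 1.0)          # Hilbert matrix, condition number ~1e16
x_true = np.ones(n)
b = A @ x_true + 1e-8 * rng.normal(size=n)

# L2-norm Tikhonov regularization: solve (A^T A + lam I) x = A^T b.
lam = 1e-10
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Naive (unregularized) solve amplifies the tiny noise enormously.
x_naive = np.linalg.solve(A, b)

print(np.linalg.norm(x_reg - x_true), np.linalg.norm(x_naive - x_true))
```

Even a vanishingly small λ tames the noise amplification that destroys the naive solution; choosing λ well (and moving to total-variation penalties for edge-preserving reconstructions, as the paper does) is the substantive part of the method.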
Compton suppression naa in the analysis of food and beverages
International Nuclear Information System (INIS)
Ahmed, Y.A.; Ewa, I.O.B.; Umar, I.M.; Funtua, I.I.; Lanberger, S.; O'kelly, D.J.; Braisted, J.D.
2009-01-01
Applicability and performance of the Compton suppression method in the analysis of food and beverages was re-established in this study. Using 137Cs and 60Co point sources, the Compton Suppression Factors (SF), Compton Reduction Factors (RF), Peak-to-Compton ratio (P/C), Compton Plateau (C_pl), and Compton Edge (C_e) were determined for each of the two sources. The natural background reduction factors in the anticoincidence mode compared to the normal mode were evaluated. The reported RF values of various Compton spectrometers for a 60Co source at energies of 50-210 keV (backscattering region), 600 keV (Compton edge corresponding to the 1173.2 keV gamma-ray) and 1110 keV (Compton edge corresponding to the 1332.5 keV gamma-ray) were compared with those of the present work. Similarly, the SF values of the spectrometers for a 137Cs source were compared at the backscattered energy region (SF_b = 191-210 keV), the Compton Plateau (SF_pl = 350-370 keV), and the Compton Edge (SF_e = 471-470 keV), and all were found to follow a similar trend. We also compared the peak reduction ratios for the two cobalt energies (1173.2 and 1332.5 keV) with the ones reported in the literature, and the two results agree well. Applicability of the method to food and beverages was put to the test for twenty-one major, minor, and trace elements (Ba, Sr, I, Br, Cu, V, Mg, Na, Cl, Mn, Ca, Sn, K, Cd, Zn, As, Sb, Ni, Cs, Fe, and Co) commonly found in food, milk, tea and tobacco. The elements were assayed using five National Institute of Standards and Technology (NIST) certified reference materials (non-fat powdered milk, apple leaves, tomato leaves, and citrus leaves). The results obtained show good agreement with NIST certified values, indicating that the method is suitable for simultaneous determination of micro-nutrients, macro-nutrients and heavy elements in food and beverages without undue interference problems
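The Compton-edge energies for these sources follow from the standard maximum-energy-transfer formula, E_C = E / (1 + m_e c² / 2E) with m_e c² = 511 keV; the 137Cs edge computes to ≈477 keV, close to the window quoted above. This is textbook physics, not a calculation taken from the paper:

```python
# Compton edge: maximum energy transferred to the electron in Compton
# scattering, E_C = E / (1 + m_e c^2 / (2 E)), all energies in keV.
ME_C2 = 511.0  # electron rest energy, keV

def compton_edge(e_gamma_kev: float) -> float:
    return e_gamma_kev / (1.0 + ME_C2 / (2.0 * e_gamma_kev))

# 137Cs (661.7 keV) and the two 60Co lines (1173.2 and 1332.5 keV):
for e in (661.7, 1173.2, 1332.5):
    print(f"{e:7.1f} keV gamma -> Compton edge at {compton_edge(e):7.1f} keV")
```

The edge marks where the continuum that suppression aims to remove cuts off, so these energies delimit the regions over which suppression and reduction factors are meaningfully compared.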
On solvability for inverse problem of compact support source determination for the heat equation
Soloviev, V. V.; Tkachenko, D. S.
2017-12-01
An inverse problem of reconstructing the source term for the heat equation on a plane is considered. As an “overdetermination” (additional information about the solution of the direct problem), a trace of its solution is given on a line segment inside a bounded region. We give sufficient conditions for uniqueness of the solution of the problem, prove a Fredholm alternative, and give sufficient conditions for existence and uniqueness of the solution. The problem is studied in spaces of functions satisfying a Hölder condition.
Gross, Lutz; Altinay, Cihan; Fenwick, Joel; Smith, Troy
2014-05-01
The program package escript has been designed for solving mathematical modeling problems using Python; see Gross et al. (2013). Its development and maintenance have been funded by the Australian Commonwealth to provide open source software infrastructure for the Australian Earth Science community (recent funding by the Australian Geophysical Observing System EIF (AGOS) and the AuScope Collaborative Research Infrastructure Scheme (CRIS)). The key concepts of escript are based on the terminology of spatial functions and partial differential equations (PDEs) - an approach providing abstraction from the underlying spatial discretization method (i.e. the finite element method (FEM)). This feature presents a programming environment to the user which is easy to use even for complex models. Because implementations are independent of the data structures, simulations are easily portable across desktop computers and scalable compute clusters without modifications to the program code. escript has been successfully applied in a variety of applications including modeling mantle convection, melting processes, volcanic flow, earthquakes, faulting, multi-phase flow, block caving and mineralization (see Poulet et al. 2013). The recent escript release (see Gross et al. (2013)) provides an open framework for solving joint inversion problems for geophysical data sets (potential field, seismic and electro-magnetic). The strategy is based on the idea of formulating the inversion problem as an optimization problem with PDE constraints, where the cost function is defined by the data defect and the regularization term for the rock properties; see Gross & Kemp (2013). This first-optimize-then-discretize approach avoids the assemblage of the (in general dense) sensitivity matrix as used in conventional approaches, where discrete programming techniques are applied to the discretized problem (first-discretize-then-optimize). In this paper we will discuss the mathematical framework for
Full–waveform inversion using the excitation representation of the source wavefield
Kalita, Mahesh
2016-09-06
Full waveform inversion (FWI) is an iterative method of data-fitting, aiming at high resolution recovery of the unknown model parameters. However, it is a cumbersome process, requiring a long computational time and large memory space/disc storage. One of the reasons for this computational limitation is the gradient calculation step. Based on the adjoint state method, it involves the temporal cross-correlation of the forward propagated source wavefield with the backward propagated residuals, in which we usually need to store the source wavefield, or include an extra extrapolation step to propagate the source wavefield from its storage at the boundary. We propose, alternatively, an amplitude excitation gradient calculation based on the excitation imaging condition concept that represents the source wavefield history by a single, specifically the most energetic arrival. An excitation based Born modeling allows us to derive the adjoint operation. In this case, the source wavelet is injected by a cross-correlation step applied to the data residual directly. Representing the source wavefield through the excitation amplitude and time, we reduce the large requirements for both storage and the computational time. We demonstrate the application of this approach on a 2-layer model with an anomaly and the Marmousi II model.
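The excitation representation described above replaces the full time history of the source wavefield with two numbers per grid point: the amplitude of the most energetic arrival and its time. A minimal sketch on an invented 1-D wavefield (a moving Gaussian pulse standing in for a real finite-difference simulation):

```python
import numpy as np

# Hypothetical forward-propagated source wavefield u(t, x): a Gaussian
# pulse arriving at grid point x at time 20 + 1.5*x (all in samples).
nt, nx = 200, 100
t = np.arange(nt)[:, None]
x = np.arange(nx)[None, :]
u = np.exp(-0.5 * ((t - 1.5 * x - 20) / 5.0) ** 2)

# Excitation representation: keep only the most energetic arrival at each
# grid point -- its time index and amplitude -- instead of the full history.
exc_time = np.abs(u).argmax(axis=0)  # one integer per grid point
exc_amp = np.abs(u).max(axis=0)      # one float per grid point

# Storage drops from nt*nx wavefield samples to 2*nx values.
print(u.size, exc_time.size + exc_amp.size)
```

For this pulse the excitation time at x = 0 is sample 20 and the excitation amplitude is 1, exactly the peak of the arrival; in the FWI gradient, cross-correlation against the stored wavefield is then replaced by a single scaled, time-shifted contribution per point.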
Scherbaum, Frank
1990-08-01
The estimation of Q values and/or source corner frequencies fc from single-station narrow-band recordings of microearthquake spectra is a strongly nonunique problem. This is due to the fact that the spectra can be fitted equally well with low-Q/high-fc or high-Q/low-fc spectral models. Here, a method is proposed to constrain this ambiguity by inverting a set of microearthquake spectra simultaneously for a three-dimensional Q model structure and the source parameters seismic moment (Mo) and corner frequency (fc). The inversion for whole-path Q can be stated as a linear problem in the attenuation operator t* and solved using a tomographic reconstruction of the three-dimensional Q structure. This Q structure is then used as a "geometrical constraint" for a nonlinear Marquardt-Levenberg inversion of Mo, fc, and a new Q value. The first step of the method consists of interactively fitting the observed microearthquake spectra with spectral models consisting of a source spectrum with an assumed high-frequency decay, a single-layer resonance filter to account for local site effects, and additional "whole path attenuation" along the ray path. From the obtained Q values, a three-dimensional Q model is calculated using a tomographic reconstruction technique (SIRT). The individual Q values along each ray path are then used as starting Q values for a nonlinear iterative Marquardt-Levenberg inversion of Mo, fc, and a "new" Q value. Subsequently, the "new" Q values are used to reconstruct the next Q model, which again provides starting values for the next nonlinear inversion of Mo, fc, and Q. This process is repeated until the "goodness of fit" measure indicates no further improvement of the results. The method has been tested on a set of approximately 2800 P wave spectra (0.9 effects close to the surface shows that site effects may cause a corruption of the resulting Q model at shallow depths. For the given data set and depths below 3-5 km, the method is believed to be
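The statement that whole-path attenuation is linear in t* can be seen from the standard spectral model A(f) = A0·exp(-π f t*): taking logs gives ln A(f) = ln A0 - π t* f, a straight line in frequency. A sketch with invented numbers (flat source spectrum, multiplicative noise; a real spectrum would also carry the fc falloff and site resonance the paper models):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical observed P-wave amplitude spectrum attenuated as
# A(f) = A0 * exp(-pi * f * t*), where t* = travel time / Q.
f = np.linspace(1.0, 30.0, 60)       # Hz
A0_true, tstar_true = 2.0e-6, 0.04   # arbitrary units, seconds
A = A0_true * np.exp(-np.pi * f * tstar_true)
A *= np.exp(0.02 * rng.normal(size=f.size))  # multiplicative noise

# Taking logs makes the problem linear in (ln A0, t*):
# ln A(f) = ln A0 - pi * t* * f, so a straight-line fit recovers t*.
slope, intercept = np.polyfit(f, np.log(A), 1)
tstar_est = -slope / np.pi
print(tstar_est)  # close to 0.04
```

One such t* per ray path is exactly the kind of datum the SIRT tomography above assembles into a three-dimensional Q model.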
Lloyd, Stephen F.
The purpose of this research is to test the effectiveness of forward and inverse modeling approaches in wave propagation analysis problems with complex settings and scenarios that include fluid-solid interfaces, non-stationary sources, and non-point sources not previously investigated. The research is made up of three components. First, finite element method modeling and a genetic algorithm are employed to assess the feasibility of using inverse modeling to determine the thickness of the surface ice on Europa, one of Jupiter's moons, and the depth of a possible subsurface ocean. The feasibility study presented in this dissertation considers the specific case in which inverse modeling might be used to determine the depths of ice and ocean layers on Europa for a possible space mission in which the effects of a spacecraft-released impactor on Europa's surface are measured. Second, reconstructing dynamic distributed loads, such as truck loads on highways, requires inverting for large numbers of parameters. To address solving for the large number of unknowns in such problems, an adjoint-method-based acoustic-source inversion procedure for reconstructing multiple moving, non-point acoustic sources is developed and tested with numerical experiments. Third, forward modeling of moving sources in three-dimensional (3D) settings is tested with numerical experiments using SPECFEM3D, an open source spectral element method program. Researching forward modeling for complicated scenarios such as moving acoustic sources in fluid-solid coupled systems in 3D is an important step toward using SPECFEM3D for moving-source inversion problems in 3D. The conclusions of the research presented are as follows: It is feasible to estimate the thickness of the ice layer on the surface of Europa and the depth of a subsurface ocean with inverse modeling based on measured wave motions in the ice caused by a planned impact. The adjoint method is effective in reconstructing large numbers of acoustic
Landquake dynamics inferred from seismic source inversion: Greenland and Sichuan events of 2017
Chao, W. A.
2017-12-01
In June 2017, two catastrophic landquake events occurred in Greenland and Sichuan. The Greenland event led to a tsunami hazard in the small town of Nuugaarsiaq. The landquake in Sichuan hit a town and resulted in over 100 deaths. Both events generated strong seismic signals recorded by the real-time global seismic network. I adopt an inversion algorithm to derive the landquake force time history (LFH) using long-period waveforms, so that the landslide volume (~76 million m3) can be rapidly estimated, facilitating tsunami-wave modeling for early warning purposes. Based on an integrated approach involving tsunami forward simulation and seismic waveform inversion, this study has significant implications for issuing actionable warnings before hazardous tsunami waves strike populated areas. A two single-force (SF) mechanism (two-block model) yields the best explanation for the Sichuan event, which suggests that a secondary event (seismically inferred volume: ~8.2 million m3) may have been mobilized by collapse mass impacting from the initial rock avalanche (~5.8 million m3), likely causing a catastrophic disaster. The later source, with a force magnitude of 0.9967×10^11 N, occurred 70 seconds after the first mass movement. In contrast, the first event has a smaller force magnitude of 0.8116×10^11 N. In conclusion, seismically inferred physical parameters will substantially contribute to improving our understanding of landquake source mechanisms and to mitigating similar hazards in other parts of the world.
Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.
Liu, Y.; Li, Y.
2016-12-01
We present a 2D inverse algorithm for frequency domain marine controlled-source electromagnetic (CSEM) data, which is based on the regularized Gauss-Newton approach. As a forward solver, our parallel adaptive finite element forward modeling program is employed. It is a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting to bias refinement towards elements that affect the solution at the EM receiver locations. With the use of the direct solver (MUMPS), we can effectively compute the electromagnetic fields for multiple sources and the parametric sensitivities. We also implement the parallel data domain decomposition approach of Key and Ovall (2011), with the goal of being able to compute accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out using the Gauss-Newton algorithm, and model perturbations at each iteration step are obtained using the inexact Conjugate Gradient iteration method. Synthetic test inversions are presented.
Global inverse modeling of CH4 sources and sinks: an overview of methods
Houweling, Sander; Bergamaschi, Peter; Chevallier, Frederic; Heimann, Martin; Kaminski, Thomas; Krol, Maarten; Michalak, Anna M.; Patra, Prabir
2017-01-01
The aim of this paper is to present an overview of inverse modeling methods that have been developed over the years for estimating the global sources and sinks of CH4. It provides insight into how techniques and estimates have evolved over time and what the remaining shortcomings are. As such, it serves a didactical purpose of introducing apprentices to the field, but it also takes stock of developments so far and reflects on promising new directions. The main focus is on methodological aspects that are particularly relevant for CH4, such as its atmospheric oxidation, the use of methane isotopologues, and specific challenges in atmospheric transport modeling of CH4. The use of satellite retrievals receives special attention as it is an active field of methodological development, with special requirements on the sampling of the model and the treatment of data uncertainty. Regional scale flux estimation and attribution is still a grand challenge, which calls for new methods capable of combining information from multiple data streams of different measured parameters. A process model representation of sources and sinks in atmospheric transport inversion schemes allows the integrated use of such data. These new developments are needed not only to improve our understanding of the main processes driving the observed global trend but also to support international efforts to reduce greenhouse gas emissions.
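The workhorse of the inverse modeling methods surveyed above is the linear Bayesian flux update: with prior fluxes x_prior, prior covariance B, observation operator H and observation covariance R, the posterior is x_post = x_prior + BHᵀ(HBHᵀ + R)⁻¹(y − Hx_prior). A toy sketch with invented dimensions and covariances (nothing here comes from a real transport model):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy linear inversion: H maps n regional CH4 fluxes to m concentration
# observations; B and R are prior and observation error covariances.
n, m = 8, 30
H = rng.uniform(0.0, 1.0, size=(m, n))
x_true = rng.uniform(10.0, 50.0, size=n)   # "true" fluxes (illustrative units)
x_prior = np.full(n, 30.0)
B = np.eye(n) * 15.0 ** 2
R = np.eye(m) * 1.0 ** 2
y = H @ x_true + rng.normal(0.0, 1.0, size=m)

# Bayesian (best linear unbiased) update:
# x_post = x_prior + B H^T (H B H^T + R)^-1 (y - H x_prior)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_post = x_prior + K @ (y - H @ x_prior)

print(np.linalg.norm(x_post - x_true), np.linalg.norm(x_prior - x_true))
```

The posterior pulls the prior toward the observations in proportion to the error covariances, which is why the paper stresses that flux estimates are sensitive to how B and R are specified.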
An Inverse Source Problem for a One-dimensional Wave Equation: An Observer-Based Approach
Asiri, Sharefa M.
2013-05-25
Observers are well known in the theory of dynamical systems. They are used to estimate the states of a system from some measurements. Recently, however, observers have also been developed to estimate unknowns of systems governed by partial differential equations. Our aim is to design an observer to solve an inverse source problem for a one-dimensional wave equation. First, the problem is discretized in both space and time; then an adaptive observer based on partial field measurements (i.e., measurements taken from the solution of the wave equation) is applied to estimate both the states and the source. We show the effectiveness of this observer in both noise-free and noisy cases. In each case, numerical simulations are provided to illustrate the effectiveness of this approach. Finally, we compare the performance of the observer approach with the Tikhonov regularization approach.
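The idea of jointly estimating states and an unknown source from partial measurements can be sketched with a Luenberger observer on a source-augmented system. The tiny 2-state system, the constant source, and the pole locations below are all invented for illustration; the paper works with a discretized wave equation and an adaptive observer, which this sketch only hints at.

```python
import numpy as np
from scipy.signal import place_poles

# Toy discrete-time system x_{k+1} = A x_k + b s with unknown constant
# source s, measured only through its first state (partial measurement).
A = np.array([[0.9, 0.1], [-0.1, 0.9]])
b = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Augment the state with the source: z = [x1, x2, s], s_{k+1} = s_k.
A_aug = np.block([[A, b], [np.zeros((1, 2)), np.ones((1, 1))]])
C_aug = np.hstack([C, np.zeros((1, 1))])

# Observer gain via pole placement on the dual (transposed) system.
L = place_poles(A_aug.T, C_aug.T, [0.5, 0.4, 0.3]).gain_matrix.T

s_true = 2.5
x = np.zeros((2, 1))       # true state
z_hat = np.zeros((3, 1))   # observer estimate of [x1, x2, s]
for _ in range(200):
    y = C @ x                                      # partial measurement
    z_hat = A_aug @ z_hat + L @ (y - C_aug @ z_hat)  # Luenberger update
    x = A @ x + b * s_true                         # true dynamics

print(z_hat.ravel())  # last entry ≈ the unknown source, 2.5
```

Because the estimation error obeys e_{k+1} = (A_aug − L·C_aug)e_k with all eigenvalues placed inside the unit circle, both the state estimate and the source estimate converge from measurements of a single state component.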
Seismic signal simulation and study of underground nuclear sources by moment inversion
International Nuclear Information System (INIS)
Crusem, R.
1986-09-01
Some problems of underground nuclear explosions are examined from the seismological point of view. In the first part, a model is developed for mean seismic propagation through the lagoon of Mururoa atoll and for the calculation of synthetic seismograms (at intermediate-field distances of 5 to 20 km) by summation of discrete wave numbers. In the second part, this ground model is used with a linear inversion method for seismic moments to estimate the elastic source terms equivalent to the nuclear source. Only the isotropic part is investigated; solution stability is increased by using spectral smoothing and a minimal-phase hypothesis. Some examples of applications are presented: total energy estimation of a nuclear explosion, and simulation of mechanical effects induced by an underground explosion.
Directory of Open Access Journals (Sweden)
Brian J. Gaudet
2017-10-01
The Indianapolis Flux Experiment (INFLUX) aims to quantify and improve the effectiveness of inferring greenhouse gas (GHG) source strengths from downstream concentration measurements in urban environments. Mesoscale models such as the Weather Research and Forecasting (WRF) model can provide realistic depictions of planetary boundary layer (PBL) structure and flow fields at horizontal grid lengths (Δx) down to a few km. Nevertheless, a number of potential sources of error exist in the use of mesoscale models for urban inversions, including accurate representation of the dispersion of GHGs by turbulence close to a point source. Here we evaluate the predictive skill of a 1-km chemistry-adapted WRF (WRF-Chem) simulation of daytime CO2 transport from an Indianapolis power plant for a single INFLUX case (28 September 2013). We compare the simulated plume release on domains at different resolutions, as well as on a domain run in large eddy simulation (LES) mode, enabling us to study the impact of both spatial resolution and the parameterization of PBL turbulence on the transport of CO2. Sensitivity tests demonstrate that much of the difference between the 1-km mesoscale and 111-m LES plumes, including substantially lower maximum concentrations in the mesoscale simulation, is due to the different horizontal resolutions. However, resolution is insufficient to account for the slower rate of ascent of the LES plume with downwind distance, which results in much higher surface concentrations for the LES plume in the near field but a near absence of tracer aloft. Physics sensitivity experiments and theoretical analytical models demonstrate that this effect is an inherent problem with the parameterization of turbulent transport in the mesoscale PBL scheme. A simple transformation is proposed that may be applied to mesoscale model concentration footprints to correct for their near-field biases. Implications for longer-term source inversion are discussed.
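The "theoretical analytical models" such plume comparisons are benchmarked against are typically of the textbook Gaussian-plume form. A minimal sketch, far simpler than the WRF-Chem transport in the paper, with stack height, wind speed and dispersion parameters all chosen purely for illustration:

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration (g/m^3) from a point
    source of strength q (g/s) at effective height h (m), in a mean wind
    u (m/s); includes the ground-reflection image term."""
    return (q / (2.0 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-0.5 * (y / sigma_y) ** 2)
            * (np.exp(-0.5 * ((z - h) / sigma_z) ** 2)
               + np.exp(-0.5 * ((z + h) / sigma_z) ** 2)))

# Ground-level centreline concentration at a downwind distance where the
# dispersion parameters have grown to roughly 160 m and 80 m (illustrative).
c = gaussian_plume(q=1000.0, u=5.0, y=0.0, z=0.0, h=150.0,
                   sigma_y=160.0, sigma_z=80.0)
print(c)
```

Because σ_y and σ_z encode how fast the plume spreads and rises toward the ground, biases in a mesoscale model's effective dispersion show up directly as the near-field surface concentration errors discussed above.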
International Nuclear Information System (INIS)
Ahuja, B.L.; Heda, N.L.
2006-01-01
In this paper we report on electron momentum densities in ZnSe using the Compton scattering technique. For the directional measurements we employed a newly developed 100 mCi 241Am Compton spectrometer, based on a small disc source in a short-geometry configuration. For the theoretical calculations we employed a self-consistent Hartree-Fock linear combination of atomic orbitals (HF-LCAO) approach. The anisotropy in the measured Compton profiles is well reproduced by our HF-LCAO calculation and by the available pseudopotential data, and is explained in terms of energy bands and bond length. - PACS numbers: 13.60.Fz, 78.70.Ck, 78.70.-g (orig.)
Efficient full waveform inversion using the excitation representation of the source wavefield
Kalita, Mahesh
2017-05-16
Full waveform inversion (FWI) is an iterative data-fitting method aiming at high-resolution recovery of the unknown model parameters. However, its conventional implementation is a cumbersome process, requiring a long computational time and large memory/disk storage. One reason for this computational limitation is the gradient calculation step. Based on the adjoint state method, it involves the temporal cross-correlation of the forward-propagated source wavefield with the backward-propagated residuals, in which we usually need to store the source wavefield, or include an extra extrapolation step to propagate the source wavefield from its storage at the boundary. We propose, alternatively, an amplitude excitation gradient calculation based on the excitation imaging condition concept, which represents the source wavefield history by a single arrival, specifically the most energetic one. An excitation-based Born modelling allows us to derive the adjoint operation. In this case, the source wavelet is injected by a cross-correlation step applied to the data residual directly. Representing the source wavefield through the excitation amplitude and time, we reduce the large requirements for both storage and computational time. We demonstrate the application of this approach on a two-layer model with an anomaly, the Marmousi II model, and a marine data set acquired by CGG.
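The storage saving at the heart of this representation can be illustrated with a toy sketch (a hypothetical 1-D acoustic example with invented grid and wavelet parameters, not the authors' code): instead of keeping the full space-time source wavefield, each grid point retains only its most energetic arrival's amplitude and time index.

```python
import numpy as np

def forward_with_excitation(nx=201, nt=300, dx=5.0, dt=1e-3, c=2000.0, src=100):
    """1-D acoustic forward modelling that keeps, per grid point, only the
    excitation amplitude (most energetic arrival) and its time index,
    instead of the full nt-by-nx source-wavefield history."""
    r = (c * dt / dx) ** 2                 # squared Courant number (0.16, stable)
    u_prev, u = np.zeros(nx), np.zeros(nx)
    exc_amp = np.zeros(nx)                 # excitation amplitude per point
    exc_t = np.zeros(nx, dtype=int)        # excitation time index per point
    f0 = 25.0                              # Ricker wavelet peak frequency (Hz)
    for it in range(nt):
        t = it * dt - 1.2 / f0
        wav = (1 - 2 * (np.pi * f0 * t) ** 2) * np.exp(-(np.pi * f0 * t) ** 2)
        u_next = np.zeros(nx)
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + r * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_next[src] += dt ** 2 * wav       # inject the source wavelet
        stronger = np.abs(u_next) > np.abs(exc_amp)
        exc_amp[stronger] = u_next[stronger]
        exc_t[stronger] = it
        u_prev, u = u, u_next
    return exc_amp, exc_t                  # 2*nx numbers instead of nt*nx
```

Storage drops from nt × nx wavefield samples to 2 × nx numbers; in the excitation approach the gradient is then formed by cross-correlating the residuals with the wavelet placed at the stored excitation times rather than with the full wavefield history.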
Nelson, N.; Azmy, Y.; Gardner, R. P.; Mattingly, J.; Smith, R.; Worrall, L. G.; Dewji, S.
2017-11-01
Detector response functions (DRFs) are often used for inverse analysis. We compute the DRF of a sodium iodide (NaI) nuclear material holdup field detector using the code named g03, developed by the Center for Engineering Applications of Radioisotopes (CEAR) at NC State University. Three measurement campaigns were performed in order to validate the DRFs constructed by g03: on-axis detection of calibration sources, off-axis measurements of a highly enriched uranium (HEU) disk, and on-axis measurements of the HEU disk with steel plates inserted between the source and the detector to provide attenuation. Furthermore, this work quantifies the uncertainty of the Monte Carlo simulations used in and with g03, as well as the uncertainties associated with each semi-empirical model employed in the full DRF representation. Overall, for the calibration source measurements, the response computed by the DRF for the full-energy peak region was good, i.e. within two standard deviations of the experimental response. In contrast, the DRF tended to overestimate the Compton continuum by about 45-65% due to inadequate tuning of the electron range multiplier fit variable that empirically represents physics associated with electron transport not modeled explicitly in g03. For the HEU disk measurements, computed DRF responses tended to significantly underestimate (by more than 20%) the secondary full-energy peaks (any peak of lower energy than the highest-energy peak computed) due to scattering in the detector collimator and aluminum can, which is not included in the g03 model. For all of the Monte Carlo simulations, we ran a sufficiently large number of histories to ensure that the statistical uncertainties were lower than their experimental counterparts' Poisson uncertainties. The uncertainties associated with least-squares fits to the experimental data tended to have parameter relative standard deviations lower than the peak channel relative standard
Inverse analysis and regularisation in conditional source-term estimation modelling
Labahn, Jeffrey W.; Devaud, Cecile B.; Sipkens, Timothy A.; Daun, Kyle J.
2014-05-01
Conditional Source-term Estimation (CSE) obtains the conditional species mass fractions by inverting a Fredholm integral equation of the first kind. In the present work, a Bayesian framework is used to compare two different regularisation methods: zeroth-order temporal Tikhonov regularisation and first-order spatial Tikhonov regularisation. The objectives of the current study are: (i) to elucidate the ill-posedness of the inverse problem; (ii) to understand the origin of the perturbations in the data and quantify their magnitude; (iii) to quantify the uncertainty in the solution using different priors; and (iv) to determine the regularisation method best suited to this problem. A singular value decomposition shows that the current inverse problem is ill-posed. Perturbations to the data may be caused by the use of a discrete mixture fraction grid for calculating the mixture fraction PDF. The magnitude of the perturbations is estimated using a box filter and the uncertainty in the solution is determined based on the width of the credible intervals. The width of the credible intervals is significantly reduced with the inclusion of a smoothing prior and the recovered solution is in better agreement with the exact solution. The credible intervals for temporal and spatial smoothing are shown to be similar. Credible intervals for temporal smoothing depend on the solution from the previous time step and a smooth solution is not guaranteed. For spatial smoothing, the credible intervals are not dependent upon a previous solution and better predict characteristics for higher mixture fraction values. These characteristics make spatial smoothing a promising alternative method for recovering a solution from the CSE inversion process.
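The ill-posedness and the two regularisation flavours compared in the study can be illustrated on a generic discretised Fredholm equation of the first kind (a schematic numpy sketch with an invented smoothing kernel and noise level, not the CSE model itself):

```python
import numpy as np

# Discretise a Fredholm integral of the first kind: b(s) = ∫ K(s,t) x(t) dt
n = 50
t = np.linspace(0, 1, n)
dt = t[1] - t[0]
K = np.exp(-(t[:, None] - t[None, :])**2 / (2 * 0.05**2)) * dt  # smoothing kernel
x_true = np.sin(np.pi * t)                                      # smooth exact solution
rng = np.random.default_rng(0)
b = K @ x_true + 1e-4 * rng.standard_normal(n)                  # perturbed data

# Naive inversion amplifies the small perturbation enormously (ill-posedness)
x_naive = np.linalg.pinv(K, rcond=1e-15) @ b

# Zeroth-order Tikhonov: minimise ||Kx - b||^2 + lam^2 ||x||^2
lam = 1e-3
x_tik = np.linalg.solve(K.T @ K + lam**2 * np.eye(n), K.T @ b)

# First-order (smoothing) Tikhonov: penalise ||Lx||^2, L a difference operator
L = np.diff(np.eye(n), axis=0)
x_tik1 = np.linalg.solve(K.T @ K + lam**2 * (L.T @ L), K.T @ b)
```

Either penalty tames the noise amplification; the first-order operator additionally encodes the smoothing prior that, per the abstract, narrows the credible intervals.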
Bagnardi, M.; Hooper, A. J.
2017-12-01
Inversions of geodetic observational data, such as Interferometric Synthetic Aperture Radar (InSAR) and Global Navigation Satellite System (GNSS) measurements, are often performed to obtain information about the source of surface displacements. Inverse problem theory has been applied to study magmatic processes, the earthquake cycle, and other phenomena that cause deformation of the Earth's interior and of its surface. Together with increasing improvements in data resolution, both spatial and temporal, new satellite missions (e.g., European Commission's Sentinel-1 satellites) are providing the unprecedented opportunity to access space-geodetic data within hours from their acquisition. To take full advantage of these opportunities, we must be able to interpret geodetic data in a rapid and robust manner. Here we present the open-source Geodetic Bayesian Inversion Software (GBIS; available for download at http://comet.nerc.ac.uk/gbis). GBIS is written in Matlab and offers a series of user-friendly and interactive pre- and post-processing tools. For example, an interactive function has been developed to estimate the characteristics of noise in InSAR data by calculating the experimental semi-variogram. The inversion software uses a Markov-chain Monte Carlo algorithm, incorporating the Metropolis-Hastings algorithm with adaptive step size, to efficiently sample the posterior probability distribution of the different source parameters. The probabilistic Bayesian approach allows the user to retrieve estimates of the optimal (best-fitting) deformation source parameters together with the associated uncertainties produced by errors in the data (and, by scaling, errors in the model). The current version of GBIS (V1.0) includes fast analytical forward models for magmatic sources of different geometry (e.g., point source, finite spherical source, prolate spheroid source, penny-shaped sill-like source, and dipping-dike with uniform opening) and for dipping faults with uniform
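The sampler at the core of such software, random-walk Metropolis-Hastings with a step size adapted toward a target acceptance rate, can be sketched generically (a toy one-parameter "source strength" posterior; the tuning rule is illustrative and not GBIS's actual adaptation scheme):

```python
import numpy as np

def mh_adaptive(log_post, x0, n_iter=6000, target_acc=0.25, seed=0):
    """Random-walk Metropolis-Hastings whose proposal width is tuned toward a
    target acceptance rate during burn-in, then frozen."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    lp = log_post(x)
    step = 1.0
    burn = n_iter // 3
    chain = np.empty((n_iter, x.size))
    n_acc = 0
    for i in range(n_iter):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
            n_acc += 1
        if i < burn and (i + 1) % 50 == 0:         # adapt only during burn-in
            step *= np.exp(n_acc / (i + 1) - target_acc)
        chain[i] = x
    return chain[burn:]                            # discard burn-in samples

# Toy posterior: one source-strength parameter with a Gaussian likelihood
obs, sigma = 3.0, 0.5
log_post = lambda m: -0.5 * ((m[0] - obs) / sigma) ** 2
samples = mh_adaptive(log_post, [0.0])
```

The retained samples approximate the posterior, so parameter estimates and their uncertainties come directly from the sample mean and spread, which is the appeal of the probabilistic approach described above.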
Reproducible Hydrogeophysical Inversions through the Open-Source Library pyGIMLi
Wagner, F. M.; Rücker, C.; Günther, T.
2017-12-01
Many tasks in applied geosciences cannot be solved by a single measurement method and require the integration of geophysical, geotechnical and hydrological methods. In the emerging field of hydrogeophysics, researchers strive to gain quantitative information on process-relevant subsurface parameters by means of multi-physical models, which simulate the dynamic process of interest as well as its geophysical response. However, such endeavors are associated with considerable technical challenges, since they require coupling of different numerical models. This represents an obstacle for many practitioners and students. Even technically versatile users tend to build individually tailored solutions by coupling different (and potentially proprietary) forward simulators at the cost of scientific reproducibility. We argue that the reproducibility of studies in computational hydrogeophysics, and therefore the advancement of the field itself, requires versatile open-source software. To this end, we present pyGIMLi - a flexible and computationally efficient framework for modeling and inversion in geophysics. The object-oriented library provides management for structured and unstructured meshes in 2D and 3D, finite-element and finite-volume solvers, various geophysical forward operators, as well as Gauss-Newton based frameworks for constrained, joint and fully-coupled inversions with flexible regularization. In a step-by-step demonstration, it is shown how the hydrogeophysical response of a saline tracer migration can be simulated. Tracer concentration data from boreholes and measured voltages at the surface are subsequently used to estimate the hydraulic conductivity distribution of the aquifer within a single reproducible Python script.
A matrix-inversion method for gamma-source mapping from gamma-count data - 59082
International Nuclear Information System (INIS)
Bull, Richard K.; Adsley, Ian; Burgess, Claire
2012-01-01
Gamma ray counting is often used to survey the distribution of active waste material in various locations. Ideally the output from such surveys would be a map of the activity of the waste. In this paper a simple matrix-inversion method is presented. This allows an array of gamma-count data to be converted to an array of source activities. For each survey area the response matrix is computed using the gamma-shielding code Microshield [1]. This matrix links the activity array to the count array. The activity array is then obtained via matrix inversion. The method was tested on artificially-created arrays of count-data onto which statistical noise had been added. The method was able to reproduce, quite faithfully, the original activity distribution used to generate the dataset. The method has been applied to a number of practical cases, including the distribution of activated objects in a hot cell and to activated Nimonic springs amongst fuel-element debris in vaults at a nuclear plant. (authors)
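The procedure reduces to one linear solve once the response matrix is known. A schematic numpy sketch (an invented inverse-square-law response stands in for the Microshield-computed matrix; real count data would also carry Poisson noise, calling for least squares or regularisation):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16                                   # 4x4 grid of waste cells / survey positions
xy = np.array([(i, j) for i in range(4) for j in range(4)], float)

# Response matrix: count rate at detector position p from unit activity in
# cell q. Microshield would supply these; here a hypothetical 1/(r^2 + h^2)
# response with the detector held at height h above each cell.
h = 1.0
r2 = ((xy[:, None, :] - xy[None, :, :])**2).sum(-1)
R = 1.0 / (r2 + h**2)

a_true = rng.uniform(0.0, 10.0, n)       # unknown activities (arbitrary units)
counts = R @ a_true                      # idealised noise-free count array

# Matrix inversion converts the count array back to the activity array
a_est = np.linalg.solve(R, counts)
```

With statistical noise added to the counts, as in the paper's tests, the same linear model would be inverted in a least-squares sense rather than exactly.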
International Nuclear Information System (INIS)
Beauchard, K; Cannarsa, P; Yamamoto, M
2014-01-01
The approach to Lipschitz stability for uniformly parabolic equations introduced by Imanuvilov and Yamamoto in 1998 based on Carleman estimates, seems hard to apply to the case of Grushin-type operators of interest to this paper. Indeed, such estimates are still missing for parabolic operators degenerating in the interior of the space domain. Nevertheless, we are able to prove Lipschitz stability results for inverse source problems for such operators, with locally distributed measurements in an arbitrary space dimension. For this purpose, we follow a mixed strategy which combines the approach due to Lebeau and Robbiano, relying on Fourier decomposition and Carleman inequalities for heat equations with non-smooth coefficients (solved by the Fourier modes). As a corollary, we obtain a direct proof of the observability of multidimensional Grushin-type parabolic equations, with locally distributed observations—which is equivalent to null controllability with locally distributed controls. (paper)
Łęski, Szymon; Pettersen, Klas H; Tunstall, Beth; Einevoll, Gaute T; Gigg, John; Wójcik, Daniel K
2011-12-01
The recent development of large multielectrode recording arrays has made it affordable for an increasing number of laboratories to record from multiple brain regions simultaneously. The development of analytical tools for array data, however, lags behind these technological advances in hardware. In this paper, we present a method based on forward modeling for estimating current source density from electrophysiological signals recorded on a two-dimensional grid using multi-electrode rectangular arrays. This new method, which we call two-dimensional inverse Current Source Density (iCSD 2D), is based upon and extends our previous one- and three-dimensional techniques. We test several variants of our method, both on surrogate data generated from a collection of Gaussian sources, and on model data from a population of layer 5 neocortical pyramidal neurons. We also apply the method to experimental data from the rat subiculum. The main advantages of the proposed method are the explicit specification of its assumptions, the possibility to include system-specific information as it becomes available, the ability to estimate CSD at the grid boundaries, and lower reconstruction errors when compared to the traditional approach. These features make iCSD 2D a substantial improvement over the approaches used so far and a powerful new tool for the analysis of multielectrode array data. We also provide a free GUI-based MATLAB toolbox to analyze and visualize our test data as well as user datasets.
Compton suppression gamma ray spectrometry
International Nuclear Information System (INIS)
Landsberger, S.; Iskander, F.Y.; Niset, M.; Heydorn, K.
2002-01-01
In the past decade there have been many studies using Compton suppression methods in routine neutron activation analysis as well as in the traditional role of low-level gamma ray counting of environmental samples. On a separate path, many new PC-based software packages have been developed to enhance photopeak fitting. Although the newer PC-based algorithms have improved significantly, they remain of limited use for weak gamma ray lines in natural samples or in neutron-activated samples with very high Compton backgrounds. We have completed a series of experiments to show the usefulness of Compton suppression. We have also shown the pitfalls of using Compton suppression methods at high counting deadtimes, as in the case of neutron-activated samples, and investigated whether counting statistics are the same in suppressed and normal modes. Results are presented from four separate experiments. (author)
High Resolution Regional Attenuation for the Source Physics Experiment Using Multiphase Inversion
Pyle, M. L.; Walter, W. R.; Pasyanos, M.
2015-12-01
Seismic event amplitude measurement plays a critical role in the discrimination between earthquakes and explosions. An accurate 2D model of the attenuation experienced by seismic waves traveling through the earth is especially important for reasonable amplitude estimation at small event-to-station distances. In this study, we investigate the detailed attenuation structure in the region around southern Nevada as part of the Source Physics Experiment (SPE). The SPE consists of a series of chemical explosions at the Nevada National Security Site (NNSS) designed to improve our understanding of explosion physics and enable better modeling of explosion sources. Phase I of the SPE is currently being conducted in the Climax Stock Granite and Phase II will move to a contrasting dry alluvium geology. A high-resolution attenuation model will aid in the modeling efforts of these experiments. To improve our understanding of the propagation of energy from sources in the area to local and regional stations in the western U.S., we invert regional phases Pn, Pg, and Lg to examine the crust and upper mantle attenuation structure of southern Nevada and the surrounding region. We consider observed amplitudes as the frequency-domain product of a source term, a site term, a geometrical spreading term, and an attenuation (Q) term (e.g. Walter and Taylor, 2001). Initially we take a staged approach to first determine the best 1D Q values; next we calculate source terms using the 1D model, and finally we solve for the best 2D Q parameters and site terms considering all frequencies simultaneously. Our preliminary results agree generally with those from the continent-wide study by Pasyanos (2013). With additional data we are working to develop a more detailed and higher frequency model of the region as well as move toward a fully non-linear inversion.
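In that amplitude model, taking logarithms makes the inversion linear in 1/Q. A single-phase, single-frequency sketch with invented values (the actual study solves jointly for 2-D Q maps, source and site terms across frequencies):

```python
import numpy as np

# Synthetic regional amplitudes: A = S * r^(-g) * exp(-pi*f*r / (Q*v))
rng = np.random.default_rng(2)
f, v, Q_true, g, S = 2.0, 3500.0, 250.0, 0.5, 1e3   # hypothetical values
r = np.linspace(50e3, 500e3, 40)                     # event-station distances (m)
lnA = (np.log(S) - g * np.log(r) - np.pi * f * r / (Q_true * v)
       + 0.05 * rng.standard_normal(r.size))         # noisy log-amplitudes

# Move the known spreading term to the left-hand side, then the model is
# linear: lnA + g*ln(r) = ln(S) + (-pi*f*r/v) * (1/Q)
G = np.column_stack([np.ones_like(r), -np.pi * f * r / v])
m, *_ = np.linalg.lstsq(G, lnA + g * np.log(r), rcond=None)
Q_est = 1.0 / m[1]                                   # recovered quality factor
```

The staged approach in the abstract generalises this: a 1-D Q fit first, then source terms, then a simultaneous 2-D Q and site-term solve over all frequencies.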
Weak Deeply Virtual Compton Scattering
Energy Technology Data Exchange (ETDEWEB)
Ales Psaker; Wolodymyr Melnitchouk; Anatoly Radyushkin
2007-03-01
We extend the analysis of the deeply virtual Compton scattering process to the weak interaction sector in the generalized Bjorken limit. The virtual Compton scattering amplitudes for the weak neutral and charged currents are calculated at the leading twist within the framework of the nonlocal light-cone expansion via coordinate space QCD string operators. Using a simple model, we estimate cross sections for neutrino scattering off the nucleon, relevant for future high intensity neutrino beam facilities.
Bauwens, Maite; Stavrakou, Trissevgeni; Müller, Jean-François; De Smedt, Isabelle; Van Roozendael, Michel; van der Werf, Guido R.; Wiedinmyer, Christine; Kaiser, Johannes W.; Sindelarova, Katerina; Guenther, Alex
2016-08-01
As formaldehyde (HCHO) is a high-yield product in the oxidation of most volatile organic compounds (VOCs) emitted by fires, vegetation, and anthropogenic activities, satellite observations of HCHO are well-suited to inform us on the spatial and temporal variability of the underlying VOC sources. The long record of space-based HCHO column observations from the Ozone Monitoring Instrument (OMI) is used to infer emission flux estimates from pyrogenic and biogenic volatile organic compounds (VOCs) on the global scale over 2005-2013. This is realized through the method of source inverse modeling, which consists in the optimization of emissions in a chemistry-transport model (CTM) in order to minimize the discrepancy between the observed and modeled HCHO columns. The top-down fluxes are derived in the global CTM IMAGESv2 by an iterative minimization algorithm based on the full adjoint of IMAGESv2, starting from a priori emission estimates provided by the newly released GFED4s (Global Fire Emission Database, version 4s) inventory for fires, and by the MEGAN-MOHYCAN inventory for isoprene emissions. The top-down fluxes are compared to two independent inventories for fire (GFAS and FINNv1.5) and isoprene emissions (MEGAN-MACC and GUESS-ES). The inversion indicates a moderate decrease (ca. 20 %) in the average annual global fire and isoprene emissions, from 2028 Tg C in the a priori to 1653 Tg C for burned biomass, and from 343 to 272 Tg for isoprene fluxes. Those estimates are acknowledged to depend on the accuracy of formaldehyde data, as well as on the assumed fire emission factors and the oxidation mechanisms leading to HCHO production. Strongly decreased top-down fire fluxes (30-50 %) are inferred in the peak fire season in Africa and during years with strong a priori fluxes associated with forest fires in Amazonia (in 2005, 2007, and 2010), bushfires in Australia (in 2006 and 2011), and peat burning in Indonesia (in 2006 and 2009), whereas generally increased fluxes
Spectra of clinical CT scanners using a portable Compton spectrometer
International Nuclear Information System (INIS)
Duisterwinkel, H. A.; Abbema, J. K. van; Kawachimaru, R.; Paganini, L.; Graaf, E. R. van der; Brandenburg, S.; Goethem, M. J. van
2015-01-01
Purpose: Spectral information of the output of x-ray tubes in (dual source) computer tomography (CT) scanners can be used to improve the conversion of CT numbers to proton stopping power and can be used to advantage in CT scanner quality assurance. The purpose of this study is to design, validate, and apply a compact portable Compton spectrometer that was constructed to accurately measure x-ray spectra of CT scanners. Methods: In the design of the Compton spectrometer, the shielding materials were carefully chosen and positioned to reduce background by x-ray fluorescence from the materials used. The spectrum of Compton scattered x-rays alters from the original source spectrum due to various physical processes. Reconstruction of the original x-ray spectrum from the Compton scattered spectrum is based on Monte Carlo simulations of the processes involved. This reconstruction is validated by comparing directly and indirectly measured spectra of a mobile x-ray tube. The Compton spectrometer is assessed in a clinical setting by measuring x-ray spectra at various tube voltages of three different medical CT scanner x-ray tubes. Results: The directly and indirectly measured spectra are in good agreement (their ratio being 0.99) thereby validating the reconstruction method. The measured spectra of the medical CT scanners are consistent with theoretical spectra and spectra obtained from the x-ray tube manufacturer. Conclusions: A Compton spectrometer has been successfully designed, constructed, validated, and applied in the measurement of x-ray spectra of CT scanners. These measurements show that our compact Compton spectrometer can be rapidly set up using the alignment lasers of the CT scanner, thereby enabling its use in commissioning, troubleshooting, and, e.g., annual performance check-ups of CT scanners.
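The spectrum reconstruction described above rests on the single-scatter Compton relation E' = E / (1 + (E/mec^2)(1 - cos θ)), which maps each incident tube energy to a scattered energy at the spectrometer angle. A small numeric helper (standard kinematics, not the authors' reconstruction code):

```python
import math

ME_C2 = 511.0  # electron rest energy, keV

def compton_scattered_energy(e_kev, theta_rad):
    """Energy (keV) of a photon of initial energy e_kev after Compton
    scattering through angle theta_rad off a free electron at rest."""
    return e_kev / (1.0 + (e_kev / ME_C2) * (1.0 - math.cos(theta_rad)))
```

Because the mapping is monotonic in energy at a fixed angle, the measured scattered spectrum can be re-binned back onto the source energy axis; the Monte Carlo step then corrects for the broadening and attenuation processes the simple formula ignores.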
Zephyr: Open-source Parallel Seismic Waveform Inversion in an Integrated Python-based Framework
Smithyman, B. R.; Pratt, R. G.; Hadden, S. M.
2015-12-01
Seismic Full-Waveform Inversion (FWI) is an advanced method to reconstruct wave properties of materials in the Earth from a series of seismic measurements. These methods have been developed by researchers since the late 1980s, and now see significant interest from the seismic exploration industry. As researchers move towards implementing advanced numerical modelling (e.g., 3D, multi-component, anisotropic and visco-elastic physics), it is desirable to make use of a modular approach, minimizing the effort of developing a new set of tools for each new numerical problem. SimPEG (http://simpeg.xyz) is an open source project aimed at constructing a general framework to enable geophysical inversion in various domains. In this abstract we describe Zephyr (https://github.com/bsmithyman/zephyr), which is a coupled research project focused on parallel FWI in the seismic context. The software is built on top of Python, Numpy and IPython, which enables very flexible testing and implementation of new features. Zephyr is an open source project, and is released freely to enable reproducible research. We currently implement a parallel, distributed seismic forward modelling approach that solves the 2.5D (two-and-one-half dimensional) viscoacoustic Helmholtz equation at a range of modelling frequencies, generating forward solutions for a given source behaviour, and gradient solutions for a given set of observed data. Solutions are computed in a distributed manner on a set of heterogeneous workers. The researcher's frontend computer may be separated from the worker cluster by a network link to enable full support for computation on remote clusters from individual workstations or laptops. The present codebase introduces a numerical discretization equivalent to that used by FULLWV, a well-known seismic FWI research codebase. This makes it straightforward to compare results from Zephyr directly with FULLWV. The flexibility introduced by the use of a Python programming environment makes
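A minimal frequency-domain forward model of the kind such frameworks parallelise can be sketched in one dimension (constant velocity, a crude attenuation term, and simple boundaries; purely illustrative, not Zephyr's 2.5D viscoacoustic solver):

```python
import numpy as np

def helmholtz_1d(n=200, dx=10.0, freq=5.0, c=2000.0, src=100):
    """Frequency-domain forward model: (d2/dx2 + k^2) u = -s, discretised with
    second-order finite differences. A small imaginary part in k mimics
    viscoacoustic attenuation and keeps the system well conditioned."""
    omega = 2.0 * np.pi * freq
    k = omega / c * (1 + 0.02j)               # complex wavenumber (attenuation)
    A = np.zeros((n, n), complex)
    idx = np.arange(n)
    A[idx, idx] = -2.0 / dx**2 + k**2          # diagonal of the Laplacian + k^2
    A[idx[:-1], idx[:-1] + 1] = 1.0 / dx**2    # upper off-diagonal
    A[idx[1:], idx[1:] - 1] = 1.0 / dx**2      # lower off-diagonal
    s = np.zeros(n, complex)
    s[src] = 1.0                               # point source
    return np.linalg.solve(A, -s)              # one linear solve per frequency

u = helmholtz_1d()
```

Each modelling frequency and each source yields one such linear solve, which is why the solves distribute so naturally across a cluster of heterogeneous workers.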
Compton profile study of V3Ge and Cr3Ge
Indian Academy of Sciences (India)
In this paper the results of a Compton profile study of two polycrystalline A15 compounds, namely, V3Ge and Cr3Ge, have been reported. The measurements have been performed using 59.54 keV γ-rays from an 241Am source. The theoretical Compton profiles have been computed for both the compounds using ...
An inverse-source problem for maximization of pore-fluid oscillation within poroelastic formations
Jeong, C.
2016-07-04
This paper discusses a mathematical and numerical modeling approach for identification of an unknown optimal loading time signal of a wave source, atop the ground surface, that can maximize the relative wave motion of a single-phase pore fluid within fluid-saturated porous permeable (poroelastic) rock formations, surrounded by non-permeable semi-infinite elastic solid rock formations, in a one-dimensional setting. The motivation stems from a set of field observations, following seismic events and vibrational tests, suggesting that shaking an oil reservoir is likely to improve oil production rates. This maximization problem is cast into an inverse-source problem, seeking an optimal loading signal that minimizes an objective functional – the reciprocal of kinetic energy in terms of relative pore-fluid wave motion within target poroelastic layers. We use the finite element method to obtain the solution of the governing wave physics of a multi-layered system, where the wave equations for the target poroelastic layers and the elastic wave equation for the surrounding non-permeable layers are coupled with each other. We use a partial-differential-equation-constrained-optimization framework (a state-adjoint-control problem approach) to tackle the minimization problem. The numerical results show that the numerical optimizer recovers optimal loading signals, whose dominant frequencies correspond to amplification frequencies, which can also be obtained by a frequency sweep, leading to larger amplitudes of relative pore-fluid wave motion within the target hydrocarbon formation than other signals.
Compton-thick AGN at high and low redshift
Akylas, A.; Georgantopoulos, I.; Corral, A.; Ranalli, P.; Lanzuisi, G.
2017-10-01
The most obscured sources detected in X-ray surveys, the Compton-thick AGN, are of great interest both because they represent the hidden side of accretion and because they may signal the AGN birth. We analyse NuSTAR serendipitous-survey observations in order to study Compton-thick AGN in the deepest possible ultra-hard band (>10 keV). We compare our results with our Swift/BAT findings in the local Universe, as well as with our results in the CDFS and COSMOS fields. We discuss the comparison with X-ray background synthesis models, finding that a low fraction of Compton-thick sources (about 15 per cent of the obscured population) is compatible with both the 2-10 keV band results and those at harder energies.
Further Study on Fast Cooling in Compton Storage Rings
Bulyak, Eugene; Zimmermann, Frank
2012-01-01
Compton sources of gamma-ray photons can produce very high intensities, but suffer from the large recoils experienced by the circulating electrons scattering off the laser photons. We have previously proposed asymmetric fast cooling, which makes it possible to mitigate the energy spread in Compton rings. This report presents results of further study on fast cooling: (1) A proper asymmetric setup of the scattering point results in a significant reduction of the quantum losses of electrons in Compton rings with moderate energy acceptance. (2) The proposed pulsed mode of operation in synchrotron-dominated rings enhances the overall performance of such gamma-ray sources. Theoretical results are in good accordance with the simulations. The performance of an existing storage ring equipped with a laser system is also evaluated.
Energy Technology Data Exchange (ETDEWEB)
Krueger, O.; Ebinghaus, R.; Kock, H.H.; Richter-Politz, I.; Geilhufe, C.
1998-12-31
Anthropogenic emission sources of gaseous mercury at the contaminated industrial site BSL Werk Schkopau have been determined by measurements and numerical modelling applying a local dispersion model. The investigations are based on measurements from several field campaigns in the period between December 1993 and June 1994. The estimation of the source strengths was performed by inverse modelling, using measurements as constraints for the dispersion model. Model experiments confirmed the applicability of the inverse modelling procedure for source-strength estimation at BSL Werk Schkopau. At the factory premises investigated, the source strengths of four source areas, among them three closed chlor-alkali productions, one partly removed acetaldehyde factory and additionally one still-producing chlor-alkali factory, have been identified, with an approximate total gaseous mercury emission of less than 2.5 kg/day. (orig.)
Joint Source Location and Focal Mechanism Inversion: efficiency, accuracy and applications
Liang, C.; Yu, Y.
2017-12-01
The analysis of induced seismicity has become a common practice for evaluating the results of hydraulic fracturing treatment. Liang et al. (2016) proposed a joint Source Scanning Algorithm (jSSA for short) to obtain microseismic event locations and focal mechanisms simultaneously. The jSSA is superior to the traditional SSA in many aspects, but its computational cost is too high for real-time monitoring. In this study, we have developed several scanning schemes to reduce computation time. A multi-stage scanning scheme is shown to improve efficiency significantly while retaining accuracy. A series of tests has been carried out using both real field data and synthetic data to evaluate the accuracy of the method and its dependence on noise level, source depth, focal mechanism and other factors. Surface-based arrays provide better constraints on horizontal location errors (0.5). For sources with varying rakes, dips, strikes and depths, the errors are mostly controlled by the partition of positive and negative polarities among the quadrants: more evenly partitioned polarities yield better results for both locations and focal mechanisms. Nevertheless, even where some focal mechanisms are poorly resolved, the optimized jSSA method can still improve location accuracy significantly. Based on the much more densely distributed events and focal mechanisms, a gridded stress inversion is conducted to obtain an evenly distributed stress field. The full potential of the jSSA has yet to be explored in different directions, especially in earthquake seismology as seismic arrays become increasingly dense.
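The source-scanning idea behind the SSA family, including the multi-stage (coarse-to-fine) scheme used to cut cost, can be sketched in a toy constant-velocity setting. Station geometry, velocity and pulse width below are invented for illustration and are not from the paper:

```python
import math

V = 3.0  # assumed constant velocity, km/s
STATIONS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
TRUE_SRC, T0 = (6.0, 4.0), 1.0  # hidden event location and origin time

def record(station, t):
    # Synthetic waveform: a narrow Gaussian pulse at the true arrival time
    arrival = T0 + math.dist(station, TRUE_SRC) / V
    return math.exp(-((t - arrival) / 0.05) ** 2)

def brightness(x, y, t0):
    # Stack amplitudes at the arrival times predicted for a trial source
    return sum(record(s, t0 + math.dist(s, (x, y)) / V) for s in STATIONS)

def scan(xs, ys, t0s):
    return max(((x, y, t0) for x in xs for y in ys for t0 in t0s),
               key=lambda trial: brightness(*trial))

# Stage 1: coarse grid; Stage 2: refine only around the coarse maximum
cx, cy, ct = scan(range(11), range(11), [0.5 * i for i in range(5)])
fx, fy, ft = scan([cx - 1 + 0.1 * i for i in range(21)],
                  [cy - 1 + 0.1 * i for i in range(21)],
                  [ct - 0.5 + 0.05 * i for i in range(21)])
```

The two-stage scan visits a few thousand trial points instead of the millions a single fine grid would require, which is the efficiency gain the multi-stage scheme exploits.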
DEFF Research Database (Denmark)
Oh, Geok Lian
properties such as the elastic wave speeds and soil densities. One processing method is to cast the estimation problem as an inverse problem and solve for the unknown material parameters. The forward models for seismic signals used in the literature include ray-tracing methods that consider only … density values of the discretized ground medium, which leads to time-consuming computations and instability of the inversion process. In addition, the geophysical inverse problem is generally ill-posed due to a non-exact forward model that introduces errors. The Bayesian inversion method, through … the probability density function, permits the incorporation of a priori information about the parameters and also allows for the incorporation of theoretical errors. This opens up the possibility of applying the inverse paradigm to real-world geophysical inversion problems. In this PhD study, the Bayesian …
Chouet, B. A.; Dawson, P. B.
2015-12-01
Among the broad range of magmatic processes observed in the Overlook pit crater in Kilauea Caldera are recurring episodes of gas-piston activity. This activity is accompanied by repetitive seismic signals recorded by a broadband network deployed in the summit caldera. We use the seismic data to model the source mechanism of representative gas-piston events in a sequence that occurred on 20-25 August 2011 during a gentle inflation of the Kilauea summit. We apply a new waveform inversion method that accounts for the contributions from both translation and tilt in horizontal seismograms through the use of Green's functions representing the seismometer response to translation and tilt ground motions. This method enables a robust description of the source mechanism over the period range of 1-10,000 s. Most of the seismic wave field produced by gas-pistoning originates in a source region ~1 km below the eastern perimeter of Halema'uma'u pit crater. The observed waveforms are well explained by a simple volumetric source with geometry composed of two intersecting cracks featuring an east-striking crack (dike) dipping 80° to the north, intersecting a north-striking crack (inclined sheet) dipping 65° to the east. Each gas-piston event is characterized by a rapid inflation lasting a few minutes trailed by a slower deflation ramp extending up to 15 minutes, attributed to the efficient coupling at the source centroid location of the pressure and momentum changes accompanying the growth and collapse of a layer of foam at the top of the magma column. Assuming a simple lumped-parameter representation of the shallow magmatic system, the observed pressure and volume variations can be modeled with the following attributes: foam thickness (10-50 m), foam cell diameter (0.04-0.10 m), and gas-injection velocity (0.01-0.06 m/s). Based on the change in the period of very-long-period oscillations accompanying the onset of the gas-piston signal and tilt evidence, the height of
The development of a Compton lung densitometer
Energy Technology Data Exchange (ETDEWEB)
Loo, B.W.; Goulding, F.S.; Madden, N.W.; Simon, D.S.
1988-11-01
A field instrument is being developed for the non-invasive determination of absolute lung density using unique Compton backscattering techniques. A system consisting of a monoenergetic gamma-ray beam and a shielded high-resolution high-purity-germanium (HPGe) detector in a close-coupled geometry is designed to minimize errors due to multiple scattering and uncontrollable attenuation in the chest wall. Results of studies on system performance with phantoms, the optimization of detectors, and the fabrication of a practical gamma-ray source are presented. 3 refs., 6 figs., 2 tabs.
Observation of Nonlinear Compton Scattering
Energy Technology Data Exchange (ETDEWEB)
Kotseroglou, T.
2003-12-19
This experiment tests Quantum Electrodynamics in the strong-field regime. Nonlinear Compton scattering has been observed during the interaction of a 46.6 GeV electron beam with a 10^18 W/cm^2 laser beam. The strength of the field achieved was measured by the parameter η = eε_rms/ωmc = 0.6. Data were collected with infrared and green laser photons and circularly polarized laser light. The timing stabilization achieved between the picosecond laser and electron pulses has σ_rms = 2 ps. A strong signal of electrons that absorbed up to 4 infrared photons (or up to 3 green photons) at the same point in space and time, while emitting a single gamma ray, was observed. The energy spectra of the scattered electrons and the nonlinear dependence of the electron yield on the field strength agreed with the simulation over 3 orders of magnitude. The detector could not resolve nonlinear Compton scattering from multiple single Compton scattering, which produced rates of scattered electrons of the same order of magnitude. Nevertheless, a simulation has studied this difference and concluded that the observed scattered-electron rates could not be accounted for by multiple ordinary Compton scattering alone; nonlinear Compton scattering processes are dominant for n ≥ 3.
1D inversion and analysis of marine controlled-source EM data
DEFF Research Database (Denmark)
Christensen, N.B.; Dodds, Kevin; Bulley, Ian
2006-01-01
… been displaced by resistive oil or gas. We present preliminary results from an investigation of the applicability of one-dimensional inversion of the data. A noise model for the data set is developed and inversion is carried out with multi-layer models and 4-layer models. For the data set in question …
Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric
2013-04-01
Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio have caught the attention of several research groups. Indeed, the diversity and variability of methane sources induce high uncertainty on the present and future evolution of the CH4 budget. With the increase of measurement data available to constrain inversions (satellite data, high-frequency surface and tall-tower observations, FTIR spectrometry, ...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling convert directly into flux errors when perfect transport is assumed in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated in a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in the TRANSCOM-CH4 experiment (Patra et al., 2011) provide synthetic measurement data at up to 280 surface sites to constrain the inversions performed using the PYVAR system. Only the CTM (and the meteorological drivers that drive it) used to create the pseudo-observations varies among inversions. Consequently, the comparison of the nine inverted methane fluxes obtained for 2005 gives a good estimate of the order of magnitude of the impact of transport and modelling errors on the fluxes estimated with current and future networks. It is shown that transport and modelling errors
Analysis of materials in ducts by Compton scattering
International Nuclear Information System (INIS)
Gouveia, M.A.G.; Lopes, R.T.; Jesus, E.F.O. de; Camerini, C.S.
2000-01-01
This work presents the use of the Compton scattering technique as a testing method for the characterization of materials in petroleum ducts. The tests were performed in the laboratory, so the results presented here should be analyzed before the system is used in the field. The inspection was performed using Compton scattering techniques, with two detectors aligned at an angle of 90 degrees to a Cs-137 source with an energy of 662 keV. The results demonstrated the good capability of the system to detect materials deposited in petroleum ducts during petroleum transportation. (author)
Experimental and theoretical Compton profiles of Be, C and Al
Energy Technology Data Exchange (ETDEWEB)
Aguiar, Julio C., E-mail: jaguiar@arn.gob.a [Autoridad Regulatoria Nuclear, Av. Del Libertador 8250, C1429BNP, Buenos Aires (Argentina); Instituto de Fisica 'Arroyo Seco', Facultad de Ciencias Exactas, U.N.C.P.B.A., Pinto 399, 7000 Tandil (Argentina)]; Di Rocco, Hector O. [Instituto de Fisica 'Arroyo Seco', Facultad de Ciencias Exactas, U.N.C.P.B.A., Pinto 399, 7000 Tandil (Argentina)]; Arazi, Andres [Laboratorio TANDAR, Comision Nacional de Energia Atomica, Av. General Paz 1499, 1650 San Martin, Buenos Aires (Argentina)]
2011-02-01
The results of Compton profile measurements, Fermi momentum determinations, and theoretical values obtained from a linear combination of Slater-type orbitals (STO) for core electrons in beryllium, carbon and aluminium are presented. In addition, a Thomas-Fermi model is used to estimate the contribution of valence electrons to the Compton profile. Measurements were performed using monoenergetic photons of 59.54 keV provided by a low-intensity Am-241 γ-ray source. Scattered photons were detected at 90° from the beam direction using a p-type coaxial high-purity germanium (HPGe) detector. The experimental results are in good agreement with theoretical calculations.
Compton-thick AGN in the 3XMM spectral survey
Georgantopoulos, I.; Corral, A.; Watson, M.; Rosen, S.
2014-07-01
In the framework of an ESA Prodex project, we have derived X-ray spectral fits for a large number (120,000) of 3XMM sources. We focus our study on the 120 square degrees that overlap with the SDSS survey. For about 1,100 AGN, spectroscopic redshifts are available. We automatically select candidate Compton-thick sources using simple spectral models. Various selection criteria are applied, including (a) a high equivalent-width Fe K line, and (b) a flat spectrum with a photon index of 1.4 or lower at the 90% confidence level or, at higher redshift, an absorption turnover consistent with a column density of log NH = 24. We find 30 candidate Compton-thick sources. More detailed spectral models are then applied to secure the Compton-thick nature of these sources. We compare our findings with X-ray background synthesis models as well as with Compton-thick surveys in the COSMOS and XMM/CDFS areas.
On a low intensity 241Am Compton spectrometer for measurement ...
Indian Academy of Sciences (India)
In this paper, a new design and construction of a low-intensity (100 mCi) 241Am γ-ray Compton spectrometer is presented. The planar spectrometer is based on a small disc source with the shortest geometry. Measurement of the momentum density of polycrystalline Al is used to evaluate the performance of the new design.
Recent results from the Compton Observatory
Energy Technology Data Exchange (ETDEWEB)
Michelson, P.F.; Hansen, W.W. [Stanford Univ., CA (United States)]
1994-12-01
The Compton Observatory is an orbiting astronomical observatory for gamma-ray astronomy that covers the energy range from about 30 keV to 30 GeV. The Energetic Gamma Ray Experiment Telescope (EGRET), one of four instruments on-board, is capable of detecting and imaging gamma radiation from cosmic sources in the energy range from approximately 20 MeV to 30 GeV. After about one month of tests and calibration following the April 1991 launch, a 15-month all sky survey was begun. This survey is now complete and the Compton Observatory is well into Phase II of its observing program which includes guest investigator observations. Among the highlights from the all-sky survey discussed in this presentation are the following: detection of five pulsars with emission above 100 MeV; detection of more than 24 active galaxies, the most distant at redshift greater than two; detection of many high latitude, unidentified gamma-ray sources, some showing significant time variability; detection of at least two high energy gamma-ray bursts, with emission in one case extending to at least 1 GeV. EGRET has also detected gamma-ray emission from solar flares up to energies of at least 2 GeV and has observed gamma-rays from the Large Magellanic Cloud.
A dual purpose Compton suppression spectrometer
Parus, J; Raab, W; Donohue, D
2003-01-01
A gamma-ray spectrometer with a passive and an active shield is described. It consists of an HPGe coaxial detector of 42% efficiency and 4 NaI(Tl) detectors. The energy output pulses of the Ge detector are delivered to 3 spectrometry chains giving the normal, anticoincidence and coincidence spectra. From the spectra of a number of 137Cs and 60Co sources, a Compton suppression factor (SF) and a Compton reduction factor (RF), the parameters characterizing system performance, were calculated as a function of energy and source activity and compared with those given in the literature. The natural background is reduced about 8 times in the anticoincidence mode of operation compared to the normal spectrum, which decreases the detection limits for non-coincident gamma rays by up to a factor of 3. In the presence of other gamma-ray activities in the range from 5 to 11 kBq, coincident and non-coincident, the detection limits can be decreased for some nuclides by a factor of 3 to 5.7.
Waveform inversion of lateral velocity variation from wavefield source location perturbation
Choi, Yun Seok
2013-09-22
It is a challenge in waveform inversion to define the deep part of the velocity model as precisely as the shallow part. The lateral velocity variation, that is, the derivative of velocity with respect to horizontal distance, combined with well-log data, can be used to update the deep part of the velocity model more precisely. We develop a waveform inversion algorithm that obtains the lateral velocity variation by inverting the wavefield variation associated with a lateral perturbation of the shot location. The gradient of the new waveform inversion algorithm is obtained by the adjoint-state method. Our inversion algorithm focuses on resolving lateral changes of the velocity model with respect to a fixed reference vertical velocity profile given by a well log. We apply the method to a simple dome model to highlight the method's potential.
Effect of detector parameters on the image quality of Compton camera for 99mTc
An, S. H.; Seo, H.; Lee, J. H.; Lee, C. S.; Lee, J. S.; Kim, C. H.
2007-02-01
The Compton camera has a bright future as a medical imaging device considering its compactness, low patient dose, multiple-radioisotope tracing capability, inherent three-dimensional (3D) imaging capability at a fixed position, etc. Currently, however, the image resolution of the Compton camera is not sufficient for medical imaging. In this study, we investigated the influence of various detector parameters on the image quality of the Compton camera for 99mTc with GEANT4. Our results show that the segmentation of the detectors significantly affects the image resolution of the Compton camera. The energy discrimination of the detectors was found to significantly affect both the sensitivity and the spatial resolution. The use of a higher-energy gamma source (e.g., 18F, emitting 511 keV photons), however, will significantly improve the spatial resolution of the Compton camera. It will also minimize the effect of the detector energy resolution.
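Event reconstruction in a Compton camera rests on the Compton kinematic relation, which converts the energies measured in the scatterer and absorber into a scattering angle and hence a cone of possible source directions. A sketch for a 99mTc photon (the 40 keV energy deposit below is an arbitrary illustrative value, not from the study):

```python
import math

MEC2 = 511.0  # electron rest energy, keV

def scatter_angle(e0, e1):
    # Compton kinematics: cos(theta) = 1 - mec2 * (1/E' - 1/E0),
    # where e0 is the incident and e1 the scattered photon energy (keV).
    cos_t = 1.0 - MEC2 * (1.0 / e1 - 1.0 / e0)
    return math.acos(max(-1.0, min(1.0, cos_t)))

# 99mTc emits 140.5 keV photons; a 40 keV deposit in the scatterer
# leaves a 100.5 keV scattered photon.
theta = scatter_angle(140.5, 100.5)
```

Uncertainty in the deposited energies propagates directly into the cone opening angle, which is why the detector energy resolution limits the image resolution discussed above.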
Crick, Prof. Francis Harry Compton
Indian Academy of Sciences (India)
Elected: 1985 Honorary. Crick, Prof. Francis Harry Compton O.M., FRS, Nobel Laureate (Medicine) - 1962. Date of birth: 8 June 1916. Date of death: 29 July 2004. Last known address: The Salk Institute for Biological Studies, P.O. Box 85800, San Diego, CA 92186-5800, U.S.A.
Kesumastuti, Lintang; Marsono, Agus; Yatimantoro, Tatok; Pribadi, Sugeng
2017-07-01
This study performed W-Phase inversions for eight large-magnitude events (M>7) that occurred in Indonesia during the period 2006-2016, using global data obtained from the IRIS DMC (Incorporated Research Institutions for Seismology Data Management Center). The results of the W-Phase inversion, both moment magnitude and focal mechanism, were generally similar to the Global CMT (Centroid Moment Tensor) solutions. The maximum deviation of moment magnitude was 0.09 and the average magnitude deviation was 0.03625. Comparison of moment magnitudes (Mw) indicates that seismic moments from Global CMT and W-Phase inversion are larger than those from body waves, especially for the 2010 Mentawai earthquake. Tsunami simulations were performed using the two different sets of source parameters and sea-floor deformations, from Global CMT and from W-Phase inversion, to obtain arrival times and heights on the coasts, validated against tide-gauge observations from the IOC (Intergovernmental Oceanographic Commission). The simulations show that the two models yield similar tsunami arrival times and heights on the coasts, though both differ slightly from the observations at some tide-gauge stations.
Gourdji, S.; Yadav, V.; Karion, A.; Mueller, K. L.; Kort, E. A.; Conley, S.; Ryerson, T. B.; Nehrkorn, T.
2017-12-01
The ability of atmospheric inverse models to detect, spatially locate and quantify emissions from large point sources in urban domains needs improvement before inversions can be used reliably as carbon monitoring tools. In this study, we use the Aliso Canyon natural gas leak from October 2015 to February 2016 (near Los Angeles, CA) as a natural tracer experiment to assess inversion quality by comparison with published estimates of leak rates calculated using a mass-balance approach (Conley et al., 2016). Fourteen dedicated flights were flown in horizontal transects downwind and throughout the duration of the leak to sample CH4 mole fractions and collect meteorological information for use in the mass-balance estimates. The same CH4 observational data were then used here in geostatistical inverse models with no prior assumptions about the leak location or emission rate, and flux sensitivity matrices generated using the WRF-STILT atmospheric transport model. Transport model errors were assessed by comparing WRF-STILT wind speeds, wind direction and planetary boundary layer (PBL) height to those observed on the plane; the impact of these errors on the inversions, and the optimal inversion setup for reducing their influence, were also explored. WRF-STILT provides a reasonable simulation of true atmospheric conditions on most flight dates, given the complex terrain and known difficulties in simulating atmospheric transport under such conditions. Moreover, even large (>120°) errors in wind direction were found to be tolerable in terms of spatially locating the leak within a 5-km radius of the actual site. Errors in the WRF-STILT wind speed (>50%) and PBL height have more negative impacts on the inversions, with too-high wind speeds (typically corresponding with too-low PBL heights) resulting in overestimated leak rates, and vice versa. Coarser data-averaging intervals and the use of observed wind-speed errors in the model-data mismatch covariance matrix are shown to
The hydrogen anomaly problem in neutron Compton scattering
Karlsson, Erik B.
2018-03-01
Neutron Compton scattering (also called 'deep inelastic scattering of neutrons', DINS) is a method used to study momentum distributions of light atoms in solids and liquids. It has been employed extensively since the start-up of intense pulsed neutron sources about 25 years ago. The information lies primarily in the width and shape of the Compton profile and not in the absolute intensity of the Compton peaks. It was therefore not immediately recognized that the relative intensities of Compton peaks arising from scattering on different isotopes did not always agree with values expected from standard neutron cross-section tables. The discrepancies were particularly large for scattering on protons, a phenomenon that became known as 'the hydrogen anomaly problem'. The present paper is a review of the discovery, the experimental tests to prove or disprove the existence of the hydrogen anomaly, and the discussions concerning its origin. It covers a twenty-year-long history of experimentation, theoretical treatment and discussion. The problem is of fundamental interest, since it involves quantum phenomena on the subfemtosecond time scale, which are not visible in conventional thermal neutron scattering but are important in Compton scattering, where neutrons have energies two orders of magnitude higher. Different H-containing systems show different cross-section deficiencies, and when the scattering processes are followed on the femtosecond time scale the cross-section losses disappear on a different characteristic time scale for each H-environment. The last section of this review reproduces results from published papers based on quantum interference in scattering on identical particles (proton or deuteron pairs or clusters), which give a quantitative theoretical explanation of both the H cross-section reduction and its time dependence. Some new explanations are added and the concluding chapter summarizes the conditions for observing the specific quantum
Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager
Lowell, A. W.; Boggs, S. E.; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C.; Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y.; Jean, P.; von Ballmoos, P.; Lin, C.-H.; Amman, M.
2017-10-01
Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ~21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
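The unbinned approach can be sketched for an idealized polarimeter whose azimuthal scattering-angle distribution is proportional to 1 + μ cos 2(φ - φ0). The grid-search MLE below recovers the modulation μ and polarization angle φ0 without binning; sample size, grid resolution and true parameter values are illustrative, not taken from the paper:

```python
import math
import random

def sample_angles(n, mu, phi0, seed=7):
    # Rejection-sample azimuthal angles from p(phi) ∝ 1 + mu*cos(2*(phi - phi0))
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        phi = rng.uniform(0.0, 2.0 * math.pi)
        if rng.uniform(0.0, 1.0 + mu) < 1.0 + mu * math.cos(2.0 * (phi - phi0)):
            out.append(phi)
    return out

def unbinned_mle(angles):
    # The normalization of p over [0, 2*pi] is independent of mu, so
    # maximizing sum(log(1 + mu*cos(2*(phi - phi0)))) is sufficient.
    def loglike(mu, phi0):
        return sum(math.log(1.0 + mu * math.cos(2.0 * (a - phi0)))
                   for a in angles)
    return max(((0.02 * i, 0.1 * j) for i in range(50) for j in range(32)),
               key=lambda p: loglike(*p))

angles = sample_angles(800, mu=0.4, phi0=1.0)
mu_hat, phi0_hat = unbinned_mle(angles)
```

Unlike a histogram fit, no azimuthal binning is imposed, which is the source of the sensitivity gain quoted above; a real instrument additionally requires a response-weighted likelihood rather than this ideal modulation curve.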
Anderson, J.; Johnson, J. B.; Arechiga, R. O.; Edens, H. E.; Thomas, R. J.
2011-12-01
We use radio-frequency (VHF) pulse locations mapped with the New Mexico Tech Lightning Mapping Array (LMA) to study the distribution of thunder sources in lightning channels. A least-squares inversion is used to fit channel acoustic energy radiation with broadband (0.01 to 500 Hz) acoustic recordings from microphones deployed local to the lightning. We model the thunder (acoustic) source as a superposition of line segments connecting the LMA VHF pulses. An optimum branching algorithm is used to reconstruct conductive channels delineated by VHF sources, which we discretize as a superposition of finely spaced (0.25 m) acoustic point sources. We consider the total radiated thunder as a weighted superposition of acoustic waves from individual channels, each with a constant current along its length that is presumed to be proportional to the acoustic energy density radiated per unit length. Merged channels are treated as a linear sum of current-carrying branches and radiate proportionally greater acoustic energy. Synthetic energy time series for a given microphone location are calculated for each independent channel. We then use a non-negative least-squares inversion to solve for the channel energy densities that match the energy time series determined from broadband acoustic recordings across a 4-station microphone network. Events analyzed by this method have so far included 300-1000 VHF sources, and correlations as high as 0.5 between synthetic and recorded thunder energy were obtained, despite the presence of wind noise and 10-30 m uncertainty in the VHF source locations.
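The non-negative least-squares step can be sketched with a small projected-gradient solver (purely illustrative; a production code would use a standard NNLS routine such as Lawson-Hanson, and the matrix below is a toy stand-in for the synthetic-vs-recorded energy system):

```python
def nnls(A, b, iters=5000, lr=None):
    # Projected gradient descent for min 0.5*||A x - b||^2 subject to x >= 0
    m, n = len(A), len(A[0])
    if lr is None:
        # conservative step size from the Frobenius-norm bound on ||A^T A||
        lr = 1.0 / sum(A[i][j] ** 2 for i in range(m) for j in range(n))
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # A^T r
        x = [max(0.0, x[j] - lr * g[j]) for j in range(n)]  # project to x >= 0
    return x

# Toy "channel energy" recovery: 3 observations, 2 channels
energies = nnls([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [1.0, 2.0, 3.0])
# converges to [1.0, 2.0]
```

The non-negativity constraint encodes the physical requirement that a channel cannot radiate negative acoustic energy, which is why plain least squares is not used.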
van Dongen, Koen W A; Wright, William M D
2007-03-01
Imaging the two acoustic medium parameters density and compressibility requires the use of both the acoustic pressure and velocity wave fields, described via integral equations. Imaging is based on solving for the unknown medium parameters using known measured scattered wave fields, and it is difficult to solve this ill-posed inverse problem directly using a conjugate gradient inversion scheme. Here, a contrast source inversion method is used in which the contrast sources, defined via the product of changes in compressibility and density with the pressure and velocity wave fields, respectively, are computed iteratively. After each update of the contrast sources, an update of the medium parameters is obtained. Total variation as multiplicative regularization is used to minimize blurring in the reconstructed contrasts. The method successfully reconstructed three-dimensional contrast profiles based on changes in both density and compressibility, using synthetic data both with and without 50% white noise. The results were compared with imaging based only on the pressure wave field, where speed of sound profiles were solely based on changes in compressibility. It was found that the results improved significantly by using the full vectorial method when changes in speed of sound depended on changes in both compressibility and density.
Chang, T. W.; Ide, S.
2017-12-01
Slip inversion using the empirical Green's function (EGF) method has the advantage of removing the complex path and site effects that are difficult to model theoretically. The method, which uses as the Green's function one "EGF event" smaller in magnitude by more than 1.5 units, is essentially an inversion highlighting the arrival times of the waveforms. In this study, inversions of very large earthquakes were conducted with far-field data, using a non-negative least-squares method, with EGF selection taken from Baltay et al. (2014). An objective way of screening station components is applied by evaluating the radiation pattern of the earthquakes at each station. To better estimate the model error due to the use of empirical Green's functions, which is also specific to the station selection, bootstrapping is performed on the station-selection process, randomly selecting waveforms from P or SH components at various stations. This gives the average of inversion trials using different data components with different Green's functions, resulting in a smoothed model with the stable features of the individual results, without explicitly applying smoothing constraints. So far, the above method has been applied to the MW 8.8 2010 Maule, Chile, and the MW 9.0 2011 Tohoku-Oki, Japan earthquakes, both giving slip patterns comparable to previous studies, although slip is concentrated in very small regions with unreasonably large amounts of slip. These results should be considered an extreme case of concentrated slip, and further physical inference is necessary to understand the real rupture process.
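The bootstrapping-over-stations idea can be caricatured in a few lines: rerun the inversion on random resamples of the station set and average the results, obtaining a smoothed model plus an error proxy without explicit smoothing constraints. Here the "inversion" is just a per-station mean, purely for illustration:

```python
import random
import statistics

def toy_inversion(picks):
    # Stand-in for one slip inversion on a given station subset:
    # here, simply the mean of per-station picks.
    return sum(picks) / len(picks)

def bootstrap_inversion(picks, n_trials=2000, seed=3):
    # Resample stations with replacement, invert each resample, and
    # report the ensemble mean and spread (a model-error proxy).
    rng = random.Random(seed)
    models = [toy_inversion([rng.choice(picks) for _ in picks])
              for _ in range(n_trials)]
    return statistics.mean(models), statistics.stdev(models)

picks = [2.1, 1.9, 2.3, 2.0, 1.8, 2.2, 2.4, 1.7]  # invented station picks
model, spread = bootstrap_inversion(picks)
```

Averaging over resamples keeps features common to most station subsets and washes out those tied to any single station choice, which is the smoothing effect described in the abstract.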
Czech Academy of Sciences Publication Activity Database
Anikiev, D.; Valenta, Jan; Staněk, František; Eisner, Leo
2014-01-01
Vol. 198, No. 1 (2014), pp. 249-258. ISSN 0956-540X. R&D Projects: GA ČR GAP210/12/2451. Institutional support: RVO:67985891. Keywords: inverse theory; probability distributions; wave scattering and diffraction; fractures and faults. Subject RIV: DB - Geology; Mineralogy. Impact factor: 2.724, year: 2013
The Compton polarimeter at ELSA
International Nuclear Information System (INIS)
Doll, D.
1998-06-01
In order to measure the degree of transverse polarization of the stored electron beam in the Electron Stretcher Accelerator ELSA, a Compton polarimeter has been built. The measurement is based on the polarization-dependent cross section for Compton scattering of circularly polarized photons off polarized electrons. Using a high-power laser beam and detecting the scattered photons, a measuring time of two minutes with a statistical error of 5% is expected from numerical simulations. The design and results of a computer-controlled feedback system to enhance the laser beam stability at the interaction point in ELSA are presented. The detection of the scattered photons is based on a lead converter and a silicon-microstrip detector. The design and test results of the detector module, including readout electronics and computer control, are discussed. (orig.)
Magnetic pair and Compton spectrometers
International Nuclear Information System (INIS)
Bartholomew, G.A.; Lee-Whiting, G.E.
1979-01-01
Most types of magnetic gamma-ray spectrometers are much less used than they were ten or fifteen years ago, largely because of the increased use of solid-state detectors. The latter permit much greater rates of data accumulation and, in many cases, better resolution than the former. Many of the best magnetic instruments have been discarded or dismantled. Pair and Compton spectrometers are no exception to this trend, so much of the material reviewed by the authors is necessarily of a historical nature. Inclusion of a brief account of magnetic pair and Compton spectrometers may nevertheless be useful, not only for historical completeness, but also to suggest circumstances in which these spectrometers may still have advantages. (Auth.)
A Bulk Comptonization Model for the Prompt GRB Emission and its Relation to the Fermi GRB Spectra
Kazanas, Demosthenes
2010-01-01
We present a model in which the GRB prompt emission at energies around E(sub peak) is due to bulk Comptonization, by the relativistic blast-wave motion, of either its own synchrotron photons or ambient photons of the stellar configuration that gave birth to the GRB. The bulk Comptonization process then induces the production of relativistic electrons with Lorentz factor equal to that of the blast wave through interactions with its ambient protons. The inverse Compton emission of these electrons produces a power-law component that extends to multi-GeV energies, in good agreement with the LAT GRB observations.
Karve, Pranav M.
2016-12-28
We discuss a methodology for computing the optimal spatio-temporal characteristics of surface wave sources necessary for delivering wave energy to a targeted subsurface formation. The wave stimulation is applied to the target formation to enhance the mobility of particles trapped in its pore space. We formulate the associated wave propagation problem for three-dimensional, heterogeneous, semi-infinite, elastic media. We use hybrid perfectly matched layers at the truncation boundaries of the computational domain to mimic the semi-infiniteness of the physical domain of interest. To recover the source parameters, we define an inverse source problem using the mathematical framework of constrained optimization and resolve it by employing a reduced-space approach. We report the results of our numerical experiments attesting to the methodology's ability to specify the spatio-temporal description of sources that maximize wave energy delivery. Copyright © 2016 John Wiley & Sons, Ltd.
International Nuclear Information System (INIS)
Okuyama, Shinichi; Matsuzawa, Taiju; Sera, Koichiro; Mishina, Hitoshi.
1982-01-01
The feasibility of tomographic fluoroscopy was investigated with an appropriately collimated monoenergetic gamma-ray source and a gamma camera. Its clinical applications will be found in the guidance of paracentesis, biopsy, endoscopy and removal of foreign bodies. The fact that it gives positive images would reinforce its usefulness. It will also be useful in planning radiotherapy. Further expansion of radiological diagnosis can be expected if the technique is combined with direct magnification and contrast magnification with the Shinozaki color TV system. (author)
Performance evaluation of MACACO: a multilayer Compton camera
Muñoz, Enrique; Barrio, John; Etxebeste, Ane; Ortega, Pablo G.; Lacasta, Carlos; Oliver, Josep F.; Solaz, Carles; Llosá, Gabriela
2017-09-01
Compton imaging devices have been proposed and studied for a wide range of applications. We have developed a Compton camera prototype, to be used for proton range verification in hadron therapy, which can be operated with two or three detector layers based on monolithic lanthanum bromide (LaBr3) crystals coupled to silicon photomultipliers (SiPMs). In this work, we present the results obtained with our prototype in laboratory tests with radioactive sources and in simulation studies. Images of 22Na and 88Y radioactive sources have been successfully reconstructed. The full width at half maximum of the reconstructed images is below 4 mm for a 22Na source at a distance of 5 cm.
DEFF Research Database (Denmark)
Oh, Geok Lian; Brunskog, Jonas
2014-01-01
Techniques have been studied for the localization of an underground source with seismic interrogation signals. Much of the work has involved fitting either a P-wave acoustic model or a dispersive surface-wave model to the received signal and applying time-delay processing and frequency-wavenumber processing to determine the location of the underground tunnel. Considering the case of determining the location of an underground tunnel, this paper proposes two physical models, an acoustic-approximation ray-tracing model and a finite-difference time-domain three-dimensional (3D) elastic wave model, to represent the received seismic signal. Two localization algorithms, beamforming and Bayesian inversion, are developed for each physical model. The beamforming algorithms implemented are the modified time-and-delay beamformer and the F-K beamformer. Inversion is posed...
Simple modification of Compton polarimeter to redirect synchrotron radiation
Directory of Open Access Journals (Sweden)
J. Benesch
2015-11-01
Synchrotron radiation produced as an electron beam passes through a bending magnet is a significant source of background in many experiments. Using modeling, we show that simple modifications of the magnet geometry can reduce this background by orders of magnitude in some circumstances. Specifically, we examine possible modifications of the four dipole magnets used in Jefferson Lab’s Hall A Compton polarimeter chicane. This Compton polarimeter has been a crucial part of experiments with polarized beams and the next generation of experiments will utilize increased beam energies, up to 11 GeV, requiring a corresponding increase in Compton dipole field to 1.5 T. In consequence, the synchrotron radiation (SR) from the dipole chicane will be greatly increased. Three possible modifications of the chicane dipoles are studied; each design moves about 2% of the integrated bending field to provide a gentle bend in critical regions along the beam trajectory which, in turn, greatly reduces the synchrotron radiation within the acceptance of the Compton polarimeter photon detector. Each of the modifications studied also softens the SR energy spectrum at the detector sufficiently to allow shielding with 5 mm of lead. Simulations show that these designs are each capable of reducing the background signal due to SR by three orders of magnitude. The three designs considered vary in their need for vacuum vessel changes and in their effectiveness.
On the Compton scattering redistribution function in plasma
Madej, J.; Różańska, A.; Majczyna, A.; Należyty, M.
2017-08-01
Compton scattering is the dominant opacity source in hot neutron stars, accretion discs around black holes and hot coronae. We collected here a set of numerical expressions of the Compton scattering redistribution functions (RFs) for unpolarized radiation, which are more exact than the widely used Kompaneets equation. The principal aim of this paper is the presentation of the RF by Guilbert, which is corrected for the computational errors in the original paper. This corrected RF was used in the series of papers on model atmosphere computations of hot neutron stars. We have also organized four existing algorithms for the RF computations into a unified form ready to use in radiative transfer and model atmosphere codes. The exact method by Nagirner & Poutanen was numerically compared to all other algorithms in a very wide spectral range from hard X-rays to radio waves. Sample computations of the Compton scattering RFs in thermal plasma were done for temperatures corresponding to the atmospheres of bursting neutron stars and hot intergalactic medium. Our formulae are also useful to study the Compton scattering of unpolarized microwave background radiation in hot intracluster gas and the Sunyaev-Zeldovich effect. We conclude that the formulae by Guilbert and the exact quantum mechanical formulae yield practically the same RFs for gas temperatures relevant to the atmospheres of X-ray bursting neutron stars, T ≤ 108 K.
Kovalets, Ivan V.; Efthimiou, George C.; Andronopoulos, Spyros; Venetsanos, Alexander G.; Argyropoulos, Christos D.; Kakosimos, Konstantinos E.
2018-05-01
In this work, we present an inverse computational method for identifying the location, start time, duration and emitted quantity of an unknown air pollution source of finite duration in an urban environment. We consider a problem of transient pollutant dispersion under stationary meteorological fields, which is a reasonable assumption for the assimilation of available concentration measurements within 1 h of the start of an incident. We optimized the calculation of the source-receptor function by developing a method that requires integrating only as many backward adjoint equations as there are measurement stations, which makes the method numerically efficient. The source parameters are computed by maximizing the correlation function of the simulated and observed concentrations. The method has been integrated into the CFD code ADREA-HF and tested successfully in a series of source-inversion runs using the data of 200 individual realizations of puff releases previously generated in a wind tunnel experiment.
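A minimal sketch of the correlation-maximization step described above, assuming one precomputed source-receptor sensitivity (SRS) row per candidate grid cell; all data here are synthetic stand-ins, not the wind-tunnel measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy SRS matrix: for each of 50 candidate grid cells, a sensitivity
# vector over 8 stations x 12 time bins (flattened to one row).
n_cells, n_obs = 50, 8 * 12
srs = rng.random((n_cells, n_obs))

# Synthetic "truth": a release from cell 17 with unit rate, plus noise.
true_cell = 17
observed = srs[true_cell] + 0.02 * rng.standard_normal(n_obs)

def best_cell(srs, observed):
    """Return the candidate cell whose simulated signature has the
    highest correlation with the observed concentrations."""
    sims = srs - srs.mean(axis=1, keepdims=True)
    obs = observed - observed.mean()
    corr = sims @ obs / (np.linalg.norm(sims, axis=1) * np.linalg.norm(obs))
    return int(np.argmax(corr))
```

The maximizer over cells plays the role of the source-location estimate; in the paper the emission rate and timing are optimized jointly, which this sketch omits.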
Compton Ring with Laser Radiative Cooling
Bulyak, E.; Urakawa, J.; Zimmermann, F.
2013-10-01
Proposed is an enhancement of laser radiative cooling that utilizes laser pulses of small spatial and temporal dimensions, which interact with only a fraction of an electron bunch circulating in a storage ring. The dynamics of such an electron bunch, when laser photons scatter off the electrons at a collision point placed in a section with nonzero dispersion, are studied. In this case of 'asymmetric cooling', the stationary energy spread is much smaller than under conditions of regular scattering, where the laser spot size is larger than the electron beam, and the synchrotron oscillations are damped faster. Results of extensive simulations are presented for the performance optimization of Compton gamma-ray sources and damping rings.
A comparison of two methods for earthquake source inversion using strong motion seismograms
Directory of Open Access Journals (Sweden)
G. C. Beroza
1994-06-01
In this paper we compare two time-domain inversion methods that have been widely applied to the problem of modeling earthquake rupture using strong-motion seismograms. In the multi-window method, each point on the fault is allowed to rupture multiple times. This allows flexibility in the rupture time and hence the rupture velocity. Variations in the slip-velocity function are accommodated by variations in the slip amplitude in each time window. The single-window method assumes that each point on the fault ruptures only once, when the rupture front passes. Variations in slip amplitude are allowed, and variations in rupture velocity are accommodated by allowing the rupture time to vary. Because the multi-window method allows greater flexibility, it has the potential to describe a wider range of faulting behavior; however, with this increased flexibility comes an increase in the degrees of freedom, and the solutions are comparatively less stable. We demonstrate this effect using synthetic data for a test model of the Mw 7.3 1992 Landers, California earthquake, and then apply both inversion methods to the actual recordings. The two approaches yield similar fits to the strong-motion data but different seismic moments, indicating that the moment is not well constrained by strong-motion data alone. The slip amplitude distribution is similar using either approach, but important differences exist in the rupture propagation models. The single-window method does a better job of recovering the true seismic moment and the average rupture velocity. The multi-window method is preferable when rise time is strongly variable, but tends to overestimate the seismic moment. Both methods work well when the rise time is constant or short compared to the periods modeled. Neither approach can recover the temporal details of rupture propagation unless the distribution of slip amplitude is constrained by independent data.
de Foy, B.; Wiedinmyer, C.; Schauer, J. J.
2012-10-01
Gaseous elemental mercury is a global pollutant that can lead to serious health concerns via deposition to the biosphere and bio-accumulation in the food chain. Hourly measurements between June 2004 and May 2005 in an urban site (Milwaukee, WI) show elevated levels of mercury in the atmosphere with numerous short-lived peaks as well as longer-lived episodes. The measurements are analyzed with an inverse model to obtain information about mercury emissions. The model is based on high resolution meteorological simulations (WRF), hourly back-trajectories (WRF-FLEXPART) and a chemical transport model (CAMx). The hybrid formulation combining back-trajectories and Eulerian simulations is used to identify potential source regions as well as the impacts of forest fires and lake surface emissions. Uncertainty bounds are estimated using a bootstrap method on the inversions. Comparison with the US Environmental Protection Agency's National Emission Inventory (NEI) and Toxic Release Inventory (TRI) shows that emissions from coal-fired power plants are properly characterized, but emissions from local urban sources, waste incineration and metal processing could be significantly under-estimated. Emissions from the lake surface and from forest fires were found to have significant impacts on mercury levels in Milwaukee, and to be underestimated by a factor of two or more.
DEFF Research Database (Denmark)
Karamehmedovic, Mirza; Kirkeby, Adrian; Knudsen, Kim
2018-01-01
setting: From measurements made at a finite set of frequencies we uniquely determine and reconstruct sources in a subspace spanned by finitely many Fourier-Bessel functions. Further, we obtain a constructive criterion for identifying a minimal set of measurement frequencies sufficient for reconstruction, and under an additional, mild assumption, the reconstruction method is shown to be stable. Our analysis is based on a singular value decomposition of the source-to-measurement forward operators and the distribution of positive zeros of the Bessel functions of the first kind. The reconstruction method...
Czech Academy of Sciences Publication Activity Database
Ardeleanu, L.; Radulian, M.; Šílený, Jan; Panza, G. F.
2005-01-01
Roč. 162, č. 3 (2005), s. 495-513. ISSN 0033-4553. R&D Projects: GA ČR GA205/02/0383. Grant - others: UNESCO-IGCP(XX) 414; NATO(XX) SfP 972266. Institutional research plan: CEZ:AV0Z30120515. Keywords: point source approximation * seismic moment tensor * source time function. Subject RIV: DC - Seismology, Volcanology, Earth Structure. Impact factor: 0.975, year: 2005
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Šindelářová, Kateřina; Hýža, Miroslav; Stohl, Andreas
2017-10-01
In the fall of 2011, iodine-131 (131I) was detected at several radionuclide monitoring stations in central Europe. After investigation, the International Atomic Energy Agency (IAEA) was informed by Hungarian authorities that 131I was released from the Institute of Isotopes Ltd. in Budapest, Hungary. It was reported that a total activity of 342 GBq of 131I was emitted between 8 September and 16 November 2011. In this study, we use the ambient concentration measurements of 131I to determine the location of the release as well as its magnitude and temporal variation. As the location of the release and an estimate of the source strength became eventually known, this accident represents a realistic test case for inversion models. For our source reconstruction, we use no prior knowledge. Instead, we estimate the source location and emission variation using only the available 131I measurements. Subsequently, we use the partial information about the source term available from the Hungarian authorities for validation of our results. For the source determination, we first perform backward runs of atmospheric transport models and obtain source-receptor sensitivity (SRS) matrices for each grid cell of our study domain. We use two dispersion models, FLEXPART and Hysplit, driven with meteorological analysis data from the global forecast system (GFS) and from European Centre for Medium-range Weather Forecasts (ECMWF) weather forecast models. Second, we use a recently developed inverse method, least-squares with adaptive prior covariance (LS-APC), to determine the 131I emissions and their temporal variation from the measurements and computed SRS matrices. For each grid cell of our simulation domain, we evaluate the probability that the release was generated in that cell using Bayesian model selection. The model selection procedure also provides information about the most suitable dispersion model for the source term reconstruction. Third, we select the most probable location of
Inverse Compton Gamma Rays from Dark Matter Annihilation in the ...
Indian Academy of Sciences (India)
…candidates for dark matter search due to their high mass-to-light (M/L) ratio. One of the most favored dark matter candidates is the lightest neutralino (neutral χ particle), as predicted in the Minimal Supersymmetric Standard Model (MSSM). In this study, we model the gamma ray emission from dark matter annihilation coming ...
Spectral Evolution of Synchrotron and Inverse Compton Emission in ...
Indian Academy of Sciences (India)
emission peaks in the optical band (e.g., Nieppola et al. 2006). In order to understand the evolution of synchrotron and IC spectra of BL Lac objects, X-ray spectral analysis with XMM–Newton X-ray observations of PKS 2155–304 and S5 0716+714 (see Zhang 2008, 2010 for details) was performed. Here, the results...
Constraint on Parameters of Inverse Compton Scattering Model for ...
Indian Academy of Sciences (India)
Figure 1 presents the best-fit regions at the 68% confidence level for γ0 and ξ, which depend on α. The main results are summarized as follows: • γ0 and ξ are larger than ∼6000 and ∼90, respectively. The allowed ranges ...
Relativistic inverse Compton scattering of photons from the early universe.
Malu, Siddharth; Datta, Abhirup; Colafrancesco, Sergio; Marchegiani, Paolo; Subrahmanyan, Ravi; Narasimha, D; Wieringa, Mark H
2017-12-05
Electrons at relativistic speeds, diffusing in magnetic fields, cause copious emission at radio frequencies in both clusters of galaxies and radio galaxies, through the non-thermal emission process known as synchrotron radiation. However, the total power radiated through this mechanism is poorly constrained, as the lower limit of the electron energy distribution, or low-energy cutoff, for radio emission in galaxy clusters and radio galaxies has not yet been determined. This lower limit, parametrized by the lower limit of the electron momentum, p_min, is critical for estimating the total energetics of non-thermal electrons produced by cluster mergers or injected by radio galaxy jets, which impacts the formation of large-scale structure in the universe as well as the evolution of local structures inside galaxy clusters. The total pressure due to the relativistic, non-thermal population of electrons can be measured using the Sunyaev-Zel'dovich Effect, and is critically dependent on p_min, making the measurement of this non-thermal pressure a promising technique to estimate the electron low-energy cutoff. We present here the first unambiguous detection of this Sunyaev-Zel'dovich Effect for a non-thermal population of electrons in a radio galaxy jet/lobe, located a significant distance from the center of the Bullet cluster of galaxies.
Inverse Compton Gamma Rays from Dark Matter Annihilation in the ...
Indian Academy of Sciences (India)
[Figure: electron spectrum as a function of electron energy for three different values of Mχ annihilating into the bb̄ final state.] The annihilation cross sections are obtained from Ackermann et al. (2014). The DM annihilation takes place predominantly through some combination of the final states bb̄, tt̄, W⁺W⁻ or ZZ. The gamma ray ...
Spectral Evolution of Synchrotron and Inverse Compton Emission in ...
Indian Academy of Sciences (India)
and the 0.5–10 keV fluxes for the IC component; in Fig. 2(c) the synchrotron and IC 0.5–10 keV fluxes are plotted against the total (i.e., synchrotron plus IC) 0.5–10 keV fluxes, respectively. The results can be summarized as follows. The synchrotron spectra appear to harden with larger synchrotron fluxes, whereas the IC ...
Aur, K. A.; Poppeliers, C.; Preston, L. A.
2017-12-01
The Source Physics Experiment (SPE) consists of a series of underground chemical explosions at the Nevada National Security Site (NNSS) designed to gain an improved understanding of the generation and propagation of physical signals in the near and far field. Characterizing the acoustic and infrasound source mechanism from underground explosions is of great importance to underground explosion monitoring. To this end we perform full waveform source inversion of infrasound data collected from the SPE-6 experiment at distances from 300 m to 6 km and frequencies up to 20 Hz. Our method requires estimating the state of the atmosphere at the time of each experiment, computing Green's functions through these atmospheric models, and subsequently inverting the observed data in the frequency domain to obtain a source time function. To estimate the state of the atmosphere at the time of the experiment, we utilize the Weather Research and Forecasting - Data Assimilation (WRF-DA) modeling system to derive a unified atmospheric state model by combining Global Energy and Water Cycle Experiment (GEWEX) Continental-scale International Project (GCIP) data and locally obtained sonde and surface weather observations collected at the time of the experiment. We synthesize Green's functions through these atmospheric models using Sandia's moving media acoustic propagation simulation suite (TDAAPS). These models include 3-D variations in topography, temperature, pressure, and wind. We compare inversion results using the atmospheric models derived from the unified weather models versus previous modeling results and discuss how these differences affect computed source waveforms with respect to observed waveforms at various distances. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear
Virieux, J.; Bretaudeau, F.; Metivier, L.; Brossier, R.
2013-12-01
Simultaneous inversion of seismic velocities and source parameters has been a long-standing challenge in seismology since the first attempts to mitigate the trade-off between the very different parameters influencing travel times (Spencer and Gubbins 1980, Pavlis and Booker 1980), following the early developments of the 1970s (Aki et al. 1976, Aki and Lee 1976, Crosson 1976). There is a strong trade-off between earthquake source positions, initial times and velocities during tomographic inversion; mitigating these trade-offs is usually carried out empirically (Lemeur et al. 1997). This procedure is not optimal and may lead to errors in the velocity reconstruction as well as in the source localization. For a better simultaneous estimation in such a multi-parameter reconstruction problem, one may benefit from improved local optimization such as a full Newton method, where the Hessian helps balance the different physical parameter quantities and improves the coverage at the point of reconstruction. Unfortunately, the full Hessian operator is not easily computed for large models and large datasets. Truncated Newton (TCN) is an alternative optimization approach (Métivier et al. 2012) that solves the normal equation H Δm = -g using a matrix-free conjugate gradient algorithm; it only requires the ability to compute the gradient of the misfit function and Hessian-vector products. Travel-time maps can be computed in the whole domain by numerical modeling (Vidale 1998, Zhao 2004). The gradient and the Hessian-vector products for velocities can be computed without ray tracing using 1st- and 2nd-order adjoint-state methods, for the cost of 1 and 2 additional modeling steps respectively (Plessix 2006, Métivier et al. 2012). Reciprocity allows accurate computation of the gradient and the full Hessian for the coordinates of the sources and for their initial times. The resolution of the problem is then done through two nested loops. The model update Δm is
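The truncated-Newton idea, solving H Δm = -g while exposing only Hessian-vector products, can be illustrated on a toy quadratic misfit; A and b below are random stand-ins, not a tomographic Hessian:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Toy quadratic misfit f(m) = 0.5 m^T A m - b^T m, whose Newton system
# H dm = -g reduces to A dm = b - A m.  As in the truncated-Newton
# scheme, the solver never sees the assembled Hessian, only matvecs.
n = 20
rng = np.random.default_rng(2)
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)          # symmetric positive definite "Hessian"
b = rng.standard_normal(n)

m = np.zeros(n)                      # current model
grad = A @ m - b                     # gradient of the misfit at m
hess = LinearOperator((n, n), matvec=lambda v: A @ v)

dm, info = cg(hess, -grad, maxiter=200)   # matrix-free CG on H dm = -g
m_new = m + dm                            # one Newton update
```

In the seismological setting the matvec would itself be computed by two extra adjoint-state modeling steps, which is exactly why the matrix-free formulation pays off.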
Inversion of GPS-measured coseismic displacements for source parameters of Taiwan earthquake
Lin, J. T.; Chang, W. L.; Hung, H. K.; Yu, W. C.
2016-12-01
We present a method of determining earthquake location, focal mechanism, and centroid moment tensor from coseismic surface displacements measured by daily and high-rate GPS. Unlike the commonly used dislocation model, where the fault geometry is solved for nonlinearly, our method takes a point-source approach that evaluates these parameters robustly and efficiently without a priori fault information, and can thus provide constraints for subsequent finite-source modeling of fault slip. In this study, we focus on the resolving power of GPS data for moderate (Mw = 6.0-7.0) earthquakes in Taiwan, and four earthquakes were investigated in detail: the 27 March 2013 Nantou (Mw = 6.0), the 2 June 2013 Nantou (Mw = 6.3), the 31 October 2013 Ruisui (Mw = 6.3), and the 31 March 2002 Hualien (ML = 6.8) earthquakes. All these events were recorded by the Taiwan continuous GPS network with data sampling rates of 30 s and 1 Hz; the Mw 6.3 Ruisui earthquake was additionally recorded by another local GPS network with a sampling rate of 20 Hz. Our inverted focal mechanisms for all these earthquakes are consistent with the results of GCMT and USGS, which evaluate source parameters from the dynamic information in seismic waves. We also successfully resolved the source parameters of the Mw 6.3 Ruisui earthquake within only 10 seconds of the earthquake occurrence, demonstrating the potential of high-rate GPS data for earthquake early warning and real-time determination of earthquake source parameters.
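The point-source formulation is linear in the six independent moment-tensor components, so a short least-squares sketch illustrates it; G and m_true below are hypothetical stand-ins, not the Taiwan data:

```python
import numpy as np

rng = np.random.default_rng(3)

# For a point source, static displacements are linear in the six
# moment-tensor elements: d = G m.  G here is random; in practice it
# holds elastostatic Green's functions for the station geometry.
n_data, n_mt = 30, 6                  # 30 GPS displacement components
G = rng.standard_normal((n_data, n_mt))
m_true = np.array([1.0, -0.5, -0.5, 0.3, 0.0, 0.2])  # hypothetical Mij
d = G @ m_true + 0.01 * rng.standard_normal(n_data)  # noisy coseismic data

# Least-squares inversion: no a priori fault geometry is needed,
# which is the efficiency argument made in the abstract.
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
```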
McAlpine, Jerrold D.
In arid regions, mechanical disturbances along the desert floor can result in large fluxes of dust particles into the atmosphere. Rotorcraft operation near the surface may have the greatest potential for dust entrainment per vehicle. There is therefore a need for efficient tools to estimate the risk of air quality and visibility impacts in the neighborhood of rotorcraft operating near the desert surface. In this study, a set of parameterized models were developed to form a multi-component modeling system to simulate the entrainment and dispersion of dust from a rotorcraft wake. A simplified scheme utilizing momentum theory was applied to predict the shear stress at the ground under the rotorcraft. Stochastic dust emission algorithms were used to predict the PM10 emission rate from the wake. The distribution of dust emission from the wake was assigned at the walls of a box-volume that encapsulates the wake. The distribution was determined using the results of an inverse Lagrangian stochastic particle dispersion modeling study, using a dataset from a full-scale experiment. All of the elements were put together into a model that simulates the dispersion of PM10 dust from a rotorcraft wake. Downwind concentrations of PM10 estimated using the multi-component modeling system compared well to a set of experimental measurements.
Díaz-Mojica, J. J.; Cruz-Atienza, V. M.; Madariaga, R.; Singh, S. K.; Iglesias, A.
2013-05-01
We introduce a novel approach for imaging earthquake dynamics from ground-motion records, based on a parallel genetic algorithm (GA). The method follows the elliptical dynamic-rupture-patch approach introduced by Di Carli et al. (2010) and has been carefully verified through different numerical tests (Díaz-Mojica et al., 2012). Apart from the five model parameters defining the patch geometry, our dynamic source description has four more parameters: the stress drop inside the nucleation and elliptical patches, and two friction parameters, the slip-weakening distance and the change of the friction coefficient. These parameters are constant over the rupture surface. The forward dynamic source problem, embedded in the GA inverse method, uses a highly accurate computational solver, namely the staggered-grid split-node method. The synthetic inversion presented here shows that the source-model parameterization is suitable for the GA, and that short-scale source dynamic features are well resolved in spite of low-pass filtering of the data for periods comparable to the source duration. Since there is always uncertainty in the propagation medium as well as in the source location and focal mechanism, we have introduced a statistical approach that generates a set of solution models such that the envelope of the corresponding synthetic waveforms explains the observed data as well as possible. We applied the method to the 2012 Mw 6.5 intraslab Zumpango, Mexico earthquake and determined several fundamental source parameters that are in accordance with different and completely independent estimates for Mexican and worldwide earthquakes. Our weighted-average final model satisfactorily explains the eastward rupture directivity observed in the recorded data. Some parameters found for the Zumpango earthquake are: Δτ = 30.2±6.2 MPa, Er = 0.68±0.36×10^15 J, G = 1.74±0.44×10^15 J, η = 0.27±0.11, Vr/Vs = 0.52±0.09 and Mw = 6.64±0.07; for the stress drop
International Nuclear Information System (INIS)
Antoniassi, M.; Conceição, A.L.C.; Poletti, M.E.
2012-01-01
Electron densities of 33 samples of normal (adipose and fibroglandular) and neoplastic (benign and malignant) human breast tissues were determined through Compton scattering data using a monochromatic synchrotron radiation source and an energy-dispersive detector. The area of the Compton peaks was used to determine the electron densities of the samples. Adipose tissue exhibits the lowest values of electron density, whereas malignant tissue exhibits the highest. The relationship with their histology is discussed. Comparison with previous results showed differences smaller than 4%. - Highlights: ► Electron density of normal and neoplastic breast tissues was measured using Compton scattering. ► Monochromatic synchrotron radiation was used to obtain the Compton scattering data. ► The area of the Compton peaks was used to determine the electron densities of samples. ► Adipose tissue shows the lowest electron density values, whereas malignant tissue shows the highest. ► Comparison with previous results showed differences smaller than 4%.
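The normalization idea behind a Compton-peak-area density measurement can be sketched as follows. This is a minimal illustration only: it assumes identical geometry and attenuation for sample and reference (a water phantom here), so that the Compton peak area scales linearly with electron density; the peak-area numbers are hypothetical.

```python
# Relative electron-density estimate from Compton peak areas.
# Assumption: peak area is proportional to electron density when sample
# and reference are measured in the same geometry.

WATER_RHO_E = 3.34e23  # electrons/cm^3, standard value for water

def electron_density(peak_area_sample, peak_area_ref, rho_e_ref=WATER_RHO_E):
    """Scale the reference electron density by the ratio of Compton peak areas."""
    return rho_e_ref * peak_area_sample / peak_area_ref

# Hypothetical peak areas (counts): adipose tissue scatters less than water.
rho_adipose = electron_density(0.93e5, 1.00e5)
print(rho_adipose)  # ~3.1e23 electrons/cm^3
```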
Bauwens, Maite; Stavrakou, Trissevgeni; Müller, Jean François; De Smedt, Isabelle; Van Roozendael, Michel; Van Der Werf, Guido R.; Wiedinmyer, Christine; Kaiser, Johannes W.; Sindelarova, Katerina; Guenther, Alex
2016-01-01
As formaldehyde (HCHO) is a high-yield product in the oxidation of most volatile organic compounds (VOCs) emitted by fires, vegetation, and anthropogenic activities, satellite observations of HCHO are well-suited to inform us on the spatial and temporal variability of the underlying VOC sources.
Electronic properties and Compton scattering studies of monoclinic tungsten dioxide
Heda, N. L.; Ahuja, Ushma
2015-01-01
We present the first ever Compton profile measurement of WO2 using a 20 Ci 137Cs γ-ray source. The experimental data have been used to test different approximations of density functional theory in linear combination of atomic orbitals (LCAO) scheme. It is found that theoretical Compton profile deduced using generalized gradient approximation (GGA) gives a better agreement than local density approximation and second order GGA. The computed energy bands, density of states and Mulliken's populations (MP) data confirm a metal-like behavior of WO2. The electronic properties calculated using LCAO approach are also compared with those obtained using full potential linearized augmented plane wave method. The nature of bonding in WO2 is also compared with isoelectronic WX2 (X=S, Se) compounds in terms of equal-valence-electron-density profiles and MP data, which suggest an increase in ionic character in the order WSe2→WS2→WO2.
Nested atmospheric inversion for the terrestrial carbon sources and sinks in China
Directory of Open Access Journals (Sweden)
F. Jiang
2013-08-01
In this study, we establish a nested atmospheric inversion system with a focus on China using the Bayesian method. The global surface is separated into 43 regions based on the 22 TransCom large regions, with 13 small regions in China. Monthly CO2 concentrations from 130 GlobalView sites and 3 additional China sites are used in this system. The core component of this system is an atmospheric transport matrix, which is created using the TM5 model with a horizontal resolution of 3° × 2°. The net carbon fluxes over the 43 global land and ocean regions are inverted for the period from 2002 to 2008. The inverted global terrestrial carbon sinks mainly occur in boreal Asia, South and Southeast Asia, eastern America and southern South America. Most areas of China appear to be carbon sinks, with the strongest carbon sinks located in Northeast China. From 2002 to 2008, the global terrestrial carbon sink has an increasing trend, with the lowest carbon sink in 2002. The inter-annual variation (IAV) of the land sinks shows remarkable correlation with the El Niño Southern Oscillation (ENSO). The terrestrial carbon sinks in China also show an increasing trend. However, the IAV in China is not the same as that of the globe. There is a relatively stronger land sink in 2002, the lowest sink in 2006, and the strongest sink in 2007 in China. This IAV can be reasonably explained by the IAVs of temperature and precipitation in China. The mean global and China terrestrial carbon sinks over the period 2002–2008 are −3.20 ± 0.63 and −0.28 ± 0.18 PgC yr−1, respectively. Considering the carbon emissions in the form of reactive biogenic volatile organic compounds (BVOCs) and from the import of wood and food, we further estimate that China's land sink is about −0.31 PgC yr−1.
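The Bayesian synthesis step at the heart of such an inversion can be written in one line of linear algebra: the posterior flux is the prior corrected by the Kalman gain applied to the observation mismatch. The sketch below uses a tiny random transport matrix with illustrative dimensions and error levels, not the study's actual TM5-derived matrix or station data.

```python
import numpy as np

# Minimal Bayesian flux inversion:
#   x_hat = x_p + B H^T (H B H^T + R)^-1 (y - H x_p)
# H: transport (Jacobian), B: prior flux covariance, R: obs covariance.

rng = np.random.default_rng(0)
n_regions, n_obs = 4, 12
H = rng.normal(size=(n_obs, n_regions))              # toy transport matrix
x_true = np.array([-0.5, 0.2, -1.0, 0.1])            # "true" regional fluxes
y = H @ x_true + rng.normal(scale=0.05, size=n_obs)  # noisy concentrations

x_prior = np.zeros(n_regions)
B = np.eye(n_regions)                                # prior error covariance
R = np.eye(n_obs) * 0.05**2                          # observation covariance

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)         # Kalman gain
x_post = x_prior + K @ (y - H @ x_prior)
print(x_post)  # close to x_true when the observations constrain all regions
```

With many well-placed observations (as here), the posterior is pulled almost entirely to the data; with sparse observations, the prior dominates for poorly constrained regions.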
Arnold, Tim; Manning, Alistair; Li, Shanlan; Kim, Jooil; Park, Sunyoung; Muhle, Jens; Weiss, Ray
2017-04-01
The fluorinated species carbon tetrafluoride (CF4; PFC-14), nitrogen trifluoride (NF3) and trifluoromethane (CHF3; HFC-23) are potent greenhouse gases with 100-year global warming potentials of 6,630, 16,100 and 12,400, respectively. Unlike the majority of CFC replacements, which are emitted from fugitive and mobile emission sources, these gases are mostly emitted from large single point sources - semiconductor manufacturing facilities (all three), aluminium smelting plants (CF4) and chlorodifluoromethane (HCFC-22) factories (HFC-23). In this work we show that atmospheric measurements can serve as a basis to calculate emissions of these gases and to highlight emission 'hotspots'. We use measurements from one of the Advanced Global Atmospheric Gases Experiment (AGAGE) long-term monitoring sites, at Gosan on Jeju Island in the Republic of Korea. This site measures CF4, NF3 and HFC-23 alongside a suite of greenhouse and stratospheric ozone-depleting gases every two hours using automated in situ gas-chromatography mass-spectrometry instrumentation. We couple each measurement to an analysis of air history using the regional atmospheric transport model NAME (Numerical Atmospheric dispersion Modelling Environment) driven by 3D meteorology from the Met Office's Unified Model, and use a Bayesian inverse method (InTEM - Inversion Technique for Emission Modelling) to calculate yearly emission changes over seven years between 2008 and 2015. We show that our 'top-down' emission estimates for NF3 and CF4 are significantly larger than 'bottom-up' estimates in the EDGAR emissions inventory (edgar.jrc.ec.europa.eu). For example, we calculate South Korean emissions of CF4 in 2010 to be 0.29±0.04 Gg/yr, which is significantly larger than the EDGAR prior emissions of 0.07 Gg/yr. Further, inversions for several separate years indicate that emission hotspots can be found without prior spatial information. At present these gases make a small contribution to global radiative forcing, however, given
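The climate relevance of such mass emissions follows directly from the 100-year GWPs quoted above; a quick CO2-equivalent conversion shows the scale of the estimated South Korean CF4 emissions:

```python
# CO2-equivalent emissions from the GWP-100 values quoted in the abstract.
GWP100 = {"CF4": 6630, "NF3": 16100, "HFC-23": 12400}

def co2_equivalent(gas, emission_gg_per_yr):
    """Convert an emission in Gg/yr to Tg CO2-eq/yr using GWP-100."""
    return GWP100[gas] * emission_gg_per_yr / 1000.0  # Gg -> Tg

# South Korean CF4 emissions in 2010, from the top-down estimate above:
print(co2_equivalent("CF4", 0.29))  # ~1.92 Tg CO2-eq/yr
```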
Quantitative Compton suppression spectrometry at elevated counting rates
International Nuclear Information System (INIS)
Westphal, G.P.; Joestl, K.; Schroeder, P.; Lauster, R.; Hausch, E.
1999-01-01
For quantitative Compton suppression spectrometry, the decrease of coincidence efficiency with counting rate should be made negligible to avoid a virtual increase of the relative peak areas of coincident isomeric transitions with counting rate. To that aim, a separate amplifier and discriminator have been used for each of the eight segments of the active shield of a new well-type Compton suppression spectrometer, together with an optimized, minimum-dead-time design of the anticoincidence logic circuitry. Chance coincidence losses in the Compton suppression spectrometer are corrected instrumentally by comparing the chance coincidence rate to the counting rate of the germanium detector in a pulse-counting Busy circuit (G.P. Westphal, J. Rad. Chem. 179 (1994) 55), which is combined with the spectrometer's LFC counting loss correction system. The normally unobservable chance coincidence rate is reconstructed from the rates of the germanium detector and the scintillation detector in an auxiliary coincidence unit, after the destruction of true coincidence by delaying one of the coincidence partners. Quantitative system response has been tested in two-source measurements with a fixed reference source of 60Co at 14 kc/s, and various samples of 137Cs, up to aggregate counting rates of 180 kc/s for the well-type detector, and more than 1400 kc/s for the BGO shield. In these measurements, the net peak areas of the 1173.3 keV line of 60Co remained constant at typical values of 37 000 with and 95 000 without Compton suppression, with maximum deviations from the average of less than 1.5%
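The chance-coincidence correction above rests on the standard accidental-rate relation for two uncorrelated detectors. A minimal sketch, with an assumed illustrative resolving time (not the spectrometer's actual value):

```python
# Chance (accidental) coincidence rate between two detectors with counting
# rates n1 and n2 and coincidence resolving time tau:
#   R_chance = 2 * tau * n1 * n2

def chance_rate(n1_cps, n2_cps, tau_s=100e-9):  # tau is an assumed value
    return 2.0 * tau_s * n1_cps * n2_cps

# Germanium detector at 180 kc/s against the BGO shield at 1400 kc/s:
print(chance_rate(180e3, 1400e3))  # ~5.0e4 accidental vetoes per second
```

Even with a 100 ns resolving time, the accidental rate is a sizeable fraction of the germanium rate at these loadings, which is why the instrumental correction matters.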
Mount St. Helens: Controlled-source audio-frequency magnetotelluric (CSAMT) data and inversions
Wynn, Jeff; Pierce, Herbert A.
2015-01-01
This report describes a series of geoelectrical soundings carried out on and near Mount St. Helens volcano, Washington, in 2010–2011. These soundings used a controlled-source audio-frequency magnetotelluric (CSAMT) approach (Zonge and Hughes, 1991; Simpson and Bahr, 2005). We chose CSAMT for logistical reasons: It can be deployed by helicopter, has an effective depth of penetration of as much as 1 kilometer, and requires less wire than a Schlumberger sounding.
A probabilistic approach for the estimation of earthquake source parameters from spectral inversion
Supino, M.; Festa, G.; Zollo, A.
2017-12-01
The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, and the moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune's (1970) source model, and direct P- and S-waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum then depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy of the fault length) and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach for the parameter estimation. Assuming an L2-norm-based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and then explore the joint a-posteriori probability density function around such a minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining a deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. The numerical integration of the pdf finally provides the mean, variance and correlation matrix associated with the set of best-fit parameters describing the model. Synthetic tests are performed to
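The spectral model being inverted can be made concrete. The sketch below uses a Brune-type displacement spectrum with a path attenuation term, and recovers the three source parameters by a plain grid search on a noise-free synthetic (a stand-in for the paper's basin-hopping exploration; the travel time and Q are fixed, assumed values):

```python
import numpy as np

# Brune-type displacement amplitude spectrum with attenuation:
#   Omega(f) = Omega0 * exp(-pi * f * t / Q) / (1 + (f/fc)^gamma)
# Omega0 ~ seismic moment, fc = corner frequency, gamma = HF decay.

def brune(f, omega0, fc, gamma, t=5.0, Q=300.0):  # t, Q assumed fixed here
    return omega0 * np.exp(-np.pi * f * t / Q) / (1.0 + (f / fc) ** gamma)

f = np.logspace(-1, 1, 100)                       # 0.1-10 Hz
observed = brune(f, omega0=2.0, fc=1.0, gamma=2.0)

# Grid search minimizing the L2 misfit of log amplitudes:
best = min(
    ((o0, fc_, g) for o0 in np.linspace(0.5, 4, 15)
                  for fc_ in np.logspace(-0.5, 0.5, 15)
                  for g in (1.5, 2.0, 2.5)),
    key=lambda p: np.sum((np.log(brune(f, *p)) - np.log(observed)) ** 2),
)
print(best)  # recovers (2.0, 1.0, 2.0) on this noise-free example
```

In the paper's setting the minimization is followed by sampling the a-posteriori pdf around this minimum to quantify the strong trade-offs between fc, gamma and Q.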
[Inversion of results upon using an "integral index" from an ostensibly authoritative source].
Galkin, A V
2012-01-01
General results and conclusions of Nesterenko et al. [Biofizika 57(4)] have been distorted by uncritically applying an invalid "unifying formula" from a recent monograph [ISBN 978-5-02-035593-4]. Readers are asked to ignore the Russian publication but take the fully revised English version [Biophysics 57(4)] or contact the authors (tv-nesterenko@mail.ru, ubflab@ibp.ru). Here I briefly show the basic defects in the quasi-quantitative means of data analysis offered in that book, and mention some more problems regarding erroneous information in ostensibly authoritative sources.
Development of electron-tracking Compton imaging system with 30-μm SOI pixel sensor
Yoshihara, Y.; Shimazoe, K.; Mizumachi, Y.; Takahashi, H.; Kamada, K.; Takeda, A.; Tsuru, T.; Arai, Y.
2017-01-01
Compton imaging is a useful method to localize gamma sources without using mechanical collimators. In conventional Compton imaging, the incident directions of gamma rays are estimated in a cone for each event by analyzing the sequence of interactions of each gamma ray followed by Compton kinematics. Since the information of the ejection directions of the recoil electrons is lost, many artifacts in the shape of cone traces are generated, which reduces signal-to-noise ratio (SNR) and angular resolution. We have developed an advanced Compton imaging system with the capability of tracking recoil electrons by using a combination of a trigger-mode silicon-on-insulator (SOI) pixel detector and a GAGG detector. This system covers the 660-1330 keV energy range for localization of contamination nuclides such as 137Cs and 134Cs inside the Fukushima Daiichi Nuclear Power Plant in Japan. The ejection directions of recoil electrons caused by Compton scattering are detected on the micro-pixelated SOI detector, which can theoretically be used to determine the incident directions of the gamma rays in a line for each event and can reduce the appearance of artifacts. We obtained 2-D reconstructed images from the first iteration of the proposed system for 137Cs, and the SNR and angular resolution were enhanced compared with those of conventional Compton imaging systems.
Electronic properties and Compton scattering studies of monoclinic tungsten dioxide
International Nuclear Information System (INIS)
Heda, N.L.; Ahuja, Ushma
2015-01-01
We present the first-ever Compton profile measurement of WO2 using a 20 Ci 137Cs γ-ray source. The experimental data have been used to test different approximations of density functional theory in the linear combination of atomic orbitals (LCAO) scheme. It is found that the theoretical Compton profile deduced using the generalized gradient approximation (GGA) gives better agreement than the local density approximation and second-order GGA. The computed energy bands, density of states and Mulliken's population (MP) data confirm a metal-like behavior of WO2. The electronic properties calculated using the LCAO approach are also compared with those obtained using the full-potential linearized augmented plane wave method. The nature of bonding in WO2 is also compared with the isoelectronic WX2 (X=S, Se) compounds in terms of equal-valence-electron-density profiles and MP data, which suggest an increase in ionic character in the order WSe2→WS2→WO2. - Highlights: • Presented first-ever Compton profile (CP) measurements on WO2. • Analyzed CP data in terms of LCAO–DFT calculations. • Discussed energy bands, DOS and Mulliken's populations. • Discussed equally scaled CPs and bonding of isoelectronic WO2, WS2 and WSe2. • Reported metallic character and Fermi surface topology of WO2
Computer control in a compton scattering spectrometer
International Nuclear Information System (INIS)
Cui Ningzhuo; Chen Tao; Gong Zhufang; Yang Baozhong; Mo Haiding; Hua Wei; Bian Zuhe
1995-01-01
The authors introduce the hardware and software for computer-controlled calibration and data acquisition in a Compton scattering spectrometer consisting of an HPGe detector, amplifiers, and an MCA.
Kharkov X-ray Generator Based On Compton Scattering
International Nuclear Information System (INIS)
Shcherbakov, A.; Zelinsky, A.; Mytsykov, A.; Gladkikh, P.; Karnaukhov, I.; Lapshin, V.; Telegin, Y.; Androsov, V.; Bulyak, E.; Botman, J.I.M.; Tatchyn, R.; Lebedev, A.
2004-01-01
Nowadays, X-ray sources based on storage rings with low beam energy and Compton scattering of intense laser beams are under development in several laboratories. An international cooperative project for an advanced X-ray source of this type at the Kharkov Institute of Physics and Technology (KIPT) is described, and the status of the project is reviewed. The design lattice of the storage ring and calculated X-ray beam parameters are presented. The results of numerical simulation carried out for the proposed facility show that a peak spectral X-ray intensity of about 10^14 can be produced
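The photon energies reachable by such Compton sources follow from the standard inverse-Compton kinematics. A minimal sketch, with illustrative beam and laser parameters (not the Kharkov design values):

```python
import math

# On-axis inverse-Compton photon energy for a head-on collision:
#   E_x = 4 * gamma^2 * E_laser / (1 + 4*gamma*E_laser/(m_e c^2) + (gamma*theta)^2)
# theta is the observation angle relative to the electron direction.

ME_C2 = 0.511e6  # electron rest energy, eV

def ics_photon_energy_ev(e_beam_mev, e_laser_ev, theta=0.0):
    gamma = e_beam_mev * 1e6 / ME_C2
    recoil = 4.0 * gamma * e_laser_ev / ME_C2
    return 4.0 * gamma**2 * e_laser_ev / (1.0 + recoil + (gamma * theta) ** 2)

# 100 MeV electrons against a 1.17 eV (1064 nm Nd:YAG) laser photon:
print(ics_photon_energy_ev(100.0, 1.17) / 1e3)  # ~179 keV X-rays
```

The (γθ)² term in the denominator is why the hardest photons are confined to a narrow 1/γ cone around the electron beam axis.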
Time Projection Compton Spectrometer (TPCS). User's guide
Energy Technology Data Exchange (ETDEWEB)
Landron, C.O. [Sandia National Labs., Albuquerque, NM (United States); Baldwin, G.T. [International Atomic Energy Agency, Vienna (Austria)
1994-04-01
The Time Projection Compton Spectrometer (TPCS) is a radiation diagnostic designed to determine the time-integrated energy spectrum between 100 keV and 2 MeV of flash x-ray sources. This guide is intended as a reference for the routine operator of the TPCS. Contents include a brief overview of the principle of operation, detailed component descriptions, detailed assembly and disassembly procedures, a guide to routine operations, and troubleshooting flowcharts. Detailed principles of operation, signal analysis and spectrum unfold algorithms are beyond the scope of this guide; however, the guide makes reference to sources containing this information.
Colour coherence in deep inelastic Compton scattering
Energy Technology Data Exchange (ETDEWEB)
Lebedev, A.I.; Vazdik, J.A. (Lebedev Physical Inst., Academy of Sciences, Moscow (USSR))
1992-01-01
MC simulation of Deep Inelastic Compton scattering on the proton - both QED and QCD - was performed on the basis of the LUCIFER program for HERA energies. Charged hadron flow was calculated for string and independent fragmentation with different cuts on p_t and x. It is shown that interjet colour coherence leads, in the case of QCD Compton, to drag effects diminishing the hadron flow in the direction between the quark jet and the proton remnant jet. (orig.)
Development of compact Compton camera for 3D image reconstruction of radioactive contamination
Sato, Y.; Terasaka, Y.; Ozawa, S.; Nakamura Miyamura, H.; Kaburagi, M.; Tanifuji, Y.; Kawabata, K.; Torii, T.
2017-11-01
The Fukushima Daiichi Nuclear Power Station (FDNPS), operated by Tokyo Electric Power Company Holdings, Inc., went into meltdown after the large tsunami caused by the Great East Japan Earthquake of March 11, 2011. Very large amounts of radionuclides were released from the damaged plant. Radiation distribution measurements inside the FDNPS buildings are indispensable for executing decommissioning tasks in the reactor buildings. We have developed a compact Compton camera to measure the distribution of radioactive contamination inside the FDNPS buildings three-dimensionally (3D). The total weight of the Compton camera is less than 1.0 kg. The gamma-ray sensor of the Compton camera employs Ce-doped GAGG (Gd3Al2Ga3O12) scintillators coupled with a multi-pixel photon counter. Angular correction of the detection efficiency of the Compton camera was conducted. Moreover, we developed a 3D back-projection method using the multi-angle data measured with the Compton camera. We successfully obtained 3D radiation images of two 137Cs radioactive sources, and the image of the 9.2 MBq source appeared stronger than that of the 2.7 MBq source.
Directory of Open Access Journals (Sweden)
C. B. Alden
2018-03-01
Advances in natural gas extraction technology have led to increased activity in the production and transport sectors in the United States and, as a consequence, an increased need for reliable monitoring of methane leaks to the atmosphere. We present a statistical methodology in combination with an observing system for the detection and attribution of fugitive emissions of methane from distributed potential source location landscapes such as natural gas production sites. We measure long (> 500 m), integrated open-path concentrations of atmospheric methane using a dual frequency comb spectrometer and combine measurements with an atmospheric transport model to infer leak locations and strengths using a novel statistical method, the non-zero minimum bootstrap (NZMB). The new statistical method allows us to determine whether the empirical distribution of possible source strengths for a given location excludes zero. Using this information, we identify leaking source locations (i.e., natural gas wells) through rejection of the null hypothesis that the source is not leaking. The method is tested with a series of synthetic data inversions with varying measurement density and varying levels of model–data mismatch. It is also tested with field observations of (1) a non-leaking source location and (2) a source location where a controlled emission of 3.1 × 10−5 kg s−1 of methane gas is released over a period of several hours. This series of synthetic data tests and outdoor field observations using a controlled methane release demonstrates the viability of the approach for the detection and sizing of very small leaks of methane across large distances (4+ km2 in synthetic tests). The field tests demonstrate the ability to attribute small atmospheric enhancements of 17 ppb to the emitting source location against a background of combined atmospheric (e.g., background methane variability) and measurement uncertainty of 5 ppb (1σ), when measurements are averaged over 2 min.
Alden, Caroline B.; Ghosh, Subhomoy; Coburn, Sean; Sweeney, Colm; Karion, Anna; Wright, Robert; Coddington, Ian; Rieker, Gregory B.; Prasad, Kuldeep
2018-03-01
Advances in natural gas extraction technology have led to increased activity in the production and transport sectors in the United States and, as a consequence, an increased need for reliable monitoring of methane leaks to the atmosphere. We present a statistical methodology in combination with an observing system for the detection and attribution of fugitive emissions of methane from distributed potential source location landscapes such as natural gas production sites. We measure long (> 500 m), integrated open-path concentrations of atmospheric methane using a dual frequency comb spectrometer and combine measurements with an atmospheric transport model to infer leak locations and strengths using a novel statistical method, the non-zero minimum bootstrap (NZMB). The new statistical method allows us to determine whether the empirical distribution of possible source strengths for a given location excludes zero. Using this information, we identify leaking source locations (i.e., natural gas wells) through rejection of the null hypothesis that the source is not leaking. The method is tested with a series of synthetic data inversions with varying measurement density and varying levels of model-data mismatch. It is also tested with field observations of (1) a non-leaking source location and (2) a source location where a controlled emission of 3.1 × 10-5 kg s-1 of methane gas is released over a period of several hours. This series of synthetic data tests and outdoor field observations using a controlled methane release demonstrates the viability of the approach for the detection and sizing of very small leaks of methane across large distances (4+ km2 in synthetic tests). The field tests demonstrate the ability to attribute small atmospheric enhancements of 17 ppb to the emitting source location against a background of combined atmospheric (e.g., background methane variability) and measurement uncertainty of 5 ppb (1σ), when measurements are averaged over 2 min.
Luminosity optimization schemes in Compton experiments based on Fabry-Perot optical resonators
Directory of Open Access Journals (Sweden)
Alessandro Variola
2011-03-01
The luminosity of Compton x-ray and γ sources depends on the average current in the electron bunches, the energy of the laser pulses, and the geometry of the particle-bunch to laser-pulse collisions. To obtain high-power photon pulses, the laser pulses can be stacked in a passive optical resonator (Fabry-Perot cavity), especially when a high average flux is required. But in this case, owing to the presence of the optical cavity mirrors, the electron bunches have to collide at an angle with the laser pulses, with a consequent luminosity decrease. In this article a crab-crossing scheme is proposed for Compton sources based on a laser amplified in a Fabry-Perot resonator, to eliminate the luminosity losses given by the crossing angle, taking into account that in laser-electron collisions only the electron bunches can be tilted at the collision point. We report an analytical study of the crab-crossing scheme for Compton gamma sources. The analytical expression for the total yield of photons generated in Compton sources with the crab-crossing scheme of collision is derived. The optimal crabbing angle of the bunch was found to be equal to half of the collision angle. At this crabbing angle, the maximal yield of scattered laser photons is attained, thanks to the maximization, in the collision process, of the time spent by the laser pulse in the electron bunch. Estimations for some Compton source projects are presented. Furthermore, some configurations of the optical cavities are analyzed and the luminosity calculated. As illustrated, the four-mirror two- or three-dimensional scheme is the most appropriate for Compton sources.
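The luminosity penalty that crab crossing is designed to recover can be illustrated with the standard geometric reduction factor for Gaussian bunches colliding at a crossing angle. This is a generic sketch with illustrative beam sizes, not the article's full analytical yield expression:

```python
import math

# Geometric luminosity reduction for a full crossing angle phi between
# Gaussian bunches: R = 1 / sqrt(1 + (tan(phi/2) * sigma_z / sigma_x)^2).
# Ideal crab crossing (tilting the bunch by phi/2) restores R -> 1.

def crossing_reduction(phi_rad, sigma_z, sigma_x):
    return 1.0 / math.sqrt(1.0 + (math.tan(phi_rad / 2.0) * sigma_z / sigma_x) ** 2)

# e.g. a 2-degree full crossing angle, 5 mm bunch length, 50 um spot size:
print(crossing_reduction(math.radians(2.0), 5e-3, 50e-6))  # ~0.50
```

Even a small crossing angle costs a factor of ~2 in luminosity when the bunch is long compared with its transverse size, which is the typical regime for Compton sources with Fabry-Perot cavities.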
Lin, Hongxiang; Azuma, Takashi; Qu, Xiaolei; Takagi, Shu
2017-03-01
In this work, we construct a multi-frequency accelerating strategy for the contrast source inversion (CSI) method using pulse data in the time domain. CSI is a frequency-domain inversion method for ultrasound waveform tomography that does not require a forward solver during the reconstruction process. Several prior studies show that the CSI method has good convergence and accuracy in the low-center-frequency situation. In contrast, utilizing high-center-frequency data leads to a high-resolution reconstruction but slow convergence on large numbers of grid points. Our objective is to take full advantage of all the low-frequency components of the pulse data together with the high-center-frequency data measured by the diagnostic device. First, we process the raw data in the frequency domain. Then the multi-frequency accelerating strategy restarts CSI at the current frequency using the last iteration result obtained from the lower frequency component. The merit of the multi-frequency accelerating strategy is that the computational burden decreases during the first few iterations, because the low-frequency components of the dataset are computed on a coarse grid, assuming a fixed number of points per wavelength. In the numerical test, the pulse data were generated by the K-wave simulator and processed to suit the CSI method. We investigate the performance of the multi-frequency and single-frequency reconstructions and conclude that the multi-frequency accelerating strategy significantly enhances the quality of the reconstructed image while simultaneously reducing the average computational time per iteration step.
Adriano, Bruno; Fujii, Yushiro; Koshimura, Shunichi; Mas, Erick; Ruiz-Angulo, Angel; Estrada, Miguel
2018-01-01
On September 8, 2017 (UTC), a normal-fault earthquake occurred 87 km off the southeast coast of Mexico. This earthquake generated a tsunami that was recorded at coastal tide gauge and offshore buoy stations. First, we conducted a numerical tsunami simulation using a single-fault model to understand the tsunami characteristics near the rupture area, focusing on the nearby tide gauge stations. Second, the tsunami source of this event was estimated from inversion of tsunami waveforms recorded at six coastal stations and three buoys located in the deep ocean. Using the aftershock distribution within 1 day following the main shock, the fault plane orientation had a northeast dip direction (strike = 320°, dip = 77°, and rake = -92°). The results of the tsunami waveform inversion revealed that the fault area was 240 km × 90 km in size, with most of the largest slip occurring on the middle and deepest segments of the fault. The maximum slip was 6.03 m from a 30 × 30 km² segment that was 64.82 km deep at the center of the fault area. The estimated slip distribution showed that the main asperity was at the center of the fault area. The second asperity, with an average slip of 5.5 m, was found on the northwest-most segments. The estimated slip distribution yielded a seismic moment of 2.9 × 10^{21} N·m (Mw = 8.24), calculated assuming an average rigidity of 7 × 10^{10} N/m².
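The moment magnitude quoted above follows from the seismic moment via the standard Hanks-Kanamori relation, which can be checked directly:

```python
import math

# Moment magnitude from seismic moment (Hanks & Kanamori):
#   Mw = (2/3) * (log10(M0 [N m]) - 9.1)

def moment_magnitude(m0_nm):
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# Seismic moment from the tsunami waveform inversion above:
print(round(moment_magnitude(2.9e21), 2))  # -> 8.24
```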
Image reconstruction from limited angle Compton camera data
International Nuclear Information System (INIS)
Tomitani, T.; Hirasawa, M.
2002-01-01
The Compton camera is used for imaging the directional distributions of γ rays in a γ-ray telescope for astrophysics, and for imaging radioisotope distributions in nuclear medicine, without the need for collimators. The integration of γ rays on a cone is measured with the camera, so some sort of inversion method is needed. Parra found an analytical inversion algorithm based on a spherical harmonics expansion of the projection data. His algorithm is applicable to the full set of projection data. In this paper, six possible reconstruction algorithms that allow image reconstruction from projections with a finite range of scattering angles are investigated. Four algorithms have instability problems and the two others are practical. However, the variance of the reconstructed image diverges in these two cases, so window functions are introduced with which the variance becomes finite at a cost of spatial resolution. These two algorithms are compared in terms of variance. The algorithm based on the inversion of the summed back-projection is superior to the algorithm based on the inversion of the summed projection. (author)
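The cone on which each event constrains the source direction comes from Compton kinematics: the two measured energies fix the scattering angle. A minimal sketch of that per-event computation (illustrative energies):

```python
import math

# Compton cone opening angle from the energies measured in a two-interaction
# event: cos(theta) = 1 - m_e c^2 * (1/E_scattered - 1/E_incident).
# Each valid event restricts the source direction to a cone of this angle
# about the scatter-to-absorber axis.

ME_C2 = 511.0  # electron rest energy, keV

def cone_angle_deg(e_incident_kev, e_deposited_kev):
    e_scattered = e_incident_kev - e_deposited_kev
    cos_theta = 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_incident_kev)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden energy pair")
    return math.degrees(math.acos(cos_theta))

# A 662 keV (137Cs) photon depositing 200 keV in the scatterer:
print(cone_angle_deg(662.0, 200.0))  # cone opening ~48 degrees
```

Image reconstruction then amounts to inverting the accumulated cone (or cone-surface back-projection) data, which is where the finite scattering-angle range discussed above becomes the central difficulty.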
New Compton densitometer for measuring pulmonary edema
Energy Technology Data Exchange (ETDEWEB)
Loo, B.W.; Goulding, F.S.; Simon, D.S.
1985-10-01
Pulmonary edema is the pathological increase of extravascular lung water found most often in patients with congestive heart failure and other critically ill patients who suffer from intravenous fluid overload. A non-invasive lung density monitor that is accurate, easily portable, safe and inexpensive is needed for clinical evaluation of pulmonary edema. Other researchers who have employed Compton scattering techniques generally used systems of extended size and detectors with poor energy resolution. This has resulted in significant systematic biases from multiply-scattered photons and larger errors in counting statistics at a given radiation dose to the patient. We are proposing a patented approach in which only backscattered photons are measured with a high-resolution HPGe detector in a compact system geometry. By proper design and a unique data extraction scheme, effects of the variable chest wall on lung density measurements are minimized. Preliminary test results indicate that with a radioactive source of under 30 GBq, it should be possible to make an accurate lung density measurement in one minute, with a risk of radiation exposure to the patient a thousand times smaller than that from a typical chest x-ray. The ability to make safe, frequent lung density measurements could be very helpful for monitoring the course of P.E. at the hospital bedside or outpatient clinics, and for evaluating the efficacy of therapy in clinical research. 6 refs., 5 figs.
INJECTION EFFICIENCY IN COMPTON RING NESTOR
Directory of Open Access Journals (Sweden)
P. I. Gladkikh
2017-12-01
Full Text Available NESTOR is a hard X-ray source under commissioning at NSC KIPT, based on the Compton scattering of laser photons on relativistic electrons. The facility comprises the following components: a linear accelerator, a transport channel, a storage ring, and a laser-optical system. Electrons are stored in the storage ring at energies of 40-200 MeV. Inevitable alignment errors of the magnetic elements strongly affect the beam dynamics in the storage ring. These errors lead to a shift of the equilibrium orbit relative to the ideal one, and a significant shift of the equilibrium orbit could lead to loss of the beam on physical apertures. The transverse sizes of the electron and laser beams are only a few tens of microns at the interaction point, so a shift of the electron beam at the interaction point could greatly complicate operational adjustment of the storage ring without a sufficient beam position diagnostic system. This article presents simulation results for the efficiency of electron beam accumulation in the NESTOR storage ring, and is also devoted to the electron beam dynamics arising from alignment errors of the magnetic elements in the ring.
Electron Storage Ring Development for ICS Sources
Energy Technology Data Exchange (ETDEWEB)
Loewen, Roderick [Lyncean Technologies, Inc., Palo Alto, CA (United States)
2015-09-30
There is an increasing world-wide interest in compact light sources based on Inverse Compton Scattering. Development of these types of light sources includes leveraging the investment in accelerator technology first developed at DOE National Laboratories. Although these types of light sources cannot replace the larger user-supported synchrotron facilities, they offer attractive alternatives for many x-ray science applications. Fundamental research at the SLAC National Laboratory in the 1990s led to the idea of using laser-electron storage rings as a mechanism to generate x-rays with many properties of the larger synchrotron light facilities. This research led to a commercial spin-off of this technology. The SBIR project goal is to understand and improve the performance of the electron storage ring system of the commercially available Compact Light Source. The knowledge gained from studying a low-energy electron storage ring may also benefit other Inverse Compton Scattering (ICS) source development. Better electron storage ring performance is one of the key technologies necessary to extend the utility and breadth of applications of the CLS or related ICS sources. This grant includes a subcontract with SLAC for technical personnel and resources for modeling, feedback development, and related accelerator physics studies.
Where are Compton-thick radio galaxies? A hard X-ray view of three candidates
Ursini, F.; Bassani, L.; Panessa, F.; Bazzano, A.; Bird, A. J.; Malizia, A.; Ubertini, P.
2018-03-01
We present a broad-band X-ray spectral analysis of the radio-loud active galactic nuclei NGC 612, 4C 73.08 and 3C 452, exploiting archival data from NuSTAR, XMM-Newton, Swift and INTEGRAL. These Compton-thick candidates are the most absorbed sources among the hard X-ray selected radio galaxies studied in Panessa et al. We find an X-ray absorbing column density in every case below 1.5 × 10²⁴ cm⁻², and no evidence for a strong reflection continuum or iron Kα line. Therefore, none of these sources is properly Compton-thick. We review other Compton-thick radio galaxies reported in the literature, arguing that we currently lack strong evidence for heavily absorbed radio-loud AGNs.
Development of a compact scintillator-based high-resolution Compton camera for molecular imaging
Energy Technology Data Exchange (ETDEWEB)
Kishimoto, A., E-mail: daphne3h-aya@ruri.waseda.jp [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Kataoka, J.; Koide, A.; Sueoka, K.; Iwamoto, Y.; Taya, T. [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Ohsuka, S. [Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu, Shizuoka (Japan)
2017-02-11
The Compton camera, which images gamma-ray distributions by utilizing the kinematics of Compton scattering, is a promising detector capable of imaging across a wide range of energy. In this study, we aim to construct a small-animal molecular imaging system covering a wide energy range by using the Compton camera. We developed a compact medical Compton camera based on a Ce-doped Gd{sub 3}Al{sub 2}Ga{sub 3}O{sub 12} (Ce:GAGG) scintillator and a multi-pixel photon counter (MPPC). Basic performance tests confirmed that at 662 keV the typical energy resolution was 7.4% (FWHM) and the angular resolution was 4.5° (FWHM). We then used the medical Compton camera to conduct imaging experiments based on a 3-D image reconstruction algorithm using the multi-angle data acquisition method. The results confirmed that for a {sup 137}Cs point source at a distance of 4 cm, the image had a spatial resolution of 3.1 mm (FWHM). Furthermore, we succeeded in producing a 3-D multi-color image of different simultaneous energy sources ({sup 22}Na [511 keV], {sup 137}Cs [662 keV], and {sup 54}Mn [834 keV]).
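The electronic collimation underlying records like this one reduces to one kinematic relation: the two energy deposits fix the opening angle of the Compton cone. A minimal sketch, assuming the scattered photon is fully absorbed in the second detector (the function name and the scatter/absorber split are illustrative, not this camera's software):

```python
import math

MEC2_KEV = 510.999  # electron rest energy, keV

def compton_angle_deg(e_scatter_kev, e_absorb_kev):
    """Opening angle of the Compton cone from the two energy deposits:
    e_scatter deposited by the recoil electron in the scatterer,
    e_absorb = scattered photon energy, fully absorbed downstream.
    cos(theta) = 1 - m_e c^2 * (1/E' - 1/E), with E = E_scatter + E_absorb."""
    e_total = e_scatter_kev + e_absorb_kev
    cos_theta = 1.0 - MEC2_KEV * (1.0 / e_absorb_kev - 1.0 / e_total)
    if not -1.0 <= cos_theta <= 1.0:
        # energy pair inconsistent with single Compton scattering
        raise ValueError("kinematically forbidden energy pair")
    return math.degrees(math.acos(cos_theta))
```

The angular resolution quoted above (4.5° FWHM at 662 keV) is dominated by how energy resolution propagates through this formula, which is why both figures are reported together.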
Energy Technology Data Exchange (ETDEWEB)
Brambilla, Sara [Los Alamos National Laboratory; Brown, Michael J. [Los Alamos National Laboratory
2012-06-18
zones. Due to a unique source inversion technique - called the upwind collector footprint approach - the tool runs fast and the source regions can be determined in a few minutes. In this report, we provide an overview of the BERT framework, followed by a description of the source inversion technique. The Joint URBAN 2003 field experiment held in Oklahoma City that was used to validate BERT is then described. Subsequent sections describe the metrics used for evaluation, the comparison of the experimental data and BERT output, and under what conditions the BERT tool succeeds and performs poorly. Results are aggregated in different ways (e.g., daytime vs. nighttime releases, 1 vs. 2 vs. 3 hit collectors) to determine if BERT shows any systematic errors. Finally, recommendations are given for how to improve the code and procedures for optimizing performance in operational mode.
Loubet, Benjamin; Carozzi, Marco
2015-04-01
Tropospheric ammonia (NH3) is a key player in atmospheric chemistry and its deposition is a threat to the environment (ecosystem eutrophication, soil acidification and reduction in species biodiversity). Most global NH3 emissions derive from agriculture, mainly from livestock manure (storage and field application) but also from nitrogen-based fertilisers. Inverse dispersion modelling has been widely used to infer emissions from a homogeneous source of known geometry. When the emission derives from different sources inside the measured footprint, it should be treated as a multi-source problem. This work aims at estimating whether multi-source inverse dispersion modelling can be used to infer NH3 emissions from different agronomic treatments, composed of small fields (typically 25 m squares) located near each other, using low-cost NH3 measurements (diffusion samplers). To do that, a numerical experiment was designed with a combination of 3 x 3 square field sources (625 m2 each) and a set of sensors placed at the centre of each field at several heights, as well as 200 m away from the sources in each cardinal direction. The concentration at each sensor location was simulated with a forward Lagrangian stochastic model (WindTrax) and a Gaussian-like dispersion model (FIDES). The concentrations were averaged over various integration times (3 hours to 28 days) to mimic the diffusion sampler behaviour under several sampling strategies. The sources were then inferred by inverse modelling using the averaged concentrations and the same models in backward mode. The source patterns were evaluated using a soil-vegetation-atmosphere model (SurfAtm-NH3) that incorporates the response of the NH3 emissions to surface temperature. A combination of emission patterns (constant, linear decreasing, exponential decreasing and Gaussian type) and strengths was used to evaluate the uncertainty of the inversion method. Each numerical experiment covered a period of 28
Compton camera study for high efficiency SPECT and benchmark with Anger system.
Fontana, M; Dauvergne, D; Létang, J M; Ley, J-L; Testa, É
2017-11-09
Single photon emission computed tomography (SPECT) is at present one of the major techniques for non-invasive diagnostics in nuclear medicine. The clinical routine is mostly based on collimated cameras, originally proposed by Hal Anger. Due to the presence of mechanical collimation, detection efficiency and energy acceptance are limited and fixed by the system's geometrical features. In order to overcome these limitations, the application of Compton cameras for SPECT has been investigated for several years. In this study we compare a commercial SPECT-Anger device, the General Electric HealthCare Infinia system with a High Energy General Purpose (HEGP) collimator, and the Compton camera prototype under development by the French collaboration CLaRyS, through Monte Carlo simulations (GATE-GEANT4 Application for Tomographic Emission-version 7.1 and GEANT4 version 9.6, respectively). Given the possible introduction of new radio-emitters at higher energies intrinsically allowed by the Compton camera detection principle, the two detectors are exposed to point-like sources at increasing primary gamma energies, from actual isotopes already suggested for nuclear medicine applications. The Compton camera prototype is first characterized for SPECT application by studying the main parameters affecting its imaging performance: detector energy resolution and random coincidence rate. The two detector performances are then compared in terms of radial event distribution, detection efficiency and final image, obtained by gamma transmission analysis for the Anger system, and with an iterative List Mode-Maximum Likelihood Expectation Maximization (LM-MLEM) algorithm for the Compton reconstruction. The results show for the Compton camera a detection efficiency increased by a factor larger than an order of magnitude with respect to the Anger camera, associated with an enhanced spatial resolution for energies beyond 500 keV. We discuss the advantages of Compton camera application for
International Nuclear Information System (INIS)
Xing, Qiang; Wu, Bingfang; Zhu, Weiwei
2014-01-01
The aerodynamic roughness is one of the major parameters describing the turbulent exchange process between the land surface and the atmosphere. Remote sensing is recognized as an effective way to retrieve this parameter at the regional scale. For a long time, however, inversion methods have depended either on lookup tables for different land covers or on the Normalized Difference Vegetation Index (NDVI) alone, which plays a very limited role in describing the spatial heterogeneity of this parameter and of the evapotranspiration (ET) for different land covers. In fact, the aerodynamic roughness is influenced by several factors at once, including the roughness elements of hard surfaces, dynamic vegetation growth and undulating terrain. Therefore, this paper develops an aerodynamic roughness inversion method based on multi-source remote sensing data in a semiarid region, within the upper and middle reaches of the Heihe River Basin. The radar backscattering coefficient was used to invert the micro-relief of the hard surface, the NDVI was used to reflect the dynamic change of the vegetated surface, and the slope extracted from the SRTM DEM (Shuttle Radar Topography Mission Digital Elevation Model) was used to correct for terrain influence. The inverted aerodynamic roughness was imported into the ETWatch system to validate its usefulness. The results show that it significantly improves the representation of the spatial heterogeneity of the aerodynamic roughness and of the related ET for the experimental site
Electronic structure and Compton profiles of tungsten
International Nuclear Information System (INIS)
Lal Ahuja, Babu; Rathor, Ashish; Sharma, Vinit; Sharma, Yamini; Ramniklal Jani, Ashvin; Sharma, Balkrishna
2008-01-01
The energy bands, density of states and Compton profiles of tungsten have been computed using band structure methods, namely the spin-polarized relativistic Korringa-Kohn-Rostoker (SPR-KKR) approach as well as the linear combination of atomic orbitals with the Hartree-Fock scheme and density functional theory. The full-potential linearized augmented plane wave scheme has also been used to calculate these properties and the Fermi surface topology (except the momentum densities), and to analyze the theoretical data on the electron momentum densities. The directional Compton profiles have been measured using a 100 mCi 241Am Compton spectrometer. From the comparison, the measured anisotropies are found to be in good agreement with the SPR-KKR calculations. The band structure calculations are also compared with the available data. (orig.)
On the line-shape analysis of Compton profiles and its application to neutron scattering
International Nuclear Information System (INIS)
Romanelli, G.; Krzystyniak, M.
2016-01-01
Analytical properties of Compton profiles are used in order to simplify the analysis of neutron Compton scattering experiments. In particular, the possibility of fitting the difference of Compton profiles is discussed as a way to greatly decrease the complexity of the data treatment, making the analysis easier, faster and more robust. In the context of the novel method proposed, two mathematical models describing the shapes of differenced Compton profiles are discussed: the simple Gaussian approximation for a harmonic and isotropic local potential, and an analytical Gauss-Hermite expansion for an anharmonic or anisotropic potential. The method is applied to data collected by the VESUVIO spectrometer at the ISIS neutron and muon pulsed source (UK) on copper and aluminium samples at ambient and low temperatures. - Highlights: • A new method to analyse neutron Compton scattering data is presented. • The method allows many corrections on the experimental data to be avoided. • The number of needed fitting parameters is drastically reduced using the new method. • Mass-selective analysis is facilitated, with parametric studies benefiting the most. • Observables linked to anisotropic momentum distribution are obtained analytically.
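The two line-shape models named in this record are simple to write down: a Gaussian profile for a harmonic, isotropic potential, and a Gauss-Hermite expansion whose leading correction term captures anharmonicity or anisotropy. A minimal numerical sketch (the coefficient name c4 is an illustrative assumption, not VESUVIO's actual parametrization):

```python
import numpy as np

def gaussian_profile(y, sigma):
    """Gaussian Compton profile J(y) for a harmonic, isotropic local
    potential; sigma is the width of the momentum distribution."""
    return np.exp(-y**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)

def gauss_hermite_profile(y, sigma, c4=0.0):
    """Leading Gauss-Hermite correction for an anharmonic or anisotropic
    potential: the Gaussian modulated by the 4th probabilists' Hermite
    polynomial He4(x) = x^4 - 6x^2 + 3, x = y/sigma (c4 is hypothetical)."""
    x = y / sigma
    he4 = x**4 - 6.0 * x**2 + 3.0
    return gaussian_profile(y, sigma) * (1.0 + c4 * he4)
```

By orthogonality of the Hermite polynomials, the correction term integrates to zero, so both profiles remain normalized; this is one reason fitting differenced profiles, as the record advocates, needs so few parameters.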
Tiampo, K. F.; Fernández, J.; Jentzsch, G.; Charco, M.; Rundle, J. B.
2004-11-01
Here we present an inversion methodology combining a genetic algorithm (GA) inversion program and an elastic-gravitational earth model to determine the parameters of a volcanic intrusion. Results from the integration of the elastic-gravitational model, a suite of FORTRAN 77 programs developed to compute the displacements due to volcanic loading, with the GA inversion code, written in the C programming language, are presented. These codes allow for the calculation of displacements (horizontal and vertical), tilt, vertical strain and potential and gravity changes on the surface of an elastic-gravitational layered Earth model due to the magmatic intrusion. We detail the appropriate methodology for examining the sensitivity of the model to variation in the constituent parameters using the GA, and present, for the first time, a Monte Carlo technique for evaluating the propagation of error through the GA inversion process. One application example is given at Mayon volcano, Philippines, for the inversion program, the sensitivity analysis, and the error evaluation. The integration of the GA with the complex elastic-gravitational model is a blueprint for an efficient nonlinear inversion methodology and its implementation into an effective tool for the evaluation of parameter sensitivity. Finally, the extension of this inversion algorithm and the error assessment methodology has important implications for the modeling and data assimilation of a number of other nonlinear applications in the field of geosciences.
Compton cooling of jets as the origin of radio-quiet quasars
Siemiginowska, Aneta; Elvis, Martin
1994-01-01
We consider Compton drag as a possible mechanism for jet deceleration in radio-quiet quasars: an isotropic radiation field in the central region, used as a source of soft photons, provides an efficient means of decelerating electrons in a jet. Depending on the energy density of the radiation field, the jet may be stopped before reaching the radius at which the dynamical timescale is shorter than the Compton cooling timescale. We present preliminary results of our calculations and conclude that radio-quiet quasars may differ from radio-loud quasars in their central radiation density.
Study and development of a spectrometer with Compton suppression and gamma coincidence counting
International Nuclear Information System (INIS)
Masse, D.
1990-10-01
This paper presents the characteristics of a spectrometer consisting of a Ge detector surrounded by a NaI(Tl) detector that can operate in Compton-suppression and gamma-gamma coincidence modes. The criteria that led to this measurement configuration are discussed and the spectrometer performances are shown for 60Co and 137Cs gamma-ray sources. The results for the measurement of 189Ir (Compton suppression) and of 101Rh (gamma-gamma coincidence) in the presence of other radioisotopes are given. 83Rb and 105Ag isotopes are also measured with this spectrometer [fr
Nucleon structure study by virtual compton scattering
International Nuclear Information System (INIS)
Berthot, J.; Bertin, P.Y.; Breton, V.; Fonvielle, H.; Hyde-Wright, C.; Quemener, G.; Ravel, O.; Braghieri, A.; Pedroni, P.; Boeglin, W.U.; Boehm, R.; Distler, M.; Edelhoff, R.; Friedrich, J.; Geiges, R.; Jennewein, P.; Kahrau, M.; Korn, M.; Kramer, H.; Krygier, K.W.; Kunde, V.; Liesenfeld, A.; Merle, K.; Neuhausen, R.; Offermann, E.A.J.M.; Pospischil, T.; Rosner, G.; Sauer, P.; Schmieden, H.; Schardt, S.; Tamas, G.; Wagner, A.; Walcher, T.; Wolf, S.
1995-01-01
We propose to study nucleon structure by Virtual Compton Scattering using the reaction p(e,e'p)γ with the MAMI facility. We will detect the scattered electron and the recoil proton in coincidence in the high resolution spectrometers of the hall A1. Compton events will be separated from the other channels (principally π⁰ production) by missing-mass reconstruction. We plan to investigate this reaction near threshold. Our goal is to measure new electromagnetic observables which generalize the usual magnetic and electric polarizabilities. (authors). 9 refs., 18 figs., 7 tabs
Komara, R.; Ginsbourger, D.
2014-12-01
We present the implementation of Similarity Based Kriging (SBK). This approach extends Gaussian process regression (GPR) methods, typically restricted to Euclidean spaces, to spaces that are non-Euclidean or perhaps even non-metric. SBK was inspired by problems in aquifer modeling, where the inputs of numerical simulations are typically curves and parameter fields, and predicting scalar or vector outputs by Kriging with such very high-dimensional inputs may at first seem infeasible. SBK combines ideas from the distance-based set-up of Scheidt and Caers (2009) with GPR and allows Kriging predictions to be calculated based only on similarities between inputs rather than on their high-dimensional representation. The proposed approach, written in open-source code, includes automated construction of SBK models and provides diagnostics to assess model quality both in terms of covariance fitting and internal/external prediction validation. Covariance hyperparameters can be estimated both by maximum likelihood and by leave-one-out cross-validation, relying in both cases on efficient formulas and a hybrid genetic optimization algorithm using derivatives. The determination of the best dimension for classical multidimensional scaling (MDS) and non-metric MDS of the data will be investigated. Applications of this software to real-life data in Euclidean and non-Euclidean (dis)similarity settings will be covered, touching on aquifer modeling, hydrogeological forecasting, and sequential inverse problem solving. In the last case, a novel approach is presented in which a variant of the expected improvement criterion is used for choosing several points at a time. This part of the method and the covariance hyperparameter estimation parallelize naturally, and we demonstrate how to save computation time by optimally distributing function evaluations over multiple cores or processors.
Zhai, G.; Shirzaei, M.
2014-12-01
The Kilauea volcano, Hawaii Island, is one of the most active volcanoes worldwide. Its complex system, including magma reservoirs and rift zones, provides a unique opportunity to investigate the dynamics of magma transport and supply. The relatively shallow magma reservoir beneath the caldera stores magma prior to eruption at the caldera or migration to the rift zones. Additionally, the temporally variable pressure in the magma reservoir causes changes in the stress field, driving dike propagation and occasional intrusions at the eastern rift zone. Thus, constraining the time-dependent evolution of the magma reservoir plays an important role in understanding magma processes such as supply, storage, transport and eruption. The recent development of space-based monitoring technology, InSAR (interferometric synthetic aperture radar), allows the detection of subtle surface deformation at high spatial resolution and accuracy. In order to understand the dynamics of the magma chamber at the Kilauea summit area and the associated stress field, we explored SAR data sets acquired in two overlapping tracks of Envisat SAR data during the period 2003-2010. The combined InSAR time series includes 100 samples measuring summit deformation at unprecedented spatiotemporal resolution. To investigate the source of the summit deformation field, we propose a novel time-dependent inverse modelling approach to constrain the dynamics of the reservoir volume change within the summit magma reservoir in three dimensions. In conjunction with seismic and gas data sets, the obtained time-dependent model could resolve the temporally variable relation between the shallow and deep reservoirs, as well as their connection to the rift zone via stress changes. The data and model improve the understanding of the Kilauea plumbing system, the physics of eruptions, and the mechanics of rift intrusions, and enhance eruption forecast models.
Voronina, Tatyana; Romanenko, Alexey; Loskutov, Artem
2017-04-01
The key point in the state of the art in tsunami forecasting is constructing a reliable tsunami source. In this study, we present an application of an original numerical inversion technique to modeling the tsunami source of the 16 September 2015 Chile tsunami. The problem of recovering a tsunami source from remote measurements of the incoming wave in deep-water tsunameters is considered as an inverse problem of mathematical physics in the class of ill-posed problems. The approach is based on the least squares and the truncated singular value decomposition techniques. The tsunami wave propagation is considered within the scope of the linear shallow-water theory. As in the inverse seismic problem, the numerical solutions obtained by mathematical methods become unstable due to the presence of noise in real data. A method of r-solutions makes it possible to avoid instability in the solution to the ill-posed problem under study. This method is attractive from the computational point of view, since the main effort is required only once, for calculating the matrix whose columns consist of computed waveforms for each harmonic as a source (the unknown tsunami source is represented as part of a spatial harmonics series in the source area). Furthermore, by analyzing the singular spectrum of the matrix obtained in the course of the numerical calculations, one can estimate the future inversion by a given observational system, which allows a more effective disposition of the tsunameters to be proposed with the help of precomputations. In other words, the results obtained allow finding a way to improve the inversion by selecting the most informative set of available recording stations. The case study of the 6 February 2013 Solomon Islands tsunami highlights the critical role of the arrangement of deep-water tsunameters in obtaining the inversion results. Implementation of the proposed methodology for the 16 September 2015 Chile tsunami has successfully produced a tsunami source model
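The r-solution technique this record describes amounts to a truncated singular value decomposition: keep only the r largest singular values and discard the small ones that would amplify observational noise. A minimal sketch (the function name is an assumption; the actual code operates on waveform matrices built from spatial harmonics):

```python
import numpy as np

def tsvd_solve(A, b, r):
    """Least-squares r-solution of A x = b: restrict the solution to the
    span of the first r right singular vectors, suppressing components
    whose small singular values would amplify noise in the data b."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = min(r, int(np.sum(s > 0)))       # never divide by a zero singular value
    coeffs = (U[:, :r].T @ b) / s[:r]    # projected, rescaled data
    return Vt[:r].T @ coeffs
```

With r equal to the full rank this reproduces the ordinary least-squares solution; decreasing r trades resolution for stability, which mirrors the record's use of the singular spectrum to judge how informative a given set of recording stations is.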
Directory of Open Access Journals (Sweden)
Thirolf P.G.
2014-03-01
Full Text Available Presently, large efforts are being conducted in Munich towards the development of proton beams for bio-medical applications, generated via the technique of particle acceleration from high-power, short-pulse lasers. While so far mostly offline diagnostic tools have been used in this context, we aim at developing a reliable and accurate online range monitoring technique, based on the position-sensitive detection of prompt γ rays emitted from nuclear reactions between the proton beam and the biological sample. For this purpose, we are developing a Compton camera designed to track not only the Compton scattering of the primary photon, but also to detect the secondary Compton electron, thus reducing the Compton cone to an arc segment and thereby increasing the source reconstruction efficiency. Design specifications and the status of the prototype system are discussed.
DEFF Research Database (Denmark)
Yoon, Daeung; Zhdanov, Michael; Mattsson, Johan
2016-01-01
One of the major problems in the modeling and inversion of marine controlled-source electromagnetic (CSEM) data is related to the need for accurate representation of the very complex geoelectrical models typical for the marine environment. At the same time, the corresponding forward-modeling algorithms should be powerful and fast enough to be suitable for repeated use in hundreds of iterations of the inversion and for multiple transmitter/receiver positions. To this end, we have developed a novel 3D modeling and inversion approach, which combines the advantages of the finite-difference (FD) and integral-equation (IE) methods. In the framework of this approach, we have solved Maxwell's equations for anomalous electric fields using the FD approximation on a staggered grid. Once the unknown electric fields in the computation domain of the FD method are computed, the electric and magnetic fields...
Designing scheme of a γ-ray ICT system using Compton back-scattering
International Nuclear Information System (INIS)
Xiao Jianmin
1998-01-01
The design scheme of a γ-ray ICT system using Compton back-scattering is put forward. The technical specifications, detector system, γ-ray source, mechanical scanning equipment, and the principles of data acquisition and image reconstruction of this ICT are described
Compact source of narrowband and tunable X-rays for radiography
International Nuclear Information System (INIS)
Banerjee, Sudeep; Chen, Shouyuan; Powers, Nathan; Haden, Daniel; Liu, Cheng; Golovin, G.; Zhang, Jun; Zhao, Baozhen; Clarke, S.; Pozzi, S.; Silano, J.; Karwowski, H.; Umstadter, Donald
2015-01-01
We discuss the development of a compact X-ray source based on inverse-Compton scattering with a laser-driven electron beam. This source produces a beam of high-energy X-rays in a narrow cone angle (5-10 mrad), at a rate of 10⁸ photons s⁻¹. Tunable operation of the source over a large energy range, with energy spread of ∼50%, has also been demonstrated. Photon energies >10 MeV have been obtained. The narrowband nature of the source is advantageous for radiography with low dose, low noise, and minimal shielding
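The tunability reported in this record follows from the standard inverse-Compton energy scaling: for a head-on collision in the Thomson limit, the on-axis photon energy is roughly 4γ² times the laser photon energy, falling off with observation angle θ as 1/(1 + γ²θ²). A rough sketch (the function name and the Thomson-limit approximation are assumptions; electron recoil and laser-intensity corrections are ignored):

```python
ELECTRON_REST_MEV = 0.510999  # electron rest energy, MeV

def ics_photon_energy_ev(electron_energy_mev, laser_photon_ev, theta_rad=0.0):
    """Scattered photon energy (eV) for a head-on inverse-Compton collision
    in the Thomson limit: E_x = 4 * gamma^2 * E_L / (1 + gamma^2 * theta^2),
    where theta is the observation angle from the electron beam axis."""
    gamma = electron_energy_mev / ELECTRON_REST_MEV
    return 4.0 * gamma**2 * laser_photon_ev / (1.0 + (gamma * theta_rad) ** 2)
```

The γ² dependence is why modest changes in electron energy tune the X-ray energy over a large range, and the 1/(1 + γ²θ²) factor is what concentrates the flux into the few-mrad cone quoted above.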
Development of double-scattering compton camera for gamma-ray emission imaging
Energy Technology Data Exchange (ETDEWEB)
Seo, Hee
2012-02-15
The Compton camera with its unique Compton kinematics-based electronic collimation method has attracted great interest as the most promising candidate for a future gamma-ray emission imaging device due to its various advantages over conventional imaging devices. In this thesis, the proper structure and detector configuration for high-energy gamma-ray emission imaging were determined. In addition, the imaging performance of the Compton camera was experimentally evaluated after constructing a proof-of-principle imaging system. The two, single- and double-scattering Compton cameras were modeled using the Geant4 Monte Carlo simulation toolkit, and their imaging sensitivities were calculated as a function of source energy. Above a certain source energy (> a few hundred keV, depending on the detector configuration), the double-scattering type showed higher imaging sensitivity than the single-scattering type, a difference that widened with increasing source energy. As such, the double-scattering Compton camera was determined to be more suitable for high-energy gamma-ray emission imaging. The Si/Si/NaI(Tl) detector configuration was chosen, owing to the fact that it does not require a cooling system: this allowed for construction of a very compact imaging system. The performances of the double-scattering Compton camera's component detectors (two double-sided silicon strip detectors [DSSDs] as the scatter detectors; a NaI(Tl) scintillation detector as the absorber detector) were evaluated in terms of energy and timing resolution. According to these results, Geant4 Monte Carlo simulations were performed to determine the optimal geometrical configuration of the component detectors and to quantitatively evaluate the effect of the detector parameters on angular resolution. Also, a Compton-edge based energy-calibration method was developed and applied to the DSSDs. As a result, the energy calibration errors were significantly reduced, from 15.5% (356 keV peak of {sup 133}Ba
Compton scattering collision module for OSIRIS
Del Gaudio, Fabrizio; Grismayer, Thomas; Fonseca, Ricardo; Silva, Luís
2017-10-01
Compton scattering plays a fundamental role in a variety of different astrophysical environments, such as at the gaps of pulsars and the stagnation surface of black holes. In these scenarios, Compton scattering is coupled with self-consistent mechanisms such as pair cascades. We present the implementation of a novel module, embedded in the self-consistent framework of the PIC code OSIRIS 4.0, capable of simulating Compton scattering from first principles and fully integrated with the self-consistent plasma dynamics. The algorithm accounts for the stochastic nature of Compton scattering, reproducing without approximations the exchange of energy between photons and unbound charged species. We present benchmarks of the code against the analytical results of Blumenthal et al. and the numerical solution of the linear Kompaneets equation, and good agreement is found between the simulations and the theoretical models. This work is supported by the European Research Council Grant (ERC-2015-AdG 695088) and the Fundação para a Ciência e a Tecnologia (Bolsa de Investigação PD/BD/114323/2016).
Theorems of low energy in Compton scattering
International Nuclear Information System (INIS)
Chahine, J.
1984-01-01
We have obtained the low energy theorems in Compton scattering to third and fourth order in the frequency of the incident photon. Next we calculated the polarized cross section to third order and the unpolarized cross section to fourth order in terms of partial amplitudes not covered by the low energy theorems, which will permit the experimental determination of these partial amplitudes. (Author)
Compton profile of polycrystalline sodium chloride and sodium fluoride
International Nuclear Information System (INIS)
Vijayakumar, R.; Shivaramu; Rajasekaran, L.; Ramamurthy, N.; Ford, M.J.
2005-01-01
We present here the Compton profile (CP) of polycrystalline sodium chloride and sodium fluoride. Our results consist of the spherical average Compton profile based on measurements, together with calculations of the spherical average Compton profile, the directional Compton profiles, and their anisotropy using self-consistent Hartree-Fock wave functions within the linear combination of atomic orbitals (HF-LCAO) approximation. The experimental results are compared with the HF-LCAO spherical average Compton profile and with tabulated Hartree-Fock free-atom results. For both compounds the experimental results are found to be in good agreement with the HF-LCAO results and in qualitative agreement with the Hartree-Fock free-atom values.
Picozzi, M.; Oth, A.; Parolai, S.; Bindi, D.; De Landro, G.; Amoroso, O.
2017-05-01
The accurate determination of stress drop, seismic efficiency, and how source parameters scale with earthquake size is an important issue for seismic hazard assessment of induced seismicity. We propose an improved nonparametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis, the generalized inversion technique allows for an effective correction of waveforms for attenuation and site contributions. Then, the retrieved source spectra are inverted by a nonlinear sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We investigate the earthquake source characteristics of 633 induced earthquakes (Mw 2-3.8) recorded at The Geysers geothermal field (California) by a dense seismic network (32 stations, more than 17,000 velocity records). We find non-self-similar behavior, empirical source spectra that require an ω^(-γ) source model with γ > 2 to be well fit, and small radiation efficiency ηSW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surfaces changes with earthquake size. Furthermore, we observe two distinct families of events with peculiar source parameters that in one case suggest the reactivation of deep structures linked to the regional tectonics, and in the other support an important role of steeply dipping faults in the fluid pressure diffusion.
Czech Academy of Sciences Publication Activity Database
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, A.
2016-01-01
Roč. 9, č. 11 (2016), s. 4297-4311 ISSN 1991-959X R&D Projects: GA MŠk(CZ) 7F14287 Institutional support: RVO:67985556 Keywords : Linear inverse problem * Bayesian regularization * Source-term determination * Variational Bayes method Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.458, year: 2016 http://library.utia.cas.cz/separaty/2016/AS/tichy-0466029.pdf
Compton scattering for spectroscopic detection of ultra-fast, high flux, broad energy range X-rays
Energy Technology Data Exchange (ETDEWEB)
Cipiccia, S.; Wiggins, S. M.; Brunetti, E.; Vieux, G.; Yang, X.; Welsh, G. H.; Anania, M.; Islam, M. R.; Ersfeld, B.; Jaroszynski, D. A. [Scottish Universities Physics Alliance, Department of Physics, University of Strathclyde, John Anderson Building, 107 Rottenrow, Glasgow G4 0NG (United Kingdom); Maneuski, D.; Montgomery, R.; Smith, G.; Hoek, M.; Hamilton, D. J.; Shea, V. O. [Scottish Universities Physics Alliance, School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Issac, R. C. [Scottish Universities Physics Alliance, Department of Physics, University of Strathclyde, John Anderson Building, 107 Rottenrow, Glasgow G4 0NG (United Kingdom); Research Department of Physics, Mar Athanasius College, Kothamangalam 686666, Kerala (India); Lemos, N. R. C.; Dias, J. M. [GoLP/Instituto de Plasmas eFusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, Avenida Rovisco Pais, 1049-001 Lisbon (Portugal); Symes, D. R. [Central Laser Facility, Science and Technology Facilities Council, Rutherford Appleton Laboratory, Harwell Science and Innovation Campus, OX11 0QX Didcot (United Kingdom); and others
2013-11-15
Compton side-scattering has been used to simultaneously downshift the energy of keV to MeV range photons while attenuating their flux, enabling single-shot, spectrally resolved measurements of high flux X-ray sources. To demonstrate the technique, a 1 mm thick pixelated cadmium telluride detector has been used to measure spectra of Compton side-scattered radiation from a Cobalt-60 laboratory source and from a high flux, high peak brilliance X-ray source of betatron radiation from a laser-plasma wakefield accelerator.
Design Study for Direction Variable Compton Scattering Gamma Ray
Kii, T.; Omer, M.; Negm, H.; Choi, Y. W.; Kinjo, R.; Yoshida, K.; Konstantin, T.; Kimura, N.; Ishida, K.; Imon, H.; Shibata, M.; Shimahashi, K.; Komai, T.; Okumura, K.; Zen, H.; Masuda, K.; Hori, T.; Ohgaki, H.
2013-03-01
A monochromatic gamma ray beam is attractive for isotope-specific material/medical imaging or non-destructive inspection. A laser Compton scattering (LCS) gamma ray source, based on the backward Compton scattering of laser light off high-energy electrons, can generate energy-variable, quasi-monochromatic gamma rays. By the principle of LCS, the direction of the gamma beam is limited to the direction of the high-energy electrons; the target object is therefore placed on the beam axis and is usually moved if spatial scanning is required. In this work, we propose an electron beam transport system consisting of four bending magnets, which keeps the collision point fixed while controlling the electron beam direction, and a laser system consisting of a spheroidal mirror and a parabolic mirror, which likewise keeps the collision point fixed. The collision point can then be placed at one focus of the spheroid, so the gamma ray direction and the collision angle between the electron beam and the laser beam can be easily controlled. As a result, the travelling direction of the LCS gamma rays can be controlled within the limits of the beam transport system, the energy of the gamma rays can be controlled through the incident angle of the colliding beams, and the energy spread can be controlled by changing the divergence of the laser beam.
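The energy tuning via the collision angle described above follows directly from the Compton kinematics of a relativistic electron and a laser photon. A hedged sketch of that dependence (the beam and laser parameters below are illustrative, not those of the proposed design):

```python
import math

ME_C2 = 0.511e6  # electron rest energy in eV

def lcs_photon_energy(e_electron, e_laser, phi_deg=180.0, theta_deg=0.0):
    """Scattered photon energy (eV) in laser Compton scattering.
    e_electron - electron total energy in eV
    e_laser    - laser photon energy in eV
    phi_deg    - collision angle between electron and laser beams (180 = head-on)
    theta_deg  - observation angle measured from the electron direction"""
    gamma = e_electron / ME_C2
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    phi, theta = math.radians(phi_deg), math.radians(theta_deg)
    num = e_laser * (1.0 - beta * math.cos(phi))
    den = (1.0 - beta * math.cos(theta)
           + (e_laser / e_electron) * (1.0 - math.cos(theta - phi)))
    return num / den

# Illustrative: 350 MeV electrons colliding head-on with a 1.2 eV (~1 um) laser
e_head_on = lcs_photon_energy(350e6, 1.2)           # on the order of 2 MeV
e_side = lcs_photon_energy(350e6, 1.2, phi_deg=90)  # lower energy at 90 degrees
```

Reducing the collision angle away from head-on lowers the scattered photon energy, which is exactly the energy-control knob the abstract describes.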
DEFF Research Database (Denmark)
Karamehmedovic, Mirza; Sørensen, Mads Peter; Hansen, Poul Erik
2010-01-01
the proposed method, we apply it in a concrete quantitative characterisation of a non-periodic, nano-scale grating defect, with numerically simulated measurements. It is shown that the presented procedure can solve the inverse problem with an accuracy usually thought to require rigorous electromagnetic...
Variation of Compton Profiles of the CO2 at different pressures
Gürol, Ali; Şakar, Erdem
2017-04-01
In this study, the Compton profile of CO2 gas was measured at different pressures using a Compton profile spectrometer with an annular Am-241 radioactive source and an HPGe detector. The CO2 gas molecules were sealed in a gas chamber at different pressures; the gas pressure was set with an analog manometer before the measurements. The γ-rays emitted from the source entered the gas through a hostaphan window, and the detector recorded the photons scattered from the molecules. To obtain correct Compton profile values, the raw data were corrected for several effects: scattering from the gas chamber walls, absorption in the windows of the gas chamber and detector, and multiple scattering. A Matlab code was used for all calculations. The results clearly demonstrate that the valence electronic structure of the material depends strongly on the pressure: as the pressure in the gas chamber increases, the Compton profile of the valence electrons changes by ~8.0%.
Simulation for CZT Compton PET (Maximization of the efficiency for PET using Compton event)
International Nuclear Information System (INIS)
Yoon, Changyeon; Lee, Wonho; Lee, Taewoong
2011-01-01
Multiple interactions in positron emission tomography (PET) using scintillators are generally treated as noise events because the individual positions and energies of the multiple interactions cannot be obtained and the sequence of multiple scattering is not fully known. Therefore, the first interaction position, which is the crucial information for PET image reconstruction, cannot be determined correctly. However, in the case of a pixelized semiconductor detector, such as CdZnTe, the specific position and energy information of each of the multiple interactions can be obtained. Moreover, for the emission of two 511 keV photons in PET, if one photon deposits all its energy in one position (photoelectric effect) and the other undergoes Compton scattering followed by the photoelectric effect, the sequence of Compton scattering followed by the photoelectric effect can be determined using the Compton scattering formula. Hence, the correct position of Compton scattering can be determined, and the Compton scattering events, which are discarded in conventional PET systems, can be recovered in the new system reported in this study. The PET system in this study, simulated using the GATE 5.0 code, was composed of 20 mm × 10 mm × 10 mm CdZnTe detectors consisting of 1 mm × 0.5 mm × 2.5 mm pixels. The angular uncertainties caused by Doppler broadening, the pixelization effect, and energy broadening were estimated and compared. The pixelization effect was the main factor in increasing the angular uncertainty and was strongly dependent on the distance between the 1st and 2nd interaction positions. The effect of energy broadening on the angular resolution was less than expected, and that of Doppler broadening was minimal. The number of Compton events was double that of the photoelectric effect assuming full energy absorption. Therefore, the detection efficiency of this new PET system can be improved greatly because both the photoelectric effect and Compton scattering are utilized.
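The sequencing argument in this abstract can be illustrated with a toy two-interaction event: a deposit above the Compton edge cannot have been the first (Compton) interaction, because the scattering angle it implies is unphysical. A minimal sketch with illustrative energies:

```python
ME_C2 = 511.0  # electron rest energy in keV

def compton_cos_theta(e_dep_first, e_total):
    """cos(theta) implied by assuming e_dep_first was deposited in the first
    (Compton) interaction of a photon with initial energy e_total (keV)."""
    e_scattered = e_total - e_dep_first
    return 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_total)

def can_be_first(e_dep, e_total=511.0):
    """A deposit can only be the first interaction if the implied
    scattering angle is physical (-1 <= cos(theta) <= 1)."""
    return -1.0 <= compton_cos_theta(e_dep, e_total) <= 1.0

# Toy two-pixel event from a 511 keV photon: deposits of 400 and 111 keV.
# 400 keV lies above the 340.7 keV Compton edge, so it cannot be first:
first_candidates = [e for e in (400.0, 111.0) if can_be_first(e)]
```

In a real detector the surviving candidate ordering would additionally be checked against the measured hit positions, but the energy test alone already resolves many two-hit events.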
Stochastic Electrodynamics and the Compton effect
International Nuclear Information System (INIS)
Franca, H.M.; Barranco, A.V.
1987-12-01
Some of the main qualitative features of the Compton effect are described within the realm of classical stochastic electrodynamics (SED). Indications are found that the combined action of the incident wave (frequency ω), the radiation reaction force, and the zero-point fluctuating electromagnetic fields of SED can give a high average recoil velocity v/c = α/(1+α) to the charged particle. The estimate of the parameter α gives α ∼ ℏω/mc², where 2πℏ is Planck's constant and mc² is the rest energy of the particle. It is verified that this recoil is just that necessary to explain the frequency shift observed in the scattered radiation as a classical double Doppler shift. The differential cross section for the radiation scattered by the recoiling charge is also calculated using classical electromagnetism; the same expression as obtained by Compton in his fundamental work of 1923 is found. (Author)
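The claimed equivalence between the Compton frequency shift and a classical double Doppler shift with recoil v/c = α/(1+α) can be checked numerically: for backscattering, (1−β)/(1+β) with β = α/(1+α) reduces exactly to 1/(1+2α), the Compton result. A short sketch:

```python
def compton_backscatter(e_photon, me_c2=511.0):
    """Backscattered (180 degree) photon energy from the Compton formula."""
    alpha = e_photon / me_c2
    return e_photon / (1.0 + 2.0 * alpha)

def double_doppler_backscatter(e_photon, me_c2=511.0):
    """Classical double Doppler shift for a charge recoiling with
    v/c = alpha / (1 + alpha), the SED estimate quoted above."""
    alpha = e_photon / me_c2
    beta = alpha / (1.0 + alpha)
    return e_photon * (1.0 - beta) / (1.0 + beta)

# The two expressions agree identically, since (1 - beta)/(1 + beta)
# reduces to 1/(1 + 2*alpha) when beta = alpha/(1 + alpha):
for e in (10.0, 100.0, 662.0, 5000.0):
    assert abs(compton_backscatter(e) - double_doppler_backscatter(e)) < 1e-9
```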
Compton effect in terms of spintronic
Directory of Open Access Journals (Sweden)
Ziya Saglam
Full Text Available Compton effect with spin effect is studied. Although the incoming wave has been taken into account in the current loop model the final result is the same as before. Namely, the total angular momentum in z-direction before the collision will be equal to the total angular momentum after the collision. Let us take the z-component of the spin of the incident light as, (1, 0, −1 and the spin of the electron as, (½, −½. Applying the conservation of the z-component of the total angular momentum gives access to spin-flips. We find that the probability of spin-flip is 40%. We believe that this analysis will be helpful for deepening in the spintronic event better. Keywords: Compton effect, Spin-flip, Total angular momentum
Angle-averaged Compton cross sections
Energy Technology Data Exchange (ETDEWEB)
Nickel, G.H.
1983-01-01
The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; αₛ = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
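The starting point for such angle averages is the Klein-Nishina differential cross section. As a consistency sketch (not the paper's analytic expression), one can average it numerically over the polar scattering angle and recover the Thomson limit at low photon energy:

```python
import math

R_E = 2.8179403262e-13  # classical electron radius, cm

def klein_nishina(alpha, cos_t):
    """Klein-Nishina dsigma/dOmega (cm^2/sr) for photon energy alpha
    (in units of m0 c^2) and scattering angle theta."""
    ratio = 1.0 / (1.0 + alpha * (1.0 - cos_t))  # E'/E from the Compton formula
    return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - (1.0 - cos_t**2))

def total_cross_section(alpha, n=2000):
    """Midpoint-rule integration over the polar angle (azimuth is trivial)."""
    dc = 2.0 / n
    return sum(
        klein_nishina(alpha, -1.0 + (i + 0.5) * dc) * 2.0 * math.pi * dc
        for i in range(n)
    )

sigma_thomson = 8.0 * math.pi * R_E**2 / 3.0
sigma_low = total_cross_section(1e-4)  # approaches the Thomson limit
```

At α ≪ 1 the integrated cross section reproduces σ_T = 8πr_e²/3; at higher energies it falls below it, which is the regime where the paper's averaged expressions matter.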
Deeply virtual Compton scattering off nuclei
Energy Technology Data Exchange (ETDEWEB)
Voutier, Eric
2009-01-01
Deeply virtual Compton scattering (DVCS) is the golden exclusive channel for the study of the partonic structure of hadrons, within the universal framework of generalized parton distributions (GPDs). This paper presents the aim and general ideas of the DVCS experimental program off nuclei at the Jefferson Laboratory. The benefits of the study of the coherent and incoherent channels to the understanding of the EMC (European Muon Collaboration) effect are discussed, along with the case of nuclear targets to access neutron GPDs.
Laser Compton polarimetry of proton beams
International Nuclear Information System (INIS)
Stillman, A.
1995-01-01
A need exists for non-destructive polarization measurements of the polarized proton beams in the AGS and, in the future, in RHIC. One way to make such measurements is to scatter photons from the polarized beams. Until now, such measurements were impossible because of the extremely low Compton scattering cross section from protons. Modern lasers now can provide enough photons per laser pulse not only to scatter from proton beams but also, at least in RHIC, to analyze their polarization
Czech Academy of Sciences Publication Activity Database
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Šindelářová, Kateřina; Hýža, M.; Stohl, A.
2017-01-01
Roč. 17, č. 20 (2017), s. 12677-12696 ISSN 1680-7316 R&D Projects: GA MŠk(CZ) 7F14287 Institutional support: RVO:67985556 Keywords : Bayesian inverse modeling * iodine-131 * consequences of the iodine release Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability Impact factor: 5.318, year: 2016 http://library.utia.cas.cz/separaty/2017/AS/tichy-0480506.pdf
Liu, Yikan
2015-01-01
In this paper, we establish a strong maximum principle for fractional diffusion equations with multiple Caputo derivatives in time, and investigate a related inverse problem of practical importance. Exploiting the solution properties and the involved multinomial Mittag-Leffler functions, we improve the weak maximum principle for the multi-term time-fractional diffusion equation to a stronger one, which is parallel to that for its single-term counterpart as expected. As a direct application, w...
Energy Technology Data Exchange (ETDEWEB)
Butler, M.P.; Davis, K.J. (Dept. of Meteorology, Pennsylvania State Univ., University Park, PA 16802 (United States)); Denning, A.S. (Dept. of Atmospheric Science, Colorado State Univ., Fort Collins, CO (United States)); Kawa, S.R. (NASA Goddard Space Flight Center, Greenbelt, MD (United States))
2010-11-15
We evaluate North American carbon fluxes using a monthly global Bayesian synthesis inversion that includes well-calibrated carbon dioxide concentrations measured at continental flux towers. We employ the NASA Parametrized Chemistry Tracer Model (PCTM) for atmospheric transport and a TransCom-style inversion with subcontinental resolution. We subsample carbon dioxide time series at four North American flux tower sites for mid-day hours to ensure sampling of a deep, well-mixed atmospheric boundary layer. The addition of these flux tower sites to a global network reduces North America mean annual flux uncertainty for 2001-2003 by 20% to 0.4 Pg C/yr compared to a network without the tower sites. North American flux is estimated to be a net sink of 1.2 ± 0.4 Pg C/yr, which is within the uncertainty bounds of the result without the towers. Uncertainty reduction is found to be local to the regions within North America where the flux towers are located, and including the towers reduces covariances between regions within North America. Mid-day carbon dioxide observations from flux towers provide a viable means of increasing continental observation density and reducing the uncertainty of regional carbon flux estimates in atmospheric inversions.
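The uncertainty-reduction mechanism described above can be illustrated with the scalar limit of a Bayesian synthesis inversion, where the precisions (inverse variances) of the prior and of independent observations add. The numbers below are purely illustrative, not the study's values:

```python
def posterior_variance(prior_var, obs_variances):
    """Scalar Bayesian update: inverse variances (precisions) of the prior
    and of independent observations add to give the posterior precision."""
    precision = 1.0 / prior_var + sum(1.0 / v for v in obs_variances)
    return 1.0 / precision

base_network = posterior_variance(1.0, [0.5, 0.5])         # without towers
with_towers = posterior_variance(1.0, [0.5, 0.5, 0.5])     # add one tower site
reduction = 1.0 - with_towers / base_network               # fractional shrinkage
```

Adding an observation always shrinks the posterior variance, the same qualitative effect as the 20% uncertainty reduction the towers provide for the North American flux.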
Beam Size Measurement by Optical Diffraction Radiation and Laser System for Compton Polarimeter
Energy Technology Data Exchange (ETDEWEB)
Liu, Chuyu [Peking Univ., Beijing (China)
2012-12-31
Beam diagnostics is an essential constituent of any accelerator, to the point that it has been called the "organs of sense" or "eyes of the accelerator." Beam diagnostics is a rich field, and a great variety of physical effects and principles are put to use in it. Some devices are based on the electromagnetic influence of moving charges, such as Faraday cups, beam transformers, and pick-ups; some rely on the Coulomb interaction of charged particles with matter, such as scintillators, viewing screens, and ionization chambers; nuclear or elementary-particle interactions occur in other devices, like beam loss monitors, polarimeters, and luminosity monitors; some measure photons emitted by moving charges, such as transition radiation and synchrotron radiation monitors and diffraction radiation, which is the topic of the first part of this thesis; and some make use of the interaction of particles with photons, such as laser wires and Compton polarimeters, which is the second part of my thesis. Diagnostics let us perceive what properties a beam has and how it behaves in a machine, and give us guidelines for commissioning and controlling the machine as well as indispensable parameters vital to physics experiments. In the next two decades, the research highlight will be colliders (TESLA, CLIC, JLC) and fourth-generation light sources (TESLA FEL, LCLS, SPring-8 FEL) based on linear accelerators. These machines require a new generation of accelerator with smaller beams, better stability, and greater efficiency. Compared with existing linear accelerators, the performance of the next generation will be improved in all aspects, such as 10 times smaller horizontal beam size, more than 10 times smaller vertical beam size, and several times higher peak power. Furthermore, some special positions in the accelerator have even more stringent requirements, such as the interaction point of colliders and the wiggler of free electron lasers. Higher performance of these accelerators increases the
Helium Compton Form Factor Measurements at CLAS
Energy Technology Data Exchange (ETDEWEB)
Voutier, Eric J.-M. [Laboratoire de Physique Subatomique et Cosmologie
2013-07-01
The distribution of the parton content of nuclei, as encoded via the generalized parton distributions (GPDs), can be accessed via the deeply virtual Compton scattering (DVCS) process contributing to the cross section for leptoproduction of real photons. Similar to the scattering of light by a material, DVCS provides information about the dynamics and the spatial structure of hadrons. The sensitivity of this process to the lepton beam polarization allows one to single out the DVCS amplitude in terms of Compton form factors that contain GPD information. The beam spin asymmetry of the $^4$He($\vec {\mathrm e}$,e$' \gamma ^4$He) process was measured in the experimental Hall B of the Jefferson Laboratory to extract the real and imaginary parts of the twist-2 Compton form factor of the $^4$He nucleus. The experimental results reported here demonstrate the relevance of this method for such a goal, and suggest the dominance of the Bethe-Heitler amplitude in the unpolarized process in the kinematic range explored by the experiment.
Energy Technology Data Exchange (ETDEWEB)
Lee, Taewoong; Lee, Hyounggun; Kim, Younghak; Lee, Wonho [Korea University, Seoul (Korea, Republic of)
2017-07-15
The performance of a Compton imager using a single three-dimensional position-sensitive LYSO scintillator detector was estimated using a Monte Carlo simulation. The Compton imager consisted of a single LYSO scintillator with a pixelized structure. The sizes of the scintillator and of each pixel were 1.3 × 1.3 × 1.3 cm³ and 0.3 × 0.3 × 0.3 cm³, respectively. The order of γ-ray interactions was determined based on the deposited energies in each detector. After the determination of the interaction sequence, various reconstruction algorithms, such as simple back-projection, filtered back-projection, and list-mode maximum-likelihood expectation maximization (LM-MLEM), were applied and compared with each other in terms of their angular resolution and signal-to-noise ratio (SNR) for several γ-ray energies. The LM-MLEM reconstruction algorithm exhibited the best performance for Compton imaging, maintaining high angular resolution and SNR. Two ¹³⁷Cs (662 keV) sources could be distinguished if they were more than 17° apart. The reconstructed Compton images showed the precise position and distribution of various radiation isotopes, which demonstrated the feasibility of the monitoring of nuclear materials in homeland security and radioactive waste management applications.
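The LM-MLEM algorithm compared above has a compact general form: each image bin is scaled by the back-projected ratio of measured to predicted events. A schematic, detector-agnostic sketch (toy 1D image and event probabilities, not the simulated LYSO geometry):

```python
def lm_mlem(events, lam, sens, n_iter=10):
    """Schematic list-mode MLEM update for emission imaging.
    events[i][j] - probability that recorded event i originated in image bin j
    lam[j]       - current image estimate
    sens[j]      - detection sensitivity of bin j"""
    for _ in range(n_iter):
        new = []
        for j in range(len(lam)):
            back = 0.0
            for t in events:
                fwd = sum(t[k] * lam[k] for k in range(len(lam)))  # predicted rate
                if fwd > 0.0:
                    back += t[j] / fwd  # back-project measured/predicted ratio
            new.append(lam[j] * back / sens[j])
        lam = new
    return lam

# Toy two-bin image: two events consistent with bin 0, one with bin 1.
events = [[0.9, 0.1], [0.9, 0.1], [0.1, 0.9]]
img = lm_mlem(events, [1.0, 1.0], [1.0, 1.0], n_iter=20)
```

The update preserves the total detected counts and concentrates intensity in the bins most consistent with the event list, which is why LM-MLEM outperforms simple back-projection in angular resolution and SNR.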
Multi-linear silicon drift detectors for X-ray and Compton imaging
Castoldi, A.; Galimberti, A.; Guazzoni, C.; Rehak, P.; Hartmann, R.; Strüder, L.
2006-11-01
Novel architectures of multi-anode silicon drift detectors with linear geometry (Multi-Linear Silicon Drift Detectors) have been developed to image X-rays and Compton electrons with excellent time resolution and achievable energy resolution better than 200 eV FWHM at 5.9 keV. In this paper we describe the novel features of Multi-Linear Silicon Drift Detectors and their possible operating modes, highlighting the impact on the imaging and spectroscopic capabilities. An application example of Multi-Linear Silicon Drift Detectors for fast 2D elemental mapping by means of K-edge subtraction imaging is shown. The charge deposited by Compton electrons in a Multi-Linear Silicon Drift Detector prototype irradiated by a ²²Na source has been measured, showing the possibility to clearly resolve the 2D projection of the ionization track and to estimate the specific energy loss per pixel. The reconstruction of Compton electron tracks within a silicon detector layer can increase the sensitivity of Compton telescopes for nuclear medicine and γ-ray astronomy.
Electronic structure of hafnium: A Compton profile study
Indian Academy of Sciences (India)
In this paper, we report the first-ever isotropic Compton profile of hafnium measured at an intermediate resolution, with 661.65 keV γ-radiation. To compare ... 0.38 a.u. The raw data were accumulated for about 350 h, resulting in an integrated Compton intensity of 2.6 × 10⁷ photons in the Compton profile region. To extract ...
Second LaBr3 Compton Telescope Prototype
International Nuclear Information System (INIS)
Llosa, Gabriela; Cabello, Jorge; Gillam, John-E.; Lacasta, Carlos; Oliver, Josep F.; Rafecas, Magdalena; Solaz, Carles; Solevi, Paola; Stankova, Vera; Torres-Espallardo, Irene; Trovato, Marco
2013-06-01
A Compton telescope for dose delivery monitoring in hadron therapy is under development at IFIC Valencia within the European project ENVISION. The telescope will consist of three detector planes, each one composed of a LaBr₃ continuous scintillator crystal coupled to four silicon photomultiplier (SiPM) arrays. After the development of a first prototype which served to assess the principle, a second prototype with larger crystals has been assembled and is being tested. The current version of the prototype consists of two detector layers, each one composed of a 32.5 × 35 mm² crystal coupled to four SiPM arrays. The VATA64HDR16 ASIC has been employed as front-end electronics. The readout system consists of a custom-made data acquisition board. Tests with point-like sources have been carried out in the laboratory, assessing the correct functioning of the device. The system optimization is ongoing. (authors)
A Compton camera prototype for prompt gamma medical imaging
Directory of Open Access Journals (Sweden)
Thirolf P.G.
2016-01-01
Full Text Available A Compton camera prototype for position-sensitive detection of prompt γ rays from proton-induced nuclear reactions is being developed in Garching. The detector system allows tracking of the Compton-scattered electrons. The camera consists of a monolithic LaBr3:Ce scintillation absorber crystal, read out by a multi-anode PMT, preceded by a stacked array of 6 double-sided silicon strip detectors acting as scatterers. The LaBr3:Ce crystal has been characterized with radioactive sources. Online commissioning measurements were performed with a pulsed deuteron beam at the Garching Tandem accelerator and with a clinical proton beam at the OncoRay facility in Dresden. The determination of the interaction point of the photons in the monolithic crystal was investigated.
Generation of laser Compton gamma-rays using Compact ERL
International Nuclear Information System (INIS)
Shizuma, Toshiyuki; Hajima, Ryoichi; Nagai, Ryoji; Hayakawa, Takehito; Mori, Michiaki; Seya, Michio
2015-01-01
Nondestructive isotope-specific assay system using nuclear resonance fluorescence has been developed at JAEA. In this system, intense, mono-energetic laser Compton scattering (LCS) gamma-rays are generated by combining an energy recovery linac (ERL) and a laser enhancement cavity. As technical development for such an intense gamma-ray source, we demonstrated generation of LCS gamma-rays using the Compact ERL (supported by the Ministry of Education, Culture, Sports, Science and Technology) developed in collaboration with KEK. We also measured X-ray fluorescence for elements near the iron region by using mono-energetic LCS gamma-rays. In this presentation, we will show results of the experiment and future plans. (author)
Fast cooling of bunches in Compton storage rings
Bulyak, E; Zimmermann, F
2011-01-01
We propose an enhancement of laser radiative cooling by utilizing laser pulses of small spatial and temporal dimensions, which interact only with a fraction of an electron bunch circulating in a storage ring. We studied the dynamics of such an electron bunch when laser photons scatter off the electrons at a collision point placed in a section with nonzero dispersion. In this case of 'asymmetric cooling', the stationary energy spread is much smaller than under conditions of regular scattering, where the laser spot size is larger than the electron beam, and the synchrotron oscillations are damped faster. Coherent oscillations of large amplitude may be damped within one synchrotron period, so this method can support the rapid successive injection of many bunches in longitudinal phase space for stacking purposes. Results of extensive simulations are presented for the performance optimization of Compton gamma-ray sources and damping rings.
Hybrid coded aperture and Compton imaging using an active mask
International Nuclear Information System (INIS)
Schultz, L.J.; Wallace, M.S.; Galassi, M.C.; Hoover, A.S.; Mocko, M.; Palmer, D.M.; Tornga, S.R.; Kippen, R.M.; Hynes, M.V.; Toolin, M.J.; Harris, B.; McElroy, J.E.; Wakeford, D.; Lanza, R.C.; Horn, B.K.P.; Wehe, D.K.
2009-01-01
The trimodal imager (TMI) images gamma-ray sources from a mobile platform using both coded aperture (CA) and Compton imaging (CI) modalities. In this paper we will discuss development and performance of image reconstruction algorithms for the TMI. In order to develop algorithms in parallel with detector hardware we are using a GEANT4 [J. Allison, K. Amako, J. Apostolakis, H. Araujo, P.A. Dubois, M. Asai, G. Barrand, R. Capra, S. Chauvie, R. Chytracek, G. Cirrone, G. Cooperman, G. Cosmo, G. Cuttone, G. Daquino, et al., IEEE Trans. Nucl. Sci. NS-53 (1) (2006) 270] based simulation package to produce realistic data sets for code development. The simulation code incorporates detailed detector modeling, contributions from natural background radiation, and validation of simulation results against measured data. Maximum likelihood algorithms for both imaging methods are discussed, as well as a hybrid imaging algorithm wherein CA and CI information is fused to generate a higher fidelity reconstruction.
Afanasiev, M.; Boehm, C.; van Driel, M.; Krischer, L.; May, D.; Rietmann, M.; Fichtner, A.
2016-12-01
Recent years have been witness to the application of waveform inversion to new and exciting domains, ranging from non-destructive testing to global seismology. Often, each new application brings with it novel wave propagation physics, spatial and temporal discretizations, and models of variable complexity. Adapting existing software to these novel applications often requires a significant investment of time, and acts as a barrier to progress. To combat these problems we introduce Salvus, a software package designed to solve large-scale full-waveform inverse problems, with a focus on both flexibility and performance. Based on a high-order finite (spectral) element discretization, we have built Salvus to work on unstructured quad/hex meshes in both 2 and 3 dimensions, with support for P1-P3 bases on triangles and tetrahedra. A diverse (and expanding) collection of wave propagation physics is supported (e.g. coupled solid-fluid). With a focus on the inverse problem, functionality is provided to ease integration with internal and external optimization libraries. Additionally, a python-based meshing package is included to simplify the generation and manipulation of regional to global scale Earth models (quad/hex), with interfaces available to external mesh generators for complex engineering-scale applications (quad/hex/tri/tet). Finally, to ensure that the code remains accurate and maintainable, we build upon software libraries such as PETSc and Eigen, and follow modern software design and testing protocols. Salvus bridges the gap between research and production codes with a design based on C++ mixins and Python wrappers that separates the physical equations from the numerical core. This allows domain scientists to add new equations using a high-level interface, without having to worry about optimized implementation details. Our goal in this presentation is to introduce the code, show several examples across the scales, and discuss some of the extensible design points.
Virtual Compton scattering off protons at moderately large momentum transfer
International Nuclear Information System (INIS)
Kroll, P.
1996-01-01
The amplitudes for virtual Compton scattering off protons are calculated within the framework of the diquark model, in which protons are viewed as being built up by quarks and diquarks. The latter objects are treated as quasi-elementary constituents of the proton. Virtual Compton scattering, photon electroproduction off protons and the Bethe-Heitler contamination are discussed for various kinematical situations. We particularly emphasize the role of the electron asymmetry for measuring the relative phases between the virtual Compton and the Bethe-Heitler amplitudes. It is also shown that the model is able to describe very well the experimental data for real Compton scattering off protons. (orig.)
Induced Compton scattering effects in radiation transport approximations
International Nuclear Information System (INIS)
Gibson, D.R. Jr.
1982-01-01
In this thesis the method of characteristics is used to solve radiation transport problems with induced Compton scattering effects included. Previous methods have addressed only problems in which either induced Compton scattering or linear scattering is ignored; problems which include both induced Compton scattering and spatial effects have not been considered before. The introduction of induced scattering into the radiation transport equation results in a quadratic nonlinearity. Methods are developed to solve problems in which both linear and nonlinear Compton scattering are important. Solutions to scattering problems are found for a variety of initial photon energy distributions.
Laser Compton polarimetry at JLab and MAMI. A status report
International Nuclear Information System (INIS)
Diefenbach, J.; Imai, Y.; Han Lee, J.; Maas, F.; Taylor, S.
2007-01-01
For modern parity violation experiments it is crucial to measure and monitor the electron beam polarization continuously. In recent years, different high-luminosity concepts for precision Compton backscattering polarimetry have been developed for use at modern CW electron beam accelerator facilities. As Compton backscattering polarimetry is free of intrinsic systematic uncertainties, it can be a superior alternative to other polarimetry techniques such as Moeller and Mott scattering. State-of-the-art high-luminosity Compton backscattering designs currently in use and under development at JLab and Mainz are compared to each other. The latest results from the Mainz A4 Compton polarimeter are presented. (orig.)
The research of thickness measurement base on γ-ray compton backscattering
International Nuclear Information System (INIS)
Chen Xinnan; Wang Guobao; Chen Yannan
2010-01-01
The source and detector should be on the same side in many industrial cases, such as measurement of large-scale pipes or steel plates and some large objects through which gamma rays cannot transmit. Unlike the transmission method, the advantage of Compton backscattering is the flexible layout between source and detector. Research on Compton backscattering is therefore very important for solving these special problems. This work describes a new layout between the source and the detector. Using the Monte Carlo method, several key factors associated with the measurement were also simulated, such as collimator dimension, the distance between the detector and the sample, source intensity, and the measured pieces. The result indicated this measurement method could entirely meet the requirement. The simulated Monte Carlo result was confirmed by experiment. The experiments accord with the simulated result: the larger the collimator diameter, the higher the counting rate received by the detector and the smaller the linear range; for organic glass, 5-8 mm is the best distance range between detector and samples. Different thicknesses correspond to different distances and collimator dimensions. These results provide a good technical base for research on thickness measurement based on Compton backscattering, and offer a guide to practical application. (authors)
Hunziker, J.; Thorbecke, J.; Slob, E. C.
2014-12-01
Commonly, electromagnetic measurements for exploring and monitoring hydrocarbon reservoirs are inverted for the subsurface conductivity distribution by minimizing the difference between the actual data and a forward modeled dataset. The convergence of the inversion process to the correct solution strongly depends on the shape of the solution space. Since this is a non-linear problem, there exists a multitude of minima of which only the global one provides the correct conductivity values. To easily find the global minimum we desire it to have a broad cone of attraction, while it should also feature a very narrow bottom in order to obtain the subsurface conductivity with high resolution. In this study, we aim to determine which combination of input data corresponds to a favorable shape of the solution space. Since the solution space is N-dimensional, with N being the number of unknown subsurface parameters, plotting it is out of the question. In our approach, we use a genetic algorithm (Goldberg, 1989) to probe the solution space. Such algorithms have the advantage that every run of the same problem will end up at a different solution. Most of these solutions are expected to lie close to the global minimum. A situation where only few runs end up in the global minimum indicates that the solution space consists of a lot of local minima or that the cone of attraction of the global minimum is small. If a lot of runs end up with a similar data-misfit but with a large spread of the subsurface medium parameters in one or more directions, it can be concluded that the chosen data-input is not sensitive with respect to that direction. Compared to the study of Hunziker et al. 2014, we also allow inversion for subsurface boundaries and include more combinations of input datasets. The results so far suggest that it is essential to include the magnetic field in the inversion process in order to find the anisotropic conductivity values. References: Goldberg, D. E., 1989. Genetic
Kinematic source inversion of the 2017 Puebla-Morelos, Mexico earthquake (2017/09/19, Mw.7.1)
Iglesias, A.; Castro-Artola, O.; Hjorleifsdottir, V.; Singh, S. K.; Ji, C.; Franco-Sánchez, S. I.
2017-12-01
On September 19th 2017, an Mw 7.1 earthquake struck Central Mexico, causing severe damage in the epicentral region, especially to several small and medium-size houses as well as historical buildings like churches and government offices. In Mexico City, at a distance of 100 km from the epicenter, 38 buildings collapsed. Authorities reported that 369 persons were killed by the earthquake (>60% in Mexico City). We determined the hypocentral location (18.406N, 98.706W, d=57km) from regional data, situating this earthquake inside the subducted Cocos Plate, with a normal fault mechanism (GlobalCMT: strike=300°, dip=44°, and rake=-82°). In this presentation we show the slip on the fault plane, determined by 1) a frequency-domain inversion using local and regional acceleration records that have been numerically integrated twice and bandpass filtered between 2 and 30, and 2) a wavelet domain inversion using teleseismic body and surface waves, filtered between 1-100 s and 50-150 s respectively, as well as static offsets. In both methods the fault plane is divided into subfaults, and for each subfault we invert for the average slip and the timing of initiation of slip. In the first method the slip direction is fixed to the ? direction and we invert for the rise time. In the second method the direction of slip is estimated, with values between -90 and +90 allowed, and the time history is an asymmetric cosine time function, for which we determine the "rise" and "fall" durations. For both methods, synthetic seismograms, based on the GlobalCMT focal mechanism, are computed for each subfault-station pair and for three components (Z, N-S, E-W). Preliminary results, using local data, show some slip concentrated close to the hypocentral location and a large patch about 20 km from the origin in the NW direction. Using teleseismic data, it is difficult to distinguish between the two fault planes, as the waveforms are equally well fit using either one of them. However, both are consistent with a
Energy Technology Data Exchange (ETDEWEB)
Matcha, R.L.; Pettitt, B.M.
1979-03-15
An interesting empirical relationship between zero-point Compton profile anisotropies ΔJ(0) and nuclear charges is noted. It is shown that, for alkali halide molecules AB, to a good approximation ΔJ(0) = N ln(Z_b/Z_a).
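The empirical relation in the record above is simple enough to evaluate directly. A minimal sketch follows; the interpretation of N as a free scaling parameter and the element assignments are illustrative assumptions, not taken from the paper:

```python
import math

def delta_j0(n: float, z_a: float, z_b: float) -> float:
    """Empirical zero-point Compton profile anisotropy for an alkali
    halide molecule AB: delta_J(0) = N * ln(Z_b / Z_a)."""
    return n * math.log(z_b / z_a)

# Hypothetical illustration with Z_a = 11 (Na) and Z_b = 17 (Cl), N = 1:
# the anisotropy grows with the charge ratio of the two nuclei and
# vanishes when the charges are equal.
print(delta_j0(1.0, 11.0, 17.0))
```

Note the relation is linear in N and antisymmetric under exchanging the two nuclei, since ln(Z_b/Z_a) = -ln(Z_a/Z_b).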
Nelson, Michael R.; Mccaffrey, Robert; Molnar, Peter
1987-01-01
The style and the distribution of faulting occurring today in the Tien Shan region were studied, by digitizing long-period World-Wide Standard Seismograph Network P and SH waveforms of 11 of the largest Tien Shan earthquakes between 1965 and 1982 and then using a least squares inversion routine to constrain their fault plane solutions and depths. The results of the examination indicate that north-south shortening is presently occurring in the Tien Shan, with the formation of basement uplifts flanked by moderately dipping thrust faults. The present-day tectonics of the Tien Shan seem to be analogous to those of the Rocky Mountains in Colorado, Wyoming, and Utah during the Laramide orogeny in Late Cretaceous and Early Tertiary time.
Cork quality estimation by using Compton tomography
Brunetti, A; Golosio, B; Luciano, P; Ruggero, A
2002-01-01
The quality control of cork stoppers is mandatory in order to guarantee the perfect conservation of the wine. Several techniques have been developed, but until now quality control was essentially related to the status of the external surface, so possible cracks or holes inside the stopper would remain hidden. In this paper a new technique based on X-ray Compton tomography is described. It is a non-destructive technique that allows one to reconstruct and visualize the cross-section of the analyzed cork stopper, and thus to reveal the presence of internal imperfections. Some results are reported and compared with visual classification.
Cork quality estimation by using Compton tomography
Brunetti, Antonio; Cesareo, Roberto; Golosio, Bruno; Luciano, Pietro; Ruggero, Alessandro
2002-11-01
The quality control of cork stoppers is mandatory in order to guarantee the perfect conservation of the wine. Several techniques have been developed, but until now quality control was essentially related to the status of the external surface, so possible cracks or holes inside the stopper would remain hidden. In this paper a new technique based on X-ray Compton tomography is described. It is a non-destructive technique that allows one to reconstruct and visualize the cross-section of the analyzed cork stopper, and thus to reveal the presence of internal imperfections. Some results are reported and compared with visual classification.
Zheng, Ao; Wang, Mingfeng; Yu, Xiangwei; Zhang, Wenbo
2018-03-01
On 2016 November 13, an Mw 7.8 earthquake occurred in the northeast of the South Island of New Zealand near Kaikoura. The earthquake caused severe damage and had a great impact on local nature and society. Referring to the tectonic environment and defined active faults, the field investigation and geodetic evidence reveal that at least 12 fault sections ruptured in the earthquake, and the focal mechanism is one of the most complicated in historical earthquakes. On account of the complexity of the source rupture, we propose a multisegment fault model based on the distribution of surface ruptures and active tectonics. We derive the source rupture process of the earthquake using the kinematic waveform inversion method with the multisegment fault model from strong-motion data of 21 stations (0.05-0.35 Hz). The inversion result suggests the rupture initiates in the epicentral area near the Humps fault, and then propagates northeastward along several faults, until the offshore Needles fault. The Mw 7.8 event is a mixture of right-lateral strike and reverse slip, and the maximum slip is approximately 19 m. The synthetic waveforms reproduce the characteristics of the observed ones well. In addition, we synthesize the coseismic offset distribution of the ruptured region from the slips of the upper subfaults in the fault model, which is roughly consistent with the surface breaks observed in the field survey.
International Nuclear Information System (INIS)
Takeuchi, Mitsuo; Wada, Shigeru; Takahashi, Hiroyuki; Hayashi, Kazuhiko; Murayama, Yoji
2000-09-01
At research reactors such as JRR-3M, operation management is carried out in order to ensure safe operation; for example, the excess reactivity is measured regularly and confirmed to satisfy the safety condition. The excess reactivity is calculated using the control rod position at criticality and the control rod worth measured by the positive period method (P.P. method), the conventional inverse kinetic method (IK method) and so on. The neutron source, however, influences the measurement results and introduces a measurement error. A new IK method considering the influence of steady neutron sources is proposed and applied to the JRR-3M. This report shows that the proposed IK method measures control rod worth more precisely than the conventional IK method. (author)
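The point of the record above, that a steady source shifts the reactivity recovered by inverse kinetics, can be sketched from the point-kinetics equations: ρ = β + Λ(dn/dt)/n − Λ(Σᵢ λᵢCᵢ + S)/n. A minimal illustration follows; the six-group delayed-neutron constants, generation time, and discretization are generic assumptions, not the JRR-3M procedure:

```python
# Six-group delayed-neutron data (illustrative thermal U-235 values -- an assumption)
BETA_I = [0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273]
LAMBDA_I = [0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01]  # decay constants [1/s]
BETA = sum(BETA_I)
GEN_TIME = 1.0e-4  # prompt-neutron generation time Lambda [s] (assumed)

def inverse_kinetics(times, counts, source=0.0):
    """Source-corrected inverse kinetics: recover reactivity (in dollars)
    from a measured neutron-level history n(t), including a steady source S.
    Setting source=0 recovers the conventional IK method, which is biased
    whenever the source term actually matters."""
    # start precursors at equilibrium with the initial level
    c = [bi * counts[0] / (GEN_TIME * li) for bi, li in zip(BETA_I, LAMBDA_I)]
    rho = []
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        n = counts[k]
        # implicit update of precursors: dC_i/dt = (beta_i/Lambda) n - lambda_i C_i
        c = [(ci + dt * bi * n / GEN_TIME) / (1.0 + dt * li)
             for ci, bi, li in zip(c, BETA_I, LAMBDA_I)]
        dndt = (counts[k] - counts[k - 1]) / dt
        delayed = sum(li * ci for li, ci in zip(LAMBDA_I, c))
        r = BETA + GEN_TIME * dndt / n - GEN_TIME * (delayed + source) / n
        rho.append(r / BETA)  # reactivity in dollars
    return rho
```

For a perfectly steady level, the conventional (S = 0) evaluation reads exactly critical, while the source-corrected version returns the true, slightly subcritical reactivity −ΛS/(nβ) dollars, which is the kind of systematic difference the record describes.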
Performance of a symmetric BGO-NaI anti-Compton shield
Energy Technology Data Exchange (ETDEWEB)
Alba, R.; Bellia, G.; Del Zoppo, A.
1988-09-01
A compact symmetric BGO-NaI shield for accurate discrete γ-ray spectroscopy with germanium detectors is described. Tests of the shield performed with radioactive sources are presented. The characteristics are expressed in terms of the Compton suppression factor, of the average detection efficiency and of the peak to total ratio deduced from the analysis of the response function of a Ge detector inserted into the shield.
Performance of a symmetric BGO-NaI anti-Compton shield
International Nuclear Information System (INIS)
Alba, R.; Del Zoppo, A.; Catania Univ.
1988-01-01
A compact symmetric BGO-NaI shield for accurate discrete γ-ray spectroscopy with germanium detectors is described. Tests of the shield performed with radioactive sources are presented. The characteristics are expressed in terms of the Compton suppression factor, of the average detection efficiency and of the peak to total ratio deduced from the analysis of the response function of a Ge detector inserted into the shield. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Antoniassi, M.; Conceicao, A.L.C. [Departamento de Fisica-Faculdade de Filosofia Ciencias e Letras de Ribeirao Preto-Universidade de Sao Paulo, Ribeirao Preto, Sao Paulo (Brazil); Poletti, M.E., E-mail: poletti@ffclrp.usp.br [Departamento de Fisica-Faculdade de Filosofia Ciencias e Letras de Ribeirao Preto-Universidade de Sao Paulo, Ribeirao Preto, Sao Paulo (Brazil)
2012-07-15
Electron densities of 33 samples of normal (adipose and fibroglandular) and neoplastic (benign and malignant) human breast tissues were determined through Compton scattering data using a monochromatic synchrotron radiation source and an energy dispersive detector. The area of the Compton peaks was used to determine the electron densities of the samples. Adipose tissue exhibits the lowest values of electron density whereas malignant tissue the highest. The relationship with their histology is discussed. Comparison with previous results showed differences smaller than 4%. - Highlights: Electron density of normal and neoplastic breast tissues was measured using Compton scattering. Monochromatic synchrotron radiation was used to obtain the Compton scattering data. The area of Compton peaks was used to determine the electron densities of samples. Adipose tissue shows the lowest electron density values whereas the malignant tissue the highest. Comparison with previous results showed differences smaller than 4%.
Study of Compton scattering influence in cardiac SPECT images
International Nuclear Information System (INIS)
Munhoz, A.C.L.; Abe, R.; Zanardo, E.L.; Robilotta, C.C.
1992-01-01
The effect of reducing the Compton fraction on image quality is evaluated, with two modes of data acquisition: one with the window of the energy analyser placed over the photopeak, and the other with two windows, one over the Compton contribution and the other centered over the photopeak. (C.G.C.)
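The two-window acquisition described in the record above is the basis of the classical dual-energy-window scatter correction for SPECT, in which the scatter contribution inside the photopeak window is estimated from the Compton-window counts. A minimal sketch follows; the scaling factor k = 0.5 is the commonly quoted default, taken here as an assumption rather than from this record:

```python
def dew_scatter_correct(photopeak_counts, compton_counts, k=0.5):
    """Dual-energy-window correction: subtract an estimate of the
    Compton-scattered events (k * Compton-window counts) from each
    photopeak-window pixel, clamping negative results at zero."""
    return [max(p - k * c, 0.0) for p, c in zip(photopeak_counts, compton_counts)]

# Hypothetical per-pixel counts from the photopeak and Compton windows:
corrected = dew_scatter_correct([100.0, 80.0, 50.0], [40.0, 30.0, 120.0])
```

In practice k is calibrated per camera and collimator; the clamp simply prevents unphysical negative counts in low-statistics pixels.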
Simulation of Laser-Compton Cooling of Electron Beams
Ohgaki, T.
2000-01-01
We study a method of laser-Compton cooling of electron beams. Using a Monte Carlo code, we evaluate the effects of the laser-electron interaction for transverse cooling. The optics with and without chromatic correction for the cooling are examined. The laser-Compton cooling for JLC/NLC at E_0=2 GeV is considered.
Zhao, Lei; Yang, Jinlong; Weglein, Arthur B.
2017-12-01
The inverse scattering series free-surface-multiple-elimination (FSME) algorithm is modified and extended to accommodate the source property-source radiation pattern. That accommodation can provide additional value for the fidelity of the free-surface multiple predictions. The new extended FSME algorithm retains all the merits of the original algorithm, i.e., fully data-driven and with a requirement of no subsurface information. It is tested on a one-dimensional acoustic model with proximal and interfering seismic events, such as interfering primaries and multiples. The results indicate the new extended FSME algorithm can predict more accurate free-surface multiples than methods without the accommodation of the source property if the source has a radiation pattern. This increased effectiveness in prediction contributes to removing free-surface multiples without damaging primaries. It is important in such cases to increase predictive effectiveness because other prediction methods, such as the surface-related-multiple-elimination algorithm, have difficulties and problems in prediction accuracy, and those issues affect efforts to remove multiples through adaptive subtraction. Therefore accommodation of the source property can not only improve the effectiveness of the FSME algorithm, but also extend the method beyond the current algorithm (e.g. improving the internal multiple attenuation algorithm).
Energy Technology Data Exchange (ETDEWEB)
Pison, I.
2005-12-15
Atmospheric pollution at a regional scale is the result of various interacting processes: emissions, chemistry, transport, mixing and deposition of gaseous species. Air quality forecasts are therefore performed with models, in which the emissions are taken into account through inventories. The simulated pollutant concentrations depend strongly on the emissions that are used, yet the inventories that represent them carry large uncertainties. Since it would be difficult today to improve the methodologies by which they are built, there remains the possibility of adding information to existing inventories. The optimization of emissions uses the information available in measurements to obtain the inventory that minimizes the difference between simulated and measured concentrations. A method for the inversion of anthropogenic emissions at a regional scale, using network measurements and based on the CHIMERE model and its adjoint, was developed and validated. A kriging technique allows us to optimize the use of the information available in the concentration space. Repeated kriging-optimization cycles increase the quality of the results. A dynamical spatial aggregation technique makes it possible to further reduce the size of the problem. The NOx emissions from the inventory elaborated by AIRPARIF for the Paris area were inverted for the summers of 1998 and 1999, with the events of the ESQUIF campaign studied in detail. The optimization reduces large differences between simulated and measured concentrations. Generally, however, the confidence level of the results decreases with the density of the measurement network. Therefore, the results with the highest confidence level correspond to the most intense emission fluxes of the Paris area. On the whole domain, the corrections to the average emitted mass and to the matching time profiles are consistent with the estimate of 15% obtained during the ESQUIF campaign. (author)
Yumimoto, Keiya; Morino, Yu; Ohara, Toshimasa; Oura, Yasuji; Ebihara, Mitsuru; Tsuruta, Haruo; Nakajima, Teruyuki
2016-11-01
The amount of 137Cs released by the Fukushima Dai-ichi Nuclear Power Plant accident of 11 March 2011 was inversely estimated by integrating an atmospheric dispersion model, an a priori source term, and a map of deposition recorded by aircraft. The a posteriori source term resolved finer (hourly) variations than the a priori term, and estimated the 137Cs released from 11 March to 2 April at 8.12 PBq. Although the time series of the a posteriori source term was generally similar to that of the a priori source term, notable modifications were found in the periods when the a posteriori source term was well constrained by the observations. The spatial pattern of 137Cs deposition with the a posteriori source term showed better agreement with the 137Cs deposition monitored by aircraft. The a posteriori source term increased 137Cs deposition in the Naka-dori region (the central part of Fukushima Prefecture) by 32.9%, and considerably improved the underestimated a priori 137Cs deposition. Observed values of deposition measured at 16 stations and surface atmospheric concentrations collected on a filter tape of suspended particulate matter were used for validation of the a posteriori results. A great improvement was found in the surface atmospheric concentration on 15 March; the a posteriori source term reduced the root mean square error, normalized mean error, and normalized mean bias by 13.4, 22.3, and 92.0% for the hourly values, respectively. However, limited improvements were observed in some periods and areas due to the difficulty in simulating accurate wind fields and the lack of observational constraints.
An experimental method for the optimization of anti-Compton spectrometer
Badran, H M
1999-01-01
The reduction of the Compton continuum can be achieved using a Compton suppression shield. For the first time, an experimental method is proposed for estimating the optimum dimensions of such a shield. The method can also provide information on the effect of the air gap, source geometry, gamma-ray energy, etc., on the optimum dimension of the active shield. The method employs measurements of the Compton suppression efficiency in two dimensions using small scintillation detectors. Two types of scintillation materials, NaI(Tl) and NE-102A plastic scintillators, were examined. The effects of gamma-ray energy and source geometry were also investigated using 137Cs and 60Co sources with three geometries: point, cylindrical, and Marinelli shapes. The results indicate the importance of both NaI(Tl) and NE-102A guard detectors in surrounding the main detector rather than the distance above it. The ratio between the part of the guard detector above the surface of the main detector to th...
Noise evaluation of Compton camera imaging for proton therapy
Ortega, P G; Cerutti, F; Ferrari, A; Gillam, J E; Lacasta, C; Llosá, G; Oliver, J F; Sala, P R; Solevi, P; Rafecas, M
2015-01-01
Compton cameras have emerged as an alternative for real-time dose monitoring techniques in Particle Therapy (PT), based on the detection of prompt gammas. As a consequence of the Compton scattering process, the gamma origin point can be restricted to the surface of a cone (the Compton cone). Through image reconstruction techniques, the distribution of the gamma emitters can be estimated, using cone-surface backprojections of the Compton cones through the image space, along with more sophisticated statistical methods to improve the image quality. To calculate the Compton cone required for image reconstruction, either two interactions, the last being photoelectric absorption, or three scatter interactions are needed. Because of the high energy of the photons in PT the first option might not be adequate, as the photon is in general not absorbed. However, the second option is less efficient. That is the reason to resort to spectral reconstructions, where the incoming γ energy is considered as a variable in the recons...
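For the two-interaction events described in the record above (a Compton scatter followed by photoelectric absorption), the half-opening angle of the Compton cone follows from the Compton formula. A minimal sketch, with the energies and the full-absorption assumption purely illustrative:

```python
import math

M_E_C2 = 0.511  # electron rest energy [MeV]

def compton_cone_angle(e_scatter: float, e_absorb: float) -> float:
    """Half-opening angle (radians) of the Compton cone for a two-interaction
    event: e_scatter deposited in the first (Compton) interaction and
    e_absorb in the final photoelectric absorption, so that the incident
    photon energy is their sum."""
    e_in = e_scatter + e_absorb
    # Compton formula: cos(theta) = 1 - m_e c^2 (1/E' - 1/E)
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_absorb - 1.0 / e_in)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energies are not kinematically consistent")
    return math.acos(cos_theta)

# e.g. a 0.7 MeV photon depositing 0.2 MeV in the scatterer:
angle_deg = math.degrees(compton_cone_angle(0.2, 0.5))
```

The source position is then constrained to a cone of this half-angle around the line joining the two interaction points, which is exactly the surface that the backprojection step smears through the image space.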
Virtual compton scattering at low energy
International Nuclear Information System (INIS)
Lhuillier, D.
1997-09-01
The work described in this PhD thesis is a study of Virtual Compton Scattering (VCS) off the proton at low energy, below the pion production threshold. Our experiment was carried out at MAMI with the help of two high-resolution spectrometers. Experimentally, the VCS process is the electroproduction of photons off a liquid hydrogen target. First results of the data analysis, including radiative corrections, are presented and compared with the low energy theorem prediction. VCS is an extension of Real Compton Scattering. The virtuality of the incoming photon gives access to new observables of the nucleon internal structure which are complementary to the elastic form factors: the generalized polarizabilities (GP). They are functions of the squared invariant mass of the virtual photon. In the limit of vanishing photon mass these observables reduce to the usual electric and magnetic polarizabilities. Our experiment is the first measurement of the VCS process at a virtual photon squared mass of 0.33 GeV². The experimental part presents the analysis method. The high precision needed in the absolute cross-section measurement required an accurate estimate of the radiative corrections to VCS. This new calculation, performed in the dimensional regularization scheme, constitutes the theoretical part of this thesis. At low q', preliminary results agree with the low energy theorem prediction. At higher q', subtraction of the low energy theorem contribution to extract the GPs is discussed. (author)
Proton compton scattering in the resonance region
International Nuclear Information System (INIS)
Ishii, Takanobu.
1979-12-01
Differential cross sections of proton Compton scattering have been measured in the energy range between 400 and 1150 MeV at CMS angles of 130°, 100° and 70°. The recoil proton was detected with a magnetic spectrometer using multi-wire proportional chambers and wire spark chambers. In coincidence with the proton, the scattered photon was detected with a lead glass Cerenkov counter of the total absorption type with a lead plate converter, and horizontal and vertical scintillation counter hodoscopes. The background due to neutral pion photoproduction was subtracted by using the kinematic relations between the scattered photon and the recoil proton. Theoretical calculations based on an isobar model with two components, resonance plus background, were done, and the photon couplings of the second resonance region were determined for the first time from the proton Compton data. The results are that the helicity 1/2 photon couplings of P11(1470) and S11(1535), and the helicity 3/2 photon coupling of D13(1520), are consistent with those determined from the single pion photoproduction data, but the helicity 1/2 photon coupling of D13(1520) has a somewhat larger value than that from the single pion photoproduction data. (author)
Relativistic wave equations and compton scattering
International Nuclear Information System (INIS)
Sutanto, S.H.; Robson, B.A.
1998-01-01
Full text: Recently an eight-component relativistic wave equation for spin-1/2 particles was proposed. This equation was obtained from a four-component spin-1/2 wave equation (the KG1/2 equation), which contains second-order derivatives in both space and time, by a procedure involving a linearisation of the time derivative analogous to that introduced by Feshbach and Villars for the Klein-Gordon equation. This new eight-component equation gives the same bound-state energy eigenvalue spectra for hydrogenic atoms as the Dirac equation but has been shown to predict different radiative transition probabilities for the fine structure of both the Balmer and Lyman α lines. Since it has been shown that the new theory does not always give the same results as the Dirac theory, it is important to consider the validity of the new equation in the case of other physical problems. One of the early crucial tests of the Dirac theory was its application to the scattering of a photon by a free electron: the so-called Compton scattering problem. In this paper we apply the new theory to the calculation of Compton scattering to order e². It will be shown that in spite of the considerable difference in the structure of the new theory and that of Dirac, the cross section is given by the Klein-Nishina formula.
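The Klein-Nishina formula mentioned at the end of the record is easy to state concretely. A minimal sketch of the standard unpolarized differential cross section (a textbook result, not the eight-component calculation of the paper; the 662 keV example energy is an arbitrary choice):

```python
import math

R_E = 2.8179403262e-15  # classical electron radius [m]
M_E_C2 = 0.511          # electron rest energy [MeV]

def klein_nishina(e_mev: float, theta: float) -> float:
    """Unpolarized Klein-Nishina differential cross section dsigma/dOmega
    [m^2/sr] for a photon of energy e_mev scattered through angle theta."""
    # ratio of scattered to incident photon energy, E'/E
    ratio = 1.0 / (1.0 + (e_mev / M_E_C2) * (1.0 - math.cos(theta)))
    return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - math.sin(theta)**2)

# In the forward direction the cross section reduces to the Thomson value r_e^2.
forward = klein_nishina(0.662, 0.0)
```

At theta = 0 the energy ratio is 1 and the formula collapses to the classical Thomson limit, a quick sanity check on any implementation.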
Lippi, Giuseppe; Salvagno, Gian Luca; Danese, Elisa; Lima-Oliveira, Gabriel; Brocco, Giorgio; Guidi, Gian Cesare
2014-09-25
This study was planned to establish whether random orientation of gel tubes after centrifugation may impair sample quality. Eight gel tubes were collected from 17 volunteers: 2 Becton Dickinson (BD) serum tubes, 2 Terumo serum tubes, 2 BD lithium heparin tubes and 2 Terumo lithium heparin tubes. One tube of each category was kept in a vertical, closure-up position for 90 min ("upright"), whereas the paired tubes underwent bottom-up inversion every 15 min, for 90 min ("inverted"). Immediately after this period, 14 clinical chemistry analytes, serum indices and complete blood count were assessed in all tubes. Significant increases were found for phosphate and lipaemic index in all inverted tubes, along with AST, calcium, cholesterol, LDH, potassium, hemolysis index, leukocytes, erythrocytes and platelets limited to lithium heparin tubes. The desirable quality specifications were exceeded for AST, LDH, and potassium in inverted lithium heparin tubes. Residual leukocytes, erythrocytes, platelets and cellular debris were also significantly increased in inverted lithium heparin tubes. Lithium heparin gel tubes should be maintained in a vertical, closure-up position after centrifugation. Copyright © 2014 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Namatame, Hirofumi; Taniguchi, Masaki
1994-01-01
Photoelectron spectroscopy is regarded as a most powerful technique, since it can measure the occupied electronic states almost completely. Inverse photoelectron spectroscopy, in turn, measures the unoccupied electronic states by using the inverse process of photoelectron spectroscopy, and in principle experiments analogous to photoelectron spectroscopy become feasible. The experimental technology for inverse photoelectron spectroscopy has been developed energetically by many research groups. At present, work is underway on improving the resolution of inverse photoelectron spectroscopy and on developing inverse photoelectron spectrometers with variable photon energy, but no inverse photoelectron spectrometer for the vacuum ultraviolet region is yet on the market. In this report, the principle of inverse photoelectron spectroscopy and the present state of the spectrometers are described, and the direction of future development is explored. As experimental equipment, electron guns, photon detectors and so on are explained. As examples of experiments, inverse photoelectron spectroscopy of semimagnetic semiconductors and resonance inverse photoelectron spectroscopy are reported. (K.I.)
Science Flight Program of the Nuclear Compton Telescope
Boggs, Steven
This is the lead proposal for this program. We are proposing a 5-year program to perform the scientific flight program of the Nuclear Compton Telescope (NCT), consisting of a series of three (3) scientific balloon flights. NCT is a balloon-borne, wide-field telescope designed to survey the gamma-ray sky (0.2-5 MeV), performing high-resolution spectroscopy, wide-field imaging, and polarization measurements. NCT has been rebuilt as a ULDB payload under the current 2-year APRA grant. (In that proposal we stated our goal was to return at this point to propose the scientific flight program.) The NCT rebuild/upgrade is on budget and schedule to achieve flight-ready status in Fall 2013. Science: NCT will map the Galactic positron annihilation emission, shedding more light on the mysterious concentration of this emission uncovered by INTEGRAL. NCT will survey Galactic nucleosynthesis and the role of supernova and other stellar populations in the creation and evolution of the elements. NCT will map 26-Al and positron annihilation with unprecedented sensitivity and uniform exposure, perform the first mapping of 60-Fe, search for young, hidden supernova remnants through 44-Ti emission, and enable a host of other nuclear astrophysics studies. NCT will also study compact objects (in our Galaxy and AGN) and GRBs, providing novel measurements of polarization as well as detailed spectra and light curves. Design: NCT is an array of germanium gamma-ray detectors configured in a compact, wide-field Compton telescope configuration. The array is shielded on the sides and bottom by an active anticoincidence shield but is open to the 25% of the sky above for imaging, spectroscopy, and polarization measurements. The instrument is mounted on a zenith-pointed gondola, sweeping out ~50% of the sky each day. This instrument builds off the Compton telescope technique pioneered by COMPTEL on the Compton Gamma Ray Observatory. However, by utilizing modern germanium semiconductor strip detectors
Development of a Compton camera for prompt-gamma medical imaging
Aldawood, S.; Thirolf, P. G.; Miani, A.; Böhmer, M.; Dedes, G.; Gernhäuser, R.; Lang, C.; Liprandi, S.; Maier, L.; Marinšek, T.; Mayerhofer, M.; Schaart, D. R.; Lozano, I. Valencia; Parodi, K.
2017-11-01
A Compton camera-based detector system for photon detection from nuclear reactions induced by proton (or heavier ion) beams is under development at LMU Munich, targeting the online range verification of the particle beam in hadron therapy via prompt-gamma imaging. The detector is designed to be capable of reconstructing the photon source origin not only from the Compton scattering kinematics of the primary photon, but also to allow for tracking of the secondary Compton-scattered electrons, thus enabling a γ-source reconstruction also from incompletely absorbed photon events. The Compton camera consists of a monolithic LaBr3:Ce scintillation crystal, read out by a multi-anode PMT acting as absorber, preceded by a stacked array of 6 double-sided silicon strip detectors as scatterers. The detector components have been characterized under both offline and online conditions. The LaBr3:Ce crystal exhibits an excellent time and energy resolution. Using intense collimated 137Cs and 60Co sources, the monolithic scintillator was scanned on a fine 2D grid to generate a reference library of light amplitude distributions that allows for reconstructing the photon interaction position using a k-Nearest Neighbour (k-NN) algorithm. Systematic studies were performed to investigate the performance of the reconstruction algorithm, revealing an improvement of the spatial resolution with increasing photon energy to an optimum value of 3.7(1) mm at 1.33 MeV, achieved with the Categorical Average Pattern (CAP) modification of the k-NN algorithm.
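The k-NN position lookup described above can be sketched generically: compare a measured light-amplitude pattern against a stored reference library and average the positions of the closest matches. The toy patterns and grid below are illustrative stand-ins, not the LMU detector's data:

```python
import numpy as np

def knn_position(pattern, library_patterns, library_positions, k=3):
    """Estimate the photon interaction position as the mean of the k reference
    positions whose stored light-amplitude patterns are closest (Euclidean)."""
    d = np.linalg.norm(library_patterns - pattern, axis=1)
    idx = np.argsort(d)[:k]
    return library_positions[idx].mean(axis=0)

# Toy library: each reference pattern is a Gaussian response centred at a grid point.
grid = np.array([(x, y) for x in range(5) for y in range(5)], dtype=float)
pix = grid.copy()  # one readout "pixel" per grid point, for simplicity
lib = np.exp(-np.linalg.norm(grid[:, None, :] - pix[None, :, :], axis=2) ** 2)

# A query pattern generated at (2, 2) is matched back to that position.
query = np.exp(-np.linalg.norm(np.array([2.0, 2.0]) - pix, axis=1) ** 2)
print(knn_position(query, lib, grid, k=1))  # -> [2. 2.]
```

The CAP modification mentioned in the abstract replaces the plain average with a category-wise comparison of patterns, but the lookup structure is the same.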
Díaz-Mojica, John; Cruz-Atienza, Víctor M.; Madariaga, Raúl; Singh, Shri K.; Tago, Josué; Iglesias, Arturo
2014-10-01
We introduce a method for imaging the earthquake source dynamics from the inversion of ground motion records based on a parallel genetic algorithm. The source model follows an elliptical patch approach and uses the staggered-grid split-node method to simulate the earthquake dynamics. A statistical analysis is used to estimate errors in both inverted and derived source parameters. Synthetic inversion tests reveal that the average rupture speed (Vr), the rupture area, and the stress drop (Δτ) may be determined with formal errors of ~30%, ~12%, and ~10%, respectively. In contrast, derived parameters such as the radiated energy (Er), the radiation efficiency (ηr), and the fracture energy (G) have larger errors, around ~70%, ~40%, and ~25%, respectively. We applied the method to the Mw 6.5 intermediate-depth (62 km) normal-faulting earthquake of 11 December 2011 in Guerrero, Mexico. Inferred values of Δτ = 29.2 ± 6.2 MPa and ηr = 0.26 ± 0.1 are significantly higher and lower, respectively, than those of typical subduction thrust events. Fracture energy is large, so that more than 73% of the available potential energy for the dynamic process of faulting was deposited in the focal region (i.e., G = (14.4 ± 3.5) × 10¹⁴ J), producing a slow rupture process (Vr/VS = 0.47 ± 0.09) despite the relatively high energy radiation (Er = (0.54 ± 0.31) × 10¹⁵ J) and energy-moment ratio (Er/M0 = 5.7 × 10⁻⁵). It is interesting to point out that such a slow and inefficient rupture along with the large stress drop in a small focal region are features also observed in both the 1994 deep Bolivian earthquake and the seismicity of the intermediate-depth Bucaramanga nest.
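The derived radiation efficiency is tied to the radiated and fracture energies by ηr = Er / (Er + G); assuming this standard definition (not stated explicitly in the abstract), plugging in the quoted central values roughly reproduces the reported 0.26:

```python
def radiation_efficiency(e_r, g):
    """Seismic radiation efficiency: radiated energy over the total available
    energy (radiated plus fracture energy)."""
    return e_r / (e_r + g)

e_r = 0.54e15   # radiated energy Er (J), central value from the abstract
g = 14.4e14     # fracture energy G (J), central value from the abstract
print(round(radiation_efficiency(e_r, g), 2))  # -> 0.27
```

The result, 0.27, sits well inside the quoted ηr = 0.26 ± 0.1, a useful internal consistency check on the three energy estimates.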
Rodríguez Cardozo, Félix; Hjörleifsdóttir, Vala; Jónsdóttir, Kristín; Geirsson, Halldor; Iglesias, Arturo
2017-04-01
In August 2014 a sequence of earthquakes took place in the Bardarbunga caldera (7 x 11 km) and along a laterally propagating dike that connected the caldera with the Holuhraun lava field. The caldera earthquakes were coincident in time with the caldera subsidence (~70 m) and the propagation of the dike, which ended in a fissural eruption in Holuhraun (Guðmundsson et al., 2016). Volcanic seismic sources represented by the moment tensor commonly have a large non-double-couple component, which implies that the source cannot be described as slip on a planar fault. However, finding an appropriate physical mechanism that explains the non-double-couple component is a challenging task, since several phenomena could produce it, such as intrusive processes like dikes or sills (Kanamori et al., 1993; Riel et al., 2014) as well as geometric effects due to slip on a curved fault (Nettles & Ekström, 1998). The earthquakes in the Bardarbunga caldera are quite interesting not only because of their magnitudes (around seventy events above Mw 5.0) but also because they were observed geodetically, by GPS and by interferometric synthetic aperture radar (InSAR) (Guðmundsson et al., 2016). Taking into account that the Bardarbunga caldera is covered by a glacier (which makes detecting changes in the surface using InSAR difficult) and that detecting waveforms in GPS stations is common only for large tectonic earthquakes (above Mw 7), observing volcanic earthquakes simultaneously by InSAR and GPS is a rare and outstanding opportunity for constraining the volcanic seismic source. Likewise, if we assume that all the subsidence earthquakes in Bardarbunga have a common seismic source, we can use the fault plane constrained for the 18 September earthquake to invert the seismic source of all the events in the caldera, varying only parameters such as half duration and time shift. In this work, we obtained source parameters for the 18 September earthquake and used them as an initial solution in searching for a model of several point sources that depicts all the earthquakes
Liao, Renkuan; Yang, Peiling; Wu, Wenyong; Ren, Shumei
2016-01-01
The widespread use of superabsorbent polymers (SAPs) in arid regions improves the efficiency of local land and water use. However, SAPs' repeated absorption and release of water has periodic and unstable effects on both soil's physical and chemical properties and on the growth of plant roots, which complicates modeling of water movement in SAP-treated soils. In this paper, we propose a model of soil water movement for SAP-treated soils. The residence time of SAP in the soil and the duration of the experiment were treated as the same parameter t. This simplifies previously proposed models in which the residence time of SAP in the soil and the experiment's duration were considered as two independent parameters. Numerical testing was carried out on the inverse method of estimating the source/sink term of root water uptake in the model of soil water movement under the effect of SAP. The test results show that time interval, hydraulic parameters, test error, and instrument precision had a significant influence on the stability of the inverse method, while time step, layering of soil, and boundary conditions had relatively smaller effects. A comprehensive analysis of the method's stability, calculation, and accuracy suggests that the proposed inverse method applies if the following conditions are satisfied: the time interval is between 5 d and 17 d; the time step is between 1000 and 10000; the test error is ≥ 0.9; the instrument precision is ≤ 0.03; and the rate of soil surface evaporation is ≤ 0.6 mm/d.
International Nuclear Information System (INIS)
Burkhard, N.R.
1979-01-01
The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
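"Stabilized linear inverse theory," as invoked above, typically means Tikhonov-style regularized least squares. The sketch below is a generic illustration of that technique, not the TREND/INVERT source code; the toy operator and data are invented:

```python
import numpy as np

def stabilized_inverse(g_matrix, d, alpha=1e-2):
    """Solve min ||G m - d||^2 + alpha ||m||^2 via the regularized normal
    equations (G^T G + alpha I) m = G^T d. The damping term alpha stabilizes
    the solution when G is ill-conditioned, as in gravity inversion."""
    n = g_matrix.shape[1]
    return np.linalg.solve(g_matrix.T @ g_matrix + alpha * np.eye(n),
                           g_matrix.T @ d)

# Mildly ill-conditioned toy problem: the last two model columns are weak.
rng = np.random.default_rng(0)
g_mat = rng.standard_normal((20, 5)) @ np.diag([1.0, 1.0, 1.0, 0.1, 0.01])
m_true = np.ones(5)
d = g_mat @ m_true + 1e-3 * rng.standard_normal(20)  # noisy "gravity" data

m_est = stabilized_inverse(g_mat, d, alpha=1e-4)
print(np.round(m_est[:3], 2))  # well-constrained components recovered near 1
```

Iterating a step like this against an updated forward model is the same converge-by-alternation pattern the TREND/INVERT pair implements.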
Directory of Open Access Journals (Sweden)
Shiann-Jong Lee
2012-01-01
Full Text Available This study investigated 18 broadband teleseismic records and 451 near field GPS coseismic deformation data to determine the spatial and temporal slip distribution of the 2011 Tohoku-Oki earthquake (M 9.0). The results show a large triangular shaped slip zone with several asperities. The largest asperity centered above the hypocenter at about 5 - 30 km depth. A secondary large asperity was found in the deeper subduction zone beneath the hypocenter. The average slip on the fault is ~15 m and the maximum displacement on the biggest asperity is > 30 m. The temporal rupture process shows that the slip nucleated near the hypocenter at the beginning, and then ruptured to the shallow fault plane forming the largest asperity. The slip developed in the deeper subduction zone in the second stage. Finally, the rupture propagated toward the north and south of the fault along the Japan Trench. The source time function shows three segments of energy release with two large peaks related to the development of the asperities. The overall rupture process is ~180 seconds. This source model coincides well with the aftershock distribution and provides first-order information on the source complexity of the earthquake, which is crucial for further studies.
International Nuclear Information System (INIS)
Bakhlanov, S.V.; Bazlov, N.V.; Derbin, A.V.; Drachnev, I.S.; Kayunov, A.S.; Muratova, V.N.; Semenov, D.A.; Unzhakov, E.V.
2016-01-01
In this paper we present a method of scintillation detector energy calibration using gamma-rays. The technique is based on the Compton scattering of gamma-rays in a scintillation detector and subsequent photoelectric absorption of the scattered photon in a Ge detector. The novelty of this method is that the source of gamma rays, the germanium detector and the scintillation detector are arranged immediately adjacent to one another. The method presents an effective solution for detectors consisting of low atomic number materials, when the ratio between the Compton effect and photoelectric absorption is large and the mean path of gamma-rays is comparable to the size of the detector. The technique can be used for precision measurements of the dependence of the scintillator light yield on the electron energy.
High energy Compton scattering study of TiC and TiN
Energy Technology Data Exchange (ETDEWEB)
Joshi, Ritu; Bhamu, K.C.; Dashora, Alpa [Department of Physics, University College of Science, M.L. Sukhadia University, Udaipur 313001, Rajasthan (India); Ahuja, B.L., E-mail: blahuja@yahoo.co [Department of Physics, University College of Science, M.L. Sukhadia University, Udaipur 313001, Rajasthan (India)
2011-05-15
We present the experimental Compton profiles of TiC and TiN using 661.65 keV γ-rays from a 20 Ci ¹³⁷Cs source. To explain our experimental data on momentum densities, we have computed the theoretical profiles, energy bands and density of states using a linear combination of atomic orbitals scheme within the framework of density functional theory. In addition, the energy bands, density of states and Fermi surfaces have also been computed using the full potential linearised augmented plane wave method. Energy bands and density of states obtained from both theoretical models show the metallic character of TiC and TiN. The anisotropies in the Compton line shapes and the Fermi surface topology are discussed in terms of energy bands.
A didactic experiment showing the Compton scattering by means of a clinical gamma camera.
Amato, Ernesto; Auditore, Lucrezia; Campennì, Alfredo; Minutoli, Fabio; Cucinotta, Mariapaola; Sindoni, Alessandro; Baldari, Sergio
2017-06-01
We describe a didactic approach aimed at explaining the effect of Compton scattering in nuclear medicine imaging, exploiting the comparison of a didactic experiment with a gamma camera with the outcomes of a Monte Carlo simulation of the same experimental apparatus. We employed a ⁹⁹ᵐTc source emitting 140.5 keV photons, collimated in the upper direction through two pinholes, shielded by 6 mm of lead. An aluminium cylinder was placed on the source at 50 mm distance. The energy of the scattered photons was measured on the spectra acquired by the gamma camera. We observed that the gamma ray energy measured at each step of rotation gradually decreased from the characteristic energy of 140.5 keV at 0° to 102.5 keV at 120°. A comparison between the obtained data and the expected results from the Compton formula and from the Monte Carlo simulation revealed full agreement within the experimental error (relative errors between -0.56% and 1.19%) given by the energy resolution of the gamma camera. The electron rest mass was also evaluated satisfactorily. The experiment was found useful in explaining to nuclear medicine residents the phenomenology of Compton scattering and its importance in nuclear medicine imaging, and it can be profitably proposed during the training of medical physics residents as well. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
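The Compton relation behind this measurement can be checked numerically. The sketch below takes the rotation angles as nominal scattering angles (the real camera geometry, which the abstract does not detail, shifts the effective angle slightly) and also inverts the formula to extract the electron rest energy from one (E, E', θ) point, as the experiment does:

```python
import math

MEC2_KEV = 511.0  # electron rest energy (keV), used as a cross-check below

def scattered_energy(e_kev, theta_deg):
    """Compton formula: scattered photon energy for incident energy e_kev."""
    t = math.radians(theta_deg)
    return e_kev / (1.0 + (e_kev / MEC2_KEV) * (1.0 - math.cos(t)))

def rest_energy_from_shift(e_kev, e_prime_kev, theta_deg):
    """Invert the Compton formula: m_e c^2 from one (E, E', theta) point,
    m_e c^2 = E E' (1 - cos theta) / (E - E')."""
    t = math.radians(theta_deg)
    return e_kev * e_prime_kev * (1.0 - math.cos(t)) / (e_kev - e_prime_kev)

print(round(scattered_energy(140.5, 0.0), 1))    # -> 140.5 (no shift at 0 deg)
print(round(scattered_energy(140.5, 120.0), 1))  # predicted energy at 120 deg
```

Feeding a consistent (E, E', θ) triple back into `rest_energy_from_shift` recovers 511 keV exactly, which is the round-trip the residents perform with measured energies.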
Efthimiou, George C.; Kovalets, Ivan V.; Venetsanos, Alexandros; Andronopoulos, Spyros; Argyropoulos, Christos D.; Kakosimos, Konstantinos
2017-12-01
An improved inverse modelling method to estimate the location and the emission rate of an unknown stationary point source of passive atmospheric pollutant in a complex urban geometry is incorporated in the Computational Fluid Dynamics code ADREA-HF and presented in this paper. The key improvement in relation to the previous version of the method lies in a two-step segregated approach. At first only the source coordinates are analysed using a correlation function of measured and calculated concentrations. In the second step the source rate is identified by minimizing a quadratic cost function. The validation of the new algorithm is performed by simulating the MUST wind tunnel experiment. A grid-independent flow field solution is first attained by applying successive refinements of the computational mesh and the final wind flow is validated against the measurements quantitatively and qualitatively. The old and new versions of the source term estimation method are tested on a coarse and a fine mesh. The new method appeared to be more robust, giving satisfactory estimations of source location and emission rate on both grids. The performance of the old version of the method varied between failure and success and appeared to be sensitive to the selection of the model error magnitude that needs to be inserted in its quadratic cost function. The performance of the method also depends on the number and the placement of sensors constituting the measurement network. Of significant interest for the practical application of the method in urban settings is the number of concentration sensors required to obtain a "satisfactory" determination of the source. The probability of obtaining a satisfactory solution - according to specified criteria - by the new method has been assessed as a function of the number of sensors that constitute the measurement network.
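The two-step scheme (location by correlation, then rate by minimizing a quadratic cost) can be sketched generically. The dispersion fields and sensor layout below are invented stand-ins, not ADREA-HF output:

```python
import numpy as np

def locate_and_rate(candidates, unit_fields, measured):
    """Step 1: pick the candidate source whose unit-rate concentration field
    best correlates with the measurements. Step 2: emission rate q minimizing
    the quadratic cost sum_i (q * c_i - y_i)^2 for the chosen candidate."""
    best, best_corr = 0, -np.inf
    for i, c in enumerate(unit_fields):
        corr = np.corrcoef(c, measured)[0, 1]
        if corr > best_corr:
            best, best_corr = i, corr
    c = unit_fields[best]
    rate = float(c @ measured) / float(c @ c)  # set d(cost)/dq = 0
    return candidates[best], rate

# Toy setup: 3 candidate locations, 6 sensors; true source is "B" at rate 2.5.
unit = np.array([[1.0, 0.5, 0.2, 0.1, 0.05, 0.02],
                 [0.2, 1.0, 0.5, 0.2, 0.10, 0.05],
                 [0.05, 0.1, 0.2, 0.5, 1.0, 0.20]])
y = 2.5 * unit[1]  # noiseless measurements from source B

loc, q = locate_and_rate(["A", "B", "C"], unit, y)
print(loc, round(q, 2))  # -> B 2.5
```

Because correlation is scale-invariant, step 1 is insensitive to the unknown rate, which is exactly why the segregated approach is more robust than fitting location and rate jointly.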
Li, Yongxing; Smith, Richard S.
2018-03-01
We present two examples of using the contrast source inversion (CSI) method to invert synthetic radio-imaging (RIM) data and field data. The synthetic model has two isolated conductors (one perfect conductor and one moderate conductor) embedded in a layered background. After inversion, we can identify the two conductors on the inverted image. The shape of the perfect conductor is better resolved than the shape of the moderate conductor. The inverted conductivity values of the two conductors are approximately the same, which demonstrates that the conductivity values cannot be correctly interpreted from the CSI results. The boundaries and the tilts of the upper and the lower conductive layers on the background can also be inferred from the results, but the centre parts of conductive layers in the inversion results are more conductive than the parts close to the boreholes. We used the straight-ray tomographic imaging method and the CSI method to invert the RIM field data collected using the FARA system between two boreholes in a mining area in Sudbury, Canada. The RIM data include the amplitude and the phase data collected using three frequencies: 312.5 kHz, 625 kHz and 1250 kHz. The data close to the ground surface have high amplitude values and complicated phase fluctuations, which are inferred to be contaminated by the reflected or refracted electromagnetic (EM) fields from the ground surface, and are removed for all frequencies. Higher-frequency EM waves attenuate more quickly in the subsurface environment, and the locations where the measurements are dominated by noise are also removed. When the data are interpreted with the straight-ray method, the images differ substantially for different frequencies. In addition, there are some unexpected features in the images, which are difficult to interpret. Compared with the straight-ray imaging results, the inversion results with the CSI method are more consistent for different frequencies. On the basis of what we learnt
Han, Shin-Chan; Riva, Riccardo; Sauber, Jeanne; Okal, Emile
2013-01-01
We quantify gravity changes after great earthquakes present within the 10 year long time series of monthly Gravity Recovery and Climate Experiment (GRACE) gravity fields. Using a spherical harmonic normal-mode formulation, the respective source parameters of moment tensor and double-couple were estimated. For the 2004 Sumatra-Andaman earthquake, the gravity data indicate a composite moment of 1.2 × 10²³ N m with a dip of 10°, in agreement with the estimate obtained at ultralong seismic periods. For the 2010 Maule earthquake, the GRACE solutions range from 2.0 to 2.7 × 10²² N m for dips of 12°-24° and centroid depths within the lower crust. For the 2011 Tohoku-Oki earthquake, the estimated scalar moments range from 4.1 to 6.1 × 10²² N m, with dips of 9°-19° and centroid depths within the lower crust. For the 2012 Indian Ocean strike-slip earthquakes, the gravity data delineate a composite moment of 1.9 × 10²² N m regardless of the centroid depth, comparing favorably with the total moment of the main ruptures and aftershocks. The smallest event we successfully analyzed with GRACE was the 2007 Bengkulu earthquake with M₀ ≈ 5.0 × 10²¹ N m. We found that the gravity data constrain the focal mechanism with the centroid only within the upper and lower crustal layers for thrust events. Deeper sources (i.e., in the upper mantle) could not reproduce the gravity observation as the larger rigidity and bulk modulus at mantle depths inhibit the interior from changing its volume, thus reducing the negative gravity component. Focal mechanisms and seismic moments obtained in this study represent the behavior of the sources on temporal and spatial scales exceeding the seismic and geodetic spectrum.
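Scalar moments like these map to moment magnitude via the standard relation Mw = (2/3)(log10 M0 − 9.1), with M0 in N m; a quick check against the quoted values:

```python
import math

def moment_magnitude(m0_nm):
    """Moment magnitude from scalar seismic moment (N m), IASPEI convention:
    Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# Composite moment inferred from GRACE for the 2004 Sumatra-Andaman event:
print(round(moment_magnitude(1.2e23), 2))  # -> 9.32
# Smallest event analyzed (2007 Bengkulu, M0 ~ 5.0e21 N m):
print(round(moment_magnitude(5.0e21), 2))  # -> 8.4
```

The roughly Mw 8.4 floor for Bengkulu illustrates why only the greatest earthquakes are detectable in monthly GRACE fields.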
Light Higgs production at the Compton Collider
International Nuclear Information System (INIS)
Jikia, G.; Soeldner-Rembold, S.
2000-01-01
We have studied the production of a light Higgs boson with a mass of 120 GeV in photon-photon collisions at a Compton collider. The event generator for the backgrounds to a Higgs signal due to bb̄ and cc̄ heavy quark pair production in polarized γγ collisions is based on a complete next-to-leading order (NLO) perturbative QCD calculation. For Jz = 0 the large double-logarithmic corrections up to four loops are also included. It is shown that the two-photon width of the Higgs boson can be measured with a high statistical accuracy of about 2% for an integrated γγ luminosity in the hard part of the spectrum of 40 fb⁻¹. As a result the total Higgs boson width can be calculated in a model independent way to an accuracy of about 14%
Virtual Compton Scattering At High Energy
Zhang, C
2000-01-01
In this dissertation we develop a theoretical framework in the context of perturbative Quantum Chromodynamics (pQCD) for studying non-forward scattering processes. In particular, we investigate a non-forward unequal mass virtual Compton scattering amplitude by performing the general operator product expansion (OPE) and the formal renormalization group (RG) analysis. We discuss the general tensorial decomposition of the amplitude to obtain the invariant amplitudes in the non-forward kinematic region. We study the OPE to identify the relevant operators and their reduced matrix elements, as well as the corresponding Wilson coefficients. We find that the OPE should now be done in double moments with new moment variables. There are in the expansion new sets of leading twist operators which have overall derivatives. They mix under renormalization in a well-defined way. We compute the evolution kernels from which the anomalous dimensions for these operators can be extracted. We also obtain explicitly the lowest ord...
BLAZAR FLARES FROM COMPTON DRAGGED SHELLS
Energy Technology Data Exchange (ETDEWEB)
Golan, Omri; Levinson, Amir, E-mail: Levinson@wise.tau.ac.il [School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978 (Israel)
2015-08-10
We compute the dynamics and emission of dissipative shells that are subject to a strong Compton drag, under simplifying assumptions about the dissipation mechanism. We show that under conditions prevailing in blazars, substantial deceleration is anticipated on sub-parsec and parsec scales in cases of rapid dissipation. Such episodes may be the origin of some of the flaring activity occasionally observed in gamma-ray blazars. The shape of the light curves thereby produced reflects the geometry of the emitting surface if the deceleration is very rapid, or the dynamics of the shell if the deceleration is delayed, or initially more gradual, owing, e.g., to continuous injection of energy and momentum.
Energy Technology Data Exchange (ETDEWEB)
Martini, Elaine
1995-12-31
The basic operation, design and construction of a plastic scintillator detector are described. In order to increase the sensitivity of this detector, two blocks of plastic scintillator were assembled to act as an anti-Compton system. The detectors were produced by polymerisation of styrene monomer with PPO (2,5-diphenyloxazole) and POPOP (1,4-bis(5-phenyl-2-oxazolyl)benzene) in proportions of 0.5 and 0.05, respectively. The transparency of the detector was evaluated by excitation with a ²⁴¹Am source located directly on the back surface of the plastic coupled to a photomultiplier. The light attenuation as a function of detector thickness x was fitted to a two-exponential function: relative pulse height = 0.519 e^(-0.0016x) + 0.481 e^(-0.02112x). Four radioactive sources, ²²Na, ⁵⁴Mn, ¹³⁷Cs and ¹³¹I, were used to evaluate the performance of the system. The Compton reduction factor, determined by the ratio of the energy peak values of suppressed and unsuppressed spectra, was 1.16. The Compton suppression factor, determined by the ratio of the net photopeak area to the area of an equal spectral width in the Compton continuum, was approximately 1.208 ± 0.109. The sensitivity of the system, defined as the least amount of radioactivity that can be quantified in the photopeak region, was 9.44 cps. The detector was first assembled to be applied in biological studies of whole body counter measurements of small animals. Using a phantom (small animal simulator) and a point ¹³⁷Cs source located in the central region of the well counter, the geometrical efficiency of the detector was about 5%. (author) 40 refs., 28 figs., 2 tabs.
ECOSP: an enhanced Compton spectrometer proposal for fresco inspection
Tartari, A; Lodi-Rizzini, E; Bonifazzi, C
2001-01-01
This paper deals with a Compton scattering technique for non-destructive testing in the inspection of fresco substrates. The aim of the Enhanced Compton Spectrometer proposal is to consider the synergic effects of both the physical design of the apparatus and the data processing management, in order to enhance the efficiency of photon collection and minimize the required counting statistics. To this end, collimators devised to enhance the collection of photons which undergo a single scattering were studied. The pixel imaging obtained by collecting the total Compton scattered photons from a hidden layer was then examined in terms of a multivariate Principal Components Analysis approach.
Ingram, WT
2012-01-01
Inverse limits provide a powerful tool for constructing complicated spaces from simple ones. They also turn the study of a dynamical system consisting of a space and a self-map into a study of a (likely more complicated) space and a self-homeomorphism. In four chapters, along with an appendix containing background material, the authors develop the theory of inverse limits. The book begins with an introduction through inverse limits on [0,1] before moving to a general treatment of the subject. Special topics in continuum theory complete the book. Although it is not a book on dynamics, the influen
Beam Diagnostics for Laser Undulator Based on Compton Backward Scattering
Kuroda, R
2005-01-01
A compact soft X-ray source is required in various research fields such as material and biological science. A laser undulator based on Compton backward scattering has been developed as a compact soft X-ray source for biological observation at Waseda University. It operates in the water window region (250 eV - 500 eV) using the interaction between a 1047 nm Nd:YLF laser (10 ps FWHM) and an about 5 MeV high quality electron beam (10 ps FWHM) generated from an rf gun system. The X-ray energy range in the water window contains the K-shell absorption edges of oxygen, carbon and nitrogen, the main constituents of living matter. Since the absorption coefficient of water is much smaller than that of protein in this range, dehydration of the specimens is not necessary. To generate the soft X-ray pulse stably, electron beam diagnostics have been developed, such as emittance measurement using a double-slit scan technique and bunch length measurement using a two-frequency analysis technique. In this confere...
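For head-on Compton backscattering with γ >> 1 and a low-energy laser photon, the on-axis scattered energy is approximately 4γ²E_laser. Plugging in the parameters above (assuming "5 MeV" is the total beam energy; if it is kinetic energy the result shifts upward somewhat) lands inside the water window:

```python
def backscatter_energy_ev(beam_energy_mev, laser_wavelength_nm):
    """Approximate on-axis Compton-backscattered photon energy, E_x ~ 4 gamma^2
    E_laser, valid in the Thomson limit where electron recoil is negligible.
    Assumes beam_energy_mev is the electron's total energy."""
    gamma = beam_energy_mev / 0.511             # Lorentz factor
    e_laser_ev = 1239.84 / laser_wavelength_nm  # photon energy from wavelength
    return 4.0 * gamma ** 2 * e_laser_ev

# ~5 MeV beam against a 1047 nm Nd:YLF laser:
print(round(backscatter_energy_ev(5.0, 1047.0)))  # ~450 eV, inside 250-500 eV
```

The quadratic dependence on γ is why a few MeV suffices for soft X-rays here, while the ICLS design in the dissertation abstract needs tens of MeV for hard X-rays.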
Advanced PET using both Compton and photoelectric events
International Nuclear Information System (INIS)
Yoon, Changyeon; Lee, Wonho
2012-01-01
This study presents image reconstruction and evaluation of advanced positron emission tomography (PET) using a simple backprojection and a maximum likelihood expectation maximization (MLEM) method. Advanced PET can use not only the photoelectric effect but also the Compton scattering effect for image reconstruction; hence, the detection efficiency should be inherently higher than that of conventional PET. By using a voxelized cadmium zinc telluride (CZT) detector, the detected position and deposited energy of the gamma ray were found precisely. With the position and energy information, the interaction sequence, which is one of the main factors to consider in the reconstruction of the source image, was identified correctly. The reconstruction algorithms were simple backprojection and MLEM methods, and three methods were used to evaluate the advanced PET compared with conventional PET, which uses the photoelectric effect only. The full widths at half maximum (FWHM) and the maximum counts of images reconstructed using simple backprojection were calculated for comparison. Using the MLEM method, the FWHM and the relative standard deviation of the counts in the range of half of the FWHM around the maximum pixel were calculated at each iteration to evaluate the modalities quantitatively. For a 3D source phantom, the simple backprojection and MLEM methods were applied to each modality, and the reconstructed images were compared with each other using the relative standard deviation for each component of the reconstructed image and by visual inspection.
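The MLEM iteration referenced above updates each image voxel multiplicatively so that forward-projected estimates match the measured counts. A minimal 1D sketch with a tiny invented system matrix (nothing like the voxelized-CZT geometry, which the abstract does not specify):

```python
import numpy as np

def mlem(system, counts, n_iter=200):
    """Maximum-likelihood expectation-maximization for emission tomography:
    lambda_j <- lambda_j / s_j * sum_i A_ij * y_i / (A lambda)_i,
    where s_j = sum_i A_ij is the per-voxel sensitivity."""
    sens = system.sum(axis=0)
    lam = np.ones(system.shape[1])  # flat, strictly positive initial image
    for _ in range(n_iter):
        proj = system @ lam                       # forward projection
        ratio = counts / np.maximum(proj, 1e-12)  # data / model, guarded
        lam *= (system.T @ ratio) / sens          # multiplicative update
    return lam

# Toy 3-voxel phantom seen by 4 detector pairs; noiseless data.
a_mat = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0],
                  [1.0, 1.0, 1.0]])
true_img = np.array([3.0, 0.5, 1.5])
y = a_mat @ true_img

print(np.round(mlem(a_mat, y), 2))  # converges toward [3. 0.5 1.5]
```

The multiplicative form preserves non-negativity automatically, one reason MLEM is preferred over unconstrained least squares in emission tomography.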
Chouet, Bernard A.; Dawson, Phillip B.; Arciniega-Ceballos, Alejandra
2005-01-01
The source mechanism of very long period (VLP) signals accompanying volcanic degassing bursts at Popocatépetl is analyzed in the 15–70 s band by minimizing the residual error between data and synthetics calculated for a point source embedded in a homogeneous medium. The waveforms of two eruptions (23 April and 23 May 2000) representative of mild Vulcanian activity are well reproduced by our inversion, which takes into account volcano topography. The source centroid is positioned 1500 m below the western perimeter of the summit crater, and the modeled source is composed of a shallow dipping crack (sill with easterly dip of 10°) intersecting a steeply dipping crack (northeast striking dike dipping 83° northwest), whose surface extension bisects the vent. Both cracks undergo a similar sequence of inflation, deflation, and reinflation, reflecting a cycle of pressurization, depressurization, and repressurization within a time interval of 3–5 min. The largest moment release occurs in the sill, showing a maximum volume change of 500–1000 m3, pressure drop of 3–5 MPa, and amplitude of recovered pressure equal to 1.2 times the amplitude of the pressure drop. In contrast, the maximum volume change in the dike is less (200–300 m3), with a corresponding pressure drop of 1–2 MPa and pressure recovery equal to the pressure drop. Accompanying these volumetric sources are single-force components with magnitudes of 108 N, consistent with melt advection in response to pressure transients. The source time histories of the volumetric components of the source indicate that significant mass movement starts within the sill and triggers a mass movement response in the dike within a few seconds. Such source behavior is consistent with the opening of a pathway for escape of pent-up gases from slow pressurization of the sill driven by magma crystallization. The opening of this pathway and associated rapid evacuation of volcanic gases induces the pressure drop. Pressure
Determination of Compton profiles in the metal-hydrogen systems VHx, VDx and PdHx
International Nuclear Information System (INIS)
Laesser, R.
1978-02-01
Compton profiles for polycrystalline PdH(0.72), VD(0.77), VH(0.71), Pd and V have been determined by Compton scattering of 159 keV photons from a ¹²³ᵐTe source. The difference in profiles before and after hydrogen loading is compared to different models for the electronic structure of the hydrides. It is shown that the Compton profile is a sensitive test of the accuracy of various model wave functions for the hydrides. In both palladium and vanadium hydride (deuteride) the anionic model (where it is assumed that the hydrogen forms a negative ion H⁻ in the hydride) does not describe the experimental results. The best agreement with the experimental data for PdH(0.72), VD(0.77) and VH(0.71) is obtained for a model which is based on band structure calculations for these hydrides and which takes into account that hydrogen-palladium or hydrogen-vanadium bonding states are created below the Fermi level by the introduction of hydrogen in the host lattice. (orig.)
Hall, G N; Izumi, N; Tommasini, R; Carpenter, A C; Palmer, N E; Zacharias, R; Felker, B; Holder, J P; Allen, F V; Bell, P M; Bradley, D; Montesanti, R; Landen, O L
2014-11-01
Compton radiography is an important diagnostic for Inertial Confinement Fusion (ICF), as it provides a means to measure the density and asymmetries of the DT fuel in an ICF capsule near the time of peak compression. The AXIS instrument (ARC (Advanced Radiography Capability) X-ray Imaging System) is a gated detector in development for the National Ignition Facility (NIF), and will initially be capable of recording two Compton radiographs during a single NIF shot. The principal reason for the development of AXIS is the requirement for significantly improved detection quantum efficiency (DQE) at high x-ray energies. AXIS will be the detector for Compton radiography driven by the ARC laser, which will be used to produce Bremsstrahlung X-ray backlighter sources over the range of 50 keV-200 keV for this purpose. It is expected that AXIS will be capable of recording these high-energy x-rays with a DQE several times greater than other X-ray cameras at NIF, as well as providing a much larger field of view of the imploded capsule. AXIS will therefore provide an image with larger signal-to-noise that will allow the density and distribution of the compressed DT fuel to be measured with significantly greater accuracy as ICF experiments are tuned for ignition.
Compton scattering: the investigation of electron momentum distributions
International Nuclear Information System (INIS)
Williams, B.
1977-01-01
A collection of review papers is presented on Compton scattering. The history, theory, experimentation, multiple scattering, atoms, solids, chemistry, electron scattering, positron annihilation, and the reconstruction of the three-dimensional distributions are the topics considered. 88 references
Deconvolution of shift-variant broadening for Compton scatter imaging
International Nuclear Information System (INIS)
Evans, Brian L.; Martin, Jeffrey B.; Roggemann, Michael C.
1999-01-01
A technique is presented for deconvolving shift-variant Doppler broadening of singly Compton scattered gamma rays from their recorded energy distribution. Doppler broadening is important in Compton scatter imaging techniques employing gamma rays with energies below roughly 100 keV. The deconvolution unfolds an approximation to the angular distribution of scattered photons from their recorded energy distribution in the presence of statistical noise and background counts. Two unfolding methods are presented, one based on a least-squares algorithm and one based on a maximum likelihood algorithm. Angular distributions unfolded from measurements made on small scattering targets show less evidence of Compton broadening. This deconvolution is shown to improve the quality of filtered backprojection images in multiplexed Compton scatter tomography. Improved sharpness and contrast are evident in the images constructed from unfolded signals
Gamma-ray burst observations with the Compton/Ulysses/Pioneer-Venus network
International Nuclear Information System (INIS)
Cline, T.L.; Hurley, K.C.; Sommer, M.; Boer, M.; Niel, M.; Fishman, G.J.; Kouveliotou, C.; Meegan, C.A.; Paciesas, W.S.; Wilson, R.B.; Fenimore, E.E.; Laros, J.G.; Klebesadel, R.W.
1993-01-01
The third and latest interplanetary network for the precise directional analysis of gamma ray bursts consists of the Burst and Transient Source Experiment in Compton Gamma Ray Observatory and instruments on Pioneer-Venus Orbiter and the deep-space mission Ulysses. The unsurpassed resolution of the BATSE instrument, the use of refined analysis techniques, and Ulysses' distance of up to 6 AU all contribute to a potential for greater precision than had been achieved with former networks. Also, the departure of Ulysses from the ecliptic plane in 1992 avoids any positional alignment of the three instruments that would lessen the source directional accuracy
Directory of Open Access Journals (Sweden)
O. Tichý
2016-11-01
Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
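The conventional regularized approach that this abstract contrasts with can be sketched as a ridge (Tikhonov) least-squares solve; the `lam` weight below is exactly the kind of hand-tuned parameter the proposed Bayesian method infers from the data instead. All names, dimensions, and values are illustrative, not taken from the paper.

```python
import numpy as np

# Source-term estimation as a regularized linear inverse problem:
# y ≈ M @ x, solved via min ||M x - y||^2 + lam ||x||^2.
rng = np.random.default_rng(1)
M = rng.random((60, 12))                 # SRS matrix (observations x release periods)
x_true = np.abs(rng.normal(size=12))     # unknown release rates (illustrative)
y = M @ x_true + 0.01 * rng.normal(size=60)

lam = 0.1                                # hand-tuned regularization weight
# Augmented-system formulation of ridge regression.
M_aug = np.vstack([M, np.sqrt(lam) * np.eye(12)])
y_aug = np.concatenate([y, np.zeros(12)])
x_hat, *_ = np.linalg.lstsq(M_aug, y_aug, rcond=None)
```

With well-conditioned data the choice of `lam` barely matters, but for the ill-posed SRS matrices of real transport models the solution can depend strongly on it, which motivates estimating it from the measurements.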
Deeply virtual Compton scattering off 4He
International Nuclear Information System (INIS)
Hattawy, M.
2015-01-01
The ⁴He nucleus is of particular interest for studying nuclear GPDs (Generalized Parton Distributions) as its partonic structure is described by only one chirally-even GPD. It is also a simple few-body system and has a high density that makes it the ideal target to investigate nuclear effects on partons. The experiment described in this thesis is JLab-E08-24, which was carried out in 2009 by the CLAS collaboration during the 'EG6' run. In this experiment, a 6 GeV longitudinally-polarized electron beam was scattered onto a 6 atm ⁴He gaseous target. During this experiment, in addition to the CLAS detector, a Radial Time Projection Chamber (RTPC), to detect low-energy nuclear recoils, and an Inner Calorimeter (IC), to improve the detection of photons at very forward angles, were used. We carried out a full analysis on our 6 GeV dataset, showing the feasibility of measuring exclusive nuclear Deeply Virtual Compton Scattering (DVCS) reactions. The analysis included the identification of the final-state particles, the DVCS event selection, and the π⁰ background subtraction. The beam-spin asymmetry was then extracted for both DVCS channels and compared to those of the free-proton DVCS reaction, and to theoretical predictions from two models. Finally, the real and the imaginary parts of the ⁴He CFF (Compton Form Factor) H_A have been extracted. Different levels of agreement were found between our measurements and the theoretical calculations. This thesis is organized as follows: In chapter 1, the available theoretical tools to study hadronic structure are presented, with an emphasis on the nuclear effects and GPDs. In chapter 2, the characteristics of the CLAS spectrometer are reviewed. In chapter 3, the working principle and the calibration aspects of the RTPC are discussed. In chapter 4, the identification of the final-state particles and the Monte-Carlo simulation are presented. In chapter 5, the selection of the DVCS events, the background subtraction, and
Compton-backscattered annihilation radiation from the Galactic Center region
Smith, D. M.; Lin, R. P.; Feffer, P.; Slassi, S.; Hurley, K.; Matteson, J.; Bowman, H. B.; Pelling, R. M.; Briggs, M.; Gruber, D.
1993-01-01
On 1989 May 22, the High Energy X-ray and Gamma-ray Observatory for Nuclear Emissions, a balloon-borne high-resolution germanium spectrometer with an 18° FOV, observed the Galactic Center (GC) from 25 to 2500 keV. The GC photon spectrum is obtained from the count spectrum by a model-independent method which accounts for the effects of passive material in the instrument and scattering in the atmosphere. Besides a positron annihilation line with a flux of (10.0 ± 2.4) × 10⁻⁴ photons cm⁻² s⁻¹ and a full width at half-maximum (FWHM) of (2.9 +1.0/−1.1) keV, the spectrum shows a peak centered at (163.7 ± 3.4) keV with a flux of (1.55 ± 0.47) × 10⁻³ photons cm⁻² s⁻¹ and a FWHM of (24.4 ± 9.2) keV. The energy range 450-507 keV shows no positronium continuum associated with the annihilation line, with a 2σ upper limit of 0.90 on the positronium fraction. The 164 keV feature is interpreted as Compton backscatter of broadened and redshifted annihilation radiation, possibly from the source 1E 1740.7-2942.
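The backscatter interpretation follows directly from the Compton formula: a 511 keV photon scattered through 180° emerges at about 170 keV, so a modest redshift of the source line accounts for the observed 164 keV peak. A quick check:

```python
import math

# Compton scattering formula E' = E / (1 + (E/m_e c^2)(1 - cos(theta))),
# evaluated for full backscatter of the 511 keV annihilation line.
MEC2 = 511.0  # electron rest energy, keV

def compton_scattered_kev(e_kev, theta_rad):
    return e_kev / (1.0 + (e_kev / MEC2) * (1.0 - math.cos(theta_rad)))

e_back = compton_scattered_kev(511.0, math.pi)
print(f"{e_back:.1f} keV")   # 170.3 keV (exactly E/3 for E = m_e c^2)
```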
Directory of Open Access Journals (Sweden)
H. Bovensmann
2011-09-01
MAMAP is an airborne passive remote sensing instrument designed to measure the dry columns of methane (CH₄) and carbon dioxide (CO₂). The MAMAP instrument comprises two optical grating spectrometers: the first observing in the short wave infrared band (SWIR) at 1590-1690 nm to measure CO₂ and CH₄ absorptions, and the second in the near infrared (NIR) at 757-768 nm to measure O₂ absorptions for reference/normalisation purposes. MAMAP can be operated in both nadir and zenith geometry during the flight. Mounted on an aeroplane, MAMAP surveys areas on regional to local scales with a ground pixel resolution of approximately 29 m × 33 m for a typical aircraft altitude of 1250 m and a velocity of 200 km h⁻¹. The retrieval precision of the measured column relative to background is typically ≲1% (1σ). MAMAP measurements are valuable to close the gap between satellite data, having global coverage but with a rather coarse resolution, on the one hand, and highly accurate in situ measurements with sparse coverage on the other hand. In July 2007, test flights were performed over two coal-fired power plants operated by Vattenfall Europe Generation AG: Jänschwalde (27.4 Mt CO₂ yr⁻¹) and Schwarze Pumpe (11.9 Mt CO₂ yr⁻¹), about 100 km southeast of Berlin, Germany. By using two different inversion approaches, one based on an optimal estimation scheme to fit Gaussian plume models from multiple sources to the data, and another using a simple Gaussian integral method, the emission rates can be determined and compared with emissions reported by Vattenfall Europe. An extensive error analysis for the retrieval's dry column results (XCO₂ and XCH₄) and for the two inversion methods has been performed. Both methods - the Gaussian plume model fit and the Gaussian integral method - are capable of deriving
DEFF Research Database (Denmark)
Balokovic´, M.; Comastri, A.; Harrison, F. A.
2014-01-01
We present X-ray spectral analyses for three Seyfert 2 active galactic nuclei (AGNs), NGC 424, NGC 1320, and IC 2560, observed by NuSTAR in the 3-79 keV band. The high quality hard X-ray spectra allow detailed modeling of the Compton reflection component for the first time in these sources. Using...... on Compton-thick material. Due to the very high obscuration, absorbed intrinsic continuum components are not formally required by the data in any of the sources. We constrain the intrinsic photon indices and the column density of the reflecting medium through the shape of the reflection spectra. Using...
Compton Composites Late in the Early Universe
Directory of Open Access Journals (Sweden)
Frederick Mayer
2014-07-01
Beginning roughly two hundred years after the big bang, a tresino phase transition generated Compton-scale composite particles and converted most of the ordinary plasma baryons into new forms of dark matter. Our model consists of ordinary electrons and protons that have been bound into mostly undetectable forms. This picture provides an explanation of the composition and history of ordinary-to-dark matter conversion starting with, and maintaining, a critical density Universe. The tresino phase transition started the conversion of ordinary matter plasma into tresino-proton pairs prior to the recombination era. We derive the appropriate Saha-Boltzmann equilibrium to determine the plasma composition throughout the phase transition and later. The baryon population is shown to be quickly modified from ordinary matter plasma prior to the transition to a small amount of ordinary matter and a much larger amount of dark matter after the transition. We describe the tresino phase transition and the origin, quantity and evolution of the dark matter as it takes place from late in the early Universe until the present.
The first demonstration of the concept of “narrow-FOV Si/CdTe semiconductor Compton camera”
Energy Technology Data Exchange (ETDEWEB)
Ichinohe, Yuto, E-mail: ichinohe@astro.isas.jaxa.jp [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo, Sagamihara, Kanagawa 252-5210 (Japan); University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033 (Japan); Uchida, Yuusuke; Watanabe, Shin [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo, Sagamihara, Kanagawa 252-5210 (Japan); University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033 (Japan); Edahiro, Ikumi [Hiroshima University, 1-3-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Hayashi, Katsuhiro [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo, Sagamihara, Kanagawa 252-5210 (Japan); Kawano, Takafumi; Ohno, Masanori [Hiroshima University, 1-3-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Ohta, Masayuki [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo, Sagamihara, Kanagawa 252-5210 (Japan); Takeda, Shin' ichiro [Okinawa Institute of Science and Technology Graduate University, 1919-1 Tancha, Onna-son, Okinawa 904-0495 (Japan); Fukazawa, Yasushi [Hiroshima University, 1-3-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Katsuragawa, Miho [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo, Sagamihara, Kanagawa 252-5210 (Japan); University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033 (Japan); Nakazawa, Kazuhiro [University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033 (Japan); Odaka, Hirokazu [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo, Sagamihara, Kanagawa 252-5210 (Japan); Tajima, Hiroyasu [Solar-Terrestrial Environment Laboratory, Nagoya University, Furo-cho, Chikusa, Nagoya, Aichi 464-8601 (Japan); Takahashi, Hiromitsu [Hiroshima University, 1-3-1 Kagamiyama, Higashi-Hiroshima, 
Hiroshima 739-8526 (Japan); and others
2016-01-11
The Soft Gamma-ray Detector (SGD), to be deployed on board the ASTRO-H satellite, has been developed to provide the highest-sensitivity observations of celestial sources in the energy band of 60-600 keV by employing a detector concept which uses a Compton camera whose field-of-view is restricted by a BGO shield to a few degrees (narrow-FOV Compton camera). In this concept, the background from outside the FOV can be heavily suppressed by constraining the incident direction of the gamma ray reconstructed by the Compton camera to be consistent with the narrow FOV. We, for the first time, demonstrate the validity of the concept using background data taken during the thermal vacuum test and the low-temperature environment test of the flight model of SGD on ground. We show that the measured background level is suppressed to less than 10% by combining the event rejection using the anti-coincidence trigger of the active BGO shield and by using Compton event reconstruction techniques. More than 75% of the signals from the field-of-view are retained against the background rejection, which clearly demonstrates the improvement of the signal-to-noise ratio. The estimated effective area of 22.8 cm² meets the mission requirement even though not all of the operational parameters of the instrument have been fully optimized yet.
Wei, Chun-Sheng; Zhao, Zi-Fu
2017-01-16
Since water is composed only of oxygen and hydrogen, δ¹⁸O and δ²H values are utilized to trace the origin of water(s) and quantify water-rock interactions. While the Triassic high pressure (HP) and ultrahigh pressure (UHP) metamorphic rocks across the Dabie-Sulu orogen in central-eastern China have been well documented, postcollisional magmatism-driven hydrothermal systems are little known. Here we show that two sources of externally derived water interactions were revealed by oxygen isotopes for the gneissic country rocks intruded by the early Cretaceous postcollisional granitoids. Inverse modelling indicates that the degree of disequilibrium (doD) of meteoric water interactions was more evident than that of the magmatic one (−65 ± 1° vs. −20 ± 2°); the partial reequilibration between quartz and alkali feldspar oxygen isotopes with magmatic water was achieved at 340 °C with a water/rock (W/R) ratio of about 1.2 for an open hydrothermal system; two-stage meteoric water interactions were unraveled with reequilibration temperatures less than 300 °C and W/R ratios around 0.4. The lifetime of the fossil magmatic hydrothermal system overprinted on the low zircon δ¹⁸O orthogneissic country rocks was estimated to be up to 50 thousand years (kyr) through oxygen exchange modelling. A four-stage isotopic evolution was proposed for the magmatic-water-interacted gneiss.
Fully 3D and accelerated shift-variant resolution recovery reconstruction for Compton camera
Energy Technology Data Exchange (ETDEWEB)
Kim, Soo Mee; Lee, Jae Sung; Lee, Dong Soo [Seoul National Univ. (Korea, Republic of). Dept. of Nuclear Medicine; Seo, Hee; Park, Jin Hyung; Kim, Chan Hyeong [Hanyang Univ., Seoul (Korea, Republic of). Dept. of Nuclear Engineering; Lee, Chun Sik [Chung-ang Univ., Seoul (Korea, Republic of). Dept. of Physics; Lee, Soo-Jin [Paichai Univ., Daejeon (Korea, Republic of). Dept. of Electronic Engineering
2011-07-01
Compared to SPECT and PET, a Compton camera based on electronic collimation has the advantages of easy mobility, close-up scanning, and simultaneous multi-tracer imaging for radiation therapy, molecular and nuclear medical applications. However, the spatial resolution of the Compton camera suffers from the measurement uncertainties of interaction positions and energies. Moreover, the degree of degradation of the spatial resolution is shift-variant over the field-of-view (FOV) due to the imaging principle based on conical surface integration. In this study, the shift-variant point spread function (SV-PSF) is estimated from the resolution measured using 35 point sources in the FOV and is incorporated into the system matrix of a fully three-dimensional and accelerated reconstruction, i.e. the list-mode OSEM (LMOSEM) algorithm, for resolution recovery. The measured resolutions of the 35 point sources were fitted to an exponential function of radial (r) and axial (d) distances, f(r,d) = A·exp(Br + Cd). The coefficients (A, B, C) of the fitted SV-PSF surface function were not identical between the x-axis (5.8, 0.0032, 0.019) and the yz-plane (6.1, 0.0022, 0.013). For 2 point sources, LMOSEM with the SV-PSF yielded better resolution over the FOV than LMOSEM without a PSF or with a shift-invariant PSF. With the spatial resolution improved by LMOSEM with the SV-PSF, the Compton camera can perform volumetric and multi-tracer imaging in molecular and nuclear medical applications. (orig.)
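The quoted surface fit can be reproduced by noting that ln f is linear in the unknowns, so ordinary least squares suffices. The sketch below recovers (A, B, C) from synthetic samples; the paper's actual fitting procedure and sampling grid are not specified, so the grid and units here are assumptions.

```python
import numpy as np

# Recovering the SV-PSF surface f(r, d) = A * exp(B*r + C*d) from sampled
# resolutions via a log-linear least-squares fit.
A_true, B_true, C_true = 5.8, 0.0032, 0.019   # x-axis coefficients from the abstract
r = np.tile(np.linspace(0.0, 100.0, 7), 5)    # radial distances (assumed grid, mm)
d = np.repeat(np.linspace(0.0, 50.0, 5), 7)   # axial distances (assumed grid, mm)
f = A_true * np.exp(B_true * r + C_true * d)  # synthetic "measured" resolutions

# ln f = ln A + B*r + C*d is linear in (ln A, B, C).
design = np.column_stack([np.ones_like(r), r, d])
coef, *_ = np.linalg.lstsq(design, np.log(f), rcond=None)
A_fit, B_fit, C_fit = np.exp(coef[0]), coef[1], coef[2]
print(A_fit, B_fit, C_fit)
```

On real data one would fit the noisy measured widths rather than exact samples, but the same log-linear structure applies.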
Development of a Compton suppression whole body counting for small animals
International Nuclear Information System (INIS)
Martini, Elaine
1995-01-01
The basic operation, design and construction of the plastic scintillator detector are described. In order to increase the sensitivity of this detector, two blocks of plastic scintillator have been assembled to act as an anti-Compton system. The detectors were produced by polymerisation of styrene monomer with PPO (2,5-diphenyloxazole) and POPOP (1,4-bis(5-phenyl-2-oxazolyl)benzene) in proportions of 0.5 and 0.05 respectively. The transparency of this detector was evaluated by excitation with a ²⁴¹Am source located directly on the back surface of the plastic coupled to a photomultiplier. The light attenuation as a function of detector thickness x was fitted to a two-exponential function: relative pulse height = 0.519·e^(−0.0016x) + 0.481·e^(−0.02112x). Four radioactive sources, ²²Na, ⁵⁴Mn, ¹³⁷Cs and ¹³¹I, were used to evaluate the performance of this system. The Compton reduction factor, determined by the ratio of the energy peak values of suppressed and unsuppressed spectra, was 1.16. The Compton suppression factor, determined by the ratio of the net photopeak area to the area of an equal spectral width in the Compton continuum, was approximately 1.208 ± 0.109. The sensitivity of the system, defined as the least amount of radioactivity that can be quantified in the photopeak region, was 9.44 cps. The detector was first assembled for application in biological studies with whole-body counter measurements of small animals. Using a phantom (small-animal simulator) and a point ¹³⁷Cs source located in the central region of the well counter, the geometrical detection efficiency was about 5%. (author)
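Cleaned up, the fitted attenuation curve reads f(x) = 0.519·e^(−0.0016x) + 0.481·e^(−0.02112x). A small sketch (the thickness units are those of the original measurement, which the abstract does not state):

```python
import math

# Two-exponential light-attenuation fit quoted in the abstract. At x = 0
# the two amplitudes sum to 1.0 by construction (0.519 + 0.481).
def relative_pulse_height(x):
    return 0.519 * math.exp(-0.0016 * x) + 0.481 * math.exp(-0.02112 * x)

print(relative_pulse_height(0.0))   # 1.0
```

The fast component (decay constant 0.02112) dominates the initial light loss; the slow component (0.0016) sets the attenuation at large thickness.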
Kusunose, Masaaki; Takahara, Fumio
1990-01-01
The present account of the effects of soft photons from external sources on two-temperature accretion disks in electron-positron pair equilibrium solves the energy-balance equation for a given radial distribution of the input rate of soft photons, taking into account their bremsstrahlung and Comptonization. Critical rate behavior is investigated as a function of the ratio of the energy flux of incident soft photons and the energy-generation rate. As in a previous study, the existence of a critical accretion rate is established.
Development of compton scatter X-ray technique for bone density measurement in vivo
International Nuclear Information System (INIS)
Kapoor, K.K.; Clarke, R.L.; Barton, R.D.
1980-01-01
A technique for in vivo bone density measurement, based on the fact that the cross-section for Compton scattering depends directly on the electron density of the scattering material, has been developed and described. The theory is explained. Electron density is converted to mass density using the weighted average of the ratio of atomic number to mass number of the material. The method uses a low-energy X-ray source and three scintillation detectors. It has the advantage of permitting in vivo measurement of bones of different sizes and shapes without recalibration and without any specific knowledge of the absorption or scattering properties of the surrounding tissue. (M.G.B.)
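The electron-to-mass density conversion described above is ρ = n_e / (N_A · ⟨Z/A⟩). A minimal sketch using water as a sanity check; the actual bone composition used in the paper is not given in the abstract:

```python
# Converting a Compton-measured electron density to mass density using
# the mean Z/A ratio. Water (Z/A = 0.5551) serves as a check; bone would
# use its own composition-weighted Z/A.
N_A = 6.02214076e23   # Avogadro constant, 1/mol

def mass_density(n_e_per_cm3, z_over_a):
    """Mass density in g/cm^3 from electron density and mean Z/A."""
    return n_e_per_cm3 / (N_A * z_over_a)

n_e_water = 3.343e23  # electrons per cm^3 in water
print(f"{mass_density(n_e_water, 0.5551):.3f} g/cm^3")   # ~1.000
```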
Compton scatter correction for planar scintigraphic imaging
Energy Technology Data Exchange (ETDEWEB)
Vaan Steelandt, E.; Dobbeleir, A.; Vanregemorter, J. [Algemeen Ziekenhuis Middelheim, Antwerp (Belgium). Dept. of Nuclear Medicine and Radiotherapy
1995-12-01
A major problem in nuclear medicine is the image degradation due to Compton scatter in the patient. Photons emitted by the radioactive tracer scatter in collisions with electrons of the surrounding tissue. Due to the resulting loss of energy and change in direction, the scattered photons induce an object-dependent background on the images. This results in a degradation of the contrast of warm and cold lesions. Although theoretically interesting, most of the techniques proposed in the literature, such as the use of symmetrical photopeaks, cannot be implemented on the commonly used gamma camera due to the energy/linearity/sensitivity corrections applied in the detector. A method for a single-energy isotope, based on existing methods with adjustments for daily practice and clinical situations, is proposed. It is assumed that the scatter image, recorded from photons collected within a scatter window adjacent to the photopeak, is a reasonably close approximation of the true scatter component of the image reconstructed from the photopeak window. A fraction `k` of the image using the scatter window is subtracted from the image recorded in the photopeak window to produce the compensated image. The crux of the method is the right value for the factor `k`, which is determined in a mathematical way and confirmed by experiments. To determine `k`, different kinds of scatter media are used and positioned in different ways in order to simulate a clinical situation. For a secondary energy window from 100 to 124 keV below a photopeak window from 126 to 154 keV, a value of 0.7 is found. This value has been verified using both an anthropomorphic thyroid phantom and the Rollo contrast phantom.
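The correction itself is a single weighted subtraction, corrected = photopeak − k·scatter. A toy sketch with the reported k = 0.7 (the image values are illustrative only):

```python
import numpy as np

# Dual-energy-window scatter correction: subtract a fraction k of the
# scatter-window image from the photopeak-window image.
K = 0.7   # value found for a 100-124 keV window under a 126-154 keV peak

def scatter_corrected(photopeak_img, scatter_img, k=K):
    # Clip at zero: count images cannot go negative after subtraction.
    return np.clip(photopeak_img - k * scatter_img, 0.0, None)

peak = np.array([[120.0, 80.0], [60.0, 40.0]])   # toy count images
scat = np.array([[40.0, 30.0], [20.0, 10.0]])
print(scatter_corrected(peak, scat))
```

The clip is a practical safeguard, not part of the published method as quoted; whether negative pixels are clipped or kept depends on the downstream processing.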
Polarization observables in Virtual Compton Scattering
Energy Technology Data Exchange (ETDEWEB)
Doria, Luca
2007-10-15
Virtual Compton Scattering (VCS) is an important reaction for understanding nucleon structure at low energies. By studying this process, the generalized polarizabilities of the nucleon can be measured. These observables are a generalization of the already known polarizabilities and will permit theoretical models to be challenged on a new level. More specifically, there exist six generalized polarizabilities, and in order to disentangle them all, a double polarization experiment must be performed. Within this work, the VCS reaction p(e,e′p)γ was measured at MAMI using the A1 Collaboration three-spectrometer setup at Q² = 0.33 (GeV/c)². Using the highly polarized MAMI beam and a recoil proton polarimeter, it was possible to measure both the VCS cross section and the double polarization observables. Already in 2000, the unpolarized VCS cross section was measured at MAMI. In this new experiment, we could confirm the old data, and furthermore the double polarization observables were measured for the first time. The data were taken in five periods between 2005 and 2006. In this work, the data were analyzed to extract the cross section and the proton polarization. For the analysis, a maximum likelihood algorithm was developed together with a full simulation of all the analysis steps. The experiment is limited by low statistics, due mainly to the focal plane proton polarimeter efficiency. To overcome this problem, a new determination and parameterization of the carbon analyzing power was performed. The main result of the experiment is the extraction of a new combination of the generalized polarizabilities using the double polarization observables. (orig.)
Neutron diagnostics using compton suppression gamma-ray spectrometer
Energy Technology Data Exchange (ETDEWEB)
Hong, S. P.; Kang, B. S. [Lab. Of Radiation Convergence Science, Dept. of Radiological Science, College of Medical Science, Konyang University, Daejeon (Korea, Republic of); Kim, C. S.; Cheon, M. S.; Cho, S. [National Fusion Research Institute, Daejeon (Korea, Republic of)
2016-12-15
Various neutron diagnostic systems, such as a fission chamber, stilbene spectrometers, and a neutron activation system (NAS), have been installed at KSTAR for more accurate detection of neutron flux. Among these systems, the NAS is the most reliable and robust tool, and its measurement data are generally used for calibration of the other systems. A Compton suppression gamma-ray spectrometer, which can suppress the expected background, noise signal, and Compton scattering, was used to measure the gamma rays of neutron-activated samples. In this study, encapsulated indium samples, installed in and irradiated by the neutrons released from the nuclear fusion reactions in the Korea Superconducting Tokamak Advanced Research (KSTAR) device, were measured using the Compton-suppressed gamma-ray spectrometer to minimize the measurement error. The experimental results show that the statistical error was decreased by the Compton suppression system: the statistical error of the measured sample activity was estimated to be about 2.3% with the Compton-suppressed system, compared with about 4.9% for the non-suppressed system. It was found that the system can reduce the measurement error effectively. It is confirmed that this system can be applied to the ITER TBM and future nuclear fusion devices.
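The quoted error reduction is plain counting statistics: for a net peak area S sitting on a background B (with B estimated from an equal-width region, an assumption on my part; the abstract does not state the estimation scheme), σ_net = √(S + 2B), so suppressing the Compton continuum directly shrinks the relative error. A sketch with illustrative counts, not the KSTAR data:

```python
import math

# Relative statistical error of a net peak area S on Poisson background B,
# when B is estimated from an equal-width off-peak region: var = S + 2B.
def net_relative_error(signal, background):
    return math.sqrt(signal + 2.0 * background) / signal

S = 10_000.0
print(f"unsuppressed: {net_relative_error(S, 10_000.0):.3%}")
print(f"suppressed:   {net_relative_error(S,  1_000.0):.3%}")
```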
Energy Technology Data Exchange (ETDEWEB)
Taya, T., E-mail: taka48138@ruri.waseda.jp [Research Institute for Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku, Tokyo 169-8555 (Japan); Kataoka, J.; Kishimoto, A.; Iwamoto, Y.; Koide, A. [Research Institute for Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku, Tokyo 169-8555 (Japan); Nishio, T. [Graduate School of Biomedical and Health Science, Hiroshima University, 1-2-3, Kasumi, Minami-ku, Hiroshima-shi, Hiroshima (Japan); Kabuki, S. [School of Medicine, Tokai University, 143 Shimokasuya, Isehara-shi, Kanagawa (Japan); Inaniwa, T. [National Institute of Radiological Sciences, 4-9-1 Anagawa, Inage-ku, Chiba-shi, Chiba (Japan)
2016-09-21
The use of real-time gamma imaging for cancer treatment in particle therapy is expected to improve the accuracy of treatment beam delivery. In this study, we demonstrated the imaging of gamma rays generated by nuclear interactions during proton irradiation, using a handheld Compton camera (14 cm×15 cm×16 cm, 2.5 kg) based on scintillation detectors. The angular resolution of this Compton camera is ∼8° at full width at half maximum (FWHM) for a {sup 137}Cs source. We measured the energy spectra of the gamma rays using a LaBr{sub 3}(Ce) scintillator and photomultiplier tube and, using the handheld Compton camera, performed image reconstruction while irradiating water, Ca(OH){sub 2}, and polymethyl methacrylate (PMMA) phantoms with a 70 MeV proton beam. In the energy spectra of all three phantoms, we found an obvious peak at 511 keV, derived from annihilation gamma rays, and in the energy spectrum of the PMMA phantom we found another peak at 718 keV, which contains some of the prompt gamma rays produced from {sup 10}B. We therefore evaluated the peak positions of the projections from the reconstructed images of the PMMA phantom. The differences between these peak positions and the Bragg peak position calculated by simulation are 7 mm±2 mm and 3 mm±8 mm, respectively. Although we could quickly acquire online gamma imaging in both energy ranges during proton irradiation, we cannot conclude definitively that prompt gamma rays sufficiently trace the Bragg peak, because of the uncertainty introduced by the spatial resolution of the Compton camera. We will develop a high-resolution Compton camera in the near future for further study. - Highlights: • Gamma imaging during proton irradiation by a handheld Compton camera is demonstrated. • We were able to acquire the online gamma-ray images quickly. • We are developing a high-resolution Compton camera for range verification.
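Event reconstruction in a two-stage Compton camera of this kind rests on the Compton kinematics relating the energies deposited in the scatterer and absorber to the scattering angle. As a minimal sketch (not the authors' code; function names are illustrative), the opening angle of the event cone follows from cos θ = 1 − m_e c² (1/E′ − 1/E):

```python
import math

M_E_C2 = 511.0  # electron rest energy in keV


def compton_cone_angle(e_deposit_kev, e_absorb_kev):
    """Opening angle (radians) of the Compton cone for a photon that
    deposits e_deposit_kev in the scatterer and e_absorb_kev in the
    absorber.  Uses cos(theta) = 1 - m_e c^2 * (1/E' - 1/E), where the
    total incident energy E is the sum of both deposits."""
    e_total = e_deposit_kev + e_absorb_kev
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_absorb_kev - 1.0 / e_total)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energies are kinematically inconsistent")
    return math.acos(cos_theta)


# Example: a 662 keV (137Cs) photon depositing 200 keV in the scatterer
angle_deg = math.degrees(compton_cone_angle(200.0, 462.0))
```

Back-projecting one such cone per event and overlapping many cones yields the source image.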
Positioning of steel rod inclusions in reinforced concrete simulant by Compton backscattering
Energy Technology Data Exchange (ETDEWEB)
Boldo, Emerson M.; Prestes, Ana A.P.; Appoloni, Carlos R. [Universidade Estadual de Londrina (UEL), PR (Brazil). Dept. de Fisica. Lab. de Fisica Nuclear Aplicada
2011-07-01
Reinforced concrete is susceptible to a range of environmental degradation factors that can limit its service life. There has always been a need for test methods to measure, in situ, the properties of concrete for quality assurance and to evaluate the condition of existing structures. Compton scattering of gamma radiation is a nondestructive technique used for the detection of defects and inclusions in materials and it can be employed on reinforced concrete. The methodology allows for one-side inspection of large structures and can be implemented with a relatively inexpensive, portable apparatus. In this work, we used the Compton backscattering technique to measure both the size and depth of steel rod inclusions in plaster block samples. The samples were irradiated with gamma rays from a {phi}2 mm collimated {sup 241}Am (100 mCi) source, and the inelastically scattered photons were collected at an angle of 135 deg by a high-resolution CdTe semiconductor detector. Scanning was achieved by lateral movement of the sample blocks across the field of view of the source and detector in steps of 1 mm. The tests on plaster blocks with steel rod inclusions suggest that, for a low-energy and low-activity gamma source, beam attenuation has greater effects on the scattered intensity than does increased material density. Density contrast analysis allows determination of the size and depth of steel rods. Furthermore, the experimental results agree with theoretical data obtained through Monte Carlo simulation. (author)
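The energy of the inelastically scattered photons collected at 135° is fixed by the Compton formula; a small sketch (illustrative, not part of the paper's analysis) gives the expected line position for the 59.54 keV {sup 241}Am photons in this geometry:

```python
import math


def scattered_energy_kev(e_in_kev, angle_deg, m_e_c2=511.0):
    """Compton formula: energy of a photon of energy e_in_kev after
    scattering through angle_deg, E' = E / (1 + (E/m_e c^2)(1 - cos theta))."""
    theta = math.radians(angle_deg)
    return e_in_kev / (1.0 + (e_in_kev / m_e_c2) * (1.0 - math.cos(theta)))


# 59.54 keV 241Am photons scattered at 135 deg, as in the set-up above
e_scattered = scattered_energy_kev(59.54, 135.0)  # ~49.7 keV
```

The detector window can then be centred on this energy to count backscattered photons while rejecting the elastic line.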
Some Issues in Deeply-Virtual Compton Scattering
Directory of Open Access Journals (Sweden)
Ji C.-R.
2010-04-01
Full Text Available Compton scattering provides a unique tool for studying hadron structure. In contrast to elastic electron scattering, which provides information about the hadron's structure in terms of form factors, Compton scattering is more versatile, as the basic process is the coupling of two electromagnetic currents. Therefore, the hadronic structure can be described at high momentum transfer in the language of generalized parton distributions (GPDs), which encode information about the light-front wave functions of the probed hadrons. In this paper we discuss some issues involved in the application of the GPD idea, in particular the effectiveness of Compton scattering as a filter of the hadron structure.
Electronic structure of lanthanum sesquioxide: A Compton scattering study
Sharma, Sonu; Sahariya, Jagrati; Arora, Gunjan; Ahuja, B. L.
2014-10-01
We present the first-ever experimental and theoretical momentum densities of La2O3. The Compton line shape is measured using a 20 Ci 137Cs Compton spectrometer at an intermediate resolution with full width at half maximum of 0.34 a.u. The experimental Compton profile is compared with the theoretical electron momentum densities computed using linear combination of atomic orbitals (LCAO) method with density functional theory (DFT). It is seen that the generalized gradient approximation (GGA) within DFT reconciles better with the experiment than other DFT based approximations, validating the GGA approximation for rare-earth sesquioxides. The energy bands and density of states computed using LCAO calculations show its wide band gap nature which is in tune with the available reflectivity and photo-absorption data. In addition, Mulliken's population and charge density are also computed and discussed.
Inclusive and Exclusive Compton Processes in Quantum Chromodynamics
Energy Technology Data Exchange (ETDEWEB)
Psaker, Ales [Old Dominion Univ., Norfolk, VA (United States)
2005-12-01
In our work, we describe two types of Compton processes. As an example of an inclusive process, we consider the high-energy photoproduction of massive muon pairs off the nucleon. We analyze the process in the framework of the QCD parton model, in which the usual parton distributions emerge as a tool to describe the nucleon in terms of quark and gluonic degrees of freedom. To study its exclusive version, a new class of phenomenological functions is required, namely, generalized parton distributions. They can be considered as a generalization of the usual parton distributions measured in deeply inelastic lepton-nucleon scattering. Generalized parton distributions (GPDs) may be observed in hard exclusive reactions such as deeply virtual Compton scattering. We develop an extension of this particular process into the weak interaction sector. We also investigate a possible application of the GPD formalism to wide-angle real Compton scattering.
International Nuclear Information System (INIS)
Rax, J.M.
1992-04-01
The dynamics of electrons in two-dimensional, linearly or circularly polarized, ultra-high-intensity (above 10^18 W/cm^2) laser waves is investigated. The Compton harmonic resonances are identified as the source of various stochastic instabilities. Both Arnold diffusion and resonance overlap are considered. The quasilinear kinetic equation describing the evolution of the electron distribution function is derived, and the associated collisionless damping coefficient is calculated. The implications of these new processes are considered and discussed.
Bloser, P. F.; Legere, J. S.; Bancroft, C. M.; Ryan, J. M.; McConnell, M. L.
2016-03-01
We present the results of the first high-altitude balloon flight test of a concept for an advanced Compton telescope making use of modern scintillator materials with silicon photomultiplier (SiPM) readouts. There is a need in the fields of high-energy astronomy and solar physics for new medium-energy gamma-ray (~0.4-10 MeV) detectors capable of making sensitive observations of both line and continuum sources over a wide dynamic range. A fast scintillator-based Compton telescope with SiPM readouts is a promising solution to this instrumentation challenge, since the fast response of the scintillators permits both the rejection of background via time-of-flight (ToF) discrimination and the ability to operate at high count rates. The Solar Compton Telescope (SolCompT) prototype presented here was designed to demonstrate stable performance of this technology under balloon-flight conditions. The SolCompT instrument was a simple two-element Compton telescope, consisting of an approximately one-inch cylindrical stilbene crystal for a scattering detector and a one-inch cubic LaBr3:Ce crystal for a calorimeter detector. Both scintillator detectors were read out by 2×2 arrays of Hamamatsu S11828-3344 MPPC devices. Custom front-end electronics provided optimum signal rise time and linearity, and custom power supplies automatically adjusted the SiPM bias voltage to compensate for temperature-induced gain variations. A tagged calibration source, consisting of ~240 nCi of 60Co embedded in plastic scintillator, was placed in the field of view and provided a known source of gamma rays to measure in flight. The SolCompT balloon payload was launched on 24 August 2014 from Fort Sumner, NM, and spent ~3.75 h at a float altitude of ~123,000 ft. The instrument performed well throughout the flight. After correcting for small (~10%) residual gain variations, we measured an in-flight ToF resolution of ~760 ps (FWHM). Advanced scintillators with SiPM readouts continue to show great promise.
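The automatic bias adjustment mentioned above typically works by tracking the roughly linear temperature dependence of the SiPM breakdown voltage so that the overvoltage, and hence the gain, stays constant. The sketch below uses illustrative coefficients, not the actual SolCompT supply parameters:

```python
def compensated_bias(temp_c, v_bd_25c=53.0, dvbd_dt=0.054, v_over=3.0):
    """Bias voltage that holds the SiPM overvoltage (and hence gain)
    constant as temperature changes.  The breakdown voltage rises
    approximately linearly with temperature; v_bd_25c (V), dvbd_dt (V/K)
    and v_over (V) here are illustrative values for a generic device."""
    v_bd = v_bd_25c + dvbd_dt * (temp_c - 25.0)
    return v_bd + v_over


# Going from 25 C to -20 C, the supply lowers the bias by ~2.4 V
delta_v = compensated_bias(25.0) - compensated_bias(-20.0)
```

In flight, the supply would sample the detector temperature periodically and re-apply this correction in a closed loop.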
γ-ray Compton profiles of liquid and crystalline aluminum
International Nuclear Information System (INIS)
Honda, Toshihisa; Itoh, Fumitake; Suzuki, Kenji
1979-01-01
Compton profiles of liquid and crystalline aluminum metal have been measured using 59.54 keV γ-rays emitted from 241Am and Ge(Li) solid-state detectors. A difference in the Compton profile between liquid and crystalline aluminum has been found, and it can be qualitatively understood in terms of the change in the electronic density upon melting. The experimental profile of crystalline aluminum is in good agreement with the result of the LCAO band calculation, while the profile of liquid aluminum is in poor agreement with the free-electron model and the Green function method. (author)
Geometrical effects determinant of the Compton profile shape
International Nuclear Information System (INIS)
Sartori, Renzo; Mainardi, R.T.
1987-01-01
The main purpose of this work is to evaluate the influence of the experimental set-up on the shape of the Compton line. In any scattering experiment the scattering angle is not well defined, owing to the collimators' aperture, so each set-up yields a distribution of scattering angles. This, in turn, produces a distribution of scattered-photon energies around a mean value. This contribution has been evaluated and found to be significant in several cases. To carry out the evaluation, a response function was defined, generated numerically for each experimental set-up and convoluted with the Compton profile. (Author) [es
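The convolution step described above can be sketched as follows; here a simple Gaussian stands in for the numerically generated response function (the actual response of a given collimator geometry would replace it):

```python
import numpy as np


def gaussian_response(e_grid, sigma):
    """Stand-in instrumental/geometric response: a normalized Gaussian
    kernel sampled on a symmetric energy grid centred on zero."""
    kernel = np.exp(-0.5 * (e_grid / sigma) ** 2)
    return kernel / kernel.sum()


def broadened_profile(profile, e_grid, sigma):
    """Convolve an ideal Compton profile with the set-up response,
    giving the line shape actually seen by the spectrometer."""
    return np.convolve(profile, gaussian_response(e_grid, sigma), mode="same")


# A delta-like ideal profile is smeared into the response shape
e = np.linspace(-5.0, 5.0, 101)
ideal = np.zeros(101)
ideal[50] = 1.0
observed = broadened_profile(ideal, e, 1.0)
```

Because the kernel is normalized, the integrated intensity of the profile is preserved; only its shape broadens.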
A counting silicon microstrip detector for precision compton polarimetry
Doll, D W; Hillert, W; Krüger, H; Stammschroer, K; Wermes, N
2002-01-01
A detector for laser photons backscattered off a high-energy electron beam, intended for precision Compton polarimetry at the 3.5 GeV electron stretcher ring ELSA at Bonn University, has been developed using individual photon counting. The photon-counting detector is based on a silicon microstrip detector system using dedicated ASIC chips. The hits produced by the pair-converted Compton photons are accumulated rather than individually read out. A transverse profile displacement can be measured with μm accuracy, rendering a polarization measurement of the order of 1% possible on a time scale of 10-15 min.
Observations of GRB 990123 by the Compton Gamma Ray Observatory
Briggs, M. S.; Band, D. L.; Kippen, R. M.; Preece, R. D.; Kouveliotou, C.; van Paradijs, J.; Share, G. H.; Murphy, R. J.; Matz, S. M.; Connors, A.
1999-01-01
GRB 990123 was the first burst from which simultaneous optical, X-ray, and gamma-ray emission was detected; its afterglow has been followed by an extensive set of radio, optical, and X-ray observations. We have studied the gamma-ray burst itself as observed by the Compton Gamma Ray Observatory detectors. We find that gamma-ray fluxes are not correlated with the simultaneous optical observations and that the gamma-ray spectra cannot be extrapolated simply to the optical fluxes. The burst is well fitted by the standard four-parameter GRB function, with the exception that excess emission compared with this function is observed below approx. 15 keV during some time intervals. The burst is characterized by the typical hard-to-soft and hardness-intensity correlation spectral evolution patterns. The energy of the peak of the νF_ν spectrum, E_p, reaches an unusually high value during the first intensity spike, 1470 ± 110 keV, and then falls to approx. 300 keV during the tail of the burst. The high-energy spectrum above approx. 1 MeV is consistent with a power law with a photon index of about -3. By fluence, GRB 990123 is brighter than all but 0.4% of the GRBs observed with BATSE (Burst and Transient Source Experiment), clearly placing it on the -3/2 power-law portion of the intensity distribution. However, the redshift measured for the afterglow is inconsistent with the Euclidean interpretation of the -3/2 power law. Using the redshift value of greater than or equal to 1.61 and assuming isotropic emission, the gamma-ray energy exceeds 10^54 ergs.
The Compton hump and variable blue wing in the extreme low-flux NuSTAR observations of 1H0707-495
DEFF Research Database (Denmark)
Kara, E.; Fabian, A. C.; Lohfink, A. M.
2015-01-01
...of a deep 250-ks NuSTAR (Nuclear Spectroscopic Telescope Array) observation of 1H0707-495, which includes the first sensitive observations above 10 keV. Even though the NuSTAR observations caught the source in an extreme low-flux state, the Compton hump is still significantly detected. NuSTAR, with its high...
Design of a Compton camera for 3D prompt-{gamma} imaging during ion beam therapy
Energy Technology Data Exchange (ETDEWEB)
Roellinghoff, F., E-mail: roelling@ipnl.in2p3.fr [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); INSA-Lyon Laboratory of Nondestructive Testing using Ionizing Radiation (CNDRI), F-69621 Villeurbanne Cedex (France); Richard, M.-H., E-mail: mrichard@ipnl.in2p3.fr [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); INSA-Lyon Laboratory of Nondestructive Testing using Ionizing Radiation (CNDRI), F-69621 Villeurbanne Cedex (France); Chevallier, M.; Constanzo, J.; Dauvergne, D. [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); Freud, N. [INSA-Lyon Laboratory of Nondestructive Testing using Ionizing Radiation (CNDRI), F-69621 Villeurbanne Cedex (France); Henriquet, P.; Le Foulher, F. [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); Letang, J.M. [INSA-Lyon Laboratory of Nondestructive Testing using Ionizing Radiation (CNDRI), F-69621 Villeurbanne Cedex (France); Montarou, G. [LPC, CNRS/IN2P3, Clermont-F. University (France); Ray, C.; Testa, E.; Testa, M. [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); Walenta, A.H. [Uni-Siegen, FB Physik, Emmy-Noether Campus, D-57068 Siegen (Germany)
2011-08-21
We investigate, by means of Geant4 simulations, a real-time method to control the position of the Bragg peak during ion therapy, based on a Compton camera in combination with a beam tagging device (hodoscope) in order to detect the prompt gamma emitted during nuclear fragmentation. The proposed set-up consists of a stack of 2 mm thick silicon strip detectors and a LYSO absorber detector. The {gamma} emission points are reconstructed analytically by intersecting the ion trajectories given by the beam hodoscope and the Compton cones given by the camera. The camera response to a polychromatic point source in air is analyzed with regard to both spatial resolution and detection efficiency. Various geometrical configurations of the camera have been tested. In the proposed configuration, for a typical polychromatic photon point source, the spatial resolution of the camera is about 8.3 mm FWHM and the detection efficiency 2.5x10{sup -4} (reconstructable photons/emitted photons in 4{pi}). Finally, the clinical applicability of our system is considered and possible starting points for further developments of a prototype are discussed.
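The analytic reconstruction described above amounts to intersecting a line (the ion trajectory from the hodoscope) with a cone (the Compton cone from the camera), which reduces to a quadratic in the line parameter. A geometric sketch, independent of the Geant4 set-up in the paper, is:

```python
import numpy as np


def cone_line_intersections(apex, axis, cos_theta, p0, d):
    """Points where the line p0 + t*d crosses a cone with the given apex,
    unit axis and half-angle theta.  Solves the quadratic obtained from
    ((p - apex) . axis)^2 = cos^2(theta) * |p - apex|^2, then keeps only
    the nappe whose sign matches cos_theta."""
    axis = axis / np.linalg.norm(axis)
    d = d / np.linalg.norm(d)
    w = p0 - apex
    c2 = cos_theta ** 2
    a = np.dot(d, axis) ** 2 - c2
    b = 2.0 * (np.dot(d, axis) * np.dot(w, axis) - c2 * np.dot(w, d))
    c = np.dot(w, axis) ** 2 - c2 * np.dot(w, w)
    disc = b * b - 4.0 * a * c
    if disc < 0.0 or abs(a) < 1e-12:
        return []  # no (or degenerate) intersection
    points = []
    for t in ((-b - disc ** 0.5) / (2.0 * a), (-b + disc ** 0.5) / (2.0 * a)):
        p = p0 + t * d
        if np.dot(p - apex, axis) * cos_theta >= 0.0:
            points.append(p)
    return points


# 45-degree cone along z, beam line parallel to z through (1, 0, 0)
hits = cone_line_intersections(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                               np.cos(np.radians(45.0)),
                               np.array([1.0, 0.0, 0.0]),
                               np.array([0.0, 0.0, 1.0]))
```

Accumulating such intersection points over many events, weighted by the detector response, builds up the prompt-gamma emission profile along the beam.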
Characterization of Compton camera LaBr{sub 3} absorber detector
Energy Technology Data Exchange (ETDEWEB)
Marinsek, T.; Liprandi, S.; Bortfeldt, J.; Lang, C.; Lutter, R.; Dedes, G.; Parodi, K.; Thirolf, P.G. [LMU Munich, Garching (Germany); Aldawood, S. [LMU Munich, Garching (Germany); King Saud University, Riyadh (Saudi Arabia); Maier, L.; Gernhaeuser, R. [TU Munich, Garching (Germany); Kolff, H. van der [LMU Munich, Garching (Germany); TU Delft (Netherlands); Castelhano, I. [LMU Munich, Garching (Germany); University of Lisbon, Lisbon (Portugal); Schaart, D.R. [TU Delft (Netherlands)
2015-07-01
Detection of prompt γ rays from nuclear interactions between a particle beam and organic tissue, using a Compton camera to determine the Bragg peak position, is a promising approach to ion-beam range verification in hadron therapy. The Compton camera consists of a stack of six double-sided Si-strip detectors acting as scatterers, while the other essential part - the absorber - is made of a monolithic LaBr{sub 3} scintillator crystal (50 x 50 x 30 mm{sup 3}) with reflective side-surface wrapping, offering excellent time and energy resolution. Scintillation light induced in the crystal is detected by a 256-fold segmented multi-anode PMT. A prerequisite for reconstructing the γ source position is determining the photon interaction position in the crystal, done by applying the ''k-nearest neighbors'' algorithm (van Dam et al., Nuclear Science (2011)) to a reference library of light distributions obtained from a 2D scan of the detector with a strongly collimated {sup 137}Cs source. The status of the spatial resolution characterization is presented.
Determining the covering factor of compton-thick active galactic nuclei with NuSTAR
DEFF Research Database (Denmark)
Brightman, M.; Balokovic, M.; Stern, D.
2015-01-01
The covering factor of Compton-thick (CT) obscuring material associated with the torus in active galactic nuclei (AGNs) is at present best understood through the fraction of sources exhibiting CT absorption along the line of sight (NH > 1.5 × 10^24 cm^-2) in the X-ray band, which reveals the average covering factor. Determining this CT fraction is difficult, however, due to the extreme obscuration. With its spectral coverage at hard X-rays (>10 keV), the Nuclear Spectroscopic Telescope Array (NuSTAR) is sensitive to the AGN covering factor, since Compton scattering of X-rays off optically thick material ... opening angle as a free parameter, and aim to determine the covering factor of the CT gas in these sources individually. Across the sample we find mild to heavy CT columns, with NH measured from 10^24 to 10^26 cm^-2, and a wide range of covering factors, where individual measurements range from 0.2 to 0.9.
The use of Compton scattering in detecting anomaly in soil - possible use in pyromaterial detection
Abedin, Ahmad Firdaus Zainal; Ibrahim, Noorddin; Zabidi, Noriza Ahmad; Demon, Siti Zulaikha Ngah
2016-01-01
Compton scattering can reveal the signature of a buried land mine through the dependence of the scattered-photon intensity and energy change on density anomalies in the soil. In this study, 4.43 MeV gamma rays from an Am-Be source were used to perform Compton scattering. Two thallium-doped sodium iodide NaI(Tl) detectors of radius 1.9 cm were placed at a distance of 8 cm from the source to detect the scattered gamma rays. Nine anomalies were used in this simulation, each a cylinder of 10 cm radius and 8.9 cm height, buried 5 cm deep in a soil bed measuring 80 cm in radius and 53.5 cm in height. Monte Carlo calculations indicated that the photon scattering is directly proportional to the density of the anomalies. The difference between the detector response with and without an anomaly, namely the contrast ratio, is linearly related to the anomaly density. Anomalies of air, wood and water give positive contrast-ratio values, whereas explosive, sand, concrete, graphite, limestone and polyethylene give negative values. Overall, the contrast-ratio magnitudes are greater than 2 % for all anomalies. The strong contrast ratios result in a good detection capability and distinction between anomalies.
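The contrast ratio used in the study is simply the relative change in the backscattered count rate caused by the anomaly; a sketch with hypothetical counts (not the paper's data) is:

```python
def contrast_ratio_percent(counts_with_anomaly, counts_soil_only):
    """Contrast ratio in per cent: the relative difference between the
    detector response with a buried anomaly present and the response
    from undisturbed soil.  Positive for low-density inclusions, negative
    for inclusions denser than the soil, following the study's convention."""
    return 100.0 * (counts_with_anomaly - counts_soil_only) / counts_soil_only


# Hypothetical illustration: 10 400 counts with anomaly vs 10 000 without
cr = contrast_ratio_percent(10400, 10000)  # 4.0 %
```

A detection threshold can then be set on |contrast ratio| (the paper observes magnitudes above 2 % for all nine anomalies).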
Hodoroaba, Vasile-Dan; Rackwitz, Vanessa
2014-07-15
The high specificity of coherent (Rayleigh) and incoherent (Compton) X-ray scattering to the mean atomic number of a specimen analyzed by X-ray fluorescence (XRF) is exploited to gain more information on the chemical composition. Concretely, the evaluation of the Compton-to-Rayleigh intensity ratio from XRF spectra, and its relation to the average atomic number of reference materials via a calibration curve, can reveal valuable information on the elemental composition complementary to that obtained from the reference-free XRF analysis. Particularly for matrices of lower mean atomic number, the sensitivity of the approach is so high that specimens whose mean atomic numbers differ by only 0.1 can easily be distinguished. Hence, the content of light elements that are "invisible" to XRF, particularly hydrogen, or of heavier impurities/additives in light materials, can be calculated "by difference" from the scattering calibration curve. The excellent agreement between such an experimental, empirical calibration curve and a synthetically generated one, based on a reliable physical model for the X-ray scattering, is also demonstrated. Thus, the feasibility of the approach for given experimental conditions and particular analytical questions can be tested prior to experiments with reference materials. For the present work, a microfocus X-ray source attached to an SEM/EDX (scanning electron microscopy/energy-dispersive X-ray spectroscopy) system was used, so that the Compton-to-Rayleigh intensity ratio could be acquired together with EDX spectral data for improved analysis of the elemental composition.
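In practice the measured Compton-to-Rayleigh ratio is mapped to a mean atomic number through a calibration curve fitted to reference materials. A sketch with invented calibration points (not the paper's data; the curve form is an assumption) might look like:

```python
import numpy as np

# Illustrative (ratio, mean-Z) calibration points -- NOT measured values.
cal_ratio = np.array([12.0, 8.5, 6.0, 4.2, 3.0])
cal_mean_z = np.array([6.0, 7.5, 9.0, 11.0, 13.0])

# A low-order polynomial in log(ratio) serves as a smooth calibration curve.
coeffs = np.polyfit(np.log(cal_ratio), cal_mean_z, 2)


def mean_atomic_number(compton_to_rayleigh):
    """Read the mean atomic number of an unknown specimen off the
    fitted calibration curve (valid only inside the calibrated range)."""
    return float(np.polyval(coeffs, np.log(compton_to_rayleigh)))
```

Higher Compton-to-Rayleigh ratios correspond to lighter matrices, so the curve decreases monotonically over the calibrated range.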
The physics of radio sources and cosmology
International Nuclear Information System (INIS)
Scheuer, P.A.G.
1977-01-01
Malcolm Longair's first notable contribution to science was to point out that the radio source counts require not only that the source population should evolve, but that powerful sources should evolve much faster than weak sources. Ever since then he has been trying to define more quantitatively how one must fill up the P - z plane, and indeed much of this symposium has been devoted to that and closely related problems. The theoretician in each of us cannot help also wondering why. There are plenty of explanations for the greater abundance of radio sources in the past; all sorts of exciting things could have happened when the world was young and galaxies first shone forth out of the primaeval turbulence. There are fewer explanations of the fact that weaker radio sources weren't nearly as overabundant (relative to the present epoch) as powerful ones. However, there is one natural and elegant explanation, which depends on the idea that old, weak, diffuse sources are extinguished because of inverse Compton losses on the microwave background. This explanation is pursued in this paper. (Auth.)
Compton echoes from nearby Gamma-Ray Bursts
Beniamini, Paz; Giannios, Dimitrios; Younes, George; van der Horst, Alexander J.; Kouveliotou, Chryssa
2018-03-01
The recent discovery of gravitational waves from GW170817, associated with a short Gamma-Ray Burst (GRB) at a distance of 40 Mpc, has demonstrated that short GRBs can occur locally and at a reasonable rate. Furthermore, gravitational waves enable us to detect nearby GRBs even when we are observing at latitudes far from the jet's axis. We consider here Compton echoes, the scattered light from the prompt and afterglow emission. Compton echoes, an as yet undetected counterpart of GRBs, peak in X-rays and maintain a roughly constant flux for hundreds to thousands of years after the burst. Though too faint to be detected in typical cosmological GRBs, a fraction of nearby bursts with a sufficiently large energy output in X-rays, and for which the surrounding medium is sufficiently dense, may indeed be observed in this way. The detection of a Compton echo could provide unique insight into the burst properties and the environment's density structure. In particular, it could potentially determine whether or not there was a successful jet that broke through the compact binary merger ejecta. We discuss here the properties of and expectations for Compton echoes and suggest methods for their detection.
Electronic properties and Compton profiles of silver iodide
Indian Academy of Sciences (India)
...scattering correction up to triple scattering. The contribution from higher-order scattering is expected to lie within the statistical errors of the experiment. Further, as suggested by our group [26], we have also corrected the experimental profile for the bremsstrahlung (BS) background due to photo- and Compton-recoiled electrons.
Generalized parton distributions and deep virtual Compton scattering
International Nuclear Information System (INIS)
Hasell, D.; Milner, R.; Takase, K.
2001-01-01
A brief description of generalized parton distributions is presented together with a discussion on studying such distributions via deep virtual Compton scattering. The kinematics, estimates of rates, and accuracies achievable for measuring DVCS utilizing a 5+50 GeV ep collider are also provided
On a low intensity 241Am Compton spectrometer for measurement ...
Indian Academy of Sciences (India)
Phys. 111, 163 (1999). [6] N I Papanicolaou, N C Bacalis and D A Papaconstantopoulos, Handbook of calculated electron momentum distributions, Compton profiles and X-ray form factors of elemental solids (CRC Press, London, 1991). [7] V R Saunders, R Dovesi, C Roetti, R Orlando, C M Zicovich-Wilson, N M Harrison, ...
Comprehensive study of observables in Compton scattering on the nucleon
Grießhammer, Harald W.; McGovern, Judith A.; Phillips, Daniel R.
2018-03-01
We present an analysis of 13 observables in Compton scattering on the proton. Cross sections, asymmetries with polarised beam and/or targets, and polarisation-transfer observables are investigated for energies up to the Δ(1232) resonance to determine their sensitivity to the proton's dipole scalar and spin polarisabilities. The Chiral Effective Field Theory Compton amplitude we use is complete at N4LO, O(e^2 δ^4), for photon energies ω ∼ m_π, and so has an accuracy of a few per cent there. At photon energies in the resonance region, it is complete at NLO, O(e^2 δ^0), and so its accuracy there is about 20%. We find that for energies from pion-production threshold to about 250 MeV, multiple asymmetries have significant sensitivity to presently ill-determined combinations of proton spin polarisabilities. We also argue that the broad outcomes of this analysis will be replicated in complementary theoretical approaches, e.g., dispersion relations. Finally, we show that below the pion-production threshold, 6 observables suffice to reconstruct the Compton amplitude, and above it 11 are required. Although not necessary for polarisability extractions, this opens the possibility to perform "complete" Compton-scattering experiments. An interactive Mathematica notebook, including results for the neutron, is available from judith.mcgovern@manchester.ac.uk.
Electronic structure of hafnium: A Compton profile study
Indian Academy of Sciences (India)
In this paper, we report the first-ever isotropic Compton profile of hafnium measured at an intermediate resolution with 661.65 keV γ-radiation. To compare with our experimental data, theoretical computations have also been carried out within a pseudopotential framework using the CRYSTAL03 code and the ...
New JLab/Hall A Deeply Virtual Compton Scattering results
Energy Technology Data Exchange (ETDEWEB)
Defurne, Maxime [CEA, Centre de Saclay, IRFU/SPhN/LSN, F-91191 Gif-sur-Yvette, France
2015-08-01
New data points for unpolarized Deeply Virtual Compton Scattering cross sections have been extracted from the E00-110 experiment at Q^{2}=1.9 GeV^{2} effectively doubling the statistics available in the valence region. A careful study of systematic uncertainties has been performed.
Attenuation studies near K-absorption edges using Compton ...
Indian Academy of Sciences (India)
Pramana – Journal of Physics, April 2008, pp. 633–641. Attenuation studies near K-absorption edges using Compton scattered 241Am gamma rays. K K ABDULLAH, N RAMACHANDRAN, K KARUNAKARAN NAIR, B R S BABU, ANTONY JOSEPH, RAJIVE ... A Linux-based package, ... Pramana – J. Phys., Vol. 70, No.
Strong anisotropy in the low temperature Compton profiles of ...
Indian Academy of Sciences (India)
Compton profiles of momentum distribution of conduction electrons in the orthorhombic phase of α-Ga metal at low temperature are calculated in the band model for the three crystallographic directions (100), (010), and (001). Unlike the results at room temperature, previously reported by Lengeler, Lasser and Mair, the ...
Electronic properties and Compton profiles of silver iodide
Indian Academy of Sciences (India)
We have carried out an extensive study of electronic properties of silver iodide in - and -phases. The theoretical Compton profiles, energy bands, density of states and anisotropies in momentum densities are computed using density functional theories. We have also employed full-potential linearized augmented ...
Attenuation studies near K-absorption edges using Compton ...
Indian Academy of Sciences (India)
The results are consistent with theoretical values derived from the XCOM package. Keywords. Photon interaction; 241Am; gamma ray attenuation; Compton scattering; absorption edge; rare earth elements. PACS Nos 32.80.-t; 32.90.+a. 1. Introduction. Photon interaction studies at energies around the absorption edge have ...
Momentum densities and Compton profiles of alkali-metal atoms
Indian Academy of Sciences (India)
Pramana – Journal of Physics, Volume 60, Issue 3 ... Quantum defect theory; wave functions of alkali-metal atoms; momentum properties. ... to study the momentum properties of atoms from 3Li to 37Rb. The numerical results obtained for the momentum density, moments of momentum density and Compton ...
Infrared phenomena in quantum electrodynamics : II. Bremsstrahlung and compton scattering
Haeringen, W. van
The infrared aspects of quantum electrodynamics are discussed by treating two examples of scattering processes, bremsstrahlung and Compton scattering. As in the previous paper, a non-covariant diagram technique is used which gives clear insight into the cancellation of infrared divergences between
Strong anisotropy in the low temperature Compton profiles of ...
Indian Academy of Sciences (India)
Compton profiles of momentum distribution of conduction electrons in the orthorhombic phase of α-Ga metal at low temperature are calculated in the band model for the three crystallographic directions (100), (010), and (001). Unlike the results at room temperature, previously reported by Lengeler, Lasser and ...
Gamma-spectrometry with Compton suppressed detectors arrays
International Nuclear Information System (INIS)
Schueck, C.; Hannachi, F.; Chapman, R.
1985-01-01
Recent results of experiments performed with two different Compton-suppressed detector arrays in Daresbury and Berkeley (^{163,164}Yb and ^{154}Er, respectively) are presented, together with a brief description of the national French array presently under construction in Strasbourg. 25 refs., 15 figs
Strong anisotropy in the low temperature Compton profiles of ...
Indian Academy of Sciences (India)
B P PANDA and N C MOHAPATRA†. Department of Physics, Chikiti Mahavidyalaya, Chikiti 761 010, India. †Department of Physics, Berhampur University, Berhampur 760 007, India. MS received 12 April 2001; revised 1 September 2001. Compton profiles of momentum distribution of conduction electrons in the ...
Design of a Paraxial Inverse Compton Scattering Diagnostic for an Intense Relativistic Electron Beam
2013-06-01
same energy range [26-29]. After the scattered photons are produced at z = 34.5 m they begin to propagate forward in a cone with a half angle of θs... [Figure 7: Bragg angle plotted as a function of crystal material (LiF, HOPG, PET, ADP, TAP, RAP, KAP, ODPB) and X-ray energy in keV.] The
International Nuclear Information System (INIS)
Burks, M.; Verbeke, J.; Dougan, A.; Wang, T.; Decman, D.
2009-01-01
A feasibility study has been performed to determine the potential usefulness of Compton imaging as a tool for design information verification (DIV) of uranium enrichment plants. Compton imaging is a method of gamma-ray imaging capable of imaging with a 360-degree field of view over a broad range of energies. These systems can image a room (with a time span on the order of one hour) and return a picture of the distribution and composition of radioactive material in that room. The effectiveness of Compton imaging depends on the sensitivity and resolution of the instrument as well as the strength and energy of the radioactive material to be imaged. This study combined measurements and simulations to examine the specific issue of UF6 gas flow in pipes, at various enrichment levels, as well as hold-up resulting from the accumulation of enriched material in those pipes. It was found that current generation imagers could image pipes carrying UF6 in less than one hour at moderate to high enrichment. Pipes with low enriched gas would require more time. It was also found that hold-up was more amenable to this technique and could be imaged in gram quantities in a fraction of an hour. Another question arises regarding the ability to separately image two pipes spaced closely together; this depends on the capabilities of the instrument in question. These results are described in detail. In addition, suggestions are given as to how to develop Compton imaging as a tool for DIV
International Nuclear Information System (INIS)
Chattopadhyay, T.; Vadawale, S. V.; Shanmugam, M.; Goyal, S. K.
2014-01-01
The polarization measurements in X-rays offer a unique opportunity for the study of physical processes under the extreme conditions prevalent at compact X-ray sources, including gravitation, magnetic field, and temperature. Unfortunately, there has been no real progress in observational X-ray polarimetry thus far. Although photoelectron tracking-based X-ray polarimeters provide realistic prospects of polarimetric observations, they are effective in the soft X-rays only. With the advent of hard X-ray optics, it has become possible to design sensitive X-ray polarimeters in hard X-rays based on Compton scattering. An important point that should be carefully considered for Compton polarimeters is the lower energy threshold of the active scatterer, which typically consists of a plastic scintillator due to its low effective atomic number. Therefore, an accurate understanding of the plastic scintillator's energy threshold is essential to make a realistic estimate of the energy range and sensitivity of any Compton polarimeter. In this context, we set up an experiment to investigate the plastic scintillator's behavior for very low energy deposition events. The experiment involves the detection of Compton scattered photons from a long, thin plastic scintillator (a configuration similar to the eventual Compton polarimeter) by a high-resolution CdTe detector at different scattering angles. We find that it is possible to detect energy depositions well below 1 keV, though with decreasing efficiency. We present detailed semianalytical modeling of our experimental setup and discuss the results in the context of the energy range and sensitivity of a Compton polarimeter employing plastic scintillators
Accelerator-driven X-ray Sources
Energy Technology Data Exchange (ETDEWEB)
Nguyen, Dinh Cong [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-11-09
After an introduction which mentions X-ray tubes and storage rings and gives a brief review of special relativity, the subject is treated under the following topics and subtopics: synchrotron radiation (bending magnet radiation, wiggler radiation, undulator radiation, brightness and brilliance definition, synchrotron radiation facilities), X-ray free-electron lasers (linac-driven X-ray FEL, FEL interactions, self-amplified spontaneous emission (SASE), SASE self-seeding, fourth-generation light source facilities), and other X-ray sources (energy recovery linacs, inverse Compton scattering, laser wakefield accelerator-driven X-ray sources). In summary, accelerator-based light sources cover the entire electromagnetic spectrum. Synchrotron radiation (bending magnet, wiggler and undulator radiation) has unique properties that can be tailored to the users' needs: bending magnet and wiggler radiation is broadband, while undulator radiation has narrow spectral lines. X-ray FELs are the brightest coherent X-ray sources, with high photon flux, femtosecond pulses, full transverse coherence, partial temporal coherence (SASE), and narrow spectral lines with seeding techniques. New developments in electron accelerators and radiation production can potentially lead to more compact sources of coherent X-rays.
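The undulator radiation discussed above obeys a simple on-axis wavelength relation, λ = λ_u(1 + K²/2)/(2γ²). A minimal sketch with illustrative parameters (the 7 GeV beam, 3.3 cm period, and K = 1 values are assumptions for illustration, not numbers from the lecture):

```python
# On-axis undulator fundamental wavelength: lambda = lambda_u (1 + K^2/2) / (2 gamma^2).
ME_C2_GEV = 0.000511  # electron rest energy, GeV

def undulator_wavelength_m(e_gev: float, lambda_u_m: float, K: float) -> float:
    """Fundamental wavelength (m) for beam energy e_gev, period lambda_u_m, strength K."""
    gamma = e_gev / ME_C2_GEV
    return lambda_u_m * (1.0 + K**2 / 2.0) / (2.0 * gamma**2)

# Illustrative storage-ring numbers: 7 GeV beam, 3.3 cm period, K = 1
lam = undulator_wavelength_m(7.0, 0.033, 1.0)
print(f"{lam * 1e10:.2f} Angstrom")  # hard X-rays, about 1.3 Angstrom
```

The 1/γ² factor is why GeV-scale beams turn centimetre magnet periods into angstrom-scale radiation.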
Energy Technology Data Exchange (ETDEWEB)
Baloković, M.; Comastri, A.; Harrison, F. A.; Alexander, D. M.; Ballantyne, D. R.; Bauer, F. E.; Boggs, S. E.; Brandt, W. N.; Brightman, M.; Christensen, F. E.; Craig, W. W.; Moro, A. Del; Gandhi, P.; Hailey, C. J.; Koss, M.; Lansbury, G. B.; Luo, B.; Madejski, G. M.; Marinucci, A.; Matt, G.; Markwardt, C. B.; Puccetti, S.; Reynolds, C. S.; Risaliti, G.; Rivers, E.; Stern, D.; Walton, D. J.; Zhang, W. W.
2014-09-30
We present X-ray spectral analyses for three Seyfert 2 active galactic nuclei, NGC 424, NGC 1320, and IC 2560, observed by NuSTAR in the 3-79 keV band. The high quality hard X-ray spectra allow detailed modeling of the Compton reflection component for the first time in these sources. Using quasi-simultaneous NuSTAR and Swift/XRT data, as well as archival XMM-Newton data, we find that all three nuclei are obscured by Compton-thick material with column densities in excess of ~5 × 10^{24} cm^{-2}, and that their X-ray spectra above 3 keV are dominated by reflection of the intrinsic continuum on Compton-thick material. Due to the very high obscuration, absorbed intrinsic continuum components are not formally required by the data in any of the sources. We constrain the intrinsic photon indices and the column density of the reflecting medium through the shape of the reflection spectra. Using archival multi-wavelength data we recover the intrinsic X-ray luminosities consistent with the broadband spectral energy distributions. Our results are consistent with the reflecting medium being an edge-on clumpy torus with a relatively large global covering factor and overall reflection efficiency of the order of 1%. Given the unambiguous confirmation of the Compton-thick nature of the sources, we investigate whether similar sources are likely to be missed by commonly used selection criteria for Compton-thick AGN, and explore the possibility of finding their high-redshift counterparts.
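The "Compton-thick" label used above follows from the Thomson optical depth of the obscuring column, τ = N_H σ_T. A quick numerical check using the standard Thomson cross section (illustrative only, not code from the paper):

```python
# Thomson optical depth of an obscuring column: tau = N_H * sigma_T.
SIGMA_T = 6.652e-25  # Thomson cross section, cm^2

def thomson_depth(n_h: float) -> float:
    """Optical depth to electron scattering for a column density n_h in cm^-2."""
    return n_h * SIGMA_T

print(f"threshold N_H ~ {1.0 / SIGMA_T:.2e} cm^-2")   # ~1.5e24: the Compton-thick boundary
print(f"tau at 5e24 cm^-2: {thomson_depth(5e24):.1f}")  # ~3.3: well into the thick regime
```

Columns above ~1.5 × 10^{24} cm^{-2} have τ > 1, so the intrinsic continuum is scattered away and only reflection survives, as the spectra described above show.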
Relativistic Turbulence with Strong Synchrotron and Synchrotron-Self-Compton Cooling
Uzdensky, D. A.
2018-03-01
Many relativistic plasma environments in high-energy astrophysics, including pulsar wind nebulae, hot accretion flows onto black holes, relativistic jets in active galactic nuclei and gamma-ray bursts, and giant radio lobes, are naturally turbulent. The plasma in these environments is often so hot that synchrotron and inverse-Compton (IC) radiative cooling becomes important. In this paper we investigate the general thermodynamic and radiative properties (and hence the observational appearance) of an optically thin relativistically hot plasma stirred by driven magnetohydrodynamic (MHD) turbulence and cooled by radiation. We find that if the system reaches a statistical equilibrium where turbulent heating is balanced by radiative cooling, the effective electron temperature tends to attain a universal value θ = kT_e/(m_e c^2) ∼ 1/√(τ_T), where τ_T = n_e σ_T L ≪ 1 is the system's Thomson optical depth, essentially independent of the strength of turbulent driving and hence of the magnetic field. This is because both MHD turbulent dissipation and synchrotron cooling are proportional to the magnetic energy density. We also find that synchrotron self-Compton (SSC) cooling and perhaps a few higher-order IC components are automatically comparable to synchrotron in this regime. The overall broadband radiation spectrum then consists of several distinct components (synchrotron, SSC, etc.), well separated in photon energy (by a factor ∼ τ_T^{-1}) and roughly equal in power. The number of IC peaks is limited by Klein-Nishina effects and depends logarithmically on τ_T and the magnetic field. We also examine the limitations due to synchrotron self-absorption, explore applications to the Crab PWN and blazar jets, and discuss links to radiative magnetic reconnection.
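The quoted scaling follows from balancing turbulent heating against synchrotron cooling per unit volume, taking ⟨γ²⟩ ∼ θ² for a relativistically hot electron population (order-unity factors dropped); schematically:

```latex
\underbrace{\frac{c\,U_B}{L}}_{\text{turbulent dissipation}}
\;\sim\;
\underbrace{n_e\,\sigma_T\,c\,U_B\,\theta^2}_{\text{synchrotron cooling}}
\quad\Longrightarrow\quad
\theta^2 \sim \frac{1}{n_e\,\sigma_T\,L} = \frac{1}{\tau_T}.
```

Both sides are proportional to the magnetic energy density U_B, which is why the equilibrium temperature is independent of the field strength, exactly as the abstract states.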
Ronchin, Erika; Masterlark, Timothy; Dawson, John; Saunders, Steve; Martí Molist, Joan
2015-04-01
In this study, we present a method to fully integrate a family of finite element models (FEMs) into the regularized linear inversion of InSAR data collected at Rabaul caldera (PNG) between February 2007 and December 2010. During this period the caldera experienced a long-term steady subsidence that characterized surface movement both inside the caldera and outside, on its western side. The inversion is based on an array of FEM sources in the sense that the Green's function matrix is a library of forward numerical displacement solutions generated by the sources of an array common to all FEMs. Each entry of the library is the LOS surface displacement generated by injecting a unit mass of fluid, of known density and bulk modulus, into a different source cavity of the array for each FEM. By using FEMs, we take advantage of their capability of including topography and heterogeneous distribution of elastic material properties. All FEMs of the family share the same mesh, in which only one source is activated at a time by removing the corresponding elements and applying the unit fluid flux. The domain therefore only needs to be discretized once. This precludes remeshing for each activated source, thus reducing computational requirements, often a downside of FEM-based inversions. Without imposing an a priori source, the method allows us to identify, from a least-squares standpoint, a complex distribution of fluid flux (or change in pressure) with a 3D free geometry within the source array, as dictated by the data. The results of applying the proposed inversion to Rabaul InSAR data show a shallow magmatic system under the caldera made of two interconnected lobes located at the two opposite sides of the caldera. These lobes could be consistent with feeding reservoirs of the ongoing Tavurvur volcano eruption of andesitic products, on the eastern side, and of the past Vulcan volcano eruptions of more evolved materials, on the western side. The interconnection and
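Once the Green's function library G is assembled, the array-of-sources scheme described above reduces to an ordinary damped least-squares problem d = G m. A toy numpy sketch, in which a seeded random G and synthetic data stand in for the FEM displacement solutions:

```python
import numpy as np

# Damped (zeroth-order Tikhonov) least-squares inversion of d = G m.
# Columns of G play the role of the unit-flux FEM displacement library;
# m holds the fluid flux assigned to each source cavity of the array.
rng = np.random.default_rng(0)
n_data, n_sources = 50, 10
G = rng.normal(size=(n_data, n_sources))          # stand-in Green's function library
m_true = np.zeros(n_sources)
m_true[3], m_true[7] = 2.0, -1.0                  # two active source cavities
d = G @ m_true + 0.01 * rng.normal(size=n_data)   # noisy synthetic LOS data

lam = 0.1  # regularization weight
A = np.vstack([G, lam * np.eye(n_sources)])       # augmented (damped) system
b = np.concatenate([d, np.zeros(n_sources)])
m_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(m_est, 2))  # fluxes near 2.0 (index 3) and -1.0 (index 7)
```

Damping keeps the flux distribution stable when sources trade off against each other, at the cost of a small bias toward zero; the real method applies the same idea with FEM-derived columns.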
Bulk Comptonization: new hints from the luminous blazar 4C+25.05
Kammoun, E. S.; Nardini, E.; Risaliti, G.; Ghisellini, G.; Behar, E.; Celotti, A.
2018-01-01
Blazars are often characterized by a spectral break at soft X-rays, whose origin is still debated. While most sources show a flattening, some exhibit a blackbody-like soft excess with temperatures of the order of ∼0.1 keV, similar to low-luminosity, non-jetted Seyferts. Here, we present the analysis of the simultaneous XMM-Newton and NuSTAR observations of the luminous flat-spectrum radio quasar 4C+25.05 (z = 2.368). The observed 0.3-30 keV spectrum is best described by the sum of a hard X-ray power law (Γ = 1.38_{-0.03}^{+0.05}) and a soft component, approximated by a blackbody with kT_BB = 0.66_{-0.04}^{+0.05} keV (rest frame). If the spectrum of 4C+25.05 is interpreted in the context of bulk Comptonization by cold electrons of broad-line region photons emitted in the direction of the jet, such an unusual temperature implies a bulk Lorentz factor of the jet of Γ_bulk ∼ 11.7. Bulk Comptonization is expected to be ubiquitous on physical grounds, yet no clear signature of it has been found so far, possibly due to its transient nature and the lack of high-quality, broad-band X-ray spectra.
First coincidences in pre-clinical Compton camera prototype for medical imaging
Studen, A.; Burdette, D.; Chesi, E.; Cindro, V.; Clinthorne, N. H.; Dulinski, W.; Fuster, J.; Han, L.; Kagan, H.; Lacasta, C.; Llosá, G.; Marques, A. C.; Malakhov, N.; Meier, D.; Mikuž, M.; Park, S. J.; Roe, S.; Rogers, W. L.; Steinberg, J.; Weilhammer, P.; Wilderman, S. J.; Zhang, L.; Žontar, D.
2004-09-01
Compton collimated imaging may improve the detection of gamma rays emitted by radioisotopes used in single photon emission computed tomography (SPECT). We present a crude prototype consisting of a single 500 μm thick, 256-pad silicon detector with a pad size of 1.4×1.4 mm^2, combined with a 15×15×1 cm^3 NaI scintillator crystal coupled to a set of 20 photomultipliers. Emphasis is placed on the performance of the silicon detector and the associated read-out electronics, which has so far proved to be the most challenging part of the set-up. Results were obtained using the VATAGP3, a 128-channel low-noise self-triggering ASIC, as the silicon detector's front-end. The noise distribution (σ) of the spectroscopic outputs gave an equivalent noise charge (ENC) with a mean value of 137 e⁻ and a spread of 10 e⁻, corresponding to an energy resolution of 1.15 keV FWHM for the scattered electron energy. Threshold settings above 8.2 keV were required for stable operation of the trigger. Coincident Compton scatter events in both modules were observed for photons emitted by a 57Co source with principal gamma-ray energies of 122 and 136 keV.
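The quoted energy resolution follows directly from the noise figure, assuming the standard mean electron-hole pair-creation energy w ≈ 3.62 eV in silicon (that value is an assumption of this sketch, not stated in the abstract):

```python
# ENC -> energy resolution in silicon: FWHM = 2.355 * ENC * w.
W_SI_EV = 3.62          # assumed mean energy per electron-hole pair in Si, eV
SIGMA_TO_FWHM = 2.355   # FWHM / sigma for a Gaussian

def fwhm_kev(enc_electrons: float) -> float:
    """Gaussian FWHM (keV) for an equivalent noise charge given in electrons."""
    return enc_electrons * W_SI_EV * SIGMA_TO_FWHM / 1000.0

print(f"{fwhm_kev(137):.2f} keV")  # ~1.17 keV, consistent with the quoted 1.15 keV FWHM
```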
The synchrotron-self-Compton process in spherical geometries. I - Theoretical framework
Band, D. L.; Grindlay, J. E.
1985-01-01
Both spatial and spectral accuracies are stressed in the present method for the calculation of the synchrotron-self-Compton model in spherical geometries, especially in the partially opaque regime of the synchrotron spectrum of inhomogeneous sources that can span a few frequency decades and contribute a significant portion of the scattered flux. A formalism is developed that permits accurate calculation of incident photon density throughout an optically thin sphere. An approximation to the Klein-Nishina cross section is used to model the effects of variable electron and incident photon cutoffs, as well as the decrease in the cross section at high energies. General results are derived for the case of inhomogeneous sources with power law profiles in both electron density and magnetic field.
International Nuclear Information System (INIS)
Singh, Manpreet; Singh, Gurvinderjit; Sandhu, B.S.; Singh, Bhajan
2006-01-01
The simultaneous effect of detector collimator and sample thickness on 0.662 MeV multiply Compton-scattered gamma photons was studied experimentally. An intense collimated beam, obtained from a 6 Ci ^{137}Cs source, is allowed to impinge on cylindrical aluminium samples of varying diameter, and the scattered photons are detected by a 51 mm × 51 mm NaI(Tl) scintillation detector placed at 90° to the incident beam. The full-energy peak corresponding to singly scattered events is reconstructed analytically. The thickness at which the multiply scattered events saturate is determined for different detector collimators. Parameters such as the signal-to-noise ratio and the multiple-scatter fraction (MSF) have also been deduced, and support the work carried out by Shengli et al. [2000. EGS4 simulation of Compton scattering for nondestructive testing. KEK proceedings 200-20, Tsukuba, Japan, pp. 216-223] and Barnea et al. [1995. A study of multiple scattering background in Compton scatter imaging. NDT and E International 28, 155-162] based upon Monte Carlo calculations
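The analytically reconstructed single-scatter peak sits at the energy given by the standard Compton formula, E' = E / (1 + (E/m_ec²)(1 − cos θ)); a quick check for the 90° geometry above:

```python
from math import cos, radians

# Single-scatter Compton kinematics: E' = E / (1 + (E / me c^2)(1 - cos theta)).
ME_C2 = 0.511  # electron rest energy, MeV

def compton_scattered_energy(e_mev: float, theta_deg: float) -> float:
    """Energy (MeV) of a photon after one Compton scatter through theta_deg."""
    return e_mev / (1.0 + (e_mev / ME_C2) * (1.0 - cos(radians(theta_deg))))

# 0.662 MeV (137Cs) photons viewed at 90 degrees, as in the experiment:
print(f"{compton_scattered_energy(0.662, 90.0):.3f} MeV")  # single-scatter peak near 0.288 MeV
```

Events piling up below this peak are the multiply scattered photons whose saturation thickness the study measures.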
Polarized gamma-rays with laser-Compton backscattering
Energy Technology Data Exchange (ETDEWEB)
Ohgaki, H.; Noguchi, T.; Sugiyama, S. [Electrotechnical Lab., Ibaraki (Japan)] [and others]
1995-12-31
Polarized gamma-rays were generated through laser-Compton backscattering (LCS) of a conventional Nd:YAG laser with electrons circulating in the electron storage ring TERAS at the Electrotechnical Laboratory. We measured the energy, the energy spread, and the yield of the gamma-rays to characterize our gamma-ray source. The gamma-ray energy can be varied by changing the energy of the electrons circulating in the storage ring; in our case, the electron energy was varied from 200 to 750 MeV. Consequently, we observed gamma-ray energies of 1 to 10 MeV with 1064 nm laser photons. Furthermore, the gamma-ray energy was extended to 20 MeV by using the 2nd harmonic of the Nd:YAG laser. This shows good agreement with theoretical calculation. The gamma-ray energy spread was measured to be 1% FWHM for ∼1 MeV gamma-rays and 4% FWHM for 10 MeV gamma-rays with a narrow collimator that defined the scattering cone. The gamma-ray yield was 47.2 photons/mA/W/s. This value is consistent with a rough estimate of 59.5 photons/mA/W/s derived from theory. Furthermore, we used these gamma-rays for a nuclear fluorescence experiment. If we use a polarized laser beam, we can easily obtain polarized gamma-rays. Elastically scattered photons from ^{208}Pb were clearly measured with the linearly polarized gamma-rays, and we could assign the parity of J=1 states in the nucleus. We should emphasize that polarized gamma-rays from LCS are quite useful in this field, because we can use highly, almost completely, polarized gamma-rays. We also use the LCS gamma-rays to measure photon absorption coefficients. In the near future, we will try to generate circularly polarized gamma-rays. We also plan to use an FEL, because it can produce intense laser photons in the same geometric configuration as the LCS facility.
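The reported energy tuning is consistent with the head-on backscattering relation E_max ≈ 4γ²E_L, valid while 4γ²E_L ≪ E_e. A back-of-envelope sketch (not the facility's own calculation):

```python
# Head-on laser-Compton backscattering: E_max ~ 4 gamma^2 E_laser.
HC_EV_NM = 1239.84  # photon energy (eV) x wavelength (nm)
ME_C2_MEV = 0.511   # electron rest energy, MeV

def lcs_max_energy_mev(e_electron_mev: float, laser_nm: float) -> float:
    """Maximum backscattered photon energy (MeV), Thomson-regime approximation."""
    gamma = e_electron_mev / ME_C2_MEV
    return 4.0 * gamma**2 * (HC_EV_NM / laser_nm) * 1e-6

print(f"{lcs_max_energy_mev(750.0, 1064.0):.1f} MeV")  # ~10 MeV, as observed
print(f"{lcs_max_energy_mev(750.0, 532.0):.1f} MeV")   # ~20 MeV with the 2nd harmonic
```

The same 4γ² upconversion factor underlies compact inverse Compton X-ray sources, where MeV-scale electrons and optical lasers yield keV photons.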
International Nuclear Information System (INIS)
Labaye, F.
2012-01-01
One of the critical points of the International Linear Collider (ILC) is the polarized positron source. Without going into further detail on the physical process of polarized positron production, we point out that positrons are produced when circularly polarized gamma rays interact with matter. Thus, the critical point is the circularly polarized gamma-ray source. A technical solution for this source is Compton backscattering, and this thesis therefore takes place in the framework of the design of high-average-power laser systems locked to Fabry-Perot cavities for polarized gamma-ray production by Compton backscattering. In the first part, we present the context of this thesis, the Compton backscattering principle, and the choice of an optical architecture based on a fiber laser and a Fabry-Perot cavity. We finish by enumerating several possible applications of Compton backscattering, which shows that the work presented here might benefit from technology transfer to other research fields. In the second part, we present the different fiber laser architectures studied as well as the results obtained. In the third part, we recall the operating principle of a Fabry-Perot cavity and present the one used for our experiment as well as its specificities. In the fourth part, we address the Compton backscattering experiment, which enabled us to present the joint use of a fiber laser and a Fabry-Perot cavity in a particle accelerator to generate gamma rays for the first time, to our knowledge. This experiment took place at the Accelerator Test Facility (ATF). The experimental apparatus and the results obtained are presented. In the end, we summarize the results presented in this manuscript and propose different possible evolutions of the system in a general conclusion. (author)
Kaufman, J.; Blaes, O. M.; Hirose, S.
2018-03-01
Warm Comptonization models for the soft X-ray excess in AGN do not self-consistently explain the relationship between the Comptonizing medium and the underlying accretion disc. Because of this, they cannot directly connect the fitted Comptonization temperatures and optical depths to accretion disc parameters. Since bulk velocities exceed thermal velocities in highly radiation pressure dominated discs, in these systems bulk Comptonization by turbulence may provide a physical basis in the disc itself for warm Comptonization models. We model the dependence of bulk Comptonization on fundamental accretion disc parameters, such as mass, luminosity, radius, spin, inner boundary condition, and α. In addition to constraining warm Comptonization models, our model can help distinguish contributions from bulk Comptonization to the soft X-ray excess from those due to other physical mechanisms, such as absorption and reflection. By linking the time variability of bulk Comptonization to fluctuations in the disc vertical structure due to MRI turbulence, our results show that observations of the soft X-ray excess can be used to study disc turbulence in the radiation pressure dominated regime. Because our model connects bulk Comptonization to one-dimensional vertical structure temperature profiles in a physically intuitive way, it will be useful for understanding this effect in future simulations run in new regimes.
Coffer, Amy Beth
Radiation imagers are important tools in the modern world for a wide range of applications. They span the use-cases of fundamental science, astrophysics, and medical imaging, all the way to national security, nuclear safeguards, and non-proliferation verification. The type of radiation imagers studied in this thesis were gamma-ray imagers that detect emissions from radioactive materials. A gamma-ray imager's goal is to localize and map the distribution of radiation within its field-of-view despite complicating background radiation that can be terrestrial, astronomical, and temporal. Compton imaging systems are one type of gamma-ray imager that can map the radiation around the system without the use of collimation. The lack of collimation enables the imaging system to detect radiation from all directions and, at the same time, increases detection efficiency by not absorbing incident radiation in non-sensing materials. Each Compton-scatter event within an imaging system generates a possible cone surface in space from which the radiation could have originated. Compton imaging is limited in its reconstructed image signal-to-background because these source Compton cones overlap with background-radiation Compton cones. These overlapping cones limit Compton imaging's detection sensitivity in image space. Electron-tracking Compton imaging (ETCI) can improve the detection sensitivity by measuring the Compton-scattered electron's initial trajectory. With an estimate of the scattered electron's trajectory, one can reduce the Compton back-projected cone to a cone arc, enabling faster radiation source detection and localization. However, the ability to measure the Compton-scattered electron trajectories adds another layer of complexity to an already complex methodology. For real-world imaging applications, improvements are needed in electron-track detection efficiency and in electron-track reconstruction. One way of measuring Compton
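The back-projected Compton cone mentioned above has an opening angle fixed by the two measured energies through Compton kinematics. A textbook-level sketch (the 662 keV example energies are illustrative, not from the thesis):

```python
from math import acos, degrees

# Compton-cone opening angle from the two measured energies: the scatterer
# deposit e1 and the absorbed photon energy e2 determine the scatter angle.
ME_C2 = 511.0  # electron rest energy, keV

def cone_angle_deg(e1_kev: float, e2_kev: float) -> float:
    """Scattering angle (= cone half-angle) for an incident photon of energy e1 + e2."""
    e0 = e1_kev + e2_kev
    cos_theta = 1.0 - ME_C2 * (1.0 / e2_kev - 1.0 / e0)
    return degrees(acos(cos_theta))

# A 662 keV photon leaving 373.6 keV in the scatterer (a 90-degree scatter):
print(f"{cone_angle_deg(373.6, 288.4):.1f} deg")
```

The source must lie somewhere on this cone about the scatter-to-absorber axis; electron tracking, as described above, narrows the cone to an arc.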
Electronic properties of Be and Al by Compton scattering technique
International Nuclear Information System (INIS)
Aguiar, J.C.; Di Rocco, H.O.
2011-01-01
In this work, the electronic properties of beryllium and aluminum are examined using the Compton scattering technique. The method is based on irradiating samples with a narrow beam of mono-energetic 59.54 keV photons, a product of the radioactive decay of ^{241}Am. Scattered radiation is collected by a high-resolution semiconductor detector positioned at an angle of 90°. The measured spectrum is commonly called the Compton profile and contains useful information about the electronic structure of the material. The experimental results are compared with theoretical calculations such as density functional theory, showing good agreement. However, these results show some discrepancies with many libraries used in codes such as Monte Carlo simulations, since these libraries are based on the values tabulated by Biggs, Mendelsohn and Mann (1975) and thus overestimate the scattered radiation in the material. (authors)
Laser Compton Scattering Gamma Ray Induced Photo-Transmutation
Li, Dazhi
2004-01-01
High-brightness beams of gamma rays produced by laser Compton scattering have the potential to realize photo-transmutation through the (γ,n) reaction, implying an efficient method to dispose of long-lived fission products. Preliminary investigations have been carried out to understand the feasibility of developing a transmutation facility for nuclear waste. A laser Compton scattering experimental setup based on a storage ring has started to generate gamma-ray beams for studying the coupling of gamma photons to the nuclear giant resonance. This paper demonstrates the dependence of the nuclear transmutation efficiency on target dimensions and gamma-ray features. A ^{197}Au sample was adopted in our experiment, and the experimental results correspond to the theoretical estimates.
Deeply virtual Compton scattering on a virtual pion target
Energy Technology Data Exchange (ETDEWEB)
Amrath, D.; Diehl, M. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Lansberg, J.P. [Ecole Polytechnique, 91 - Palaiseau (France). Centre de Physique Theorique; Heidelberg Univ. (Germany). Inst. fuer Theoretische Physik]
2008-07-15
We study deeply virtual Compton scattering on a virtual pion that is emitted by a proton. Using a range of models for the generalized parton distributions of the pion, we evaluate the cross section, as well as the beam spin and beam charge asymmetries in the leading-twist approximation. Studying Compton scattering on the pion in suitable kinematics puts high demands on both beam energy and luminosity, and we find that the corresponding requirements will first be met after the energy upgrade at Jefferson Laboratory. As a by-product of our study, we construct a parameterization of pion generalized parton distributions that has a non-trivial interplay between the x and t dependence and is in good agreement with form factor data and lattice calculations. (orig.)
Simplified slow anti-coincidence circuit for Compton suppression systems
Energy Technology Data Exchange (ETDEWEB)
Al-Azmi, Darwish [Department of Applied Sciences, College of Technological Studies, Public Authority for Applied Education and Training, P.O. Box 42325, Shuwaikh 70654 (Kuwait)], E-mail: ds.alazmi@paaet.edu.kw
2008-08-15
Slow coincidence circuits for anti-coincidence measurements have been considered for use in the Compton suppression technique. A simplified version of the slow circuit has been found to be fast enough and satisfactory, and it allows an easy system setup, with the particular advantage of automatic threshold setting of the low-level discrimination. A well-type NaI detector as the main detector, surrounded by a plastic guard detector, was arranged to investigate the performance of the Compton suppression spectrometer using the simplified slow circuit. The system has been tested to observe the improvement in the energy spectra for medium- to high-energy gamma-ray photons from terrestrial and environmental samples.
Comparison between electron and neutron Compton scattering studies
Directory of Open Access Journals (Sweden)
Moreh Raymond
2015-01-01
We compare two techniques, electron Compton scattering (ECS) and neutron Compton scattering (NCS), and show that, using certain incident energies, both can measure the atomic kinetic energy of atoms in molecules and solids. The information obtained is related to the Doppler broadening of nuclear levels and is very useful for deducing the widths of excited levels in many nuclei in self-absorption measurements. A comparison between the atomic kinetic energies measured by the two methods on the same samples is made. Some results are also compared with atomic kinetic energies calculated in the harmonic approximation, with the vibrational frequencies taken from IR/Raman optical measurements. The advantages of the ECS method are emphasized.
The Mathematical Foundations of 3D Compton Scatter Emission Imaging
Directory of Open Access Journals (Sweden)
T. T. Truong
2007-01-01
The mathematical principles of tomographic imaging using detected (unscattered) X- or gamma-rays are based on the two-dimensional Radon transform and many of its variants. In this paper, we show that two new generalizations, called conical Radon transforms, are related to three-dimensional imaging processes based on detected Compton-scattered radiation. The first class of conical Radon transform was introduced recently to support the imaging principles of collimated detector systems. The second class is new, is closely related to Compton camera imaging principles, and is invertible under special conditions. As they are poised to play a major role in future designs of biomedical imaging systems, we present an account of their most important properties, which may be relevant for active researchers in the field.
Compact radio sources as a plasma turbulent reactor
International Nuclear Information System (INIS)
Atoyan, A.M.; Nagapetyan, A.
1987-01-01
The electromagnetic radiation spectra of a homogeneous cosmic radio source (CRS), in which relativistic electron acceleration on Langmuir waves leads to the formation of Maxwell-like spectra with a characteristic Lorentz factor γ₀ ∼ 10³, are considered. It is shown that, due to the synchrotron radiation of relativistic electrons, the flat radio spectra usually observed from CRSs, gradually steepening at submillimeter wavelengths, are naturally formed in the optically thin range of frequencies. The electromagnetic radiation from the scattering of the electrons on the turbulence produces significant nonthermal infrared radiation. Inverse Compton scattering of the relativistic electrons on the radio-infrared photons leads to the production of X-rays. The characteristics of the electromagnetic radiation spectra obtained in the model are compared with the observed ones.
The Compton-Schwarzschild correspondence from extended de Broglie relations
Energy Technology Data Exchange (ETDEWEB)
Lake, Matthew J. [The Institute for Fundamental Study, “The Tah Poe Academia Institute' ,Naresuan University, Phitsanulok 65000 (Thailand); Thailand Center of Excellence in Physics, Ministry of Education,Bangkok 10400 (Thailand); Carr, Bernard [School of Physics and Astronomy, Queen Mary University of London,Mile End Road, London E1 4NS (United Kingdom)
2015-11-17
The Compton wavelength gives the minimum radius within which the mass of a particle may be localized due to quantum effects, while the Schwarzschild radius gives the maximum radius within which the mass of a black hole may be localized due to classical gravity. In a mass-radius diagram, the two lines intersect near the Planck point (l_P, m_P), where quantum gravity effects become significant. Since canonical (non-gravitational) quantum mechanics is based on the concept of wave-particle duality, encapsulated in the de Broglie relations, these relations should break down near (l_P, m_P). It is unclear what physical interpretation can be given to quantum particles with energy E ≫ m_P c², since they correspond to wavelengths λ ≪ l_P or time periods τ ≪ t_P in the standard theory. We therefore propose a correction to the standard de Broglie relations, which gives rise to a modified Schrödinger equation and a modified expression for the Compton wavelength, which may be extended into the region E ≫ m_P c². For the proposed modification, we recover the expression for the Schwarzschild radius for E ≫ m_P c² and the usual Compton formula for E ≪ m_P c². The sign of the inequality obtained from the uncertainty principle reverses at m ≈ m_P, so that the Compton wavelength and event horizon size may be interpreted as minimum and maximum radii, respectively. We interpret the additional terms in the modified de Broglie relations as representing the self-gravitation of the wave packet.
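The mass-radius crossing this abstract refers to is easy to check numerically: equating the reduced Compton wavelength to the Schwarzschild radius gives a mass of order the Planck mass. A sketch in SI units (constant values are CODATA, the factor of √2 depends on conventions):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant [J s]
C = 2.99792458e8        # speed of light [m/s]
G = 6.67430e-11         # gravitational constant [m^3 kg^-1 s^-2]

def compton_wavelength(m):
    """Reduced Compton wavelength, lambda_C = hbar / (m c)."""
    return HBAR / (m * C)

def schwarzschild_radius(m):
    """Schwarzschild radius, r_S = 2 G m / c^2."""
    return 2.0 * G * m / C**2

# The curves cross where hbar/(m c) = 2 G m / c^2, i.e. at
# m = sqrt(hbar c / (2 G)) -- the Planck mass up to a factor of order unity.
m_cross = math.sqrt(HBAR * C / (2.0 * G))
m_planck = math.sqrt(HBAR * C / G)  # ~2.18e-8 kg
```

Below m_cross the Compton wavelength dominates (quantum localization limit); above it the Schwarzschild radius dominates (gravitational localization limit), which is the asymmetry the modified de Broglie relations are built to interpolate.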
Renormalized Compton scattering and nonlinear damping of collisionless drift waves
International Nuclear Information System (INIS)
Krommes, J.A.
1979-05-01
A kinetic theory for the nonlinear damping of collisionless drift waves in a shear-free magnetic field is presented. The general formalism is a renormalized version of induced scattering on the ions and reduces correctly to weak turbulence theory. The approximation studied explicitly reduces to Compton scattering, systematizes the earlier calculations of Dupree and Tetreault (DT) [Phys. Fluids 21, 425 (1978)], and extends that theory to finite ion gyroradius. Certain conclusions differ significantly from those of DT.
Compton scattering at finite temperature: thermal field dynamics approach
International Nuclear Information System (INIS)
Juraev, F.I.
2006-01-01
Compton scattering is a classical problem of quantum electrodynamics and has been studied since its early beginnings. Perturbation theory and the Feynman diagram technique enable a comprehensive analysis of this problem, on the basis of which the famous Klein-Nishina formula is obtained [1, 2]. In this work the problem is extended to the case of finite temperature. Finite-temperature effects in Compton scattering are of practical importance for various processes in relativistic thermal plasmas in astrophysics. Recently the Compton effect has been explored using the closed-time-path formalism, with temperature corrections estimated [3]. It was found that the thermal cross section can be larger than that at zero temperature by several orders of magnitude for the high temperatures realistic in astrophysics [3]. In our work, the main tool used to account for finite-temperature effects is a real-time finite-temperature quantum field theory, so-called thermofield dynamics [4, 5]. Thermofield dynamics is a canonical formalism for exploring field-theoretical processes at finite temperature. It consists of two steps: doubling of the Fock space and Bogolyubov transformations. Doubling leads to the appearance of additional degrees of freedom, called tilded operators, which together with the usual field operators form a so-called thermal doublet. Bogolyubov transformations make the field operators temperature-dependent. Using this formalism, we treat Compton scattering at finite temperature by replacing the zero-temperature propagators in the transition amplitude with finite-temperature ones. As a result, a finite-temperature extension of the Klein-Nishina formula is obtained, in which the differential cross section is represented as a sum of the zero-temperature cross section and a finite-temperature correction. The obtained result could be useful in the quantum electrodynamics of lasers and for relativistic thermal plasma processes in astrophysics, where a correct account of finite-temperature effects is important. (author)
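The zero-temperature baseline to which the finite-temperature correction is added is the Klein-Nishina differential cross section. A sketch of the standard formula (not the thermofield-dynamics result):

```python
import math

R_E = 2.8179403262e-15  # classical electron radius [m]
M_E_C2 = 0.51099895     # electron rest energy [MeV]

def klein_nishina(e_mev, theta):
    """Zero-temperature Klein-Nishina differential cross section
    dsigma/dOmega [m^2/sr] for a photon of energy e_mev scattered
    through angle theta:
      (r_e^2 / 2) * (E'/E)^2 * (E'/E + E/E' - sin^2 theta).
    """
    ratio = 1.0 / (1.0 + (e_mev / M_E_C2) * (1.0 - math.cos(theta)))  # E'/E
    return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - math.sin(theta)**2)

# In the forward direction the cross section reduces to the Thomson value r_e^2:
forward = klein_nishina(1.0, 0.0)
```

The thermal extension described in the abstract replaces this vacuum result with the same expression plus a temperature-dependent correction term.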
Detection of detachments and inhomogeneities in frescos by Compton scattering
International Nuclear Information System (INIS)
Castellano, A.; Cesareo, R.; Buccolieri, G.; Donativi, M.; Palama, F.; Quarta, S.; De Nunzio, G.; Brunetti, A.; Marabelli, M.; Santamaria, U.
2005-01-01
A mobile instrument has been developed for the detection and mapping of detachments in frescos using Compton backscattered photons. The instrument is mainly composed of a high-energy X-ray tube, an X-ray detection system, and a translation table. The instrument was first applied to samples simulating various detachment situations, and then transferred to the Vatican Museum to detect detachments and inhomogeneities in the Stanza di Eliodoro, one of Raphael's 'stanze'.
Real and virtual Compton scattering: The nucleon polarizabilities
International Nuclear Information System (INIS)
Downie, E.J.; Fonvieille, H.
2011-01-01
We give an overview of low-energy Compton scattering γ (*) p → γp with a real or virtual incoming photon. These processes allow the investigation of one of the fundamental properties of the nucleon, i.e. how its internal structure deforms under an applied static electromagnetic field. Our knowledge of nucleon polarizabilities and their generalization to non-zero four-momentum transfer will be reviewed, including the presently ongoing experiments and future perspectives. (authors)
Deeply virtual Compton scattering: How to test handbag dominance?
International Nuclear Information System (INIS)
Gousset, T.; Gousset, T.; Diehl, M.; Pire, B.; Diehl, M.; Ralston, J.P.
1998-01-01
We propose detailed tests of the handbag approximation in exclusive deeply virtual Compton scattering. Those tests make no use of any prejudice about parton correlations in the proton which are basically unknown objects and beyond the scope of perturbative QCD. Since important information on the proton substructure can be gained in the regime of light cone dominance we consider that such a class of tests is of special relevance. copyright 1998 American Institute of Physics
Formal analogy between Compton scattering and Doppler effect
DEFF Research Database (Denmark)
Nielsen, A.; Olsen, Jørgen Seir
1966-01-01
Viewed from the scatterer, the energy of the incoming photon or particle is equal to that of the outgoing, and the angle of incidence is equal to the angle of reflection, when the direction of the velocity of the scatterer after the collision is taken as reference. This paper sets out to prove this statement in a simpler and more direct way. The authors consider only the Compton scattering process, as it is quite analogous to the particle case.
The electromagnetic calorimeter in JLab Real Compton Scattering Experiment
Energy Technology Data Exchange (ETDEWEB)
Albert Shahinyan; Eugene Chudakov; A. Danagoulian; P. Degtyarenko; K. Egiyan; V. Gorbenko; J. Hines; E. Hovhannisyan; Ch. Hyde; C.W. de Jager; A. Ketikyan; V. Mamyan; R. Michaels; A.M. Nathan; V. Nelyubin; I. Rachek; M. Roedelbrom; A. Petrosyan; R. Pomatsalyuk; V. Popov; J. Segal; Yu. Shestakov; J. Templon; H. Voskanyan; B. Wojtsekhowski
2007-04-16
A hodoscope calorimeter comprising 704 lead-glass blocks is described. The calorimeter was constructed for use in the JLab Real Compton Scattering experiment. The detector provides a measurement of the coordinates and the energy of scattered photons in the GeV energy range with resolutions of 5 mm and $6\%/\sqrt{E_\gamma\,[\mathrm{GeV}]}$, respectively. Design features and performance parameters during the experiment are presented.
Zhang, Dongliang
2013-01-01
To increase the illumination of the subsurface and to eliminate the dependency of FWI on the source wavelet, we propose multiples waveform inversion (MWI), which transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. These virtual sources are used to numerically generate downgoing wavefields that are correlated with the back-projected surface-related multiples to give the migration image. Since the recorded data are treated as the virtual sources, knowledge of the source wavelet is not required, and the subsurface illumination is greatly enhanced because the entire free surface acts as an extended source, compared with the radiation pattern of a traditional point source. Numerical tests on the Marmousi2 model show that the convergence rate and the spatial resolution of MWI are, respectively, faster and higher than those of FWI. The potential pitfall with this method is that the multiples undergo more than one roundtrip to the surface, which increases attenuation and reduces spatial resolution. This can lead to less resolved tomograms compared to conventional FWI. A possible solution is to combine both FWI and MWI in inverting for the subsurface velocity distribution.
Rosseland and Flux Mean Opacities for Compton Scattering
Energy Technology Data Exchange (ETDEWEB)
Poutanen, Juri, E-mail: juri.poutanen@utu.fi [Tuorla Observatory, Department of Physics and Astronomy, University of Turku, Väisäläntie 20, FI-21500 Piikkiö (Finland)
2017-02-01
Rosseland mean opacity plays an important role in theories of stellar evolution and X-ray burst models. In the high-temperature regime, when most of the gas is completely ionized, the opacity is dominated by Compton scattering. Our aim here is to critically evaluate previous works on this subject and to compute the exact Rosseland mean opacity for Compton scattering over a broad range of temperature and electron degeneracy parameter. We use relativistic kinetic equations for Compton scattering and compute the photon mean free path as a function of photon energy by solving the corresponding integral equation in the diffusion limit. As a byproduct we also demonstrate the way to compute photon redistribution functions in the case of degenerate electrons. We then compute the Rosseland mean opacity as a function of temperature and electron degeneracy and present useful approximate expressions. We compare our results to previous calculations and find a significant difference in the low-temperature regime and strong degeneracy. We then proceed to compute the flux mean opacity in both free-streaming and diffusion approximations, and show that the latter is nearly identical to the Rosseland mean opacity. We also provide a simple way to account for the true absorption in evaluating the Rosseland and flux mean opacities.
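The Rosseland mean weights the inverse opacity with the temperature derivative of the Planck function. A sketch of the defining quadrature, in terms of the dimensionless photon energy x = hν/kT (illustrative only; the paper solves the full relativistic kinetic problem, not this simple average):

```python
import math

def planck_dB_dT(x):
    """Temperature derivative of the Planck function in terms of
    x = h nu / k T, up to constant factors: x^4 e^x / (e^x - 1)^2."""
    return x**4 * math.exp(x) / (math.exp(x) - 1.0)**2

def rosseland_mean(kappa_of_x, xs):
    """Rosseland mean of an energy-dependent opacity kappa(x) on a grid xs:
       1/kappa_R = int (1/kappa) dB/dT dx / int dB/dT dx
    evaluated with the trapezoid rule."""
    w = [planck_dB_dT(x) for x in xs]
    num = sum(0.5 * (w[i] / kappa_of_x(xs[i]) + w[i + 1] / kappa_of_x(xs[i + 1]))
              * (xs[i + 1] - xs[i]) for i in range(len(xs) - 1))
    den = sum(0.5 * (w[i] + w[i + 1]) * (xs[i + 1] - xs[i])
              for i in range(len(xs) - 1))
    return den / num

xs = [0.01 * i for i in range(1, 2001)]      # x from 0.01 to 20
kappa_r = rosseland_mean(lambda x: 1.0, xs)  # gray opacity -> kappa_R = 1
```

For a gray (energy-independent) opacity the mean trivially returns the opacity itself; the physics in the paper enters through the energy and temperature dependence of the Compton mean free path that replaces the constant here.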
ILC beam energy measurement by means of laser Compton backscattering
Energy Technology Data Exchange (ETDEWEB)
Muchnoi, N. [Budker Inst. for Nuclear Physics, Novosibirsk (Russian Federation); Schreiber, H.J.; Viti, M. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)
2008-10-15
A novel, non-invasive method of measuring the beam energy at the International Linear Collider is proposed. Laser light collides head-on with beam particles, and either the energy of the Compton-scattered electrons near the kinematic end-point is measured, or the positions of the Compton backscattered γ-rays, the edge electrons, and the unscattered beam particles are recorded. A compact layout for the Compton spectrometer is suggested. It consists of a bending magnet and position-sensitive detectors operating in a large radiation environment. Several options for high-spatial-resolution detectors are discussed. Simulation studies support the use of an infrared or green laser and quartz fiber detectors to monitor the backscattered photons and edge electrons. Employing a cavity monitor, the beam particle position downstream of the magnet can be recorded with submicrometer precision. Such a scheme provides a feasible and promising method to access the incident beam energy with precisions of 10⁻⁴ or better on a bunch-to-bunch basis while the electron and positron beams are in collision. (orig.)
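The kinematic end-point exploited here follows from head-on Compton kinematics: the minimum ("edge") energy of the scattered electrons is a known function of the beam and laser photon energies, so locating the edge yields the beam energy. A sketch (the beam and laser values are illustrative, not the paper's parameters):

```python
M_E = 0.51099895e6  # electron rest energy [eV]

def edge_electron_energy(e_beam_ev, e_laser_ev):
    """Minimum ('edge') energy of Compton-scattered electrons for a
    head-on collision of beam electrons with laser photons:
      E_edge = E_beam / (1 + x),  x = 4 E_beam w_laser / (m_e c^2)^2.
    Measuring this edge determines the beam energy."""
    x = 4.0 * e_beam_ev * e_laser_ev / M_E**2
    return e_beam_ev / (1.0 + x)

# A 250 GeV beam against a 532 nm (2.33 eV) green laser:
e_edge = edge_electron_energy(250e9, 2.33)  # ~25 GeV
```

Because the edge position shifts with beam energy, a relative measurement of the edge electrons against the unscattered beam in the spectrometer translates directly into the quoted 10⁻⁴-level energy determination.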
Electronic structure of the palladium hydride studied by compton scattering
Mizusaki, S; Yamaguchi, M; Hiraoka, N; Itou, M; Sakurai, Y
2003-01-01
The hydrogen-induced changes in the electronic structure of Pd have been investigated by Compton scattering experiments together with theoretical calculations. Compton profiles (CPs) of a single crystal of Pd and the β-phase hydride PdH_x (x = 0.62-0.74) have been measured along the [100], [110] and [111] directions with a momentum resolution of 0.14-0.17 atomic units using 115 keV X-rays. The theoretical Compton profiles have been calculated from the wavefunctions obtained using the full-potential linearized augmented plane wave method within the local density approximation for Pd and stoichiometric PdH. The experimental and theoretical results agreed well with respect to the difference in the CPs between PdH_x and Pd, and the anisotropy in the CPs of Pd and PdH_x. This study provides lines of evidence that upon hydride formation the lowest valence band of Pd is largely modified due to hybridization with the H 1s orbitals and the Fermi energy is raised into the sp band. (author)
Measurements of Compton Scattered Transition Radiation at High Lorentz Factors
Case, Gary L.; Cherry, Michael L.; Isbert, Joachim; Mitchell, John W.; Patterson, Donald
2004-01-01
X-ray transition radiation can be used to measure the Lorentz factor of relativistic particles. Standard transition radiation detectors (TRDs) typically incorporate thin plastic foil radiators and gas-filled X-ray detectors, and are sensitive up to γ ∼ 10⁴. To reach higher Lorentz factors (up to γ ∼ 10⁵), thicker, denser radiators can be used, which consequently produce X-rays of harder energies (>100 keV). At these energies, scintillator detectors are more efficient in detecting the hard X-rays, and Compton scattering of the X-rays out of the path of the particle becomes an important effect. The Compton scattering can be utilized to separate the transition radiation from the ionization background spatially. The use of conducting metal foils is predicted to yield enhanced signals compared to standard nonconducting plastic foils of the same dimensions. We have designed and built a Compton Scatter TRD optimized for high Lorentz factors and exposed it to high energy electrons at the CERN SPS. We pres...
Electronic structure of AlAs: A Compton profile study
International Nuclear Information System (INIS)
Sharma, G.; Joshi, K.B.; Mishra, M.C.; Kothari, R.K.; Sharma, Y.C.; Vyas, V.; Sharma, B.K.
2009-01-01
The electronic structure of AlAs is studied through a Compton profile study in this paper. Theoretical calculations are performed following the linear combination of atomic orbitals (LCAO) method and the empirical pseudopotential method (EPM). In the LCAO method, two cases are considered to treat exchange and correlation. In the first case, the exchange function of Becke and the correlation function of Perdew and Wang, on the basis of the generalized gradient approximation (PW-GGA), are adopted. In the second case, the hybrid function B3PW is adopted. Measurement of the Compton profile of polycrystalline AlAs is performed using 59.54 keV gamma-rays. The spherically averaged theoretical Compton profiles are in good agreement with the measurement. The best agreement is, however, shown by the EPM. The anisotropies predicted by B3PW and the EPM are smaller than those from the PW-GGA calculation. On the basis of equal-valence-electron-density profiles, it is found that AlAs is more covalent than AlN. The charge transfer model suggests a transfer of 0.6e⁻ from Al to As on compound formation.
Compton profile study and electronic properties of tantalum diboride
International Nuclear Information System (INIS)
Raykar, Veera; Bhamu, K.C.; Ahuja, B.L.
2013-01-01
We have reported the first-ever experimental Compton profile (CP) of TaB₂ using a 20 Ci ¹³⁷Cs Compton spectrometer. To compare with the experimental data, we have also computed the theoretical CPs using density functional theory (DFT) and a hybrid of DFT and Hartree–Fock (HF) within the linear combination of atomic orbitals (LCAO) method. In addition, we have reported energy bands and densities of states of TaB₂ using the LCAO and full-potential linearized augmented plane wave (FP-LAPW) methods. A real-space analysis of the CP of TaB₂ confirms its metallic character, which is in tune with the crossings of the Fermi level by the energy bands and the Fermi surface topology. A comparison of equal-valence-electron-density (EVED) experimental profiles of isoelectronic TaB₂ and NbB₂ shows a more covalent (or less ionic) character of TaB₂ than of NbB₂, which is in agreement with available ionicity data. - Highlights: ► Reported first-ever experimental Compton profile (CP) of TaB₂. ► Interpreted experimental CP using theoretical CP within density functional theory. ► Analyzed equal-valence-electron-density experimental CPs of TaB₂ and NbB₂. ► Established metallic character by taking the Fourier transform of the experimental CP. ► Reported energy bands, DOS and Fermi surface of TaB₂ using LCAO and FP-LAPW
Compton-thick AGNs in the NuSTAR Era
Marchesi, S.; Ajello, M.; Marcotulli, L.; Comastri, A.; Lanzuisi, G.; Vignali, C.
2018-02-01
We present the 2–100 keV spectral analysis of 30 candidate Compton-thick (CT) active galactic nuclei (AGNs) selected in the Swift-Burst Alert Telescope (BAT) 100-month survey. The average redshift of these objects is ∼0.03, and they all lie within ∼500 Mpc. We used the MyTorus model to perform X-ray spectral fittings both without and with the contribution of the Nuclear Spectroscopic Telescope Array (NuSTAR) data in the 3–50 keV energy range. When the NuSTAR data are added to the fit, 13 out of 30 of these objects (43% of the whole sample) have intrinsic absorption N_H < 10²⁴ cm⁻² at the 3σ confidence level, i.e., they are reclassified from Compton thick to Compton thin. Consequently, we infer an overall observed fraction of CT-AGNs, with respect to the whole AGN population, lower than the one reported in previous works, as low as ∼4%. We find evidence that this overestimation of N_H is likely due to the low quality of a subsample of spectra, either in the 2–10 keV band or in the Swift-BAT one.
Afanasiev, Michael; Boehm, Christian; van Driel, Martin; Krischer, Lion; May, Dave; Rietmann, Max; Fichtner, Andreas
2017-04-01
Recent years have been witness to the application of waveform inversion to new and exciting domains, ranging from non-destructive testing to global seismology. Often, each new application brings with it novel wave propagation physics, spatial and temporal discretizations, and models of variable complexity. Adapting existing software to these novel applications often requires a significant investment of time and acts as a barrier to progress. To combat these problems we introduce Salvus, a software package designed to solve large-scale full-waveform inverse problems, with a focus on both flexibility and performance. Currently based on an abstract implementation of high-order finite (spectral) elements, we have built Salvus to work on unstructured quad/hex meshes in both 2 and 3 dimensions, with support for P1-P3 bases on triangles and tetrahedra. A diverse (and expanding) collection of wave propagation physics is supported (e.g., viscoelastic, coupled solid-fluid). With a focus on the inverse problem, functionality is provided to ease integration with internal and external optimization libraries. Additionally, a python-based meshing package is included to simplify the generation and manipulation of regional- to global-scale Earth models (quad/hex), with interfaces available to external mesh generators for complex engineering-scale applications (quad/hex/tri/tet). Finally, to ensure that the code remains accurate and maintainable, we build upon software libraries such as PETSc and Eigen, and follow modern software design and testing protocols. Salvus bridges the gap between research and production codes with a design based on C++ template mixins and Python wrappers that separates the physical equations from the numerical core. This allows domain scientists to add new equations using a high-level interface, without having to worry about optimized implementation details. Our goal in this presentation is to introduce the code, show several examples across the scales, and
Gate simulation of Compton Ar-Xe gamma-camera for radionuclide imaging in nuclear medicine
Dubov, L. Yu; Belyaev, V. N.; Berdnikova, A. K.; Bolozdynia, A. I.; Akmalova, Yu A.; Shtotsky, Yu V.
2017-01-01
Computer simulations of a cylindrical Compton Ar-Xe gamma camera are described in the current report. The detection efficiency of a cylindrical Ar-Xe Compton camera with an internal diameter of 40 cm is estimated as 1-3%, which is 10-100 times higher than that of a collimated Anger camera. It is shown that the cylindrical Compton camera can image a Tc-99m radiotracer distribution with a uniform spatial resolution of 20 mm through the whole field of view.